
New Economics Papers on Econometrics 
By:  Jan J.J. Groen (Federal Reserve Bank of New York); George Kapetanios (Queen Mary, University of London) 
Abstract:  This paper revisits a number of data-rich prediction methods, like factor models, Bayesian ridge regression and forecast combinations, which are widely used in macroeconomic forecasting, and compares these with a lesser-known alternative method: partial least squares regression. Under the latter, linear, orthogonal combinations of a large number of predictor variables are constructed such that these linear combinations maximize the covariance between the target variable and each of the common components constructed from the predictor variables. We provide a theorem that shows that when the data comply with a factor structure, principal components and partial least squares regressions provide asymptotically similar results. We also argue that forecast combinations can be interpreted as a restricted form of partial least squares regression. Monte Carlo experiments confirm our theoretical result that principal components and partial least squares regressions are asymptotically similar when the data have a factor structure. These experiments also indicate that when there is no factor structure in the data, partial least squares regression outperforms both principal components and Bayesian ridge regressions. Finally, we apply partial least squares, principal components and Bayesian ridge regressions on a large panel of monthly U.S. macroeconomic and financial data to forecast, for the United States, CPI inflation, core CPI inflation, industrial production, unemployment and the federal funds rate across different subperiods. The results indicate that partial least squares regression usually has the best out-of-sample performance relative to the two other data-rich prediction methods. 
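The covariance-maximizing construction the abstract describes can be sketched for the first PLS component. This is a minimal illustration, not the paper's code; the data and variable names are invented:

```python
# Minimal sketch of the first partial least squares (PLS) component for a
# univariate target: the weight on each predictor is proportional to its
# covariance with the target, and the component is the weighted combination.

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def pls_first_component(X, y):
    """X is a list of rows; returns unit-norm weights w and component t = Xc w."""
    cols = list(zip(*X))
    yc = center(y)
    Xc = [center(list(c)) for c in cols]
    # weights maximize the covariance between the component and the target
    w = [sum(xj[i] * yc[i] for i in range(len(yc))) for xj in Xc]
    norm = sum(wj * wj for wj in w) ** 0.5
    w = [wj / norm for wj in w]
    t = [sum(w[j] * Xc[j][i] for j in range(len(w))) for i in range(len(y))]
    return w, t

# tiny illustrative panel: two predictors, one target
X = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]]
y = [1.1, 2.0, 3.1, 3.9]
w, t = pls_first_component(X, y)
```

Unlike a principal component, the weights here depend on the target `y`, which is exactly why PLS can outperform principal components when the predictors lack a factor structure.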
Keywords:  Macroeconomic forecasting, Factor models, Forecast combination, Principal components, Partial least squares, (Bayesian) ridge regression 
JEL:  C22 C53 E37 E47 
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp624&r=ecm 
By:  Wagner P. Gaglianone; Luiz Renato Lima; Oliver Linton 
Abstract:  We propose an alternative backtest to evaluate the performance of Value-at-Risk (VaR) models. The presented methodology allows us to directly test the performance of many competing VaR models, as well as identify periods of an increased risk exposure based on a quantile regression model (Koenker & Xiao, 2002). Quantile regressions provide us with an appropriate environment to investigate VaR models, since they can naturally be viewed as a conditional quantile function of a given return series. A Monte Carlo simulation is presented, revealing that our proposed test might exhibit more power in comparison to other backtests presented in the literature. Finally, an empirical exercise is conducted for daily S&P500 return series in order to explore the practical relevance of our methodology by evaluating five competing VaR models through four different backtests. 
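The basic ingredient of any VaR backtest is the "hit" sequence of violations and its empirical coverage; the quantile-regression test in the paper refines this idea. A hedged sketch with invented data:

```python
# Hit sequence for VaR backtesting: a hit occurs when the realized return
# falls below the (negative of the) VaR forecast. A correctly specified
# 95% VaR model should produce hits about 5% of the time.

def var_hits(returns, var_forecasts):
    return [1 if r < -v else 0 for r, v in zip(returns, var_forecasts)]

def empirical_coverage(hits):
    return sum(hits) / len(hits)

# illustrative daily returns and a constant 5% VaR forecast
returns = [-0.021, 0.004, -0.013, 0.008, -0.030,
           0.002, -0.001, 0.015, -0.025, 0.006]
var_95 = [0.020] * len(returns)
hits = var_hits(returns, var_95)
coverage = empirical_coverage(hits)
```

Formal backtests then ask whether the observed coverage is statistically compatible with the nominal level and whether the hits cluster in time.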
Date:  2008–02 
URL:  http://d.repec.org/n?u=RePEc:bcb:wpaper:161&r=ecm 
By:  Andros Kourtellos; Thanasis Stengos; Chih Ming Tan 
Abstract:  This paper extends the simple threshold regression framework of Hansen (2000) and Caner and Hansen (2004) to allow for endogeneity of the threshold variable. We develop a concentrated two-stage least squares (C2SLS) estimator of the threshold parameter that is based on an inverse Mills ratio bias correction. Our method also allows for the endogeneity of the slope variables. We show that our estimator is consistent and investigate its performance using a Monte Carlo simulation that indicates the applicability of the method in finite samples. We also illustrate its usefulness with an empirical example from economic growth. 
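The "concentrated" part of the estimator can be sketched in its simplest exogenous form: for each candidate threshold, fit the two regimes and keep the value that minimizes the sum of squared residuals. The paper's C2SLS inverse Mills ratio correction for an endogenous threshold is not reproduced here; data are invented:

```python
# Concentrated threshold estimation, exogenous case with regime means only:
# grid over observed threshold-variable values, minimize the pooled SSR.

def ssr_at_threshold(y, q, gamma):
    lo = [yi for yi, qi in zip(y, q) if qi <= gamma]
    hi = [yi for yi, qi in zip(y, q) if qi > gamma]
    if not lo or not hi:
        return float("inf")
    m_lo = sum(lo) / len(lo)
    m_hi = sum(hi) / len(hi)
    return (sum((v - m_lo) ** 2 for v in lo)
            + sum((v - m_hi) ** 2 for v in hi))

def estimate_threshold(y, q):
    candidates = sorted(set(q))[:-1]   # keep at least one observation above
    return min(candidates, key=lambda g: ssr_at_threshold(y, q, g))

q = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9]
y = [1.0, 1.1, 0.9, 3.0, 3.1, 2.9]    # mean shifts once q crosses about 0.5
gamma_hat = estimate_threshold(y, q)
```

With slope regressors the inner step becomes a regime-split OLS, but the concentration over the threshold grid is the same.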
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:ucy:cypeua:32008&r=ecm 
By:  Proietti, Tommaso; Riani, Marco 
Abstract:  We address the problem of seasonal adjustment of a nonlinear transformation of the original time series, such as the Box-Cox transformation of a time series measured on a ratio scale, or the Aranda-Ordaz transformation of proportions, which aims at enforcing two essential features: additivity and orthogonality of the components. The posterior mean and variance of the seasonally adjusted series admit an analytic finite representation only for particular values of the transformation parameter, e.g. for a fractional Box-Cox transformation parameter. Even if available, the analytical derivation can be tedious and difficult. As an alternative we propose to compute the two conditional moments of the seasonally adjusted series by means of numerical and Monte Carlo integration. The former is both fast and reliable in univariate applications. The latter uses the algorithm known as the simulation smoother and it is most useful in multivariate applications. We present several case studies dealing with robust seasonal adjustment under the square root and the fourth root transformation, the seasonal adjustment of the ratio of two series, and the adjustment of time series of proportions. Our overall conclusion is that robust seasonal adjustment under transformations can be carried out routinely and that the possibility of transforming the scale ought to be considered as a further option for improving the quality of seasonal adjustment. 
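The need for Monte Carlo integration can be seen in miniature: naively inverting the Box-Cox transform at a mean on the transformed scale ignores Jensen's inequality, while averaging inverse-transformed draws (as a simulation-smoother-based approach does) does not. A hedged sketch with an invented Gaussian posterior:

```python
# Box-Cox transform, its inverse, and a Monte Carlo estimate of the mean
# on the original scale. The plug-in back-transform understates the mean
# here because the inverse transform is convex for lam < 1.
import math
import random

def boxcox(y, lam):
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    return math.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

random.seed(0)
lam = 0.5                        # square-root transform
mu, sigma = 2.0, 0.3             # illustrative posterior on the transformed scale
draws = [random.gauss(mu, sigma) for _ in range(20000)]

naive = inv_boxcox(mu, lam)                              # plug-in back-transform
mc_mean = sum(inv_boxcox(z, lam) for z in draws) / len(draws)
```

For `lam = 0.5` the inverse transform is a quadratic, so here the gap `mc_mean - naive` is exactly the scaled posterior variance; for general fractional parameters no such closed form exists, which is the paper's motivation for numerical and Monte Carlo integration.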
Keywords:  Structural Time Series Models; BoxCox Transformation; Aranda–Ordaz Transformation; Simulation Smoother; Forward Search; Numerical Integration. 
JEL:  C32 C22 C15 
Date:  2007–12–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:7862&r=ecm 
By:  Necati Tekatli 
Abstract:  There is recent interest in the generalization of classical factor models in which the idiosyncratic factors are assumed to be orthogonal and there are identification restrictions on cross-sectional and time dimensions. In this study, we describe and implement a Bayesian approach to generalized factor models. A flexible framework is developed to determine the variations attributed to common and idiosyncratic factors. We also propose a unique methodology to select the (generalized) factor model that best fits a given set of data. Applying the proposed methodology to the simulated data and the foreign exchange rate data, we provide a comparative analysis between the classical and generalized factor models. We find that when there is a shift from classical to generalized, there are significant changes in the estimates of the structures of the covariance and correlation matrices while there are less dramatic changes in the estimates of the factor loadings and the variation attributed to common factors. 
Date:  2007–10–15 
URL:  http://d.repec.org/n?u=RePEc:aub:autbar:730.08&r=ecm 
By:  CheLin Su; Kenneth L. Judd 
Abstract:  Maximum likelihood estimation of structural models is often viewed as computationally difficult. This impression is due to a focus on the Nested Fixed-Point (NFXP) approach. We present a direct optimization approach to the general problem and show that it is significantly faster than the NFXP approach when applied to the canonical Zurcher bus repair model. The NFXP approach is inappropriate for estimating games since it requires finding all Nash equilibria of a game for each parameter vector considered, a generally intractable computational problem. We formulate the problem of maximum likelihood estimation of games as a constrained optimization problem that is qualitatively no more difficult to solve than standard maximum likelihood problems. The direct optimization approach is also applicable to other structural estimation methods such as methods of moments, and also allows one to use computationally intensive bootstrap methods to conduct inference. The MPEC (mathematical program with equilibrium constraints) approach is also easily implemented on software with high-level interfaces. Furthermore, all the examples in this paper were computed using only free resources available on the web. 
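The computational burden of NFXP comes from re-solving an equilibrium fixed point for every trial parameter. A toy illustration (not the paper's model; the contraction and observed value are invented):

```python
# Nested Fixed-Point (NFXP) in miniature: the inner loop solves the model's
# fixed point v = 0.5 * v + theta (a contraction with solution v = 2 * theta)
# for each candidate parameter; the outer loop scores the fit. The paper's
# MPEC alternative instead imposes the fixed-point condition as a constraint
# in a single optimization, avoiding the nested solve.

def inner_fixed_point(theta, tol=1e-12):
    v = 0.0
    while True:
        v_new = 0.5 * v + theta
        if abs(v_new - v) < tol:
            return v_new
        v = v_new

def nfxp_estimate(v_obs, grid):
    # one full inner fixed-point solve per candidate parameter
    return min(grid, key=lambda th: (inner_fixed_point(th) - v_obs) ** 2)

grid = [i / 100 for i in range(101)]      # theta candidates in [0, 1]
theta_hat = nfxp_estimate(1.0, grid)      # v_obs = 1.0 implies theta = 0.5
```

In real dynamic-programming or game applications the inner solve is expensive (or, for games, requires all equilibria), which is precisely the cost the constrained-optimization formulation removes.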
Keywords:  Constrained Optimization, Structural Estimation, Maximum Likelihood Estimation, Games with Multiple Equilibria 
JEL:  C13 C61 
Date:  2008–01 
URL:  http://d.repec.org/n?u=RePEc:nwu:cmsems:1460&r=ecm 
By:  Claude Lopez 
Abstract:  This paper proposes a version of the DF-GLS test that incorporates up to two breaks in the intercept, namely the DF-GLS-TB test. While the asymptotic properties of the DF-GLS test remain valid, the presence of changes in the intercept has an impact on the small sample properties of the test. Hence, finite sample critical values for the DF-GLS-TB test are tabulated while a Monte Carlo study highlights its enhanced power. 
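The GLS demeaning step that underlies the DF-GLS family can be sketched for the constant-only case (using the standard noncentrality value cbar = -7 from Elliott, Rothenberg and Stock, 1996). The break dummies of the DF-GLS-TB variant would enter the deterministic term in the same way; they are omitted here:

```python
# GLS demeaning for DF-GLS: quasi-difference the series and the constant
# regressor at rho = 1 + cbar/T, estimate the mean by OLS on the
# quasi-differenced data, then subtract it from the original series.

def gls_demean(y, cbar=-7.0):
    T = len(y)
    rho = 1.0 + cbar / T
    z = [y[0]] + [y[t] - rho * y[t - 1] for t in range(1, T)]
    x = [1.0] + [1.0 - rho] * (T - 1)          # quasi-differenced constant
    beta = sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)
    return [yt - beta for yt in y]

y = [3.0] * 50                   # a purely constant series
detrended = gls_demean(y)        # the constant is removed exactly
```

The unit-root regression is then run on the demeaned series without further deterministics.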
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:cin:ucecwp:200801&r=ecm 
By:  Noman, Abdullah 
Abstract:  The paper investigates the validity of PPP using monthly data for 15 OECD countries from 1980:01 to 2005:12 and tests the symmetry and proportionality hypotheses. The test for PPP is conducted in the framework of the General Relative PPP (RPPP) as proposed by Coakley et al. (2005) using the Mean-Group (MG) estimators of Pesaran and Smith (1995). We apply two variants of the MG estimators, namely MG and CMG, where the latter takes into account the problem of cross-sectional dependence (CSD). The symmetry null is unequivocally accepted for both estimators in CPI as well as PPI panels. The proportionality null, however, is rejected in the CPI panel with MG procedure but accepted with CMG. In the PPI panel, the MG estimate cannot reject the null while the CMG estimate marginally rejects it. Our findings are only partially supportive of the general RPPP. 
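The Mean-Group idea is simple enough to sketch: estimate the slope separately for each panel unit and average. The CMG variant would additionally augment each unit's regression with cross-section averages to absorb common factors; that step is omitted in this illustrative, invented-data sketch:

```python
# Mean-Group (MG) estimator in the spirit of Pesaran and Smith (1995):
# unit-by-unit OLS slopes, then a simple cross-unit average.

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def mean_group(panel):
    """panel: list of (x, y) series pairs, one per country."""
    slopes = [ols_slope(x, y) for x, y in panel]
    return sum(slopes) / len(slopes)

panel = [
    ([1.0, 2.0, 3.0, 4.0], [1.1, 2.0, 3.2, 3.9]),   # unit slope near 1
    ([1.0, 2.0, 3.0, 4.0], [0.5, 1.6, 2.4, 3.5]),   # unit slope near 1
]
beta_mg = mean_group(panel)
```

In the PPP application, proportionality amounts to testing whether the averaged slope on relative prices equals one.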
Keywords:  Purchasing Power Parity, Mean-Group Regression, Symmetry and Proportionality, OECD, General Relative PPP 
JEL:  C23 F31 
Date:  2008–03–17 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:7825&r=ecm 
By:  Mishra, SK 
Abstract:  In the extant literature a suggestion has been made to solve the nearest correlation matrix problem by a modified von Neumann approximation. In this paper it is shown that obtaining the nearest positive semidefinite matrix of a given negative definite correlation matrix by such a method is either infeasible or suboptimal. First, if a given matrix is already positive semidefinite, there is no need to obtain any other semidefinite matrix closest to it; only when the given matrix Q is negative definite do we seek a positive semidefinite matrix closest to it. But then the proposed procedure fails, as log(Q) cannot be computed. Further, if we replace the negative eigenvalues of Q by zero or near-zero values, we obtain a positive semidefinite matrix, but it is not the nearest to Q; there are indeed other procedures that obtain better approximations. 
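The eigenvalue-clipping route the abstract criticizes can be shown in closed form for a 2x2 matrix, where the eigenvectors of [[1, r], [r, 1]] are fixed. A hedged, illustrative sketch:

```python
# Eigenvalue clipping on a 2x2 "correlation" matrix [[1, r], [r, 1]]:
# eigenvalues are 1 + r and 1 - r with eigenvectors (1, 1)/sqrt(2) and
# (1, -1)/sqrt(2). Clip negative eigenvalues to zero, rebuild, and rescale
# the diagonal back to 1. As the abstract notes, this yields a positive
# semidefinite matrix but not necessarily the nearest one.

def clip_2x2_correlation(r):
    lam1, lam2 = 1.0 + r, 1.0 - r
    lam1, lam2 = max(lam1, 0.0), max(lam2, 0.0)   # clip negative eigenvalues
    a = 0.5 * (lam1 + lam2)                       # reconstructed diagonal entry
    b = 0.5 * (lam1 - lam2)                       # reconstructed off-diagonal
    return [[1.0, b / a], [b / a, 1.0]]           # rescale diagonal to 1

m = clip_2x2_correlation(1.2)   # an invalid "correlation" of 1.2 is not PSD
```

Here the invalid off-diagonal 1.2 collapses to exactly 1.0, illustrating how clipping projects onto the PSD boundary rather than minimizing distance in any particular norm.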
Keywords:  Nearest correlation matrix problem; positive semidefinite; negative definite; von Neumann divergence; Bregman divergence 
JEL:  C44 C63 C02 C01 
Date:  2008–03–16 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:7798&r=ecm 
By:  Aureo de Paula (Department of Economics, University of Pennsylvania) 
Abstract:  Consider two random variables X and Y. In initial probability and statistics courses, a discussion of various concepts of dissociation between X and Y is customary. These concepts typically involve independence and uncorrelatedness. An example is shown where E(Y^n|X) = E(Y^n) and E(X^n|Y) = E(X^n) for n = 1, 2,… and yet X and Y are not stochastically independent. The bivariate distribution is constructed using a well-known example in which the distribution of a random variable is not uniquely determined by its sequence of moments. Other similar families of distributions with identical moments can be used to display such a pair of random variables. It is interesting to note in class that even such a degree of dissociation between the moments of X and Y does not imply stochastic independence. 
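A much weaker classroom illustration of the same theme (this is not the paper's moment-indeterminacy construction, which delivers the far stronger conditional-moment property) is that zero correlation does not imply independence:

```python
# If X is symmetric around zero and Y = X**2, then Cov(X, Y) = E[X**3] = 0,
# yet Y is a deterministic function of X. A quick simulation check:
import random

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(50000)]
ys = [x * x for x in xs]

mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
# cov is numerically near zero despite perfect functional dependence
```

The paper's example goes much further: there, every conditional moment of each variable given the other matches the corresponding unconditional moment, and independence still fails.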
Keywords:  indeterminate distributions, moments, independence 
JEL:  A2 C00 
Date:  2008–03–07 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:08010&r=ecm 
By:  Ting Qin; Walter Enders (Department of Economics, St. Cloud State University) 
Abstract:  A key feature of Gallant’s Flexible Fourier Form is that the essential characteristics of one or more structural breaks can be captured using a small number of low frequency components from a Fourier approximation. We introduce a variant of the Flexible Fourier Form into the trend function of U.S. real GDP in order to allow for gradual effects of unknown numbers of structural breaks occurring at unknown dates. We find that the Fourier components are significant and that there are multiple breaks in the trend. In addition to the productivity slowdown in the 1970s, our trend also captures a productivity resumption in the late 1990s and a slowdown in the late 1950s. Our cycle corresponds very closely to the NBER chronology. We compare the decomposition from our model with those from a standard unobserved components model, the HP filter, and the Perron and Wada (2005) model. We find that our decomposition compares favorably with the other models and has very different implications for the recovery from the recent recession. 
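The trend specification can be sketched by constructing the regressor matrix: a constant, a linear trend, and a few low-frequency sine and cosine terms whose smooth shapes can mimic gradual breaks. Fitting is then ordinary least squares; this is an illustrative sketch, not the paper's exact specification:

```python
# Flexible Fourier Form trend regressors: deterministic terms plus
# low-frequency trigonometric components sin(2*pi*k*t/T), cos(2*pi*k*t/T)
# for a small number of frequencies k.
import math

def fourier_trend_regressors(T, n_freq=2):
    rows = []
    for t in range(1, T + 1):
        row = [1.0, t / T]                     # constant and linear trend
        for k in range(1, n_freq + 1):
            row.append(math.sin(2.0 * math.pi * k * t / T))
            row.append(math.cos(2.0 * math.pi * k * t / T))
        rows.append(row)
    return rows

X = fourier_trend_regressors(120)              # e.g. 120 quarters
sin1_sum = sum(row[2] for row in X)            # full-cycle sums vanish
```

Because only the number of frequencies (not break dates) must be chosen, the approach sidesteps searching over unknown break locations.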
Keywords:  Flexible Fourier Form, Smooth Trend Breaks, Fourier Approximation 
Date:  2007–01 
URL:  http://d.repec.org/n?u=RePEc:scs:wpaper:0809&r=ecm 
By:  Melvin J. Hinich; John Foster; Philip Wild (School of Economics, The University of Queensland) 
Abstract:  The purpose of this article is to investigate the ability of bandpass filters commonly used in economics to extract a known periodicity. The specific bandpass filters investigated include a Discrete Fourier Transform (DFT) filter, together with those proposed by Hodrick and Prescott (1997) and Baxter and King (1999). Our focus on the cycle extraction properties of these filters reflects the lack of attention that has been given to this issue in the literature, when compared, for example, to studies of the trend removal properties of some of these filters. The artificial data series we use are designed so that one periodicity deliberately falls within the passband while another falls outside. The objective of a filter is to admit the ‘bandpass’ periodicity while excluding the periodicity that falls outside the passband range. We find that the DFT filter has the best extraction properties. The filtered data series produced by both the Hodrick-Prescott and Baxter-King filters are found to admit low frequency components that should have been excluded. 
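The experimental design can be reproduced in miniature: build a series with one periodicity inside the passband and one outside, zero out the out-of-band Fourier coefficients, and invert. This is an illustrative sketch (a plain O(n^2) DFT for clarity, with invented cycle lengths), not the authors' code:

```python
# DFT band-pass filter: a 12-period cycle sits inside the 6-32 period band,
# a 40-period cycle sits outside it and should be removed exactly.
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [(sum(X[f] * cmath.exp(2j * math.pi * f * t / n)
                 for f in range(n)) / n).real for t in range(n)]

n = 120
series = [math.sin(2 * math.pi * t / 12) + math.sin(2 * math.pi * t / 40)
          for t in range(n)]

X = dft(series)
lo, hi = n / 32, n / 6       # passband in frequency-index terms
Xf = [Xk if lo <= min(k, n - k) <= hi else 0.0 for k, Xk in enumerate(X)]
filtered = idft(Xf)          # should recover only the 12-period cycle
```

Because both cycles complete an integer number of periods in the sample, the DFT separates them exactly; in real data, leakage and the Gibbs effect (the subject of the companion paper below) complicate the picture.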
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:qld:uq2004:358&r=ecm 
By:  Melvin J. Hinich; John Foster; Philip Wild (School of Economics, The University of Queensland) 
Abstract:  The purpose of this article is to investigate the ability of an assortment of frequency domain bandpass filters proposed in the economics literature to extract a known periodicity. The specific bandpass filters investigated include a conventional Discrete Fourier Transform filter, together with the filter recently proposed in Iacobucci and Noullez (2004, 2005). We employ simulation methods whereby the above-mentioned filters are applied to artificial data in order to investigate their cycle extraction properties. We also investigate the implications and complications that may arise from the Gibbs Effect in practical settings that typically confront applied macroeconomists. 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:qld:uq2004:357&r=ecm 