
on Econometrics 
By:  Mikkel Plagborg-Møller 
Abstract:  This dissertation consists of three independent chapters on econometric methods for macroeconomic analysis. In the first chapter, I propose to estimate structural impulse response functions from macroeconomic time series by doing Bayesian inference on the Structural Vector Moving Average representation of the data. This approach has two advantages over Structural Vector Autoregression analysis: It imposes prior information directly on the impulse responses in a flexible and transparent manner, and it can handle noninvertible impulse response functions. The second chapter, which is co-authored with B. J. Bates, J. H. Stock, and M. W. Watson, considers the estimation of dynamic factor models when there is temporal instability in the factor loadings. We show that the principal components estimator is robust to empirically large amounts of instability. The robustness carries over to regressions based on estimated factors, but not to estimation of the number of factors. In the third chapter, I develop shrinkage methods for smoothing an estimated impulse response function. I propose a data-dependent criterion for selecting the degree of smoothing to optimally trade off bias and variance, and I devise novel shrinkage confidence sets with valid frequentist coverage. 
Date:  2016–01 
URL:  http://d.repec.org/n?u=RePEc:qsh:wpaper:441671&r=ecm 
By:  JIN SEO CHO (Yonsei University); PETER C.B. PHILLIPS (Yale University, University of Auckland, Singapore Management University & University of Southampton) 
Abstract:  We provide a methodology for testing a polynomial model hypothesis by extending the approach and results of Baek, Cho, and Phillips (2015; BCP), who test for neglected nonlinearity using power transforms of regressors against arbitrary nonlinearity. We examine and generalize the BCP quasi-likelihood ratio test to deal with the multifold identification problem that arises under the null of the polynomial model. The approach leads to convenient asymptotic theory for inference, has omnibus power against general nonlinear alternatives, and allows estimation of an unknown polynomial degree in a model by way of sequential testing, a technique that is useful in the application of sieve approximations. Simulations show good performance of the sequential test procedure in identifying and estimating the unknown polynomial order. The approach, which can be used empirically to test for misspecification, is applied to a Mincer (1958, 1974) equation using data from Card (1995). The results confirm that Mincer's log earnings equation is easily shown to be misspecified once nonlinear effects of experience and schooling on earnings are included, with some flexibility required in the respective polynomial degrees. 
Keywords:  QLR test; Asymptotic null distribution; Misspecification; Mincer equation; Nonlinearity; Polynomial model; Power Gaussian process; Sequential testing. 
JEL:  C12 C18 C46 C52 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2016rwp90&r=ecm 
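The sequential-testing idea for estimating an unknown polynomial degree can be illustrated with a much cruder device than the BCP quasi-likelihood ratio statistic: increase the degree until the drop in the residual sum of squares stops being significant. The sketch below is an illustration only; the F-type cutoff and the simulated data are assumptions, not the paper's test.

```python
import numpy as np

def select_degree(x, y, max_degree=6):
    """Sequentially test polynomial degrees d = 1, 2, ... and stop at the
    first degree whose successor adds no significant fit improvement.
    Crude F-type analogue of sequential testing, not the BCP QLR statistic."""
    n = len(y)
    rss_prev = None
    for d in range(1, max_degree + 1):
        X = np.vander(x, d + 1)                     # columns x^d, ..., x^0
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss_d = float(rss[0]) if rss.size else float(np.sum((y - X @ beta) ** 2))
        if rss_prev is not None:
            f_stat = (rss_prev - rss_d) / (rss_d / (n - d - 1))
            if f_stat < 3.84:                       # approx. chi-square(1) 5% cutoff
                return d - 1                        # previous degree suffices
        rss_prev = rss_d
    return max_degree

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 400)
y = 1.0 + 0.5 * x - 2.0 * x ** 2 + 0.1 * rng.normal(size=400)
print(select_degree(x, y))
```

With a true quadratic and little noise, the procedure should settle on degree 2, up to the usual 5% false-rejection chance at each step.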
By:  Cyrus J. DiCiccio; Joseph P. Romano; Michael Wolf 
Abstract:  In the presence of conditional heteroskedasticity, inference about the coefficients in a linear regression model is nowadays typically based on the ordinary least squares estimator in conjunction with heteroskedasticity-consistent standard errors. Similarly, even when the true form of heteroskedasticity is unknown, heteroskedasticity-consistent standard errors can be used to base valid inference on a weighted least squares estimator. Using a weighted least squares estimator can provide large gains in efficiency over the ordinary least squares estimator. However, intervals based on plug-in standard errors often have coverage that is below the nominal level, especially for small sample sizes. In this paper, it is shown that a bootstrap approximation to the sampling distribution of the weighted least squares estimate is valid, which allows for inference with improved finite-sample properties. Furthermore, when the model used to estimate the unknown form of the heteroskedasticity is misspecified, the weighted least squares estimator may be less efficient than the ordinary least squares estimator. To address this problem, a new estimator is proposed that is asymptotically at least as efficient as both the ordinary and the weighted least squares estimators. Simulation studies demonstrate the attractive finite-sample properties of this new estimator as well as the improvements in performance realized by bootstrap confidence intervals. 
Keywords:  Bootstrap, conditional heteroskedasticity, HC standard errors 
JEL:  C12 C13 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:zur:econwp:232&r=ecm 
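The two-step weighted least squares idea together with a pairs bootstrap can be sketched as follows. The log-linear skedastic model, the simulated data, and the percentile interval are illustrative assumptions, not the authors' exact estimator or bootstrap scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def wls_slope(x, y):
    """Two-step WLS slope: first-stage OLS residuals feed a log-linear
    skedastic regression log(e^2) ~ a + b*x; the fitted variances give
    the weights. A generic sketch of the two-step WLS idea."""
    X = np.column_stack([np.ones_like(x), x])
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta_ols
    gamma = np.linalg.lstsq(X, np.log(resid ** 2 + 1e-12), rcond=None)[0]
    w = np.exp(-(X @ gamma))                      # inverse estimated variances
    return np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y * w))[1]

# Simulated heteroskedastic data: the error variance grows with x.
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + np.exp(0.5 * x) * rng.normal(size=n)

# Pairs bootstrap: resample (x_i, y_i) pairs, re-estimate, take percentiles.
boot = [wls_slope(x[idx], y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(500))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo, hi)
```

The percentile interval [lo, hi] is the bootstrap confidence interval whose finite-sample behavior the paper studies.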
By:  Peter Reinhard Hansen (University of North Carolina at Chapel Hill, United States); Pawel Janus (UBS Global Asset Management, Zürich, Switzerland); Siem Jan Koopman (VU University Amsterdam, the Netherlands) 
Abstract:  We propose a novel multivariate GARCH model that incorporates realized measures for the variance matrix of returns. The key novelty is the joint formulation of a multivariate dynamic model for the outer products of returns, realized variances and realized covariances. The updating of the variance matrix relies on the score function of the joint likelihood function based on Gaussian and Wishart densities. The dynamic model is parsimonious while each innovation still impacts all elements of the variance matrix. Monte Carlo evidence for parameter estimation based on different small sample sizes is provided. We illustrate the model with an empirical application to a portfolio of 15 U.S. financial assets. 
Keywords:  high-frequency data; multivariate GARCH; multivariate volatility; realised covariance; score; Wishart density 
JEL:  C32 C52 C58 
Date:  2016–08–11 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20160061&r=ecm 
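The score-driven updating mechanism can be illustrated in a univariate toy version that blends squared returns with a realized measure. This is only a sketch of the updating idea; the paper's model is matrix-variate with Gaussian and Wishart densities, and the parameter values below are made up.

```python
import numpy as np

def score_driven_variance(returns, realized, omega=0.02, a=0.1, b=0.95):
    """Toy univariate score-driven variance recursion: the update direction
    is the (scaled) score of a Gaussian likelihood in h, evaluated at both
    the squared return and the realized measure."""
    h = np.empty(len(returns))
    h[0] = np.var(returns)
    for t in range(len(returns) - 1):
        score = 0.5 * (returns[t] ** 2 / h[t] - 1.0) \
              + 0.5 * (realized[t] / h[t] - 1.0)
        h[t + 1] = omega + b * h[t] + a * h[t] * score
    return h

rng = np.random.default_rng(3)
r = rng.normal(size=500)     # simulated returns with unit variance
rv = r ** 2                  # crude stand-in for a realized variance measure
h = score_driven_variance(r, rv)
print(h.min() > 0)           # the recursion keeps the variance positive
```

Scaling the score by h keeps the recursion positive here; the matrix model achieves the analogous property through its Wishart structure.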
By:  JIN SEO CHO (Yonsei University); PETER C.B. PHILLIPS (Yale University, University of Auckland, Singapore Management University & University of Southampton) 
Abstract:  We provide a new test for equality of two symmetric positive-definite matrices that leads to a convenient mechanism for testing specification using the information matrix equality and the sandwich asymptotic covariance matrix of the GMM estimator. The test relies on a new characterization of equality between two k-dimensional symmetric positive-definite matrices A and B: the traces of AB⁻¹ and BA⁻¹ are both equal to k if and only if A = B. Using this criterion, we introduce a class of omnibus test statistics for equality and examine their null and local alternative approximations under some mild regularity conditions. A preferred test in the class with good omnidirectional power is recommended for practical work. Monte Carlo experiments are conducted to explore performance characteristics under the null and local as well as fixed alternatives. The test is applicable in many settings, including GMM estimation, SVAR models and high-dimensional variance matrix settings. 
Keywords:  Matrix equality; Trace; Determinant; Arithmetic mean; Geometric mean; Harmonic mean; Sandwich covariance matrix; Eigenvalues. 
JEL:  C01 C12 C52 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2016rwp89&r=ecm 
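The trace characterization stated in the abstract is simple to verify numerically. The sketch below checks it with a made-up matrix; it illustrates the criterion only, not the authors' omnibus test statistics.

```python
import numpy as np

def trace_criterion(A, B):
    """Return (tr(A B^-1), tr(B A^-1)) for symmetric positive-definite A, B.
    Both traces equal k exactly when A = B."""
    return (np.trace(A @ np.linalg.inv(B)),
            np.trace(B @ np.linalg.inv(A)))

k = 3
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 1.5]])

t1, t2 = trace_criterion(A, A)
print(round(t1, 10), round(t2, 10))    # 3.0 3.0 when the matrices are equal

# When A != B the two traces sum to strictly more than 2k (AM-GM applied
# to the eigenvalues of A B^-1), which is what a test can exploit.
B = A + np.diag([1.0, 0.0, 0.0])
print(sum(trace_criterion(A, B)) > 2 * k)    # True
```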
By:  Antonia Arsova (Leuphana University Lueneburg, Germany); Deniz Dilan Karaman Örsal (Leuphana University Lueneburg, Germany) 
Abstract:  This paper takes a multiple testing perspective on the problem of determining the cointegrating rank in macroeconometric panel data with cross-sectional dependence. The testing procedure for a common rank among the panel units is based on Simes' (1986) intersection test and requires only the p-values of suitable individual test statistics. A Monte Carlo study demonstrates that this simple test is robust to cross-sectional dependence and has reasonable size and power properties. A multivariate version of Kendall's tau is used to test an important assumption underlying Simes' procedure for dependent statistics. The method is illustrated by testing the validity of the monetary exchange rate model for 8 OECD countries in the post-Bretton Woods era. 
Keywords:  panel cointegration rank test, cross-sectional dependence, multiple testing, common factors, likelihood-ratio 
JEL:  C12 C15 C33 
Date:  2016–03 
URL:  http://d.repec.org/n?u=RePEc:lue:wpaper:357&r=ecm 
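Simes' (1986) intersection test underlying the procedure is short enough to state in code. A minimal sketch assuming only a vector of unit-level p-values; the full proposal additionally involves suitable individual rank statistics and the Kendall's-tau dependence check, which are not shown.

```python
import numpy as np

def simes_test(pvalues, alpha=0.05):
    """Simes (1986) test of the global null: reject if any sorted p-value
    p_(i) satisfies p_(i) <= i * alpha / n."""
    p = np.sort(np.asarray(pvalues, dtype=float))
    n = len(p)
    return bool(np.any(p <= alpha * np.arange(1, n + 1) / n))

# Four hypothetical unit-level p-values (e.g., one per panel unit).
print(simes_test([0.30, 0.45, 0.60, 0.80]))    # False: no unit is extreme
print(simes_test([0.004, 0.30, 0.45, 0.60]))   # True: 0.004 <= 1 * 0.05 / 4
```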
By:  Bach, P.; Farbmacher, H.; Spindler, M. 
Abstract:  Heterogeneous effects are prevalent in many economic settings. As the functional form between outcomes and regressors is often unknown a priori, we propose a semiparametric negative binomial count data model based on the local likelihood approach and generalized product kernels, and apply the estimator to model demand for health care. The local likelihood framework allows us to leave the functional form of the conditional mean unspecified while still exploiting basic assumptions in the count data literature (e.g., nonnegativity). The generalized product kernels allow us to simultaneously model discrete and continuous regressors, which reduces the curse of dimensionality and increases the estimator's applicability, as many regressors in the demand model for health care are discrete. 
Keywords:  semiparametric; nonparametric; count data; health care demand; 
JEL:  I10 C14 C25 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:16/20&r=ecm 
By:  Cattaneo, Matias D. (University of Michigan); Crump, Richard K. (Federal Reserve Bank of New York); Farrell, Max H. (University of Chicago); Schaumburg, Ernst (Federal Reserve Bank of New York) 
Abstract:  Portfolio sorting is ubiquitous in the empirical finance literature, where it has been widely used to identify pricing anomalies in different asset classes. Despite the popularity of portfolio sorting, little attention has been paid to the statistical properties of the procedure or to the conditions under which it produces valid inference. We develop a general, formal framework for portfolio sorting by casting it as a nonparametric estimator. We give precise conditions under which the portfolio sorting estimator is consistent and asymptotically normal, and we also establish consistency of both the Fama-MacBeth variance estimator and a new plug-in estimator. Our framework bridges the gap between portfolio sorting and cross-sectional regressions by allowing for linear conditioning variables when sorting. In addition, we obtain a valid mean square error expansion of the sorting estimator, which we employ to develop optimal choices for the number of portfolios. We show that the choice of the number of portfolios is crucial for drawing accurate conclusions from the data and we provide a simple, data-driven procedure that balances higher-order bias and variance. In many practical settings the optimal number of portfolios varies substantially across applications and subsamples and is, in many cases, much larger than the standard choices of five or ten portfolios used in the literature. We give formal and intuitive justifications for this finding based on the bias-variance tradeoff underlying the portfolio sorting estimator. To illustrate the relevance of our results, we revisit the size and momentum anomalies. 
Keywords:  portfolio sorts; stock market anomalies; firm characteristics; nonparametric estimation; partitioning; cross-sectional regressions 
JEL:  C12 C13 C23 C51 G12 
Date:  2016–08–01 
URL:  http://d.repec.org/n?u=RePEc:fip:fednsr:788&r=ecm 
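The basic object the paper formalizes, a characteristic-sorted portfolio estimator, can be sketched in a few lines. The decile choice and the simulated negative relation between characteristic and return are illustrative assumptions.

```python
import numpy as np

def portfolio_sort(characteristic, returns, n_portfolios=10):
    """Sort assets into quantile bins on a characteristic; return each
    portfolio's average return and the high-minus-low spread."""
    edges = np.quantile(characteristic, np.linspace(0, 1, n_portfolios + 1))
    # Interior edges only: np.digitize then maps each asset to a bin 0..J-1.
    bins = np.clip(np.digitize(characteristic, edges[1:-1]), 0, n_portfolios - 1)
    means = np.array([returns[bins == j].mean() for j in range(n_portfolios)])
    return means, means[-1] - means[0]

rng = np.random.default_rng(2)
size = rng.normal(size=2000)                          # hypothetical characteristic
ret = -0.5 * size + rng.normal(scale=2.0, size=2000)  # built-in negative relation

means, spread = portfolio_sort(size, ret)
print(spread < 0)    # the high-characteristic decile earns less, as built in
```

The paper's point is precisely that the choice n_portfolios=10 is not innocuous; its data-driven rule balances the higher-order bias and variance of this estimator.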
By:  Pennoni, Fulvia; Romeo, Isabella 
Abstract:  We provide a short review of two alternative ways of modeling stability and change in longitudinal data when time-fixed and time-varying covariates for the observed individuals are available. Both build on the foundation of finite mixture models and are commonly applied in many fields. They look at the data from different perspectives, and in the literature they have not been compared when the ordinal nature of the response variable is of interest. The latent Markov model is based on time-varying latent variables that explain the observable behavior of the individuals. The model is proposed in a semiparametric formulation, as the latent Markov process has a discrete distribution and is characterized by a Markov structure. The growth mixture model is based on a latent categorical variable that accounts for the unobserved heterogeneity in the observed trajectories and on a mixture of normally distributed random variables to account for the variability of growth rates. To illustrate the main differences between them, we refer to a real data example on self-reported health status. 
Keywords:  Dynamic factor model, Expectation-Maximization algorithm, Forward-Backward recursions, Latent trajectories, Maximum likelihood, Monte Carlo methods. 
JEL:  C02 C14 C18 C3 C33 C38 C63 I11 I12 
Date:  2016–07 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:72939&r=ecm 
By:  Badi Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Fa Wang (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244) 
Abstract:  This paper studies the asymptotic power of the sphericity test in a fixed effect panel data model proposed by Baltagi, Feng and Kao (2011) (JBFK). This is done under the alternative hypotheses of weak and strong factors. By weak factors, we mean that the Euclidean norm of the vector of factor loadings is O(1). By strong factors, we mean that the Euclidean norm of the vector of factor loadings is O(√n), where n is the number of individuals in the panel. To derive the limiting distribution of JBFK under the alternative, we first derive the limiting distribution of its raw data counterpart. Our results show that, when the factor is strong, the test statistic diverges in probability to infinity as fast as Op(nT). However, when the factor is weak, its limiting distribution is a rightward mean shift of the limit distribution under the null. Second, we derive the asymptotic behavior of the difference between JBFK and its raw data counterpart. Our results show that when the factor is strong this difference is as large as Op(n). In contrast, when the factor is weak, this difference converges in probability to a constant. Taken together, these results imply that when the factor is strong, JBFK is consistent, but when the factor is weak, JBFK is inconsistent even though its asymptotic power is nontrivial. 
Keywords:  Asymptotic power; Sphericity; John Test; Weak Factor; Strong Factor; High Dimensional Inference; Panel Data 
JEL:  C12 C33 
Date:  2016–03 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:189&r=ecm 
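The raw-data ingredient behind the JBFK panel statistic is John's test of sphericity. A minimal sketch of John's U statistic on simulated data, with and without a strong common factor; the panel version and the asymptotics are not reproduced here.

```python
import numpy as np

def john_statistic(X):
    """John's U statistic for sphericity of an n x p sample: the scaled
    distance of the normalized covariance matrix from the identity."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    R = S / (np.trace(S) / p)
    return np.trace((R - np.eye(p)) @ (R - np.eye(p))) / p

rng = np.random.default_rng(4)
spherical = rng.normal(size=(2000, 5))                 # Sigma = I
factor = rng.normal(size=(2000, 1)) @ np.ones((1, 5))  # one strong common factor
loaded = spherical + factor

print(john_statistic(spherical) < john_statistic(loaded))   # True
```

Under sphericity the statistic is close to zero; a common factor inflates it, which is the power source the paper quantifies.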
By:  Mateescu, Dan (Institute for Economic Forecasting, Romanian Academy) 
Abstract:  We propose a regression model in which the dependent variable consists not of points but of segments. This setting corresponds to markets where values are observed throughout the day; the currency market and the stock exchange are examples in which values differ and vary over the trading day. 
Keywords:  regression, weighted segments, center of mass 
JEL:  C02 C22 C58 
Date:  2016–07 
URL:  http://d.repec.org/n?u=RePEc:rjr:wpiecf:160720&r=ecm 
By:  JIN SEO CHO (Yonsei University); PETER C.B. PHILLIPS (Yale University, University of Auckland, Singapore Management University & University of Southampton) 
Abstract:  This supplement provides proofs of the subsidiary lemmas and the main results given in the text of "Pythagorean Generalization of Testing the Equality of Two Symmetric Positive Definite Matrices" by Cho and Phillips (2016). 
Date:  2016–08 
URL:  http://d.repec.org/n?u=RePEc:yon:wpaper:2016rwp89a&r=ecm 