nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒08‒21
twelve papers chosen by
Sune Karlsson
Örebro universitet

  1. Essays in Macroeconometrics By Mikkel Plagborg-Møller
  2. Sequentially Testing Polynomial Model Hypotheses using Power Transforms of Regressors By JIN SEO CHO; PETER C.B. PHILLIPS
  3. Improving weighted least squares inference By Cyrus J. DiCiccio; Joseph P. Romano; Michael Wolf
  4. Realized Wishart-GARCH: A Score-driven Multi-Asset Volatility Model By Peter Reinhard Hansen; Pawel Janus; Siem Jan Koopman
  5. Pythagorean Generalization of Testing the Equality of Two Symmetric Positive Definite Matrices By JIN SEO CHO; PETER C.B. PHILLIPS
  6. An intersection test for the cointegrating rank in dependent panel data By Antonia Arsova; Deniz Dilan Karaman Örsal
  7. Semiparametric Count Data Modeling with an Application to Health Service Demand By Bach, P.; Farbmacher, H.; Spindler, M.
  8. Characteristic-sorted portfolios: estimation and inference By Cattaneo, Matias D.; Crump, Richard K.; Farrell, Max H.; Schaumburg, Ernst
  9. Latent Markov and growth mixture models for ordinal individual responses with covariates: a comparison By Pennoni, Fulvia; Romeo, Isabella
  10. Asymptotic Power of the Sphericity Test Under Weak and Strong Factors in a Fixed Effects Panel Data Model By Badi Baltagi; Chihwa Kao; Fa Wang
  11. The Linear Regression Of Weighted Segments By Mateescu, Dan
  12. Online Supplement to “Pythagorean Generalization of Testing the Equality of Two Symmetric Positive Definite Matrices” By JIN SEO CHO; PETER C.B. PHILLIPS

  1. By: Mikkel Plagborg-Møller
    Abstract: This dissertation consists of three independent chapters on econometric methods for macroeconomic analysis. In the first chapter, I propose to estimate structural impulse response functions from macroeconomic time series by doing Bayesian inference on the Structural Vector Moving Average representation of the data. This approach has two advantages over Structural Vector Autoregression analysis: It imposes prior information directly on the impulse responses in a flexible and transparent manner, and it can handle noninvertible impulse response functions. The second chapter, which is coauthored with B. J. Bates, J. H. Stock, and M. W. Watson, considers the estimation of dynamic factor models when there is temporal instability in the factor loadings. We show that the principal components estimator is robust to empirically large amounts of instability. The robustness carries over to regressions based on estimated factors, but not to estimation of the number of factors. In the third chapter, I develop shrinkage methods for smoothing an estimated impulse response function. I propose a data-dependent criterion for selecting the degree of smoothing to optimally trade off bias and variance, and I devise novel shrinkage confidence sets with valid frequentist coverage.
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:qsh:wpaper:441671&r=ecm
  2. By: JIN SEO CHO (Yonsei University); PETER C.B. PHILLIPS (Yale University, University of Auckland, Singapore Management University & University of Southampton)
    Abstract: We provide a methodology for testing a polynomial model hypothesis by extending the approach and results of Baek, Cho, and Phillips (2015; BCP) that tests for neglected nonlinearity using power transforms of regressors against arbitrary nonlinearity. We examine and generalize the BCP quasi-likelihood ratio test dealing with the multifold identification problem that arises under the null of the polynomial model. The approach leads to convenient asymptotic theory for inference, has omnibus power against general nonlinear alternatives, and allows estimation of an unknown polynomial degree in a model by way of sequential testing, a technique that is useful in the application of sieve approximations. Simulations show good performance of the sequential test procedure in identifying and estimating the unknown polynomial order. The approach, which can be used empirically to test for misspecification, is applied to a Mincer (1958, 1974) equation using data from Card (1995). The results confirm that Mincer’s log earnings equation is easily shown to be misspecified by including nonlinear effects of experience and schooling on earnings, with some flexibility required in the respective polynomial degrees.
    Keywords: QLR test; Asymptotic null distribution; Misspecification; Mincer equation; Nonlinearity; Polynomial model; Power Gaussian process; Sequential testing.
    JEL: C12 C18 C46 C52
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2016rwp-90&r=ecm
  3. By: Cyrus J. DiCiccio; Joseph P. Romano; Michael Wolf
    Abstract: In the presence of conditional heteroskedasticity, inference about the coefficients in a linear regression model is nowadays typically based on the ordinary least squares estimator in conjunction with heteroskedasticity-consistent standard errors. Similarly, even when the true form of heteroskedasticity is unknown, heteroskedasticity consistent standard errors can be used to base valid inference on a weighted least squares estimator. Using a weighted least squares estimator can provide large gains in efficiency over the ordinary least squares estimator. However, intervals based on plug-in standard errors often have coverage that is below the nominal level, especially for small sample sizes. In this paper, it is shown that a bootstrap approximation to the sampling distribution of the weighted least squares estimate is valid, which allows for inference with improved finite-sample properties. Furthermore, when the model used to estimate the unknown form of the heteroskedasticity is misspecified, the weighted least squares estimator may be less efficient than the ordinary least squares estimator. To address this problem, a new estimator is proposed that is asymptotically at least as efficient as both the ordinary and the weighted least squares estimator. Simulation studies demonstrate the attractive finite-sample properties of this new estimator as well as the improvements in performance realized by bootstrap confidence intervals.
    Keywords: Bootstrap, conditional heteroskedasticity, HC standard errors
    JEL: C12 C13
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:232&r=ecm
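The weighted least squares strategy in item 3 is easy to illustrate. The sketch below is my own minimal illustration (simulated data and an HC0 sandwich formula, not the authors' code): both OLS and WLS are paired with heteroskedasticity-consistent standard errors, and when the skedastic model is right, WLS is more precise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])
# Conditional heteroskedasticity: the error standard deviation grows with x.
y = 1.0 + 2.0 * x + rng.normal(0.0, x)

def hc_se(X, resid):
    """Heteroskedasticity-consistent (HC0 sandwich) standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * resid[:, None] ** 2)
    cov = XtX_inv @ meat @ XtX_inv
    return np.sqrt(np.diag(cov))

# OLS with HC standard errors.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
se_ols = hc_se(X, y - X @ beta_ols)

# WLS under the skedastic model Var(e|x) proportional to x^2
# (here correctly specified); HC errors keep inference valid even if not.
w = 1.0 / x**2
Xw, yw = X * np.sqrt(w)[:, None], y * np.sqrt(w)
beta_wls = np.linalg.lstsq(Xw, yw, rcond=None)[0]
se_wls = hc_se(Xw, yw - Xw @ beta_wls)
```

With these weights correctly specified, the WLS slope standard error comes out below the OLS one, which is the efficiency gain the paper's bootstrap intervals are designed to exploit safely.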
  4. By: Peter Reinhard Hansen (University of North Carolina at Chapel Hill, United States); Pawel Janus (UBS Global Asset Management, Zürich, Switzerland); Siem Jan Koopman (VU University Amsterdam, the Netherlands)
    Abstract: We propose a novel multivariate GARCH model that incorporates realized measures for the variance matrix of returns. The key novelty is the joint formulation of a multivariate dynamic model for outer-products of returns, realized variances and realized covariances. The updating of the variance matrix relies on the score function of the joint likelihood function based on Gaussian and Wishart densities. The dynamic model is parsimonious while each innovation still impacts all elements of the variance matrix. Monte Carlo evidence for parameter estimation based on different small sample sizes is provided. We illustrate the model with an empirical application to a portfolio of 15 U.S. financial assets.
    Keywords: high-frequency data; multivariate GARCH; multivariate volatility; realised covariance; score; Wishart density
    JEL: C32 C52 C58
    Date: 2016–08–11
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20160061&r=ecm
  5. By: JIN SEO CHO (Yonsei University); PETER C.B. PHILLIPS (Yale University, University of Auckland, Singapore Management University & University of Southampton)
    Abstract: We provide a new test for equality of two symmetric positive-definite matrices that leads to a convenient mechanism for testing specification using the information matrix equality and the sandwich asymptotic covariance matrix of the GMM estimator. The test relies on a new characterization of equality between two k-dimensional symmetric positive-definite matrices A and B: the traces of AB^-1 and BA^-1 are equal to k if and only if A = B. Using this criterion, we introduce a class of omnibus test statistics for equality and examine their null and local alternative approximations under some mild regularity conditions. A preferred test in the class with good omni-directional power is recommended for practical work. Monte Carlo experiments are conducted to explore performance characteristics under the null and local as well as fixed alternatives. The test is applicable in many settings, including GMM estimation, SVAR models and high dimensional variance matrix settings.
    Keywords: Matrix equality; Trace; Determinant; Arithmetic mean; Geometric mean; Harmonic mean; Sandwich covariance matrix; Eigenvalues.
    JEL: C01 C12 C52
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2016rwp-89&r=ecm
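The trace characterization in item 5 can be checked numerically. The sketch below is my own illustration: when A = B both traces equal k exactly, while for A ≠ B the Cauchy-Schwarz inequality tr(AB^-1) · tr(BA^-1) ≥ k^2 (with equality only when all eigenvalues of AB^-1 are one) rules out both traces equalling k.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4

def random_spd(k, rng):
    """Draw a random k x k symmetric positive-definite matrix."""
    M = rng.normal(size=(k, k))
    return M @ M.T + k * np.eye(k)

def traces(A, B):
    """Return tr(A B^-1) and tr(B A^-1)."""
    return np.trace(A @ np.linalg.inv(B)), np.trace(B @ np.linalg.inv(A))

A = random_spd(k, rng)
B = random_spd(k, rng)

# Equal matrices: both traces are exactly k.
t1, t2 = traces(A, A.copy())
# Distinct matrices: the product of the traces strictly exceeds k^2.
s1, s2 = traces(A, B)
```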
  6. By: Antonia Arsova (Leuphana University Lueneburg, Germany); Deniz Dilan Karaman Örsal (Leuphana University Lueneburg, Germany)
    Abstract: This paper takes a multiple testing perspective on the problem of determining the cointegrating rank in macroeconometric panel data with cross-sectional dependence. The testing procedure for a common rank among the panel units is based on Simes’ (1986) intersection test and requires only the p-values of suitable individual test statistics. A Monte Carlo study demonstrates that this simple test is robust to cross-sectional dependence and has reasonable size and power properties. A multivariate version of Kendall’s tau is used to test an important assumption underlying Simes’ procedure for dependent statistics. The method is illustrated by testing the validity of the monetary exchange rate model for 8 OECD countries in the post-Bretton Woods era.
    Keywords: panel cointegration rank test, cross-sectional dependence, multiple testing, common factors, likelihood-ratio
    JEL: C12 C15 C33
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:lue:wpaper:357&r=ecm
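Simes’ (1986) intersection test, on which item 6 builds, rejects the joint null when some ordered p-value p_(i) falls at or below i * alpha / m. A minimal sketch of the global test (the p-values in the usage assertions are hypothetical, my own illustration):

```python
def simes_test(pvalues, alpha=0.05):
    """Simes (1986) intersection test of the global null hypothesis.

    Reject H0 if p_(i) <= i * alpha / m for at least one i,
    where p_(1) <= ... <= p_(m) are the ordered p-values.
    """
    m = len(pvalues)
    sorted_p = sorted(pvalues)
    return any(p <= (i + 1) * alpha / m for i, p in enumerate(sorted_p))
```

For example, with p-values (0.012, 0.3, 0.8, 0.04) the smallest ordered p-value 0.012 is below 1 * 0.05 / 4 = 0.0125, so the global null is rejected.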
  7. By: Bach, P.; Farbmacher, H.; Spindler, M.
    Abstract: Heterogeneous effects are prevalent in many economic settings. As the functional form between outcomes and regressors is often unknown a priori, we propose a semiparametric negative binomial count data model based on the local likelihood approach and generalized product kernels, and apply the estimator to model demand for health care. The local likelihood framework allows us to leave the functional form of the conditional mean unspecified while still exploiting basic assumptions in the count data literature (e.g., non-negativity). The generalized product kernels allow us to simultaneously model discrete and continuous regressors, which reduces the curse of dimensionality and increases its applicability as many regressors in the demand model for health care are discrete.
    Keywords: semiparametric; nonparametric; count data; health care demand;
    JEL: I10 C14 C25
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:yor:hectdg:16/20&r=ecm
  8. By: Cattaneo, Matias D. (University of Michigan); Crump, Richard K. (Federal Reserve Bank of New York); Farrell, Max H. (University of Chicago); Schaumburg, Ernst (Federal Reserve Bank of New York)
    Abstract: Portfolio sorting is ubiquitous in the empirical finance literature, where it has been widely used to identify pricing anomalies in different asset classes. Despite the popularity of portfolio sorting, little attention has been paid to the statistical properties of the procedure or to the conditions under which it produces valid inference. We develop a general, formal framework for portfolio sorting by casting it as a nonparametric estimator. We give precise conditions under which the portfolio sorting estimator is consistent and asymptotically normal, and we also establish consistency of both the Fama-MacBeth variance estimator and a new plug-in estimator. Our framework bridges the gap between portfolio sorting and cross-sectional regressions by allowing for linear conditioning variables when sorting. In addition, we obtain a valid mean square error expansion of the sorting estimator, which we employ to develop optimal choices for the number of portfolios. We show that the choice of the number of portfolios is crucial for drawing accurate conclusions from the data and we provide a simple, data-driven procedure that balances higher-order bias and variance. In many practical settings the optimal number of portfolios varies substantially across applications and subsamples and is, in many cases, much larger than the standard choices of five or ten portfolios used in the literature. We give formal and intuitive justifications for this finding based on the bias-variance trade-off underlying the portfolio sorting estimator. To illustrate the relevance of our results, we revisit the size and momentum anomalies.
    Keywords: portfolio sorts; stock market anomalies; firm characteristics; nonparametric estimation; partitioning; cross-sectional regressions
    JEL: C12 C13 C23 C51 G12
    Date: 2016–08–01
    URL: http://d.repec.org/n?u=RePEc:fip:fednsr:788&r=ecm
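The basic sorting estimator studied in item 8 (rank assets by a characteristic, split them into equal-sized portfolios, average returns within each) can be sketched as follows; the simulated cross-section and data-generating process below are my own illustration, not the authors' data or code.

```python
import numpy as np

def sorted_portfolio_means(characteristic, returns, n_portfolios):
    """Equal-weighted mean return in each characteristic-sorted portfolio.

    Assets are ranked by the characteristic and split into
    n_portfolios groups of (nearly) equal size.
    """
    order = np.argsort(characteristic)
    groups = np.array_split(order, n_portfolios)
    return np.array([returns[g].mean() for g in groups])

rng = np.random.default_rng(2)
n = 1000
char = rng.normal(size=n)
# Simulated cross-section: expected return increasing in the characteristic.
ret = 0.5 * char + rng.normal(scale=2.0, size=n)

means = sorted_portfolio_means(char, ret, n_portfolios=10)
spread = means[-1] - means[0]  # "high minus low" portfolio return
```

The paper's point is that the choice of `n_portfolios` acts like a nonparametric smoothing parameter: too few portfolios bias the spread estimate, too many make it noisy.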
  9. By: Pennoni, Fulvia; Romeo, Isabella
    Abstract: We provide a short comparative review of two alternative ways of modeling stability and change in longitudinal data when time-fixed and time-varying covariates for the observed individuals are available. Both build on the foundation of finite mixture models and are commonly applied in many fields. They view the data from different perspectives, and they have not been compared in the literature when the ordinal nature of the response variable is of interest. The latent Markov model is based on time-varying latent variables that explain the observable behavior of the individuals. The model is proposed in a semiparametric formulation, as the latent Markov process has a discrete distribution and is characterized by a Markov structure. The growth mixture model is based on a latent categorical variable that accounts for the unobserved heterogeneity in the observed trajectories and on a mixture of normally distributed random variables that accounts for the variability of growth rates. To illustrate the main differences between them we refer to a real-data example on self-reported health status.
    Keywords: Dynamic factor model, Expectation-Maximization algorithm, Forward-Backward recursions, Latent trajectories, Maximum likelihood, Monte Carlo methods.
    JEL: C02 C14 C18 C3 C33 C38 C63 I11 I12
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:72939&r=ecm
  10. By: Badi Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Fa Wang (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244)
    Abstract: This paper studies the asymptotic power for the sphericity test in a fixed effects panel data model proposed by Baltagi, Feng and Kao (2011), hereafter JBFK. This is done under the alternative hypotheses of weak and strong factors. By weak factors, we mean that the Euclidean norm of the vector of the factor loadings is O(1). By strong factors, we mean that the Euclidean norm of the vector of factor loadings is O(√n), where n is the number of individuals in the panel. To derive the limiting distribution of JBFK under the alternative, we first derive the limiting distribution of its raw data counterpart. Our results show that, when the factor is strong, the test statistic diverges in probability to infinity as fast as Op(nT). However, when the factor is weak, its limiting distribution is a rightward mean shift of the limit distribution under the null. Second, we derive the asymptotic behavior of the difference between JBFK and its raw data counterpart. Our results show that when the factor is strong this difference is as large as Op(n). In contrast, when the factor is weak, this difference converges in probability to a constant. Taken together, these results imply that when the factor is strong, JBFK is consistent, but when the factor is weak, JBFK is inconsistent even though its asymptotic power is nontrivial.
    Keywords: Asymptotic power; Sphericity; John Test; Weak Factor; Strong Factor; High Dimensional Inference; Panel Data
    JEL: C12 C33
    Date: 2016–03
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:189&r=ecm
  11. By: Mateescu, Dan (Institute for Economic Forecasting, Romanian Academy)
    Abstract: We propose a regression model in which the dependent variable consists not of points but of segments. This situation corresponds to markets where values are observed throughout the day: the currency market and the stock exchange are examples in which values differ and vary over the course of the day.
    Keywords: regression, weighted segments, center of mass
    JEL: C02 C22 C58
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:rjr:wpiecf:160720&r=ecm
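The abstract of item 11 is terse; one plausible reading, suggested by the "center of mass" keyword, is a weighted regression in which each interval-valued observation enters through its midpoint with a weight proportional to its length. The sketch below is my own hypothetical illustration of that idea, not the paper's actual method.

```python
import numpy as np

# Hypothetical illustration: each observation of the dependent variable is
# an interval [lo, hi] (e.g. a day's trading range).  Replace each segment
# by its midpoint (its center of mass under a uniform density) and weight
# it by the segment's length, then fit by weighted least squares.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
lo = np.array([1.8, 3.9, 5.7, 8.0, 9.9])
hi = np.array([2.2, 4.1, 6.3, 8.4, 10.1])

mid = (lo + hi) / 2.0  # center of mass of each segment
w = hi - lo            # weight: segment length
X = np.column_stack([np.ones_like(x), x])
Xw = X * np.sqrt(w)[:, None]
yw = mid * np.sqrt(w)
beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]  # [intercept, slope]
```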
  12. By: JIN SEO CHO (Yonsei University); PETER C.B. PHILLIPS (Yale University University of Auckland, Singapore Management University & University of Southampton)
    Abstract: This supplement provides proofs of the subsidiary lemmas and the main results given in the text of “Pythagorean Generalization of Testing the Equality of Two Symmetric Positive Definite Matrices” by Cho and Phillips (2016).
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2016rwp-89a&r=ecm

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.