nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒03‒19
ten papers chosen by
Sune Karlsson
Örebro universitet

  1. Heteroskedasticity-Robust Standard Errors for Dynamic Panel Data Models with Fixed Effects By Chirok Han; Hyoungjong Kim
  2. Evaluating Restricted Common Factor models for non-stationary data By Francesca Di Iorio; Stefano Fachin
  3. Data Revisions and DSGE Models By Galvao, Ana Beatriz
  4. A note on conditional covariance matrices for elliptical distributions By Piotr Jaworski; Marcin Pitera
  5. A Highly Efficient Regression Estimator for Skewed and/or Heavy-tailed Distributed Errors By Lorenzo Ricci; Vincenzo Verardi; Catherine Vermandele
  6. Data driven partition-of-unity copulas with applications to risk management By Dietmar Pfeifer; Andreas Mändle; Olena Ragulina
  7. How to Use One Instrument to Identify Two Elasticities By Gavrilova, Evelina; Zoutman, Floris T.; Hopland, Arnt O.
  8. Small-Sample Tests for Stock Return Predictability with Possibly Non-Stationary Regressors and GARCH-Type Effects By Sermin Gungor; Richard Luger
  9. Topological Data Analysis of Financial Time Series: Landscapes of Crashes By Marian Gidea; Yuri Katz
  10. What univariate models tell us about multivariate macroeconomic models By Mitchell, James; Robertson, Donald; Wright, Stephen

  1. By: Chirok Han (Department of Economics, Korea University, Seoul, Republic of Korea); Hyoungjong Kim (Department of Economics, Korea University, Seoul, Republic of Korea)
    Abstract: For linear dynamic panel data models with fixed effects, practitioners often use clustered covariance estimators for inference in the presence of cross-sectional or temporal heteroskedasticity in idiosyncratic errors. The performance of a clustered estimator heavily depends on the magnitude of the cross-sectional dimension (n). When n is small, inferences using clustered estimators are compromised. A paper by Stock and Watson (2008) provides a solution under strict exogeneity if the idiosyncratic errors are possibly heteroskedastic but serially uncorrelated. Their method, however, is not generalizable to dynamic panel data models, although heteroskedasticity-robust inferences have natural relevance to dynamic models due to the requirement of serial uncorrelatedness for model identification. In the present paper, we provide a solution for instrumental variables and generalized method of moments estimators using predetermined instruments, including popular estimators for dynamic panel models. Asymptotics are established, and the findings are verified by simulations.
    Keywords: Heteroskedasticity-robust covariance estimation, Dynamic panel data, Cluster covariance estimator, Instrumental variable estimation
    JEL: C12 C23
    Date: 2017
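    The clustered covariance estimator at the heart of the paper can be illustrated with a minimal sketch, assuming a static within-transformed panel with one regressor and synthetic data (the paper's actual setting is IV/GMM with predetermined instruments; all names and parameter values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 30, 10                          # n individuals, T time periods
alpha = rng.normal(size=n)             # individual fixed effects
x = rng.normal(size=(n, T))
# heteroskedastic errors with within-individual correlation
e = rng.normal(size=(n, T)) * np.sqrt(1 + x**2) + 0.5 * rng.normal(size=(n, 1))
y = alpha[:, None] + 1.0 * x + e       # true slope is 1.0

# within transformation removes the fixed effects
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)

sxx = float((xd * xd).sum())
b = float((xd * yd).sum()) / sxx       # within (fixed-effects) OLS slope
u = yd - b * xd                        # residuals, one row per individual

# cluster-robust sandwich: the "meat" sums the score over t within each
# individual, allowing heteroskedasticity and within-cluster correlation
scores = (xd * u).sum(axis=1)          # sum_t x_it * u_it for cluster i
se_cluster = np.sqrt(scores @ scores) / sxx
```

    With few clusters (small n), the finite-sample behaviour of this estimator degrades, which is the problem the paper addresses.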
  2. By: Francesca Di Iorio (University of Naples Federico II); Stefano Fachin ("Sapienza" University of Rome)
    Abstract: We propose to evaluate restrictions on the loadings of approximate Factor models comparing the estimated number of factors of the unconstrained and constrained models. A difference between the two estimates is evidence against the constraints, which should thus be rejected. To take into account possible finite sample bias of the model selection procedure, we develop a bootstrap algorithm for the estimation of the probability of rejecting correct constraints. For non-stationary factor models we show analytically that the algorithm is asymptotically valid, and by simulation that the evaluation procedure has good small sample properties.
    Keywords: Non-stationary factor model, principal components, loadings restrictions, large data sets, stationary bootstrap.
    JEL: C12 C33 C55
    Date: 2017–03
  3. By: Galvao, Ana Beatriz (Warwick Business School, University of Warwick)
    Abstract: The typical estimation of DSGE models requires data on a set of macroeconomic aggregates, such as output, consumption and investment, which are subject to data revisions. The conventional approach employs the time series that is currently available for these aggregates for estimation, implying that the last observations are still subject to many rounds of revisions. This paper proposes a release-based approach that uses revised data of all observations to estimate DSGE models while keeping the model useful for real-time forecasting. This new approach accounts for data uncertainty when predicting future values of macroeconomic variables subject to revisions, thus providing policy-makers and professional forecasters with both backcasts and forecasts. Application of this new approach to a medium-sized DSGE model improves the accuracy of density forecasts, particularly the coverage of predictive intervals, of US real macrovariables. The application also shows that the estimated relative importance of business cycle sources varies with data maturity.
    Keywords: data revisions ; medium-sized DSGE models ; forecasting ; variance decomposition
    JEL: C53
    Date: 2016
  4. By: Piotr Jaworski; Marcin Pitera
    Abstract: In this short note we provide an analytical formula for the conditional covariance matrix of an elliptically distributed random vector, when the conditioning is based on the value of any linear combination of the marginal random variables. We show that one can introduce a univariate invariant depending solely on the conditioning set, which greatly simplifies the calculations. As an application, we show that one can define unique quantile-based sets on which the conditional covariance matrices must all be equal when the vector is multivariate normal. Similar results are obtained for conditional correlation matrices in the general elliptical case.
    Date: 2017–03
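    For the multivariate normal special case, the formula the note generalizes is the familiar Schur complement: conditioning a Gaussian vector X on the value of a'X leaves covariance Σ − (Σa)(Σa)'/(a'Σa), independent of the conditioning value. A quick numerical sketch with an arbitrary covariance matrix (in the general elliptical case this matrix is additionally scaled by the paper's univariate invariant of the conditioning set):

```python
import numpy as np

sigma = np.array([[2.0, 0.5, 0.3],     # covariance of a 3-dim Gaussian vector
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
a = np.array([1.0, -1.0, 2.0])         # conditioning combination a'X

# conditional covariance of X given a'X = c (any c), Gaussian case
sa = sigma @ a
sigma_cond = sigma - np.outer(sa, sa) / (a @ sa)
```

    The result is singular in the direction a, as it must be: once a'X is known, no variance is left along a.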
  5. By: Lorenzo Ricci; Vincenzo Verardi; Catherine Vermandele
    Abstract: This paper proposes a simple maximum likelihood regression estimator that outperforms Least Squares in terms of efficiency and mean square error for a large number of skewed and/or heavy-tailed error distributions.
    JEL: C13 C16 G17
    Date: 2016–11–18
  6. By: Dietmar Pfeifer; Andreas Mändle; Olena Ragulina
    Abstract: We present a constructive and self-contained approach to data driven general partition-of-unity copulas that were recently introduced in the literature. In particular, we consider Bernstein, negative binomial, and Poisson copulas and present a solution to the problem of fitting such copulas to highly asymmetric data.
    Date: 2017–03
  7. By: Gavrilova, Evelina (Dept. of Business and Management Science, Norwegian School of Economics); Zoutman, Floris T. (Dept. of Business and Management Science, Norwegian School of Economics); Hopland, Arnt O. (Dept. of Business and Management Science, Norwegian School of Economics)
    Abstract: We show that an insight from taxation theory allows identification of both the supply and demand elasticities with only one instrument. Ramsey (1928) and subsequent models of taxation assume that a tax levied on the demand side only affects demand through the price after taxation. Econometrically, we show that this assumption functions as an additional exclusion restriction. Under the Ramsey Exclusion Restriction (RER) a tax reform can serve to simultaneously identify elasticities of supply and demand. We develop a TSLS estimator for both elasticities, a test to assess instrument strength and a test for the RER. Our result extends to a supply-demand system with J goods, and a setting with supply-side or non-linear taxes. Further, we show that key results in the sufficient statistics literature rely on the RER. One example is Harberger’s formula for the excess burden of a tax. We apply our method to the Norwegian labor market.
    Keywords: Tax Reform; Instrumental Variable; Supply and Demand Elasticities; Tax Incidence; Payroll Taxation
    JEL: C36 H22 H31 H32 J22 J23
    Date: 2017–02–27
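    The identification argument can be sketched with a simulated tax reform. Under the Ramsey exclusion restriction the tax shifts demand only through the after-tax price, so the same instrument yields the demand slope (instrumenting the consumer price) and the supply slope (instrumenting the producer price). Everything below is synthetic, and a simple Wald/IV ratio stands in for the paper's TSLS machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta_d, beta_s = -1.5, 0.8             # true demand and supply slopes
t = rng.choice([0.0, 1.0], size=n)     # tax reform dummy: the one instrument
u = rng.normal(size=n)                 # demand shock
v = rng.normal(size=n)                 # supply shock

# equilibrium: beta_d*(p + t) + u = beta_s*p + v   (intercepts set to 0)
p = (beta_d * t + u - v) / (beta_s - beta_d)   # producer price
p_c = p + t                                    # consumer (after-tax) price
q = beta_s * p + v                             # equilibrium quantity

def iv_slope(y, x, z):
    """Wald/IV estimator: cov(y, z) / cov(x, z)."""
    return np.cov(y, z)[0, 1] / np.cov(x, z)[0, 1]

b_d_hat = iv_slope(q, p_c, t)          # demand slope via consumer price
b_s_hat = iv_slope(q, p, t)            # supply slope via producer price
```

    One tax change, two elasticities: the exclusion restriction does the work of a second instrument.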
  8. By: Sermin Gungor; Richard Luger
    Abstract: We develop a simulation-based procedure to test for stock return predictability with multiple regressors. The process governing the regressors is left completely free and the test procedure remains valid in small samples even in the presence of non-normalities and GARCH-type effects in the stock returns. The usefulness of the new procedure is demonstrated both in a simulation study and by examining the ability of a group of financial variables to predict excess stock returns. We find robust evidence of predictability during the period 1948–2014, driven entirely by the term spread. This empirical evidence, however, is much weaker over subsamples.
    Keywords: Asset Pricing, Econometric and statistical methods, Financial markets
    JEL: C12 C32 G14
    Date: 2017
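    The flavour of a simulation-based test can be conveyed with a stripped-down sketch: compute a t-type statistic for the predictive slope, then simulate its null distribution while holding the (possibly non-stationary) regressor path fixed. This is a toy version only; the paper's procedure additionally handles GARCH-type effects and delivers exact small-sample validity:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
x = np.cumsum(rng.normal(size=T))      # persistent (near unit-root) regressor
r = rng.normal(size=T)                 # excess returns; unpredictable under H0

def slope_stat(r, x):
    """|t|-type statistic for the slope in r_{t+1} = a + b*x_t + e."""
    y = r[1:]
    z = x[:-1] - x[:-1].mean()
    b = (z @ y) / (z @ z)
    resid = (y - y.mean()) - b * z
    se = np.sqrt((resid @ resid) / (len(y) - 2) / (z @ z))
    return abs(b / se)

stat = slope_stat(r, x)
# Monte Carlo null: redraw IID returns, keep the regressor path fixed
null = np.array([slope_stat(rng.normal(size=T), x) for _ in range(999)])
p_value = (1 + (null >= stat).sum()) / (1 + len(null))
```

    Because the regressor path is conditioned on rather than modelled, its persistence properties never enter the null simulation.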
  9. By: Marian Gidea; Yuri Katz
    Abstract: We explore the evolution of daily returns of four major US stock market indices during the technology crash of 2000, and the financial crisis of 2007-2009. Our methodology is based on topological data analysis (TDA). We use persistent homology to detect and quantify topological patterns that appear in multidimensional time series. Using a sliding window, we extract time-dependent point cloud data sets, to which we associate a topological space. We detect transient loops that appear in this space, and we measure their persistence. This is encoded in real-valued functions referred to as 'persistence landscapes'. We quantify the temporal changes in persistence landscapes via their $L^p$-norms. We test this procedure on multidimensional time series generated by various non-linear and non-equilibrium models. We find that, in the vicinity of financial meltdowns, the $L^p$-norms exhibit strong growth prior to the primary peak, which ascends during a crash. Remarkably, the average spectral density at low frequencies of the time series of $L^p$-norms of the persistence landscapes demonstrates a strong rising trend for 250 trading days prior to either the dotcom crash on 03/10/2000 or the Lehman bankruptcy on 09/15/2008. Our study suggests that TDA provides a new type of econometric analysis, which goes beyond the standard statistical measures. The method can be used to detect early warning signals of imminent market crashes. We believe that this approach can be used beyond the analysis of financial time series presented here.
    Date: 2017–03
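    A self-contained taste of the pipeline, much simplified: slide a window over a multivariate return series, treat each window as a point cloud, and track a persistence-derived norm through time. The paper uses H1 loops and $L^p$-norms of persistence landscapes; the sketch below substitutes total H0 persistence, which for a point cloud equals the total edge length of its minimum spanning tree and therefore needs only NumPy (all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.0, 0.01, size=(500, 4))   # 4 synthetic index returns

def total_h0_persistence(points):
    """Sum of finite H0 bar lengths = total MST edge length (Prim's algorithm)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    dist = d[0].copy()                  # distance of each point to the tree
    total = 0.0
    for _ in range(n - 1):
        dist[in_tree] = np.inf          # never re-add tree members
        j = int(np.argmin(dist))
        total += dist[j]
        in_tree[j] = True
        dist = np.minimum(dist, d[j])
    return total

w = 50                                  # sliding-window length
norms = np.array([total_h0_persistence(returns[s:s + w])
                  for s in range(len(returns) - w + 1)])
```

    In the paper, sharp growth of such a norm series is the proposed early-warning signal of an approaching crash.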
  10. By: Mitchell, James (Warwick Business School, University of Warwick); Robertson, Donald (Faculty of Economics, University of Cambridge); Wright, Stephen (Department of Economics, Maths & Statistics Birkbeck College, University of London)
    Abstract: A longstanding puzzle in macroeconomic forecasting has been that a wide variety of multivariate models have struggled to out-predict univariate representations. We seek an explanation for this puzzle in terms of population properties. We show that if we just know the univariate properties of a time series, yt, this can tell us a lot about the dimensions and the predictive power of the true (but unobservable) multivariate macroeconomic model that generated yt. We illustrate using data on U.S. inflation. We find that, especially in recent years, the univariate properties of inflation dictate that even the true multivariate model for inflation would struggle to out-predict a univariate model. Furthermore, predictions of changes in inflation from the true model would either need to be IID or have persistence properties quite unlike those of most current macroeconomic models.
    Keywords: Forecasting ; Macroeconomic Models ; Autoregressive Moving Average Representations ; Predictive Regressions ; Nonfundamental Representations ; Inflation Forecasts
    JEL: C22 C32 C53 E37
    Date: 2016

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.