nep-ets New Economics Papers
on Econometric Time Series
Issue of 2007‒01‒02
four papers chosen by
Yong Yin
SUNY at Buffalo

  1. What does a technology shock do? A VAR analysis with model-based sign restrictions By Luca Dedola; Stefano Neri
  2. A Simple Benchmark for Forecasts of Growth and Inflation By Marcellino, Massimiliano
  3. A Two-step estimator for large approximate dynamic factor models based on Kalman filtering By Catherine Doz; Domenico Giannone; Lucrezia Reichlin
  4. Finite-Sample Stability of the KPSS Test By Jönsson, Kristian

  1. By: Luca Dedola (European Central Bank, Research Department); Stefano Neri (Bank of Italy, Research Department)
    Abstract: This paper estimates the effects of technology shocks in VAR models of the U.S., identified by imposing restrictions on the sign of impulse responses. These restrictions are consistent with the implications of a popular class of DSGE models, with both real and nominal frictions, and with sufficiently wide ranges for their parameters. This identification strategy thus substitutes theoretically motivated restrictions for the atheoretical assumptions on the time-series properties of the data that are key to long-run restrictions. Stochastic technology improvements persistently increase real wages, consumption, investment and output in the data; hours worked are very likely to increase, displaying a hump-shaped pattern. Contrary to most of the related VAR evidence, results are not sensitive to a number of specification assumptions, including those on the stationarity properties of the variables. [A schematic sketch of the sign-restriction step follows this entry.]
    Keywords: technology shocks, DSGE models, Bayesian VAR methods, identification
    JEL: C3 E3
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_607_06&r=ets
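    The sign-restriction step that delivers this identification can be sketched schematically. The following is a minimal Python illustration, not the authors' code: it assumes the reduced-form VAR has already been estimated, and the helper names irf_from_impact and satisfies_signs are hypothetical placeholders for the model's impulse-response computation and the DSGE-motivated sign checks.

        import numpy as np

        def draw_orthogonal(n, rng):
            # Uniform random orthogonal matrix via QR of a Gaussian draw,
            # with signs normalized so the factorization is unique.
            q, r = np.linalg.qr(rng.standard_normal((n, n)))
            return q * np.sign(np.diag(r))

        def sign_restricted_set(sigma_u, irf_from_impact, satisfies_signs,
                                n_draws=1000, seed=0):
            # Rotate one admissible factorization of the residual covariance
            # and keep the rotations whose impulse responses obey the signs.
            rng = np.random.default_rng(seed)
            chol = np.linalg.cholesky(sigma_u)
            kept = []
            for _ in range(n_draws):
                impact = chol @ draw_orthogonal(sigma_u.shape[0], rng)
                irf = irf_from_impact(impact)  # responses at restricted horizons
                if satisfies_signs(irf):       # e.g. real wage rises after the shock
                    kept.append(irf)
            return kept

    The retained draws then summarize the set of impulse responses consistent with the sign restrictions.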
  2. By: Marcellino, Massimiliano
    Abstract: A theoretical model for growth or inflation should be able to reproduce the empirical features of these variables better than competing alternatives. Therefore, it is common practice in the literature, whenever a new model is suggested, to compare its performance with that of a benchmark model. However, while the theoretical models become more and more sophisticated, the benchmark typically remains a simple linear time series model. Recent examples are provided, e.g., by articles in the real business cycle literature or by New Keynesian studies on inflation persistence. While a time series model can provide a reasonable benchmark to evaluate the value added of economic theory relative to the pure explanatory power of the past behavior of the variable, recent developments in time series analysis suggest that more sophisticated time series models could provide more serious benchmarks for economic models. In this paper we evaluate whether these complicated time series models can really outperform standard linear models for GDP growth and inflation, and should therefore replace them as benchmarks for economic theory-based models. Since a complicated model specification can over-fit in sample, i.e. the model can spuriously perform very well compared to simpler alternatives, we conduct the model comparison based on out-of-sample forecasting performance. We consider a large variety of models and evaluation criteria, using real-time data and a sophisticated bootstrap algorithm to evaluate the statistical significance of our results. Our main conclusion is that, in general, linear time series models can hardly be beaten if they are carefully specified, and therefore still provide a good benchmark for theoretical models of growth and inflation. However, we also identify some important cases where the adoption of a more complicated benchmark can alter the conclusions of economic analyses about the driving forces of GDP growth and inflation. Therefore, also comparing theoretical models with more sophisticated time series benchmarks can guarantee more robust conclusions. [A sketch of the recursive out-of-sample comparison follows this entry.]
    Keywords: growth; inflation; non-linear models; time-varying models
    JEL: C2 C53 E30
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:6012&r=ets
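    The recursive out-of-sample comparison the abstract describes can be made concrete with a minimal sketch (an illustration of the general design, not the paper's actual setup): an AR(1) benchmark is re-estimated on an expanding window, and its forecast errors are summarized by a root mean squared error that any competing model can be scored against.

        import numpy as np

        def ar1_forecast(history):
            # One-step-ahead forecast from an AR(1) with intercept, fit by OLS.
            x = np.column_stack([np.ones(len(history) - 1), history[:-1]])
            beta, *_ = np.linalg.lstsq(x, history[1:], rcond=None)
            return beta[0] + beta[1] * history[-1]

        def oos_rmse(y, forecaster, first_forecast):
            # Expanding-window out-of-sample root mean squared forecast error.
            errors = [y[t] - forecaster(y[:t])
                      for t in range(first_forecast, len(y))]
            return np.sqrt(np.mean(np.square(errors)))

    A non-linear or time-varying competitor would be scored with the same oos_rmse function; the paper additionally assesses the significance of the loss differences with a bootstrap.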
  3. By: Catherine Doz (Université de Cergy-Pontoise (Théma)); Domenico Giannone (Université Libre de Bruxelles, ECARES and CEPR); Lucrezia Reichlin (European Central Bank, ECARES and CEPR)
    Abstract: This paper shows consistency of a two-step estimator of the parameters of a dynamic approximate factor model when the panel of time series is large (n large). In the first step, the parameters are estimated by OLS on principal components. In the second step, the factors are estimated via the Kalman smoother. This projection makes it possible to account for dynamics in the factors and heteroskedasticity in the idiosyncratic variance. The analysis provides theoretical backing for the estimator considered in Giannone, Reichlin, and Sala (2004) and Giannone, Reichlin, and Small (2005). [A sketch of the two steps follows this entry.]
    Keywords: Factor Models, Kalman filter, principal components, large cross-sections
    JEL: C51 C32 C33
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:ema:worpap:2006-23&r=ets
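    Based only on the abstract, the two steps can be sketched as follows (the actual estimator and its state-space details may differ; kalman_smooth is a hypothetical placeholder for the second step):

        import numpy as np

        def two_step_factors(x, r):
            # x: (T, n) standardized panel; r: number of factors.
            T, n = x.shape
            # Step 1a: principal components from the sample covariance matrix.
            eigval, eigvec = np.linalg.eigh(x.T @ x / T)
            loadings = eigvec[:, -r:]    # eigenvectors of the r largest eigenvalues
            factors_pc = x @ loadings    # preliminary factor estimates
            # Step 1b: OLS estimates of the factor VAR(1) and the
            # idiosyncratic variances.
            a, *_ = np.linalg.lstsq(factors_pc[:-1], factors_pc[1:], rcond=None)
            idio_var = (x - factors_pc @ loadings.T).var(axis=0)
            # Step 2 (placeholder): re-estimate the factors by Kalman smoothing of
            #   x_t = loadings f_t + e_t,   f_t = a' f_{t-1} + u_t
            # factors = kalman_smooth(x, loadings, a.T, idio_var)
            return factors_pc, loadings, a.T, idio_var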
  4. By: Jönsson, Kristian (Department of Economics, Lund University)
    Abstract: In the current paper, the finite-sample stability of various implementations of the KPSS test is studied. The implementations considered differ in how the so-called long-run variance is estimated under the null hypothesis. More specifically, the effects that the choice of kernel, the value of the bandwidth parameter and the application of a prewhitening filter have on the KPSS test are investigated. It is found that the finite-sample distribution of the KPSS test statistic can be very unstable when the Quadratic Spectral kernel is used and/or a prewhitening filter is applied. The instability manifests itself by making the small-sample distribution of the test statistic sensitive to the specific process that generates the data under the null hypothesis. This in turn implies that the size of the test can be hard to control. For the cases investigated in the current paper, it turns out that using the Bartlett kernel in the long-run variance estimation renders the most stable test. Through an empirical application, we illustrate the adverse effects that can occur when care is not taken in choosing which test implementation to employ when testing for stationarity in small-sample situations. [A sketch of the Bartlett-kernel KPSS statistic follows this entry.]
    Keywords: Stationarity; Unit root; KPSS test; Size distortion; Long-run variance; Monte Carlo simulation; Private consumption; Permanent Income Hypothesis
    JEL: C12 C13 C14 C15 C22 E21
    Date: 2006–12–14
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2006_023&r=ets
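    For reference, the KPSS statistic with the Bartlett-kernel long-run variance (the implementation the paper finds most stable) can be computed as in the following minimal sketch; bandwidth selection, prewhitening and critical values are deliberately left out.

        import numpy as np

        def kpss_stat(y, bandwidth):
            # KPSS statistic for level stationarity of a 1-D series y.
            T = len(y)
            e = y - y.mean()   # residuals from a regression on a constant
            s = np.cumsum(e)   # partial sums of the residuals
            # Bartlett-kernel (Newey-West) estimate of the long-run variance.
            lrv = e @ e / T
            for j in range(1, bandwidth + 1):
                weight = 1.0 - j / (bandwidth + 1.0)
                lrv += 2.0 * weight * (e[j:] @ e[:-j]) / T
            return (s @ s) / (T ** 2 * lrv)

    Large values of the statistic reject the stationarity null; the paper's point is that the small-sample distribution of this statistic, and hence the size of the test, depends on how the long-run variance is estimated.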

This nep-ets issue is ©2007 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.