nep-ets New Economics Papers
on Econometric Time Series
Issue of 2009‒06‒10
eleven papers chosen by
Yong Yin
SUNY at Buffalo

  1. Pooling versus model selection for nowcasting with many predictors: an application to German GDP By Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian
  2. MIDAS versus mixed-frequency VAR: nowcasting GDP in the euro area By Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian
  3. Testing for structural breaks in dynamic factor models By Breitung, Jörg; Eickmeier, Sandra
  4. Common and spatial drivers in regional business cycles By Erdenebat Bataa; Denise R. Osborn; Marianne Sensier; Dick van Dijk
  5. Modeling and Forecasting Stock Return Volatility Using a Random Level Shift Model By Yang K. Lu; Pierre Perron
  6. On the Usefulness or Lack Thereof of Optimality Criteria for Structural Change Tests By Pierre Perron; Yohei Yamamoto
  7. Long-Memory and Level Shifts in the Volatility of Stock Market Return Indices By Pierre Perron; Zhongjun Qu
  8. Testing Jointly for Structural Changes in the Error Variance and Coefficients of a Linear Regression Model By Pierre Perron; Jing Zhou
  9. A Stochastic Volatility Model with Random Level Shifts: Theory and Applications to S&P 500 and NASDAQ Return Indices By Zhongjun Qu; Pierre Perron
  10. Estimating and Testing Multiple Structural Changes in Models with Endogenous Regressors By Pierre Perron; Yohei Yamamoto
  11. Testing for Breaks in Coefficients and Error Variance: Simulations and Applications By Jing Zhou; Pierre Perron

  1. By: Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian
    Abstract: This paper discusses pooling versus model selection for now- and forecasting in the presence of model uncertainty with large, unbalanced datasets. Empirically, unbalanced data is pervasive in economics and typically due to different sampling frequencies and publication delays. Two model classes suited in this context are factor models based on large datasets and mixed-data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst others, the factor estimation method and the number of factors, lag length and indicator selection. Thus, there are many sources of mis-specification when selecting a particular model, and an alternative could be pooling over a large set of models with different specifications. We evaluate the relative performance of pooling and model selection for now- and forecasting quarterly German GDP, a key macroeconomic indicator for the largest country in the euro area, with a large set of about one hundred monthly indicators. Our empirical findings provide strong support for pooling over many specifications rather than selecting a specific model.
    Keywords: nowcasting, forecast combination, forecast pooling, model selection, mixed-frequency data, factor models, MIDAS
    JEL: C53 E37
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdp1:7572&r=ets
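    As a rough illustration of why pooling can help, the following minimal Python sketch (simulated data; all settings are hypothetical and not the authors' setup) compares an equal-weight average of many noisy model forecasts with the single best-fitting model:

      # Equal-weight forecast pooling vs. picking the best-fitting model.
      # Purely illustrative: each "model" is the truth plus independent noise.
      import numpy as np

      rng = np.random.default_rng(0)
      T, M = 80, 20                        # evaluation periods, candidate models
      y = rng.normal(size=T)               # target series (simulated)
      noise_sd = rng.uniform(0.5, 2.0, M)  # each model has its own error size
      fcst = y[:, None] + rng.normal(scale=noise_sd, size=(T, M))

      pooled = fcst.mean(axis=1)                                # pooling
      sel = fcst[:, ((fcst - y[:, None])**2).mean(0).argmin()]  # selection

      rmse = lambda e: np.sqrt(np.mean(e**2))
      print("pooled RMSE:  ", rmse(pooled - y))
      print("selected RMSE:", rmse(sel - y))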
  2. By: Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian
    Abstract: This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g., monthly and quarterly series. MIDAS leads to parsimonious models based on exponential lag polynomials for the coefficients, whereas MF-VAR does not restrict the dynamics and therefore can suffer from the curse of dimensionality. But if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative ranking is better evaluated empirically. In this paper, we compare their performance in a relevant case for policy making, i.e., nowcasting and forecasting quarterly GDP growth in the euro area, on a monthly basis and using a set of 20 monthly indicators. It turns out that the two approaches are complements rather than substitutes, since MF-VAR tends to perform better for longer horizons, whereas MIDAS does for shorter horizons.
    Keywords: nowcasting, mixed-frequency data, mixed-frequency VAR, MIDAS
    JEL: C53 E37
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdp1:7576&r=ets
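    The exponential lag polynomial at the core of a MIDAS regression can be written down in a few lines; the Python sketch below (parameter values are arbitrary, chosen only for illustration) shows how twelve monthly observations are collapsed into a single quarterly regressor:

      # Normalized exponential Almon weights, the parsimonious lag polynomial
      # typically used in MIDAS regressions (parameter values are arbitrary).
      import numpy as np

      def exp_almon_weights(theta1, theta2, K):
          k = np.arange(1, K + 1)
          w = np.exp(theta1 * k + theta2 * k**2)
          return w / w.sum()                # weights sum to one

      w = exp_almon_weights(0.1, -0.05, K=12)         # 12 monthly lags
      x_monthly = np.random.default_rng(1).normal(size=12)
      x_quarterly = w @ x_monthly           # one MIDAS regressor per quarter
      print(w.round(3), x_quarterly)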
  3. By: Breitung, Jörg; Eickmeier, Sandra
    Abstract: From time to time, economies undergo far-reaching structural changes. In this paper we investigate the consequences of structural breaks in the factor loadings for the specification and estimation of factor models based on principal components and suggest test procedures for structural breaks. It is shown that structural breaks severely inflate the number of factors identified by the usual information criteria. Based on the strict factor model, the hypothesis of a structural break is tested by using Likelihood-Ratio, Lagrange-Multiplier and Wald statistics. The LM test, which is shown to perform best in our Monte Carlo simulations, is generalized to factor models where the common factors and idiosyncratic components are serially correlated. We also apply the suggested test procedure to a US dataset used in Stock and Watson (2005) and a euro-area dataset described in Altissimo et al. (2007). We find evidence that the beginning of the so-called Great Moderation in the US as well as the Maastricht treaty and the handover of monetary policy from the European national central banks to the ECB coincide with structural breaks in the factor loadings. Ignoring these breaks may yield misleading results if the empirical analysis focuses on the interpretation of common factors or on the transmission of common shocks to the variables of interest.
    Keywords: Dynamic factor models, structural breaks, number of factors, Great Moderation, EMU
    JEL: C01 C12 C3
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdp1:7574&r=ets
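    A simplified flavor of such a test (not the paper's LM statistic) is a Chow-type Wald check on the loading of one series at a candidate break date, after extracting the factor by principal components; the Python sketch below uses simulated data and arbitrary settings:

      # Simulate a one-factor panel whose first series has a loading break,
      # extract the factor by PCA, and run a Chow-type Wald test on the break.
      import numpy as np

      rng = np.random.default_rng(2)
      T, N, Tb = 200, 50, 100
      f = rng.normal(size=T)                       # common factor
      lam = rng.normal(size=N)                     # loadings
      X = np.outer(f, lam) + rng.normal(size=(T, N))
      X[Tb:, 0] += 1.5 * f[Tb:]                    # loading shift in series 0

      U, S, _ = np.linalg.svd(X - X.mean(0), full_matrices=False)
      fhat = U[:, 0] * S[0]                        # first principal component

      d = (np.arange(T) >= Tb).astype(float)       # post-break dummy
      Z = np.column_stack([fhat, d * fhat])
      b, *_ = np.linalg.lstsq(Z, X[:, 0], rcond=None)
      e = X[:, 0] - Z @ b
      s2 = e @ e / (T - 2)
      V = s2 * np.linalg.inv(Z.T @ Z)
      print("Wald stat for loading shift:", b[1]**2 / V[1, 1])  # ~chi2(1)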
  4. By: Erdenebat Bataa; Denise R. Osborn; Marianne Sensier; Dick van Dijk
    Abstract: To shed light on changes in international inflation, this paper proposes an iterative procedure to discriminate between structural breaks in the coefficients and the disturbance covariance matrix of a system of equations, allowing these components to change at different dates. Conditional on these, recursive procedures are proposed to analyze the nature of change, including tests to identify individual coefficient shifts and to discriminate between volatility and correlation breaks. Using these procedures, structural breaks in monthly cross-country inflation relationships are examined for major G-7 countries (US, Euro area, UK and Canada) and within the Euro area (France, Germany and Italy). Overall, we find few dynamic spillovers between countries, although the Euro area leads inflation in North America, while Germany leads France. Contemporaneous inflation correlations are generally low in the 1970s and early 1980s, but inter-continental correlations increase from the end of the 1990s, while Euro area countries move from essentially idiosyncratic inflation to co-movement in the mid-1980s.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:man:cgbcrp:119&r=ets
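    The distinction between volatility and correlation breaks can be seen by decomposing sub-sample covariance estimates; the Python sketch below (simulated data, not the paper's formal tests) generates a pure correlation break and shows the decomposition:

      # Decompose pre/post-break covariance estimates into volatilities and
      # a correlation matrix (illustrative only; not the paper's procedures).
      import numpy as np

      rng = np.random.default_rng(9)
      T, Tb = 400, 200
      S1 = np.array([[1.0, 0.2], [0.2, 1.0]])      # pre-break covariance
      S2 = np.array([[1.0, 0.7], [0.7, 1.0]])      # correlation rises at Tb
      e = np.vstack([rng.multivariate_normal([0, 0], S1, Tb),
                     rng.multivariate_normal([0, 0], S2, T - Tb)])

      def vol_corr(E):
          S = np.cov(E.T)
          d = np.sqrt(np.diag(S))
          return d, S / np.outer(d, d)             # volatilities, correlation

      for name, seg in [("pre ", e[:Tb]), ("post", e[Tb:])]:
          d, R = vol_corr(seg)
          print(name, "vols:", d.round(2), "corr:", round(R[0, 1], 2))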
  5. By: Yang K. Lu (Boston University); Pierre Perron (Boston University)
    Abstract: We consider the estimation of a random level shift model for which the series of interest is the sum of a short memory process and a jump or level shift component. For the latter component, we specify the commonly used simple mixture model such that the component is the cumulative sum of a process which is 0 with some probability (1-a) and is a random variable with probability a. Our estimation method transforms such a model into a linear state space model with a mixture of normal innovations, so that an extension of the Kalman filter algorithm can be applied. We apply this random level shift model to the logarithm of absolute returns for the S&P 500, AMEX, Dow Jones and NASDAQ stock market return indices. Our point estimates imply few level shifts for all series. But once these are taken into account, there is little evidence of serial correlation in the remaining noise and, hence, no evidence of long memory. Once the estimated shifts are introduced into a standard GARCH model applied to the returns series, any evidence of GARCH effects disappears. We also produce rolling out-of-sample forecasts of squared returns. In most cases, our simple random level shift model clearly outperforms a standard GARCH(1,1) model and, in many cases, it also provides better forecasts than a fractionally integrated GARCH model.
    Keywords: structural change, forecasting, GARCH models, long-memory
    JEL: C22
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008-012&r=ets
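    The model itself is easy to simulate; the Python sketch below (arbitrary parameter values, not the authors' estimates) builds the level component as a cumulative sum of rare Bernoulli-timed jumps and adds a short-memory AR(1) term:

      # Random level shift process: cumulated rare jumps plus AR(1) noise.
      import numpy as np

      rng = np.random.default_rng(3)
      T, a = 5000, 0.003                     # a = jump probability per period
      jumps = rng.binomial(1, a, T) * rng.normal(0, 1.0, T)
      level = np.cumsum(jumps)               # level shift component

      e = np.zeros(T)                        # short-memory AR(1) component
      for t in range(1, T):
          e[t] = 0.2 * e[t - 1] + rng.normal()

      y = level + e                          # e.g., log absolute returns
      print("number of level shifts:", int((jumps != 0).sum()))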
  6. By: Pierre Perron (Boston University); Yohei Yamamoto (Boston University)
    Abstract: Elliott and Müller (2006) considered the problem of testing for general types of parameter variations, including infrequent breaks. They developed a framework that yields optimal tests, in the sense that they nearly attain some local Gaussian power envelope. The main ingredient in their setup is that the variance of the process generating the changes in the parameters must go to zero at a fast rate. They recommended the so-called qLL test, a partial-sums type test based on the residuals obtained from the restricted model. We show that for breaks that are very small, its power is indeed higher than that of other tests, including the popular sup-Wald test. However, the differences are very minor. When the magnitude of change is moderate to large, the power of the test is very low in the context of a regression with lagged dependent variables or when a correction is applied to account for serial correlation in the errors. In many cases, the power goes to zero as the magnitude of change increases. The power of the sup-Wald test does not show this non-monotonicity, and its power is far superior to that of the qLL test when the break is not very small. We claim that the optimality of the qLL test comes not from the properties of the test statistic but from the criterion adopted, which is not useful for analyzing structural change tests. Instead, we use the concept of relative approximate Bahadur slopes to assess the relative efficiency of two tests. When doing so, it is shown that the sup-Wald test indeed dominates the qLL test and, in many cases, the latter has zero relative asymptotic efficiency.
    Keywords: structural change, local asymptotics, Bahadur efficiency, hypothesis testing, parameter variations
    JEL: C22
    Date: 2008–05
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008-006&r=ets
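    For reference, a sup-Wald statistic is straightforward to compute; the Python sketch below implements it for a single break in the mean (simplified trimming; the resulting statistic would be compared with Andrews (1993) critical values):

      # Sup-Wald test for one break in the mean of a series.
      import numpy as np

      def sup_wald_mean_break(y, trim=0.15):
          T = len(y)
          stats = []
          for Tb in range(int(trim * T), int((1 - trim) * T)):
              m1, m2 = y[:Tb].mean(), y[Tb:].mean()
              e = np.concatenate([y[:Tb] - m1, y[Tb:] - m2])
              s2 = e @ e / (T - 2)
              stats.append((m1 - m2)**2 / (s2 * (1 / Tb + 1 / (T - Tb))))
          return max(stats)

      rng = np.random.default_rng(4)
      y = np.concatenate([rng.normal(0, 1, 120), rng.normal(1, 1, 120)])
      print("sup-Wald:", sup_wald_mean_break(y))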
  7. By: Pierre Perron (Boston University); Zhongjun Qu (Boston University)
    Abstract: Recently, there has been an upsurge of interest in the possibility of confusing long memory and structural changes in level. Many studies have shown that when a stationary short memory process is contaminated by level shifts, the estimate of the fractional differencing parameter is biased away from zero and the autocovariance function exhibits a slow rate of decay, akin to a long memory process. Partly based on results in Perron and Qu (2007), we analyze the properties of the autocorrelation function, the periodogram and the log periodogram estimate of the memory parameter when the level shift component is specified by a simple mixture model. Our theoretical results explain many findings reported in the literature and uncover new features. We confront our theoretical predictions with data, using log-squared returns as a proxy for the volatility of several asset return series, including daily S&P 500 returns over the period 1928-2002. The autocorrelations and the path of the log periodogram estimates follow patterns that would obtain if the true underlying process were a short memory process contaminated by level shifts rather than a fractionally integrated process. A simple testing procedure is also proposed, which reinforces this conclusion.
    Keywords: structural change, jumps, long memory processes, fractional integration, frequency domain estimates
    JEL: C22
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008-004&r=ets
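    The bias mechanism is easy to reproduce: apply the log-periodogram (GPH) regression to a series with no long memory but occasional level shifts. The Python sketch below (arbitrary settings, simulated data) typically yields a memory estimate well above zero:

      # GPH log-periodogram estimate of d for a short-memory series
      # contaminated by level shifts (no true long memory).
      import numpy as np

      def gph_d(y, m):
          T = len(y)
          y = y - y.mean()
          lam = 2 * np.pi * np.arange(1, m + 1) / T
          I = np.abs(np.fft.fft(y)[1:m + 1])**2 / (2 * np.pi * T)
          X = np.column_stack([np.ones(m), -2 * np.log(2 * np.sin(lam / 2))])
          b, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
          return b[1]                        # slope estimates d

      rng = np.random.default_rng(5)
      T = 4096
      level = np.cumsum(rng.binomial(1, 0.002, T) * rng.normal(0, 1, T))
      y = level + rng.normal(0, 1, T)
      print("d-hat:", gph_d(y, m=int(T**0.5)))   # biased away from zero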
  8. By: Pierre Perron (Boston University); Jing Zhou (BlackRock, Inc.)
    Abstract: We provide a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression involving stationary regressors. Our framework is quite general in that we allow for general mixing-type regressors, and the assumptions imposed on the errors are quite mild. The errors’ distribution can be non-normal and conditional heteroskedasticity is permissible. Extensions to the case with serially correlated errors are also treated. We provide the required tools for addressing the following testing problems, among others: a) testing for given numbers of changes in the regression coefficients and the variance of the errors; b) testing for some unknown number of changes less than some pre-specified maximum; c) testing for changes in the variance (regression coefficients) allowing for a given number of changes in the regression coefficients (variance); and d) estimating the number of changes present. These testing problems are important for practical applications, as witnessed by recent interest in macroeconomics and finance, where documenting structural change in the variability of shocks to simple autoregressions or vector autoregressive models has been a concern.
    Keywords: Change-point, Variance shift, Conditional heteroskedasticity, Likelihood ratio tests
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008-011&r=ets
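    As a simplified taste of testing problem a), the Python sketch below computes a sup-likelihood-ratio statistic for a single break in the error variance, holding the coefficients fixed (illustrative; it omits the paper's correction factors and general error assumptions):

      # Sup-LR test for one break in the variance of regression errors.
      import numpy as np

      def sup_lr_variance_break(e, trim=0.15):
          T = len(e)
          s2_0 = e @ e / T                   # no-break variance estimate
          best = -np.inf
          for Tb in range(int(trim * T), int((1 - trim) * T)):
              s2_1 = e[:Tb] @ e[:Tb] / Tb
              s2_2 = e[Tb:] @ e[Tb:] / (T - Tb)
              lr = T * np.log(s2_0) - Tb * np.log(s2_1) - (T - Tb) * np.log(s2_2)
              best = max(best, lr)
          return best

      rng = np.random.default_rng(6)
      e = np.concatenate([rng.normal(0, 1, 150), rng.normal(0, 2, 150)])
      print("sup-LR:", sup_lr_variance_break(e))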
  9. By: Zhongjun Qu (Boston University); Pierre Perron (Boston University)
    Abstract: Empirical findings related to the time series properties of stock returns volatility indicate autocorrelations that decay slowly at long lags. In light of this, several long-memory models have been proposed. However, the possibility of level shifts has been advanced as a possible explanation for the appearance of long memory, and there is growing evidence suggesting that it may be an important feature of stock returns volatility. Nevertheless, it remains a conjecture that a model incorporating random level shifts in variance can explain the data well and produce reasonable forecasts. We show that a very simple stochastic volatility model incorporating both a random level shift and a short-memory component indeed provides a better in-sample fit of the data and produces forecasts that are no worse, and sometimes better, than standard stationary short- and long-memory models. We use a Bayesian method for inference and develop algorithms to obtain the posterior distributions of the parameters and the smoothed estimates of the two latent components. We apply the model to daily S&P 500 and NASDAQ returns over the period 1980.1-2005.12. Although the occurrence of a level shift is rare, about once every two years, the level shift component clearly contributes most to the total variation in the volatility process. The half-life of a typical shock from the short-memory component is very short, on average between 8 and 14 days. We also show that, unlike common stationary short- or long-memory models, our model is able to replicate key features of the data. For the NASDAQ series, it forecasts better than a standard stochastic volatility model, and for the S&P 500 index, it performs equally well.
    Keywords: Bayesian estimation, Structural change, Forecasting, Long-memory, State-space models, Latent process
    JEL: C11 C12 C53 G12
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008-007&r=ets
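    The data-generating process is simple to simulate even though the inference is not; the Python sketch below (arbitrary parameter values, not the paper's posterior estimates) builds log volatility as the sum of a rare random level shift component and a short-lived AR(1) component:

      # Stochastic volatility with random level shifts in the log variance.
      import numpy as np

      rng = np.random.default_rng(7)
      T = 2000
      tau = np.cumsum(rng.binomial(1, 0.002, T) * rng.normal(0, 0.5, T))

      c = np.zeros(T)                        # short-memory component
      for t in range(1, T):
          c[t] = 0.5 * c[t - 1] + rng.normal(0, 0.3)

      h = -9.0 + tau + c                     # log variance
      r = np.exp(h / 2) * rng.normal(size=T) # returns
      print("daily vol range:", np.exp(h.min() / 2), np.exp(h.max() / 2))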
  10. By: Pierre Perron (Department of Economics, Boston University); Yohei Yamamoto (Department of Economics, Boston University)
    Abstract: We consider the problem of estimating and testing for multiple breaks in a single equation framework with regressors that are endogenous, i.e., correlated with the errors. First, we show, based on standard assumptions about the regressors, instruments and errors, that the second-stage regression of the instrumental variable (IV) procedure involves regressors and errors that satisfy all the assumptions in Perron and Qu (2006), so that the results about consistency, rate of convergence and limit distributions of the estimates of the break dates, as well as the limit distributions of the tests, are obtained as simple consequences. More importantly from a practical perspective, we show that even in the presence of endogenous regressors, it is still preferable to simply estimate the break dates and test for structural change using the usual ordinary least-squares (OLS) framework. It delivers estimates of the break dates with higher precision and tests with higher power compared to those obtained using an IV method. To illustrate the relevance of our theoretical results, we consider the stability of the New Keynesian hybrid Phillips curve. IV-based methods do not indicate any instability. On the other hand, OLS-based ones strongly indicate a change in 1991:1 and that after this date the model loses all explanatory power.
    Keywords: structural change, instrumental variables, two-stage least-squares, parameter variations
    JEL: C22
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008-017&r=ets
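    The OLS route the paper recommends amounts to choosing the break date that minimizes the global sum of squared residuals; the Python sketch below (simulated data, one break, simplified trimming) shows the idea:

      # OLS break-date estimation by global SSR minimization (one break).
      import numpy as np

      def ols_break_date(y, X, trim=0.15):
          T = len(y)
          best_Tb, best_ssr = None, np.inf
          for Tb in range(int(trim * T), int((1 - trim) * T)):
              ssr = 0.0
              for s in (slice(0, Tb), slice(Tb, T)):
                  b, *_ = np.linalg.lstsq(X[s], y[s], rcond=None)
                  r = y[s] - X[s] @ b
                  ssr += r @ r
              if ssr < best_ssr:
                  best_Tb, best_ssr = Tb, ssr
          return best_Tb

      rng = np.random.default_rng(8)
      T = 300
      X = np.column_stack([np.ones(T), rng.normal(size=T)])
      b1, b2 = np.array([0.0, 1.0]), np.array([1.0, 0.3])
      y = np.where(np.arange(T) < 180, X @ b1, X @ b2) + rng.normal(0, 0.5, T)
      print("estimated break date:", ols_break_date(y, X))  # true date: 180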
  11. By: Jing Zhou (BlackRock, Inc.); Pierre Perron (Boston University)
    Abstract: In a companion paper, Perron and Zhou (2008) provided a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression model involving stationary regressors, allowing the break dates for the two components to be different or overlap. The aim of this paper is twofold. First, we present detailed simulation analyses to document various issues related to their procedures: a) the inadequacy of the two step procedures that are commonly applied; b) which particular version of the necessary correction factor exhibits better finite sample properties; c) whether applying a correction that is valid under more general conditions than necessary is detrimental to the size and power of the tests; d) the finite sample size and power of the various tests proposed; e) the performance of the sequential method in determining the number and types of breaks present. Second, we apply their testing procedures to various macroeconomic time series studied by Stock and Watson (2002). Our results reinforce the prevalence of change in mean, persistence and variance of the shocks to these series, and the fact that for most of them an important reduction in variance occurred during the 1980s. In many cases, however, the so-called “great moderation” should instead be viewed as a “great reversion”.
    Keywords: Change-point, Variance shift, Conditional heteroskedasticity, Likelihood ratio tests, the “Great Moderation”
    JEL: C22
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008-010&r=ets

This nep-ets issue is ©2009 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.