
New Economics Papers on Econometric Time Series
By:  Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian 
Abstract:  This paper discusses pooling versus model selection for nowcasting and forecasting in the presence of model uncertainty with large, unbalanced datasets. Empirically, unbalanced data is pervasive in economics, typically due to different sampling frequencies and publication delays. Two model classes suited to this context are factor models based on large datasets and mixed-data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst others, the factor estimation method and the number of factors, lag length and indicator selection. Thus, there are many sources of misspecification when selecting a particular model, and an alternative could be pooling over a large set of models with different specifications. We evaluate the relative performance of pooling and model selection for nowcasting and forecasting quarterly German GDP, a key macroeconomic indicator for the largest country in the euro area, with a large set of about one hundred monthly indicators. Our empirical findings provide strong support for pooling over many specifications rather than selecting a specific model. 
Keywords:  nowcasting, forecast combination, forecast pooling, model selection, mixed-frequency data, factor models, MIDAS 
JEL:  C53 E37 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7572&r=ets 
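The pooling-versus-selection contrast at the heart of this abstract can be sketched in a few lines. The snippet below is an illustrative toy, not code from the paper; the function names, the equal-weight combination scheme (one of many the literature considers) and the numbers are invented.

```python
# Illustrative toy: pooling nowcasts across model specifications versus
# selecting the single specification with the best past track record.

def pool_forecasts(forecasts):
    """Equal-weight average of the forecasts from all specifications."""
    return sum(forecasts) / len(forecasts)

def select_best(forecasts, past_mse):
    """Forecast of the specification with the smallest past mean squared error."""
    best = min(range(len(forecasts)), key=lambda i: past_mse[i])
    return forecasts[best]

# Three hypothetical nowcasts of quarterly GDP growth (percent)
nowcasts = [0.4, 0.6, 0.5]
past_mse = [0.30, 0.25, 0.40]

pooled = pool_forecasts(nowcasts)
selected = select_best(nowcasts, past_mse)
```

Pooling averages out specification-specific misspecification, which is the intuition the paper's empirical results support.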
By:  Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian 
Abstract:  This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g., monthly and quarterly series. MIDAS leads to parsimonious models based on exponential lag polynomials for the coefficients, whereas MF-VAR does not restrict the dynamics and therefore can suffer from the curse of dimensionality. But if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative ranking is better evaluated empirically. In this paper, we compare their performance in a case relevant for policy making, i.e., nowcasting and forecasting quarterly GDP growth in the euro area on a monthly basis, using a set of 20 monthly indicators. It turns out that the two approaches are more complements than substitutes: MF-VAR tends to perform better for longer horizons, whereas MIDAS does for shorter horizons. 
Keywords:  nowcasting, mixed-frequency data, mixed-frequency VAR, MIDAS 
JEL:  C53 E37 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7576&r=ets 
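The exponential lag polynomials mentioned above are typically the exponential Almon weights of the MIDAS literature. A minimal sketch (parameter values are made up, and `midas_nowcast` is our own illustrative helper) of how two parameters map into a full set of monthly-lag weights:

```python
import math

def exp_almon_weights(K, theta1, theta2):
    """Normalized exponential Almon lag weights:
    w_k proportional to exp(theta1*k + theta2*k^2), k = 1..K, summing to one."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(1, K + 1)]
    total = sum(raw)
    return [r / total for r in raw]

def midas_nowcast(monthly_lags, theta1, theta2, beta0, beta1):
    """One-indicator MIDAS fit: y = beta0 + beta1 * weighted sum of monthly lags."""
    w = exp_almon_weights(len(monthly_lags), theta1, theta2)
    return beta0 + beta1 * sum(wk * xk for wk, xk in zip(w, monthly_lags))

# Twelve monthly lags of a hypothetical indicator; negative theta2 gives
# smoothly declining weights
weights = exp_almon_weights(12, 0.0, -0.05)
```

Because the whole lag distribution is governed by two parameters, the model stays parsimonious however many high-frequency lags enter, which is the contrast with the unrestricted MF-VAR.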
By:  Breitung, Jörg; Eickmeier, Sandra 
Abstract:  From time to time, economies undergo far-reaching structural changes. In this paper we investigate the consequences of structural breaks in the factor loadings for the specification and estimation of factor models based on principal components, and suggest test procedures for structural breaks. It is shown that structural breaks severely inflate the number of factors identified by the usual information criteria. Based on the strict factor model, the hypothesis of a structural break is tested using likelihood ratio, Lagrange multiplier and Wald statistics. The LM test, which is shown to perform best in our Monte Carlo simulations, is generalized to factor models where the common factors and idiosyncratic components are serially correlated. We also apply the suggested test procedure to a US dataset used in Stock and Watson (2005) and a euro-area dataset described in Altissimo et al. (2007). We find evidence that the beginning of the so-called Great Moderation in the US, as well as the Maastricht treaty and the handover of monetary policy from the European national central banks to the ECB, coincide with structural breaks in the factor loadings. Ignoring these breaks may yield misleading results if the empirical analysis focuses on the interpretation of common factors or on the transmission of common shocks to the variables of interest. 
Keywords:  Dynamic factor models, structural breaks, number of factors, Great Moderation, EMU 
JEL:  C01 C12 C3 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7574&r=ets 
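The "usual information criteria" for the number of factors are those of Bai and Ng (2002). A toy sketch of the IC_p2 variant, assuming one has already computed the average squared residuals V(k) after extracting k principal-component factors (the V values below are invented; the structural-break inflation the abstract describes would show up as the minimum shifting to a larger k):

```python
import math

def bai_ng_icp2(V, N, T):
    """Bai-Ng (2002) IC_p2 criterion: ln V(k) + k * (N+T)/(N*T) * ln(min(N,T)),
    where V[k] is the average squared residual after extracting k factors."""
    penalty = (N + T) / (N * T) * math.log(min(N, T))
    return {k: math.log(v) + k * penalty for k, v in V.items()}

def n_factors(V, N, T):
    """Number of factors minimizing the criterion."""
    ic = bai_ng_icp2(V, N, T)
    return min(ic, key=ic.get)

# Invented residual variances: a big drop for the first two factors, then
# little further gain, so the criterion should settle on two factors
V = {0: 1.0, 1: 0.5, 2: 0.45, 3: 0.44}
```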
By:  Erdenebat Bataa; Denise R. Osborn; Marianne Sensier; Dick van Dijk 
Abstract:  To shed light on changes in international inflation, this paper proposes an iterative procedure to discriminate between structural breaks in the coefficients and the disturbance covariance matrix of a system of equations, allowing these components to change at different dates. Conditional on these, recursive procedures are proposed to analyze the nature of change, including tests to identify individual coefficient shifts and to discriminate between volatility and correlation breaks. Using these procedures, structural breaks in monthly cross-country inflation relationships are examined for major G7 countries (US, Euro area, UK and Canada) and within the Euro area (France, Germany and Italy). Overall, we find few dynamic spillovers between countries, although the Euro area leads inflation in North America, while Germany leads France. Contemporaneous inflation correlations are generally low in the 1970s and early 1980s, but intercontinental correlations increase from the end of the 1990s, while Euro area countries move from essentially idiosyncratic inflation to comovement in the mid-1980s. 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:man:cgbcrp:119&r=ets 
By:  Yang K. Lu (Boston University); Pierre Perron (Boston University) 
Abstract:  We consider the estimation of a random level shift model for which the series of interest is the sum of a short-memory process and a jump or level shift component. For the latter component, we specify the commonly used simple mixture model such that the component is the cumulative sum of a process which is 0 with some probability (1 − α) and is a random variable with probability α. Our estimation method transforms such a model into a linear state space with a mixture of normal innovations, so that an extension of the Kalman filter algorithm can be applied. We apply this random level shift model to the logarithm of absolute returns for the S&P 500, AMEX, Dow Jones and NASDAQ stock market return indices. Our point estimates imply few level shifts for all series. But once these are taken into account, there is little evidence of serial correlation in the remaining noise and, hence, no evidence of long memory. Once the estimated shifts are introduced into a standard GARCH model applied to the returns series, any evidence of GARCH effects disappears. We also produce rolling out-of-sample forecasts of squared returns. In most cases, our simple random level shift model clearly outperforms a standard GARCH(1,1) model and, in many cases, it also provides better forecasts than a fractionally integrated GARCH model. 
Keywords:  structural change, forecasting, GARCH models, long memory 
JEL:  C22 
Date:  2008–09 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008012&r=ets 
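The mixture specification of the level shift component, in which a jump arrives with small probability each period and otherwise the level stays put, is easy to simulate. A hedged sketch (our own simplification: i.i.d. Gaussian noise stands in for the short-memory component, and all names and parameters are illustrative):

```python
import random

def simulate_rls(T, p, shift_sd, noise_sd, seed=0):
    """Random level shift process: each period a jump N(0, shift_sd^2) arrives
    with probability p; the observed series is the cumulated level plus
    i.i.d. N(0, noise_sd^2) noise (a stand-in for the short-memory part)."""
    rng = random.Random(seed)
    level, series = 0.0, []
    for _ in range(T):
        if rng.random() < p:
            level += rng.gauss(0.0, shift_sd)
        series.append(level + rng.gauss(0.0, noise_sd))
    return series
```

Even with very few shifts (small p), the sample autocorrelations of such a series decay slowly, mimicking long memory, which is the confusion the paper addresses.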
By:  Pierre Perron (Boston University); Yohei Yamamoto (Boston University) 
Abstract:  Elliott and Müller (2006) considered the problem of testing for general types of parameter variations, including infrequent breaks. They developed a framework that yields optimal tests, in the sense that they nearly attain some local Gaussian power envelope. The main ingredient in their setup is that the variance of the process generating the changes in the parameters must go to zero at a fast rate. They recommended the so-called qLL test, a partial-sums type test based on the residuals obtained from the restricted model. We show that for breaks that are very small, its power is indeed higher than that of other tests, including the popular sup-Wald test. However, the differences are very minor. When the magnitude of change is moderate to large, the power of the test is very low in the context of a regression with lagged dependent variables or when a correction is applied to account for serial correlation in the errors. In many cases, the power goes to zero as the magnitude of change increases. The power of the sup-Wald test does not show this non-monotonicity, and its power is far superior to that of the qLL test when the break is not very small. We claim that the optimality of the qLL test comes not from the properties of the test statistic but from the criterion adopted, which is not useful for analyzing structural change tests. Instead, we use the concept of relative approximate Bahadur slopes to assess the relative efficiency of two tests. When doing so, it is shown that the sup-Wald test indeed dominates the qLL test and, in many cases, the latter has zero relative asymptotic efficiency. 
Keywords:  structural change, local asymptotics, Bahadur efficiency, hypothesis testing, parameter variations 
JEL:  C22 
Date:  2008–05 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008006&r=ets 
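For a concrete sense of the sup-Wald family discussed above, here is a toy sup-F statistic for a single break in the mean, scanning candidate break dates over a trimmed sample in the spirit of Andrews (1993). This is an illustrative simplification (mean-shift only, homoskedastic errors), not the tests as implemented in the paper:

```python
def sup_f_mean_break(y, trim=0.15):
    """Toy sup-F statistic for a single break in the mean: the largest
    Chow-type F statistic over candidate break dates in the trimmed middle
    of the sample."""
    def ssr(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    T = len(y)
    ssr0 = ssr(y)  # sum of squared residuals under no break
    lo, hi = max(int(trim * T), 1), min(int((1 - trim) * T), T - 1)
    best = 0.0
    for tb in range(lo, hi):
        ssr1 = ssr(y[:tb]) + ssr(y[tb:])
        if ssr1 > 0:
            best = max(best, (T - 2) * (ssr0 - ssr1) / ssr1)
    return best
```

A series with a sizable mean shift yields a large statistic, and the statistic keeps growing with the break magnitude; the paper's point is that the qLL statistic can instead lose power as the break grows.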
By:  Pierre Perron (Boston University); Zhongjun Qu (Boston University) 
Abstract:  Recently, there has been an upsurge of interest in the possibility of confusing long memory and structural changes in level. Many studies have shown that when a stationary short-memory process is contaminated by level shifts, the estimate of the fractional differencing parameter is biased away from zero and the autocovariance function exhibits a slow rate of decay, akin to a long-memory process. Partly based on results in Perron and Qu (2007), we analyze the properties of the autocorrelation function, the periodogram and the log-periodogram estimate of the memory parameter when the level shift component is specified by a simple mixture model. Our theoretical results explain many reported findings and uncover new features. We confront our theoretical predictions using log-squared returns as a proxy for the volatility of some asset returns, including daily S&P 500 returns over the period 1928–2002. The autocorrelations and the path of the log-periodogram estimates follow patterns that would be obtained if the true underlying process were short memory contaminated by level shifts rather than fractionally integrated. A simple testing procedure is also proposed, which reinforces this conclusion. 
Keywords:  structural change, jumps, long memory processes, fractional integration, frequency domain estimates 
JEL:  C22 
Date:  2008–08 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008004&r=ets 
By:  Pierre Perron (Boston University); Jing Zhou (BlackRock, Inc.) 
Abstract:  We provide a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression involving stationary regressors. Our framework is quite general in that we allow for general mixing-type regressors, and the assumptions imposed on the errors are quite mild. The errors’ distribution can be non-normal and conditional heteroskedasticity is permissible. Extensions to the case with serially correlated errors are also treated. We provide the required tools for addressing the following testing problems, among others: a) testing for given numbers of changes in the regression coefficients and the variance of the errors; b) testing for some unknown number of changes less than some pre-specified maximum; c) testing for changes in variance (regression coefficients) allowing for a given number of changes in the regression coefficients (variance); and d) estimating the number of changes present. These testing problems are important for practical applications, as witnessed by recent interest in macroeconomics and finance in documenting structural change in the variability of shocks to simple autoregressions or vector autoregressive models. 
Keywords:  Change-point, Variance shift, Conditional heteroskedasticity, Likelihood ratio tests 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008011&r=ets 
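One building block behind testing problem (a) above, in the special case of a pure variance change at a known date, is a Gaussian likelihood-ratio statistic comparing a common variance with segment-specific variances. A toy sketch (known break date and a zero-mean series are assumed; both simplifications are ours, and the general tests in the paper handle unknown dates and joint coefficient changes):

```python
import math

def lr_variance_break(y, tb):
    """Gaussian likelihood-ratio statistic for a variance break at known
    date tb in a zero-mean series: compares one common variance against
    separate variances on the two subsamples."""
    def mss(seg):
        # maximum-likelihood variance estimate for a zero-mean segment
        return sum(v * v for v in seg) / len(seg)
    T, T1 = len(y), tb
    return (T * math.log(mss(y))
            - T1 * math.log(mss(y[:tb]))
            - (T - T1) * math.log(mss(y[tb:])))
```

The statistic is zero when the two subsample variances coincide and grows with the size of the variance shift.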
By:  Zhongjun Qu (Boston University); Pierre Perron (Boston University) 
Abstract:  Empirical findings related to the time series properties of stock returns volatility indicate autocorrelations that decay slowly at long lags. In light of this, several long-memory models have been proposed. However, the possibility of level shifts has been advanced as a possible explanation for the appearance of long memory, and there is growing evidence suggesting that it may be an important feature of stock returns volatility. Nevertheless, it remains a conjecture that a model incorporating random level shifts in variance can explain the data well and produce reasonable forecasts. We show that a very simple stochastic volatility model incorporating both a random level shift and a short-memory component indeed provides a better in-sample fit of the data and produces forecasts that are no worse, and sometimes better, than standard stationary short- and long-memory models. We use a Bayesian method for inference and develop algorithms to obtain the posterior distributions of the parameters and the smoothed estimates of the two latent components. We apply the model to daily S&P 500 and NASDAQ returns over the period 1980.1–2005.12. Although the occurrence of a level shift is rare, about once every two years, the level shift component clearly contributes most to the total variation in the volatility process. The half-life of a typical shock from the short-memory component is very short, on average between 8 and 14 days. We also show that, unlike common stationary short- or long-memory models, our model is able to replicate key features of the data. For the NASDAQ series, it forecasts better than a standard stochastic volatility model, and for the S&P 500 index, it performs equally well. 
Keywords:  Bayesian estimation, Structural change, Forecasting, Long memory, State-space models, Latent process 
JEL:  C11 C12 C53 G12 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008007&r=ets 
By:  Pierre Perron (Department of Economics, Boston University); Yohei Yamamoto (Department of Economics, Boston University) 
Abstract:  We consider the problem of estimating and testing for multiple breaks in a single equation framework with regressors that are endogenous, i.e., correlated with the errors. First, we show, based on standard assumptions about the regressors, instruments and errors, that the second stage regression of the instrumental variable (IV) procedure involves regressors and errors that satisfy all the assumptions in Perron and Qu (2006), so that the results about consistency, rate of convergence and limit distributions of the estimates of the break dates, as well as the limit distributions of the tests, follow as simple consequences. More importantly from a practical perspective, we show that even in the presence of endogenous regressors, it is still preferable to simply estimate the break dates and test for structural change using the usual ordinary least-squares (OLS) framework. It delivers estimates of the break dates with higher precision and tests with higher power compared to those obtained using an IV method. To illustrate the relevance of our theoretical results, we consider the stability of the New Keynesian hybrid Phillips curve. IV-based methods do not indicate any instability. On the other hand, OLS-based ones strongly indicate a change in 1991:1, after which the model loses all explanatory power. 
Keywords:  structural change, instrumental variables, two-stage least squares, parameter variations 
JEL:  C22 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008017&r=ets 
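The OLS approach the authors favor estimates a break date by minimizing the total sum of squared residuals over candidate sample splits. A single-break toy version for a bivariate regression y = a + b·x (the trimming fraction, helper names and degenerate-regressor fallback are our own illustrative choices):

```python
def estimate_break_date(y, x, trim=0.15):
    """OLS break-date estimate for y = a + b*x with a single break: pick the
    split date minimizing the total SSR of the two subsample regressions."""
    def ssr(ys, xs):
        n = len(ys)
        xbar, ybar = sum(xs) / n, sum(ys) / n
        den = sum((v - xbar) ** 2 for v in xs)
        b = (sum((u - xbar) * (w - ybar) for u, w in zip(xs, ys)) / den
             if den > 0 else 0.0)  # constant regressor: fall back to mean fit
        a = ybar - b * xbar
        return sum((w - a - b * u) ** 2 for u, w in zip(xs, ys))
    T = len(y)
    lo, hi = max(int(trim * T), 2), min(int((1 - trim) * T), T - 2)
    return min(range(lo, hi),
               key=lambda tb: ssr(y[:tb], x[:tb]) + ssr(y[tb:], x[tb:]))
```

The paper's point is that this least-squares criterion remains the preferable way to locate break dates even when x is endogenous, rather than running the search on an IV objective.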
By:  Jing Zhou (BlackRock, Inc.); Pierre Perron (Boston University) 
Abstract:  In a companion paper, Perron and Zhou (2008) provided a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression model involving stationary regressors, allowing the break dates for the two components to be different or to overlap. The aim of this paper is twofold. First, we present detailed simulation analyses to document various issues related to their procedures: a) the inadequacy of the two-step procedures that are commonly applied; b) which particular version of the necessary correction factor exhibits better finite sample properties; c) whether applying a correction that is valid under more general conditions than necessary is detrimental to the size and power of the tests; d) the finite sample size and power of the various tests proposed; e) the performance of the sequential method in determining the number and types of breaks present. Second, we apply their testing procedures to various macroeconomic time series studied by Stock and Watson (2002). Our results reinforce the prevalence of changes in the mean, persistence and variance of the shocks to these series, and the fact that for most of them an important reduction in variance occurred during the 1980s. In many cases, however, the so-called “great moderation” should instead be viewed as a “great reversion”. 
Keywords:  Change-point, Variance shift, Conditional heteroskedasticity, Likelihood ratio tests, the “Great Moderation” 
JEL:  C22 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008010&r=ets 