
on Forecasting 
By:  Eickmeier, Sandra; Ng, Tim 
Abstract:  We look at how large international datasets can improve forecasts of national activity. We use the case of New Zealand, an archetypal small open economy. We apply “data-rich” factor and shrinkage methods to tackle the problem of efficiently handling hundreds of predictor data series from many countries. The methods covered are principal components, targeted predictors, weighted principal components, partial least squares, elastic net and ridge regression. Using these methods, we assess the marginal predictive content of international data for New Zealand GDP growth. We find that exploiting a large number of international predictors can improve forecasts of our target variable, compared to more traditional models based on small datasets. This is in spite of New Zealand survey data capturing a substantial proportion of the predictive information in the international data. The largest forecasting accuracy gains from including international predictors are at longer forecast horizons. The forecasting performance achievable with the data-rich methods differs widely, with shrinkage methods and partial least squares performing best. We also assess the type of international data that contains the most predictive information for New Zealand growth over our sample. 
Keywords:  Forecasting, factor models, shrinkage methods, principal components, targeted predictors, weighted principal components, partial least squares 
JEL:  C33 C53 F47 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7580&r=for 
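To make the shrinkage idea in the abstract above concrete, here is a minimal stdlib-Python sketch of ridge regression on a toy "many predictors" dataset. The data, dimensions and penalty values are invented for illustration; this is not the paper's setup or its actual estimation code.

```python
import random

def ridge(X, y, lam):
    """Ridge regression: solve (X'X + lam*I) beta = X'y by Gaussian
    elimination. X is a list of rows; no pivoting is needed because
    X'X + lam*I is positive definite."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) + (lam if j == k else 0.0)
          for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for j in range(p):                       # forward elimination
        for r in range(j + 1, p):
            f = A[r][j] / A[j][j]
            for c in range(j, p):
                A[r][c] -= f * A[j][c]
            b[r] -= f * b[j]
    beta = [0.0] * p
    for j in reversed(range(p)):             # back substitution
        beta[j] = (b[j] - sum(A[j][c] * beta[c]
                              for c in range(j + 1, p))) / A[j][j]
    return beta

# Toy setup: 30 observations, 10 noisy predictors, only the first two
# truly drive the target (all values invented for illustration).
random.seed(1)
X = [[random.gauss(0, 1) for _ in range(10)] for _ in range(30)]
y = [row[0] + 0.5 * row[1] + random.gauss(0, 0.5) for row in X]

norm = lambda b: sum(v * v for v in b)
b_light = ridge(X, y, 0.1)
b_heavy = ridge(X, y, 100.0)
# A heavier penalty shrinks the coefficient vector toward zero, which is
# what keeps estimation feasible with hundreds of predictors.
```

The same solver covers both shrinkage methods in the abstract in spirit: elastic net adds an absolute-value penalty on top of the squared one shown here.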
By:  Schumacher, Christian 
Abstract:  This paper considers factor forecasting with national versus factor forecasting with international data. We forecast German GDP based on a large set of about 500 time series, consisting of German data as well as data from Euro-area and G7 countries. For factor estimation, we consider standard principal components as well as variable preselection prior to factor estimation using targeted predictors following Bai and Ng [Forecasting economic time series using targeted predictors, Journal of Econometrics 146 (2008), 304–317]. The results are as follows: Forecasting without data preselection favours the use of German data only, and no additional information content can be extracted from international data. However, when using targeted predictors for variable selection, international data generally improves the forecastability of German GDP. 
Keywords:  forecasting, factor models, international data, variable selection 
JEL:  C53 E27 F47 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7579&r=for 
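The preselection step described above can be sketched in a few lines: rank candidate predictors by their absolute correlation with the target and keep only the top k before estimating factors. This is a hard-thresholding flavour of the targeted-predictors idea; the paper's actual selection rules (and Bai and Ng's soft-thresholding variants) may differ.

```python
import math, random

def select_targeted(X, y, k):
    """Keep the k predictors most correlated (in absolute value) with the
    target; factors would then be estimated from this subset only."""
    n = len(y)
    ybar = sum(y) / n
    sy = math.sqrt(sum((v - ybar) ** 2 for v in y))
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        xbar = sum(col) / n
        sx = math.sqrt(sum((c - xbar) ** 2 for c in col))
        cov = sum((col[i] - xbar) * (y[i] - ybar) for i in range(n))
        scores.append((abs(cov / (sx * sy)), j))
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:k])

# Toy data: 8 candidate predictors, but only column 3 actually matters.
random.seed(2)
X = [[random.gauss(0, 1) for _ in range(8)] for _ in range(200)]
y = [2.0 * row[3] + random.gauss(0, 0.5) for row in X]
chosen = select_targeted(X, y, 3)
# Column 3 should survive the screen; the other slots are noise picks.
```

The point of the paper is that this kind of screen lets genuinely informative international series into the factor space instead of letting them be drowned out by the bulk of irrelevant ones.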
By:  Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian 
Abstract:  This paper discusses pooling versus model selection for now- and forecasting in the presence of model uncertainty with large, unbalanced datasets. Empirically, unbalanced data is pervasive in economics and typically due to different sampling frequencies and publication delays. Two model classes suited in this context are factor models based on large datasets and mixed-data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst others, the factor estimation method and the number of factors, lag length and indicator selection. Thus, there are many sources of misspecification when selecting a particular model, and an alternative could be pooling over a large set of models with different specifications. We evaluate the relative performance of pooling and model selection for now- and forecasting quarterly German GDP, a key macroeconomic indicator for the largest country in the euro area, with a large set of about one hundred monthly indicators. Our empirical findings provide strong support for pooling over many specifications rather than selecting a specific model. 
Keywords:  nowcasting, forecast combination, forecast pooling, model selection, mixed-frequency data, factor models, MIDAS 
JEL:  C53 E37 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7572&r=for 
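The pooling idea the abstract advocates is, at its simplest, an equal-weight average over the forecasts of many candidate specifications. A minimal sketch with three deliberately simple "models" (the paper's pool is far richer, spanning factor and MIDAS specifications):

```python
def mean_forecast(y):
    return sum(y) / len(y)

def naive_forecast(y):
    return y[-1]

def ar1_forecast(y):
    """One-step forecast from an AR(1) fitted on demeaned data (OLS slope)."""
    m = sum(y) / len(y)
    z = [v - m for v in y]
    num = sum(z[t] * z[t - 1] for t in range(1, len(z)))
    den = sum(v * v for v in z[:-1])
    return m + (num / den) * z[-1]

def pooled_forecast(y, models):
    """Equal-weight pooling: average the forecasts of all candidate models
    instead of committing to any single specification."""
    f = [model(y) for model in models]
    return sum(f) / len(f)

# Illustrative quarterly growth series (values invented).
y = [1.0, 1.2, 0.8, 1.1, 0.9, 1.3, 1.0, 1.2]
models = [mean_forecast, naive_forecast, ar1_forecast]
pool = pooled_forecast(y, models)
```

The pooled forecast always lies between the best and worst individual forecast, which is the hedge against misspecification that drives the paper's result.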
By:  Zagaglia, Paolo (Dept. of Economics, Stockholm University) 
Abstract:  This paper studies the forecasting performance of the general equilibrium model of bond yields of Marzo, Söderström and Zagaglia (2008), where long-term interest rates are an integral part of the monetary transmission mechanism. The model is estimated with Bayesian methods on Euro area data. I investigate the out-of-sample predictive performance across different model specifications, including that of De Graeve, Emiris and Wouters (2009). The accuracy of point forecasts is evaluated through both univariate and multivariate accuracy measures. I show that taking into account the impact of the term structure of interest rates on the macroeconomy generates superior out-of-sample forecasts for real variables such as output, for inflation, and for bond yields. 
Keywords:  Monetary policy; yield curve; general equilibrium; Bayesian estimation 
JEL:  E43 E44 E52 
Date:  2009–05–20 
URL:  http://d.repec.org/n?u=RePEc:hhs:sunrpe:2009_0014&r=for 
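The univariate and multivariate accuracy measures mentioned above can be sketched as follows. RMSE is the standard univariate choice; the log-determinant of the mean squared forecast error matrix is one common multivariate statistic, shown here for two jointly forecast variables — not necessarily the exact measure used in the paper.

```python
import math

def rmse(errors):
    """Univariate accuracy: root mean squared forecast error."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def logdet_msfe(E):
    """Multivariate accuracy for two jointly forecast variables: the
    log-determinant of the mean squared forecast error matrix, which
    penalises errors in either variable and rewards errors that offset."""
    n = len(E)
    m = [[sum(E[i][a] * E[i][b] for i in range(n)) / n for b in range(2)]
         for a in range(2)]
    return math.log(m[0][0] * m[1][1] - m[0][1] * m[1][0])

u = rmse([3.0, 4.0])                        # sqrt(12.5)
v = logdet_msfe([[1.0, 0.0], [0.0, 1.0]])   # log(0.25)
```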
By:  Yang K. Lu (Boston University); Pierre Perron (Boston University) 
Abstract:  We consider the estimation of a random level shift model for which the series of interest is the sum of a short memory process and a jump or level shift component. For the latter component, we specify the commonly used simple mixture model such that the component is the cumulative sum of a process which is 0 with some probability (1 − α) and is a random variable with probability α. Our estimation method transforms such a model into a linear state space with mixture of normal innovations, so that an extension of the Kalman filter algorithm can be applied. We apply this random level shift model to the logarithm of absolute returns for the S&P 500, AMEX, Dow Jones and NASDAQ stock market return indices. Our point estimates imply few level shifts for all series. But once these are taken into account, there is little evidence of serial correlation in the remaining noise and, hence, no evidence of long-memory. Once the estimated shifts are introduced to a standard GARCH model applied to the returns series, any evidence of GARCH effects disappears. We also produce rolling out-of-sample forecasts of squared returns. In most cases, our simple random level shifts model clearly outperforms a standard GARCH(1,1) model and, in many cases, it also provides better forecasts than a fractionally integrated GARCH model. 
Keywords:  structural change, forecasting, GARCH models, long-memory 
JEL:  C22 
Date:  2008–09 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008012&r=for 
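The mechanism behind the paper's headline result — rare level shifts mimicking long-memory — is easy to reproduce by simulation. Below is a stdlib-Python sketch of the data-generating process in the abstract (not the authors' Kalman-filter estimation method); the shift probability and shock sizes are illustrative choices, not the paper's estimates.

```python
import random

def simulate_rls(T, alpha, seed=0):
    """Simulate the random level shift model: y_t = mu_t + e_t, where
    mu_t = mu_{t-1} + pi_t * eta_t and pi_t is Bernoulli(alpha)."""
    rng = random.Random(seed)
    mu, y, shifts = 0.0, [], 0
    for _ in range(T):
        if rng.random() < alpha:        # rare level shift
            mu += rng.gauss(0, 1)
            shifts += 1
        y.append(mu + rng.gauss(0, 1))
    return y, shifts

def acf(y, lag):
    """Sample autocorrelation at a given lag."""
    n = len(y)
    m = sum(y) / n
    num = sum((y[t] - m) * (y[t - lag] - m) for t in range(lag, n))
    den = sum((v - m) ** 2 for v in y)
    return num / den

y, shifts = simulate_rls(2000, 0.01)
# A handful of shifts is enough to keep the sample autocorrelation high
# at long lags, the signature usually attributed to long-memory.
rho = acf(y, 50)
```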
By:  Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian 
Abstract:  This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g., monthly and quarterly series. MIDAS leads to parsimonious models based on exponential lag polynomials for the coefficients, whereas the MF-VAR does not restrict the dynamics and therefore can suffer from the curse of dimensionality. But if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative ranking is better evaluated empirically. In this paper, we compare their performance in a relevant case for policy making, i.e., nowcasting and forecasting quarterly GDP growth in the euro area, on a monthly basis and using a set of 20 monthly indicators. It turns out that the two approaches are more complementary than substitutes, since the MF-VAR tends to perform better for longer horizons, whereas MIDAS performs better for shorter horizons. 
Keywords:  nowcasting, mixed-frequency data, mixed-frequency VAR, MIDAS 
JEL:  C53 E37 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7576&r=for 
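The exponential lag polynomial that makes MIDAS parsimonious can be written in a few lines: a whole distribution of monthly lag weights is governed by just two parameters. The parameter values below are illustrative, not estimates from the paper.

```python
import math

def exp_almon_weights(theta1, theta2, K):
    """Exponential Almon lag polynomial used in MIDAS regressions:
    w_j is proportional to exp(theta1*j + theta2*j^2), normalized to sum
    to one, so K monthly lags enter the quarterly equation through only
    two free parameters instead of K."""
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(1, K + 1)]
    s = sum(raw)
    return [r / s for r in raw]

# With theta2 < 0 the weights on distant monthly lags fade out smoothly.
w = exp_almon_weights(0.05, -0.05, 12)
```

An unrestricted MF-VAR would instead estimate every one of those lag coefficients freely, which is exactly the trade-off between parsimony and flexibility the abstract describes.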
By:  Zhongjun Qu (Boston University); Pierre Perron (Boston University) 
Abstract:  Empirical findings related to the time series properties of stock returns volatility indicate autocorrelations that decay slowly at long lags. In light of this, several long-memory models have been proposed. However, the possibility of level shifts has been advanced as a possible explanation for the appearance of long-memory and there is growing evidence suggesting that it may be an important feature of stock returns volatility. Nevertheless, it remains a conjecture that a model incorporating random level shifts in variance can explain the data well and produce reasonable forecasts. We show that a very simple stochastic volatility model incorporating both a random level shift and a short-memory component indeed provides a better in-sample fit of the data and produces forecasts that are no worse, and sometimes better, than standard stationary short- and long-memory models. We use a Bayesian method for inference and develop algorithms to obtain the posterior distributions of the parameters and the smoothed estimates of the two latent components. We apply the model to daily S&P 500 and NASDAQ returns over the period 1980.1–2005.12. Although the occurrence of a level shift is rare, about once every two years, the level shift component clearly contributes most to the total variation in the volatility process. The half-life of a typical shock from the short-memory component is very short, on average between 8 and 14 days. We also show that, unlike common stationary short- or long-memory models, our model is able to replicate key features of the data. For the NASDAQ series, it forecasts better than a standard stochastic volatility model, and for the S&P 500 index, it performs equally well. 
Keywords:  Bayesian estimation, Structural change, Forecasting, Long-memory, State-space models, Latent process 
JEL:  C11 C12 C53 G12 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008007&r=for 
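The two-component log-volatility structure in the abstract, and the half-life figure it quotes, can be sketched as follows. The simulation is a stdlib-Python illustration of the data-generating process, not the authors' Bayesian estimation; all parameter values (shift probability of about once per two years of daily data, AR coefficient, shock scales) are illustrative assumptions.

```python
import math, random

def half_life(phi):
    """Half-life of a shock to an AR(1) component with coefficient phi:
    the number of periods until the shock's effect is halved."""
    return math.log(0.5) / math.log(phi)

def simulate_sv(T, alpha, phi, seed=0):
    """Log-volatility = rare random level shifts + stationary AR(1)
    short-memory part; returns are y_t = exp(h_t / 2) * z_t."""
    rng = random.Random(seed)
    mu, c, y = 0.0, 0.0, []
    for _ in range(T):
        if rng.random() < alpha:          # rare shift in the volatility level
            mu += rng.gauss(0, 0.5)
        c = phi * c + rng.gauss(0, 0.2)   # short-memory component
        y.append(math.exp((mu + c) / 2) * rng.gauss(0, 1))
    return y

# phi = 0.92 gives a half-life of roughly 8 days, at the bottom of the
# 8-to-14-day range the abstract reports for the short-memory component.
hl = half_life(0.92)
# alpha = 0.002 is roughly one shift per two years of daily observations.
y = simulate_sv(2500, 0.002, 0.92)
```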
By:  Lence, Sergio H.; Wu, Jingtao; Lawrence, John 
Keywords:  Agribusiness, Agricultural Finance 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:ags:nc1007:48135&r=for 