nep-for New Economics Papers
on Forecasting
Issue of 2011‒08‒09
ten papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting Under Structural Break Uncertainty By Jing Tian; Heather M. Anderson
  2. Predicting Short-Term Interest Rates: Does Bayesian Model Averaging Provide Forecast Improvement? By Chew Lian Chua; Sandy Suardi; Sarantis Tsiaplias
  3. Understanding and forecasting aggregate and disaggregate price dynamics By Colin Bermingham; Antonello D’Agostino
  4. Inflation Forecast Contracts By Hans Gersbach; Volker Hahn
  5. Detection of Crashes and Rebounds in Major Equity Markets By Wanfeng Yan; Reda Rebib; Ryan Woodard; Didier Sornette
  6. Large Vector Auto Regressions By Song Song; Peter J. Bickel
  7. The importance of time series extrapolation for macroeconomic expectations By Michael W.M. Roos; Ulrich Schmidt
  8. Quantile Forecasts of Financial Returns Using Realized GARCH Models By Toshiaki Watanabe
  9. Hedonic Predicted House Price Indices Using Time-Varying Hedonic Models with Spatial Autocorrelation By Alicia Rambaldi; Prasada Rao
  10. Estimation and Inference in Predictive Regressions By Eiji Kurozumi; Kohei Aono

  1. By: Jing Tian; Heather M. Anderson
    Abstract: This paper proposes two new weighting schemes that average forecasts using different estimation windows to account for structural change. We let the weights reflect the probability of each time point being the most-recent break point, and we use the reversed ordered Cusum test statistics to capture this intuition. The second weighting method simply imposes heavier weights on those forecasts that use more recent information. The proposed combination forecasts are evaluated using Monte Carlo techniques, and we compare them with forecasts based on other methods that try to account for structural change, including average forecasts weighted by past forecasting performance and techniques that first estimate a break point and then forecast using the post break data. Simulation results show that our proposed weighting methods often outperform the others in the presence of structural breaks. An empirical application based on a NAIRU Phillips curve model for the United States indicates that it is possible to outperform the random walk forecasting model when we employ forecasting methods that account for break uncertainty.
    Keywords: Forecasting with structural breaks, parameter shifts, break uncertainty, structural break tests, choice of estimation sample, forecast combinations, NAIRU Phillips curve.
    JEL: C22 C53 E37
    Date: 2011–07
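As a rough illustration of the second weighting idea described in the abstract (heavier weights on forecasts that use more recent information), the sketch below combines forecasts from different estimation windows using linear-in-recency weights. The linear weighting rule and the function name are assumptions for illustration only; the authors' actual schemes are defined in the paper.

```python
import numpy as np

def recency_weighted_forecast(forecasts, window_starts):
    """Combine forecasts from different estimation windows, giving
    heavier weight to windows that start more recently.
    Illustrative linear-in-recency weights; the paper's exact
    weighting schemes (e.g. the reversed ordered Cusum-based one)
    are more elaborate."""
    starts = np.asarray(window_starts, dtype=float)
    weights = starts - starts.min() + 1.0   # more recent start -> larger weight
    weights /= weights.sum()                # normalise to sum to one
    return float(np.dot(weights, forecasts))

# Three forecasts from estimation windows starting at t = 0, 50, 90
combined = recency_weighted_forecast([2.0, 2.4, 3.0], [0, 50, 90])
```

With equal window starts the rule reduces to a simple average, a useful sanity check for any combination scheme.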
  2. By: Chew Lian Chua (Melbourne Institute of Applied Economic and Social Research, The University of Melbourne); Sandy Suardi (School of Economics and Finance, La Trobe University); Sarantis Tsiaplias (KPMG, Australia)
    Abstract: This paper examines the forecasting qualities of Bayesian Model Averaging (BMA) over a set of single-factor models of short-term interest rates. Using weekly and high-frequency data for the one-month Eurodollar rate, BMA produces predictive likelihoods that are considerably better than those of the majority of the short-rate models, but marginally worse than those of the best model in each dataset. We observe a preference for models incorporating volatility clustering for weekly data and for simpler short-rate models for high-frequency data. This is contrary to the popular belief that a diffusion process with volatility clustering best characterizes the short rate.
    Keywords: Bayesian model averaging, out-of-sample forecasts
    JEL: C11 C53
    Date: 2011–01
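The core BMA step the abstract refers to can be sketched as follows: model weights proportional to (predictive) likelihoods, then a weighted average of the individual model forecasts. This is a generic sketch assuming equal prior model probabilities, not the paper's full estimation procedure; the function names are illustrative.

```python
import numpy as np

def bma_weights(log_predictive_likelihoods):
    """Posterior model weights from log predictive likelihoods,
    assuming equal prior model probabilities. Generic BMA sketch,
    not the paper's estimation procedure."""
    lpl = np.asarray(log_predictive_likelihoods, dtype=float)
    lpl = lpl - lpl.max()          # subtract max to stabilise the exponentials
    w = np.exp(lpl)
    return w / w.sum()

def bma_forecast(model_forecasts, log_predictive_likelihoods):
    """Model-averaged forecast: weights times individual forecasts."""
    w = bma_weights(log_predictive_likelihoods)
    return float(np.dot(w, model_forecasts))
```

A model whose predictive likelihood dominates the others receives essentially all the weight, which is why BMA tracks the best single model so closely in the paper's results.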
  3. By: Colin Bermingham (Central Bank of Ireland, Dame Street, Dublin 2, Ireland.); Antonello D’Agostino (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany and Central Bank of Ireland.)
    Abstract: The issue of forecast aggregation is to determine whether it is better to forecast a series directly or instead construct forecasts of its components and then sum these component forecasts. Notwithstanding some underlying theoretical results, it is generally accepted that forecast aggregation is an empirical issue. Empirical results in the literature often go unexplained. This leaves forecasters in the dark when confronted with the option of forecast aggregation. We take our empirical exercise a step further by considering the underlying issues in more detail. We analyse two price datasets, one for the United States and one for the Euro Area, which have distinctive dynamics and provide a guide to model choice. We also consider multiple levels of aggregation for each dataset. The models include an autoregressive model, a factor augmented autoregressive model, a large Bayesian VAR and a time-varying model with stochastic volatility. We find that once the appropriate model has been found, forecast aggregation can significantly improve forecast performance. These results are robust to the choice of data transformation.
    Keywords: Aggregation, Forecasting, Inflation.
    JEL: E17 E31 C11 C38
    Date: 2011–08
  4. By: Hans Gersbach (ETH Zurich, Switzerland); Volker Hahn (ETH Zurich, Switzerland)
    Abstract: We introduce a new type of incentive contract for central bankers: inflation forecast contracts, which make central bankers’ remunerations contingent on the precision of their inflation forecasts. We show that such contracts enable central bankers to influence inflation expectations more effectively, thus facilitating more successful stabilization of current inflation. Inflation forecast contracts improve the accuracy of inflation forecasts, but have adverse consequences for output. On balance, paying central bankers according to their forecasting performance improves welfare.
    Keywords: central banks, incentive contracts, transparency, inflation targeting, inflation forecast targeting, intermediate targets
    JEL: E58
    Date: 2011–07
  5. By: Wanfeng Yan; Reda Rebib; Ryan Woodard; Didier Sornette
    Abstract: Financial markets are well known for their dramatic dynamics and consequences that affect much of the world's population. Consequently, much research has aimed at understanding, identifying and forecasting crashes and rebounds in financial markets. The Johansen-Ledoit-Sornette (JLS) model provides an operational framework to understand and diagnose financial bubbles from rational expectations and was recently extended to negative bubbles and rebounds. Using the JLS model, we develop an alarm index based on an advanced pattern recognition method with the aim of detecting bubbles and performing forecasts of market crashes and rebounds. Testing our methodology on 10 major global equity markets, we show quantitatively that our developed alarm performs much better than chance in forecasting market crashes and rebounds. We use the derived signal to develop elementary trading strategies that produce statistically better performances than a simple buy and hold strategy.
    Date: 2011–07
  6. By: Song Song; Peter J. Bickel
    Abstract: One popular approach for nonstructural economic and financial forecasting is to include a large number of economic and financial variables, which has been shown to lead to significant forecasting improvements, for example by dynamic factor models. A challenging issue is to determine which variables and (their) lags are relevant, especially when there is a mixture of serial correlation (temporal dynamics), a high-dimensional (spatial) dependence structure and a moderate sample size (relative to dimensionality and lags). To this end, an integrated solution that addresses these three challenges simultaneously is appealing. We study large vector auto regressions here with three types of estimates. Our method treats each variable's own lags differently from other variables' lags, distinguishes various lags over time, and is able to select the variables and lags simultaneously. We first show the consequences of using a Lasso-type estimate directly for time series without considering the temporal dependence. In contrast, our proposed method can still produce an estimate as efficient as an oracle under such scenarios. The tuning parameters are chosen via a data-driven "rolling scheme" method to optimize the forecasting performance. A macroeconomic and financial forecasting problem is considered to illustrate its superiority over existing estimators.
    Keywords: Time Series, Vector Auto Regression, Regularization, Lasso, Group Lasso, Oracle estimator
    JEL: C13 C14 C32 E30 E40 G10
    Date: 2011–08
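To make the setting concrete: a VAR(p) regression stacks lags of all variables into a wide design matrix, and the naive baseline the abstract warns about applies a plain Lasso to it, ignoring the temporal and group structure. The sketch below shows that baseline only (lag-matrix construction plus textbook coordinate-descent Lasso); it is not the authors' proposed estimator, and the function names are illustrative.

```python
import numpy as np

def lag_design(Y, p):
    """Stack p lags of a multivariate series Y (T x k) into a VAR(p)
    design matrix: row t holds [Y[t-1], ..., Y[t-p]] flattened.
    Returns the design matrix and the aligned targets Y[p:]."""
    T, k = Y.shape
    X = np.hstack([Y[p - j - 1:T - j - 1] for j in range(p)])
    return X, Y[p:]

def lasso_cd(X, y, lam, n_iter=200):
    """Textbook coordinate-descent Lasso for one VAR equation.
    This is the naive estimator that ignores temporal dependence
    and lag-group structure -- the baseline the paper improves on."""
    n, d = X.shape
    beta = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            # partial residual excluding coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # soft-thresholding update
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta
```

With lam = 0 the update converges to ordinary least squares on a full-rank design, which is a quick way to check the implementation before turning on shrinkage.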
  7. By: Michael W.M. Roos; Ulrich Schmidt
    Abstract: This paper presents a simple experiment on how laypeople form macroeconomic expectations. Subjects have to forecast inflation and GDP growth. By varying the information provided in different treatments, we can assess the importance of historical time-series information versus information acquired outside the experimental setting, such as knowledge of expert forecasts. It turns out that the availability of historical data has a dominant impact on expectations and wipes out the influence of outside-lab information completely. Consequently, backward-looking behavior can be identified unambiguously as a decisive factor in expectation formation.
    Keywords: expectations, macroeconomic experiment, use of information, inflation forecasts
    JEL: D83 D84 E37
    Date: 2011–08
  8. By: Toshiaki Watanabe
    Abstract: This article applies the realized GARCH model, which incorporates realized volatility (RV) into the GARCH model, to quantile forecasts of financial returns such as Value-at-Risk and expected shortfall. This model has certain advantages in the application to quantile forecasts because it can adjust the bias of RV caused by microstructure noise and non-trading hours and enables us to estimate the parameters in the return distribution jointly with the other parameters. Student's t- and skewed Student's t-distributions as well as the normal distribution are used for the return distribution. The EGARCH model is used for comparison. Main results for the S&P 500 stock index are: (1) the realized GARCH model with the skewed Student's t-distribution performs better than that with the normal and Student's t-distributions and the EGARCH model using the daily returns only, and (2) the performance does not improve if the realized kernel, which takes account of microstructure noise, is used instead of the plain realized volatility, implying that the realized GARCH model can adjust the bias of RV caused by microstructure noise.
    Keywords: Expected shortfall, GARCH, Realized volatility, Skewed Student's t-distribution, Value-at-Risk
    JEL: C52 C53
    Date: 2011–07
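The quantile-forecast step the abstract describes is simple once a volatility forecast is in hand: Value-at-Risk is the alpha-quantile of the conditional return distribution, and expected shortfall is the mean beyond it. The sketch below uses a normal distribution for tractability (the paper finds a skewed Student's t fits better); function names are illustrative and the volatility input would come from a realized GARCH or similar model.

```python
import math
from statistics import NormalDist

def value_at_risk(mu, sigma, alpha=0.01):
    """One-period VaR as the alpha-quantile of the conditional return
    distribution. Normal assumed here for simplicity; the paper uses
    Student's t and skewed Student's t distributions."""
    return mu + sigma * NormalDist().inv_cdf(alpha)

def expected_shortfall_normal(mu, sigma, alpha=0.01):
    """Closed-form expected shortfall under normality:
    ES = mu - sigma * phi(z_alpha) / alpha, with phi the standard
    normal density and z_alpha its alpha-quantile."""
    z = NormalDist().inv_cdf(alpha)
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return mu - sigma * phi / alpha

# 5% VaR and ES for a zero-mean return with forecast volatility 1
var_5 = value_at_risk(0.0, 1.0, 0.05)
es_5 = expected_shortfall_normal(0.0, 1.0, 0.05)
```

Swapping the normal quantile for a (skewed) t quantile changes only the `inv_cdf` call, which is why joint estimation of the distribution's shape parameters matters for tail forecasts.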
  9. By: Alicia Rambaldi (School of Economics, The University of Queensland); Prasada Rao (School of Economics, The University of Queensland)
    Abstract: Hedonic housing price indices are computed from estimated hedonic pricing models. The commonly used time dummy hedonic model and the rolling window hedonic model fail to account for changing consumer preferences over hedonic characteristics, and typically these models do not account for the presence of spatial correlation in prices reflecting the role of locational characteristics. This paper develops a class of models with time-varying hedonic coefficients and spatially correlated errors, provides an assessment of their predictive performance compared to the commonly used hedonic models, and constructs and compares the corresponding price index series. Alternative weighting systems, plutocratic versus democratic, are considered for the class of hedonic imputed price indices. Accounting for seasonality in house sales data, monthly chained indices and annual chained indices based on averages of year-on-year monthly indices are presented. The empirical results are based on property sales data for Brisbane, Australia over the period 1985 to 2005. On the basis of the root mean square prediction error criterion, the time-varying parameter model with spatial errors is found to be the best-performing model and the rolling-window model the worst-performing.
    Date: 2011
  10. By: Eiji Kurozumi; Kohei Aono
    Abstract: This paper proposes new point estimates for predictive regressions. Our estimates are easily obtained by the least squares and the instrumental variable methods. Our estimates, called the plug-in estimates, have nice asymptotic properties such as median unbiasedness and the approximated normality of the associated t-statistics. In addition, the plug-in estimates are shown to have good finite sample properties via Monte Carlo simulations. Using the new estimates, we investigate U.S. stock returns and find that some variables, which have not been statistically detected as useful predictors in the literature, are able to predict stock returns. Because of their nice properties, our methods complement the existing statistical tests for predictability to investigate the relations between stock returns and economic variables.
    Keywords: unit root, near unit root, bias, median unbiased, stock return
    JEL: C13 C22
    Date: 2011–05

This nep-for issue is ©2011 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP website. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.