nep-for New Economics Papers
on Forecasting
Issue of 2017‒01‒08
seven papers chosen by
Rob J Hyndman
Monash University

  1. Forecast Combination for Euro Area Inflation - A Cure in Times of Crisis? By Kirstin Hubrich; Frauke Skudelny
  2. Predicting Economic Recessions Using Machine Learning Algorithms By Rickard Nyman; Paul Ormerod
  3. Regime Shifts in Excess Stock Return Predictability: An Out-of-Sample Portfolio Analysis By Giulia Dal Pra; Massimo Guidolin; Manuela Pedio; Fabiola Vasile
  4. Bayesian Poisson log-bilinear models for mortality projections with multiple populations By Katrien Antonio; Anastasios Bardoutsos; Wilbert Ouburg
  5. Alternative Bayesian compression in Vector Autoregressions and related models By Mike G. Tsionas
  6. A dynamic Nelson-Siegel model with forward-looking indicators for the yield curve in the US By Fausto Vieira; Fernando Chague; Marcelo Fernandes
  7. Smets and Wouters model estimated with skewed shocks - empirical study of forecasting properties By Grzegorz Koloch

  1. By: Kirstin Hubrich; Frauke Skudelny
    Abstract: The period of extraordinary volatility in euro area headline inflation starting in 2007 raised the question whether forecast combination methods can be used to hedge against bad forecast performance of single models during such periods and provide more robust forecasts. We investigate this issue for forecasts from a range of short-term forecasting models. Our analysis shows that there is considerable variation in the relative performance of the different models over time. To take that into account we suggest employing performance-based forecast combination methods, in particular one with more weight on recent forecast performance. We compare such an approach with equal-weight forecast combination, which has been found to outperform more sophisticated forecast combination methods in the past, and investigate whether it can improve forecast accuracy over the single best model. The time-varying weights also indicate how much each model, and hence each economic interpretation of the forecast, contributes to the combination. The combination methods are evaluated for HICP headline inflation and HICP excluding food and energy. We investigate how forecast accuracy of the combination methods differs between pre-crisis times, the period after the global financial crisis, and the full evaluation period including the global financial crisis with its extraordinary volatility in inflation. Overall, we find that, first, forecast combination helps hedge against bad forecast performance and, second, that performance-based weighting tends to outperform simple averaging.
    Keywords: Forecasting ; Euro area inflation ; Forecast combinations ; Forecast evaluation
    JEL: C32 C52 C53 E31 E37
    Date: 2016–08
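The performance-based weighting idea described in the abstract above can be sketched in a few lines. This is a minimal illustration only: the function names, the discount factor, and the toy numbers are our assumptions, not taken from the paper.

```python
import numpy as np

def performance_weights(errors, delta=0.9):
    """Inverse discounted-MSE weights for combining M model forecasts.

    errors: (T, M) array of past forecast errors for M models.
    delta:  discount factor; delta < 1 puts more weight on recent errors.
    """
    T, M = errors.shape
    discounts = delta ** np.arange(T - 1, -1, -1)        # newest error weighted most
    dmse = (discounts[:, None] * errors**2).sum(axis=0)  # discounted MSE per model
    inv = 1.0 / dmse                                     # better models get larger weight
    return inv / inv.sum()                               # normalise to sum to one

def combine(forecasts, weights):
    """Weighted combination of the M models' current forecasts."""
    return float(np.dot(weights, forecasts))

# Toy example: model 2 has been twice as accurate, so it receives more weight.
errors = np.column_stack([np.full(40, 1.0), np.full(40, 0.5)])
w = performance_weights(errors)          # -> [0.2, 0.8]
combined = combine(np.array([1.8, 2.1]), w)
```

With equal-weight combination the two forecasts would simply be averaged; the discounted-MSE weights instead tilt the combination toward the recently better model, which is the hedging property the paper evaluates.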
  2. By: Rickard Nyman; Paul Ormerod
    Abstract: Even at the beginning of 2008, the economic recession of 2008/09 was not being predicted. The failure to predict recessions is a persistent theme in economic forecasting. The Survey of Professional Forecasters (SPF) provides data on predictions made for the growth of total output, GDP, in the United States for one, two, three and four quarters ahead since the end of the 1960s. Over a three quarters ahead horizon, the mean prediction made for GDP growth has never been negative over this period. The correlation between the mean SPF three quarters ahead forecast and the data is very low, and over the most recent 25 years is not significantly different from zero. Here, we show that the machine learning technique of random forests has the potential to give early warning of recessions. We use a small set of explanatory variables from financial markets which would have been available to a forecaster at the time of making the forecast. We train the algorithm over the 1970Q2-1990Q1 period, and make predictions one, three and six quarters ahead. We then re-train over 1970Q2-1990Q2 and make a further set of predictions, and so on. We did not attempt any optimisation of predictions, using only the default input parameters of the algorithm as implemented in the R package we downloaded. We compare the predictions made from 1990 to the present with the actual data. One quarter ahead, the algorithm is not able to improve on the SPF predictions. Three and six quarters ahead, the correlations between actual and predicted are low, but they are very significantly different from zero. Although the timing is slightly wrong, a serious downturn in the first half of 2009 could have been predicted six quarters ahead in late 2007. The algorithm never predicts a recession when one did not occur. We obtain even stronger results with random forest machine learning techniques in the case of the United Kingdom.
    Date: 2017–01
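The expanding-window retraining scheme described above (train, predict, extend the sample by one quarter, retrain) can be sketched as follows. The paper uses R with default parameters; this is an illustrative Python equivalent with scikit-learn defaults, and the toy data, window sizes, and target construction are our assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def expanding_window_predictions(X, y, initial, horizon):
    """Expanding-window out-of-sample predictions, retraining each quarter.

    X: (T, k) financial-market predictors known at forecast time.
    y: (T,) binary indicator of recession `horizon` quarters after each row of X.
    """
    preds = []
    for t in range(initial, len(X) - horizon):
        # Train only on pairs (X[i], y[i + horizon]) available at time t:
        # no look-ahead in either the predictors or the targets.
        model = RandomForestClassifier(random_state=0)  # default parameters throughout
        model.fit(X[:t], y[horizon:t + horizon])
        preds.append(model.predict(X[t:t + 1])[0])
    return np.array(preds)

# Toy quarterly data: 120 "quarters", 3 financial predictors, recessions
# flagged whenever the first predictor is sufficiently negative.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))
y = (X[:, 0] < -0.5).astype(int)
preds = expanding_window_predictions(X, y, initial=80, horizon=3)
```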
  3. By: Giulia Dal Pra; Massimo Guidolin; Manuela Pedio; Fabiola Vasile
    Abstract: We analyze the recursive, out-of-sample performance of asset allocation decisions based on financial ratio-predictability under single-state linear and regime-switching models. We adopt both a statistical perspective to analyze whether models based on the dividend-price, earning-price, and book-to-market ratios can forecast excess equity returns, and an economic approach that turns predictions into portfolio strategies. The strategies consist of a portfolio switching approach, a mean-variance framework, and a long-run dynamic model. We report an interesting disconnect between a statistical perspective, whereby the ratios yield a modest forecasting power, and a portfolio approach, by which a moderate predictability is occasionally sufficient to yield significant portfolio outperformance, especially before transaction costs and when regimes are taken into account. However, even when regimes are considered, predictability gives high payoffs only to long-horizon, highly risk-averse asset managers. Moreover, different strategies deliver different performance rankings across predictors. Finally, we find evidence inconsistent with the notion that increasing sophistication in the way portfolio decisions are modeled delivers a superior performance.
    Keywords: predictability, Markov switching, economic value, optimal portfolio choice
    Date: 2016
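The simplest of the three strategies mentioned above, the portfolio switching approach, can be sketched as a sign rule on the predicted excess return. The function name, the zero threshold, and the toy numbers below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def switching_returns(pred_excess, realized_excess, rf):
    """Portfolio switching rule: hold equities when the predictive regression
    signals a positive excess return, otherwise hold the risk-free asset."""
    in_equity = pred_excess > 0.0
    return np.where(in_equity, rf + realized_excess, rf)

# Toy example over three periods: two correct calls, one wrong call.
pred = np.array([0.02, -0.01, 0.03])   # predicted excess returns
real = np.array([0.05, -0.04, -0.02])  # realized excess returns
rf = np.array([0.01, 0.01, 0.01])      # risk-free rate
strat = switching_returns(pred, real, rf)  # -> [0.06, 0.01, -0.01]
```

The paper's point is precisely that even a signal this coarse, if its forecasting power is moderate, can generate economically significant outperformance before transaction costs.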
  4. By: Katrien Antonio; Anastasios Bardoutsos; Wilbert Ouburg
    Abstract: Life insurers, pension funds, health care providers and social security institutions face increasing expenses due to continuing improvements of mortality rates. The actuarial and demographic literature has introduced a myriad of (deterministic and stochastic) models to forecast mortality rates of single populations. This paper presents a Bayesian analysis of two related multi-population mortality models of log-bilinear type, designed for two or more populations. Using a larger set of data, multi-population mortality models allow joint modelling and projection of mortality rates by identifying characteristics shared by all subpopulations as well as sub-population specific effects on mortality. This is important when modelling and forecasting mortality of males and females, regions within a country and when dealing with index-based longevity hedges. Our first model is inspired by the two factor Lee & Carter model of Renshaw and Haberman (2003) and the common factor model of Carter and Lee (1992). The second model is the augmented common factor model of Li and Lee (2005). This paper approaches both models in a statistical way, using a Poisson distribution for the number of deaths at a certain age and in a certain time period. Moreover, we use Bayesian statistics to calibrate the models and to produce mortality forecasts. We develop the technicalities necessary for Markov Chain Monte Carlo (MCMC) simulations and provide software implementation (in R) for the models discussed in the paper. Key benefits of this approach are multiple. We jointly calibrate the Poisson likelihood for the number of deaths and the time series models imposed on the time-dependent parameters, we enable full allowance for parameter uncertainty, and we are able to handle missing data as well as small sample populations. We compare and contrast results from both models to the results obtained with a frequentist single population approach and a least squares estimation of the augmented common factor model.
    Keywords: projected life tables, multi-population stochastic mortality models, Bayesian statistics, Poisson regression, one factor Lee & Carter model, two factor Lee & Carter model, Li & Lee model, augmented common factor model
    Date: 2015
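For reference, in the log-bilinear notation the abstract refers to, the Poisson death-count model, the single-population Lee-Carter specification, and the Li & Lee augmented common factor model can be written as follows (our notation, standard in this literature, with i indexing sub-populations):

```latex
\begin{align*}
D_{x,t}^{(i)} &\sim \text{Poisson}\!\left(E_{x,t}^{(i)}\,\mu_{x,t}^{(i)}\right), \\
\ln \mu_{x,t} &= \alpha_x + \beta_x \kappa_t
  && \text{(single-population Lee--Carter)}, \\
\ln \mu_{x,t}^{(i)} &= A_x + B_x K_t + \alpha_x^{(i)} + \beta_x^{(i)} \kappa_t^{(i)}
  && \text{(augmented common factor, Li \& Lee)},
\end{align*}
```

where D and E are death counts and exposures at age x in year t, the pair (A_x, K_t) captures the mortality trend common to all sub-populations, and the pair (alpha, beta, kappa) with superscript (i) captures the sub-population specific deviations.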
  5. By: Mike G. Tsionas (Athens University of Economics and Business)
    Abstract: In this paper we reconsider large Bayesian Vector Autoregressions (BVAR) from the point of view of Bayesian Compressed Regression (BCR). First, we show that there are substantial gains in terms of out-of-sample forecasting by treating the problem as an error-in-variables formulation and estimating the compression matrix instead of using random draws. As computations can be efficiently organized around a standard Gibbs sampler, timings and computational complexity are not affected severely. Second, we extend the Multivariate Autoregressive Index model to the BCR context and show that we have, again, gains in terms of out-of-sample forecasting. The new techniques are used in U.S. data featuring medium-size, large and huge BVARs.
    Keywords: Bayesian Vector Autoregressions; Bayesian Compressed Regression; Error-in-Variables; Forecasting; Multivariate Autoregressive Index model.
    JEL: C11 C13
    Date: 2016–11
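The compressed-regression baseline that the paper improves on (drawing the compression matrix randomly rather than estimating it) can be sketched in a few lines. Function names, the scaling of the projection, and the toy data are our assumptions for illustration.

```python
import numpy as np

def compressed_regression(X, y, m, rng):
    """Project a high-dimensional regressor matrix onto m random directions,
    run least squares in the compressed space, and map the coefficients back.
    This is the random-draw BCR baseline, not the estimated-compression
    approach the paper proposes."""
    T, k = X.shape
    Phi = rng.normal(size=(m, k)) / np.sqrt(k)  # random compression matrix
    Z = X @ Phi.T                               # (T, m) compressed regressors
    beta_c, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Phi.T @ beta_c                       # coefficients in the original space

# Toy example: 50 regressors, only the first 3 matter, compress to m = 10.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
beta_true = np.zeros(50)
beta_true[:3] = 1.0
y = X @ beta_true + 0.1 * rng.normal(size=200)
beta_hat = compressed_regression(X, y, m=10, rng=rng)
```

In a large BVAR the same device is applied equation by equation to the stacked lags; the paper's contribution is to treat Phi as an unknown parameter inside a Gibbs sampler instead of averaging over random draws.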
  6. By: Fausto Vieira; Fernando Chague, Marcelo Fernandes
    Abstract: This paper proposes a Factor-Augmented Dynamic Nelson-Siegel (FADNS) model to predict the yield curve in the US that relies on a large data set of weekly financial and macroeconomic variables. The FADNS model significantly improves interest rate forecasts relative to the extant models in the literature. For longer horizons, it beats autoregressive alternatives, with a reduction in mean absolute error of up to 40%. For shorter horizons, it offers a good challenge to autoregressive forecasting models, outperforming them for the 7- and 10-year yields. The out-of-sample analysis shows that the good performance comes mostly from the forward-looking nature of the variables we employ. Including them reduces the mean absolute error by 5 basis points on average with respect to models that reflect only past macroeconomic events.
    Date: 2016–12–07
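For reference, the cross-sectional equation of the dynamic Nelson-Siegel model that the FADNS specification builds on is (standard notation; the FADNS extension augments the dynamics of the beta factors with the forward-looking macro-financial variables mentioned in the abstract):

```latex
y_t(\tau) \;=\; \beta_{1t}
  \;+\; \beta_{2t}\,\frac{1 - e^{-\lambda \tau}}{\lambda \tau}
  \;+\; \beta_{3t}\!\left(\frac{1 - e^{-\lambda \tau}}{\lambda \tau} - e^{-\lambda \tau}\right),
```

where y_t(tau) is the yield at maturity tau, lambda governs the decay of the loadings, and the three betas are interpreted as level, slope, and curvature factors.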
  7. By: Grzegorz Koloch
    Abstract: In this paper we estimate a Smets and Wouters (2007) model with shocks following a closed skew normal (csn) distribution introduced in Gonzalez-Farias et al. (2004), which nests a normal distribution as a special case. In the paper we discuss priors for model parameters, including skewness-related parameters of shocks, i.e. location, scale and skewness parameters. Using data ranging from 1991Q1 to 2012Q2 we estimate the model and recursively verify its out-of-sample forecasting properties for the period 2007Q1-2012Q2, therefore including the recent financial crisis, at forecasting horizons from 1 up to 8 quarters ahead. Using an RMSE measure we compare the forecasting performance of the model with skewed shocks with a model estimated using normally distributed shocks. We find that inclusion of skewness can help forecasting some variables (consumption, investment and hours worked), but, on the other hand, results in a deterioration for the other ones (output, inflation, wages and the short rate).
    Keywords: DSGE, Forecasting, Closed Skew-Normal Distribution
    JEL: C51 C13 E32
    Date: 2016–12
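The key property exploited above is that the skewed shock distribution nests the Gaussian as a special case, so the normal-shocks model is a restriction of the skewed one. The snippet below illustrates this nesting with the simpler univariate (Azzalini) skew-normal available in SciPy, used here as a stand-in for the closed skew-normal of the paper: shape parameter a = 0 recovers the normal distribution exactly, while a > 0 shifts mass to the right.

```python
import numpy as np
from scipy import stats

# a = 0: the skew-normal collapses to the standard normal.
normal_draws = stats.skewnorm.rvs(a=0.0, loc=0.0, scale=1.0,
                                  size=10_000, random_state=0)

# a = 4: right-skewed shocks with the same location and scale parameters.
skewed_draws = stats.skewnorm.rvs(a=4.0, loc=0.0, scale=1.0,
                                  size=10_000, random_state=0)
```

The sample mean of the skewed draws is positive even though the location parameter is zero, which is why the paper treats location, scale, and skewness as three separate parameters to put priors on.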

This nep-for issue is ©2017 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.