nep-for New Economics Papers
on Forecasting
Issue of 2015‒01‒03
eleven papers chosen by
Rob J Hyndman
Monash University

  1. Stochastic Household Forecasts by Coherent Random Shares Predictions By Keilman, Nico; van Duin, Coen
  2. Deterministic and stochastic trends in the Lee-Carter mortality model By Laurent Callot; Niels Haldrup; Malene Kallestrup Lamb
  3. A Bayesian MIDAS Approach to Modeling First and Second Moment Dynamics By Pettenuzzo, Davide; Timmermann, Allan G; Valkanov, Rossen
  4. On Forecasting Conflict in Sudan: 2009-2012 By Bessler, David; Kibriya, Shahriar; Chen, Junyi; Price, Ed
  5. A Consumption-Based Approach to Exchange Rate Predictability By Jair N. Ojeda-Joya
  6. "Volatility and Quantile Forecasts by Realized Stochastic Volatility Models with Generalized Hyperbolic Distribution" By Makoto Takahashi; Toshiaki Watanabe; Yasuhiro Omori
  7. Inflation Forecasts and Forecaster Herding: Evidence from South African Survey Data By Christian Pierdzioch; Monique B. Reid; Rangan Gupta
  8. Yield curve and Recession Forecasting in a Machine Learning Framework By Theophilos Papadimitriou; Periklis Gogas; Maria Matthaiou; Efthymia Chrysanthidou
  9. How do oil price forecast errors impact inflation forecast errors? An empirical analysis from French and US inflation forecasts. By F. Bec; A. De Gaye
  10. Heterogeneous Agents, the Financial Crisis and Exchange Rate Predictability By Buncic, Daniel; Piras, Gion Donat
  11. On corporate financial distress prediction: what can we learn from private firms in a small open economy? By Evangelos C. Charalambakis

  1. By: Keilman, Nico (Dept. of Economics, University of Oslo); van Duin, Coen (Statistics Netherland)
    Abstract: We compute a stochastic household forecast for the Netherlands by the random share method. Time series of shares of persons in nine household positions, broken down by sex and five-year age group for the years 1996-2010, are modelled by means of the Hyndman-Booth-Yasmeen product-ratio variant of the Lee-Carter model. This approach reduces the dimension of the data set by collapsing the age dimension into one scalar. As a result, the forecast task amounts to predicting two time series of time indices (one for men, one for women) for each household position. We model these time indices as a Random Walk with Drift (RWD) and compute prediction intervals for them. Prediction intervals for random shares are simulated based on the Lee-Carter model. The random shares are combined with population numbers from an independently computed stochastic population forecast of the Netherlands. Our general conclusion is that the method proposed in this paper is useful for generating errors around expected values of shares that are computed independently. If one wishes to use this method to compute the expected values of the household shares as well, one has to include cohort effects in the Lee-Carter model, which requires long time series of data.
    Keywords: Forecast; Household formation; Families; Monte Carlo; Simulation; Random shares; Single parent; The Netherlands
    JEL: C15 J10
    Date: 2014–04–14
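The random-walk-with-drift step used for the time indices can be sketched in a few lines of Python. This is a minimal illustration on synthetic data, not the authors' code; the function name and the toy series `k_t` are invented for the example.

```python
import numpy as np

def rwd_forecast(series, horizon, n_sims=10000, seed=0):
    """Random walk with drift: point forecast plus simulated 95% intervals."""
    diffs = np.diff(series)
    drift = diffs.mean()          # estimated drift term
    sigma = diffs.std(ddof=1)     # innovation standard deviation
    rng = np.random.default_rng(seed)
    shocks = rng.normal(drift, sigma, size=(n_sims, horizon))
    paths = series[-1] + shocks.cumsum(axis=1)   # simulated future paths
    point = series[-1] + drift * np.arange(1, horizon + 1)
    lower, upper = np.percentile(paths, [2.5, 97.5], axis=0)
    return point, lower, upper

# toy time index drifting downward, as such indices often do
k_t = np.cumsum(np.full(15, -0.5)) + np.random.default_rng(1).normal(0, 0.1, 15)
point, low, high = rwd_forecast(k_t, horizon=5)
```

As in the paper's setup, the simulated paths give intervals that widen with the forecast horizon, since the variance of a random walk grows linearly in the number of steps.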
  2. By: Laurent Callot (VU University Amsterdam, the Tinbergen Institute and CREATES); Niels Haldrup (Aarhus University and CREATES); Malene Kallestrup Lamb (Aarhus University and CREATES)
    Abstract: The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively, this means that the main characteristics of the model simplify to a random walk model with age-specific drift components. But restricting the adjustment mechanisms of the stochastic and linear trend components to be identical may be too strong a simplification. In fact, the presence of a stochastic trend component may itself result from a bias induced by properly fitting the linear trend that characterizes mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics, and we suggest separating the deterministic and stochastic time series components, to the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee-Carter model will otherwise overestimate the reduction of mortality for the younger age groups and underestimate it for the older age groups. In practice, our recommendation means that the Lee-Carter model should be formulated as a two- (or several-) factor model, where one factor is deterministic and the others are stochastic, rather than as a one-factor model. This feature generalizes to the range of models that extend the Lee-Carter model in various directions.
    Keywords: Mortality modelling, factor models, principal components, stochastic and deterministic trends
    JEL: C2 C23 J1 J11
    Date: 2014–11–19
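The deterministic/stochastic separation the authors advocate can be illustrated in miniature: remove a fitted linear (deterministic) trend from the time index and treat the remainder as the stochastic factor. This is a toy sketch of the idea, not the paper's estimator.

```python
import numpy as np

def split_trend(k):
    """Split a mortality time index into a deterministic linear trend and a
    stochastic remainder (a toy illustration of the two-factor idea)."""
    t = np.arange(len(k))
    slope, intercept = np.polyfit(t, k, 1)   # fit the deterministic trend
    trend = intercept + slope * t
    return trend, k - trend                  # (deterministic, stochastic)

# example: a declining index with noise around the trend
t = np.arange(30)
k = 5.0 - 0.4 * t + np.random.default_rng(0).normal(0, 0.2, 30)
trend, stochastic = split_trend(k)
```

In the one-factor Lee-Carter formulation the two components are forced to load identically on the age profile; modelling `trend` and `stochastic` as separate factors relaxes exactly that restriction.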
  3. By: Pettenuzzo, Davide; Timmermann, Allan G; Valkanov, Rossen
    Abstract: We propose a new approach to predictive density modeling that allows for MIDAS effects in both the first and second moments of the outcome and develop Gibbs sampling methods for Bayesian estimation in the presence of stochastic volatility dynamics. When applied to quarterly U.S. GDP growth data, we find strong evidence that models that feature MIDAS terms in the conditional volatility generate more accurate forecasts than conventional benchmarks. Finally, we find that forecast combination methods such as the optimal predictive pool of Geweke and Amisano (2011) produce consistent gains in out-of-sample predictive performance.
    Keywords: Bayesian estimation; GDP growth; MIDAS regressions; out-of-sample forecasts; stochastic volatility
    JEL: C11 C32 C53 E37
    Date: 2014–09
  4. By: Bessler, David; Kibriya, Shahriar; Chen, Junyi; Price, Ed
    Abstract: The paper considers univariate and multivariate models to forecast monthly conflict events in Sudan over the out-of-sample period 2009-2012. The models used to generate these forecasts were based on a specification from a machine learning algorithm fit to 2000-2008 monthly data. The idea is that for policy purposes we need models that can forecast conflict events before they occur. The model that includes the previous month’s wheat price performs better than a similar model that does not include past wheat prices (the univariate model). Neither model performed well in forecasting conflict in the neighborhood of the 2012 “Heglig Crisis”. Such a result is generic, as outlier or unusual events are hard for both models and policy experts to forecast.
    Keywords: Machine learning algorithm; Commodity prices
    JEL: C53 C54 O1
    Date: 2014–08
  5. By: Jair N. Ojeda-Joya
    Abstract: This paper provides evidence of short-run predictability of the real exchange rate by performing out-of-sample tests of a forecasting equation derived from a consumption-based asset pricing model. In this model, the real exchange rate is predictable as a result of the implications of preferences with habit persistence for the pricing of international assets. The implied predictors are domestic, US, and world consumption growth. Empirical exercises show evidence of short-term predictability for the bilateral rates of 15 out of 17 countries vis-à-vis the US over the post-Bretton-Woods float. A GMM estimation of the parameters of the model also finds evidence of the presence of habits in consumers’ preferences.
    JEL: C5 F31 F47 G15
    Date: 2014–12
  6. By: Makoto Takahashi (Center for the Study of Finance and Insurance, Osaka University); Toshiaki Watanabe (Institute of Economic Research, Hitotsubashi University); Yasuhiro Omori (Faculty of Economics, The University of Tokyo)
    Abstract: The realized stochastic volatility model of Takahashi, Omori, and Watanabe (2009), which incorporates realized volatility into the asymmetric stochastic volatility model, is extended by employing a wider class of distributions, the generalized hyperbolic skew Student's t-distribution, for financial returns. The extension makes it possible to capture heavy tails and skewness in financial returns. With a Bayesian estimation scheme via the Markov chain Monte Carlo method, the model enables us to estimate the parameters of the return distribution and of the model jointly. It also makes it possible to forecast volatility and return quantiles by sampling from their joint posterior distribution. The model is applied to quantile forecasts of financial returns, such as value-at-risk and expected shortfall, as well as volatility forecasts, and those forecasts are evaluated by several backtesting procedures. Empirical results for the US Dow Jones Industrial Average index show that the extended model fits the data better and improves the volatility and quantile forecasts.
    Date: 2014–12
  7. By: Christian Pierdzioch (Department of Economics, Helmut-Schmidt-University); Monique B. Reid (Department of Economics, University of Stellenbosch); Rangan Gupta (Department of Economics, University of Pretoria)
    Abstract: We use South African survey data to study whether short-term inflation forecasts are unbiased. Depending on how we model a forecaster’s information set, we find that forecasts are biased due to forecaster herding. Evidence of forecaster herding is strong when we assume that the information set contains no information on the contemporaneous forecasts of others. When we randomly allocate forecasters into a group of early forecasters, who can only observe the past forecasts of others, and late forecasters, who can observe the contemporaneous forecasts of their predecessors, the evidence of forecaster herding weakens. Further, evidence of forecaster herding is strong and significant in times of high inflation volatility. In times of low inflation volatility, in contrast, forecaster anti-herding seems to dominate.
    Keywords: inflation rate, forecasting, forecaster herding
    JEL: C53 D82 E37
    Date: 2014
  8. By: Theophilos Papadimitriou (Department of Economics, Democritus University of Thrace, Greece); Periklis Gogas (Department of Economics, Democritus University of Thrace, Greece; The Rimini Centre for Economic Analysis, Italy); Maria Matthaiou (Department of Economics, Democritus University of Thrace, Greece); Efthymia Chrysanthidou (Department of Economics, Democritus University of Thrace, Greece)
    Abstract: In this paper, we investigate the forecasting ability of the yield curve with respect to the U.S. real GDP cycle. More specifically, within a Machine Learning (ML) framework, we use data from a variety of short-term (treasury bill) and long-term (bond) interest rates for the period 1976:Q3 to 2011:Q4, in conjunction with real GDP for the same period, to create a model that can successfully forecast output fluctuations (inflation and output gaps) around its long-run trend. We focus our attention on correctly forecasting the instances of output gaps, referred to for the purposes of our analysis as recessions. To this end, we applied a Support Vector Machines (SVM) technique for classification. The results show that we can achieve an overall forecasting accuracy of 66.7% and 100% accuracy in forecasting recessions.
    Date: 2014–11
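The SVM classification setup can be sketched as follows, assuming scikit-learn is available. The features (a short rate and a term spread) and the recession labels are synthetic stand-ins for the yield-curve data, not the paper's dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 140  # roughly the number of quarters in 1976:Q3 to 2011:Q4

# hypothetical yield-curve features: short rate and term spread (long - short)
short_rate = rng.normal(5.0, 2.0, n)
spread = rng.normal(1.5, 1.2, n)
X = np.column_stack([short_rate, spread])

# stylised label: recessions tend to follow inverted curves (negative spread)
y = (spread + rng.normal(0, 0.5, n) < 0).astype(int)

# SVM with an RBF kernel; features are standardised before classification
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:100], y[:100])               # train on the first 100 quarters
accuracy = clf.score(X[100:], y[100:])  # out-of-sample accuracy
```

Standardising the inputs matters for SVMs, since the RBF kernel is distance-based and would otherwise be dominated by whichever rate series has the larger scale.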
  9. By: F. Bec; A. De Gaye
    Abstract: This paper proposes an empirical investigation of the impact of oil price forecast errors on inflation forecast errors, for two different sets of recent forecast data: the median of SPF inflation forecasts for the U.S. and Central Bank inflation forecasts for France. Two salient points emerge from our results. First, oil price forecast errors make a significant contribution to the explanation of inflation forecast errors, whatever the country or period considered. Second, the pass-through of oil price forecast errors to inflation forecast errors roughly doubles when oil price volatility is large.
    Keywords: Forecast errors, Inflation rate, Oil price, Threshold model.
    JEL: C22 E31 E37
    Date: 2014
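The regime-dependent pass-through described above, in the spirit of the threshold model named in the keywords, can be illustrated with a simple two-regime regression on synthetic data. This is an assumption-laden sketch, not the authors' specification.

```python
import numpy as np

def threshold_passthrough(infl_err, oil_err, vol, threshold):
    """Slope of inflation forecast errors on oil price forecast errors,
    estimated separately in low- and high-volatility regimes (illustrative)."""
    slopes = {}
    for name, mask in (("low", vol <= threshold), ("high", vol > threshold)):
        X = np.column_stack([np.ones(mask.sum()), oil_err[mask]])
        beta, *_ = np.linalg.lstsq(X, infl_err[mask], rcond=None)
        slopes[name] = beta[1]  # pass-through coefficient in this regime
    return slopes

# synthetic data: pass-through doubles when volatility exceeds the threshold
rng = np.random.default_rng(0)
oil_err = rng.normal(size=1000)
vol = rng.uniform(0.0, 2.0, 1000)
true_slope = np.where(vol > 1.0, 0.2, 0.1)
infl_err = true_slope * oil_err + rng.normal(scale=0.01, size=1000)
slopes = threshold_passthrough(infl_err, oil_err, vol, threshold=1.0)
```

Splitting the sample at the volatility threshold and running OLS in each regime recovers the doubling of the pass-through that the paper reports.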
  10. By: Buncic, Daniel; Piras, Gion Donat
    Abstract: We construct an empirical heterogeneous agent model that optimally combines forecasts from fundamentalist and chartist agents and evaluate its out-of-sample forecast performance using daily data covering the period from January 1999 to June 2014 for six of the most widely traded currencies. We use daily financial data such as level, slope, and curvature yield curve factors, equity prices, and risk aversion and global trade activity measures in the fundamentalist agent's predictor set to proxy the market's view of the state of the macroeconomy. Chartist agents rely on standard momentum, moving average, and relative strength index indicators in their predictor set. The individual agent-specific forecasts are computed using the recently proposed flexible dynamic model averaging framework and are then aggregated into a combined forecast using a forecast combination regression. We show that our empirical heterogeneous agent model produces statistically significant and sizable forecast improvements over the standard random walk benchmark, reaching out-of-sample $R^2$ values of 1.41, 1.07, 0.99 and 0.74 percent at the daily one-step-ahead horizon for 4 of the 6 currencies that we consider. Forecast gains remain significant for horizons up to three days ahead. We show further that for 5 of the 6 currencies, a substantial part of the forecast gains is realised over the September 2008 to February 2009 period, that is, around the time of the Lehman Brothers collapse. The time series evolution of the dynamic model combination weights shows that fundamentalist agents dominated the combination forecasts during the first half of the out-of-sample evaluation period, while the last third was driven by chartist agents.
    Keywords: Empirical heterogeneous agent model, forecasting, time varying parameter model, state-space modelling, model combination, exchange rate predictability, financial crisis
    JEL: C22 C52 C53 E17 F31 G17
    Date: 2014–12
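The forecast combination regression step can be sketched as a Granger-Ramanathan style regression of realised values on the two agents' forecasts. The data and weights below are synthetic, and the function is an illustration rather than the authors' combination scheme.

```python
import numpy as np

def combine_forecasts(y, f_fund, f_chart):
    """Regress realised values on the fundamentalist and chartist forecasts
    to estimate combination weights (a simple illustrative version)."""
    X = np.column_stack([np.ones_like(f_fund), f_fund, f_chart])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # (intercept, fundamentalist weight, chartist weight)

# toy data: the realised series loads 0.7 on one forecast, 0.3 on the other
rng = np.random.default_rng(0)
f_fund = rng.normal(size=500)
f_chart = rng.normal(size=500)
y = 0.7 * f_fund + 0.3 * f_chart + rng.normal(scale=0.1, size=500)
beta = combine_forecasts(y, f_fund, f_chart)
combined = beta[0] + beta[1] * f_fund + beta[2] * f_chart

# R^2 against a zero-forecast (random walk) benchmark, as in the paper's metric
r2 = 1 - np.sum((y - combined) ** 2) / np.sum(y ** 2)
```

In the paper the weights vary over time, which is what lets the combination tilt toward fundamentalists early in the sample and chartists later; the static regression above shows only the basic mechanics.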
  11. By: Evangelos C. Charalambakis (Bank of Greece)
    Abstract: We use a large panel dataset that includes nearly 31,000 Greek private firms to investigate which variables affect the prediction of corporate financial distress. Based on a multi-period logit model that accounts for industry effects, we identify six firm-specific variables that best describe the probability of financial distress for Greek private firms. In particular, the results show that profitability, leverage, the ratio of retained earnings to total assets, the ability of a firm to export, liquidity and the ability of a firm to pay out dividends are strong predictors of financial distress. We also find that GDP growth and a dummy variable that captures the effect of the Greek debt crisis affect the probability of financial distress. In-sample and out-of-sample forecast tests show that the model that includes the six firm-specific variables, GDP growth and industry dummies exhibits the highest predictive ability. Finally, the predictive ability of the model remains high when we increase the forecast horizon.
    Keywords: corporate financial distress; bankruptcy prediction; hazard model; financial statements
    JEL: G13 G17 G33 C41
    Date: 2014–11
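The basic mechanics of logit-based distress prediction can be sketched as follows. The paper uses a multi-period (hazard-style) logit with industry effects; this toy single-period version, assuming scikit-learn and entirely invented predictors, only illustrates the idea.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# hypothetical firm-level predictors (names illustrative, not the paper's data)
leverage = rng.uniform(0.0, 1.0, n)
profitability = rng.normal(0.05, 0.1, n)
liquidity = rng.uniform(0.0, 2.0, n)
X = np.column_stack([leverage, profitability, liquidity])

# stylised rule: high leverage and low profitability raise distress risk
latent = -2.0 + 3.0 * leverage - 10.0 * profitability - 0.5 * liquidity
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-latent)))

model = LogisticRegression(max_iter=1000).fit(X, y)
distress_prob = model.predict_proba(X)[:, 1]  # estimated distress probabilities
```

The fitted coefficients recover the expected signs: distress probability rises with leverage and falls with profitability and liquidity, mirroring the qualitative findings reported in the abstract.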

This nep-for issue is ©2015 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.