nep-for New Economics Papers
on Forecasting
Issue of 2018‒01‒08
eight papers chosen by
Rob J Hyndman
Monash University

  1. On Long Memory Origins and Forecast Horizons By J. Eduardo Vera-Valdés
  2. Real-time forecast evaluation of DSGE models with stochastic volatility By Diebold, Francis X.; Schorfheide, Frank; Shin, Minchul
  3. Assessing the 2016 U.S. Presidential Election Popular Vote Forecasts By Graefe, Andreas; Armstrong, J. Scott; Jones, Randall J.; Cuzan, Alfred G.
  4. Spikes and memory in (Nord Pool) electricity price spot prices By Tommaso Proietti; Niels Haldrup; Oskar Knapik
  5. Appropriate monetary policy and forecast disagreement at the FOMC By Schultefrankenfeld, Guido
  6. Which Model to Forecast the Target Rate? By Maarten van Oordt
  7. Why Are Inflation Forecasts Sticky? Theory and Application to France and Germany By Frédérique Bec; Raouf Boucekkine; Caroline Jardet
  8. A Simple Model for Now-Casting Volatility Series By BREITUNG, Jörg; HAFNER, Christian M.

  1. By: J. Eduardo Vera-Valdés
    Abstract: Most long memory forecasting studies assume that the memory is generated by the fractional difference operator. We argue that the most cited theoretical arguments for the presence of long memory do not imply the fractional difference operator, and assess the performance of the autoregressive fractionally integrated moving average (ARFIMA) model when forecasting series with long memory generated by nonfractional processes. We find that high-order autoregressive (AR) models produce forecast performance similar or superior to that of ARFIMA models at short horizons. Nonetheless, as the forecast horizon increases, the ARFIMA models tend to dominate in forecast performance. Hence, ARFIMA models are well suited for forecasts of long memory processes regardless of the long memory generating mechanism, particularly for medium and long forecast horizons. Additionally, we analyse the forecasting performance of the heterogeneous autoregressive (HAR) model, which imposes restrictions on high-order AR models. We find that the structure imposed by the HAR model produces better long horizon forecasts than AR models of the same order, at the price of inferior short horizon forecasts in some cases. Our results have implications for, among others, Climate Econometrics and Financial Econometrics models dealing with long memory series at different forecast horizons. We show in an example that while a short memory autoregressive moving average (ARMA) model gives the best performance when forecasting the Realized Variance of the S&P 500 up to a month ahead, the ARFIMA model gives the best performance for longer forecast horizons.
    Date: 2017–12
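The fractional difference operator at the heart of the ARFIMA model can be expanded into an infinite AR filter with simple recursive coefficients. A minimal numpy sketch of that expansion (function names and the truncation at the sample start are my choices, not the paper's):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1 - L)**d expanded to n terms:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - L)**d to a series, truncating at the sample start."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

w = frac_diff_weights(0.4, 5)  # first weights: 1, -0.4, -0.12, ...
```

Setting d = 1 recovers the ordinary first difference, while a fractional d between 0 and 0.5 gives the slowly decaying weights that generate long memory.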
  2. By: Diebold, Francis X.; Schorfheide, Frank; Shin, Minchul
    Abstract: Recent work has analyzed the forecasting performance of standard dynamic stochastic general equilibrium (DSGE) models, but little attention has been given to DSGE models that incorporate nonlinearities in exogenous driving processes. Against that background, we explore whether incorporating stochastic volatility improves DSGE forecasts (point, interval, and density). We examine real-time forecast accuracy for key macroeconomic variables including output growth, inflation, and the policy rate. We find that incorporating stochastic volatility in DSGE models of macroeconomic fundamentals markedly improves their density forecasts, just as incorporating stochastic volatility in models of financial asset returns improves their density forecasts.
    Keywords: Dynamic Stochastic General Equilibrium Model,Prediction,Stochastic Volatility
    JEL: E17 E27 E37 E47
    Date: 2017
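The nonlinearity the authors add to the DSGE driving processes is stochastic volatility: the log variance of a shock follows its own AR(1) process. A toy simulation of such a return process, with parameter values that are purely illustrative (not estimates from the paper):

```python
import numpy as np

def simulate_sv(T, mu=-1.0, rho=0.95, sigma_eta=0.2, seed=0):
    """Simulate returns with stochastic volatility:
    h_t = mu + rho * (h_{t-1} - mu) + sigma_eta * eta_t,
    r_t = exp(h_t / 2) * eps_t,  where h_t = log sigma2_t."""
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    h[0] = mu
    for t in range(1, T):
        h[t] = mu + rho * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    r = np.exp(h / 2) * rng.standard_normal(T)
    return r, h

returns, log_var = simulate_sv(500)
```

With rho close to one, volatility is persistent, which is what widens the density forecasts in calm times and narrows them in volatile times relative to a homoskedastic model.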
  3. By: Graefe, Andreas; Armstrong, J. Scott; Jones, Randall J.; Cuzan, Alfred G.
    Abstract: The PollyVote uses evidence-based techniques for forecasting the popular vote in presidential elections. The forecasts are derived by averaging existing forecasts generated by six different forecasting methods. In 2016, the PollyVote correctly predicted that Hillary Clinton would win the popular vote. The 1.9 percentage-point error across the last 100 days before the election was lower than the average error for the six component forecasts from which it was calculated (2.3 percentage points). The gains in forecast accuracy from combining are best demonstrated by comparing the error of PollyVote forecasts with the average error of the component methods across the seven elections from 1992 to 2012. The average errors for the last 100 days prior to the election were: public opinion polls (2.6 percentage points), econometric models (2.4), betting markets (1.8), and citizens’ expectations (1.2); for expert opinions (1.6) and index models (1.8), data were only available since 2004 and 2008, respectively. The average error for PollyVote forecasts was 1.1, lower than the error for even the most accurate component method.
    Keywords: election, forecasting, voting
    JEL: C53 D72
    Date: 2017–02–07
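Part of the accuracy gain from combining follows from the triangle inequality: the absolute error of an average of forecasts can never exceed the average of the components' absolute errors. A sketch with made-up numbers (not PollyVote data):

```python
import numpy as np

# Hypothetical component forecasts of a vote share; the truth is 51.0
truth = 51.0
components = np.array([52.8, 50.1, 49.9, 52.2, 51.6, 50.4])

combined = components.mean()
component_errors = np.abs(components - truth)
combined_error = abs(combined - truth)

# By the triangle inequality the combined error never exceeds
# the average absolute error of the components.
assert combined_error <= component_errors.mean()
```

Here the component errors average just over one percentage point while the combined forecast misses by less than 0.2, an (idealized) illustration of why averaging across methods with offsetting biases pays off.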
  4. By: Tommaso Proietti (CEIS & DEF, University of Rome "Tor Vergata"); Niels Haldrup (Aarhus University); Oskar Knapik (Aarhus University)
    Abstract: Electricity spot prices are subject to transitory sharp movements commonly referred to as spikes. The paper aims at assessing their effects on model-based inferences and predictions, with reference to the Nord Pool power exchange. We identify a spike as a price value which deviates substantially from the normal price, where the latter is defined as the expectation arising from a model accounting for long memory at the zero and at the weekly seasonal frequencies, given the knowledge of the past realizations. Hence, a spike is associated with a time series innovation larger in size than a specified threshold. The threshold regulates the robustness of the estimates of the underlying price level and is chosen by a data-driven procedure that focuses on the ability to predict future prices. The normal price is computed by a modified Kalman filter, which robustifies the inferences by cleaning the spikes, i.e. shrinking an observation that deviates substantially from the normal price towards the one-step-ahead prediction. Our empirical application illustrates the effects of the spikes on the estimates of the parameters governing the persistence of the series; moreover, a real-time rolling forecasting exercise is used to establish the amount of cleaning that optimizes predictive accuracy at different horizons.
    Keywords: Robustness,Kalman Filter,Long Memory.
    JEL: C22 C53 Q41
    Date: 2017–12–18
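The cleaning step can be caricatured with a scalar filter: when the one-step-ahead innovation exceeds a threshold, the observation is shrunk toward the prediction before it updates the level. This sketch replaces the paper's long-memory Kalman filter with plain exponential smoothing, so it illustrates only the thresholding idea, not the actual model:

```python
import numpy as np

def clean_spikes(y, alpha=0.3, c=3.0):
    """Robust exponential smoothing: an observation whose innovation
    exceeds c robust-scale units is shrunk toward the one-step-ahead
    prediction before it updates the level."""
    scale = 1.4826 * np.median(np.abs(np.diff(y)))  # robust scale estimate
    level = y[0]
    cleaned = [y[0]]
    for obs in y[1:]:
        innov = obs - level                  # one-step-ahead innovation
        if abs(innov) > c * scale:           # spike: cap the innovation
            obs = level + np.sign(innov) * c * scale
        level = level + alpha * (obs - level)
        cleaned.append(obs)
    return np.array(cleaned)
```

A single extreme observation is pulled back toward the predicted level, so it no longer distorts the estimated persistence of the series; the constant c plays the role of the data-driven threshold in the paper.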
  5. By: Schultefrankenfeld, Guido
    Abstract: I assess how dissenting views on appropriate monetary policy result in disagreement about the macroeconomic outlook among Federal Open Market Committee members. FOMC members who voted for a higher Fed Funds Rate than the majority of voters also forecast higher inflation rates and lower unemployment rates relative to the consensus view on the future economy. Voters who tighten their stance revise inflation forecasts upward and unemployment forecasts downward. Members who switched voting status between forecasting rounds (from voting with the majority to dissenting, or vice versa) are significantly more hesitant in revising their macroeconomic forecasts.
    Keywords: Federal Reserve System,Federal Open Market Committee,Federal Funds Rate,Dissent,Forecast Disagreement
    JEL: C12 E52
    Date: 2017
  6. By: Maarten van Oordt
    Abstract: Specifications of the Federal Reserve target rate that have more realistic features mitigate in-sample over-fitting and are favored in the data. Imposing a positivity constraint and discrete increments significantly increases the accuracy of model out-of-sample forecasts for the level and volatility of the Federal Reserve target rates. In addition, imposing the constraints produces different estimates of the response coefficients. In particular, a new and simple specification, where the target rate is the maximum between zero and the prediction of an ordered-choice Probit model, is more accurate and has higher response coefficients to information about inflation and unemployment.
    Keywords: Financial markets, Interest rates
    JEL: E43
    Date: 2017
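The two constraints of the preferred specification, a floor at zero and discrete increments, amount to a simple mapping of a latent rate prediction onto the feasible set; the ordered-Probit machinery that would produce the latent value is omitted here:

```python
def constrained_target(latent, step=0.25):
    """Map a latent rate prediction to the constrained target rate:
    nonnegative and on a grid of discrete (25 basis point) increments."""
    return max(0.0, round(latent / step) * step)

constrained_target(1.37)   # -> 1.25
constrained_target(-0.40)  # -> 0.0
```

The point of the paper is that estimating response coefficients under this more realistic mapping, rather than a linear rule, changes the estimates and improves out-of-sample forecasts of both the level and the volatility of the target rate.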
  7. By: Frédérique Bec (THEMA - Théorie économique, modélisation et applications - Université de Cergy Pontoise - CNRS - Centre National de la Recherche Scientifique, CREST - Centre de Recherche en Économie et Statistique - INSEE - ENSAE ParisTech - École Nationale de la Statistique et de l'Administration Économique); Raouf Boucekkine (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - CNRS - Centre National de la Recherche Scientifique - ECM - Ecole Centrale de Marseille, IMéRA - Institute for Advanced Studies - Aix-Marseille University, IUF - Institut Universitaire de France - M.E.N.E.S.R. - Ministère de l'Éducation nationale, de l’Enseignement supérieur et de la Recherche); Caroline Jardet (Centre de recherche de la Banque de France - Banque de France)
    Abstract: This paper proposes a theoretical model of forecast formation which implies that, in the presence of information observation and forecast communication costs, rational professional forecasters might find it optimal not to revise their forecasts continuously, or at any point in time. The threshold time- and state-dependence of the observation reviews and forecast revisions implied by this model are then tested using inflation forecast updates of professional forecasters from recent Consensus Economics panel data for France and Germany. Our empirical results support the presence of both kinds of dependence, as well as their threshold-type shape. They also imply an upper bound on the optimal time between two information observations of about six months and the co-existence of both types of costs, the observation cost being about 1.5 times larger than the communication cost.
    Keywords: forecast revision,binary choice models,information and communication costs
    Date: 2017–11
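The state-dependent stickiness the model implies can be caricatured as a threshold revision rule: with communication costs, a forecaster republishes only when the update is large enough to be worth the cost (the threshold value below is illustrative, not from the paper):

```python
def revise(published, updated, threshold=0.2):
    """Threshold revision rule: the forecaster keeps the published
    forecast unless the new target forecast differs by more than
    the threshold, which proxies the communication cost."""
    return updated if abs(updated - published) > threshold else published

revise(1.5, 1.6)  # small news: forecast stays at 1.5
revise(1.5, 1.9)  # large news: forecast moves to 1.9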
  8. By: BREITUNG, Jörg (University of Cologne); HAFNER, Christian M. (Université catholique de Louvain, CORE, Belgium)
    Abstract: Popular volatility models focus on the conditional variance given past observations, whereas the (arguably most important) information in the current observation is ignored. This paper proposes a simple model for now-casting volatilities based on a specific ARMA representation of the log-transformed squared returns that allows us to estimate current volatility as a function of current and past returns. The model can be viewed as a stochastic volatility model with perfect correlation between the two error terms. It is shown that the volatility nowcasts are invariant to this correlation, and therefore the estimated volatilities coincide. An extension of the now-casting model is proposed that takes into account the so-called leverage effect. The alternative models are applied to estimate daily return volatilities from the S&P 500 stock price index.
    Keywords: EGARCH, stochastic volatility, ARMA, realized volatility, leverage
    JEL: C22 C58
    Date: 2016–10–01
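The paper's idea of using the current return, not only past ones, to estimate current volatility can be sketched crudely: filter the log squared returns and blend in today's observation. The filter weights below are illustrative choices, not the authors' estimates; only the bias correction (E[log eps_t^2] is about -1.27 for Gaussian errors) is a standard fact:

```python
import numpy as np

def nowcast_vol(returns, phi=0.9, w=0.3, eps=1e-12):
    """Crude volatility nowcast: an AR(1) filter on log squared
    returns with the *current* observation blended in."""
    y = np.log(returns ** 2 + eps)       # log squared returns
    mean_y = y.mean()
    state = mean_y
    nowcasts = []
    for yt in y:
        pred = mean_y + phi * (state - mean_y)  # forecast from the past
        state = (1 - w) * pred + w * yt         # blend in current obs
        # undo E[log eps^2] = -1.27 under Gaussian errors
        nowcasts.append(np.exp((state + 1.27) / 2))
    return np.array(nowcasts)
```

In contrast to a GARCH-type conditional variance, which is fixed before the day's return is observed, the nowcast above moves with today's return, which is the feature the paper formalizes through its ARMA representation.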

This nep-for issue is ©2018 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.