nep-for New Economics Papers
on Forecasting
Issue of 2016‒07‒16
eight papers chosen by
Rob J Hyndman
Monash University

  1. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models By Peiris, S.; Asai, M.; McAleer, M.J.
  2. Forecasting China's Economic Growth and Inflation By Patrick Higgins; Tao Zha; Karen Zhong
  3. Point, interval and density forecasts of exchange rates with time-varying parameter models By Abbate, Angela; Marcellino, Massimiliano
  4. Assessing the economic value of probabilistic forecasts in the presence of an inflation target By Christopher McDonald; Craig Thamotheram; Shaun P. Vahey; Elizabeth C. Wakerly
  5. Perception vs Reality: How does the British electorate evaluate economic performance of incumbent governments in the post war period? By Jonathon M. Clegg
  6. Large Vector Autoregressions with Stochastic Volatility and Flexible Priors By Clark, Todd E.; Carriero, Andrea; Marcellino, Massimiliano
  7. Is the intrinsic value of macroeconomic news announcements related to their asset price impact? By Gilbert, Thomas; Scotti, Chiara; Strasser, Georg; Vega, Clara
  8. Land use predictions on a regular grid at different scales and with easily accessible covariates By Chakir, Raja; Laurent, Thibault; Ruiz-Gazen, Anne; Thomas-Agnan, Christine; Vignes, Céline

  1. By: Peiris, S.; Asai, M.; McAleer, M.J.
    Abstract: In recent years, fractionally differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper considers a class of models generated by Gegenbauer polynomials, incorporating long memory in the stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the statistical properties of the new model, suggest spectral likelihood estimation for long memory processes, and investigate the finite sample properties via Monte Carlo experiments. We apply the model to three exchange rate return series. Overall, the results of the out-of-sample forecasts show the adequacy of the new GLMSV model.
    Keywords: Stochastic volatility, GARCH models, Gegenbauer Polynomial, Long Memory, Spectral Likelihood, Estimation, Forecasting
    JEL: C18 C21 C58
    Date: 2016–06–01
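    The Gegenbauer machinery named in this abstract is concrete: the long-memory filter (1 - 2uB + B^2)^(-d) expands with coefficients given by the Gegenbauer polynomials C_j^d(u). A minimal sketch of that expansion via the standard three-term recursion (illustrative only; the function name is hypothetical and this is not the authors' code):

```python
def gegenbauer_coeffs(d, u, n):
    """Coefficients c_j of (1 - 2*u*z + z**2)**(-d) = sum_j c_j * z**j.

    The c_j equal the Gegenbauer polynomials C_j^d(u), which define the
    long-memory filter underlying Gegenbauer (GARMA-type) processes.
    Computed with the standard three-term recursion:
        c_0 = 1,  c_1 = 2*d*u,
        j*c_j = 2*u*(j + d - 1)*c_{j-1} - (j + 2*d - 2)*c_{j-2}.
    """
    c = [1.0, 2.0 * d * u]
    for j in range(2, n + 1):
        c.append((2.0 * u * (j + d - 1.0) * c[j - 1]
                  - (j + 2.0 * d - 2.0) * c[j - 2]) / j)
    return c[: n + 1]
```

    For a long-memory parameter d = 0.3 and frequency parameter u = 0.7, the first coefficients decay slowly, which is the hallmark of long memory.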
  2. By: Patrick Higgins; Tao Zha; Karen Zhong
    Abstract: Although macroeconomic forecasting forms an integral part of the policymaking process, there has been a serious lack of rigorous and systematic research in the evaluation of out-of-sample model-based forecasts of China's real GDP growth and CPI inflation. This paper fills this research gap by providing a replicable forecasting model that beats a host of competing models as measured by root mean square errors, especially over long-run forecast horizons. The model is shown to be capable of predicting turning points and to be usable for policy analysis under different scenarios. It predicts that China's future GDP growth will be L-shaped rather than U-shaped.
    JEL: C53 E1 E17
    Date: 2016–07
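    The comparison metric used above, root mean square error, is easy to state precisely. A minimal sketch (illustrative only, not the authors' evaluation code), assuming aligned sequences of point forecasts and realised outcomes:

```python
import math

def rmse(forecasts, actuals):
    """Root mean square forecast error over paired forecasts and outcomes."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

    Lower values indicate better point-forecast accuracy; comparing RMSE across models at each horizon is the standard horse race the abstract describes.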
  3. By: Abbate, Angela; Marcellino, Massimiliano
    Abstract: We explore whether modelling parameter time variation improves the point, interval and density forecasts of nine major exchange rates vis-à-vis the US dollar over the period 1976-2015. We find that modelling parameter time variation is needed for an accurate calibration of forecast confidence intervals, and is most valuable at long horizons and in high-volatility periods. The biggest forecast improvements come from modelling time variation in the volatilities of the innovations rather than in the slope parameters. Moreover, we find no evidence that parameter time variation helps to unravel exchange rate predictability by macroeconomic fundamentals. Finally, an economic evaluation of the different forecast models reveals that controlling for parameter time variation leads to higher portfolio returns, and to higher utility values for investors.
    Keywords: exchange rates,forecasting,density forecasts,BVAR,time-varying parameters
    JEL: C11 C53 F31 F37
    Date: 2016
  4. By: Christopher McDonald; Craig Thamotheram; Shaun P. Vahey; Elizabeth C. Wakerly
    Abstract: We consider the fundamental issue of what makes a “good” probability forecast for a central bank operating within an inflation targeting framework. We provide two examples in which the candidate forecasts comfortably outperform those from benchmark specifications by conventional statistical metrics such as root mean squared prediction errors and average logarithmic scores. Our assessment of economic significance uses an explicit loss function that relates economic value to a forecast communication problem for an inflation targeting central bank. In our first example, we analyse the Bank of England’s inflation forecasts during the period in which the central bank operated within a strict inflation targeting framework. In our second example, we consider forecasts for inflation in New Zealand generated from vector autoregressions, when the central bank operated within a flexible inflation targeting framework. In both cases, the economic significance of the performance differential is sensitive to the parameters of the loss function and, for some values, the differentials are economically negligible.
    Keywords: Forecasting inflation, Inflation targeting, Cost-loss ratio, Forecast evaluation, Monetary policy
    Date: 2016–06
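    The average logarithmic score mentioned above rewards density forecasts that put high probability on the realised outcome. A minimal sketch for Gaussian density forecasts (illustrative only; the function names are hypothetical and this is not the authors' code):

```python
import math

def gaussian_log_score(mean, sd, outcome):
    """Log predictive density of a Gaussian forecast N(mean, sd**2)
    evaluated at the realised outcome."""
    z = (outcome - mean) / sd
    return -0.5 * math.log(2.0 * math.pi) - math.log(sd) - 0.5 * z * z

def average_log_score(forecasts, outcomes):
    """Average log score across periods; forecasts is a list of (mean, sd)
    pairs. Higher is better."""
    scores = [gaussian_log_score(m, s, y) for (m, s), y in zip(forecasts, outcomes)]
    return sum(scores) / len(scores)
```

    Ranking two forecasters by this average is the statistical comparison the abstract contrasts with its economic-value assessment based on an explicit loss function.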
  5. By: Jonathon M. Clegg (Faculty of History, University of Oxford)
    Abstract: Rational retrospective voting models have dominated the literature on election forecasting and the economic vote since they were first proposed by Anthony Downs in 1957. The theory views voters as appraisers of an incumbent government’s past performance, which acts as the principal source of information individuals use when casting their vote. Pure retrospective voting requires far less of the electorate in order to hold a government accountable, and empirical work based on this theory has been very adept at predicting election outcomes and explaining individual voting decisions. However, in terms of the time period assessed to form judgements on past performance, there is a surprising disconnect between the theoretical line of thought and actual testing. The sensible assumption of retrospective voting models is that voters, looking to judge a government’s past performance, should assess changes in their own welfare over an entire term of office, with little or no discounting of past events. The majority of empirical studies, however, focus on economic performance over shorter time horizons, usually within a year of an election. Only a handful of studies have attempted to test empirically the correct temporal relationship between changes in economic indicators and election outcomes, despite its importance for retrospective voting models and democratic accountability. This working paper empirically tests over which time horizons changes in macroeconomic fundamentals continue to have a significant bearing on election outcomes in post-war Britain. It finds that longer-term measures of economic change, over entire government terms, are better at predicting changes in incumbents’ vote shares than shorter-term measures closer to the election period. This has important consequences for future voting models and is a promising result for democratic accountability.
    JEL: D72 C52
    Date: 2016–03–10
  6. By: Clark, Todd E. (Federal Reserve Bank of Cleveland); Carriero, Andrea; Marcellino, Massimiliano
    Abstract: Recent research has shown that a reliable vector autoregressive (VAR) model for forecasting and structural analysis of macroeconomic data requires a large set of variables and modelling of time variation in their volatilities. Yet no papers jointly allow for stochastic volatilities and large datasets, due to computational complexity. Moreover, homoskedastic VAR models for large datasets have so far substantially restricted the admissible prior distributions on the parameters. In this paper we propose a new Bayesian estimation procedure for (possibly very large) VARs featuring time-varying volatilities and general priors. This is important both for reduced-form applications, such as forecasting, and for more structural applications, such as computing response functions to structural shocks. We show empirically that the new estimation procedure indeed performs very well for both tasks.
    Keywords: forecasting; models; structural shocks
    JEL: C11 C13 C33 C53
    Date: 2016–06–30
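    The stochastic volatility feature at the centre of this paper can be illustrated with a textbook simulation: log-volatility follows an autoregression and scales the innovations. A minimal sketch (illustrative only; a univariate toy, not the authors' large-VAR procedure, and the parameter values are arbitrary):

```python
import math
import random

def simulate_sv(n, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate n returns from a basic stochastic volatility model:
        h_t = phi * h_{t-1} + eta_t,   eta_t ~ N(0, sigma_eta**2)
        y_t = exp(h_t / 2) * eps_t,    eps_t ~ N(0, 1)
    Persistent h_t produces the volatility clustering that homoskedastic
    VARs cannot capture."""
    rng = random.Random(seed)
    h, ys = 0.0, []
    for _ in range(n):
        h = phi * h + rng.gauss(0.0, sigma_eta)
        ys.append(math.exp(h / 2.0) * rng.gauss(0.0, 1.0))
    return ys
```

    In the paper each equation of a large VAR carries such a time-varying volatility; the computational difficulty the abstract refers to comes from estimating these latent volatility paths jointly with many VAR coefficients.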
  7. By: Gilbert, Thomas; Scotti, Chiara; Strasser, Georg; Vega, Clara
    Abstract: The literature documents a heterogeneous asset price response to macroeconomic news announcements: some announcements have a strong impact on asset prices and others do not. To explain these differences, we estimate a novel measure of the intrinsic value of a macroeconomic announcement, which we define as the announcement's ability to nowcast GDP growth, inflation, and the Federal Funds Target Rate. Using the same nowcasting framework, we then decompose this intrinsic value into the announcement's characteristics: its relation to fundamentals, timing, and revision noise. We find that in the 1998–2013 period, a significant fraction of the variation in the announcements' price impact on the Treasury bond futures market can be explained by differences in intrinsic value. Furthermore, our novel measure of timing explains significantly more of this variation than the announcements' relation to fundamentals, reporting lag (which previous studies have used as a measure of timing), or revision noise.
    Keywords: coordination role of public information, learning, macroeconomic announcements, macroeconomic forecasting, price discovery
    JEL: G14 E44
    Date: 2016–02
  8. By: Chakir, Raja; Laurent, Thibault; Ruiz-Gazen, Anne; Thomas-Agnan, Christine; Vignes, Céline
    Abstract: In this paper we propose models that predict land use (urban, agriculture, forests, natural grasslands and soil) at the points of the Teruti-Lucas survey from easily accessible covariates. Our approach involves two steps: first we model land use at the Teruti-Lucas point level, and second, we propose a method to aggregate land use on regular meshes. The first-stage model provides fine-level predictions. The second step aggregates these predictions on the tiles of the mesh, comparing several methods. We consider various regular meshes of the territory to study how prediction quality depends on the resolution. We show that with easily accessible variables we obtain an acceptable prediction quality at the point level, and that prediction quality improves from the very first stage of aggregation.
    Keywords: land use models, Teruti-Lucas survey, classification tree
    JEL: C21 C25 Q15 R14
    Date: 2016–07
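    The second step the abstract describes, aggregating point-level class predictions onto tiles of a regular grid, can be sketched as a simple binning of predictions into cells and converting counts to shares (illustrative only; the function name is hypothetical and the paper compares several aggregation methods rather than this one):

```python
from collections import Counter

def aggregate_to_grid(points, cell_size):
    """Aggregate point-level land-use predictions to class shares on a
    regular grid.

    points: iterable of (x, y, label), one predicted land-use class per
    survey point. Returns {(i, j): {label: share}}, where (i, j) indexes
    the grid tile containing the point and shares sum to 1 per tile."""
    counts = {}
    for x, y, label in points:
        key = (int(x // cell_size), int(y // cell_size))
        counts.setdefault(key, Counter())[label] += 1
    return {key: {lab: n / sum(c.values()) for lab, n in c.items()}
            for key, c in counts.items()}
```

    Coarsening the grid (a larger cell size) averages out point-level errors, which is consistent with the finding that prediction quality improves from the very first stage of aggregation.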

This nep-for issue is ©2016 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.