nep-for New Economics Papers
on Forecasting
Issue of 2014‒07‒28
twelve papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting Chinese GDP Growth with Mixed Frequency Data: Which Indicators to Look at? By Heiner Mikosch; Ying Zhang
  2. Probabilistic skill in ensemble seasonal forecasts By Leonard Smith; Hailiang Du; Emma Suckling; Falk Niehörster
  3. Predicting and Capitalizing on Stock Market Bears in the U.S. By Bertrand Candelon; Jameel Ahmed; Stefan Straetmans
  4. Cash Demand Forecasting in ATMs by Clustering and Neural Networks By V. KAMINI; V. RAVI; A. PRINZIE; D. VAN DEN POEL
  5. An evaluation of decadal probability forecasts from state-of-the-art climate models By Emma Suckling; Leonard Smith
  6. Empirical Evidence on the Importance of Aggregation, Asymmetry, and Jumps for Volatility Prediction By Diep Duong; Norman Swanson
  7. Long-Run Restrictions and Survey Forecasts of Output, Consumption and Investment By Michael P. Clements
  8. Adaptive Models and Heavy Tails By Davide Delle Monache; Ivan Petrella
  9. Collinsville solar thermal project: Energy economics and Dispatch forecasting - Draft report By William Paul Bell; Phil Wild; John Foster
  10. Policy-oriented macroeconomic forecasting with hybrid DSGE and time-varying parameter VAR models By Stelios D. Bekiros; Alessia Paccagnini
  11. Growth: Now and Forever? By Giang Ho; Paolo Mauro
  12. Improving the Reliability of Real-time Hodrick-Prescott Filtering Using Survey Forecasts By Jaqueson K. Galimberti; Marcelo L. Moura

  1. By: Heiner Mikosch (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Ying Zhang (Partners Group AG, Baar, Switzerland)
    Abstract: Building on a mixed data sampling (MIDAS) model, we evaluate the predictive power of a variety of monthly macroeconomic indicators for forecasting quarterly Chinese GDP growth. We iterate the evaluation over forecast horizons from 370 days to 1 day prior to GDP release and track the release days of the indicators so as to use only information which is actually available at the respective day of forecast. This procedure allows us to detect how useful a specific indicator is at a specific forecast horizon relative to other indicators. Despite being published with an (additional) lag of one month, the OECD leading indicator outperforms the leading indicators published by the Conference Board and by Goldman Sachs. Albeit smaller in terms of market volume, the Shenzhen Composite Stock Exchange Index outperforms the Shanghai Composite Stock Exchange Index and several Hong Kong Stock Exchange indices. Consumer price inflation is especially valuable at forecast horizons of 11 to 7 months. The reserve requirement ratio for small banks proves to be a robust predictor at forecast horizons of 9 to 5 months, whereas the big banks' reserve requirement ratio and the prime lending rate have lost their leading properties since 2009. Industrial production can be quite valuable for nowcasting or even forecasting, but only if it is released shortly after the end of a month. Neither monthly retail sales, investment, trade, electricity usage, freight traffic nor the manufacturing purchasing managers' index of the Chinese National Bureau of Statistics helps much for nowcasting or forecasting. Our results might be relevant for experts who need to know which indicator releases are really valuable for predicting quarterly Chinese GDP growth, and which indicator releases have less predictive content.
    Keywords: Forecasting, mixed frequency data, MIDAS, China, GDP growth
    JEL: C53 E27
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:kof:wpskof:14-359&r=for
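The core MIDAS idea in the abstract above — combining monthly indicator lags into a single quarterly regressor through a tightly parameterised weight function — can be sketched as follows. The exponential Almon lag polynomial is a standard MIDAS weighting choice, but the paper does not state its exact specification, so the function names and parameters here are illustrative only.

```python
import math

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag weights, normalised to sum to one.

    theta1, theta2 shape the decay profile with only two parameters,
    which is what makes MIDAS feasible with many high-frequency lags.
    """
    raw = [math.exp(theta1 * k + theta2 * k ** 2) for k in range(n_lags)]
    total = sum(raw)
    return [w / total for w in raw]

def midas_regressor(monthly_series, n_lags, theta1, theta2):
    """Weighted combination of the n_lags most recent monthly observations,
    usable as a single regressor for quarterly GDP growth."""
    weights = exp_almon_weights(n_lags, theta1, theta2)
    recent = monthly_series[-n_lags:][::-1]  # most recent observation first
    return sum(w * x for w, x in zip(weights, recent))
```

With theta1 = theta2 = 0 the weights are equal and the regressor reduces to a simple average of the recent monthly values; negative thetas downweight older months.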
  2. By: Leonard Smith; Hailiang Du; Emma Suckling; Falk Niehörster
    Abstract: Operational seasonal forecasting centres employ simulation models to make probability forecasts of future conditions on seasonal to annual lead times. Skill in such forecasts is reflected in the information they add to purely empirical statistical models, or to earlier versions of simulation models. An evaluation of seasonal probability forecasts from the DEMETER and the ENSEMBLES multi-model ensemble experiments is presented. Two particular regions are considered (Nino3.4 in the Pacific and the Main Development Region in the Atlantic); these regions were chosen before any spatial distribution of skill was examined. The ENSEMBLES models are found to have skill against the climatological distribution on seasonal time scales; for models in ENSEMBLES which have a clearly defined predecessor model in DEMETER, the improvement from DEMETER to ENSEMBLES is discussed. Due to the long lead times of the forecasts and the evolution of observation technology, the forecast-outcome archive for seasonal forecast evaluation is small; arguably, evaluation data for seasonal forecasting will always be precious. Issues of information contamination from in-sample evaluation are discussed, and impacts (both positive and negative) of variations in cross-validation protocol are demonstrated. Other difficulties due to the small forecast-outcome archive are identified. The claim that the multi-model ensemble provides a “better” probability forecast than the best single model is examined and challenged. Significant forecast information beyond the climatological distribution is also found in a probability forecast based on persistence. On seasonal time scales, the ENSEMBLES simulation-based probability forecasts add significantly more information to empirical probability forecasts than on decadal scales. It is suggested that the most skilful operational seasonal forecasts available would meld information from both simulation models and empirical models.
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:lsg:lsgwps:wp151&r=for
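The abstract above evaluates probabilistic skill relative to a climatological reference. It does not name the score used, but the ignorance (negative log) score is a common choice in this literature; the sketch below shows how skill relative to climatology might be computed under that assumption.

```python
import math

def mean_ignorance(probs):
    """Mean ignorance score: average negative log2 of the probability the
    forecast assigned to each verifying outcome. Lower is better."""
    return -sum(math.log2(p) for p in probs) / len(probs)

def skill_vs_climatology(forecast_probs, clim_probs):
    """Relative ignorance: bits of information the forecast adds over the
    climatological reference; positive values indicate skill."""
    return mean_ignorance(clim_probs) - mean_ignorance(forecast_probs)
```

For example, a forecast that consistently puts probability 0.5 on outcomes to which climatology assigns 0.25 adds one bit of information per forecast.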
  3. By: Bertrand Candelon; Jameel Ahmed; Stefan Straetmans
    Abstract: This paper attempts to predict bear conditions on the US stock market. To this aim we elaborate simple predictive regressions, static and dynamic binary choice models (BCM) as well as Markov-switching models. The in- and out-of-sample prediction ability is evaluated, and we compare the forecasting performance of various specifications across as well as within models. It turns out that various dynamic extensions of the static probit and logit models reveal additional predictive information for both in- and out-of-sample fit. We also find that binary models outperform the Markov-switching model. With respect to the macro-financial variables, term spreads, inflation and money supply turn out to be useful predictors. The results have useful implications for investors practicing active portfolio and risk management, and offer policy makers tools for early warning signals.
    Keywords: Bear stock market, S&P 500 Index, Macro-financial variables, Dynamic Binary Response models, Markov-switching model, Bry-Boschan algorithm, Active Trading Strategies.
    JEL: C22 C25 C53 G11 G17
    Date: 2014–07–15
    URL: http://d.repec.org/n?u=RePEc:ipg:wpaper:2014-409&r=for
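A dynamic probit of the kind the abstract above describes adds the lagged bear-market indicator to the macro-financial predictors. The sketch below shows the prediction step only (the paper estimates the coefficients; the variable names and coefficient dictionary here are purely illustrative).

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (the probit link)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def dynamic_probit_prob(term_spread, inflation, prev_bear, b):
    """P(bear market at t) from a dynamic probit: the lagged bear
    indicator prev_bear enters alongside the macro-financial predictors.
    b holds (hypothetical, estimated-elsewhere) coefficients."""
    z = (b["const"] + b["spread"] * term_spread
         + b["infl"] * inflation + b["lag"] * prev_bear)
    return norm_cdf(z)
```

A logit variant would simply swap the link, 1 / (1 + exp(-z)), for the normal CDF.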
  4. By: V. KAMINI; V. RAVI; A. PRINZIE; D. VAN DEN POEL (-)
    Abstract: To improve ATMs’ cash demand forecasts, this paper advocates the prediction of cash demand for groups of ATMs with similar day-of-the-week cash demand patterns. We first clustered ATM centers into ATM clusters having similar day-of-the-week withdrawal patterns. To retrieve “day-of-the-week” withdrawal seasonality parameters (the effect of a Monday, etc.) we built a time series model for each ATM. For clustering, the succession of 7 continuous daily withdrawal seasonality parameters of ATMs is discretized. Next, the similarity between the different ATMs’ discretized daily withdrawal seasonality sequences is measured by the Sequence Alignment Method (SAM). For each cluster of ATMs, four neural networks, viz. general regression neural network (GRNN), multilayer feed-forward neural network (MLFF), group method of data handling (GMDH) and wavelet neural network (WNN), are built to predict an ATM center’s cash demand. The proposed methodology is applied to the NN5 competition dataset. We observed that GRNN yielded the best result of 18.44% symmetric mean absolute percentage error (SMAPE), which is better than the result of Andrawis et al. (2011). This is due to clustering followed by a forecasting phase. Further, the proposed approach yielded much smaller SMAPE values than direct prediction on the entire sample without clustering. From a managerial perspective, the clusterwise cash demand forecast helps the bank’s top management to design similar cash replenishment plans for all the ATMs in the same cluster. These cluster-level replenishment plans could result in large operational cost savings for ATMs operating in a similar geographical region.
    Keywords: Time Series, Neural Networks, SAM method, Clustering, ATM Cash withdrawal forecasting
    Date: 2013–11
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:13/865&r=for
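The SMAPE figure reported in the abstract above is the standard NN5 competition metric. A minimal sketch of that metric (assuming the usual definition with the mean of |actual| and |forecast| in the denominator, which the paper does not restate):

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent:
    mean of |a - f| / ((|a| + |f|) / 2) over all periods.
    Periods where both values are zero are skipped to avoid 0/0."""
    terms = [abs(a - f) / ((abs(a) + abs(f)) / 2.0)
             for a, f in zip(actual, forecast) if (abs(a) + abs(f)) > 0]
    return 100.0 * sum(terms) / len(terms)
```

A perfect forecast scores 0%; forecasting 50 when the actual is 100 contributes 50 / 75 ≈ 66.7% for that period.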
  5. By: Emma Suckling; Leonard Smith
    Abstract: While state-of-the-art models of the Earth’s climate system have improved tremendously over the last twenty years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of those models not only with each other but also with that of empirical models can effectively quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts for annual global-mean and regional-mean temperatures from the EU ENSEMBLES project is contrasted with that of several empirical models. Both the ENSEMBLES models and a “Dynamic Climatology” empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The Dynamic Climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today’s state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models, and clarify current limitations in using simulation forecasting for decision support. Ultimately the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress towards that goal which is not available in model-model intercomparisons.
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:lsg:lsgwps:wp150&r=for
  6. By: Diep Duong (Rutgers University); Norman Swanson (Rutgers University)
    Abstract: Many recent modelling advances in finance topics ranging from the pricing of volatility-based derivative products to asset management are predicated on the importance of jumps, or discontinuous movements in asset returns. In light of this, a number of recent papers have addressed volatility predictability, some from the perspective of the usefulness of jumps in forecasting volatility. Key papers in this area include Andersen, Bollerslev, Diebold and Labys (2003), Corsi (2004), Andersen, Bollerslev and Diebold (2007), Corsi, Pirino and Reno (2008), Barndorff-Nielsen, Kinnebrock, and Shephard (2010), Patton and Shephard (2011), and the references cited therein. In this paper, we review the extant literature and then present new empirical evidence on the predictive content of realized measures of jump power variations (including upside and downside risk, jump asymmetry, and truncated jump variables), constructed using instantaneous returns, i.e., |r_{t}|^{q}, 0≤q≤6, in the spirit of Ding, Granger and Engle (1993) and Ding and Granger (1996). Our prediction experiments use high frequency price returns constructed using S&P500 futures data as well as stocks in the Dow 30; and our empirical implementation involves estimating linear and nonlinear heterogeneous autoregressive realized volatility (HAR-RV) type models. We find that past "large" jump power variations help less in the prediction of future realized volatility than past "small" jump power variations. Additionally, we find evidence that past realized signed jump power variations, which have not previously been examined in this literature, are strongly correlated with future volatility, and that past downside jump variations matter in prediction. Finally, incorporation of downside and upside jump power variations does improve predictability, albeit to a limited extent.
    Keywords: realized volatility, jumps, jump power variations, forecasting, jump test
    JEL: C58 C53 C22
    Date: 2013–07–27
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201321&r=for
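The realized power-variation and signed-jump measures described in the abstract above amount to summing powered absolute intraday returns. A minimal sketch, assuming the plain (untruncated) versions of the measures; the function names are illustrative, not the authors':

```python
def realized_power_variation(returns, q):
    """Sum of |r|^q over intraday returns. q = 2 gives realized variance;
    larger q weights big, jump-like moves more heavily."""
    return sum(abs(r) ** q for r in returns)

def signed_jump_power(returns, q):
    """Upside minus downside power variation: the signed measure separates
    the contribution of positive and negative intraday returns."""
    upside = sum(r ** q for r in returns if r > 0)
    downside = sum((-r) ** q for r in returns if r < 0)
    return upside - downside
```

In a HAR-RV-type forecasting regression, these daily measures (and their weekly and monthly averages) would enter as predictors of future realized volatility.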
  7. By: Michael P. Clements (ICMA Centre, Henley Business School, University of Reading)
    Abstract: We consider whether imposing long-run restrictions on survey respondents’ long-horizon forecasts will enhance their accuracy. The restrictions are motivated by the belief that the macro-variables consumption, investment and output move together in the long run, and that this should be evident in long-horizon forecasts. The restrictions are imposed by exponential-tilting of simple auxiliary forecast densities. We find a modest overall improvement in forecast accuracy of around 7% on MSFE for the consumption-output ratio, but there are times when much larger gains were realizable. The transformation of the data/forecasts on which accuracy is assessed is shown to play an important role.
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:rdg:icmadp:icma-dp2014-02&r=for
  8. By: Davide Delle Monache (Queen Mary University of London); Ivan Petrella (Birkbeck, University of London and CEPR)
    Abstract: This paper proposes a novel and flexible framework to estimate autoregressive models with time-varying parameters. Our setup nests various adaptive algorithms that are commonly used in the macroeconometric literature, such as learning-expectations and forgetting-factor algorithms. These are generalized along several directions: specifically, we allow for both Student-t distributed innovations as well as time-varying volatility. Meaningful restrictions are imposed to the model parameters, so as to attain local stationarity and bounded mean values. The model is applied to the analysis of inflation dynamics. Allowing for heavy-tails leads to a significant improvement in terms of fit and forecast. Moreover, it proves to be crucial in order to obtain well-calibrated density forecasts.
    Keywords: Time-varying parameters, Score-driven models, Heavy-tails, Adaptive algorithms, Inflation
    JEL: C22 C51 C53 E31
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp720&r=for
  9. By: William Paul Bell (School of Economics, University of Queensland); Phil Wild (School of Economics, University of Queensland); John Foster (School of Economics, University of Queensland)
    Abstract: This report primarily aims to provide both dispatch and wholesale spot price forecasts for the lifetime of the proposed hybrid gas-solar thermal plant at Collinsville. This report is the second of two reports and uses the findings of Bell, Wild and Foster (2014b) in the first report.
    Keywords: Energy Economics, Electricity Markets, Renewable Energy, Solar Thermal
    JEL: Q48 Q41 Q43 L94 C61 Q2
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:qld:uqeemg:6-2014&r=for
  10. By: Stelios D. Bekiros; Alessia Paccagnini
    Abstract: Micro-founded dynamic stochastic general equilibrium (DSGE) models appear to be particularly suited for evaluating the consequences of alternative macroeconomic policies. Recently, increasing efforts have been undertaken by policymakers to use these models for forecasting, although this proved to be problematic due to estimation and identification issues. Hybrid DSGE models have become popular for dealing with some of the model misspecifications and the trade-off between theoretical coherence and empirical fit, thus allowing them to compete in terms of predictability with VAR models. However, DSGE and VAR models are still linear and they do not consider time-variation in parameters that could account for inherent nonlinearities and capture the adaptive underlying structure of the economy in a robust manner. This study conducts a comparative evaluation of the out-of-sample predictive performance of many different specifications of DSGE models and various classes of VAR models, using datasets for the real GDP, the harmonized CPI and the nominal short-term interest rate series in the Euro area. Simple and hybrid DSGE models were implemented, including DSGE-VAR and Factor Augmented DSGE, and tested against standard, Bayesian and Factor Augmented VARs. Moreover, a new state-space time-varying VAR model is presented. The total period spanned from 1970:1 to 2010:4 with an out-of-sample testing period of 2006:1-2010:4, which covers the global financial crisis and the EU debt crisis. The results of this study can be useful in conducting monetary policy analysis and macro-forecasting in the Euro area.
    Keywords: Model validation, Forecasting, Factor Augmented DSGE, Time-varying parameter VAR, DSGE-VAR, Bayesian analysis
    JEL: C11 C15 C32
    Date: 2014–07–15
    URL: http://d.repec.org/n?u=RePEc:ipg:wpaper:2014-426&r=for
  11. By: Giang Ho; Paolo Mauro
    Abstract: Forecasters often predict continued rapid economic growth into the medium and long term for countries that have recently experienced strong growth. Using long-term forecasts of economic growth from the IMF/World Bank staff Debt Sustainability Analyses for a panel of countries, we show that the baseline forecasts are more optimistic than warranted by past international growth experience. Further, by comparing the IMF’s World Economic Outlook forecasts with actual growth outcomes, we show that optimism bias is greater the longer the forecast horizon.
    Date: 2014–07–02
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:14/117&r=for
  12. By: Jaqueson K. Galimberti (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Marcelo L. Moura (Insper, São Paulo, Brazil)
    Abstract: Measuring economic activity in real time is a crucial issue in applied research and in the decision-making process of policy makers; however, it also poses intricate challenges to statistical filtering methods that are built to operate optimally under the auspices of an infinite number of observations. In this paper, we propose and evaluate the use of survey forecasts to augment one of those methods, namely the widely used Hodrick-Prescott filter, so as to attenuate the end-of-sample uncertainty observed in the resulting gap estimates. We find that this approach achieves powerful improvements in the real-time reliability of these economic activity measures, and we argue that the use of surveys is preferable to model-based forecasts due both to their usually superior accuracy in predicting current and future states of the economy and to their parsimony.
    Keywords: business cycle measurement, end-of-sample uncertainty, gap and trend
    JEL: E32 E37
    Date: 2014–07
    URL: http://d.repec.org/n?u=RePEc:kof:wpskof:14-360&r=for
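The Hodrick-Prescott trend used in the abstract above solves a penalised least-squares problem, (I + λD'D)τ = y, where D is the second-difference operator. A minimal dense-matrix sketch (illustrative, suited only to short samples; the paper's augmentation amounts to appending survey forecasts to y before filtering, then discarding the padded trend values):

```python
def hp_trend(y, lam=1600.0):
    """Hodrick-Prescott trend of series y: solve (I + lam * D'D) tau = y
    by Gaussian elimination, where D takes second differences."""
    n = len(y)
    # Second-difference operator D: (n-2) x n
    D = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        D[i][i], D[i][i + 1], D[i][i + 2] = 1.0, -2.0, 1.0
    # A = I + lam * D'D
    A = [[(1.0 if i == j else 0.0)
          + lam * sum(D[k][i] * D[k][j] for k in range(n - 2))
          for j in range(n)] for i in range(n)]
    b = list(y)
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back-substitution
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):
        tau[r] = (b[r] - sum(A[r][c] * tau[c]
                             for c in range(r + 1, n))) / A[r][r]
    return tau
```

For a perfectly linear series the second differences vanish, so the trend reproduces the series exactly; the gap estimate is then y minus the trend.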

This nep-for issue is ©2014 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.