nep-for New Economics Papers
on Forecasting
Issue of 2019‒06‒17
seven papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting the US GDP Components in the short run By Saulius Jokubaitis; Dmitrij Celov
  2. Comparing Forecasting Performance with Panel Data By Timmermann, Allan G; Zhu, Yinchu
  3. Memory that Drives! New Insights into Forecasting Performance of Stock Prices from SEMIFARMA-AEGAS Model. By Mohamed Chikhi; Claude Diebolt; Tapas Mishra
  4. Central Bank Communication That Works: Lessons from Lab Experiments By Oleksiy Kryvtsov; Luba Petersen
  5. The Hard Problem of Prediction for Conflict Prevention By Mueller, Hannes Felix; Rauh, Christopher
  6. Designing Robust Monetary Policy Using Prediction Pools By Szabolcs Deák; Paul Levine; Afrasiab Mirza; Joseph Pearlman
  7. A New Tidy Data Structure to Support Exploration and Modeling of Temporal Data By Earo Wang; Dianne Cook; Rob J Hyndman

  1. By: Saulius Jokubaitis; Dmitrij Celov
    Abstract: The aim of this paper is to produce short-term forecasts of the US GDP components, measured by the expenditure approach, before they are officially released by national statistical institutes. To this end, nowcasts along with 1- and 2-quarter-ahead forecasts are estimated using available monthly information, which is officially released with a considerably smaller delay. The high dimensionality of the monthly dataset is handled by assuming sparse structures of leading indicators capable of adequately explaining the dynamics of the GDP components. Variable selection and forecast estimation are performed using the LASSO method, together with some of its popular modifications. Additionally, a modification of the LASSO is proposed that combines LASSO with principal components in order to further improve forecasting performance. Forecast accuracy is evaluated through pseudo-real-time forecasting exercises for four GDP components over the 2005-2015 sample and compared with benchmark ARMA models. The main results suggest that LASSO outperforms ARMA models when forecasting the GDP components and identifies leading explanatory variables. The proposed modification of the LASSO shows further improvements in forecast accuracy in some cases.
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1906.07992&r=all
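    Sketch: One plausible reading of the LASSO-plus-principal-components combination, illustrated with scikit-learn on synthetic data. The two-step design, variable names and data are assumptions for illustration, not the authors' exact estimator.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(0)

      # Hypothetical data: 120 quarters of 200 monthly-indicator summaries
      # and one GDP component driven by a handful of them.
      X = rng.standard_normal((120, 200))
      y = X[:, :5] @ rng.standard_normal(5) + 0.5 * rng.standard_normal(120)

      # Step 1: LASSO with a cross-validated penalty picks a sparse indicator set.
      lasso = LassoCV(cv=5).fit(X, y)
      selected = np.flatnonzero(lasso.coef_)

      # Step 2: compress the discarded indicators into a few principal
      # components and re-estimate the LASSO on the augmented regressor set.
      rest = np.setdiff1d(np.arange(X.shape[1]), selected)
      pcs = PCA(n_components=5).fit_transform(X[:, rest])
      X_aug = np.hstack([X[:, selected], pcs])
      model = LassoCV(cv=5).fit(X_aug, y)
      print(selected.size, "indicators kept; R^2 =", round(model.score(X_aug, y), 2))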
  2. By: Timmermann, Allan G; Zhu, Yinchu
    Abstract: This paper develops new methods for testing equal predictive accuracy in panels of forecasts that exploit information in the time series and cross-sectional dimensions of the data. Using a common factor setup, we establish conditions on cross-sectional dependencies in forecast errors which allow us to conduct inference and compare performance on a single cross-section of forecasts. We consider unconditional tests of equal predictive accuracy as well as tests that condition on the realization of common factors, and show how to decompose forecast errors into exposures to common factors and an idiosyncratic variance component. Our tests are demonstrated in an empirical application that compares IMF forecasts of country-level real GDP growth and inflation to private-sector survey forecasts and forecasts from a simple time-series model.
    Keywords: Economic forecasting; GDP growth; Inflation forecasts; panel data
    Date: 2019–05
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:13746&r=all
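    Sketch: The simplest special case of testing equal predictive accuracy on a single cross-section: a t-test on squared-error loss differentials across countries. The data below are synthetic, and this naive test ignores the cross-sectional dependence through common factors that the paper's methods are built to handle.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      # Hypothetical forecast errors of two forecasters for 80 countries
      # at the same date (a single cross-section).
      n = 80
      err_a = rng.standard_normal(n)           # e.g. institutional forecasts
      err_b = 1.1 * rng.standard_normal(n)     # e.g. private-sector survey

      # Loss differentials under squared-error loss; equal predictive
      # accuracy means their mean is zero.
      d = err_a**2 - err_b**2
      t_stat, p_value = stats.ttest_1samp(d, 0.0)
      print(f"t = {t_stat:.2f}, p = {p_value:.3f}")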
  3. By: Mohamed Chikhi; Claude Diebolt; Tapas Mishra
    Abstract: Stock price forecasting, a popular growth-enhancing exercise for investors, is inherently complex, thanks to the interplay of financial economic drivers which determine both the magnitude of memory and the extent of non-linearity within a system. In this paper, we accommodate both features within a single estimation framework to forecast stock prices and identify the nature of market efficiency commensurate with the proposed model. We combine a semiparametric autoregressive fractionally integrated moving average (SEMIFARMA) model with asymmetric exponential generalized autoregressive score (AEGAS) errors to design a SEMIFARMA-AEGAS framework, whose predictive performance is tested against competing methods. Our conditional variance includes leverage effects, jumps and a fat-tailed, skewed distribution, each of which affects the magnitude of memory in a stock price system. A true forecast function is built and new insights into stock price forecasting are presented. We estimate several models by skewed Student-t maximum likelihood and find that informational shocks have permanent effects on returns and that SEMIFARMA-AEGAS is appropriate for capturing volatility clustering for both negative (long Value-at-Risk) and positive (short Value-at-Risk) returns. We show that this model has better predictive performance than competing models over long and some short time horizons, and that its predictions comfortably beat the random walk model. Our results have implications for market efficiency: the weak-form efficiency assumption of financial markets is violated for all stock price returns studied over a long period.
    Keywords: Stock price forecasting; SEMIFARMA model; AEGAS model; Skewed Student-t maximum likelihood; Asymmetry; Jumps.
    JEL: C14 C58 C22 G17
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:ulp:sbbeta:2019-24&r=all
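    Sketch: The long-memory building block of any SEMIFARMA-type model is the fractional difference (1 - L)^d. Below is a toy NumPy illustration on simulated prices; the semiparametric trend and the AEGAS error dynamics of the actual model are not attempted here, and d = 0.3 with a 100-lag truncation is an arbitrary assumption.

      import numpy as np

      def frac_diff_weights(d, n_lags):
          # Coefficients of the binomial expansion of (1 - L)^d.
          w = np.ones(n_lags + 1)
          for k in range(1, n_lags + 1):
              w[k] = w[k - 1] * (k - 1 - d) / k
          return w

      rng = np.random.default_rng(2)
      log_prices = np.cumsum(0.01 * rng.standard_normal(1000))
      w = frac_diff_weights(0.3, 100)

      # Apply the truncated filter: each output is sum_k w[k] * x[t - k].
      filtered = np.convolve(log_prices, w, mode="valid")
      print(filtered[:3])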
  4. By: Oleksiy Kryvtsov; Luba Petersen
    Abstract: We use controlled laboratory experiments to test the causal effects of central bank communication on economic expectations and to distinguish the underlying mechanisms of those effects. In an experiment where subjects learn to forecast economic variables, we find that central bank communication has a stabilizing effect on individual and aggregate outcomes and that the size of the effect varies with the type of communication. Announcing past interest rate changes has the largest effect, reducing individual price and expenditure forecast volatility by one-third and two-thirds, respectively; cutting inflation volatility in half; and improving price-level stability. Forward-looking announcements in the form of projections and forward guidance of upcoming rate decisions have a smaller effect on individual forecasts, especially if they do not clarify the timing of future policy changes. Our evidence does not link the effects of communication to forecasters’ ability to predict future nominal interest rates. Rather, communication is effective via simple and relatable backward-looking announcements that exert a strong influence on less-accurate forecasters. We conclude that increasing the accessibility of central bank information to the general public is a promising direction for improving central bank communication.
    Keywords: Monetary policy implementation; Transmission of monetary policy
    JEL: C9 D84 E3 E52
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:19-21&r=all
  5. By: Mueller, Hannes Felix; Rauh, Christopher
    Abstract: There is growing interest in better conflict prevention, and this provides a strong motivation for better conflict forecasting. A key problem of conflict forecasting for prevention is that predicting the start of conflict in previously peaceful countries is extremely hard. To make progress on this hard problem, this project exploits both supervised and unsupervised machine learning. Specifically, the latent Dirichlet allocation (LDA) model is used for feature extraction from 3.8 million newspaper articles, and these features are then used in a random forest model to predict conflict. We find that forecasting hard cases is possible and benefits from supervised learning despite the small sample size. Several topics are negatively associated with the outbreak of conflict, and these gain importance when predicting hard onsets. The trees in the random forest use the topics in lower nodes, where they are evaluated conditionally on conflict history; this allows the random forest to adapt to the hard problem and provides useful forecasts for prevention.
    Keywords: Armed Conflict; Forecasting; Machine Learning; Newspaper Text; Random Forest; Topic Models
    Date: 2019–05
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:13748&r=all
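    Sketch: The unsupervised-then-supervised pipeline the abstract describes, in scikit-learn. The toy corpus, labels and topic count are invented for illustration; the paper works with 3.8 million articles and conditions on conflict history as well.

      import numpy as np
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_extraction.text import CountVectorizer

      # Hypothetical country-period documents with conflict-onset labels.
      docs = [
          "peace talks economy trade growth investment",
          "protest army clashes border troops unrest",
          "election campaign parliament vote reform",
          "rebels attack soldiers violence displacement",
      ] * 25
      labels = np.array([0, 1, 0, 1] * 25)

      # Step 1 (unsupervised): LDA turns word counts into topic shares.
      counts = CountVectorizer().fit_transform(docs)
      topics = LatentDirichletAllocation(n_components=3,
                                         random_state=0).fit_transform(counts)

      # Step 2 (supervised): a random forest maps topic shares to onset risk.
      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      rf.fit(topics, labels)
      print(rf.predict_proba(topics[:2])[:, 1])  # in-sample onset probabilities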
  6. By: Szabolcs Deák (University of Surrey); Paul Levine (University of Surrey); Afrasiab Mirza (University of Birmingham); Joseph Pearlman (City University)
    Abstract: How should a forward-looking policy maker conduct monetary policy when she has a finite set of models at her disposal, none of which is believed to be the true data generating process? In our approach, the policy maker first assigns weights to models based on relative forecasting performance rather than in-sample fit, consistent with her forward-looking objective. These weights are then used to solve a policy design problem that selects the optimized Taylor-type interest-rate rule that is robust to model uncertainty across a set of well-established DSGE models with and without financial frictions. We find that the choice of weights has a significant impact on the robust optimized rule, which is more inertial and aggressive than either its non-robust single-model counterparts or the optimal robust rule based on backward-looking weights, as in the common alternative of Bayesian model averaging. Importantly, we show that a price-level rule has excellent welfare and robustness properties and therefore should be viewed as a key instrument for policy makers facing uncertainty over the nature of financial frictions.
    JEL: D52 D53 E44 G18 G23
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:sur:surrec:1219&r=all
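    Sketch: One simple way to turn relative forecasting performance into model weights is a predictive-likelihood pool: exponentiate each model's accumulated out-of-sample log predictive score and normalize. The numbers below are simulated, and the paper's weighting scheme and policy optimization are richer than this.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical one-step-ahead log predictive densities for 3 DSGE
      # models over a 40-period evaluation window (rows: periods).
      log_scores = rng.normal(loc=[-1.0, -1.2, -1.5], scale=0.3, size=(40, 3))

      # Weight by forecasting performance: softmax of summed log scores,
      # shifted by the maximum for numerical stability.
      s = log_scores.sum(axis=0)
      w = np.exp(s - s.max())
      w /= w.sum()
      print(np.round(w, 3))

      # A robust rule would then be optimized against the w-weighted pool
      # of models rather than against any single model.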
  7. By: Earo Wang; Dianne Cook; Rob J Hyndman
    Abstract: Mining temporal data for information is often inhibited by a multitude of formats: irregular or multiple time intervals, point events that need aggregating, multiple observational units or repeated measurements on multiple individuals, and heterogeneous data types. On the other hand, the software supporting time series modeling and forecasting makes strict assumptions on the data to be provided, typically requiring a matrix of numeric data with implicit time indexes. Going from raw data to model-ready data is painful. This work presents a cohesive and conceptual framework for organizing and manipulating temporal data, which in turn flows into visualization, modeling and forecasting routines. Tidy data principles are extended to temporal data by: (1) mapping the semantics of a dataset into its physical layout; (2) including an explicitly declared index variable representing time; (3) incorporating a “key” comprising single or multiple variables to uniquely identify units over time. This tidy data representation most naturally supports thinking of operations on the data as building blocks, forming part of a “data pipeline” in time-based contexts. A sound data pipeline facilitates a fluent workflow for analyzing temporal data. The infrastructure of tidy temporal data has been implemented in the R package tsibble.
    Keywords: time series, data wrangling, tidy data, R, forecasting, data science, exploratory data analysis, data pipelines
    JEL: C88 C81 C82 C22 C32
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2019-12&r=all
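    Sketch: The index-plus-key idea, mimicked in pandas. The real implementation is the R package tsibble; this Python analog only shows how declaring a time index and a unit key up front makes later pipeline steps generic. The example data are invented.

      import pandas as pd

      flights = pd.DataFrame({
          "airline": ["AA", "AA", "UA", "UA"],          # key: identifies units
          "month": pd.to_datetime(["2019-01-01", "2019-02-01",
                                   "2019-01-01", "2019-02-01"]),  # index: time
          "delay": [11.2, 9.8, 14.1, 12.3],
      })

      # With key + index declared, aggregation and modeling steps become
      # reusable building blocks of a data pipeline.
      tidy = flights.set_index(["airline", "month"]).sort_index()
      print(tidy.groupby(level="month")["delay"].mean())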

This nep-for issue is ©2019 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.