nep-for New Economics Papers
on Forecasting
Issue of 2020‒01‒13
thirteen papers chosen by
Rob J Hyndman
Monash University

  1. Assessing Nowcast Accuracy of US GDP Growth in Real Time: The Role of Booms and Busts By Boriss Siliverstovs
  2. From fixed-event to fixed-horizon density forecasts: obtaining measures of multi-horizon uncertainty from survey density forecasts By Gergely Ganics; Barbara Rossi; Tatevik Sekhposyan
  3. Bayesian VAR forecasts, survey information and structural change in the euro area By Gergely Ganics; Florens Odendahl
  4. Adaptive Dynamic Model Averaging with an Application to House Price Forecasting By Alisa Yusupova; Nicos G. Pavlidis; Efthymios G. Pavlidis
  5. Forecasting Bitcoin closing price series using linear regression and neural networks models By Nicola Uras; Lodovica Marchesi; Michele Marchesi; Roberto Tonelli
  6. A Gated Recurrent Unit Approach to Bitcoin Price Prediction By Aniruddha Dutta; Saket Kumar; Meheli Basu
  7. Predicting disaggregated tourist arrivals in Sierra Leone using ARIMA model By Jackson, Emerson Abraham; Tamuke, Edmund
  8. Predicting intraday jumps in stock prices using liquidity measures and technical indicators By Ao Kong; Hongliang Zhu; Robert Azencott
  9. Bayesian Median Autoregression for Robust Time Series Forecasting By Zijian Zeng; Meng Li
  10. Disaggregated Short-Term Inflation Forecast (STIF) for Monetary Policy Decision in Sierra Leone By Jackson, Emerson Abraham; Tamuke, Edmund; Jabbie, Mohamed
  11. Forecasting Implied Volatility Smile Surface via Deep Learning and Attention Mechanism By Shengli Chen; Zili Zhang
  12. On the Stability and Growth Pact compliance: what is predictable with machine learning? By Kea BARET; Theophilos PAPADIMITRIOU
  13. Forecasting TB notifications at Zengeza clinic in Chitungwiza, Zimbabwe By Nyoni, Smartson. Pumulani; Nyoni, Thabani

  1. By: Boriss Siliverstovs (Bank of Latvia)
    Abstract: In this paper we reassess the forecasting performance of the Bayesian mixed-frequency model suggested in Carriero et al. (2015) in terms of point and density forecasts of the GDP growth rate using US macroeconomic data. Following Chauvet and Potter (2013), we evaluate the forecasting accuracy of the model relative to a univariate AR(2) model separately for expansions and recessions, as defined by the NBER business cycle chronology, rather than relying on a comparison of forecast accuracy over the whole forecast sample spanning from the first quarter of 1985 to the third quarter of 2011. We find that most of the evidence favouring the more sophisticated model over the simple benchmark model is due to relatively few observations during recessions, especially those during the Great Recession. In contrast, during expansions the gains in forecasting accuracy over the benchmark model are at best very modest. This implies that the relative forecasting performance of the models varies with business cycle phases. Ignoring this fact results in a distorted picture: the relative performance of the more sophisticated model in comparison with the naive benchmark model tends to be overstated during expansions and understated during recessions.
    Keywords: nowcasting, mixed-frequency data, real-time data, business cycle
    JEL: C22 C53
    Date: 2019–03–27
  2. By: Gergely Ganics (Banco de España); Barbara Rossi (ICREA – Univ. Pompeu Fabra, Barcelona GSE, and CREI); Tatevik Sekhposyan (Texas A&M University)
    Abstract: Surveys of Professional Forecasters produce precise and timely point forecasts for key macroeconomic variables. However, the accompanying density forecasts are not as widely utilized, and there is no consensus about their quality. This is partly because such surveys are often conducted for “fixed events”. For example, in each quarter panelists are asked to forecast output growth and inflation for the current calendar year and the next, implying that the forecast horizon changes with each survey round. The fixed-event nature limits the usefulness of survey density predictions for policymakers and market participants, who often wish to characterize uncertainty a fixed number of periods ahead (“fixed-horizon”). Is it possible to obtain fixed-horizon density forecasts using the available fixed-event ones? We propose a density combination approach that weights fixed-event density forecasts according to a uniformity of the probability integral transform criterion, aiming at obtaining a correctly calibrated fixed-horizon density forecast. Using data from the US Survey of Professional Forecasters, we show that our combination method produces competitive density forecasts relative to widely used alternatives based on historical forecast errors or Bayesian VARs. Thus, our proposed fixed-horizon predictive densities are a new and useful tool for researchers and policy makers.
    Keywords: Survey of Professional Forecasters, density forecasts, forecast combination, predictive density, probability integral transform, uncertainty, real-time
    JEL: C13 C32 C53
    Date: 2019–12
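The calibration criterion the paper builds on — uniformity of the probability integral transform (PIT) — can be illustrated with a minimal sketch (an illustrative computation, not the authors' combination method): for a well-calibrated density forecast, the PIT values z_t = F_t(y_t) should be close to uniformly distributed, and the distance of their empirical CDF from the uniform CDF measures miscalibration.

```python
import numpy as np

def pit_uniformity_distance(pits):
    """Kolmogorov-Smirnov distance between the empirical CDF of the
    PIT values and the uniform CDF. Smaller values indicate better
    calibrated density forecasts."""
    z = np.sort(np.asarray(pits, dtype=float))
    n = len(z)
    ecdf_hi = np.arange(1, n + 1) / n   # ECDF just after each point
    ecdf_lo = np.arange(0, n) / n       # ECDF just before each point
    return max(np.max(ecdf_hi - z), np.max(z - ecdf_lo))
```

A combination weight criterion of this kind would then favor density combinations whose PIT values minimize such a distance.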
  3. By: Gergely Ganics (Banco de España); Florens Odendahl (Banque de France)
    Abstract: We incorporate external information extracted from the European Central Bank’s Survey of Professional Forecasters into the predictions of a Bayesian VAR, using entropic tilting and soft conditioning. The resulting conditional forecasts significantly improve the plain BVAR point and density forecasts. Importantly, we do not restrict the forecasts at a specific quarterly horizon, but their possible paths over several horizons jointly, as the survey information comes in the form of one- and two-year-ahead expectations. Besides improving the accuracy of the variable that we target, the spillover effects on “other-than-targeted” variables are relevant in size and statistically significant. We document that the baseline BVAR exhibits an upward bias for GDP growth after the financial crisis, and our results provide evidence that survey forecasts can help mitigate the effects of structural breaks on the forecasting performance of a popular macroeconometric model. Furthermore, we provide evidence of unstable VAR dynamics, especially during and after the recent Great Recession.
    Keywords: Survey of Professional Forecasters, density forecasts, entropic tilting, soft conditioning
    JEL: C32 C53 E37
    Date: 2019–12
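Entropic tilting, as used in the paper above, reweights draws from the BVAR predictive distribution so that a moment condition (e.g. a survey-based target mean) holds exactly, while staying as close as possible to the original distribution in Kullback-Leibler divergence. A one-dimensional sketch (the function, bounds, and bisection scheme are our own illustrative choices, not the authors' implementation):

```python
import numpy as np

def entropic_tilt(draws, target_mean):
    """Exponentially tilt equal weights on `draws` so that the weighted
    mean equals `target_mean`, minimizing KL divergence from uniform
    weights. The tilting parameter lambda is found by bisection, using
    the fact that the tilted mean is increasing in lambda."""
    def tilted(lam):
        w = np.exp(lam * (draws - draws.mean()))  # centered for numerical stability
        return w / w.sum()
    lo, hi = -5.0, 5.0   # assumes the target is reachable in this range
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if tilted(mid) @ draws < target_mean:
            lo = mid
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))
```

With the weights in hand, tilted predictive quantiles are simply weighted quantiles of the original draws, which is how the conditional forecasts inherit the shape of the BVAR predictive distribution.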
  4. By: Alisa Yusupova; Nicos G. Pavlidis; Efthymios G. Pavlidis
    Abstract: Dynamic model averaging (DMA) combines the forecasts of a large number of dynamic linear models (DLMs) to predict the future value of a time series. The performance of DMA critically depends on the appropriate choice of two forgetting factors. The first of these controls the speed of adaptation of the coefficient vector of each DLM, while the second enables time variation in the model averaging stage. In this paper we develop a novel, adaptive dynamic model averaging (ADMA) methodology. The proposed methodology employs a stochastic optimisation algorithm that sequentially updates the forgetting factor of each DLM, and uses a state-of-the-art non-parametric model combination algorithm from the prediction with expert advice literature, which offers finite-time performance guarantees. An empirical application to quarterly UK house price data suggests that ADMA produces more accurate forecasts than the benchmark autoregressive model, as well as competing DMA specifications.
    Date: 2019–12
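The role of a forgetting factor can be sketched with recursive least squares, in which past observations are exponentially down-weighted at rate lambda so the coefficient vector can track structural change. This is a generic illustration of the mechanism, not the authors' ADMA algorithm; the function and constants are our own:

```python
import numpy as np

def rls_forgetting(X, y, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor `lam`.
    Smaller `lam` discounts old observations faster, letting the
    coefficient vector adapt more quickly to parameter change."""
    n, p = X.shape
    theta = np.zeros(p)          # coefficient estimate
    P = delta * np.eye(p)        # scaled inverse information matrix
    preds = np.empty(n)
    for t in range(n):
        x = X[t]
        preds[t] = x @ theta                     # one-step-ahead forecast
        k = P @ x / (lam + x @ P @ x)            # gain vector
        theta = theta + k * (y[t] - x @ theta)   # coefficient update
        P = (P - np.outer(k, x @ P)) / lam       # covariance update
    return theta, preds
```

Running several such recursions with different lambda values and weighting them by recent forecast accuracy is, in essence, what the model averaging stage does; ADMA's contribution is to update each lambda sequentially by stochastic optimisation instead of fixing it on a grid.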
  5. By: Nicola Uras; Lodovica Marchesi; Michele Marchesi; Roberto Tonelli
    Abstract: This paper studies how to forecast the daily closing price series of Bitcoin, using data on prices and volumes of prior days. Bitcoin price behaviour is still largely unexplored, presenting new opportunities. We compared our results with two recent works on Bitcoin price forecasting and with a well-known recent paper that uses Intel, National Bank and Microsoft daily NASDAQ closing prices spanning a 3-year interval. We followed different approaches in parallel, implementing both statistical techniques and machine learning algorithms. The SLR model for univariate series forecasting uses only closing prices, whereas the MLR model for multivariate series uses both price and volume data. We applied the ADF test to these series, which proved indistinguishable from a random walk. We also used two artificial neural networks: MLP and LSTM. We then partitioned the dataset into shorter sequences representing different price regimes, obtaining the best results when using more than one previous price, thus confirming our regime hypothesis. All the models were evaluated in terms of MAPE and relative RMSE. They performed well, and were overall better than those obtained in the benchmarks. Based on the results, we demonstrate the efficacy of the proposed methodology and its contribution to the state of the art.
    Date: 2020–01
  6. By: Aniruddha Dutta; Saket Kumar; Meheli Basu
    Abstract: In today's era of big data, deep learning and artificial intelligence have formed the backbone of cryptocurrency portfolio optimization. Researchers have investigated various state-of-the-art machine learning models to predict Bitcoin price and volatility. Machine learning models like recurrent neural networks (RNNs) and long short-term memory (LSTM) have been shown to perform better than traditional time series models in cryptocurrency price prediction. However, very few studies have applied sequence models with robust feature engineering to predict future prices. In this study, we investigate a framework with a set of advanced machine learning methods and a fixed set of exogenous and endogenous factors to predict daily Bitcoin prices. We study and compare different approaches using the root mean squared error (RMSE). Experimental results show that a gated recurrent unit (GRU) model with recurrent dropout performs better than popular existing models. We also show that simple trading strategies, when implemented with our proposed GRU model and with proper learning, can lead to financial gain.
    Date: 2019–12
  7. By: Jackson, Emerson Abraham; Tamuke, Edmund
    Abstract: This study makes use of Box-Jenkins ARIMA models to address the three objectives set out, with a view to adding meaningful value to knowledge exploration. The research achieves this through successful nine-month out-of-sample forecasts produced from the program code, with the best model choices indicated by the empirical estimation. In addition, the results describe the risks shown by the uncertainty fan chart, which clearly outlines possible downside and upside risks to tourist visits in the country. In conclusion, it is suggested that the downside risks of low tourist arrivals can be managed through collaboration between the authorities concerned with managing tourist arrivals in the country.
    Keywords: ARIMA Methodology; Out-of-Sample Forecast; Tourist Arrivals; Sierra Leone
    JEL: C32 C52 C53 L83
    Date: 2019–09–13
  8. By: Ao Kong; Hongliang Zhu; Robert Azencott
    Abstract: Predicting intraday stock jumps is a significant but challenging problem in finance. Due to the instantaneous and imperceptible nature of intraday stock jumps, studies on their predictability remain limited. This paper proposes a data-driven approach to predicting intraday stock jumps using the information embedded in liquidity measures and technical indicators. Specifically, a trading day is divided into a series of 5-minute intervals, and at the end of each interval, candidate attributes defined by liquidity measures and technical indicators are fed into machine learning algorithms to predict the arrival of a stock jump, as well as its direction, in the following 5-minute interval. An empirical study is conducted on level-2 high-frequency data for 1271 stocks on the Shenzhen Stock Exchange of China to validate our approach. The results provide initial evidence of the predictability of jump arrivals and jump directions using level-2 stock data, as well as the effectiveness of combining liquidity measures and technical indicators in this prediction. We also show the superiority of random forests over other machine learning algorithms in building prediction models. Importantly, our study provides a portable data-driven approach that exploits liquidity and technical information from level-2 stock data to predict intraday price jumps of individual stocks.
    Date: 2019–12
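The interval-based setup described above can be sketched as: aggregate prices into 5-minute bars, compute per-bar attributes, and label whether the next bar contains a jump. The following is a simplified, hypothetical reconstruction using price-based features only; the paper's actual attributes include level-2 liquidity measures we do not have here:

```python
import numpy as np

def make_interval_dataset(prices, interval=5, jump_z=3.0):
    """Aggregate a 1-minute price series into `interval`-minute bars,
    build simple per-bar features, and label whether the NEXT bar
    contains a 'jump' (absolute log return beyond jump_z std devs).
    Illustrative only: price-based features stand in for the paper's
    liquidity measures and technical indicators."""
    prices = np.asarray(prices, dtype=float)
    n = len(prices) // interval * interval
    bars = prices[:n].reshape(-1, interval)
    close = bars[:, -1]
    ret = np.diff(np.log(close))             # log return of bars 1, 2, ...
    vol = bars.std(axis=1)[1:]               # within-bar volatility
    mom = np.diff(close)                     # close-to-close momentum
    thresh = jump_z * ret.std()
    X = np.column_stack([ret, vol, mom])[:-1]   # features observed at bar t
    y = (np.abs(ret[1:]) > thresh).astype(int)  # jump indicator for bar t+1
    return X, y
```

A classifier such as a random forest would then be trained on (X, y) pairs pooled across days and stocks.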
  9. By: Zijian Zeng; Meng Li
    Abstract: We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method uses time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by the working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model with the same structure as autoregressive (AR) models, replacing the Gaussian error with a Laplace one, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging (BMA) is used to account for model uncertainty, including uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR achieves favorable, and often superior, predictive performance compared to the selected mean-based alternatives under various loss functions. The proposed methods are generic and can be used to complement the rich class of methods built on AR models.
    Date: 2020–01
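The robustness gain from swapping the Gaussian error for a Laplace one can be illustrated on an AR(1): under a Laplace likelihood, maximum likelihood for the AR coefficient reduces to median (least-absolute-deviations) regression. A minimal sketch using grid search (our own illustration, not the authors' MCMC implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) with heavy-tailed (Laplace) innovations.
n, b_true = 400, 0.6
y = np.zeros(n)
eps = rng.laplace(scale=1.0, size=n)
for t in range(1, n):
    y[t] = b_true * y[t - 1] + eps[t]
y[::80] += 15.0                      # inject a few gross outliers

x, z = y[:-1], y[1:]

# Gaussian-likelihood (least-squares) AR(1) estimate
b_ols = (x @ z) / (x @ x)

# Laplace-likelihood (least-absolute-deviations) estimate via grid search
grid = np.linspace(-0.99, 0.99, 397)
b_lad = grid[np.argmin([np.abs(z - b * x).sum() for b in grid])]
```

With outliers present, the absolute-loss estimate stays close to the true coefficient while the least-squares estimate is pulled away, which is the robustness BayesMAR exploits in a fully Bayesian setting.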
  10. By: Jackson, Emerson Abraham; Tamuke, Edmund; Jabbie, Mohamed
    Abstract: In this paper, the researchers develop a short-term inflation forecasting (STIF) model using the Box-Jenkins time series approach (ARIMA) for analysing inflation and associated risks in Sierra Leone. The model is accompanied by fan charts for all thirteen components, including the headline CPI, as communication tools to inform the general public about the uncertainties surrounding price dynamics in Sierra Leone; this makes it possible for policy makers to apply expert judgement in a bid to stabilize the economy. The uniqueness of this paper lies in its interpretation of the risks to each of the disaggregated components, while also improving the credibility of decisions taken by policy makers at the Bank of Sierra Leone (BSL). Empirically, the Food and Non-Alcoholic Beverages, Housing and Health components indicate that shocks arising from within or outside Sierra Leone can significantly impact headline CPI, with an immediate pass-through effect of high prices on consumers’ spending, at least in the short run. The outcome of this empirical research shows the value of the disaggregated model in directing policy makers’ attention towards sector-specific policy interventions. The fan charts produced also highlight the degree of risk, based on confidence bands showing deviation from the baseline forecast. The ultimate goal is to improve sectoral productive capacity while monitoring price volatility spill-overs through empirical disaggregation of the CPI basket; the study also suggests that multivariate models such as VARs would be welcome for monitoring price dynamics in the national economy.
    Keywords: STIF; ARIMA; Disaggregated CPI; Fan Charts; Sierra Leone
    JEL: C13 C52 C53 E37
    Date: 2019–09–07
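A fan chart of the kind described above is, computationally, a set of forecast quantile bands widening with the horizon. A minimal sketch of how such bands can be produced by simulation (a generic random-walk illustration with made-up parameters, not the BSL model):

```python
import numpy as np

def fan_chart_bands(last_value, sigma, horizon, n_paths=5000,
                    quantiles=(0.05, 0.25, 0.5, 0.75, 0.95), seed=0):
    """Simulate random-walk forecast paths from the last observed value
    and return the quantile bands a fan chart would plot around the
    baseline forecast. Returns an array of shape (len(quantiles), horizon)."""
    rng = np.random.default_rng(seed)
    shocks = rng.normal(0.0, sigma, size=(n_paths, horizon))
    paths = last_value + np.cumsum(shocks, axis=1)
    return np.quantile(paths, quantiles, axis=0)
```

Asymmetric (skewed) fan charts, as used to communicate one-sided risks, would replace the symmetric Gaussian shocks with a skewed distribution such as the two-piece normal.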
  11. By: Shengli Chen; Zili Zhang
    Abstract: The implied volatility smile surface is the basis of option pricing, and the dynamic evolution of the option volatility smile surface is difficult to predict. In this paper, an attention mechanism is introduced into LSTM, establishing a volatility surface prediction method that combines deep learning with attention. The LSTM's forget gate gives it strong generalization ability, and its feedback structure enables it to characterize the long memory of financial volatility. Applying an attention mechanism within LSTM networks significantly enhances their ability to select input features. The experimental results show that the two strategies constructed using the predicted implied volatility surfaces achieve higher returns and Sharpe ratios than strategies that do not predict the volatility surface. This paper confirms that using AI to predict the implied volatility surface has theoretical and economic value. The research method provides a new reference for option pricing and strategy design.
    Date: 2019–12
  12. By: Kea BARET; Theophilos PAPADIMITRIOU
    Abstract: The aim of the paper is to propose simple advanced indicators to prevent internal imbalances in the European Union. The paper also highlights that new methods from the machine learning field may be more appropriate for forecasting fiscal policy outcomes than traditional econometric approaches. The Stability and Growth Pact (SGP), and especially the 3% limit set on the fiscal balance, aims to coordinate the fiscal policies of the European Union member states and ensure debt sustainability. The Macroeconomic Imbalance Procedure (MIP) scoreboard introduced by the European Commission aims to verify the good conduct of public finances. We analyse the determinants of SGP compliance by the 28 European Union members between 2006 and 2018 using a Support Vector Machine model. Beyond testing whether the MIP scoreboard variables really matter for forecasting the risk of unsustainability, we also test a set of macroeconomic, monetary and financial variables and apply a prior feature selection model to highlight the best predictors. We provide evidence that the main primary indicators of the MIP scoreboard are not useful for forecasting SGP compliance, and we propose new variables to forecast compliance with the European Union's supranational fiscal rule.
    Keywords: Fiscal Rules; Stability and Growth Pact; Forecasting; Machine Learning
    JEL: E61 H11 H61 H62
    Date: 2019
  13. By: Nyoni, Smartson. Pumulani; Nyoni, Thabani
    Abstract: This study uses monthly time series data on TB notifications at Zengeza clinic in Chitungwiza from January 2013 to December 2018 to forecast TB notifications using the Box and Jenkins (1970) approach to univariate time series analysis. Diagnostic tests indicate that TBN is an I(0) variable. Based on the AIC, the study presents the SARMA(2,0,2)(1,0,1)12 model; further diagnostic tests show that this model is quite stable and hence acceptable for forecasting TB notifications at Zengeza clinic. The selected optimal model shows that TB notifications will decline over the out-of-sample period. The main policy recommendation emanating from this study is that TB surveillance and control programmes should be further intensified in order to reduce TB incidence, not only at Zengeza clinic but in Zimbabwe at large.
    Keywords: Forecasting; TB; TB notifications
    JEL: I18
    Date: 2019–12–01
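A seasonal specification such as SARMA(2,0,2)(1,0,1)12 captures both short-lag and 12-month dynamics. The forecasting mechanics can be sketched with a simplified stand-in: a seasonal AR fitted by least squares on lags 1, 2 and 12 (our own illustration, not the paper's exact multiplicative SARMA model):

```python
import numpy as np

def fit_seasonal_ar(y, lags=(1, 2, 12)):
    """Least-squares fit of y_t ~ const + sum_k b_k * y_{t-k},
    a simplified additive stand-in for a multiplicative SARMA."""
    m = max(lags)
    X = np.column_stack([np.ones(len(y) - m)] +
                        [y[m - k:len(y) - k] for k in lags])
    beta, *_ = np.linalg.lstsq(X, y[m:], rcond=None)
    return beta

def forecast(y, beta, steps, lags=(1, 2, 12)):
    """Iterate the fitted recursion forward, feeding forecasts back in."""
    hist = list(y)
    for _ in range(steps):
        x = [1.0] + [hist[-k] for k in lags]
        hist.append(float(np.dot(beta, x)))
    return hist[len(y):]
```

Out-of-sample forecasts such as the declining path reported in the study come from exactly this kind of recursion, with the MA and multiplicative seasonal terms adding flexibility the additive sketch lacks.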

This nep-for issue is ©2020 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at the NEP homepage. For comments please write to the director of NEP, Marco Novarese. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.