nep-for New Economics Papers
on Forecasting
Issue of 2019‒03‒11
six papers chosen by
Rob J Hyndman
Monash University

  1. Hierarchical forecasting By George Athanasopoulos; Puwasala Gamakumara; Anastasios Panagiotelis; Rob J Hyndman; Mohamed Affan
  2. Forecasting Economics and Financial Time Series: ARIMA vs. LSTM By Sima Siami-Namini; Akbar Siami Namin
  3. A brief history of forecasting competitions By Rob J Hyndman
  4. Forecasting the Term Structure of Interest Rates of the BRICS: Evidence from a Nonparametric Functional Data Analysis By Joao F. Caldeira; Rangan Gupta; Tahir Suleman; Hudson S. Torrent
  5. Modeling and forecasting inflation in Lesotho using Box-Jenkins ARIMA models By NYONI, THABANI
  6. A feature-based framework for detecting technical outliers in water-quality data from in situ sensors By Priyanga Dilini Talagala; Rob J Hyndman; Catherine Leigh; Kerrie Mengersen; Kate Smith-Miles

  1. By: George Athanasopoulos; Puwasala Gamakumara; Anastasios Panagiotelis; Rob J Hyndman; Mohamed Affan
    Abstract: Accurate forecasts of macroeconomic variables are crucial inputs into the decisions of economic agents and policy makers. Exploiting the inherent aggregation structures of such variables, we apply forecast reconciliation methods to generate forecasts that are coherent with the aggregation constraints. For the first time in the macroeconomic setting, we generate both point and probabilistic reconciled forecasts. Using Australian GDP data, we show that forecast reconciliation not only returns coherent forecasts but also improves overall forecast accuracy in both the point and probabilistic frameworks.
    Date: 2019
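    The core idea of forecast reconciliation can be sketched in a few lines. Below is a minimal illustration using OLS reconciliation — projecting incoherent base forecasts onto the coherent subspace spanned by the summing matrix — on a hypothetical two-series hierarchy; the matrix, series names, and forecast values are invented for the example, and the paper's own method may differ in the weighting used.

```python
import numpy as np

# Summing matrix for a toy hierarchy: Total = A + B.
# Rows: [Total, A, B]; columns: bottom-level series [A, B].
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Incoherent base forecasts for [Total, A, B] (note Total != A + B).
y_hat = np.array([100.0, 55.0, 48.0])

# OLS reconciliation: project base forecasts onto the coherent subspace,
# y_tilde = S (S'S)^{-1} S' y_hat.
P = S @ np.linalg.inv(S.T @ S) @ S.T
y_tilde = P @ y_hat

print(y_tilde)  # reconciled forecasts; now Total = A + B exactly
```

With these numbers the reconciled forecasts are [101, 54, 47], which satisfy the aggregation constraint while staying close to the base forecasts.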
  2. By: Sima Siami-Namini; Akbar Siami Namin
    Abstract: Forecasting time series data is an important subject in economics, business, and finance. Traditionally, several techniques have been used to forecast the next lag of a time series, such as the univariate autoregressive (AR) model, the univariate moving average (MA) model, simple exponential smoothing (SES), and, most notably, the autoregressive integrated moving average (ARIMA) model with its many variations. In particular, ARIMA has demonstrated strong precision and accuracy in predicting the next lags of a time series. With recent advances in computational power and, more importantly, in machine learning methods such as deep learning, new algorithms have been developed to forecast time series data. The research question investigated in this article is whether and how newly developed deep learning algorithms for forecasting time series data, such as Long Short-Term Memory (LSTM), are superior to the traditional algorithms. The empirical studies conducted and reported in this article show that deep learning algorithms such as LSTM outperform traditional algorithms such as ARIMA. More specifically, the average reduction in error rates obtained by LSTM is between 84 and 87 percent compared to ARIMA, indicating the superiority of LSTM. Furthermore, the number of training epochs was found to have no effect on the performance of the trained forecast model, exhibiting truly random behavior.
    Date: 2018–03
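    Two of the ingredients in such comparisons — a simple baseline forecaster and the percent-reduction-in-error metric behind claims like "84–87 percent" — can be sketched directly. The snippet below implements simple exponential smoothing (SES, one of the traditional methods the abstract lists) and measures its RMSE against a naive last-value forecast on synthetic data; the series, smoothing parameter, and data-generating process are all illustrative assumptions, not the paper's experiment.

```python
import numpy as np

def ses_forecasts(y, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts.
    The forecast for time t is the smoothed level built from y[0..t-1]."""
    level = y[0]
    preds = [level]               # forecast for t=1 uses only y[0]
    for obs in y[1:-1]:
        level = alpha * obs + (1 - alpha) * level
        preds.append(level)
    return np.array(preds)        # forecasts for y[1:], length len(y) - 1

def rmse(errors):
    return float(np.sqrt(np.mean(np.square(errors))))

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.05, 0.2, 200)) + 10   # synthetic drifting series

ses_err = y[1:] - ses_forecasts(y)
naive_err = np.diff(y)            # naive forecast: y_hat[t] = y[t-1]

# Percent reduction in RMSE relative to the baseline (negative = worse).
reduction = 100 * (1 - rmse(ses_err) / rmse(naive_err))
print(f"RMSE reduction of SES over naive: {reduction:.1f}%")
```

The same percent-reduction computation, applied to LSTM versus ARIMA errors, is the kind of summary statistic the abstract reports.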
  3. By: Rob J Hyndman
    Abstract: Forecasting competitions are now so widespread that it is often forgotten how controversial they were when first held, and how influential they have been over the years. I briefly review the history of forecasting competitions, and discuss what we have learned about their design and implementation, and what they can tell us about forecasting. I also provide a few suggestions for potential future competitions, and for research about forecasting based on competitions.
    Keywords: evaluation, forecasting accuracy, Kaggle, M competitions, neural networks, prediction intervals, probability scoring, time series
    Date: 2019
  4. By: Joao F. Caldeira (Department of Economics, Universidade Federal do Rio Grande do Sul and CNPq, Brazil); Rangan Gupta (Department of Economics, University of Pretoria, Pretoria, South Africa); Tahir Suleman (School of Economics and Finance, Victoria University of Wellington & School of Business, Wellington Institute of Technology, New Zealand); Hudson S. Torrent (Department of Statistics, Universidade Federal do Rio Grande do Sul, Brazil)
    Abstract: In this paper, we develop a non-parametric functional data analysis (NP-FDA) model to forecast the term structure of interest rates of Brazil, Russia, India, China and South Africa (BRICS). We use daily data over the period January 1, 2010 to December 31, 2016. We find that, while it is generally difficult to beat the random-walk model at shorter horizons, at longer horizons our proposed NP-FDA approach outperforms not only the random-walk model but also other popular competitors in the term-structure forecasting literature. Our results have important implications both for policymakers aiming to stabilize the economy and for the optimal portfolio allocation decisions of financial market agents.
    Keywords: Functional data analysis, yield curve forecasting, performance evaluation, BRICS
    JEL: C53 E43 G17
    Date: 2019–02
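    A basic building block of nonparametric functional approaches to the yield curve is a kernel smoother across maturities. The sketch below applies a Nadaraya–Watson estimator with a Gaussian kernel to a handful of hypothetical yields; the maturities, yield values, and bandwidth are invented for illustration, and the paper's NP-FDA forecasting model is considerably richer than this single smoothing step.

```python
import numpy as np

def nw_smooth(x_grid, x_obs, y_obs, h=0.75):
    """Nadaraya-Watson kernel estimator with a Gaussian kernel:
    m(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h)."""
    d = (x_grid[:, None] - x_obs[None, :]) / h
    w = np.exp(-0.5 * d**2)                 # Gaussian kernel weights
    return (w @ y_obs) / w.sum(axis=1)

# Hypothetical observed yields (percent) at a few maturities (years).
maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0])
yields = np.array([6.9, 7.1, 7.4, 7.9, 8.3])

grid = np.linspace(0.25, 10.0, 40)
curve = nw_smooth(grid, maturities, yields)
print(curve[0], curve[-1])                  # smoothed short and long ends
```

Because the estimator is a positively weighted average of the observed yields, the fitted curve always stays within their range — a useful sanity check when smoothing noisy curves.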
  5. By: NYONI, THABANI
    Abstract: This research uses annual time series data on inflation rates in Lesotho from 1974 to 2017 to model and forecast inflation using ARIMA models. Diagnostic tests indicate that the inflation series, L, is I(1). The study selects ARIMA(0, 1, 2) as the optimal model. Further diagnostic tests imply that the ARIMA(0, 1, 2) model is stable and acceptable for predicting inflation in Lesotho. The results show that inflation will average approximately 5.2% over the out-of-sample forecast period. The Central Bank of Lesotho (CBL) is expected to tighten Lesotho's monetary policy in order to maintain price stability.
    Keywords: Forecasting; Inflation
    JEL: C53 E31 E37 E47
    Date: 2019–02–25
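    An ARIMA(0, 1, 2) model is simply an MA(2) model on first differences, so its one-step forecast can be computed with a short recursion once coefficients are in hand. The sketch below does exactly that; the inflation-like series and the coefficient values are invented for illustration and are not the estimates from this study.

```python
import numpy as np

def arima_012_forecast(y, theta1, theta2, mu=0.0):
    """One-step-ahead forecast from an ARIMA(0,1,2) model, i.e. an MA(2)
    on first differences: dy_t = mu + e_t + theta1*e_{t-1} + theta2*e_{t-2}.
    Residuals are built recursively with e_{-1} = e_{-2} = 0."""
    dy = np.diff(y)
    e_prev, e_prev2 = 0.0, 0.0
    for d in dy:
        e = d - mu - theta1 * e_prev - theta2 * e_prev2
        e_prev2, e_prev = e_prev, e
    # Forecast the next difference, then undifference.
    dy_next = mu + theta1 * e_prev + theta2 * e_prev2
    return y[-1] + dy_next

# Illustrative inflation-like series (percent); coefficients are made up.
infl = np.array([11.2, 13.5, 12.8, 10.1, 9.4, 8.0, 6.7, 5.9, 5.5, 5.3])
print(arima_012_forecast(infl, theta1=0.4, theta2=0.2, mu=-0.1))
```

In practice the coefficients would come from maximum-likelihood estimation, and multi-step forecasts would set future shocks to zero in the same recursion.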
  6. By: Priyanga Dilini Talagala; Rob J Hyndman; Catherine Leigh; Kerrie Mengersen; Kate Smith-Miles
    Abstract: Outliers due to technical errors in water-quality data from in situ sensors can reduce data quality and have a direct impact on inference drawn from subsequent data analysis. However, outlier detection through manual monitoring is unfeasible given the volume and velocity of data the sensors produce. Here, we propose an automated framework that provides early detection of outliers in water-quality data from in situ sensors caused by technical issues. The framework was used first to identify the data features that differentiate outlying instances from typical behaviours. Then statistical transformations were applied to make the outlying instances stand out in transformed data space. Unsupervised outlier scoring techniques were then applied to the transformed data space, and an approach based on extreme value theory was used to calculate a threshold for each potential outlier. Using two data sets obtained from in situ sensors in rivers flowing into the Great Barrier Reef lagoon, Australia, we showed that the proposed framework successfully identified outliers involving abrupt changes in turbidity, conductivity and river level, including sudden spikes, sudden isolated drops and level shifts, while maintaining very low false detection rates. We implemented this framework in the open source R package oddwater.
    JEL: C10 C14 C22
    Date: 2019
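    The transform-then-score-then-threshold pipeline the abstract describes can be illustrated with a deliberately simplified version: difference the series so spikes stand out, score each point with a robust z-score, and flag scores beyond a cutoff. This is only a sketch of the idea — the oddwater framework itself uses unsupervised outlier scoring plus an extreme-value-theory threshold rather than the fixed cutoff assumed here, and the data below are synthetic.

```python
import numpy as np

def spike_outliers(x, cutoff=5.0):
    """Flag technical-looking spikes: transform the series to one-step
    differences, score each point with a robust z-score (median/MAD),
    and flag scores beyond `cutoff`."""
    d = np.diff(x, prepend=x[0])              # transformation step
    med = np.median(d)
    mad = np.median(np.abs(d - med))
    scores = 0.6745 * np.abs(d - med) / max(mad, 1e-9)
    return np.where(scores > cutoff)[0]       # indices of flagged points

# Synthetic turbidity-like trace with one sudden sensor spike at index 30.
rng = np.random.default_rng(1)
turbidity = 5 + 0.3 * rng.standard_normal(60)
turbidity[30] += 8.0                          # injected spike
print(spike_outliers(turbidity))
```

Differencing flags both the jump up and the drop back, which is exactly how sudden isolated spikes manifest in a rate-of-change transform.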

This nep-for issue is ©2019 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.