on Forecasting |
By: | Firmin Doko Tchatoka (School of Economics, University of Adelaide); Qazi Haque (Business School, The University of Western Australia) |
Abstract: | The asymptotic distributions of the recursive out-of-sample forecast accuracy test statistics depend on stochastic integrals of Brownian motion when the models under comparison are nested. This often complicates their implementation in practice because the computation of their asymptotic critical values is costly. Hansen and Timmermann (2015, Econometrica) propose a Wald approximation of the commonly used recursive F-statistic and provide a simple characterization of the exact density of its asymptotic distribution. However, this characterization holds only when the larger model has one extra predictor or the forecast errors are homoskedastic. No such closed-form characterization is readily available when the nesting involves more than one predictor and heteroskedasticity is present. We first show that both the recursive F-test and its Wald approximation have poor finite-sample properties, especially when the forecast horizon is greater than one. We then propose a hybrid bootstrap method for both statistics, combining a (nonparametric) moving block bootstrap with a residual-based bootstrap, and establish its validity. Simulations show that our hybrid bootstrap has good finite-sample performance, even in multi-step-ahead forecasts with heteroskedastic or autocorrelated errors and more than one predictor. The bootstrap method is illustrated on forecasting core inflation and GDP growth. |
Keywords: | Out-of-sample forecasts; HAC estimator; Moving block bootstrap; Bootstrap consistency |
JEL: | C12 C15 C32 |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:adl:wpaper:2020-03&r=all |
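The entry above combines a moving block bootstrap with a residual-based bootstrap. The following is a minimal sketch of those two ingredients on a toy series; the AR(1) specification, function names, and simulated data are illustrative assumptions, not the authors' implementation or their test statistics.

```python
# Minimal sketch: moving block bootstrap (nonparametric) and residual-based
# bootstrap (parametric) on a toy AR(1)-like series. Illustrative only.
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """Resample a series by concatenating randomly chosen overlapping blocks."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

def residual_bootstrap_ar1(y, rng):
    """Fit an AR(1) by OLS and rebuild a pseudo-series from resampled residuals."""
    y_lag, y_cur = y[:-1], y[1:]
    beta = np.dot(y_lag, y_cur) / np.dot(y_lag, y_lag)
    resid = y_cur - beta * y_lag
    resid_star = rng.choice(resid - resid.mean(), size=len(resid), replace=True)
    y_star = np.empty_like(y)
    y_star[0] = y[0]
    for t in range(1, len(y)):
        y_star[t] = beta * y_star[t - 1] + resid_star[t - 1]
    return y_star

rng = np.random.default_rng(0)
y = 0.1 * np.cumsum(rng.normal(size=200)) + rng.normal(size=200)  # toy series
y_block = moving_block_bootstrap(y, block_len=10, rng=rng)        # nonparametric part
y_resid = residual_bootstrap_ar1(y, rng=rng)                      # residual-based part
print(y_block[:5], y_resid[:5])
```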
By: | Marijn A. Bolhuis; Brett Rayner |
Abstract: | We develop a framework to nowcast (and forecast) economic variables with machine learning techniques. We explain how machine learning methods can address common shortcomings of traditional OLS-based models and use several machine learning models to predict real output growth with lower forecast errors than traditional models. By combining multiple machine learning models into ensembles, we lower forecast errors even further. We also identify measures of variable importance to help improve the transparency of machine learning-based forecasts. Applying the framework to Turkey reduces forecast errors by at least 30 percent relative to traditional models. The framework also better predicts economic volatility, suggesting that machine learning techniques could be an important part of the macro forecasting toolkit of many countries. |
Date: | 2020–02–28 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:20/45&r=all |
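A minimal sketch of the ensembling idea in the entry above: fit several machine learning regressors on the same nowcasting-style problem and average their predictions. The models, toy features, and equal-weight combination are assumptions, not the paper's specification.

```python
# Minimal sketch: equal-weight ensemble of machine learning regressors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                                   # monthly indicators (toy)
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)    # "output growth" (toy)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "boosting": GradientBoostingRegressor(random_state=0),
}
preds = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    preds[name] = model.predict(X_te)
    print(name, mean_squared_error(y_te, preds[name]) ** 0.5)

# Combine the individual predictions into a simple equal-weight ensemble.
ensemble = np.mean(np.column_stack(list(preds.values())), axis=1)
print("ensemble", mean_squared_error(y_te, ensemble) ** 0.5)
```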
By: | Rybacki, Jakub |
Abstract: | The aim of this paper is to evaluate gross domestic product (GDP) forecast errors of Polish professional forecasters based on individual data from the Rzeczpospolita daily newspaper. This dataset contains predictions from forecasting competitions during the years 2013–2019 in Poland. Our analysis shows that these predictions lack statistical effectiveness. First, there is a systematic negative bias, which is especially strong during the years of conservative PiS government rule. Second, the forecasters failed to correctly predict the effects of major changes in fiscal policy. Third, there is evidence of strategic behavior; for example, the forecasters tended to revise their forecasts too frequently and too strongly. We also document herding behavior, i.e., an alignment of the most extreme forecasts towards the market consensus over time, and an overly strong reliance on the NBP inflation projections for longer-horizon estimates. |
Keywords: | GDP forecasting |
JEL: | E32 E37 |
Date: | 2020–01–28 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:98952&r=all |
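The entry above reports a systematic bias in the forecasts. One standard way to check for such bias is a t-test that the mean forecast error is zero; the sketch below illustrates this on simulated data and is not the paper's actual test or dataset.

```python
# Minimal sketch: testing forecasts for systematic bias (H0: mean error = 0).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
actual = rng.normal(loc=3.5, scale=1.0, size=28)           # quarterly GDP growth (toy)
forecast = actual - 0.4 + rng.normal(scale=0.5, size=28)   # forecasts that undershoot on average

errors = actual - forecast
t_stat, p_value = stats.ttest_1samp(errors, popmean=0.0)   # reject H0 -> evidence of bias
print(f"mean error = {errors.mean():.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```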
By: | Manav Kaushik; A K Giri |
Abstract: | In today's global economy, accuracy in predicting macroeconomic parameters such as the foreign exchange rate, or at least estimating the trend correctly, is of key importance for any future investment. In recent times, the use of computational intelligence-based techniques for forecasting macroeconomic variables has proven highly successful. This paper develops a multivariate time series approach to forecast the exchange rate (USD/INR) while comparing the performance of three multivariate prediction modelling techniques: Vector Auto Regression (a traditional econometric technique), Support Vector Machine (a contemporary machine learning technique), and Recurrent Neural Networks (a contemporary deep learning technique). We use monthly historical data for several macroeconomic variables from April 1994 to December 2018 for the USA and India to predict the USD-INR foreign exchange rate. The results clearly show that the contemporary techniques of SVM and RNN (Long Short-Term Memory) outperform the widely used traditional Vector Auto Regression method. The RNN model with Long Short-Term Memory (LSTM) provides the maximum accuracy (97.83%), followed by the SVM model (97.17%) and the VAR model (96.31%). Finally, we present a brief analysis of the correlation and interdependencies of the variables used for forecasting. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.10247&r=all |
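A minimal sketch of one of the three approaches compared in the entry above: a support vector regression on lagged variables to predict the next month's exchange rate. The simulated USD/INR path, feature set, and hyperparameters are assumptions, not the paper's data or tuning.

```python
# Minimal sketch: one-step-ahead exchange-rate forecasting with SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
T = 297                                                # roughly April 1994 to December 2018, monthly
fx = 35 + np.cumsum(rng.normal(scale=0.3, size=T))     # toy USD/INR path
cpi_diff = rng.normal(scale=0.2, size=T)               # toy inflation differential

# Supervised setup: predict fx[t] from fx[t-1] and cpi_diff[t-1].
X = np.column_stack([fx[:-1], cpi_diff[:-1]])
y = fx[1:]
split = int(0.8 * len(y))

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((y[split:] - pred) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f}")
```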
By: | Michael D. Cai; Marco Del Negro; Edward P. Herbst; Ethan Matlin; Reca Sarfati; Frank Schorfheide |
Abstract: | This paper illustrates the usefulness of sequential Monte Carlo (SMC) methods in approximating DSGE model posterior distributions. We show how the tempering schedule can be chosen adaptively, document the accuracy and runtime benefits of generalized data tempering for “online” estimation (that is, re-estimating a model as new data become available), and provide examples of multimodal posteriors that are well captured by SMC methods. We then use the online estimation of the DSGE model to compute pseudo-out-of-sample density forecasts and study the sensitivity of the predictive performance to changes in the prior distribution. We find that making priors less informative (compared to the benchmark priors used in the literature) by increasing the prior variance does not lead to a deterioration of forecast accuracy. |
JEL: | C11 C32 C53 E32 E37 E52 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:26826&r=all |
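The entry above relies on likelihood tempering within sequential Monte Carlo. The sketch below illustrates the tempering idea on a toy one-parameter posterior; the fixed tempering schedule, toy model, and tuning constants are assumptions, whereas the paper uses adaptive schedules and full DSGE likelihoods.

```python
# Minimal sketch: likelihood-tempered SMC (correct, select, mutate) on a toy posterior.
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(loc=1.5, scale=1.0, size=50)         # toy observations

def log_prior(theta):
    return -0.5 * theta ** 2 / 4.0                     # N(0, 2^2) prior, up to a constant

def log_lik(theta):
    # Vectorized Gaussian log likelihood, one value per particle.
    return -0.5 * ((data[None, :] - theta[:, None]) ** 2).sum(axis=1)

n_particles, n_stages = 2000, 20
phis = np.linspace(0.0, 1.0, n_stages + 1)             # fixed tempering schedule
theta = rng.normal(scale=2.0, size=n_particles)        # particles drawn from the prior
log_w = np.zeros(n_particles)

for k in range(1, n_stages + 1):
    # Correction: reweight by the incremental tempered likelihood.
    log_w += (phis[k] - phis[k - 1]) * log_lik(theta)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Selection: multinomial resampling.
    idx = rng.choice(n_particles, size=n_particles, p=w)
    theta, log_w = theta[idx], np.zeros(n_particles)
    # Mutation: one random-walk Metropolis step targeting the tempered posterior.
    prop = theta + rng.normal(scale=0.2, size=n_particles)
    log_accept = (log_prior(prop) + phis[k] * log_lik(prop)
                  - log_prior(theta) - phis[k] * log_lik(theta))
    accept = np.log(rng.uniform(size=n_particles)) < log_accept
    theta = np.where(accept, prop, theta)

print(f"posterior mean of theta: {theta.mean():.3f}, sample mean of data: {data.mean():.3f}")
```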
By: | Marijn A. Bolhuis; Brett Rayner |
Abstract: | We leverage insights from machine learning to optimize the tradeoff between bias and variance when estimating economic models using pooled datasets. Specifically, we develop a simple algorithm that estimates the similarity of economic structures across countries and selects the optimal pool of countries to maximize out-of-sample prediction accuracy of a model. We apply the new algorithm to nowcasting output growth with a panel of 102 countries and are able to significantly improve forecast accuracy relative to alternative pools. The algorithm improves nowcast performance for advanced economies, as well as emerging market and developing economies, suggesting that machine learning techniques using pooled data could be an important macro tool for many countries. |
Date: | 2020–02–28 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:20/44&r=all |
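In the spirit of the pooling idea in the entry above, the sketch below greedily adds countries to the estimation pool only when doing so lowers the target country's out-of-sample error. The greedy rule, toy panel, and Ridge model are illustrative assumptions; the paper's algorithm, which estimates structural similarity, differs.

```python
# Minimal sketch: greedy selection of a country pool by out-of-sample error.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
countries = [f"c{i}" for i in range(8)]
base_beta = np.array([1.0, -0.5, 0.2, 0.0])
panels = {}
for i, c in enumerate(countries):
    X = rng.normal(size=(60, 4))
    beta = base_beta + rng.normal(scale=0.05 if i < 4 else 1.0, size=4)  # first four are "similar"
    panels[c] = (X, X @ beta + rng.normal(scale=0.3, size=60))

target = "c0"
X_t, y_t = panels[target]
X_tr, y_tr, X_val, y_val = X_t[:40], y_t[:40], X_t[40:], y_t[40:]

def pooled_oos_error(pool):
    """Fit on the target's training sample plus pooled countries; score on the target's validation sample."""
    Xs = np.vstack([X_tr] + [panels[c][0] for c in pool])
    ys = np.concatenate([y_tr] + [panels[c][1] for c in pool])
    model = Ridge(alpha=1.0).fit(Xs, ys)
    return mean_squared_error(y_val, model.predict(X_val))

pool, best = [], pooled_oos_error([])
for cand in countries[1:]:
    err = pooled_oos_error(pool + [cand])
    if err < best:                       # keep a country only if pooling it helps out of sample
        pool, best = pool + [cand], err
print("selected pool:", pool, "| validation MSE:", round(best, 4))
```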
By: | Chengyuan Zhang; Fuxin Jiang; Shouyang Wang; Shaolong Sun |
Abstract: | The Asia-Pacific region is the major international tourism demand market in the world, and its tourism demand is deeply affected by various factors. Previous studies have shown that different market factors influence tourism demand at different timescales. Accordingly, a decomposition-ensemble learning approach is proposed to analyze the impact of different market factors on market demand, and the potential advantages of the proposed method for forecasting tourism demand in the Asia-Pacific region are further explored. This study carefully explores the multi-scale relationship between tourist destinations and the major source countries by decomposing the corresponding monthly tourist arrivals with noise-assisted multivariate empirical mode decomposition. With China and Malaysia as case studies, the empirical results show that the decomposition-ensemble approach performs significantly better than benchmarks, including statistical, machine learning, and deep learning models, in terms of both level and directional forecasting accuracy. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.09201&r=all |
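A minimal sketch of the decomposition-ensemble idea in the entry above: split a series into components, forecast each component separately, and sum the component forecasts. For simplicity it substitutes a moving-average trend/cycle split for the paper's noise-assisted multivariate EMD, and the simulated arrivals series is also an assumption.

```python
# Minimal sketch: decompose, forecast each component, recombine.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
t = np.arange(180)                                   # 15 years of monthly arrivals (toy)
series = 100 + 0.5 * t + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=5, size=180)

# Decompose: slow trend via a 12-month moving average, remainder as the "cycle".
window = 12
trend = np.convolve(series, np.ones(window) / window, mode="same")
cycle = series - trend

def forecast_component(x, lags=12):
    """One-step-ahead autoregressive forecast for a single component."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    y = x[lags:]
    model = LinearRegression().fit(X, y)
    return model.predict(x[-lags:].reshape(1, -1))[0]

# Ensemble: sum the component forecasts to obtain the forecast of the original series.
forecast = forecast_component(trend) + forecast_component(cycle)
print(f"one-step-ahead forecast: {forecast:.1f}, last observation: {series[-1]:.1f}")
```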
By: | Yang Yifan; Guo Ju'e; Sun Shaolong; Li Yixin |
Abstract: | With the accelerated development of Internet technology and growing research on the factors driving crude oil price fluctuations, accessible data such as the Google search volume index (GSVI) are increasingly quantified and incorporated into forecasting approaches. In this paper, we use multi-scale data, including both GSVI data and traditional economic data related to the crude oil price, as independent variables and propose a new hybrid approach for monthly crude oil price forecasting. This hybrid approach, based on a divide-and-conquer strategy, consists of the K-means method, kernel principal component analysis (KPCA), and the kernel extreme learning machine (KELM): K-means divides the input data into clusters, KPCA reduces dimension, and KELM produces the final crude oil price forecast. The empirical results can be analyzed at the data and method levels. At the data level, GSVI data perform better than economic data in level forecasting accuracy but worse in directional forecasting accuracy because of herd behavior, while hybrid data combine their advantages and obtain the best forecasting performance in both level and directional accuracy. At the method level, the approaches with K-means perform better than those without K-means, which demonstrates that the divide-and-conquer strategy can effectively improve forecasting performance. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.09656&r=all |
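A minimal sketch of the divide-and-conquer pipeline in the entry above: K-means splits the samples into clusters, kernel PCA reduces dimension within each cluster, and a kernel model produces the forecast. Here scikit-learn's KernelRidge stands in for the kernel extreme learning machine, and the simulated "GSVI plus economic" features are assumptions, not the paper's data.

```python
# Minimal sketch: K-means clustering + kernel PCA + kernel regression per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import KernelPCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(7)
X = rng.normal(size=(240, 20))                                   # 20 monthly indicators (toy)
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=240)       # toy oil-price target
X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

# Divide: assign samples to clusters with K-means.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)
labels_tr, labels_te = km.labels_, km.predict(X_te)

# Conquer: within each cluster, reduce dimension with kernel PCA, then fit a kernel model.
pred = np.empty(len(y_te))
for c in range(3):
    kpca = KernelPCA(n_components=5, kernel="rbf").fit(X_tr[labels_tr == c])
    model = KernelRidge(kernel="rbf", alpha=1.0).fit(
        kpca.transform(X_tr[labels_tr == c]), y_tr[labels_tr == c])
    mask = labels_te == c
    if mask.any():
        pred[mask] = model.predict(kpca.transform(X_te[mask]))

rmse = np.sqrt(np.mean((y_te - pred) ** 2))
print(f"test RMSE: {rmse:.3f}")
```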