New Economics Papers on Forecasting
By: | Kenichiro McAlinn (Booth School of Business, University of Chicago and Department of Statistical Science, Duke University); Knut Are Aastveit (Norges Bank and BI Norwegian Business School); Jouchi Nakajima (Bank for International Settlements); Mike West (Department of Statistical Science, Duke University) |
Abstract: | We present new methodology and a case study in the use of a class of Bayesian predictive synthesis (BPS) models for multivariate time series forecasting. This extends the foundational BPS framework to the multivariate setting, with detailed application in the topical and challenging context of multi-step macroeconomic forecasting in a monetary policy setting. BPS evaluates, sequentially and adaptively over time, varying forecast biases and facets of miscalibration of individual forecast densities for multiple time series and, critically, their time-varying interdependencies. We define BPS methodology for a new class of dynamic multivariate latent factor models implied by BPS theory. Structured dynamic latent factor BPS is here motivated by the application context: sequential forecasting of multiple US macroeconomic time series with forecasts generated from several traditional econometric time series models. The case study highlights the potential of BPS to improve forecasts of multiple series at multiple forecast horizons, and its use in learning dynamic relationships among forecasting models or agents. |
Keywords: | Agent opinion analysis, Bayesian forecasting, Dynamic latent factor models, Dynamic SURE models, Macroeconomic forecasting, Multivariate density forecast combination |
JEL: | C11 C15 C53 E37 |
Date: | 2019–01–16 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2019_02&r=all |
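As a rough illustration of the synthesis idea in the entry above, the hypothetical Python sketch below treats draws from each agent's forecast density as latent states in a dynamic regression whose combination weights evolve as a random walk. It is a univariate toy with a conditional Kalman-filter update on synthetic data, not the authors' multivariate latent factor BPS model or their MCMC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, J = 200, 3                      # time points, number of forecasting agents

# Synthetic target and agent forecast densities (means m[t, j], sds s[t, j]).
y = np.cumsum(rng.normal(0, 0.5, T))
m = y[:, None] + rng.normal(0, [0.3, 0.6, 1.0], (T, J))   # biased/noisy agents
s = np.full((T, J), 0.5)

# Time-varying synthesis weights theta_t (intercept plus one weight per
# agent), evolved as a random walk and updated by a Kalman filter given
# sampled latent agent states x_t drawn from the agents' forecast densities.
p = J + 1
theta = np.zeros(p)                # posterior mean of weights
C = np.eye(p)                      # posterior covariance
W = 0.01 * np.eye(p)               # random-walk evolution variance
v = 0.25                           # observation variance
one_step = np.empty(T)

for t in range(T):
    x = rng.normal(m[t], s[t])     # one draw of latent agent states
    F = np.concatenate(([1.0], x)) # regression vector [1, x_1, ..., x_J]
    R = C + W                      # prior covariance after evolution
    one_step[t] = F @ theta        # synthesized point forecast for y_t
    Q = F @ R @ F + v
    A = R @ F / Q                  # Kalman gain
    theta = theta + A * (y[t] - F @ theta)
    C = R - np.outer(A, A) * Q

rmse = np.sqrt(np.mean((one_step - y) ** 2))
print(f"synthesized one-step RMSE: {rmse:.3f}, final weights: {theta.round(2)}")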
By: | Maximilian Böck; Martin Feldkircher; Florian Huber |
Abstract: | This document introduces the R library BGVAR to estimate Bayesian global vector autoregressions (GVAR) with shrinkage priors and stochastic volatility. The Bayesian treatment of GVARs allows us to include large information sets by mitigating issues related to overfitting. This improves inference and often leads to better out-of-sample forecasts. Computational efficiency is achieved by using C++ to considerably speed up time-consuming functions. To maximize usability, the package includes numerous functions for carrying out structural inference and forecasting. These include generalized and structural impulse response functions, forecast error variance and historical decompositions as well as conditional forecasts. |
Keywords: | Global Vector Autoregressions; Bayesian inference; time series analysis; R |
JEL: | C30 C50 C87 F40 |
Date: | 2020–08–20 |
URL: | http://d.repec.org/n?u=RePEc:fip:feddgw:88639&r=all |
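BGVAR itself is an R package, so the sketch below does not use its API; instead it illustrates, in hypothetical Python, the GVAR structure the abstract refers to: country models that depend on weighted averages of foreign variables, stacked into a single global system and iterated forward for forecasts. It uses plain OLS on simulated data rather than the package's Bayesian shrinkage priors and stochastic volatility.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 3, 300                        # countries, observations

# Simulate a small global system: each country's series depends on its own
# lag and a trade-weighted average of the other countries' lags.
W = np.array([[0.0, 0.6, 0.4],       # row i: weights country i puts on others
              [0.5, 0.0, 0.5],
              [0.7, 0.3, 0.0]])
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = 0.5 * X[t - 1] + 0.3 * (W @ X[t - 1]) + rng.normal(0, 0.2, N)

# Country-by-country OLS of x_{i,t} on (x_{i,t-1}, x*_{i,t-1}),
# where x* is the weighted-foreign variable.
own = np.zeros(N)
foreign = np.zeros(N)
for i in range(N):
    Z = np.column_stack([X[:-1, i], X[:-1] @ W[i]])
    b, *_ = np.linalg.lstsq(Z, X[1:, i], rcond=None)
    own[i], foreign[i] = b

# Stack into the global transition matrix G: x_t = G x_{t-1} + e_t,
# G = diag(own) + diag(foreign) @ W, then iterate h-step forecasts.
G = np.diag(own) + np.diag(foreign) @ W
fc, x = [], X[-1]
for _ in range(4):
    x = G @ x
    fc.append(x.copy())
print(np.round(fc, 3))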
By: | Berta, P.; Lovaglio, P.G.; Paruolo, P.; Verzillo, S. |
Abstract: | Response management to the SARS-CoV-2 outbreak requires answering several forecasting tasks. For hospital managers, a major one is to anticipate the likely need for beds in intensive care in a given catchment area one or two weeks ahead, starting as early as possible in the evolution of the epidemic. This paper proposes using a bivariate Error Correction model to forecast the need for beds in intensive care, jointly with the number of patients hospitalised with Covid-19 symptoms. Error Correction models are found to provide reliable forecasts that are tailored to the local characteristics of both epidemic dynamics and hospital practice for various regions in Europe, in Italy, France and Scotland, both at the onset and at later stages of the spread of the disease. The forecast performance is encouraging for all analysed regions, suggesting that the present approach may also be useful beyond the analysed cases. |
Keywords: | SARS-CoV-2; Covid-19; Intensive Care Units; Forecasting; Vector error correction model; VAR; |
JEL: | C53 C32 |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:yor:hectdg:20/16&r=all |
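A minimal sketch of the entry's forecasting setup, assuming the VECM implementation in statsmodels and simulated data: two cointegrated series standing in for Covid-19 hospitalisations and ICU bed occupancy, with a 14-day-ahead joint forecast. The data-generating process and the deterministic-term choice are illustrative assumptions, not the paper's calibration.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(2)
T = 120                                   # roughly four months of daily data

# Simulate two cointegrated series: hospitalised Covid-19 patients (a common
# stochastic trend) and ICU beds occupied (a stable fraction of that trend).
trend = np.cumsum(rng.normal(0.5, 1.0, T)) + 50
hosp = trend + rng.normal(0, 1.0, T)
icu = 0.15 * trend + rng.normal(0, 0.5, T)
data = np.column_stack([hosp, icu])

# Bivariate VECM with one cointegrating relation, in the spirit of the paper.
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
forecast = res.predict(steps=14)          # 14-day-ahead joint forecast
print("ICU beds needed, next two weeks:", np.round(forecast[:, 1], 1))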
By: | Alexander Chudik; M. Hashem Pesaran; Mahrad Sharifvaghefi |
Abstract: | This paper is concerned with the problem of variable selection and forecasting in the presence of parameter instability. There are a number of approaches proposed for forecasting in the presence of breaks, including the use of rolling windows or exponential down-weighting. However, these studies start with a given model specification and do not consider the problem of variable selection. It is clear that, in the absence of breaks, researchers should weigh the observations equally at both the variable selection and forecasting stages. In this study, we investigate whether or not we should use weighted observations at the variable selection stage in the presence of structural breaks, particularly when the number of potential covariates is large. Amongst the extant variable selection approaches we focus on the recently developed One Covariate at a time Multiple Testing (OCMT) method that allows a natural distinction between the selection and forecasting stages, and provide theoretical justification for using the full (not down-weighted) sample in the selection stage of OCMT and down-weighting of observations only at the forecasting stage (if needed). The benefits of the proposed method are illustrated by empirical applications to forecasting output growths and stock market returns. |
Keywords: | Time-varying parameters; structural breaks; high-dimensionality; multiple testing; variable selection; one covariate at a time multiple testing (OCMT); forecasting |
JEL: | C22 C52 C53 C55 |
Date: | 2020–08–19 |
URL: | http://d.repec.org/n?u=RePEc:fip:feddgw:88638&r=all |
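The following schematic Python sketch illustrates the two-stage recipe described above: one-covariate-at-a-time testing on the full, equally weighted sample at the selection stage, then exponentially down-weighted least squares at the forecasting stage. The Bonferroni-style critical value, the forgetting factor and the synthetic data are simplifying assumptions; the paper's OCMT procedure and its theory are considerably richer.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
T, N, k = 200, 50, 3                 # sample size, candidates, true signals

X = rng.normal(size=(T, N))
beta = np.zeros(N); beta[:k] = [0.8, -0.6, 0.5]
y = X @ beta + rng.normal(0, 1, T)

# --- Selection stage: one covariate at a time on the FULL, equally weighted
# sample, keeping regressors whose |t|-stat clears a multiple-testing bound.
tstats = np.empty(N)
for j in range(N):
    Z = np.column_stack([np.ones(T), X[:, j]])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e = y - Z @ b
    se = np.sqrt(e @ e / (T - 2) * np.linalg.inv(Z.T @ Z)[1, 1])
    tstats[j] = b[1] / se
crit = stats.norm.ppf(1 - 0.05 / (2 * N))   # Bonferroni-style critical value
selected = np.flatnonzero(np.abs(tstats) > crit)
print("selected covariates:", selected)

# --- Forecasting stage: least squares on the selected set with exponential
# down-weighting, guarding against recent structural breaks.
lam = 0.98
w = lam ** np.arange(T - 1, -1, -1)          # heavier weight on recent obs
Zs = np.column_stack([np.ones(T), X[:, selected]])
WZ = Zs * w[:, None]
b = np.linalg.solve(Zs.T @ WZ, WZ.T @ y)     # weighted least squares
x_next = np.concatenate([[1.0], rng.normal(size=selected.size)])  # hypothetical next-period regressors
print("one-step forecast:", x_next @ b)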
By: | Alexander Chudik; M. Hashem Pesaran; Mahrad Sharifvaghefi |
Abstract: | This paper is concerned with the problem of variable selection and forecasting in the presence of parameter instability. There are a number of approaches proposed for forecasting in the presence of breaks, including the use of rolling windows or exponential down-weighting. However, these studies start with a given model specification and do not consider the problem of variable selection. It is clear that, in the absence of breaks, researchers should weigh the observations equally at both the variable selection and forecasting stages. In this study, we investigate whether or not we should use weighted observations at the variable selection stage in the presence of structural breaks, particularly when the number of potential covariates is large. Amongst the extant variable selection approaches we focus on the recently developed One Covariate at a time Multiple Testing (OCMT) method that allows a natural distinction between the selection and forecasting stages, and provide theoretical justification for using the full (not down-weighted) sample in the selection stage of OCMT and down-weighting of observations only at the forecasting stage (if needed). The benefits of the proposed method are illustrated by empirical applications to forecasting output growths and stock market returns. |
Keywords: | time-varying parameters, structural breaks, high-dimensionality, multiple testing, variable selection, one covariate at a time multiple testing (OCMT), forecasting |
JEL: | C22 C52 C53 C55 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_8475&r=all |
By: | Hinterlang, Natascha |
Abstract: | This paper analyses the forecasting performance of monetary policy reaction functions using the U.S. Federal Reserve's Greenbook real-time data. The results indicate that artificial neural networks are able to predict the nominal interest rate better than linear and nonlinear Taylor rule models as well as univariate processes. While in-sample measures usually imply forward-looking behaviour of the central bank, using nowcasts of the explanatory variables seems to be better suited for forecasting purposes. Overall, the evidence suggests that U.S. monetary policy behaviour between 1987 and 2012 is nonlinear. |
Keywords: | Forecasting, Monetary Policy, Artificial Neural Network, Taylor Rule, Reaction Function |
JEL: | C45 E47 E52 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdps:442020&r=all |
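In the spirit of the entry, the toy comparison below pits a small neural network against a linear reaction function for out-of-sample prediction of a policy rate, on simulated data whose true rule reacts nonlinearly when inflation is high. The data-generating rule and the scikit-learn settings are assumptions for illustration, not the paper's Greenbook setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
T = 400

# Simulated inputs of a policy reaction function: inflation and output gap.
infl = 2 + np.cumsum(rng.normal(0, 0.1, T))
gap = rng.normal(0, 1, T)
# Nonlinear "true" rule: the reaction strengthens when inflation is high.
rate = 1 + 1.5 * infl + 0.5 * gap + 0.8 * np.maximum(infl - 3, 0) \
       + rng.normal(0, 0.2, T)

X = np.column_stack([infl, gap])
X_tr, X_te, r_tr, r_te = X[:300], X[300:], rate[:300], rate[300:]

linear = LinearRegression().fit(X_tr, r_tr)          # linear Taylor-type rule
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                   random_state=0).fit(X_tr, r_tr)   # small neural network

for name, mdl in [("linear rule", linear), ("ANN", ann)]:
    rmse = np.sqrt(np.mean((mdl.predict(X_te) - r_te) ** 2))
    print(f"{name}: out-of-sample RMSE = {rmse:.3f}")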
By: | Philippe Goulet Coulombe; Maxime Leroux; Dalibor Stevanovic; Stéphane Surprenant |
Abstract: | We move beyond "Is Machine Learning Useful for Macroeconomic Forecasting?" by adding the "how". The current forecasting literature has focused on matching specific variables and horizons with a particularly successful algorithm. In contrast, we study the usefulness of the underlying features driving ML gains over standard macroeconometric methods. We distinguish four so-called features (nonlinearities, regularization, cross-validation and alternative loss functions) and study their behavior in both data-rich and data-poor environments. To do so, we design experiments that allow us to identify the "treatment" effects of interest. We conclude that (i) nonlinearity is the true game changer for macroeconomic prediction, (ii) the standard factor model remains the best regularization, (iii) K-fold cross-validation is the best practice and (iv) the $L_2$ loss is preferred to the $\bar{\epsilon}$-insensitive in-sample loss. The forecasting gains of nonlinear techniques are associated with high macroeconomic uncertainty, financial stress and housing bubble bursts. This suggests that Machine Learning is useful for macroeconomic forecasting mostly by capturing important nonlinearities that arise in the context of uncertainty and financial frictions. |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2008.12477&r=all |
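A compact sketch of the paper's "treatment" logic under stated assumptions: hold the data fixed, switch a single feature on (here, nonlinearity via a random forest against a ridge benchmark) and score both by K-fold cross-validation. The simulated data embed a nonlinearity that activates only in high-volatility states, echoing the paper's finding that nonlinear gains concentrate in periods of uncertainty.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(5)
T, p = 300, 20

# Macro-style data: a linear factor signal plus a nonlinearity that kicks in
# only in "high uncertainty" states (large |x_0|).
X = rng.normal(size=(T, p))
y = X[:, :3] @ [0.6, -0.4, 0.3] \
    + 1.2 * np.tanh(2 * X[:, 0]) * (np.abs(X[:, 0]) > 1) \
    + rng.normal(0, 0.5, T)

cv = KFold(n_splits=5, shuffle=False)       # K-fold CV, one of the "features"
for name, mdl in [("ridge (regularization only)", Ridge(alpha=1.0)),
                  ("random forest (adds nonlinearity)",
                   RandomForestRegressor(n_estimators=200, random_state=0))]:
    mse = -cross_val_score(mdl, X, y, cv=cv,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: CV MSE = {mse:.3f}")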
By: | Roberto Casarin (University Ca’ Foscari of Venice); Stefano Grassi (University of Rome ‘Tor Vergata’); Francesco Ravazzolo (Free University of Bozen-Bolzano and CAMP, BI Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam, Norges Bank and Tinbergen Institute) |
Abstract: | A flexible forecast density combination approach is introduced that can deal with large data sets. It extends the mixture-of-experts approach by allowing for model set incompleteness and dynamic learning of combination weights. A dimension reduction step is introduced using a sequential clustering mechanism that allocates the large set of forecast densities into a small number of subsets, and the combination weights of the large set of densities are modelled as a dynamic factor model with a number of factors equal to the number of subsets. The forecast density combination is represented as a large finite mixture in nonlinear state space form. An efficient simulation-based Bayesian inferential procedure is proposed using parallel sequential clustering and filtering, implemented on graphics processing units. The approach is applied to track the Standard & Poor's 500 index by combining more than 7000 forecast densities based on 1856 US individual stocks that are clustered in a relatively small number of subsets. Substantial forecast and economic gains are obtained, in particular in the tails, using Value-at-Risk. Using a large macroeconomic data set of 142 series, similar forecast gains, including probabilities of recession, are obtained from multivariate forecast density combinations of US real GDP, inflation, the Treasury Bill yield and employment. Evidence obtained on the dynamic patterns in the financial as well as macroeconomic clusters provides valuable signals useful for improved modelling and more effective economic and financial policies. |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2019_07&r=all |
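A stripped-down Python sketch of the pipeline in the entry above: summarize a large set of Gaussian forecast densities, cluster them into a few subsets, pool within each cluster, and learn combination weights across clusters from past predictive likelihood. The clustering features and the forgetting-factor weight update are crude stand-ins for the paper's dynamic factor model, nonlinear state space representation and GPU-based filtering.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
T, J, K = 150, 60, 3            # periods, forecast densities, clusters

# Synthetic Gaussian forecast densities for y_t from J models, grouped into
# a few latent "types" with different biases and spreads.
y = np.cumsum(rng.normal(0, 1, T))
bias = rng.choice([-1.0, 0.0, 1.0], J)
sd = rng.uniform(0.8, 2.0, J)
means = y[:, None] + bias + rng.normal(0, 0.3, (T, J))

# Dimension reduction: cluster densities by simple characteristics (here the
# time-averaged mean forecast and the spread), then pool within each cluster.
feats = np.column_stack([means.mean(axis=0), sd])
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(feats)

# Combination weights over the K cluster pools, updated recursively in
# proportion to past predictive likelihood (a simple dynamic-learning rule).
w = np.full(K, 1.0 / K)
logscore = 0.0
for t in range(T):
    dens = np.array([stats.norm.pdf(y[t], means[t, labels == k],
                                    sd[labels == k]).mean() for k in range(K)])
    logscore += np.log(w @ dens)                # score of the combined density
    w = 0.9 * w * dens / (w @ dens) + 0.1 / K   # update with forgetting
print(f"combined log predictive score: {logscore:.1f}, weights: {w.round(2)}")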
By: | Robert C. Smit (Vrije Universiteit Amsterdam, The Netherlands); Francesco Ravazzolo (Free University of Bolzano‐Bozen, Faculty of Economics and Management, Italy); Luca Rossini (Queen Mary University of London, United Kingdom and Vrije Universiteit Amsterdam, The Netherlands) |
Abstract: | The scoring and defensive abilities of association football teams change over time as a result of player trading, injuries, tactics and other management factors. We therefore develop a hierarchical Bayesian dynamic model based on the Skellam distribution, in which scoring abilities vary over time and differ across teams. We also introduce a novel method for handling promotion and relegation within the league. The model is estimated on three different seasons, and its forecasting ability is validated on the 2018-2019 English Premier League season, where it predicts the outcome of matches correctly about 60% of the time. |
Keywords: | Bayesian hierarchical models, dynamic models, English Premier League, football data, Skellam distribution |
JEL: | C11 L83 Z20 |
URL: | http://d.repec.org/n?u=RePEc:bzn:wpaper:bemps72&r=all |
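The Skellam distribution at the heart of the model gives match-outcome probabilities directly from the two teams' expected goals, as in this small sketch; the scoring rates below are hypothetical, whereas the paper makes them dynamic, hierarchical and team-specific.

```python
from scipy.stats import skellam

# The goal difference (home minus away) under independent Poisson scoring is
# Skellam distributed. Hypothetical attack/defence-adjusted scoring rates:
mu_home, mu_away = 1.7, 1.1      # expected goals for home and away teams

p_home = 1 - skellam.cdf(0, mu_home, mu_away)   # P(goal difference > 0)
p_draw = skellam.pmf(0, mu_home, mu_away)       # P(goal difference = 0)
p_away = skellam.cdf(-1, mu_home, mu_away)      # P(goal difference < 0)
print(f"home {p_home:.2f}, draw {p_draw:.2f}, away {p_away:.2f}")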