Forecasting
http://lists.repec.org/mailman/listinfo/nep-for
2018-06-11
How sensitive are VAR forecasts to prior hyperparameters? An automated sensitivity analysis
http://d.repec.org/n?u=RePEc:een:camaaa:2018-25&r=for
Vector autoregressions combined with Minnesota-type priors are widely used for macroeconomic forecasting. The fact that strong but sensible priors can substantially improve forecast performance implies that VAR forecasts are sensitive to prior hyperparameters. But the nature of this sensitivity is seldom investigated. We develop a general method based on Automatic Differentiation to systematically compute the sensitivities of forecasts, both points and intervals, with respect to any prior hyperparameters. In a forecasting exercise using US data, we find that forecasts are relatively sensitive to the strength of shrinkage for the VAR coefficients, but they are not much affected by the prior mean of the error covariance matrix or the strength of shrinkage for the intercepts.
Joshua C.C. Chan
Liana Jacobi
Dan Zhu
vector autoregression, automatic differentiation, interval forecasts
2018-05
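The sensitivity analysis described above can be illustrated at toy scale. The sketch below is hypothetical (names and data are invented): the paper differentiates full Bayesian VAR forecasts exactly via automatic differentiation, whereas here a central finite difference approximates the same derivative for a univariate forecast under Minnesota-style ridge shrinkage.

```python
import numpy as np

def shrinkage_forecast(y, lam):
    """One-step forecast from an AR(1) with ridge (Minnesota-style)
    shrinkage of the coefficient toward a prior mean of zero."""
    x, z = y[:-1], y[1:]
    beta = (x @ z) / (x @ x + lam)  # posterior-mean-style coefficient
    return beta * y[-1]

def forecast_sensitivity(y, lam, eps=1e-6):
    """Central finite-difference derivative of the forecast w.r.t. the
    shrinkage hyperparameter lam (the paper computes such derivatives
    exactly via automatic differentiation)."""
    return (shrinkage_forecast(y, lam + eps)
            - shrinkage_forecast(y, lam - eps)) / (2 * eps)

rng = np.random.default_rng(0)
y = rng.standard_normal(200).cumsum() * 0.1 + rng.standard_normal(200)
print(forecast_sensitivity(y, lam=1.0))
```

A large derivative here signals that the point forecast would move materially if the shrinkage strength were re-tuned, which is exactly the question the paper automates for full VAR systems.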
Testing for Changes in Forecasting Performance
http://d.repec.org/n?u=RePEc:hit:econdp:2018-03&r=for
We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospectively whether a given forecasting model provides forecasts which show evidence of changes with respect to some loss function. We adapt the classical structural change tests to the forecast failure context. First, we recommend that all tests be carried out with a fixed scheme to attain the best power. This ensures a maximal difference between the fitted in-sample and out-of-sample means of the losses and avoids the contamination issues that arise under the rolling and recursive schemes. With a fixed scheme, Giacomini and Rossi's (2009) (GR) test is simply a Wald test for a one-time change in the mean of the total (the in-sample plus out-of-sample) losses at a known break date, say m, the value that separates the in- and out-of-sample periods. Its power can therefore be low when the change in forecast performance occurs at some other date. To alleviate this problem, we consider a variety of tests: maximizing the GR test over all possible values of m within a pre-specified range; a Double sup-Wald (DSW) test which, for each m, performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over some range; we also propose to work directly with the total loss series to define the Total Loss sup-Wald (TLSW) test and the Total Loss UDmax (TLUD) test. Using extensive simulations, we show that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all data-generating processes are the DSW and TLUD tests, constructed with a fixed forecasting window scheme. Some explanations are provided, and two empirical applications illustrate the relevance of our findings in practice.
PERRON, Pierre
YAMAMOTO, Yohei
forecast failure, non-monotonic power, structural change, out-of-sample method
2018-05
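The sup-type tests above share a common primitive: a Wald statistic for a one-time change in the mean of a loss series, maximized over candidate break dates. A minimal sketch of that primitive follows (hypothetical helper names; homoskedastic errors and 15% trimming assumed, far simpler than the paper's DSW/TLUD constructions):

```python
import numpy as np

def wald_mean_change(loss, m):
    """Wald statistic for a one-time change in the mean of `loss` at
    date m, assuming homoskedastic errors."""
    a, b = loss[:m], loss[m:]
    resid = np.concatenate([a - a.mean(), b - b.mean()])
    s2 = resid @ resid / (len(loss) - 2)
    return (a.mean() - b.mean()) ** 2 / (s2 * (1 / len(a) + 1 / len(b)))

def sup_wald(loss, trim=0.15):
    """Maximize the statistic over candidate break dates in a trimmed
    range, since the true change date is unknown."""
    n = len(loss)
    return max(wald_mean_change(loss, m)
               for m in range(int(trim * n), int((1 - trim) * n)))

rng = np.random.default_rng(1)
stable = rng.standard_normal(200)
broken = stable.copy()
broken[120:] += 3.0  # mean shift in the later losses
print(sup_wald(stable), sup_wald(broken))
```

A loss series with a late mean shift produces a far larger sup-Wald statistic than a stable series, which is the retrospective evidence of forecast breakdown the tests look for.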
Forecasting unemployment rates in Malta: A labour market flows approach
http://d.repec.org/n?u=RePEc:mlt:wpaper:0318&r=for
This study extends the flow approach to forecasting unemployment, as carried out by Barnichon and Nekarda (2013) and Barnichon and Garda (2016), to the Maltese labour market using a wider range of estimation techniques. The flow approach results in significant improvements in forecast accuracy over an autoregressive (AR) process, with the largest gains at shorter time horizons. When including flows, forecast improvements over both an AR process and non-flow forecasts are found when applying VECM methods. Bayesian and OLS VARs also show strong improvements over an AR process, with or without the inclusion of flows. For Maltese data, the use of flows computed from aggregate data in these two latter methodologies does not bring about a significant improvement over the forecasts which exclude them.
Reuben Ellul
2018
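The flow approach rests on the stock-flow identity linking the unemployment rate to the separation and job-finding rates. A minimal sketch, holding the flow rates fixed at their last observed values (a deliberate simplification: Barnichon and Nekarda forecast the flow rates themselves):

```python
def unemployment_flow_forecast(u, s, f, horizon=4):
    """Iterate the stock-flow identity
        u_{t+1} = u_t + s * (1 - u_t) - f * u_t,
    where s is the separation rate and f the job-finding rate, both
    held constant over the forecast horizon."""
    path = [u]
    for _ in range(horizon):
        u = u + s * (1 - u) - f * u
        path.append(u)
    return path

# the path converges toward the flow steady state s / (s + f)
print(unemployment_flow_forecast(0.08, s=0.02, f=0.40, horizon=8))
```

Because the unemployment rate converges quickly toward the flow-implied steady state, flow information is most valuable at short horizons, consistent with the accuracy gains reported above.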
Order Invariant Tests for Proper Calibration of Multivariate Density Forecasts
http://d.repec.org/n?u=RePEc:ces:ceswps:_7023&r=for
Established tests for proper calibration of multivariate density forecasts based on Rosenblatt probability integral transforms can be manipulated by changing the order of variables in the forecasting model. We derive order invariant tests. The new tests are applicable to densities of arbitrary dimensions and can deal with parameter estimation uncertainty and dynamic misspecification. Monte Carlo simulations show that they often have superior power relative to established approaches. We use the tests to evaluate GARCH-based multivariate density forecasts for a vector of stock market returns.
Jonas Dovern
Hans Manner
density calibration, goodness-of-fit test, predictive density, Rosenblatt transformation
2018
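The order dependence the authors address is easy to exhibit: for a bivariate normal, the Rosenblatt transform conditions the second variable on the first, so swapping the variable order changes the PITs for the same observation. A minimal sketch (hypothetical function names):

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rosenblatt_pits(y1, y2, rho):
    """Rosenblatt PITs of (y1, y2) under a standard bivariate normal
    with correlation rho: marginal of y1 first, then y2 given y1."""
    u1 = norm_cdf(y1)
    u2 = norm_cdf((y2 - rho * y1) / math.sqrt(1.0 - rho ** 2))
    return u1, u2

# swapping the variable order yields different PITs for the same point;
# this is the order dependence the paper's tests are invariant to
print(rosenblatt_pits(0.5, -1.0, rho=0.6))
print(rosenblatt_pits(-1.0, 0.5, rho=0.6))
```

Any calibration test built directly on these PITs can therefore be manipulated by reordering variables, which motivates the order invariant tests the paper derives.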
Semi-parametric Dynamic Asymmetric Laplace Models for Tail Risk Forecasting, Incorporating Realized Measures
http://d.repec.org/n?u=RePEc:arx:papers:1805.08653&r=for
The joint Value at Risk (VaR) and expected shortfall (ES) quantile regression model of Taylor (2017) is extended via incorporating a realized measure, to drive the tail risk dynamics, as a potentially more efficient driver than daily returns. Both a maximum likelihood and an adaptive Bayesian Markov Chain Monte Carlo method are employed for estimation, whose properties are assessed and compared via a simulation study; results favour the Bayesian approach, which is subsequently employed in a forecasting study of seven market indices and two individual assets. The proposed models are compared to a range of parametric, non-parametric and semi-parametric models, including GARCH, Realized-GARCH and the joint VaR and ES quantile regression models in Taylor (2017). The comparison is in terms of accuracy of one-day-ahead Value-at-Risk and Expected Shortfall forecasts, over a long forecast sample period that includes the global financial crisis in 2007-2008. The results favor the proposed models incorporating a realized measure, especially when employing the sub-sampled Realized Variance and the sub-sampled Realized Range.
Richard Gerlach
Chao Wang
2018-05
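Joint VaR and ES forecasts of the kind compared above are typically evaluated with a strictly consistent joint scoring function. The sketch below uses the FZ0 loss of Patton, Ziegel and Chen (2019), one member of the Fissler-Ziegel class related to the AL likelihood in Taylor (2017); it assumes a lower-tail level alpha with es < var < 0, and lower average loss indicates better forecasts:

```python
import numpy as np

def fz0_loss(y, var, es, alpha=0.025):
    """Average FZ0 loss for joint lower-tail (VaR, ES) forecasts;
    requires es < var < 0. Strictly consistent: the true (VaR, ES)
    pair minimizes the expected loss."""
    hit = (y <= var).astype(float)
    return np.mean(-hit * (var - y) / (alpha * es)
                   + var / es + np.log(-es) - 1.0)

rng = np.random.default_rng(0)
y = rng.standard_normal(5000)
# for N(0,1) at alpha = 0.025: VaR is about -1.96, ES about -2.34
good = fz0_loss(y, np.full(y.size, -1.96), np.full(y.size, -2.34))
bad = fz0_loss(y, np.full(y.size, -1.00), np.full(y.size, -1.20))
print(good, bad)
```

Forecasts close to the true tail quantities score better than badly miscalibrated ones, which is the sense in which the paper's model comparison is conducted.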
Composite likelihood methods for large Bayesian VARs with stochastic volatility
http://d.repec.org/n?u=RePEc:een:camaaa:2018-26&r=for
Adding multivariate stochastic volatility of a flexible form to large Vector Autoregressions (VARs) involving over a hundred variables has proved challenging due to computational considerations and over-parameterization concerns. The existing literature either works with homoskedastic models or smaller models with restrictive forms for the stochastic volatility. In this paper, we develop composite likelihood methods for large VARs with multivariate stochastic volatility. These involve estimating large numbers of parsimonious models and then taking a weighted average across these models. We discuss various schemes for choosing the weights. In our empirical work involving VARs of up to 196 variables, we show that composite likelihood methods have similar properties to existing alternatives used with small data sets in that they estimate the multivariate stochastic volatility in a flexible and realistic manner and they forecast comparably. In very high dimensional VARs, they are computationally feasible where other approaches involving stochastic volatility are not, and they produce superior forecasts to homoskedastic VARs with natural conjugate priors.
Joshua C.C. Chan
Eric Eisenstat
Chenghan Hou
Gary Koop
Bayesian, large VAR, composite likelihood, prediction pools, stochastic volatility
2018-05
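The composite likelihood idea (estimate many parsimonious sub-models, then combine them) can be sketched in miniature: fit every bivariate VAR(1) by OLS and average the implied forecasts with equal weights. This is a hypothetical simplification; the paper weights far richer sub-models with stochastic volatility and discusses the choice of weights.

```python
import numpy as np
from itertools import combinations

def var1_forecast(Y):
    """One-step forecast from a VAR(1) with intercept, fit by OLS."""
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])
    B = np.linalg.lstsq(X, Y[1:], rcond=None)[0]
    return np.concatenate([[1.0], Y[-1]]) @ B

def composite_forecast(Y):
    """Equal-weight average of forecasts from every bivariate sub-VAR:
    one parsimonious model per variable pair."""
    n = Y.shape[1]
    total, count = np.zeros(n), np.zeros(n)
    for i, j in combinations(range(n), 2):
        total[[i, j]] += var1_forecast(Y[:, [i, j]])
        count[[i, j]] += 1
    return total / count

rng = np.random.default_rng(2)
print(composite_forecast(rng.standard_normal((120, 4))))
```

Each sub-model stays cheap to estimate no matter how many variables the full system has, which is why the approach remains feasible at the 196-variable scale reported in the paper.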
Probabilistic forecasts for the 2018 FIFA World Cup based on the bookmaker consensus model
http://d.repec.org/n?u=RePEc:inn:wpaper:2018-09&r=for
Football fans worldwide anticipate the 2018 FIFA World Cup that will take place in Russia from 14 June to 15 July 2018. Thirty-two of the best teams from five confederations compete to determine the new World Champion. Using a consensus model based on quoted odds from 26 bookmakers and betting exchanges, a probabilistic forecast for the outcome of the World Cup is obtained. The favorite is Brazil with a forecasted winning probability of 16.6%, closely followed by the defending World Champion and 2017 FIFA Confederations Cup winner Germany with a winning probability of 15.8%. Two other teams also have winning probabilities above 10%: Spain and France with 12.5% and 12.1%, respectively. The results from this bookmaker consensus model are coupled with simulations of the entire tournament to obtain implied abilities for each team. These allow us to derive pairwise probabilities for each possible game along with probabilities for each team to proceed to the various stages of the tournament. Indeed, the most likely final is a match between the top favorites Brazil and Germany (with a probability of 5.5%), in which Brazil would have the chance to make up for the dramatic semifinal loss in Belo Horizonte four years ago. Conditional on this final taking place, the chances are almost even (50.6% for Brazil vs. 49.4% for Germany). The most likely semifinals are between the four top teams: with a probability of 9.4%, Brazil and France meet in the first semifinal (with chances slightly in favor of Brazil in such a match, 53.5%), and with 9.2%, Germany and Spain play the second semifinal (with chances slightly in favor of Germany, 53.1%). These probabilistic forecasts have been obtained by suitably averaging the quoted winning odds for all teams across bookmakers. More precisely, the odds are first adjusted for the bookmakers' profit margins ("overrounds"), averaged on the log-odds scale, and then transformed back to winning probabilities.
Moreover, an "inverse" approach to simulating the tournament yields estimated team abilities (or strengths) from which probabilities for all possible pairwise matches can be derived. This technique (Leitner, Zeileis, and Hornik 2010a) correctly predicted the winner of the 2010 FIFA World Cup (Leitner, Zeileis, and Hornik 2010b) and three of the four semifinalists at the 2014 FIFA World Cup (Zeileis, Leitner, and Hornik 2014). Interactive web graphics for this report are available at: https://eeecon.uibk.ac.at/~zeileis/news/fifa2018/
Achim Zeileis
Christoph Leitner
Kurt Hornik
consensus, agreement, bookmakers odds, tournament, 2018 FIFA World Cup
2018-05
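The aggregation recipe in the abstract (adjust for overrounds, average on the log-odds scale, transform back) can be sketched as follows. The proportional margin adjustment below is a simplification of the bookmaker-specific correction used by Leitner, Zeileis, and Hornik (2010a), and the odds matrix is invented:

```python
import numpy as np

def consensus_probs(decimal_odds):
    """Bookmaker consensus from a (bookmakers x teams) matrix of quoted
    decimal odds: invert to raw probabilities, rescale each book to sum
    to one (removing the proportional profit margin, the 'overround'),
    average on the log-odds scale, and map back to probabilities."""
    raw = 1.0 / np.asarray(decimal_odds, dtype=float)
    adj = raw / raw.sum(axis=1, keepdims=True)
    logodds = np.log(adj / (1.0 - adj)).mean(axis=0)
    return 1.0 / (1.0 + np.exp(-logodds))

# two hypothetical bookmakers quoting decimal odds for three teams
print(consensus_probs([[1.5, 4.0, 6.0],
                       [1.6, 3.8, 5.5]]))
```

Averaging on the log-odds scale rather than the probability scale keeps the consensus well-behaved for long-shot teams whose probabilities sit near zero.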
Systemic Risk and Financial Fragility in the Chinese Economy: A Dynamic Factor Model Approach
http://d.repec.org/n?u=RePEc:bkr:wpaper:wps30&r=for
This paper studies systemic risk and financial fragility in the Chinese economy, applying the dynamic factor model approach. First, we estimate a dynamic factor model to forecast systemic risk that exhibits significant out-of-sample forecasting power, taking into account the effect on systemic risk of several macroeconomic factors, such as the slowdown of economic growth, high corporate debt, the rise of shadow banking, and the slowdown of the real estate market. Second, we analyse the historical dynamics of financial fragility in the Chinese economy over the last ten years using factor-augmented quantile regressions. The results of the analysis demonstrate that the level of fragility in the Chinese financial system decreased after the Global Financial Crisis of 2007-2009, but has been gradually rising since 2015.
Alexey Vasilenko
systemic risk, financial fragility, factor model, quantile regressions, China
2018-03
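The two ingredients of a factor-augmented quantile regression can be sketched with a principal-component factor and the pinball (check) loss. The grid-search slope below is a crude stand-in (hypothetical names and data) for a proper quantile regression estimator:

```python
import numpy as np

def first_factor(panel):
    """First principal component of a standardized (T x N) panel."""
    Z = (panel - panel.mean(0)) / panel.std(0)
    u, s, _ = np.linalg.svd(Z, full_matrices=False)
    return u[:, 0] * s[0]

def pinball(resid, tau):
    """Check (pinball) loss minimized by quantile regression."""
    return np.mean(np.maximum(tau * resid, (tau - 1.0) * resid))

def quantile_slope(x, y, tau, grid=np.linspace(-5, 5, 2001)):
    """Grid-search slope (no intercept) minimizing the pinball loss:
    a toy substitute for a quantile regression estimator."""
    losses = [pinball(y - b * x, tau) for b in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(3)
x = rng.standard_normal(500)
y = 0.5 * x + rng.standard_normal(500)
print(quantile_slope(x, y, tau=0.5))
```

Regressing a fragility measure on estimated factors at several values of tau, rather than only at the median, is what lets the paper trace how the tails of financial fragility have shifted since 2015.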