nep-for New Economics Papers
on Forecasting
Issue of 2018‒08‒27
seven papers chosen by
Rob J Hyndman
Monash University

  1. Probabilistic Forecasts in Hierarchical Time Series By Puwasala Gamakumara; Anastasios Panagiotelis; George Athanasopoulos; Rob J Hyndman
  2. Forecasting using Bayesian VARs: A Benchmark for STREAM By Ian Borg; Germano Ruisi
  3. Composite Likelihood Methods for Large Bayesian VARs with Stochastic Volatility By Joshua C.C. Chan; Eric Eisenstat; Chenghan Hou; Gary Koop
  4. On Using Predictive-ability Tests in the Selection of Time-series Prediction Models: A Monte Carlo Evaluation By Costantini, Mauro; Kunst, Robert M.
  5. Inflation Expectations: The Effect of Question Ordering on Forecast Inconsistencies By Maxime Phillot; Rina Rosenblatt-Wisch
  6. Can Economic Perception Surveys Improve Macroeconomic Forecasting in Chile? By Nicolas Chanut; Mario Marcel; Carlos Medel
  7. The Bigger Picture: Combining Econometrics with Analytics Improve Forecasts of Movie Success By Steven F. Lehrer; Tian Xie

  1. By: Puwasala Gamakumara; Anastasios Panagiotelis; George Athanasopoulos; Rob J Hyndman
    Abstract: Forecast reconciliation involves adjusting forecasts to ensure coherence with aggregation constraints. We extend this concept from point forecasts to probabilistic forecasts by redefining forecast reconciliation in terms of linear functions in general, and projections more specifically. New theorems establish that the true predictive distribution can be recovered in the elliptical case by linear reconciliation, and general conditions are derived for when this is a projection. A geometric interpretation is also used to prove two new theoretical results for point forecasting; that reconciliation via projection both preserves unbiasedness and dominates unreconciled forecasts in a mean squared error sense. Strategies for forecast evaluation based on scoring rules are discussed, and it is shown that the popular log score is an improper scoring rule with respect to the class of unreconciled forecasts when the true predictive distribution coheres with aggregation constraints. Finally, evidence from a simulation study shows that reconciliation based on an oblique projection, derived from the MinT method of Wickramasuriya, Athanasopoulos and Hyndman (2018) for point forecasting, outperforms both reconciled and unreconciled alternatives.
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2018-11&r=for
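    As a rough illustration of the reconciliation step described above, here is a minimal Python sketch of MinT-style reconciliation via an oblique projection; the three-series hierarchy, base forecasts, and error covariance are invented for the example.

```python
import numpy as np

# Minimal sketch of MinT-style reconciliation (Wickramasuriya, Athanasopoulos
# and Hyndman, 2018) on an illustrative 3-series hierarchy: total = A + B.
S = np.array([[1.0, 1.0],   # total
              [1.0, 0.0],   # series A
              [0.0, 1.0]])  # series B

def mint_reconcile(y_hat, W):
    """Project incoherent base forecasts y_hat = (total, A, B) onto the
    coherent subspace spanned by S, weighted by W, the covariance of
    base-forecast errors (estimated in-sample in practice)."""
    Winv = np.linalg.inv(W)
    G = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)  # (S'W^-1 S)^-1 S'W^-1
    return S @ (G @ y_hat)  # reconciled forecasts cohere: total = A + B

y_hat = np.array([10.0, 6.5, 4.0])  # base forecasts: 6.5 + 4.0 != 10
W = np.diag([2.0, 1.0, 1.0])        # assumed base-forecast error covariance
y_tilde = mint_reconcile(y_hat, W)
assert np.isclose(y_tilde[0], y_tilde[1] + y_tilde[2])
```

    The same linear map, applied draw by draw to sample paths from a base predictive distribution, gives the probabilistic analogue the paper studies.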
  2. By: Ian Borg (Central Bank of Malta); Germano Ruisi
    Abstract: This study develops a suite of Bayesian Vector Autoregression (BVAR) models for the Maltese economy to benchmark the forecasting performance of STREAM, the traditional macro-econometric model used by the Central Bank of Malta for its regular forecasting exercises. Three BVARs are proposed, each containing an endogenous and an exogenous block; they differ only in the cross-sectional size of the endogenous block. The small BVAR contains only three endogenous variables, the medium BVAR includes 17, and the large BVAR includes 32. The exogenous block is identical across the three models. By using a similar information set, the BVARs developed in this study are used to benchmark the forecast performance of STREAM. In general, for real GDP, the GDP deflator, and the unemployment rate, BVAR median projections for the period 2014-2016 improve on STREAM at the one-, two-, and four-step-ahead horizons. STREAM does rather well at annual projections, but even there it is broadly outperformed by the medium and large BVARs.
    JEL: C11 C52 C53 E17
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:mlt:wpaper:0418&r=for
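    The following is a stylized sketch of the shrinkage idea behind a Minnesota-type BVAR prior of the kind used in such exercises: the posterior mean of VAR(1) coefficients under a conjugate normal prior centred on a random walk. The dimensions, simulated data, and tightness parameter are illustrative, not the paper's specifications.

```python
import numpy as np

# Stylized sketch: posterior-mean VAR(1) coefficients under a conjugate
# normal prior shrinking each equation toward a random walk.
rng = np.random.default_rng(0)
T, n = 120, 3                                    # sample size, no. of series
Y = np.cumsum(rng.normal(size=(T, n)), axis=0)   # simulated I(1) data

X, Ynext = Y[:-1], Y[1:]   # one lag of each series as regressors
B0 = np.eye(n)             # prior mean: each series is a random walk
lam = 10.0                 # prior tightness (higher = more shrinkage)

# Conjugate posterior mean: (X'X + lam*I)^-1 (X'Y + lam*B0)
B_post = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ Ynext + lam * B0)
forecast = Y[-1] @ B_post  # one-step-ahead point forecast
```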
  3. By: Joshua C.C. Chan (University of Technology Sydney); Eric Eisenstat (The University of Queensland); Chenghan Hou (Hunan University); Gary Koop (University of Strathclyde)
    Abstract: Adding multivariate stochastic volatility of a flexible form to large Vector Autoregressions (VARs) involving over a hundred variables has proved challenging due to computational considerations and over-parameterization concerns. The existing literature either works with homoskedastic models or smaller models with restrictive forms for the stochastic volatility. In this paper, we develop composite likelihood methods for large VARs with multivariate stochastic volatility. These involve estimating large numbers of parsimonious models and then taking a weighted average across these models. We discuss various schemes for choosing the weights. In our empirical work involving VARs of up to 196 variables, we show that composite likelihood methods have similar properties to existing alternatives used with small data sets: they estimate the multivariate stochastic volatility in a flexible and realistic manner, and they forecast comparably. In very high-dimensional VARs, they are computationally feasible where other approaches involving stochastic volatility are not, and they produce forecasts superior to those of natural conjugate prior homoskedastic VARs.
    Keywords: Bayesian; large VAR; composite likelihood; prediction pools; stochastic volatility
    JEL: C11 C32 C53
    Date: 2018–05–01
    URL: http://d.repec.org/n?u=RePEc:uts:ecowps:44&r=for
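    The pooling idea can be sketched as follows: estimate many parsimonious sub-models and average their forecasts. The bivariate least-squares sub-VARs and equal weights below are illustrative assumptions; the paper works with stochastic-volatility sub-models and discusses several weighting schemes.

```python
import numpy as np
from itertools import combinations

# Sketch of forecast pooling across many parsimonious sub-models: each
# "sub-model" here is a bivariate VAR(1) fitted by least squares.
rng = np.random.default_rng(1)
T, n = 200, 6
Y = rng.normal(size=(T, n)).cumsum(axis=0)   # simulated data

def submodel_forecast(idx):
    """One-step forecast of the series in idx from a bivariate VAR(1)."""
    X, Ynext = Y[:-1, idx], Y[1:, idx]
    B = np.linalg.lstsq(X, Ynext, rcond=None)[0]
    return Y[-1, idx] @ B

# Pool: average each variable's forecast over all pairs containing it.
sums, counts = np.zeros(n), np.zeros(n)
for pair in combinations(range(n), 2):
    f = submodel_forecast(list(pair))
    sums[list(pair)] += f
    counts[list(pair)] += 1
pooled = sums / counts   # equally weighted composite forecast
```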
  4. By: Costantini, Mauro (Department of Economics and Finance, Brunel University, London); Kunst, Robert M. (Institute for Advanced Studies, Vienna, and University of Vienna)
    Abstract: Comparative ex-ante prediction experiments over expanding subsamples are a popular tool for selecting the best forecasting model class in finite samples of practical relevance. Flanking such a horse race with predictive-accuracy tests, such as the test by Diebold and Mariano (1995), tends to increase support for the simpler structure. We are concerned with whether such simplicity boosting actually benefits predictive accuracy in finite samples. We consider two variants of the DM test, one with naive normal critical values and one with bootstrapped critical values; the predictive-ability test by Giacomini and White (2006), which remains valid in nested problems; the F test by Clark and McCracken (2001); and model selection via the AIC as a benchmark strategy. Our Monte Carlo simulations focus on basic univariate time-series specifications, such as linear (ARMA) and nonlinear (SETAR) generating processes.
    Keywords: Forecasting, time series, predictive accuracy, model selection
    JEL: C22 C52 C53
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:341&r=for
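    For reference, a minimal sketch of the Diebold-Mariano test with naive normal critical values, one of the variants the paper simulates; the truncation of the long-run variance at h-1 autocovariances follows the usual convention.

```python
import numpy as np
from scipy import stats

def dm_test(e1, e2, h=1):
    """Diebold-Mariano (1995) test of equal predictive accuracy under
    squared-error loss. e1, e2 are numpy arrays of out-of-sample forecast
    errors from two competing models; h is the forecast horizon."""
    d = e1**2 - e2**2   # loss differential
    T = len(d)
    dbar = d.mean()
    # Long-run variance of dbar, truncated at h-1 autocovariances
    gamma = [np.cov(d[k:], d[:T - k])[0, 1] if k > 0 else d.var()
             for k in range(h)]
    lrv = gamma[0] + 2 * sum(gamma[1:])
    stat = dbar / np.sqrt(lrv / T)
    return stat, 2 * stats.norm.sf(abs(stat))  # two-sided normal p-value
```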
  5. By: Maxime Phillot; Rina Rosenblatt-Wisch
    Abstract: Expectations are key in modern macroeconomics. However, because expectations are difficult to measure directly, policymakers often rely on survey data, so it is critically important to know the limits of survey data use. We examine inflation expectations as measured by the Deloitte CFO Survey Switzerland and respondents' sensitivity to the ordering of its questions. In particular, we investigate whether forecast inconsistencies - the discrepancies between point forecasts and measures of central tendency derived from density forecasts - change significantly depending on whether the point forecast or the density forecast is asked first. We find that a) forecast inconsistencies are sizeable in the data and b) question ordering matters. Specifically, both parametric and non-parametric evaluations of consistency show that c) point forecasts tend to be significantly higher than density forecasts only for those respondents who give a density forecast first. In addition, d) characteristics such as uncertainty, firm size and economic sector relate to inconsistencies.
    Keywords: Question effects, question ordering, inflation expectations, consistency of forecasts, point forecast, density forecast
    JEL: E31 E37 E58
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:snb:snbwpa:2018-11&r=for
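    A minimal sketch of the inconsistency measure studied here: the gap between a respondent's stated point forecast and the central tendency implied by their density forecast. The bin midpoints and probabilities below are hypothetical survey responses.

```python
import numpy as np

# Hypothetical survey response: a binned density forecast for inflation
# plus a separately stated point forecast.
bin_mids = np.array([-0.5, 0.5, 1.5, 2.5, 3.5])   # bin midpoints (%)
probs = np.array([0.05, 0.20, 0.40, 0.25, 0.10])  # density forecast, sums to 1
point_forecast = 2.3                               # stated point forecast (%)

density_mean = bin_mids @ probs        # central tendency of the density
inconsistency = point_forecast - density_mean
print(f"density mean = {density_mean:.2f}%, "
      f"inconsistency = {inconsistency:+.2f}pp")
```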
  6. By: Nicolas Chanut; Mario Marcel; Carlos Medel
    Abstract: This paper focuses on the value of five economic perception surveys for macroeconomic forecasting in Chile. We compare their main features in terms of timing, representativeness, questionnaires, and aggregation of responses. We note the shortcomings of composite indices that combine questions with different focus and time perspective, and propose instead eight alternative measures distinguishing between current sentiment/future expectations and between personal/country-wide perceptions. Our results suggest that future and country-wide perceptions are formed with distinct information from personal and current sentiment, and that the latter are somewhat affected by the former. Turning to the ability of the existing and alternative measures to contribute to forecasting macro-aggregates, we find a rather strong relationship between personal and aggregate perceptions, consumption actions and actual consumption, especially of durables, outpacing the predictive ability of the existing synthetic indicator. On the business side, the predictive value of surveys seems stronger for employment than for investment, while employment and investment seem to Granger-cause personal sentiment/expectations. This suggests that while broad perceptions tend to be shaped by independent information, the assessment of one's own situation is reinforced by actual employment and investment actions. The low ability of economic perception measures to predict investment behavior, in turn, confirms that investment actions are too complex and project-specific to be captured by responses to rather broad questions. In all, while surveys of economic perceptions are a rich source of information, it is necessary to select the surveys and questions that are most revealing of present and prospective behavior.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:chb:bcchwp:824&r=for
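    A sketch of the kind of Granger-causality check mentioned above, using statsmodels; the two simulated series stand in for a macro aggregate and a survey sentiment measure.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Does employment growth Granger-cause personal sentiment? Simulated data:
# sentiment reacts to lagged employment by construction.
rng = np.random.default_rng(2)
T = 150
employment = rng.normal(size=T)
sentiment = np.r_[0.0, 0.6 * employment[:-1]] + rng.normal(scale=0.5, size=T)

df = pd.DataFrame({"sentiment": sentiment, "employment": employment})
# Tests whether the second column helps predict the first, up to 4 lags.
results = grangercausalitytests(df[["sentiment", "employment"]], maxlag=4)
```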
  7. By: Steven F. Lehrer; Tian Xie
    Abstract: There is significant hype regarding how much machine learning and social media data can improve forecast accuracy in commercial applications. To assess whether the hype is warranted, we use data from the film industry in simulation experiments that contrast econometric approaches with tools from the predictive analytics literature. Further, we propose new strategies that combine elements from each literature in a bid to capture richer patterns of heterogeneity in the underlying relationship governing revenue. Our results demonstrate the importance of social media data and the value of hybrid strategies that combine econometrics and machine learning when forecasting with new big data sources. Specifically, while recursive partitioning strategies greatly outperform dimension reduction strategies and traditional econometric approaches in forecast accuracy, there are further significant gains from hybrid approaches. Monte Carlo experiments demonstrate that these benefits arise from the significant heterogeneity in how social media measures and other film characteristics influence box office outcomes.
    JEL: C52 C53
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:24755&r=for
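    One hybrid strategy in the spirit of the paper can be sketched as a linear model whose residual heterogeneity is handed to a recursive-partitioning method; the features and data below are simulated stand-ins for film characteristics and social media measures, and the random forest is an assumed choice of partitioning method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Hybrid sketch: OLS captures the linear part, a recursive-partitioning
# method models the remaining heterogeneity in the residuals.
rng = np.random.default_rng(3)
n, k = 500, 8
X = rng.normal(size=(n, k))
# "Revenue" has a linear part plus a threshold interaction (heterogeneity).
y = 2.0 * X[:, 0] + X[:, 1] + 3.0 * (X[:, 2] > 0) * X[:, 3] + rng.normal(size=n)

train, test = slice(0, 400), slice(400, None)
ols = LinearRegression().fit(X[train], y[train])
resid = y[train] - ols.predict(X[train])
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X[train], resid)

hybrid_pred = ols.predict(X[test]) + forest.predict(X[test])
mse_hybrid = np.mean((y[test] - hybrid_pred) ** 2)
mse_ols = np.mean((y[test] - ols.predict(X[test])) ** 2)
print(f"OLS MSE: {mse_ols:.3f}  hybrid MSE: {mse_hybrid:.3f}")
```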

This nep-for issue is ©2018 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.