nep-for New Economics Papers
on Forecasting
Issue of 2016‒06‒09
ten papers chosen by
Rob J Hyndman
Monash University

  1. Forecasting Inflation with the Hybrid New Keynesian Phillips Curve: A Compact-Scale Global VAR Approach By Carlos Medel
  2. Do Combination Forecasts Outperform the Historical Average? Economic and Statistical Evidence By Thomadakis, Apostolos
  3. An adaptive approach to forecasting three key macroeconomic variables for transitional China By Niu, Linlin; Xu, Xiu; Chen, Ying
  4. A large Bayesian vector autoregression model for Russia By Deryugina, Elena; Ponomarenko, Alexey
  5. Stock prices prediction via tensor decomposition and links forecast By Alessandro Spelta
  6. Nowcasting and short-term forecasting of Russian GDP with a dynamic factor model By Porshakov, Alexey; Deryugina, Elena; Ponomarenko, Alexey; Sinyakov, Andrey
  7. A Relational Model for Predicting Farm-Level Crop Yield Distributions in the Absence of Farm-Level Data By Porth, Lysa; Tan, Ken Seng; Zhu, Wenjun
  8. A unified approach to mortality modelling using state-space framework: characterisation, identification, estimation and forecasting By Man Chung Fung; Gareth W. Peters; Pavel V. Shevchenko
  9. Predictive Bookmaker Consensus Model for the UEFA Euro 2016 By Achim Zeileis; Christoph Leitner; Kurt Hornik
  10. What does past correlation structure tell us about the future? An answer from network filtering By Nicolò Musmeci; Tomaso Aste; Tiziana Di Matteo

  1. By: Carlos Medel
    Abstract: This article analyses the multi-horizon predictive power of the Hybrid New Keynesian Phillips Curve (HNKPC), making use of a compact-scale Global VAR (GVAR), for the headline inflation of six developed countries with different inflationary experiences, covering 2000.1 to 2014.12. The key element of this article is the use of direct measures of inflation expectations—Consensus Economics—embedded in a GVAR environment, i.e. modelling cross-country interactions. The GVAR point forecast is evaluated using the Mean Squared Forecast Error (MSFE) statistic and statistically compared with several benchmarks. These include traditional statistical models, such as autoregressions (AR), the exponential smoothing model (ES), and the random walk model (RW); one final economics-based benchmark is the closed-economy univariate HNKPC. The results indicate that the GVAR performs poorly relative to the RW. The most accurate forecasts, however, are obtained with the AR and especially with the univariate HNKPC. In the long run, the ES model also appears to be a better alternative than the RW. The MSFE is obviously affected by the unanticipated effects of the financial crisis that started in 2008, so when the evaluation sample ends just before the crisis, the GVAR appears to be a valid alternative for some countries in the long run. The most robust forecasting device across countries and horizons turns out to be the univariate HNKPC, giving a role to economic fundamentals when forecasting inflation.
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:chb:bcchwp:785&r=for
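    The MSFE comparison described above is straightforward to reproduce in miniature. A minimal sketch follows (Python, simulated data; not the paper's series or models), computing pseudo out-of-sample 12-month-ahead forecasts from an AR(1) and a random walk and comparing their MSFEs:

        import numpy as np

        rng = np.random.default_rng(0)
        T, h = 180, 12                       # monthly sample, 12-month horizon
        pi = np.empty(T)
        pi[0] = 2.0
        for t in range(1, T):                # persistent "inflation" process (simulated)
            pi[t] = 0.5 + 0.8 * pi[t - 1] + rng.normal(scale=0.3)

        def msfe(forecasts, actuals):
            """Mean Squared Forecast Error."""
            e = np.asarray(actuals) - np.asarray(forecasts)
            return np.mean(e ** 2)

        ar_fc, rw_fc, actual = [], [], []
        for t in range(120, T - h):          # expanding-window pseudo out-of-sample
            y = pi[: t + 1]
            rho = np.corrcoef(y[1:], y[:-1])[0, 1]      # crude AR(1) estimate
            mu = y.mean()
            ar_fc.append(mu + rho ** h * (y[-1] - mu))  # h-step AR(1) forecast
            rw_fc.append(y[-1])                         # random walk: no change
            actual.append(pi[t + h])

        print(f"MSFE AR(1): {msfe(ar_fc, actual):.3f}")
        print(f"MSFE RW   : {msfe(rw_fc, actual):.3f}")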
  2. By: Thomadakis, Apostolos
    Abstract: This paper examines the out-of-sample predictability of monthly German stock returns and addresses whether combinations of individual model forecasts are able to provide significant out-of-sample gains relative to the historical average. Empirical analysis over the period from 1973 to 2012 shows that, first, the term spread has the in-sample ability to predict stock returns; second, and most importantly, this variable successfully delivers consistent out-of-sample forecast gains relative to the historical average; and third, combination forecasts do not appear to offer significant evidence of consistently beating the historical-average forecasts of stock returns. Results are robust under both statistical and economic criteria, and hold across different out-of-sample forecast evaluation periods.
    Keywords: Equity Premium, Forecast Combination, Out-of-Sample Forecast, Mean-Variance Investor
    JEL: C22 C32 C53 G11 G17
    Date: 2016–05–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:71589&r=for
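    The horse race in this paper is also easy to sketch: an equal-weight combination of individual predictive regressions against the historical-average benchmark, scored by the out-of-sample R-squared (positive values mean the historical average is beaten). Returns and predictors below are simulated placeholders, not the German data:

        import numpy as np

        rng = np.random.default_rng(1)
        T, K = 480, 3                        # monthly returns, 3 candidate predictors
        X = rng.normal(size=(T, K))
        r = np.empty(T)
        r[0] = 0.0
        r[1:] = 0.3 * X[:-1, 0] + rng.normal(size=T - 1)   # only predictor 0 matters

        bench, combo, actual = [], [], []
        for t in range(240, T - 1):
            y, Z = r[: t + 1], X[: t + 1]
            bench.append(y.mean())           # benchmark: recursive historical average
            preds = []
            for k in range(K):               # one bivariate predictive regression each
                b, a = np.polyfit(Z[:-1, k], y[1:], 1)
                preds.append(a + b * X[t, k])
            combo.append(np.mean(preds))     # equal-weight combination forecast
            actual.append(r[t + 1])

        bench, combo, actual = map(np.array, (bench, combo, actual))
        r2_os = 1 - np.sum((actual - combo) ** 2) / np.sum((actual - bench) ** 2)
        print(f"out-of-sample R^2 of the combination: {r2_os:.3f}")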
  3. By: Niu, Linlin; Xu, Xiu; Chen, Ying
    Abstract: We propose the use of a local autoregressive (LAR) model for adaptive estimation and forecasting of three of China’s key macroeconomic variables: GDP growth, inflation and the 7-day interbank lending rate. The approach accounts for possible structural changes in the data-generating process when selecting a local homogeneous interval for model estimation, and is particularly well suited to a transition economy experiencing ongoing policy shifts and structural adjustment. Our results indicate that the proposed method outperforms alternative models and forecast methods, especially at forecast horizons of 3 to 12 months. Our one-quarter-ahead adaptive forecasts even match the performance of the well-known CMRC Langrun survey forecast. The selected homogeneous intervals indicate gradual changes in industrial production growth, driven by the constant evolution of the real economy in China, as well as abrupt changes in interest rate and inflation dynamics that capture monetary policy shifts.
    Keywords: Chinese economy, local parametric models, forecasting
    JEL: E43 E47
    Date: 2015–04–10
    URL: http://d.repec.org/n?u=RePEc:bof:bofitp:2015_012&r=for
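    The LAR method selects a local homogeneous interval by sequential testing. As a crude stand-in for that procedure (not the authors' algorithm), the sketch below widens the estimation window until the AR(1) coefficient drifts by more than a tolerance, treating larger drift as evidence of a structural change:

        import numpy as np

        def adaptive_ar1_forecast(y, windows=(24, 48, 96, 192), tol=0.15):
            """One-step forecast from the longest 'locally homogeneous' window."""
            y = np.asarray(y, dtype=float)

            def ar1(z):                      # (slope, intercept) of z_t on z_{t-1}
                return np.polyfit(z[:-1], z[1:], 1)

            rho0, c0 = ar1(y[-windows[0]:])
            rho, c = rho0, c0
            for w in windows[1:]:
                if w > len(y):
                    break
                r, ic = ar1(y[-w:])
                if abs(r - rho0) > tol:      # homogeneity check fails: stop widening
                    break
                rho, c = r, ic
            return c + rho * y[-1]

        rng = np.random.default_rng(2)
        y = np.concatenate([rng.normal(0, 1, 150),        # regime 1
                            rng.normal(2, 1, 100)])       # regime 2: level shift
        print(f"adaptive one-step forecast: {adaptive_ar1_forecast(y):.3f}")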
  4. By: Deryugina, Elena; Ponomarenko, Alexey
    Abstract: We apply an econometric approach developed specifically to address the ‘curse of dimensionality’ in Russian data and estimate a Bayesian vector autoregression model comprising 14 major domestic real, price and monetary macroeconomic indicators as well as external sector variables. We conduct several types of exercise to validate our model: impulse response analysis, recursive forecasting and counterfactual simulation. Our results demonstrate that the employed methodology is highly appropriate for economic modelling in Russia. We also show that post-crisis real sector developments in Russia could be accurately forecast if conditioned on the oil price and EU GDP (but not if conditioned on the oil price alone).
    Keywords: Bayesian vector autoregression, forecasting, Russia
    JEL: E32 E44 E47 C32
    Date: 2014–12–04
    URL: http://d.repec.org/n?u=RePEc:bof:bofitp:2014_022&r=for
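    The 'curse of dimensionality' is tamed by shrinkage. As a minimal illustration (not the authors' prior or data), the sketch below estimates a 14-variable VAR by penalised least squares, whose solution equals the posterior mean under a Gaussian prior centred at zero; a Minnesota-style prior would refine the penalty variable by variable and lag by lag:

        import numpy as np

        rng = np.random.default_rng(3)
        T, n, p, lam = 120, 14, 2, 10.0      # sample, variables, lags, prior tightness
        Y = rng.normal(size=(T, n))          # placeholder macro panel

        # Regressor matrix: intercept plus p lags of all variables
        X = np.hstack([Y[p - k - 1 : T - k - 1] for k in range(p)])
        X = np.hstack([np.ones((T - p, 1)), X])
        Z = Y[p:]

        # Posterior mean under a zero-centred Gaussian prior: (X'X + lam*I)^{-1} X'Z
        B = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Z)

        x_last = np.concatenate([[1.0], Y[-p:][::-1].ravel()])   # latest p lags
        print("one-step forecast (first 3 variables):", (x_last @ B)[:3])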
  5. By: Alessandro Spelta (Università Cattolica del Sacro Cuore; Dipartimento di Economia e Finanza, Università Cattolica del Sacro Cuore)
    Abstract: Many complex systems display fluctuations between alternative states in correspondence to tipping points. These critical shifts are usually associated with generic empirical phenomena such as strengthening correlations between the entities composing the system. In finance, for instance, market crashes are the consequence of herding behaviors that make the units of the system strongly correlated, lowering the distances between them. Consequently, determining future distances between stocks can be a valuable starting point for predicting market downturns. This is the aim of the present work. It introduces a multi-way procedure for forecasting stock prices by decomposing a distance tensor. This multidimensional method avoids aggregation processes that could lead to the loss of crucial features of the system. The technique is applied to a basket of stocks composing the S&P500 composite index, and to the index itself, so as to demonstrate its ability to predict the large market shifts that arise in times of turbulence, such as the ongoing financial crisis.
    Keywords: Stock prices, Correlations, Tensor Decomposition, Forecast.
    JEL: C02 C63
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:ctc:serie1:def041&r=for
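    A toy version of the ingredients named in the abstract (simulated returns, deliberately minimal rank; not the paper's algorithm): rolling correlation-distance matrices are stacked into a three-way tensor and compressed with a rank-1 CP (PARAFAC) decomposition by alternating least squares, leaving a temporal factor that could be extrapolated to anticipate tightening distances:

        import numpy as np

        rng = np.random.default_rng(4)
        T, N, w = 250, 10, 50                # days, stocks, rolling window length
        returns = rng.normal(size=(T, N))

        # Distance tensor: one sqrt(2(1 - corr)) matrix per window end
        slabs = []
        for t in range(w, T, 10):
            C = np.corrcoef(returns[t - w : t].T)
            slabs.append(np.sqrt(2.0 * (1.0 - C)))
        D = np.stack(slabs)                  # shape: (windows, N, N)

        # Rank-1 CP via alternating least squares: D[i,j,k] ~ a[i] * b[j] * c[k]
        a, b, c = np.ones(D.shape[0]), np.ones(N), np.ones(N)
        for _ in range(50):
            a = np.einsum("ijk,j,k->i", D, b, c) / ((b @ b) * (c @ c))
            b = np.einsum("ijk,i,k->j", D, a, c) / ((a @ a) * (c @ c))
            c = np.einsum("ijk,i,j->k", D, a, b) / ((a @ a) * (b @ b))

        print("temporal factor (drops when distances shrink):", np.round(a, 3))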
  6. By: Porshakov, Alexey; Deryugina, Elena; Ponomarenko, Alexey; Sinyakov, Andrey
    Abstract: Real-time assessment of quarterly GDP growth rates is crucial for evaluating the current state of the economy, given that the respective data are normally published with substantial delays by national statistical agencies. The large sets of real-time indicators that could be used to approximate GDP growth in the quarter of interest are in practice characterized by unbalanced data, mixed frequencies and systematic data revisions, as well as by a more general curse-of-dimensionality problem. These issues can, however, be resolved by means of dynamic factor modelling, which has recently been recognized as a helpful tool for evaluating current economic conditions on the basis of higher-frequency indicators. Our major results show that dynamic factor models predict Russian GDP dynamics better than common alternative specifications. At the same time, we show empirically that the arrival of new data consistently improves the DFM's predictive accuracy across sequential nowcast vintages. We also introduce an analysis of the nowcast evolution that results from gradually expanding the dataset of explanatory variables, as well as a framework for estimating the contributions of different blocks of predictors to nowcasts of Russian GDP.
    Keywords: GDP nowcast, dynamic factor models, principal components, Kalman filter, nowcast evolution
    JEL: C53 C82 E17
    Date: 2015–05–28
    URL: http://d.repec.org/n?u=RePEc:bof:bofitp:2015_019&r=for
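    A hedged two-step sketch of the nowcasting idea, using a bridge-equation simplification of the Kalman-filter DFM the paper employs: principal-component factors are extracted from a monthly indicator panel, and quarterly GDP growth is regressed on quarter-averaged factors. All series are simulated:

        import numpy as np

        rng = np.random.default_rng(5)
        months, n_ind, r = 120, 20, 2
        f = rng.normal(size=(months, r))               # latent monthly factors
        panel = f @ rng.normal(size=(r, n_ind)) + 0.5 * rng.normal(size=(months, n_ind))

        # Step 1: factors = leading principal components of the standardised panel
        Z = (panel - panel.mean(0)) / panel.std(0)
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        F = Z @ Vt[:r].T                               # estimated monthly factors

        # Step 2: bridge regression of quarterly GDP growth on quarterly factors
        Fq = F.reshape(-1, 3, r).mean(axis=1)          # average months into quarters
        gdp = 0.8 * Fq[:, 0] - 0.3 * Fq[:, 1] + 0.2 * rng.normal(size=len(Fq))
        X = np.hstack([np.ones((len(Fq) - 1, 1)), Fq[:-1]])
        beta, *_ = np.linalg.lstsq(X, gdp[:-1], rcond=None)

        nowcast = np.concatenate([[1.0], Fq[-1]]) @ beta   # current-quarter estimate
        print(f"GDP nowcast: {nowcast:.3f}  (simulated actual: {gdp[-1]:.3f})")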
  7. By: Porth, Lysa; Tan, Ken Seng; Zhu, Wenjun
    Abstract: Individual farm-level expected yields serve as the foundation for crop insurance design and rating. Constructing a reasonable, accurate, and robust model for the farm-level loss distribution is therefore essential. Unfortunately, farm-level yield data are often insufficient or unavailable for sound statistical inference in many regions, especially in developing countries. This paper develops a new two-step relational model to predict farm-level crop yield distributions in the absence of farm-level yield data, by "borrowing" information from a neighbouring country where detailed farm-level yield experience is available. The first step of the relational model defines a similarity measure based on a Euclidean metric to select an optimal county, considering weather information, average farm size, county size and county-level yield volatility. The second step links the selected county with the county to be predicted by modelling the dependence structure between farm-level and county-level yield losses. Detailed farm-level and county-level corn yield data from the U.S. and Canada are used to examine the performance of the proposed relational model empirically. The results show that the approach developed in this paper may be useful in improving yield forecasts and pricing where farm-level data are limited or unavailable. Further, this approach may also help to address aggregation bias, which arises when county-level data are used as a substitute for farm-level data and tends to result in underestimation of the predicted risk relative to the true risk.
    Keywords: Relational Model, Aggregation Bias, Shortness of Data, Euclidean Distance, Crop Insurance, Yield Forecasting, Ratemaking, Crop Production/Industries, Research Methods/Statistical Methods
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:ags:aaea16:236278&r=for
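    The first step of the relational model amounts to a nearest-neighbour search under a Euclidean metric on standardised features. A minimal sketch follows; the feature names and values are illustrative placeholders, not the paper's data:

        import numpy as np

        # Features: precipitation, growing degree days, avg farm size, county size,
        # county-level yield volatility (all placeholder values)
        target = np.array([520.0, 2600.0, 320.0, 1800.0, 0.22])   # county to predict
        candidates = {
            "county_A": np.array([505.0, 2550.0, 300.0, 1700.0, 0.20]),
            "county_B": np.array([700.0, 2900.0, 150.0,  900.0, 0.35]),
            "county_C": np.array([530.0, 2650.0, 310.0, 1900.0, 0.24]),
        }

        # Standardise each feature across all counties so no single unit dominates
        M = np.vstack([target] + list(candidates.values()))
        mu, sd = M.mean(0), M.std(0)

        def euclid(x, y):
            return float(np.linalg.norm((x - mu) / sd - (y - mu) / sd))

        scores = {k: euclid(target, v) for k, v in candidates.items()}
        print({k: round(v, 3) for k, v in scores.items()})
        print("selected analogue county:", min(scores, key=scores.get))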
  8. By: Man Chung Fung; Gareth W. Peters; Pavel V. Shevchenko
    Abstract: This paper explores and develops alternative statistical representations and estimation approaches for dynamic mortality models. The framework we adopt reinterprets popular mortality models, such as the Lee-Carter class, within a general state-space modelling methodology, which allows modelling, estimation and forecasting of mortality under a unified framework. Furthermore, we propose an alternative class of model identification constraints which is more suited to statistical inference in filtering and parameter estimation settings based on maximization of the marginalized likelihood, or in Bayesian inference. We then develop a novel class of Bayesian state-space models which incorporate a priori beliefs about the mortality model characteristics as well as more flexible and appropriate assumptions for the heteroscedasticity present in observed mortality data. We show that multiple period and cohort effects can be cast in a state-space structure. To study long-term mortality dynamics, we introduce stochastic volatility to the period effect. Estimation of the resulting stochastic volatility model of mortality is performed using a recent class of Monte Carlo procedures designed specifically for state and parameter estimation in Bayesian state-space models, known as particle Markov chain Monte Carlo methods. We illustrate the framework using Danish male mortality data, and show that incorporating heteroscedasticity and stochastic volatility markedly improves model fit despite the increase in model complexity. Forecasting properties of the enhanced models are examined with long and short calibration periods on the reconstruction of life tables.
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1605.09484&r=for
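    For orientation, the classical Lee-Carter decomposition that the paper recasts in state-space form can be sketched in a few lines: centre log death rates by age, take the period index k_t from the leading SVD component (identified only up to sign and scale here), and forecast it with a random walk with drift. The rates below are simulated:

        import numpy as np

        rng = np.random.default_rng(6)
        ages, years, h = 40, 50, 10
        a_x = np.linspace(-8.0, -2.0, ages)             # age profile of log rates
        k_t = np.cumsum(rng.normal(-0.3, 0.2, years))   # declining period effect
        b_x = np.full(ages, 1.0 / ages)
        log_m = a_x[:, None] + np.outer(b_x, k_t) + 0.02 * rng.normal(size=(ages, years))

        alpha = log_m.mean(axis=1)                      # estimated a_x
        U, S, Vt = np.linalg.svd(log_m - alpha[:, None], full_matrices=False)
        beta, kappa = U[:, 0], S[0] * Vt[0]             # b_x, k_t up to sign/scale

        drift = np.diff(kappa).mean()                   # random walk with drift
        kappa_fc = kappa[-1] + drift * np.arange(1, h + 1)
        log_m_fc = alpha[:, None] + np.outer(beta, kappa_fc)
        print("forecast log rates, youngest age, h=1..3:", np.round(log_m_fc[0, :3], 3))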
  9. By: Achim Zeileis; Christoph Leitner; Kurt Hornik
    Abstract: From 10 June to 10 July 2016, the best European football teams will meet in France to determine the European Champion in the UEFA European Championship 2016 tournament (Euro 2016 for short). For the first time, 24 teams compete, expanding the format from the 16 teams of the previous five Euro tournaments. To forecast the winning probability of each team, a predictive model based on the odds of 19 online bookmakers is employed. The favorite is the host France, with a forecasted winning probability of 21.5%, followed by the current World Champion Germany with a winning probability of 20.1%. The defending European Champion Spain follows after some gap with 13.7%, and all remaining teams are predicted to have lower chances, with England (9.2%) and Belgium (7.7%) being the "best of the rest". Furthermore, by complementing the bookmaker consensus results with simulations of the whole tournament, predicted pairwise probabilities for each possible game at the Euro 2016 are obtained, along with "survival" probabilities for each team proceeding to the different stages of the tournament. For example, it is much more likely that the top favorites France and Germany meet in the semifinal (7.8%) than in the final at the Stade de France (4.2%) - which would be a re-match of the friendly game played on 13 November 2015 during the terrorist attacks in Paris, won 2-0 by France. Hence, it is perhaps better that the tournament draw favors a match in the semifinal at Marseille (with an almost even winning probability of 50.5% for France). The most likely final is then that either of these two teams plays the defending champion Spain, with a probability of 5.7% for France vs. Spain and 5.4% for Germany vs. Spain, respectively. All forecasts are the result of an aggregation of quoted winning odds for each team in the Euro 2016: these are first adjusted for profit margins ("overrounds"), averaged on the log-odds scale, and then transformed back to winning probabilities. Moreover, team abilities (or strengths) are approximated by an "inverse" procedure of tournament simulations, yielding estimates of probabilities for all possible pairwise matches at all stages of the tournament. This technique correctly predicted the winners of the FIFA 2010 and Euro 2012 tournaments, while missing the winner but correctly predicting the final for the Euro 2008, and three out of four semifinalists at the FIFA 2014 World Cup (Leitner, Zeileis, and Hornik 2008, 2010a,b; Zeileis, Leitner, and Hornik 2012, 2014).
    Keywords: consensus, agreement, bookmaker odds, tournament, UEFA European Championship 2016
    JEL: C53 C40 D84
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2016-15&r=for
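    The aggregation recipe stated at the end of the abstract is simple to reproduce: adjust quoted odds for the overround, average on the log-odds scale, and transform back to probabilities. A minimal sketch with made-up decimal odds for three teams, treated as the whole book for simplicity:

        import numpy as np

        # Rows: bookmakers; columns: teams (made-up odds, e.g. France, Germany, Spain)
        quoted = np.array([
            [4.4, 4.8, 7.0],
            [4.5, 5.0, 7.5],
            [4.2, 4.6, 7.2],
        ])

        def consensus(decimal_odds):
            raw = 1.0 / decimal_odds                    # implied probabilities
            adj = raw / raw.sum(axis=1, keepdims=True)  # strip each bookmaker's margin
            logit = np.log(adj / (1.0 - adj))           # to log-odds scale
            mean = logit.mean(axis=0)                   # average across bookmakers
            return 1.0 / (1.0 + np.exp(-mean))          # back to probabilities

        print(np.round(consensus(quoted), 3))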
  10. By: Nicolò Musmeci; Tomaso Aste; Tiziana Di Matteo
    Abstract: We discover that past changes in the market correlation structure are significantly related to future changes in market volatility. Using correlation-based information filtering networks, we devise a new tool for forecasting changes in market volatility. In particular, we introduce a new measure, the "correlation structure persistence", that quantifies the rate of change of the market dependence structure. This measure shows a deep interplay with changes in volatility, and we demonstrate that it can anticipate market risk variations. Notably, our method overcomes the curse of dimensionality that limits the applicability of traditional econometric tools to portfolios made of a large number of assets. We report the forecasting performance and statistical significance of this tool for two different equity datasets. We also identify an optimal region of parameters in terms of the true positive-false positive trade-off, through a ROC curve analysis. We find that our forecasting method is robust and outperforms predictors based on past volatility only. Moreover, the temporal analysis indicates that our method adapts to abrupt changes in the market, such as financial crises, more rapidly than methods based on past volatility.
    Date: 2016–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1605.08908&r=for
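    One way to quantify how quickly the dependence structure changes, in the spirit of the abstract (the paper's exact persistence measure may differ): build a minimum-spanning-tree filtered network on each rolling correlation window and track the fraction of edges that survive from one window to the next. Returns are simulated:

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        rng = np.random.default_rng(7)
        T, N, w = 400, 15, 100               # days, assets, window length
        returns = rng.normal(size=(T, N))

        def mst_edges(window):
            C = np.corrcoef(window.T)
            D = np.sqrt(2.0 * (1.0 - C))     # correlation distance matrix
            mst = minimum_spanning_tree(D).tocoo()
            return {tuple(sorted(e)) for e in zip(mst.row, mst.col)}

        prev, survival = None, []
        for t in range(w, T, 20):
            edges = mst_edges(returns[t - w : t])
            if prev is not None:             # an MST on N nodes has N - 1 edges
                survival.append(len(edges & prev) / (N - 1))
            prev = edges
        print("edge-survival ratio per step:", np.round(survival, 2))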

This nep-for issue is ©2016 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.