nep-for New Economics Papers
on Forecasting
Issue of 2016‒02‒04
twelve papers chosen by
Rob J Hyndman
Monash University

  1. Evaluating a Structural Model Forecast: Decomposition Approach By Frantisek Brazdik; Zuzana Humplova; Frantisek Kopriva
  2. Option-Implied Equity Premium Predictions via Entropic Tilting By Davide Pettenuzzo; Konstantinos Metaxoglou; Aaron Smith
  3. Inflation as a global phenomenon - some implications for policy analysis and forecasting By Kabukcuoglu, Ayse; Martinez-Garcia, Enrique
  4. Predicting Belgium’s GDP using targeted bridge models By Christophe Piette
  5. Forecasting the 2015 General Election with Internet Big Data: An Application of the TRUST Framework By Ronald McDonald; Xuxin Mao
  6. A Methodological Note on Eliciting Price Forecasts in Asset Market Experiments By Nobuyuki Hanaki; Eizo Akiyama; Ryuichiro Ishikawa
  7. The implications of liquidity expansion in China for the US dollar By Wensheng Kang; Ronald A. Ratti; Joaquin L. Vespignani
  8. Oil price forecastability and economic uncertainty By Stelios D. Bekiros; Rangan Gupta; Alessia Paccagnini
  9. Generalizing smooth transition autoregressions By Emilio Zanetti Chini
  10. Regional Oil Extraction and Consumption: A simple production model for the next 35 years Part I By Michael Dittmar
  11. Financial connectedness among European volatility risk premia By Andrea Cipollini; Iolanda Lo Cascio; Silvia Muzzioli
  12. Survival Models for Credit Risk Estimation in the context of SME By Alberto BURCHI; Francesca PIERRI

  1. By: Frantisek Brazdik; Zuzana Humplova; Frantisek Kopriva
    Abstract: When presenting the results of macroeconomic forecasting, forecasters often have to explain the contribution of data revisions, conditioning information, and expert judgment updates to the forecast update. We present a framework for decomposing the differences between two forecasts generated by a linear structural model into the contributions of the elements of the information set when anticipated and unanticipated conditioning is applied. The presented framework is based on a set of supporting forecasts that simplify the decomposition of the forecast update. The features of the framework are demonstrated by examining two forecast scenarios with the same initial prediction period but different forecast assumptions. The full capabilities of the decomposition framework are documented by an example forecast evaluation where the forecast from the Czech National Bank’s Inflation Report III/2012 is assessed with respect to the updated forecast from Inflation Report III/2013.
    Keywords: Data revisions, DSGE models, forecasting, forecast revisions
    JEL: C53 E01 E47
    Date: 2015–12
  2. By: Davide Pettenuzzo (Brandeis University); Konstantinos Metaxoglou (Carleton University); Aaron Smith (University of California, Davis)
    Abstract: We propose a new method to improve density forecasts of the equity premium using information from options markets. We tilt the predictive densities from standard econometric models suggested in the stock return predictability literature towards the second moment of the risk-neutral distribution implied by options prices. In so doing, we use a simple regression-based approach to remove the variance risk premium. By combining the backward-looking information contained in the econometric models with the forward-looking information from the options prices, tilting yields sharper predictive densities. Using density forecasts of the U.S. equity premium in Rapach and Zhou (2012), we find that tilting leads to more accurate predictions, both in terms of statistical and economic performance.
    JEL: C11 C22 G11 G12
    Date: 2016–01
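The tilting step in paper 2 can be illustrated with a minimal sketch (not the authors' exact procedure, which also removes the variance risk premium by regression): starting from equal-weight draws of a baseline predictive density, exponential-tilting weights are chosen so the reweighted second moment matches a target, minimizing Kullback-Leibler divergence from the original density. The name `entropic_tilt` and the scalar `target_second_moment` (standing in for an option-implied variance) are illustrative assumptions.

```python
import numpy as np

def entropic_tilt(draws, target_second_moment, lo=-5.0, hi=5.0, tol=1e-10):
    """Reweight density-forecast draws so that the tilted second moment
    matches a target (e.g. an option-implied variance), minimizing KL
    divergence to the original equal-weight density.  The KL-optimal
    weights take the exponential form w_i ∝ exp(gamma * x_i**2); gamma
    is found by bisection on the moment condition.  The bracket [lo, hi]
    assumes the target moment is attainable within it."""
    g = draws ** 2
    gs = g - g.mean()  # center the moment function for numerical stability

    def tilted_moment(gamma):
        w = np.exp(gamma * gs)
        w /= w.sum()
        return w @ g

    # tilted_moment is increasing in gamma, so bisection applies directly
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tilted_moment(mid) < target_second_moment:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    w = np.exp(0.5 * (lo + hi) * gs)
    return w / w.sum()
```

The tilted weights can then be used to recompute any forecast statistic (quantiles, expected utility) under the sharper density.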
  3. By: Kabukcuoglu, Ayse (Koç University); Martinez-Garcia, Enrique (Federal Reserve Bank of Dallas)
    Abstract: We evaluate the performance of inflation forecasts based on the open-economy Phillips curve by exploiting the spatial pattern of international propagation of inflation. We model these spatial linkages using global inflation and either domestic slack or oil price fluctuations, motivated by a novel interpretation of the forecasting implications of the workhorse open-economy New Keynesian model (Martínez-García and Wynne (2010), Kabukcuoglu and Martínez-García (2014)). We find that incorporating spatial interactions yields significantly more accurate forecasts of local inflation in 14 advanced countries (including the U.S.) than a simple autoregressive model that captures only the temporal dimension of the inflation dynamics.
    JEL: C21 C23 C53 F41
    Date: 2016–01–01
  4. By: Christophe Piette (Research Department, NBB)
    Abstract: This paper investigates the usefulness, within the frameworks of the standard bridge model and the ‘bridging with factors’ approach, of a predictor selection procedure that builds on the elastic net algorithm. A pseudo-real time forecasting exercise is performed, in which estimates for Belgium’s quarterly GDP are generated using a monthly dataset of 93 potential predictors. While the simulation results indicate that specifying forecasting models using this procedure can lead to a slight improvement in terms of predictive accuracy over shorter horizons, the forecasting errors made by these ‘targeted’ models are not found to be significantly different from those based on the principal components extracted from the entire set of available indicators. In other words, the only advantage of following such an approach lies in the fact that it enables the forecaster to streamline the information set.
    Keywords: bridge models, nowcasting, variable selection
    JEL: C22 E37
    Date: 2016–01
  5. By: Ronald McDonald; Xuxin Mao
    Abstract: Many variables, such as currencies, are very difficult to predict, and researchers often demonstrate that a simple random walk process can out-perform a model-based forecast using fundamentals. However, there is increasing evidence that such results can be overturned by using rich enough dynamic processes in the underlying statistical modelling and by ensuring that a rich enough information set is used. Elections have also become increasingly difficult to predict, despite the use of increasingly sophisticated methods, with the 2015 UK General Election being a good case in point. In this paper we demonstrate that the kind of statistical methods used to predict currencies and other financial variables, combined with information culled from internet sources such as Google Trends, can greatly improve on predictions based solely on opinion polls. This paper offers the first real-time test of so-called Big Data methods for the 2015 UK General Election. Our real-time predictions of both the overall UK and the Scottish components of the election are very close to the actual outcomes.
    Date: 2015–10
  6. By: Nobuyuki Hanaki (Université Nice Sophia Antipolis; GREDEG-CNRS; IUF); Eizo Akiyama (University of Tsukuba, Japan); Ryuichiro Ishikawa (University of Tsukuba, Japan)
    Abstract: We investigate (a) whether eliciting future price forecasts influences market outcomes, and (b) whether differences in the way subjects are incentivized to submit ''accurate'' price forecasts influence the market outcomes as well as the forecasts submitted in an experimental asset market. We consider three treatments: one without forecast elicitation (NF) and two with forecast elicitation. In one of the latter treatments, subjects are paid based on both their forecasting and their trading performance (Bonus), while in the other, they are paid based on only one of the two, chosen randomly at the end of the experiment (Unique). While we found no statistical differences in mispricing, trading volumes, or trading behavior between the NF and Unique treatments, there were some statistically significant differences between the NF and Bonus treatments. Thus, if the aim is to elicit price forecasts without influencing subjects' behavior and market outcomes relative to the NF treatment, the Unique treatment seems preferable to the Bonus treatment.
    Keywords: Price forecast elicitation, Experimental asset markets
    JEL: C90 D84
    Date: 2016–01
  7. By: Wensheng Kang; Ronald A. Ratti; Joaquin L. Vespignani
    Abstract: The value of the US dollar is of major importance to the world economy. Global liquidity has grown sharply in recent years, with China’s money supply of growing importance to global liquidity. We develop out-of-sample forecasts of the US dollar exchange rate using US and non-US global data on inflation, output and interest rates, together with liquidity data for the US, China and the non-US/non-China aggregate. Monetary model forecasts significantly outperform a random walk forecast in terms of MSFE at horizons of 12 to 30 months ahead. A monetary model with sticky prices performs best. Rolling sample analysis indicates changes over time in the influence of variables in forecasting the US dollar. China’s liquidity has a distinct, significant and changing influence on the US dollar exchange rate. Post global financial crisis, increases in the growth rate of China’s M2 forecast a significantly higher value for the US dollar 12 and 18 months ahead and significantly lower values 24 and 30 months ahead.
    Keywords: China’s liquidity, trade-weighted US dollar, forecasting US dollar exchange rate
    JEL: E41 E51 F31 F41
    Date: 2016–01
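The random-walk comparison in paper 7 follows a standard exchange-rate forecasting yardstick (Meese and Rogoff's no-change benchmark). A minimal sketch of the bookkeeping, with the hypothetical helper `msfe_vs_random_walk` (the paper's actual models and alignment conventions may differ):

```python
import numpy as np

def msfe_vs_random_walk(actual, model_forecast, horizon=1):
    """Compare a model's h-step-ahead forecasts with the no-change
    (random-walk) benchmark, whose forecast of y_{t+h} is simply y_t.
    model_forecast[i] is assumed to predict actual[i + horizon].
    Returns (model MSFE, RW MSFE, ratio); a ratio below 1 means the
    model beats the random walk."""
    actual = np.asarray(actual, dtype=float)
    model_forecast = np.asarray(model_forecast, dtype=float)
    target = actual[horizon:]        # realized y_{t+h}
    rw = actual[:-horizon]           # no-change forecast y_t
    model = model_forecast[:len(target)]
    msfe_model = np.mean((target - model) ** 2)
    msfe_rw = np.mean((target - rw) ** 2)
    return msfe_model, msfe_rw, msfe_model / msfe_rw
```

In practice the ratio would be computed for each horizon (12, 18, 24, 30 months) and its significance assessed with a test such as Diebold-Mariano.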
  8. By: Stelios D. Bekiros; Rangan Gupta; Alessia Paccagnini
    Abstract: Information on economic policy uncertainty does matter in predicting the change in oil prices. We compare the forecasting performance of standard, Bayesian and time-varying VARs against univariate models. The time-varying VAR outperforms all alternative models over the period 2007:1–2014:2.
    Keywords: Oil prices; Economic policy uncertainty; Forecasting
    JEL: C22 C32 C53 E60 Q41
    Date: 2015–07
  9. By: Emilio Zanetti Chini (Department of Economics and Management)
    Abstract: We introduce a new time series model capable of parametrizing the joint asymmetry in the duration and length of cycles - the dynamic asymmetry - by using a particular generalization of the logistic function. The modelling strategy is discussed in detail, with particular emphasis on two different tests for the null of symmetric adjustment and three diagnostic tests, whose power properties are explored via Monte Carlo experiments. Four case studies on classical real datasets from economics and biology illustrate the versatility of the new model in different fields. In all cases, the dynamic asymmetry in the cycle is efficiently detected and modelled. Finally, a rolling forecasting exercise is applied to the resulting estimates. Our model beats linear and conventional nonlinear competitors in point forecasting, while this superiority becomes less evident in density forecasting, especially when relying on robust measures.
    Keywords: Dynamic asymmetry, Nonlinear time series, Econometric Modelling, Point forecasts, Density forecasts, Evaluating forecasts, Combining forecasts, Error measures.
    JEL: C22 C51 C52
    Date: 2016–01
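For readers less familiar with smooth transition autoregressions, the symmetric logistic transition that paper 9 generalizes can be sketched as follows (this shows only the conventional building block, not the paper's generalized function; the function names are illustrative):

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Conventional first-order logistic transition G(s; gamma, c) used
    in smooth transition autoregressions (STAR): G moves smoothly from 0
    to 1 as the transition variable s passes the threshold c, with gamma
    controlling the speed of transition.  This G is symmetric around c;
    the paper's generalization lets regimes be entered and left at
    different speeds (dynamic asymmetry)."""
    return 1.0 / (1.0 + np.exp(-gamma * (np.asarray(s, dtype=float) - c)))

def star_fitted_value(y_lag, s, phi_low, phi_high, gamma, c):
    """One-step STAR(1) fitted value: a convex combination of two AR(1)
    regimes, weighted by the transition function."""
    g = logistic_transition(s, gamma, c)
    return (1.0 - g) * phi_low * y_lag + g * phi_high * y_lag
```

At s = c the two regimes receive equal weight; as gamma grows large the model approaches a sharp threshold autoregression.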
  10. By: Michael Dittmar (ETH Zurich, Institute of Particle Physics)
    Abstract: The growing conflicts in and about oil exporting regions, and speculation about volatile oil prices during the last decade, have renewed public interest in predictions of near-future oil production and consumption. Unfortunately, studies from only 10 years ago, which tried to forecast oil production over the next 20-30 years, failed to make accurate predictions for today's global oil production and consumption. Forecasts using economic growth scenarios overestimated the actual oil production, while models that tried to estimate the maximum future annual oil production from official country oil reserve data predicted too low a production. In this paper, a new approach is proposed to model the maximal future regional, and thus global, oil production (Part I) and consumption (Part II) during the next decades. Our analysis of regional oil production data over past decades shows that, in contrast to growth periods, when growth rates varied greatly from one country to another, remarkable similarities are found during the plateau and decline periods of different countries. Following this model, the production phase of each major oil producing country and region is determined essentially from recent oil production data alone. Using these data, the model is then used to predict production from all major oil producing countries, regions and continents up to the year 2050. The limited regional and global potential to compensate for this decline with unconventional oil and oil-equivalents is also presented.
    Date: 2016–01
  11. By: Andrea Cipollini; Iolanda Lo Cascio; Silvia Muzzioli
    Abstract: In this paper we use the Diebold and Yilmaz (2009, 2012) methodology to estimate the contribution and the vulnerability to systemic risk of volatility risk premia for five European stock markets: France, Germany, the UK, Switzerland and the Netherlands. The volatility risk premium, a proxy for risk aversion, is measured by the difference between the implied volatility and the expected realized volatility of the stock market over the next month. While Diebold and Yilmaz focus on the forecast error variance decomposition of stock returns or range-based volatilities using a stationary VAR in levels, we account for the (locally) long-memory stationary properties of the levels of the volatility risk premia series. We therefore estimate and invert a Fractionally Integrated VAR model to compute the cross forecast error variance shares needed to obtain the indices of total and directional connectedness.
    Keywords: volatility risk premium, long memory, FIVAR, financial connectedness
    JEL: C32 C38 C58 G13
    Date: 2015–12
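Once a forecast error variance decomposition is in hand (the FIVAR estimation is the paper's contribution and is not sketched here), the Diebold-Yilmaz connectedness bookkeeping is mechanical. A minimal sketch, assuming the FEVD matrix is supplied with rows normalized to sum to one (as with generalized FEVDs):

```python
import numpy as np

def connectedness(fevd):
    """Diebold-Yilmaz connectedness measures from an H-step forecast
    error variance decomposition matrix: fevd[i, j] is the share of
    variable i's forecast error variance attributable to shocks to
    variable j.  Returns the total connectedness index plus the
    directional 'from others' and 'to others' vectors."""
    d = np.asarray(fevd, dtype=float)
    d = d / d.sum(axis=1, keepdims=True)   # normalize rows to sum to one
    n = d.shape[0]
    off = d - np.diag(np.diag(d))          # keep cross-variance shares only
    from_others = off.sum(axis=1)          # spillovers received by each i
    to_others = off.sum(axis=0)            # spillovers transmitted by each j
    total = off.sum() / n                  # total connectedness index
    return total, from_others, to_others
```

A market whose `to_others` entry exceeds its `from_others` entry is a net transmitter of volatility-risk-premium shocks, which is how contribution versus vulnerability to systemic risk is read off the index.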
  12. By: Alberto BURCHI; Francesca PIERRI
    Abstract: Credit risk is the potential that a bank borrower or counterparty will fail to meet its obligations in accordance with agreed terms. Using a large dataset of corporate balance sheets, we develop a survival model to predict default. Unlike previous work, we forecast the probability of default for small corporations, take into account regional macroeconomic conditions as well as national macroeconomic indicators, and use a large sample of balance sheets. We define our model through a series of steps designed to select only the significant variables; we examine the scale of the continuous covariates in the preliminary main-effects model and apply, when required, appropriate transformations to respect linearity in the log hazard. Last but not least, we check the proportional hazards assumption.
    Date: 2015–12–01

This nep-for issue is ©2016 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.