nep-for New Economics Papers
on Forecasting
Issue of 2015‒08‒25
ten papers chosen by
Rob J Hyndman
Monash University

  1. Google Trends and Forecasting Performance of Exchange Rate Models By Levent Bulut
  2. Forecasting Wheat Commodity Prices using a Global Vector Autoregressive model By Gutierrez, Luciano; Piras, Francesco; Olmeo, Maria Grazia
  3. Towards improving the framework for probabilistic forecast evaluation By Leonard A. Smith; Emma B. Suckling; Erica L. Thompson; Trevor Maynard; Hailiang Du
  4. Central Bank Transparency and the consensus forecast: What does The Economist poll of forecasters tell us? By Emna Trabelsi
  5. What do Professional Forecasters actually predict? By Didier Nibbering; Richard Paap; Michel van der Wel
  6. The Limits of Learning By Elliot Aurissergues
  7. Evaluating the Information Value for Measures of Systemic Conditions By Oet, Mikhail V.; Dooley, John; Gramlich, Dieter; Sarlin, Peter; Ong, Stephen J.
  8. Detecting and Forecasting Large Deviations and Bubbles in a Near-Explosive Random Coefficient Model By Anurag Narayan Banerjee; Guillaume Chevillon; Marie Kratz
  9. Revising empirical linkages between direction of Canadian stock price index movement and Oil supply and demand shocks: Artificial neural network and support vector machines approaches By Dhaoui, Abderrazak; Audi, Mohamed; Ouled Ahmed Ben Ali, Raja
  10. An optimal trading problem in intraday electricity markets By René Aïd; Pierre Gruet; Huyên Pham

  1. By: Levent Bulut (Department of Economics, Ipek University)
    Abstract: In this paper, internet search data from Google Trends are used to nowcast the known variables of alternative exchange rate determination models. The sample covers 12 OECD countries’ exchange rates for the period from January 2004 to June 2014. The results indicate that adding Google Trends-based nowcasts of macro fundamentals to the current set of government-released macroeconomic variables improves the out-of-sample forecasts of the Purchasing Power Parity model in seven currency pairs and of the Monetary model in four currency pairs. We argue that, since official data on macro fundamentals are released with a lag, proper testing of the structural models requires the literature to focus more on ex ante measures of current macro fundamentals, and that nowcasting these variables from Google search inquiries is one way to do so.
    Keywords: Meese-Rogoff Puzzle, Out-of-sample predictability of Exchange Rates, Google Trends
    JEL: F31 F37 C52
    Date: 2015–08
  2. By: Gutierrez, Luciano; Piras, Francesco; Olmeo, Maria Grazia
    Abstract: In this paper the performance of a Global Vector Autoregression (GVAR) model in forecasting export wheat prices is evaluated against several benchmark models. Forecast evaluation is based on several statistics, including RMSE, MAPE, Diebold-Mariano (DM) tests, and turning-point forecast accuracy. The results show that the GVAR forecasts tend to outperform those of the benchmark models, emphasizing the interdependencies in the global wheat market.
    Keywords: forecasting, global dynamic models, price analysis, wheat market, Agricultural and Food Policy
    JEL: G14 Q14 C12 C15
    Date: 2015–06
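For readers who want to reproduce this kind of comparison, here is an illustrative sketch of the Diebold-Mariano test of equal predictive accuracy used in the paper; the data are synthetic and the rectangular-kernel HAC variance is one common choice, not necessarily the authors'.

```python
import numpy as np
from math import erfc, sqrt

def diebold_mariano(e1, e2, h=1, loss=lambda e: e ** 2):
    """Diebold-Mariano test of equal predictive accuracy.

    e1, e2 : forecast errors from two competing models.
    h      : forecast horizon; the HAC variance uses h-1 autocovariances.
    Returns the DM statistic (asymptotically N(0,1)) and a two-sided p-value.
    """
    d = loss(np.asarray(e1, float)) - loss(np.asarray(e2, float))
    T = d.size
    dbar = d.mean()
    dc = d - dbar
    var = dc @ dc / T                      # lag-0 autocovariance
    for k in range(1, h):                  # rectangular-kernel HAC correction
        var += 2.0 * (dc[k:] @ dc[:-k]) / T
    stat = dbar / sqrt(var / T)
    pval = erfc(abs(stat) / sqrt(2.0))     # two-sided normal p-value
    return stat, pval

# Illustration: model 1's errors are visibly tighter than model 2's,
# so the test should favour model 1 (negative statistic, small p-value).
rng = np.random.default_rng(0)
stat, pval = diebold_mariano(rng.normal(0, 1.0, 500), rng.normal(0, 1.5, 500))
```

A negative statistic means the first model's losses are smaller on average; a significant one means the gap is unlikely to be sampling noise.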
  3. By: Leonard A. Smith; Emma B. Suckling; Erica L. Thompson; Trevor Maynard; Hailiang Du
    Abstract: The evaluation of forecast performance plays a central role both in the interpretation and use of forecast systems and in their development. Different evaluation measures (scores) are available, often quantifying different characteristics of forecast performance. The properties of several proper scores for probabilistic forecast evaluation are contrasted and then used to interpret decadal probability hindcasts of global mean temperature. The Continuous Ranked Probability Score (CRPS), Proper Linear (PL) score, and I.J. Good’s logarithmic score (also referred to as Ignorance) are compared; although information from all three may be useful, the logarithmic score has an immediate interpretation and is sensitive to forecast busts. Neither CRPS nor PL is local; this is shown to produce counterintuitive evaluations by CRPS. Benchmark forecasts from empirical models like Dynamic Climatology place the scores in context. Comparing scores for forecast systems based on physical models (in this case HadCM3, from the CMIP5 decadal archive) against such benchmarks is more informative than comparing forecast systems based on similar physical simulation models with one another. It is shown that a forecast system based on HadCM3 outperforms Dynamic Climatology in decadal global mean temperature hindcasts; Dynamic Climatology previously outperformed a forecast system based upon HadGEM2, and reasons for these results are suggested. Forecasts of aggregate data (5-year means of global mean temperature) are, of course, narrower than forecasts of annual averages due to the suppression of variance; while the average “distance” between the forecasts and a target may be expected to decrease, little if any discernible improvement in probabilistic skill is achieved.
    JEL: C1
    Date: 2015–07–17
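The contrast between CRPS and the logarithmic score can be seen directly for Gaussian forecasts, for which CRPS has a closed form. The numbers below are an illustration of the general point about sensitivity to busts, not a reproduction of the paper's hindcasts.

```python
from math import erf, exp, log, pi, sqrt

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def crps_gaussian(mu, sigma, x):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) for outcome x."""
    z = (x - mu) / sigma
    return sigma * (z * (2.0 * norm_cdf(z) - 1.0) + 2.0 * norm_pdf(z) - 1.0 / sqrt(pi))

def ignorance_gaussian(mu, sigma, x):
    """I.J. Good's logarithmic score (Ignorance, in bits): -log2 of the density at x."""
    return -log(norm_pdf((x - mu) / sigma) / sigma, 2)

# A forecast "bust": the same outcome lands 8 sigmas out for a sharp
# forecast but only 2 sigmas out for a wide one.
sharp = (0.0, 0.5, 4.0)   # (mu, sigma, outcome)
wide = (0.0, 2.0, 4.0)
crps_sharp, crps_wide = crps_gaussian(*sharp), crps_gaussian(*wide)
ign_sharp, ign_wide = ignorance_gaussian(*sharp), ignorance_gaussian(*wide)
```

CRPS rates the two busts fairly similarly, while Ignorance penalizes the sharp bust by an order of magnitude more bits; this is the sensitivity to busts that the abstract highlights.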
  4. By: Emna Trabelsi (ISG - Institut Supérieur de Gestion de Tunis [Tunis] - Université de Tunis [Tunis])
    Abstract: In this paper we study the effects that central banks exert on private sector forecasts through their transparency and communication measures. We analyze the impact of central bank transparency on the accuracy of the consensus forecast (usually calculated as the mean or the median of the forecasts from a panel of individual forecasters) for a series of macroeconomic variables: inflation, real output growth, and the current account as a share of GDP, for 7 advanced economies. Interestingly, while central bank transparency and communication measures are found to significantly affect the forecasts themselves, the same measures show limits when we study their impact on forecast errors. Our findings indeed suggest that deviations of the forecasted economic data from the realized ones (i.e. real output growth and the current account as a share of GDP) are only weakly affected by the central bank transparency measures considered in the paper. Inflation forecast errors, in particular, are not affected at all by those measures. A possible explanation (among others) could be the inefficiency of the mean forecast. Inefficiency of the consensus forecast is not a new issue from a theoretical point of view, but its empirical relevance is here questioned for the first time (to our knowledge) on data extracted from The Economist poll of forecasters. More particularly, our paper has implications for the efficacy of releasing more transparent public information, a debate sparked by Morris and Shin (2002), whose argument states that when private agents have diverse sources of information, public information can lead them to overreact to the signals from the central bank.
    Date: 2015–04–30
  5. By: Didier Nibbering (Erasmus University Rotterdam, the Netherlands); Richard Paap (Erasmus University Rotterdam, the Netherlands); Michel van der Wel (Erasmus University Rotterdam, the Netherlands)
    Abstract: In this paper we study what professional forecasters actually explain. We use spectral analysis and state space modeling to decompose economic time series into a trend, a business-cycle, and an irregular component. To examine which components are captured by professional forecasters we regress their forecasts on the estimated components extracted from both the spectral analysis and the state space model. For both decomposition methods we find that the Survey of Professional Forecasters can predict almost all variation in the time series due to the trend and the business-cycle, but the forecasts contain little information about the variation in the irregular component.
    Keywords: Expert Forecast; Trend-Cycle Decomposition; State Space Modeling; Baxter-King Filter
    JEL: C22 C53 E37
    Date: 2015–08–07
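The Baxter-King filter named in the keywords can be sketched in a few lines of numpy; the band limits, truncation length, and toy series below are illustrative choices, not the paper's specification.

```python
import numpy as np

def bk_filter(y, low=6, high=32, K=12):
    """Baxter-King band-pass filter: keep cycles with period in [low, high]
    (in observation units). K leads/lags are lost at each end of the sample."""
    w1, w2 = 2 * np.pi / high, 2 * np.pi / low
    j = np.arange(1, K + 1)
    b = np.concatenate(([(w2 - w1) / np.pi],
                        (np.sin(j * w2) - np.sin(j * w1)) / (np.pi * j)))
    b = np.concatenate((b[:0:-1], b))   # symmetric weights b_{-K}, ..., b_{K}
    b -= b.mean()                       # force weights to sum to zero (kills trend)
    return np.convolve(y, b, mode="valid")

# Toy series: slow trend + business-cycle component (period 20) + noise
t = np.arange(200)
cycle_true = np.sin(2 * np.pi * t / 20)
rng = np.random.default_rng(1)
y = 0.05 * t + cycle_true + 0.1 * rng.normal(size=t.size)
cycle_est = bk_filter(y, low=6, high=32, K=12)
```

The filter is symmetric, so it introduces no phase shift: the extracted component should line up closely with the true cycle away from the sample ends.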
  6. By: Elliot Aurissergues (EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics)
    Abstract: In this paper, we criticize the current adaptive or statistical learning literature. Instead of emphasizing asymptotic results, we focus on the short-run forecasting performance of the different algorithms before convergence to the rational expectations solution occurs. First, we suggest that the literature should drop ordinary least squares techniques in favor of the more efficient Bayesian estimation. Second, we cast doubt on the rationality of the behavior implied by the theory. We argue that agents do not use all available information in these models. Past prices carry some information about the expectations of others, and some algorithms are able to exploit this information. In a very simple case, such an algorithm is simply naive expectations. In more complex cases, we augment the usual learning algorithm with an estimation of past expectation errors using the Kalman filter. Interestingly, we find that some of these algorithms are divergent and may beat convergent ones in the short run. For a large set of parameters, their dominance is too short-lived to be significant. However, when the sensitivity of the actual price to the expected one is close to one, divergent algorithms should be considered.
    Date: 2014–12–09
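The kind of least-squares learning the paper criticizes can be made concrete with the textbook recursive setup; this cobweb-style model and its parameters are illustrative, not the paper's.

```python
import numpy as np

# Self-referential price model: p_t = mu + alpha * E_{t-1}[p_t] + eps_t.
# Agents believe the price fluctuates around a constant a and update a
# recursively with decreasing gain 1/t (recursive least squares with a
# constant-only regressor). The rational-expectations fixed point is
# a* = mu / (1 - alpha), to which learning converges when alpha < 1.
mu, alpha, sigma = 2.0, 0.5, 0.1
rng = np.random.default_rng(2)

a = 0.0                        # initial belief
for t in range(1, 5001):
    expected = a               # E_{t-1}[p_t] under the perceived law of motion
    p = mu + alpha * expected + sigma * rng.normal()
    a += (p - a) / t           # decreasing-gain recursive update

ree = mu / (1 - alpha)         # rational-expectations equilibrium price
```

With the decreasing gain, convergence is slow (on the order of the square root of the sample size), which is exactly the pre-convergence, short-run regime the abstract focuses on.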
  7. By: Oet, Mikhail V. (Federal Reserve Bank of Cleveland); Dooley, John (Federal Reserve Bank of Cleveland); Gramlich, Dieter (Baden-Wuerttemberg Cooperative State University); Sarlin, Peter (Center of Excellence SAFE at Goethe University, the Hanken School of Economics, and RiskLab Finland at Arcada University of Applied Sciences); Ong, Stephen J. (Federal Reserve Bank of Cleveland)
    Abstract: Timely identification of coincident systemic conditions and forward-looking capacity to anticipate adverse developments are critical for macroprudential policy. Despite clear recognition of these factors in the literature, an evaluation methodology and empirical tests for the information value of coincident measures are lacking. This paper provides a twofold contribution to the literature: (i) a general-purpose evaluation framework for assessing the information value of measures of systemic conditions, and (ii) an empirical assessment of the information value of several alternative measures of US systemic conditions. We find substantial differences among the measures, of which the Cleveland Financial Stress Index shows best-in-class identification performance. In terms of forecasting performance, Kamakura’s Troubled Company Index, the Cleveland Financial Stress Index, and the Goldman Sachs Financial Conditions Index show moderately stable usefulness metrics over time.
    Keywords: Information value; Systemic conditions; Coincident measures; Early warning; Macroprudential policy
    JEL: E32 E37 G01 G18 G28
    Date: 2015–08–06
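The "usefulness metrics" mentioned here are, in the early-warning literature, typically defined as the gain over the policymaker's best unconditional guess; the sketch below follows that Sarlin-style convention on synthetic data and is an assumption about the paper's exact definition.

```python
import numpy as np

def usefulness(signal, crisis, threshold, mu=0.5):
    """Usefulness of a thresholded early-warning signal.

    mu weights the cost of missing a crisis (type I error) against the
    cost of a false alarm (type II error). Usefulness is the reduction in
    loss relative to always/never warning, i.e. min(mu, 1-mu) - loss.
    """
    warn = np.asarray(signal) >= threshold
    crisis = np.asarray(crisis, bool)
    t1 = np.mean(~warn[crisis])      # missed-crisis rate
    t2 = np.mean(warn[~crisis])      # false-alarm rate
    loss = mu * t1 + (1 - mu) * t2
    return min(mu, 1 - mu) - loss, t1, t2

# Toy check: an informative stress index should earn positive usefulness.
rng = np.random.default_rng(3)
crisis = rng.random(1000) < 0.2
signal = crisis + 0.5 * rng.normal(size=1000)   # noisy indicator of stress
ua, t1, t2 = usefulness(signal, crisis, threshold=0.5)
```

An uninformative signal earns usefulness near zero (or below), which is what makes the measure a natural yardstick for comparing stress indexes.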
  8. By: Anurag Narayan Banerjee (Business school - Durham University); Guillaume Chevillon (SID - Information Systems, Decision Sciences and Statistics Department - Essec Business School); Marie Kratz (SID - Information Systems, Decision Sciences and Statistics Department - Essec Business School, MAP5 - MAP5 - Mathématiques Appliquées à Paris 5 - CNRS - UPD5 - Université Paris Descartes - Paris 5 - Institut National des Sciences Mathématiques et de leurs Interactions)
    Abstract: This paper proposes a Near-Explosive Random-Coefficient autoregressive model for asset pricing which accommodates both the fundamental asset value and the recurrent presence of autonomous deviations or bubbles. Such a process can be stationary with or without fat tails, unit-root nonstationary, or exhibit temporary exponential growth. We develop the asymptotic theory to analyze ordinary least-squares (OLS) estimation. One important theoretical observation is that the estimator distribution in the random coefficient model is qualitatively different from its distribution in the equivalent fixed coefficient model. We conduct recursive and full-sample inference by inverting the asymptotic distribution of the OLS test statistic, a common procedure in the presence of localizing parameters. This methodology allows us to detect the presence of bubbles and to establish probability statements on their emergence and collapse. We apply our methods to the dynamics of the Case-Shiller index of U.S. house prices. Focusing in particular on the change in the price level, we provide an early detection device for turning points of booms and busts in the housing market.
    Date: 2013–09–23
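A random-coefficient AR(1) with root oscillating around unity is easy to simulate, which helps build intuition for the bubble-like episodes the abstract describes; the parametrization below is an illustrative guess, not the authors' specification.

```python
import numpy as np

# Near-explosive random-coefficient AR(1):
#   y_t = rho_t * y_{t-1} + u_t,   rho_t = 1 + c * eps_t,
# so the autoregressive root fluctuates around unity, producing recurrent
# episodes of temporary exponential growth that then deflate.
rng = np.random.default_rng(4)
n, c = 2000, 0.05
eps = rng.normal(size=n)
u = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = (1.0 + c * eps[t]) * y[t - 1] + u[t]

# Full-sample OLS autoregressive estimate: because eps_t is independent of
# y_{t-1}, the estimate centres on the average root of one, masking the
# random-coefficient dynamics -- the point of the paper's distinct asymptotics.
rho_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
```
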
  9. By: Dhaoui, Abderrazak; Audi, Mohamed; Ouled Ahmed Ben Ali, Raja
    Abstract: Over the years, the oil price has shown impressive fluctuations, and these are not without significant impact on the evolution of stock market returns. Because of the complexity of stock market data, developing an efficient model for predicting linkages between macroeconomic data and stock price movements is very difficult. This study develops two robust and efficient models and compares their performance in predicting the direction of movement of the Canadian stock market. The proposed models are based on two classification techniques: artificial neural networks and support vector machines. Considering world oil production and world oil prices together in order to control for oil supply and oil demand shocks, we find strong evidence that the direction of stock price movement is sensitive to the specification of the oil price shocks. Experimental results show that the average performance of the artificial neural network model is around 96.75%, significantly better than that of the support vector machines, which reach 95.67%.
    Keywords: Oil price; Stock price movement; Oil supply shocks; Oil demand shocks; Artificial neural networks; Support vector machines
    JEL: G12 G17
    Date: 2015–08–07
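The direction-of-movement setup is a binary classification problem. The paper's ANN and SVM classifiers require a library such as scikit-learn; as a dependency-free stand-in, here is a plain-numpy logistic regression on synthetic "oil supply/demand shock" features (the data, coefficients, and feature names are all illustrative).

```python
import numpy as np

# Synthetic direction-of-movement data: the index moves up when a linear
# combination of supply and demand shocks, plus noise, is positive.
rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 2))                    # [supply shock, demand shock]
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
up = (logits + 0.5 * rng.normal(size=n)) > 0   # direction label

# Logistic regression fitted by gradient ascent on the log-likelihood,
# standing in for the ANN/SVM classifiers of the paper.
Xb = np.column_stack([np.ones(n), X])          # add intercept
w = np.zeros(3)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w += 0.1 * Xb.T @ (up - p) / n

train_acc = np.mean((Xb @ w > 0) == up)        # in-sample hit rate
```

Any classifier would be compared on exactly this hit rate (out of sample, in the paper's case), which is how the 96.75% versus 95.67% figures above are obtained.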
  10. By: René Aïd (FiME Lab - Laboratoire de Finance des Marchés d'Energie - Université Paris IX - Paris Dauphine - CREST - EDF R&D); Pierre Gruet (LPMA - Laboratoire de Probabilités et Modèles Aléatoires - CNRS - UP7 - Université Paris Diderot - Paris 7 - UPMC - Université Pierre et Marie Curie - Paris 6); Huyên Pham (LPMA - Laboratoire de Probabilités et Modèles Aléatoires - CNRS - UP7 - Université Paris Diderot - Paris 7 - UPMC - Université Pierre et Marie Curie - Paris 6, ENSAE Paris-Tech & CREST, Laboratoire de Finance et d'Assurance - ENSAE Paris-Tech & CREST)
    Abstract: We consider the problem of optimal trading for a power producer in the context of intraday electricity markets. The aim is to minimize the imbalance cost induced by the random residual demand for electricity, i.e. the consumption of the clients minus the production from renewable energy. For a simple linear price impact model and a quadratic criterion, we explicitly obtain approximate optimal strategies in the intraday market and in thermal power generation, and exhibit some remarkable properties of the trading rate. Furthermore, we study the case where there are jumps in the demand forecast and in the intraday price, typically due to errors in the prediction of wind power generation. Finally, we solve the problem when delay constraints in thermal power production are taken into account.
    Date: 2015–01–19

This nep-for issue is ©2015 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.