nep-for New Economics Papers
on Forecasting
Issue of 2014–02–15
ten papers chosen by
Rob J Hyndman
Monash University

  1. Yield Curve and Recession Forecasting in a Machine Learning Framework By Gogas, Periklis; Papadimitriou, Theophilos; Matthaiou, Maria-Artemis; Chrysanthidou, Efthymia
  2. Golden Rule of Forecasting: Be conservative By Armstrong, J. Scott; Green, Kesten C.; Graefe, Andreas
  3. Using Twitter to Model the EUR/USD Exchange Rate By Dietmar Janetzko
  4. Bayesian Stochastic Search for the Best Predictors: Nowcasting GDP Growth By Nikolaus Hautsch; Dieter Hess; Fuyu Yang
  5. GDP Forecasting Bias due to Aggregation Inaccuracy in a Chain-Linking Framework By Marcus Cobb
  6. Estimating Interest Rate Setting Behavior in Korea: An Ordered Probit Model Approach By Hyeongwoo Kim
  7. Forecasting Distress in European SME Portfolios By Ferreira Filipe, Sara; Grammatikos, Theoharry; Michala, Dimitra
  8. A contribution to the chronology of turning points in global economic activity (1980-2012) By Grossman, Valerie; Mack, Adrienne; Martinez-Garcia, Enrique
  9. Implied Volatility and the Risk-Free Rate of Return in Options Markets By Marcelo Bianconi; Scott MacLachlan; Marco Sammon
  10. A Bounded Model of Time Variation in Trend Inflation, NAIRU and the Phillips Curve By Joshua C.C. Chan; Gary Koop; Simon M. Potter

  1. By: Gogas, Periklis (Democritus University of Thrace, Department of Economics); Papadimitriou, Theophilos (Democritus University of Thrace, Department of Economics); Matthaiou, Maria-Artemis (Democritus University of Thrace, Department of Economics); Chrysanthidou, Efthymia (Democritus University of Thrace, Department of Economics)
    Abstract: In this paper, we investigate the forecasting ability of the yield curve in terms of the U.S. real GDP cycle. More specifically, within a Machine Learning (ML) framework, we use data from a variety of short-term (treasury bills) and long-term interest rates (bonds) for the period from 1976:Q3 to 2011:Q4, in conjunction with the real GDP for the same period, to create a model that can successfully forecast output fluctuations (inflation and output gaps) around the long-run trend. We focus our attention on correctly forecasting the instances of output gaps referred to here, for the purposes of our analysis, as recessions. To this end, we applied a Support Vector Machines (SVM) classification technique. The results show that we can achieve an overall forecasting accuracy of 66.7% and 100% accuracy in forecasting recessions.
    Keywords: Yield Curve; Recession Forecasting; SVM
    JEL: E43
    Date: 2014–02–01
    URL: http://d.repec.org/n?u=RePEc:ris:duthrp:2014_008&r=for
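    A minimal sketch of the kind of SVM recession classifier the abstract describes. The yield-spread and short-rate features and the recession labels below are simulated for illustration, not the authors' data, and the stylized labeling rule (recessions follow inverted curves) is an assumption of this example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical features: term spread (long minus short rate) and short-rate level.
spread = rng.normal(1.0, 1.5, n)
short_rate = rng.normal(4.0, 2.0, n)
# Stylized label: recessions tend to follow inverted yield curves.
recession = (spread + 0.1 * rng.normal(size=n) < -0.5).astype(int)

X = np.column_stack([spread, short_rate])
X_tr, X_te, y_tr, y_te = train_test_split(X, recession, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"out-of-sample accuracy: {accuracy:.3f}")
```

    The paper's reported accuracy figures refer to its own data and features; this fence only shows the mechanics of fitting and scoring an SVM classifier.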
  2. By: Armstrong, J. Scott; Green, Kesten C.; Graefe, Andreas
    Abstract: This paper proposes a unifying theory of forecasting in the form of a Golden Rule of Forecasting. The Golden Rule is to be conservative. A conservative forecast is consistent with cumulative knowledge about the present and the past. To be conservative, forecasters must seek all knowledge relevant to the problem, and use methods that have been validated for the situation. A checklist of 28 guidelines is provided to implement the Golden Rule. This article’s review of research found 150 experimental comparisons; all supported the guidelines. The average error reduction from following a single guideline (compared to common practice) was 28 percent. The Golden Rule Checklist helps forecasters to forecast more accurately, especially when the situation is uncertain and complex, and when bias is likely. Non-experts who know the Golden Rule can identify dubious forecasts quickly and inexpensively. To date, ignorance of research findings, bias, sophisticated statistical procedures, and the proliferation of big data have led forecasters to violate the Golden Rule. As a result, despite major advances in forecasting methods, evidence that forecasting practice has improved over the past half-century is lacking.
    Keywords: accuracy, analytics, bias, big data, causal forces, causal models, combining, complexity, contrary series, damped trends, decision-making, decomposition, Delphi, ethics, extrapolation, inconsistent trends, index method, judgmental bootstrapping, judgmental forecasting, nowcasting, regression, risk, shrinkage, simplicity, stepwise regression, structured analogies
    JEL: C01 C1 C15 C2 C3 C4 C5 C8 C9 K2
    Date: 2014–02–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:53579&r=for
  3. By: Dietmar Janetzko
    Abstract: Fast, global, and sensitive to political, economic, and social events of any kind: these are attributes that social media like Twitter share with foreign exchange markets. The leading assumption of this paper is that information which can be distilled from public debates on Twitter has predictive content for exchange rate movements. This assumption prompted a Twitter-based exchange rate model that harnesses regARIMA analyses for short-term out-of-sample ex post forecasts of the daily closing prices of EUR/USD spot exchange rates. The analyses used Tweet counts collected from January 1, 2012 to September 27, 2013. To identify concepts mentioned on Twitter with predictive potential, the analysis followed a two-step selection. First, a heuristic qualitative analysis assembled a long list of 594 concepts (e.g., Merkel, Greece, Cyprus, crisis, chaos, growth, unemployment) expected to covary with the ups and downs of the EUR/USD exchange rate. Second, cross-validation using window averaging with a fixed-size rolling origin was deployed to select concepts, and corresponding univariate time series, whose error scores were below the chance level defined by the random walk model. For a short list of 17 concepts (covariates), in particular SP (Standard & Poor's) and risk, the out-of-sample predictive accuracy of the Twitter-based regARIMA model was found to be repeatedly better than that obtained from both the random walk model and a random noise covariate in 1-step-ahead forecasts of the EUR/USD exchange rate. This advantage was evident in the forecast error metrics (MSFE, MAE) when a majority vote over different estimation windows was conducted. The results challenge the semi-strong form of the efficient market hypothesis (Fama, 1970, 1991), which, when applied to the FX market, maintains that all publicly available information is already integrated into exchange rates.
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1402.1624&r=for
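    A sketch of out-of-sample evaluation with a fixed-size rolling origin, the screening device the abstract describes, benchmarked against the random walk. The exchange-rate series here is simulated, and a plain AR(1) fit by OLS stands in for the paper's regARIMA model.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
y = np.cumsum(rng.normal(0, 0.005, T)) + 1.30  # synthetic EUR/USD-like level

window = 100
ar1_err, rw_err = [], []
for origin in range(window, T - 1):
    train = y[origin - window:origin]
    # AR(1) on levels: y_t = a + b * y_{t-1}, estimated by OLS on the window.
    X = np.column_stack([np.ones(window - 1), train[:-1]])
    a, b = np.linalg.lstsq(X, train[1:], rcond=None)[0]
    ar1_err.append(abs((a + b * y[origin - 1]) - y[origin]))
    # Random-walk benchmark: tomorrow equals today.
    rw_err.append(abs(y[origin - 1] - y[origin]))

mae_ar1, mae_rw = np.mean(ar1_err), np.mean(rw_err)
print(f"MAE AR(1): {mae_ar1:.5f}  MAE random walk: {mae_rw:.5f}")
```

    A covariate "passes" the screen only if its model's error falls below the random-walk error; on a genuinely random-walk series like this synthetic one, it usually will not.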
  4. By: Nikolaus Hautsch (University of Vienna); Dieter Hess (University of Cologne); Fuyu Yang (University of East Anglia)
    Abstract: We propose a Bayesian framework for nowcasting GDP growth in real time. Using vintage data on macroeconomic announcements we set up a state space system connecting latent GDP growth rates to agencies' releases of GDP and other economic indicators. We propose a Gibbs sampling scheme to filter out daily GDP growth rates using all available macroeconomic information. The sample draws from the resulting posterior distribution, thereby allowing us to simulate backcasting, nowcasting, and forecasting densities. A stochastic search variable selection procedure yields a data-driven way of selecting the relevant GDP predictors out of a potentially large set of economic indicators.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:uea:aepppr:2012_56&r=for
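    A compact sketch of stochastic search variable selection (SSVS) in the George-McCulloch spike-and-slab style, to illustrate the data-driven predictor selection the abstract mentions. The data are simulated, the paper's state-space and vintage-data layers are omitted, and the error variance is treated as known for brevity.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, p, sigma = 200, 5, 1.0
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, 0.0, 1.5, 0.0])  # only predictors 0 and 3 matter
y = X @ beta_true + sigma * rng.normal(size=n)

tau0, tau1, prior_inc = 0.1, 10.0, 0.5  # spike sd, slab sd, prior P(gamma_j = 1)
gamma = np.ones(p, dtype=int)
XtX, Xty = X.T @ X, X.T @ y
draws = []
for it in range(2000):
    # beta | gamma: Gaussian posterior; prior sd is tau1 (in) or tau0 (out).
    prior_prec = np.where(gamma == 1, tau1, tau0) ** -2.0
    V = np.linalg.inv(XtX / sigma**2 + np.diag(prior_prec))
    m = V @ Xty / sigma**2
    beta = rng.multivariate_normal(m, V)
    # gamma_j | beta_j: Bernoulli from spike vs. slab densities at beta_j.
    p1 = prior_inc * norm.pdf(beta, 0, tau1)
    p0 = (1 - prior_inc) * norm.pdf(beta, 0, tau0)
    gamma = (rng.random(p) < p1 / (p0 + p1)).astype(int)
    if it >= 500:  # discard burn-in
        draws.append(gamma)

inclusion = np.mean(draws, axis=0)
print("posterior inclusion probabilities:", np.round(inclusion, 2))
```

    Posterior inclusion probabilities near one flag the relevant predictors; in the paper the same idea runs inside a richer Gibbs sampler that also filters daily GDP growth.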
  5. By: Marcus Cobb
    Abstract: When evaluating the economy’s performance, Gross Domestic Product (GDP) is the most often used indicator, and it is therefore also one of the most often forecasted. Due to the shortcomings of traditional fixed-base methods, many countries have adopted chain-linking to avoid price-structure obsolescence. This has meant that GDP’s well-known accounting identities hold only approximately, raising challenges for those reading the numbers, but also for forecasters whose approaches rely on these accounting properties. Oddly enough, the issue of aggregation is hardly mentioned in the forecasting literature. This omission could be the result of everybody adopting the chain-linking methodology with ease and considering it unnecessary to make a point of it, but it could also originate from ignoring the issue altogether. Whatever the reason, the omission could lead practitioners unfamiliar with the method to make unnecessary mistakes. This document presents explicitly the role of prices in a bottom-up forecasting framework and, on that basis, argues that prices should be taken into account when generating aggregate forecasts from the accounting identities. Practitioners should also bear in mind that discrepancies due to aggregation inaccuracy are not necessarily negligible.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:chb:bcchwp:721&r=for
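    A numerical illustration of the aggregation issue the abstract raises: with annually chain-linked Laspeyres volume indices, chained components no longer sum to the chained aggregate once relative prices move. All numbers below are made up for the example.

```python
import numpy as np

# Two components: quantities and prices over three periods.
q = np.array([[100.0, 110.0, 121.0],    # component A grows 10% per period
              [100.0, 100.0, 100.0]])   # component B is flat
p = np.array([[1.00, 0.80, 0.64],       # A's price falls (think electronics)
              [1.00, 1.10, 1.21]])      # B's price rises

def chain_volume(q, p):
    """Chain-linked Laspeyres volume level, base period 0 = 100."""
    level = [100.0]
    for t in range(1, q.shape[1]):
        growth = (p[:, t - 1] @ q[:, t]) / (p[:, t - 1] @ q[:, t - 1])
        level.append(level[-1] * growth)
    return np.array(level)

aggregate = chain_volume(q, p)
# Chaining each component alone just tracks its own quantity growth;
# re-aggregate with base-period nominal weights.
sum_of_parts = sum(chain_volume(q[i:i + 1], p[i:i + 1]) * (p[i, 0] * q[i, 0])
                   for i in range(2)) / (p[:, 0] @ q[:, 0])
print("aggregate        :", np.round(aggregate, 3))
print("sum of components:", np.round(sum_of_parts, 3))
```

    The two series agree over the first link but diverge afterwards, which is exactly the discrepancy a bottom-up forecaster who sums component forecasts as if the identity were exact would inherit.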
  6. By: Hyeongwoo Kim
    Abstract: We investigate the Bank of Korea's interest rate setting behavior using a discrete choice model, where the Monetary Policy Committee revises the target policy interest rate only when the gap between the current market interest rate and the optimal rate exceeds a certain threshold value. Using monthly frequency data since 2000, we evaluate an array of ordered probit models in terms of in-sample fit. We find important roles for the output gap, inflation, and the won depreciation rate against the US dollar. We also implement out-of-sample forecast exercises with September 2008 (the Lehman Brothers bankruptcy) as the split point, finding good out-of-sample predictability for our models.
    Keywords: Monetary Policy; Bank of Korea; Ordered Probit Model; Target RP Rate; Interbank Call Rate; Taylor Rule
    JEL: E52 E58
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:abn:wpaper:auwp2014-02&r=for
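    A sketch of an ordered probit for a three-way policy decision (cut / hold / raise), the model class the abstract describes, fit by maximum likelihood. The covariates and the decision rule are simulated stand-ins, not the Korean data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 1000
X = rng.normal(size=(n, 2))            # stand-ins for output gap, inflation
beta_true = np.array([1.0, 1.5])
latent = X @ beta_true + rng.normal(size=n)
cuts_true = np.array([-1.0, 1.0])
y = np.digitize(latent, cuts_true)     # 0 = cut, 1 = hold, 2 = raise

def negloglik(theta):
    # Parametrize the second cutpoint as c1 + exp(log_gap) to keep ordering.
    beta, c1, log_gap = theta[:2], theta[2], theta[3]
    cuts = np.array([-np.inf, c1, c1 + np.exp(log_gap), np.inf])
    xb = X @ beta
    pr = norm.cdf(cuts[y + 1] - xb) - norm.cdf(cuts[y] - xb)
    return -np.sum(np.log(np.clip(pr, 1e-12, None)))

res = minimize(negloglik, x0=np.zeros(4), method="BFGS")
beta_hat = res.x[:2]
print("estimated slopes:", np.round(beta_hat, 2))
```

    With 1,000 simulated decisions the estimated slopes land close to the true values, which is the in-sample-fit logic the paper applies to real committee decisions.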
  7. By: Ferreira Filipe, Sara; Grammatikos, Theoharry; Michala, Dimitra
    Abstract: In the European Union, small and medium sized enterprises (SMEs) represent 99% of all businesses and contribute more than half of the total value-added. In this paper, we develop distress prediction models for SMEs using a dataset from eight European countries over the period 2000-2009. We examine idiosyncratic and systematic covariates and find that the former discriminate between healthy and distressed firms based on their relative level of risk, whereas the latter move the overall distress rates. Moreover, SMEs across Europe are vulnerable to the same idiosyncratic factors, but systematic factors vary across regions. Also, micro SMEs are more vulnerable to these systematic factors than larger SMEs. The paper contributes to the literature in several ways. First, using a sample with many micro companies, it offers unique insights into the European small business sector. Second, it is the first paper to explore distress in a multi-country setting, allowing for regional comparisons and uncovering regional vulnerabilities. Third, by incorporating systematic dependencies, the models can capture changes in overall distress rates and comovements during economic cycles.
    Keywords: credit risk, distress, forecasting, SMEs, discrete time hazard model, multi-period logit model, duration analysis
    JEL: C13 C41 C53 G33
    Date: 2014–02–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:53572&r=for
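    A sketch of a discrete-time hazard (multi-period logit) model of the kind listed in the keywords: each firm-period is one observation, and distress is regressed on an idiosyncratic and a systematic covariate. The panel below is simulated; the covariate names are illustrative choices, not the paper's variable list.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_obs = 5000
leverage = rng.normal(0.5, 0.2, n_obs)      # idiosyncratic covariate (firm-level)
gdp_growth = rng.normal(0.02, 0.02, n_obs)  # systematic covariate (macro)
X = np.column_stack([np.ones(n_obs), leverage, gdp_growth])
beta_true = np.array([-3.0, 2.0, -20.0])
prob = 1.0 / (1.0 + np.exp(-(X @ beta_true)))
distress = (rng.random(n_obs) < prob).astype(int)

def negloglik(beta):
    xb = X @ beta
    # Logit log-likelihood in a numerically stable form.
    return np.sum(np.logaddexp(0.0, xb)) - distress @ xb

beta_hat = minimize(negloglik, np.zeros(3), method="BFGS").x
print("estimated coefficients:", np.round(beta_hat, 2))
```

    The signs recover the simulated story: higher leverage raises the distress hazard for a given firm, while stronger GDP growth lowers the overall distress rate, mirroring the idiosyncratic/systematic split the abstract describes.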
  8. By: Grossman, Valerie (Federal Reserve Bank of Dallas); Mack, Adrienne (Federal Reserve Bank of Dallas); Martinez-Garcia, Enrique (Federal Reserve Bank of Dallas)
    Abstract: The Database of Global Economic Indicators (DGEI) of the Federal Reserve Bank of Dallas aims at standardizing and disseminating world economic indicators for the study of globalization. It includes a core sample of 40 countries with available indicators and broad coverage for quarterly real GDP, and the monthly series of industrial production (IP), Purchasing Managers Index (PMI), merchandise exports and imports, headline CPI, CPI (ex. food and energy), PPI/WPI inflation, nominal and real exchange rates, and official/policy interest rates (see Grossman, Mack, and Martínez-García (2013)). This paper aims to codify in a systematic way the chronology of global business cycles for DGEI. We propose a novel chronology based on IP data for a sample of 84 countries at a monthly frequency from 1980 onwards, and assess the turning points obtained as a signal of the underlying state of the economy as tracked by the indicators of DGEI. We conclude by proposing and evaluating global recession probability forecasts up to 12 months ahead. The proposed logit model uses the DGEI aggregate indicators to offer advance warning of turning points in the global cycle; by this metric, a global downturn in 2013 does not appear likely.
    JEL: C14 C82 E32 E65
    Date: 2014–01–31
    URL: http://d.repec.org/n?u=RePEc:fip:feddgw:169&r=for
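    A toy version of turning-point dating in the spirit of the chronology the abstract builds: a peak (trough) is a local maximum (minimum) of the series within a symmetric window, a heavy simplification of Bry-Boschan-style rules (which add censoring and alternation requirements). The "industrial production" series is simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(240)  # 20 years of monthly data
ip = 100 + 0.05 * t + 3 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 0.3, 240)

def turning_points(x, half_window=5):
    """Flag local maxima/minima within a +/- half_window band."""
    peaks, troughs = [], []
    for i in range(half_window, len(x) - half_window):
        w = x[i - half_window:i + half_window + 1]
        if x[i] == w.max():
            peaks.append(i)
        elif x[i] == w.min():
            troughs.append(i)
    return peaks, troughs

peaks, troughs = turning_points(ip)
print("peaks at:", peaks)
print("troughs at:", troughs)
```

    On this series with a five-year cycle, the rule finds a peak and a trough in each cycle; a production-grade chronology would additionally enforce minimum phase lengths and peak-trough alternation.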
  9. By: Marcelo Bianconi; Scott MacLachlan; Marco Sammon
    Abstract: This paper implements an algorithm that can be used to solve systems of Black-Scholes equations for implied volatility and the implied risk-free rate of return. After using a seemingly unrelated regressions (SUR) model to obtain point estimates for implied volatility and the implied risk-free rate, the options are re-priced using these parameters in the Black-Scholes formula. Given this re-pricing, we find that the difference between the market and model price is increasing in moneyness, and decreasing in time to expiration and the size of the bid-ask spread. We ask whether the new information gained by the simultaneous solution is useful. We find that after using the SUR model and re-pricing the options, the varying risk-free rate model yields Black-Scholes prices closer to market prices than the fixed risk-free rate model. We also find that the varying risk-free rate model is better for predicting future evolutions in model-free implied volatility as measured by the VIX. Finally, we discuss potential trading strategies based both on the model-based Black-Scholes prices and on VIX predictability.
    Keywords: re-pricing options, forecasting volatility, seemingly unrelated regression, implied volatility
    JEL: G13 C63
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:tuf:tuftec:0777&r=for
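    The core idea can be sketched directly: with two options on the same underlying and expiry, the two Black-Scholes pricing equations can be solved jointly for implied volatility and an implied risk-free rate. Here the "market" prices are generated from known parameters so the solver should recover them; the paper's SUR estimation layer is omitted.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S, T = 100.0, 0.5
strikes = (95.0, 105.0)
sigma_true, r_true = 0.20, 0.03
market = [bs_call(S, K, T, r_true, sigma_true) for K in strikes]

def system(x):
    sigma, r = x
    return [bs_call(S, K, T, r, sigma) - m for K, m in zip(strikes, market)]

sigma_hat, r_hat = fsolve(system, x0=[0.3, 0.01])
print(f"implied sigma = {sigma_hat:.4f}, implied r = {r_hat:.4f}")
```

    With real quotes the two equations generally cannot both hold exactly, which is why the paper fits the system by SUR rather than solving it pairwise.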
  10. By: Joshua C.C. Chan; Gary Koop; Simon M. Potter
    Abstract: In this paper, we develop a bivariate unobserved components model for inflation and unemployment. The unobserved components are trend inflation and the non-accelerating inflation rate of unemployment (NAIRU). Our model also incorporates a time-varying Phillips curve and time-varying inflation persistence. What sets this paper apart from the existing literature is that we do not use unbounded random walks for the unobserved components, but rather use bounded random walks. For instance, trend inflation is assumed to evolve within bounds. Our empirical work shows the importance of bounding. We find that our bounded bivariate model forecasts better than many alternatives, including a version of our model with unbounded unobserved components. Our model also yields sensible estimates of trend inflation, NAIRU, inflation persistence and the slope of the Phillips curve.
    Keywords: trend inflation, non-linear state space model, natural rate of unemployment, inflation targeting, Bayesian
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2014-10&r=for
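    An illustration of the paper's key modeling device: a random walk whose innovations are drawn from a truncated normal so the process stays inside fixed bounds, here applied to a simulated trend-inflation path. The bounds and innovation scale are illustrative choices, not the paper's estimates.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(6)
lo, hi, sd = 0.0, 5.0, 0.3   # bounds (percent) and innovation sd, illustrative
T = 200
tau = np.empty(T)            # trend inflation
tau[0] = 2.0
for t in range(1, T):
    # Truncate the innovation so that tau[t] remains inside [lo, hi].
    a = (lo - tau[t - 1]) / sd
    b = (hi - tau[t - 1]) / sd
    tau[t] = tau[t - 1] + truncnorm.rvs(a, b, scale=sd, random_state=rng)

print(f"min = {tau.min():.2f}, max = {tau.max():.2f}")
```

    Unlike an unbounded random walk, this process can never drift to implausible trend-inflation levels, which is the discipline the paper argues improves both forecasts and the resulting NAIRU and Phillips-curve estimates.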

This nep-for issue is ©2014 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.