nep-for New Economics Papers
on Forecasting
Issue of 2011‒01‒03
twelve papers chosen by
Rob J Hyndman
Monash University

  1. Why are survey forecasts superior to model forecasts? By Clements, Michael P.
  2. Real-time Forecasting of Inflation and Output Growth in the Presence of Data Revisions By Clements, Michael P.; Galvão, Ana Beatriz
  3. Probabilistic Forecasts of Volatility and its Risk Premia By Worapree Maneesoonthorn; Gael M. Martin; Catherine S. Forbes; Simone Grose
  4. ‘Lean’ versus ‘Rich’ Data Sets: Forecasting during the Great Moderation and the Great Recession By Marco J. Lombardi; Philipp Maier
  5. Nowcasting Spanish GDP growth in real time: "One and a half months earlier" By David de Antonio Liedo; Elena Fernández Muñoz
  6. Dynamic Conditional Correlations for Asymmetric Processes By Manabu Asai; Michael McAleer
  7. REALIZED VOLATILITY RISK By David E. Allen; Michael McAleer; Marcel Scharth
  8. Realized volatility and overnight returns By Ahoniemi, Katja; Lanne, Markku
  9. Inattentive professional forecasters By Andrade, P.; Le Bihan, H.
  10. Semi-Structural Models for Inflation Forecasting By Maral Kichian; Fabio Rumler; Paul Corrigan
  11. The Norges Bank’s key rate projections and the news element of monetary policy: a wavelet based jump detection approach By Lars Winkelmann
  12. Predicting Financial Distress in a High-Stress Financial World: The Role of Option Prices as Bank Risk Metrics By Jérôme Coffinet; Adrian Pop; Muriel Tiesset

  1. By: Clements, Michael P. (University of Warwick)
    Abstract: We investigate two characteristics of survey forecasts that are shown to contribute to their superiority over purely model-based forecasts. These are that the consensus forecasts incorporate the effects of perceived changes in the long-run outlook, as well as embodying departures from the path toward the long-run expectation. Both characteristics on average tend to enhance forecast accuracy. At the level of the individual forecasts, there is scant evidence that the second characteristic enhances forecast accuracy, and the average accuracy of the individual forecasts can be improved by applying a mechanical correction.
    Keywords: consensus forecast; model-based forecasts; long-run expectations
    JEL: C53 E37
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:954&r=for
  2. By: Clements, Michael P. (University of Warwick); Galvão, Ana Beatriz (Queen Mary University of London)
    Abstract: We show how to improve the accuracy of real-time forecasts from models that include autoregressive terms by estimating the models on ‘lightly-revised’ data instead of using data from the latest-available vintage. Forecast accuracy is improved by reorganizing the data vintages employed in the estimation of the model in such a way that the vintages used in estimation are of a similar maturity to the data in the forecast loss function. The size of the expected reductions in mean squared error depends on the characteristics of the data revision process. Empirically, we find RMSFE gains of 2-4% when forecasting output growth and inflation with AR models, and gains of the order of 8% with ADL models.
    Keywords: real-time data; news and noise revisions; optimal forecasts; multi-vintage models
    JEL: C53
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:953&r=for
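The RMSFE gains reported in the abstract above are relative root mean squared forecast errors. As a minimal sketch of how such a comparison is computed (the series and forecast values below are synthetic illustrations, not the paper's data):

```python
import numpy as np

def rmsfe(actual, forecast):
    """Root mean squared forecast error."""
    a, f = np.asarray(actual), np.asarray(forecast)
    return np.sqrt(np.mean((a - f) ** 2))

# Synthetic example: actuals and two competing real-time forecast series
actual = np.array([2.0, 2.1, 1.9, 2.3, 2.0])
f_latest_vintage = np.array([2.2, 1.8, 2.1, 2.0, 2.2])  # estimated on latest-available data
f_lightly_revised = np.array([2.1, 1.9, 2.0, 2.1, 2.1])  # estimated on lightly-revised data

# Percentage RMSFE gain from using lightly-revised vintages
gain = 100 * (1 - rmsfe(actual, f_lightly_revised) / rmsfe(actual, f_latest_vintage))
print(f"RMSFE gain: {gain:.1f}%")
```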
  3. By: Worapree Maneesoonthorn; Gael M. Martin; Catherine S. Forbes; Simone Grose
    Abstract: The object of this paper is to produce distributional forecasts of physical volatility and its associated risk premia using a non-Gaussian, non-linear state space approach. Option and spot market information on the unobserved variance process is captured by using dual 'model-free' variance measures to define a bivariate observation equation in the state space model. The premium for diffusive variance risk is defined as linear in the latent variance (in the usual fashion) whilst the premium for jump variance risk is specified as a conditionally deterministic dynamic process, driven by a function of past measurements. The inferential approach adopted is Bayesian, implemented via a Markov chain Monte Carlo algorithm that caters for the multiple sources of non-linearity in the model and the bivariate measure. The method is applied to empirical spot and option price data for the S&P500 index over the 1999 to 2008 period, with conclusions drawn about investors' required compensation for variance risk during the recent financial turmoil. The accuracy of the probabilistic forecasts of the observable variance measures is demonstrated, and compared with that of forecasts yielded by more standard time series models. To illustrate the benefits of the approach, the posterior distribution is augmented by information on daily returns to produce Value at Risk predictions, as well as being used to yield forecasts of the prices of derivatives on volatility itself. Linking the variance risk premia to the risk aversion parameter in a representative agent model, probabilistic forecasts of relative risk aversion are also produced.
    Keywords: Volatility Forecasting; Non-linear State Space Models; Non-parametric Variance Measures; Bayesian Markov Chain Monte Carlo; VIX Futures; Risk Aversion.
    JEL: C11 C53
    Date: 2010–12–20
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2010-22&r=for
  4. By: Marco J. Lombardi; Philipp Maier
    Abstract: We evaluate forecasts for the euro area in data-rich and ‘data-lean’ environments by comparing three different approaches: a simple PMI model based on Purchasing Managers’ Indices (PMIs), a dynamic factor model with euro area data, and a dynamic factor model with data from the euro area plus data from national economies (pseudo-real time data). We estimate backcasts, nowcasts and forecasts for GDP, components of GDP, and GDP of all individual euro area members, and examine forecasts for the ‘Great Moderation’ (2000-2007) and the ‘Great Recession’ (2008-2009) separately. All models consistently beat naïve AR benchmarks. More data does not necessarily improve forecasting accuracy: for the factor model, adding monthly indicators from national economies can lead to more uneven forecasting accuracy, notably when forecasting components of euro area GDP during the Great Recession. This suggests that the merits of national data may reside in better estimation of heterogeneity across GDP components, rather than in improving headline GDP forecasts for individual euro area countries. Comparing factor models to the much simpler PMI model, we find that the dynamic factor model dominates the latter during the Great Moderation. However, during the Great Recession, the PMI model has the advantage that survey-based measures respond faster to changes in the outlook, whereas factor models are more sluggish in adjusting. Consequently, the dynamic factor model has relatively more difficulty beating the PMI model, with relatively large errors in forecasting some countries or components of euro area GDP.
    Keywords: Econometric and statistical methods; International topics
    JEL: C50 C53 E37 E47
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:10-37&r=for
  5. By: David de Antonio Liedo (Banco de España); Elena Fernández Muñoz (Banco de España)
    Abstract: The sharp decline in economic activity registered in Spain over 2008 and 2009 has no precedents in recent history. After ten prosperous years with an average GDP growth of 3.7%, the current recession places non-judgemental forecasting models under stress. This paper evaluates the Spanish GDP nowcasting performance of combinations of small and medium-sized linear dynamic regressions with priors originating in the Bayesian VAR literature. Our forecasting procedure can be considered a timely and simple approximation to the mix of accounting tools, models and judgement used by the statistical agencies to construct aggregate GDP figures. The real time forecast evaluation conducted over the most severe phase of the recession shows that our method yields reliable real GDP growth predictions almost one and a half months before the official figures are published.
    Keywords: Minnesota priors, mixed estimation, forecasting
    JEL: C32 C53 E37
    Date: 2010–12
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:1037&r=for
  6. By: Manabu Asai; Michael McAleer (University of Canterbury)
    Abstract: The paper develops two Dynamic Conditional Correlation (DCC) models, namely the Wishart DCC (WDCC) model and the Matrix-Exponential Conditional Correlation (MECC) model. The paper applies the WDCC approach to the exponential GARCH (EGARCH) and GJR models to propose asymmetric DCC models. We use the standardized multivariate t-distribution to accommodate heavy-tailed errors. The paper presents an empirical example using the trivariate data of the Nikkei 225, Hang Seng and Straits Times Indices for estimating and forecasting the WDCC-EGARCH and WDCC-GJR models, and compares the performance with the asymmetric BEKK model. The empirical results show that AIC and BIC favour the WDCC-EGARCH model over the WDCC-GJR and asymmetric BEKK models. Moreover, the empirical results indicate that the WDCC-EGARCH-t model produces reasonable VaR threshold forecasts, which are very close to the nominal 1% to 3% values.
    Keywords: Dynamic conditional correlations; Matrix exponential model; Wishart process; EGARCH; GJR; asymmetric BEKK; heavy-tailed errors
    Date: 2010–12–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:10/76&r=for
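The claim above that VaR threshold forecasts are "close to the nominal 1% to 3% values" is typically checked by counting violations: the fraction of days on which the realized return falls below the forecast quantile should match the nominal level. A minimal illustrative backtest check (Gaussian returns and a constant VaR are assumptions for the sketch, not the paper's WDCC-EGARCH-t model):

```python
import numpy as np

def var_violation_rate(returns, var_forecasts):
    """Fraction of days where the realized return falls below the VaR
    forecast (forecasts are left-tail quantiles, e.g. the 1% level)."""
    r = np.asarray(returns)
    v = np.asarray(var_forecasts)
    return np.mean(r < v)

rng = np.random.default_rng(0)
returns = rng.standard_normal(10_000)       # synthetic daily returns
var_1pct = np.full(10_000, -2.326)          # constant 1% Gaussian VaR quantile

rate = var_violation_rate(returns, var_1pct)
print(f"violation rate: {rate:.3%} (nominal 1%)")
```

A rate far from the nominal level in either direction indicates the volatility model is mis-calibrated in the tail.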
  7. By: David E. Allen (School of Accounting, Finance and Economics, Edith Cowan University); Michael McAleer (Erasmus University Rotterdam, Tinbergen Institute, The Netherlands, and Institute of Economic Research, Kyoto University); Marcel Scharth (Tinbergen Institute, The Netherlands, Department of Econometrics, VU University Amsterdam)
    Abstract: In this paper we show that realized variation measures constructed from high-frequency returns reveal a large degree of volatility risk in stock and index returns, where we characterize volatility risk by the extent to which forecasting errors in realized volatility are substantive. Even though returns standardized by ex post quadratic variation measures are nearly Gaussian, this unpredictability brings greater uncertainty to the empirically relevant ex ante distribution of returns. Explicitly modeling this volatility risk is fundamental. We propose a dually asymmetric realized volatility model, which incorporates the fact that realized volatility series are systematically more volatile in high volatility periods. Returns in this framework display time-varying volatility, skewness and kurtosis. We provide a detailed account of the empirical advantages of the model using data on the S&P 500 index and eight other indexes and stocks.
    Keywords: Realized volatility, volatility of volatility, volatility risk, value-at-risk, forecasting, conditional heteroskedasticity.
    Date: 2010–12
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:753&r=for
  8. By: Ahoniemi, Katja (Aalto University School of Economics); Lanne, Markku (University of Helsinki)
    Abstract: No consensus has emerged on how to deal with overnight returns when calculating realized volatility in markets where trading does not take place 24 hours a day. This paper explores several common volatility applications, investigating how the chosen treatment of overnight returns affects the results. For example, the selection of the best volatility forecasting model depends on the way overnight returns are incorporated into realized volatility. The evidence favours weighted estimators over those that have been more commonly used in the existing literature. The definition of overnight returns is particularly challenging for the S&P 500 index, and we propose two alternative measures for its overnight return.
    Keywords: realized volatility; forecasting
    JEL: C14 C22 C52
    Date: 2010–12–08
    URL: http://d.repec.org/n?u=RePEc:hhs:bofrdp:2010_019&r=for
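The issue discussed above arises because realized volatility is built from intraday returns that exist only during trading hours, so the overnight return must be handled separately. A minimal sketch of three common treatments (the functions and the scaling constant below are illustrative assumptions, not the paper's proposed weighted estimator):

```python
import numpy as np

def realized_variance(intraday_returns):
    """Sum of squared intraday returns over the trading day."""
    r = np.asarray(intraday_returns)
    return float(np.sum(r ** 2))

def rv_ignore(intraday, r_on):
    # Treatment 1: discard the overnight return entirely.
    return realized_variance(intraday)

def rv_add_squared(intraday, r_on):
    # Treatment 2: add the squared overnight return as one extra term.
    return realized_variance(intraday) + r_on ** 2

def rv_scaled(intraday, r_on, c=1.2):
    # Treatment 3: scale open-to-close RV toward close-to-close variance;
    # in practice c would be estimated from the data, 1.2 is a placeholder.
    return c * realized_variance(intraday)

intraday = [0.001, -0.002, 0.0015, -0.0005]
r_on = 0.004  # overnight (close-to-open) return
print(rv_ignore(intraday, r_on), rv_add_squared(intraday, r_on))
```

Which treatment is best can change the ranking of volatility forecasting models, which is the comparison the paper carries out.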
  9. By: Andrade, P.; Le Bihan, H.
    Abstract: We use the ECB Survey of Professional Forecasters to characterize the dynamics of expectations at the micro level. We find that forecasters (i) have predictable forecast errors; (ii) disagree; (iii) fail to systematically update their forecasts in the wake of new information; (iv) disagree even when updating; and (v) differ in their frequency of updating and forecast performance. We argue that these micro data facts are qualitatively in line with recent models in which expectations are formed by inattentive agents. However, building and estimating an expectation model that features two types of inattention, namely sticky information à la Mankiw-Reis and noisy information à la Sims, we cannot quantitatively generate the errors and disagreement that are observed in the SPF data. The rejection is mainly due to the fact that professionals relatively agree on very sluggish forecasts.
    Keywords: imperfect information, inattention, forecast errors, disagreement, business cycle.
    JEL: D84 E3 E37
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:bfr:banfra:307&r=for
  10. By: Maral Kichian; Fabio Rumler; Paul Corrigan
    Abstract: We propose alternative single-equation semi-structural models for forecasting inflation in Canada, whereby structural New Keynesian models are combined with time-series features in the data. Several marginal cost measures are used, including one that, in addition to unit labour cost, also integrates relative price shocks known to play an important role in open economies. Structural estimation and testing are conducted using identification-robust methods that are valid whatever the identification status of the econometric model. We find that our semi-structural models perform better than various strictly structural and conventional time series models. In the latter case, forecasting performance is significantly better, both in the short run and in the medium run.
    Keywords: Inflation and prices; Econometric and statistical methods
    JEL: C13 C53 E31
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:10-34&r=for
  11. By: Lars Winkelmann
    Abstract: This paper investigates the information content of the Norges Bank’s key rate projections. Wavelet spectrum estimates provide the basis for estimating jump probabilities of short- and long-term interest rates on monetary policy announcement days before and after the introduction of key rate projections. The behavior of short-term interest rates reveals that key rate projections have little effect on the market’s ability to forecast current target rate changes. In contrast, longer-term interest rates indicate that the announcement of key rate projections has significantly reduced market participants’ revisions of the expected future policy path. Therefore, the announcement of key rate projections further improves central bank communication.
    Keywords: Central bank communication, interest rate projections, wavelets, jump probabilities
    JEL: E52 E58 C14
    Date: 2010–12
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010-062&r=for
  12. By: Jérôme Coffinet (Banque de France - Banque de France); Adrian Pop (LEMNA - Laboratoire d'économie et de management de Nantes Atlantique - Université de Nantes : EA4272); Muriel Tiesset (Banque de France - Banque de France)
    Abstract: The current financial crisis offers a unique opportunity to investigate the leading properties of market indicators in a stressed environment and their usefulness from a banking supervision perspective. One pool of relevant information that has been little explored in the empirical literature is the market for banks' exchange-traded option contracts. In this paper, we first extract implied volatility indicators from the prices of the most actively traded option contracts on financial firms' equity. We then examine empirically their ability to predict financial distress by applying survival analysis techniques to a sample of large US financial firms. We find that market indicators extracted from option prices significantly explain the survival time of troubled financial firms and do a better job in predicting financial distress than other time-varying covariates typically included in bank failure models. Overall, both accounting information and option prices contain useful information about subsequent financial problems and, more importantly, the combination produces good forecasts in a high-stress financial world, full of doubts and uncertainties.
    Keywords: Financial distress ; Financial system oversight ; Market discipline ; Options ; Implied volatility ; Survival analysis
    Date: 2010–10–01
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00547744_v1&r=for

This nep-for issue is ©2011 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.