nep-for New Economics Papers
on Forecasting
Issue of 2007‒06‒11
nine papers chosen by
Rob J Hyndman
Monash University

  1. Does the Option Market Produce Superior Forecasts of Noise-Corrected Volatility Measures? By Gael M. Martin; Andrew Reidy; Jill Wright
  2. Forecasting key macroeconomic variables from a large number of predictors: A state space approach By Arvid Raknerud, Terje Skjerpen and Anders Rygh Swensen
  3. Monetary Policy with Model Uncertainty: Distribution Forecast Targeting By Svensson, Lars E O; Williams, Noah
  4. Infrastructure forecast modelling II: Policy planning via structural analysis and balanced scorecard. Electricity in Colombia case study. By Daniel TORRES-GRACIA
  5. Flexible Time Series Forecasting Using Shrinkage Techniques and Focused Selection Criteria By Christian T. Brownlees; Giampiero Gallo
  6. Forecasting technology costs via the Learning Curve – Myth or Magic? By Alberth, S.
  7. The expectations hypothesis of the term structure: some empirical evidence for Portugal By Silva Lopes, Artur C.; M. Monteiro, Olga Susana
  8. Exchange Rate Fundamentals and Order Flow By Martin D. D. Evans; Richard K. Lyons
  9. A multiple regime smooth transition heterogeneous autoregressive model for long memory and asymmetries By Michael McAleer; Marcelo C. Medeiros

  1. By: Gael M. Martin; Andrew Reidy; Jill Wright
    Abstract: This paper presents a comprehensive empirical evaluation of option-implied and returns-based forecasts of volatility, in which recent developments related to the impact on measured volatility of market microstructure noise are taken into account. The paper also assesses the robustness of the performance of the option-implied forecasts to the way in which those forecasts are extracted from the option market. Using a test for superior predictive ability, model-free implied volatility, which aggregates information across the volatility 'smile', and at-the-money implied volatility, which ignores such information, are both tested as benchmark forecasts. The forecasting assessment is conducted using intraday data for three Dow Jones Industrial Average (DJIA) stocks and the S&P500 index over the 1996-2006 period, with future volatility proxied by a range of alternative noise-corrected realized measures. The results provide compelling evidence against the model-free forecast, with its poor performance linked to both the bias and excess variability that it exhibits as a forecast of actual volatility. The positive bias, in particular, is consistent with the option market factoring in a substantial premium for volatility risk. In contrast, implied volatility constructed from liquid at-the-money options is given strong support as a forecast of volatility, at least for the DJIA stocks. Neither benchmark is supported for the S&P500 index. Importantly, the qualitative results are robust to the measure used to proxy future volatility, although there is some evidence to suggest that any option-implied forecast may perform less well in forecasting the measure that excludes jump information, namely bi-power variation.
    Keywords: Volatility Forecasts; Quadratic Variation; Intraday Volatility Measures
    JEL: C10 C53 G12
    Date: 2007–06
  2. By: Arvid Raknerud, Terje Skjerpen and Anders Rygh Swensen (Statistics Norway)
    Abstract: We use state space methods to estimate a large dynamic factor model for the Norwegian economy involving 93 variables for 1978Q2–2005Q4. The model is used to obtain forecasts for 22 key variables that can be derived from the original variables by aggregation. To investigate the potential gain in using such a large information set, we compare the forecasting properties of the dynamic factor model with those of univariate benchmark models. We find that there is an overall gain in using the dynamic factor model, but that the gain is notable only for a few of the key variables.
    Keywords: Dynamic factor model; Forecasting; State space; AR models
    JEL: C13 C22 C32 C53
    Date: 2007–05
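As a rough illustration of the large-panel idea in this abstract, the sketch below extracts one common factor by principal components, fits an AR(1) to it, and forecasts each series through its loading. This is a deliberate simplification, not the authors' state space (Kalman filter) estimator; all names and settings are illustrative.

```python
import numpy as np

def factor_forecast(panel, h=1):
    """One-factor sketch of a large-panel forecast.

    Extracts the first principal component of a standardized (T x N) panel,
    fits an AR(1) to the factor, iterates it h steps ahead, and maps the
    factor forecast back to every series through its loading.
    """
    mu, sd = panel.mean(0), panel.std(0)
    Z = (panel - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    load = Vt[0]                                  # loadings on factor 1
    f = Z @ load                                  # factor scores
    phi = (f[1:] @ f[:-1]) / (f[:-1] @ f[:-1])    # AR(1) slope, no intercept
    z_h = (phi ** h) * f[-1] * load               # standardized h-step forecast
    return mu + sd * z_h

# Illustration: a 10-variable panel driven by a single AR(1) factor.
rng = np.random.default_rng(1)
T, N = 200, 10
f = np.empty(T)
f[0] = rng.standard_normal()
for t in range(1, T):
    f[t] = 0.8 * f[t - 1] + rng.standard_normal()
panel = np.outer(f, rng.standard_normal(N)) + 0.1 * rng.standard_normal((T, N))
print(factor_forecast(panel).shape)  # (10,)
```

In the paper the factors and forecasts come from maximum likelihood state space estimation; principal components is merely the simplest way to convey the "many series, few factors" mechanism.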
  3. By: Svensson, Lars E O; Williams, Noah
    Abstract: We examine optimal and other monetary policies in a linear-quadratic setup with a relatively general form of model uncertainty, so-called Markov jump-linear-quadratic systems extended to include forward-looking variables and unobservable "modes." The form of model uncertainty our framework encompasses includes: simple i.i.d. model deviations; serially correlated model deviations; estimable regime-switching models; more complex structural uncertainty about very different models, for instance, backward- and forward-looking models; time-varying central-bank judgment about the state of model uncertainty; and so forth. We provide an algorithm for finding the optimal policy as well as solutions for arbitrary policy functions. This allows us to compute and plot consistent distribution forecasts - fan charts - of target variables and instruments. Our methods hence extend certainty equivalence and "mean forecast targeting" to more general certainty non-equivalence and "distribution forecast targeting."
    Keywords: multiplicative uncertainty; Optimal policy
    JEL: E42 E52 E58
    Date: 2007–06
  4. By: Daniel TORRES-GRACIA
    Abstract: Countless forecasting models and processes have been developed over the last four decades to support growing demands for infrastructure service delivery and competitiveness. A wide range of these developments is available today, from highly detailed macroeconomic or technical forecasting models based on the convergence of marginal functions, to strategic business models supported by broad, soft decision-making methods. Despite this, the level of practical implementation of forecasting models within public infrastructure planning organisations, whose policy decisions commit important resources over the short, medium and long term, is surprisingly low. A lack of practical orientation, together with methodological complexity and the high costs of implementation, are among the major weaknesses reported in the literature. These models have consequently been restricted to highly specialised infrastructure planning units, mostly private firms, able to manage long implementation processes, involve highly qualified professionals and finance the related costs. National and sub-national organisations with infrastructure planning functions, operating under tight schedules and financial constraints, instead apply softer forecasting approaches that rest on social and institutional agreement about future goals, replacing the complex trend analyses common in more sophisticated models. Although this "agreed" approach is valid under the premise of social acceptance, it reduces technical validity to that acceptance, admitting any kind of trend assumption, rational or not, as long as it has been subjectively agreed upon.
This approach has gained ground in several national forecasting efforts in Colombia in recent years, reducing the technical quality of their practical results. The PPCI2 programme, through the Sustainable Infrastructure and Energy Directorate and DNP, as part of its objective of improving technical capacity in project planning under private and public initiatives, promoted a methodological proposal to develop an infrastructure forecasting model able to strengthen the technical quality of decision making through a practical, reliable and feasible implementation process for top-level decision makers in Colombia's national infrastructure planning units. The result of that effort is the Infrastructure General Forecasting model (IGF) introduced in this document. The IGF is a quantitative-qualitative model supported by the structural analysis process (SAP), used to study interactions across forecasted variables, and by the Balanced Scorecard methodology (BSC), used to support decision-making processes. The underlying analytical method is the matrix analysis of probabilistic cross impacts. Its major outputs include the trends and long-term figures on forecasted variables that forecasting models traditionally offer, together with analyses of the role each forecasted variable plays within its sector under a set of alternative trends. The basic modules of the IGF model comprise historical trend analysis, analysis of the current situation and the short-term effect of new PPPs, and forecasting simulation analysis. The three modules combine analyst judgement with secondary data under a systematic approach. This document explains the IGF's conceptual basis and methodology, its structure across the energy, telecommunications, transport and water supply sectors, and some pilot results on the coverage and market of the electricity service.
The results provide strong inputs for strengthening technical and strategic capacity across infrastructure planning units in Colombia, useful to policy makers, sector planners, consultants, lecturers and researchers in infrastructure planning.
    Date: 2007–05–07
  5. By: Christian T. Brownlees (Università degli Studi di Firenze, Dipartimento di Statistica); Giampiero Gallo (Università degli Studi di Firenze, Dipartimento di Statistica "G. Parenti")
    Abstract: Nonlinear time series models can exhibit components such as long range trends and seasonalities that may be modeled in a flexible fashion. The resulting unconstrained maximum likelihood estimator can be too heavily parameterized and suboptimal for forecasting purposes. The paper proposes the use of a class of shrinkage estimators that includes the Ridge estimator for forecasting time series, with particular attention to GARCH and ACD models. The local large sample properties of this class of shrinkage estimators are investigated. Moreover, we propose symmetric and asymmetric focused selection criteria for shrinkage estimators. The focused information criterion selection strategy consists of picking the shrinkage estimator that minimizes the estimated risk (e.g. MSE) of a given smooth function of the parameters of interest to the forecaster. The usefulness of such shrinkage techniques is illustrated by means of a simulation exercise and an intra-daily financial durations forecasting application. The empirical application shows that an appropriate shrinkage forecasting methodology can significantly outperform the unconstrained ML forecasts of rich flexible specifications.
    Keywords: Forecasting, Shrinkage Estimation, FIC, MEM, GARCH, ACD
    JEL: C22 C51 C53
    Date: 2007–05
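The focused-selection idea can be sketched in a linear regression setting: for each ridge penalty, estimate the risk of a chosen focus (here a point forecast) as squared bias relative to OLS plus sampling variance, and keep the penalty minimizing that estimate. This plug-in rule is only FIC-flavoured, not the authors' exact criterion, and every name below is illustrative.

```python
import numpy as np

def focused_ridge(X, y, c, lams):
    """Ridge shrinkage with a focused choice of penalty.

    For each penalty lam, computes beta_lam = (X'X + lam I)^{-1} X'y and a
    plug-in risk estimate for the focus c'beta: squared bias relative to the
    OLS fit plus the sampling variance of c'beta_lam. Returns the penalty and
    coefficients with the smallest estimated risk.
    """
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    b_ols = np.linalg.solve(XtX, Xty)
    resid = y - X @ b_ols
    sigma2 = resid @ resid / (n - p)
    best = (np.inf, None, None)
    for lam in lams:
        A_inv = np.linalg.inv(XtX + lam * np.eye(p))
        b = A_inv @ Xty
        var = sigma2 * c @ A_inv @ XtX @ A_inv @ c   # variance of c'beta_lam
        bias2 = (c @ (b - b_ols)) ** 2               # squared shrinkage bias
        if bias2 + var < best[0]:
            best = (bias2 + var, lam, b)
    return best[1], best[2]

# Sanity check: with noiseless data OLS is exact, so bias dominates and the
# rule should keep lam = 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta
c = np.array([1.0, 1.0, 1.0])   # focus: the forecast at x = (1, 1, 1)
lam, b = focused_ridge(X, y, c, [0.0, 0.1, 1.0, 10.0])
print(lam)  # 0.0
```

With noisy data and a poorly conditioned design the same rule will typically select a strictly positive penalty, trading a little bias for a larger variance reduction in the focus.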
  6. By: Alberth, S.
    Abstract: To further our understanding of the effectiveness of learning or experience curves for forecasting technology costs, a statistical analysis using historical data has been carried out. Three hypotheses were tested using available data sets that together shed light on the ability of experience curves to forecast future technology costs. The results indicate that the Single Factor Learning Curve is a highly effective estimator of future costs, with little bias when errors are viewed in log format. However, it was also found that, due to the convexity of the log transformation, cost reductions are overestimated once forecasts are converted back to monetary units. Furthermore, the effectiveness of giving greater weight to more recent data was tested using Weighted Least Squares with exponentially increasing weights. This resulted in forecasts that were typically less biased than those from Ordinary Least Squares and highlighted the potential benefits of the method.
    Keywords: Forecasting, Learning curves, Renewable energy
    JEL: D81 O30
    Date: 2007–02
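The two estimators compared in this abstract can be sketched in a few lines: fit log cost against log cumulative output by least squares, optionally with exponentially increasing weights on recent observations. The function name and the decay parameterization are illustrative, not taken from the paper.

```python
import numpy as np

def fit_learning_curve(cum_output, unit_cost, decay=None):
    """Fit log(cost) = a + b * log(cumulative output) by (weighted) least squares.

    decay: if given (0 < decay < 1), observation t gets weight decay**(T-1-t),
    so recent data count more (exponentially increasing weights as in WLS).
    Returns (a, b); the implied learning rate is 1 - 2**b.
    """
    x = np.log(np.asarray(cum_output, dtype=float))
    y = np.log(np.asarray(unit_cost, dtype=float))
    X = np.column_stack([np.ones_like(x), x])
    if decay is None:
        w = np.ones_like(y)
    else:
        T = len(y)
        w = decay ** np.arange(T - 1, -1, -1)
    sw = np.sqrt(w)
    a, b = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return a, b

# Synthetic series with a 20% learning rate (cost falls 20% per doubling).
cum = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
cost = 100.0 * cum ** np.log2(0.8)
a, b = fit_learning_curve(cum, cost)
print(round(1 - 2 ** b, 3))  # 0.2
```

The abstract's convexity point also falls out of this setup: an unbiased forecast of log cost is, by Jensen's inequality, a downward-biased forecast of cost itself once exponentiated.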
  7. By: Silva Lopes, Artur C.; M. Monteiro, Olga Susana
    Abstract: The purpose of this paper is to test the (rational) expectations hypothesis of the term structure of interest rates using Portuguese data for the interbank money market. The results obtained support only a very weak, long-run or "asymptotic" version of the hypothesis, and broadly agree with previous evidence for other countries. The empirical evidence supports the cointegration of Portuguese rates and confirms the well-known "puzzle" in the literature: although its forecasts of future short-term rates are in the correct direction, the spread between longer and shorter rates fails to forecast future longer rates. In the single equation framework, the implications of the hypothesis in terms of the predictive ability of the spread are also clearly rejected.
    Keywords: term structure of interest rates; expectations hypothesis; hypothesis testing; cointegration; Portugal.
    JEL: C32 C22 E43
    Date: 2007–05–31
  8. By: Martin D. D. Evans; Richard K. Lyons
    Abstract: We address whether transaction flows in foreign exchange markets convey fundamental information. Our GE model includes fundamental information that first manifests at the micro level and is not symmetrically observed by all agents. This produces foreign exchange transactions that play a central role in information aggregation, providing testable links between transaction flows, exchange rates, and future fundamentals. We test these links using data on all end-user currency trades received at Citibank over 6.5 years, a sample sufficiently long to analyze real-time forecasts at the quarterly horizon. The predictions are borne out in four empirical findings that define this paper's main contribution: (1) transaction flows forecast future macro variables such as output growth, money growth, and inflation, (2) transaction flows forecast these macro variables significantly better than the exchange rate does, (3) transaction flows (proprietary) forecast future exchange rates, and (4) the forecasted part of fundamentals is better at explaining exchange rates than standard measured fundamentals.
    JEL: F31 G12 G14
    Date: 2007–06
  9. By: Michael McAleer (School of Economics and Commerce, University of Western Australia); Marcelo C. Medeiros (Department of Economics, PUC-Rio)
    Abstract: In this paper we propose a flexible model to capture nonlinearities and long-range dependence in time series dynamics. The new model is a multiple regime smooth transition extension of the Heterogeneous Autoregressive (HAR) model, which is specifically designed to model the behavior of the volatility inherent in financial time series. The model is able to describe simultaneously long memory, as well as sign and size asymmetries. A sequence of tests is developed to determine the number of regimes, and an estimation and testing procedure is presented. Monte Carlo simulations evaluate the finite-sample properties of the proposed tests and estimation procedures. We apply the model to several Dow Jones Industrial Average index stocks using transaction level data from the Trades and Quotes database that covers ten years of data. We find strong support for long memory and both sign and size asymmetries. Furthermore, the new model, when combined with the linear HAR model, is viable and flexible for purposes of forecasting volatility.
    Keywords: Realized volatility, smooth transition, heterogeneous autoregression, financial econometrics, leverage, sign and size asymmetries, forecasting, risk management, model combination.
    Date: 2007–04
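The linear HAR baseline that this paper extends is simple enough to sketch: regress tomorrow's realized volatility on today's RV and its weekly (5-day) and monthly (22-day) averages. The code below is a minimal version of that baseline only, not the authors' multiple regime smooth transition model; names and the simulated series are illustrative.

```python
import numpy as np

def har_design(rv):
    """Build HAR regressors: yesterday's RV plus its 5-day and 22-day
    moving averages, aligned to predict RV one day ahead."""
    rv = np.asarray(rv, dtype=float)
    rows, targets = [], []
    for t in range(21, len(rv) - 1):
        rows.append([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean()])
        targets.append(rv[t + 1])
    return np.array(rows), np.array(targets)

def fit_har(rv):
    X, y = har_design(rv)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # intercept, daily, weekly, monthly coefficients

def forecast_har(rv, beta):
    rv = np.asarray(rv, dtype=float)
    x = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    return x @ beta

# Illustration on a persistent simulated volatility series (log-AR(1)).
rng = np.random.default_rng(0)
T = 500
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = 0.95 * x[t - 1] + 0.3 * rng.standard_normal()
rv = np.exp(x)
beta = fit_har(rv)
print(np.isfinite(forecast_har(rv, beta)))  # True
```

The heterogeneous daily/weekly/monthly structure is what lets this three-lag regression mimic long memory; the paper's contribution is letting the coefficients switch smoothly across regimes to capture sign and size asymmetries.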

This nep-for issue is ©2007 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put "NEP" in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.