nep-for New Economics Papers
on Forecasting
Issue of 2012‒01‒25
eleven papers chosen by
Rob J Hyndman
Monash University

  1. Does the Box-Cox transformation help in forecasting macroeconomic time series? By Tommaso Proietti; Helmut Lütkepohl
  2. How informative are in-sample information criteria to forecasting? the case of Chilean GDP By Medel, Carlos A.
  3. Forecast combination for discrete choice models: predicting FOMC monetary policy decisions By Laurent Pauwels; Andrey Vasnev
  4. Short-run forecasting of the euro-dollar exchange rate with economic fundamentals By Marcos dal Bianco; Maximo Camacho; Gabriel Perez-Quiros
  5. Do central bank forecasts influence private agents? Forecasting performance vs. signals By Paul Hubert
  6. The Two-sided Weibull Distribution and Forecasting Financial Tail Risk By Richard Gerlach; Qian Chen
  7. Bayesian Semi-parametric Expected Shortfall Forecasting in Financial Markets By Richard H. Gerlach; Cathy W.S. Chen; Liou-Yan Lin
  8. A Bayesian evaluation of alternative models of trend inflation By Todd E. Clark; Taeyoung Doh
  9. The Multistep Beveridge-Nelson Decomposition By Tommaso Proietti
  10. Central Bank Forecasts as an Instrument of Monetary Policy By Paul Hubert
  11. ¿Akaike o Schwarz? ¿Cuál elegir para predecir el PIB chileno? [Akaike or Schwarz? Which should be chosen to forecast Chilean GDP?] By Medel, Carlos A.

  1. By: Tommaso Proietti (The University of Sydney Business School and Università degli Studi di Roma "Tor Vergata"); Helmut Lütkepohl (European University Institute)
    Abstract: The paper investigates whether transforming a time series leads to an improvement in forecasting accuracy. The class of transformations considered is the Box-Cox power transformation, which applies to series measured on a ratio scale. We propose a nonparametric approach for estimating the optimal transformation parameter based on frequency-domain estimation of the prediction error variance, and we also conduct an extensive recursive forecast experiment on a large set of seasonal monthly macroeconomic time series related to industrial production and retail turnover. In about one fifth of the series considered, the Box-Cox transformation produces forecasts that are significantly better than those from the untransformed data at the one-step-ahead horizon; in most of these cases the logarithmic transformation is the relevant one. As the forecast horizon increases, the evidence in favour of a transformation becomes weaker. Typically, the naïve predictor that simply reverses the transformation leads to a lower mean square error than the optimal predictor at short forecast leads. We also discuss whether the preliminary in-sample frequency-domain assessment provides reliable guidance as to which series should be transformed in order to improve predictive performance significantly.
    Keywords: Forecast comparisons. Multi-step forecasting. Rolling forecasts. Nonparametric estimation of prediction error variance.
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:08/2011&r=for
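    For reference, the Box-Cox power transformation with parameter \lambda that the paper studies is
        y_t^{(\lambda)} = (y_t^{\lambda} - 1)/\lambda  for \lambda \neq 0,    y_t^{(\lambda)} = \log y_t  for \lambda = 0,
    so that \lambda = 1 leaves the series untransformed (up to a constant shift) and \lambda = 0 gives the logarithmic transformation that the authors find most often relevant.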
  2. By: Medel, Carlos A.
    Abstract: There is no standard economic forecasting procedure that systematically outperforms the others at all horizons and with any dataset. A common way to proceed, in many contexts, is to choose the best model within a family according to a fitting criterion, and then forecast. I compare the out-of-sample performance of a large number of autoregressive integrated moving average (ARIMA) models, with some variations, chosen by three information criteria commonly used for model building: Akaike, Schwarz, and Hannan-Quinn. I perform this exercise to identify how the smallest root mean squared forecast error can be achieved with models chosen by information criteria. I use the Chilean GDP dataset, estimating with a rolling window sample to generate one- to four-step-ahead forecasts, and I also examine the role of seasonal adjustment and the Easter effect in out-of-sample performance. After the estimation of more than 20 million models, the results show that Akaike and Schwarz are the better criteria for forecasting purposes, with the traditional ARMA specification preferred. Accounting for the Easter effect improves forecast accuracy only with seasonally adjusted data, and second-order stationarity works best.
    Keywords: data mining; forecasting; ARIMA; seasonal adjustment; Easter-effect
    JEL: C13 C53 C52 C22
    Date: 2012–01–14
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:35949&r=for
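    For reference, with \hat{L} the maximised likelihood, k the number of estimated parameters and n the effective sample size, the three criteria compared above are
        AIC = -2 \log \hat{L} + 2k,    BIC (Schwarz) = -2 \log \hat{L} + k \log n,    HQ = -2 \log \hat{L} + 2k \log\log n,
    each balancing fit against parsimony with a different penalty on k. A minimal Python sketch of this kind of criterion-based selection, assuming the statsmodels ARIMA interface and a generic series y (none of the names below are from the paper), might look like:

        # Illustrative sketch only: pick an ARIMA order by an in-sample
        # information criterion, then produce 1- to 4-step-ahead forecasts.
        import itertools
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def ic_select_and_forecast(y, criterion="bic", max_p=4, max_q=4, d=1, steps=4):
            best_ic, best_res = np.inf, None
            for p, q in itertools.product(range(max_p + 1), range(max_q + 1)):
                try:
                    res = ARIMA(y, order=(p, d, q)).fit()
                except Exception:
                    continue  # skip orders that fail to estimate
                ic = {"aic": res.aic, "bic": res.bic, "hqic": res.hqic}[criterion]
                if ic < best_ic:
                    best_ic, best_res = ic, res
            return best_res.forecast(steps=steps)  # 1- to 4-step-ahead forecasts

    Repeating the selection over a rolling estimation window, as the paper does, then yields the sequence of out-of-sample forecasts whose accuracy is compared across criteria.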
  3. By: Laurent Pauwels (The University of Sydney Business School); Andrey Vasnev (The University of Sydney Business School)
    Abstract: This paper provides a methodology for combining forecasts based on several discrete choice models, achieved primarily by combining the one-step-ahead probability forecasts associated with each model. The paper applies well-established scoring rules for qualitative response models in the context of forecast combination: log-scores and quadratic-scores are both used to evaluate the forecasting accuracy of each model and to combine the probability forecasts. In addition to producing point forecasts, the effect of sampling variation is also assessed. The methodology is applied to forecasting the decisions of the US Federal Open Market Committee (FOMC) on changing the federal funds target rate. Several of the economic fundamentals influencing FOMC decisions are nonstationary over time and are modelled in a similar fashion to Hu and Phillips (2004a, JoE). The empirical results show that combining forecast probabilities using scores mostly outperforms both equal-weight combination and forecasts based on multivariate models.
    Keywords: Forecast combination, Probability forecast, Discrete choice models, Monetary policy decisions
    Date: 2011–06
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:11/2011&r=for
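    For reference, writing \hat{p}_{m,t}(j) for model m's forecast probability of outcome j at time t and y_t for the realised outcome, the two scoring rules referred to above are, up to sign and scaling conventions,
        LS_m = \sum_t \log \hat{p}_{m,t}(y_t),    QS_m = \sum_t \sum_j ( \hat{p}_{m,t}(j) - 1\{y_t = j\} )^2,
    i.e. the log-score (higher is better) and the quadratic or Brier-type score (lower is better). Weighting each model's one-step-ahead probability forecast by its relative score is one natural way of forming the score-based combinations the abstract describes.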
  4. By: Marcos dal Bianco; Maximo Camacho; Gabriel Perez-Quiros
    Abstract: We propose a fundamentals-based econometric model for the weekly changes in the euro-dollar rate, with the distinctive feature of mixing economic variables quoted at different frequencies. The model obtains a good in-sample fit and, more importantly, encouraging out-of-sample forecasting results at horizons ranging from one week to one month. Specifically, we obtain statistically significant improvements upon the hard-to-beat random-walk model using traditional statistical measures of forecasting error at all horizons. Moreover, the model improves markedly when we use the direction-of-change metric, which has more economic relevance than other loss measures. On this measure, our model performs much better at all forecasting horizons than a naive model that assigns an equal chance of the exchange rate going up or down, with statistically significant improvements.
    Keywords: Euro-dollar rate, Exchange rate forecasting, State-space model, Mixed frequencies
    JEL: F31 F37 C01 C22
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:bbv:wpaper:1201&r=for
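    As a concrete reading of the direction-of-change metric mentioned above, in generic notation (\Delta s_{t+h} is the realised change in the exchange rate over horizon h and \Delta \hat{s}_{t+h} the predicted change; the symbols are not taken from the paper), the hit rate is
        HR_h = (1/T) \sum_t 1\{ \mathrm{sign}(\Delta \hat{s}_{t+h}) = \mathrm{sign}(\Delta s_{t+h}) \},
    and the naive benchmark that treats an appreciation and a depreciation as equally likely has an expected hit rate of one half.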
  5. By: Paul Hubert (Observatoire Français des Conjonctures Économiques)
    Abstract: Focusing on a set of central banks that publish their internal macroeconomic forecasts in real time enables one to shed light on the expectations channel of monetary policy. The main contribution of this paper is to assess whether central bank forecasts influence private forecasts. The answer is positive for inflation forecasts in Sweden, the UK and Japan. To disentangle the sources of central bank influence, two concepts are proposed: endogenous influence, which stems from more accurate central bank forecasts, and exogenous influence, which stems from central bank signals about either future policy decisions or private information. Original empirical evidence on central banks' forecasting performance relative to private agents is provided, and the estimates show that in Sweden more accurate inflation forecasts generate a specific central bank influence that is distinct from the influence of signals. The publication of forecasts may therefore reflect two central banking strategies that aim to shape private expectations: forecasting or policymaking.
    Keywords: Monetary Policy; Imperfect Information; Communication; Endogenous Influence; Exogenous Influence.
    JEL: E52 E58
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:fce:doctra:1120&r=for
  6. By: Richard Gerlach (Faculty of Economics and Business, The University of Sydney); Qian Chen
    Abstract: A two-sided Weibull distribution is developed to model the conditional financial return distribution, for the purpose of forecasting Value at Risk (VaR) and conditional VaR. A range of conditional return distributions is combined with four volatility specifications to forecast tail risk in four international markets, two exchange rates and one individual asset series, over a four-year forecast period that includes the recent global financial crisis. The two-sided Weibull performs at least as well as other distributions for VaR forecasting, but performs most favourably for conditional Value at Risk forecasting, prior to, during and after the recent crisis.
    Keywords: Two-sided Weibull, Value-at-Risk, Expected shortfall, Back-testing, global financial crisis, volatility.
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:01/2011&r=for
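    For reference, with F_t the forecast conditional distribution of the return r_t and \alpha a small tail probability (sign and tail conventions differ across papers), the two measures forecast here are
        VaR_t(\alpha) = F_t^{-1}(\alpha),    ES_t(\alpha) = E[ r_t | r_t \le VaR_t(\alpha) ],
    i.e. a conditional quantile and the expected return conditional on falling beyond it, the latter also known as conditional VaR or expected shortfall; the two-sided Weibull, combined with a volatility model, supplies the conditional return distribution from which both are computed.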
  7. By: Richard H. Gerlach (The University of Sydney Business School); Cathy W.S. Chen (Feng Chia University, Taiwan); Liou-Yan Lin (Feng Chia University, Taiwan)
    Abstract: Bayesian semi-parametric estimation has proven effective for quantile estimation in general and specifically for financial Value at Risk forecasting. Expected shortfall is a competing tail risk measure, involving a conditional expectation beyond a quantile, that has recently been semi-parametrically estimated via asymmetric least squares and so-called expectiles. An asymmetric Gaussian density is proposed, allowing a likelihood to be developed that leads to Bayesian semi-parametric estimation and forecasts of expectiles and expected shortfall. Further, the conditional autoregressive expectile class of model is generalised to two fully nonlinear families. Adaptive Markov chain Monte Carlo sampling schemes are employed for estimation in these families. The proposed models are clearly favoured in an empirical study forecasting eleven financial return series: clear evidence is found of more accurate expected shortfall forecasting compared with a range of competing methods. Further, the most favoured models are those estimated by Bayesian methods.
    Keywords: CARE model; Nonlinear; Asymmetric Gaussian distribution; Expected shortfall; semi-parametric.
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:01/2012&r=for
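    For reference, the expectile machinery behind the CARE-type models above: the \tau-expectile \mu_\tau of a return r is the minimiser of the asymmetric least squares criterion
        \mu_\tau = \arg\min_m E[ | \tau - 1\{ r \le m \} | (r - m)^2 ],
    which reduces to the mean at \tau = 0.5. For a suitably chosen \tau the expectile coincides with a given quantile (VaR level), and expected shortfall can then be written as a simple function of that expectile, which is what makes expectile models a semi-parametric route to expected shortfall forecasting.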
  8. By: Todd E. Clark; Taeyoung Doh
    Abstract: With trend inflation now widely understood to be important both as a measure of the public's perception of the central bank's inflation goal and for the accuracy of longer-term inflation forecasts, this paper uses Bayesian methods to assess alternative models of trend inflation. Reflecting specifications common in reduced-form inflation modeling and forecasting, we consider a range of models of inflation, including: AR with constant trend; AR with trend equal to last period's inflation rate; local level model; AR with random walk trend; AR with trend equal to the long-run expectation from the Survey of Professional Forecasters; and AR with time-varying parameters. We consider versions of the models with constant shock variances and with stochastic volatility. We first use Bayesian metrics to compare the fits of the alternative models. We then use Bayesian model averaging to account for uncertainty surrounding the model of trend inflation, to obtain an alternative estimate of trend inflation in the U.S., and to generate medium-term, model-average forecasts of inflation. Our analysis yields two broad results. First, in model fit and density forecast accuracy, models with stochastic volatility consistently dominate those with constant volatility. Second, it is difficult to single out one specification of trend inflation as the best. Among alternative models of the trend in core PCE inflation, the local level specification of Stock and Watson (2007) and the survey-based trend specification are about equally good; among competing models of trend GDP inflation, several trend specifications seem about equally good.
    Keywords: Bayesian statistical decision theory; Inflation (Finance) - Mathematical models; Forecasting
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwp:1134&r=for
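    For reference, the local level specification of Stock and Watson (2007) referred to above treats inflation \pi_t as a random walk trend plus noise,
        \pi_t = \tau_t + \varepsilon_t,    \tau_t = \tau_{t-1} + \eta_t,
    with \tau_t the trend (long-run) inflation component; in the stochastic volatility versions the variances of \varepsilon_t and \eta_t are allowed to drift over time, while the constant-variance versions hold them fixed.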
  9. By: Tommaso Proietti (The University of Sydney Business School)
    Abstract: The Beveridge-Nelson decomposition defines the trend component in terms of the eventual forecast function, as the value the series would take if it were on its long-run path. This paper introduces the multistep Beveridge-Nelson decomposition, which arises when the forecast function is obtained by the direct autoregressive approach, optimising the predictive ability of the AR model at forecast horizons greater than one. We compare our proposal with the standard Beveridge-Nelson decomposition, for which the forecast function is obtained by iterating the one-step-ahead predictions via the chain rule. We illustrate that the multistep Beveridge-Nelson trend is more efficient than the standard one in the presence of model misspecification, and we subsequently assess the predictive validity of the extracted transitory component with respect to future growth.
    Keywords: Trend and Cycle. Forecasting. Filtering. Misspecification
    JEL: C22 C52 E32
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:09/2011&r=for
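    For reference, for an I(1) series y_t with drift \mu, the standard Beveridge-Nelson trend is the long-horizon forecast net of the deterministic drift,
        \tau_t = \lim_{h \to \infty} ( E_t[y_{t+h}] - h\mu ),    c_t = y_t - \tau_t,
    with c_t the transitory (cycle) component; in the multistep version proposed here, the forecast E_t[y_{t+h}] comes from a direct h-step autoregression rather than from iterating the one-step-ahead AR forecasts.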
  10. By: Paul Hubert (Observatoire Français des Conjonctures Économiques)
    Abstract: Policymakers at the Federal Open Market Committee (FOMC) have published macroeconomic forecasts since 1979. Some studies find that, compared with other forecasts, these forecasts do not contain useful information for predicting the macroeconomic variables concerned. In this paper, we examine the value of publishing the FOMC forecasts in two steps: we assess whether they influence private forecasts and whether they may be considered a policy instrument. We provide original evidence that FOMC forecasts are able to influence private expectations. We also find that FOMC forecasts convey information about future Fed rate movements, affect policy variables in a different way from the Fed rate, and respond differently to macroeconomic shocks.
    Keywords: Monetary Policy, Forecasts, FOMC, Influence, Policy signals, Structural VAR.
    JEL: E52 E58
    Date: 2011–11
    URL: http://d.repec.org/n?u=RePEc:fce:doctra:1123&r=for
  11. By: Medel, Carlos A.
    Abstract: Schwarz. In this paper I evaluate the predictive ability of the Akaike and Schwarz information criteria using autoregressive integrated moving average models with sectoral data on Chilean GDP. In terms of root mean square error, and after the estimation of more than a million models, the results indicate that, on average, the models based on the Schwarz criterion perform better than those selected with the Akaike criterion for the four horizons analysed. Furthermore, the statistical significance of these differences indicates that the superiority of the Schwarz criterion holds mainly at higher horizons.
    Keywords: information criteria; data mining; forecasting; ARIMA
    JEL: C13 C53 C52 C22
    Date: 2012–01–14
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:35950&r=for

This nep-for issue is ©2012 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.