nep-for New Economics Papers
on Forecasting
Issue of 2010-06-04
ten papers chosen by
Rob J Hyndman
Monash University

  1. Bayesian Model Averaging. An Application to Forecast Inflation in Colombia By Eliana González
  2. The Superiority of Greenbook Forecasts and the Role of Recessions By Kishor N. Kundan
  3. Conditional forecasts in DSGE models By Junior Maih
  4. Weights and pools for a Norwegian density combination By Hilde Bjørnland; Karsten Gerdrup; Christie Smith; Anne Sofie Jore; Leif Anders Thorsrud
  5. Do Google Searches Help in Nowcasting Private Consumption? A Real-Time Evidence for the US By Konstantin A. Kholodilin; Maximilian Podstawski; Boriss Siliverstovs
  6. Ranking Multivariate GARCH Models by Problem Dimension By Massimiliano Caporin; Michael McAleer
  7. Predictions of short-term rates and the expectations hypothesis By Massimo Guidolin; Daniel L. Thornton
  8. Bootstrap prediction intervals for VaR and ES in the context of GARCH models By María Rosa Nieto; Esther Ruiz
  9. The Implied Cost of Capital: A New Approach By Hou, Kewei; van Dijk, Mathijs A.; Zhang, Yinglei
  10. Adaptive Interest Rate Modelling By Mengmeng Guo; Wolfgang Karl Härdle

  1. By: Eliana González
    Abstract: An application of Bayesian Model Averaging (BMA) is implemented to construct combined forecasts of Colombian inflation for the short and medium run. A model selection algorithm is applied to a set of linear models with a large dataset of potential predictors, using marginal as well as predictive likelihood. The forecasts obtained using predictive likelihood outperform those obtained using marginal likelihood. BMA forecasts reduce forecasting error relative to the individual forecasts, the equal-weighted average, a dynamic factor model and random walk forecasts for most horizons. Additionally, for some horizons BMA outperforms the frequentist information-theoretic model averaging (ITMA) when the weights of both methodologies are built on the predictive ability of the models.
    Date: 2010–05–23
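    The combination scheme described above can be sketched in a few lines. This is a generic illustration of likelihood-based BMA weighting, not the paper's implementation; the forecast and log-likelihood values are made up.

```python
import numpy as np

def bma_combine(forecasts, log_pred_likelihoods):
    """Combine point forecasts with weights proportional to each model's
    (marginal or predictive) likelihood, as in generic BMA."""
    ll = np.asarray(log_pred_likelihoods, dtype=float)
    w = np.exp(ll - ll.max())        # subtract the max for numerical stability
    w /= w.sum()                     # normalise to posterior model weights
    return float(np.dot(w, forecasts)), w

# Three hypothetical inflation forecasts and their log predictive likelihoods
combined, weights = bma_combine([2.1, 2.5, 3.0], [-10.0, -9.0, -12.0])
```

    The best-scoring model receives the largest weight, and the combined forecast is a convex combination of the individual ones.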
  2. By: Kishor N. Kundan (University of Wisconsin-Milwaukee)
    Abstract: In this paper, we investigate the role of recessions in the relative forecasting performance of the Fed and the private sector. Romer and Romer (2000) showed that the Fed's forecasts of inflation and output were superior to those of the private sector in the pre-1991 period. D'Agostino and Whelan (2008) found that the information superiority of the Fed deteriorated after 1991. Our results show that the Fed's information superiority in forecasting real activity arose from its forecasting dominance during recessions. If recessions are excluded from the pre-1992 period, the informational advantage of the Fed disappears, and in some cases private sector forecasts perform better. We do not find any systematic effect of recessions on inflation forecasts.
    Keywords: Greenbook Forecasts, Recessions, Business Cycle Turning Points
    JEL: E31 E32 E37
    Date: 2010
  3. By: Junior Maih (Norges Bank (Central Bank of Norway))
    Abstract: New-generation DSGE models are sometimes misspecified in dimensions that matter for their forecasting performance. The paper suggests one way to improve the forecasts of a DSGE model using conditioning information that need not be accurate. The technique presented allows agents to anticipate the information on the conditioning variables several periods ahead. It also allows the forecaster to apply a continuum of degrees of uncertainty around the mean of the conditioning information, making hard-conditional and unconditional forecasts special cases. An application to a small open-economy DSGE model shows that the benefits of conditioning depend crucially on the ability of the model to capture the correlation between the conditioning information and the variables of interest.
    Keywords: DSGE model, conditional forecast
    JEL: C53 F47
    Date: 2010–04–27
  4. By: Hilde Bjørnland (Norwegian School of Management (BI) and Norges Bank (Central Bank of Norway)); Karsten Gerdrup (Norges Bank (Central Bank of Norway)); Christie Smith (Reserve Bank of New Zealand); Anne Sofie Jore (Norges Bank (Central Bank of Norway)); Leif Anders Thorsrud (Norges Bank (Central Bank of Norway))
    Abstract: We apply a suite of models to produce quasi-real-time density forecasts of Norwegian GDP and inflation, and evaluate different combination and selection methods using the Kullback-Leibler information criterion (KLIC). We use linear and logarithmic opinion pools in conjunction with various weighting schemes, and we compare these combinations to two different selection methods. In our application, logarithmic opinion pools were better than linear opinion pools, and score-based weights were generally superior to other weighting schemes. Model selection generally yielded poor density forecasts, as evaluated by KLIC.
    Keywords: Model combination; evaluation; density forecasting; KLIC
    JEL: C32 C52 C53 E52
    Date: 2010–05–19
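    The two pooling rules compared in the paper can be illustrated on a common density grid. This is a minimal generic sketch, not the authors' code; the Gaussian component densities and the weights are arbitrary.

```python
import numpy as np

def linear_pool(P, w):
    """Linear opinion pool: weighted arithmetic mean of densities.
    P: (n_models, n_grid) densities on a common grid; w: weights summing to one."""
    return w @ P

def log_pool(P, w, dy):
    """Logarithmic opinion pool: weighted geometric mean, renormalised so the
    pooled density integrates to one (dy is the grid spacing)."""
    g = np.exp(w @ np.log(P))
    return g / (g.sum() * dy)

# Two hypothetical Gaussian density forecasts, pooled with weights (0.6, 0.4)
x = np.linspace(-10.0, 10.0, 2001)
dy = x[1] - x[0]
p1 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)              # N(0, 1)
p2 = np.exp(-(x - 1)**2 / 8) / (2 * np.sqrt(2 * np.pi))  # N(1, 4)
P = np.vstack([p1, p2])
w = np.array([0.6, 0.4])
lp = linear_pool(P, w)
gp = log_pool(P, w, dy)
```

    The linear pool is a mixture (it can be multimodal), while the logarithmic pool is unimodal and more concentrated, which is one reason the two can score differently under the KLIC.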
  5. By: Konstantin A. Kholodilin (DIW Berlin, Germany); Maximilian Podstawski (Universität Potsdam, Wirtschafts- und Sozialwissen- schaftliche Fakultät, Potsdam, Germany); Boriss Siliverstovs (KOF Swiss Economic Institute, ETH Zurich, Switzerland)
    Abstract: In this paper, we investigate whether Google search activity can help in nowcasting the year-on-year growth rates of monthly US private consumption using a real-time data set. The Google-based forecasts are compared to those based on a benchmark AR(1) model and on models including consumer surveys and financial indicators. According to the Diebold-Mariano test of equal predictive ability, the null hypothesis can be rejected, suggesting that Google-based forecasts are significantly more accurate than those of the benchmark model. At the same time, the corresponding null hypothesis cannot be rejected for models with consumer surveys and financial variables. Moreover, when we apply the test of superior predictive ability (Hansen, 2005), which controls for possible data-snooping biases, we are able to reject the null hypothesis that the benchmark model is not inferior to any alternative model's forecasts. Furthermore, the results of the model confidence set (MCS) procedure (Hansen et al., 2005) suggest that the autoregressive benchmark is not selected into the set of best forecasting models. Apart from several Google-based models, the MCS also contains some models including survey-based indicators and financial variables. We conclude that Google searches do help improve nowcasts of private consumption in the US.
    Keywords: Google indicators, real-time nowcasting, principal components, US private consumption.
    JEL: C22 C53 C82
    Date: 2010–04
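    The Diebold-Mariano test used above is easy to sketch for one-step-ahead forecasts under squared-error loss. This generic version (no small-sample correction, no long-run variance adjustment for longer horizons) is for illustration only, with made-up forecast errors.

```python
import numpy as np

def diebold_mariano(e1, e2):
    """DM statistic for equal predictive ability under squared-error loss,
    one-step-ahead case; approximately N(0, 1) under the null."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    dbar = d.mean()
    se = np.sqrt(d.var(ddof=1) / d.size)            # std. error of the mean
    return dbar / se

# Hypothetical forecast errors: the second model is clearly more accurate
rng = np.random.default_rng(0)
e1, e2 = rng.normal(0, 2, 400), rng.normal(0, 1, 400)
stat = diebold_mariano(e1, e2)
```

    A large positive statistic indicates that the second model's forecasts are significantly more accurate than the first's.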
  6. By: Massimiliano Caporin; Michael McAleer (University of Canterbury)
    Abstract: In the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. The two most widely known and used are the Scalar BEKK model of Engle and Kroner (1995) and Ding and Engle (2001), and the DCC model of Engle (2002). Some recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of MGARCH models, namely BEKK, DCC, Corrected DCC (cDCC) of Aielli (2008), CCC of Bollerslev (1990), Exponentially Weighted Moving Average, and covariance shrinking of Ledoit and Wolf (2004), using historical data on 89 US equities. Our methods follow part of the approach described in Patton and Sheppard (2009), and contribute to the literature in several directions. First, we consider a wide range of models, including the recent cDCC model and covariance shrinking. Second, we use a range of tests and approaches for direct and indirect model comparison, including the Weighted Likelihood Ratio test of Amisano and Giacomini (2007). Third, we examine how the model rankings are influenced by the cross-sectional dimension of the problem.
    Keywords: Covariance forecasting; model confidence set; model ranking; MGARCH; model comparison
    JEL: C32 C53 C52
    Date: 2010–05–01
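    Of the models in this comparison set, the Exponentially Weighted Moving Average is the simplest to state. The sketch below uses the common RiskMetrics smoothing value of 0.94 as an assumed default; it is not tied to the paper's exact setup.

```python
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """One-step-ahead EWMA covariance forecast for a (T, N) return matrix."""
    r = np.asarray(returns, dtype=float)
    S = np.cov(r, rowvar=False)                  # initialise at the sample covariance
    for x in r:
        xc = x[:, None]
        S = lam * S + (1.0 - lam) * (xc @ xc.T)  # recursive update
    return S

# Illustrative data: 100 days of returns on 3 assets
rng = np.random.default_rng(1)
S = ewma_covariance(rng.normal(size=(100, 3)))
```

    The recursion keeps the forecast symmetric and positive definite as long as the initial matrix is, which is one reason EWMA remains a common benchmark in large cross-sections.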
  7. By: Massimo Guidolin; Daniel L. Thornton
    Abstract: Despite its role in monetary policy and finance, the expectations hypothesis (EH) of the term structure of interest rates has received virtually no empirical support. The empirical failure of the EH has been attributed to a variety of econometric biases associated with the single-equation models most often used to test it, although no bias seems to account for the extent and magnitude of the failure. This paper analyzes the EH by focusing on the predictability of the short-term rate. This is done by comparing h-month-ahead forecasts for the 1- and 3-month Treasury bill yields implied by the EH with the forecasts from random-walk, Diebold and Li's (2006), and Duffee's (2002) models. The evidence suggests that the failure of the EH is likely a consequence of market participants' inability to adequately predict the short-term rate, in that none of these models outperforms a simple random walk model in recursive, real-time experiments. Using standard methods that take into account the additional uncertainty caused by the need to estimate model parameters, the null hypothesis of equal predictive accuracy of each model relative to the random walk alternative is never rejected.
    Keywords: Rational expectations (Economic theory) ; Interest rates
    Date: 2010
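    The paper's recursive, real-time design can be mimicked in miniature: re-estimate a simple model on an expanding window and compare its h-step forecast errors with the no-change (random walk) forecast. The AR(1)-by-OLS stand-in below is purely illustrative, not one of the paper's term-structure models.

```python
import numpy as np

def recursive_rmse(y, h=1, start=30):
    """Out-of-sample RMSEs of a random walk vs. a recursively re-estimated AR(1)."""
    y = np.asarray(y, dtype=float)
    e_rw, e_ar = [], []
    for t in range(start, y.size - h):
        hist = y[: t + 1]                        # information available at time t
        e_rw.append(y[t + h] - hist[-1])         # random walk: forecast = last value
        slope, const = np.polyfit(hist[:-1], hist[1:], 1)
        f = hist[-1]
        for _ in range(h):                       # iterate the AR(1) h steps ahead
            f = slope * f + const
        e_ar.append(y[t + h] - f)
    return np.sqrt(np.mean(np.square(e_rw))), np.sqrt(np.mean(np.square(e_ar)))

# Simulated series; in the spirit of the abstract, the truth is a random walk
rng = np.random.default_rng(2)
rmse_rw, rmse_ar = recursive_rmse(np.cumsum(rng.normal(size=200)))
```

    When the true process is (close to) a random walk, as the abstract suggests for short rates, the estimated model typically gains nothing over the no-change forecast.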
  8. By: María Rosa Nieto; Esther Ruiz
    Abstract: In this paper, we propose a new bootstrap procedure to obtain prediction intervals of future Value at Risk (VaR) and Expected Shortfall (ES) in the context of univariate GARCH models. These intervals incorporate the parameter uncertainty associated with the estimation of the conditional variance of returns. Furthermore, they do not depend on any particular assumption about the error distribution. Alternative bootstrap intervals previously proposed in the literature incorporate the first but not the second source of uncertainty when computing the VaR and ES. We also consider an iterated smoothed bootstrap with better properties than traditional ones when computing prediction intervals for quantiles. However, this latter procedure depends on parameters that have to be chosen arbitrarily and is computationally very demanding. We analyze the finite sample performance of the proposed procedure and show that its coverage is closer to the nominal level than that of the alternatives. All the results are illustrated by obtaining one-step-ahead prediction intervals of the VaR and ES of several real time series of financial returns.
    Keywords: Expected Shortfall, Feasible Historical Simulation, Hill estimator, Parameter uncertainty, Quantile intervals, Value at Risk
    Date: 2010–05
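    A heavily simplified sketch conveys the flavour of this kind of procedure: filter a GARCH(1,1), resample the standardized residuals, and read VaR off the bootstrap distribution. Unlike the paper's proposal, this version takes the parameters as given and therefore ignores exactly the parameter uncertainty the authors incorporate; the parameter values and data are hypothetical.

```python
import numpy as np

def bootstrap_var(returns, omega, alpha, beta, level=0.01, B=2000, seed=0):
    """One-step-ahead VaR from a residual bootstrap of a GARCH(1,1)
    with *known* parameters (no parameter uncertainty)."""
    r = np.asarray(returns, dtype=float)
    s2 = np.empty(r.size)
    s2[0] = r.var()                              # initialise the variance filter
    for t in range(1, r.size):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    z = r / np.sqrt(s2)                          # standardized residuals
    s2_next = omega + alpha * r[-1] ** 2 + beta * s2[-1]
    rng = np.random.default_rng(seed)
    draws = np.sqrt(s2_next) * rng.choice(z, size=B, replace=True)
    return float(np.quantile(draws, level))      # lower-tail quantile = VaR

rng = np.random.default_rng(3)
var_1pct = bootstrap_var(rng.normal(0.0, 1.4, 500), omega=0.1, alpha=0.05, beta=0.9)
```

    The paper's contribution is to place an interval around such a point VaR by also resampling the parameter estimates, rather than conditioning on them as above.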
  9. By: Hou, Kewei (Ohio State University); van Dijk, Mathijs A. (Erasmus University Rotterdam); Zhang, Yinglei (Chinese University of Hong Kong)
    Abstract: We propose a new approach to estimate the implied cost of capital (ICC). Our approach is distinct from prior studies in that we do not rely on analysts' earnings forecasts to compute the ICC. Instead, we use a cross-sectional model to forecast the earnings of individual firms. Our approach has two major advantages. First, it allows us to estimate the ICC for a much larger sample of firms over a much longer time period. Second, it is not affected by the various issues that lead to the well-documented biases in analysts' forecasts. Our cross-sectional earnings model delivers earnings forecasts that outperform consensus analyst forecasts. We show that, as a result, our approach to estimate the ICC produces a more reliable proxy for expected returns than other approaches. We present evidence on the implications for the equity premium and a variety of asset pricing anomalies.
    Date: 2010–02
  10. By: Mengmeng Guo; Wolfgang Karl Härdle
    Abstract: A good description of the dynamics of interest rates is crucial for pricing derivatives and hedging the corresponding risk. Interest rate modelling in an unstable macroeconomic context motivates one-factor models with time-varying parameters. In this paper, the local parametric approach is introduced to adaptively estimate interest rate models. This method can be used generally in time-varying coefficient parametric models. It serves not only to detect jumps and structural breaks, but also to choose the largest time-homogeneous interval for each time point, such that within this interval the coefficients are statistically constant. We apply this adaptive approach in simulations and to real data. Using the three-month Treasury bill rate as a proxy for the short rate, we find that our method can detect both structural changes and stable intervals for homogeneous modelling of the interest rate process. In more unstable macroeconomic periods, the time-homogeneous interval cannot last long. Furthermore, our approach performs well in long-horizon forecasting.
    Keywords: CIR model, Interest rate, Local parametric approach, Time homogeneous interval, Adaptive statistical techniques
    JEL: E44 G12 G32 N22
    Date: 2010–05
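    The CIR model named in the keywords is the parametric workhorse behind the adaptive procedure. A full-truncation Euler simulation of it is sketched below with purely illustrative parameter values; the adaptive interval-selection step itself is beyond a short example.

```python
import numpy as np

def simulate_cir(r0=0.03, kappa=0.5, theta=0.04, sigma=0.1, T=1.0, n=250, seed=0):
    """Euler scheme for dr = kappa*(theta - r) dt + sigma*sqrt(r) dW, truncating
    r at zero inside the drift and diffusion to keep the square root real."""
    dt = T / n
    rng = np.random.default_rng(seed)
    r = np.empty(n + 1)
    r[0] = r0
    for t in range(n):
        rp = max(r[t], 0.0)                      # full truncation
        shock = sigma * np.sqrt(rp * dt) * rng.standard_normal()
        r[t + 1] = r[t] + kappa * (theta - rp) * dt + shock
    return r

path = simulate_cir()                            # one year of daily short rates
```

    Under the local parametric approach described above, such a model would be refitted on the longest interval over which its coefficients appear statistically constant.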

This nep-for issue is ©2010 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.