nep-for New Economics Papers
on Forecasting
Issue of 2010‒05‒22
fourteen papers chosen by
Rob J Hyndman
Monash University

  1. Automatic forecasting with a modified exponential smoothing state space framework By Alysha M De Livera
  2. Do Jumps Matter? Forecasting Multivariate Realized Volatility allowing for Common Jumps By Yin Liao; Heather M. Anderson; Farshid Vahid
  3. Forecasting age-related changes in breast cancer mortality among white and black US women: A functional approach By Farah Yasmeen; Rob J Hyndman; Bircan Erbas
  4. Forecasting the Intermittent Demand for Slow-Moving Items By Keith Ord; Ralph Snyder; Adrian Beaumont
  5. Forecast Combinations By Marco Aiolfi; Carlos Capistrán; Allan Timmermann
  6. Forecast Combination Based on Multiple Encompassing Tests in a Macroeconomic DSGE System By Costantini, Mauro; Gunter, Ulrich; Kunst, Robert M.
  7. Empirical simultaneous confidence regions for path-forecasts By Jordà, Òscar; Knüppel, Malte; Marcellino, Massimiliano
  8. Forecasting the Malmquist Productivity Index By Daskovska, Alexandra; Simar, Léopold; Van Bellegem, Sébastien
  9. Ranking multivariate GARCH models by problem dimension By Caporin, M.; McAleer, M.J.
  10. GDP Trend Deviations and the Yield Spread: the Case of Five E.U. Countries By Periklis Gogas; Ioannis Pragidis
  11. Thirty-five years of long-run energy forecasting : lessons for climate change policy By Hourcade, Jean-Charles; Nadaud, Franck
  12. Two stock options at the races: Black-Scholes forecasts By Gleb Oshanin; Gregory Schehr
  13. A theorem on the existence of ruptures in the probability scale. II. By Harin, Alexander
  14. Bayesian hierarchical modelling of bacteria growth By Ana P. Palacios; Juan Miguel Marín; Michael P. Wiper

  1. By: Alysha M De Livera
    Abstract: A new automatic forecasting procedure is proposed based on a recent exponential smoothing framework which incorporates a Box-Cox transformation and ARMA residual corrections. The procedure is complete with well-defined methods for initialization, estimation, likelihood evaluation, and analytical derivation of point and interval predictions under a Gaussian error assumption. The algorithm is examined extensively by applying it to single seasonal and non-seasonal time series from the M and the M3 competitions, and is shown to provide competitive out-of-sample forecast accuracy compared to the best methods in these competitions and to the traditional exponential smoothing framework. The proposed algorithm can be used as an alternative to existing automatic forecasting procedures in modeling single seasonal and non-seasonal time series. In addition, it provides the new option of automatic modeling of multiple seasonal time series which cannot be handled using any of the existing automatic forecasting procedures. The proposed automatic procedure is further illustrated by applying it to two multiple seasonal time series involving call center data and electricity demand data.
    Keywords: Exponential smoothing, state space models, automatic forecasting, Box-Cox transformation, residual adjustment, multiple seasonality, time series
    JEL: C22 C53
    Date: 2010–04–28
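The Box-Cox transformation at the core of this framework can be illustrated in a few lines. A minimal sketch in Python, not the paper's procedure: the series, the smoothing parameter, and the transformation parameter lambda are all invented, and simple exponential smoothing stands in for the full state space model with ARMA residual corrections:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform: log for lam == 0, else (y**lam - 1) / lam."""
    return math.log(y) if lam == 0 else (y ** lam - 1) / lam

def inv_box_cox(z, lam):
    """Inverse transform, mapping forecasts back to the original scale."""
    return math.exp(z) if lam == 0 else (lam * z + 1) ** (1 / lam)

def ses_forecast(series, alpha):
    """Simple exponential smoothing on the transformed scale."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical positive-valued series; lam = 0.5 is an illustrative choice.
data = [110.0, 124.0, 119.0, 135.0, 142.0, 138.0]
lam, alpha = 0.5, 0.3
transformed = [box_cox(y, lam) for y in data]
point_forecast = inv_box_cox(ses_forecast(transformed, alpha), lam)
print(round(point_forecast, 2))
```

Forecasting on the transformed scale and back-transforming is what lets one smoothing framework handle series with different variance behavior.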
  2. By: Yin Liao; Heather M. Anderson; Farshid Vahid
    Abstract: Realized volatility of stock returns is often decomposed into two distinct components that are attributed to continuous price variation and jumps. This paper proposes a tobit multivariate factor model for the jumps, coupled with a standard multivariate factor model for the continuous sample path, to jointly forecast volatility in three Chinese Mainland stocks. Out-of-sample forecast analysis shows that separate multivariate factor models for the two volatility processes outperform a single multivariate factor model of realized volatility, and that a single multivariate factor model of realized volatility outperforms univariate models.
    Keywords: Realized Volatility, Bipower Variation, Jumps, Common Factors, Forecasting
    JEL: C13 C32 C52 C53 G32
    Date: 2010–05
  3. By: Farah Yasmeen; Rob J Hyndman; Bircan Erbas
    Abstract: The disparity in breast cancer mortality rates between white and black US women is widening, with higher mortality rates among black women. We apply functional time series models to age-specific breast cancer mortality rates for each group of women, and forecast their mortality curves using exponential smoothing state-space models with damping. The data were obtained from the Surveillance, Epidemiology and End Results (SEER) program of the US (SEER, 2007). Mortality data were obtained from the National Center for Health Statistics (NCHS), available in the SEER*Stat database. We use annual unadjusted breast cancer mortality rates from 1969 to 2004 in 5-year age groups (45-49, 50-54, 55-59, 60-64, 65-69, 70-74, 75-79, 80-84). Age-specific mortality curves were obtained using nonparametric smoothing methods. The curves are then decomposed using functional principal components, and we fit functional time series models with four basis functions for each population separately. The curves from each population are forecast and prediction intervals are calculated. Twenty-year forecasts indicate an overall decline in future breast cancer mortality rates for both groups of women. This decline is steeper among white women aged 55-73 and black women aged 60-84. For black women under 55 years of age, the forecast rates are relatively stable, indicating no significant change in breast cancer mortality rates among young black women over the next 20 years.
    Keywords: Breast cancer mortality, racial and ethnic disparities, screening, trends, forecasting, functional data analysis
    JEL: C14 C23 J11
    Date: 2010–04–22
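The damped exponential smoothing behind these forecasts can be sketched with an additive damped-trend (Holt) recursion. All values below are invented, and the functional principal component machinery of the paper is not reproduced; the point is only to show how damping flattens long-horizon forecasts:

```python
def damped_holt_forecast(series, alpha, beta, phi, h):
    """Additive damped-trend (Holt) exponential smoothing.

    Returns h-step-ahead point forecasts; the damping parameter
    phi < 1 makes the trend flatten out over long horizons.
    """
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    # i-step forecast: level + (phi + phi^2 + ... + phi^i) * trend
    return [level + sum(phi ** j for j in range(1, i + 1)) * trend
            for i in range(1, h + 1)]

# Hypothetical declining mortality-like rates per 100,000.
rates = [90.0, 88.5, 86.0, 84.2, 82.0, 80.5, 78.9]
fc = damped_holt_forecast(rates, alpha=0.4, beta=0.1, phi=0.9, h=20)
print(round(fc[0], 2), round(fc[-1], 2))
```

Successive forecast increments shrink by a factor of phi, so a declining series is projected to keep falling but ever more slowly rather than linearly forever.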
  4. By: Keith Ord; Ralph Snyder; Adrian Beaumont
    Abstract: Organizations with large-scale inventory systems typically have a large proportion of items for which demand is intermittent and low volume. We examine different approaches to forecasting for such products, paying particular attention to the need for inventory planning over a multi-period lead-time when the underlying process may be non-stationary. We develop a forecasting framework based upon the zero-inflated Poisson distribution (ZIP), which enables the explicit evaluation of the multi-period lead-time demand distribution in special cases and an effective simulation scheme more generally. We also develop performance measures related to the entire predictive distribution, rather than focusing exclusively upon point predictions. The ZIP model is compared to a number of existing methods using data on the monthly demand for 1,046 automobile parts, provided by a US automobile manufacturer. We conclude that the ZIP scheme compares favorably to other approaches, including variations of Croston's method, and provides a straightforward basis for inventory planning.
    Keywords: Croston's method; Exponential smoothing; Intermittent demand; Inventory control; Prediction likelihood; State space models; Zero-inflated Poisson distribution
    JEL: C22
    Date: 2010–05
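The zero-inflated Poisson building block, and the simulation of multi-period lead-time demand from it, can be sketched directly. A minimal illustration with invented parameters (stationary, with none of the paper's state space dynamics or estimation):

```python
import math
import random

def zip_demand(p_zero, lam, rng):
    """One period of zero-inflated Poisson demand: zero with
    probability p_zero, otherwise a Poisson(lam) draw."""
    if rng.random() < p_zero:
        return 0
    # Knuth's multiplicative algorithm (fine for small lam).
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def lead_time_demand(p_zero, lam, lead_time, n_sims, seed=42):
    """Simulate total demand over a multi-period lead time."""
    rng = random.Random(seed)
    return [sum(zip_demand(p_zero, lam, rng) for _ in range(lead_time))
            for _ in range(n_sims)]

sims = lead_time_demand(p_zero=0.7, lam=2.0, lead_time=3, n_sims=10000)
sims.sort()
# A 95th-percentile order-up-to level read off the simulated distribution.
print(sims[int(0.95 * len(sims))])
```

With these invented numbers, expected per-period demand is (1 - 0.7) * 2 = 0.6 units, so lead-time demand averages about 1.8 units, yet the 95th percentile is considerably higher, which is exactly why a full predictive distribution matters for inventory planning.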
  5. By: Marco Aiolfi (Goldman Sachs Asset Management); Carlos Capistrán (Banco de México); Allan Timmermann (University of California, San Diego and CREATES)
    Abstract: We consider combinations of subjective survey forecasts and model-based forecasts from linear and non-linear univariate specifications as well as multivariate factor-augmented models. Empirical results suggest that a simple equal-weighted average of survey forecasts outperforms the best model-based forecasts for a majority of macroeconomic variables and forecast horizons. Additional improvements can in some cases be gained by using a simple equal-weighted average of survey and model-based forecasts. We also provide an analysis of the importance of model instability for explaining gains from forecast combination. Analytical and simulation results uncover break scenarios where forecast combinations outperform the best individual forecasting model.
    Keywords: Time-series forecasts, survey forecasts, model instability
    JEL: C22 C53
    Date: 2010–05–06
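The equal-weighted average that performs so well here is trivial to compute, which is part of its appeal. A sketch with invented forecast values:

```python
def combine_equal(forecasts):
    """Equal-weighted average of competing forecasts of one target."""
    return sum(forecasts) / len(forecasts)

# Hypothetical survey forecasts and a model-based forecast of, say,
# next quarter's inflation rate (all numbers invented).
survey_forecasts = [2.1, 2.4, 2.0, 2.3]
model_forecast = 1.8
survey_avg = combine_equal(survey_forecasts)
survey_and_model = combine_equal([survey_avg, model_forecast])
print(survey_avg, survey_and_model)
```

No weights need to be estimated, which is precisely why equal weighting is so robust when the relative accuracy of the individual forecasters is unstable over time.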
  6. By: Costantini, Mauro (Department of Economics, University of Vienna, Vienna, Austria); Gunter, Ulrich (Department of Economics, University of Vienna, Vienna, Austria); Kunst, Robert M. (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria and Department of Economics, University of Vienna, Vienna, Austria)
    Abstract: We use data generated by a macroeconomic DSGE model to study the relative benefits of forecast combinations based on forecast-encompassing tests relative to simple uniformly weighted forecast averages across rival models. Assumed rival models are four linear autoregressive specifications, one of them a more sophisticated factor-augmented vector autoregression (FAVAR). The forecaster is assumed not to know the true data-generating DSGE model. The results critically depend on the prediction horizon. While one-step prediction hardly supports test-based combinations, the test-based procedure attains a clear lead at prediction horizons greater than two.
    Keywords: Combining forecasts, encompassing tests, model selection, time series, DSGE model
    JEL: C15 C32 C53
    Date: 2010–05
  7. By: Jordà, Òscar; Knüppel, Malte; Marcellino, Massimiliano
    Abstract: Measuring and displaying uncertainty around path-forecasts, i.e. forecasts made in period T about the expected trajectory of a random variable in periods T+1 to T+H, is a key ingredient for decision making under uncertainty. The probabilistic assessment of the set of possible trajectories that the variable may follow over time is summarized by the simultaneous confidence region generated from its forecast generating distribution. However, if the null model is only approximate or altogether unavailable, one cannot derive analytic expressions for this confidence region, and its non-parametric estimation is impractical given commonly available predictive sample sizes. Instead, this paper derives approximate rectangular confidence regions that control the false discovery rate, which are a function of the predictive sample covariance matrix and the empirical distribution of the Mahalanobis distance of the path-forecast errors. These rectangular regions are simple to construct and appear to work well in a variety of cases explored empirically and by simulation. The proposed techniques are applied to provide confidence bands around the Fed and Bank of England real-time path-forecasts of growth and inflation.
    Keywords: Path forecast, forecast uncertainty, simultaneous confidence region, Scheffé's S-method, Mahalanobis distance, false discovery rate
    JEL: C32 C52 C53
    Date: 2010
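The construction can be sketched in simplified form: estimate per-horizon error standard deviations from an archive of historical path-forecast errors, compute each error path's Mahalanobis distance (here under a diagonal covariance, a simplification of the paper's full predictive covariance matrix), and widen the marginal bands by an empirical quantile of those distances. All numbers are invented:

```python
import math
import random

def rectangular_region(error_paths, point_path, coverage=0.95):
    """Simplified rectangular confidence region for a path-forecast.

    error_paths: historical H-dimensional forecast-error paths,
    assumed mean zero. Uses a diagonal covariance (a simplification);
    each band is point +/- q * sigma_h, with q an empirical quantile
    of the Mahalanobis distances of the error paths.
    """
    H, n = len(point_path), len(error_paths)
    sigma = [math.sqrt(sum(e[h] ** 2 for e in error_paths) / n)
             for h in range(H)]
    dists = sorted(
        math.sqrt(sum((e[h] / sigma[h]) ** 2 for h in range(H)))
        for e in error_paths)
    q = dists[int(coverage * n)]
    return [(point_path[h] - q * sigma[h], point_path[h] + q * sigma[h])
            for h in range(H)]

rng = random.Random(0)
# Hypothetical archive of 200 four-step-ahead error paths, with error
# variance growing in the horizon, as is typical of path-forecasts.
errors = [[rng.gauss(0, 0.5 * (h + 1)) for h in range(4)]
          for _ in range(200)]
bands = rectangular_region(errors, point_path=[2.0, 2.2, 2.4, 2.5])
print(bands)
```

The bands fan out with the horizon because the error variance does, while the common multiplier q makes the region a joint statement about the whole path rather than H separate pointwise intervals.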
  8. By: Daskovska, Alexandra; Simar, Léopold; Van Bellegem, Sébastien
    Abstract: The Malmquist Productivity Index (MPI) provides a convenient way of measuring the productivity change of a given unit between two consecutive time periods. Until now, only a static approach for analyzing the MPI was available in the literature. However, this hides potentially valuable information contained in the evolution of productivity over time. In this paper, we introduce a dynamic procedure for forecasting the MPI. We compare several approaches and give credit to a method based on the assumption of circularity. Because the MPI is not circular, we present a new decomposition of the MPI in which the time-varying indices are circular. Based on that decomposition, a new working dynamic forecasting procedure is proposed and illustrated. To construct prediction intervals for the MPI, we extend the bootstrap method to take into account potential serial correlation in the data. We illustrate all the new techniques described above by forecasting the productivity index of 17 OECD countries, constructed from their GDP, labor and capital stock.
    Keywords: Malmquist Productivity Index, circularity, efficiency, smooth bootstrap
    Date: 2009–06–11
  9. By: Caporin, M. (Erasmus Econometric Institute); McAleer, M.J. (Erasmus Econometric Institute)
    Abstract: In the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. The two most widely known and used are the Scalar BEKK model of Engle and Kroner (1995) and Ding and Engle (2001), and the DCC model of Engle (2002). Some recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of MGARCH models, namely BEKK, DCC, Corrected DCC (cDCC) of Aielli (2008), CCC of Bollerslev (1990), Exponentially Weighted Moving Average, and covariance shrinking of Ledoit and Wolf (2004), using historical data on 89 US equities. Our methods follow, in part, the approach described in Patton and Sheppard (2009), and contribute to the literature in several directions. First, we consider a wide range of models, including the recent cDCC model and covariance shrinking. Second, we use a range of tests and approaches for direct and indirect model comparison, including the Weighted Likelihood Ratio test of Amisano and Giacomini (2007). Third, we examine how the model rankings are influenced by the cross-sectional dimension of the problem.
    Keywords: covariance forecasting; model confidence set; model ranking; MGARCH; model comparison
    Date: 2010–05–11
  10. By: Periklis Gogas; Ioannis Pragidis
    Abstract: Several studies have established the predictive power of the yield curve for real economic activity. In this paper we use data for a variety of E.U. countries: both EMU members (Germany, France, Italy) and non-EMU members (Sweden and the U.K.). The data used range from 1991:Q1 to 2009:Q1. For each country, we extract the long-run trend and the cyclical component of real economic activity, while the corresponding interbank interest rates of long- and short-term maturities are used to calculate the country-specific yield spreads. We also augment the models tested with non-monetary-policy variables: the countries' unemployment rates and stock indices. The methodology employed to forecast real output is a probit model, whose link function is the inverse of the standard normal cumulative distribution function, evaluated with several formal forecasting and goodness-of-fit tests. The results show that the yield curve augmented with the non-monetary variables has significant forecasting power for real economic activity, but the results differ qualitatively between the individual economies examined, raising non-trivial policy implications.
    Date: 2010–05
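A probit forecasting model of this kind amounts to passing a linear index of the predictors through the standard normal cumulative distribution function. A sketch with invented coefficients (the paper's estimation and evaluation tests are not shown):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_below_trend(spread, unemployment, b0=-0.5, b1=-0.8, b2=0.1):
    """Probit forecast of the probability that output falls below
    trend, given the yield spread and the unemployment rate.
    All coefficient values are invented for illustration."""
    return norm_cdf(b0 + b1 * spread + b2 * unemployment)

# An inverted yield curve (negative spread) raises the probability
# under this hypothetical parameterization.
print(round(prob_below_trend(spread=-0.5, unemployment=7.0), 3))
print(round(prob_below_trend(spread=1.5, unemployment=7.0), 3))
```

In practice the coefficients would be estimated by maximum likelihood on the binary below-trend indicator, and the fitted probabilities evaluated with the forecasting and goodness-of-fit tests the abstract mentions.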
  11. By: Hourcade, Jean-Charles; Nadaud, Franck
    Abstract: This paper sheds light on an implicit dimension of the climate policy debate: the extent to which supply-side response (emission-reducing energy technologies) may substitute for the transformation of consumption behavior and thus help get around the political difficulties surrounding such behavioral transformation. The paper performs a meta-review of long-term energy forecasts since the end of the 1960s in order to put in perspective the controversies around technological optimism about the potential for cheap, large-scale, carbon-free energy production. This retrospective analysis encompasses 116 scenarios conducted over 36 years and analyzes their predictions for a) fossil fuels, b) nuclear energy, and c) renewable energy. The analysis demonstrates how the predicted relative shares of these three types of energy have evolved since 1970, for two cases: a) predicted shares in 2010, which shows how the initial outlooks for the 2000-2010 period have been revised as a function of observed trends; and b) predicted shares for t+30, which shows how these revisions have affected medium-term prospects. The analysis shows a decrease, since 1970, in technological optimism about switching away from fossil fuels; this decrease is unsurprisingly correlated with a decline in modelers’ beliefs in the suitability of nuclear energy. But, after a trend of increasing optimism, a declining trend also characterizes renewable energies in the 1980s and 1990s before a slight revival of technological optimism about renewables in the aftermath of Kyoto.
    Keywords: Energy Production and Transportation,Energy and Environment,Environment and Energy Efficiency,Energy Demand,Climate Change Mitigation and Green House Gases
    Date: 2010–05–01
  12. By: Gleb Oshanin; Gregory Schehr
    Abstract: Suppose one buys two very similar stocks and is curious about how much, after some time T, one of them will contribute to the overall asset, expecting, of course, that it should be around 1/2 of the sum. Here we examine this question within the classical Black and Scholes (BS) model, focusing on the evolution of the probability density function P(w) of the random variable w = a_T^{(1)}/(a_T^{(1)} + a_T^{(2)}), where a_T^{(1)} and a_T^{(2)} are the values of two (either European- or Asian-style) options produced by two absolutely identical BS stochastic equations. We show that within the realm of the BS model the behavior of P(w) is surprisingly different from common-sense-based expectations. For European-style options, P(w) always undergoes a transition, when T approaches a certain threshold value, from a unimodal to a bimodal form with the most probable values close to 0 and 1 and, strikingly, w = 1/2 being the least probable value. This signifies that the symmetry between the two options spontaneously breaks and just one of them comes to dominate the sum. For path-dependent Asian-style options we observe the same anomalous behavior, but only within a certain range of parameters. Outside this range, P(w) is always a bell-shaped function with a maximum at w = 1/2.
    Date: 2010–05
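The distribution of the share w can be explored by direct Monte Carlo: simulate two independent geometric Brownian motions with identical parameters, take the European call payoff on each, and form the ratio. All parameter values below are invented, and no claim is made here about where the unimodal-to-bimodal transition occurs:

```python
import math
import random

def gbm_terminal(s0, mu, sigma, T, rng):
    """Terminal value of a geometric Brownian motion."""
    z = rng.gauss(0.0, 1.0)
    return s0 * math.exp((mu - 0.5 * sigma ** 2) * T
                         + sigma * math.sqrt(T) * z)

def share_samples(s0, mu, sigma, T, strike, n, seed=1):
    """Samples of w = a1 / (a1 + a2) for two identical, independent
    European call payoffs (paths where both payoffs vanish are skipped)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a1 = max(gbm_terminal(s0, mu, sigma, T, rng) - strike, 0.0)
        a2 = max(gbm_terminal(s0, mu, sigma, T, rng) - strike, 0.0)
        if a1 + a2 > 0:
            out.append(a1 / (a1 + a2))
    return out

w = share_samples(s0=100.0, mu=0.05, sigma=0.3, T=1.0,
                  strike=100.0, n=20000)
# Fraction of probability mass near the edges versus near 1/2.
edges = sum(1 for x in w if x < 0.1 or x > 0.9) / len(w)
print(round(edges, 3))
```

Plotting a histogram of w for increasing T is a quick way to see how the shape of P(w) changes with maturity.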
  13. By: Harin, Alexander
    Abstract: A theorem on the existence of ruptures in the probability scale is proved. The theorem can be used, e.g., in economics and forecasting. It can help to resolve paradoxes such as the Allais paradox and the “four-fold pattern” paradox, and to construct a correcting formula for forecasting.
    Keywords: probability; economics; forecasting; modeling; modelling; utility; decisions; uncertainty
    JEL: D81 C5 E17 C1
    Date: 2010–05–10
  14. By: Ana P. Palacios; Juan Miguel Marín; Michael P. Wiper
    Abstract: Bacterial growth models are commonly used in food safety. Such models permit the prediction of microbial safety and the shelf life of perishable foods. In this paper, we study the problem of modelling bacterial growth when we observe multiple experimental results under identical environmental conditions. We develop a hierarchical version of the Gompertz equation to take into account the possibility of replicated experiments and we show how it can be fitted using a fully Bayesian approach. This approach is illustrated using experimental data from Listeria monocytogenes growth and the results are compared with alternative models. Model selection is undertaken throughout using an appropriate version of the deviance information criterion and the posterior predictive loss criterion. Models are fitted using WinBUGS via R2WinBUGS.
    Keywords: Predictive microbiology, Growth models, Gompertz curve, Bayesian hierarchical modelling
    Date: 2010–04
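The Gompertz growth curve at the heart of the model can be written down directly. A sketch of the (non-hierarchical, non-Bayesian) curve in the common Zwietering parameterization, with invented parameter values; the paper's replicated-experiment structure and MCMC fitting are not reproduced:

```python
import math

def gompertz(t, A, mu, lam):
    """Modified Gompertz growth curve (Zwietering parameterization):
    A is the asymptotic log-count, mu the maximum growth rate,
    lam the lag time before growth takes off."""
    return A * math.exp(-math.exp(mu * math.e / A * (lam - t) + 1))

# Hypothetical Listeria-like growth: asymptote 9 log CFU/ml,
# maximum rate 0.5 per hour, 5-hour lag (all values invented).
curve = [gompertz(t, A=9.0, mu=0.5, lam=5.0) for t in range(0, 48, 4)]
print([round(y, 2) for y in curve])
```

The hierarchical version of the paper would let A, mu and lam vary across replicated experiments around shared population-level values, with the whole structure fitted by MCMC.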

This nep-for issue is ©2010 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.