
on Forecasting 
By:  Alysha M De Livera 
Abstract:  A new automatic forecasting procedure is proposed based on a recent exponential smoothing framework which incorporates a Box-Cox transformation and ARMA residual corrections. The procedure is complete with well-defined methods for initialization, estimation, likelihood evaluation, and analytical derivation of point and interval predictions under a Gaussian error assumption. The algorithm is examined extensively by applying it to single seasonal and non-seasonal time series from the M and the M3 competitions, and is shown to provide competitive out-of-sample forecast accuracy compared to the best methods in these competitions and to the traditional exponential smoothing framework. The proposed algorithm can be used as an alternative to existing automatic forecasting procedures in modeling single seasonal and non-seasonal time series. In addition, it provides the new option of automatic modeling of multiple seasonal time series, which cannot be handled using any of the existing automatic forecasting procedures. The proposed automatic procedure is further illustrated by applying it to two multiple seasonal time series involving call center data and electricity demand data. 
Keywords:  Exponential smoothing, state space models, automatic forecasting, Box-Cox transformation, residual adjustment, multiple seasonality, time series 
JEL:  C22 C53 
Date:  2010–04–28 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201010&r=for 
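The Box-Cox step of the framework can be illustrated with a minimal sketch (function names and the example series are mine, not from the paper): the model operates on the transformed scale, and forecasts are mapped back with the inverse transform.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform: natural log for lam == 0, power transform otherwise."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y**lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Inverse Box-Cox transform, mapping forecasts back to the original scale."""
    z = np.asarray(z, dtype=float)
    if lam == 0:
        return np.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)

y = np.array([1.0, 2.0, 4.0, 8.0])   # illustrative positive series
z = boxcox(y, 0.5)                   # model and forecast on this scale
back = inv_boxcox(z, 0.5)            # recover the original units
```

In the framework described, the transformation parameter is estimated jointly with the smoothing and ARMA parameters rather than fixed in advance.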
By:  Yin Liao; Heather M. Anderson; Farshid Vahid 
Abstract:  Realized volatility of stock returns is often decomposed into two distinct components that are attributed to continuous price variation and jumps. This paper proposes a Tobit multivariate factor model for the jumps, coupled with a standard multivariate factor model for the continuous sample path, to jointly forecast volatility in three Chinese Mainland stocks. Out-of-sample forecast analysis shows that separate multivariate factor models for the two volatility processes outperform a single multivariate factor model of realized volatility, and that a single multivariate factor model of realized volatility outperforms univariate models. 
Keywords:  Realized Volatility, Bipower Variation, Jumps, Common Factors, Forecasting 
JEL:  C13 C32 C52 C53 G32 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201011&r=for 
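The split of realized volatility into continuous and jump parts rests on comparing realized variance with bipower variation, which is robust to jumps. A minimal sketch of the standard Barndorff-Nielsen and Shephard construction (the paper's factor models then operate on these components; the function name and example returns are illustrative):

```python
import numpy as np

def realized_measures(r):
    """Split realized variance into continuous and jump parts using
    realized variance (RV) and jump-robust bipower variation (BV)."""
    r = np.asarray(r, dtype=float)
    rv = np.sum(r**2)                                            # realized variance
    bv = (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation
    jump = max(rv - bv, 0.0)                                     # jump contribution
    return rv, bv, jump

# illustrative intraday returns with one large (jump-like) move
rv, bv, jump = realized_measures(np.array([0.01, -0.01, 0.02, 1.0]))
```

Because the large return enters RV squared but BV only through products with adjacent small returns, the jump component picks it up.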
By:  Farah Yasmeen; Rob J Hyndman; Bircan Erbas 
Abstract:  The disparity in breast cancer mortality rates among white and black US women is widening, with higher mortality rates among black women. We apply functional time series models to age-specific breast cancer mortality rates for each group of women, and forecast their mortality curves using exponential smoothing state-space models with damping. The data were obtained from the Surveillance, Epidemiology and End Results (SEER) program of the US (SEER, 2007). Mortality data were obtained from the National Center for Health Statistics (NCHS), available on the SEER*Stat database. We use annual unadjusted breast cancer mortality rates from 1969 to 2004 in 5-year age groups (45-49, 50-54, 55-59, 60-64, 65-69, 70-74, 75-79, 80-84). Age-specific mortality curves were obtained using nonparametric smoothing methods. The curves are then decomposed using functional principal components, and we fit functional time series models with four basis functions for each population separately. The curves from each population are forecast and prediction intervals are calculated. Twenty-year forecasts indicate an overall decline in future breast cancer mortality rates for both groups of women. This decline is steeper among white women aged 55-73 and black women aged 60-84. For black women under 55 years of age, the forecast rates are relatively stable, indicating no significant change in future breast cancer mortality rates among young black women in the next 20 years. 
Keywords:  Breast cancer mortality, racial and ethnic disparities, screening, trends, forecasting, functional data analysis 
JEL:  C14 C23 J11 
Date:  2010–04–22 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20109&r=for 
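The functional principal components step can be sketched with a plain SVD of the centered curves; in the paper the resulting score series are then forecast with exponential smoothing state-space models, which is omitted here (the function name and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def functional_pcs(curves, k=4):
    """Decompose a (years x ages) matrix of mortality curves into a mean
    curve plus k principal-component basis functions and their scores."""
    mu = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mu, full_matrices=False)
    basis = Vt[:k]                 # basis functions over age
    scores = U[:, :k] * s[:k]      # one time series of scores per basis function
    return mu, basis, scores

# synthetic stand-in: 36 annual curves over 8 age groups
rng = np.random.default_rng(0)
X = rng.normal(size=(36, 8))
mu, B, S = functional_pcs(X, k=4)
approx = mu + S @ B                # rank-4 reconstruction of the curves
```

Forecasting each column of the scores and recombining with the basis functions yields the forecast mortality curves.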
By:  Keith Ord; Ralph Snyder; Adrian Beaumont 
Abstract:  Organizations with large-scale inventory systems typically have a large proportion of items for which demand is intermittent and low volume. We examine different approaches to forecasting for such products, paying particular attention to the need for inventory planning over a multi-period lead time when the underlying process may be non-stationary. We develop a forecasting framework based upon the zero-inflated Poisson (ZIP) distribution, which enables the explicit evaluation of the multi-period lead-time demand distribution in special cases and an effective simulation scheme more generally. We also develop performance measures related to the entire predictive distribution, rather than focusing exclusively upon point predictions. The ZIP model is compared to a number of existing methods using data on the monthly demand for 1,046 automobile parts, provided by a US automobile manufacturer. We conclude that the ZIP scheme compares favorably to other approaches, including variations of Croston's method, and also provides a straightforward basis for inventory planning. 
Keywords:  Croston's method; Exponential smoothing; Intermittent demand; Inventory control; Prediction likelihood; State space models; Zero-inflated Poisson distribution 
JEL:  C22 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201012&r=for 
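For intuition, the multi-period lead-time demand distribution under a static ZIP model can be obtained by simulation; the paper additionally allows non-stationary dynamics, which this sketch ignores (the function name and all parameter values are hypothetical):

```python
import numpy as np

def simulate_leadtime_demand(p_zero, mean_demand, lead_time, n_sims=10000, seed=1):
    """Simulate total demand over a multi-period lead time when per-period
    demand is zero-inflated Poisson: zero with probability p_zero,
    otherwise Poisson(mean_demand)."""
    rng = np.random.default_rng(seed)
    occur = rng.random((n_sims, lead_time)) >= p_zero      # does demand occur?
    sizes = rng.poisson(mean_demand, (n_sims, lead_time))  # Poisson demand sizes
    return (occur * sizes).sum(axis=1)                     # lead-time totals

demand = simulate_leadtime_demand(p_zero=0.8, mean_demand=3.0, lead_time=6)
reorder_level = np.quantile(demand, 0.95)  # stock covering 95% of lead-time demand
```

Working with the full simulated distribution, rather than a point forecast, is what makes service-level calculations like the quantile above straightforward.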
By:  Marco Aiolfi (Goldman Sachs Asset Management); Carlos Capistrán (Banco de México); Allan Timmermann (University of California, San Diego and CREATES) 
Abstract:  We consider combinations of subjective survey forecasts and model-based forecasts from linear and nonlinear univariate specifications as well as multivariate factor-augmented models. Empirical results suggest that a simple equal-weighted average of survey forecasts outperforms the best model-based forecasts for a majority of macroeconomic variables and forecast horizons. Additional improvements can in some cases be gained by using a simple equal-weighted average of survey and model-based forecasts. We also provide an analysis of the importance of model instability for explaining gains from forecast combination. Analytical and simulation results uncover break scenarios in which forecast combinations outperform the best individual forecasting model. 
Keywords:  Time-series forecasts, survey forecasts, model instability 
JEL:  C22 C53 
Date:  2010–05–06 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201021&r=for 
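The equal-weighted combination at the heart of the empirical results is trivial to implement; a sketch with hypothetical numbers (the forecast values are illustrative, not from the paper):

```python
import numpy as np

def combine_equal(forecasts):
    """Equal-weighted average of competing forecasts
    (rows: forecasters/models, columns: horizons)."""
    return np.asarray(forecasts, dtype=float).mean(axis=0)

survey = np.array([2.1, 2.4, 2.6])      # hypothetical survey forecasts
model = np.array([1.9, 2.0, 2.2])       # hypothetical model-based forecasts
combo = combine_equal([survey, model])  # simple average at each horizon
```

Its robustness under model instability, where estimated combination weights break down, is one of the explanations the paper explores.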
By:  Costantini, Mauro (Department of Economics, University of Vienna, Vienna, Austria); Gunter, Ulrich (Department of Economics, University of Vienna, Vienna, Austria); Kunst, Robert M. (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria and Department of Economics, University of Vienna, Vienna, Austria) 
Abstract:  We use data generated by a macroeconomic DSGE model to study the benefits of forecast combinations based on forecast-encompassing tests relative to simple uniformly weighted forecast averages across rival models. The rival models are four linear autoregressive specifications, one of them a more sophisticated factor-augmented vector autoregression (FAVAR). The forecaster is assumed not to know the true data-generating DSGE model. The results depend critically on the prediction horizon. While one-step prediction hardly supports test-based combinations, the test-based procedure attains a clear lead at prediction horizons greater than two. 
Keywords:  Combining forecasts, encompassing tests, model selection, time series, DSGE model 
JEL:  C15 C32 C53 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihsesp:251&r=for 
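A forecast-encompassing check can be sketched as a regression of one forecast's error on the difference between the rival forecasts, in the Chong-Hendry/Harvey-Newbold spirit; the paper's exact test statistic may differ, and the function name and data below are illustrative:

```python
import numpy as np

def encompassing_weight(y, f1, f2):
    """Regress the error of forecast f1 on (f2 - f1) by OLS. The slope is the
    weight f2 would receive in an optimal pairwise combination; a slope near
    zero suggests f1 already encompasses f2."""
    x = np.asarray(f2) - np.asarray(f1)
    e = np.asarray(y) - np.asarray(f1)
    xc, ec = x - x.mean(), e - e.mean()
    return np.dot(xc, ec) / np.dot(xc, xc)

# illustration: f1 is noisy, f2 is the perfect forecast, so f2 should get weight 1
rng = np.random.default_rng(0)
y = rng.normal(size=200)
f1 = y + rng.normal(size=200)
lam = encompassing_weight(y, f1, y)
```

A testing procedure would compare the slope's t-statistic to a critical value before deciding whether to combine; only the point estimate is shown here.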
By:  Jordà, Òscar; Knüppel, Malte; Marcellino, Massimiliano 
Abstract:  Measuring and displaying uncertainty around path forecasts, i.e., forecasts made in period T about the expected trajectory of a random variable in periods T+1 to T+H, is a key ingredient for decision making under uncertainty. The probabilistic assessment of the set of possible trajectories that the variable may follow over time is summarized by the simultaneous confidence region generated from its forecast generating distribution. However, if the null model is only approximate or altogether unavailable, one cannot derive analytic expressions for this confidence region, and its non-parametric estimation is impractical given commonly available predictive sample sizes. Instead, this paper derives approximate rectangular confidence regions that control the false discovery rate; these are a function of the predictive sample covariance matrix and the empirical distribution of the Mahalanobis distance of the path-forecast errors. These rectangular regions are simple to construct and appear to work well in a variety of cases explored empirically and by simulation. The proposed techniques are applied to provide confidence bands around the Fed and Bank of England real-time path forecasts of growth and inflation. 
Keywords:  Path forecast, forecast uncertainty, simultaneous confidence region, Scheffé's S-method, Mahalanobis distance, false discovery rate 
JEL:  C32 C52 C53 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:201006&r=for 
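The empirical Mahalanobis ingredient can be sketched as follows: from a history of path-forecast errors, estimate the covariance matrix and take an empirical quantile of the squared Mahalanobis distances, which then scales the rectangular bands. This is a simplified reading of the construction, assuming mean-zero errors; the function name is mine and the full procedure in the paper involves more than this quantile:

```python
import numpy as np

def mahalanobis_radius(path_errors, coverage=0.95):
    """Empirical quantile of squared Mahalanobis distances of historical
    path-forecast errors (rows: forecast paths, columns: horizons 1..H),
    assuming the errors are mean zero."""
    E = np.asarray(path_errors, dtype=float)
    S = np.cov(E, rowvar=False)                   # predictive sample covariance
    Sinv = np.linalg.inv(S)
    d2 = np.einsum('ij,jk,ik->i', E, Sinv, E)     # squared Mahalanobis distances
    return np.quantile(d2, coverage)

# synthetic Gaussian errors: the empirical quantile should sit near the
# chi-square(H) quantile that the Gaussian theory would give
rng = np.random.default_rng(0)
E = rng.normal(size=(2000, 3))
q = mahalanobis_radius(E, 0.95)
```

The point of using the empirical distribution rather than the chi-square reference is precisely robustness when the null model is only approximate.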
By:  Daskovska, Alexandra; Simar, Léopold; Van Bellegem, Sébastien 
Abstract:  The Malmquist Productivity Index (MPI) suggests a convenient way of measuring the productivity change of a given unit between two consecutive time periods. Until now, only a static approach for analyzing the MPI was available in the literature. However, this hides potentially valuable information given by the evolution of productivity over time. In this paper, we introduce a dynamic procedure for forecasting the MPI. We compare several approaches and favor a method based on the assumption of circularity. Because the MPI is not circular, we present a new decomposition of the MPI in which the time-varying indices are circular. Based on that decomposition, a new working dynamic forecasting procedure is proposed and illustrated. To construct prediction intervals for the MPI, we extend the bootstrap method to take into account potential serial correlation in the data. We illustrate all the new techniques described above by forecasting the productivity index of 17 OECD countries, constructed from their GDP, labor and capital stock. 
Keywords:  Malmquist Productivity Index, circularity, efficiency, smooth bootstrap 
Date:  2009–06–11 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:22150&r=for 
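A bootstrap that respects serial correlation can be sketched with a generic circular moving-block scheme; the paper's extension is tailored to the MPI setting and uses a smooth bootstrap, so this version is only indicative (the function name and parameters are mine):

```python
import numpy as np

def block_bootstrap(x, block_len, n_boot, seed=0):
    """Circular moving-block bootstrap: resample overlapping blocks so that
    short-run serial dependence within each block is preserved."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    out = np.empty((n_boot, n_blocks * block_len))
    for b in range(n_boot):
        starts = rng.integers(0, n, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_len)) % n  # wrap around the sample
        out[b] = x[idx].ravel()
    return out[:, :n]  # trim the last partial block

series = np.arange(20.0)                      # stand-in for a productivity series
resamples = block_bootstrap(series, block_len=4, n_boot=50)
```

Prediction intervals would then be read off the quantiles of the statistic recomputed on each resample.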
By:  Caporin, M. (Erasmus Econometric Institute); McAleer, M.J. (Erasmus Econometric Institute) 
Abstract:  In the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. The two most widely known and used are the Scalar BEKK model of Engle and Kroner (1995) and Ding and Engle (2001), and the DCC model of Engle (2002). Some recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. In this paper, we provide an empirical comparison of a set of MGARCH models, namely BEKK, DCC, Corrected DCC (cDCC) of Aielli (2008), CCC of Bollerslev (1990), Exponentially Weighted Moving Average, and covariance shrinking of Ledoit and Wolf (2004), using the historical data of 89 US equities. Our methods follow parts of the approach described in Patton and Sheppard (2009), and contribute to the literature in several directions. First, we consider a wide range of models, including the recent cDCC model and covariance shrinking. Second, we use a range of tests and approaches for direct and indirect model comparison, including the Weighted Likelihood Ratio test of Amisano and Giacomini (2007). Third, we examine how the model rankings are influenced by the cross-sectional dimension of the problem. 
Keywords:  covariance forecasting; model confidence set; model ranking; MGARCH; model comparison 
Date:  2010–05–11 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureir:1765019447&r=for 
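Among the compared forecasters, the Exponentially Weighted Moving Average is the simplest; a sketch with the conventional RiskMetrics decay of 0.94 (the initialization and function name are my choices, not necessarily the paper's):

```python
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """EWMA covariance recursion S_t = lam * S_{t-1} + (1 - lam) * r_t r_t',
    returning the covariance forecast for the period after the sample."""
    R = np.asarray(returns, dtype=float)
    S = np.cov(R, rowvar=False)               # initialize with the sample covariance
    for r in R:
        S = lam * S + (1.0 - lam) * np.outer(r, r)
    return S

rng = np.random.default_rng(0)
R = rng.normal(size=(100, 3))                 # stand-in for daily equity returns
S = ewma_covariance(R)
```

Unlike the GARCH-family models in the comparison, EWMA has no estimated parameters, which is part of why it serves as a natural benchmark.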
By:  Periklis Gogas; Ioannis Pragidis 
Abstract:  Several studies have established the predictive power of the yield curve in terms of real economic activity. In this paper we use data for a variety of E.U. countries: both EMU members (Germany, France, Italy) and non-EMU members (Sweden and the U.K.). The data range from 1991:Q1 to 2009:Q1. For each country, we extract the long-run trend and the cyclical component of real economic activity, while the corresponding interbank interest rates of long and short term maturities are used for the calculation of the country-specific yield spreads. We also augment the models tested with non-monetary-policy variables: the countries' unemployment rates and stock indices. To forecast real output, we employ a probit model based on the inverse cumulative distribution function of the standard normal distribution, together with several formal forecasting and goodness-of-fit evaluation tests. The results show that the yield curve augmented with the non-monetary variables has significant forecasting power in terms of real economic activity, but the results differ qualitatively between the individual economies examined, raising non-trivial policy implications. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1005.1326&r=for 
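A probit model of the kind described links the downturn probability to the yield spread through the standard normal CDF; a minimal maximum-likelihood sketch on synthetic data (all names and the data-generating numbers are hypothetical, and the paper's specification includes further regressors):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_probit(X, y):
    """Maximum-likelihood probit: P(y = 1 | x) = Phi(x'beta), intercept added."""
    X = np.column_stack([np.ones(len(X)), X])
    def negll(beta):
        p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return minimize(negll, np.zeros(X.shape[1]), method="BFGS").x

# synthetic illustration: a downturn indicator driven negatively by the spread,
# generated from a true probit with coefficients (0.5, -1.0)
rng = np.random.default_rng(0)
spread = rng.normal(size=500)
y = (0.5 - 1.0 * spread + rng.normal(size=500) > 0).astype(float)
beta = fit_probit(spread[:, None], y)
```

The fitted Phi(beta0 + beta1 * spread) then gives the forecast probability of the below-trend regime, which the formal evaluation tests in the paper would assess.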
By:  Hourcade, JeanCharles; Nadaud, Franck 
Abstract:  This paper sheds light on an implicit dimension of the climate policy debate: the extent to which supply-side response (emission-reducing energy technologies) may substitute for the transformation of consumption behavior and thus help get around the political difficulties surrounding such behavioral transformation. The paper performs a meta-review of long-term energy forecasts since the end of the 1960s in order to put in perspective the controversies around technological optimism about the potential for cheap, large-scale, carbon-free energy production. This retrospective analysis encompasses 116 scenarios conducted over 36 years and analyzes their predictions for a) fossil fuels, b) nuclear energy, and c) renewable energy. The analysis demonstrates how the predicted relative shares of these three types of energy have evolved since 1970, for two cases: a) predicted shares in 2010, which shows how the initial outlooks for the 2000-2010 period have been revised as a function of observed trends; and b) predicted shares for t+30, which shows how these revisions have affected medium-term prospects. The analysis shows a decrease, since 1970, in technological optimism about switching away from fossil fuels; this decrease is unsurprisingly correlated with a decline in modelers' beliefs in the suitability of nuclear energy. But after a trend of increasing optimism, a declining trend also characterizes renewable energies in the 1980s and 1990s, before a slight revival of technological optimism about renewables in the aftermath of Kyoto. 
Keywords:  Energy Production and Transportation, Energy and Environment, Environment and Energy Efficiency, Energy Demand, Climate Change Mitigation and Greenhouse Gases 
Date:  2010–05–01 
URL:  http://d.repec.org/n?u=RePEc:wbk:wbrwps:5298&r=for 
By:  Gleb Oshanin; Gregory Schehr 
Abstract:  Suppose one buys two very similar stocks and is curious about how much, after some time T, one of them will contribute to the overall asset, expecting, of course, that it should be around 1/2 of the sum. Here we examine this question within the classical Black and Scholes (BS) model, focusing on the evolution of the probability density function P(w) of a random variable w = a_T^{(1)}/(a_T^{(1)} + a_T^{(2)}), where a_T^{(1)} and a_T^{(2)} are the values of two (either European- or Asian-style) options produced by two absolutely identical BS stochastic equations. We show that within the realm of the BS model the behavior of P(w) is surprisingly different from common-sense-based expectations. For the European-style options, P(w) always undergoes a transition (when T approaches a certain threshold value) from a unimodal to a bimodal form with the most probable values being close to 0 and 1, and, strikingly, w = 1/2 being the least probable value. This signifies that the symmetry between the two options spontaneously breaks and just one of them completely dominates the sum. For path-dependent Asian-style options we observe the same anomalous behavior, but only for a certain range of parameters. Outside of this range, P(w) is always a bell-shaped function with a maximum at w = 1/2. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1005.1760&r=for 
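The flavor of the result can be reproduced by simulating w for two i.i.d. geometric Brownian motions, a stand-in for the two option values (drift is set to zero for simplicity; the function name and parameter values are mine): for small sigma^2 T the mass of w concentrates near 1/2, while for large sigma^2 T it piles up near 0 and 1.

```python
import numpy as np

def ratio_distribution(sigma, T, n_paths=100000, seed=0):
    """Simulate w = A1/(A1 + A2) for two i.i.d. zero-drift geometric
    Brownian motions with the same volatility sigma at horizon T."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(2, n_paths))
    A = np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)  # terminal GBM values
    return A[0] / (A[0] + A[1])

w_small = ratio_distribution(sigma=0.1, T=1.0)   # low sigma^2 T: mass near 1/2
w_large = ratio_distribution(sigma=1.0, T=10.0)  # high sigma^2 T: mass near 0 and 1
```

Since w = 1/(1 + exp(sigma * sqrt(2T) * Z)) for a standard normal Z, the bimodality for large sigma^2 T follows directly from stretching a normal through the logistic map.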
By:  Harin, Alexander 
Abstract:  A theorem establishing the existence of ruptures in the probability scale is proved. The theorem can be used, e.g., in economics and forecasting. It can help resolve paradoxes such as the Allais paradox and the “fourfold pattern” paradox, and can be used to construct a correction formula for forecasting. 
Keywords:  probability; economics; forecasting; modeling; modelling; utility; decisions; uncertainty 
JEL:  D81 C5 E17 C1 
Date:  2010–05–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:22633&r=for 
By:  Ana P. Palacios; Juan Miguel Marín; Michael P. Wiper 
Abstract:  Bacterial growth models are commonly used in food safety. Such models permit the prediction of microbial safety and the shelf life of perishable foods. In this paper, we study the problem of modelling bacterial growth when we observe multiple experimental results under identical environmental conditions. We develop a hierarchical version of the Gompertz equation to take into account the possibility of replicated experiments and we show how it can be fitted using a fully Bayesian approach. This approach is illustrated using experimental data from Listeria monocytogenes growth and the results are compared with alternative models. Model selection is undertaken throughout using an appropriate version of the deviance information criterion and the posterior predictive loss criterion. Models are fitted using WinBUGS via R2WinBUGS. 
Keywords:  Predictive microbiology, Growth models, Gompertz curve, Bayesian hierarchical modelling 
Date:  2010–04 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws102109&r=for 
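The deterministic backbone of the model is the Gompertz curve; a sketch in one common three-parameter form (the paper's hierarchical Bayesian parameterization may differ, and the parameter values below are illustrative):

```python
import numpy as np

def gompertz(t, A, b, c):
    """Three-parameter Gompertz growth curve: the response (e.g. log bacterial
    count) rises sigmoidally toward the asymptote A, with b shifting and
    c scaling the lag and growth phases."""
    return A * np.exp(-np.exp(b - c * t))

# illustrative growth over 48 hours toward an asymptote of 9 log-units
t = np.linspace(0.0, 48.0, 5)
y = gompertz(t, A=9.0, b=2.0, c=0.15)
```

In the hierarchical version described, each replicated experiment gets its own curve parameters drawn from shared population-level distributions, which is what lets the replicates inform one another.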