
on Forecasting 
By:  Muhammad Akram; Rob J. Hyndman; J. Keith Ord 
Abstract:  We consider the properties of nonlinear exponential smoothing state space models under various assumptions about the innovations, or error, process. Our interest is restricted to those models that are used to describe nonnegative observations, because many series of practical interest are so constrained. We first demonstrate that when the innovations process is assumed to be Gaussian, the resulting prediction distribution may have an infinite variance beyond a certain forecasting horizon. Further, such processes may converge almost surely to zero; an examination of purely multiplicative models reveals the circumstances under which this condition arises. We then explore the effects of using an (invalid) Gaussian distribution to describe the innovations process when the underlying distribution is lognormal. Our results suggest that this approximation causes no serious problems for parameter estimation or for forecasting one or two steps ahead. However, for longer-term forecasts the true prediction intervals become increasingly skewed, whereas those based on the Gaussian approximation may have a progressively larger negative component. In addition, the Gaussian approximation is clearly inappropriate for simulation purposes. The performance of the Gaussian approximation is compared with those of two lognormal models for short-term forecasting, using data on the weekly sales of over three hundred items of costume jewelry. 
Keywords:  Forecasting; time series; exponential smoothing; positive-valued processes; seasonality; state space models. 
JEL:  C53 C22 C51 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:200714&r=for 
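The purely multiplicative models discussed in the abstract above can be illustrated with a short simulation. The following is a hedged sketch of the simplest such recursion, ETS(M,N,N), not code from the paper; the smoothing parameter, starting level and innovation scales are illustrative only. It contrasts Gaussian innovations (which can produce negative simulated values) with shifted-lognormal innovations (which cannot).

```python
import random

def simulate_mnn(alpha, level, steps, innov, seed=0):
    """Simulate the purely multiplicative ETS(M,N,N) recursion:
    y_t = l_{t-1} * (1 + e_t),   l_t = l_{t-1} * (1 + alpha * e_t)."""
    rng = random.Random(seed)
    l = level
    path = []
    for _ in range(steps):
        e = innov(rng)           # draw one innovation e_t
        path.append(l * (1 + e)) # observation
        l *= 1 + alpha * e       # level update
    return path

# Gaussian innovations admit e_t < -1, so a nominally nonnegative series
# can be simulated into negative territory -- the point made in the abstract
# about the Gaussian approximation being inappropriate for simulation.
gaussian_path = simulate_mnn(0.3, 100.0, 200, lambda r: r.gauss(0.0, 0.5))

# Shifted-lognormal innovations keep 1 + e_t > 0, so every simulated
# value stays strictly positive, as nonnegative data require.
lognormal_path = simulate_mnn(0.3, 100.0, 200,
                              lambda r: r.lognormvariate(0.0, 0.25) - 1.0)
```

With the lognormal shift, the multiplier on the level is always positive, so the whole simulated path remains positive at every horizon.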
By:  Sancetta, A. 
Abstract:  Given the sequential-update nature of Bayes' rule, Bayesian methods find natural application to prediction problems. Advances in computational methods allow Bayesian methods to be used routinely in econometrics. Hence, there is a strong case for feasible predictions in a Bayesian framework. This paper studies the theoretical properties of Bayesian predictions and shows that under minimal conditions we can derive finite-sample bounds for the loss incurred using Bayesian predictions under the Kullback-Leibler divergence. In particular, the concept of universality of predictions is discussed, and universality is established for Bayesian predictions in a variety of settings. These include predictions under almost arbitrary loss functions, model averaging, predictions in a non-stationary environment and under model misspecification. Given the possibility of regime switches and multiple breaks in economic series, as well as the need to choose among different forecasting models, which may inevitably be misspecified, the finite-sample results derived here are of interest to economic and financial forecasting. Key words: Bayesian prediction, model averaging, universal prediction. 
JEL:  C11 C44 C53 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:0755&r=for 
By:  Sellin, Peter (Monetary Policy Department, Central Bank of Sweden) 
Abstract:  In this paper we undertake an outofsample evaluation of the ability of a model to forecast the Swedish Krona’s real and nominal effective exchange rate, using a cointegrating relation between the real exchange rate, relative output, terms of trade and net foreign assets (or alternatively the trade balance). The cointegrating relation is derived from a theoretical model of the New Open Economy Macroeconomics type. The forecasting performance of our estimated vector error correction model is quite good once the dynamics of the model have been augmented with an interest rate differential. 
Keywords:  New Open Economy Macroeconomics; real exchange rate; nominal exchange rate; forecasting 
JEL:  C52 C53 F31 
Date:  2007–10–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0213&r=for 
By:  D'Agostino, Antonello; Giannone, Domenico 
Abstract:  This paper compares the predictive ability of the factor models of Stock and Watson (2002) and Forni, Hallin, Lippi, and Reichlin (2005) using a large panel of US macroeconomic variables. We propose a nesting procedure of comparison that clarifies and partially overturns the results of similar exercises in the literature. Our main conclusion is that for the dataset at hand the two methods have a similar performance and produce highly collinear forecasts. 
Keywords:  Factor Models; Forecasting; Large Cross-Section 
JEL:  C31 C52 C53 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:6564&r=for 
By:  Samrat Goswami (Department of Economics, University of Pretoria); Rangan Gupta (Department of Economics, University of Pretoria); Eric Schaling (Department of Economics, University of Pretoria) 
Abstract:  This paper develops an estimable hybrid model that combines the theoretical rigor of a micro-founded DSGE model with the flexibility of an atheoretical VAR model. The model is estimated via maximum likelihood, based on quarterly data on real Gross National Product (GNP), consumption, investment and hours worked for the South African economy over the period 1970:1 to 2000:4. Based on a recursive estimation using the Kalman filter algorithm, the out-of-sample forecasts from the hybrid model are then compared with the forecasts generated from the Classical and Bayesian variants of the VAR for the period 2001:1–2005:4. The results indicate that, in general, the estimated hybrid DSGE model outperforms the Classical VAR, but not the Bayesian VARs, in terms of out-of-sample forecasting performance. 
Keywords:  DSGE Model, VAR and BVAR Model, New-Keynesian Macroeconomic Model, Forecast Accuracy, DSGE Forecasts, VAR Forecasts, BVAR Forecasts. 
JEL:  E17 E27 E32 E37 E47 
Date:  2007–07 
URL:  http://d.repec.org/n?u=RePEc:pre:wpaper:200724&r=for 
By:  Francesco Audrino; Dominik Colangelo 
Abstract:  We propose a new semiparametric model for the implied volatility surface that incorporates machine learning algorithms. Given a starting model, a tree-boosting algorithm sequentially minimizes the residuals between observed and estimated implied volatilities. To overcome the poor predictive power of existing models, we include a grid in the region of interest and implement a cross-validation strategy to find an optimal stopping value for the tree boosting. Backtesting the out-of-sample appropriateness of our model on a large data set of implied volatilities on S&P 500 options, we provide empirical evidence of its strong predictive potential, and we compare it with other standard approaches in the literature. 
Keywords:  Implied Volatility, Implied Volatility Surface, Forecasting, Tree Boosting, Regression Tree, Functional Gradient Descent 
JEL:  C13 C14 C51 C53 C63 G12 G13 
Date:  2007–11 
URL:  http://d.repec.org/n?u=RePEc:usg:dp2007:200742&r=for 
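The abstract above describes boosting as sequentially minimizing residuals between observed and fitted values. The following is a minimal sketch of that generic idea (functional gradient descent with shrunken regression stumps on a single feature), with invented toy data; the paper's actual surface model, grid and cross-validated stopping rule are not reproduced here.

```python
def fit_stump(x, r):
    """Fit a one-split regression stump to residuals r over feature x,
    choosing the split that minimizes the sum of squared errors."""
    best = None
    for split in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= split]
        right = [ri for xi, ri in zip(x, r) if xi > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - lm) ** 2 for ri in left) + \
              sum((ri - rm) ** 2 for ri in right)
        if best is None or sse < best[0]:
            best = (sse, split, lm, rm)
    _, s, lm, rm = best
    return lambda xi: lm if xi <= s else rm

def boost(x, y, base, rounds=60, nu=0.1):
    """Starting from a base model, repeatedly fit a stump to the current
    residuals and add it with shrinkage nu (the tree-boosting step)."""
    learners = []
    pred = [base(xi) for xi in x]
    for _ in range(rounds):
        r = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, r)
        learners.append(stump)
        pred = [pi + nu * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base(xi) + nu * sum(f(xi) for f in learners)
```

Each round shrinks the residuals of the previous fit, which is why an externally chosen stopping value (here the fixed `rounds`, in the paper a cross-validated optimum) is needed to avoid overfitting.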
By:  Villani, Mattias (Research Department, Central Bank of Sweden); Kohn, Robert (Faculty of Business, University of New South Wales); Giordani, Paolo (Research Department, Central Bank of Sweden) 
Abstract:  We model a regression density nonparametrically so that at each value of the covariates the density is a mixture of normals, with the means, variances and mixture probabilities of the components changing smoothly as functions of the covariates. The model extends existing models in two important ways. First, the components are allowed to be heteroscedastic regressions, as the standard model with homoscedastic regressions can give a poor fit to heteroscedastic data, especially when the number of covariates is large. Furthermore, we typically need far fewer heteroscedastic components, which makes the model easier to interpret and speeds up the computation. The second main extension is to introduce a novel variable selection prior into all the components of the model. The variable selection prior acts as a self-adjusting mechanism that prevents overfitting and makes it feasible to fit high-dimensional nonparametric surfaces. We use Bayesian inference and Markov Chain Monte Carlo methods to estimate the model. Simulated and real examples are used to show that the full generality of our model is required to fit a large class of densities. 
Keywords:  Bayesian inference; Markov Chain Monte Carlo; Mixture of Experts; Predictive inference; Splines; Value-at-Risk; Variable selection 
JEL:  E50 
Date:  2007–09–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0211&r=for 
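The core object in the abstract above is a conditional density that is a mixture of normals whose means, variances and weights all move smoothly with the covariates. The sketch below evaluates such a density for a single covariate, with a simple linear parameterization and softmax gating chosen here for illustration; it is not the authors' specification, and the Bayesian MCMC estimation is omitted entirely.

```python
import math

def normal_pdf(y, mu, sigma):
    """Gaussian density N(mu, sigma^2) evaluated at y."""
    z = (y - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def softmax(scores):
    """Map real-valued gating scores to mixture probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_density(y, x, components):
    """Conditional density p(y | x). Each component carries parameters
    (b0, b1, g0, g1, w0, w1), giving mean b0 + b1*x, standard deviation
    exp(g0 + g1*x) and gating score w0 + w1*x -- so means, variances and
    mixture weights all change smoothly as x varies."""
    probs = softmax([w0 + w1 * x for (_, _, _, _, w0, w1) in components])
    return sum(p * normal_pdf(y, b0 + b1 * x, math.exp(g0 + g1 * x))
               for p, (b0, b1, g0, g1, _, _) in zip(probs, components))
```

Because each component's standard deviation depends on x, the components are heteroscedastic regressions in the sense of the abstract; setting g1 = 0 in every component recovers the standard homoscedastic case.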