nep-for New Economics Papers
on Forecasting
Issue of 2007–11–24
seven papers chosen by
Rob J Hyndman
Monash University

  1. Non-linear exponential smoothing and positive data By Muhammad Akram; Rob J. Hyndman; J. Keith Ord
  2. Universality of Bayesian Predictions By Sancetta, A.
  3. Using a New Open Economy Macroeconomics model to make real and nominal exchange rate forecasts By Sellin, Peter
  4. Comparing Alternative Predictors Based on Large-Panel Factor Models By D'Agostino, Antonello; Giannone, Domenico
  5. Forecasting the South African Economy: A DSGE-VAR Approach By Samrat Goswami; Rangan Gupta; Eric Schaling
  6. Forecasting Implied Volatility Surfaces By Francesco Audrino; Dominik Colangelo
  7. Nonparametric Regression Density Estimation Using Smoothly Varying Normal Mixtures By Villani, Mattias; Kohn, Robert; Giordani, Paolo

  1. By: Muhammad Akram; Rob J. Hyndman; J. Keith Ord
    Abstract: We consider the properties of nonlinear exponential smoothing state space models under various assumptions about the innovations, or error, process. Our interest is restricted to those models that are used to describe non-negative observations, because many series of practical interest are so constrained. We first demonstrate that when the innovations process is assumed to be Gaussian, the resulting prediction distribution may have an infinite variance beyond a certain forecasting horizon. Further, such processes may converge almost surely to zero; an examination of purely multiplicative models reveals the circumstances under which this condition arises. We then explore the effects of using an (invalid) Gaussian distribution to describe the innovations process when the underlying distribution is lognormal. Our results suggest that this approximation causes no serious problems for parameter estimation or for forecasting one or two steps ahead. However, for longer-term forecasts the true prediction intervals become increasingly skewed, whereas those based on the Gaussian approximation may have a progressively larger negative component. In addition, the Gaussian approximation is clearly inappropriate for simulation purposes. The performance of the Gaussian approximation is compared with those of two lognormal models for short-term forecasting using data on the weekly sales of over three hundred items of costume jewelry.
    Keywords: Forecasting; time series; exponential smoothing; positive-valued processes; seasonality; state space models.
    JEL: C53 C22 C51
    Date: 2007–11
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-14&r=for
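    The model class discussed in this entry can be illustrated with a short simulation. The Python sketch below (not the authors' code) simulates a purely multiplicative local-level model, ETS(M,N,N), defined by y_t = l_{t-1}(1 + e_t) and l_t = l_{t-1}(1 + alpha * e_t), once with Gaussian innovations and once with a lognormal specification in which 1 + e_t has mean one; the parameter values are illustrative assumptions.

      import numpy as np

      def simulate_ets_mnn(l0=100.0, alpha=0.1, sigma=0.05, horizon=52,
                           n_paths=10000, innovations="lognormal", seed=0):
          """Simulate sample paths from a purely multiplicative local-level model."""
          rng = np.random.default_rng(seed)
          paths = np.empty((n_paths, horizon))
          level = np.full(n_paths, l0)
          for t in range(horizon):
              if innovations == "lognormal":
                  # 1 + e_t is lognormal with E[1 + e_t] = 1, so simulated paths stay positive
                  e = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=n_paths) - 1.0
              else:
                  # Gaussian approximation: 1 + e_t can go negative
                  e = rng.normal(0.0, sigma, size=n_paths)
              paths[:, t] = level * (1.0 + e)
              level = level * (1.0 + alpha * e)
          return paths

      for dist in ("lognormal", "gaussian"):
          y = simulate_ets_mnn(innovations=dist)
          lo, hi = np.percentile(y[:, -1], [2.5, 97.5])
          print(f"{dist:9s}: 52-step 95% interval [{lo:6.1f}, {hi:6.1f}], "
                f"share of negative simulated values {np.mean(y < 0):.4f}")

    Comparing the simulated long-horizon intervals makes the skewness of the true prediction distribution, and the negative values produced by the Gaussian approximation, easy to see.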
  2. By: Sancetta, A.
    Abstract: Given the sequential-update nature of Bayes' rule, Bayesian methods find natural application to prediction problems, and advances in computational methods now allow them to be used routinely in econometrics. Hence, there is a strong case for feasible predictions in a Bayesian framework. This paper studies the theoretical properties of Bayesian predictions and shows that, under minimal conditions, we can derive finite-sample bounds for the loss incurred using Bayesian predictions under the Kullback-Leibler divergence. In particular, the concept of universality of predictions is discussed, and universality is established for Bayesian predictions in a variety of settings. These include predictions under almost arbitrary loss functions, model averaging, predictions in a non-stationary environment, and predictions under model misspecification. Given the possibility of regime switches and multiple breaks in economic series, as well as the need to choose among different forecasting models, which may inevitably be misspecified, the finite-sample results derived here are of interest to economic and financial forecasting.
    Keywords: Bayesian prediction; model averaging; universal prediction
    JEL: C11 C44 C53
    Date: 2007–11
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:0755&r=for
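    A concrete, if stylized, instance of Bayesian prediction with model averaging is sketched below in Python: posterior model weights are proportional to each candidate's cumulative one-step-ahead predictive likelihood (with equal priors), and the forecast is the weight-averaged point prediction. The two candidate models and the simulated series are illustrative assumptions, not the paper's setting.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      y = np.cumsum(rng.normal(0.1, 1.0, size=200))        # simulated series with drift

      # Candidate one-step-ahead predictive densities (mean, std), deliberately simple.
      def predict_rw(history):                              # random walk
          return history[-1], 1.0

      def predict_drift(history):                           # random walk with drift
          return history[-1] + np.mean(np.diff(history)), 1.0

      models = {"rw": predict_rw, "drift": predict_drift}
      log_score = {k: 0.0 for k in models}                  # cumulative log predictive likelihood
      sq_err = 0.0

      for t in range(2, len(y)):
          history = y[:t]
          # Posterior model weights given equal priors, computed in log space
          ls = np.array([log_score[k] for k in models])
          w = np.exp(ls - ls.max())
          w /= w.sum()
          # Bayesian (model-averaged) point prediction for y[t]
          point = np.array([models[k](history)[0] for k in models])
          sq_err += (y[t] - w @ point) ** 2
          # Update each model's cumulative log score with the realized value
          for k, f in models.items():
              m, s = f(history)
              log_score[k] += norm.logpdf(y[t], loc=m, scale=s)

      print("RMSE of averaged forecast:", np.sqrt(sq_err / (len(y) - 2)))
      print("final log scores:", {k: round(v, 1) for k, v in log_score.items()})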
  3. By: Sellin, Peter (Monetary Policy Department, Central Bank of Sweden)
    Abstract: In this paper we undertake an out-of-sample evaluation of the ability of a model to forecast the Swedish Krona’s real and nominal effective exchange rate, using a cointegrating relation between the real exchange rate, relative output, terms of trade and net foreign assets (or alternatively the trade balance). The cointegrating relation is derived from a theoretical model of the New Open Economy Macroeconomics type. The forecasting performance of our estimated vector error correction model is quite good once the dynamics of the model have been augmented with an interest rate differential.
    Keywords: New Open Economy Macroeconomics; real exchange rate; nominal exchange rate; forecasting
    JEL: C52 C53 F31
    Date: 2007–10–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0213&r=for
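    For readers who want to reproduce this kind of exercise, the sketch below shows how a vector error correction model with one cointegrating relation can be estimated and used for forecasting with the VECM class in statsmodels. The file name, variable names and lag order are placeholders; this is a generic VECM sketch, not the author's exact specification.

      import pandas as pd
      from statsmodels.tsa.vector_ar.vecm import VECM

      # Placeholder quarterly data: real exchange rate, relative output,
      # terms of trade, net foreign assets.
      data = pd.read_csv("sek_fundamentals.csv", index_col=0, parse_dates=True)
      endog = data[["reer", "rel_output", "tot", "nfa"]]

      # One cointegrating relation, one lagged difference for the short-run dynamics,
      # constant restricted to the cointegrating relation.
      model = VECM(endog, k_ar_diff=1, coint_rank=1, deterministic="ci")
      res = model.fit()

      print(res.beta)                  # estimated cointegrating vector
      print(res.predict(steps=8))      # eight-quarter-ahead forecasts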
  4. By: D'Agostino, Antonello; Giannone, Domenico
    Abstract: This paper compares the predictive ability of the factor models of Stock and Watson (2002) and Forni, Hallin, Lippi, and Reichlin (2005) using a large panel of US macroeconomic variables. We propose a nesting procedure of comparison that clarifies and partially overturns the results of similar exercises in the literature. Our main conclusion is that, for the dataset at hand, the two methods perform similarly and produce highly collinear forecasts.
    Keywords: Factor Models; Forecasting; Large Cross-Section
    JEL: C31 C52 C53
    Date: 2007–11
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:6564&r=for
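    The Stock-Watson style "diffusion index" forecast underlying this kind of comparison can be sketched in a few lines of numpy: extract r static factors from a standardized panel by principal components and regress the h-step-ahead target on them. The simulated panel and the choices of r and h below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      T, N, r, h = 200, 100, 3, 1
      F = rng.normal(size=(T, r))                            # latent factors
      Lam = rng.normal(size=(N, r))                          # loadings
      X = F @ Lam.T + rng.normal(scale=0.5, size=(T, N))     # large panel of predictors
      y = F @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.3, size=T)  # target series

      # 1. Standardize the panel and estimate factors by principal components.
      Z = (X - X.mean(0)) / X.std(0)
      U, s, Vt = np.linalg.svd(Z, full_matrices=False)
      F_hat = U[:, :r] * s[:r]                               # estimated factors (up to rotation)

      # 2. Forecasting regression: y_{t+h} on a constant and the current factors.
      Xreg = np.column_stack([np.ones(T - h), F_hat[:-h]])
      beta, *_ = np.linalg.lstsq(Xreg, y[h:], rcond=None)

      # 3. h-step-ahead forecast from the most recent factor estimates.
      y_fcst = np.concatenate(([1.0], F_hat[-1])) @ beta
      print(f"{h}-step-ahead forecast: {y_fcst:.3f}")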
  5. By: Samrat Goswami (Department of Economics, University of Pretoria); Rangan Gupta (Department of Economics, University of Pretoria); Eric Schaling (Department of Economics, University of Pretoria)
    Abstract: This paper develops an estimable hybrid model that combines the theoretical rigor of a micro-founded DSGE model with the flexibility of an atheoretical VAR model. The model is estimated by maximum likelihood using quarterly data on real Gross National Product (GNP), consumption, investment and hours worked for the South African economy over the period 1970:1 to 2000:4. Based on a recursive estimation using the Kalman filter algorithm, the out-of-sample forecasts from the hybrid model are then compared with the forecasts generated from the Classical and Bayesian variants of the VAR for the period 2001:1-2005:4. The results indicate that, in general, the estimated hybrid DSGE model outperforms the Classical VAR, but not the Bayesian VARs, in terms of out-of-sample forecasting performance.
    Keywords: DSGE Model, VAR and BVAR Model, New-Keynesian-Macroeconomic Model, Forecast Accuracy, DSGE Forecasts, VAR Forecasts, BVAR Forecasts.
    JEL: E17 E27 E32 E37 E47
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:200724&r=for
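    The DSGE-VAR itself is too involved for a short example, but the recursive out-of-sample comparison described in the abstract follows a standard expanding-window design, sketched below with a plain VAR from statsmodels as a stand-in model and RMSE as the accuracy measure. The file and column names are placeholders.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.api import VAR

      data = pd.read_csv("za_quarterly.csv", index_col=0, parse_dates=True)
      endog = data[["gnp", "consumption", "investment", "hours"]]

      start = endog.index.get_loc("2001-03-31")     # first out-of-sample quarter
      h = 1                                          # one-quarter-ahead forecasts
      errors = []

      for t in range(start, len(endog) - h + 1):
          train = endog.iloc[:t]                     # expanding estimation window
          res = VAR(train).fit(maxlags=2)
          fcst = res.forecast(train.values[-res.k_ar:], steps=h)[-1]
          errors.append(endog.iloc[t + h - 1].values - fcst)

      rmse = np.sqrt(np.mean(np.square(errors), axis=0))
      print(pd.Series(rmse, index=endog.columns, name="RMSE"))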
  6. By: Francesco Audrino; Dominik Colangelo
    Abstract: We propose a new semi-parametric model for the implied volatility surface that incorporates machine learning algorithms. Given a starting model, a tree-boosting algorithm sequentially minimizes the residuals between observed and estimated implied volatilities. To overcome the poor predictive power of existing models, we include a grid in the region of interest and implement a cross-validation strategy to find an optimal stopping value for the tree boosting. Back-testing the out-of-sample appropriateness of our model on a large data set of implied volatilities on S&P 500 options, we provide empirical evidence of its strong predictive potential and compare it with other standard approaches in the literature.
    Keywords: Implied Volatility, Implied Volatility Surface, Forecasting, Tree Boosting, Regression Tree, Functional Gradient Descent
    JEL: C13 C14 C51 C53 C63 G12 G13
    Date: 2007–11
    URL: http://d.repec.org/n?u=RePEc:usg:dp2007:2007-42&r=for
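    The functional-gradient-descent idea behind tree boosting can be illustrated as follows: starting from a baseline fit, regression trees are added sequentially to the current residuals, and a held-out set of (moneyness, maturity) points determines the stopping iteration. The synthetic surface, the baseline and the tuning values below are illustrative assumptions, not the authors' specification.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(3)
      X = rng.uniform([0.8, 0.05], [1.2, 2.0], size=(2000, 2))   # (moneyness, maturity)
      iv = (0.2 + 0.3 * (X[:, 0] - 1.0) ** 2 + 0.05 * np.sqrt(X[:, 1])
            + rng.normal(scale=0.01, size=len(X)))               # synthetic implied vol surface

      train, valid = slice(0, 1500), slice(1500, None)
      baseline = iv[train].mean()                                 # crude starting model
      fit_tr = np.full(1500, baseline)
      fit_va = np.full(500, baseline)

      trees, nu, best = [], 0.1, (np.inf, 0)                      # keep trees to apply the fit later
      for m in range(1, 201):
          tree = DecisionTreeRegressor(max_depth=3)
          tree.fit(X[train], iv[train] - fit_tr)                  # fit tree to current residuals
          fit_tr += nu * tree.predict(X[train])
          fit_va += nu * tree.predict(X[valid])
          trees.append(tree)
          mse = np.mean((iv[valid] - fit_va) ** 2)                # held-out (validation) error
          if mse < best[0]:
              best = (mse, m)

      print(f"optimal stopping at iteration {best[1]}, validation MSE {best[0]:.6f}")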
  7. By: Villani, Mattias (Research Department, Central Bank of Sweden); Kohn, Robert (Faculty of Business, University of New South Wales); Giordani, Paolo (Research Department, Central Bank of Sweden)
    Abstract: We model a regression density nonparametrically so that at each value of the covariates the density is a mixture of normals with the means, variances and mixture probabilities of the components changing smoothly as a function of the covariates. The model extends existing models in two important ways. First, the components are allowed to be heteroscedastic regressions, as the standard model with homoscedastic regressions can give a poor fit to heteroscedastic data, especially when the number of covariates is large. Furthermore, we typically need far fewer heteroscedastic components, which makes the model easier to interpret and speeds up the computation. The second main extension is to introduce a novel variable selection prior into all the components of the model. The variable selection prior acts as a self-adjusting mechanism that prevents overfitting and makes it feasible to fit high-dimensional nonparametric surfaces. We use Bayesian inference and Markov Chain Monte Carlo methods to estimate the model. Simulated and real examples are used to show that the full generality of our model is required to fit a large class of densities.
    Keywords: Bayesian inference; Markov Chain Monte Carlo; Mixture of Experts; Predictive inference; Splines; Value-at-Risk; Variable selection
    JEL: E50
    Date: 2007–09–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0211&r=for
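    The density being modelled can be written down directly: at covariate value x, y follows a mixture of normals whose means, log standard deviations and mixing probabilities all change smoothly with x. The short numpy sketch below evaluates such a density for fixed, purely illustrative parameter values; the paper's contribution is the Bayesian (MCMC) estimation of these smoothly varying functions with variable-selection priors, which is not attempted here.

      import numpy as np

      def smooth_mixture_density(y, x, betas_mean, betas_logsd, gammas):
          """Density of y given scalar covariate x for a K-component smooth mixture."""
          z = np.array([1.0, x])                               # covariate with intercept
          means = betas_mean @ z                                # K component means
          sds = np.exp(betas_logsd @ z)                         # K heteroscedastic std devs
          logits = gammas @ z
          probs = np.exp(logits - logits.max())
          probs /= probs.sum()                                  # softmax mixing probabilities
          comps = np.exp(-0.5 * ((y - means) / sds) ** 2) / (np.sqrt(2 * np.pi) * sds)
          return float(probs @ comps)

      # Two components whose location, scale and weight all drift with x.
      betas_mean  = np.array([[0.0, 1.0], [2.0, -0.5]])
      betas_logsd = np.array([[-1.0, 0.2], [-0.5, 0.0]])
      gammas      = np.array([[0.0, 0.0], [-1.0, 2.0]])

      for x in (0.0, 1.0, 2.0):
          print(x, smooth_mixture_density(y=1.0, x=x, betas_mean=betas_mean,
                                          betas_logsd=betas_logsd, gammas=gammas))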

This nep-for issue is ©2007 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.