nep-for New Economics Papers
on Forecasting
Issue of 2012‒10‒06
ten papers chosen by
Rob J Hyndman
Monash University

  1. On Confidence Intervals for Autoregressive Roots and Predictive Regression By Peter C.B. Phillips
  2. A flexible semiparametric model for time series By Degui Li; Oliver Linton; Zudi Lu
  3. The macroeconomic forecasting performance of autoregressive models with alternative specifications of time-varying volatility By Todd E. Clark; Francesco Ravazzolo
  4. Forecasting Exchange Rates with Commodity Convenience Yields By Toni Beutler
  5. The yield spread puzzle and the information content of SPF forecasts By Kajal Lahiri; George Monokroussos; Yongchen Zhao
  6. Trimmed-mean inflation statistics: just hit the one in the middle By Brent Meyer; Guhan Venkatu
  7. Nonparametric Predictive Regression By Ioannis Kasparis; Elena Andreou; Peter C.B. Phillips
  8. Projecting the future cost of the French elderly disabled allowance using a microsimulation model By C. MARBOT; D. ROY
  9. The R Package MitISEM: Mixture of Student-t Distributions using Importance Sampling Weighted Expectation Maximization for Efficient and Robust Simulation By Nalan Basturk; Lennart Hoogerheide; Anne Opschoor; Herman K. van Dijk
  10. Optimal Predictions of Powers of Conditionally Heteroskedastic Processes By Christian Francq; Jean-Michel Zakoian

  1. By: Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: A prominent use of local to unity limit theory in applied work is the construction of confidence intervals for autoregressive roots through inversion of the ADF t statistic associated with a unit root test, as suggested in Stock (1991). Such confidence intervals are valid when the true model has an autoregressive root that is local to unity (rho = 1 + (c/n)) but are invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Consideration of the boundary case shows that these confidence intervals are invalid for stationary autoregression, where they manifest locational bias and width distortion. In particular, the coverage probability of these intervals tends to zero as c approaches -infinity, and the width of the intervals exceeds the width of intervals constructed in the usual way under stationarity. Some implications of these results for predictive regression tests are explored. It is shown that when the regressor has autoregressive coefficient |rho| < 1 and the sample size n approaches infinity, the Campbell and Yogo (2006) confidence intervals for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious implications for empirical practice.
    Keywords: Autoregressive root, Confidence belt, Confidence interval, Coverage probability, Local to unity, Localizing coefficient, Predictive regression, Tightness
    JEL: C22
    Date: 2012–09
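The abstract above contrasts the local-to-unity intervals with those "constructed in the usual way under stationarity". The sketch below is a minimal pure-Python illustration of that stationary benchmark only: it simulates a stationary AR(1) and checks the coverage of the standard OLS-based 95% interval. The values of rho, the sample size, and the replication count are illustrative choices of mine; the Stock (1991) inversion procedure itself is not implemented here.

```python
import math
import random

random.seed(1)

def simulate_ar1(rho, n):
    """Simulate a stationary AR(1) with standard normal innovations."""
    y = [random.gauss(0, 1) / math.sqrt(1 - rho ** 2)]  # stationary start
    for _ in range(n - 1):
        y.append(rho * y[-1] + random.gauss(0, 1))
    return y

def ols_ci(y):
    """OLS estimate of rho and the usual stationary-case 95% interval,
    based on sqrt(n) * (rho_hat - rho) -> N(0, 1 - rho^2)."""
    n = len(y) - 1
    num = sum(y[t] * y[t + 1] for t in range(n))
    den = sum(y[t] ** 2 for t in range(n))
    rho_hat = num / den
    se = math.sqrt(max(1 - rho_hat ** 2, 1e-12) / n)
    return rho_hat, (rho_hat - 1.96 * se, rho_hat + 1.96 * se)

rho, n, reps = 0.5, 200, 500  # illustrative choices
hits = 0
for _ in range(reps):
    _, (lo, hi) = ols_ci(simulate_ar1(rho, n))
    hits += lo <= rho <= hi
coverage = hits / reps
print(f"empirical coverage at rho={rho}: {coverage:.2f}")  # roughly 0.95
```

Under stationarity this interval covers at close to its nominal rate; the paper's point is that the local-to-unity intervals, by contrast, lose coverage entirely as c tends to -infinity.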
  2. By: Degui Li; Oliver Linton (Institute for Fiscal Studies and Cambridge University); Zudi Lu
    Abstract: We consider approximating a multivariate regression function by an affine combination of one-dimensional conditional component regression functions. The weight parameters involved in the approximation are estimated by least squares on the first-stage nonparametric kernel estimates. We establish asymptotic normality for the estimated weights and the regression function in two cases: the number of covariates is finite, and the number of covariates is diverging. As the observations are assumed to be stationary and near epoch dependent, the approach in this paper is applicable to estimation and forecasting issues in time series analysis. Furthermore, the methods and results are augmented by a simulation study and illustrated by an application to the Australian annual mean temperature anomaly series. We also apply our methods to high frequency volatility forecasting, where we obtain superior results to parametric methods.
    Keywords: Asymptotic normality, model averaging, Nadaraya-Watson kernel estimation, near epoch dependence, semiparametric method.
    JEL: C14 C22
    Date: 2012–09
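The two-stage structure described above can be sketched on simulated data: stage one computes a one-dimensional Nadaraya-Watson estimate for each covariate, and stage two fits the affine weights by least squares on those component estimates. The additive toy model, bandwidth, and sample size below are illustrative assumptions; this shows only the estimator's mechanics, not the paper's asymptotic analysis.

```python
import math
import random

random.seed(7)

def nw(xs, ys, x0, h):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def solve(a, b):
    """Gauss-Jordan elimination for a small linear system a x = b."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# toy additive model (hypothetical): y = sin(x1) + 0.5*x2 + noise
n = 300
x1 = [random.uniform(-2, 2) for _ in range(n)]
x2 = [random.uniform(-2, 2) for _ in range(n)]
y = [math.sin(a) + 0.5 * b + random.gauss(0, 0.2) for a, b in zip(x1, x2)]

# stage 1: one-dimensional conditional component regressions
h = 0.4
m1 = [nw(x1, y, v, h) for v in x1]
m2 = [nw(x2, y, v, h) for v in x2]

# stage 2: least-squares weights for the affine combination w0 + w1*m1 + w2*m2
X = [[1.0, a, b] for a, b in zip(m1, m2)]
xtx = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
w0, w1, w2 = solve(xtx, xty)
print(f"estimated weights: {w0:.2f}, {w1:.2f}, {w2:.2f}")
```

Because the toy model really is additive, the estimated component weights come out close to one, with a near-zero intercept.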
  3. By: Todd E. Clark; Francesco Ravazzolo
    Abstract: This paper compares alternative models of time-varying macroeconomic volatility on the basis of the accuracy of point and density forecasts of macroeconomic variables. In this analysis, we consider both Bayesian autoregressive and Bayesian vector autoregressive models that incorporate some form of time-varying volatility, namely stochastic volatility (with both constant and time-varying autoregressive coefficients), stochastic volatility following a stationary AR process, stochastic volatility coupled with fat tails, GARCH, and mixture-of-innovation models. The comparison is based on the accuracy of forecasts of key macroeconomic time series for real-time post-World War II data for both the United States and the United Kingdom. The results show that the AR and VAR specifications with widely used stochastic volatility dominate models with alternative volatility specifications, in terms of point forecasting to some degree and density forecasting to a greater degree.
    Keywords: Simulation modeling ; Economic forecasting ; Bayesian statistical decision theory
    Date: 2012
  4. By: Toni Beutler (Study Center Gerzensee and University of Lausanne)
    Abstract: This paper investigates whether commodity convenience yields - the yields that accrue to the holders of physical commodities - can predict the exchange rate of commodity-exporters' currencies. Predictability is a consequence of the fact that i) convenience yields are useful predictors for commodity prices and ii) commodity currencies have a strong relationship with commodity prices. The empirical evidence indicates that there is a significant relationship between aggregate measures of convenience yields and commodity currencies' exchange rates, both in-sample and out-of-sample. A high level of convenience yields strongly predicts a depreciation of the Australian, Canadian, and New Zealand dollar exchange rates at horizons of 1 to 24 months.
    Date: 2012–03
  5. By: Kajal Lahiri; George Monokroussos; Yongchen Zhao
    Abstract: While the yield spread has long been recognized as a good predictor of recessions, it seems to have been largely overlooked by professional forecasters. We examine this puzzle, established by Rudebusch and Williams (2009), in a data-rich environment including not just the yield spread but many other predictors as well. We confirm the puzzle in this context by examining the contributions of both the SPF forecasts and the yield spread in predicting recessions, and by examining the information content of SPF forecasts directly. Furthermore, we take the first step towards a possible resolution of this puzzle by recognizing the heterogeneity across professional forecasters.
    Date: 2012
  6. By: Brent Meyer; Guhan Venkatu
    Abstract: This paper reinvestigates the performance of trimmed-mean inflation measures some 20 years since their inception, asking whether there is a particular trimmed-mean measure that dominates the median CPI. Unlike previous research, we evaluate the performance of symmetric and asymmetric trimmed means using a well-known equality of prediction test. We find that there is a large swath of trimmed means that have statistically indistinguishable performance. Also, while the swath of statistically similar trims changes slightly over different sample periods, it always includes the median CPI—an extreme trim that holds conceptual and computational advantages. We conclude with a simple forecasting exercise that highlights the advantage of the median CPI relative to other standard inflation measures.
    Keywords: Inflation (Finance) ; Consumer price indexes
    Date: 2012
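The mechanics of a weighted trimmed mean, with the median CPI as the 50% limiting case ("the one in the middle"), can be sketched as follows. The component price changes and expenditure weights below are invented for illustration, not taken from CPI data.

```python
def trimmed_mean(changes, weights, trim):
    """Weighted trimmed mean: sort component price changes, drop `trim`
    percent of the total weight from each tail, and average the rest."""
    pairs = sorted(zip(changes, weights))
    total = sum(weights)
    lo, hi = trim / 100 * total, (1 - trim / 100) * total
    cum, kept_w, kept_s = 0.0, 0.0, 0.0
    for x, wgt in pairs:
        left, right = cum, cum + wgt
        cum = right
        # portion of this component's weight inside the retained band [lo, hi]
        inside = max(0.0, min(right, hi) - max(left, lo))
        kept_w += inside
        kept_s += inside * x
    if kept_w == 0.0:  # 50% trim from each tail: the weighted median
        cum = 0.0
        for x, wgt in pairs:
            cum += wgt
            if cum >= total / 2:
                return x
    return kept_s / kept_w

# hypothetical monthly price changes (%) and expenditure weights
changes = [-3.0, -0.5, 0.1, 0.2, 0.3, 0.4, 0.6, 8.0]
weights = [5, 10, 20, 15, 15, 20, 10, 5]

mean_all = trimmed_mean(changes, weights, 0)     # ordinary weighted mean
trim10 = trimmed_mean(changes, weights, 10)      # 10% trimmed from each tail
median_cpi = trimmed_mean(changes, weights, 50)  # median-CPI-style measure
print(mean_all, trim10, median_cpi)
```

Note how the two extreme components dominate the untrimmed mean, while trimming moves the measure toward the center of the cross-sectional distribution.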
  7. By: Ioannis Kasparis (Dept. of Economics, University of Cyprus); Elena Andreou (Dept. of Economics, University of Cyprus); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: A unifying framework for inference is developed in predictive regressions where the predictor has unknown integration properties and may be stationary or nonstationary. Two easily implemented nonparametric F-tests are proposed. The test statistics are related to those of Kasparis and Phillips (2012) and are obtained by kernel regression. The limit distribution of these predictive tests holds for a wide range of predictors including stationary as well as non-stationary fractional and near unit root processes. In this sense the proposed tests provide a unifying framework for predictive inference, allowing for possibly nonlinear relationships of unknown form, and offering robustness to integration order and functional form. Under the null of no predictability, the limit distributions of the tests involve functionals of independent chi^2 variates. The tests are consistent and divergence rates are faster when the predictor is stationary. Asymptotic theory and simulations show that the proposed tests are more powerful than existing parametric predictability tests when deviations from unity are large or the predictive regression is nonlinear. Some empirical illustrations to monthly S&P 500 stock returns data are provided.
    Keywords: Functional regression, Nonparametric predictability test, Nonparametric regression, Stock returns, Predictive regression
    JEL: C22 C32
    Date: 2012–09
  8. By: C. MARBOT (Insee); D. ROY (Insee)
    Abstract: Confronted with an ageing population, developed countries are facing the challenge of providing care to a growing number of disabled elderly people. Knowing how many they will be and, given the current pensions and welfare systems, how much it will cost to care for them is crucial to policymakers. The INSEE pensions microsimulation tool (called Destinie) was extended in 2011 to elderly disability, in preparation for a reform of the funding of elderly disability in France. Microsimulation at the individual level makes it possible to take into account expected changes in the distribution of the variables that influence the process under study. It also makes it possible to simulate allowances based on complex, non-linear scales that require calculation at the individual level. This document describes the implementation method and the resulting forecasts, first for the characteristics of the disabled elderly and the presence of caregivers. Several alternative scenarios are then studied, yielding a range of estimates of the future cost of the allowance for elderly disability, from 0.54% of GDP in the most optimistic scenario to 0.71% of GDP in the most pessimistic one.
    Keywords: Microsimulation, forecasts, elderly disability, APA
    JEL: I18 H51 J14 C53
    Date: 2012
  9. By: Nalan Basturk (Erasmus University Rotterdam); Lennart Hoogerheide (VU University Amsterdam); Anne Opschoor (Erasmus University Rotterdam); Herman K. van Dijk (EUR & VU)
    Abstract: This paper presents the R package MitISEM, which provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis-Hastings methods for Bayesian inference on model parameters and probabilities. The package also provides an extended MitISEM algorithm, ‘sequential MitISEM’, which substantially decreases the computational time when the target density has to be approximated for increasing data samples. This occurs when the posterior distribution is updated with new observations and/or when one computes model probabilities using predictive likelihoods. We illustrate the MitISEM algorithm using three canonical statistical and econometric models that are characterized by several types of non-elliptical posterior shapes and that describe well-known data patterns in econometrics and finance. We show that the candidate distribution obtained by MitISEM outperforms those obtained by ‘naive’ approximations in terms of numerical efficiency. Further, the MitISEM approach can be used for Bayesian model comparison, using the predictive likelihoods.
    Keywords: finite mixtures; Student-t distributions; Importance Sampling; MCMC; Metropolis-Hastings algorithm; Expectation Maximization; Bayesian inference; R software
    JEL: C11 C15
    Date: 2012–09–20
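The MitISEM package itself is in R. The sketch below illustrates, in Python and with a single fixed Student-t candidate rather than an adaptive mixture, the underlying idea the abstract relies on: only a kernel of the target density is needed for self-normalized importance sampling. The target kernel, degrees of freedom, and scale are illustrative assumptions of mine, not part of the package.

```python
import math
import random

random.seed(3)

def rt(df, loc=0.0, scale=1.0):
    """Draw from a Student-t(df) location-scale distribution:
    t = Z / sqrt(V / df), with V ~ chi-square(df)."""
    z = random.gauss(0, 1)
    v = random.gammavariate(df / 2, 2)  # chi-square(df) as Gamma(df/2, 2)
    return loc + scale * z / math.sqrt(v / df)

def log_t_pdf(x, df, loc=0.0, scale=1.0):
    """Log density of the Student-t(df) location-scale distribution."""
    u = (x - loc) / scale
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi) - math.log(scale)
            - (df + 1) / 2 * math.log(1 + u * u / df))

def log_kernel(x):
    """Unnormalized log target: a mildly skewed, non-Gaussian density
    (hypothetical stand-in for a posterior kernel)."""
    return -0.5 * x * x + 0.3 * x ** 3 / (1 + x * x)

df, loc, scale, n = 5, 0.0, 1.5, 20000  # fat-tailed candidate, illustrative
draws = [rt(df, loc, scale) for _ in range(n)]
logw = [log_kernel(x) - log_t_pdf(x, df, loc, scale) for x in draws]
mx = max(logw)
w = [math.exp(lw - mx) for lw in logw]  # stabilized importance weights
post_mean = sum(wi * xi for wi, xi in zip(w, draws)) / sum(w)
print(f"self-normalized IS estimate of the target mean: {post_mean:.2f}")
```

MitISEM's contribution is to tune a whole mixture of such t components automatically (via IS-weighted EM) so that the weights are well-behaved even for strongly non-elliptical targets; the fixed candidate here is the "naive" baseline it improves on.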
  10. By: Christian Francq (Crest and University Lille 3); Jean-Michel Zakoian (Crest and University Lille 3)
    Abstract: In conditionally heteroskedastic models, the optimal prediction of powers, or logarithms, of the absolute value has a simple expression in terms of the volatility and an expectation involving the independent process. A natural procedure for estimating this prediction is to estimate the volatility in a first step, for instance by Gaussian quasi-maximum likelihood (QML) or by least-absolute deviations, and to use empirical means based on rescaled innovations to estimate the expectation in a second step. This paper proposes an alternative one-step procedure, based on an appropriate non-Gaussian QML estimator, and establishes the asymptotic properties of the two approaches. Asymptotic comparisons and numerical experiments show that the differences in accuracy can be important, depending on the prediction problem and the innovations distribution. An application to indexes of major stock exchanges is given.
    Keywords: Efficiency of estimators, GARCH, Least-absolute deviations estimation, Prediction, Quasi maximum likelihood estimation
    Date: 2012–08
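The two-step procedure described above can be sketched on simulated ARCH(1) data. To keep the example short, the first step (volatility estimation by Gaussian QML or least-absolute deviations) is replaced here by the true volatility path; that shortcut, along with the ARCH parameters and the uniform innovations, is an assumption of the sketch, not of the paper.

```python
import math
import random

random.seed(11)

# simulate an ARCH(1): eps_t = sigma_t * eta_t,  sigma_t^2 = w + a * eps_{t-1}^2
w, a, n = 0.2, 0.5, 5000  # illustrative parameters
eta = [random.uniform(-math.sqrt(3), math.sqrt(3)) for _ in range(n)]  # unit variance
eps, sig = [], []
prev = 0.0
for e in eta:
    s2 = w + a * prev ** 2
    sig.append(math.sqrt(s2))
    eps.append(sig[-1] * e)
    prev = eps[-1]

# two-step prediction of |eps_{n+1}|^s:
# step 1 (volatility) taken as known here; step 2 uses rescaled innovations.
s = 1.0                                        # power to predict
eta_hat = [x / v for x, v in zip(eps, sig)]    # rescaled innovations
m_s = sum(abs(u) ** s for u in eta_hat) / n    # empirical estimate of E|eta|^s
sigma_next = math.sqrt(w + a * eps[-1] ** 2)   # one-step-ahead volatility
forecast = sigma_next ** s * m_s               # predictor of |eps_{n+1}|^s
print(f"E|eta|^{s} estimate: {m_s:.3f}, forecast of |eps_(n+1)|: {forecast:.3f}")
```

For these uniform innovations E|eta| = sqrt(3)/2, so the empirical factor lands near 0.866; the paper's one-step non-Gaussian QML alternative folds both steps into a single estimator.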

This nep-for issue is ©2012 by Rob J Hyndman. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.