
on Forecasting 
By:  Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  A prominent use of local to unity limit theory in applied work is the construction of confidence intervals for autoregressive roots through inversion of the ADF t statistic associated with a unit root test, as suggested in Stock (1991). Such confidence intervals are valid when the true model has an autoregressive root that is local to unity (rho = 1 + (c/n)) but are invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Consideration of the boundary case shows that these confidence intervals are invalid for stationary autoregression, where they manifest locational bias and width distortion. In particular, the coverage probability of these intervals tends to zero as c approaches infinity, and the width of the intervals exceeds the width of intervals constructed in the usual way under stationarity. Some implications of these results for predictive regression tests are explored. It is shown that when the regressor has autoregressive coefficient rho < 1 and the sample size n approaches infinity, the Campbell and Yogo (2006) confidence intervals for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious implications for empirical practice.
Keywords:  Autoregressive root, Confidence belt, Confidence interval, Coverage probability, Local to unity, Localizing coefficient, Predictive regression, Tightness 
JEL:  C22 
Date:  2012–09 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1879&r=for 
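As a quick illustration of the local-to-unity parameterization in the abstract above, the sketch below simulates an AR(1) process with rho = 1 + c/n and shows why a fixed stationary rho corresponds to a localizing coefficient c that diverges with the sample size, which is the boundary case where the inverted-ADF intervals break down. The function name is hypothetical and the code assumes Python/NumPy; it is a minimal sketch, not an implementation of the paper's analysis.

```python
import numpy as np

def simulate_local_to_unity(n, c, sigma=1.0, seed=0):
    """Simulate an AR(1) y_t = rho * y_{t-1} + e_t with the
    local-to-unity root rho = 1 + c/n (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / n
    e = rng.normal(0.0, sigma, n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y, rho

# A fixed stationary root, say rho = 0.9, maps to c = n * (rho - 1),
# which tends to minus infinity as the sample size grows:
for n in (100, 1000, 10000):
    print(n, n * (0.9 - 1.0))
```

The printed values (-10, -100, -1000) make the abstract's point concrete: for a stationary process, the implied c escapes to the boundary of its domain as n grows.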
By:  Degui Li; Oliver Linton (Institute for Fiscal Studies and Cambridge University); Zudi Lu 
Abstract:  We consider approximating a multivariate regression function by an affine combination of one-dimensional conditional component regression functions. The weight parameters involved in the approximation are estimated by least squares on the first-stage nonparametric kernel estimates. We establish asymptotic normality for the estimated weights and the regression function in two cases: the number of covariates is finite, and the number of covariates is diverging. As the observations are assumed to be stationary and near epoch dependent, the approach in this paper is applicable to estimation and forecasting issues in time series analysis. Furthermore, the methods and results are augmented by a simulation study and illustrated by an application to the Australian annual mean temperature anomaly series. We also apply our methods to high frequency volatility forecasting, where we obtain superior results to parametric methods.
Keywords:  Asymptotic normality, model averaging, Nadaraya-Watson kernel estimation, near epoch dependence, semiparametric method.
JEL:  C14 C22 
Date:  2012–09 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:28/12&r=for 
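The two-step idea in the abstract above (first-stage one-dimensional kernel regressions, then least-squares weights on the fitted components) can be sketched compactly. The helper names are hypothetical, a Gaussian kernel and a fixed bandwidth are assumed for simplicity, and this is only a schematic version of the procedure, not the paper's estimator:

```python
import numpy as np

def nw_1d(x, y, grid, h):
    """Nadaraya-Watson kernel estimate of E[y | x] at each grid point,
    using a Gaussian kernel with bandwidth h (hypothetical helper)."""
    k = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (k @ y) / k.sum(axis=1)

def model_average_fit(X, y, h=0.5):
    """Two-step sketch: estimate each one-dimensional component regression
    by kernel smoothing, then pick the affine-combination weights by least
    squares of y on the fitted components (intercept included)."""
    n, d = X.shape
    # First stage: one kernel regression per covariate, fitted in-sample.
    M = np.column_stack([nw_1d(X[:, j], y, X[:, j], h) for j in range(d)])
    # Second stage: least-squares weights for the affine combination.
    Z = np.column_stack([np.ones(n), M])
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return w, Z @ w
```

In practice the bandwidth would be chosen by cross-validation and the asymptotics in the paper cover dependent (near epoch dependent) data; this sketch only shows the mechanics of the affine averaging.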
By:  Todd E. Clark; Francesco Ravazzolo 
Abstract:  This paper compares alternative models of time-varying macroeconomic volatility on the basis of the accuracy of point and density forecasts of macroeconomic variables. In this analysis, we consider both Bayesian autoregressive and Bayesian vector autoregressive models that incorporate some form of time-varying volatility, specifically stochastic volatility (both with constant and time-varying autoregressive coefficients), stochastic volatility following a stationary AR process, stochastic volatility coupled with fat tails, GARCH, and mixture-of-innovation models. The comparison is based on the accuracy of forecasts of key macroeconomic time series for real-time post-World War II data for both the United States and the United Kingdom. The results show that the AR and VAR specifications with widely used stochastic volatility dominate models with alternative volatility specifications, in terms of point forecasting to some degree and density forecasting to a greater degree.
Keywords:  Simulation modeling ; Economic forecasting ; Bayesian statistical decision theory 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedcwp:1218&r=for 
By:  Toni Beutler (Study Center Gerzensee and University of Lausanne) 
Abstract:  This paper investigates whether commodity convenience yields (the yields that accrue to the holders of physical commodities) can predict the exchange rates of commodity exporters' currencies. Predictability is a consequence of the fact that (i) convenience yields are useful predictors of commodity prices and (ii) commodity currencies have a strong relationship with commodity prices. The empirical evidence indicates that there is a significant relationship between aggregate measures of convenience yields and commodity currencies' exchange rates, both in-sample and out-of-sample. A high level of convenience yields strongly predicts a depreciation of the Australian, Canadian and New Zealand dollar exchange rates at horizons of 1 to 24 months.
Date:  2012–03 
URL:  http://d.repec.org/n?u=RePEc:szg:worpap:1203&r=for 
By:  Kajal Lahiri; George Monokroussos; Yongchen Zhao 
Abstract:  While the yield spread has long been recognized as a good predictor of recessions, it seems to have been largely overlooked by professional forecasters. We examine this puzzle, established by Rudebusch and Williams (2009), in a data-rich environment including not just the yield spread but many other predictors as well. We confirm the puzzle in this context by examining the contributions of both the SPF forecasts and the yield spread in predicting recessions, and by examining the information content of SPF forecasts directly. Furthermore, we take a first step towards a possible resolution of this puzzle by recognizing the heterogeneity across professional forecasters.
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:nya:albaec:1204&r=for 
By:  Brent Meyer; Guhan Venkatu 
Abstract:  This paper reinvestigates the performance of trimmed-mean inflation measures some 20 years since their inception, asking whether there is a particular trimmed-mean measure that dominates the median CPI. Unlike previous research, we evaluate the performance of symmetric and asymmetric trimmed means using a well-known test of equal predictive accuracy. We find that there is a large swath of trimmed means that have statistically indistinguishable performance. Also, while the swath of statistically similar trims changes slightly over different sample periods, it always includes the median CPI, an extreme trim that holds conceptual and computational advantages. We conclude with a simple forecasting exercise that highlights the advantage of the median CPI relative to other standard inflation measures.
Keywords:  Inflation (Finance) ; Consumer price indexes 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedcwp:1217&r=for 
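The trimmed-mean construction discussed above is simple to state: sort the component price changes, discard a share of expenditure weight from each tail, and average what remains, with the median CPI as the extreme symmetric trim. A minimal sketch (hypothetical helper, Python/NumPy assumed; real CPI trims involve many details of weighting and seasonal adjustment not shown here):

```python
import numpy as np

def weighted_trimmed_mean(changes, weights, lower, upper):
    """Asymmetric weighted trimmed mean: sort components by price change,
    drop the bottom `lower` and top `upper` shares of expenditure weight,
    and average the rest. Weights are expenditure shares summing to one.
    The (weighted) median is the limiting case lower = upper = 0.5."""
    order = np.argsort(changes)
    c = np.asarray(changes, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    # Keep components whose weight mass lies strictly inside the trim bounds.
    keep = (cum > lower) & (cum - w < 1.0 - upper)
    return np.average(c[keep], weights=w[keep])
```

For equally weighted components with changes (1, 2, 3, 4, 5) and a 20% trim from each tail, this averages the middle three changes; with lower = upper = 0.5 it returns the single component straddling the median of the weight distribution.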
By:  Ioannis Kasparis (Dept. of Economics, University of Cyprus); Elena Andreou (Dept. of Economics, University of Cyprus); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  A unifying framework for inference is developed in predictive regressions where the predictor has unknown integration properties and may be stationary or nonstationary. Two easily implemented nonparametric F-tests are proposed. The test statistics are related to those of Kasparis and Phillips (2012) and are obtained by kernel regression. The limit distribution of these predictive tests holds for a wide range of predictors including stationary as well as nonstationary fractional and near-unit-root processes. In this sense the proposed tests provide a unifying framework for predictive inference, allowing for possibly nonlinear relationships of unknown form and offering robustness to integration order and functional form. Under the null of no predictability the limit distributions of the tests involve functionals of independent chi^2 variates. The tests are consistent, and divergence rates are faster when the predictor is stationary. Asymptotic theory and simulations show that the proposed tests are more powerful than existing parametric predictability tests when deviations from unity are large or the predictive regression is nonlinear. Some empirical illustrations using monthly S&P 500 stock returns data are provided.
Keywords:  Functional regression, Nonparametric predictability test, Nonparametric regression, Stock returns, Predictive regression 
JEL:  C22 C32 
Date:  2012–09 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1878&r=for 
By:  C. MARBOT (Insee); D. ROY (Insee) 
Abstract:  Confronted with an ageing population, developed countries face the challenge of providing care to a growing number of disabled elderly people. Knowing how many such people there will be and, given the current pension and welfare systems, how much it will cost to care for them is crucial to policymakers. The INSEE pensions microsimulation tool (called Destinie) was extended in 2011 to cover elderly disability, in preparation for a reform of the funding of elderly disability in France. Microsimulation at the individual level makes it possible to take into account expected changes in the distribution of the variables that influence the process under study. It also makes it possible to simulate allowances based on complex, nonlinear scales that require calculation at the individual level. This document describes the implementation method and the results of the forecasts, first on the characteristics of the disabled elderly and the presence of caregivers. Then several alternative scenarios are studied, yielding a range of estimates of the future cost of the allowance for elderly disability, from 0.54% of GDP in the most optimistic scenario to 0.71% of GDP in the most pessimistic one.
Keywords:  Microsimulation, forecasts, elderly disability, APA 
JEL:  I18 H51 J14 C53 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:crs:wpdeee:g201210&r=for 
By:  Nalan Basturk (Erasmus University Rotterdam); Lennart Hoogerheide (VU University Amsterdam); Anne Opschoor (Erasmus University Rotterdam); Herman K. van Dijk (EUR & VU) 
Abstract:  This paper presents the R package MitISEM, which provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis-Hastings methods for Bayesian inference on model parameters and probabilities. The package also provides an extended MitISEM algorithm, 'sequential MitISEM', which substantially decreases the computational time when the target density has to be approximated for increasing data samples. This occurs when the posterior distribution is updated with new observations and/or when one computes model probabilities using predictive likelihoods. We illustrate the MitISEM algorithm using three canonical statistical and econometric models that are characterized by several types of non-elliptical posterior shapes and that describe well-known data patterns in econometrics and finance. We show that the candidate distribution obtained by MitISEM outperforms those obtained by 'naive' approximations in terms of numerical efficiency. Further, the MitISEM approach can be used for Bayesian model comparison, using the predictive likelihoods.
Keywords:  finite mixtures; Student-t distributions; Importance Sampling; MCMC; Metropolis-Hastings algorithm; Expectation Maximization; Bayesian inference; R software
JEL:  C11 C15 
Date:  2012–09–20 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20120096&r=for 
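The core idea in the abstract above, using a mixture of Student-t densities as an importance-sampling candidate for a target known only up to a kernel, can be illustrated in one dimension. This is not the MitISEM algorithm itself (which adapts the mixture via Expectation Maximization); the mixture components here are fixed by hand, all names are hypothetical, and Python/NumPy is assumed:

```python
import math
import numpy as np

def t_pdf(x, mu, sigma, nu):
    """Density of a location-scale Student-t with location mu, scale sigma,
    and nu degrees of freedom."""
    z = (x - mu) / sigma
    c = math.gamma((nu + 1) / 2) / (
        math.gamma(nu / 2) * math.sqrt(nu * math.pi) * sigma)
    return c * (1 + z * z / nu) ** (-(nu + 1) / 2)

def importance_mean(target_kernel, comps, probs, n=20000, seed=0):
    """Self-normalized importance-sampling estimate of E[x] under a target
    known only up to a kernel, with a fixed mixture of Student-t candidates.
    comps is a list of (mu, sigma, nu) tuples; probs are mixture weights."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(comps), size=n, p=probs)
    draws = np.array([comps[i][0] + comps[i][1] * rng.standard_t(comps[i][2])
                      for i in idx])
    cand = sum(p * t_pdf(draws, *c) for p, c in zip(probs, comps))
    w = target_kernel(draws) / cand  # unnormalized importance weights
    return np.sum(w * draws) / np.sum(w)

# Hypothetical example: an unnormalized N(2, 1) kernel as the target.
kernel = lambda x: np.exp(-0.5 * (x - 2.0) ** 2)
comps = [(0.0, 2.0, 5.0), (3.0, 2.0, 5.0)]  # (mu, sigma, nu) per component
est = importance_mean(kernel, comps, [0.5, 0.5])
```

The heavy t tails keep the importance weights bounded relative to a Gaussian target, which is exactly why mixtures of Student-t densities are attractive candidates for the non-elliptical posteriors the paper targets.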
By:  Christian Francq (Crest and University Lille 3); Jean-Michel Zakoian (Crest and University Lille 3)
Abstract:  In conditionally heteroskedastic models, the optimal prediction of powers, or logarithms, of the absolute value has a simple expression in terms of the volatility and an expectation involving the independent process. A natural procedure for estimating this prediction is to estimate the volatility in a first step, for instance by Gaussian quasi-maximum likelihood (QML) or by least-absolute deviations, and to use empirical means based on rescaled innovations to estimate the expectation in a second step. This paper proposes an alternative one-step procedure, based on an appropriate non-Gaussian QML estimator, and establishes the asymptotic properties of the two approaches. Asymptotic comparisons and numerical experiments show that the differences in accuracy can be substantial, depending on the prediction problem and the innovations distribution. An application to indexes of major stock exchanges is given.
Keywords:  Efficiency of estimators, GARCH, Least-absolute deviations estimation, Prediction, Quasi-maximum likelihood estimation
Date:  2012–08 
URL:  http://d.repec.org/n?u=RePEc:crs:wpaper:201217&r=for 
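The two-step procedure described in the abstract above is easy to sketch for a GARCH(1,1): filter the conditional volatility, then multiply the predicted volatility raised to the chosen power by the empirical mean of the same power of the rescaled innovations. The sketch below takes the GARCH parameters as given (the paper estimates them by QML or least-absolute deviations); all names are hypothetical and Python/NumPy is assumed:

```python
import numpy as np

def garch11_vol(r, omega, alpha, beta):
    """Conditional volatility of a GARCH(1,1):
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    initialized at the sample variance."""
    s2 = np.empty(len(r))
    s2[0] = r.var()
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return np.sqrt(s2)

def predict_abs_power(r, omega, alpha, beta, power=1.0):
    """Two-step prediction of |r_{t+1}|^power: the optimal forecast factors
    as sigma_{t+1}^power * E|eta|^power, and the second factor is estimated
    by the empirical mean of the rescaled innovations eta_t = r_t / sigma_t."""
    sig = garch11_vol(r, omega, alpha, beta)
    eta = r / sig
    sig_next = np.sqrt(omega + alpha * r[-1] ** 2 + beta * sig[-1] ** 2)
    return sig_next ** power * np.mean(np.abs(eta) ** power)
```

The paper's point is that a single non-Gaussian QML step can beat this two-step route in accuracy, depending on the power being predicted and the innovation distribution; the sketch only shows the baseline two-step construction.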