
on Econometric Time Series 
By:  Niels Haldrup; Antonio Montañés; Andreu Sansó (Department of Economics, University of Aarhus, Denmark) 
Abstract:  The detection of additive outliers in integrated variables has attracted some attention recently, see e.g. Shin et al. (1996), Vogelsang (1999) and Perron and Rodriguez (2003). This paper serves several purposes. We prove the inconsistency of the test proposed by Vogelsang, we extend the tests proposed by Shin et al. and Perron and Rodriguez to the seasonal case, and we consider alternative ways of computing their tests. We also study the effects of periodically varying variances on the previous tests and demonstrate that these can be seriously size distorted. Subsequently, some new tests that allow for periodic heteroskedasticity are proposed. 
Keywords:  Additive outliers, outlier detection, integrated processes, periodic heteroscedasticity, seasonality. 
JEL:  C12 C2 C22 
Date:  2004–12–21 
URL:  http://d.repec.org/n?u=RePEc:aah:aarhec:200414&r=ets 
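The mechanics behind such tests can be illustrated with a toy detector. In an I(1) series, an additive outlier of size theta at time t shows up in the first differences as a +theta spike followed immediately by a -theta spike. The sketch below flags such opposite-signed spike pairs using a MAD-based robust scale; it is a simplified stand-in for the studentized statistics in the cited papers, and the function name is hypothetical.

```python
import numpy as np

def detect_additive_outliers(y, k=4.0):
    """Flag candidate additive-outlier dates in an integrated series.

    Illustrative only: an AO of size theta at time t appears in the
    first differences as +theta followed by -theta, so we look for
    large opposite-signed consecutive spikes, standardized by a
    MAD-based robust scale estimate.
    """
    dy = np.diff(y)
    scale = 1.4826 * np.median(np.abs(dy - np.median(dy)))
    z = (dy - np.median(dy)) / scale
    flags = []
    for t in range(len(z) - 1):
        # spike pair straddling the outlier date t+1 (in y's indexing)
        if (z[t] > k and z[t + 1] < -k) or (z[t] < -k and z[t + 1] > k):
            flags.append(t + 1)
    return flags

# Random walk with one additive outlier injected at t = 50
rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(200))
y[50] += 10.0
print(detect_additive_outliers(y))
```

Note that this toy version ignores the seasonal case and periodic heteroskedasticity that the paper treats; under periodically varying variances a single scale estimate would produce exactly the size distortions the abstract describes.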
By:  Peter M Robinson 
Abstract:  Asymptotic inference on nonstationary fractional time series models, including cointegrated ones, is proceeding along two routes, determined by alternative definitions of nonstationary processes. We derive bounds for the mean squared error of the difference between (possibly tapered) discrete Fourier transforms under two regimes. We apply the results to deduce limit theory for estimates of memory parameters, including ones for cointegrated errors, with mention also of implications for estimates of cointegrating coefficients. 
Keywords:  Nonstationary fractional processes, memory parameter estimation, fractional cointegration, rates of convergence. 
Date:  2004–03 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/468&r=ets 
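For readers unfamiliar with fractional integration: a process is fractionally integrated of order d if applying the filter (1 - L)^d renders it weakly dependent. A minimal sketch of that filter via its binomial expansion (pi_0 = 1, pi_j = pi_{j-1}(j - 1 - d)/j), with a hypothetical function name:

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference filter (1 - L)^d using its
    binomial expansion: pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j.
    d = 1 reproduces ordinary first differencing; 0 < d < 1 gives
    the long-memory filters relevant to memory parameter estimation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    out = np.empty(n)
    for t in range(n):
        # convolve the filter with the observed past (pre-sample = 0)
        out[t] = np.dot(pi[:t + 1], x[t::-1])
    return out

print(frac_diff([1.0, 2.0, 4.0], 1.0))  # first differences: [1. 1. 2.]
```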
By:  Peter M Robinson 
Abstract:  Smoothed nonparametric estimates of the spectral density matrix at zero frequency have been widely used in econometric inference, because they can consistently estimate the covariance matrix of a partial sum of a possibly dependent vector process. When elements of the vector process exhibit long memory or antipersistence such estimates are inconsistent. We propose estimates which are still consistent in such circumstances, adapting automatically to memory parameters that can vary across the vector and be unknown. 
Keywords:  Covariance matrix estimation, long memory, antipersistence correction, "HAC" estimates, vector process, spectral density. 
Date:  2004–03 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/471&r=ets 
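The baseline object the paper generalizes is the smoothed spectral estimate at frequency zero; a familiar special case is the Bartlett-kernel (Newey-West) long-run variance, sketched here for a scalar series (names are illustrative):

```python
import numpy as np

def newey_west_lrv(u, bandwidth):
    """Bartlett-kernel (Newey-West) estimate of the long-run variance
    of a scalar series: gamma_0 + 2 * sum_j w_j * gamma_j.  Consistent
    under weak dependence -- precisely the kind of estimate that
    becomes inconsistent under long memory or antipersistence."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    n = len(u)
    lrv = np.dot(u, u) / n                   # gamma_0
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1.0)      # Bartlett weight
        lrv += 2.0 * w * np.dot(u[j:], u[:-j]) / n
    return lrv

rng = np.random.default_rng(42)
print(newey_west_lrv(rng.standard_normal(5000), 10))  # near 1 for i.i.d. N(0,1)
```

The paper's estimates instead adapt to unknown, possibly distinct, memory parameters across the elements of a vector process; that adaptive step is not shown here.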
By:  Oliver Linton 
Abstract:  This paper is concerned with the practical problem of conducting inference in a vector time series setting when the data are unbalanced or incomplete. In this case, one can work only with the common sample, to which a standard HAC/Bootstrap theory applies, but at the expense of throwing away data and perhaps losing efficiency. An alternative is to use some sort of imputation method, but this requires additional modelling assumptions, which we would rather avoid. We show how the sampling theory changes and how to modify the resampling algorithms to accommodate the problem of missing data. We also discuss efficiency and power. Unbalanced data of the type we consider are quite common in financial panel data, see, for example, Connor and Korajczyk (1993). These data also occur in cross-country studies. 
Keywords:  Bootstrap, efficient, HAC estimation, missing data, subsampling. 
Date:  2004–04 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/474&r=ets 
By:  Fabrizio Iacone; Peter M Robinson 
Abstract:  We consider a cointegrated system generated by processes that may be fractionally integrated, and by additive polynomial and generalized polynomial trends. In view of the consequent competition between stochastic and deterministic trends, we consider various estimates of the cointegrating vector and develop relevant asymptotic theory, including the situation where fractional orders of integration are unknown. 
Keywords:  Fractional cointegration, deterministic trends, ordinary least squares estimation, generalized least squares estimation, Wald tests. 
Date:  2004–05 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/476&r=ets 
By:  Trino-Manuel Niguez; Javier Perote 
Abstract:  In this paper we introduce a transformation of the Edgeworth-Sargan series expansion of the Gaussian distribution, which we call the Positive Edgeworth-Sargan (PES). The main advantage of this new density is that it is well defined for all values in the parameter space and integrates to one. We include an illustrative empirical application to compare its performance with other distributions, including the Gaussian and the Student's t, in forecasting the full density of daily exchange-rate returns by using graphical procedures. Our results show that the proposed function outperforms the other two models for density forecasting, thus providing more reliable Value-at-Risk forecasts. 
Keywords:  Density forecasting, Edgeworth-Sargan distribution, probability integral transformations, P-value plots, VaR 
JEL:  C16 C53 G12 
Date:  2004–10 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/479&r=ets 
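The graphical evaluation mentioned rests on probability integral transforms (PITs): if the forecast density is correct, the PITs are i.i.d. Uniform(0,1), and a P-value plot compares their empirical CDF with the 45-degree line. A minimal sketch under a Gaussian forecast density (the PES density itself is not implemented here, and all names are illustrative):

```python
import math
import random

def pit_values(ys, mus, sigmas):
    """Probability integral transforms z_t = F_t(y_t) under Gaussian
    forecast densities N(mu_t, sigma_t^2).  If the forecast density
    is correct, the z_t are i.i.d. Uniform(0,1)."""
    return [0.5 * (1.0 + math.erf((y - m) / (s * math.sqrt(2.0))))
            for y, m, s in zip(ys, mus, sigmas)]

def max_cdf_deviation(z):
    """Kolmogorov-Smirnov-type distance between the empirical CDF of
    the PITs and the Uniform(0,1) CDF (the 45-degree line)."""
    z = sorted(z)
    n = len(z)
    return max(abs(z[i] - (i + 1) / n) for i in range(n))

random.seed(1)
y = [random.gauss(0.0, 1.0) for _ in range(2000)]
z = pit_values(y, [0.0] * 2000, [1.0] * 2000)
print(max_cdf_deviation(z))  # small when the forecast density is correct
```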
By:  Peter M Robinson 
Abstract:  We consider a time series model involving a fractional stochastic component, whose integration order can lie in the stationary/invertible or nonstationary regions and be unknown, and an additive deterministic component consisting of a generalised polynomial. The model can thus incorporate competing descriptions of trending behaviour. The stationary input to the stochastic component has parametric autocorrelation, but innovations with distribution of unknown form. The model is thus semiparametric, and we develop estimates of the parametric component which are asymptotically normal and achieve an M-estimation efficiency bound, equal to that found in work using an adaptive LAM/LAN approach. A major technical feature which we treat is the effect of truncating the autoregressive representation in order to form innovation proxies. This is relevant also when the innovation density is parameterised, and we provide a result for that case also. Our semiparametric estimates employ nonparametric series estimation, which avoids some complications and conditions in kernel approaches featured in much work on adaptive estimation of time series models; our work thus also contributes to methods and theory for non-fractional time series models, such as autoregressive moving averages. A Monte Carlo study of finite-sample performance of the semiparametric estimates is included. 
Keywords:  fractional processes, efficient semiparametric estimation, adaptive estimation, nonstationary processes, series estimation, M-estimation 
JEL:  C22 
Date:  2004–11 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/480&r=ets 
By:  Rombouts, J.V.K.; Verbeek, M. (Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam) 
Abstract:  In this paper we examine the usefulness of multivariate semiparametric GARCH models for portfolio selection under a Value-at-Risk (VaR) constraint. First, we specify and estimate several alternative multivariate GARCH models for daily returns on the S&P 500 and Nasdaq indexes. Examining the within-sample VaRs of a set of given portfolios shows that the semiparametric model performs uniformly well, while parametric models in several cases have unacceptable failure rates. Interestingly, distributional assumptions appear to have a much larger impact on the performance of the VaR estimates than the particular parametric specification chosen for the GARCH equations. Finally, we examine the economic value of the multivariate GARCH models by determining optimal portfolios based on maximizing expected returns subject to a VaR constraint, over a period of 500 consecutive days. Again, the superiority and robustness of the semiparametric model are confirmed. 
Keywords:  Multivariate GARCH, semiparametric estimation, Value-at-Risk, asset allocation 
Date:  2004–12–22 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureri:30001977&r=ets 
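The failure rates mentioned in the abstract can be checked with a simple count: for a well-calibrated VaR at level alpha, the fraction of days on which the loss exceeds the VaR forecast should be close to alpha. A minimal Kupiec-style sketch (function name hypothetical):

```python
def var_failure_rate(returns, var_forecasts):
    """Fraction of periods in which the realized return falls below
    minus the VaR forecast (a VaR exceedance, or 'failure').  For a
    well-calibrated VaR at level alpha this should be close to alpha."""
    hits = sum(1 for r, v in zip(returns, var_forecasts) if r < -v)
    return hits / len(returns)

# One exceedance in four days: the -3% return breaches the 2% VaR
print(var_failure_rate([-0.03, 0.01, 0.005, -0.002],
                       [0.02, 0.02, 0.02, 0.02]))  # 0.25
```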
By:  David E. Giles (Department of Economics, University of Victoria); Chad N. Stroomer (Department of Economics, University of Victoria) 
Abstract:  This paper presents a new method for extracting the cycle from an economic time series. This method uses the fuzzy c-means clustering algorithm, drawn from the pattern recognition literature, to identify groups of observations. The time series is modeled over each of these subsamples, and the results are combined using the “degrees of membership” of each data point in each cluster. The result is a totally flexible model that readily captures complex nonlinearities in the data. This type of “fuzzy regression” analysis has been shown by Giles and Draeseke (2003) to be highly effective in a broad range of situations with economic data. The fuzzy filter that we develop here is compared with the well-known Hodrick-Prescott (HP) filter in a Monte Carlo experiment, and the new filter is found to perform as well as, or better than, the HP filter. The advantage of the fuzzy filter is especially pronounced when the data have a deterministic, rather than stochastic, trend. Applications with real time series illustrate the different conclusions that can emerge when the fuzzy regression filter and the HP filter are each applied to extract the cycle. 
Keywords:  Fuzzy filter, fuzzy clustering, business cycle, trend extraction, HP filter 
JEL:  C19 C22 E32 
Date:  2004–12–29 
URL:  http://d.repec.org/n?u=RePEc:vic:vicewp:0406&r=ets 
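The “degrees of membership” come from the fuzzy c-means algorithm: with fuzzifier m, observation i's membership in cluster k is u_ik = 1 / sum_j (d_ik/d_ij)^(2/(m-1)), so memberships for each point sum to one. A sketch of that computation for given cluster centers (the full fuzzy filter also fits a model per cluster and blends the fits, which is omitted here):

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """Fuzzy c-means degrees of membership for scalar data:
    u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)), where d_ik is the
    distance from observation i to cluster center k.  Each row
    sums to one, so the memberships act as blending weights."""
    x = np.asarray(x, dtype=float)[:, None]
    c = np.asarray(centers, dtype=float)[None, :]
    d = np.abs(x - c) + 1e-12                 # guard against zero distance
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

u = fcm_memberships([0.0, 1.0, 10.0], [0.0, 10.0])
print(u.round(3))  # point 0 belongs almost fully to the first cluster
```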
By:  Peter F. Christoffersen (McGill University and CIRANO); Francis X. Diebold (Department of Economics, University of Pennsylvania and NBER) 
Abstract:  We consider three sets of phenomena that feature prominently, and separately, in the financial economics literature: conditional mean dependence (or lack thereof) in asset returns, dependence (and hence forecastability) in asset return signs, and dependence (and hence forecastability) in asset return volatilities. We show that they are very much interrelated, and we explore the relationships in detail. Among other things, we show that (a) Volatility dependence produces sign dependence, so long as expected returns are nonzero, so that one should expect sign dependence, given the overwhelming evidence of volatility dependence; (b) The standard finding of little or no conditional mean dependence is entirely consistent with a significant degree of sign dependence and volatility dependence; (c) Sign dependence is not likely to be found via analysis of sign autocorrelations, runs tests, or traditional market timing tests, because of the special nonlinear nature of sign dependence; (d) Sign dependence is not likely to be found in very high-frequency (e.g., daily) or very low-frequency (e.g., annual) returns; instead, it is more likely to be found at intermediate return horizons; (e) Sign dependence is very much present in actual U.S. equity returns, and its properties match closely our theoretical predictions; (f) The link between volatility forecastability and sign forecastability remains intact in conditionally non-Gaussian environments, as for example with time-varying conditional skewness and/or kurtosis. 
Keywords:  Conditional Mean Dependence, Conditional Volatility Dependence, Sign Dependence, VIX 
JEL:  C22 C53 
Date:  2003–09–22 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:04009&r=ets 
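Point (a) can be seen in a two-line calculation: if r ~ N(mu, sigma^2) then P(r > 0) = Phi(mu/sigma), so with mu > 0 fixed, time-varying volatility moves the sign probability, inducing sign dependence. A sketch with hypothetical daily numbers:

```python
import math

def prob_positive(mu, sigma):
    """P(r > 0) when r ~ N(mu, sigma^2), i.e. Phi(mu / sigma).
    With mu > 0 fixed, higher conditional volatility pulls the
    sign probability back toward 1/2, so volatility dependence
    induces sign dependence."""
    return 0.5 * (1.0 + math.erf(mu / (sigma * math.sqrt(2.0))))

# Same positive expected return on a calm and a volatile day
# (numbers are hypothetical, not from the paper)
print(prob_positive(0.0005, 0.01))  # calm day: further above 0.5
print(prob_positive(0.0005, 0.03))  # volatile day: closer to 0.5
```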
By:  Francis X. Diebold (Department of Economics, University of Pennsylvania and NBER) 
Abstract:  Engle’s footsteps range widely. His major contributions include early work on band-spectral regression, development and unification of the theory of model specification tests (particularly Lagrange multiplier tests), clarification of the meaning of econometric exogeneity and its relationship to causality, and his later stunningly influential work on common trend modeling (cointegration) and volatility modeling (ARCH, short for AutoRegressive Conditional Heteroskedasticity). More generally, Engle’s cumulative work is a fine example of best-practice applied time-series econometrics: he identifies important dynamic economic phenomena, formulates precise and interesting questions about those phenomena, constructs sophisticated yet simple econometric models for measurement and testing, and consistently obtains results of widespread substantive interest in the scientific, policy, and financial communities. 
Keywords:  Econometric Theory, Finance 
JEL:  B31 C10 
Date:  2004–02–01 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:04010&r=ets 
By:  Richard Harrison; George Kapetanios; Tony Yates 
Abstract:  This paper explores the effects of measurement error on dynamic forecasting models. It illustrates a tradeoff that confronts forecasters and policymakers when they use data that are measured with error. On the one hand, observations on recent data give valuable clues as to the shocks that are hitting the system and that will be propagated into the variables to be forecast. But on the other, those recent observations are likely to be those least well measured. The paper studies two classes of forecasting problem. The first class includes cases where the forecaster takes the coefficients in the data-generating process as given, and has to choose how much of the historical time series of data to use to form a forecast. We show that if recent data are sufficiently badly measured, relative to older data, it can be optimal not to use recent data at all. The second class of problems we study is more general. We show that for a general class of linear autoregressive forecasting models, the optimal weight to place on a data observation of some age, relative to the weight in the true data-generating process, will depend on the measurement error in that observation. We illustrate the gains in forecasting performance using a model of UK business investment growth. 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:237&r=ets 
By:  Torben G. Andersen (Department of Economics, Northwestern University); Tim Bollerslev (Department of Economics, Duke University); Francis X. Diebold (Department of Economics, University of Pennsylvania); Jin Wu (Department of Economics, University of Pennsylvania) 
Abstract:  A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. 
Keywords:  Quadratic variation and covariation, realized volatility, asset pricing, CAPM, equity betas, long memory, nonlinear fractional cointegration, continuous-time methods 
JEL:  C1 G1 
Date:  2003–01–03 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:04018&r=ets 
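A realized beta is a simple ratio of realized moments: the sum of intraday stock-market return cross-products divided by the sum of squared market returns over the same window. A minimal sketch (function name hypothetical):

```python
import numpy as np

def realized_beta(r_stock, r_market):
    """Realized beta over a window: sum of intraday stock-market
    return cross-products divided by the sum of squared market
    returns (realized covariance over realized market variance)."""
    r_stock = np.asarray(r_stock, dtype=float)
    r_market = np.asarray(r_market, dtype=float)
    return np.dot(r_stock, r_market) / np.dot(r_market, r_market)

# A stock whose intraday returns are exactly twice the market's
# has realized beta 2 by construction.
print(realized_beta([0.02, -0.04, 0.01], [0.01, -0.02, 0.005]))  # 2.0
```

The abstract's point is about the time-series behaviour of this ratio across windows: the numerator and denominator are each highly persistent, yet the ratio is much less so.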
By:  George Kapetanios; Tony Yates 
Abstract:  Over time, economic statistics are refined. This means that newer data are typically less well measured than older data. Time- or vintage-variation in measurement error like this influences how forecasts should be made. Measurement error is obviously not directly observable. This paper shows that modelling the behaviour of the statistics agency generates an estimate of this time-variation. This provides an alternative to assuming that the final releases of variables are true. The paper applies the method to UK aggregate expenditure data, and demonstrates the gains in forecasting from exploiting these model-based estimates of measurement error. 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:238&r=ets 
By:  David Mandy (Department of Economics, University of Missouri-Columbia); Sandor Fridli 
Abstract:  We show under very parsimonious assumptions that FGLS and GLS are asymptotically equivalent when errors follow an invertible MA(1) process. Although the linear regression model with MA errors has been studied for many years, asymptotic equivalence of FGLS and GLS has never been established for this model. We do not require anything beyond a finite second moment of the conditional white noise, uniformly bounded fourth moments and independence of the regressor vectors, consistency of the estimator for the MA parameter, and a finite nonsingular probability limit for the (transformed) averages of the regressors. These assumptions are analogous to assumptions typically used to prove asymptotic equivalence of FGLS and GLS in SUR models, models with AR(p) errors, and models of parametric heteroscedasticity. 
JEL:  L5 
Date:  2004–12–16 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:0405&r=ets 
By:  Michael W. McCracken (Department of Economics, University of Missouri-Columbia) 
Abstract:  This paper presents analytical, Monte Carlo and empirical evidence concerning out-of-sample tests of Granger causality. The environment is one in which the relative predictive ability of two nested parametric regression models is of interest. Results are provided for three statistics: a regression-based statistic suggested by Granger and Newbold (1977), a t-type statistic comparable to those suggested by Diebold and Mariano (1995) and West (1996), and an F-type statistic akin to Theil’s U. Since the asymptotic distributions under the null are nonstandard, tables of asymptotically valid critical values are provided. Monte Carlo evidence supports the theoretical results. An empirical example relating the predictive content of an interest spread to growth shows that the tests can provide a useful model selection tool for forecasting. 
Keywords:  Granger causality, forecast evaluation, hypothesis testing, model selection 
JEL:  C12 C32 C52 C53 
Date:  2004–12–23 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:0406&r=ets 
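The F-type statistic compares out-of-sample mean squared errors of the nested models; a common form is F = P * (MSE_restricted - MSE_unrestricted) / MSE_unrestricted, where P is the number of forecasts. A sketch of that computation (a simplified illustration, with critical values left to the paper's nonstandard tables):

```python
import numpy as np

def oos_f_stat(errors_restricted, errors_unrestricted):
    """Out-of-sample F-type statistic for nested model comparison:
    F = P * (MSE_r - MSE_u) / MSE_u, with P out-of-sample forecasts.
    Large values favour the unrestricted (Granger-causal) model; the
    null distribution is nonstandard, so critical values come from
    tabulated results rather than an F table."""
    e_r = np.asarray(errors_restricted, dtype=float)
    e_u = np.asarray(errors_unrestricted, dtype=float)
    mse_r = np.mean(e_r ** 2)
    mse_u = np.mean(e_u ** 2)
    return len(e_u) * (mse_r - mse_u) / mse_u

# Restricted errors of 2, unrestricted errors of 1, over P = 2 forecasts
print(oos_f_stat([2.0, 2.0], [1.0, 1.0]))  # 2 * (4 - 1) / 1 = 6.0
```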
By:  Michael W. McCracken (Department of Economics, University of Missouri-Columbia); Todd E. Clark (Federal Reserve Bank of Kansas City) 
Abstract:  This paper presents analytical, Monte Carlo, and empirical evidence on the effectiveness of combining recursive and rolling forecasts when linear predictive models are subject to structural change. We first provide a characterization of the bias-variance tradeoff faced when choosing between either the recursive and rolling schemes or a scalar convex combination of the two. From that, we derive pointwise-optimal, time-varying and data-dependent observation windows and combining weights designed to minimize mean square forecast error. We then proceed to consider other methods of forecast combination, including Bayesian methods that shrink the rolling forecast toward the recursive one, and Bayesian model averaging. Monte Carlo experiments and several empirical examples indicate that although the recursive scheme is often difficult to beat, when gains can be obtained, some form of shrinkage can often provide improvements in forecast accuracy relative to forecasts made using the recursive scheme or the rolling scheme with a fixed window width. 
Keywords:  structural breaks, forecasting, model averaging. 
JEL:  C53 C12 C52 
Date:  2004–12–23 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:0420&r=ets 
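The scalar convex combination studied here takes the form alpha * (recursive forecast) + (1 - alpha) * (rolling forecast); the paper's contribution is deriving data-dependent choices of alpha and the window width, whereas the sketch below simply applies a given alpha (function name hypothetical):

```python
def combined_forecast(f_recursive, f_rolling, alpha):
    """Scalar convex combination of recursive and rolling forecasts:
    alpha = 1 is pure recursive, alpha = 0 is pure rolling.  Here
    alpha is a fixed constant; time-varying, data-dependent weights
    chosen to minimize mean square forecast error are the subject
    of the paper and are not derived here."""
    return [alpha * fr + (1.0 - alpha) * fo
            for fr, fo in zip(f_recursive, f_rolling)]

print(combined_forecast([1.0], [3.0], 0.5))  # [2.0]
```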
By:  Michael W. McCracken (Department of Economics, University of Missouri-Columbia); Todd E. Clark (Federal Reserve Bank of Kansas City) 
Abstract:  This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy and encompassing applied to predictions from nested long-horizon regression models. We first derive the asymptotic distributions of a set of tests of equal forecast accuracy and encompassing, showing that the tests have nonstandard distributions that depend on the parameters of the data-generating process. Using a simple model-based bootstrap for inference, we then conduct Monte Carlo simulations of a range of data-generating processes to examine the finite-sample size and power of the tests. In these simulations, the bootstrap yields tests with good finite-sample size and power properties, with the encompassing test proposed by Clark and McCracken (2001) having superior power. The paper concludes with a reexamination of the predictive content of capacity utilization for core inflation. 
Keywords:  Forecast evaluation, prediction, causality 
JEL:  C53 C12 C52 
Date:  2004–12–27 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:0302&r=ets 
By:  Michael W. McCracken (Department of Economics, University of Missouri-Columbia); Todd E. Clark (Federal Reserve Bank of Kansas City) 
Abstract:  This paper presents analytical, Monte Carlo, and empirical evidence on the effects of structural breaks on tests for equal forecast accuracy and forecast encompassing. The forecasts are generated from two parametric, linear models that are nested under the null. The alternative hypotheses allow a causal relationship that is subject to breaks during the sample. With this framework, we show that in-sample explanatory power is readily found because the usual F-test will indicate causality if it existed for any portion of the sample. Out-of-sample predictive power can be harder to find because the results of out-of-sample tests are highly dependent on the timing of the predictive ability. Moreover, out-of-sample predictive power is harder to find with some tests than with others: the power of F-type tests of equal forecast accuracy and encompassing often dominates that of the more commonly used t-type alternatives. Overall, out-of-sample tests are effective at revealing whether one variable has predictive power for another at the end of the sample. Based on these results and additional evidence from an empirical application relating GDP growth to an interest rate term spread, we conclude that structural breaks can explain why researchers often find evidence of in-sample, but not out-of-sample, predictive content. 
Keywords:  power, structural breaks, forecast evaluation, model selection 
JEL:  C53 C12 C52 
Date:  2004–12–27 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:0303&r=ets 