nep-ets New Economics Papers
on Econometric Time Series
Issue of 2005‒01‒02
nineteen papers chosen by
Yong Yin
SUNY at Buffalo

  1. Testing for Additive Outliers in Seasonally Integrated Time Series By Niels Haldrup; Antonio Montañés; Andreu Sansó
  2. The Distance between Rival Nonstationary Fractional Processes By Peter M Robinson
  3. ROBUST COVARIANCE MATRIX ESTIMATION: "HAC" Estimates with Long Memory/Antipersistence Correction By Peter M Robinson
  4. Nonparametric Inference for Unbalanced Time Series Data By Oliver Linton
  5. Cointegration in Fractional Systems with Deterministic Trends By Fabrizio Iacone; Peter M Robinson
  6. Forecasting the density of asset returns By Trino-Manuel Niguez; Javier Perote
  7. Efficiency Improvements in Inference on Stationary and Nonstationary Fractional Time Series By Peter M Robinson
  8. Evaluating Portfolio Value-At-Risk Using Semi-Parametric GARCH Models By Rombouts, J.V.K.; Verbeek, M.
  9. Identifying the Cycle of a Macroeconomic Time-Series Using Fuzzy Filtering By David E. Giles; Chad N. Stroomer
  10. Financial Asset Returns, Direction-of-Change Forecasting, and Volatility Dynamics By Peter F. Christoffersen; Francis X. Diebold
  11. The Nobel Memorial Prize for Robert F. Engle By Francis X. Diebold
  12. Forecasting with measurement errors in dynamic models By Richard Harrison; George Kapetanios; Tony Yates
  13. Realized Beta: Persistence and Predictability By Torben G. Andersen; Tim Bollerslev; Francis X. Diebold; Jin Wu
  14. Estimating time-variation in measurement error from data revisions; an application to forecasting in dynamic models By George Kapetanios; Tony Yates
  15. Exact FGLS Asymptotics for MA Errors By David Mandy; Sandor Fridli
  16. Asymptotics for Out of Sample Tests of Granger Causality By Michael W. McCracken
  17. Improving Forecast Accuracy by Combining Recursive and Rolling Forecasts By Michael W. McCracken; Todd E. Clark
  18. Evaluating Long-Horizon Forecasts By Michael W. McCracken; Todd E. Clark
  19. Forecast-Based Model Selection in the Presence of Structural Breaks By Michael W. McCracken; Todd E. Clark

  1. By: Niels Haldrup; Antonio Montañés; Andreu Sansó (Department of Economics, University of Aarhus, Denmark)
    Abstract: The detection of additive outliers in integrated variables has attracted some attention recently; see e.g. Shin et al. (1996), Vogelsang (1999) and Perron and Rodriguez (2003). This paper serves several purposes. We prove the inconsistency of the test proposed by Vogelsang, we extend the tests proposed by Shin et al. and Perron and Rodriguez to the seasonal case, and we consider alternative ways of computing their tests. We also study the effects of periodically varying variances on the previous tests and demonstrate that these tests can be seriously size distorted. Subsequently, some new tests that allow for periodic heteroskedasticity are proposed.
    Keywords: Additive outliers, outlier detection, integrated processes, periodic heteroscedasticity, seasonality.
    JEL: C12 C2 C22
    Date: 2004–12–21
    URL: http://d.repec.org/n?u=RePEc:aah:aarhec:2004-14&r=ets
  2. By: Peter M Robinson
    Abstract: Asymptotic inference on nonstationary fractional time series models, including cointegrated ones, is proceeding along two routes, determined by alternative definitions of nonstationary processes. We derive bounds for the mean squared error of the difference between (possibly tapered) discrete Fourier transforms under two regimes. We apply the results to deduce limit theory for estimates of memory parameters, including ones for cointegrated errors, with mention also of implications for estimates of cointegrating coefficients.
    Keywords: Nonstationary fractional processes, memory parameter estimation, fractional cointegration, rates of convergence.
    Date: 2004–03
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/468&r=ets
  3. By: Peter M Robinson
    Abstract: Smoothed nonparametric estimates of the spectral density matrix at zero frequency have been widely used in econometric inference, because they can consistently estimate the covariance matrix of a partial sum of a possibly dependent vector process. When elements of the vector process exhibit long memory or antipersistence such estimates are inconsistent. We propose estimates which are still consistent in such circumstances, adapting automatically to memory parameters that can vary across the vector and be unknown.
    Keywords: Covariance matrix estimation, long memory, antipersistence correction, "HAC" estimates, vector process, spectral density.
    Date: 2004–03
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/471&r=ets
  4. By: Oliver Linton
    Abstract: This paper is concerned with the practical problem of conducting inference in a vector time series setting when the data is unbalanced or incomplete. In this case, one can work only with the common sample, to which a standard HAC/Bootstrap theory applies, but at the expense of throwing away data and perhaps losing efficiency. An alternative is to use some sort of imputation method, but this requires additional modelling assumptions, which we would rather avoid. We show how the sampling theory changes and how to modify the resampling algorithms to accommodate the problem of missing data. We also discuss efficiency and power. Unbalanced data of the type we consider are quite common in financial panel data, see, for example, Connor and Korajczyk (1993). These data also occur in cross-country studies.
    Keywords: Bootstrap, efficient, HAC estimation, missing data, subsampling.
    Date: 2004–04
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/474&r=ets
  5. By: Fabrizio Iacone; Peter M Robinson
    Abstract: We consider a cointegrated system generated by processes that may be fractionally integrated, and by additive polynomial and generalized polynomial trends. In view of the consequent competition between stochastic and deterministic trends, we consider various estimates of the cointegrating vector and develop relevant asymptotic theory, including the situation where fractional orders of integration are unknown.
    Keywords: Fractional cointegration, deterministic trends, ordinary least squares estimation, generalized least squares estimation, Wald tests.
    Date: 2004–05
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/476&r=ets
  6. By: Trino-Manuel Niguez; Javier Perote
    Abstract: In this paper we introduce a transformation of the Edgeworth-Sargan series expansion of the Gaussian distribution, which we call the Positive Edgeworth-Sargan (PES) density. The main advantage of this new density is that it is well defined for all values in the parameter space and integrates to one. We include an illustrative empirical application comparing its performance with other distributions, including the Gaussian and the Student's t, in forecasting the full density of daily exchange-rate returns, using graphical procedures. Our results show that the proposed function outperforms the other two models for density forecasting, thus providing more reliable value-at-risk forecasts.
    Keywords: Density forecasting, Edgeworth-Sargan distribution, probability integral transformations, P-value plots, VaR
    JEL: C16 C53 G12
    Date: 2004–10
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/479&r=ets
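The graphical diagnostics mentioned in this abstract rest on the probability integral transform (PIT): if the forecast density is correct, the PITs of the realized returns are i.i.d. uniform on [0, 1]. A minimal sketch of the idea, using a Gaussian forecast density rather than the paper's PES (all parameter values and the uniformity check are illustrative, not the paper's procedure):

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a Gaussian forecast density."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

random.seed(0)
# Simulated returns whose true density matches the forecast density.
returns = [random.gauss(0.0, 1.0) for _ in range(5000)]

# Probability integral transforms: uniform on [0, 1] iff the
# forecast density matches the true return density.
pit = [normal_cdf(r) for r in returns]

# Crude uniformity check: each decile should hold roughly 10% of the mass.
deciles = [0] * 10
for z in pit:
    deciles[min(int(z * 10), 9)] += 1
shares = [d / len(pit) for d in deciles]
```

In practice the paper inspects P-value plots built from these transforms; a formal test (e.g. Kolmogorov-Smirnov) could replace the decile count.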
  7. By: Peter M Robinson
    Abstract: We consider a time series model involving a fractional stochastic component, whose integration order can lie in the stationary/invertible or nonstationary regions and be unknown, and an additive deterministic component consisting of a generalised polynomial. The model can thus incorporate competing descriptions of trending behaviour. The stationary input to the stochastic component has parametric autocorrelation, but innovations whose distribution is of unknown form. The model is thus semiparametric, and we develop estimates of the parametric component which are asymptotically normal and achieve an M-estimation efficiency bound, equal to that found in work using an adaptive LAM/LAN approach. A major technical feature which we treat is the effect of truncating the autoregressive representation in order to form innovation proxies. This is relevant also when the innovation density is parameterised, and we provide a result for that case as well. Our semiparametric estimates employ nonparametric series estimation, which avoids some complications and conditions in kernel approaches featured in much work on adaptive estimation of time series models; our work thus also contributes to methods and theory for non-fractional time series models, such as autoregressive moving averages. A Monte Carlo study of finite sample performance of the semiparametric estimates is included.
    Keywords: fractional processes, efficient semiparametric estimation, adaptive estimation, nonstationary processes, series estimation, M-estimation
    JEL: C22
    Date: 2004–11
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2004/480&r=ets
  8. By: Rombouts, J.V.K.; Verbeek, M. (Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam)
    Abstract: In this paper we examine the usefulness of multivariate semi-parametric GARCH models for portfolio selection under a Value-at-Risk (VaR) constraint. First, we specify and estimate several alternative multivariate GARCH models for daily returns on the S&P 500 and Nasdaq indexes. Examining the within sample VaRs of a set of given portfolios shows that the semi-parametric model performs uniformly well, while parametric models in several cases have unacceptable failure rates. Interestingly, distributional assumptions appear to have a much larger impact on the performance of the VaR estimates than the particular parametric specification chosen for the GARCH equations. Finally, we examine the economic value of the multivariate GARCH models by determining optimal portfolios based on maximizing expected returns subject to a VaR constraint, over a period of 500 consecutive days. Again, the superiority and robustness of the semi-parametric model are confirmed.
    Keywords: Multivariate GARCH; semi-parametric estimation; Value-at-Risk; asset allocation
    Date: 2004–12–22
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:30001977&r=ets
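As background to the VaR comparisons above, a one-step-ahead parametric VaR can be read off a GARCH(1,1) variance recursion. This is a rough sketch with assumed (not estimated) parameter values and a Gaussian quantile; the paper's semi-parametric approach would replace that distributional assumption with a nonparametric innovation density:

```python
import math
import random

def garch11_var(returns, omega, alpha, beta, level=0.99):
    """One-step-ahead value-at-risk from a GARCH(1,1) variance
    recursion with Gaussian innovations. Parameters are taken as
    given here; in practice they are estimated by maximum likelihood."""
    # Start the recursion at the unconditional variance.
    var_t = omega / (1.0 - alpha - beta)
    for r in returns:
        var_t = omega + alpha * r * r + beta * var_t
    # Standard normal quantile; VaR reported as a positive loss number.
    z = 2.3263478740408408 if level == 0.99 else 1.6448536269514722
    return z * math.sqrt(var_t)

random.seed(1)
# 250 simulated daily returns (illustrative data, not the S&P/Nasdaq series).
rets = [random.gauss(0.0, 0.01) for _ in range(250)]
var99 = garch11_var(rets, omega=1e-6, alpha=0.05, beta=0.90)
```

A backtest along the lines of the paper would then count how often realized losses exceed the reported VaR (the failure rate) and compare it with the nominal 1%.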
  9. By: David E. Giles (Department of Economics, University of Victoria); Chad N. Stroomer (Department of Economics, University of Victoria)
    Abstract: This paper presents a new method for extracting the cycle from an economic time series. This method uses the fuzzy c-means clustering algorithm, drawn from the pattern recognition literature, to identify groups of observations. The time series is modeled over each of these sub-samples, and the results are combined using the “degrees of membership” for each data-point with each cluster. The result is a totally flexible model that readily captures complex non-linearities in the data. This type of “fuzzy regression” analysis has been shown by Giles and Draeseke (2003) to be highly effective in a broad range of situations with economic data. The fuzzy filter that we develop here is compared with the well-known Hodrick-Prescott (HP) filter in a Monte Carlo experiment, and the new filter is found to perform as well as, or better than, the HP filter. The advantage of the fuzzy filter is especially pronounced when the data have a deterministic, rather than stochastic, trend. Applications with real time-series illustrate the different conclusions that can emerge when the fuzzy regression filter and the HP filter are each applied to extract the cycle.
    Keywords: Fuzzy filter, fuzzy clustering, business cycle, trend extraction, HP filter
    JEL: C19 C22 E32
    Date: 2004–12–29
    URL: http://d.repec.org/n?u=RePEc:vic:vicewp:0406&r=ets
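The fuzzy c-means algorithm underlying the filter assigns each observation a degree of membership in every cluster rather than a hard label, and those memberships are what weight the sub-sample models together. A minimal one-dimensional sketch (the cluster count, fuzzifier m, iteration budget, and data are illustrative choices, not the paper's settings):

```python
import random

def fuzzy_cmeans(data, k=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: returns cluster centres and the
    degree of membership of each point in each cluster."""
    centres = random.sample(data, k)
    u = []
    for _ in range(iters):
        # Membership of point x in cluster j falls with distance to centre j.
        u = []
        for x in data:
            d = [abs(x - c) + 1e-12 for c in centres]
            row = [1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                             for l in range(k)) for j in range(k)]
            u.append(row)
        # Centres are membership-weighted means of the data.
        centres = [sum(u[i][j] ** m * data[i] for i in range(len(data))) /
                   sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(k)]
    return centres, u

random.seed(2)
# Two well-separated regimes, as a stand-in for distinct sub-samples.
data = [random.gauss(0.0, 0.3) for _ in range(100)] + \
       [random.gauss(5.0, 0.3) for _ in range(100)]
centres, memberships = fuzzy_cmeans(data, k=2)
```

In the paper's fuzzy regression filter, a separate model is fit over each cluster and each point's fitted value is the membership-weighted combination of the cluster fits.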
  10. By: Peter F. Christoffersen (McGill University and CIRANO); Francis X. Diebold (Department of Economics, University of Pennsylvania and NBER)
    Abstract: We consider three sets of phenomena that feature prominently - and separately - in the financial economics literature: conditional mean dependence (or lack thereof) in asset returns, dependence (and hence forecastability) in asset return signs, and dependence (and hence forecastability) in asset return volatilities. We show that they are very much interrelated, and we explore the relationships in detail. Among other things, we show that (a) Volatility dependence produces sign dependence, so long as expected returns are nonzero, so that one should expect sign dependence, given the overwhelming evidence of volatility dependence; (b) The standard finding of little or no conditional mean dependence is entirely consistent with a significant degree of sign dependence and volatility dependence; (c) Sign dependence is not likely to be found via analysis of sign autocorrelations, runs tests, or traditional market timing tests, because of the special nonlinear nature of sign dependence; (d) Sign dependence is not likely to be found in very high-frequency (e.g., daily) or very low-frequency (e.g., annual) returns; instead, it is more likely to be found at intermediate return horizons; (e) Sign dependence is very much present in actual U.S. equity returns, and its properties match closely our theoretical predictions; (f) The link between volatility forecastability and sign forecastability remains intact in conditionally non-Gaussian environments, as for example with time-varying conditional skewness and/or kurtosis.
    Keywords: Conditional Mean Dependence, Conditional Volatility Dependence, Sign Dependence, VIX
    JEL: C22 C53
    Date: 2003–09–22
    URL: http://d.repec.org/n?u=RePEc:pen:papers:04-009&r=ets
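The mechanism behind point (a) can be stated in one line: for a Gaussian return with mean mu and conditional volatility sigma_t, Pr(return > 0) = Phi(mu / sigma_t), so any forecastable movement in sigma_t translates into forecastable movement in the sign probability whenever mu is nonzero. A small numerical illustration (the mean and volatility values below are made up):

```python
import math

def prob_positive(mu, sigma):
    """Pr(R > 0) for a Gaussian return with mean mu and volatility sigma:
    Phi(mu / sigma). With mu fixed and nonzero, any change in sigma moves
    this probability, i.e. volatility dependence induces sign dependence."""
    return 0.5 * (1.0 + math.erf(mu / (sigma * math.sqrt(2.0))))

mu = 0.0005                           # small positive expected daily return
p_calm = prob_positive(mu, 0.005)     # low-volatility day
p_wild = prob_positive(mu, 0.02)      # high-volatility day
```

Note the limiting cases: with mu = 0 the probability is pinned at 0.5 regardless of volatility (no sign dependence), and as sigma grows the probability decays toward 0.5, consistent with the paper's point that sign forecastability is strongest at intermediate horizons.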
  11. By: Francis X. Diebold (Department of Economics, University of Pennsylvania and NBER)
    Abstract: Engle’s footsteps range widely. His major contributions include early work on band-spectral regression, development and unification of the theory of model specification tests (particularly Lagrange multiplier tests), clarification of the meaning of econometric exogeneity and its relationship to causality, and his later stunningly influential work on common trend modeling (cointegration) and volatility modeling (ARCH, short for AutoRegressive Conditional Heteroskedasticity). More generally, Engle’s cumulative work is a fine example of best-practice applied time-series econometrics: he identifies important dynamic economic phenomena, formulates precise and interesting questions about those phenomena, constructs sophisticated yet simple econometric models for measurement and testing, and consistently obtains results of widespread substantive interest in the scientific, policy, and financial communities.
    Keywords: Econometric Theory, Finance
    JEL: B31 C10
    Date: 2004–02–01
    URL: http://d.repec.org/n?u=RePEc:pen:papers:04-010&r=ets
  12. By: Richard Harrison; George Kapetanios; Tony Yates
    Abstract: This paper explores the effects of measurement error on dynamic forecasting models. It illustrates a trade-off that confronts forecasters and policymakers when they use data that are measured with error. On the one hand, observations on recent data give valuable clues as to the shocks that are hitting the system and that will be propagated into the variables to be forecast. But on the other, those recent observations are likely to be those least well measured. The paper studies two classes of forecasting problem. The first class includes cases where the forecaster takes the coefficients in the data-generating process as given, and has to choose how much of the historical time series of data to use to form a forecast. We show that if recent data are sufficiently badly measured, relative to older data, it can be optimal not to use recent data at all. The second class of problems we study is more general. We show that for a general class of linear autoregressive forecasting models, the optimal weight to place on a data observation of some age, relative to the weight in the true data-generating process, will depend on the measurement error in that observation. We illustrate the gains in forecasting performance using a model of UK business investment growth.
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:237&r=ets
  13. By: Torben G. Andersen (Department of Economics, Northwestern University); Tim Bollerslev (Department of Economics, Duke University); Francis X. Diebold (Department of Economics, University of Pennsylvania); Jin Wu (Department of Economics, University of Pennsylvania)
    Abstract: A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently-popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally-integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management.
    Keywords: Quadratic variation and covariation, realized volatility, asset pricing, CAPM, equity betas, long memory, nonlinear fractional cointegration, continuous-time methods
    JEL: C1 G1
    Date: 2003–01–03
    URL: http://d.repec.org/n?u=RePEc:pen:papers:04-018&r=ets
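A realized beta over a window is simply the realized covariance of the stock with the market divided by the realized market variance, both computed by summing products of high-frequency returns. A sketch on simulated one-minute returns (the sample size, volatilities, and true beta are illustrative):

```python
import random

def realized_beta(stock_rets, mkt_rets):
    """Realized beta over a window: realized covariance of the stock
    with the market divided by realized market variance, computed
    from high-frequency returns."""
    cov = sum(s * m for s, m in zip(stock_rets, mkt_rets))
    var = sum(m * m for m in mkt_rets)
    return cov / var

random.seed(3)
# One trading day of simulated one-minute market returns.
market = [random.gauss(0.0, 0.001) for _ in range(390)]
true_beta = 1.2
# Stock returns: market exposure plus idiosyncratic noise.
stock = [true_beta * m + random.gauss(0.0, 0.001) for m in market]
beta_hat = realized_beta(stock, market)
```

The paper's point is about the time series of such daily beta_hat values: even though the realized variance and covariance series are highly persistent, their ratio is much less so.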
  14. By: George Kapetanios; Tony Yates
    Abstract: Over time, economic statistics are refined. This means that newer data are typically less well measured than old data. Time or vintage-variation in measurement error like this influences how forecasts should be made. Measurement error is obviously not directly observable. This paper shows that modelling the behaviour of the statistics agency generates an estimate of this time-variation. This provides an alternative to assuming that the final releases of variables are true. The paper applies the method to UK aggregate expenditure data, and demonstrates the gains in forecasting from exploiting these model-based estimates of measurement error.
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:238&r=ets
  15. By: David Mandy (Department of Economics, University of Missouri-Columbia); Sandor Fridli
    Abstract: We show under very parsimonious assumptions that FGLS and GLS are asymptotically equivalent when errors follow an invertible MA(1) process. Although the linear regression model with MA errors has been studied for many years, asymptotic equivalence of FGLS and GLS has never been established for this model. We do not require anything beyond a finite second moment of the conditional white noise, uniformly bounded fourth moments and independence of the regressor vectors, consistency of the estimator for the MA parameter, and a finite nonsingular probability limit for the (transformed) averages of the regressors. These assumptions are analogous to assumptions typically used to prove asymptotic equivalence of FGLS and GLS in SUR models, models with AR(p) errors, and models of parametric heteroscedasticity.
    JEL: L5
    Date: 2004–12–16
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:0405&r=ets
  16. By: Michael W. McCracken (Department of Economics, University of Missouri-Columbia)
    Abstract: This paper presents analytical, Monte Carlo and empirical evidence concerning out-of-sample tests of Granger causality. The environment is one in which the relative predictive ability of two nested parametric regression models is of interest. Results are provided for three statistics: a regression-based statistic suggested by Granger and Newbold (1977), a t-type statistic comparable to those suggested by Diebold and Mariano (1995) and West (1996), and an F-type statistic akin to Theil’s U. Since the asymptotic distributions under the null are nonstandard, tables of asymptotically valid critical values are provided. Monte Carlo evidence supports the theoretical results. An empirical example relating the predictive content of an interest spread to growth shows that the tests can provide a useful model selection tool for forecasting.
    Keywords: Granger causality, forecast evaluation, hypothesis testing, model selection
    JEL: C12 C32 C52 C53
    Date: 2004–12–23
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:0406&r=ets
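The out-of-sample comparison at the heart of such tests can be sketched as follows: recursively forecast y from a restricted model (omitting the candidate causal variable) and an unrestricted model (adding its lag), then compare squared forecast errors. This toy version stops at the raw MSE comparison and does not implement the paper's test statistics or critical values:

```python
import random

random.seed(4)
T = 400
x = [random.gauss(0.0, 1.0) for _ in range(T)]
# y depends on lagged x, so x Granger-causes y by construction.
y = [0.0] + [0.5 * x[t - 1] + random.gauss(0.0, 1.0) for t in range(1, T)]

def ols_fit(xs, ys):
    """Simple-regression intercept and slope by OLS."""
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    num = sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
    den = sum((a - xbar) ** 2 for a in xs)
    slope = num / den
    return ybar - slope * xbar, slope

# Recursive out-of-sample forecasts: restricted model (historical mean
# of y) versus unrestricted model (y regressed on lagged x).
start = 200
err_r, err_u = [], []
for t in range(start, T):
    mean_y = sum(y[1:t]) / (t - 1)
    a, b = ols_fit(x[:t - 1], y[1:t])
    err_r.append((y[t] - mean_y) ** 2)
    err_u.append((y[t] - (a + b * x[t - 1])) ** 2)

mse_r = sum(err_r) / len(err_r)
mse_u = sum(err_u) / len(err_u)
```

Because the models are nested, the difference in MSEs has a nonstandard null distribution, which is exactly why the paper tabulates its own critical values rather than using standard normal ones.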
  17. By: Michael W. McCracken (Department of Economics, University of Missouri-Columbia); Todd E. Clark (Federal Reserve Bank of Kansas City)
    Abstract: This paper presents analytical, Monte Carlo, and empirical evidence on the effectiveness of combining recursive and rolling forecasts when linear predictive models are subject to structural change. We first provide a characterization of the bias-variance tradeoff faced when choosing between the recursive and rolling schemes or a scalar convex combination of the two. From that, we derive pointwise optimal, time-varying and data-dependent observation windows and combining weights designed to minimize mean square forecast error. We then proceed to consider other methods of forecast combination, including Bayesian methods that shrink the rolling forecast toward the recursive one, and Bayesian model averaging. Monte Carlo experiments and several empirical examples indicate that although the recursive scheme is often difficult to beat, when gains can be obtained, some form of shrinkage can often provide improvements in forecast accuracy relative to forecasts made using the recursive scheme or the rolling scheme with a fixed window width.
    Keywords: structural breaks, forecasting, model averaging.
    JEL: C53 C12 C52
    Date: 2004–12–23
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:0420&r=ets
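The bias-variance tradeoff the paper characterizes is easy to see in simulation: after a structural break, the recursive mean stays contaminated by pre-break data while a rolling window adapts, and a convex combination splits the difference. A sketch with a fixed combining weight and window width (the paper instead derives time-varying, data-dependent choices; all numbers here are illustrative):

```python
import random

random.seed(5)
T = 300
# Simulated series with a structural break: the mean shifts at t = 150.
y = [random.gauss(1.0 if t < 150 else 3.0, 1.0) for t in range(T)]

window = 40    # rolling window width (arbitrary choice for illustration)
weight = 0.5   # fixed convex combining weight on the recursive forecast
e_rec, e_rol, e_cmb = [], [], []
for t in range(window, T):
    recursive = sum(y[:t]) / t                 # uses all history
    rolling = sum(y[t - window:t]) / window    # uses the recent window only
    combined = weight * recursive + (1 - weight) * rolling
    e_rec.append((y[t] - recursive) ** 2)
    e_rol.append((y[t] - rolling) ** 2)
    e_cmb.append((y[t] - combined) ** 2)

mse_rec = sum(e_rec) / len(e_rec)
mse_rol = sum(e_rol) / len(e_rol)
mse_cmb = sum(e_cmb) / len(e_cmb)
```

With this large, mid-sample break the recursive scheme is badly biased afterwards, so both the rolling scheme and the combination beat it; absent a break, the ranking typically reverses, which is the tradeoff the optimal weights are designed to navigate.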
  18. By: Michael W. McCracken (Department of Economics, University of Missouri-Columbia); Todd E. Clark (Federal Reserve Bank of Kansas City)
    Abstract: This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy and encompassing applied to predictions from nested long-horizon regression models. We first derive the asymptotic distributions of a set of tests of equal forecast accuracy and encompassing, showing that the tests have non-standard distributions that depend on the parameters of the data-generating process. Using a simple model-based bootstrap for inference, we then conduct Monte Carlo simulations of a range of data-generating processes to examine the finite-sample size and power of the tests. In these simulations, the bootstrap yields tests with good finite-sample size and power properties, with the encompassing test proposed by Clark and McCracken (2001) having superior power. The paper concludes with a reexamination of the predictive content of capacity utilization for core inflation.
    Keywords: Forecast evaluation, prediction, causality
    JEL: C53 C12 C52
    Date: 2004–12–27
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:0302&r=ets
  19. By: Michael W. McCracken (Department of Economics, University of Missouri-Columbia); Todd E. Clark (Federal Reserve Bank of Kansas City)
    Abstract: This paper presents analytical, Monte Carlo, and empirical evidence on the effects of structural breaks on tests for equal forecast accuracy and forecast encompassing. The forecasts are generated from two parametric, linear models that are nested under the null. The alternative hypotheses allow a causal relationship that is subject to breaks during the sample. With this framework, we show that in-sample explanatory power is readily found because the usual F-test will indicate causality if it existed for any portion of the sample. Out-of-sample predictive power can be harder to find because the results of out-of-sample tests are highly dependent on the timing of the predictive ability. Moreover, out-of-sample predictive power is harder to find with some tests than with others: the power of F-type tests of equal forecast accuracy and encompassing often dominates that of the more commonly-used t-type alternatives. Overall, out-of-sample tests are effective at revealing whether one variable has predictive power for another at the end of the sample. Based on these results and additional evidence from an empirical application relating GDP growth to an interest rate term spread, we conclude that structural breaks can explain why researchers often find evidence of in-sample, but not out-of-sample, predictive content.
    Keywords: power, structural breaks, forecast evaluation, model selection
    JEL: C53 C12 C52
    Date: 2004–12–27
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:0303&r=ets

This nep-ets issue is ©2005 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.