nep-ets New Economics Papers
on Econometric Time Series
Issue of 2009‒07‒03
fourteen papers chosen by
Yong Yin
SUNY at Buffalo

  1. Realised Quantile-Based Estimation of the Integrated Variance By Kim Christensen; Roel Oomen; Mark Podolskij
  2. Tests for Changing Mean with Monotonic Power By Ted Juhl; Zhijie Xiao
  3. Quantile Cointegrating Regression By Zhijie Xiao
  4. Finite State Markov-Chain Approximations to Highly Persistent Processes By Karen A. Kopecky; Richard M. H. Suen
  5. Forecast accuracy and economic gains from Bayesian model averaging using time varying weights By Lennart Hoogerheide; Richard Kleijn; Francesco Ravazzolo; Herman K. van Dijk; Marno Verbeek
  6. Exponential Smoothing and the Akaike Information Criterion By Ralph D. Snyder; J. Keith Ord
  7. A Nonparametric Copula Based Test for Conditional Independence with Applications to Granger Causality By Taoufik Bouezmarni; Jeroen Rombouts; Abderrahim Taamouti
  8. The Power of Bootstrap Tests of Cointegration Rank with Financial Time Series By Ahlgren, Niklas; Antell, Jan
  9. Forecasting realized (co)variances with a block structure Wishart autoregressive model By Bonato, Matteo; Caporin, Massimiliano; Ranaldo, Angelo
  10. Forecasting Aggregated Time Series Variables: A Survey By Helmut Luetkepohl
  11. "A space-time filter for panel data models containing random effects" By Olivier Parent; James P. Lesage
  12. Testing the hypothesis of contagion using multivariate volatility models By Pereira, Pedro L. Valls
  13. Forecasting Levels of log Variables in Vector Autoregressions By Gunnar Bårdsen; Helmut Lütkepohl
  14. A nonparametric approach to forecasting realized volatility By Adam Clements; Ralf Becker

  1. By: Kim Christensen (Aarhus University and CREATES); Roel Oomen (Deutsche Bank, London, UK and the Department of Quantitative Economics, the University of Amsterdam, The Netherlands); Mark Podolskij (ETH Zürich, Switzerland and CREATES)
    Abstract: In this paper, we propose a new jump-robust quantile-based realised variance measure of ex-post return variation that can be computed using potentially noisy data. This new estimator is consistent for the integrated variance, and we present feasible central limit theorems which show that it converges at the best attainable rate and has excellent efficiency. Asymptotically, the quantile-based realised variance is immune to finite activity jumps and outliers in the price series, while in modified form the estimator is applicable with market microstructure noise and therefore operational on high-frequency data. Simulations show that it also has superior robustness properties in finite samples, while an empirical application illustrates its use on equity data.
    Keywords: Finite activity jumps, Integrated variance, Market microstructure noise, Order statistics, Outliers, Realised variance
    JEL: C10 C80
    Date: 2009–05–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-27&r=ets
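    The quantile-based idea in item 1 lends itself to a compact illustration. The Python code below is a minimal sketch of a blocked quantile estimator of integrated variance, not the paper's exact estimator: the scaling constant is approximated by Monte Carlo under an i.i.d. Gaussian null, block size and quantile level are arbitrary choices, and no microstructure-noise correction is applied. It conveys why per-block quantiles of squared returns, suitably rescaled, recover integrated variance while remaining insensitive to a few outliers.

      import numpy as np

      def qrv(returns, m=20, lam=0.90, n_sim=20000, seed=0):
          """Blocked quantile-based realised variance (illustrative sketch)."""
          rng = np.random.default_rng(seed)
          # Scaling constant: mean lam-quantile of m squared N(0,1) draws
          nu = np.quantile(rng.standard_normal((n_sim, m)) ** 2, lam, axis=1).mean()
          n = len(returns) - len(returns) % m            # drop any incomplete block
          blocks = np.asarray(returns[:n], dtype=float).reshape(-1, m)
          q = np.quantile(blocks ** 2, lam, axis=1)      # per-block quantile
          return m * q.sum() / nu                        # estimates integrated variance

      # One simulated "day" of 390 one-minute returns with true IV = 1e-4
      rng = np.random.default_rng(1)
      print(qrv(rng.normal(0.0, np.sqrt(1e-4 / 390), size=390)))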
  2. By: Ted Juhl (University of Kansas); Zhijie Xiao (Boston College)
    Abstract: Several widely used tests for a changing mean exhibit nonmonotonic power in finite samples due to "incorrect" estimation of nuisance parameters under the alternative. In this paper, we study the issue of nonmonotonic power in testing for a changing mean. We investigate the asymptotic power properties of the tests using a new framework in which alternatives are characterized as having "large" changes. The asymptotic analysis provides a theoretical explanation for the power problem. Modified tests that have monotonic power against a wide range of structural-change alternatives are proposed. Instead of estimating the nuisance parameters from ordinary least squares residuals, the proposed tests use modified estimators based on nonparametric regression residuals. It is shown that tests based on the modified long-run variance estimator diverge at an improved rate under the alternative of a change in mean. Tests for structural breaks based on such an estimator remain consistent while retaining the same asymptotic distribution under the null hypothesis of a constant mean.
    Keywords: stability, changing parameters, time varying parameters
    JEL: C22
    Date: 2009–06–17
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:709&r=ets
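    To make the mechanics of item 2 concrete, here is a minimal Python sketch, not the authors' exact statistic: a sup-CUSUM test for a change in mean whose long-run variance is estimated from residuals of a kernel regression of the series on time, so that a shifting mean does not inflate the denominator and destroy power. The bandwidth and the Bartlett lag rule are illustrative assumptions.

      import numpy as np

      def sup_cusum(y, h=0.1):
          """Sup-CUSUM test for a change in mean (illustrative sketch)."""
          y = np.asarray(y, dtype=float)
          n = len(y)
          t = np.arange(n) / n

          # Kernel (Nadaraya-Watson) fit of y on time: its residuals track a
          # shifting mean, so a mean change does not inflate the variance estimate.
          w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
          e = y - w @ y / w.sum(axis=1)

          # Bartlett long-run variance from the nonparametric residuals
          q = int(4 * (n / 100) ** (2 / 9))
          lrv = e @ e / n + 2 * sum((1 - j / (q + 1)) * (e[:-j] @ e[j:]) / n
                                    for j in range(1, q + 1))

          # Sup of normalised partial sums of deviations from the global mean;
          # under the null this is asymptotically sup|Brownian bridge|
          # (5% critical value roughly 1.36).
          s = np.cumsum(y - y.mean())
          return np.abs(s).max() / np.sqrt(n * lrv)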
  3. By: Zhijie Xiao (Boston College)
    Abstract: Quantile regression has important applications in risk management, portfolio optimization, and asset pricing. The current paper studies estimation, inference, and financial applications of quantile regression with cointegrated time series. In addition, a new cointegration model with varying coefficients is proposed. In the proposed model, the values of the cointegrating coefficients may be affected by shocks and thus may vary across innovation quantiles. The proposed model may be viewed as a stochastic cointegration model that includes the conventional cointegration model as a special case. It also provides a useful complement to cointegration models with (G)ARCH effects. Asymptotic properties of the proposed model and the limiting distribution of the cointegrating regression quantiles are derived. In the presence of endogenous regressors, fully-modified quantile regression estimators and augmented quantile cointegrating regression are proposed to remove the second-order bias and nuisance parameters. Regression Wald tests are constructed based on the fully-modified quantile regression estimators. An empirical application to stock index data highlights the potential of the proposed method.
    Keywords: ARCH/GARCH, Cointegration, Portfolio Optimization, Quantile Regression, Time Varying
    JEL: C22 G1
    Date: 2009–01–31
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:708&r=ets
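    As a toy illustration of item 3, the Python sketch below runs plain quantile regression on a simulated cointegrated pair using statsmodels' QuantReg. It does not implement the paper's fully-modified or augmented estimators, which are needed when the regressor is endogenous; the simulated design and quantile levels are assumptions for illustration only.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      x = np.cumsum(rng.standard_normal(500))        # I(1) regressor (random walk)
      y = 0.8 * x + rng.standard_normal(500)         # cointegrated, true slope 0.8
      X = sm.add_constant(x)
      for tau in (0.1, 0.5, 0.9):
          slope = sm.QuantReg(y, X).fit(q=tau).params[1]
          print(f"tau = {tau}: cointegrating slope = {slope:.3f}")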
  4. By: Karen A. Kopecky (Department of Economics, The University of Western Ontario); Richard M. H. Suen (Department of Economics, University of California Riverside)
    Abstract: This paper re-examines the Rouwenhorst method of approximating first-order autoregressive processes. This method is appealing because it can match the conditional and unconditional mean, the conditional and unconditional variance, and the first-order autocorrelation of any AR(1) process. This paper provides the first formal proof of this and other results. In a comparison with five other methods, the Rouwenhorst method performs best in approximating the business cycle moments generated by the stochastic growth model. It is shown that, equipped with the Rouwenhorst method, an alternative approach to generating these moments achieves a higher degree of accuracy than the simulation method.
    Keywords: Numerical Methods, Finite State Approximations, Optimal Growth Model
    JEL: C63
    Date: 2009–05
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:200904&r=ets
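    The Rouwenhorst construction that item 4 analyses is short enough to state in code. The Python sketch below follows the standard textbook recipe for an AR(1) process y' = rho*y + eps with eps ~ N(0, sigma_eps^2); the example parameter values at the end are illustrative.

      import numpy as np

      def rouwenhorst(n, rho, sigma_eps):
          """Rouwenhorst discretisation of an AR(1) process (standard recipe).

          Returns an n-point symmetric grid and an n x n transition matrix that
          match the persistence and unconditional variance of the AR(1) exactly.
          """
          p = (1 + rho) / 2
          # Recursive construction of the transition matrix
          P = np.array([[p, 1 - p], [1 - p, p]])
          for m in range(3, n + 1):
              Z = np.zeros((m, m))
              Z[:m-1, :m-1] += p * P
              Z[:m-1, 1:]  += (1 - p) * P
              Z[1:, :m-1]  += (1 - p) * P
              Z[1:, 1:]    += p * P
              Z[1:-1, :] /= 2            # interior rows are counted twice
              P = Z
          # Evenly spaced grid matching the AR(1)'s unconditional variance
          psi = sigma_eps * np.sqrt((n - 1) / (1 - rho ** 2))
          return np.linspace(-psi, psi, n), P

      grid, P = rouwenhorst(5, rho=0.99, sigma_eps=0.01)
      print(P.sum(axis=1))    # each row of the transition matrix sums to one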
  5. By: Lennart Hoogerheide; Richard Kleijn; Francesco Ravazzolo; Herman K. van Dijk; Marno Verbeek (Econometric and Tinbergen Institutes, Erasmus University Rotterdam; PGGM, Zeist; Norges Bank; Econometric and Tinbergen Institutes, Erasmus University Rotterdam; Rotterdam School of Management, Erasmus University Rotterdam)
    Abstract: Several Bayesian model combination schemes, including some novel approaches that simultaneously allow for parameter uncertainty, model uncertainty and robust time varying model weights, are compared in terms of forecast accuracy and economic gains using financial and macroeconomic time series. The results indicate that the proposed time varying model weight schemes outperform other combination schemes in terms of predictive and economic gains. In an empirical application using returns on the S&P 500 index, time varying model weights provide improved forecasts with substantial economic gains in an investment strategy including transaction costs. Another empirical example refers to forecasting US economic growth over the business cycle. It suggests that time varying combination schemes may be very useful in business cycle analysis and forecasting, as these may provide an early indicator for recessions.
    Keywords: Forecast combination, Bayesian model averaging, time varying model weights, portfolio optimization, business cycle
    Date: 2009–06–23
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2009_10&r=ets
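    A stylised version of time-varying combination weights can be written in a few lines. The Python sketch below is a simplified stand-in for the schemes in item 5, not the paper's method: weights are made proportional to exponentially discounted products of past one-step predictive densities, so models that have forecast well recently receive more weight. The discount factor delta and the equal-weight initialisation are assumptions.

      import numpy as np

      def time_varying_weights(pred_dens, delta=0.95):
          """Discounted predictive-likelihood model weights (illustrative sketch).

          pred_dens: (T, M) array of one-step-ahead predictive densities that M
          models assigned to the realised observations.
          """
          T, M = pred_dens.shape
          log_score = np.zeros(M)
          weights = np.full((T, M), 1.0 / M)       # equal weights at the start
          for t in range(1, T):
              log_score = delta * log_score + np.log(pred_dens[t - 1])
              w = np.exp(log_score - log_score.max())
              weights[t] = w / w.sum()
          return weights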
  6. By: Ralph D. Snyder; J. Keith Ord
    Abstract: Using an innovations state space approach, it has been found that the Akaike information criterion (AIC) works slightly better, on average, than prediction validation on withheld data, for choosing between the various common methods of exponential smoothing for forecasting. There is, however, a puzzle. Should the count of the seed states be incorporated into the penalty term in the AIC formula? We examine arguments for and against this practice in an attempt to find an acceptable resolution of this question.
    Keywords: Exponential smoothing, forecasting, Akaike information criterion, innovations state space approach
    JEL: C22
    Date: 2009–06–11
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2009-4&r=ets
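    The seed-state question in item 6 is easy to make concrete. The Python sketch below fits simple exponential smoothing by a crude grid search over the smoothing constant and seed level, concentrates out the innovation variance, and computes the AIC with the seed level either counted in the penalty or not; the grid resolution and the dropped likelihood constants are arbitrary choices.

      import numpy as np

      def ses_aic(y, count_seed=True):
          """AIC for simple exponential smoothing (illustrative sketch)."""
          y = np.asarray(y, dtype=float)
          n = len(y)

          def sse(theta):
              alpha, level = theta
              total = 0.0
              for obs in y:
                  e = obs - level
                  total += e * e
                  level += alpha * e
              return total

          grid = ((a, l) for a in np.linspace(0.01, 0.99, 99)
                         for l in np.linspace(y.min(), y.max(), 50))
          best = min(grid, key=sse)
          k = 2 + (1 if count_seed else 0)   # alpha and sigma^2, plus seed l0?
          return n * np.log(sse(best) / n) + 2 * k

      # The two conventions differ by exactly 2 in the penalty term:
      # ses_aic(y, count_seed=True) - ses_aic(y, count_seed=False) == 2.0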
  7. By: Taoufik Bouezmarni; Jeroen Rombouts; Abderrahim Taamouti
    Abstract: This paper proposes a new nonparametric test for conditional independence, which is based on the comparison of Bernstein copula densities using the Hellinger distance. The test is easy to implement because it does not involve a weighting function in the test statistic, and it can be applied in general settings since there is no restriction on the dimension of the data. In fact, to apply the test, only a bandwidth is needed for the nonparametric copula. We prove that the test statistic is asymptotically pivotal under the null hypothesis, establish local power properties, and motivate the validity of the bootstrap technique that we use in finite sample settings. A simulation study illustrates the good size and power properties of the test. We illustrate the empirical relevance of our test by focusing on Granger causality using financial time series data to test for nonlinear leverage versus volatility feedback effects and to test for causality between stock returns and trading volume. In a third application, we investigate Granger causality between macroeconomic variables.
    Keywords: Nonparametric tests, conditional independence, Granger non-causality, Bernstein density copula, bootstrap, finance, volatility asymmetry, leverage effect, volatility feedback effect, macroeconomics
    JEL: C12 C14 C15 C19 G1 G12 E3 E4 E52
    Date: 2009–06–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2009s-28&r=ets
  8. By: Ahlgren, Niklas (Hanken School of Economics); Antell, Jan (Hanken School of Economics)
    Abstract: Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.
    Keywords: Cointegration; Likelihood ratio test; Test power; Bootstrap
    Date: 2009–06–11
    URL: http://d.repec.org/n?u=RePEc:hhb:hanken:0541&r=ets
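    For readers who want to experiment with bootstrap rank tests, here is a generic residual-bootstrap sketch of the trace test of rank zero in Python, built on statsmodels. It is not the authors' computationally inexpensive power-estimation procedure; the lag length, replication count, and i.i.d. resampling scheme are all illustrative assumptions.

      import numpy as np
      from statsmodels.tsa.api import VAR
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      def bootstrap_trace_test(y, lags=1, n_boot=199, seed=0):
          """Bootstrap p-value for the Johansen trace test of rank zero (sketch)."""
          rng = np.random.default_rng(seed)
          y = np.asarray(y, dtype=float)
          stat = coint_johansen(y, det_order=0, k_ar_diff=lags).lr1[0]

          # Null model: no cointegration, i.e. a VAR in first differences
          dy = np.diff(y, axis=0)
          fit = VAR(dy).fit(lags)
          resid = fit.resid - fit.resid.mean(axis=0)

          exceed = 0
          for _ in range(n_boot):
              e = resid[rng.integers(0, len(resid), size=len(dy))]
              dy_star = dy.copy()                  # first `lags` rows as start-up
              for t in range(lags, len(dy)):
                  dy_star[t] = fit.params[0] + e[t]
                  for j in range(lags):
                      dy_star[t] += fit.coefs[j] @ dy_star[t - 1 - j]
              y_star = np.vstack([y[:1], y[:1] + np.cumsum(dy_star, axis=0)])
              exceed += coint_johansen(y_star, det_order=0,
                                       k_ar_diff=lags).lr1[0] >= stat
          return (1 + exceed) / (1 + n_boot)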
  9. By: Bonato, Matteo (University of Zurich); Caporin, Massimiliano (University of Padova); Ranaldo, Angelo (Swiss National Bank)
    Abstract: In modelling and forecasting volatility, two main trade-offs emerge: mathematical tractability versus economic interpretation and accuracy versus speed. The authors attempt to reconcile, at least partially, both trade-offs. The former trade-off is crucial for many financial applications, including portfolio and risk management. The speed/accuracy trade-off is becoming more and more relevant in an environment of large portfolios, prolonged periods of high volatility (as in the current financial crisis), and the burgeoning phenomenon of algorithmic trading in which computer-based trading rules are automatically implemented. The increased availability of high-frequency data provides new tools for forecasting variances and covariances between assets. However, there is scant literature on jointly forecasting more than one realised volatility series. Following Gourieroux, Jasiak and Sufana (Journal of Econometrics, forthcoming), the authors propose a methodology to model and forecast realised covariances without any restriction on the parameters while maintaining economic interpretability. An empirical application based on variance forecasting and risk evaluation of a portfolio of two US treasury bills and two exchange rates is presented. The authors compare their model with several alternative specifications proposed in the literature. Empirical findings suggest that the model can be efficiently used in large portfolios.
    Keywords: Wishart process; realized volatility; Granger causality; volatility spillover; Value-at-Risk
    JEL: C13 C16 C22 C51 C53
    Date: 2009–06–24
    URL: http://d.repec.org/n?u=RePEc:ris:snbwpa:2009_003&r=ets
  10. By: Helmut Luetkepohl
    Abstract: Aggregated time series variables can be forecasted in different ways. For example, they may be forecasted on the basis of the aggregate series, or forecasts of the disaggregated variables may be obtained first and then aggregated. A number of forecasts are presented and compared. Classical theoretical results on the relative efficiencies of different forecasts are reviewed, and some complications are discussed which invalidate the theoretical results. Contemporaneous as well as temporal aggregation are considered.
    Keywords: Autoregressive moving-average process, temporal aggregation, contemporaneous aggregation, vector autoregressive moving-average process
    JEL: C22 C32
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2009/17&r=ets
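    The basic comparison the survey in item 10 reviews is easy to set up. The Python sketch below, illustrative only, contrasts forecasting the aggregate directly with aggregating forecasts of the components; the AR specification, lag order, and one-step horizon are assumptions, and which route wins depends on the data-generating process.

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      def compare_forecasts(components, lags=2):
          """One-step forecast of an aggregate by two routes (sketch)."""
          components = np.asarray(components, dtype=float)   # shape (T, k)
          aggregate = components.sum(axis=1)
          # Route 1: model the aggregate series directly
          f_agg = AutoReg(aggregate, lags=lags).fit().forecast(1)[0]
          # Route 2: forecast each component, then aggregate the forecasts
          f_dis = sum(AutoReg(c, lags=lags).fit().forecast(1)[0]
                      for c in components.T)
          return f_agg, f_dis

      # Three simulated AR(1) components
      rng = np.random.default_rng(0)
      comps = np.zeros((200, 3))
      for t in range(1, 200):
          comps[t] = 0.5 * comps[t - 1] + rng.standard_normal(3)
      print(compare_forecasts(comps))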
  11. By: Olivier Parent; James P. Lesage
    Abstract: A space-time filter structure is introduced that can be used to accommodate dependence across space and time in the error components of panel data models that contain random effects. This general specification encompasses several more specific space-time structures that have been used recently in the panel data literature. Markov Chain Monte Carlo methods are set forth for estimating the model, which allow simple treatment of the initial-period observations as endogenous or exogenous. Performance of the approach is demonstrated using both Monte Carlo experiments and an applied illustration.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:cin:ucecwp:2009-04&r=ets
  12. By: Pereira, Pedro L. Valls
    Abstract: The aim of this paper is to test whether or not there was evidence of contagion across the various financial crises that assailed some countries in the 1990s. Data on sovereign debt bonds for Brazil, Mexico, Russia and Argentina were used to implement the test. The contagion hypothesis is tested using multivariate volatility models. If there is any evidence of a structural break in volatility that can be linked to financial crises, the contagion hypothesis will be confirmed. Results suggest that there is evidence in favor of the contagion hypothesis.
    Date: 2009–01–26
    URL: http://d.repec.org/n?u=RePEc:fgv:eesptd:174&r=ets
  13. By: Gunnar Bårdsen and Helmut Lütkepohl (Department of Economics, Norwegian University of Science and Technology)
    Abstract: Forecasts of the original variable are sometimes of interest even though the variable appears in logarithms (logs) in a system of time series. In that case, converting the forecast for the log of the variable to a naive forecast of the original variable by simply applying the exponential transformation is not theoretically optimal. A simple expression for the optimal forecast under normality assumptions is derived. Despite its theoretical advantages, the optimal forecast is shown to be inferior to the naive forecast if specification and estimation uncertainty are taken into account. Hence, in practice, using the exponential of the log forecast is preferable to using the optimal forecast.
    Date: 2009–06–16
    URL: http://d.repec.org/n?u=RePEc:nst:samfok:10409&r=ets
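    The result in item 13 is simple enough to state inline. Under normality, if the forecast of x = log(y) is x_hat with forecast-error variance s2, the naive level forecast is exp(x_hat) while the optimal one adds the lognormal mean correction, exp(x_hat + s2/2). A minimal Python illustration with arbitrary numbers:

      import numpy as np

      x_hat, s2 = 4.60, 0.25              # illustrative log forecast and variance
      naive = np.exp(x_hat)               # simple exponential back-transform
      optimal = np.exp(x_hat + s2 / 2)    # lognormal mean correction
      print(naive, optimal)               # optimal = naive * exp(s2 / 2)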
  14. By: Adam Clements (QUT); Ralf Becker (Manchester)
    Abstract: A well-developed literature exists on modeling and forecasting asset return volatility, much of it devoted to time series models of volatility. This paper proposes an alternative method for forecasting volatility that does not involve such a model. Under this approach, a forecast is a weighted average of historical volatility, with the greatest weight given to periods that exhibit market conditions most similar to those at the time the forecast is formed. Weighting is carried out by comparing short-term trends in volatility across time (as a measure of market conditions) through a multivariate kernel scheme. It is found that, at a one-day forecast horizon, the proposed method produces forecasts that are significantly more accurate than competing approaches.
    Keywords: Volatility, forecasts, forecast evaluation, model confidence set, nonparametric
    JEL: C22 G00
    Date: 2009–05–12
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2009_56&r=ets
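    To fix ideas, here is a stripped-down Python sketch in the spirit of item 14, not the authors' exact specification: the forecast is a kernel-weighted average of past realised volatility, with weights determined by how closely each historical window of volatility resembles the most recent one. The window length and bandwidth are illustrative choices.

      import numpy as np

      def kernel_vol_forecast(rv, window=10, h=0.5):
          """Kernel-weighted historical volatility forecast (illustrative sketch)."""
          rv = np.asarray(rv, dtype=float)
          current = rv[-window:]                   # the most recent conditions
          w, targets = [], []
          for t in range(window, len(rv) - 1):
              past = rv[t - window:t]              # a historical window ...
              dist2 = np.sum(((past - current) / h) ** 2)
              w.append(np.exp(-0.5 * dist2))       # ... weighted by its similarity
              targets.append(rv[t])                # the volatility that followed it
          w = np.asarray(w)
          return w @ np.asarray(targets) / w.sum()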

This nep-ets issue is ©2009 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.