nep-ets New Economics Papers
on Econometric Time Series
Issue of 2009‒06‒03
ten papers chosen by
Yong Yin
SUNY at Buffalo

  1. Multipower Variation for Brownian Semistationary Processes By Ole E. Barndorff-Nielsen; José Manuel Corcuera; Mark Podolskij
  2. Stochastic volatility of volatility in continuous time By Ole E. Barndorff-Nielsen; Almut E. D. Veraart
  3. Co-integration Rank Testing under Conditional Heteroskedasticity By Giuseppe Cavaliere; Anders Rahbek; A.M.Robert Taylor
  4. A Meta-Distribution for Non-Stationary Samples By Dominique Guégan
  5. To Combine Forecasts or to Combine Information? By Huiyu Huang; Tae-Hwy Lee
  6. Nonlinear Time Series in Financial Forecasting By Gloria González-Rivera; Tae-Hwy Lee
  7. Volatility Models: from GARCH to Multi-Horizon Cascades By Alexander Subbotin; Thierry Chauveau; Kateryna Shapovalova
  8. Efficiency in Large Dynamic Panel Models with Common Factor By Patrick GAGLIARDINI; Christian GOURIEROUX
  9. Bootstrap Tests of Stationarity By James Morley; Tara M. Sinclair
  10. Multivariate methods for monitoring structural change By Groen, Jan J J; Kapetanios, George; Price, Simon

  1. By: Ole E. Barndorff-Nielsen (Aarhus University and CREATES); José Manuel Corcuera (Universitat de Barcelona); Mark Podolskij (ETH Zürich and CREATES)
    Abstract: In this paper we study the asymptotic behaviour of power and multipower variations of stochastic processes. Processes of the type considered serve, in particular, to analyse data on velocity increments of a fluid in a turbulence regime with spot intermittency sigma. The purpose of the present paper is to determine the probabilistic limit behaviour of the (multi)power variations of such a process Y as a basis for studying properties of the intermittency process. Notably, the processes Y are in general not of the semimartingale kind, and the established theory of multipower variation for semimartingales does not suffice for deriving the limit properties. As a key tool for the results, a general central limit theorem for triangular Gaussian schemes is formulated and proved. Examples and an application to the realised variance ratio are given. [An illustrative code sketch follows this entry.]
    Keywords: Central Limit Theorem; Gaussian Processes; Intermittency; Nonsemimartingales; Turbulence; Volatility; Wiener Chaos
    JEL: C10 C80
    Date: 2009–05–26
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-21&r=ets
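    A minimal numerical sketch of the realised (multi)power variation statistic discussed above, computed in Python on a toy volatility-modulated Brownian motion rather than a true Brownian semistationary process; the function name multipower_variation, the scaling convention, and the simulated intermittency path are illustrative assumptions, not the paper's definitions.

      import numpy as np

      def multipower_variation(x, powers, dt):
          """Sum over i of prod_j |x_{i+j} - x_{i+j-1}|^{p_j}, scaled by dt^(1 - sum(p)/2)."""
          dx = np.abs(np.diff(x))
          m = len(dx) - len(powers) + 1
          prod = np.ones(m)
          for j, p in enumerate(powers):
              prod *= dx[j:j + m] ** p
          return dt ** (1.0 - sum(powers) / 2.0) * prod.sum()

      rng = np.random.default_rng(0)
      n, dt = 20_000, 1.0 / 20_000
      # toy spot intermittency (volatility) path: exponential of a scaled Brownian motion
      sigma = np.exp(0.3 * np.cumsum(rng.normal(0.0, np.sqrt(dt), n)))
      y = np.concatenate([[0.0], np.cumsum(sigma * rng.normal(0.0, np.sqrt(dt), n))])

      rv = multipower_variation(y, [2.0], dt)                        # realised variance
      bpv = (np.pi / 2.0) * multipower_variation(y, [1.0, 1.0], dt)  # bipower variation
      iv = dt * np.sum(sigma ** 2)                                   # integrated variance of the toy path
      print(rv, bpv, iv)  # rv and bpv should both be close to iv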
  2. By: Ole E. Barndorff-Nielsen (The T.N. Thiele Centre for Mathematics in Natural Science, Department of Mathematical Sciences, & CREATES, Aarhus University); Almut E. D. Veraart (School of Economics and Management, Aarhus University and CREATES)
    Abstract: This paper introduces the concept of stochastic volatility of volatility in continuous time and, hence, extends standard stochastic volatility (SV) models to allow for an additional source of randomness associated with greater variability in the data. We discuss how stochastic volatility of volatility can be defined both non-parametrically, where we link it to the quadratic variation of the stochastic variance process, and parametrically, where we propose two new SV models which allow for stochastic volatility of volatility. In addition, we show that volatility of volatility can be estimated by a novel estimator called pre-estimated spot variance based realised variance. [An illustrative code sketch follows this entry.]
    Keywords: Stochastic volatility, volatility of volatility, non-Gaussian Ornstein–Uhlenbeck process, superposition, leverage effect, Lévy processes.
    JEL: C10 C13 C14 G10
    Date: 2009–07–06
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-25&r=ets
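    A rough numerical illustration of the vol-of-vol idea described above: estimate the spot variance on local blocks from realised variance, then take the realised variance of those spot-variance estimates. The block size, scaling, and lack of bias correction are illustrative choices; the paper's "pre-estimated spot variance based realised variance" estimator and its asymptotics are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)
      n, dt, block = 100_000, 1.0 / 100_000, 500

      # toy SV model: log-variance follows an Ornstein-Uhlenbeck process
      v = np.zeros(n)
      for t in range(1, n):
          v[t] = v[t - 1] - 5.0 * v[t - 1] * dt + 0.5 * np.sqrt(dt) * rng.normal()
      sigma2 = np.exp(v)                                  # spot variance path
      dy = np.sqrt(sigma2 * dt) * rng.normal(size=n)      # price increments

      # step 1: block estimates of spot variance (local realised variance / block length)
      spot_hat = (dy.reshape(-1, block) ** 2).sum(axis=1) / (block * dt)

      # step 2: realised variance of the estimated spot-variance path
      volvol_hat = np.sum(np.diff(spot_hat) ** 2)

      # crude benchmark: quadratic variation of the block-averaged true spot variance
      print(volvol_hat, np.sum(np.diff(sigma2.reshape(-1, block).mean(axis=1)) ** 2))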
  3. By: Giuseppe Cavaliere (Department of Statistical Sciences, University of Bologna); Anders Rahbek (Department of Economics, University of Copenhagen and CREATES); A.M.Robert Taylor (School of Economics and Granger Centre for Time Series Econometrics, University of Nottingham)
    Abstract: We analyse the properties of the conventional Gaussian-based co-integrating rank tests of Johansen (1996) in the case where the vector of series under test is driven by globally stationary, conditionally heteroskedastic (martingale difference) innovations. We first demonstrate that the limiting null distributions of the rank statistics coincide with those derived by previous authors who assume either i.i.d. or (strict and covariance) stationary martingale difference innovations. We then propose wild bootstrap implementations of the co-integrating rank tests and demonstrate that the associated bootstrap rank statistics replicate the first-order asymptotic null distributions of the rank statistics. We show the same is also true of the corresponding rank tests based on the i.i.d. bootstrap of Swensen (2006). The wild bootstrap, however, has the important property that, unlike the i.i.d. bootstrap, it preserves in the re-sampled data the pattern of heteroskedasticity present in the original shocks. Consistent with this, numerical evidence suggests that, relative to tests based on the asymptotic critical values or the i.i.d. bootstrap, the wild bootstrap rank tests perform very well in small samples under a variety of conditionally heteroskedastic innovation processes. An empirical application to the term structure of interest rates is given. [An illustrative code sketch follows this entry.]
    Keywords: Co-integration, trace and maximum eigenvalue rank tests, conditional heteroskedasticity, i.i.d. bootstrap; wild bootstrap
    JEL: C30 C32
    Date: 2009–05–28
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-22&r=ets
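    A minimal sketch of the wild-bootstrap resampling step that the tests above build on: bootstrap innovations are the estimated residuals multiplied by i.i.d. draws, so each date keeps its own conditional variance, whereas the i.i.d. bootstrap reshuffles residuals across dates and destroys that pattern. The residual series, the variance break, and the choice of N(0,1) multipliers are illustrative assumptions; the Johansen rank-test recursion itself is not reproduced.

      import numpy as np

      rng = np.random.default_rng(2)
      T = 400
      # residuals with a pronounced volatility pattern (a variance break halfway)
      e_hat = rng.normal(size=T) * np.where(np.arange(T) < T // 2, 0.5, 2.0)

      def wild_bootstrap(e, rng):
          """e*_t = w_t * e_t with w_t i.i.d. N(0,1): heteroskedasticity preserved."""
          return rng.normal(size=len(e)) * e

      def iid_bootstrap(e, rng):
          """e*_t drawn with replacement from {e_1, ..., e_T}: pattern destroyed."""
          return rng.choice(e, size=len(e), replace=True)

      for name, boot in [("wild", wild_bootstrap), ("iid", iid_bootstrap)]:
          e_star = boot(e_hat, rng)
          print(name, e_star[:T // 2].std().round(2), e_star[T // 2:].std().round(2))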
  4. By: Dominique Guégan (PSE, Centre d’Economie de la Sorbonne, University Paris 1 Panthéon-Sorbonne)
    Abstract: In this paper, we focus on building an invariant distribution function associated with a non-stationary sample. After discussing some specific problems caused by non-stationarity within samples, such as the "spurious" long-memory effect, we construct a sequence of stationary processes that allows us to define the concept of a meta-distribution for a given non-stationary sample. We use this new approach to discuss some interesting econometric issues in a non-stationary setting, namely forecasting and risk management strategy.
    Keywords: Non-Stationarity, Copula, Long-memory, Switching, Cumulants, Estimation theory
    JEL: C32 C51 G12
    Date: 2009–06–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-24&r=ets
  5. By: Huiyu Huang (PanAgora Asset Management); Tae-Hwy Lee (Department of Economics, University of California Riverside)
    Abstract: When the objective is to forecast a variable of interest and many explanatory variables are available, one can often improve the forecast by carefully integrating them. There are generally two directions one can take: combination of forecasts (CF) or combination of information (CI). CF combines forecasts generated from simple models, each incorporating a part of the whole information set, while CI brings the entire information set into one super model to generate an ultimate forecast. Through linear regression analysis and simulation, we show the relative merits of each, in particular the circumstances under which a CF forecast can be superior to a CI forecast, both when the CI model is correctly specified and when it is misspecified, and we shed some light on the success of equally weighted CF. In our empirical application to the prediction of the monthly, quarterly, and annual equity premium, we compare CF forecasts (with various weighting schemes) to CI forecasts (with a principal component approach mitigating the problem of parameter proliferation). We find that CF with (close to) equal weights is generally the best and dominates all CI schemes, while also performing substantially better than the historical mean. [An illustrative code sketch follows this entry.]
    Keywords: Equally weighted combination of forecasts, Equity premium, Factor models, Forecast combination, Forecast combination puzzle, Information sets, Many predictors, Principal components, Shrinkage
    JEL: C3 C5 G0
    Date: 2006–03
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:200806&r=ets
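    A stylised comparison of the two routes described above: CF averages forecasts from simple one-predictor regressions with equal weights, while CI fits a single model on all predictors, here through a few principal components. The data-generating process, window length, and number of factors are arbitrary illustrative choices, so which route wins in this toy exercise need not match the paper's empirical findings.

      import numpy as np

      rng = np.random.default_rng(3)
      T, k, R = 120, 30, 60                 # sample size, predictors, estimation window
      X = rng.normal(size=(T, k))
      beta = rng.normal(scale=0.2, size=k)
      y = np.zeros(T)
      y[1:] = X[:-1] @ beta + rng.normal(size=T - 1)   # y_{t+1} depends on X_t

      err_cf, err_ci = [], []
      for t in range(R, T - 1):
          Xw, yw, x_new = X[:t], y[1:t + 1], X[t]
          # CF: equally weighted average of k univariate forecasts
          fc = []
          for j in range(k):
              slope, intercept = np.polyfit(Xw[:, j], yw, 1)
              fc.append(slope * x_new[j] + intercept)
          f_cf = np.mean(fc)
          # CI: one regression of y on the first three principal components of X
          Z = Xw - Xw.mean(axis=0)
          _, _, Vt = np.linalg.svd(Z, full_matrices=False)
          F = Z @ Vt[:3].T
          coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(t), F]), yw, rcond=None)
          f_ci = coef[0] + (x_new - Xw.mean(axis=0)) @ Vt[:3].T @ coef[1:]
          err_cf.append((y[t + 1] - f_cf) ** 2)
          err_ci.append((y[t + 1] - f_ci) ** 2)

      print("MSE CF:", np.mean(err_cf), " MSE CI:", np.mean(err_ci))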
  6. By: Gloria González-Rivera (Department of Economics, University of California Riverside); Tae-Hwy Lee (Department of Economics, University of California Riverside)
    Date: 2007–09
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:200803&r=ets
  7. By: Alexander Subbotin (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Thierry Chauveau (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Kateryna Shapovalova (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I)
    Abstract: We survey different methods of modelling the volatility of stock prices and exchange rates, focusing on their ability to reproduce the empirical properties of the corresponding time series. The properties of price fluctuations vary across the time scales of observation, and the adequacy of different models for describing price dynamics at several time horizons simultaneously is the central topic of this study. We propose a detailed survey of recent volatility models that account for multiple horizons. These models are based on different and sometimes competing theoretical concepts. They belong either to the GARCH or the stochastic volatility model families and often borrow methodological tools from statistical physics. We compare their properties and comment on their practical usefulness and prospects. [An illustrative code sketch follows this entry.]
    Keywords: Volatility modeling, GARCH, stochastic volatility, volatility cascade, multiple horizons in volatility.
    Date: 2009–05
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00390636_v1&r=ets
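    Since the survey above starts from the GARCH family, a minimal GARCH(1,1) simulation and variance recursion is sketched below for reference; the parameter values are arbitrary examples, and nothing here is specific to the multi-horizon cascade models the survey covers.

      import numpy as np

      def simulate_garch11(n, omega, alpha, beta, rng):
          """Return returns r_t and conditional variances h_t of a GARCH(1,1)."""
          r, h = np.empty(n), np.empty(n)
          h[0] = omega / (1.0 - alpha - beta)          # unconditional variance
          for t in range(n):
              if t > 0:
                  h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
              r[t] = np.sqrt(h[t]) * rng.normal()
          return r, h

      rng = np.random.default_rng(4)
      r, h = simulate_garch11(2_000, omega=0.05, alpha=0.08, beta=0.90, rng=rng)
      print(r.var(), h.mean(), 0.05 / (1 - 0.08 - 0.90))  # all near the unconditional variance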
  8. By: Patrick GAGLIARDINI (University of Lugano and Swiss Finance Institute); Christian GOURIEROUX (CREST, CEPREMAP (Paris) and University of Toronto)
    Abstract: This paper deals with efficient estimation in exchangeable nonlinear dynamic panel models with a common unobservable factor. The specification accounts for both micro- and macro-dynamics, induced by the lagged individual observation and the common stochastic factor, respectively. For large cross-sectional and time dimensions, and under a semiparametric identification condition, we derive the efficiency bound and introduce efficient estimators for both the micro- and macro-parameters. In particular, we show that the fixed effects estimator of the micro-parameter is not only consistent, but also asymptotically efficient. The results are illustrated with the stochastic migration model for credit risk analysis. [An illustrative code sketch follows this entry.]
    Keywords: Nonlinear Panel Model, Factor Model, Exchangeability, Systematic Risk, Efficiency Bound, Semi-parametric Efficiency, Fixed Effects Estimator, Bayesian Statistics, Stochastic Migration, Granularity
    JEL: C23 C13 G12
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp0912&r=ets
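    A toy linear special case of the setting above: y_{it} = rho * y_{i,t-1} + f_t + eps_{it}, where treating the factor path {f_t} as time effects (cross-sectional demeaning) yields the fixed-effects estimator of the micro-parameter rho. The paper's framework is nonlinear and semiparametric; this sketch only illustrates the mechanics of the estimator, with arbitrary simulation settings.

      import numpy as np

      rng = np.random.default_rng(5)
      N, T, rho = 500, 200, 0.6
      f = np.cumsum(rng.normal(scale=0.1, size=T))      # common stochastic factor
      y = np.zeros((N, T))
      for t in range(1, T):
          y[:, t] = rho * y[:, t - 1] + f[t] + rng.normal(size=N)

      # cross-sectionally demean to sweep out the common factor (time effects)
      y_dm = y - y.mean(axis=0)
      num = (y_dm[:, 1:] * y_dm[:, :-1]).sum()
      den = (y_dm[:, :-1] ** 2).sum()
      print("fixed-effects estimate of rho:", num / den)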
  9. By: James Morley (Department of Economics, Washington University in St. Louis); Tara M. Sinclair (Department of Economics, The George Washington University)
    Abstract: We compare the finite-sample performance of different stationarity tests. Monte Carlo analysis reveals that tests based on Lagrange multiplier (LM) statistics with nonstandard asymptotic distributions reject far more often than their nominal size for trend-stationary processes of the kind estimated for macroeconomic data. Bootstrap versions of these LM tests have empirical rejection probabilities that are closer to nominal size, but they still tend to over-reject. Meanwhile, we find that a bootstrap likelihood ratio (LR) test has very accurate finite-sample size, while at the same time having higher power than the bootstrap LM tests against empirically relevant nonstationary alternatives. Based on the bootstrap LR test, and in some cases contrary to the bootstrap LM tests, we can reject trend stationarity for US real GDP, the unemployment rate, consumer prices, and payroll employment in favour of unit root processes with large permanent movements. [An illustrative code sketch follows this entry.]
    Keywords: Stationarity Test, Unobserved Components, Parametric Bootstrap, Monte Carlo Simulation, Finite Sample Inference
    JEL: C12 C15 C22
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:gwc:wpaper:2008-11&r=ets
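    A sketch of the parametric-bootstrap recipe used above: estimate the trend-stationary null model, simulate artificial samples from the fitted model, and compare the sample statistic with its bootstrap distribution. For brevity, the statistic below is a simple KPSS-type statistic on linearly detrended data and the null model is an OLS trend plus AR(1) errors, not the paper's unobserved-components LM and LR statistics.

      import numpy as np

      def kpss_stat(y):
          """KPSS-type statistic: scaled partial sums of residuals from a linear trend."""
          t = np.arange(len(y))
          resid = y - np.polyval(np.polyfit(t, y, 1), t)
          s = np.cumsum(resid)
          return (s ** 2).sum() / (len(y) ** 2 * resid.var())

      def fit_trend_ar1(y):
          """OLS linear trend plus an AR(1) fit of the detrended residuals (the null model)."""
          t = np.arange(len(y))
          slope, intercept = np.polyfit(t, y, 1)
          u = y - (intercept + slope * t)
          phi = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])
          sig = (u[1:] - phi * u[:-1]).std()
          return intercept, slope, phi, sig

      def simulate_null(intercept, slope, phi, sig, n, rng):
          u = np.zeros(n)
          for t in range(1, n):
              u[t] = phi * u[t - 1] + sig * rng.normal()
          return intercept + slope * np.arange(n) + u

      rng = np.random.default_rng(6)
      y = np.cumsum(rng.normal(size=200))       # example series: a pure random walk
      stat = kpss_stat(y)
      intercept, slope, phi, sig = fit_trend_ar1(y)
      boot = [kpss_stat(simulate_null(intercept, slope, phi, sig, len(y), rng)) for _ in range(499)]
      print("bootstrap p-value:", np.mean([b >= stat for b in boot]))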
  10. By: Groen, Jan J J (Federal Reserve Bank of New York); Kapetanios, George (Queen Mary and Westfield College); Price, Simon (Bank of England)
    Abstract: Detection of structural change is a critical empirical activity, but continuous 'monitoring' of time series for structural changes in real time raises well-known econometric issues. These have been explored in a univariate context. If multiple series co-break, as may be plausible, then it is possible that simultaneous examination of a multivariate set of data would help identify changes with higher probability or more rapidly than when series are examined on a case-by-case basis. Some asymptotic theory is developed for a maximum CUSUM detection test. Monte Carlo experiments suggest that there is an improvement in detection relative to a univariate detector over a wide range of experimental parameters, given a sufficiently large number of co-breaking series. The method is applied to UK RPI inflation in the period after 2001. A break is detected which would not have been picked up by univariate methods. [An illustrative code sketch follows this entry.]
    Keywords: monitoring; structural change; panel; CUSUM; fluctuation test
    JEL: C10 C59
    Date: 2009–06–08
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0369&r=ets
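    A minimal sketch of a maximum-CUSUM monitoring scheme in the spirit of the paper above: for each series, accumulate standardised deviations from a training-sample mean over the monitoring period, take the maximum across series, and flag a break when it crosses a threshold. The panel dimensions, the size of the co-break, and the boundary value 1.5 are arbitrary illustrative choices, not the asymptotic critical values derived in the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      N, T_train, T_mon, break_at = 8, 200, 100, 40
      X = rng.normal(size=(N, T_train + T_mon))
      X[:, T_train + break_at:] += 1.0          # all series co-break upwards

      mu = X[:, :T_train].mean(axis=1)          # training-sample means
      sd = X[:, :T_train].std(axis=1)

      detected, csum = None, np.zeros(N)
      for t in range(T_mon):
          csum += (X[:, T_train + t] - mu) / sd
          max_cusum = np.max(np.abs(csum)) / np.sqrt(T_train)
          if detected is None and max_cusum > 1.5:   # illustrative boundary
              detected = t
      print("break introduced at monitoring period", break_at, "- detected at", detected)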

This nep-ets issue is ©2009 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.