nep-ets New Economics Papers
on Econometric Time Series
Issue of 2010‒10‒02
seven papers chosen by
Yong Yin
SUNY at Buffalo

  1. Model Selection and Testing of Conditional and Stochastic Volatility Models By Massimiliano Caporin; Michael McAleer
  2. Endogeneity and Instrumental Variables in Dynamic Models By Florens, Jean-Pierre; Simon, Guillaume
  3. Residual-based tests for cointegration and multiple deterministic structural breaks: A Monte Carlo study By Matteo Mogliani
  4. Unit root testing under a local break in trend By David I. Harvey; Stephen J. Leybourne; A. M. Robert Taylor
  5. On the forecasting accuracy of multivariate GARCH models By LAURENT, Sébastien; ROMBOUTS, Jeroen V. K.; VIOLANTE, Francesco
  6. Empirical power of the Kwiatkowski-Phillips-Schmidt-Shin test By Ewa M. Syczewska
  7. A semiparametric Bayesian approach to the analysis of financial time series with applications to value at risk estimation By Concepción Ausín; Pedro Galeano; Pulak Ghosh

  1. By: Massimiliano Caporin (Department of Economics and Management "Marco Fanno", University of Padova); Michael McAleer (Erasmus University Rotterdam, Tinbergen Institute, The Netherlands, and Institute of Economic Research, Kyoto University)
    Abstract: This paper focuses on the selection and comparison of alternative non-nested volatility models. We review the traditional in-sample methods commonly applied in the volatility framework, namely diagnostic checking procedures, information criteria, and conditions for the existence of moments and asymptotic theory, as well as out-of-sample model selection approaches such as mean squared error and the Model Confidence Set. The paper develops innovative loss functions based on Value-at-Risk forecasts. Finally, we present an empirical application based on simple univariate volatility models, namely GARCH, GJR, EGARCH, and Stochastic Volatility, which are widely used to capture asymmetry and leverage.
    Keywords: Volatility model selection, volatility model comparison, non-nested models, model confidence set, Value-at-Risk forecasts, asymmetry, leverage
    JEL: C11 C22 C52
    Date: 2010–09
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:724&r=ets
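A minimal sketch of the comparison idea in item 1: fit GARCH, GJR, and EGARCH and score their implied Value-at-Risk with a quantile ("tick") loss. It assumes the `arch` package; the in-sample VaR construction and the tick loss are standard illustrative choices, not necessarily the loss functions the paper develops, and the simulated returns are a toy stand-in for real data.

```python
import numpy as np
from arch import arch_model
from scipy.stats import norm

rng = np.random.default_rng(0)
returns = rng.standard_t(df=8, size=2000)  # toy fat-tailed return series

alpha = 0.05  # VaR level

def tick_loss(r, var_forecast, alpha):
    """Quantile (tick) loss: penalizes VaR violations asymmetrically."""
    u = r - var_forecast
    return np.mean((alpha - (u < 0)) * u)

# GARCH (o=0), GJR (o=1), and EGARCH specifications
for vol, o in [("GARCH", 0), ("GARCH", 1), ("EGARCH", 1)]:
    res = arch_model(returns, vol=vol, p=1, o=o, q=1).fit(disp="off")
    sigma = res.conditional_volatility          # fitted (in-sample) volatility
    var_t = res.params["mu"] + sigma * norm.ppf(alpha)  # Gaussian VaR
    print(vol, "o =", o, "tick loss:", tick_loss(returns, var_t, alpha))
```

A smaller average tick loss indicates VaR forecasts that track the conditional quantile more closely, which is the spirit of ranking models by a VaR-based criterion.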
  2. By: Florens, Jean-Pierre; Simon, Guillaume
    Abstract: The objective of the paper is to develop the theory of endogeneity in dynamic models in discrete and continuous time, in particular for diffusions and counting processes. We first extend the separable set-up to a separable dynamic framework given in terms of a semimartingale decomposition. Then we define our function of interest as a stopping time for an additional noise process, whose role is played by a Brownian motion for diffusions and by a Poisson process for counting processes.
    JEL: C14 C32 C51
    Date: 2010–04
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:22896&r=ets
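The paper's treatment in item 2 is continuous-time and theoretical; purely as a discrete-time point of reference, the sketch below shows how endogeneity biases OLS in a simple dynamic model and how an instrument (here the lagged regressor, assumed valid) corrects it via 2SLS. The data-generating process and all coefficients are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
e = rng.normal(size=T)             # structural error
v = 0.8 * e + rng.normal(size=T)   # regressor shock, correlated with e
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + v[t]   # dynamic endogenous regressor
y = 1.0 + 2.0 * x + e              # true coefficient on x is 2.0

X = np.column_stack([np.ones(T - 1), x[1:]])
Z = np.column_stack([np.ones(T - 1), x[:-1]])  # instrument: lagged x
yy = y[1:]

beta_ols = np.linalg.lstsq(X, yy, rcond=None)[0]
beta_2sls = np.linalg.solve(Z.T @ X, Z.T @ yy)  # just-identified IV estimator
print("OLS: ", beta_ols)   # slope biased away from 2.0
print("2SLS:", beta_2sls)  # close to 2.0
```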
  3. By: Matteo Mogliani
    Abstract: The aim of this paper is to study the performance of residual-based tests for cointegration in the presence of multiple deterministic structural breaks via Monte Carlo simulations. We consider the KPSS-type LM tests proposed in Carrion-i-Silvestre and Sansò (2006) and in Bartley, Lee and Strazicich (2001), as well as the Schmidt and Phillips-type LM tests proposed in Westerlund and Edgerton (2007). This exercise allows us to cover a wide set of single-equation cointegration estimators. Monte Carlo experiments reveal a trade-off between size and power distortions across tests and models. KPSS-type tests display large size distortions under multiple-break scenarios, while Schmidt and Phillips-type tests appear well-sized across all simulations. However, when regressors are endogenous, the former group of tests displays quite high power against the alternative hypothesis, while the latter suffers from severely low power.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:pse:psecon:2010-22&r=ets
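The LM-type tests compared in item 3 are not available in standard libraries, so the sketch below only mimics the Monte Carlo design with the classical Engle-Granger residual-based test (statsmodels' `coint`, whose null is no cointegration, unlike the KPSS-type tests' null of cointegration): simulate cointegrated data with an unmodeled level break and record how often the test rejects. Break sizes and sample dimensions are arbitrary illustrative choices.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(2)
T, reps, alpha = 200, 500, 0.05
for break_size in (0.0, 5.0, 10.0):
    rejections = 0
    for _ in range(reps):
        x = np.cumsum(rng.normal(size=T))              # I(1) regressor
        shift = break_size * (np.arange(T) >= T // 2)  # mid-sample level break
        y = 1.0 + x + shift + rng.normal(size=T)       # cointegrated, with break
        rejections += coint(y, x)[1] < alpha           # Engle-Granger p-value
    print(f"break={break_size:4.1f}  rejection rate={rejections / reps:.2f}")
```

As the unmodeled break grows, the rejection rate falls even though the cointegrating relation holds throughout, which is the kind of distortion the break-robust tests in the paper are designed to address.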
  4. By: David I. Harvey; Stephen J. Leybourne; A. M. Robert Taylor
    Abstract: It is well known that it is vital to account for trend breaks when testing for a unit root. In practice, uncertainty exists over whether or not a trend break is present and, if it is, where it is located. Harris et al. (2009) and Carrion-i-Silvestre et al. (2009) propose procedures which account for both of these forms of uncertainty. Each uses what amounts to a pre-test for a trend break, accounting for a trend break (with the associated break fraction estimated from the data) in the unit root procedure only where the pre-test signals a break. Assuming the break magnitude is fixed (independent of sample size), these authors show that their methods achieve near asymptotically efficient unit root inference in both trend break and no trend break environments. These asymptotic results are, however, somewhat at odds with the finite sample simulations reported in both papers. These show the presence of pronounced "valleys" in the finite sample power functions (when mapped as functions of the break magnitude) of the tests, such that power is initially high for very small breaks, then decreases as the break magnitude increases, before increasing again. Here we show that treating the break magnitude as local to zero (in a Pitman drift sense) allows the asymptotic analysis to approximate this finite sample effect very closely, thereby providing useful analytical insights into the observed phenomenon. In response to this problem we propose practical solutions, based either on the use of a with-break unit root test with adaptive critical values, or on a union of rejections principle taken across with-break and without-break unit root tests. The former is shown to eliminate power valleys, but at the expense of power when no break is present, while the latter considerably mitigates the valleys while retaining most of the power gains available when no break exists.
    Keywords: Unit root test; local trend break; asymptotic local power; union of rejections; adaptive critical values
    Date: 2010–09
    URL: http://d.repec.org/n?u=RePEc:not:notgts:10/05&r=ets
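A rough illustration of the starting point of item 4: apply a standard ADF test with a linear trend (and no break term) to data that are stationary around a trend with a one-off slope break, and map the rejection frequency against the break magnitude. With the break left unmodeled, power erodes as the break grows; the paper's pre-test procedures, power "valleys", and local-to-zero asymptotics are not reproduced here, and the DGP parameters are arbitrary.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
T, reps = 150, 400
tb = T // 2  # true break date
for gamma in (0.0, 0.1, 0.3, 0.6, 1.0):  # trend-break magnitude
    rej = 0
    for _ in range(reps):
        trend = 0.05 * np.arange(T) + gamma * np.maximum(np.arange(T) - tb, 0)
        u = np.zeros(T)
        for t in range(1, T):
            u[t] = 0.85 * u[t - 1] + rng.normal()  # stationary errors: H0 false
        y = trend + u
        rej += adfuller(y, regression="ct")[1] < 0.05  # ADF with linear trend
    print(f"gamma={gamma:4.2f}  power={rej / reps:.2f}")
```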
  5. By: LAURENT, Sébastien (Maastricht University, The Netherlands; Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium); ROMBOUTS, Jeroen V. K. (HEC Montréal, CIRANO, CIRPEE; Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium); VIOLANTE, Francesco (Université de Namur, CeReFim, B-5000 Namur, Belgium; Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium)
    Abstract: This paper addresses the question of the selection of multivariate GARCH models in terms of variance matrix forecasting accuracy, with a particular focus on relatively large scale problems. We consider 10 assets from NYSE and NASDAQ and compare 125 model-based one-step-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the Superior Predictive Ability (SPA) tests. Model performance is evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over/under predictions. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods, such as the dot-com bubble, the set of superior models is composed of more sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effect in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption like constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007-2008 financial crisis, accounting for non-stationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently of the period, the best models do not provide significantly better forecasts than the DCC model of Engle (2002) with leverage in the conditional variances of the returns.
    Keywords: variance matrix, forecasting, multivariate GARCH, loss function, model confidence set, superior predictive ability
    JEL: C10 C32 C51 C52 C53 G10
    Date: 2010–05–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2010025&r=ets
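A sketch of how variance-matrix forecasts are scored in exercises like item 5: here a Frobenius (MSE) loss between a forecast covariance matrix and a noisy realized proxy, the outer product of daily returns. The two rival "models" (a rolling sample covariance and a shrunk identity), the constant true covariance, and the window length are all illustrative; the paper's four losses and the MCS/SPA machinery are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 10, 250
true_cov = 0.5 * np.eye(n) + 0.5   # toy constant covariance, 10 assets
L_chol = np.linalg.cholesky(true_cov)
returns = rng.normal(size=(T, n)) @ L_chol.T

def frobenius_loss(H, r):
    """MSE loss: squared Frobenius distance to the realized proxy r r'."""
    D = H - np.outer(r, r)
    return np.sum(D * D)

losses = np.zeros((T - 50, 2))
for i, t in enumerate(range(50, T)):
    H_roll = np.cov(returns[t - 50:t].T)            # rolling covariance
    H_naive = np.var(returns[:t]) * np.eye(n)       # naive scaled identity
    losses[i] = [frobenius_loss(H_roll, returns[t]),
                 frobenius_loss(H_naive, returns[t])]
print("mean loss (rolling, naive):", losses.mean(axis=0))
```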
  6. By: Ewa M. Syczewska (Warsaw School of Economics)
    Abstract: The aim of this paper is to study the properties of the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test, introduced in Kwiatkowski et al. (1992). The null hypothesis of the test corresponds to stationarity of a series, the alternative to its nonstationarity. The distribution of the test statistic is nonstandard; as shown in the original paper, it converges asymptotically to a functional of a Brownian bridge. The authors produced tables of critical values based on this asymptotic approximation. Here we present the results of a simulation experiment aimed at studying the small sample properties of the test and its empirical power.
    Keywords: KPSS test, stationarity, integration, empirical power of KPSS test
    JEL: C12 C16
    Date: 2010–09–23
    URL: http://d.repec.org/n?u=RePEc:wse:wpaper:45&r=ets
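A small simulation in the spirit of item 6, using statsmodels' `kpss` (null: stationarity): empirical rejection frequency at the nominal 5% level for a stationary AR(1), which measures size, and for a random walk, which measures power, at two sample sizes. The AR coefficient and replication count are illustrative choices, not the paper's design.

```python
import warnings
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(5)
reps = 400
for T in (100, 500):
    size = power = 0
    for _ in range(reps):
        e = rng.normal(size=T)
        ar1 = np.zeros(T)
        for t in range(1, T):
            ar1[t] = 0.5 * ar1[t - 1] + e[t]  # stationary: H0 true
        rw = np.cumsum(e)                     # random walk: H0 false
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")   # kpss warns when p hits table bounds
            size += kpss(ar1, regression="c", nlags="auto")[1] < 0.05
            power += kpss(rw, regression="c", nlags="auto")[1] < 0.05
    print(f"T={T}: empirical size={size / reps:.2f}, power={power / reps:.2f}")
```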
  7. By: Concepción Ausín; Pedro Galeano; Pulak Ghosh
    Abstract: Financial time series analysis deals with the understanding of data collected on financial markets. Several parametric distribution models have been entertained for describing, estimating and predicting the dynamics of financial time series. Alternatively, this article considers a Bayesian semiparametric approach. In particular, the usual parametric distributional assumptions of GARCH-type models are relaxed by entertaining the class of location-scale mixtures of Gaussian distributions with a Dirichlet process prior on the mixing distribution, leading to a Dirichlet process mixture model. The proposed specification allows for greater flexibility in capturing both the skewness and kurtosis frequently observed in financial returns. The Bayesian model provides statistical inference with finite sample validity. Furthermore, it is also possible to obtain predictive distributions for the Value at Risk (VaR), which has become the most widely used measure of market risk among practitioners. Through a simulation study, we demonstrate the performance of the proposed semiparametric method and compare results with those obtained under a normal distribution assumption. We also demonstrate the superiority of our proposed semiparametric method using real data from the Bombay Stock Exchange Index (BSE-30) and the Hang Seng Index (HSI).
    Keywords: Bayesian estimation, Deviance information criterion, Dirichlet process mixture, Financial time series, Location-scale Gaussian mixture, Markov chain Monte Carlo
    Date: 2010–09
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws103822&r=ets
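A toy illustration of the modelling idea in item 7: draw a location-scale mixture of Gaussians from a truncated stick-breaking representation of a Dirichlet process, sample "returns" from it, and read off the 1% VaR as an empirical quantile of the draws. The concentration parameter, truncation level, and base measure are arbitrary assumptions; the paper embeds this mixture inside a GARCH-type model with full MCMC, none of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
M, K = 2.0, 50  # DP concentration parameter, truncation level

# Stick-breaking weights: w_k = v_k * prod_{j<k} (1 - v_j), v_k ~ Beta(1, M)
v = rng.beta(1.0, M, size=K)
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
w /= w.sum()  # renormalize after truncation

# Component atoms from a simple illustrative base measure: location and scale
mu = rng.normal(0.0, 0.5, size=K)
sigma = 1.0 / np.sqrt(rng.gamma(2.0, 1.0, size=K))

# Draws from the mixture, then the 1% VaR as an empirical quantile
comp = rng.choice(K, size=100_000, p=w)
draws = rng.normal(mu[comp], sigma[comp])
print("1% VaR:", np.quantile(draws, 0.01))
```

Because the mixture can place components with different locations and scales, the resulting predictive distribution accommodates the skewness and heavy tails that a single Gaussian cannot, which is what drives the VaR gains reported in the paper.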

This nep-ets issue is ©2010 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.