nep-ets New Economics Papers
on Econometric Time Series
Issue of 2012‒09‒22
five papers chosen by
Yong Yin
SUNY at Buffalo

  1. A model for vast panels of volatilities By Matteo Luciani; David Veredas
  2. Time-Varying Volatility Asymmetry: A Conditioned HAR-RV(CJ) EGARCH-M Model By Ceylan, Ozcan
  3. Simple techniques for likelihood analysis of univariate and multivariate stable distributions: with extensions to multivariate stochastic volatility and dynamic factor models By Tsionas, Mike
  4. Comparing Predictive Accuracy, Twenty Years Later: A Personal Perspective on the Use and Abuse of Diebold-Mariano Tests By Francis X. Diebold
  5. Estimating Dynamic Equilibrium Models with Stochastic Volatility By Jesus Fernandez-Villaverde; Pablo A. Guerrón-Quintana; Juan Rubio-Ramírez

  1. By: Matteo Luciani (Université Libre de Bruxelles); David Veredas (Université Libre de Bruxelles)
    Abstract: Realized volatilities, when observed over time, share the following stylised facts: comovements, clustering, long memory, dynamic volatility, skewness and heavy tails. We propose a dynamic factor model that captures these stylised facts and that can be applied to vast panels of volatilities, as it does not suffer from the curse of dimensionality. It is an enhanced version of Bai and Ng (2004) in the following respects: i) we allow for long memory in both the idiosyncratic and the common components, ii) the common shocks are conditionally heteroskedastic, and iii) the idiosyncratic and common shocks are skewed and heavy-tailed. Estimation of the factors, the idiosyncratic components and the parameters is simple: principal components and low-dimensional maximum likelihood estimations. A Monte Carlo study shows the usefulness of the approach, and an application to 90 daily realized volatilities, pertaining to the S&P100, from January 2001 to December 2008, reveals, among others, the following findings: i) All the volatilities have long memory, more than half in the nonstationary range, which increases during financial turmoil. ii) Tests and criteria point towards one dynamic common factor driving the co-movements. iii) The factor has longer memory than the assets' volatilities, suggesting that long memory is a market characteristic. iv) The volatility of the realized volatility is not constant and is common to all assets. v) A forecasting horse race against 8 competing models shows that our model outperforms them, in particular in periods of stress.
    Keywords: Realized volatilities, vast dimensions, factor models, long memory, forecasting
    JEL: C32 C51 G01
    Date: 2012–09
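  The principal-components step the abstract describes (extracting a common factor from a large panel of volatilities) can be sketched as follows. This is a minimal illustration on simulated data, not the paper's model: the panel here is Gaussian with a single persistent factor, whereas the paper allows long memory, conditional heteroskedasticity, and skewed heavy-tailed shocks.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)

  # Toy panel: T days x N log realized volatilities, one common factor
  # plus idiosyncratic noise (illustrative parameters, not the paper's).
  T, N = 500, 90
  factor = np.cumsum(rng.normal(size=T)) * 0.1        # persistent common factor
  loadings = rng.uniform(0.5, 1.5, size=N)
  panel = np.outer(factor, loadings) + rng.normal(scale=0.5, size=(T, N))

  # Principal-components estimate of the factor: project the demeaned panel
  # on the leading eigenvector of its covariance, as in approximate factor models.
  X = panel - panel.mean(axis=0)
  eigvals, eigvecs = np.linalg.eigh(X.T @ X / T)
  f_hat = X @ eigvecs[:, -1]          # estimated factor (up to sign and scale)

  # The estimate tracks the true factor closely when the panel is large.
  corr = abs(np.corrcoef(f_hat, factor)[0, 1])
  print(corr > 0.9)
  ```

  In the paper this step is followed by low-dimensional maximum likelihood estimation of the dynamic parameters; only the dimension-reduction stage is shown here.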
  2. By: Ceylan, Ozcan (Galatasaray University Economic Research Center)
    Abstract: Based on recent developments in the high-frequency econometrics and asymmetric GARCH modeling literature, I develop a novel model that accounts for the volatility feedback and leverage effects, effectively incorporating signed continuous and jump components of the realized variance in the variance specification through an HAR forecasting model. I then condition the variance specification on the lagged realized variance and on risk aversion (proxied by the level of the variance risk premium) to analyze state-dependent variations in the volatility asymmetry. I find that the volatility asymmetry is clearly more pronounced in periods of market stress marked by high levels of volatility and risk aversion. In addition, I reveal a further asymmetry in the volatility's reaction patterns to good and bad news: as the market moves through periods of higher volatility and risk aversion, the impact of bad news increases much more than that of good news, indicating that investors become more sensitive to bad news in market downturns.
    Keywords: Time-varying volatility asymmetry; High-frequency econometrics; EGARCH-M; HAR models; Volatility components; Variance risk premium
    JEL: C13 C14 C32 C58 G12
    Date: 2012–09–05
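  The HAR (heterogeneous autoregressive) structure underlying the abstract's forecasting model regresses today's realized variance on yesterday's value and on its weekly and monthly averages. A minimal OLS sketch on simulated data follows; the paper's model is much richer (signed continuous/jump components inside an EGARCH-M specification), and the series and horizons (1, 5, 22 days) here are standard illustrative choices.

  ```python
  import numpy as np

  rng = np.random.default_rng(1)

  # Toy positive daily realized-variance series (illustrative, not real data).
  T = 1000
  rv = np.abs(np.cumsum(rng.normal(size=T)) * 0.01) + 0.1 + rng.gamma(2.0, 0.05, size=T)

  def lagged_mean(x, h):
      # Mean of the h most recent observations before each date t >= h.
      return np.array([x[t - h:t].mean() for t in range(h, len(x))])

  # HAR regressors: daily lag, weekly (5-day) and monthly (22-day) averages.
  start = 22
  y   = rv[start:]
  x1  = rv[start - 1:-1]
  x5  = lagged_mean(rv, 5)[start - 5:]
  x22 = lagged_mean(rv, 22)
  X = np.column_stack([np.ones_like(y), x1, x5, x22])

  beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS HAR coefficients
  print(beta.shape)                              # intercept, daily, weekly, monthly
  ```

  The cascade of horizons is what lets the HAR model mimic long-memory-like persistence with a simple regression; the paper additionally splits the regressors by sign and by continuous/jump component.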
  3. By: Tsionas, Mike
    Abstract: In this paper we consider a variety of procedures for numerical statistical inference in the family of univariate and multivariate stable distributions. In connection with univariate distributions, (i) we provide approximations by finite location-scale mixtures and (ii) versions of approximate Bayesian computation (ABC) using the characteristic function and the asymptotic form of the likelihood function. In the context of multivariate stable distributions we propose several ways to perform statistical inference and to obtain the spectral measure associated with the distributions, a quantity that has been a major impediment to their use in applied work. We extend the techniques to handle univariate and multivariate stochastic volatility models, static and dynamic factor models with disturbances and factors from general stable distributions, a novel way to model multivariate stochastic volatility through time-varying spectral measures, and a novel way to construct multivariate stable distributions through copulae. The new techniques are applied to artificial as well as real data (ten major currencies, SP100 and individual returns). In connection with ABC, special attention is paid to crafting well-performing proposal distributions for MCMC, and extensive numerical experiments are conducted to provide critical values of the “closeness” parameter that can be useful for further applied econometric work.
    Keywords: Univariate and multivariate stable distributions; MCMC; Approximate Bayesian Computation; Characteristic function
    JEL: C13 C11
    Date: 2012–05–10
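  The characteristic-function route the abstract mentions exploits the fact that stable laws have no closed-form density but an explicit characteristic function. A minimal sketch of the resulting "closeness" measure, using a Cauchy sample (the alpha = 1 stable case, for which NumPy has a sampler) and the theoretical CF exp(-|t|); the grid and the unweighted squared distance are illustrative choices, not the paper's:

  ```python
  import numpy as np

  rng = np.random.default_rng(2)

  # Cauchy draws: an alpha = 1 stable distribution with known CF exp(-|t|).
  x = rng.standard_cauchy(size=20_000)
  t_grid = np.linspace(0.1, 3.0, 30)

  # Empirical characteristic function vs. the theoretical one.
  ecf = np.array([np.exp(1j * t * x).mean() for t in t_grid])
  cf_cauchy = np.exp(-np.abs(t_grid))

  # Squared CF distance: small when the sample matches the candidate law.
  dist = np.mean(np.abs(ecf - cf_cauchy) ** 2)
  print(dist < 1e-2)
  ```

  In an ABC scheme, a proposed parameter value is accepted when such a distance falls below a tolerance; the paper's numerical experiments are about calibrating that tolerance.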
  4. By: Francis X. Diebold (Department of Economics, University of Pennsylvania)
    Abstract: The Diebold-Mariano (DM) test was intended for comparing forecasts; it has been, and remains, useful in that regard. The DM test was not intended for comparing models. Unfortunately, however, much of the large subsequent literature uses DM-type tests for comparing models, in (pseudo-) out-of-sample environments. In that case, much simpler yet more compelling full-sample model comparison procedures exist; they have been, and should continue to be, widely used. The hunch that (pseudo-) out-of-sample analysis is somehow the “only,” or “best,” or even a “good” way to provide insurance against in-sample overfitting in model comparisons proves largely false. On the other hand, (pseudo-) out-of-sample analysis may be useful for learning about comparative historical predictive performance.
    Keywords: Forecasting, model comparison, model selection, out-of-sample tests
    JEL: C01 C53
    Date: 2012–09–07
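  The DM statistic itself is simple: for a loss differential d_t between two forecasts, DM = mean(d) / se(mean(d)) is asymptotically standard normal under equal predictive accuracy. A minimal sketch on simulated one-step forecast errors (for which d_t is serially uncorrelated and the naive variance estimate suffices; longer horizons require a HAC variance):

  ```python
  import numpy as np

  rng = np.random.default_rng(3)

  # Simulated forecast errors from two competing forecasts; method 2 is
  # deliberately noisier, so method 1 should have lower expected loss.
  e1 = rng.normal(scale=1.0, size=300)
  e2 = rng.normal(scale=1.3, size=300)

  # Squared-error loss differential: negative values favor method 1.
  d = e1**2 - e2**2
  dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))   # asymptotically N(0, 1)
  print(dm < 0)
  ```

  The abstract's point is about what this statistic is for: comparing these two forecast streams, not adjudicating between the models that produced them.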
  5. By: Jesus Fernandez-Villaverde; Pablo A. Guerrón-Quintana; Juan Rubio-Ramírez
    Abstract: We propose a novel method to estimate dynamic equilibrium models with stochastic volatility. First, we characterize the properties of the solution to this class of models. Second, we take advantage of the results about the structure of the solution to build a sequential Monte Carlo algorithm to evaluate the likelihood function of the model. The approach, which exploits the profusion of shocks in stochastic volatility models, is versatile and computationally tractable even in large-scale models, such as those often employed by policy-making institutions. As an application, we use our algorithm and Bayesian methods to estimate a business cycle model of the U.S. economy with both stochastic volatility and parameter drifting in monetary policy. Our application shows the importance of stochastic volatility in accounting for the dynamics of the data.
    JEL: C1 E30
    Date: 2012–09
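  The sequential Monte Carlo likelihood evaluation the abstract describes can be illustrated on a canonical scalar stochastic-volatility model; the paper's algorithm targets full dynamic equilibrium models, so this bootstrap particle filter is only a minimal analogue with illustrative parameter values:

  ```python
  import numpy as np

  rng = np.random.default_rng(4)

  # Scalar SV model: h_t = phi*h_{t-1} + sigma*eta_t,  y_t = exp(h_t/2)*eps_t.
  phi, sigma = 0.95, 0.2
  T, N = 200, 1000

  # Simulate data from the model.
  h = np.zeros(T)
  for t in range(1, T):
      h[t] = phi * h[t - 1] + sigma * rng.normal()
  y = np.exp(h / 2) * rng.normal(size=T)

  # Bootstrap particle filter: propagate, weight by the measurement
  # density, accumulate the likelihood, resample.
  particles = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), size=N)
  loglik = 0.0
  for t in range(T):
      particles = phi * particles + sigma * rng.normal(size=N)
      sd = np.exp(particles / 2)
      w = np.exp(-0.5 * (y[t] / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
      loglik += np.log(w.mean())                      # likelihood increment
      idx = rng.choice(N, size=N, p=w / w.sum())      # multinomial resampling
      particles = particles[idx]

  print(np.isfinite(loglik))
  ```

  Such a likelihood evaluator can then be embedded in an MCMC loop for Bayesian estimation, which is the role it plays in the paper's application.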

This nep-ets issue is ©2012 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.