nep-ets New Economics Papers
on Econometric Time Series
Issue of 2014–09–29
ten papers chosen by
Yong Yin
SUNY at Buffalo

  1. A residual-based ADF test for stationary cointegration in I(2) settings By Javier Gómez Biscarri; Javier Hualde
  2. Chasing volatility - A persistent multiplicative error model with jumps By Massimiliano Caporin; Eduardo Rossi; Paolo Santucci de Magistris
  3. Efficient Bayesian Inference in Generalized Inverse Gamma Processes for Stochastic Volatility By Roberto Leon-Gonzalez
  4. Factor Models of Stock Returns: GARCH Errors versus Time-Varying Betas By Phoebe Koundouri; Nikolaos Kourogenis; Nikitas Pittis; Panagiotis Samartzis
  5. Forecasting Realized Volatility Using Subsample Averaging By Tae-Hwy Lee; Huiyu Huang
  6. Moment Approximation for Unit Root Models with Nonnormal Errors By Aman Ullah; Yong Bao; Ru Zhang
  7. Nonlinear Dynamics and Recurrence Plots for Detecting Financial Crisis By Peter Martey Addo; Monica Billio; Dominique Guegan
  8. On the Invertibility of EGARCH By Martinet, G.G.; McAleer, M.J.
  9. Outlier detection algorithms for least squares time series regression By Søren Johansen; Bent Nielsen
  10. Penalized Splines, Mixed Models and the Wiener-Kolmogorov Filter By Bloechl, Andreas

  1. By: Javier Gómez Biscarri; Javier Hualde
    Abstract: We propose a residual-based augmented Dickey-Fuller (ADF) test statistic that allows for detection of stationary cointegration within a system that may contain both I(2) and I(1) observables. The test is also consistent under the alternative of multicointegration, where first differences of the I(2) observables enter the cointegrating relationships. We find the null limiting distribution of this statistic and justify why our proposal improves over related approaches. Critical values are computed for a variety of situations. Additionally, building on this ADF test statistic, we propose a procedure to test the null of no stationary cointegration which overcomes the drawback, suffered by any residual-based method, of the lack of power with respect to some relevant alternatives. Finally, a Monte Carlo experiment is carried out and an empirical application is provided as an illustrative example.
    Keywords: I(2) systems; stationary cointegration; multicointegration; residual-based tests.
    JEL: C12 C22 C32
    Date: 2014–09
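A rough, self-contained illustration of the residual-based idea (not the authors' procedure; all parameter values are illustrative): compute the no-constant Dickey-Fuller t-statistic on a simulated stationary series standing in for a cointegration residual. A strongly negative statistic is evidence against a unit root in the residuals, i.e. in favor of stationary cointegration.

```python
import random

def adf_t_stat(e):
    """Dickey-Fuller t-statistic for the no-constant regression
    diff(e_t) = rho * e_{t-1} + u_t (unit root under the null)."""
    y = [e[t] - e[t - 1] for t in range(1, len(e))]  # first differences
    x = [e[t - 1] for t in range(1, len(e))]         # lagged levels
    sxx = sum(v * v for v in x)
    rho = sum(a * b for a, b in zip(x, y)) / sxx
    resid = [a - rho * b for a, b in zip(y, x)]
    s2 = sum(r * r for r in resid) / (len(resid) - 1)
    return rho / (s2 / sxx) ** 0.5

random.seed(0)
# A stationary AR(1) series standing in for a cointegration residual
e = [0.0]
for _ in range(500):
    e.append(0.5 * e[-1] + random.gauss(0, 1))
t_stat = adf_t_stat(e)
```

In practice the statistic would be computed on estimated cointegration residuals and compared against the critical values tabulated in the paper, which differ from the standard Dickey-Fuller ones.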
  2. By: Massimiliano Caporin (University of Padova); Eduardo Rossi (University of Pavia); Paolo Santucci de Magistris (Aarhus University and CREATES)
    Abstract: The realized volatility of financial returns is characterized by persistence and by the occurrence of unpredictable large increments. To capture those features, we introduce the Multiplicative Error Model with jumps (MEM-J). When a jump component is included in the multiplicative specification, the conditional density of the realized measure is shown to be a countably infinite mixture of Gamma and K distributions. Strict stationarity conditions are derived. A Monte Carlo simulation experiment shows that maximum likelihood estimates of the model parameters are reliable even when jumps are rare events. We estimate alternative specifications of the model using a set of daily bipower measures for 7 stock indexes and 16 individual NYSE stocks. The estimates of the jump component confirm that the probability of jumps dramatically increases during financial crises. Compared to other realized volatility models, the introduction of the jump component provides a sensible improvement in the fit, as well as in the in-sample and out-of-sample volatility tail forecasts.
    Keywords: Multiplicative Error Model with Jumps, Jumps in volatility, Realized measures, Volatility-at-Risk
    JEL: C22 C58 G01
    Date: 2014–08–29
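A hedged sketch of the multiplicative-error-with-jumps idea (parameter values are invented for illustration and are not the paper's estimates): simulate a MEM(1,1) recursion whose unit-mean Gamma innovation is occasionally hit by a rare multiplicative jump.

```python
import random

random.seed(1)

# Simplified MEM with jumps: x_t = mu_t * z_t, where mu_t follows a
# MEM(1,1) recursion and z_t mixes a unit-mean Gamma innovation with a
# rare multiplicative jump.
omega, alpha, beta = 0.05, 0.2, 0.7    # MEM(1,1) coefficients
shape = 4.0                            # Gamma shape; scale 1/shape gives mean 1
p_jump, jump_size = 0.02, 5.0          # jump probability and mean jump multiplier

mu, x = 0.5, []
for _ in range(2000):
    z = random.gammavariate(shape, 1.0 / shape)
    if random.random() < p_jump:       # occasional multiplicative jump
        z *= random.expovariate(1.0 / jump_size)
    x.append(mu * z)
    mu = omega + alpha * x[-1] + beta * mu

mean_x = sum(x) / len(x)
```

The condition beta + alpha * E[z] < 1 (here 0.7 + 0.2 * 1.08 ≈ 0.92) keeps this simulated process stationary, echoing the strict stationarity conditions derived in the paper.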
  3. By: Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies)
    Abstract: This paper develops a novel and efficient algorithm for Bayesian inference in inverse Gamma stochastic volatility models. It is shown that by conditioning on auxiliary variables, it is possible to sample all the volatilities jointly, directly from their posterior conditional density, using simple distributions that are easy to draw from. Furthermore, this paper develops a generalized inverse Gamma process with more flexible tails in the distribution of volatilities, which still allows for simple and efficient calculations. Using several macroeconomic and financial datasets, it is shown that the inverse Gamma and generalized inverse Gamma processes can greatly outperform the commonly used log-normal volatility processes with Student-t errors.
    Date: 2014–09
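A small illustration of why inverse-Gamma volatilities pair naturally with heavy tails (a textbook mixture fact, not the paper's sampling algorithm): mixing a normal with an independent InvGamma(v/2, v/2) variance yields a Student-t with v degrees of freedom.

```python
import random

random.seed(7)

# If h ~ InvGamma(v/2, v/2) and y = sqrt(h) * z with z ~ N(0, 1),
# then y ~ Student-t with v degrees of freedom (variance v / (v - 2)).
v = 5.0
y = []
for _ in range(20000):
    h = (v / 2) / random.gammavariate(v / 2, 1.0)   # InvGamma(v/2, v/2) draw
    y.append(h ** 0.5 * random.gauss(0, 1))

sample_var = sum(t * t for t in y) / len(y)          # theory: 5/3 for v = 5
```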
  4. By: Phoebe Koundouri; Nikolaos Kourogenis (Department of Banking and Financial Management, University of Piraeus.); Nikitas Pittis (University of Piraeus, Greece); Panagiotis Samartzis
    Abstract: This paper investigates the implications of time-varying betas in factor models for stock returns. It is shown that a single-factor model (SFMT) with autoregressive betas and homoscedastic errors (SFMT-AR) is capable of reproducing the most important stylized facts of stock returns. An empirical study on the major US stock market sectors shows that SFMT-AR outperforms, in terms of in-sample and out-of-sample performance, SFMT with constant betas and conditionally heteroscedastic (GARCH) errors, as well as two multivariate GARCH-type models.
    Keywords: autoregressive beta, stock returns, single factor model, conditional heteroscedasticity, in-sample performance, out-of-sample performance
    JEL: C22 G10 G11 G12
    Date: 2014–09–17
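The mechanism can be illustrated with a hedged simulation (parameter values invented for illustration, not the paper's estimates): an AR(1) beta makes returns a scale mixture of normals, so excess kurtosis appears even though the idiosyncratic errors are homoscedastic.

```python
import random
import statistics

random.seed(2)

# Single-factor model with an AR(1) beta: r_t = beta_t * f_t + e_t,
# where beta_t mean-reverts around 1 and e_t is homoscedastic Gaussian.
phi, sigma_b = 0.98, 0.1      # beta persistence and innovation s.d.
beta, r = 1.0, []
for _ in range(5000):
    beta = 1.0 + phi * (beta - 1.0) + random.gauss(0, sigma_b)
    f = random.gauss(0, 1)    # common factor return
    e = random.gauss(0, 0.5)  # homoscedastic idiosyncratic error
    r.append(beta * f + e)

# A random beta makes r_t a scale mixture of normals: fat tails emerge
# without any GARCH effect in the errors.
m = statistics.fmean(r)
s2 = statistics.pvariance(r)
kurt = sum((v - m) ** 4 for v in r) / (len(r) * s2 ** 2)
```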
  5. By: Tae-Hwy Lee (Department of Economics, University of California Riverside); Huiyu Huang (Grantham, Mayo, Van Otterloo and Company LLC)
    Abstract: When the observed price process is the true underlying price process plus microstructure noise, it is known that realized volatility (RV) estimates will be overwhelmed by the noise when the sampling frequency approaches infinity. Therefore, it may be optimal to sample less frequently, and averaging the less frequently sampled subsamples can improve estimation of the quadratic variation. In this paper, we extend this idea to forecasting daily realized volatility. While subsample averaging has been proposed and used in estimating RV, this paper is the first to use subsample averaging for forecasting RV. The subsample-averaging method we examine incorporates the high-frequency data at different levels of systematic sampling: it first pools the high-frequency data into several subsamples, then generates forecasts from each subsample, and finally combines these forecasts. We find that, in forecasting the daily S&P 500 return RV, subsample averaging generates better forecasts than those using only one subsample without averaging over all subsamples.
    Keywords: Subsample averaging, forecast combination, high-frequency data, realized volatility, ARFIMA model, HAR model
    JEL: C53 C58 G17
    Date: 2014–09
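A minimal sketch of the subsample-averaging idea on the estimation side (the paper's contribution is applying the same pooling to forecasts; all numbers here are illustrative): split one day of 1-minute returns into K systematically offset 5-minute grids, compute RV on each grid, and average.

```python
import random

random.seed(3)

# One day of 1-minute returns (white noise stands in for real data).
returns = [random.gauss(0, 0.001) for _ in range(390)]

def rv(r):
    """Realized variance: sum of squared returns."""
    return sum(v * v for v in r)

def subsample_rv(r, K, offset):
    """RV on the K-minute grid starting at `offset`: aggregate 1-minute
    returns into K-minute returns, then sum their squares."""
    agg = [sum(r[i:i + K]) for i in range(offset, len(r) - K + 1, K)]
    return rv(agg)

K = 5
avg_rv = sum(subsample_rv(returns, K, k) for k in range(K)) / K
full_rv = rv(returns)
```

With pure white noise the sparse and full estimates agree on average; with microstructure noise the sparser, averaged estimate is the more robust one.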
  6. By: Aman Ullah (Department of Economics, University of California Riverside); Yong Bao (Purdue University); Ru Zhang (University of California, Riverside)
    Abstract: Phillips (1977a, 1977b) made seminal contributions to time series finite-sample theory and was among the first to develop the distributions of estimators and forecasts in stationary time series models; see Phillips (1978, 1979), among others. From the mid-eighties, through his fundamental papers Phillips (1987a, 1987b), he opened the path of asymptotic (large-sample) theory for unit-root-type nonstationary models. This has created a large literature of important papers, including many of Phillips' own. However, not much is known about the analytical finite-sample properties of estimators under the unit root, although see Kiviet and Phillips (2005) for the case when the errors are normally distributed. An objective of this paper is to analyze the finite-sample behavior of the estimator in the first-order autoregressive model with a unit root and nonnormal errors. In particular, we derive analytical approximations for the first two moments in terms of the model parameters and the distribution parameters. Through Monte Carlo simulations, we find that our approximate formulas perform quite well across different distribution specifications in small samples. However, when the noise-to-signal ratio is large, the bias distortion can be quite substantial and our approximations do not fare well.
    Keywords: unit root, nonnormal, moment approximation.
    Date: 2014–09
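A quick Monte Carlo illustration of the phenomenon being approximated (not the paper's analytical formulas): the OLS estimator of rho in a unit-root AR(1) with skewed, nonnormal errors is biased downward in small samples.

```python
import random

random.seed(4)

def ols_rho(y):
    """OLS slope in the no-constant AR(1) regression of y_t on y_{t-1}."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

T, reps, est = 50, 2000, []
for _ in range(reps):
    u = [random.expovariate(1.0) - 1.0 for _ in range(T)]  # centered, skewed errors
    y = [0.0]
    for shock in u:
        y.append(y[-1] + shock)                            # unit-root AR(1)
    est.append(ols_rho(y))

bias = sum(est) / reps - 1.0   # negative in finite samples
```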
  7. By: Peter Martey Addo (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon-Sorbonne, Università Ca' Foscari of Venice - Department of Economics, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris); Monica Billio (Università Ca' Foscari of Venice - Department of Economics); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon-Sorbonne, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris)
    Abstract: Identification of financial bubbles and crises is a topic of major concern, since it is important to prevent collapses that can severely impact nations and economies. Our analysis deals with the use of the recently proposed "delay vector variance" (DVV) method, which examines local predictability of a signal in the phase space to detect the presence of determinism and nonlinearity in a time series. Optimal embedding parameters used in the DVV analysis are obtained via a differential-entropy-based method using wavelet-based surrogates. We exploit the concept of recurrence plots to study the stock market, to locate hidden patterns and non-stationarity, and to examine the nature of these plots in events of financial crisis. In particular, the recurrence plots are employed to detect and characterize financial cycles. A comprehensive analysis of the feasibility of this approach is provided. We show that our methodology is useful in the diagnosis and detection of financial bubbles, which have been associated with significant economic upheavals in the past few decades.
    Keywords: Nonlinearity analysis; surrogates; Delay vector variance (DVV) method; wavelets; financial bubbles; embedding parameters; recurrence plots
    Date: 2013–02
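A minimal recurrence-plot construction, assuming a delay embedding with dimension m and delay tau (the paper selects these via a differential-entropy criterion; here they are fixed by hand and the signal is a simple periodic stand-in):

```python
import math

def recurrence_matrix(x, m=2, tau=1, eps=0.2):
    """R[i][j] = 1 when delay-embedded states i and j are eps-close."""
    vecs = [x[i:i + m * tau:tau] for i in range(len(x) - (m - 1) * tau)]
    n = len(vecs)
    return [[1 if math.dist(vecs[i], vecs[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

x = [math.sin(0.3 * t) for t in range(100)]   # periodic stand-in signal
R = recurrence_matrix(x)
# Periodicity appears as lines parallel to the main diagonal of R;
# the recurrence rate is the share of eps-close pairs.
recurrence_rate = sum(map(sum, R)) / len(R) ** 2
```

On real price data, changes in the texture of R (breakdown of diagonal structure, white bands) are the kind of signature the paper associates with non-stationarity and crisis episodes.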
  8. By: Martinet, G.G.; McAleer, M.J.
    Abstract: Of the two most widely estimated univariate asymmetric conditional volatility models, the exponential GARCH (or EGARCH) specification can capture asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which refers to the negative correlation between the returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator (QMLE) of the EGARCH parameters are not available under general conditions, but only for special cases under highly restrictive and unverifiable conditions. A limitation in the development of asymptotic properties of the QMLE for EGARCH is the lack of an invertibility condition for the returns shocks underlying the model. It is shown in this paper that the EGARCH model can be derived from a stochastic process, for which the invertibility conditions can be stated simply and explicitly. This will be useful in re-interpreting the existing properties of the QMLE of the EGARCH parameters.
    Keywords: Leverage, asymmetry, existence, stochastic process, asymptotic properties, invertibility
    JEL: C22 C52 C50 G32
    Date: 2014–07–01
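For reference, the EGARCH(1,1) log-variance recursion can be simulated as follows (parameter values are illustrative; gamma < 0 produces the leverage effect). The invertibility question the paper addresses concerns running this recursion in reverse, i.e. recovering the shocks z_t from observed returns.

```python
import math
import random

random.seed(5)

# EGARCH(1,1): log s2_t = omega + alpha*(|z| - E|z|) + gamma*z + beta*log s2_{t-1}
omega, alpha, gamma, beta = -0.1, 0.15, -0.08, 0.97
E_abs_z = math.sqrt(2 / math.pi)   # E|z| for z ~ N(0, 1)

log_s2, rets, sig = 0.0, [], []
for _ in range(1000):
    s = math.exp(0.5 * log_s2)
    z = random.gauss(0, 1)
    rets.append(s * z)
    sig.append(s)
    # gamma < 0: a negative shock raises next-period volatility (leverage)
    log_s2 = omega + alpha * (abs(z) - E_abs_z) + gamma * z + beta * log_s2
```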
  9. By: Søren Johansen (Dept of Economics, University of Copenhagen and CREATES, Dept of Economics and Business, Aarhus University); Bent Nielsen (Nuffield College and Dept of Economics)
    Abstract: We review recent asymptotic results on some robust methods for multiple regression. The regressors include stationary and non-stationary time series as well as polynomial terms. The methods include the Huber-skip M-estimator, 1-step Huber-skip M-estimators (in particular the Impulse Indicator Saturation), iterated 1-step Huber-skip M-estimators and the Forward Search. These methods classify observations as outliers or not. From the asymptotic results we establish a new asymptotic theory for the gauge of these methods, which is the expected frequency of falsely detected outliers. The asymptotic theory involves normal distribution results and Poisson distribution results. The theory is applied to a time series data set.
    Keywords: Huber-skip M-estimators, 1-step Huber-skip M-estimators, iteration, Forward Search, Impulse Indicator Saturation, Robustified Least Squares, weighted and marked empirical processes, iterated martingale inequality, gauge.
    Date: 2014–09–08
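A hedged sketch of a 1-step Huber-skip estimator in the simplest (location) case, with an illustrative cutoff: estimate, flag observations with large standardized residuals, and re-estimate on the retained sample. The gauge studied in the paper is the expected share of clean observations falsely flagged by this classification.

```python
import random
import statistics

random.seed(6)

# Clean N(0, 1) sample plus three gross outliers.
x = [random.gauss(0, 1) for _ in range(1000)] + [10.0, -12.0, 15.0]

c = 2.576                       # cutoff: 0.5% two-sided normal tail
mu0 = statistics.fmean(x)       # initial (non-robust) estimate
s0 = statistics.pstdev(x)
kept = [v for v in x if abs(v - mu0) <= c * s0]   # skip flagged observations
mu1 = statistics.fmean(kept)    # 1-step Huber-skip estimate
n_flagged = len(x) - len(kept)
```

Iterating the estimate/flag/re-estimate step, or replacing the location model with a time series regression, gives the iterated estimators and the Impulse Indicator Saturation setting the paper analyzes.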
  10. By: Bloechl, Andreas
    Abstract: Penalized splines are widespread tools for the estimation of trend and cycle, since they allow a data-driven estimation of the penalization parameter through their incorporation into a linear mixed model. Based on the equivalence of penalized splines and the Hodrick-Prescott filter, this paper connects the mixed-model framework of penalized splines to the Wiener-Kolmogorov filter. In the case that trend and cycle are described by ARIMA processes, this filter yields the mean-squared-error-minimizing estimates of both components. It is shown that for certain settings of the parameters, a penalized spline within the mixed-model framework is equal to the Wiener-Kolmogorov filter for a twice-integrated random walk as the trend and a stationary ARMA process as the cyclical component.
    Keywords: Hodrick-Prescott filter; mixed models; penalized splines; trend estimation; Wiener-Kolmogorov filter
    JEL: C22 C52
    Date: 2014
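The Hodrick-Prescott connection can be made concrete with a small sketch (dense linear algebra, fine for short series; lambda = 1600 is the conventional quarterly value): the trend solves (I + lambda * D'D) tau = y with D the second-difference matrix, which is the Wiener-Kolmogorov filter for a twice-integrated random-walk trend plus a white-noise cycle.

```python
import math

def hp_trend(y, lam):
    """Solve (I + lam * D'D) tau = y, with D the second-difference matrix."""
    n = len(y)
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for r in range(n - 2):       # row r of D: tau_r - 2*tau_{r+1} + tau_{r+2}
        idx = (r, r + 1, r + 2)
        w = (1.0, -2.0, 1.0)
        for a, wa in zip(idx, w):
            for b, wb in zip(idx, w):
                A[a][b] += lam * wa * wb
    # Gaussian elimination with partial pivoting on the augmented system [A | y].
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for cc in range(col, n + 1):
                M[r][cc] -= f * M[col][cc]
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):
        tau[r] = (M[r][n] - sum(M[r][cc] * tau[cc] for cc in range(r + 1, n))) / M[r][r]
    return tau

y = [0.1 * t + math.sin(t) for t in range(40)]   # linear trend + cycle
trend = hp_trend(y, lam=1600.0)
```

The recovered trend is far smoother than the input: the second-difference penalty passes the linear component through while suppressing the cycle, which is the filtering behavior the paper maps onto the mixed-model spline.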

This nep-ets issue is ©2014 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at its homepage. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.