nep-ets New Economics Papers
on Econometric Time Series
Issue of 2012‒05‒29
seven papers chosen by
Yong Yin
SUNY at Buffalo

  1. Asymptotic Theory for the QMLE in GARCH-X Models with Stationary and Non-Stationary Covariates By Heejoon Han; Dennis Kristensen
  2. PC-VAR estimation of vector autoregressive models By Claudio Morana
  3. Estimating and Forecasting APARCH-Skew-t Models by Wavelet Support Vector Machines By Li, Yushu
  4. Wavelet Improvement in Turning Point Detection using a HMM Model By Li, Yushu
  5. Wavelet Based Outlier Correction for Power Controlled Turning Point Detection in Surveillance Systems By Li, Yushu
  6. Estimating Long Memory Causality Relationships by a Wavelet Method By Li, Yushu
  7. Improving Bayesian VAR density forecasts through autoregressive Wishart Stochastic Volatility By Karapanagiotidis, Paul

  1. By: Heejoon Han (National University of Singapore); Dennis Kristensen (University College London and CREATES)
    Abstract: This paper investigates the asymptotic properties of the Gaussian quasi-maximum-likelihood estimators (QMLEs) of the GARCH model augmented by including an additional explanatory variable - the so-called GARCH-X model. The additional covariate is allowed to exhibit any degree of persistence as captured by its long-memory parameter dx; in particular, we allow for both stationary and non-stationary covariates. We show that the QMLEs of the regression coefficients entering the volatility equation are consistent and normally distributed in large samples independently of the degree of persistence. This implies that standard inferential tools, such as t-statistics, do not have to be adjusted to the level of persistence. On the other hand, the intercept in the volatility equation is not identified when the covariate is non-stationary, which is akin to the results of Jensen and Rahbek (2004, Econometric Theory 20), who develop similar results for the pure GARCH model with explosive volatility.
    Keywords: GARCH; Persistent covariate; Fractional integration; Quasi-maximum likelihood estimator; Asymptotic distribution theory.
    JEL: C22 C50 G12
    Date: 2012–05–18
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-25&r=ets
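The GARCH-X volatility equation discussed above can be sketched in a few lines. The following pure-Python illustration simulates a GARCH(1,1)-X recursion and evaluates its Gaussian quasi-log-likelihood; the AR(1) law of motion for the covariate, the parameter values, and all function names are illustrative choices rather than the paper's specification, and the numerical maximization that yields the QMLE is omitted.

```python
import math
import random

def simulate_garch_x(n, omega=0.1, alpha=0.05, beta=0.90, lam=0.3, seed=0):
    """Simulate returns y_t = sigma_t * z_t with the GARCH-X variance
    sigma2_t = omega + alpha*y_{t-1}^2 + beta*sigma2_{t-1} + lam*x_{t-1}^2,
    where x_t is (as an illustrative choice) a persistent AR(1) covariate."""
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)   # start at the pure-GARCH mean
    y_prev, x_prev = 0.0, 0.0
    ys, xs = [], []
    for _ in range(n):
        sigma2 = omega + alpha * y_prev**2 + beta * sigma2 + lam * x_prev**2
        y_prev = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
        x_prev = 0.95 * x_prev + rng.gauss(0.0, 1.0)   # persistent covariate
        ys.append(y_prev)
        xs.append(x_prev)
    return ys, xs

def gaussian_quasi_loglik(params, ys, xs):
    """Gaussian quasi-log-likelihood: the variance dynamics are assumed
    correct, but normality of z_t need not hold (hence 'quasi')."""
    omega, alpha, beta, lam = params
    sigma2 = omega / (1.0 - alpha - beta)
    y_prev, ll = 0.0, 0.0
    for y, x_lag in zip(ys, [0.0] + xs[:-1]):   # covariate enters lagged
        sigma2 = omega + alpha * y_prev**2 + beta * sigma2 + lam * x_lag**2
        ll -= 0.5 * (math.log(2.0 * math.pi) + math.log(sigma2) + y * y / sigma2)
        y_prev = y
    return ll
```

In practice the QMLE maximizes `gaussian_quasi_loglik` over the parameter vector with a numerical optimizer; the paper's point is that the resulting t-statistics for alpha, beta, and lam need no adjustment to the covariate's persistence.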
  2. By: Claudio Morana
    Abstract: In this paper, PC-VAR estimation of vector autoregressive models (VAR) is proposed. The estimation strategy successfully lessens the curse of dimensionality affecting VAR models when estimated using sample sizes typically available in quarterly studies. The procedure involves a dynamic regression using a subset of principal components extracted from a vector time series, and the recovery of the implied unrestricted VAR parameter estimates by solving a set of linear constraints. PC-VAR and OLS estimation of unrestricted VAR models share the same asymptotic properties. Monte Carlo results strongly support PC-VAR estimation, yielding gains, in terms of both lower bias and higher efficiency, relative to OLS estimation of high-dimensional unrestricted VAR models in small samples. Guidance for the selection of the number of components to be used in empirical studies is provided.
    Keywords: vector autoregressive model, principal components analysis, statistical reduction techniques.
    JEL: C22
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:mib:wpaper:223&r=ets
  3. By: Li, Yushu (Department of Economics, Lund University)
    Abstract: This paper concentrates on comparing the estimation and forecasting ability of Quasi-Maximum Likelihood (QML) and Support Vector Machines (SVM) for financial data. The financial series are fitted to a family of Asymmetric Power ARCH (APARCH) models. As skewness and kurtosis are common characteristics of financial series, a skewed t-distributed innovation is assumed to model the fat tails and asymmetry. Prior research indicates that the QML estimator for the APARCH model is inefficient when the data distribution departs from normality, so the current paper utilizes the nonparametric SVM method and shows that it is more efficient than QML under the skewed Student's t-distributed error. As the SVM is a kernel-based technique, we further investigate its performance by applying a Gaussian kernel and a wavelet kernel. The wavelet kernel is chosen for its ability to capture the localized volatility clustering in the APARCH model. The results are evaluated by a Monte Carlo experiment, with accuracy measured by the Normalized Mean Square Error (NMSE). The results suggest that the SVM-based method generally performs better than QML, with a consistently lower NMSE for both in-sample and out-of-sample data. The outcomes also highlight that the wavelet kernel outperforms the Gaussian kernel: it achieves a lower NMSE, is more computationally efficient, and has better generalization capability.
    Keywords: SVM; APARCH; Wavelet Kernel; Monte Carlo Experiment
    JEL: C14 C53 C61
    Date: 2012–05–21
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2012_013&r=ets
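As a concrete illustration of the two kernels the abstract compares, here is a pure-Python sketch. The SVM training itself (e.g. via a quadratic-programming solver) is omitted; the wavelet kernel shown is the Morlet-type translation-invariant kernel common in the wavelet-SVM literature (Zhang et al., 2004), which may differ in detail from the paper's exact choice.

```python
import math

def wavelet_kernel(x, xp, a=1.0):
    """Translation-invariant wavelet kernel built from the Morlet-type
    mother wavelet h(u) = cos(1.75u) * exp(-u^2/2); the dilation
    parameter `a` plays the role of the kernel bandwidth."""
    k = 1.0
    for xi, xpi in zip(x, xp):
        u = (xi - xpi) / a
        k *= math.cos(1.75 * u) * math.exp(-u * u / 2.0)
    return k

def gaussian_kernel(x, xp, sigma=1.0):
    """Standard RBF kernel, the baseline the paper compares against."""
    sq = sum((xi - xpi) ** 2 for xi, xpi in zip(x, xp))
    return math.exp(-sq / (2.0 * sigma ** 2))
```

Unlike the strictly positive Gaussian kernel, the wavelet kernel oscillates and can take negative values, which is what lets it pick up localized, clustered structure in the volatility series.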
  4. By: Li, Yushu (Department of Economics, Lund University)
    Abstract: The Hidden Markov Model (HMM) has been widely used in regime classification and turning point detection for econometric series since the decisive paper by Hamilton (1989). The present paper shows that when using an HMM to detect turning points in cyclical series, detection accuracy suffers when the data are exposed to high volatility or combine multiple types of cycles with different frequency bands. Moreover, outliers are frequently misidentified as turning points in the HMM framework. The present paper also shows that these issues can be resolved by methods based on wavelet multi-resolution analysis, owing to their ability to decompose a series into different frequency bands. By providing both frequency and time resolution, the wavelet power spectrum can identify the process dynamics at various resolution levels. Thus, the underlying information at different frequency bands can be extracted by wavelet decomposition, and outliers can be detected in the high-frequency wavelet detail. We apply a Monte Carlo experiment to show that detection accuracy is greatly improved for the HMM when it is combined with the wavelet approach. An empirical example is illustrated using US GDP growth rate data.
    Keywords: HMM; turning point; wavelet; outlier
    JEL: C22 C38 C63
    Date: 2012–05–21
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2012_014&r=ets
  5. By: Li, Yushu (Department of Economics, Lund University)
    Abstract: Detecting turning points in unimodal series has various applications to time series with cyclic periods. Related techniques are widely explored in the field of statistical surveillance, that is, on-line turning point detection procedures. This paper first presents a power-controlled turning point detection method based on the theory of the likelihood ratio test in statistical surveillance. Next, we show how outliers influence the performance of this methodology. Given the sensitivity of the surveillance system to outliers, we finally present a wavelet multiresolution analysis (MRA) based outlier elimination approach, which can be combined with the on-line turning point detection process and thereby alleviates the false alarm problem introduced by the outliers.
    Keywords: Unimodal; Turning point; Statistical Surveillance; Outlier; Wavelet multiresolution; Threshold
    JEL: C12 C52 C63
    Date: 2012–05–21
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2012_012&r=ets
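The paper's power-controlled likelihood-ratio surveillance procedure is not reproduced here, but the wavelet MRA step it combines with can be sketched briefly: isolated outliers concentrate in the high-frequency (detail) band of a wavelet decomposition, so thresholding the first-level Haar detail coefficients flags them. The MAD-based threshold rule and the function names below are my own illustrative choices.

```python
import math
import statistics

def haar_step(x):
    """One level of the Haar transform: pairwise smooth (approximation)
    and detail coefficients. Assumes len(x) is even."""
    s = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return s, d

def flag_outliers(x, k=3.0):
    """Flag pair-positions whose first-level Haar detail coefficient
    exceeds k robust (MAD-based) standard deviations, i.e. isolated
    spikes that show up only in the high-frequency band."""
    _, d = haar_step(x)
    med = statistics.median(d)
    mad = statistics.median([abs(di - med) for di in d])
    scale = 1.4826 * mad or 1.0   # fall back if the MAD is zero
    return [i for i, di in enumerate(d) if abs(di) > k * scale]
```

Removing the flagged observations (or shrinking them toward the smooth component) before running the surveillance statistic is what alleviates the false-alarm problem the abstract describes.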
  6. By: Li, Yushu (Department of Economics, Lund University)
    Abstract: The traditional causality relationship proposed by Granger (1969) assumes the relationships between variables are short-range dependent with the same order of integration. Chen (2006) proposed a bivariate model which can capture long-range dependence between two variables without requiring the series to be fractionally cointegrated. A long memory fractional transfer function is introduced to capture the long-range dependence in this model, and a pseudo-spectrum based method is proposed to estimate the long memory parameter in the bivariate causality model. In recent years, wavelet domain-based methods have gained popularity for estimating the long memory parameter of a univariate series, but no extension to bivariate or multivariate series has been made; this paper aims to fill that gap. We construct an estimator for the long memory parameter in the bivariate causality model in the wavelet domain. The theoretical background is derived, and a Monte Carlo simulation is used to investigate the performance of the estimator.
    Keywords: Granger causality; long memory; Monte Carlo simulation; wavelet domain
    JEL: C30 C51 C63
    Date: 2012–05–21
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2012_015&r=ets
  7. By: Karapanagiotidis, Paul
    Abstract: Dramatic changes in macroeconomic time series volatility pose a challenge to contemporary vector autoregressive (VAR) forecasting models. Traditionally, the conditional volatility of such models had been assumed constant over time or allowed for breaks across long time periods. More recent work, however, has improved forecasts by allowing the conditional volatility to be completely time variant by specifying the VAR innovation variance as a distinct discrete time process. For example, Clark (2011) specifies the volatility process as an independent log random walk for each time series in the VAR. Unfortunately, there is no empirical reason to believe that the VAR innovation volatility process of macroeconomic growth series follows log random walks, nor that the volatility of each series is independent of the others. This suggests that a more robust specification of the volatility process—one that both accounts for co-persistence in conditional volatility across time series and exhibits mean-reverting behaviour—should improve density forecasts, especially over the long-run forecasting horizon. In this respect, I employ a latent Inverse-Wishart autoregressive stochastic volatility specification on the conditional variance equation of a Bayesian VAR, with U.S. macroeconomic time series data, in evaluating Bayesian forecast efficiency against a competing log random walk specification by Clark (2011).
    Keywords: Inverse-Wishart distribution; stochastic volatility; predictive likelihoods; MCMC; macroeconomic time series; density forecasts; vector autoregression; steady state priors; Bayesian econometrics
    JEL: C32 C53 E17 C11
    Date: 2012–03–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:38885&r=ets
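The paper's latent Inverse-Wishart autoregressive volatility process and its full MCMC sampler are well beyond a short sketch, but the basic building block used inside such samplers, drawing a covariance matrix from a Wishart or inverse-Wishart distribution via the Bartlett decomposition, can be shown in pure Python. The 2x2 restriction and the function names are mine, for clarity only.

```python
import math
import random

def chol2(m):
    """Cholesky factor of a 2x2 SPD matrix [[a, b], [b, c]]."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    l11 = math.sqrt(a)
    l21 = b / l11
    l22 = math.sqrt(c - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

def wishart2(nu, scale, rng):
    """Bartlett-decomposition draw from a 2x2 Wishart(nu, scale):
    W = (L A)(L A)^T with scale = L L^T, diag(A)_i = sqrt(chi2(nu-i+1)),
    and the below-diagonal entry standard normal. E[W] = nu * scale."""
    L = chol2(scale)
    a11 = math.sqrt(rng.gammavariate(nu / 2.0, 2.0))        # sqrt of chi2(nu)
    a22 = math.sqrt(rng.gammavariate((nu - 1) / 2.0, 2.0))  # sqrt of chi2(nu-1)
    a21 = rng.gauss(0.0, 1.0)
    b11 = L[0][0] * a11                     # B = L @ A, W = B @ B.T
    b21 = L[1][0] * a11 + L[1][1] * a21
    b22 = L[1][1] * a22
    return [[b11 * b11, b11 * b21],
            [b11 * b21, b21 * b21 + b22 * b22]]

def inv_wishart2(nu, psi, rng):
    """Draw Sigma ~ IW(nu, psi) as the inverse of W ~ Wishart(nu, psi^{-1});
    E[Sigma] = psi / (nu - 3) for a 2x2 matrix with nu > 3."""
    det = psi[0][0] * psi[1][1] - psi[0][1] * psi[1][0]
    psi_inv = [[psi[1][1] / det, -psi[0][1] / det],
               [-psi[1][0] / det, psi[0][0] / det]]
    w = wishart2(nu, psi_inv, rng)
    dw = w[0][0] * w[1][1] - w[0][1] * w[1][0]
    return [[w[1][1] / dw, -w[0][1] / dw],
            [-w[1][0] / dw, w[0][0] / dw]]
```

In a sampler like the one the abstract describes, a draw of this kind updates the innovation covariance at each date, which is how cross-series co-persistence in volatility enters the model.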

This nep-ets issue is ©2012 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.