nep-ets New Economics Papers
on Econometric Time Series
Issue of 2016‒09‒11
eleven papers chosen by
Yong Yin
SUNY at Buffalo

  1. The Pruned State-Space System for Non-Linear DSGE Models: Theory and Empirical Applications By Martin M. Andreasen; Jesús Fernández-Villaverde; Juan F. Rubio-Ramírez
  2. Singular Spectrum Analysis of Grenander Processes and Sequential Time Series Reconstruction By D.S. Poskitt
  3. Estimation of Structural Breaks in Large Panels with Cross-Sectional Dependence By Jiti Gao; Guangming Pan; Yanrong Yang
  4. CLT for Largest Eigenvalues and Unit Root Tests for High-Dimensional Nonstationary Time Series By Bo Zhang; Guangming Pan; Jiti Gao
  5. Specification Testing for Nonlinear Multivariate Cointegrating Regressions By Chaohua Dong; Jiti Gao; Dag Tjostheim; Jiying Yin
  6. Convergence rates of sums of α-mixing triangular arrays: with an application to non-parametric drift function estimation of continuous-time processes By Shin Kanaya
  7. Value-at-Risk with Application of DCC-GARCH Model By Tomas Meluzin; Marek Zinecker; Michal Bernard Pietrzak; Marcin Faldzinski; Adam P. Balcerzak
  8. Fractional Integration and Fat Tails for Realized Covariance Kernels and Returns By Andre Lucas; Anne Opschoor
  9. Forecasting using Random Subspace Methods By Tom Boot; Didier Nibbering
  10. Asymptotic Theory for Extended Asymmetric Multivariate GARCH Processes By Manabu Asai; Michael McAleer
  11. Calculating Joint Confidence Bands for Impulse Response Functions using Highest Density Regions By Helmut Lütkepohl; Anna Staszewska-Bystrova; Peter Winker

  1. By: Martin M. Andreasen; Jesús Fernández-Villaverde; Juan F. Rubio-Ramírez
    Abstract: This paper studies the pruned state-space system for higher-order perturbation approximations to DSGE models. We show the stability of the pruned approximation up to third order and provide closed-form expressions for first and second unconditional moments and impulse response functions. Our results introduce GMM estimation and impulse-response matching for DSGE models approximated up to third order and provide a foundation for indirect inference and SMM. As an application, we consider a New Keynesian model with Epstein-Zin-Weil preferences and two novel feedback effects from long-term bonds to the real economy, allowing us to match the level and variability of the 10-year term premium in the U.S. with a low relative risk aversion of 5.
    Date: 2016–09
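    To illustrate the pruning idea at the heart of the paper, here is a minimal sketch of a second-order pruned recursion: the first-order state evolves linearly, and the second-order correction is driven only by the Kronecker square of the first-order state, which is what guarantees stability whenever the linear part is stable. The matrices A, B, C and the constant c are hypothetical placeholders for a perturbation solution's coefficients, not taken from the paper.

    ```python
    import numpy as np

    def simulate_pruned_2nd_order(A, B, C, c, eps):
        """Simulate a second-order pruned perturbation approximation.

        The pruning step: x_s is driven by kron(x_f, x_f), never by its
        own nonlinear terms, so the recursion cannot explode when A is
        stable. (Illustrative sketch with hypothetical coefficients.)
        """
        n, T = A.shape[0], eps.shape[1]
        x_f = np.zeros((n, T + 1))  # first-order state
        x_s = np.zeros((n, T + 1))  # second-order correction
        for t in range(T):
            x_f[:, t + 1] = A @ x_f[:, t] + B @ eps[:, t]
            x_s[:, t + 1] = (A @ x_s[:, t]
                             + C @ np.kron(x_f[:, t], x_f[:, t]) + c)
        return x_f + x_s  # pruned second-order state approximation
    ```

    The unpruned analogue would feed x_f + x_s back into the quadratic term, which is exactly what can generate explosive sample paths.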
  2. By: D.S. Poskitt
    Abstract: This paper provides a detailed analysis of the properties of Singular Spectrum Analysis (SSA) under very general conditions concerning the structure of the observed series. It translates the SSA interpretation of the singular value decomposition of the so-called trajectory matrix as a discrete Karhunen-Loeve expansion into conventional principal components analysis, and shows how this motivates a consideration of SSA constructed using standardized or re-scaled trajectories (R-SSA). The asymptotic properties of R-SSA are derived assuming that the true data generating process (DGP) satisfies sufficient regularity to ensure that Grenander's conditions hold. The spectral structure of the different population ensemble models implicit in the large sample properties so derived is examined, and it is shown how the decomposition of the spectrum into discrete and continuous components leads to a sequential procedure for R-SSA series reconstruction. As part of the latter exercise the paper presents a generalization of Szego's theorem to fractionally integrated processes. The operation of the theoretical results is demonstrated via simulation experiments, which serve as a vehicle to illustrate the numerical consequences of the results for different processes and to assess the practical impact of the sequential R-SSA processing methodology.
    Keywords: embedding, principal components, re-scaled trajectory matrix, singular value decomposition, spectrum.
    JEL: C14 C22 C52
    Date: 2016
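    The basic SSA pipeline the paper builds on — embed the series into a trajectory matrix, take its SVD, keep the leading components, and diagonally average back to a series — can be sketched as follows. This is the standard (unscaled) variant; the paper's R-SSA would standardize the trajectory columns before the decomposition.

    ```python
    import numpy as np

    def ssa_reconstruct(y, L, r):
        """Basic SSA: embed, decompose, reconstruct with leading r components.

        Illustrative sketch of standard SSA, not the paper's R-SSA variant.
        """
        N = len(y)
        K = N - L + 1
        # 1. Embedding: L x K trajectory matrix of lagged subseries.
        X = np.column_stack([y[i:i + L] for i in range(K)])
        # 2. Singular value decomposition of the trajectory matrix.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # 3. Keep the r leading elementary matrices.
        X_r = (U[:, :r] * s[:r]) @ Vt[:r, :]
        # 4. Diagonal (Hankel) averaging back to a length-N series.
        rec = np.zeros(N)
        counts = np.zeros(N)
        for i in range(L):
            for j in range(K):
                rec[i + j] += X_r[i, j]
                counts[i + j] += 1
        return rec / counts
    ```

    For a pure sinusoid the trajectory matrix has rank two, so a rank-2 reconstruction recovers the signal exactly; with noise added, the same call acts as a signal extractor.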
  3. By: Jiti Gao; Guangming Pan; Yanrong Yang
    Abstract: This paper considers modelling and detecting structural breaks associated with cross-sectional dependence in large dimensional panel data models, which are popular in many fields including economics and finance. We propose a dynamic factor structure to measure the degree of cross-sectional dependence, parameterizing its extent as an unknown exponent defined by assuming that a small proportion of the total factor loadings are important. Compared with the usual parameterization, this exponential description covers the case where only a small proportion of the cross-sectional units are dependent. We establish a 'moment' criterion to estimate the unknown parameter based on the covariance of cross-sectional averages at different time lags. By exploiting the fact that the serial dependence of common factors is stronger than that of the idiosyncratic components, the proposed criterion is able to capture weak cross-sectional dependence, which is reflected in relatively small values of the unknown parameter. Because a second unknown parameter is involved, both joint and marginal estimators are constructed. The paper establishes that the joint estimators of the pair of unknown parameters converge in distribution to a bivariate normal. When the second parameter is assumed known, an asymptotic distribution for the estimator of the original parameter is also established, and it coincides with the corresponding component of the joint asymptotic distribution. Simulation results show the finite-sample effectiveness of the proposed method. Empirical applications to cross-country macro-variables and stock returns in the S&P 500 market are also reported to show the practical relevance of the proposed estimation theory.
    Keywords: cross-sectional averages, dynamic factor model, joint estimation, marginal estimation, strong factor loading
    JEL: C21 C32
    Date: 2016
  4. By: Bo Zhang; Guangming Pan; Jiti Gao
    Abstract: This paper first considers some testing issues for a vector of high-dimensional time series and then establishes a joint distribution for the largest eigenvalues of the corresponding covariance matrix in the case where both the dimensionality of the time series and the length of the time series go to infinity. As an application, a new unit root test for a vector of high-dimensional time series is proposed and studied both theoretically and numerically, showing that existing unit root tests for the fixed-dimensional case are not applicable.
    Keywords: asymptotic normality, largest eigenvalue, linear process, unit root test
    JEL: C21 C32
    Date: 2016
  5. By: Chaohua Dong; Jiti Gao; Dag Tjostheim; Jiying Yin
    Abstract: This paper considers a general model specification test for nonlinear multivariate cointegrating regressions where the regressor consists of a univariate integrated time series and a vector of stationary time series. The regressors and the errors are generated from the same innovations, so the model accommodates endogeneity. A new and simple test is proposed and the resulting asymptotic theory is established. The test statistic is constructed from a natural distance function between a nonparametric estimate and a smoothed parametric counterpart. The asymptotic distribution of the test statistic under the parametric specification is proportional to that of a local-time random variable with a known distribution. In addition, the finite sample performance of the proposed test is evaluated using both simulated and real data examples.
    Keywords: cointegration, endogeneity, nonparametric kernel estimation, parametric model specification, time series
    JEL: C12 C14 C22
    Date: 2016
  6. By: Shin Kanaya (Aarhus University)
    Abstract: The convergence rates of the sums of α-mixing (or strongly mixing) triangular arrays of heterogeneous random variables are derived. We pay particular attention to the case where central limit theorems may fail to hold, due to relatively strong time-series dependence and/or the non-existence of higher-order moments. Several previous studies have presented various versions of laws of large numbers for sequences/triangular arrays, but their convergence rates were not fully investigated. This study is the first to investigate the convergence rates of the sums of α-mixing triangular arrays whose mixing coefficients are permitted to decay arbitrarily slowly. We consider two kinds of asymptotic assumptions: one is that the time distance between adjacent observations is fixed for any sample size n; and the other, called the infill assumption, is that it shrinks to zero as n tends to infinity. Our convergence theorems indicate that an explicit trade-off exists between the rate of convergence and the degree of dependence. While the results under the infill assumption can be seen as a direct extension of those under the fixed-distance assumption, they are new and particularly useful for deriving sharper convergence rates of discretization biases in estimating continuous-time processes from discretely sampled observations. We also discuss some examples to which our results and techniques are useful and applicable: a moving-average process with long-lasting past shocks, a continuous-time diffusion process with weak mean reversion, and a near-unit-root process.
    Keywords: Law of large numbers; rate of convergence; α-mixing triangular array; infill asymptotics; kernel estimation.
    JEL: C14 C22 C58
    Date: 2016–08
  7. By: Tomas Meluzin (Brno University of Technology, Czech Republic); Marek Zinecker (Brno University of Technology, Czech Republic); Michal Bernard Pietrzak (Nicolaus Copernicus University, Poland); Marcin Faldzinski (Nicolaus Copernicus University, Poland); Adam P. Balcerzak (Nicolaus Copernicus University, Poland)
    Abstract: The article concentrates on modelling the volatility of capital markets and estimating Value-at-Risk. Its aim is to describe the volatility of, and the interdependencies among, three indices: WIG (Poland), DAX (Germany) and DJIA (United States). To measure volatility and the strength of the interdependencies, the DCC-GARCH-In model was used, in which the impact of the volatility of other markets is additionally taken into consideration in the construction of the model. The research, conducted for the years 2000-2012, confirmed the presence of interactions among the selected capital markets. Next, the DCC-GARCH-In model was applied to evaluate Value-at-Risk, and the obtained measure was assessed using a backtesting procedure. The results confirm that including the volatility of other markets in the conditional variance of the DCC-GARCH-In model enables a better assessment of the VaR measure.
    Keywords: capital market, value-at-risk, backtesting, DCC-GARCH model, conditional variance
    JEL: G15 C58
    Date: 2016–09
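    The final step of the procedure described above — turning a conditional mean and conditional volatility (here, as produced by any GARCH-type model) into a VaR forecast and counting backtest violations — can be sketched as follows. This is a generic illustration under conditional normality, not the paper's DCC-GARCH-In implementation.

    ```python
    import numpy as np
    from statistics import NormalDist

    def var_forecast(mu, sigma, alpha=0.01):
        """One-step-ahead Value-at-Risk under conditional normality.

        mu and sigma are the conditional mean and standard deviation from
        a volatility model; VaR is the alpha-quantile of the return
        distribution (a negative return level). Illustrative sketch only.
        """
        z = NormalDist().inv_cdf(alpha)  # e.g. about -2.326 for alpha = 1%
        return mu + z * sigma

    def violation_ratio(returns, var):
        """Share of days on which the realized return breaches the VaR,
        the raw input to Kupiec-style backtests."""
        hits = np.asarray(returns) < np.asarray(var)
        return hits.mean()
    ```

    A well-specified alpha-level VaR should be breached on roughly an alpha fraction of days; the backtesting procedure formally tests that hit ratio.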
  8. By: Andre Lucas (VU University Amsterdam, the Netherlands); Anne Opschoor (VU University Amsterdam, the Netherlands)
    Abstract: We introduce a new fractionally integrated model for covariance matrix dynamics based on the long-memory behavior of daily realized covariance matrix kernels and daily return observations. We account for fat tails in both types of data by appropriate distributional assumptions. The covariance matrix dynamics are formulated as a numerically efficient matrix recursion that ensures positive definiteness under simple parameter constraints. Using intraday stock data over the period 2001-2012, we construct realized covariance kernels and show that the new fractionally integrated model statistically and economically outperforms recent alternatives such as the Multivariate HEAVY model and the 2006 “long-memory” version of the Riskmetrics model.
    Keywords: multivariate volatility; fractional integration; realized covariance matrices; heavy tails; matrix-F distribution; score dynamics
    JEL: C32 C58
    Date: 2016–09–02
  9. By: Tom Boot (Erasmus University Rotterdam, the Netherlands); Didier Nibbering (Erasmus University Rotterdam, the Netherlands)
    Abstract: Random subspace methods are a novel approach to obtain accurate forecasts in high-dimensional regression settings. We provide a theoretical justification of the use of random subspace methods and show their usefulness when forecasting monthly macroeconomic variables. We focus on two approaches. The first is random subset regression, where random subsets of predictors are used to construct a forecast. The second is random projection regression, where artificial predictors are formed by randomly weighting the original predictors. Using recent results from random matrix theory, we obtain a tight bound on the mean squared forecast error for both randomized methods. We identify settings in which one randomized method results in more precise forecasts than the other, and than alternative regularization strategies such as principal component regression, partial least squares, lasso, and ridge regression. The predictive accuracy on the high-dimensional macroeconomic FRED-MD data set increases substantially when using the randomized methods, with random subset regression outperforming each of the above-mentioned competing methods for at least 66% of the series.
    Keywords: dimension reduction; random projections; random subset regression; principal components analysis; forecasting
    JEL: C32 C38 C53 C55
    Date: 2016–09–06
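    The random subset regression idea described above — fit OLS on many random subsets of k predictors and average the resulting forecasts — is simple enough to sketch directly. The subset size k and number of draws are hypothetical tuning choices here; the paper's theory on choosing them, and the random-projection variant, are not reproduced.

    ```python
    import numpy as np

    def random_subset_forecast(X, y, x_new, k, n_draws=500, seed=0):
        """Average OLS forecasts over random subsets of k predictors.

        Illustrative sketch of random subset regression: each draw picks
        k columns at random, fits least squares, and forecasts; the final
        forecast is the equal-weight average across draws.
        """
        rng = np.random.default_rng(seed)
        p = X.shape[1]
        forecasts = np.empty(n_draws)
        for d in range(n_draws):
            idx = rng.choice(p, size=k, replace=False)  # random subset
            beta, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
            forecasts[d] = x_new[idx] @ beta
        return forecasts.mean()
    ```

    Averaging over subsets acts as a form of shrinkage: each small regression is low-variance, and the ensemble average trades a little bias for a large variance reduction relative to full-model OLS when p is large.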
  10. By: Manabu Asai (Soka University, Japan); Michael McAleer (National Tsing Hua University, Taiwan; Erasmus University Rotterdam, the Netherlands; Complutense University of Madrid, Spain; Yokohama National University, Japan)
    Abstract: The paper considers various extended asymmetric multivariate conditional volatility models, and derives appropriate regularity conditions and associated asymptotic theory. This enables checking of internal consistency and allows valid statistical inferences to be drawn based on empirical estimation. For this purpose, we use an underlying vector random coefficient autoregressive process, for which we show the equivalent representation for the asymmetric multivariate conditional volatility model, to derive asymptotic theory for the quasi-maximum likelihood estimator. As an extension, we develop a new multivariate asymmetric long memory volatility model, and discuss the associated asymptotic properties.
    Keywords: Multivariate conditional volatility; Vector random coefficient autoregressive process; Asymmetry; Long memory; Dynamic conditional correlations; Regularity conditions; Asymptotic properties
    JEL: C13 C32 C58
    Date: 2016–09–05
  11. By: Helmut Lütkepohl (DIW Berlin); Anna Staszewska-Bystrova (University of Lodz); Peter Winker (University of Giessen)
    Abstract: This paper proposes a new non-parametric method of constructing joint confidence bands for impulse response functions of vector autoregressive models. The estimation uncertainty is captured by means of bootstrapping and the highest density region (HDR) approach is used to construct the bands. A Monte Carlo comparison of the HDR bands with existing alternatives shows that the former are competitive with the bootstrap-based Bonferroni and Wald confidence regions. The relative tightness of the HDR bands matched with their good coverage properties makes them attractive for applications. An application to corporate bond spreads for Germany highlights the potential for empirical work.
    Keywords: Impulse responses, joint confidence bands, highest density region, vector autoregressive process
    JEL: C32
    Date: 2016
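    The HDR-band construction described above can be sketched in a few lines: bootstrap the impulse responses, score each bootstrap path by an estimated density, retain the highest-density share of paths, and envelope them pointwise. As a crude stand-in for the nonparametric density estimate used in the paper, this sketch scores paths by a multivariate-normal density fitted to the draws (equivalently, by Mahalanobis distance); the `hdr_band` function and its inputs are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def hdr_band(paths, coverage=0.90):
        """Joint band from the highest-density region of bootstrap IRF paths.

        paths: (n_boot, H) array of bootstrapped impulse responses.
        Scores each path by squared Mahalanobis distance (a Gaussian
        density proxy), keeps the `coverage` share of highest-density
        paths, and returns their pointwise envelope.
        """
        mean = paths.mean(axis=0)
        cov = np.cov(paths, rowvar=False)
        cov += 1e-8 * np.eye(cov.shape[0])   # ridge for numerical stability
        prec = np.linalg.inv(cov)
        dev = paths - mean
        mahal = np.einsum('ij,jk,ik->i', dev, prec, dev)
        keep = mahal <= np.quantile(mahal, coverage)  # highest-density draws
        retained = paths[keep]
        return retained.min(axis=0), retained.max(axis=0)  # lower, upper
    ```

    Because whole paths are retained or discarded jointly, the resulting band has joint (not merely pointwise) coverage, which is what makes HDR bands tighter than Bonferroni-style constructions at comparable coverage.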

This nep-ets issue is ©2016 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.