nep-ets New Economics Papers
on Econometric Time Series
Issue of 2017‒01‒29
twelve papers chosen by
Yong Yin
SUNY at Buffalo

  1. Uniform Inference in Panel Autoregression By John Chao; Peter C.B. Phillips
  2. Weak σ-Convergence: Theory and Applications By Jianning Kong; Peter C.B. Phillips; Donggyu Sul
  3. Bias correction for dynamic factor models By García-Martos, Carolina; Bastos, Guadalupe; Alonso Fernández, Andrés Modesto
  4. A Unified Framework for Dimension Reduction in Forecasting By Alessandro Barbarino; Efstathia Bura
  5. Unit Root Tests and Heavy-Tailed Innovations By Georgiev, Iliyan; Rodrigues, Paulo M M; Taylor, A M Robert
  6. Business cycle estimation with high-pass and band-pass local polynomial regression By Luis J. Álvarez
  7. Dynamic panel data modelling using maximum likelihood: an alternative to Arellano-Bond By Enrique Moral-Benito; Paul Allison; Richard Williams
  8. Time Series Copulas for Heteroskedastic Data By Rubén Loaiza-Maya; Michael S. Smith; Worapree Maneesoonthorn
  9. The Fiction of Full BEKK By Chia-Lin Chang; Michael McAleer
  10. Sparse Change-point HAR Models for Realized Variance By Arnaud Dufays; Jeroen V.K. Rombouts
  11. A new approach to volatility modeling: the High-Dimensional Markov model By Arnaud Dufays; Maciej Augustyniak; Luc Bauwens
  12. Identification-robust moment-based tests for Markov-switching in autoregressive models By Jean-Marie Dufour; Richard Luger

  1. By: John Chao (University of Maryland); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: This paper considers estimation and inference concerning the autoregressive coefficient (ρ) in a panel autoregression for which the degree of persistence in the time dimension is unknown. The main objective is to construct confidence intervals for ρ that are asymptotically valid, having asymptotic coverage probability at least that of the nominal level uniformly over the parameter space. It is shown that a properly normalized statistic based on the Anderson-Hsiao IV procedure, which we call the M statistic, is uniformly convergent and can be inverted to obtain asymptotically valid interval estimates. In the unit root case confidence intervals based on this procedure are unsatisfactorily wide and uninformative. To sharpen the intervals a new procedure is developed using information from unit root pretests to select alternative confidence intervals. Two sequential tests are used to assess how close ρ is to unity and to correspondingly tailor intervals near the unit root region. When ρ is close to unity, the width of these intervals shrinks to zero at a faster rate than that of the confidence interval based on the M statistic. Only when both tests reject the unit root hypothesis does the construction revert to the M statistic intervals, whose width has the optimal N^{-1/2}T^{-1/2} rate of shrinkage when the underlying process is stable. The asymptotic properties of this pretest-based procedure show that it produces confidence intervals with at least the prescribed coverage probability in large samples. Simulations confirm that the proposed interval estimation methods perform well in finite samples and are easy to implement in practice. A supplement to the paper provides an extensive set of new results on the asymptotic behavior of panel IV estimators in weak instrument settings. (A sketch of the Anderson-Hsiao building block follows this entry.)
    Keywords: Confidence interval, Dynamic panel data models, panel IV, pooled OLS, Pretesting, Uniform inference
    JEL: C23 C36
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2071&r=ets
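    A minimal Python sketch of the Anderson-Hsiao IV estimator on which the paper's M statistic is built; the uniform confidence construction and pretests are not reproduced, and all names are illustrative.

        import numpy as np

        def anderson_hsiao(y):
            """Anderson-Hsiao IV estimate of rho in y_it = a_i + rho*y_{i,t-1} + u_it.
            y is an (N, T) array. First-differencing removes the fixed effect a_i,
            and the lagged level y_{i,t-2} instruments the lagged difference."""
            dy  = y[:, 2:] - y[:, 1:-1]    # Delta y_it for t = 2,...,T-1
            dy1 = y[:, 1:-1] - y[:, :-2]   # Delta y_{i,t-1}
            z   = y[:, :-2]                # instrument y_{i,t-2}
            return np.sum(z * dy) / np.sum(z * dy1)

        # Toy check on a stable panel AR(1) with fixed effects
        rng = np.random.default_rng(0)
        N, T, rho = 200, 20, 0.5
        a = rng.normal(size=(N, 1))
        y = np.zeros((N, T))
        y[:, 0] = rng.normal(size=N)
        for t in range(1, T):
            y[:, t] = a[:, 0] + rho * y[:, t - 1] + rng.normal(size=N)
        print(anderson_hsiao(y))           # close to 0.5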
  2. By: Jianning Kong (Shandong University); Peter C.B. Phillips (Cowles Foundation, Yale University); Donggyu Sul (University of Texas Dallas)
    Abstract: The concept of relative convergence, which requires the ratio of two time series to converge to unity in the long run, explains convergent behavior when series share commonly divergent stochastic or deterministic trend components. Relative convergence of this type does not necessarily hold when series share common time decay patterns measured by evaporating rather than divergent trend behavior. To capture convergent behavior in panel data that do not involve stochastic or divergent deterministic trends, we introduce the notion of weak σ-convergence, whereby cross section variation in the panel decreases over time. The paper formalizes this concept and proposes a simple-to-implement linear trend regression test of the null of no σ-convergence. Asymptotic properties for the test are developed under general regularity conditions and various data generating processes. Simulations show that the test has good size control and discriminatory power. The method is applied to examine whether the idiosyncratic components of 90 disaggregate personal consumption expenditure (PCE) price index items σ-converge over time. We find strong evidence of weak σ-convergence in the period after 1992, which implies that cross sectional dependence has strengthened over the last two decades. In a second application, the method is used to test whether experimental data in ultimatum games converge over successive rounds, again finding evidence in favor of weak σ-convergence. A third application studies convergence and divergence in US state unemployment data over the period 2001-2016. (A schematic sketch of the trend-regression idea follows this entry.)
    Keywords: Asymptotics under misspecified trend regression, Cross section dependence, Evaporating trend, Relative convergence, Trend regression, Weak σ-convergence
    JEL: C33
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2072&r=ets
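    A minimal sketch of the flavor of the test, assuming only numpy and statsmodels: regress the cross-sectional variance of the panel on a linear trend and inspect the slope. The paper's actual statistic and its limit theory differ in detail.

        import numpy as np
        import statsmodels.api as sm

        def sigma_convergence_slope(x, maxlags=4):
            """x: (N, T) panel. Regress the cross-sectional variance K_t on a
            linear trend; a significantly negative slope is evidence in the
            direction of weak sigma-convergence. Returns (slope, HAC t-stat)."""
            kt = x.var(axis=0)                 # cross-sectional variance at each t
            t = np.arange(1, len(kt) + 1)
            res = sm.OLS(kt, sm.add_constant(t)).fit(
                cov_type="HAC", cov_kwds={"maxlags": maxlags})
            return res.params[1], res.tvalues[1]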
  3. By: García-Martos, Carolina; Bastos, Guadalupe; Alonso Fernández, Andrés Modesto
    Abstract: In this paper we work with multivariate time series that follow a Dynamic Factor Model. In particular, we consider the setting where the factors are dominated by highly persistent AutoRegressive (AR) processes and the samples are rather small. The factors' AR models are therefore estimated using small sample bias correction techniques. A Monte Carlo study reveals that bias-correcting the AR coefficients of the factors yields better prediction interval coverage. As expected, the simulation also shows that bias correction is more successful for smaller samples. Results are reported both for the case where the AR order and the number of factors are known and for the case where they are unknown. We also illustrate the advantages of this technique on a set of Industrial Production Indexes of several European countries. (A sketch of a classical AR bias correction follows this entry.)
    Keywords: Dynamic Factor Model; persistent processes; autoregressive models; small sample bias correction; dimensionality reduction
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:24029&r=ets
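    A minimal sketch of one classical correction of the type the paper applies to the factors' AR models: the first-order (Kendall/Marriott-Pope) bias adjustment for an AR(1) coefficient estimated by OLS with an intercept.

        import numpy as np

        def ar1_ols(x):
            """OLS estimate of rho in x_t = c + rho*x_{t-1} + e_t."""
            x0, x1 = x[:-1], x[1:]
            x0c = x0 - x0.mean()
            return np.dot(x0c, x1 - x1.mean()) / np.dot(x0c, x0c)

        def ar1_bias_corrected(x):
            """E[rho_hat] is approximately rho - (1 + 3*rho)/T in small samples,
            so add back the estimated bias, truncating to keep the root stable."""
            T, rho = len(x), ar1_ols(x)
            return min(rho + (1.0 + 3.0 * rho) / T, 0.999)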
  4. By: Alessandro Barbarino; Efstathia Bura
    Abstract: Factor models are widely used to summarize large datasets with a few underlying latent factors and to build time series forecasting models for economic variables. In these models, the reduction of the predictors and the modeling and forecasting of the response y are carried out in two separate and independent phases. We introduce a potentially more attractive alternative, Sufficient Dimension Reduction (SDR), that summarizes x as it relates to y, so that all the information in the conditional distribution of y|x is preserved. We study the relationship between SDR and popular estimation methods, such as ordinary least squares (OLS), dynamic factor models (DFM), partial least squares (PLS) and ridge regression, and establish the connections and fundamental differences between the DFM and SDR frameworks. We show that SDR significantly reduces the dimension of widely used macroeconomic series, with one or two sufficient reductions delivering forecasting performance similar to that of competing macro-forecasting methods. (A sketch of one standard SDR estimator follows this entry.)
    Keywords: Diffusion Index ; Dimension Reduction ; Factor Models ; Forecasting ; Partial Least Squares ; Principal Components
    JEL: C32 C53 C55 E17
    Date: 2017–01–12
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2017-04&r=ets
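    A minimal sketch of Sliced Inverse Regression (SIR), one standard SDR estimator, to illustrate how x is reduced with reference to y rather than in isolation; the paper's treatment of SDR for forecasting is broader than this.

        import numpy as np

        def sir_directions(x, y, n_slices=10, n_dirs=2):
            """x: (n, p) predictors, y: (n,) response. Returns a (p, n_dirs)
            matrix of estimated sufficient-reduction directions."""
            n, p = x.shape
            xc = x - x.mean(axis=0)
            evals, evecs = np.linalg.eigh(np.cov(xc, rowvar=False))
            w = evecs @ np.diag(evals ** -0.5) @ evecs.T    # whitening matrix
            z = xc @ w
            order = np.argsort(y)                           # slice on sorted y
            m = np.zeros((p, p))
            for s in np.array_split(order, n_slices):
                mu = z[s].mean(axis=0)
                m += (len(s) / n) * np.outer(mu, mu)        # slice-mean covariance
            _, vecs = np.linalg.eigh(m)
            return w @ vecs[:, -n_dirs:]                    # top directions, x-scale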
  5. By: Georgiev, Iliyan; Rodrigues, Paulo M M; Taylor, A M Robert
    Abstract: We evaluate the impact of heavy-tailed innovations on some popular unit root tests. In the context of a near-integrated series driven by linear-process shocks, we demonstrate that their limiting distributions are altered under infinite variance vis-à-vis finite variance. Reassuringly, however, simulation results suggest that the impact of heavy-tailed innovations on these tests is relatively small. We use the framework of Amsler and Schmidt (2012), whereby the innovations have local-to-finite variances, being generated as a linear combination of draws from a thin-tailed distribution (in the domain of attraction of the Gaussian distribution) and a heavy-tailed distribution (in the normal domain of attraction of a stable law). We also explore the properties of ADF tests which employ Eicker-White standard errors, demonstrating that these can yield significant power improvements over conventional tests. (A sketch of such an ADF regression follows this entry.)
    Keywords: Infinite variance, α-stable distribution, Eicker-White standard errors, asymptotic local power functions, weak dependence
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:18832&r=ets
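    A minimal sketch of an ADF regression with Eicker-White (heteroskedasticity-robust) standard errors, the variant the paper finds can improve power. The lag length here is fixed and illustrative, and the statistic must still be compared with Dickey-Fuller, not normal, critical values.

        import numpy as np
        import statsmodels.api as sm

        def adf_white_tstat(y, lags=4):
            """t-statistic on y_{t-1} in
            dy_t = c + g*y_{t-1} + sum_j b_j*dy_{t-j} + e_t, with HC0 errors."""
            dy = np.diff(y)
            cols = [y[lags:-1]]                      # lagged level y_{t-1}
            for j in range(1, lags + 1):
                cols.append(dy[lags - j:-j])         # lagged differences dy_{t-j}
            X = sm.add_constant(np.column_stack(cols))
            res = sm.OLS(dy[lags:], X).fit(cov_type="HC0")
            return res.tvalues[1]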
  6. By: Luis J. Álvarez (Banco de España)
    Abstract: Filters constructed on the basis of standard local polynomial regression (LPR) methods have been used in the literature to estimate the business cycle. We provide a frequency domain interpretation of the contrast filter obtained as the difference between a series and its long-run LPR component, and show that it operates as a kind of high-pass filter, so that it provides a noisy estimate of the cycle. As an alternative, we propose band-pass local polynomial regression methods aimed at isolating the cyclical component. Results are compared to standard high-pass and band-pass filters. The procedures are illustrated using the US GDP series. (A sketch of the contrast-filter idea follows this entry.)
    Keywords: business cycles, local polynomial regression, filtering, high-pass, band-pass, US cycles
    JEL: C13
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:1702&r=ets
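    A minimal sketch of the contrast-filter idea, using LOWESS (a local linear regression) as the long-run LPR component; the band-pass constructions proposed in the paper are not reproduced here, and the bandwidth is illustrative.

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        def highpass_lpr(y, frac=0.3):
            """y: e.g. log GDP. Fit a local-linear long-run component and take
            the residual as the (noisy, high-pass) cycle estimate."""
            t = np.arange(len(y))
            trend = lowess(y, t, frac=frac, return_sorted=False)
            return trend, y - trend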
  7. By: Enrique Moral-Benito (Banco de España); Paul Allison (University of Pennsylvania); Richard Williams (University of Notre Dame)
    Abstract: The Arellano and Bond (1991) estimator is widely used by applied researchers to estimate dynamic panels with fixed effects and predetermined regressors. This estimator may behave poorly in finite samples when the cross-section dimension of the data is small (i.e. small N), especially if the variables under analysis are persistent over time. This paper discusses a maximum likelihood estimator that is asymptotically equivalent to Arellano and Bond (1991) but has better finite sample behaviour. Moreover, the estimator is easy to implement in Stata using the xtdpdml command, as described in the companion paper Williams et al. (2016), which also discusses further advantages of the proposed estimator for practitioners. (A simplified likelihood sketch follows this entry.)
    Keywords: dynamic panel data, maximum likelihood estimation
    JEL: C23
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:1703&r=ets
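    A heavily simplified likelihood sketch in the spirit of the ML alternative: a Gaussian panel AR(1) in which the individual effect is integrated out analytically but, unlike in the paper's estimator (and xtdpdml), is assumed independent of the initial condition.

        import numpy as np
        from scipy.optimize import minimize

        def neg_loglik(theta, y):
            """y: (N, T+1) panel including the initial observation y_i0.
            Residual vector v_i = alpha_i*1 + u_i is N(0, s2*I + sa2*11')."""
            rho, log_s2, log_sa2 = theta
            s2, sa2 = np.exp(log_s2), np.exp(log_sa2)
            v = y[:, 1:] - rho * y[:, :-1]
            N, T = v.shape
            omega = s2 * np.eye(T) + sa2 * np.ones((T, T))
            _, logdet = np.linalg.slogdet(omega)
            quad = np.einsum("it,ts,is->", v, np.linalg.inv(omega), v)
            return 0.5 * (N * logdet + quad)      # additive constants dropped

        def panel_ar1_ml(y):
            res = minimize(neg_loglik, np.zeros(3), args=(y,), method="Nelder-Mead")
            return res.x[0]                       # estimate of rho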
  8. By: Rubén Loaiza-Maya; Michael S. Smith; Worapree Maneesoonthorn
    Abstract: We propose parametric copulas that capture serial dependence in stationary heteroskedastic time series. We develop our copula for first-order Markov series and extend it to higher orders and to multivariate series. We derive the copula of a volatility proxy, on the basis of which we propose new measures of volatility dependence, including co-movement and spillover in multivariate series. In general, these depend upon the marginal distributions of the series. Using exchange rate returns, we show that the resulting copula models can capture their marginal distributions more accurately than univariate and multivariate GARCH models, and produce more accurate value-at-risk forecasts. (A sketch of the first-order copula construction follows this entry.)
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1701.07152&r=ets
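    A minimal sketch of the first-order Markov copula construction: map the series to uniforms with its empirical CDF and fit a copula to consecutive pairs. A Gaussian pair copula is used here purely for brevity; it cannot capture the volatility clustering the paper's copulas are designed for.

        import numpy as np
        from scipy.stats import norm, rankdata

        def fit_pair_copula_gaussian(y):
            """Gaussian-copula dependence parameter of (y_{t-1}, y_t)."""
            u = rankdata(y) / (len(y) + 1)   # probability integral transform
            z = norm.ppf(u)                  # Gaussian scores
            return np.corrcoef(z[:-1], z[1:])[0, 1]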
  9. By: Chia-Lin Chang (Department of Applied Economics, Department of Finance, National Chung Hsing University, Taiwan); Michael McAleer (Department of Quantitative Finance, National Tsing Hua University, Taiwan; Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands; Department of Quantitative Economics, Complutense University of Madrid, Spain)
    Abstract: The purpose of the paper is threefold: to show that univariate GARCH is not a special case of multivariate GARCH, specifically the Full BEKK model, except under parametric restrictions on the off-diagonal elements of the random coefficient autoregressive coefficient matrix; to provide the regularity conditions that arise from the underlying random coefficient autoregressive process; and to show that the (quasi-) maximum likelihood estimates have valid asymptotic properties under the appropriate parametric restrictions. The paper provides a discussion of the stochastic processes, regularity conditions, and asymptotic properties of univariate and multivariate GARCH models. It is shown that the Full BEKK model, which in practice is estimated almost exclusively, has no underlying stochastic process, regularity conditions, or asymptotic properties. (The BEKK covariance recursion at issue is sketched below.)
    Keywords: Random coefficient stochastic process; Off-diagonal parametric restrictions; Diagonal and Full BEKK; Regularity conditions; Asymptotic properties; Conditional volatility; Univariate and multivariate models.
    JEL: C22 C32 C52 C58
    Date: 2017–01–23
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20170015&r=ets
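    A minimal sketch of the Full BEKK(1,1) conditional covariance recursion the paper scrutinizes, H_t = CC' + A'e_{t-1}e_{t-1}'A + B'H_{t-1}B; the initialization and parameter values are illustrative.

        import numpy as np

        def bekk_filter(eps, C, A, B):
            """eps: (T, m) residuals; C lower-triangular (m, m); A, B: (m, m).
            Returns the sequence of conditional covariance matrices H_t."""
            T, m = eps.shape
            H = np.cov(eps, rowvar=False)        # initialize at sample covariance
            CC = C @ C.T
            out = []
            for t in range(T):
                out.append(H)
                e = eps[t][:, None]
                H = CC + A.T @ (e @ e.T) @ A + B.T @ H @ B
            return out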
  10. By: Arnaud Dufays; Jeroen V.K. Rombouts
    Abstract: Change-point time series specifications constitute flexible models that capture unknown structural changes by allowing for switches in the model parameters. Nevertheless, most models suffer from an over-parametrization issue, since typically only one latent state variable drives the switches in all parameters. This implies that all parameters have to change when a break happens. To gauge whether and where there are structural breaks in realized variance, we introduce the sparse change-point HAR model. The approach controls for model parsimony by limiting the number of parameters which evolve from one regime to another. Sparsity is achieved by employing a nonstandard shrinkage prior distribution. We derive a Gibbs sampler for inferring the parameters of this process. Simulation studies illustrate the excellent performance of the sampler. Relying on this new framework, we study the stability of the HAR model using realized variance series of several major international indices between January 2000 and August 2015. (The baseline HAR regression is sketched below.)
    Keywords: Realized variance, Bayesian inference, Time series, Shrinkage prior, Change-point model, Online forecasting
    JEL: C11 C15 C22 C51
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:lvl:crrecr:1607&r=ets
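    A minimal sketch of the baseline (single-regime) HAR regression that the sparse change-point model generalizes: realized variance on its daily, weekly and monthly backward averages.

        import numpy as np
        import statsmodels.api as sm

        def har_fit(rv):
            """rv: 1-D array of daily realized variances. OLS of rv_t on
            rv_{t-1}, mean(rv_{t-5..t-1}) and mean(rv_{t-22..t-1})."""
            T = len(rv)
            rows = [(rv[t - 1],
                     rv[t - 5:t].mean(),
                     rv[t - 22:t].mean()) for t in range(22, T)]
            X = sm.add_constant(np.array(rows))
            return sm.OLS(rv[22:], X).fit()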
  11. By: Arnaud Dufays; Maciej Augustyniak; Luc Bauwens
    Abstract: A new model, the high-dimensional Markov (HDM) model, is proposed for financial returns and their latent variances. It can also be applied directly to realized variances. Volatility is modeled as a product of three components: a Markov chain driving volatility persistence, an independent discrete process capable of generating jumps in the volatility, and a predictable (data-driven) process capturing the leverage effect. The Markov chain and jump components allow volatility to switch abruptly between thousands of states. The transition probability matrix of the Markov chain is structured in such a way that the multiplicity of the second largest eigenvalue can be greater than one. This distinctive feature generates a high degree of volatility persistence. The statistical properties of the HDM model are derived and an economic interpretation is attached to each component. In-sample results on six financial time series highlight that the HDM model compares favorably to the main existing volatility processes. A forecasting experiment shows that the HDM model significantly outperforms its competitors when predicting volatility over time horizons longer than five days. (A toy simulation of the multiplicative structure follows this entry.)
    Keywords: Volatility, Markov-switching, Persistence, Leverage effect.
    JEL: C22 C51 C58
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:lvl:crrecr:1609&r=ets
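    A toy simulation in the spirit of the HDM decomposition: volatility as the product of a persistent Markov-chain level, an occasional independent jump, and a return-driven leverage term. The toy chain has two states rather than thousands, and all parameter values are illustrative, not the paper's.

        import numpy as np

        def simulate_hdm_like(T, seed=0):
            rng = np.random.default_rng(seed)
            levels, p_stay = np.array([0.5, 2.0]), 0.98   # persistent chain
            s, r_prev = 0, 0.0
            r = np.zeros(T)
            for t in range(T):
                if rng.random() > p_stay:                 # rare regime switch
                    s = 1 - s
                jump = np.exp(rng.normal(0.0, 0.5)) if rng.random() < 0.02 else 1.0
                leverage = 1.3 if r_prev < 0 else 1.0     # asymmetry after losses
                sigma2 = levels[s] * jump * leverage      # multiplicative volatility
                r[t] = np.sqrt(sigma2) * rng.normal()
                r_prev = r[t]
            return r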
  12. By: Jean-Marie Dufour; Richard Luger
    Abstract: This paper develops tests of the null hypothesis of linearity in the context of autoregressive models with Markov-switching means and variances. These tests are robust to the identification failures that plague conventional likelihood-based inference methods. The approach exploits the moments of normal mixtures implied by the regime-switching process and uses Monte Carlo test techniques to deal with the presence of an autoregressive component in the model specification. The proposed tests have very respectable power in comparison with the optimal tests for Markov-switching parameters of Carrasco et al. (2014), and they are also quite attractive owing to their computational simplicity. The new tests are illustrated with an empirical application to an autoregressive model of U.S. output growth. (A schematic Monte Carlo moment test is sketched below.)
    Keywords: Mixture distributions; Markov chains; Regime switching; Parametric bootstrap; Monte Carlo tests; Exact inference.
    JEL: C12 C15 C22 C52
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:lvl:crrecr:1701&r=ets
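    A schematic Monte Carlo moment test in the spirit of the paper, assuming the AR residuals are given: under the linear Gaussian null, residual skewness and excess kurtosis are both zero, so compare a combined statistic with its simulated null distribution. The paper's statistics and its handling of estimated AR parameters are more involved.

        import numpy as np
        from scipy import stats

        def moment_stat(e):
            return stats.skew(e) ** 2 + stats.kurtosis(e) ** 2  # kurtosis = excess

        def mc_pvalue(resid, n_rep=999, seed=0):
            rng = np.random.default_rng(seed)
            s0 = moment_stat(resid)
            sims = [moment_stat(rng.normal(size=len(resid))) for _ in range(n_rep)]
            return (1 + sum(s >= s0 for s in sims)) / (n_rep + 1)  # MC p-value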

This nep-ets issue is ©2017 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.