nep-ets New Economics Papers
on Econometric Time Series
Issue of 2017‒09‒17
ten papers chosen by
Yong Yin
SUNY at Buffalo

  1. Clustering Space-Time Series: A Flexible STAR Approach By E. Otranto; M. Mucciardi
  2. Three serial correlation tests for panel data regression models By Jesse Wursten
  3. A Bootstrap Approach for Generalized Autocontour Testing. Implications for VIX Forecast Densities By Gloria Gonzalez-Rivera; Joao Henrique Mazzeu; Esther Ruiz; Helena Veiga
  4. Stationarity and Invertibility of a Dynamic Correlation Matrix By Michael McAleer
  5. Inference on a Semiparametric Model with Global Power Law and Local Nonparametric Trends By Jiti Gao; Oliver Linton; Bin Peng
  6. Kernel-based inference in time-varying coefficient models with multiple integrated regressors By Degui Li; Peter CB Phillips; Jiti Gao
  7. A bootstrap stationarity test for predictive regression invalidity By Iliyan Georgiev; David I. Harvey; Stephen J. Leybourne; A. M. Robert Taylor
  8. Point Optimal Testing with Roots That Are Functionally Local to Unity By Anna Bykhovskaya; Peter C. B. Phillips
  9. Boundary Limit Theory for Functional Local to Unity Regression By Anna Bykhovskaya; Peter C. B. Phillips
  10. Clustering Financial Time Series: How Long is Enough? By Gautier Marti; Sébastien Andler; Frank Nielsen; Philippe Donnat

  1. By: E. Otranto; M. Mucciardi
    Abstract: The STAR model is widely used to represent the dynamics of a certain variable recorded at several locations at the same time. Its advantages are often discussed in terms of parsimony with respect to space-time VAR structures because it considers a single coefficient for each time and spatial lag. This hypothesis can be very strong; we add a certain degree of flexibility to the STAR model, providing the possibility for coefficients to vary in groups of locations. The new class of models is compared to the classical STAR and the space-time VAR by simulations and an application.
    Keywords: clustering; forecasting; space–time models; spatial weight matrix
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:201707&r=ets
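As a rough illustration of the dynamics the abstract describes, the following sketch simulates a STAR(1,1) process with one time-lag and one spatial-lag coefficient shared by all locations. All names, parameter values, and the weight matrix are hypothetical, chosen only to show the parsimonious structure the paper relaxes.

```python
import numpy as np

def simulate_star(T, W, phi=0.4, lam=0.3, seed=0):
    """Simulate a STAR(1,1) process y_t = phi*y_{t-1} + lam*W y_{t-1} + eps_t.

    W is a row-normalized spatial weight matrix. A single time-lag coefficient
    phi and a single spatial-lag coefficient lam apply to every location,
    which is the parsimony (and the restriction) the paper relaxes by letting
    coefficients vary across groups of locations.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros((T, W.shape[0]))
    for t in range(1, T):
        y[t] = phi * y[t - 1] + lam * W @ y[t - 1] + rng.standard_normal(W.shape[0])
    return y

# Three locations on a line; neighbors weighted equally, rows sum to one.
W = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
y = simulate_star(200, W)
```

With |phi| + |lam| < 1 and a row-normalized W, the simulated panel stays stable; a group-flexible version would replace the scalars phi and lam with group-specific values.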
  2. By: Jesse Wursten (KU Leuven)
    Abstract: The default method to calculate standard errors in regression models requires idiosyncratic errors (uncorrelated on any dimension). More general methods exist (e.g. HAC and clustered errors) but are not always feasible, especially in smaller datasets or those with a complicated (correlation) structure. However, if your residuals are uncorrelated, the default standard errors might actually suffice and be more reliable than their cluster robust version. In this presentation, I present three new panel serial correlation tests which can be used to look for correlation along the first dimension (‘within’ groups). Likewise, I present two new(-ish) commands to test for correlation in the second dimension (‘between’ groups). These commands are faster, more versatile and robust than existing ones (e.g. xtserial, abar).
    Date: 2017–09–14
    URL: http://d.repec.org/n?u=RePEc:boc:usug17:17&r=ets
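The flavor of a within-group serial correlation test can be sketched as below. This is a stripped-down version of the Wooldridge-style idea behind commands such as xtserial (not the author's new commands): under no serial correlation, the pooled slope of first-differenced residuals on their own lag converges to -0.5.

```python
import numpy as np

def wooldridge_ar1_slope(resid):
    """First-order serial correlation check for panel residuals, Wooldridge-style.

    resid is an (N, T) array of within-group residuals. Regress the
    first-differenced residual on its own lag (pooled, through the origin):
    absent serial correlation the slope converges to -0.5. A stripped-down
    sketch of the idea behind commands like xtserial, not the author's tests.
    """
    d = np.diff(resid, axis=1)     # (N, T-1) within-group first differences
    x = d[:, :-1].ravel()          # lagged differences
    z = d[:, 1:].ravel()           # current differences
    return float(x @ z / (x @ x))  # pooled OLS slope through the origin

rng = np.random.default_rng(1)
iid = rng.standard_normal((500, 10))   # serially uncorrelated panel residuals
slope = wooldridge_ar1_slope(iid)      # should be near -0.5
```

A slope significantly different from -0.5 signals serial correlation, in which case the default standard errors the abstract mentions would not suffice.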
  3. By: Gloria Gonzalez-Rivera (Department of Economics, University of California Riverside); Joao Henrique Mazzeu (UC3M); Esther Ruiz (UC3M); Helena Veiga (UC3M)
    Abstract: We propose an extension of the Generalized Autocontour (G-ACR) tests for dynamic specification of in-sample conditional densities and for evaluation of out-of-sample forecast densities. The new tests are based on probability integral transforms (PITs) computed from bootstrap conditional densities that incorporate parameter uncertainty. Then, the parametric specification of the conditional moments can be tested without relying on any parametric error distribution yet exploiting distributional properties of the variable of interest. We show that the finite sample distributions of the bootstrapped G-ACR (BG-ACR) tests are well approximated using standard asymptotic distributions. Furthermore, the proposed tests are easy to implement and are accompanied by graphical tools that provide information about the potential sources of misspecification. We apply the BG-ACR tests to the Heterogeneous Autoregressive (HAR) model and the Multiplicative Error Model (MEM) of the U.S. volatility index VIX. We find strong evidence against the parametric assumptions of the conditional densities, i.e. normality in the HAR model and semi non-parametric Gamma (GSNP) in the MEM. In both cases, the true conditional density seems to be more skewed to the right and more peaked than either the normal or GSNP densities, with location, variance and skewness changing over time. The preferred specification is the heteroscedastic HAR model with bootstrap conditional densities of the log-VIX.
    Keywords: Distribution Uncertainty; Model Evaluation; Parameter Uncertainty; PIT; VIX; HAR model; Multiplicative Error Model
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:201709&r=ets
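The PIT idea underlying these tests can be sketched as follows: under a correctly specified conditional density, the PITs are i.i.d. uniform, so empirical coverage at each level should match the level. This is only the basic ingredient (with hypothetical data and a plain normal model); the BG-ACR tests build autocontours from such PITs and add bootstrap parameter uncertainty on top.

```python
import numpy as np
from math import erf

def pit_coverage(y, mu, sigma, levels=(0.1, 0.5, 0.9)):
    """Empirical coverage of probability integral transforms (PITs).

    Under a correctly specified conditional density N(mu, sigma^2), the PITs
    u_t = Phi((y_t - mu)/sigma) are i.i.d. U(0,1), so the share of u_t below
    each level should match that level. Illustrative sketch only, not the
    paper's test statistic.
    """
    z = (np.asarray(y) - mu) / sigma
    u = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))  # standard normal CDF
    return {a: float(np.mean(u <= a)) for a in levels}

rng = np.random.default_rng(2)
y = 1.0 + 0.5 * rng.standard_normal(5000)   # data truly N(1, 0.25)
cov = pit_coverage(y, mu=1.0, sigma=0.5)
```

Systematic departures of the empirical coverage from the nominal levels, such as the paper finds for the normal and GSNP densities of the VIX, indicate misspecification.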
  4. By: Michael McAleer (National Tsing Hua University, Taiwan; University of Sydney Business School, Australia; Erasmus University Rotterdam, The Netherlands)
    Abstract: One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made problematic the derivation of asymptotic properties of the Quasi-Maximum Likelihood Estimators (QMLE). To date, the statistical properties of the QMLE of the DCC parameters have purportedly been derived under highly restrictive and unverifiable regularity conditions. The paper shows that the DCC model can be obtained from a vector random coefficient moving average process, and derives the stationarity and invertibility conditions of the DCC model. The derivation of DCC from a vector random coefficient moving average process raises three important issues, as follows: (i) demonstrates that DCC is, in fact, a dynamic conditional covariance model of the returns shocks rather than a dynamic conditional correlation model; (ii) provides the motivation, which is presently missing, for standardization of the conditional covariance model to obtain the conditional correlation model; and (iii) shows that the appropriate ARCH or GARCH model for DCC is based on the standardized shocks rather than the returns shocks. The derivation of the regularity conditions, especially stationarity and invertibility, should subsequently lead to a solid statistical foundation for the estimates of the DCC parameters. Several new results are also derived for univariate models, including a novel conditional volatility model expressed in terms of standardized shocks rather than returns shocks, as well as the associated stationarity and invertibility conditions.
    Keywords: Dynamic conditional correlation; dynamic conditional covariance; vector random coefficient moving average; stationarity; invertibility; asymptotic properties.
    JEL: C22 C52 C58 G32
    Date: 2017–09–06
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20170082&r=ets
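The distinction the abstract draws between conditional covariance and conditional correlation is visible in the standard DCC recursion itself, sketched below with hypothetical parameter values: the recursion produces a covariance-like matrix Q_t, and only the standardization step yields a correlation matrix R_t.

```python
import numpy as np

def dcc_correlations(eps, a=0.05, b=0.90):
    """Standard DCC recursion on standardized shocks eps (T x k).

    Q_t = (1 - a - b)*Qbar + a*eps_{t-1} eps_{t-1}' + b*Q_{t-1}, followed by
    standardization of Q_t to a correlation matrix R_t. The paper's point:
    Q_t itself is a conditional covariance of the shocks, and only the
    standardization step yields conditional correlations. Illustrative
    sketch; a + b < 1 keeps the recursion mean-reverting.
    """
    T, k = eps.shape
    Qbar = np.corrcoef(eps.T)
    Q = Qbar.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        if t > 0:
            Q = (1 - a - b) * Qbar + a * np.outer(eps[t - 1], eps[t - 1]) + b * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)   # unit diagonal by construction
    return R

rng = np.random.default_rng(3)
eps = rng.standard_normal((300, 2))
R = dcc_correlations(eps)
```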
  5. By: Jiti Gao; Oliver Linton; Bin Peng
    Abstract: This paper studies a model with both a parametric global trend and a nonparametric local trend. This model may be of interest in a number of applications in economics, finance, ecology, and geology. The model nests the parametric global trend model considered in Phillips (2007) and Robinson (2012), and the nonparametric local trend model. We first propose two hypothesis tests to detect whether either of the special cases are appropriate. For the case where both null hypotheses are rejected, we propose an estimation method to capture both aspects of the time trend. We establish consistency and some distribution theory in the presence of a large sample. Moreover, we examine the proposed hypothesis tests and estimation methods through both simulated and real data examples. Finally, we discuss some potential extensions and issues when modelling time effects.
    Keywords: global mean sea level, nonparametric kernel estimation, nonstationarity.
    JEL: C14 C22 Q54
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2017-10&r=ets
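The nonparametric local-trend ingredient of the model can be sketched with a Nadaraya-Watson smoother in rescaled time, as below. The trend function, bandwidth, and data here are hypothetical, and the parametric global power-law component of the paper's model is not reproduced.

```python
import numpy as np

def kernel_trend(y, bw=0.05):
    """Nadaraya-Watson estimate of a local trend g(t/T) in rescaled time.

    Smooths y_t against tau_t = t/T with a Gaussian kernel. This is only the
    nonparametric local-trend ingredient; the paper combines it with a
    parametric global power-law trend, which is not reproduced here.
    """
    T = len(y)
    tau = np.arange(T) / T
    K = np.exp(-0.5 * ((tau[:, None] - tau[None, :]) / bw) ** 2)
    return (K @ y) / K.sum(axis=1)

rng = np.random.default_rng(4)
t = np.arange(400) / 400
truth = np.sin(2 * np.pi * t)                # hypothetical smooth local trend
y = truth + 0.3 * rng.standard_normal(400)
g = kernel_trend(y)
```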
  6. By: Degui Li; Peter CB Phillips; Jiti Gao
    Abstract: This paper studies nonlinear cointegrating models with time-varying coefficients and multiple nonstationary regressors, using classic kernel smoothing methods to estimate the coefficient functions. Extending earlier work on nonstationary kernel regression to take account of practical features of the data, we allow the regressors to be cointegrated and to embody a mixture of stochastic and deterministic trends, complications which result in asymptotic degeneracy of the kernel-weighted signal matrix. To address these complications, new local and global rotation techniques are introduced to transform the covariate space to accommodate multiple scenarios of induced degeneracy. Under certain regularity conditions we derive asymptotic results that differ substantially from existing kernel regression asymptotics, leading to new limit theory under multiple convergence rates. For the practically important case of endogenous nonstationary regressors we propose a fully-modified kernel estimator whose limit distribution theory corresponds to the prototypical pure (i.e., exogenous covariate) cointegration case, thereby facilitating inference using a generalized Wald-type test statistic. These results substantially generalize econometric estimation and testing techniques in the cointegration literature to accommodate time variation and complications of co-moving regressors. Finally, an empirical illustration to aggregate US data on consumption, income, and interest rates is provided.
    Keywords: cointegration, FM-kernel estimation, generalized Wald test, global rotation, kernel degeneracy, local rotation, super-consistency, time-varying coefficients.
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2017-11&r=ets
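The basic kernel-weighted estimator for a time-varying coefficient can be sketched in the single-regressor case, as below. Everything here is hypothetical and illustrative; the paper's contribution concerns multiple integrated regressors, where the kernel-weighted signal matrix degenerates and the proposed rotation techniques are required.

```python
import numpy as np

def tv_coef(y, x, grid, bw=0.1):
    """Kernel-weighted least squares for y_t = beta(t/T) * x_t + u_t.

    For each grid point, weight observations by a Gaussian kernel in rescaled
    time and solve the one-regressor normal equation. A sketch of the scalar
    case only, not the paper's multi-regressor estimator.
    """
    tau = np.arange(len(y)) / len(y)
    est = []
    for g in grid:
        w = np.exp(-0.5 * ((tau - g) / bw) ** 2)
        est.append(float((w * x * y).sum() / (w * x * x).sum()))
    return np.array(est)

rng = np.random.default_rng(5)
T = 500
tau = np.arange(T) / T
x = np.cumsum(rng.standard_normal(T))   # integrated (random-walk) regressor
beta = 1.0 + tau                        # slowly varying true coefficient
y = beta * x + rng.standard_normal(T)
bhat = tv_coef(y, x, grid=[0.25, 0.5, 0.75])
```

With an integrated regressor the signal is strong, so the local estimates track the slowly varying coefficient closely; this super-consistency is one of the keywords above.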
  7. By: Iliyan Georgiev; David I. Harvey; Stephen J. Leybourne; A. M. Robert Taylor
    Abstract: In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that the predictability of the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, with the result that both the finite sample and asymptotic size of the predictability tests can be significantly inflated, with the potential therefore to spuriously indicate predictability. In response we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroskedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate that the asymptotic distribution of the bootstrap statistic, conditional on the data, is the same (to first-order) as the asymptotic null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that for strongly persistent regressors the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have important applications beyond the present context. An illustration is given by re-examining the results relating to US stock return data in Campbell and Yogo (2006).
    Keywords: Predictive regression, Granger causality, persistence, stationarity test, fixed regressor wild bootstrap, conditional distribution
    URL: http://d.repec.org/n?u=RePEc:not:notgts:17/04&r=ets
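The resampling step of a fixed regressor wild bootstrap can be sketched as below. Only the Rademacher resampling of residuals is shown, with hypothetical residuals; the stationarity test statistic built on the draws is not reproduced.

```python
import numpy as np

def wild_bootstrap_draws(resid, B=500, seed=0):
    """Fixed-regressor wild bootstrap draws of a residual vector.

    Each draw multiplies the original residuals elementwise by i.i.d.
    Rademacher signs: heteroskedasticity in the residuals is preserved while
    serial dependence is destroyed, and regressors stay fixed across draws.
    Only the resampling step is sketched here, not the full test.
    """
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(B, len(resid)))
    return signs * np.asarray(resid)

e = np.array([1.0, -2.0, 0.5, 3.0])   # hypothetical regression residuals
draws = wild_bootstrap_draws(e)
```

Because each draw keeps the magnitudes of the original residuals and only randomizes their signs, the bootstrap distribution is conditional on the data, which is exactly the property the paper's asymptotic argument turns on.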
  8. By: Anna Bykhovskaya (Department of Economics, Yale University); Peter C. B. Phillips (Cowles Foundation, Yale University)
    Abstract: Limit theory for regressions involving local to unit roots (LURs) is now used extensively in time series econometric work, establishing power properties for unit root and cointegration tests, assisting the construction of uniform confidence intervals for autoregressive coefficients, and enabling the development of methods robust to departures from unit roots. The present paper shows how to generalize LUR asymptotics to cases where the localized departure from unity is a time varying function rather than a constant. Such a functional local unit root (FLUR) model has much greater generality and encompasses many cases of additional interest, including structural break formulations that admit subperiods of unit root, locally stationary and locally explosive behavior within a given sample. Point optimal FLUR tests are constructed in the paper to accommodate such cases. It is shown that against FLUR alternatives, conventional constant point optimal tests can have extremely low power, particularly when the departure from unity occurs early in the sample period. Simulation results are reported and some implications for empirical practice are examined.
    Keywords: Functional local unit root, Local to unity, Uniform confidence interval, Unit root model
    JEL: C22 C65
    Date: 2017–09
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:3007&r=ets
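A FLUR process of the kind the abstract describes can be simulated in a few lines, as below. The localizing function and sample size are hypothetical, chosen to produce a structural-break pattern with an explosive subperiod followed by a stationary one.

```python
import numpy as np

def simulate_flur(n, c, seed=0):
    """Simulate a functional local-to-unity (FLUR) process.

    The AR(1) coefficient at date t is rho_t = 1 + c(t/n)/n, where c(.) is
    the localizing function. A constant c recovers the usual LUR model; a
    sign-changing c, as below, gives subperiods of explosive and stationary
    behavior within one sample. Hypothetical parameter choices throughout.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n + 1)
    for t in range(1, n + 1):
        rho = 1.0 + c(t / n) / n
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y[1:]

# Break-type localizing function: mildly explosive early, stationary later.
c = lambda r: 5.0 if r < 0.5 else -10.0
y = simulate_flur(1000, c)
```

Against a sample generated this way, a point optimal test calibrated to a constant departure from unity can have very low power, which is the phenomenon the paper documents.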
  9. By: Anna Bykhovskaya (Department of Economics, Yale University); Peter C. B. Phillips (Cowles Foundation, Yale University)
    Abstract: This paper studies functional local unit root models (FLURs) in which the autoregressive coefficient may vary with time in the vicinity of unity. We extend conventional local to unity (LUR) models by allowing the localizing coefficient to be a function which characterizes departures from unity that may occur within the sample in both stationary and explosive directions. Such models enhance the flexibility of the LUR framework by including break point, trending, and multi-directional departures from unit autoregressive coefficients. We study the behavior of this model as the localizing function diverges, thereby determining the impact on the time series and on inference from the time series as the limits of the domain of definition of the autoregressive coefficient are approached. This boundary limit theory enables us to characterize the asymptotic form of power functions for associated unit root tests against functional alternatives. Both sequential and simultaneous limits (as the sample size and localizing coefficient diverge) are developed. We find that asymptotics for the process, the autoregressive estimate, and its t statistic have boundary limit behavior that differs from standard limit theory in both explosive and stationary cases. Some novel features of the boundary limit theory are the presence of a segmented limit process for the time series in the stationary direction and a degenerate process in the explosive direction. These features have material implications for autoregressive estimation and inference which are examined in the paper.
    Keywords: Boundary asymptotics, Functional local unit root, Local to unity, Sequential limits, Simultaneous limits, Unit root model
    JEL: C22 C65
    Date: 2017–09
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:3008&r=ets
  10. By: Gautier Marti (LIX - Laboratoire d'informatique de l'École polytechnique [Palaiseau] - Polytechnique - X - CNRS - Centre National de la Recherche Scientifique, Hellebore Capital Limited); Sébastien Andler (ENS de Lyon - Ecole Normale Supérieure de Lyon, Hellebore Capital Limited); Frank Nielsen (LIX - Laboratoire d'informatique de l'École polytechnique [Palaiseau] - Polytechnique - X - CNRS - Centre National de la Recherche Scientifique); Philippe Donnat (Hellebore Capital Limited)
    Abstract: Researchers have used from 30 days to several years of daily returns as source data for clustering financial time series based on their correlations. This paper sets up a statistical framework to study the validity of such practices. We first show that clustering correlated random variables from their observed values is statistically consistent. Then, we also give a first empirical answer to the much debated question: How long should the time series be? If too short, the clusters found can be spurious; if too long, dynamics can be smoothed out.
    Keywords: Financial time series, Clustering, Convergence rates, Correlation
    Date: 2016–07–09
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-01400395&r=ets
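The clustering practice the paper studies can be sketched as below: estimate pairwise correlations from T observations, convert them to the correlation distance d_ij = sqrt(2(1 - rho_ij)), and merge clusters greedily. The two-factor data and single-linkage rule are hypothetical; whether clusters recovered this way are reliable depends on T, which is exactly the paper's question.

```python
import numpy as np

def correlation_clusters(returns, k=2):
    """Single-linkage clustering on correlation distances d_ij = sqrt(2(1 - rho_ij)).

    A bare-bones sketch of the common practice: estimate correlations from
    the sample, convert to distances, and agglomerate until k clusters
    remain. Not the paper's statistical framework, only its object of study.
    """
    rho = np.corrcoef(returns)
    d = np.sqrt(np.maximum(2.0 * (1.0 - rho), 0.0))
    clusters = [{i} for i in range(len(d))]
    while len(clusters) > k:
        pairs = [(a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))]
        a, b = min(pairs, key=lambda p: min(d[i, j] for i in clusters[p[0]] for j in clusters[p[1]]))
        clusters[a] = clusters[a] | clusters.pop(b)   # merge closest pair
    return clusters

rng = np.random.default_rng(6)
f1, f2 = rng.standard_normal((2, 250))
block1 = f1 + 0.3 * rng.standard_normal((3, 250))  # three series on factor 1
block2 = f2 + 0.3 * rng.standard_normal((3, 250))  # three series on factor 2
groups = correlation_clusters(np.vstack([block1, block2]), k=2)
```

With T = 250 observations the two factor blocks are cleanly separated; with much shorter samples the estimated correlations become noisy and spurious clusters can appear, as the abstract warns.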

This nep-ets issue is ©2017 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.