nep-ets New Economics Papers
on Econometric Time Series
Issue of 2022‒01‒10
six papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Gaussian Process Vector Autoregressions and Macroeconomic Uncertainty By Niko Hauzenberger; Florian Huber; Massimiliano Marcellino; Nico Petz
  2. Factor-augmented tree ensembles By Filippo Pellegrino
  3. Autoregressive conditional proportion: A multiplicative-error model for (0,1)-valued time series By Aknouche, Abdelhakim; Dimitrakopoulos, Stefanos
  4. Size-corrected Bootstrap Test after Pretesting for Exogeneity with Heteroskedastic or Clustered Data By Doko Tchatoka, Firmin; Wang, Wenjie
  5. Adaptive calibration of Heston Model using PCRLB based switching Filter By Kumar Yashaswi
  6. The Integrated Copula Spectrum By Yuichi Goto; Tobias Kley; Ria Van Hecke; Stanislav Volgushev; Holger Dette; Marc Hallin

  1. By: Niko Hauzenberger; Florian Huber; Massimiliano Marcellino; Nico Petz
    Abstract: We develop a non-parametric multivariate time series model that remains agnostic on the precise relationship between a (possibly) large set of macroeconomic time series and their lagged values. The main building block of our model is a Gaussian Process prior on the functional relationship that determines the conditional mean of the model, hence the name Gaussian Process Vector Autoregression (GP-VAR). We control for changes in the error variances by introducing a stochastic volatility specification. To facilitate computation in high dimensions and to introduce convenient statistical properties tailored to match stylized facts commonly observed in macro time series, we assume that the covariance of the Gaussian Process is scaled by the latent volatility factors. We illustrate the use of the GP-VAR by analyzing the effects of macroeconomic uncertainty, with a particular emphasis on time variation and asymmetries in the transmission mechanisms. Using US data, we find that uncertainty shocks have time-varying effects: they are less persistent during recessions, but their larger size in these specific periods causes more than proportional effects on real growth and employment.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.01995&r=
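    The conditional-mean idea behind the GP-VAR can be sketched for a single equation with one lag: place a Gaussian Process prior on the unknown function mapping the lagged value to the conditional mean and compute its posterior mean. Everything below (the data-generating process, kernel, lengthscale, and noise variance) is an illustrative assumption, not the paper's specification, which is multivariate and adds stochastic volatility.

    ```python
    import numpy as np

    def rbf(a, b, ls=1.0):
        # squared-exponential kernel between 1-D point sets a and b
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / ls) ** 2)

    rng = np.random.default_rng(0)
    T = 300
    y = np.zeros(T)
    for t in range(1, T):                  # simulate a nonlinear AR(1)
        y[t] = 0.8 * np.tanh(y[t - 1]) + 0.3 * rng.standard_normal()

    X, Y = y[:-1], y[1:]                   # lagged value -> current value
    noise = 0.09                           # assumed observation-noise variance
    K = rbf(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, Y)

    grid = np.linspace(-2.0, 2.0, 5)
    f_mean = rbf(grid, X) @ alpha          # GP posterior mean of E[y_t | y_{t-1}]
    ```

    The posterior mean recovers the nonlinear autoregressive function without ever specifying its parametric form, which is the "agnostic" property the abstract emphasizes.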
  2. By: Filippo Pellegrino
    Abstract: This article proposes an extension of standard time-series regression tree modelling to handle predictors that show irregularities such as missing observations, periodic patterns in the form of seasonality and cycles, and non-stationary trends. In doing so, this approach also makes it possible to enrich the information set used in tree-based autoregressions via unobserved components. Furthermore, this manuscript illustrates an approach to controlling over-fitting based on ensemble learning and recent developments in the jackknife literature. This is particularly beneficial when the number of observed time periods is small, and it compares favourably with benchmark resampling methods. Empirical results show the benefits of predicting equity squared returns as a function of their own past and a set of macroeconomic data via factor-augmented tree ensembles, relative to simpler benchmarks. As a by-product, this approach makes it possible to study the real-time importance of economic news for equity volatility.
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.14000&r=
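    The "factor-augmented tree ensemble" pipeline can be sketched in two steps: extract a common factor from a macro panel by principal components, then feed it to a bagged tree ensemble. The data-generating process, the stump-based trees, and all parameter choices below are assumptions for illustration; the paper's unobserved-components factor extraction and jackknife-based regularization are considerably richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    T, N = 200, 10
    f = rng.standard_normal(T)                 # one latent macro factor
    panel = np.outer(f, rng.standard_normal(N)) + 0.5 * rng.standard_normal((T, N))
    target = 0.1 * f * np.abs(f) + 0.1 * rng.standard_normal(T)  # nonlinear in f

    # step 1: estimate the factor from the panel via principal components
    z = panel - panel.mean(0)
    u, s_, vt = np.linalg.svd(z, full_matrices=False)
    fhat = u[:, 0] * s_[0]                     # first PC as factor estimate

    # step 2: bagged regression stumps on the estimated factor (a tiny ensemble)
    def stump(xtr, ytr, xte):
        # best-of-9 threshold regression stump
        best_sse, best_pred = np.inf, np.full(len(xte), ytr.mean())
        for c in np.quantile(xtr, np.linspace(0.1, 0.9, 9)):
            m = xtr <= c
            if m.all() or not m.any():
                continue
            lo, hi = ytr[m].mean(), ytr[~m].mean()
            sse = ((ytr - np.where(m, lo, hi)) ** 2).sum()
            if sse < best_sse:
                best_sse = sse
                best_pred = np.where(xte <= c, lo, hi)
        return best_pred

    B = 50
    preds = np.zeros(T)
    for _ in range(B):                         # bootstrap aggregation
        idx = rng.integers(0, T, T)
        preds += stump(fhat[idx], target[idx], fhat) / B
    ```

    Averaging many weak trees fit on bootstrap resamples is the basic over-fitting control the abstract alludes to; the factor step compresses the wide panel into a low-dimensional predictor the trees can split on.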
  3. By: Aknouche, Abdelhakim; Dimitrakopoulos, Stefanos
    Abstract: We propose a multiplicative autoregressive conditional proportion (ARCP) model for (0,1)-valued time series, in the spirit of GARCH (generalized autoregressive conditional heteroscedastic) and ACD (autoregressive conditional duration) models. In particular, our underlying process is defined as the product of a (0,1)-valued iid sequence and the inverted conditional mean, which, in turn, depends on past reciprocal observations in such a way that it is larger than unity. The probability structure of the model is studied in the context of stochastic recurrence equation theory, while estimation of the model parameters is performed by the exponential quasi-maximum likelihood estimator (EQMLE). The consistency and asymptotic normality of the EQMLE are both established under general regularity assumptions. Finally, the usefulness of our proposed model is illustrated with simulated data and two real datasets.
    Keywords: Proportional time series data, Beta-ARMA model, Simplex ARMA, Autoregressive conditional duration, Exponential QMLE.
    JEL: C13 C22 C25 C46 C51 C58
    Date: 2021–12–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:110954&r=
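    The construction in the abstract — a (0,1)-valued iid innovation multiplied by an inverted conditional mean that stays above one — can be simulated under one plausible GARCH-style parameterization. The recursion and the parameter values (omega, alpha, beta) below are assumptions chosen so that omega >= 1 and positivity force lambda_t > 1, not the paper's exact specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T = 1000
    omega, alpha, beta = 1.0, 0.1, 0.5     # assumed ARCP-type parameters
    y = np.empty(T)
    lam = np.empty(T)
    lam[0] = omega / (1 - beta)            # start at the "unconditional" level
    y[0] = rng.uniform() / lam[0]
    for t in range(1, T):
        # conditional factor driven by the past reciprocal observation 1/y[t-1];
        # omega = 1 plus positive terms keeps lam[t] strictly above unity
        lam[t] = omega + alpha / y[t - 1] + beta * lam[t - 1]
        y[t] = rng.uniform() / lam[t]      # iid (0,1) innovation times 1/lam[t]
    ```

    Because the innovation lies in (0,1) and lambda_t > 1, every y_t lands back in (0,1) by construction — the defining feature of a proportion-valued multiplicative-error model.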
  4. By: Doko Tchatoka, Firmin; Wang, Wenjie
    Abstract: Pretesting for exogeneity has become routine in many empirical applications involving instrumental variables, to decide whether the ordinary least squares or the two-stage least squares (2SLS) method is appropriate. Guggenberger (2010) shows that the second-stage t-test – based on the outcome of a Durbin-Wu-Hausman type pretest for exogeneity in the first stage – has extreme size distortion, with asymptotic size equal to 1 when the standard asymptotic critical values are used. In this paper, we first show that, both conditionally and unconditionally on the data, the standard wild bootstrap procedures are invalid for the two-stage testing and a closely related shrinkage method, and are therefore not viable solutions to this size-distortion problem. We then propose a novel size-corrected wild bootstrap approach, which combines certain wild bootstrap critical values with an appropriate size-correction method. We establish uniform validity of this procedure under either conditional heteroskedasticity or clustering, in the sense that the resulting tests achieve correct asymptotic size. Monte Carlo simulations confirm our theoretical findings. In particular, our proposed method has remarkable power gains over the standard 2SLS-based t-test in many settings, especially when identification is not strong.
    Keywords: DWH Pretest; Shrinkage; Instrumental Variable; Asymptotic Size; Wild Bootstrap; Bonferroni-based Size-correction; Clustering.
    JEL: C12 C13 C26
    Date: 2021–11–29
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:110899&r=
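    The two-stage routine the paper critiques — pretest for exogeneity with a Durbin-Wu-Hausman-type statistic, then pick OLS or 2SLS — can be sketched with a control-function regression. The simulated design (one instrument, one endogenous regressor, true beta = 1) and all coefficients are illustrative assumptions; the point of the paper is precisely that naively combining this pretest with standard critical values distorts size, which this sketch does not correct.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    z = rng.standard_normal(n)                  # instrument
    u = rng.standard_normal(n)                  # unobserved confounder
    x = 0.5 * z + 0.6 * u + 0.5 * rng.standard_normal(n)  # endogenous regressor
    y = x + u                                   # structural equation, beta = 1

    # DWH-style pretest via the control-function form: add the first-stage
    # residual v to the outcome equation and t-test its coefficient.
    pi = (z @ x) / (z @ z)
    v = x - pi * z                              # first-stage residual
    X = np.column_stack([x, v])
    XtXi = np.linalg.inv(X.T @ X)
    coef = XtXi @ X.T @ y
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)
    t_v = coef[1] / np.sqrt(s2 * XtXi[1, 1])    # pretest statistic

    beta_ols = (x @ y) / (x @ x)                # biased under endogeneity
    beta_iv = (z @ y) / (z @ x)                 # simple IV / 2SLS estimate
    beta_twostage = beta_iv if abs(t_v) > 1.96 else beta_ols
    ```

    Here endogeneity is strong, so the pretest rejects and the routine lands on the (nearly unbiased) IV estimate; the size problems the authors address arise in the borderline cases where the pretest decision itself is random.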
  5. By: Kumar Yashaswi
    Abstract: Stochastic volatility models have existed in option pricing theory ever since the crash of 1987, which violated the Black-Scholes model assumption of constant volatility. The Heston model is one such stochastic volatility model that is widely used for volatility estimation and option pricing. In this paper, we design a novel method to estimate the parameters of the Heston model under a state-space representation using Bayesian filtering theory and the Posterior Cramer-Rao Lower Bound (PCRLB), integrating it with the Normal Maximum Likelihood Estimation (NMLE) proposed in [1]. Several Bayesian filters, such as the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), and the Particle Filter (PF), are used for latent state and parameter estimation. We employ a switching strategy proposed in [2] for adaptive state estimation of non-linear, discrete-time state-space models (SSMs) such as the Heston model. We use a particle-filter-approximated PCRLB [3] as a performance measure to judge the best filter at each time step. We test our proposed framework on pricing data from the S&P 500 and the NSE Index, estimating the underlying volatility and parameters from each index. Our proposed method is compared with the VIX measure and historical volatility for both indexes. The results indicate an effective framework for estimating volatility adaptively under changing market dynamics.
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2112.04576&r=
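    The non-linear state-space model underlying the paper is the Heston dynamics: latent variance follows a mean-reverting square-root process and the observed (log) price loads on it. A minimal Euler discretization with full truncation is sketched below; the parameter values are assumed for illustration and are not calibrated, and the filtering/PCRLB switching machinery of the paper is not attempted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # assumed Heston parameters: mean reversion, long-run variance,
    # vol-of-vol, price/variance correlation, risk-free rate
    kappa, theta, xi, rho, r = 2.0, 0.04, 0.3, -0.7, 0.01
    dt, n = 1 / 252, 252                       # daily steps over one year
    s = np.empty(n + 1)                        # observed price path
    v = np.empty(n + 1)                        # latent variance path
    s[0], v[0] = 100.0, theta
    for t in range(n):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
        vp = max(v[t], 0.0)                    # full truncation keeps sqrt valid
        v[t + 1] = v[t] + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
        s[t + 1] = s[t] * np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    ```

    In the paper's setting, only s is observed; v is the latent state that the EKF/UKF/PF bank tracks, with the PCRLB deciding which filter's estimate to trust at each step.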
  6. By: Yuichi Goto; Tobias Kley; Ria Van Hecke; Stanislav Volgushev; Holger Dette; Marc Hallin
    Abstract: Frequency domain methods form a ubiquitous part of the statistical toolbox for time series analysis. In recent years, considerable interest has been devoted to the development of new spectral methodology and tools capturing dynamics in entire joint distributions, thus avoiding the limitations of classical, L2-based spectral methods. Most of the spectral concepts proposed in that literature suffer from one major drawback, though: their estimation requires the choice of a smoothing parameter, which has a considerable impact on estimation quality and poses challenges for statistical inference. In this paper, building on the concept of the copula-based spectrum, we introduce the notion of a copula spectral distribution function, or integrated copula spectrum. This integrated copula spectrum retains the advantages of copula-based spectra but can be estimated without the need for smoothing parameters. We provide such estimators, along with a thorough theoretical analysis, based on a functional central limit theorem, of their asymptotic properties. We leverage these results to test various hypotheses that cannot be addressed by classical spectral methods, such as the lack of time-reversibility or asymmetry in tail dynamics.
    Keywords: Copula; Ranks; Time series; Frequency domain; Time-reversibility
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/335426&r=
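    The "integration instead of smoothing" idea can be illustrated at a single quantile level: transform the series to ranks, clip at a level tau, take the periodogram of the clipped series, and cumulate it over frequencies as a Riemann sum. This is a deliberately schematic scalar sketch under assumed settings; the paper's integrated copula spectrum is a cross-quantile, complex-valued object with a rigorous functional central limit theory that this toy computation does not reproduce.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    T = 512
    y = np.zeros(T)
    for t in range(1, T):                      # simulate an AR(1) for illustration
        y[t] = 0.5 * y[t - 1] + rng.standard_normal()

    tau = 0.5
    ranks = np.argsort(np.argsort(y)) / T      # empirical probability transform
    clip = (ranks <= tau).astype(float)        # indicator of the lower-tau level

    d = np.fft.fft(clip - clip.mean())
    per = (d * d.conj()).real / (2 * np.pi * T)  # periodogram of the clipped series
    half = slice(1, T // 2)                    # positive Fourier frequencies
    # cumulating the raw periodogram needs no smoothing-bandwidth choice
    integrated = np.cumsum(per[half]) * (2 * np.pi / T)
    ```

    The cumulated quantity is monotone by construction, mirroring how a spectral distribution function sidesteps the bandwidth selection that pointwise spectral density estimates require.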

This nep-ets issue is ©2022 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.