nep-ets New Economics Papers
on Econometric Time Series
Issue of 2019‒07‒08
ten papers chosen by
Jaqueson K. Galimberti
KOF Swiss Economic Institute

  1. Adaptive Testing for Cointegration with Nonstationary Volatility By Peter Boswijk; Yang Zu
  2. Time-Varying Cointegration and the Kalman Filter By Burak Alparslan Eroglu; J. Isaac Miller; Taner Yigit
  3. Testing for breaks in the cointegrating relationship: On the stability of government bond markets' equilibrium By Rodrigues, Paulo M.M.; Sibbertsen, Philipp; Voges, Michelle
  4. The Topology of Time Series: Improving Recession Forecasting from Yield Spreads By Pawel Dlotko; Simon Rudkin
  5. Second Order Time Dependent Inflation Persistence in the United States: a GARCH-in-Mean Model with Time Varying Coefficients. By Alessandra Canepa; Menelaos G. Karanasos; Alexandros G. Paraskevopoulos
  6. Bootstrap methods for inference in the Parks model By Moundigbaye, Mantobaye; Messemer, Clarisse; Parks, Richard W.; Reed, W. Robert
  7. Efficient selection of hyperparameters in large Bayesian VARs using automatic differentiation By Joshua C. C. Chan; Liana Jacobi; Dan Zhu
  8. An automated prior robustness analysis in Bayesian model comparison By Joshua C. C. Chan; Liana Jacobi; Dan Zhu
  9. Modeling Univariate and Multivariate Stochastic Volatility in R with stochvol and factorstochvol By Darjus Hosszejni; Gregor Kastner
  10. Dealing with Stochastic Volatility in Time Series Using the R Package stochvol By Gregor Kastner

  1. By: Peter Boswijk (University of Amsterdam); Yang Zu (University of Nottingham)
    Abstract: This paper generalises Boswijk and Zu (2018)'s adaptive unit root test for time series with nonstationary volatility to a multivariate context. Persistent changes in the innovation variance matrix of a vector autoregressive model lead to size distortions in conventional cointegration tests, which may be resolved using the wild bootstrap, as shown by Cavaliere et al. (2010, 2014). We show that it also leads to the possibility of constructing tests with higher power, by taking the time-varying volatilities and correlations into account in the formulation of the likelihood function and the resulting likelihood ratio test statistic. We find that under suitable conditions, adaptation with respect to the volatility process is possible, in the sense that nonparametric volatility matrix estimation does not lead to a loss of asymptotic local power relative to the case where the volatilities are observed. The asymptotic null distribution of the test is nonstandard and depends on the volatility process; we show that various bootstrap implementations may be used to conduct asymptotically valid inference. Monte Carlo simulations show that the resulting test has good size properties, and higher power than existing tests. Two empirical examples illustrate the applicability of the tests.
    Keywords: Adaptive estimation, Nonparametric volatility estimation, Wild bootstrap
    JEL: C32 C12
    Date: 2019–06–21
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20190043&r=all
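The wild bootstrap mentioned in the abstract can be sketched in a few lines. This is an illustrative Python toy for a simple t-type statistic under a manufactured mid-sample variance shift, not the paper's actual cointegration test; all names and the data-generating process are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300

# Residuals with nonstationary volatility: variance doubles mid-sample.
sig = np.where(np.arange(T) < T // 2, 1.0, 2.0)
resid = sig * rng.standard_normal(T)

# Wild bootstrap: multiply each residual by an i.i.d. Rademacher draw,
# preserving the (unknown) heteroskedasticity pattern in every replicate.
B = 999
stats = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=T)
    boot_resid = w * resid
    stats[b] = boot_resid.mean() / (boot_resid.std(ddof=1) / np.sqrt(T))

# Bootstrap p-value for the observed t-type statistic.
t_obs = resid.mean() / (resid.std(ddof=1) / np.sqrt(T))
pval = (np.abs(stats) >= abs(t_obs)).mean()
print(pval)
```

Because each replicate reuses the original residuals up to sign flips, the bootstrap distribution inherits the volatility path without ever estimating it, which is why the method is robust to nonstationary volatility.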
  2. By: Burak Alparslan Eroglu; J. Isaac Miller (Department of Economics, University of Missouri); Taner Yigit
    Abstract: We show that time-varying parameter state-space models estimated using the Kalman filter are particularly vulnerable to the problem of spurious regression, because the integrated error is transferred to the estimated state equation. We offer a simple yet effective methodology to reliably recover the instability in cointegrating vectors. In the process, the proposed methodology successfully distinguishes between the cases of no cointegration, fixed cointegration, and time-varying cointegration. We apply these proposed tests to elucidate the relationship between concentrations of greenhouse gases and global temperatures, an important relationship to both climate scientists and economists.
    Keywords: time-varying cointegration, Kalman filter, spurious regression

    JEL: C12 C32 C51 Q54
    Date: 2019–06–27
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1905&r=all
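The spurious-regression problem the abstract builds on is easy to reproduce. A minimal Python sketch (a classical static-regression toy, not the authors' state-space setup): regressing one independent random walk on another yields a nonzero R-squared despite the absence of any true relationship.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Two independent driftless random walks: no true relationship exists.
x = np.cumsum(rng.standard_normal(T))
y = np.cumsum(rng.standard_normal(T))

# OLS of y on x (with intercept).
X = np.column_stack([np.ones(T), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# R-squared is typically far from zero despite independence --
# the hallmark of spurious regression with integrated series.
r2 = 1 - resid.var() / y.var()
print(r2)
```

The paper's point is that the same integrated error contaminates the estimated state equation of a time-varying parameter model, so the Kalman-filtered coefficient path can drift even when no cointegrating relationship exists.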
  3. By: Rodrigues, Paulo M.M.; Sibbertsen, Philipp; Voges, Michelle
    Abstract: In this paper, test procedures for no fractional cointegration against possible breaks in the persistence structure of a fractional cointegrating relationship are introduced. The tests proposed are based on the supremum of the Hassler and Breitung (2006) test statistic for no cointegration over possible breakpoints in the long-run equilibrium. We show that the new tests, correctly standardized, converge to the supremum of a chi-squared distribution, and that this convergence is uniform. An in-depth Monte Carlo analysis provides results on the finite sample performance of our tests. We then use the new procedures to investigate whether there was a dissolution of fractional cointegrating relationships between benchmark government bonds of ten EMU countries (Spain, Italy, Portugal, Ireland, Greece, Belgium, Austria, Finland, the Netherlands and France) and Germany with the beginning of the European debt crisis.
    Keywords: Fractional cointegration, Persistence breaks, Hassler-Breitung test, Changing Long-run equilibrium
    JEL: C12 C32
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-656&r=all
  4. By: Pawel Dlotko (Mathematics Department, Swansea University); Simon Rudkin (School of Management, Swansea University)
    Abstract: Recession forecasting ranges from simplistic inference from the inversion of the yield curve to sophisticated models drawing data from across the macroeconomic and financial spectra. Each has advantages, in simplicity and informativeness respectively, but each suffers for these. Demonstrating how the properties of yield spread time series themselves can foretell impending recessions, we introduce data topology to economics. Through an exploration of the topology of time series we highlight an untapped source of information with the potential to significantly improve understanding of the economy without risking the overfitting of introducing other variables.
    Date: 2019–07–04
    URL: http://d.repec.org/n?u=RePEc:swn:wpaper:2019-02&r=all
  5. By: Alessandra Canepa; Menelaos G. Karanasos; Alexandros G. Paraskevopoulos (University of Turin)
    Abstract: In this paper we investigate the behavior of inflation persistence in the United States. To model inflation we estimate an autoregressive GARCH-in-mean model with variable coefficients and we propose a new measure of second-order time-varying persistence, which not only distinguishes between changes in the dynamics of inflation and its volatility, but also allows for feedback from nominal uncertainty to inflation. Our empirical results suggest that inflation persistence in the United States is best described as unchanged. Another important result relates to the Monte Carlo evidence, which reveals that if the model is misspecified, then commonly used unit root tests will misclassify inflation as a nonstationary, rather than a stationary, process.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:uto:dipeco:201911&r=all
  6. By: Moundigbaye, Mantobaye; Messemer, Clarisse; Parks, Richard W.; Reed, W. Robert
    Abstract: This paper shows how to bootstrap hypothesis tests in the context of the Parks (Efficient estimation of a system of regression equations when disturbances are both serially and contemporaneously correlated 1967) estimator. It then demonstrates that the bootstrap outperforms Parks's top competitor. The Parks estimator has been a workhorse for the analysis of panel data and seemingly unrelated regression equation systems because it allows the incorporation of cross-sectional correlation together with heteroskedasticity and serial correlation. Unfortunately, the associated, asymptotic standard error estimates are biased downward, often severely. To address this problem, Beck and Katz (What to do (and not to do) with time series cross-section data 1995) developed an approach that uses the Prais-Winsten estimator together with "panel corrected standard errors" (PCSE). While PCSE produces standard error estimates that are less biased than Parks, it forces the user to sacrifice efficiency for accuracy in hypothesis testing. The PCSE approach has been, and continues to be, widely used. This paper develops an alternative: a nonparametric bootstrapping procedure to be used in conjunction with the Parks estimator. We demonstrate its effectiveness using an innovative experimental approach that creates artificial panel datasets modelled after actual panel datasets. Our approach provides a Pareto-improving option by allowing researchers to retain the efficiency of the Parks estimator while producing more accurate hypothesis test results than the PCSE.
    Keywords: Parks model,PCSE,SUR,panel data,cross-sectional correlation,bootstrap,Monte Carlo,simulation
    JEL: C13 C15 C23 C33
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:ifwedp:201939&r=all
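The panel bootstrap idea can be sketched as follows. This is a generic Python toy that resamples whole cross-sectional units with replacement around a pooled OLS slope; the authors' procedure is built around the Parks FGLS estimator, and the data-generating process and helper names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 10, 40  # cross-sectional units and time periods

# Simulated balanced panel: y_it = 1.0 * x_it + e_it.
x = rng.standard_normal((N, T))
e = rng.standard_normal((N, T))
y = 1.0 * x + e

def pooled_slope(y, x):
    """Pooled OLS slope without intercept (illustrative only)."""
    return (x * y).sum() / (x * x).sum()

beta_hat = pooled_slope(y, x)

# Nonparametric bootstrap: resample entire cross-sectional units with
# replacement, preserving each unit's serial dependence within draws.
B = 500
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, N, size=N)
    boot[b] = pooled_slope(y[idx], x[idx])

se = boot.std(ddof=1)
print(beta_hat, se)
```

Resampling units rather than individual observations is what lets the bootstrap standard error reflect both serial and cross-sectional dependence, which is the dimension along which the asymptotic Parks standard errors are biased downward.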
  7. By: Joshua C. C. Chan; Liana Jacobi; Dan Zhu
    Abstract: Large Bayesian VARs with the natural conjugate prior are now routinely used for forecasting and structural analysis. It has been shown that selecting the prior hyperparameters in a data-driven manner can often substantially improve forecast performance. We propose a computationally efficient method to obtain the optimal hyperparameters based on Automatic Differentiation, which is an efficient way to compute derivatives. Using a large US dataset, we show that using the optimal hyperparameter values leads to substantially better forecast performance. Moreover, the proposed method is much faster than the conventional grid-search approach, and is applicable in high-dimensional optimization problems. The new method thus provides a practical and systematic way to develop better shrinkage priors for forecasting in a data-rich environment.
    Keywords: automatic differentiation, vector autoregression, optimal hyperparameters, forecasts, marginal likelihood
    JEL: C11 C53 E37
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-46&r=all
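The gain from derivative-based hyperparameter selection over a grid search can be illustrated with a toy conjugate model where the marginal likelihood is available in closed form. Everything below is an invented stand-in (a normal-mean model with prior variance tau, with the derivative written out by hand in place of automatic differentiation), not the paper's large-VAR setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
y = rng.standard_normal(n) + 0.8  # data with true mean 0.8

# Toy conjugate setup: y_i ~ N(mu, 1), prior mu ~ N(0, tau).
# The marginal likelihood depends on the hyperparameter tau only
# through ybar ~ N(0, tau + 1/n), so it has a closed form.
ybar = y.mean()

def log_ml(tau):
    v = tau + 1.0 / n
    return -0.5 * np.log(2 * np.pi * v) - ybar**2 / (2 * v)

def grad_log_ml(tau):
    # Hand-coded derivative -- the quantity AD would supply automatically.
    v = tau + 1.0 / n
    return -0.5 / v + ybar**2 / (2 * v**2)

# Gradient ascent on the hyperparameter instead of a grid search.
tau = 1.0
for _ in range(2000):
    tau = max(tau + 0.05 * grad_log_ml(tau), 1e-8)

# Closed-form optimum for comparison: tau* = max(ybar^2 - 1/n, 0).
print(tau, max(ybar**2 - 1 / n, 0))
```

With many hyperparameters a grid search scales exponentially while a gradient step does not, which is the practical motivation for the AD-based approach the abstract describes.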
  8. By: Joshua C. C. Chan; Liana Jacobi; Dan Zhu
    Abstract: The marginal likelihood is the gold standard for Bayesian model comparison, although it is well known that its value can be sensitive to the choice of prior hyperparameters. Most models require computationally intensive simulation-based methods to evaluate the typically high-dimensional integral in the marginal likelihood expression. Hence, despite the recognition that prior sensitivity analysis is important in this context, it is rarely done in practice. In this paper we develop efficient and feasible methods to compute the sensitivities of the marginal likelihood, obtained via two common simulation-based methods, with respect to any prior hyperparameter alongside the MCMC estimation algorithm. Our approach builds on Automatic Differentiation (AD), which has only recently been introduced to the more computationally intensive setting of Markov chain Monte Carlo simulation. We illustrate our approach with two empirical applications in the context of widely used multivariate time series models.
    Keywords: automatic differentiation, model comparison, vector autoregression, factor models
    JEL: C11 C53 E37
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-45&r=all
  9. By: Darjus Hosszejni; Gregor Kastner
    Abstract: Stochastic volatility (SV) models are nonlinear state-space models that enjoy increasing popularity for fitting and predicting heteroskedastic time series. However, due to the large number of latent quantities, their efficient estimation is non-trivial, and software that makes it easy to fit SV models to data is rare. We aim to alleviate this issue by presenting novel implementations of four SV models delivered in two R packages. Several unique features are included and documented. As opposed to previous versions, stochvol is now capable of handling linear mean models, heavy-tailed SV, and SV with leverage. Moreover, we newly introduce factorstochvol, which caters for multivariate SV. Both packages offer a user-friendly interface through the conventional R generics and a range of tailor-made methods. Computational efficiency is achieved via interfacing R to C++ and doing the heavy work in the latter. In the paper at hand, we provide a detailed discussion on Bayesian SV estimation and showcase the use of the new software through various examples.
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1906.12123&r=all
  10. By: Gregor Kastner
    Abstract: The R package stochvol provides a fully Bayesian implementation of heteroskedasticity modeling within the framework of stochastic volatility. It utilizes Markov chain Monte Carlo (MCMC) samplers to conduct inference by obtaining draws from the posterior distribution of parameters and latent variables which can then be used for predicting future volatilities. The package can straightforwardly be employed as a stand-alone tool; moreover, it allows for easy incorporation into other MCMC samplers. The main focus of this paper is to show the functionality of stochvol. In addition, it provides a brief mathematical description of the model, an overview of the sampling schemes used, and several illustrative examples using exchange rate data.
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1906.12134&r=all
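The basic SV model that both stochvol papers above estimate can be simulated in a few lines. This Python sketch (a stand-in for the R packages the papers present) only generates data from the model; the parameter values are arbitrary illustrative choices, and the hard part, Bayesian estimation of the latent log-volatility path, is what the packages provide.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 1000

# Basic SV model (here simulated, not fitted):
#   h_t = mu + phi * (h_{t-1} - mu) + sigma * eta_t,  eta_t ~ N(0, 1)
#   y_t = exp(h_t / 2) * eps_t,                       eps_t ~ N(0, 1)
mu, phi, sigma = -1.0, 0.95, 0.2

h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.standard_normal()

# Observed returns are heteroskedastic: volatility clusters track
# the persistent latent process h_t.
y = np.exp(h / 2) * rng.standard_normal(T)
print(y.std())
```

Because h_t is latent and enters nonlinearly through exp(h_t / 2), the posterior has no closed form, which is why MCMC samplers such as those in stochvol are the standard estimation route.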

This nep-ets issue is ©2019 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.