nep-ets New Economics Papers
on Econometric Time Series
Issue of 2020‒08‒17
eleven papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Large dynamic covariance matrices: enhancements based on intraday data By Gianluca De Nard; Robert F. Engle; Olivier Ledoit; Michael Wolf
  2. Testing error distribution by kernelized Stein discrepancy in multivariate time series models By Donghang Luo; Ke Zhu; Huan Gong; Dong Li
  3. Modelling Cryptocurrency High-Low Prices using Fractional Cointegrating VAR By Yaya, OlaOluwa S; Vo, Xuan Vinh; Ogbonna, Ahamuefula E; Adewuyi, Adeolu O
  4. Adaptiveness of the empirical distribution of residuals in semi-parametric conditional location scale models By Christian Francq; Jean-Michel Zakoïan
  5. Multivariate Filter Estimation of Potential Output for the United States: An Extension with Labor Market Hysteresis By Ali Alichi; Hayk Avetisyan; Douglas Laxton; Shalva Mkhatrishvili; Armen Nurbekyan; Lusine Torosyan; Hou Wang
  6. Identification of Volatility Proxies as Expectations of Squared Financial Return By Sucarrat, Genaro
  7. The economic drivers of volatility and uncertainty By Andrea Carriero; Francesco Corsello; Massimiliano Marcellino
  8. Tail risk forecasting using Bayesian realized EGARCH models By Vica Tendenan; Richard Gerlach; Chao Wang
  9. Revisiting income convergence with DF-Fourier tests: old evidence with a new test By Silva Lopes, Artur
  10. Time Inhomogeneous Multivariate Markov Chains: Detecting and Testing Multiple Structural Breaks Occurring at Unknown Dates By Bruno Damásio; João Nicolau
  11. Do Any Economists Have Superior Forecasting Skills? By Qu, Ritong; Timmermann, Allan; Zhu, Yinchu

  1. By: Gianluca De Nard; Robert F. Engle; Olivier Ledoit; Michael Wolf
    Abstract: Modeling and forecasting dynamic (or time-varying) covariance matrices has many important applications in finance, such as Markowitz portfolio selection. Popular tools to this end are multivariate GARCH models. Historically, such models did not perform well in large dimensions due to the so-called curse of dimensionality. The recent DCC-NL model of Engle et al. (2019) is able to overcome this curse via nonlinear shrinkage estimation of the unconditional correlation matrix. In this paper, we show how performance can be increased further by using open/high/low/close (OHLC) price data instead of simply using daily returns. A key innovation, for the improved modeling of not only dynamic variances but also of dynamic covariances, is the concept of a regularized return, obtained from a volatility proxy in conjunction with a smoothed sign (function) of the observed return.
    Keywords: Dynamic conditional correlations, intraday data, Markowitz portfolio selection, multivariate GARCH, nonlinear shrinkage
    JEL: C13 C58 G11
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:356&r=all
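The abstract defines a regularized return only informally: a volatility proxy multiplied by a smoothed sign of the observed return. A minimal sketch of that idea, assuming a tanh-based smoothing (the tanh form and the bandwidth `h` are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def regularized_return(r, proxy, h=0.5):
    """Regularized return: a volatility proxy times a smoothed sign of the
    observed return. The tanh smoothing and bandwidth h are assumptions for
    illustration; the paper's exact smoothing function may differ."""
    r = np.asarray(r, dtype=float)
    proxy = np.asarray(proxy, dtype=float)
    smoothed_sign = np.tanh(r / (h * proxy))  # in (-1, 1); close to sign(r) when |r| >> h*proxy
    return proxy * smoothed_sign
```

The result inherits its magnitude from the (more efficient) volatility proxy and its direction from the observed return, which is the role the abstract assigns to this construct.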
  2. By: Donghang Luo; Ke Zhu; Huan Gong; Dong Li
    Abstract: Knowing the error distribution is important in many multivariate time series applications. To alleviate the risk of error distribution mis-specification, testing methodologies are needed to detect whether the chosen error distribution is correct. However, most existing tests deal only with the multivariate normal distribution for certain special multivariate time series models, and thus cannot be used to test for the heavy-tailed and skewed error distributions often observed in applications. In this paper, we construct a new consistent test for general multivariate time series models, based on the kernelized Stein discrepancy. To account for estimation uncertainty and unobserved initial values, a bootstrap method is provided to calculate the critical values. Our new test is easy to implement for a large scope of multivariate error distributions, and its importance is illustrated by simulated and real data.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.00747&r=all
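To illustrate the kernelized Stein discrepancy the abstract builds on, here is a univariate sketch testing residuals against a standard normal, using an RBF kernel (the paper treats general multivariate error distributions and bootstraps the critical values; the bandwidth choice here is an assumption):

```python
import numpy as np

def ksd_gaussian(x, h=1.0):
    """U-statistic estimate of the kernelized Stein discrepancy between the
    sample x and the standard normal, whose score function is s(x) = -x,
    using an RBF kernel with bandwidth h. Values near zero indicate
    compatibility with N(0,1). Univariate illustration only."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x[:, None] - x[None, :]              # pairwise differences x_i - x_j
    k = np.exp(-d**2 / (2 * h**2))           # RBF kernel matrix
    s = -x                                   # score function of N(0,1)
    # Stein kernel: s(x)s(y)k + s(x) dk/dy + s(y) dk/dx + d2k/dxdy
    u = (s[:, None] * s[None, :] * k
         + s[:, None] * (d / h**2) * k
         - s[None, :] * (d / h**2) * k
         + (1 / h**2 - d**2 / h**4) * k)
    np.fill_diagonal(u, 0.0)                 # U-statistic: drop diagonal terms
    return u.sum() / (n * (n - 1))
```

A sample drawn from N(0,1) yields a discrepancy near zero, while a shifted sample yields a clearly larger value, which is the behaviour the test exploits.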
  3. By: Yaya, OlaOluwa S; Vo, Xuan Vinh; Ogbonna, Ahamuefula E; Adewuyi, Adeolu O
    Abstract: This paper provides empirical support for fractional cointegration of high and low cryptocurrency price series, using Bitcoin, Ethereum, Litecoin and Ripple, synchronized at different high time frequencies. The difference between the high and low price gives the price range, and the range-based estimator of volatility is more efficient than the return-based estimator of realized volatility. The fractional cointegration technique applied is the more general Fractional Cointegrating Vector Autoregressive (FCVAR) framework. The results show that high and low cryptocurrency prices are indeed cointegrated, with the high-low price range as the resulting stationary relation. It is therefore quite interesting to note that the fractional cointegration approach yields a lower measure of persistence for the range than the fractional integration approach, and the results are insensitive to the choice of time frequency. The main finding in this work serves as an alternative volatility estimation method in cryptocurrency and other assets' price modelling and forecasting.
    Keywords: Fractional cointegration; Cryptocurrency; Fractional integration; FCVAR; Price range
    JEL: C22
    Date: 2020–03–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:102190&r=all
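The abstract refers to a range-based volatility estimator without naming one. A standard example (an illustrative choice, not necessarily the one used in the paper) is the Parkinson estimator, which uses only the high-low log range:

```python
import numpy as np

def parkinson_volatility(high, low):
    """Parkinson range-based variance estimate from per-period high/low
    prices. Uses log(high/low); under a driftless diffusion it is more
    efficient than squared close-to-close returns. Returns a per-period
    variance estimate."""
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    log_range = np.log(high / low)
    return np.mean(log_range**2) / (4.0 * np.log(2.0))
```

Because the range already summarizes intra-period variation, a small sample of ranges carries much more information about volatility than the same number of close-to-close returns.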
  4. By: Christian Francq (CREST - Centre de Recherche en Économie et Statistique - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz] - X - École polytechnique - ENSAE ParisTech - École Nationale de la Statistique et de l'Administration Économique - CNRS - Centre National de la Recherche Scientifique); Jean-Michel Zakoïan
    Abstract: This paper addresses the problem of deriving the asymptotic distribution of the empirical distribution function F̂n of the residuals in a general class of time series models, including conditional mean and conditional heteroscedasticity, whose independent and identically distributed errors have unknown distribution F. We show that, for a large class of time series models (including the standard ARMA-GARCH), the asymptotic distribution of √n{F̂n(·) − F(·)} is impacted by the estimation but does not depend on the model parameters. It is thus neither asymptotically estimation free, as is the case for purely linear models, nor asymptotically model dependent, as is the case for some nonlinear models. The asymptotic stochastic equicontinuity is also established. We consider an application to the estimation of the conditional Value-at-Risk.
    Date: 2020–07–14
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02898909&r=all
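The object of study is the residual empirical process √n{F̂n(·) − F(·)}. A minimal sketch of the corresponding Kolmogorov-type statistic for standardized residuals against a standard normal F (the choice of normal reference is an assumption for illustration; the paper shows parameter estimation changes the limit law, so the usual critical values do not apply directly):

```python
import numpy as np
from math import erf, sqrt

def ecdf_sup_statistic(residuals):
    """sup_x sqrt(n)|F_n(x) - Phi(x)|: the supremum distance between the
    empirical distribution of standardized residuals and the standard
    normal CDF, scaled by sqrt(n). Computes the raw statistic only."""
    x = np.sort(np.asarray(residuals, dtype=float))
    n = len(x)
    phi = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in x])  # Phi at order statistics
    # F_n jumps at each order statistic: compare from both sides of the jump
    upper = np.arange(1, n + 1) / n - phi
    lower = phi - np.arange(0, n) / n
    return sqrt(n) * max(upper.max(), lower.max())
```

Residuals genuinely drawn from the reference distribution keep the statistic of order one, while a misspecified F inflates it at rate √n.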
  5. By: Ali Alichi; Hayk Avetisyan; Douglas Laxton; Shalva Mkhatrishvili; Armen Nurbekyan; Lusine Torosyan; Hou Wang
    Abstract: This paper extends the multivariate filter approach of estimating potential output developed by Alichi and others (2018) to incorporate labor market hysteresis. This extension captures the idea that long and deep recessions (expansions) cause persistent damage (improvement) to the labor market, thereby reducing (increasing) potential output. Applying the model to U.S. data results in significantly smaller estimates of output gaps, and higher estimates of the NAIRU, after the global financial crisis, compared to estimates without hysteresis. The smaller output gaps partly explain the absence of persistent deflation despite the slow recovery during 2010-2017. Going forward, if strong growth performance continues well beyond 2018, hysteresis is expected to result in a structural improvement in growth and employment.
    Keywords: Business cycles; Unemployment; Potential output; Capacity utilization; Production growth; Macroeconomic modeling; output gap; NAIRU; hysteresis; potential growth; Volcker
    Date: 2019–02–19
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2019/035&r=all
  6. By: Sucarrat, Genaro
    Abstract: Volatility proxies like Realised Volatility (RV) are extensively used to assess the forecasts of squared financial return produced by Autoregressive Conditional Heteroscedasticity (ARCH) models. But are volatility proxies identified as expectations of the squared return? If not, then the results of these comparisons can be misleading, even if the proxy is unbiased. Here, a tripartite distinction between strong, semi-strong and weak identification of a volatility proxy as an expectation of squared return is introduced. The definition implies that semi-strong and weak identification can be studied and corrected for via a multiplicative transformation. Well-known tests can be used to check for identification and bias, and Monte Carlo simulations show they are well-sized and powerful -- even in fairly small samples. As an illustration, twelve volatility proxies used in three seminal studies are revisited. Half of the proxies do not satisfy either semi-strong or weak identification, but their corrected transformations do. Correcting for identification does not always reduce the bias of the proxy, so there is a tradeoff between the choice of correction and the resulting bias.
    Keywords: GARCH models, financial time-series econometrics, volatility forecasting, Realised Volatility
    JEL: C18 C22 C53 C58
    Date: 2020–07–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:101953&r=all
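The abstract says identification failures of a volatility proxy can be corrected via a multiplicative transformation. A deliberately simple version of that idea, rescaling the proxy so its sample mean matches that of the squared return (the paper's actual transformation and identification tests are more involved; this is only a sketch of the multiplicative-correction principle):

```python
import numpy as np

def rescale_proxy(r2, proxy):
    """Multiplicative rescaling of a volatility proxy so that its sample
    mean matches that of the squared return r2 -- an illustrative,
    unconditional version of the correction described in the abstract."""
    r2 = np.asarray(r2, dtype=float)
    proxy = np.asarray(proxy, dtype=float)
    c = r2.mean() / proxy.mean()   # single multiplicative constant
    return c * proxy
```

As the abstract cautions, matching means in this way need not reduce other forms of bias, hence the tradeoff between the choice of correction and the resulting bias.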
  7. By: Andrea Carriero (Queen Mary, University of London); Francesco Corsello (Bank of Italy); Massimiliano Marcellino (Università Bocconi, Milano)
    Abstract: We introduce a time-series model for a large set of variables in which the structural shocks identified are employed to simultaneously explain the evolution of both the level (conditional mean) and the volatility (conditional variance) of the variables. Specifically, the total volatility of macroeconomic variables is first decomposed into two separate components: an idiosyncratic component, and a component common to all of the variables. Then, the common volatility component, often interpreted as a measure of uncertainty, is further decomposed into three parts, respectively driven by the volatilities of the demand, supply and monetary/financial shocks. From a methodological point of view, the model is an extension of the homoscedastic Multivariate Autoregressive Index (MAI) model (Reinsel, 1983) to the case of time-varying volatility. We derive the conditional posterior distribution of the coefficients needed to perform estimations via Gibbs sampling. By estimating the model with US data, we find that the common component of volatility is substantial, and it explains at least 50 per cent of the overall volatility for most variables. The relative contribution of the demand, supply and financial volatilities to the common volatility component is variable specific and often time-varying, and some interesting patterns emerge.
    Keywords: Multivariate autoregressive Index models, stochastic volatility, reduced rank regressions, Bayesian VARs, factor models, structural analysis
    JEL: C15 C32 C38 C51 E30
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1285_20&r=all
  8. By: Vica Tendenan; Richard Gerlach; Chao Wang
    Abstract: This paper develops a Bayesian framework for the realized exponential generalized autoregressive conditional heteroskedasticity (realized EGARCH) model, which can incorporate multiple realized volatility measures for the modelling of a return series. The realized EGARCH model is extended by adopting a standardized Student-t and a standardized skewed Student-t distribution for the return equation. Different types of realized measures, such as sub-sampled realized variance, sub-sampled realized range, and realized kernel, are considered in the paper. The Bayesian Markov chain Monte Carlo (MCMC) estimation employs the robust adaptive Metropolis algorithm (RAM) in the burn-in period and the standard random walk Metropolis in the sampling period. The Bayesian estimators show more favourable results than maximum likelihood estimators in a simulation study. We test the proposed models with several indices to forecast one-step-ahead Value at Risk (VaR) and Expected Shortfall (ES) over a period of 1000 days. Rigorous tail risk forecast evaluations show that the realized EGARCH models employing the standardized skewed Student-t distribution and incorporating sub-sampled realized range are favored, compared to a range of models.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.05147&r=all
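Given a conditional mean and volatility from a fitted model, the one-step-ahead VaR and ES under standardized Student-t innovations can be sketched by simulation (the paper obtains these inside a Bayesian realized EGARCH; here `mu`, `sigma` and `df` are simply taken as given for illustration):

```python
import numpy as np

def var_es_student_t(mu, sigma, df, alpha=0.01, n_sim=200_000, seed=0):
    """One-step-ahead Value at Risk and Expected Shortfall for a return with
    conditional mean mu, conditional sd sigma, and standardized Student-t
    innovations with df > 2 degrees of freedom, computed by simulation."""
    rng = np.random.default_rng(seed)
    z = rng.standard_t(df, size=n_sim)
    z *= np.sqrt((df - 2) / df)          # standardize t so Var(z) = 1
    r = mu + sigma * z                   # simulated one-step-ahead returns
    var = np.quantile(r, alpha)          # alpha-quantile: VaR
    es = r[r <= var].mean()              # mean return beyond VaR: ES
    return var, es
```

Simulation sidesteps the closed-form t-tail formulas and extends unchanged to skewed innovations, at the cost of Monte Carlo noise in the extreme tail.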
  9. By: Silva Lopes, Artur
    Abstract: Motivated by the purpose to assess the income convergence hypothesis, a simple new Fourier-type unit root test of the Dickey-Fuller family is introduced and analysed. In spite of a few shortcomings that it shares with rival tests, the proposed test generally improves upon them in terms of power performance in small samples. The empirical results that it produces for a recent and updated sample of data for 25 countries clearly contrast with previous evidence produced by the Fourier approach and, more generally, they also contradict a recent wave of optimism concerning income convergence, as they are mostly unfavourable to it.
    Keywords: income convergence; unit root tests; structural breaks
    JEL: C22 F43 O47
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:102208&r=all
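A generic Fourier-augmented Dickey-Fuller regression conveys the idea behind such tests: trigonometric terms absorb smooth structural change while the t-statistic on the lagged level tests for a unit root. This is a sketch of the general approach, not the paper's specific test, and its critical values are nonstandard:

```python
import numpy as np

def fourier_df_tstat(y, k=1):
    """t-statistic on y_{t-1} in the regression
    dy_t = a + b*y_{t-1} + c*sin(2*pi*k*t/T) + d*cos(2*pi*k*t/T) + e_t,
    where the Fourier terms with frequency k proxy smooth breaks.
    A strongly negative value speaks against a unit root."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    ylag = y[:-1]
    T = len(dy)
    t = np.arange(1, T + 1)
    X = np.column_stack([np.ones(T), ylag,
                         np.sin(2 * np.pi * k * t / T),
                         np.cos(2 * np.pi * k * t / T)])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (T - X.shape[1])          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)              # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])
```

A stationary series produces a much more negative statistic than a random walk, which is what the income-convergence application turns on.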
  10. By: Bruno Damásio; João Nicolau
    Abstract: Markov chain models are used in several applications and different areas of study. Usually a Markov chain model is assumed to be homogeneous, in the sense that the transition probabilities are time invariant. Yet ignoring the inhomogeneous nature of a stochastic process by disregarding the presence of structural breaks can lead to misleading conclusions. Several methodologies have been proposed for detecting structural breaks in a Markov chain; however, these methods have some limitations, notably that they can only test directly for the presence of a single structural break. This paper proposes a new methodology for detecting and testing for the presence of multiple structural breaks in a Markov chain occurring at unknown dates.
    Keywords: Inhomogeneous Markov chain, structural breaks, time-varying probabilities
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:ise:remwps:wp01362020&r=all
  11. By: Qu, Ritong; Timmermann, Allan; Zhu, Yinchu
    Abstract: To answer this question, we develop new testing methods for identifying superior forecasting skills in settings with arbitrarily many forecasters, outcome variables, and time periods. Our methods allow us to address whether any economists had superior forecasting skills for any variables or at any point in time, while carefully controlling for the role of "luck", which can give rise to false discoveries when large numbers of forecasts are evaluated. We propose new hypotheses and test statistics that can be used to identify specialist, generalist, and event-specific skills in forecasting performance. We apply our new methods to a large set of Bloomberg survey forecasts of US economic data and show that, overall, there is very little evidence that any individual forecaster can beat a simple equal-weighted average of peer forecasts.
    Keywords: Bloomberg survey; Economic forecasting; multiple testing; superior predictive skills
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:14112&r=all
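The single-forecaster building block of such an evaluation can be sketched as a Diebold-Mariano-style t-statistic on squared-error loss differentials against the equal-weighted consensus (assuming iid losses for simplicity; the paper's tests additionally control for multiple testing across many forecasters, variables and periods, which is exactly where naive per-forecaster t-tests generate false discoveries):

```python
import numpy as np

def skill_tstat(fc, consensus, actual):
    """t-statistic on the mean squared-error loss differential between one
    forecaster and the equal-weighted consensus forecast. Negative values
    mean the forecaster beat the consensus on average."""
    fc = np.asarray(fc, dtype=float)
    consensus = np.asarray(consensus, dtype=float)
    actual = np.asarray(actual, dtype=float)
    d = (fc - actual)**2 - (consensus - actual)**2   # per-period loss differential
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```

With hundreds of forecasters, the most negative of these statistics is large in absolute value even under no skill, which is why the paper's joint, luck-controlling tests are needed before declaring anyone superior.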

This nep-ets issue is ©2020 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.