nep-ets New Economics Papers
on Econometric Time Series
Issue of 2020‒05‒11
seventeen papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Spectral analysis of business and consumer survey data By Oscar Claveria; Enric Monte; Salvador Torra
  2. Tail Granger causalities and where to find them: extreme risk spillovers vs. spurious linkages By Piero Mazzarisi; Silvia Zaoli; Carlo Campajola; Fabrizio Lillo
  3. An introduction to bootstrap theory in time series econometrics By Giuseppe Cavaliere; Heino Bohn Nielsen; Anders Rahbek
  4. Flexible Fat-tailed Vector Autoregression By Karlsson, Sune; Mazur, Stepan
  5. Multi-frequency-band tests for white noise under heteroskedasticity By Mengya Liu; Fukan Zhu; Ke Zhu
  6. Revealing Cluster Structures Based on Mixed Sampling Frequencies By Yeonwoo Rho; Yun Liu; Hie Joo Ahn
  7. Bagging Weak Predictors By Eric Hillebrand; Manuel Lukas; Wei Wei
  8. A comment on the dynamic factor model with dynamic factors By Poncela, Pilar; Ruiz, Esther
  9. An ARIMA model to forecast the spread and the final size of COVID-2019 epidemic in Italy By Gaetano Perone
  10. Arctic Amplification of Anthropogenic Forcing: A Vector Autoregressive Analysis By Philippe Goulet Coulombe; Maximilian Göbel
  11. Clustering volatility regimes for dynamic trading strategies By Gilad Francis; Nick James; Max Menzies; Arjun Prakash
  12. Forecasting the US Dollar-Korean Won Exchange Rate: A Factor-Augmented Model Approach By Sarthak Behera; Hyeongwoo Kim; Soohyon Kim
  13. Long-Range Dependence in Financial Markets: a Moving Average Cluster Entropy Approach By Pietro Murialdo; Linda Ponta; Anna Carbone
  14. Volatility Regressions with Fat Tails By Kim, Jihyun; Meddahi, Nour
  15. Skewed non-Gaussian GARCH models for cryptocurrencies volatility modelling By Roy Cerqueti; Massimiliano Giacalone; Raffaele Mattera
  16. Machine Learning Econometrics: Bayesian algorithms and methods By Dimitris Korobilis; Davide Pettenuzzo
  17. Forecasting in the presence of instabilities: How do we know whether models predict well and how to improve them By Barbara Rossi

  1. By: Oscar Claveria (AQR-IREA, University of Barcelona); Enric Monte (Polytechnic University of Catalunya); Salvador Torra (Riskcenter-IREA, University of Barcelona)
    Abstract: The main objective of this study is two-fold. First, we aim to detect the underlying existing periodicities in business and consumer survey data. With this objective, we conduct a spectral analysis of all survey indicators. Second, we aim to provide researchers with a filter especially designed for business and consumer survey data that circumvents the a priori assumptions of other filtering methods. To this end, we design a low-pass filter that allows extracting the components with periodicities similar to those that can be found in the dynamics of economic activity. The European Commission (EC) conducts monthly business and consumer tendency surveys in which respondents are asked whether they expect a set of variables to rise, fall or remain unchanged. We apply the Welch method for the detection of periodic components in each of the response options of all monthly survey indicators. This approach allows us to extract the harmonic components that correspond to the cyclic and seasonal patterns of the series. Unlike other methods for spectral density estimation, the Welch algorithm provides smoother estimates of the periodicities. We find remarkable differences between the periodicities detected in the industry survey and the consumer survey. While business survey indicators show a common cyclical component of low frequency that corresponds to about four years, for most consumer survey indicators we do not detect any relevant cyclic components, and the obtained lower frequency periodicities show a very irregular pattern across questions and reply options. Most methods for seasonal adjustment are based on a priori assumptions about the structure of the components and do not depend on the features of the specific series. In order to overcome this limitation, we design a low-pass filter for survey indicators. We opt for a Butterworth filter and apply a zero-phase filtering process to preserve the time alignment of the time series. 
This procedure allows us to reject the frequency components of the survey indicators that do not have a counterpart in the dynamics of economic activity. We use the filtered series to compute diffusion indexes known as balances, and compare them to the seasonally-adjusted balances published by the EC. Although both series are highly correlated, filtered balances tend to be smoother for the consumer survey indicators.
    Keywords: Business and consumer surveys, Spectral analysis, Seasonality, Signal processing, Low-pass filter
    JEL: C65 C82
    Date: 2020–05
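The Welch-periodogram-plus-zero-phase-Butterworth pipeline described in this abstract can be sketched in a few lines. The series, the 18-month cutoff, and the filter order below are illustrative assumptions, not the authors' choices:

```python
# Sketch of periodicity detection and low-pass filtering for a synthetic
# monthly "survey balance" series; cutoff and order are illustrative.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
n = 240  # 20 years of monthly observations
t = np.arange(n)
# synthetic series: ~4-year cycle + seasonal component + noise
x = (np.sin(2 * np.pi * t / 48) + 0.5 * np.sin(2 * np.pi * t / 12)
     + 0.3 * rng.standard_normal(n))

# 1) Welch periodogram: smoother spectral estimate via averaged segments
freqs, pxx = signal.welch(x, fs=12.0, nperseg=120)  # fs = 12 cycles/year
peak_period_years = 1.0 / freqs[np.argmax(pxx[1:]) + 1]

# 2) Zero-phase Butterworth low-pass filter: keep business-cycle frequencies,
#    reject seasonal/high-frequency components without shifting the series
b, a = signal.butter(4, 12.0 / 18.0, btype="low", fs=12.0)  # 18-month cutoff
x_filtered = signal.filtfilt(b, a, x)  # forward-backward pass: no phase lag
print(round(peak_period_years, 1), x_filtered.shape)
```

The `filtfilt` call is what makes the filtering zero-phase: the series is filtered forward and backward so the time alignment of turning points is preserved.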
  2. By: Piero Mazzarisi; Silvia Zaoli; Carlo Campajola; Fabrizio Lillo
    Abstract: Identifying risk spillovers in financial markets is of great importance for assessing systemic risk and for portfolio management. Granger causality in tail (or in risk) tests whether past extreme events of a time series help predict future extreme events of another time series. The topology and connectedness of networks built with Granger causality in tail can be used to measure systemic risk and to identify risk transmitters. Here we introduce a novel test of Granger causality in tail which adopts the likelihood ratio statistic and is based on the multivariate generalization of a discrete autoregressive process for binary time series describing the sequence of extreme events of the underlying price dynamics. The proposed test has very good size and power in finite samples, especially for large sample sizes; it allows inferring the correct time scale at which the causal interaction takes place; and it is flexible enough for multivariate extension when more than two time series are considered, in order to reduce false detections arising as spurious effects of neglected variables. An extensive simulation study demonstrates the performance of the proposed method on a large variety of data-generating processes and compares it with the test of Granger causality in tail by [Hong et al., 2009]. We report both advantages and drawbacks of the different approaches, pointing out some crucial aspects related to false detections of Granger causality for tail events. An empirical application to high-frequency data on a portfolio of US stocks highlights the merits of our novel approach.
    Date: 2020–05
  3. By: Giuseppe Cavaliere (Department of Economics, University of Bologna, Italy); Heino Bohn Nielsen (Department of Economics, University of Copenhagen, Denmark); Anders Rahbek (Department of Economics, University of Copenhagen, Denmark)
    Abstract: This article provides an introduction to methods and challenges underlying application of the bootstrap in econometric modelling of economic and financial time series. Validity, or asymptotic validity, of the bootstrap is discussed as this is a key element in deciding whether the bootstrap is applicable in empirical contexts. That is, as detailed here, bootstrap validity relies on regularity conditions, which need to be verified on a case-by-case basis. To fix ideas, asymptotic validity is discussed in terms of the leading example of bootstrap-based hypothesis testing in the well-known first order autoregressive model. In particular, bootstrap versions of classic convergence in probability and distribution, and hence of laws of large numbers and central limit theorems, are discussed as crucial ingredients to establish bootstrap validity. Regularity conditions and their implications for possible improvements in terms of (empirical) size and power for bootstrap-based testing, when compared to asymptotic testing, are illustrated by simulations. Following this, an overview of selected recent advances in the application of bootstrap methods in econometrics is also given.
    Keywords: Bootstrap theory; Bootstrap implementation; Econometric time series analysis; Testing; Asymptotic theory; Autoregressive models
    JEL: C12 C13 C15 C22 C32 C50
    Date: 2020–12–17
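The chapter's leading example, bootstrap-based hypothesis testing in a first-order autoregression, can be illustrated with a minimal residual bootstrap. The null value, sample size, and number of bootstrap draws below are illustrative assumptions:

```python
# Minimal residual-bootstrap test of H0: rho = 0.5 in an AR(1) model,
# in the spirit of the chapter's leading example. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def ols_ar1(y):
    """OLS slope and residuals for y_t = rho * y_{t-1} + e_t (no intercept)."""
    x, z = y[:-1], y[1:]
    rho = (x @ z) / (x @ x)
    return rho, z - rho * x

# simulate data under the null rho0 = 0.5
rho0, n = 0.5, 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho0 * y[t - 1] + rng.standard_normal()

rho_hat, resid = ols_ar1(y)
stat = abs(rho_hat - rho0)

# residual bootstrap: regenerate the series under H0 with resampled residuals
B = 499
boot_stats = np.empty(B)
for b in range(B):
    e_star = rng.choice(resid, size=n - 1, replace=True)
    y_star = np.zeros(n)
    for t in range(1, n):
        y_star[t] = rho0 * y_star[t - 1] + e_star[t - 1]
    rho_star, _ = ols_ar1(y_star)
    boot_stats[b] = abs(rho_star - rho0)

# bootstrap p-value: share of bootstrap statistics at least as large as observed
p_value = (1 + np.sum(boot_stats >= stat)) / (B + 1)
print(round(p_value, 3))
```

The key validity requirement discussed in the chapter is that the bootstrap distribution of the statistic mimics its null sampling distribution; imposing the null (here, generating `y_star` with `rho0`) is one standard way to achieve this.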
  4. By: Karlsson, Sune (Örebro University School of Business); Mazur, Stepan (Örebro University School of Business)
    Abstract: We propose a general class of multivariate fat-tailed distributions which includes the normal, t and Laplace distributions as special cases as well as their mixture. Full conditional posterior distributions for the Bayesian VAR-model are derived and used to construct a MCMC-sampler for the joint posterior distribution. The framework allows for selection of a specific special case as the distribution for the error terms in the VAR if the evidence in the data is strong while at the same time allowing for considerable flexibility and more general distributions than offered by any of the special cases. As fat tails can also be a sign of conditional heteroskedasticity we also extend the model to allow for stochastic volatility. The performance is evaluated using simulated data and the utility of the general model specification is demonstrated in applications to macroeconomics.
    Keywords: Scale mixture of normals; Elliptically contoured distribution; Mixture distributions; Stochastic volatility; Markov Chain Monte Carlo
    JEL: C11 C15 C16 C32 C52
    Date: 2020–04–27
  5. By: Mengya Liu; Fukan Zhu; Ke Zhu
    Abstract: This paper proposes a new family of multi-frequency-band (MFB) tests for the white noise hypothesis by using the maximum overlap discrete wavelet packet transform (MODWPT). The MODWPT allows the variance of a process to be decomposed into the variances of its components on different equal-length frequency sub-bands, and the MFB tests then measure the distance between the MODWPT-based variance ratio and its theoretical null value jointly over several frequency sub-bands. The resulting MFB tests have chi-squared asymptotic null distributions under mild conditions, which allow the data to be heteroskedastic. Simulation studies show that the MFB tests have desirable size and power performance, and their usefulness is further illustrated by two applications.
    Date: 2020–04
  6. By: Yeonwoo Rho; Yun Liu; Hie Joo Ahn
    Abstract: This paper proposes a new nonparametric mixed data sampling (MIDAS) model and develops a framework to infer clusters in a panel dataset of mixed sampling frequencies. The nonparametric MIDAS estimation method is more flexible but substantially less costly to estimate than existing approaches. The proposed clustering algorithm successfully recovers true membership in the cross-section both in theory and in simulations without requiring prior knowledge such as the number of clusters. This methodology is applied to estimate a mixed-frequency Okun's law model for the state-level data in the U.S. and uncovers four clusters based on the dynamic features of labor markets.
    Date: 2020–04
  7. By: Eric Hillebrand; Manuel Lukas; Wei Wei
    Abstract: Relations between economic variables are often not exploited for forecasting, suggesting that predictors are weak in the sense that the estimation uncertainty is larger than the bias from ignoring the relation. In this paper, we propose a novel bagging estimator designed for such predictors. Based on a test for finite-sample predictive ability, our estimator shrinks the OLS estimate not to zero, but towards the null of the test, which equates squared bias with estimation variance, and we apply bagging to further reduce the estimation variance. We derive the asymptotic distribution and show that our estimator can substantially lower the MSE compared to standard t-test bagging. An asymptotic shrinkage representation of the estimator that simplifies computation is provided. Monte Carlo simulations show that the predictor works well in small samples. In an empirical application, we find that our proposed estimator works well for inflation forecasting using unemployment or industrial production as predictors.
    Keywords: inflation forecasting, bootstrap aggregation, estimation uncertainty, weak predictors, shrinkage methods.
    JEL: C13 C15 C18
    Date: 2020
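The basic bagging-a-pretest-estimator idea can be sketched as follows. This is the standard t-test bagging construction that the paper improves upon, not the authors' own estimator; the threshold, sample size, and data-generating process are illustrative assumptions:

```python
# Sketch of bagging a pretest ("hard-threshold") estimator for a weak
# predictor: on each bootstrap resample, keep the OLS slope only if its
# t-statistic clears a threshold, then average. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.standard_normal(n)
y = 0.1 * x + rng.standard_normal(n)  # weak predictor: small true slope

def pretest_slope(x, y, c=1.96):
    """OLS slope, set to zero when its t-statistic is below the threshold c."""
    beta = (x @ y) / (x @ x)
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (len(y) - 1) / (x @ x))
    return beta if abs(beta / se) > c else 0.0

# bagging: average the pretest estimator over bootstrap resamples, which
# smooths the hard thresholding into a shrinkage-like rule
B = 200
draws = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    draws[b] = pretest_slope(x[idx], y[idx])
beta_bagged = draws.mean()
print(round(beta_bagged, 3))
```

Averaging over resamples is what reduces the variance contributed by the discontinuous keep-or-drop decision; the paper's contribution is to shrink towards a data-driven null rather than towards zero.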
  8. By: Poncela, Pilar; Ruiz, Esther
    Abstract: In this paper, the authors comment on the Monte Carlo results of Lucchetti and Veneti (A replication of "A quasi-maximum likelihood approach for large, approximate dynamic factor models" (Review of Economics and Statistics), 2020), which studies and compares the performance of the Kalman Filter and Smoothing (KFS) and Principal Components (PC) factor extraction procedures in the context of Dynamic Factor Models (DFMs). The new Monte Carlo results of Lucchetti and Veneti (2020) refer to a DFM in which the relation between the factors and the variables in the system is not only contemporaneous but also lagged. The authors' main point is that, in this context, the model specification, which is assumed to be known in Lucchetti and Veneti (2020), is important for the properties of the estimated factors. Furthermore, estimation of the parameters is also problematic in some cases.
    Keywords: Dynamic factor models,EM algorithm,Kalman filter,principal components
    JEL: C15 C32 C55 C87
    Date: 2020
  9. By: Gaetano Perone
    Abstract: Coronavirus disease (COVID-2019) is a severe ongoing novel pandemic that is spreading quickly across the world. Italy, which is widely considered one of the main epicenters of the pandemic, has to date registered the highest COVID-2019 death rate and death toll in the world. In this article I estimate an autoregressive integrated moving average (ARIMA) model to forecast the epidemic trend over the period after April 4, 2020, using Italian epidemiological data at the national and regional level. The data refer to the number of daily confirmed cases officially registered by the Italian Ministry of Health for the period February 20 to April 4, 2020. The main advantage of this model is that it is easy to manage and fit. Moreover, it may give a first understanding of the basic trends, by suggesting the epidemic's hypothetical inflection point and final size.
    Keywords: COVID-2019; infectious disease; pandemic; time series; ARIMA model; forecasting models
    JEL: C22 C53 I18
    Date: 2020–04
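The mechanics of an ARIMA-style forecast of a cumulative epidemic curve can be sketched with a stripped-down ARIMA(1,1,0) fitted by OLS on first differences. This is a simplified stand-in for the paper's specification, and the logistic-shaped synthetic data are an assumption:

```python
# Minimal ARIMA(1,1,0)-style forecast of a cumulative epidemic curve,
# fitted by OLS on first differences. Data and horizon are illustrative.
import numpy as np

rng = np.random.default_rng(3)
# synthetic cumulative case counts: logistic-shaped growth plus noise
t = np.arange(45)
cum_cases = 10000 / (1 + np.exp(-(t - 22) / 5)) + rng.normal(0, 30, t.size)

d = np.diff(cum_cases)                       # I(1): model daily new cases
phi = (d[:-1] @ d[1:]) / (d[:-1] @ d[:-1])   # AR(1) slope on the differences

# iterate the AR(1) recursion forward, then cumulate back to levels
h = 14
last_diff, level = d[-1], cum_cases[-1]
forecast = []
for _ in range(h):
    last_diff = phi * last_diff
    level = level + last_diff
    forecast.append(level)
forecast = np.array(forecast)
print(forecast.shape)
```

Differencing handles the trending level of the cumulative series; the AR term then extrapolates the recent momentum of daily new cases, which is how such a model can hint at an inflection point and final size.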
  10. By: Philippe Goulet Coulombe; Maximilian Göbel
    Abstract: Arctic sea ice extent (SIE) in September 2019 ranked second-to-lowest in history and is trending downward. The understanding of how internal variability amplifies the effects of external $\text{CO}_2$ forcing is still limited. We propose the VARCTIC, which is a Vector Autoregression (VAR) designed to capture and extrapolate Arctic feedback loops. VARs are dynamic simultaneous systems of equations, routinely estimated to predict and understand the interactions of multiple macroeconomic time series. Hence, the VARCTIC is a parsimonious compromise between full-blown climate models and purely statistical approaches that usually offer little explanation of the underlying mechanism. Our "business as usual" completely unconditional forecast has SIE hitting 0 in September by the 2060s. Impulse response functions reveal that anthropogenic $\text{CO}_2$ emission shocks have a permanent effect on SIE - a property shared by no other shock. Further, we find Albedo- and Thickness-based feedbacks to be the main amplification channels through which $\text{CO}_2$ anomalies impact SIE in the short/medium run. Conditional forecast analyses reveal that the future path of SIE crucially depends on the evolution of $\text{CO}_2$ emissions, with outcomes ranging from recovering SIE to it reaching 0 in the 2050s. Finally, Albedo and Thickness feedbacks are shown to play an important role in accelerating the speed at which predicted SIE is heading towards 0.
    Date: 2020–05
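The VAR machinery the abstract relies on — equation-by-equation OLS estimation followed by impulse responses from powers of the coefficient matrix — can be sketched on a synthetic two-variable system. The coefficients, noise scale, and horizon below are illustrative assumptions, not the VARCTIC's:

```python
# Sketch of VAR(1) estimation by OLS and impulse response computation
# on a synthetic bivariate system. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)
A_true = np.array([[0.8, 0.1],
                   [0.2, 0.5]])  # stable: both eigenvalues inside unit circle
n = 500
Y = np.zeros((n, 2))
for t in range(1, n):
    Y[t] = A_true @ Y[t - 1] + 0.1 * rng.standard_normal(2)

# OLS equation by equation: Y_t = A Y_{t-1} + e_t
X, Z = Y[:-1], Y[1:]
A_hat = np.linalg.lstsq(X, Z, rcond=None)[0].T

# impulse responses: effect of a unit shock to variable 0 at each horizon
horizons = 20
irf = np.array([np.linalg.matrix_power(A_hat, h) @ np.array([1.0, 0.0])
                for h in range(horizons)])
print(A_hat.round(2))
```

In a stable VAR the impulse responses decay to zero; the abstract's finding that CO2 shocks have a *permanent* effect on SIE corresponds to responses that do not die out at long horizons.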
  11. By: Gilad Francis; Nick James; Max Menzies; Arjun Prakash
    Abstract: We develop a new method to find the number of volatility regimes in a non-stationary financial time series. We use change point detection to partition a time series into locally stationary segments, then estimate the distributions of each piece. The distributions are clustered into a learned number of discrete volatility regimes via an optimisation routine. Using this method, we investigate and determine a clustering structure for indices, large cap equities and exchange-traded funds. Finally, we create and validate a dynamic portfolio allocation strategy that learns the optimal match between the current distribution of a time series with its past regimes, thereby making online risk-avoidance decisions in the present.
    Date: 2020–04
  12. By: Sarthak Behera; Hyeongwoo Kim; Soohyon Kim
    Abstract: We propose factor-augmented out-of-sample forecasting models for the real exchange rate between Korea and the US. We estimate latent common factors by applying an array of data dimensionality reduction methods to a large panel of monthly frequency time series data. We augment benchmark forecasting models with common factor estimates to formulate out-of-sample forecasts of the real exchange rate. Major findings are as follows. First, our factor models outperform conventional forecasting models when combined with factors from US macroeconomic predictors, whereas factor models based on Korean data perform poorly overall. Second, our factor models perform well at longer horizons when American real activity factors are employed, whereas American nominal/financial market factors help improve short-run prediction accuracy. Third, models with global PLS factors from UIP fundamentals perform well overall, while PPP and RIRP factors play a limited role in forecasting.
    Keywords: Won/Dollar Real Exchange Rate; Principal Component Analysis; Partial Least Squares; LASSO; Out-of-Sample Forecast
    JEL: C38 C53 C55 F31 G17
    Date: 2020–05
  13. By: Pietro Murialdo; Linda Ponta; Anna Carbone
    Abstract: A perspective is taken on the intangible complexity of economic and social systems by investigating the underlying dynamical processes that produce, store and transmit information in financial time series in terms of the moving average cluster entropy. An extensive analysis has evidenced market and horizon dependence of the moving average cluster entropy in real-world financial assets. The origin of this behavior is scrutinized by applying the moving average cluster entropy approach to long-range correlated stochastic processes such as the Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Fractional Brownian Motion (FBM). To that end, an extensive set of series is generated with a broad range of values of the Hurst exponent $H$ and of the autoregressive, differencing and moving average parameters $p, d, q$. A systematic relation between the moving average cluster entropy, the Market Dynamic Index and the long-range correlation parameters $H$ and $d$ is observed. This study shows that the characteristic behaviour exhibited by the horizon dependence of the cluster entropy is related to long-range positive correlation in financial markets. Specifically, long-range positively correlated ARFIMA processes with differencing parameter $d \simeq 0.05$, $d \simeq 0.15$ and $d \simeq 0.25$ are consistent with the moving average cluster entropy results obtained for time series of the DJIA, S&P 500 and NASDAQ.
    Date: 2020–04
  14. By: Kim, Jihyun; Meddahi, Nour
    Abstract: Nowadays, a common method to forecast integrated variance is to use the fitted value of a simple OLS autoregression of the realized variance. However, non-parametric estimates of the tail index of this realized variance process reveal that its second moment is possibly unbounded. In this case, the behavior of the OLS estimators and the corresponding statistics is unclear. We prove that when the second moment of the spot variance is unbounded, the slope of the spot variance's autoregression converges to a random variable as the sample size diverges. The same result holds when one uses the integrated or realized variance instead of the spot variance. We then consider the class of diffusion variance models with an affine drift, a class which includes GARCH and CEV processes, and we prove that IV estimation with adequate instruments provides consistent estimators of the drift parameters as long as the variance process has a finite first moment, regardless of the existence of the second moment. In particular, for the GARCH diffusion model with fat tails, an IV estimation where the instrument equals the sign of the centered lagged value of the variable of interest provides consistent estimators. Simulation results corroborate the theoretical findings of the paper.
    Keywords: volatility; autoregression; fat tails; random limits.
    Date: 2020–05
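The sign-instrument idea in the abstract is simple to illustrate: since the instrument is bounded, its moments exist even when the regressor is fat-tailed. The sketch below uses a well-behaved (thin-tailed) AR(1) for the variance proxy purely for illustration; the intercept, slope, and noise scale are assumptions:

```python
# Sketch of IV estimation of an autoregression slope with the sign of the
# centered lagged value as instrument. The AR(1) DGP is illustrative.
import numpy as np

rng = np.random.default_rng(6)
n, b_true = 5000, 0.7
v = np.zeros(n)
for t in range(1, n):
    v[t] = 0.3 + b_true * v[t - 1] + 0.2 * rng.standard_normal()

x, y = v[:-1], v[1:]
z = np.sign(x - x.mean())   # bounded instrument: moments exist under fat tails

# just-identified IV slope: sample cov(z, y) / sample cov(z, x)
zc = z - z.mean()
b_iv = (zc @ y) / (zc @ x)
b_ols = ((x - x.mean()) @ y) / ((x - x.mean()) @ (x - x.mean()))
print(round(b_iv, 2), round(b_ols, 2))
```

Here IV and OLS both recover the slope because the errors are thin-tailed; the paper's point is that when the variance process has an unbounded second moment, OLS behaves erratically while the bounded sign instrument still yields a consistent estimator.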
  15. By: Roy Cerqueti; Massimiliano Giacalone; Raffaele Mattera
    Abstract: Recently, cryptocurrencies have attracted growing interest from investors, practitioners and researchers. Nevertheless, few studies have focused on their predictability. In this paper we propose a new and comprehensive study of the cryptocurrency market, evaluating the forecasting performance for three of the most important cryptocurrencies (Bitcoin, Ethereum and Litecoin) in terms of market capitalization. To this end, we consider non-Gaussian GARCH volatility models, which form a class of stochastic recursive systems commonly adopted for financial predictions. Results show that the best specification and forecasting accuracy are achieved under the Skewed Generalized Error Distribution for the Bitcoin/USD and Litecoin/USD exchange rates, while the best performance is obtained under a skewed distribution for the Ethereum/USD exchange rate. These findings confirm the effectiveness -- in terms of prediction performance -- of relaxing the normality assumption and considering skewed distributions.
    Date: 2020–04
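The GARCH(1,1) recursion underlying the models compared in the paper can be simulated compactly. Gaussian shocks stand in here for the skewed non-Gaussian distributions the paper studies, and the parameter values are illustrative:

```python
# Compact GARCH(1,1) simulation showing volatility clustering.
# Gaussian shocks and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
omega, alpha, beta = 0.05, 0.10, 0.85   # persistence alpha + beta < 1
n = 2000
r = np.zeros(n)   # returns
h = np.zeros(n)   # conditional variance
h[0] = omega / (1 - alpha - beta)       # unconditional variance

for t in range(1, n):
    # variance recursion: reacts to last squared return, decays via beta
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

# volatility clustering: squared returns are positively autocorrelated
# even though the returns themselves are conditionally mean zero
r2 = r ** 2
acf1 = np.corrcoef(r2[:-1], r2[1:])[0, 1]
print(round(acf1, 2))
```

Swapping the `standard_normal` draw for a skewed generalized error distribution is exactly the relaxation of normality the paper evaluates for cryptocurrency returns.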
  16. By: Dimitris Korobilis; Davide Pettenuzzo
    Abstract: As the amount of economic and other data generated worldwide increases vastly, a challenge for future generations of econometricians will be to master efficient algorithms for inference in empirical models with large information sets. This Chapter provides a review of popular estimation algorithms for Bayesian inference in econometrics and surveys alternative algorithms developed in machine learning and computing science that allow for efficient computation in high-dimensional settings. The focus is on scalability and parallelizability of each algorithm, as well as their ability to be adopted in various empirical settings in economics and finance.
    Date: 2020–04
  17. By: Barbara Rossi
    Abstract: This article provides guidance on how to evaluate and improve the forecasting ability of models in the presence of instabilities, which are widespread in economic time series. Empirically relevant examples include predicting the financial crisis of 2007-2008, as well as, more broadly, fluctuations in asset prices, exchange rates, output growth and inflation. In the context of unstable environments, I discuss how to assess models' forecasting ability; how to robustify models' estimation; and how to correctly report measures of forecast uncertainty. Importantly, and perhaps surprisingly, breaks in models' parameters are neither necessary nor sufficient to generate time variation in models' forecasting performance: thus, one should not test for breaks in models' parameters, but rather evaluate their forecasting ability in a robust way. In addition, local measures of models' forecasting performance are more appropriate than traditional, average measures.
    Keywords: forecasting, instabilities, time variation, inflation, structural breaks, density forecasts, great recession, forecast confidence intervals, output growth, business cycles
    JEL: E4 E52 E21 H31 I3 D1
    Date: 2019–11

This nep-ets issue is ©2020 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.