nep-ets New Economics Papers
on Econometric Time Series
Issue of 2020‒03‒23
twelve papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Deep Learning, Jumps, and Volatility Bursts By Oksana Bashchenko; Alexis Marchal
  2. Unit-root test within a threshold ARMA framework By Kung-Sik Chan; Simone Giannerini; Greta Goracci; Howell Tong
  3. Pollution and Expenditures in a Penalized Vector Spatial Autoregressive Time Series Model with Data-Driven Networks By Andree,Bo Pieter Johannes; Spencer,Phoebe Girouard; Azari,Sardar; Chamorro,Andres; Wang,Dieter; Dogo,Harun
  4. On bootstrapping tests of equal forecast accuracy for nested models By Firmin Doko Tchatoka; Qazi Haque
  5. FRED-QD: A Quarterly Database for Macroeconomic Research By Michael W. McCracken; Serena Ng
  6. Adaptive exponential power distribution with moving estimator for nonstationary time series By Jarek Duda
  7. Forecasting Foreign Exchange Rate: A Multivariate Comparative Analysis between Traditional Econometric, Contemporary Machine Learning & Deep Learning Techniques By Manav Kaushik; A K Giri
  8. A mixture autoregressive model based on Gaussian and Student's $t$-distributions By Savi Virolainen
  9. Online Estimation of DSGE Models By Michael D. Cai; Marco Del Negro; Edward P. Herbst; Ethan Matlin; Reca Sarfati; Frank Schorfheide
  10. Backward CUSUM for Testing and Monitoring Structural Change By Sven Otto; J\"org Breitung
  11. Estimation of Weak Factor Models By Yoshimasa Uematsu; Takashi Yamagata
  12. Bayesian Inference in High-Dimensional Time-varying Parameter Models using Integrated Rotated Gaussian Approximations By Florian Huber; Gary Koop; Michael Pfarrhofer

  1. By: Oksana Bashchenko (HEC Lausanne; Swiss Finance Institute); Alexis Marchal (EPFL; SFI)
    Abstract: We develop a new method that detects jumps nonparametrically in financial time series and significantly outperforms the current benchmark on simulated data. We use a long short-term memory (LSTM) neural network that is trained on labelled data generated by a process that experiences both jumps and volatility bursts. As a result, the network learns how to disentangle the two. It is then applied to out-of-sample simulated data and delivers results that differ considerably from the benchmark: we obtain fewer spurious detections and identify a larger number of true jumps. When applied to real data, our jump-screening approach allows us to extract a more precise signal about future volatility.
    Keywords: Jumps, Volatility Burst, High-Frequency Data, Deep Learning, LSTM
    JEL: C14 C32 C45 C58 G17
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2010&r=all
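The key ingredient described in the abstract is a labelled training set: paths simulated from a process that experiences both jumps and volatility bursts, so the network can learn to tell the two apart. A minimal numpy sketch of such a generator (all parameter values are illustrative, not the authors'):

```python
import numpy as np

def simulate_labelled_path(n=1000, seed=0):
    """Simulate returns with a volatility burst and rare jumps; label each step."""
    rng = np.random.default_rng(seed)
    sigma = np.full(n, 0.01)
    # volatility burst: volatility quadruples over a random 100-step window
    start = rng.integers(0, n - 100)
    sigma[start:start + 100] *= 4.0
    returns = sigma * rng.standard_normal(n)
    # rare jumps: add a large shock with probability 1% per step
    is_jump = rng.random(n) < 0.01
    returns[is_jump] += rng.choice([-1.0, 1.0], is_jump.sum()) * 0.05
    return returns, is_jump.astype(int)  # features and labels for a classifier

returns, labels = simulate_labelled_path()
```

Pairs like `(returns, labels)` would then serve as supervised training data for an LSTM classifier.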
  2. By: Kung-Sik Chan; Simone Giannerini; Greta Goracci; Howell Tong
    Abstract: We propose a new unit-root test based on Lagrange Multipliers, where we extend the null hypothesis to an integrated moving-average process (IMA(1,1)) and the alternative to a first-order threshold autoregressive moving-average process (TARMA(1,1)). This new theoretical framework provides tests with good size without requiring pre-modelling steps. Moreover, leveraging the versatility of the TARMA(1,1) specification, our test has power against a wide range of linear and nonlinear alternatives. We prove the consistency and asymptotic similarity of the test. The proof of tightness of the test is of independent and general theoretical interest. Moreover, we propose a wild bootstrap version of the statistic. Our proposals outperform most existing tests in many contexts. We support the view that rejection does not necessarily imply nonlinearity, so that unit-root tests should not be used uncritically to select a model. Finally, we present an application to real exchange rates.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.09968&r=all
  3. By: Andree,Bo Pieter Johannes; Spencer,Phoebe Girouard; Azari,Sardar; Chamorro,Andres; Wang,Dieter; Dogo,Harun
    Abstract: This paper introduces a Spatial Vector Autoregressive Moving Average (SVARMA) model in which multiple cross-sectional time series are modeled as multivariate, possibly fat-tailed, spatial autoregressive ARMA processes. The estimation requires specifying the cross-sectional spillover channels through spatial weights matrices. The paper explores a kernel method to estimate the network topology based on similarities in the data. It discusses the model and estimation, focusing on a penalized Maximum Likelihood criterion. The empirical performance of the estimator is explored in a simulation study. The model is used to study a spatial time series of pollution and household expenditure data in Indonesia. The analysis finds that the new model improves in terms of implied density, and better neutralizes residual correlations than the VARMA, using fewer parameters. The results suggest that growth in household expenditures precedes pollution reduction, particularly after the expenditures of poorer households increase; that increasing pollution is followed by reduced growth in expenditures, particularly reducing the growth of poorer households; and that there are significant spillovers from bottom-up growth in expenditures. The paper does not find evidence for top-down growth spillovers. Feedback between the identified mechanisms may contribute to pollution-poverty traps, and the results imply that pollution damages are economically significant.
    Keywords: Global Environment,Inequality,Brown Issues and Health,Air Quality&Clean Air,Pollution Management&Control,Health Service Management and Delivery
    Date: 2019–02–25
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:8757&r=all
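The data-driven network idea, building the spatial weights matrix from similarities in the data rather than from geography alone, can be sketched as follows. The Gaussian kernel and row-normalization used here are common conventions, not necessarily the paper's exact specification:

```python
import numpy as np

def kernel_weights(X, bandwidth=1.0):
    """Row-normalized spatial weights from pairwise similarity of unit features X (n_units x k)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared Euclidean distances
    W = np.exp(-d2 / (2 * bandwidth ** 2))               # Gaussian kernel similarity
    np.fill_diagonal(W, 0.0)                             # no self-links
    return W / W.sum(axis=1, keepdims=True)              # each row sums to one

X = np.random.default_rng(1).normal(size=(5, 3))  # 5 units, 3 features each
W = kernel_weights(X)
```

Row-normalization makes each spatial lag a weighted average of the other units, the usual convention in spatial autoregressions.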
  4. By: Firmin Doko Tchatoka (School of Economics, University of Adelaide); Qazi Haque (Business School, The University of Western Australia)
    Abstract: The asymptotic distributions of recursive out-of-sample forecast accuracy test statistics depend on stochastic integrals of Brownian motion when the models under comparison are nested. This often complicates their implementation in practice because the computation of their asymptotic critical values is costly. Hansen and Timmermann (2015, Econometrica) propose a Wald approximation of the commonly used recursive F-statistic and provide a simple characterization of the exact density of its asymptotic distribution. However, this characterization holds only when the larger model has one extra predictor or the forecast errors are homoscedastic. No such closed-form characterization is readily available when the nesting involves more than one predictor and heteroskedasticity is present. We first show that both the recursive F-test and its Wald approximation have poor finite-sample properties, especially when the forecast horizon is greater than one. We then propose a hybrid bootstrap method for both statistics, consisting of a moving block bootstrap (which is nonparametric) and a residual-based bootstrap, and establish its validity. Simulations show that our hybrid bootstrap has good finite-sample performance, even in multi-step-ahead forecasts with heteroscedastic or autocorrelated errors and more than one predictor. The bootstrap method is illustrated on forecasting core inflation and GDP growth.
    Keywords: Out-of-sample forecasts; HAC estimator; Moving block bootstrap; Bootstrap consistency
    JEL: C12 C15 C32
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:adl:wpaper:2020-03&r=all
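The nonparametric half of the proposed hybrid, the moving block bootstrap, resamples overlapping blocks of the series so that short-range serial dependence is preserved within each block. A generic sketch (the block length is an illustrative choice, not the paper's):

```python
import numpy as np

def moving_block_bootstrap(x, block_len=5, seed=0):
    """Resample a series by concatenating randomly chosen overlapping blocks."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)  # block start points
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]  # trim to the original length

x = np.arange(100, dtype=float)
xb = moving_block_bootstrap(x)
```

Repeating this resampling many times and recomputing the test statistic on each pseudo-series yields bootstrap critical values.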
  5. By: Michael W. McCracken; Serena Ng (Board of Governors of the Federal Reserve System (U.S.); University of Michigan; Columbia University)
    Abstract: In this paper we present and describe a large quarterly-frequency macroeconomic database. The data provided are closely modeled on those used in Stock and Watson (2012a). As in our previous work on FRED-MD, our goal is simply to provide a publicly available source of macroeconomic “big data” that is updated in real time using the FRED database. We show that factors extracted from this data set exhibit similar behavior to those extracted from the original Stock and Watson data set. The dominant factors are shown to be insensitive to outliers, but outliers do affect the relative influence of the series as indicated by leverage scores. We then investigate the role unit root tests play in the choice of transformation codes, with an emphasis on identifying instances in which the unit root-based codes differ from those already used in the literature. Finally, we show that factors extracted from our data set are useful for forecasting a range of macroeconomic series and that the choice of transformation codes can contribute substantially to the accuracy of these forecasts.
    Keywords: big data; factors; forecasting
    JEL: C30 C33 C82 C8
    Date: 2020–03–11
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:87608&r=all
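Factor extraction from a FRED-QD-style balanced panel is typically done by principal components on the standardized data. A minimal sketch via the SVD (the simulated panel below is illustrative, not FRED-QD itself):

```python
import numpy as np

def pc_factors(X, n_factors=2):
    """Principal-component factor estimates from a T x N panel."""
    Z = (X - X.mean(0)) / X.std(0)          # standardize each series
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    F = U[:, :n_factors] * S[:n_factors]    # T x r estimated factors
    L = Vt[:n_factors].T                    # N x r estimated loadings
    return F, L

# toy panel: 30 series driven by one common factor plus idiosyncratic noise
rng = np.random.default_rng(0)
f = rng.standard_normal((200, 1))
X = f @ rng.standard_normal((1, 30)) + 0.1 * rng.standard_normal((200, 30))
F, L = pc_factors(X, n_factors=1)
```

With a strong common component, the first principal component recovers the true factor up to sign and scale.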
  6. By: Jarek Duda
    Abstract: While standard estimation assumes that all datapoints come from a probability distribution with the same fixed parameters $\theta$, we focus on maximum likelihood (ML) adaptive estimation for nonstationary time series: separately estimating parameters $\theta_T$ for each time $T$ based on the earlier values $(x_t)_{t<T}$.
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2003.02149&r=all
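One simple instance of such a moving estimator is to weight earlier observations with exponentially decaying weights and update the ML estimates recursively. The sketch below does this for the Gaussian special case of the exponential power family (general shape exponents require numerical optimization); the decay rate eta is an illustrative choice:

```python
import numpy as np

def moving_ml_gaussian(x, eta=0.97):
    """Exponentially weighted moving ML estimates of mean and scale.
    The estimate reported at time T uses only the earlier values x_t, t < T."""
    mu, var = 0.0, 1.0
    mus, sigmas = [], []
    for xt in x:
        mus.append(mu)
        sigmas.append(np.sqrt(var))
        mu = eta * mu + (1 - eta) * xt               # recursive weighted-mean update
        var = eta * var + (1 - eta) * (xt - mu) ** 2 # recursive weighted-variance update
    return np.array(mus), np.array(sigmas)

x = np.random.default_rng(0).normal(3.0, 1.0, size=2000)
mus, sigmas = moving_ml_gaussian(x)
```

For a stationary series the moving estimates settle near the true parameters; for a nonstationary one they track the drifting parameters with a lag governed by eta.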
  7. By: Manav Kaushik; A K Giri
    Abstract: In today's global economy, accuracy in predicting macroeconomic parameters such as the foreign exchange rate, or at least estimating the trend correctly, is of key importance for any future investment. In recent times, the use of computational intelligence-based techniques for forecasting macroeconomic variables has proven highly successful. This paper develops a multivariate time series approach to forecast the exchange rate (USD/INR) while comparing in parallel the performance of three multivariate prediction modelling techniques: Vector Auto Regression (a traditional econometric technique), Support Vector Machine (a contemporary machine learning technique), and Recurrent Neural Networks (a contemporary deep learning technique). We use monthly historical data on several macroeconomic variables from April 1994 to December 2018 for the USA and India to predict the USD/INR foreign exchange rate. The results show that the contemporary SVM and RNN (Long Short-Term Memory) techniques outperform the widely used traditional Vector Auto Regression method. The RNN model with Long Short-Term Memory (LSTM) provides the maximum accuracy (97.83%), followed by the SVM model (97.17%) and the VAR model (96.31%). Finally, we present a brief analysis of the correlation and interdependencies of the variables used for forecasting.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.10247&r=all
  8. By: Savi Virolainen
    Abstract: We introduce a new mixture autoregressive model which combines Gaussian and Student's $t$ mixture components. The model has very attractive properties analogous to those of the Gaussian and Student's $t$ mixture autoregressive models, but it is more flexible, as it makes it possible to model series consisting of both conditionally homoscedastic Gaussian regimes and conditionally heteroscedastic Student's $t$ regimes. The usefulness of our model is demonstrated in an empirical application to the monthly U.S. interest rate spread between the 3-month Treasury bill rate and the effective federal funds rate.
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2003.05221&r=all
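The two-regime idea can be illustrated by simulating an AR(1) whose shock each period is drawn from either a Gaussian or a Student's $t$ component. Note the simplification flagged in the comment: in the actual model the mixing weights depend on past observations, whereas this sketch uses a constant weight, and all parameter values are illustrative:

```python
import numpy as np

def simulate_mixture_ar(n=500, p_gauss=0.6, seed=0):
    """Simulate an AR(1) with shocks drawn from a Gaussian or a Student's t regime.
    Constant mixing weight p_gauss is a simplification: in the mixture AR model
    of the abstract the mixing weights depend on past observations."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        if rng.random() < p_gauss:
            eps = rng.standard_normal()   # conditionally homoscedastic Gaussian regime
        else:
            eps = rng.standard_t(df=4)    # fat-tailed Student's t regime
        y[t] = 0.7 * y[t - 1] + eps       # common AR(1) dynamics
    return y

y = simulate_mixture_ar()
```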
  9. By: Michael D. Cai; Marco Del Negro; Edward P. Herbst; Ethan Matlin; Reca Sarfati; Frank Schorfheide
    Abstract: This paper illustrates the usefulness of sequential Monte Carlo (SMC) methods in approximating DSGE model posterior distributions. We show how the tempering schedule can be chosen adaptively, document the accuracy and runtime benefits of generalized data tempering for “online” estimation (that is, re-estimating a model as new data become available), and provide examples of multimodal posteriors that are well captured by SMC methods. We then use the online estimation of the DSGE model to compute pseudo-out-of-sample density forecasts and study the sensitivity of the predictive performance to changes in the prior distribution. We find that making priors less informative (compared to the benchmark priors used in the literature) by increasing the prior variance does not lead to a deterioration of forecast accuracy.
    JEL: C11 C32 C53 E32 E37 E52
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26826&r=all
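An adaptive tempering schedule of the kind mentioned in the abstract is commonly chosen so that each incremental reweighting keeps the effective sample size (ESS) of the particles near a target. A minimal sketch of that rule (the target fraction and the bisection search are illustrative choices, not necessarily the authors' exact algorithm):

```python
import numpy as np

def next_temperature(loglik, phi_old, target_ess_frac=0.95):
    """Pick the next tempering exponent phi so that the ESS of the incremental
    weights w_i = exp((phi - phi_old) * loglik_i) stays near the target."""
    def ess_frac(phi):
        logw = (phi - phi_old) * loglik
        w = np.exp(logw - logw.max())          # stabilized incremental weights
        return w.sum() ** 2 / (len(w) * (w ** 2).sum())

    if ess_frac(1.0) >= target_ess_frac:
        return 1.0                             # can jump straight to the posterior
    lo, hi = phi_old, 1.0
    for _ in range(50):                        # bisect on the ESS condition
        mid = 0.5 * (lo + hi)
        if ess_frac(mid) >= target_ess_frac:
            lo = mid
        else:
            hi = mid
    return lo

phi = next_temperature(np.random.default_rng(0).normal(size=1000), 0.0)
```

Larger spread in the particle log-likelihoods forces smaller tempering steps, which is what makes the schedule adaptive.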
  10. By: Sven Otto; J\"org Breitung
    Abstract: It is well known that the conventional CUSUM test suffers from low power and large detection delay. We therefore propose two alternative detector statistics. The backward CUSUM detector sequentially cumulates the recursive residuals in reverse chronological order, whereas the stacked backward CUSUM detector considers a triangular array of backward cumulated residuals. While both the backward CUSUM detector and the stacked backward CUSUM detector are suitable for retrospective testing, only the stacked backward CUSUM detector can be monitored on-line. The limiting distributions of the maximum statistics under suitable sequences of alternatives are derived for retrospective testing and fixed endpoint monitoring. In the retrospective testing context, the local power of the tests is shown to be substantially higher than that for the conventional CUSUM test if a single break occurs after one third of the sample size. When applied to monitoring schemes, the detection delay of the stacked backward CUSUM is shown to be much shorter than that of the conventional monitoring CUSUM procedure. Moreover, an infinite horizon monitoring procedure and critical values are presented.
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2003.02682&r=all
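The core construction, cumulating recursive residuals in reverse chronological order, can be sketched for the simplest case of a location (mean-only) model. This is an illustration of the idea only; the paper's detectors are defined for general regression models and use their own standardization:

```python
import numpy as np

def backward_cusum(x):
    """Backward CUSUM of recursive residuals for a location model:
    residuals are cumulated in reverse chronological order."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = np.empty(n - 1)
    for t in range(1, n):
        # recursive residual: prediction error vs. the mean of all prior values
        w[t - 1] = (x[t] - x[:t].mean()) * np.sqrt(t / (t + 1))
    sigma = w.std(ddof=1)
    # cumulate from the end of the sample backwards, then standardize
    return np.cumsum(w[::-1])[::-1] / (sigma * np.sqrt(n))

# series with a mean shift halfway through
x = np.concatenate([np.zeros(50), np.full(50, 2.0)])
x += np.random.default_rng(0).normal(size=100)
stat = backward_cusum(x)
```

Because post-break residuals sit at the end of the sample, cumulating backwards picks the break up immediately, which is the intuition behind the shorter detection delay.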
  11. By: Yoshimasa Uematsu; Takashi Yamagata
    Abstract: This paper proposes a novel estimation method for weak factor models, a slightly stronger version of the approximate factor models of Chamberlain and Rothschild (1983), with large cross-sectional and time-series dimensions ($N$ and $T$, respectively). It assumes that the $k$th largest eigenvalue of the data covariance matrix grows proportionally to $N^{a_k}$ with unknown exponents $0 < a_k \leq 1$.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:dpr:wpaper:1053r&r=all
  12. By: Florian Huber; Gary Koop; Michael Pfarrhofer
    Abstract: Researchers increasingly wish to estimate time-varying parameter (TVP) regressions which involve a large number of explanatory variables. Including prior information to mitigate over-parameterization concerns has led many to use Bayesian methods. However, Bayesian Markov Chain Monte Carlo (MCMC) methods can be very computationally demanding. In this paper, we develop computationally efficient Bayesian methods for estimating TVP models using an integrated rotated Gaussian approximation (IRGA). This exploits the fact that whereas constant coefficients on regressors are often important, most of the TVPs are often unimportant. Since Gaussian distributions are invariant to rotations, we can split the posterior into two parts: one involving the constant coefficients, the other involving the TVPs. Approximate methods are used on the latter and, conditional on these, the former are estimated with precision using MCMC methods. In empirical exercises involving artificial data and a large macroeconomic data set, we show the accuracy and computational benefits of IRGA methods.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.10274&r=all

This nep-ets issue is ©2020 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.