on Econometric Time Series |
By: | Francq, Christian; Zakoian, Jean-Michel |
Abstract: | We investigate the problem of testing finiteness of moments for a class of semi-parametric augmented GARCH models encompassing most commonly used specifications. The existence of positive-power moments of the strictly stationary solution is characterized through the Moment Generating Function (MGF) of the model, defined as the MGF of the logarithm of the random autoregressive coefficient in the volatility dynamics. We establish the asymptotic distribution of the empirical MGF, from which tests of moments are deduced. Alternative tests relying on the estimation of the Maximal Moment Exponent (MME) are studied. Power comparisons based on local alternatives and the Bahadur approach are proposed. We provide an illustration on real financial data, showing that semi-parametric estimation of the MME offers an interesting alternative to Hill's nonparametric estimator of the tail index. |
Keywords: | APARCH model; Bahadur slopes; Hill's estimator; Local asymptotic power; Maximal moment exponent; Moment generating function |
JEL: | C12 C58 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:110511&r= |
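The moment characterization above admits a simple numerical illustration. For a GARCH(1,1) with Gaussian innovations, the random autoregressive coefficient is α z² + β, so the MGF of its logarithm at s is A(s) = E[(α z² + β)^s], and (roughly) the moment E|ε_t|^{2s} is finite exactly when A(s) < 1. The sketch below uses invented parameter values and simulated innovations, estimating the empirical MGF and locating the maximal moment exponent by bisection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GARCH(1,1) parameters (illustration only)
alpha, beta = 0.10, 0.85
z = rng.standard_normal(100_000)  # Gaussian innovations

def emp_mgf(s):
    # Empirical MGF of log(alpha*z^2 + beta) at s:
    # A(s) = E[(alpha*z^2 + beta)^s]; E|eps_t|^{2s} is finite iff A(s) < 1
    return np.mean((alpha * z**2 + beta) ** s)

# Maximal moment exponent: the s solving A(s) = 1, found by bisection
lo, hi = 1.0, 50.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if emp_mgf(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(f"A(1) = {emp_mgf(1.0):.4f}, estimated MME ~ {lo:.2f}")
```

Here A(1) = α + β ≈ 0.95 < 1, so the second moment exists, while moments above twice the estimated MME do not.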
By: | Yuefeng Han; Cun-Hui Zhang; Rong Chen |
Abstract: | Observations in various applications are frequently represented as a time series of multidimensional arrays, called tensor time series, preserving the inherent multidimensional structure. In this paper, we present a factor model approach, in a form similar to tensor CP decomposition, to the analysis of high-dimensional dynamic tensor time series. Because the loading vectors are uniquely defined but not necessarily orthogonal, the model differs significantly from existing tensor factor models based on Tucker-type tensor decomposition. The model structure allows for a set of uncorrelated one-dimensional latent dynamic factor processes, making it much more convenient to study the underlying dynamics of the time series. A new high-order projection estimator is proposed for such a factor model, utilizing the special structure and the idea of the higher-order orthogonal iteration procedures commonly used in Tucker-type tensor factor models and general tensor CP decomposition procedures. Theoretical investigation provides statistical error bounds for the proposed methods, which show the significant advantage of utilizing the special model structure. A simulation study is conducted to further demonstrate the finite-sample properties of the estimators. A real data application is used to illustrate the model and its interpretations. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.15517&r= |
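A stylized sketch of the CP-type structure for a matrix-valued (order-2 tensor) series: uncorrelated AR(1) latent factors and unit-norm, non-orthogonal loadings. The loading-space recovery below uses a plain lag-1 autocovariance eigendecomposition as a simplified stand-in for the paper's high-order projection estimator; all dimensions and parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d1, d2, r = 500, 8, 6, 2

# Uncorrelated one-dimensional AR(1) latent factor processes
f = np.zeros((T, r))
for t in range(1, T):
    f[t] = 0.7 * f[t - 1] + rng.standard_normal(r)

# CP-type loading vectors: unit norm, not required to be orthogonal
A = rng.standard_normal((d1, r)); A /= np.linalg.norm(A, axis=0)
B = rng.standard_normal((d2, r)); B /= np.linalg.norm(B, axis=0)

# X_t = sum_r f_{t,r} a_r b_r' + noise  (matrix-valued time series)
X = np.einsum('tr,ir,jr->tij', f, A, B) + 0.1 * rng.standard_normal((T, d1, d2))

# Recover the mode-1 loading space from the lag-1 autocovariance:
# E[X_t X_{t+1}'] = sum_r gamma_r(1) a_r a_r' when factors are uncorrelated
Omega = np.einsum('tij,tkj->ik', X[:-1], X[1:]) / (T - 1)
U, _, _ = np.linalg.svd(Omega)
P = U[:, :r] @ U[:, :r].T      # projector onto the estimated loading space
err = np.linalg.norm(A - P @ A)
print(f"loading-space recovery error: {err:.3f}")
```

The lag-based moment works precisely because the one-dimensional factors are uncorrelated across components, which is the model feature the abstract highlights.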
By: | Leonardo Nogueira Ferreira |
Abstract: | This paper explores the complementarity between traditional econometrics and machine learning and applies the resulting model – the VAR-teXt – to central bank communication. The VAR-teXt is a vector autoregressive (VAR) model augmented with information retrieved from text, turned into quantitative data via a Latent Dirichlet Allocation (LDA) model, whereby the number of topics (or textual factors) is chosen based on their predictive performance. A Markov chain Monte Carlo (MCMC) sampling algorithm for estimating the VAR-teXt, which accounts for the fact that the textual factors are themselves estimates, is also provided. The approach is then extended to dynamic factor models (DFMs), generating the DFM-teXt. Results show that textual factors based on Federal Open Market Committee (FOMC) statements are indeed useful for forecasting. |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:bcb:wpaper:559&r= |
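The augmented-VAR idea can be sketched without the LDA and MCMC machinery: treat the topic shares as given regressors and estimate the augmented VAR by least squares. Everything below (series, topic shares, coefficient values) is simulated for illustration; one topic share is dropped because shares sum to one and would otherwise be collinear with the intercept:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_macro, n_topics = 200, 2, 3

# Hypothetical inputs: two macro series and per-period topic shares
topics = rng.dirichlet(np.full(n_topics, 5.0), size=T)
y = np.zeros((T, n_macro))
for t in range(1, T):
    # invented DGP: own lag plus an effect of the first topic share
    y[t] = 0.5 * y[t - 1] + 0.3 * topics[t - 1, 0] \
        + 0.1 * rng.standard_normal(n_macro)

# Augmented VAR(1): regress y_t on [1, y_{t-1}, topic shares_{t-1}]
Z = np.hstack([np.ones((T - 1, 1)), y[:-1], topics[:-1, :-1]])
coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
print(coef.round(2))  # rows: intercept, two own lags, two topic shares
```

The paper's MCMC scheme goes further by propagating the estimation uncertainty in the topic shares, which this plug-in regression ignores.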
By: | Jan Ditzen; Yiannis Karavias; Joakim Westerlund |
Abstract: | Identifying structural change is a crucial step in the analysis of time series and panel data. The longer the time span, the higher the likelihood that the model parameters have changed as a result of major disruptive events, such as the 2007-2008 financial crisis and the 2020 COVID-19 outbreak. Detecting the existence of breaks and dating them is therefore necessary not only for estimation purposes but also for understanding the drivers of change and their effect on relationships. This article introduces a new community-contributed command called xtbreak, which provides researchers with a complete toolbox for analysing multiple structural breaks in time series and panel data. xtbreak can detect the existence of breaks, determine their number and location, and provide break date confidence intervals. The new command is used to explore changes in the relationship between COVID-19 cases and deaths in the US, using both country-level time series data and state-level panel data. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.14550&r= |
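xtbreak itself is a Stata command; a language-agnostic sketch of the underlying idea, estimating a single break date by minimizing the total sum of squared residuals over candidate dates with trimming at the sample edges, looks like this (simulated data, with an invented break in the mean at t = 120):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
# Series with one break in the mean at t = 120 (simulated)
y = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.5, 1.0, 80)])

def ssr(seg):
    # sum of squared residuals around the segment mean
    return np.sum((seg - seg.mean()) ** 2)

# Pick the candidate date that minimizes the total SSR of the two segments
trim = 15  # trimming keeps each segment away from the sample edges
cands = range(trim, T - trim)
bhat = min(cands, key=lambda k: ssr(y[:k]) + ssr(y[k:]))
print("estimated break date:", bhat)
```

Detecting multiple breaks, testing for their existence, and building confidence intervals for the dates, as xtbreak does, all extend this same SSR-minimization principle.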
By: | Máximo Camacho (University of Murcia); María Dolores Gadea (University of Zaragoza); Ana Gómez Loscos (Banco de España) |
Abstract: | This paper provides an accurate chronology of the Spanish reference business cycle by adapting the multiple change-point model proposed by Camacho, Gadea and Gómez Loscos (2021). In that approach, each individual pair of specific peaks and troughs from a set of indicators is viewed as a realization of a mixture of an unspecified number of separate bivariate Gaussian distributions, whose different means are the reference turning points and whose transitions are governed by a restricted Markov chain. In the empirical application, seven recessions in the period from 1970.2 to 2020.2 are identified, which are in high concordance with the timing of the turning point dates established by the Spanish Business Cycle Dating Committee (SBCDC). |
Keywords: | business cycles, turning points, finite mixture models, Spain |
JEL: | E32 C22 E27 |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:bde:wpaper:2139&r= |
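A minimal sketch of the mixture idea: simulated (peak, trough) pairs from two hypothetical recessions are fitted with a two-component bivariate Gaussian mixture via EM, using spherical covariances for brevity; the restricted Markov chain governing transitions in the paper's model is omitted here:

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated (peak, trough) pairs around two hypothetical recessions
X = np.vstack([rng.normal([1975.0, 1976.0], 0.3, (40, 2)),
               rng.normal([1992.0, 1993.5], 0.3, (60, 2))])

# Initialize component means at the extremes of the first coordinate
mu = X[[np.argmin(X[:, 0]), np.argmax(X[:, 0])]]
pi, s2 = np.full(2, 0.5), np.full(2, 1.0)
for _ in range(50):
    # E-step: responsibilities under spherical bivariate Gaussians
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    w = pi * np.exp(-0.5 * d2 / s2) / s2
    w /= w.sum(1, keepdims=True)
    # M-step: mixture weights, means, per-component variances
    pi = w.mean(0)
    mu = (w.T @ X) / w.sum(0)[:, None]
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    s2 = (w * d2).sum(0) / (2 * w.sum(0))

# The fitted component means play the role of the reference turning points
print(mu[np.argsort(mu[:, 0])])
```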
By: | Graziano Moramarco |
Abstract: | This paper proposes an approach for enhancing density forecasts of non-normal macroeconomic variables using Bayesian Markov-switching models. Alternative views about economic regimes are combined to produce flexible forecasts, which are optimized with respect to standard objective functions of density forecasting. The optimization procedure explores both forecast combinations and Bayesian model averaging. In an application to U.S. GDP growth, the approach is shown to achieve good accuracy in terms of average predictive densities and to produce well-calibrated forecast distributions. The proposed framework can be used to evaluate the contribution of economists' views to density forecast performance. In the empirical application, we consider views derived from the Fed macroeconomic scenarios used for bank stress tests. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.13761&r= |
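The weight-optimization step can be illustrated on its own. Given two fixed "views" — an expansion density and a recession density, both invented here — choose the combination weight that maximizes the average log predictive density over simulated outcomes:

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulated "GDP growth" outcomes from a two-regime mixture (invented numbers)
n = 2000
expansion = rng.random(n) < 0.8
y = np.where(expansion, rng.normal(2.5, 1.0, n), rng.normal(-2.0, 2.0, n))

def npdf(x, m, s):
    # normal density, written out to avoid external dependencies
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Average log predictive density of the weighted combination of two views
def logscore(w):
    return np.mean(np.log(w * npdf(y, 2.5, 1.0)
                          + (1.0 - w) * npdf(y, -2.0, 2.0)))

grid = np.linspace(0.01, 0.99, 99)
w_opt = grid[np.argmax([logscore(w) for w in grid])]
print(f"optimal expansion weight: {w_opt:.2f}")
```

Since the data are drawn with an 80% expansion probability, the log-score-optimal weight lands near 0.8, the sense in which the combined density is well calibrated.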
By: | Lingxiao Huang (Huawei TCS Lab); K. Sudhir (Cowles Foundation and Yale School of Management); Nisheeth Vishnoi (Cowles Foundation and Yale Department of Computer Science) |
Abstract: | We study the problem of constructing coresets for clustering problems with time series data. This problem has gained importance across many fields including biology, medicine, and economics due to the proliferation of sensors for real-time measurement and the rapid drop in storage costs. In particular, we consider the setting where the time series data on N entities is generated from a Gaussian mixture model with autocorrelations over k clusters in R^d. Our main contribution is an algorithm to construct coresets for the maximum likelihood objective for this mixture model. Our algorithm is efficient, and, under a mild assumption on the covariance matrices of the Gaussians, the size of the coreset is independent of the number of entities N and the number of observations for each entity, and depends only polynomially on k, d and 1/ε, where ε is the error parameter. We empirically assess the performance of our coresets with synthetic data. |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2310&r= |
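A toy version of the coreset construction, using a crude sensitivity proxy (distance to the grand mean plus a uniform term) in place of the paper's construction, and a k-means-style cost in place of the full autocorrelated-mixture likelihood:

```python
import numpy as np

rng = np.random.default_rng(6)
N, d = 20_000, 3
# Two well-separated Gaussian clusters (a stand-in for the mixture model)
X = np.vstack([rng.normal(c, 1.0, (N // 2, d)) for c in (-3.0, 3.0)])

# Importance-sampling coreset with a crude sensitivity proxy
s = np.linalg.norm(X - X.mean(0), axis=1) + 1.0
p = s / s.sum()
m = 500                      # coreset size, independent of N
idx = rng.choice(N, size=m, p=p)
w = 1.0 / (m * p[idx])       # inverse-probability weights keep the cost unbiased

# Check: the weighted coreset cost approximates the full-data cost
C = np.array([[-3.0] * d, [3.0] * d])  # fixed candidate centers
def cost(points, weights):
    d2 = ((points[:, None, :] - C[None]) ** 2).sum(-1).min(axis=1)
    return float((weights * d2).sum())

full_cost = cost(X, np.ones(N))
core_cost = cost(X[idx], w)
print(f"relative error: {abs(core_cost - full_cost) / full_cost:.3f}")
```

The point of the construction is that m does not grow with N: the 500-point weighted sample stands in for all 20,000 points when evaluating the clustering objective.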
By: | Pratha Khandelwal; Philip Nadler; Rossella Arcucci; William Knottenbelt; Yi-Ke Guo |
Abstract: | The nature of available economic data has changed fundamentally in the last decade due to the economy's digitisation. With the prevalence of often black-box data-driven machine learning methods, there is a necessity to develop interpretable machine learning methods that can conduct econometric inference, helping policymakers leverage the new nature of economic data. We therefore present a novel Variational Bayesian Inference approach to incorporate a time-varying parameter autoregressive model which is scalable for big data. Our model is applied to a large blockchain dataset containing prices and transactions of individual actors, analysing transactional flows and price movements at a very granular level. The model is extendable to any dataset which can be modelled as a dynamical system. We further improve the simple state-space modelling by introducing non-linearities in the forward model with the help of machine learning architectures. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.14346&r= |
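The state-space backbone can be sketched with a time-varying-parameter AR(1). Because this simplified model is linear and Gaussian, a Kalman filter delivers the exact posterior that a variational approximation would target at scale; all parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 400
# Simulate a TVP-AR(1): y_t = beta_t * y_{t-1} + e_t, beta_t a random walk
# (clipped to keep the state in the stationary region)
beta = np.clip(np.cumsum(rng.normal(0.0, 0.02, T)) + 0.5, -0.9, 0.9)
y = np.zeros(T)
for t in range(1, T):
    y[t] = beta[t] * y[t - 1] + rng.normal(0.0, 0.3)

# Kalman filter for the linear-Gaussian state space
m, P = 0.0, 1.0              # prior mean and variance of beta_0
q, r = 0.02**2, 0.3**2       # state and observation noise variances
est = np.zeros(T)
for t in range(1, T):
    P += q                   # predict the state variance
    H = y[t - 1]             # time-varying "design" value
    S = H * P * H + r        # innovation variance
    K = P * H / S            # Kalman gain
    m += K * (y[t] - H * m)  # update the state mean
    P *= 1.0 - K * H         # update the state variance
    est[t] = m
```

Variational inference replaces these exact recursions with an optimized approximate posterior, which is what makes the approach scalable once non-linearities and high-dimensional transaction data enter the forward model.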