
on Econometric Time Series 
Issue of 2021‒07‒19
Fourteen papers chosen by Jaqueson K. Galimberti, Auckland University of Technology 
By:  Paolo Gorgi (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam); Julia Schaumburg (Vrije Universiteit Amsterdam) 
Abstract:  We introduce a new and general methodology for analyzing vector autoregressive models with time-varying coefficient matrices and conditionally heteroskedastic disturbances. Our proposed method is able to jointly treat a dynamic latent factor model for the autoregressive coefficient matrices and a multivariate dynamic volatility model for the variance matrix of the disturbance vector. Since the likelihood function is available in closed form through a simple extension of the Kalman filter equations, all unknown parameters in this flexible model can be easily estimated by the method of maximum likelihood. The proposed approach is appealing since it is simple to implement and computationally fast. Furthermore, it presents an alternative to the Bayesian methods that are regularly employed in the empirical literature. A simulation study shows the reliability and robustness of the method against potential misspecifications of the volatility in the disturbance vector. We further provide an empirical illustration in which we analyze possibly time-varying relationships between U.S. industrial production, inflation, and the bond spread. We empirically identify a time-varying linkage between economic and financial variables which is effectively described by a common dynamic factor. The impulse response analysis points towards substantial differences in the effects of financial shocks on output and inflation during crisis and non-crisis periods. 
Keywords:  time-varying parameters, vector autoregressive model, dynamic factor model, Kalman filter, generalized autoregressive conditional heteroskedasticity, orthogonal impulse response function 
JEL:  C32 E31 
Date:  2021–06–28 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20210056&r= 
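The closed-form likelihood through extended Kalman filter recursions can be illustrated in miniature. The sketch below is not the authors' implementation: it computes the Gaussian log-likelihood of a univariate AR(1) whose coefficient follows a random walk, with hypothetical state- and observation-noise variances q and r.

```python
import numpy as np

def kalman_loglik_tvp_ar1(y, q, r):
    """Gaussian log-likelihood of y_t = phi_t * y_{t-1} + e_t,
    phi_t = phi_{t-1} + eta_t, with Var(e_t) = r and Var(eta_t) = q."""
    phi, p = 0.0, 1.0            # initial state mean and variance
    ll = 0.0
    for t in range(1, len(y)):
        p = p + q                # prediction step for the random-walk state
        x = y[t - 1]             # time-varying "observation matrix"
        f = x * p * x + r        # one-step prediction-error variance
        v = y[t] - x * phi       # one-step prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v ** 2 / f)
        k = p * x / f            # Kalman gain
        phi += k * v             # filtered state mean
        p -= k * x * p           # filtered state variance
    return ll

rng = np.random.default_rng(0)
y = np.zeros(500)                # constant-coefficient AR(1) as a special case
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
ll = kalman_loglik_tvp_ar1(y, q=0.001, r=1.0)
print(ll)
```

Maximizing this likelihood over (q, r) is the univariate analogue of the maximum likelihood step described in the abstract.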
By:  Francisco Blasques (Vrije Universiteit Amsterdam); Enzo D'Innocenzo (University of Bologna); Siem Jan Koopman (Vrije Universiteit Amsterdam) 
Abstract:  We propose a multiplicative dynamic factor structure for the conditional modelling of the variances of an N-dimensional vector of financial returns. We identify common and idiosyncratic conditional volatility factors. The econometric framework is based on an observation-driven time series model that is simple and parsimonious. The common factor is modeled by a normal density and is robust to fat-tailed returns as it averages information over the cross-section of the observed N-dimensional vector of returns. The idiosyncratic factors are designed to capture the erratic shocks in returns and therefore rely on fat-tailed densities. Our model is potentially high-dimensional, is parsimonious, and does not necessarily suffer from the curse of dimensionality. The relatively simple structure of the model leads to simple computations for the estimation of parameters and the signal extraction of factors. We derive the stochastic properties of our proposed dynamic factor model, including bounded moments, stationarity, ergodicity, and filter invertibility. We further establish consistency and asymptotic normality of the maximum likelihood estimator. The finite sample properties of the estimator and the reliability of our method in tracking the common conditional volatility factor are investigated by means of a Monte Carlo study. Finally, we illustrate our approach with two empirical studies. The first study is for a panel of financial returns from ten stocks of the S&P 100. The second study is for the panel of returns from all S&P 100 stocks. 
Keywords:  Financial econometrics, observation-driven models, conditional volatility, common factor 
JEL:  C32 C52 C58 
Date:  2021–06–28 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20210057&r= 
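The cross-sectional averaging idea behind the common volatility factor can be shown on a toy simulation: averaging squared fat-tailed returns across a panel yields a proxy that tracks the common log-volatility. All parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
T, N = 2000, 50
log_cv = np.zeros(T)                      # common log-volatility factor
for t in range(1, T):
    log_cv[t] = 0.98 * log_cv[t - 1] + 0.1 * rng.standard_normal()
# fat-tailed (Student-t) returns sharing one conditional volatility factor
r = np.exp(log_cv / 2)[:, None] * rng.standard_t(df=5, size=(T, N))

# cross-sectional averaging of squared returns tames the fat tails
proxy = np.log((r ** 2).mean(axis=1))
corr = np.corrcoef(proxy, log_cv)[0, 1]
print(f"correlation with the true factor: {corr:.2f}")
```

A single fat-tailed series gives a much noisier proxy; the averaging across the N-dimensional cross-section is what makes the common factor robust.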
By:  Fabrizio Cipollini; Giampiero M. Gallo 
Abstract:  Several observable phenomena represent market activity: volumes, number of trades, durations between trades or quotes, and volatility (however measured) all share the feature of being positive-valued time series. When these are modeled, the persistence in their behavior and their reaction to new information suggest adopting an autoregressive-type framework. The Multiplicative Error Model (MEM) is born of an extension of the popular GARCH approach for modeling and forecasting the conditional volatility of asset returns. It is obtained by multiplicatively combining the conditional expectation of a process (deterministically dependent upon an information set at a previous time period) with a random disturbance representing unpredictable news: MEMs have proved to parsimoniously achieve their task of producing well-performing forecasts. In this paper we discuss various aspects of model specification and inference, both for the univariate and the multivariate case. The applications are illustrative examples of how the presence of a slow-moving low-frequency component can improve the properties of the estimated models. 
Date:  2021–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2107.05923&r= 
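The multiplicative structure described above (a conditional mean multiplied by a unit-mean positive disturbance) can be sketched as a simple MEM(1,1) simulation; the gamma innovation and the parameter values are illustrative choices, not those of the paper.

```python
import numpy as np

def simulate_mem(n, omega=0.1, alpha=0.2, beta=0.7, shape=4.0, seed=0):
    """Simulate x_t = mu_t * eps_t with unit-mean gamma innovations and
    conditional mean mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}."""
    rng = np.random.default_rng(seed)
    mu = omega / (1 - alpha - beta)          # unconditional mean as start value
    x = np.empty(n)
    for t in range(n):
        eps = rng.gamma(shape, 1.0 / shape)  # positive disturbance, E[eps] = 1
        x[t] = mu * eps
        mu = omega + alpha * x[t] + beta * mu  # next conditional expectation
    return x

x = simulate_mem(10_000)
print(x.mean())   # should be near omega / (1 - alpha - beta) = 1.0
```

The recursion mirrors a GARCH(1,1) applied to a positive-valued series, which is exactly how the MEM extends the GARCH idea.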
By:  Casoli, Chiara; Lucchetti, Riccardo (Jack) 
Abstract:  In this article, we propose a cointegration-based Permanent-Transitory decomposition for nonstationary Dynamic Factor Models. Our methodology exploits the cointegration relations among the observable variables and assumes they are driven by a common and an idiosyncratic component. The common component is further split into a long-term nonstationary part and a short-term stationary one. A Monte Carlo experiment shows that taking the cointegration structure in the DFM into account leads to a much better reconstruction of the space spanned by the factors, relative to the more standard technique of applying a factor model to the differenced system. Finally, an application of our procedure to a set of different commodity prices allows us to analyse the co-movement among different markets. We find that commodity prices move together due to long-term common forces, and that the trend for most primary good prices is declining, whereas metals and energy prices exhibit an upward, or at least stable, pattern since the 2000s. 
Keywords:  Demand and Price Analysis 
Date:  2021–07–13 
URL:  http://d.repec.org/n?u=RePEc:ags:feemwp:312367&r= 
By:  Christian Gourieroux; Joann Jasiak 
Abstract:  We consider a class of semiparametric dynamic models with strong white noise errors. This class of processes includes the standard Vector Autoregressive (VAR) model, the nonfundamental structural VAR, the mixed causal-noncausal models, as well as nonlinear dynamic models such as the (multivariate) ARCH-M model. For estimation of processes in this class, we propose the Generalized Covariance (GCov) estimator, which is obtained by minimizing a residual-based multivariate portmanteau statistic, as an alternative to the Generalized Method of Moments. We derive the asymptotic properties of the GCov estimator and of the associated residual-based portmanteau statistic. Moreover, we show that the GCov estimators are semiparametrically efficient and the residual-based portmanteau statistics are asymptotically chi-square distributed. The finite sample performance of the GCov estimator is illustrated in a simulation study. The estimator is also applied to a dynamic model of cryptocurrency prices. 
Date:  2021–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2107.06979&r= 
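A minimal version of the residual-based multivariate portmanteau statistic that the GCov estimator minimizes can be computed directly; the Hosking-type form below is a standard choice, assumed here purely for illustration.

```python
import numpy as np

def portmanteau(resid, H=5):
    """Multivariate portmanteau statistic T * sum_h tr(C_h' C_0^{-1} C_h C_0^{-1}),
    a residual-based objective of the kind minimized by a GCov-style estimator.
    Asymptotically chi-square with n^2 * H d.o.f. under whiteness."""
    T, n = resid.shape
    u = resid - resid.mean(axis=0)
    C0inv = np.linalg.inv(u.T @ u / T)       # inverse contemporaneous covariance
    stat = 0.0
    for h in range(1, H + 1):
        Ch = u[h:].T @ u[:-h] / T            # lag-h autocovariance matrix
        stat += T * np.trace(Ch.T @ C0inv @ Ch @ C0inv)
    return stat

rng = np.random.default_rng(5)
white = rng.standard_normal((500, 2))        # strong white noise "residuals"
stat = portmanteau(white)
print(stat)   # should be of the order of n^2 * H = 20 under whiteness
```

Minimizing this statistic over model parameters, rather than merely testing it on residuals, is what turns it into an estimation criterion.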
By:  Emanuele Bacchiocchi (Institute for Fiscal Studies); Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London) 
Abstract:  In a landmark contribution to the structural vector autoregression (SVAR) literature, Rubio-Ramirez, Waggoner, and Zha (2010, 'Structural Vector Autoregressions: Theory of Identification and Algorithms for Inference,' Review of Economic Studies) show a necessary and sufficient condition for equality restrictions to globally identify the structural parameters of an SVAR. The simplest form of the necessary and sufficient condition shown in Theorem 7 of Rubio-Ramirez et al. (2010) checks the number of zero restrictions and the ranks of particular matrices without requiring knowledge of the true value of the structural or reduced-form parameters. However, this note shows by counterexample that this condition is not sufficient for global identification. Analytical investigation of the counterexample clarifies why their sufficiency claim breaks down. The problem with the rank condition is that it allows for the possibility that restrictions are redundant, in the sense that one or more restrictions may be implied by other restrictions, in which case the implied restriction contains no identifying information. We derive a modified necessary and sufficient condition for SVAR global identification and clarify how it can be assessed in practice. 
Date:  2021–02–17 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:03/21&r= 
By:  Emanuele Bacchiocchi (Institute for Fiscal Studies); Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London) 
Abstract:  This paper analyzes Structural Vector Autoregressions (SVARs) where identification of structural parameters holds locally but not globally. In this case there exists a set of isolated structural parameter points that are observationally equivalent under the imposed restrictions. Although the data do not inform us which observationally equivalent point should be selected, the common frequentist practice is to obtain one as a maximum likelihood estimate and perform impulse response analysis accordingly. For Bayesians, the lack of global identification translates into non-vanishing sensitivity of the posterior to the prior, and the multimodal likelihood gives rise to computational challenges, as posterior sampling algorithms can fail to explore all the modes. This paper overcomes these challenges by proposing novel estimation and inference procedures. We characterize a class of identifying restrictions that deliver local but non-global identification, and the resulting number of observationally equivalent parameter values. We propose algorithms to exhaustively compute all admissible structural parameters given the reduced-form parameters, and utilize them to sample from the multimodal posterior. In addition, viewing the set of observationally equivalent parameter points as the identified set, we develop Bayesian and frequentist procedures for inference on the corresponding set of impulse responses. An empirical example illustrates our proposal. 
Date:  2020–07–27 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:40/20&r= 
By:  Bodha Hannadige, Sium; Gao, Jiti; Silvapulle, Mervyn; Silvapulle, Param 
Abstract:  We develop a method for constructing prediction intervals for a nonstationary variable, such as GDP. The method uses a factor augmented regression (FAR) model. The predictors in the model include a small number of factors generated to extract most of the information in a panel of data on a large number of macroeconomic variables considered to be potential predictors. The novelty of this paper is that it provides a method and justification for using a mixture of stationary and nonstationary factors as predictors in the FAR model; we refer to this as the mixture-FAR method. This method is important because such a large panel of data, for example the FRED-MD, is likely to contain a mixture of stationary and nonstationary variables. In our simulation study, the proposed mixture-FAR method performed better than its competitor that requires all the predictors to be nonstationary; the mean squared error of prediction was at least 33% lower for the mixture-FAR. Using the data in FRED-QD for the US, we evaluated the aforementioned methods for forecasting the nonstationary variables GDP and industrial production, and observed that the mixture-FAR method performed better than its competitors. 
Keywords:  Gross domestic product; high dimensional data; industrial production; macroeconomic forecasting; panel data 
JEL:  C13 C3 C32 C33 
Date:  2021–01–30 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:108669&r= 
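The two-step FAR idea (extract factors from a large panel by principal components, then regress the target on them) can be sketched on simulated data; the dimensions, loadings and noise levels below are arbitrary assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, k = 200, 50, 2
F = rng.standard_normal((T, k))                     # latent factors
lam = rng.standard_normal((N, k))                   # factor loadings
X = F @ lam.T + 0.5 * rng.standard_normal((T, N))   # large panel of predictors
y = F @ np.array([1.0, -0.5]) + 0.3 * rng.standard_normal(T)  # target series

# step 1: estimate factors by principal components (SVD of the standardized panel)
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
F_hat = U[:, :k] * s[:k]                            # first k principal components

# step 2: OLS of the target on the estimated factors (the FAR regression)
W = np.column_stack([np.ones(T), F_hat])
beta = np.linalg.lstsq(W, y, rcond=None)[0]
r2 = 1 - (y - W @ beta).var() / y.var()
print(f"in-sample R^2 of the factor regression: {r2:.2f}")
```

The paper's contribution concerns the case where some columns of the panel are I(1) and some I(0); this sketch covers only the stationary baseline.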
By:  Matteo Barigozzi; Giuseppe Cavaliere; Lorenzo Trapani 
Abstract:  We study the issue of determining the rank of cointegration, R, in an N-variate time series y_t, allowing for the possible presence of heavy tails. Our methodology does not require any estimation of nuisance parameters such as the tail index; indeed, even knowledge of whether certain moments (such as the variance) exist is not required. Our estimator of the rank is based on a sequence of tests on the eigenvalues of the sample second moment matrix of y_t. We derive the rates of such eigenvalues, showing that these do depend on the tail index, but also that there exists a gap in rates between the first N - R and the remaining eigenvalues. The former, in particular, diverge at a rate which is faster than the latter by a factor T (where T denotes the sample size), irrespective of the tail index. We exploit this eigengap by constructing, for each eigenvalue, a test statistic which diverges to positive infinity or drifts to zero according to whether the relevant eigenvalue belongs to the set of the first N - R eigenvalues or not. We then construct a randomised statistic based on this, using it as part of a sequential testing procedure. The resulting estimator of R is consistent, in that it picks the true value R with probability 1 as the sample size passes to infinity. 
Keywords:  cointegration, heavy tails, randomized tests. 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:not:notgts:20/01&r= 
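The eigengap described above is easy to see on simulated data: with N = 4 series driven by N - R = 3 common random-walk trends, the first three eigenvalues of the sample second moment matrix dwarf the last one. The simulation design is illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, R = 1000, 4, 1                       # cointegration rank R = 1
trends = rng.standard_normal((T, N - R)).cumsum(axis=0)   # common I(1) trends
load = rng.standard_normal((N, N - R))                    # loadings on the trends
y = trends @ load.T + rng.standard_normal((T, N))         # observed N-variate series

M = y.T @ y / T                            # sample second moment matrix
eig = np.sort(np.linalg.eigvalsh(M))[::-1] # eigenvalues, largest first
print(eig)                                 # first N - R diverge at a faster rate
```

The paper turns this gap into a formal sequential test; here it is only visible as a sharp drop between the third and fourth eigenvalues.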
By:  Raffaella Giacomini (Institute for Fiscal Studies and cemmap and UCL); Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London); Alessio Volpicella (Institute for Fiscal Studies and Queen Mary University of London) 
Abstract:  Uncertainty about the choice of identifying assumptions is common in causal studies, but is often ignored in empirical practice. This paper considers uncertainty over models that impose different identifying assumptions, which, in general, leads to a mix of point- and set-identified models. We propose performing inference in the presence of such uncertainty by generalizing Bayesian model averaging. The method considers multiple posteriors for the set-identified models and combines them with a single posterior for models that are either point-identified or that impose non-dogmatic assumptions. The output is a set of posteriors (post-averaging ambiguous belief) that are mixtures of the single posterior and any element of the class of multiple posteriors, with weights equal to the posterior model probabilities. We suggest reporting the set of posterior means and the associated credible region in practice, and provide a simple algorithm to compute them. We establish that the prior model probabilities are updated when the models are "distinguishable" and/or they specify different priors for reduced-form parameters, and characterize the asymptotic behavior of the posterior model probabilities. The method provides a formal framework for conducting sensitivity analysis of empirical findings to the choice of identifying assumptions. In a standard monetary model, for example, we show that, in order to support a negative response of output to a contractionary monetary policy shock, one would need to attach a prior probability greater than 0.05 to the validity of the assumption that prices do not react contemporaneously to the shock. The method is general and allows for dogmatic and non-dogmatic identifying assumptions, multiple point-identified models, multiple set-identified models, and nested or non-nested models. 
Date:  2020–07–06 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:33/20&r= 
By:  Richard Davis; Serena Ng 
Abstract:  The paper provides three results for SVARs under the assumption that the primitive shocks are mutually independent. First, a framework is proposed to study the dynamic effects of disaster-type shocks with infinite variance. We show that the least squares estimates of the VAR are consistent but have nonstandard properties. Second, it is shown that the restrictions imposed on an SVAR can be validated by testing independence of the identified shocks. The test can be applied whether the data have fat or thin tails, and to over-identified as well as exactly identified models. Third, the disaster shock is identified as the component with the largest kurtosis, where the mutually independent components are estimated using an estimator that is valid even in the presence of an infinite-variance shock. Two applications are considered. In the first, the independence test is used to shed light on the conflicting evidence regarding the role of uncertainty in economic fluctuations. In the second, disaster shocks are shown to have a short-term economic impact arising mostly from feedback dynamics. 
Date:  2021–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2107.06663&r= 
By:  Gilles Zumbach 
Abstract:  For many financial applications, it is important to have reliable and tractable models for the behavior of assets and indexes, for example in risk evaluation. A successful approach is based on ARCH processes, which strike the right balance between statistical properties and ease of computation. This study focuses on quadratic ARCH processes and the theoretical conditions for a stable long-term behavior. In particular, the weights for the variance estimators should sum to 1, and the variance of the innovations should be 1. Using historical data, the realized empirical innovations can be computed and their statistical properties assessed. Using samples of 3 to 5 decades, the variance of the empirical innovations is always significantly above 1, for a sample of stock indexes, commodity indexes and FX rates. This departure points to a short-term instability, or to a fast adaptability due to changing conditions. Another theoretical condition on the innovations is to have a zero mean. This condition is also investigated empirically, with some time series showing significant departures from zero. 
Date:  2021–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2107.06758&r= 
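The realized empirical innovations described above can be computed from any one-step variance forecast. The sketch below uses a RiskMetrics-style EWMA (the smoothing constant 0.94 and the simulated volatility path are assumptions, not the paper's quadratic ARCH weights) and checks the mean-zero and unit-variance conditions.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 20_000
sigma = np.exp(0.1 * np.sin(np.linspace(0.0, 40.0, T)))  # slow volatility cycle
r = sigma * rng.standard_normal(T)                       # simulated returns

lam = 0.94                       # EWMA smoothing constant (an assumption)
var = r[:50].var()               # crude initialization of the variance forecast
eps = np.empty(T)
for t in range(T):
    eps[t] = r[t] / np.sqrt(var)             # realized empirical innovation
    var = lam * var + (1 - lam) * r[t] ** 2  # update the forecast for t + 1
print(eps.mean(), eps.var())     # theoretical conditions: mean 0, variance 1
```

On well-specified simulated data the two conditions hold approximately; the paper's point is that on decades of real data the sample variance of such innovations sits significantly above 1.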
By:  Isaiah Andrews (Institute for Fiscal Studies and Harvard University); Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London); Adam McCloskey (Institute for Fiscal Studies and Brown University) 
Abstract:  In an important class of econometric problems, researchers select a target parameter by maximizing the Euclidean norm of a data-dependent vector. Examples that can be cast into this framework include threshold regression models with estimated thresholds and structural break models with estimated break dates. Estimation and inference procedures that ignore the randomness of the target parameter can be severely biased and misleading when this randomness is non-negligible. This paper studies conditional and unconditional inference in such settings, accounting for the data-dependent choice of target parameters. We detail the construction of quantile-unbiased estimators and confidence sets with correct coverage, and prove their asymptotic validity under data generating processes such that the target parameter remains random in the limit. We also provide a novel sample splitting approach that improves on conventional split-sample inference. 
Date:  2020–07–06 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:34/20&r= 
By:  Antoine Ferré (IFP Energies nouvelles (IFPEN), IFP School); Guillaume de Certaines (IFP Energies nouvelles (IFPEN), IFP School); Jérôme Cazelles (IFP Energies nouvelles (IFPEN), IFP School); Tancrède Cohet (IFP Energies nouvelles (IFPEN), IFP School); Arash Farnoosh (IFP Energies nouvelles (IFPEN), IFP School); Frédéric Lantz (IFP Energies nouvelles (IFPEN), IFP School) 
Abstract:  This paper gives an overview of several models applied to forecast the day-ahead prices of the German electricity market between 2014 and 2015, using hourly wind and solar production as well as load. Four econometric models were built: SARIMA, SARIMAX, Holt-Winters and Monte Carlo Markov Chain Switching Regimes. Two machine learning approaches were also studied: a Gaussian mixture classification coupled with a random forest and, finally, an LSTM algorithm. The best performances were obtained using the SARIMAX and LSTM models. The SARIMAX model makes good predictions and has the advantage, through its explanatory variables, of better capturing the price volatility. The addition of other explanatory variables could improve the predictions of the models presented. The random forest exhibits good results and allows one to build a confidence interval. The LSTM model provides excellent results, but a precise understanding of the functioning of this model is much more complex. 
Keywords:  Energy Markets, Renewable Energy, Econometric modelling, Bootstrap Method, Merit-Order effect 
Date:  2021–05 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal03262208&r= 
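In the spirit of the SARIMAX specification (a seasonal autoregression augmented with exogenous load and wind regressors), a toy ARX regression on simulated hourly data looks as follows; all data-generating parameters are invented for illustration and this is plain OLS, not the full SARIMAX estimation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 24 * 200
hour = np.arange(T) % 24
load = 50 + 10 * np.sin(2 * np.pi * hour / 24) + rng.standard_normal(T)
wind = np.clip(rng.normal(10.0, 5.0, T), 0.0, None)
price = 5 + 0.8 * load - 0.5 * wind + rng.standard_normal(T)  # toy price DGP

# ARX regression: previous day's same-hour price plus exogenous drivers
X = np.column_stack([np.ones(T - 24), price[:-24], load[24:], wind[24:]])
y = price[24:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mae = np.abs(y - X @ beta).mean()
print(f"in-sample MAE: {mae:.2f}")
```

The negative coefficient that such a regression recovers on wind is the merit-order effect mentioned in the keywords: more renewable production pushes the day-ahead price down.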