nep-ets New Economics Papers
on Econometric Time Series
Issue of 2023‒03‒06
twelve papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Estimating Time-Varying Networks for High-Dimensional Time Series By Jia Chen; Degui Li; Yuning Li; Oliver Linton
  2. High-Dimensional Conditionally Gaussian State Space Models with Missing Data By Joshua C. C. Chan; Aubrey Poon; Dan Zhu
  3. Penalized Quasi-likelihood Estimation and Model Selection in Time Series Models with Parameters on the Boundary By Heino Bohn Nielsen; Anders Rahbek
  4. Sequential Bayesian Learning for Hidden Semi-Markov Models By Patrick Aschermayr; Konstantinos Kalogeropoulos
  5. Data cloning for a threshold asymmetric stochastic volatility model By Marín Díazaraque, Juan Miguel; Lopes Moreira Da Veiga, María Helena
  6. Bayesian Local Projections By Leonardo N. Ferreira; Silvia Miranda-Agrippino; Giovanni Ricco
  7. Volatility modeling of property markets: A note on the distribution of GARCH innovation By Karl-Friedrich Keunecke; Cay Oertel
  8. Simultaneous decorrelation of matrix time series By Han, Yuefeng; Chen, Rong; Zhang, Cun-Hui; Yao, Qiwei
  9. Out of Sample Predictability in Predictive Regressions with Many Predictor Candidates By Jesus Gonzalo; Jean-Yves Pitarakis
  10. Adaptive local VAR for dynamic economic policy uncertainty spillover By Niels Gillmann; Ostap Okhrin
  11. To Boost or Not to Boost? That is the Question By Ye Lu; Adrian Pagan
  12. Testing for Structural Change under Nonstationarity By Christis Katsouris

  1. By: Jia Chen; Degui Li; Yuning Li; Oliver Linton
    Abstract: We explore time-varying networks for high-dimensional locally stationary time series, using the large VAR model framework with both the transition and (error) precision matrices evolving smoothly over time. Two types of time-varying graphs are investigated: one containing directed edges of Granger causality linkages, and the other containing undirected edges of partial correlation linkages. Under the sparse structural assumption, we propose a penalised local linear method with time-varying weighted group LASSO to jointly estimate the transition matrices and identify their significant entries, and a time-varying CLIME method to estimate the precision matrices. The estimated transition and precision matrices are then used to determine the time-varying network structures. Under some mild conditions, we derive the theoretical properties of the proposed estimates including the consistency and oracle properties. In addition, we extend the methodology and theory to cover highly-correlated large-scale time series, for which the sparsity assumption becomes invalid and we allow for common factors before estimating the factor-adjusted time-varying networks. We provide extensive simulation studies and an empirical application to a large U.S. macroeconomic dataset to illustrate the finite-sample performance of our methods.
    Date: 2023–02
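To give a flavour of the "locally stationary" estimation described in the abstract above, here is a hedged stdlib-Python toy: a single time-varying AR(1) coefficient recovered by kernel-weighted least squares around each rescaled time point. The `local_ar1` helper, the Gaussian kernel, the bandwidth, and the simulated coefficient drift are all illustrative choices, not the authors' penalised group-LASSO/CLIME machinery.

```python
# Toy locally stationary estimation: a drifting AR(1) coefficient recovered
# by kernel-weighted least squares (illustrative sketch only).
import math
import random

def local_ar1(y, t0, h):
    """Kernel-weighted AR(1) slope estimate at rescaled time t0, bandwidth h."""
    n = len(y)
    num = den = 0.0
    for t in range(1, n):
        w = math.exp(-0.5 * ((t / n - t0) / h) ** 2)   # Gaussian kernel weight
        num += w * y[t] * y[t - 1]
        den += w * y[t - 1] ** 2
    return num / den

random.seed(2)
n = 2000
y = [0.0]
for t in range(1, n):
    phi = 0.2 + 0.6 * t / n            # true coefficient drifts from 0.2 to 0.8
    y.append(phi * y[-1] + random.gauss(0, 1))

print(round(local_ar1(y, 0.1, 0.1), 2), round(local_ar1(y, 0.9, 0.1), 2))
```

The estimate near the start of the sample should sit well below the estimate near the end, tracking the drift.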
  2. By: Joshua C. C. Chan; Aubrey Poon; Dan Zhu
    Abstract: We develop an efficient sampling approach for handling complex missing data patterns and a large number of missing observations in conditionally Gaussian state space models. Two important examples are dynamic factor models with unbalanced datasets and large Bayesian VARs with variables in multiple frequencies. A key insight underlying the proposed approach is that the joint distribution of the missing data conditional on the observed data is Gaussian. Moreover, the inverse covariance or precision matrix of this conditional distribution is sparse, and this special structure can be exploited to substantially speed up computations. We illustrate the methodology using two empirical applications. The first application combines quarterly, monthly and weekly data using a large Bayesian VAR to produce weekly GDP estimates. In the second application, we extract latent factors from unbalanced datasets involving over a hundred monthly variables via a dynamic factor model with stochastic volatility.
    Date: 2023–02
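A minimal stdlib-Python sketch of the key identity the abstract exploits: for a joint Gaussian with precision matrix Q, the missing block conditional on the observed block is Gaussian with precision Q_mm and mean mu_m - Q_mm^{-1} Q_mo (y_o - mu_o). The tiny dense solver and the toy precision matrix are illustrative only; the paper's point is that real applications can exploit sparse factorisations of Q_mm.

```python
# Conditional mean of missing entries of a joint Gaussian, read off the
# precision matrix. Tiny dense example; illustrative, not the authors' code.

def solve(a, b):
    """Solve a x = b by Gaussian elimination (small dense systems only)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))  # partial pivoting
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def conditional_mean(Q, mu, obs):
    """Mean of missing entries given observed ones; obs maps index -> value."""
    miss = [i for i in range(len(mu)) if i not in obs]
    Q_mm = [[Q[i][j] for j in miss] for i in miss]
    rhs = [-sum(Q[i][j] * (obs[j] - mu[j]) for j in obs) for i in miss]
    dev = solve(Q_mm, rhs)                 # Q_mm dev = -Q_mo (y_o - mu_o)
    return {i: mu[k] + dev[k] for k, i in enumerate(miss)}

# Joint precision of 3 correlated variables, mean zero; variable 1 missing.
Q = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
mu = [0.0, 0.0, 0.0]
print(conditional_mean(Q, mu, {0: 1.0, 2: 3.0}))  # {1: 2.0}
```

Because only the sub-block Q_mm of the precision matrix enters the solve, sparsity of Q translates directly into cheap draws of the missing data.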
  3. By: Heino Bohn Nielsen; Anders Rahbek
    Abstract: We extend the theory of Fan and Li (2001) on penalized likelihood-based estimation and model selection to statistical and econometric models which allow for non-negativity constraints on some or all of the parameters, as well as time-series dependence. This differs from classic non-penalized likelihood estimation, where the limiting distributions of likelihood-based estimators and test statistics are non-standard and depend on the unknown number of parameters on the boundary of the parameter space. Specifically, we establish that joint model selection and estimation results in estimators with standard asymptotic Gaussian distributions. The results are applied to the rich class of autoregressive conditional heteroskedastic (ARCH) models for the modelling of time-varying volatility. Simulations show that penalized estimation and model selection work surprisingly well even for a large number of parameters. A simple empirical illustration for stock-market returns data confirms the ability of penalized estimation to select ARCH models which fit the autocorrelation function well, and confirms the stylized fact of long memory in financial time series data.
    Date: 2023–02
  4. By: Patrick Aschermayr; Konstantinos Kalogeropoulos
    Abstract: In this paper, we explore the class of Hidden Semi-Markov Models (HSMMs), a flexible extension of the popular Hidden Markov Model (HMM) that allows the underlying stochastic process to be a semi-Markov chain. HSMMs are used less frequently than their basic HMM counterpart because evaluating their likelihood function is computationally more challenging. Moreover, while both models are sequential in nature, parameter estimation is mainly conducted via batch estimation methods. Thus, a major motivation of this paper is to provide methods to estimate HSMMs (1) in a computationally feasible time, (2) in an exact manner, i.e. subject only to Monte Carlo error, and (3) in a sequential setting. We provide and verify an efficient computational scheme for Bayesian parameter estimation of HSMMs. Additionally, we explore the performance of HSMMs on the VIX time series using autoregressive (AR) models with hidden semi-Markov states, and demonstrate how this algorithm can be used for regime switching, model selection and clustering purposes.
    Date: 2023–01
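The likelihood evaluation that makes HSMMs costly generalises the basic HMM forward recursion. As a reference point, here is that basic scaled forward filter in stdlib Python; a sketch with made-up parameter values, not the authors' sequential Bayesian scheme.

```python
# Scaled forward filter: exact log-likelihood of a discrete-observation HMM.
# The HSMM likelihood generalises this recursion with explicit state durations.
import math

def forward_loglik(pi, A, emit, obs):
    """Log-likelihood of obs under an HMM.

    pi: initial state probabilities; A[i][j]: transition prob i -> j;
    emit[i][o]: P(obs = o | state = i)."""
    alpha = [pi[i] * emit[i][obs[0]] for i in range(len(pi))]
    ll = 0.0
    for o in obs[1:]:
        c = sum(alpha)                      # scaling constant = P(obs so far)
        ll += math.log(c)
        alpha = [a / c for a in alpha]
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * emit[j][o]
                 for j in range(len(pi))]
    return ll + math.log(sum(alpha))

# Illustrative 2-state, 2-symbol model.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]
emit = [[0.8, 0.2], [0.3, 0.7]]
print(forward_loglik(pi, A, emit, [0, 0, 1, 1]))
```

The recursion is O(T·S²); the semi-Markov generalisation adds a sum over possible state durations, which is where the extra cost the abstract mentions comes from.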
  5. By: Marín Díazaraque, Juan Miguel; Lopes Moreira Da Veiga, María Helena
    Abstract: In this paper, we propose a new asymmetric stochastic volatility model whose asymmetry parameter can change depending on the intensity of the shock and is modeled as a threshold function whose threshold depends on past returns. We study the model in terms of leverage and propagation using a new concept that has recently appeared in the literature, and find that the new model can generate more leverage and propagation than a well-known asymmetric volatility model. We also propose to estimate the parameters of the model by data cloning, a general technique for computing maximum likelihood estimators and their asymptotic variances by means of a Markov chain Monte Carlo (MCMC) method. Comparing finite-sample estimates, we find that data cloning is often more accurate than a Bayesian approach. The empirical application shows that the new model often improves the fit compared to the benchmark model. Finally, the new proposal together with data-cloning estimation often leads to more accurate 1-day and 10-day volatility forecasts, especially for return series with high volatility.
    Keywords: Asymmetric Stochastic Volatility; Data Cloning; Leverage Effect; Propagation; Volatility Forecasting
    Date: 2023–02–14
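The data-cloning idea can be seen in a toy conjugate example where the "posterior" is available in closed form: replicate the data K times, run the same Bayesian machinery, and the posterior concentrates on the maximum likelihood estimate as K grows, with K times the posterior variance estimating the asymptotic variance. This normal-mean example is purely illustrative, not the paper's stochastic volatility model.

```python
# Data cloning in miniature: a N(mu, sigma2) mean with a N(0, prior_var)
# prior, where cloning the sample K times has a closed-form effect.

def cloned_posterior(y, K, prior_var=100.0, sigma2=1.0):
    """Posterior mean/variance after cloning the sample K times."""
    n = len(y) * K                       # cloned sample size
    s = sum(y) * K                       # cloned sample sum
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (s / sigma2)  # prior mean is zero
    return post_mean, post_var

y = [1.2, 0.7, 1.9, 1.1]
mle = sum(y) / len(y)                    # the MLE the posterior converges to
for K in (1, 10, 1000):
    m, v = cloned_posterior(y, K)
    print(K, round(m, 4), round(K * v, 4))  # K*v estimates asymptotic variance
```

In non-conjugate models such as the paper's, the same limit is reached by running MCMC on the K-fold replicated data rather than by a formula.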
  6. By: Leonardo N. Ferreira (Central Bank of Brazil); Silvia Miranda-Agrippino (Bank of England, CfM(LSE) and CEPR); Giovanni Ricco (École Polytechnique, University of Warwick, OFCE-Sciences Po and CEPR)
    Abstract: We propose a Bayesian approach to Local Projections that optimally addresses the empirical bias-variance trade-off intrinsic in the choice between direct and iterative methods. Bayesian Local Projections (BLP) regularise LP regressions via informative priors, and estimate impulse response functions that capture the properties of the data more accurately than iterative VARs. BLPs preserve the flexibility of LPs while retaining a degree of estimation uncertainty comparable to Bayesian VARs with standard macroeconomic priors. As regularised direct forecasts, BLPs are also a valuable alternative to BVARs for multivariate out-of-sample projections.
    Keywords: Local Projections, VARs, Bayesian Techniques, Impulse Response Functions, Direct Forecasting
    JEL: C32 C11 C14
    Date: 2023–02–12
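One way to see the claim that "informative priors regularise LP regressions": a normal prior centred at zero on an LP coefficient acts like a ridge penalty. Below is a single-regressor stdlib-Python sketch, with a hypothetical shrinkage parameter `lam` standing in for the prior tightness; the paper's BLPs are fully multivariate, so this is only the scalar intuition.

```python
# Ridge-shrunk local projections: regress y_{t+h} on x_t for each horizon h,
# shrinking the slope toward zero. lam = 0 recovers plain OLS LPs.

def lp_ridge_irf(y, x, H, lam):
    """Impulse responses b_h from ridge-penalised LPs of y_{t+h} on x_t."""
    irf = []
    for h in range(H + 1):
        xs = x[:len(x) - h]              # regressor dated t
        ys = y[h:]                       # outcome dated t+h
        xbar = sum(xs) / len(xs)
        ybar = sum(ys) / len(ys)
        sxy = sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys))
        sxx = sum((a - xbar) ** 2 for a in xs)
        irf.append(sxy / (sxx + lam))    # ridge slope at horizon h
    return irf

x = [float(i) for i in range(30)]
y = [2.0 * xi for xi in x]               # deterministic toy relationship
print(lp_ridge_irf(y, x, H=2, lam=0.0))  # unpenalised LP slopes
```

Larger `lam` (a tighter prior) pulls the whole impulse response path toward zero, trading variance for bias exactly as the abstract's bias-variance discussion suggests.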
  7. By: Karl-Friedrich Keunecke; Cay Oertel
    Abstract: Autoregressive heteroscedastic effects in financial time series have been the subject of a broad field of applied econometrics. Both academic research and the industry apply GARCH processes to real estate data, with previous investigation mostly focused on securitized real estate positions. So far, the common approach in the literature has been to assume a normal distribution of the innovation term when fitting GARCH models to direct real estate markets (Miles, 2008). This assumption of normality, however, falls short of the data characteristics exhibited by direct real estate markets: returns of real estate prices are explicitly not normally distributed and are better characterized by a more leptokurtic, skewed distribution (Schindler, 2009). Ghahramani and Thavaneswaran (2007) point out that the innovation distribution is typically selected without further justification (see also the footnote in Pin-te & Fuest (2014) for a simple switch to Student-t without further justification). Consequently, the research aim of this study is the misspecification and mis-parameterization of GARCH models that result from imposing a priori innovation-term distributions ill-suited to direct real estate. The analysis utilizes monthly transaction-based data for ten US property market subsets, observed over a window of time that encompasses different market conditions and volatility regimes (Perlin et al., 2021). Determining how ARCH effects differ across US real estate submarkets, as well as between major and non-major markets, builds on and extends previous research focused on geographical disaggregation (see Crawford and Fratantoni, 2003; Dolde and Tirtioglu, 1997; Miles, 2008; Schindler, 2009). Each data subset is first fitted with a conditionally normally distributed GARCH model, which is then juxtaposed with models employing a variety of innovation distributions.
The central hypothesis of this paper is that the goodness of fit of GARCH models can be improved by allowing the conditional distribution to be modeled as a flexible a priori assumption. Investigating the differing goodness of fit of the models, and re-estimating the GARCH parameters with the most appropriate ones, allows an analysis of how the volatility clustering effects differ from those of the model employing normally distributed innovations. The aim is to show empirically that a non-normal innovation-term distribution can lead to a better goodness of fit of the GARCH model. The a priori assumptions underlying GARCH model specification are of high importance not only for investors’ portfolio management, but also for risk management at economic institutions such as central banks and mortgage banks (Schindler, 2009). To the best of the authors’ knowledge, there is no study which scientifically examines the innovation-term distribution of GARCH models for direct real estate investments. This paper aims to provide a better understanding of how a priori assumptions on the innovation term can increase the validity of volatility models for direct real estate investments.
    Keywords: Capital Values; GARCH; Innovation term distribution; Volatility modeling
    JEL: R3
    Date: 2022–01–01
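The core comparison the abstract describes — the same GARCH(1,1) conditional-variance recursion evaluated under a normal versus a standardised Student-t innovation density — can be sketched in a few lines. Everything here (parameter values, the toy return series, the degrees of freedom) is made up for illustration and is not the paper's estimation pipeline.

```python
# GARCH(1,1) variance recursion plus two competing innovation log-likelihoods.
import math

def garch_variances(r, omega, alpha, beta):
    """Conditional variances h_t from the GARCH(1,1) recursion."""
    h = [omega / (1 - alpha - beta)]          # start at unconditional variance
    for t in range(1, len(r)):
        h.append(omega + alpha * r[t - 1] ** 2 + beta * h[t - 1])
    return h

def loglik_normal(r, h):
    return sum(-0.5 * (math.log(2 * math.pi * ht) + rt * rt / ht)
               for rt, ht in zip(r, h))

def loglik_student_t(r, h, nu):
    """Standardised Student-t innovations with nu > 2 degrees of freedom."""
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(math.pi * (nu - 2)))
    return sum(c - 0.5 * math.log(ht)
               - (nu + 1) / 2 * math.log(1 + rt * rt / (ht * (nu - 2)))
               for rt, ht in zip(r, h))

r = [0.5, -1.8, 2.3, -0.4, 0.1, 3.0, -2.6, 0.3]   # toy return series
h = garch_variances(r, omega=0.1, alpha=0.1, beta=0.8)
print(loglik_normal(r, h), loglik_student_t(r, h, nu=5))
```

Comparing such log-likelihoods (or information criteria built from them) across candidate innovation distributions is the kind of goodness-of-fit exercise the paper carries out on the property-market data.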
  8. By: Han, Yuefeng; Chen, Rong; Zhang, Cun-Hui; Yao, Qiwei
    Abstract: We propose a contemporaneous bilinear transformation for a p × q matrix time series to alleviate the difficulties in modeling and forecasting matrix time series when p and/or q are large. The resulting transformed matrix assumes a block structure consisting of several small matrices, and those small matrix series are uncorrelated across all times. Hence, an overall parsimonious model is achieved by modeling each of those small matrix series separately without the loss of information on the linear dynamics. Such a parsimonious model often has better forecasting performance, even when the underlying true dynamics deviates from the assumed uncorrelated block structure after transformation. The uniform convergence rates of the estimated transformation are derived, which vindicate an important virtue of the proposed bilinear transformation, that is, it is technically equivalent to the decorrelation of a vector time series of dimension max(p, q) instead of p × q. The proposed method is illustrated numerically via both simulated and real data examples. Supplementary materials for this article are available online.
    Keywords: decorrelation transformation; eigenanalysis; matrix time series; forecasting; uniform convergence rates
    JEL: C1
    Date: 2023–01–11
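The decorrelation idea in miniature: rotate a 2-dimensional series so that the components of the transformed series are contemporaneously uncorrelated, via a closed-form eigen-rotation of the sample covariance. The paper's bilinear transformation is far more general (matrix-valued series, decorrelation across all time lags), so this stdlib-Python sketch only conveys the flavour.

```python
# Rotate a bivariate series so its two components have zero sample covariance.
import math
import random

def cov(a, b):
    am, bm = sum(a) / len(a), sum(b) / len(b)
    return sum((x - am) * (y - bm) for x, y in zip(a, b)) / len(a)

def decorrelate(x, y):
    """Rotate (x, y) so the sample covariance of the rotated pair is zero."""
    theta = 0.5 * math.atan2(2 * cov(x, y), cov(x, x) - cov(y, y))
    c, s = math.cos(theta), math.sin(theta)
    u = [c * xi + s * yi for xi, yi in zip(x, y)]
    v = [-s * xi + c * yi for xi, yi in zip(x, y)]
    return u, v

random.seed(1)
x = [random.gauss(0, 1) for _ in range(500)]
y = [0.6 * xi + 0.8 * random.gauss(0, 1) for xi in x]   # correlated with x
u, v = decorrelate(x, y)
print(round(cov(x, y), 3), round(cov(u, v), 6))
```

Once the rotated components are (approximately) uncorrelated, each can be modelled with its own univariate model, which is the parsimony argument the abstract makes for the block structure.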
  9. By: Jesus Gonzalo; Jean-Yves Pitarakis
    Abstract: This paper is concerned with detecting the presence of out of sample predictability in linear predictive regressions with a potentially large set of candidate predictors. We propose a procedure based on out of sample MSE comparisons that is implemented in a pairwise manner using one predictor at a time and resulting in an aggregate test statistic that is standard normally distributed under the global null hypothesis of no linear predictability. Predictors can be highly persistent, purely stationary or a combination of both. Upon rejection of the null hypothesis we subsequently introduce a predictor screening procedure designed to identify the most active predictors. An empirical application to key predictors of US economic activity illustrates the usefulness of our methods and highlights the important forward looking role played by the series of manufacturing new orders.
    Date: 2023–02
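The building block of the procedure described above is a recursive out-of-sample MSE comparison between a benchmark mean forecast and a one-predictor regression, done one candidate predictor at a time. This stdlib-Python sketch shows that building block only; the aggregation into the authors' standard-normal test statistic and the screening step are not reproduced, and the simulated data are illustrative.

```python
# Recursive out-of-sample MSEs: historical-mean benchmark vs a predictive
# regression of y_{t+1} on a single candidate predictor x_t.
import random

def recursive_mse(y, x, burn):
    """Out-of-sample MSEs of the mean benchmark and the one-predictor LP."""
    mse_bench = mse_pred = 0.0
    n = 0
    for t in range(burn, len(y) - 1):
        xs, ys = x[:t], y[1:t + 1]       # training pairs (x_s, y_{s+1}), s < t
        xbar = sum(xs) / len(xs)
        ybar = sum(ys) / len(ys)
        sxx = sum((xi - xbar) ** 2 for xi in xs)
        b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys)) / sxx
        a = ybar - b * xbar
        mse_bench += (y[t + 1] - ybar) ** 2
        mse_pred += (y[t + 1] - (a + b * x[t])) ** 2
        n += 1
    return mse_bench / n, mse_pred / n

random.seed(0)
n = 80
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.0] * n
for t in range(n - 1):
    y[t + 1] = 0.9 * x[t] + 0.1 * random.gauss(0, 1)   # strong predictability
mb, mp = recursive_mse(y, x, burn=20)
print(round(mb, 4), round(mp, 4))
```

With genuine predictability the predictor's out-of-sample MSE falls below the benchmark's; under the global null of no predictability the two should be statistically indistinguishable, which is what the aggregate test formalises.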
  10. By: Niels Gillmann (ifo Institute Dresden; Technische Universit\"at Dresden); Ostap Okhrin (Technische Universit\"at Dresden)
    Abstract: The availability of data on economic uncertainty has sparked a lot of interest in models that can quantify episodes of international spillovers of uncertainty in a timely fashion. This challenging task involves trading off estimation accuracy for more timely quantification. This paper develops a local vector autoregressive (VAR) model that allows for adaptive estimation of the time-varying multivariate dependency. By local, we mean that for each point in time we simultaneously estimate the model parameters and the longest interval on which the model is constant. A simulation study shows that the model can handle one or multiple sudden breaks as well as a smooth break in the data. The empirical application uses monthly Economic Policy Uncertainty data. The local model highlights that the empirical data consist primarily of long homogeneous episodes, interrupted by a small number of heterogeneous ones that correspond to crises. Based on this observation, we create a crisis index, which reflects the homogeneity of the sample over time. Furthermore, the local model outperforms rolling-window estimation.
    Date: 2023–02
  11. By: Ye Lu; Adrian Pagan
    Abstract: Phillips and Shi (2021) have argued that there may be some leakage from the estimate of the permanent component into what is meant to be the transitory component when one uses the Hodrick-Prescott filter, and that this can be eliminated by boosting the filter. We show that there is no leakage from the filter per se, so boosting is not needed for that purpose. They also argue that there are DGPs for which the boosted filter tracks the components more accurately. We show that there are other plausible DGPs where the boosted filter tracks less accurately, and that what is crucial to tracking performance is how important permanent shocks are to growth in the series being filtered. In particular, the DGPs used in Phillips and Shi (2021) have a very high contribution from permanent shocks.
    Keywords: Boosting, Hodrick-Prescott filter, Component models
    JEL: E32 E37 C10
    Date: 2023–02
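The "boosting" at issue is mechanically simple: re-apply the filter to the remaining cycle and fold what it extracts back into the trend, m times. In this stdlib-Python sketch a simple (1, 2, 1)/4 moving average stands in for the HP filter purely for brevity; the iteration, not the particular smoother, is the point, and the series and m are arbitrary.

```python
# Boosting a linear filter: repeatedly filter the residual cycle and
# accumulate the extracted part into the trend.

def smooth(y):
    """(1, 2, 1)/4 moving average as a stand-in linear smoother."""
    n = len(y)
    return [(y[max(i - 1, 0)] + 2 * y[i] + y[min(i + 1, n - 1)]) / 4
            for i in range(n)]

def boosted_trend(y, m):
    """m boosting rounds: trend accumulates what each filter pass extracts."""
    cycle = list(y)
    trend = [0.0] * len(y)
    for _ in range(m):
        s = smooth(cycle)
        trend = [t + si for t, si in zip(trend, s)]
        cycle = [c - si for c, si in zip(cycle, s)]
    return trend, cycle

y = [0.1, 0.5, 0.4, 0.9, 1.2, 1.0, 1.6, 1.4, 1.9, 2.3]
trend, cycle = boosted_trend(y, 3)
print([round(t, 3) for t in trend])
```

Each extra round shifts more of the series from the cycle into the trend, which is exactly the behaviour the paper argues can help or hurt depending on how much of the true variation is driven by permanent shocks.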
  12. By: Christis Katsouris
    Abstract: This Appendix (dated July 2021) includes supplementary derivations related to the main limit results of the econometric framework for structural break testing in predictive regression models based on the OLS-Wald and IVX-Wald test statistics, developed by Katsouris (2021). In particular, we derive the asymptotic distributions of the test statistics when the predictive regression model includes either mildly integrated or persistent regressors. Moreover, we consider the case in which a model intercept is included in the predictive regression versus the case in which it is not. In a subsequent version of this study we reexamine these aspects in more depth with respect to the demeaned versions of the variables of the predictive regression.
    Date: 2023–02

This nep-ets issue is ©2023 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.