on Econometric Time Series
By: | Justyna Wróblewska; Łukasz Kwiatkowski
Abstract: | We develop a Bayesian framework for cointegrated structural VAR models identified by two-state Markovian breaks in conditional covariances. The resulting structural VEC specification with Markov-switching heteroskedasticity (SVEC-MSH) is formulated in the so-called B-parameterization, in which the prior distribution is specified directly for the matrix of instantaneous reactions of the endogenous variables to structural innovations. We discuss some caveats pertaining to the identification conditions presented earlier in the literature on stationary structural VAR-MSH models, and revise the restrictions to ensure unique global identification through the two-state heteroskedasticity. To enable posterior inference in the proposed model, we design an MCMC procedure combining the Gibbs sampler and the Metropolis-Hastings algorithm. The methodology is illustrated with both simulated and real-world data examples.
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.03053 |
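The "identification through two-state heteroskedasticity" invoked in the abstract rests on a standard linear-algebra fact: if the two state covariances are Sigma1 = B B' and Sigma2 = B Lambda B' with Lambda diagonal and its entries distinct, then the eigenvectors of Sigma1^{-1} Sigma2 are the columns of B^{-T}, so B is pinned down up to column order, sign, and scale. The numerical check below illustrates that textbook fact only; it is not the paper's Bayesian procedure, and all variable names are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
B = rng.standard_normal((n, n))          # "true" impact matrix (made up)
lam = np.array([0.5, 1.5, 3.0])          # distinct state-2 structural variances
Sigma1 = B @ B.T                         # state-1 covariance (unit variances)
Sigma2 = B @ np.diag(lam) @ B.T          # state-2 covariance

# Eigenvectors of Sigma1^{-1} Sigma2 equal the columns of B^{-T}, so B is
# recovered up to column order, sign and scale iff the entries of lam differ.
mu, V = np.linalg.eig(np.linalg.solve(Sigma1, Sigma2))
mu, V = mu.real, V.real                  # eigenvalues are real here
B_hat = np.linalg.inv(V).T

def col_cos(a, b):
    """Absolute cosine similarity between two vectors."""
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# each recovered column should be parallel to some true column of B
sims = [max(col_cos(B_hat[:, j], B[:, i]) for i in range(n)) for j in range(n)]
```

With two states this pins down B only when the variance ratios are all distinct, which is exactly the kind of condition the paper revisits.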
By: | Chen Tong; Peter Reinhard Hansen; Ilya Archakov |
Abstract: | We introduce a novel multivariate GARCH model with flexible convolution-t distributions that is applicable in high-dimensional systems. The model is called Cluster GARCH because it can accommodate cluster structures in the conditional correlation matrix and in the tail dependencies. The expressions for the log-likelihood function and its derivatives are tractable, and the latter facilitate a score-driven model for the dynamic correlation structure. We apply the Cluster GARCH model to daily returns for 100 assets and find it outperforms existing models, both in-sample and out-of-sample. Moreover, the convolution-t distribution provides better empirical performance than the conventional multivariate t-distribution.
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.06860 |
By: | Yijie Fei (College of Finance and Statistics, Hunan University); Yiu Lim Lui (Institute for Advanced Economic Research, Dongbei University of Finance and Economics); Jun Yu (Department of Finance and Business Economics, Faculty of Business Administration, University of Macau) |
Abstract: | This paper considers testing predictability in predictive regression models with persistent errors. We derive the limiting distributions of the ordinary least squares estimator and the corresponding Wald statistic under moderately integrated or local-to-unity errors. The asymptotic result establishes the connection between super-consistent estimation in correctly specified predictive regression and inconsistent estimation in spurious regression. To provide a robust test, a modification to the IVX-AR test of Yang et al. (2020) is proposed. The modified test is uniformly valid across different degrees of persistence in both predictors and errors. Simulation studies show that the new test enjoys satisfactory finite-sample performance. Leveraging the new test, we reexamine the predictive power of numerous economic variables for the growth rate of the U.S. housing market, demonstrating the usefulness of the proposed test, particularly in the context of multivariate regression.
Keywords: | Spurious regression; Predictive regression; Uniform inference; Robust test; Moderately integrated; Nearly integrated; Housing price
JEL: | C12 C22 G01 |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:boa:wpaper:202401 |
By: | Tibor Szendrei; Arnab Bhattacharjee; Mark E. Schaffer |
Abstract: | Mixed-frequency data have been shown to improve the performance of growth-at-risk models in the literature. Most of the research has focused on imposing structure on the high-frequency lags when estimating MIDAS-QR models, akin to what is done in mean models. However, imposing structure only on the lag dimension can induce quantile variation that would otherwise not be there. In this paper we extend the framework by introducing structure on both the lag dimension and the quantile dimension. In this way we are able to shrink unnecessary quantile variation in the high-frequency variables. This leads to more gradual lag profiles in both dimensions compared to the MIDAS-QR and UMIDAS-QR. We show that the proposed method yields further gains in nowcasting and forecasting in a pseudo-out-of-sample exercise on US data.
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.15157 |
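The "structure on the high-frequency lags" that MIDAS-type models impose is typically a low-dimensional lag polynomial, a common choice being exponential Almon weights. The sketch below shows only that standard weighting scheme and how it collapses high-frequency lags into one low-frequency regressor; it is not the authors' quantile-dimension extension, and the function names are illustrative:

```python
import numpy as np

def exp_almon_weights(theta1, theta2, K):
    """Exponential Almon lag polynomial: w_k proportional to
    exp(theta1*k + theta2*k^2), normalized to sum to one."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_regressor(x_hf, w, m):
    """Collapse the K = len(w) most recent high-frequency lags into one
    weighted regressor per low-frequency period (m HF obs per LF period)."""
    K = len(w)
    ends = np.arange(K - 1, len(x_hf), m)   # last HF index in each LF period
    return np.array([w @ x_hf[e - K + 1 : e + 1][::-1] for e in ends])
```

Because the K lag coefficients are driven by only two parameters, the lag profile is forced to be smooth; the paper's point is that an analogous smoothing can also be imposed across quantiles.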
By: | Jianqing Fan; Yuling Yan; Yuheng Zheng |
Abstract: | This article establishes a new and comprehensive estimation and inference theory for principal component analysis (PCA) under the weak factor model, allowing for cross-sectionally dependent idiosyncratic components under a nearly minimal factor strength relative to the noise level, i.e., signal-to-noise ratio. Our theory is applicable regardless of the relative growth rate between the cross-sectional dimension $N$ and the temporal dimension $T$. This more realistic assumption and notable result require a completely new technical device, as the commonly used leave-one-out trick is no longer applicable in the presence of cross-sectional dependence. Another notable advancement of our theory is on PCA inference: for example, under the regime where $N\asymp T$, we show that asymptotic normality of the PCA-based estimator holds as long as the signal-to-noise ratio (SNR) grows faster than a polynomial rate of $\log N$. This finding significantly surpasses prior work that required a polynomial rate of $N$. Our theory is entirely non-asymptotic, offering finite-sample characterizations of both the estimation error and the uncertainty level of statistical inference. A notable technical innovation is our closed-form first-order approximation of the PCA-based estimator, which paves the way for various statistical tests. Furthermore, we apply our theory to design easy-to-implement statistics for validating whether given factors fall in the linear spans of unknown latent factors, testing structural breaks in the factor loadings of an individual unit, checking whether two units have the same risk exposures, and constructing confidence intervals for systematic risks. Our empirical studies uncover insightful correlations between our test results and economic cycles.
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.03616 |
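One of the applications listed in the abstract above, checking whether a given observable factor lies in the linear span of the latent factors, can be caricatured with plain PCA plus a regression R-squared. This is only a heuristic sketch under simulated strong-factor data, not the paper's test statistic or normalization, and the function names are made up:

```python
import numpy as np

def pca_factors(R, r):
    """Extract r factors from a T x N return panel by PCA: top-r
    eigenvectors of R R', scaled so that F'F/T = I (up to sign)."""
    T, _ = R.shape
    _, vecs = np.linalg.eigh(R @ R.T)   # eigh returns ascending eigenvalues
    return np.sqrt(T) * vecs[:, -r:]    # keep the top-r eigenvectors

def span_r2(g, F_hat):
    """R^2 from regressing a candidate factor g on the estimated factors.
    Values near one suggest g lies approximately in their linear span."""
    X = np.column_stack([np.ones(len(g)), F_hat])
    coef, *_ = np.linalg.lstsq(X, g, rcond=None)
    resid = g - X @ coef
    return 1.0 - resid.var() / g.var()
```

The paper's contribution is precisely what this sketch lacks: a valid distribution theory for such statistics under weak factors and cross-sectionally dependent noise.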
By: | Marco Cozzi (Department of Economics, University of Victoria) |
Abstract: | I propose a modified implementation of the popular Hamilton filter that makes the cyclical component extracted from an aggregate variable consistent with the aggregation of the cyclical components extracted from its underlying variables. This procedure is helpful in many circumstances, for instance when dealing with a variable that comes from a definition or when the empirical relationship is based on an equilibrium condition of a growth model. The procedure consists of the following steps: 1) build the aggregate variable, 2) run the Hamilton filter regression on the aggregate variable and store the related OLS estimates, 3) use these estimated parameters to predict the trends of all the underlying variables, 4) rescale the constant terms to obtain mean-zero cyclical components that are aggregation-consistent. I consider two applications, exploiting U.S. and Canadian data. The former is based on the GDP expenditure components, while the latter is based on the GDP of Canada's Provinces and Territories. I find sizable differences between the cyclical components of aggregate GDP computed with and without the adjustment, making the procedure valuable both for assessing the output gap and for empirically validating DSGE models.
Keywords: | Business cycles, Filtering, Hamilton filter, Output gap, Trend-cycle decomposition
JEL: | C22 E30 E32
Date: | 2024–05 |
URL: | https://d.repec.org/n?u=RePEc:vic:vicddp:2401 |
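The four steps listed in the abstract above can be sketched directly. The Hamilton (2018) filter regresses y_{t+h} on a constant and p lags y_t, ..., y_{t-p+1} (h = 8, p = 4 for quarterly data) and takes the residual as the cycle; the modification applies the aggregate's estimated coefficients to every component and demeans each residual, which is equivalent to rescaling the constants. This is a minimal illustration of the described procedure, not the author's code:

```python
import numpy as np

def hamilton_filter_coeffs(y, h=8, p=4):
    """Step 2: OLS of y_{t+h} on [1, y_t, ..., y_{t-p+1}] (Hamilton, 2018)."""
    T = len(y)
    X = np.column_stack([np.ones(T - h - p + 1)]
                        + [y[p - 1 - j : T - h - j] for j in range(p)])
    Y = y[p - 1 + h :]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta

def aggregation_consistent_cycles(components, h=8, p=4):
    """Steps 1-4: fit the filter on the aggregate, apply the same
    coefficients to each component, demean each component's residual."""
    agg = np.sum(components, axis=0)           # step 1: build the aggregate
    beta = hamilton_filter_coeffs(agg, h, p)   # step 2: OLS on the aggregate
    cycles = []
    for y in components:                       # step 3: predict component trends
        T = len(y)
        X = np.column_stack([np.ones(T - h - p + 1)]
                            + [y[p - 1 - j : T - h - j] for j in range(p)])
        c = y[p - 1 + h :] - X @ beta
        cycles.append(c - c.mean())            # step 4: mean-zero rescaling
    return beta, cycles
```

Since the regression is linear in y and the demeaning absorbs the constant terms, the demeaned component cycles sum exactly to the demeaned aggregate cycle, which is the aggregation consistency the paper is after.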