Title: | … on Econometric Time Series
By: | Tassos Magdalinos; Katerina Petrova |
Abstract: | A unified theory of estimation and inference is developed for an autoregressive process with root in (-∞, ∞) that includes the stationary, local-to-unity, explosive and all intermediate regions. The discontinuity of the limit distribution of the t-statistic outside the stationary region and its dependence on the distribution of the innovations in the explosive regions (-∞, -1) ∪ (1, ∞) are addressed simultaneously. A novel estimation procedure, based on a data-driven combination of a near-stationary and a mildly explosive artificially constructed instrument, delivers mixed-Gaussian limit theory and gives rise to an asymptotically standard normal t-statistic across all autoregressive regions. The resulting hypothesis tests and confidence intervals are shown to have correct asymptotic size (uniformly over the space of autoregressive parameters and the space of innovation distribution functions) in autoregressive, predictive regression and local projection models, thereby establishing a general and unified framework for inference with autoregressive processes. Extensive Monte Carlo simulations show that the proposed methodology exhibits very good finite-sample properties over the entire autoregressive parameter space (-∞, ∞) and compares favorably to existing methods within their parametric (-1, 1] validity range. We demonstrate how our procedure can be used to construct valid confidence intervals in standard epidemiological models, as well as to test in real time for speculative bubbles in the prices of the "Magnificent Seven" tech stocks.
Keywords: | uniform inference; central limit theory (CLT); autoregression; predictive regressions; instrumentation; mixed-Gaussianity; t-statistic; confidence intervals
JEL: | C12 C22 |
Date: | 2025–04–01 |
URL: | https://d.repec.org/n?u=RePEc:fip:fednsr:99905 |
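To fix ideas, here is a minimal Python sketch of IVX-type instrumentation for an AR(1). It constructs only a near-stationary (mildly integrated) instrument with root 1 − c/nᵃ, not the authors' data-driven combination with a mildly explosive instrument; the function name, tuning defaults, and standard-error formula are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def ivx_ar1_tstat(y, a=0.9, c=1.0):
    """IVX-type instrumented AR(1) inference (illustrative sketch).

    Builds the mildly integrated instrument z_t = rho_z * z_{t-1} + dy_t
    with rho_z = 1 - c / n**a, then estimates y_t = rho * y_{t-1} + u_t
    by IV with instrument z_{t-1}. Returns (rho_hat, t-stat for rho = 1).
    """
    y = np.asarray(y, float)
    dy = np.diff(y)
    n = len(dy)
    rho_z = 1.0 - c / n**a                 # near-stationary instrument root
    z = np.empty(n)
    z[0] = dy[0]
    for t in range(1, n):
        z[t] = rho_z * z[t - 1] + dy[t]
    y_lag, y_cur = y[:-1], y[1:]
    den = z[:-1] @ y_lag[1:]               # instrument-regressor moment
    rho_hat = (z[:-1] @ y_cur[1:]) / den
    u = y_cur - rho_hat * y_lag            # IV residuals
    se = np.sqrt(np.sum((u[1:] * z[:-1]) ** 2)) / abs(den)
    return rho_hat, (rho_hat - 1.0) / se

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.cumsum(rng.standard_normal(500))    # unit-root example series
    print(ivx_ar1_tstat(y))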
By: | Sung Hoon Choi; Donggyu Kim (Department of Economics, University of California Riverside) |
Abstract: | Several approaches for predicting large volatility matrices have been developed based on high-dimensional factor-based Itô processes. These methods often impose restrictions to reduce model complexity, such as eigenvectors or factor loadings that are constant over time. However, several studies indicate that eigenvector processes are also time-varying. To accommodate this feature, this paper generalizes the factor structure by representing the integrated volatility matrix process in cubic (order-3 tensor) form, decomposed into a low-rank tensor component and an idiosyncratic tensor component. To predict conditional expected large volatility matrices, we propose the Projected Tensor Principal Orthogonal componEnt Thresholding (PT-POET) procedure and establish its asymptotic properties. The advantages of PT-POET are validated through a simulation study and demonstrated in an application to minimum-variance portfolio allocation using high-frequency trading data.
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202506 |
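A minimal sketch of the low-rank-plus-idiosyncratic tensor split underlying this idea, using a truncated HOSVD and hard thresholding on a (days × assets × assets) volatility tensor. This is illustrative numpy code under simplifying assumptions, not the PT-POET estimator itself.

```python
import numpy as np

def low_rank_plus_thresholded(V, ranks=(3, 3, 3), tau=1e-3):
    """Split a (days x assets x assets) volatility tensor into a truncated-
    HOSVD low-rank part and a hard-thresholded idiosyncratic part.
    Illustrative only; each rank must not exceed its tensor dimension.
    """
    def unfold(T, mode):                    # mode-n matricization
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    U = [np.linalg.svd(unfold(V, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]      # leading mode-wise factors
    G = np.einsum('dij,da,ib,jc->abc', V, U[0], U[1], U[2])   # core tensor
    L = np.einsum('abc,da,ib,jc->dij', G, U[0], U[1], U[2])   # low-rank part
    E = np.where(np.abs(V - L) < tau, 0.0, V - L)  # thresholded residual
    return L, E

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10))
    V = np.einsum('di,dj->dij', A, A) / 10   # toy daily volatility matrices
    L, E = low_rank_plus_thresholded(V)
    print(np.linalg.norm(E) / np.linalg.norm(V))
```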
By: | Hannah O’Keeffe; Katerina Petrova |
Abstract: | In this paper, we propose a component-based dynamic factor (CBDF) model for nowcasting GDP growth. We combine ideas from “bottom-up” approaches, which utilize the national income accounting identity by modelling and predicting the sub-components of GDP, with a dynamic factor (DF) model, which is suitable for dimension reduction as well as parsimonious real-time monitoring of the economy. The advantages of the new model are twofold: (i) in contrast to existing dynamic factor models, it respects the GDP accounting identity; (ii) in contrast to existing “bottom-up” approaches, it models all GDP components jointly through the dynamic factor model, inheriting its main advantages. An additional advantage of the CBDF approach is that it generates nowcast densities and impact decompositions for each component of GDP as a by-product. We present a comprehensive forecasting exercise in which we evaluate the model’s performance in terms of point and density forecasts, comparing it to existing models (e.g. the model of Almuzara, Baker, O’Keeffe, and Sbordone (2023) currently used by the New York Fed, as well as the model of Higgins (2014) currently used by the Atlanta Fed). We demonstrate that, on average, the point nowcast performance (in terms of RMSE) of the standard DF model can be improved by 15 percent, and its density nowcast performance (in terms of log predictive scores) by 20 percent, over a large historical sample.
Keywords: | Dynamic factor model; GDP nowcasting |
JEL: | C32 C38 C53 |
Date: | 2025–04–01 |
URL: | https://d.repec.org/n?u=RePEc:fip:fednsr:99906 |
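The accounting-identity idea can be illustrated with a toy aggregator: nowcast each GDP component separately, then aggregate with share weights so the headline nowcast is identity-consistent by construction. The AR(1)-per-component rule below is a hypothetical stand-in for the paper's joint dynamic factor model, and the component names and weights are made up.

```python
import numpy as np

def identity_consistent_nowcast(components, weights):
    """Toy component-based nowcast: fit an AR(1) to each component's growth
    rate, forecast one step ahead, and aggregate with share weights so the
    headline number satisfies the accounting identity by construction.
    `weights` would be nominal GDP shares in practice (hypothetical here).
    """
    nowcasts = {}
    for name, x in components.items():
        x = np.asarray(x, float)
        xc = x - x.mean()
        rho = xc[:-1] @ xc[1:] / (xc[:-1] @ xc[:-1])   # AR(1) coefficient
        nowcasts[name] = x.mean() + rho * xc[-1]        # one-step forecast
    headline = sum(weights[k] * v for k, v in nowcasts.items())
    return headline, nowcasts

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    comps = {c: rng.standard_normal(80) for c in ("C", "I", "G", "NX")}
    shares = {"C": 0.68, "I": 0.18, "G": 0.17, "NX": -0.03}
    print(identity_consistent_nowcast(comps, shares))
```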
By: | Bulat Gafarov; Matthias Meier; José Luis Montiel Olea
Abstract: | We study the properties of projection inference for set-identified Structural Vector Autoregressions. A nominal 1 − α projection region collects the structural parameters that are compatible with a 1 − α Wald ellipsoid for the model's reduced-form parameters (autoregressive coefficients and the covariance matrix of residuals). We show that projection inference can be applied to a general class of stationary models, is computationally feasible, and, as the sample size grows large, produces regions for the structural parameters and their identified set with both frequentist coverage and robust Bayesian credibility of at least 1 − α. A drawback of the projection approach is that both coverage and robust credibility may be strictly above their nominal level. Following the work of Kaido, Molinari, and Stoye (2014), we 'calibrate' the radius of the Wald ellipsoid to guarantee that, for a given posterior on the reduced-form parameters, the robust Bayesian credibility of the projection method is exactly 1 − α. If the bounds of the identified set are differentiable, our calibrated projection also covers the identified set with probability 1 − α. We illustrate the main results of the paper using the demand/supply model for the U.S. labor market in Baumeister and Hamilton (2015).
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.14106 |
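A simple way to approximate a projection region numerically is to sample the Wald ellipsoid and take the range of the structural function over the draws. The sketch below does this for a scalar function g of the reduced-form parameters; it omits the paper's calibrated-radius refinement, and the function name and defaults are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def projection_interval(phi_hat, Sigma, g, alpha=0.10, draws=20000, seed=0):
    """Approximate projection confidence interval for scalar g(phi):
    the range of g over the 1-alpha Wald ellipsoid around phi_hat,
    estimated by uniform sampling inside the ellipsoid. Sketch only;
    the calibrated-radius refinement in the paper is not implemented.
    """
    rng = np.random.default_rng(seed)
    k = len(phi_hat)
    r = np.sqrt(chi2.ppf(1.0 - alpha, df=k))        # ellipsoid radius
    C = np.linalg.cholesky(Sigma)
    u = rng.standard_normal((draws, k))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random directions
    w = rng.random((draws, 1)) ** (1.0 / k)         # uniform-in-ball radii
    pts = phi_hat + r * (w * u) @ C.T               # points in ellipsoid
    vals = np.array([g(p) for p in pts])
    return vals.min(), vals.max()

if __name__ == "__main__":
    phi_hat = np.array([0.5, -0.2])
    Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
    print(projection_interval(phi_hat, Sigma, lambda p: p[0] / (1 - p[1])))
```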
By: | Bobeica, Elena; Holton, Sarah; Huber, Florian; Martínez Hernández, Catalina |
Abstract: | We propose a novel empirical structural inflation model that captures non-linear shock transmission using a Bayesian machine learning framework that combines VARs with non-linear structural factor models. Unlike traditional linear models, our approach allows for non-linear effects at all impulse response horizons. Identification is achieved via sign, zero, and magnitude restrictions within the factor model. Applying our method to euro area energy shocks, we find that inflation reacts disproportionately to large shocks, while small shocks trigger no significant response. These non-linearities are present along the pricing chain, being more pronounced upstream and gradually attenuating downstream.
Keywords: | energy; euro area; inflation; machine learning; non-linear model
JEL: | E31 C32 C38 Q43
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:ecb:ecbwps:20253052 |
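Sign, zero, and magnitude restrictions are typically imposed by accept/reject sampling over random rotations of a Cholesky factor. The sketch below shows that standard device in a plain linear setting; the paper embeds such restrictions in a non-linear structural factor model, which this code does not attempt. The example checks are hypothetical.

```python
import numpy as np

def draw_sign_restricted(Sigma_u, checks, max_tries=10000, seed=0):
    """Accept/reject draw of a structural impact matrix B = P @ Q satisfying
    all user-supplied checks, where P is the Cholesky factor of Sigma_u and
    Q is a Haar-distributed rotation. Standard linear-SVAR device, shown
    only to illustrate how sign/magnitude restrictions are imposed.
    """
    rng = np.random.default_rng(seed)
    P = np.linalg.cholesky(Sigma_u)
    n = P.shape[0]
    for _ in range(max_tries):
        Q, R = np.linalg.qr(rng.standard_normal((n, n)))
        Q = Q * np.sign(np.diag(R))     # normalization for a Haar draw
        B = P @ Q
        if all(check(B) for check in checks):
            return B
    raise RuntimeError("no admissible rotation found")

if __name__ == "__main__":
    Sigma_u = np.array([[1.0, 0.3], [0.3, 1.0]])
    # e.g. an 'energy' shock (column 0) moves both variables up on impact,
    # with a magnitude bound on the second response
    checks = [lambda B: B[0, 0] > 0,
              lambda B: 0 < B[1, 0] < 0.9 * B[0, 0]]
    print(draw_sign_restricted(Sigma_u, checks))
```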
By: | Alain-Philippe Fortin (University of Geneva); Patrick Gagliardini (University of Lugano; Swiss Finance Institute); O. Scaillet (Swiss Finance Institute - University of Geneva) |
Abstract: | We derive optimal maximin tests of error sphericity in latent factor analysis of short panels. We rely on a Generalized Method of Moments setting with optimal weighting under a large cross-sectional dimension n and a fixed time series dimension T. We outline the asymptotic distributions of the estimators as well as the asymptotic maximin optimality of the Wald, Lagrange Multiplier, and Likelihood Ratio-type tests. The characterisation of optimality relies on finding the limit Gaussian experiment in strongly identified GMM models under a block-dependence structure and unobserved heterogeneity. We reject sphericity in an empirical application to a large cross-section of U.S. stocks, which casts doubt on the validity of routinely applying Principal Component Analysis to short panels of monthly financial returns.
Keywords: | Latent factor analysis, Generalized Method of Moments, maximin test, Gaussian experiment, fixed effects, panel data, sphericity, large n and fixed T asymptotics, equity returns |
JEL: | C12 C23 C38 C58 G12 |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:chf:rpseri:rp2527 |
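As a point of reference for what a sphericity test looks like in the large-n, fixed-T regime, here is John's (1971) classical test of H0: Σ = σ²Iᴛ under normality. It is a textbook benchmark only, not the paper's optimal maximin GMM-based tests, and it assumes the errors are already mean-zero.

```python
import numpy as np
from scipy.stats import chi2

def john_sphericity_test(E):
    """John's (1971) sphericity test of H0: cov = sigma^2 * I_T for an
    (n x T) array of mean-zero errors, with n large and T fixed: under H0
    and normality, n*T*U/2 is asymptotically chi-squared, T(T+1)/2 - 1 df.
    """
    n, T = E.shape
    S = E.T @ E / n                          # T x T sample covariance
    R = S / (np.trace(S) / T)                # scale-free version of S
    U = np.trace(R @ R) / T - 1.0
    stat = 0.5 * n * T * U
    df = T * (T + 1) // 2 - 1
    return stat, chi2.sf(stat, df)           # statistic and p-value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(john_sphericity_test(rng.standard_normal((5000, 6))))  # H0 true
```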
By: | Sai Krishna Kamepalli; Serena Ng; Francisco Ruge-Murcia |
Abstract: | Skewness is a prevalent feature of macroeconomic time series and may arise exogenously, because shocks are asymmetrically distributed, or endogenously, as shocks propagate through production networks. Previous theoretical work often studies these two possibilities in isolation. We nest all possible sources of skewness in a model where output has a network, a common, and an idiosyncratic component. In this model, skewness can arise not only from the three components, but also from coskewness due to the higher-order covariation between components. An analysis of output growth in 43 U.S. sectors shows that coskewness is a key source of asymmetry in the data and constitutes a connectivity channel not previously explored. To help interpret our results, we construct and estimate a micro-founded multi-sector general equilibrium model and show that it can generate skewness and coskewness consistent with the data.
JEL: | C3 C5 E03 |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33701 |
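The split of aggregate skewness into own-skewness and coskewness terms is simple third-moment arithmetic: for y = Σᵢ xᵢ in deviations from means, E[y³] = Σᵢⱼₖ E[xᵢxⱼxₖ], and the index triples that are not all equal capture coskewness. A small numpy check of this identity (the components here are simulated, not the paper's sectoral data):

```python
import numpy as np
from itertools import product

def third_moment_split(components):
    """Split E[y^3], y = sum of mean-zero components, into own-skewness
    terms E[x_i^3] and coskewness cross terms E[x_i x_j x_k] (i, j, k not
    all equal). The identity own + cross = E[y^3] holds exactly.
    """
    X = [np.asarray(x, float) - np.mean(x) for x in components]
    own = sum(np.mean(x ** 3) for x in X)
    cross = sum(np.mean(X[i] * X[j] * X[k])
                for i, j, k in product(range(len(X)), repeat=3)
                if not (i == j == k))
    assert np.isclose(own + cross, np.mean(sum(X) ** 3))
    return own, cross

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = rng.standard_normal(100000)               # common component
    comps = [f + 0.5 * rng.standard_normal(100000),
             f ** 2 - 1,                          # skewed, coskewed with f
             rng.standard_normal(100000)]         # idiosyncratic
    print(third_moment_split(comps))
```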