on Econometric Time Series
Issue of 2026–03–16
fourteen papers chosen by Simon Sosvilla-Rivero, Instituto Complutense de Análisis Económico |
| By: | Alexander Aue; Sebastian Kühnert; Gregory Rice; Jeremy VanderDoes |
| Abstract: | AutoRegressive Conditional Heteroscedasticity (ARCH) models are standard for modeling time series exhibiting volatility, with a rich literature in univariate and multivariate settings. In recent years, these models have been extended to function spaces. However, functional ARCH and generalized ARCH (GARCH) processes established in the literature have thus far been restricted to model "pointwise" variances. In this paper, we propose a new ARCH framework for data residing in general separable Hilbert spaces that accounts for the full evolution of the conditional covariance operator. We define a general operator-level ARCH model. For a simplified Constant Conditional Correlation version of the model, we establish conditions under which such models admit strictly and weakly stationary solutions, finite moments, and weak serial dependence. Additionally, we derive consistent Yule-Walker-type estimators of the infinite-dimensional model parameters. The practical relevance of the model is illustrated through simulations and a data application to high-frequency cumulative intraday returns. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.10272 |
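A minimal sketch of the restricted "pointwise" functional ARCH(1) recursion that the paper's operator-level model generalizes, simulated on a grid; the baseline variance function `delta`, the integral kernel `alpha`, and the grid size are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Illustrative "pointwise" functional ARCH(1) on a grid of [0, 1] --
# the restricted case the paper's operator-level framework extends.
# delta (baseline variance) and alpha (integral kernel) are assumptions.
rng = np.random.default_rng(0)
m = 50                                    # grid resolution
u = np.linspace(0, 1, m)
delta = 0.1 + 0.1 * u                     # baseline variance function
alpha = 0.3 * np.exp(-np.abs(u[:, None] - u[None, :]))  # kernel alpha(u, v)

T = 200
X = np.zeros((T, m))
for t in range(1, T):
    # sigma_t^2(u) = delta(u) + integral of alpha(u, v) X_{t-1}^2(v) dv
    sigma2 = delta + (alpha @ X[t - 1] ** 2) / m
    X[t] = np.sqrt(sigma2) * rng.standard_normal(m)
```

Each curve's conditional variance at point u depends on the whole previous squared curve through the kernel, the functional analogue of volatility clustering.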
| By: | Fotso, Chris Toumping; Özer, Yeliz; Palumbo, Dario; Sibbertsen, Philipp |
| Abstract: | A dynamic model for heavy-tailed cylindrical time series is developed by combining score-driven models with a generalised Pareto-type cylindrical distribution. The proposed specification extends existing cylindrical models by allowing the location, scale, concentration, and, crucially, the tail index of the linear component, through the conditional distribution of speed, to vary according to its score. Whereas the linear component of the Weibull-von Mises model exhibits exponentially decaying tails, the GPar specification admits polynomial tail decay. An explicit expression for the time-varying circular-linear dependence measure is also derived. The methodology is applied to high-frequency data from two onshore wind turbines in Germany. The empirical results indicate that allowing time-varying tail thickness leads to overall improvements compared to the Weibull-von Mises model. The proposed model provides a flexible and computationally tractable framework for analysing heavy-tailed cylindrical time series in environmental and energy applications. |
| Keywords: | cylindrical distributions, dynamic correlation, generalised Pareto, score-driven models, Weibull-von Mises, wind energy. |
| JEL: | C13 C18 C22 C46 Q42 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:han:dpaper:dp-745 |
| By: | Han Chen (College of Finance and Statistics, Hunan University); Yijie Fei (College of Finance and Statistics, Hunan University); Jun Yu (Faculty of Business Administration, University of Macau) |
| Abstract: | Modeling the dynamics of correlations of multiple time series is an important yet difficult task, especially when the dimension is not confined to be low. In this paper, we propose a new multivariate stochastic volatility model featuring a block correlation structure. Our specification is built upon the new parametrization of the correlation matrix of Archakov & Hansen (2021) and extends the MSV-GFT model introduced in Chen et al. (2025). A Particle Gibbs Ancestor Sampling (PGAS) method is proposed to conduct the Bayesian analysis. It is shown to perform well for our model in finite samples. An empirical application based on a dozen U.S. stocks shows that our new model outperforms alternative specifications in terms of both the in-sample performance and the out-of-sample performance. |
| Keywords: | Block correlation matrix; Generalized Fisher transformation; Markov chain Monte Carlo; Multivariate stochastic volatility; Particle filter |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:boa:wpaper:202638 |
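A sketch of the Archakov and Hansen (2021) parametrization the model builds on: a correlation matrix maps bijectively to the off-diagonal elements of its matrix logarithm, and the inverse recovers the diagonal by a simple fixed-point iteration. The example matrix and iteration count are illustrative.

```python
import numpy as np
from scipy.linalg import expm, logm

def corr_to_gamma(C):
    # Generalized Fisher transformation: off-diagonal entries of log(C)
    L = logm(C).real
    return L[np.tril_indices_from(L, -1)]

def gamma_to_corr(gamma, n, iters=50):
    # Inverse map: find the diagonal of log(C) so that expm has unit diagonal
    A = np.zeros((n, n))
    A[np.tril_indices(n, -1)] = gamma
    A = A + A.T
    d = np.zeros(n)
    for _ in range(iters):
        M = expm(A + np.diag(d))
        d = d - np.log(np.diag(M))    # fixed-point update
    return expm(A + np.diag(d))

C = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
g = corr_to_gamma(C)          # unconstrained vector in R^{n(n-1)/2}
C_rec = gamma_to_corr(g, 3)   # recovers C
```

Because `g` is unconstrained, dynamics (or a block structure) can be placed on it directly without violating positive definiteness of the implied correlation matrix.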
| By: | Kurt G. Lunsford; Kenneth D. West |
| Abstract: | We use long-run annual cross-country data for 10 macroeconomic variables to evaluate the long-horizon forecast distributions of six forecasting models. The variables we use range from ones having little serial correlation to ones having persistence consistent with unit roots. Our forecasting models include simple time series models and frequency domain models developed in Müller and Watson (2016). For plausibly stationary variables, an AR(1) model and a frequency domain model that does not require the user to take a stand on the order of integration appear reasonably well calibrated for forecast horizons of 10 and 25 years. For plausibly non-stationary variables, a random walk model appears reasonably well calibrated for forecast horizons of 10 and 25 years. |
| JEL: | C22 C53 E17 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34904 |
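The two simplest benchmarks in the paper have closed-form long-horizon forecast distributions, sketched below; all parameter values are illustrative.

```python
import numpy as np

# h-step-ahead forecast mean and variance for a stationary AR(1)
# versus a random walk (illustrative parameters).
def ar1_forecast(y_T, mu, rho, sigma2, h):
    mean = mu + rho ** h * (y_T - mu)                 # reverts toward mu
    var = sigma2 * (1 - rho ** (2 * h)) / (1 - rho ** 2)
    return mean, var

def rw_forecast(y_T, sigma2, h):
    return y_T, h * sigma2                            # variance grows in h

m10, v10 = ar1_forecast(y_T=2.0, mu=0.0, rho=0.8, sigma2=1.0, h=10)
m25, v25 = ar1_forecast(y_T=2.0, mu=0.0, rho=0.8, sigma2=1.0, h=25)
_, v_rw = rw_forecast(y_T=2.0, sigma2=1.0, h=25)
# AR(1) forecast variance plateaus at sigma2 / (1 - rho^2) ~= 2.78,
# while the random walk variance keeps growing linearly with horizon.
```

This contrast is why the calibration of 10- and 25-year interval forecasts hinges on whether a variable is plausibly stationary or plausibly a unit-root process.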
| By: | Alexander Chudik; Lutz Kilian |
| Abstract: | This paper proposes mean group and pooled estimators of impulse responses based on mixed-frequency auxiliary distributed lag (DL), autoregressive distributed lag (ARDL) or vector autoregressive distributed lag (VARDL) estimating equations. Our setup assumes that the data are generated by a high-frequency VAR process. While the shock of interest is directly observed at high frequency, the outcome variable is only observed as a temporally aggregated variable at a lower frequency. We derive the asymptotic distributions of the six proposed estimators. Monte Carlo experiments show that pooled estimators generally perform better than the corresponding mean group estimators for relevant sample sizes. An empirical application to the pass-through from daily wholesale gasoline price shocks to monthly consumer price inflation illustrates the usefulness of the proposed methods. |
| Keywords: | Mixed frequencies; temporal aggregation; impulse responses; shock sequences; distributed lag (DL); autoregressive distributed lag (ARDL); vector autoregressive distributed lag (VARDL) |
| Date: | 2026–02–17 |
| URL: | https://d.repec.org/n?u=RePEc:fip:feddwp:102857 |
| By: | Niko Hauzenberger; Massimiliano Marcellino; Michael Pfarrhofer; Anna Stelzer |
| Abstract: | We develop Bayesian machine learning methods for mixed frequency data. This involves handling frequency mismatches and specifying functional relationships between (possibly many) predictors and low frequency dependent variables. We use Gaussian Processes (GPs) in direct nonlinear predictive regressions, and compress higher frequency variables in a structured way. This yields a set of kernels for GPs with distinct properties and implications. We evaluate the proposed framework in an out-of-sample exercise focusing on quarterly US GDP growth and inflation. Our approach leverages high-dimensional mixed frequency data in a computationally efficient way, and offers robustness and gains in predictive accuracy along several dimensions. |
| Keywords: | Bayesian nonparametrics, direct forecasting, nowcasting, dimension reduction, MIDAS |
| JEL: | C11 C22 C53 E31 E37 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:baf:cbafwp:cbafwp26265 |
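A minimal sketch of the two ingredients: compress high-frequency lags with a decaying (MIDAS-style) weight, then run Gaussian-process regression with an RBF kernel on the compressed predictor. The decay rate, length-scale, noise level, and data-generating process below are all illustrative assumptions, not the paper's kernels.

```python
import numpy as np

# Structured compression of high-frequency lags + GP predictive regression
# (illustrative sketch; not the paper's kernel constructions).
rng = np.random.default_rng(2)
T, hf = 120, 60                       # low-frequency obs, HF lags per obs
Z = rng.standard_normal((T, hf))      # high-frequency predictor lags
w = np.exp(-0.05 * np.arange(hf))
w /= w.sum()                          # normalized decaying lag weights
x = Z @ w                             # compressed regressor
y = np.sin(2 * x) + 0.1 * rng.standard_normal(T)

def rbf(a, b, ls=0.2):
    # squared-exponential kernel between two 1-d input vectors
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

K = rbf(x, x) + 0.01 * np.eye(T)      # kernel matrix + noise variance
alpha = np.linalg.solve(K, y)
x_new = np.array([0.0, 0.2])
y_pred = rbf(x_new, x) @ alpha        # GP posterior predictive mean
```

The compression step keeps the GP's input dimension fixed regardless of how many high-frequency lags enter, which is what makes the approach computationally tractable in high dimensions.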
| By: | Francesco Giancaterini; Alain Hecq; Joann Jasiak; Aryan Manafi Neyazi |
| Abstract: | This paper introduces a regularized test of the null hypothesis of the absence of linear and nonlinear serial dependence for high-dimensional non-Gaussian time series. Our approach extends the portmanteau test introduced in Jasiak and Neyazi (2023) to the high-dimensional setting. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.10152 |
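The univariate building block of such tests is a Ljung-Box-type portmanteau statistic, applied to a series and to its squares to pick up linear and nonlinear (ARCH-type) serial dependence respectively; the high-dimensional regularization of the paper is not reproduced in this sketch.

```python
import numpy as np

def ljung_box(x, max_lag):
    # Q = T(T+2) * sum_k rho_k^2 / (T-k), approximately chi2(max_lag)
    # under the null of no serial correlation.
    T = len(x)
    x = x - x.mean()
    denom = np.sum(x ** 2)
    Q = sum(
        (np.sum(x[k:] * x[:-k]) / denom) ** 2 / (T - k)
        for k in range(1, max_lag + 1)
    )
    return T * (T + 2) * Q

rng = np.random.default_rng(3)
noise = rng.standard_normal(500)          # serially independent
ar = np.zeros(500)
for t in range(1, 500):
    ar[t] = 0.8 * ar[t - 1] + noise[t]    # strongly serially dependent
Q_noise, Q_ar = ljung_box(noise, 5), ljung_box(ar, 5)
```

Applying the same statistic to `x**2` instead of `x` is the standard way to detect nonlinear dependence that the autocorrelations of the levels miss.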
| By: | Milos Ciganovic; Federico D'Amario; Massimiliano Tancioni |
| Abstract: | We modify the Double Machine Learning estimator to broaden its applicability to macroeconomic time-series settings. A deterministic cross-fitting step, termed Reverse Cross-Fitting, leverages the time-reversibility of stationary series to improve sample utilization and efficiency. We detail and prove the conditions under which the estimator is asymptotically valid. We then demonstrate, through simulations, that its performance remains valid in realistic finite samples and is robust to model misspecification and violations of assumptions, such as heteroskedasticity. In high dimensions, predictive metrics for tuning nuisance learners do not generally minimize bias in the causal score. We propose a calibration rule targeting a "Goldilocks zone", a region of tuning parameters that delivers stable, partialled-out signals and reduced small-sample bias. Finally, we apply our procedure to residualized Local Projections to estimate the dynamic effects of a rise in Tier 1 regulatory capital. The results underscore the usefulness of the methodology for inference in macroeconomic applications. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.10999 |
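A sketch of the standard DML partialling-out step with two-fold cross-fitting; the paper's Reverse Cross-Fitting replaces the split scheme with a deterministic, time-ordered one, and that refinement is not reproduced here. The data-generating process and polynomial nuisance learners are illustrative.

```python
import numpy as np

# Double Machine Learning for a partially linear model
# Y = theta*D + g(X) + eps, with two-fold cross-fitting (sketch).
rng = np.random.default_rng(1)
n = 400
X = rng.standard_normal(n)
D = np.sin(X) + 0.5 * rng.standard_normal(n)     # treatment
theta = 1.5                                      # true causal effect
Y = theta * D + X ** 2 + 0.5 * rng.standard_normal(n)

def fit_poly(x, y, deg=3):
    # cheap stand-in for an ML nuisance learner
    c = np.polyfit(x, y, deg)
    return lambda z: np.polyval(c, z)

folds = [np.arange(0, n // 2), np.arange(n // 2, n)]
num = den = 0.0
for tr, te in [(folds[0], folds[1]), (folds[1], folds[0])]:
    g = fit_poly(X[tr], Y[tr])                   # estimates E[Y|X]
    m = fit_poly(X[tr], D[tr])                   # estimates E[D|X]
    ry, rd = Y[te] - g(X[te]), D[te] - m(X[te])  # partialled-out residuals
    num += rd @ ry
    den += rd @ rd
theta_hat = num / den                            # Neyman-orthogonal estimate
```

Cross-fitting keeps the nuisance estimation errors orthogonal to the fold on which the causal score is evaluated; the paper's contribution is making that split valid and efficient for stationary time series.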
| By: | Abdulrahman Alswaidan; Jeffrey D. Varner |
| Abstract: | Generating synthetic financial time series that preserve statistical properties of real market data is essential for stress testing, risk model validation, and scenario design. Existing approaches, from parametric models to deep generative networks, struggle to simultaneously reproduce heavy-tailed distributions, negligible linear autocorrelation, and persistent volatility clustering. We propose a hybrid hidden Markov framework that discretizes continuous excess growth rates into Laplace quantile-defined market states and augments regime switching with a Poisson-driven jump-duration mechanism to enforce realistic tail-state dwell times. Parameters are estimated by direct transition counting, bypassing the Baum-Welch EM algorithm. Synthetic data quality is evaluated using Kolmogorov-Smirnov and Anderson-Darling pass rates for distributional fidelity, and ACF mean absolute error for temporal structure. Applied to ten years of SPY data across 1,000 simulated paths, the framework achieves KS and AD pass rates exceeding 97% and 91% in-sample and 94% out-of-sample (calendar year 2025), partially reproducing the ARCH effect that standard regime-switching models miss. No single model dominates all quality dimensions: GARCH(1,1) reproduces volatility clustering more accurately but fails distributional tests (5.5% KS pass rate), while the standard HMM without jumps achieves higher distributional fidelity but cannot generate persistent high-volatility regimes. The proposed framework offers the best joint quality profile across distributional, temporal, and tail-coverage metrics. A Single-Index Model extension propagates the SPY factor path to a 424-asset universe, enabling scalable correlated synthetic path generation while preserving cross-sectional correlation structure. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.10202 |
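The estimation shortcut is simple to sketch: once states are observed (here as discretized bins), the Markov transition matrix is just the row-normalized matrix of transition counts, so no Baum-Welch EM is needed. The state sequence below is illustrative.

```python
import numpy as np

def transition_matrix(states, n_states):
    # Estimate P[i, j] = Pr(next state = j | current state = i)
    # by direct transition counting and row normalization.
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1, rows)

# illustrative state labels (e.g. quantile bins of discretized returns)
states = [0, 0, 1, 2, 1, 0, 0, 1, 1, 2]
P = transition_matrix(states, 3)
```

From state 0 the sequence moves to state 0 twice and to state 1 twice, so the first row of `P` is `[0.5, 0.5, 0.0]`.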
| By: | Yinhuan Li; Chenxin Lyu; Ruodu Wang |
| Abstract: | Risk forecasts in financial regulation and internal management are calculated from historical data. Unknown structural changes in financial data pose a substantial challenge in selecting an appropriate look-back window for risk modeling and forecasting. We develop a data-driven online learning method, called bootstrap-based adaptive window selection (BAWS), that adaptively determines the window size in a sequential manner. A central component of BAWS is to compare the realized scores against a data-dependent threshold, which is evaluated using a bootstrap procedure. The proposed method is applicable to forecasts of risk measures that are elicitable individually or jointly, such as the Value-at-Risk (VaR) and the pair of the VaR and the corresponding Expected Shortfall. Through simulation studies and empirical analyses, we demonstrate that BAWS generally outperforms the standard rolling window approach and the recently developed method of stability-based adaptive window selection, especially when there are structural changes in the data-generating process. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.01157 |
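A sketch of the ingredients BAWS builds on: VaR as a quantile forecast from a rolling look-back window, scored by the pinball loss under which quantiles are elicitable. The bootstrap threshold that drives the adaptive window choice is not reproduced; the window sizes and the simulated variance break are illustrative.

```python
import numpy as np

def rolling_quantile(x, window, alpha):
    # historical-simulation alpha-quantile forecast from a rolling window
    return np.array([np.quantile(x[t - window:t], alpha)
                     for t in range(window, len(x))])

def pinball_loss(y, q, alpha):
    # strictly consistent scoring function for the alpha-quantile
    u = y - q
    return np.mean(np.maximum(alpha * u, (alpha - 1) * u))

rng = np.random.default_rng(4)
r = np.concatenate([rng.normal(0, 1, 500),      # calm regime
                    rng.normal(0, 2, 500)])     # post-break regime
alpha = 0.05                                    # 5% left-tail quantile
scores = {w: pinball_loss(r[w:], rolling_quantile(r, w, alpha), alpha)
          for w in (50, 250)}
```

An adaptive scheme like BAWS would compare such realized scores against a data-dependent threshold to decide, sequentially, when a shorter or longer window is warranted.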
| By: | Adir Saly-Kaufmann; Kieran Wood; Jan Peter-Calliess; Stefan Zohren |
| Abstract: | We present a large-scale benchmark of modern deep learning architectures for a financial time series prediction and position sizing task, with a primary focus on Sharpe ratio optimization. Evaluating linear models, recurrent networks, transformer-based architectures, state space models, and recent sequence representation approaches, we assess out-of-sample performance on a daily futures dataset covering commodities, equity indices, bonds, and FX from 2010 to 2025. Our evaluation goes beyond average returns and includes statistical significance, downside and tail risk measures, breakeven transaction cost analysis, robustness to random seed selection, and computational efficiency. We find that models explicitly designed to learn rich temporal representations consistently outperform linear benchmarks and generic deep learning models, which often lead the ranking in standard time series benchmarks. The hybrid VSN with LSTM model, a combination of Variable Selection Networks (VSN) and LSTMs, achieves the highest overall Sharpe ratio, while VSN with xLSTM and LSTM with PatchTST exhibit superior downside-adjusted characteristics. xLSTM demonstrates the largest breakeven transaction cost buffer, indicating improved robustness to trading frictions. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.01820 |
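The training objective at the center of such benchmarks can be sketched as a negative annualized Sharpe ratio of the strategy returns implied by the model's positions; the annualization factor of 252 trading days and the simulated data are illustrative.

```python
import numpy as np

def neg_sharpe(positions, returns, periods_per_year=252):
    # Loss to minimize: negative annualized Sharpe ratio of the
    # strategy P&L (positions times asset returns).
    strat = positions * returns
    return -np.sqrt(periods_per_year) * strat.mean() / (strat.std() + 1e-9)

rng = np.random.default_rng(5)
r = 0.01 * rng.standard_normal(1000)
loss_oracle = neg_sharpe(np.sign(r), r)                   # perfect foresight
loss_random = neg_sharpe(np.sign(rng.standard_normal(1000)), r)
```

In practice the same function is written with differentiable tensor operations so that gradients flow back through the position-sizing network.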
| By: | Cassim, Lucius; Mallick, Debdulal |
| Abstract: | We investigate whether public debt in least developed countries (LDCs) is serviced through fiscal adjustments or by triggering inflation. Using time series data from LDCs for the 1980-2023 period and employing a Bayesian structural vector autoregressive model, we estimate the response of public liabilities to positive surplus shocks. We find support for the latter hypothesis, consistent with the Fiscal Theory of the Price Level: these countries are characterized by non-Ricardian fiscal regimes. One important implication of our findings is that conventional monetary policy may not be effective in stabilizing inflation in these countries. In contrast, when we replicate the same exercise for developed countries, we find support for the former hypothesis. We further explore the role of institutional development in explaining the fiscal (in)discipline in LDCs. |
| Keywords: | Non-Ricardian fiscal regime; Fiscal theory of price level; Bayesian SVAR; Central bank; Monetary policy; Least developed countries; Institutions. |
| JEL: | C11 C32 E02 E58 E62 E63 |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:127592 |
| By: | Dante Amengual (CEMFI, Centro de Estudios Monetarios y Financieros); Gabriele Fiorentini (Università di Firenze and RCEA); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros) |
| Abstract: | The EM principle implies the moments underlying the information matrix test for switching regressions are the expectation given the data of the moments one would test if one knew the subpopulation each observation originated from. Thus, we identify components related to conditional heteroskedasticity, conditional and unconditional skewness, and unconditional kurtosis of regression residuals within each regime. Simulations indicate analytical expressions for the asymptotic covariance matrix of those moments adjusted for sampling variability in parameter estimators provide reliable finite sample sizes and good power against various alternatives, especially combined with the parametric bootstrap. We apply the test to cross-country convergence regressions. |
| Keywords: | Asymmetry, convergence regressions, expectation-maximization principle, heteroskedasticity, incomplete data, kurtosis. |
| JEL: | C24 C34 C52 O47 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:cmf:wpaper:wp2026_2601 |
| By: | Campbell R. Harvey; Alessio Sancetta; Yuqian Zhao |
| Abstract: | Researchers generally acknowledge that statistical tests must be adjusted when hundreds of factors and trading strategies have been examined. But how should these adjustments be made? Existing methods are often misunderstood or misapplied. We show that proper inference requires accounting for dependence across tests, correctly specifying the null distribution, and mitigating sample-selection bias. We develop a simple framework that avoids assumptions about the total number of tests run and yields a lower bound on valid significance thresholds, implying that researchers should employ a t-statistic cutoff of at least 3.0. In addition, we advocate using the local False Discovery Rate, which provides the probability that the null hypothesis is true for a given test-statistic realization, information that a conventional p-value cannot supply. |
| JEL: | C12 C58 G11 G12 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34898 |
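Two quick calculations implied by the abstract: the two-sided p-value at the proposed t-cutoff of 3.0 versus the conventional 1.96, and a toy two-groups local FDR, fdr(t) = pi0 f0(t) / f(t); the null fraction pi0 and the N(3, 1) effect distribution are illustrative assumptions, not the paper's estimates.

```python
from scipy import stats

# Two-sided p-values implied by the two t-statistic cutoffs
p_conventional = 2 * (1 - stats.norm.cdf(1.96))   # ~0.05
p_proposed = 2 * (1 - stats.norm.cdf(3.0))        # ~0.0027

# Toy two-groups local FDR at t = 3: nulls N(0,1), true effects N(3,1)
pi0 = 0.9                       # assumed fraction of true nulls
t = 3.0
f0 = stats.norm.pdf(t, 0, 1)    # null density at t
f1 = stats.norm.pdf(t, 3, 1)    # alternative density at t
lfdr = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)
```

Under these assumptions, a test statistic of 3.0 still carries roughly a one-in-ten chance of being a true null, which is the kind of information a p-value alone does not convey.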