NEP: New Economics Papers on Econometric Time Series

Issue of 2025–12–15
Sixteen papers chosen by Simon Sosvilla-Rivero, Instituto Complutense de Análisis Económico
| By: | Andersson, Jonas (Dept. of Business and Management Science, Norwegian School of Economics); Karlis, Dimitris (Dept. of Statistics, Athens University of Economics and Business) |
| Abstract: | The literature on multivariate time series is largely limited to either models based on the multivariate Gaussian distribution or models specifically developed for a given application. In this paper we develop a general approach based on an underlying, unobserved Gaussian Vector Autoregressive (VAR) model. Using a transformation, we can capture the time dynamics as well as the distributional properties of a multivariate time series. The model is called the Vector Autoregressive To Anything (VARTA) model and was originally presented by Biller and Nelson (2003), who used it for simulation purposes. In this paper we derive a maximum likelihood estimator for the model and investigate its performance. We also provide diagnostic analysis and show how to compute the predictive distribution. The proposed approach can yield better estimates of the forecasting distributions, which need not be Gaussian as in standard VAR models. |
| Keywords: | non-Gaussian time series; maximum likelihood estimation; predictive distribution |
| JEL: | C13 C22 C58 |
| Date: | 2025–12–04 |
| URL: | https://d.repec.org/n?u=RePEc:hhs:nhhfms:2025_025 |
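A minimal sketch of the transformation idea behind the VARTA model described above: a latent Gaussian VAR(1) is passed through the standard normal CDF and then through arbitrary inverse marginal CDFs, so the observed series inherits the VAR dynamics but has non-Gaussian marginals. The marginal families, parameters and series length below are illustrative assumptions, and the paper's maximum likelihood estimation is not reproduced.

import numpy as np
from scipy.stats import norm, gamma, t

rng = np.random.default_rng(0)
T, k = 500, 2
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])          # latent VAR(1) coefficient matrix (illustrative)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])      # innovation covariance
L = np.linalg.cholesky(Sigma)

# 1) simulate the unobserved Gaussian VAR(1)
z = np.zeros((T, k))
for s in range(1, T):
    z[s] = A @ z[s - 1] + L @ rng.standard_normal(k)

# 2) standardise each latent component to roughly N(0,1) marginals
z_std = (z - z.mean(axis=0)) / z.std(axis=0)

# 3) transform: Phi(z) is uniform, then apply inverse target CDFs
u = norm.cdf(z_std)
x1 = gamma(a=2.0, scale=1.5).ppf(u[:, 0])   # skewed, positive marginal
x2 = t(df=4).ppf(u[:, 1])                   # heavy-tailed marginal
x = np.column_stack([x1, x2])               # observed non-Gaussian series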
| By: | Tarek Jouini (Department of Economics, University of Windsor) |
| Abstract: | We propose an upper bound for the asymptotic approximation of the one-step-ahead forecast mean squared error (MSE) in infinite-order vector autoregression (VAR) settings, i.e., VAR(infinity). Once minimized over a truncation lag of small order o(T^(1/3)), where T is the sample size, it yields a consistent truncation of the autoregression associated with the efficient one-step forecast error covariance matrix. When the infinite-order process degenerates to a finite-order VAR, we show that the resulting truncation is strongly consistent (eventually asymptotically), given a parameter epsilon >= 2. We note in particular that when epsilon tends to infinity, our order-selection criterion (upper bound) becomes inconsistent, with a variant of it reducing to the Akaike information criterion (AIC). Thus, unlike the final prediction error (FPE) criterion and AIC, our criteria have the desirable sampling property of being consistent, like the criteria of Hannan and Quinn and of Schwarz. Compared with conventional criteria, our model-selection procedures not only better handle the multivariate dynamic structure of the time series data, through a compound penalty term that we specify, but also tend to avoid model overfitting in large samples, and hence the singularity problems encountered in practice. Variants of our primary criterion, which in small samples are less parsimonious than AIC in large systems, are also proposed. Besides being strongly consistent asymptotically, they tend to select the actual data-generating process (DGP) most of the time in small samples, as shown with Monte Carlo (MC) simulations. |
| Keywords: | infinite-order autoregression, truncation-lag, order-selection criterion, time series, strongly consistent asymptotically, Monte Carlo simulation. |
| JEL: | C13 C14 C15 C18 C22 C24 C32 C34 C51 C52 C53 C62 C63 C82 C83 |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:wis:wpaper:2506 |
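The authors' upper-bound criterion is not reproduced here. As a generic illustration of the consistency property the abstract contrasts (AIC/FPE versus Schwarz-type penalties), the following sketch fits a VAR(p) by OLS for several lag orders on data from a known VAR(1) and compares the AIC and BIC penalties; the DGP and sample size are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(1)

def simulate_var1(T, A, k=2):
    # simple bivariate VAR(1) with standard normal innovations
    y = np.zeros((T, k))
    for s in range(1, T):
        y[s] = A @ y[s - 1] + rng.standard_normal(k)
    return y

def var_ols_sigma(y, p):
    """Residual covariance of a VAR(p) fitted by multivariate OLS."""
    T, k = y.shape
    Y = y[p:]
    X = np.hstack([y[p - j - 1:T - j - 1] for j in range(p)])  # stacked lags 1..p
    X = np.hstack([np.ones((T - p, 1)), X])                    # intercept
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    return resid.T @ resid / (T - p)

A = np.array([[0.5, 0.2], [0.0, 0.4]])   # true DGP is a VAR(1)
y = simulate_var1(2000, A)
T_obs, k = y.shape
for p in range(1, 7):
    Sigma = var_ols_sigma(y, p)
    ld = np.log(np.linalg.det(Sigma))
    aic = ld + 2 * p * k**2 / T_obs                 # AIC penalty (inconsistent)
    bic = ld + np.log(T_obs) * p * k**2 / T_obs     # Schwarz penalty (consistent)
    print(f"p={p}  AIC={aic:.4f}  BIC={bic:.4f}")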
| By: | Stevenson Bolivar; Rong Chen; Yuefeng Han |
| Abstract: | This paper proposes a new Threshold Tensor Factor Model in Canonical Polyadic (CP) form for tensor time series. By integrating a threshold autoregressive structure for the latent factor process into the tensor factor model in CP form, the model captures regime-switching dynamics in the latent factor process while retaining the parsimony and interpretability of low-rank tensor representations. We develop estimation procedures for the model and establish the theoretical properties of the resulting estimators. Numerical experiments and a real-data application illustrate the practical performance and usefulness of the proposed framework. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.19796 |
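A toy simulation in the spirit of the model above: a rank-2 CP-form tensor factor model whose latent factors follow a two-regime threshold AR(1). The dimensions, threshold and loadings are invented for illustration; estimation, which is the paper's contribution, is not attempted.

import numpy as np

rng = np.random.default_rng(2)
T, d1, d2, R = 300, 10, 8, 2

# CP loading vectors (one pair per factor)
Avecs = rng.standard_normal((d1, R))
Bvecs = rng.standard_normal((d2, R))

# latent factors: threshold AR(1), regime depends on the sign of the previous value
f = np.zeros((T, R))
for s in range(1, T):
    phi = np.where(f[s - 1] > 0.0, 0.8, -0.3)   # regime-specific AR coefficients
    f[s] = phi * f[s - 1] + rng.standard_normal(R)

# observed matrix-valued (order-2 tensor) time series:
# X_t = sum_r f_{t,r} * a_r (outer) b_r + noise
X = np.einsum('tr,ir,jr->tij', f, Avecs, Bvecs) + 0.5 * rng.standard_normal((T, d1, d2))
print(X.shape)   # (300, 10, 8)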
| By: | Astill, Sam; Harvey, David I; Leybourne, Stephen J; Taylor, AM Robert |
| Abstract: | The general solution of the standard stock pricing equation commonly employed in the finance literature decomposes the price of an asset into the sum of a fundamental price and a bubble component that is explosive in expectation. Despite this, the extant literature on bubble detection focuses almost exclusively on modelling asset prices using a single time-varying autoregressive process, a model which is not consistent with the general solution of the stock pricing equation. We consider a different approach, based on an unobserved components time series model whose components correspond to the fundamental and bubble parts of the general solution. Based on the locally best invariant testing principle, we derive a statistic for testing the null hypothesis that no bubble component is present against the alternative that a bubble episode occurs in a given subsample of the data. In order to remain agnostic about the possible number and timing of bubble episodes, our proposed test is based on the maximum of a doubly recursive implementation of this statistic over all possible break dates. Simulation results show that our proposed tests can be significantly more powerful than the industry-standard tests developed by Phillips, Shi and Yu (2015). |
| Keywords: | rational bubbles; unobserved components model; locally best invariant testing principle; double recursion |
| Date: | 2025–12–04 |
| URL: | https://d.repec.org/n?u=RePEc:esy:uefcwp:42258 |
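A small simulation of the decomposition underlying the abstract above: an observed (log) price equal to a random-walk fundamental plus a bubble component that is explosive in expectation over one subsample. The LBI test statistic itself is not reproduced; the dates and parameters below are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
T = 400
bubble_start, bubble_end = 250, 320     # bubble episode (known here, unknown in practice)

# fundamental component: random walk with drift
fundamental = np.cumsum(0.05 + 0.5 * rng.standard_normal(T))

# bubble component: explosive AR root > 1 during the episode, zero otherwise
bubble = np.zeros(T)
for s in range(bubble_start, bubble_end):
    bubble[s] = 1.04 * bubble[s - 1] + 0.1 + 0.2 * rng.standard_normal()
# the bubble collapses back to zero once the episode ends

price = fundamental + bubble            # observed price series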
| By: | Yuefeng Han; Likai Chen; Wei Biao Wu |
| Abstract: | High-dimensional vector autoregressive (VAR) models have numerous applications in fields such as econometrics, biology, and climatology. While prior research has mainly focused on linear VAR models, these approaches can be restrictive in practice. To address this, we introduce a high-dimensional non-parametric sparse additive model, providing a more flexible framework. Our method employs basis expansions to construct high-dimensional nonlinear VAR models. We derive convergence rates and model selection consistency for least squares estimators, taking into account dependence measures of the processes, error moment conditions, sparsity, and basis expansions. Our theory significantly extends prior results for linear VAR models by incorporating both non-Gaussianity and non-linearity. As a key contribution, we derive sharp Bernstein-type inequalities for tail probabilities in both non-sub-Gaussian linear and nonlinear VAR processes, which match the classical Bernstein inequality for independent random variables. Additionally, we present numerical experiments that support our theoretical findings and demonstrate the advantages of the nonlinear VAR model for a gene expression time series dataset. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.18641 |
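A rough sketch of the estimation idea above: one equation of a nonlinear VAR is approximated by additive basis expansions of the lagged variables and estimated with an L1-penalised least-squares fit. The basis functions, penalty level and DGP are illustrative assumptions, not the paper's specification.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
T, k = 600, 5

# nonlinear sparse DGP: only a couple of lagged variables matter for y_1
y = np.zeros((T, k))
for s in range(1, T):
    y[s] = 0.3 * y[s - 1] + rng.standard_normal(k)
    y[s, 0] += 0.6 * np.tanh(y[s - 1, 1])        # nonlinear cross-effect

def basis(v):
    """Simple additive basis for one lagged vector: linear, squared and tanh terms."""
    return np.concatenate([v, v**2, np.tanh(v)])

X = np.array([basis(y[s - 1]) for s in range(1, T)])
target = y[1:, 0]                                # estimate the equation for y_1 only

fit = Lasso(alpha=0.05, max_iter=10000).fit(X, target)
print(np.round(fit.coef_.reshape(3, k), 2))      # rows: linear, squared, tanh blocks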
| By: | Angelini, Elena; Darracq Pariès, Matthieu; Haertel, Thomas; Lalik, Magdalena; Aldama, Pierre; Brázdik, František; Damjanović, Milan; Fantino, Davide; Sanchez, Pablo Garcia; Guarda, Paolo; Kearney, Ide; Mociunaite, Laura; Saliba, Maria Christine; Sun, Yiqiao; Tóth, Máté Barnabás; Stoevsky, Grigor; Van der Veken, Wouter; Virbickas, Ernestas; Bulligan, Guido; Castro, Gabriela; Feješ, Martin; Grejcz, Kacper; Hertel, Katarzyna; Imbrasas, Darius; Kontny, Markus; Krebs, Bob; Opmane, Ieva; Rapa, Abigail Marie; Sariola, Mikko; Sequeira, Ana; Duarte, Rubén Veiga; Viertola, Hannu; Vondra, Klaus |
| Abstract: | This report provides a comprehensive overview of the models and tools used for macroeconomic projections within the European System of Central Banks (ESCB). These include semi-structural models, dynamic stochastic general equilibrium (DSGE) models, time series models and specialised satellite models tailored to particular questions or country-specific aspects. Each type of model has its own strengths and weaknesses and can help answer different questions. The models should therefore be seen as complementary rather than mutually exclusive. Semi-structural models are commonly used to produce baseline projection exercises, since they offer the flexibility to combine expert judgement with empirical data and have enough complexity and structure to provide a good representation of the economy. DSGE models, valued for their internal consistency and strong theoretical foundations, are another core forecasting tool used by some central banks, particularly to analyse counterfactuals. Time series models tend to be better suited to forecasting the short term, while scenario analysis and special events may require satellite models, extensions of existing models or even the development of new models tailored to the question at hand. The report also addresses the challenges to macroeconomic projections posed by data quality, including revisions and missing data, and describes the methods implemented to mitigate their effects. The report identifies “quick wins” to improve the projection process by enhancing the transparency and comparability of results through standardised reporting frameworks and better measurement of the judgement integrated in forecasts. The findings highlight the fundamental role of macroeconomic models in underpinning the ESCB’s projection exercises and ensuring that the Governing Council’s assessments and deliberations rest on coherent, granular and credible analysis of both demand-side and supply-side dynamics. JEL Classification: C30, C53, C54, E52 |
| Keywords: | economic models, forecasting, macroeconometrics, monetary policy |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:ecb:ecbops:2025381 |
| By: | Rutger-Jan Lange; Bram van Os; Dick van Dijk |
| Abstract: | We propose an observation-driven modeling framework that permits time variation in the model parameters using an implicit score-driven (ISD) update. The ISD update maximizes the logarithmic observation density with respect to the parameter vector, while penalizing the weighted L2 distance to a one-step-ahead predicted parameter. This yields an implicit stochastic-gradient update. We show that the popular class of explicit score-driven (ESD) models arises if the observation log density is linearly approximated around the prediction. By preserving the full density, the ISD update globalizes favorable local properties of the ESD update. In particular, for log-concave observation densities, whether correctly specified or not, the ISD filter is stable for all learning rates, while its updates are contractive in mean squared error toward the (pseudo-)true parameter at every time step. We demonstrate the usefulness of ISD filters in simulations and empirical illustrations in finance and macroeconomics. |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2512.02744 |
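A toy illustration of the implicit versus explicit score-driven update for the simplest case, a time-varying mean under a Gaussian observation density with known variance, where the ISD (proximal) update has a closed form. The learning rate and data are arbitrary, and the identity prediction step below is a simplification of the paper's general filter.

import numpy as np

rng = np.random.default_rng(5)
sigma2 = 1.0          # known observation variance
alpha = 2.0           # learning rate (deliberately large)
true_mu = np.concatenate([np.zeros(100), 2.0 * np.ones(100)])
y = true_mu + np.sqrt(sigma2) * rng.standard_normal(200)

mu_esd = mu_isd = 0.0
esd_path, isd_path = [], []
for yt in y:
    # ESD: one explicit gradient (score) step away from the prediction
    mu_esd = mu_esd + alpha * (yt - mu_esd) / sigma2
    # ISD: argmax_mu [ log N(yt; mu, sigma2) - (mu - mu_pred)^2 / (2*alpha) ],
    # which for the Gaussian case gives a shrunken step with factor alpha/(alpha+sigma2) < 1
    mu_isd = mu_isd + alpha / (alpha + sigma2) * (yt - mu_isd)
    esd_path.append(mu_esd)
    isd_path.append(mu_isd)

print("ESD final estimate:", round(esd_path[-1], 3))   # can oscillate / drift when alpha is large
print("ISD final estimate:", round(isd_path[-1], 3))   # contraction toward the truth for any alpha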
| By: | Klakow Akepanidtaworn; Korkrid Akepanidtaworn |
| Abstract: | Are Machine Learning (ML) algorithms superior to traditional econometric models for GDP nowcasting in a time series setting? Based on our evaluation of all models from both classes ever used in nowcasting, across a simulation and six country cases, traditional econometric models tend to outperform ML algorithms. Among the ML algorithms, the linear ones – Lasso and Elastic Net – perform best in nowcasting, even surpassing traditional econometric models when the GDP series is long and high-frequency indicators are rich. Among the traditional econometric models, the Bridge and Dynamic Factor models deliver the strongest empirical results, while the Three-Pass Regression Filter performs well in our simulation. Due to the relatively short length of GDP series, complex and non-linear ML algorithms are prone to overfitting, which compromises their out-of-sample performance. |
| Keywords: | Nowcasting; Machine Learning; Forecast evaluation; Real-time data |
| Date: | 2025–12–05 |
| URL: | https://d.repec.org/n?u=RePEc:imf:imfwpa:2025/252 |
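A minimal, synthetic sketch of the linear-ML nowcasting setup discussed above: quarterly GDP growth regressed on a wide set of (already quarterly-aggregated) indicators with a cross-validated elastic net. The data, indicator count and penalty grid are invented for illustration and do not correspond to the paper's country cases.

import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(6)
n_quarters, n_indicators = 80, 30          # short GDP history, rich indicator set
X = rng.standard_normal((n_quarters, n_indicators))
beta = np.zeros(n_indicators)
beta[:3] = [0.5, -0.3, 0.2]                # only a few indicators are truly informative
gdp_growth = X @ beta + 0.5 * rng.standard_normal(n_quarters)

train, test = slice(0, 60), slice(60, None)
model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.9], cv=5).fit(X[train], gdp_growth[train])
nowcast = model.predict(X[test])
rmse = np.sqrt(np.mean((nowcast - gdp_growth[test]) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f}, non-zero coefficients: {np.sum(model.coef_ != 0)}")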
| By: | Boge Lyu; Qianye Yin; Iris Denise Tommelein; Hanyang Liu; Karnamohit Ranka; Karthik Yeluripati; Junzhe Shi |
| Abstract: | The persistent volatility of construction material prices poses significant risks to cost estimation, budgeting, and project delivery, underscoring the urgent need for granular and scalable forecasting methods. This study develops a forecasting framework that leverages the Construction Specifications Institute (CSI) MasterFormat as the target data structure, enabling predictions at the six-digit section level and supporting detailed cost projections across a wide spectrum of building materials. To enhance predictive accuracy, the framework integrates explanatory variables such as raw material prices, commodity indexes, and macroeconomic indicators. Four time-series models – Long Short-Term Memory (LSTM), Autoregressive Integrated Moving Average (ARIMA), Vector Error Correction Model (VECM), and Chronos-Bolt – were evaluated under both baseline configurations (using CSI data only) and extended versions with explanatory variables. Results demonstrate that incorporating explanatory variables significantly improves predictive performance across all models. Among the tested approaches, the LSTM model consistently achieved the highest accuracy, with RMSE values as low as 1.390 and MAPE values of 0.957, representing improvements of up to 59% over the traditional statistical time-series model, ARIMA. Validation across multiple CSI divisions confirmed the framework's scalability, while Division 06 (Wood, Plastics, and Composites) is presented in detail as a demonstration case. This research offers a robust methodology that enables owners and contractors to improve budgeting practices and achieve more reliable cost estimation at the Definitive level. |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2512.09360 |
| By: | Michał Sikorski |
| Abstract: | Volatility clustering is one of the most robust stylized facts of financial markets, yet it is typically detected using moment-based diagnostics or parametric models such as GARCH. This paper shows that clustered volatility also leaves a clear imprint on the time-reversal symmetry of horizontal visibility graphs (HVGs) constructed on absolute returns in physical time. For each time point, we compute the maximal forward and backward visibility distances, $L^{+}(t)$ and $L^{-}(t)$, and use their empirical distributions to build a visibility-asymmetry fingerprint comprising the Kolmogorov–Smirnov distance, variance difference, entropy difference, and a ratio of extreme visibility spans. In a Monte Carlo study, these HVG asymmetry features sharply separate volatility-clustered GARCH(1, 1) dynamics from i.i.d. Gaussian noise and from randomly shuffled GARCH series that preserve the marginal distribution but destroy temporal dependence; a simple linear classifier based on the fingerprint achieves about 90% in-sample accuracy. Applying the method to daily S&P 500 data reveals a pronounced forward–backward imbalance, including a variance difference $\Delta\mathrm{Var}$ that exceeds the simulated GARCH values by two orders of magnitude and vanishes after shuffling. Overall, the visibility-graph asymmetry fingerprint emerges as a simple, model-free, and geometrically interpretable indicator of volatility clustering and time irreversibility in financial time series. |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2512.02352 |
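A sketch of the visibility-asymmetry fingerprint described above: maximal forward and backward horizontal-visibility distances on absolute returns, summarised by a Kolmogorov–Smirnov distance and a variance difference, computed for a simulated GARCH(1, 1) series and for its shuffle. The GARCH parameters and series length are illustrative, and the paper's full fingerprint and classifier include further features.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

def simulate_garch11(T, omega=0.05, alpha=0.1, beta=0.85):
    # standard GARCH(1,1) with Gaussian innovations
    r, h = np.zeros(T), np.zeros(T)
    h[0] = omega / (1 - alpha - beta)
    for t in range(1, T):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * rng.standard_normal()
    return r

def max_forward_visibility(x):
    """L+(t): largest horizontal-visibility distance looking forward from t."""
    n, L = len(x), np.zeros(len(x), dtype=int)
    for t in range(n - 1):
        inner_max = -np.inf
        for s in range(t + 1, n):
            if inner_max < min(x[t], x[s]):
                L[t] = s - t            # s is horizontally visible from t
            inner_max = max(inner_max, x[s])
            if inner_max >= x[t]:
                break                   # no farther point can be visible
    return L

def fingerprint(abs_returns):
    Lp = max_forward_visibility(abs_returns)
    Lm = max_forward_visibility(abs_returns[::-1])[::-1]     # backward distances
    return ks_2samp(Lp, Lm).statistic, Lp.var() - Lm.var()

x = np.abs(simulate_garch11(2000))
print("GARCH    (KS, dVar):", fingerprint(x))
print("shuffled (KS, dVar):", fingerprint(rng.permutation(x)))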
| By: | Eghbal Rahimikia; Hao Ni; Weiguan Wang |
| Abstract: | Financial time series forecasting is central to trading, portfolio optimization, and risk management, yet it remains challenging due to noisy, non-stationary, and heterogeneous data. Recent advances in time series foundation models (TSFMs), inspired by large language models, offer a new paradigm for learning generalizable temporal representations from large and diverse datasets. This paper presents the first comprehensive empirical study of TSFMs in global financial markets. Using a large-scale dataset of daily excess returns across diverse markets, we evaluate zero-shot inference, fine-tuning, and pre-training from scratch against strong benchmark models. We find that off-the-shelf pre-trained TSFMs perform poorly in zero-shot and fine-tuning settings, whereas models pre-trained from scratch on financial data achieve substantial forecasting and economic improvements, underscoring the value of domain-specific adaptation. Increasing the dataset size, incorporating synthetic data augmentation, and applying hyperparameter tuning further enhance performance. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.18578 |
| By: | Eden Gross; Ryan Kruger; Francois Toerien |
| Abstract: | This study introduces a dynamic Bayesian network (DBN) framework for forecasting value at risk (VaR) and stressed VaR (SVaR) and compares its performance to several commonly applied models. Using daily S&P 500 index returns from 1991 to 2020, we produce 10-day 99% VaR and SVaR forecasts, using a rolling window of historical returns for the traditional models, while three DBNs use both historical and forecasted returns. We evaluate the models' forecasting accuracy using standard backtests and forecasting error measures. Results show that autoregressive models deliver the most accurate VaR forecasts, while the DBNs achieve performance comparable to the historical simulation model, despite incorporating forward-looking return forecasts. For SVaR, all models produce highly conservative forecasts, with minimal breaches and limited differentiation in accuracy. While the DBNs do not outperform traditional models, they demonstrate the feasibility of a forward-looking approach and provide a foundation for future research on integrating causal inference into financial risk forecasting. |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2512.05661 |
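A sketch of the historical-simulation benchmark mentioned above: 10-day 99% VaR from overlapping 10-day returns over a rolling window. The window length and synthetic return series are placeholders; the paper's dynamic Bayesian networks are not reproduced.

import numpy as np

rng = np.random.default_rng(8)
daily_returns = 0.01 * rng.standard_normal(3000)        # stand-in for S&P 500 daily log returns

horizon, window, alpha = 10, 1000, 0.99
# overlapping 10-day returns as sums of consecutive daily log returns
ten_day = np.convolve(daily_returns, np.ones(horizon), mode="valid")

var_forecasts = []
for t in range(window, len(ten_day)):
    hist = ten_day[t - window:t]
    var_forecasts.append(-np.quantile(hist, 1 - alpha))  # loss quantile, reported as positive
print("mean 10-day 99% VaR:", round(float(np.mean(var_forecasts)), 4))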
| By: | Gilles Zumbach |
| Abstract: | For long-term investments, model portfolios are defined at the level of indexes, a setup known as Strategic Asset Allocation (SAA). The possible outcomes at a horizon of a few decades can be obtained by Monte Carlo simulations, resulting in a probability density for the possible portfolio values at the investment horizon. Such studies are critical for long-term wealth planning, for example for the financial component of social insurance schemes or for capital accumulated for retirement. The quality of the results depends on two inputs: the process used for the simulations and its parameters. The base model has a constant drift, a constant covariance and normal innovations, as pioneered by Bachelier. Beyond this model, this document presents in detail a multivariate process that incorporates the most recent advances in models for financial time series. These include the negative correlations of returns at a scale of a few years, heteroskedasticity (i.e. the volatility dynamics), and the fat tails and asymmetry of the return distributions. For the parameters, the quantitative outcomes depend critically on the estimate of the drift, because it is a non-random contribution acting at each time step. Replacing the point forecast by a probabilistic forecast allows us to analyze the impact of the drift values, and then to incorporate this uncertainty in the Monte Carlo simulations. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.18125 |
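A sketch of the base (Bachelier-type) model described above, with constant drift, constant covariance and normal innovations: Monte Carlo paths for a two-index portfolio over a 30-year horizon, yielding the distribution of terminal values. The drifts, covariance and weights are invented, and the paper's refinements (mean reversion, heteroskedasticity, fat tails, drift uncertainty) are not included.

import numpy as np

rng = np.random.default_rng(9)
years, n_paths = 30, 10000
mu = np.array([0.05, 0.02])                       # annual drifts (equities, bonds) - illustrative
cov = np.array([[0.15**2, 0.3 * 0.15 * 0.05],
                [0.3 * 0.15 * 0.05, 0.05**2]])    # annual covariance matrix
w = np.array([0.6, 0.4])                          # constant SAA weights, rebalanced annually

L = np.linalg.cholesky(cov)
log_value = np.zeros(n_paths)
for _ in range(years):
    annual_index_returns = mu + rng.standard_normal((n_paths, 2)) @ L.T
    log_value += np.log1p(annual_index_returns @ w)   # portfolio return for the year

terminal = np.exp(log_value)                      # distribution of terminal portfolio values
print("median terminal value:", round(float(np.median(terminal)), 2))
print("5th percentile:       ", round(float(np.quantile(terminal, 0.05)), 2))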
| By: | Yingyao Hu |
| Abstract: | This paper develops new identification results for multidimensional continuous measurement-error models where all observed measurements are contaminated by potentially correlated errors and none provides an injective mapping of the latent distribution. Using third-order cross moments, the paper constructs a three-way tensor whose unique decomposition, guaranteed by Kruskal's theorem, identifies the factor loading matrices. Starting with a linear structure, the paper recovers the full distribution of latent factors by constructing suitable measurements and applying scalar or multivariate versions of Kotlarski's identity. As a result, the joint distribution of the latent vector and measurement errors is fully identified without requiring injective measurements, showing that multivariate latent structure can be recovered in broader settings than previously believed. Under injectivity, the paper also provides user-friendly testable conditions for identification. Finally, this paper provides general identification results for nonlinear models using a newly defined generalized Kruskal rank – the signal rank – of integral operators. These results have wide applicability in empirical work involving noisy or indirect measurements, including factor models, survey data with reporting errors, mismeasured regressors in econometrics, and multidimensional latent-trait models in psychology and marketing, potentially enabling more robust estimation and interpretation when clean measurements are unavailable. |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2512.02970 |
| By: | Ziyao Wang; A. Alexandre Trindade; Svetlozar T. Rachev |
| Abstract: | This paper develops a three-dimensional decomposition of volatility memory into orthogonal components of level, shape, and tempo. The framework unifies regime-switching, fractional-integration, and business-time approaches within a single canonical representation that identifies how each dimension governs persistence strength, long-memory form, and temporal speed. We establish conditions for existence, uniqueness, and ergodicity of this decomposition and show that all GARCH-type processes arise as special cases. Empirically, applications to SPY and EURUSD (2005–2024) reveal that volatility memory is state-dependent: regime and tempo gates dominate in equities, while fractional-memory gates prevail in foreign exchange. The unified tri-gate model jointly captures these effects. By formalizing volatility dynamics through a level–shape–tempo structure, the paper provides a coherent link between information flow, market activity, and the evolving memory of financial volatility. |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2512.02166 |
| By: | Frédérique Bec; François Courtoy; Philipp Mohl; Frederic Opitz |
| Abstract: | Stochastic debt projections are essential for understanding uncertainties in debt dynamics and ensuring robust debt sustainability analyses. The Commission’s stochastic debt sustainability analysis (SDSA) currently has two technical limitations that deserve to be addressed: i) it does not account for the persistence of shocks and ii) it assumes a Gaussian distribution when simulating shock trajectories. This paper presents two technical refinements to improve the Commission’s SDSA: i) allowing for the persistence of shocks by applying a pre-filtering approach with a shock-specific lag structure across all countries and ii) implementing a bootstrapping technique to relax the Gaussian distribution assumption. These new features will be incorporated into the Commission’s DSA to identify fiscal sustainability risks. |
| JEL: | H63 E62 C15 C22 C53 |
| Date: | 2025–09 |
| URL: | https://d.repec.org/n?u=RePEc:euf:dispap:226 |
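A rough sketch of the two refinements described above for a single shock series: (i) pre-filter the shocks with an AR(1) to capture persistence, then (ii) bootstrap the filtered residuals instead of drawing Gaussian innovations, and re-apply the AR filter to generate simulated shock trajectories. The AR(1) lag choice, horizon and synthetic data are placeholders; the Commission's SDSA uses shock-specific lag structures across all countries.

import numpy as np

rng = np.random.default_rng(10)

# synthetic historical shock series with persistence and skewed (non-Gaussian) innovations
T = 120
eps_hist = rng.gamma(2.0, 1.0, T) - 2.0          # mean-zero, right-skewed innovations
shocks = np.zeros(T)
for t in range(1, T):
    shocks[t] = 0.6 * shocks[t - 1] + eps_hist[t]

# (i) pre-filtering: estimate the AR(1) coefficient and extract residuals
x, y = shocks[:-1], shocks[1:]
rho = (x @ y) / (x @ x)
resid = y - rho * x

# (ii) bootstrap simulation: resample residuals, re-cumulate through the AR filter
horizon, n_sims = 10, 5000
paths = np.zeros((n_sims, horizon))
draws = rng.choice(resid, size=(n_sims, horizon), replace=True)
prev = np.full(n_sims, shocks[-1])
for h in range(horizon):
    prev = rho * prev + draws[:, h]
    paths[:, h] = prev
print("5-step-ahead shock quantiles:", np.round(np.quantile(paths[:, 4], [0.1, 0.5, 0.9]), 2))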