on Econometric Time Series

Issue of 2025–11–24
eighteen papers chosen by Simon Sosvilla-Rivero, Instituto Complutense de Análisis Económico
| By: | Yaling Qi |
| Abstract: | The availability of multidimensional economic datasets has grown significantly in recent years. An example is bilateral trade values across goods among countries, comprising three dimensions -- importing countries, exporting countries, and goods -- forming a third-order tensor time series. This paper introduces a general Bayesian tensor autoregressive framework to analyze the dynamics of large, multidimensional time series with a particular focus on international trade across different countries and sectors. Departing from the standard homoscedastic assumption in this literature, we incorporate flexible stochastic volatility into the tensor autoregressive models. The proposed models can capture time-varying volatility due to the COVID-19 pandemic and recent outbreaks of war. To address computational challenges and mitigate overfitting, we develop an efficient sampling method based on low-rank Tucker decomposition and hierarchical shrinkage priors. Additionally, we provide a factor interpretation of the model showing how the Tucker decomposition projects large-dimensional disaggregated trade flows onto global factors. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.03097 |
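The parameter savings from the low-rank Tucker decomposition underlying this framework can be illustrated numerically. The dimensions and ranks below are hypothetical, chosen only to show how a core tensor and factor matrices reconstruct a large third-order array:

```python
import numpy as np

# Hypothetical dimensions: 20 importers x 20 exporters x 10 goods.
d1, d2, d3 = 20, 20, 10
r1, r2, r3 = 3, 3, 2            # assumed Tucker ranks

rng = np.random.default_rng(0)
G = rng.normal(size=(r1, r2, r3))    # core tensor
U1 = rng.normal(size=(d1, r1))       # mode-1 factors (importers -> global factors)
U2 = rng.normal(size=(d2, r2))       # mode-2 factors (exporters)
U3 = rng.normal(size=(d3, r3))       # mode-3 factors (goods)

# Tucker reconstruction: A = G x_1 U1 x_2 U2 x_3 U3
A = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)

full_params = d1 * d2 * d3                        # unrestricted tensor
tucker_params = G.size + U1.size + U2.size + U3.size
```

The factor interpretation mentioned in the abstract corresponds to the mode matrices `U1`, `U2`, `U3` projecting disaggregated flows onto a small core.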
| By: | Rustam Ibragimov; Jihyun Kim; Anton Skrobotov |
| Abstract: | This paper develops robust inference methods for predictive regressions that address key challenges posed by endogenously persistent or heavy-tailed regressors, as well as persistent volatility in errors. Building on the Cauchy estimation framework, we propose two novel tests: one based on $t$-statistic group inference and the other employing a hybrid approach that combines Cauchy and OLS estimation. These methods effectively mitigate size distortions that commonly arise in standard inference procedures under endogeneity, near nonstationarity, heavy tails, and persistent volatility. The proposed tests are simple to implement and applicable to both continuous- and discrete-time models. Extensive simulation experiments demonstrate favorable finite-sample performance across a range of realistic settings. An empirical application examines the predictability of excess stock returns using the dividend-price and earnings-price ratios as predictors. The results suggest that the dividend-price ratio possesses predictive power, whereas the earnings-price ratio does not significantly forecast returns. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.09249 |
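The $t$-statistic group inference device the first test builds on can be sketched generically: split the sample into blocks, estimate the slope within each block, and form a one-sample $t$-statistic from the block estimates. This is a minimal simulated illustration, not the paper's exact procedure:

```python
import numpy as np

def group_t_inference(y, x, q=8):
    """Generic t-statistic group inference: OLS slope within q
    equal-sized blocks, then a one-sample t-statistic from the q
    block estimates (to be compared with t(q-1) critical values)."""
    betas = []
    for idx in np.array_split(np.arange(len(y)), q):
        xb = x[idx] - x[idx].mean()          # demeaning x gives the OLS slope
        betas.append(xb @ y[idx] / (xb @ xb))
    betas = np.asarray(betas)
    t = betas.mean() / (betas.std(ddof=1) / np.sqrt(q))
    return betas.mean(), t

rng = np.random.default_rng(0)
x = rng.normal(size=800)
y = 0.5 * x + rng.normal(size=800)           # true slope 0.5
beta_hat, t_stat = group_t_inference(y, x)
```

The appeal of this construction, as the abstract notes, is its robustness: the block estimates only need to be approximately independent and Gaussian, not homoskedastic or light-tailed.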
| By: | Dennis Thumm |
| Abstract: | Energy markets exhibit complex causal relationships between weather patterns, generation technologies, and price formation, with regime changes occurring continuously rather than at discrete break points. Current approaches model electricity prices without explicit causal interpretation or counterfactual reasoning capabilities. We introduce Augmented Time Series Causal Models (ATSCM) for energy markets, extending counterfactual reasoning frameworks to multivariate temporal data with learned causal structure. Our approach models energy systems through interpretable factors (weather, generation mix, demand patterns), rich grid dynamics, and observable market variables. We integrate neural causal discovery to learn time-varying causal graphs without requiring ground truth DAGs. Applied to real-world electricity price data, ATSCM enables novel counterfactual queries such as "What would prices be under different renewable generation scenarios?". |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.04361 |
| By: | Tetsuya Takaishi |
| Abstract: | The finite sample effect on the Hurst exponent (HE) of realized volatility time series is examined using Bitcoin data. This study finds that the HE decreases as the sampling period $\Delta$ increases and that a simple finite-sample ansatz closely fits the HE data. We obtain values of the HE as $\Delta \rightarrow 0$ that are smaller than 1/2, indicating rough volatility. The relative error is found to be $1\%$ for the widely used five-minute realized volatility. Performing a multifractal analysis, we find multifractality in the realized volatility time series that is weaker than that of the price-return time series. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.03314 |
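A simple way to see what a Hurst exponent estimate involves: for a self-similar process, the standard deviation of $\tau$-lag differences scales like $\tau^H$, so $H$ is the slope of a log-log regression. This crude scaling estimator (one of several in common use; the paper's exact method may differ) recovers $H \approx 0.5$ for a Brownian-like path:

```python
import numpy as np

def hurst_diff_scaling(x, lags=range(2, 20)):
    """Crude Hurst estimate: std of tau-lag differences scales like
    tau**H, so H is the slope of log(std) on log(tau)."""
    lags = np.asarray(list(lags))
    tau = [np.std(x[l:] - x[:-l]) for l in lags]
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return H

rng = np.random.default_rng(1)
rw = np.cumsum(rng.normal(size=20000))    # Brownian-like path, H = 0.5
H_hat = hurst_diff_scaling(rw)
```

Rough volatility in the abstract's sense corresponds to estimates of $H$ well below 0.5 as the sampling period shrinks.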
| By: | Meenagh, David (Cardiff Business School); Minford, Patrick (Cardiff Business School, Cardiff University); Xu, Yongdeng (Cardiff Business School, Cardiff University) |
| Abstract: | This paper examines how Bayesian estimation performs in applied macroeconomic DSGE models when prior beliefs are misspecified. Using controlled Monte Carlo experiments on a standard Real Business Cycle model and a New Keynesian model, the authors show that Bayesian procedures can deliver severely biased and misleading parameter estimates, with posteriors pulled toward the researcher’s prior rather than the true data-generating process. In contrast, a classical simulation-based method, Indirect Inference, remains largely unbiased and robust even under substantial model uncertainty. The results imply that heavy reliance on Bayesian estimation can entrench false conclusions about key structural features, such as the degree of nominal rigidity, and thereby mislead policy analysis. The paper argues for greater use of robust estimation and model-validation techniques, such as Indirect Inference, to ensure that DSGE-based policy advice rests on credible empirical evidence. |
| Keywords: | Bayesian Estimation; DSGE Models; Indirect Inference; Monte Carlo Simulation; Model Misspecification |
| JEL: | C11 C15 C52 E32 |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:cdf:wpaper:2025/22 |
| By: | Xu Zhang; Zhengang Huang; Yunzhi Wu; Xun Lu; Erpeng Qi; Yunkai Chen; Zhongya Xue; Qitong Wang; Peng Wang; Wei Wang |
| Abstract: | Time series forecasting is important in the finance domain. Financial time series (TS) patterns are influenced by both short-term public opinions and medium-/long-term policy and market trends. Hence, processing multi-period inputs becomes crucial for accurate financial time series forecasting (TSF). However, current TSF models either use only single-period input, or lack customized designs for addressing multi-period characteristics. In this paper, we propose a Multi-period Learning Framework (MLF) to enhance financial TSF performance. MLF considers both TSF's accuracy and efficiency requirements. Specifically, we design three new modules to better integrate the multi-period inputs for improving accuracy: (i) Inter-period Redundancy Filtering (IRF), that removes the information redundancy between periods for accurate self-attention modeling, (ii) Learnable Weighted-average Integration (LWI), that effectively integrates multi-period forecasts, (iii) Multi-period self-Adaptive Patching (MAP), that mitigates the bias towards certain periods by setting the same number of patches across all periods. Furthermore, we propose a Patch Squeeze module to reduce the number of patches in self-attention modeling for maximized efficiency. MLF incorporates multiple inputs with varying lengths (periods) to achieve better accuracy and reduces the costs of selecting input lengths during training. The codes and datasets are available at https://github.com/Meteor-Stars/MLF. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.08622 |
| By: | Emmanuel Gnabeyeu; Gilles Pagès |
| Abstract: | This paper provides a comprehensive analysis of the finite- and long-time behavior of continuous-time non-Markovian dynamical systems, with a focus on forward Stochastic Volterra Integral Equations (SVIEs). We investigate the properties of solutions to such equations, specifically their stationarity, both over a finite horizon and in the long run. In particular, we demonstrate that such an equation does not exhibit a strong stationary regime unless the kernel is constant or in a degenerate setting. However, we show that it is possible to induce a $\textit{fake stationary regime}$, in the sense that all marginal distributions share the same expectation and variance. This effect is achieved by introducing a deterministic stabilizer $\varsigma$ associated with the kernel. We also study the $L^p$-confluence (for $p>0$) of such processes as time goes to infinity (i.e., we investigate whether the marginals, when starting from various initial values, are confluent in $L^p$ as time goes to infinity) and, finally, the functional weak long-run asymptotics for some classes of diffusion coefficients. These results are applied to Exponential-Fractional Stochastic Volterra Integral Equations with an $\alpha$-gamma fractional integration kernel, where $\alpha\leq 1$ enters the regime of $\textit{rough paths}$ whereas $\alpha> 1$ regularizes diffusion paths and invokes $\textit{long-term memory}$, persistence, or long-range dependence. With these fake stationary Volterra processes, we introduce a family of stabilized volatility models. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.03474 |
| By: | Krzysztof Drachal (Faculty of Economic Sciences, University of Warsaw); Joanna Jędrzejewska (Faculty of Economic Sciences, University of Warsaw) |
| Abstract: | Bayesian dynamic mixture models offer a flexible framework for capturing evolving relationships between dependent and independent variables over time. They address both structural and variable uncertainty, incorporating real-time market information through dynamic updating. Unlike static approaches, they allow the underlying process to change, which is particularly relevant for the fluctuating nature of commodity markets. In scenarios with a large number of possible predictors, various regression models can be employed, each yielding its own probability distribution for the coefficients. Forecasts are then constructed by combining these distributions using time-varying weights. This paper utilizes Bayesian dynamic mixture models to allow both the regression parameters and their associated weights to change over time. Computational efficiency is maintained by preserving distributional forms and limiting numerical approximations to the distributions of statistics. The study uses the monthly Global Price Index of All Commodities from the International Monetary Fund, spanning the period 2003–2024. Key explanatory variables include interest rates, exchange rates, and stock market indices. The forecasting performance of the proposed models is compared to that of other techniques such as Dynamic Model Averaging, LASSO, ridge regression, and ARIMA. Evaluation is conducted using the Diebold-Mariano test, Giacomini-Rossi test, Model Confidence Set procedure, and Clark-West test. (This research was funded in whole by the National Science Centre, Poland, grant number 2022/45/B/HS4/00510.) |
| Keywords: | Bayesian dynamic mixture models; Commodities prices; Mixture models; Model averaging; Time-series forecasting; Variable uncertainty |
| JEL: | C32 C53 Q02 |
| URL: | https://d.repec.org/n?u=RePEc:sek:iefpro:15316933 |
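The time-varying model weights central to this approach can be sketched with the standard forgetting-factor recursion used in dynamic model averaging (a prediction step that flattens last period's weights, then a Bayesian update with each model's predictive likelihood). This is a generic illustration, not the authors' specific mixture specification:

```python
import numpy as np

def dma_weights(pred_liks, alpha=0.99):
    """Dynamic-model-averaging-style weight recursion: at each date,
    flatten last period's weights with forgetting factor alpha, then
    update with each model's predictive likelihood and renormalize."""
    T, K = pred_liks.shape
    w = np.full(K, 1.0 / K)          # equal initial weights
    out = np.empty((T, K))
    for t in range(T):
        w = w ** alpha               # prediction (forgetting) step
        w /= w.sum()
        w = w * pred_liks[t]         # Bayesian update
        w /= w.sum()
        out[t] = w
    return out

# Model 0 consistently fits better than model 1:
pl = np.tile([0.9, 0.1], (50, 1))
weights = dma_weights(pl)
```

With `alpha < 1` the weights can also migrate back quickly when relative performance reverses, which is the point of letting the mixture evolve over time.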
| By: | Eiji Kurozumi; Anton Skrobotov |
| Abstract: | We propose constructing confidence sets for the emergence, collapse, and recovery dates of a bubble by inverting tests for the location of the break date. We examine both likelihood ratio-type tests and Elliott and Müller (2007)-type tests for detecting break locations. The limiting distributions of these tests are derived under the null hypothesis, and their asymptotic consistency under the alternative is established. Finite-sample properties are evaluated through Monte Carlo simulations. The results indicate that combining different types of tests effectively controls the empirical coverage rate while maintaining a reasonably small length of the confidence set. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.16172 |
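The test-inversion idea is easy to illustrate in a toy setting: test each candidate break date and keep those not rejected. The split-sample SSR statistic and critical value below are illustrative stand-ins, not the likelihood ratio or Elliott-Müller tests the paper actually studies:

```python
import numpy as np

def break_confidence_set(y, crit=10.0, trim=10):
    """Toy confidence set for a single mean-shift break date by test
    inversion: keep every candidate date whose two-regime SSR is
    within `crit` of the best split (crit is a hypothetical cutoff)."""
    n = len(y)
    ssr = {}
    for k in range(trim, n - trim):
        a, b = y[:k], y[k:]
        ssr[k] = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    best = min(ssr.values())
    return [k for k, s in ssr.items() if s - best <= crit]

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])  # break at 100
cs = break_confidence_set(y)
```

The paper's contribution is precisely in choosing test statistics and critical values so that sets built this way have correct coverage while staying short.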
| By: | Sotiris Tsolacos; Tatiana Franus |
| Abstract: | In this paper, we evaluate the performance of various methodologies for forecasting real estate yields. Expected yield changes are a crucial input for valuations and investment strategies. We conduct a comparative study to assess the forecast accuracy of econometric and time series models relative to machine learning algorithms. Our target series include net initial and equivalent yields across key real estate sectors: office, industrial, and retail. The analysis is based on monthly UK data, though the framework can be applied to different contexts, including quarterly data. The econometric and time series models considered include ARMA, ARMAX, stepwise regression, and VAR family models, while the machine learning methods encompass Random Forest, XGBoost, Decision Tree, Gradient Boosting and Support Vector Machines. We utilise a comprehensive set of economic, financial, and survey data to predict yield movements and evaluate forecast performance over three-, six-, and twelve-month horizons. While conventional forecast metrics are calculated, our primary focus is on directional forecasting. The findings have significant practical implications. By capturing directional changes, our assessment aids price discovery in real estate markets. Given that private-market real estate data are reported with a lag - even for monthly data - early signals of price movements are valuable for investors and lenders. This study aims to identify the most successful methods to gauge forthcoming yield movements. |
| Keywords: | directional forecasting; econometric models; Machine Learning; property yields |
| JEL: | R3 |
| Date: | 2025–01–01 |
| URL: | https://d.repec.org/n?u=RePEc:arz:wpaper:eres2025_269 |
| By: | Yilong Zeng; Boyan Tang; Xuanhao Ren; Sherry Zhefang Zhou; Jianghua Wu; Raymond Lee |
| Abstract: | This paper introduces the Fractal-Chaotic Oscillation Co-driven (FCOC) framework, a novel paradigm for financial volatility forecasting that systematically resolves the dual challenges of feature fidelity and model responsiveness. FCOC synergizes two core innovations: our novel Fractal Feature Corrector (FFC), engineered to extract high-fidelity fractal signals, and a bio-inspired Chaotic Oscillation Component (COC) that replaces static activations with a dynamic processing system. Empirically validated on the S&P 500 and DJI, the FCOC framework demonstrates profound and generalizable impact. The framework fundamentally transforms the performance of previously underperforming architectures, such as the Transformer, while achieving substantial improvements in key risk-sensitive metrics for state-of-the-art models like Mamba. These results establish a powerful co-driven approach, where models are guided by superior theoretical features and powered by dynamic internal processors, setting a new benchmark for risk-aware forecasting. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.10365 |
| By: | Stuart Lane |
| Abstract: | The standard fuzzy regression discontinuity (FRD) estimator is a ratio of differences of local polynomial estimators. I show that this estimator does not have finite moments of any order in finite samples, regardless of the choice of kernel function, bandwidth, or order of polynomial. This leads to an imprecise estimator with a heavy-tailed sampling distribution, and inaccurate inference with small sample sizes or when the discontinuity in the probability of treatment assignment at the cutoff is small. I present a generalised class of computationally simple FRD estimators, which contains a continuum of estimators with finite moments of all orders in finite samples, and nests both the standard FRD and sharp (SRD) estimators. The class is indexed by a single tuning parameter, and I provide simple values that lead to substantial improvements in median bias, median absolute deviation and root mean squared error. These new estimators remain very stable in small samples, or when the discontinuity in the probability of treatment assignment at the cutoff is small. Simple confidence intervals that have strong coverage and length properties in small samples are also developed. The improvements are seen across a wide range of models and using common bandwidth selection algorithms in extensive Monte Carlo simulations. The improved stability and performance of the estimators and confidence intervals is also demonstrated using data on class size effects on educational attainment. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.03424 |
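The standard FRD estimator whose moments the paper analyzes is the ratio of two local linear jump estimates. A minimal simulated sketch (rectangular kernel, hypothetical bandwidth and data-generating process) shows the ratio structure that produces the heavy tails:

```python
import numpy as np

def local_linear_jump(x, v, c=0.0, h=0.5):
    """Jump in E[v|x] at cutoff c: local linear fit on each side of c
    (rectangular kernel, bandwidth h), difference of intercepts."""
    def intercept(mask):
        X = np.column_stack([np.ones(mask.sum()), x[mask] - c])
        coef, *_ = np.linalg.lstsq(X, v[mask], rcond=None)
        return coef[0]
    return intercept((x >= c) & (x < c + h)) - intercept((x < c) & (x >= c - h))

def frd(x, y, d, c=0.0, h=0.5):
    """Standard fuzzy RD: outcome jump divided by the jump in the
    probability of treatment at the cutoff."""
    return local_linear_jump(x, y, c, h) / local_linear_jump(x, d, c, h)

rng = np.random.default_rng(3)
n = 20000
x = rng.uniform(-1, 1, n)
d = (rng.uniform(size=n) < 0.2 + 0.6 * (x >= 0)).astype(float)  # 0.6 jump in P(treat)
y = 1.5 * d + x + rng.normal(0, 0.5, n)                          # true effect 1.5
est = frd(x, y, d)
```

When the denominator jump is small, this ratio becomes unstable, which is exactly the pathology the paper's generalised estimator class is designed to tame.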
| By: | Lebotsa Daniel Metsileng (North-West University); Johannes Tshepiso Tsoku (North-West University) |
| Abstract: | This study investigates the modelling of the South African inflation rate using Box-Jenkins ARIMA models. Several competing ARIMA specifications were identified through ACF, PACF, and EACF analyses, including ARIMA(1, 1, 0), ARIMA(2, 1, 0), ARIMA(1, 1, 1), and ARIMA(2, 1, 1). All models were estimated using the maximum likelihood method, with results indicating statistical significance and low standard errors across the board, suggesting strong model fit. The optimal model, ARIMA(1, 1, 1), was selected based on the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC), aligning with findings by Mondal et al. (2014). Diagnostic tests, including the Ljung-Box test and residual analysis, confirmed that the ARIMA(1, 1, 1) model is robust and reliable for modelling inflation dynamics in South Africa. The study highlights the usefulness of ARIMA models in forecasting inflation, a crucial task for policymakers and the South African Reserve Bank in managing inflation expectations and guiding monetary policy. While the linear ARIMA model performed well, the study also recognises its limitations in capturing complex macroeconomic behaviours, suggesting future exploration of nonlinear models such as GARCH. Though the findings are specific to South Africa, the approach provides a replicable framework for other macroeconomic applications and geographical contexts. |
| Keywords: | Accuracy measures, ARIMA, Inflation rate, Linearity, South Africa |
| JEL: | C10 C52 E31 |
| URL: | https://d.repec.org/n?u=RePEc:sek:iefpro:15316783 |
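The AIC/BIC comparison used for order selection reduces, for Gaussian errors, to a penalized fit measure. The residual sums of squares below are hypothetical, and production ARIMA software computes the exact ML likelihood rather than this SSR approximation:

```python
import numpy as np

def aic(n, ssr, k):
    """Gaussian AIC up to a constant: n*log(sigma2_hat) + 2*k."""
    return n * np.log(ssr / n) + 2 * k

def bic(n, ssr, k):
    """Gaussian BIC up to a constant: n*log(sigma2_hat) + k*log(n)."""
    return n * np.log(ssr / n) + k * np.log(n)

# Hypothetical fits: the richer model lowers the SSR slightly, but
# BIC's log(n) penalty can still prefer the smaller specification.
n = 200
candidates = {"ARIMA(1,1,1)": (180.0, 2), "ARIMA(2,1,1)": (179.0, 3)}
best_bic = min(candidates, key=lambda m: bic(n, *candidates[m]))
```

Since $\log(200) \approx 5.3 > 2$, BIC penalizes the extra parameter harder than AIC, which is why the two criteria can disagree on larger models.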
| By: | Gregory Fletcher Cox; Xiaoxia Shi; Yuya Shimizu |
| Abstract: | This paper proposes a new test for inequalities that are linear in possibly partially identified nuisance parameters. This type of hypothesis arises in a broad set of problems, including subvector inference for linear unconditional moment (in)equality models, specification testing of such models, and inference for parameters bounded by linear programs. The new test uses a two-step test statistic and a chi-squared critical value with data-dependent degrees of freedom that can be calculated by an elementary formula. Its simple structure and tuning-parameter-free implementation make it attractive for practical use. We establish uniform asymptotic validity of the test, demonstrate its finite-sample size and power in simulations, and illustrate its use in an empirical application that analyzes women's labor supply in response to a welfare policy reform. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.27633 |
| By: | Ramaharo, Franck M. |
| Abstract: | In this paper, we investigate the predictive power of petroleum consumption for Malagasy real GDP using the Mixed Data Sampling (MIDAS) framework over the period 2007-2024. While GDP data are available at a quarterly frequency, petroleum consumption is observed monthly and disaggregated by sectoral use and product type. We use this high-frequency disaggregated data to identify which components deliver the strongest nowcasting performance. Our results show that, at the sectoral level, transportation, aviation and bunkers consistently deliver the most accurate GDP nowcasts over the sample period. The best-performing product-level specifications correspond precisely to the fuels predominantly used in these sectors, namely, gas oil, super-unleaded petrol, aviation gasoline, and jet fuel. The aggregate measure of total petroleum consumption also yields competitive forecasting accuracy across specifications. This supports its use as a broad high-frequency indicator of economic activity. Our findings suggest that forecasters of Madagascar’s GDP can significantly improve predictive accuracy by using appropriately disaggregated energy data, particularly from sectoral categories linked to mobility and trade. |
| Keywords: | nowcasting; petroleum consumption; real gross domestic product; MIDAS; Mixed-frequency Data Sampling; Madagascar |
| JEL: | C53 E17 O47 Q43 |
| Date: | 2025–10–27 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:126629 |
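The core MIDAS device is a parsimonious weighting of high-frequency observations into a low-frequency regressor. A common choice is the exponential Almon polynomial; the parameter values below are hypothetical (in practice they are estimated jointly with the regression):

```python
import numpy as np

def exp_almon_weights(theta1, theta2, m):
    """Exponential Almon lag weights for m high-frequency lags,
    normalized to sum to one."""
    j = np.arange(1, m + 1)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

def midas_aggregate(monthly, theta1=0.1, theta2=-0.05):
    """Collapse a monthly series (length divisible by 3) into one
    weighted regressor per quarter using Almon weights."""
    quarters = monthly.reshape(-1, 3)          # one row per quarter
    w = exp_almon_weights(theta1, theta2, 3)
    return quarters @ w

q = midas_aggregate(np.ones(12))               # 12 months -> 4 quarters
```

Because the whole lag profile depends on only two parameters, the same machinery scales to many monthly lags without exhausting degrees of freedom, which is what makes mixed-frequency nowcasting with disaggregated petroleum data feasible.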
| By: | George Wheaton (Department of Economics, New School for Social Research, USA) |
| Abstract: | Vector auto-regressive (VAR) models remain highly influential in the macroeconomic literature regarding the existence and explanation of the macroeconomic Goodwin pattern – a counter-cyclical movement in capacity utilization × labor share space. The study by Barbosa-Filho and Taylor (2006), which provided the theoretical backbone of the neo-Goodwin model, demonstrated through vector auto-regression that the theoretically required profit-led nature of demand was empirically prevalent in the Goodwin pattern in the US, and further studies follow its approach. This is often cited in the debate about whether capitalism has profit-led or wage-led demand characteristics, with implications for macroeconomic policy. In this paper, I replicate the VAR approach and extend it to recent years. Through additional econometric techniques not currently employed in the literature, I demonstrate that the supposed profit-led demand derivatives are statistically insignificant. Noting further issues with robustness, I critically analyze the use of VAR models for this purpose, suggesting that other methods need to be employed in the debate between profit-led and wage-led demand regimes and the Goodwin pattern. |
| Keywords: | Goodwin pattern, business cycles, profit-led demand, wage-led demand, vector auto-regression, delta method, Fieller’s theorem |
| JEL: | E11 E12 E32 E60 |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:new:wpaper:2517 |
| By: | Gözde Sert; Abhishek Chakrabortty; Anirban Bhattacharya |
| Abstract: | We propose a semiparametric Bayesian methodology for estimating the average treatment effect (ATE) within the potential outcomes framework using observational data with high-dimensional nuisance parameters. Our method introduces a Bayesian debiasing procedure that corrects for bias arising from nuisance estimation and employs a targeted modeling strategy based on summary statistics rather than the full data. These summary statistics are identified in a debiased manner, enabling the estimation of nuisance bias via weighted observables and facilitating hierarchical learning of the ATE. By combining debiasing with sample splitting, our approach separates nuisance estimation from inference on the target parameter, reducing sensitivity to nuisance model specification. We establish that, under mild conditions, the marginal posterior for the ATE satisfies a Bernstein-von Mises theorem when both nuisance models are correctly specified and remains consistent and robust when only one is correct, achieving Bayesian double robustness. This ensures asymptotic efficiency and frequentist validity. Extensive simulations confirm the theoretical results, demonstrating accurate point estimation and credible intervals with nominal coverage, even in high-dimensional settings. The proposed framework can also be extended to other causal estimands, and its key principles offer a general foundation for advancing Bayesian semiparametric inference more broadly. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.15904 |
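The double robustness property established here has a well-known frequentist counterpart: the augmented IPW (AIPW) estimator of the ATE, which is consistent if either the propensity score or the outcome regressions are correct. The sketch below is that frequentist analogue with plugged-in true nuisance functions, not the paper's Bayesian debiasing procedure:

```python
import numpy as np

def aipw_ate(y, d, ps, mu1, mu0):
    """Augmented IPW (doubly robust) ATE: outcome-model prediction
    plus an inverse-propensity-weighted residual correction for each
    potential outcome; consistent if ps or (mu1, mu0) is correct."""
    t1 = mu1 + d * (y - mu1) / ps
    t0 = mu0 + (1 - d) * (y - mu0) / (1 - ps)
    return np.mean(t1 - t0)

rng = np.random.default_rng(4)
n = 20000
xc = rng.normal(size=n)                       # confounder
d = (rng.uniform(size=n) < 0.5).astype(float) # randomized treatment, ps = 0.5
y = 2.0 * d + xc + rng.normal(size=n)         # true ATE = 2
est = aipw_ate(y, d, ps=np.full(n, 0.5), mu1=xc + 2.0, mu0=xc)
```

In the paper, the analogous correction terms are built from debiased summary statistics, and the sample splitting plays the role that cross-fitting plays in the frequentist literature.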
| By: | Bachmair, K.; Schmitz, N. |
| Abstract: | While financial markets are known to contain information about future economic developments, the channels through which asset prices enhance macroeconomic forecastability remain insufficiently understood. We develop a structured set of like-for-like experiments to isolate which data and model properties drive forecasting power. Using U.S. data on inflation, industrial production, unemployment and equity returns, we test eight hypotheses along two dimensions: the contribution of financial data given different estimation methods and model classes, and the role of model choice given different financial inputs. Data aspects include cross-sectional granularity, intra-period frequency, and real-time, revisionless availability; model aspects include sparsity, direct versus indirect specification, nonlinearity, and state dependence on volatile periods. We find that financial data can deliver consistent and economically meaningful gains, but only under suitable modeling choices: Random Forest most reliably extracts useful signals, whereas an unregularised VAR often fails to do so; by contrast, expanding the financial information set along granularity, frequency, or real-time dimensions yields little systematic benefit. Gains strengthen somewhat under elevated policy uncertainty, especially for inflation, but are otherwise fragile. The analysis clarifies how data and model choices interact and provides practical guidance for forecasters on when and how to use financial inputs. |
| Keywords: | Macroeconomic Forecasting, Stock Returns, Hypothesis Testing, Machine Learning, Regularisation, Vector Autoregressions, Ridge Regression, Lasso, Random Forests, Support Vector Regression, Elastic Net, Principal Component Analysis, Neural Networks |
| JEL: | C32 C45 C53 C58 E27 E37 E44 G17 |
| Date: | 2025–11–13 |
| URL: | https://d.repec.org/n?u=RePEc:cam:camdae:2574 |