nep-ets New Economics Papers
on Econometric Time Series
Issue of 2026–01–05
twelve papers chosen by
Simon Sosvilla-Rivero, Instituto Complutense de Análisis Económico


  1. Plausible GMM: A Quasi-Bayesian Approach By Victor Chernozhukov; Christian B. Hansen; Lingwei Kong; Weining Wang
  2. Motif Discovery in the Irregularly Sampled Time Series Data By Alyakin, Anton
  3. Forecasts of Period-average Exchange Rates: Insights from Real-time Daily Data By Martin McCarthy; Stephen Snudden
  4. Alpha-R1: Alpha Screening with LLM Reasoning via Reinforcement Learning By Zuoyou Jiang; Li Zhao; Rui Sun; Ruohan Sun; Zhongjian Li; Jing Li; Daxin Jiang; Zuo Bai; Cheng Hua
  5. Nonparametric methods for comparing distribution functionals for dependent samples with application to inequality measures By Jean-Marie Dufour; Tianyu He
  6. Bayesian Modeling for Uncertainty Management in Financial Risk Forecasting and Compliance By Sharif Al Mamun; Rakib Hossain; Md. Jobayer Rahman; Malay Kumar Devnath; Farhana Afroz; Lisan Al Amin
  7. Multivariate kernel regression in vector and product metric spaces By Schafgans, Marcia M. A.; Zinde-Walsh, Victoria
  8. Macroeconomic effects of lowering South Africa's inflation target: An SVAR analysis By Richard Kima; Keagile Lesame
  9. Panel Coupled Matrix-Tensor Clustering Model with Applications to Asset Pricing By Liyuan Cui; Guanhao Feng; Yuefeng Han; Jiayan Li
  10. The Inherent Nonlinearity in Learning: Implications for Understanding Stock Returns By Ian Dew-Becker; Stefano Giglio; Pooya Molavi
  11. The Nonstationarity-Complexity Tradeoff in Return Prediction By Agostino Capponi; Chengpiao Huang; J. Antonio Sidaoui; Kaizheng Wang; Jiacheng Zou
  12. Food & Oil Price Volatility Dynamics: Insights from a TVP-SVAR-DCC-MIDAS Model By Stewart, Shamar L.; Isengildina Massa, Olga

  1. By: Victor Chernozhukov; Christian B. Hansen; Lingwei Kong; Weining Wang
    Abstract: Structural estimation in economics often makes use of models formulated in terms of moment conditions. While these moment conditions are generally well-motivated, it is often unknown whether the moment restrictions hold exactly. We consider a framework where researchers model their belief about the potential degree of misspecification via a prior distribution and adopt a quasi-Bayesian approach for performing inference on structural parameters. We provide quasi-posterior concentration results, verify that quasi-posteriors can be used to obtain approximately optimal Bayesian decision rules under the maintained prior structure over misspecification, and provide a form of frequentist coverage results. We illustrate the approach through empirical examples where we obtain informative inference for structural objects allowing for substantial relaxations of the requirement that moment conditions hold exactly.
    Date: 2025–04–02
    URL: https://d.repec.org/n?u=RePEc:bri:uobdis:25/817
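As a concrete illustration of the quasi-Bayesian idea, the sketch below samples from a quasi-posterior built from the single moment condition E[x − θ] = 0 via random-walk Metropolis. This is a minimal toy example, not the paper's misspecification-robust framework: the moment condition, the plug-in variance, and all tuning constants are assumptions chosen for illustration.

```python
import numpy as np

def quasi_posterior_sample(x, n_draws=5000, step=0.2, seed=0):
    """Random-walk Metropolis on the quasi-posterior exp(-n/2 * gbar(theta)^2 / v)
    for the toy moment condition E[x - theta] = 0 (a sketch under a flat prior,
    not the paper's framework with a prior over misspecification)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    v = x.var()  # plug-in variance of the moment function

    def log_quasi_lik(theta):
        gbar = (x - theta).mean()  # sample moment at theta
        return -0.5 * n * gbar**2 / v

    theta = x.mean()
    ll = log_quasi_lik(theta)
    draws = []
    for _ in range(n_draws):
        prop = theta + step * rng.standard_normal()
        ll_prop = log_quasi_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # Metropolis accept/reject
            theta, ll = prop, ll_prop
        draws.append(theta)
    return np.array(draws)

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=500)   # simulated data with true mean 2
draws = quasi_posterior_sample(x)
post_mean = draws[1000:].mean()      # discard burn-in
```

With exact moment conditions the quasi-posterior concentrates around the true parameter; the paper's contribution is what happens when the restrictions are allowed to hold only approximately.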
  2. By: Alyakin, Anton
    Abstract: Motifs are patterns that repeat within and across different time series. They can be used for various applications, such as clustering or discovering association rules. For example, in patient monitoring they can be used to identify features that are predictive of a diagnosis. Most of the motif definitions in the literature are not applicable when the data is irregularly sampled, which is often the case in areas such as medical data. In this work, we present a generative model for unsupervised identification of motifs when the observation times are highly irregular. In particular, we model each motif as a combination of a Poisson point process for the distribution of the timestamps and a Gaussian process for the distribution of the observations. This allows us to use both the sampling frequency and the observation values to identify a motif. The whole time series is modeled as a hidden Markov model, in which each time step corresponds to a new motif. We present a version of the Viterbi Training procedure for learning the parameters of this model. We demonstrate experimentally that this procedure is able to re-learn the motifs in a data set generated from this model. Lastly, we present results from applying this model to laboratory test data from MIMIC-III, a well-known critical-care dataset.
    Date: 2025–12–23
    URL: https://d.repec.org/n?u=RePEc:osf:thesis:c5vzg_v1
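The Viterbi Training procedure mentioned above alternates parameter updates with most-likely-path decoding. The sketch below implements textbook Viterbi decoding for a discrete-emission HMM in log space; it is a generic illustration of the decoding step only, not the paper's Poisson-process/Gaussian-process motif model, and all the toy parameters are assumptions.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden-state path for a discrete HMM (the decoding step
    inside Viterbi Training). log_pi: initial log-probs (K,); log_A:
    transition log-probs (K, K); log_B: emission log-probs (K, M);
    obs: observation indices of length T."""
    K, T = len(log_pi), len(obs)
    delta = np.zeros((T, K))            # best path log-prob ending in each state
    psi = np.zeros((T, K), dtype=int)   # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # (from-state, to-state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):      # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Two sticky states with near-deterministic emissions: the decoded path
# should track the switch in the observation sequence.
log_pi = np.log(np.array([0.5, 0.5]))
log_A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_B = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
obs = np.array([0, 0, 0, 1, 1, 1])
path = viterbi(log_pi, log_A, log_B, obs)
```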
  3. By: Martin McCarthy (Reserve Bank of Australia); Stephen Snudden (Wilfrid Laurier University)
    Abstract: Forecasting period-average exchange rates requires using high-frequency data to efficiently construct forecasts and to test their accuracy against the traditional random walk hypothesis. To achieve this, we construct the first real-time dataset of daily effective exchange rates, both nominal and real, for all available countries. The real-time vintages account for the typical delay in the publication of trade weights and inflation. Our findings indicate that forecasts constructed with daily data can significantly improve accuracy, by up to 40 per cent, compared to using monthly averages. We also find that, unlike bilateral exchange rates, daily effective exchange rates exhibit properties distinct from random walk processes. When applying efficient estimation and testing methods made possible for the first time by the daily data, we find new evidence of real-time predictability for effective exchange rates in up to fifty per cent of countries.
    Keywords: temporal aggregation; exchange rates; forecasting; forecast evaluation; high-frequency data
    JEL: C43 C5 F31 F37
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:rba:rbardp:rdp2025-09
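The gain from daily data over monthly averages is easiest to see mid-period: days already observed are known exactly, so only the remaining days need forecasting. The sketch below builds such a period-average nowcast under a daily random-walk assumption; the rates and period length are hypothetical, not drawn from the paper's dataset.

```python
def period_average_forecast(daily_observed, days_in_period):
    """Forecast of a period-average exchange rate part-way through the period:
    observed days enter at their realised values, and each unobserved day is
    filled with the last observed value (the daily random-walk forecast).
    A textbook temporal-aggregation sketch, not the paper's method."""
    k = len(daily_observed)
    remaining = days_in_period - k
    total = sum(daily_observed) + remaining * daily_observed[-1]
    return total / days_in_period

rate_path = [1.10, 1.12, 1.11]  # three observed daily rates (hypothetical)
fcst = period_average_forecast(rate_path, days_in_period=5)
# equals (1.10 + 1.12 + 1.11 + 2 * 1.11) / 5
```

A pure monthly random walk would instead carry forward last month's average, discarding the within-period information that the observed days already pin down.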
  4. By: Zuoyou Jiang; Li Zhao; Rui Sun; Ruohan Sun; Zhongjian Li; Jing Li; Daxin Jiang; Zuo Bai; Cheng Hua
    Abstract: Signal decay and regime shifts pose recurring challenges for data-driven investment strategies in non-stationary markets. Conventional time-series and machine learning approaches, which rely primarily on historical correlations, often struggle to generalize when the economic environment changes. While large language models (LLMs) offer strong capabilities for processing unstructured information, their potential to support quantitative factor screening through explicit economic reasoning remains underexplored. Existing factor-based methods typically reduce alphas to numerical time series, overlooking the semantic rationale that determines when a factor is economically relevant. We propose Alpha-R1, an 8B-parameter reasoning model trained via reinforcement learning for context-aware alpha screening. Alpha-R1 reasons over factor logic and real-time news to evaluate alpha relevance under changing market conditions, selectively activating or deactivating factors based on contextual consistency. Empirical results across multiple asset pools show that Alpha-R1 consistently outperforms benchmark strategies and exhibits improved robustness to alpha decay. The full implementation and resources are available at https://github.com/FinStep-AI/Alpha-R1.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.23515
  5. By: Jean-Marie Dufour; Tianyu He
    Abstract: This paper proposes asymptotically distribution-free inference methods for comparing a broad range of welfare indices across dependent samples, including those employed in inequality, poverty, and risk analysis. Two distinct situations are considered. First, we propose asymptotic and bootstrap intersection methods that are completely robust to arbitrary dependence between two samples. Second, we focus on the common case of overlapping samples, a special form of dependent samples where sample dependence arises solely from matched pairs, and provide asymptotic and bootstrap methods for comparing indices. We derive consistent estimates of asymptotic variances using the influence function approach. The performance of the proposed methods is studied in a simulation experiment: we find that confidence intervals with overlapping samples exhibit satisfactory coverage rates with reasonable precision, whereas conventional methods based on an assumption of independent samples perform worse in terms of coverage rates and interval widths. Asymptotic inference can be less reliable when dealing with heavy-tailed distributions, while the bootstrap method provides a viable remedy, unless the variance is substantial or nonexistent. The intersection method yields reliable results with arbitrarily dependent samples, including instances where overlapping samples are not feasible. We demonstrate the practical applicability of our proposed methods by analyzing dynamic changes in household financial inequality in Italy over time.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.21862
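The overlapping-samples idea can be illustrated with a paired bootstrap for the change in a Gini index: resampling matched pairs jointly preserves the dependence between the two waves. This is a generic sketch under that resampling scheme, not the paper's influence-function variance estimator or its intersection method, and the simulated incomes are assumptions.

```python
import numpy as np

def gini(y):
    """Gini coefficient via the sorted-rank formula
    G = 2 * sum(i * y_(i)) / (n * sum(y)) - (n + 1) / n."""
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    ranks = np.arange(1, n + 1)
    return 2 * (ranks * y).sum() / (n * y.sum()) - (n + 1) / n

def paired_bootstrap_ci(y1, y2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the change in the Gini index between two
    overlapping (matched-pair) samples: resample household indices jointly
    so the dependence between waves is preserved."""
    rng = np.random.default_rng(seed)
    n = len(y1)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample pairs, not single waves
        diffs[b] = gini(y2[idx]) - gini(y1[idx])
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

rng = np.random.default_rng(1)
y1 = rng.lognormal(mean=0.0, sigma=0.8, size=300)   # hypothetical incomes
# Pure rescaling leaves the (scale-invariant) Gini unchanged, so the CI
# for the change should sit at zero.
lo, hi = paired_bootstrap_ci(y1, 1.5 * y1, n_boot=200)
```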
  6. By: Sharif Al Mamun; Rakib Hossain; Md. Jobayer Rahman; Malay Kumar Devnath; Farhana Afroz; Lisan Al Amin
    Abstract: A Bayesian analytics framework that precisely quantifies uncertainty offers a significant advance for financial risk management. We develop an integrated approach that consistently enhances the handling of risk in market volatility forecasting, fraud detection, and compliance monitoring. Our probabilistic, interpretable models deliver reliable results: we evaluate the performance of one-day-ahead 95% Value-at-Risk (VaR) forecasts on daily S&P 500 returns, with a training period from 2000 to 2019 and an out-of-sample test period spanning 2020 to 2024. Formal tests of unconditional (Kupiec) and conditional (Christoffersen) coverage reveal that an LSTM baseline achieves near-nominal calibration. In contrast, a GARCH(1, 1) model with Student-t innovations underestimates tail risk. Our proposed discount-factor DLM model produces a slightly liberal VaR estimate, with evidence of clustered violations. Bayesian logistic regression improves recall and AUC-ROC for fraud detection, and a hierarchical Beta state-space model provides transparent and adaptive compliance risk assessment. The pipeline is distinguished by precise uncertainty quantification, interpretability, and GPU-accelerated analysis, delivering up to a 50x speedup. Remaining challenges include sparse fraud data and proxy compliance labels, but the framework enables actionable risk insights. Future expansion will extend feature sets, explore regime-switching priors, and enhance scalable inference.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.15739
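The Kupiec test of unconditional coverage used above is a standard likelihood-ratio test and is compact enough to sketch. Under correct coverage the statistic is asymptotically chi-squared with one degree of freedom (5% critical value 3.84); the violation counts below are hypothetical, not the paper's results.

```python
from math import log

def kupiec_pof(violations, n, p=0.05):
    """Kupiec proportion-of-failures LR statistic for unconditional coverage
    of a VaR forecast. violations: days the loss exceeded VaR; n: number of
    test days; p: nominal tail probability. Asymptotically chi2(1) under
    correct coverage."""
    x = violations
    pi_hat = x / n  # observed violation rate (MLE under the alternative)
    if pi_hat in (0.0, 1.0):
        log_l1 = 0.0  # degenerate MLE: the alternative likelihood equals 1
    else:
        log_l1 = (n - x) * log(1 - pi_hat) + x * log(pi_hat)
    log_l0 = (n - x) * log(1 - p) + x * log(p)
    return -2 * (log_l0 - log_l1)

# 60 violations in 1250 days at the 5% level is close to the expected 62.5,
# so the statistic should fall well below the 3.84 critical value.
lr = kupiec_pof(60, 1250, p=0.05)
```

The Christoffersen test adds a Markov component that also checks whether violations cluster in time, which is what flags the clustered violations reported for the DLM model.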
  7. By: Schafgans, Marcia M. A.; Zinde-Walsh, Victoria
    Abstract: This paper derives limit properties of nonparametric kernel regression estimators without requiring the existence of a density for regressors in ℝ^d. In the functional setting, limit properties are established for multivariate functional regression. The rate and asymptotic normality of the Nadaraya–Watson (NW) estimator are established for distributions of regressors in ℝ^d that allow for mass points, factor structure, multicollinearity and nonlinear dependence, as well as fractal distributions; when a bounded density exists we provide statistical guarantees for the standard rate and asymptotic normality without requiring smoothness. We demonstrate faster convergence associated with dimension-reducing types of singularity, such as a fractal distribution or a factor structure in the regressors. The paper extends asymptotic normality of kernel functional regression to multivariate regression over a product of any number of metric spaces. Finite-sample evidence confirms the rate improvement due to singularity in regression over ℝ^d. For functional regression, the simulations underline the importance of accounting for multiple functional regressors. We demonstrate the applicability and advantages of the NW estimator in an empirical study that reexamines the job training program evaluation based on the LaLonde data.
    Keywords: Nadaraya–Watson estimator; singular distribution; multivariate functional regression; small cube probability
    JEL: C1
    Date: 2025–12–22
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:130725
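For reference, the Nadaraya–Watson estimator at a point is the kernel-weighted average m̂(x) = Σᵢ K((x − xᵢ)/h) yᵢ / Σᵢ K((x − xᵢ)/h). The sketch below is the textbook one-dimensional version with a Gaussian kernel; the paper's results cover regressors in general metric and product spaces, and the simulated data and bandwidth here are assumptions.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    m_hat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h).
    One-dimensional textbook sketch with a fixed bandwidth h."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    out = np.empty(len(x_eval))
    for j, x0 in enumerate(np.asarray(x_eval, dtype=float)):
        w = np.exp(-0.5 * ((x0 - x_train) / h) ** 2)  # Gaussian kernel weights
        out[j] = (w * y_train).sum() / w.sum()
    return out

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=400)
y = np.sin(x) + 0.1 * rng.standard_normal(400)  # noisy sine regression
m_hat = nadaraya_watson(x, y, [0.0, 1.0], h=0.2)
```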
  8. By: Richard Kima; Keagile Lesame
    Abstract: We estimate the macroeconomic effects of shifting to a lower inflation target for South Africa within a Structural Vector Autoregressive (SVAR) framework identified using the Max Share identification strategy and estimated with Bayesian methods. We find that a 1 percentage point decrease in the inflation target leads to output expanding over the next few quarters after an initial muted response; the response peaks at about 1.20% after roughly two years and remains positive and statistically significant for nearly three years after the shock.
    Keywords: Inflation targeting, Macroeconomics, Econometric models (Monetary policy), South Africa
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:unu:wpaper:wp-2025-106
  9. By: Liyuan Cui; Guanhao Feng; Yuefeng Han; Jiayan Li
    Abstract: We tackle the challenge of estimating grouping structures and factor loadings in asset pricing models, where traditional regressions struggle due to sparse data and high noise. Existing approaches, such as those using fused penalties and multi-task learning, often enforce coefficient homogeneity across cross-sectional units, reducing flexibility. Clustering methods (e.g., spectral clustering, Lloyd's algorithm) achieve consistent recovery under specific conditions but typically rely on a single data source. To address these limitations, we introduce the Panel Coupled Matrix-Tensor Clustering (PMTC) model, which simultaneously leverages a characteristics tensor and a return matrix to identify latent asset groups. By integrating these data sources, we develop computationally efficient tensor clustering algorithms that enhance both clustering accuracy and factor loading estimation. Simulations demonstrate that our methods outperform single-source alternatives in clustering accuracy and coefficient estimation, particularly under moderate signal-to-noise conditions. Empirical application to U.S. equities demonstrates the practical value of PMTC, yielding higher out-of-sample total $R^2$ and economically interpretable variation in factor exposures.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.23567
  10. By: Ian Dew-Becker; Stefano Giglio; Pooya Molavi
    Abstract: Financial markets (and more generally the real economy) display a wide range of important nonlinearities. This paper focuses on stock returns, which are skewed left (generating crashes) and have volatility that moves over time, is itself skewed, is strongly related to the level of prices, and displays long memory. This paper shows that such behavior is almost inevitable when prices are formed by investors acquiring information about the true, but latent, value of stocks. It studies a general model of filtering in which agents receive signals about the fundamental value of the stock market and dynamically update their beliefs (potentially with biases). When those beliefs are non-normal and investors believe crashes can happen, prices generically display the range of nonlinearities observed in the data. While the model does not explain where crashes come from, it shows that investors believing that prices can crash is sufficient to generate the rich higher-order dynamics observed empirically. In a simple calibration with iid shocks to fundamentals, the model fits well quantitatively, and regression-based tests support the model’s mechanism.
    Keywords: Skewness; crashes; Tail risk; Learning
    JEL: G1 G12 C11 C32
    Date: 2025–08–27
    URL: https://d.repec.org/n?u=RePEc:fip:fedhwp:102246
  11. By: Agostino Capponi; Chengpiao Huang; J. Antonio Sidaoui; Kaizheng Wang; Jiacheng Zou
    Abstract: We investigate machine learning models for stock return prediction in non-stationary environments, revealing a fundamental nonstationarity-complexity tradeoff: complex models reduce misspecification error but require longer training windows that introduce stronger non-stationarity. We resolve this tension with a novel model selection method that jointly optimizes the model class and training window size using a tournament procedure that adaptively evaluates candidates on non-stationary validation data. Our theoretical analysis demonstrates that this approach balances misspecification error, estimation variance, and non-stationarity, performing close to the best model in hindsight. Applying our method to 17 industry portfolio returns, we consistently outperform standard rolling-window benchmarks, improving out-of-sample $R^2$ by 14-23% on average. During NBER-designated recessions, the improvements are substantial: our method achieves positive $R^2$ during the Gulf War recession while the benchmarks are negative, improves $R^2$ in absolute terms by at least 80bps during the 2001 recession, and delivers superior performance during the 2008 Financial Crisis. Economically, a trading strategy based on our selected model generates 31% higher cumulative returns averaged across the industries.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.23596
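The window-size half of the tradeoff can be illustrated directly: a long window reduces estimation variance but mixes in stale regimes, while a short window stays current at the cost of noisier estimates. The sketch below selects a rolling-window length by validation error for a one-feature OLS predictor. It is a minimal illustration of that tension under a simulated structural break, not the paper's adaptive tournament over model classes.

```python
import numpy as np

def select_window(returns, features, windows, val_size):
    """Pick a rolling-window length by out-of-sample validation error: fit a
    one-feature OLS predictor (through the origin) on each candidate window
    and keep the window whose one-step forecasts do best on the most recent
    val_size observations."""
    T = len(returns)
    best_w, best_mse = None, np.inf
    for w in windows:
        errs = []
        for t in range(T - val_size, T):
            lo = max(0, t - w)
            x, y = features[lo:t], returns[lo:t]
            beta = (x * y).sum() / (x * x).sum()   # rolling OLS slope
            errs.append((returns[t] - beta * features[t]) ** 2)
        mse = float(np.mean(errs))
        if mse < best_mse:
            best_w, best_mse = w, mse
    return best_w, best_mse

# Simulated structural break: the predictive slope flips sign at t = 400,
# so the long window averages over two regimes while the short window
# tracks the current one.
rng = np.random.default_rng(0)
T = 600
x = rng.standard_normal(T)
beta_path = np.where(np.arange(T) < 400, 1.0, -1.0)
r = beta_path * x + 0.1 * rng.standard_normal(T)
best_w, _ = select_window(r, x, windows=[50, 400], val_size=100)
```

After the break, the short window should be selected: the long window's slope estimate is contaminated by the pre-break regime.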
  12. By: Stewart, Shamar L.; Isengildina Massa, Olga
    Keywords: Risk and Uncertainty, Demand and Price Analysis
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:ags:aaea24:343936

This nep-ets issue is ©2026 by Simon Sosvilla-Rivero. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.