nep-cmp New Economics Papers
on Computational Economics
Issue of 2025–07–28
27 papers chosen by
Stan Miles, Thompson Rivers University


  1. Forecasting Budgetary Items in Türkiye Using Deep Learning By Altug Aydemir; Cem Cebi
  2. Does Deep Learning Improve Forecast Accuracy of Crude Oil Prices? Evidence from a Neural Network Approach By Altug Aydemir; Mert Gokcu
  3. Interpretable Machine Learning for Macro Alpha: A News Sentiment Case Study By Yuke Zhang
  4. Adjusting Manual Rates to Own Experience: Comparing the Credibility Approach to Machine Learning By Giorgio Alfredo Spedicato; Christophe Dutang; Quentin Guibert
  5. Machine Learning Applications in Credit Risk Prediction By Kubra Bolukbas; Ertan Tok
  6. FinMaster: A Holistic Benchmark for Mastering Full-Pipeline Financial Workflows with LLMs By Junzhe Jiang; Chang Yang; Aixin Cui; Sihan Jin; Ruiyu Wang; Bo Li; Xiao Huang; Dongning Sun; Xinrun Wang
  7. Quantum Reservoir Computing for Realized Volatility Forecasting By Qingyu Li; Chiranjib Mukhopadhyay; Abolfazl Bayat; Ali Habibnia
  8. AI shrinkage: a data-driven approach for risk-optimized portfolios By Gianluca De Nard; Damjan Kostovic
  9. Satellites turn “concrete”: Tracking cement with satellite data and neural networks By Alexandre d'Aspremont; Simon Ben Arous; Jean-Charles Bricongne; Benjamin Lietti; Baptiste Meunier
  10. IISE PG&E Energy Analytics Challenge 2025: Hourly-Binned Regression Models Beat Transformers in Load Forecasting By Millend Roy; Vladimir Pyltsov; Yinbo Hu
  11. Can NASDAQ-100 derivatives ETF portfolio beat QQQ? By Lo, Chi-Sheng
  12. Hybrid machine learning models for marketing business analytics: A selective review By Saad, Haythem
  13. A Review of Financial Data Analysis Techniques for Unstructured Data in the Deep Learning Era: Methods, Challenges, and Applications By Duane, Jackson; Morgan, Ashley; Carter, Emily
  14. AI: A Fed Policymaker's View: A speech at the National Bureau of Economic Research, Summer Institute 2025: Digital Economics and Artificial Intelligence, Cambridge, Massachusetts, July 17, 2025 By Lisa D. Cook
  15. Humans expect rationality and cooperation from LLM opponents in strategic games By Darija Barak; Miguel Costa-Gomes
  16. Testing stationarity and change point detection in reinforcement learning By Li, Mengbing; Shi, Chengchun; Wu, Zhenke; Fryzlewicz, Piotr
  17. Predicting Financial Market Crises using Multilayer Network Analysis and LSTM-based Forecasting of Spillover Effects By Mahdi Kohan Sefidi
  18. Simulation of square-root processes made simple: applications to the Heston model By Eduardo Abi Jaber
  19. Predictive modeling the past By Paker, Meredith; Stephenson, Judy; Wallis, Patrick
  20. R&D-Agent-Quant: A Multi-Agent Framework for Data-Centric Factors and Model Joint Optimization By Yuante Li; Xu Yang; Xiao Yang; Minrui Xu; Xisen Wang; Weiqing Liu; Jiang Bian
  21. Public science vs. mission-oriented policies in long-run growth: An agent-based model By Andrea Borsato; André Lorentz
  22. Rethinking Knowledge Management in the Age of AI: Cognitive Limits, Ethical Tensions, and Organizational Challenges By Achy, Lahcen
  23. Foundation Time-Series AI Model for Realized Volatility Forecasting By Anubha Goel; Puneet Pasricha; Martin Magris; Juho Kanniainen
  24. Agent-based Liquidity Risk Modelling for Financial Markets By Perukrishnen Vytelingum; Rory Baggott; Namid Stillman; Jianfei Zhang; Dingqiu Zhu; Tao Chen; Justin Lyon
  25. Measuring Family (Dis)Advantage: Lessons from Detailed Parental Information By Sander de Vries
  26. Opportunities and Risks of Artificial Intelligence in Recruitment and Selection By Asmaa El Ajouri
  27. Generative AI at the Crossroads: Light Bulb, Dynamo, or Microscope? By Martin Neil Baily; David M. Byrne; Aidan T. Kane; Paul E. Soto

  1. By: Altug Aydemir; Cem Cebi
    Abstract: This study aims to forecast the future behavior of budget variables for Türkiye using Artificial Neural Network (ANN) and Deep Neural Network (DNN) techniques. In particular, we focus on budget expenditures, tax revenues and their main components. Annual data were used and divided into two sub-periods: a training set (2002-2019) and a test set (2020-2022). Each fiscal item is estimated using relevant explanatory variables selected based on economic theory. We achieved good forecasting performance for the main budget items using the ANN and DNN methodologies. First, most of the Mean Absolute Error (MAE) values fell within the acceptable range, an indicator of good prediction performance. Second, the MAE values for public expenditures are lower than those for taxes. Third, estimating total tax revenues (aggregate data) performs better than estimating the subcomponents of taxes (disaggregated data); the opposite holds for public expenditures.
    Keywords: Machine Learning, Deep Learning, Artificial Neural Network (ANN), Deep Neural Network (DNN), Budget Forecast, Government Spending, Tax Revenue
    JEL: C53 H20 H50 H68
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:tcb:wpaper:2509
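    To fix ideas, a minimal sketch of the workflow described above (not the authors' code) fits a small feed-forward network to annual data, holds out the final years, and scores forecasts by MAE. The synthetic data, explanatory variables, and the scikit-learn estimator are assumptions for illustration only.
      # Hedged illustration: neural-network regression on annual data with a
      # chronological train (2002-2019) / test (2020-2022) split and MAE scoring.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(0)
      years = np.arange(2002, 2023)
      X = rng.normal(size=(len(years), 3))   # assumed macro covariates (e.g. growth, inflation, employment)
      y = 10 + X @ np.array([2.0, 1.5, 0.5]) + rng.normal(scale=0.3, size=len(years))  # a budget item

      train = years <= 2019                  # training window
      test = years >= 2020                   # test window

      scaler = StandardScaler().fit(X[train])
      model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
      model.fit(scaler.transform(X[train]), y[train])

      pred = model.predict(scaler.transform(X[test]))
      print("test MAE:", mean_absolute_error(y[test], pred))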
  2. By: Altug Aydemir; Mert Gokcu
    Abstract: In recent years, machine learning-based techniques have gained prominence in forecasting crude oil prices due to their ability to handle effectively the highly volatile and nonlinear nature of oil prices. The primary objective of this paper is to forecast monthly oil prices with the highest level of precision and accuracy possible. To this end, we propose a deepened and highly parameterized version of the deep neural network model framework that integrates widely adopted algorithms and a variety of datasets. Our approach also involves identifying the optimal architecture for deep neural networks used in oil price forecasting and offers forecasts that are repeatable and consistent. All evaluation metrics indicate that the proposed model achieves superior forecasting performance compared with simple conventional statistical models.
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:tcb:econot:2511
  3. By: Yuke Zhang
    Abstract: This study introduces an interpretable machine learning (ML) framework to extract macroeconomic alpha from global news sentiment. We process the Global Database of Events, Language, and Tone (GDELT) Project's worldwide news feed using FinBERT -- a Bidirectional Encoder Representations from Transformers (BERT) based model pretrained on finance-specific language -- to construct daily sentiment indices incorporating mean tone, dispersion, and event impact. These indices drive an XGBoost classifier, benchmarked against logistic regression, to predict next-day returns for EUR/USD, USD/JPY, and 10-year U.S. Treasury futures (ZN). Rigorous out-of-sample (OOS) backtesting (5-fold expanding-window cross-validation, OOS period: c. 2017-April 2025) demonstrates exceptional, cost-adjusted performance for the XGBoost strategy: Sharpe ratios achieve 5.87 (EUR/USD), 4.65 (USD/JPY), and 4.65 (Treasuries), with respective compound annual growth rates (CAGRs) exceeding 50% in Foreign Exchange (FX) and 22% in bonds. Shapley Additive Explanations (SHAP) affirm that sentiment dispersion and article impact are key predictive features. Our findings establish that integrating domain-specific Natural Language Processing (NLP) with interpretable ML offers a potent and explainable source of macro alpha.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.16136
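    A compact sketch of the modelling step described above (not the paper's code) feeds daily sentiment features to an XGBoost classifier and benchmarks it against logistic regression under expanding-window cross-validation. The synthetic features and the xgboost/scikit-learn packages are assumptions; the paper's FinBERT/GDELT pipeline is not reproduced here.
      # Hedged illustration: sentiment features -> XGBoost vs. logistic regression
      # with expanding-window (time-series) cross-validation.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import TimeSeriesSplit
      from sklearn.metrics import accuracy_score
      from xgboost import XGBClassifier

      rng = np.random.default_rng(1)
      n = 2000                                      # roughly eight years of daily observations
      X = np.column_stack([
          rng.normal(size=n),                       # mean news tone (assumed feature)
          rng.gamma(2.0, 1.0, size=n),              # tone dispersion (assumed feature)
          rng.poisson(3.0, size=n),                 # event-impact count (assumed feature)
      ])
      y = (X[:, 0] - 0.2 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # synthetic direction label

      for name, model in [("xgboost", XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")),
                          ("logit", LogisticRegression(max_iter=1000))]:
          scores = []
          for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):   # expanding training window
              model.fit(X[train_idx], y[train_idx])
              scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
          print(name, "mean OOS accuracy:", round(float(np.mean(scores)), 3))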
  4. By: Giorgio Alfredo Spedicato (Leitha SRL); Christophe Dutang (ASAR - Applied Statistics And Reliability - ASAR - LJK - Laboratoire Jean Kuntzmann - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes); Quentin Guibert (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - Université Paris Dauphine-PSL - PSL - Université Paris Sciences et Lettres - CNRS - Centre National de la Recherche Scientifique, LSAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon)
    Abstract: Credibility theory is the usual framework in actuarial science when it comes to reinforcing individual experience by transferring rates estimated from collective information. Based on the paradigm of transfer learning, this article presents the idea that a machine learning (ML) model pre-trained on a rich market data portfolio can improve the prediction of rates for an individual insurance portfolio. The framework consists first in training several ML models on a market portfolio of insurance data. The pre-trained models provide valuable information on the relations between features and predicted rates. Furthermore, features shared with the company dataset are used to predict rates better than the same ML models trained on the insurer's dataset alone. Our approach is illustrated with classical ML models on an anonymized dataset including both market data and data from a European non-life insurance company, and is compared with a hierarchical Bühlmann-Straub credibility model. We observe that the transfer learning strategy combining company data with external market data significantly improves prediction accuracy compared to an ML model trained only on the insurer's data and provides competitive results compared to hierarchical credibility models.
    Keywords: Transfer learning, Hierarchical credibility theory, Bühlmann credibility theory, Boosting, Deep Learning
    Date: 2025–06–27
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-04821310
  5. By: Kubra Bolukbas; Ertan Tok
    Abstract: The goal of this study is to identify the most effective model for predicting credit risk, the likelihood that a commercial loan defaults (becomes a non-performing loan), in the Turkish banking sector, and to determine which firm and loan characteristics influence that risk. The analysis draws on an unbalanced dataset of 1.2 million firm-level observations for 2018–2023, combining financial ratios with detailed loan- and firm-specific information. Class imbalance is addressed through oversampling (including SMOTE) and multiple down-sampling schemes. Although the risk is assessed ex ante, model performance is evaluated ex post using the ROC-AUC metric. Among the tested conventional econometric and machine learning approaches, combined with different sampling techniques, Extreme Gradient Boosting (XGBoost) with oversampling delivers the best result, with a ROC-AUC score of 0.914. Compared with logistic regression under the same sampling setup, this represents a 4.9-percentage-point increase in test ROC-AUC, confirming the model’s superior predictive performance over conventional approaches. The study finds the industry and location in which a firm operates, its loan-restructuring status, loan cost and type (fixed vs. floating rate), the firm’s record of bad checks, and core ratios capturing profitability, liquidity and leverage to be the most influential predictors of credit risk.
    Keywords: Credit Risk, Machine Learning Techniques, Financial Ratios, Banking Sector, Macro-Financial Stability, Feature Importance
    JEL: C52 C53 C55 G17 G2 G32 G33
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:tcb:wpaper:2508
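    A minimal sketch of the class-imbalance workflow described above (not the study's code) oversamples the minority class with SMOTE on the training split only, then compares XGBoost and logistic regression by test ROC-AUC. The synthetic data and the imbalanced-learn/xgboost packages are assumptions.
      # Hedged illustration: SMOTE oversampling + XGBoost vs. logistic regression, ROC-AUC evaluation.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split
      from imblearn.over_sampling import SMOTE
      from xgboost import XGBClassifier

      X, y = make_classification(n_samples=20000, n_features=15, weights=[0.95, 0.05],
                                 random_state=0)                 # ~5% defaults (assumed rate)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
      X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample only the training data

      for name, model in [("xgboost", XGBClassifier(n_estimators=300, max_depth=4, eval_metric="auc")),
                          ("logit", LogisticRegression(max_iter=1000))]:
          model.fit(X_res, y_res)
          auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
          print(name, "test ROC-AUC:", round(auc, 3))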
  6. By: Junzhe Jiang; Chang Yang; Aixin Cui; Sihan Jin; Ruiyu Wang; Bo Li; Xiao Huang; Dongning Sun; Xinrun Wang
    Abstract: Financial tasks are pivotal to global economic stability; however, their execution faces challenges including labor-intensive processes, low error tolerance, data fragmentation, and tool limitations. Although large language models (LLMs) have succeeded in various natural language processing tasks and have shown potential in automating workflows through reasoning and contextual understanding, current benchmarks for evaluating LLMs in finance lack sufficient domain-specific data, rely on simplistic task designs, and offer incomplete evaluation frameworks. To address these gaps, this article presents FinMaster, a comprehensive financial benchmark designed to systematically assess the capabilities of LLMs in financial literacy, accounting, auditing, and consulting. Specifically, FinMaster comprises three main modules: i) FinSim, which builds simulators that generate synthetic, privacy-compliant financial data for companies to replicate market dynamics; ii) FinSuite, which provides tasks in core financial domains, spanning 183 tasks of various types and difficulty levels; and iii) FinEval, which develops a unified interface for evaluation. Extensive experiments over state-of-the-art LLMs reveal critical capability gaps in financial reasoning, with accuracy dropping from over 90% on basic tasks to merely 40% on complex scenarios requiring multi-step reasoning. This degradation reflects the propagation of computational errors: single-metric calculations that initially reach 58% accuracy fall to 37% in multi-metric scenarios. To the best of our knowledge, FinMaster is the first benchmark that covers full-pipeline financial workflows with challenging tasks. We hope that FinMaster can bridge the gap between research and industry practitioners, driving the adoption of LLMs in real-world financial practices to enhance efficiency and accuracy.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.13533
  7. By: Qingyu Li; Chiranjib Mukhopadhyay; Abolfazl Bayat; Ali Habibnia
    Abstract: Recent advances in quantum computing have demonstrated its potential to significantly enhance the analysis and forecasting of complex classical data. Among these, quantum reservoir computing has emerged as a particularly powerful approach, combining quantum computation with machine learning for modeling nonlinear temporal dependencies in high-dimensional time series. As with many data-driven disciplines, quantitative finance and econometrics can hugely benefit from emerging quantum technologies. In this work, we investigate the application of quantum reservoir computing for realized volatility forecasting. Our model employs a fully connected transverse-field Ising Hamiltonian as the reservoir with distinct input and memory qubits to capture temporal dependencies. The quantum reservoir computing approach is benchmarked against several econometric models and standard machine learning algorithms. The models are evaluated using multiple error metrics and the model confidence set procedures. To enhance interpretability and mitigate current quantum hardware limitations, we utilize wrapper-based forward selection for feature selection, identifying optimal subsets, and quantifying feature importance via Shapley values. Our results indicate that the proposed quantum reservoir approach consistently outperforms benchmark models across various metrics, highlighting its potential for financial forecasting despite existing quantum hardware constraints. This work serves as a proof-of-concept for the applicability of quantum computing in econometrics and financial analysis, paving the way for further research into quantum-enhanced predictive modeling as quantum hardware capabilities continue to advance.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.13933
  8. By: Gianluca De Nard; Damjan Kostovic
    Abstract: The paper introduces a new type of shrinkage estimation that is not based on asymptotic optimality but uses artificial intelligence (AI) techniques to shrink the sample eigenvalues. The proposed AI Shrinkage estimator applies to both linear and nonlinear shrinkage, demonstrating improved performance compared to the classic shrinkage estimators. Our results demonstrate that reinforcement learning solutions identify a downward bias in classic shrinkage intensity estimates derived under the i.i.d. assumption and automatically correct for it in response to prevailing market conditions. Additionally, our data-driven approach enables more efficient implementation of risk-optimized portfolios and is well-suited for real-world investment applications including various optimization constraints.
    Keywords: Covariance matrix estimation, linear and nonlinear shrinkage, portfolio management, reinforcement learning, risk optimization
    JEL: C13 C58 G11
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:zur:econwp:470
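    For readers unfamiliar with eigenvalue shrinkage, the sketch below shows generic linear shrinkage of the sample covariance toward a scaled identity, which is equivalent to pulling the sample eigenvalues toward their grand mean. The paper's contribution is letting a reinforcement-learning agent choose the shrinkage intensity; the fixed intensity here is purely illustrative and is not the authors' AI estimator.
      # Generic linear shrinkage of sample eigenvalues (illustration only).
      import numpy as np

      def linear_shrinkage(returns, delta):
          """Shrink the sample covariance S toward mu*I with intensity delta in [0, 1]."""
          S = np.cov(returns, rowvar=False)
          mu = np.trace(S) / S.shape[0]          # grand mean of the sample eigenvalues
          return (1 - delta) * S + delta * mu * np.eye(S.shape[0])

      rng = np.random.default_rng(0)
      R = rng.normal(size=(60, 100))             # 60 observations, 100 assets: p > n, S is singular
      Sigma = linear_shrinkage(R, delta=0.5)     # delta would be chosen by the RL agent in the paper
      print("condition number before:", np.linalg.cond(np.cov(R, rowvar=False)))
      print("condition number after :", np.linalg.cond(Sigma))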
  9. By: Alexandre d'Aspremont (LIENS - Laboratoire d'informatique de l'école normale supérieure - DI-ENS - Département d'informatique - ENS-PSL - ENS-PSL - École normale supérieure - Paris - PSL - Université Paris Sciences et Lettres - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - CNRS - Centre National de la Recherche Scientifique, SIERRA - Statistical Machine Learning and Parsimony - DI-ENS - Département d'informatique - ENS-PSL - ENS-PSL - École normale supérieure - Paris - PSL - Université Paris Sciences et Lettres - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - CNRS - Centre National de la Recherche Scientifique - Centre Inria de Paris - Inria - Institut National de Recherche en Informatique et en Automatique, Kayrros); Simon Ben Arous (Kayrros); Jean-Charles Bricongne (LEO - Laboratoire d'Économie d'Orleans [2022-...] - UO - Université d'Orléans - UT - Université de Tours - UCA - Université Clermont Auvergne, Centre de recherche de la Banque de France - Banque de France); Benjamin Lietti (EPEE - Centre d'Etudes des Politiques Economiques - UEVE - Université d'Évry-Val-d'Essonne - Université Paris-Saclay); Baptiste Meunier (Centre de recherche de la Banque Centrale européenne - Banque Centrale Européenne, AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique)
    Abstract: This paper exploits daily infrared images taken from satellites to track economic activity in advanced and emerging countries. We first develop a framework to read, clean, and exploit satellite images. Our algorithm uses the laws of physics (Planck's law) and machine learning to detect the heat produced by cement plants in activity. This allows us to monitor in real time whether a cement plant is working. Applying this to around 1,000 plants, we construct a satellite-based index. We show that this satellite index outperforms benchmark models and alternative indicators for nowcasting the production of the cement industry as well as activity in the construction sector. Comparing across methods, neural networks appear to yield more accurate predictions as they exploit the granularity of our dataset. Overall, combining satellite images and machine learning can help policymakers take informed and swift economic policy decisions by nowcasting economic activity accurately and in real time.
    Keywords: Big data, Data science, Machine learning, Construction, High-frequency data
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05104995
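    As a worked example of the physics invoked above, Planck's law gives the spectral radiance a satellite sensor would see from a hot kiln versus ambient ground; the wavelength and temperatures below are illustrative assumptions, not values from the paper.
      # Planck's law B(lambda, T) = (2 h c^2 / lambda^5) / (exp(h c / (lambda k T)) - 1)
      import numpy as np

      h = 6.62607015e-34   # Planck constant, J*s
      c = 2.99792458e8     # speed of light, m/s
      k = 1.380649e-23     # Boltzmann constant, J/K

      def planck_radiance(wavelength_m, temperature_k):
          """Spectral radiance in W * sr^-1 * m^-3."""
          return (2 * h * c**2 / wavelength_m**5) / np.expm1(h * c / (wavelength_m * k * temperature_k))

      lam = 3.9e-6  # a mid-infrared band often used for hot-spot detection (assumed)
      print("active kiln (~1200 K):", planck_radiance(lam, 1200.0))
      print("ambient ground (~300 K):", planck_radiance(lam, 300.0))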
  10. By: Millend Roy; Vladimir Pyltsov; Yinbo Hu
    Abstract: Accurate electricity load forecasting is essential for grid stability, resource optimization, and renewable energy integration. While transformer-based deep learning models like TimeGPT have gained traction in time-series forecasting, their effectiveness in long-term electricity load prediction remains uncertain. This study evaluates forecasting models ranging from classical regression techniques to advanced deep learning architectures using data from the ESD 2025 competition. The dataset includes two years of historical electricity load data, alongside temperature and global horizontal irradiance (GHI) across five sites, with a one-day-ahead forecasting horizon. Since actual test-set load values remain undisclosed, leveraging predicted values would accumulate errors, making this a long-term forecasting challenge. We (i) employ Principal Component Analysis (PCA) for dimensionality reduction, (ii) frame the task as a regression problem, using temperature and GHI as covariates to predict load for each hour, and (iii) stack 24 hourly models to generate yearly forecasts. Our results reveal that deep learning models, including TimeGPT, fail to consistently outperform simpler statistical and machine learning approaches due to the limited availability of training data and exogenous variables. In contrast, XGBoost, with minimal feature engineering, delivers the lowest error rates across all test cases while maintaining computational efficiency. This highlights the limitations of deep learning in long-term electricity forecasting and reinforces the importance of model selection based on dataset characteristics rather than complexity. Our study provides insights into practical forecasting applications and contributes to the ongoing discussion on the trade-offs between traditional and modern forecasting methods.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.11390
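    The "hourly-binned" idea above can be sketched as one gradient-boosted regressor per hour of the day, trained on temperature and GHI. The synthetic data, column names, and the xgboost package are assumptions; the competition pipeline (including PCA) is not reproduced here.
      # Hedged illustration: 24 hourly XGBoost regressors for load forecasting.
      import numpy as np
      import pandas as pd
      from xgboost import XGBRegressor
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(0)
      idx = pd.date_range("2023-01-01", periods=2 * 365 * 24, freq="h")
      df = pd.DataFrame({
          "temperature": 15 + 10 * np.sin(2 * np.pi * idx.dayofyear / 365) + rng.normal(0, 2, len(idx)),
          "ghi": np.clip(np.sin(np.pi * (idx.hour - 6) / 12), 0, None) * 800,
      }, index=idx)
      df["load"] = 100 + 2 * df["temperature"] - 0.03 * df["ghi"] + rng.normal(0, 5, len(idx))

      train, test = df[df.index.year == 2023], df[df.index.year == 2024]
      models, preds = {}, []
      for hour, grp in train.groupby(train.index.hour):            # one model per hour bin
          m = XGBRegressor(n_estimators=200, max_depth=3)
          m.fit(grp[["temperature", "ghi"]], grp["load"])
          models[hour] = m
      for hour, grp in test.groupby(test.index.hour):
          preds.append(pd.Series(models[hour].predict(grp[["temperature", "ghi"]]), index=grp.index))
      pred = pd.concat(preds).sort_index()
      print("test MAE:", mean_absolute_error(test["load"], pred))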
  11. By: Lo, Chi-Sheng
    Abstract: This study explores whether a NASDAQ-100 derivatives ETF portfolio can outperform the Invesco QQQ Trust (QQQ) using a Deep Reinforcement Learning framework based on Proximal Policy Optimization (PPO). The portfolio dynamically allocates across three NASDAQ-100 derivative ETFs: YQQQ (short options income), QYLD (covered calls), and TQQQ (3x leveraged), employing Isolation Forest anomaly detection to optimize rebalancing timing. A train-validation-test framework (2010-2018 training, 2019-2023 validation, 2024-2025 testing) uses a multi-objective function to balance tracking-error minimization and excess-return maximization, integrating dividend payments and combining quarterly with event-driven rebalancing. The results show significant alpha generation over QQQ by leveraging YQQQ’s inverse exposure, QYLD’s income stability, and TQQQ’s leveraged growth. Although it experiences higher volatility and drawdowns, the PPO agent skillfully optimizes allocations, achieving positive excess returns in the testing phase, with performance varying by market condition and emphasizing the need for adaptive strategies in dynamic markets.
    Keywords: Deep reinforcement learning, enhanced index tracking, isolation forest, QQQ, Nasdaq 100, exchange traded fund, options derivatives
    JEL: C32 C44 C61
    Date: 2025–07–10
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:125307
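    The anomaly-detection trigger mentioned above can be sketched with scikit-learn's Isolation Forest flagging unusual return/volatility days, which could then prompt an event-driven rebalance alongside the quarterly schedule. The synthetic return series and contamination rate are assumptions; the PPO allocation policy itself is not shown.
      # Hedged illustration: Isolation Forest as an event-driven rebalancing trigger.
      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)
      returns = rng.normal(0, 0.01, 1000)
      returns[500:505] = rng.normal(0, 0.06, 5)          # inject a short stress episode
      features = np.column_stack([returns, np.abs(returns)])

      detector = IsolationForest(contamination=0.02, random_state=0).fit(features)
      flags = detector.predict(features)                  # -1 marks anomalous days
      rebalance_days = np.where(flags == -1)[0]
      print("event-driven rebalance triggered on days:", rebalance_days[:10])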
  12. By: Saad, Haythem (Tunisian Ministry of Equipment and Housing)
    Abstract: This is the original English version of the paper with DOI 10.5281/zenodo.15470272, published on Zenodo under the Creative Commons Attribution 4.0 International license: https://doi.org/10.5281/zenodo.15470272. A French version is also available as a preprint on HAL under the identifier hal-05078079, version 1: https://hal.science/hal-05078079v1. This paper explores the use of hybrid machine learning models in business data analysis, based on a selective review of four landmark articles. The aim is to identify gaps in the existing literature and justify the importance of these models for enhancing decision-making across various fields, including finance and marketing. The findings indicate that hybrid models can optimize forecasting and campaign personalization, while highlighting the need for specific approaches to address the complexity of business data.
    Date: 2025–05–19
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:xtahd_v1
  13. By: Duane, Jackson; Morgan, Ashley; Carter, Emily
    Abstract: Financial institutions are increasingly leveraging unstructured data---such as text, audio, and images---to gain insights and competitive advantage. Deep learning (DL) has emerged as a powerful paradigm for analyzing these complex data types, transforming tasks like financial news analysis, earnings call interpretation, and document parsing. This paper provides a comprehensive academic review of deep learning techniques for unstructured financial data. We present a taxonomy of data types and DL methods, including natural language processing models, speech and audio processing frameworks, multimodal fusion approaches, and transformer-based architectures. We survey key applications ranging from sentiment analysis and market prediction to fraud detection, credit risk assessment, and beyond, highlighting recent advancements in each domain. Additionally, we discuss major challenges unique to financial settings, such as data scarcity and annotation cost, model interpretability and regulatory compliance, and the dynamic, non-stationary nature of financial data. We enumerate prominent datasets and benchmarks that have accelerated research, and identify research gaps and future directions. The review emphasizes the latest developments up to 2025, including the rise of large pre-trained models and multimodal learning, and outlines how these innovations are shaping the next generation of financial analytics.
    Date: 2025–06–25
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:gdvbj_v1
  14. By: Lisa D. Cook
    Date: 2025–07–17
    URL: https://d.repec.org/n?u=RePEc:fip:fedgsq:101342
  15. By: Darija Barak; Miguel Costa-Gomes
    Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily incentivised laboratory experiment examining differences in human behaviour in a multi-player p-beauty contest played against other humans versus LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, driven mainly by the increased prevalence of 'zero' Nash-equilibrium choices. This shift is concentrated among subjects with high strategic reasoning ability. Subjects who make the zero Nash-equilibrium choice motivate their strategy by appealing to the LLM's perceived reasoning ability and, unexpectedly, its propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous-choice games, uncover heterogeneities in subjects' behaviour and in their beliefs about LLMs' play, and suggest important implications for mechanism design in mixed human-LLM systems.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.11011
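    For readers unfamiliar with the game, the worked example below shows why zero is the Nash equilibrium of a p-beauty contest: each additional level of best-response reasoning multiplies the guess by p, so choices converge to zero. The p = 2/3 and level-0 guess of 50 are common textbook assumptions, not necessarily the experiment's parameters.
      # Iterated best responses in a p-beauty contest (guess closest to p * average wins).
      p = 2 / 3
      guess = 50.0                          # naive level-0 guess at the midpoint (assumed)
      for level in range(1, 11):            # each level best-responds to the previous one
          guess = p * guess
          print(f"level-{level} reasoning: guess {guess:.2f}")
      # As reasoning depth grows, guesses converge to 0, the unique Nash equilibrium.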
  16. By: Li, Mengbing; Shi, Chengchun; Wu, Zhenke; Fryzlewicz, Piotr
    Abstract: We consider reinforcement learning (RL) in possibly nonstationary environments. Many existing RL algorithms in the literature rely on the stationarity assumption that requires the state transition and reward functions to be constant over time. However, this assumption is restrictive in practice and is likely to be violated in a number of applications, including traffic signal control, robotics and mobile health. In this paper, we develop a model-free test to assess the stationarity of the optimal Q-function based on pre-collected historical data, without additional online data collection. Based on the proposed test, we further develop a change point detection method that can be naturally coupled with existing state-of-the-art RL methods designed for stationary environments for online policy optimization in nonstationary environments. The usefulness of our method is illustrated by theoretical results, simulation studies, and a real data example from the 2018 Intern Health Study. A Python implementation of the proposed procedure is publicly available at https://github.com/limengbinggz/CUSUM-RL.
    Keywords: change point detection; hypothesis testing; nonstationarity; reinforcement learning
    JEL: C1
    Date: 2025–06–30
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:127507
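    To convey the flavour of cumulative-sum change-point detection, the sketch below locates a mean shift in a reward sequence with a textbook CUSUM statistic. This is an illustration only; the paper's test targets nonstationarity of the optimal Q-function, and the authors' actual implementation is at the linked repository.
      # Textbook CUSUM change-point estimate for a mean shift (illustration only).
      import numpy as np

      rng = np.random.default_rng(0)
      rewards = np.concatenate([rng.normal(0.0, 1.0, 300),    # stationary segment
                                rng.normal(0.8, 1.0, 200)])   # regime change at t = 300
      cusum = np.cumsum(rewards - rewards.mean())             # drifts when the mean shifts
      t_hat = int(np.argmax(np.abs(cusum)))
      print("estimated change point:", t_hat)                 # should land near 300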
  17. By: Mahdi Kohan Sefidi
    Abstract: Financial crises often occur without warning, yet markets leading up to these events display increasing volatility and complex interdependencies across multiple sectors. This study proposes a novel approach to predicting market crises by combining multilayer network analysis with Long Short-Term Memory (LSTM) models, using Granger causality to capture within-layer connections and Random Forest to model interlayer relationships. Specifically, we utilize Granger causality to model the temporal dependencies between market variables within individual layers, such as asset prices, trading values, and returns. To represent the interactions between different market variables across sectors, we apply Random Forest to model the interlayer connections, capturing the spillover effects between these features. The LSTM model is then trained to predict market instability and potential crises based on the dynamic features of the multilayer network. Our results demonstrate that this integrated approach, combining Granger causality, Random Forest, and LSTM, significantly enhances the accuracy of market crisis prediction, outperforming traditional forecasting models. This methodology provides a powerful tool for financial institutions and policymakers to better monitor systemic risks and take proactive measures to mitigate financial crises.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.11019
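    The within-layer step described above can be sketched with statsmodels' Granger-causality test: simulate one series that leads another by a period and test the dependence. The simulated data and lag choice are assumptions; the multilayer network, Random Forest interlayer model, and LSTM are not reproduced here.
      # Hedged illustration: pairwise Granger-causality test with statsmodels.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      n = 500
      x = rng.normal(size=n)
      y = np.zeros(n)
      for t in range(1, n):
          y[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)       # x leads y by one period

      data = pd.DataFrame({"y": y, "x": x})                    # second column is the candidate cause
      res = grangercausalitytests(data[["y", "x"]], maxlag=2)
      print("lag-1 F-test p-value:", res[1][0]["ssr_ftest"][1])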
  18. By: Eduardo Abi Jaber (CMAP - Centre de Mathématiques Appliquées de l'Ecole polytechnique - Inria - Institut National de Recherche en Informatique et en Automatique - X - École polytechnique - IP Paris - Institut Polytechnique de Paris - CNRS - Centre National de la Recherche Scientifique)
    Abstract: We introduce a simple, efficient and accurate nonnegativity-preserving numerical scheme for simulating the square-root process. The novel idea is to simulate the integrated square-root process first instead of the square-root process itself. Numerical experiments on realistic parameter sets, applied to the integrated process and the Heston model, display high precision with a very low number of time steps. As a bonus, our scheme yields the exact limiting Inverse Gaussian distributions of the integrated square-root process with a single time step in two scenarios: (i) for high mean-reversion and volatility-of-volatility regimes, regardless of maturity; and (ii) for long maturities, independent of the other parameters.
    Date: 2024–12–15
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-04839193
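    For context, the sketch below is the conventional full-truncation Euler scheme for the square-root (CIR) process dV = kappa(theta - V)dt + sigma*sqrt(V)dW, the kind of baseline such schemes are typically compared against. It is not the paper's scheme, which instead simulates the integrated square-root process; parameter values are illustrative.
      # Standard full-truncation Euler scheme for the CIR / square-root process (baseline illustration).
      import numpy as np

      def cir_euler_full_truncation(v0, kappa, theta, sigma, T, n_steps, n_paths, seed=0):
          rng = np.random.default_rng(seed)
          dt = T / n_steps
          v = np.full(n_paths, v0, dtype=float)
          for _ in range(n_steps):
              dw = rng.normal(0.0, np.sqrt(dt), n_paths)
              v_pos = np.maximum(v, 0.0)            # full truncation keeps the diffusion term nonnegative
              v = v + kappa * (theta - v_pos) * dt + sigma * np.sqrt(v_pos) * dw
          return np.maximum(v, 0.0)

      paths = cir_euler_full_truncation(v0=0.04, kappa=2.0, theta=0.04, sigma=0.5,
                                        T=1.0, n_steps=250, n_paths=100000)
      print("mean V_T:", paths.mean(), "(long-run mean theta = 0.04)")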
  19. By: Paker, Meredith; Stephenson, Judy; Wallis, Patrick
    Abstract: Understanding long-run economic growth requires reliable historical data, yet the vast majority of long-run economic time series are drawn from incomplete records with significant temporal and geographic gaps. Conventional solutions to these gaps rely on linear regressions that risk bias or overfitting when data are scarce. We introduce "past predictive modeling," a framework that leverages machine learning and out-of-sample predictive modeling techniques to reconstruct representative historical time series from scarce data. Validating our approach using nominal wage data from England, 1300-1900, we show that this new method leads to more accurate and generalizable estimates, with bootstrapped standard errors 72% lower than benchmark linear regressions. Beyond improving accuracy, these improved wage estimates for England yield new insights into the impact of the Black Death on inequality, the economic geography of pre-industrial growth, and productivity over the long run.
    Keywords: machine learning; predictive modeling; wages; black death; industrial revolution
    JEL: J31 C53 N33 N13 N63
    Date: 2025–06–13
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:128852
  20. By: Yuante Li; Xu Yang; Xiao Yang; Minrui Xu; Xisen Wang; Weiqing Liu; Jiang Bian
    Abstract: Financial markets pose fundamental challenges for asset return prediction due to their high dimensionality, non-stationarity, and persistent volatility. Despite advances in large language models and multi-agent systems, current quantitative research pipelines suffer from limited automation, weak interpretability, and fragmented coordination across key components such as factor mining and model innovation. In this paper, we propose R&D-Agent for Quantitative Finance, in short RD-Agent(Q), the first data-centric multi-agent framework designed to automate the full-stack research and development of quantitative strategies via coordinated factor-model co-optimization. RD-Agent(Q) decomposes the quant process into two iterative stages: a Research stage that dynamically sets goal-aligned prompts, formulates hypotheses based on domain priors, and maps them to concrete tasks, and a Development stage that employs a code-generation agent, Co-STEER, to implement task-specific code, which is then executed in real-market backtests. The two stages are connected through a feedback stage that thoroughly evaluates experimental outcomes and informs subsequent iterations, with a multi-armed bandit scheduler for adaptive direction selection. Empirically, RD-Agent(Q) achieves up to 2X higher annualized returns than classical factor libraries using 70% fewer factors, and outperforms state-of-the-art deep time-series models on real markets. Its joint factor-model optimization delivers a strong balance between predictive accuracy and strategy robustness. Our code is available at: https://github.com/microsoft/RD-Agent.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.15155
  21. By: Andrea Borsato (UniBg - Università degli Studi di Bergamo = University of Bergamo, BETA - Bureau d'Économie Théorique et Appliquée - AgroParisTech - UNISTRA - Université de Strasbourg - Université de Haute-Alsace (UHA) - Université de Haute-Alsace (UHA) Mulhouse - Colmar - UL - Université de Lorraine - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); André Lorentz (BETA - Bureau d'Économie Théorique et Appliquée - AgroParisTech - UNISTRA - Université de Strasbourg - Université de Haute-Alsace (UHA) - Université de Haute-Alsace (UHA) Mulhouse - Colmar - UL - Université de Lorraine - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement)
    Abstract: This paper offers a contribution to the literature on science policies and on the possible trade-off between broad science-technology policies and mission-oriented programs. We develop a multi-country, multi-sectoral agent-based model that represents a small-scale monetary union. Findings are threefold. Firstly, symmetric science policies from governments significantly reduce cross-country growth divergence. Secondly, even if economic growth is largely driven by the sectors with absolute advantages, having some flow of open science investments is sufficient for the other industries to survive and innovate. Thirdly, science policy limits monopolistic tendencies and reduces income inequality. Yet, the working of the model suggests that supply-side science policies should be paired with demand-side policies to meet grand societal challenges.
    Keywords: Science policies, Structural and technical change, Economic growth
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05092674
  22. By: Achy, Lahcen
    Abstract: In an era shaped by artificial intelligence, massive data flows, and growing organizational uncertainty, classical models of knowledge management (KM) are facing increasing limitations. Initially developed in relatively stable environments, these frameworks struggle to account for hybrid knowledge ecosystems, algorithmic mediation, and new cognitive tensions. This paper offers a critical and integrative reassessment of the main KM model families. It highlights key contemporary tensions, including knowledge reification, fragmentation of cognitive processes, and human deskilling. To address these challenges, the paper relies on the concept of composite cognitive ecology, which offers a new lens to understand the interplay between human and algorithmic agents in the production, transmission, and use of knowledge. The paper thus aims to open new avenues for research and action, fostering the emergence of organizations that are more learning-oriented, critical, and adaptive.
    Keywords: Knowledge Management, Artificial Intelligence, Organizational Learning, Cognitive Ecosystems, Uncertainty, Digital Transformation.
    JEL: D83 L86 M15 O33
    Date: 2025–07–10
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:125311
  23. By: Anubha Goel; Puneet Pasricha; Martin Magris; Juho Kanniainen
    Abstract: Time series foundation models (FMs) have emerged as a popular paradigm for zero-shot multi-domain forecasting. These models are trained on numerous diverse datasets and claim to be effective forecasters across multiple different time series domains, including financial data. In this study, we evaluate the effectiveness of FMs, specifically the TimesFM model, for volatility forecasting, a core task in financial risk management. We first evaluate TimesFM in its pretrained (zero-shot) form, followed by our custom fine-tuning procedure based on incremental learning, and compare the resulting models against standard econometric benchmarks. While the pretrained model provides a reasonable baseline, our findings show that incremental fine-tuning, which allows the model to adapt to new financial return data over time, is essential for learning volatility patterns effectively. Fine-tuned variants not only improve forecast accuracy but also statistically outperform traditional models, as demonstrated through Diebold-Mariano and Giacomini-White tests. These results highlight the potential of foundation models as scalable and adaptive tools for financial forecasting, capable of delivering strong performance in dynamic market environments when paired with targeted fine-tuning strategies.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.11163
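    The Diebold-Mariano comparison mentioned above can be sketched as a test of equal expected squared-error loss between two volatility forecasts. The basic version below uses the simple asymptotic variance of the loss differential; in practice a HAC (Newey-West) variance is usual for multi-step horizons, and the series here are synthetic.
      # Hedged illustration: basic Diebold-Mariano test on squared-error losses.
      import numpy as np
      from scipy import stats

      def diebold_mariano(actual, forecast_a, forecast_b):
          d = (actual - forecast_a) ** 2 - (actual - forecast_b) ** 2   # loss differential
          dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
          p_value = 2 * (1 - stats.norm.cdf(abs(dm)))
          return dm, p_value

      rng = np.random.default_rng(0)
      rv = rng.gamma(2.0, 0.01, 500)                     # synthetic "realized volatility" series
      f_benchmark = rv + rng.normal(0, 0.010, 500)       # noisier benchmark forecast
      f_finetuned = rv + rng.normal(0, 0.005, 500)       # more accurate forecast
      dm, p = diebold_mariano(rv, f_benchmark, f_finetuned)
      print(f"DM statistic: {dm:.2f}, p-value: {p:.4f}")  # positive DM favours the second forecast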
  24. By: Perukrishnen Vytelingum; Rory Baggott; Namid Stillman; Jianfei Zhang; Dingqiu Zhu; Tao Chen; Justin Lyon
    Abstract: In this paper, we describe a novel agent-based approach for modelling the transaction cost of buying or selling an asset in financial markets, e.g., to liquidate a large position as a result of a margin call to meet financial obligations. The simple act of buying or selling in the market causes a price impact, and the associated cost is described as liquidity risk. For example, when selling a large order, there is market slippage -- each successive trade will execute at the same or a worse price. When the market adjusts to the new information revealed by the execution of such a large order, we observe in the data a permanent price impact that can be attributed to the change in the fundamental value as market participants reassess the value of the asset. In our ABM, we introduce a novel mechanism where traders assume order flow is informed and each trade reveals some information about the value of the asset, so traders update their belief of the fundamental value with every trade. The result is emergent, realistic price impact without oversimplifying the problem as most stylised models do, within a realistic framework that models the exchange with its protocols, its limit order book and its auction mechanism, and that can calculate the transaction cost of any execution strategy without limitation. Our stochastic ABM calculates the costs and uncertainties of buying and selling in a market by running Monte Carlo simulations, for a better understanding of liquidity risk, and can be used to optimise execution under liquidity risk. We demonstrate its practical application in the real world by calculating the liquidity risk for the Hang Seng Futures Index.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.15296
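    As a toy illustration of the slippage effect described above, walking a static limit order book with a large market sell order yields an average execution price below the best bid. The book levels and sizes are made up, and none of the paper's agent-based dynamics or auction mechanics are reproduced here.
      # Toy example: slippage from walking a static limit order book.
      import numpy as np

      bid_prices = np.array([100.0, 99.9, 99.8, 99.7, 99.6])   # best bid first (assumed levels)
      bid_sizes = np.array([200, 300, 400, 500, 600])           # shares available per level

      def sell_market_order(quantity):
          """Average execution price of a market sell order that walks the book."""
          remaining, proceeds = quantity, 0.0
          for price, size in zip(bid_prices, bid_sizes):
              fill = min(remaining, size)
              proceeds += fill * price
              remaining -= fill
              if remaining == 0:
                  break
          return proceeds / quantity

      for q in (100, 500, 1500):
          avg = sell_market_order(q)
          print(f"sell {q}: avg price {avg:.3f}, slippage {100.0 - avg:.3f} per share")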
  25. By: Sander de Vries (Vrije Universiteit Amsterdam and Tinbergen Institute)
    Abstract: This paper provides new insights on the importance of family background by linking 1.7 million Dutch children’s incomes to an exceptionally rich set of family characteristics — including income, wealth, education, occupation, crime, and health. Using a machine learning approach, I show that conventional analyses using parental income only considerably underestimate intergenerational dependence. This underestimation is concentrated at the extremes of the child income distribution, where families are often (dis)advantaged across multiple dimensions. Gender differences in intergenerational dependence are minimal, despite allowing for complex gender-specific patterns. A comparison with adoptees highlights the role of pre-birth factors in driving intergenerational transmission.
    Keywords: Intergenerational mobility, inequality of opportunity
    JEL: I24 J24 J62
    Date: 2025–02–14
    URL: https://d.repec.org/n?u=RePEc:tin:wpaper:20250010
  26. By: Asmaa El Ajouri (UM5 - Université mohamed 5, Rabat)
    Abstract: Artificial intelligence (AI) is emerging as a strategic lever in the evolution of recruitment practices, automating key tasks such as CV screening, candidate shortlisting, and conducting interviews via advanced tools like chatbots and video analysis. These innovations promise significant improvements in process efficiency, a reduction in human bias, and faster talent identification. However, this transformation raises major concerns. Algorithmic biases, despite their supposed neutrality, can reinforce existing discrimination. The opacity of AI-driven decisions limits transparency and fosters mistrust. Furthermore, the dehumanization of the recruitment process may alter the candidate experience and weaken the relationship between companies and applicants. This article adopts a conceptual approach, based on an in-depth literature review, to explore both the opportunities and the ethical, legal, and organizational risks associated with the use of AI in recruitment. The analysis highlights that while AI can optimize processes and reduce certain biases, it also introduces new challenges related to fairness, transparency, and trust. The study concludes that a hybrid recruitment model combining AI tools with human oversight appears to be the most effective and ethical path forward. Such an approach allows organizations to benefit from technological innovations without compromising the human dimension essential to recruitment.
    Keywords: Artificial intelligence, automated recruitment, algorithmic bias, recruitment ethics, transparency, candidate experience, discrimination, human resource management, talent acquisition.
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05086544
  27. By: Martin Neil Baily; David M. Byrne; Aidan T. Kane; Paul E. Soto
    Abstract: With the advent of generative AI (genAI), the potential scope of artificial intelligence has increased dramatically, but the future effect of genAI on productivity remains uncertain. The effect of the technology on the innovation process is a crucial open question. Some inventions, such as the light bulb, temporarily raise productivity growth as adoption spreads, but the effect fades when the market is saturated; that is, the level of output per hour is permanently higher but the growth rate is not. In contrast, two types of technologies stand out as having longer-lived effects on productivity growth. First, there are technologies known as general-purpose technologies (GPTs). GPTs (1) are widely adopted, (2) spur abundant knock-on innovations (new goods and services, process efficiencies, and business reorganization), and (3) show continual improvement, refreshing this innovation cycle; the electric dynamo is an example. Second, there are inventions of methods of invention (IMIs). IMIs increase the efficiency of the research and development process via improvements to observation, analysis, communication, or organization; the compound microscope is an example. We show that genAI has the characteristics of both a GPT and an IMI—an encouraging sign that genAI will raise the level of productivity. Even so, genAI’s contribution to productivity growth will depend on the speed with which that level is attained and, historically, the process for integrating revolutionary technologies into the economy is a protracted one.
    Keywords: Artificial Intelligence; Machine Learning; Productivity; Technological Growth
    JEL: C45 O31 O33 O40
    Date: 2025–07–17
    URL: https://d.repec.org/n?u=RePEc:fip:fedgfe:2025-53

This nep-cmp issue is ©2025 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.