nep-cmp New Economics Papers
on Computational Economics
Issue of 2026–02–23
fifteen papers chosen by
Stan Miles, Thompson Rivers University


  1. Optimal Audit Targeting with Machine Learning: Evidence from Pakistan By Nicholas Lacoste; Zehra Farooq
  2. Transformer-based CoVaR: Systemic Risk in Textual Information By Junyu Chen; Tom Boot; Lingwei Kong; Weining Wang
  3. Predicting Corporate ESG Scores from Financial Performance and Environmental Indicators: A Machine Learning Framework By Chouech, Olfa
  4. DeXposure-FM: A Time-series, Graph Foundation Model for Credit Exposures and Stability on Decentralized Financial Networks By Aijie Shu; Wenbin Wu; Gbenga Ibikunle; Fengxiang He
  5. Forecasting European Sovereign Spreads using Machine Learning By Bouillot, Roland; Candelon, Bertrand; Kool, Clemens
  6. Artificial Intelligence and Financial Stability Risks in Nigeria By Ozili, Peterson K; Obiora, Kingsley; Onuzo, Chinwe
  7. Role of Artificial Intelligence in Finance: Selective Literature Review and Implications for Asia's Financial Stability By Yang ZHANG; Ziang QIU; Donghyun PARK; Shu TIAN
  8. Choice via AI By Christopher Kops; Elias Tsakas
  9. Predicting Well-Being with Mobile Phone Data: Evidence from Four Countries By M. Merritt Smith; Emily Aiken; Joshua E. Blumenstock; Sveta Milusheva
  10. Quantum Speedups for Derivative Pricing Beyond Black-Scholes By Dylan Herman; Yue Sun; Jin-Peng Liu; Marco Pistoia; Charlie Che; Rob Otter; Shouvanik Chakrabarti; Aram Harrow
  11. A Novel approach to portfolio construction By T. Di Matteo; L. Riso; M. G. Zoia
  12. It must be very hard to publish null results By Briggs, Ryan C.; Mellon, Jonathan; Arel-Bundock, Vincent
  13. Group Selection as a Safeguard Against AI Substitution By Qiankun Zhong; Thomas F. Eisenmann; Julian Garcia; Iyad Rahwan
  14. AI Assisted Economics Measurement From Survey: Evidence from Public Employee Pension Choice By Tiancheng Wang; Krishna Sharma
  15. Efficient Monte Carlo Valuation of Corporate Bonds in Financial Networks By Dohyun Ahn; Agostino Capponi

  1. By: Nicholas Lacoste (Tulane University); Zehra Farooq (Federal Board of Revenue, Pakistan)
    Abstract: This paper bridges welfare economics and machine learning econometrics to develop empirically implementable algorithms for optimal audit targeting. We derive a sufficient statistic-based targeting algorithm that depends on three individualized causal effects: the immediate revenue recovered from an audit, the causal effect of an audit on long-run tax revenue, and the marginal administrative cost of an audit. We estimate these effects with a variety of machine learners, comparing causal forests, LASSO, gradient boosted trees, and neural networks using the universe of Pakistani income tax returns, exploiting years in which audits were assigned completely at random. We implement our targeting algorithms in out-of-bag years, comparing them to the real-world policy when audits were partially or entirely targeted. We show that the real-world audit program in Pakistan lost almost 173,000 Rs ($1,700) in net revenue per audit, while our optimal policy generates 285,000 Rs ($2,800) in expected net revenue per audit. We also find that targeting audits based on immediate recoup is sub-optimal relative to targeting on long-run deterrence in this setting. Moving forward, our framework offers a general approach to empirical welfare maximization using machine learning in resource-constrained policy settings.
    Keywords: optimal audit policy, tax enforcement, machine learning, sufficient statistics
    JEL: H21 H26 C14 C45
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:tul:wpaper:2603
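The targeting rule in this abstract reduces to a simple threshold on three predicted quantities. A minimal stdlib sketch of that decision rule (field names and values are illustrative, not the paper's estimates or algorithm):

```python
# Sufficient-statistic audit rule sketched from the abstract: audit a taxpayer when
# predicted immediate recovery plus long-run deterrence exceeds the marginal cost.
# All numbers below are made up for illustration.
def audit_decisions(taxpayers):
    return [t["id"] for t in taxpayers
            if t["recovery"] + t["deterrence"] - t["cost"] > 0]

taxpayers = [
    {"id": 1, "recovery": 120.0, "deterrence": 300.0, "cost": 150.0},  # audit
    {"id": 2, "recovery":  40.0, "deterrence":  60.0, "cost": 150.0},  # skip
    {"id": 3, "recovery":  10.0, "deterrence": 500.0, "cost": 150.0},  # audit: deterrence dominates
]
print(audit_decisions(taxpayers))   # → [1, 3]
```

Taxpayer 3 illustrates the paper's headline finding: an audit can be worthwhile on deterrence alone even when immediate recovery is small.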
  2. By: Junyu Chen; Tom Boot; Lingwei Kong; Weining Wang
    Abstract: Conditional Value-at-Risk (CoVaR) quantifies systemic financial risk by measuring the loss quantile of one asset, conditional on another asset experiencing distress. We develop a Transformer-based methodology that integrates financial news articles directly with market data to improve CoVaR estimates. Unlike approaches that use predefined sentiment scores, our method incorporates raw text embeddings generated by a large language model (LLM). We prove explicit error bounds for our Transformer CoVaR estimator, showing that accurate CoVaR learning is possible even with small datasets. Using U.S. market returns and Reuters news items from 2006--2013, our out-of-sample results show that textual information impacts the CoVaR forecasts. With better predictive performance, we identify a pronounced negative dip during market stress periods across several equity assets when comparing the Transformer-based CoVaR to both the CoVaR without text and the CoVaR using traditional sentiment measures. Our results show that textual data can be used to effectively model systemic risk without requiring prohibitively large data sets.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.12490
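CoVaR, as defined in this abstract, is a conditional loss quantile. A minimal empirical sketch of that definition in stdlib Python (illustrative only; the paper's contribution is the Transformer-based estimator, not this plug-in version):

```python
# Empirical CoVaR sketch: the alpha-quantile of system returns on days when the
# conditioning institution's return is at or below its own alpha-quantile.
import random

def quantile(xs, q):
    """Empirical quantile via the sorted-order statistic (no interpolation)."""
    s = sorted(xs)
    idx = max(0, min(len(s) - 1, int(q * len(s))))
    return s[idx]

def covar(system, inst, alpha=0.05):
    var_inst = quantile(inst, alpha)                 # institution's VaR threshold
    distress = [s for s, i in zip(system, inst) if i <= var_inst]
    return quantile(distress, alpha)                 # system loss quantile in distress

# Toy data: the system loads on the institution, so its tail worsens in distress.
random.seed(0)
inst = [random.gauss(0, 1) for _ in range(5000)]
system = [0.6 * i + random.gauss(0, 0.8) for i in inst]

print(covar(system, inst))      # conditional tail quantile
print(quantile(system, 0.05))   # unconditional 5% VaR, for comparison
```

With positive exposure, the conditional quantile sits well below the unconditional one, which is the dependence CoVaR is designed to capture.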
  3. By: Chouech, Olfa
    Abstract: As investors, regulators, and the public increasingly emphasize sustainable investment amid growing climate concerns, the accurate prediction of Environmental, Social, and Governance (ESG) metrics has become a crucial complement to traditional assessment methods. This study analyzes 1,000 companies across nine industries and seven regions between 2015 and 2025 to predict overall ESG scores using key financial and environmental indicators. To ensure robust predictive performance, a diverse set of machine learning algorithms—including Linear Regression, Random Forests, and four boosting models (AdaBoost, LightGBM, XGBoost, and CatBoost)—was employed. To address potential bias in panel data, a panel-aware machine learning framework incorporating GroupKFold cross-validation was implemented. The results show that boosting algorithms consistently outperform traditional linear approaches in predicting ESG scores. Among them, CatBoost achieved the best overall performance, with the lowest RMSE (4.608), MAE (2.222), and MSE (21.234), and the highest R² (0.913), indicating strong predictive accuracy. Overall, this study presents an innovative and transferable framework for predicting ESG scores, thus contributing to both empirical research and quantitative modeling practices. Furthermore, it advances the sustainability field by providing a machine learning–based application that enables companies to predict their ESG scores in real time.
    Keywords: ESG, Machine Learning, Boosting Algorithms, Sustainable Development, Predictive Modeling
    JEL: O32 Q55 Q56
    Date: 2025–09–01
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:127272
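The GroupKFold cross-validation named above keeps all observations from one company in a single fold, so no firm appears in both train and test. A stdlib sketch of that idea (the paper presumably uses scikit-learn's implementation; this toy version only illustrates the leakage-prevention logic):

```python
# Group-aware K-fold splitting: each group (e.g. a firm observed across years)
# lands in exactly one fold, preventing panel-data leakage across the split.
from collections import defaultdict

def group_kfold(groups, n_splits=3):
    """Yield (train_idx, test_idx) pairs with whole groups held out together."""
    by_group = defaultdict(list)
    for idx, g in enumerate(groups):
        by_group[g].append(idx)
    # Greedy balancing: largest groups first, into the currently smallest fold.
    folds = [[] for _ in range(n_splits)]
    for g, idxs in sorted(by_group.items(), key=lambda kv: -len(kv[1])):
        min(folds, key=len).extend(idxs)
    all_idx = set(range(len(groups)))
    for test in folds:
        yield sorted(all_idx - set(test)), sorted(test)

# Toy panel: firm id repeated across years.
groups = ["A", "A", "B", "B", "C", "C", "D", "E", "F"]
for train, test in group_kfold(groups, n_splits=3):
    assert not ({groups[i] for i in test} & {groups[i] for i in train})
```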
  4. By: Aijie Shu; Wenbin Wu; Gbenga Ibikunle; Fengxiang He
    Abstract: Credit exposure in Decentralized Finance (DeFi) is often implicit and token-mediated, creating a dense web of inter-protocol dependencies. Thus, a shock to one token may result in significant and uncontrolled contagion effects. As the DeFi ecosystem becomes increasingly linked with traditional financial infrastructure through instruments such as stablecoins, the risk posed by this dynamic demands more powerful quantification tools. We introduce DeXposure-FM, to the best of our knowledge the first time-series graph foundation model for measuring and forecasting inter-protocol credit exposure on DeFi networks. Employing a graph-tabular encoder with pre-trained weight initialization and multiple task-specific heads, DeXposure-FM is trained on the DeXposure dataset, which contains 43.7 million data entries across 4,300+ protocols on 602 blockchains, covering 24,300+ unique tokens. The training is operationalized for credit-exposure forecasting, predicting the joint dynamics of (1) protocol-level flows and (2) the topology and weights of credit-exposure links. DeXposure-FM is empirically validated on two machine learning benchmarks, where it consistently outperforms state-of-the-art approaches, including a graph foundation model and temporal graph neural networks. DeXposure-FM further yields financial economics tools that support macroprudential monitoring and scenario-based DeFi stress testing, enabling protocol-level systemic-importance scores and sector-level spillover and concentration measures via a forecast-then-measure pipeline. Empirical verification supports these tools. The model and code are publicly available. Model: https://huggingface.co/EVIEHub/DeXposure-FM. Code: https://github.com/EVIEHub/DeXposure-FM.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.03981
  5. By: Bouillot, Roland (Maastricht University); Candelon, Bertrand (Université catholique de Louvain, LIDAM/LFIN, Belgium); Kool, Clemens (Maastricht University)
    Abstract: Accurate forecasting constitutes a central objective for policymakers. This paper examines the application of advanced machine-learning techniques to predict the 10-year sovereign bond spreads vis-à-vis the German bund, employing a novel high-dimensional dataset covering 10 European countries over the period 2007−2025. An exhaustive comparison of predictive performance, both in-sample and out-of-sample, demonstrates that XGBoost delivers the highest degree of accuracy. Building on these forecasts, we construct fragmentation matrices that capture the extent of asymmetry across Euro area sovereign bond markets. Prior to the COVID-19 crisis, results confirm the well-documented clustering between core and peripheral countries. However, since 2021 this segmentation appears to have weakened, as French and Belgian spreads exhibit a synchronous trajectory. These findings contribute to the literature on financial integration and fragmentation within the Euro area, offering new insights into the evolving dynamics of sovereign bond markets.
    Keywords: Machine learning ; Financial fragmentation risk ; XGBoost ; Sovereign spreads
    Date: 2025–11–30
    URL: https://d.repec.org/n?u=RePEc:ajf:louvlf:2025004
  6. By: Ozili, Peterson K; Obiora, Kingsley; Onuzo, Chinwe
    Abstract: Artificial intelligence is disrupting the financial sector globally and will affect financial regulation and financial system stability in several ways. Yet little is known about how artificial intelligence might affect the stability of the financial system. Using a contextual framework and discourse analysis methodology, this article identifies some risks that artificial intelligence could pose to financial system stability in Nigeria, focusing on how AI risks affect those directly involved in financial stability work. If these risks are mitigated, the adoption of AI for financial stability work will yield positive benefits for financial stability in Nigeria.
    Keywords: Nigeria, artificial intelligence, financial stability, algorithm, banking supervision, financial regulation, financial sector.
    JEL: G21 G23 O31 O33
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:127370
  7. By: Yang ZHANG (Faculty of Business Administration, University of Macau); Ziang QIU (Faculty of Business Administration, University of Macau); Donghyun PARK (The South East Asian Central Banks (SEACEN) Research and Training Centre); Shu TIAN (Economic Research and Development Impact Department, Asian Development Bank)
    Abstract: This paper provides a comprehensive systematic review of the transformative impact of Artificial Intelligence (AI) on the global financial landscape. By synthesising 249 peer-reviewed studies published between 1990 and 2025, the research categorises AI’s contributions into three primary domains: asset pricing and portfolio management; financial markets and institutions; and corporate finance and governance. Furthermore, the review offers a specialised assessment of AI’s implications for financial stability within Asia. The findings reveal that while AI acts as a "stabilising intelligence" by enhancing efficiency, predictive precision, and financial inclusion, it simultaneously introduces "adaptive fragility" by concentrating market power, embedding algorithmic biases, and intensifying systemic linkages.
    Keywords: Artificial Intelligence, literature review, financial markets, financial stability, Asia
    JEL: G10 G20 G30 M15 O33
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:sea:wpaper:wp61
  8. By: Christopher Kops; Elias Tsakas
    Abstract: This paper proposes a model of choice via agentic artificial intelligence (AI). A key feature is that the AI may misinterpret a menu before recommending what to choose. A single acyclicity condition guarantees that there is a monotonic interpretation and a strict preference relation that together rationalize the AI's recommendations. Since this preference is in general not unique, there is no safeguard against it misaligning with that of a decision maker. What enables the verification of such AI alignment is interpretations satisfying double monotonicity. Indeed, double monotonicity ensures full identifiability and internal consistency. However, an additional idempotence property is required to guarantee that recommendations are fully rational and remain grounded within the original feasible set.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.04526
  9. By: M. Merritt Smith; Emily Aiken; Joshua E. Blumenstock; Sveta Milusheva
    Abstract: We provide systematic evidence on the potential for estimating household well-being from mobile phone data. Using data from four countries - Afghanistan, Côte d'Ivoire, Malawi, and Togo - we conduct parallel, standardized machine learning experiments to assess which measures of welfare can be most accurately predicted, which types of phone data are most useful, and how much training data is required. We find that long-term poverty measures such as wealth indices (Pearson's rho = 0.20-0.59) and multidimensional poverty (rho = 0.29-0.57) can be predicted more accurately than consumption (rho = 0.04-0.54); transient vulnerability measures like food security and mental health are very difficult to predict. Models using calls and text message behavior are more predictive than those using metadata on mobile internet usage, mobile money transactions, and airtime top-ups. Predictive accuracy improves rapidly through the first 1,000-2,000 training observations, with continued gains beyond 4,500 observations. Model performance depends strongly on sample heterogeneity: nationally representative samples yield 20-70 percent higher accuracy than urban-only or rural-only samples.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.02805
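The accuracies above are Pearson correlations between predicted and surveyed welfare. For reference, a minimal stdlib implementation of Pearson's rho (toy numbers, not the paper's data):

```python
# Pearson correlation: covariance of the two series divided by the product
# of their standard deviations.
import math

def pearson_rho(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted = [0.2, 0.5, 0.1, 0.9, 0.4]   # model's welfare scores (illustrative)
surveyed  = [0.3, 0.4, 0.2, 0.8, 0.5]   # ground-truth survey measures (illustrative)
print(pearson_rho(predicted, surveyed))
```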
  10. By: Dylan Herman; Yue Sun; Jin-Peng Liu; Marco Pistoia; Charlie Che; Rob Otter; Shouvanik Chakrabarti; Aram Harrow
    Abstract: This paper explores advancements in quantum algorithms for derivative pricing of exotics, a computational pipeline of fundamental importance in quantitative finance. For such cases, the classical Monte Carlo integration procedure provides the state-of-the-art provable, asymptotic performance: polynomial in problem dimension and quadratic in inverse-precision. While quantum algorithms are known to offer quadratic speedups over classical Monte Carlo methods, end-to-end speedups have been proven only in the simplified setting over the Black-Scholes geometric Brownian motion (GBM) model. This paper extends existing frameworks to demonstrate novel quadratic speedups for more practical models, such as the Cox-Ingersoll-Ross (CIR) model and a variant of Heston's stochastic volatility model, utilizing a characteristic of the underlying SDEs which we term fast-forwardability. Additionally, for general models that do not possess the fast-forwardable property, we introduce a quantum Milstein sampler, based on a novel quantum algorithm for sampling Lévy areas, which enables quantum multi-level Monte Carlo to achieve quadratic speedups for multi-dimensional stochastic processes exhibiting certain correlation types. We also present an improved analysis of numerical integration for derivative pricing, leading to substantial reductions in the resource requirements for pricing GBM and CIR models. Furthermore, we investigate the potential for additional reductions using arithmetic-free quantum procedures. Finally, we critique quantum partial differential equation (PDE) solvers as a method for derivative pricing based on amplitude estimation, identifying theoretical barriers that obstruct achieving a quantum speedup through this approach. Our findings significantly advance the understanding of quantum algorithms in derivative pricing, addressing key challenges and open questions in the field.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.03725
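The "quadratic in inverse-precision" and "quadratic speedup" claims refer to the standard query-complexity gap between Monte Carlo averaging and quantum amplitude estimation, which can be summarized as:

```latex
% Error after N samples/queries for a payoff with standard deviation \sigma:
% classical Monte Carlo averages N i.i.d. payoff draws, while amplitude
% estimation improves the error linearly in the number of oracle queries.
\epsilon_{\mathrm{MC}} \sim \frac{\sigma}{\sqrt{N}}
  \;\Longrightarrow\; N = O(\epsilon^{-2}),
\qquad
\epsilon_{\mathrm{QAE}} \sim \frac{1}{N}
  \;\Longrightarrow\; N = O(\epsilon^{-1}).
```

The paper's contribution is proving that this end-to-end gap survives for models beyond GBM, not the gap itself.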
  11. By: T. Di Matteo; L. Riso; M. G. Zoia
    Abstract: This paper proposes a machine learning-based framework for asset selection and portfolio construction, termed the Best-Path Algorithm Sparse Graphical Model (BPASGM). The method extends the Best-Path Algorithm (BPA) by mapping linear and non-linear dependencies among a large set of financial assets into a sparse graphical model satisfying a structural Markov property. Based on this representation, BPASGM performs a dependence-driven screening that removes positively or redundantly connected assets, isolating subsets that are conditionally independent or negatively correlated. This step is designed to enhance diversification and reduce estimation error in high-dimensional portfolio settings. Portfolio optimization is then conducted on the selected subset using standard mean-variance techniques. BPASGM does not aim to improve the theoretical mean-variance optimum under known population parameters, but rather to enhance realized performance in finite samples, where sample-based Markowitz portfolios are highly sensitive to estimation error. Monte Carlo simulations show that BPASGM-based portfolios achieve more stable risk-return profiles, lower realized volatility, and superior risk-adjusted performance compared to standard mean-variance portfolios. Empirical results for U.S. equities, global stock indices, and foreign exchange rates over 1990-2025 confirm these findings and demonstrate a substantial reduction in portfolio cardinality. Overall, BPASGM offers a statistically grounded and computationally efficient framework that integrates sparse graphical modeling with portfolio theory for dependence-aware asset selection.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.03325
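After BPASGM's screening step, optimization proceeds by standard mean-variance techniques. A two-asset stdlib sketch of the Markowitz weights w ∝ Σ⁻¹μ on toy inputs (purely illustrative; not the paper's data or screening method):

```python
# Mean-variance weights for two assets: w = inv(Sigma) @ mu, rescaled to sum to one.
# The 2x2 inverse is written out explicitly to keep this stdlib-only.
def tangency_weights_2(mu, cov):
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    w = [inv[0][0] * mu[0] + inv[0][1] * mu[1],
         inv[1][0] * mu[0] + inv[1][1] * mu[1]]
    s = w[0] + w[1]
    return [wi / s for wi in w]

mu = [0.08, 0.05]                    # expected excess returns (illustrative)
cov = [[0.04, 0.01], [0.01, 0.02]]   # covariance matrix (illustrative)
w = tangency_weights_2(mu, cov)
print(w)   # weights sum to one
```

Estimation error in mu and cov is exactly what makes sample-based versions of this optimum fragile, which is the finite-sample problem the screening step targets.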
  12. By: Briggs, Ryan C.; Mellon, Jonathan; Arel-Bundock, Vincent
    Abstract: Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
    Date: 2026
    URL: https://d.repec.org/n?u=RePEc:zbw:i4rdps:281
  13. By: Qiankun Zhong; Thomas F. Eisenmann; Julian Garcia; Iyad Rahwan
    Abstract: Reliance on generative AI can reduce cultural variance and diversity, especially in creative work. This reduction in variance has already led to problems in model performance, including model collapse and hallucination. In this paper, we examine the long-term consequences of AI use for human cultural evolution and the conditions under which widespread AI use may lead to "cultural collapse", a process in which reliance on AI-generated content reduces human variation and innovation and slows cumulative cultural evolution. Using an agent-based model and evolutionary game theory, we compare two types of AI use: complement and substitute. AI-complement users seek suggestions and guidance while remaining the main producers of the final output, whereas AI-substitute users provide minimal input, and rely on AI to produce most of the output. We then study how these use strategies compete and spread under evolutionary dynamics. We find that AI-substitute users prevail under individual-level selection despite the stronger reduction in cultural variance. By contrast, AI-complement users can benefit their groups by maintaining the variance needed for exploration, and can therefore be favored under cultural group selection when group boundaries are strong. Overall, our findings shed light on the long-term, population-level effects of AI adoption and inform policy and organizational strategies to mitigate these risks.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.03541
  14. By: Tiancheng Wang; Krishna Sharma
    Abstract: We develop an iterative framework for economic measurement that leverages large language models to extract measurement structure directly from survey instruments. The approach maps survey items to a sparse distribution over latent constructs through what we term a soft mapping, aggregates harmonized responses into respondent-level sub-dimension scores, and disciplines the resulting taxonomy through out-of-sample incremental validity tests and discriminant validity diagnostics. The framework explicitly integrates iteration into the measurement construction process. Overlap and redundancy diagnostics trigger targeted taxonomy refinement and constrained remapping, ensuring that added measurement flexibility is retained only when it delivers stable out-of-sample performance gains. Applied to a large-scale public employee retirement plan survey, the framework identifies which semantic components contain behavioral signal and clarifies the economic mechanisms, such as beliefs versus constraints, that matter for retirement choices. The methodology provides a portable measurement audit of survey instruments that can guide both empirical analysis and survey design.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.02604
  15. By: Dohyun Ahn; Agostino Capponi
    Abstract: Valuing corporate bonds in systemic economies is challenging due to intricate webs of inter-institutional exposures. When a bank defaults, cascading losses propagate through the network, with payments determined by a system of fixed-point equations lacking closed-form solutions. Standard Monte Carlo methods cannot capture rare yet critical default events, while existing rare-event simulation techniques fail to account for higher-order network effects and scale poorly with network size. To overcome these challenges, we propose a novel approach -- Bi-Level Importance Sampling with Splitting -- and characterize individual bank defaults by decoupling them from the network's complex fixed-point dynamics. This separation enables a two-stage estimation process that directly generates samples from the banks' default events. We demonstrate theoretically that the method is both scalable and asymptotically optimal, and validate its effectiveness through numerical studies on empirically observed networks.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.12770
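The paper's estimator builds on importance sampling for rare default events. A minimal stdlib sketch of the core idea on a toy problem - estimating a deep Gaussian tail probability by sampling from a shifted proposal and reweighting by the likelihood ratio (this illustrates plain importance sampling only, not the paper's bi-level splitting scheme):

```python
# Importance sampling for a rare event: estimate p = P(Z > c) for Z ~ N(0,1)
# by drawing from N(c, 1), which hits the tail about half the time, and
# reweighting each tail draw by the density ratio phi(z) / phi(z - c).
import math, random

def rare_tail_is(c, n, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(c, 1.0)                    # tilted proposal centered at c
        if z > c:
            # Likelihood ratio phi(z)/phi(z - c) = exp(c^2/2 - c*z)
            total += math.exp(c * c / 2.0 - c * z)
        # draws with z <= c contribute zero to the indicator
    return total / n

c = 4.0
est = rare_tail_is(c, 200_000)
exact = 0.5 * math.erfc(c / math.sqrt(2.0))      # closed-form P(Z > 4)
print(est, exact)
```

A naive estimator would need on the order of millions of draws to see even a handful of such events; the tilted proposal makes them routine, which is the same motivation behind the paper's network-aware scheme.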

This nep-cmp issue is ©2026 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.