nep-cmp New Economics Papers
on Computational Economics
Issue of 2026–01–12
twenty-two papers chosen by
Stan Miles, Thompson Rivers University


  1. An Efficient Machine Learning Framework for Option Pricing via Fourier Transform By Liying Zhang; Ying Gao
  2. Deep Learning and Elicitability for McKean-Vlasov FBSDEs With Common Noise By Felipe J. P. Antunes; Yuri F. Saporito; Sebastian Jaimungal
  3. Risk-Aware Financial Forecasting Enhanced by Machine Learning and Intuitionistic Fuzzy Multi-Criteria Decision-Making By Safiye Turgay; Serkan Erdoğan; Željko Stević; Orhan Emre Elma; Tevfik Eren; Zhiyuan Wang; Mahmut Baydaş
  4. Hybrid Quantum-Classical Ensemble Learning for S&P 500 Directional Prediction By Abraham Itzhak Weinberg
  5. Generative Agents and Expectations: Do LLMs Align with Heterogeneous Agent Models? By Filippo Gusella; Eugenio Vicario
  6. Uncertainty-Adjusted Sorting for Asset Pricing with Machine Learning By Yan Liu; Ye Luo; Zigan Wang; Xiaowei Zhang
  7. xtdml: Double Machine Learning Estimation to Static Panel Data Models with Fixed Effects in R By Annalivia Polselli
  8. Generative AI-enhanced Sector-based Investment Portfolio Construction By Alina Voronina; Oleksandr Romanko; Ruiwen Cao; Roy H. Kwon; Rafael Mendoza-Arriaga
  9. Ultimate Forward Rate Prediction and its Application to Bond Yield Forecasting: A Machine Learning Perspective By Jiawei Du; Yi Hong
  10. A Test of Lookahead Bias in LLM Forecasts By Zhenyu Gao; Wenxi Jiang; Yutong Yan
  11. Modeling Bank Systemic Risk of Emerging Markets under Geopolitical Shocks: Empirical Evidence from BRICS Countries By Haibo Wang
  12. The Red Queen's Trap: Limits of Deep Evolution in High-Frequency Trading By Yijia Chen
  13. High-Dimensional Spatial-Plus-Vertical Price Relationships and Price Transmission: A Machine Learning Approach By Mallory, Mindy; Peng, Rundong; Ma, Meilin; Wang, H. Holly
  14. The Peter Principle Revisited: An Agent-Based Model of Promotions, Efficiency, and Mitigation Policies By P. Rajguru; I. R. Churchill; G. Graham
  15. From Model Choice to Model Belief: Establishing a New Measure for LLM-Based Research By Hongshen Sun; Juanjuan Zhang
  16. LLM Agents for Combinatorial Efficient Frontiers: Investment Portfolio Optimization By Simon Paquette-Greenbaum; Jiangbo Yu
  17. The Impact of LLMs on Online News Consumption and Production By Hangcheng Zhao; Ron Berman
  18. Deep Learning for Art Market Valuation By Jianping Mei; Michael Moses; Jan Waelty; Yucheng Yang
  19. Fairness-Aware Insurance Pricing: A Multi-Objective Optimization Approach By Tim J. Boonen; Xinyue Fan; Zixiao Quan
  20. Knowing (not) to know: Explainable artificial intelligence and human metacognition By von Zahn, Moritz; Liebich, Lena; Jussupow, Ekaterina; Hinz, Oliver; Bauer, Kevin
  21. Modeling Loss Risk in Loan Portfolios with Various Heterogeneity Factors By Osadchiy, Maksim
  22. An Agent-Based approach to high-cost drugs for infectious diseases By Andrea Caravaggio; Silvia Leoni

  1. By: Liying Zhang; Ying Gao
    Abstract: The increasing need for rapid recalibration of option pricing models in dynamic markets places stringent computational demands on data generation and valuation algorithms. In this work, we propose a hybrid algorithmic framework that integrates the smooth offset algorithm (SOA) with supervised machine learning models for the fast pricing of multiple path-independent options under exponential Lévy dynamics. Building upon the SOA-generated dataset, we train neural networks, random forests, and gradient boosted decision trees to construct surrogate pricing operators. Extensive numerical experiments demonstrate that, once trained, these surrogates achieve order-of-magnitude acceleration over direct SOA evaluation. Importantly, the proposed framework overcomes key numerical limitations inherent to fast Fourier transform-based methods, including the consistency of input data and the instability in deep out-of-the-money option pricing.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.16115
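    A minimal Python sketch of the surrogate idea in item 1, under stand-in assumptions: the paper's smooth offset algorithm is not reproduced here, so Black-Scholes call prices play the role of the SOA-generated training labels, and a gradient-boosted tree serves as one of the surrogate pricing operators.
# Hypothetical sketch of a machine-learning surrogate for an option-pricing operator.
# The SOA pricer is NOT reproduced; Black-Scholes call prices stand in for its output.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Sample pricing inputs: moneyness, maturity, volatility, risk-free rate.
X = np.column_stack([
    rng.uniform(0.5, 1.5, n),   # moneyness S/K
    rng.uniform(0.05, 2.0, n),  # time to maturity (years)
    rng.uniform(0.1, 0.6, n),   # volatility
    rng.uniform(0.0, 0.05, n),  # risk-free rate
])

def bs_call(m, t, sigma, r):
    """Black-Scholes call price with K = 1 (stand-in for the SOA-generated labels)."""
    d1 = (np.log(m) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return m * norm.cdf(d1) - np.exp(-r * t) * norm.cdf(d2)

y = bs_call(*X.T)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = GradientBoostingRegressor(n_estimators=500, max_depth=4, learning_rate=0.05)
surrogate.fit(X_tr, y_tr)

# Once trained, evaluating the surrogate replaces repeated calls to the slow pricer.
print("surrogate MAE:", np.abs(surrogate.predict(X_te) - y_te).mean())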
  2. By: Felipe J. P. Antunes; Yuri F. Saporito; Sebastian Jaimungal
    Abstract: We present a novel numerical method for solving McKean-Vlasov forward-backward stochastic differential equations (MV-FBSDEs) with common noise, combining Picard iterations, elicitability and deep learning. The key innovation involves elicitability to derive a path-wise loss function, enabling efficient training of neural networks to approximate both the backward process and the conditional expectations arising from common noise - without requiring computationally expensive nested Monte Carlo simulations. The mean-field interaction term is parameterized via a recurrent neural network trained to minimize an elicitable score, while the backward process is approximated through a feedforward network representing the decoupling field. We validate the algorithm on a systemic risk inter-bank borrowing and lending model, where analytical solutions exist, demonstrating accurate recovery of the true solution. We further extend the model to quantile-mediated interactions, showcasing the flexibility of the elicitability framework beyond conditional means or moments. Finally, we apply the method to a non-stationary Aiyagari--Bewley--Huggett economic growth model with endogenous interest rates, illustrating its applicability to complex mean-field games without closed-form solutions.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.14967
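    A small Python illustration of the elicitability principle behind item 2 (not the paper's algorithm): the pinball loss is an elicitable score for the tau-quantile, so minimizing its sample average recovers the quantile; the same logic lets a network be trained on a path-wise loss instead of nested Monte Carlo.
# Illustrative sketch: the pinball loss elicits the tau-quantile, so the empirical
# score minimizer coincides with the sample quantile.
import numpy as np

def pinball(pred, y, tau):
    """Average pinball (quantile) score of a constant prediction."""
    u = y - pred
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

rng = np.random.default_rng(1)
y = rng.normal(size=50_000)          # stand-in samples of the quantity of interest
tau = 0.75

grid = np.linspace(-2, 2, 801)
scores = [pinball(g, y, tau) for g in grid]
argmin = grid[int(np.argmin(scores))]

print("score-minimizing prediction:", round(argmin, 3))
print("empirical 75% quantile     :", round(float(np.quantile(y, tau)), 3))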
  3. By: Safiye Turgay; Serkan Erdoğan; Željko Stević; Orhan Emre Elma; Tevfik Eren; Zhiyuan Wang; Mahmut Baydaş
    Abstract: In the face of increasing financial uncertainty and market complexity, this study presents a novel risk-aware financial forecasting framework that integrates advanced machine learning techniques with intuitionistic fuzzy multi-criteria decision-making (MCDM). Tailored to the BIST 100 index and validated through a case study of a major defense company in Türkiye, the framework fuses structured financial data, unstructured text data, and macroeconomic indicators to enhance predictive accuracy and robustness. It incorporates a hybrid suite of models, including extreme gradient boosting (XGBoost), a long short-term memory (LSTM) network, and a graph neural network (GNN), to deliver probabilistic forecasts with quantified uncertainty. The empirical results demonstrate high forecasting accuracy, with a net profit mean absolute percentage error (MAPE) of 3.03% and narrow 95% confidence intervals for key financial indicators. The risk-aware analysis indicates a favorable risk-return profile, with a Sharpe ratio of 1.25 and a higher Sortino ratio of 1.80, suggesting relatively low downside volatility and robust performance under market fluctuations. Sensitivity analysis shows that the key financial indicator predictions are highly sensitive to variations in inflation, interest rates, sentiment, and exchange rates. Additionally, using an intuitionistic fuzzy MCDM approach, combining entropy weighting, evaluation based on distance from the average solution (EDAS), and the measurement of alternatives and ranking according to compromise solution (MARCOS) methods, the tabular data learning network (TabNet) outperforms the other models and is identified as the most suitable candidate for deployment. Overall, the findings of this work highlight the importance of integrating advanced machine learning, risk quantification, and fuzzy MCDM methodologies in financial forecasting, particularly in emerging markets.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.17936
  4. By: Abraham Itzhak Weinberg
    Abstract: Financial market prediction is a challenging application of machine learning, where even small improvements in directional accuracy can yield substantial value. Most models struggle to exceed 55–57% accuracy due to high noise, non-stationarity, and market efficiency. We introduce a hybrid ensemble framework combining quantum sentiment analysis, Decision Transformer architecture, and strategic model selection, achieving 60.14% directional accuracy on S&P 500 prediction, a 3.10% improvement over individual models. Our framework addresses three limitations of prior approaches. First, architecture diversity dominates dataset diversity: combining different learning algorithms (LSTM, Decision Transformer, XGBoost, Random Forest, Logistic Regression) on the same data outperforms training identical architectures on multiple datasets (60.14% vs. 52.80%), confirmed by correlation analysis (r > 0.6 among same-architecture models). Second, a 4-qubit variational quantum circuit enhances sentiment analysis, providing +0.8% to +1.5% gains per model. Third, smart filtering excludes weak predictors (accuracy
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.15738
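    A stripped-down Python sketch of the ensembling step in item 4, with synthetic data standing in for market features and an illustrative accuracy cut-off: architecturally different classifiers are trained on the same data, validation-weak models are filtered out, and the survivors are majority-voted.
# Minimal sketch of heterogeneous-architecture ensembling with smart filtering
# (not the paper's full pipeline; data and threshold are placeholders).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

models = {
    "logit": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=300, random_state=0),
    "gbt": GradientBoostingClassifier(random_state=0),
}

THRESHOLD = 0.52  # hypothetical cut-off for "weak" predictors
kept = []
for name, m in models.items():
    m.fit(X_tr, y_tr)
    acc = (m.predict(X_val) == y_val).mean()
    print(f"{name}: validation accuracy {acc:.3f}")
    if acc >= THRESHOLD:
        kept.append(m)

votes = np.mean([m.predict(X_te) for m in kept], axis=0)   # majority vote
ensemble_pred = (votes >= 0.5).astype(int)
print("ensemble test accuracy:", (ensemble_pred == y_te).mean())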
  5. By: Filippo Gusella; Eugenio Vicario
    Abstract: Results in the Heterogeneous Agent Model (HAM) literature determine the proportion of fundamentalists and trend followers in the financial market. This proportion varies according to the periods analyzed. In this paper, we use a large language model (LLM) to construct a generative agent (GA) that determines the probability of adopting one of the two strategies based on current information. The probabilities of strategy adoption are compared with those in the HAM literature for the S&P 500 index between 1990 and 2020. Our findings suggest that the resulting artificial intelligence (AI) expectations align with those reported in the HAM literature. At the same time, extending the analysis to artificial market data helps us to filter the decision-making process of the AI agent. In the artificial market, results confirm the heterogeneity in expectations but reveal systematic asymmetry toward the fundamentalist behavior.
    Keywords: Heterogeneous Expectations, Large Language Model, Generative Agent, Fundamentalists, Trend Followers
    JEL: E30 E70 D84
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:frz:wpaper:wp2025_18.rdf
  6. By: Yan Liu; Ye Luo; Zigan Wang; Xiaowei Zhang
    Abstract: Machine learning is central to empirical asset pricing, but portfolio construction still relies on point predictions and largely ignores asset-specific estimation uncertainty. We propose a simple change: sort assets using uncertainty-adjusted prediction bounds instead of point predictions alone. Across a broad set of ML models and a U.S. equity panel, this approach improves portfolio performance relative to point-prediction sorting. These gains persist even when bounds are built from partial or misspecified uncertainty information. They arise mainly from reduced volatility and are strongest for flexible machine learning models. Identification and robustness exercises show that these improvements are driven by asset-level rather than time or aggregate predictive uncertainty.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.00593
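    A toy Python sketch of the sorting rule in item 6, assuming per-asset point predictions and uncertainty estimates are already available from some ML model: assets are ranked by an uncertainty-adjusted lower bound rather than by the point prediction alone.
# Toy sketch of uncertainty-adjusted sorting; mu_hat and sigma_hat are placeholders
# for model outputs, and k is an illustrative bound width.
import numpy as np

rng = np.random.default_rng(2)
n_assets = 500
mu_hat = rng.normal(0.01, 0.02, n_assets)       # predicted returns
sigma_hat = rng.uniform(0.005, 0.05, n_assets)  # asset-specific predictive std

k = 1.0
lower_bound = mu_hat - k * sigma_hat            # uncertainty-adjusted score

def decile_portfolios(score):
    """Return index arrays for the bottom and top deciles of a ranking score."""
    order = np.argsort(score)
    d = n_assets // 10
    return order[:d], order[-d:]

short_pt, long_pt = decile_portfolios(mu_hat)        # point-prediction sorting
short_ub, long_ub = decile_portfolios(lower_bound)   # uncertainty-adjusted sorting

overlap = len(set(long_pt) & set(long_ub)) / len(long_pt)
print(f"overlap of long legs under the two sorts: {overlap:.0%}")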
  7. By: Annalivia Polselli
    Abstract: The double machine learning (DML) method combines the predictive power of machine learning with statistical estimation to conduct inference about the structural parameter of interest. This paper presents the R package `xtdml`, which implements DML methods for partially linear panel regression models with low-dimensional fixed effects, high-dimensional confounding variables, proposed by Clarke and Polselli (2025). The package provides functionalities to: (a) learn nuisance functions with machine learning algorithms from the `mlr3` ecosystem, (b) handle unobserved individual heterogeneity choosing among first-difference transformation, within-group transformation, and correlated random effects, (c) transform the covariates with min-max normalization and polynomial expansion to improve learning performance. We showcase the use of `xtdml` with both simulated and real longitudinal data.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.15965
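    An illustrative Python sketch of the estimator underlying item 7 (this is not the xtdml R interface): a within-group transformation removes the fixed effects, then partialling-out double machine learning with cross-fitting estimates the structural coefficient.
# Sketch of panel DML on simulated data: y = theta*d + g(x) + alpha_i + u,
# fixed effects removed by demeaning, nuisances learned by random forests.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
N, T, p, theta = 200, 10, 5, 0.5
alpha = np.repeat(rng.normal(size=N), T)                  # individual fixed effects
x = rng.normal(size=(N * T, p))
d = np.sin(x[:, 0]) + 0.5 * alpha + rng.normal(size=N * T)
y = theta * d + np.cos(x[:, 1]) + alpha + rng.normal(size=N * T)
ids = np.repeat(np.arange(N), T)

def within(v):
    """Within-group transformation: subtract each individual's time mean."""
    means = np.zeros(N)
    np.add.at(means, ids, v)
    return v - (means / T)[ids]

y_w, d_w = within(y), within(d)
x_w = np.column_stack([within(x[:, j]) for j in range(p)])

# Cross-fitted residuals of outcome and treatment on covariates.
res_y = np.zeros_like(y_w)
res_d = np.zeros_like(d_w)
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(x_w):
    res_y[te] = y_w[te] - RandomForestRegressor(random_state=0).fit(x_w[tr], y_w[tr]).predict(x_w[te])
    res_d[te] = d_w[te] - RandomForestRegressor(random_state=0).fit(x_w[tr], d_w[tr]).predict(x_w[te])

theta_hat = (res_d @ res_y) / (res_d @ res_d)
print("theta_hat:", round(float(theta_hat), 3), "(true 0.5)")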
  8. By: Alina Voronina; Oleksandr Romanko; Ruiwen Cao; Roy H. Kwon; Rafael Mendoza-Arriaga
    Abstract: This paper investigates how Large Language Models (LLMs) from leading providers (OpenAI, Google, Anthropic, DeepSeek, and xAI) can be applied to quantitative sector-based portfolio construction. We use LLMs to identify investable universes of stocks within S&P 500 sector indices and evaluate how their selections perform when combined with classical portfolio optimization methods. Each model was prompted to select and weight 20 stocks per sector, and the resulting portfolios were compared with their respective sector indices across two distinct out-of-sample periods: a stable market phase (January-March 2025) and a volatile phase (April-June 2025). Our results reveal a strong temporal dependence in LLM portfolio performance. During stable market conditions, LLM-weighted portfolios frequently outperformed sector indices on both cumulative return and risk-adjusted (Sharpe ratio) measures. However, during the volatile period, many LLM portfolios underperformed, suggesting that current models may struggle to adapt to regime shifts or high-volatility environments underrepresented in their training data. Importantly, when LLM-based stock selection is combined with traditional optimization techniques, portfolio outcomes improve in both performance and consistency. This study contributes one of the first multi-model, cross-provider evaluations of generative AI algorithms in investment management. It highlights that while LLMs can effectively complement quantitative finance by enhancing stock selection and interpretability, their reliability remains market-dependent. The findings underscore the potential of hybrid AI-quantitative frameworks, integrating LLM reasoning with established optimization techniques, to produce more robust and adaptive investment strategies.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.24526
  9. By: Jiawei Du; Yi Hong
    Abstract: This study focuses on forecasting the ultimate forward rate (UFR) and developing a UFR-based bond yield prediction model using data from Chinese treasury bonds and macroeconomic variables spanning from December 2009 to December 2024. The de Kort-Vellekoop-type methodology is applied to estimate the UFR, incorporating the optimal turning parameter determination technique proposed in this study, which helps mitigate anomalous fluctuations. In addition, both linear and nonlinear machine learning techniques are employed to forecast the UFR and ultra-long-term bond yields. The results indicate that nonlinear machine learning models outperform their linear counterparts in forecasting accuracy. Incorporating macroeconomic variables, particularly price index-related variables, significantly improves the accuracy of predictions. Finally, a novel UFR-based bond yield forecasting model is developed, demonstrating superior performance across different bond maturities.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.00011
  10. By: Zhenyu Gao; Wenxi Jiang; Yutong Yan
    Abstract: We develop a statistical test to detect lookahead bias in economic forecasts generated by large language models (LLMs). Using state-of-the-art pre-training data detection techniques, we estimate the likelihood that a given prompt appeared in an LLM's training corpus, a statistic we term Lookahead Propensity (LAP). We formally show that a positive correlation between LAP and forecast accuracy indicates the presence and magnitude of lookahead bias, and apply the test to two forecasting tasks: news headlines predicting stock returns and earnings call transcripts predicting capital expenditures. Our test provides a cost-efficient, diagnostic tool for assessing the validity and reliability of LLM-generated forecasts.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.23847
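    A minimal Python sketch of the final step of the test in item 10, with placeholder inputs: given a Lookahead Propensity score per prompt and an accuracy measure per forecast, a significantly positive correlation is the diagnostic for lookahead bias.
# Sketch of the correlation test on simulated inputs: LAP scores and per-forecast
# accuracy are placeholders, with a mild built-in dependence to mimic bias.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n = 1_000
lap = rng.uniform(0, 1, n)                       # placeholder LAP scores
accuracy = (rng.uniform(0, 1, n) < 0.5 + 0.2 * (lap - 0.5)).astype(float)

r, p_two_sided = pearsonr(lap, accuracy)
p_one_sided = p_two_sided / 2 if r > 0 else 1 - p_two_sided / 2
print(f"corr(LAP, accuracy) = {r:.3f}, one-sided p-value = {p_one_sided:.4f}")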
  11. By: Haibo Wang
    Abstract: The growing economic influence of the BRICS nations requires risk models that capture complex, long-term dynamics. This paper introduces the Bank Risk Interlinkage with Dynamic Graph and Event Simulations (BRIDGES) framework, which analyzes systemic risk based on the level of information complexity (zero-order, first-order, and second-order). BRIDGES utilizes the Dynamic Time Warping (DTW) distance to construct a dynamic network for 551 BRICS banks based on their strategic similarity, using zero-order information such as annual balance sheet data from 2008 to 2024. It then employs first-order information, including trends in risk ratios, to detect shifts in banks' behavior. A Temporal Graph Neural Network (TGNN), as the core of BRIDGES, is deployed to learn network evolutions and detect second-order information, such as anomalous changes in the structural relationships of the bank network. To measure the impact of anomalous changes on network stability, BRIDGES performs Agent-Based Model (ABM) simulations to assess the banking system's resilience to internal financial failure and external geopolitical shocks at the individual country level and across BRICS nations. Simulation results show that the failure of the largest institutions causes more systemic damage than the failure of the financially vulnerable or dynamically anomalous ones, driven by powerful panic effects. Compared to this "too big to fail" scenario, a geopolitical shock with correlated country-wide propagation causes more destructive systemic damage, leading to a near-total systemic collapse. It suggests that the primary threats to BRICS financial stability are second-order panic and large-scale geopolitical shocks, which traditional risk analysis models might not detect.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.20515
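    A Python sketch of the zero-order network step in item 11, using synthetic trajectories: pairwise dynamic time warping distances between banks' annual balance-sheet series define a similarity network by thresholding.
# Sketch of the DTW-based similarity network (data are synthetic random walks).
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW distance with absolute-difference cost."""
    na, nb = len(a), len(b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb]

rng = np.random.default_rng(5)
n_banks, n_years = 20, 17            # e.g., annual data 2008-2024
series = np.cumsum(rng.normal(size=(n_banks, n_years)), axis=1)

dist = np.zeros((n_banks, n_banks))
for i in range(n_banks):
    for j in range(i + 1, n_banks):
        dist[i, j] = dist[j, i] = dtw(series[i], series[j])

threshold = np.quantile(dist[dist > 0], 0.2)   # link the 20% most similar pairs
adjacency = (dist < threshold) & (dist > 0)
print("network edges:", int(adjacency.sum() // 2))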
  12. By: Yijia Chen
    Abstract: The integration of Deep Reinforcement Learning (DRL) and Evolutionary Computation (EC) is frequently hypothesized to be the "Holy Grail" of algorithmic trading, promising systems that adapt autonomously to non-stationary market regimes. This paper presents a rigorous post-mortem analysis of "Galaxy Empire, " a hybrid framework coupling LSTM/Transformer-based perception with a genetic "Time-is-Life" survival mechanism. Deploying a population of 500 autonomous agents in a high-frequency cryptocurrency environment, we observed a catastrophic divergence between training metrics (Validation APY $>300\%$) and live performance (Capital Decay $>70\%$). We deconstruct this failure through a multi-disciplinary lens, identifying three critical failure modes: the overfitting of \textit{Aleatoric Uncertainty} in low-entropy time-series, the \textit{Survivor Bias} inherent in evolutionary selection under high variance, and the mathematical impossibility of overcoming microstructure friction without order-flow data. Our findings provide empirical evidence that increasing model complexity in the absence of information asymmetry exacerbates systemic fragility.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.15732
  13. By: Mallory, Mindy; Peng, Rundong; Ma, Meilin; Wang, H. Holly
    Abstract: Price transmission has been studied extensively in agricultural economics through the lens of spatial and vertical price relationships. Classical time-series econometric techniques suffer from the “curse of dimensionality” and are applied almost exclusively to small sets of price series — either prices of one commodity in a few regions or prices of a few commodities in one region. However, an agrifood supply chain usually contains several commodities (e.g., cattle and beef) and spans numerous regions. Failing to jointly examine multi-region, multi-commodity price relationships limits researchers’ ability to derive insights from increasingly high-dimensional price datasets of agrifood supply chains. We apply a machine-learning method – specifically, regularized regression – to augment the classical vector error correction model (VECM) and study large spatial-plus-vertical price systems. Leveraging weekly provincial-level data on the piglet-hog-pork supply chain in China, we uncover economically interesting changes in price relationships in the system before and after the outbreak of a major hog disease. To quantify price transmission in the large system, we rely on the spatial-plus-vertical price relationships identified by the regularized VECM to visualize comprehensive spatial and vertical price transmission of hypothetical shocks through joint impulse response functions. Price transmission shows considerable heterogeneity across regions and commodities, as the VECM outcomes imply, and displays different dynamics over time.
    Keywords: Production Economics
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ags:aaea25:361074
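    A stylized Python sketch of the regularized VECM idea in item 13 (not the authors' exact estimator), with placeholder price series: each equation of dp_t = Pi p_{t-1} + Gamma dp_{t-1} + e_t is fit with an L1 penalty so that a high-dimensional spatial-plus-vertical system remains estimable.
# Sketch of an L1-penalized VECM fit equation by equation on synthetic price levels.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
T, k = 300, 12                       # 300 weeks, 12 price series (regions x stages)
p = np.cumsum(rng.normal(size=(T, k)), axis=0)   # placeholder price levels

dp = np.diff(p, axis=0)              # first differences
Y = dp[1:]                           # dp_t
X = np.hstack([p[1:-1], dp[:-1]])    # regressors [p_{t-1}, dp_{t-1}]

coefs = np.zeros((k, 2 * k))
for eq in range(k):                  # one penalized regression per equation
    model = Lasso(alpha=0.1, max_iter=10_000).fit(X, Y[:, eq])
    coefs[eq] = model.coef_

Pi, Gamma = coefs[:, :k], coefs[:, k:]
print("nonzero entries in Pi:", int((np.abs(Pi) > 1e-8).sum()), "of", k * k)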
  14. By: P. Rajguru; I. R. Churchill; G. Graham
    Abstract: The Peter Principle posits that organizations promoting their best performers risk elevating employees to roles where their competence no longer translates, thereby degrading overall efficiency. We investigate when this dynamic emerges and how to mitigate it using a large-scale agent-based model (ABM) of a five-level hierarchy. Results show the Peter Principle is most pronounced under merit promotion when role requirements change substantially between levels; seniority-based and random promotion exhibit the weakest Peter effects. Two interventions mitigate the performance decline: merit-with-training is particularly effective when skill transfer is limited, and selective demotion restores agents whose 'true' peak performance is at lower levels.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.21467
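    A toy Python agent-based sketch of the mechanism in item 14, with illustrative parameters rather than the paper's: when competence in a new role is redrawn (role requirements change between levels), merit promotion removes the best performers from lower levels without guaranteeing competence above, which is the Peter Principle dynamic.
# Toy ABM: compare merit vs. random promotion in a five-level hierarchy when
# competence in the new role is redrawn (skill_transfer = 0). Parameters are made up.
import numpy as np

rng = np.random.default_rng(7)
LEVELS, PER_LEVEL, STEPS = 5, (200, 100, 50, 25, 10), 200

def simulate(rule, skill_transfer=0.0):
    org = [rng.uniform(0, 1, n) for n in PER_LEVEL]      # competence per level
    for _ in range(STEPS):
        for lvl in range(LEVELS - 1, 0, -1):
            leaver = rng.integers(len(org[lvl]))         # a vacancy opens
            pool = org[lvl - 1]
            pick = int(np.argmax(pool)) if rule == "merit" else rng.integers(len(pool))
            # Competence in the new role mixes old skill with a fresh draw.
            new_comp = skill_transfer * pool[pick] + (1 - skill_transfer) * rng.uniform()
            org[lvl][leaver] = new_comp
            pool[pick] = rng.uniform()                    # replacement hire below
    weights = np.arange(1, LEVELS + 1)                    # higher levels weigh more
    return sum(w * lvl.mean() for w, lvl in zip(weights, org)) / weights.sum()

for rule in ("merit", "random"):
    print(rule, "efficiency (no skill transfer):", round(simulate(rule), 3))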
  15. By: Hongshen Sun; Juanjuan Zhang
    Abstract: Large language models (LLMs) are increasingly used to simulate human behavior, but common practices to use LLM-generated data are inefficient. Treating an LLM's output ("model choice") as a single data point underutilizes the information inherent to the probabilistic nature of LLMs. This paper introduces and formalizes "model belief," a measure derived from an LLM's token-level probabilities that captures the model's belief distribution over choice alternatives in a single generation run. The authors prove that model belief is asymptotically equivalent to the mean of model choices (a non-trivial property) but forms a more statistically efficient estimator, with lower variance and a faster convergence rate. Analogous properties are shown to hold for smooth functions of model belief and model choice often used in downstream applications. The authors demonstrate the performance of model belief through a demand estimation study, where an LLM simulates consumer responses to different prices. In practical settings with limited numbers of runs, model belief explains and predicts ground-truth model choice better than model choice itself, and reduces the computation needed to reach sufficiently accurate estimates by roughly a factor of 20. The findings support using model belief as the default measure to extract more information from LLM-generated data.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.23184
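    A thought-experiment Python sketch of the estimator comparison in item 15 (no LLM is called, and the per-run belief is held fixed for simplicity): reading the probability vector directly removes the sampling noise that averaging discrete choices must overcome.
# Simplified simulation: a fixed, hypothetical belief distribution over three
# alternatives vs. the frequency of sampled "model choices" across repeated runs.
import numpy as np

rng = np.random.default_rng(8)
true_belief = np.array([0.5, 0.3, 0.2])   # hypothetical per-run belief distribution

def choice_estimator(n_runs):
    """Empirical frequency of sampled choices over n_runs generations."""
    draws = rng.choice(3, size=n_runs, p=true_belief)
    return np.bincount(draws, minlength=3) / n_runs

for n_runs in (10, 100, 1000):
    errs = [np.abs(choice_estimator(n_runs) - true_belief).sum() for _ in range(500)]
    print(f"model choice, {n_runs:>4} runs: mean L1 error {np.mean(errs):.3f}")
print("model belief,    1 run : L1 error 0.000 (reads the probabilities directly)")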
  16. By: Simon Paquette-Greenbaum; Jiangbo Yu
    Abstract: Investment portfolio optimization is a task conducted in all major financial institutions. The Cardinality Constrained Mean-Variance Portfolio Optimization (CCPO) problem formulation is ubiquitous for portfolio optimization. This type of portfolio optimization is a mixed-integer quadratic programming (MIQP) problem that is intractable for exact solvers, so heuristic algorithms are used to find approximate portfolio solutions. CCPO entails laborious and complex workflows and extensive heuristic algorithm development, since pooling solutions from multiple heuristics yields improved efficient frontiers; hence, a common approach is to develop many heuristic algorithms. Agentic frameworks emerge as promising candidates for many combinatorial optimization problems: they have been shown to automate large workflows efficiently and to excel at algorithm development, sometimes surpassing human-level performance. This study implements a novel agentic framework for the CCPO and explores several concrete architectures. In benchmark problems, the implemented agentic framework matches state-of-the-art algorithms. Furthermore, complex workflows and algorithm development efforts are alleviated, while in the worst case a lower but acceptable error is reported.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.00770
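    A minimal Python illustration of the problem class in item 16 (not the agentic framework itself): a random-restart heuristic for a cardinality-constrained minimum-variance portfolio, where each candidate subset of K assets receives the closed-form minimum-variance weights w = Sigma^{-1} 1 / (1' Sigma^{-1} 1), with shorting allowed.
# Random-restart heuristic for a cardinality-constrained min-variance portfolio
# on a synthetic covariance matrix (illustrative only).
import numpy as np

rng = np.random.default_rng(9)
n_assets, K, n_trials = 50, 10, 2000

A = rng.normal(size=(n_assets, n_assets))
Sigma = A @ A.T / n_assets + 0.05 * np.eye(n_assets)   # random positive-definite cov

best_var, best_subset = np.inf, None
for _ in range(n_trials):
    idx = rng.choice(n_assets, size=K, replace=False)   # candidate subset
    S = Sigma[np.ix_(idx, idx)]
    ones = np.ones(K)
    w = np.linalg.solve(S, ones)
    w /= w.sum()                                         # budget constraint
    var = w @ S @ w
    if var < best_var:
        best_var, best_subset = var, np.sort(idx)

print("best subset:", best_subset)
print("portfolio variance:", round(float(best_var), 4))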
  17. By: Hangcheng Zhao; Ron Berman
    Abstract: Large language models (LLMs) change how consumers acquire information online; their bots also crawl news publishers' websites for training data and to answer consumer queries; and they provide tools that can lower the cost of content creation. These changes lead to predictions of adverse impact on news publishers in the form of lowered consumer demand, reduced demand for newsroom employees, and an increase in news "slop." Consequently, some publishers strategically responded by blocking LLM access to their websites using the robots.txt file standard. Using high-frequency granular data, we document four effects related to the predicted shifts in news publishing following the introduction of generative AI (GenAI). First, we find a consistent and moderate decline in traffic to news publishers occurring after August 2024. Second, using a difference-in-differences approach, we find that blocking GenAI bots can have adverse effects on large publishers by reducing total website traffic by 23% and real consumer traffic by 14% compared to not blocking. Third, on the hiring side, we do not find evidence that LLMs are replacing editorial or content-production jobs yet. The share of new editorial and content-production job listings increases over time. Fourth, regarding content production, we find no evidence that large publishers increased text volume; instead, they significantly increased rich content and use more advertising and targeting technologies. Together, these findings provide early evidence of some unforeseen impacts of the introduction of LLMs on news production and consumption.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.24968
  18. By: Jianping Mei; Michael Moses; Jan Waelty; Yucheng Yang
    Abstract: We study how deep learning can improve valuation in the art market by incorporating the visual content of artworks into predictive models. Using a large repeated-sales dataset from major auction houses, we benchmark classical hedonic regressions and tree-based methods against modern deep architectures, including multi-modal models that fuse tabular and image data. We find that while artist identity and prior transaction history dominate overall predictive power, visual embeddings provide a distinct and economically meaningful contribution for fresh-to-market works where historical anchors are absent. Interpretability analyses using Grad-CAM and embedding visualizations show that models attend to compositional and stylistic cues. Our findings demonstrate that multi-modal deep learning delivers significant value precisely when valuation is hardest, namely first-time sales, and thus offers new insights for both academic research and practice in art market valuation.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.23078
  19. By: Tim J. Boonen; Xinyue Fan; Zixiao Quan
    Abstract: Machine learning improves predictive accuracy in insurance pricing but exacerbates trade-offs between competing fairness criteria across different discrimination measures, challenging regulators and insurers to reconcile profitability with equitable outcomes. While existing fairness-aware models offer partial solutions under GLM and XGBoost estimation methods, they remain constrained by single-objective optimization, failing to holistically navigate a conflicting landscape of accuracy, group fairness, individual fairness, and counterfactual fairness. To address this, we propose a novel multi-objective optimization framework that jointly optimizes all four criteria via the Non-dominated Sorting Genetic Algorithm II (NSGA-II), generating a diverse Pareto front of trade-off solutions. We use a specific selection mechanism to extract a premium on this front. Our results show that XGBoost outperforms GLM in accuracy but amplifies fairness disparities; the Orthogonal model excels in group fairness, while Synthetic Control leads in individual and counterfactual fairness. Our method consistently achieves a balanced compromise, outperforming single-model approaches.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.24747
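    A Python illustration of the Pareto machinery behind item 19, with synthetic scores: non-dominated sorting over accuracy and three fairness criteria, all oriented so that larger is better; NSGA-II iterates this sorting together with crowding-distance selection to evolve the front.
# Extract the non-dominated (Pareto) set from synthetic candidate scores.
import numpy as np

rng = np.random.default_rng(10)
# Rows = candidate pricing models; columns = (accuracy, group, individual,
# counterfactual fairness), all framed as larger-is-better.
scores = rng.uniform(size=(200, 4))

def dominates(a, b):
    """a dominates b if it is at least as good everywhere and strictly better somewhere."""
    return np.all(a >= b) and np.any(a > b)

pareto = [
    i for i in range(len(scores))
    if not any(dominates(scores[j], scores[i]) for j in range(len(scores)) if j != i)
]
print("Pareto-efficient candidates:", len(pareto), "of", len(scores))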
  20. By: von Zahn, Moritz; Liebich, Lena; Jussupow, Ekaterina; Hinz, Oliver; Bauer, Kevin
    Abstract: The use of explainable AI (XAI) methods to render the prediction logic of black-box AI interpretable to humans is becoming more popular and more widely used in practice, among other things due to regulatory requirements such as the EU AI Act. Previous research on human-XAI interaction has shown that explainability may help mitigate black-box problems but also unintentionally alter individuals' cognitive processes, e.g., distorting their reasoning and evoking informational overload. While empirical evidence on the impact of XAI on how individuals "think" is growing, it has been largely overlooked whether XAI can even affect individuals' "thinking about thinking", i.e., metacognition, which theory conceptualizes to monitor and control previously-studied thinking processes. Aiming to take a first step in filling this gap, we investigate whether XAI affects confidence calibrations, and, thereby, decisions to transfer decision-making responsibility to AI, on the meta-level of cognition. We conduct two incentivized experiments in which human experts repeatedly perform prediction tasks, with the option to delegate each task to an AI. We exogenously vary whether participants initially receive explanations that reveal the AI's underlying prediction logic. We find that XAI improves individuals' metaknowledge (the alignment between confidence and actual performance) and partially enhances confidence sensitivity (the variation of confidence with task performance). These metacognitive shifts causally increase both the frequency and effectiveness of human-to-AI delegation decisions. Interestingly, these effects only occur when explanations reveal to individuals that AI's logic diverges from their own, leading to a systematic reduction in confidence. Our findings suggest that XAI can correct overconfidence at the potential cost of lowering confidence even when individuals perform well. Both effects influence decisions to cede responsibility to AI, highlighting metacognition as a central mechanism in human-XAI collaboration.
    Keywords: Explainable Artificial Intelligence, Metacognition, Metaknowledge, Delegation, Machine Learning, Human-AI Collaboration
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:safewp:334511
  21. By: Osadchiy, Maksim
    Abstract: This paper extends the classical Vasicek credit risk model by introducing a comprehensive multi-factor framework that simultaneously incorporates key sources of portfolio heterogeneity – namely, variations in asset weights, recovery rates, default probabilities, and asset correlations. By modeling the complex interactions among these factors, our approach provides a more realistic and nuanced assessment of loss distributions and risk measures. Monte Carlo simulations demonstrate that the extended Vasicek-style model yields accurate approximations of portfolio Value at Risk (VaR) across portfolios with diverse recovery profiles and moderate concentration levels. This advancement improves the precision of credit risk measurement, addresses current regulatory gaps, and offers a solid foundation for more sophisticated risk management of heterogeneous credit portfolios.
    Keywords: Heterogeneous Credit Portfolios; Granularity Adjustment; Vasicek Model; Value at Risk; Monte Carlo Simulation
    JEL: C63 G17 G21 G28 G32 G33
    Date: 2025–11–27
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:127032
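    A stylized one-factor Monte Carlo in the Vasicek spirit of item 21, with made-up parameters: heterogeneous exposure weights, default probabilities, recovery rates, and asset correlations feed a simulated loss distribution from which Value at Risk is read off.
# One-factor Monte Carlo of portfolio credit losses with heterogeneous loans.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
n_loans, n_sims = 500, 20_000

w = rng.dirichlet(np.ones(n_loans))            # exposure weights summing to 1
pd_i = rng.uniform(0.005, 0.05, n_loans)       # default probabilities
lgd_i = 1 - rng.uniform(0.2, 0.6, n_loans)     # loss given default = 1 - recovery
rho_i = rng.uniform(0.05, 0.30, n_loans)       # asset correlations
threshold = norm.ppf(pd_i)                     # default threshold per loan

Z = rng.standard_normal(n_sims)                           # systematic factor
eps = rng.standard_normal((n_sims, n_loans))              # idiosyncratic shocks
assets = np.sqrt(rho_i) * Z[:, None] + np.sqrt(1 - rho_i) * eps
defaults = assets < threshold

losses = defaults @ (w * lgd_i)                # portfolio loss per scenario
var_999 = np.quantile(losses, 0.999)
print(f"expected loss: {losses.mean():.4f}, 99.9% VaR: {var_999:.4f}")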
  22. By: Andrea Caravaggio; Silvia Leoni
    Abstract: The COVID-19 pandemic has underscored the need for modeling tools that account for the spatial and institutional heterogeneity underlying real-world epidemic dynamics. We develop a spatially structured agent-based model (ABM) in which decentralized health authorities (HAs) allocate costly treatments under local budget constraints to manage the spread of an infectious disease. Individuals are distributed across a grid of locations, with contagion governed by discrete-time Susceptible-Infected-Recovered (SIR) dynamics and spatial spillovers through local interactions. At each time step, HAs choose treatment intensity endogenously based on local infection levels, available resources, and pricing conditions. We analyze how key factors—such as treatment efficacy, pricing schemes, and initial outbreak distribution—shape both local and aggregate outcomes. In addition to a benchmark case with homogeneous pricing, we explore a parsimonious pricing scheme where prices vary across cells. Analytical results identify the threshold conditions for disease eradication, while simulations show how decentralized decisions and spatial feedback can generate persistent inequalities in infection and treatment. Our findings highlight the importance of integrating spatial structure, economic constraints, and pricing design in epidemic policy modeling.
    Keywords: Agent-based modeling; health policy; infectious disease control; SIR model; treatment pricing.
    JEL: C63 H51 I18
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:frz:wpaper:wp2025_20.rdf
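    A highly simplified Python sketch of the ingredients in item 22, with illustrative parameters: discrete-time SIR dynamics on a grid of locations with spatial spillovers, where each local health authority spends a limited budget on treatment that raises the local recovery rate.
# Simplified spatial SIR with budget-constrained treatment (all parameters made up).
import numpy as np

rng = np.random.default_rng(12)
GRID, STEPS = 10, 60
beta, gamma, spill = 0.35, 0.10, 0.05
price, budget, efficacy = 1.0, 5.0, 0.15

N = np.full((GRID, GRID), 1000.0)
I = np.zeros((GRID, GRID)); I[0, 0] = 50.0      # initial outbreak in one cell
S = N - I
R = np.zeros((GRID, GRID))
budgets = np.full((GRID, GRID), budget)

def neighbor_mean(M):
    """Average infections in the four von Neumann neighbors (zero at the border)."""
    out = np.zeros_like(M)
    out[1:, :] += M[:-1, :]; out[:-1, :] += M[1:, :]
    out[:, 1:] += M[:, :-1]; out[:, :-1] += M[:, 1:]
    return out / 4.0

for _ in range(STEPS):
    pressure = (I + spill * neighbor_mean(I)) / N
    new_inf = beta * S * pressure
    # Each authority spends in proportion to local infections, capped by its budget.
    spend = np.minimum(budgets, price * I / 100.0)
    budgets -= spend
    recovery = (gamma + efficacy * spend / (spend + 1.0)) * I
    S, I, R = S - new_inf, I + new_inf - recovery, R + recovery

print("final share recovered:", round(float(R.sum() / N.sum()), 3))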

This nep-cmp issue is ©2026 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.