nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2025–09–15
nine papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Too Noisy to Collude? Algorithmic Collusion Under Laplacian Noise By Niuniu Zhang
  2. An Economy of AI Agents By Gillian K. Hadfield; Andrew Koh
  3. Bias-Adjusted LLM Agents for Human-Like Decision-Making via Behavioral Economics By Ayato Kitadai; Yusuke Fukasawa; Nariaki Nishino
  4. Are Businesses Scaling Back Hiring Due to AI? By Jaison R. Abel; Richard Deitz; Natalia Emanuel; Benjamin Hyman; Nick Montalbano
  5. Beyond Code: The Multidimensional Impacts of Large Language Models in Software Development By Sardar Bonabi; Sarah Bana; Vijay Gurbaxani; Tingting Nian
  6. Artificial Intelligence in the Office and the Factory: Evidence from Administrative Software Registry Data By Gustavo de Souza
  7. Integrating Large Language Models in Financial Investments and Market Analysis: A Survey By Sedigheh Mahdavi; Jiating (Kristin) Chen; Pradeep Kumar Joshi; Lina Huertas Guativa; Upmanyu Singh
  8. FinAI-BERT: A Transformer-Based Model for Sentence-Level Detection of AI Disclosures in Financial Reports By Muhammad Bilal Zafar
  9. Artificial or Human Intelligence? By Eric Gao

  1. By: Niuniu Zhang
    Abstract: The rise of autonomous pricing systems has sparked growing concern over algorithmic collusion in markets from retail to housing. This paper examines controlled information quality as an ex ante policy lever: by reducing the fidelity of data that pricing algorithms draw on, regulators can frustrate collusion before supracompetitive prices emerge. We show, first, that information quality is the central driver of competitive outcomes, shaping prices, profits, and consumer welfare. Second, we demonstrate that collusion can be slowed or destabilized by injecting carefully calibrated noise into pooled market data, yielding a feasibility region where intervention disrupts cartels without undermining legitimate pricing. Together, these results highlight information control as a lightweight yet practical lever to blunt digital collusion at its source.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.02800
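    Sketch: a minimal, hypothetical illustration (not from the paper) of the noise-injection lever described above, using numpy's Laplace sampler; the function name, scale value, and example prices are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(0)

      def add_laplace_noise(pooled_prices, scale):
          # Return a noisy copy of pooled market data. A larger `scale`
          # (the Laplace diversity parameter) means lower information
          # quality for the pricing algorithms that consume the feed.
          noise = rng.laplace(loc=0.0, scale=scale, size=pooled_prices.shape)
          return pooled_prices + noise

      # Example: degrade a pooled price signal before sharing it with sellers.
      true_prices = np.array([10.0, 10.5, 9.8, 10.2])
      print(add_laplace_noise(true_prices, scale=0.5))

    Calibrating `scale` corresponds to choosing a point inside the feasibility region the paper describes: enough noise to destabilize collusion, but not so much that legitimate pricing is undermined.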
  2. By: Gillian K. Hadfield; Andrew Koh
    Abstract: In the coming decade, artificially intelligent agents with the ability to plan and execute complex tasks over long time horizons with little direct oversight from humans may be deployed across the economy. This chapter surveys recent developments and highlights open questions for economists around how AI agents might interact with humans and with each other, shape markets and organizations, and what institutions might be required for well-functioning markets.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.01063
  3. By: Ayato Kitadai; Yusuke Fukasawa; Nariaki Nishino
    Abstract: Large language models (LLMs) are increasingly used to simulate human decision-making, but their intrinsic biases often diverge from real human behavior--limiting their ability to reflect population-level diversity. We address this challenge with a persona-based approach that leverages individual-level behavioral data from behavioral economics to adjust model biases. Applying this method to the ultimatum game--a standard but difficult benchmark for LLMs--we observe improved alignment between simulated and empirical behavior, particularly on the responder side. While further refinement of trait representations is needed, our results demonstrate the promise of persona-conditioned LLMs for simulating human-like decision patterns at scale.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.18600
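    Sketch: a minimal, hypothetical illustration (not from the paper) of persona-conditioning for an ultimatum-game responder; the trait fields, prompt wording, and numbers are assumptions, and the resulting prompt could be sent to any chat-completion LLM.

      # Build a persona-conditioned prompt for an LLM acting as the responder
      # in an ultimatum game. Trait names and wording are illustrative.
      def build_persona_prompt(traits, offer, pie=10.0):
          return (
              f"You are a participant in an ultimatum game over a pie of {pie}.\n"
              f"Your inequity aversion is {traits['inequity_aversion']} (0-1) and "
              f"your risk tolerance is {traits['risk_tolerance']} (0-1).\n"
              f"The proposer offers you {offer}. Reply with exactly one word: "
              f"ACCEPT or REJECT."
          )

      persona = {"inequity_aversion": 0.8, "risk_tolerance": 0.3}
      print(build_persona_prompt(persona, offer=2.0))
      # The distribution of ACCEPT/REJECT replies across many such personas is
      # then compared with empirical responder behavior.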
  4. By: Jaison R. Abel; Richard Deitz; Natalia Emanuel; Benjamin Hyman; Nick Montalbano
    Abstract: The swift advancement of artificial intelligence (AI) has sparked significant concern that this new technology will replace jobs and stifle hiring. To explore the effects of AI on employment, our August regional business surveys asked firms about their adoption of AI and whether they had made any corresponding adjustments to their workforces. Businesses reported a notable increase in AI use over the past year, yet very few firms reported AI-induced layoffs. Indeed, for those already employed, our results indicate AI is more likely to result in retraining than job loss, similar to our findings from last year. That said, AI is influencing recruiting, with some firms scaling back hiring due to AI and some firms adding workers proficient in its use. Looking ahead, however, layoffs and reductions in hiring plans due to AI use are expected to increase, especially for workers with a college degree.
    Keywords: AI; artificial intelligence (AI); layoffs; retraining
    JEL: J0 R0
    Date: 2025–09–04
    URL: https://d.repec.org/n?u=RePEc:fip:fednls:101661
  5. By: Sardar Bonabi; Sarah Bana; Vijay Gurbaxani; Tingting Nian
    Abstract: Large language models (LLMs) are poised to significantly impact software development, especially in the Open-Source Software (OSS) sector. To understand this impact, we first outline the mechanisms through which LLMs may influence OSS through code development, collaborative knowledge transfer, and skill development. We then empirically examine how LLMs affect OSS developers' work in these three key areas. Leveraging a natural experiment from a temporary ChatGPT ban in Italy, we employ a Difference-in-Differences framework with two-way fixed effects to analyze data from all OSS developers on GitHub in three similar countries, Italy, France, and Portugal, totaling 88,022 users. We find that access to ChatGPT increases developer productivity by 6.4%, knowledge sharing by 9.6%, and skill acquisition by 8.4%. These benefits vary significantly by user experience level: novice developers primarily experience productivity gains, whereas more experienced developers benefit more from improved knowledge sharing and accelerated skill acquisition. In addition, we find that LLM-assisted learning is highly context-dependent, with the greatest benefits observed in technically complex, fragmented, or rapidly evolving contexts. We show that the productivity effects of LLMs extend beyond direct code generation to include enhanced collaborative learning and knowledge exchange among developers, dynamics that are essential for gaining a holistic understanding of LLMs' impact in OSS. Our findings offer critical managerial implications: strategically deploying LLMs can accelerate novice developers' onboarding and productivity, empower intermediate developers to foster knowledge sharing and collaboration, and support rapid skill acquisition, together enhancing long-term organizational productivity and agility.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.22704
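    Sketch: a schematic, hypothetical version (not the authors' code) of the two-way fixed effects difference-in-differences design described above, using the linearmodels package; the file name, variable names, and weekly panel structure are assumptions for illustration.

      import pandas as pd
      from linearmodels.panel import PanelOLS

      # Placeholder panel: one row per developer-week with columns
      # developer_id, week, commits, italy (0/1), ban_period (0/1).
      df = pd.read_csv("github_activity.csv")
      df["treated_post"] = df["italy"] * df["ban_period"]   # DiD interaction
      df = df.set_index(["developer_id", "week"])

      # Developer (entity) and week (time) fixed effects absorb level
      # differences; the coefficient on treated_post is the DiD estimate.
      model = PanelOLS.from_formula(
          "commits ~ treated_post + EntityEffects + TimeEffects", data=df
      )
      result = model.fit(cov_type="clustered", cluster_entity=True)
      print(result.summary)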
  6. By: Gustavo de Souza
    Abstract: I use administrative data on artificial intelligence (AI) software created in Brazil to study its effects on the labor market. Owing to a unique copyright system, Brazilian firms have registered their software with the government since the 1980s, creating a detailed record of nearly all commercial AI applications developed in the country. Drawing on this registry, I show that AI is widely used not only in administrative tasks but also in production settings, where it primarily supports process optimization and quality control. Using an instrument based on variation in software development costs, I find that AI affects administrative and production workers differently. Among office workers, AI reduces employment and wages, particularly for middle-wage earners. Among production workers, it increases employment of low-skilled and young workers operating machinery. These results suggest that AI displaces routine office tasks while making machines more productive and easier to operate, leading to a net increase in employment.
    Keywords: Artificial intelligence; automation; software; inequality
    JEL: J23 J24 F63
    Date: 2025–07–21
    URL: https://d.repec.org/n?u=RePEc:fip:fedhwp:101714
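    Sketch: a schematic, hypothetical instrumental-variables regression (not the authors' code) in the spirit of the design described above, where AI adoption is instrumented with a cost shifter; the data source, unit of observation, and variable names are assumptions for illustration.

      import pandas as pd
      from linearmodels.iv import IV2SLS

      df = pd.read_csv("firm_panel.csv")   # placeholder firm-level data

      # Two-stage least squares: ai_adoption is endogenous and instrumented
      # by dev_cost_shock (a shifter of software development costs).
      model = IV2SLS.from_formula(
          "log_employment ~ 1 + firm_size + [ai_adoption ~ dev_cost_shock]",
          data=df,
      )
      print(model.fit(cov_type="robust").summary)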
  7. By: Sedigheh Mahdavi; Jiating (Kristin) Chen; Pradeep Kumar Joshi; Lina Huertas Guativa; Upmanyu Singh
    Abstract: Large Language Models (LLMs) have been employed in financial decision making, enhancing analytical capabilities for investment strategies. Traditional investment strategies often rely on quantitative models, fundamental analysis, and technical indicators. LLMs, however, have introduced new capabilities to process and analyze large volumes of structured and unstructured data, extract meaningful insights, and enhance decision-making in real time. This survey provides a structured overview of recent research on LLMs in the financial domain, categorizing contributions into four main frameworks: LLM-based Frameworks and Pipelines, Hybrid Integration Methods, Fine-Tuning and Adaptation Approaches, and Agent-Based Architectures. It reviews applications in stock selection, risk assessment, sentiment analysis, trading, and financial forecasting, and highlights the capabilities, challenges, and potential directions of LLMs in financial markets.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2507.01990
  8. By: Muhammad Bilal Zafar
    Abstract: The proliferation of artificial intelligence (AI) in financial services has prompted growing demand for tools that can systematically detect AI-related disclosures in corporate filings. While prior approaches often rely on keyword expansion or document-level classification, they fall short in granularity, interpretability, and robustness. This study introduces FinAI-BERT, a domain-adapted transformer-based language model designed to classify AI-related content at the sentence level within financial texts. The model was fine-tuned on a manually curated and balanced dataset of 1,586 sentences drawn from 669 annual reports of U.S. banks (2015 to 2023). FinAI-BERT achieved near-perfect classification performance (accuracy of 99.37 percent, F1 score of 0.993), outperforming traditional baselines such as Logistic Regression, Naive Bayes, Random Forest, and XGBoost. Interpretability was ensured through SHAP-based token attribution, while bias analysis and robustness checks confirmed the model's stability across sentence lengths, adversarial inputs, and temporal samples. Theoretically, the study advances financial NLP by operationalizing fine-grained, theme-specific classification using transformer architectures. Practically, it offers a scalable, transparent solution for analysts, regulators, and scholars seeking to monitor the diffusion and framing of AI across financial institutions.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2507.01991
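    Sketch: a minimal, hypothetical fine-tuning setup (not the released FinAI-BERT model) for sentence-level detection of AI disclosures, using the Hugging Face transformers and datasets libraries; the base checkpoint, example sentences, labels, and hyperparameters are assumptions for illustration.

      from datasets import Dataset
      from transformers import (AutoModelForSequenceClassification,
                                AutoTokenizer, Trainer, TrainingArguments)

      sentences = ["We deploy machine learning models for credit scoring.",
                   "Net interest income rose 4 percent year over year."]
      labels = [1, 0]   # 1 = AI-related disclosure, 0 = other

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = AutoModelForSequenceClassification.from_pretrained(
          "bert-base-uncased", num_labels=2)

      # Tokenize once with fixed-length padding so the default collator works.
      ds = Dataset.from_dict({"text": sentences, "label": labels}).map(
          lambda b: tokenizer(b["text"], truncation=True,
                              padding="max_length", max_length=64),
          batched=True)

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="finai-bert-demo",
                                 num_train_epochs=1,
                                 per_device_train_batch_size=2),
          train_dataset=ds,
      )
      trainer.train()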
  9. By: Eric Gao
    Abstract: Artificial intelligence (AI) tools such as large language models (LLMs) are already altering student learning. Unlike previous technologies, LLMs can independently solve problems regardless of student understanding, yet they are not always accurate (due to hallucination) and face sharp performance cutoffs (due to emergence). Access to these tools significantly alters a student's incentives to learn, potentially decreasing the combined knowledge of humans and AI. Additionally, the marginal benefit of learning changes depending on which side of the AI frontier a human is on, creating a discontinuous gap between those who know more than AI and those who know less. This contrasts with downstream models of AI's impact on the labor force, which assume continuous ability. Finally, increasing the portion of assignments where AI cannot be used can counteract student mis-specification about AI accuracy, preventing underinvestment. A better understanding of how AI impacts learning and student incentives is crucial for educators to adapt to this new technology.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.02879

This nep-ain issue is ©2025 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.