nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2025–09–22
thirteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. From Digital Distrust to Codified Honesty: Experimental Evidence on Generative AI in Credence Goods Markets By Alexander Erlei
  2. News Customization with AI By Felix Chopra; Ingar K. Haaland; Fabian Roeben; Christopher Roth; Vanessa Sticher
  3. The social embeddedness of trust in AI: How existing trust relations to decision-makers and institutions influence trust in AI decision aids for public administration By Tamara Schnell; Ricarda Schmidt-Scheele
  4. The Potential Distributive Impact of AI-driven Labor Changes in Latin America By Ciaschi, Matias; Falcone, Guillermo; Garganta, Santiago; Gasparini, Leonardo; Bertín, Octavio; Ramírez-Leira, Lucía
  5. Digital Transformation and the Restructuring of Employment: Evidence from Chinese Listed Firms By Yubo Cheng
  6. Social Group Bias in AI Finance By Thomas R. Cook; Sophia Kazinnik
  7. Artificial intelligence and financial crises By Danielsson, Jon; Uthemann, Andreas
  8. Identifying economic narratives in large text corpora: An integrated approach using large language models By Schmidt, Tobias; Lange, Kai-Robin; Reccius, Matthias; Müller, Henrik; Roos, Michael W. M.; Jentsch, Carsten
  9. Finance-Grounded Optimization For Algorithmic Trading By Kasymkhan Khubiev; Mikhail Semenov; Irina Podlipnova
  10. The Formation of AI Capital in Higher Education: Enhancing Students' Academic Performance and Employment Rates By Drydakis, Nick
  11. Artificial Writing and Automated Detection By Brian Jabarian; Alex Imas
  12. AI Agents for Economic Research By Anton Korinek
  13. A Decision Theoretic Perspective on Artificial Superintelligence: Coping with Missing Data Problems in Prediction and Treatment Choice By Jeff Dominitz; Charles F. Manski

  1. By: Alexander Erlei
    Abstract: Generative AI is transforming the provision of expert services. This article uses a series of one-shot experiments to quantify the behavioral, welfare, and distributional consequences of large language models (LLMs) in AI-AI, Human-Human, Human-AI and Human-AI-Human expert markets. Using a credence goods framework in which experts have private information about the optimal service for consumers, we find that Human-Human markets generally achieve higher levels of efficiency than AI-AI and Human-AI markets through pro-social expert preferences and higher consumer trust. Notably, LLM experts still earn substantially higher surplus than human experts -- at the expense of consumer surplus -- suggesting adverse incentives that may spur the harmful deployment of LLMs. Concurrently, a majority of human experts choose to rely on LLM agents when given the opportunity in Human-AI-Human markets, especially if they have agency over the LLM's (social) objective function. Here, a large share of experts prioritize efficiency-loving preferences over pure self-interest. Disclosing these preferences to consumers induces strong efficiency gains by marginalizing self-interested LLM experts and human experts. Consequently, Human-AI-Human markets outperform Human-Human markets under transparency rules. With obfuscation, however, efficiency gains disappear, and adverse expert incentives remain. Our results shed light on the potential opportunities and risks of disseminating LLMs in the context of expert services and raise several regulatory challenges. On the one hand, LLMs can negatively affect human trust in the presence of information asymmetries and partially crowd out experts' other-regarding preferences through automation. On the other hand, LLMs allow experts to codify and communicate their objective function, which reduces information asymmetries and increases efficiency.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.06069
  2. By: Felix Chopra; Ingar K. Haaland; Fabian Roeben; Christopher Roth; Vanessa Sticher
    Abstract: News outlets compete for engagement rather than reader satisfaction, leading to persistent mismatches between consumer demand and the supply of news. We test whether offering people the opportunity to customize the news can address this mismatch by unbundling presentation from coverage. In our AI-powered news app, users can customize article characteristics, such as the complexity of the writing or the extent of opinion, while holding the underlying news event constant. Using rich news consumption data from large-scale field experiments, we uncover substantial heterogeneity in news preferences. While a significant fraction of users demand politically aligned news, the vast majority of users display a high and persistent demand for less opinionated and more fact-driven news. Customization also leads to a better match between the news consumed and stated preferences, increasing news satisfaction.
    Keywords: news consumption, customization, artificial intelligence, matching
    JEL: C93 D83 L82 P00
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_12121
  3. By: Tamara Schnell (Carl von Ossietzky Universität Oldenburg, Institute for Social Sciences, Working group “Organization and Innovation”, Oldenburg); Ricarda Schmidt-Scheele (Carl von Ossietzky Universität Oldenburg, Institute for Social Sciences, Working group “Organization and Innovation”, Oldenburg)
    Abstract: AI decision aids are increasingly adopted in public administration to support complex decisions traditionally carried out by employees of public authorities. While prior research has emphasized decision-makers’ trust in AI, less attention has been paid to stakeholders who are exposed to and affected by these emerging AI-supported decision-making processes and outcomes. In such contexts, it is not only the AI itself – its process, performance, and purpose – that is assessed for trustworthiness, but also existing constellations of decision-makers and institutions that govern decision-making. We argue that trust in AI is socially embedded. Drawing on sociological theories of trust, we propose a framework that conceptualizes trust in AI decision aids as shaped by existing trust relations with decision-makers and institutions involved in decision-making – the ‘shadow of the past’. To explore this, we examine a case study of an AI-augmented geographic information system (AI-GIS) developed to support spatial planning for onshore wind energy in the course of sustainable energy transition dynamics in Germany. Based on 38 interviews with stakeholders from seven groups involved in spatial planning and wind energy development, we analyze initial (mis)trust in the AI-GIS. Using a combination of qualitative comparative analysis (QCA) and qualitative content analysis, we identify four distinct configurations that condition stakeholders’ (mis)trust. Each reflects a unique interplay of interpersonal and institutional trust relations. The study offers a more nuanced understanding of trust in AI as a relational, context-dependent phenomenon, highlighting the relevance of institutions and existing trust relations for understanding and guiding AI adoption. It therefore directly contributes to the literature on sustainability transitions and their place-specific dynamics. AI systems are considered viable technical solutions for the transformation of energy, water, or food systems. Accordingly, trust in these AI systems needs to be understood as highly context-dependent: trust is developed and experienced within specific institutional settings, regulatory cultures, and histories of technology adoption. Hence, our paper directs attention to who trusts what AI, where, and under what institutional arrangements, and urges that this become a central question in the sustainability transitions literature.
    Keywords: Trust in AI, Social embeddedness of trust, Trust in institutions, Qualitative Comparative Analysis (QCA), Spatial planning, Onshore wind energy
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:aoe:wpaper:2502
  4. By: Ciaschi, Matias; Falcone, Guillermo; Garganta, Santiago; Gasparini, Leonardo; Bertín, Octavio; Ramírez-Leira, Lucía
    Abstract: This paper investigates the potential distributional consequences of artificial intelligence (AI) adoption in Latin American labor markets. Using harmonized household survey data from 14 countries, we combine four recently developed AI occupational exposure indices -- the AI Occupational Exposure Index (AIOE), the Complementarity-Adjusted AIOE (C-AIOE), the Generative AI Exposure Index (GBB), and the AI-Generated Occupational Exposure Index (GENOE) -- to analyze patterns across countries and worker groups. We validate these measures by comparing task profiles between Latin America and high-income economies using PIAAC data, and develop a contextual adjustment that incorporates informality, wage structures, and union coverage. Finally, we simulate first-order impacts of AI-induced displacement on earnings, poverty, and inequality. The results show substantial heterogeneity, with higher levels of AI-related risk among women, younger, more educated, and formal workers. Indices that account for task complementarities show flatter gradients across the income and education distribution. Simulations suggest that displacement effects may lead to only moderate increases in inequality and poverty in the absence of mitigating policies.
    JEL: O33 J21 D31
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:idb:brikps:14253
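
    The abstract does not detail the microsimulation, but the flavor of a first-order displacement exercise can be sketched in a few lines of Python. Everything below (the earnings distribution, the exposure index, and the displacement rule) is invented for illustration; only the logic mirrors the exercise described: displace workers with probability increasing in AI exposure, then recompute inequality.

import numpy as np

def gini(x: np.ndarray) -> float:
    """Gini coefficient of a non-negative earnings vector."""
    x = np.sort(x)
    n = len(x)
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, for x sorted ascending
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

rng = np.random.default_rng(0)
n = 10_000
earnings = rng.lognormal(mean=7.0, sigma=0.8, size=n)  # synthetic earnings
exposure = rng.beta(2, 5, size=n)                      # synthetic AI-exposure index in [0, 1]

# First-order displacement: each worker loses their job with probability
# proportional to AI exposure; displaced earnings drop to zero.
displaced = rng.random(n) < 0.3 * exposure
post = np.where(displaced, 0.0, earnings)

print(f"Gini before: {gini(earnings):.3f}")
print(f"Gini after:  {gini(post):.3f}")
print(f"Share displaced: {displaced.mean():.1%}")
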
  5. By: Yubo Cheng
    Abstract: This paper examines how digital transformation reshapes employment structures within Chinese listed firms, focusing on occupational functions and task intensity. Drawing on recruitment data classified under ISCO-08 and the Chinese Standard Occupational Classification 2022, we categorize jobs into five functional groups: management, professional, technical, auxiliary, and manual. Using a task-based framework, we construct routine, abstract, and manual task intensity indices through keyword analysis of job descriptions. We find that digitalization is associated with increased hiring in managerial, professional, and technical roles, and reduced demand for auxiliary and manual labor. At the task level, abstract task demand rises, while routine and manual tasks decline. Moderation analyses link these shifts to improvements in managerial efficiency and executive compensation. Our findings highlight how emerging technologies, including large language models (LLMs), are reshaping skill demands and labor dynamics in China's corporate sector.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.23230
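
    The keyword-based task intensity indices lend themselves to a compact sketch. The lexicons below are placeholders (the paper's actual dictionaries are not given in the abstract); only the mechanics, counting task-group keyword hits in each job description and normalizing by length, follow the described approach.

import re

# Placeholder lexicons; the paper's actual keyword dictionaries differ.
TASK_KEYWORDS = {
    "routine": ["data entry", "filing", "repetitive", "standard procedure"],
    "abstract": ["analyze", "design", "strategy", "research", "coordinate"],
    "manual": ["lift", "assemble", "operate machinery", "drive"],
}

def task_intensity(description: str) -> dict:
    """Keyword hits per task group, normalized by description length."""
    text = description.lower()
    n_words = max(len(text.split()), 1)
    return {
        group: sum(len(re.findall(re.escape(w), text)) for w in words) / n_words
        for group, words in TASK_KEYWORDS.items()
    }

print(task_intensity(
    "Analyze sales data, design marketing strategy, and coordinate with "
    "teams; some data entry required."
))
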
  6. By: Thomas R. Cook; Sophia Kazinnik
    Abstract: Financial institutions increasingly rely on large language models (LLMs) for high-stakes decision-making. However, these models risk perpetuating harmful biases if deployed without careful oversight. This paper investigates racial bias in LLMs specifically through the lens of credit decision-making tasks, operating on the premise that biases identified here are indicative of broader concerns across financial applications. We introduce a reproducible, counterfactual testing framework that evaluates how models respond to simulated mortgage applicants identical in all attributes except race. Our results reveal significant race-based discrepancies, exceeding historically observed bias levels. Leveraging layer-wise analysis, we track the propagation of sensitive attributes through internal model representations. Building on this, we deploy a control-vector intervention that effectively reduces racial disparities by up to 70% (33% on average) without impairing overall model performance. Our approach provides a transparent and practical toolkit for the identification and mitigation of bias in financial LLM deployments.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.17490
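
    A counterfactual audit of this kind is easy to sketch: hold every application attribute fixed, vary only the race field, and compare approval rates. The template, profiles, and mock_llm_approve stub below are all hypothetical; in practice the stub would be replaced by calls to the model under audit.

import random

TEMPLATE = (
    "Mortgage application: income ${income}, credit score {score}, "
    "loan amount ${loan}. Applicant race: {race}. Approve? Answer yes or no."
)

def mock_llm_approve(prompt: str) -> bool:
    """Stand-in for the model under audit; replace with a real LLM call."""
    random.seed(hash(prompt) % 2**32)  # deterministic within a run
    return random.random() < 0.7

def approval_rates(profiles, races=("White", "Black")) -> dict:
    """Approval rate per race label on otherwise identical applications."""
    return {
        race: sum(
            mock_llm_approve(TEMPLATE.format(race=race, **p)) for p in profiles
        ) / len(profiles)
        for race in races
    }

profiles = [
    {"income": 85_000, "score": 720, "loan": 250_000},
    {"income": 60_000, "score": 650, "loan": 180_000},
]
print(approval_rates(profiles))  # any gap is race-driven by construction
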
  7. By: Danielsson, Jon; Uthemann, Andreas
    Abstract: The rapid adoption of artificial intelligence (AI) poses new and poorly understood threats to financial stability. We use a game-theoretic model to analyse the stability impact of AI, finding that it amplifies existing financial system vulnerabilities (leverage, liquidity stress, and opacity) through superior information processing, common data, speed, and strategic complementarities. As a consequence, crises become faster and more severe, and the likelihood of a crisis depends directly on how effectively the authorities engage with AI. In response, we propose that the financial authorities develop their own AI systems and expertise, establish direct AI-to-AI communication, implement automated crisis facilities, and monitor AI use.
    Keywords: crises; systemic risk; AI
    JEL: F3 G3
    Date: 2025–09–30
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:128657
  8. By: Schmidt, Tobias; Lange, Kai-Robin; Reccius, Matthias; Müller, Henrik; Roos, Michael W. M.; Jentsch, Carsten
    Abstract: As interest in economic narratives has grown in recent years, so has the number of pipelines dedicated to extracting such narratives from texts. These pipelines often employ a mix of state-of-the-art natural language processing techniques, such as BERT, to tackle this task. While effective at the foundational linguistic operations essential for narrative extraction, such models lack the deeper semantic understanding required to distinguish the extraction of economic narratives from classic tasks like Semantic Role Labeling. Instead of relying on complex model pipelines, we evaluate the benefits of Large Language Models (LLMs) by analyzing a corpus of Wall Street Journal and New York Times newspaper articles about inflation. We apply a rigorous narrative definition and compare GPT-4o outputs to gold-standard narratives produced by expert annotators. Our results suggest that GPT-4o is capable of extracting valid economic narratives in a structured format, but still falls short of expert-level performance when handling complex documents and narratives. Given the novelty of LLMs in economic research, we also provide guidance for future work in economics and the social sciences that employs LLMs to pursue similar objectives.
    Keywords: Economic narratives, natural language processing, large language models
    JEL: C18 C55 C87 E70
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:rwirep:325494
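
    The abstract does not reproduce the authors' prompt or annotation scheme, so the sketch below is only a plausible minimal setup for structured narrative extraction with GPT-4o. It assumes the official openai Python package (v1 interface) and an OPENAI_API_KEY in the environment; the JSON keys are placeholders, not the paper's scheme.

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """Identify any economic narrative about inflation in the article
below. Return JSON with keys "event" (what happened), "cause" (the asserted
cause), and "actor" (who is said to be responsible). Use null where absent.

Article: {article}"""

def extract_narrative(article: str) -> dict:
    """One-shot structured extraction; schema and prompt are illustrative."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(article=article)}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)
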
  9. By: Kasymkhan Khubiev; Mikhail Semenov; Irina Podlipnova
    Abstract: Deep learning is evolving fast and is being integrated into various domains. Finance is a challenging field for deep learning, especially in the case of interpretable artificial intelligence (AI). Although classical approaches perform very well in natural language processing, computer vision, and forecasting, they are not a perfect fit for the financial world, in which specialists use different metrics to evaluate model performance. We first introduce financially grounded loss functions derived from key quantitative finance metrics, including the Sharpe ratio, Profit-and-Loss (PnL), and Maximum Drawdown. Additionally, we propose turnover regularization, a method that inherently constrains the turnover of generated positions within predefined limits. Our findings demonstrate that the proposed loss functions, in conjunction with turnover regularization, outperform the traditional mean squared error loss for return prediction tasks when evaluated using algorithmic trading metrics. The study shows that financially grounded metrics enhance predictive performance in trading strategies and portfolio optimization.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.04541
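
    The abstract names the ingredients (a Sharpe-ratio objective and a turnover constraint) without giving formulas, so the PyTorch sketch below is one plausible reading rather than the paper's exact loss: maximize the Sharpe ratio of realized PnL while penalizing turnover only beyond a preset budget. The hinge form of the penalty and the weighting are assumptions.

import torch

def sharpe_loss(positions: torch.Tensor, returns: torch.Tensor,
                turnover_weight: float = 0.1,
                max_turnover: float = 0.05) -> torch.Tensor:
    """Negative Sharpe ratio of strategy PnL plus a hinged turnover penalty.

    positions: (T,) model-generated positions, e.g. in [-1, 1]
    returns:   (T,) realized asset returns aligned with positions
    """
    pnl = positions * returns                     # per-period strategy PnL
    sharpe = pnl.mean() / (pnl.std() + 1e-8)      # unannualized Sharpe ratio
    turnover = positions.diff().abs().mean()      # mean absolute position change
    excess = torch.relu(turnover - max_turnover)  # penalize only the excess
    return -sharpe + turnover_weight * excess

# Usage: loss = sharpe_loss(model(features), returns); loss.backward()
# in an otherwise standard training loop.
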
  10. By: Drydakis, Nick
    Abstract: The study evaluates the effectiveness of a 12-week AI module delivered to non-STEM university students in England, aimed at building students' AI Capital, encompassing AI-related knowledge, skills, and capabilities. An integral part of the process involved the development and validation of the AI Capital of Students scale, used to measure AI Capital before and after the educational intervention. The module was delivered on four occasions to final-year students between 2023 and 2024, with follow-up data collected on students' employment status. The findings indicate that AI learning enhances students' AI Capital across all three dimensions. Moreover, AI Capital is positively associated with academic performance in AI-related coursework. However, disparities persist. Although all demographic groups experienced progress, male students, White students, and those with stronger backgrounds in mathematics and empirical methods achieved higher levels of AI Capital and academic success. Furthermore, enhanced AI Capital is associated with higher employment rates six months after graduation. To provide a theoretical foundation for this pedagogical intervention, the study introduces and validates the AI Learning-Capital-Employment Transition model, which conceptualises the pathway from structured AI education to the development of AI Capital and, in turn, to improved employment outcomes. The model integrates pedagogical, empirical and equity-centred perspectives, offering a practical framework for curriculum design and digital inclusion. The study highlights the importance of targeted interventions, inclusive pedagogy, and the integration of AI across curricula, with support tailored to students' prior academic experience.
    Keywords: Artificial Intelligence, AI literacy, AI Capital, University students, Grades, Academic performance, Employment rates
    JEL: I23 I21 J24 J21 O33 O15 I24 J15 J16
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:glodps:1668
  11. By: Brian Jabarian; Alex Imas
    Abstract: Artificial intelligence (AI) tools are increasingly used for written deliverables. This has created demand for distinguishing human-generated text from AI-generated text at scale, e.g., ensuring assignments were completed by students or that product reviews were written by actual customers. A decision-maker aiming to implement a detector in practice must consider two key statistics: the False Negative Rate (FNR), the proportion of AI-generated text that is falsely classified as human, and the False Positive Rate (FPR), the proportion of human-written text that is falsely classified as AI-generated. We evaluate three leading commercial detectors (Pangram, OriginalityAI, GPTZero) and an open-source one (RoBERTa) on their performance in minimizing these statistics, using a large corpus spanning genres, lengths, and models. Commercial detectors outperform the open-source one, with Pangram achieving near-zero FNR and FPR that remain robust across models, threshold rules, ultra-short passages ("stubs" of ≤ 50 words), and "humanizer" tools. A decision-maker may weight one type of error (Type I vs. Type II) as more important than the other. To account for such a preference, we introduce a framework in which the decision-maker sets a policy cap: a detector-independent metric reflecting tolerance for false positives or negatives. We show that Pangram is the only tool to satisfy a strict cap (FPR ≤ 0.005) without sacrificing accuracy. This framework is especially relevant given the uncertainty surrounding how AI may be used at different stages of writing, where certain uses may be encouraged (e.g., grammar correction) but may be difficult to separate from other uses.
    JEL: D6 M15
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34223
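
    The policy-cap framework is straightforward to operationalize: on calibration texts with known labels, choose the lowest (most aggressive) flagging threshold whose false positive rate still respects the cap, then read off the resulting false negative rate. The sketch below uses synthetic Beta-distributed detector scores; it illustrates the framework, not the paper's implementation.

import numpy as np

def calibrate_threshold(human_scores, ai_scores, fpr_cap=0.005):
    """Lowest flagging threshold whose FPR respects the policy cap.

    Scores are 'probability the text is AI-generated'; a text is flagged
    when score >= threshold. Lower thresholds flag more text, so the
    lowest compliant threshold also minimizes the FNR.
    """
    for t in np.unique(np.concatenate([human_scores, ai_scores])):  # ascending
        fpr = np.mean(human_scores >= t)  # human text wrongly flagged as AI
        if fpr <= fpr_cap:
            fnr = np.mean(ai_scores < t)  # AI text that slips through
            return t, fpr, fnr
    return None  # no threshold satisfies the cap

rng = np.random.default_rng(1)
human = rng.beta(2, 8, 20_000)  # synthetic scores for known-human texts
ai = rng.beta(8, 2, 20_000)     # synthetic scores for known-AI texts
print(calibrate_threshold(human, ai))
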
  12. By: Anton Korinek
    Abstract: The objective of this paper is to demystify AI agents -- autonomous LLM-based systems that plan, use tools, and execute multi-step research tasks -- and to provide hands-on instructions for economists to build their own, even if they do not have programming expertise. As AI has evolved from simple chatbots to reasoning models and now to autonomous agents, the main focus of this paper is to make these powerful tools accessible to all researchers. Through working examples and step-by-step code, it shows how economists can create agents that autonomously conduct literature reviews across myriad sources, write and debug econometric code, fetch and analyze economic data, and coordinate complex research workflows. The paper demonstrates that by "vibe coding" (programming through natural language) and building on modern agentic frameworks like LangGraph, any economist can build sophisticated research assistants and other autonomous tools in minutes. By providing complete, working implementations alongside conceptual frameworks, this guide demonstrates how to employ AI agents in every stage of the research process, from initial investigation to final analysis.
    JEL: A11 B41 C63
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34202
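
    The paper builds on agentic frameworks such as LangGraph; as a framework-free illustration of the plan-act-observe loop those frameworks automate, the sketch below wires a stubbed next-action chooser to a stubbed data tool. Both stubs (stub_model and fetch_gdp) are hypothetical; a real agent would let an LLM choose the next tool call and would query an actual data source.

def fetch_gdp(country: str) -> str:
    """Stub data tool; a real agent would query an actual API here."""
    return f"{country} GDP growth: 2.1% (placeholder value)"

TOOLS = {"fetch_gdp": fetch_gdp}

def stub_model(history: list) -> dict:
    """Stand-in for the LLM's choice of the next action."""
    if not any(h.startswith("Observation:") for h in history):
        return {"action": "fetch_gdp", "arg": "Germany"}
    return {"action": "finish", "arg": "Report based on: " + history[-1]}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        step = stub_model(history)                 # model picks the next action
        if step["action"] == "finish":
            return step["arg"]
        result = TOOLS[step["action"]](step["arg"])
        history.append(f"Observation: {result}")   # feed tool output back
    return "Stopped: step limit reached."

print(run_agent("Summarize recent German GDP growth"))
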
  13. By: Jeff Dominitz; Charles F. Manski
    Abstract: Enormous attention and resources are being devoted to the quest for artificial general intelligence and, even more ambitiously, artificial superintelligence. We wonder about the implications for our methodological research, which aims to help decision makers cope with what econometricians call identification problems, inferential problems in empirical research that do not diminish as sample size grows. Of particular concern are missing data problems in prediction and treatment choice. Essentially all data collection intended to inform decision making is subject to missing data, which gives rise to identification problems. Thus far, we see no indication that the current dominant architecture of machine learning (ML)-based artificial intelligence (AI) systems will outperform humans in this context. In this paper, we explain why we have reached this conclusion and why we see the missing data problem as a cautionary case study in the quest for superintelligence more generally. We first discuss the concept of intelligence, before presenting a decision-theoretic perspective that formalizes the connection between intelligence and identification problems. We next apply this perspective to two leading cases of missing data problems. Then we explain why we are skeptical that AI research is currently on a path toward machines doing better than humans at solving these identification problems.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.12388
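
    The abstract leaves the formal setup implicit, but a canonical example of the missing-data identification problem it references is the worst-case bound on a population mean, long associated with Manski's work. Suppose the outcome y, known to lie in [y_L, y_U], is observed only when z = 1. By the law of total expectation,

\[
  \mathrm{E}[y] \;=\; \mathrm{E}[y \mid z=1]\,\mathrm{P}(z=1)
                   + \mathrm{E}[y \mid z=0]\,\mathrm{P}(z=0).
\]

    The data never reveal E[y | z = 0], which absent further assumptions can lie anywhere in [y_L, y_U], yielding the identification region

\[
  \mathrm{E}[y] \;\in\; \bigl[\,\mathrm{E}[y \mid z=1]\,\mathrm{P}(z=1) + y_L\,\mathrm{P}(z=0),\;
                          \mathrm{E}[y \mid z=1]\,\mathrm{P}(z=1) + y_U\,\mathrm{P}(z=0)\,\bigr].
\]

    The interval's width, (y_U - y_L) P(z=0), does not shrink as the sample grows: more data sharpens estimates of each identified piece but not the bound itself, which is what distinguishes an identification problem from a statistical one.
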

This nep-ain issue is ©2025 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.