nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2025–09–01
twenty-one papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. The Behavioral Signature of GenAI in Scientific Communication By Askitas, Nikos
  2. Who is More Bayesian: Humans or ChatGPT? By John Rust; Tianshi Mu; Pranjal Rawat; Chengjun Zhang; Qixuan Zhong
  3. To Trade or Not to Trade: An Agentic Approach to Estimating Market Risk Improves Trading Decisions By Dimitrios Emmanoulopoulos; Ollie Olby; Justin Lyon; Namid R. Stillman
  4. The Market Effects of Algorithms By Lindsey Raymond
  5. Algorithmic Collusion is Algorithm Orchestration By Cesare Carissimo; Fryderyk Falniowski; Siavash Rahimi; Heinrich Nax
  6. Algorithmic Collusion of Pricing and Advertising on E-commerce Platforms By Hangcheng Zhao; Ron Berman
  7. Signal or Noise? Evaluating Large Language Models in Resume Screening Across Contextual Variations and Human Expert Benchmarks By Aryan Varshney; Venkat Ram Reddy Ganuthula
  8. Is AI Contributing to Rising Unemployment? Evidence from Occupational Variation By Serdar Ozkan; Nicholas Sullivan
  9. The Skill Premium Across Countries in the Era of Industrial Robots and Generative AI By Marcos J Ribeiro; Klaus Prettner
  10. Generative AI in Higher Education: Evidence from an Elite College By Contractor, Zara; Reyes, Germán
  11. The Skill Inside the Task: How AI and Robotics Reshape the Structure of Work. By Cossu, Fenicia
  12. Automation, AI, and the Intergenerational Transmission of Knowledge By Enrique Ide
  13. Artificial Intelligence, Domain AI Readiness, and Firm Productivity By Sipeng Zeng; Xiaoning Wang; Tianshu Sun
  14. How Retrainable Are AI-Exposed Workers? By Benjamin Lahey; Benjamin Hyman; Karen Ni; Laura Pilossoph
  15. Artificial Finance: How AI Thinks About Money By Orhan Erdem; Ragavi Pobbathi Ashok
  16. AlphaAgents: Large Language Model based Multi-Agents for Equity Portfolio Constructions By Tianjiao Zhao; Jingrao Lyu; Stokes Jones; Harrison Garber; Stefano Pasquali; Dhagash Mehta
  17. Verba volant, transcripta manent: what corporate earnings calls reveal about the AI stock rally By Ca' Zorzi, Michele; Manu, Ana-Simona; Lopardo, Gianluigi
  18. FX sentiment analysis with large language models By Daniele Ballinari; Jessica Maly
  19. Event-Aware Sentiment Factors from LLM-Augmented Financial Tweets: A Transparent Framework for Interpretable Quant Trading By Yueyi Wang; Qiyao Wei
  20. Collective dynamics of strategic classification By Marta C. Couto; Flavia Barsotti; Fernando P. Santos
  21. Notes on a World with Generative AI By Askitas, Nikos

  1. By: Askitas, Nikos (IZA)
    Abstract: We examine the uptake of GPT-assisted writing in economics working paper abstracts. Using data from the IZA DP series, we detect a clear stylistic shift after the release of ChatGPT-3.5 in March 2023. This shift is evident in core textual metrics—mean word length, type-token ratio, and readability—and reflects growing convergence with machine-generated writing. While the ChatGPT launch was an exogenous shock, adoption is endogenous: authors choose whether to use AI. To capture this behavioral response, we combine stylometric analysis, machine learning classification, and prompt-based similarity testing. Event-study regressions with fixed effects and placebo checks confirm that the change is abrupt, persistent, and not explained by pre-existing trends. A similarity experiment using OpenAI’s API shows that post-ChatGPT abstracts resemble their GPT-optimized versions more closely than pre-ChatGPT abstracts resemble theirs. A classifier, trained on these variants, flags a growing share of post-March 2023 texts as GPT-like. Rather than suggesting full automation, our findings indicate selective human–AI augmentation. Our framework generalizes to other contexts such as resumes, job ads, legal briefs, research proposals, or programming code.
    Keywords: AI-assisted writing, linguistic metrics, event study, machine learning, natural language processing (NLP), text analysis, academic writing, GPT adoption, diffusion of technology
    JEL: C55 C88 O33 C81 L86 J24
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp18062
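    The stylistic shift is tracked through simple textual metrics; two of them, mean word length and type-token ratio, can be sketched in a few lines. The tokenizer and sample sentence below are illustrative assumptions, not the paper's actual pipeline:

    ```python
    # Minimal sketch of two stylometric metrics named in the abstract:
    # mean word length and type-token ratio (TTR). The regex tokenizer
    # and sample text are illustrative; the paper's corpus and
    # preprocessing may differ.
    import re

    def stylometrics(text: str) -> dict:
        words = re.findall(r"[A-Za-z']+", text.lower())
        mean_word_length = sum(len(w) for w in words) / len(words)
        type_token_ratio = len(set(words)) / len(words)  # unique / total tokens
        return {"mean_word_length": mean_word_length,
                "type_token_ratio": type_token_ratio}

    m = stylometrics("We detect a clear stylistic shift in abstracts after the release of ChatGPT.")
    print(m)
    ```

    Comparing these statistics before and after a release date, as the event-study design does, is what reveals the convergence toward machine-generated style.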
  2. By: John Rust (Department of Economics, Georgetown University); Tianshi Mu (Tsinghua University); Pranjal Rawat (Georgetown University); Chengjun Zhang (Morgan Stanley); Qixuan Zhong (Department of Economics, Georgetown University)
    Abstract: We compare human and artificially intelligent (AI) subjects in classification tasks where the optimal decision rule is given by Bayes’ Rule. Experimental studies reach mixed conclusions about whether human beliefs and decisions accord with Bayes’ Rule. We reanalyze landmark experiments using a new model of decision making and show that decisions can be nearly optimal even when beliefs are not Bayesian. Using an objective measure of “decision efficiency,” we find that humans are 96% efficient despite the fact that only a minority have Bayesian beliefs. We replicate these same experiments using three generations of ChatGPT as subjects. Using the reasoning provided by GPT responses to understand its “thought process,” we find that GPT-3.5 ignores the prior and is only 75% efficient, whereas GPT-4 and GPT-4o use Bayes’ Rule and are 93% and 99% efficient, respectively. Most errors by GPT-4 and GPT-4o are algebraic mistakes in computing the posterior, but GPT-4o is far less error-prone. GPT performance increased from sub-human to super-human in just 3 years. By version 4o, its beliefs and decision making had become nearly perfectly Bayesian.
    Keywords: Bayes’ Rule, decision making, statistical decision theory, win and loss functions, learning, Bayes’ compatible beliefs, noisy Bayesians, classification, machine learning, artificial intelligence, large language models, ChatGPT, maximum likelihood, heterogeneity, mixture models, Estimation-Classification (EC) algorithm, binary logit model, structural models
    JEL: C91 D91
    Date: 2025–07–10
    URL: https://d.repec.org/n?u=RePEc:geo:guwopa:gueconwpa~25-25-02
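    The benchmark behind the efficiency numbers is Bayes' Rule in a two-state classification task. A minimal sketch, with illustrative priors and likelihoods rather than the paper's experimental parameters, shows how ignoring the prior (as GPT-3.5 reportedly does) can flip the implied decision:

    ```python
    # Hedged sketch of the Bayes'-Rule decision benchmark: given a prior
    # over two states A and B and the likelihood of an observed signal
    # under each, the optimal rule picks the state with the higher
    # posterior. All numbers are illustrative.
    def posterior(prior_a: float, lik_a: float, lik_b: float) -> float:
        """P(A | signal) via Bayes' Rule."""
        num = prior_a * lik_a
        return num / (num + (1 - prior_a) * lik_b)

    # A prior-neglecting subject effectively replaces prior_a with 0.5.
    p_bayes = posterior(prior_a=0.3, lik_a=0.8, lik_b=0.4)
    p_ignore_prior = posterior(prior_a=0.5, lik_a=0.8, lik_b=0.4)
    print(p_bayes, p_ignore_prior)  # the two rules disagree about which state is more likely
    ```

    With these numbers the Bayesian posterior for A is below one half while the prior-neglecting posterior is above it, so the two rules choose different states, which is the kind of inefficiency the paper quantifies.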
  3. By: Dimitrios Emmanoulopoulos; Ollie Olby; Justin Lyon; Namid R. Stillman
    Abstract: Large language models (LLMs) are increasingly deployed in agentic frameworks, in which prompts trigger complex tool-based analysis in pursuit of a goal. While these frameworks have shown promise across multiple domains including in finance, they typically lack a principled model-building step, relying instead on sentiment- or trend-based analysis. We address this gap by developing an agentic system that uses LLMs to iteratively discover stochastic differential equations for financial time series. These models generate risk metrics which inform daily trading decisions. We evaluate our system in both traditional backtests and using a market simulator, which introduces synthetic but causally plausible price paths and news events. We find that model-informed trading strategies outperform standard LLM-based agents, improving Sharpe ratios across multiple equities. Our results show that combining LLMs with agentic model discovery enhances market risk estimation and enables more profitable trading decisions.
    Date: 2025–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2507.08584
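    The "principled model-building step" the abstract contrasts with sentiment analysis can be illustrated with the simplest member of the SDE family: fitting geometric Brownian motion to log-returns and turning the fit into a daily risk metric. This is a sketch under assumed synthetic prices; the paper's agentic system searches over richer SDEs proposed by an LLM.

    ```python
    # Hedged sketch: fit geometric Brownian motion (the simplest candidate
    # SDE) to a synthetic price series and derive a parametric one-day 95%
    # value-at-risk. The paper's system iterates over richer models.
    import math
    import statistics as st

    prices = [100.0, 101.2, 100.5, 102.0, 101.1, 103.4, 102.8, 104.0]
    log_ret = [math.log(b / a) for a, b in zip(prices, prices[1:])]

    mu = st.mean(log_ret)            # estimated daily drift
    sigma = st.stdev(log_ret)        # estimated daily volatility
    var_95 = -(mu - 1.645 * sigma)   # 1-day 95% VaR as a loss fraction
    print(f"drift={mu:.4f} vol={sigma:.4f} VaR95={var_95:.4f}")
    ```

    A risk metric like this, refreshed daily, is the kind of input the agentic trading decision consumes.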
  4. By: Lindsey Raymond
    Abstract: While there is excitement about the potential for algorithms to optimize individual decision-making, changes in individual behavior will, almost inevitably, impact markets. Yet little is known about such effects. In this paper, I study how the availability of algorithmic prediction changes entry, allocation, and prices in the US single-family housing market, a key driver of household wealth. I identify a market-level natural experiment that generates variation in the cost of using algorithms to value houses: digitization, the transition from physical to digital housing records. I show that digitization leads to entry by investors using algorithms, but does not push out investors using human judgment. Instead, human investors shift toward houses that are difficult to predict algorithmically. Algorithmic investors predominantly purchase minority-owned homes, a segment of the market where humans may be biased. Digitization increases the average sale price of minority-owned homes by 5% and reduces racial disparities in home prices by 45%. Algorithmic investors, via competition, affect the prices paid by owner-occupiers and human investors for minority homes; such changes drive the majority of the reduction in racial disparities. The decrease in racial inequality underscores the potential for algorithms to mitigate human biases at the market level.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.09513
  5. By: Cesare Carissimo; Fryderyk Falniowski; Siavash Rahimi; Heinrich Nax
    Abstract: This paper proposes a fresh ‘meta-game’ perspective on the problem of algorithmic collusion in pricing games à la Bertrand. Economists have interpreted the fact that algorithms can learn to price collusively as tacit collusion. We argue instead that the co-parametrization of algorithms, which we show is necessary to obtain algorithmic collusion, requires algorithm designer(s) to engage in explicit collusion by algorithm orchestration. To highlight this, we model a meta-game of algorithm parametrization that is played by algorithm designers, and the relevant strategic analyses at that level reveal new equilibrium and collusion phenomena.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.14766
  6. By: Hangcheng Zhao; Ron Berman
    Abstract: Online sellers have been adopting AI learning algorithms to automatically make product pricing and advertising decisions on e-commerce platforms. When sellers compete using such algorithms, one concern is that of tacit collusion: the algorithms learn to coordinate on prices higher than competitive levels. We empirically investigate whether these concerns are valid when sellers make pricing and advertising decisions together, i.e., two-dimensional decisions. Our empirical strategy is to analyze competition with multi-agent reinforcement learning, which we calibrate to a large-scale dataset collected from Amazon.com products. Our first contribution is to find conditions under which learning algorithms can facilitate win-win-win outcomes that are beneficial for consumers, sellers, and even the platform, when consumers have high search costs. In these cases the algorithms learn to coordinate on prices that are lower than competitive prices. The intuition is that the algorithms learn to coordinate on lower advertising bids, which lower advertising costs, leading to lower prices. Our second contribution is an analysis of a large-scale, high-frequency keyword-product dataset for more than 2 million products on Amazon.com. Our estimates of consumer search costs show a wide range of costs for different product keywords. We construct an algorithm usage index and find a negative interaction between the estimated consumer search costs and the index, providing empirical evidence of beneficial collusion. Finally, we analyze the platform's strategic response. We find that reserve price adjustments will not increase profits for the platform, but commission adjustments will. Our analyses help alleviate some worries about the potentially harmful effects of competing learning algorithms, and can help sellers, platforms, and policymakers decide whether to adopt or regulate such algorithms.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.08325
  7. By: Aryan Varshney; Venkat Ram Reddy Ganuthula
    Abstract: This study investigates whether large language models (LLMs) exhibit consistent behavior (signal) or random variation (noise) when screening resumes against job descriptions, and how their performance compares to human experts. Using controlled datasets, we tested three LLMs (Claude, GPT, and Gemini) across contexts (No Company, Firm1 [MNC], Firm2 [Startup], Reduced Context) with identical and randomized resumes, benchmarked against three human recruitment experts. Analysis of variance revealed significant mean differences in four of eight LLM-only conditions and consistently significant differences between LLM and human evaluations (p < 0.1), while all LLMs differed significantly from human experts across contexts. Meta-cognition analysis highlighted adaptive weighting patterns that differ markedly from human evaluation approaches. Findings suggest LLMs offer interpretable patterns with detailed prompts but diverge substantially from human judgment, informing their deployment in automated hiring systems.
    Date: 2025–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2507.08019
  8. By: Serdar Ozkan; Nicholas Sullivan
    Abstract: Is AI driving job displacement? This analysis compares jobs’ theoretical AI exposure and actual AI adoption with changes in occupation-level unemployment.
    Keywords: artificial intelligence (AI); unemployment
    Date: 2025–08–26
    URL: https://d.repec.org/n?u=RePEc:fip:l00001:101478
  9. By: Marcos J Ribeiro (Department of Economics, University of Sao Paulo); Klaus Prettner (Department of Economics, Vienna University of Economics and Business)
    Abstract: How do new technologies affect economic growth and the skill premium? To answer this question, we analyze the impact of industrial robots and artificial intelligence (AI) on the wage differential between low-skill and high-skill workers across 52 countries using counterfactual simulations. In so doing, we extend the nested CES production function framework of Bloom et al. (2025) to account for cross-country income heterogeneity. Confirming prior findings, we show that the use of industrial robots tends to increase wage inequality, while the use of AI tends to reduce it. Our contribution lies in documenting substantial heterogeneity across income groups: the inequality-increasing effect of robots and the inequality-reducing effects of AI are particularly strong in high-income countries, while they are less pronounced among middle- and lower-middle-income countries. In addition, we show that both technologies boost economic growth. In terms of policy recommendations, our findings suggest that investments in education and skill-upgrading can simultaneously raise average incomes and mitigate the negative effects of automation on wage inequality.
    Keywords: Skill Premium, Automation, Industrial Robots, Artificial Intelligence
    JEL: J31 O14
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp381
  10. By: Contractor, Zara (Middlebury College); Reyes, Germán (Middlebury College)
    Abstract: Generative AI is transforming higher education, yet systematic evidence on student adoption remains limited. Using novel survey data from a selective U.S. college, we document over 80 percent of students using AI academically within two years of ChatGPT's release. Adoption varies across disciplines, demographics, and achievement levels, highlighting AI's potential to reshape educational inequalities. Students predominantly use AI for augmenting learning (e.g., explanations, feedback), but also to automate tasks (e.g., essay generation). Positive perceptions of AI's educational benefits strongly predict adoption. Institutional policies can influence usage patterns but risk creating unintended disparate impacts across student groups due to uneven compliance.
    Keywords: technology adoption, higher education, Generative AI, ChatGPT, student learning
    JEL: I23 O33 I21 J24 D83
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp18055
  11. By: Cossu, Fenicia
    Abstract: We examine how exposure to artificial intelligence (AI) and robotics reshapes the skill composition of occupations. Using O*NET data from 2006 to 2019, we construct indicators tracking the importance of seven broad skill categories within each occupation over time. We link these indicators to task-based measures of technological exposure at the occupational level. We then focus on the effect of AI and robotics in altering the skill composition of high-, middle- and low-skilled groups of occupations. We find that AI primarily affects high-skill occupations by increasing the importance of Technical and Resource Management skills and decreasing that of Systems and Social skills. Robotics instead boosts Technical skills in middle and low-skill occupations and reduces Process skills in low-skilled ones. Notably, neither AI nor robots affect the importance of Complex Problem Solving skills.
    Keywords: Skills, Artificial Intelligence, Robots, Skill Composition, Occupational change
    JEL: J23 J24 O3 O33
    Date: 2025–07–22
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:125404
  12. By: Enrique Ide
    Abstract: Recent advances in Artificial Intelligence (AI) have fueled predictions of unprecedented productivity growth. Yet, by enabling senior workers to perform more tasks on their own, AI may inadvertently reduce entry-level opportunities, raising concerns about how future generations will acquire essential skills. In this paper, I develop a model to examine how advanced automation affects the intergenerational transmission of knowledge. The analysis reveals that automating entry-level tasks yields immediate productivity gains but can undermine long-run growth by eroding the skills of subsequent generations. Back-of-the-envelope calculations suggest that AI-driven entry-level automation could reduce U.S. long-term annual growth by approximately 0.05 to 0.35 percentage points, depending on its scale. I also demonstrate that AI co-pilots - systems that democratize access to expertise previously acquired only through hands-on experience - can partially mitigate these negative effects. However, their introduction is not always beneficial: by providing expert insights, co-pilots may inadvertently diminish younger workers' incentives to invest in hands-on learning. These findings cast doubt on the optimistic view that AI will automatically lead to sustained productivity growth, unless it either generates new entry-level roles or significantly boosts the economy's underlying innovation rate.
    Date: 2025–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2507.16078
  13. By: Sipeng Zeng; Xiaoning Wang; Tianshu Sun
    Abstract: Although Artificial Intelligence (AI) holds great promise for enhancing innovation and productivity, many firms struggle to realize its benefits. We investigate why some firms and industries succeed with AI while others do not, focusing on the degree to which an industrial domain is technologically integrated with AI, which we term "domain AI readiness". Using panel data on Chinese listed firms from 2016 to 2022, we examine how the interaction between firm-level AI capabilities and domain AI readiness affects firm performance. We create novel constructs from patent data and measure the domain AI readiness of a specific domain by analyzing the co-occurrence of four-digit International Patent Classification (IPC4) codes related to AI with the specific domain across all patents in that domain. Our findings reveal a strong complementarity: AI capabilities yield greater productivity and innovation gains when deployed in domains with higher AI readiness, whereas benefits are limited in domains that are technologically unprepared or already obsolete. These results remain robust when using local AI policy initiatives as instrumental variables. Further analysis shows that this complementarity is driven by external advances in domain-AI integration, rather than firms' own strategic pivots. Time-series analysis of IPC4 co-occurrence patterns further suggests that improvements in domain AI readiness stem primarily from the academic advancements of AI in specific domains.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.09634
  14. By: Benjamin Lahey; Benjamin Hyman; Karen Ni; Laura Pilossoph
    Abstract: We document the extent to which workers in AI-exposed occupations can successfully retrain for AI-intensive work. We assemble a new workforce development dataset spanning over 1.6 million job training participation spells from all U.S. Workforce Investment and Opportunity Act programs from 2012-2023 linked with occupational measures of AI exposure. Using earnings records observed before and after training, we compare high AI exposure trainees to a matched sample of similar workers who only received job search assistance. We find that AI-exposed workers have high earnings returns from training that are only 25 percent lower than the returns for low AI exposure workers. However, training participants who target AI-intensive occupations face a penalty for doing so, with 29 percent lower returns than AI-exposed workers pursuing more general training. We estimate that between 25 and 40 percent of occupations are “AI retrainable” as measured by their workers receiving higher pay for moving to more AI-intensive occupations—a large magnitude given the relatively low-income sample of displaced workers. Positive earnings returns in all groups are driven by the most recent years when labor markets were tightest, suggesting training programs may have stronger signal value when firms reach deeper into the skill market.
    Keywords: artificial intelligence; active labor market policies; job training; labor markets
    JEL: J08 M53 O31
    Date: 2025–08–01
    URL: https://d.repec.org/n?u=RePEc:fip:fednsr:101471
  15. By: Orhan Erdem; Ragavi Pobbathi Ashok
    Abstract: In this paper, we explore how large language models (LLMs) approach financial decision-making by systematically comparing their responses to those of human participants across the globe. We posed a set of commonly used financial decision-making questions to seven leading LLMs, including five models from the GPT series (GPT-4o, GPT-4.5, o1, o3-mini), Gemini 2.0 Flash, and DeepSeek R1. We then compared their outputs to human responses drawn from a dataset covering 53 nations. Our analysis reveals three main results. First, LLMs generally exhibit a risk-neutral decision-making pattern, favoring choices aligned with expected value calculations when faced with lottery-type questions. Second, when evaluating trade-offs between present and future, LLMs occasionally produce responses that appear inconsistent with normative reasoning. Third, when we examine cross-national similarities, we find that the LLMs' aggregate responses most closely resemble those of participants from Tanzania. These findings contribute to the understanding of how LLMs emulate human-like decision behaviors and highlight potential cultural and training influences embedded within their outputs.
    Date: 2025–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2507.10933
  16. By: Tianjiao Zhao; Jingrao Lyu; Stokes Jones; Harrison Garber; Stefano Pasquali; Dhagash Mehta
    Abstract: The field of artificial intelligence (AI) agents is evolving rapidly, driven by the capabilities of Large Language Models (LLMs) to autonomously perform and refine tasks with human-like efficiency and adaptability. In this context, multi-agent collaboration has emerged as a promising approach, enabling multiple AI agents to work together to solve complex challenges. This study investigates the application of role-based multi-agent systems to support stock selection in equity research and portfolio management. We present a comprehensive analysis performed by a team of specialized agents and evaluate their stock-picking performance against established benchmarks under varying levels of risk tolerance. Furthermore, we examine the advantages and limitations of employing multi-agent frameworks in equity analysis, offering critical insights into their practical efficacy and implementation challenges.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.11152
  17. By: Ca' Zorzi, Michele; Manu, Ana-Simona; Lopardo, Gianluigi
    Abstract: This paper investigates the economic impact of technological innovation, focusing on generative AI (GenAI) following ChatGPT’s release in November 2022. We propose a novel framework leveraging large language models to analyze earnings call transcripts. Our method quantifies firms’ GenAI exposure and classifies sentiment as opportunity, adoption, or risk. Using panel econometric techniques, we assess GenAI exposure’s impact on S&P 500 firms’ financial performance over 2014-2023. We find two main results. First, GenAI exposure rose sharply after ChatGPT’s release, particularly in IT, Consumer Services, and Consumer Discretionary sectors, coinciding with sentiment shifts toward adoption. Second, GenAI exposure significantly influenced stock market performance. Firms with early and high GenAI exposure saw stronger returns, though earnings expectations improved modestly. Panel regressions show a 1 percentage point increase in GenAI exposure led to a 0.26% rise in quarterly excess returns. Difference-in-differences estimates indicate a 2.4% average quarterly stock price increase following ChatGPT’s release.
    Keywords: artificial intelligence, ChatGPT, earnings call, equity returns, generative AI
    JEL: C80 G14 G30 L25 O33
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:ecb:ecbwps:20253093
  18. By: Daniele Ballinari; Jessica Maly
    Abstract: We enhance sentiment analysis in the foreign exchange (FX) market by fine-tuning large language models (LLMs) to better understand and interpret the complex language specific to FX markets. We build on existing methods by using state-of-the-art open source LLMs, fine-tuning them with labelled FX news articles and then comparing their performance against traditional approaches and alternative models. Furthermore, we tested these fine-tuned LLMs by creating investment strategies based on the sentiment they detect in FX analysis articles with the goal of demonstrating how well these strategies perform in real-world trading scenarios. Our findings indicate that the fine-tuned LLMs outperform the existing methods in terms of both the classification accuracy and trading performance, highlighting their potential for improving FX market sentiment analysis and investment decision-making.
    Keywords: Large language models, Sentiment analysis, Fine-tuning, Text classification, Natural language processing, Foreign exchange, Financial markets
    JEL: F31 G12 G15
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:snb:snbwpa:2025-11
  19. By: Yueyi Wang; Qiyao Wei
    Abstract: In this study, we wish to showcase the unique utility of large language models (LLMs) in financial semantic annotation and alpha signal discovery. Leveraging a corpus of company-related tweets, we use an LLM to automatically assign multi-label event categories to high-sentiment-intensity tweets. We align these labeled sentiment signals with forward returns over 1-to-7-day horizons to evaluate their statistical efficacy and market tradability. Our experiments reveal that certain event labels consistently yield negative alpha, with Sharpe ratios as low as -0.38 and information coefficients exceeding 0.05, all statistically significant at the 95% confidence level. This study establishes the feasibility of transforming unstructured social media text into structured, multi-label event variables. A key contribution of this work is its commitment to transparency and reproducibility; all code and methodologies are made publicly available. Our results provide compelling evidence that social media sentiment is a valuable, albeit noisy, signal in financial forecasting and underscore the potential of open-source frameworks to democratize algorithmic trading research.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.07408
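    The evaluation loop described above (align a label-conditioned sentiment signal with forward returns, then score it by Sharpe ratio and information coefficient) can be sketched as follows; the signal and return series are synthetic stand-ins, not the paper's tweet data:

    ```python
    # Hedged sketch of signal evaluation: annualized Sharpe ratio of the
    # signal-following strategy and the information coefficient (Pearson
    # correlation between signal and forward return). Requires Python 3.10+
    # for statistics.correlation. All data are synthetic.
    import statistics as st

    def sharpe(returns, periods_per_year=252):
        """Annualized Sharpe ratio of a daily return series (risk-free rate 0)."""
        return (st.mean(returns) / st.stdev(returns)) * periods_per_year ** 0.5

    def information_coefficient(signal, fwd_returns):
        """Pearson correlation between a signal and its forward returns."""
        return st.correlation(signal, fwd_returns)

    # Synthetic +1/-1 event-sentiment signal and 1-day forward returns.
    signal = [1, -1, 1, 1, -1, -1, 1, -1]
    fwd = [0.004, -0.002, 0.001, 0.003, -0.001, 0.002, -0.001, -0.003]
    strategy = [s * r for s, r in zip(signal, fwd)]  # trade in the signal's direction
    s_ratio = sharpe(strategy)
    ic = information_coefficient(signal, fwd)
    print(f"Sharpe={s_ratio:.2f} IC={ic:.3f}")
    ```

    A label that consistently produces a negative Sharpe ratio under this scoring, as some do in the paper, is itself tradable as a contrarian signal.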
  20. By: Marta C. Couto; Flavia Barsotti; Fernando P. Santos
    Abstract: Classification algorithms based on Artificial Intelligence (AI) are nowadays applied in high-stakes decisions in finance, healthcare, criminal justice, or education. Individuals can strategically adapt to the information gathered about classifiers, which in turn may require algorithms to be re-trained. Which collective dynamics will result from users' adaptation and algorithms' retraining? We apply evolutionary game theory to address this question. Our framework provides a mathematically rigorous way of treating the problem of feedback loops between collectives of users and institutions, allowing us to test interventions to mitigate the adverse effects of strategic adaptation. As a case study, we consider institutions deploying algorithms for credit lending. We consider several scenarios, each representing different interaction paradigms. When algorithms are not robust against strategic manipulation, we are able to capture previous challenges discussed in the strategic classification literature, whereby users either pay excessive costs to meet the institutions' expectations (leading to high social costs) or game the algorithm (e.g., provide fake information). From this baseline setting, we test the role of improving gaming detection and providing algorithmic recourse. We show that increased detection capabilities reduce social costs and could lead to users' improvement; when perfect classifiers are not feasible (likely to occur in practice), algorithmic recourse can steer the dynamics towards high users' improvement rates. The speed at which the institutions re-adapt to the user population plays a role in the final outcome. Finally, we explore a scenario where strict institutions provide actionable recourse to their unsuccessful users and observe cycling dynamics so far unnoticed in the literature.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.09340
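    The evolutionary-game machinery behind these results is the replicator dynamic; a hedged sketch with illustrative payoffs (not the paper's calibration) shows how a high detection probability drives the share of gaming users down:

    ```python
    # Hedged sketch of two-strategy replicator dynamics for strategic
    # classification: x is the share of users who "game" the classifier;
    # strategies growing faster than the population average gain share.
    # Payoffs, detection probability, and step size are all illustrative.
    def replicator_step(x: float, payoff_game: float, payoff_honest: float,
                        dt: float = 0.1) -> float:
        avg = x * payoff_game + (1 - x) * payoff_honest
        return x + dt * x * (payoff_game - avg)  # Euler step of dx/dt = x(f_game - f_avg)

    detection = 0.7                                   # assumed detection probability
    payoff_game = (1 - detection) * 1.0 - detection * 0.5  # gaming pays only if undetected
    payoff_honest = 0.2                               # assumed payoff of genuine improvement
    x = 0.5
    for _ in range(100):
        x = replicator_step(x, payoff_game, payoff_honest)
    print(round(x, 3))  # gaming share shrinks when detection is high
    ```

    Lowering the detection probability in this sketch flips the payoff ranking and lets gaming take over, which is the qualitative mechanism behind the paper's detection-capability intervention.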
  21. By: Askitas, Nikos (IZA)
    Abstract: Generative AI (GenAI) and Large Language Models (LLMs) are moving into domains once seen as uniquely human: reasoning, synthesis, abstraction, and rhetoric. Addressed to labor economists and informed readers, this paper clarifies what is truly new about LLMs, what is not, and why it matters. Using an analogy to auto-regressive models from economics, we explain their stochastic nature, whose fluency is often mistaken for agency. We place LLMs in the longer history of human–machine outsourcing, from digestion to cognition, and examine disruptive effects on white-collar labor, institutions, and epistemic norms. Risks emerge when synthetic content becomes both product and input, creating feedback loops that erode originality and reliability. Grounding the discussion in conceptual clarity over hype, we argue that while GenAI may substitute for some labor, statistical limits will, probably but not without major disruption, preserve a key role for human judgment. The question is not only how these tools are used, but which tasks we relinquish and how we reallocate expertise in a new division of cognitive labor.
    Keywords: automation and outsourcing, technological change, labor economics, autoregressive models, Large Language Models, Generative Artificial Intelligence, human-machine collaboration, knowledge work, epistemic norms, digital transformation
    JEL: J24 O33 O31 J22 D83 L86 J44 O38
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp18070

This nep-ain issue is ©2025 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.