nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2025–04–21
seven papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Replication Report: Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)Honesty By Deer, Lachlan; Krishna, Adithya; Zhang, Lyla
  2. Artificial Intelligence for Public Use By Lodefalk, Magnus; Engberg, Erik; Lidskog, Rolf; Tang, Aili
  3. The Impact of Generative AI on Work Productivity By Alexander Bick; Adam Blandin; David Deming
  4. Generative Artificial Intelligence and Revolution of Market for Legal Services By Bruno Deffains; Frédéric Marty
  5. Practical and Ethical Perspectives on AI-Based Employee Performance Evaluation By Pletcher, Scott Nicholas
  6. Speculative Bubbles in the Recent AI Boom: Nasdaq and the Magnificent Seven By Rerotlhe B. Basele; Peter C.B. Phillips; Shuping Shi
  7. How to Choose a Fairness Measure: A Decision-Making Workflow for Auditors By Picogna, Federica; de Swart, Jacques; Kaya, Heysem; Wetzels, Ruud

  1. By: Deer, Lachlan; Krishna, Adithya; Zhang, Lyla
    Abstract: Leib et al. (2024) examine how artificial intelligence (AI) generated advice affects dishonesty compared to equivalent human advice in a laboratory experiment. In their preferred empirical specification, the authors report that dishonesty-promoting advice increases dishonest behavior by approximately 15% compared to a baseline without advice, while honesty-promoting advice has no significant effect. Additionally, they find that algorithmic transparency (disclosing whether advice comes from AI or humans) does not affect behavior. We computationally reproduce the main results of the paper using the same procedures and original data. Our results confirm the sign, magnitude, and statistical significance of the authors' reported estimates across each of their main findings. Additional robustness checks show that the significance of the results remains stable under alternative specifications and methodological choices.
    Keywords: artificial intelligence, dishonesty, laboratory experiment, computational reproducibility
    JEL: D01 D91 C91
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:i4rdps:212
  2. By: Lodefalk, Magnus (Örebro University School of Business); Engberg, Erik (Örebro University School of Business); Lidskog, Rolf (School of Humanities, Education and Social Sciences); Tang, Aili (Örebro University School of Business)
    Abstract: This paper investigates the economic and societal impacts of Artificial Intelligence (AI) in the public sector, focusing on its potential to enhance productivity and mitigate labour shortages. Employing detailed administrative data and novel occupational exposure measures, we simulate future scenarios over a 20-year horizon, using Sweden as an illustrative case. Our findings indicate that advances in AI development and uptake could significantly alleviate projected labour shortages and enhance productivity. However, outcomes vary substantially across sectors and organisational types, driven by differing workforce compositions. Complementing the economic analysis, we identify key challenges that hinder AI’s effective deployment, including technical limitations, organisational barriers, regulatory ambiguity, and ethical risks such as algorithmic bias and lack of transparency. Drawing from an interdisciplinary conceptual framework, we argue that AI’s integration in the public sector must address these socio-technical and institutional factors comprehensively. To unlock AI’s full potential, substantial investments in technological infrastructure, human capital development, regulatory clarity, and robust governance mechanisms are essential. Our study thus contributes both novel economic evidence and an integrated societal perspective, informing strategies for sustainable and equitable public-sector digitalisation.
    Keywords: Artificial intelligence; Implementation of technology; Productivity; Labour demand
    JEL: E24 J23 J24 N34 O33
    Date: 2025–04–02
    URL: https://d.repec.org/n?u=RePEc:hhs:oruesi:2025_006
  3. By: Alexander Bick; Adam Blandin; David Deming
    Abstract: Workers using generative AI reported they saved 5.4% of their work hours in the previous week, which suggests a 1.1% increase in productivity for the entire workforce.
    Keywords: generative artificial intelligence (AI); productivity
    Date: 2025–02–27
    URL: https://d.repec.org/n?u=RePEc:fip:l00001:99631
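The two figures in the abstract are linked by a simple aggregation: a 5.4% time saving among users scales down to a 1.1% workforce-wide gain in proportion to how much of total work time generative-AI users account for. A back-of-envelope sketch (the implied user share is inferred arithmetic, not a number from the paper, and it assumes equal work hours across workers):

```python
# Back-of-envelope aggregation of the reported figures.
# Assumption (not from the abstract): work hours are equal across workers,
# so the aggregate gain is the users' time saving times the user share.

hours_saved_by_users = 0.054   # 5.4% of users' work hours saved last week
aggregate_gain = 0.011         # 1.1% productivity gain, whole workforce

# Share of total work hours that must be performed by generative-AI users
# for the two figures to be consistent: roughly one fifth.
implied_user_share = aggregate_gain / hours_saved_by_users
print(f"Implied user share of work hours: {implied_user_share:.1%}")
```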
  4. By: Bruno Deffains (CRED - Université-Paris-Panthéon-Assas); Frédéric Marty (CNRS GREDEG, Université Côte d'Azur)
    Abstract: The implementation of generative artificial intelligence in legal services offers undeniable efficiency gains, but also raises fundamental issues for law firms. These challenges can be categorised along a broad continuum, ranging from changes in business lines to changes in the competitive environment and the internal organisation of law firms. This paper considers the risks that law firms face in terms of both the quality of the services they provide and perceived competition, both horizontally and vertically, considering possible relationships of dependency on suppliers of large language models and cloud infrastructures.
    Keywords: Generative artificial intelligence, legal services, accountability, competition, vertical relationships
    JEL: L42 L86
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:afd:wpaper:2503
  5. By: Pletcher, Scott Nicholas
    Abstract: For most, job performance evaluations are often just another expected part of the employee experience. While these evaluations take on different forms depending on the occupation, the usual objective is to align the employee’s activities with the values and objectives of the greater organization. Of course, pursuing this objective involves a whole host of complex skills and abilities which sometimes pose challenges to leaders and organizations. Automation has long been a favored tool of businesses to help bring consistency, efficiency, and accuracy to various processes, including many human capital management processes. Recent improvements in artificial intelligence (AI) approaches have enabled new options for its use in the HCM space. One such use case is assisting leaders in evaluating their employees’ performance. While using technology to measure and evaluate worker production is not novel, the potential now exists through AI algorithms to delve beyond just piecemeal work and make inferences about an employee’s economic impact, emotional state, aptitude for leadership and the likelihood of leaving. Many organizations are eager to use these tools, potentially saving time and money, and are keen on removing bias or inconsistency humans can introduce in the employee evaluation process. However, these AI models often consist of large, complex neural networks where transparency and explainability are not easily achieved. These black-box systems might do a reasonable job, but what are the implications of faceless algorithms making life-changing decisions for employees?
    Date: 2023–04–28
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:29yej_v1
  6. By: Rerotlhe B. Basele (Macquarie University); Peter C.B. Phillips (Yale University, University of Auckland, Singapore Management University); Shuping Shi (Macquarie University)
    Abstract: The recent artificial intelligence (AI) boom covers a period of rapid innovation and wide adoption of AI technologies across diverse industries. These developments have fueled an unprecedented frenzy in the Nasdaq, with AI-focused companies experiencing soaring stock prices that raise concerns about speculative bubbles and real-economy consequences. Against this background the present study investigates the formation of speculative bubbles in the Nasdaq stock market with a specific focus on the so-called ‘Magnificent Seven’ (Mag-7) individual stocks during the AI boom, spanning the period January 2017 to January 2025. We apply the real-time PSY bubble detection methodology of Phillips et al. (2015a, b), while controlling for market and industry factors for individual stocks. Confidence intervals to assess the degree of speculative behavior in asset price dynamics are calculated using the near-unit-root approach of Phillips (2023). The findings reveal the presence of speculative bubbles in the Nasdaq stock market and across all Mag-7 stocks. Nvidia and Microsoft experience the longest speculative periods over January 2017 to December 2021, while Nvidia and Tesla show the fastest rates of explosive behavior. Speculative bubbles persist in the market and in six of the seven stocks (excluding Apple) from December 2022 to January 2025. Near-unit-root inference indicates mildly explosive dynamics for Nvidia and Tesla (2017–2021) and local-to-unity near-explosive behavior for all assets in both periods.
    Date: 2025–03–04
    URL: https://d.repec.org/n?u=RePEc:cwl:cwldpp:2430
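The PSY procedure cited in the abstract is, at its core, a rolling supremum of right-tailed ADF t-statistics: a bubble is flagged when the statistic exceeds a right-tailed critical value. A minimal sketch of the backward sup ADF (BSADF) statistic follows; this is a simplification (zero augmentation lags, numpy-only OLS, no market or industry controls), and in the published method critical values come from Monte Carlo simulation rather than a fixed threshold:

```python
import numpy as np

def adf_tstat(y):
    """t-statistic on rho in:  dy_t = alpha + rho * y_{t-1} + e_t
    (zero augmentation lags, a simplification of the full ADF regression)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)           # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

def bsadf(y, min_window=30):
    """Backward sup ADF statistic at the final observation:
    the supremum of ADF t-stats over all expanding start points."""
    n = len(y)
    return max(adf_tstat(y[s:]) for s in range(n - min_window + 1))
```

On an explosive series the statistic turns large and positive, while on a plain random walk it stays near or below zero; the paper evaluates this sequence in real time over the sample to date episodes of exuberance.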
  7. By: Picogna, Federica (Nyenrode Business University); de Swart, Jacques; Kaya, Heysem (Utrecht University); Wetzels, Ruud
    Abstract: Recent developments in Artificial Intelligence (AI) have greatly benefited society, but they also come with risks. One of those risks is that AI has the potential to discriminate against certain groups of people. To address that risk, benchmark regulations such as the AI Act have been created, requiring AI systems to be fair and tasking auditors with ensuring their compliance. In order to do so, auditors use fairness measures. However, selecting a specific definition of fairness from the various available options and choosing a fairness measure from the numerous possibilities complicates the auditing process, making it challenging for auditors to correctly assess AI fairness. To assist them, we created a decision-making workflow that guides the auditor through the selection process of the most appropriate measure and, consequently, the most suitable definition of fairness. To simplify the use of this workflow, we have integrated it into the open-source program JASP for Audit and demonstrated its functionality with two examples: the COMPAS recidivism and the DUO case.
    Date: 2025–03–26
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:cpxmf_v2
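To give a sense of what such a workflow chooses among, here are two standard group-fairness measures an auditor might be steered toward. This is an illustrative sketch of the generic definitions, not the JASP for Audit implementation:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between the two groups.
    Zero means both groups receive positive predictions at the same rate."""
    p = np.asarray(y_pred, dtype=bool)
    g = np.asarray(group, dtype=bool)
    return p[g].mean() - p[~g].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates (equal-opportunity gap):
    among truly positive cases, how much more often one group is flagged."""
    t = np.asarray(y_true, dtype=bool)
    p = np.asarray(y_pred, dtype=bool)
    g = np.asarray(group, dtype=bool)
    return p[g & t].mean() - p[~g & t].mean()
```

Which measure is "right" depends on the audit context, e.g. whether base rates legitimately differ between groups, which is exactly the kind of branching the paper's workflow formalizes.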

This nep-ain issue is ©2025 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.