nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2025–12–08
nine papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Algorithmic Pricing and Sectoral Oversight: Smart Markets, Smarter Telecommunications Regulation By Gannon, John PL
  2. The Elusive Returns to AI Skills: Evidence from a Field Experiment By Teo Firpo; Lukas Niemann; Anastasia Danilov
  3. Workers' exposure to AI across development stages By Lewandowski, Piotr; Madoń, Karol; Park, Albert
  4. Can startups generate a competitive advantage with open AI tools? By Impink, Stephen Michael; Langburd Wright, Nataliya
  5. Beliefs about bots: How employers plan for AI in white-collar work By Brüll, Eduard; Mäurer, Samuel; Rostam-Afschar, Davud
  6. Belief Updating and AI Adoption: Experimental Evidence from Firms By Manuel Menkhoff
  7. Artificial Intelligence and the Rents of Finance Workers By Colliard, Jean-Edouard; Zhao, Junli
  8. AI regulation and policy pathways in China, European Union, and the USA By Deshpande, Advait
  9. What Do LLMs Want? By Thomas R. Cook; Sophia Kazinnik; Zach Modig; Nathan M. Palmer

  1. By: Gannon, John PL
    Abstract: The integration of artificial intelligence (AI) into pricing systems has heightened longstanding concerns about tacit collusion, particularly in structurally concentrated sectors like telecommunications. While competition authorities struggle with doctrinal limits around algorithmic coordination, this paper argues that sectoral regulators, such as those in telecommunications, are well placed to respond. Furthermore, rather than expanding direct oversight of AI tools, regulators should adopt a posture of focal point disruption: strategically examining how regulation itself influences the predictability, observability, and dimensionality of competition. Drawing on coordination theory and recent merger jurisprudence, the paper identifies existing rules, such as those governing offer presentation, personalization limits, and product standardization, that may inadvertently entrench collusive equilibria. In AI-mediated environments, these effects can be magnified. The paper proposes practical criteria for regulatory design that preserve asymmetries, support selective transparency, and reintroduce unpredictability into market interactions. Rather than waiting for general competition law to evolve, sector-specific regulators must actively assess whether their frameworks stabilize tacit alignment. The aim is not to constrain innovation but to ensure that regulatory architecture does not inadvertently make collusion easier in the age of AI, while maximizing the benefits AI might bring to competition. This approach offers a flexible, forward-looking alternative to AI-specific regulation or contorted competition law of uncertain effect, grounded in structural awareness and anticipatory governance.
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:itse25:331270
  2. By: Teo Firpo (Humboldt-Universität zu Berlin); Lukas Niemann (Tanso Technologies); Anastasia Danilov (Humboldt-Universität zu Berlin)
    Abstract: As firms increasingly adopt Artificial Intelligence (AI) technologies, how they adjust hiring practices for skilled workers remains unclear. This paper investigates whether AI-related skills are rewarded in talent recruitment by conducting a large-scale correspondence study in the United Kingdom. We submit 1,185 résumés to vacancies across a range of occupations, randomly assigning the presence or absence of advanced AI-related qualifications. These AI qualifications are added to résumés as voluntary signals and are not explicitly requested in the job postings. We find no statistically significant effect of listing AI qualifications in résumés on interview callback rates. However, a heterogeneity analysis reveals some positive and significant effects for positions in Engineering and Marketing. These results are robust to controlling for the total number of skills listed in job ads, the degree of match between résumés and job descriptions, and the level of expertise required. In an exploratory analysis, we find stronger employer responses to AI-related skills in industries with lower exposure to AI technologies. These findings suggest that the labor market valuation of AI-related qualifications is context-dependent and shaped by sectoral innovation dynamics.
    Keywords: return to skills; technological change; labor market; hiring; signaling; human capital; field experiment; AI-related skills
    JEL: O33 J23 J24 I26
    Date: 2025–11–17
    URL: https://d.repec.org/n?u=RePEc:rco:dpaper:552
  3. By: Lewandowski, Piotr; Madoń, Karol; Park, Albert
    Abstract: This paper develops a task-adjusted, country-specific measure of workers' exposure to Artificial Intelligence (AI) across 108 countries. Building on Felten et al. (2021), we adapt the Artificial Intelligence Occupational Exposure (AIOE) index to worker-level PIAAC data and extend it globally using comparable surveys and regression-based predictions, covering about 89% of global employment. Accounting for country-specific task structures reveals substantial cross-country heterogeneity: workers in low-income countries exhibit AI exposure levels roughly 0.8 U.S. standard deviations below those in high-income countries, largely due to differences in within-occupation task content. Regression decompositions attribute most cross-country variation to ICT intensity and human capital. High-income countries employ the majority of workers in highly AI-exposed occupations, while employment in low-income countries is concentrated in less exposed ones. Using two PIAAC cycles, we document rising AI exposure in high-income countries, driven by shifts in within-occupation tasks rather than employment structure. (An illustrative sketch of such a task-adjusted exposure score follows this entry.)
    Keywords: job tasks, occupations, AI, technology, skills
    JEL: J21 J23 J24
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:rwirep:331883
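    As referenced above, here is a minimal sketch of what a task-adjusted, worker-level exposure score in the spirit of Felten et al.'s AIOE might look like. The abstract does not spell out the construction, so the weighting scheme, names, and numbers below are illustrative assumptions, not the authors' code or data:

        import numpy as np

        def worker_ai_exposure(ai_relevance, task_intensity):
            """Illustrative task-adjusted exposure score (not the paper's actual formula).

            ai_relevance   : AI 'relevance' score per task/ability category, in the
                             spirit of the AIOE building blocks in Felten et al. (2021).
            task_intensity : a worker's task-use intensities for the same categories,
                             e.g. from PIAAC skill-use items.
            Returns an intensity-weighted average of AI relevance, so two workers in
            the same occupation can differ if their task mixes differ.
            """
            relevance = np.asarray(ai_relevance, dtype=float)
            weights = np.asarray(task_intensity, dtype=float)
            weights = weights / weights.sum()
            return float(relevance @ weights)

        # Hypothetical numbers: three task categories with different AI relevance.
        relevance = [0.9, 0.7, 0.2]
        print(worker_ai_exposure(relevance, [0.6, 0.3, 0.1]))  # task mix tilted toward exposed tasks
        print(worker_ai_exposure(relevance, [0.1, 0.2, 0.7]))  # task mix tilted toward less exposed tasks

    Comparing such scores for workers in the same occupation across countries is one way the within-occupation differences in task content described in the abstract could surface; the reported 0.8-standard-deviation gap is expressed relative to the U.S. distribution.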
  4. By: Impink, Stephen Michael (HEC Paris); Langburd Wright, Nataliya (Columbia University - Columbia Business School, Management)
    Abstract: We examine how open source generative AI adoption affects the venture performance of high-tech software startups. Using a matched sample, we find that startups that use generative AI in open product development raise about 15% less funding, especially in competitive markets with many similar AI adopters. However, startups targeting broad markets raise roughly 30% more funding when adopting generative AI early—within six months of its release—before a dominant design emerges. These findings suggest that while early AI adoption in the open can be beneficial, widespread use may erode differentiation. Overall, these results indicate that generative AI is not a silver bullet and may even hinder fundraising when competitive advantages are easily replicated.
    Keywords: Generative AI; Strategy; Technological Change; Open Innovation; GitHub
    JEL: O30
    Date: 2025–08–12
    URL: https://d.repec.org/n?u=RePEc:ebg:heccah:1583
  5. By: Brüll, Eduard; Mäurer, Samuel; Rostam-Afschar, Davud
    Abstract: We provide experimental evidence on how employers adjust expectations to automation risk in high-skill, white-collar work. Using a randomized information intervention among tax advisors in Germany, we show that firms systematically underestimate automatability. Information provision raises risk perceptions, especially for routine-intensive roles, yet it leaves short-run hiring plans unchanged. Instead, updated beliefs increase productivity and financial expectations with minor wage adjustments, implying within-firm inequality and limited rent-sharing. Employers also anticipate new tasks in legal tech, compliance, and AI interaction, and report higher training and adoption intentions.
    Keywords: Artificial Intelligence, Automation, Technological Change, Innovation, Technology Adoption, Firm Expectations, Belief Updating, Expertise, Labor Demand, White Collar Jobs, Training
    JEL: J23 J24 D22 D84 O33 C93
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:zewdip:333393
  6. By: Manuel Menkhoff
    Abstract: Using a large German firm survey, I randomize information on documented AI productivity gains and industry adoption rates and track firms over time. Beliefs about AI’s productivity potential rise significantly after the treatments across the prior distribution, without reducing uncertainty. These treatment-induced belief shifts map into behavior: in firms where the respondent has high decision authority, AI adoption is more likely one year later. Information about competitor adoption has direct effects on actions: incumbent adopters cut prices, while not-yet adopters revise business expectations upward. Together, the results highlight the role of expectations, strategic considerations, and informational frictions in shaping technology diffusion and its macroeconomic impact. (A textbook Bayesian benchmark is sketched after this entry for context.)
    Keywords: artificial intelligence, technological change, technology adoption, firm expectations, RCT, belief updating, price-setting
    JEL: D22 D84 E22 E31 O33
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_12291
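    As noted above, a textbook normal-normal Bayesian benchmark (not the paper's framework) helps put the belief results in context. With a prior N(\mu_0, \sigma_0^2) over AI productivity gains and an informative signal s with noise variance \sigma_s^2, updating yields
        \[
          \mu_{\text{post}} = \frac{\sigma_s^2\,\mu_0 + \sigma_0^2\, s}{\sigma_0^2 + \sigma_s^2},
          \qquad
          \sigma_{\text{post}}^2 = \frac{\sigma_0^2\,\sigma_s^2}{\sigma_0^2 + \sigma_s^2} \;<\; \sigma_0^2 ,
        \]
    so the posterior mean moves toward the signal and posterior variance mechanically falls. The reported pattern of rising beliefs without falling uncertainty is therefore notable against this simple benchmark.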
  7. By: Colliard, Jean-Edouard (HEC Paris - Finance Department); Zhao, Junli (Bayes Business School)
    Abstract: This paper studies how artificial intelligence (AI) affects the finance labor market when humans and AI perform different tasks in investment projects, and workers earn agency rents that grow with project size. We identify two key effects of AI improvement: a free-riding effect raises worker rents by increasing the probability of successful investment when the worker shirks, while a capital reallocation effect shifts investment toward workers with higher or lower rents, depending on which tasks AI improves. Contrary to standard predictions, AI can raise both worker rents and labor demand. We derive implications for capital allocation, labor demand, compensation, and welfare. (A stylized moral-hazard sketch of the free-riding effect follows this entry.)
    Keywords: Artificial intelligence; labor market; automation; rents in finance
    JEL: O33
    Date: 2025–07–08
    URL: https://d.repec.org/n?u=RePEc:ebg:heccah:1576
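    As noted above, the free-riding effect can be made concrete with a stylized limited-liability moral-hazard sketch (generic textbook notation, not the paper's model): the worker exerts effort at private cost c, the project succeeds with probability p_H under effort and p_L under shirking, and the worker is paid w only upon success. Incentive compatibility pins down the minimum success wage and hence the rent:
        \[
          p_H w - c \;\ge\; p_L w
          \quad\Longrightarrow\quad
          w \;\ge\; \frac{c}{p_H - p_L},
          \qquad
          \text{rent} \;=\; p_H w - c \;=\; \frac{c\, p_L}{p_H - p_L}.
        \]
    If better AI raises the chance that a project succeeds even when the worker shirks (a higher p_L), this rent rises, which is one way to read the claim that AI improvement can raise worker rents; how the capital reallocation effect then scales rents with project size is specific to the paper's model.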
  8. By: Deshpande, Advait
    Abstract: With the emergence of generative Artificial Intelligence (AI) tools (including large language models) in popular discourse, the debate on managing, governing, and regulating the impacts of AI on society has grown considerably. In part due to the unique breadth of AI's impacts and its varying implications for different strata of the workforce and for society, approaches to AI regulation appear to diverge significantly. This combination of scale and potential disruption has caught the attention of regulators worldwide, with China, the European Union (EU), and the United States of America (USA) as the forerunners in regulatory activity. The aim of this paper is to examine the current state of play of regulatory approaches to AI and related technologies in China, the EU, and the USA. The paper draws on documentary sources and peer-reviewed literature to examine the political and market dynamics at work as well as the policy pathways, including the processes, the decision-making approaches, and the intended outcomes of these regulatory and legislative efforts. The findings suggest that China's state-directed approach aims to integrate technical oversight, social harmony, and the growth of its sovereign AI capabilities. The EU's approach is a comprehensive, risk-based regulatory framework for AI, building on its strengths in exporting technology-related rule-making. The USA's approach to AI regulation is decentralised, with multi-agency legislation targeting specific AI applications and outcomes while retaining its advantages in AI innovation. The findings are expected to be of interest to academics, researchers, and key stakeholders from government, industry, and the third sector actively engaged in the regulation and governance of AI.
    Keywords: AI regulation, AI policy, China, European Union, Technology policy, USA
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:itse25:331266
  9. By: Thomas R. Cook; Sophia Kazinnik; Zach Modig; Nathan M. Palmer
    Abstract: Large language models (LLMs) are now used for economic reasoning, but their implicit “preferences” are poorly understood. We study LLM preferences as revealed by their choices in simple allocation games and a job-search setting. Most models favor equal splits in dictator-style allocation games, consistent with inequality aversion. Structural estimates recover Fehr–Schmidt parameters that indicate inequality aversion is stronger than in similar experiments with human participants. However, we find these preferences are malleable: reframing (e.g., masking social context) and learned control vectors shift choices toward payoff-maximizing behavior, while personas move them less effectively. We then turn to a more complex economic scenario. Extending a McCall job search environment, we also recover effective discounting from accept/reject policies, but observe that model responses may not always be rationalizable, and in some cases suggest inconsistent preferences. Efforts to steer LLM responses in the McCall scenario are also less consistent. Together, our results suggest (i) LLMs exhibit latent preferences that may not perfectly align with typical human preferences and (ii) LLMs can be steered toward desired preferences, though this is more difficult with complex economic tasks. (The standard Fehr–Schmidt and McCall formulations are sketched after this entry.)
    Keywords: large language models; simulation modeling
    JEL: C63 C68 C61 D14 D83 D91 E20 E21
    Date: 2025–11–25
    URL: https://d.repec.org/n?u=RePEc:fip:fedkrw:102166
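    As noted above, the two standard formulations the abstract alludes to are given here for reference (the paper's exact specifications may differ). Fehr–Schmidt inequality-averse utility for player i facing player j, with \alpha_i penalizing disadvantageous and \beta_i advantageous inequality:
        \[
          U_i(x_i, x_j) \;=\; x_i \;-\; \alpha_i \max\{x_j - x_i, 0\} \;-\; \beta_i \max\{x_i - x_j, 0\},
        \]
    where a dictator preferring equal splits is consistent with a sufficiently high \beta_i. In the McCall search model with discount factor \delta, unemployment income b, and wage offers drawn from F, the reservation wage w^* solves
        \[
          \frac{w^*}{1-\delta} \;=\; b + \delta \int \max\!\left\{ \frac{w'}{1-\delta}, \frac{w^*}{1-\delta} \right\} dF(w'),
        \]
    so an effective \delta can in principle be backed out from observed accept/reject thresholds.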

This nep-ain issue is ©2025 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.