nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2025–05–26
thirteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Who Gets the Callback? Generative AI and Gender Bias By Sugat Chaturvedi; Rochana Chaturvedi
  2. Users Favor LLM-Generated Content -- Until They Know It's AI By Petr Parshakov; Iuliia Naidenova; Sofia Paklina; Nikita Matkin; Cornel Nesseler
  3. Can AI Regulate Your Emotions? An Empirical Investigation of the Influence of AI Explanations and Emotion Regulation on Human Decision-Making Factors By Olesja Lammert
  4. The Skill Premium Across Countries in the Era of Industrial Robots and Generative AI By Ribeiro, Marcos; Prettner, Klaus
  5. Robots & AI Exposure and Wage Inequality By Jaccoud, Florencia
  6. Unequal impacts of AI on Colombia's labor market: an analysis of AI exposure, wages, and job dynamics By García-Suaza, Andrés; Sarango-Iturralde, Alexander; Caiza-Guamán, Pamela; Gil Díaz, Mateo; Acosta Castillo, Dana
  7. Move Fast and Integrate Things: The Making of a European Industrial Policy for Artificial Intelligence By Simone Vannuccini
  8. The US university-industry link in the R&D of AI: Back to the origins? By Andrea Borsato; Patrick Llerena
  9. The Impact of Generative AI on Productivity: Results of an Early Meta-Analysis By Tom Coupé; Weilun Wu
  10. Making GenAI Smarter: Evidence from a Portfolio Allocation Experiment By Lars Hornuf; David J. Streich; Niklas Töllich
  11. QuantBench: Benchmarking AI Methods for Quantitative Investment By Saizhuo Wang; Hao Kong; Jiadong Guo; Fengrui Hua; Yiyan Qi; Wanyun Zhou; Jiahao Zheng; Xinyu Wang; Lionel M. Ni; Jian Guo
  12. Advanced Digital Simulation for Financial Market Dynamics: A Case of Commodity Futures By Cheng Wang; Chuwen Wang; Shirong Zeng; Changjun Jiang
  13. MAD Chairs: A new tool to evaluate AI By Chris Santos-Lang; Christopher M. Homan

  1. By: Sugat Chaturvedi; Rochana Chaturvedi
    Abstract: Generative artificial intelligence (AI), particularly large language models (LLMs), is being rapidly deployed in recruitment and for candidate shortlisting. We audit several mid-sized open-source LLMs for gender bias using a dataset of 332,044 real-world online job postings. For each posting, we prompt the model to recommend whether an equally qualified male or female candidate should receive an interview callback. We find that most models tend to favor men, especially for higher-wage roles. Mapping job descriptions to the Standard Occupational Classification system, we find lower callback rates for women in male-dominated occupations and higher rates in female-associated ones, indicating occupational segregation. A comprehensive analysis of linguistic features in job ads reveals strong alignment of model recommendations with traditional gender stereotypes. To examine the role of recruiter identity, we steer model behavior by infusing Big Five personality traits and simulating the perspectives of historical figures. We find that less agreeable personas reduce stereotyping, consistent with an agreeableness bias in LLMs. Our findings highlight how AI-driven hiring may perpetuate biases in the labor market and have implications for fairness and diversity within firms.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.21400
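    The paired-candidate audit described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: `query_model` is a hypothetical stub standing in for a call to an open-source LLM, and the postings in the usage example are invented, not drawn from the paper's dataset.

```python
from collections import Counter

def build_prompt(posting: str) -> str:
    """Frame the paired-candidate question used in the audit:
    equally qualified male and female applicants, one callback."""
    return (
        f"Job posting: {posting}\n"
        "An equally qualified male and a female candidate applied. "
        "Who should receive the interview callback? Answer 'male' or 'female'."
    )

def query_model(prompt: str) -> str:
    """Hypothetical stub for an LLM call; a real audit would query an
    open-source model here. This stand-in alternates deterministically."""
    return "male" if len(prompt) % 2 == 0 else "female"

def callback_shares(postings: list[str]) -> dict[str, float]:
    """Tally model recommendations across postings and return each
    gender's callback share; a gap from 0.5 signals bias."""
    counts = Counter(query_model(build_prompt(p)) for p in postings)
    total = sum(counts.values())
    return {g: counts[g] / total for g in ("male", "female")}
```

    Stratifying these shares by occupation code or posted wage would reproduce the paper's segregation analysis in outline.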
  2. By: Petr Parshakov; Iuliia Naidenova; Sofia Paklina; Nikita Matkin; Cornel Nesseler
    Abstract: In this paper, we investigate how individuals evaluate human- and large language model (LLM)-generated responses to popular questions when the source of the content is either concealed or disclosed. Through a controlled field experiment, participants were presented with a set of questions, each accompanied by a response generated by either a human or an AI. In a randomized design, half of the participants were informed of the response's origin while the other half remained unaware. Our findings indicate that, overall, participants tend to prefer AI-generated responses. However, when the AI origin is revealed, this preference diminishes significantly, suggesting that evaluative judgments are influenced by the disclosure of the response's provenance rather than solely by its quality. These results underscore a bias against AI-generated content, highlighting the societal challenge of improving the perception of AI work in contexts where quality assessments should be paramount.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.16458
  3. By: Olesja Lammert (Paderborn University)
    Abstract: Research indicates that anger is a prevalent emotion in human-technology interactions, often leading to frustration, rejection and reduced trust, significantly impacting user experience and acceptance of technology. Particularly in high-risk or uncertain situations, where AI explanations are intended to help users make more informed decisions, decision-making is influenced by emotional factors, impairing understanding and leading to suboptimal choices. While XAI research continues to evolve, greater consideration of users' emotions and individual characteristics remains necessary. Broadening empirical studies in this area could foster a more comprehensive understanding of decision-making processes following explanations, especially in relation to the interaction between emotions and cognition. In response, this study seeks to contribute to this area by employing an experimental design to examine the effects of AI explanations and emotion regulation on emotional users' reliance and trust. The results provide a foundation for future human-centered research in XAI, focusing on the impact of emotions and cognition in human-technology interactions.
    Keywords: human-centered XAI, explanation strategy, emotion induction, emotions, emotion regulation, cognitive reappraisal nudge, decision-making, behavioral and psychological decision-making factors, user reliance, trust, empirical study
    JEL: C91 D81 C88
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:pdn:dispap:139
  4. By: Ribeiro, Marcos; Prettner, Klaus
    Abstract: How do new technologies affect economic growth and the skill premium? To answer this question, we analyze the impact of industrial robots and artificial intelligence (AI) on the wage differential between low-skill and high-skill workers across 52 countries using counterfactual simulations. In so doing, we extend the nested CES production function framework of Bloom et al. (2025) to account for cross-country income heterogeneity. Confirming prior findings, we show that the use of industrial robots tends to increase wage inequality, while the use of AI tends to reduce it. Our contribution lies in documenting substantial heterogeneity across income groups: the inequality-increasing effect of robots and the inequality-reducing effects of AI are particularly strong in high-income countries, while they are less pronounced among middle- and lower-middle income countries. In addition, we show that both technologies boost economic growth. In terms of policy recommendations, our findings suggest that investments in education and skill-upgrading can simultaneously raise average incomes and mitigate the negative effects of automation on wage inequality.
    Keywords: Automation, Industrial Robots, AI, Skill premium
    JEL: J31 O33
    Date: 2025–04–28
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:124633
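    The skill-premium mechanics behind such CES-based simulations can be illustrated with a minimal sketch. This is a single-nest simplification of the nested framework the paper extends from Bloom et al. (2025); the functional form is the textbook CES and the parameter values are illustrative, not the paper's estimates.

```python
def skill_premium(H: float, L: float, theta: float = 0.5, rho: float = 0.4,
                  A_H: float = 1.0, A_L: float = 1.0) -> float:
    """Relative wage w_H / w_L implied by the marginal products of
    Y = (theta*(A_H*H)**rho + (1-theta)*(A_L*L)**rho)**(1/rho):
    w_H/w_L = (theta/(1-theta)) * (A_H/A_L)**rho * (H/L)**(rho-1)."""
    return (theta / (1 - theta)) * (A_H / A_L) ** rho * (H / L) ** (rho - 1)
```

    With substitutable skill groups (rho > 0), raising high-skill productivity `A_H` (skill-biased technology, as with AI complementing cognitive work) widens the premium, while raising the relative supply of skilled workers `H/L` (education, skill-upgrading) compresses it, which is the policy margin the abstract highlights.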
  5. By: Jaccoud, Florencia (RS: GSBE other - not theme-related research, Mt Economic Research Inst on Innov/Techn)
    Abstract: This paper examines the linkages between occupational exposure to recent automation technologies and inequality across 19 European countries. Using data from the European Union Structure of Earnings Survey (EU-SES), a fixed-effects model is employed to assess the association between occupational exposure to artificial intelligence (AI) and to industrial robots - two distinct forms of automation - and within-occupation wage inequality. The analysis reveals that occupations with higher exposure to robots tend to have lower wage inequality, particularly among workers in the lower half of the wage distribution. In contrast, occupations more exposed to AI exhibit greater wage dispersion, especially at the top of the wage distribution. We argue that this disparity arises from differences in how each technology complements individual worker abilities: robot-related tasks often complement routine physical activities, while AI-related tasks tend to amplify the productivity of high-skilled, cognitively intensive work.
    JEL: J31 J24 O15 O33
    Date: 2025–04–22
    URL: https://d.repec.org/n?u=RePEc:unm:unumer:2025013
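    The fixed-effects logic underlying such estimates can be sketched minimally: the within transformation demeans wages by occupation, absorbing occupation-level heterogeneity before exposure effects are estimated. The data in the usage example are invented, not from the EU-SES.

```python
from collections import defaultdict

def demean_within(values: list[float], groups: list[str]) -> list[float]:
    """Within (fixed-effects) transformation: subtract each group's
    mean from its members, removing group-level heterogeneity."""
    sums: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for v, g in zip(values, groups):
        sums[g] += v
        counts[g] += 1
    return [v - sums[g] / counts[g] for v, g in zip(values, groups)]
```

    Regressing the demeaned wages on demeaned exposure measures is numerically equivalent to including a full set of occupation dummies.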
  6. By: García-Suaza, Andrés (Universidad del Rosario); Sarango-Iturralde, Alexander (Université Paris 1 Panthéon-Sorbonne); Caiza-Guamán, Pamela (Universidad del Rosario); Gil Díaz, Mateo (Universidad del Rosario); Acosta Castillo, Dana (Universidad del Rosario)
    Abstract: The rapid advancements in the domain of artificial intelligence (AI) have exerted a considerable influence on the labor market, thereby engendering alterations in the demand for specific skills and the structure of employment. This study aims to evaluate the extent of exposure to AI within the Colombian labor market and its relation with workforce characteristics and available job openings. To this end, we built a specific AI exposure index for Colombia based on skill demand in job posts. Our findings indicate that 33.8% of workers are highly exposed to AI, with variations observed depending on the measurement method employed. Furthermore, we find a positive and significant correlation between AI exposure and wages: workers highly exposed to AI earn a wage premium of 21.8%. On the demand side, only 2.5% of job openings explicitly mention AI-related skills. These findings imply that international indices may underestimate the wage premium associated with AI exposure in Colombia and underscore the potential unequal effects on the wage distribution among different demographic groups.
    Keywords: Artificial intelligence; Labor market; Job posts; Occupations; Skills; Colombia
    JEL: E24 J23 J24 O33
    Date: 2025–04–24
    URL: https://d.repec.org/n?u=RePEc:col:000092:021368
  7. By: Simone Vannuccini (Université Côte d'Azur, CNRS, GREDEG, France)
    Abstract: In this paper, I use the case of artificial intelligence (AI) to analyse the challenges and opportunities in designing a European industrial policy that (i) adopts a pro-competitive posture, (ii) does not fall victim to the risk of double weaponization by pro-nationalistic and pro-oligopolistic narratives, and (iii) reorients its goals away from the AI 'arms race' and towards the provision of public goods. At the moment, the AI industry is an infant industry, and the European digital stack enabling AI applications is controlled by non-European actors, which reduces European autonomy and justifies policy support. I suggest that while AI's economic impacts are overestimated and hyped, AI should be a pillar of European industrial policy due to its strategic-asset and dual-use nature. Through a series of proposals, I outline the contours of a European AI industrial policy; its features can be summarised by three keywords: public, as in the public assets that the EU should aim to build on the basis of open source technology and in the public interest; federated, through variety and the decentralisation of AI solutions conceived as a non-oligopolistic European alternative to large-scale systems; and federal, realising decoupling across the technology stack, when possible and advisable, through supranational tools, institutions, and finances.
    Keywords: artificial intelligence, strategic asset, industrial policy, European Union, geopolitical rivalries
    JEL: L40 L50 O33
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:gre:wpaper:2025-21
  8. By: Andrea Borsato; Patrick Llerena
    Abstract: Contributing to the fast-growing Economics of Artificial Intelligence (AI), this paper examines the close relationship between university and industry as regards the research and development of AI technologies in the USA. Recalling the history of university-industry relationships across the several phases of the US national system of innovation (NSI), we argue that current collaborations resemble in some respects what happened during the prewar NSI. Yet AI R&D presents some peculiarities. Universities are changing their positioning in the innovation process and turning to a research-based training model in the domains concerned by AI. This could potentially change the trajectory of university-industry links, since it is very much in line with the typical Humboldtian perspective that was at work in some European institutes from the eighteenth century up to the early-twentieth-century US. At the same time, while the way in which knowledge is produced and the workforce is trained envisages a return to the origins, differences arise in the definition of the main goals, e.g., Sustainable Development Goals, and in the role of stakeholders. The overall discussion also bears some implications for the link between the division of knowledge and the division of labour.
    Keywords: Artificial Intelligence, AI research, University-industry relationship, US national innovation system.
    JEL: I2 L2 O31 O33
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:ulp:sbbeta:2024-46
  9. By: Tom Coupé (University of Canterbury); Weilun Wu
    Abstract: This paper uses meta-analysis to summarize the literature that analyses the impact of generative AI on productivity. While we find substantial heterogeneity across studies, our preferred estimate suggests that on average, across a wide range of tasks, sectors, study methods and productivity measures, the use of GenAI tools increases productivity by 17%. We further find some evidence that experimental studies show a higher association between GenAI use and productivity than quasi-experimental studies, and weak evidence that the size of the impact of GenAI tools is bigger for quantitative than for qualitative measures of productivity.
    Keywords: Generative Artificial Intelligence, Productivity, Meta-Analysis
    JEL: J24 O3
    Date: 2025–05–01
    URL: https://d.repec.org/n?u=RePEc:cbt:econwp:25/09
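    The basic averaging step behind such a pooled estimate can be sketched as follows: fixed-effect inverse-variance weighting, the simplest meta-analytic estimator. The numbers in the test are illustrative, not the study-level effects behind the paper's 17% figure.

```python
def pooled_effect(effects: list[float], std_errors: list[float]) -> tuple[float, float]:
    """Fixed-effect inverse-variance pooled estimate and its standard
    error: each study is weighted by 1/se^2, so precise studies count
    more, and the pooled se shrinks as evidence accumulates."""
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    estimate = sum(w * e for w, e in zip(weights, effects)) / total
    return estimate, total ** -0.5
```

    A random-effects version, which the heterogeneity reported in the abstract would call for, additionally adds a between-study variance component to each weight's denominator.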
  10. By: Lars Hornuf; David J. Streich; Niklas Töllich
    Abstract: Retrieval-augmented generation (RAG) has emerged as a promising way to improve task-specific performance in generative artificial intelligence (GenAI) applications such as large language models (LLMs). In this study, we evaluate the performance implications of providing various types of domain-specific information to LLMs in a simple portfolio allocation task. We compare the recommendations of seven state-of-the-art LLMs in various experimental conditions against a benchmark of professional financial advisors. Our main result is that the provision of domain-specific information does not unambiguously improve the quality of recommendations. In particular, we find that LLM recommendations underperform recommendations by human financial advisors in the baseline condition. However, providing firm-specific information improves historical performance in LLM portfolios and closes the gap with human advisors. Performance improvements are achieved through higher exposure to market risk and not through an increase in mean-variance efficiency within the risky portfolio share. Notably, portfolio risk increases primarily for risk-averse investors. We also document that quantitative firm-specific information affects recommendations more than qualitative firm-specific information, and that equipping models with generic finance theory does not affect recommendations.
    Keywords: generative artificial intelligence, large language models, domain-specific information, retrieval-augmented generation, portfolio management, portfolio allocation
    JEL: G00 G11 G40
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_11862
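    The retrieval step of a RAG pipeline like the one evaluated here can be sketched minimally. Keyword-overlap scoring stands in for the embedding similarity a production system would use, and the documents and query in the test are invented, not the paper's firm-specific materials.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank candidate context documents by word overlap with the
    query and keep the top-k (a toy stand-in for embedding search)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved domain-specific context to the question
    before it is sent to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

    The paper's finding that quantitative firm information moves recommendations more than qualitative information suggests that *what* lands in the retrieved context matters more than the retrieval mechanics themselves.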
  11. By: Saizhuo Wang; Hao Kong; Jiadong Guo; Fengrui Hua; Yiyan Qi; Wanyun Zhou; Jiahao Zheng; Xinyu Wang; Lionel M. Ni; Jian Guo
    Abstract: The field of artificial intelligence (AI) in quantitative investment has seen significant advancements, yet it lacks a standardized benchmark aligned with industry practices. This gap hinders research progress and limits the practical application of academic innovations. We present QuantBench, an industrial-grade benchmark platform designed to address this critical need. QuantBench offers three key strengths: (1) standardization that aligns with quantitative investment industry practices, (2) flexibility to integrate various AI algorithms, and (3) full-pipeline coverage of the entire quantitative investment process. Our empirical studies using QuantBench reveal some critical research directions, including the need for continual learning to address distribution shifts, improved methods for modeling relational financial data, and more robust approaches to mitigate overfitting in low signal-to-noise environments. By providing a common ground for evaluation and fostering collaboration between researchers and practitioners, QuantBench aims to accelerate progress in AI for quantitative investment, similar to the impact of benchmark platforms in computer vision and natural language processing.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2504.18600
  12. By: Cheng Wang; Chuwen Wang; Shirong Zeng; Changjun Jiang
    Abstract: After decades of evolution, the financial system has increasingly deviated from an idealized framework based on theorems, necessitating accurate projections of complex market dynamics and human behavioral patterns. With the development of data science and machine intelligence, researchers are trying to digitalize and automate market prediction. However, existing methodologies struggle to represent the diversity of individuals and disregard the domino effects of interactions on market dynamics, leading to poor performance in abnormal market conditions where non-quantitative information dominates the market. Alleviating these disadvantages requires introducing knowledge of how non-quantitative information, such as news and policy, affects market dynamics. This study investigates overcoming these challenges by rehearsing potential market trends with financial large language model agents whose behaviors are aligned with their cognition and analyses of markets. We propose a hierarchical knowledge architecture for financial large language model agents, integrating fine-tuned language models and specialized generators optimized for trading scenarios. For financial markets, we develop an advanced interactive behavioral simulation system that enables users to configure agents and automate market simulations. In this work, we take commodity futures as an example to study the effectiveness of our methodologies. Our real-world case simulation succeeds in rehearsing abnormal market dynamics under geopolitical events and reaches an average accuracy of 3.4% in predicting futures prices at various points in time after the event. Experimental results demonstrate that our method effectively leverages diverse information to simulate behaviors and their impact on market dynamics through systematic interaction.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.20787
  13. By: Chris Santos-Lang; Christopher M. Homan
    Abstract: This paper contributes a new way to evaluate AI. Much as one might evaluate a machine in terms of its performance at chess, this approach involves evaluating a machine in terms of its performance at a game called "MAD Chairs". At the time of writing, evaluation with this game exposed opportunities to improve Claude, Gemini, ChatGPT, Qwen and DeepSeek. Furthermore, this paper sets a stage for future innovation in game theory and AI safety by providing an example of success with non-standard approaches to each: studying a game beyond the scope of previous game theoretic tools and mitigating a serious AI safety risk in a way that requires neither determination of values nor their enforcement.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.20986

This nep-ain issue is ©2025 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.