nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2025–04–28
thirteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Privacy Concerns and Willingness to Adopt AI Products: A Cross-Country Randomized Survey Experiment By Laura Brandimarte; Jerg Gutmann; Gerd Muehlheusser; Franziska Weber
  2. Uncovering the Fairness of AI: Exploring Focal Point, Inequality Aversion, and Altruism in ChatGPT's Dictator Game Decisions By Eléonore Dodivers; Ismaël Rafaï
  3. Social Reputation as one of the Key Drivers of AI Over-Reliance: An Experimental Test with ChatGPT-3.5 By Mathieu Chevrier
  4. The relationship between Artificial Intelligence (AI) exposure and return to education By Karol Madoń
  5. A Bridge Too Far: Signalling Effects of Artificial Intelligence Evaluation of Job Interviews By Agata Mirowska; Jbid Arsenyan
  6. Predictive AI and productivity growth dynamics: evidence from French firms By Luca Fontanelli; Mattia Guerini; Raffaele Miniaci; Angelo Secchi
  7. Elevating Developers' Accountability Awareness in AI Systems Development : The Role of Process and Outcome Accountability Arguments By Schmidt, Jan-Hendrik; Bartsch, Sebastian Clemens; Adam, Martin; Benlian, Alexander
  8. Workers’ exposure to AI across development By Piotr Lewandowski; Karol Madoń; Albert Park
  9. Artificial Intelligence and the Philippine Labor Market: Mapping Occupational Exposure and Complementarity By Micholo Cucio; Tristan Hennig
  10. Algorithm Impact on Fertility and R&D Sector By Miyake, Yusuke
  11. Conditional Gains: When AI Investment Enhances Firm Efficiency By Kazakis, Pantelis
  12. Decoding AI: Nine facts about how firms use artificial intelligence in France By Flavio Calvino; Luca Fontanelli
  13. Using NLP to create preliminary causal system maps for use in policy analysis By Barbrook-Johnson, Peter; Fu, Yuan

  1. By: Laura Brandimarte; Jerg Gutmann; Gerd Muehlheusser; Franziska Weber
    Abstract: We examine the trade-off between functionality and data privacy inherent in many AI products by conducting a randomized survey experiment with 1,734 participants from the US and several European countries. Participants’ willingness to adopt a hypothetical, AI-enhanced app is measured under three sets of treatments: (i) installation defaults (opt-in vs. opt-out), (ii) salience of data privacy risks, and (iii) regulatory regimes with different levels of data protection. In addition, we study how the willingness to adopt depends on individual attitudes and preferences. We find no effect of defaults or salience, while a regulatory regime with stricter privacy protection increases the likelihood that the app is adopted. Finally, greater data privacy concerns, greater risk aversion, lower levels of trust, and greater skepticism toward AI are associated with a significantly lower willingness to adopt the app.
    Keywords: artificial intelligence, privacy concerns, randomized survey experiment, smart products, technology adoption
    JEL: D80 D90 K24 L86 Z10
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_11774
  2. By: Eléonore Dodivers (Université Côte d'Azur, CNRS, GREDEG, France); Ismaël Rafaï (Toulouse School of Economics, Toulouse School of Management)
    Abstract: This paper investigates the social preferences of Artificial Intelligence Large Language Models (AI-LLMs) in Dictator Games. Brookins and Debacker (2024, Economics Bulletin) previously observed a tendency of ChatGPT-3.5 to give away half its endowment in a standard Dictator Game and interpreted this as an expression of fairness. We replicate their experiment and introduce a multiplicative factor on donations which varies the efficiency of the transfer. Varying transfer efficiency disentangles three explanations of donations: inequality aversion, altruism, and a focal point. Our results show that ChatGPT-3.5's donations should be interpreted as a focal point rather than an expression of fairness. In contrast, a more advanced version (ChatGPT-4o) made decisions that are better explained by altruistic motives than by inequality aversion. Our study highlights the necessity of exploring the parameter space when designing experiments to study AI-LLM preferences.
    Keywords: Artificial Intelligence, Large Language Models, Dictator Games, Experimental Economics, Social Preferences
    JEL: D90 O33 C02 C91
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:gre:wpaper:2025-09
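    A stylised illustration of how varying transfer efficiency can separate the three explanations named above (assumed notation, not the authors' exact design): the dictator with endowment E donates d, and the recipient receives the donation scaled by the efficiency factor k.
```latex
% Illustrative dictator-game payoffs with transfer efficiency k (assumed notation).
\[
\pi_{\text{dictator}} = E - d, \qquad \pi_{\text{recipient}} = k\,d .
\]
% A focal-point donor keeps the share d/E fixed (e.g. one half) regardless of k.
% An inequality-averse donor equalises payoffs: E - d = k d  =>  d = E/(1+k), so donations fall as k rises.
% An altruistic donor who values the recipient's (or total) payoff donates weakly more as k rises.
```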
  3. By: Mathieu Chevrier (Université Côte d'Azur, CNRS, GREDEG, France)
    Abstract: Understanding an agent's true competencies is crucial for a principal, particularly when delegating tasks. A principal may assign a task to an AI system, which is often perceived as highly competent, even in domains where its actual capabilities are limited. This experimental study demonstrates that participants mistakenly bet on ChatGPT-3.5's ability to solve mathematical tasks, even when explicitly informed that it only processes textual data. This overestimation leads participants to earn 67.2% less than those who rely on the competencies of another human. Overconfidence in ChatGPT-3.5 persists irrespective of task difficulty; neither time spent using ChatGPT-3.5 nor prior experience posing mathematical or counting questions to it mitigates this bias. I highlight that overconfidence in ChatGPT-3.5 is driven by the algorithm's social reputation: the more participants perceive ChatGPT-3.5 as socially trusted, the more they tend to rely on it.
    Keywords: ChatGPT-3.5, Overconfidence, Competence, Social Reputation, Overreliance, Laboratory experiment
    JEL: C92 D91
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:gre:wpaper:2025-12
  4. By: Karol Madoń
    Abstract: This paper studies the relationship between exposure to artificial intelligence (AI) and workers’ wages across European countries. Overall, a positive relationship between exposure to AI and workers’ wages is found; however, it differs considerably across workers and countries. High-skilled workers experience far higher wage premiums for AI-related skills than middle- and low-skilled workers. Positive associations are concentrated among occupations moderately and highly exposed to AI (between the 6th and 9th decile of exposure) and are weaker among the least exposed occupations. Returns to AI-related skills among high-skilled workers are even higher in Eastern European countries than in Western European countries. This heterogeneity likely originates from differences in overall labour costs between country groups. The results were obtained by estimating Mincerian wage regressions on the 2018 release of the EU Structure of Earnings Survey.
    Keywords: artificial intelligence, wages, technological change, Europe
    JEL: E24 J30 O33
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:ibt:wpaper:wp052025
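    As a rough illustration of the estimation strategy mentioned above, a Mincerian wage regression augmented with occupational AI exposure could take the following form (assumed notation and controls; the paper's exact specification may differ).
```latex
% Illustrative Mincerian specification with occupational AI exposure (assumed notation).
\[
\ln w_{i} = \alpha + \beta\,\mathrm{AIexp}_{o(i)} + \gamma\,\mathrm{Educ}_{i}
          + \delta_{1}\,\mathrm{Exper}_{i} + \delta_{2}\,\mathrm{Exper}_{i}^{2}
          + \mathbf{X}_{i}'\boldsymbol{\theta} + \varepsilon_{i},
\]
% where AIexp_{o(i)} is the AI-exposure score of worker i's occupation and X_i collects further
% controls. Heterogeneous returns can be read off by interacting AIexp with skill-group or
% country-group indicators, or by estimating the model within exposure deciles.
```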
  5. By: Agata Mirowska (NEOMA - Neoma Business School); Jbid Arsenyan (ESC [Rennes] - ESC Rennes School of Business)
    Abstract: Deploying Artificial Intelligence (AI) for job interview evaluations, while a potential signal of high innovativeness, may risk suggesting poor people orientation on the part of the organisation. This study uses an experimental methodology to investigate whether AI evaluation (AIE) is interpreted as a positive (high innovativeness) or negative (low people orientation) signal by the job applicant, and whether the ensuing effects on attitudes towards the organisation depend on the type of organisation implementing the technology. Results indicate that AIE is interpreted more strongly as a signal of how the organisation treats people than of how innovative it is. Additionally, removing humans from the selection process appears to be a ‘bridge too far’ when it comes to technological advances in selection.
    Keywords: applicant reactions, artificial intelligence, experimental design, job interview, personnel selection, signalling theory
    Date: 2025–03–17
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-04996541
  6. By: Luca Fontanelli (University of Brescia, Department of Economics and Management, CMCC Foundation – Euro-Mediterranean Center on Climate Change); Mattia Guerini (University of Brescia, Deparment of Economics and Management and Fondazione Eni Enrico Mattei); Raffaele Miniaci (University of Brescia, Department of Economics and Management); Angelo Secchi (PSE – University Paris 1 Pantheon-Sorbonne, CMCC Foundation – Euro-Mediterranean Center on Climate Change)
    Abstract: While artificial intelligence (AI) adoption holds the potential to enhance business operations through improved forecasting and automation, its relation with average productivity growth remains highly heterogeneous across firms. This paper shifts the focus and investigates the impact of predictive AI on the volatility of firms’ productivity growth rates. Using firm-level data from the 2019 French ICT survey, we provide robust evidence that AI use is associated with increased volatility. This relationship persists across multiple robustness checks, including analyses addressing causality concerns. To propose a possible mechanism underlying this effect, we compare firms that purchase AI from external providers (“AI buyers”) and those that develop AI in-house (“AI developers”). Our results show that heightened volatility is concentrated among AI buyers, whereas firms that develop AI internally experience no such effect. Finally, we find that AI-induced volatility among AI buyers is mitigated in firms with a higher share of ICT engineers and technicians, suggesting that AI’s successful integration requires complementary human capital.
    Keywords: Artificial intelligence, productivity growth volatility, coarsened exact matching
    JEL: D20 J24 O14 O33
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:fem:femwpa:2025.11
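    The keywords mention coarsened exact matching; the sketch below shows the basic idea in pandas (hypothetical column names, not the authors' code): coarsen the matching covariates into bins, keep only strata containing both AI users and non-users, and reweight the controls within each stratum.
```python
import pandas as pd


def cem_match(df: pd.DataFrame) -> pd.DataFrame:
    """Minimal coarsened-exact-matching sketch (illustrative; column names are assumed)."""
    df = df.copy()

    # 1. Coarsen continuous covariates into a few bins; categorical ones stay as-is.
    df["size_bin"] = pd.qcut(df["employment"], q=4, duplicates="drop")
    df["age_bin"] = pd.cut(df["firm_age"], bins=[0, 5, 15, 30, 200])
    strata = ["size_bin", "age_bin", "sector"]

    # 2. Keep only strata that contain both AI users (treated) and non-users (controls).
    has_both = df.groupby(strata, observed=True)["ai_user"].transform(lambda s: s.nunique() == 2)
    matched = df[has_both].copy()

    # 3. Weights: treated firms get weight 1; controls in each stratum are reweighted so
    #    that their total weight equals the number of treated firms in that stratum.
    def stratum_weights(g: pd.DataFrame) -> pd.Series:
        n_treated = (g["ai_user"] == 1).sum()
        n_control = (g["ai_user"] == 0).sum()
        return g["ai_user"].where(g["ai_user"] == 1, n_treated / n_control).astype(float)

    matched["cem_weight"] = matched.groupby(strata, observed=True, group_keys=False).apply(stratum_weights)
    return matched
```
    The weighted sample could then be used to compare productivity-growth volatility between AI users and otherwise similar non-users.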
  7. By: Schmidt, Jan-Hendrik; Bartsch, Sebastian Clemens; Adam, Martin; Benlian, Alexander
    Abstract: The increasing proliferation of artificial intelligence (AI) systems presents new challenges for the future of information systems (IS) development, especially in terms of holding stakeholders accountable for the development and impacts of AI systems. However, current governance tools and methods in IS development, such as AI principles or audits, are often criticized for their ineffectiveness in influencing AI developers’ attitudes and perceptions. Drawing on construal level theory and Toulmin’s model of argumentation, this paper employed a sequential mixed method approach to integrate insights from a randomized online experiment (Study 1) and qualitative interviews (Study 2). This combined approach helped us investigate how different types of accountability arguments affect AI developers’ accountability perceptions. In the online experiment, process accountability arguments were found to be more effective than outcome accountability arguments in enhancing AI developers’ perceived accountability. However, when supported by evidence, both types of accountability arguments prove to be similarly effective. The qualitative study corroborates and complements the quantitative study’s conclusions, revealing that process and outcome accountability emerge as distinct theoretical constructs in AI systems development. The interviews also highlight critical organizational and individual boundary conditions that shape how AI developers perceive their accountability. Together, the results contribute to IS research on algorithmic accountability and IS development by revealing the distinct nature of process and outcome accountability while demonstrating the effectiveness of tailored arguments as governance tools and methods in AI systems development.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:dar:wpaper:154037
  8. By: Piotr Lewandowski; Karol Madoń; Albert Park
    Abstract: This paper develops a task-adjusted, country-specific measure of workers’ exposure to artificial intelligence (AI) across 103 countries, covering approximately 86% of global employment. Building on the AI Occupational Exposure index by Felten et al. (2021), we map AI-related abilities to worker-level tasks using survey data from PIAAC, STEP, and CULS. We then predict occupational AI exposure in countries lacking survey data using a regression-based approach. Our findings show that accounting for within-occupation task differences significantly amplifies the development gradient in AI exposure. About 47% of cross-country variation is explained by differences in task content, particularly among high-skilled occupations. We attribute these differences primarily to cross-country differences in ICT use intensity, followed by human capital and globalisation-related firm characteristics. We also document rising AI exposure over the past decade, driven largely by changes in task composition. Our results highlight the central role of digital infrastructure and skill use in shaping global AI exposure.
    Keywords: tasks, AI, labor, technology, skills
    JEL: J21 J23 J24
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:ibt:wpaper:wp022025
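    One way to read the task adjustment described above (assumed notation, not the authors' exact formula): an occupation-level index in the spirit of Felten et al. (2021) aggregates ability-level AI applicability scores, while the country-specific measure reweights by the task intensities that workers in that occupation and country actually report in PIAAC, STEP, or CULS.
```latex
% Illustrative notation for a task-adjusted, country-specific AI exposure measure.
\[
\mathrm{AIOE}_{o} = \sum_{a} \omega_{a,o}\, A_{a},
\qquad
\mathrm{AIexp}_{o,c} = \sum_{t} s_{t,o,c}\, \rho_{t},
\]
% where A_a is the AI applicability of ability a, \omega_{a,o} its weight in occupation o,
% s_{t,o,c} the measured intensity of task t among workers in occupation o and country c,
% and \rho_t the AI-relatedness assigned to task t. The country-specific task shares s_{t,o,c}
% are what lets the same occupation carry different exposure across countries.
```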
  9. By: Micholo Cucio; Tristan Hennig
    Abstract: This paper combines labor force survey microdata with measures of occupational AI exposure and complementarity to examine the potential impact of recent advancements in AI on the Philippine labor market. We find that around one third of workers are highly exposed to AI, with around sixty percent of those also rated as highly complementary, indicating potential productivity gains. College-educated, young, urban, female, and well-paid workers in the services sector are most exposed. Business process outsourcing (BPO) is identified as the sector with the highest proportion of jobs at risk of displacement. Addressing regulatory gaps, infrastructure needs, and workforce reskilling is crucial to maximize benefits and mitigate negative impacts.
    Keywords: Artificial Intelligence (AI); Labor Market; Philippines; Business Process Outsourcing (BPO); AI Exposure and Complementarity
    Date: 2025–02–21
    URL: https://d.repec.org/n?u=RePEc:imf:imfwpa:2025/043
  10. By: Miyake, Yusuke
    Abstract: This study investigates how Artificial Intelligence (AI) affects fertility decisions, economic growth, and overall social welfare. Despite substantial technological progress and increases in economic output (GDP), advanced economies, notably Japan, face severe demographic challenges due to dramatically declining fertility rates. This phenomenon raises important questions regarding the traditional measures of economic prosperity, prompting a re-evaluation of GDP as a reliable indicator of social welfare. To address these issues, this article develops a dynamic economic growth model incorporating heterogeneous human capital (skilled and unskilled labor) and introduces AI as a new, distinct form of capital investment. Unlike traditional physical capital, AI capital features negligible depreciation rates, significantly altering investment decisions and long-term growth dynamics. On the demand side, households optimize their utility by allocating their limited time between labor supply, leisure, and child-rearing activities, directly influencing fertility rates and human capital accumulation. This paper argues that AI-driven algorithms fundamentally improve market efficiency by precisely matching heterogeneous consumer preferences and supplier characteristics, leading to optimal resource allocation. Unlike the traditional “law of one price”, algorithm-driven markets generate multiple equilibrium prices, varying according to individual preferences and attributes, characterized herein as a shift toward a “law of multiple prices”. The analysis suggests critical policy implications, emphasizing the need for refined economic and educational policies that address the implications of AI-driven market dynamics on fertility choices and income distribution. In particular, policy interventions must strategically promote educational reforms that diversify and enrich human capital, aligning it more closely with the demands of AI-intensive industries. This model provides a theoretical framework for understanding the intricate interplay between AI, demographic shifts, economic inequality, and long-term growth trajectories.
    Keywords: Artificial Intelligence, Fertility Decline, Endogenous Growth, Algorithmic Economics, Human Capital, Social Welfare
    JEL: J13 J24 O33 O4 O41
    Date: 2025–04–04
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:124245
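    A minimal sketch of the two model ingredients the abstract emphasises (assumed functional forms, not the paper's exact model): a household time constraint over labour, leisure, and child-rearing, and an AI capital stock that, unlike physical capital, accumulates essentially without depreciation.
```latex
% Illustrative time constraint and capital accumulation (assumed functional forms).
\[
\ell_{t} + x_{t} + \phi\, n_{t} = 1,
\qquad
K_{t+1} = (1-\delta)\,K_{t} + I_{t},
\qquad
K^{AI}_{t+1} = K^{AI}_{t} + I^{AI}_{t},
\]
% where \ell is labour supply, x leisure, n the number of children with per-child time cost \phi,
% K physical capital depreciating at rate \delta, and K^{AI} the AI capital stock whose
% negligible depreciation is set to zero here.
```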
  11. By: Kazakis, Pantelis
    Abstract: The rapid adoption of artificial intelligence (AI) in the corporate world has raised important questions about its impact on firm performance. This paper examines whether investments in AI—measured by the share of AI-skilled workers—are associated with improvements in firm efficiency. The analysis reveals that AI investment alone does not lead to higher efficiency. That is, firms employing more AI-skilled labor do not, on average, perform more efficiently than others. However, the results show that this relationship depends on firm context. Firms operating in more competitive markets appear to benefit more from AI investment. Additionally, firms that engage more heavily in tax avoidance also realize greater efficiency gains from AI, possibly due to their more aggressive or strategic resource allocation practices.
    Keywords: artificial intelligence (AI), firm efficiency, market power, tax avoidance
    JEL: D40 E22 G30 H26 L11
    Date: 2025–04–02
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:124246
  12. By: Flavio Calvino; Luca Fontanelli
    Abstract: This study explores how French firms use artificial intelligence, leveraging a uniquely detailed and representative dataset with information on the use of specific AI technologies and on how AI systems are deployed across different business functions within firms, in 2020 and 2022. The use of AI is still rare, amounting to 6% of firms, and varies by technology, with sectors often specialising in specific technologies and functions. While most firms specialise in a single AI technology applied to a single business function, larger firms adopt multiple technologies for different purposes. Firms adopting AI technologies are generally larger (except for those using natural language-related AI) and tend to be more digitally intensive, though firms leveraging NLG and autonomous movement AI deviate from this pattern. Firm size appears to be a relevant driver of AI use in business functions requiring integration with tangible processes, while digital capabilities appear particularly relevant for AI applications in business functions more related to intangible ones. AI technologies differ widely in their technological interdependencies and applicability, with machine learning for data analysis and AI technologies related to automation and data-driven decision making emerging as the core of the AI paradigm.
    Keywords: Technology Diffusion, Artificial Intelligence, Business Function, ICT
    Date: 2025–04–07
    URL: https://d.repec.org/n?u=RePEc:ssa:lemwps:2025/13
  13. By: Barbrook-Johnson, Peter; Fu, Yuan
    Abstract: The use of causal systems mapping in interdisciplinary and policy research has increased in recent years. Causal system maps typically rely on stakeholder opinion for their creation. This works well but does not make use of all available literature and can be time-consuming. For most topics, there is an abundance of text data in easily identifiable journal papers, grey literature, and policy documents. Using this data to support causal systems mapping exercises has the potential to make them more comprehensive and better connected to evidence. There is also potential to create maps from this data quickly, if the processes used become routine. In this paper, we develop an approach using Natural Language Processing (NLP) techniques and text data from journal papers to create preliminary causal system maps. Using the example topic of power sector decarbonisation policies, and comparisons to a related participatory exercise, we consider the best techniques to use, the workflows which might speed up mapping exercises, and potential risks. The approach produces familiar factors and logical individual relationships, but the resulting causal maps have a structure that mirrors attention in the literature rather than real causal patterns, and they overemphasise direct connections between policies and outcomes rather than longer, more realistic causal chains. We highlight the importance of the choice of documents, and sections of documents, to use, and note that the NLP workflow is full of subjective judgements and decisions. We argue that a clear purpose must be identified before beginning, to inform these choices; purely exploratory exercises, which are relatively common in systems mapping, are likely to be flawed.
    Keywords: Natural language processing, Pretrained language model, Deep learning, Systems mapping, Causal maps, Policy analysis, Decarbonisation policy
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:amz:wpaper:2025-09
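    A minimal sketch of the kind of NLP workflow described above (an assumed pipeline built on spaCy and networkx, not the authors' implementation): scan sentences for causal cue phrases, take the surrounding noun chunks as candidate cause and effect, and collect the pairs into a directed graph that can serve as a preliminary causal map.
```python
import networkx as nx   # pip install networkx
import spacy            # pip install spacy; python -m spacy download en_core_web_sm

# Cue phrases treated as signals of a causal claim (a deliberately simple, assumed list).
CAUSAL_CUES = ["leads to", "results in", "causes", "drives", "contributes to"]


def preliminary_causal_map(texts: list[str]) -> nx.DiGraph:
    """Build a rough cause -> effect graph from raw document texts (illustrative only)."""
    nlp = spacy.load("en_core_web_sm")
    graph = nx.DiGraph()
    for doc in nlp.pipe(texts):
        for sent in doc.sents:
            lowered = sent.text.lower()
            for cue in CAUSAL_CUES:
                idx = lowered.find(cue)
                if idx == -1:
                    continue
                chunks = list(sent.noun_chunks)
                # Candidate cause: last noun chunk ending before the cue;
                # candidate effect: first noun chunk starting after it.
                causes = [c for c in chunks if c.end_char - sent.start_char <= idx]
                effects = [c for c in chunks if c.start_char - sent.start_char >= idx + len(cue)]
                if causes and effects:
                    u = causes[-1].text.lower().strip()
                    v = effects[0].text.lower().strip()
                    weight = graph.get_edge_data(u, v, default={"weight": 0})["weight"]
                    graph.add_edge(u, v, weight=weight + 1)  # count repeated cause-effect pairs
    return graph
```
    Merging synonymous factor labels into common nodes, pruning low-weight edges, and choosing which documents and sections to feed in are exactly the subjective, judgement-laden steps the abstract warns about.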

This nep-ain issue is ©2025 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.