nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒07‒08
seven papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Algorithmic Collusion in Dynamic Pricing with Deep Reinforcement Learning By Shidi Deng; Maximilian Schiffer; Martin Bichler
  2. Generative AI Enhances Team Performance and Reduces Need for Traditional Teams By Ning Li; Huaikang Zhou; Kris Mikel-Hong
  3. How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs By Shumiao Ouyang; Hayong Yun; Xingjian Zheng
  4. Language Models Trained to do Arithmetic Predict Human Risky and Intertemporal Choice By Jian-Qiao Zhu; Haijiang Yan; Thomas L. Griffiths
  5. AI and Digital Technology: Gender Gaps in Higher Education By J. Ignacio Conde-Ruiz; Juan-José Ganuza; Manuel García-Santana; Carlos Victoria
  6. Intelligent financial system: how AI is transforming finance By Iñaki Aldasoro; Leonardo Gambacorta; Anton Korinek; Vatsala Shreeti; Merlin Stein
  7. AI Diffusion to Low-Middle Income Countries; A Blessing or a Curse? By Rafael Andersson Lipcsey

  1. By: Shidi Deng; Maximilian Schiffer; Martin Bichler
    Abstract: Nowadays, a significant share of the Business-to-Consumer sector is based on online platforms like Amazon and Alibaba and uses Artificial Intelligence for pricing strategies. This has sparked debate on whether pricing algorithms may tacitly collude to set supra-competitive prices without being explicitly designed to do so. Our study addresses these concerns by examining the risk of collusion when Reinforcement Learning algorithms are used to decide on pricing strategies in competitive markets. Prior research in this field focused on Tabular Q-learning (TQL) and led to opposing views on whether learning-based algorithms can lead to supra-competitive prices. Our work contributes to this ongoing discussion by providing a more nuanced numerical study that goes beyond TQL by additionally capturing off- and on-policy Deep Reinforcement Learning (DRL) algorithms. We study multiple Bertrand oligopoly variants and show that algorithmic collusion depends on the algorithm used. In our experiments, TQL exhibits more pronounced collusion and price dispersion than the DRL algorithms. We show that the severity of collusion depends not only on the algorithm used but also on the characteristics of the market environment. We further find that Proximal Policy Optimization appears to be less prone to collusive outcomes than other state-of-the-art DRL algorithms.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.02437&r=
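    Note: the abstract does not spell out the learning environment. Purely as illustration, and in the spirit of this literature, the sketch below shows how two independent tabular Q-learning agents might be set up to price in a repeated Bertrand duopoly with logit demand; the demand specification and all parameter values are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Discrete price grid and logit demand (illustrative values, not the paper's).
prices = np.linspace(1.0, 2.0, 11)        # feasible prices
cost, a, mu = 1.0, 2.0, 0.25              # marginal cost, quality, differentiation

def profits(p_i, p_j):
    """Per-period profits in a symmetric logit Bertrand duopoly."""
    u = np.exp((a - np.array([p_i, p_j])) / mu)
    shares = u / (u.sum() + 1.0)          # outside option normalised to 1
    return (np.array([p_i, p_j]) - cost) * shares

n = len(prices)
alpha, gamma, eps = 0.1, 0.95, 0.05       # learning rate, discount factor, exploration
# State = last period's (own, rival) price indices; one Q-table per firm.
Q = [np.zeros((n, n, n)) for _ in range(2)]
state = (rng.integers(n), rng.integers(n))

for t in range(200_000):                  # short run; the literature trains far longer
    acts = []
    for i in range(2):
        own, rival = (state[0], state[1]) if i == 0 else (state[1], state[0])
        if rng.random() < eps:
            acts.append(int(rng.integers(n)))   # explore
        else:
            acts.append(int(np.argmax(Q[i][own, rival])))  # exploit
    pi = profits(prices[acts[0]], prices[acts[1]])
    next_state = (acts[0], acts[1])
    for i in range(2):
        own, rival = (state[0], state[1]) if i == 0 else (state[1], state[0])
        n_own, n_rival = (next_state[0], next_state[1]) if i == 0 else (next_state[1], next_state[0])
        best_next = Q[i][n_own, n_rival].max()
        Q[i][own, rival, acts[i]] += alpha * (pi[i] + gamma * best_next - Q[i][own, rival, acts[i]])
    state = next_state

# Prices persistently above the one-shot Bertrand-Nash level would indicate tacit collusion.
print("last prices charged:", prices[state[0]], prices[state[1]])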
  2. By: Ning Li; Huaikang Zhou; Kris Mikel-Hong
    Abstract: Recent advancements in generative artificial intelligence (AI) have transformed collaborative work processes, yet the impact on team performance remains underexplored. Here we examine the role of generative AI in enhancing or replacing traditional team dynamics using a randomized controlled experiment with 435 participants across 122 teams. We show that teams augmented with generative AI significantly outperformed those relying solely on human collaboration across various performance measures. Interestingly, teams with multiple AIs did not exhibit further gains, indicating diminishing returns with increased AI integration. Our analysis suggests that centralized AI usage by a few team members is more effective than distributed engagement. Additionally, individual-AI pairs matched the performance of conventional teams, suggesting a reduced need for traditional team structures in some contexts. However, despite this capability, individual-AI pairs still fell short of the performance levels achieved by AI-assisted teams. These findings underscore that while generative AI can replace some traditional team functions, more comprehensively integrating AI within team structures provides superior benefits, enhancing overall effectiveness beyond individual efforts.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.17924&r=
  3. By: Shumiao Ouyang; Hayong Yun; Xingjian Zheng
    Abstract: This study explores the risk preferences of Large Language Models (LLMs) and how the process of aligning them with human ethical standards influences their economic decision-making. By analyzing 30 LLMs, we uncover a broad range of inherent risk profiles ranging from risk-averse to risk-seeking. We then explore how different types of AI alignment, a process that ensures models act according to human values and that focuses on harmlessness, helpfulness, and honesty, alter these base risk preferences. Alignment significantly shifts LLMs towards risk aversion, with models that incorporate all three ethical dimensions exhibiting the most conservative investment behavior. Replicating a prior study that used LLMs to predict corporate investments from company earnings call transcripts, we demonstrate that although some alignment can improve the accuracy of investment forecasts, excessive alignment results in overly cautious predictions. These findings suggest that deploying excessively aligned LLMs in financial decision-making could lead to severe underinvestment. We underline the need for a nuanced approach that carefully balances the degree of ethical alignment with the specific requirements of economic domains when leveraging LLMs within finance.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.01168&r=
  4. By: Jian-Qiao Zhu; Haijiang Yan; Thomas L. Griffiths
    Abstract: The observed similarities in the behavior of humans and Large Language Models (LLMs) have prompted researchers to consider the potential of using LLMs as models of human cognition. However, several significant challenges must be addressed before LLMs can be legitimately regarded as cognitive models. For instance, LLMs are trained on far more data than humans typically encounter, and may have been directly trained on human data in specific cognitive tasks or aligned with human preferences. Consequently, the origins of these behavioral similarities are not well understood. In this paper, we propose a novel way to enhance the utility of LLMs as cognitive models. This approach involves (i) leveraging computationally equivalent tasks that both an LLM and a rational agent need to master for solving a cognitive problem and (ii) examining the specific task distributions required for an LLM to exhibit human-like behaviors. We apply this approach to decision-making -- specifically risky and intertemporal choice -- where the key computationally equivalent task is the arithmetic of expected value calculations. We show that an LLM pretrained on an ecologically valid arithmetic dataset, which we call Arithmetic-GPT, predicts human behavior better than many traditional cognitive models. Pretraining LLMs on ecologically valid arithmetic datasets is sufficient to produce a strong correspondence between these models and human decision-making. Our results also suggest that LLMs used as cognitive models should be carefully investigated via ablation studies of the pretraining data.
    Date: 2024–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2405.19313&r=
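    Note: the "arithmetic of expected value calculations" referred to above reduces to textbook computations. The toy sketch below (illustrative numbers, not taken from the paper) shows the two comparisons that a risky choice and an intertemporal choice boil down to, assuming exponential discounting for the latter.
# Toy examples of the arithmetic such a model must effectively learn to approximate.
def expected_value(outcomes):
    """Expected value of a gamble given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def present_value(amount, delay, delta=0.95):
    """Exponentially discounted value of a payoff received after `delay` periods."""
    return amount * delta ** delay

# Risky choice: a 50% chance of 100 versus a sure 45.
print(expected_value([(0.5, 100.0), (0.5, 0.0)]) > 45.0)   # True: the gamble has higher EV
# Intertemporal choice: 120 in 10 periods versus 70 now.
print(present_value(120.0, 10) > 70.0)                     # True under delta = 0.95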
  5. By: J. Ignacio Conde-Ruiz; Juan-José Ganuza; Manuel García-Santana; Carlos Victoria
    Abstract: This article examines gender gaps in higher education in Spain from 1985 to 2023 in the context of technological advancements, particularly digitalization and artificial intelligence (AI). We identify significant disparities, with women overrepresented in health-related fields and underrepresented in STEM disciplines. This imbalance is concerning, as STEM fields offer better employment prospects and higher salaries. We analyze university degrees' exposure to technological change through Routine Task Intensity (RTI) and AI exposure indices. Our findings show that women are more likely to enroll in degrees with high RTI, which are prone to automation, and less likely to enroll in degrees with high AI exposure, which stand to benefit from technological advancements. This suggests that technological change could widen existing labor market gender gaps. To address this, we recommend policies to boost female participation in STEM fields and to adapt educational curricula to reduce routine tasks and enhance AI complementarities, ensuring equitable labor market outcomes amid technological change.
    Keywords: gender gaps, artificial intelligence, higher education, STEM, technological change, self-actualization
    JEL: I23 I26 J16 J24
    Date: 2024–05
    URL: https://d.repec.org/n?u=RePEc:bge:wpaper:1450&r=
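    Note: the abstract does not report how the exposure indices are constructed or applied; the sketch below is a purely hypothetical illustration of the kind of enrollment-weighted comparison it describes, with all degree-level figures invented.
# Hypothetical degree-level data: (enrollment, female share, RTI index, AI exposure index).
degrees = {
    "Nursing":          (30_000, 0.80, 0.45, 0.20),
    "Computer Science": (25_000, 0.15, 0.10, 0.85),
    "Business":         (40_000, 0.55, 0.60, 0.50),
    "Medicine":         (20_000, 0.70, 0.30, 0.40),
}

def weighted_exposure(sex):
    """Enrollment-weighted mean RTI and AI exposure for one sex."""
    rti = ai = weight = 0.0
    for enrol, fem_share, rti_idx, ai_idx in degrees.values():
        w = enrol * (fem_share if sex == "female" else 1.0 - fem_share)
        rti += w * rti_idx
        ai += w * ai_idx
        weight += w
    return rti / weight, ai / weight

for sex in ("female", "male"):
    rti, ai = weighted_exposure(sex)
    print(f"{sex}: mean RTI = {rti:.2f}, mean AI exposure = {ai:.2f}")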
  6. By: Iñaki Aldasoro; Leonardo Gambacorta; Anton Korinek; Vatsala Shreeti; Merlin Stein
    Abstract: At the core of the financial system is the processing and aggregation of vast amounts of information into price signals that coordinate participants in the economy. Throughout history, advances in information processing, from simple bookkeeping to artificial intelligence (AI), have transformed the financial sector. We use this framing to analyse how generative AI (GenAI) and emerging AI agents, as well as, more speculatively, artificial general intelligence, will impact finance. We focus on four functions of the financial system: financial intermediation, insurance, asset management and payments. We also assess the implications of advances in AI for financial stability and prudential policy. Moreover, we investigate potential spillover effects of AI on the real economy, examining both an optimistic and a disruptive AI scenario. To address the transformative impact of advances in AI on the financial system, we propose a framework for upgrading financial regulation based on well-established general principles for AI governance.
    Keywords: artificial intelligence, generative AI, AI agents, financial system, financial institutions
    JEL: E31 J24 O33 O40
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:bis:biswps:1194&r=
  7. By: Rafael Andersson Lipcsey
    Abstract: Rapid advancements in AI have sparked significant research into its impacts on productivity and labor, which can be profoundly positive or negative. Often overlooked in this debate is an understanding of how AI technologies spread across and within economies. Equally ignored are developing economies, which face substantial labor market impacts from rapid AI diffusion and a loss of competitiveness from slow diffusion. This paper reviews the literature on technology diffusion and proposes a three-way framework for understanding AI diffusion: global value chains, research collaboration, and inter-firm knowledge transfers. This framework is used to measure AI diffusion in sixteen low-middle-income and four developed economies, as well as to evaluate their dependence on China and the USA for access to AI technologies. The study finds a significant gap in diffusion rates between the two groups, but current trends indicate it is narrowing. China is identified as a crucial future source of AI diffusion through value chains, while the USA is more influential in research and knowledge transfers. The paper's limitations include the omission of additional data sources and countries, and the lack of investigation into the relationship between diffusion and technology intensity. Nonetheless, it raises salient macro-level questions about AI diffusion and suggests an emphasis on mechanisms for redistributing AI-induced economic gains, and on bilateral agreements as a complement to international accords, to address the diverse needs and corresponding risks faced by economies transitioning into an AI-dominated era. Additionally, it highlights the need for research into the links between AI diffusion, technology intensity, and productivity; for case studies combined with targeted policy recommendations; for more accurate methods of measuring AI diffusion; and for a deeper investigation into the labor market impacts particular to LMICs.
    Date: 2024–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2405.20399&r=
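    Note: the abstract names three diffusion channels but not how they are combined into a country-level measure; the sketch below shows one simple, purely hypothetical way to aggregate such channel scores into a comparable index, with made-up values.
# Hypothetical per-country channel scores (arbitrary units): AI-related trade in value
# chains, international AI research collaboration, and inter-firm knowledge transfers.
raw = {
    "Country A": {"value_chains": 120.0, "research": 45.0, "knowledge_transfer": 30.0},
    "Country B": {"value_chains": 15.0,  "research": 8.0,  "knowledge_transfer": 5.0},
    "Country C": {"value_chains": 60.0,  "research": 20.0, "knowledge_transfer": 12.0},
}

channels = ["value_chains", "research", "knowledge_transfer"]

def min_max(values):
    """Rescale a list of channel scores to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

# Min-max normalize each channel across countries, then average with equal weights.
countries = list(raw)
normalized = {c: min_max([raw[k][c] for k in countries]) for c in channels}
index = {k: sum(normalized[c][i] for c in channels) / len(channels)
         for i, k in enumerate(countries)}

for k, v in sorted(index.items(), key=lambda kv: -kv[1]):
    print(f"{k}: diffusion index = {v:.2f}")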

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.