nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒10‒28
seventeen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Mining Causality: AI-Assisted Search for Instrumental Variables By Sukjin Han
  2. Estimation of Games under No Regret: Structural Econometrics for AI By Niccolo Lomys; Lorenzo Magnolfi
  3. Strategic Collusion of LLM Agents: Market Division in Multi-Commodity Competitions By Ryan Y. Lin; Siddhartha Ojha; Kevin Cai; Maxwell F. Chen
  4. Experimental evidence that delegating to intelligent machines can increase dishonest behaviour By Köbis, Nils; Rahwan, Zoe; Bersch, Clara; Ajaj, Tamer; Bonnefon, Jean-François; Rahwan, Iyad
  5. Experimental Evidence That Conversational Artificial Intelligence Can Steer Consumer Behavior Without Detection By Tobias Werner; Ivan Soraperra; Emilio Calvano; David C. Parkes; Iyad Rahwan
  6. Artificial Intelligence (AI) in Early Childhood Education (ECE): Do Effects and Interactions Matter? By Yahya Fikri; Mohamed Rhalma
  7. The Effect of Generative AI Adoption on Knowledge Workers : Evidence from Luxembourg By Musina, Sofiya
  8. Effects of AI Feedback on Learning, the Skill Gap, and Intellectual Diversity By Christoph Riedl; Eric Bogert
  9. Consumer Perceptions of AI-Generated Content and Disclaimer in Terms of Authenticity, Deception, and Content Attribute By Han, Seoungmin
  10. AI/Robot City Expectation and AI Education Aspiration: A Survey of Four Countries - Japan, United States, China, and Australia By Ohno, Shiroh
  11. AI, Automation, and Taxation By Bastani, Spencer; Waldenström, Daniel
  12. Harnessing Generative AI for Economic Insights By Manish Jha; Jialin Qian; Michael Weber; Baozhong Yang
  13. Do capital incentives distort technology diffusion? Evidence on cloud, big data and AI By Timothy DeStefano; Nick Johnstone; Richard Kneller; Jonathan Timmis
  14. The macroeconomic implications of the Gen-AI economy By Pablo Guerron-Quintana; Tomoaki Mikami; Jaromir Nosal
  15. What Does ChatGPT Make of Historical Stock Returns? Extrapolation and Miscalibration in LLM Stock Return Forecasts By Shuaiyu Chen; T. Clifton Green; Huseyin Gulen; Dexin Zhou
  16. The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging By Masanori Hirano; Kentaro Imajo
  17. A Unified Framework to Classify Business Activities into International Standard Industrial Classification through Large Language Models for Circular Economy By Xiang Li; Lan Zhao; Junhao Ren; Yajuan Sun; Chuan Fu Tan; Zhiquan Yeo; Gaoxi Xiao

  1. By: Sukjin Han
    Abstract: The instrumental variables (IVs) method is a leading empirical strategy for causal inference. Finding IVs is a heuristic and creative process, and justifying their validity (especially exclusion restrictions) is largely rhetorical. We propose using large language models (LLMs) to search for new IVs through narratives and counterfactual reasoning, similar to how a human researcher would. The stark difference, however, is that LLMs can accelerate this process exponentially and explore an extremely large search space. We demonstrate how to construct prompts to search for potentially valid IVs. We argue that multi-step prompting is useful and role-playing prompts are suitable for mimicking the endogenous decisions of economic agents. We apply our method to three well-known examples in economics: returns to schooling, production functions, and peer effects. We then extend our strategy to finding (i) control variables in regression and difference-in-differences and (ii) running variables in regression discontinuity designs.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.14202
  2. By: Niccolo Lomys (CSEF and Università degli Studi di Napoli Federico II); Lorenzo Magnolfi (Department of Economics, University of Wisconsin-Madison)
    Abstract: We develop a method to recover primitives from data generated by artificial intelligence (AI) agents in strategic environments like online marketplaces and auctions. Building on the design of leading online learning AIs, we impose a regret-minimization property on behavior. Under this property, we show that time-average play converges to the set of Bayes coarse correlated equilibrium (BCCE) predictions. We develop an inferential procedure based on BCCE restrictions and convergence rates of regret-minimizing AIs. We apply the method to pricing data in an online marketplace for used electronics. We estimate sellers' cost distributions and find lower markups than in centralized platforms.
    Keywords: AI Decision-Making; Empirical Games; Regret Minimization; Bayes (Coarse) Correlated Equilibrium; Partial Identification
    JEL: C1 C5 C7 D4 D8 L1 L8
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:net:wpaper:2405
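    The no-regret property at the heart of this estimation strategy can be illustrated with a toy external-regret computation: a regret-minimizing AI's time-average regret, computed as below, vanishes as the number of rounds grows, which is what pushes time-average play toward the BCCE set. The payoff matrix and play sequences here are hypothetical examples, not data or code from the paper.

    ```python
    def external_regret(payoffs, my_actions, opp_actions):
        """Average external regret over T rounds: payoff of the best fixed
        action in hindsight minus the realized payoff, divided by T."""
        T = len(my_actions)
        realized = sum(payoffs[a][b] for a, b in zip(my_actions, opp_actions))
        best_fixed = max(
            sum(payoffs[a][b] for b in opp_actions)
            for a in range(len(payoffs))
        )
        return (best_fixed - realized) / T

    # Toy 2x2 game: row player's payoffs (action 1 strictly dominates action 0)
    payoffs = [[3.0, 0.0],
               [5.0, 1.0]]
    plays = [0, 0, 1, 1]   # row player's choices over 4 rounds
    opp   = [0, 1, 0, 1]   # column player's choices

    print(external_regret(payoffs, plays, opp))  # 0.75
    ```

    The inferential idea in the abstract is then that observed play with vanishing average regret must have an empirical distribution near the BCCE set, and those equilibrium restrictions partially identify the primitives (e.g., sellers' costs).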
  3. By: Ryan Y. Lin; Siddhartha Ojha; Kevin Cai; Maxwell F. Chen
    Abstract: Machine-learning technologies are seeing increased deployment in real-world market scenarios. In this work, we explore the strategic behaviors of large language models (LLMs) when deployed as autonomous agents in multi-commodity markets, specifically within Cournot competition frameworks. We examine whether LLMs can independently engage in anti-competitive practices such as collusion or, more specifically, market division. Our findings demonstrate that LLMs can effectively monopolize specific commodities by dynamically adjusting their pricing and resource allocation strategies, thereby maximizing profitability without direct human input or explicit collusion commands. These results pose unique challenges and opportunities for businesses looking to integrate AI into strategic roles and for regulatory bodies tasked with maintaining fair and competitive markets. The study provides a foundation for further exploration into the ramifications of deferring high-stakes decisions to LLM-based agents.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.00031
  4. By: Köbis, Nils; Rahwan, Zoe (Max Planck Institute for Human Development); Bersch, Clara; Ajaj, Tamer; Bonnefon, Jean-François (Toulouse School of Economics); Rahwan, Iyad
    Abstract: While artificial intelligence (AI) enables significant productivity gains from delegating tasks to machines, it can also facilitate the delegation of unethical behaviour. Here, we demonstrate this risk by having human principals instruct machine agents to perform a task with an incentive to cheat. Principals’ requests for cheating behaviour increased when the interface implicitly afforded unethical conduct: Machine agents programmed via supervised learning or goal specification evoked more cheating than those programmed with explicit rules. Cheating propensity was unaffected by whether delegation was mandatory or voluntary. Given the recent rise of large language model-based chatbots, we also explored delegation via natural language. Here, cheating requests did not vary between human and machine agents, but compliance diverged: When principals intended agents to cheat to the fullest extent, the majority of human agents did not comply, despite incentives to do so. In contrast, GPT-4, a state-of-the-art machine agent, nearly fully complied. Our results highlight ethical risks in delegating tasks to intelligent machines, and suggest design principles and policy responses to mitigate such risks.
    Date: 2024–10–04
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:dnjgz
  5. By: Tobias Werner; Ivan Soraperra; Emilio Calvano; David C. Parkes; Iyad Rahwan
    Abstract: Conversational AI models are becoming increasingly popular and are about to replace traditional search engines for information retrieval and product discovery. This raises concerns about monetization strategies and the potential for subtle consumer manipulation. Companies may have financial incentives to steer users toward search results or products in a conversation in ways that are unnoticeable to consumers. Using a behavioral experiment, we show that conversational AI models can indeed significantly shift consumer preferences. We discuss implications and ask whether regulators are sufficiently prepared to combat potential consumer deception.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.12143
  6. By: Yahya Fikri (LARSMAG - ENCG TANGER - UNIVERSITE ABDELMALEK ESSAADI - LABORATOIRE DE RECHERCHE EN STRATEGIE MANAGEMENT ET GOUVERNANCE ENCG TANGER - UNIVERSITE ABDELMALEK ESSAADI); Mohamed Rhalma (LARSMAG - ENCG TANGER - UNIVERSITE ABDELMALEK ESSAADI - LABORATOIRE DE RECHERCHE EN STRATEGIE MANAGEMENT ET GOUVERNANCE ENCG TANGER - UNIVERSITE ABDELMALEK ESSAADI)
    Abstract: This article examines the integration of artificial intelligence (AI) into early childhood education and its noteworthy impacts on students' enjoyment, creativity, and development of soft skills. Through interactive tools and individualized learning platforms, AI technology can help young pupils develop important soft skills such as cooperation and communication. These technologies allow education to be customized to each student's needs, boosting self-esteem and confidence, and they facilitate problem-solving by providing opportunities for inquiry. AI also encourages creativity by giving children new and inventive ways to express themselves. The paper explores how gamified learning settings, interactive software, and creative tools that stimulate students' curiosity are transforming education, and it highlights the challenges and ethical dilemmas surrounding the integration of AI. It emphasizes the importance of employing AI ethically and cooperatively to support children's holistic development. Drawing on a framework developed from the literature review, we discuss the importance of artificial intelligence in early childhood education, the ethical dilemmas raised by its use in ECE, and how it could foster children's creativity and soft skills.
    Keywords: Artificial Intelligence (AI); Early Childhood Education (ECE); Soft Skills; Fun and Creativity; effects and interactions; technical progress and technologies
    Date: 2024–08–23
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-04701470
  7. By: Musina, Sofiya (Warwick University)
    Abstract: Large Language Models (LLMs), such as ChatGPT, demonstrate an unprecedented applicability in a variety of domains, and, unlike previous waves of innovation, are capable of nonroutine cognitive tasks - leaving educated, white-collar workers most exposed. However, few studies address the relevant labour market implications outside controlled experimental environments. This project investigates the effects of LLMs on knowledge worker competency requirements using a difference-in-differences model based on a sample of 105,912 online job advertisements (Luxembourg, 2020-2024). The findings contain weak evidence that LLMs cause a reduction in demand for experience, education, cognitive skills and creativity, while leaving soft skills unaffected.
    Keywords: Employment; Skills Demand; Technology; AI
    JEL: J01 J23 J24 O33
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:wrk:wrkesp:75
  8. By: Christoph Riedl; Eric Bogert
    Abstract: Can human decision-makers learn from AI feedback? Using data on 52,000 decision-makers from a large online chess platform, we investigate how their AI use affects three interrelated long-term outcomes: Learning, skill gap, and diversity of decision strategies. First, we show that individuals are far more likely to seek AI feedback in situations in which they experienced success rather than failure. This AI feedback seeking strategy turns out to be detrimental to learning: Feedback on successes decreases future performance, while feedback on failures increases it. Second, higher-skilled decision-makers seek AI feedback more often, are far more likely to seek AI feedback after a failure, and benefit more from AI feedback than lower-skilled individuals. As a result, access to AI feedback increases, rather than decreases, the skill gap between high- and low-skilled individuals. Finally, we leverage 42 major platform updates as natural experiments to show that access to AI feedback causes a decrease in intellectual diversity of the population as individuals tend to specialize in the same areas. Together, these results indicate that learning from AI feedback is not automatic and using AI correctly seems to be a skill itself. Furthermore, despite its individual-level benefits, access to AI feedback can have significant population-level downsides including loss of intellectual diversity and an increasing skill gap.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.18660
  9. By: Han, Seoungmin
    Keywords: Generative AI, AI Generated Content, AI Disclaimer, Authenticity, Deception, Utilitarian, Hedonic
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:zbw:itsb24:302503
  10. By: Ohno, Shiroh
    Keywords: Artificial Intelligence, Smart City, International Comparison, Informatized Area
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:zbw:itsb24:302523
  11. By: Bastani, Spencer (Institute for Evaluation of Labour Market and Education Policy (IFAU), Uppsala); Waldenström, Daniel (Research Institute of Industrial Economics (IFN))
    Abstract: This chapter examines the implications of Artificial Intelligence (AI) and automation for the taxation of labor and capital in advanced economies. It synthesizes empirical evidence on worker displacement, productivity, and income inequality, as well as theoretical frameworks for optimal taxation. Implications for tax policy are discussed, focusing on the level of capital taxes and the progressivity of labor taxes. While there may be a need to adjust the level of capital taxes and the structure of labor income taxation, there are potential drawbacks of overly progressive taxation and universal basic income schemes that could undermine work incentives, economic growth, and long-term household welfare.
    Keywords: AI; Automation; Inequality; Labor Share; Optimal Taxation; Tax Progressivity
    JEL: H20 H21
    Date: 2024–10–03
    URL: https://d.repec.org/n?u=RePEc:hhs:iuiwop:1501
  12. By: Manish Jha; Jialin Qian; Michael Weber; Baozhong Yang
    Abstract: We use generative AI to extract managerial expectations about their economic outlook from over 120,000 corporate conference call transcripts. The overall measure, AI Economy Score, robustly predicts future economic indicators such as GDP growth, production, and employment, both in the short term and up to 10 quarters ahead. This predictive power is incremental to that of existing measures, including survey forecasts. Moreover, industry and firm-level measures provide valuable information about sector-specific and individual firm activities. Our findings suggest that managerial expectations carry unique insights about economic activities, with implications for both macroeconomic and microeconomic decision-making.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.03897
  13. By: Timothy DeStefano; Nick Johnstone; Richard Kneller; Jonathan Timmis
    Abstract: The arrival of cloud computing provides firms a new way to access digital technologies as digital services. Yet, capital incentive policies present in every OECD country are still targeted towards investments in information technology (IT) capital. If cloud services are partial substitutes for IT investments, the presence of capital incentive policies may unintentionally discourage the adoption of cloud and technologies that rely on the cloud, such as artificial intelligence (AI) and big data analytics. This paper exploits a tax incentive in the UK for capital investment as a quasi-natural experiment to examine the impact on firm adoption of cloud computing, big data analytics and AI. The empirical results find that the policy increased investment in IT capital as would be expected; but it slowed firm adoption of cloud, big data and AI. Matched employer-employee data shows that the policy also led firms to reduce their demand for workers that perform data analytics, but not other types of workers.
    Keywords: Capital incentives, Firms, Cloud computing, Artificial Intelligence
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:not:notgep:2024-04
  14. By: Pablo Guerron-Quintana (Boston College); Tomoaki Mikami (Boston College); Jaromir Nosal (Boston College)
    Abstract: We study the potential impact of the generative artificial intelligence (Gen-AI) revolution on the US economy through the lens of a multi-sector model in which we explicitly model the role of Gen-AI services in customer base management. In our model with carefully calibrated input-output linkages and the size of the Gen-AI sector, we find large spillovers of the Gen-AI productivity gains into the overall economy. A 10% increase in productivity in the Gen-AI sector over a 10-year horizon implies a 6% increase in aggregate GDP, despite the AI sector representing only 14% of the overall economy. That shock also implies a significant reallocation of labor away from the AI sector and into non-AI sectors. We decompose these effects into parts coming from the input-output structure and customer base management and find that they each contribute equally to the rise in GDP. In the absence of either channel, real GDP essentially does not respond to the increase in productivity in the AI sector.
    Keywords: artificial intelligence, AI, productivity
    Date: 2024–10–16
    URL: https://d.repec.org/n?u=RePEc:boc:bocoec:1080
  15. By: Shuaiyu Chen; T. Clifton Green; Huseyin Gulen; Dexin Zhou
    Abstract: We examine how large language models (LLMs) interpret historical stock returns and compare their forecasts with estimates from a crowd-sourced platform for ranking stocks. While stock returns exhibit short-term reversals, LLM forecasts over-extrapolate, placing excessive weight on recent performance similar to humans. LLM forecasts appear optimistic relative to historical and future realized returns. When prompted for 80% confidence interval predictions, LLM responses are better calibrated than survey evidence but are pessimistic about outliers, leading to skewed forecast distributions. The findings suggest LLMs manifest common behavioral biases when forecasting expected returns but are better at gauging risks than humans.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.11540
  16. By: Masanori Hirano; Kentaro Imajo
    Abstract: This paper proposes a novel method for constructing instruction-tuned large language models (LLMs) for finance without instruction data. Traditionally, developing such domain-specific LLMs has been resource-intensive, requiring a large dataset and significant computational power for continual pretraining and instruction tuning. Our study proposes a simpler approach that combines domain-specific continual pretraining with model merging. Given that general-purpose pretrained LLMs and their instruction-tuned LLMs are often publicly available, they can be leveraged to obtain the necessary instruction task vector. By merging this with a domain-specific pretrained vector, we can effectively create instruction-tuned LLMs for finance without additional instruction data. Our process involves two steps: first, we perform continual pretraining on financial data; second, we merge the instruction-tuned vector with the domain-specific pretrained vector. Our experiments demonstrate the successful construction of instruction-tuned LLMs for finance. One major advantage of our method is that the instruction-tuned and domain-specific pretrained vectors are nearly independent. This independence makes our approach highly effective. The Japanese financial instruction-tuned LLMs we developed in this study are available at https://huggingface.co/pfnet/nekomata-14b-pfn-qfin-inst-merge.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.19854
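    The two-step recipe in this abstract — continual pretraining plus adding an "instruction task vector" — reduces to simple parameter arithmetic. A minimal sketch, assuming models can be treated as flat dictionaries of scalar weights; the numbers and names below are illustrative toys, not the authors' checkpoints or code:

    ```python
    def task_vector(instruct_model, base_model):
        """Instruction task vector: parameter-wise difference (instruct - base)."""
        return {k: instruct_model[k] - base_model[k] for k in base_model}

    def merge(domain_model, vector, scale=1.0):
        """Add the instruction vector to a domain-pretrained model."""
        return {k: domain_model[k] + scale * vector[k] for k in domain_model}

    # Toy "models": same architecture, different training histories
    base     = {"w0": 1.0, "w1": 2.0}   # general-purpose pretrained LLM
    instruct = {"w0": 1.5, "w1": 2.5}   # base + instruction tuning
    domain   = {"w0": 2.0, "w1": 1.0}   # base + continual pretraining on financial text

    iv = task_vector(instruct, base)    # {"w0": 0.5, "w1": 0.5}
    merged = merge(domain, iv)          # instruction-tuned financial LLM, no instruction data
    print(merged)                       # {'w0': 2.5, 'w1': 1.5}
    ```

    The near-independence of the two vectors that the authors report is what makes this kind of addition work: if instruction tuning and domain pretraining moved the same parameters in conflicting directions, summing the vectors would degrade both capabilities.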
  17. By: Xiang Li; Lan Zhao; Junhao Ren; Yajuan Sun; Chuan Fu Tan; Zhiquan Yeo; Gaoxi Xiao
    Abstract: Effective information gathering and knowledge codification are pivotal for developing recommendation systems that promote circular economy practices. One promising approach involves the creation of a centralized knowledge repository cataloguing historical waste-to-resource transactions, which subsequently enables the generation of recommendations based on past successes. However, a significant barrier to constructing such a knowledge repository lies in the absence of a universally standardized framework for representing business activities across disparate geographical regions. To address this challenge, this paper leverages Large Language Models (LLMs) to classify textual data describing economic activities into the International Standard Industrial Classification (ISIC), a globally recognized economic activity classification framework. This approach enables any economic activity descriptions provided by businesses worldwide to be categorized into the unified ISIC standard, facilitating the creation of a centralized knowledge repository. Our approach achieves a 95% accuracy rate on a 182-label test dataset with a fine-tuned GPT-2 model. This research contributes to the global endeavour of fostering sustainable circular economy practices by providing a standardized foundation for knowledge codification and recommendation systems deployable across regions.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.18988

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.