nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024–12–02
sixteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina By Yuan Gao; Dokyun Lee; Gordon Burtch; Sina Fazelpour
  2. Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games By Ji Ma
  3. The Hidden Effects of Algorithmic Recommendations By Alex Albright
  4. Teaming Up with Artificial Agents in Non-routine Analytical Tasks By Lorenzo Cominelli; Federico Galatolo; Caterina Giannetti; Cristiano Ciaccio; Felice Dell’Orletta; Philipp Chaposkvi; Giulia Venturi
  5. Artificial Intelligence, Hiring and Employment: Job Postings Evidence from Sweden By Engberg, Erik; Hellsten, Mark; Javed, Farrukh; Lodefalk, Magnus; Sabolová, Radka; Schroeder, Sara; Tang, Aili
  6. Learning to Adopt Generative AI By Lijia Ma; Xingchen Xu; Yumei He; Yong Tan
  7. The Rise of AI Pricing: Trends, Driving Forces, and Implications for Firm Performance By Jonathan Adams; Min Fang; Zheng Liu; Yajie Wang
  8. Does AI technology deployment benefit the owner of the technology? Impact of GitHub Copilot release on Microsoft By Vroegindeweij, Martijn; Medappa, Poonacha; Tunç, Murat
  9. Not all twins are identical: the digital layer of “twin” transition market applications By Abbasiharofteh, Milad; Kriesch, Lukas
  10. Evaluating Company-specific Biases in Financial Sentiment Analysis using Large Language Models By Kei Nakagawa; Masanori Hirano; Yugo Fujimoto
  11. TraderTalk: An LLM Behavioural ABM applied to Simulating Human Bilateral Trading Interactions By Alicia Vidler; Toby Walsh
  12. SARF: Enhancing Stock Market Prediction with Sentiment-Augmented Random Forest By Saber Talazadeh; Dragan Perakovic
  13. A Hierarchical conv-LSTM and LLM Integrated Model for Holistic Stock Forecasting By Arya Chakraborty; Auhona Basu
  14. Urban Mobility: AI, ODE-Based Modeling, and Scenario Planning By Katsiaryna Bahamazava
  15. Crowdfunding Success: Human Insights vs Algorithmic Textual Extraction By Caterina Giannetti; Maria Saveria Mavillonio
  16. Modelling of Economic Implications of Bias in AI-Powered Health Emergency Response Systems By Katsiaryna Bahamazava

  1. By: Yuan Gao; Dokyun Lee; Gordon Burtch; Sina Fazelpour
    Abstract: Recent studies suggest large language models (LLMs) can exhibit human-like reasoning, aligning with human behavior in economic experiments, surveys, and political discourse. This has led many to propose that LLMs can be used as surrogates or simulations for humans in social science research. However, LLMs differ fundamentally from humans, relying on probabilistic patterns, absent the embodied experiences or survival objectives that shape human cognition. We assess the reasoning depth of LLMs using the 11-20 money request game. Nearly all advanced approaches fail to replicate human behavior distributions across many models. Causes of failure are diverse and unpredictable, relating to input language, roles, and safeguarding. These results advise caution when using LLMs to study human behavior or as surrogates or simulations.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.19599
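The 11-20 money request game used in this paper has a simple payoff structure that makes depth of reasoning observable. A minimal Python sketch of the game's payoffs and the standard level-k logic (an illustration of the game itself, not the authors' experimental code):

```python
# The 11-20 money request game (Arad & Rubinstein): each player requests an
# amount from 11 to 20 and receives it; requesting exactly one less than the
# opponent earns a bonus of 20. Level-0 naively requests the maximum; a
# level-k player best-responds to the level-(k-1) choice by undercutting it.

def payoff(own: int, other: int) -> int:
    """Your request, plus a bonus of 20 if you ask exactly one less
    than your opponent."""
    assert 11 <= own <= 20 and 11 <= other <= 20
    return own + (20 if own == other - 1 else 0)

def best_response(other: int) -> int:
    """The request in 11..20 maximising payoff against `other`."""
    return max(range(11, 21), key=lambda own: payoff(own, other))

def level_k_choice(k: int) -> int:
    """Level-0 requests 20; level-k best-responds to level-(k-1)."""
    choice = 20
    for _ in range(k):
        choice = best_response(choice)
    return choice

# Level-1 undercuts 20 -> 19, level-2 -> 18, and so on; at 11 no further
# undercut is possible, so the best response jumps back to 20.
print([level_k_choice(k) for k in range(5)])  # [20, 19, 18, 17, 16]
```

Comparing the distribution of requests a model produces against the well-documented human distribution over 11-20 is what lets the authors quantify how "human-like" an LLM's reasoning depth is.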
  2. By: Ji Ma
    Abstract: As Large Language Model (LLM)-based agents increasingly undertake real-world tasks and engage with human society, how well do we understand their behaviors? This study (1) investigates how LLM agents' prosocial behaviors -- a fundamental social norm -- can be induced by different personas and benchmarked against human behaviors; and (2) introduces a behavioral approach to evaluate the performance of LLM agents in complex decision-making scenarios. We explored how different personas and experimental framings affect these AI agents' altruistic behavior in dictator games and compared their behaviors within the same LLM family, across various families, and with human behaviors. Our findings reveal substantial variations and inconsistencies among LLMs and notable differences compared to human behaviors. Merely assigning a human-like identity to LLMs does not produce human-like behaviors. Despite being trained on extensive human-generated data, these AI agents cannot accurately predict human decisions. LLM agents are not able to capture the internal processes of human decision-making, and their alignment with human behavior is highly variable and dependent on specific model architectures and prompt formulations; even worse, such dependence does not follow a clear pattern.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.21359
  3. By: Alex Albright
    Abstract: Algorithms are intended to improve human decisions with data-driven predictions. However, algorithms provide more than just predictions to decision-makers—they often provide explicit recommendations. In this paper, I demonstrate these algorithmic recommendations have significant independent effects on human decisions. I leverage a natural experiment in which algorithmic recommendations were given to bail judges in some cases but not others. Lenient recommendations increased lenient bail decisions by 40% for marginal cases. The results are consistent with algorithmic recommendations making visible mistakes, such as violent rearrest, less costly to judges by providing them reputational cover. In this way, algorithms can affect human decisions by changing incentives, in addition to informing predictions.
    Keywords: Decision-making; algorithm; Algorithmic recommendation; Bail; Criminal justice
    JEL: D91 K42
    Date: 2024–11–14
    URL: https://d.repec.org/n?u=RePEc:fip:fedmoi:99090
  4. By: Lorenzo Cominelli; Federico Galatolo; Caterina Giannetti; Cristiano Ciaccio; Felice Dell’Orletta; Philipp Chaposkvi; Giulia Venturi
    Abstract: Using a real-life escape room scenario, we investigate how different levels of embodiment in artificial agents influence team performance and conversational dynamics in non-routine analytical tasks. Teams composed of either three humans or two humans and an artificial agent (a Box, an Avatar, and a hyper-realistic Humanoid) worked together to escape the room within a time limit. Our findings reveal that while human-only teams tend to complete all tasks more frequently, they also tend to be slower and make more errors. Additionally, we observe a non-linear relationship between the degree of agent embodiment and team performance, with a significant effect on conversational dynamics. Teams with agents exhibiting higher levels of embodiment display conversational patterns more similar to those occurring among humans. These results highlight the complex role that embodied AI plays in human-agent interactions, offering new insights into how artificial agents can be designed to support team collaboration in problem-solving environments.
    Keywords: complex tasks, artificial agents, teamwork
    JEL: C92
    Date: 2024–11–01
    URL: https://d.repec.org/n?u=RePEc:pie:dsedps:2024/314
  5. By: Engberg, Erik (The Ratio Institute); Hellsten, Mark (The Ratio Institute); Javed, Farrukh (The Ratio Institute); Lodefalk, Magnus (The Ratio Institute); Sabolová, Radka (The Ratio Institute); Schroeder, Sara (The Ratio Institute); Tang, Aili (The Ratio Institute)
    Abstract: This paper investigates the impact of artificial intelligence (AI) on hiring and employment, using the universe of job postings published by the Swedish Public Employment Service from 2014-2022 and universal register data for Sweden. We construct a detailed measure of AI exposure according to occupational content and find that establishments exposed to AI are more likely to hire AI workers. Survey data further indicate that AI exposure aligns with greater use of AI services. Importantly, rather than displacing non-AI workers, AI exposure is positively associated with increased hiring for both AI and non-AI roles. In the absence of substantial productivity gains that might account for this increase, we interpret the positive link between AI exposure and non-AI hiring as evidence that establishments are using AI to augment existing roles and expand task capabilities, rather than to replace non-AI workers.
    Keywords: Artificial Intelligence; Technological Change; Automation; Labour Demand
    JEL: D22 J23 J24 O33
    Date: 2024–11–08
    URL: https://d.repec.org/n?u=RePEc:hhs:ratioi:0380
  6. By: Lijia Ma; Xingchen Xu; Yumei He; Yong Tan
    Abstract: Recent advancements in generative AI, exemplified by ChatGPT, have dramatically transformed how people access information. Despite its powerful capabilities, the benefits it provides may not be equally distributed among individuals - a phenomenon referred to as the digital divide. Building upon prior literature, we propose two forms of digital divide in the generative AI adoption process: (i) the learning divide, capturing individuals' heterogeneous abilities to update their perceived utility of ChatGPT; and (ii) the utility divide, representing differences in the actual utility individuals derive from each use of ChatGPT. To evaluate these two divides, we develop a Bayesian learning model that incorporates demographic heterogeneities in both the utility and signal functions. Leveraging a six-month clickstream dataset, we estimate the model and find significant learning and utility divides across various demographic attributes. Interestingly, lower-educated and non-white individuals derive higher utility gains from ChatGPT but learn about its utility at a slower rate. Furthermore, males, younger individuals, and those with an IT background not only derive higher utility per use from ChatGPT but also learn about its utility more rapidly. In addition, we document a phenomenon termed the belief trap, wherein users underestimate ChatGPT's utility, opt not to use the tool, and consequently lack new experiences to update their perceptions, leading to continued underutilization. Our simulation further demonstrates that the learning divide can significantly affect the probability of falling into the belief trap, another form of the digital divide in adoption outcomes (i.e., outcome divide); however, offering training programs can alleviate the belief trap and mitigate the divide.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.19806
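The belief trap described in this abstract can be illustrated with a stripped-down normal-normal Bayesian learner; the utility, cost, and noise parameters below are invented for illustration and are not the paper's estimates:

```python
import random

# Sketch (not the paper's estimated model): an agent with a Gaussian prior
# over ChatGPT's per-use utility updates via conjugate normal-normal
# learning, but only receives a signal when she actually uses the tool.
# If perceived utility sits below the usage cost, the tool goes unused,
# no signal arrives, and the belief never updates -- the "belief trap".

def simulate(prior_mean, prior_var, true_utility=1.0, cost=0.5,
             signal_var=1.0, periods=50, seed=0):
    rng = random.Random(seed)
    mean, var = prior_mean, prior_var
    uses = 0
    for _ in range(periods):
        if mean <= cost:          # perceived utility too low: tool unused,
            continue              # so no new signal, no belief update
        uses += 1
        signal = rng.gauss(true_utility, signal_var ** 0.5)
        # Conjugate normal-normal update of (mean, var)
        precision = 1 / var + 1 / signal_var
        mean = (mean / var + signal / signal_var) / precision
        var = 1 / precision
    return mean, uses

# A pessimistic prior below cost never generates usage or learning,
# even though the tool's true utility exceeds the cost:
print(simulate(prior_mean=0.3, prior_var=0.5))   # (0.3, 0)
# An optimistic prior generates usage and hence learning:
print(simulate(prior_mean=1.5, prior_var=0.5))
```

A training program in this toy setting corresponds to exogenously lifting the prior mean above the cost threshold, which restarts the signal flow and lets beliefs converge toward the true utility.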
  7. By: Jonathan Adams; Min Fang; Zheng Liu; Yajie Wang
    Abstract: We document key stylized facts about the time-series trends and cross-sectional distributions of AI pricing and study its implications for firm performance, both on average and conditional on monetary policy shocks. We use the universe of online job posting data from Lightcast to measure the adoption of AI pricing. We infer that a firm is adopting AI pricing if it posts a job opening that requires AI-related skills and contains the keyword “pricing.” At the aggregate level, the share of AI-pricing jobs in all pricing jobs has increased by more than tenfold since 2010. The increase in AI-pricing jobs has been broad-based, spreading to more industries than other types of AI jobs. At the firm level, larger and more productive firms are more likely to adopt AI pricing. Moreover, firms that adopted AI pricing experienced faster growth in sales, employment, assets, and markups, and their stock returns are also more sensitive to high-frequency monetary policy surprises than non-adopters. We show that these empirical observations can be rationalized by a simple model where a monopolist firm with incomplete information about the demand function invests in AI pricing to acquire information.
    Keywords: artificial intelligence; firms; pricing; jobs; monetary policy; technology adoption; AI
    JEL: D40 E31 E52 O33
    Date: 2024–11–01
    URL: https://d.repec.org/n?u=RePEc:fip:fedfwp:99052
  8. By: Vroegindeweij, Martijn (Tilburg University, School of Economics and Management); Medappa, Poonacha (Tilburg University, School of Economics and Management); Tunç, Murat (Tilburg University, School of Economics and Management)
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:tiu:tiutis:af60ac9c-5bf6-45cd-bbcb-e85b5dbcd207
  9. By: Abbasiharofteh, Milad (University of Groningen); Kriesch, Lukas (Justus Liebig University Giessen)
    Abstract: A twin (a joint green and digital) transition aims to facilitate achieving the Green Deal goals. The interplay between regional capabilities and twin transition market applications remains understudied. This research utilizes Large Language Models to analyze web texts of more than 600,000 German firms, assessing whether their products contribute to the twin transition. Our findings suggest that while AI capabilities benefit twin transition market applications, clean technological capabilities play a significant role only in highly specialized regions. To facilitate future research and informed policymaking, we provide open access to our developed dataset and AI tools (i.e., the TwinTransition Mapper).
    Keywords: Twin transition; TwinTransition Mapper; digital layer; technological capabilities
    JEL: C81 C88 O30 O31
    Date: 2024–11–13
    URL: https://d.repec.org/n?u=RePEc:hhs:lucirc:2024_016
  10. By: Kei Nakagawa; Masanori Hirano; Yugo Fujimoto
    Abstract: This study aims to evaluate the sentiment of financial texts using large language models (LLMs) and to empirically determine whether LLMs exhibit company-specific biases in sentiment analysis. Specifically, we examine the impact of general knowledge about firms on the sentiment measurement of texts by LLMs. First, we compare the sentiment scores of financial texts by LLMs when the company name is explicitly included in the prompt versus when it is not, and define and quantify company-specific bias as the difference between these scores. Next, we construct an economic model to theoretically evaluate the impact of sentiment bias on investor behavior, showing how stock prices could be distorted if investments driven by biased LLMs were to become dominant. Finally, we conduct an empirical analysis using Japanese financial text data to examine the relationship between firm-specific sentiment bias, corporate characteristics, and stock performance.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.00420
  11. By: Alicia Vidler; Toby Walsh
    Abstract: We introduce a novel hybrid approach that augments Agent-Based Models (ABMs) with behaviors generated by Large Language Models (LLMs) to simulate human trading interactions. We call our model TraderTalk. Leveraging LLMs trained on extensive human-authored text, we capture detailed and nuanced representations of bilateral conversations in financial trading. Applying this Generative Agent-Based Model (GABM) to government bond markets, we replicate trading decisions between two stylised virtual humans. Our method addresses both structural challenges, such as coordinating turn-taking between realistic LLM-based agents, and design challenges, including the interpretation of LLM outputs by the agent model. By exploring prompt design opportunistically rather than systematically, we enhance the realism of agent interactions without exhaustive overfitting or model reliance. Our approach successfully replicates trade-to-order volume ratios observed in related asset markets, demonstrating the potential of LLM-augmented ABMs in financial simulations.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.21280
  12. By: Saber Talazadeh; Dragan Perakovic
    Abstract: Stock trend forecasting, a challenging problem in the financial domain, involves extensive data and related indicators. Relying solely on empirical analysis often yields unsustainable and ineffective results. Machine learning researchers have demonstrated that the application of the random forest algorithm can enhance predictions in this context, playing a crucial auxiliary role in forecasting stock trends. This study introduces a new approach to stock market prediction by integrating sentiment analysis using the FinGPT generative AI model with the traditional Random Forest model. The proposed technique aims to optimize the accuracy of stock price forecasts by leveraging the nuanced understanding of financial sentiments provided by FinGPT. We present a new methodology called "Sentiment-Augmented Random Forest" (SARF), which incorporates sentiment features into the Random Forest framework. Our experiments demonstrate that SARF outperforms conventional Random Forest and LSTM models, with an average accuracy improvement of 9.23% and lower prediction errors in predicting stock market movements.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.07143
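The core of the sentiment-augmentation idea, appending a sentiment column to a random forest's feature matrix, can be sketched with scikit-learn. Here the FinGPT sentiment scores and the price features are replaced by synthetic data, so the numbers are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: lagged returns as technical features, and a
# precomputed sentiment score in [-1, 1] per observation (the paper derives
# this from FinGPT; here it is random but made predictive of the label).
rng = np.random.default_rng(0)
n = 500
returns = rng.normal(0, 1, (n, 5))
sentiment = rng.uniform(-1, 1, (n, 1))
# Next-day direction driven partly by sentiment, plus noise
y = ((returns[:, 0] + 2 * sentiment[:, 0] + rng.normal(0, 1, n)) > 0).astype(int)

X_plain = returns                            # conventional random forest
X_sarf = np.hstack([returns, sentiment])     # sentiment-augmented features

Xtr_p, Xte_p, Xtr_s, Xte_s, ytr, yte = train_test_split(
    X_plain, X_sarf, y, test_size=0.3, random_state=0)

plain = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr_p, ytr)
sarf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr_s, ytr)
print("plain:", plain.score(Xte_p, yte))
print("SARF :", sarf.score(Xte_s, yte))
```

On this synthetic data the augmented model has access to a feature the plain model lacks; the paper's 9.23% improvement is, of course, measured on real market data with FinGPT-derived sentiment.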
  13. By: Arya Chakraborty; Auhona Basu
    Abstract: The financial domain presents a complex environment for stock market prediction, characterized by volatile patterns and the influence of multifaceted data sources. Traditional models have leveraged either Convolutional Neural Networks (CNN) for spatial feature extraction or Long Short-Term Memory (LSTM) networks for capturing temporal dependencies, with limited integration of external textual data. This paper proposes a novel Two-Level Conv-LSTM Neural Network integrated with a Large Language Model (LLM) for comprehensive stock advising. The model harnesses the strengths of Conv-LSTM for analyzing time-series data and LLM for processing and understanding textual information from financial news, social media, and reports. In the first level, convolutional layers are employed to identify local patterns in historical stock prices and technical indicators, followed by LSTM layers to capture the temporal dynamics. The second level integrates the output with an LLM that analyzes sentiment and contextual information from textual data, providing a holistic view of market conditions. The combined approach aims to improve prediction accuracy and provide contextually rich stock advising.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.12807
  14. By: Katsiaryna Bahamazava (Department of Mathematical Sciences G.L. Lagrange, Politecnico di Torino, Italy, iLaVita Nonprofit Foundation, Italy - USA)
    Abstract: Urbanization and technological advancements are reshaping the future of urban mobility, presenting both challenges and opportunities. This paper combines foresight and scenario planning with mathematical modeling using Ordinary Differential Equations (ODEs) to explore how Artificial Intelligence (AI)-driven technologies can transform transportation systems. By simulating ODE-based models in Python, we quantify the impact of AI innovations, such as autonomous vehicles and intelligent traffic management, on reducing traffic congestion under different regulatory conditions. Our ODE models capture the dynamic relationship between AI adoption rates and traffic congestion, providing quantitative insights into how future scenarios might unfold. By incorporating industry collaborations and case studies, we offer strategic guidance for businesses and policymakers navigating this evolving landscape. This study contributes to understanding how foresight, scenario planning, and ODE modeling can inform strategies for creating more efficient, sustainable, and livable cities through AI adoption.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.19915
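As a toy illustration of the ODE-based approach (the paper's actual equations and calibrations are not reproduced here), one can couple logistic AI adoption to a congestion index and integrate with forward Euler; all parameter values below are invented:

```python
# Illustrative coupled ODEs: AI adoption A(t) follows logistic growth, and
# congestion C(t) is fed by a constant inflow but cleared faster as adoption
# rises. Integrated with a simple forward-Euler scheme.

def run_scenario(r=0.5, k=0.8, inflow=1.0, dt=0.01, T=30.0):
    A, C = 0.01, 1.0          # initial adoption share and congestion index
    t = 0.0
    while t < T:
        dA = r * A * (1 - A)              # logistic uptake of AI mobility tech
        dC = inflow - (1 + k * A) * C     # inflow vs. AI-boosted clearing rate
        A += dA * dt
        C += dC * dt
        t += dt
    return A, C

A, C = run_scenario()
print(f"adoption {A:.3f}, congestion {C:.3f}")
# As A -> 1, congestion settles near inflow / (1 + k) = 1 / 1.8 ~ 0.556
```

Regulatory scenarios in this setup correspond to different values of the uptake rate r or the effectiveness parameter k, so scenario planning reduces to comparing the trajectories these parameter sets produce.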
  15. By: Caterina Giannetti; Maria Saveria Mavillonio
    Abstract: Using a unique dataset of equity offerings from crowdfunding platforms, we explore the synergy between human insights and algorithmic analysis in evaluating campaign success through business plan assessments. Human evaluators (students) used a predefined grid to assess each proposal in a Business Plan competition. We then developed a classifier with advanced textual representations and compared prediction errors between human evaluators, a machine learning model, and their combination. Our goal is to identify the drivers of discrepancies in their evaluations. While AI models outperform humans in overall accuracy, human evaluations offer valuable insights, especially in areas requiring subtle judgment. Combining human and AI predictions leads to improved performance, highlighting the complementary strengths of human intuition and AI's computational power.
    Keywords: Crowdfunding, Natural Language Processing, Human Evaluation
    JEL: C45 C53 G2
    Date: 2024–11–01
    URL: https://d.repec.org/n?u=RePEc:pie:dsedps:2024/315
  16. By: Katsiaryna Bahamazava (Department of Mathematical Sciences G.L. Lagrange, Politecnico di Torino, Italy)
    Abstract: We present a theoretical framework assessing the economic implications of bias in AI-powered emergency response systems. Integrating health economics, welfare economics, and artificial intelligence, we analyze how algorithmic bias affects resource allocation, health outcomes, and social welfare. By incorporating a bias function into health production and social welfare models, we quantify its impact on demographic groups, showing that bias leads to suboptimal resource distribution, increased costs, and welfare losses. The framework highlights efficiency-equity trade-offs and provides economic interpretations. We propose mitigation strategies, including fairness-constrained optimization, algorithmic adjustments, and policy interventions. Our findings offer insights for policymakers, emergency service providers, and technology developers, emphasizing the need for AI systems that are efficient and equitable. By addressing the economic consequences of biased AI, this study contributes to policies and technologies promoting fairness, efficiency, and social welfare in emergency response services.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.20229
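A toy numerical instance of the framework's central idea, with functional forms and numbers chosen purely for illustration and not taken from the paper: a bias factor discounts the resources that effectively reach one demographic group, health output is concave in effective resources, and the welfare loss from bias is the gap between the unbiased and biased systems:

```python
import math

# Each group g receives emergency resources r_g scaled down by a bias
# factor b_g; health production is concave (sqrt) in effective resources;
# social welfare is the population-weighted sum of group health.

def welfare(resources, bias, weights):
    return sum(w * math.sqrt(r * (1 - b))
               for r, b, w in zip(resources, bias, weights))

resources = [10.0, 10.0]        # identical nominal allocations to groups A, B
weights = [0.5, 0.5]            # equal population shares

w_unbiased = welfare(resources, [0.0, 0.0], weights)
w_biased = welfare(resources, [0.0, 0.3], weights)  # group B under-served
print(f"welfare loss from bias: {w_unbiased - w_biased:.3f}")
```

Because the health production function is concave, the loss from under-serving one group is not offset by over-serving another, which is the efficiency-equity logic behind the fairness-constrained optimization the authors propose.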

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.