Papers on Artificial Intelligence
By: | Lippens, Louis (Ghent University) |
Abstract: | The advent of large language models (LLMs) may reshape hiring in the labour market. This paper investigates how generative pre-trained transformers (GPTs)—i.e. OpenAI’s GPT-3.5, GPT-4, and GPT-4o—can aid hiring decisions. In a direct comparison between humans and GPTs on an identical hiring task, I show that GPTs tend to select candidates more liberally than humans but exhibit less ethnic bias. GPT-4 even slightly favours certain ethnic minorities. While LLMs may complement humans in hiring by making a (relatively extensive) pre-selection of job candidates, the findings suggest that they may mis-select due to a lack of contextual understanding and may reproduce pre-trained human bias at scale. |
Date: | 2024–07–11 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:zxf5y_v1 |
By: | Yuzhi Hao (Department of Economics, The Hong Kong University of Science and Technology); Danyang Xie (Thrust of Innovation, Policy, and Entrepreneurship, the Society Hub, The Hong Kong University of Science and Technology) |
Abstract: | This paper pioneers a novel approach to economic and public policy analysis by leveraging multiple Large Language Models (LLMs) as heterogeneous artificial economic agents. We first evaluate five LLMs' economic decision-making capabilities in solving two-period consumption allocation problems under two distinct scenarios: with explicit utility functions and based on intuitive reasoning. While previous research has often simulated heterogeneity by solely varying prompts, our approach harnesses the inherent variations in analytical capabilities across different LLMs to model agents with diverse cognitive traits. Building on these findings, we construct a Multi-LLM-Agent-Based (MLAB) framework by mapping these LLMs to specific educational groups and corresponding income brackets. Using interest-income taxation as a case study, we demonstrate how the MLAB framework can simulate policy impacts across heterogeneous agents, offering a promising new direction for economic and public policy analysis by leveraging LLMs' human-like reasoning capabilities and computational power. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.16879 |
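As a reference point for the explicit-utility scenario above, the two-period consumption allocation problem has a closed-form benchmark against which an LLM's choices can be scored. A minimal worked example, assuming logarithmic utility and a budget constraint in present-value form (the paper's exact functional forms may differ):

\max_{c_1, c_2} \; \ln c_1 + \beta \ln c_2 \quad \text{s.t.} \quad c_1 + \frac{c_2}{1+r} = w

The first-order conditions give c_2 = \beta (1+r) c_1, so

c_1^* = \frac{w}{1+\beta}, \qquad c_2^* = \frac{\beta (1+r) w}{1+\beta}

An agent's deviation from (c_1^*, c_2^*) then offers a natural score of its decision-making capability in the explicit-utility scenario.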
By: | Zexin Ye |
Abstract: | When the current demand shock is observable, with a high discount factor, Q-learning agents predominantly learn to implement symmetric rigid pricing, i.e., they charge constant prices across demand states. Under this pricing pattern, supra-competitive profits can still be obtained and are sustained through collusive strategies that effectively punish deviations. This shows that Q-learning agents can successfully overcome the stronger incentives to deviate during positive demand shocks, and consequently algorithmic collusion persists under observed demand shocks. In contrast, with a medium discount factor, Q-learning agents learn that maintaining high prices during positive demand shocks is not incentive compatible and instead proactively charge lower prices to decrease the temptation to deviate, while maintaining relatively high prices during negative demand shocks. As a result, the countercyclical pricing pattern becomes predominant, aligning with the theoretical prediction of Rotemberg and Saloner (1986). These findings highlight how Q-learning algorithms can both adapt pricing strategies and develop tacit collusion in response to complex market conditions. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.15084 |
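To make the setup concrete, below is a minimal sketch (not the paper's code) of two Q-learning agents pricing in a repeated duopoly with an observable binary demand shock; the price grid, demand function, and learning parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
PRICES = np.linspace(1.0, 2.0, 5)        # illustrative price grid
N_P, N_D = len(PRICES), 2                # price actions; demand states (low/high)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05      # learning rate, discount factor, exploration

def profit(p_own, p_rival, demand):
    """Stylised demand: the lower-priced firm serves the (shocked) market."""
    scale = 2.0 if demand == 1 else 1.0  # a positive shock doubles demand
    if p_own < p_rival:
        return scale * p_own
    if p_own == p_rival:
        return scale * p_own / 2
    return 0.0

# State = (current demand shock, rival's last price index); one Q-table per agent.
Q = [np.zeros((N_D, N_P, N_P)) for _ in range(2)]
last, demand = [0, 0], 0
for t in range(200_000):
    acts = []
    for i in range(2):
        s = (demand, last[1 - i])
        greedy = int(np.argmax(Q[i][s]))
        acts.append(int(rng.integers(N_P)) if rng.random() < EPS else greedy)
    new_demand = int(rng.integers(N_D))  # i.i.d. observable demand shock
    for i in range(2):
        r = profit(PRICES[acts[i]], PRICES[acts[1 - i]], demand)
        s, s2 = (demand, last[1 - i]), (new_demand, acts[1 - i])
        Q[i][s][acts[i]] += ALPHA * (r + GAMMA * Q[i][s2].max() - Q[i][s][acts[i]])
    last, demand = acts, new_demand

# After training, inspecting argmax(Q[i][(d, j)]) across demand states d shows
# whether agents price rigidly (same price in both states) or countercyclically.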
By: | Hein, Ilka; Cecil, Julia (Ludwig-Maximilians-Universität München); Lermer, Eva (LMU Munich) |
Abstract: | Artificial intelligence (AI) is increasingly taking over leadership tasks in companies, including the provision of feedback. However, the effect of AI-driven feedback on employees and its theoretical foundations are poorly understood. We aimed to close this research gap by comparing perceptions of AI and human feedback based on construal level theory and the feedback process model. Using these theories, our objective was also to investigate the moderating role of feedback valence and the mediating effect of social distance. A 2 × 2 between-subjects design was applied to manipulate feedback source (human vs. AI) and valence (negative vs. positive) via vignettes. In a preregistered experimental study (S1) and a subsequent direct replication (S2), responses were studied from N_S1 = 263 and N_S2 = 449 participants who completed a German online questionnaire measuring feedback acceptance, performance motivation, social distance, acceptance of the feedback source itself, and intention to seek further feedback. Regression analyses showed that AI feedback was rated as less accurate and led to lower performance motivation, acceptance of the feedback provider, and intention to seek further feedback. These effects were mediated by perceived social distance. Moreover, for feedback acceptance and performance motivation, the differences were only found for positive but not for negative feedback in the first study. This implies that AI feedback may not inherently be perceived as more negative than human feedback, as it depends on the feedback's valence. Furthermore, the mediation effects indicate that the observed negative evaluations of the AI can be explained by higher social distance and that increased social closeness to feedback providers may improve appraisals of them and of their feedback. Theoretical contributions of the studies and implications for the use of AI for providing feedback in the workplace are discussed, emphasizing the influence of effects related to construal level theory. |
Date: | 2024–12–22 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:uczaw_v1 |
By: | Otis, Nicholas G.; Cranney, Katelyn; Delecourt, Solene; Koning, Rembrand (Harvard Business School) |
Abstract: | Generative AI has the potential to transform productivity and reduce inequality, but only if adopted broadly. In this paper, we show that recently identified gender gaps in generative AI use are nearly universal. Synthesizing data from 18 studies covering more than 140,000 individuals across the world, combined with estimates of the gender share of the hundreds of millions of users of popular generative AI platforms, we demonstrate that the gender gap in generative AI usage holds across nearly all regions, sectors, and occupations. Using newly collected data, we also document that this gap remains even when access to the technology is improved, highlighting the need for further research into the gap’s underlying causes. If this global disparity persists, it risks creating a self-reinforcing cycle: women’s underrepresentation in generative AI usage would lead to systems trained on data that inadequately sample women’s preferences and needs, ultimately widening existing gender disparities in technology adoption and economic opportunity. |
Date: | 2024–10–14 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:h6a7c_v1 |
By: | Kevin He; Ran Shorrer; Mengjia Xia |
Abstract: | We conduct an incentivized laboratory experiment to study people's perception of generative artificial intelligence (GenAI) alignment in the context of economic decision-making. Using a panel of economic problems spanning the domains of risk, time preference, social preference, and strategic interactions, we ask human subjects to make choices for themselves and to predict the choices made by GenAI on behalf of a human user. We find that people overestimate the degree of alignment between GenAI's choices and human choices. In every problem, human subjects' average prediction about GenAI's choice is substantially closer to the average human-subject choice than it is to the GenAI choice. At the individual level, different subjects' predictions about GenAI's choice in a given problem are highly correlated with their own choices in the same problem. We explore the implications of people overestimating GenAI alignment in a simple theoretical model. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.14708 |
By: | Schoeffer, Jakob; Jakubik, Johannes; Vössing, Michael; Kühl, Niklas; Satzger, Gerhard |
Abstract: | In AI-assisted decision-making, a central promise of having a human in the loop is that the human can complement the AI system by overriding its wrong recommendations. In practice, however, we often see that humans cannot assess the correctness of AI recommendations and, as a result, adhere to wrong or override correct advice. Different ways of relying on AI recommendations have immediate, yet distinct, implications for decision quality. Unfortunately, reliance and decision quality are often inappropriately conflated in the current literature on AI-assisted decision-making. In this work, we disentangle and formalize the relationship between reliance and decision quality, and we characterize the conditions under which human-AI complementarity is achievable. To illustrate how reliance and decision quality relate to one another, we propose a visual framework and demonstrate its usefulness for interpreting empirical findings, including the effects of interventions like explanations. Overall, our research highlights the importance of distinguishing between reliance behavior and decision quality in AI-assisted decision-making. |
Date: | 2024–08–25 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:cekm9_v1 |
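One way to see why reliance and decision quality must not be conflated is a simple conditional decomposition of accuracy (an illustrative formalization; the paper's own notation may differ):

P(\text{correct decision}) = P(\text{AI correct}) \, P(\text{adhere} \mid \text{AI correct}) + P(\text{AI wrong}) \, P(\text{override} \mid \text{AI wrong})

An overall adherence rate alone therefore says little about quality: the same level of reliance yields very different accuracy depending on how adherence covaries with AI correctness, which is the distinction between reliance behavior and decision quality that the authors emphasize.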
By: | Ghaharian, Kasra (University of Nevada, Las Vegas); Binesh, Nasim |
Abstract: | The proliferation of data and artificial intelligence (AI) throughout society has raised concerns about its potential misuse and threats across industries. In this paper we explore the risks and ethical considerations of AI applications in gambling, an industry that makes significant contributions to many tourism destinations and local economies around the world. We conducted a scoping review to collect the breadth of literature and to understand the current state of knowledge. Our search yielded 2,499 potentially relevant documents, from which we deemed 16 eligible for inclusion. A content analysis revealed convergence around six main themes: (1) Explainability, (2) Exploitation, (3) Algorithmic Flaws, (4) Consumer Rights, (5) Accountability, and (6) Human-in-the-Loop. We found that these gambling-specific themes largely overlap with broader AI principles. Most records focused on algorithmic strategies to reduce gambling-related harm (n = 12/16), thus we call for more attention to be turned to commercially driven AI applications. We provide a theoretical evaluation that illustrates the challenges involved for stakeholders tasked with governing AI risks and associated ethical considerations. Because gambling is a globally reaching product, regulators and operators need to be cognizant, not just of philosophical principles, but also of the rich tapestry of global ethical traditions. |
Date: | 2024–04–15 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:gpyub_v1 |
By: | Thomas Henning; Siddhartha M. Ojha; Ross Spoon; Jiatong Han; Colin F. Camerer |
Abstract: | This paper explores how Large Language Models (LLMs) behave in a classic experimental finance paradigm widely known for eliciting bubbles and crashes in human participants. We adapt an established trading design, where traders buy and sell a risky asset with a known fundamental value, and introduce several LLM-based agents, both in single-model markets (all traders are instances of the same LLM) and in mixed-model "battle royale" settings (multiple LLMs competing in the same market). Our findings reveal that LLMs generally exhibit a "textbook-rational" approach, pricing the asset near its fundamental value, and show only a muted tendency toward bubble formation. Further analyses indicate that LLM-based agents display less variance in trading strategies than human participants do. Taken together, these results highlight the risk of relying on LLM-only data to replicate human-driven market phenomena, as key behavioral features, such as large emergent bubbles, were not robustly reproduced. While LLMs clearly possess the capacity for strategic decision-making, their relative consistency and rationality suggest that they do not accurately mimic human market dynamics. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.15800 |
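For context, in the classic declining-fundamental design this paradigm adapts (assuming a Smith–Suchanek–Williams-style market; the abstract does not name the exact variant), the fundamental value of the asset at period t, with T total periods and per-period expected dividend \mathbb{E}[d], is

FV_t = (T - t + 1) \, \mathbb{E}[d]

For instance, with \mathbb{E}[d] = 0.24 and T = 15 (illustrative numbers), the fundamental starts at 3.60 and falls by 0.24 each period; "textbook-rational" pricing means transacting near this declining schedule, while the human bubbles the design famously elicits are sustained overpricing relative to it.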
By: | Munipalle, Pravith |
Abstract: | Bot trading, or algorithmic trading, has transformed modern financial markets by using advanced technologies like artificial intelligence and machine learning to execute trades with unparalleled speed and efficiency. This paper examines the mechanisms and types of trading bots, their impact on market liquidity, efficiency, and stability, and the ethical and regulatory challenges they pose. Key findings highlight the dual nature of bot trading—enhancing market performance while introducing systemic risks, such as those observed during the 2010 Flash Crash. Emerging technologies like blockchain and predictive analytics, along with advancements in AI, present opportunities for innovation but also underscore the need for robust regulations and ethical design. To provide deeper insights, we conducted an experiment analyzing the performance of different trading bot strategies in simulated market conditions, revealing the potential and pitfalls of these systems under varying scenarios. |
Date: | 2024–12–22 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:p98zv_v1 |
By: | Lee, Heungmin |
Abstract: | The rapid advancements in large language models (LLMs) have ushered in a new era of transformative potential for the finance industry. This paper explores the latest developments in the application of LLMs across key areas of the finance domain, highlighting their significant impact and future implications. In the realm of financial analysis and modelling, LLMs have demonstrated the ability to outperform traditional models in tasks such as stock price prediction, portfolio optimization, and risk assessment. By processing vast amounts of financial data and leveraging their natural language understanding capabilities, these models can generate insightful analyses, identify patterns, and provide data-driven recommendations to support decision-making processes. The conversational capabilities of LLMs have also revolutionized the customer service landscape in finance. LLMs can engage in natural language dialogues, addressing customer inquiries, providing personalized financial advice, and even handling complex tasks like loan applications and investment planning. This integration of LLMs into financial institutions has the potential to enhance customer experiences, improve response times, and reduce the workload of human customer service representatives. Furthermore, LLMs are making significant strides in the realm of risk management and compliance. These models can analyze complex legal and regulatory documents, identify potential risks, and suggest appropriate remedial actions. By automating routine compliance tasks, such as anti-money laundering (AML) checks and fraud detection, LLMs can help financial institutions enhance their risk management practices and ensure better compliance, mitigating the risk of costly penalties or reputational damage. As the finance industry continues to embrace the transformative potential of LLMs, it will be crucial to address the challenges surrounding data privacy, algorithmic bias, and the responsible development of these technologies. By navigating these considerations, the finance sector can harness the full capabilities of LLMs to drive innovation, improve efficiency, and ultimately, enhance the overall financial ecosystem. |
Date: | 2025–01–03 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:ahkd3_v1 |
By: | Guojun Xiong; Zhiyang Deng; Keyi Wang; Yupeng Cao; Haohang Li; Yangyang Yu; Xueqing Peng; Mingquan Lin; Kaleb E Smith; Xiao-Yang Liu; Jimin Huang; Sophia Ananiadou; Qianqian Xie |
Abstract: | Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities in various financial tasks. However, they often struggle with multi-step, goal-oriented scenarios in interactive financial markets, such as trading, where complex agentic approaches are required to improve decision-making. To address this, we propose FLAG-Trader, a unified architecture integrating linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization, in which a partially fine-tuned LLM acts as the policy network, leveraging pre-trained knowledge while adapting to the financial domain through parameter-efficient fine-tuning. Through policy gradient optimization driven by trading rewards, our framework not only enhances LLM performance in trading but also improves results on other financial-domain tasks. We present extensive empirical evidence to validate these enhancements. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.11433 |
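A minimal sketch of the policy-gradient loop described above, with a small feed-forward network standing in for the partially fine-tuned LLM policy; the environment interface (env.reset / env.step), state encoding, and three-action trade space are hypothetical stand-ins, not the FLAG-Trader implementation.

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 16, 3                      # e.g. market features -> {buy, hold, sell}

policy = nn.Sequential(                           # stand-in for the LLM policy network
    nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS)
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def run_episode(env, horizon=50):
    """Roll out one trading episode; env is a hypothetical market simulator
    whose reset() returns a state tensor and step(a) returns (state, reward)."""
    logps, rewards, state = [], [], env.reset()
    for _ in range(horizon):
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        state, reward = env.step(action.item())
        logps.append(dist.log_prob(action))
        rewards.append(reward)
    return logps, rewards

def reinforce_update(logps, rewards, gamma=0.99):
    """One REINFORCE step on discounted, normalised trading rewards."""
    returns, g = [], 0.0
    for r in reversed(rewards):                   # discounted returns-to-go
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(logps) * returns).sum()  # policy-gradient objective
    opt.zero_grad()
    loss.backward()
    opt.step()

In the paper's setup, parameter-efficient fine-tuning would restrict which LLM parameters the optimizer updates; here the whole stand-in network is trainable for brevity.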
By: | Xiangyu Li; Yawen Zeng; Xiaofen Xing; Jin Xu; Xiangmin Xu |
Abstract: | As automated trading gains traction in the financial market, algorithmic investment strategies are increasingly prominent. While Large Language Models (LLMs) and Agent-based models exhibit promising potential in real-time market analysis and trading decisions, they can still suffer significant losses of around 20% when confronted with rapid declines or frequent fluctuations, impeding their practical application. Hence, there is an imperative to explore a more robust and resilient framework. This paper introduces an innovative multi-agent system, HedgeAgents, aimed at bolstering system robustness via "hedging" strategies. In this well-balanced system, an array of hedging agents has been tailored: HedgeAgents consists of a central fund manager and multiple hedging experts specializing in various financial asset classes. These agents leverage LLMs' cognitive capabilities to make decisions and coordinate through three types of conferences. Benefiting from the powerful understanding of LLMs, HedgeAgents attained a 70% annualized return and a 400% total return over a period of 3 years. Moreover, HedgeAgents can even formulate investment experience comparable to that of human experts (https://hedgeagents.github.io/). |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.13165 |
By: | Yan Zhang; Lin Chen; Yixiang Tian |
Abstract: | Interpretability analysis methods for artificial intelligence models, such as LIME and SHAP, are widely used, though they primarily serve as post-hoc tools for analyzing model outputs. While it is commonly believed that the transparency and interpretability of AI models diminish as their complexity increases, there is currently no standardized method for assessing the inherent interpretability of the models themselves. This paper uses bond market default prediction as a case study, applying commonly used machine learning algorithms. First, the classification performance of these algorithms in default prediction is evaluated. Then, leveraging LIME and SHAP to assess the contribution of sample features to prediction outcomes, the paper proposes a novel method for evaluating the interpretability of the models themselves. The results of this analysis are consistent with intuitive understanding and logical expectations regarding the interpretability of these models. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.19615 |
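As a concrete illustration of the post-hoc attribution step the abstract starts from, below is a minimal sketch applying SHAP to a gradient-boosted default classifier; the synthetic data, feature count, and model choice are illustrative assumptions, not the paper's bond dataset or its proposed interpretability measure.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # stand-ins for issuer features
y = (X[:, 0] - 0.5 * X[:, 1]                   # default driven by two features
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)          # model-specific tree explainer
shap_values = explainer.shap_values(X)         # per-sample feature attributions

# Mean |SHAP| per feature gives a global ranking of what drives predicted default.
print(np.abs(shap_values).mean(axis=0))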
By: | Jian Chen; Guohao Tang; Guofu Zhou; Wu Zhu |
Abstract: | We study whether ChatGPT and DeepSeek can extract information from the Wall Street Journal to predict the stock market and the macroeconomy. We find that ChatGPT has predictive power. DeepSeek underperforms ChatGPT, which is trained more extensively in English. Other large language models also underperform. Consistent with financial theories, the predictability is driven by investors' underreaction to positive news, especially during periods of economic downturn and high information uncertainty. Negative news correlates with returns but lacks predictive value. At present, ChatGPT appears to be the only model capable of capturing economic news that links to the market risk premium. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.10008 |
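The predictive exercise described above typically takes the standard predictive-regression form (an assumption about this paper's exact specification):

r_{t+1} = \alpha + \beta \, s_t + \varepsilon_{t+1}

where s_t is, for example, an LLM-derived score of day-t Wall Street Journal news and r_{t+1} is the next-period market return; a significantly positive \hat{\beta} for positive-news scores, combined with no predictive power for negative-news scores, is the underreaction pattern the abstract describes.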
By: | Ankur Sinha; Chaitanya Agarwal; Pekka Malo |
Abstract: | Large language models (LLMs) excel at generating human-like responses but often struggle with interactive tasks that require access to real-time information. This limitation poses challenges in finance, where models must access up-to-date information, such as recent news or price movements, to support decision-making. To address this, we introduce Financial Agent, a knowledge-grounding approach for LLMs to handle financial queries using real-time text and tabular data. Our contributions are threefold: First, we develop a Financial Context Dataset of over 50,000 financial queries paired with the required context. Second, we train FinBloom 7B, a custom 7 billion parameter LLM, on 14 million financial news articles from Reuters and Deutsche Presse-Agentur, alongside 12 million Securities and Exchange Commission (SEC) filings. Third, we fine-tune FinBloom 7B using the Financial Context Dataset to serve as a Financial Agent. This agent generates relevant financial context, enabling efficient real-time data retrieval to answer user queries. By reducing latency and eliminating the need for users to manually provide accurate data, our approach significantly enhances the capability of LLMs to handle dynamic financial tasks. Our proposed approach streamlines real-time financial decision-making, algorithmic trading, and other related tasks, and is valuable in contexts with high-velocity data flows. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.18471 |
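The knowledge-grounding loop described above can be sketched schematically; llm_generate and fetch_data below are hypothetical caller-supplied functions, not FinBloom 7B's actual interface.

import json

def answer_financial_query(query, llm_generate, fetch_data):
    """Ground a financial query in real-time data before answering it.

    llm_generate: prompt (str) -> completion (str); fetch_data: item (str)
    -> current text/tabular data (str). Both are stand-ins for the paper's
    Financial Agent components."""
    # Step 1: the LLM drafts a structured request for the context it needs.
    request = llm_generate(
        "Return a JSON list of tickers and news topics needed to answer: "
        + query
    )
    needed = json.loads(request)
    # Step 2: retrieve real-time context for each requested item.
    context = {item: fetch_data(item) for item in needed}
    # Step 3: answer the original query grounded in the retrieved context.
    return llm_generate(
        "Context: " + json.dumps(context) + "\nQuestion: " + query
    )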
By: | Jibang Wu; Chenghao Yang; Simon Mahns; Chaoqi Wang; Hao Zhu; Fei Fang; Haifeng Xu |
Abstract: | This paper develops an agentic framework that employs large language models (LLMs) to automate the generation of persuasive and grounded marketing content, using real estate listing descriptions as our focal application domain. Our method is designed to align the generated content with user preferences while highlighting useful factual attributes. This agent consists of three key modules: (1) Grounding Module, mimicking expert human behavior to predict marketable features; (2) Personalization Module, aligning content with user preferences; (3) Marketing Module, ensuring factual accuracy and the inclusion of localized features. We conduct systematic human-subject experiments in the domain of real estate marketing, with a focus group of potential house buyers. The results demonstrate that marketing descriptions generated by our approach are preferred over those written by human experts by a clear margin. Our findings suggest a promising LLM-based agentic framework to automate large-scale targeted marketing while ensuring responsible generation using only facts. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.16810 |
By: | Saint-Paul, Gilles (Paris School of Economics) |
Abstract: | This paper examines the potential for automation and artificial intelligence (AI) to induce a broader economic decline, impacting not only labor but also the owners of capital and advanced technology. While automation has traditionally favored skilled over unskilled workers, recent advancements in AI suggest that it could replace skilled labor as well, raising concerns over a diminishing middle class and the viability of mass consumption society. This study proposes a model with non-homothetic preferences and increasing returns technology, positing that in a world where AI eliminates skilled labor, demand for mass-produced goods may fall, destabilizing the very capitalist class reliant on consumer society. Within this framework, political power lies with the "oligarchs," or owners of proprietary technology, who may adopt policies such as Universal Basic Income (UBI) or Post-Fordism to sustain consumer demand and profitability. The analysis explores how oligarchs might use different policy mechanisms, including decisive control or lobbying-based menu auctions, to influence economic outcomes. Findings suggest that policy preferences vary among oligarchs based on their market focus, with luxury producers favoring policies that sustain a middle class and necessity producers inclined to support AI-driven automation under minimal redistribution. The paper provides insights into the complex interactions between technology, income distribution, and political economy under advanced automation. |
Keywords: | automation, artificial intelligence, income inequality, capitalism, middle class, Universal Basic Income (UBI), Post-Fordism, political economy, consumer society, oligarchs |
JEL: | O33 D63 J24 E25 D72 L16 P16 H23 D31 D42 |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp17682 |