|
on Artificial Intelligence |
By: | Mathieu Chevrier (Université Côte d'Azur, CNRS, GREDEG, France); Vincent Teixeira (Université de Lorraine, CNRS, BETA, France) |
Abstract: | We conducted a laboratory experiment where participants could either choose between an equal or unequal allocation, either delegate the choice to an algorithm controlled by another participant. This participant either has a high control on the algorithm (the algorithm follow perfectly the participants' decision) either a low control (the algorithm sometimes follows the participants' decisions). Our results suggest that a high level of control by the participants over the algorithm implies that participants bear full responsibility in the event of an unequal decision. A low level of control by the participants over the algorithm implies that the participants are perceived as 56.17% less responsible than participants who have a high level of control over the algorithm. The delegator is perceived as 56.52% more responsible than the delegator who delegates to an algorithm that is fully controlled. Finally, we demonstrate that participants with low control over the algorithm are more likely to choose an unequal allocation when they can hide themselves behind the algorithm. These results imply that companies might prioritize an algorithm's profitability over ethical considerations, effectively shifting the burden of responsibility to the user. |
Keywords: | Artificial Intelligence, Delegation, Responsibility, Punishment, Laboratory experiment |
JEL: | C92 D63 |
Date: | 2024–03 |
URL: | http://d.repec.org/n?u=RePEc:gre:wpaper:2024-04&r=ain |
By: | Zengqing Wu; Shuyuan Zheng; Qianying Liu; Xu Han; Brian Inhyuk Kwon; Makoto Onizuka; Shaojie Tang; Run Peng; Chuan Xiao |
Abstract: | Recent advancements have shown that agents powered by large language models (LLMs) possess capabilities to simulate human behaviors and societal dynamics. However, the potential for LLM agents to spontaneously establish collaborative relationships in the absence of explicit instructions has not been studied. To address this gap, we conduct three case studies, revealing that LLM agents are capable of spontaneously forming collaborations even within competitive settings. This finding not only demonstrates the capacity of LLM agents to mimic competition and cooperation in human societies but also validates a promising vision of computational social science. Specifically, it suggests that LLM agents could be utilized to model human social interactions, including those with spontaneous collaborations, thus offering insights into social phenomena. The source codes for this study are available at https://github.com/wuzengqing001225/SABM_ShallWeTalk . |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.12327&r=ain |
By: | Ruqing Xu |
Abstract: | A principal designs an algorithm that generates a publicly observable prediction of a binary state. She must decide whether to act directly based on the prediction or to delegate the decision to an agent with private information but potential misalignment. We study the optimal design of the prediction algorithm and the delegation rule in such environments. Three key findings emerge: (1) Delegation is optimal if and only if the principal would make the same binary decision as the agent had she observed the agent's information. (2) Providing the most informative algorithm may be suboptimal even if the principal can act on the algorithm's prediction. Instead, the optimal algorithm may provide more information about one state and restrict information about the other. (3) Common restrictions on algorithms, such as keeping a "human-in-the-loop" or requiring maximal prediction accuracy, strictly worsen decision quality in the absence of perfectly aligned agents and state-revealing signals. These findings predict the underperformance of human-machine collaborations if no measures are taken to mitigate common preference misalignment between algorithms and human decision-makers. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.09384&r=ain |
By: | Brynjolfsson, Erik (Stanford U); Li, Danielle (MIT); Raymond, Lindsey R. (MIT) |
Abstract: | New AI tools have the potential to change the way workers perform and learn, but little is known about their impacts on the job. In this paper, we study the staggered introduction of a generative AI-based conversational assistant using data from 5, 179 customer support agents. Access to the tool increases productivity, as measured by issues resolved per hour, by 14% on average, including a 34% improvement for novice and low-skilled workers but with minimal impact on experienced and highly skilled workers. We provide suggestive evidence that the AI model disseminates the best practices of more able workers and helps newer workers move down the experience curve. In addition, we find that AI assistance improves customer sentiment, increases employee retention, and may lead to worker learning. Our results suggest that access to generative AI can increase productivity, with large heterogeneity in effects across workers. |
JEL: | D8 J24 M15 M51 O33 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:ecl:stabus:4141&r=ain |
By: | David H. Kreitmeir (Department of Economics and SoDa Labs, Monash University); Paul A. Raschky (Department of Economics and SoDa Labs, Monash University) |
Abstract: | We analyse the individual productivity effects of Italy's ban on ChatGPT, a generative pretrained transformer chatbot. We compile data on the daily coding output quantity and quality of over 36, 000 GitHub users in Italy and other European countries and combine these data with the sudden announcement of the ban in a difference-in-differences framework. Among the affected users in Italy, we find a short-term increase in output quantity and quality for less experienced users and a decrease in productivity on more routine tasks for experienced users. |
Keywords: | artificial intelligence, productivity |
JEL: | D8 J24 O33 |
Date: | 2024–03 |
URL: | http://d.repec.org/n?u=RePEc:ajr:sodwps:2024-01&r=ain |
By: | Joachim Meyer |
Abstract: | Algorithmic decision support (ADS), using Machine-Learning-based AI, is becoming a major part of many processes. Organizations introduce ADS to improve decision-making and make optimal use of data, thereby possibly avoiding deviations from the normative "homo economicus" and the biases that characterize human decision-making. A closer look at the development process of ADS systems reveals that ADS itself results from a series of largely unspecified human decisions. They begin with deliberations for which decisions to use ADS, continue with choices while developing the ADS, and end with using the ADS output for decisions. Finally, conclusions are implemented in organizational settings, often without analyzing the implications of the decision support. The paper explores some issues in developing and using ADS, pointing to behavioral aspects that should be considered when implementing ADS in organizational settings. It points out directions for further research, which is essential for gaining an informed understanding of the processes and their vulnerabilities. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.14674&r=ain |
By: | Edwin Zhang; Sadie Zhao; Tonghan Wang; Safwan Hossain; Henry Gasztowtt; Stephan Zheng; David C. Parkes; Milind Tambe; Yiling Chen |
Abstract: | Artificial Intelligence (AI) holds promise as a technology that can be used to improve government and economic policy-making. This paper proposes a new research agenda towards this end by introducing Social Environment Design, a general framework for the use of AI for automated policy-making that connects with the Reinforcement Learning, EconCS, and Computational Social Choice communities. The framework seeks to capture general economic environments, includes voting on policy objectives, and gives a direction for the systematic analysis of government and economic policy through AI simulation. We highlight key open problems for future research in AI-based policy-making. By solving these challenges, we hope to achieve various social welfare objectives, thereby promoting more ethical and responsible decision making. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.14090&r=ain |
By: | Claudia Biancotti (Bank of Italy); Carolina Camassa (Bank of Italy) |
Abstract: | ChatGPT, a software seeking to simulate human conversational abilities, is attracting increasing attention. It is sometimes portrayed as a groundbreaking productivity aid, including for creative work. In this paper, we run an experiment to assess its potential in complex writing tasks. We ask the software to compose a policy brief for the Board of the Bank of Italy. We find that ChatGPT can accelerate workflows by providing well-structured content suggestions, and by producing extensive, linguistically correct text in a matter of seconds. It does, however, require a significant amount of expert supervision, which partially offsets productivity gains. If the app is used naively, output can be incorrect, superficial, or irrelevant. Superficiality is an especially problematic limitation in the context of policy advice intended for high-level audiences. |
Keywords: | Large language models, generative artificial intelligence, ChatGPT |
JEL: | O33 O32 |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:bdi:opques:qef_814_23&r=ain |
By: | Hang Yuan; Saizhuo Wang; Jian Guo |
Abstract: | Recently, we introduced a new paradigm for alpha mining in the realm of quantitative investment, developing a new interactive alpha mining system framework, Alpha-GPT. This system is centered on iterative Human-AI interaction based on large language models, introducing a Human-in-the-Loop approach to alpha discovery. In this paper, we present the next-generation Alpha-GPT 2.0 \footnote{Draft. Work in progress}, a quantitative investment framework that further encompasses crucial modeling and analysis phases in quantitative investment. This framework emphasizes the iterative, interactive research between humans and AI, embodying a Human-in-the-Loop strategy throughout the entire quantitative investment pipeline. By assimilating the insights of human researchers into the systematic alpha research process, we effectively leverage the Human-in-the-Loop approach, enhancing the efficiency and precision of quantitative investment research. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.09746&r=ain |
By: | Leek, Lauren Caroline (European University Institute); Bischl, Simeon; Freier, Maximilian |
Abstract: | While institutionally independent, monetary policy-makers do not operate in a vacuum. The policy choices of a central bank are intricately linked to government policies and financial markets. We present novel indices of monetary, fiscal and financial policy-linkages based on central bank communication, namely, speeches by 118 central banks worldwide from 1997 to mid-2023. Our indices measure not only instances of monetary, fiscal or financial dominance but, importantly, also identify communication that aims to coordinate monetary policy with the government and financial markets. To create our indices, we use a Large Language Model (ChatGPT 3.5-0301) and provide transparent prompt-engineering steps, considering both accuracy on the basis of a manually coded dataset as well as efficiency regarding token usage. We also test several model improvements and provide descriptive statistics of the trends of the indices over time and across central banks including correlations with political-economic variables. |
Date: | 2024–02–14 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:78wnp&r=ain |