on Artificial Intelligence |
By: | Felix Haag; Carlo Stingl; Katrin Zerfass; Konstantin Hopf; Thorsten Staake |
Abstract: | Information systems (IS) are frequently designed to leverage the negative effect of anchoring bias to influence individuals' decision-making (e.g., by manipulating purchase decisions). Recent advances in Artificial Intelligence (AI) and the explanations of its decisions through explainable AI (XAI) have opened new opportunities for mitigating biased decisions. So far, the potential of these technological advances to overcome anchoring bias remains widely unclear. To this end, we conducted two online experiments with a total of N=390 participants in the context of purchase decisions to examine the impact of AI and XAI-based decision support on anchoring bias. Our results show that AI alone and its combination with XAI help to mitigate the negative effect of anchoring bias. Ultimately, our findings have implications for the design of AI and XAI-based decision support and IS to overcome cognitive biases. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.04972&r= |
By: | Christoph Engel (Max Planck Institute for Research on Collective Goods, Bonn & University of Bonn); Max R. P. Grossmann (University of Cologne); Axel Ockenfels (University of Cologne & Max Planck Institute for Research on Collective Goods, Bonn) |
Abstract: | Large Language Models (LLMs) have the potential to profoundly transform and enrich experimental economic research. We propose a new software framework, “alter_ego”, which makes it easy to design experiments between LLMs and to integrate LLMs into oTree-based experiments with human subjects. Our toolkit is freely available at github.com/mrpg/ego. To illustrate, we run differently framed prisoner’s dilemmas with interacting machines as well as with human-machine interaction. Framing effects in machine-only treatments are strong and similar to those expected from previous human-only experiments, yet less pronounced and qualitatively different if machines interact with human participants. |
Keywords: | Software for experiments, large language models, human-machine interaction, framing |
JEL: | C91 C92 D91 O33 L86 |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:ajk:ajkdps:302&r= |
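The framing manipulation this abstract describes can be sketched in a few lines. This is not the alter_ego API (see github.com/mrpg/ego for that); the frame names, prompts, and the deterministic `choose` stand-in for an LLM call are all illustrative assumptions, so the payoff logic runs without any model access.

```python
# Sketch of a framed prisoner's dilemma between two simulated agents.
# `choose` is a deterministic stand-in for an LLM decision; the real
# toolkit would route the prompt to a language model.

PAYOFFS = {  # (row, col) -> (row payoff, col payoff), a standard PD
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def framed_prompt(frame: str) -> str:
    """Build the decision prompt under a given frame (cooperative vs. competitive wording)."""
    return f"You are playing the {frame}. Choose C (cooperate) or D (defect)."

def choose(prompt: str) -> str:
    """Placeholder for an LLM call: cooperate only under the cooperative frame."""
    return "C" if "Community Game" in prompt else "D"

def play(frame: str) -> tuple:
    a = choose(framed_prompt(frame))
    b = choose(framed_prompt(frame))
    return PAYOFFS[(a, b)]

# Framing flips the outcome even though incentives are identical.
print(play("Community Game"))   # both cooperate -> (3, 3)
print(play("Wall Street Game")) # both defect -> (1, 1)
```

The point of the sketch is that the payoff matrix is held fixed while only the wording of the prompt varies, which is exactly the treatment contrast a framing experiment needs.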
By: | David Arnold; Will S. Dobbie; Peter Hull |
Abstract: | We develop new quasi-experimental tools to understand algorithmic discrimination and build non-discriminatory algorithms when the outcome of interest is only selectively observed. These tools are applied in the context of pretrial bail decisions, where conventional algorithmic predictions are generated using only the misconduct outcomes of released defendants. We first show that algorithmic discrimination arises in such settings when the available algorithmic inputs are systematically different for white and Black defendants with the same objective misconduct potential. We then show how algorithmic discrimination can be eliminated by measuring and purging these conditional input disparities. Leveraging the quasi-random assignment of bail judges in New York City, we find that our new algorithms not only eliminate algorithmic discrimination but also generate more accurate predictions by correcting for the selective observability of misconduct outcomes. |
JEL: | C26 J15 K42 |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:32403&r= |
By: | Rehse, Dominik; Valet, Sebastian; Walter, Johannes |
Abstract: | With the final approval of the EU's Artificial Intelligence Act (AI Act), it is now clear that general-purpose AI (GPAI) models with systemic risk will need to undergo adversarial testing. This provision is a response to the emergence of "generative AI" models, which are currently the most notable form of GPAI models generating rich-form content such as text, images, and video. Adversarial testing involves repeatedly interacting with a model to try to lead it to exhibit unwanted behaviour. However, the specific implementation of such testing for GPAI models with systemic risk has not been clearly spelled out in the AI Act. Instead, the legislation only refers to codes of practice and harmonised standards which are soon to be developed. In this policy brief, which is based on research funded by the Baden-Württemberg Foundation, we propose that these codes and standards should reflect that an effective adversarial testing regime requires testing by independent third parties, a well-defined goal, clear roles with proper incentive and coordination schemes for all parties involved, and standardised reporting of the results. The market design approach is helpful for developing, testing and improving the underlying rules and the institutional setup of such adversarial testing regimes. We outline the design space for an extensive form of adversarial testing, called red teaming, of generative AI models. This is intended to stimulate the discussion in preparation for the codes of practice, harmonised standards and potential additional provisions by governing bodies. |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:zbw:zewpbs:294875&r= |
By: | Yusuke Narita (Yale University); Kohei Yata (Yale University) |
Abstract: | Algorithms make a growing portion of policy and business decisions. We develop a treatment-effect estimator using algorithmic decisions as instruments for a class of stochastic and deterministic algorithms. Our estimator is consistent and asymptotically normal for well-defined causal effects. A special case of our setup is multidimensional regression discontinuity designs with complex boundaries. We apply our estimator to evaluate the Coronavirus Aid, Relief, and Economic Security Act, which allocated many billions of dollars worth of relief funding to hospitals via an algorithmic rule. The funding is shown to have little effect on COVID-19-related hospital activities. Naive estimates exhibit selection bias. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2391&r= |
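The core idea of using an algorithmic decision rule as an instrument can be illustrated with a minimal Wald/IV computation. The cutoff rule, the data, and the perfect-compliance assumption below are invented for illustration and are much simpler than the paper's estimator, which handles stochastic rules and complex boundaries.

```python
# Minimal sketch: funding eligibility z follows a known algorithmic cutoff,
# and the Wald/IV ratio cov(z, y) / cov(z, d) recovers the effect of actual
# funding d on outcome y. All numbers are illustrative.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def wald_iv(z, d, y):
    """IV estimate of the effect of treatment d on outcome y, instrumented by z."""
    return cov(z, y) / cov(z, d)

# Units above a score cutoff are algorithmically eligible (z = 1).
score = [0.2, 0.4, 0.6, 0.8, 0.9, 0.1, 0.7, 0.3]
z = [1 if s >= 0.5 else 0 for s in score]
d = list(z)                       # perfect compliance, for simplicity
y = [2.0 * di + 1.0 for di in d]  # true effect of funding is 2.0

print(wald_iv(z, d, y))  # -> 2.0
```

Because eligibility is a deterministic function of the score, variation in z near the cutoff is plausibly as-good-as-random, which is what licenses the instrument.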
By: | Eric Auerbach; Annie Liang; Max Tabord-Meehan; Kyohei Okumura |
Abstract: | Many algorithms have a disparate impact in that their benefits or harms fall disproportionately on certain social groups. Addressing an algorithm's disparate impact can be challenging, however, because it is not always clear whether there exists an alternative more-fair algorithm that does not compromise on other key objectives such as accuracy or profit. Establishing the improvability of algorithms with respect to multiple criteria is of both conceptual and practical interest: in many settings, disparate impact that would otherwise be prohibited under US federal law is permissible if it is necessary to achieve a legitimate business interest. The question is how a policy maker can formally substantiate, or refute, this necessity defense. In this paper, we provide an econometric framework for testing the hypothesis that it is possible to improve on the fairness of an algorithm without compromising on other pre-specified objectives. Our proposed test is simple to implement and can incorporate any exogenous constraint on the algorithm space. We establish the large-sample validity and consistency of our test, and demonstrate its use empirically by evaluating a healthcare algorithm originally considered by Obermeyer et al. (2019). In this demonstration, we find strong statistically significant evidence that it is possible to reduce the algorithm's disparate impact without compromising on the accuracy of its predictions. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.04816&r= |
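The improvability question the paper formalizes can be shown as a toy comparison: does any feasible alternative algorithm reduce disparate impact without giving up accuracy? The numbers below are invented, and this simple dominance check stands in for the paper's large-sample hypothesis test, which accounts for sampling uncertainty.

```python
# Toy version of the improvability comparison: a status-quo algorithm is
# improvable if some feasible candidate has strictly lower disparity and
# at least equal accuracy. (accuracy, disparity) pairs are illustrative.

def dominates(candidate, status_quo):
    """True if candidate lowers disparity without losing accuracy."""
    acc_c, disp_c = candidate
    acc_s, disp_s = status_quo
    return disp_c < disp_s and acc_c >= acc_s

status_quo = (0.90, 0.15)
candidates = [(0.89, 0.05), (0.90, 0.08), (0.92, 0.20)]

improvable = any(dominates(c, status_quo) for c in candidates)
print(improvable)  # -> True: (0.90, 0.08) keeps accuracy with less disparity
```

The paper's contribution is to turn this pointwise comparison into a statistically valid test over an estimated frontier of feasible algorithms.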
By: | Adam, Martin; Wessel, Michael; Benlian, Alexander |
Abstract: | Communicating with customers through live chat interfaces has become an increasingly popular means to provide real-time customer service in many e-commerce settings. Today, human chat service agents are frequently replaced by conversational software agents or chatbots, which are systems designed to communicate with human users by means of natural language often based on artificial intelligence (AI). Though cost- and time-saving opportunities triggered a widespread implementation of AI-based chatbots, they still frequently fail to meet customer expectations, potentially resulting in users being less inclined to comply with requests made by the chatbot. Drawing on social response and commitment-consistency theory, we empirically examine through a randomized online experiment how verbal anthropomorphic design cues and the foot-in-the-door technique affect user request compliance. Our results demonstrate that both anthropomorphism as well as the need to stay consistent significantly increase the likelihood that users comply with a chatbot’s request for service feedback. Moreover, the results show that social presence mediates the effect of anthropomorphic design cues on user compliance. |
Date: | 2024–04–26 |
URL: | http://d.repec.org/n?u=RePEc:dar:wpaper:144613&r= |
By: | Chuanhao Li (Yale University); Runhan Yang (The Chinese University of Hong Kong); Tiankai Li (University of Science and Technology of China); Milad Bafarassat (Sabanci University); Kourosh Sharifi (Sabanci University); Dirk Bergemann (Yale University); Zhuoran Yang (Yale University) |
Abstract: | Large Language Models (LLMs) like GPT-4 have revolutionized natural language processing, showing remarkable linguistic proficiency and reasoning capabilities. However, their application in strategic multi-agent decision-making environments is hampered by significant limitations including poor mathematical reasoning, difficulty in following instructions, and a tendency to generate incorrect information. These deficiencies hinder their performance in strategic and interactive tasks that demand adherence to nuanced game rules, long-term planning, exploration in unknown environments, and anticipation of opponents’ moves. To overcome these obstacles, this paper presents a novel LLM agent framework equipped with memory and specialized tools to enhance their strategic decision-making capabilities. We deploy the tools in a number of economically important environments, in particular bilateral bargaining and multi-agent and dynamic mechanism design. We employ quantitative metrics to assess the framework’s performance in various strategic decision-making problems. Our findings establish that our enhanced framework significantly improves the strategic decision-making capability of LLMs. While we highlight the inherent limitations of current LLM models, we demonstrate the improvements through targeted enhancements, suggesting a promising direction for future developments in LLM applications for interactive environments. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2393&r= |
By: | Tian Tian; Liu Ze hui; Huang Zichen; Yubing Tang |
Abstract: | This paper explores the application of AI and NLP techniques for user feedback analysis in the context of heavy machine crane products. By leveraging AI and NLP, organizations can gain insights into customer perceptions, improve product development, enhance satisfaction and loyalty, inform decision-making, and gain a competitive advantage. The paper highlights the impact of user feedback analysis on organizational performance and emphasizes the reasons for using AI and NLP, including scalability, objectivity, improved accuracy, increased insights, and time savings. The methodology involves data collection, cleaning, text and rating analysis, interpretation, and feedback implementation. Results include sentiment analysis, word cloud visualizations, and radar charts comparing product attributes. These findings provide valuable information for understanding customer sentiment, identifying improvement areas, and making data-driven decisions to enhance the customer experience. In conclusion, AI and NLP techniques for user feedback analysis offer organizations a powerful tool to understand customers, improve product development, increase satisfaction, and drive business success. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.04692&r= |
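The sentiment-analysis step of such a feedback pipeline can be sketched with a toy lexicon scorer. A production system would use a trained model rather than word lists; the lexicons and example reviews below are invented for illustration.

```python
# Toy lexicon-based sentiment scoring of product feedback: count positive
# words minus negative words per review. Word lists are illustrative.

POSITIVE = {"reliable", "smooth", "excellent", "durable", "easy"}
NEGATIVE = {"noisy", "broken", "slow", "unreliable", "difficult"}

def sentiment(review: str) -> int:
    """Return positive-minus-negative word count for one review."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reviews = [
    "crane operation is smooth and reliable",
    "controls are noisy and the arm is slow",
]
scores = [sentiment(r) for r in reviews]
print(scores)  # -> [2, -2]
```

Aggregating such scores per product attribute is what would feed the radar-chart comparison the abstract mentions.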
By: | Daron Acemoglu; Simon Johnson |
Abstract: | David Ricardo initially believed machinery would help workers but revised his opinion, likely based on the impact of automation in the textile industry. Despite cotton textiles becoming one of the largest sectors in the British economy, real wages for cotton weavers did not rise for decades. As E.P. Thompson emphasized, automation forced workers into unhealthy factories with close surveillance and little autonomy. Automation can increase wages, but only when accompanied by new tasks that raise the marginal productivity of labor and/or when there is sufficient additional hiring in complementary sectors. Wages are unlikely to rise when workers cannot push for their share of productivity growth. Today, artificial intelligence may boost average productivity, but it also may replace many workers while degrading job quality for those who remain employed. As in Ricardo’s time, the impact of automation on workers today is more complex than an automatic linkage from higher productivity to better wages. |
JEL: | B12 J23 O14 |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:32416&r= |
By: | Joaquin Vespignani; Russell Smyth |
Abstract: | This paper employs insights from earth science on the financial risk of project developments to present an economic theory of critical minerals. Our theory posits that back-ended critical mineral projects that have unaddressed technical and nontechnical barriers, such as those involving lithium and cobalt, exhibit an additional risk for investors which we term the “back-ended risk premium”. We show that the back-ended risk premium increases the cost of capital and, therefore, has the potential to reduce investment in the sector. We posit that the back-ended risk premium may also reduce the gains in productivity expected from artificial intelligence (AI) technologies in the mining sector. Progress in AI may, however, lessen the back-ended risk premium itself through shortening the duration of mining projects and the required rate of investment through reducing the associated risk. We conclude that the best way to reduce the costs associated with energy transition is for governments to invest heavily in AI mining technologies and research. |
Keywords: | critical minerals, artificial intelligence, risk premium |
JEL: | Q02 Q40 Q50 |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2024-30&r= |
By: | Thorsten Hens (University of Zurich - Department of Banking and Finance; Norwegian School of Economics and Business Administration (NHH); Swiss Finance Institute); Trine Nordlie (Norwegian School of Economics (NHH)) |
Abstract: | This study compares OpenAI’s ChatGPT-4 and Google’s Bard with bank experts in determining investors’ risk profiles. We find that for half of the client cases used, there are no statistically significant differences in the risk profiles. Moreover, the economic relevance of the differences is small. However, the LLMs are not good at explaining the risk profiles. |
Keywords: | Large Language Models, ChatGPT, Bard, Risk Profiling |
JEL: | D8 D14 D81 G51 |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2430&r= |
By: | Cortecci, Mattia Benato (Dondena Research Centre for Social Dynamics and Public Policy, Bocconi University); Bonetti, Marco |
Abstract: | A review of artificial intelligence applications in healthcare and the issues concerning their governance and impact on individuals and society, also taking into account the problems of ethics, accountability, and regulation that arise. |
Date: | 2024–05–09 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:beudp&r= |
By: | Zhiyu Cao; Zachary Feinstein |
Abstract: | This study explores the innovative use of Large Language Models (LLMs) as analytical tools for interpreting complex financial regulations. The primary objective is to design effective prompts that guide LLMs in distilling verbose and intricate regulatory texts, such as the Basel III capital requirement regulations, into a concise mathematical framework that can be subsequently translated into actionable code. This novel approach aims to streamline the implementation of regulatory mandates within the financial reporting and risk management systems of global banking institutions. A case study was conducted to assess the performance of various LLMs, demonstrating that GPT-4 outperforms other models in processing and collecting necessary information, as well as executing mathematical calculations. The case study utilized numerical simulations with asset holdings -- including fixed income, equities, currency pairs, and commodities -- to demonstrate how LLMs can effectively implement the Basel III capital adequacy requirements. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.06808&r= |
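The kind of "actionable code" the paper asks an LLM to distill from the Basel III text can be sketched as a risk-weighted-assets total and a CET1 ratio check. The 4.5% CET1 minimum is the Basel III figure; the risk weights are simplified stand-ins for the standardized-approach tables, and the portfolio numbers are made up.

```python
# Sketch of a Basel III-style capital adequacy check: compute risk-weighted
# assets (RWA) from per-asset-class weights, then test the CET1 ratio
# against the 4.5% minimum. Weights are simplified; exposures are invented.

RISK_WEIGHTS = {"sovereign": 0.0, "mortgage": 0.35, "corporate": 1.0, "equity": 2.5}
CET1_MINIMUM = 0.045  # Basel III minimum CET1 ratio

def risk_weighted_assets(holdings):
    """holdings: list of (asset_class, exposure) pairs."""
    return sum(RISK_WEIGHTS[cls] * exposure for cls, exposure in holdings)

def cet1_ratio(cet1_capital, holdings):
    return cet1_capital / risk_weighted_assets(holdings)

portfolio = [("sovereign", 100.0), ("mortgage", 200.0),
             ("corporate", 300.0), ("equity", 20.0)]
rwa = risk_weighted_assets(portfolio)  # 0 + 70 + 300 + 50 = 420
ratio = cet1_ratio(25.0, portfolio)    # 25 / 420, about 5.95%
print(ratio >= CET1_MINIMUM)  # -> True
```

The paper's point is that a well-prompted LLM can produce this translation from regulatory prose to arithmetic; the sketch shows the target shape of that output, not the prompting itself.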
By: | Pierre Azoulay; Joshua L. Krieger; Abhishek Nagaraj |
Abstract: | Drawing insights from the field of innovation economics, we discuss the likely competitive environment shaping generative AI advances. Central to our analysis are the concepts of appropriability—whether firms in the industry are able to control the knowledge generated by their innovations—and complementary assets—whether effective entry requires access to specialized infrastructure and capabilities to which incumbent firms can ration access. While the rapid improvements in AI foundation models promise transformative impacts across broad sectors of the economy, we argue that tight control over complementary assets will likely result in a concentrated market structure, as in past episodes of technological upheaval. We suggest the likely paths through which incumbent firms may restrict entry, confining newcomers to subordinate roles and stifling broad sectoral innovation. We conclude with speculations regarding how this oligopolistic future might be averted. Policy interventions aimed at fractionalizing or facilitating shared access to complementary assets might help preserve competition and incentives for extending the generative AI frontier. Ironically, the best hopes for a vibrant open source AI ecosystem might rest on the presence of a “rogue” technology giant, who might choose openness and engagement with smaller firms as a strategic weapon wielded against other incumbents. |
JEL: | L17 L86 O32 O38 |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:32474&r= |
By: | Yiluan Xing; Chao Yan; Cathy Chang Xie |
Abstract: | Forecasting stock prices remains a considerable challenge in financial markets, bearing significant implications for investors, traders, and financial institutions. Amid the ongoing AI revolution, NVIDIA has emerged as a key player driving innovation across various sectors. Given its prominence, we chose NVIDIA as the subject of our study. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.08284&r= |
By: | Tänzer, Alina |
Abstract: | Central bank intervention in the form of quantitative easing (QE) during times of low interest rates is a controversial topic. This paper introduces a novel approach to study the effectiveness of such unconventional measures. Using U.S. data on six key financial and macroeconomic variables between 1990 and 2015, the economy is estimated by artificial neural networks. Historical counterfactual analyses show that real effects are less pronounced than yield effects. Disentangling the effects of the individual asset purchase programs, impulse response functions provide evidence for QE being less effective the more the crisis is overcome. The peak effects of all QE interventions during the Financial Crisis only amounts to 1.3 pp for GDP growth and 0.6 pp for inflation respectively. Hence, the time as well as the volume of the interventions should be deliberated. |
Keywords: | Artificial Intelligence, Machine Learning, Neural Networks, Forecasting and Simulation: Models and Applications, Financial Markets and the Macroeconomy, Monetary Policy, Central Banks and Their Policies |
JEL: | C45 E47 E44 E52 E58 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:zbw:imfswp:295732&r= |
By: | Maria S. Mavillonio |
Abstract: | In this paper, we leverage recent advancements in large language models to extract information from business plans on various equity crowdfunding platforms and predict the success of firm campaigns. Our approach spans a broad and comprehensive spectrum of model complexities, ranging from standard textual analysis to more intricate textual representations (e.g., Transformers), thereby offering a clear view of the challenges in understanding the underlying data. To this end, we build a novel dataset comprising more than 640 equity crowdfunding campaigns from major Italian platforms. Through rigorous analysis, our results indicate a compelling correlation between the use of intricate textual representations and the enhanced predictive capacity for identifying successful campaigns. |
Keywords: | Crowdfunding, Text Representation, Natural Language Processing, Transformers |
JEL: | C45 C53 G23 L26 |
Date: | 2024–05–01 |
URL: | http://d.repec.org/n?u=RePEc:pie:dsedps:2024/308&r= |
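The "standard textual analysis" end of the paper's model spectrum can be illustrated with a minimal TF-IDF representation of campaign texts. The campaign snippets below are invented; a real pipeline would feed these vectors to a classifier and compare against transformer embeddings.

```python
# Minimal TF-IDF sketch: weight each term by its frequency in a document,
# discounted by how many documents contain it. Texts are illustrative.

import math
from collections import Counter

def tf_idf(docs):
    """Return one {term: weight} dict per document."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (c / len(toks)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

campaigns = [
    "innovative fintech platform seeking growth capital",
    "family restaurant seeking capital for expansion",
]
vecs = tf_idf(campaigns)
# Terms shared by every campaign get zero weight, since log(2/2) = 0.
print(vecs[0]["seeking"])  # -> 0.0
```

Distinctive terms like "fintech" receive positive weight, which is what lets even this simple representation separate campaign types.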