on Artificial Intelligence |
By: | Mahyar Habibi |
Abstract: | This paper explores the economic underpinnings of open sourcing advanced large language models (LLMs) by for-profit companies. Empirical analysis reveals that: (1) LLMs are compatible with R&D portfolios of numerous technologically differentiated firms; (2) open-sourcing likelihood decreases with an LLM's performance edge over rivals, but increases for models from large tech companies; and (3) open-sourcing an advanced LLM led to an increase in research-related activities. Motivated by these findings, a theoretical framework is developed to examine factors influencing a profit-maximizing firm's open-sourcing decision. The analysis frames this decision as a trade-off between accelerating technology growth and securing immediate financial returns. A key prediction from the theoretical analysis is an inverted-U-shaped relationship between the owner's size, measured by its share of LLM-compatible applications, and its propensity to open source the LLM. This finding suggests that moderate market concentration may be beneficial to the open source ecosystems of multi-purpose software technologies. |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2501.11581 |
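A toy numerical illustration of how an inverted-U in open-sourcing propensity can arise — not the paper's model; the functional forms and parameter values below are assumptions chosen only to exhibit the mechanism (ecosystem growth driven by outside developers vs. forgone proprietary rents):

```python
import numpy as np

# Toy illustration (assumed functional forms, not the paper's model):
# an owner with share s of LLM-compatible applications compares
#   closed profit:  s * T * mu_closed
#   open profit:    s * T * (1 + gamma * (1 - s)) * mu_open
# where outside developers (share 1 - s) drive extra growth gamma * (1 - s),
# but open-sourcing lowers the margin per application (mu_open < mu_closed).

T = 1.0          # baseline technology level
mu_closed = 1.0  # per-application margin when the LLM stays proprietary
mu_open = 0.8    # lower margin once the model is open (assumption)
gamma = 1.5      # growth contributed by outside developers (assumption)

s = np.linspace(0.0, 1.0, 101)
gain_from_opening = s * T * (mu_open * (1 + gamma * (1 - s)) - mu_closed)

peak = s[np.argmax(gain_from_opening)]
print(f"net gain from open-sourcing peaks at share s = {peak:.2f}")
# The gain is zero at s = 0 (nothing to monetize), rises, then falls as a
# large owner internalizes the ecosystem and gives up more proprietary rent:
# an inverted-U in s, consistent with the paper's headline prediction.
```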
By: | Gilles Grolleau (ESSCA School of Management Lyon); Murat C Mungan (Texas A&M University – School of Law); Naoufel Mzoughi (ECODEVELOPPEMENT - Ecodéveloppement - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement) |
Abstract: | Using an original experimental survey, we analyze how people perceive punishments generated by artificial intelligence (AI) compared to the same punishments generated by a human judge. We use two vignettes pertaining to two different albeit relatively common illegal behaviors, namely not picking up one's dog waste on public roads and setting fires in dry areas. In general, participants perceived AI judgements as having a larger deterrence effect compared to those rendered by a human judge. However, when we analyzed each scenario separately, we found that the differential effect of AI is only significant in the first scenario. We discuss the implications of these findings. |
Keywords: | Artificial intelligence, AI, Judges, Punishments, Unethical acts, Wrongdoings |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04854067 |
By: | Fernando Perez-Cruz; Hyun Song Shin |
Abstract: | When posed with a logical puzzle that demands reasoning about the knowledge of others and about counterfactuals, large language models (LLMs) display a distinctive and revealing pattern of failure. The LLM performs flawlessly when presented with the original wording of the puzzle available on the internet but performs poorly when incidental details are changed, suggestive of a lack of true understanding of the underlying logic. Our findings do not detract from the considerable progress in central bank applications of machine learning to data management, macro analysis and regulation/supervision. They do, however, suggest that caution should be exercised in deploying LLMs in contexts that demand rigorous reasoning in economic analysis. |
Date: | 2024–01–04 |
URL: | https://d.repec.org/n?u=RePEc:bis:bisblt:83 |
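A tiny sketch of the evaluation design the entry above describes — score an LLM on the canonical wording of a puzzle versus perturbed variants in which only incidental details change; `ask_llm` is a placeholder for any model API, and the puzzle strings are stand-ins:

```python
def ask_llm(question: str) -> str:
    """Placeholder for any LLM API call."""
    return "stub answer"

def accuracy(variants: list[tuple[str, str]]) -> float:
    """Share of (question, expected_answer) pairs answered correctly."""
    correct = sum(expected.lower() in ask_llm(q).lower()
                  for q, expected in variants)
    return correct / len(variants)

# Canonical wording (likely seen in training data) vs. perturbed versions
# with incidental details changed; a genuine reasoner should score equally
# on both, while pattern-matching on memorized text should not.
canonical = [("<original puzzle wording>", "<answer>")]
perturbed = [("<same logic, different names and details>", "<answer>")]
print(accuracy(canonical), accuracy(perturbed))
```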
By: | Ismaël Rafaï (Aix Marseille Univ, CNRS, AMSE); Bérengère Davin-Casalena (Observatoire Régional de la Santé); Dimitri Dubois (CEE-M); Bruno Ventelou (Aix Marseille Univ, CNRS, AMSE) |
Abstract: | Background. Earlier detection of neurodegenerative diseases may help patients plan for their future, achieve a better quality of life, and access clinical trials and possible future disease-modifying treatments. Due to recent advances in artificial intelligence (AI), significant help can come from computational approaches targeting diagnosis and monitoring. Yet detection tools are still underused. We aim to investigate the factors influencing individual valuation of AI-based prediction tools. Methods. We study individual valuation of early diagnosis tests for neurodegenerative diseases when artificial intelligence diagnosis is an option. We conducted a Discrete Choice Experiment on a representative sample of the French adult public (N = 1017), in which we presented participants with a hypothetical risk of developing a neurodegenerative disease in the future. We asked them to choose repeatedly between two possible early diagnosis tests that differ in terms of (1) type of test (biological tests vs AI tests analyzing electronic health records); (2) the identity of who communicates the test results; (3) sensitivity; (4) specificity; and (5) price. We study the weight of each attribute in the decision and how socio-demographic characteristics influence it. Results. Our results are twofold: respondents reveal a reduced utility value when AI testing is at stake (evaluated at 36.08 euros on average, CI = [22.13; 50.89]) and when results are communicated by a private company (95.15 euros, CI = [82.01; 109.82]). Conclusion. We interpret these figures as the shadow price that the public attaches to medical data privacy. The general public is still reluctant to adopt AI screening of their health data, particularly when these screening tests are carried out on large sets of personal data. |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:aim:wpaimx:2432 |
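A minimal sketch of the standard way willingness-to-pay figures like those above are recovered from a binary discrete choice experiment: fit a logit on attribute differences between the two tests, then divide each attribute coefficient by the price coefficient. The data below are synthetic and the coefficient values are illustrative assumptions, not the authors' estimates:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000  # synthetic choice tasks (illustrative, not the survey data)

# Attribute differences between test A and test B in each task:
# AI indicator, private-company communicator indicator, price (euros).
d_ai = rng.integers(-1, 2, n)
d_private = rng.integers(-1, 2, n)
d_price = rng.normal(0, 50, n)

# Simulate choices from a logit with "true" coefficients (assumed values).
u = -0.5 * d_ai - 1.2 * d_private - 0.012 * d_price + rng.logistic(0, 1, n)
choice = (u > 0).astype(int)  # 1 if test A is chosen

X = sm.add_constant(np.column_stack([d_ai, d_private, d_price]))
fit = sm.Logit(choice, X).fit(disp=0)
b_const, b_ai, b_private, b_price = fit.params

# WTP for attribute k = -beta_k / beta_price; a negative value is a
# disutility, i.e. the compensation required to accept the attribute
# (the paper's ~36-euro AI penalty has this interpretation).
wtp_ai = -b_ai / b_price
wtp_private = -b_private / b_price
print(f"WTP for AI testing: {wtp_ai:.1f} EUR (negative = disutility)")
print(f"WTP for private-company communication: {wtp_private:.1f} EUR")
```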
By: | Nikolova, Milena (University of Groningen); Angrisani, Marco (University of Southern California) |
Abstract: | Can people develop trust in Artificial Intelligence (AI) by learning about its developments? We conducted a survey experiment in a nationally representative panel survey in the United States (N = 1,491) to study whether exposure to news about AI influences trust differently than learning about non-AI scientific advancements. The results show that people trust AI advancements less than non-AI scientific developments, with significant variations across domains. Mistrust of AI is smallest in medicine, a high-stakes domain, and largest in the area of personal relationships. The key mediators are context-specific: fear is the most critical mediator for linguistics, excitement for medicine, and societal benefit for dating. Personality traits do not affect trust differences in the linguistics domain. In medicine, mistrust of AI is higher among respondents with high agreeableness and neuroticism scores. In personal relationships, mistrust of AI is strongest among individuals with high openness, conscientiousness, and agreeableness. Furthermore, mistrust of AI advancements is higher among women than men, as well as among older, White, and US-born individuals. Our results have implications for tailored communication strategies about AI advancements in the Fourth Industrial Revolution. |
Keywords: | Randomized Controlled Trial (RCT), survey experiment, Artificial Intelligence (AI), trust, United States |
JEL: | C91 D83 O33 Z10 |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp17635 |
By: | Stefania Albanesi (Department of Economics, University of Miami) |
Abstract: | We examine the link between labour market developments and new technologies such as artificial intelligence (AI) and software in 16 European countries over the period 2011-2019. Using data for occupations at the 3-digit level in Europe, we find that on average employment shares have increased in occupations more exposed to AI. This is particularly the case for occupations with a relatively higher proportion of younger and skilled workers. This evidence is in line with the skill-biased technological change theory. While there is heterogeneity across countries, only a few show a decline in the employment shares of occupations more exposed to AI-enabled automation. Country heterogeneity in this result seems to be linked to the pace of technology diffusion and education, but also to the level of product market regulation (competition) and employment protection laws. In contrast to the findings for employment, we find little evidence of a relationship between wages and potential exposure to new technologies. |
Keywords: | artificial intelligence, employment, skills, occupations |
JEL: | J23 O33 |
Date: | 2023–06–15 |
URL: | https://d.repec.org/n?u=RePEc:mia:wpaper:wp2023-01 |
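A stylized sketch of the kind of occupation-level regression the entry above describes: change in an occupation's employment share regressed on an AI-exposure score with country fixed effects, plus an interaction probing the skill-biased technological change channel. The file and column names are placeholders, not the authors' dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per (country, 3-digit occupation).
# Columns assumed for illustration:
#   d_emp_share   : change in the occupation's employment share, 2011-2019
#   ai_exposure   : AI-exposure score of the occupation
#   young_skilled : share of younger, high-skill workers in the occupation
df = pd.read_csv("occupations_panel.csv")  # placeholder file name

# Employment-share growth on AI exposure, with country fixed effects; the
# interaction asks whether gains concentrate in younger/skilled occupations.
model = smf.ols(
    "d_emp_share ~ ai_exposure * young_skilled + C(country)", data=df
)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(res.params[["ai_exposure", "ai_exposure:young_skilled"]])
```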
By: | Iñaki Aldasoro; Olivier Armantier; Sebastian Doerr; Leonardo Gambacorta; Tommaso Oliviero |
Abstract: | A representative survey shows that almost half of US households use generative artificial intelligence (gen AI) tools. The use of and knowledge about gen AI are significantly lower among women, the elderly and households with lower income or educational attainment. Respondents expect gen AI to bring more opportunities than risks for job prospects, especially among men and younger, more educated and higher-income households. Nonetheless, all groups trust gen AI less than humans, especially in the provision of financial and medical services. Survey participants express concern over the risks of data breaches and data abuse and overwhelmingly support the regulation of AI. Consistent with previous surveys, respondents trust government agencies and financial institutions more than big techs to safeguard their data. |
Date: | 2024–04–23 |
URL: | https://d.repec.org/n?u=RePEc:bis:bisblt:86 |
By: | Jin Kim; Shane Schweitzer; Christoph Riedl; David De Cremer |
Abstract: | We investigate whether and why people might reduce compensation for workers who use AI tools. Across 10 studies (N = 3,346), participants consistently lowered compensation for workers who used AI tools. This "AI Penalization" effect was robust across (1) different types of work and worker statuses (e.g., full-time, part-time, or freelance), (2) different forms of compensation (e.g., required payments or optional bonuses) and their timing, (3) various methods of eliciting compensation (e.g., slider scale, multiple choice, and numeric entry), and (4) conditions where workers' output quality was held constant, subject to varying inferences, or controlled for. Moreover, the effect emerged not only in hypothetical compensation scenarios (Studies 1-5) but also with real gig workers and real monetary compensation (Study 6). People reduced compensation for workers using AI tools because they believed these workers deserved less credit than those who did not use AI (Studies 3 and 4). This effect weakened when it was less permissible to reduce worker compensation, such as when employment contracts imposed stricter constraints (Study 4). Our findings suggest that the adoption of AI tools in the workplace may exacerbate inequality among workers, as those protected by structured contracts face less vulnerability to compensation reductions, while those without such protections risk greater financial penalties for using AI. |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2501.13228 |
By: | Pouliakas, Konstantinos (European Centre for the Development of Vocational Training (Cedefop)); Santangelo, Giulia (European Centre for the Development of Vocational Training (Cedefop)) |
Abstract: | Understanding the labour market impact of new, autonomous digital technologies, particularly generative or other forms of artificial intelligence (AI), is currently at the top of the research and policy agenda. Many initial studies, though not all, have shown that there is a wage premium to AI skills in labour markets. Such evidence tends to draw on data from web-based sources and typically deploys a keyword approach for identifying AI skills. This paper utilises representative adult workforce data from 29 European countries, the second European skills and jobs survey, to examine wage differentials of the AI developer workforce. The latter is uniquely identified as part of the workforce that writes programs using AI algorithms. The analysis shows that, on average, AI developers enjoy a significant wage premium relative to a comparably educated or skilled workforce, such as programmers who do not yet write code using AI at work. Wage decomposition analysis further illustrates that there is a large unexplained component of such wage differential. Part of AI programmers' larger wage variability can be attributed to a greater performance-based component in their wage schedules and higher job-skill requirements. |
Keywords: | artificial intelligence, skills, wage differentials, performance-based pay |
JEL: | J24 J31 J71 M52 |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp17607 |
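A compact sketch of a twofold Oaxaca-Blinder decomposition, the standard tool behind statements about a "large unexplained component" of a wage differential. The groups, covariates, synthetic data, and reference-coefficient convention here are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np
import statsmodels.api as sm

def oaxaca_blinder(y_a, X_a, y_b, X_b):
    """Twofold decomposition of mean(y_a) - mean(y_b), using group B's
    coefficients as the reference (one of several common conventions):
    gap = (Xbar_a - Xbar_b)' beta_b  +  Xbar_a' (beta_a - beta_b)."""
    Xa, Xb = sm.add_constant(X_a), sm.add_constant(X_b)
    beta_a = sm.OLS(y_a, Xa).fit().params
    beta_b = sm.OLS(y_b, Xb).fit().params
    explained = (Xa.mean(axis=0) - Xb.mean(axis=0)) @ beta_b
    unexplained = Xa.mean(axis=0) @ (beta_a - beta_b)
    return explained, unexplained

# Synthetic example: log wages for AI developers (a) vs. comparably skilled
# programmers (b), with two made-up covariates (education, job-skill level).
rng = np.random.default_rng(1)
X_a = rng.normal(0.3, 1, (400, 2))
y_a = 3.2 + X_a @ [0.15, 0.10] + rng.normal(0, 0.3, 400)
X_b = rng.normal(0.0, 1, (400, 2))
y_b = 3.0 + X_b @ [0.10, 0.08] + rng.normal(0, 0.3, 400)

explained, unexplained = oaxaca_blinder(y_a, X_a, y_b, X_b)
print(f"explained: {explained:.3f}  unexplained: {unexplained:.3f}")
```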
By: | Shijie Han; Changhai Zhou; Yiqing Shen; Tianning Sun; Yuhua Zhou; Xiaoxia Wang; Zhixiao Yang; Jingshu Zhang; Hongguang Li |
Abstract: | Current financial Large Language Models (LLMs) struggle with two critical limitations: a lack of depth in stock analysis, which impedes their ability to generate professional-grade insights, and the absence of objective evaluation metrics to assess the quality of stock analysis reports. To address these challenges, this paper introduces FinSphere, a conversational stock analysis agent, along with three major contributions: (1) Stocksis, a dataset curated by industry experts to enhance LLMs' stock analysis capabilities, (2) AnalyScore, a systematic evaluation framework for assessing stock analysis quality, and (3) FinSphere, an AI agent that can generate high-quality stock analysis reports in response to user queries. Experiments demonstrate that FinSphere achieves superior performance compared to both general and domain-specific LLMs, as well as existing agent-based systems, even when they are enhanced with real-time data access and few-shot guidance. The integrated framework, which combines real-time data feeds, quantitative tools, and an instruction-tuned LLM, yields substantial improvements in both analytical quality and practical applicability for real-world stock analysis. |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2501.12399 |
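A schematic sketch of the agent loop the abstract describes — real-time data feeds and quantitative tools feeding an instruction-tuned LLM, with outputs checked against a quality rubric. Every component below is a hypothetical placeholder standing in for the paper's actual Stocksis-tuned model, data feeds, and AnalyScore framework:

```python
from dataclasses import dataclass

@dataclass
class Context:
    ticker: str
    market_data: dict
    quant_signals: dict

def fetch_realtime_data(ticker: str) -> dict:
    return {"price": 100.0, "volume": 1e6}   # placeholder data feed

def run_quant_tools(market_data: dict) -> dict:
    return {"momentum": 0.1, "value": -0.2}  # placeholder factor tools

def instruction_tuned_llm(prompt: str) -> str:
    return "stub analysis report"            # placeholder model call

def analyscore(report: str) -> float:
    return 75.0                              # placeholder 0-100 rubric score

def analyze(query: str, ticker: str) -> str:
    """One pass of the assumed agent loop: data -> tools -> LLM -> scoring."""
    data = fetch_realtime_data(ticker)
    ctx = Context(ticker, data, run_quant_tools(data))
    prompt = (
        f"User question: {query}\n"
        f"Market data: {ctx.market_data}\n"
        f"Quant signals: {ctx.quant_signals}\n"
        "Write a professional stock-analysis report."
    )
    report = instruction_tuned_llm(prompt)
    if analyscore(report) < 60:  # quality gate; the threshold is an assumption
        report = instruction_tuned_llm(prompt + "\nRevise for depth and rigor.")
    return report

print(analyze("Is ACME attractively valued?", "ACME"))
```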
By: | Bryan T. Kelly (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER)); Boris Kuznetsov (Swiss Finance Institute); Semyon Malamud (Ecole Polytechnique Federale de Lausanne; Centre for Economic Policy Research (CEPR); Swiss Finance Institute); Teng Andrea Xu (École Polytechnique Fédérale de Lausanne (EPFL)) |
Abstract: | The core statistical technology in artificial intelligence is the large-scale transformer network. We propose a new asset pricing model that implants a transformer in the stochastic discount factor. This structure leverages conditional pricing information via cross-asset information sharing and nonlinearity. We also develop a linear transformer that serves as a simplified surrogate from which we derive an intuitive decomposition of the transformer's asset pricing mechanisms. We find large reductions in pricing errors from our artificial intelligence pricing model (AIPM) relative to previous machine learning models and dissect the sources of these gains. |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:chf:rpseri:rp2508 |
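A stylized numpy sketch of the core idea above: stochastic discount factor weights produced by an attention layer over the cross-section of asset characteristics, evaluated against squared pricing errors E[m·R] = 0. The one-layer architecture, dimensions, and random (untrained) parameters are simplifications for illustration, not the paper's AIPM:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, d, T = 50, 10, 8, 200   # assets, characteristics, head dim, months

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# One attention layer over the cross-section: each asset's representation
# mixes information from all other assets ("cross-asset information sharing").
Wq, Wk, Wv = (rng.normal(0, 0.1, (K, d)) for _ in range(3))
w_out = rng.normal(0, 0.1, d)

def sdf_weights(X):                      # X: (N, K) characteristics at date t
    Q, Kx, V = X @ Wq, X @ Wk, X @ Wv
    H = softmax(Q @ Kx.T / np.sqrt(d)) @ V
    return H @ w_out                     # (N,) portfolio weights

# SDF: m_{t+1} = 1 - w(X_t)' R_{t+1}; pricing errors are E[m_{t+1} R_{t+1,i}].
X = rng.normal(0, 1, (T, N, K))
R = rng.normal(0.01, 0.05, (T, N))       # synthetic excess returns
W = np.stack([sdf_weights(x) for x in X])
m = 1.0 - np.einsum("tn,tn->t", W, R)
alpha = (m[:, None] * R).mean(axis=0)    # per-asset pricing errors
print("mean squared pricing error:", (alpha ** 2).mean())
# In the paper, the transformer parameters are trained to minimize exactly
# this kind of squared pricing-error objective; here they are left random.
```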
By: | Jesús Villota (CEMFI, Centro de Estudios Monetarios y Financieros) |
Abstract: | Markets do not always efficiently incorporate news, particularly when information is complex or ambiguous. Traditional text analysis methods fail to capture the economic structure of information and its firm-specific implications. We propose a novel methodology that guides LLMs to systematically identify and classify firm-specific economic shocks in news articles according to their type, magnitude, and direction. This economically-informed classification allows for a more nuanced understanding of how markets process complex information. Using a simple trading strategy, we demonstrate that our LLM-based classification significantly outperforms a benchmark based on clustering vector embeddings, generating consistent profits out-of-sample while maintaining transparent and durable trading signals. The results suggest that LLMs, when properly guided by economic frameworks, can effectively identify persistent patterns in how markets react to different types of firm-specific news. Our findings contribute to understanding market efficiency and information processing, while offering a promising new tool for analyzing financial narratives. |
Keywords: | Large language models, business news, stock market reaction, market efficiency. |
JEL: | G12 G14 C45 C58 C63 D83 |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:cmf:wpaper:wp2025_2501 |
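A sketch of the classification-to-signal pipeline the abstract outlines: prompt an LLM to tag each article with a shock type, direction, and magnitude, then trade on the sign. The taxonomy, prompt wording, `call_llm` stub, and naive trading rule are assumptions for illustration; the authors' actual schema and strategy live in the paper:

```python
import json

# Hypothetical shock taxonomy (the paper's actual categories may differ).
SHOCK_TYPES = ["demand", "supply", "financing", "regulation", "technology"]

PROMPT = """You are an economic analyst. Classify the firm-specific shock in
the news article below. Reply with JSON only:
{{"type": one of {types}, "direction": -1 or 1, "magnitude": 1 to 3}}

Article: {article}"""

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API; returns a JSON string."""
    return '{"type": "demand", "direction": 1, "magnitude": 2}'  # stub

def classify(article: str) -> dict:
    raw = call_llm(PROMPT.format(types=SHOCK_TYPES, article=article))
    return json.loads(raw)

def signal(article: str) -> int:
    """Trading signal: long on positive shocks, short on negative ones,
    scaled by magnitude (a deliberately naive rule for illustration)."""
    shock = classify(article)
    return shock["direction"] * shock["magnitude"]

print(signal("ACME wins a large multi-year supply contract."))  # -> 2 (stub)
```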
By: | Avner Seror (Aix Marseille Univ, CNRS, AMSE, Marseille, France) |
Abstract: | As large language models (LLMs) become integrated into decision-making across various sectors, a key question arises: do they exhibit an emergent "moral mind" - a consistent set of moral principles guiding their ethical judgments - and is this reasoning uniform or diverse across models? To investigate this, we presented about forty different models from the main providers with a large array of structured ethical scenarios, creating one of the largest datasets of its kind. Our rationality tests revealed that at least one model from each provider demonstrated behavior consistent with stable moral principles, effectively behaving as if it were approximately maximizing a utility function encoding ethical reasoning. We identified these utility functions and observed a notable clustering of models around neutral ethical stances. To investigate variability, we introduced a novel non-parametric permutation approach, revealing that the most rational models shared 59% to 76% of their ethical reasoning patterns. Despite this shared foundation, differences emerged: roughly half displayed greater moral adaptability, bridging diverse perspectives, while the remainder adhered to more rigid ethical structures. |
Keywords: | Decision Theory, revealed preference, Rationality, artificial intelligence, LLM, PSM. |
JEL: | D9 C9 C44 |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:aim:wpaimx:2433 |
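A minimal sketch of a non-parametric permutation test for whether two models' choice patterns overlap more than chance would predict — one plausible reading of the "permutation approach" mentioned above; the agreement statistic and synthetic data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def shared_pattern_test(choices_a, choices_b, n_perm=10_000):
    """Observed agreement rate between two models' choices across scenarios,
    compared with agreement under random shuffling of one model's choices."""
    choices_a, choices_b = np.asarray(choices_a), np.asarray(choices_b)
    observed = (choices_a == choices_b).mean()
    perm = np.array([
        (choices_a == rng.permutation(choices_b)).mean()
        for _ in range(n_perm)
    ])
    p_value = (perm >= observed).mean()
    return observed, p_value

# Synthetic example: two models answering 200 structured ethical scenarios,
# each with three response options; model B mostly mirrors model A.
a = rng.integers(0, 3, 200)
b = np.where(rng.random(200) < 0.7, a, rng.integers(0, 3, 200))
obs, p = shared_pattern_test(a, b)
print(f"agreement = {obs:.2f}, permutation p-value = {p:.4f}")
```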
By: | Dirk Czarnitzki; Robin Lepers; Maikel Pellens |
Abstract: | The circular economy represents a systematic shift in production and consumption, aimed at extending the life cycle of products and materials while minimizing resource use and waste. Achieving the goals of the circular economy, however, challenges firms to innovate new products, technologies, and business models. This paper explores the role of artificial intelligence as an enabler of circular economy innovations. Through an empirical analysis of the German Community Innovation Survey, we show that firms investing in artificial intelligence are more likely to introduce circular economy innovations than those that do not. Additionally, the results indicate that the use of artificial intelligence enhances firms’ abilities to lower production externalities (for instance, reducing pollution) through these innovations. The findings of this paper underscore artificial intelligence’s potential to accelerate the transition to the circular economy. |
Keywords: | Circular economy, Innovation, Artificial intelligence |
Date: | 2025–01–23 |
URL: | https://d.repec.org/n?u=RePEc:ete:msiper:758339 |
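A sketch of the kind of binary-outcome regression behind the finding above: a probit of circular-economy innovation on AI investment with firm-level controls. The file and variable names are placeholders, not the German Community Innovation Survey codebook:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level extract from an innovation survey; column names
# are placeholders assumed for illustration:
#   ce_innovation : 1 if the firm introduced a circular economy innovation
#   ai_investment : 1 if the firm invested in artificial intelligence
df = pd.read_csv("cis_firms.csv")  # placeholder file name

# Probit: does investing in AI raise the probability of introducing a
# circular-economy innovation, conditional on size and industry?
res = smf.probit(
    "ce_innovation ~ ai_investment + log_employees + C(industry)", data=df
).fit(disp=0)
print(res.get_margeff(at="overall").summary())  # average marginal effects
```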