on Artificial Intelligence |
By: | Marie-Pierre Dargnies (University of Paris Dauphine); Rustamdjan Hakimov (University of Lausanne); Dorothea Kübler (WZB Berlin, Technische Universität Berlin, CES Ifo) |
Abstract: | The adoption of Artificial Intelligence (AI) for hiring processes is often impeded by a scarcity of comprehensive employee data. We hypothesize that the inclusion of behavioral measures elicited from applicants can enhance the predictive accuracy of AI in hiring. We study this hypothesis in the context of microfinance loan officers. Our findings suggest that survey-based behavioral measures markedly improve the predictions of a random-forest algorithm trained to predict productivity within sample relative to demographic information alone. We then validate the algorithm’s robustness to the selectivity of the training sample and potential strategic responses by applicants by running two out-of-sample tests: one forecasting the future performance of novice employees, and another with a field experiment on hiring. Both tests corroborate the effectiveness of incorporating behavioral data to predict performance. The comparison of workers hired by the algorithm with those hired by human managers in the field experiment reveals that algorithmic hiring is marginally more efficient than managerial hiring. |
Keywords: | hiring; ai; economic and behavioral measures; selective labels; |
Date: | 2025–04–29 |
URL: | https://d.repec.org/n?u=RePEc:rco:dpaper:532 |
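The record above describes training a random forest to predict worker productivity, with and without behavioral measures. A minimal sketch of that comparison, using scikit-learn on synthetic data (the feature names, data-generating process, and sample size are all hypothetical, not the authors'):

```python
# Illustrative sketch (not the paper's code): compare a random forest trained
# on demographics alone against one trained on demographics plus behavioral
# measures. All data below is synthetic and hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

demographics = rng.normal(size=(n, 2))   # e.g. age, education (hypothetical)
behavioral = rng.normal(size=(n, 2))     # e.g. risk and time preferences (hypothetical)

# Synthetic productivity driven mostly by the behavioral measures.
productivity = (behavioral @ np.array([1.0, 0.5])
                + 0.2 * demographics[:, 0]
                + rng.normal(scale=0.3, size=n))

# oob_score_ gives an out-of-bag R^2, a rough proxy for out-of-sample fit.
base = RandomForestRegressor(n_estimators=100, random_state=0, oob_score=True)
base.fit(demographics, productivity)

full = RandomForestRegressor(n_estimators=100, random_state=0, oob_score=True)
full.fit(np.hstack([demographics, behavioral]), productivity)

print(round(base.oob_score_, 2), round(full.oob_score_, 2))
```

On data constructed this way, the model with behavioral features attains a markedly higher out-of-bag R², mirroring the paper's qualitative finding.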
By: | Ingar Haaland (NHH Norwegian School of Economics, FAIR & CEPR); Christopher Roth (University of Cologne, Max Planck Institute for Research on Collective Goods, CEPR & NHH Norwegian School of Economics); Stefanie Stantcheva (Harvard University, NBER & CEPR); Johannes Wohlfart (University of Cologne, Max Planck Institute for Research on Collective Goods, CEBI & CESifo) |
Abstract: | We survey the recent literature in economics using open-ended survey data to uncover mechanisms behind economic beliefs and behaviors. We first provide an overview of different applications, including the measurement of motives, mental models, narratives, attention, information transmission, and recall. We next describe different ways of eliciting open-ended responses, including single-item open-ended questions, speech recordings, and AI-powered qualitative interviews. Subsequently, we discuss methods to annotate and analyze such data with a focus on recent advances in large language models. Our review concludes with a discussion of promising avenues for future research. |
Keywords: | Open-ended Questions, Text Data, Methodology, Surveys, Large Language Models |
JEL: | C90 D83 D91 |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:ajk:ajkdps:362 |
By: | Yue Yin |
Abstract: | In online advertising systems, publishers often face a trade-off in information disclosure strategies: while disclosing more information can enhance efficiency by enabling optimal allocation of ad impressions, it may lose revenue potential by decreasing uncertainty among competing advertisers. Similar to other challenges in market design, understanding this trade-off is constrained by limited access to real-world data, leading researchers and practitioners to turn to simulation frameworks. The recent emergence of large language models (LLMs) offers a novel approach to simulations, providing human-like reasoning and adaptability without necessarily relying on explicit assumptions about agent behavior modeling. Despite their potential, existing frameworks have yet to integrate LLM-based agents for studying information asymmetry and signaling strategies, particularly in the context of auctions. To address this gap, we introduce InfoBid, a flexible simulation framework that leverages LLM agents to examine the effects of information disclosure strategies in multi-agent auction settings. Using GPT-4o, we implemented simulations of second-price auctions with diverse information schemas. The results reveal key insights into how signaling influences strategic behavior and auction outcomes, which align with both economic and social learning theories. Through InfoBid, we hope to foster the use of LLMs as proxies for human economic and social agents in empirical studies, enhancing our understanding of their capabilities and limitations. This work bridges the gap between theoretical market designs and practical applications, advancing research in market simulations, information design, and agent-based reasoning while offering a valuable tool for exploring the dynamics of digital economies. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.22726 |
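The InfoBid abstract above centers on second-price auctions. For readers unfamiliar with the mechanism, a minimal sketch (this is not the InfoBid framework; the bidder ids and bids are made up):

```python
# Sealed-bid second-price (Vickrey) auction: the highest bidder wins but
# pays only the second-highest bid, which makes truthful bidding a
# dominant strategy for the bidders.

def second_price_auction(bids):
    """Return (winner, price) given a dict of bidder id -> bid amount."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid sets the price paid
    return winner, price

winner, price = second_price_auction({"a": 9.0, "b": 7.5, "c": 4.0})
print(winner, price)  # a 7.5
```

Because payments depend on rivals' bids rather than one's own, a publisher's disclosure choices shift what bidders believe about each other, which is exactly the signaling channel the paper simulates with LLM agents.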
By: | Florian Misch; Ben Park; Carlo Pizzinelli; Galen Sher |
Abstract: | The discussion on Artificial Intelligence (AI) often centers around its impact on productivity, but macroeconomic evidence for Europe remains scarce. Using the Acemoglu (2024) approach, we simulate the medium-term impact of AI adoption on total factor productivity for 31 European countries. We compile many scenarios by pooling evidence on which tasks will be automatable in the near term, using reduced-form regressions to predict AI adoption across Europe, and considering relevant regulation that restricts AI use heterogeneously across tasks, occupations and sectors. We find that the medium-term productivity gains for Europe as a whole are likely to be modest, at around 1 percent cumulatively over five years. While economically still moderate, these gains are still larger than estimates by Acemoglu (2024) for the US. They vary widely across scenarios and countries and are substantially larger in countries with higher incomes. Furthermore, we show that national and EU regulations around occupation-level requirements, AI safety, and data privacy combined could reduce Europe’s productivity gains by over 30 percent if AI exposure were 50 percent lower in tasks, occupations and sectors affected by regulation.
Keywords: | Artificial Intelligence; Productivity; Technology; Regulation |
Date: | 2025–04–04 |
URL: | https://d.repec.org/n?u=RePEc:imf:imfwpa:2025/067 |
By: | Cathy Yang; David Restrepo Amariles; Leo Allen; Aurore Troussel |
Abstract: | Generative Pre-trained Transformers (GPTs), particularly Large Language Models (LLMs) like ChatGPT, have proven effective in content generation and productivity enhancement. However, legal risks associated with these tools lead to adoption variance and concealment of AI use within organizations. This study examines the impact of disclosure on ChatGPT adoption in legal, audit and advisory roles in consulting firms through the lens of agency theory. We conducted a survey experiment to evaluate agency costs in the context of unregulated corporate use of ChatGPT, with a particular focus on how mandatory disclosure influences information asymmetry and misaligned interests. Our findings indicate that in the absence of corporate regulations, such as an AI policy, firms may incur agency costs, which can hinder the full benefits of GPT adoption. While disclosure policies reduce information asymmetry, they do not significantly lower overall agency costs due to managers undervaluing analysts' contributions with GPT use. Finally, we examine the scope of existing regulations in Europe and the United States regarding disclosure requirements, explore the sharing of risk and responsibility within firms, and analyze how incentive mechanisms promote responsible AI adoption. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.01566 |
By: | Garcia-Suaza, Andrés; Sarango-Iturralde, Alexander; Caiza-Guamán, Pamela; Gil Díaz, Mateo; Acosta Castillo, Dana |
Abstract: | Rapid advances in artificial intelligence (AI) have exerted a considerable influence on the labor market, altering the demand for specific skills and the structure of employment. This study aims to evaluate the extent of exposure to AI within the Colombian labor market and its relation with workforce characteristics and available job openings. To this end, we built a specific AI exposure index for Colombia based on skill demand in job posts. Our findings indicate that 33.8% of workers are highly exposed to AI, with variations observed depending on the measurement method employed. Furthermore, we find a positive and significant correlation between AI exposure and wages: workers highly exposed to AI earn a wage premium of 21.8%. On the demand side, only 2.5% of job openings explicitly mention AI-related skills. These findings imply that international indices may underestimate the wage premium associated with AI exposure in Colombia and underscore the potential unequal effects on the wage distribution among different demographic groups.
Keywords: | Artificial intelligence, labor market, job posts, occupations, skills, Colombia |
JEL: | E24 J23 J24 O33 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:zbw:glodps:1604 |
By: | Hötte, Kerstin; Tarannum, Taheya; Verendel, Vilhelm; Bennett, Lauren |
Abstract: | It is often claimed that Artificial Intelligence (AI) is the next general purpose technology (GPT) with profound economic and societal impacts. However, without a consensus definition of AI and its empirical measurement, there are wide discrepancies in beliefs about its trajectory, diffusion, and ownership. In this study, we compare four AI patent classification approaches reflecting different technological trajectories, namely (1) short-range, (2) academic, (3) technical, and (4) broad interpretations of AI. We use US patents granted between 1990-2019 to assess the extent to which each approach qualifies AI as a GPT, and study patterns of its concentration and agency. Strikingly, the four trajectories overlap on only 1.36% of patents and vary in scale, accounting for shares of 3-17% of all US patents. Despite capturing the smallest set of AI patents, the short-range trajectory identified by the latest AI keywords demonstrates the strongest GPT characteristics of high intrinsic growth and generality. All trajectories agree, however, that AI inventions are highly concentrated within a few firms and this has consequences for competition policy and market regulation. Our study highlights how various methods of defining AI can lead to contrasting as well as similar conclusions about its impact. |
Keywords: | Patent, Artificial Intelligence, AI, Classification, General Purpose Technology, Concentration, Inventions, Innovation |
JEL: | O31 O33 O34 |
Date: | 2023–05 |
URL: | https://d.repec.org/n?u=RePEc:amz:wpaper:2023-09 |
By: | Mr. Eugenio M Cerutti; Antonio I Garcia Pascual; Yosuke Kido; Longji Li; Mr. Giovanni Melina; Ms. Marina Mendes Tavares; Mr. Philippe Wingender |
Abstract: | This paper examines the uneven global impact of AI, highlighting how its effects will be a function of (i) countries’ sectoral exposure to AI, (ii) their preparedness to integrate these technologies into their economies, and (iii) their access to essential data and technologies. We feed these three aspects into a multi-sector dynamic general equilibrium model of the global economy and show that AI will exacerbate cross-country income inequality, disproportionately benefiting advanced economies. Indeed, the estimated growth impact in advanced economies could be more than double that in low-income countries. While improvements in AI preparedness and access can mitigate these disparities, they are unlikely to fully offset them. Moreover, the AI-driven productivity gains could reduce the traditional role of exchange rate adjustments due to AI’s large impact in the non-tradable sector—a mechanism akin to an inverse Balassa-Samuelson effect. |
Keywords: | Artificial Intelligence; Productivity; Multi-Region DSGE Model |
Date: | 2025–04–11 |
URL: | https://d.repec.org/n?u=RePEc:imf:imfwpa:2025/076 |
By: | Emma J Rockall; Ms. Marina Mendes Tavares; Carlo Pizzinelli |
Abstract: | There are competing narratives about artificial intelligence’s impact on inequality. Some argue AI will exacerbate economic disparities, while others suggest it could reduce inequality by primarily disrupting high-income jobs. Using household microdata and a calibrated task-based model, we show these narratives reflect different channels through which AI affects the economy. Unlike previous waves of automation that increased both wage and wealth inequality, AI could reduce wage inequality through the displacement of high-income workers. However, two factors may counter this effect: these workers’ tasks appear highly complementary with AI, potentially increasing their productivity, and they are better positioned to benefit from higher capital returns. When firms can choose how much AI to adopt, the wealth inequality effect is particularly pronounced, as the potential cost savings from automating high-wage tasks drive significantly higher adoption rates. Models that ignore this adoption decision risk understating the trade-off policymakers face between inequality and efficiency. |
Keywords: | Artificial intelligence; Employment; Inequality |
Date: | 2025–04–04 |
URL: | https://d.repec.org/n?u=RePEc:imf:imfwpa:2025/068 |
By: | Song, Danbee (Korea Institute for Industrial Economics and Trade); Cho, Jaehan (Korea Institute for Industrial Economics and Trade) |
Abstract: | As a leading general-purpose technology, artificial intelligence (AI) is expected to accelerate digital transformation across industries and exert widespread economic and social impacts. With AI now being recognized as a new driver of economic growth, governments worldwide are actively implementing policies to advance AI development and adoption. However, despite high expectations for and growing interest in AI, its adoption in South Korea remains limited, with the majority of businesses struggling to see tangible benefits. In order to leverage AI as a catalyst for growth in Korea, it is essential to establish a virtuous cycle between AI utilization and performance through both industry-specific and integrated policies. This paper describes the main characteristics and a set of policy priorities designed to facilitate AI adoption in South Korea. They include: (1) strengthening demand-driven AI innovation capabilities to embed AI into industries; (2) expanding comprehensive financial support for AI-industry convergence; (3) improving AI workforce development systems and aligning human resources management with labor market needs; and (4) establishing a proactive risk management framework along with a “negative” regulatory approach to ensure businesses can freely leverage AI technology.
Keywords: | artificial intelligence; AI; industrial AI; manufacturing AI; digital transformation; AI adoption; technology adoption; productivity; AI policy; technology policy; South Korea; Korea Institute for Industrial Economics and Trade; KIET |
JEL: | L60 L86 L88 |
Date: | 2025–01–31 |
URL: | https://d.repec.org/n?u=RePEc:ris:kietrp:2025_003 |
By: | Pinski, Marc; Hofmann, Thomas; Benlian, Alexander |
Abstract: | We draw on upper echelons theory to examine whether the AI literacy of a firm’s top management team (i.e., TMT AI literacy) has an effect on two firm characteristics paramount for value generation with AI—a firm’s AI orientation, enabling it to identify AI value potentials, and a firm’s AI implementation ability, empowering it to realize these value potentials. Building on the notion that TMT effects are contingent upon firm contexts, we consider the moderating influence of a firm’s type (i.e., startups vs. incumbents). To investigate these relationships, we leverage observational literacy data of 6986 executives from a professional social network (LinkedIn.com) and firm data from 10-K statements. Our findings indicate that TMT AI literacy positively affects AI orientation as well as AI implementation ability and that AI orientation mediates the effect of TMT AI literacy on AI implementation ability. Further, we show that the effect of TMT AI literacy on AI implementation ability is stronger in startups than in incumbent firms. We contribute to upper echelons literature by introducing AI literacy as a skill-oriented perspective on TMTs, which complements prior role-oriented TMT research, and by detailing AI literacy’s role for the upper echelons-based mechanism that explains value generation with AI. |
Date: | 2025–04–07 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:154096 |
By: | Alejandro Lopez-Lira; Jihoon Kwon; Sangwoon Yoon; Jy-yong Sohn; Chanyeol Choi |
Abstract: | The rapid advancements in Large Language Models (LLMs) have unlocked transformative possibilities in natural language processing, particularly within the financial sector. Financial data is often embedded in intricate relationships across textual content, numerical tables, and visual charts, posing challenges that traditional methods struggle to address effectively. However, the emergence of LLMs offers new pathways for processing and analyzing this multifaceted data with increased efficiency and insight. Despite the fast pace of innovation in LLM research, there remains a significant gap in their practical adoption within the finance industry, where cautious integration and long-term validation are prioritized. This disparity has led to a slower implementation of emerging LLM techniques, despite their immense potential in financial applications. As a result, many of the latest advancements in LLM technology remain underexplored or not fully utilized in this domain. This survey seeks to bridge this gap by providing a comprehensive overview of recent developments in LLM research and examining their applicability to the financial sector. Building on previous survey literature, we highlight several novel LLM methodologies, exploring their distinctive capabilities and their potential relevance to financial data analysis. By synthesizing insights from a broad range of studies, this paper aims to serve as a valuable resource for researchers and practitioners, offering direction on promising research avenues and outlining future opportunities for advancing LLM applications in finance. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.22693 |
By: | Takehiro Takayanagi; Kiyoshi Izumi; Javier Sanz-Cruzado; Richard McCreadie; Iadh Ounis |
Abstract: | Large language model-based agents are becoming increasingly popular as a low-cost mechanism to provide personalized, conversational advice, and have demonstrated impressive capabilities in relatively simple scenarios, such as movie recommendations. But how do these agents perform in complex high-stakes domains, where domain expertise is essential and mistakes carry substantial risk? This paper investigates the effectiveness of LLM-advisors in the finance domain, focusing on three distinct challenges: (1) eliciting user preferences when users themselves may be unsure of their needs, (2) providing personalized guidance for diverse investment preferences, and (3) leveraging advisor personality to build relationships and foster trust. Via a lab-based user study with 64 participants, we show that LLM-advisors often match human advisor performance when eliciting preferences, although they can struggle to resolve conflicting user needs. When providing personalized advice, the LLM was able to positively influence user behavior, but demonstrated clear failure modes. Our results show that accurate preference elicitation is key; otherwise, the LLM-advisor has little impact, or can even direct the investor toward unsuitable assets. More worryingly, users appear insensitive to the quality of the advice they receive; worse, satisfaction and advice quality can be inversely related. Indeed, users reported greater preference, satisfaction, and emotional trust for LLMs adopting an extroverted persona, even though those agents provided worse advice.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.05862 |
By: | Isabella Loaiza; Roberto Rigobon |
Abstract: | AI is transforming industries, raising concerns about job displacement and decision-making reliability. AI, as a universal function approximator, excels in data-driven tasks but struggles with small datasets, subjective probabilities, and contexts requiring human judgment, relationships, and ethics. The EPOCH framework highlights five irreplaceable human capabilities: Empathy, Presence, Opinion, Creativity, and Hope. These attributes are vital in financial services for trust, inclusion, innovation, and consumer experience. Although AI improves efficiency in risk management and compliance, it will not eliminate jobs but redefine them, similar to how ATMs reshaped bank tellers' roles. The challenge is ensuring professionals adapt, leveraging AI's strengths while preserving essential human capabilities.
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.22035 |
By: | Amin Haeri; Jonathan Vitrano; Mahdi Ghelichi |
Abstract: | Risk management in finance involves recognizing, evaluating, and addressing financial risks to maintain stability and ensure regulatory compliance. Extracting relevant insights from extensive regulatory documents is a complex challenge requiring advanced retrieval and language models. This paper introduces RiskData, a dataset specifically curated for finetuning embedding models in risk management, and RiskEmbed, a finetuned embedding model designed to improve retrieval accuracy in financial question-answering systems. The dataset is derived from 94 regulatory guidelines published by the Office of the Superintendent of Financial Institutions (OSFI) from 1991 to 2024. We finetune a state-of-the-art sentence-BERT embedding model to enhance domain-specific retrieval performance, particularly for Retrieval-Augmented Generation (RAG) systems. Experimental results demonstrate that RiskEmbed significantly outperforms general-purpose and financial embedding models, achieving substantial improvements in ranking metrics. By open-sourcing both the dataset and the model, we provide a valuable resource for financial institutions and researchers aiming to develop more accurate and efficient risk management AI solutions.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.06293 |
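The RiskEmbed record above relies on embedding-based retrieval, where documents are ranked by cosine similarity to a query embedding. A minimal sketch of that ranking step (illustrative only; RiskEmbed itself is a finetuned sentence-BERT model, and the toy 3-dimensional "embeddings" and document ids below are made up):

```python
# Toy embedding retrieval: rank documents by cosine similarity to a query
# vector, as a RAG pipeline would do with real sentence embeddings.

import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_documents(query_vec, doc_vecs):
    """Return document ids sorted by decreasing similarity to the query."""
    scores = {doc_id: cosine(query_vec, vec) for doc_id, vec in doc_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)

docs = {
    "capital-guideline": [0.9, 0.1, 0.0],
    "liquidity-guideline": [0.2, 0.8, 0.1],
}
print(rank_documents([1.0, 0.0, 0.0], docs))
```

Finetuning an embedding model, as the paper does, amounts to moving domain-relevant query and document vectors closer together so this same ranking step surfaces the right guideline.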
By: | Mr. Christian Bogmans; Patricia Gomez-Gonzalez; Ganchimeg Ganpurev; Mr. Giovanni Melina; Mr. Andrea Pescatori; Sneha D Thube |
Abstract: | The development and deployment of large language models like ChatGPT across the world requires expanding data centers that consume vast amounts of electricity. Using descriptive statistics and a multi-country computable general equilibrium model (IMF-ENV), we examine how AI-driven data center growth affects electricity consumption, electricity prices, and carbon emissions. Our analysis of national accounts reveals AI-producing sectors in the U.S. have grown at nearly triple the rate of the private non-farm business sector, with firm-level evidence showing electricity costs for vertically integrated AI companies nearly doubled between 2019 and 2023. Simulating AI scenarios in the IMF-ENV model based on projected data center power consumption up to 2030, we find the AI boom will cause manageable but varying increases in energy prices and emissions depending on policies and infrastructure constraints. Under scenarios with constrained growth in renewable energy capacity and limited expansion of transmission infrastructure, U.S. electricity prices could increase by 8.6%, while U.S. and global carbon emissions would rise by 5.5% and 1.2% respectively under current policies. Our findings highlight the importance of aligning energy policies with AI development to support this technological revolution, while mitigating environmental impacts.
Keywords: | generative AI; data centers; energy and the macroeconomy; climate change and growth; CGE models |
Date: | 2025–04–22 |
URL: | https://d.repec.org/n?u=RePEc:imf:imfwpa:2025/081 |