on Artificial Intelligence |
By: | David Autor; Neil Thompson |
Abstract: | When job tasks are automated, does this augment or diminish the value of labor in the tasks that remain? We argue the answer depends on whether removing tasks raises or reduces the expertise required for remaining non-automated tasks. Since the same task may be relatively expert in one occupation and inexpert in another, automation can simultaneously replace experts in some occupations while augmenting expertise in others. We propose a conceptual model of occupational task bundling that predicts that changing occupational expertise requirements have countervailing wage and employment effects: automation that decreases expertise requirements reduces wages but permits the entry of less expert workers; automation that raises requirements raises wages but reduces the set of qualified workers. We develop a novel, content-agnostic method for measuring job task expertise, and we use it to quantify changes in occupational expertise demands over four decades attributable to job task removal and addition. We document that automation has raised wages and reduced employment in occupations where it eliminated inexpert tasks, but lowered wages and increased employment in occupations where it eliminated expert tasks. These effects are distinct from—and in the case of employment, opposite to—the effects of changing task quantities. The expertise framework resolves the puzzle of why routine task automation has lowered employment but often raised wages in routine task-intensive occupations. It provides a general tool for analyzing how task automation and new task creation reshape the scarcity value of human expertise within and across occupations. |
JEL: | E24 J11 J23 J24 |
Date: | 2025–06 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33941 |
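A stylized sketch of the bundling mechanism in the abstract above: treat an occupation as a bundle of tasks with expertise levels, proxy its expertise requirement by the average level of the remaining tasks, and let wages rise with that requirement while the pool of qualified workers shrinks. This is an illustrative toy model, not the paper's content-agnostic measure; all functional forms and numbers are assumptions.

```python
# Toy model of occupational task bundling: removing an inexpert task raises
# the average expertise requirement (wage up, qualified pool down); removing
# an expert task lowers it (wage down, qualified pool up). Illustrative only.
import numpy as np

def expertise_requirement(task_levels):
    """Proxy the occupation's requirement by the mean expertise of remaining tasks."""
    return sum(task_levels) / len(task_levels)

def wage_and_qualified_share(task_levels, worker_skills):
    req = expertise_requirement(task_levels)
    wage = 10.0 + 5.0 * req                       # assumed: wage increasing in requirement
    share = float(np.mean(worker_skills >= req))  # share of workers meeting the requirement
    return round(wage, 2), round(share, 2)

workers = np.random.default_rng(1).uniform(0.0, 5.0, size=10_000)
bundle = [1.0, 2.0, 4.0]                          # task expertise levels in one occupation

print(wage_and_qualified_share(bundle, workers))      # full bundle
print(wage_and_qualified_share([2.0, 4.0], workers))  # inexpert task automated
print(wage_and_qualified_share([1.0, 2.0], workers))  # expert task automated
```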
By: | Knotz, Carlo Michael (University of Stavanger) |
Abstract: | Machine learning and algorithmic decision-making technology – i.e., “artificial intelligence” (AI) – is rapidly advancing and becoming more widespread in workplaces. For some individuals, this is beneficial: it increases their productivity and generates new employment opportunities. Others, however, could see their incomes and employment opportunities decline because AI takes over their work tasks or because they are insufficiently skilled to take full advantage of AI technology in their work. Some of this is already visible and, given what is known from previous research on the implications of technological transformations, these developments are likely to affect people’s political attitudes and preferences. I investigate this empirically using novel indicators of AI exposure and AI complementarity in combination with data from multiple waves of the European Social Survey. I find that AI exposure and complementarity do indeed have meaningful effects on political attitudes and preferences (specifically, demand for redistribution and support for right-wing populist parties): increasing AI exposure, when combined with AI complementarity, significantly lowers support for redistribution and for right-wing populist parties, while high complementarity in the absence of exposure has the opposite effects. Moreover, while the former effect has weakened over the last two decades, the latter has strengthened. These findings add to a growing literature showing that the “AI Revolution” is already having meaningful political effects. |
Date: | 2025–07–03 |
URL: | https://d.repec.org/n?u=RePEc:osf:socarx:4xdn8_v1 |
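A minimal sketch of the interaction design the abstract describes, assuming statsmodels, hypothetical variable names, and simulated data in place of the ESS waves and the paper's actual exposure and complementarity indicators:

```python
# Interaction regression: exposure and complementarity enter jointly, with
# support for redistribution as the outcome. Data and coefficients are
# simulated placeholders, not the paper's estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "exposure": rng.normal(0, 1, n),
    "complementarity": rng.normal(0, 1, n),
})
df["redistribution"] = (
    0.5 - 0.1 * df["exposure"] * df["complementarity"]
    + 0.08 * df["complementarity"] + rng.normal(0, 1, n)
)

# The interaction term captures "exposure combined with complementarity".
model = smf.ols("redistribution ~ exposure * complementarity", data=df).fit()
print(model.params)
```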
By: | Nikhil Agarwal; Alex Moehring; Alexander Wolitzky |
Abstract: | We propose a sufficient statistic for designing AI information-disclosure and selective automation policies. The approach allows for endogenous and biased beliefs, and effort crowd-out, without using a structural model of human decision-making. We deploy and validate our approach in a fact-checking experiment. Humans under-respond to AI predictions and reduce effort when presented with confident AI predictions. Overconfidence in own-signal rather than under-confidence in AI drives AI under-response. The optimal policy automates decisions where the AI is confident and delegates the other decisions while fully disclosing the AI prediction. Although automation is valuable, the benefit of assisting humans with AI is negligible. |
JEL: | C91 D47 D83 D89 |
Date: | 2025–06 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33949 |
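The optimal policy stated in the abstract (automate decisions where the AI is confident, delegate the rest while fully disclosing the AI prediction) can be sketched as a simple routing rule; the threshold `tau` and the data structures below are illustrative assumptions, not the paper's sufficient-statistic machinery.

```python
# Selective-automation routing: confident cases are automated, the rest are
# delegated to a human with the AI prediction fully disclosed.
from dataclasses import dataclass

@dataclass
class Decision:
    handled_by: str    # "ai" or "human"
    ai_prediction: int
    disclosed: bool

def route(ai_prediction: int, ai_confidence: float, tau: float = 0.9) -> Decision:
    """Automate above the confidence threshold; otherwise delegate with disclosure."""
    if ai_confidence >= tau:
        return Decision("ai", ai_prediction, disclosed=True)
    return Decision("human", ai_prediction, disclosed=True)

print(route(ai_prediction=1, ai_confidence=0.95))  # automated
print(route(ai_prediction=0, ai_confidence=0.60))  # delegated, prediction shown
```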
By: | Wohlschlegel, Julian; Jussupow, Ekaterina |
Abstract: | In algorithm-augmented decision-making, humans must judge when to follow and when to reject algorithmic advice. Research has shown that humans tend to reject algorithmic advice after experiencing algorithmic errors; this more severe response to incorrect algorithmic advice than to incorrect human advice gave rise to the definition of, and research on, the phenomenon of algorithm aversion. However, empirical findings on algorithm aversion are conflicting and mostly focus on the decision itself, neglecting the cognitive processes between receiving incorrect advice and deciding. Using a multi-trial mouse-tracking experiment, we aim to better understand the emergence of algorithm aversion by investigating decisional conflicts reflected in cognitive process data. We aim primarily to contribute to research on algorithm aversion and to the IS community’s methodological toolkit; our insights on decisional conflicts can further inform practitioners on how to responsibly enable and onboard users of algorithms. |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:155418 |
By: | DiGiuseppe, Matthew (Leiden University); Paula, Katrin; Rommel, Tobias |
Abstract: | Under which conditions are citizens willing to delegate government responsibilities to artificial intelligence? We hypothesize that the identity of incumbent policymakers affects public support for delegating decisions to AI. In highly polarized societies, AI has the potential to be perceived as a decision maker with apolitical or less partisan motivations in governance decisions. We thus reason that individuals will prefer co-partisans to AI or algorithmic decision making, whereas a switch to AI decision making will enjoy more public support when out-partisans hold policy control. To test our hypothesis, we fielded a survey experiment in the summer of 2024 that asked roughly 2, 500 respondents in the US to register their support for AI making the most important economic decision in the world -- the setting of the base interest rate by the US Federal Reserve. The basis of our experimental treatments is the fact that Jerome Powell, the current chair of the Fed, was appointed first by President Trump, a Republican, and later re-appointed by President Biden, a Democrat. We find that when we inform respondents that Powell was appointed by a president from another party, support for delegation to AI increases compared to the condition in which the Fed chair is appointed by a co-partisan. The complier average causal effect (CACE) indicates that changing the perception of the Fed chair from a co-partisan to an out-partisan increases support for delegating to AI by over 45%. |
Date: | 2025–07–03 |
URL: | https://d.repec.org/n?u=RePEc:osf:socarx:rnj5h_v2 |
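The reported CACE can be understood via the standard Wald/IV estimator: the intention-to-treat effect of the information treatment on support, scaled by the treatment's effect on perceptions of the Fed chair. A minimal sketch on simulated data follows; variable names and magnitudes are illustrative assumptions, not the paper's results.

```python
# Wald/IV estimator for a complier average causal effect (CACE) on
# simulated survey-experiment data.
import numpy as np

rng = np.random.default_rng(0)
n = 2500
z = rng.integers(0, 2, n)                            # assignment: told chair is out-partisan
d = (rng.random(n) < 0.3 + 0.4 * z).astype(float)    # perceives chair as out-partisan
y = 0.3 + 0.15 * d + rng.normal(0, 0.3, n)           # support for delegating to AI

itt = y[z == 1].mean() - y[z == 0].mean()            # effect of assignment on support
first_stage = d[z == 1].mean() - d[z == 0].mean()    # effect of assignment on perception
cace = itt / first_stage
print(f"ITT={itt:.3f}, first stage={first_stage:.3f}, CACE={cace:.3f}")
```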
By: | Pierre Boutros (Université Côte d'Azur, CNRS, GREDEG, France); Eliana Diodati (University of Torino, Italy); Michele Pezzoni (Université Côte d'Azur, CNRS, GREDEG, France; Observatoire des Sciences et Techniques, HCERES, Paris, France); Fabiana Visentin (UNU-MERIT, Maastricht University, the Netherlands) |
Abstract: | The rise of Artificial Intelligence (AI) urges us to better understand its impact on the labor market. This paper is the first to analyze the supply of AI-trained individuals entering the labor market. We estimate the relationship between AI training and individuals' careers for 35, 492 French PhD students in STEM who graduated between 2010 and 2018. To obtain an unbiased estimate of the effect of AI training, we compare the careers of PhD students trained in AI with those of a control sample of similar students without AI training. We find that AI training is not associated with a higher probability of pursuing a research career after graduation. However, among students who received AI training during their PhD and pursue a research career after graduation, we observe path dependence in continuing to publish on AI topics and a higher impact of their research. We also observe disciplinary heterogeneity. In Computer Science, AI-trained students are less likely to end up in private research organizations after graduation than their non-AI counterparts, while in disciplines other than Computer Science, AI training stimulates patenting activity and mobility abroad after graduation. |
Keywords: | Artificial Intelligence, Training, PhD students' careers |
JEL: | J24 O30 |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:gre:wpaper:2025-29 |
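One common way to build a control sample of "similar students" is propensity-score nearest-neighbor matching. The paper does not spell out its exact matching procedure, so the sketch below, including covariates and data, is an assumption rather than a reproduction of the authors' method.

```python
# Propensity-score nearest-neighbor matching: one matched non-AI control
# per AI-trained student, based on observable covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 3))                 # placeholder covariates: cohort, field, output
ai_trained = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

# Estimated propensity of receiving AI training given observables
ps = LogisticRegression().fit(X, ai_trained).predict_proba(X)[:, 1]

treated = np.where(ai_trained == 1)[0]
control = np.where(ai_trained == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]      # matched control for each treated student
print(len(treated), "treated matched to", len(set(matched_control)), "unique controls")
```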
By: | Sophia Kazinnik; Erik Brynjolfsson |
Abstract: | This paper examines how central banks can strategically integrate artificial intelligence (AI) to enhance their operations. Using a dual-framework approach, we demonstrate how AI can transform both strategic decision-making and daily operations within central banks, taking the Federal Reserve System (FRS) as a representative example. We first consider a top-down view, showing how AI can modernize key central banking functions. We then adopt a bottom-up approach focusing on the impact of generative AI on specific tasks and occupations within the Federal Reserve and find a significant potential for workforce augmentation and efficiency gains. We also address critical challenges associated with AI adoption, such as the need to upgrade data infrastructure and manage workforce transitions. |
JEL: | C8 C9 G4 |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33998 |
By: | Satyadhar Joshi (Bank of America, Touro University, Bar-Ilan University [Israël], Independent Researcher) |
Abstract: | The rapid advancement of generative artificial intelligence (Gen AI) has revolutionized various domains, including financial analytics. This paper provides a comprehensive review of the applications, challenges, and future directions of Gen AI in financial analytics. We explore its role in risk management, credit scoring, feature engineering, and macroeconomic simulations, while addressing limitations such as data quality, interpretability, and ethical concerns. By synthesizing insights from recent literature, we highlight the transformative potential of Gen AI and propose frameworks for its effective integration into financial workflows. This paper presents a systematic examination of generative artificial intelligence (Gen AI) applications in financial risk management, focusing on architectural frameworks and implementation methodologies. We analyze the integration of large language models (LLMs) with traditional quantitative finance pipelines, addressing key challenges in feature engineering, risk modeling, and regulatory compliance. The study demonstrates how transformer-based architectures enhance financial analytics through automated data processing, risk factor extraction, and scenario generation. Technical implementations leverage hybrid cloud platforms and specialized Python libraries for model deployment, achieving measurable improvements in accuracy and efficiency. Our findings reveal critical considerations for production systems, including computational optimization, model interpretability, and governance protocols. The proposed architecture combines LLM capabilities with domain-specific modules for credit scoring, value-at-risk calculation, and macroeconomic simulation. Empirical results highlight trade-offs between model complexity and operational constraints, providing actionable insights for financial institutions adopting Gen AI solutions. The paper concludes with recommendations for future research directions in financial AI systems. |
Keywords: | Generative AI, financial analytics, risk management, credit scoring, large language models, feature engineering |
Date: | 2025–05–29 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-05101589 |
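Among the domain-specific modules listed, value-at-risk is the easiest to make concrete. A minimal sketch follows, assuming a historical-simulation approach and synthetic returns; the paper does not commit to this particular method.

```python
# Historical-simulation VaR: the loss at the (1 - confidence) quantile of
# the return history. Returns here are synthetic placeholders; a production
# pipeline would feed real P&L data.
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, size=1000)  # placeholder return history

def historical_var(returns: np.ndarray, confidence: float = 0.99) -> float:
    """Return VaR as a positive loss figure at the given confidence level."""
    return -np.quantile(returns, 1.0 - confidence)

print(f"1-day 99% VaR: {historical_var(daily_returns):.4f}")
```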
By: | Winder, Philipp (University of St.Gallen) |
Abstract: | This paper presents a novel capability-based framework for assessing organizational readiness in deploying large language models (LLMs) in the banking sector. While LLMs offer significant potential across domains such as customer service, compliance, and risk assessment, banks face unique deployment challenges due to regulatory constraints, legacy systems, and data sensitivity. Building on the dynamic capability view and adapting maturity levels from the Capability Maturity Model Integration (CMMI), the framework identifies and structures the organizational, contextual, and technical capabilities necessary for effective LLM deployment. It introduces a maturity-scaled self-assessment tool that enables banks to evaluate their current LLM readiness, diagnose capability gaps, and guide strategic investment decisions. Although developed for banking, the framework offers conceptual relevance to other high-stakes, highly regulated sectors. |
Date: | 2025–06–17 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:zqsa3_v1 |
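A minimal sketch of what a maturity-scaled self-assessment could look like in practice, with invented capability names and CMMI-style levels 1-5; the framework's actual capability dimensions are not reproduced here.

```python
# Self-assessment: score each capability on a 1-5 maturity scale, compare
# against a target profile, and surface the largest gaps first.
TARGET_LEVEL = 4  # illustrative target maturity

current = {
    "data_governance": 3,
    "model_risk_management": 2,
    "llm_engineering": 2,
    "regulatory_compliance": 4,
    "change_management": 1,
}

gaps = {cap: TARGET_LEVEL - lvl for cap, lvl in current.items() if lvl < TARGET_LEVEL}
for cap, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{cap}: level {current[cap]} -> gap of {gap} to target {TARGET_LEVEL}")
```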
By: | Green, Alicia |
Abstract: | The integration of artificial intelligence (AI) into financial intelligence systems enables automated risk detection and strategic decision support in African markets. This paper examines the technical architectures and AI methodologies (supervised learning, anomaly detection, natural language processing) employed in real-world African financial applications. We discuss data pipelines combining structured and unstructured data (market transactions, social media, news, macro indicators) and outline algorithmic models for credit risk, market risk, systemic risk, and financial crime detection. Specific cases from Nigeria, Kenya, and South Africa illustrate AI use in fraud/AML detection, credit scoring with alternative data, and portfolio stress-testing. Quantitative indicators (e.g., Nigeria’s NGN1.56 quadrillion digital payments in H1 2024 and a 468% surge in fraud cases) underscore the scale of data and risks. Regulatory contexts (e.g., CBN’s AI-AML framework, SARB guidelines) and infrastructure constraints (limited data connectivity, power) are highlighted. The paper proposes a system framework comprising data integration, machine learning engines, continuous risk scoring, and visualization dashboards. Key applications include dynamic capital allocation, real-time AML monitoring, and scenario-based stress testing. We conclude by identifying ethical challenges (data privacy, model bias, transparency) and suggesting future directions such as hybrid AI-rule systems, localized language models, and cross-border data sharing platforms. |
Date: | 2025–06–18 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:ynph2_v1 |
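A minimal sketch of the kind of unsupervised anomaly detection the abstract describes for fraud/AML monitoring, assuming scikit-learn's IsolationForest and synthetic transaction features in place of real payment data:

```python
# Isolation-forest anomaly detection over simple transaction features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: log amount, hour of day, transactions in past 24h (all synthetic)
normal = rng.normal([3.0, 13.0, 5.0], [0.5, 4.0, 2.0], size=(1000, 3))
suspicious = rng.normal([6.0, 3.0, 40.0], [0.5, 1.0, 5.0], size=(10, 3))
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)   # lower = more anomalous
flags = model.predict(X)              # -1 = flagged as anomaly
print(f"flagged {np.sum(flags == -1)} of {len(X)} transactions")
```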
By: | Philippe Jean-Baptiste (LEST - Laboratoire d'Economie et de Sociologie du Travail - AMU - Aix Marseille Université - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | Why did Generative Artificial Intelligence (GAI) only emerge in the public eye in 2022, even though it is based on concepts formulated as early as the 1950s? The answer lies less in a sudden breakthrough than in the gradual removal of longstanding technical barriers—obstacles identified long ago by AI researchers (Jordan & Mitchell, 2015; Brynjolfsson & McAfee, 2015). The recent history of AI has been marked by several "AI winters"—periods of disillusionment and underinvestment in the field, triggered by the technology's failure to live up to its promises (Crevier, 1993; Hendler, 2008). These successive stagnations highlighted the limitations of a still-constrained system: insufficient data, expensive storage, limited computing power, and poor connectivity. These bottlenecks, both technical and economic, long hindered the maturation of AI. Over the past decade, these barriers have gradually fallen. The volume of available data has exploded thanks to open data initiatives and connected devices, among others (Manyika et al., 2011); storage costs have plummeted (McCallum, 2023); GPUs/TPUs have dramatically increased computing power (Jouppi et al., 2017); and cloud computing has made this power widely accessible (Armbrust et al., 2010). These conditions have enabled the emergence of tools such as ChatGPT and Mistral.ai (Bommasani et al., 2022; Dwivedi et al., 2023), which are now transforming professional practices. This article offers an interpretation of this transition: understanding the former barriers, analyzing how they were lifted, and anticipating the concrete implications for businesses and their managers. |
Keywords: | AI technology barriers, AI skills and management, History of AI, Generative artificial intelligence, Business transformation, Professional use of AI |
Date: | 2025–06–02 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-05099668 |