on Artificial Intelligence |
| By: | David Almog; Lucas Lippman; Daniel Martin |
| Abstract: | We use an online experiment with a real work task to study whether workers change their behavior when they know AI, rather than a human, will be used to judge their work. We find that individuals produce a higher quantity of output when they are assigned an AI evaluator. However, controlling for quantity, the quality of their output is lower, regardless of whether quality is measured using human or LLM grades. We also find that workers are more likely to use external tools, including LLMs, when they know AI will judge their work. However, the increase in external tool use does not appear to explain the differences in quantity or quality across treatments. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.02076 |
| By: | Ales Marsal (National Bank of Slovakia); Patryk Perkowski (Yeshiva University) |
| Abstract: | We examine how generative AI impacts productivity through the lens of the task-based framework using a field experiment at the National Bank of Slovakia. In our experiment, we randomly assign generative AI access to central bank employees completing workplace tasks that mirror the theoretical task-based framework. Our results indicate that generative AI access leads to large improvements in both quality and efficiency for the majority of participants. We find a strong complementarity between generative AI and non-routine work, both on average and for most participants. We also find some support for generative AI being both cognitive-biased and specialist-biased, though these effects are smaller in magnitude than the routine-biased ones. While workers in routine jobs experience larger individual performance gains, generative AI is less effective for the routine task content of their work. The mismatch between generative AI’s task- versus worker-level impacts is economically large, and results from a simulation exercise suggest the organization can increase output by 7.3% by changing how workers are assigned to tasks in the presence of generative AI. Additionally, we find differences in how the benefits of generative AI relate to worker skills: low-skill workers benefit most in terms of quality, while high-skill workers benefit in terms of efficiency. Our findings provide empirical support for task-level complementarities of generative AI, with important implications for how generative AI will impact workers, organizations, and labor markets more broadly. |
| JEL: | J24 M15 E58 C93 O33 |
| Date: | 2025–07 |
| URL: | https://d.repec.org/n?u=RePEc:svk:wpaper:1128 |
| By: | Maryse Kathleen Ngangoue; Andrew Schotter; Bill Wang |
| Abstract: | The Big Decisions in our lives (having a child, getting married, getting educated, and so on) are transformative decisions that are hard to make. As a result, people often seek advice before making them. However, since people tend to live in homophilous social networks, the advice received from their friends and neighbors may simply reinforce the decisions that people like them are already making. We investigate whether the advice offered by ChatGPT for such decisions can be useful in broadening the advice people receive, and how such advice varies as we change the prompted socioeconomic backgrounds of the advisee and advisor. We find that advice tends to be confirmatory for low-income groups, in that it reinforces their established choices, while being dis-confirming for high-income groups, where it prompts reconsideration. Furthermore, even when suggesting the same choice, ChatGPT justifies that choice differently depending on who it is talking to. These findings suggest that AI-generated advice may differentially shape life’s Big Decisions across social strata. |
| Keywords: | LLM, generative AI, big decisions, transformative decisions, advice, SES |
| JEL: | O33 D83 D03 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_12515 |
| By: | Gunnar P. Epping; Andrew Caplin; Erik Duhaime; William R. Holmes; Daniel Martin; Jennifer S. Trueblood |
| Abstract: | Many operational AI systems depend on large-scale human annotation to detect rare but consequential events (e.g., fraud, defects, and medical abnormalities). When positives are rare, the prevalence effect induces systematic cognitive biases that inflate misses and can propagate through the AI lifecycle via biased training labels. We analyze prior experimental evidence and run a field experiment on DiagnosUs, a medical crowdsourcing platform, in which we hold the true prevalence in the unlabeled stream fixed (20% blasts) while varying (i) the prevalence of positives in the gold-standard feedback stream (20% vs. 50%) and (ii) the response interface (binary labels vs. elicited probabilities). We then post-process probabilistic labels using a linear-in-log-odds recalibration approach at the worker and crowd levels, and train convolutional neural networks on the resulting labels. Balanced feedback and probabilistic elicitation reduce rare-event misses, and pipeline-level recalibration substantially improves both classification performance and probabilistic calibration; these gains carry through to downstream CNN reliability out of sample. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.11511 |
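The linear-in-log-odds recalibration the abstract mentions can be sketched in a few lines. This is a minimal NumPy illustration of the general technique, not the paper's code; the grid-search fitting routine and all function names here are our own assumptions.

```python
import numpy as np

def logit(p):
    """Log-odds transform with clipping to avoid infinities at 0 and 1."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def llo_recalibrate(p, a, b):
    """Linear-in-log-odds map: recalibrated prob = sigmoid(a + b * logit(p)).
    (a, b) = (0, 1) is the identity; b < 1 shrinks overconfident labels."""
    return 1.0 / (1.0 + np.exp(-(a + b * logit(p))))

def fit_llo(p_raw, y, a_grid=None, b_grid=None):
    """Fit (a, b) by grid search over negative log-likelihood against
    gold-standard binary labels y (a simple stand-in for a proper optimizer)."""
    a_grid = np.linspace(-2.0, 2.0, 41) if a_grid is None else a_grid
    b_grid = np.linspace(0.1, 3.0, 30) if b_grid is None else b_grid
    best, best_nll = (0.0, 1.0), np.inf
    for a in a_grid:
        for b in b_grid:
            q = np.clip(llo_recalibrate(p_raw, a, b), 1e-9, 1 - 1e-9)
            nll = -np.mean(y * np.log(q) + (1 - y) * np.log(1 - q))
            if nll < best_nll:
                best, best_nll = (a, b), nll
    return best
```

In the paper's pipeline this map would be fitted at the worker and crowd levels before the recalibrated probabilities are used as training labels.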
| By: | Daron Acemoglu; Dingwen Kong; Asuman Ozdaglar |
| Abstract: | We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem. We build a dynamic model of learning and decision-making in which successful decisions require combining shared, community-level general knowledge with individual-level, context-specific knowledge; these two inputs are complements. Learning exhibits economies of scope: costly human effort jointly produces a private signal about their own context and a “thin” public signal that accumulates into the community’s stock of general knowledge, generating a learning externality. Agentic AI delivers context-specific recommendations that substitute for human effort. By contrast, a richer stock of general knowledge complements human effort by raising its marginal return. The model highlights a sharp dynamic tension: while agentic AI can improve contemporaneous decision quality, it can also erode the learning incentives that sustain long-run collective knowledge. When human effort is sufficiently elastic and agentic recommendations exceed an accuracy threshold, the economy can tip into a knowledge-collapse steady state in which general knowledge ultimately vanishes, despite high-quality personalized advice. Welfare is generally non-monotone in agentic accuracy, implying an interior, welfare-maximizing level of agentic precision and motivating information-design regulations. In contrast, greater aggregation capacity for general knowledge—meaning more effective sharing and pooling of human-generated general knowledge—unambiguously raises welfare and increases resilience to knowledge collapse. |
| JEL: | D80 D83 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34910 |
| By: | Manuel A. Hidalgo-Pérez (Universidad Pablo de Olavide) |
| Abstract: | This paper models the impact of Generative AI on labor inequality by endogenizing technological complementarity (β) as a function of human capital (h) and supervision costs (σ). I introduce "Positional Capital" (W), workers' material conditions, as a key determinant of adaptation capacity, showing how precarity generates transition traps. The framework accounts for the "expert paradox", cognitive decapitalization through AI dependency, and the micro-macro productivity disconnect. Calibrated simulations indicate that the wage inequality ratio rises from 1.33x to 2.12x within a decade under current technological trajectories. |
| Keywords: | Generative AI, Technological Complementarity, Inequality, Positional Capital, Cognitive Decapitalization. |
| JEL: | J24 J31 O33 J62 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:pab:wpaper:26.01 |
| By: | Brodzicki, Tomasz |
| Abstract: | This paper presents an extended framework building on the Melitz model to analyze the impact of artificial intelligence (AI) adoption on firm behavior, market structure, and international trade. We introduce a log-normal distribution of firm productivity and model heterogeneous AI adoption by incorporating fixed costs and a free-rider effect, where non-adopters benefit indirectly from technological diffusion. A key innovation lies in including AI productivity gains, either symmetric (in a simplified specification) or stochastic, allowing for firm-level variation in implementation success. This addition generates realistic dispersion in post-adoption outcomes and alters firm dynamics near critical survival, investment, and export activity thresholds. We compare deterministic AI adoption trajectories (sigmoid and exponential) with stochastic scenarios, highlighting how uncertainty in AI outcomes can amplify competitive asymmetries and increase market volatility. Under high fixed adoption costs and weak spillovers, the model exhibits strong endogenous concentration effects, especially when adoption follows an exponential path reinforced by feedback loops, potentially approaching scenarios of artificial superintelligence (ASI) or singularity. A sigmoid adoption trajectory implies bounded gains and a more stable equilibrium. The paper also explores the potential breakdown of monopolistic competition assumptions, suggesting oligopolistic drift in concentrated AI-intensive markets. These dynamics give rise to targeted policy implications to promote inclusive technological diffusion and reduce systemic risk. |
| Keywords: | Artificial Intelligence; AI; Market Structure; Global Trade; Productivity; Firm Heterogeneity; Technological Change |
| JEL: | D24 F12 F61 L11 O33 |
| Date: | 2024–08–31 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:127767 |
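As a rough illustration of the mechanics this framework formalizes, the sketch below draws log-normal firm productivities and applies a fixed-cost adoption rule with stochastic implementation success and a free-rider spillover. All parameter values, the adoption rule, and the spillover share are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (not calibrated to the paper)
N = 100_000           # number of firms
mu, sigma = 0.0, 0.5  # log-normal productivity distribution
phi_star = 0.8        # exogenous survival cutoff on productivity
f_ai = 0.6            # fixed cost of AI adoption (profit units)
gain = 0.25           # mean proportional AI productivity gain
gain_sd = 0.10        # firm-level dispersion in implementation success

phi = rng.lognormal(mu, sigma, N)    # heterogeneous productivity draws
survivors = phi >= phi_star          # Melitz-style selection into production

# Stochastic AI gain: firm-specific success in implementation
g = np.maximum(rng.normal(gain, gain_sd, N), 0.0)

# Adopt if the incremental operating profit (proportional to phi * g here,
# a deliberate simplification) covers the fixed adoption cost
adopts = survivors & (phi * g >= f_ai)

# Non-adopters capture a free-rider spillover, a fraction of the mean gain
spillover = 0.3 * gain
phi_post = np.where(adopts, phi * (1 + g), phi * (1 + spillover))

print(f"survival rate: {survivors.mean():.3f}")
print(f"adoption rate among survivors: {adopts.sum() / survivors.sum():.3f}")
```

Under these assumed parameters only the most productive firms adopt, so post-adoption productivity dispersion widens, which is the concentration channel the abstract describes.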
| By: | Binelli, Chiara; Luca, Teresa; Vergolini, Loris (University of Bologna); Marconi, Gabriele |
| Abstract: | We study how the introduction of generative AI (GPT-3) has impacted the demand for AI-related skills in the European labour market. Using a novel large-scale collection of online job advertisements in twenty-two European Union countries and the United Kingdom, we develop a detailed classification of AI skills and tasks. We exploit the release of GPT-3 in November 2022 as a natural experiment and apply a difference-in-differences estimation to assess shifts in the demand for AI skills in occupations exposed to ChatGPT relative to non-exposed occupations. We find that GPT-3 had a negative and statistically significant impact on the share of AI job ads for occupations in the treated group relative to the control group, indicating that the availability of generative AI decreased demand for occupations whose most frequent core task is automatable through ChatGPT. To the best of our knowledge, this is the first paper that proposes a detailed classification framework and robust identification strategy to study the impact of generative AI on labour demand. |
| Date: | 2026–03–06 |
| URL: | https://d.repec.org/n?u=RePEc:osf:socarx:cy2s3_v1 |
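The difference-in-differences design can be illustrated on synthetic data. The sketch below is not the paper's specification: the panel dimensions, the effect size, and the simple four-group-means estimator are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic panel: occupation-by-month shares of AI-related job ads.
# Treated = occupations whose core task is exposed to ChatGPT;
# post = months after the November 2022 release.
n_occ, n_months = 200, 24
treated = rng.random(n_occ) < 0.5
post = np.arange(n_months) >= 12
occ_fe = rng.normal(0.10, 0.02, n_occ)      # occupation fixed effects
time_fe = rng.normal(0.0, 0.005, n_months)  # common time shocks
tau = -0.02                                 # true DiD effect (illustrative)

share = (occ_fe[:, None] + time_fe[None, :]
         + tau * (treated[:, None] & post[None, :])
         + rng.normal(0, 0.01, (n_occ, n_months)))

# DiD estimator from the four group means: fixed effects and common
# shocks difference out, leaving the treatment effect
did = ((share[treated][:, post].mean() - share[treated][:, ~post].mean())
       - (share[~treated][:, post].mean() - share[~treated][:, ~post].mean()))
print(f"DiD estimate: {did:.4f}  (true effect: {tau})")
```

In the paper the estimation would additionally carry fixed effects and controls in a regression framework; the four-means version above shows only the identification logic.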
| By: | Grenz, Sabrina (Utrecht University); Gregory, Terry (LISER); Lehmer, Florian (IAB Nuremberg) |
| Abstract: | The rapid evolution of technology is reshaping labor markets by altering skill demands and job profiles. This paper introduces a novel skill-based measure of occupational technology intensity -- the Occupational Technology Skill Share (OTSS) -- that distinguishes between manual, digital, and frontier technologies. Using natural language processing, generative AI, and supervised machine learning, we develop an AI-powered skill classification that enriches occupation-linked skill labels with standardized GenAI-generated descriptions and structured indicators of technological content, enabling transparent classification by technology intensity. We compute OTSS for all occupations in the German labor market. For the average worker in 2023, manual technologies account for the largest share of skill content (42%), followed by digital (38%) and frontier technologies (20%). Frontier technologies remain concentrated in specialized occupations, while digital technologies are widespread. Linking these measures to administrative data from 2012–2023 shows a broad shift from manual and digital toward frontier skills across occupations, and reveals a U-shaped relationship between changes in frontier skill intensity and employment growth. |
| Keywords: | artificial intelligence, digitalization, skills, employment growth |
| JEL: | J21 J24 O33 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18415 |
| By: | Benjamin S. Manning; John J. Horton |
| Abstract: | Useful social science theories predict behavior across settings. However, applying a theory to make predictions in new settings is challenging: rarely can it be done without ad hoc modifications to account for setting-specific factors. We argue that AI agents placed in simulations of those novel settings offer an alternative for applying theory, requiring minimal or no modifications. We present an approach for building such "general" agents that use theory-grounded natural language instructions, existing empirical data, and knowledge acquired by the underlying AI during training. To demonstrate the approach in settings where no data from that data-generating process exists--as is often the case in applied prediction problems--we design a heterogeneous population of 883,320 novel games. AI agents are constructed using human data from a small set of conceptually related but structurally distinct "seed" games. In preregistered experiments, on average, agents predict initial human play in a random sample of 1,500 games from the population better than (i) a cognitive hierarchy model, (ii) game-theoretic equilibria, and (iii) out-of-the-box agents. For a small set of separate novel games, these simulations predict responses from a new sample of human subjects better even than the most plausibly relevant published human data. |
| JEL: | D01 D03 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34937 |
| By: | Ali Ansari; Mark Esposito; Ava Fitoussy; Liu Zhang |
| Abstract: | The standard framing treats structured human-data work as transitional, a bridge between today's imperfect models and a future state where automation is complete. We challenge this view by modeling structured human data as a persistent production input: evaluation, rubric-based judgment, auditing, exception handling, and continual updates that convert raw model capability into dependable, deployable performance. These activities accumulate into a reusable AI capability stock that raises productivity by improving reliability on existing tasks and by expanding the frontier of task families for which AI can be used at high confidence. Crucially, this capability stock depreciates as tasks and contexts drift, standards evolve, and new edge cases emerge. In a tractable baseline model, an interior steady state implies a closed-form, strictly positive long-run labor share devoted to structured human-data work whenever depreciation is positive, a "no last mile" result in which maintenance demand persists even as models improve. We then microfound aggregate capability with a portfolio of task families featuring diminishing returns, frontier entry, and complementarity, generating reallocation toward low-maturity and bottleneck families and a Roy-style mechanism for within-structured wage dispersion. Finally, we map model objects to observable proxies using standard data layers, and provide a conservative calibration suggesting a 5-7% steady-state structured labor share in the long run. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.00932 |
| By: | Garbers, Julio (LISER); Gregory, Terry (LISER) |
| Abstract: | We develop a novel firm-level indicator of Artificial Intelligence adoption in Europe (MAP-AI) by extracting information from more than three million firm websites in Belgium, France, Germany, and Luxembourg between 2016 and 2024 using a Large Language Model. The indicator captures realized AI use as publicly signaled by firms, rather than potential exposure, and distinguishes firms by their role in the AI ecosystem and the type of AI technologies employed. Validation against human-coded benchmarks and external data confirms high accuracy. We show that the share of AI-active firms increased from 1% in 2016 to 12% in 2024, with a marked acceleration after 2022. This growth reflects a structural shift toward widespread adoption and more integrated AI use, including generative AI. AI adoption is concentrated among larger, younger, knowledge-intensive firms in urban regions, with workforce skills emerging as a key driver. Foundational data skills are necessary for adoption, while specialized AI skills—such as machine learning and natural language processing—act as strong complements, highlighting the central role of human capital in AI diffusion. |
| Keywords: | Artificial Intelligence, firm-level data, Large Language Models, AI diffusion, human capital, skills |
| JEL: | O33 C81 L25 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18434 |
| By: | Masataka Mori; Juan M. Sanchez |
| Abstract: | Firms with higher R&D spending tend to hold more cash. In recent years, as AI investment has increased, R&D intensity has gone up while cash ratios have declined. |
| Keywords: | research and development; corporate cash holdings; artificial intelligence (AI); internal liquidity management |
| Date: | 2026–03–12 |
| URL: | https://d.repec.org/n?u=RePEc:fip:l00001:102892 |
| By: | Yutong Yan; Raphael Tang; Zhenyu Gao; Wenxi Jiang; Yao Lu |
| Abstract: | In financial backtesting, large language models pretrained on internet-scale data risk introducing lookahead bias that undermines their forecasting validity, as they may have already seen the true outcome during training. To address this, we present DatedGPT, a family of twelve 1.3B-parameter language models, each trained from scratch on approximately 100 billion tokens of temporally partitioned data with strict annual cutoffs spanning 2013 to 2024. We further enhance each model with instruction fine-tuning on both general-domain and finance-specific datasets curated to respect the same temporal boundaries. Perplexity-based probing confirms that each model's knowledge is effectively bounded by its data cutoff year, while evaluation on standard benchmarks shows competitive performance with existing models of similar scale. We provide an interactive web demo that allows users to query and compare responses from models across different cutoff years. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.11838 |
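The perplexity-based probing the abstract describes reduces to comparing per-token log-probabilities of candidate sentences under each model. A minimal, model-agnostic sketch (the probing criterion and the `knows_event` helper are our own illustration, not the paper's procedure):

```python
import math

def perplexity(token_logprobs):
    """Perplexity of a text under a causal language model, computed from
    per-token log-probabilities (natural log): exp of the mean negative
    log-probability."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

def knows_event(lp_event, lp_control, margin=0.0):
    """Probe for temporal leakage: flag the model as 'knowing' a
    post-cutoff event if the sentence describing it is not markedly more
    surprising than a matched pre-cutoff control sentence."""
    return perplexity(lp_event) <= perplexity(lp_control) * (1 + margin)
```

A model whose training data respects an annual cutoff should show systematically higher perplexity on sentences describing events after that cutoff, which is the pattern the authors use to confirm each model's knowledge boundary.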
| By: | Morteza Ghomi (BANCO DE ESPAÑA); Samuel Hurtado (BANCO DE ESPAÑA) |
| Abstract: | We present a methodology for generating uncertainty indicators for user-defined topics based on newspaper data. The approach is based on Retrieval-Augmented Generation (RAG) systems commonly used in artificial intelligence applications, which we adapt to construct topic-specific uncertainty measures, referred to as Retrieval-Augmented Uncertainty Indicators (RAUI). The method employs semantic search with an embedding model to select news articles relevant to a given topic, and a large language model (LLM) to quantify the level of uncertainty contained in each of those articles. We construct uncertainty indicators for ten topics using Spanish newspaper data and an aggregate measure that also highlights how each topic contributes to overall uncertainty. We present two practical applications of these indicators: a VAR analysis that shows how different sources of uncertainty have different effects on the Spanish economy, and an estimation that generates time-varying fan charts around the Banco de España GDP growth projections. |
| Keywords: | uncertainty, artificial intelligence, natural language processing, newspapers |
| JEL: | C81 E32 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:bde:wpaper:2609 |
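The RAUI pipeline (semantic retrieval, then LLM scoring, then aggregation) can be sketched as below, with precomputed article embeddings and uncertainty scores standing in for the embedding model and the LLM calls; the function names and the similarity threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cosine_sim(query_vec, article_vecs):
    """Cosine similarity between a topic embedding and article embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    m = article_vecs / np.linalg.norm(article_vecs, axis=1, keepdims=True)
    return m @ q

def rau_indicator(topic_vec, article_vecs, article_scores, article_months,
                  sim_threshold=0.5):
    """Retrieval-Augmented Uncertainty Indicator (sketch): keep articles
    semantically close to the topic embedding, then average their
    LLM-assigned uncertainty scores by month."""
    sims = cosine_sim(topic_vec, article_vecs)
    keep = sims >= sim_threshold
    kept_scores, kept_months = article_scores[keep], article_months[keep]
    return {m: float(kept_scores[kept_months == m].mean())
            for m in np.unique(kept_months)}
```

In the actual system the article vectors would come from an embedding model and the per-article scores from an LLM prompted to rate the uncertainty expressed in each news article; the aggregation step shown here is the part that turns those scores into a topic-specific time series.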
| By: | Srinivas Raghavendra (University of Galway, Ireland) |
| Abstract: | This paper examines how the financialization of advanced capitalist economies in the 1990s relates to the rise of artificial intelligence (AI) as the dominant technological framework in the 21st century. Using insights from political economy and Kaleckian theory, it suggests that AI is not a break from previous trends but a continuation and intensification of the computational rationality developed during the late 20th century's shift toward finance-led capitalism. The study highlights three structural limits that AI introduces to capitalist reproduction: the zero-slack labor constraint, the full-employment constraint, and resource limitations. These limits and their interactions threaten to destabilize the class relationship between capital and labor, the territorial basis of the state, and the legitimacy of democratic capitalism. The paper proposes that the emerging oligarchic state faces a legitimacy crisis that might be alleviated by universal basic income (UBI), which could serve as a political tool to maintain democratic support amid automation and inequality. It concludes by reflecting on how capitalism, much like an adaptive biological system, sustains itself through ongoing structural transformations, reshaping its institutions and practices to enable it to deflect existential threats while preserving its core logic of private appropriation. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:ico:wpaper:178 |