| on Artificial Intelligence |
| By: | Liu, Yan; Huang, Jingyun; Wang, He |
| Abstract: | Nearly three years after ChatGPT’s launch, the generative artificial intelligence landscape remains in rapid flux. Using high-frequency website traffic data from Semrush, this paper tracks global adoption patterns for the 60 most-visited consumer-facing generative artificial intelligence tools through mid-2025. Five key findings emerge. First, fierce competition drives continuous innovation: two of 2025’s top five tools—DeepSeek and Grok—are new entrants, and development is rapidly diversifying into multi-modal capabilities, reasoning, and specialized applications. Second, ChatGPT maintains dominance despite competition, accounting for 77 percent of traffic to the top 60 tools in April 2025. Third, usage of generative artificial intelligence has exploded since mid-2024: ChatGPT traffic grew 113 percent year-over-year, driven by 42 percent user growth and 50 percent increased visits per user, with session duration doubling. Fourth, high-income countries are pulling decisively ahead, creating stark global divides. While 24 percent of internet users in high-income countries use ChatGPT, penetration drops to 5.8 percent in upper-middle-income countries, 4.7 percent in lower-middle-income countries, and just 0.7 percent in low-income countries. Regression analysis confirms that gross domestic product per capita strongly predicts adoption growth. Fifth, localization shapes competitive advantage: non-U.S. tools concentrate heavily in home markets, with Le Chat drawing 69 percent of traffic from Europe and several Chinese tools exceeding 90 percent domestic usage. These patterns reveal an artificial intelligence landscape characterized by intense innovation, persistent market leadership, accelerating growth, and deepening global inequality, underscoring the need for inclusive policies as generative artificial intelligence becomes central to economic participation. |
| Date: | 2025–10–15 |
| URL: | https://d.repec.org/n?u=RePEc:wbk:wbrwps:11231 |
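The abstract above decomposes ChatGPT's 113 percent year-over-year traffic growth into 42 percent user growth and 50 percent growth in visits per user. Since total traffic is users times visits per user, the two growth factors multiply; a minimal check (our arithmetic, not the paper's code):

```python
# Total traffic = users x visits-per-user, so growth factors multiply.
user_growth = 0.42      # 42% year-over-year user growth (from the abstract)
visits_growth = 0.50    # 50% growth in visits per user (from the abstract)

total_growth = (1 + user_growth) * (1 + visits_growth) - 1
print(f"implied traffic growth: {total_growth:.0%}")
```

The product (1.42 × 1.50 = 2.13) implies 113 percent growth, matching the abstract's figure.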
| By: | Bohan Zhang; Jiaxuan Li; Ali Hortaçsu; Xiaoyang Ye; Victor Chernozhukov; Angelo Ni; Edward Huang |
| Abstract: | We introduce Agentic Economic Modeling (AEM), a framework that aligns synthetic LLM choices with small-sample human evidence for reliable econometric inference. AEM first generates task-conditioned synthetic choices via LLMs, then learns a bias-correction mapping from task features and raw LLM choices to human-aligned choices, upon which standard econometric estimators perform inference to recover demand elasticities and treatment effects. We validate AEM in two experiments. In a large-scale conjoint study with millions of observations, using only 10% of the original data to fit the correction model lowers the error of the demand-parameter estimates, while uncorrected LLM choices even increase the errors. In a regional field experiment, a mixture model calibrated on 10% of geographic regions estimates an out-of-domain treatment effect of -65 ± 10 bps, closely matching the full human experiment (-60 ± 8 bps). Under time-wise extrapolation, training with only day-one human data yields -24 bps (95% CI: [-26, -22], p |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.25743 |
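The AEM pipeline above has three steps: generate (biased) synthetic LLM choices, fit a correction mapping on a small human subsample, then run a standard estimator on the corrected choices. A toy sketch under strong assumptions (one task feature, a linear LLM bias, a 10% calibration sample; none of this is the paper's actual model):

```python
import random

random.seed(0)

def ols_slope(xs, ys):
    # Simple least-squares slope of ys on xs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

prices = [p / 10 for p in range(1, 101)]
true_share = [0.9 - 0.05 * p for p in prices]    # "human" demand curve
llm_share = [0.7 * s + 0.2 for s in true_share]  # systematically biased LLM choices

# Step 2: fit a linear correction on a 10% human subsample.
idx = random.sample(range(len(prices)), 10)
a = ols_slope([llm_share[i] for i in idx], [true_share[i] for i in idx])
mx = sum(llm_share[i] for i in idx) / 10
my = sum(true_share[i] for i in idx) / 10
b = my - a * mx
corrected = [a * s + b for s in llm_share]

# Step 3: standard estimator on raw vs. corrected choices.
print("slope from raw LLM choices:", ols_slope(prices, llm_share))
print("slope from corrected choices:", ols_slope(prices, corrected))
```

With the linear bias assumed here, the raw LLM slope is attenuated (-0.035) while the corrected slope recovers the true demand slope (-0.05), mirroring the abstract's finding that uncorrected LLM choices increase estimation error.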
| By: | Ali Raza Jafree; Konark Jain; Nick Firoozye |
| Abstract: | We investigate the mechanisms by which medium-frequency trading agents are adversely selected by opportunistic high-frequency traders. We use reinforcement learning (RL) within a Hawkes Limit Order Book (LOB) model in order to replicate the behaviours of high-frequency market makers. In contrast to classical models with exogenous price impact assumptions, the Hawkes model accounts for endogenous price impact and other key properties of the market (Jain et al. 2024a). Given the real-world impracticality of the market maker updating strategies for every event in the LOB, we formulate the high-frequency market-making agent via an impulse control reinforcement learning framework (Jain et al. 2025). The RL used in the simulation utilises Proximal Policy Optimisation (PPO) and self-imitation learning. To replicate the adverse selection phenomenon, we test the RL agent trading against a medium-frequency trader (MFT) executing a meta-order and demonstrate that, after training against the MFT meta-order execution agent, the RL market-making agent learns to capitalise on the price drift induced by the meta-order. Recent empirical studies have shown that medium-frequency traders are increasingly subject to adverse selection by high-frequency trading agents. As high-frequency trading continues to proliferate across financial markets, the slippage costs incurred by medium-frequency traders are likely to increase over time. However, we do not observe that increased profits for the market-making RL agent necessarily cause significantly increased slippage for the MFT agent. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.27334 |
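The self-exciting dynamics behind a Hawkes LOB model like the one above can be illustrated with a univariate Hawkes process simulated via Ogata's thinning algorithm. This is a generic building block, not the paper's multivariate model; the parameters are made up:

```python
import math
import random

random.seed(42)

# Intensity: lam(t) = mu + sum_i alpha * exp(-beta * (t - t_i)),
# i.e. each past event temporarily raises the arrival rate.
mu, alpha, beta = 1.0, 0.6, 1.5  # stationary since alpha/beta < 1

def simulate_hawkes(horizon):
    t, events = 0.0, []
    while True:
        # Between events the intensity only decays, so the current
        # intensity is a valid upper bound for thinning.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)           # candidate arrival time
        if t > horizon:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if random.random() <= lam_t / lam_bar:     # accept w.p. lam(t)/lam_bar
            events.append(t)

ev = simulate_hawkes(100.0)
print(len(ev), "events; theoretical mean rate:", mu / (1 - alpha / beta))
```

The branching ratio alpha/beta = 0.4 controls the endogenous clustering that, in the full LOB version, generates price impact without any exogenous impact assumption.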
| By: | Giulia Iadisernia; Carolina Camassa |
| Abstract: | We evaluate whether persona-based prompting improves Large Language Model (LLM) performance on macroeconomic forecasting tasks. Using 2,368 economics-related personas from the PersonaHub corpus, we prompt GPT-4o to replicate the ECB Survey of Professional Forecasters across 50 quarterly rounds (2013-2025). We compare the persona-prompted forecasts against the human expert panel across four target variables (HICP, core HICP, GDP growth, unemployment) and four forecast horizons. We also compare the results against 100 baseline forecasts without persona descriptions to isolate the effect of the personas. We report two main findings. First, GPT-4o and human forecasters achieve remarkably similar accuracy levels, with differences that are statistically significant yet practically modest. Our out-of-sample evaluation on 2024-2025 data demonstrates that GPT-4o can maintain competitive forecasting performance on unseen events, though with notable differences compared to the in-sample period. Second, our ablation experiment reveals no measurable forecasting advantage from persona descriptions, suggesting these prompt components can be omitted to reduce computational costs without sacrificing accuracy. Our results provide evidence that GPT-4o can achieve competitive forecasting accuracy even on out-of-sample macroeconomic events if provided with relevant context data, while revealing that diverse prompts produce remarkably homogeneous forecasts compared to human panels. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.02458 |
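Forecast-accuracy comparisons like the one above typically reduce to an error metric computed against realized values. A minimal RMSE comparison with entirely made-up numbers (the paper's targets, horizons, and data are not reproduced here):

```python
import math

# Hypothetical illustration: realized inflation vs. two sets of forecasts.
actual_hicp = [2.4, 2.6, 2.9, 2.5]      # realized values, % (invented)
llm_forecasts = [2.2, 2.7, 2.6, 2.4]    # persona-prompted LLM (invented)
human_forecasts = [2.3, 2.5, 2.7, 2.6]  # human panel mean (invented)

def rmse(pred, truth):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

print(f"LLM RMSE:   {rmse(llm_forecasts, actual_hicp):.3f}")
print(f"human RMSE: {rmse(human_forecasts, actual_hicp):.3f}")
```

The paper's "remarkably similar accuracy" finding corresponds to these two RMSE values sitting close together across variables and horizons.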
| By: | Peeyush Agarwal; Harsh Agarwal; Akshat Ranaa |
| Abstract: | Purpose: The rapid integration of artificial intelligence (AI) systems such as ChatGPT and Claude AI has a profound impact on how work is done. Predicting how AI will reshape work requires understanding not just its capabilities, but how it is actually being adopted. This study investigates which intrinsic task characteristics drive users' decisions to delegate work to AI systems. Methodology: This study utilizes the Anthropic Economic Index dataset of four million Claude AI interactions mapped to O*NET tasks. We systematically scored each task across seven key dimensions: Routine, Cognitive, Social Intelligence, Creativity, Domain Knowledge, Complexity, and Decision Making, using 35 parameters. We then employed multivariate techniques to identify latent task archetypes and analyzed their relationship with AI usage. Findings: Tasks requiring high creativity, complexity, and cognitive demand, but low routineness, attracted the most AI engagement. Furthermore, we identified three task archetypes: Dynamic Problem Solving, Procedural & Analytical Work, and Standardized Operational Tasks, demonstrating that AI applicability is best predicted by combinations of task characteristics rather than by individual factors. Our analysis revealed highly concentrated AI usage patterns, with just 5% of tasks accounting for 59% of all interactions. Originality: This research provides the first systematic evidence linking real-world generative AI usage to a comprehensive, multi-dimensional framework of intrinsic task characteristics. It introduces a data-driven classification of work archetypes that offers a new framework for analyzing the emerging human-AI division of labor. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.23669 |
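The concentration finding above (5% of tasks, 59% of interactions) is a top-share statistic: sort tasks by interaction count and measure how much of the total the top slice covers. A sketch on synthetic heavy-tailed counts (the distribution is invented; only the computation mirrors the paper's statistic):

```python
# Synthetic power-law usage counts across 1,000 hypothetical tasks.
n_tasks = 1000
counts = sorted((1000.0 / (rank ** 1.1) for rank in range(1, n_tasks + 1)),
                reverse=True)

top = counts[: n_tasks // 20]       # top 5% of tasks by usage
share = sum(top) / sum(counts)      # fraction of all interactions they cover
print(f"top 5% of tasks cover {share:.0%} of interactions")
```

With this particular synthetic tail the top 5% covers roughly two-thirds of usage; the paper's empirical figure of 59% implies a comparably heavy-tailed distribution of real AI usage across tasks.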
| By: | Brüll, Eduard (ZEW); Mäurer, Samuel (University of Mannheim); Rostam-Afschar, Davud (University of Mannheim) |
| Abstract: | We provide experimental evidence on how employers adjust expectations to automation risk in high-skill, white-collar work. Using a randomized information intervention among tax advisors in Germany, we show that firms systematically underestimate automatability. Information provision raises risk perceptions, especially for routine-intensive roles. Yet, it leaves short-run hiring plans unchanged. Instead, updated beliefs increase productivity and financial expectations with minor wage adjustments, implying within-firm inequality consistent with limited rent-sharing. Employers also anticipate new tasks in legal tech, compliance, and AI interaction, and report higher training and adoption intentions. |
| Keywords: | belief updating, firm expectations, technology adoption, innovation, technological change, automation, artificial intelligence, expertise, labor demand, white collar jobs, training |
| JEL: | J23 J24 D22 D84 O33 C93 |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18225 |
| By: | Lewandowski, Piotr (Institute for Structural Research (IBS)); Madoń, Karol (Institute for Structural Research (IBS)); Park, Albert (Hong Kong University of Science & Technology) |
| Abstract: | This paper develops a task-adjusted, country-specific measure of workers’ exposure to Artificial Intelligence (AI) across 108 countries. Building on Felten et al. (2021), we adapt the Artificial Intelligence Occupational Exposure (AIOE) index to worker-level PIAAC data and extend it globally using comparable surveys and regression-based predictions, covering about 89% of global employment. Accounting for country-specific task structures reveals substantial cross-country heterogeneity: workers in low-income countries exhibit AI exposure levels roughly 0.8 U.S. standard deviations below those in high-income countries, largely due to differences in within-occupation task content. Regression decompositions attribute most cross-country variation to ICT intensity and human capital. High-income countries employ the majority of workers in highly AI-exposed occupations, while employment in low-income countries is concentrated in less exposed ones. Using two PIAAC cycles, we document rising AI exposure in high-income countries, driven by shifts in within-occupation tasks rather than employment structure. |
| Keywords: | AI, occupations, job tasks, technology, skills |
| JEL: | J21 J23 J24 |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18235 |
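The task adjustment in the paper above means two workers in the same occupation can have different AI exposure if their task mixes differ. A stylized sketch of that idea (occupation scores and task weights are invented; the paper's actual index construction is more involved):

```python
# Hedged sketch: worker-level exposure = occupation-level AIOE score
# shifted by the worker's own task intensities (all numbers made up).
occupation_aioe = {"clerk": 0.8, "farmer": -0.9}
task_weights = {"ict_use": 0.30, "reading": 0.15, "manual": -0.25}

def worker_exposure(occupation, tasks):
    base = occupation_aioe[occupation]
    return base + sum(task_weights[t] * v for t, v in tasks.items())

# Same occupation, different within-occupation task content:
rich_country_clerk = worker_exposure(
    "clerk", {"ict_use": 1.0, "reading": 0.8, "manual": 0.1})
poor_country_clerk = worker_exposure(
    "clerk", {"ict_use": 0.3, "reading": 0.4, "manual": 0.6})
print(rich_country_clerk, poor_country_clerk)
```

The gap between the two clerks illustrates the abstract's point that cross-country exposure differences arise largely from within-occupation task content, not just occupational structure.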
| By: | Lei Chen; Chaoyue Gao; Alvin Leung; Xiaoning Wang |
| Abstract: | Generative artificial intelligence (GenAI), such as large language models (LLMs), is increasingly integrated into digital platforms to enhance information access, deliver personalized experiences, and improve matching efficiency. However, these algorithmic advancements rely heavily on large-scale user data, creating a fundamental tension between information assurance (the protection, integrity, and responsible use of private data) and artificial intelligence (the learning capacity and predictive accuracy of models). We examine this assurance-intelligence trade-off in the context of LinkedIn, leveraging a regulatory intervention that suspended the use of user data for model training in Hong Kong. Using large-scale employment and job posting data from Revelio Labs and a Difference-in-Differences design, we show that restricting data use significantly reduced GenAI efficiency, leading to lower matching rates, higher employee turnover, and heightened labor market frictions. These effects were especially pronounced for small and fast-growing firms that rely heavily on AI for talent acquisition. Our findings reveal the unintended efficiency costs of well-intentioned data governance and highlight that information assurance, while essential for trust, can undermine intelligence-driven efficiency when misaligned with AI system design. This study contributes to emerging research on AI governance and digital platforms by theorizing data assurance as an institutional complement (and potential constraint) to GenAI efficacy in data-intensive environments. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.01923 |
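The Difference-in-Differences logic in the study above compares the change in outcomes for the treated market (Hong Kong, where training on user data was suspended) against the change in untreated markets. A minimal 2x2 version with invented matching rates:

```python
# Minimal 2x2 DiD: (treated post - treated pre) - (control post - control pre).
# All rates are hypothetical; only the estimator's structure is illustrated.
rates = {
    ("hong_kong", "pre"): 0.30, ("hong_kong", "post"): 0.24,
    ("control", "pre"): 0.31, ("control", "post"): 0.29,
}
did = ((rates[("hong_kong", "post")] - rates[("hong_kong", "pre")])
       - (rates[("control", "post")] - rates[("control", "pre")]))
print(f"DiD estimate of the suspension's effect on matching: {did:+.2f}")
```

Here the control group's decline nets out common shocks, and the remaining -0.04 is attributed to the data-use restriction, which is the identification strategy the abstract describes (in regression form, with many units and periods).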
| By: | Nils H. Lehr; Pascual Restrepo |
| Abstract: | Leading AI firms claim to prioritize social welfare. How should firms with a social mandate price and deploy AI? We derive pricing formulas that depart from profit maximization by incorporating incentives to improve welfare and reduce labor disruptions. Using US data, we evaluate several scenarios. A welfarist firm that values both profit and welfare should price closer to marginal cost, as efficiency gains outweigh distributional concerns. A conservative firm focused on labor-market stability should price above the profit-maximizing level in the short run, especially when its AI may displace low-income workers. Overall, socially minded firms face a trade-off between expanding access to AI and the resulting loss in profits and labor market risks. |
| JEL: | E0 H0 J0 |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34424 |
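The abstract's claim that a welfarist firm should price closer to marginal cost can be sketched with a welfare-weighted Lerner condition. This is a textbook-style illustration in our own notation, not the paper's actual pricing formulas:

```latex
% Profit-maximizing monopoly pricing (standard Lerner condition),
% with marginal cost c and demand elasticity \varepsilon:
\frac{p^{\pi} - c}{p^{\pi}} = \frac{1}{\varepsilon}

% A firm maximizing \pi + \lambda \, CS, where \lambda \in [0,1] weights
% consumer surplus (CS' (p) = -q(p)), has first-order condition
% (1-\lambda) q + (p - c) q'(p) = 0, hence
\frac{p^{*} - c}{p^{*}} = \frac{1 - \lambda}{\varepsilon},
\qquad p^{*} \to c \ \text{as} \ \lambda \to 1 .
```

The markup shrinks as the welfare weight rises, matching the abstract's conclusion that a welfarist firm prices near marginal cost, while a labor-stability objective would instead add a term pushing the price above the profit-maximizing level.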
| By: | Xiaoning Wang; Chun Feng; Tianshu Sun |
| Abstract: | Labor mobility is a critical source of technology acquisition for firms. This paper examines how artificial intelligence (AI) knowledge is disseminated across firms through labor mobility and identifies the organizational conditions that facilitate productive spillovers. Using a comprehensive dataset of over 460 million job records from Revelio Labs (2010 to 2023), we construct an inter-firm mobility network of AI workers among over 16,000 U.S. companies. Estimating a Cobb-Douglas production function, we find that firms benefit substantially from the AI investments of other firms from which they hire AI talent, with productivity spillovers two to three times larger than those associated with traditional IT after accounting for labor scale. Importantly, these spillovers are contingent on organizational context: hiring from flatter firms that make intensive use of lean-startup methods generates significant productivity gains, whereas hiring from firms lacking these traits yields little benefit. Mechanism tests indicate that "flat and lean" organizations cultivate more versatile AI generalists who transfer richer knowledge across firms. These findings reveal that AI spillovers differ fundamentally from traditional IT spillovers: while IT spillovers primarily arise from scale and process standardization, AI spillovers critically depend on the experimental and integrative environments in which AI knowledge is produced. Together, these results underscore the importance of considering both labor mobility and organizational context in understanding the full impact of AI-driven productivity spillovers. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.02099 |
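A Cobb-Douglas production function with a mobility-weighted spillover term, as described above, is typically estimated in logs. A hedged sketch of the estimating equation in our own notation (the paper's exact specification may differ):

```latex
% Log Cobb-Douglas with own AI input and a mobility-weighted spillover:
\ln Y_{it} = \alpha \ln K_{it} + \beta \ln L_{it}
           + \gamma \ln \mathrm{AI}_{it}
           + \delta \, \mathrm{Spill}_{it}
           + \mu_i + \tau_t + \epsilon_{it},
% where Spill_{it} aggregates the AI investments of firms from which
% firm i hired AI workers, weighted by mobility flows, and
% \mu_i, \tau_t are firm and year fixed effects.
```

The abstract's "two to three times larger than traditional IT" comparison then amounts to comparing the estimated spillover coefficient for AI against the analogous coefficient for IT.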
| By: | Nikolas Anic; Andrea Barbon; Ralf Seiz; Carlo Zarattini |
| Abstract: | This paper investigates whether large language models (LLMs) can improve cross-sectional momentum strategies by extracting predictive signals from firm-specific news. We combine daily U.S. equity returns for S&P 500 constituents with high-frequency news data and use prompt-engineered queries to ChatGPT that inform the model when a stock is about to enter a momentum portfolio. The LLM evaluates whether recent news supports a continuation of past returns, producing scores that condition both stock selection and portfolio weights. An LLM-enhanced momentum strategy outperforms a standard long-only momentum benchmark, delivering higher Sharpe and Sortino ratios both in-sample and in a truly out-of-sample period after the model's pre-training cut-off. These gains are robust to transaction costs, prompt design, and portfolio constraints, and are strongest for concentrated, high-conviction portfolios. The results suggest that LLMs can serve as effective real-time interpreters of financial news, adding incremental value to established factor-based investment strategies. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.26228 |
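The strategy above layers an LLM news score on top of a standard momentum screen: past returns select candidates, and the score conditions inclusion and weights. A stylized sketch with hypothetical tickers and scores (the actual prompts, scoring rubric, and universe are the paper's, not shown here):

```python
# Hedged sketch of LLM-conditioned momentum weights (all numbers invented).
past_returns = {"AAPL": 0.21, "MSFT": 0.18, "XOM": 0.02, "PFE": -0.05}
llm_scores = {"AAPL": 0.9, "MSFT": 0.4}  # "does recent news support continuation?"

# 1) Classic momentum screen: top half by trailing return.
winners = sorted(past_returns, key=past_returns.get, reverse=True)[:2]

# 2) Tilt portfolio weights toward high-conviction LLM scores.
raw = {s: llm_scores.get(s, 0.0) for s in winners}
total = sum(raw.values())
weights = {s: v / total for s, v in raw.items()}
print(weights)
```

Score-proportional weighting is just one plausible conditioning rule; the paper reports that concentrating on high-conviction names is where the LLM signal adds the most value.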
| By: | Hamidah Oderinwale; Anna Kazlauskas |
| Abstract: | Despite data's central role in AI production, it remains the least understood input. As AI labs exhaust public data and turn to proprietary sources, with deals reaching hundreds of millions of dollars, research across computer science, economics, law, and policy has fragmented. We establish data economics as a coherent field through three contributions. First, we characterize data's distinctive properties -- nonrivalry, context dependence, and emergent rivalry through contamination -- and trace historical precedents for market formation in commodities such as oil and grain. Second, we present systematic documentation of AI training data deals from 2020 to 2025, revealing persistent market fragmentation, five distinct pricing mechanisms (from per-unit licensing to commissioning), and that most deals exclude original creators from compensation. Third, we propose a formal hierarchy of exchangeable data units (token, record, dataset, corpus, stream) and argue for data's explicit representation in production functions. Building on these foundations, we outline four open research problems foundational to data economics: measuring context-dependent value, balancing governance with privacy, estimating data's contribution to production, and designing mechanisms for heterogeneous, compositional goods. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.24990 |
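The proposed hierarchy of exchangeable data units (token, record, dataset, corpus, stream) is a nested containment structure. One possible reading as types, with class names and the token-counting method being our own illustration rather than the paper's formalism:

```python
from dataclasses import dataclass

# token -> record -> dataset -> corpus; a "stream" would be an unbounded
# sequence of records and is omitted from this static sketch.
@dataclass
class Record:
    tokens: list[str]          # the token is the smallest exchangeable unit

@dataclass
class Dataset:
    records: list[Record]

@dataclass
class Corpus:
    datasets: list[Dataset]

    def n_tokens(self) -> int:
        # Per-unit (token-based) pricing needs exactly this aggregation.
        return sum(len(r.tokens) for d in self.datasets for r in d.records)

corpus = Corpus([Dataset([Record(["the", "price", "of", "data"])])])
print(corpus.n_tokens())
```

Making the units explicit like this is what allows the pricing mechanisms the abstract catalogs (e.g. per-unit licensing) to be defined at a consistent level of aggregation.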