| on Artificial Intelligence |
| By: | Hans-Theo Normann; Nina Rulié; Olaf Stypa; Tobias Werner |
| Abstract: | We analyze the delegation of pricing by participants, representing firms, to a collusive, self-learning algorithm in a repeated Bertrand experiment. In the baseline treatment, participants set prices themselves. In the other treatments, participants can either delegate pricing to the algorithm at the beginning of each supergame or receive algorithmic recommendations that they can override. Participants delegate more when they can override the algorithm's decisions. In both algorithmic treatments, prices are lower than in the baseline. Our results indicate that while self-learning pricing algorithms can be collusive, they can foster competition rather than collusion with humans-in-the-loop. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.27636 |
| By: | Felipe Valencia-Clavijo |
| Abstract: | Large language models (LLMs) are increasingly examined as both behavioral subjects and decision systems, yet it remains unclear whether observed cognitive biases reflect surface imitation or deeper probability shifts. Anchoring bias, a classic human judgment bias, offers a critical test case. While prior work shows LLMs exhibit anchoring, most evidence relies on surface-level outputs, leaving internal mechanisms and attributional contributions unexplored. This paper advances the study of anchoring in LLMs through three contributions: (1) a log-probability-based behavioral analysis showing that anchors shift entire output distributions, with controls for training-data contamination; (2) exact Shapley-value attribution over structured prompt fields to quantify anchor influence on model log-probabilities; and (3) a unified Anchoring Bias Sensitivity Score integrating behavioral and attributional evidence across six open-source models. Results reveal robust anchoring effects in Gemma-2B, Phi-2, and Llama-2-7B, with attribution signaling that the anchors influence reweighting. Smaller models such as GPT-2, Falcon-RW-1B, and GPT-Neo-125M show variability, suggesting scale may modulate sensitivity. Attributional effects, however, vary across prompt designs, underscoring fragility in treating LLMs as human substitutes. The findings demonstrate that anchoring bias in LLMs is robust, measurable, and interpretable, while highlighting risks in applied domains. More broadly, the framework bridges behavioral science, LLM safety, and interpretability, offering a reproducible path for evaluating other cognitive biases in LLMs. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.05766 |
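The exact Shapley-value attribution over structured prompt fields described in the abstract can be sketched as follows. This is an illustrative implementation of the standard Shapley formula, not the paper's code: the field names and the toy value function (standing in for a model log-probability query) are assumptions.

```python
from itertools import combinations
from math import factorial

def exact_shapley(fields, value):
    """Exact Shapley value of each prompt field.

    fields: list of field names (e.g. ["question", "anchor", "format"])
    value:  function mapping a frozenset of included fields to a model
            score (here standing in for a log-probability of the answer).
    """
    n = len(fields)
    phi = {}
    for f in fields:
        others = [g for g in fields if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Toy additive value function: each field contributes a fixed score shift;
# a large "anchor" contribution would indicate anchor-driven reweighting.
scores = {"question": 1.0, "anchor": 0.7, "format": 0.1}
v = lambda s: sum(scores[f] for f in s)

print(exact_shapley(["question", "anchor", "format"], v))
```

Exact enumeration is feasible here because structured prompts have only a handful of fields (2^n coalitions); for larger field sets one would fall back to sampled approximations.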
| By: | F. Atzori; L. Corazzini; A. Guarnieri |
| Abstract: | Artificial Intelligence (AI) has emerged as a transformative technology, capable of reshaping production processes, professions, and domains traditionally regarded as uniquely human—such as art and creativity. This study examines the relationship between AI, creativity, and personality traits, with the goal of understanding how individual differences influence perceptions, attitudes, and the utilization of AI. A total of 260 participants completed a comprehensive questionnaire assessing AI usage and perception, personality traits, and multiple creativity tasks, including the Divergent Association Task, the Alternative Uses Task, and a constrained narrative task. Our results reveal that creativity increases with reflective, moderate engagement with AI, while both minimal and excessive reliance reduce creative performance. Cluster analysis identifies four distinct attitudinal profiles toward AI—Enthusiasts, Alarmed, Critics, and Cautious—differing in trust, perceived risks, and frequency of use. Openness to Experience and Agreeableness emerge as key traits that predict these profiles: openness is positively associated with the creative and balanced use of AI, whereas high agreeableness correlates with more cautious or risk-averse perceptions. Overall, creativity thrives when curiosity and critical reflection coexist, suggesting that human originality benefits most from mindful, selective interaction with AI rather than from full automation. |
| Keywords: | Creativity; Perception of AI; Personality traits |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:cns:cnscwp:202516 |
| By: | Shaohui Wang |
| Abstract: | This paper develops a unified game-theoretic account of how generative AI reshapes the pre-doctoral "hope-labor" market linking Principal Investigators (PIs), Research Assistants (RAs), and PhD admissions. We integrate (i) a PI-RA relational-contract stage, (ii) a task-based production technology in which AI is both substitute (automation) and complement (augmentation/leveling), and (iii) a capacity-constrained admissions tournament that converts absolute output into relative rank. The model yields four results. First, AI has a dual and thresholded effect on RA demand: when automation dominates, AI substitutes for RA labor; when augmentation dominates, small elite teams become more valuable. Second, heterogeneous PI objectives endogenously segment the RA market: quantity-maximizing PIs adopt automation and scale "project-manager" RAs, whereas quality-maximizing PIs adopt augmentation and cultivate "idea-generator" RAs. Third, a symmetric productivity shock triggers a signaling arms race: more "strong" signals flood a fixed-slot tournament, depressing the admission probability attached to any given signal and potentially lowering RA welfare despite higher productivity. Fourth, AI degrades the informational content of polished routine artifacts, creating a novel moral-hazard channel ("effort laundering") that shifts credible recommendations toward process-visible, non-automatable creative contributions. We discuss welfare and equity implications, including over-recruitment with thin mentoring, selectively misleading letters, and opaque pipelines, and outline light-touch governance (process visibility, AI-use disclosure, and limited viva/replication checks) that preserves efficiency while reducing unethical supervision and screening practices. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.00068 |
| By: | Daley, Mark |
| Abstract: | This paper models how increasingly capable, low-cost AI research systems interact with metric-driven universities. We formalize a simple production economy in which labs allocate human and AI effort to maximize publications, citations, and grant dollars. AI "effective IQ" (research capability) doubles every 16 months and can be rented as a service. With a constant-elasticity-of-substitution technology, the relative demand for human research labor decays exponentially when AI and human work are gross substitutes. A grant tournament with prestige multipliers amplifies concentration toward already advantaged principal investigators (PIs). We derive role-specific theorems for tenured and tenure-track faculty, graduate students, and professional research support staff. The model highlights one central comparative static, the elasticity of substitution, and shows how policy levers that reduce substitutability or raise human-oversight floors materially change the trajectory. |
| Date: | 2025–11–04 |
| URL: | https://d.repec.org/n?u=RePEc:osf:metaar:xztf7_v1 |
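The central comparative static above can be sketched numerically. Under a CES technology with elasticity of substitution sigma > 1 (gross substitutes), the cost-minimizing ratio of human to AI labor is proportional to q**(1 - sigma), where q is AI capability, so exponential growth in q implies exponential decay in relative human demand. The parameter values below are illustrative, not calibrated from the paper.

```python
def relative_human_demand(q, sigma=2.0, alpha=0.5, w=1.0, p=1.0):
    """Cost-minimizing H/A ratio for Y = (a*H^rho + (1-a)*(q*A)^rho)^(1/rho),
    with human wage w, AI rental price p, and sigma = 1/(1-rho)."""
    return (alpha / (1 - alpha)) ** sigma * (p / w) ** sigma * q ** (1 - sigma)

# AI capability doubles every 16 months: q(t) = 2 ** (t / 16).
for months in (0, 16, 32, 48):
    q = 2 ** (months / 16)
    print(months, relative_human_demand(q))
```

With sigma = 2 and symmetric shares and prices, each capability doubling halves relative human demand; raising the elasticity of substitution steepens the decay, which is why the model treats sigma as the key policy-relevant parameter.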
| By: | Ding, Liangping; Lawson, Cornelia; Shapira, Philip (The University of Manchester) |
| Abstract: | Artificial intelligence (AI) promises to transform science by accelerating knowledge discovery, automating processes, and introducing new paradigms for research. However, there remains a limited understanding of how AI is being utilized in scientific research. In this paper, we develop a framework based on GPT-4 and SciBERT to identify AI’s role in scientific papers, differentiating between Foundational, Adaptation, Tool and Discussion modes of AI research. This allows us to capture AI’s diverse contributions, from theoretical advances to practical applications and critical analysis. We examine AI’s trajectory across these modes by analyzing time series, field-specific, and country trends. This approach expands on search-term based identification of AI contributions and offers insights into how AI is being deployed in science. |
| Date: | 2025–11–02 |
| URL: | https://d.repec.org/n?u=RePEc:osf:socarx:7ed2b_v1 |
| By: | Susan Athey; Fiona Scott Morton |
| Abstract: | We study how market power in artificial intelligence (AI) shapes wages and welfare in open-economy general equilibrium by treating AI as a priced, imported factor. Across three models, we separate technical efficiency from the impact of upstream price setting. In a two-traded-goods benchmark, the incidence of AI price changes depends on how sectoral skill intensity changes with AI prices; non-monotone intensity can generate “double harm” for unskilled workers (the real wage falls after a large decrease in the price of AI, and falls further when the AI price rises as a result of market power). With one non-traded sector, we observe that the classic “Dutch disease” effect here would arise when one sector becomes more productive and draws labor away from other sectors, creating scarcity and raising prices; but this is not what we expect from the introduction of labor-substituting AI. In contrast, our last model considers two non-traded sectors with CES preferences and free entry, and the opportunity for discrete adoption of technology that replaces unskilled labor in the AI-using sector. When AI reduces unit costs and increases variety, it will not pull unskilled labor from non-tradables; instead it will displace workers from the AI-using sector and lower wages due to diminishing returns in alternative sectors. Strategic upstream pricing of AI then harms welfare through unit-cost (usage fees) and variety (access fees) channels, with income leakage abroad. We derive an adoption frontier tying feasible usage prices to displaced workers’ outside options and show a monopolist typically prices on this boundary; capping one instrument shifts rents to the other. Broad gains for the adopting country rely on pressure (or regulation) on both usage and access fees, as well as policy that supports productive absorption of displaced labor. The framework clarifies when AI can lower real wages and aggregate welfare despite efficiency gains. |
| JEL: | L10 L12 L4 L40 L5 |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34444 |
| By: | Bekkers, Eddy; Humphreys, Lee; Kalachyhin, Hryhorii; Wilczynska, Karolina; Zhao, Danchen |
| Abstract: | This paper studies the macroeconomic impacts of artificial intelligence (AI) using a quantitative trade model with multiple sectors, multiple factors of production, and intermediate linkages. The reallocation of tasks from labour to AI services will generate productivity gains in the model, and AI will reduce operational trade costs. We build four scenarios that differ in how far less-prepared economies catch up. The simulations yield three main findings. First, AI adoption is projected to substantially boost global trade flows and economic growth: in the most favourable scenario, the diffusion of AI raises global GDP by an additional 13.2% over the next 15 years compared to the baseline. Global trade volumes are projected to be 35% larger than without AI. Second, low- and middle-income economies can capture more of these gains if they improve their digital infrastructure and ensure adequate AI deployment across the economy. Third, AI is projected to change the within-country income distribution. While all factors gain in real terms, returns shift toward capital and the skill premium declines. The magnitude of these distributional effects depends on the long-run growth rate of AI and the degree of complementarity between production factors. |
| Keywords: | Artificial Intelligence, Computational general equilibrium, Productivity, Technology adoption, Trade Cost |
| JEL: | C68 E13 O33 O41 F17 |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:zbw:wtowps:330670 |
| By: | Nils H. Lehr; Pascual Restrepo |
| Abstract: | Leading AI firms claim to prioritize social welfare. How should firms with a social mandate price and deploy AI? We derive pricing formulas that depart from profit maximization by incorporating incentives to improve welfare and reduce labor disruptions. Using US data, we evaluate several scenarios. A welfarist firm that values both profit and welfare should price closer to marginal cost, as efficiency gains outweigh distributional concerns. A conservative firm focused on labor-market stability should price above the profit-maximizing level in the short run, especially when its AI may displace low-income workers. Overall, socially minded firms face a trade-off between expanding access to AI and the resulting loss in profits and labor market risks. |
| Keywords: | Artificial intelligence; automation; corporate social responsibility |
| Date: | 2025–11–07 |
| URL: | https://d.repec.org/n?u=RePEc:imf:imfwpa:2025/234 |
| By: | Ruiqing Cao; Abhishek Bhatia |
| Abstract: | The rapid diffusion of generative artificial intelligence (GenAI) has substantially lowered the costs of launching and developing digital ventures. GenAI can potentially both enable previously unviable entrepreneurial ideas by lowering resource needs and improve the performance of existing ventures. We explore how founders' technical and managerial expertise shapes GenAI's impact on digital ventures along these dimensions. Exploiting exogenous variation in GenAI usage across venture categories and the timing of its broad availability for software tasks (e.g., GitHub Copilot's public release and subsequent GenAI tools), we find that the number of new venture launches increased and the median time to launch decreased significantly more in categories with relatively high GenAI usage. GenAI's effect on new launches is larger for founders without managerial experience or education, while its effect on venture capital (VC) funding likelihood is stronger for founders with technical experience or education. Overall, our results suggest that GenAI expands access to digital entrepreneurship for founders lacking managerial expertise and enhances venture performance among technical founders. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.06545 |
| By: | Peyman Shahidi; Gili Rusak; Benjamin S. Manning; Andrey Fradkin; John J. Horton |
| Abstract: | AI agents—autonomous systems that perceive, reason, and act on behalf of human principals—are poised to transform digital markets by dramatically reducing transaction costs. This chapter evaluates the economic implications of this transition, adopting a consumer-oriented view of agents as market participants that can search, negotiate, and transact directly. From the demand side, agent adoption reflects derived demand: users trade off decision quality against effort reduction, with outcomes mediated by agent capability and task context. On the supply side, firms will design, integrate, and monetize agents, with outcomes hinging on whether agents operate within or across platforms. At the market level, agents create efficiency gains from lower search, communication, and contracting costs, but also introduce frictions such as congestion and price obfuscation. By lowering the costs of preference elicitation, contract enforcement, and identity verification, agents expand the feasible set of market designs but also raise novel regulatory challenges. While the net welfare effects remain an empirical question, the rapid onset of AI-mediated transactions presents a unique opportunity for economic research to inform real-world policy and market design. |
| JEL: | D47 D83 J44 K20 L15 L86 O33 |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34468 |
| By: | Nikolas Anic (Swiss Finance Institute - University of Zurich; Finreon); Andrea Barbon (University of St. Gallen); Ralf Seiz (University of St. Gallen; Finreon); Carlo Zarattini (Concretum Group) |
| Abstract: | This paper investigates whether large language models (LLMs) can improve cross-sectional momentum strategies by extracting predictive signals from firm-specific news. We combine daily U.S. equity returns for S&P 500 constituents with high-frequency news data and use prompt-engineered queries to ChatGPT that inform the model when a stock is about to enter a momentum portfolio. The LLM evaluates whether recent news supports a continuation of past returns, producing scores that condition both stock selection and portfolio weights. An LLM-enhanced momentum strategy outperforms a standard long-only momentum benchmark, delivering higher Sharpe and Sortino ratios both in-sample and in a truly out-of-sample period after the model's pre-training cutoff. These gains are robust to transaction costs, prompt design, and portfolio constraints, and are strongest for concentrated, high-conviction portfolios. The results suggest that LLMs can serve as effective real-time interpreters of financial news, adding incremental value to established factor-based investment strategies. |
| Keywords: | Large Language Models, Momentum Investing, Textual Analysis, News Sentiment, Artificial Intelligence |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:chf:rpseri:rp2594 |
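Conditioning both selection and weights on an LLM news score, as the abstract describes, can be sketched as below. The score threshold, the score-proportional weighting, and the tickers are illustrative assumptions, not the paper's specification.

```python
def llm_momentum_weights(past_returns, llm_scores, top_n=3, min_score=0.5):
    """past_returns, llm_scores: dicts keyed by ticker.

    Keep the top_n momentum names whose LLM news-continuation score
    (in [0, 1]) clears min_score, then weight the survivors in
    proportion to their scores (long-only, weights sum to 1)."""
    ranked = sorted(past_returns, key=past_returns.get, reverse=True)[:top_n]
    kept = [t for t in ranked if llm_scores[t] >= min_score]
    total = sum(llm_scores[t] for t in kept)
    return {t: llm_scores[t] / total for t in kept} if total else {}

# Hypothetical formation-period returns and LLM scores for four tickers.
past = {"AAA": 0.30, "BBB": 0.25, "CCC": 0.20, "DDD": 0.10}
news = {"AAA": 0.9, "BBB": 0.3, "CCC": 0.6, "DDD": 0.8}
print(llm_momentum_weights(past, news))
```

Here BBB enters on momentum but is screened out by its weak news score, and the remaining weight tilts toward the highest-conviction name, mirroring the paper's finding that gains are strongest for concentrated portfolios.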
| By: | Haofei Yu; Fenghai Li; Jiaxuan You |
| Abstract: | Large language models (LLMs) achieve strong performance across benchmarks—from knowledge quizzes and math reasoning to web-agent tasks—but these tests occur in static settings, lacking real dynamics and uncertainty. Consequently, they evaluate isolated reasoning or problem-solving rather than decision-making under uncertainty. To address this, we introduce LiveTradeBench, a live trading environment for evaluating LLM agents in realistic and evolving markets. LiveTradeBench follows three design principles: (i) live data streaming of market prices and news, eliminating dependence on offline backtesting and preventing information leakage while capturing real-time uncertainty; (ii) a portfolio-management abstraction that extends control from single-asset actions to multi-asset allocation, integrating risk management and cross-asset reasoning; and (iii) multi-market evaluation across structurally distinct environments—U.S. stocks and Polymarket prediction markets—differing in volatility, liquidity, and information flow. At each step, an agent observes prices, news, and its portfolio, then outputs percentage allocations that balance risk and return. Using LiveTradeBench, we run 50-day live evaluations of 21 LLMs across families. Results show that (1) high LMArena scores do not imply superior trading outcomes; (2) models display distinct portfolio styles reflecting risk appetite and reasoning dynamics; and (3) some LLMs effectively leverage live signals to adapt decisions. These findings expose a gap between static evaluation and real-world competence, motivating benchmarks that test sequential decision making and consistency under live uncertainty. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2511.03628 |
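The per-step action described above, in which an agent emits percentage allocations across assets and cash, can be sketched as a normalization of raw agent scores. The softmax mapping and the asset names are illustrative assumptions, not the benchmark's actual interface.

```python
import math

def to_allocations(scores):
    """Map raw per-asset scores to non-negative portfolio weights
    summing to 1, via a numerically stable softmax."""
    m = max(scores.values())                      # subtract max for stability
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    z = sum(exps.values())
    return {k: e / z for k, e in exps.items()}

# Hypothetical scores from an agent's reasoning step, including a cash leg.
alloc = to_allocations({"AAPL": 1.2, "MSFT": 0.8, "CASH": 0.0})
print(alloc)
```

Keeping an explicit cash leg lets the same mapping express risk appetite: a conservative agent simply scores cash higher, shrinking its market exposure without any special-case logic.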