on Artificial Intelligence
| By: | Dalaman, Burak (University of London); Kalay, Ali Furkan (Macquarie University, Sydney); Kettlewell, Nathan (University of Technology, Sydney) |
| Abstract: | Large language models (LLMs) have altered the nature of academic writing. While the influence of LLMs on academic writing remains controversial, one promise of this technology is to bridge the language barriers faced by nonnative English-speaking researchers. This study empirically demonstrates that LLMs have led to convergence in the lexical diversity of native and nonnative speakers, potentially helping to level the playing field. Language complexity has also increased for nonnative speakers. We classify over one million authors as native or nonnative English speakers based on the etymological origins of their names and analyze over one million abstracts from arXiv.org, evaluating changes in lexical diversity and readability before and after ChatGPT’s release in November 2022. The results show a sharp increase in writing sophistication among all researchers, with nonnative English speakers posting the greatest gains across all writing metrics. Our findings provide empirical evidence on the impact of LLMs on academic writing, supporting recent speculation about their potential to bridge language barriers. |
| Keywords: | technology adoption, large language models, academic equity, generative AI, language barrier, Bayesian structural time series |
| JEL: | J24 I23 |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18215 |
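The before/after comparison described in the abstract rests on standard lexical-diversity and readability metrics. As a minimal sketch (the study's exact measures are not specified here; type-token ratio and Flesch Reading Ease are common choices, and the syllable counter below is a rough approximation):

```python
import re

def type_token_ratio(text: str) -> float:
    """Lexical diversity: unique words divided by total words."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count runs of vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

abstract = "We study models. Models are useful. We evaluate models carefully."
print(round(type_token_ratio(abstract), 3), round(flesch_reading_ease(abstract), 1))
```

Comparing the distributions of such scores across the pre- and post-November-2022 abstract pools, split by the name-based native/nonnative classification, is the shape of the exercise the paper describes.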
| By: | Jing Cynthia Wu; Jin Xi; Shihan Xie |
| Abstract: | We propose a new LLM-based survey framework that enables retrospective coverage, economic reasoning, dynamic effects, and clean identification. We recover human-comparable treatment effects in a multi-wave randomized controlled trial of inflation expectations surveys, at 1/1000 the cost. To demonstrate the framework’s full potential, we extend the benchmark human survey (10 waves, 2018–2023) to over 50 waves dating back to 1990. We further examine the economic mechanisms underlying agents’ expectation formation, identifying the mean-reversion and individual-attention channels. Finally, we trace dynamic treatment effects and demonstrate clean identification. Together, these innovations demonstrate that LLM surveys enable research designs unattainable with human surveys. |
| JEL: | C83 E31 E52 |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34308 |
| By: | Brüll, Eduard; Mäurer, Samuel; Rostam-Afschar, Davud |
| Abstract: | We provide experimental evidence on how employers adjust expectations to automation risk in high-skill, white-collar work. Using a randomized information intervention among tax advisors in Germany, we show that firms systematically underestimate automatability. Information provision raises risk perceptions, especially for routine-intensive roles. Yet it leaves short-run hiring plans unchanged. Instead, updated beliefs raise productivity and financial expectations with only minor wage adjustments, implying within-firm inequality consistent with limited rent-sharing. Employers also anticipate new tasks in legal tech, compliance, and AI interaction, and report higher training and adoption intentions. |
| Keywords: | Artificial Intelligence, Automation, Technological Change, Innovation, Technology Adoption, Firm Expectations, Belief Updating, Expertise, Labor Demand, White Collar Jobs, Training |
| JEL: | J23 J24 D22 D84 O33 C93 |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:zbw:glodps:1683 |
| By: | Fouarge, Didier (ROA, Maastricht University); Fregin, Marie-Christine (Maastricht University); Janssen, Simon (Institute for Employment Research (IAB), Nuremberg); Levels, Mark (Maastricht University); Montizaan, Raymond (ROA, Maastricht University); Özgül, Pelin (Maastricht University); Rounding, Nicholas (Maastricht University); Stops, Michael (Institute for Employment Research (IAB), Nuremberg) |
| Abstract: | We analyze the impact of AI-augmented training on worker productivity in a financial services company. The company introduced an AI tool that provides performance feedback on call center agents to guide their training. To estimate causal effects, we exploit the staggered rollout of the AI tool. The AI-augmented training reduces call handling time by 10 percent. We find larger effects for short-tenured workers because they spend less time putting clients on hold. The AI-augmented training also improves communication style, with relatively stronger effects for long-tenured agents, and we find slightly positive effects on customer satisfaction. |
| Keywords: | performance feedback, training, artificial intelligence, employee productivity |
| JEL: | J24 O31 O33 |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18224 |
| By: | Ji Ma; Albert Casella |
| Abstract: | Public and nonprofit organizations often hesitate to adopt AI tools because most models are opaque, and standard approaches typically analyze aggregate patterns rather than offering actionable, case-level guidance. This study tests a practitioner-in-the-loop workflow that pairs transparent decision-tree models with large language models (LLMs) to improve predictive accuracy, interpretability, and the generation of practical insights. Using data from an ongoing college-success program, we build interpretable decision trees to surface key predictors. We then provide each tree's structure to an LLM, enabling it to reproduce case-level predictions grounded in the transparent models. Practitioners participate at every stage, from feature engineering and model design to explanation review and usability assessment, ensuring that field expertise informs the analysis throughout. Results show that integrating transparent models, LLMs, and practitioner input yields accurate, trustworthy, and actionable case-level evaluations, offering a viable pathway for responsible AI adoption in the public and nonprofit sectors. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.19799 |
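The tree-to-LLM handoff described in the abstract can be sketched in a few lines. The feature names, thresholds, and labels below are hypothetical illustrations, not the program's actual model: a transparent tree yields a case-level prediction plus the rule trace behind it, and a serialized version of the same tree can be placed in an LLM prompt so the model's case-level answers stay grounded in the interpretable structure.

```python
# Hypothetical tree for illustration; features and thresholds are assumptions.
TREE = {
    "feature": "gpa", "threshold": 2.5,
    "left":  {"leaf": "at_risk"},
    "right": {"feature": "credits_attempted", "threshold": 12,
              "left":  {"leaf": "monitor"},
              "right": {"leaf": "on_track"}},
}

def predict(node, case, trace=None):
    """Walk the tree, recording the rule fired at each split."""
    trace = [] if trace is None else trace
    if "leaf" in node:
        return node["leaf"], trace
    feat, thr = node["feature"], node["threshold"]
    branch = "left" if case[feat] < thr else "right"
    trace.append(f"{feat} {'<' if branch == 'left' else '>='} {thr}")
    return predict(node[branch], case, trace)

def tree_to_prompt(node, indent=0):
    """Serialize the tree as text an LLM prompt can include verbatim."""
    pad = "  " * indent
    if "leaf" in node:
        return f"{pad}-> predict {node['leaf']}\n"
    return (f"{pad}if {node['feature']} < {node['threshold']}:\n"
            + tree_to_prompt(node["left"], indent + 1)
            + f"{pad}else:\n"
            + tree_to_prompt(node["right"], indent + 1))

label, trace = predict(TREE, {"gpa": 3.1, "credits_attempted": 15})
print(label, "because", " and ".join(trace))
```

The rule trace doubles as the explanation practitioners review, which is what makes the workflow auditable end to end.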
| By: | Wenjun Cao |
| Abstract: | Large Language Models are increasingly adopted as critical tools for accelerating innovation. This paper identifies and formalizes a systemic risk inherent in this paradigm: Black Box Absorption. We define this as the process by which the opaque internal architectures of LLM platforms, often operated by large-scale service providers, can internalize, generalize, and repurpose novel concepts contributed by users during interaction. This mechanism threatens to undermine the foundational principles of innovation economics by creating severe informational and structural asymmetries between individual creators and platform operators, thereby jeopardizing the long-term sustainability of the innovation ecosystem. To analyze this challenge, we introduce two core concepts: the idea unit, representing the transportable functional logic of an innovation, and idea safety, a multidimensional standard for its protection. This paper analyzes the mechanisms of absorption and proposes a concrete governance and engineering agenda to mitigate these risks, ensuring that creator contributions remain traceable, controllable, and equitable. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.20612 |
| By: | Alex Haag |
| Abstract: | Global competition in artificial intelligence (AI) has intensified in recent years. Some assessments emphasize US exceptionalism, while others argue that China is eroding US dominance. By contrast, the progress of other advanced foreign economies (AFEs) receives far less attention. |
| Date: | 2025–10–06 |
| URL: | https://d.repec.org/n?u=RePEc:fip:fedgfn:2025-10-06 |
| By: | Ngunza Maniata, Kevin |
| Abstract: | The recent ascent of Anthropic, a United States-based artificial intelligence company founded in 2021 by former OpenAI executives, has reignited the debate over whether the global AI boom represents sustainable technological transformation or a new financial bubble. With a private valuation surpassing 180 billion USD and projected annualized revenue exceeding 20 billion USD by 2026, Anthropic embodies both the promise of rapid innovation and the risks of speculative exuberance. This paper examines the firm’s growth within the theoretical frameworks of Schumpeterian innovation, Minskyan financial cycles, and contemporary analyses of digital-economy concentration. Drawing on publicly available financial data, corporate disclosures, and secondary literature, it interprets Anthropic’s trajectory as a case study in the financialization of cognition. The discussion highlights how alliances with Amazon and Google have turned frontier AI into an infrastructure-dependent oligopoly, while unresolved issues of data ownership and legal accountability call into question the durability of such valuations. The study concludes that Anthropic’s rise illustrates the dual nature of modern technological capitalism: the capacity for exponential value creation tempered by systemic fragility and institutional lag. |
| Keywords: | Artificial Intelligence; Anthropic; Financialization; Valuation; Technological Innovation; Industrial Organization; Intellectual Property; Speculative Cycles |
| JEL: | G32 K11 L86 M21 O33 |
| Date: | 2025–10–18 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:126512 |
| By: | Lucas Eduardo Pereira Teles; Carlos M. S. Figueiredo |
| Abstract: | This article presents a comparative study of large language models (LLMs) on the task of sentiment analysis of financial market news. The work analyzes how these models perform on this important natural language processing task in the context of finance. The LLMs are compared with classical approaches, allowing the benefits of each tested model or approach to be quantified. Results show that large language models outperform classical models in the vast majority of cases. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.15929 |
| By: | Fernando Spadea; Oshani Seneviratne |
| Abstract: | Financial recommendation systems often fail to account for key behavioral and regulatory factors, leading to advice that is misaligned with user preferences, difficult to interpret, or unlikely to be followed. We present FLARKO (Financial Language-model for Asset Recommendation with Knowledge-graph Optimization), a novel framework that integrates Large Language Models (LLMs), Knowledge Graphs (KGs), and Kahneman-Tversky Optimization (KTO) to generate asset recommendations that are both profitable and behaviorally aligned. FLARKO encodes users' transaction histories and asset trends as structured KGs, providing interpretable and controllable context for the LLM. To demonstrate the adaptability of our approach, we develop and evaluate both a centralized architecture (CenFLARKO) and a federated variant (FedFLARKO). To our knowledge, this is the first demonstration of KTO fine-tuning of LLMs for financial asset recommendation, and the first use of structured KGs to ground LLM reasoning over behavioral financial data in a federated learning (FL) setting. Evaluated on the FAR-Trans dataset, FLARKO consistently outperforms state-of-the-art recommendation baselines on behavioral alignment and joint profitability, while remaining interpretable and resource-efficient. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.15993 |
| By: | Qionghua Chu |
| Abstract: | I identify a new signaling channel in ESG research by empirically examining whether environmental, social, and governance (ESG) investing remains valuable as large institutional investors increasingly shift toward artificial intelligence (AI). Using winsorized ESG scores of S&P 500 firms from Yahoo Finance and controlling for market value of equity, I conduct cross-sectional regressions to test the signaling mechanism. I demonstrate that Environmental, Social, Governance, and composite ESG scores strongly and positively signal a higher debt-to-total-capital ratio, both individually and in various combinations. My findings contribute to the growing literature on ESG investing, offering an economically meaningful signaling channel with implications for long-term portfolio management amid the rise of AI. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.15956 |
| By: | Charidimos Papadakis; Angeliki Dimitriou; Giorgos Filandrianos; Maria Lymperaiou; Konstantinos Thomas; Giorgos Stamou |
| Abstract: | Large language models show promise for financial decision-making, yet deploying them as autonomous trading agents raises fundamental challenges: how to adapt instructions when rewards arrive late and obscured by market noise, how to synthesize heterogeneous information streams into coherent decisions, and how to bridge the gap between model outputs and executable market actions. We present ATLAS (Adaptive Trading with LLM AgentS), a unified multi-agent framework that integrates structured information from markets, news, and corporate fundamentals to support robust trading decisions. Within ATLAS, the central trading agent operates in an order-aware action space, ensuring that outputs correspond to executable market orders rather than abstract signals. The agent can incorporate feedback while trading using Adaptive-OPRO, a novel prompt-optimization technique that dynamically adapts the prompt by incorporating real-time, stochastic feedback, leading to increasing performance over time. Across regime-specific equity studies and multiple LLM families, Adaptive-OPRO consistently outperforms fixed prompts, while reflection-based feedback fails to provide systematic gains. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.15949 |
| By: | Kefan Chen; Hussain Ahmad; Diksha Goel; Claudia Szabo |
| Abstract: | Large Language Models (LLMs) have recently gained popularity in stock trading for their ability to process multimodal financial data. However, most existing methods focus on single-stock trading and lack the capacity to reason over multiple candidates for portfolio construction. Moreover, they typically lack the flexibility to revise their strategies in response to market shifts, limiting their adaptability in real-world trading. To address these challenges, we propose 3S-Trader, a training-free framework that incorporates scoring, strategy, and selection modules for stock portfolio construction. The scoring module summarizes each stock's recent signals into a concise report covering multiple scoring dimensions, enabling efficient comparison across candidates. The strategy module analyzes historical strategies and overall market conditions to iteratively generate an optimized selection strategy. Based on this strategy, the selection module identifies and assembles a portfolio by choosing stocks with higher scores in relevant dimensions. We evaluate our framework across four distinct stock universes, including the Dow Jones Industrial Average (DJIA) constituents and three sector-specific stock sets. Compared with existing multi-LLM frameworks and time-series-based baselines, 3S-Trader achieves the highest accumulated return of 131.83% on DJIA constituents with a Sharpe ratio of 0.31 and Calmar ratio of 11.84, while also delivering consistently strong results across other sectors. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.17393 |
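The scoring, strategy, and selection modules described in the abstract compose a simple pipeline: score each candidate on several dimensions, let the strategy decide which dimensions matter, then pick the top-ranked names. A toy sketch, where the tickers, dimension names, and scores are assumptions for illustration (in the paper they come from LLM-generated reports and an iteratively optimized strategy):

```python
# Illustrative output of a scoring module: one multi-dimension report per stock.
scores = {
    "AAPL": {"momentum": 8, "news_sentiment": 6, "fundamentals": 7},
    "MSFT": {"momentum": 5, "news_sentiment": 9, "fundamentals": 8},
    "JPM":  {"momentum": 7, "news_sentiment": 4, "fundamentals": 5},
    "XOM":  {"momentum": 3, "news_sentiment": 5, "fundamentals": 9},
}

def select_portfolio(scores, strategy_dims, k=2):
    """Selection module: rank candidates by the sum of the dimensions the
    strategy module deemed relevant, then keep the top k."""
    ranked = sorted(scores,
                    key=lambda t: sum(scores[t][d] for d in strategy_dims),
                    reverse=True)
    return ranked[:k]

# A strategy emphasizing momentum and fundamentals for the current regime.
portfolio = select_portfolio(scores, ["momentum", "fundamentals"], k=2)
print(portfolio)
```

Regenerating `strategy_dims` as market conditions shift is what gives the framework the adaptability the single-strategy baselines lack.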
| By: | Jinrui Zhang |
| Abstract: | Foundation models - already transformative in domains such as natural language processing - are now starting to emerge for time-series tasks in finance. While these pretrained architectures promise versatile predictive signals, little is known about how they shape the risk profiles of the trading strategies built atop them, leaving practitioners reluctant to commit serious capital. In this paper, we propose an extension to the Capital Asset Pricing Model (CAPM) that disentangles the systematic risk introduced by a shared foundation model - potentially capable of generating alpha if the underlying model is genuinely predictive - from the idiosyncratic risk attributable to custom fine-tuning, which typically accrues no systematic premium. To enable a practical estimation of these separate risks, we align this decomposition with the concepts of uncertainty disentanglement, casting systematic risk as epistemic uncertainty (rooted in the pretrained model) and idiosyncratic risk as aleatory uncertainty (introduced during custom adaptations). Under the Aleatory Collapse Assumption, we illustrate how Monte Carlo dropout - among other methods in the uncertainty-quantification toolkit - can directly measure the epistemic risk, thereby mapping trading strategies to a more transparent risk-return plane. Our experiments show that isolating these distinct risk factors yields deeper insights into the performance limits of foundation-model-based strategies, their model degradation over time, and potential avenues for targeted refinements. Taken together, our results highlight both the promise and the pitfalls of deploying large pretrained models in competitive financial markets. |
| Date: | 2025–10 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2510.17165 |
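The Monte Carlo dropout measurement mentioned in the abstract can be sketched with a toy stochastic predictor (the "model" below is an assumption, standing in for a real network with dropout left active at inference). Epistemic uncertainty shows up as the variance of predictive means across stochastic passes; aleatory uncertainty as the average per-pass predictive variance:

```python
import random
import statistics

random.seed(0)

def mc_dropout_predictions(x, n_passes=200, p_drop=0.5):
    """Stand-in for a network evaluated with dropout active at inference.
    Each stochastic pass returns a (mean, variance) predictive pair; here a
    single weight is randomly dropped, perturbing the predicted mean."""
    out = []
    for _ in range(n_passes):
        w = 2.0 * (random.random() > p_drop)  # weight survives with prob 1 - p_drop
        out.append((w * x, 0.25))             # fixed per-pass aleatory variance
    return out

passes = mc_dropout_predictions(x=1.0)
means = [m for m, _ in passes]
epistemic = statistics.pvariance(means)           # spread of means across passes
aleatory = statistics.mean(v for _, v in passes)  # average per-pass noise
print(round(epistemic, 3), aleatory)
```

In the paper's framing, the epistemic term proxies the systematic, foundation-model-borne risk, while the aleatory term absorbs the idiosyncratic noise introduced by custom adaptation.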
| By: | Ayoki, Milton |
| Abstract: | The diffusion of general-purpose artificial intelligence (AI) systems is collapsing the marginal cost of cognition, coordination, and capital formation. This abundance of intelligence is simultaneously re-pricing the three residual scarcities that still constrain human welfare: atmospheric carbon space, human labor hours, and irreversible time. Using a unified production–climate–welfare model, we show that (i) AI accelerates decarbonization by driving the cost curve of clean technologies below that of fossil fuels; (ii) labor markets bifurcate into a vanishing low-skill wage sector and an expanding high-skill rent sector, generating a transfer problem that can only be solved by AI dividends; and (iii) the option value of future consumption rises as AI compresses the calendar time needed to unlock large-scale decarbonization, longevity, and existential-risk mitigation. The conjunction of these effects drives the Ramsey rule for optimal climate policy to its mathematical limit: the social discount rate (SDR) must converge to zero. We provide empirical calibration using the latest IPCC scenarios, large-language-model energy-intensity data, and labor-share forecasts through 2100. A zero SDR reconciles inter-generational equity with intra-generational efficiency and unlocks a portfolio of “long-horizon public goods” (LHPGs)—from atmospheric restoration to asteroid defense—that markets at positive discount rates chronically under-supply. |
| Keywords: | Artificial intelligence; abundance; scarcity; social discount rate; zero discounting; inter-generational equity; labor-market bifurcation; AI dividend; long-horizon public goods; existential risk; decarbonization; marginal cost of cognition; Ramsey rule; option value of time |
| JEL: | D63 E24 H23 O33 Q54 Q55 |
| Date: | 2025–09–03 |
| URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:126550 |
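For reference, the Ramsey rule named in the abstract has the standard form below (the paper's own calibration is not reproduced here); the paper's claim is that under AI-driven abundance its limit is a social discount rate of zero:

```latex
% Ramsey rule for the social discount rate r
r = \delta + \eta\, g
% \delta : pure rate of time preference
% \eta   : elasticity of marginal utility of consumption
% g      : growth rate of per-capita consumption
% The abstract argues the conjunction of its three effects drives r \to 0.
```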