nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2026–04–06
23 papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Human–AI Evaluation and Gender Transparency: Application Decisions in Competitive Hiring By Bernd Irlenbusch; Holger A. Rau; Rainer Michael Rilke
  2. A Revealed Preference Framework for AI Alignment By Elchin Suleymanov
  3. Should I State or Should I Show? Aligning AI with Human Preferences By Keaton Ellis; Wanying Huang
  4. Artificial Intelligence in Science: Returns, Reallocation, and Reorganization By Moh Hosseinioun; Brian Uzzi; Henrik Barslund Fosse
  5. Workers' Incentives and the Optimal Taxation of AI By Jakub Growiec; Klaus Prettner; Maciej Szkróbka
  6. Steering Technological Progress By Anton Korinek; Joseph E. Stiglitz
  7. AI, Output, and Employment By Andrew Johnston; Christos A. Makridis
  8. The AI Layoff Trap By Brett Hemenway Falk; Gerry Tsoukalas
  9. Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis of Emerging Labor Market Disruption By Ravish Gupta; Saket Kumar
  10. Crashing Waves vs. Rising Tides: Preliminary Findings on AI Automation from Thousands of Worker Evaluations of Labor Market Tasks By Matthias Mertens; Adam Kuzee; Brittany S. Harris; Harry Lyu; Wensu Li; Jonathan Rosenfeld; Meiri Anto; Martin Fleming; Neil Thompson
  11. Who Adopts AI? Evidence on Firms, Technologies and Workers By Pulito, Giuseppe; Pytlikova, Mariola; Schroeder, Sarah; Lodefalk, Magnus
  12. Bridging Distant Ideas: the Impact of AI on R&D and Recombinant Innovation By Emanuele Bazzichi; Massimo Riccaboni; Fulvio Castellacci
  13. Economics of Human and AI Collaboration: When is Partial Automation More Attractive than Full Automation? By Wensu Li; Atin Aboutorabi; Harry Lyu; Kaizhi Qian; Martin Fleming; Brian C. Goehring; Neil Thompson
  14. Mind the Gap: AI Adoption in Europe and the US By Alexander Bick; Adam Blandin; David Deming; Nicola Fuchs-Schündeln; Jonas Jessen
  15. Power Couple? AI Growth and Renewable Energy Investment By Luyi Gui; Tinglong Dai
  16. On the Carbon Footprint of Economic Research in the Age of Generative AI By Andres Alonso-Robisco; Carlos Esparcia; Francisco Jareño
  17. Pay-Per-Crawl Pricing for AI: The LM-Tree Agent By Richard Archer; Soheil Ghili; Nima Haghpanah
  18. Designing Agentic AI-Based Screening for Portfolio Investment By Mehmet Caner; Agostino Capponi; Nathan Sun; Jonathan Y. Tan
  19. LLM-Based Measurement of Latent Attributes in Trade Data By DiGiuseppe, Matthew; Fu, Xuelong; Flynn, Michael E
  20. Learning to Aggregate Zero-Shot LLM Agents for Corporate Disclosure Classification By Kemal Kirtac
  21. Artificial Intelligence Capital and Business Innovation By Drydakis, Nick
  22. Large Language Models and Stock Investing: Is the Human Factor Required? By Ricardo Crisostomo; Diana Mykhalyuk
  23. Shopping with a Platform AI Assistant: Who Adopts, When in the Journey, and What For By Se Yan; Han Zhong; Zemin Zhong; Wenyu Zhou

  1. By: Bernd Irlenbusch (University of Cologne & London School of Economics and Political Science); Holger A. Rau (University of Duisburg-Essen & University of Göttingen); Rainer Michael Rilke (WHU – Otto Beisheim School of Management)
    Abstract: LLMs are rapidly entering the hiring process, but their most pronounced effects may occur before any screening by changing who chooses to apply. We study how human versus LLM-based evaluation and gender transparency shape entry into competitive jobs. In a preregistered online experiment, participants first complete a Niederle and Vesterlund (2007) tournament task to measure competitive preferences, then prepare text-based job applications and decide whether to apply under each of four evaluation regimes—human only, LLM only, and two hybrid human-in-the-loop configurations—while gender disclosure is randomized between subjects. LLM involvement reduces application rates, with stronger effects for women than men, including under hybrid designs. Effects are driven by non-competitive candidates; non-competitive women, the group most exposed to AI-induced deterrence, receive the strongest objective evaluations under pure AI assessment across all subgroups, yet are systematically underconfident and apply least often. Competitive men persistently apply and exhibit overconfidence-driven adverse selection, whereas competitive women show resilience to AI-induced deterrence while remaining well-calibrated under AI evaluation and exhibiting positive self-selection across regimes. We find no effects of gender transparency.
    Keywords: AI hiring, LLMs, algorithm aversion, gender differences
    JEL: C92 J71 J24 O33
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:ajk:ajkdps:398
  2. By: Elchin Suleymanov
    Abstract: Human decision makers increasingly delegate choices to AI agents, raising a natural question: does the AI implement the human principal's preferences or pursue its own? To study this question using revealed preference techniques, I introduce the Luce Alignment Model, where the AI's choices are a mixture of two Luce rules, one reflecting the human's preferences and the other the AI's. I show that the AI's alignment (similarity of human and AI preferences) can be generically identified in two settings: the laboratory setting, where both human and AI choices are observed, and the field setting, where only AI choices are observed.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.27868
  3. By: Keaton Ellis; Wanying Huang
    Abstract: As AI agents become more autonomous, properly aligning their objectives with human preferences becomes increasingly important. We study how effectively an AI agent learns a human principal's preference in choice under risk via stated versus revealed preferences. We conduct an online experiment in which subjects state their preferences through written instructions ("prompts") and reveal them through choices in a series of binary lottery questions ("data"). We find that on average, an AI agent given revealed-preference data predicts subjects' choices more accurately than an AI agent given stated-preference prompts. Further analysis suggests that the gap is driven by subjects' difficulty in translating their own preferences into written instructions. When given a choice between which information source to give to an AI agent, a large portion of subjects fail to select the more informative one. Moreover, when predictions from the two sources conflict, we find that the AI agent aligns more frequently with the prompt, despite its lower accuracy. Overall, these results highlight the revealed preference approach as a powerful mechanism for communicating human preferences to AI agents, but its success depends on careful implementation.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.29317
  4. By: Moh Hosseinioun; Brian Uzzi; Henrik Barslund Fosse
    Abstract: Investment in artificial intelligence (AI) has grown rapidly, yet its returns to scientific research remain poorly understood. We study how AI reshapes the production of science using a comprehensive dataset of research proposals submitted to a large international funding agency, including both funded and unfunded projects. Combining keyword extraction with large language model classification, we identify the presence, type, and functional role of AI within each proposal and link these measures to detailed budget allocations, team structure, and subsequent publication outcomes. We find that, in the short run, AI adoption is associated with modest improvements in scientific outcomes concentrated in the upper tail. Instead, its primary effects arise in the organization of research: AI-enabled projects reallocate resources toward human capital, involve larger teams, and undertake a broader set of tasks. These patterns are consistent with a reorganization of the scientific production process rather than immediate efficiency gains, in line with theories of general-purpose technologies. Task-level analyses further show that activities expanded in AI-enabled projects, particularly ideation and experimentation, are increasingly compatible with large language model capabilities, suggesting potential for future productivity gains as these technologies mature.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.27956
  5. By: Jakub Growiec; Klaus Prettner; Maciej Szkróbka
    Abstract: We characterize the optimal tax policy in an economy with human manual and cognitive labor, physical capital, and artificial intelligence (AI). Extending the dynamic taxation setup of Slavik and Yazici (2014), we find that it is optimal to start taxing AI when cognitive workers start to consider switching to manual jobs. This threshold may be crossed once AI becomes sufficiently capable of substituting for humans across cognitive tasks.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.17898
  6. By: Anton Korinek; Joseph E. Stiglitz
    Abstract: Rapid progress in new technologies such as AI has led to widespread anxiety about adverse labor market impacts. This paper asks how to guide innovative efforts so as to increase labor demand and create better-paying jobs while also evaluating the limitations of such an approach. We develop a theoretical framework to identify the properties that make an innovation desirable from the perspective of workers, including its technological complementarity to labor, the relative income of the affected workers, and the factor share of labor in producing the goods involved. Applications include robot taxation, factor-augmenting progress, and task automation. In our framework, the welfare benefits of steering technology are greater the less efficient social safety nets are. As technological progress devalues labor, the welfare benefits of steering at first increase but, beyond a critical threshold, decline, and optimal policy shifts toward greater redistribution. Moreover, as labor's economic value diminishes, steering progress focuses increasingly on enhancing human well-being rather than labor productivity.
    JEL: D63 E64 O3
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34994
  7. By: Andrew Johnston; Christos A. Makridis
    Abstract: Does artificial intelligence (AI) increase productivity - and does it displace workers? We examine aggregate effects using administrative data covering essentially all U.S. employers in a difference-in-differences design exploiting occupational AI exposure across industries and states. A one standard deviation increase in exposure raises output by 7%, with effects emerging in 2021 when enterprise AI tools entered the market. Employment effects follow the same timing but diverge by exposure type: where AI likely requires human collaboration, employment rises 4%; where AI can perform tasks independently, we find no significant employment effect. Results are robust to state-by-year and industry-by-year fixed effects and suggest AI has caused a decrease in the labor share of income.
    Keywords: artificial intelligence, generative AI, aggregate productivity, labor market, technological change
    JEL: O33 J24 J23 E24 O47
    Date: 2026
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_12579
  8. By: Brett Hemenway Falk; Gerry Tsoukalas
    Abstract: If AI displaces human workers faster than the economy can reabsorb them, it risks eroding the very consumer demand firms depend on. We show that knowing this is not enough for firms to stop it. In a competitive task-based model, demand externalities trap rational firms in an automation arms race, displacing workers well beyond what is collectively optimal. The resulting loss harms both workers and firm owners. More competition and "better" AI amplify the excess; wage adjustments and free entry cannot eliminate it. Neither can capital income taxes, worker equity participation, universal basic income, upskilling, or Coasian bargaining. Only a Pigouvian automation tax can. The results suggest that policy should address not only the aftermath of AI labor displacement but also the competitive incentives that drive it.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.20617
  9. By: Ravish Gupta; Saket Kumar
    Abstract: This paper extends the Acemoglu-Restrepo task exposure framework to address the labor market effects of agentic artificial intelligence systems: autonomous AI agents capable of completing entire occupational workflows rather than discrete tasks. Unlike prior automation technologies that substitute for individual subtasks, agentic AI systems execute end-to-end workflows involving multi-step reasoning, tool invocation, and autonomous decision-making, substantially expanding occupational displacement risk beyond what existing task-level analyses capture. We introduce the Agentic Task Exposure (ATE) score, a composite measure computed algorithmically from O*NET task data using calibrated adoption parameters--not a regression estimate--incorporating AI capability scores, workflow coverage factors, and logistic adoption velocity. Applying the ATE framework across five major US technology regions (Seattle-Tacoma, San Francisco Bay Area, Austin, New York, and Boston) over a 2025-2030 horizon, we find that 93.2% of the 236 analyzed occupations across six information-intensive SOC groups (financial, legal, healthcare, healthcare support, sales, and administrative/clerical) cross the moderate-risk threshold (ATE >= 0.35) in Tier 1 regions by 2030, with credit analysts, judges, and sustainability specialists reaching ATE scores of 0.43-0.47. We simultaneously identify seventeen emerging occupational categories benefiting from reinstatement effects, concentrated in human-AI collaboration, AI governance, and domain-specific AI operations roles. Our findings carry implications for workforce transition policy, regional economic planning, and the temporal dynamics of labor market adjustment.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.00186
  10. By: Matthias Mertens; Adam Kuzee; Brittany S. Harris; Harry Lyu; Wensu Li; Jonathan Rosenfeld; Meiri Anto; Martin Fleming; Neil Thompson
    Abstract: We propose that AI automation is a continuum between: (i) crashing waves where AI capabilities surge abruptly over small sets of tasks, and (ii) rising tides where the increase in AI capabilities is more continuous and broad-based. We test for these effects in preliminary evidence from an ongoing evaluation of AI capabilities across over 3,000 broad-based tasks derived from the U.S. Department of Labor O*NET categorization that are text-based and thus LLM-addressable. Based on more than 17,000 evaluations by workers from these jobs, we find little evidence of crashing waves (in contrast to recent work by METR), but substantial evidence that rising tides are the primary form of AI automation. AI performance is high and improving rapidly across a wide range of tasks. We estimate that, in 2024-Q2, AI models successfully complete tasks that take humans approximately 3-4 hours with about a 50% success rate, increasing to about 65% by 2025-Q3. If recent trends in AI capability growth persist, this pace of AI improvement implies that LLMs will be able to complete most text-related tasks with success rates of, on average, 80%-95% by 2029 at a minimally sufficient quality level. Achieving near-perfect success rates at this quality level or comparable success rates at superior quality would require several additional years. These AI capability improvements would impact the economy and labor market as organizations adopt AI, which could have a substantially longer timeline.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.01363
  11. By: Pulito, Giuseppe (ROCKWOOL Foundation Berlin); Pytlikova, Mariola (CERGE-EI, Charles University and the Economics Institute of the Czech Academy of Sciences, and AIAS, Aarhus University); Schroeder, Sarah (Aarhus University and Ratio Institute); Lodefalk, Magnus (Örebro University School of Business)
    Abstract: Using two waves of nationally representative Danish firm surveys linked to employer–employee administrative registers, we study how adoption varies across artificial intelligence (AI) and related advanced technologies. We show that AI adoption is highly technology-specific. While firm size and digital infrastructure predict adoption broadly, workforce composition operates through distinct channels: STEM-educated workforces predict core AI adoption, whereas non-STEM university-educated workforces are associated with generative AI adoption, indicating different human capital complementarities. The factors associated with adoption differ from those predicting deployment breadth: firm size and digital maturity matter for both, whereas workforce composition primarily predicts adoption alone. Machine learning and natural language processing are deployed across multiple business functions, whereas other advanced technologies remain concentrated in specific operational domains. Individual-level evidence provides a foundation for these patterns, with awareness of workplace AI usage concentrated among managers and high-skilled workers. Self-reported AI knowledge is higher among younger and more educated individuals. Finally, commonly used occupational AI exposure measures vary substantially in their ability to predict observed adoption, with benchmark-based measures outperforming patent-based and LLM-focused alternatives. These findings show that treating AI as a monolithic category obscures economically meaningful variation in who adopts, what they deploy, and how well existing measures capture it.
    Keywords: Artificial Intelligence; Technology Adoption; Digitalisation; Human capital; AI Exposure Measures.
    JEL: D24 J23 J62 O33
    Date: 2026–03–27
    URL: https://d.repec.org/n?u=RePEc:hhs:oruesi:2026_003
  12. By: Emanuele Bazzichi; Massimo Riccaboni; Fulvio Castellacci
    Abstract: We study how artificial intelligence (AI) affects firms' incentives to pursue incremental versus radical knowledge recombinations. We develop a model of recombinant innovation embedded in a Schumpeterian quality-ladder framework, in which innovation arises from recombining ideas across varying distances in a knowledge space. R&D consists of multiple tasks, a fraction of which can be performed by AI. AI facilitates access to distant knowledge domains, but at the same time it also increases the aggregate rate of creative destruction, shortening the monopoly duration that rewards radical innovations. Moreover, excessive reliance on AI may reduce the originality of research and lead to duplication of research efforts. We obtain three main results. First, higher AI productivity encourages more distant recombinations, if the direct facilitation effect is stronger than the indirect effect due to intensified competition from rivals. Second, the effect of increasing the share of AI-automated R&D tasks is non-monotonic: firms initially target more radical innovations, but beyond a threshold of human-AI complementarity, they shift the focus toward incremental innovations. Third, in the limiting case of full automation, the model predicts that optimal recombination distance collapses to zero, suggesting that fully AI-driven research would undermine the very knowledge creation that it seeks to accelerate.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.02189
  13. By: Wensu Li; Atin Aboutorabi; Harry Lyu; Kaizhi Qian; Martin Fleming; Brian C. Goehring; Neil Thompson
    Abstract: This paper develops a unified framework for evaluating the optimal degree of task automation. Moving beyond binary automate-or-not assessments, we model automation intensity as a continuous choice in which firms minimize costs by selecting an AI accuracy level, from no automation through partial human-AI collaboration to full automation. On the supply side, we estimate an AI production function via scaling-law experiments linking performance to data, compute, and model size. Because AI systems exhibit predictable but diminishing returns to these inputs, the cost of higher accuracy is convex: good performance may be inexpensive, but near-perfect accuracy is disproportionately costly. Full automation is therefore often not cost-minimizing; partial automation, where firms retain human workers for residual tasks, frequently emerges as the equilibrium. On the demand side, we introduce an entropy-based measure of task complexity that maps model accuracy into a labor substitution ratio, quantifying human labor displacement at each accuracy level. We calibrate the framework with O*NET task data, a survey of 3,778 domain experts, and GPT-4o-derived task decompositions, implementing it in computer vision. Task complexity shapes substitution: low-complexity tasks see high substitution, while high-complexity tasks favor limited partial automation. Scale of deployment is a key determinant: AI-as-a-Service and AI agents spread fixed costs across users, sharply expanding economically viable tasks. At the firm level, cost-effective automation captures approximately 11% of computer-vision-exposed labor compensation; under economy-wide deployment, this share rises sharply. Since other AI systems exhibit similar scaling-law economics, our mechanisms extend beyond computer vision, reinforcing that partial automation is often the economically rational long-run outcome, not merely a transitional phase.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.29121
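    [Editor's sketch] The cost-minimization logic described in this abstract -- convex scaling-law costs of accuracy meeting accuracy-dependent labor savings -- can be illustrated numerically. The functional forms and constants below are invented for illustration and are not the paper's calibrated model; the point is only that an interior (partial-automation) optimum emerges when accuracy costs are convex.

    ```python
    import numpy as np

    # Candidate AI accuracy levels for a single task.
    acc = np.linspace(0.0, 0.999, 1000)

    # Toy scaling-law cost: near-perfect accuracy is disproportionately
    # expensive (convex in accuracy). Constants 0.05 and 0.8 are arbitrary.
    ai_cost = 0.05 * (1.0 / (1.0 - acc)) ** 0.8

    # Toy substitution curve: share of human labor displaced at each
    # accuracy level; the quadratic form is illustrative only.
    substitution = acc ** 2
    wage_bill = 1.0  # human cost of performing the full task

    # Total cost = AI cost + residual human cost; firm picks the minimum.
    total_cost = ai_cost + wage_bill * (1.0 - substitution)
    best = acc[np.argmin(total_cost)]
    print(f"cost-minimizing accuracy: {best:.3f}")
    ```

    Under these assumed curves the minimizer lies strictly inside (0, 1): the firm automates partially and keeps humans for the residual, mirroring the abstract's claim that full automation is often not cost-minimizing.
    
    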
  14. By: Alexander Bick; Adam Blandin; David Deming; Nicola Fuchs-Schündeln; Jonas Jessen
    Abstract: This paper combines international evidence from worker and firm surveys conducted in 2025 and 2026 to document large gaps in AI adoption, both between the US and Europe and across European countries. Cross-country differences in worker demographics and firm composition account for an important share of these gaps. AI adoption, within and across countries, is also closely linked to firm personnel management practices and whether firms actively encourage AI use by workers. Micro-level evidence suggests that AI generates meaningful time savings for many workers. At the macro level, in recent years industries with higher AI adoption rates have experienced faster productivity growth. While we do not establish causality, this relationship is statistically significant and similar in magnitude in Europe and the US. We do not find clear evidence that industry-level AI adoption is associated with employment changes. We discuss limitations of existing data and outline priorities for future data collection to better assess the productivity and labor market effects of AI.
    Keywords: generative artificial intelligence (AI); technology adoption; labor productivity
    JEL: J24 M16 O14 O33
    Date: 2026–03–26
    URL: https://d.repec.org/n?u=RePEc:fip:fedlwp:102950
  15. By: Luyi Gui; Tinglong Dai
    Abstract: AI and renewable energy are increasingly framed as a "power couple" -- the idea that surging AI electricity demand will accelerate clean-energy investment -- yet concerns persist that AI will instead entrench fossil-fuel carbon lock-in. We reconcile these views by modeling the equilibrium interaction between AI growth and renewable investment. In a parsimonious game, a policymaker invests in renewable capacity available to AI and an AI developer chooses capability; the equilibrium depends on scaling regimes and market incentives. When the market payoff to capability is supermodular and performance gains are near-linear in compute, developers push toward frontier scale even when the marginal megawatt-hour is fossil-based. In this regime, renewable expansion can primarily relax scaling constraints rather than displace fossil generation one-for-one, weakening incentives to build enough clean capacity and reinforcing fossil dependence. This yields an "adaptation trap": as climate damages rise, the value of AI-enabled adaptation increases, which strengthens incentives to enable frontier scaling while tolerating residual fossil use. When AI faces diminishing returns and lower scaling efficiency, energy costs discipline capability choices; renewable investment then both enables capability and decarbonizes marginal compute, generating an "adaptation pathway" in which climate stress strengthens incentives for clean-capacity expansion and can support a carbon-free equilibrium. A calibrated case study illustrates these mechanisms using observed magnitudes for investment, capability, and energy use. Decarbonizing AI is an equilibrium outcome: effective policy must keep clean capacity binding at the margin as compute expands.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.26678
  16. By: Andres Alonso-Robisco; Carlos Esparcia; Francisco Jareño
    Abstract: Generative artificial intelligence (AI) is increasingly used to write and refactor research code, expanding computational workflows. At the same time, Green AI research has largely measured the footprint of models rather than the downstream workflows in which GenAI is a tool. We shift the unit of analysis from models to workflows and treat prompts as decision policies that allocate discretion between researcher and system, governing what is executed and when iteration stops. We contribute in two ways. First, we map the recent Green AI literature into seven themes: training footprint is the largest cluster, while inference efficiency and system-level optimisation are growing rapidly, alongside measurement protocols, green algorithms, governance, and security and efficiency trade-offs. Second, we benchmark a modern economic survey workflow, an LDA-based literature mapping implemented with GenAI-assisted coding and executed in a fixed cloud notebook, measuring runtime and estimated CO2e with CodeCarbon. Injecting generic green language into prompts has no reliable effect, whereas operational constraints and decision-rule prompts deliver large and stable footprint reductions while preserving decision-equivalent topic outputs. The results identify human-in-the-loop governance as a practical lever to align GenAI productivity with environmental efficiency.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.26712
  17. By: Richard Archer; Soheil Ghili; Nima Haghpanah
    Abstract: As AI systems shift from directing users to content toward consuming it directly, publishers need a new revenue model: charging AI crawlers for content access. This model, called pay-per-crawl, must solve a problem of mechanism selection at scale: content is too heterogeneous for a fixed pricing framework. Different sub-types warrant not only different price levels but different pricing rules based on different unstructured features, and there are too many to enumerate or design by hand. We propose the LM Tree, an adaptive pricing agent that grows a segmentation tree over the content library, using LLMs to discover what distinguishes high-value from low-value items and apply those attributes at scale, from binary purchase feedback alone. We evaluate the LM Tree on real content from a major German technology publisher, using 8,939 articles and 80,451 buyer queries with willingness-to-pay calibrated from actual AI crawler traffic. The LM Tree achieves a 65% revenue gain over a single static price and a 47% gain over two-category pricing, outperforming even the publisher's own 8-segment editorial taxonomy by 40% -- recovering content distinctions the publisher's own categories miss.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.01416
  18. By: Mehmet Caner; Agostino Capponi; Nathan Sun; Jonathan Y. Tan
    Abstract: We introduce a new agentic artificial intelligence (AI) platform for portfolio management. Our architecture consists of three layers. First, two large language model (LLM) agents are assigned specialized tasks: one agent screens for firms with desirable fundamentals, while a sentiment analysis agent screens for firms with desirable news. Second, these agents deliberate to generate and agree upon buy and sell signals from a large portfolio, substantially narrowing the pool of candidate assets. Finally, we apply a high-dimensional precision matrix estimation procedure to determine optimal portfolio weights. A defining theoretical feature of our framework is that the number of assets in the portfolio is itself a random variable, realized through the screening process. We introduce the concept of sensible screening and establish that, under mild screening errors, the squared Sharpe ratio of the screened portfolio consistently estimates its target. Empirically, our method achieves superior Sharpe ratios relative to an unscreened baseline portfolio and to conventional screening approaches, evaluated on S&P 500 data over the period 2020-2024.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.23300
  19. By: DiGiuseppe, Matthew (Leiden University); Fu, Xuelong; Flynn, Michael E (Kansas State University)
    Abstract: Trade data are available at a high level of disaggregation, allowing scholars to examine flows of highly specific goods. Yet the sheer number of goods classifications (5,000+) makes it difficult to analyze trade flows and tariff policy at a mid-level of aggregation beyond a few existing categorizations. Here, we outline a method that can scale -- not merely classify -- traded goods on researcher-defined dimensions that are orthogonal to existing classification schemes. We propose that the embedded knowledge in large language models (LLMs) can be used to conduct pairwise comparisons (PWCs) of Harmonized System (HS) product descriptions by determining their relative proximity to a specific concept. A Bayesian Bradley-Terry model then uses these PWCs to place individual items on a latent scale of interest. These estimates and their associated uncertainty can then be used for downstream descriptive or causal analysis.
    Date: 2026–03–27
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:t8wdg_v1
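    [Editor's sketch] The scaling step this abstract describes -- turning pairwise "which item is closer to the concept?" judgments into latent scores -- can be sketched in miniature. The paper uses a Bayesian Bradley-Terry model; the toy version below instead fits plain maximum-likelihood Bradley-Terry strengths via Zermelo's iterative algorithm, and the items and win counts are invented for illustration.

    ```python
    import numpy as np

    def fit_bradley_terry(n_items, comparisons, n_iter=200):
        """Fit Bradley-Terry latent scores from pairwise comparisons.

        comparisons: list of (winner, loser) index pairs, where the winner
        is the item judged closer to the target concept.
        Returns zero-centered log-strengths (higher = closer to concept).
        """
        wins = np.zeros(n_items)                   # total wins per item
        pair_counts = np.zeros((n_items, n_items)) # times each pair was compared
        for w, l in comparisons:
            wins[w] += 1
            pair_counts[w, l] += 1
            pair_counts[l, w] += 1
        p = np.ones(n_items)                       # strengths, uniform start
        for _ in range(n_iter):
            # Zermelo's MM update: p_i = W_i / sum_j n_ij / (p_i + p_j)
            denom = (pair_counts / (p[:, None] + p[None, :])).sum(axis=1)
            p = wins / denom
            p /= p.sum()                           # normalize (identifiability)
        scores = np.log(p)
        return scores - scores.mean()

    # Hypothetical judgments over 3 items: 0 usually beats 1, 1 beats 2, 0 beats 2.
    cmp_data = ([(0, 1)] * 8 + [(1, 0)] * 2 + [(1, 2)] * 8 + [(2, 1)] * 2
                + [(0, 2)] * 9 + [(2, 0)] * 1)
    scores = fit_bradley_terry(3, cmp_data)
    print(scores)  # item 0 highest, item 2 lowest
    ```

    A full Bayesian treatment (as in the paper) would additionally yield posterior uncertainty around each score, which this MLE sketch omits.
    
    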
  20. By: Kemal Kirtac
    Abstract: This paper studies whether a lightweight trained aggregator can combine diverse zero-shot large language model judgments into a stronger downstream signal for corporate disclosure classification. Zero-shot LLMs can read disclosures without task-specific fine-tuning, but their predictions often vary across prompts, reasoning styles, and model families. I address this problem with a multi-agent framework in which three zero-shot agents independently read each disclosure and output a sentiment label, a confidence score, and a short rationale. A logistic meta-classifier then aggregates these signals to predict next-day stock return direction. I use a sample of 18,420 U.S. corporate disclosures issued by Nasdaq and S&P 500 firms between 2018 and 2024, matched to next-day stock returns. Results show that the trained aggregator outperforms all single agents, majority vote, confidence-weighted voting, and a FinBERT baseline. Balanced accuracy rises from 0.561 for the best single agent to 0.612 for the trained aggregator, with the largest gains in disclosures combining strong current performance with weak guidance or elevated risk. The results suggest that zero-shot LLM agents capture complementary financial signals and that supervised aggregation can turn cross-agent disagreement into a more useful classification target.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.20965
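    [Editor's sketch] The aggregation step this abstract describes -- a logistic meta-classifier over per-agent labels and confidences -- can be sketched on simulated data. Everything below is invented for illustration: the three agent accuracies, the random confidences, and the feature layout are assumptions, not the paper's pipeline; the point is that a trained combiner can beat the best single agent when agents make independent errors.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4000

    # Simulated ground truth: next-day return direction in {-1, +1}.
    truth = rng.choice([-1, 1], size=n)

    # Three synthetic zero-shot agents with assumed accuracies.
    labels = np.stack([
        np.where(rng.random(n) < acc, truth, -truth)  # agree with truth w.p. acc
        for acc in (0.60, 0.62, 0.64)
    ], axis=1)
    conf = rng.uniform(0.5, 1.0, size=(n, 3))         # self-reported confidence

    # Meta-features: confidence-weighted votes plus raw votes.
    X = np.hstack([labels * conf, labels])
    y = (truth > 0).astype(float)

    # Logistic meta-classifier trained in-sample by plain gradient descent.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.1 * (X.T @ (p - y)) / n
        b -= 0.1 * (p - y).mean()

    agg_acc = (((X @ w + b) > 0) == (y > 0.5)).mean()
    best_single = max(((labels[:, k] > 0) == (y > 0.5)).mean() for k in range(3))
    print(agg_acc, best_single)
    ```

    The learned weights act like a weighted vote, so the aggregate recovers the majority-style gains that the paper reports over any single agent; the paper's version additionally uses rationales and held-out evaluation, which this sketch omits.
    
    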
  21. By: Drydakis, Nick (Anglia Ruskin University)
    Abstract: This study examines whether AI Capital, defined as AI-related knowledge, skills and capabilities, is associated with business innovation among SMEs in England. Using a two-wave longitudinal panel dataset comprising 504 observations from SMEs collected in 2024 and 2025, the study develops and validates a 45-item AI Capital of Business scale. Business innovation is measured across five dimensions: product and service innovation, process innovation, technology adoption, market and customer engagement, and organisational culture and strategy. Regression models, including pooled OLS, Random Effects, and Fixed Effects specifications, are employed. The findings reveal a robust positive association between AI Capital and business innovation across all model specifications. This association holds across all business innovation dimensions and remains consistent for SMEs with differing levels of financial performance, size, and operational maturity. Each component of AI Capital independently exhibits a positive association with business innovation outcomes.
    Keywords: artificial intelligence, artificial intelligence capital, business innovation, innovation, SMEs
    JEL: O31 O33 O32 L26 L25 M15 D83 J24 O14 O39
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp18476
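    The fixed-effects specification mentioned in the abstract can be sketched with the within (demeaning) transformation for a two-wave panel: demeaning each firm's observations sweeps out time-invariant firm heterogeneity before the slope is estimated. The simulated data, effect sizes, and variable names below are illustrative assumptions, not the study's data.

    ```python
    # Illustrative fixed-effects (within) estimator for a two-wave firm panel;
    # all data here is simulated, not drawn from the study.
    import numpy as np

    rng = np.random.default_rng(0)
    n_firms = 100
    firm_effect = rng.normal(size=n_firms)  # time-invariant heterogeneity

    # Two waves per firm: AI capital score x and innovation outcome y,
    # with a true within-firm slope of 0.5.
    x = rng.normal(size=(n_firms, 2)) + firm_effect[:, None]
    y = 0.5 * x + firm_effect[:, None] + rng.normal(scale=0.1, size=(n_firms, 2))

    # Within transformation: demean each firm across its waves, which
    # removes firm_effect and leaves only within-firm variation.
    x_w = x - x.mean(axis=1, keepdims=True)
    y_w = y - y.mean(axis=1, keepdims=True)

    beta_fe = (x_w * y_w).sum() / (x_w ** 2).sum()  # close to 0.5
    ```

    Pooled OLS on the raw (x, y) would confound the firm effect with the slope; the within estimator recovers the slope because the demeaned regression no longer contains the time-invariant term.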
  22. By: Ricardo Crisostomo; Diana Mykhalyuk
    Abstract: This paper investigates whether large language models (LLMs) can generate reliable stock market predictions. We evaluate four state-of-the-art models - ChatGPT, Gemini, DeepSeek, and Perplexity - across three prompting strategies: a naive query, a structured approach, and chain-of-thought reasoning. Our results show that LLM-generated recommendations are hindered by recurring reasoning failures, including financial misconceptions, carryover errors, and reliance on outdated or hallucinated information. When appropriately guided and supervised, LLMs demonstrate the capacity to outperform the market, but realizing LLMs' full potential requires substantial human oversight. We also find that grounding stock recommendations in official regulatory filings increases their forecasting accuracy. Overall, our findings underscore the need for robust safeguards and validation when deploying LLMs in financial markets.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.19944
  23. By: Se Yan; Han Zhong; Zemin (Zachary) Zhong; Wenyu Zhou
    Abstract: This paper provides some of the first large-scale descriptive evidence on how consumers adopt and use platform-embedded shopping AI in e-commerce. Using data on 31 million users of Ctrip, China's largest online travel platform, we study "Wendao," an LLM-based AI assistant integrated into the platform. We document three empirical regularities. First, adoption is highest among older consumers, female users, and highly engaged existing users, reversing the younger, male-dominated profile commonly documented for general-purpose AI tools. Second, AI chat appears in the same broad phase of the purchase journey as traditional search and well before order placement; among journeys containing both chat and search, the most common pattern is interleaving, with users moving back and forth between the two modalities. Third, consumers disproportionately use the assistant for exploratory, hard-to-keyword tasks: attraction queries account for 42% of observed chat requests, and chat intent varies systematically with both the timing of chat relative to search and the category of products later purchased within the same journey. These findings suggest that embedded shopping AI functions less as a substitute for conventional search than as a complementary interface for exploratory product discovery in e-commerce.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.24947

This nep-ain issue is ©2026 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the Griffith Business School of Griffith University in Australia.