on Artificial Intelligence |
By: | Engin Iyidogan; Ali I. Ozkes |
Abstract: | We model a competitive market where AI agents buy answers from upstream generative models and resell them to users who differ in how much they value accuracy and in how much they fear hallucinations. Agents can privately exert costly verification effort to lower hallucination risk. Since interactions halt in the event of a hallucination, the threat of losing future rents disciplines effort. A unique reputational equilibrium exists under nontrivial discounting. The equilibrium effort, and thus the price, increases with the share of users who have high accuracy concerns, implying that hallucination-sensitive sectors, such as law and medicine, endogenously sustain stronger verification effort in agentic AI markets. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.19183 |
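One way to read the reputational mechanism in the abstract above is as a standard repeated-game value recursion; the notation below is ours, not the paper's, and serves only as a hedged illustration.

```latex
% Stylized illustration (our notation, not the paper's).
% The agent earns per-period rent \pi, chooses verification effort e at cost c(e),
% and hallucinates with probability h(e), where h'(e) < 0. A hallucination ends the
% relationship, so the value of remaining in the market solves
\[
  V(e) = \pi - c(e) + \delta\,[1 - h(e)]\,V(e)
  \quad\Longrightarrow\quad
  V(e) = \frac{\pi - c(e)}{1 - \delta\,[1 - h(e)]}.
\]
% The first-order condition shows how discounting disciplines effort: the marginal
% cost of verification is offset by the continuation value preserved by lowering h,
\[
  c'(e^{*}) = -\,\delta\, h'(e^{*})\, V(e^{*}),
\]
% so effort (and hence the price) rises when more users value accuracy,
% because the rent \pi, and therefore V(e), is larger.
```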
By: | Paweł Niszczota; Tomasz Grzegorczyk; Alexander Pastukhov |
Abstract: | Machines driven by large language models (LLMs) have the potential to augment humans across various tasks, a development with profound implications for business settings where effective communication, collaboration, and stakeholder trust are paramount. To explore how interacting with an LLM instead of a human might shift cooperative behavior in such settings, we used the Prisoner's Dilemma game -- a surrogate for several real-world managerial and economic scenarios. In Experiment 1 (N=100), participants engaged in a thirty-round repeated game against a human, a classic bot, and an LLM (GPT, in real-time). In Experiment 2 (N=192), participants played a one-shot game against a human or an LLM, with half of them allowed to communicate with their opponent, enabling LLMs to leverage a key advantage over older-generation machines. Cooperation rates with LLMs -- while lower by approximately 10-15 percentage points compared to interactions with human opponents -- were nonetheless high. This finding was particularly notable in Experiment 2, where the psychological cost of selfish behavior was reduced. Although allowing communication about cooperation did not close the human-machine behavioral gap, it increased the likelihood of cooperation with both humans and LLMs equally (by 88%), which is particularly surprising for LLMs given their non-human nature and the assumption that people might be less receptive to cooperating with machines than with human counterparts. Additionally, cooperation with LLMs was higher following prior interaction with humans, suggesting a spillover effect in cooperative behavior. Our findings validate the (careful) use of LLMs by businesses in settings that have a cooperative component. |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.18639 |
By: | Margarita Leib; Nils Köbis; Ivan Soraperra |
Abstract: | People increasingly rely on AI advice when making decisions. At times, such advice can promote selfish behavior. When individuals abide by selfishness-promoting AI advice, how are they perceived and punished? To study this question, we build on theories from social psychology and combine machine-behavior and behavioral economic approaches. In a pre-registered, financially incentivized experiment, evaluators could punish real decision-makers who (i) received AI, human, or no advice. The advice (ii) encouraged selfish or prosocial behavior, and decision-makers (iii) behaved selfishly or, in a control condition, behaved prosocially. Evaluators further assigned responsibility to decision-makers and their advisors. Results revealed that (i) prosocial behavior was punished very little, whereas selfish behavior was punished much more. Focusing on selfish behavior, (ii) compared to receiving no advice, selfish behavior was penalized more harshly after prosocial advice and more leniently after selfish advice. Lastly, (iii) whereas selfish decision-makers were seen as more responsible when they followed AI compared to human advice, punishment did not vary between the two advice sources. Overall, behavior and advice content shape punishment, whereas the advice source does not. |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.19487 |
By: | Jörg Papenkordt (Paderborn University); Johannes Dahlke (University of Twente); Nicolas Neef (University of Hohenheim); Sarah Zabel (University of Hohenheim) |
Abstract: | The integration of artificial intelligence technology in contemporary work environments raises questions about how human team members collaborate when being assisted by AI. We propose that the reductionist properties of AI technology could affect the logics by which teams operate. This experimental research project aims to identify possible changes in collaboration dynamics within teams when employing AI support in a creative task domain. We explore conversational changes in collaboration by analyzing problem-focused, procedural, action-oriented, and socio-emotional sentiments expressed by team members, as well as structural changes by examining the properties of the communication network resulting from team discussions. Based on the observed co-occurrence of content-related and structural changes, our research points toward emerging patterns of AI-augmented collaboration, indicating that the duration of AI collaboration influences team dynamics in different ways. |
Keywords: | Team-AI collaboration, Team dynamics, AI team member, Generative AI, Creativity |
JEL: | C92 D83 C88 O31 |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:pdn:dispap:146 |
By: | Ben Weidmann; Yixian Xu; David J. Deming |
Abstract: | We show that the ability to lead groups of humans is predicted by leadership skill with Artificially Intelligent agents. In a large pre-registered lab experiment, human leaders worked with AI agents to solve problems. Their performance on this 'AI leadership test' was strongly correlated with their causal impact on human teams, which we estimate by repeatedly randomly assigning leaders to groups of human followers and measuring team performance. Successful leaders of both humans and AI agents ask more questions and engage in more conversational turn-taking; they score higher on measures of social intelligence, fluid intelligence, and decision-making skill, but do not differ in gender, age, ethnicity or education. Our findings indicate that AI agents can be effective proxies for human participants in social experiments, which greatly simplifies the measurement of leadership and teamwork skills. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.02966 |
By: | Jörg Papenkordt (Paderborn University); Axel-Cyrille Ngonga Ngomo (Paderborn University); Kirsten Thommes (Paderborn University) |
Abstract: | Advances in AI and our limited human capabilities have made AI decision-making opaque to humans. One prerequisite for enhancing the transparency of AI recommendations is improving AI explainability, as humans need to be enabled to take responsibility for their actions even with AI support. Our study aims to tackle this issue by investigating two basic approaches to explainability: We evaluate numerical explanations, such as certainty measures, against verbal explanations, such as those provided by an LLM acting as an explanatory agent. Specifically, we examine whether verbal or numerical (or both) explanations in tasks of high uncertainty lure users into false beliefs or, on the contrary, promote appropriate reliance. Drawing on an experiment with 441 participants, we explore the dynamics of non-expert users' interactions with AI under varying explanatory conditions. Results show that explanations significantly improve reliance and decision accuracy. Numerical explanations aid in identifying uncertainties and errors, but users' reliance on the advice falls far short of the stated numerical certainty. Verbal explanations foster higher reliance while increasing the risk of over-reliance. Combining both explanation types enhances reliance but further amplifies blind trust in AI. |
Keywords: | explainable AI, artificial intelligence, human-computer interaction |
JEL: | C83 D81 C88 O33 |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:pdn:dispap:147 |
By: | Kiran Tomlinson; Sonia Jaffe; Will Wang; Scott Counts; Siddharth Suri |
Abstract: | Given the rapid adoption of generative AI and its potential to impact a wide range of tasks, understanding the effects of AI on the economy is one of society's most important questions. In this work, we take a step toward that goal by analyzing the work activities people do with AI and how successfully and broadly those activities are done, and we combine this with data on which occupations perform those activities. We analyze a dataset of 200k anonymized and privacy-scrubbed conversations between users and Microsoft Bing Copilot, a publicly available generative AI system. We find that the most common work activities people seek AI assistance for involve gathering information and writing, while the most common activities that AI itself is performing are providing information and assistance, writing, teaching, and advising. Combining these activity classifications with measurements of task success and scope of impact, we compute an AI applicability score for each occupation. We find the highest AI applicability scores for knowledge work occupation groups such as computer and mathematical, and office and administrative support, as well as occupations such as sales whose work activities involve providing and communicating information. Additionally, we characterize the types of work activities performed most successfully, how wage and education correlate with AI applicability, and how real-world usage compares to predictions of occupational AI impact. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.07935 |
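The abstract above does not spell out how the occupation-level AI applicability score is computed, so the snippet below is only a hypothetical illustration of the kind of aggregation it describes: activity-level usage, success, and scope combined with occupational activity weights. All names and numbers are made up.

```python
import pandas as pd

# Hypothetical activity-level measurements derived from AI conversations:
# share of usage, estimated success rate, and scope of impact.
activities = pd.DataFrame({
    "activity": ["gather information", "write text", "advise clients"],
    "usage_share": [0.40, 0.35, 0.25],     # share of AI conversations
    "success_rate": [0.80, 0.75, 0.60],    # fraction judged successful
    "scope": [0.70, 0.65, 0.50],           # breadth of the task covered
})

# Hypothetical occupation-to-activity importance weights.
occupations = pd.DataFrame({
    "occupation": ["technical writer", "sales representative"],
    "gather information": [0.30, 0.40],
    "write text": [0.60, 0.20],
    "advise clients": [0.10, 0.40],
})

# One plausible aggregation: weight each activity's (usage x success x scope)
# by how important that activity is for the occupation.
activity_score = (activities
                  .assign(score=lambda d: d.usage_share * d.success_rate * d.scope)
                  .set_index("activity")["score"])

occupations["ai_applicability"] = (
    occupations[activity_score.index].mul(activity_score).sum(axis=1)
)
print(occupations[["occupation", "ai_applicability"]])
```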
By: | Tatsuru Kikuchi |
Abstract: | This paper investigates how executive demographics, particularly age and gender, influence artificial intelligence (AI) investment decisions and subsequent firm productivity using comprehensive data from over 500 Japanese enterprises spanning from 2018 to 2023. Our central research question addresses the role of executive characteristics in technology adoption, finding that CEO age and technical background significantly predict AI investment propensity. Employing these demographic characteristics as instrumental variables to address endogeneity concerns, we identify a statistically significant 2.4% increase in total factor productivity attributable to AI investment adoption. Our novel mechanism decomposition framework reveals that productivity gains operate through three distinct channels: cost reduction (40% of total effect), revenue enhancement (35%), and innovation acceleration (25%). The results demonstrate that younger executives (below 50 years) are 23% more likely to adopt AI technologies, while firm size significantly moderates this relationship. Aggregate projections suggest potential GDP impacts of 1.15 trillion JPY from widespread AI adoption across the Japanese economy. These findings provide crucial empirical guidance for understanding the human factors driving digital transformation and inform both corporate governance and public policy regarding AI investment incentives. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.03757 |
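The identification strategy above uses executive demographics as instruments for AI adoption in a productivity regression. The following is a generic two-stage least squares sketch on simulated data, not the authors' specification; variable names and coefficients are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated firm data (purely illustrative, not the paper's dataset).
ceo_age = rng.normal(55, 8, n)                     # instrument 1
tech_background = rng.binomial(1, 0.3, n)          # instrument 2
firm_size = rng.normal(0, 1, n)                    # exogenous control
unobserved = rng.normal(0, 1, n)                   # confounder

# AI adoption depends on the instruments and on the confounder (endogeneity).
ai_adoption = 0.23 * (ceo_age < 50) + 0.2 * tech_background \
              + 0.1 * firm_size + 0.3 * unobserved + rng.normal(0, 0.5, n)
log_tfp = 0.024 * ai_adoption + 0.05 * firm_size + 0.4 * unobserved \
          + rng.normal(0, 0.1, n)

# Stage 1: project the endogenous regressor on instruments plus controls.
Z = np.column_stack([np.ones(n), ceo_age, tech_background, firm_size])
ai_hat = Z @ np.linalg.lstsq(Z, ai_adoption, rcond=None)[0]

# Stage 2: regress log TFP on the fitted adoption and the controls.
X2 = np.column_stack([np.ones(n), ai_hat, firm_size])
beta = np.linalg.lstsq(X2, log_tfp, rcond=None)[0]
print(f"2SLS estimate of the AI-adoption effect on log TFP: {beta[1]:.3f}")
```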
By: | Kuusi, Tero |
Abstract: | This paper analyzes the macroeconomic impact of Generative Artificial Intelligence (GenAI) on the Finnish economy, integrating recent literature and empirical evidence into a quantitative multi-sector general equilibrium model. The results indicate that, over a ten-year horizon, GenAI adoption increases annual economic growth by less than 0.5 percentage points in the baseline scenarios, with the potential for larger impacts—exceeding 1 percentage point—under scenarios involving greater automation and shifts in labor and ICT factor shares. The model’s input-output structure reveals significant multiplier effects, as productivity gains in one sector propagate to others. The service sector emerges as a pivotal driver of adjustment, with its adaptability helping to offset slower growth in sectors less amenable to automation. The study acknowledges uncertainties regarding the broader impacts of artificial general intelligence, emphasizing the limitations of current forecasts, adaptation frictions, and the importance of anticipatory behavior in financial markets. Overall, the findings underscore the transformative potential of GenAI, contingent upon proactive policy measures to foster economic growth. |
Keywords: | Artificial Intelligence, Productivity, Technology adoption |
JEL: | C6 E1 O3 O4 O5 |
Date: | 2025–08–14 |
URL: | https://d.repec.org/n?u=RePEc:rif:wpaper:131 |
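The multiplier effects mentioned in the abstract, where sectoral productivity gains propagate through input-output linkages, can be illustrated with a textbook Leontief calculation. The coefficient matrix and the sectoral gains below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical 3-sector input-output coefficient matrix A:
# A[i, j] = inputs from sector i used per unit of sector j's output.
A = np.array([
    [0.10, 0.20, 0.15],   # services
    [0.05, 0.15, 0.10],   # manufacturing
    [0.02, 0.05, 0.05],   # other
])
final_demand_share = np.array([0.55, 0.30, 0.15])  # final demand / GDP

# Leontief inverse: total (direct plus indirect) requirements.
L = np.linalg.inv(np.eye(3) - A)

# Domar weights: sectoral gross output relative to GDP.
domar = L @ final_demand_share

# Hypothetical GenAI productivity gains by sector (log points).
productivity_gain = np.array([0.06, 0.02, 0.01])

# Hulten-style aggregate effect: Domar-weighted sum of sectoral gains,
# which exceeds the final-demand-weighted sum because of input linkages.
aggregate_gain = domar @ productivity_gain
print(f"Aggregate productivity gain: {aggregate_gain:.3%}")
```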
By: | Christopher Clayton; Antonio Coppola |
Abstract: | We examine whether and how granular, real-time predictive models should be integrated into central banks' macroprudential toolkit. First, we develop a tractable framework that formalizes the tradeoff regulators face when choosing between implementing models that forecast systemic risk accurately but have uncertain causal content and models with the opposite profile. We derive the regulator's optimal policy in a setting in which private portfolios react endogenously to the regulator's model choice and policy rule. We show that even purely predictive models can generate welfare gains for a regulator, and that predictive precision and knowledge of causal impacts of policy interventions are complementary. Second, we introduce a deep learning architecture tailored to financial holdings data--a graph transformer--and we discuss why it is optimally suited to this problem. The model learns vector embedding representations for both assets and investors by explicitly modeling the relational structure of holdings, and it attains state-of-the-art predictive accuracy in out-of-sample forecasting tasks including trade prediction. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.18747 |
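The abstract above describes a graph transformer that learns embeddings for assets and investors from the relational structure of holdings, but gives no architectural details. The toy sketch below shows only the core idea of attention restricted to observed investor-asset links; dimensions and data are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n_investors, n_assets, d = 4, 6, 8

# Toy embeddings for investors and assets (these would be learned in practice).
investor_emb = rng.normal(size=(n_investors, d))
asset_emb = rng.normal(size=(n_assets, d))

# Holdings matrix: holdings[i, j] = 1 if investor i holds asset j.
holdings = (rng.random((n_investors, n_assets)) < 0.4).astype(float)

def masked_attention(queries, keys, values, mask):
    """Scaled dot-product attention in which each query may only attend
    to positions with mask == 1 (here: the assets an investor holds)."""
    scores = queries @ keys.T / np.sqrt(queries.shape[1])
    scores = np.where(mask > 0, scores, -1e9)        # block non-held assets
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ values

# One "graph attention" step: each investor embedding is updated from the
# embeddings of the assets it holds. A full transformer layer would add
# learned projections, multiple heads, residuals, and an asset-side update.
updated_investors = masked_attention(investor_emb, asset_emb, asset_emb, holdings)
print(updated_investors.shape)   # (4, 8)
```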
By: | Adam Darmanin; Vince Vella |
Abstract: | Algorithmic trading requires short-term decisions aligned with long-term financial goals. While reinforcement learning (RL) has been explored for such tactical decisions, its adoption remains limited by myopic behavior and opaque policy rationale. In contrast, large language models (LLMs) have recently demonstrated strategic reasoning and multi-modal financial signal interpretation when guided by well-designed prompts. We propose a hybrid system where LLMs generate high-level trading strategies to guide RL agents in their actions. We evaluate (i) the rationale of LLM-generated strategies via expert review, and (ii) the Sharpe Ratio (SR) and Maximum Drawdown (MDD) of LLM-guided agents versus unguided baselines. Results show improved return and risk metrics over standard RL. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.02366 |
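The abstract does not describe how the LLM-generated strategies enter the RL agent's decision problem, so the sketch below is one hypothetical wiring: the LLM call is stubbed out and its strategy is folded in as an action bias over a simple bandit-style learner.

```python
import random

def llm_strategy(market_summary: str) -> dict:
    """Stub for an LLM call that returns a high-level trading strategy.
    In a real system this would prompt an LLM with market context."""
    # Hypothetical output: lean toward holding, discourage selling.
    return {"action_bias": {"buy": 0.1, "hold": 0.3, "sell": -0.2}}

ACTIONS = ["buy", "hold", "sell"]
q_table = {a: 0.0 for a in ACTIONS}          # trivial single-state Q-table
alpha, epsilon = 0.1, 0.2
strategy = llm_strategy("recent prices trending sideways, low volume")

def choose_action() -> str:
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    # Combine learned action values with the LLM's strategic bias.
    return max(ACTIONS, key=lambda a: q_table[a] + strategy["action_bias"][a])

def step(action: str) -> float:
    # Placeholder environment: random P&L with a small trading cost.
    return random.gauss(0, 1) - (0.05 if action != "hold" else 0.0)

for _ in range(1000):
    action = choose_action()
    reward = step(action)
    q_table[action] += alpha * (reward - q_table[action])

print(q_table)
```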
By: | Yu Shi; Zongliang Fu; Shuo Chen; Bohan Zhao; Wei Xu; Changshui Zhang; Jian Li |
Abstract: | The success of the large-scale pre-training paradigm, exemplified by Large Language Models (LLMs), has inspired the development of Time Series Foundation Models (TSFMs). However, their application to financial candlestick (K-line) data remains limited, often underperforming non-pre-trained architectures. Moreover, existing TSFMs often overlook crucial downstream tasks such as volatility prediction and synthetic data generation. To address these limitations, we propose Kronos, a unified, scalable pre-training framework tailored to financial K-line modeling. Kronos introduces a specialized tokenizer that discretizes continuous market information into token sequences, preserving both price dynamics and trade activity patterns. We pre-train Kronos using an autoregressive objective on a massive, multi-market corpus of over 12 billion K-line records from 45 global exchanges, enabling it to learn nuanced temporal and cross-asset representations. Kronos excels in a zero-shot setting across a diverse set of financial tasks. On benchmark datasets, Kronos boosts price series forecasting RankIC by 93% over the leading TSFM and 87% over the best non-pre-trained baseline. It also achieves a 9% lower MAE in volatility forecasting and a 22% improvement in generative fidelity for synthetic K-line sequences. These results establish Kronos as a robust, versatile foundation model for end-to-end financial time series analysis. Our pre-trained model is publicly available at https://github.com/shiyu-coder/Kronos. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.02739 |
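Kronos's tokenizer is described above only as discretizing continuous market information into token sequences. The snippet below shows a generic quantile-binning approach to turning OHLCV-style bars into tokens, purely as an illustration of the idea, not the model's actual scheme.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Synthetic K-line (OHLCV) features; a real corpus would span many markets.
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))
bars = pd.DataFrame({
    "ret": np.r_[0.0, np.diff(np.log(close))],           # log return
    "range": rng.uniform(0.001, 0.03, 500),              # (high - low) / close
    "volume": rng.lognormal(10, 1, 500),
})

def quantile_tokens(series: pd.Series, n_bins: int) -> np.ndarray:
    """Map a continuous series to integer tokens via quantile bins."""
    return np.asarray(pd.qcut(series, q=n_bins, labels=False, duplicates="drop"))

# Discretize each channel, then combine into a single token id per bar,
# so an autoregressive model can be trained on the resulting sequence.
ret_tok = quantile_tokens(bars["ret"], 16)
rng_tok = quantile_tokens(bars["range"], 8)
vol_tok = quantile_tokens(bars["volume"], 8)
tokens = ret_tok * 64 + rng_tok * 8 + vol_tok   # vocabulary of at most 16*8*8 = 1024
print(tokens[:20])
```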
By: | Craig S Wright |
Abstract: | This paper presents a praxeological analysis of artificial intelligence and algorithmic governance, challenging assumptions about the capacity of machine systems to sustain economic and epistemic order. Drawing on Misesian a priori reasoning and Austrian theories of entrepreneurship, we argue that AI systems are incapable of performing the core functions of economic coordination: interpreting ends, discovering means, and communicating subjective value through prices. Where neoclassical and behavioural models treat decisions as optimisation under constraint, we frame them as purposive actions under uncertainty. We critique dominant ethical AI frameworks such as Fairness, Accountability, and Transparency (FAT) as extensions of constructivist rationalism, which conflict with a liberal order grounded in voluntary action and property rights. Attempts to encode moral reasoning in algorithms reflect a misunderstanding of ethics and economics. However complex, AI systems cannot originate norms, interpret institutions, or bear responsibility. They remain opaque, misaligned, and inert. Using the concept of epistemic scarcity, we explore how information abundance degrades truth discernment, enabling both entrepreneurial insight and soft totalitarianism. Our analysis ends with a civilisational claim: the debate over AI concerns the future of human autonomy, institutional evolution, and reasoned choice. The Austrian tradition, focused on action, subjectivity, and spontaneous order, offers the only coherent alternative to rising computational social control. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.01483 |