on Artificial Intelligence |
By: | Simon Martin; Hans-Theo Normann; Paul Püplichhuisen; Tobias Werner |
Abstract: | We study the propensity of independent algorithms to collude in repeated Cournot duopoly games. Specifically, we investigate the predictive power of different oligopoly and bargaining solutions regarding the effect of asymmetry between firms. We find that both consumers and firms can benefit from asymmetry. Algorithms produce more competitive outcomes when firms are symmetric, but less when they are very asymmetric. Although the static Nash equilibrium underestimates the effect on total quantity and overestimates the effect on profits, it delivers surprisingly accurate predictions in terms of total welfare. The best description of our results is provided by the equal relative gains solution. In particular, we find algorithms to agree on profits that are on or close to the Pareto frontier for all degrees of asymmetry. Our results suggest that the common belief that symmetric industries are more prone to collusion may no longer hold when algorithms increasingly drive managerial decisions. |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2501.07178 |
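As a point of reference for the static benchmark discussed in the abstract above, the following minimal Python sketch computes the Cournot-Nash outcome for a duopoly with asymmetric marginal costs; linear demand and the specific parameter values are illustrative assumptions, not necessarily the paper's parameterization.

```python
# Illustrative sketch (not the paper's exact setup): static Cournot-Nash
# benchmark for a linear-demand duopoly with asymmetric marginal costs.
# Inverse demand P(Q) = a - Q, marginal costs c1 and c2.

def cournot_nash(a, c1, c2):
    """Closed-form Cournot-Nash quantities and profits under linear demand."""
    q1 = (a - 2 * c1 + c2) / 3.0
    q2 = (a - 2 * c2 + c1) / 3.0
    price = a - (q1 + q2)
    return q1, q2, (price - c1) * q1, (price - c2) * q2

if __name__ == "__main__":
    a, mean_cost = 10.0, 4.0
    # Sweep the degree of cost asymmetry while holding the average cost fixed.
    for delta in (0.0, 0.5, 1.0, 1.5):
        c1, c2 = mean_cost - delta, mean_cost + delta
        q1, q2, pi1, pi2 = cournot_nash(a, c1, c2)
        print(f"delta={delta:.1f}  Q={q1 + q2:.2f}  total profit={pi1 + pi2:.2f}")
```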
By: | Connor Douglas; Foster Provost; Arun Sundararajan |
Abstract: | Algorithmic agents are used in a variety of competitive decision settings, notably in making pricing decisions in contexts that range from online retail to residential home rentals. Business managers, algorithm designers, legal scholars, and regulators alike are all starting to consider the ramifications of "algorithmic collusion." We study the emergent behavior of multi-armed bandit machine learning algorithms used in situations where agents are competing, but they have no information about the strategic interaction they are engaged in. Using a general-form repeated Prisoner's Dilemma game, agents engage in online learning with no prior model of game structure and no knowledge of competitors' states or actions (e.g., no observation of competing prices). We show that these context-free bandits, with no knowledge of opponents' choices or outcomes, still will consistently learn collusive behavior - what we call "naive collusion." We primarily study this system through an analytical model and examine perturbations to the model through simulations. Our findings have several notable implications for regulators. First, calls to limit algorithms from conditioning on competitors' prices are insufficient to prevent algorithmic collusion. This is a direct result of collusion arising even in the naive setting. Second, symmetry in algorithms can increase collusion potential. This highlights a new, simple mechanism for "hub-and-spoke" algorithmic collusion. A central distributor need not imbue its algorithm with supra-competitive tendencies for apparent collusion to arise; it can simply arise by using certain (common) machine learning algorithms. Finally, we highlight that collusive outcomes depend starkly on the specific algorithm being used, and we highlight market and algorithmic conditions under which it will be unknown a priori whether collusion occurs. |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2411.16574 |
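To make the "naive" setting concrete, here is a small illustrative simulation of two context-free epsilon-greedy bandits repeatedly playing a Prisoner's Dilemma; the learning rule, payoff matrix, and parameters are assumptions for exposition, not the authors' exact model.

```python
# Minimal sketch: two context-free epsilon-greedy bandits repeatedly play a
# Prisoner's Dilemma. Each agent observes only its own realized payoff, with
# no view of the opponent's actions or states, mirroring the naive setting.
import random

PAYOFFS = {  # (my_action, their_action) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3.0, ("C", "D"): 0.0,
    ("D", "C"): 5.0, ("D", "D"): 1.0,
}

class Bandit:
    def __init__(self, eps=0.05, step=0.01):
        self.q = {"C": 0.0, "D": 0.0}   # action-value estimates
        self.eps, self.step = eps, step

    def act(self):
        if random.random() < self.eps:
            return random.choice(["C", "D"])
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        self.q[action] += self.step * (reward - self.q[action])

a1, a2 = Bandit(), Bandit()
coop, T = 0, 200_000
for _ in range(T):
    x, y = a1.act(), a2.act()
    a1.update(x, PAYOFFS[(x, y)])
    a2.update(y, PAYOFFS[(y, x)])
    coop += (x == "C") + (y == "C")
print(f"cooperation rate: {coop / (2 * T):.3f}")
```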
By: | Jason D. Hartline; Chang Wang; Chenhao Zhang |
Abstract: | We study the regulation of algorithmic (non-)collusion amongst sellers in dynamic imperfect price competition by auditing their data as introduced by Hartline et al. [2024]. We develop an auditing method that tests whether a seller's pessimistic calibrated regret is low. The pessimistic calibrated regret is the highest calibrated regret of outcomes compatible with the observed data. This method relaxes the previous requirement that a pricing algorithm must use fully-supported price distributions to be auditable. This method is at least as permissive as any auditing method that has a high probability of failing algorithmic outcomes with non-vanishing calibrated regret. Additionally, we strengthen the justification for using vanishing calibrated regret, versus vanishing best-in-hindsight regret, as the non-collusion definition, by showing that even without any side information, the pricing algorithms that only satisfy weaker vanishing best-in-hindsight regret allow an opponent to manipulate them into posting supra-competitive prices. This manipulation cannot be excluded with a non-collusion definition of vanishing best-in-hindsight regret. We motivate and interpret the approach of auditing algorithms from their data as suggesting a per se rule. However, we demonstrate that it is possible for algorithms to pass the audit by pretending to have higher costs than they actually do. For such scenarios, the rule of reason can be applied to bound the range of costs to those that are reasonable for the domain. |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2501.09740 |
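For readers unfamiliar with the two regret notions contrasted in the abstract, one standard formalization is sketched below; the paper's exact definitions, in particular the pessimistic, data-compatible version, may differ in detail.

```latex
% One standard formalization (the paper's definitions may differ in detail):
% a seller posts prices p_1,...,p_T, demand states are v_1,...,v_T, and the
% per-round payoff is u(p, v).
\[
\mathrm{Reg}^{\mathrm{BIH}}_T
  \;=\; \max_{p}\; \frac{1}{T}\sum_{t=1}^{T}\bigl[u(p, v_t) - u(p_t, v_t)\bigr],
\qquad
\mathrm{Reg}^{\mathrm{Cal}}_T
  \;=\; \frac{1}{T}\sum_{p}\,\max_{p'}\sum_{t\,:\,p_t = p}\bigl[u(p', v_t) - u(p_t, v_t)\bigr].
\]
```

Calibrated regret requires that no deviation is profitable conditional on each posted price, which is stronger than beating the single best fixed price in hindsight; the manipulation result in the abstract illustrates why the weaker notion is inadequate as a non-collusion test.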
By: | Augusto Gonzalez-Bonorino (Pomona College Economics Department); Monica Capra (Claremont Graduate University Economics Department; University of Arizona Center for the Philosophy of Freedom); Emilio Pantoja (Pitzer College Economics and Computer Science Department) |
Abstract: | Despite its importance, studying economic behavior across diverse, non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations presents significant challenges. We address this issue by introducing a novel methodology that uses Large Language Models (LLMs) to create synthetic cultural agents (SCAs) representing these populations. We subject these SCAs to classic behavioral experiments, including the dictator and ultimatum games. Our results demonstrate substantial cross-cultural variability in experimental behavior. Notably, for populations with available data, SCAs' behaviors qualitatively resemble those of real human subjects. For unstudied populations, our method can generate novel, testable hypotheses about economic behavior. By integrating AI into experimental economics, this approach offers an effective and ethical method to pilot experiments and refine protocols for hard-to-reach populations. Our study provides a new tool for cross-cultural economic studies and demonstrates how LLMs can help experimental behavioral research. |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2501.06834 |
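A hedged sketch of the general idea follows: instantiate a synthetic cultural agent with an LLM and elicit a dictator-game allocation. The prompt wording, model name, and parsing are illustrative assumptions, not the authors' protocol.

```python
# Hedged sketch: a synthetic cultural agent answers a dictator-game question.
# Prompt wording and model choice are illustrative, not the authors' design.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def dictator_offer(culture_description: str, endowment: int = 100) -> str:
    system = (
        "You are a participant in an economics experiment. "
        f"Background: {culture_description} "
        "Answer with a single integer."
    )
    user = (
        f"You have received {endowment} tokens. You may give any whole number "
        f"of tokens (0 to {endowment}) to an anonymous stranger and keep the rest. "
        "How many tokens do you give?"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        temperature=1.0,
    )
    return resp.choices[0].message.content.strip()

print(dictator_offer("You grew up in a small fishing village and value reciprocity."))
```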
By: | Paolo Carioli; Dirk Czarnitzki; Gastón P Fernández Barros |
Abstract: | Artificial Intelligence (AI) is considered to be the next general-purpose technology, with the potential of performing tasks commonly requiring human capabilities. While it is commonly feared that AI replaces labor and disrupts jobs, we instead investigate the potential of AI for overcoming increasingly alarming skills shortages in firms. We exploit unique German survey data from the Mannheim Innovation Panel on both the adoption of AI and the extent to which firms experience scarcity of skills. We measure skills shortage by the number of job vacancies that could not be filled as planned by firms, distinguishing among different types of skills. To account for the potential endogeneity of skills shortage, we also implement instrumental variable estimators. Overall, we find a positive and significant effect of skills shortage on AI adoption, the breadth of AI methods, and the breadth of areas of application of AI. In addition, we find evidence that scarcity of labor with academic education relates to firms exploring and adopting AI. |
Keywords: | Artificial Intelligence, CIS data, skills shortage |
Date: | 2024–02–08 |
URL: | https://d.repec.org/n?u=RePEc:ete:ceswps:735893 |
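The estimation strategy can be illustrated with a short two-stage least squares sketch; the variable names, instrument, and dataset below are placeholders, not the paper's specification.

```python
# Hedged illustration of the estimation idea (variables, instrument, and data
# are placeholders): instrument the potentially endogenous skills-shortage
# measure and estimate its effect on AI adoption by 2SLS.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("firm_panel.csv")  # hypothetical firm-level dataset
df["const"] = 1.0

model = IV2SLS(
    dependent=df["ai_adoption"],                   # e.g. indicator for AI use
    exog=df[["const", "size", "rnd_intensity"]],   # controls incl. a constant
    endog=df["unfilled_vacancies"],                # skills-shortage measure
    instruments=df["regional_tightness"],          # excluded instrument (placeholder)
)
print(model.fit(cov_type="robust"))
```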
By: | Casey O. Barkan |
Abstract: | It is widely assumed that increases in economic productivity necessarily lead to economic growth. In this paper, it is shown that this is not always the case. An idealized model of an economy is presented in which a new technology allows capital to be utilized autonomously without labor input. This is motivated by the possibility that advances in artificial intelligence (AI) will give rise to AI agents that act autonomously in the economy. The economic model involves a single profit-maximizing firm which is a monopolist in the product market and a monopsonist in the labor market. The new automation technology causes the firm to replace labor with capital in such a way that its profit increases while total production decreases. The model is not intended to capture the structure of a real economy, but rather to illustrate how basic economic mechanisms can give rise to counterintuitive and undesirable outcomes. |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2411.15718 |
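A numerical sketch of how one might explore this class of models is given below; the functional forms and parameter values are illustrative assumptions rather than the paper's, and no particular outcome is asserted.

```python
# Numerical sketch: a monopolist in the product market and monopsonist in the
# labor market chooses labor L and autonomous capital K, with and without the
# automation technology. Functional forms and parameters are illustrative.
import numpy as np
from scipy.optimize import minimize

P0, P1 = 10.0, 1.0      # inverse demand P(Q) = P0 - P1*Q
W0, W1 = 1.0, 0.5       # labor supply wage w(L) = W0 + W1*L (monopsony)
R, ALPHA = 2.0, 0.7     # capital rental rate, labor output elasticity

def profit(x, A):
    L, K = np.maximum(x, 0.0)
    Q = L ** ALPHA + A * K          # A = productivity of autonomous capital
    revenue = (P0 - P1 * Q) * Q
    cost = (W0 + W1 * L) * L + R * K
    return revenue - cost

def solve(A):
    # Try a few starting points and keep the best local optimum.
    best = max(
        (minimize(lambda x: -profit(x, A), x0, method="Nelder-Mead")
         for x0 in [(1.0, 0.0), (0.1, 2.0), (2.0, 1.0)]),
        key=lambda res: -res.fun,
    )
    L, K = np.maximum(best.x, 0.0)
    return L, K, L ** ALPHA + A * K, -best.fun

for A in (0.0, 1.5):  # without vs. with the automation technology
    L, K, Q, pi = solve(A)
    print(f"A={A}: L={L:.2f} K={K:.2f} Q={Q:.2f} profit={pi:.2f}")
```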
By: | Payal Malik (Indian Council for Research on International Economic Relations (ICRIER)); Nikita Jain; Shiva Kanwar; Bhargavee Das; Saloni Dhadwal |
Abstract: | Artificial Intelligence (AI) technologies are becoming integral to businesses and public markets alike, enabling innovation and efficiency and creating avenues for economic growth. The emphasis in public discourse has been on the technological advances enabled by AI and the risks and benefits associated with them. It is equally important to understand the market implications of the behavior of firms active in AI. This report explores the evolving market dynamics in India and the critical challenges faced by policymakers and regulators in creating a competitive and innovative AI ecosystem. The report also examines the AI technology stack, highlighting its distinct layers and their implications for industrial organization and market competition. Key themes include the role of major cloud providers in shaping the AI ecosystem, the complexities of open-source models, the expanding network of partnerships between global technology companies, AI startups, and domestic IT incumbents, and the creation of new dependencies. Drawing on global best practices, the report emphasizes the need for a nuanced mix of competition and industrial policies, including a Digital Public Infrastructure paradigm, to foster a competitive, inclusive, and innovative AI ecosystem in India. It also highlights India's push for technological sovereignty through initiatives like the IndiaAI Mission and investments in indigenous AI models and supercomputing capabilities. The recommendations proposed in the report include promoting interoperability, enhancing access to computing resources, strengthening data-governance frameworks while facilitating access to high-quality open datasets, and leveraging public-private partnerships to support emerging AI startups. |
Keywords: | Artificial Intelligence, Competition Policy, Generative AI, Digital Public Infrastructure, Data Governance, AI Regulation, Prosus, icrier |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:bdc:report:25-r-01 |
By: | Kuan-Ming Liu; Ming-Chih Lo |
Abstract: | Recent advances in deep learning and large language models (LLMs) have facilitated the deployment of the mixture-of-experts (MoE) mechanism in the stock investment domain. While these models have demonstrated promising trading performance, they are often unimodal, neglecting the wealth of information available in other modalities, such as textual data. Moreover, the traditional neural network-based router selection mechanism fails to consider contextual and real-world nuances, resulting in suboptimal expert selection. To address these limitations, we propose LLMoE, a novel framework that employs LLMs as the router within the MoE architecture. Specifically, we replace the conventional neural network-based router with LLMs, leveraging their extensive world knowledge and reasoning capabilities to select experts based on historical price data and stock news. This approach provides a more effective and interpretable selection mechanism. Our experiments on multimodal real-world stock datasets demonstrate that LLMoE outperforms state-of-the-art MoE models and other deep neural network approaches. Additionally, the flexible architecture of LLMoE allows for easy adaptation to various downstream tasks. |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2501.09636 |
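The routing idea can be sketched as follows; the prompt, the expert set, and the model name are illustrative assumptions, not the LLMoE implementation.

```python
# Hedged sketch of LLM-as-router: an LLM reads recent prices and a headline and
# picks which expert model should produce the trading signal. Experts here are
# trivial stand-ins for trained models.
from openai import OpenAI

client = OpenAI()
EXPERTS = {
    "momentum": lambda prices: 1 if prices[-1] > prices[0] else -1,
    "reversal": lambda prices: -1 if prices[-1] > prices[0] else 1,
}

def route_and_predict(prices, headline):
    prompt = (
        f"Recent closing prices: {prices}. News headline: '{headline}'. "
        f"Which expert should handle this case: {list(EXPERTS)}? "
        "Reply with exactly one expert name."
    )
    choice = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content.strip().lower()
    expert = EXPERTS.get(choice, EXPERTS["momentum"])  # fall back if unparseable
    return choice, expert(prices)

print(route_and_predict([101.2, 102.8, 104.1], "Chipmaker beats earnings expectations"))
```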
By: | Jerick Shi; Burton Hollifield |
Abstract: | Predicting the movement of the stock market and other assets has been valuable over the past few decades. Knowing how the value of a certain sector market may move in the future provides much information for investors, as they use that information to develop strategies to maximize profit or minimize risk. However, market data are quite noisy, and it is challenging to choose the right data or the right model to create such predictions. With the rise of large language models, there are ways to analyze certain data much more efficiently than before. Our goal is to determine whether the GPT model provides more useful information compared to other traditional transformer models, such as the BERT model. We shall use data from the Federal Reserve Beige Book, which provides summaries of economic conditions in different districts in the US. Using such data, we then employ the LLMs to make predictions about the correlations. Using these correlations, we then compare the results with well-known strategies and determine whether knowing the economic conditions improves investment decisions. We conclude that the Beige Book does contain information regarding correlations amongst different assets, yet the GPT model suffers from too much look-ahead bias, and traditional models still triumph. |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2411.16569 |
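A hedged sketch of the general approach: ask an LLM to read a Beige Book summary and judge the sign of a forward correlation between two assets. The prompt, asset pair, and input file are illustrative assumptions, not the paper's design, and the look-ahead-bias concern raised in the abstract applies to any pretrained model used this way.

```python
# Hedged sketch: elicit an LLM's view of the forward correlation between two
# assets from a Beige Book district summary. Prompt and parsing are illustrative.
from openai import OpenAI

client = OpenAI()

def predicted_correlation_sign(beige_book_text: str, asset_a: str, asset_b: str) -> str:
    prompt = (
        "Below is a Federal Reserve Beige Book district summary.\n\n"
        f"{beige_book_text}\n\n"
        f"Given these conditions, will returns of {asset_a} and {asset_b} be "
        "positively or negatively correlated over the next quarter? "
        "Answer 'positive' or 'negative'."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; the paper compares GPT with BERT
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return resp.choices[0].message.content.strip().lower()

print(predicted_correlation_sign(open("beige_book_district_summary.txt").read(),
                                 "US equities", "10-year Treasuries"))
```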
By: | Robert Novy-Marx; Mihail Z. Velikov |
Abstract: | This paper describes a process for automatically generating academic finance papers using large language models (LLMs). It demonstrates the process’ efficacy by producing hundreds of complete papers on stock return predictability, a topic particularly well-suited for our illustration. We first mine over 30,000 potential stock return predictor signals from accounting data, and apply the Novy-Marx and Velikov (2024) “Assaying Anomalies” protocol to generate standardized “template reports” for 96 signals that pass the protocol’s rigorous criteria. Each report details a signal’s performance predicting stock returns using a wide array of tests and benchmarks it to more than 200 other known anomalies. Finally, we use state-of-the-art LLMs to generate three distinct complete versions of academic papers for each signal. The different versions include creative names for the signals, contain custom introductions providing different theoretical justifications for the observed predictability patterns, and incorporate citations to existing (and, on occasion, imagined) literature supporting their respective claims. This experiment illustrates AI’s potential for enhancing financial research efficiency, but also serves as a cautionary tale, illustrating how it can be abused to industrialize HARKing (Hypothesizing After Results are Known). |
JEL: | A12 C12 C18 C45 G11 G12 |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33363 |
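The signal-mining step can be sketched as below; the data layout, accounting items, and significance hurdle are assumptions for exposition, not the Novy-Marx and Velikov protocol.

```python
# Illustrative sketch of the signal-mining step: form candidate signals as
# ratios of accounting items, sort stocks into deciles each month, and keep
# signals whose long-short spread has a large t-statistic.
import itertools
import numpy as np
import pandas as pd

panel = pd.read_parquet("firm_month_panel.parquet")  # hypothetical panel with
# columns: permno, month, ret_fwd (next-month return), plus accounting items.
ACCOUNTING_ITEMS = ["assets", "revenue", "cogs", "capex", "debt"]

def long_short_tstat(df, signal):
    """Mean and t-stat of the monthly decile 10-minus-1 spread in ret_fwd."""
    df = df.dropna(subset=[signal, "ret_fwd"]).copy()
    df["decile"] = df.groupby("month")[signal].transform(
        lambda s: pd.qcut(s, 10, labels=False, duplicates="drop"))
    spread = (df[df.decile == 9].groupby("month").ret_fwd.mean()
              - df[df.decile == 0].groupby("month").ret_fwd.mean())
    return spread.mean(), spread.mean() / spread.sem()

survivors = []
for num, den in itertools.permutations(ACCOUNTING_ITEMS, 2):
    name = f"{num}_over_{den}"
    panel[name] = panel[num] / panel[den].replace(0, np.nan)
    mean, t = long_short_tstat(panel, name)
    if abs(t) > 3.0:          # illustrative hurdle
        survivors.append((name, mean, t))

print(f"{len(survivors)} candidate signals pass the hurdle")
```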
By: | Bryan T. Kelly; Boris Kuznetsov; Semyon Malamud; Teng Andrea Xu |
Abstract: | The core statistical technology in artificial intelligence is the large-scale transformer network. We propose a new asset pricing model that implants a transformer in the stochastic discount factor. This structure leverages conditional pricing information via cross-asset information sharing and nonlinearity. We also develop a linear transformer that serves as a simplified surrogate from which we derive an intuitive decomposition of the transformer's asset pricing mechanisms. We find large reductions in pricing errors from our artificial intelligence pricing model (AIPM) relative to previous machine learning models and dissect the sources of these gains. |
JEL: | C45 G10 G11 G14 G17 |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33351 |
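One common way to embed a network in the stochastic discount factor is sketched below; the architecture and loss are schematic, and the paper's AIPM differs in detail.

```python
# Schematic sketch: a transformer maps the cross-section of asset characteristics
# to SDF portfolio weights, and the model is trained to minimize squared
# unconditional pricing errors. Not the paper's exact architecture or loss.
import torch
import torch.nn as nn

class TransformerSDF(nn.Module):
    def __init__(self, n_char, d_model=32, nhead=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(n_char, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, chars):                # chars: (T, N, n_char)
        h = self.encoder(self.embed(chars))  # cross-asset information sharing
        return self.head(h).squeeze(-1)      # SDF weights w: (T, N)

def pricing_loss(model, chars, returns):
    """returns: (T, N) excess returns; M_{t+1} = 1 - w_t' r_{t+1}."""
    w = model(chars)
    m = 1.0 - (w * returns).sum(dim=1, keepdim=True)   # (T, 1)
    alpha = (m * returns).mean(dim=0)                  # E[M r_i] per asset
    return (alpha ** 2).mean()

# toy usage with random data
T, N, K = 120, 50, 10
chars, rets = torch.randn(T, N, K), 0.01 * torch.randn(T, N)
model = TransformerSDF(K)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = pricing_loss(model, chars, rets)
    loss.backward()
    opt.step()
print(float(loss))
```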
By: | Aaron Wheeler; Jeffrey D. Varner |
Abstract: | This work presents a generative pre-trained transformer (GPT) designed for modeling financial time series. The GPT functions as an order generation engine within a discrete event simulator, enabling realistic replication of limit order book dynamics. Our model leverages recent advancements in large language models to produce long sequences of order messages in a streaming manner. Our results demonstrate that the model successfully reproduces key features of order flow data, even when the initial order flow prompt is no longer present within the model's context window. Moreover, evaluations reveal that the model captures several statistical properties, or 'stylized facts', characteristic of real financial markets and broader macro-scale data distributions. Collectively, this work marks a significant step toward creating high-fidelity, interactive market simulations. |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2411.16585 |
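The simulation loop can be sketched as follows; the order generator here is a trivial stand-in for the trained GPT, and the message format and matching logic are illustrative assumptions.

```python
# Minimal sketch of the discrete-event loop: a generative model emits order
# messages that a toy limit order book processes event by event. The generator
# is a placeholder for the paper's GPT; the book logic is deliberately simple.
import heapq
import random

class ToyOrderGenerator:
    """Placeholder for the trained GPT: samples (side, price, size) messages."""
    def __init__(self, mid=100.0):
        self.mid = mid

    def next_message(self):
        side = random.choice(["buy", "sell"])
        offset = abs(random.gauss(0, 0.5))
        price = round(self.mid - offset if side == "buy" else self.mid + offset, 2)
        return side, price, random.randint(1, 10)

class ToyBook:
    def __init__(self):
        self.bids, self.asks = [], []  # bids: max-heap via negated prices; asks: min-heap

    def submit(self, side, price, size):
        # Toy matching: a marketable order removes the best resting order on the
        # other side (sizes ignored); otherwise the order rests in the book.
        if side == "buy" and self.asks and price >= self.asks[0][0]:
            heapq.heappop(self.asks)
        elif side == "sell" and self.bids and price <= -self.bids[0][0]:
            heapq.heappop(self.bids)
        elif side == "buy":
            heapq.heappush(self.bids, (-price, size))
        else:
            heapq.heappush(self.asks, (price, size))

gen, book = ToyOrderGenerator(), ToyBook()
for _ in range(10_000):
    book.submit(*gen.next_message())
print(f"resting bids: {len(book.bids)}, resting asks: {len(book.asks)}")
```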
By: | Paolo Barucca; Flaviano Morone |
Abstract: | The efficient market hypothesis (EMH) famously states that prices fully reflect the information available to traders. This critically depends on the transfer of information into prices through trading strategies. Traders optimise their strategy with models of increasing complexity that identify the relationship between information and profitable trades more and more accurately. Under specific conditions, the increased availability of low-cost universal approximators, such as AI systems, should naturally push towards more advanced trading strategies, potentially making it harder and harder for inefficient traders to profit. In this paper, we build on a generalised notion of market efficiency, based on the definition of an equilibrium price process, that allows us to distinguish different levels of model complexity through investors' beliefs and trading-strategy optimisation, and we discuss the relationship between AI-powered trading and the time-evolution of market efficiency. Finally, we outline the need for and the challenge of describing out-of-equilibrium market dynamics in an adaptive multi-agent environment. |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2501.07489 |
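For reference, the textbook efficiency benchmark that the abstract generalises can be written as below; the paper's own equilibrium-price, model-class-dependent formulation differs in detail.

```latex
% Textbook benchmark only; the paper's generalised notion of efficiency,
% defined relative to an equilibrium price process and a class of models,
% differs in detail.
\[
P_t \;=\; \mathbb{E}\!\left[\,\frac{P_{t+1} + D_{t+1}}{1+r}\;\middle|\;\Omega_t\right],
\]
```

where Omega_t is the information available to traders at time t; informally, a market is efficient relative to Omega_t (or to a class of models of Omega_t) when observed prices coincide with this equilibrium price process.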