nep-evo New Economics Papers
on Evolutionary Economics
Issue of 2024–12–02
five papers chosen by
Matthew Baker, City University of New York


  1. Evolution with Opponent-Learning Awareness By Yann Bouteiller; Karthik Soma; Giovanni Beltrame
  2. The Complexity of Economic Decisions By Xavier Gabaix; Thomas Graeber
  3. Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games By Ji Ma
  4. Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina By Yuan Gao; Dokyun Lee; Gordon Burtch; Sina Fazelpour
  5. The green transition of firms: The role of evolutionary competition, adjustment costs, transition risk, and green technology progress By Davide Radi; Frank Westerhoff

  1. By: Yann Bouteiller; Karthik Soma; Giovanni Beltrame
    Abstract: The universe involves many independent co-learning agents as an ever-evolving part of our observed environment. Yet, in practice, Multi-Agent Reinforcement Learning (MARL) applications are usually constrained to small, homogeneous populations and remain computationally intensive. In this paper, we study how large heterogeneous populations of learning agents evolve in normal-form games. We show how, under assumptions commonly made in the multi-armed bandit literature, Multi-Agent Policy Gradient closely resembles the Replicator Dynamic, and we further derive a fast, parallelizable implementation of Opponent-Learning Awareness tailored for evolutionary simulations. This enables us to simulate the evolution of very large populations made of heterogeneous co-learning agents, under both naive and advanced learning strategies. We demonstrate our approach in simulations of 200,000 agents, evolving in the classic games of Hawk-Dove, Stag-Hunt, and Rock-Paper-Scissors. Each game highlights distinct ways in which Opponent-Learning Awareness affects evolution.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.17466
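[Editor's note] The replicator dynamic that the abstract connects to Multi-Agent Policy Gradient can be illustrated in a few lines. The sketch below simulates it for Hawk-Dove; the payoff values (V = 2, C = 4) and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative Hawk-Dove payoff matrix (V = 2, C = 4; values are assumptions,
# not the paper's). Rows/columns: index 0 = Hawk, index 1 = Dove.
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamic x_i' = x_i * (f_i - f_bar)."""
    f = A @ x        # fitness of each strategy against population mix x
    f_bar = x @ f    # mean population fitness
    return x + dt * x * (f - f_bar)

x = np.array([0.9, 0.1])  # start with 90% Hawks
for _ in range(5000):
    x = replicator_step(x, A)

print(x)  # converges toward the mixed equilibrium, Hawk share V/C = 0.5
```

With C > V the interior mixed equilibrium (Hawk share V/C) is attracting, so the population share settles there from almost any starting mix.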
  2. By: Xavier Gabaix; Thomas Graeber
    Abstract: We propose a theory of the complexity of economic decisions. Leveraging a macroeconomic framework of production functions, we conceptualize the mind as a cognitive economy, where a task's complexity is determined by its composition of cognitive operations. Complexity emerges as the inverse of the total factor productivity of thinking about a task. It increases in the number of importance-weighted components and decreases in the degree to which the effect of one or few components on the optimal action dominates. Higher complexity generates larger decision errors and behavioral attenuation to variation in problem parameters. The model applies both to continuous and discrete choice. We develop a theory-guided experimental methodology for measuring subjective perceptions of complexity that is simple and portable. A series of experiments test and confirm the central predictions of our model for perceptions of complexity, behavioral attenuation, and decision errors. We provide a template for applying the framework to core economic decision domains, and then develop several applications including the complexity of static consumption choice with one or several interacting goods, consumption over time, the tax system, forecasting, and discrete choice between goods.
    JEL: C91 D03 D11 D14 D90 E03
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:33109
  3. By: Ji Ma
    Abstract: As Large Language Model (LLM)-based agents increasingly undertake real-world tasks and engage with human society, how well do we understand their behaviors? This study (1) investigates how LLM agents' prosocial behaviors -- a fundamental social norm -- can be induced by different personas and benchmarked against human behaviors; and (2) introduces a behavioral approach to evaluate the performance of LLM agents in complex decision-making scenarios. We explored how different personas and experimental framings affect these AI agents' altruistic behavior in dictator games and compared their behaviors within the same LLM family, across various families, and with human behaviors. Our findings reveal substantial variations and inconsistencies among LLMs and notable differences compared to human behaviors. Merely assigning a human-like identity to LLMs does not produce human-like behaviors. Despite being trained on extensive human-generated data, these AI agents cannot accurately predict human decisions. LLM agents are not able to capture the internal processes of human decision-making, and their alignment with human behavior is highly variable and dependent on specific model architectures and prompt formulations; even worse, such dependence does not follow a clear pattern.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.21359
  4. By: Yuan Gao; Dokyun Lee; Gordon Burtch; Sina Fazelpour
    Abstract: Recent studies suggest large language models (LLMs) can exhibit human-like reasoning, aligning with human behavior in economic experiments, surveys, and political discourse. This has led many to propose that LLMs can be used as surrogates or simulations for humans in social science research. However, LLMs differ fundamentally from humans, relying on probabilistic patterns, absent the embodied experiences or survival objectives that shape human cognition. We assess the reasoning depth of LLMs using the 11-20 money request game. Nearly all advanced approaches fail to replicate human behavior distributions across many models. Causes of failure are diverse and unpredictable, relating to input language, roles, and safeguarding. These results advise caution when using LLMs to study human behavior or as surrogates or simulations.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.19599
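[Editor's note] The 11-20 money request game used in this paper (Arad & Rubinstein) works as follows: each player requests an amount from 11 to 20 and receives it, plus a bonus of 20 if the request is exactly one below the opponent's. The standard level-k analysis can be sketched directly; level-0 behavior (requesting the naive maximum, 20) is the conventional assumption.

```python
# Level-k best responses in the 11-20 money request game.
# A player requests r in 11..20, keeps r, and earns a 20-point bonus
# if r is exactly one below the opponent's request s.

def payoff(r, s):
    return r + (20 if r == s - 1 else 0)

def best_response(s):
    # Best request against an opponent known to request s.
    return max(range(11, 21), key=lambda r: payoff(r, s))

# Level-0 plays the naive maximum (20); level-k best responds to level-(k-1).
choices = [20]
for _ in range(5):
    choices.append(best_response(choices[-1]))

print(choices)  # [20, 19, 18, 17, 16, 15]
```

Each level undercuts the previous one by exactly one, which is why the distribution of requests is a clean diagnostic of reasoning depth, and why failing to reproduce the human distribution is informative.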
  5. By: Davide Radi; Frank Westerhoff
    Abstract: We propose an evolutionary competition model to investigate the green transition of firms, highlighting the role of adjustment costs, dynamically adjusted transition risk, and green technology progress in this process. Firms base their decisions to adopt either green or brown technologies on relative performance. To incorporate the costs of switching to another technology into their decision-making process, we generalize the classical exponential replicator dynamics. Our global analysis reveals that increasing transition risk, e.g., by threatening to impose stricter environmental regulations, effectively incentivizes the green transition. Economic policy recommendations derived from our model further suggest maintaining high transition risk regardless of the industry's level of greenness. Subsidizing the costs of adopting green technologies can reduce the risk of a failed green transition. While advances in green technologies can amplify the effects of green policies, they do not completely eliminate the possibility of a failed green transition. Finally, evolutionary pressures favor the green transition when green technologies are profitable.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.20379
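[Editor's note] The abstract's generalization of exponential replicator dynamics with switching costs can be illustrated with a stylized discrete-time sketch. The payoff values, the intensity-of-choice parameter, the cost term, and the way transition risk enters below are all assumptions for illustration, not the paper's specification.

```python
import math

# Stylized exponential-replicator sketch of green vs. brown technology choice.
# x is the share of green firms; switching firms pay adjustment cost kappa.
def step(x, pi_g, pi_b, beta=2.0, kappa=0.3):
    stay_g = math.exp(beta * pi_g)            # green firm staying green
    go_g   = math.exp(beta * (pi_g - kappa))  # brown firm switching to green
    stay_b = math.exp(beta * pi_b)            # brown firm staying brown
    go_b   = math.exp(beta * (pi_b - kappa))  # green firm switching to brown
    p_gg = stay_g / (stay_g + go_b)  # green firms that remain green
    p_bg = go_g / (go_g + stay_b)    # brown firms that turn green
    return x * p_gg + (1 - x) * p_bg

x = 0.1  # initial green share
for _ in range(200):
    # Crude "transition risk" proxy: brown profits fall as greenness rises.
    x = step(x, pi_g=1.0, pi_b=1.2 - 0.5 * x)
print(round(x, 3))
```

Even in this toy version, raising the penalty on brown profits (stronger transition risk) shifts the long-run green share upward, while a larger kappa slows switching in both directions, which is the qualitative mechanism the abstract describes.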

This nep-evo issue is ©2024 by Matthew Baker. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.