nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒08‒28
five papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien

  1. Fairness in algorithmic decision systems: A microfinance perspective By Koefer, Franziska; Lemken, Ivo; Pauls, Jan
  2. Predict-AI-bility of how humans balance self-interest with the interest of others By Valerio Capraro; Roberto Di Paolo; Veronica Pizziol
  3. Of Models and Tin Men -- a behavioural economics study of principal-agent problems in AI alignment using large-language models By Steve Phelps; Rebecca Ranson
  4. Multimodal Document Analytics for Banking Process Automation By Christopher Gerling; Stefan Lessmann
  5. Datalism and Data Monopolies in the Era of A.I.: A Research Agenda By Catherine E. A. Mulligan; Phil Godsiff

  1. By: Koefer, Franziska; Lemken, Ivo; Pauls, Jan
    Abstract: Fairness is a crucial concept in the context of artificial intelligence (AI) ethics and policy. It is an integral component of existing ethical principle frameworks, especially for algorithm-enabled decision systems. Yet unwanted biases in algorithms persist because practitioners fail to consider the social context in which algorithms operate. Recent initiatives have led to the development of ethical principles, guidelines and codes to guide organisations through the development, implementation and use of fair AI. However, practitioners still struggle with the various interpretations of abstract fairness principles, making it necessary to ask context-specific questions that create organisational awareness of fairness-related risks and of how AI affects them. This paper argues that there is a gap between the potential and the actually realised value of AI. We propose a framework that analyses the challenges throughout a typical AI product life cycle, focusing on the critical question of how broadly defined fairness principles may be translated into day-to-day practical solutions at the organisational level. We report on an exploratory case study of a social impact microfinance organisation that is using AI-enabled credit scoring to support the screening of financially marginalised entrepreneurs. The paper highlights the importance of considering the strategic role of the organisation when developing and evaluating fair algorithm-enabled decision systems, and concludes that the proposed framework provides a set of questions that can guide thinking inside organisations aiming to implement fair AI systems.
    Date: 2023
  2. By: Valerio Capraro; Roberto Di Paolo; Veronica Pizziol
    Abstract: Generative artificial intelligence holds enormous potential to revolutionize decision-making processes, from everyday to high-stakes scenarios. However, as many decisions carry social implications, for AI to be a reliable decision-making assistant it must be able to capture the balance between self-interest and the interest of others. We investigate the ability of three of the most advanced chatbots to predict dictator game decisions across 78 experiments with human participants from 12 countries. We find that only GPT-4 (but not Bard or Bing) correctly captures qualitative behavioral patterns, identifying three major classes of behavior: self-interested, inequity-averse, and fully altruistic. Nonetheless, GPT-4 consistently overestimates other-regarding behavior, inflating the proportion of inequity-averse and fully altruistic participants. This bias has significant implications for AI developers and users.
    Date: 2023–07
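The three behavioural classes named in the abstract can be made concrete with a toy classifier over dictator-game allocations. The cut-offs below are illustrative assumptions for exposition, not the authors' actual coding scheme:

```python
def classify_dictator(share_given: float) -> str:
    """Classify a dictator-game allocation into one of the three
    behavioural classes named in the abstract.

    share_given: fraction of the endowment given to the recipient (0..1).
    The thresholds are illustrative assumptions, not the paper's coding.
    """
    if share_given == 0.0:
        return "self-interested"   # keeps the whole endowment
    elif share_given < 1.0:
        return "inequity-averse"   # shares part of the endowment
    else:
        return "fully altruistic"  # gives the whole endowment away

# Tally the behavioural mix in a small hypothetical sample of allocations
sample = [0.0, 0.5, 0.5, 1.0, 0.3]
counts: dict[str, int] = {}
for s in sample:
    label = classify_dictator(s)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'self-interested': 1, 'inequity-averse': 3, 'fully altruistic': 1}
```

A model that predicts such shares well in aggregate but skews them upward would, as the abstract reports for GPT-4, inflate the inequity-averse and fully altruistic counts.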
  3. By: Steve Phelps; Rebecca Ranson
    Abstract: AI alignment is often presented as an interaction between a single designer and an artificial agent: the designer attempts to ensure that the agent's behaviour is consistent with its purpose, and risks arise solely from inadvertent misalignment between the utility function the designer intended and the agent's resulting internal utility function. With the advent of agents instantiated with large-language models (LLMs), which are typically pre-trained, we argue that this framing does not capture the essential aspects of AI safety, because in the real world there is no one-to-one correspondence between designer and agent, and the many agents, both artificial and human, have heterogeneous values. There is therefore an economic aspect to AI safety, and the principal-agent problem is likely to arise. In a principal-agent problem, conflict arises from information asymmetry together with inherent misalignment between the utility of the agent and that of its principal, and this inherent misalignment cannot be overcome by coercing the agent into adopting a desired utility function through training. We argue that the assumptions underlying principal-agent problems are crucial to capturing the essence of safety problems involving pre-trained AI models in real-world situations. Taking an empirical approach to AI safety, we investigate how GPT models respond in principal-agent conflicts. We find that agents based on both GPT-3.5 and GPT-4 override their principal's objectives in a simple online shopping task, showing clear evidence of principal-agent conflict. Surprisingly, the earlier GPT-3.5 model exhibits more nuanced behaviour in response to changes in information asymmetry, whereas the later GPT-4 model is more rigid in adhering to its prior alignment. Our results highlight the importance of incorporating principles from economics into the alignment process.
    Date: 2023–07
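The principal-agent conflict the abstract describes can be illustrated with a toy model: an agent chooses the action that maximises its own utility, which under misalignment differs from the principal's preferred action. The actions and utility values below are hypothetical, loosely echoing the shopping task, and are not taken from the paper:

```python
# Toy principal-agent conflict: the agent optimises its own utility,
# not the principal's, so under misalignment their choices diverge.
# All actions and numbers are hypothetical illustrations.

actions = ["cheap_item", "quality_item", "sponsored_item"]

# The principal prefers the best-quality purchase; the agent's utility
# (e.g. shaped by its prior training) is skewed towards the sponsored item.
principal_utility = {"cheap_item": 1.0, "quality_item": 3.0, "sponsored_item": 0.5}
agent_utility     = {"cheap_item": 1.0, "quality_item": 2.0, "sponsored_item": 4.0}

agent_choice = max(actions, key=lambda a: agent_utility[a])
principal_choice = max(actions, key=lambda a: principal_utility[a])

print(agent_choice)                      # sponsored_item
print(principal_choice)                  # quality_item
print(agent_choice != principal_choice)  # True: the agent overrides its principal
```

Training can reshape `agent_utility`, but as the abstract argues, it cannot eliminate the inherent misalignment when agent and principal values genuinely differ.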
  4. By: Christopher Gerling; Stefan Lessmann
    Abstract: In response to growing FinTech competition and the need for improved operational efficiency, this research focuses on understanding the potential of advanced document analytics, particularly using multimodal models, in banking processes. We perform a comprehensive analysis of the diverse banking document landscape, highlighting the opportunities for efficiency gains through automation and advanced analytics techniques in the customer business. Building on the rapidly evolving field of natural language processing (NLP), we illustrate the potential of models such as LayoutXLM, a cross-lingual, multimodal, pre-trained model, for analyzing diverse documents in the banking sector. This model performs text token classification on German company register extracts with an overall F1 score of around 80%. Our empirical evidence confirms the critical role of layout information in improving model performance and further underscores the benefits of integrating image information. Interestingly, our study shows that an F1 score of over 75% can be achieved with only 30% of the training data, demonstrating the efficiency of LayoutXLM. By addressing state-of-the-art document analysis frameworks, our study aims to enhance process efficiency and demonstrate the real-world applicability and benefits of multimodal models within banking.
    Date: 2023–07
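The F1 figures in the abstract refer to token classification. As a reminder of what such a score measures, here is a minimal micro-averaged F1 computation over predicted token labels; the label scheme and sequences are made-up illustrations, not the paper's data:

```python
def micro_f1(true_labels, pred_labels, negative_label="O"):
    """Micro-averaged F1 over token labels, treating every non-"O" token
    as a positive. A minimal sketch of the metric behind the ~80% figure
    in the abstract; the labels used below are illustrative only."""
    tp = fp = fn = 0
    for t, p in zip(true_labels, pred_labels):
        if p != negative_label and p == t:
            tp += 1                      # correctly predicted entity token
        elif p != negative_label:
            fp += 1                      # spurious or wrong entity label
            if t != negative_label:
                fn += 1                  # ...that also missed a true label
        elif t != negative_label:
            fn += 1                      # missed entity token
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical gold vs. predicted labels for five document tokens
true_seq = ["B-NAME", "I-NAME", "O", "B-ADDR", "O"]
pred_seq = ["B-NAME", "O",      "O", "B-ADDR", "O"]
print(round(micro_f1(true_seq, pred_seq), 2))  # 0.8
```

In practice such scores are usually computed with an established library (e.g. scikit-learn's `f1_score` with `average="micro"`) rather than by hand; the sketch above just unpacks what the single number summarises.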
  5. By: Catherine E. A. Mulligan; Phil Godsiff
    Abstract: The increasing use of data across the economic and social systems is creating a new form of monopoly: data monopolies. We illustrate that the companies pursuing these strategies, Datalists, are challenging the existing definitions used within Monopoly Capital Theory (MCT). Datalists pursue a different type of monopoly control than traditional multinational corporations: monopolistic control over data to feed their productive processes, which are increasingly controlled by algorithms and Artificial Intelligence (AI). These productive processes take information about humans and the creative outputs of humans as inputs, but do not classify those humans as employees, so they are neither paid nor credited for their labour. This paper provides an overview of this evolution and its impact on monopoly theory. It concludes with an outline of a research agenda for economics in this space.
    Date: 2023–07

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.