nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒09‒11
eleven papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien

  1. Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty By Margarita Leib; Nils Köbis; Rainer Michael Rilke; Marloes Hagens; Bernd Irlenbusch
  2. Machine Learning Advice in Managerial Decision-Making: The Overlooked Role of Decision Makers’ Advice Utilization By Sturm, Timo; Pumplun, Luisa; Gerlach, Jin; Kowalczyk, Martin; Buxmann, Peter
  3. Managers and AI-driven decisions: Exploring Managers’ Sensemaking Processes in Digital Transformation Contexts By Fabrice Duval; Christophe Elie-Dit-Cosaque
  4. Fine-Tuning Games: Bargaining and Adaptation for General-Purpose Models By Benjamin Laufer; Jon Kleinberg; Hoda Heidari
  5. AI exposure predicts unemployment risk By Morgan Frank; Yong-Yeol Ahn; Esteban Moro
  6. Contested Transparency: Digital Monitoring Technologies and Worker Voice By Belloc, Filippo; Burdin, Gabriel; Dughera, Stefano; Landini, Fabio
  7. How Nations Become Fragile: An AI-Augmented Bird’s-Eye View (with a Case Study of South Sudan) By Tohid Atashbar
  8. Understanding Models and Model Bias with Gaussian Processes By Thomas R. Cook; Nathan M. Palmer
  9. Entity matching with similarity encoding: A supervised learning recommendation framework for linking (big) data By Karapanagiotis, Pantelis; Liebald, Marius
  10. Why Machines Will Not Replace Entrepreneurs. On the Inevitable Limitations of Artificial Intelligence in Economic Life. By Van Den Hauwe, Ludwig
  11. Quantifying Retrospective Human Responsibility in Intelligent Systems By Nir Douer; Joachim Meyer

  1. By: Margarita Leib; Nils Köbis; Rainer Michael Rilke; Marloes Hagens; Bernd Irlenbusch
    Abstract: Artificial Intelligence (AI) is increasingly becoming an indispensable advisor. New ethical concerns arise if AI persuades people to behave dishonestly. In an experiment, we study how AI advice (generated by a natural-language-processing algorithm) affects (dis)honesty, compare it to equivalent human advice, and test whether transparency about the advice source matters. We find that dishonesty-promoting advice increases dishonesty, whereas honesty-promoting advice does not increase honesty. This is the case for both AI and human advice. Algorithmic transparency, a commonly proposed policy to mitigate AI risks, does not affect behaviour. The findings mark the first steps towards managing AI advice responsibly.
    Keywords: Artificial Intelligence, Machine Behaviour, Behavioural Ethics, Advice
    Date: 2023–08
  2. By: Sturm, Timo; Pumplun, Luisa; Gerlach, Jin; Kowalczyk, Martin; Buxmann, Peter
    Abstract: Machine learning (ML) analyses offer great potential to craft profound advice for augmenting managerial decision-making. Yet, even the most promising ML advice cannot improve decision-making if it is not utilized by decision makers. We therefore investigate how ML analyses influence decision makers’ utilization of advice and the resulting decision-making performance. By analyzing data from 239 ML-supported decisions in real-world organizational scenarios, we demonstrate that decision makers’ utilization of ML advice depends on the information quality and transparency of ML advice as well as decision makers’ trust in data scientists’ competence. Furthermore, we find that decision makers’ utilization of ML advice can lead to improved decision-making performance, which is, however, moderated by the decision makers’ management level. The study’s results can help organizations leverage ML advice to improve decision-making, and they promote the mutual consideration of the technical and social aspects behind ML advice as a basic requirement in research and practice.
    Date: 2023–12
  3. By: Fabrice Duval (MEMIAD - Management, économie, modélisation, informatique et aide à la décision [UR7_3] - UA - Université des Antilles); Christophe Elie-Dit-Cosaque (MEMIAD - Management, économie, modélisation, informatique et aide à la décision [UR7_3] - UA - Université des Antilles)
    Abstract: Making effective decisions is vital for organisations to ensure their competitiveness and sustainability. Many expect decisions based on artificial intelligence (AI) to help revolutionise the business world, yet we know very little about how managers interpret, make sense of, and respond to these digital transformation challenges. To address this issue and improve the understanding of how managers make sense of digital transformation, in particular AI-driven digital transformation, we propose to analyse their representations of AI-driven decisions and the forces at play in the sensemaking processes. To do so, we intend to conduct a qualitative study based on interviews with managers involved in digital transformation in Martinique, a Caribbean island. The expected implications for research and practice are discussed.
    Keywords: Sensemaking, AI-driven decisions, intuition, artificial intelligence, digital transformation
    Date: 2023–05–29
  4. By: Benjamin Laufer; Jon Kleinberg; Hoda Heidari
    Abstract: Major advances in Machine Learning (ML) and Artificial Intelligence (AI) increasingly take the form of developing and releasing general-purpose models. These models are designed to be adapted by other businesses and agencies to perform a particular, domain-specific function. This process has become known as adaptation or fine-tuning. This paper offers a model of the fine-tuning process where a Generalist brings the technological product (here an ML model) to a certain level of performance, and one or more Domain-specialist(s) adapts it for use in a particular domain. Both entities are profit-seeking and incur costs when they invest in the technology, and they must reach a bargaining agreement on how to share the revenue for the technology to reach the market. For a relatively general class of cost and revenue functions, we characterize the conditions under which the fine-tuning game yields a profit-sharing solution. We observe that any potential domain-specialization will either contribute, free-ride, or abstain in their uptake of the technology, and we provide conditions yielding these different strategies. We show how methods based on bargaining solutions and sub-game perfect equilibria provide insights into the strategic behavior of firms in these types of interactions, and we find that profit-sharing can still arise even when one firm has significantly higher costs than another. We also provide methods for identifying Pareto-optimal bargaining arrangements for a general set of utility functions.
    Date: 2023–08
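The bargaining setup described in the abstract can be illustrated with a toy Nash bargaining computation. The linear revenue-sharing form, the cost figures, and the `nash_bargain` helper below are all illustrative assumptions, not the paper's model.

```python
# Toy sketch of a revenue-sharing bargain between a Generalist (builds a
# general-purpose model) and a Domain-specialist (fine-tunes it).
# All cost/revenue forms and numbers are hypothetical.

def nash_bargain(revenue, cost_g, cost_s, steps=10_000):
    """Grid-search the Generalist's revenue share s that maximizes the
    Nash product of the two firms' surpluses; None if no deal is viable."""
    best_share, best_product = None, 0.0
    for i in range(steps + 1):
        s = i / steps
        surplus_g = s * revenue - cost_g          # Generalist's net profit
        surplus_s = (1 - s) * revenue - cost_s    # Specialist's net profit
        if surplus_g < 0 or surplus_s < 0:
            continue                              # one side would walk away
        product = surplus_g * surplus_s
        if product > best_product:
            best_share, best_product = s, product
    return best_share

share = nash_bargain(revenue=100.0, cost_g=30.0, cost_s=10.0)
print(f"Generalist's revenue share: {share:.2f}")
```

With linear surpluses the Nash solution splits the joint surplus equally, so the higher-cost firm receives the larger share; if total costs exceed revenue, no agreement exists and the function returns `None`.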
  5. By: Morgan Frank; Yong-Yeol Ahn; Esteban Moro
    Abstract: Is artificial intelligence (AI) disrupting jobs and creating unemployment? Despite many attempts to quantify occupations' exposure to AI, inconsistent validation obfuscates the relative benefits of each approach. A lack of disaggregated labor outcome data, including unemployment data, further exacerbates the issue. Here, we assess which models of AI exposure predict job separations and unemployment risk using new occupation-level unemployment data from each US state's unemployment insurance office spanning 2010 through 2020. Although these AI exposure scores have been used by governments and industry, we find that individual AI exposure models are not predictive of unemployment rates, unemployment risk, or job separation rates. However, an ensemble of those models exhibits substantial predictive power, suggesting that competing models may capture different aspects of AI exposure that collectively account for AI's variable impact across occupations, regions, and time. Our results also call for dynamic, context-aware, and validated methods for assessing AI exposure. Interactive visualizations for this study are available at mo/.
    Date: 2023–08
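The ensemble finding can be sketched in a few lines. The exposure scores and unemployment rates below are entirely hypothetical, and the simple averaging of models is an illustrative assumption rather than the authors' method.

```python
# Illustrative sketch (not the paper's method or data): individual AI-exposure
# scores may each correlate only moderately with unemployment risk, while
# their simple average ("ensemble") tracks it more closely.
from statistics import mean, stdev

def zscore(xs):
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    xz, yz = zscore(xs), zscore(ys)
    return sum(a * b for a, b in zip(xz, yz)) / (len(xs) - 1)

# Hypothetical exposure scores from two competing models, per occupation.
model_a = [0.2, 0.5, 0.9, 0.4, 0.7, 0.1]
model_b = [0.8, 0.3, 0.6, 0.5, 0.9, 0.2]
unemployment = [0.05, 0.04, 0.075, 0.045, 0.08, 0.015]

ensemble = [mean(pair) for pair in zip(model_a, model_b)]
print(pearson(model_a, unemployment))   # each model alone: weaker fit
print(pearson(model_b, unemployment))
print(pearson(ensemble, unemployment))  # ensemble: closer fit
```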
  6. By: Belloc, Filippo (University of Siena); Burdin, Gabriel (Leeds University Business School); Dughera, Stefano (University Paris Ouest-Nanterre); Landini, Fabio (University of Parma)
    Abstract: Advances in artificial intelligence and data analytics have notably expanded employers' monitoring and surveillance capabilities, facilitating the accurate observability of work effort. There is an ongoing debate among academics and policymakers about the productivity and broader welfare implications of digital monitoring (DM) technologies. In this context, many countries confer information, consultation and codetermination rights to employee representation (ER) bodies on matters related to the workplace governance of these technologies. Using a cross-sectional sample of more than 21,000 European establishments, we document a positive association between ER and the utilization of DM technologies. We also find a positive effect of ER on DM utilization in the context of a local-randomization regression discontinuity analysis that exploits size-contingent policy rules governing the operation of ER bodies in Europe. Finally, in an exploratory analysis, we find a positive association between DM and process innovations, particularly in establishments where ER bodies are present and a large fraction of workers perform jobs that require finding solutions to unfamiliar problems. We interpret these findings through the lens of a labor discipline model in which the presence of ER bodies affects the employer's decision to invest in DM technologies.
    Keywords: digital-based monitoring, algorithmic management, HR analytics, transparency, innovation, worker voice, employee representation
    JEL: M5 J50 O32 O33
    Date: 2023–08
  7. By: Tohid Atashbar
    Abstract: In this study we introduce and apply a set of machine learning and artificial intelligence techniques to analyze multi-dimensional fragility-related data. Our analysis of the fragility data collected by the OECD for its States of Fragility index showed that such techniques can provide further insights into the non-linear relationships and diverse drivers of state fragility, highlighting the importance of a nuanced and context-specific approach to understanding and addressing this multi-aspect issue. We also applied the methodology to South Sudan, one of the most fragile countries in the world, to analyze the dynamics behind the different aspects of fragility over time. The results could be used to improve the Fund’s country engagement strategy (CES) and efforts at the country level.
    Keywords: Fragile and Conflict-Affected States; Fragility Trap; Fragility Syndrome; Machine Learning; Artificial Intelligence
    Date: 2023–08–11
  8. By: Thomas R. Cook; Nathan M. Palmer
    Abstract: Despite growing interest in the use of complex models, such as machine learning (ML) models, for credit underwriting, ML models are difficult to interpret, and it is possible for them to learn relationships that yield de facto discrimination. How can we understand the behavior and potential biases of these models, especially if our access to the underlying model is limited? We argue that counterfactual reasoning is ideal for interpreting model behavior, and that Gaussian processes (GP) can provide approximate counterfactual reasoning while also incorporating uncertainty in the underlying model’s functional form. We illustrate with an exercise in which a simulated lender uses a biased machine learning model to decide credit terms. Comparing aggregate outcomes does not clearly reveal bias, but with a GP model we can estimate individual counterfactual outcomes. This approach can detect the bias in the lending model even when only a relatively small sample is available. To demonstrate the value of this approach for the more general task of model interpretability, we also show how the GP model’s estimates can be aggregated to recreate the partial density functions for the lending model.
    Keywords: models; Gaussian process; model bias
    JEL: C10 C14 C18 C45
    Date: 2023–06–15
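The counterfactual idea can be sketched with a minimal pure-Python GP regression. The RBF kernel, the noise level, the credit-score data, and the group-A/group-B lending story below are all illustrative assumptions, not the authors' implementation.

```python
# Minimal Gaussian-process regression sketch: fit a GP to one group's
# observed lending outcomes, then estimate the counterfactual outcome an
# individual from another group *would* have received at the same inputs.
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    """GP posterior mean and variance at x_star given training data."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)                       # K^{-1} y
    k_star = [rbf(x, x_star) for x in xs]
    mean = sum(k * a for k, a in zip(k_star, alpha))
    v = solve(K, k_star)                       # K^{-1} k*
    var = rbf(x_star, x_star) - sum(k * w for k, w in zip(k_star, v))
    return mean, var

# Interest rates the (hypothetical) lender gave group-A applicants,
# indexed by a single credit-score feature.
scores = [1.0, 2.0, 3.0, 4.0]
rates = [9.0, 7.5, 6.0, 4.8]

# A group-B applicant with score 2.5: what would group A's model have offered?
cf_mean, cf_var = gp_predict(scores, rates, 2.5)
print(f"counterfactual rate ~ {cf_mean:.2f}% (var {cf_var:.4f})")
```

The posterior variance is what distinguishes the GP from a point estimator: it quantifies how uncertain the counterfactual is, which matters when only a small sample is available.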
  9. By: Karapanagiotis, Pantelis; Liebald, Marius
    Abstract: In this study, we introduce a novel entity matching (EM) framework. It combines state-of-the-art EM approaches based on Artificial Neural Networks (ANN) with a new similarity encoding derived from matching techniques that are prevalent in finance and economics. Our framework is on par with or outperforms alternative end-to-end frameworks in standard benchmark cases. Because the similarity encoding is constructed using (edit) distances instead of semantic similarities, it avoids out-of-vocabulary problems when matching dirty data. We highlight this property by applying the EM framework to dirty financial firm-level data extracted from historical archives.
    Keywords: Entity matching, Entity resolution, Database linking, Machine learning, Record resolution, Similarity encoding
    JEL: C8
    Date: 2023
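Edit-distance-based similarity encoding can be sketched as follows. The Levenshtein-based per-field encoding and the example records are illustrative assumptions, not the authors' exact construction.

```python
# Sketch of similarity encoding for entity matching: each candidate record
# pair becomes a vector of normalized edit-distance similarities, one per
# field, which a supervised classifier can then consume. Because it needs no
# vocabulary or embedding, it copes with "dirty" out-of-vocabulary strings.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(curr[j - 1] + 1,      # insertion
                            prev[j] + 1,          # deletion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    """Edit distance normalized to [0, 1]; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def encode_pair(rec1, rec2, fields):
    """Similarity-encode a record pair as one feature vector."""
    return [similarity(rec1[f], rec2[f]) for f in fields]

r1 = {"name": "Deutsche Bank AG", "city": "Frankfurt"}
r2 = {"name": "Deutsch Bank A.G.", "city": "Frankfurt a.M."}
print(encode_pair(r1, r2, ["name", "city"]))  # high values despite typos
```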
  10. By: Van Den Hauwe, Ludwig
    Abstract: This paper critically explores some supposed implications of the development of artificial intelligence (AI), particularly also machine learning (ML), for how we conceive of the role of entrepreneurship in the economy. The question of the impact of AI and ML is examined by hypothesizing a decentralized market-based system and raising the question of whether entrepreneurs will someday likely be replaced by machines. The answer turns out to be highly skeptical. Not only does the materialist worldview behind the ambitions of much AI research cast serious doubt upon the chances of success of any attempt to emulate entrepreneurship algorithmically with the help of computers, but the very possibility of artificial general intelligence (AGI) can also be ruled out on purely scientific grounds. The paper concludes that human entrepreneurs will remain the driving force of the market.
    Keywords: Artificial Intelligence, Entrepreneurship, Creativity
    JEL: A12 M0 O3
    Date: 2023–08–10
  11. By: Nir Douer; Joachim Meyer
    Abstract: Intelligent systems have become a major part of our lives. Human responsibility for outcomes becomes unclear in the interaction with these systems, as parts of information acquisition, decision-making, and action implementation may be carried out jointly by humans and systems. Determining human causal responsibility with intelligent systems is particularly important in events that end with adverse outcomes. We developed three measures of retrospective human causal responsibility when using intelligent systems. The first measure concerns repetitive human interactions with a system. Using information theory, it quantifies the average human's unique contribution to the outcomes of past events. The second and third measures concern human causal responsibility in a single past interaction with an intelligent system. They quantify, respectively, the unique human contribution in forming the information used for decision-making and the reasonability of the actions that the human carried out. The results show that human retrospective responsibility depends on the combined effects of system design and its reliability, the human's role and authority, and probabilistic factors related to the system and the environment. The new responsibility measures can serve to investigate and analyze past events involving intelligent systems. They may aid the judgment of human responsibility and ethical and legal discussions, providing a novel quantitative perspective.
    Date: 2023–08
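A rough information-theoretic proxy for the first measure (the human's unique contribution over repeated interactions) can be sketched as the conditional mutual information I(H; O | S) normalized by H(O), estimated from counts of past (system output, human action, outcome) events. This is an illustrative construction inspired by the abstract, not the authors' exact measure.

```python
# Proxy sketch: the share of outcome entropy explained by the human's action
# beyond what the system's output already explains, I(H; O | S) / H(O).
# The event log below is hypothetical.
import math
from collections import Counter

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def joint(events):
    n = len(events)
    return {k: v / n for k, v in Counter(events).items()}

def marginal(p, keep):
    out = Counter()
    for k, v in p.items():
        out[tuple(k[i] for i in keep)] += v
    return dict(out)

def unique_human_contribution(events):
    """I(H; O | S) / H(O) for events given as (system, human, outcome)."""
    p = joint(events)
    h_sh = entropy(marginal(p, (0, 1)))   # H(S, H)
    h_so = entropy(marginal(p, (0, 2)))   # H(S, O)
    h_s = entropy(marginal(p, (0,)))      # H(S)
    h_sho = entropy(p)                    # H(S, H, O)
    h_o = entropy(marginal(p, (2,)))      # H(O)
    cmi = h_sh + h_so - h_s - h_sho       # I(H; O | S)
    return cmi / h_o if h_o > 0 else 0.0

# Hypothetical log: system alert, human decision, outcome.
events = [("alert", "stop", "safe"), ("alert", "go", "harm"),
          ("clear", "go", "safe"), ("clear", "go", "safe"),
          ("alert", "stop", "safe"), ("clear", "stop", "safe")]
print(unique_human_contribution(events))
```

When the outcome is fully determined by the system's output regardless of what the human does, the measure drops to zero, matching the intuition that responsibility requires a unique human contribution.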

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise, your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.