nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒06‒10
fifteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Strategic Behavior and AI Training Data By Christian Peukert; Florian Abeillon; Jérémie Haese; Franziska Kaiser; Alexander Staub
  2. Algorithmic Bias and Racial Inequality: A Critical Review By Kasy, Maximilian
  3. Bridging the Human-Automation Fairness Gap: How Providing Reasons Enhances the Perceived Fairness of Public Decision-Making By Arian Henning; Pascal Langenbach
  4. Designing Algorithmic Recommendations to Achieve Human-AI Complementarity By Bryce McLaughlin; Jann Spiess
  5. Artificial Intelligence for Multi-Unit Auction design By Peyman Khezr; Kendall Taylor
  6. Generative AI Usage and Academic Performance By Janik Ole Wecks; Johannes Voshaar; Benedikt Jost Plate; Jochen Zimmermann
  7. AI, Automation and Taxation By Spencer Bastani; Daniel Waldenström
  8. AI and the Future of Government: Unexpected Effects and Critical Challenges By Tiago C. Peixoto; Otaviano Canuto; Luke Jordan
  9. Planning for Degrowth: How artificial intelligence and Big Data revitalize the debate on democratic economic planning By Schlichter, Leo
  10. Application and practice of AI technology in quantitative investment By Shuochen Bi; Wenqing Bao; Jue Xiao; Jiangshan Wang; Tingting Deng
  11. NumLLM: Numeric-Sensitive Large Language Model for Chinese Finance By Huan-Yi Su; Ke Wu; Yu-Hao Huang; Wu-Jun Li
  12. Multi-Stakeholder Ecosystem for Standardization of AI in Industry By Bonilla, George J.J.; Dietlmeier, Simon Frederic; Urmetzer, Florian
  13. Innovative Application of Artificial Intelligence Technology in Bank Credit Risk Management By Shuochen Bi; Wenqing Bao
  14. ECC Analyzer: Extract Trading Signal from Earnings Conference Calls using Large Language Model for Stock Performance Prediction By Yupeng Cao; Zhi Chen; Qingyun Pei; Prashant Kumar; K. P. Subbalakshmi; Papa Momar Ndiaye
  15. Deep learning solutions of DSGE models: A technical report By Pierre Beck; Pablo Garcia-Sanchez; Alban Moura; Julien Pascal; Olivier Pierrard

  1. By: Christian Peukert; Florian Abeillon; Jérémie Haese; Franziska Kaiser; Alexander Staub
    Abstract: Human-created works represent critical data inputs to artificial intelligence (AI). Strategic behavior can play a major role for AI training datasets, be it in limiting access to existing works or in deciding which types of new works to create or whether to create new works at all. We examine creators' behavioral change when their works become training data for AI. Specifically, we focus on contributors on Unsplash, a popular stock image platform with about 6 million high-quality photos and illustrations. In the summer of 2020, Unsplash launched an AI research program by releasing a dataset of 25,000 images for commercial use. We study contributors' reactions, comparing contributors whose works were included in this dataset to contributors whose works were not included. Our results suggest that treated contributors left the platform at a higher-than-usual rate and substantially slowed down the rate of new uploads. Professional and more successful photographers react more strongly than amateurs and less successful photographers. We also show that affected users changed the variety and novelty of contributions to the platform, with long-run implications for the stock of works potentially available for AI training. Taken together, our findings highlight the trade-off between the interests of rightsholders and promoting innovation at the technological frontier. We discuss implications for copyright and AI policy.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.18445&r=
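    Illustration: The treated-versus-control comparison described above lends itself to a difference-in-differences style design. The sketch below is purely illustrative, with synthetic data and hypothetical variable names, and is not the authors' actual specification.
      # Illustrative difference-in-differences sketch; synthetic data only.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n, t = 200, 24  # contributors, months
      panel = pd.DataFrame({
          "contributor": np.repeat(np.arange(n), t),
          "month": np.tile(np.arange(t), n),
      })
      panel["treated"] = (panel["contributor"] < n // 2).astype(int)  # works included in the released dataset
      panel["post"] = (panel["month"] >= 12).astype(int)              # after the dataset release
      # simulated upload counts with a negative effect for treated contributors after release
      panel["uploads"] = rng.poisson(5 - 2 * panel["treated"] * panel["post"])

      did = smf.ols("uploads ~ treated * post", data=panel).fit(
          cov_type="cluster", cov_kwds={"groups": panel["contributor"]})
      print(did.params["treated:post"])  # difference-in-differences estimate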
  2. By: Kasy, Maximilian (University of Oxford)
    Abstract: Most definitions of algorithmic bias and fairness encode decision-maker interests, such as profits, rather than the interests of disadvantaged groups (e.g., racial minorities): Bias is defined as a deviation from profit maximization. Future research should instead focus on the causal effect of automated decisions on the distribution of welfare, both across and within groups. The literature emphasizes some apparent contradictions between different notions of fairness, and between fairness and profits. These contradictions vanish, however, when profits are maximized. Existing work involves conceptual slippages between statistical notions of bias and misclassification errors, economic notions of profit, and normative notions of bias and fairness. Notions of bias nonetheless carry some interest within the welfare paradigm that I advocate for, if we understand bias and discrimination as mechanisms and potential points of intervention.
    Keywords: AI, algorithmic bias, inequality, machine learning, discrimination
    JEL: J7 O3
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp16944&r=
  3. By: Arian Henning (Max Planck Institute for Research on Collective Goods, Bonn); Pascal Langenbach (Max Planck Institute for Research on Collective Goods, Bonn)
    Abstract: Automated decision-making in legal contexts is often perceived as less fair than its human counterpart. This human-automation fairness gap poses practical challenges for implementing automated systems in the public sector. Drawing on experimental data from 4,250 participants in three public decision-making scenarios, this study examines how different reasoning models influence the perceived fairness of automated and human decision-making. The results show that providing reasons enhances the perceived fairness of decision-making, regardless of whether decisions are made by humans or machines. Moreover, the study demonstrates that sufficiently individualized reasoning largely mitigates the human-automation fairness gap. The study thus contributes to the understanding of how procedural elements like giving reasons for decisions shape perceptions of automated government and suggests that well-designed reason-giving can improve the acceptability of automated decision systems.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:mpg:wpaper:2024_11&r=
  4. By: Bryce McLaughlin; Jann Spiess
    Abstract: Algorithms frequently assist, rather than replace, human decision-makers. However, the design and analysis of algorithms often focus on predicting outcomes and do not explicitly model their effect on human decisions. This discrepancy between the design and role of algorithmic assistants becomes of particular concern in light of empirical evidence that suggests that algorithmic assistants again and again fail to improve human decisions. In this article, we formalize the design of recommendation algorithms that assist human decision-makers without making restrictive ex-ante assumptions about how recommendations affect decisions. We formulate an algorithmic-design problem that leverages the potential-outcomes framework from causal inference to model the effect of recommendations on a human decision-maker's binary treatment choice. Within this model, we introduce a monotonicity assumption that leads to an intuitive classification of human responses to the algorithm. Under this monotonicity assumption, we can express the human's response to algorithmic recommendations in terms of their compliance with the algorithm and the decision they would take if the algorithm sends no recommendation. We showcase the utility of our framework using an online experiment that simulates a hiring task. We argue that our approach explains the relative performance of different recommendation algorithms in the experiment, and can help design solutions that realize human-AI complementarity.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.01484&r=
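    Illustration: One way to write the potential-outcomes setup and monotonicity condition sketched above (an illustrative rendering, not necessarily the authors' exact notation):
      % D_i(r) is decision-maker i's binary treatment choice when the algorithm
      % sends recommendation r: recommend treatment (1), stay silent (\varnothing), or recommend no treatment (0).
      \[
        D_i(r) \in \{0,1\}, \qquad r \in \{1, \varnothing, 0\},
      \]
      \[
        \text{Monotonicity:} \quad D_i(1) \;\ge\; D_i(\varnothing) \;\ge\; D_i(0).
      \]
      % Under monotonicity, each decision-maker's response is summarized by whether
      % they comply with the recommendation and by their default choice D_i(\varnothing),
      % e.g. compliers with D_i(1)=1 and D_i(0)=0 versus always-takers and never-takers.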
  5. By: Peyman Khezr; Kendall Taylor
    Abstract: Understanding bidding behavior in multi-unit auctions remains an ongoing challenge for researchers. Despite their widespread use, theoretical insights into the bidding behavior, revenue ranking, and efficiency of commonly used multi-unit auctions are limited. This paper utilizes artificial intelligence, specifically reinforcement learning, as a model-free learning approach to simulate bidding in three prominent multi-unit auctions employed in practice. We introduce six algorithms that are suitable for learning and bidding in multi-unit auctions and compare them using an illustrative example. This paper underscores the significance of using artificial intelligence in auction design, particularly in enhancing the design of multi-unit auctions.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.15633&r=
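    Illustration: A toy version of the model-free reinforcement-learning approach described above (not the authors' six algorithms or auction environments): independent epsilon-greedy Q-learning bidders in a repeated uniform-price auction for two identical units.
      # Toy sketch: 3 single-unit-demand bidders with private value 1.0 learn bids
      # in a repeated uniform-price auction for 2 units. Illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      bid_grid = np.linspace(0.0, 1.0, 11)       # discrete bid levels
      n_bidders, n_units, value = 3, 2, 1.0
      Q = np.zeros((n_bidders, len(bid_grid)))   # stateless action values per bidder
      eps, lr = 0.1, 0.05

      for _ in range(50_000):
          explore = rng.random(n_bidders) < eps
          actions = np.where(explore, rng.integers(len(bid_grid), size=n_bidders), Q.argmax(axis=1))
          bids = bid_grid[actions]
          order = np.argsort(-bids)              # highest bids win the units
          price = bids[order[n_units]]           # uniform price: highest rejected bid
          rewards = np.zeros(n_bidders)
          rewards[order[:n_units]] = value - price
          Q[np.arange(n_bidders), actions] += lr * (rewards - Q[np.arange(n_bidders), actions])

      print(bid_grid[Q.argmax(axis=1)])          # learned bid levels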
  6. By: Janik Ole Wecks; Johannes Voshaar; Benedikt Jost Plate; Jochen Zimmermann
    Abstract: This study evaluates the impact of students' usage of generative artificial intelligence (GenAI) tools such as ChatGPT on their academic performance. We analyze student essays using GenAI detection systems to identify GenAI users among the cohort. Employing multivariate regression analysis, we find that students using GenAI tools score on average 6.71 (out of 100) points lower than non-users. While GenAI tools may offer benefits for learning and engagement, the way students actually use them correlates with diminished academic outcomes. Exploring the underlying mechanism, additional analyses show that the effect is particularly detrimental to students with high learning potential, suggesting that GenAI tool usage hinders learning. Our findings provide important empirical evidence for the ongoing debate on the integration of GenAI in higher education and underscore the necessity for educators, institutions, and policymakers to carefully consider its implications for student performance.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.19699&r=
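    Illustration: A minimal sketch of the kind of multivariate regression described above, using synthetic data and a hypothetical control variable rather than the authors' dataset or exact specification.
      # Regress exam scores on a GenAI-usage indicator plus a control; synthetic data.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 500
      df = pd.DataFrame({
          "genai_user": rng.integers(0, 2, n),   # flagged by a GenAI detection system
          "prior_gpa": rng.normal(2.8, 0.5, n),  # hypothetical control variable
      })
      # simulate the reported gap of 6.71 points between users and non-users
      df["score"] = 60 + 10 * df["prior_gpa"] - 6.71 * df["genai_user"] + rng.normal(0, 8, n)

      fit = smf.ols("score ~ genai_user + prior_gpa", data=df).fit(cov_type="HC1")
      print(fit.params["genai_user"])            # close to the simulated -6.71 gap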
  7. By: Spencer Bastani; Daniel Waldenström
    Abstract: This paper examines the implications of Artificial Intelligence (AI) and automation for the taxation of labor and capital in advanced economies. It synthesizes empirical evidence on worker displacement, productivity, and income inequality, as well as theoretical frameworks for optimal taxation. Implications for tax policy are discussed, focusing on the level of capital taxes and the progressivity of labor taxes. While there may be a need to adjust the level of capital taxes and the structure of labor income taxation, there are potential drawbacks of overly progressive taxation and universal basic income schemes that could undermine work incentives, economic growth, and long-term household welfare. Some of the challenges posed by AI and automation may also be better addressed through regulatory measures rather than tax policy.
    Keywords: AI, automation, inequality, labor share, optimal taxation, tax progressivity
    JEL: H21 H30 O33
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_11084&r=
  8. By: Tiago C. Peixoto; Otaviano Canuto; Luke Jordan
    Abstract: Based on observable facts, this policy paper explores some of the less-acknowledged yet critically important ways in which artificial intelligence (AI) may affect the public sector and its role. Our focus is on those areas where AI's influence might be understated currently, but where it has substantial implications for future government policies and actions.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:ocp:ppaper:pb10-24&r=
  9. By: Schlichter, Leo
    Abstract: The Degrowth movement advocates a radical shift from our capitalist economic system to one based on human needs, planetary boundaries, and economic democracy. The literature, however, often neglects detailing the concrete coordination mechanisms of a Degrowth economy. This paper addresses this gap by proposing democratic economic planning as a potential solution. I delve into historical and contemporary planning debates, examining practical examples and proposals that leverage artificial intelligence and cybernetics for democratic economic planning. I argue that models such as participatory economics (Parecon) or Daniel Saros's planning model align well with Degrowth principles, forming a foundation for further exploration. Effective economic planning requires democratic participation, free information flow, and safeguards against power abuse. Still, open questions on money, trade, democratic institutions, and privacy protection require further investigation.
    Keywords: Degrowth, economic democracy, economic planning, participatory economics
    JEL: B50 O49 P11 P21 P40
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:zbw:ipewps:294825&r=
  10. By: Shuochen Bi; Wenqing Bao; Jue Xiao; Jiangshan Wang; Tingting Deng
    Abstract: With the continuous development of artificial intelligence, using machine learning to predict market trends may no longer be out of reach. In recent years, artificial intelligence has become a research hotspot in academia and has been widely applied in image recognition, natural language processing and other fields; it also has a major impact on quantitative investment. As an investment approach that seeks stable returns through data analysis, model construction and programmatic trading, quantitative investment is favored by financial institutions and investors, and quantitative investment strategies based on artificial intelligence have emerged as an important application of the technology. How to apply artificial intelligence to quantitative investment, so as to better achieve returns and risk control, has therefore become both a focus and a difficulty of research. From a global perspective, US inflation and the Federal Reserve are key concerns for investors, which to some extent affect the direction of global assets, including the Chinese stock market. This paper studies AI technology, quantitative investment, and the application of AI technology to quantitative investment, aiming to provide investors with decision support, reduce the difficulty of investment analysis, and help them obtain higher returns.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.18184&r=
  11. By: Huan-Yi Su; Ke Wu; Yu-Hao Huang; Wu-Jun Li
    Abstract: Recently, many works have proposed various financial large language models (FinLLMs) by pre-training from scratch or fine-tuning open-sourced LLMs on financial corpora. However, existing FinLLMs exhibit unsatisfactory performance in understanding financial text when numeric variables are involved in questions. In this paper, we propose a novel LLM, called numeric-sensitive large language model (NumLLM), for Chinese finance. We first construct a financial corpus from financial textbooks, which is essential for improving the numeric capability of LLMs during fine-tuning. After that, we train two individual low-rank adaptation (LoRA) modules by fine-tuning on our constructed financial corpus. One module is for adapting general-purpose LLMs to the financial domain, and the other module is for enhancing the ability of NumLLM to understand financial text with numeric variables. Lastly, we merge the two LoRA modules into the foundation model to obtain NumLLM for inference. Experiments on a financial question-answering benchmark show that NumLLM can boost the performance of the foundation model and can achieve the best overall performance compared to all baselines, on both numeric and non-numeric questions.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.00566&r=
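    Illustration: The final merging step described above, folding two trained LoRA modules back into the foundation model, amounts to adding two low-rank updates to each adapted weight matrix. The sketch below shows that arithmetic with illustrative shapes, ranks, and scaling; it is not NumLLM's actual code.
      # Merge two LoRA adapters into one base weight matrix:
      #   W_merged = W0 + (alpha1/r1) * B1 @ A1 + (alpha2/r2) * B2 @ A2
      import numpy as np

      d_out, d_in = 1024, 1024                   # illustrative layer shape
      r1, r2 = 16, 16                            # LoRA ranks
      alpha1, alpha2 = 32.0, 32.0                # LoRA scaling hyperparameters

      rng = np.random.default_rng(0)
      W0 = rng.normal(scale=0.02, size=(d_out, d_in))   # frozen foundation weights
      A1 = rng.normal(scale=0.01, size=(r1, d_in))      # stand-ins for the module
      B1 = rng.normal(scale=0.01, size=(d_out, r1))     # fine-tuned on the financial corpus
      A2 = rng.normal(scale=0.01, size=(r2, d_in))      # stand-ins for the module
      B2 = rng.normal(scale=0.01, size=(d_out, r2))     # targeting numeric understanding

      W_merged = W0 + (alpha1 / r1) * (B1 @ A1) + (alpha2 / r2) * (B2 @ A2)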
  12. By: Bonilla, George J.J.; Dietlmeier, Simon Frederic; Urmetzer, Florian
    Abstract: Governmental interest in fostering the Artificial Intelligence sector in Britain has increased rapidly; the United Kingdom has recognised AI’s significance and incorporated it into its policy frameworks. The UK’s Industrial Strategy framework of 2017 emphasises the need for investment, research, and collaboration in this field, and these efforts raise a significant question: how do regional AI SMEs gain access to frameworks, networks and resources? In line with this endeavour, the research focuses on how three AI SMEs located in different regions of Britain are influenced by the introduction of those policy frameworks in their business operations. By examining these aspects, this research provides insights into the impact of the domestic AI policy framework on Britain’s AI SMEs. It focuses on how policies can shape the development and adoption of such frameworks in SMEs and how their influence might differ from one SME to another, by utilising two analytical frameworks: 1. the Stakeholder Assessment Criterion, which defines three models, the ‘Statist’ model, the ‘Laissez-Faire’ model and the ‘Academia’ model; and 2. the Governance Matrix. These two frameworks aided this research in comprehending the current British AI ecosystem policy developments influencing the three AI SMEs. The research followed an inductive reasoning process and a qualitative data collection methodology. Three case studies were conducted: one in a company based in London, England’s capital; another in Reading, in Berkshire; and a third in Sheffield, in South Yorkshire in northern England. These observations took place between June 12 and July 14. Several interviews with stakeholders from these companies were conducted, providing the opportunity to scrutinise and cross-reference the recent AI policy framework developments implemented by the British Parliament. Furthermore, the study engaged regional and domestic policymakers in interviews to comprehend the external factors influencing these companies.
    Keywords: Standardization; Multi-Stakeholder; Forum; Artificial Intelligence; Policy
    JEL: A1 B2 C8 F5 Y4
    Date: 2023–08–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:120619&r=
  13. By: Shuochen Bi; Wenqing Bao
    Abstract: With the rapid growth of technology, especially the widespread application of artificial intelligence (AI), the risk management capabilities of commercial banks are constantly reaching new heights. In the current wave of digitalization, AI has become a key driving force for the strategic transformation of financial institutions, especially the banking industry. For commercial banks, the stability and safety of asset quality are crucial and directly relate to the bank's long-term stable growth. Credit risk management is particularly central because it involves the flow of large amounts of funds and the accuracy of credit decisions. Therefore, establishing a scientific and effective credit risk decision-making mechanism is of great strategic significance for commercial banks. In this context, the innovative application of AI has brought revolutionary changes to bank credit risk management. Through deep learning and big data analysis, AI can accurately evaluate the credit status of borrowers, identify potential risks in a timely manner, and provide banks with more accurate and comprehensive credit decision support. At the same time, AI enables real-time monitoring and early warning, helping banks intervene before risks materialize and reduce losses.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.18183&r=
  14. By: Yupeng Cao; Zhi Chen; Qingyun Pei; Prashant Kumar; K. P. Subbalakshmi; Papa Momar Ndiaye
    Abstract: In the realm of financial analytics, leveraging unstructured data, such as earnings conference calls (ECCs), to forecast stock performance is a critical challenge that has attracted both academics and investors. While previous studies have used deep learning-based models to obtain a general view of ECCs, they often fail to capture detailed, complex information. Our study introduces a novel framework, the ECC Analyzer, combining Large Language Models (LLMs) and multi-modal techniques to extract richer, more predictive insights. The model begins by summarizing the transcript's structure and analyzing the speakers' mode and confidence level by detecting variations in tone and pitch in the audio. This analysis helps investors form an overall perception of the ECCs. Moreover, the model uses Retrieval-Augmented Generation (RAG)-based methods to meticulously extract the focuses that have a significant impact on stock performance from an expert's perspective, providing a more targeted analysis. The model goes a step further by enriching these extracted focuses with additional layers of analysis, such as sentiment and audio segment features. By integrating these insights, the ECC Analyzer performs multi-task predictions of stock performance, including volatility, value-at-risk (VaR), and return for different intervals. The results show that our model outperforms traditional analytic benchmarks, confirming the effectiveness of using advanced LLM techniques in financial analytics.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.18470&r=
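    Illustration: For context on the prediction targets mentioned above, the sketch below computes post-call realized volatility, value-at-risk, and return over a horizon from daily prices, using common conventions that may differ from the paper's exact target definitions.
      # Build post-event targets from daily closing prices; conventions are illustrative.
      import numpy as np

      def post_call_targets(prices, call_idx, horizon=30, var_level=0.05):
          """prices: 1-D array of daily closes; call_idx: index of the call date."""
          window = prices[call_idx:call_idx + horizon + 1]
          rets = np.diff(np.log(window))                  # daily log returns
          realized_vol = rets.std(ddof=1) * np.sqrt(252)  # annualized realized volatility
          var = -np.quantile(rets, var_level)             # daily value-at-risk at 5%
          total_return = window[-1] / window[0] - 1.0     # simple return over the horizon
          return realized_vol, var, total_return

      prices = 100 * np.exp(np.cumsum(np.random.default_rng(0).normal(0, 0.01, 250)))
      print(post_call_targets(prices, call_idx=100, horizon=30))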
  15. By: Pierre Beck; Pablo Garcia-Sanchez; Alban Moura; Julien Pascal; Olivier Pierrard
    Abstract: This technical report provides an introduction to solving economic models using deep learning techniques. We offer a simple yet rigorous overview of deep learning methods and their applicability to economic modeling. We illustrate these concepts using the benchmark of modern macroeconomic theory: the stochastic growth model. Our results emphasize how various choices related to the design of the deep learning solution affect the accuracy of the results, providing some guidance for potential users of the method. We also provide fully commented computer codes. Overall, our hope is that this report will serve as an accessible, useful entry point to applying deep learning techniques to solve economic models for graduate students and researchers interested in the field.
    Keywords: Solutions of DSGE models, deep learning, artificial neural networks
    JEL: C45 C60 C63 E13
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:bcl:bclwop:bclwp184&r=
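    Illustration: To give a flavor of the method described above, the sketch below trains a small neural network to approximate the consumption policy of the stochastic growth model by minimizing squared Euler-equation residuals. Parameter values and design choices are illustrative; the report itself provides fully commented codes.
      # Approximate the consumption policy c(k, z) of the stochastic growth model
      # by minimizing Euler-equation residuals; illustrative parameters and design.
      import torch
      import torch.nn as nn

      alpha, beta, delta, gamma = 0.33, 0.96, 0.10, 2.0  # technology and preferences
      rho, sigma = 0.9, 0.02                             # AR(1) log-productivity process
      k_ss = (alpha * beta / (1 - beta * (1 - delta))) ** (1 / (1 - alpha))  # steady state

      policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                             nn.Linear(32, 32), nn.Tanh(),
                             nn.Linear(32, 1), nn.Sigmoid())  # consumption share in (0, 1)
      opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

      def consumption(k, z):
          resources = z * k ** alpha + (1 - delta) * k
          share = policy(torch.stack([k, torch.log(z)], dim=-1)).squeeze(-1)
          return share * resources, resources

      for step in range(5_000):
          k = k_ss * torch.exp(torch.empty(256).uniform_(-0.5, 0.5))       # sampled capital states
          z = torch.exp(sigma / (1 - rho ** 2) ** 0.5 * torch.randn(256))  # stationary productivity draws
          c, resources = consumption(k, z)
          k_next = resources - c
          eps = torch.randn(10, 256)                                       # Monte Carlo shocks
          z_next = torch.exp(rho * torch.log(z) + sigma * eps)
          c_next, _ = consumption(k_next.expand(10, -1), z_next)
          ret = alpha * z_next * k_next ** (alpha - 1) + 1 - delta         # gross return on capital
          residual = beta * ((c_next / c) ** (-gamma) * ret).mean(dim=0) - 1.0
          loss = (residual ** 2).mean()
          opt.zero_grad(); loss.backward(); opt.step()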

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.