nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒05‒27
seven papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Regulating Artificial Intelligence By Pedro Teles; João Guerreiro; Sérgio Rebelo
  2. An Economic Solution to Copyright Challenges of Generative AI By Jiachen T. Wang; Zhun Deng; Hiroaki Chiba-Okabe; Boaz Barak; Weijie J. Su
  3. Identifying Risks and Ethical Considerations of AI in Gambling: A Scoping Review By Ghaharian, Kasra; Binesh, Nasim
  4. The GPT Surprise: Offering Large Language Model Chat in a Massive Coding Class Reduced Engagement but Increased Adopters' Exam Performances By Nie, Allen; Chandak, Yash; Suzara, Miroslav; Ali, Malika; Woodrow, Juliette; Peng, Matt; Sahami, Mehran; Brunskill, Emma; Piech, Chris
  5. Artificial intelligence investments reduce risks to critical mineral supply By Joaquin Vespignani; Russell Smyth
  6. Assessing the Potential of AI for Spatially Sensitive Nature-Related Financial Risks By Steven Reece; Emma O'Donnell; Felicia Liu; Joanna Wolstenholme; Frida Arriaga; Giacomo Ascenzi; Richard Pywell
  7. Bridging the innovation gap. AI and robotics as drivers of China’s urban innovation By Andres Rodriguez-Pose; Zhuoying You

  1. By: Pedro Teles; João Guerreiro; Sérgio Rebelo
    Abstract: We consider an environment in which there is substantial uncertainty about the potential adverse external effects of AI algorithms. We find that subjecting algorithm implementation to regulatory approval or mandating testing is insufficient to implement the social optimum. When testing costs are low, a combination of mandatory testing for external effects and making developers liable for the adverse external effects of their algorithms comes close to implementing the social optimum even when developers have limited liability.
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w202319&r=ain
  2. By: Jiachen T. Wang; Zhun Deng; Hiroaki Chiba-Okabe; Boaz Barak; Weijie J. Su
    Abstract: Generative artificial intelligence (AI) systems are trained on large data corpora to generate new pieces of text, images, videos, and other media. There is growing concern that such systems may infringe on the copyright interests of training data contributors. To address the copyright challenges of generative AI, we propose a framework that compensates copyright owners proportionally to their contributions to the creation of AI-generated content. The metric for contributions is quantitatively determined by leveraging the probabilistic nature of modern generative AI models and using techniques from cooperative game theory in economics. This framework enables a platform where AI developers benefit from access to high-quality training data, thus improving model performance. Meanwhile, copyright owners receive fair compensation, driving the continued provision of relevant data for generative model training. Experiments demonstrate that our framework successfully identifies the most relevant data sources used in artwork generation, ensuring a fair and interpretable distribution of revenues among copyright owners.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.13964&r=ain
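    The revenue-sharing idea in this paper rests on cooperative game theory; the classic device for splitting a jointly produced value by marginal contribution is the Shapley value. The sketch below illustrates that mechanic only — the coalition values and data-source names are invented, and the paper's actual contribution metric is derived from the generative model's probabilities, not from a hand-written table:

    ```python
    from itertools import permutations
    from math import factorial

    def shapley_values(players, value):
        """Average each player's marginal contribution over all orderings."""
        totals = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = set()
            for p in order:
                before = value(frozenset(coalition))
                coalition.add(p)
                totals[p] += value(frozenset(coalition)) - before
        n_orderings = factorial(len(players))
        return {p: t / n_orderings for p, t in totals.items()}

    # Hypothetical coalition values: revenue a model trained on each
    # subset of data sources would generate (invented numbers).
    revenue = {frozenset(): 0, frozenset({"A"}): 60, frozenset({"B"}): 40,
               frozenset({"A", "B"}): 120}
    shares = shapley_values(["A", "B"], lambda s: revenue[s])  # A: 70, B: 50
    ```

    Note that the shares sum exactly to the grand-coalition revenue (120), which is the "fair and interpretable distribution" property the abstract refers to.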
  3. By: Ghaharian, Kasra (University of Nevada, Las Vegas); Binesh, Nasim
    Abstract: The proliferation of data and artificial intelligence (AI) throughout society has raised concerns about its potential misuse and threats across industries. In this paper we explore the risks and ethical considerations of AI applications in gambling, an industry that makes significant contributions to many tourism destinations and local economies around the world. We conducted a scoping review to collect the breadth of literature and to understand the current state of knowledge. Our search yielded 2,499 potentially relevant documents, from which we deemed 16 eligible for inclusion. A content analysis revealed convergence around six main themes: (1) Explainability, (2) Exploitation, (3) Algorithmic Flaws, (4) Consumer Rights, (5) Accountability, and (6) Human-in-the-Loop. We found that these gambling-specific themes largely overlap with broader AI principles. Most records focused on algorithmic strategies to reduce gambling-related harm (n = 12/16), thus we call for more attention to be turned to commercially driven AI applications. We provide a theoretical evaluation that illustrates the challenges involved for stakeholders tasked with governing AI risks and associated ethical considerations. Because gambling is a globally reaching product, regulators and operators need to be cognizant not just of philosophical principles but also of the rich tapestry of global ethical traditions.
    Date: 2024–04–15
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:gpyub&r=ain
  4. By: Nie, Allen; Chandak, Yash; Suzara, Miroslav; Ali, Malika; Woodrow, Juliette; Peng, Matt; Sahami, Mehran; Brunskill, Emma; Piech, Chris
    Abstract: Large language models (LLMs) are quickly being adopted in a wide range of learning experiences, especially via ubiquitous and broadly accessible chat interfaces like ChatGPT and Copilot. This type of interface is readily available to students and teachers around the world, yet relatively little research has been done to assess the impact of such generic tools on student learning. Coding education is an interesting test case, both because LLMs have strong performance on coding tasks, and because LLM-powered support tools are rapidly becoming part of the workflow of professional software engineers. To help understand the impact of generic LLM use on coding education, we conducted a large-scale randomized controlled trial with 5,831 students from 146 countries in an online coding class in which we provided some students with access to a chat interface with GPT-4. We estimate positive benefits on exam performance for adopters, the students who used the tool, but over all students, the advertisement of GPT-4 led to a significant average decrease in exam participation. We observe similar decreases in other forms of course engagement. However, this decrease is modulated by the student's country of origin. Offering access to LLMs to students from low human development index countries increased their exam participation rate on average. Our results suggest there may be promising benefits to using LLMs in an introductory coding class, but also potential harms for engagement, which makes their longer-term impact on student success unclear. Our work highlights the need for additional investigations to help understand the potential impact of future adoption and integration of LLMs into classrooms.
    Date: 2024–04–25
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:qy8zd&r=ain
  5. By: Joaquin Vespignani (Tasmanian School of Business and Economics, University of Tasmania, Australia); Russell Smyth (Department of Economics, Monash University, Clayton, Australia)
    Abstract: This paper employs insights from earth science on the financial risk of project developments to present an economic theory of critical minerals. Our theory posits that back-ended critical mineral projects that have unaddressed technical and nontechnical barriers, such as those involving lithium and cobalt, exhibit an additional risk for investors, which we term the “back-ended risk premium”. We show that the back-ended risk premium increases the cost of capital and, therefore, has the potential to reduce investment in the sector. We posit that the back-ended risk premium may also reduce the gains in productivity expected from artificial intelligence (AI) technologies in the mining sector. Progress in AI may, however, lessen the back-ended risk premium itself by shortening the duration of mining projects and reducing the required rate of return on investment as the associated risk falls. We conclude that the best way to reduce the costs associated with energy transition is for governments to invest heavily in AI mining technologies and research.
    Keywords: Critical Minerals, Artificial Intelligence, Risk Premium
    JEL: Q02 Q40 Q50
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:mos:moswps:2024-08&r=ain
  6. By: Steven Reece; Emma O'Donnell; Felicia Liu; Joanna Wolstenholme; Frida Arriaga; Giacomo Ascenzi; Richard Pywell
    Abstract: There is growing recognition among financial institutions, financial regulators and policy makers of the importance of addressing nature-related risks and opportunities. Evaluating and assessing nature-related risks for financial institutions is challenging due to the large volume of heterogeneous data available on nature and the complexity of investment value chains and the various components' relationship to nature. The dual problem of scaling data analytics and analysing complex systems can be addressed using Artificial Intelligence (AI). We address issues such as plugging existing data gaps with discovered data, data estimation under uncertainty, time series analysis and (near) real-time updates. This report presents potential AI solutions for models of two distinct use cases, the Brazil Beef Supply Use Case and the Water Utility Use Case. Our two use cases cover a broad perspective within sustainable finance. The Brazilian cattle farming use case is an example of greening finance - integrating nature-related considerations into mainstream financial decision-making to transition investments away from sectors with poor historical track records and unsustainable operations. The deployment of nature-based solutions in the UK water utility use case is an example of financing green - driving investment to nature-positive outcomes. The two use cases also cover different sectors, geographies, financial assets and AI modelling techniques, providing an overview on how AI could be applied to different challenges relating to nature's integration into finance. This report is primarily aimed at financial institutions but is also of interest to ESG data providers, TNFD, systems modellers, and, of course, AI practitioners.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.17369&r=ain
  7. By: Andres Rodriguez-Pose; Zhuoying You
    Abstract: Artificial intelligence (AI) and robotics are revolutionising production, yet their potential to stimulate innovation and change innovation patterns remains underexplored. This paper examines whether AI and robotics can spearhead technological innovation, with a particular focus on their capacity to deliver where other policies have mostly failed: less developed cities and regions. We use OLS and IV-2SLS methods to probe the direct and moderating influences of AI and robotics on technological innovation across 270 Chinese cities. We further employ quantile regression analysis to assess their impacts on innovation in more and less innovative cities. The findings reveal that AI and robotics significantly promote technological innovation, with a pronounced impact in cities at or below the technological frontier. Additionally, the use of AI and robotics improves the returns of investment in science and technology (S&T) on technological innovation. The moderating effects of AI and robotics are often more pronounced in less innovative cities, meaning that AI and robotics are not just powerful instruments for promoting innovation but also effective mechanisms for reducing the yawning gap in regional innovation between Chinese innovation hubs and the rest of the country.
    Keywords: AI, robotics, China, technological innovation, territorial inequality
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:egu:wpaper:2412&r=ain
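    The IV-2SLS estimator this paper relies on can be sketched generically: first regress the endogenous regressor on the instrument, then regress the outcome on the fitted values. The synthetic data below is purely illustrative — the variables, the instrument, and the data-generating process are invented, not the paper's city-level data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    z = rng.normal(size=n)                   # instrument (exogenous shock)
    u = rng.normal(size=n)                   # unobserved confounder
    x = 0.8 * z + u + rng.normal(size=n)     # endogenous regressor (e.g. AI adoption)
    y = 2.0 * x + u + rng.normal(size=n)     # outcome (innovation); true effect = 2

    def ols(X, y):
        """Least-squares coefficients of y on the columns of X."""
        return np.linalg.lstsq(X, y, rcond=None)[0]

    X = np.column_stack([np.ones(n), x])
    Z = np.column_stack([np.ones(n), z])

    beta_ols = ols(X, y)                     # biased upward by the confounder u
    x_hat = Z @ ols(Z, x)                    # stage 1: project x on the instrument
    beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)  # stage 2
    ```

    Because the instrument is uncorrelated with the confounder, the second-stage slope converges to the true effect, while plain OLS does not — which is the reason studies of this kind pair OLS with IV-2SLS as a robustness check.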

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.