nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒07‒24
nine papers chosen by
Ben Greiner
Wirtschaftsuniversität Wien

  1. Regulatory sandboxes in artificial intelligence By OECD
  2. New Technologies and Jobs in Europe By Stefania Albanesi; António Dias da Silva; Juan F. Jimeno; Ana Lamo; Alena Wabitsch
  3. Questioning the ability of feature-based explanations to empower non-experts in robo-advised financial decision-making By Astrid Bertrand; James Eagan; Winston Maxwell
  4. Bloated Disclosures: Can ChatGPT Help Investors Process Financial Information? By Alex Kim; Maximilian Muhn; Valeri Nikolaev
  5. Temporal Data Meets LLM -- Explainable Financial Time Series Forecasting By Xinli Yu; Zheng Chen; Yuan Ling; Shujing Dong; Zongyi Liu; Yanbin Lu
  6. Using Deep Learning to Hedge Rainbow Options By Thibault Collin
  7. Statistical Tests for Replacing Human Decision Makers with Algorithms By Kai Feng; Han Hong; Ke Tang; Jingyuan Wang
  8. Uncovering the semantics of concepts using GPT-4 and other recent large language models By Gaël Le Mens; Balázs Kovács; Michael T. Hannan; Guillem Pros
  9. The Great Rush By Károly Fazekas

  1. By: OECD
    Abstract: This report focuses on regulatory sandboxes in artificial intelligence (AI), in which authorities engage firms to test innovative products or services that challenge existing legal frameworks. Participating firms obtain a waiver from specific legal provisions or compliance processes so that they can innovate. The report highlights positive impacts, such as increased venture capital investment in fintech start-ups, and points out challenges, risks, and policy considerations for AI sandboxes, emphasizing interdisciplinary cooperation, building AI expertise, regulatory interoperability, and trade policy. It also addresses the importance of comprehensive criteria for eligibility and for assessing trials, as well as the impact on innovation and competition.
    Date: 2023–07–13
  2. By: Stefania Albanesi; António Dias da Silva; Juan F. Jimeno; Ana Lamo; Alena Wabitsch
    Abstract: We examine the link between labour market developments and new technologies such as artificial intelligence (AI) and software in 16 European countries over the period 2011–2019. Using data for occupations at the 3-digit level in Europe, we find that, on average, employment shares have increased in occupations more exposed to AI. This is particularly the case for occupations with a relatively higher proportion of younger and skilled workers. This evidence is in line with the Skill-Biased Technological Change theory. While there is heterogeneity across countries, only very few countries show a decline in employment shares of occupations more exposed to AI-enabled automation. Country heterogeneity in this result seems to be linked to the pace of technology diffusion and education, but also to the level of product market regulation (competition) and employment protection laws. In contrast to the findings for employment, we find little evidence of a relationship between wages and potential exposure to new technologies.
    JEL: E24 J2 J21 J31 O30 O33
    Date: 2023–06
  3. By: Astrid Bertrand (IP Paris - Institut Polytechnique de Paris, DIVA - Design, Interaction, Visualization & Applications - LTCI - Laboratoire Traitement et Communication de l'Information - IMT - Institut Mines-Télécom [Paris] - Télécom Paris); James Eagan (DIVA - Design, Interaction, Visualization & Applications - LTCI - Laboratoire Traitement et Communication de l'Information - IMT - Institut Mines-Télécom [Paris] - Télécom Paris, IP Paris - Institut Polytechnique de Paris, INFRES - Département Informatique et Réseaux - Télécom ParisTech); Winston Maxwell (SES - Département Sciences Economiques et Sociales - Télécom ParisTech, ECOGE - Economie Gestion - I3 SES - Institut interdisciplinaire de l’innovation de Telecom Paris - Télécom ParisTech - I3 - Institut interdisciplinaire de l’innovation - CNRS - Centre National de la Recherche Scientifique, IP Paris - Institut Polytechnique de Paris, Télécom ParisTech, I3 SES - Institut interdisciplinaire de l’innovation de Telecom Paris - Télécom ParisTech - I3 - Institut interdisciplinaire de l’innovation - CNRS - Centre National de la Recherche Scientifique)
    Abstract: Robo-advisors are democratizing access to life insurance by enabling fully online underwriting. In Europe, financial legislation requires that the reasons for recommending a life insurance plan be explained according to the characteristics of the client, in order to empower the client to make a "fully informed decision". In this study conducted in France, we seek to understand whether legal requirements for feature-based explanations actually help users in their decision-making. We conduct a qualitative study to characterize the explainability needs formulated by non-expert users and by regulators with expertise in customer protection. We then run a large-scale quantitative study using Robex, a simplified robo-advisor built using ecological interface design that delivers recommendations with explanations in different hybrid textual and visual formats: either "dialogic" (more textual) or "graphical" (more visual). We find that providing feature-based explanations does not improve appropriate reliance or understanding compared to not providing any explanation. In addition, dialogic explanations increase users' trust in the recommendations of the robo-advisor, sometimes to the users' detriment. This real-world scenario illustrates how XAI can address information asymmetry in complex areas such as finance. This work has implications for other critical, AI-based recommender systems, where the General Data Protection Regulation (GDPR) may require similar provisions for feature-based explanations. CCS CONCEPTS • Human-centered computing → Empirical studies in HCI.
    Keywords: explainability, intelligibility, AI regulation, financial inclusion
    Date: 2023–06–12
  4. By: Alex Kim; Maximilian Muhn; Valeri Nikolaev
    Abstract: Generative AI tools such as ChatGPT can fundamentally change the way investors process information. We probe the economic usefulness of these tools in summarizing complex corporate disclosures using the stock market as a laboratory. The unconstrained summaries are dramatically shorter, often by more than 70% compared to the originals, whereas their information content is amplified. When a document has a positive (negative) sentiment, its summary becomes more positive (negative). More importantly, the summaries are more effective at explaining stock market reactions to the disclosed information. Motivated by these findings, we propose a measure of information "bloat." We show that bloated disclosure is associated with adverse capital markets consequences, such as lower price efficiency and higher information asymmetry. Finally, we show that the model is effective at constructing targeted summaries that identify firms' (non-)financial performance and risks. Collectively, our results indicate that generative language modeling adds considerable value for investors with information processing constraints.
    Date: 2023–06
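The "bloat" measure in the paper above is built from GPT-generated summaries; as a purely illustrative sketch (a naive length-based proxy, not the authors' measure), the intuition that bloat is the share of a document a faithful summary can drop might look like:

```python
def bloat_proxy(original: str, summary: str) -> float:
    """Crude length-based proxy for disclosure bloat: the fraction of the
    original document's words that a faithful summary can drop.
    0.0 = no compression possible; values near 1.0 = highly bloated."""
    n_orig = len(original.split())
    n_sum = len(summary.split())
    if n_orig == 0:
        raise ValueError("original document is empty")
    return max(0.0, 1.0 - n_sum / n_orig)

# A 1,000-word filing summarized in 250 words -> proxy bloat of 0.75
print(bloat_proxy("word " * 1000, "word " * 250))  # 0.75
```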
  5. By: Xinli Yu; Zheng Chen; Yuan Ling; Shujing Dong; Zongyi Liu; Yanbin Lu
    Abstract: This paper presents a novel study on harnessing Large Language Models' (LLMs) outstanding knowledge and reasoning abilities for explainable financial time series forecasting. The application of machine learning models to financial time series comes with several challenges, including the difficulty in cross-sequence reasoning and inference, the hurdle of incorporating multi-modal signals from historical news, financial knowledge graphs, etc., and the issue of interpreting and explaining the model results. In this paper, we focus on NASDAQ-100 stocks, making use of publicly accessible historical stock price data, company metadata, and historical economic/financial news. We conduct experiments to illustrate the potential of LLMs in offering a unified solution to the aforementioned challenges. Our experiments include trying zero-shot/few-shot inference with GPT-4 and instruction-based fine-tuning of a publicly available LLM, Open LLaMA. We demonstrate that our approach outperforms a few baselines, including the widely applied classic ARMA-GARCH model and a gradient-boosting tree model. Through the performance comparison results and a few examples, we find that LLMs can reach well-reasoned decisions by drawing on information from both textual news and price time series, extracting insights, leveraging cross-sequence information, and utilizing the inherent knowledge embedded within the LLM. Additionally, we show that a publicly available LLM such as Open LLaMA, after fine-tuning, can comprehend the instruction to generate explainable forecasts and achieve reasonable performance, albeit inferior to GPT-4's.
    Date: 2023–06
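The ARMA-GARCH baseline the abstract mentions rests on a simple conditional-variance recursion; a minimal pure-Python sketch of the GARCH(1,1) part (illustrative parameter values, not estimates from the paper) is:

```python
def garch11_variance_path(returns, omega=1e-5, alpha=0.1, beta=0.85):
    """Conditional variance recursion of a GARCH(1,1) model:
        sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]
    initialized at the unconditional variance omega / (1 - alpha - beta).
    In practice (omega, alpha, beta) are fitted by maximum likelihood,
    e.g. with the `arch` package; fixed values here are for illustration."""
    if alpha + beta >= 1.0:
        raise ValueError("alpha + beta must be < 1 for a stationary model")
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

The one-step-ahead volatility forecast that such a baseline produces is simply the square root of the last element of this path.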
  6. By: Thibault Collin (Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres)
    Abstract: The general scope of this thesis will be to further study the application of artificial neural networks in the context of hedging rainbow options. Due to their inherently complex features, such as the correlated paths that the prices of their underlying assets take or their absence from traded markets, finding an optimal hedging strategy for rainbow options is difficult, and traders usually have to resort to models and methods they know are inaccurate. An alternative approach involving deep learning has however recently surfaced in the context of hedging vanilla options [6], and researchers have started to see potential in the use of neural networks for options endowed with exotic features in [5], [12] and [22]. The key to a near-perfect hedge for contingent claims might be hidden behind the training of neural network algorithms [6], and the scope of this research will be to further investigate how those innovative hedging techniques can be extended to rainbow options [22], using recent research [21], and to compare our results with those produced by the models and techniques currently used by traders, such as running Monte-Carlo path simulations. In order to accomplish that, we will try to develop an algorithm capable of designing an innovative and optimal hedging strategy for rainbow options, using intuition developed to hedge vanilla options [21] and price exotics [5]. Although past literature suggests the approach is potentially efficient and cost-effective, the opaque nature of an artificial neural network makes it difficult for the deep learning algorithm to be fully trusted and used as a sole method for hedging purposes; it should rather serve as an additional technique alongside other, more reliable models.
    Keywords: quantitative finance, deep hedging, deep learning, machine learning, rainbow options, call options, worst-of call options, Black-Scholes, geometric Brownian motion
    Date: 2023–06–04
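As a toy illustration of the kind of objective a deep hedging network would minimize (not the thesis's algorithm), one can Monte-Carlo the squared hedging error of a static hedge on a two-asset worst-of call; a learned hedger would replace the fixed ratio with state-dependent outputs of a neural network trained by gradient descent on this loss:

```python
import math
import random

def worst_of_call_hedge_error(s0, k, vol, rho, delta, n_paths=5000, seed=0):
    """Monte-Carlo estimate of the mean squared hedging error of a static
    hedge ratio `delta` (held in both assets) for a one-period worst-of
    call on two correlated lognormal assets (zero rate, unit maturity).
    Illustrative sketch only: a deep hedging approach would minimize this
    kind of loss over state-dependent hedge ratios."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        # Correlated standard normals via a 2x2 Cholesky factor.
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        s1 = s0 * math.exp(-0.5 * vol ** 2 + vol * z1)
        s2 = s0 * math.exp(-0.5 * vol ** 2 + vol * z2)
        payoff = max(min(s1, s2) - k, 0.0)          # worst-of call payoff
        hedge_pnl = delta * ((s1 - s0) + (s2 - s0))  # static hedge P&L
        total += (payoff - hedge_pnl) ** 2
    return total / n_paths
```

On the same simulated paths, a sensible positive hedge ratio yields a lower error than holding no hedge at all, which is the gap a trained network tries to close further.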
  7. By: Kai Feng; Han Hong; Ke Tang; Jingyuan Wang
    Abstract: This paper proposes a statistical framework with which artificial intelligence can improve human decision making. The performance of each human decision maker is first benchmarked against machine predictions; we then replace the decisions made by a subset of the decision makers with the recommendations of the proposed artificial intelligence algorithm. Using a large nationwide dataset of pregnancy outcomes and doctor diagnoses from pre-pregnancy checkups of reproductive-age couples, we experimented with both a heuristic frequentist approach and a Bayesian posterior loss function approach, with an application to abnormal birth detection. We find that our algorithm, on a test dataset, results in a higher overall true positive rate and a lower false positive rate than the diagnoses made by doctors alone. We also find that the diagnoses of doctors from rural areas are more frequently replaceable, suggesting that artificial-intelligence-assisted decision making tends to improve precision more in less developed regions.
    Date: 2023–06
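The benchmarking step described above reduces, at its simplest, to comparing true and false positive rates on held-out outcomes; a minimal sketch (hypothetical data, not the paper's test statistics, which account for sampling uncertainty):

```python
def rates(predictions, outcomes):
    """True positive rate and false positive rate of binary predictions
    against realized binary outcomes."""
    tp = sum(1 for p, y in zip(predictions, outcomes) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(predictions, outcomes) if p == 1 and y == 0)
    pos = sum(outcomes)
    neg = len(outcomes) - pos
    return tp / pos, fp / neg

# Replace a decision maker's calls with the algorithm's only where the
# algorithm dominates on both rates (higher TPR, lower FPR) out of sample.
outcomes  = [1, 1, 0, 0, 1, 0]
doctor    = [1, 0, 1, 0, 0, 0]
algorithm = [1, 1, 0, 0, 1, 0]
# doctor: TPR 1/3, FPR 1/3; algorithm: TPR 1.0, FPR 0.0
print(rates(doctor, outcomes), rates(algorithm, outcomes))
```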
  8. By: Gaël Le Mens; Balázs Kovács; Michael T. Hannan; Guillem Pros
    Abstract: Recently, the world's attention has been captivated by Large Language Models (LLMs) thanks to OpenAI's ChatGPT, which rapidly proliferated as an app powered by GPT-3 and now its successor, GPT-4. If these LLMs produce human-like text, the semantic spaces they construct likely align with those used by humans for interpreting and generating language. This suggests that social scientists could use these LLMs to construct measures of semantic similarity that match human judgment. In this article, we provide an empirical test of this intuition. We use GPT-4 to construct a new measure of typicality: the similarity of a text document to a concept or category. We evaluate its performance against other model-based typicality measures in terms of their correspondence with human typicality ratings. We conduct this comparative analysis in two domains: the typicality of books in literary genres (using an existing dataset of book descriptions) and the typicality of tweets authored by US Congress members in the Democratic and Republican parties (using a novel dataset). The GPT-4 typicality measure not only meets or exceeds the current state of the art but accomplishes this without any model training. This is a breakthrough because the previous state-of-the-art measure required fine-tuning a model (a BERT text classifier) on hundreds of thousands of text documents to achieve its performance. Our comparative analysis emphasizes the need for systematic empirical validation of measures based on LLMs: several measures based on other recent LLMs achieve at best a moderate correspondence with human judgments.
    Keywords: categories, concepts, deep learning, typicality, GPT, ChatGPT, BERT, similarity
    JEL: C18 C52
    Date: 2023–06
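The paper elicits typicality judgments directly from GPT-4; a common alternative construction (not the authors' measure) scores a document's typicality as the embedding-space similarity between the document and the centroid of category exemplars:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length, non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def typicality(doc_vec, category_vecs):
    """Typicality of a document with respect to a category: cosine
    similarity between the document's embedding and the centroid of the
    category's exemplar embeddings. The embeddings themselves would come
    from some text-embedding model (not provided here)."""
    dim = len(doc_vec)
    centroid = [sum(v[i] for v in category_vecs) / len(category_vecs)
                for i in range(dim)]
    return cosine(doc_vec, centroid)
```

A document whose embedding points in the same direction as the category centroid scores 1.0; an orthogonal one scores 0.0.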
  9. By: Károly Fazekas (Centre for Economic and Regional Studies – Institute of Economics)
    Abstract: This paper provides a summary of the latest advancements in generative artificial intelligence using large language models over the past six months. The impact of this breakthrough remains uncertain, but it is evident that GPT is a General-Purpose Technology (GPT) that will significantly alter various aspects of our economy and society in ways that are yet to be fully comprehended. While it is essential for governments to regulate GPT technology, it is inevitable that the technology will continue to expand and evolve at a rapid pace. There is no doubt that every corner of the new world, if it exists at all, will be covered by millions of forms of artificial intelligence. The taming of AIs, and successful social and personal cooperation with domesticated AIs, could ensure our survival and prosperity in that world. Whether AIs that are capable of and willing to cooperate will populate the new world is neither an individual nor a national matter. But how a country and its people fare in that world is much more so.
    Keywords: innovation and invention: processes and incentives; technological change: choices and consequences; diffusion processes; technological innovation
    JEL: O31 O33 Q55
    Date: 2023–06

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.