nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒07‒31
ten papers chosen by
Ben Greiner
Wirtschaftsuniversität Wien

  1. Toward an Objective Measurement of AI Literacy By Weber, Patrick; Pinski, Marc; Baum, Lorenz
  2. Sea Change in Software Development: Economic and Productivity Analysis of the AI-Powered Developer Lifecycle By Thomas Dohmke; Marco Iansiti; Greg Richards
  3. Ideas Without Scale in French Artificial Intelligence Innovations By Johanna Deperi; Ludovic Dibiaggio; Mohamed Keita; Lionel Nesta
  4. Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models By Boyu Zhang; Hongyang Yang; Xiao-Yang Liu
  5. Unveiling the Potential of Sentiment: Can Large Language Models Predict Chinese Stock Price Movements? By Haohan Zhang; Fengrui Hua; Chengjin Xu; Jian Guo; Hao Kong; Ruiting Zuo
  6. Pricing European Options with Google AutoML, TensorFlow, and XGBoost By Juan Esteban Berger
  7. Stock Price Prediction using Dynamic Neural Networks By David Noel
  8. Comparing deep learning models for volatility prediction using multivariate data By Wenbo Ge; Pooia Lalbakhsh; Leigh Isai; Artem Lensky; Hanna Suominen
  9. Realistic Synthetic Financial Transactions for Anti-Money Laundering Models By Erik Altman; Béni Egressy; Jovan Blanuša; Kubilay Atasu
  10. What should be done about Google’s quasi-monopoly in search? Mandatory data sharing versus AI-driven technological competition By Bertin Martens

  1. By: Weber, Patrick; Pinski, Marc; Baum, Lorenz
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:138515&r=ain
  2. By: Thomas Dohmke (GitHub); Marco Iansiti (Harvard Business School and Keystone.AI); Greg Richards (Keystone.AI)
    Abstract: This study examines the impact of GitHub Copilot on a large sample of Copilot users (n=934,533). The analysis shows that users on average accept nearly 30% of the suggested code, leading to increased productivity. Furthermore, our research demonstrates that the acceptance rate rises over time and is particularly high among less experienced developers, providing them with substantial benefits. Additionally, our estimations indicate that the adoption of generative AI productivity tools could potentially contribute to a $1.5 trillion increase in global GDP by 2030. Moreover, our investigation sheds light on the diverse contributors in the generative AI landscape, including major technology companies, startups, academia, and individual developers. The findings suggest that the driving force behind generative AI software innovation lies within the open-source ecosystem, particularly in the United States. Remarkably, a majority of repositories on GitHub are led by individual developers. As more developers embrace these tools and acquire proficiency in the art of prompting with generative AI, it becomes evident that this novel approach to software development has forged a unique, inextricable link between humans and artificial intelligence. This symbiotic relationship has the potential to shape the construction of the world's software for future generations.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.15033&r=ain
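    A minimal, purely illustrative sketch of the acceptance-rate metric discussed above, computed from a hypothetical suggestion log grouped by developer experience; all column names and numbers are invented, since the paper's telemetry data is not public:

      # Toy illustration (not the paper's code): acceptance rate of AI code
      # suggestions, overall and by developer-experience cohort, from a
      # hypothetical event log with one row per suggestion shown.
      import pandas as pd

      log = pd.DataFrame({
          "developer_id": [1, 1, 2, 2, 3, 3, 3],
          "experience_years": [1, 1, 8, 8, 3, 3, 3],
          "accepted": [1, 0, 1, 0, 1, 1, 0],   # 1 = suggestion accepted
      })

      overall_rate = log["accepted"].mean()
      by_cohort = (
          log.assign(cohort=pd.cut(log["experience_years"],
                                   bins=[0, 2, 5, 50],
                                   labels=["junior", "mid", "senior"]))
             .groupby("cohort", observed=True)["accepted"].mean()
      )
      print(f"overall acceptance rate: {overall_rate:.0%}")
      print(by_cohort)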
  3. By: Johanna Deperi (University of Brescia); Ludovic Dibiaggio (SKEMA Business School); Mohamed Keita (SKEMA Business School); Lionel Nesta (GREDEG - Groupe de Recherche en Droit, Economie et Gestion - UNS - Université Nice Sophia Antipolis (1965 - 2019) - COMUE UCA - COMUE Université Côte d'Azur (2015-2019) - CNRS - Centre National de la Recherche Scientifique - UCA - Université Côte d'Azur, OFCE - Observatoire français des conjonctures économiques (Sciences Po) - Sciences Po - Sciences Po)
    Abstract: Artificial intelligence (AI) is viewed as the next technological revolution. The aim of this Policy Brief is to identify France's strengths and weaknesses in this great race for AI innovation. We characterise France's positioning relative to other key players and make the following observations: 1. Without being a world leader in innovation incorporating artificial intelligence, France is showing moderate but significant activity in this field. 2. France specialises in machine learning, unsupervised learning and probabilistic graphical models, and in developing solutions for the medical sciences, transport and security. 3. The AI value chain in France is poorly integrated, mainly due to a lack of integration in the downstream phases of the innovation chain. 4. The limited presence of French private players in the global AI arena contrasts with the extensive involvement of French public institutions. French public research organisations produce patents with great economic value. 5. Public players are the key actors in French networks for collaboration in patent development, but are not open to international and institutional diversity. In our opinion, France runs the risk of becoming a global AI laboratory located upstream in the AI innovation value chain. As such, it is likely to bear the sunk costs of AI invention, without enjoying the benefits of AI exploitation on a larger scale. In short, our fear is that French AI will be exported to other locations to prosper and grow.
    Date: 2023–06–26
    URL: http://d.repec.org/n?u=RePEc:hal:spmain:hal-04144817&r=ain
  4. By: Boyu Zhang; Hongyang Yang; Xiao-Yang Liu
    Abstract: Sentiment analysis is a vital tool for uncovering insights from financial articles, news, and social media, shaping our understanding of market movements. Despite the impressive capabilities of large language models (LLMs) in financial natural language processing (NLP), they still struggle with accurately interpreting numerical values and grasping financial context, limiting their effectiveness in predicting financial sentiment. In this paper, we introduce a simple yet effective instruction tuning approach to address these issues. By transforming a small portion of supervised financial sentiment analysis data into instruction data and fine-tuning a general-purpose LLM with this method, we achieve remarkable advancements in financial sentiment analysis. In the experiment, our approach outperforms state-of-the-art supervised sentiment analysis models, as well as widely used LLMs like ChatGPT and LLaMAs, particularly in scenarios where numerical understanding and contextual comprehension are vital.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12659&r=ain
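    A minimal sketch of the data-preparation step the abstract describes, converting labelled sentiment examples into instruction-tuning records; the prompt wording, field names, and file name are assumptions, not the paper's exact template:

      # Hypothetical sketch: convert supervised sentiment labels into
      # instruction-tuning records (prompt/response pairs) as JSON lines.
      # The instruction text and file names are assumptions, not the
      # paper's actual template.
      import json

      labelled = [
          {"text": "Q2 revenue rose 12% year over year.", "label": "positive"},
          {"text": "The firm warned of weaker margins ahead.", "label": "negative"},
      ]

      instruction = ("What is the sentiment of this financial text? "
                     "Answer with negative, neutral, or positive.")

      with open("instruct_sentiment.jsonl", "w") as f:
          for ex in labelled:
              record = {
                  "instruction": instruction,
                  "input": ex["text"],
                  "output": ex["label"],
              }
              f.write(json.dumps(record) + "\n")

    Records in this shape can feed a standard supervised instruction fine-tuning loop for a general-purpose LLM.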
  5. By: Haohan Zhang; Fengrui Hua; Chengjin Xu; Jian Guo; Hao Kong; Ruiting Zuo
    Abstract: The rapid advancement of Large Language Models (LLMs) has led to extensive discourse regarding their potential to boost the returns of quantitative stock trading strategies. This discourse primarily revolves around harnessing the remarkable comprehension capabilities of LLMs to extract sentiment factors that facilitate informed, high-frequency portfolio adjustments. To support the application of LLMs to Chinese financial texts and the subsequent development of trading strategies for the Chinese stock market, we provide a rigorous and encompassing benchmark as well as a standardized back-testing framework, aimed at objectively assessing the efficacy of various types of LLMs in the specialized domain of sentiment factor extraction from Chinese news text data. To illustrate how our benchmark works, we reference three distinctive models: 1) the generative LLM (ChatGPT), 2) the Chinese language-specific pre-trained LLM (Erlangshen-RoBERTa), and 3) the financial domain-specific fine-tuned LLM classifier (Chinese FinBERT). We apply them directly to the task of sentiment factor extraction from large volumes of Chinese news summary texts. We then build quantitative trading strategies, run back-tests under realistic trading scenarios based on the derived sentiment factors, and evaluate their performance with our benchmark. This comparative analysis raises the question of what constitutes the most important element for improving an LLM's performance on extracting sentiment factors. By ensuring that the LLMs are evaluated on the same benchmark, following the same standardized experimental procedures designed with sufficient expertise in quantitative trading, we take a first stride toward answering this question.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.14222&r=ain
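    A toy sketch of the pipeline the abstract outlines: score news items with some sentiment model, aggregate the scores into a daily per-stock factor, and form a simple ranking signal. The scoring function is a stand-in for whichever LLM is being benchmarked, and all data are invented:

      # Toy pipeline sketch (not the paper's framework): score news items,
      # aggregate into a daily sentiment factor per stock, and form a simple
      # long/short ranking. score_sentiment is a placeholder for whichever
      # model (ChatGPT, Erlangshen-RoBERTa, Chinese FinBERT, ...) is compared.
      import pandas as pd

      def score_sentiment(text: str) -> float:
          """Placeholder: return a sentiment score in [-1, 1]."""
          return 1.0 if "beats" in text else -1.0 if "misses" in text else 0.0

      news = pd.DataFrame({
          "date": ["2023-06-01", "2023-06-01", "2023-06-02"],
          "ticker": ["600519", "000001", "600519"],
          "summary": ["Company beats earnings forecast",
                      "Bank misses loan growth target",
                      "Neutral regulatory update"],
      })

      news["score"] = news["summary"].map(score_sentiment)
      factor = news.groupby(["date", "ticker"])["score"].mean().unstack()

      # Simple daily signal: long the top-ranked names, short the bottom-ranked.
      signal = factor.rank(axis=1, pct=True).apply(
          lambda r: (r > 0.7).astype(int) - (r < 0.3).astype(int), axis=1)
      print(signal)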
  6. By: Juan Esteban Berger
    Abstract: Researchers have been using neural networks and related machine learning techniques to price options since the early 1990s. After three decades of improvements in machine learning techniques, computational processing power, cloud computing, and data availability, this paper provides a comparison of Google Cloud's AutoML Regressor, TensorFlow neural networks, and XGBoost gradient boosting decision trees for pricing European options. All three types of models outperformed the Black-Scholes model in terms of mean absolute error. These results showcase the potential of using historical data on an option's underlying asset to price European options, especially with machine learning algorithms that learn complex patterns that traditional parametric models do not take into account.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.00476&r=ain
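    A hedged sketch of the comparison procedure, assuming synthetic option quotes: fit an XGBoost regressor on option features and compare its mean absolute error with the Black-Scholes formula. The data generation and hyperparameters are assumptions, so the numbers are illustrative only:

      # Sketch of the comparison procedure (not the paper's data or models):
      # fit a gradient-boosted tree regressor on option quotes and compare its
      # mean absolute error against the Black-Scholes formula.
      import numpy as np
      from scipy.stats import norm
      from xgboost import XGBRegressor
      from sklearn.metrics import mean_absolute_error
      from sklearn.model_selection import train_test_split

      def black_scholes_call(S, K, T, r, sigma):
          d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
          d2 = d1 - sigma * np.sqrt(T)
          return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

      rng = np.random.default_rng(0)
      n = 5000
      S = rng.uniform(80, 120, n)        # spot
      K = rng.uniform(80, 120, n)        # strike
      T = rng.uniform(0.1, 2.0, n)       # maturity in years
      r = np.full(n, 0.02)               # risk-free rate
      sigma = rng.uniform(0.1, 0.5, n)   # volatility

      # Synthetic "market" price: Black-Scholes value plus pricing noise.
      price = black_scholes_call(S, K, T, r, sigma) + rng.normal(0, 0.5, n)

      X = np.column_stack([S, K, T, r, sigma])
      idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.2, random_state=0)

      model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
      model.fit(X[idx_tr], price[idx_tr])

      mae_ml = mean_absolute_error(price[idx_te], model.predict(X[idx_te]))
      mae_bs = mean_absolute_error(
          price[idx_te],
          black_scholes_call(S[idx_te], K[idx_te], T[idx_te], r[idx_te], sigma[idx_te]))
      print(f"XGBoost MAE: {mae_ml:.3f}  Black-Scholes MAE: {mae_bs:.3f}")

    Because these synthetic prices are generated from Black-Scholes plus noise, the formula will tend to win here; the paper's finding rests on real market quotes, which this sketch does not reproduce.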
  7. By: David Noel
    Abstract: This paper analyzes and implements a time-series dynamic neural network to predict daily closing stock prices. Neural networks possess unsurpassed abilities in identifying underlying patterns in chaotic, non-linear, and seemingly random data, thus providing a mechanism to predict stock price movements much more precisely than many current techniques. Contemporary methods for stock analysis, including fundamental, technical, and regression techniques, are discussed and compared with the performance of neural networks. The Efficient Market Hypothesis (EMH) is also presented and contrasted with chaos theory using neural networks; the paper argues against the EMH and in support of chaos theory. Finally, recommendations for using neural networks in stock price prediction are presented.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12969&r=ain
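    A minimal sketch, assuming simulated prices, that approximates a dynamic (tapped-delay-line) network with a windowed Keras model predicting the next close from the previous ten closes; the paper's data and architecture are not reproduced here:

      # Minimal sketch (assumptions throughout): predict the next closing
      # price from the previous `window` closes with a small Keras MLP,
      # a windowed stand-in for a dynamic time-series network.
      import numpy as np
      from tensorflow import keras

      rng = np.random.default_rng(42)
      close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))  # simulated closes

      window = 10
      X = np.array([close[i:i + window] for i in range(len(close) - window)])
      y = close[window:]

      split = int(0.8 * len(X))
      model = keras.Sequential([
          keras.layers.Input(shape=(window,)),
          keras.layers.Dense(32, activation="relu"),
          keras.layers.Dense(16, activation="relu"),
          keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X[:split], y[:split], epochs=20, batch_size=32, verbose=0)

      mae = np.mean(np.abs(model.predict(X[split:], verbose=0).ravel() - y[split:]))
      print(f"test MAE on simulated data: {mae:.2f}")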
  8. By: Wenbo Ge; Pooia Lalbakhsh; Leigh Isai; Artem Lensky; Hanna Suominen
    Abstract: This study compares several deep learning-based forecasters on the task of volatility prediction using multivariate data, proceeding from simpler, shallower models to deeper and more complex ones, and benchmarks them against the naive prediction and variants of classical GARCH models. Specifically, the volatility of five assets (S&P 500, NASDAQ 100, gold, silver, and oil) was predicted with GARCH models, Multi-Layer Perceptrons, recurrent neural networks, Temporal Convolutional Networks, and the Temporal Fusion Transformer. In most cases, the Temporal Fusion Transformer, followed by variants of the Temporal Convolutional Network, outperformed classical approaches and shallow networks. These experiments were repeated, and the difference between competing models was shown to be statistically significant, encouraging their use in practice.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12446&r=ain
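    A sketch of the classical baseline side only, assuming simulated returns: a GARCH(1, 1) one-step-ahead volatility forecast via the arch package next to the naive forecast; the deep learning models compared in the paper are not reproduced here:

      # Sketch of the classical baselines only (deep models not reproduced):
      # a GARCH(1,1) one-step-ahead volatility forecast via the `arch` package,
      # next to a naive forecast. Returns here are simulated, for illustration.
      import numpy as np
      from arch import arch_model

      rng = np.random.default_rng(7)
      returns = 100 * rng.normal(0, 0.01, 2000)   # simulated % returns

      garch = arch_model(returns, vol="GARCH", p=1, q=1, mean="Zero")
      res = garch.fit(disp="off")

      # One-step-ahead conditional volatility forecast.
      forecast = res.forecast(horizon=1)
      garch_vol = np.sqrt(forecast.variance.values[-1, 0])

      # Naive baseline: use the most recent absolute return as the forecast.
      naive_vol = abs(returns[-1])

      print(f"GARCH(1,1) one-step vol: {garch_vol:.3f}, naive: {naive_vol:.3f}")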
  9. By: Erik Altman; Béni Egressy; Jovan Blanuša; Kubilay Atasu
    Abstract: With the widespread digitization of finance and the increasing popularity of cryptocurrencies, the sophistication of fraud schemes devised by cybercriminals is growing. Money laundering -- the movement of illicit funds to conceal their origins -- can cross bank and national boundaries, producing complex transaction patterns. The UN estimates that 2-5% of global GDP, or $0.8-$2.0 trillion, is laundered globally each year. Unfortunately, real data to train machine learning models to detect laundering is generally not available, and previous synthetic data generators have had significant shortcomings. A realistic, standardized, publicly available benchmark is needed for comparing models and for the advancement of the area. To this end, this paper contributes a synthetic financial transaction dataset generator and a set of synthetically generated AML (Anti-Money Laundering) datasets. We have calibrated this agent-based generator to match real transactions as closely as possible and made the datasets public. We describe the generator in detail and demonstrate how the generated datasets can help compare different Graph Neural Networks in terms of their AML abilities. In a key way, using synthetic data in these comparisons can be even better than using real data: the ground truth labels are complete, whilst many laundering transactions in real data are never detected.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.16424&r=ain
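    A toy illustration, not the paper's generator: simulate background transfers between accounts, inject a simple layering chain, and label those edges as laundering, yielding a labelled edge list a graph neural network could train on:

      # Toy illustration (not the paper's generator): simulate normal
      # account-to-account transfers, then inject a "layering" chain of
      # transfers through intermediaries and label those edges as laundering.
      import random
      import pandas as pd

      random.seed(0)
      accounts = list(range(100))
      rows = []

      # Background activity: random transfers between accounts.
      for _ in range(2000):
          src, dst = random.sample(accounts, 2)
          rows.append({"src": src, "dst": dst,
                       "amount": round(random.uniform(10, 5000), 2),
                       "is_laundering": 0})

      # Injected pattern: one amount hopped through a chain of intermediaries.
      chain = random.sample(accounts, 6)
      amount = 9500.0
      for src, dst in zip(chain[:-1], chain[1:]):
          amount *= 0.98  # small "fee" at each hop
          rows.append({"src": src, "dst": dst,
                       "amount": round(amount, 2),
                       "is_laundering": 1})

      transactions = pd.DataFrame(rows)
      print(transactions["is_laundering"].value_counts())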
  10. By: Bertin Martens
    Abstract: This paper explores the crucial role of search engines in modern digital economies and their impact on user welfare.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:bre:wpaper:node_9228&r=ain

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.