nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒08‒21
seven papers chosen by
Ben Greiner
Wirtschaftsuniversität Wien

  1. Improving Human Deception Detection Using Algorithmic Feedback By Marta Serra-Garcia; Uri Gneezy
  2. Please take over: XAI, delegation of authority, and domain knowledge By Bauer, Kevin; von Zahn, Moritz; Hinz, Oliver
  3. Epidemic Modeling with Generative Agents By Ross Williams; Niyousha Hosseinichimeh; Aritra Majumdar; Navid Ghaffarzadegan
  4. Regulating Transformative Technologies By Daron Acemoglu; Todd Lensman
  5. The employment impact of AI technologies among AI innovators By Giacomo Damioli; Vincent Van Roy; Daniel Vertesy; Marco Vivarelli
  6. Ideas Without Scale in French Artificial Intelligence Innovations By Johanna Deperi; Ludovic Dibiaggio; Mohamed Keita; Lionel Nesta
  7. Using GPT-4 for Financial Advice By Christian Fieberg; Lars Hornuf; David J. Streich

  1. By: Marta Serra-Garcia; Uri Gneezy
    Abstract: Can algorithms help people predict behavior in high-stakes prisoner’s dilemmas? Participants watching the pre-play communication of contestants in the TV show Golden Balls display a limited ability to predict contestants’ behavior, while algorithms do significantly better. We provide participants with algorithmic advice by flagging videos for which an algorithm predicts a high likelihood of cooperation or defection. We find that the effectiveness of flags depends on their timing: participants rely significantly more on flags shown before they watch the videos than on flags shown after they watch them. These findings show that the timing of algorithmic feedback is key to its adoption.
    Keywords: detecting lies, machine learning, cooperation, experiment
    JEL: D83 D91 C72 C91
    Date: 2023
  2. By: Bauer, Kevin; von Zahn, Moritz; Hinz, Oliver
    Abstract: Recent regulatory measures such as the European Union's AI Act require artificial intelligence (AI) systems to be explainable. As such, understanding how explainability impacts human-AI interaction, and pinpointing the specific circumstances and groups affected, is imperative. In this study, we devise a formal framework and conduct an empirical investigation involving real estate agents to explore the complex interplay between explainability of and delegation to AI systems. On an aggregate level, our findings indicate that real estate agents display a higher propensity to delegate apartment evaluations to an AI system when its workings are explainable, thereby surrendering control to the machine. However, at an individual level, we detect considerable heterogeneity. Agents possessing extensive domain knowledge are generally more inclined to delegate decisions to AI and to minimize their effort when provided with explanations. Conversely, agents with limited domain knowledge only exhibit this behavior when explanations correspond with their preconceived notions regarding the relationship between apartment features and listing prices. Our results illustrate that the introduction of explainability in AI systems may transfer decision-making control from humans to AI under the veil of transparency, which has notable implications for policy makers and practitioners that we discuss.
    Date: 2023
  3. By: Ross Williams; Niyousha Hosseinichimeh; Aritra Majumdar; Navid Ghaffarzadegan
    Abstract: This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to reason and make its own decisions by connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolating when cases rise. Collectively, the agents demonstrate patterns akin to the multiple waves observed in recent pandemics, followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates potential to improve dynamic system modeling by offering a way to represent human reasoning and decision making.
    Date: 2023–07
  4. By: Daron Acemoglu; Todd Lensman
    Abstract: Transformative technologies like generative artificial intelligence promise to accelerate productivity growth across many sectors, but they also present new risks from potential misuse. We develop a multi-sector technology adoption model to study the optimal regulation of transformative technologies when society can learn about these risks over time. Socially optimal adoption is gradual and convex. If social damages are proportional to the productivity gains from the new technology, a higher growth rate leads to slower optimal adoption. Equilibrium adoption is inefficient when firms do not internalize all social damages, and sector-independent regulation is helpful but generally not sufficient to restore optimality.
    JEL: H21 O33 O41
    Date: 2023–07
  5. By: Giacomo Damioli; Vincent Van Roy; Daniel Vertesy; Marco Vivarelli
    Abstract: This study supports the labour-friendly nature of product innovation among developers of artificial intelligence (AI) technologies. GMM-SYS estimates on a worldwide longitudinal dataset covering 3,500 companies that patented inventions related to AI technologies over the period 2000-2016 show a positive and significant impact of AI patent families on employment. The effect is small in magnitude and limited to service sectors and younger firms, which are front-runners of the AI revolution. We also detect some evidence of increasing returns, suggesting that innovative companies more focused on AI technologies obtain larger impacts in terms of job creation.
    Keywords: Innovation, technological change, artificial intelligence, patents, employment, job-creation
    Date: 2023–07–12
  6. By: Johanna Deperi (University of Brescia); Ludovic Dibiaggio (SKEMA Business School); Mohamed Keita (SKEMA Business School); Lionel Nesta (GREDEG - Groupe de Recherche en Droit, Economie et Gestion - UNS - Université Nice Sophia Antipolis (1965 - 2019) - COMUE UCA - COMUE Université Côte d'Azur (2015-2019) - CNRS - Centre National de la Recherche Scientifique - UCA - Université Côte d'Azur, OFCE - Observatoire français des conjonctures économiques (Sciences Po) - Sciences Po - Sciences Po)
    Abstract: Artificial intelligence (AI) is viewed as the next technological revolution. The aim of this Policy Brief is to identify France's strengths and weaknesses in the great race for AI innovation. We characterise France's positioning relative to other key players and make the following observations:
    1. Without being a world leader in innovation incorporating artificial intelligence, France shows moderate but significant activity in this field.
    2. France specialises in machine learning, unsupervised learning and probabilistic graphical models, and in developing solutions for the medical sciences, transport and security.
    3. The AI value chain in France is poorly integrated, mainly owing to weak linkages in the downstream phases of the innovation chain.
    4. The limited presence of French private players in the global AI arena contrasts with the extensive involvement of French public institutions. French public research organisations produce patents of great economic value.
    5. Public players are the key actors in French networks for collaboration in patent development, but these networks are not open to international and institutional diversity.
    In our opinion, France runs the risk of becoming a global AI laboratory located upstream in the AI innovation value chain. As such, it is likely to bear the sunk costs of AI invention without enjoying the benefits of AI exploitation at larger scale. In short, our fear is that French AI will be exported to other locations to prosper and grow.
    Date: 2023–06–26
  7. By: Christian Fieberg; Lars Hornuf; David J. Streich
    Abstract: We show that the recently released text-based artificial intelligence tool GPT-4 can provide suitable financial advice. The tool suggests specific investment portfolios that reflect an investor’s individual circumstances such as risk tolerance, risk capacity, and sustainability preference. Notably, while the suggested portfolios display home bias and are rather insensitive to the investment horizon, their historical risk-adjusted performance is on par with that of a professionally managed benchmark portfolio. Given the current inability of GPT-4 to provide full-service financial advice, it may be used by financial advisors as a back-office tool for portfolio recommendation.
    Keywords: GPT-4, ChatGPT, financial advice, artificial intelligence, portfolio management
    JEL: G00 G11
    Date: 2023

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.