nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒11‒27
thirteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien

  1. Trust in times of AI By Francesco Bogliacino; Paolo Buonanno; Francesco Fallucchi; Marcello Puca
  2. Ruled by Robots: Preference for Algorithmic Decision Makers and Perceptions of Their Choices By Marina Chugunova; Wolfgang Luhan
  3. Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making By Daniela Sele; Marina Chugunova
  4. Artificial intelligence and the skill premium. By David E. Bloom; Klaus Prettner; Jamel Saadaoui; Mario Veruete
  5. Artificial intelligence and jobs: evidence from online vacancies By Acemoglu, Daron; Autor, David; Hazell, Jonathon; Restrepo, Pascual
  6. Market Concentration Implications of Foundation Models By Jai Vipra; Anton Korinek
  7. On the use of artificial intelligence in financial regulations and the impact on financial stability By Jon Danielsson; Andreas Uthemann
  8. How big is the real estate property? Using zero-shot vs. rule-based classification for size extraction in real estate contracts By Julia Angerer; Wolfgang Brunauer
  9. Changing the Location Game – Improving Location Analytics with the Help of Explainable AI By Moritz Stang; Bastian Krämer; Marcelo Del Cajias; Wolfgang Schäfers
  10. Location Analysis and Pricing of Amenities By Anett Wins; Marcelo Del Cajias
  11. Multimodal Information Fusion for the Prediction of the Condition of Condominiums By Miroslav Despotovic; David Koch; Matthias Zeppelzauer; Stumpe Eric; Simon Thaler; Wolfgang A. Brunauer
  12. Deepfake Detection With and Without Content Warnings By Lewis, Andrew; Vu, Patrick; Duch, Raymond; Chowdhury, Areeq
  13. Logic Mill - A Knowledge Navigation System By Sebastian Erhardt; Mainak Ghosh; Erik Buunk; Michael E. Rose; Dietmar Harhoff

  1. By: Francesco Bogliacino (Università di Bergamo); Paolo Buonanno (Università di Bergamo); Francesco Fallucchi (Università di Bergamo); Marcello Puca (Università di Bergamo, CSEF and Webster University Geneva)
    Abstract: In an online, pre-registered experiment, we explore the impact of AI-mediated communication within the context of a Trust Game with unverifiable actions. We compare a baseline treatment, where no communication is allowed, to treatments where participants can use free-form communication or have the additional option of using ChatGPT-generated promises, which were assessed in a companion experiment. We confirm previous observations that communication bolsters trust and trustworthiness. In the AI treatment, trustworthiness sees the most significant increase, yet trust levels decline for those who opt not to write a message. AI-generated promises become more frequent but garner less trust. Consequently, the overall trust and efficiency levels in the AI treatment align with those of human communication. Contrary to our assumptions, less trustworthy individuals do not show a higher propensity to delegate messages to ChatGPT.
    Keywords: Artificial Intelligence, Trust Game, ChatGPT, Experiment.
    JEL: C93 D83 D84 D91
    Date: 2023–10–24
  2. By: Marina Chugunova (Max Planck Institute for Innovation and Competition); Wolfgang Luhan (University of Portsmouth)
    Abstract: As technology-assisted decision-making becomes more widespread, it is important to understand how the algorithmic nature of the decision maker affects how decisions are perceived by the affected people. We use a laboratory experiment to study the preference for human or algorithmic decision makers in redistributive decisions. In particular, we consider whether an algorithmic decision maker is preferred because of its unbiasedness. Contrary to previous findings, the majority of participants (over 60%) prefer the algorithm as a decision maker over a human, but this is not driven by concerns over biased decisions. Yet, despite this preference, the decisions made by humans are regarded more favorably. Participants judge the decisions to be equally fair, but are nonetheless less satisfied with the AI decisions. Subjective ratings of the decisions are mainly driven by own material interests and fairness ideals. For the latter, players display remarkable flexibility: they tolerate any explainable deviation between the actual decision and their ideals, but react very strongly and negatively to redistribution decisions that do not fit any fairness ideals. Our results suggest that even in the realm of moral decisions, algorithmic decision makers might be preferred, but the actual performance of the algorithm plays an important role in how its decisions are rated.
    Keywords: delegation; algorithm aversion; redistribution; fairness;
    JEL: C91 D31 D81 D9 O33
    Date: 2023–10–24
  3. By: Daniela Sele (ETH); Marina Chugunova (Max Planck Institute for Innovation and Competition)
    Abstract: Are people algorithm averse, as some previous literature indicates? If so, can the retention of human oversight increase the uptake of algorithmic recommendations, and does keeping a human in the loop improve accuracy? Answers to these questions are of utmost importance given the fast-growing availability of algorithmic recommendations and current intense discussions about the regulation of automated decision-making. In an online experiment, we find that 66% of participants prefer algorithmic to equally accurate human recommendations if the decision is delegated fully. This preference for algorithms increases by a further 7 percentage points if participants are able to monitor and adjust the recommendations before the decision is made. In line with automation bias, participants adjust recommendations that stem from an algorithm by less than those from another human. Importantly, participants are less likely to intervene with the least accurate recommendations and adjust them by less, raising concerns about the monitoring ability of a human in a Human-in-the-Loop system. Our results document a trade-off: while allowing people to adjust algorithmic recommendations increases their uptake, the adjustments made by the human monitors reduce the quality of final decisions.
    Keywords: automated decision-making; algorithm aversion; algorithm appreciation; automation bias;
    JEL: O33 C90 D90
    Date: 2023–10–24
  4. By: David E. Bloom; Klaus Prettner; Jamel Saadaoui; Mario Veruete
    Abstract: What will likely be the effect of the emergence of ChatGPT and other forms of artificial intelligence (AI) on the skill premium? To address this question, we develop a nested constant elasticity of substitution production function that distinguishes between industrial robots and AI. Industrial robots predominantly substitute for low-skill workers, whereas AI mainly helps to perform the tasks of high-skill workers. We show that AI reduces the skill premium as long as it is more substitutable for high-skill workers than low-skill workers are for high-skill workers.
    Keywords: Automation, Artificial Intelligence, ChatGPT, Skill Premium, Wages, Productivity.
    JEL: J30 O14 O15 O33
    Date: 2023
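The nested CES structure described in the abstract can be sketched as follows (the notation and nesting below are our own illustration of the setup, not necessarily the authors' exact specification). Low-skill labor L_l is nested with industrial robots R, and high-skill labor L_h with AI capital M:

```latex
\[
Y \;=\; \Bigl[\, \alpha \bigl( L_{\ell}^{\,\rho} + R^{\,\rho} \bigr)^{\gamma/\rho}
      \;+\; (1-\alpha) \bigl( L_{h}^{\,\sigma} + M^{\,\sigma} \bigr)^{\gamma/\sigma} \Bigr]^{1/\gamma}
\]
% sigma: substitutability between AI (M) and high-skill labor within the upper nest
% gamma: substitutability across the two skill nests
% skill premium: w_h / w_\ell = (\partial Y / \partial L_h) / (\partial Y / \partial L_\ell)
```

In a specification of this kind, the abstract's condition corresponds to AI accumulation lowering the skill premium when the within-nest elasticity for AI and high-skill workers exceeds the across-nest elasticity between skill groups (here, sigma > gamma).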
  5. By: Acemoglu, Daron; Autor, David; Hazell, Jonathon; Restrepo, Pascual
    Abstract: We study the impact of artificial intelligence (AI) on labor markets using establishment-level data on the near universe of online vacancies in the United States from 2010 onward. There is rapid growth in AI-related vacancies over 2010–18 that is driven by establishments whose workers engage in tasks compatible with AI’s current capabilities. As these AI-exposed establishments adopt AI, they simultaneously reduce hiring in non-AI positions and change the skill requirements of remaining postings. While visible at the establishment level, the aggregate impacts of AI-labor substitution on employment and wage growth in more exposed occupations and industries are currently too small to be detectable.
    Keywords: artificial intelligence; displacement; labor; jobs; tasks; technology; wages
    JEL: J23 O33
    Date: 2022–04–01
  6. By: Jai Vipra; Anton Korinek
    Abstract: We analyze the structure of the market for foundation models, i.e., large AI models such as those that power ChatGPT and that are adaptable to downstream uses, and we examine the implications for competition policy and regulation. We observe that the most capable models will have a tendency towards natural monopoly and may have potentially vast markets. This calls for a two-pronged regulatory response: (i) Antitrust authorities need to ensure the contestability of the market by tackling strategic behavior, in particular by ensuring that monopolies do not propagate vertically to downstream uses, and (ii) given the diminished potential for market discipline, there is a role for regulators to ensure that the most capable models meet sufficient quality standards (including safety, privacy, non-discrimination, reliability and interoperability standards) to maximally contribute to social welfare. Regulators should also ensure a level regulatory playing field between AI and non-AI applications in all sectors of the economy. For models that are behind the frontier, we expect competition to be quite intense, implying a more limited role for competition policy, although a role for regulation remains.
    Date: 2023–11
  7. By: Jon Danielsson; Andreas Uthemann
    Abstract: Artificial intelligence (AI) is making rapid inroads in financial regulations. It will benefit micro regulations, concerned with issues like consumer protection and routine banking regulations, because of ample data, short time horizons, clear objectives, and repeated decisions that leave plenty of data for AI to train on. It is different with macro regulations focused on the stability of the entire financial system. Here, infrequent and mostly unique events frustrate AI learning. Distributed human decision making in times of extreme stress has strong advantages over centralised AI decisions, which, coupled with the catastrophic cost of mistakes, raises questions about AI used in macro regulations. However, AI will likely become widely used by stealth as it takes over increasingly high level advice and decisions, driven by significant cost efficiencies, robustness and accuracy compared to human regulators. We propose six criteria against which to judge the suitability of AI use by the private sector and financial regulation.
    Date: 2023–10
  8. By: Julia Angerer; Wolfgang Brunauer
    Abstract: Given the massive volume of real estate-related text documents, the need to process the data automatically is evident. Purchase contracts in particular contain valuable transaction and property description information, such as usable area. In this research project, a natural language processing (NLP) approach using open-source transformer-based models was investigated. The potential of pre-trained language models for zero-shot classification is highlighted, especially in cases where no training data is available. This approach is particularly relevant for analyzing purchase contracts in the legal domain, where it can be challenging to extract the information manually or to build comprehensive regular-expression rules. A data set consisting of classified contract sentence parts, each containing one size and context information, was created manually for model comparison. The experiments conducted in this study demonstrate that pre-trained language models can accurately classify sentence parts containing a size, with varying levels of performance across different models. The results suggest that pre-trained language models can be effective tools for processing textual data in the real estate and legal domains and can provide valuable insights into the underlying structures and patterns in such data. Overall, this research contributes to the understanding of the capabilities of pre-trained language models in NLP and highlights their potential for practical applications in real-world settings, particularly in the legal domain, where there is a large volume of textual data and annotated training data is not available.
    Keywords: contract documents; Information Extraction; Natural Language Processing; zero-shot classification
    JEL: R3
    Date: 2023–01–01
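The rule-based baseline that the paper compares against zero-shot classification can be illustrated with a small regular-expression size extractor (the pattern below is our own illustrative sketch, not the authors' rule set):

```python
import re

# Illustrative rule: a number (optionally with a decimal comma or point)
# followed by a square-metre unit, as it might appear in a purchase contract.
SIZE_PATTERN = re.compile(
    r"(\d{1,4}(?:[.,]\d{1,2})?)\s*(?:m2|m²|sqm|square\s+met(?:er|re)s?)",
    re.IGNORECASE,
)

def extract_sizes(sentence):
    """Return all size mentions in a sentence as floats (square metres)."""
    # findall returns the captured number strings; normalize decimal commas.
    return [float(m.replace(",", ".")) for m in SIZE_PATTERN.findall(sentence)]
```

Hand-written rules like this are brittle (unit variants, OCR noise, legal phrasing), which is exactly the gap the zero-shot classification approach in the paper aims to close.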
  9. By: Moritz Stang; Bastian Krämer; Marcelo Del Cajias; Wolfgang Schäfers
    Abstract: Besides its structural and economic characteristics, the location of a property is probably one of the most important determinants of its underlying value. In contrast to property valuations, there are hardly any approaches to date that evaluate the quality of a real estate location in an automated manner. The reasons are the complexity, the number of interactions and the non-linearities underlying the quality specifications of a certain location. These are difficult to represent by traditional econometric models. The aim of this paper is thus to present a newly developed data-driven approach for the assessment of real estate locations. By combining a state-of-the-art machine learning algorithm and the local post-hoc model-agnostic method of Shapley Additive Explanations, the newly developed SHAP location score is able to account for empirical complexities, especially non-linearities and higher-order interactions. The SHAP location score represents an intuitive and flexible approach based on econometric modeling techniques and the basic assumptions of hedonic pricing theory. The approach can be applied post-hoc to any common machine learning method and can be flexibly adapted to the respective needs. This constitutes a significant extension of traditional urban models and offers many advantages for a wide range of real estate players.
    Keywords: Automated Location Valuation Model; Explainable AI; Location Analytics; Machine Learning
    JEL: R3
    Date: 2023–01–01
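The Shapley Additive Explanations idea behind the SHAP location score can be illustrated with a toy exact Shapley computation (the features, value function, and interaction term below are invented for illustration; the paper applies SHAP to a trained machine learning model, not to a hand-written function):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: weighted average of each feature's marginal
    contribution over all coalitions of the remaining features."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(n):
            for coalition in combinations(others, k):
                s = set(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value(s | {f}) - value(s))
    return phi

# Hypothetical location features with one interaction effect.
BASE = {"transit": 0.30, "schools": 0.15, "parks": 0.05}

def location_value(coalition):
    v = sum(BASE[f] for f in coalition)
    if "transit" in coalition and "schools" in coalition:
        v += 0.10  # interaction: transit and schools reinforce each other
    return v

scores = shapley_values(list(BASE), location_value)
```

The interaction bonus is split symmetrically between the two interacting features (0.05 each), which is the property that lets a SHAP-based location score attribute non-linear, interacting location effects to individual drivers.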
  10. By: Anett Wins; Marcelo Del Cajias
    Abstract: Modern location analysis evaluates location attractiveness almost in real time, combining the knowledge of local real estate experts and artificial intelligence. In this paper we develop an algorithm – the Amenities Magnet algorithm – that measures and benchmarks the attractiveness of locations based on the urban amenities’ footprint of the surrounding area, grouped according to relevance for residential purposes and taking distance information from Google and OpenStreetMap into account. As cities are continuously evolving, benchmarking locations’ amenity-wise change of attractiveness over time helps to detect upswing areas and thus supports investment decisions. According to the 15-minute city concept, the welfare of residents is proportional to the amenities accessible within a short walk or bike ride. Measuring individual scorings for the seven basic living needs results in a more detailed, disaggregated location assessment. Based on these insights, an advanced machine learning (ML) algorithm under the Gradient Boosting framework (XGBoost) is adapted to model residential rental prices for the Greater Manchester region, United Kingdom, and achieves improved predictive power. To extract interpretable results and quantify the contribution of certain amenities to rental prices, eXplainable Artificial Intelligence (XAI) methods are used. Tenants' willingness to pay (WTP) for accessibility to amenities varies by type. In Manchester, tram stops, bars, schools and proximity to the city center in particular emerged as relevant value drivers. Even if the results of the case study are not generally applicable, the methodology can be transferred to any market in order to reveal regional patterns.
    Keywords: Amenities Magnet algorithm; location analysis; residential rental pricing; XGBoost
    JEL: R3
    Date: 2023–01–01
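The 15-minute-city logic the abstract invokes can be sketched as a simple accessibility score (the walking speed, threshold, and decay rule below are our own assumptions for illustration, not the Amenities Magnet algorithm itself):

```python
WALK_SPEED_M_PER_MIN = 80  # assumed walking speed, ~4.8 km/h

def accessibility_score(amenities, threshold_min=15):
    """Score a location from (category, distance_m) amenity pairs:
    amenities within the walking-time threshold count fully, farther
    ones are discounted in proportion to the excess travel time."""
    score = 0.0
    for _category, distance_m in amenities:
        walk_min = distance_m / WALK_SPEED_M_PER_MIN
        if walk_min <= threshold_min:
            score += 1.0
        else:
            score += threshold_min / walk_min  # linear decay beyond threshold
    return score
```

A real implementation would, as the abstract describes, group amenities by residential relevance and use network travel times from Google or OpenStreetMap rather than straight-line distances.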
  11. By: Miroslav Despotovic; David Koch; Matthias Zeppelzauer; Stumpe Eric; Simon Thaler; Wolfgang A. Brunauer
    Abstract: Today's data analysis techniques allow for the combination of multiple different data modalities, which should also allow for more accurate feature extraction. In our research, we leverage the capacity of machine learning tools to build a model with shared neural network layers and multiple inputs that is more flexible and allows for more robust extraction of real estate attributes. The most common form of data for a real estate assessment is data structured in tables, such as size or year of construction, but also descriptions of the real estate. Other data that can also be easily found in real estate listings are visual data such as exterior and interior photographs. In the presented approach, we fuse textual information and a variable number of interior photographs per condominium for condition assessment and investigate how multiple modalities can be efficiently combined using deep learning. We train and test the performance of a pre-trained convolutional neural network fine-tuned with a variable number of interior views of selected condominiums. In parallel, we train and test a pre-trained bidirectional encoder-transformer language model using text data from the same observations. Finally, we build an experimental neural network model using both modalities for the same task and compare its performance with the models trained on a single modality. Our initial assumption that coupling both networks would lead to worse performance compared to the fine-tuned single-modal models was not confirmed: we achieved better performance with the proposed multi-modal model despite a very unbalanced dataset. The novelty here is the multimodal modeling of a variable number of real estate-related attributes in a unified model that integrates all available modalities and can thus use their complementary information. With the presented approach, we intend to extend the existing information extraction methods for automated valuation models, which in turn would contribute to higher transparency of valuation procedures and thus to more reliable statements about the value of real estate.
    Keywords: AVM; Computer Vision; Hedonic Pricing; NLP
    JEL: R3
    Date: 2023–01–01
  12. By: Lewis, Andrew; Vu, Patrick; Duch, Raymond (University of Oxford); Chowdhury, Areeq
    Abstract: The rapid advancement of ‘deepfake’ video technology — which uses deep learning artificial intelligence algorithms to create fake videos that look real — has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people’s alertness to and ability to detect a high-quality deepfake amongst a set of videos. First, we find that in a natural setting with no content warnings, individuals who are exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.9%) compared to a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are given a warning that at least one video in a set of five videos is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake.
    Date: 2023–10–15
  13. By: Sebastian Erhardt (MPI-IC); Mainak Ghosh (MPI-IC); Erik Buunk (MPI-IC); Michael E. Rose (MPI-IC); Dietmar Harhoff (MPI-IC)
    Abstract: Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently, it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
    Date: 2023–10–24
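The core retrieval step in a system like Logic Mill, ranking documents by the similarity of their numerical representations, can be sketched as follows (the corpus and embedding vectors below are hypothetical; Logic Mill's actual models and API are as described by its authors):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, corpus):
    """Rank documents by embedding similarity to a query embedding.
    `corpus` maps document ids to (hypothetical) embedding vectors."""
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked]
```

At the scale of 200 million documents, exhaustive pairwise comparison like this is infeasible; production systems typically use approximate nearest-neighbour indexes over the embedding space instead.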

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.