nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒11‒06
twelve papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien

  1. How Does Artificial Intelligence Improve Human Decision-Making? Evidence from the AI-Powered Go Program By Sukwoong Choi; Hyo Kang; Namil Kim; Junsik Kim
  2. Adoption of Artificial Intelligence in an Organizational Context: Analysis of the Factors Influencing the Adoption and Decision-Making Process By Eitle, Verena
  3. Behavioral Intentions to use Artificial Intelligence Among Managers in Small and Medium Enterprises By Jameel, Alaa S.; Harjan, Sinan Abdullah; Ahmad, Abd Rahman
  4. Artificial intelligence at the workplace and the impacts on work organisation, working conditions and ethics By Lechardoy, Lucie; López Forés, Laura; Codagnone, Cristiano
  5. The Turing Transformation: Artificial Intelligence, Intelligence Augmentation, and Skill Premiums By Ajay K. Agrawal; Joshua S. Gans; Avi Goldfarb
  6. Contested Transparency: Digital Monitoring Technologies and Worker Voice By Burdin, Gabriel; Dughera, Stefano; Landini, Fabio; Belloc, Filippo
  7. The promise and perils of artificial intelligence: Overcoming the odds By Garcia-Murillo, Martha; MacInnes, Ian
  8. Collecting, generating and analyzing national statistics with AI: what benefits and costs? By Rim, Maria J.; Kwon, Youngsun
  9. Understandings of the AI business ecosystem in South Korea: AI startups' perspective By Nam, Jinyoung; Kim, Junghwan; Jung, Yoonhyuk
  10. Artificial Intelligence and Central Bank Communication: The Case of the ECB By Nicolas Fanta; Roman Horvath
  11. Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on mock CFA Exams By Ethan Callanan; Amarachi Mbakwe; Antony Papadimitriou; Yulong Pei; Mathieu Sibue; Xiaodan Zhu; Zhiqiang Ma; Xiaomo Liu; Sameena Shah
  12. Explainable AI for Operational Research: A Defining Framework, Methods, Applications, and a Research Agenda By Koen W. de Bock; Kristof Coussement; Arno De Caigny; Roman Slowiński; Bart Baesens; Robert N Boute; Tsan-Ming Choi; Dursun Delen; Mathias Kraus; Stefan Lessmann; Sebastián Maldonado; David Martens; María Óskarsdóttir; Carla Vairetti; Wouter Verbeke; Richard Weber

  1. By: Sukwoong Choi; Hyo Kang; Namil Kim; Junsik Kim
    Abstract: We study how humans learn from AI, exploiting the introduction of an AI-powered Go program (APG) that unexpectedly outperformed the best professional player. We compare the move quality of professional players to that of APG's superior solutions around its public release. Our analysis of 749,190 moves demonstrates significant improvements in players' move quality, accompanied by a decreased number and magnitude of errors. The effect is pronounced in the early stages of the game, where uncertainty is highest. In addition, younger players and those in AI-exposed countries experience greater improvement, suggesting potential inequality in learning from AI. Further, while players of all levels learn, less skilled players derive higher marginal benefits. These findings have implications for managers seeking to adopt and utilize AI effectively within their organizations.
    Date: 2023–10
  2. By: Eitle, Verena
    Abstract: The emergence of Artificial Intelligence (AI) shifts the business environment to such an extent that this general-purpose technology (GPT) is prevalent in a wide range of industries, evolves through constant advancements, and stimulates complementary innovations. By implementing AI applications in their business practices, organizations primarily benefit from improved business process automation, valuable cognitive insights, and enhanced cognitive engagements. Despite this great potential, organizations encounter difficulties in adopting AI as they struggle to adjust to corresponding complex organizational changes. The tendency for organizations to face challenges when implementing AI applications indicates that AI adoption is far from trivial. The complex organizational change generated by AI adoption could emerge from intelligent agents’ learning and autonomy capabilities. While AI simulates human intelligence in perception, reasoning, learning, and interaction, organizations’ decision-making processes might change as human decision-making power shifts to AI. Furthermore, viewing AI adoption as a multi-stage rather than a single-stage process divides this complex change into the initiation, adoption, and routinization stages. Thus, AI adoption does not necessarily imply that AI applications are fully incorporated into enterprise-wide business practices; they could be at certain adoption stages or only in individual business functions. To address these complex organizational changes, this thesis seeks to examine the dynamics surrounding AI adoption at the organizational level. Based on four empirical research papers, this thesis presents the factors that influence AI adoption and reveals the impact of AI on the decision-making process. These research papers have been published in peer-reviewed conference proceedings. The first part of this thesis describes the factors that influence AI adoption in organizations. 
Based on the technology-organization-environment (TOE) framework, the findings of the qualitative study are consistent with previous innovation studies showing that generic factors, such as compatibility, top management, and data protection, affect AI adoption. In addition to the generic factors, the study also reveals that specific factors, such as data quality, ethical guidelines, and collaborative work, are of particular importance in the AI context. However, given these technological, organizational, and environmental factors, national cultural differences may occur as described by Hofstede’s national cultural framework. Factors are validated using a quantitative research design throughout the adoption process to account for the complexity of AI adoption. By considering the initiation, adoption, and routinization stages, differentiating and opposing effects on AI adoption are identified. The second part of this thesis addresses AI’s impact on the decision-making process in recruiting and marketing and sales. The experimental study shows that AI can ensure procedural justice in the candidate selection process. The findings indicate that the rule of consistency increases when recruiters are assisted by a CV recommender system. In marketing and sales, AI can support the decision-making process to identify promising prospects. By developing classification models in lead-and-opportunity management, the predictive performances of various machine learning algorithms are presented. This thesis outlines a variety of factors that involve generic and AI-specific considerations, national cultural perspectives, and a multi-stage process view to account for the complex organizational changes AI adoption entails. By focusing on recruiting as well as marketing and sales, it emphasizes AI’s impact on organizations’ decision-making processes.
    Date: 2023–10–13
  3. By: Jameel, Alaa S. (Cihan University-Erbil); Harjan, Sinan Abdullah; Ahmad, Abd Rahman
    Abstract: The purpose of this study is to examine the behavioral intentions (BI) to use artificial intelligence (AI) among managers in small and medium enterprises (SMEs). The target population of this study was SME managers in Baghdad City, after ensuring that the managers were using some form of AI. 184 valid questionnaires were analyzed with Smart-PLS. The results indicated that performance expectancy (PE), social influence (SI), facilitating conditions (FC), and top management support (TMS) have a positive and significant impact on behavioral intention to use AI among SME managers; effort expectancy (EE), on the other hand, has an insignificant impact on behavioral intention to use AI among the managers.
    Date: 2023–07–10
  4. By: Lechardoy, Lucie; López Forés, Laura; Codagnone, Cristiano
    Abstract: The Digital Compass sets the goal to increase the digitalisation of businesses and take-up of artificial intelligence (AI). The use of AI-based technologies, such as algorithmic management, AI-based robots and wearables using algorithms for data processing, is increasing across countries and sectors. Based on a literature review and the insight from exploratory case studies at company level, this paper presents the main applications of AI-based technologies at the workplace and their impacts for work organisation, working conditions and ethics. Evidence shows a range of both positive and negative impacts of the use of AI on work organisation and working conditions as well as several ethical concerns. To address some of these concerns, a set of ethical guidelines and recommendations from EU, international and national public authorities and social partners have emerged in recent years. The paper presents and compares the different initiatives, highlighting the current gaps to ensure the protection of workers and working conditions while contributing towards the digitalisation goals of the Digital Compass.
    Keywords: Artificial intelligence, workplace, impacts, work organisation, working conditions, ethics, legislative framework
    Date: 2023
  5. By: Ajay K. Agrawal; Joshua S. Gans; Avi Goldfarb
    Abstract: We ask whether a technical objective of using human performance of tasks as a benchmark for AI performance will result in the negative outcomes highlighted in prior work in terms of jobs and inequality. Instead, we argue that task automation, especially when driven by AI advances, can enhance job prospects and potentially widen the scope for employment of many workers. The neglected mechanism we highlight is the potential for changes in the skill premium, where AI automation of tasks exogenously improves the value of the skills of many workers, expands the pool of available workers to perform other tasks, and, in the process, increases labor income and potentially reduces inequality. We label this possibility the “Turing Transformation.” As such, we argue that AI researchers and policymakers should not focus on the technical aspects of AI applications and whether they are directed at automating human-performed tasks or not and, instead, focus on the outcomes of AI research. In so doing, we do not aim to diminish human-centric AI research, which remains a laudable goal. Instead, we want to note that AI research that uses a human-task template with a goal to automate that task can often augment human performance of other tasks and whole jobs. The distributional effects of technology depend more on which workers have tasks that get automated than on the fact of automation per se.
    JEL: J2 O3
    Date: 2023–10
  6. By: Burdin, Gabriel; Dughera, Stefano; Landini, Fabio; Belloc, Filippo
    Abstract: Advances in artificial intelligence and data analytics have notably expanded employers' monitoring and surveillance capabilities, facilitating the accurate observability of work effort. There is an ongoing debate among academics and policymakers about the productivity and broader welfare implications of digital monitoring (DM) technologies. In this context, many countries confer information, consultation and codetermination rights to employee representation (ER) bodies on matters related to workplace organization and the introduction of new technologies, which could potentially discourage employers from making DM investments. Using a cross-sectional sample of more than 21,000 European establishments, we find instead that establishments with ER are more likely to utilize DM technologies than establishments without ER. We also document a positive effect of ER on DM utilization in the context of a local-randomization regression discontinuity analysis that exploits size-contingent policy rules governing the operation of ER bodies in Europe. We rationalize this unexpected finding through the lens of a theoretical framework in which shared governance via ER creates organizational safeguards that mitigate workers' negative responses to monitoring and undermine the disciplining effect of DM technologies.
    Keywords: Digital-based monitoring, algorithmic management, HR analytics, transparency, control aversion, worker voice, employee representation
    Date: 2023
  7. By: Garcia-Murillo, Martha; MacInnes, Ian
    Abstract: Artificial Intelligence has been received with both great enthusiasm and great concern. From a positive perspective, IBM's CEO, Ginni Rometty, stated, "It will be a partnership between a man and machine. It is a very symbiotic relationship that you are conversing with this technology. Our purpose is to augment and really be in service of what humans do" (N. Singh, 2017). In contrast, other tech executives have reacted like Bill Gates, who stated that "[h]umans should be worried about the threat posed by artificial Intelligence" (Rawlinson, 2015). Many academic researchers have voiced their concerns about the disappearance of jobs, with machines replacing workers (Brynjolfsson & McAfee, 2015). Yet beyond the enthusiasm and the concerns, there has been an increased interest in AI in all areas of human endeavor. In manufacturing, for example, the number of publications related to AI has grown from about 500 in 2010 to more than 2,000 in 2020 (Arinez et al., 2020), and this trend can also be seen in other areas where AI is being incorporated. A 2003 study from Filippi et al. found exponential growth in the number of papers focusing on AI. (...)
    Date: 2023
  8. By: Rim, Maria J.; Kwon, Youngsun
    Abstract: The paper addresses the increasing adoption of digital transformation in public sector organizations, mainly focusing on its impact on national statistical offices (NSOs). The emergence of data-driven strategies powered by artificial intelligence (AI) disrupts the conventional labour-intensive approaches of NSOs. This necessitates a delicate balance between real-time information and statistical accuracy, leading to the exploration of AI applications such as machine learning in data processing. Despite its potential benefits, the cooperation between AI and human resources requires in-depth examination to leverage their combined strengths effectively. The paper proposes an integrative review and multi-case study approach to contribute to a deeper understanding of the benefits and costs of AI adoption in national statistical processes, facilitate the acceleration of digital transformation, and provide valuable insights for policymakers and practitioners in optimizing the use of AI in collecting, generating and analyzing national statistics.
    Keywords: Digital transformation, national statistics, artificial intelligence, human resources, data-driven strategy
    Date: 2023
  9. By: Nam, Jinyoung; Kim, Junghwan; Jung, Yoonhyuk
    Abstract: Artificial intelligence (AI) startups use AI technology to produce novel solutions across a multitude of sectors, making them key players in the AI business ecosystem: the network of AI technology, business applications, and various industry sectors. Particularly noteworthy is the substantial surge in the founding of, and investment in, AI startups within South Korea. To gain insight into the AI business ecosystem, this study explores how the ecosystem is collectively understood from the perspective of AI startups in South Korea. We conducted semi-structured interviews with 16 CEOs and managers at AI startups in South Korea and carried out a core-periphery analysis of the social representation of the AI business ecosystem. In doing so, the study bridges an existing knowledge gap and enriches the body of research on the AI business ecosystem, as well as the current opportunities and challenges it faces. Our findings not only inform and guide practitioners, governments, and businesses alike, but also suggest that continuous discussion among government agencies, large tech companies, and AI startups is crucial for establishing a more sustainable AI business ecosystem.
    Keywords: Artificial intelligence (AI), AI startups, AI business ecosystem, Social representations theory
    Date: 2023
  10. By: Nicolas Fanta (Institute of Economic Studies, Charles University, Prague); Roman Horvath (Institute of Economic Studies, Charles University, Prague)
    Abstract: We examine whether artificial intelligence (AI) can decipher the European Central Bank's communication. Employing 1,769 inter-meeting verbal communication events of the European Central Bank's Governing Council members, we construct an AI-based indicator evaluating whether communication is leaning towards easing, tightening or maintaining the monetary policy stance. We find that our AI-based indicator closely replicates similar indicators based on human expert judgment, but at much higher speed and much lower cost. Using our AI-based indicator and a number of robustness checks, our regression results show that ECB communication matters for future monetary policy even after controlling for financial market expectations and lagged monetary policy decisions.
    Keywords: Artificial intelligence, central bank communication, monetary policy
    JEL: E52 E58
    Date: 2023–09
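    The paper's indicator classifies each inter-meeting communication event as easing, tightening, or neutral and aggregates the results. A toy sketch of that aggregation logic is below; the keyword lists and the simple counting rule are illustrative assumptions only (the authors use an AI model, not keyword matching):

```python
# Toy stance indicator: score each speech as easing (-1), neutral (0), or
# tightening (+1), then average over an inter-meeting period. The term lists
# below are hypothetical placeholders, not the paper's classifier.
EASING_TERMS = {"accommodative", "stimulus", "easing", "lower rates"}
TIGHTENING_TERMS = {"inflationary", "tightening", "raise rates", "vigilance"}

def stance_score(text: str) -> int:
    """Return -1 (easing), +1 (tightening), or 0 (neutral) for one speech."""
    t = text.lower()
    easing = sum(term in t for term in EASING_TERMS)
    tightening = sum(term in t for term in TIGHTENING_TERMS)
    if easing > tightening:
        return -1
    if tightening > easing:
        return 1
    return 0

def intermeeting_indicator(speeches: list[str]) -> float:
    """Average stance over all speeches between two policy meetings."""
    if not speeches:
        return 0.0
    return sum(stance_score(s) for s in speeches) / len(speeches)
```

    A period filled with easing-leaning speeches would score near -1, a tightening-leaning one near +1, which is the kind of continuous stance signal the regressions in the paper rely on.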
  11. By: Ethan Callanan; Amarachi Mbakwe; Antony Papadimitriou; Yulong Pei; Mathieu Sibue; Xiaodan Zhu; Zhiqiang Ma; Xiaomo Liu; Sameena Shah
    Abstract: Large Language Models (LLMs) have demonstrated remarkable performance on a wide range of Natural Language Processing (NLP) tasks, often matching or even beating state-of-the-art task-specific models. This study aims at assessing the financial reasoning capabilities of LLMs. We leverage mock exam questions of the Chartered Financial Analyst (CFA) Program to conduct a comprehensive evaluation of ChatGPT and GPT-4 in financial analysis, considering Zero-Shot (ZS), Chain-of-Thought (CoT), and Few-Shot (FS) scenarios. We present an in-depth analysis of the models' performance and limitations, and estimate whether they would have a chance at passing the CFA exams. Finally, we outline insights into potential strategies and improvements to enhance the applicability of LLMs in finance. In this perspective, we hope this work paves the way for future studies to continue enhancing LLMs for financial reasoning through rigorous evaluation.
    Date: 2023–10
  12. By: Koen W. de Bock (Audencia Business School); Kristof Coussement (LEM - Lille économie management - UMR 9221 - UA - Université d'Artois - UCL - Université catholique de Lille - Université de Lille - CNRS - Centre National de la Recherche Scientifique, IÉSEG School Of Management [Puteaux]); Arno De Caigny; Roman Slowiński (Poznan University of Technology, Systems Research Institute of the Polish Academy of Sciences); Bart Baesens (KU Leuven - Catholic University of Leuven - Katholieke Universiteit Leuven, SBS - Southampton Business School); Robert N Boute (Vlerick Business School [Leuven], KU Leuven - Catholic University of Leuven - Katholieke Universiteit Leuven); Tsan-Ming Choi (University of Liverpool Management School); Dursun Delen (Spears School of Business (Oklahoma State University), Istinye University); Mathias Kraus (FAU - Friedrich-Alexander Universität Erlangen-Nürnberg); Stefan Lessmann (Humboldt-Universität zu Berlin - Humboldt University Of Berlin); Sebastián Maldonado (UCHILE - Universidad de Chile = University of Chile [Santiago], ISCI - Instituto de Sistemas Complejos de Ingeniería); David Martens (UA - University of Antwerp); María Óskarsdóttir (Reykjavík University); Carla Vairetti (UANDES - Universidad de los Andes [Santiago]); Wouter Verbeke (KU Leuven - Catholic University of Leuven - Katholieke Universiteit Leuven); Richard Weber (UCHILE - Universidad de Chile = University of Chile [Santiago], ISCI - Instituto de Sistemas Complejos de Ingeniería)
    Abstract: The ability to understand and explain the outcomes of data analysis methods, with regard to aiding decision-making, has become a critical requirement for many applications. For example, in operational research domains, data analytics have long been promoted as a way to enhance decision-making. This study proposes a comprehensive, normative framework to define explainable artificial intelligence (XAI) for operational research (XAIOR) as a reconciliation of three subdimensions that constitute its requirements: performance, attributable, and responsible analytics. In turn, this article offers in-depth overviews of how XAIOR can be deployed through various methods with respect to distinct domains and applications. Finally, an agenda for future XAIOR research is defined.
    Keywords: Decision analysis, XAI, explainable artificial intelligence, interpretable machine learning, XAIOR
    Date: 2023–09

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.