nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒11‒06
24 papers chosen by
Stan Miles, Thompson Rivers University

  1. CAD: Clustering And Deep Reinforcement Learning Based Multi-Period Portfolio Management Strategy By Zhengyong Jiang; Jeyan Thiayagalingam; Jionglong Su; Jinjun Liang
  2. Identifying Nascent High-Growth Firms Using Machine Learning By Stephanie Houle; Ryan Macdonald
  3. A machine learning approach for assessing labor supply to the online labor market By Fung, Esabella
  4. Big Data y Algoritmos para la Medición de la Pobreza y el Desarrollo By Walter Sosa Escudero
  5. Fuzzy firm name matching: Merging Amadeus firm data to PATSTAT By Leon Bremer
  6. Machine learning applied to active fixed-income portfolio management: a Lasso logit approach. By Mercedes de Luis; Emilio Rodríguez; Diego Torres
  7. Collecting, generating and analyzing national statistics with AI: what benefits and costs? By Rim, Maria J.; Kwon, Youngsun
  8. Adoption of Artificial Intelligence in an Organizational Context: Analysis of the Factors Influencing the Adoption and Decision-Making Process By Eitle, Verena
  9. Hedging Properties of Algorithmic Investment Strategies using Long Short-Term Memory and Time Series models for Equity Indices By Jakub Michańków; Paweł Sakowski; Robert Ślepaczuk
  10. Artificial Intelligence and Central Bank Communication: The Case of the ECB By Nicolas Fanta; Roman Horvath
  11. Explainable AI for Operational Research: A Defining Framework, Methods, Applications, and a Research Agenda By Koen W. de Bock; Kristof Coussement; Arno De Caigny; Roman Slowiński; Bart Baesens; Robert N Boute; Tsan-Ming Choi; Dursun Delen; Mathias Kraus; Stefan Lessmann; Sebastián Maldonado; David Martens; María Óskarsdóttir; Carla Vairetti; Wouter Verbeke; Richard Weber
  12. Learning from experts: Energy efficiency in residential buildings By Billio, Monica; Casarin, Roberto; Costola, Michele; Veggente, Veronica
  13. Generative AI, Productivity, the Labor Market, and Choice Behavior: A speech at the National Bureau of Economic Research Economics of Artificial Intelligence Conference, Fall 2023, Toronto, Canada, Sept. 22, 2023 By Lisa D. Cook
  14. Artificial intelligence at the workplace and the impacts on work organisation, working conditions and ethics By Lechardoy, Lucie; López Forés, Laura; Codagnone, Cristiano
  15. Combining Deep Learning and GARCH Models for Financial Volatility and Risk Forecasting By Jakub Michańków; Łukasz Kwiatkowski; Janusz Morajda
  16. How Does Artificial Intelligence Improve Human Decision-Making? Evidence from the AI-Powered Go Program By Sukwoong Choi; Hyo Kang; Namil Kim; Junsik Kim
  17. Understandings of the AI business ecosystem in South Korea: AI startups' perspective By Nam, Jinyoung; Kim, Junghwan; Jung, Yoonhyuk
  18. NoxTrader: LSTM-Based Stock Return Momentum Prediction By Hsiang-Hui Liu; Han-Jay Shu; Wei-Ning Chiu
  19. Behavioral Intentions to use Artificial Intelligence Among Managers in Small and Medium Enterprises By Jameel, Alaa S.; Harjan, Sinan Abdullah; Ahmad, Abd Rahman
  20. Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on mock CFA Exams By Ethan Callanan; Amarachi Mbakwe; Antony Papadimitriou; Yulong Pei; Mathieu Sibue; Xiaodan Zhu; Zhiqiang Ma; Xiaomo Liu; Sameena Shah
  21. Artificial Intelligence and Employment: A Look into the Crystal Ball By Guarascio, Dario; Reljic, Jelena; Stöllinger, Roman
  22. The Turing Transformation: Artificial Intelligence, Intelligence Augmentation, and Skill Premiums By Ajay K. Agrawal; Joshua S. Gans; Avi Goldfarb
  23. Machine Learning Who to Nudge: Causal vs Predictive Targeting in a Field Experiment on Student Financial Aid Renewal By Susan Athey; Niall Keleher; Jann Spiess
  24. Automated regime detection in multidimensional time series data using sliced Wasserstein k-means clustering By Qinmeng Luan; James Hamp

  1. By: Zhengyong Jiang; Jeyan Thiayagalingam; Jionglong Su; Jinjun Liang
    Abstract: In this paper, we present a novel trading strategy that integrates reinforcement learning methods with clustering techniques for portfolio management in multi-period trading. Specifically, we leverage the clustering method to categorize stocks into various clusters based on their financial indices. Subsequently, we utilize the Asynchronous Advantage Actor-Critic algorithm to determine the trading actions for stocks within each cluster. Finally, we employ the DDPG algorithm to generate the portfolio weight vector, which decides the number of stocks to buy, sell, or hold according to the trading actions of different clusters. To the best of our knowledge, our approach is the first to combine clustering methods and reinforcement learning methods for portfolio management in the context of multi-period trading. Our proposed strategy is evaluated using a series of back-tests on four datasets, comprising a total of 800 stocks, obtained from the Shanghai Stock Exchange and the National Association of Securities Dealers Automated Quotations (NASDAQ). Our results demonstrate that our approach outperforms conventional portfolio management techniques, such as the Robust Median Reversion strategy, the Passive Aggressive Median Reversion strategy, and several machine learning methods, across various metrics. In our back-test experiments, our proposed strategy yields an average return of 151% over 360 trading periods with 800 stocks, compared to the highest return of 124% achieved by the other techniques over identical trading periods and stocks.
    Date: 2023–10
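The paper's actual pipeline (A3C per cluster, DDPG for the weight vector) is not reproduced in the abstract; as a rough illustration of the first step only, a plain k-means grouping of stocks by financial indices might look like the following sketch (all stock names, features and parameters are invented for illustration):

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means; the first k points seed the centroids (deterministic)."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each stock goes to its nearest centroid.
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Toy "financial indices" per stock: (P/E ratio, one-year momentum), made up.
stocks = {"AAA": (8.0, 0.1), "BBB": (9.0, 0.2), "CCC": (30.0, 1.1), "DDD": (31.0, 1.0)}
labels = kmeans(list(stocks.values()), k=2)
clusters = dict(zip(stocks, labels))
```

In the paper, a policy is then trained per cluster; here the two low-multiple stocks and the two high-multiple stocks simply end up in separate groups.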
  2. By: Stephanie Houle; Ryan Macdonald
    Abstract: Predicting which firms will grow quickly and why has been the subject of research studies for many decades. Firms that grow rapidly have the potential to usher in new innovations, products or processes (Kogan et al. 2017), become superstar firms (Haltiwanger et al. 2013) and impact the aggregate labour share (Autor et al. 2020; De Loecker et al. 2020). We explore the use of supervised machine learning techniques to identify a population of nascent high-growth firms using Canadian administrative firm-level data. We apply a suite of supervised machine learning algorithms (elastic net model, random forest and neural net) to determine whether a large set of variables, comprising financial and employment data from Canadian firms' tax filings, state variables (e.g., industry, geography) and indicators of firm complexity (e.g., multiple industrial activities, foreign ownership), can predict which firms will be high-growth firms over the next three years. The results suggest that the machine learning classifiers can select a sub-population of nascent high-growth firms that includes the majority of actual high-growth firms plus a group of firms that shared similar attributes but failed to attain high-growth status.
    Keywords: Econometric and statistical methods; Firm dynamics
    JEL: C55 C81 L25
    Date: 2023–10
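The closing claim, that the selected sub-population captures most true high-growth firms along with some lookalikes, is a statement about recall versus precision. A minimal sketch of how that trade-off is quantified (firm IDs are invented):

```python
def recall_precision(actual, selected):
    """Both arguments are sets of firm IDs; returns (recall, precision)."""
    hits = actual & selected
    recall = len(hits) / len(actual) if actual else 0.0
    precision = len(hits) / len(selected) if selected else 0.0
    return recall, precision

# Invented example: 4 true high-growth firms; the classifier flags 6 candidates,
# catching 3 of the 4 plus 3 lookalikes that never attained high-growth status.
actual = {"f1", "f2", "f3", "f4"}
selected = {"f1", "f2", "f3", "f5", "f6", "f7"}
recall, precision = recall_precision(actual, selected)
```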
  3. By: Fung, Esabella
    Abstract: The online labor market, comprised of companies such as Upwork, Amazon Mechanical Turk, and their freelancer workforce, has expanded worldwide over the past 15 years and has changed the labor market landscape. Although qualitative studies have identified factors related to the global supply to the online labor market, few data modeling studies have quantified the importance of these factors. This study applied tree-based supervised learning techniques (decision tree regression, random forest, and gradient boosting) to systematically evaluate the online labor supply with 70 features related to climate, population, economics, education, health, language, and technology adoption. To provide machine learning explainability, SHAP, based on Shapley values, was used to identify features with high marginal contributions. The top five contributing features indicate the tight integration of technology adoption, language, and human migration patterns with the online labor market supply.
    Keywords: business, boosting, commerce and trade, digital divide, economics, ensemble learning, globalization, machine learning, random forest, social factors, statistical learning, sharing economy
    JEL: C60 F14 F16 J11 J22 M2
    Date: 2023–10–09
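SHAP approximates Shapley values for large models; for a toy value function over three features, the exact definition, averaging each feature's marginal contribution over all orderings, can be computed directly (the value function and numbers below are illustrative, not from the paper):

```python
from itertools import permutations

def shapley(features, value):
    """Exact Shapley values: average marginal contribution over all orderings."""
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        seen = set()
        for f in order:
            phi[f] += value(seen | {f}) - value(seen)
            seen.add(f)
    return {f: v / len(perms) for f, v in phi.items()}

# Toy value function: additive contributions plus an interaction between A and B.
contrib = {"A": 2.0, "B": 3.0, "C": 1.0}
def v(coalition):
    bonus = 5.0 if {"A", "B"} <= coalition else 0.0
    return sum(contrib[f] for f in coalition) + bonus

phi = shapley(list(contrib), v)  # A: 4.5, B: 5.5, C: 1.0
```

The interaction bonus is split evenly between A and B, and the three values sum to v of the full coalition, as Shapley's efficiency property requires.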
  4. By: Walter Sosa Escudero (UDESA, CONICET, CEDLAS-IIE-UNLP)
    Abstract: La revolución del combo big data-machine learning-inteligencia artificial ha invadido todos los campos del conocimiento y, esperablemente, el de la medición del bienestar no es una excepción. Y, naturalmente, urge preguntar si los enormes problemas de cuantificación de la pobreza o la desigualdad no encontraran una solución rápida y efectiva que provenga de la combinación de datos masivos de big data y los poderosos algoritmos de machine learning y la inteligencia artificial. Esta nota es una introducción técnicamente accesible a los logros y desafíos del uso big data y machine learning para la medición de la pobreza, el desarrollo, la desigualdad y otras dimensiones sociales. Se basa en Sosa Escudero, Anauati y Brau (2022), un artículo abarcativo y técnico, que estudia con detalle el estado de las artes en lo que se refiere al uso de machine learning para los estudios de desarrollo y bienestar, al cual remitiremos para mayores detalles y referencias específicas.
    Date: 2023–10
  5. By: Leon Bremer (Vrije Universiteit Amsterdam)
    Abstract: When merging firms across large databases in the absence of common identifiers, text algorithms can help. I propose a high-performance fuzzy firm name matching algorithm that uses existing computational methods and works even under hardware restrictions. The algorithm consists of four steps, namely (1) cleaning, (2) similarity scoring, (3) a decision rule based on supervised machine learning, and (4) group identification using community detection. The algorithm is applied to merge firms in the Amadeus Financials and Subsidiaries databases, containing firm-level business and ownership information, with applicants in PATSTAT, a worldwide patent database. In this application the algorithm vastly outperforms an exact string match, increasing the number of matched firms in the Amadeus Financials (Subsidiaries) database by 116% (160%). 53% (74%) of this improvement is due to cleaning, and another 41% (50%) is due to similarity matching. 18.1% of all patent applications since 1950 are matched to firms in the Amadeus databases, compared to 2.6% for an exact name match.
    Keywords: Fuzzy name matching, supervised machine learning, name disambiguation, patents
    JEL: C81 C88 O34
    Date: 2023–10–12
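A rough stdlib-only sketch of steps (1), (2) and (4), with difflib similarity in place of the paper's scoring and a union-find pass standing in for both the supervised decision rule (3) and community detection; the names, suffix list and threshold are all illustrative:

```python
import re
from difflib import SequenceMatcher

LEGAL = {"ag", "gmbh", "ltd", "inc", "bv", "sa", "se", "plc", "co"}

def clean(name):
    """Step 1: lowercase, strip punctuation, drop legal-form suffixes and initials."""
    tokens = re.sub(r"[^a-z0-9 ]", " ", name.lower()).split()
    return " ".join(t for t in tokens if t not in LEGAL and len(t) > 1)

def similarity(a, b):
    """Step 2: string similarity on cleaned names, in [0, 1]."""
    return SequenceMatcher(None, clean(a), clean(b)).ratio()

def group(names, threshold=0.85):
    """Step 4 stand-in: union-find over pairs scoring above the threshold."""
    parent = list(range(len(names)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if similarity(names[i], names[j]) >= threshold:
                parent[find(j)] = find(i)
    return [find(i) for i in range(len(names))]

names = ["Siemens AG", "SIEMENS A.G.", "Siemens", "BASF SE"]
labels = group(names)  # first three share a group label, BASF stands alone
```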
  6. By: Mercedes de Luis (Banco de España); Emilio Rodríguez (Banco de España); Diego Torres (Banco de España)
    Abstract: The use of quantitative methods constitutes a standard component of the institutional investors’ portfolio management toolkit. In the last decade, several empirical studies have employed probabilistic or classification models to predict stock market excess returns, model bond ratings and default probabilities, as well as to forecast yield curves. To the authors’ knowledge, little research exists on their application to active fixed-income management. This paper contributes to filling this gap by comparing a machine learning algorithm, the Lasso logit regression, with a passive (buy-and-hold) investment strategy in the construction of a duration management model for high-grade bond portfolios, specifically focusing on US treasury bonds. Additionally, a two-step procedure is proposed, together with a simple ensemble averaging aimed at minimising the potential overfitting of traditional machine learning algorithms. A method to select thresholds that translate probabilities into signals based on conditional probability distributions is also introduced. A large set of financial and economic variables is used as an input to obtain a signal for active duration management relative to a passive benchmark portfolio. As a first result, most of the variables selected by the model are related to financial flows and economic fundamentals, but the parameters seem to be unstable over time, thereby suggesting that the variable relevance may be time dependent. Backtesting of the model, which was carried out on a sovereign bond portfolio denominated in US dollars, resulted in a small but statistically significant outperformance of the benchmark index in the out-of-sample dataset after controlling for overfitting. These results support the case for incorporating quantitative tools in the active portfolio management process for institutional investors, but paying special attention to potential overfitting and unstable parameters. Quantitative tools should be viewed as a complementary input to qualitative and fundamental analysis, together with the portfolio manager’s expertise, in order to make better-informed investment decisions.
    Keywords: machine learning, probabilistic or classification models, Lasso logit regressions, active fixed-income management, absolute excess return, Sharpe ratios, duration management
    JEL: C45 C51 C53 E37 G11
    Date: 2023–09
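The paper's threshold-selection method based on conditional probability distributions is not spelled out in the abstract; a simplified stand-in that derives signal thresholds from empirical quantiles of past predicted probabilities could look like this (quantile levels and data are illustrative):

```python
import statistics

def quantile_thresholds(past_probs, lo_q=0.3, hi_q=0.7):
    """Pick signal thresholds from the empirical distribution of past probabilities."""
    deciles = statistics.quantiles(past_probs, n=10, method="inclusive")
    # deciles[i] is the (i + 1)/10 quantile; map lo_q/hi_q onto deciles.
    return deciles[round(lo_q * 10) - 1], deciles[round(hi_q * 10) - 1]

def to_signal(prob, lo, hi):
    """Translate a predicted probability into a duration signal."""
    if prob >= hi:
        return +1   # lengthen duration vs. the passive benchmark
    if prob <= lo:
        return -1   # shorten duration
    return 0        # stay at the benchmark

past = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
lo, hi = quantile_thresholds(past)
```

By construction, a fixed share of past predictions would have triggered each signal, which is one simple way to keep trading activity stable over time.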
  7. By: Rim, Maria J.; Kwon, Youngsun
    Abstract: The paper addresses the increasing adoption of digital transformation in public sector organizations, focusing on its impact on national statistical offices (NSOs). The emergence of data-driven strategies powered by artificial intelligence (AI) disrupts the conventional labour-intensive approaches of NSOs. This necessitates a delicate balance between real-time information and statistical accuracy, leading NSOs to explore AI applications such as machine learning in data processing. Despite its potential benefits, the cooperation between AI and human resources requires in-depth examination to leverage their combined strengths effectively. The paper proposes an integrative review and multi-case study approach to contribute to a deeper understanding of the benefits and costs of AI adoption in national statistical processes, facilitate the acceleration of digital transformation, and provide valuable insights for policymakers and practitioners in optimizing the use of AI in collecting, generating and analyzing national statistics.
    Keywords: Digital transformation, national statistics, artificial intelligence, human resources, data-driven strategy
    Date: 2023
  8. By: Eitle, Verena
    Abstract: The emergence of Artificial Intelligence (AI) shifts the business environment to such an extent that this general-purpose technology (GPT) is prevalent in a wide range of industries, evolves through constant advancements, and stimulates complementary innovations. By implementing AI applications in their business practices, organizations primarily benefit from improved business process automation, valuable cognitive insights, and enhanced cognitive engagements. Despite this great potential, organizations encounter difficulties in adopting AI as they struggle to adjust to corresponding complex organizational changes. The tendency for organizations to face challenges when implementing AI applications indicates that AI adoption is far from trivial. The complex organizational change generated by AI adoption could emerge from intelligent agents’ learning and autonomy capabilities. While AI simulates human intelligence in perception, reasoning, learning, and interaction, organizations’ decision-making processes might change as human decision-making power shifts to AI. Furthermore, viewing AI adoption as a multi-stage rather than a single-stage process divides this complex change into the initiation, adoption, and routinization stages. Thus, AI adoption does not necessarily imply that AI applications are fully incorporated into enterprise-wide business practices; they could be at certain adoption stages or only in individual business functions. To address these complex organizational changes, this thesis seeks to examine the dynamics surrounding AI adoption at the organizational level. Based on four empirical research papers, this thesis presents the factors that influence AI adoption and reveals the impact of AI on the decision-making process. These research papers have been published in peer-reviewed conference proceedings. The first part of this thesis describes the factors that influence AI adoption in organizations. Based on the technology-organization-environment (TOE) framework, the findings of the qualitative study are consistent with previous innovation studies showing that generic factors, such as compatibility, top management, and data protection, affect AI adoption. In addition to the generic factors, the study also reveals that specific factors, such as data quality, ethical guidelines, and collaborative work, are of particular importance in the AI context. However, given these technological, organizational, and environmental factors, national cultural differences may occur as described by Hofstede’s national cultural framework. Factors are validated using a quantitative research design throughout the adoption process to account for the complexity of AI adoption. By considering the initiation, adoption, and routinization stages, differentiating and opposing effects on AI adoption are identified. The second part of this thesis addresses AI’s impact on the decision-making process in recruiting and marketing and sales. The experimental study shows that AI can ensure procedural justice in the candidate selection process. The findings indicate that the rule of consistency increases when recruiters are assisted by a CV recommender system. In marketing and sales, AI can support the decision-making process to identify promising prospects. By developing classification models in lead-and-opportunity management, the predictive performances of various machine learning algorithms are presented. This thesis outlines a variety of factors that involve generic and AI-specific considerations, national cultural perspectives, and a multi-stage process view to account for the complex organizational changes AI adoption entails. By focusing on recruiting as well as marketing and sales, it emphasizes AI’s impact on organizations’ decision-making processes.
    Date: 2023–10–13
  9. By: Jakub Michańków (Cracow University of Economics, Department of Informatics; University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group, Department of Quantitative Finance); Paweł Sakowski (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group, Department of Quantitative Finance); Robert Ślepaczuk (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group, Department of Quantitative Finance)
    Abstract: This paper proposes a novel approach to hedging portfolios of risky assets during periods of financial turmoil: diversification is performed not at the level of single assets but at the level of ensemble algorithmic investment strategies (AIS) built on the prices of these assets. We employ four types of diverse theoretical models (LSTM - Long Short-Term Memory, ARIMA-GARCH - Autoregressive Integrated Moving Average - Generalized Autoregressive Conditional Heteroskedasticity, momentum, and contrarian) to generate price forecasts, which are then used to produce investment signals in single and complex AIS. This allows us to verify the diversification potential of different types of investment strategies consisting of various assets (energy commodities, precious metals, cryptocurrencies, or soft commodities) in hedging ensemble AIS built for equity indices (the S&P 500 index). Empirical data used in this study cover the period between 2004 and 2022. Our main conclusion is that LSTM-based strategies outperform the other models and that the best diversifier for the AIS built for the S&P 500 index is the AIS built for Bitcoin. Finally, we test the LSTM model at a higher data frequency (1 hour) and conclude that it outperforms the results obtained using daily data.
    Keywords: machine learning, recurrent neural networks, long short-term memory, algorithmic investment strategies, testing architecture, loss function, walk-forward optimization, over-optimization
    JEL: C4 C14 C45 C53 C58 G13
    Date: 2023
  10. By: Nicolas Fanta (Institute of Economic Studies, Charles University, Prague); Roman Horvath (Institute of Economic Studies, Charles University, Prague)
    Abstract: We examine whether artificial intelligence (AI) can decipher the European Central Bank's communication. Employing 1769 inter-meeting verbal communication events of the European Central Bank's Governing Council members, we construct an AI-based indicator evaluating whether communication is leaning towards easing, tightening or maintaining the monetary policy stance. We find that our AI-based indicator closely replicates similar indicators based on human expert judgment, but at much higher speed and much lower cost. Using our AI-based indicator and a number of robustness checks, our regression results show that ECB communication matters for future monetary policy even after controlling for financial market expectations and lagged monetary policy decisions.
    Keywords: Artificial intelligence, central bank communication, monetary policy
    JEL: E52 E58
    Date: 2023–09
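The authors' AI-based indicator is not described in detail in the abstract; as a deliberately crude stand-in, a dictionary-based scorer illustrates the easing/tightening/neutral classification idea (the word lists are invented and are no substitute for the paper's model):

```python
HAWKISH = {"tightening", "hike", "inflationary", "restrictive", "raise"}
DOVISH = {"easing", "cut", "accommodative", "stimulus", "lower"}

def stance(text):
    """Classify a communication event as tightening (+1), easing (-1) or neutral (0)."""
    words = text.lower().split()
    score = sum(w in HAWKISH for w in words) - sum(w in DOVISH for w in words)
    return (score > 0) - (score < 0)

stance("rates must remain restrictive to curb inflationary pressure")  # tightening
```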
  11. By: Koen W. de Bock (Audencia Business School); Kristof Coussement (LEM - Lille économie management - UMR 9221 - UA - Université d'Artois - UCL - Université catholique de Lille - Université de Lille - CNRS - Centre National de la Recherche Scientifique, IÉSEG School Of Management [Puteaux]); Arno De Caigny; Roman Slowiński (Poznan University of Technology, Systems Research Institute of the Polish Academy of Sciences); Bart Baesens (KU Leuven - Catholic University of Leuven - Katholieke Universiteit Leuven, SBS - Southampton Business School); Robert N Boute (Vlerick Business School [Leuven], KU Leuven - Catholic University of Leuven - Katholieke Universiteit Leuven); Tsan-Ming Choi (University of Liverpool Management School); Dursun Delen (Spears School of Business (Oklahoma State University), Istinye University); Mathias Kraus (FAU - Friedrich-Alexander Universität Erlangen-Nürnberg); Stefan Lessmann (Humboldt-Universität zu Berlin - Humboldt University Of Berlin); Sebastián Maldonado (UCHILE - Universidad de Chile = University of Chile [Santiago], ISCI - Instituto de Sistemas Complejos de Ingeniería); David Martens (UA - University of Antwerp); María Óskarsdóttir (Reykjavík University); Carla Vairetti (UANDES - Universidad de los Andes [Santiago]); Wouter Verbeke (KU Leuven - Catholic University of Leuven - Katholieke Universiteit Leuven); Richard Weber (UCHILE - Universidad de Chile = University of Chile [Santiago], ISCI - Instituto de Sistemas Complejos de Ingeniería)
    Abstract: The ability to understand and explain the outcomes of data analysis methods, with regard to aiding decision-making, has become a critical requirement for many applications. For example, in operational research domains, data analytics have long been promoted as a way to enhance decision-making. This study proposes a comprehensive, normative framework to define explainable artificial intelligence (XAI) for operational research (XAIOR) as a reconciliation of three subdimensions that constitute its requirements: performance, attributable, and responsible analytics. In turn, this article offers in-depth overviews of how XAIOR can be deployed through various methods with respect to distinct domains and applications. Finally, an agenda for future XAIOR research is defined.
    Keywords: Decision analysis, XAI, explainable artificial intelligence, interpretable machine learning, XAIOR
    Date: 2023–09
  12. By: Billio, Monica; Casarin, Roberto; Costola, Michele; Veggente, Veronica
    Abstract: Measuring and reducing energy consumption constitutes a crucial concern in public policies aimed at mitigating global warming. The real estate sector faces the challenge of enhancing building efficiency, where insights from experts play a pivotal role in the evaluation process. This research employs a machine learning approach to analyze expert opinions, seeking to extract the key determinants influencing potential residential building efficiency and establishing an efficient prediction framework. The study leverages open Energy Performance Certificate databases from two countries with distinct latitudes, namely the UK and Italy, to investigate whether enhancing energy efficiency necessitates different intervention approaches. The findings reveal the existence of non-linear relationships between efficiency and building characteristics, which cannot be captured by conventional linear modeling frameworks. By offering insights into the determinants of residential building efficiency, this study provides guidance to policymakers and stakeholders in formulating effective and sustainable strategies for energy efficiency improvement.
    Keywords: Energy efficiency, Energy Performance Certificate, Machine learning, Tree-based models, big data
    JEL: C10 C53 C50
    Date: 2023
  13. By: Lisa D. Cook
    Date: 2023–09–22
  14. By: Lechardoy, Lucie; López Forés, Laura; Codagnone, Cristiano
    Abstract: The Digital Compass sets the goal to increase the digitalisation of businesses and take-up of artificial intelligence (AI). The use of AI-based technologies, such as algorithmic management, AI-based robots and wearables using algorithms for data processing, is increasing across countries and sectors. Based on a literature review and the insight from exploratory case studies at company level, this paper presents the main applications of AI-based technologies at the workplace and their impacts for work organisation, working conditions and ethics. Evidence shows a range of both positive and negative impacts of the use of AI on work organisation and working conditions as well as several ethical concerns. To address some of these concerns, a set of ethical guidelines and recommendations from EU, international and national public authorities and social partners have emerged in recent years. The paper presents and compares the different initiatives, highlighting the current gaps to ensure the protection of workers and working conditions while contributing towards the digitalisation goals of the Digital Compass.
    Keywords: Artificial intelligence, workplace, impacts, work organisation, working conditions, ethics, legislative framework
    Date: 2023
  15. By: Jakub Michańków; Łukasz Kwiatkowski; Janusz Morajda
    Abstract: In this paper, we develop a hybrid approach to forecasting the volatility and risk of financial instruments by combining common econometric GARCH time series models with deep learning neural networks. For the latter, we employ Gated Recurrent Unit (GRU) networks, while four different specifications are used as the GARCH component: standard GARCH, EGARCH, GJR-GARCH and APARCH. Models are tested using daily logarithmic returns on the S&P 500 index as well as gold and Bitcoin prices, with the three assets representing quite distinct volatility dynamics. As the main volatility estimator, also underlying the target function of our hybrid models, we use the price-range-based Garman-Klass estimator, modified to incorporate the opening and closing prices. Volatility forecasts resulting from the hybrid models are employed to evaluate the assets' risk using Value-at-Risk (VaR) and Expected Shortfall (ES) at two different tolerance levels of 5% and 1%. Gains from combining the GARCH and GRU approaches are discussed in the contexts of both the volatility and risk forecasts. In general, it can be concluded that the hybrid solutions produce more accurate point volatility forecasts, although this does not necessarily translate into superior VaR and ES forecasts.
    Date: 2023–10
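The standard Garman-Klass range-based estimator referenced above combines the high-low range with the open-close return (the paper's own modification for opening and closing prices is not reproduced here); a parametric VaR can then be read off the estimated volatility. All prices and the 5% z-value below are illustrative:

```python
import math

def garman_klass_var(o, h, l, c):
    """Daily return variance from OHLC prices (standard Garman-Klass form):
    0.5 * ln(H/L)^2 - (2 ln 2 - 1) * ln(C/O)^2."""
    return 0.5 * math.log(h / l) ** 2 - (2 * math.log(2) - 1) * math.log(c / o) ** 2

def value_at_risk(sigma, z=1.645):
    """Parametric one-day VaR at ~5% under a zero-mean normal: z * sigma."""
    return z * sigma

var = garman_klass_var(o=100.0, h=102.0, l=98.0, c=100.0)
sigma = math.sqrt(var)       # daily volatility, about 2.8%
risk = value_at_risk(sigma)  # one-day 5% VaR, about 4.7% of position value
```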
  16. By: Sukwoong Choi; Hyo Kang; Namil Kim; Junsik Kim
    Abstract: We study how humans learn from AI, exploiting the introduction of an AI-powered Go program (APG) that unexpectedly outperformed the best professional player. We compare the move quality of professional players to that of APG's superior solutions around its public release. Our analysis of 749,190 moves demonstrates significant improvements in players' move quality, accompanied by a decrease in the number and magnitude of errors. The effect is pronounced in the early stages of the game, where uncertainty is highest. In addition, younger players and those in AI-exposed countries experience greater improvement, suggesting potential inequality in learning from AI. Further, while players of all levels learn, less skilled players derive higher marginal benefits. These findings have implications for managers seeking to adopt and utilize AI effectively within their organizations.
    Date: 2023–10
  17. By: Nam, Jinyoung; Kim, Junghwan; Jung, Yoonhyuk
    Abstract: Artificial intelligence (AI) startups are utilizing artificial intelligence technology to produce novel solutions across a multitude of sectors, becoming key players in the AI business ecosystem, signifying AI business networks consisting of technology, business applications, and various industry sectors. Particularly noteworthy is the substantial surge in the initiation and investment in AI startups within South Korea. To gain insight into the AI business ecosystem, this study explores how the ecosystem is collectively understood from AI startups' perspectives in South Korea. We conducted semi-structured interviews with 16 CEOs and managers in AI startups in South Korea. This study conducted a core-periphery analysis of the social representation of the AI business ecosystem. By doing so, it bridges an existing knowledge gap and enriches the body of research related to the AI business ecosystem, as well as the current opportunities and challenges it faces. Our findings not only inform and guide practitioners, governments, and businesses alike, but also suggest that continuous discussion among government agencies, large tech companies, and AI startups is crucial for establishing a more sustainable AI business ecosystem.
    Keywords: Artificial intelligence (AI), AI startups, AI business ecosystem, Social representations theory
    Date: 2023
  18. By: Hsiang-Hui Liu; Han-Jay Shu; Wei-Ning Chiu
    Abstract: We introduce NoxTrader, a system designed for portfolio construction and trading execution that aims to generate profitable outcomes, with a primary focus on stock market trading and on cultivating moderate to long-term profits. The underlying learning process of NoxTrader hinges on insights gleaned from historical trading data and relies chiefly on time-series analysis, given the inherent nature of the employed dataset. We describe the sequential pipeline encompassing data acquisition, feature engineering, predictive modeling, parameter configuration and the establishment of a rigorous backtesting framework, and position NoxTrader as evidence of the prospective viability of algorithmic trading models in real-world trading scenarios.
    Date: 2023–10
  19. By: Jameel, Alaa S. (Cihan University-Erbil); Harjan, Sinan Abdullah; Ahmad, Abd Rahman
    Abstract: The purpose of this study is to measure the behavioral intentions (BI) to use artificial intelligence (AI) among managers in small and medium enterprises (SMEs). The target population was SME managers in Baghdad City, sampled after ensuring that the managers were using some form of AI. A total of 184 valid questionnaires were analyzed using Smart-PLS. The results indicated that performance expectancy (PE), social influence (SI), facilitating conditions (FC), and top management support (TMS) have a positive and significant impact on behavioral intention to use AI among SME managers; on the other hand, effort expectancy (EE) has an insignificant impact.
    Date: 2023–07–10
  20. By: Ethan Callanan; Amarachi Mbakwe; Antony Papadimitriou; Yulong Pei; Mathieu Sibue; Xiaodan Zhu; Zhiqiang Ma; Xiaomo Liu; Sameena Shah
    Abstract: Large Language Models (LLMs) have demonstrated remarkable performance on a wide range of Natural Language Processing (NLP) tasks, often matching or even beating state-of-the-art task-specific models. This study aims to assess the financial reasoning capabilities of LLMs. We leverage mock exam questions of the Chartered Financial Analyst (CFA) Program to conduct a comprehensive evaluation of ChatGPT and GPT-4 in financial analysis, considering Zero-Shot (ZS), Chain-of-Thought (CoT), and Few-Shot (FS) scenarios. We present an in-depth analysis of the models' performance and limitations, and estimate whether they would have a chance at passing the CFA exams. Finally, we outline insights into potential strategies and improvements to enhance the applicability of LLMs in finance. We hope this work paves the way for future studies to continue enhancing LLMs for financial reasoning through rigorous evaluation.
    Date: 2023–10
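    The three prompting regimes the abstract names (ZS, CoT, FS) differ only in how the question is wrapped. The abstract does not give the paper's actual templates; the helper below is a generic sketch of how such prompts are typically assembled.

    ```python
    def build_prompt(question: str, mode: str = "zs",
                     examples: list[tuple[str, str]] | None = None) -> str:
        """Assemble an exam-style prompt in zero-shot ('zs'),
        chain-of-thought ('cot'), or few-shot ('fs') mode."""
        parts = []
        if mode == "fs" and examples:
            # Few-shot: prepend worked question/answer pairs
            for q, a in examples:
                parts.append(f"Question: {q}\nAnswer: {a}")
        parts.append(f"Question: {question}")
        if mode == "cot":
            # Chain-of-thought: elicit intermediate reasoning steps
            parts.append("Let's think step by step.")
        parts.append("Answer:")
        return "\n\n".join(parts)
    ```

    The resulting string would then be sent to the model under evaluation; the scoring of the returned answers against the exam key is a separate step not shown here.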
  21. By: Guarascio, Dario; Reljic, Jelena; Stöllinger, Roman
    Abstract: This study provides evidence of the employment impact of AI exposure in European regions, addressing one of the many gaps in the emerging literature on AI's effects on employment in Europe. Building upon the occupation-based AI-exposure indicators proposed by Felten et al. (2018, 2019, 2021), which are mapped to the European occupational classification (ISCO), following Albanesi et al. (2023), we analyse the regional employment dynamics between 2011 and 2018. After controlling for a wide range of supply and demand factors, our findings indicate that, on average, AI exposure has a positive impact on regional employment. Put differently, European regions characterised by a relatively larger share of AI-exposed occupations display, all else being equal and once potential endogeneity concerns are mitigated, a more favourable employment tendency over the period 2011-2018. We also find evidence of a moderating effect of robot density on the AI-employment nexus, which however lacks a causal underpinning.
    Keywords: Artificial intelligence, industrial robots, labour, regional employment, occupations
    JEL: J21 J23 O33 R1
    Date: 2023
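    The study's full specification (its controls and endogeneity corrections) cannot be reproduced from the abstract, but the core regression design — regional employment outcomes on an AI-exposure index with time fixed effects — can be sketched on synthetic data. The data-generating process below is an illustrative assumption, not the paper's dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_regions, years = 50, list(range(2011, 2019))

    # Synthetic panel: employment growth responds positively to AI exposure
    rows = []
    for r in range(n_regions):
        exposure = rng.uniform(0, 1)               # region-level AI exposure
        for t_idx, t in enumerate(years):
            growth = 0.5 * exposure + 0.1 * t_idx + rng.normal(0, 0.1)
            rows.append((exposure, t_idx, growth))
    data = np.array(rows)

    # Design matrix: AI exposure plus a full set of year dummies (fixed effects)
    X = np.column_stack(
        [data[:, 0]]
        + [(data[:, 1] == k).astype(float) for k in range(len(years))]
    )
    y = data[:, 2]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # beta[0] estimates the exposure effect (true value 0.5 by construction)
    ```

    Adding region fixed effects, supply/demand controls, and an instrument for exposure — as the paper does — would extend the same design matrix.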
  22. By: Ajay K. Agrawal; Joshua S. Gans; Avi Goldfarb
    Abstract: We ask whether a technical objective of using human performance of tasks as a benchmark for AI performance will result in the negative outcomes highlighted in prior work in terms of jobs and inequality. Instead, we argue that task automation, especially when driven by AI advances, can enhance job prospects and potentially widen the scope for employment of many workers. The neglected mechanism we highlight is the potential for changes in the skill premium, whereby AI automation of tasks exogenously improves the value of the skills of many workers, expands the pool of available workers to perform other tasks, and, in the process, increases labor income and potentially reduces inequality. We label this possibility the “Turing Transformation.” As such, we argue that AI researchers and policymakers should not focus on the technical aspects of AI applications and whether they are directed at automating human-performed tasks, and should instead focus on the outcomes of AI research. In so doing, we do not mean to diminish human-centric AI research, which remains a laudable goal. Rather, we note that AI research that uses a human-task template with the goal of automating that task can often augment human performance of other tasks and whole jobs. The distributional effects of technology depend more on which workers have tasks that get automated than on the fact of automation per se.
    JEL: J2 O3
    Date: 2023–10
  23. By: Susan Athey; Niall Keleher; Jann Spiess
    Abstract: In many settings, interventions may be more effective for some individuals than others, so that targeting interventions may be beneficial. We analyze the value of targeting in the context of a large-scale field experiment with over 53,000 college students, where the goal was to use "nudges" to encourage students to renew their financial-aid applications before a non-binding deadline. We begin with baseline approaches to targeting. First, we target based on a causal forest that estimates heterogeneous treatment effects and then assigns students to treatment according to those estimated to have the highest treatment effects. Next, we evaluate two alternative targeting policies, one targeting students with low predicted probability of renewing financial aid in the absence of the treatment, the other targeting those with high probability. The predicted baseline outcome is not the ideal criterion for targeting, nor is it a priori clear whether to prioritize low, high, or intermediate predicted probability. Nonetheless, targeting on low baseline outcomes is common in practice, for example because the relationship between individual characteristics and treatment effects is often difficult or impossible to estimate with historical data. We propose hybrid approaches that incorporate the strengths of both predictive approaches (accurate estimation) and causal approaches (correct criterion); we show that targeting intermediate baseline outcomes is most effective, while targeting based on low baseline outcomes is detrimental. In one year of the experiment, nudging all students improved early filing by an average of 6.4 percentage points over a baseline average of 37% filing, and we estimate that targeting half of the students using our preferred policy attains around 75% of this benefit.
    Date: 2023–10
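    The contrast between predictive and causal targeting criteria can be illustrated numerically. Below, the treatment effects are generated under the illustrative assumption that effects peak at intermediate baseline outcomes (consistent with the abstract's finding); the data and effect shape are hypothetical, not the paper's.

    ```python
    import numpy as np

    def policy_value(tau, scores, budget=0.5):
        """Average treatment effect captured by treating the `budget`
        fraction of the population with the highest score."""
        k = int(len(tau) * budget)
        treated = np.argsort(scores)[::-1][:k]
        return tau[treated].mean()

    rng = np.random.default_rng(1)
    baseline = rng.uniform(0, 1, 10_000)            # predicted renewal probability
    # Assumed effect shape: largest for intermediate baselines, plus noise
    tau = 0.4 * baseline * (1 - baseline) + rng.normal(0, 0.01, 10_000)

    v_causal = policy_value(tau, tau)                     # target by estimated effect
    v_low = policy_value(tau, -baseline)                  # target low baselines
    v_mid = policy_value(tau, -np.abs(baseline - 0.5))    # target intermediate
    ```

    Under this data-generating process, targeting low baselines captures less of the available effect than targeting intermediate baselines or targeting by estimated effect, mirroring the paper's qualitative finding.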
  24. By: Qinmeng Luan; James Hamp
    Abstract: Recent work has proposed Wasserstein k-means (Wk-means) clustering as a powerful method to identify regimes in time series data, and one-dimensional asset returns in particular. In this paper, we begin by studying in detail the behaviour of the Wasserstein k-means clustering algorithm applied to synthetic one-dimensional time series data. We study the dynamics of the algorithm and investigate how varying different hyperparameters impacts the performance of the clustering algorithm for different random initialisations. We compute simple metrics that we find are useful in identifying high-quality clusterings. Then, we extend the technique of Wasserstein k-means clustering to multidimensional time series data by approximating the multidimensional Wasserstein distance as a sliced Wasserstein distance, resulting in a method we call `sliced Wasserstein k-means (sWk-means) clustering'. We apply the sWk-means clustering method to the problem of automated regime detection in multidimensional time series data, using synthetic data to demonstrate the validity of the approach. Finally, we show that the sWk-means method is effective in identifying distinct market regimes in real multidimensional financial time series, using publicly available foreign exchange spot rate data as a case study. We conclude with remarks about some limitations of our approach and potential complementary or alternative approaches.
    Date: 2023–10
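    The slicing step the abstract describes — approximating the multidimensional Wasserstein distance by averaging one-dimensional distances over random projections — can be sketched directly. The sketch assumes equal-sized samples, where the 1-D W1 distance between empirical distributions equals the mean absolute difference of the sorted samples; the clustering step (assigning windows to nearest barycenters) is not shown.

    ```python
    import numpy as np

    def sliced_wasserstein(X, Y, n_proj=50, seed=0):
        """Approximate the Wasserstein-1 distance between two equal-sized
        d-dimensional samples by averaging 1-D distances over random
        unit-vector projections (the 'slicing' in sWk-means)."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        total = 0.0
        for _ in range(n_proj):
            theta = rng.normal(size=d)
            theta /= np.linalg.norm(theta)            # random direction on the sphere
            x, y = np.sort(X @ theta), np.sort(Y @ theta)
            total += np.mean(np.abs(x - y))           # 1-D W1 via sorted samples
        return total / n_proj
    ```

    With this distance in hand, a k-means loop over distributional windows of a multivariate return series would alternate between assigning each window to its nearest cluster representative and updating the representatives.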

This nep-cmp issue is ©2023 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.