nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒10‒23
24 papers chosen by
Stan Miles, Thompson Rivers University


  1. Automatic Product Classification in International Trade: Machine Learning and Large Language Models By Marra de Artiñano, Ignacio; Riottini Depetris, Franco; Volpe Martincus, Christian
  2. Startup success prediction and VC portfolio simulation using CrunchBase data By Mark Potanin; Andrey Chertok; Konstantin Zorin; Cyril Shtabtsovsky
  3. Can I Trust the Explanations? Investigating Explainable Machine Learning Methods for Monotonic Models By Dangxing Chen
  4. Short-Term Stock Price Forecasting using exogenous variables and Machine Learning Algorithms By Albert Wong; Steven Whang; Emilio Sagre; Niha Sachin; Gustavo Dutra; Yew-Wei Lim; Gaetan Hains; Youry Khmelevsky; Frank Zhang
  5. An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading By Shuyang Wang; Diego Klabjan
  6. Quantifying Credit Portfolio sensitivity to asset correlations with interpretable generative neural networks By Sergio Caprioli; Emanuele Cagliero; Riccardo Crupi
  7. Mean Absolute Directional Loss as a New Loss Function for Machine Learning Problems in Algorithmic Investment Strategies By Jakub Michańków; Paweł Sakowski; Robert Ślepaczuk
  8. Transformers versus LSTMs for electronic trading By Paul Bilokon; Yitao Qiu
  9. A Comprehensive Review on Financial Explainable AI By Wei Jie Yeo; Wihan van der Heever; Rui Mao; Erik Cambria; Ranjan Satapathy; Gianmarco Mengaldo
  10. Mean Absolute Directional Loss as a New Loss Function for Machine Learning Problems in Algorithmic Investment Strategies By Jakub Michańków; Paweł Sakowski; Robert Ślepaczuk
  11. Univariate Forecasting for REIT with Deep Learning: A Comparative Analysis with an ARIMA Model By Axelsson, Birger; Song, Han-Suck
  12. Comparing effects of price limit and circuit breaker in stock exchanges by an agent-based model By Takanobu Mizuta; Isao Yagi
  13. Human-AI Interactions and Societal Pitfalls By Francisco Castro; Jian Gao; Sébastien Martin
  14. Electricity price forecasting on the day-ahead market using machine learning By Léonard Tschora; Erwan Pierre; Marc Plantevit; Céline Robardet
  15. Approximation Rates for Deep Calibration of (Rough) Stochastic Volatility Models By Francesca Biagini; Lukas Gonon; Niklas Walter
  16. New News is Bad News By Paul Glasserman; Harry Mamaysky; Jimmy Qin
  17. Forecasting Global Maize Prices From Regional Productions By Rotem Zelingher; David Makowski
  18. Conducting Qualitative Interviews with AI By Felix Chopra; Ingar K. Haaland
  19. Double machine learning and design in batch adaptive experiments By Harrison H. Li; Art B. Owen
  20. Utiliser la presse pour construire un nouvel indicateur de perception d’inflation en France [Using the press to build a new indicator of inflation perception in France] By De Bandt Olivier; Bricongne Jean-Charles; Denes Julien; Dhenin Alexandre; De Gaye Annabelle; Robert Pierre-Antoine
  21. AI Adoption in America: Who, What, and Where By Kristina McElheran; J. Frank Li; Erik Brynjolfsson; Zachary Kroff; Emin Dinlersoz; Lucia Foster; Nikolas Zolas
  22. Stock Market Sentiment Classification and Backtesting via Fine-tuned BERT By Jiashu Lou
  23. Numerical Simulations of How Economic Inequality Increases in Democratic Countries By Harashima, Taiji
  24. Responsible artificial intelligence in Africa: Towards policy learning By Plantinga, Paul; Shilongo, Kristophina; Mudongo, Oarabile; Umubyeyi, Angelique; Gastrow, Michael; Razzano, Gabriella

  1. By: Marra de Artiñano, Ignacio; Riottini Depetris, Franco; Volpe Martincus, Christian
    Abstract: Accurately classifying products is essential in international trade. Virtually all countries categorize products into tariff lines using the Harmonized System (HS) nomenclature for both statistical and duty collection purposes. In this paper, we apply and assess several different algorithms to automatically classify products based on text descriptions. To do so, we use agricultural product descriptions from several public agencies, including customs authorities and the United States Department of Agriculture (USDA). We find that while traditional machine learning (ML) models tend to perform well within the dataset in which they were trained, their precision drops dramatically when implemented outside of it. In contrast, large language models (LLMs) such as GPT 3.5 show a consistently good performance across all datasets, with accuracy rates ranging between 60% and 90% depending on HS aggregation levels. Our analysis highlights the valuable role that artificial intelligence (AI) can play in facilitating product classification at scale and, more generally, in enhancing the categorization of unstructured data.
    Keywords: Product Classification; Machine Learning; Large Language Models; Trade
    JEL: F10 C55 C81 C88
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:idb:brikps:12962&r=cmp
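The traditional-ML baseline the abstract refers to can be sketched as a simple text classifier mapping product descriptions to HS chapter codes. The descriptions, labels, and model choice below are illustrative assumptions, not taken from the paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy product descriptions with hypothetical 2-digit HS chapter labels
train_texts = [
    "fresh apples", "frozen beef cuts", "dried apricots",
    "bovine meat, chilled", "bananas, fresh or dried", "swine meat, frozen",
]
train_labels = ["08", "02", "08", "02", "08", "02"]  # 08: fruit, 02: meat

# TF-IDF features + logistic regression, a common text-classification baseline
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["chilled pork meat", "fresh pears"]))
```

The paper's finding is that such a model's accuracy degrades on descriptions drawn from a different agency's vocabulary, which an LLM handles more gracefully.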
  2. By: Mark Potanin; Andrey Chertok; Konstantin Zorin; Cyril Shtabtsovsky
    Abstract: Predicting startup success presents a formidable challenge due to the inherently volatile landscape of the entrepreneurial ecosystem. The advent of extensive databases like Crunchbase, jointly with available open data, enables the application of machine learning and artificial intelligence for more accurate predictive analytics. This paper focuses on startups at their Series B and Series C investment stages, aiming to predict key success milestones such as achieving an Initial Public Offering (IPO), attaining unicorn status, or executing a successful Merger and Acquisition (M&A). We introduce a novel deep learning model for predicting startup success, integrating a variety of factors such as funding metrics, founder features, and industry category. A distinctive feature of our research is the use of a comprehensive backtesting algorithm designed to simulate the venture capital investment process. This simulation allows for a robust evaluation of our model's performance against historical data, providing actionable insights into its practical utility in real-world investment contexts. Evaluating our model on Crunchbase data, we achieved a 14-fold capital growth and successfully identified high-potential startups at the Series B round, including Revolut, DigitalOcean, Klarna, GitHub, and others. Our empirical findings illuminate the importance of incorporating diverse feature sets in enhancing the model's predictive accuracy. In summary, our work demonstrates the considerable promise of deep learning models and alternative unstructured data in predicting startup success and sets the stage for future advancements in this research area.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.15552&r=cmp
  3. By: Dangxing Chen
    Abstract: In recent years, explainable machine learning methods have been very successful. Despite their success, most explainable machine learning methods are applied to black-box models without any domain knowledge. By incorporating domain knowledge, science-informed machine learning models have demonstrated better generalization and interpretation. But do we obtain consistent scientific explanations if we apply explainable machine learning methods to science-informed machine learning models? This question is addressed in the context of monotonic models that exhibit three different types of monotonicity. To demonstrate monotonicity, we propose three axioms. Accordingly, this study shows that when only individual monotonicity is involved, the baseline Shapley value provides good explanations; however, when strong pairwise monotonicity is involved, the Integrated gradients method provides reasonable explanations on average.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.13246&r=cmp
  4. By: Albert Wong; Steven Whang; Emilio Sagre; Niha Sachin; Gustavo Dutra; Yew-Wei Lim; Gaetan Hains; Youry Khmelevsky; Frank Zhang
    Abstract: Creating accurate predictions in the stock market has always been a significant challenge in finance. With the rise of machine learning in forecasting, this research paper compares four machine learning models and their accuracy in forecasting three well-known stocks traded on the NYSE in the short term, from March 2020 to May 2022. We develop, tune, and deploy XGBoost, Random Forest, Multi-layer Perceptron, and Support Vector Regression models, and report the models that produce the highest accuracies on our evaluation metrics: RMSE, MAPE, MTT, and MPE. Using a training data set of 240 trading days, we find that XGBoost gives the highest accuracy despite running longer (up to 10 seconds). Results from this study may improve with further tuning of the individual parameters or the introduction of more exogenous variables.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.00618&r=cmp
  5. By: Shuyang Wang; Diego Klabjan
    Abstract: We propose an ensemble method to improve the generalization performance of trading strategies trained by deep reinforcement learning algorithms in a highly stochastic environment of intraday cryptocurrency portfolio trading. We adopt a model selection method that evaluates on multiple validation periods, and propose a novel mixture distribution policy to effectively ensemble the selected models. We provide a distributional view of the out-of-sample performance on granular test periods to demonstrate the robustness of the strategies in evolving market conditions, and retrain the models periodically to address non-stationarity of financial data. Our proposed ensemble method improves the out-of-sample performance compared with the benchmarks of a deep reinforcement learning strategy and a passive investment strategy.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.00626&r=cmp
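The mixture-distribution policy idea can be illustrated schematically: average the action distributions of the selected models, weighted by validation performance, then sample the trading action from the mixture. This is a sketch of the general technique, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_policy(action_probs, weights):
    """Combine K policies' action distributions into one mixture.

    action_probs: (K, A) array, each row a distribution over A actions.
    weights: (K,) mixture weights, e.g. derived from validation performance.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalize to sum to 1
    return weights @ np.asarray(action_probs)  # weighted average per action

# Three trained policies' distributions over {sell, hold, buy}
probs = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6], [0.1, 0.5, 0.4]]
mix = mixture_policy(probs, weights=[1.0, 1.0, 1.0])
print(mix)                     # averaged action distribution
action = rng.choice(3, p=mix)  # sample a trading action from the mixture
```

Averaging distributions (rather than picking a single "best" model) is one way to smooth out the variance of individual deep RL runs across validation periods.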
  6. By: Sergio Caprioli; Emanuele Cagliero; Riccardo Crupi
    Abstract: In this research, we propose a novel approach for quantifying the sensitivity of credit portfolio Value-at-Risk (VaR) to asset correlations using synthetic financial correlation matrices generated with deep learning models. In previous work, Generative Adversarial Networks (GANs) were employed to generate plausible correlation matrices that capture the essential characteristics observed in empirical correlation matrices estimated on asset returns. Instead of GANs, we employ Variational Autoencoders (VAEs) to achieve a more interpretable latent space representation. Through our analysis, we show that the VAE latent space can be a useful tool for capturing the crucial factors impacting portfolio diversification, particularly in relation to credit portfolio sensitivity to changes in asset correlations.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.08652&r=cmp
  7. By: Jakub Michańków (Cracow University of Economics, Department of Informatics; University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group, Department of Quantitative Finance); Paweł Sakowski (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group, Department of Quantitative Finance); Robert Ślepaczuk (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group, Department of Quantitative Finance)
    Abstract: This paper investigates the issue of an adequate loss function in the optimization of machine learning models used in the forecasting of financial time series for the purpose of algorithmic investment strategies (AIS) construction. We propose the Mean Absolute Directional Loss (MADL) function, solving important problems of classical forecast error functions in extracting information from forecasts to create efficient buy/sell signals in algorithmic investment strategies. Finally, based on the data from two different asset classes (cryptocurrencies: Bitcoin and commodities: Crude Oil), we show that the new loss function enables us to select better hyperparameters for the LSTM model and obtain more efficient investment strategies, regarding risk-adjusted return metrics on the out-of-sample data.
    Keywords: machine learning, recurrent neural networks, long short-term memory, algorithmic investment strategies, testing architecture, loss function, walk-forward optimization, over-optimization
    JEL: C4 C14 C45 C53 C58 G13
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2023-23&r=cmp
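A minimal sketch of the MADL idea, as I read the abstract: reward forecasts whose sign matches the realized return and penalize those that do not, weighted by the magnitude of the realized return. The exact formula in the paper may differ:

```python
import numpy as np

def madl(returns, forecasts):
    """Mean Absolute Directional Loss (sketch).

    For each period, contribute -|R_t| when the forecast's sign matches the
    realized return (a reward) and +|R_t| when it does not (a penalty),
    then average. Lower is better, so a correct call on a large move
    counts more than one on a small move.
    """
    returns = np.asarray(returns, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    return np.mean(-np.sign(returns * forecasts) * np.abs(returns))

r = np.array([0.02, -0.03, 0.01])
print(madl(r, np.array([0.5, -0.1, 0.2])))   # all directions correct
print(madl(r, np.array([-0.5, 0.1, -0.2])))  # all directions wrong
```

Unlike RMSE or MAE, such a loss ignores the forecast's magnitude error entirely and scores only the tradable information: direction, weighted by how much was at stake.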
  8. By: Paul Bilokon; Yitao Qiu
    Abstract: With the rapid development of artificial intelligence, long short-term memory (LSTM), a kind of recurrent neural network (RNN), has been widely applied in time series prediction. Like the RNN, the Transformer is designed to handle sequential data. As the Transformer achieved great success in Natural Language Processing (NLP), researchers became interested in its performance on time series prediction, and plenty of Transformer-based solutions for long time series forecasting have come out recently. However, when it comes to financial time series prediction, LSTM is still the dominant architecture. Therefore, the question this study wants to answer is: can Transformer-based models be applied to financial time series prediction and beat LSTM? To answer this question, various LSTM-based and Transformer-based models are compared on multiple financial prediction tasks based on high-frequency limit order book data. A new LSTM-based model called DLSTM is built, and a new architecture for the Transformer-based model is designed to adapt it to financial prediction. The experimental results show that the Transformer-based model has only a limited advantage in absolute price sequence prediction, while the LSTM-based models show better and more robust performance on difference sequence prediction, such as price difference and price movement.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.11400&r=cmp
  9. By: Wei Jie Yeo; Wihan van der Heever; Rui Mao; Erik Cambria; Ranjan Satapathy; Gianmarco Mengaldo
    Abstract: The success of artificial intelligence (AI), and deep learning models in particular, has led to their widespread adoption across various industries due to their ability to process huge amounts of data and learn complex patterns. However, due to their lack of explainability, there are significant concerns regarding their use in critical sectors, such as finance and healthcare, where decision-making transparency is of paramount importance. In this paper, we provide a comparative survey of methods that aim to improve the explainability of deep learning models within the context of finance. We categorize the collection of explainable AI methods according to their corresponding characteristics, and we review the concerns and challenges of adopting explainable AI methods, together with future directions we deemed appropriate and important.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.11960&r=cmp
  10. By: Jakub Michańków; Paweł Sakowski; Robert Ślepaczuk
    Abstract: This paper investigates the issue of an adequate loss function in the optimization of machine learning models used in the forecasting of financial time series for the purpose of algorithmic investment strategies (AIS) construction. We propose the Mean Absolute Directional Loss (MADL) function, solving important problems of classical forecast error functions in extracting information from forecasts to create efficient buy/sell signals in algorithmic investment strategies. Finally, based on the data from two different asset classes (cryptocurrencies: Bitcoin and commodities: Crude Oil), we show that the new loss function enables us to select better hyperparameters for the LSTM model and obtain more efficient investment strategies, with regard to risk-adjusted return metrics on the out-of-sample data.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.10546&r=cmp
  11. By: Axelsson, Birger (Department of Real Estate and Construction Management, Royal Institute of Technology); Song, Han-Suck (Department of Real Estate and Construction Management, Royal Institute of Technology)
    Abstract: This study aims to investigate whether the newly developed deep learning-based algorithms, specifically Long Short-Term Memory (LSTM), outperform traditional algorithms in forecasting Real Estate Investment Trust (REIT) returns. The empirical analysis conducted in this research compares the forecasting performance of LSTM and Autoregressive Integrated Moving Average (ARIMA) models using out-of-sample data. The results demonstrate that in general, the LSTM model does not exhibit superior performance over the ARIMA model for forecasting REIT returns. While the LSTM model showed some improvement over the ARIMA model for shorter forecast horizons, it did not demonstrate a significant advantage in the majority of forecast scenarios, including both recursive multi-step forecasts and rolling forecasts. The comparative evaluation reveals that neither the LSTM nor the ARIMA model demonstrated satisfactory performance in predicting REIT returns out-of-sample for longer forecast horizons. This outcome aligns with the efficient market hypothesis, suggesting that REIT returns may exhibit a random walk behavior. While this observation does not exclude other potential factors contributing to the models' performance, it supports the notion of the presence of market efficiency in the REIT sector. The error rates obtained by both models were comparable, indicating the absence of a significant advantage for LSTM over ARIMA, as well as the challenges in accurately predicting REIT returns using these approaches. These findings emphasize the need for careful consideration when employing advanced deep learning techniques, such as LSTM, in the context of REIT return forecasting and financial time series. While LSTM has shown promise in various domains, its performance in the context of financial time series forecasting, particularly with a univariate regression approach using daily data, may be influenced by multiple factors.
Potential reasons for the observed limitations of our LSTM model, within this specific framework, include the presence of significant noise in the daily data and the suitability of the LSTM model for financial time series compared to other problem domains. However, it is important to acknowledge that there could be additional factors that impact the performance of LSTM models in financial time series forecasting, warranting further investigation and exploration. This research contributes to the understanding of the applicability of deep learning algorithms in the context of REIT return forecasting and encourages further exploration of alternative methodologies for improved forecasting accuracy in this domain.
    Keywords: Forecasting; Equity REITs; deep learning; LSTM; ARIMA
    JEL: G17 G19
    Date: 2023–09–28
    URL: http://d.repec.org/n?u=RePEc:hhs:kthrec:2023_010&r=cmp
  12. By: Takanobu Mizuta; Isao Yagi
    Abstract: The prevention of rapidly and steeply falling market prices is vital to avoid financial crises. To this end, some stock exchanges implement a price limit or a circuit breaker, and there has been intensive investigation into which regulation best prevents rapid and large variations in price. In this study, we examine this question using an artificial market model, an agent-based model for a financial market. Our findings show that the price limit and the circuit breaker have essentially the same effect when the parameters, limit price range and limit time range, are the same. However, the price limit is less effective when the limit time range is smaller than the cancel time range. With the price limit, many sell orders accumulate around the lower limit price, and when the lower limit price is changed before the accumulated sell orders are cancelled, this leads to the accumulation of sell orders at various prices. These accumulated sell orders essentially act as a wall against buy orders, thereby preventing prices from rising. Caution should be taken in the sense that these results pertain to a limited situation: specifically, our finding that the circuit breaker is better than the price limit applies only in cases where the reason for falling prices is erroneous orders and where individual stocks are regulated.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.10220&r=cmp
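A price-limit rule of the kind compared in the paper can be sketched as a clamp applied to order prices around a reference price. The band width here is a made-up parameter, not taken from the paper's simulations:

```python
def apply_price_limit(order_price, reference_price, limit_range=0.1):
    """Clamp an order price to within +/- limit_range of the reference
    price, mimicking a daily price-limit rule. A circuit breaker would
    instead halt trading when the traded price leaves this band, rather
    than clipping individual orders."""
    lower = reference_price * (1 - limit_range)
    upper = reference_price * (1 + limit_range)
    return max(lower, min(order_price, upper))

print(apply_price_limit(85.0, 100.0))   # clipped up to the lower limit, 90.0
print(apply_price_limit(103.0, 100.0))  # inside the band, unchanged
```

The clipping behavior is what produces the "wall" effect described in the abstract: sell orders that would have executed below the band instead pile up at the lower limit price.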
  13. By: Francisco Castro; Jian Gao; S\'ebastien Martin
    Abstract: When working with generative artificial intelligence (AI), users may see productivity gains, but the AI-generated content may not match their preferences exactly. To study this effect, we introduce a Bayesian framework in which heterogeneous users choose how much information to share with the AI, facing a trade-off between output fidelity and communication cost. We show that the interplay between these individual-level decisions and AI training may lead to societal challenges. Outputs may become more homogenized, especially when the AI is trained on AI-generated content. And any AI bias may become societal bias. A solution to the homogenization and bias issues is to improve human-AI interactions, enabling personalized outputs without sacrificing productivity.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.10448&r=cmp
  14. By: Léonard Tschora (LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information, DM2L - Data Mining and Machine Learning, Université de Lyon, CNRS); Erwan Pierre; Marc Plantevit; Céline Robardet (LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information, DM2L - Data Mining and Machine Learning, Université de Lyon, CNRS)
    Abstract: The price of electricity on the European market is very volatile. This is due both to its mode of production by different sources, each with its own constraints (volume of production, dependence on the weather, or production inertia), and by the difficulty of its storage. Being able to predict the prices of the next day is an important issue, to allow the development of intelligent uses of electricity. In this article, we investigate the capabilities of different machine learning techniques to accurately predict electricity prices. Specifically, we extend current state-of-the-art approaches by considering previously unused predictive features such as price histories of neighboring countries. We show that these features significantly improve the quality of forecasts, even in the current period when sudden changes are occurring. We also develop an analysis of the contribution of the different features in model prediction using Shap values, in order to shed light on how models make their prediction and to build user confidence in models.
    Date: 2022–05
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03621974&r=cmp
  15. By: Francesca Biagini; Lukas Gonon; Niklas Walter
    Abstract: We derive quantitative error bounds for deep neural networks (DNNs) approximating option prices on a $d$-dimensional risky asset as functions of the underlying model parameters, payoff parameters and initial conditions. We cover a general class of stochastic volatility models of Markovian nature as well as the rough Bergomi model. In particular, under suitable assumptions we show that option prices can be learned by DNNs up to an arbitrarily small error $\varepsilon \in (0, 1/2)$ while the network size grows only sub-polynomially in the asset vector dimension $d$ and the reciprocal $\varepsilon^{-1}$ of the accuracy. Hence, the approximation does not suffer from the curse of dimensionality. As quantitative approximation results for DNNs applicable in our setting are formulated for functions on compact domains, we first consider the case of the asset price restricted to a compact set, then we extend these results to the general case by using convergence arguments for the option prices.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.14784&r=cmp
  16. By: Paul Glasserman; Harry Mamaysky; Jimmy Qin
    Abstract: An increase in the novelty of news predicts negative stock market returns and negative macroeconomic outcomes over the next year. We quantify news novelty - changes in the distribution of news text - through an entropy measure, calculated using a recurrent neural network applied to a large news corpus. Entropy is a better out-of-sample predictor of market returns than a collection of standard measures. Cross-sectional entropy exposure carries a negative risk premium, suggesting that assets that positively covary with entropy hedge the aggregate risk associated with shifting news language. Entropy risk cannot be explained by existing long-short factors.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.05560&r=cmp
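The paper's entropy measure is computed with a recurrent neural network over a large corpus; the underlying intuition can be illustrated with a much simpler stand-in: the cross-entropy of a new article under a smoothed unigram model fitted on past articles. Everything below is purely illustrative:

```python
import math
from collections import Counter

def unigram_cross_entropy(past_articles, new_article, alpha=1.0):
    """Average negative log-probability (in nats) of the new article's
    words under a Laplace-smoothed unigram model of the past corpus.
    Higher values indicate more 'novel' language relative to the past."""
    counts = Counter(w for a in past_articles for w in a.split())
    vocab = set(counts) | set(new_article.split())
    total = sum(counts.values()) + alpha * len(vocab)
    words = new_article.split()
    return -sum(math.log((counts[w] + alpha) / total) for w in words) / len(words)

past = ["rates rose as the fed tightened", "the fed held rates steady"]
print(unigram_cross_entropy(past, "the fed held rates"))       # familiar phrasing
print(unigram_cross_entropy(past, "novel pandemic lockdowns")) # novel phrasing
```

An article written in the corpus's usual language scores low; one full of previously unseen words scores high, which is the shift in the news-text distribution the paper links to subsequent returns.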
  17. By: Rotem Zelingher (ECO-PUB - Economie Publique - AgroParisTech - Université Paris-Saclay - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); David Makowski (MIA Paris-Saclay - Mathématiques et Informatique Appliquées - AgroParisTech - Université Paris-Saclay - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement)
    Abstract: This study analyses the quality of six regression algorithms in forecasting the monthly price of maize in its primary international trading market, using publicly available data on agricultural production at a regional scale. Forecasts are made between one and twelve months ahead, using six different techniques. Three (CART, RF, and GBM) are tree-based machine learning techniques that capture the relative influence of maize-producing regions on global maize price variations. Additionally, we consider two types of linear models: standard multiple linear regression and a vector autoregressive (VAR) model. Finally, TBATS serves as an advanced time-series model that combines the advantages of several commonly used time-series algorithms. The predictive capabilities of these six methods are compared by cross-validation. We find that RF and GBM have superior forecasting abilities relative to the linear models, while TBATS is more accurate for short-term forecasts, with horizons shorter than three months. In addition, all models are trained to assess the marginal contribution of each producing region to the most extreme price shocks, in both positive and negative directions, observed over the past 60 years of data, using Shapley decompositions. Our results reveal a strong influence of North American yield variation on the global price, except in the last months preceding the new-crop season.
    Keywords: Price forecasting, Regional production
    Date: 2022–04–28
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03764942&r=cmp
  18. By: Felix Chopra; Ingar K. Haaland
    Abstract: Qualitative interviews are one of the fundamental tools of empirical social science research and give individuals the opportunity to explain how they understand and interpret the world, allowing researchers to capture detailed and nuanced insights into complex phenomena. However, qualitative interviews are seldom used in economics and other disciplines inclined toward quantitative data analysis, likely due to concerns about limited scalability, high costs, and low generalizability. In this paper, we introduce an AI-assisted method to conduct semi-structured interviews. This approach retains the depth of traditional qualitative research while enabling large-scale, cost-effective data collection suitable for quantitative analysis. We demonstrate the feasibility of this approach through a large-scale data collection to understand the stock market participation puzzle. Our 395 interviews allow for quantitative analysis that we demonstrate yields richer and more robust conclusions compared to qualitative interviews with traditional sample sizes as well as to survey responses to a single open-ended question. We also demonstrate high interviewee satisfaction with the AI-assisted interviews. In fact, a majority of respondents indicate a strict preference for AI-assisted interviews over human-led interviews. Our novel AI-assisted approach bridges the divide between qualitative and quantitative data analysis and substantially lowers the barriers and costs of conducting qualitative interviews at scale.
    Keywords: artificial intelligence, interviews, large language models, qualitative methods, stock market participation
    JEL: C83 C90 D14 D91 Z13
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10666&r=cmp
  19. By: Harrison H. Li; Art B. Owen
    Abstract: We consider an experiment with at least two stages or batches and $O(N)$ subjects per batch. First, we propose a semiparametric treatment effect estimator that efficiently pools information across the batches, and show it asymptotically dominates alternatives that aggregate single batch estimates. Then, we consider the design problem of learning propensity scores for assigning treatment in the later batches of the experiment to maximize the asymptotic precision of this estimator. For two common causal estimands, we estimate this precision using observations from previous batches, and then solve a finite-dimensional concave maximization problem to adaptively learn flexible propensity scores that converge to suitably defined optima in each batch at rate $O_p(N^{-1/4})$. By extending the framework of double machine learning, we show this rate suffices for our pooled estimator to attain the targeted precision after each batch, as long as nuisance function estimates converge at rate $o_p(N^{-1/4})$. These relatively weak rate requirements enable the investigator to avoid the common practice of discretizing the covariate space for design and estimation in batch adaptive experiments while maintaining the advantages of pooling. Our numerical study shows that such discretization often leads to substantial asymptotic and finite sample precision losses outweighing any gains from design.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.15297&r=cmp
  20. By: De Bandt Olivier; Bricongne Jean-Charles; Denes Julien; Dhenin Alexandre; De Gaye Annabelle; Robert Pierre-Antoine
    Abstract: The paper applies Natural Language Processing techniques (NLP) to the quasi-universe of newspaper articles for France, concentrating on the period 2004-2022, in order to measure inflation attention as well as perceptions by households and firms for that country. The indicator, constructed along the lines of a balance of opinions, is well correlated with actual HICP inflation. It also exhibits good forecasting properties for the European Commission survey on households’ inflation expectations, as well as overall HICP inflation. The method used is a supervised approach that we describe step-by-step. It performs better on our data than the Latent-Dirichlet-Allocation (LDA)-based approach of Angelico et al. (2022). The indicator can be used as an early real-time indicator of future inflation developments and expectations. It also provides a new set of indicators at a time when central banks monitor inflation through new types of surveys of households and firms.
    Keywords: Inflation, Natural Language Processing, Households and Firms, Expectations, Machine Learning
    JEL: C53 C55 D84 E31 E58
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:bfr:banfra:921&r=cmp
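The abstract describes an indicator "constructed along the lines of a balance of opinions." A minimal sketch of that construction, assuming articles have already been classified (by a supervised model, as in the paper) into hypothetical "up" / "down" / "neutral" labels about inflation, is the share of rising-price articles minus the share of falling-price articles:

```python
from collections import Counter

def balance_of_opinions(labels):
    """Balance-of-opinions index for one period.

    labels: one label per inflation-related article, drawn from
    {"up", "down", "neutral"}. These labels are hypothetical; the
    paper's supervised classifier produces something analogous.
    """
    total = len(labels)
    if total == 0:
        return 0.0
    counts = Counter(labels)
    # Share of "prices rising" articles minus share of "prices falling".
    return (counts["up"] - counts["down"]) / total

# Example: 60 articles flag rising prices, 15 falling, 25 neutral.
index = balance_of_opinions(["up"] * 60 + ["down"] * 15 + ["neutral"] * 25)
# index = (60 - 15) / 100 = 0.45
```

Computed monthly over the classified article corpus, a series of this kind is what the authors correlate with HICP inflation and survey expectations.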
  21. By: Kristina McElheran; J. Frank Li; Erik Brynjolfsson; Zachary Kroff; Emin Dinlersoz; Lucia Foster; Nikolas Zolas
    Abstract: We study the early adoption and diffusion of five AI-related technologies (automated-guided vehicles, machine learning, machine vision, natural language processing, and voice recognition) as documented in the 2018 Annual Business Survey of 850,000 firms across the United States. We find that fewer than 6% of firms used any of the AI-related technologies we measure, though most very large firms reported at least some AI use. Weighted by employment, average adoption was just over 18%. Among dynamic young firms, AI use was highest alongside more-educated, more-experienced, and younger owners, including owners motivated by bringing new ideas to market or helping the community. AI adoption was also more common in startups displaying indicators of high-growth entrepreneurship, such as venture capital funding, recent innovation, and growth-oriented business strategies. Adoption was far from evenly spread across America: a handful of “superstar” cities and emerging technology hubs led startups’ use of AI. These patterns of early AI use foreshadow economic and social impacts far beyond its limited initial diffusion, with the possibility of a growing “AI divide” if early patterns persist.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:cen:wpaper:23-48&r=cmp
  22. By: Jiashu Lou
    Abstract: With the rapid development of big data and computing devices, low-latency automatic trading platforms based on real-time information acquisition have become a main component of the stock trading market, so quantitative trading has received widespread attention. In markets that are not strongly efficient, human emotions and expectations still dominate market trends and trading decisions. Starting from the theory of emotion, this paper takes East Money as an example, scraping user comment titles from its corresponding stock message board and cleaning the data. We then construct a BERT natural language processing model and fine-tune it on existing annotated data sets. The experimental results show that the fine-tuned model improves on both the original model and the baseline model to varying degrees. Based on this model, the scraped user comments are labeled with emotional polarity, and the resulting labels are combined with the Alpha191 model in a regression, which yields significant results. The regression model is then used to predict the average price change over the next five days, and this prediction serves as a signal to guide automatic trading. The experimental results show that incorporating emotional factors increased the return rate by 73.8% compared to the baseline during the trading period, and by 32.41% compared to the original Alpha191 model. Finally, we discuss the advantages and disadvantages of incorporating emotional factors into quantitative trading and suggest possible directions for future research.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.11979&r=cmp
  23. By: Harashima, Taiji
    Abstract: It is not easy to perform a numerical simulation of the path to a steady state in dynamic economic growth models in which households form rational expectations. It is much easier, however, if households are assumed to behave according to a procedure based on the maximum degree of comfortability (MDC), the state at which a household feels most comfortable with its combination of income and assets. In this paper, I simulate how economic inequality increases in democratic countries under the supposition that households follow the MDC-based procedure. The results indicate that high levels of economic inequality can be generated, and can even increase, in a democracy. As causes, I point to households’ misunderstandings of the economic situation, a government biased against certain groups in the economy, or an upward trend in temporary rent incomes. I then present a criterion for establishing the socially acceptable level of economic inequality and point out a practical shortfall arising from the inability to distinguish temporary economic rents.
    Keywords: Democracy; Economic rent; Economic inequality; Government transfer; Heterogeneity; Simulation
    JEL: E17 E60 E63 H2
    Date: 2023–09–27
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:118710&r=cmp
  24. By: Plantinga, Paul; Shilongo, Kristophina; Mudongo, Oarabile; Umubyeyi, Angelique; Gastrow, Michael; Razzano, Gabriella
    Abstract: Several African countries are developing artificial intelligence (AI) strategies and ethics frameworks with the goal of accelerating responsible AI development and adoption. However, many of these governance actions are emerging without consideration for their suitability to local contexts, including whether the proposed policies are feasible to implement and what their impact may be on regulatory outcomes. In response, we suggest that there is a need for more explicit policy learning, by looking at existing governance capabilities and experiences related to algorithms, automation, data and digital technology in other countries and in adjacent sectors. From such learning it will be possible to identify where existing capabilities may be adapted or strengthened to address current AI-related opportunities and risks. This paper explores the potential for learning by analysing existing policy and legislation in twelve African countries across three main areas: strategy and multi-stakeholder engagement, human dignity and autonomy, and sector-specific governance. The findings point to a variety of existing capabilities that could be relevant to responsible AI: from existing model management procedures used in banking and air quality assessment, to efforts aimed at enhancing public sector skills and transparency around public-private partnerships, and the way in which existing electronic transactions legislation addresses accountability and human oversight. All of these point to the benefit of wider engagement on how existing governance mechanisms are working, and on where AI-specific adjustments or new instruments may be needed.
    Date: 2023–09–26
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:jyhae&r=cmp

This nep-cmp issue is ©2023 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.