nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒11‒20
twenty-six papers chosen by



  1. Blending gradient boosted trees and neural networks for point and probabilistic forecasting of hierarchical time series By Ioannis Nasios; Konstantinos Vogklis
  2. Non-linear approximations of DSGE models with neural-networks and hard-constraints By Emmet Hall-Hoffarth
  3. Transparency challenges in policy evaluation with causal machine learning -- improving usability and accountability By Patrick Rehill; Nicholas Biddle
  4. Co-Training Realized Volatility Prediction Model with Neural Distributional Transformation By Xin Du; Kai Moriyama; Kumiko Tanaka-Ishii
  5. A Score Function to Prioritize Editing in Household Survey Data: A Machine Learning Approach By Nicolás Forteza; Sandra García-Uribe
  6. Determinants of U.S. REIT Bond Risk Premia with Explainable Machine Learning By Jakob Kozak; Maximilian Nagl; Cathrine Nagl; Eli Beracha; Wolfgang Schäfers
  7. GDP nowcasting with Machine Learning and Unstructured Data to Peru By Juan Tenorio; Wilder Pérez
  8. Social Media and Real Estate: Do Twitter users predict REIT performance? By Nino Paulus; Lukas Lautenschlaeger; Wolfgang Schäfers
  9. Neural Network for valuing Bitcoin options under jump-diffusion and market sentiment model By Edson Pindza; Jules Clement Mba; Sutene Mwambi; Nneka Umeorah
  10. The Economics of Attention By George Loewenstein; Zachary Wojtowicz
  11. Demand Estimation with Text and Image Data By Giovanni Compiani; Ilya Morozov; Stephan Seiler
  12. AI Adoption in America: Who, What, and Where By Kristina McElheran; J. Frank Li; Erik Brynjolfsson; Zachary Kroff; Emin Dinlersoz; Lucia S. Foster; Nikolas Zolas
  13. Quantum Computational Algorithms for Derivative Pricing and Credit Risk in a Regime Switching Economy By Eric Ghysels; Jack Morgan; Hamed Mohammadbagherpoor
  14. A Deep Learning Analysis of Climate Change, Innovation, and Uncertainty By Michael Barnett; William Brock; Lars Peter Hansen; Ruimeng Hu; Joseph Huang
  15. Agent-based Modelling of Credit Card Promotions By Conor B. Hamill; Raad Khraishi; Simona Gherghel; Jerrard Lawrence; Salvatore Mercuri; Ramin Okhrati; Greig A. Cowan
  16. Sparse Index Tracking via Topological Learning By Anubha Goel; Puneet Pasricha; Juho Kanniainen
  17. Understanding Models and Model Bias with Gaussian Processes By Thomas R. Cook; Nathan M. Palmer
  18. Machine Learning for Staggered Difference-in-Differences and Dynamic Treatment Effect Heterogeneity By Julia Hatamyar; Noemi Kreif; Rudi Rocha; Martin Huber
  19. Towards Enhanced Local Explainability of Random Forests: a Proximity-Based Approach By Joshua Rosaler; Dhruv Desai; Bhaskarjit Sarmah; Dimitrios Vamvourellis; Deran Onay; Dhagash Mehta; Stefano Pasquali
  20. Artificial intelligence, services globalisation and income inequality By Giulio Cornelli; Jon Frost; Saurabh Mishra
  21. Some critical and ethical perspectives on the empirical turn of AI interpretability By Jean-Marie John-Mathews
  22. Double taxation treaties and resource revenue mobilization in developing countries: A neural network approach By Harouna Kinda; Abrams M.E. Tagem
  23. Leveraging Large Language Model for Automatic Evolving of Industrial Data-Centric R&D Cycle By Xu Yang; Xiao Yang; Weiqing Liu; Jinhui Li; Peng Yu; Zeqi Ye; Jiang Bian
  24. AI-Generated Inventions: Implications for the Patent System By Gaetan de Rassenfosse; Adam Jaffe; Melissa Wasserman
  25. The Quantum Tortoise and the Classical Hare: A simple framework for understanding which problems quantum computing will accelerate (and which it will not) By Sukwoong Choi; William S. Moses; Neil Thompson
  26. Towards reducing hallucination in extracting information from financial reports using Large Language Models By Bhaskarjit Sarmah; Tianjie Zhu; Dhagash Mehta; Stefano Pasquali

  1. By: Ioannis Nasios; Konstantinos Vogklis
    Abstract: In this paper we tackle the problem of point and probabilistic forecasting by describing a methodology for blending machine learning models from the gradient boosted trees and neural networks families. These principles were successfully applied in the recent M5 Competition on both the Accuracy and Uncertainty tracks. The key points of our methodology are: a) transform the task to regression on sales for a single day, b) information-rich feature engineering, c) create a diverse set of state-of-the-art machine learning models, and d) carefully construct validation sets for model tuning. We argue that the diversity of the machine learning models, along with the careful selection of validation examples, were the most important ingredients for the effectiveness of our approach. Although the forecasting data had an inherent hierarchical structure (12 levels), none of our proposed solutions exploited that hierarchical scheme. Using the proposed methodology, our team was ranked within the gold medal range on both the Accuracy and the Uncertainty tracks. Inference code, along with already trained models, is available at https://github.com/IoannisNasios/M5_Uncertainty_3rd_place
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.13029&r=cmp
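    Sketch: a minimal, self-contained Python illustration of the blending idea in the abstract above: train one gradient-boosted-tree model and one neural network on the same regression task and average their predictions. The synthetic features, the time-ordered split, and the equal blend weights are illustrative assumptions, not the authors' pipeline.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 10))  # engineered features (lags, prices, calendar)
      y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=2000)

      # Careful validation-set construction matters as much as any single model;
      # shuffle=False mimics a time-ordered holdout.
      X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, shuffle=False)

      gbt = GradientBoostingRegressor(n_estimators=300).fit(X_tr, y_tr)
      nn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_tr, y_tr)

      # Equal-weight blend of the two model families; the weights could instead
      # be tuned on the validation set.
      blend = 0.5 * gbt.predict(X_val) + 0.5 * nn.predict(X_val)
      print("validation RMSE:", np.sqrt(np.mean((blend - y_val) ** 2)))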
  2. By: Emmet Hall-Hoffarth
    Abstract: Recently a number of papers have suggested using neural networks to approximate policy functions in DSGE models, avoiding the curse of dimensionality that arises, for example, when solving many HANK models, while preserving non-linearity. One important step of this method is to represent the constraints of the economic model in question in the outputs of the neural network. I propose, and demonstrate the advantages of, a novel approach to handling these constraints which involves directly constraining the neural network outputs, such that the economic constraints are satisfied by construction. This is achieved by a combination of re-scaling operations that are differentiable and therefore compatible with the standard gradient descent approach used when fitting neural networks. The approach has a number of attractive properties, and is shown to outperform the penalty-based approach suggested by the existing literature, which, while theoretically sound, can be poorly behaved in practice for a number of reasons that I identify.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.13436&r=cmp
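    Sketch: a minimal PyTorch illustration of the hard-constraint idea above: raw network outputs pass through a differentiable re-scaling (here a softmax) so that allocations are positive and exhaust the budget by construction. The two-choice budget constraint is an illustrative assumption, not the paper's DSGE model.

      import torch
      import torch.nn as nn

      class ConstrainedPolicy(nn.Module):
          def __init__(self, n_states, n_choices):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(n_states, 32), nn.Tanh(),
                                       nn.Linear(32, n_choices))

          def forward(self, state, income):
              raw = self.net(state)
              shares = torch.softmax(raw, dim=-1)   # differentiable re-scaling
              return shares * income.unsqueeze(-1)  # allocations sum to income exactly

      policy = ConstrainedPolicy(n_states=3, n_choices=2)
      state, income = torch.randn(8, 3), torch.ones(8)
      print(policy(state, income).sum(dim=-1))  # equals income for every sample

    Because the softmax is differentiable, the constraint costs nothing at training time, unlike a penalty term whose weight must be tuned.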
  3. By: Patrick Rehill; Nicholas Biddle
    Abstract: Causal machine learning tools are beginning to see use in real-world policy evaluation tasks to flexibly estimate treatment effects. One issue with these methods is that the machine learning models used are generally black boxes, i.e., there is no globally interpretable way to understand how a model makes estimates. This is a clear problem in policy evaluation applications, particularly in government, because it is difficult to understand whether such models are functioning in ways that are fair, based on the correct interpretation of evidence, and transparent enough to allow for accountability if things go wrong. However, there has been little discussion of transparency problems in the causal machine learning literature and how these might be overcome. This paper explores why transparency issues are a problem for causal machine learning in public policy evaluation applications and considers ways these problems might be addressed through explainable AI tools and by simplifying models in line with interpretable AI principles. It then applies these ideas to a case study using a causal forest model to estimate conditional average treatment effects for a hypothetical change in the school leaving age in Australia. It shows that existing tools for understanding black-box predictive models are poorly suited to causal machine learning and that simplifying the model to make it interpretable leads to an unacceptable increase in error (in this application). It concludes that new tools are needed to properly understand causal machine learning models and the algorithms that fit them.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.13240&r=cmp
  4. By: Xin Du; Kai Moriyama; Kumiko Tanaka-Ishii
    Abstract: This paper presents a novel machine learning model for realized volatility (RV) prediction using a normalizing flow, an invertible neural network. Since RV is known to be skewed and fat-tailed, previous methods transform RV into values that follow a latent distribution with an explicit shape and then apply a prediction model. However, knowing that shape is non-trivial, and the transformation result influences the prediction model. This paper proposes to jointly train the transformation and the prediction model. The training process follows a maximum-likelihood objective function derived from the assumption that the prediction residuals on the transformed RV time series are homogeneously Gaussian. The objective function is further approximated using an expectation-maximization algorithm. On a dataset of 100 stocks, our method significantly outperforms other methods using analytical or naive neural-network transformations.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.14536&r=cmp
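    Sketch: a toy PyTorch version of the joint-training idea above: an invertible (here affine-in-log) transformation of realized volatility and an AR(1) predictor are fitted together by maximum likelihood, assuming Gaussian residuals on the transformed scale. The transform family, predictor, and data are illustrative assumptions; the paper uses a normalizing flow.

      import torch

      rv = torch.rand(500) * 0.05 + 1e-4           # synthetic realized-volatility series
      a = torch.nn.Parameter(torch.tensor(1.0))    # transform scale
      b = torch.nn.Parameter(torch.tensor(0.0))    # transform shift
      phi = torch.nn.Parameter(torch.tensor(0.5))  # AR(1) coefficient
      log_sig = torch.nn.Parameter(torch.tensor(0.0))

      opt = torch.optim.Adam([a, b, phi, log_sig], lr=0.01)
      for _ in range(200):
          z = a * torch.log(rv) + b                # invertible transform of RV
          resid = z[1:] - phi * z[:-1]             # prediction residuals on z
          # Change of variables: log p(rv) = Gaussian log-lik of resid + log|dz/drv|,
          # with dz/drv = a / rv, so transform and predictor train jointly.
          nll = (0.5 * (resid / log_sig.exp()) ** 2 + log_sig).sum() \
                - (torch.log(a.abs()) - torch.log(rv[1:])).sum()
          opt.zero_grad(); nll.backward(); opt.step()
      print("fitted AR coefficient:", float(phi))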
  5. By: Nicolás Forteza (Banco de España); Sandra García-Uribe (Banco de España)
    Abstract: Errors in the collection of household finance survey data may propagate into population estimates, especially when some population groups are oversampled. Manual case-by-case revision has commonly been applied in order to identify and correct potential errors and omissions, such as omitted or misreported assets, income and debts. We derive a machine learning approach for the purpose of classifying survey data affected by severe errors and omissions in the revision phase. Using data from the Spanish Survey of Household Finances, we provide the best-performing supervised classification algorithm for the task of prioritizing cases with substantial errors and omissions. Our results show that a Gradient Boosting Trees classifier outperforms several competing classifiers. We also provide a framework that takes into account the trade-off between precision and recall faced by the survey agency in order to select the optimal classification threshold.
    Keywords: machine learning, predictive models, selective editing, survey data
    JEL: C81 C83 C88
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:2330&r=cmp
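    Sketch: a minimal Python version of the pipeline described above, on simulated data: score survey cases with a gradient-boosting classifier, then choose the operating threshold from the precision-recall trade-off. The recall target of 0.8 stands in for the agency's preferences and is an illustrative assumption.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import precision_recall_curve
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.normal(size=(5000, 20))                          # case-level survey features
      y = (X[:, 0] + rng.normal(size=5000) > 1.5).astype(int)  # 1 = severe error/omission

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
      clf = GradientBoostingClassifier().fit(X_tr, y_tr)

      scores = clf.predict_proba(X_te)[:, 1]                   # editing-priority score
      prec, rec, thr = precision_recall_curve(y_te, scores)
      ok = rec[:-1] >= 0.8                                     # meet the recall target...
      best = np.argmax(np.where(ok, prec[:-1], 0.0))           # ...at the best precision
      print("threshold:", thr[best], "precision:", prec[best], "recall:", rec[best])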
  6. By: Jakob Kozak; Maximilian Nagl; Cathrine Nagl; Eli Beracha; Wolfgang Schäfers
    Abstract: Corporate bonds are an important source of funding for real estate investment trusts (REITs). The outstanding unsecured debt of U.S. equity REITs, which is an approximation for outstanding bond debt, was $450 billion in 2022, while REIT net asset value was $1.1 trillion in the same year. This highlights the importance of corporate bonds for U.S. REITs. However, the literature on bond risk premia focuses only on corporate bonds in general and neglects the specific structure and functioning of issuing REITs. Specifically, U.S. REITs must distribute 90% of their taxable income to shareholders, which prevents them from building capital internally through retained earnings. Since corporate bonds represent a general claim on corporate assets and cash in the case of default, we hypothesize that the drivers of REIT bond risk premia differ from those of the general corporate bond market. This paper therefore aims to fill this gap by examining yield spreads, i.e. the difference between the yield on a REIT bond and the U.S. Treasury yield with the same maturity. Based on findings in the empirical asset pricing literature on the superior performance of artificial neural networks in the adjacent fields of stock and bond return prediction, this paper applies an artificial neural network to predict REIT bond yield spreads. We use a dataset of 27,014 monthly U.S. REIT bond transactions from 2010 to 2021 and 33 explanatory variables, including bond characteristics, equity and bond market variables, macroeconomic indicators, and, as a novelty, REIT balance sheet data, REIT type, and direct real estate market total return. Preliminary results show that the neural network predicts REIT bond yield spreads with an out-of-sample mean R2 of 36.3%. Feature importance analysis using explainable machine learning methods shows that default risk, captured by REIT size, the economy-wide default risk spread, and interest rate volatility, is highly relevant to the prediction of REIT bond yield spreads. We also find evidence for tax and illiquidity risk premia. Interestingly, equity market-related variables are only important in times of economic recession. The real estate market return is an important feature and is negatively related to the predictions of REIT bond yield spreads. These findings underline that bond risk premia for REITs have additional drivers compared to those in the general corporate bond market.
    Keywords: Fixed Income; Machine Learning; REIT; Risk Premium
    JEL: R3
    Date: 2023–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2023_146&r=cmp
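    Sketch: an illustrative Python pairing of the two steps above on synthetic data: fit a neural network to predict yield spreads, then inspect which inputs matter via permutation importance (one of several explainable-ML options). The six stand-in features replace the paper's 33 variables and are assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      X = rng.normal(size=(3000, 6))  # e.g. REIT size, default spread, rate volatility...
      spread = 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.3, size=3000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, spread, random_state=2)
      model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000).fit(X_tr, y_tr)
      print("out-of-sample R2:", round(model.score(X_te, y_te), 3))

      # Permutation importance: how much does shuffling each feature hurt R2?
      imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=2)
      print("importance per feature:", imp.importances_mean.round(3))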
  7. By: Juan Tenorio; Wilder Pérez
    Abstract: In a context of ongoing change, “nowcasting” models based on Machine Learning (ML) algorithms deliver a noteworthy advantage for decision-making in both the public and private sectors due to their flexibility and ability to handle large amounts of data. This document presents projection models for the monthly GDP growth rate of Peru which combine structured macroeconomic indicators with high-frequency unstructured sentiment variables. The sample window runs from January 2007 to May 2023 and includes a total of 91 variables. Assessing six ML algorithms, we identify the best predictors for each model. The results reveal the high capacity of the ML models with unstructured data to provide more accurate and timelier predictions than traditional time series models: the best-performing models were Gradient Boosting Machine, LASSO, and Elastic Net, which achieved a prediction error reduction of 20% to 25% compared to AR and Dynamic Factor Model (DFM) benchmarks. These results could be influenced by the analysis period, which includes crisis events characterized by high uncertainty, during which ML models with unstructured data improve the most.
    Keywords: nowcasting, machine learning, GDP growth
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:apc:wpaper:197&r=cmp
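    Sketch: a stylized Python version of the horse race above: an Elastic Net on a wide monthly indicator set against an AR(1) benchmark, evaluated with an expanding window. The simulated data and the fixed penalty strength are illustrative assumptions.

      import numpy as np
      from sklearn.linear_model import ElasticNet

      rng = np.random.default_rng(3)
      T, k = 180, 30                               # months, indicators
      X = rng.normal(size=(T, k))                  # macro + sentiment features
      y = 0.3 * X[:, 1] + rng.normal(scale=0.5, size=T)

      err_en, err_ar = [], []
      for t in range(120, T - 1):                  # expanding-window pseudo out-of-sample
          en = ElasticNet(alpha=0.1).fit(X[:t], y[:t])
          err_en.append((en.predict(X[t:t + 1])[0] - y[t]) ** 2)
          ar = np.polyfit(y[:t - 1], y[1:t], 1)    # AR(1) via least squares
          err_ar.append((np.polyval(ar, y[t - 1]) - y[t]) ** 2)

      ratio = (np.mean(err_en) / np.mean(err_ar)) ** 0.5
      print("RMSE ratio (Elastic Net / AR):", round(ratio, 3))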
  8. By: Nino Paulus; Lukas Lautenschlaeger; Wolfgang Schäfers
    Abstract: Problems and objective: Social media platforms have become vibrant online venues where people share their opinions and views on any topic (Yadav and Vishwakarma, 2020). With the increasing volume and speed of social media, the exchange of stock market-related information has become more important, which is why the effects of social media information on stock markets are becoming increasingly salient (Li et al., 2018). Business organizations need to understand these dynamics, as they reflect the interest of all kinds of market participants: retail investors, institutional investors, but also clients, journalists and many others. It is therefore not surprising that there is evidence of public sentiment obtained from social media correlating with, or even predicting, economic indicators (e.g. Bollen et al., 2011; Sprenger et al., 2014; Xu and Cohen, 2018). Regarding real estate, Zamani and Schwartz (2017) successfully used Twitter language to forecast house price changes for a small sample at the county level. Apart from this limited research on real estate markets and the research on the general stock market, there is no broader study that examines the relationship between social media and real estate markets. Real estate markets are nevertheless of particular interest, not only because of real estate's popularity as an asset class among retail investors, but also because real estate is ubiquitous in daily life and the market is opaque. Sentiment indicators extracted from social media therefore promise to cover the perspectives of all kinds of people and could be more informative than traditional sentiment measures. However, as described by Li et al. (2018), social media-based sentiment indicators are not intended to replace traditional sentiment indicators, but rather to complement them, as the latter are usually based on the knowledge of only a few industry insiders rather than that of the general public. In addition, the study focuses on indirect real estate (i.e. REITs), as it allows retail investors, who represent the majority of social media users sharing equity-related information, to participate in real estate markets. Methodology & Data: Using a dictionary-based approach, a classical machine learning approach, and a deep learning-based approach to extract the sentiment of approximately 4 million tweets, this paper compares methods of different complexity in terms of their ability to classify social media sentiment and predict indirect real estate returns on a monthly basis. The baseline for this comparison is a conventional dictionary-based approach including valence-shifting properties; the dictionary used is the real estate-specific dictionary developed by Ruscheinsky et al. (2018). For the classical machine learning method, a support vector machine (SVM), which has already been shown to be potent in a real estate context (Hausler et al., 2018), is utilized. The more complex deep learning approach is based on a Long Short-Term Memory (LSTM) model; the usefulness of deep learning-based approaches for sentiment analysis in a real estate context has been demonstrated before by Braun et al. (2019). As high-trade-volume stocks tend to be discussed most on Twitter, posts are collected from this platform (Xu and Cohen, 2018) over a ten-year timespan from 2013 to 2022, with selection based on cashtags representing all US REITs. The monthly total return of the FTSE Nareit All Equity Total Return index serves as the dependent variable, with the constructed sentiment variable as the variable of interest. Contribution to science and practice: The aim of this study is to create a standardized framework that enables investors of all kinds to better classify current market events and thus better navigate the opaque real estate market. This framework could be applied not only by investors, but also, conversely, by REITs to understand and optimize their position in society and in the investor landscape. To the authors' knowledge, this is the first study to analyze the impact of social media sentiment on (indirect) real estate returns based on a comprehensive national dataset.
    JEL: R3
    Date: 2023–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2023_200&r=cmp
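    Sketch: a minimal Python version of the classical machine-learning branch above: TF-IDF features with a linear support vector machine for tweet sentiment. The four labeled tweets are illustrative assumptions; the paper also benchmarks a dictionary approach and an LSTM.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline

      tweets = ["$SPG strong quarter, raising guidance",
                "$O dividend cut risk, stay away",
                "REITs rallying on rate pause, bullish",
                "office vacancies keep climbing, bearish"]
      labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

      # TF-IDF unigrams and bigrams feed a linear SVM classifier.
      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
      clf.fit(tweets, labels)
      print(clf.predict(["occupancy rising, bullish on $SPG"]))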
  9. By: Edson Pindza; Jules Clement Mba; Sutene Mwambi; Nneka Umeorah
    Abstract: Cryptocurrencies, and Bitcoin in particular, are prone to wild swings resulting in frequent jumps in prices, which has historically made them popular with traders to speculate on. A better understanding of these fluctuations can greatly benefit crypto investors by allowing them to make informed decisions. It is claimed in recent literature that the Bitcoin price is influenced by sentiment about the Bitcoin system. Transactions, as well as popularity, have shown positive evidence as potential drivers of the Bitcoin price. This study considers a bivariate jump-diffusion model to describe the Bitcoin price dynamics together with the number of Google searches affecting the price, the latter representing a sentiment indicator. We obtain a closed-form formula for the Bitcoin price and derive the Black-Scholes equation for Bitcoin options. We first solve the corresponding Bitcoin option partial differential equation for the pricing process by introducing artificial neural networks and incorporating multi-layer perceptron techniques. The prediction performance and the model validation were assessed using various highly volatile stocks.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.09622&r=cmp
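    Sketch: a Python simulation of the kind of bivariate dynamics the paper posits: a Bitcoin price with compound-Poisson jumps whose drift loads on a mean-reverting sentiment (search-volume) factor. All parameter values are illustrative assumptions; the paper prices options on such a model with a neural network.

      import numpy as np

      rng = np.random.default_rng(4)
      n, dt = 252, 1.0 / 252
      mu, sigma, lam, jump_sd = 0.5, 0.8, 10.0, 0.05  # drift, vol, jump rate, jump size
      kappa, s_bar, eta = 2.0, 0.0, 0.3               # sentiment mean reversion, level, vol

      S, sent = 30000.0, 0.0
      for _ in range(n):
          # Ornstein-Uhlenbeck-style sentiment factor (Google-search proxy).
          sent += kappa * (s_bar - sent) * dt + eta * np.sqrt(dt) * rng.normal()
          # Compound-Poisson jump added to the diffusion increment.
          jump = rng.normal(0.0, jump_sd) * rng.poisson(lam * dt)
          S *= np.exp((mu + sent - 0.5 * sigma ** 2) * dt
                      + sigma * np.sqrt(dt) * rng.normal() + jump)
      print("simulated terminal price:", round(S, 2))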
  10. By: George Loewenstein; Zachary Wojtowicz
    Abstract: Attention is a pivotal resource in the modern economy and plays an increasingly prominent role in economic analysis. We summarize research on attention from both psychology and economics, placing a particular emphasis on its capacity to explain numerous documented violations of classical economic theory. We also propose promising new directions for future research, including attention-based utility, the recent proliferation of attentional externalities introduced by digital technology, the potential for artificial intelligence to compete with human attention, and the significant role that boredom, curiosity, and other motivational states play in determining how people allocate attention.
    Keywords: attention, motivation, behavioural bias, information, learning, education, artificial intelligence, machine learning, future of work
    JEL: D83 D90 D91 I00
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10712&r=cmp
  11. By: Giovanni Compiani; Ilya Morozov; Stephan Seiler
    Abstract: We propose a demand estimation method that allows researchers to estimate substitution patterns from unstructured image and text data. We first employ a series of machine learning models to measure product similarity from products’ images and textual descriptions. We then estimate a nested logit model with product-pair specific nesting parameters that depend on the image and text similarities between products. Our framework does not require collecting product attributes for each category and can capture product similarity along dimensions that are hard to account for with observed attributes. We apply our method to a dataset describing the behavior of Amazon shoppers across several categories and show that incorporating texts and images in demand estimation helps us recover a flexible cross-price elasticity matrix.
    Keywords: demand estimation, unstructured data, computer vision, text models
    JEL: C10 C50 C81
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10695&r=cmp
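    Sketch: a short Python illustration of the first stage above, assuming image and text embeddings per product are already available from off-the-shelf encoders: pairwise cosine similarities that can then parameterize pair-specific nesting. The logistic mapping and its coefficients are illustrative assumptions, not the paper's specification.

      import numpy as np

      rng = np.random.default_rng(5)
      img_emb = rng.normal(size=(50, 128))  # one row per product: image features
      txt_emb = rng.normal(size=(50, 256))  # one row per product: text features

      def cosine_sim(E):
          E = E / np.linalg.norm(E, axis=1, keepdims=True)
          return E @ E.T

      sim_img, sim_txt = cosine_sim(img_emb), cosine_sim(txt_emb)
      # Pair-specific nesting parameter as a function of both similarity channels.
      nesting = 1.0 / (1.0 + np.exp(-(0.5 * sim_img + 0.5 * sim_txt)))
      print(nesting[0, :3])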
  12. By: Kristina McElheran; J. Frank Li; Erik Brynjolfsson; Zachary Kroff; Emin Dinlersoz; Lucia S. Foster; Nikolas Zolas
    Abstract: We study the early adoption and diffusion of five AI-related technologies (automated-guided vehicles, machine learning, machine vision, natural language processing, and voice recognition) as documented in the 2018 Annual Business Survey of 850,000 firms across the United States. We find that fewer than 6% of firms used any of the AI-related technologies we measure, though most very large firms reported at least some AI use. Weighted by employment, average adoption was just over 18%. AI use in production, while varying considerably by industry, nevertheless was found in every sector of the economy and clustered with emerging technologies such as cloud computing and robotics. Among dynamic young firms, AI use was highest alongside more-educated, more-experienced, and younger owners, including owners motivated by bringing new ideas to market or helping the community. AI adoption was also more common alongside indicators of high-growth entrepreneurship, including venture capital funding, recent product and process innovation, and growth-oriented business strategies. Early adoption was far from evenly distributed: a handful of “superstar” cities and emerging hubs led startups’ adoption of AI. These patterns of early AI use foreshadow economic and social impacts far beyond this limited initial diffusion, with the possibility of a growing “AI divide” if early patterns persist.
    JEL: M15 O3
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31788&r=cmp
  13. By: Eric Ghysels; Jack Morgan; Hamed Mohammadbagherpoor
    Abstract: Quantum computers are not yet up to the task of providing computational advantages for practical stochastic diffusion models commonly used by financial analysts. In this paper we introduce a class of stochastic processes that are both realistic in terms of mimicking financial market risks as well as more amenable to potential quantum computational advantages. The type of models we study are based on a regime switching volatility model driven by a Markov chain with observable states. The basic model features a Geometric Brownian Motion with drift and volatility parameters determined by the finite states of a Markov chain. We study algorithms to estimate credit risk and option pricing on a gate-based quantum computer. These models bring us closer to realistic market settings, and therefore bring quantum computing closer to the realm of practical applications.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.00825&r=cmp
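    Sketch: a classical Monte Carlo baseline in Python for the model class above: Geometric Brownian Motion whose volatility switches with an observable two-state Markov chain, priced here for a European call. All parameter values are illustrative assumptions, and the quantum algorithms are not reproduced.

      import numpy as np

      rng = np.random.default_rng(6)
      P = np.array([[0.95, 0.05], [0.10, 0.90]])  # per-step regime transition matrix
      sig = np.array([0.15, 0.45])                # low- and high-volatility regimes
      S0, K, T, n, paths, r = 100.0, 100.0, 1.0, 252, 20000, 0.03
      dt = T / n

      s = np.full(paths, S0)
      regime = np.zeros(paths, dtype=int)
      for _ in range(n):
          u = rng.random(paths)
          regime = np.where(u < P[regime, 0], 0, 1)  # observable Markov-chain draw
          z = rng.normal(size=paths)
          s *= np.exp((r - 0.5 * sig[regime] ** 2) * dt + sig[regime] * np.sqrt(dt) * z)

      call = np.exp(-r * T) * np.maximum(s - K, 0.0).mean()
      print("European call estimate:", round(call, 2))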
  14. By: Michael Barnett; William Brock; Lars Peter Hansen; Ruimeng Hu; Joseph Huang
    Abstract: We study the implications of model uncertainty in a climate-economics framework with three types of capital: "dirty" capital that produces carbon emissions when used for production, "clean" capital that generates no emissions but is initially less productive than dirty capital, and knowledge capital that increases with R&D investment and leads to technological innovation in green sector productivity. To solve our high-dimensional, non-linear model framework, we implement a neural-network-based global solution method. We show there are first-order impacts of model uncertainty on optimal decisions and social valuations in our integrated climate-economic-innovation framework. Accounting for interconnected uncertainty over climate dynamics, economic damages from climate change, and the arrival of a green technological change leads to substantial adjustments to investment in the different capital types in anticipation of technological change and the revelation of climate damage severity.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.13200&r=cmp
  15. By: Conor B. Hamill; Raad Khraishi; Simona Gherghel; Jerrard Lawrence; Salvatore Mercuri; Ramin Okhrati; Greig A. Cowan
    Abstract: Interest-free promotions are a prevalent strategy employed by credit card lenders to attract new customers, yet the research exploring their effects on both consumers and lenders remains relatively sparse. The process of selecting an optimal promotion strategy is intricate, involving the determination of an interest-free period duration and promotion-availability window, all within the context of competing offers, fluctuating market dynamics, and complex consumer behaviour. In this paper, we introduce an agent-based model that facilitates the exploration of various credit card promotions under diverse market scenarios. Our approach, distinct from previous agent-based models, concentrates on optimising promotion strategies and is calibrated using benchmarks from the UK credit card market from 2019 to 2020, with agent properties derived from historical distributions of the UK population from roughly the same period. We validate our model against stylised facts and time-series data, thereby demonstrating the value of this technique for investigating pricing strategies and understanding credit card customer behaviour. Our experiments reveal that, in the absence of competitor promotions, lender profit is maximised by an interest-free duration of approximately 12 months while market share is maximised by offering the longest duration possible. When competitors do not offer promotions, extended promotion availability windows yield maximum profit for lenders while also maximising market share. In the context of concurrent interest-free promotions, we identify that the optimal lender strategy entails offering a more competitive interest-free period and a rapid response to competing promotional offers. Notably, a delay of three months in responding to a rival promotion corresponds to a 2.4% relative decline in income.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.01901&r=cmp
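    Sketch: a deliberately minimal agent-based loop in Python for the setting above: consumers pick between two lenders based on offered interest-free duration, and lender income trades uptake off against forgone interest. Every behavioural rule and number here is an illustrative assumption, not the paper's calibrated UK model.

      import numpy as np

      rng = np.random.default_rng(7)

      def lender_a_income(promo_a, promo_b, n_consumers=10000, horizon=24):
          taste = rng.normal(size=n_consumers)               # idiosyncratic preference
          choose_a = 0.2 * promo_a + taste > 0.2 * promo_b - taste
          balance = rng.gamma(2.0, 500.0, size=n_consumers)  # revolving balances
          months_paying = max(0, horizon - promo_a)          # interest starts post-promo
          return (balance[choose_a] * 0.02 * months_paying).sum()

      for promo in (0, 6, 12, 18, 24):                       # rival fixed at 12 months
          print(promo, "months ->", round(lender_a_income(promo, 12)))

    Even this toy loop reproduces the qualitative trade-off: longer promotions win customers but shorten the interest-earning window.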
  16. By: Anubha Goel; Puneet Pasricha; Juho Kanniainen
    Abstract: In this research, we introduce a novel methodology for the index tracking problem with sparse portfolios by leveraging topological data analysis (TDA). Utilizing persistent homology to measure the riskiness of assets, we introduce a topological method for data-driven learning of the parameters for regularization terms. Specifically, the Vietoris-Rips filtration method is utilized to capture the intricate topological features of asset movements, providing a robust framework for portfolio tracking. Our approach has the advantage of accommodating both $\ell_1$ and $\ell_2$ penalty terms without the requirement for expensive estimation procedures. We empirically validate the performance of our methodology against state-of-the-art sparse index tracking techniques, such as Elastic-Net and SLOPE, using a dataset that covers 23 years of S&P500 index and its constituent data. Our out-of-sample results show that this computationally efficient technique surpasses conventional methods across risk metrics, risk-adjusted performance, and trading expenses in varied market conditions. Furthermore, in turbulent markets, it not only maintains but also enhances tracking performance.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.09578&r=cmp
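    Sketch: the underlying sparse-tracking objective in Python: choose weights minimizing ||Rw - r_index||^2 under combined l1/l2 penalties, here via scikit-learn's ElasticNet with hand-set penalty strengths. The paper's contribution, learning those strengths from persistent homology, is not reproduced; the simulated returns are assumptions.

      import numpy as np
      from sklearn.linear_model import ElasticNet

      rng = np.random.default_rng(8)
      n_days, n_assets = 750, 100
      R = rng.normal(0, 0.01, size=(n_days, n_assets))     # constituent returns
      true_w = np.zeros(n_assets); true_w[:10] = 0.1
      r_index = R @ true_w + rng.normal(0, 0.001, n_days)  # index returns

      # l1_ratio mixes the l1 and l2 penalties; positive=True bans short positions.
      model = ElasticNet(alpha=1e-4, l1_ratio=0.5, positive=True, fit_intercept=False)
      model.fit(R, r_index)
      w = model.coef_ / model.coef_.sum()                  # renormalize to full investment
      print("active positions:", int((w > 1e-6).sum()))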
  17. By: Thomas R. Cook; Nathan M. Palmer
    Abstract: Despite growing interest in the use of complex models, such as machine learning (ML) models, for credit underwriting, ML models are difficult to interpret, and it is possible for them to learn relationships that yield de facto discrimination. How can we understand the behavior and potential biases of these models, especially if our access to the underlying model is limited? We argue that counterfactual reasoning is ideal for interpreting model behavior, and that Gaussian processes (GP) can provide approximate counterfactual reasoning while also incorporating uncertainty in the underlying model’s functional form. We illustrate with an exercise in which a simulated lender uses a biased machine learning model to decide credit terms. Comparing aggregate outcomes does not clearly reveal bias, but with a GP model we can estimate individual counterfactual outcomes. This approach can detect the bias in the lending model even when only a relatively small sample is available. To demonstrate the value of this approach for the more general task of model interpretability, we also show how the GP model’s estimates can be aggregated to recreate the partial dependence functions of the lending model.
    Keywords: models; Gaussian process; model bias
    JEL: C10 C14 C18 C45
    Date: 2023–06–15
    URL: http://d.repec.org/n?u=RePEc:fip:fedkrw:97176&r=cmp
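    Sketch: a compact Python version of the counterfactual exercise above: fit a Gaussian process to a (possibly biased) lender's observed decisions, then query each applicant's counterfactual rate with the protected attribute flipped. The data-generating lending rule and all numbers are illustrative assumptions.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(9)
      n = 300
      income = rng.normal(size=n)
      group = rng.integers(0, 2, size=n)   # protected attribute
      rate = 5.0 - income + 0.75 * group + rng.normal(scale=0.2, size=n)  # biased terms

      X = np.column_stack([income, group])
      gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                    normalize_y=True).fit(X, rate)

      X_cf = X.copy(); X_cf[:, 1] = 1 - X_cf[:, 1]   # flip the protected attribute
      cf_mean, cf_std = gp.predict(X_cf, return_std=True)  # GP gives uncertainty too
      gap = cf_mean - gp.predict(X)
      print("mean counterfactual rate change:", round(float(gap[group == 0].mean()), 2))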
  18. By: Julia Hatamyar; Noemi Kreif; Rudi Rocha; Martin Huber
    Abstract: We combine two recently proposed nonparametric difference-in-differences methods, extending them to enable the examination of treatment effect heterogeneity in the staggered adoption setting using machine learning. The proposed method, machine learning difference-in-differences (MLDID), allows for estimation of time-varying conditional average treatment effects on the treated, which can be used to conduct detailed inference on drivers of treatment effect heterogeneity. We perform simulations to evaluate the performance of MLDID and find that it accurately identifies the true predictors of treatment effect heterogeneity. We then use MLDID to evaluate the heterogeneous impacts of Brazil's Family Health Program on infant mortality, and find that those in poverty and in urban locations experienced the impact of the policy more quickly than other subgroups.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.11962&r=cmp
  19. By: Joshua Rosaler; Dhruv Desai; Bhaskarjit Sarmah; Dimitrios Vamvourellis; Deran Onay; Dhagash Mehta; Stefano Pasquali
    Abstract: We initiate a novel approach to explaining the out-of-sample performance of random forest (RF) models by exploiting the fact that any RF can be formulated as an adaptively weighted k-nearest-neighbors model. Specifically, we use the proximity between points in the feature space learned by the RF to rewrite random forest predictions exactly as a weighted average of the target labels of training data points. This linearity facilitates a local notion of explainability of RF predictions that generates attributions for any model prediction across observations in the training set, thereby complementing established methods like SHAP, which instead generate attributions for a model prediction across dimensions of the feature space. We demonstrate this approach in the context of a bond pricing model trained on US corporate bond trades, and compare our approach to various existing approaches to model explainability.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.12428&r=cmp
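    Sketch: a self-contained Python check of the identity the paper exploits: a random-forest regression prediction equals a proximity-weighted average of training labels, where proximity counts shared leaves across trees. Setting bootstrap=False makes the identity exact here; with bagging, in-bag counts would also have to enter the weights. The data are simulated assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(10)
      X_tr, y_tr = rng.normal(size=(200, 5)), rng.normal(size=200)
      x_new = rng.normal(size=(1, 5))

      rf = RandomForestRegressor(n_estimators=100, bootstrap=False).fit(X_tr, y_tr)
      leaves_tr, leaves_new = rf.apply(X_tr), rf.apply(x_new)  # leaf index per tree

      # Weight of training point i: average over trees of 1/leaf_size whenever i
      # falls in the same leaf as the query point.
      weights = np.zeros(len(X_tr))
      for t in range(rf.n_estimators):
          in_leaf = leaves_tr[:, t] == leaves_new[0, t]
          weights[in_leaf] += 1.0 / in_leaf.sum()
      weights /= rf.n_estimators

      print(float(rf.predict(x_new)[0]))  # forest prediction
      print(float(weights @ y_tr))        # identical weighted average of labels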
  20. By: Giulio Cornelli; Jon Frost; Saurabh Mishra
    Abstract: How does economic activity related to artificial intelligence (AI) impact the income of various groups in an economy? This study, using a panel of 86 countries over 2010–19, finds that investment in AI is associated with higher income inequality. In particular, AI investment is tied to higher real incomes and income shares for households in the top decile, while households in the fifth and bottom decile see a decline in their income shares. We also find a positive association with exports of modern services linked to AI. In labour markets, there is a contraction in overall employment, a shift from mid-skill to high-skill managerial roles and a reduced labour share of income.
    Keywords: artificial intelligence, automation, services, structural shifts, inequality
    JEL: D31 D63 O32
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:bis:biswps:1135&r=cmp
  21. By: Jean-Marie John-Mathews (IMT-BS - MMS - Département Management, Marketing et Stratégie - TEM - Télécom Ecole de Management - IMT - Institut Mines-Télécom [Paris] - IMT-BS - Institut Mines-Télécom Business School - IMT - Institut Mines-Télécom [Paris], LITEM - Laboratoire en Innovation, Technologies, Economie et Management (EA 7363) - UEVE - Université d'Évry-Val-d'Essonne - Université Paris-Saclay - IMT-BS - Institut Mines-Télécom Business School - IMT - Institut Mines-Télécom [Paris])
    Abstract: We consider two fundamental and related issues currently facing the development of Artificial Intelligence (AI): the lack of ethics, and the interpretability of AI decisions. Can interpretable AI decisions help to address the issue of ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn of the production of explanations tends to select AI explanations with a low denunciatory power. Under certain conditions, interpretability tools are therefore not means but, paradoxically, obstacles to the production of ethical AI, since they can give the illusion of being sensitive to ethical incidents. We also show that the denunciatory power of AI explanations is highly dependent on the context in which the explanation takes place, such as the gender or education of the person for whom the explanation is intended. AI ethics tools are therefore sometimes too flexible, and self-regulation through the liberal production of explanations does not seem to be enough to resolve ethical issues. By following an STS pragmatist program, we highlight the role of non-human actors (such as computational paradigms, testing environments, etc.) in the formation of structural power relations, such as sexism. We then propose two scenarios for the future development of ethical AI: more external regulation, or more liberalization of AI explanations. These two opposite paths will play a major role in the future development of ethical AI.
    Keywords: Artificial intelligence, Ethics, Interpretability, Experimentation, Self-regulation
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03395823&r=cmp
  22. By: Harouna Kinda; Abrams M.E. Tagem
    Abstract: Double taxation treaties, by assigning taxing rights to rival countries and thereby eradicating double taxation, aim to facilitate cross-border trade and investment. The eradication of double taxation is achieved through reductions in withholding tax rates on passive income in source countries, resulting in revenue losses. Multinational corporations structure their investments to benefit from treaty-reduced withholding tax rates, exacerbating the revenue losses.
    Keywords: Double taxation treaties, Entropy balancing weights, Resource revenues, Revenue mobilization, Taxes
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:unu:wpaper:wp-2023-125&r=cmp
  23. By: Xu Yang; Xiao Yang; Weiqing Liu; Jinhui Li; Peng Yu; Zeqi Ye; Jiang Bian
    Abstract: In the wake of relentless digital transformation, data-driven solutions are emerging as powerful tools to address multifarious industrial tasks such as forecasting, anomaly detection, planning, and even complex decision-making. Although data-centric R&D has been pivotal in harnessing these solutions, it often comes with significant costs in terms of human, computational, and time resources. This paper delves into the potential of large language models (LLMs) to expedite the evolution cycle of data-centric R&D. Assessing the foundational elements of data-centric R&D, including heterogeneous task-related data, multi-facet domain knowledge, and diverse computing-functional tools, we explore how well LLMs can understand domain-specific requirements, generate professional ideas, utilize domain-specific tools to conduct experiments, interpret results, and incorporate knowledge from past endeavors to tackle new challenges. We take quantitative investment research as a typical example of an industrial data-centric R&D scenario, verify our proposed framework on our full-stack, open-source quantitative research platform Qlib, and obtain promising results that shed light on our vision of automatically evolving the industrial data-centric R&D cycle.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.11249&r=cmp
  24. By: Gaetan de Rassenfosse (Ecole polytechnique federale de Lausanne); Adam Jaffe (Brandeis University); Melissa Wasserman (The University of Texas at Austin - School of Law)
    Abstract: This symposium Article discusses issues raised for patent processes and policy created by inventions generated by artificial intelligence (AI). The Article begins by examining the normative desirability of allowing patents on AI-generated inventions. While it is unclear whether patent protection is needed to incentivize the creation of AI-generated inventions, a stronger case can be made that AI-generated inventions should be patent eligible to encourage the commercialization and technology transfer of AI-generated inventions. Next, the Article examines how the emergence of AI inventions will alter patentability standards, and whether a differentiated patent system that treats AI-generated inventions differently from human-generated inventions is normatively desirable. This Article concludes by considering the larger implications of allowing patents on AI-generated inventions, including changes to the patent examination process, a possible increase in the concentration of patent ownership and patent thickets, and potentially unlimited inventions.
    Keywords: generative AI; patent; intellectual property; invention
    JEL: K20 D23 O34
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:iip:wpaper:22&r=cmp
  25. By: Sukwoong Choi; William S. Moses; Neil Thompson
    Abstract: Quantum computing promises transformational gains for solving some problems, but little to none for others. For anyone hoping to use quantum computers now or in the future, it is important to know which problems will benefit. In this paper, we introduce a framework for answering this question both intuitively and quantitatively. The underlying structure of the framework is a race between quantum and classical computers, where their relative strengths determine when each wins. While classical computers operate faster, quantum computers can sometimes run more efficient algorithms. Whether the speed advantage or the algorithmic advantage dominates determines whether a problem will benefit from quantum computing or not. Our analysis reveals that many problems, particularly those of small to moderate size that can be important for typical businesses, will not benefit from quantum computing. Conversely, larger problems or those with particularly big algorithmic gains will benefit from near-term quantum computing. Since very large algorithmic gains are rare in practice and theorized to be rare even in principle, our analysis suggests that the benefits from quantum computing will flow either to users of these rare cases, or practitioners processing very large data.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.15505&r=cmp
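    Sketch: a worked numeric instance of the race the framework describes: the classical machine is faster per operation, the quantum algorithm needs asymptotically fewer operations (a Grover-like quadratic advantage), and the crossover problem size is where the quantum runtime first wins. Both speed numbers are illustrative assumptions.

      classical_ops_per_sec = 1e9   # assumed effective classical speed
      quantum_ops_per_sec = 1e4     # assumed effective quantum speed

      def crossover_size():
          n = 2
          # Classical algorithm costs n**2 operations, quantum costs n operations;
          # quantum wins once n exceeds the speed ratio (here 1e9 / 1e4 = 1e5).
          while n / quantum_ops_per_sec >= n ** 2 / classical_ops_per_sec:
              n *= 2
          return n

      print("quantum wins beyond n ~", crossover_size())  # first doubling past 1e5

    The arithmetic makes the paper's point directly: small and moderate problem sizes stay classical, because the speed penalty dominates until the algorithmic advantage has room to compound.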
  26. By: Bhaskarjit Sarmah; Tianjie Zhu; Dhagash Mehta; Stefano Pasquali
    Abstract: For a financial analyst, the question-and-answer (Q&A) segment of a company's financial report is a crucial piece of information for various analyses and investment decisions. However, extracting valuable insights from the Q&A section has posed considerable challenges, as conventional methods such as detailed reading and note-taking lack scalability and are susceptible to human error, while Optical Character Recognition (OCR) and similar techniques encounter difficulties in accurately processing unstructured transcript text, often missing subtle linguistic nuances that drive investor decisions. Here, we demonstrate the use of Large Language Models (LLMs) to efficiently and rapidly extract information from earnings report transcripts while ensuring high accuracy, transforming the extraction process and reducing hallucination by combining a retrieval-augmented generation technique with metadata. We evaluate the outcomes of various LLMs with and without our proposed approach on several objective metrics for evaluating Q&A systems, and empirically demonstrate the superiority of our method.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.10760&r=cmp
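    Sketch: a schematic Python skeleton of retrieval-augmented generation with metadata as described above: rank transcript chunks by similarity to the question, filter on metadata, and ground the prompt in the retrieved text. The embed and llm functions are stand-ins for any embedding model and LLM; they are assumptions, not a specific library's API.

      import numpy as np

      def embed(text):   # placeholder embedding model (assumption, not a real API)
          rng = np.random.default_rng(abs(hash(text)) % 2**32)
          v = rng.normal(size=64)
          return v / np.linalg.norm(v)

      def llm(prompt):   # placeholder for any LLM call (assumption)
          return "grounded answer based on: " + prompt.splitlines()[0]

      chunks = [
          {"text": "CFO: margins improved on cost cuts.", "speaker": "CFO"},
          {"text": "CEO: we expect headwinds in Europe.", "speaker": "CEO"},
      ]
      question = "What did the CFO say about margins?"

      # Retrieval: rank chunks by cosine similarity, then filter on metadata so
      # the prompt is grounded only in passages from the right speaker.
      q = embed(question)
      ranked = sorted(chunks, key=lambda c: -float(embed(c["text"]) @ q))
      context = [c for c in ranked if c["speaker"] == "CFO"][:1]

      prompt = f"Answer only from this excerpt: {context[0]['text']}\nQ: {question}"
      print(llm(prompt))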

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.