New Economics Papers
on Computational Economics
Issue of 2023‒07‒24
24 papers chosen by
By: | Denis Koshelev (Bank of Russia, Russian Federation); Alexey Ponomarenko (Bank of Russia, Russian Federation); Sergei Seleznev (Bank of Russia, Russian Federation) |
Abstract: | In this paper, we propose a new procedure for unconditional and conditional forecasting in agent-based models. The proposed algorithm is based on the application of amortized neural networks and consists of two steps. In the first step, artificial datasets are simulated from the model. In the second step, a neural network is trained to predict the future values of the variables using the history of observations. The main advantage of the proposed algorithm is its speed: once trained, it can yield predictions for almost any data without additional simulations or re-estimation of the neural network.
Keywords: | agent-based models, amortized simulation-based inference, Bayesian models, forecasting, neural networks. |
JEL: | C11 C15 C32 C45 C53 C63 |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:bkr:wpaper:wps115&r=cmp |
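A minimal sketch of the two-step procedure described in the abstract above, with a toy autoregressive simulator standing in for the agent-based model; the simulator, network size, and window lengths are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate_abm(T=60):
    """Toy stand-in for an agent-based model: one simulated time series."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.1)
    return x

H, F = 20, 5  # history window and forecast horizon (assumed)

# Step 1: simulate many artificial datasets from the model.
series = [simulate_abm() for _ in range(2000)]
X = np.array([s[:H] for s in series])       # observed histories
y = np.array([s[H:H + F] for s in series])  # future values to predict

# Step 2: train a network to map history -> future (the amortization step).
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

# Once trained, forecasting new data requires no further simulation:
new_history = simulate_abm()[:H]
print(net.predict(new_history.reshape(1, -1)))
```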
By: | Thibault Collin (Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres) |
Abstract: | The general scope of this thesis is to further study the application of artificial neural networks to the hedging of rainbow options. Because of their inherently complex features, such as the correlated paths that the prices of their underlying assets take or their absence from traded markets, finding an optimal hedging strategy for rainbow options is difficult, and traders usually have to resort to models and methods they know are inaccurate. An alternative approach involving deep learning, however, recently surfaced in the context of hedging vanilla options [6], and researchers have started to see potential in the use of neural networks for options endowed with exotic features [5], [12], [22]. The key to a near-perfect hedge for contingent claims might be hidden behind the training of neural network algorithms [6], and the scope of this research is to investigate how those innovative hedging techniques can be extended to rainbow options [22], building on recent research [21], and to compare our results with those of the models and techniques currently used by traders, such as running Monte-Carlo path simulations. To accomplish that, we develop an algorithm capable of designing an innovative and optimal hedging strategy for rainbow options, using intuition developed for hedging vanilla options [21] and pricing exotics [5]. Although past literature suggests the approach is potentially efficient and cost-effective, the opaque nature of an artificial neural network makes it difficult for the deep learning algorithm to be fully trusted as a sole method for hedging purposes; it should rather be used as an additional technique alongside other, more established models.
Keywords: | Quantitative finance, deep hedging, deep learning, machine learning, rainbow options, call options, call worst-of options, Black-Scholes, geometric Brownian motion
Date: | 2023–06–04 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-04060013&r=cmp |
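As a concrete point of reference for the trader benchmark mentioned in the abstract above, here is a sketch of Monte-Carlo pricing of a worst-of call on two correlated geometric Brownian motions; all market parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
S0 = np.array([100.0, 100.0])          # spot prices (assumed)
K, T, r = 100.0, 1.0, 0.02             # strike, maturity, risk-free rate
sigma = np.array([0.2, 0.3])           # volatilities
rho = 0.5                              # correlation between the two assets
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

n_paths = 100_000
Z = rng.standard_normal((n_paths, 2)) @ L.T   # correlated Gaussian shocks
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.maximum(ST.min(axis=1) - K, 0.0)  # worst-of call payoff
print(f"worst-of call price ~ {np.exp(-r * T) * payoff.mean():.3f}")
```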
By: | Karol Chojnacki (University of Warsaw, Faculty of Economic Sciences); Robert Ślepaczuk (University of Warsaw, Quantitative Finance Research Group, Department of Quantitative Finance, Faculty of Economic Sciences) |
Keywords: | Algorithmic Investment Strategies, Machine Learning, Recurrent Neural Networks, Long Short-Term Memory, XGBoost, Walk Forward Optimization, Trading algorithms, Technical Analysis Indicators |
JEL: | C4 C14 C45 C53 C58 G13 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:war:wpaper:2023-15&r=cmp |
By: | Wang, Zuyi; Tejeda, Hernan A.; Kim, Man-Keun |
Keywords: | Agribusiness, Marketing, Research Methods/Statistical Methods |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea22:335521&r=cmp |
By: | Falco J. Bargagli-Stoffi; Fabio Incerti; Massimo Riccaboni; Armando Rungi |
Abstract: | In this contribution, we propose machine learning techniques to predict zombie firms. First, we derive the risk of failure by training and testing our algorithms on disclosed financial information and non-random missing values of 304,906 firms active in Italy from 2008 to 2017. Then, we flag the firms in highest financial distress, i.e., those whose predicted risk lies above a threshold for which a combination of the false positive rate (false prediction of firm failure) and the false negative rate (false prediction of active firms) is minimized. We thus identify zombies as firms that persist in a state of financial distress, i.e., whose forecasts fall into the risk category above the threshold for at least three consecutive years. For our purpose, we implement a gradient boosting algorithm (XGBoost) that exploits information about missing values. The inclusion of missing values in our predictive model is crucial because patterns of undisclosed accounts are correlated with firm failure. We then show that our preferred machine learning algorithm outperforms (i) proxy models such as Z-scores and the Distance-to-Default, (ii) traditional econometric methods, and (iii) other widely used machine learning techniques. We provide evidence that zombies are on average less productive and smaller, and that they tend to increase in number in times of crisis. Finally, we argue that our application can help financial institutions and public authorities design evidence-based policies, e.g., optimal bankruptcy laws and information disclosure policies.
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.08165&r=cmp |
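The key mechanical ingredient the abstract highlights is that XGBoost routes missing values down learned default branches, so non-random missingness can itself carry predictive signal. A small synthetic sketch (all data and hyperparameters are assumptions, not the paper's):

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 10))                   # stand-in financial accounts
# Failing firms are more likely to leave accounts undisclosed:
fail = (X[:, 0] + rng.normal(scale=0.5, size=n) > 1).astype(int)
p_missing = np.where(fail[:, None] == 1, 0.3, 0.05)
X[rng.random((n, 10)) < p_missing] = np.nan    # non-random missing values

clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X, fail)   # NaNs are handled natively via default split directions
print(clf.predict_proba(X[:5])[:, 1])
```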
By: | Ilias Chronopoulos; Katerina Chrysikou; George Kapetanios; James Mitchell; Aristeidis Raftapostolos |
Abstract: | In this paper we study neural networks and their approximating power in panel data models. We provide asymptotic guarantees on deep feed-forward neural network estimation of the conditional mean, building on the work of Farrell et al. (2021), and explore latent patterns in the cross-section. We use the proposed estimators to forecast the progression of new COVID-19 cases across the G7 countries during the pandemic. We find significant forecasting gains over both linear panel and nonlinear time-series models. Containment or lockdown policies, as instigated at the national level by governments, are found to have out-of-sample predictive power for new COVID-19 cases. We illustrate how the use of partial derivatives can help open the “black box” of neural networks and facilitate semi-structural analysis: school and workplace closures are found to have been effective policies at restricting the progression of the pandemic across the G7 countries. But our methods illustrate significant heterogeneity and time variation in the effectiveness of specific containment policies. |
Keywords: | Machine Learning; Neural Networks; Panel Data; Nonlinearity; Forecasting; COVID-19; Policy Interventions |
JEL: | C33 C45 |
Date: | 2023–07–05 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedcwq:96408&r=cmp |
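The "partial derivatives" device mentioned in the abstract can be approximated for any fitted network with central finite differences; a generic sketch on toy data, not the authors' estimator:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 3))   # e.g. lagged cases and two policy indices
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=3000)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000).fit(X, y)

def avg_partial(net, X, j, eps=1e-3):
    """Average marginal effect of covariate j via central differences."""
    Xp, Xm = X.copy(), X.copy()
    Xp[:, j] += eps
    Xm[:, j] -= eps
    return ((net.predict(Xp) - net.predict(Xm)) / (2 * eps)).mean()

print([round(avg_partial(net, X, j), 3) for j in range(3)])
```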
By: | Cassidy K. Buhler; Hande Y. Benson |
Abstract: | The Markowitz mean-variance portfolio optimization model aims to balance expected return and risk when investing. However, there is a significant limitation to solving large portfolio optimization problems efficiently: the large and dense covariance matrix. Since portfolio performance can potentially be improved by considering a wider range of investments, it is imperative to be able to solve large portfolio optimization problems efficiently, typically in microseconds. We propose dimension reduction and increased sparsity as remedies for the covariance matrix. The size reduction is based on predictions from machine learning techniques and the solution to a linear programming problem. We find that the efficient frontier from the linear formulation is much better at predicting the assets on the Markowitz efficient frontier than the predictions from neural networks. Reducing the covariance matrix based on these predictions decreases both runtime and total iterations. We also present a technique to sparsify the covariance matrix in a way that preserves positive semi-definiteness, which improves runtime per iteration. The methods we discuss all achieve portfolio expected risk and return similar to those obtained from the full dense covariance matrix, but with improved optimizer performance.
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.12639&r=cmp |
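One simple way to sparsify a covariance matrix while preserving positive semi-definiteness, in the spirit of (though not necessarily identical to) the technique described above: threshold small off-diagonal entries, then shift the diagonal by the most negative eigenvalue if needed.

```python
import numpy as np

rng = np.random.default_rng(4)
returns = rng.normal(size=(50, 200))   # 50 assets, 200 observations (toy)
cov = np.cov(returns)                  # dense 50 x 50 covariance matrix

S = cov.copy()
S[np.abs(S) < 0.05] = 0.0              # zero out small off-diagonal entries
np.fill_diagonal(S, np.diag(cov))      # keep the variances intact
lam_min = np.linalg.eigvalsh(S).min()
if lam_min < 0:
    S += (-lam_min + 1e-10) * np.eye(len(S))   # restore semi-definiteness
print(f"sparsity: {(S == 0).mean():.0%}, "
      f"min eigenvalue: {np.linalg.eigvalsh(S).min():.2e}")
```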
By: | Jesús Fernández-Villaverde; Isaiah Hull |
Abstract: | We introduce a novel approach to solving dynamic programming problems, such as those in many economic models, on a quantum annealer, a specialized device that performs combinatorial optimization. Quantum annealers attempt to solve an NP-hard problem by starting in a quantum superposition of all states and generating candidate global solutions in milliseconds, irrespective of problem size. Using existing quantum hardware, we achieve an order-of-magnitude speed-up in solving the real business cycle model over benchmarks in the literature. We also provide a detailed introduction to quantum annealing and discuss its potential use for more challenging economic problems. |
Keywords: | computational methods, dynamic equilibrium economies, quantum computing, quantum annealing |
JEL: | C63 C80 E37 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_10500&r=cmp |
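Quantum annealers natively minimize quadratic unconstrained binary optimization (QUBO) objectives of the form x'Qx over binary vectors x, so a dynamic program must first be mapped into that form. A toy brute-force illustration of the problem class (no quantum hardware; coefficients invented):

```python
import itertools
import numpy as np

# Assumed toy QUBO matrix (upper triangular by convention):
Q = np.array([[-1.0, 2.0, 0.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0,  0.0, -1.0]])

# Exhaustive search over binary vectors, i.e. what the annealer approximates:
best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
print("minimizing bitstring:", best)
```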
By: | Marco Catola; Silvia Leoni |
Abstract: | The application of Agent-Based Modelling to Game Theory allows us to benefit from the strengths of both approaches and to enrich the study of games whose solutions are difficult to elicit analytically. Using an agent-based approach to sequential games, however, poses some issues, which explains the small number of applications of this type. We contribute to this literature by applying the agent-based approach to a lobbying game involving environmental regulation and firms’ choice of abatement. We simulate this game and test the robustness of its game-theoretical prediction against the results obtained. We find that while theoretical predictions are generally consistent with the simulated results, this novel approach highlights a few differences. First, the market converges to a green state in a larger number of cases than theory predicts. Second, simulations show that it is possible for this market to converge to a polluting state in the very long run, an outcome not envisaged by the theoretical predictions. Sensitivity experiments on the main model parameters confirm the robustness of our findings.
Keywords: | Agent-Based-Modelling, Environmental Regulation, Industrial Organisation, Lobbying |
JEL: | C63 D72 L13 L51 |
Date: | 2023–06–01 |
URL: | http://d.repec.org/n?u=RePEc:pie:dsedps:2023/294&r=cmp |
By: | Xinli Yu; Zheng Chen; Yuan Ling; Shujing Dong; Zongyi Liu; Yanbin Lu |
Abstract: | This paper presents a novel study on harnessing Large Language Models' (LLMs) outstanding knowledge and reasoning abilities for explainable financial time series forecasting. The application of machine learning models to financial time series comes with several challenges, including the difficulty of cross-sequence reasoning and inference, the hurdle of incorporating multi-modal signals from historical news, financial knowledge graphs, etc., and the issue of interpreting and explaining the model results. In this paper, we focus on NASDAQ-100 stocks, making use of publicly accessible historical stock price data, company metadata, and historical economic/financial news. We conduct experiments to illustrate the potential of LLMs in offering a unified solution to the aforementioned challenges. Our experiments include zero-shot/few-shot inference with GPT-4 and instruction-based fine-tuning of a public LLM, Open LLaMA. We demonstrate that our approach outperforms several baselines, including the widely applied classic ARMA-GARCH model and a gradient-boosting tree model. Through the performance comparison results and a few examples, we find that LLMs can make well-reasoned decisions by reasoning over information from both textual news and price time series, extracting insights, leveraging cross-sequence information, and utilizing the inherent knowledge embedded within the LLM. Additionally, we show that a publicly available LLM such as Open LLaMA, after fine-tuning, can comprehend the instruction to generate explainable forecasts and achieve reasonable performance, albeit relatively inferior to GPT-4.
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.11025&r=cmp |
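For context, the classic ARMA-GARCH baseline that the LLM approach is compared against can be fit in a few lines with the `arch` package; the synthetic returns below are placeholders for the NASDAQ-100 data used in the paper.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)
returns = 100 * rng.normal(scale=0.01, size=500)   # toy daily % returns

# AR(1) mean with GARCH(1,1) volatility:
model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = model.fit(disp="off")
print(res.forecast(horizon=5).mean.iloc[-1])       # 5-step-ahead mean forecast
```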
By: | Xiaoyue Li; John M. Mulvey |
Abstract: | Optimal execution of a portfolio has been a challenging problem for institutional investors. Traders face a trade-off between average trading price and uncertainty, and traditional methods suffer from the curse of dimensionality. Here, we propose a four-step numerical framework for the optimal portfolio execution problem where multiple market regimes exist, with the underlying regime switching based on a Markov process. The market impact costs are modelled with a temporary part and a permanent part, where the former affects only the current trade while the latter persists. Our approach accepts impact cost functions in generic forms. First, we calculate the approximated orthogonal portfolios based on estimated impact cost functions; second, we employ dynamic programming to learn the optimal selling schedule of each approximated orthogonal portfolio; third, the weights of a neural network are pre-trained with the strategy suggested by the previous step; finally, we train the neural network to optimize on the original trading model. In our experiment on a 10-asset liquidation example with quadratic impact costs, the proposed combined method provides promising selling strategies for both CRRA (constant relative risk aversion) and mean-variance objectives. The running time is linear in the number of risky assets in the portfolio as well as in the number of trading periods. Possible improvements in running time are discussed for potential large-scale usage.
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.08809&r=cmp |
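A heavily simplified single-asset version of the second step (dynamic programming over a selling schedule), with quadratic temporary impact and a penalty on remaining inventory; the grid sizes and coefficients are assumptions for illustration.

```python
import numpy as np

X0, T = 10, 5            # shares to sell, number of trading periods
eta, risk = 0.1, 0.05    # temporary-impact and inventory-risk coefficients

# V[t, x]: minimal cost of selling x remaining shares over periods t..T-1
V = np.full((T + 1, X0 + 1), np.inf)
V[T, 0] = 0.0            # all shares must be gone by the end
policy = np.zeros((T, X0 + 1), dtype=int)
for t in range(T - 1, -1, -1):
    for x in range(X0 + 1):
        for v in range(x + 1):                    # shares sold this period
            c = eta * v**2 + risk * (x - v) ** 2 + V[t + 1, x - v]
            if c < V[t, x]:
                V[t, x], policy[t, x] = c, v

x, schedule = X0, []
for t in range(T):
    schedule.append(policy[t, x])
    x -= policy[t, x]
print("optimal selling schedule:", schedule)
```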
By: | Leonardo Bargigli; Filippo Pietrini |
Abstract: | We examine how the cyclical fluctuations of demand depend on specific behavioral attitudes of heterogeneous agents. Extending the model of Tassier (2004), we use simulations to investigate consumption dynamics when agents are inclined both to conformism and to distinction, and use goods as elements of a communication system. Our results challenge the view that conspicuous consumption is typical only of a wealthy class and of some positional goods, since our model makes no assumptions about the features of the goods or the income distribution.
Keywords: | goods cycles, agent-based model, sociology of consumption
JEL: | D91 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:frz:wpaper:wp2023_01.rdf&r=cmp |
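A toy rendition of the mechanism described above: agents conform to their own group's choices while avoiding goods popular in the other group. Every number here is an illustrative assumption, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(11)
n_agents, n_goods, steps = 200, 3, 5000
group = rng.integers(0, 2, n_agents)        # two social groups
choice = rng.integers(0, n_goods, n_agents) # each agent's current good

for _ in range(steps):
    i = rng.integers(n_agents)
    own = np.bincount(choice[group == group[i]], minlength=n_goods)
    other = np.bincount(choice[group != group[i]], minlength=n_goods)
    # Conformism rewards in-group popularity; distinction penalizes
    # out-group popularity:
    choice[i] = np.argmax(own - other)

print("demand per good:", np.bincount(choice, minlength=n_goods))
```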
By: | Carlo Drago (University of Niccolò Cusano); Loris Di Nallo (University of Cassino e del Lazio Meridionale); Maria Lucetta Russotto (University of Firenze) |
Abstract: | Encouraging social information reporting and disclosure can promote sustainable banking. This paper aims to measure banking social sustainability by constructing a new interval-based composite indicator using the Thomson Reuters database. We propose an approach to constructing interval-based composite indicators that sensibly enhances their construction by allowing us to measure the uncertainty due to the choices made in the composite indicator design. The methodological approach is based on a Monte-Carlo simulation and improves the information that composite indicators can provide. Thus, we measure both the value of the social indicator and its subcomponents and the uncertainty in those values due to the different possible weightings. The results show that the best international ESG practices among European banks are found in French and United Kingdom banks rather than in Italian banks. Finally, considering the growing attention to ESG disclosure and its adherence to reality, we analyze innovative perspectives and propose policy recommendations to support sustainable banking ecosystems.
Keywords: | Social Index, Sustainable Banking, ESG, Monte-Carlo Simulation, Machine Learning, Interval-based Composite Indicators |
JEL: | G21 Q5 C02 C15 C43 C63 |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:fem:femwpa:2023.13&r=cmp |
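The interval idea can be sketched directly: draw many admissible weight vectors, compute the composite under each, and report the induced range per bank. The scores below are synthetic; the actual indicator design is the paper's.

```python
import numpy as np

rng = np.random.default_rng(6)
scores = rng.random((10, 4))    # 10 banks x 4 social subindicators (toy)

# Monte-Carlo over weightings: Dirichlet draws sum to one by construction.
weights = rng.dirichlet(np.ones(4), size=5000)
composites = scores @ weights.T              # one composite value per draw
lo, hi = composites.min(axis=1), composites.max(axis=1)
for bank, (a, b) in enumerate(zip(lo, hi)):
    print(f"bank {bank}: composite in [{a:.3f}, {b:.3f}]")
```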
By: | Okan Erol, Kazim |
Abstract: | This paper studies income distribution and redistribution in Turkey using a microsimulation analysis. Developing a Turkish tax and benefit microsimulation model allows the analysis of the Turkish public revenue system using national SILC data. In this research, TRSILC input data have been transformed into a EUROMOD input dataset and included in the EUROMOD model, such that we can test the effectiveness of the Turkish tax and benefit system on data representative of Turkish private households. This makes it possible to compare the Turkish tax-benefit system with those of other European countries under the same methodology. As a pioneering model, TURKMOD can be essential for assessing the impact of the tax and benefit system on income inequality in Turkey.
Date: | 2022–02–01 |
URL: | http://d.repec.org/n?u=RePEc:ese:cempwp:cempa1-22&r=cmp |
By: | Fabio Baschetti (Scuola Normale Superiore); Giacomo Bormetti (University of Bologna); Pietro Rossi (University of Bologna; Prometeia S.p.A) |
Abstract: | We propose a neural network-based approach to calibrating stochastic volatility models, which combines the pioneering grid approach of Horvath et al. (2021) with the pointwise two-stage calibration of Bayer and Stemper (2018). Our methodology inherits robustness from the former while, thanks to the pointwise approach, not suffering from the need for interpolation/extrapolation techniques. Crucial to the entire procedure is the generation of implied volatility surfaces on random grids, which are dispensed to the network in the training phase. We support the validity of our calibration technique with several empirical and Monte Carlo experiments for the rough Bergomi and Heston models under a simple but effective parametrization of the forward variance curve. The approach paves the way for valuable applications in financial engineering, for instance pricing under local stochastic volatility models, and for extensions to the fast-growing field of path-dependent volatility models.
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.11061&r=cmp |
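A schematic of the pointwise two-stage idea, with an invented closed-form smile standing in for an expensive stochastic-volatility pricer: the network learns (parameters, strike, maturity) -> implied volatility on random grid points, and calibration then minimizes the fit to market quotes.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

def model_vol(a, b, k, t):
    """Assumed toy smile in place of a rough Bergomi / Heston pricer."""
    return a + b * k**2 / (1 + t)

# Stage one: train on random parameters AND random (k, t) grid points.
n = 20000
a, b = rng.uniform(0.1, 0.4, n), rng.uniform(0.0, 0.5, n)
k, t = rng.uniform(-0.5, 0.5, n), rng.uniform(0.1, 2.0, n)
X = np.column_stack([a, b, k, t])
net = MLPRegressor((64, 64), max_iter=300).fit(X, model_vol(a, b, k, t))

# Stage two: recover parameters from quotes at arbitrary market points.
km, tm = rng.uniform(-0.5, 0.5, 30), rng.uniform(0.1, 2.0, 30)
market = model_vol(0.2, 0.3, km, tm)        # "true" parameters (0.2, 0.3)

def objective(p):
    Xq = np.column_stack([np.full(30, p[0]), np.full(30, p[1]), km, tm])
    return np.mean((net.predict(Xq) - market) ** 2)

print(minimize(objective, x0=[0.3, 0.1], method="Nelder-Mead").x)
```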
By: | Gaël Le Mens; Balász Kovács; Michael T. Hannan; Guillem Pros |
Abstract: | Recently, the world's attention has been captivated by Large Language Models (LLMs) thanks to OpenAI's ChatGPT, which rapidly proliferated as an app powered by GPT-3 and now its successor, GPT-4. If these LLMs produce human-like text, the semantic spaces they construct likely align with those used by humans for interpreting and generating language. This suggests that social scientists could use these LLMs to construct measures of semantic similarity that match human judgment. In this article, we provide an empirical test of this intuition. We use GPT-4 to construct a new measure of typicality: the similarity of a text document to a concept or category. We evaluate its performance against other model-based typicality measures in terms of their correspondence with human typicality ratings. We conduct this comparative analysis in two domains: the typicality of books in literary genres (using an existing dataset of book descriptions) and the typicality of tweets authored by US Congress members in the Democratic and Republican parties (using a novel dataset). The GPT-4 typicality measure not only meets or exceeds the current state of the art but accomplishes this without any model training. This is a breakthrough because the previous state-of-the-art measure required fine-tuning a model (a BERT text classifier) on hundreds of thousands of text documents to achieve its performance. Our comparative analysis emphasizes the need for systematic empirical validation of measures based on LLMs: several measures based on other recent LLMs achieve at best a moderate correspondence with human judgments.
Keywords: | categories, concepts, deep learning, typicality, GPT, chatGPT, BERT, Similarity |
JEL: | C18 C52 |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:1864&r=cmp |
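For orientation, one generic embedding-based typicality measure of the kind being compared: cosine similarity between a document vector and a category centroid. The random vectors below are placeholders for any encoder's output.

```python
import numpy as np

rng = np.random.default_rng(8)
genre_docs = rng.normal(size=(100, 384))  # embeddings of genre exemplars
centroid = genre_docs.mean(axis=0)

def typicality(doc_vec, centroid):
    """Cosine similarity of a document to the category centroid."""
    return doc_vec @ centroid / (
        np.linalg.norm(doc_vec) * np.linalg.norm(centroid))

print(typicality(rng.normal(size=384), centroid))
```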
By: | Kai Feng; Han Hong; Ke Tang; Jingyuan Wang |
Abstract: | This paper proposes a statistical framework with which artificial intelligence can improve human decision making. The performance of each human decision maker is first benchmarked against machine predictions; we then replace the decisions made by a subset of the decision makers with the recommendation from the proposed artificial intelligence algorithm. Using a large nationwide dataset of pregnancy outcomes and doctor diagnoses from prepregnancy checkups of reproductive age couples, we experimented with both a heuristic frequentist approach and a Bayesian posterior loss function approach with an application to abnormal birth detection. We find that our algorithm on a test dataset results in a higher overall true positive rate and a lower false positive rate than the diagnoses made by doctors only. We also find that the diagnoses of doctors from rural areas are more frequently replaceable, suggesting that artificial intelligence assisted decision making tends to improve precision more in less developed regions. |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.11689&r=cmp |
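A stylized version of the benchmarking-and-replacement logic described above, with synthetic diagnoses (every rate and noise level is invented):

```python
import numpy as np

rng = np.random.default_rng(9)
truth = rng.random(10000) < 0.1   # abnormal-birth indicator (synthetic)
machine = truth + rng.normal(0, 0.35, truth.size) > 0.5   # noisy predictor
doctor = truth + rng.normal(0, 0.5, truth.size) > 0.5     # noisier diagnoses

def rates(pred, truth):
    return ((pred & truth).sum() / truth.sum(),        # true positive rate
            (pred & ~truth).sum() / (~truth).sum())    # false positive rate

# Replace a subset of the doctors' calls with the machine recommendation:
replace = rng.random(truth.size) < 0.3
combined = np.where(replace, machine, doctor)
for name, p in [("doctor", doctor), ("machine", machine),
                ("combined", combined)]:
    print(name, "TPR %.2f, FPR %.2f" % rates(p, truth))
```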
By: | OECD |
Abstract: | This report focuses on regulatory sandboxes in artificial intelligence (AI), where authorities engage firms to test innovative products or services that challenge existing legal frameworks. Participating firms obtain a waiver from specific legal provisions or compliance processes to innovate. It highlights positive impacts like increased venture capital investment in fintech start-ups. It points out challenges, risks, and policy considerations for AI sandboxes, emphasizing interdisciplinary cooperation, building AI expertise, regulatory interoperability, and trade policy. It also addresses the importance of comprehensive criteria for eligibility and assessing trials, as well as the impact on innovation and competition. |
Date: | 2023–07–13 |
URL: | http://d.repec.org/n?u=RePEc:oec:stiaab:356-en&r=cmp |
By: | Lin Liu; Rajarshi Mukherjee; James M. Robins |
Abstract: | In this article we develop a feasible version of the assumption-lean tests in Liu et al. 20 that can falsify an analyst's justification for the validity of a reported nominal $(1 - \alpha)$ Wald confidence interval (CI) centered at a double machine learning (DML) estimator for any member of the class of doubly robust (DR) functionals studied by Rotnitzky et al. 21. The class of DR functionals is broad and of central importance in economics and biostatistics. It strictly includes both (i) the class of mean-square continuous functionals that can be written as an expectation of an affine functional of a conditional expectation studied by Chernozhukov et al. 22 and (ii) the class of functionals studied by Robins et al. 08. The present state-of-the-art estimators for DR functionals $\psi$ are DML estimators $\hat{\psi}_{1}$. The bias of $\hat{\psi}_{1}$ depends on the product of the rates at which two nuisance functions $b$ and $p$ are estimated. Most commonly an analyst justifies the validity of her Wald CIs by proving that, under her complexity-reducing assumptions, the Cauchy-Schwarz (CS) upper bound for the bias of $\hat{\psi}_{1}$ is $o(n^{-1/2})$. Thus if the hypothesis $H_{0}$: the CS upper bound is $o(n^{-1/2})$ is rejected by our test, we will have falsified the analyst's justification for the validity of her Wald CIs. In this work, we exhibit a valid assumption-lean falsification test of $H_{0}$, without relying on complexity-reducing assumptions on $b, p$, or their estimates $\hat{b}, \hat{p}$. Simulation experiments are conducted to demonstrate how the proposed assumption-lean test can be used in practice. An unavoidable limitation of our methodology is that no assumption-lean test of $H_{0}$, including ours, can be a consistent test. Thus failure of our test to reject is not meaningful evidence in favor of $H_{0}$.
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.10590&r=cmp |
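Schematically, the hypothesis under test is the analyst's Cauchy-Schwarz justification; in the abstract's notation:

```latex
% The bias of the DML estimator is bounded via Cauchy-Schwarz by the product
% of the nuisance estimation errors, and the analyst's claim is that this
% bound vanishes faster than the parametric rate:
\[
  \bigl|\operatorname{Bias}(\hat{\psi}_{1})\bigr|
    \;\le\; \lVert \hat{b} - b \rVert_{2}\,\lVert \hat{p} - p \rVert_{2},
  \qquad
  H_{0}:\ \lVert \hat{b} - b \rVert_{2}\,\lVert \hat{p} - p \rVert_{2}
    = o\bigl(n^{-1/2}\bigr).
\]
```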
By: | Richiardi, Matteo; Bronka, Patryk; van de Ven, Justin |
Date: | 2022–03–01 |
URL: | http://d.repec.org/n?u=RePEc:ese:cempwp:cempa3-22&r=cmp |
By: | Evelina Gavrilova; Audun Langørgen; Floris T. Zoutman; Floris Zoutman |
Abstract: | This paper develops a machine-learning method that allows researchers to estimate heterogeneous treatment effects with panel data in a setting with many covariates. Our method, which we name the dynamic causal forest (DCF) method, extends the causal-forest method of Wager and Athey (2018) by allowing for the estimation of dynamic treatment effects in a difference-in-differences setting. Regular causal forests require conditional independence to consistently estimate heterogeneous treatment effects. In contrast, DCFs provide a consistent estimate of heterogeneous treatment effects under the weaker assumption of parallel trends. DCFs can be used to create event-study plots, which aid in the inspection of pre-trends and treatment effect dynamics. We provide an empirical application in which DCFs are used to estimate the incidence of the payroll tax on wages paid to employees. We consider treatment effect heterogeneity associated with personal- and firm-level variables. We find that on average the incidence of the tax is shifted onto workers through incidental payments, rather than contracted wages. Heterogeneity is mainly explained by firm- and workforce-level variables. Firms with a large and heterogeneous workforce are most effective in passing on the incidence of the tax to workers.
Keywords: | causal forest, treatment effect heterogeneity, payroll tax incidence, administrative data |
JEL: | C18 H22 J31 M54 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_10532&r=cmp |
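For intuition only, a drastically simplified version of the estimand: group-specific difference-in-differences effects, here with the heterogeneity dimension known in advance (the DCF instead discovers it by growing honest trees; all data below are synthetic).

```python
import numpy as np

rng = np.random.default_rng(10)
n = 8000
large_firm = rng.random(n) < 0.5
treated = rng.random(n) < 0.5                 # payroll-tax change (toy)
effect = np.where(large_firm, -2.0, -0.5)     # heterogeneous incidence
wage_pre = 30 + rng.normal(size=n)
wage_post = wage_pre + 1.0 + treated * effect + rng.normal(size=n)

d = wage_post - wage_pre   # differencing removes unit fixed effects
for g, name in [(large_firm, "large"), (~large_firm, "small")]:
    did = d[g & treated].mean() - d[g & ~treated].mean()
    print(f"{name} firms: DiD estimate {did:+.2f}")
```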
By: | Astrid Bertrand (IP Paris - Institut Polytechnique de Paris, DIVA - Design, Interaction, Visualization & Applications - LTCI - Laboratoire Traitement et Communication de l'Information - IMT - Institut Mines-Télécom [Paris] - Télécom Paris); James Eagan (DIVA - Design, Interaction, Visualization & Applications - LTCI - Laboratoire Traitement et Communication de l'Information - IMT - Institut Mines-Télécom [Paris] - Télécom Paris, IP Paris - Institut Polytechnique de Paris, INFRES - Département Informatique et Réseaux - Télécom ParisTech); Winston Maxwell (SES - Département Sciences Economiques et Sociales - Télécom ParisTech, ECOGE - Economie Gestion - I3 SES - Institut interdisciplinaire de l’innovation de Telecom Paris - Télécom ParisTech - I3 - Institut interdisciplinaire de l’innovation - CNRS - Centre National de la Recherche Scientifique, IP Paris - Institut Polytechnique de Paris, Télécom ParisTech, I3 SES - Institut interdisciplinaire de l’innovation de Telecom Paris - Télécom ParisTech - I3 - Institut interdisciplinaire de l’innovation - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | Robo-advisors are democratizing access to life insurance by enabling fully online underwriting. In Europe, financial legislation requires that the reasons for recommending a life insurance plan be explained according to the characteristics of the client, in order to empower the client to make a "fully informed decision". In this study conducted in France, we seek to understand whether legal requirements for feature-based explanations actually help users in their decision-making. We conduct a qualitative study to characterize the explainability needs formulated by non-expert users and by regulators expert in customer protection. We then run a large-scale quantitative study using Robex, a simplified robo-advisor built using ecological interface design that delivers recommendations with explanations in different hybrid textual and visual formats: either "dialogic" (more textual) or "graphical" (more visual). We find that providing feature-based explanations does not improve appropriate reliance or understanding compared to not providing any explanation. In addition, dialogic explanations increase users' trust in the recommendations of the robo-advisor, sometimes to the users' detriment. This real-world scenario illustrates how XAI can address information asymmetry in complex areas such as finance. This work has implications for other critical, AI-based recommender systems, where the General Data Protection Regulation (GDPR) may require similar provisions for feature-based explanations.
Keywords: | explainability, intelligibility, AI regulation, financial inclusion
Date: | 2023–06–12 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-04125939&r=cmp |
By: | Richiardi, Matteo; Bronka, Patryk |
Date: | 2022–03–30 |
URL: | http://d.repec.org/n?u=RePEc:ese:cempwp:cempa5-22&r=cmp |
By: | Alex Kim; Maximilian Muhn; Valeri Nikolaev |
Abstract: | Generative AI tools such as ChatGPT can fundamentally change the way investors process information. We probe the economic usefulness of these tools in summarizing complex corporate disclosures using the stock market as a laboratory. The unconstrained summaries are dramatically shorter, often by more than 70% compared to the originals, whereas their information content is amplified. When a document has a positive (negative) sentiment, its summary becomes more positive (negative). More importantly, the summaries are more effective at explaining stock market reactions to the disclosed information. Motivated by these findings, we propose a measure of information "bloat." We show that bloated disclosure is associated with adverse capital markets consequences, such as lower price efficiency and higher information asymmetry. Finally, we show that the model is effective at constructing targeted summaries that identify firms' (non-)financial performance and risks. Collectively, our results indicate that generative language modeling adds considerable value for investors with information processing constraints. |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.10224&r=cmp |
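The "bloat" notion lends itself to a one-line arithmetic sketch (my reading of the abstract, not the paper's exact definition): the share of a document that a faithful summary can shed.

```python
def bloat(original: str, summary: str) -> float:
    """Fraction of the original document removed by its summary."""
    return 1 - len(summary) / len(original)

# A summary more than 70% shorter, as the abstract reports:
print(f"{bloat('x' * 1000, 'x' * 280):.0%}")
```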