nep-cmp New Economics Papers
on Computational Economics
Issue of 2024‒06‒17
29 papers chosen by



  1. Simulating the economic impact of rationality through reinforcement learning and agent-based modelling By Simone Brusatin; Tommaso Padoan; Andrea Coletta; Domenico Delli Gatti; Aldo Glielmo
  2. Pricing Catastrophe Bonds -- A Probabilistic Machine Learning Approach By Xiaowei Chen; Hong Li; Yufan Lu; Rui Zhou
  3. Optimizing Deep Reinforcement Learning for American Put Option Hedging By Reilly Pickard; F. Wredenhagen; Y. Lawryshyn
  4. The effectiveness of central bank purchases of long-term treasury securities: A neural network approach By Tänzer, Alina
  5. Comparative analysis of neural network architectures for short-term FOREX forecasting By Theodoros Zafeiriou; Dimitris Kalles
  6. Manufacturing Sentiment: Forecasting Industrial Production with Text Analysis By Tomaz Cajner; Leland D. Crane; Christopher J. Kurz; Norman J. Morin; Paul E. Soto; Betsy Vrankovich
  7. Unleashing the Power of AI: Transforming Marketing Decision-Making in Heavy Machinery with Machine Learning, Radar Chart Simulation, and Markov Chain Analysis By Tian Tian; Jiahao Deng
  8. Neural Network Learning of Black-Scholes Equation for Option Pricing By Daniel de Souza Santos; Tiago Alessandro Espinola Ferreira
  9. Can machine learning unlock new insights into high-frequency trading? By G. Ibikunle; B. Moews; K. Rzayev
  10. Hedging American Put Options with Deep Reinforcement Learning By Reilly Pickard; Finn Wredenhagen; Julio DeJesus; Mario Schlener; Yuri Lawryshyn
  11. Portfolio Management using Deep Reinforcement Learning By Ashish Anil Pawar; Vishnureddy Prashant Muskawar; Ritesh Tiku
  12. Finding a needle in a haystack: a machine learning framework for anomaly detection in payment systems By Ajit Desai; Anneke Kosse; Jacob Sharples
  13. Artificial Intelligence Investments Reduce Risks to Critical Mineral Supply By Joaquin Vespignani; Russell Smyth
  14. Algorithm as Experiment: Machine Learning, Market Design, and Policy Eligibility Rules By Yusuke Narita; Kohei Yata
  15. Forecasting the Stability and Growth Pact compliance using Machine Learning By Kea Baret; Amélie Barbier-Gauchard; Theophilos Papadimitriou
  16. Large Language Model in Financial Regulatory Interpretation By Zhiyu Cao; Zachary Feinstein
  17. Overcoming Anchoring Bias: The Potential of AI and XAI-based Decision Support By Felix Haag; Carlo Stingl; Katrin Zerfass; Konstantin Hopf; Thorsten Staake
  18. Quantile Preferences in Portfolio Choice: A Q-DRL Approach to Dynamic Diversification By Attila Sarkany; Lukas Janasek; Jozef Barunik
  19. Strategic Behavior and AI Training Data By Christian Peukert; Florian Abeillon; Jérémie Haese; Franziska Kaiser; Alexander Staub
  20. Textual Representation of Business Plans and Firm Success By Maria S. Mavillonio
  21. Breaking open the black box of the production function: an agent-based model accounting for time in production processes By Jack Birner; Marco Mazzoli; Eleonora Priori; Pietro Terna
  22. Wealth, Cost, and Misperception: Empirical Estimation of Three Interaction Channels in a Financial-Macroeconomic Agent-Based Model By Jiri Kukacka; Erik Zila
  23. STRIDE: A Tool-Assisted LLM Agent Framework for Strategic and Interactive Decision-Making By Chuanhao Li; Runhan Yang; Tiankai Li; Milad Bafarassat; Kourosh Sharifi; Dirk Bergemann; Zhuoran Yang
  24. De-Biasing Models of Biased Decisions: A Comparison of Methods Using Mortgage Application Data By Nicholas Tenev
  25. Identifying Monetary Policy Shocks: A Natural Language Approach By S. Borağan Aruoba; Thomas Drechsel
  26. On Quantum Ambiguity and Potential Exponential Computational Speed-Ups to Solving By Eric Ghysels; Jack Morgan
  27. Convolutional Neural Networks to signal currency crises: from the Asian financial crisis to the Covid crisis. By Sylvain BARTHÉLÉMY; Virginie GAUTIER; Fabien RONDEAU
  28. How good are LLMs in risk profiling? By Thorsten Hens; Trine Nordlie
  29. Household Bargaining with Limited Commitment: A Practitioner’s Guide By Adam Hallengreen; Thomas H. Joergensen; Annasofie M. Olesen

  1. By: Simone Brusatin; Tommaso Padoan; Andrea Coletta; Domenico Delli Gatti; Aldo Glielmo
    Abstract: Agent-based models (ABMs) are simulation models used in economics to overcome some of the limitations of traditional frameworks based on general equilibrium assumptions. However, agents within an ABM follow predetermined, not fully rational, behavioural rules which can be cumbersome to design and difficult to justify. Here we leverage multi-agent reinforcement learning (RL) to expand the capabilities of ABMs with the introduction of fully rational agents that learn their policy by interacting with the environment and maximising a reward function. Specifically, we propose a 'Rational macro ABM' (R-MABM) framework by extending a paradigmatic macro ABM from the economic literature. We show that gradually substituting ABM firms in the model with RL agents, trained to maximise profits, allows for a thorough study of the impact of rationality on the economy. We find that RL agents spontaneously learn three distinct strategies for maximising profits, with the optimal strategy depending on the level of market competition and rationality. We also find that RL agents with independent policies, and without the ability to communicate with each other, spontaneously learn to segregate into different strategic groups, thus increasing market power and overall profits. Finally, we find that a higher degree of rationality in the economy always improves the macroeconomic environment as measured by total output; depending on the specific rational policy, this can come at the cost of higher instability. Our R-MABM framework is general, allows for stable multi-agent learning, and represents a principled and robust direction for extending existing economic simulators.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.02161&r=
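    Illustrative sketch (editor's, not the authors' code): the R-MABM replaces behavioural pricing rules with deep RL agents inside a full macro ABM. As a toy stand-in for that idea, the snippet below lets a single firm learn a price markup by tabular Q-learning with profit as the reward; the demand curve, state discretisation, and all parameters are assumptions made only for illustration.

      import numpy as np

      # Toy stand-in for replacing a behavioural pricing rule with a learning agent:
      # one firm picks a markup over unit cost via tabular Q-learning, with profit as
      # the reward. Demand curve, state space and parameters are assumptions.
      rng = np.random.default_rng(0)
      markups = np.linspace(0.0, 0.5, 6)        # actions: markup over unit cost
      inventory_bins = np.arange(5)             # states: discretised unsold inventory
      Q = np.zeros((len(inventory_bins), len(markups)))
      alpha, gamma, eps, unit_cost = 0.1, 0.95, 0.1, 1.0

      def demand(price):
          """Toy downward-sloping stochastic demand (assumption)."""
          return max(0.0, 10.0 - 6.0 * price + rng.normal(0.0, 0.5))

      state = 0
      for step in range(20_000):
          a = rng.integers(len(markups)) if rng.random() < eps else int(Q[state].argmax())
          price = unit_cost * (1.0 + markups[a])
          produced = 10.0
          sold = min(produced + state, demand(price))
          profit = price * sold - unit_cost * produced
          next_state = int(np.clip(produced + state - sold, 0, inventory_bins[-1]))
          # Q-learning update on the profit reward
          Q[state, a] += alpha * (profit + gamma * Q[next_state].max() - Q[state, a])
          state = next_state

      print("Learned markup by inventory state:", markups[Q.argmax(axis=1)])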
  2. By: Xiaowei Chen; Hong Li; Yufan Lu; Rui Zhou
    Abstract: This paper proposes a probabilistic machine learning method to price catastrophe (CAT) bonds in the primary market. The proposed method combines machine-learning-based predictive models with Conformal Prediction, an innovative algorithm that generates distribution-free probabilistic forecasts for CAT bond prices. Using primary market CAT bond transaction records between January 1999 and March 2021, the proposed method is found to be more robust and yields more accurate predictions of the bond spreads than traditional regression-based methods. Furthermore, the proposed method generates more informative prediction intervals than linear regression and identifies important nonlinear relationships between various risk factors and bond spreads, suggesting that linear regressions could misestimate the bond spreads. Overall, this paper demonstrates the potential of machine learning methods in improving the pricing of CAT bonds.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.00697&r=
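    Illustrative sketch (editor's, not the authors' code): the paper wraps an ML spread predictor in Conformal Prediction to obtain distribution-free prediction intervals. The snippet below shows split conformal prediction around a gradient-boosting regressor on synthetic data; the regressor, the features, and the 90% coverage level are assumptions, not the paper's specification.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      # Split conformal prediction: train a point predictor, then use a held-out
      # calibration set to turn its errors into distribution-free intervals.
      rng = np.random.default_rng(1)
      n = 2_000
      X = rng.normal(size=(n, 5))                    # stand-in for bond/peril risk factors
      y = 3 + X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.3, n)   # stand-in for spreads

      X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
      model = GradientBoostingRegressor().fit(X_tr, y_tr)

      # Conformity scores on the calibration set: absolute residuals.
      scores = np.abs(y_cal - model.predict(X_cal))
      alpha = 0.1                                    # target 90% coverage
      q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

      x_new = rng.normal(size=(1, 5))
      pred = model.predict(x_new)[0]
      print(f"point prediction {pred:.2f}, 90% interval [{pred - q:.2f}, {pred + q:.2f}]")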
  3. By: Reilly Pickard; F. Wredenhagen; Y. Lawryshyn
    Abstract: This paper contributes to the existing literature on hedging American options with Deep Reinforcement Learning (DRL). The study first investigates hyperparameter impact on hedging performance, considering learning rates, training episodes, neural network architectures, training steps, and transaction cost penalty functions. Results highlight the importance of avoiding certain combinations, such as high learning rates with a high number of training episodes or low learning rates with few training episodes, and emphasize the significance of utilizing moderate values for optimal outcomes. Additionally, the paper warns against excessive training steps to prevent instability and demonstrates the superiority of a quadratic transaction cost penalty function over a linear version. This study then expands upon the work of Pickard et al. (2024), who utilize a Chebyshev interpolation option pricing method to train DRL agents with market calibrated stochastic volatility models. While the results of Pickard et al. (2024) showed that these DRL agents achieve satisfactory performance on empirical asset paths, this study introduces a novel approach in which new agents are trained at weekly intervals on newly calibrated stochastic volatility models. Results show DRL agents re-trained using weekly market data surpass the performance of those trained solely on the sale date. Furthermore, the paper demonstrates that both single-train and weekly-train DRL agents outperform the Black-Scholes Delta method at transaction costs of 1% and 3%. This practical relevance suggests that practitioners can leverage readily available market data to train DRL agents for effective hedging of options in their portfolios.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.08602&r=
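    Illustrative sketch: the abstract reports that a quadratic transaction cost penalty outperforms a linear one. The snippet below contrasts the two penalty shapes inside an assumed per-step hedging reward; the reward decomposition and the cost scale kappa are illustrative assumptions, not the paper's exact specification.

      import numpy as np

      # Per-step reward for a hedging agent, contrasting a linear with a quadratic
      # transaction-cost penalty. Reward form and kappa are assumptions.
      def step_reward(pnl_change, traded_shares, price, kappa=0.01, penalty="quadratic"):
          """Reward = change in hedged portfolio value minus a trading-cost penalty."""
          cost = kappa * price * abs(traded_shares)
          if penalty == "linear":
              return pnl_change - cost
          if penalty == "quadratic":
              # The quadratic penalty punishes large rebalancing trades disproportionately,
              # discouraging over-trading that a linear penalty tolerates.
              return pnl_change - cost ** 2
          raise ValueError(penalty)

      # Same P&L, small vs large rebalancing trade:
      for shares in (5, 50):
          lin = step_reward(0.0, shares, 100.0, penalty="linear")
          quad = step_reward(0.0, shares, 100.0, penalty="quadratic")
          print(f"trade {shares:>3} shares: linear reward {lin:8.2f}, quadratic reward {quad:10.2f}")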
  4. By: Tänzer, Alina
    Abstract: Central bank intervention in the form of quantitative easing (QE) during times of low interest rates is a controversial topic. This paper introduces a novel approach to study the effectiveness of such unconventional measures. Using U.S. data on six key financial and macroeconomic variables between 1990 and 2015, the economy is estimated by artificial neural networks. Historical counterfactual analyses show that real effects are less pronounced than yield effects. Disentangling the effects of the individual asset purchase programs, impulse response functions provide evidence for QE being less effective the more the crisis is overcome. The peak effects of all QE interventions during the Financial Crisis amount to only 1.3 pp for GDP growth and 0.6 pp for inflation, respectively. Hence, the timing as well as the volume of the interventions should be deliberated.
    Keywords: Artificial Intelligence, Machine Learning, Neural Networks, Forecasting and Simulation: Models and Applications, Financial Markets and the Macroeconomy, Monetary Policy, Central Banks and Their Policies
    JEL: C45 E47 E44 E52 E58
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:zbw:imfswp:295732&r=
  5. By: Theodoros Zafeiriou; Dimitris Kalles
    Abstract: The present document delineates the analysis, design, implementation, and benchmarking of various neural network architectures within a short-term frequency prediction system for the foreign exchange market (FOREX). Our aim is to simulate the judgment of the human expert (technical analyst) using a system that responds promptly to changes in market conditions, thus enabling the optimization of short-term trading strategies. We designed and implemented a series of LSTM neural network architectures which take exchange rate values as input and generate a short-term market trend forecasting signal, as well as a custom ANN architecture based on technical analysis indicator simulators. We performed a comparative analysis of the results and drew useful conclusions regarding the suitability of each architecture and the cost, in terms of time and computational power, of implementing them. The custom ANN architecture produces better prediction quality with higher sensitivity, using fewer resources and spending less time than the LSTM architectures. The custom ANN architecture appears to be ideal for use in low-power computing systems and for use cases that need fast decisions with the least possible computational cost.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.08045&r=
  6. By: Tomaz Cajner; Leland D. Crane; Christopher J. Kurz; Norman J. Morin; Paul E. Soto; Betsy Vrankovich
    Abstract: This paper examines the link between industrial production and the sentiment expressed in natural language survey responses from U.S. manufacturing firms. We compare several natural language processing (NLP) techniques for classifying sentiment, ranging from dictionary-based methods to modern deep learning methods. Using a manually labeled sample as ground truth, we find that deep learning models partially trained on a human-labeled sample of our data outperform other methods for classifying the sentiment of survey responses. Further, we capitalize on the panel nature of the data to train models which predict firm-level production using lagged firm-level text. This allows us to leverage a large sample of "naturally occurring" labels with no manual input. We then assess the extent to which each sentiment measure, aggregated to monthly time series, can serve as a useful statistical indicator and forecast industrial production. Our results suggest that the text responses provide information beyond the available numerical data from the same survey and improve out-of-sample forecasting; deep learning methods and the use of naturally occurring labels seem especially useful for forecasting. We also explore what drives the predictions made by the deep learning models, and find that a relatively small number of words associated with very positive/negative sentiment account for much of the variation in the aggregate sentiment index.
    Keywords: Industrial Production; Natural Language Processing; Machine Learning; Forecasting
    JEL: C10 E17 O14
    Date: 2024–05–03
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2024-26&r=
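    Illustrative sketch: the key idea behind "naturally occurring" labels is to pair lagged firm-level text with the firm's own subsequent production outcome, so no manual sentiment labelling is needed. The toy snippet below does this with a TF-IDF plus logistic regression pipeline; the comments, labels, and model choice are assumptions (the paper's preferred classifiers are deep learning models).

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # "Naturally occurring" labels: lagged survey text is paired with whether the
      # firm's own production subsequently rose (1) or fell (0), so no manual
      # sentiment labelling is required. Texts and model are toy assumptions.
      texts = [
          "new orders strong, backlog growing, hiring additional shifts",
          "demand collapsed, customers cancelling orders, inventory piling up",
          "supply chain improving and shipments ahead of schedule",
          "weak bookings, expect to idle one production line next month",
      ]
      production_up_next_month = [1, 0, 1, 0]

      model = make_pipeline(TfidfVectorizer(), LogisticRegression())
      model.fit(texts, production_up_next_month)

      print(model.predict(["order books are full and we are adding capacity"]))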
  7. By: Tian Tian; Jiahao Deng
    Abstract: This pioneering research introduces a novel approach for decision-makers in the heavy machinery industry, specifically focusing on production management. The study integrates machine learning techniques like Ridge Regression, Markov chain analysis, and radar charts to optimize North American Crawler Cranes market production processes. Ridge Regression enables growth pattern identification and performance assessment, facilitating comparisons and addressing industry challenges. Markov chain analysis evaluates risk factors, aiding in informed decision-making and risk management. Radar charts simulate benchmark product designs, enabling data-driven decisions for production optimization. This interdisciplinary approach equips decision-makers with transformative insights, enhancing competitiveness in the heavy machinery industry and beyond. By leveraging these techniques, companies can revolutionize their production management strategies, driving success in diverse markets.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.01913&r=
  8. By: Daniel de Souza Santos; Tiago Alessandro Espinola Ferreira
    Abstract: One of the most discussed problems in the financial world is stock option pricing. The Black-Scholes Equation is a parabolic partial differential equation which provides an option pricing model. The present work proposes an approach based on neural networks to solve the Black-Scholes Equation. Real-world data from the stock options market were used as the initial boundary condition to solve the Black-Scholes Equation. In particular, time series of call option prices of the Brazilian companies Petrobras and Vale were employed. The results indicate that the network can learn to solve the Black-Scholes Equation for a specific real-world stock option time series. The experimental results showed that neural network option pricing based on the Black-Scholes Equation solution can produce more accurate option price forecasts than the traditional Black-Scholes analytical solution. These results make it possible to use this methodology for short-term call option price forecasts in options markets.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.05780&r=
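    Illustrative sketch: the paper trains a neural network to solve the Black-Scholes equation with market data supplying the initial/boundary information. The PyTorch snippet below shows the generic physics-informed approach: penalise the Black-Scholes PDE residual on sampled (S, t) points plus a terminal call-payoff condition. The architecture, parameters, and synthetic collocation points are assumptions, not the authors' setup.

      import torch

      # Physics-informed sketch: a small network V(S, t) is trained so that the
      # Black-Scholes PDE residual vanishes on sampled interior points and the call
      # payoff holds at maturity. Parameters (r, sigma, K, T) are assumptions.
      torch.manual_seed(0)
      r, sigma, K, T = 0.05, 0.2, 100.0, 1.0

      net = torch.nn.Sequential(
          torch.nn.Linear(2, 64), torch.nn.Tanh(),
          torch.nn.Linear(64, 64), torch.nn.Tanh(),
          torch.nn.Linear(64, 1),
      )
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)

      def pde_residual(S, t):
          # Black-Scholes PDE: V_t + 0.5 sigma^2 S^2 V_SS + r S V_S - r V = 0
          S = S.requires_grad_(True)
          t = t.requires_grad_(True)
          V = net(torch.cat([S, t], dim=1))
          V_S, V_t = torch.autograd.grad(V.sum(), (S, t), create_graph=True)
          V_SS = torch.autograd.grad(V_S.sum(), S, create_graph=True)[0]
          return V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V

      for step in range(2000):
          S = torch.rand(256, 1) * 200.0            # interior collocation points
          t = torch.rand(256, 1) * T
          S_T = torch.rand(256, 1) * 200.0          # terminal-condition points
          payoff = torch.clamp(S_T - K, min=0.0)    # European call payoff at maturity

          loss = pde_residual(S, t).pow(2).mean() \
               + (net(torch.cat([S_T, torch.full_like(S_T, T)], dim=1)) - payoff).pow(2).mean()

          opt.zero_grad()
          loss.backward()
          opt.step()

      print("V(S=100, t=0) ≈", net(torch.tensor([[100.0, 0.0]])).item())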
  9. By: G. Ibikunle; B. Moews; K. Rzayev
    Abstract: We design and train machine learning models to capture the nonlinear interactions between financial market dynamics and high-frequency trading (HFT) activity. In doing so, we introduce new metrics to identify liquidity-demanding and -supplying HFT strategies. Both types of HFT strategies increase activity in response to information events and decrease it when trading speed is restricted, with liquidity-supplying strategies demonstrating greater responsiveness. Liquidity-demanding HFT is positively linked with latency arbitrage opportunities, whereas liquidity-supplying HFT is negatively related, aligning with theoretical expectations. Our metrics have implications for understanding the information production process in financial markets.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.08101&r=
  10. By: Reilly Pickard; Finn Wredenhagen; Julio DeJesus; Mario Schlener; Yuri Lawryshyn
    Abstract: This article leverages deep reinforcement learning (DRL) to hedge American put options, utilizing the deep deterministic policy gradient (DDPG) method. The agents are first trained and tested with Geometric Brownian Motion (GBM) asset paths and demonstrate superior performance over traditional strategies like the Black-Scholes (BS) Delta, particularly in the presence of transaction costs. To assess the real-world applicability of DRL hedging, a second round of experiments uses a market calibrated stochastic volatility model to train DRL agents. Specifically, 80 put options across 8 symbols are collected, stochastic volatility model coefficients are calibrated for each symbol, and a DRL agent is trained for each of the 80 options by simulating paths of the respective calibrated model. Not only do DRL agents outperform the BS Delta method when testing is conducted using the same calibrated stochastic volatility model data from training, but DRL agents also achieve better results when hedging the true asset path that occurred between the option sale date and maturity. As such, not only does this study present the first DRL agents tailored for American put option hedging, but results on both simulated and empirical market testing data also suggest the optimality of DRL agents over the BS Delta method in real-world scenarios. Finally, note that this study employs a model-agnostic Chebyshev interpolation method to provide DRL agents with option prices at each time step when a stochastic volatility model is used, thereby providing a general framework for an easy extension to more complex underlying asset processes.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.06774&r=
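    Illustrative sketch: the article supplies option prices to the DRL agent via a model-agnostic Chebyshev interpolation. The snippet below fits Chebyshev coefficients to prices computed on a small grid of spots and then evaluates the cheap interpolant elsewhere; using Black-Scholes as the "expensive" pricer in place of a calibrated stochastic volatility model is an assumption for the illustration.

      import numpy as np
      from numpy.polynomial import chebyshev as cheb
      from scipy.stats import norm

      # Price an option exactly on a few Chebyshev nodes with an "expensive" pricer,
      # fit Chebyshev coefficients, then evaluate the cheap interpolant anywhere.
      K, r, sigma, tau = 100.0, 0.05, 0.2, 0.5

      def bs_put(S):
          d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
          d2 = d1 - sigma * np.sqrt(tau)
          return K * np.exp(-r * tau) * norm.cdf(-d2) - S * norm.cdf(-d1)

      lo, hi = 60.0, 140.0
      nodes = cheb.chebpts2(17)                      # Chebyshev nodes on [-1, 1]
      S_nodes = 0.5 * (hi - lo) * (nodes + 1) + lo   # map nodes to the spot range
      coeffs = cheb.chebfit(nodes, bs_put(S_nodes), deg=16)

      def fast_put(S):
          x = 2.0 * (S - lo) / (hi - lo) - 1.0       # map spot back to [-1, 1]
          return cheb.chebval(x, coeffs)

      S_test = np.array([72.3, 95.0, 118.7])
      print("interpolated:", fast_put(S_test))
      print("exact       :", bs_put(S_test))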
  11. By: Ashish Anil Pawar; Vishnureddy Prashant Muskawar; Ritesh Tiku
    Abstract: Algorithmic trading systems, or financial robots, have been conquering the stock markets with their ability to execute complex statistical trading strategies. But with the recent development of deep learning technologies, these strategies are becoming less effective. The DQN and A2C models have previously outperformed eminent humans in game-playing and robotics. In our work, we propose a reinforced portfolio manager offering assistance in the allocation of weights to assets. The environment offers the manager the freedom to go long and even short on the assets. The weight allocation advisements are restricted to the choice of portfolio assets and tested empirically to beat benchmark indices. The manager performs financial transactions in a postulated liquid market without any transaction charges. This work provides the conclusion that the proposed portfolio manager, with actions centered on weight allocations, can surpass the risk-adjusted returns of conventional portfolio managers.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.01604&r=
  12. By: Ajit Desai; Anneke Kosse; Jacob Sharples
    Abstract: We propose a flexible machine learning (ML) framework for real-time transaction monitoring in high-value payment systems (HVPS), which are a central piece of a country's financial infrastructure. This framework can be used by system operators and overseers to detect anomalous transactions, which - if caused by a cyber attack or an operational outage and left undetected - could have serious implications for the HVPS, its participants and the financial system more broadly. Given the substantial volume of payments settled each day and the scarcity of actual anomalous transactions in HVPS, detecting anomalies resembles an attempt to find a needle in a haystack. Therefore, our framework uses a layered approach. In the first layer, a supervised ML algorithm is used to identify and separate 'typical' payments from 'unusual' payments. In the second layer, only the 'unusual' payments are run through an unsupervised ML algorithm for anomaly detection. We test this framework using artificially manipulated transactions and payments data from the Canadian HVPS. The ML algorithm employed in the first layer achieves a detection rate of 93%, marking a significant improvement over commonly-used econometric models. Moreover, the ML algorithm used in the second layer marks the artificially manipulated transactions as nearly twice as suspicious as the original transactions, proving its effectiveness.
    Keywords: payment systems, transaction monitoring, anomaly detection, machine learning
    JEL: C45 C55 D83 E42
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:bis:biswps:1188&r=
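    Illustrative sketch: the framework's first layer separates 'typical' from 'unusual' payments with a supervised model, and only the unusual payments are passed to an unsupervised anomaly detector. The snippet below wires up that two-layer pipeline with a random forest and an Isolation Forest on synthetic payments; both algorithm choices, the features, and the labels are assumptions, not the paper's specification.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, IsolationForest

      # Layered monitoring sketch: layer 1 (supervised) flags payments that look
      # "unusual" relative to history; layer 2 (unsupervised) scores only those
      # flagged payments for anomalies. Data and algorithms are assumptions.
      rng = np.random.default_rng(2)
      n = 5_000
      X = np.column_stack([
          rng.lognormal(10, 1, n),        # payment value
          rng.integers(0, 24, n),         # hour of day
          rng.integers(0, 50, n),         # sender id
      ])
      usual = (X[:, 0] < np.quantile(X[:, 0], 0.95)) & (X[:, 1] < 20)  # toy "typical" rule

      # Layer 1: supervised split into typical vs unusual payments.
      layer1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, usual)
      unusual_mask = ~layer1.predict(X)

      # Layer 2: unsupervised anomaly scores on the unusual subset only.
      layer2 = IsolationForest(random_state=0).fit(X[unusual_mask])
      scores = layer2.score_samples(X[unusual_mask])   # lower = more anomalous

      print(f"{unusual_mask.sum()} payments routed to layer 2;"
            f" most anomalous score {scores.min():.3f}")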
  13. By: Joaquin Vespignani; Russell Smyth
    Abstract: This paper employs insights from earth science on the financial risk of project developments to present an economic theory of critical minerals. Our theory posits that back-ended critical mineral projects that have unaddressed technical and nontechnical barriers, such as those involving lithium and cobalt, exhibit an additional risk for investors which we term the “back-ended risk premium”. We show that the back-ended risk premium increases the cost of capital and, therefore, has the potential to reduce investment in the sector. We posit that the back-ended risk premium may also reduce the gains in productivity expected from artificial intelligence (AI) technologies in the mining sector. Progress in AI may, however, lessen the back-ended risk premium itself by shortening the duration of mining projects and by reducing the associated risk and, hence, the required rate of return on investment. We conclude that the best way to reduce the costs associated with the energy transition is for governments to invest heavily in AI mining technologies and research.
    Keywords: critical minerals, artificial intelligence, risk premium
    JEL: Q02 Q40 Q50
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2024-30&r=
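    Worked example (assumed numbers, editor's illustration): a back-ended risk premium raises the discount rate applied to cash flows that arrive only after a long development phase, so the project's net present value falls quickly as the premium grows, which is the paper's channel from risk premium to reduced investment.

      # Assumed numbers: a critical-mineral project pays nothing for 8 years of
      # development, then a constant cash flow for 12 years. Adding a back-ended
      # risk premium to the base cost of capital shows how sharply NPV falls.
      def npv(rate, capex, delay_years, cash_flow, operating_years):
          pv = -capex
          for t in range(delay_years + 1, delay_years + operating_years + 1):
              pv += cash_flow / (1 + rate) ** t
          return pv

      capex, delay, cash_flow, life = 1_000.0, 8, 250.0, 12
      for premium in (0.00, 0.02, 0.05):
          rate = 0.08 + premium                      # base cost of capital + risk premium
          print(f"risk premium {premium:.0%}: NPV = {npv(rate, capex, delay, cash_flow, life):8.1f}")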
  14. By: Yusuke Narita (Yale University); Kohei Yata (Yale University)
    Abstract: Algorithms make a growing portion of policy and business decisions. We develop a treatment-effect estimator using algorithmic decisions as instruments for a class of stochastic and deterministic algorithms. Our estimator is consistent and asymptotically normal for well-defined causal effects. A special case of our setup is multidimensional regression discontinuity designs with complex boundaries. We apply our estimator to evaluate the Coronavirus Aid, Relief, and Economic Security Act, which allocated many billions of dollars worth of relief funding to hospitals via an algorithmic rule. The funding is shown to have little effect on COVID-19-related hospital activities. Naive estimates exhibit selection bias.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2391&r=
  15. By: Kea Baret (BETA - Bureau d'Économie Théorique et Appliquée - AgroParisTech - UNISTRA - Université de Strasbourg - Université de Haute-Alsace (UHA) - Université de Haute-Alsace (UHA) Mulhouse - Colmar - UL - Université de Lorraine - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Amélie Barbier-Gauchard (BETA - Bureau d'Économie Théorique et Appliquée - AgroParisTech - UNISTRA - Université de Strasbourg - Université de Haute-Alsace (UHA) - Université de Haute-Alsace (UHA) Mulhouse - Colmar - UL - Université de Lorraine - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Theophilos Papadimitriou (DUTH - Democritus University of Thrace)
    Abstract: Since the reinforcement of the Stability and Growth Pact (1996), the European Commission has closely monitored public finances in the EU member states. A country's failure to comply with the 3% limit rule on the public deficit triggers an audit. In this paper, we present a Machine Learning based forecasting model for compliance with the 3% limit rule. To do so, we use data spanning the period from 2006 to 2018 (a turbulent period including the Global Financial Crisis and the Sovereign Debt Crisis) for the 28 EU Member States. A set of eight features is identified as predictors from 141 variables through a feature selection procedure. The forecasting is performed using Support Vector Machines (SVM). The proposed model reached 91.7% forecasting accuracy and outperformed the Logit model that we used as benchmark.
    Keywords: Fiscal Rules, Fiscal Compliance, Stability and Growth Pact, Machine learning
    Date: 2023–10–26
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03121966&r=
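    Illustrative sketch: the design is a feature-selection step that reduces 141 candidate variables to eight, followed by an SVM classifier of compliance with the 3% rule. The snippet below reproduces that two-step structure on synthetic data; the univariate selector, the RBF kernel, and the data are assumptions, not the paper's procedure.

      import numpy as np
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Two-step design: keep a small subset of fiscal/macro predictors, then
      # classify compliance with an SVM. Data, selector and k=8 are assumptions.
      rng = np.random.default_rng(3)
      n, p = 300, 141                              # country-years, candidate predictors
      X = rng.normal(size=(n, p))
      y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)  # 1 = compliant

      model = make_pipeline(
          StandardScaler(),
          SelectKBest(f_classif, k=8),             # keep the 8 most informative features
          SVC(kernel="rbf"),
      )
      acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
      print(f"cross-validated accuracy: {acc:.2f}")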
  16. By: Zhiyu Cao; Zachary Feinstein
    Abstract: This study explores the innovative use of Large Language Models (LLMs) as analytical tools for interpreting complex financial regulations. The primary objective is to design effective prompts that guide LLMs in distilling verbose and intricate regulatory texts, such as the Basel III capital requirement regulations, into a concise mathematical framework that can be subsequently translated into actionable code. This novel approach aims to streamline the implementation of regulatory mandates within the financial reporting and risk management systems of global banking institutions. A case study was conducted to assess the performance of various LLMs, demonstrating that GPT-4 outperforms other models in processing and collecting necessary information, as well as executing mathematical calculations. The case study utilized numerical simulations with asset holdings -- including fixed income, equities, currency pairs, and commodities -- to demonstrate how LLMs can effectively implement the Basel III capital adequacy requirements.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.06808&r=
  17. By: Felix Haag; Carlo Stingl; Katrin Zerfass; Konstantin Hopf; Thorsten Staake
    Abstract: Information systems (IS) are frequently designed to leverage the negative effect of anchoring bias to influence individuals' decision-making (e.g., by manipulating purchase decisions). Recent advances in Artificial Intelligence (AI) and the explanations of its decisions through explainable AI (XAI) have opened new opportunities for mitigating biased decisions. So far, the potential of these technological advances to overcome anchoring bias remains widely unclear. To this end, we conducted two online experiments with a total of N=390 participants in the context of purchase decisions to examine the impact of AI and XAI-based decision support on anchoring bias. Our results show that AI alone and its combination with XAI help to mitigate the negative effect of anchoring bias. Ultimately, our findings have implications for the design of AI and XAI-based decision support and IS to overcome cognitive biases.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.04972&r=
  18. By: Attila Sarkany (Institute of Economic Studies, Charles University, Prague, Czech Republic & The Czech Academy of Sciences, IITA, Prague, Czech Republic); Lukas Janasek (Institute of Economic Studies, Charles University, Prague, Czech Republic & The Czech Academy of Sciences, IITA, Prague, Czech Republic); Jozef Barunik (Institute of Economic Studies, Charles University, Prague, Czech Republic & The Czech Academy of Sciences, IITA, Prague, Czech Republic)
    Abstract: We develop a novel approach to understanding the dynamic diversification of decision makers with quantile preferences. Because analytical solutions to such complex problems are unavailable, we approximate the behavior of agents with a Quantile Deep Reinforcement Learning (Q-DRL) algorithm. The research provides a new level of understanding of the behavior of economic agents with respect to preferences, captured by quantiles, without assuming a specific utility function or distribution of returns. Furthermore, we challenge traditional diversification methods, which have proved insufficient due to heightened correlations and similar risk features across asset classes, and instead turn to risk-factor investing and portfolio optimization based on risk factors.
    Keywords: Portfolio Management, Quantile Deep Reinforcement Learning, Factor investing, Deep-Learning, Advantage-Actor-Critic
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2024_21&r=
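    Illustrative sketch: the central ingredient of quantile preferences is the quantile ('pinball') loss, which evaluates outcomes asymmetrically around a target quantile tau instead of through an expected utility. The snippet below shows only this loss and its minimiser (the tau-quantile), not the full Q-DRL actor-critic; the simulated returns are assumptions.

      import numpy as np

      # Pinball (quantile) loss: under- and over-shoots of a threshold are penalised
      # asymmetrically; its minimiser over thresholds is the tau-quantile of returns.
      def pinball_loss(returns, threshold, tau):
          error = returns - threshold
          return np.mean(np.maximum(tau * error, (tau - 1) * error))

      rng = np.random.default_rng(4)
      portfolio_returns = rng.normal(0.05, 0.10, 10_000)

      tau = 0.25                                    # focus on the lower quartile of outcomes
      candidates = np.linspace(-0.2, 0.3, 501)
      losses = [pinball_loss(portfolio_returns, c, tau) for c in candidates]
      print("estimated 25% quantile of returns:", candidates[int(np.argmin(losses))])
      print("check via np.quantile            :", np.quantile(portfolio_returns, tau))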
  19. By: Christian Peukert; Florian Abeillon; Jérémie Haese; Franziska Kaiser; Alexander Staub
    Abstract: Human-created works represent critical data inputs to artificial intelligence (AI). Strategic behaviour can play a major role for AI training datasets, be it in limiting access to existing works or in deciding which types of new works to create or whether to create new works at all. We examine creators’ behavioral change when their works become training data for AI. Specifically, we focus on contributors on Unsplash, a popular stock image platform with about 6 million high-quality photos and illustrations. In the summer of 2020, Unsplash launched an AI research program by releasing a dataset of 25,000 images for commercial use. We study contributors’ reactions, comparing contributors whose works were included in this dataset to contributors whose works were not included. Our results suggest that treated contributors left the platform at a higher-than-usual rate and substantially slowed down the rate of new uploads. Professional and more successful photographers react more strongly than amateurs and less successful photographers. We also show that affected users changed the variety and novelty of contributions to the platform, with long-run implications for the stock of works potentially available for AI training. Taken together, our findings highlight the trade-off between the interests of rightsholders and promoting innovation at the technological frontier. We discuss implications for copyright and AI policy.
    Keywords: generative artificial intelligence, training data, licensing, copyright, natural experiment
    JEL: K11 L82 L86
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_11099&r=
  20. By: Maria S. Mavillonio
    Abstract: In this paper, we leverage recent advancements in large language models to extract information from business plans on various equity crowdfunding platforms and predict the success of firm campaigns. Our approach spans a broad and comprehensive spectrum of model complexities, ranging from standard textual analysis to more intricate textual representations (e.g., Transformers), thereby offering a clear view of the challenges in understanding the underlying data. To this end, we build a novel dataset comprising more than 640 equity crowdfunding campaigns from major Italian platforms. Through rigorous analysis, our results indicate a compelling correlation between the use of intricate textual representations and enhanced predictive capacity for identifying successful campaigns.
    Keywords: Crowdfunding, Text Representation, Natural Language Processing, Transformers
    JEL: C45 C53 G23 L26
    Date: 2024–05–01
    URL: http://d.repec.org/n?u=RePEc:pie:dsedps:2024/308&r=
  21. By: Jack Birner; Marco Mazzoli; Eleonora Priori; Pietro Terna
    Abstract: Traditional notions of the production function do not consider the time dimension, appearing thus timeless and instantaneous. We propose an agent-based model accounting for the whole production side of the economy to unfold the production process from its very beginning, when firms receive production orders, to the delivery of the products to the market. In the model, we analyze in a high degree of detail how heterogeneous firms, with labor and capital as productive factors, behave along the entire realization process of their outputs. The main focus covers: i) the heterogeneous duration of firms' production processes, ii) the adaptive strategies they implement to adjust their choices, and iii) the possible failures which may occur due to the duration of production. Our agent-based model is a controlled experiment: we use a virtual central planner mechanism, which acts as the demand side of the economy, to observe which firm individual behaviors and aggregate macroeconomic outcomes emerge in response to its different behaviors in a ceteris paribus environment. Our applied goal, then, is to discuss the role of industrial policy by modeling production processes in detail.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.07103&r=
  22. By: Jiri Kukacka (Czech Academy of Sciences, Institute of Information Theory and Automation, Czechia & Charles University, Faculty of Social Sciences, Institute of Economic Studies); Erik Zila (Czech Academy of Sciences, Institute of Information Theory and Automation, Czechia & Charles University, Faculty of Social Sciences, Institute of Economic Studies)
    Abstract: Financial-macroeconomic agent-based models offer a promising avenue for understanding complex economic interactions, but their use is hindered by challenging empirical estimation. Our paper addresses this gap by constructing a stylized integrated model and estimating its core parameters using US data from 1954 to 2022. To tackle econometric obstacles, including mixed data frequencies, we adapt the simulated method of moments. We then focus on three key interaction channels. The stock market influences the real sector through the wealth effect, which boosts current consumption, and the cost effect, which lowers financing costs for firms. Conversely, the real economy impacts the stock market via the price misperception effect, where economic conditions help approximate the fundamental value of stocks. Our results provide strong statistical support for all three channels, offering novel empirical insights into critical dynamics between the two sectors of the economy.
    Keywords: integrated agent-based model, behavioral finance and macroeconomics, bounded rationality, heuristic switching, simulated method of moments
    JEL: C13 C53 E12 G41 E71
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:fau:wpaper:wp2024_22&r=
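    Illustrative sketch: the parameters are estimated by the simulated method of moments, i.e. chosen so that moments of simulated series match their empirical counterparts under a weighting matrix. The snippet below shows the mechanics on a toy one-parameter AR(1) "model"; the simulator, the two moments, and the identity weighting matrix are placeholders, not the paper's financial-macroeconomic ABM.

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Simulated method of moments, schematically: pick parameters so that simulated
      # moments match empirical ones in a weighted quadratic distance.
      def simulate(rho, n=5_000, seed=5):
          """Toy model: an AR(1) series standing in for the agent-based model."""
          rng = np.random.default_rng(seed)        # common random numbers across evaluations
          eps = rng.normal(size=n)
          x = np.zeros(n)
          for t in range(1, n):
              x[t] = rho * x[t - 1] + eps[t]
          return x

      def moments(x):
          return np.array([np.var(x), np.corrcoef(x[1:], x[:-1])[0, 1]])

      data = simulate(0.7, seed=123)               # pretend this is the observed data
      m_data = moments(data)
      W = np.eye(len(m_data))                      # weighting matrix (identity here)

      def smm_objective(rho):
          g = moments(simulate(rho)) - m_data      # moment discrepancies
          return g @ W @ g

      est = minimize_scalar(smm_objective, bounds=(0.0, 0.99), method="bounded")
      print("true rho = 0.7, SMM estimate =", round(est.x, 3))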
  23. By: Chuanhao Li (Yale University); Runhan Yang (The Chinese University of Hong Kong); Tiankai Li (University of Science and Technology of China); Milad Bafarassat (Sabanci University); Kourosh Sharifi (Sabanci University); Dirk Bergemann (Yale University); Zhuoran Yang (Yale University)
    Abstract: Large Language Models (LLMs) like GPT-4 have revolutionized natural language processing, showing remarkable linguistic proficiency and reasoning capabilities. However, their application in strategic multi-agent decision-making environments is hampered by significant limitations including poor mathematical reasoning, difficulty in following instructions, and a tendency to generate incorrect information. These deficiencies hinder their performance in strategic and interactive tasks that demand adherence to nuanced game rules, long-term planning, exploration in unknown environments, and anticipation of opponents’ moves. To overcome these obstacles, this paper presents a novel LLM agent framework equipped with memory and specialized tools to enhance their strategic decision-making capabilities. We deploy the tools in a number of economically important environments, in particular bilateral bargaining and multi-agent and dynamic mechanism design. We employ quantitative metrics to assess the framework’s performance in various strategic decision-making problems. Our findings establish that our enhanced framework significantly improves the strategic decision-making capability of LLMs. While we highlight the inherent limitations of current LLM models, we demonstrate the improvements through targeted enhancements, suggesting a promising direction for future developments in LLM applications for interactive environments.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2393&r=
  24. By: Nicholas Tenev
    Abstract: Prediction models can improve efficiency by automating decisions such as the approval of loan applications. However, they may inherit bias against protected groups from the data they are trained on. This paper adds counterfactual (simulated) ethnic bias to real data on mortgage application decisions, and shows that this bias is replicated by a machine learning model (XGBoost) even when ethnicity is not used as a predictive variable. Next, several other de-biasing methods are compared: averaging over prohibited variables, taking the most favorable prediction over prohibited variables (a novel method), and jointly minimizing errors as well as the association between predictions and prohibited variables. De-biasing can recover some of the original decisions, but the results are sensitive to whether the bias is effected through a proxy.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.00910&r=
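    Illustrative sketch: two of the compared de-biasing rules can be stated compactly: average the model's approval probability over the values of the prohibited variable, or take the most favourable probability across those values (the novel method). The snippet below implements both on synthetic data; the model, the data, and the column layout are assumptions, not the paper's mortgage setup.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      # Two de-biasing rules: (1) average the approval probability over all values of
      # the prohibited variable, (2) take the most favourable (maximum) probability
      # across those values. Data, model and column layout are assumptions.
      rng = np.random.default_rng(6)
      n = 2_000
      income = rng.normal(size=n)
      group = rng.integers(0, 2, n)                          # prohibited attribute
      approved = (income + 0.8 * group + rng.normal(0, 1, n) > 0).astype(int)  # biased labels

      X = np.column_stack([income, group])
      model = GradientBoostingClassifier().fit(X, approved)

      def debiased_scores(model, X, group_col, group_values):
          probs = []
          for g in group_values:
              Xg = X.copy()
              Xg[:, group_col] = g                           # counterfactually set the group
              probs.append(model.predict_proba(Xg)[:, 1])
          probs = np.vstack(probs)
          return probs.mean(axis=0), probs.max(axis=0)       # (averaging, most-favourable)

      avg_score, best_score = debiased_scores(model, X, group_col=1, group_values=[0, 1])
      print("mean approval score by rule:", avg_score.mean().round(3), best_score.mean().round(3))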
  25. By: S. Borağan Aruoba; Thomas Drechsel
    Abstract: We develop a novel method for the identification of monetary policy shocks. By applying natural language processing techniques to documents that Federal Reserve staff prepare in advance of policy decisions, we capture the Fed's information set. Using machine learning techniques, we then predict changes in the target interest rate conditional on this information set and obtain a measure of monetary policy shocks as the residual. We show that the documents' text contains essential information about the economy which is not captured by numerical forecasts that the staff include in the same documents. The dynamic responses of macro variables to our monetary policy shocks are consistent with the theoretical consensus. Shocks constructed by only controlling for the staff forecasts imply responses of macro variables at odds with theory. We directly link these differences to the information that our procedure extracts from the text over and above information captured by the forecasts.
    JEL: C10 E31 E32 E52 E58
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:32417&r=
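    Illustrative sketch: the shock measure is the residual from a prediction of the target-rate change given the Fed staff's pre-meeting information set. The snippet below sketches that construction with toy documents and a TF-IDF plus ridge predictor; the texts, rate changes, and model are placeholders, not the paper's procedure.

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import Ridge
      from sklearn.pipeline import make_pipeline

      # Schematic shock construction: predict the target-rate change from the text
      # prepared ahead of each decision; the unpredicted component is the shock.
      docs = [
          "inflation pressures building, labor market tight, growth above potential",
          "activity slowing, credit conditions tightening, downside risks dominate",
          "economy near balance, inflation close to objective, risks roughly even",
          "sharp deterioration in financial markets, recession risk has risen",
      ]
      rate_change = np.array([0.25, -0.25, 0.00, -0.50])   # decided at each meeting

      model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
      model.fit(docs, rate_change)

      shocks = rate_change - model.predict(docs)           # residual = policy shock
      for d, s in zip(docs, shocks):
          print(f"shock {s:+.3f}  |  {d[:40]}...")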
  26. By: Eric Ghysels; Jack Morgan
    Abstract: We formulate quantum computing solutions to a large class of dynamic nonlinear asset pricing models using algorithms, in theory exponentially more efficient than classical ones, which leverage the quantum properties of superposition and entanglement. The equilibrium asset pricing solution is a quantum state. We introduce quantum decision-theoretic foundations of ambiguity and model/parameter uncertainty to deal with model selection.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.01479&r=
  27. By: Sylvain BARTHÉLÉMY (Gwenlake, Rennes, France); Virginie GAUTIER (TAC Economics and Univ Rennes, CNRS, CREM – UMR6211, F-35000 Rennes France); Fabien RONDEAU (Univ Rennes, CNRS, CREM – UMR6211, F-35000 Rennes France)
    Abstract: We study the class of congestion games with player-specific payoff functions (Milchtaich, 1996). Focusing on a case where the number of resources is equal to two, we give a short and simple method for identifying the exact number of Nash equilibria in pure strategies. We propose an algorithmic method, first to find one or more Nash equilibria; second, to compare the optimal Nash equilibrium, in which the social cost is minimized, with the worst Nash equilibrium, in which the converse is true; third, to identify the time associated with the computations when the number of players increases.
    Keywords: currency crises, early warning system, neural network, convolutional neural network, SHAP values.
    JEL: F14 F31 F47
    Date: 2024–03
    URL: https://d.repec.org/n?u=RePEc:tut:cremwp:2024-01&r=
  28. By: Thorsten Hens (University of Zurich - Department of Banking and Finance; Norwegian School of Economics and Business Administration (NHH); Swiss Finance Institute); Trine Nordlie (Norwegian School of Economics (NHH))
    Abstract: This study compares OpenAI’s ChatGPT-4 and Google’s Bard with bank experts in determining investors’ risk profiles. We find that for half of the client cases used, there are no statistically significant differences in the risk profiles. Moreover, the economic relevance of the differences is small. However, the LLMs are not good at explaining the risk profiles.
    Keywords: Large Language Models, ChatGPT, Bard, Risk Profiling
    JEL: D8 D14 D81 G51
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2430&r=
  29. By: Adam Hallengreen (Department of Economics, University of Copenhagen); Thomas H. Joergensen (Department of Economics, University of Copenhagen); Annasofie M. Olesen (Department of Economics, University of Copenhagen)
    Abstract: In this guide, we introduce the limited commitment model of dynamic household bargaining behavior over the life cycle. The guide is intended to make the limited commitment model more accessible to researchers who are interested in studying intra-household allocations and divorce over the life cycle. We mitigate computational challenges by providing a flexible base of code that can be customized and extended to the specific use case. The main contribution is to discuss practical implementation details of the model class and provide guidance on how to efficiently solve limited commitment models using state-of-the-art numerical methods. The setup and solution algorithm are presented through a stylized example of dynamic consumption allocation, with accompanying Python and C++ code used to generate all results.
    Keywords: Household Bargaining, limited commitment, life cycle, couples, numerical dynamic programming
    JEL: D13 D15 C61 C63 C78
    Date: 2024–05–16
    URL: http://d.repec.org/n?u=RePEc:kud:kucebi:2409&r=

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.