nep-cmp New Economics Papers
on Computational Economics
Issue of 2024‒06‒24
nineteen papers chosen by



  1. Review of deep learning models for crypto price prediction: implementation and evaluation By Jingyang Wu; Xinyi Zhang; Fangyixuan Huang; Haochen Zhou; Rohitash Chandra
  2. Multivariate macroeconomic forecasting: From DSGE and BVAR to artificial neural networks By Tänzer, Alina
  3. Simulation-Based Benchmarking of Reinforcement Learning Agents for Personalized Retail Promotions By Yu Xia; Sriram Narayanamoorthy; Zhengyuan Zhou; Joshua Mabry
  4. Forecasting Tail Risk via Neural Networks with Asymptotic Expansions By Yuji Sakurai; Zhuohui Chen
  5. Stock picking with machine learning By Wolff, Dominik; Echterling, Fabian
  6. Challenges and Opportunities of Artificial Intelligence and Machine Learning in Circular Economy By Despotovic, Miroslav; Glatschke, Matthias
  7. Research on Credit Risk Early Warning Model of Commercial Banks Based on Neural Network Algorithm By Yu Cheng; Qin Yang; Liyang Wang; Ao Xiang; Jingyu Zhang
  8. FinRobot: An Open-Source AI Agent Platform for Financial Applications using Large Language Models By Hongyang Yang; Boyu Zhang; Neng Wang; Cheng Guo; Xiaoli Zhang; Likun Lin; Junlin Wang; Tianyu Zhou; Mao Guan; Runjia Zhang; Christina Dan Wang
  9. Comprehensive Causal Machine Learning By Michael Lechner; Jana Mareckova
  10. Markowitz Meets Bellman: Knowledge-distilled Reinforcement Learning for Portfolio Management By Gang Hu; Ming Gu
  11. Off-the-Shelf Neural Network Architectures for Forex Time Series Prediction come at a Cost By Theodoros Zafeiriou; Dimitris Kalles
  12. fabOF: A Novel Tree Ensemble Method for Ordinal Prediction By Buczak, Philip
  13. NIFTY Financial News Headlines Dataset By Raeid Saqur; Ken Kato; Nicholas Vinden; Frank Rudzicz
  14. Generating density nowcasts for U.S. GDP growth with deep learning: Bayes by Backprop and Monte Carlo dropout By Krist\'of N\'emeth; D\'aniel Hadh\'azi
  15. Deep Penalty Methods: A Class of Deep Learning Algorithms for Solving High Dimensional Optimal Stopping Problems By Yunfei Peng; Pengyu Wei; Wei Wei
  16. Some models are useful, but for how long?: A decision theoretic approach to choosing when to refit large-scale prediction models By Kentaro Hoffman; Stephen Salerno; Jeff Leek; Tyler McCormick
  17. A Dynamic Model of Performative Human-ML Collaboration: Theory and Empirical Evidence By Tom S\"uhr; Samira Samadi; Chiara Farronato
  18. A Hybrid Deep Learning Framework for Stock Price Prediction Considering the Investor Sentiment of Online Forum Enhanced by Popularity By Huiyu Li; Junhua Hu
  19. Talk to Fed: a Big Dive into FOMC Transcripts By Daniel Aromí; Daniel Heymann

  1. By: Jingyang Wu; Xinyi Zhang; Fangyixuan Huang; Haochen Zhou; Rohitash Chandra
    Abstract: There has been much interest from investors and researchers in accurate cryptocurrency price forecasting models. Deep learning models are prominent machine learning techniques that have transformed various fields and have shown potential for finance and economics. Although various deep learning models have been explored for cryptocurrency price forecasting, it is not clear which models are suitable given the high market volatility. In this study, we review the literature on deep learning for cryptocurrency price forecasting and evaluate novel deep learning models for cryptocurrency price prediction. Our deep learning models include variants of long short-term memory (LSTM) recurrent neural networks, variants of convolutional neural networks (CNNs), and the Transformer model. We evaluate univariate and multivariate approaches for multi-step-ahead prediction of cryptocurrency close prices. Our results show that the univariate LSTM model variants perform best for cryptocurrency predictions. We also carry out volatility analysis on the four cryptocurrencies, which reveals significant fluctuations in their prices throughout the COVID-19 pandemic. Additionally, we investigate the prediction accuracy of two scenarios defined by different training sets for the models. First, we use the pre-COVID-19 datasets to model cryptocurrency close-price forecasting during the early period of COVID-19. Second, we utilise data from the COVID-19 period to predict prices for 2023 to 2024.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.11431&r=
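
    As a concrete illustration of the univariate, multi-step setup described above, here is a minimal sketch (not the authors' code) that maps a window of past close prices to a vector of future ones. The window length, horizon, and synthetic price series are assumptions for illustration only.

```python
# Univariate multi-step-ahead forecasting with an LSTM: a window of past
# close prices predicts the next HORIZON closes. All sizes are assumptions.
import numpy as np
import tensorflow as tf

WINDOW, HORIZON = 30, 5  # assumed: 30 past observations -> 5-step forecast

def make_windows(series, window=WINDOW, horizon=HORIZON):
    """Slice a 1-D price series into (input window, multi-step target) pairs."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window:i + window + horizon])
    return np.array(X)[..., None], np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(HORIZON),            # one output per forecast step
])
model.compile(optimizer="adam", loss="mse")

prices = np.cumsum(np.random.randn(1000)) + 100.0  # stand-in for close prices
X, y = make_windows(prices)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```
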
  2. By: Tänzer, Alina
    Abstract: This paper contributes a multivariate forecasting comparison between structural models and machine-learning-based tools. Specifically, a fully connected feed-forward nonlinear autoregressive neural network (ANN) is contrasted with a well-established dynamic stochastic general equilibrium (DSGE) model, a Bayesian vector autoregression (BVAR) using optimized priors, as well as Greenbook and SPF forecasts. Model estimation and forecasting are based on an expanding-window scheme using quarterly U.S. real-time data (1964Q2:2020Q3) for 8 macroeconomic time series (GDP, inflation, federal funds rate, spread, consumption, investment, wage, hours worked), allowing for up to 8-quarter-ahead forecasts. The results show that the BVAR improves forecasts compared to the DSGE model; however, there is evidence for an overall improvement of predictions when relying on ANNs, or when including them in a weighted average. In particular, ANN-based inflation forecasts improve other predictions by up to 50%. These results indicate that nonlinear data-driven ANNs are a useful method for macroeconomic forecasting.
    Keywords: Artificial Intelligence, Machine Learning, Neural Networks, Forecast Comparison/Competition, Macroeconomic Forecasting, Crises Forecasting, Inflation Forecasting, Interest Rate Forecasting, Production, Saving, Consumption and Investment Forecasting
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:zbw:imfswp:295733&r=
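
    The expanding-window scheme is the part of the design a short sketch can make concrete. In the sketch below, sklearn's MLPRegressor stands in for the paper's feed-forward ANN; the series, lag order, and horizon are assumptions, not the paper's real-time dataset.

```python
# Expanding-window forecast evaluation: refit on all data up to each origin,
# predict one step ahead, and score the accumulated errors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = rng.standard_normal(200).cumsum()      # stand-in for a quarterly series
LAGS, H, START = 4, 1, 100                 # AR order, horizon, first origin

def lag_matrix(series, lags):
    """Build an autoregressive design matrix and its targets."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

errors = []
for origin in range(START, len(y) - H + 1):
    train = y[:origin]                     # the window expands each step
    X, t = lag_matrix(train, LAGS)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, t)
    x_new = train[-LAGS:].reshape(1, -1)
    errors.append(y[origin + H - 1] - model.predict(x_new)[0])

print("RMSFE:", np.sqrt(np.mean(np.square(errors))))
```
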
  3. By: Yu Xia; Sriram Narayanamoorthy; Zhengyuan Zhou; Joshua Mabry
    Abstract: The development of open benchmarking platforms could greatly accelerate the adoption of AI agents in retail. This paper presents comprehensive simulations of customer shopping behaviors for the purpose of benchmarking reinforcement learning (RL) agents that optimize coupon targeting. The difficulty of this learning problem is largely driven by the sparsity of customer purchase events. We trained agents using offline batch data comprising summarized customer purchase histories to help mitigate this effect. Our experiments revealed that contextual bandit and deep RL methods that are less prone to over-fitting the sparse reward distributions significantly outperform static policies. This study offers a practical framework for simulating AI agents that optimize the entire retail customer journey. It aims to inspire the further development of simulation tools for retail AI systems.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.10469&r=
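
    A minimal contextual-bandit agent of the kind benchmarked above can be sketched in a few lines. The LinUCB rule and the simulated sparse-purchase response below are illustrative assumptions, not the paper's shopping simulator.

```python
# LinUCB contextual bandit over coupon "arms" with a sparse purchase reward.
import numpy as np

rng = np.random.default_rng(1)
D, ARMS, ALPHA = 5, 3, 1.0                   # context dim, #coupons, UCB width
A = [np.eye(D) for _ in range(ARMS)]         # per-arm ridge Gram matrices
b = [np.zeros(D) for _ in range(ARMS)]
true_theta = rng.standard_normal((ARMS, D))  # hidden customer response model
total = 0.0

for t in range(5000):
    x = rng.standard_normal(D)               # customer context features
    ucb = []
    for a in range(ARMS):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]
        ucb.append(theta_hat @ x + ALPHA * np.sqrt(x @ A_inv @ x))
    a = int(np.argmax(ucb))
    # purchases are rare, mirroring the sparse rewards discussed above
    reward = float(rng.random() < 0.05 + 0.02 * np.tanh(true_theta[a] @ x))
    A[a] += np.outer(x, x)
    b[a] += reward * x
    total += reward

print("purchase rate under LinUCB:", total / 5000)
```
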
  4. By: Yuji Sakurai; Zhuohui Chen
    Abstract: We propose a new machine-learning-based approach for forecasting Value-at-Risk (VaR), named CoFiE-NN, in which a neural network (NN) is combined with Cornish-Fisher expansions (CoFiE). CoFiE-NN can capture nonlinear dynamics of high-order statistical moments thanks to the flexibility of an NN, while maintaining interpretability of the outputs by using CoFiE, a well-known statistical formula. First, we explain CoFiE-NN. Second, we compare the forecasting performance of CoFiE-NN with three conventional models using both Monte Carlo simulation and real data. To do so, we employ Long Short-Term Memory (LSTM) as our main specification of the NN. We then apply CoFiE-NN to different asset classes, with a focus on foreign exchange markets. We report that CoFiE-NN outperforms the conventional EGARCH-t model and the Extreme Value Theory model on several statistical criteria for both the simulated data and the real data. Finally, we introduce a new empirical proxy for tail risk, named the tail risk ratio, under CoFiE-NN. We discover that only 20 percent of tail risk dynamics across 22 currencies is explained by one common factor. This contrasts with the fact that 60 percent of volatility dynamics across the same currencies is explained by one common factor.
    Keywords: Machine learning; Value-at-Risk; Neural Network
    Date: 2024–05–10
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2024/099&r=
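
    The Cornish-Fisher expansion at the heart of CoFiE is a standard formula, so the mapping from the first four moments to a VaR quantile can be written down directly. The sketch below shows only that mapping; producing the moment forecasts with an LSTM, as the paper does, is omitted.

```python
# Fourth-order Cornish-Fisher adjustment of the Gaussian quantile: a standard
# formula mapping (mean, vol, skewness, excess kurtosis) to a VaR quantile.
from scipy.stats import norm

def cornish_fisher_var(mu, sigma, skew, exkurt, alpha=0.01):
    """Return the alpha-quantile (VaR) after adjusting the normal quantile z
    for skewness and excess kurtosis via the Cornish-Fisher expansion."""
    z = norm.ppf(alpha)
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * exkurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return mu + sigma * z_cf

# Example: 1% VaR of a left-skewed, fat-tailed daily return distribution
print(cornish_fisher_var(mu=0.0, sigma=0.01, skew=-0.5, exkurt=3.0))
```
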
  5. By: Wolff, Dominik; Echterling, Fabian
    Abstract: We analyze machine learning algorithms for stock selection. Our study builds on weekly data for the historical constituents of the S&P 500 over the period from January 1999 to March 2021 and on typical equity factors, additional firm fundamentals, and technical indicators. A variety of machine learning models are trained on the binary classification task of predicting whether a specific stock outperforms or underperforms the cross-sectional median return over the subsequent week. We analyze weekly trading strategies that invest in stocks with the highest predicted outperformance probability. Our empirical results show substantial and significant outperformance of machine learning-based stock selection models compared to an equally weighted benchmark. Interestingly, we find that simpler regularized logistic regression models perform similarly well compared to more complex machine learning models. The results are robust when applied to the STOXX Europe 600 as an alternative asset universe.
    Date: 2024–05–28
    URL: https://d.repec.org/n?u=RePEc:dar:wpaper:145491&r=
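
    A stripped-down version of the classification-then-ranking pipeline described above, using the regularized logistic regression the authors find competitive. The synthetic features and in-sample evaluation are simplifications for illustration.

```python
# Binary setup: predict whether a stock beats the cross-sectional median next
# week, then hold the names with the highest predicted probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_STOCKS, N_FEATURES, TOP = 500, 20, 50
X = rng.standard_normal((N_STOCKS, N_FEATURES))   # stand-in factor exposures
ret = X[:, 0] * 0.01 + rng.standard_normal(N_STOCKS) * 0.05  # next-week returns
y = (ret > np.median(ret)).astype(int)            # outperforms the median?

clf = LogisticRegression(penalty="l2", C=0.1).fit(X, y)   # regularized logit
prob = clf.predict_proba(X)[:, 1]                 # in-sample, for brevity
portfolio = np.argsort(prob)[-TOP:]               # highest-probability names
print("mean return, picked vs. all:", ret[portfolio].mean(), ret.mean())
```
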
  6. By: Despotovic, Miroslav (University of Applied Sciences Kufstein, Tirol, Austria); Glatschke, Matthias
    Abstract: The inherent "take-make-waste" pattern of the current linear economy is a major contributor to exceeding planetary boundaries. The transition to a circular economy (CE) and the associated challenges and opportunities require fast, innovative solutions. Artificial Intelligence (AI) and Machine Learning (ML) can play a key role in the transition to a CE paradigm by overcoming the challenges of increasing material extraction and use and creating a far more environmentally sustainable future. The objective of this article is to provide a status quo on the use of AI and ML in the transition to CE and to discuss the potential and challenges in this regard. A literature survey on Google Scholar, using targeted queries with predefined keywords and search operators, revealed that the number of experimental scientific contributions on AI and ML in the CE has increased significantly in recent years. As the number of research articles increased, so did the number of ML methods and algorithms covered in experimental CE publications. In addition, we found 84% more AI- and ML-affiliated research articles on CE in Google Scholar published since 2020, compared to the total number of earlier entries, and 55% more articles since 2023, compared to the respective articles up to 2023. The status quo of the scientific contributions shows that AI and ML are seen as extremely useful tools for the CE, and their use is steadily increasing.
    Date: 2024–05–26
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:6qmhf&r=
  7. By: Yu Cheng; Qin Yang; Liyang Wang; Ao Xiang; Jingyu Zhang
    Abstract: In globalized financial markets, commercial banks face an escalating magnitude of credit risk, which places heightened demands on the security of bank assets and financial stability. This study harnesses advanced neural network techniques, notably the backpropagation (BP) neural network, to develop a novel early-warning model for credit risk in commercial banks. The paper first reviews conventional financial risk early-warning models, such as ARMA, ARCH, and logistic regression models, critically analyzing their real-world applications. It then elaborates on the construction of the BP neural network model, encompassing network architecture design, activation function selection, parameter initialization, and objective function construction. A comparative analysis demonstrates the superiority of neural network models for early warning of credit risk in commercial banks. The experiments use data from specific banks to validate the model's predictive accuracy and practicality. The findings show that the model effectively enhances the foresight and precision of credit risk management.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.10762&r=
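
    Since the core of the model is a plain backpropagation network, a from-scratch sketch can show the forward and backward passes it relies on. The toy borrower data and layer sizes below are assumptions, not the paper's specification.

```python
# Minimal backpropagation (BP) network: one sigmoid hidden layer trained by
# gradient descent on cross-entropy, for a toy binary default-risk task.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))            # borrower features (toy data)
y = (X[:, 0] - X[:, 1] > 0).astype(float)    # 1 = default, toy ground truth

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.standard_normal((4, 8)) * 0.1; b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.1; b2 = np.zeros(1)
lr = 0.5

for epoch in range(500):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # backward pass: gradients of mean cross-entropy w.r.t. each parameter
    dz2 = (p - y)[:, None] / len(y)
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = dz2 @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())
```
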
  8. By: Hongyang Yang; Boyu Zhang; Neng Wang; Cheng Guo; Xiaoli Zhang; Likun Lin; Junlin Wang; Tianyu Zhou; Mao Guan; Runjia Zhang; Christina Dan Wang
    Abstract: As financial institutions and professionals increasingly incorporate Large Language Models (LLMs) into their workflows, substantial barriers, including proprietary data and specialized knowledge, persist between the finance sector and the AI community. These challenges impede the AI community's ability to enhance financial tasks effectively. Acknowledging financial analysis's critical role, we aim to devise financial-specialized LLM-based toolchains and democratize access to them through open-source initiatives, promoting wider AI adoption in financial decision-making. In this paper, we introduce FinRobot, a novel open-source AI agent platform supporting multiple financially specialized AI agents, each powered by an LLM. Specifically, the platform consists of four major layers: 1) a Financial AI Agents layer that formulates Financial Chain-of-Thought (CoT) by breaking sophisticated financial problems down into logical sequences; 2) a Financial LLM Algorithms layer that dynamically configures appropriate model application strategies for specific tasks; 3) an LLMOps and DataOps layer that produces accurate models by applying training/fine-tuning techniques and task-relevant data; 4) a Multi-source LLM Foundation Models layer that integrates various LLMs and enables the above layers to access them directly. Finally, FinRobot provides hands-on tools for both professional-grade analysts and laypersons to utilize powerful AI techniques for advanced financial analysis. We open-source FinRobot at https://github.com/AI4Finance-Foundation/FinRobot.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.14767&r=
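
    The four-layer design can be pictured as a thin dispatch skeleton. Everything in the sketch below, including every class and method name, is a hypothetical illustration rather than FinRobot's actual API (see the repository for the real interfaces); the LLMOps and DataOps layer is omitted.

```python
# Hypothetical skeleton, NOT FinRobot's real API: an agent formulates a
# chain-of-thought prompt, an algorithms layer picks a model for the task,
# and a foundation-models layer serves the completion.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class FoundationModels:
    """Layer 4: multi-source LLMs behind one completion interface."""
    models: Dict[str, Callable[[str], str]]
    def complete(self, name: str, prompt: str) -> str:
        return self.models[name](prompt)

@dataclass
class AlgorithmsLayer:
    """Layer 2: picks a model application strategy for a given task."""
    routing: Dict[str, str]
    def model_for(self, task: str) -> str:
        return self.routing.get(task, "general")

@dataclass
class FinancialAgent:
    """Layer 1: breaks a financial problem into a logical sequence (CoT)."""
    algos: AlgorithmsLayer
    llms: FoundationModels
    def answer(self, task: str, question: str) -> str:
        cot = ["extract the relevant figures", "compute the key ratios",
               "draw a conclusion"]
        prompt = question + "\nReason step by step:\n" + "\n".join(
            f"{i + 1}. {step}" for i, step in enumerate(cot))
        return self.llms.complete(self.algos.model_for(task), prompt)

# toy wiring with a stub "LLM"
llms = FoundationModels({"general": lambda p: f"[model output for: {p[:30]}...]"})
agent = FinancialAgent(AlgorithmsLayer({"valuation": "general"}), llms)
print(agent.answer("valuation", "Is ACME fairly valued?"))
```
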
  9. By: Michael Lechner; Jana Mareckova
    Abstract: Uncovering causal effects at various levels of granularity provides substantial value to decision makers. Comprehensive machine learning approaches to causal effect estimation make it possible to use a single causal machine learning approach for estimation and inference of causal mean effects at all levels of granularity. Focusing on selection-on-observables, this paper compares three such approaches: the modified causal forest (mcf), the generalized random forest (grf), and double machine learning (dml). It also provides proven theoretical guarantees for the mcf and compares the theoretical properties of the approaches. The findings indicate that dml-based methods excel for average treatment effects at the population level (ATE) and group level (GATE) with few groups, when selection into treatment is not too strong. However, for finer causal heterogeneity, explicitly outcome-centred forest-based approaches are superior. The mcf has three additional benefits: (i) it is the most robust estimator in cases when dml-based approaches underperform because of substantial selectivity; (ii) it is the best estimator for GATEs when the number of groups gets larger; and (iii) it is the only estimator that is internally consistent, in the sense that low-dimensional causal ATEs and GATEs are obtained as aggregates of finer-grained causal parameters.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.10198&r=
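
    Of the three estimators compared, dml is the easiest to sketch: cross-fitted residual-on-residual regression in a partially linear model, here with random forests as nuisance learners. The data are synthetic, and this is not the mcf, grf, or dml software.

```python
# Cross-fitted double machine learning for the ATE in a partially linear
# model under selection-on-observables. True effect is 1 by construction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 5))
D = (X[:, 0] + rng.standard_normal(n) > 0).astype(float)  # selection on X
Y = 1.0 * D + X[:, 0] + rng.standard_normal(n)            # true ATE = 1

res_y, res_d = np.zeros(n), np.zeros(n)
for train, test in KFold(5, shuffle=True, random_state=0).split(X):
    my = RandomForestRegressor(n_estimators=200).fit(X[train], Y[train])
    md = RandomForestRegressor(n_estimators=200).fit(X[train], D[train])
    res_y[test] = Y[test] - my.predict(X[test])   # out-of-fold residuals
    res_d[test] = D[test] - md.predict(X[test])

ate = (res_d @ res_y) / (res_d @ res_d)           # residual-on-residual slope
print("DML ATE estimate:", round(ate, 3))
```
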
  10. By: Gang Hu; Ming Gu
    Abstract: Investment portfolios, central to finance, balance potential returns and risks. This paper introduces a hybrid approach combining Markowitz's portfolio theory with reinforcement learning, utilizing knowledge distillation for training agents. In particular, our proposed method, called KDD (Knowledge Distillation DDPG), consists of two training stages: a supervised learning stage and a reinforcement learning stage. The trained agents optimize portfolio assembly. A comparative analysis against standard financial models and AI frameworks, using metrics like returns, the Sharpe ratio, and nine evaluation indices, reveals our model's superiority. It notably achieves the highest yield and a Sharpe ratio of 2.03, ensuring top profitability with the lowest risk in comparable return scenarios.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.05449&r=
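
    Both ingredients of the hybrid are standard and compact enough to state directly: tangency (maximum-Sharpe) weights from Markowitz theory, and the annualized Sharpe ratio used in the evaluation. The returns and annualization factor below are assumptions.

```python
# Markowitz tangency weights and the annualized Sharpe ratio on toy returns.
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((252, 4)) * 0.01 + 0.001   # toy daily returns, 4 assets

# tangency (maximum-Sharpe) weights with zero risk-free rate:
# proportional to inv(Sigma) @ mu, normalized to full investment
mu, Sigma = R.mean(axis=0), np.cov(R.T)
w = np.linalg.solve(Sigma, mu)
w /= w.sum()

port = R @ w
sharpe = port.mean() / port.std() * np.sqrt(252)   # annualized Sharpe ratio
print("weights:", np.round(w, 3), "| Sharpe:", round(float(sharpe), 2))
```
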
  11. By: Theodoros Zafeiriou; Dimitris Kalles
    Abstract: Our study focuses on comparing the performance and resource requirements of different Long Short-Term Memory (LSTM) neural network architectures and a specialized ANN architecture for forex market prediction. We analyze the execution time of the models as well as the resources they consume, such as memory and computational power. Our aim is to demonstrate that the specialized architecture not only achieves better results in forex market prediction but also executes using fewer resources and in a shorter time frame than the LSTM architectures. This comparative analysis provides significant insight into the suitability of these two types of architectures for time series prediction in the forex market environment.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.10679&r=
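
    The kind of resource comparison the study performs can be sketched as parameter counts plus forward-pass latency. The LSTM and dense network below are generic stand-ins, not the authors' specialized architecture.

```python
# Compare parameter counts and average forward-pass latency of two models.
import time
import torch
import torch.nn as nn

x = torch.randn(64, 30, 8)                  # batch, window, features (assumed)

lstm = nn.LSTM(input_size=8, hidden_size=64, batch_first=True)
dense = nn.Sequential(nn.Flatten(), nn.Linear(30 * 8, 32), nn.ReLU(),
                      nn.Linear(32, 1))     # lightweight stand-in network

def bench(model, inp, reps=100):
    """Return (#parameters, mean seconds per forward pass)."""
    params = sum(p.numel() for p in model.parameters())
    with torch.no_grad():
        t0 = time.perf_counter()
        for _ in range(reps):
            model(inp)
        return params, (time.perf_counter() - t0) / reps

print("LSTM  params/latency:", bench(lstm, x))
print("dense params/latency:", bench(dense, x))
```
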
  12. By: Buczak, Philip
    Abstract: Ordinal responses commonly occur in the life sciences, e.g., through school grades or rating scales. Where parametric statistical models have traditionally been used, machine learning (ML) methods such as random forest (RF) are increasingly employed for ordinal prediction. As RF does not account for ordinality, several extensions have been proposed. A promising approach lies in assigning optimized numeric scores to the ordinal response categories and using regression RF. However, these optimization procedures are computationally expensive and have been shown to yield only situational benefit. In this work, I propose Frequency Adjusted Borders Ordinal Forest (fabOF), a novel tree ensemble method for ordinal prediction that forgoes extensive optimization while offering improved predictive performance in a simulation study and an illustrative example of student performance.
    Date: 2024–05–15
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:h8t4p&r=
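
    In the spirit of the approach, though not the exact fabOF procedure: fit a regression forest on fixed numeric scores 1..K, then cut its continuous predictions at borders placed at the training categories' cumulative frequencies, so predicted class shares track the observed ones.

```python
# Ordinal prediction via regression RF with frequency-adjusted cut borders
# (illustrative approximation, not the published fabOF algorithm).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
latent = X[:, 0] + 0.5 * rng.standard_normal(500)
y = np.digitize(latent, [-1.0, 0.0, 1.0]) + 1      # ordinal classes 1..4

rf = RandomForestRegressor(n_estimators=300, oob_score=True,
                           random_state=0).fit(X, y.astype(float))
cum_freq = np.cumsum(np.bincount(y)[1:]) / len(y)  # observed class shares
borders = np.quantile(rf.oob_prediction_, cum_freq[:-1])  # adjusted borders
pred = np.digitize(rf.predict(X), borders) + 1     # back to classes 1..4
print("agreement with training labels:", (pred == y).mean())
```
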
  13. By: Raeid Saqur; Ken Kato; Nicholas Vinden; Frank Rudzicz
    Abstract: We introduce and make publicly available the NIFTY Financial News Headlines dataset, designed to facilitate and advance research in financial market forecasting using large language models (LLMs). This dataset comprises two distinct versions tailored for different modeling approaches: (i) NIFTY-LM, which targets supervised fine-tuning (SFT) of LLMs with an auto-regressive, causal language-modeling objective, and (ii) NIFTY-RL, formatted specifically for alignment methods (like reinforcement learning from human feedback (RLHF)) to align LLMs via rejection sampling and reward modeling. Each dataset version provides curated, high-quality data incorporating comprehensive metadata, market indices, and deduplicated financial news headlines systematically filtered and ranked to suit modern LLM frameworks. We also include experiments demonstrating some applications of the dataset in tasks like stock price movement prediction and the role of LLM embeddings in information acquisition/richness. The NIFTY dataset, along with utilities (like systematically truncating a prompt's context length), is available on Hugging Face at https://huggingface.co/datasets/raeidsaqur/NIFTY.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.09747&r=
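
    A loading sketch, assuming the default configuration; the dataset path comes from the Hugging Face URL above, and the exact schema should be checked on the dataset card.

```python
# Loading the dataset from the Hugging Face Hub; configuration and split
# names are assumptions -- consult the dataset card for the actual schema.
from datasets import load_dataset

nifty = load_dataset("raeidsaqur/NIFTY")  # path taken from the URL above
print(nifty)                              # shows splits, columns, row counts
```
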
  14. By: Krist\'of N\'emeth; D\'aniel Hadh\'azi
    Abstract: Recent results in the literature indicate that artificial neural networks (ANNs) can outperform the dynamic factor model (DFM) in terms of the accuracy of GDP nowcasts. Compared to the DFM, the performance advantage of these highly flexible, nonlinear estimators is particularly evident in periods of recessions and structural breaks. From the perspective of policy-makers, however, nowcasts are most useful when they are conveyed with the uncertainty attached to them. While the DFM and other classical time series approaches analytically derive the predictive (conditional) distribution for GDP growth, ANNs can only produce point nowcasts based on their default training procedure (backpropagation). To fill this gap, we adapt, for the first time in the literature, two different deep learning algorithms that enable ANNs to generate density nowcasts for U.S. GDP growth: Bayes by Backprop and Monte Carlo dropout. The accuracy of point nowcasts, defined as the mean of the empirical predictive distribution, is evaluated relative to a naive constant-growth model for GDP and a benchmark DFM specification. Using a 1D CNN as the underlying ANN architecture, both algorithms outperform those benchmarks during the evaluation period (2012:Q1 -- 2022:Q4). Furthermore, both algorithms are able to dynamically adjust the location (mean), scale (variance), and shape (skew) of the empirical predictive distribution. The results indicate that both Bayes by Backprop and Monte Carlo dropout can effectively augment the scope and functionality of ANNs, rendering them fully compatible and competitive alternatives to classical time series approaches.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.15579&r=
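
    The Monte Carlo dropout half of the paper reduces to one idea: keep dropout active at prediction time and treat repeated stochastic forward passes as draws from the predictive distribution. In the sketch below a small dense network stands in for the paper's 1D CNN, and all data are synthetic.

```python
# Monte Carlo dropout: dropout stays on at inference (training=True), so
# repeated forward passes yield an empirical predictive distribution.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(20,))
h = tf.keras.layers.Dense(64, activation="relu")(inputs)
h = tf.keras.layers.Dropout(0.2)(h)
outputs = tf.keras.layers.Dense(1)(h)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

X = np.random.randn(500, 20).astype("float32")     # stand-in predictors
y = X @ np.random.randn(20, 1) + np.random.randn(500, 1)
model.fit(X, y, epochs=5, verbose=0)

# density nowcast for one observation: 1000 dropout-perturbed passes
draws = np.stack([model(X[:1], training=True).numpy() for _ in range(1000)])
print("mean:", draws.mean(), "std:", draws.std())  # location and scale
```
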
  15. By: Yunfei Peng; Pengyu Wei; Wei Wei
    Abstract: We propose a deep learning algorithm for high-dimensional optimal stopping problems. Our method is inspired by the penalty method for solving free boundary PDEs. Within our approach, the penalized PDE is approximated using the Deep BSDE framework proposed by E, Han, and Jentzen (2017), which leads us to coin the term "Deep Penalty Method (DPM)" for our algorithm. We show that the error of the DPM can be bounded by the loss function and $O(\frac{1}{\lambda})+O(\lambda h) +O(\sqrt{h})$, where $h$ is the step size in time and $\lambda$ is the penalty parameter. This finding emphasizes the need for careful consideration when selecting the penalty parameter and suggests that the discretization error converges at a rate of order $\frac{1}{2}$. We validate the efficacy of the DPM through numerical tests on a high-dimensional optimal stopping model in the area of American option pricing. The numerical tests confirm both the accuracy and the computational efficiency of our proposed algorithm.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.11392&r=
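
    Balancing the two $\lambda$-dependent terms of the stated bound makes the order-1/2 claim concrete:

```latex
% The O(1/lambda) term falls and the O(lambda h) term rises in lambda,
% so equating them gives the error-minimizing choice of penalty parameter:
\[
\frac{1}{\lambda} \sim \lambda h
\;\Longrightarrow\;
\lambda^* \sim h^{-1/2},
\qquad
O\!\Big(\frac{1}{\lambda^*}\Big) + O(\lambda^* h) + O(\sqrt{h}) = O(\sqrt{h}),
\]
```

    so with $\lambda$ of order $h^{-1/2}$ the overall discretization error is $O(\sqrt{h})$, i.e., convergence at rate $\frac{1}{2}$, as the abstract states.
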
  16. By: Kentaro Hoffman; Stephen Salerno; Jeff Leek; Tyler McCormick
    Abstract: Large-scale prediction models (typically using tools from artificial intelligence, AI, or machine learning, ML) are increasingly ubiquitous across a variety of industries and scientific domains. Such methods are often paired with detailed data from sources such as electronic health records, wearable sensors, and omics data (high-throughput technology used to understand biology). Despite their utility, implementing AI and ML tools at the scale necessary to work with this data introduces two major challenges. First, it can cost tens of thousands of dollars to train a modern AI/ML model at scale. Second, once the model is trained, its predictions may become less relevant as patient and provider behavior change, and predictions made for one geographical area may be less accurate for another. These two challenges raise a fundamental question: how often should you refit the AI/ML model to optimally trade off between cost and relevance? Our work provides a framework for making decisions about when to refit AI/ML models when the goal is to maintain valid statistical inference (e.g., estimating a treatment effect in a clinical trial). Drawing on portfolio optimization theory, we treat the decision of recalibrating versus refitting the model as a choice between "investing" in one of two "assets." One asset, recalibrating the model based on another model, is quick and relatively inexpensive but bears uncertainty from sampling and the possibility that the other model is not relevant to current circumstances. The other asset, refitting the model, is costly but removes the irrelevance concern (though not the risk of sampling error). We explore the balancing act between these two potential investments in this paper.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.13926&r=
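
    A toy numerical illustration of the trade-off, with assumptions throughout (this is not the paper's decision-theoretic framework): recalibration is cheap but leaves a drift-induced bias, while refitting is expensive but removes it.

```python
# Compare the cumulative cost of recalibrating every period (cheap, but a
# drift-induced bias keeps accruing squared-error loss) against one full
# refit (expensive, but the bias is reset). All numbers are made up.
import numpy as np

def policy_costs(drift, refit_cost=10.0, recal_cost=1.0, horizon=12):
    bias = drift * np.arange(1, horizon + 1)   # drift a recalibration can't fix
    recalibrate = recal_cost * horizon + np.sum(bias ** 2)
    refit = refit_cost                          # bias removed, no residual loss
    return recalibrate, refit

for drift in (0.05, 0.2, 0.5):
    recal, refit = policy_costs(drift)
    choice = "refit" if refit < recal else "recalibrate"
    print(f"drift={drift}: recalibrate={recal:.1f} refit={refit:.1f} -> {choice}")
```
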
  17. By: Tom S\"uhr; Samira Samadi; Chiara Farronato
    Abstract: Machine learning (ML) models are increasingly used in various applications, from recommendation systems in e-commerce to diagnosis prediction in healthcare. In this paper, we present a novel dynamic framework for thinking about the deployment of ML models in a performative, human-ML collaborative system. In our framework, the introduction of ML recommendations changes the data-generating process of human decisions, which are only a proxy for the ground truth and which are then used to train future versions of the model. We show that this dynamic process can in principle converge to different stable points, i.e., points where the ML model and the Human+ML system have the same performance. Some of these stable points are suboptimal with respect to the actual ground truth. We conduct an empirical user study with 1,408 participants to showcase this process. In the study, humans solve instances of the knapsack problem with the help of machine learning predictions. This is an ideal setting because we can see how ML models learn to imitate human decisions and how this learning process converges to a stable point. We find that for many levels of ML performance, humans can improve on the ML predictions to dynamically reach an equilibrium performance that is around 92% of the maximum knapsack value. We also find that the equilibrium performance could be even higher if humans rationally followed the ML recommendations. Finally, we test whether monetary incentives can increase the quality of human decisions, but we fail to find any positive effect. Our results have practical implications for the deployment of ML models in contexts where human decisions may deviate from the indisputable ground truth.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.13753&r=
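
    For reference, the ground truth against which the study's 92% equilibrium performance is measured, the maximum knapsack value, comes from the standard dynamic program for the 0/1 knapsack problem:

```python
# Standard 0/1 knapsack dynamic program: best achievable total value.
def knapsack(values, weights, capacity):
    """Maximum value using each item at most once, integer weights."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # descending: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```
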
  18. By: Huiyu Li; Junhua Hu
    Abstract: Stock price prediction has always been a difficult task for forecasters. With cutting-edge deep learning techniques, stock price prediction based on investor sentiment extracted from online forums has become feasible. We propose a novel hybrid deep learning framework for predicting stock prices. The framework leverages the XLNet model to analyze the sentiment conveyed in user posts on online forums, combines these sentiments with a post-popularity factor to compute daily group sentiments, and integrates this information with stock technical indicators into an improved BiLSTM-highway model for stock price prediction. Through a series of comparative experiments involving four stocks on the Chinese stock market, we demonstrate that the hybrid framework effectively predicts stock prices. This study reveals the necessity of analyzing investors' textual views for stock price prediction.
    Date: 2024–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2405.10584&r=
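
    The popularity-enhanced aggregation step can be sketched directly: weight each post's sentiment score by its popularity and average within the day. The column names and toy data below are assumptions.

```python
# Daily group sentiment as a popularity-weighted average of per-post scores.
import pandas as pd

posts = pd.DataFrame({
    "date":       ["2024-05-01"] * 3 + ["2024-05-02"] * 2,
    "sentiment":  [0.8, -0.2, 0.5, -0.6, -0.1],  # e.g., from a sentiment model
    "popularity": [120, 10, 45, 200, 15],        # views/replies per post
})

def weighted_daily(df):
    """Popularity-weighted mean sentiment within one trading day."""
    w = df.popularity / df.popularity.sum()
    return (w * df.sentiment).sum()

daily = posts.groupby("date").apply(weighted_daily)
print(daily)   # one group-sentiment value per day, ready to join with indicators
```
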
  19. By: Daniel Aromí (IIEP UBA-Conicet/FCE UBA); Daniel Heymann (IIEP UBA-Conicet/FCE UBA)
    Abstract: We propose a method to generate “synthetic surveys” that shed light on policymakers’ perceptions and narratives. This exercise is implemented using 80 time-stamped Large Language Models (LLMs) fine-tuned on FOMC meeting transcripts. Given a text input, the fine-tuned models identify highly likely responses for the corresponding FOMC meeting. We evaluate this tool on three different tasks: sentiment analysis, evaluation of transparency in central bank communication, and characterization of policymaking narratives. Our analysis covers the housing bubble and the subsequent Great Recession (2003-2012). For the first task, LLMs are prompted to generate phrases that describe economic conditions. The resulting output is verified to transmit policymakers’ information regarding macroeconomic and financial dynamics. To analyze transparency, we compare the content of each FOMC minutes to content generated synthetically by the corresponding fine-tuned LLM. The evaluation suggests the tone of each meeting is transmitted adequately by the corresponding minutes. In the third task, we show that LLMs produce insightful depictions of evolving policymaking narratives. This analysis reveals relevant features of narratives, such as goals, perceived threats, identified macroeconomic drivers, categorizations of the state of the economy, and manifestations of emotional states.
    Keywords: Monetary policy, large language models, narratives, transparency.
    Date: 2024–05
    URL: https://d.repec.org/n?u=RePEc:aoz:wpaper:323&r=
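
    The “synthetic survey” query step amounts to sampling completions from a meeting-specific fine-tuned model. In the sketch below the model path is a placeholder; the paper fine-tunes 80 time-stamped models, one per meeting.

```python
# Sampling "synthetic survey" responses from one fine-tuned, time-stamped LM.
# The model path is a placeholder, not a published checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="path/to/fomc-meeting-finetune")
prompt = "Current economic conditions can be described as"
for out in generator(prompt, do_sample=True, num_return_sequences=5,
                     max_new_tokens=30):
    print(out["generated_text"])          # highly likely phrasings per meeting
```
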

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.