nep-cmp New Economics Papers
on Computational Economics
Issue of 2024–11–18
nineteen papers chosen by
Stan Miles, Thompson Rivers University


  1. Taming the Curse of Dimensionality: Quantitative Economics with Deep Learning By Jesús Fernández-Villaverde; Galo Nuño; Jesse Perla
  2. Dynamic graph neural networks for enhanced volatility prediction in financial markets By Pulikandala Nithish Kumar; Nneka Umeorah; Alex Alochukwu
  3. Reproducing and Extending Experiments in Behavioral Strategy with Large Language Models By Daniel Albert; Stephan Billinger
  4. Can GANs Learn the Stylized Facts of Financial Time Series? By Sohyeon Kwon; Yongjae Lee
  5. Quantum Computing for Multi Period Asset Allocation By Queenie Sun; Nicholas Grablevsky; Huaizhang Deng; Pooya Azadi
  6. Forecasting US Presidential Election 2024 using multiple machine learning algorithms By Sinha, Pankaj; Kumar, Amit; Biswas, Sumana; Gupta, Chirag
  7. Double Jeopardy and Climate Impact in the Use of Large Language Models: Socio-economic Disparities and Reduced Utility for Non-English Speakers By Aivin V. Solatorio; Gabriel Stefanini Vicente; Holly Krambeck; Olivier Dupriez
  8. Optimizing Time Series Forecasting: A Comparative Study of Adam and Nesterov Accelerated Gradient on LSTM and GRU networks Using Stock Market data By Ahmad Makinde
  9. Hospital Admission Rates in São Paulo, Brazil - Lee-Carter model vs. neural networks By Rodolfo Monfilier Peres; Onofre Alves Simões
  10. UCFE: A User-Centric Financial Expertise Benchmark for Large Language Models By Yuzhe Yang; Yifei Zhang; Yan Hu; Yilin Guo; Ruoli Gan; Yueru He; Mingcong Lei; Xiao Zhang; Haining Wang; Qianqian Xie; Jimin Huang; Honghai Yu; Benyou Wang
  11. An Innovative Attention-based Ensemble System for Credit Card Fraud Detection By Mehdi Hosseini Chagahi; Niloufar Delfan; Saeed Mohammadi Dashtaki; Behzad Moshiri; Md. Jalil Piran
  12. Living on the Highway: Addressing Germany's HGV Parking Crisis through Machine Learning Satellite Image Analysis By Julius Range; Benedikt Gloria; Albert Erasmus Grafe
  13. Deep Learning Methods for S Shaped Utility Maximisation with a Random Reference Point By Ashley Davey; Harry Zheng
  14. Reinforcement Learning in Non-Markov Market-Making By Luca Lalor; Anatoliy Swishchuk
  15. Quantifying uncertainty: a new era of measurement through large language models By Francesco Audrino; Jessica Gentner; Simon Stalder
  16. Statistical Properties of Deep Neural Networks with Dependent Data By Chad Brown
  17. Neuro-Symbolic Traders: Assessing the Wisdom of AI Crowds in Markets By Namid R. Stillman; Rory Baggott
  18. Forecasting 2024 US Presidential Election by States Using County Level Data: Too Close to Call By M. Hashem Pesaran; Hayun Song
  19. Implications of Behavioral Rules in Agent-Based Macroeconomics By Herbert Dawid; Domenico Delli Gatti; Luca Eduardo Fierro; Sebastian Poledna

  1. By: Jesús Fernández-Villaverde (University of Pennsylvania, CEPR and NBER); Galo Nuño (Banco de España, CEPR, CEMFI); Jesse Perla (University of British Columbia)
    Abstract: We argue that deep learning provides a promising avenue for taming the curse of dimensionality in quantitative economics. We begin by exploring the unique challenges posed by solving dynamic equilibrium models, especially the feedback loop between individual agents’ decisions and the aggregate consistency conditions required by equilibrium. Following this, we introduce deep neural networks and demonstrate their application by solving the stochastic neoclassical growth model. Next, we compare deep neural networks with traditional solution methods in quantitative economics. We conclude with a survey of neural network applications in quantitative economics and offer reasons for cautious optimism.
    Keywords: Deep learning, quantitative economics
    JEL: C61 C63 E27
    Date: 2024–10–29
    URL: https://d.repec.org/n?u=RePEc:pen:papers:24-034
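    Illustration (not from the paper): one widely used way to implement the approach surveyed above is to train a neural-network policy that drives Euler-equation residuals to zero on randomly sampled states. The PyTorch sketch below does this for the stochastic neoclassical growth model; the calibration, sampling region and network size are placeholder assumptions.

      import torch
      import torch.nn as nn

      alpha, beta, delta, gamma, rho, sigma = 0.36, 0.96, 0.1, 2.0, 0.9, 0.02  # assumed calibration

      # policy network maps (capital, log productivity) to a consumption share in (0, 1)
      policy = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(),
                             nn.Linear(64, 1), nn.Sigmoid())
      opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

      def resources(k, z):
          return z * k**alpha + (1 - delta) * k

      for step in range(5000):
          # sample states from a training region instead of building a grid
          k = torch.rand(1024, 1) * 9.0 + 1.0
          logz = torch.randn(1024, 1) * sigma / (1 - rho**2) ** 0.5
          w = resources(k, logz.exp())
          c = policy(torch.cat([k, logz], dim=1)) * w                  # consumption today
          k_next = (w - c).clamp(min=1e-3)                             # capital tomorrow (numerical floor)
          # Monte Carlo expectation over next-period productivity shocks
          logz_next = rho * logz + sigma * torch.randn(1024, 8)
          w_next = resources(k_next, logz_next.exp())
          c_next = policy(torch.stack([k_next.expand_as(logz_next), logz_next], dim=-1).reshape(-1, 2))
          c_next = c_next.reshape(1024, 8) * w_next
          mpk = alpha * logz_next.exp() * k_next**(alpha - 1) + 1 - delta
          euler_rhs = beta * (c_next**(-gamma) * mpk).mean(dim=1, keepdim=True)
          loss = ((c**(-gamma) - euler_rhs) ** 2).mean()               # squared Euler residual
          opt.zero_grad(); loss.backward(); opt.step()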
  2. By: Pulikandala Nithish Kumar; Nneka Umeorah; Alex Alochukwu
    Abstract: Volatility forecasting is essential for risk management and decision-making in financial markets. Traditional models like Generalized Autoregressive Conditional Heteroskedasticity (GARCH) effectively capture volatility clustering but often fail to model complex, non-linear interdependencies between multiple indices. This paper proposes a novel approach using Graph Neural Networks (GNNs) to represent global financial markets as dynamic graphs. The Temporal Graph Attention Network (Temporal GAT) combines Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs) to capture the temporal and structural dynamics of volatility spillovers. By utilizing correlation-based and volatility spillover indices, the Temporal GAT constructs directed graphs that enhance the accuracy of volatility predictions. Empirical results from a 15-year study of eight major global indices show that the Temporal GAT outperforms traditional GARCH models and other machine learning methods, particularly in short- to mid-term forecasts. The sensitivity and scenario-based analysis over a range of parameters and hyperparameters further demonstrate the significance of the proposed technique. Hence, this work highlights the potential of GNNs in modeling complex market behaviors, providing valuable insights for financial analysts and investors.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.16858
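    Illustration (not the authors' implementation): a minimal "temporal graph attention" block in the spirit described above, combining a graph attention layer over a directed market graph with a GRU over time, written with PyTorch Geometric. Layer sizes, the number of indices and the fully connected edge list are placeholder assumptions.

      import torch
      import torch.nn as nn
      from torch_geometric.nn import GATConv

      class TemporalGAT(nn.Module):
          def __init__(self, n_features, hidden, heads=2):
              super().__init__()
              self.gat = GATConv(n_features, hidden, heads=heads, concat=False)  # attention over the graph
              self.gru = nn.GRU(hidden, hidden, batch_first=True)                # recurrence over time
              self.head = nn.Linear(hidden, 1)                                   # next-period volatility per index

          def forward(self, x_seq, edge_index):
              # x_seq: (T, n_nodes, n_features); edge_index: (2, n_edges) directed spillover graph
              spatial = torch.stack([self.gat(x_t, edge_index) for x_t in x_seq])  # (T, N, hidden)
              out, _ = self.gru(spatial.transpose(0, 1))                           # (N, T, hidden)
              return self.head(out[:, -1])                                         # (N, 1)

      # toy usage: 8 indices, 30 days of 5 features each, fully connected directed graph
      N, T, F = 8, 30, 5
      edge_index = torch.tensor([(i, j) for i in range(N) for j in range(N) if i != j]).t()
      model = TemporalGAT(F, hidden=32)
      vol_forecast = model(torch.randn(T, N, F), edge_index)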
  3. By: Daniel Albert; Stephan Billinger
    Abstract: In this study, we propose LLM agents as a novel approach in behavioral strategy research, complementing simulations and laboratory experiments to advance our understanding of cognitive processes in decision-making. Specifically, we reproduce a human laboratory experiment in behavioral strategy using large language model (LLM) generated agents and investigate how LLM agents compare to observed human behavior. Our results show that LLM agents effectively reproduce search behavior and decision-making comparable to humans. Extending our experiment, we analyze LLM agents' simulated "thoughts", discovering that more forward-looking thoughts correlate with favoring exploitation over exploration to maximize wealth. We show how this new approach can be leveraged in behavioral strategy research and address its limitations.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.06932
  4. By: Sohyeon Kwon; Yongjae Lee
    Abstract: In the financial sector, a sophisticated financial time series simulator is essential for evaluating financial products and investment strategies. Traditional back-testing methods have mainly relied on historical data-driven approaches or mathematical model-driven approaches, such as various stochastic processes. However, in the current era of AI, data-driven approaches, where models learn the intrinsic characteristics of data directly, have emerged as promising techniques. Generative Adversarial Networks (GANs) have surfaced as promising generative models, capturing data distributions through adversarial learning. Financial time series, characterized by 'stylized facts' such as random walks, mean-reverting patterns, unexpected jumps, and time-varying volatility, present significant challenges for deep neural networks to learn their intrinsic characteristics. This study examines the ability of GANs to learn diverse and complex temporal patterns (i.e., stylized facts) of both univariate and multivariate financial time series. Our extensive experiments revealed that GANs can capture various stylized facts of financial time series, but their performance varies significantly depending on the choice of generator architecture. This suggests that naively applying GANs might not effectively capture the intricate characteristics inherent in financial time series, highlighting the importance of carefully considering and validating the modeling choices.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.09850
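    Illustration (not from the paper): the basic adversarial setup being evaluated, written as a minimal PyTorch GAN on fixed-length return sequences. The architectures and hyperparameters are placeholder assumptions; the study's point is precisely that the choice of generator architecture matters.

      import torch
      import torch.nn as nn

      seq_len, noise_dim = 64, 32
      G = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, seq_len))       # generator
      D = nn.Sequential(nn.Linear(seq_len, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))       # discriminator
      opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
      opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
      bce = nn.BCEWithLogitsLoss()

      def train_step(real_returns):                          # real_returns: (batch, seq_len)
          batch = real_returns.shape[0]
          fake = G(torch.randn(batch, noise_dim))
          # discriminator step: real sequences -> 1, generated sequences -> 0
          d_loss = bce(D(real_returns), torch.ones(batch, 1)) + \
                   bce(D(fake.detach()), torch.zeros(batch, 1))
          opt_d.zero_grad(); d_loss.backward(); opt_d.step()
          # generator step: try to fool the discriminator
          g_loss = bce(D(fake), torch.ones(batch, 1))
          opt_g.zero_grad(); g_loss.backward(); opt_g.step()
          return d_loss.item(), g_loss.item()

      # after training, sampled sequences G(z) are checked against stylized facts such as
      # fat tails, volatility clustering and the autocorrelation of absolute returns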
  5. By: Queenie Sun; Nicholas Grablevsky; Huaizhang Deng; Pooya Azadi
    Abstract: Portfolio construction has been a long-standing topic of research in finance. The computational complexity and the time required both increase rapidly with the number of investments in the portfolio, making the problem difficult, and eventually intractable, for classical computers. Quantum computing is a new paradigm that exploits quantum superposition and entanglement; it changes how such problems are approached and is not bound by some of the classical computational constraints. Studies have shown that quantum computing can offer significant advantages over classical computing in many fields. Its application has long been constrained by the unavailability of actual quantum computers, and although the past decade has seen rapid development of large-scale quantum hardware, software development for quantum computing remains slow in many fields. In our study, we apply quantum computing to a multi-asset portfolio simulation. The simulation is based on historical data, covariances, and expected returns, all calculated using quantum computing. Although the problem is technically solvable with classical computing, we believe this software development is important to the future application of quantum computing in finance. We conducted the study through simulation of a quantum computer and the use of Rensselaer Polytechnic Institute's IBM quantum computer.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.11997
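    Illustration only: the abstract does not spell out the authors' formulation, so the sketch below shows one common way a single-period asset-selection problem is posed for quantum solvers, as a QUBO built with numpy and checked here by brute force; a quantum annealer or a QAOA routine would replace the brute-force step. All parameter values are assumptions.

      import itertools
      import numpy as np

      mu = np.array([0.08, 0.12, 0.10, 0.07])              # expected returns (assumed)
      cov = np.diag([0.04, 0.09, 0.05, 0.03])              # covariance matrix (assumed)
      risk_aversion, budget, penalty = 2.0, 2, 4.0
      n = len(mu)

      # objective: risk_aversion * x'Cov x - mu'x + penalty * (sum(x) - budget)^2, with x binary
      Q = risk_aversion * cov - np.diag(mu)
      Q += penalty * (np.ones((n, n)) - 2 * budget * np.eye(n))   # expanded budget penalty

      def qubo_value(x):
          return x @ Q @ x

      best = min(itertools.product([0, 1], repeat=n), key=lambda x: qubo_value(np.array(x)))
      print("selected assets:", best)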
  6. By: Sinha, Pankaj; Kumar, Amit; Biswas, Sumana; Gupta, Chirag
    Abstract: The outcome of the US presidential election is one of the most significant events that impacts trade, investment, and geopolitical policies on the global stage. It also sets the direction of the world economy and global politics for the next few years. Hence, it is of prime importance not just for the American population but also for the future well-being of people worldwide. Therefore, this study aims to forecast the popular vote share of the incumbent party candidate in the 2024 Presidential election. The study applies the regularization-based machine learning algorithm Lasso to select the most important economic and non-economic indicators influencing the electorate. The variables identified by Lasso were then used with Lasso (regularization), random forest (bagging) and gradient boosting (boosting) to forecast the popular vote share of the incumbent party candidate in the 2024 US Presidential election. The findings suggest that June Gallup ratings, average Gallup ratings, scandal ratings, the oil price indicator, the unemployment indicator and the crime rate influence the popular vote share of the incumbent party candidate. The Lasso-based prediction emerges as the most consistent estimate of the popular vote share. The Lasso model forecasts that Kamala Harris, the Democratic Party candidate, will receive a popular vote share of 47.04% in the 2024 US Presidential Election.
    Keywords: US Presidential Election, Machine Learning, Lasso, Random Forest
    JEL: C1 C10 C15 C6 C63 G0
    Date: 2024–10–20
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:122490
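    Illustration (not the authors' code): the two-stage design described above, with Lasso used for variable selection and then Lasso, random forest and gradient boosting fitted on the selected indicators, written with scikit-learn. The data and feature set here are placeholders.

      import numpy as np
      from sklearn.linear_model import LassoCV
      from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

      # X: election-year indicators (Gallup ratings, unemployment, oil price, ...); y: incumbent vote share
      rng = np.random.default_rng(0)
      X, y = rng.normal(size=(24, 10)), rng.normal(loc=48, scale=5, size=24)

      selector = LassoCV(cv=5).fit(X, y)
      selected = np.flatnonzero(selector.coef_ != 0)        # indicators with non-zero Lasso coefficients
      X_sel = X[:, selected] if selected.size else X

      models = {
          "lasso": LassoCV(cv=5),
          "random_forest": RandomForestRegressor(n_estimators=500, random_state=0),
          "gradient_boosting": GradientBoostingRegressor(random_state=0),
      }
      forecasts = {name: m.fit(X_sel, y).predict(X_sel[-1:])[0] for name, m in models.items()}
      print(forecasts)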
  7. By: Aivin V. Solatorio; Gabriel Stefanini Vicente; Holly Krambeck; Olivier Dupriez
    Abstract: Artificial Intelligence (AI), particularly large language models (LLMs), holds the potential to bridge language and information gaps, which can benefit the economies of developing nations. However, our analysis of FLORES-200, FLORES+, Ethnologue, and World Development Indicators data reveals that these benefits largely favor English speakers. Speakers of languages in low-income and lower-middle-income countries face higher costs when using OpenAI's GPT models via APIs because of how the system processes the input -- tokenization. Around 1.5 billion people, speaking languages primarily from lower-middle-income countries, could incur costs that are 4 to 6 times higher than those faced by English speakers. Disparities in LLM performance are significant, and tokenization in models priced per token amplifies inequalities in access, cost, and utility. Moreover, using the quality of translation tasks as a proxy measure, we show that LLMs perform poorly in low-resource languages, presenting a "double jeopardy" of higher costs and poor performance for these users. We also discuss the direct impact of fragmentation in tokenizing low-resource languages on climate. This underscores the need for fairer algorithm development to benefit all linguistic groups.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.10665
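    Illustration (not from the paper): the cost mechanism at issue can be checked directly, because token-based pricing means the same sentence can consume several times more tokens in a low-resource language. The sketch uses the openly available tiktoken tokenizer; the example sentences are placeholders.

      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")            # tokenizer used by GPT-3.5/4-era models
      samples = {
          "English": "The weather is nice today.",
          "Tamil": "இன்று வானிலை நன்றாக உள்ளது.",
          "Amharic": "ዛሬ የአየር ሁኔታው ጥሩ ነው።",
      }
      baseline = len(enc.encode(samples["English"]))
      for lang, text in samples.items():
          n_tokens = len(enc.encode(text))
          print(f"{lang:8s} {n_tokens:3d} tokens  (~{n_tokens / baseline:.1f}x the English cost)")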
  8. By: Ahmad Makinde
    Abstract: Several studies have discussed the impact of different optimization techniques on time series forecasting across different neural network architectures. This paper examines the effectiveness of the Adam and Nesterov Accelerated Gradient (NAG) optimization techniques on LSTM and GRU neural networks for time series prediction, specifically stock market time series. We trained LSTM and GRU models with each of the two optimizers and evaluated their performance on Apple Inc.'s closing price data over the last decade. The GRU model optimized with Adam produced the lowest RMSE, outperforming the other model-optimizer combinations in both accuracy and convergence speed. The GRU models with both optimizers outperformed the LSTM models, whilst the Adam optimizer outperformed the NAG optimizer for both model architectures. The results suggest that GRU models optimized with Adam are well suited for practitioners in time series prediction, particularly stock price prediction, producing accurate and computationally efficient models. The code for the experiments in this project can be found at https://github.com/AhmadMak/Time-Series-Optimization-Research
    Keywords: Time-series Forecasting, Neural Network, LSTM, GRU, Adam Optimizer, Nesterov Accelerated Gradient (NAG) Optimizer
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.01843
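    Illustration (not the authors' code): the four model-optimizer combinations compared in the paper, set up in PyTorch, with NAG obtained as SGD with Nesterov momentum. Window length, learning rates and layer sizes are placeholder assumptions.

      import torch
      import torch.nn as nn

      class RNNForecaster(nn.Module):
          def __init__(self, cell):
              super().__init__()
              self.rnn = cell(input_size=1, hidden_size=64, batch_first=True)
              self.head = nn.Linear(64, 1)

          def forward(self, x):                   # x: (batch, window, 1) of scaled closing prices
              out, _ = self.rnn(x)
              return self.head(out[:, -1])        # next-day price

      def make_optimizer(name, params):
          if name == "adam":
              return torch.optim.Adam(params, lr=1e-3)
          return torch.optim.SGD(params, lr=1e-2, momentum=0.9, nesterov=True)   # NAG

      combos = {(cell.__name__, opt_name): RNNForecaster(cell)
                for cell in (nn.LSTM, nn.GRU) for opt_name in ("adam", "nag")}

      x, y = torch.randn(32, 30, 1), torch.randn(32, 1)     # placeholder batch of 30-day windows
      for (arch, opt_name), model in combos.items():
          opt = make_optimizer(opt_name, model.parameters())
          loss = nn.functional.mse_loss(model(x), y)        # one illustrative training step per combination
          opt.zero_grad(); loss.backward(); opt.step()
          print(arch, opt_name, float(loss))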
  9. By: Rodolfo Monfilier Peres; Onofre Alves Simões
    Abstract: In Brazil, hospital admissions account for nearly 50% of the total cost of health insurance claims, while representing only 1% of total medical procedures. Modeling hospital admissions is therefore useful for insurers to evaluate costs and maintain their solvency. This article analyzes the use of the Lee-Carter model to predict hospital admissions in the state of São Paulo, Brazil, and contrasts it with a Long Short-Term Memory (LSTM) neural network. The results showed that the two approaches have similar performance. This is not a disappointing result; on the contrary, future work can now test whether LSTM models can outperform Lee-Carter, for example by working with longer data sequences or by adapting the models.
    Keywords: Hospital Admissions; Lee-Carter; Neural Networks; LSTM; Brazil.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:ise:remwps:wp03492024
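    Illustration (not the authors' code): the Lee-Carter decomposition used as the baseline above, applied with numpy to a matrix of admission rates by age group and year, followed by the usual random-walk-with-drift forecast of the time index. The input matrix is a placeholder for the São Paulo admission data.

      import numpy as np

      rates = np.random.default_rng(1).uniform(0.01, 0.1, size=(18, 20))   # age groups x years (placeholder)
      log_m = np.log(rates)

      a_x = log_m.mean(axis=1)                               # average age pattern
      U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
      b_x = U[:, 0] / U[:, 0].sum()                          # age sensitivities, normalized so sum(b_x) = 1
      k_t = s[0] * Vt[0] * U[:, 0].sum()                     # time index, rescaled to match b_x

      # forecast the time index as a random walk with drift, then rebuild rates
      drift = np.diff(k_t).mean()
      k_future = k_t[-1] + drift * np.arange(1, 6)
      forecast = np.exp(a_x[:, None] + np.outer(b_x, k_future))   # forecast admission rates, 5 years ahead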
  10. By: Yuzhe Yang; Yifei Zhang; Yan Hu; Yilin Guo; Ruoli Gan; Yueru He; Mingcong Lei; Xiao Zhang; Haining Wang; Qianqian Xie; Jimin Huang; Honghai Yu; Benyou Wang
    Abstract: This paper introduces UCFE: User-Centric Financial Expertise, an innovative benchmark designed to evaluate the ability of large language models (LLMs) to handle complex real-world financial tasks. The UCFE benchmark adopts a hybrid approach that combines human expert evaluations with dynamic, task-specific interactions to simulate the complexities of evolving financial scenarios. First, we conducted a user study involving 804 participants, collecting their feedback on financial tasks. Second, based on this feedback, we created a dataset that encompasses a wide range of user intents and interactions. This dataset serves as the foundation for benchmarking 12 LLM services using the LLM-as-Judge methodology. Our results show a significant alignment between benchmark scores and human preferences, with a Pearson correlation coefficient of 0.78, confirming the effectiveness of the UCFE dataset and our evaluation approach. The UCFE benchmark not only reveals the potential of LLMs in the financial sector but also provides a robust framework for assessing their performance and user satisfaction. The benchmark dataset and evaluation code are available.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.14059
  11. By: Mehdi Hosseini Chagahi; Niloufar Delfan; Saeed Mohammadi Dashtaki; Behzad Moshiri; Md. Jalil Piran
    Abstract: Detecting credit card fraud (CCF) is important because it safeguards consumers from unauthorized transactions that can cause financial loss and damage their credit rating, and it helps financial institutions preserve the reliability of their payment systems and avoid the costly process of compensating for fraudulent transactions. Artificial Intelligence methods have demonstrated remarkable efficacy in identifying credit card fraud. In this study, we present a unique attention-based ensemble model, enhanced by an attention layer that integrates the first-layer classifiers' predictions and a selection layer that chooses the best integrated value. The attention layer is implemented with two aggregation operators: dependent ordered weighted averaging (DOWA) and induced ordered weighted averaging (IOWA). The IOWA operator behaves much like the gradient-descent learning rule used in neural networks, while the DOWA operator down-weights classifiers whose predictions are outliers relative to the other learners. Both operators are expressive enough to recognize complex patterns. Accuracy and diversity are the two criteria we use for selecting the classifiers whose predictions are integrated by the two aggregation operators. Using a bootstrap forest, we identify the 13 features of the dataset that contribute most to CCF detection and use them to feed the proposed model. The ensemble model attains an accuracy of 99.95% with an area under the curve (AUC) of 1.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.09069
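    Illustration only: the generic ordered weighted averaging (OWA) mechanics behind the DOWA/IOWA attention layer described above. The paper's specific weight-generation schemes are not reproduced here; the weights and the inducing variable below are placeholder assumptions.

      import numpy as np

      def owa(scores, weights):
          """Plain OWA: weights are applied to the scores sorted in descending order."""
          return np.sort(scores)[::-1] @ weights

      def induced_owa(scores, inducing, weights):
          """IOWA-style aggregation: the ordering is induced by an auxiliary variable
          (for example each classifier's validation accuracy), not by the scores themselves."""
          order = np.argsort(inducing)[::-1]
          return scores[order] @ weights

      fraud_scores = np.array([0.91, 0.40, 0.85, 0.88])     # first-layer classifiers' fraud probabilities
      accuracies = np.array([0.97, 0.90, 0.99, 0.95])       # inducing variable (assumed)
      weights = np.array([0.4, 0.3, 0.2, 0.1])              # aggregation weights, sum to 1

      print("OWA: ", owa(fraud_scores, weights))
      print("IOWA:", induced_owa(fraud_scores, accuracies, weights))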
  12. By: Julius Range; Benedikt Gloria; Albert Erasmus Grafe
    Abstract: The rapidly increasing demand for freight transport has precipitated a critical need for expanded infrastructure, particularly in Germany, where a significant crisis in Heavy Goods Vehicle (HGV) parking facilities is emerging. Our study aims to determine the supply of HGV parking lots required to mitigate this problem. Utilizing state-of-the-art object detection techniques on satellite imagery, we conduct a comprehensive analysis of the current availability of HGV parking spaces. Our machine learning-based approach enables an accurate, large-scale evaluation and reveals a considerable undersupply of HGV parking lots across Germany. These findings underscore the severity of the infrastructure deficit in the context of increasing freight transport demand. In a next step, we conduct a location analysis to identify the regions that are most acutely affected. Our results thereby deliver valuable insights to specialized real-estate developers seeking to cater to this demand and profit from the deficit. Based on the results, we develop industry and policy recommendations aimed at addressing the shortfall.
    Keywords: Machine Learning; satellite image analysis; specialized real estate; Transportation
    JEL: R3
    Date: 2024–01–01
    URL: https://d.repec.org/n?u=RePEc:arz:wpaper:eres2024-164
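    Illustration only: the abstract does not name the detection model, so this sketch uses the open-source ultralytics YOLO API as a stand-in for counting parked HGVs in imagery tiles. In practice a model fine-tuned on annotated satellite tiles would be required; the weights file and directory are placeholders.

      from pathlib import Path
      from ultralytics import YOLO

      model = YOLO("yolov8n.pt")                  # pretrained weights; fine-tune on labelled satellite tiles
      tile_dir = Path("satellite_tiles")          # hypothetical directory of imagery tiles

      counts = {}
      for tile in sorted(tile_dir.glob("*.png")):
          result = model(str(tile), verbose=False)[0]
          counts[tile.name] = len(result.boxes)   # detections of the (custom) parked-HGV class per tile

      print(f"detected parked HGVs across {len(counts)} tiles: {sum(counts.values())}")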
  13. By: Ashley Davey; Harry Zheng
    Abstract: We consider the portfolio optimisation problem where the terminal function is an S-shaped utility applied at the difference between the wealth and a random benchmark process. We develop several numerical methods for solving the problem using deep learning and duality methods. We use deep learning methods to solve the associated Hamilton-Jacobi-Bellman equation for both the primal and dual problems, and the adjoint equation arising from the stochastic maximum principle. We compare the solution of this non-concave problem to that of concavified utility, a random function depending on the benchmark, in both complete and incomplete markets. We give some numerical results for power and log utilities to show the accuracy of the suggested algorithms.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.05524
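    Illustration only: the paper solves the HJB and adjoint equations with deep learning; as a simpler, related illustration, the sketch below trains a neural-network investment policy directly by Monte Carlo simulation, maximizing expected S-shaped utility of terminal wealth relative to a random benchmark. The market dynamics, the benchmark and the utility parameters are placeholder assumptions.

      import torch
      import torch.nn as nn

      T, n_steps, mu, sigma, r = 1.0, 50, 0.08, 0.2, 0.02
      dt = T / n_steps

      def s_shaped_utility(x, a=0.88, lam=2.25, eps=1e-6):
          # concave over gains, convex and steeper over losses (Kahneman-Tversky style)
          gains = (x.clamp(min=0) + eps) ** a
          losses = -lam * ((-x).clamp(min=0) + eps) ** a
          return torch.where(x >= 0, gains, losses)

      policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))   # input: (time, wealth)
      opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

      for it in range(300):
          wealth, benchmark = torch.ones(512), torch.ones(512)
          for i in range(n_steps):
              t = torch.full((512, 1), i * dt)
              pi = policy(torch.cat([t, wealth.unsqueeze(1)], dim=1)).squeeze(1)   # fraction in the risky asset
              dW = torch.randn(512) * dt ** 0.5
              wealth = wealth * (1 + r * dt + pi * ((mu - r) * dt + sigma * dW))
              # random benchmark: a constant-mix portfolio driven by an independent shock (assumed)
              benchmark = benchmark * (1 + r * dt + 0.5 * ((mu - r) * dt + sigma * torch.randn(512) * dt ** 0.5))
          loss = -s_shaped_utility(wealth - benchmark).mean()
          opt.zero_grad(); loss.backward(); opt.step()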
  14. By: Luca Lalor; Anatoliy Swishchuk
    Abstract: We develop a deep reinforcement learning (RL) framework for an optimal market-making (MM) trading problem, specifically focusing on price processes with semi-Markov and Hawkes jump-diffusion dynamics. We begin by discussing the basics of RL and the deep RL framework used, deploying the state-of-the-art Soft Actor-Critic (SAC) algorithm for the deep learning part. SAC is an off-policy entropy-maximization algorithm well suited to complex, high-dimensional problems with continuous state and action spaces, such as optimal market making. We then introduce the MM problem considered, detailing the deterministic and stochastic processes that go into setting up an environment for simulating the strategy. Here we also give an in-depth overview of the jump-diffusion pricing dynamics used and of our method for dealing with adverse selection within the limit order book, and we highlight the working parts of our optimization problem. Next, we discuss training and testing results, with visuals of how key deterministic and stochastic processes such as the bid/ask quotes, trade executions, inventory, and the reward function evolved. We include a discussion of the limitations of these results, which are important points to note for most diffusion models in this setting.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.14504
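    Illustration (not the authors' environment): a stripped-down market-making environment with a continuous quote-offset action, trained with the off-the-shelf SAC implementation in stable-baselines3. The mid-price here is a plain random walk and the fill model is ad hoc; the paper uses semi-Markov and Hawkes jump-diffusion dynamics and a much richer state.

      import numpy as np
      import gymnasium as gym
      from stable_baselines3 import SAC

      class ToyMarketMakingEnv(gym.Env):
          def __init__(self, horizon=200):
              super().__init__()
              self.horizon = horizon
              # action: (bid offset, ask offset); observation: (mid price, inventory, time remaining)
              self.action_space = gym.spaces.Box(low=0.01, high=1.0, shape=(2,), dtype=np.float32)
              self.observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32)

          def reset(self, seed=None, options=None):
              super().reset(seed=seed)
              self.mid, self.inventory, self.cash, self.t = 100.0, 0, 0.0, 0
              return self._obs(), {}

          def step(self, action):
              bid_off, ask_off = float(action[0]), float(action[1])
              # fills are more likely when quotes sit closer to the mid price
              if self.np_random.random() < np.exp(-2 * bid_off):
                  self.inventory += 1; self.cash -= self.mid - bid_off
              if self.np_random.random() < np.exp(-2 * ask_off):
                  self.inventory -= 1; self.cash += self.mid + ask_off
              self.mid += self.np_random.normal(0, 0.05)          # simplified mid-price dynamics
              self.t += 1
              pnl = self.cash + self.inventory * self.mid
              reward = pnl - 0.01 * self.inventory ** 2           # mark-to-market PnL minus inventory penalty
              return self._obs(), float(reward), self.t >= self.horizon, False, {}

          def _obs(self):
              return np.array([self.mid, self.inventory, self.horizon - self.t], dtype=np.float32)

      model = SAC("MlpPolicy", ToyMarketMakingEnv(), verbose=0)
      model.learn(total_timesteps=10_000)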
  15. By: Francesco Audrino; Jessica Gentner; Simon Stalder
    Abstract: This paper presents an innovative method for measuring uncertainty via large language models (LLMs), which offer greater precision and contextual sensitivity than the conventional methods used to construct prominent uncertainty indices. By analysing newspaper texts with state-of-the-art LLMs, our approach captures nuances often missed by conventional methods. We develop indices for various types of uncertainty, including geopolitical risk, economic policy, monetary policy, and financial market uncertainty. Our findings show that shocks to these LLM-based indices exhibit stronger associations with macroeconomic variables, shifts in investor behaviour, and asset return variations than conventional indices, underscoring their potential for more accurately reflecting uncertainty.
    Keywords: Uncertainty measurement, Large language models, Economic policy, Geopolitical risk, Monetary policy, Financial markets
    JEL: C45 C55 E44 G12
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:snb:snbwpa:2024-12
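    Illustration only: the paper's indices are built with state-of-the-art LLMs on large news archives; as a stand-in, the sketch below shows the basic mechanics of scoring article text against uncertainty categories with an open zero-shot classifier from the transformers library. The model choice and example texts are assumptions.

      from transformers import pipeline

      classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
      categories = ["geopolitical risk", "economic policy uncertainty",
                    "monetary policy uncertainty", "financial market uncertainty"]

      articles = [
          "Central bank officials signalled that the path of interest rates remains highly unclear.",
          "Escalating border tensions raised fears of a wider regional conflict.",
      ]
      for text in articles:
          result = classifier(text, candidate_labels=categories, multi_label=True)
          scores = dict(zip(result["labels"], result["scores"]))
          print(text[:60], "->", max(scores, key=scores.get))

      # a monthly index per category can then be built by averaging article scores within
      # each month and standardizing the resulting series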
  16. By: Chad Brown
    Abstract: This paper establishes statistical properties of deep neural network (DNN) estimators under dependent data. Two general results for nonparametric sieve estimators directly applicable to DNN estimators are given. The first establishes rates for convergence in probability under nonstationary data. The second provides non-asymptotic probability bounds on $\mathcal{L}^{2}$-errors under stationary $\beta$-mixing data. I apply these results to DNN estimators in both regression and classification contexts, imposing only a standard Hölder smoothness assumption. These results are then used to demonstrate how asymptotic inference can be conducted on the finite-dimensional parameter of a partially linear regression model after first-stage DNN estimation of infinite-dimensional parameters. The DNN architectures considered are common in applications, featuring fully connected feedforward networks with any continuous piecewise linear activation function, unbounded weights, and a width and depth that grow with sample size. The framework provided also offers potential for research into other DNN architectures and time-series applications.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.11113
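    Illustration (not the author's code): inference on the finite-dimensional parameter of a partially linear model y = theta*d + g(x) + e after first-stage DNN estimation of the nuisance functions, using a standard residual-on-residual regression. The data-generating process and network sizes are placeholder assumptions.

      import numpy as np
      import torch
      import torch.nn as nn

      def fit_dnn(x, target, epochs=300):
          net = nn.Sequential(nn.Linear(x.shape[1], 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(),
                              nn.Linear(64, 1))
          opt = torch.optim.Adam(net.parameters(), lr=1e-3)
          for _ in range(epochs):
              loss = nn.functional.mse_loss(net(x), target)
              opt.zero_grad(); loss.backward(); opt.step()
          return net

      # simulated data with theta = 1.5 and a nonlinear nuisance function g(x)
      rng = np.random.default_rng(0)
      x = rng.normal(size=(2000, 5)).astype(np.float32)
      d = (np.sin(x[:, 0]) + 0.5 * rng.normal(size=2000)).astype(np.float32)
      y = (1.5 * d + np.cos(x[:, 1]) + 0.5 * rng.normal(size=2000)).astype(np.float32)
      xt = torch.from_numpy(x)

      m_hat = fit_dnn(xt, torch.from_numpy(y).unsqueeze(1))      # estimates E[y | x]
      r_hat = fit_dnn(xt, torch.from_numpy(d).unsqueeze(1))      # estimates E[d | x]
      with torch.no_grad():
          y_res = y - m_hat(xt).squeeze(1).numpy()
          d_res = d - r_hat(xt).squeeze(1).numpy()

      theta_hat = (d_res @ y_res) / (d_res @ d_res)              # residual-on-residual OLS
      se = np.sqrt(np.mean((y_res - theta_hat * d_res) ** 2) / np.sum(d_res ** 2))
      print(f"theta_hat = {theta_hat:.3f} (se {se:.3f})")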
  17. By: Namid R. Stillman; Rory Baggott
    Abstract: Deep generative models are increasingly being used as tools for financial analysis. However, it is unclear how these models will influence financial markets, especially when they infer financial value in a semi-autonomous way. In this work, we explore the interplay between deep generative models and market dynamics. We develop a form of virtual trader that uses deep generative models to make buy/sell decisions, which we term neuro-symbolic traders, and expose them to a virtual market. Under our framework, neuro-symbolic traders are agents that use vision-language models to discover a model of the fundamental value of an asset. Agents develop this model as a stochastic differential equation, calibrated to market data using gradient descent. We test our neuro-symbolic traders on both synthetic data and real financial time series, including an equity stock, a commodity, and a foreign exchange pair. We then expose several groups of neuro-symbolic traders to a virtual market environment that allows for feedback between the traders' beliefs about the underlying value and the observed price dynamics. We find that this leads to price suppression compared to the historical data, highlighting a future risk to market stability. Our work is a first step towards quantifying the effect of deep generative agents on market dynamics and sets out some of the potential risks and benefits of this approach in the future.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.14587
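    Illustration only: in the paper the traders propose an SDE for fundamental value via a vision-language model and then calibrate it to market data by gradient descent; the sketch below shows only that calibration step, fitting a fixed mean-reverting (Ornstein-Uhlenbeck) SDE to a price series with PyTorch. The data and starting values are placeholders.

      import torch

      prices = 100 + torch.cumsum(torch.randn(500) * 0.5, dim=0)   # placeholder "market data"
      dt = 1.0

      # unconstrained parameters (log kappa, mu offset, log sigma), mapped to positive kappa and sigma
      raw = torch.zeros(3, requires_grad=True)
      opt = torch.optim.Adam([raw], lr=0.05)

      for step in range(500):
          kappa, mu, sigma = raw[0].exp(), raw[1] + prices.mean(), raw[2].exp() + 1e-4
          drift = kappa * (mu - prices[:-1]) * dt
          resid = prices[1:] - prices[:-1] - drift
          # Gaussian negative log-likelihood of the Euler increments (up to constants)
          nll = 0.5 * ((resid / (sigma * dt ** 0.5)) ** 2 + 2 * torch.log(sigma)).sum()
          opt.zero_grad(); nll.backward(); opt.step()

      print("kappa, mu, sigma:", kappa.item(), mu.item(), sigma.item())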
  18. By: M. Hashem Pesaran; Hayun Song
    Abstract: This document is a follow-up to the paper by Ahmed and Pesaran (2020, AP) and reports state-level forecasts for the 2024 US presidential election. It updates the 3,107-county dataset used by AP and uses the same machine learning techniques as before to select the variables used in forecasting voter turnout and the Republican vote shares by state for 2024. The models forecast the non-swing states correctly but give mixed results for the swing states (Nevada, Arizona, Wisconsin, Michigan, Pennsylvania, North Carolina, and Georgia). Our forecasts for the swing states do not make use of any polling data but confirm the very close nature of the 2024 election, much closer than AP's predictions for 2020. The forecasts are too close to call.
    Keywords: voter turnout, popular and electoral college votes, simultaneity and recursive identification, high dimensional forecasting models, Lasso, OCMT
    JEL: C53 C55 D72
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_11415
  19. By: Herbert Dawid; Domenico Delli Gatti; Luca Eduardo Fierro; Sebastian Poledna
    Abstract: In this paper we examine the role of the design of behavioral rules in agent-based macroeconomic modeling. Building on clear theoretical foundations, we develop a general representation of the behavioral rules governing firms' price and quantity decisions and show how the rules used in four main families of agent-based macroeconomic models can be interpreted as special cases of these general rules. We embed the four variants of these rules in a calibrated agent-based macroeconomic framework and show that they all yield qualitatively very similar dynamics in business-as-usual times. However, the impact of demand, cost, and productivity shocks differs substantially depending on which of the four variants of the price and quantity rules is used.
    Keywords: agent-based macroeconomics, behavioral rules, pricing, forecasting
    JEL: C63 E37
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_11411
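    Illustration only: a stylized adaptive price-and-quantity rule of the kind the paper generalizes, not one of the four model families' actual rules. The expectation rule, the inventory buffer and the adjustment rates are placeholder assumptions.

      import random

      def price_quantity_rule(price, expected_demand, demand, unit_cost, adjustment=0.04):
          """One step of a stylized behavioral rule: adaptive demand expectations,
          a small buffer on planned output, and a bounded random price adjustment."""
          expected_demand += 0.5 * (demand - expected_demand)      # adaptive expectations
          output = 1.1 * expected_demand                           # quantity rule with a 10% buffer
          shock = random.uniform(0, adjustment)
          if demand >= output:                                     # sold out -> raise the price
              price *= 1 + shock
          else:                                                    # excess supply -> cut the price, never below cost
              price = max(unit_cost, price * (1 - shock))
          return price, output, expected_demand

      price, expected = 1.0, 100.0
      for t in range(5):
          demand = random.uniform(80, 120)
          price, output, expected = price_quantity_rule(price, expected, demand, unit_cost=0.8)
          print(f"t={t}  price={price:.3f}  output={output:.1f}  expected demand={expected:.1f}")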

This nep-cmp issue is ©2024 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.