New Economics Papers on Computational Economics
By: | Andrei Neagu; Frédéric Godin; Leila Kosseim |
Abstract: | Dynamic hedging is a financial strategy that consists of periodically transacting one or more financial assets to offset the risk associated with a correlated liability. Deep Reinforcement Learning (DRL) algorithms have been used to find optimal solutions to dynamic hedging problems by framing them as sequential decision-making problems. However, most previous work assesses the performance of only one or two DRL algorithms, making an objective comparison across algorithms difficult. In this paper, we compare the performance of eight DRL algorithms in the context of dynamic hedging: Monte Carlo Policy Gradient (MCPG), Proximal Policy Optimization (PPO), four variants of Deep Q-Learning (DQL), and two variants of Deep Deterministic Policy Gradient (DDPG). Two of these variants represent a novel application to the task of dynamic hedging. In our experiments, we use the Black-Scholes delta hedge as a baseline and simulate the dataset using a GJR-GARCH(1, 1) model. Results show that MCPG, followed by PPO, achieves the best performance in terms of the root semi-quadratic penalty. Moreover, MCPG is the only algorithm to outperform the Black-Scholes delta hedge baseline within the allotted computational budget, possibly due to the sparsity of rewards in our environment. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.05521 |
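As a concrete illustration of the baseline in this entry, here is a minimal sketch of a Black-Scholes delta hedge evaluated on simulated GJR-GARCH(1, 1) paths. All parameter values are invented, and the root semi-quadratic penalty is computed here as the root mean of squared shortfalls, which is one plausible reading rather than the paper's exact specification.

```python
import numpy as np
from scipy.stats import norm

# GJR-GARCH(1, 1) daily log-return simulation; parameter values are invented.
def simulate_gjr_garch(n_paths, n_steps, omega=1e-6, alpha=0.05, gamma=0.10, beta=0.85, seed=0):
    rng = np.random.default_rng(seed)
    eps = np.zeros((n_paths, n_steps))
    sigma2 = np.full(n_paths, omega / (1 - alpha - gamma / 2 - beta))  # start at the unconditional variance
    for t in range(n_steps):
        z = rng.standard_normal(n_paths)
        eps[:, t] = np.sqrt(sigma2) * z
        sigma2 = omega + (alpha + gamma * (eps[:, t] < 0)) * eps[:, t] ** 2 + beta * sigma2
    return eps

def bs_call_delta(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(d1)

n_paths, n_steps, dt = 10_000, 63, 1 / 252            # ~one quarter, daily rebalancing
ret = simulate_gjr_garch(n_paths, n_steps)
S = np.hstack([np.full((n_paths, 1), 100.0), 100.0 * np.exp(np.cumsum(ret, axis=1))])
K, r, sigma_bs = 100.0, 0.0, 0.07                     # sigma_bs: vol plugged into the BS delta

cash, pos = np.zeros(n_paths), np.zeros(n_paths)
for t in range(n_steps):
    delta = bs_call_delta(S[:, t], K, (n_steps - t) * dt, r, sigma_bs)
    cash -= (delta - pos) * S[:, t]                   # rebalance to the new delta
    pos = delta
payoff = np.maximum(S[:, -1] - K, 0.0)
pnl = cash + pos * S[:, -1] - payoff                  # hedging error (initial premium omitted)

# One possible reading of the root semi-quadratic penalty: only shortfalls are penalized.
rsqp = np.sqrt(np.mean(np.maximum(-pnl, 0.0) ** 2))
print(f"RSQP of the Black-Scholes delta hedge: {rsqp:.4f}")
```

A DRL hedger would replace the bs_call_delta rule with a learned policy and be scored on the same penalty.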
By: | Anastasis Kratsios; Xiaofei Shi; Qiang Sun; Zhanhao Zhang |
Abstract: | We present a general computational framework for solving continuous-time financial market equilibria under minimal modeling assumptions while incorporating realistic financial frictions, such as trading costs, and supporting multiple interacting agents. Inspired by generative adversarial networks (GANs), our approach employs a novel generative deep reinforcement learning framework with a decoupling feedback system embedded in the adversarial training loop, which we term the reinforcement link. This architecture stabilizes the training dynamics by incorporating feedback from the discriminator. Our theoretically guided feedback mechanism enables the decoupling of the equilibrium system, overcoming challenges that hinder conventional numerical algorithms. Experimentally, our algorithm not only learns but also provides testable predictions on how asset returns and volatilities emerge from the endogenous trading behavior of market participants, where traditional analytical methods fall short. The design of our model is further supported by an approximation guarantee. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.04300 |
By: | Anindya Sarkar; G. Vadivu |
Abstract: | This research proposes an ensemble deep learning framework for stock price prediction that combines three advanced neural network architectures: Variational Autoencoder (VAE), Transformer, and Long Short-Term Memory (LSTM) networks. The framework is designed to exploit the strengths of each model, identifying both linear and non-linear relations in stock price movements. To improve predictive accuracy, it uses a rich set of technical indicators and scales its predictors according to the current market situation. Tested on several stock datasets and benchmarked against single models and conventional forecasting methods, the ensemble exhibits consistently high accuracy and reliability. The VAE learns linear representations of high-dimensional data, while the Transformer excels at recognizing long-term patterns in the stock price data. The LSTM, as a sequence model, brings further improvements, particularly in capturing temporal dynamics and fluctuations. Combined, these components deliver strong directional performance and small dispersion in the predicted results. The proposed solution offers an approach that can handle the inherent difficulty of stock price prediction with high reliability and scalability. Compared to individual neural network models and classical methods, the ensemble framework demonstrates the advantages of combining different architectures. It has important applications in algorithmic trading, risk analysis, and decision-making for finance professionals and scholars. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.22192 |
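To make the combination step concrete, here is a minimal sketch of one common ensembling rule, inverse-MSE weighting on a validation set. The abstract does not specify the paper's combination rule, and the member forecasts below are synthetic stand-ins for trained VAE, Transformer, and LSTM outputs.

```python
import numpy as np

# Stand-in validation forecasts from the three fitted members; in practice these
# would come from the trained networks, not from noise added to the target.
rng = np.random.default_rng(1)
y_val = rng.standard_normal(200).cumsum()             # toy validation target series
preds_val = {m: y_val + rng.standard_normal(200) * s  # each member = truth + noise
             for m, s in [("vae", 1.0), ("transformer", 0.6), ("lstm", 0.8)]}

# Inverse-MSE weights: members with lower validation error get larger weights.
mse = {m: np.mean((p - y_val) ** 2) for m, p in preds_val.items()}
raw = {m: 1.0 / e for m, e in mse.items()}
total = sum(raw.values())
w = {m: v / total for m, v in raw.items()}

def ensemble(preds: dict) -> np.ndarray:
    """Combine member forecasts with the validation-derived weights."""
    return sum(w[m] * p for m, p in preds.items())

print({m: round(v, 3) for m, v in w.items()})
print("ensemble MSE:", round(float(np.mean((ensemble(preds_val) - y_val) ** 2)), 3))
```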
By: | Tom L. Dudda; Lars Hornuf |
Abstract: | We examine predictive machine learning studies from 50 top business and economic journals published between 2010 and 2023. We investigate their transparency regarding the predictive performance of machine learning models compared to less complex traditional statistical models that require fewer resources in terms of time and energy. We find that the adoption of machine learning varies by discipline, and is most frequently used in information systems, marketing, and operations research journals. Our analysis also reveals that 28% of studies do not benchmark the predictive performance of machine learning models against traditional statistical models. These studies receive fewer citations, arguably due to a less rigorous analysis. Studies including traditional statistical models as benchmarks typically report high outperformance for the best machine learning model. However, the performance improvement is substantially lower for the average reported machine learning model. We contend that, due to opaque reporting practices, it often remains unclear whether the predictive gains justify the increased costs of more complex models. We advocate for standardized, transparent model reporting that relates predictive gains to the efficiency of machine learning models compared to less-costly traditional statistical models. |
Keywords: | machine learning, predictive modelling, transparent model reporting |
JEL: | C18 C40 C52 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_11721 |
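The benchmarking practice the authors advocate can be shown with a small, self-contained comparison: an OLS benchmark against a gradient-boosting model on a synthetic nonlinear task, with the out-of-sample gain reported explicitly. The dataset and models are placeholders, not those of any surveyed study.

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic nonlinear prediction task standing in for a typical business dataset.
X, y = make_friedman1(n_samples=2000, noise=1.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, model in [("OLS benchmark", LinearRegression()),
                    ("Gradient boosting", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    results[name] = r2_score(y_te, model.predict(X_te))

gain = results["Gradient boosting"] - results["OLS benchmark"]
for name, r2 in results.items():
    print(f"{name}: out-of-sample R^2 = {r2:.3f}")
print(f"Predictive gain over the benchmark: {gain:.3f} R^2 points")
```

Reporting both numbers side by side, rather than only the best model, is exactly the transparency the study calls for.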
By: | Kühl, Niklas; Schemmer, Max; Goutier, Marc; Satzger, Gerhard |
Abstract: | Within the last decade, the application of "artificial intelligence" and "machine learning" has become popular across multiple disciplines, especially in information systems. The two terms are still used inconsistently in academia and industry—sometimes as synonyms, sometimes with different meanings. With this work, we try to clarify the relationship between these concepts. We review the relevant literature and develop a conceptual framework to specify the role of machine learning in building (artificial) intelligent agents. Additionally, we propose a consistent typology for AI-based information systems. We contribute to a deeper understanding of the nature of both concepts and to more terminological clarity and guidance—as a starting point for interdisciplinary discussions and future research. |
Date: | 2025–04–02 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:153962 |
By: | Jiayin Liu; Chenglong Zhang |
Abstract: | Auctions are important mechanisms extensively implemented in various markets, e.g., search engines' keyword auctions, antique auctions, etc. Finding an optimal auction mechanism is extremely difficult due to the constraints of imperfect information, incentive compatibility (IC), and individual rationality (IR). In addition to traditional economic methods, some recent work has attempted to find the optimal (single) auction using deep learning methods. Unlike those attempts, which focus on single auctions, we develop deep learning methods for double auctions, where imperfect information exists on both the demand and supply sides. The previous single-auction approaches cannot be applied directly in our context, and they additionally suffer from limited generalizability, inefficiency in enforcing the constraints, and learning fluctuations. We innovate in designing deep learning models that solve this more complex problem while also addressing these three limitations. Specifically, we achieve generalizability by leveraging a transformer-based architecture that models market participants as sequences, accommodating varying market sizes; we exploit the numerical features of the constraints and pre-treat them for higher learning efficiency; and we develop a gradient-conflict-elimination scheme to address learning fluctuations. Extensive experimental evaluations demonstrate the superiority of our approach over classical and machine learning baselines. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.05355 |
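For context on the mechanism being learned, below is a textbook k-double auction with uniform pricing: a classical, non-learned clearing rule that is individually rational but not incentive compatible in general, which is precisely the tension the paper's deep learning approach tackles. This is background illustration, not the authors' model.

```python
from dataclasses import dataclass

@dataclass
class Order:
    agent: str
    price: float

def k_double_auction(bids, asks, k=0.5):
    """Uniform-price k-double auction: trade while the t-th highest bid meets the
    t-th lowest ask; the clearing price interpolates the marginal bid/ask pair.
    IR holds for all trading pairs, but truthful bidding is not a dominant strategy."""
    bids = sorted(bids, key=lambda o: -o.price)
    asks = sorted(asks, key=lambda o: o.price)
    t = 0
    while t < min(len(bids), len(asks)) and bids[t].price >= asks[t].price:
        t += 1
    if t == 0:
        return None, []                               # no mutually beneficial trade
    price = k * bids[t - 1].price + (1 - k) * asks[t - 1].price
    trades = [(bids[i].agent, asks[i].agent, price) for i in range(t)]
    return price, trades

buyers = [Order("b1", 10.0), Order("b2", 8.0), Order("b3", 5.0)]
sellers = [Order("s1", 4.0), Order("s2", 7.0), Order("s3", 9.0)]
price, trades = k_double_auction(buyers, sellers)
print("clearing price:", price)
for buyer, seller, p in trades:
    print(f"{buyer} buys from {seller} at {p}")
```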
By: | Yue Yin |
Abstract: | In online advertising systems, publishers often face a trade-off in information disclosure strategies: while disclosing more information can enhance efficiency by enabling optimal allocation of ad impressions, it may lose revenue potential by decreasing uncertainty among competing advertisers. Similar to other challenges in market design, understanding this trade-off is constrained by limited access to real-world data, leading researchers and practitioners to turn to simulation frameworks. The recent emergence of large language models (LLMs) offers a novel approach to simulations, providing human-like reasoning and adaptability without necessarily relying on explicit assumptions about agent behavior modeling. Despite their potential, existing frameworks have yet to integrate LLM-based agents for studying information asymmetry and signaling strategies, particularly in the context of auctions. To address this gap, we introduce InfoBid, a flexible simulation framework that leverages LLM agents to examine the effects of information disclosure strategies in multi-agent auction settings. Using GPT-4o, we implemented simulations of second-price auctions with diverse information schemas. The results reveal key insights into how signaling influences strategic behavior and auction outcomes, which align with both economic and social learning theories. Through InfoBid, we hope to foster the use of LLMs as proxies for human economic and social agents in empirical studies, enhancing our understanding of their capabilities and limitations. This work bridges the gap between theoretical market designs and practical applications, advancing research in market simulations, information design, and agent-based reasoning while offering a valuable tool for exploring the dynamics of digital economies. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.22726 |
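A minimal skeleton of the kind of simulation InfoBid runs: a second-price auction in which each bid would come from an LLM call. The llm_bid function below is a stub standing in for a GPT-4o prompt-and-parse step, and the bid-shading rule under coarse disclosure is an invented placeholder, not model output.

```python
import random

def llm_bid(agent_id: str, signal: str, private_value: float) -> float:
    """Stub for an LLM call: an InfoBid-style setup would prompt GPT-4o with the
    publisher's disclosed signal and the agent's context, then parse a bid from
    the reply. The shading rule here is a hypothetical placeholder."""
    shade = 1.0 if signal == "full" else 0.9   # less disclosure -> more cautious bid
    return private_value * shade

def second_price_auction(values: dict, signal: str):
    bids = {a: llm_bid(a, signal, v) for a, v in values.items()}
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]              # winner pays the second-highest bid

random.seed(0)
values = {f"adv{i}": random.uniform(0, 10) for i in range(4)}
for signal in ("full", "coarse"):
    winner, price = second_price_auction(values, signal)
    print(f"{signal} disclosure: {winner} wins and pays {price:.2f}")
```

Swapping the stub for real model calls is what turns this loop into an agent-based study of disclosure strategies.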
By: | Giovanni Dosi; Marcelo C. Pereira; Gabriel Petrini; Andrea Roventini; Maria Enrica Virgillito |
Abstract: | Agent-Based Models (ABMs) provide powerful tools for economic analysis, capturing micro-to-macro interactions and emergent properties. However, their integration with empirical data has been a persistent challenge. To address it, we propose a protocol for integrating empirical data with ABMs, building a new multidimensional similarity index that aggregates different similarity measures into a composite score, specifically designed to quantify alignment between simulated and real-world data. This metric enables a complete model-ranking procedure, facilitating streamlined model selection. The protocol is designed to be model-agnostic and flexible, allowing its application to a wide range of models beyond ABMs, including aggregate dynamical systems and any type of computational model. As an example, we apply our methodology to different configurations and model versions of the Schumpeter meeting Keynes (K+S) ABM family (Dosi, Fagiolo, and Roventini, 2010) using US data from 1948Q1 to 2019Q1. Next, we propose a policy-informed application, attributing different weights to variables associated with policy-making decisions and technological change. This exercise showcases the capacity of the procedure to target specific policy variables of interest, allowing for the design of empirically informed scenario analyses and projections on real-world dynamics. |
Date: | 2025–04–23 |
URL: | https://d.repec.org/n?u=RePEc:ssa:lemwps:2025/17 |
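To make the aggregation idea concrete, here is a toy sketch of a composite similarity score: several per-variable similarity measures are averaged, then combined across variables with user-chosen weights so that policy variables can be up-weighted. The specific measures, weights, and data below are illustrative assumptions, not the protocol's actual components.

```python
import numpy as np

def similarity_measures(sim: np.ndarray, real: np.ndarray) -> dict:
    """A few simple alignment measures between a simulated and a real series,
    stand-ins for whichever measures the protocol actually aggregates."""
    def acf(x, lags=8):
        x = x - x.mean()
        return np.array([np.corrcoef(x[:-l], x[l:])[0, 1] for l in range(1, lags + 1)])
    return {
        "moments": 1 / (1 + abs(sim.mean() - real.mean()) + abs(sim.std() - real.std())),
        "autocorr": 1 / (1 + np.linalg.norm(acf(sim) - acf(real))),
        "quantiles": 1 / (1 + np.abs(np.quantile(sim, [.1, .5, .9])
                                     - np.quantile(real, [.1, .5, .9])).sum()),
    }

def composite_index(sim_data: dict, real_data: dict, var_weights: dict) -> float:
    """Aggregate measure scores across variables into one score in (0, 1];
    var_weights lets a policy-focused user up-weight variables of interest."""
    score = sum(w * np.mean(list(similarity_measures(sim_data[v], real_data[v]).values()))
                for v, w in var_weights.items())
    return score / sum(var_weights.values())

rng = np.random.default_rng(0)
real = {"gdp_growth": rng.normal(0.5, 1.0, 280), "inflation": rng.normal(0.8, 0.6, 280)}
models = {"model A": 0.05, "model B": 0.3}     # model -> noise tilt (toy)
for name, tilt in models.items():
    sim = {k: v + rng.normal(tilt, 0.3, v.size) for k, v in real.items()}
    idx = composite_index(sim, real, {"gdp_growth": 2.0, "inflation": 1.0})
    print(f"{name}: composite similarity = {idx:.3f}")
```

Ranking candidate model configurations by this single score is what enables the streamlined model selection described above.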
By: | Clayton, Christopher; Coppola, Antonio; Maggiori, Matteo (Stanford University); Schreger, Jesse |
Abstract: | Economic pressure, the use of economic means by governments to achieve geopolitical ends, has become a prominent feature of global power dynamics. This paper introduces a methodology using large language models (LLMs) to systematically extract signals of geoeconomic pressure from large textual corpora. We quantify not just the direct effects of implemented policies but also the off-path threats that induce compliance without formal action. We systematically identify the governments, firms, tools, and activities involved in this pressure. We demonstrate that firms respond differently to various forms of economic pressure, as well as responding differently to implemented policies versus threats of future pressure. |
Date: | 2025–03–01 |
URL: | https://d.repec.org/n?u=RePEc:osf:socarx:zsc4x_v1 |
By: | Shovon Sengupta; Bhanu Pratap; Amit Pawar |
Abstract: | The conventional linear Phillips curve model, while widely used in policymaking, often struggles to deliver accurate forecasts in the presence of structural breaks and inherent nonlinearities. This paper addresses these limitations by leveraging machine learning methods within a New Keynesian Phillips Curve framework to forecast and explain headline inflation in India, a major emerging economy. Our analysis demonstrates that machine learning approaches significantly outperform standard linear models in forecasting accuracy. Moreover, by employing explainable machine learning techniques, we reveal that the Phillips curve relationship in India is highly nonlinear, characterized by thresholds and interaction effects among key variables. Headline inflation is primarily driven by inflation expectations, followed by past inflation and the output gap, while supply shocks, except rainfall, exert only a marginal influence. These findings highlight the ability of machine learning models to improve forecast accuracy and uncover complex, nonlinear dynamics in inflation data, offering valuable insights for policymakers. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.05350 |
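A small sketch of the modeling pattern described: fit a flexible learner on a synthetic inflation series with a threshold effect in the output gap, then rank the drivers with permutation importance as a simple stand-in for the paper's explainability tools. The data-generating process, coefficients, and variable set are all invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Toy quarterly data mimicking the abstract's drivers of headline inflation.
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.normal(4, 1.0, n),   # inflation expectations
    rng.normal(4, 1.5, n),   # lagged inflation
    rng.normal(0, 1.0, n),   # output gap
    rng.normal(0, 1.0, n),   # rainfall shock
])
# Nonlinear DGP with a threshold in the output gap (purely illustrative).
y = (0.6 * X[:, 0] + 0.3 * X[:, 1]
     + 0.8 * np.maximum(X[:, 2] - 0.5, 0)   # gap matters only above a threshold
     - 0.2 * X[:, 3] + rng.normal(0, 0.3, n))

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in zip(["expectations", "lagged inflation", "output gap", "rainfall"],
                       imp.importances_mean):
    print(f"{name:>17}: {score:.3f}")
```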
By: | Alejandro Lopez-Lira; Jihoon Kwon; Sangwoon Yoon; Jy-yong Sohn; Chanyeol Choi |
Abstract: | The rapid advancements in Large Language Models (LLMs) have unlocked transformative possibilities in natural language processing, particularly within the financial sector. Financial data is often embedded in intricate relationships across textual content, numerical tables, and visual charts, posing challenges that traditional methods struggle to address effectively. However, the emergence of LLMs offers new pathways for processing and analyzing this multifaceted data with increased efficiency and insight. Despite the fast pace of innovation in LLM research, there remains a significant gap in their practical adoption within the finance industry, where cautious integration and long-term validation are prioritized. This disparity has led to a slower implementation of emerging LLM techniques, despite their immense potential in financial applications. As a result, many of the latest advancements in LLM technology remain underexplored or not fully utilized in this domain. This survey seeks to bridge this gap by providing a comprehensive overview of recent developments in LLM research and examining their applicability to the financial sector. Building on previous survey literature, we highlight several novel LLM methodologies, exploring their distinctive capabilities and their potential relevance to financial data analysis. By synthesizing insights from a broad range of studies, this paper aims to serve as a valuable resource for researchers and practitioners, offering direction on promising research avenues and outlining future opportunities for advancing LLM applications in finance. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.22693 |
By: | Zhenwei Lin (Graduate School of Economics, University of Tokyo); Masafumi Nakano (GCI Asset Management); Akihiko Takahashi (Graduate School of Economics, The University of Tokyo) |
Abstract: | This paper presents a novel approach to sentiment analysis in the context of investments in the Japanese stock market. Specifically, we begin by creating an original set of keywords derived from news headlines sourced from a Japanese financial news platform. Subsequently, we develop new polarity scores for these keywords, based on market returns, to construct sentiment lexicons. These lexicons are then utilized to guide investment decisions regarding the stocks of companies included in either the TOPIX 500 or the Nikkei 225, which are Japan’s representative stock indices. Furthermore, empirical studies validate the effectiveness of our proposed method, which significantly outperforms a ChatGPT-based sentiment analysis approach. This provides strong evidence for the advantage of integrating market data into textual sentiment evaluation to enhance financial investment strategies. |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:cfi:fseres:cf601 |
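The lexicon construction can be illustrated in a few lines: score each keyword by the average market reaction following headlines that contain it, then score new headlines with the resulting lexicon. The corpus, returns, and averaging rule below are toy assumptions, not the paper's data or exact polarity formula.

```python
from collections import defaultdict

# Toy corpus: (headline, next-day market-adjusted return of the covered stock).
headlines = [
    ("earnings surge at automaker", 0.012),
    ("regulator probes bank", -0.020),
    ("earnings beat forecasts", 0.008),
    ("probes widen into insurer", -0.011),
    ("automaker recalls vehicles", -0.006),
]

# Polarity of a keyword = average return observed after headlines containing it,
# mirroring the idea of scoring keywords by market reaction rather than by hand.
returns_by_word = defaultdict(list)
for text, ret in headlines:
    for word in set(text.split()):
        returns_by_word[word].append(ret)
lexicon = {w: sum(rs) / len(rs) for w, rs in returns_by_word.items()}

def headline_score(text: str) -> float:
    """Average polarity of a headline's words; unknown words score zero."""
    scores = [lexicon.get(w, 0.0) for w in text.split()]
    return sum(scores) / len(scores)

print(f"'earnings' polarity: {lexicon['earnings']:+.4f}")
print(f"score('earnings beat at bank'): {headline_score('earnings beat at bank'):+.4f}")
```

A trading rule would then go long stocks whose news flow scores above a threshold and avoid or short those below it.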
By: | Alejandro Rodriguez Dominguez |
Abstract: | Fundamental and necessary principles for achieving efficient portfolio optimization based on asset and diversification dynamics are presented. The Commonality Principle is a necessary and sufficient condition for identifying optimal drivers of a portfolio in terms of its diversification dynamics. The proof relies on the Reichenbach Common Cause Principle, along with the fact that the sensitivities of portfolio constituents with respect to the common causal drivers are themselves causal. A conformal map preserves idiosyncratic diversification from the unconditional setting while optimizing systematic diversification on an embedded space of these sensitivities. Causal methodologies for combinatorial driver selection are presented, such as the use of Bayesian networks and correlation-based algorithms from Reichenbach's principle. Limitations of linear models in capturing causality are discussed, and included for completeness alongside more advanced models such as neural networks. Portfolio optimization methods are presented that map risk from the sensitivity space to other risk measures of interest. Finally, the work introduces a novel risk management framework based on Common Causal Manifolds, including both theoretical development and experimental validation. The sensitivity space is predicted along the common causal manifold, which is modeled as a causal time system. Sensitivities are forecasted using SDEs calibrated to data previously extracted from neural networks to move along the manifold via its tangent bundles. An optimization method is then proposed that accumulates information across future predicted tangent bundles on the common causal time system manifold. It aggregates sensitivity-based distance metrics along the trajectory to build a comprehensive sensitivity distance matrix. This matrix enables trajectory-wide optimal diversification, taking into account future dynamics. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.05743 |
By: | Minshuo Chen; Renyuan Xu; Yumin Xu; Ruixun Zhang |
Abstract: | Financial scenario simulation is essential for risk management and portfolio optimization, yet it remains challenging, especially in the high-dimensional and small data settings common in finance. We propose a diffusion factor model that integrates latent factor structure into generative diffusion processes, bridging econometrics with modern generative AI to address the challenges of the curse of dimensionality and data scarcity in financial simulation. By exploiting the low-dimensional factor structure inherent in asset returns, we decompose the score function, a key component in diffusion models, using time-varying orthogonal projections, and this decomposition is incorporated into the design of neural network architectures. We derive rigorous statistical guarantees, establishing nonasymptotic error bounds of O(d^{5/2} n^{-2/(k+5)}) for score estimation and O(d^{5/4} n^{-1/(2(k+5))}) for the generated distribution, primarily driven by the intrinsic factor dimension k rather than the number of assets d, surpassing the dimension-dependent limits in the classical nonparametric statistics literature and making the framework viable for markets with thousands of assets. Numerical studies confirm superior performance in latent subspace recovery under small data regimes. Empirical analysis demonstrates the economic significance of our framework in constructing mean-variance optimal portfolios and factor portfolios. This work presents the first theoretical integration of factor structure with diffusion models, offering a principled approach for high-dimensional financial simulation with limited data. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.06566 |
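The core decomposition idea can be sketched as follows: estimate the k-dimensional factor subspace of returns (here by a plain SVD) and split a score estimate into its on-subspace and residual parts via the orthogonal projection UU^T. This is a schematic of the projection step only, under our own simplifying assumptions; the paper's time-varying projections and network design are not reproduced.

```python
import numpy as np

# Idea: returns x in R^d have a k-factor structure x ~ B f + noise, so the score
# of the diffused distribution concentrates near a k-dim subspace. We estimate
# that subspace and decompose a score estimate into on- and off-subspace parts.
rng = np.random.default_rng(0)
d, k, n = 50, 3, 1000
B = rng.standard_normal((d, k))
X = rng.standard_normal((n, k)) @ B.T + 0.05 * rng.standard_normal((n, d))

# Orthonormal basis of the estimated factor subspace (top-k right singular vectors).
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
U = Vt[:k].T                        # d x k, with U^T U = I_k
P = U @ U.T                         # orthogonal projection onto the subspace

def decompose_score(score_vec: np.ndarray):
    """Split a d-dim score estimate into factor and residual components; in the
    paper's design a network models the low-dim part, here we only project."""
    on = P @ score_vec
    return on, score_vec - on

s = rng.standard_normal(d)           # stand-in for a network's score output at (x, t)
on, off = decompose_score(s)
print("share of squared norm in the k-dim part:",
      round(float(np.linalg.norm(on) ** 2 / np.linalg.norm(s) ** 2), 3))
```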
By: | Carro, Adrian; Hinterschweiger, Marc; Uluc, Arzu; Borsos, András; Kaszowska-Mojsa, Jagoda (Institute for New Economic Thinking, University of Oxford); Glielmo, Aldo (Bank of Italy) |
Abstract: | Over the past decade, agent-based models (ABMs) have been increasingly employed as analytical tools within economic policy institutions. This chapter documents this trend by surveying the ABM-relevant research and policy outputs of central banks and other related economic policy institutions. We classify these studies and reports into three main categories: (i) applied research connected to the mandates of central banks, (ii) technical and methodological research supporting the advancement of ABMs; and (iii) examples of the integration of ABMs into policy work. Our findings indicate that ABMs have emerged as effective complementary tools for central banks in carrying out their responsibilities, especially after the extension of their mandates following the global financial crisis of 2007-2009. While acknowledging that room for improvement remains, we argue that integrating ABMs into the analytical frameworks of central banks can support more effective policy responses to both existing and emerging economic challenges, including financial innovation and climate change. |
Keywords: | Agent-based models, household analysis, financial institutions, central bank policies, monetary policy, prudential policies |
JEL: | C63 E37 E58 |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:amz:wpaper:2025-05 |
By: | Juan Tenorio; Heidi Alpiste; Jakelin Remón; Arian Segil |
Abstract: | In recent years, the use of databases that analyze trends, sentiments, or news to make economic projections or create indicators has gained significant popularity, particularly through the Google Trends platform. This article explores the potential of Google search data to develop a new index that improves economic forecasts, with a particular focus on one of the key components of economic activity: private consumption (64% of GDP in Peru). Machine learning techniques are applied to selected and estimated categorized variables, demonstrating that Google data can identify patterns that generate a real-time leading indicator and improve the accuracy of forecasts. Finally, the results show that Google's "Food" and "Tourism" categories significantly reduce projection errors, highlighting the importance of using this information in a segmented manner to improve macroeconomic forecasts. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.21981 |
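A minimal version of the exercise: compare expanding-window, one-step-ahead forecasts of consumption growth with and without Google category regressors. The series, coefficients, and OLS forecaster below are illustrative stand-ins for the paper's data and machine learning models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy monthly data: consumption growth plus two Google category indices.
rng = np.random.default_rng(0)
T = 120
food, tourism = rng.normal(size=(2, T))
consumption = 0.5 * food + 0.3 * tourism + 0.4 * rng.normal(size=T)

def oos_mae(features: np.ndarray) -> float:
    """Expanding-window one-step-ahead forecasts, mimicking a real-time nowcast."""
    errs = []
    for t in range(60, T):
        model = LinearRegression().fit(features[:t], consumption[:t])
        errs.append(abs(model.predict(features[t:t + 1])[0] - consumption[t]))
    return float(np.mean(errs))

baseline = np.column_stack([np.roll(consumption, 1)])                 # lag only
augmented = np.column_stack([np.roll(consumption, 1), food, tourism]) # + Google
print(f"MAE, lag only:      {oos_mae(baseline):.3f}")
print(f"MAE, + Google cats: {oos_mae(augmented):.3f}")
```

The drop in out-of-sample MAE when the category indices are added is the kind of gain the paper reports for its "Food" and "Tourism" categories.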
By: | Pangallo, Marco; Lafond, François; Farmer, J. Doyne; Wiese, Samuel; Muellbauer, John; Moran, José; Dyer, Joel; Kaszowska-Mojsa, Jagoda (Institute for New Economic Thinking, University of Oxford); Calinescu, Anisoara (Institute for New Economic Thinking, University of Oxford) |
Abstract: | In the last few years, economic agent-based models have made the transition from qualitative models calibrated to match stylised facts to quantitative models for time series forecasting, and in some cases their predictions have performed as well as or better than those of standard models (see, e.g., Poledna et al. (2023a); Hommes et al. (2022); Pichler et al. (2022)). Here, we build on the model of Poledna et al., adding several new features such as housing markets, realistic synthetic populations of individuals with income, wealth and consumption heterogeneity, enhanced behavioural rules and market mechanisms, and an enhanced credit market. We calibrate our model for all 38 OECD member countries using state-of-the-art approximate Bayesian inference methods and test it by making out-of-sample forecasts. It outperforms both the Poledna et al. model and an AR(1) time series benchmark by a highly statistically significant margin. Our model is built within a platform we have developed that makes it easy to build, run, and evaluate alternative models, which we hope will encourage future work in this area. |
Keywords: | Agent-based models, Bayesian estimation, Economic forecasting |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:amz:wpaper:2024-06 |
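For reference, the AR(1) benchmark that the ABM's forecasts are judged against fits in a few lines of numpy. The series below is simulated rather than real OECD data, and the fitting is plain OLS.

```python
import numpy as np

def fit_ar1(y: np.ndarray):
    """OLS fit of y_t = c + phi * y_{t-1} + e_t, the standard univariate benchmark."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c, phi

def forecast_ar1(last_value: float, horizon: int, c: float, phi: float) -> np.ndarray:
    path, last = [], last_value
    for _ in range(horizon):
        last = c + phi * last          # iterate the fitted recursion forward
        path.append(last)
    return np.array(path)

rng = np.random.default_rng(0)
y = np.empty(160)                      # toy quarterly GDP-growth series
y[0] = 0.5
for t in range(1, 160):
    y[t] = 0.3 + 0.5 * y[t - 1] + rng.normal(0, 0.8)

c, phi = fit_ar1(y[:120])              # estimate on the in-sample window
fc = forecast_ar1(y[119], 8, c, phi)   # 8-quarter out-of-sample forecast
rmse = np.sqrt(np.mean((fc - y[120:128]) ** 2))
print(f"AR(1): c={c:.2f}, phi={phi:.2f}, 8-step RMSE={rmse:.2f}")
```

An ABM "outperforms" in this setting when its out-of-sample RMSE is significantly below that of the fitted AR(1).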
By: | Hess, Simon |
Abstract: | This paper studies the effects of introducing a Central Bank Digital Currency (CBDC) on economic output, bank intermediation, and financial stability in a closed economy using an Agent-based Stock Flow Consistent (AB-SFC) model. A digital bank run is simulated across various economic environments with different monetary policy and bank bankruptcy regimes. According to the model, non-remunerated CBDC issued in a positive-interest environment with a corridor system may increase GDP through increased seigniorage income and government spending. Bank funding also becomes more expensive, since bank deposit stickiness is prevented. Non-remunerated CBDC issued in a zero-interest environment has no impact, since there is no distributional effect of the interest payments. In a floor system where the interest rate on CBDC matches the policy rate, CBDC also counteracts deposit stickiness and redistributes bank profits from shareholders to depositors, thereby improving the transmission of the policy rate to households and firms. The bank bankruptcy regime also affects the outcome. While CBDC makes no difference in a bailout regime, it does in a bail-in regime, where it decreases inequality and distributes bank rescue costs evenly among households and firms, potentially enhancing financial stability. Introducing CBDC within a deposit insurance system postpones bank rescue payments, which creates an additional dynamic in GDP. |
Keywords: | central bank digital currency, agent-based model, bank run, bailout, bail-in, financial stability |
JEL: | E42 E58 G21 G23 G28 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:zbw:roswps:171 |
By: | Amin Haeri; Jonathan Vitrano; Mahdi Ghelichi |
Abstract: | Risk management in finance involves recognizing, evaluating, and addressing financial risks to maintain stability and ensure regulatory compliance. Extracting relevant insights from extensive regulatory documents is a complex challenge requiring advanced retrieval and language models. This paper introduces RiskData, a dataset specifically curated for finetuning embedding models in risk management, and RiskEmbed, a finetuned embedding model designed to improve retrieval accuracy in financial question-answering systems. The dataset is derived from 94 regulatory guidelines published by the Office of the Superintendent of Financial Institutions (OSFI) from 1991 to 2024. We finetune a state-of-the-art sentence-BERT embedding model to enhance domain-specific retrieval performance, particularly for Retrieval-Augmented Generation (RAG) systems. Experimental results demonstrate that RiskEmbed significantly outperforms general-purpose and financial embedding models, achieving substantial improvements in ranking metrics. By open-sourcing both the dataset and the model, we provide a valuable resource for financial institutions and researchers aiming to develop more accurate and efficient risk management AI solutions. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.06293 |
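The retrieval step that an embedding model like RiskEmbed improves can be sketched as embed-then-rank by cosine similarity. The embed function below is a hashed bag-of-words stub standing in for a finetuned sentence-BERT encoder, and the passages are invented, not OSFI text.

```python
import numpy as np

def embed(texts):
    """Stand-in for a finetuned sentence encoder; a real pipeline would call the
    model's encode() method. Here: hashed bag-of-words vectors, unit-normalized."""
    dim = 256
    out = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for w in t.lower().split():
            out[i, hash(w) % dim] += 1.0
    return out / np.maximum(np.linalg.norm(out, axis=1, keepdims=True), 1e-9)

# Toy regulatory passages and a query, mimicking guideline retrieval for RAG.
passages = [
    "capital adequacy requirements for deposit taking institutions",
    "liquidity coverage ratio disclosure expectations",
    "model risk management guideline for internal models",
]
query = "what are the capital adequacy requirements"

P, q = embed(passages), embed([query])[0]
scores = P @ q                        # cosine similarity (rows are unit-norm)
for rank, idx in enumerate(np.argsort(-scores), 1):
    print(f"{rank}. ({scores[idx]:.3f}) {passages[idx]}")
```

Ranking metrics such as recall@k over query-passage pairs are then what distinguish a domain-finetuned encoder from a general-purpose one.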