nep-cmp New Economics Papers
on Computational Economics
Issue of 2024‒03‒25
nineteen papers chosen by



  1. CNN-DRL with Shuffled Features in Finance By Sina Montazeri; Akram Mirzaeinia; Amir Mirzaeinia
  2. Reinforcement Learning for Optimal Execution when Liquidity is Time-Varying By Andrea Macrì; Fabrizio Lillo
  3. Social Environment Design By Edwin Zhang; Sadie Zhao; Tonghan Wang; Safwan Hossain; Henry Gasztowtt; Stephan Zheng; David C. Parkes; Milind Tambe; Yiling Chen
  4. Extending the Scope of Inference About Predictive Ability to Machine Learning Methods By Juan Carlos Escanciano; Ricardo Parra
  5. CaT-GNN: Enhancing Credit Card Fraud Detection via Causal Temporal Graph Neural Networks By Yifan Duan; Guibin Zhang; Shilong Wang; Xiaojiang Peng; Wang Ziqi; Junyuan Mao; Hao Wu; Xinke Jiang; Kun Wang
  6. MDGNN: Multi-Relational Dynamic Graph Neural Network for Comprehensive and Dynamic Stock Investment Prediction By Hao Qian; Hongting Zhou; Qian Zhao; Hao Chen; Hongxiang Yao; Jingwei Wang; Ziqi Liu; Fei Yu; Zhiqiang Zhang; Jun Zhou
  7. Alpha-GPT 2.0: Human-in-the-Loop AI for Quantitative Investment By Hang Yuan; Saizhuo Wang; Jian Guo
  8. Deep Hedging with Market Impact By Andrei Neagu; Frédéric Godin; Clarence Simard; Leila Kosseim
  9. Loquacity and visible emotion: ChatGPT as a policy advisor By Claudia Biancotti; Carolina Camassa
  10. Cross-Temporal Forecast Reconciliation at Digital Platforms with Machine Learning By Jeroen Rombouts; Marie Ternes; Ines Wilms
  11. Machine Learning and Data-Driven Approaches in Spatial Statistics: A Case Study of Housing Price Estimation By Sarah Soleiman; Julien Randon-Furling; Marie Cottrell
  12. Learning from the Past: The Role of Personal Experiences in Artificial Stock Markets By Lenhard, Gregor
  13. Quantifying neural network uncertainty under volatility clustering By Steven Y. K. Wong; Jennifer S. K. Chan; Lamiae Azizi
  14. Shall We Talk: Exploring Spontaneous Collaborations of Competing LLM Agents By Zengqing Wu; Shuyuan Zheng; Qianying Liu; Xu Han; Brian Inhyuk Kwon; Makoto Onizuka; Shaojie Tang; Run Peng; Chuan Xiao
  15. The Heterogeneous Productivity Effects of Generative AI By David H. Kreitmeir; Paul A. Raschky
  16. Machine Learning Who to Nudge: Causal vs Predictive Targeting in a Field Experiment on Student Financial Aid Renewal By Athey, Susan; Keleher, Niall; Spiess, Jann
  17. Enhancing Rolling Horizon Production Planning Through Stochastic Optimization Evaluated by Means of Simulation By Manuel Schlenkrich; Wolfgang Seiringer; Klaus Altendorfer; Sophie N. Parragh
  18. Introducing Textual Measures of Central Bank Policy-Linkages Using ChatGPT By Leek, Lauren Caroline; Bischl, Simeon; Freier, Maximilian
  19. Exact simulation scheme for the Ornstein-Uhlenbeck driven stochastic volatility model with the Karhunen-Loève expansions By Jaehyuk Choi

  1. By: Sina Montazeri; Akram Mirzaeinia; Amir Mirzaeinia
    Abstract: Prior work observed that applying a Convolutional Neural Network agent within Deep Reinforcement Learning to financial data yields higher rewards. In this study, a specific permutation is applied to the feature vector, generating a CNN input matrix that places more pertinent features in close proximity. Our comprehensive experimental evaluations demonstrate a substantial improvement in reward attainment.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.03338&r=cmp
  2. By: Andrea Macrì; Fabrizio Lillo
    Abstract: Optimal execution is an important problem faced by any trader. Most solutions are based on the assumption of constant market impact, while liquidity is known to be dynamic. Moreover, models with time-varying liquidity typically assume that it is observable, despite the fact that, in reality, it is latent and hard to measure in real time. In this paper we show that Double Deep Q-learning, a form of Reinforcement Learning based on neural networks, can learn optimal trading policies when liquidity is time-varying. Specifically, we consider an Almgren-Chriss framework with temporary and permanent impact parameters following several deterministic and stochastic dynamics. Using extensive numerical experiments, we show that the trained algorithm recovers the optimal policy when an analytical solution is available, and outperforms benchmarks and approximate solutions when it is not.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.12049&r=cmp
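The policy in the entry above is trained with Double Deep Q-learning, which decouples action selection (done by the online network) from action evaluation (done by the target network). A minimal sketch of that target computation follows; all shapes and values are purely illustrative and have no connection to the authors' implementation:

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, gamma, done):
    """Double DQN: select the argmax action with the online net,
    evaluate it with the target net."""
    best_actions = np.argmax(q_online_next, axis=1)                    # selection
    evaluated = q_target_next[np.arange(len(rewards)), best_actions]   # evaluation
    return rewards + gamma * evaluated * (1.0 - done)

# Toy batch of 2 transitions with 3 actions each.
q_online_next = np.array([[1.0, 3.0, 2.0],
                          [0.5, 0.1, 0.9]])
q_target_next = np.array([[0.8, 1.5, 2.2],
                          [1.0, 0.0, 0.4]])
rewards = np.array([1.0, -0.5])
done = np.array([0.0, 1.0])  # second transition is terminal

targets = double_dqn_targets(q_online_next, q_target_next, rewards, 0.99, done)
```

In the execution setting studied in the paper, the state would include inventory and time remaining, and the reward would be the (negative) Almgren-Chriss execution cost.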
  3. By: Edwin Zhang; Sadie Zhao; Tonghan Wang; Safwan Hossain; Henry Gasztowtt; Stephan Zheng; David C. Parkes; Milind Tambe; Yiling Chen
    Abstract: Artificial Intelligence (AI) holds promise as a technology that can be used to improve government and economic policy-making. This paper proposes a new research agenda towards this end by introducing Social Environment Design, a general framework for the use of AI for automated policy-making that connects with the Reinforcement Learning, EconCS, and Computational Social Choice communities. The framework seeks to capture general economic environments, includes voting on policy objectives, and gives a direction for the systematic analysis of government and economic policy through AI simulation. We highlight key open problems for future research in AI-based policy-making. By solving these challenges, we hope to achieve various social welfare objectives, thereby promoting more ethical and responsible decision making.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.14090&r=cmp
  4. By: Juan Carlos Escanciano; Ricardo Parra
    Abstract: Though out-of-sample forecast evaluation is systematically employed with modern machine learning methods and there exists a well-established classic inference theory for predictive ability, see, e.g., West (1996, Asymptotic Inference About Predictive Ability, \textit{Econometrica}, 64, 1067-1084), such theory is not directly applicable to modern machine learners such as the Lasso in the high dimensional setting. We investigate under which conditions such extensions are possible. Two key properties for standard out-of-sample asymptotic inference to be valid with machine learning are (i) a zero-mean condition for the score of the prediction loss function; and (ii) a fast rate of convergence for the machine learner. Monte Carlo simulations confirm our theoretical findings. For accurate finite sample inferences with machine learning, we recommend a small out-of-sample vs in-sample size ratio. We illustrate the wide applicability of our results with a new out-of-sample test for the Martingale Difference Hypothesis (MDH). We obtain the asymptotic null distribution of our test and use it to evaluate
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.12838&r=cmp
  5. By: Yifan Duan; Guibin Zhang; Shilong Wang; Xiaojiang Peng; Wang Ziqi; Junyuan Mao; Hao Wu; Xinke Jiang; Kun Wang
    Abstract: Credit card fraud poses a significant threat to the economy. While Graph Neural Network (GNN)-based fraud detection methods perform well, they often overlook the causal effect of a node's local structure on predictions. This paper introduces a novel method for credit card fraud detection, the Causal Temporal Graph Neural Network (CaT-GNN), which leverages causal invariant learning to reveal inherent correlations within transaction data. By decomposing the problem into discovery and intervention phases, CaT-GNN identifies causal nodes within the transaction graph and applies a causal mixup strategy to enhance the model's robustness and interpretability. CaT-GNN consists of two key components: Causal-Inspector and Causal-Intervener. The Causal-Inspector uses the attention weights of the temporal attention mechanism to identify causal and environment nodes without introducing additional parameters. The Causal-Intervener then performs a causal mixup enhancement on the identified environment nodes. Evaluated on three datasets, including a private financial dataset and two public datasets, CaT-GNN demonstrates superior performance over existing state-of-the-art methods. Our findings highlight the potential of integrating causal reasoning with graph neural networks to improve fraud detection capabilities in financial transactions.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.14708&r=cmp
  6. By: Hao Qian; Hongting Zhou; Qian Zhao; Hao Chen; Hongxiang Yao; Jingwei Wang; Ziqi Liu; Fei Yu; Zhiqiang Zhang; Jun Zhou
    Abstract: The stock market is a crucial component of the financial system, but predicting the movement of stock prices is challenging due to the dynamic and intricate relations arising from various aspects such as economic indicators, financial reports, global news, and investor sentiment. Traditional sequential methods and graph-based models have been applied in stock movement prediction, but they have limitations in capturing the multifaceted and temporal influences in stock price movements. To address these challenges, the Multi-relational Dynamic Graph Neural Network (MDGNN) framework is proposed, which utilizes a discrete dynamic graph to comprehensively capture multifaceted relations among stocks and their evolution over time. The representation generated from the graph offers a complete perspective on the interrelationships among stocks and associated entities. Additionally, the power of the Transformer structure is leveraged to encode the temporal evolution of multiplex relations, providing a dynamic and effective approach to predicting stock investment. Further, our proposed MDGNN framework achieves the best performance on public datasets compared with state-of-the-art (SOTA) stock investment methods.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.06633&r=cmp
  7. By: Hang Yuan; Saizhuo Wang; Jian Guo
    Abstract: Recently, we introduced a new paradigm for alpha mining in the realm of quantitative investment, developing a new interactive alpha mining system framework, Alpha-GPT. This system is centered on iterative Human-AI interaction based on large language models, introducing a Human-in-the-Loop approach to alpha discovery. In this paper, we present the next-generation Alpha-GPT 2.0, a quantitative investment framework that further encompasses crucial modeling and analysis phases in quantitative investment. This framework emphasizes the iterative, interactive research between humans and AI, embodying a Human-in-the-Loop strategy throughout the entire quantitative investment pipeline. By assimilating the insights of human researchers into the systematic alpha research process, we effectively leverage the Human-in-the-Loop approach, enhancing the efficiency and precision of quantitative investment research.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09746&r=cmp
  8. By: Andrei Neagu; Frédéric Godin; Clarence Simard; Leila Kosseim
    Abstract: Dynamic hedging is the practice of periodically transacting financial instruments to offset the risk caused by an investment or a liability. Dynamic hedging optimization can be framed as a sequential decision problem; thus, Reinforcement Learning (RL) models were recently proposed to tackle this task. However, existing RL works for hedging do not consider the market impact caused by the finite liquidity of traded instruments. Integrating such a feature can be crucial for achieving optimal performance when hedging options on stocks with limited liquidity. In this paper, we propose a novel general market impact dynamic hedging model based on Deep Reinforcement Learning (DRL) that considers several realistic features such as convex market impacts and impact persistence through time. The optimal policy obtained from the DRL model is analysed using several option hedging simulations and compared to commonly used procedures such as delta hedging. Results show our DRL model performs better in contexts of low liquidity by, among other things: 1) learning the extent to which portfolio rebalancing actions should be dampened or delayed to avoid high costs, 2) factoring in the impact of features not considered by conventional approaches, such as previous hedging errors through the portfolio value, and the underlying asset's drift (i.e. the magnitude of its expected return).
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.13326&r=cmp
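The delta-hedging benchmark mentioned in the entry above, and the "dampened rebalancing" behavior the DRL policy is reported to learn, can be illustrated generically. This is a sketch, not the authors' model; the Black-Scholes delta and the fixed dampening factor are illustrative assumptions:

```python
import math
from statistics import NormalDist

def bs_call_delta(S, K, T, r, sigma):
    """Black-Scholes delta of a European call (the benchmark hedge ratio)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return NormalDist().cdf(d1)

def dampened_rebalance(current_pos, target_delta, damp=0.5):
    """With market impact, trading the full delta adjustment each step is
    costly; moving only part of the way toward the target is one simple
    illustration of dampening (the factor 0.5 is arbitrary)."""
    return current_pos + damp * (target_delta - current_pos)

delta = bs_call_delta(S=100, K=100, T=0.5, r=0.01, sigma=0.2)
new_pos = dampened_rebalance(current_pos=0.40, target_delta=delta)
```

A full-delta hedger would jump straight to `delta`; under finite liquidity, the partial move trades tracking error against impact cost.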
  9. By: Claudia Biancotti (Bank of Italy); Carolina Camassa (Bank of Italy)
    Abstract: ChatGPT, a software seeking to simulate human conversational abilities, is attracting increasing attention. It is sometimes portrayed as a groundbreaking productivity aid, including for creative work. In this paper, we run an experiment to assess its potential in complex writing tasks. We ask the software to compose a policy brief for the Board of the Bank of Italy. We find that ChatGPT can accelerate workflows by providing well-structured content suggestions, and by producing extensive, linguistically correct text in a matter of seconds. It does, however, require a significant amount of expert supervision, which partially offsets productivity gains. If the app is used naively, output can be incorrect, superficial, or irrelevant. Superficiality is an especially problematic limitation in the context of policy advice intended for high-level audiences.
    Keywords: Large language models, generative artificial intelligence, ChatGPT
    JEL: O33 O32
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:bdi:opques:qef_814_23&r=cmp
  10. By: Jeroen Rombouts; Marie Ternes; Ines Wilms
    Abstract: Platform businesses operate on a digital core and their decision making requires high-dimensional accurate forecast streams at different levels of cross-sectional (e.g., geographical regions) and temporal aggregation (e.g., minutes to days). It also necessitates coherent forecasts across all levels of the hierarchy to ensure aligned decision making across different planning units such as pricing, product, controlling and strategy. Given that platform data streams feature complex characteristics and interdependencies, we introduce a non-linear hierarchical forecast reconciliation method that produces cross-temporal reconciled forecasts in a direct and automated way through the use of popular machine learning methods. The method is sufficiently fast to allow forecast-based high-frequency decision making that platforms require. We empirically test our framework on a unique, large-scale streaming dataset from a leading on-demand delivery platform in Europe.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09033&r=cmp
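Forecast reconciliation makes base forecasts coherent with the aggregation structure. The paper's method is non-linear and machine-learning based; as a point of reference only, the classical linear (OLS) projection it generalizes can be sketched with a toy two-region hierarchy:

```python
import numpy as np

# Two bottom-level series and their total: coherent vectors satisfy y = S @ b.
S = np.array([[1.0, 1.0],   # total
              [1.0, 0.0],   # region A
              [0.0, 1.0]])  # region B

def ols_reconcile(y_hat, S):
    """Project base forecasts onto the coherent subspace (OLS reconciliation)."""
    G = np.linalg.inv(S.T @ S) @ S.T
    return S @ (G @ y_hat)

# Incoherent base forecasts: the total (110) != A + B (100).
y_hat = np.array([110.0, 60.0, 40.0])
y_tilde = ols_reconcile(y_hat, S)
# After reconciliation, the total equals the sum of the parts.
```

Cross-temporal reconciliation extends the same idea so coherence also holds across temporal aggregation levels (e.g. minutes summing to hours).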
  11. By: Sarah Soleiman (SAMM - Statistique, Analyse et Modélisation Multidisciplinaire (SAmos-Marin Mersenne) - UP1 - Université Paris 1 Panthéon-Sorbonne); Julien Randon-Furling (SAMM - Statistique, Analyse et Modélisation Multidisciplinaire (SAmos-Marin Mersenne) - UP1 - Université Paris 1 Panthéon-Sorbonne); Marie Cottrell (SAMM - Statistique, Analyse et Modélisation Multidisciplinaire (SAmos-Marin Mersenne) - UP1 - Université Paris 1 Panthéon-Sorbonne)
    Date: 2022–07–06
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03900972&r=cmp
  12. By: Lenhard, Gregor (University of Basel)
    Abstract: Recent survey evidence suggests that investors form beliefs about future stock returns predominantly by extrapolating their own experience: they overweight returns they have personally experienced while underweighting returns from earlier years, and consequently expect high (low) stock market returns when they observe bullish (bearish) markets in their lifespan. Such behavior is difficult to reconcile with existing models. This paper introduces a simple agent-based model for simulating artificial stock markets in which mean-variance optimizing investors form their expectations from heterogeneous beliefs about future capital gains. Using this framework, I successfully reproduce various stylized facts from the empirical finance literature, such as under-diversification, the predictive power of the price-dividend ratio, and the autocorrelation of price changes. The experimental findings show that the most realistic market scenarios are produced when agents have a bias for recent returns. The study also establishes a link between the under-diversification of investor portfolios and personal experiences.
    JEL: C63 G12 D84
    Date: 2024–03–03
    URL: http://d.repec.org/n?u=RePEc:bsl:wpaper:2024/01&r=cmp
  13. By: Steven Y. K. Wong; Jennifer S. K. Chan; Lamiae Azizi
    Abstract: Time-series with time-varying variance pose a unique challenge to uncertainty quantification (UQ) methods. Time-varying variance, such as the volatility clustering seen in financial time-series, can lead to a large mismatch between predicted uncertainty and forecast error. Building on recent advances in the neural network UQ literature, we extend and simplify Deep Evidential Regression and Deep Ensembles into a unified framework to deal with UQ in the presence of volatility clustering. We show that a Scale Mixture Distribution is a simpler alternative to the Normal-Inverse-Gamma prior that provides a favorable complexity-accuracy trade-off. To illustrate the performance of our proposed approach, we apply it to two sets of financial time-series exhibiting volatility clustering: cryptocurrencies and U.S. equities.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.14476&r=cmp
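The scale-mixture idea in the entry above can be illustrated generically: drawing the variance from an Inverse-Gamma distribution before drawing the observation yields a heavy-tailed (Student-t) marginal, which accommodates bursts of volatility better than a fixed-variance Normal. The parameters below are illustrative, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def scale_mixture_samples(mu, alpha, beta, n):
    """Normal whose variance is itself Inverse-Gamma(alpha, beta) distributed.
    Marginally this is a Student-t-like density with heavier tails than a
    Normal of the same overall variance."""
    variances = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=n)
    return rng.normal(mu, np.sqrt(variances))

draws = scale_mixture_samples(mu=0.0, alpha=3.0, beta=2.0, n=100_000)
# Sample kurtosis well above the Normal's value of 3 signals the heavy tails.
```

With alpha=3 and beta=2 the marginal is a t distribution with 6 degrees of freedom, whose theoretical kurtosis is 6.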
  14. By: Zengqing Wu; Shuyuan Zheng; Qianying Liu; Xu Han; Brian Inhyuk Kwon; Makoto Onizuka; Shaojie Tang; Run Peng; Chuan Xiao
    Abstract: Recent advancements have shown that agents powered by large language models (LLMs) possess capabilities to simulate human behaviors and societal dynamics. However, the potential for LLM agents to spontaneously establish collaborative relationships in the absence of explicit instructions has not been studied. To address this gap, we conduct three case studies, revealing that LLM agents are capable of spontaneously forming collaborations even within competitive settings. This finding not only demonstrates the capacity of LLM agents to mimic competition and cooperation in human societies but also validates a promising vision of computational social science. Specifically, it suggests that LLM agents could be utilized to model human social interactions, including those with spontaneous collaborations, thus offering insights into social phenomena. The source codes for this study are available at https://github.com/wuzengqing001225/SABM_ShallWeTalk .
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.12327&r=cmp
  15. By: David H. Kreitmeir (Department of Economics and SoDa Labs, Monash University); Paul A. Raschky (Department of Economics and SoDa Labs, Monash University)
    Abstract: We analyse the individual productivity effects of Italy's ban on ChatGPT, a generative pretrained transformer chatbot. We compile data on the daily coding output quantity and quality of over 36,000 GitHub users in Italy and other European countries and combine these data with the sudden announcement of the ban in a difference-in-differences framework. Among the affected users in Italy, we find a short-term increase in output quantity and quality for less experienced users and a decrease in productivity on more routine tasks for experienced users.
    Keywords: artificial intelligence, productivity
    JEL: D8 J24 O33
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:ajr:sodwps:2024-01&r=cmp
  16. By: Athey, Susan (Stanford U); Keleher, Niall (Innovations for Poverty Action, New Haven); Spiess, Jann (Stanford U)
    Abstract: In many settings, interventions may be more effective for some individuals than others, so that targeting interventions may be beneficial. We analyze the value of targeting in the context of a large-scale field experiment with over 53,000 college students, where the goal was to use "nudges" to encourage students to renew their financial-aid applications before a non-binding deadline. We begin with baseline approaches to targeting. First, we target based on a causal forest that estimates heterogeneous treatment effects and then assigns students to treatment according to those estimated to have the highest treatment effects. Next, we evaluate two alternative targeting policies, one targeting students with low predicted probability of renewing financial aid in the absence of the treatment, the other targeting those with high probability. The predicted baseline outcome is not the ideal criterion for targeting, nor is it a priori clear whether to prioritize low, high, or intermediate predicted probability. Nonetheless, targeting on low baseline outcomes is common in practice, for example because the relationship between individual characteristics and treatment effects is often difficult or impossible to estimate with historical data. We propose hybrid approaches that incorporate the strengths of both predictive approaches (accurate estimation) and causal approaches (correct criterion); we show that targeting intermediate baseline outcomes is most effective, while targeting based on low baseline outcomes is detrimental. In one year of the experiment, nudging all students improved early filing by an average of 6.4 percentage points over a baseline average of 37% filing, and we estimate that targeting half of the students using our preferred policy attains around 75% of this benefit.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:4146&r=cmp
  17. By: Manuel Schlenkrich; Wolfgang Seiringer; Klaus Altendorfer; Sophie N. Parragh
    Abstract: Production planning must account for uncertainty in a production system, arising from fluctuating demand forecasts. Therefore, this article focuses on the integration of updated customer demand into the rolling horizon planning cycle. We use scenario-based stochastic programming to solve capacitated lot sizing problems under stochastic demand in a rolling horizon environment. This environment is replicated using a discrete event simulation-optimization framework, where the optimization problem is periodically solved, leveraging the latest demand information to continually adjust the production plan. We evaluate the stochastic optimization approach and compare its performance to solving a deterministic lot sizing model, using expected demand figures as input, as well as to standard Material Requirements Planning (MRP). In the simulation study, we analyze three different customer behaviors related to forecasting, along with four levels of shop load, within a multi-item and multi-stage production system. We test a range of significant parameter values for the three planning methods and compute the overall costs to benchmark them. The results show that the production plans obtained by MRP are outperformed by deterministic and stochastic optimization. Particularly, when facing tight resource restrictions and rising uncertainty in customer demand, the use of stochastic optimization becomes preferable compared to deterministic optimization.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.14506&r=cmp
  18. By: Leek, Lauren Caroline (European University Institute); Bischl, Simeon; Freier, Maximilian
    Abstract: While institutionally independent, monetary policy-makers do not operate in a vacuum. The policy choices of a central bank are intricately linked to government policies and financial markets. We present novel indices of monetary, fiscal and financial policy-linkages based on central bank communication, namely, speeches by 118 central banks worldwide from 1997 to mid-2023. Our indices not only measure instances of monetary, fiscal or financial dominance but, importantly, also identify communication that aims to coordinate monetary policy with the government and financial markets. To create our indices, we use a Large Language Model (ChatGPT 3.5-0301) and provide transparent prompt-engineering steps, considering both accuracy on the basis of a manually coded dataset and efficiency regarding token usage. We also test several model improvements and provide descriptive statistics of the trends of the indices over time and across central banks, including correlations with political-economic variables.
    Date: 2024–02–14
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:78wnp&r=cmp
  19. By: Jaehyuk Choi
    Abstract: This study proposes a new exact simulation scheme for the Ornstein-Uhlenbeck driven stochastic volatility model. With the Karhunen-Loève expansions, the stochastic volatility path following the Ornstein-Uhlenbeck process is expressed as a sine series, and the time integrals of volatility and variance are analytically derived as sums of independent normal random variates. The new method is several hundred times faster than the scheme of Li and Wu [Eur. J. Oper. Res., 2019, 275(2), 768-779], which relies on computationally expensive numerical transform inversion. The simulation algorithm is further improved with the conditional Monte Carlo method and a martingale-preserving control variate on the spot price.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09243&r=cmp
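For context, the standard exact transition sampling of an Ornstein-Uhlenbeck process (the textbook alternative to the paper's Karhunen-Loève sine-series scheme, and not that scheme itself) looks like this:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def ou_exact_step(x, kappa, theta, sigma, dt):
    """One exact transition of dX = kappa*(theta - X) dt + sigma dW.
    The conditional law of X(t+dt) given X(t) is Gaussian with a known
    mean and variance, so no discretization bias is introduced."""
    e = math.exp(-kappa * dt)
    mean = theta + (x - theta) * e
    std = sigma * math.sqrt((1.0 - e * e) / (2.0 * kappa))
    return mean + std * rng.standard_normal()

# Simulate a volatility path on a daily grid (parameters illustrative).
x, path = 0.2, [0.2]
for _ in range(252):
    x = ou_exact_step(x, kappa=2.0, theta=0.2, sigma=0.3, dt=1 / 252)
    path.append(x)
```

The paper's contribution lies elsewhere: sampling the *time integrals* of volatility and variance exactly, which this step-by-step recursion does not provide.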

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.