New Economics Papers on Computational Economics
By: | Bokai Cao; Saizhuo Wang; Xinyi Lin; Xiaojun Wu; Haohan Zhang; Lionel M. Ni; Jian Guo |
Abstract: | Quantitative investment (quant) is an emerging, technology-driven approach in asset management, increasingly shaped by advancements in artificial intelligence. Recent advances in deep learning and large language models (LLMs) for quant finance have improved predictive modeling and enabled agent-based automation, suggesting a potential paradigm shift in this field. In this survey, taking the alpha strategy as a representative example, we explore how AI contributes to the quantitative investment pipeline. We first examine the early stage of quant research, centered on human-crafted features and traditional statistical models with an established alpha pipeline. We then discuss the rise of deep learning, which enabled scalable modeling across the entire pipeline, from data processing to order execution. Building on this, we highlight the emerging role of LLMs in extending AI beyond prediction, empowering autonomous agents to process unstructured data, generate alphas, and support self-iterative workflows.
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.21422 |
By: | Zongxiao Wu; Yizhe Dong; Yaoyiran Li; Baofeng Shi |
Abstract: | This study explores the integration of a representative large language model, ChatGPT, into lending decision-making with a focus on credit default prediction. Specifically, we use ChatGPT to analyse and interpret loan assessments written by loan officers and generate refined versions of these texts. Our comparative analysis reveals significant differences between generative artificial intelligence (AI)-refined and human-written texts in terms of text length, semantic similarity, and linguistic representations. Using deep learning techniques, we show that incorporating unstructured text data, particularly ChatGPT-refined texts, alongside conventional structured data significantly enhances credit default predictions. Furthermore, we demonstrate how the contents of both human-written and ChatGPT-refined assessments contribute to the models' prediction and show that the effect of essential words is highly context-dependent. Moreover, we find that ChatGPT's analysis of borrower delinquency contributes the most to improving predictive accuracy. We also evaluate the business impact of the models based on human-written and ChatGPT-refined texts, and find that, in most cases, the latter yields higher profitability than the former. This study provides valuable insights into the transformative potential of generative AI in financial services. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.18029 |
By: | Julien Pascal |
Abstract: | This paper presents a novel method to solve high-dimensional economic models using neural networks when the exact calculation of the gradient by backpropagation is impractical or inapplicable. This method relies on the gradient-free bias-corrected Monte Carlo (bc-MC) operator, which constitutes, under certain conditions, an asymptotically unbiased estimator of the gradient of the loss function. This method is well-suited for high-dimensional models, as it requires only two evaluations of a residual function to approximate the gradient of the loss function, regardless of the model dimension. I demonstrate that the gradient-free bias-corrected Monte Carlo operator has appealing properties as long as the economic model satisfies Lipschitz continuity. This makes the method particularly attractive in situations involving non-differentiable loss functions. I demonstrate the broad applicability of the gradient-free bc-MC operator by solving large-scale overlapping generations (OLG) models with aggregate uncertainty, including scenarios involving borrowing constraints that introduce non-differentiability in household optimization problems. |
Keywords: | Dynamic programming, neural networks, machine learning, Monte Carlo, overlapping generations, occasionally binding constraints. |
JEL: | C45 C61 C63 C68 E32 E37 |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:bcl:bclwop:bclwp196 |
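The defining property of the paper's gradient-free bc-MC operator is that it needs only two evaluations of a residual function to approximate the loss gradient, regardless of model dimension. The following is an illustrative sketch of that idea using a generic two-evaluation simultaneous-perturbation (SPSA-style) estimator; it is not the paper's exact bc-MC operator, and the residual function is a hypothetical stand-in.

```python
# Illustrative sketch only: a two-evaluation, random-perturbation gradient
# estimate for the squared-residual loss 0.5*r(theta)^2, echoing the
# "two residual evaluations regardless of dimension" property.
import numpy as np

def residual(theta):
    # Hypothetical residual (stand-in for a model's equilibrium condition).
    return np.sum(np.tanh(theta)) - 1.0

def two_point_gradient(theta, eps=1e-3, rng=np.random.default_rng(0)):
    """Estimate grad of 0.5*residual(theta)**2 with two evaluations of
    residual(), independent of len(theta)."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher directions
    r_plus = residual(theta + eps * delta)
    r_minus = residual(theta - eps * delta)
    # Central difference of the loss along the random direction delta:
    g_dir = (0.5 * r_plus**2 - 0.5 * r_minus**2) / (2 * eps)
    return g_dir * delta   # unbiased up to O(eps^2) under smoothness

theta = np.zeros(1000)     # the dimension is irrelevant to the cost
print(two_point_gradient(theta)[:5])
```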
By: | Alejandro Lopez-Lira |
Abstract: | This paper presents a realistic simulated stock market where large language models (LLMs) act as heterogeneous competing trading agents. The open-source framework incorporates a persistent order book with market and limit orders, partial fills, dividends, and equilibrium clearing alongside agents with varied strategies, information sets, and endowments. Agents submit standardized decisions using structured outputs and function calls while expressing their reasoning in natural language. Three findings emerge: First, LLMs demonstrate consistent strategy adherence and can function as value investors, momentum traders, or market makers per their instructions. Second, market dynamics exhibit features of real financial markets, including price discovery, bubbles, underreaction, and strategic liquidity provision. Third, the framework enables analysis of LLMs' responses to varying market conditions, similar to partial dependence plots in machine-learning interpretability. The framework allows simulating financial theories without closed-form solutions, creating experimental designs that would be costly with human participants, and establishing how prompts can generate correlated behaviors affecting market stability. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.10789 |
By: | Ziqi Li; Zhan Peng |
Abstract: | Moran Eigenvector Spatial Filtering (ESF) approaches have shown promise in accounting for spatial effects in statistical models. Can this extend to machine learning? This paper examines the effectiveness of using Moran Eigenvectors as additional spatial features in machine learning models. We generate synthetic datasets with known processes involving spatially varying and nonlinear effects across two different geometries. Moran Eigenvectors calculated from different spatial weights matrices, with and without a priori eigenvector selection, are tested. We assess the performance of popular machine learning models, including Random Forests, LightGBM, XGBoost, and TabNet, and benchmark their accuracies in terms of cross-validated R2 values against models that use only coordinates as features. We also extract coefficients and functions from the models using GeoShapley and compare them with the true processes. Results show that machine learning models using only location coordinates achieve better accuracies than eigenvector-based approaches across various experiments and datasets. Furthermore, we note that while these findings hold for spatial processes exhibiting positive spatial autocorrelation, they do not necessarily apply to network autocorrelation or to cases of negative spatial autocorrelation, where Moran Eigenvectors would still be useful.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.12450 |
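For readers unfamiliar with ESF, the following is a minimal sketch (not the authors' code) of the comparison the paper runs: Moran eigenvectors are eigenvectors of the doubly centered spatial weights matrix, appended to the feature set of a machine learning model and benchmarked against coordinates-only features. Synthetic data and the weights construction here are assumptions for illustration.

```python
# Minimal sketch of Moran Eigenvector Spatial Filtering (ESF) features
# versus coordinates-only features, on synthetic spatial data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 1, size=(n, 2))                 # point geometry
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W = (d < 0.15).astype(float)                            # distance-band weights
np.fill_diagonal(W, 0.0)

C = np.eye(n) - np.ones((n, n)) / n                     # centering matrix
M = C @ W @ C                                           # doubly centered weights
eigvals, eigvecs = np.linalg.eigh(M)
E = eigvecs[:, np.argsort(eigvals)[::-1][:20]]          # top-20 Moran eigenvectors

# Synthetic outcome with spatially varying, nonlinear structure.
y = np.sin(3 * coords[:, 0]) + coords[:, 1] ** 2 + rng.normal(0, 0.1, n)

print("coords only :", cross_val_score(
    RandomForestRegressor(random_state=0), coords, y, scoring="r2").mean())
print("coords + ESF:", cross_val_score(
    RandomForestRegressor(random_state=0), np.hstack([coords, E]), y,
    scoring="r2").mean())
```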
By: | Hannes Wallimann; Noah Balthasar |
Abstract: | Children's travel behavior plays a critical role in shaping long-term mobility habits and public health outcomes. Despite growing global interest, little is known about the factors influencing children's travel mode choice for school journeys in Switzerland. This study addresses this gap by applying a random forest classifier - a machine learning algorithm - to data from the Swiss Mobility and Transport Microcensus, in order to identify key predictors of children's travel mode choice for school journeys. Distance consistently emerges as the most important predictor across all models, for instance when distinguishing between active vs. non-active travel or car vs. non-car usage. The models show relatively high performance, with overall classification accuracies of 87.27% (active vs. non-active) and 78.97% (car vs. non-car). The study offers empirically grounded insights that can support school mobility policies and demonstrates the potential of machine learning in uncovering behavioral patterns in complex transport datasets.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.09947 |
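The modeling step is a standard supervised-classification workflow. The following hedged sketch uses synthetic data (the Microcensus data are not reproduced here) to show the shape of the analysis: a random forest on an active vs. non-active label, with feature importances surfacing distance as the dominant predictor. All variable names and the data-generating rule are illustrative assumptions.

```python
# Hedged illustration with synthetic data: random forest for active vs.
# non-active school travel, with importances showing how distance can dominate.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = pd.DataFrame({
    "distance_km": rng.exponential(2.0, n),
    "age": rng.integers(6, 16, n),
    "household_cars": rng.integers(0, 3, n),
})
# Synthetic rule: short distances mostly active, long distances mostly not.
p_active = 1 / (1 + np.exp(2.0 * (X["distance_km"] - 1.5)))
y = rng.random(n) < p_active

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
print(dict(zip(X.columns, clf.feature_importances_.round(3))))
```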
By: | Yuanjun Feng; Vivek Chodhary; Yash Raj Shrestha |
Abstract: | This study examines the understudied role of algorithmic evaluation of human judgment in hybrid decision-making systems, a critical gap in management research. While extant literature focuses on human reluctance to follow algorithmic advice, we reverse the perspective by investigating how AI agents based on large language models (LLMs) assess and integrate human input. Our work addresses a pressing managerial constraint: firms barred from deploying LLMs directly due to privacy concerns can still leverage them as mediating tools (for instance, anonymized outputs or decision pipelines) to guide high-stakes choices like pricing or discounts without exposing proprietary data. Through a controlled prediction task, we analyze how an LLM-based AI agent weights human versus algorithmic predictions. We find that the AI system systematically discounts human advice, penalizing human errors more severely than algorithmic errors--a bias exacerbated when the agent's identity (human vs AI) is disclosed and the human is positioned second. These results reveal a disconnect between AI-generated trust metrics and the actual influence of human judgment, challenging assumptions about equitable human-AI collaboration. Our findings offer three key contributions. First, we identify a reverse algorithm aversion phenomenon, where AI agents undervalue human input despite comparable error rates. Second, we demonstrate how disclosure and positional bias interact to amplify this effect, with implications for system design. Third, we provide a framework for indirect LLM deployment that balances predictive power with data privacy. For practitioners, this research emphasizes the need to audit AI weighting mechanisms, calibrate trust dynamics, and strategically design decision sequences in human-AI systems.
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.13871 |
By: | Julian Junyan Wang; Victor Xiaoqi Wang |
Abstract: | This study provides the first comprehensive assessment of consistency and reproducibility in Large Language Model (LLM) outputs in finance and accounting research. We evaluate how consistently LLMs produce outputs given identical inputs through extensive experimentation with 50 independent runs across five common tasks: classification, sentiment analysis, summarization, text generation, and prediction. Using three OpenAI models (GPT-3.5-turbo, GPT-4o-mini, and GPT-4o), we generate over 3.4 million outputs from diverse financial source texts and data, covering MD&As, FOMC statements, finance news articles, earnings call transcripts, and financial statements. Our findings reveal substantial but task-dependent consistency, with binary classification and sentiment analysis achieving near-perfect reproducibility, while complex tasks show greater variability. More advanced models do not necessarily deliver better consistency and reproducibility, with task-specific patterns emerging. LLMs significantly outperform expert human annotators in consistency and maintain high agreement even where human experts significantly disagree. We further find that simple aggregation strategies across 3-5 runs dramatically improve consistency. Simulation analysis reveals that despite measurable inconsistency in LLM outputs, downstream statistical inferences remain remarkably robust. These findings address concerns about what we term "G-hacking," the selective reporting of favorable outcomes from multiple Generative AI runs, by demonstrating that such risks are relatively low for finance and accounting tasks.
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.16974 |
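The aggregation finding lends itself to a very short sketch: run the same prompt several times and take a majority vote over the labels. The function `classify_once` below is a hypothetical stand-in for a single stochastic model call, not an actual OpenAI API wrapper.

```python
# Minimal sketch of the paper's aggregation idea: majority-vote the label
# from several independent LLM runs of the same input.
import random
from collections import Counter

def classify_once(text: str) -> str:
    # Hypothetical stand-in for one stochastic LLM run (e.g., sentiment).
    return random.choices(["positive", "negative"], weights=[0.8, 0.2])[0]

def classify_aggregated(text: str, n_runs: int = 5) -> str:
    """Majority vote across n_runs independent runs."""
    votes = Counter(classify_once(text) for _ in range(n_runs))
    return votes.most_common(1)[0][0]

print(classify_aggregated("Revenue beat expectations this quarter."))
```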
By: | Tianshi Mu; Pranjal Rawat; John Rust; Chengjun Zhang; Qixuan Zhong |
Abstract: | We compare the performance of human and artificially intelligent (AI) decision makers in simple binary classification tasks where the optimal decision rule is given by Bayes Rule. We reanalyze choices of human subjects gathered from laboratory experiments conducted by El-Gamal and Grether and Holt and Smith. We confirm that while overall, Bayes Rule represents the single best model for predicting human choices, subjects are heterogeneous and a significant share of them make suboptimal choices that reflect judgement biases described by Kahneman and Tversky, including the "representativeness heuristic" (excessive weight on the evidence from the sample relative to the prior) and "conservatism" (excessive weight on the prior relative to the sample). We compare the performance of AI subjects gathered from recent versions of large language models (LLMs), including several versions of ChatGPT. These general-purpose generative AI chatbots are not specifically trained to do well in narrow decision-making tasks, but are trained instead as "language predictors" using a large corpus of textual data from the web. We show that ChatGPT is also subject to biases that result in suboptimal decisions. However, we document a rapid evolution in the performance of ChatGPT, from sub-human performance in early versions (ChatGPT 3.5) to superhuman and nearly perfect Bayesian classifications in the latest versions (ChatGPT 4o).
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.10636 |
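The optimal benchmark the paper uses is just the Bayesian posterior in a two-urn (bookbag-and-poker-chip) task of the El-Gamal-Grether type. A worked computation, with illustrative parameter values, looks as follows.

```python
# Worked Bayes Rule example for a two-cage classification task.
def posterior_cage_A(prior_A, p_A, p_B, n_a, n_b):
    """P(cage A | sample) when each draw favors A w.p. p_A under cage A
    and p_B under cage B; the sample has n_a draws favoring A, n_b favoring B."""
    like_A = p_A ** n_a * (1 - p_A) ** n_b
    like_B = p_B ** n_a * (1 - p_B) ** n_b
    return prior_A * like_A / (prior_A * like_A + (1 - prior_A) * like_B)

# Illustrative values: prior 1/3 on cage A; cage A marks 2/3 of balls "N",
# cage B marks 1/2; the sample contains 4 "N" and 2 "G" draws.
post = posterior_cage_A(prior_A=1/3, p_A=2/3, p_B=1/2, n_a=4, n_b=2)
print(f"P(A | sample) = {post:.3f}; optimal choice: {'A' if post > 0.5 else 'B'}")
```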
By: | Yu Jeffrey Hu; Jeroen Rombouts; Ines Wilms |
Abstract: | Machine learning models are widely recognized for their strong performance in forecasting. To maintain that performance in streaming-data settings, they have to be monitored and frequently re-trained. This can be done with machine learning operations (MLOps) techniques under the supervision of an MLOps engineer. However, in digital platform settings where the number of data streams is typically large and unstable, standard monitoring becomes either suboptimal or too labor-intensive for the MLOps engineer. As a consequence, companies often fall back on very simple, worse-performing ML models without monitoring. We solve this problem by adopting a design science approach and introducing a new monitoring framework, the Machine Learning Monitoring Agent (MLMA), designed to work at scale for any ML model at reasonable labor cost. A key feature of our framework is test-based automated re-training based on a data-adaptive reference loss batch. The MLOps engineer is kept in the loop via key metrics and also acts, proactively or retrospectively, to maintain the performance of the ML model in the production stage. We conduct a large-scale test at a last-mile delivery platform to empirically validate our monitoring framework.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.16789 |
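The core mechanism, test-based automated re-training, can be sketched as a statistical comparison between a reference loss batch and the most recent losses. The sketch below uses a one-sided Welch test as the trigger; this is an assumption for illustration, not MLMA's exact data-adaptive procedure.

```python
# Hedged sketch of test-based automated re-training: re-train only when the
# recent mean loss is significantly higher than a reference loss batch.
import numpy as np
from scipy import stats

def should_retrain(reference_losses, recent_losses, alpha=0.01):
    """One-sided Welch t-test: is the recent mean loss significantly higher?"""
    t, p = stats.ttest_ind(recent_losses, reference_losses, equal_var=False)
    return (t > 0) and (p / 2 < alpha)   # halve two-sided p for one-sided test

rng = np.random.default_rng(0)
reference = rng.normal(1.0, 0.2, 500)    # losses recorded at deployment time
drifted = rng.normal(1.3, 0.2, 100)      # losses after a simulated data shift
print("retrain?", should_retrain(reference, drifted))
```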
By: | Ji Ma |
Abstract: | Large language models (LLMs) increasingly serve as human-like decision-making agents in social science and applied settings. These LLM-agents are typically assigned human-like characters and placed in real-life contexts. However, how these characters and contexts shape an LLM's behavior remains underexplored. This study proposes and tests methods for probing, quantifying, and modifying an LLM's internal representations in a Dictator Game -- a classic behavioral experiment on fairness and prosocial behavior. We extract "vectors of variable variations" (e.g., "male" to "female") from the LLM's internal state. Manipulating these vectors during the model's inference can substantially alter how those variables relate to the model's decision-making. This approach offers a principled way to study and regulate how social concepts can be encoded and engineered within transformer-based models, with implications for alignment, debiasing, and designing AI agents for social simulations in both academic and commercial applications.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.11671 |
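The extraction-and-steering mechanics can be illustrated generically: compute a "variable variation" direction as the difference of mean hidden activations between two prompt groups, then add it back during inference via a forward hook. The toy MLP below is a stand-in for the transformer; hook-based steering of a real LLM follows the same pattern, but everything here is an illustrative assumption.

```python
# Illustrative mechanics of activation steering with a toy network.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 2))

x_group_a = torch.randn(32, 8) + 1.0     # stand-in for "male"-framed inputs
x_group_b = torch.randn(32, 8) - 1.0     # stand-in for "female"-framed inputs

acts = {}
def grab(_, __, output):                 # record hidden-layer activations
    acts["h"] = output.detach()
h = model[1].register_forward_hook(grab)
model(x_group_a); mean_a = acts["h"].mean(0)
model(x_group_b); mean_b = acts["h"].mean(0)
h.remove()

steer = mean_b - mean_a                  # "vector of variable variation"

def add_steer(_, __, output, scale=2.0): # inject the vector at inference time
    return output + scale * steer        # returning a value replaces the output
h2 = model[1].register_forward_hook(add_steer)
print(model(torch.randn(4, 8)))          # outputs now shift along `steer`
h2.remove()
```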
By: | Yu Zhang; Zelin Wu; Claudio Tessone |
Abstract: | Cryptocurrencies are digital tokens built on blockchain technology, with thousands actively traded on centralized exchanges (CEXs). Unlike stocks, which are backed by real businesses, cryptocurrencies are recognized by researchers as a distinct asset class. How do investors treat this new category of asset in trading? Do they use it as an investment tool in the same way as stocks? We answer these questions by investigating the price time series of cryptocurrencies and stocks, which reflect investors' attitudes towards the targeted assets. Concretely, we use different machine learning models to classify cryptocurrency and stock price time series over the same period and obtain an extremely high accuracy rate, indicating that cryptocurrency investors trade differently from stock investors. We then extract features from these price time series to explain the difference in price patterns, including the mean, variance, maximum, minimum, kurtosis, skewness, and first- to third-order autocorrelations, and use machine learning methods including logistic regression (LR), random forest (RF), and support vector machine (SVM) for classification. The classification results show that these extracted features help explain the difference in price time series patterns between cryptocurrencies and stocks.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.12771 |
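The feature-based stage of such a study reduces to summarizing each series by the listed statistics and feeding them to a classifier. A minimal sketch follows; the price paths are synthetic placeholders (heavier-tailed "crypto-like" vs. calmer "stock-like"), not the paper's data.

```python
# Sketch: summarize price series by moments and autocorrelations, then
# classify crypto-like vs. stock-like series with logistic regression.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def series_features(x):
    r = np.diff(np.log(x))                          # log returns
    ac = [np.corrcoef(r[:-k], r[k:])[0, 1] for k in (1, 2, 3)]
    return [r.mean(), r.var(), r.max(), r.min(),
            stats.kurtosis(r), stats.skew(r), *ac]

rng = np.random.default_rng(0)
crypto = [100 * np.exp(np.cumsum(rng.standard_t(3, 500) * 0.02)) for _ in range(100)]
stocks = [100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))) for _ in range(100)]

X = np.array([series_features(s) for s in crypto + stocks])
y = np.array([1] * 100 + [0] * 100)
print("CV accuracy:", cross_val_score(LogisticRegression(max_iter=1000), X, y).mean())
```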
By: | Achim Ahrens; Victor Chernozhukov; Christian Hansen; Damian Kozbur; Mark Schaffer; Thomas Wiemann |
Abstract: | This paper provides a practical introduction to Double/Debiased Machine Learning (DML). DML provides a general approach to performing inference about a target parameter in the presence of nuisance parameters. The aim of DML is to reduce the impact of nuisance parameter estimation on estimators of the parameter of interest. We describe DML and its two essential components: Neyman orthogonality and cross-fitting. We highlight that DML reduces functional form dependence and accommodates the use of complex data types, such as text data. We illustrate its application through three empirical examples that demonstrate DML's applicability in cross-sectional and panel settings. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.08324 |
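The two ingredients the paper names, cross-fitting and a Neyman-orthogonal moment, can be shown in a compact sketch for the partially linear model Y = D*theta + g(X) + e: partial out X from both Y and D on held-out folds, then regress residuals on residuals. The data-generating process below is an illustrative assumption.

```python
# Compact DML sketch for a partially linear model with cross-fitting.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, theta = 2000, 10, 0.5
X = rng.normal(size=(n, p))
D = X[:, 0] + rng.normal(size=n)                 # treatment depends on X
Y = theta * D + np.sin(X[:, 0]) + rng.normal(size=n)

res_Y, res_D = np.zeros(n), np.zeros(n)
for train, test in KFold(5, shuffle=True, random_state=0).split(X):
    mY = RandomForestRegressor(random_state=0).fit(X[train], Y[train])
    mD = RandomForestRegressor(random_state=0).fit(X[train], D[train])
    res_Y[test] = Y[test] - mY.predict(X[test])  # partial out g(X) from Y
    res_D[test] = D[test] - mD.predict(X[test])  # partial out m(X) from D

theta_hat = (res_D @ res_Y) / (res_D @ res_D)    # orthogonal moment estimate
print("theta_hat:", round(theta_hat, 3))         # should be close to 0.5
```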
By: | Cong William Lin; Wu Zhu |
Abstract: | Large Language Models (LLMs), such as ChatGPT, are reshaping content creation and academic writing. This study investigates the impact of AI-assisted generative revisions on research manuscripts, focusing on heterogeneous adoption patterns and their influence on writing convergence. Leveraging a dataset of over 627,000 academic papers from arXiv, we develop a novel classification framework by fine-tuning prompt- and discipline-specific large language models to detect the style of ChatGPT-revised texts. Our findings reveal substantial disparities in LLM adoption across academic disciplines, gender, native language status, and career stage, alongside a rapid evolution in scholarly writing styles. Moreover, LLM usage enhances clarity, conciseness, and adherence to formal writing conventions, with improvements varying by revision type. Finally, a difference-in-differences analysis shows that while LLMs drive convergence in academic writing, early adopters, male researchers, non-native speakers, and junior scholars exhibit the most pronounced stylistic shifts, aligning their writing more closely with that of established researchers.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.13629 |
By: | Kasymkhan Khubiev; Mikhail Semenov |
Abstract: | Algorithmic trading relies on extracting meaningful signals from diverse financial data sources, including candlestick charts, order statistics on put and canceled orders, traded volume data, limit order books, and news flow. While deep learning has demonstrated remarkable success in processing unstructured data and has significantly advanced natural language processing, its application to structured financial data remains an ongoing challenge. This study investigates the integration of deep learning models with financial data modalities, aiming to enhance predictive performance in trading strategies and portfolio optimization. We present a novel approach to incorporating limit order book analysis into algorithmic trading by developing embedding techniques and treating sequential limit order book snapshots as distinct input channels in an image-based representation. Our methodology for processing limit order book data achieves state-of-the-art performance in high-frequency trading algorithms, underscoring the effectiveness of deep learning in financial applications. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.13521 |
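The representational idea, treating sequential limit order book snapshots as distinct input channels of an image-like tensor, can be sketched in a few lines. Shapes and the small CNN below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: T sequential LOB snapshots as T channels of an "image"
# (price levels x features per level), fed to a small CNN.
import torch
import torch.nn as nn

T, levels, feats = 10, 20, 2              # 10 snapshots, 20 levels, (price, volume)
lob = torch.randn(32, T, levels, feats)   # batch of stacked snapshots

cnn = nn.Sequential(
    nn.Conv2d(in_channels=T, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 3),                     # e.g., down / flat / up mid-price move
)
print(cnn(lob).shape)                     # torch.Size([32, 3])
```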
By: | Timothée Fabre; Damien Challet
Abstract: | This paper investigates real-time detection of spoofing activity in limit order books, focusing on cryptocurrency centralized exchanges. We first introduce novel order flow variables based on multi-scale Hawkes processes that account for both the size of new limit orders and their placement distance from the current best prices. Using a Level-3 data set, we train a neural network model to predict the conditional probability distribution of mid-price movements based on these features. Our empirical analysis highlights the critical role of the posting distance of limit orders in the price formation process, showing that spoofing detection models that do not take the posting distance into account are inadequate to describe the data. Next, we propose a spoofing detection framework based on the probabilistic market manipulation gain of a spoofing agent and use the previously trained neural network to compute the expected gain. Running this algorithm on all submitted limit orders in the period 2024-12-04 to 2024-12-07, we find that 31% of large orders could spoof the market. Because of its simple neural architecture, our model can be run in real time. This work contributes to enhancing market integrity by providing a robust tool for monitoring and mitigating spoofing in both cryptocurrency exchanges and traditional financial markets.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.15908 |
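The flavor of the order-flow variables can be conveyed with a toy construction: multi-scale exponentially decaying sums over recent limit orders, weighted by size and damped by posting distance from the best price. The kernels and weights below are assumptions for illustration, not the authors' calibrated Hawkes specification.

```python
# Illustrative multi-scale, size- and distance-weighted order flow features.
import numpy as np

def order_flow_features(times, sizes, distances, t_now, scales=(0.1, 1.0, 10.0)):
    """One feature per time scale:
    sum_i size_i * exp(-(t_now - t_i)/scale) * exp(-distance_i)."""
    dt = t_now - np.asarray(times)
    w_dist = np.exp(-np.asarray(distances))       # damp far-from-touch orders
    return [float(np.sum(sizes * np.exp(-dt / s) * w_dist)) for s in scales]

rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 60, 500))          # order arrival times (seconds)
sizes = rng.lognormal(0, 1, 500)                  # order sizes
dist = rng.exponential(2.0, 500)                  # ticks from the best price
print(order_flow_features(times, sizes, dist, t_now=60.0))
```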
By: | Zhao, Chuqing; Chen, Yisong |
Abstract: | Online platforms such as Reddit have become significant spaces for public discussions on mental health, offering valuable insights into psychological distress and support-seeking behaviors. Large Language Models (LLMs) have emerged as powerful tools for analyzing these discussions, enabling the identification of mental health trends, crisis signals, and potential interventions. This work develops an LLM-based topic modeling framework tailored for domain-specific mental health discourse, uncovering latent themes within user-generated content. Additionally, an interactive and interpretable visualization system is designed to allow users to explore data at various levels of granularity, enhancing the understanding of mental health narratives. This approach aims to bridge the gap between large-scale AI analysis and human-centered interpretability, contributing to more effective and responsible mental health insights on social media. |
Date: | 2025–05–02 |
URL: | https://d.repec.org/n?u=RePEc:osf:socarx:xbpts_v1 |
By: | Buxmann, Peter; Glauben, Adrian; Hendriks, Patrick |
Abstract: | Large Language Models (LLMs) are revolutionizing the way texts and even software are written. In this article, we focus in particular on the use of ChatGPT in companies. The centerpiece is a case study on the redesign of service processes, developed jointly with a medium-sized software company. We show how LLMs can transform business processes and what economic effects result from them.
Date: | 2024–04 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:154532 |
By: | Youngbin Lee; Yejin Kim; Suin Kim; Yongjae Lee |
Abstract: | Portfolio optimization faces challenges due to the sensitivity of traditional mean-variance models to their inputs. The Black-Litterman model mitigates this by integrating investor views, but defining these views remains difficult. This study explores the integration of large language model (LLM)-generated views into portfolio optimization using the Black-Litterman framework. Our method leverages LLMs to estimate expected stock returns from historical prices and company metadata, incorporating uncertainty through the variance in predictions. We conduct a backtest of the LLM-optimized portfolios from June 2024 to February 2025, rebalancing biweekly using the previous two weeks of price data. As baselines, we compare against the S&P 500, an equal-weighted portfolio, and a traditional mean-variance optimized portfolio constructed using the same set of stocks. Empirical results suggest that different LLMs exhibit varying levels of predictive optimism and confidence stability, which impact portfolio performance. The source code and data are available at https://github.com/youngandbin/LLM-MVO-BLM
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.14345 |
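The Black-Litterman posterior used to absorb such views has a closed form. A compact numpy sketch follows, where the view means Q would come from the LLM and the view uncertainty Omega from the variance of its predictions (per the paper's idea); all numerical inputs here are toy values.

```python
# Black-Litterman posterior mean and unconstrained optimal weights.
import numpy as np

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])   # asset return covariance
w_mkt = np.array([0.6, 0.4])                     # market-cap weights
delta, tau = 2.5, 0.05                           # risk aversion, prior scaling
pi = delta * Sigma @ w_mkt                       # implied equilibrium returns

P = np.eye(2)                                    # absolute views on both assets
Q = np.array([0.05, 0.12])                       # LLM-estimated expected returns
Omega = np.diag([0.001, 0.010])                  # variance of the LLM predictions

inv = np.linalg.inv
mu_bl = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P) @ (
    inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ Q)
w_bl = inv(delta * Sigma) @ mu_bl                # unconstrained optimal weights
print("posterior returns:", mu_bl.round(4), "weights:", w_bl.round(3))
```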
By: | Andrey Fradkin |
Abstract: | This paper documents three stylized facts about the demand for Large Language Models (LLMs) using data from OpenRouter, a prominent LLM marketplace. First, new models experience rapid initial adoption that stabilizes within weeks. Second, model releases differ substantially in whether they primarily attract new users or substitute demand from competing models. Third, multihoming, using multiple models simultaneously, is common among apps. These findings suggest significant horizontal and vertical differentiation in the LLM market, implying opportunities for providers to maintain demand and pricing power despite rapid technological advances. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.15440 |
By: | Chris Hays; Manish Raghavan |
Abstract: | Researchers and practitioners often wish to measure treatment effects in settings where units interact via markets and recommendation systems. In these settings, units are affected by certain shared states, like prices, algorithmic recommendations or social signals. We formalize this structure, calling it shared-state interference, and argue that our formulation captures many relevant applied settings. Our key modeling assumption is that individuals' potential outcomes are independent conditional on the shared state. We then prove an extension of a double machine learning (DML) theorem providing conditions for achieving efficient inference under shared-state interference. We also instantiate our general theorem in several models of interest where it is possible to efficiently estimate the average direct effect (ADE) or global average treatment effect (GATE). |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.08836 |
By: | Koukorinis, Andreas; Peters, Gareth W.; Germano, Guido |
Abstract: | We combine a hidden Markov model (HMM) and a kernel machine (SVM/MKL) into a hybrid HMM-SVM/MKL generative-discriminative learning approach to accurately classify high-frequency financial regimes and predict the direction of trades. We capture temporal dependencies and key stylized facts in high-frequency financial time series by integrating the HMM to produce model-based generative feature embeddings from microstructure time series data. These generative embeddings then serve as inputs to an SVM with single- and multi-kernel (MKL) formulations for predictive discrimination. Our methodology, which does not require manual feature engineering, improves classification accuracy compared to single-kernel SVMs and kernel target alignment methods. It also outperforms both a logistic classifier and feed-forward networks. This hybrid HMM-SVM/MKL approach delivers classification improvements for high-frequency time series that can significantly benefit applications in finance.
Keywords: | Fisher information kernel; hidden Markov model; kernel methods; support vector machine
JEL: | C1 F3 G3 |
Date: | 2025–06 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:128016 |
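The generative-discriminative pipeline can be sketched with standard libraries: fit a Gaussian HMM, use posterior state probabilities as model-based embeddings, and classify with an SVM. This assumes the hmmlearn package; the data, labels, and single RBF kernel below are placeholders, and the paper's Fisher-kernel/MKL machinery is not reproduced.

```python
# Hedged sketch of HMM embeddings feeding an SVM classifier (hmmlearn assumed).
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic returns with occasional high-volatility bursts.
returns = rng.normal(0, 1, (2000, 1)) * np.where(rng.random((2000, 1)) < 0.3, 3, 1)

hmm = GaussianHMM(n_components=3, random_state=0).fit(returns)
post = hmm.predict_proba(returns)               # generative state embeddings

# Hypothetical target: the sign of the next observation.
y = (returns[1:, 0] > 0).astype(int)
X = post[:-1]
print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), X, y).mean())
```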
By: | Fabienne Schmid; Daniel Oeltz |
Abstract: | We present a robust Deep Hedging framework for the pricing and hedging of option portfolios that significantly improves training efficiency and model robustness. In particular, we propose a neural model for training model embeddings that utilizes paths from several advanced stochastic-volatility equity option models in order to learn the relationships between hedging strategies. A key advantage of the proposed method is its ability to rapidly and reliably adapt to new market regimes by recalibrating a low-dimensional embedding vector rather than retraining the entire network. Moreover, we examine the observed profit-and-loss distributions on the parameter space of the models used to learn the embeddings. The results show that the proposed framework works well with data generated by complex models and can serve as a basis for an efficient and robust simulation tool for the systematic development of an entirely model-independent hedging strategy.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.16436 |
By: | Songci Xu; Qiangqiang Cheng; Chi-Guhn Lee |
Abstract: | In the realm of stock prediction, machine learning models encounter considerable obstacles due to the inherent low signal-to-noise ratio and the nonstationary nature of financial markets. These challenges often result in spurious correlations and unstable predictive relationships, leading to poor performance of models when applied to out-of-sample (OOS) domains. To address these issues, we investigate Domain Generalization techniques, with a particular focus on causal representation learning to improve a prediction model's generalizability to OOS domains. By leveraging multi-factor models from econometrics, we introduce a novel error bound that explicitly incorporates causal relationships. In addition, we present the connection between the proposed error bound and market nonstationarity. We also develop a Causal Discovery technique to discover invariant feature representations, which effectively mitigates the proposed error bound, and the influence of spurious correlations on causal discovery is rigorously examined. Our theoretical findings are substantiated by numerical results, showcasing the effectiveness of our approach in enhancing the generalizability of stock prediction models.
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.20987 |
By: | Bokai Cao; Xueyuan Lin; Yiyan Qi; Chengjin Xu; Cehao Yang; Jian Guo |
Abstract: | A market simulator aims to create high-quality synthetic financial data that mimic real-world market dynamics, which is crucial for model development and robust assessment. Despite continuous advances in simulation methodologies, market fluctuations vary in scale and source, and existing frameworks often excel at only specific tasks. To address this challenge, we propose Financial Wind Tunnel (FWT), a retrieval-augmented market simulator designed to generate controllable, reasonable, and adaptable market dynamics for model testing. FWT offers a more comprehensive and systematic generative capability across different data frequencies. By leveraging a retrieval method to discover cross-sectional information as the augmented condition, our diffusion-based simulator seamlessly integrates both macro- and micro-level market patterns. Furthermore, our framework allows the simulation to be controlled with wide applicability, including causal generation through "what-if" prompts or unprecedented cross-market trend synthesis. Additionally, we develop an automated optimizer for downstream quantitative models, using stress testing of simulated scenarios via FWT to enhance returns while controlling risks. Experimental results demonstrate that our approach enables generalizable and reliable market simulation and significantly improves the performance and adaptability of downstream models, particularly in highly complex and volatile market conditions. Our code and data sample are available at https://anonymous.4open.science/r/fwt_-E852
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.17909 |
By: | Vasilios Plakandaras (Department of Economics, Democritus University of Thrace, Komotini, Greece); Matteo Bonato (Department of Economics and Econometrics, University of Johannesburg, Auckland Park, South Africa; IPAG Business School, 184 Boulevard Saint-Germain, 75006 Paris, France); Rangan Gupta (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa); Oguzhan Cepni (Ostim Technical University, Ankara, Turkiye; University of Edinburgh Business School, Centre for Business, Climate Change, and Sustainability; Department of Economics, Copenhagen Business School, Denmark) |
Abstract: | This paper forecasts monthly cross-sectional realized variance (RV) for U.S. equities across 49 industries and all 50 states. We exploit information in both own-market and cross-market (oil) realized moments (semi-variance, leverage, skewness, kurtosis, and upside and downside tail risk) as predictors. To accommodate cross-sectional dependence, we compare standard econometric panel models with machine-learning approaches and introduce a new machine-learning technique tailored specifically to panel data. Using observations from April 1994 through April 2023, the panel-dedicated machine-learning model consistently outperforms all other methods, while oil-related moments add little incremental predictive power beyond own-market moments. Short-horizon forecasts successfully capture immediate shocks, whereas longer-horizon forecasts reflect broader structural economic changes. These results carry important implications for portfolio allocation and risk management. |
Keywords: | Cross-sectional realized variance, Realized moments, Machine learning, Forecasting |
JEL: | C33 C53 G10 G17 |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:pre:wpaper:202518 |
By: | Michael J. Yuan; Carlos Campoy; Sydney Lai; James Snewin; Ju Long |
Abstract: | Decentralized AI agent networks, such as Gaia, allow individuals to run customized LLMs on their own computers and then provide services to the public. However, in order to maintain service quality, the network must verify that individual nodes are running their designated LLMs. In this paper, we demonstrate that in a cluster of mostly honest nodes, we can detect nodes that run an unauthorized or incorrect LLM through the social consensus of their peers. We discuss the algorithm and experimental data from the Gaia network. We also discuss the intersubjective validation system, implemented as an EigenLayer AVS, which introduces financial incentives and penalties to encourage honest behavior from LLM nodes.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.13443 |
By: | Jeonggyu Huh; Jaegi Jeon; Hyeng Keun Koo |
Abstract: | Solving large-scale, continuous-time portfolio optimization problems involving numerous assets and state-dependent dynamics has long been challenged by the curse of dimensionality. Traditional dynamic programming and PDE-based methods, while rigorous, typically become computationally intractable beyond a small number of state variables (often limited to ~3-6 in prior numerical studies). To overcome this critical barrier, we introduce the Pontryagin-Guided Direct Policy Optimization (PG-DPO) framework. PG-DPO leverages Pontryagin's Maximum Principle to directly guide neural network policies via backpropagation-through-time, naturally incorporating exogenous state processes without requiring dense state grids. Crucially, our computationally efficient "Two-Stage" variant exploits rapidly stabilizing costate estimates derived from BPTT, converting them into near-optimal closed-form Pontryagin controls after only a short warm-up, significantly reducing training overhead. This enables a breakthrough in scalability: numerical experiments demonstrate that PG-DPO successfully tackles problems with dimensions previously considered far out of reach, optimizing portfolios with up to 50 assets and 10 state variables. The framework delivers near-optimal policies, offering a practical and powerful alternative for high-dimensional continuous-time portfolio choice.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.11116 |
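The backpropagation-through-time backbone of such an approach can be sketched in a few dozen lines: a neural policy maps the state to portfolio weights, wealth dynamics are simulated forward, and terminal CRRA utility is maximized by differentiating through the whole path. The dynamics, network, and parameters below are illustrative assumptions; the paper's Pontryagin costate machinery is not reproduced.

```python
# Minimal BPTT sketch of direct policy optimization for portfolio choice.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_assets, T, dt, gamma = 5, 50, 0.02, 3.0
mu = torch.linspace(0.02, 0.10, n_assets)        # toy drifts
sigma = 0.2 * torch.eye(n_assets)                # toy volatility matrix

policy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, n_assets))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(200):
    W = torch.ones(256)                          # batch of initial wealths
    for t in range(T):
        state = torch.full((256, 1), t * dt)     # time as the only state here
        w = torch.softmax(policy(state), dim=1)  # long-only portfolio weights
        dZ = torch.randn(256, n_assets) * dt ** 0.5
        ret = (w * (mu * dt + dZ @ sigma.T)).sum(1)
        W = W * (1 + ret)                        # wealth dynamics, kept on graph
    # Maximize terminal CRRA utility by minimizing its negative.
    loss = -(W.clamp(min=1e-6).pow(1 - gamma) / (1 - gamma)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final mean wealth:", W.mean().item())
```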
By: | Mfon Akpan; Adeyemi Adebayo |
Abstract: | The rapid development and growth of Artificial Intelligence (AI) has not only transformed industries but also sparked important debates about its impacts on employment, resource allocation, and the ethics involved in decision-making; understanding such change within an industry helps clarify how it will influence society. Advancing AI technologies create a dual paradox of efficiency: greater resource consumption alongside the displacement of traditional labor. In this context, we explore the impact of AI on energy consumption, human labor roles, and hybrid roles amid concerns about widespread human labor replacement. We used mixed methods involving qualitative and quantitative analyses of data identified from various sources. Findings suggest that AI increases energy consumption and has so far impacted human labor roles only to a minimal extent, given that its applicability is limited to some tasks that require human judgment.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.10503 |
By: | Bin Ramli, Muhammad Sukri |
Abstract: | Mandatory job rotations are a cornerstone of the Malaysian civil service, designed to enhance governance, reduce integrity risks, and foster organizational agility. However, these rotations present significant onboarding challenges, requiring employees to rapidly adapt to diverse roles and complex responsibilities, particularly in 'hot seat' and high-risk-to-corruption positions. This study focuses on the Jabatan Kastam Diraja Malaysia (JKDM), where the need for efficient onboarding is heightened by the structured tenure of job rotations. The necessity to quickly acclimate to new roles within a defined period, especially in sensitive positions, underscores the urgency of effective onboarding strategies. To address the inherent onboarding complexities, particularly in navigating intricate customs regulations, this research proposes leveraging Large Language Models (LLMs), with a specific focus on NotebookLM. NotebookLM's ability to ingest and summarize extensive regulatory documents, coupled with features like interactive training modules and AI-powered Q&A, offers a dynamic, personalized learning experience. This approach aims to surpass traditional training limitations, streamlining onboarding, enhancing knowledge transfer, and boosting productivity within JKDM. The study outlines an implementation plan, including a pilot program and department-wide rollout, with expected outcomes of improved onboarding efficiency, enhanced knowledge sharing, and increased operational effectiveness, ultimately contributing to a more agile and integrity-driven public service.
Date: | 2025–03–03 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:gjv9r_v1 |
By: | Dario Crisci; Sebastian E. Ferrando; Konrad Gajewski |
Abstract: | An agent-based modelling methodology for the joint price evolution of two stocks is put forward. The method models future multidimensional price trajectories reflecting how a class of agents rebalance their portfolios in an operational way by reacting to how stocks' charts unfold. Prices are expressed in units of a third stock that acts as numeraire. The methodology is robust; in particular, it does not depend on any prior probabilistic or analytical assumptions, and it is based on constructing scenarios/trajectories. A main ingredient is a superhedging interpretation that provides relative superhedging prices between the two modelled stocks. The operational nature of the methodology gives objective conditions for the validity of the model and so implies realistic risk-reward profiles for the agent's operations. Superhedging computations are performed with a dynamic programming algorithm deployed on a graph data structure. Null subsets of the trajectory space are directly related to arbitrage opportunities (i.e. there is no need for probabilistic considerations) that may emerge during the trajectory set construction. It follows that the superhedging algorithm handles null sets in a rigorous and intuitive way. Superhedging and underhedging bounds are kept relevant to the investor by means of a worst-case pruning method and, as an alternative, a theory-supported pruning that relies on a new notion of small arbitrage.
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.18165 |
By: | Duc Tuyen TA; Wajdi Ben Saad; Ji Young Oh |
Abstract: | With the introduction of the PSD2 regulation in the EU, which established the Open Banking framework, a new window of opportunity has opened for banks and fintechs to explore and enrich bank transaction descriptions, with the aim of building a better understanding of customer behavior and using this understanding to prevent fraud, reduce risk, and offer more competitive and tailored services. Although the use of natural language processing models and techniques has seen incredible progress in various applications and domains over the past few years, custom applications based on a domain-specific text corpus remain unaddressed, especially in the banking sector. In this paper, we introduce a language-based Open Banking transaction classification system with a focus on the French market and French-language text. The system encompasses data collection, labeling, preprocessing, modeling, and evaluation stages. Unlike previous studies that focus on general classification approaches, this system is specifically tailored to address the challenges posed by training a language model on a specialized text corpus (banking data in the French context). By incorporating language-specific techniques and domain knowledge, the proposed system demonstrates enhanced performance and efficiency compared to generic approaches.
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.12319 |
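A common baseline for this kind of transaction classification (not the paper's system) is character n-gram TF-IDF, which copes well with the abbreviations typical of French bank transaction labels, feeding a linear classifier. The example strings and category labels below are invented for illustration.

```python
# Hedged baseline sketch: char n-gram TF-IDF + logistic regression for
# French bank transaction labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "PRLV SEPA EDF CLIENTS PARTICULIERS", "CB CARREFOUR MARKET 12/03",
    "VIR SALAIRE ACME SAS", "PRLV SEPA ORANGE SA", "CB SNCF INTERNET",
    "VIR LOYER MME DUPONT",
]
labels = ["energie", "courses", "salaire", "telecom", "transport", "logement"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["CB CARREFOUR CITY 02/04"]))   # expected: "courses"
```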
By: | Bruno Bouchard; Xiaolu Tan |
Abstract: | We provide an extension of the unbiased simulation method for SDEs developed in Henry-Labordere et al. [Ann. Appl. Probab. 27:6 (2017) 1-37] to a class of path-dependent dynamics, as arises for Asian options. In our setting, both the payoff and the SDE's coefficients depend on the (weighted) average of the process or, more precisely, on the integral of the solution to the SDE against a continuous function with bounded variations. In particular, this applies to the numerical resolution of the class of path-dependent PDEs whose regularity, in the sense of Dupire, is studied in Bouchard and Tan [Ann. I.H.P., to appear].
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.16349 |
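For orientation, the setting can be illustrated with a plain Euler Monte Carlo for an SDE whose drift depends on the running average of the path, with an Asian-style payoff. This is the biased discretization baseline that unbiased simulation methods improve upon, not the paper's scheme; all coefficients are illustrative.

```python
# Biased Euler Monte Carlo baseline for a path-dependent (Asian-style) SDE.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 100_000, 200, 1.0
dt = T / n_steps
X = np.full(n_paths, 1.0)
avg = np.zeros(n_paths)                  # running integral int_0^t X ds

for k in range(n_steps):
    t = (k + 1) * dt
    avg = avg + X * dt                   # update the path integral
    drift = 0.1 * (avg / t - X)          # coefficient depends on the average
    X = X + drift * dt + 0.2 * X * np.sqrt(dt) * rng.standard_normal(n_paths)

payoff = np.maximum(avg / T - 1.0, 0.0)  # Asian call on the time average
print("price estimate:", payoff.mean())
```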