nep-cmp New Economics Papers
on Computational Economics
Issue of 2025–11–03
twenty-two papers chosen by
Stan Miles, Thompson Rivers University


  1. Combining machine learning techniques with NDEA methodology: the use of R.F. and A.N.N. By Pinto, Claudio
  2. News-Aware Direct Reinforcement Trading for Financial Markets By Qing-Yu Lan; Zhan-He Wang; Jun-Qian Jiang; Yu-Tong Wang; Yun-Song Piao
  3. Physics-Informed Graph Neural Networks for Attack Path Prediction By Marin François; Pierre-Emmanuel Arduin; Myriam Merad
  4. Quantum and Classical Machine Learning in Decentralized Finance: Comparative Evidence from Multi-Asset Backtesting of Automated Market Makers By Chi-Sheng Chen; Aidan Hung-Wen Tsai
  5. Comparing LLMs for Sentiment Analysis in Financial Market News By Lucas Eduardo Pereira Teles; Carlos M. S. Figueiredo
  6. A three-step machine learning approach to predict market bubbles with financial news By Abraham Atsiwo
  7. Fusing Narrative Semantics for Financial Volatility Forecasting By Yaxuan Kong; Yoontae Hwang; Marcus Kaiser; Chris Vryonides; Roel Oomen; Stefan Zohren
  8. Spiking Neural Network for Cross-Market Portfolio Optimization in Financial Markets: A Neuromorphic Computing Approach By Amarendra Mohan; Ameer Tamoor Khan; Shuai Li; Xinwei Cao; Zhibin Li
  9. Quantum Machine Learning methods for Fourier-based distribution estimation with application in option pricing By Fernando Alonso; Álvaro Leitao; Carlos Vázquez
  10. A Topological Approach to Parameterizing Deep Hedging Networks By Alok Das; Kiseop Lee
  11. Reinforcement Learning and Consumption-Savings Behavior By Brandon Kaplowitz
  12. Disentangling Age, Time, and Cohort Effects in Income Inequality: A Proxy Machine Learning Approach By David Bruns-Smith; Emi Nakamura; Jón Steinsson
  13. ATLAS: Adaptive Trading with LLM AgentS Through Dynamic Prompt Optimization and Multi-Agent Coordination By Charidimos Papadakis; Angeliki Dimitriou; Giorgos Filandrianos; Maria Lymperaiou; Konstantinos Thomas; Giorgos Stamou
  14. Integrating Transparent Models, LLMs, and Practitioner-in-the-Loop: A Case of Nonprofit Program Evaluation By Ji Ma; Albert Casella
  15. Hierarchical AI Multi-Agent Fundamental Investing: Evidence from China's A-Share Market By Chujun He; Zhonghao Huang; Xiangguo Li; Ye Luo; Kewei Ma; Yuxuan Xiong; Xiaowei Zhang; Mingyang Zhao
  16. At-Risk Transformation for U.S. Recession Prediction By Rahul Billakanti; Minchul Shin
  17. A Neural Network-VAR for Long-Term Forecasting: An Application to Monetary Policy Effects in the Euro Area By Diana Barro; Antonella Basso; Marco Corazza; Guglielmo Alessandro Visentin
  18. A Multi-Layer Machine Learning and Econometric Pipeline for Forecasting Market Risk: Evidence from Cryptoasset Liquidity Spillovers By Yimeng Qiu; Feihuang Fang
  19. "Job Allocation in the Levy Institute Microsimulation Model" By Brandon Istenes
  20. Multiple-Try Simulated Annealing for Constrained Optimization By Diana Barro; Roberto Casarin; Anthony Osuntuyi
  21. Optimized Multi-Level Monte Carlo Parametrization and Antithetic Sampling for Nested Simulations By Alexandre Boumezoued; Adel Cherchali; Vincent Lemaire; Gilles Pagès; Mathieu Truc
  22. Portfolio selection with exogenous and endogenous transaction costs under a two-factor stochastic volatility model By Dong Yan; Ke Zhou; Zirun Wang; Xin-Jiang He

  1. By: Pinto, Claudio
    Abstract: The objective of the present work is to combine the NDEA approach with machine learning techniques and neural networks. To this end, we exploit the models proposed in Pinto (2024). The integration process involves applying a machine learning technique upstream of the resolution of the NDEA models and an artificial neural network downstream of it. In particular, we propose applying a Random Forest algorithm in regression models to adjust data on 1) inputs and outputs, 2) resource allocation preferences among sub-processes, and 3) cost budgets, revenue targets and profit targets, for the influence of internal and external factors, in order to improve the calculation of optimal weights. Downstream of the resolution of the NDEA models, several artificial neural network models are proposed to optimise the calculation of the economic quantities of interest derived from the optimal NDEA solutions. The approach enhances the discrimination power and robustness of the optimal NDEA weights as well as the robustness of the calculated economic quantities.
    Keywords: Network Data Envelopment Analysis, Random Forest Regression, Artificial Neural Network, external factors
    JEL: C45 C53 C61 L20
    Date: 2025–09–07
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:126539
  2. By: Qing-Yu Lan; Zhan-He Wang; Jun-Qian Jiang; Yu-Tong Wang; Yun-Song Piao
    Abstract: The financial market is known to be highly sensitive to news. Therefore, effectively incorporating news data into quantitative trading remains an important challenge. Existing approaches typically rely on manually designed rules and/or handcrafted features. In this work, we directly use the news sentiment scores derived from large language models, together with raw price and volume data, as observable inputs for reinforcement learning. These inputs are processed by sequence models such as recurrent neural networks or Transformers to make end-to-end trading decisions. We conduct experiments using the cryptocurrency market as an example and evaluate two representative reinforcement learning algorithms, namely Double Deep Q-Network (DDQN) and Group Relative Policy Optimization (GRPO). The results demonstrate that our news-aware approach, which does not depend on handcrafted features or manually designed rules, can achieve performance superior to market benchmarks. We further highlight the critical role of time-series information in this process.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.19173
  3. By: Marin François (LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris Sciences et Lettres - CNRS - Centre National de la Recherche Scientifique); Pierre-Emmanuel Arduin (DRM - Dauphine Recherches en Management - Université Paris Dauphine-PSL - PSL - Université Paris Sciences et Lettres - CNRS - Centre National de la Recherche Scientifique); Myriam Merad (LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris Sciences et Lettres - CNRS - Centre National de la Recherche Scientifique)
    Abstract: The automated identification and evaluation of potential attack paths within infrastructures is a critical aspect of cybersecurity risk assessment. However, existing methods become impractical when applied to complex infrastructures. While machine learning (ML) has proven effective in predicting the exploitation of individual vulnerabilities, its potential for full-path prediction remains largely untapped. This challenge stems from two key obstacles: the lack of adequate datasets for training the models and the dimensionality of the learning problem. To address the first issue, we provide a dataset of 1033 detailed environment graphs and associated attack paths, with the objective of supporting the community in advancing ML-based attack path prediction. To tackle the second, we introduce a novel Physics-Informed Graph Neural Network (PIGNN) architecture for attack path prediction. Our experiments demonstrate its effectiveness, achieving an F1 score of 0.9308 for full-path prediction. We also introduce a self-supervised learning architecture for initial access and impact prediction, achieving F1 scores of 0.9780 and 0.8214, respectively. Our results indicate that the PIGNN effectively captures adversarial patterns in high-dimensional spaces, demonstrating promising generalization potential towards fully automated assessments.
    Keywords: Attack path prediction, Deep learning, Physics-informed neural networks, Graph neural networks
    Date: 2025–04–10
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05323716
  4. By: Chi-Sheng Chen; Aidan Hung-Wen Tsai
    Abstract: This study presents a comprehensive empirical comparison between quantum machine learning (QML) and classical machine learning (CML) approaches in Automated Market Makers (AMM) and Decentralized Finance (DeFi) trading strategies through extensive backtesting on 10 models across multiple cryptocurrency assets. Our analysis encompasses classical ML models (Random Forest, Gradient Boosting, Logistic Regression), pure quantum models (VQE Classifier, QNN, QSVM), hybrid quantum-classical models (QASA Hybrid, QASA Sequence, QuantumRWKV), and transformer models. The results demonstrate that hybrid quantum models achieve superior overall performance with 11.2% average return and 1.42 average Sharpe ratio, while classical ML models show 9.8% average return and 1.47 average Sharpe ratio. The QASA Sequence hybrid model achieves the highest individual return of 13.99% with the best Sharpe ratio of 1.76, demonstrating the potential of quantum-classical hybrid approaches in AMM and DeFi trading strategies.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.15903
  5. By: Lucas Eduardo Pereira Teles; Carlos M. S. Figueiredo
    Abstract: This article presents a comparative study of large language models (LLMs) in the task of sentiment analysis of financial market news. This work aims to analyze the performance difference of these models in this important natural language processing task within the context of finance. LLM models are compared with classical approaches, allowing for the quantification of the benefits of each tested model or approach. Results show that large language models outperform classical models in the vast majority of cases.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.15929
  6. By: Abraham Atsiwo
    Abstract: This study presents a three-step machine learning framework to predict bubbles in the S&P 500 stock market by combining financial news sentiment with macroeconomic indicators. Building on traditional econometric approaches, the proposed approach predicts bubble formation by integrating textual and quantitative data sources. In the first step, bubble periods in the S&P 500 index are identified using a right-tailed unit root test, a widely recognized real-time bubble detection method. The second step extracts sentiment features from large-scale financial news articles using natural language processing (NLP) techniques, which capture investors' expectations and behavioral patterns. In the final step, ensemble learning methods are applied to predict bubble occurrences based on the sentiment-based and macroeconomic predictors. Model performance is evaluated through k-fold cross-validation and compared against benchmark machine learning algorithms. Empirical results indicate that the proposed three-step ensemble approach significantly improves predictive accuracy and robustness, providing valuable early warning insights for investors, regulators, and policymakers in mitigating systemic financial risks.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.16636
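The right-tailed unit root test in the first step can be illustrated with a simplified supremum ADF statistic in the spirit of the Phillips-Shi-Yu procedure. The sketch below (plain OLS, no lag augmentation, illustrative window length) flags explosive behavior when the statistic is large and positive:

```python
import numpy as np

def adf_tstat(y):
    """t-statistic on rho in dy_t = a + rho * y_{t-1} + e_t (no lag terms; sketch)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = (e @ e) / (len(dy) - 2)                 # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)            # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

def sup_adf(y, min_window=30):
    """Supremum of ADF statistics over expanding windows; large positive
    values signal explosive (bubble-like) dynamics."""
    return max(adf_tstat(y[:t]) for t in range(min_window, len(y) + 1))
```

In practice the statistic is compared, right-tailed, against simulated critical values to date-stamp bubble episodes; the paper's implementation may differ in lag order and window design.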
  7. By: Yaxuan Kong; Yoontae Hwang; Marcus Kaiser; Chris Vryonides; Roel Oomen; Stefan Zohren
    Abstract: We introduce M2VN: Multi-Modal Volatility Network, a novel deep learning-based framework for financial volatility forecasting that unifies time series features with unstructured news data. M2VN leverages the representational power of deep neural networks to address two key challenges in this domain: (i) aligning and fusing heterogeneous data modalities, numerical financial data and textual information, and (ii) mitigating look-ahead bias that can undermine the validity of financial models. To achieve this, M2VN combines open-source market features with news embeddings generated by Time Machine GPT, a recently introduced point-in-time LLM, ensuring temporal integrity. An auxiliary alignment loss is introduced to enhance the integration of structured and unstructured data within the deep learning architecture. Extensive experiments demonstrate that M2VN consistently outperforms existing baselines, underscoring its practical value for risk management and financial decision-making in dynamic markets.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.20699
  8. By: Amarendra Mohan (IIT Kharagpur); Ameer Tamoor Khan (University of Copenhagen); Shuai Li (University of Oulu); Xinwei Cao (Jiangnan University); Zhibin Li (Chengdu University of Information Technology)
    Abstract: Cross-market portfolio optimization has become increasingly complex with the globalization of financial markets and the growth of high-frequency, multi-dimensional datasets. Traditional artificial neural networks, while effective in certain portfolio management tasks, often incur substantial computational overhead and lack the temporal processing capabilities required for large-scale, multi-market data. This study investigates the application of Spiking Neural Networks (SNNs) for cross-market portfolio optimization, leveraging neuromorphic computing principles to process equity data from both the Indian (Nifty 500) and US (S&P 500) markets. A five-year dataset comprising approximately 1,250 trading days of daily stock prices was systematically collected via the Yahoo Finance API. The proposed framework integrates Leaky Integrate-and-Fire neuron dynamics with adaptive thresholding, spike-timing-dependent plasticity, and lateral inhibition to enable event-driven processing of financial time series. Dimensionality reduction is achieved through hierarchical clustering, while population-based spike encoding and multiple decoding strategies support robust portfolio construction under realistic trading constraints, including cardinality limits, transaction costs, and adaptive risk aversion. Experimental evaluation demonstrates that the SNN-based framework delivers superior risk-adjusted returns and reduced volatility compared to ANN benchmarks, while substantially improving computational efficiency. These findings highlight the promise of neuromorphic computation for scalable, efficient, and robust portfolio optimization across global financial markets.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.15921
  9. By: Fernando Alonso; Álvaro Leitao; Carlos Vázquez
    Abstract: The ongoing progress in quantum technologies has fueled a sustained exploration of their potential applications across various domains. One particularly promising field is quantitative finance, where a central challenge is the pricing of financial derivatives, traditionally addressed through Monte Carlo integration techniques. In this work, we introduce two hybrid classical-quantum methods to address the option pricing problem. These approaches rely on reconstructing Fourier series representations of statistical distributions from the outputs of Quantum Machine Learning (QML) models based on Parametrized Quantum Circuits (PQCs). We analyze the impact of data size and PQC dimensionality on performance. Quantum Accelerated Monte Carlo (QAMC) is employed as a benchmark to quantitatively assess the proposed models in terms of computational cost and accuracy in the extraction of Fourier coefficients. Through the numerical experiments, we show that the proposed methods achieve remarkable accuracy, becoming a competitive quantum alternative for derivatives valuation.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.19494
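The Fourier-reconstruction idea has a purely classical analogue: cosine-series (COS-style) coefficients of a density can be estimated as sample averages and then used to rebuild the density, against which option payoffs can be integrated. The sketch below recovers a standard normal density this way; the truncation range, number of terms, and sample size are illustrative, and in the paper the coefficients come from a PQC-based QML model rather than raw samples:

```python
import numpy as np

a, b, K = -6.0, 6.0, 40                  # truncation range and series length
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)         # draws standing in for model samples

ks = np.arange(K)
# Cosine-series coefficients F_k = (2/(b-a)) * E[cos(k*pi*(X-a)/(b-a))],
# estimated by Monte Carlo; the k = 0 term enters the series with weight 1/2.
F = (2.0 / (b - a)) * np.cos(np.outer(ks, x - a) * np.pi / (b - a)).mean(axis=1)
F[0] *= 0.5

# Rebuild the density on a grid from the truncated cosine series.
grid = np.linspace(-3.0, 3.0, 61)
density = (F[:, None] * np.cos(np.outer(ks, grid - a) * np.pi / (b - a))).sum(axis=0)
```

With a smooth density and a wide enough truncation range, a few dozen terms already reproduce the density closely; the estimation error is then dominated by the Monte Carlo noise in the coefficients.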
  10. By: Alok Das; Kiseop Lee
    Abstract: Deep hedging uses recurrent neural networks to hedge financial products that cannot be fully hedged in incomplete markets. Previous work in this area focuses on minimizing some measure of quadratic hedging error by calculating pathwise gradients, but doing so requires large batch sizes and can make training effective models in a reasonable amount of time challenging. We show that by adding certain topological features, we can reduce batch sizes substantially and make training these models more practically feasible without greatly compromising hedging performance.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.16938
  11. By: Brandon Kaplowitz
    Abstract: This paper demonstrates how reinforcement learning can explain two puzzling empirical patterns in household consumption behavior during economic downturns. I develop a model where agents use Q-learning with neural network approximation to make consumption-savings decisions under income uncertainty, departing from standard rational expectations assumptions. The model replicates two key findings from recent literature: (1) unemployed households with previously low liquid assets exhibit substantially higher marginal propensities to consume (MPCs) out of stimulus transfers compared to high-asset households (0.50 vs 0.34), even when neither group faces borrowing constraints, consistent with Ganong et al. (2024); and (2) households with more past unemployment experiences maintain persistently lower consumption levels after controlling for current economic conditions, a "scarring" effect documented by Malmendier and Shen (2024). Unlike existing explanations based on belief updating about income risk or ex-ante heterogeneity, the reinforcement learning mechanism generates both higher MPCs and lower consumption levels simultaneously through value function approximation errors that evolve with experience. Simulation results closely match the empirical estimates, suggesting that adaptive learning through reinforcement learning provides a unifying framework for understanding how past experiences shape current consumption behavior beyond what current economic conditions would predict.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.20748
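The mechanism can be conveyed with a stripped-down tabular analogue of this setup (the paper uses Q-learning with neural network approximation; the asset grid, income process, and parameters below are all illustrative): an agent learns a consumption policy from experienced rewards rather than from known model equations, so its value estimates carry the imprint of which states it has visited.

```python
import numpy as np

rng = np.random.default_rng(0)
assets = np.linspace(0.1, 10.0, 20)          # discretized liquid-asset states
shares = np.linspace(0.1, 0.9, 5)            # fraction of cash-on-hand consumed
beta, R, alpha, eps = 0.95, 1.02, 0.1, 0.2   # discount, gross return, step size, exploration

Q = np.zeros((len(assets), len(shares)))     # state-action value estimates

def nearest(grid, v):
    return int(np.argmin(np.abs(grid - v)))

s = nearest(assets, 1.0)
for _ in range(20_000):
    y = rng.choice([0.5, 1.5])                           # i.i.d. income draw
    coh = assets[s] + y                                  # cash on hand
    a = rng.integers(len(shares)) if rng.random() < eps else int(np.argmax(Q[s]))
    c = shares[a] * coh
    s_next = nearest(assets, R * (coh - c))              # savings carried forward
    # Q-learning update with log utility as the per-period reward
    Q[s, a] += alpha * (np.log(c) + beta * Q[s_next].max() - Q[s, a])
    s = s_next
```

Approximation errors in Q that depend on the agent's visit history are the ingredient that, in the paper's richer neural-network version, generates experience-dependent MPCs and consumption scarring.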
  12. By: David Bruns-Smith; Emi Nakamura; Jón Steinsson
    Abstract: A canonical finding from earlier research is that the cross-sectional variance of income increases sharply with age (Deaton and Paxson, 1994). However, the trend in this age profile is not separately identified from time and cohort trends. Conventional methods solve this identification problem by ruling out "time effects." This strong assumption is rejected by the data. We propose a new proxy variable machine learning approach to disentangle age, time, and cohort effects. Using this method, we estimate a significantly smaller slope of the age profile of income variance for the US than conventional methods, as well as less erratic slopes for 11 other countries.
    JEL: E20 J20
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34380
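The identification problem arises because age, time, and cohort are linked by an exact linear identity (age = year - cohort), so their linear effects cannot be separated without further restrictions. A minimal numerical illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
year = rng.integers(1980, 2020, size=500).astype(float)
cohort = rng.integers(1930, 1980, size=500).astype(float)
age = year - cohort                     # exact identity: age = year - cohort

# The design matrix [1, age, year, cohort] is rank-deficient: the combination
# age - year + cohort is identically zero, so the three linear trends cannot
# be separately estimated without extra assumptions.
X = np.column_stack([np.ones(500), age, year, cohort])
rank = np.linalg.matrix_rank(X)         # 3, not 4
```

Conventional estimators break the tie by assuming away time effects; the paper instead brings in proxy variables to achieve identification without that restriction.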
  13. By: Charidimos Papadakis; Angeliki Dimitriou; Giorgos Filandrianos; Maria Lymperaiou; Konstantinos Thomas; Giorgos Stamou
    Abstract: Large language models show promise for financial decision-making, yet deploying them as autonomous trading agents raises fundamental challenges: how to adapt instructions when rewards arrive late and obscured by market noise, how to synthesize heterogeneous information streams into coherent decisions, and how to bridge the gap between model outputs and executable market actions. We present ATLAS (Adaptive Trading with LLM AgentS), a unified multi-agent framework that integrates structured information from markets, news, and corporate fundamentals to support robust trading decisions. Within ATLAS, the central trading agent operates in an order-aware action space, ensuring that outputs correspond to executable market orders rather than abstract signals. The agent can incorporate feedback while trading using Adaptive-OPRO, a novel prompt-optimization technique that dynamically adapts the prompt by incorporating real-time, stochastic feedback, leading to increasing performance over time. Across regime-specific equity studies and multiple LLM families, Adaptive-OPRO consistently outperforms fixed prompts, while reflection-based feedback fails to provide systematic gains.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.15949
  14. By: Ji Ma; Albert Casella
    Abstract: Public and nonprofit organizations often hesitate to adopt AI tools because most models are opaque, and standard approaches typically analyze aggregate patterns rather than offering actionable, case-level guidance. This study tests a practitioner-in-the-loop workflow that pairs transparent decision-tree models with large language models (LLMs) to improve predictive accuracy, interpretability, and the generation of practical insights. Using data from an ongoing college-success program, we build interpretable decision trees to surface key predictors. We then provide each tree's structure to an LLM, enabling it to reproduce case-level predictions grounded in the transparent models. Practitioners participate throughout feature engineering, model design, explanation review, and usability assessment, ensuring that field expertise informs the analysis at every stage. Results show that integrating transparent models, LLMs, and practitioner input yields accurate, trustworthy, and actionable case-level evaluations, offering a viable pathway for responsible AI adoption in the public and nonprofit sectors.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.19799
  15. By: Chujun He; Zhonghao Huang; Xiangguo Li; Ye Luo; Kewei Ma; Yuxuan Xiong; Xiaowei Zhang; Mingyang Zhao
    Abstract: We present a multi-agent, AI-driven framework for fundamental investing that integrates macro indicators, industry-level and firm-specific information to construct optimized equity portfolios. The architecture comprises: (i) a Macro agent that dynamically screens and weights sectors based on evolving economic indicators and industry performance; (ii) four firm-level agents -- Fundamental, Technical, Report, and News -- that conduct in-depth analyses of individual firms to ensure both breadth and depth of coverage; (iii) a Portfolio agent that uses reinforcement learning to combine the agent outputs into a unified policy to generate the trading strategy; and (iv) a Risk Control agent that adjusts portfolio positions in response to market volatility. We evaluate the system on the constituents of the CSI 300 Index of China's A-share market and find that it consistently outperforms standard benchmarks and a state-of-the-art multi-agent trading system on risk-adjusted returns and drawdown control. Our core contribution is a hierarchical multi-agent design that links top-down macro screening with bottom-up fundamental analysis, offering a robust and extensible approach to factor-based portfolio construction.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.21147
  16. By: Rahul Billakanti; Minchul Shin
    Abstract: We propose a simple binarization of predictors—an “at-risk” transformation—as an alternative to the standard practice of using continuous, standardized variables in recession forecasting models. By converting predictors into indicators of unusually weak states, we demonstrate their ability to capture the discrete nature of rare events such as U.S. recessions. Using a large panel of monthly U.S. macroeconomic and financial data, we show that binarized predictors consistently improve out-of-sample forecasting performance—often making linear models competitive with flexible machine learning methods—and that the gains are particularly pronounced around the onset of recessions.
    Keywords: Recession Forecasting; Machine Learning; Feature Engineering; At-Risk Transformation; Binarized Predictors; Diffusion Index
    JEL: C25 C53 E32 E37
    Date: 2025–10–30
    URL: https://d.repec.org/n?u=RePEc:fip:fedpwp:102004
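The at-risk transformation can be sketched as a trailing-quantile binarization (the window length and quantile here are illustrative; the paper's exact rule may differ):

```python
import numpy as np

def at_risk(x, window=120, q=0.2):
    """Return a 0/1 series that is 1 when the current value lies in the
    weakest q-quantile of its trailing window -- an 'unusually weak state'."""
    x = np.asarray(x, dtype=float)
    flags = np.zeros_like(x)
    for t in range(window, len(x)):
        flags[t] = 1.0 if x[t] <= np.quantile(x[t - window:t], q) else 0.0
    return flags
```

The binarized series can then replace the continuous, standardized predictor in a linear recession-probability model, which is where the paper finds the forecasting gains.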
  17. By: Diana Barro (Ca’ Foscari University of Venice); Antonella Basso (Ca’ Foscari University of Venice); Marco Corazza (Ca’ Foscari University of Venice); Guglielmo Alessandro Visentin (Henley Business School, University of Reading)
    Abstract: We propose a hybrid approach that combines Neural Networks with a Vector Autoregression (VAR) model to generate long-term forecasts of time series. We apply this methodology to forecast the impact of shifts in monetary policies within the Euro area on a comprehensive set of macroeconomic variables. Our analysis begins with a standard (linear) VAR model, which is then enhanced by incorporating Neural Networks to generate long-term forecasts for key variables such as the interest rate, inflation, real output, narrow money, exchange rate, and corporate bond spread. The results suggest that a Neural Network-VAR model offers improvements over the traditional linear VAR for forecasting certain macroeconomic variables in the long run. However, due to the limited sample size, the nonlinear model does not consistently outperform the linear VAR.
    Keywords: Forecasting; VAR; Neural Networks; Monetary policies; Euro area
    JEL: C32 C45 C53 E52
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ven:wpaper:2025:24
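The linear benchmark the hybrid model starts from is a standard VAR. A minimal VAR(1) estimated by OLS, with iterated long-horizon forecasts, can be sketched as follows (the two-variable setup and lag order are illustrative; the paper's system is larger):

```python
import numpy as np

def fit_var1(Y):
    """OLS for Y_t = c + A @ Y_{t-1} + e_t, with Y of shape (T, n)."""
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T                 # intercept c, coefficient matrix A

def iterate_forecast(c, A, y0, horizon):
    """Iterate the fitted VAR forward to produce long-term forecasts."""
    path = [np.asarray(y0, dtype=float)]
    for _ in range(horizon):
        path.append(c + A @ path[-1])
    return np.array(path[1:])
```

With a stable coefficient matrix the iterated forecast converges to the unconditional mean (I - A)^{-1} c; in the paper's hybrid, neural networks augment this linear map when generating the long-run forecasts.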
  18. By: Yimeng Qiu; Feihuang Fang
    Abstract: We study whether liquidity and volatility proxies of a core set of cryptoassets generate spillovers that forecast market-wide risk. Our empirical framework integrates three statistical layers: (A) interactions between core liquidity and returns, (B) principal-component relations linking liquidity and returns, and (C) volatility-factor projections that capture cross-sectional volatility crowding. The analysis is complemented by vector autoregression impulse responses and forecast error variance decompositions (see Granger 1969; Sims 1980), heterogeneous autoregressive models with exogenous regressors (HAR-X, Corsi 2009), and a leakage-safe machine learning protocol using temporal splits, early stopping, validation-only thresholding, and SHAP-based interpretation. Using daily data from 2021 to 2025 (1462 observations across 74 assets), we document statistically significant Granger-causal relationships across layers and moderate out-of-sample predictive accuracy. We report the most informative figures, including the pipeline overview, Layer A heatmap, Layer C robustness analysis, vector autoregression variance decompositions, and the test-set precision-recall curve. Full data and figure outputs are provided in the artifact repository.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.20066
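The core ingredient of the leakage-safe protocol, temporal splitting, can be sketched as follows (fold sizes and counts are illustrative): every training index strictly precedes every test index, so no future information leaks into fitting or thresholding.

```python
import numpy as np

def temporal_splits(n_obs, n_folds=5, min_train=100):
    """Yield (train_idx, test_idx) pairs in which training data always
    precede the test block, ruling out look-ahead leakage."""
    fold = (n_obs - min_train) // n_folds
    for k in range(n_folds):
        end_train = min_train + k * fold
        yield np.arange(end_train), np.arange(end_train, end_train + fold)
```

Validation-only thresholding then tunes any decision cutoff on held-out data inside the training span, never on the test block.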
  19. By: Brandon Istenes
    Abstract: The Levy Institute Microsimulation Model (LIMM) is a tool used for policy simulations to estimate ex-ante the employment and income effects of sectoral investments. In Istenes (2023), a simple implementation of the LIMM for New York State initially had difficulty producing realistic conditional distributions of allocated jobs. This paper identifies the sources of that problem, which produces significant distortions to the characteristic distributions of job recipients. Solutions to the problem are presented with theoretical and empirical analysis. The relevance of this problem to other LIMM-based models is discussed; while it is theoretically relevant, it is unlikely to have a substantial impact on results.
    Keywords: Employment Simulation; Statistical Matching; LIMM
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:lev:wrkpap:wp_1079
  20. By: Diana Barro (Ca’ Foscari University of Venice); Roberto Casarin (Ca’ Foscari University of Venice; European Center for Living Technology); Anthony Osuntuyi (Ca’ Foscari University of Venice)
    Abstract: Within the large class of robust optimization methods, stochastic programming and stochastic optimization have gained popularity thanks to the theoretical guarantees of their algorithms. This paper focuses on simulated annealing, a stochastic algorithm for numerical optimization problems with good global exploration ability. However, convergence to the global optimum cannot always be guaranteed without a slowly decreasing cooling schedule, which ultimately hurts the convergence speed of the algorithm. This deficiency is overcome in this study by a new stochastic optimization algorithm built on generalized Metropolis and simulated annealing (SA) algorithms. The ergodicity of the proposed constrained multiple-try Metropolis SA is proved. Several constrained optimization benchmarks and challenging real-world high-dimensional problems from finance are considered to assess the performance of the proposed algorithm.
    Keywords: Simulated annealing, multiple-try Metropolis, constrained optimization, penalty method
    JEL: C61 C63 G11
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ven:wpaper:2025:20
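The flavor of a multiple-try Metropolis move inside simulated annealing can be conveyed with a small sketch. The paper's generalized-Metropolis scheme, penalty-based constraint handling, and cooling schedule are more elaborate; everything below is illustrative:

```python
import numpy as np

def mtm_sa(f, x0, n_iter=2000, n_try=5, step=0.5, t0=1.0, seed=0):
    """Minimize f by simulated annealing with a multiple-try move: draw
    several candidates, select one with probability proportional to its
    Boltzmann weight, then apply a Metropolis accept/reject step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best, fbest = x.copy(), fx
    for k in range(1, n_iter + 1):
        T = t0 / np.log(np.e + k)                    # slow cooling schedule
        cands = x + step * rng.standard_normal((n_try, x.size))
        fc = np.array([f(c) for c in cands])
        w = np.exp(-(fc - fc.min()) / T)             # Boltzmann selection weights
        j = rng.choice(n_try, p=w / w.sum())
        if fc[j] <= fx or rng.random() < np.exp((fx - fc[j]) / T):
            x, fx = cands[j], fc[j]                  # Metropolis acceptance
        if fx < fbest:
            best, fbest = x.copy(), fx
    return best, fbest
```

Constraints would enter by adding a penalty term to f, as in the paper's penalty-method formulation.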
  22. By: Alexandre Boumezoued; Adel Cherchali; Vincent Lemaire; Gilles Pagès; Mathieu Truc
    Abstract: Estimating risk measures such as large loss probabilities and Value-at-Risk is fundamental in financial risk management and often relies on computationally intensive nested Monte Carlo methods. While Multi-Level Monte Carlo (MLMC) techniques and their weighted variants are typically more efficient, their effectiveness tends to deteriorate when dealing with irregular functions, notably indicator functions, which are intrinsic to these risk measures. We address this issue by introducing a novel MLMC parametrization that significantly improves performance in practical, non-asymptotic settings while maintaining theoretical asymptotic guarantees. We also prove that antithetic sampling of MLMC levels enhances efficiency regardless of the regularity of the underlying function. Numerical experiments motivated by the calculation of economic capital in a life insurance context confirm the practical value of our approach for estimating loss probabilities and quantiles, bridging theoretical advances and practical requirements in financial risk estimation.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.18995
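The benefit of antithetic sampling can be seen already in a plain Monte Carlo toy, estimating E[exp(Z)] for standard normal Z (true value exp(1/2)): pairing each draw z with -z uses negatively correlated evaluations, which lowers estimator variance. The paper applies the same pairing idea across MLMC levels; this sketch only illustrates the underlying mechanism.

```python
import numpy as np

def mc_plain(n, seed=0):
    """Plain Monte Carlo estimate of E[exp(Z)], Z ~ N(0, 1)."""
    z = np.random.default_rng(seed).standard_normal(n)
    return np.exp(z).mean()

def mc_antithetic(n, seed=0):
    """Same estimator with antithetic pairs (z, -z); exp(z) and exp(-z)
    are negatively correlated, so the paired average has lower variance
    for the same number of normal evaluations."""
    z = np.random.default_rng(seed).standard_normal(n // 2)
    return 0.5 * (np.exp(z) + np.exp(-z)).mean()
```

Both estimators are unbiased; the variance reduction follows from Cov(exp(Z), exp(-Z)) = 1 - e < 0.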
  22. By: Dong Yan; Ke Zhou; Zirun Wang; Xin-Jiang He
    Abstract: In this paper, we investigate a portfolio selection problem with transaction costs under a two-factor stochastic volatility structure, where volatility follows a mean-reverting process with a stochastic mean-reversion level. The model incorporates both proportional exogenous transaction costs and endogenous costs modeled by a stochastic liquidity risk process. Using an option-implied approach, we extract an S-shaped utility function that reflects investor behavior and apply its concave envelope transformation to handle the non-concavity. The resulting problem reduces to solving a five-dimensional nonlinear Hamilton-Jacobi-Bellman equation. We employ a deep learning-based policy iteration scheme to numerically compute the value function and the optimal policy. Numerical experiments are conducted to analyze how both types of transaction costs and stochastic volatility affect optimal investment decisions.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2510.21156

This nep-cmp issue is ©2025 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.