nep-cmp New Economics Papers
on Computational Economics
Issue of 2025–06–23
thirty papers chosen by
Stan Miles, Thompson Rivers University


  1. TaxAgent: How Large Language Model Designs Fiscal Policy By Jizhou Wang; Xiaodan Fang; Lei Huang; Yongfeng Huang
  2. Can Artificial Intelligence Trade the Stock Market? By Jędrzej Maskiewicz; Paweł Sakowski
  3. GenAI in Entrepreneurship: a systematic review of generative artificial intelligence in entrepreneurship research: current issues and future directions By Anna Kusetogullari; Huseyin Kusetogullari; Martin Andersson; Tony Gorschek
  4. Forecasting the Moroccan Stock Market: A Theoretical Approach Integrating Macroeconomic and Sentiment Data through Deep Learning By Imad Talhartit; Sanae Ait Jillali; Mounime El Kabbouri
  5. Training NTK to Generalize with KARE By Johannes Schwab; Bryan T. Kelly; Semyon Malamud; Teng Andrea Xu
  6. Interpretable LLMs for Credit Risk: A Systematic Review and Taxonomy By Muhammed Golec; Maha AlabdulJalil
  7. Enhancing the Merger Simulation Toolkit with ML/AI By Harold D. Chiang; Jack Collison; Lorenzo Magnolfi; Christopher Sullivan
  8. Applying Informer for Option Pricing: A Transformer-Based Approach By Feliks Bańka; Jarosław A. Chudziak
  9. An Interpretable Machine Learning Approach in Predicting Inflation Using Payments System Data: A Case Study of Indonesia By Wishnu Badrawani
  10. Deep Learning Enhanced Multivariate GARCH By Haoyuan Wang; Chen Liu; Minh-Ngoc Tran; Chao Wang
  11. NewsNet-SDF: Stochastic Discount Factor Estimation with Pretrained Language Model News Embeddings via Adversarial Networks By Shunyao Wang; Ming Cheng; Christina Dan Wang
  12. Neural Jumps for Option Pricing By Duosi Zheng; Hanzhong Guo; Yanchu Liu; Wei Huang
  13. Matching Markets Meet LLMs: Algorithmic Reasoning with Ranked Preferences By Hadi Hosseini; Samarth Khanna; Ronak Singh
  14. Machine learning and financial inclusion: Evidence from credit risk assessment of small-business loans in China By Yang, Zhang; Jianxiong Lin; Yihe Qian; Lianjie Shu
  15. EDINET-Bench: Evaluating LLMs on Complex Financial Tasks using Japanese Financial Statements By Issa Sugiura; Takashi Ishida; Taro Makino; Chieko Tazuke; Takanori Nakagawa; Kosuke Nakago; David Ha
  16. Deep Reinforcement Learning for Investor-Specific Portfolio Optimization: A Volatility-Guided Asset Selection Approach By Arishi Orra; Aryan Bhambu; Himanshu Choudhary; Manoj Thakur; Selvaraju Natarajan
  17. S-shaped Utility Maximization with VaR Constraint and Partial Information By Dongmei Zhu; Ashley Davey; Harry Zheng
  18. Orthogonality-Constrained Deep Instrumental Variable Model for Causal Effect Estimation By Shunxin Yao
  19. How Small is Big Enough? Open Labeled Datasets and the Development of Deep Learning By Daniel Souza; Aldo Geuna; Jeff Rodríguez
  20. Exploring Microstructural Dynamics in Cryptocurrency Limit Order Books: Better Inputs Matter More Than Stacking Another Hidden Layer By Haochuan Wang
  21. Augmenting the availability of historical GDP per capita estimates through machine learning By Philipp Koch; Viktor Stojkoski; César A. Hidalgo
  22. Conversational Analysis with AI - CA to the Power of AI: Rethinking Coding in Qualitative Analysis By Friese, Susanne PhD
  23. A primal-dual price-optimization method for computing equilibrium prices in mean-field games models By Xu Wang; Samy Wu Fung; Levon Nurbekyan
  24. Modeling Knowledge and Decision-Making with the Conditional Reasoning Framework By Moreno, William
  25. Assessing the Dynamics of the Coffee Value Chain in Davao del Sur: An Agent-Based Modeling Approach By Lucia Stephanie B. Sibala; Novy Aila B. Rivas; Giovanna Fae R. Oguis
  26. Is relevancy everything? A deep-learning approach to understand the effect of image-text congruence By Cao, Jingcun; Li, Xiaolin; Zhang, Lingling
  27. Application of the theory of “Military campaign success” based on the genetic algorithm of “The Art of War” to the war between Israel and Iran By MENG, WEI; Zhang, Xiaoyin
  28. Litigation Risk and the Valuation of Legal Claims: A Real Option Approach By Jose Portela; Eduardo S. Schwartz; Jaime Aparicio Garcia
  29. Global Socio-economic Resilience to Natural Disasters By Robin Middelanis; Bramka Arga Jafino; Ruth Hill; Minh Cong Nguyen; Stephane Hallegatte
  30. Market pathways to food systems transformation toward healthy and equitable diets through convergent innovation By Jeroen Struben; Derek Chan; Byomkesh Talukder; Laurette Dubé

  1. By: Jizhou Wang; Xiaodan Fang; Lei Huang; Yongfeng Huang
    Abstract: Economic inequality is a global challenge, intensifying disparities in education, healthcare, and social stability. Traditional systems like the U.S. federal income tax reduce inequality but lack adaptability. Although models like the Saez Optimal Taxation adjust dynamically, they fail to address taxpayer heterogeneity and irrational behavior. This study introduces TaxAgent, a novel integration of large language models (LLMs) with agent-based modeling (ABM) to design adaptive tax policies. In our macroeconomic simulation, heterogeneous H-Agents (households) simulate real-world taxpayer behaviors while the TaxAgent (government) utilizes LLMs to iteratively optimize tax rates, balancing equity and productivity. Benchmarked against Saez Optimal Taxation, U.S. federal income taxes, and free markets, TaxAgent achieves superior equity-efficiency trade-offs. This research offers a novel taxation solution and a scalable, data-driven framework for fiscal policy evaluation.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.02838
  2. By: Jędrzej Maskiewicz; Paweł Sakowski
    Abstract: The paper explores the use of Deep Reinforcement Learning (DRL) in stock market trading, focusing on two algorithms, Double Deep Q-Network (DDQN) and Proximal Policy Optimization (PPO), and compares them with a Buy and Hold benchmark. It evaluates these algorithms across three currency pairs, the S&P 500 index, and Bitcoin, using daily data over the period 2019-2023. The results demonstrate DRL's effectiveness in trading and its ability to manage risk by strategically avoiding trades in unfavorable conditions, providing a substantial edge in risk-adjusted returns over classical approaches based on supervised learning.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.04658
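    A minimal sketch of the kind of risk-adjusted comparison the abstract above describes: annualized Sharpe ratios of a trading policy that may stay out of the market versus a passive buy-and-hold position. The return series and the policy's trade/no-trade decisions below are simulated placeholders, not the paper's data or agents.
```python
import numpy as np

def sharpe_ratio(daily_returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe ratio from a series of daily simple returns."""
    excess = np.asarray(daily_returns) - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Illustrative placeholders: daily returns of a DRL-style policy (which may hold
# cash on days it declines to trade) versus a buy-and-hold position.
rng = np.random.default_rng(0)
asset_returns = rng.normal(0.0004, 0.012, size=1250)   # roughly five years of daily data
policy_in_market = rng.random(1250) > 0.3              # agent's trade/no-trade decisions
drl_returns = np.where(policy_in_market, asset_returns, 0.0)

print("Buy-and-hold Sharpe:", round(sharpe_ratio(asset_returns), 2))
print("DRL policy Sharpe:  ", round(sharpe_ratio(drl_returns), 2))
```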
  3. By: Anna Kusetogullari; Huseyin Kusetogullari; Martin Andersson; Tony Gorschek
    Abstract: Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are recognized to have significant effects on industry and business dynamics, not least because of their impact on the preconditions for entrepreneurship. There is still a lack of knowledge of GenAI as a theme in entrepreneurship research. This paper presents a systematic literature review aimed at identifying and analyzing the evolving landscape of research on the effects of GenAI on entrepreneurship. We analyze 83 peer-reviewed articles obtained from leading academic databases: Web of Science and Scopus. Using natural language processing and unsupervised machine learning techniques with TF-IDF vectorization, Principal Component Analysis (PCA), and hierarchical clustering, five major thematic clusters are identified: (1) Digital Transformation and Behavioral Models, (2) GenAI-Enhanced Education and Learning Systems, (3) Sustainable Innovation and Strategic AI Impact, (4) Business Models and Market Trends, and (5) Data-Driven Technological Trends in Entrepreneurship. Based on the review, we discuss future research directions, gaps in the current literature, as well as ethical concerns raised in the literature. We highlight the need for more macro-level research on GenAI and LLMs as external enablers for entrepreneurship and for research on effective regulatory frameworks that facilitate business experimentation, innovation, and further technology development.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.05523
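    The text-mining pipeline named in the abstract (TF-IDF vectorization, PCA, hierarchical clustering) can be sketched with scikit-learn roughly as below. The toy abstracts and the two-cluster cut are illustrative stand-ins for the 83 articles and five thematic clusters in the review.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Hypothetical stand-in corpus; the review clusters 83 peer-reviewed articles.
abstracts = [
    "Generative AI lowers barriers to opportunity recognition for nascent entrepreneurs.",
    "LLM-based tutoring systems reshape entrepreneurship education and learning.",
    "Business model innovation driven by generative AI adoption in startups.",
    "Sustainable innovation strategies under AI-enabled digital transformation.",
    "Market trend analysis of data-driven entrepreneurial ecosystems.",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts).toarray()
X_reduced = PCA(n_components=2).fit_transform(X)                        # dimensionality reduction
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X_reduced)   # thematic clusters
print(labels)
```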
  4. By: Imad Talhartit (Université Hassan 1er [Settat], Ecole Nationale de Commerce et Gestion - Settat, Laboratory of Finance, Audit and Organizational Governance Research); Sanae Ait Jillali (Université Hassan 1er [Settat], Ecole Nationale de Commerce et Gestion - Settat, Laboratory of Finance, Audit and Organizational Governance Research); Mounime El Kabbouri (Université Hassan 1er [Settat], Ecole Nationale de Commerce et Gestion - Settat, Laboratory of Finance, Audit and Organizational Governance Research)
    Abstract: In today's data-driven economy, predicting stock market behavior has become a key focus for both finance professionals and academics. Traditionally reliant on historical and economic data, stock price forecasting is now being enhanced by AI technologies, especially Deep Learning and Natural Language Processing (NLP), which allow the integration of qualitative data like news sentiment and investor opinions. Deep Learning uses multi-layered neural networks to analyze complex patterns, while NLP enables machines to interpret human language, making it useful for extracting sentiment from media sources. Though most research has focused on developed markets, emerging economies like Morocco offer a unique context due to their evolving financial systems and data limitations. This study takes a theoretical and exploratory approach, aiming to conceptually examine how macroeconomic indicators and sentiment analysis can be integrated using deep learning models to enhance stock price prediction in Morocco. Rather than building a model, the paper reviews literature, evaluates data sources, and identifies key challenges and opportunities. Ultimately, the study aims to bridge AI techniques with financial theory in an emerging market setting, providing a foundation for future empirical research and interdisciplinary collaboration.
    Keywords: Stock Price Prediction, Deep Learning, Natural Language Processing (NLP), Sentiment Analysis, Macroeconomic Indicators, Emerging Markets, Moroccan Financial Market
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05094029
  5. By: Johannes Schwab (École Polytechnique Fédérale de Lausanne (EPFL)); Bryan T. Kelly (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER)); Semyon Malamud (Ecole Polytechnique Federale de Lausanne; Centre for Economic Policy Research (CEPR); Swiss Finance Institute); Teng Andrea Xu (AQR Capital Management, LLC)
    Abstract: The performance of the data-dependent neural tangent kernel (NTK; Jacot et al. (2018)) associated with a trained deep neural network (DNN) often matches or exceeds that of the full network. This implies that DNN training via gradient descent implicitly performs kernel learning by optimizing the NTK. In this paper, we propose instead to optimize the NTK explicitly. Rather than minimizing empirical risk, we train the NTK to minimize its generalization error using the recently developed Kernel Alignment Risk Estimator (KARE; Jacot et al. (2020)). Our simulations and real data experiments show that NTKs trained with KARE consistently match or significantly outperform the original DNN and the DNN-induced NTK (the after-kernel). These results suggest that explicitly trained kernels can outperform traditional end-to-end DNN optimization in certain settings, challenging the conventional dominance of DNNs. We argue that explicit training of the NTK is a form of over-parametrized feature learning.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:chf:rpseri:rp2551
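    A sketch of the objective referenced in the abstract: the Kernel Alignment Risk Estimator of Jacot et al. (2020) for a kernel matrix K, targets y, and ridge parameter lambda. The formula coded below reflects my reading of that paper (up to normalization conventions); minimizing it over the kernel's parameters is what "training the NTK with KARE" amounts to.
```python
import numpy as np

def kare(K, y, lam):
    """Kernel Alignment Risk Estimator (after Jacot et al., 2020; normalization
    conventions may differ slightly from the paper): the regularized quadratic
    form y'(K/n + lam*I)^{-2} y / n divided by the squared normalized trace of
    the resolvent (K/n + lam*I)^{-1}."""
    n = K.shape[0]
    resolvent = np.linalg.inv(K / n + lam * np.eye(n))
    numerator = y @ resolvent @ resolvent @ y / n
    denominator = (np.trace(resolvent) / n) ** 2
    return numerator / denominator

# Toy check with an RBF kernel on synthetic data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 0.1 * rng.normal(size=200)
K = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print(kare(K, y, lam=1e-2))
```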
  6. By: Muhammed Golec; Maha AlabdulJalil
    Abstract: Large Language Models (LLMs), which have developed rapidly in recent years, enable credit risk assessment through the analysis of financial texts such as analyst reports and corporate disclosures. This paper presents the first systematic review and taxonomy focusing on LLM-based approaches to credit risk estimation. Using the PRISMA research strategy, we selected 60 relevant papers published between 2020 and 2025, identified the main model architectures, and examined the data used for scenarios such as credit default prediction and risk analysis. Since the main focus of the paper is interpretability, we classify concepts such as explainability mechanisms, chain-of-thought prompting, and natural language justifications for LLM-based credit models. The taxonomy organizes the literature under four main headings: model architectures, data types, explainability mechanisms, and application areas. Based on this analysis, we highlight the main future trends and research gaps for LLM-based credit scoring systems. This paper aims to serve as a reference for artificial intelligence and finance researchers.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.04290
  7. By: Harold D. Chiang; Jack Collison; Lorenzo Magnolfi; Christopher Sullivan
    Abstract: This paper develops a flexible approach to predict the price effects of horizontal mergers using ML/AI methods. While standard merger simulation techniques rely on restrictive assumptions about firm conduct, we propose a data-driven framework that relaxes these constraints when rich market data are available. We develop and identify a flexible nonparametric model of supply that nests a broad range of conduct models and cost functions. To overcome the curse of dimensionality, we adapt the Variational Method of Moments (VMM) (Bennett and Kallus, 2023) to estimate the model, allowing for various forms of strategic interaction. Monte Carlo simulations show that our method significantly outperforms an array of misspecified models and rivals the performance of the true model, both in predictive performance and counterfactual merger simulations. As a way to interpret the economics of the estimated function, we simulate pass-through and reveal that the model learns markup and cost functions that imply approximately correct pass-through behavior. Applied to the American Airlines-US Airways merger, our method produces more accurate post-merger price predictions than traditional approaches. The results demonstrate the potential for machine learning techniques to enhance merger analysis while maintaining economic structure.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.05225
  8. By: Feliks Ba\'nka; Jaros{\l}aw A. Chudziak
    Abstract: Accurate option pricing is essential for effective trading and risk management in financial markets, yet it remains challenging due to market volatility and the limitations of traditional models like Black-Scholes. In this paper, we investigate the application of the Informer neural network for option pricing, leveraging its ability to capture long-term dependencies and dynamically adjust to market fluctuations. This research contributes to the field of financial forecasting by introducing Informer's efficient architecture to enhance prediction accuracy and provide a more adaptable and resilient framework compared to existing methods. Our results demonstrate that Informer outperforms traditional approaches in option pricing, advancing the capabilities of data-driven financial forecasting in this domain.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.05565
  9. By: Wishnu Badrawani
    Abstract: This paper evaluates the performance of prominent machine learning (ML) algorithms in predicting Indonesia's inflation using the payment system, capital market, and macroeconomic data. We compare the forecasting performance of each ML model, namely shrinkage regression, ensemble learning, and support vector regression, to that of the univariate time series ARIMA and SARIMA models. We examine various out-of-bag sample periods in each ML model to determine the appropriate data-splitting ratios for the regression case study. This study indicates that all ML models produced lower RMSEs and reduced average forecast errors by 45.16 percent relative to the ARIMA benchmark, with the Extreme Gradient Boosting model outperforming other ML models and the benchmark. Using the Shapley value, we discovered that numerous payment system variables significantly predict inflation. We explore the ML forecast using local Shapley decomposition and show the relationship between the explanatory variables and inflation for interpretation. The interpretation of the ML forecast highlights some significant findings and offers insightful recommendations, enhancing previous economic research that uses a more established econometric method. Our findings advocate ML models as supplementary tools for the central bank to predict inflation and support monetary policy.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.10369
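    A minimal sketch of the interpretable-forecast workflow the abstract describes: fit a gradient-boosting model on candidate predictors, evaluate it out of sample, then decompose the forecasts with Shapley values via the shap library. The column names and synthetic data are placeholders, not Bank Indonesia's payment-system series.
```python
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

# Hypothetical monthly panel: payment-system, capital-market and macro indicators
# as predictors of inflation (column names are illustrative only).
df = pd.DataFrame(np.random.default_rng(1).normal(size=(120, 4)),
                  columns=["rtgs_volume", "card_transactions", "stock_index", "policy_rate"])
y = df.sum(axis=1) * 0.1 + np.random.default_rng(2).normal(scale=0.2, size=120)

split = 96  # keep the last 24 months for out-of-sample evaluation
model = xgb.XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(df.iloc[:split], y.iloc[:split])

rmse = float(np.sqrt(np.mean((model.predict(df.iloc[split:]) - y.iloc[split:]) ** 2)))
print("Out-of-sample RMSE:", round(rmse, 3))

# Global and local Shapley decompositions of the forecasts
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(df.iloc[split:])
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```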
  10. By: Haoyuan Wang; Chen Liu; Minh-Ngoc Tran; Chao Wang
    Abstract: This paper introduces a novel multivariate volatility modeling framework, named Long Short-Term Memory enhanced BEKK (LSTM-BEKK), that integrates deep learning into multivariate GARCH processes. By combining the flexibility of recurrent neural networks with the econometric structure of BEKK models, our approach is designed to better capture nonlinear, dynamic, and high-dimensional dependence structures in financial return data. The proposed model addresses key limitations of traditional multivariate GARCH-based methods, particularly in capturing persistent volatility clustering and asymmetric co-movement across assets. Leveraging the data-driven nature of LSTMs, the framework adapts effectively to time-varying market conditions, offering improved robustness and forecasting performance. Empirical results across multiple equity markets confirm that the LSTM-BEKK model achieves superior performance in terms of out-of-sample portfolio risk forecast, while maintaining the interpretability from the BEKK models. These findings highlight the potential of hybrid econometric-deep learning models in advancing financial risk management and multivariate volatility forecasting.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.02796
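    For readers unfamiliar with the econometric backbone, the BEKK(1,1) recursion for the conditional covariance matrix that LSTM-BEKK builds on can be written (in the Engle and Kroner, 1995, parameterization) as below; how the LSTM component enters this recursion is specified in the paper itself, not here.
```latex
H_t = C C^{\top} + A^{\top} \varepsilon_{t-1} \varepsilon_{t-1}^{\top} A + B^{\top} H_{t-1} B,
\qquad \varepsilon_t \mid \mathcal{F}_{t-1} \sim \mathcal{N}(0, H_t),
```
    where C is lower triangular and A, B are square parameter matrices, so that H_t is positive definite by construction.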
  11. By: Shunyao Wang; Ming Cheng; Christina Dan Wang
    Abstract: Stochastic Discount Factor (SDF) models provide a unified framework for asset pricing and risk assessment, yet traditional formulations struggle to incorporate unstructured textual information. We introduce NewsNet-SDF, a novel deep learning framework that seamlessly integrates pretrained language model embeddings with financial time series through adversarial networks. Our multimodal architecture processes financial news using GTE-multilingual models, extracts temporal patterns from macroeconomic data via LSTM networks, and normalizes firm characteristics, fusing these heterogeneous information sources through an innovative adversarial training mechanism. Our dataset encompasses approximately 2.5 million news articles and 10,000 unique securities, addressing the computational challenges of processing and aligning text data with financial time series. Empirical evaluations on U.S. equity data (1980-2022) demonstrate NewsNet-SDF substantially outperforms alternatives with a Sharpe ratio of 2.80. The model shows a 471% improvement over CAPM, over 200% improvement versus traditional SDF implementations, and a 74% reduction in pricing errors compared to the Fama-French five-factor model. In comprehensive comparisons, our deep learning approach consistently outperforms traditional, modern, and other neural asset pricing models across all key metrics. Ablation studies confirm that text embeddings contribute significantly more to model performance than macroeconomic features, with news-derived principal components ranking among the most influential determinants of SDF dynamics. These results validate the effectiveness of our multimodal deep learning approach in integrating unstructured text with traditional financial data for more accurate asset pricing, providing new insights for digital intelligent decision-making in financial technology.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.06864
  12. By: Duosi Zheng; Hanzhong Guo; Yanchu Liu; Wei Huang
    Abstract: Recognizing the importance of jump risk in option pricing, we propose a neural jump stochastic differential equation model in this paper, which integrates neural networks as parameter estimators in the conventional jump diffusion model. To overcome the problem that the backpropagation algorithm is not compatible with the jump process, we use the Gumbel-Softmax method to make the jump parameter gradient learnable. We examine the proposed model using both simulated data and S&P 500 index options. The findings demonstrate that the incorporation of neural jump components substantially improves the accuracy of pricing compared to existing benchmark models.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.05137
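    The abstract's key trick, making a discrete jump indicator differentiable via Gumbel-Softmax, can be sketched in PyTorch as follows. The temperature, the jump-size distribution, and the way the indicator enters the simulated return are illustrative assumptions, not the paper's specification.
```python
import torch
import torch.nn.functional as F

def relaxed_jump_indicator(jump_logit, tau=0.5):
    """Differentiable surrogate for a Bernoulli jump indicator.

    `jump_logit` is the (unnormalized) log-odds of a jump, e.g. produced by a
    neural network; `tau` is the relaxation temperature (an assumed hyperparameter).
    gumbel_softmax with hard=True returns a one-hot sample in the forward pass
    while passing gradients through the soft sample (straight-through trick).
    """
    logits = torch.stack([jump_logit, torch.zeros_like(jump_logit)], dim=-1)
    sample = F.gumbel_softmax(logits, tau=tau, hard=True)
    return sample[..., 0]   # 1.0 if a jump is drawn, 0.0 otherwise

# Example: a jump adds a normally distributed shock to a simulated log-return.
jump_logit = torch.tensor([0.3], requires_grad=True)
indicator = relaxed_jump_indicator(jump_logit)
log_return = 0.0002 - 0.5 * 0.01**2 + 0.01 * torch.randn(1) + indicator * 0.05 * torch.randn(1)
log_return.sum().backward()          # gradients flow back to jump_logit
print(indicator.item(), jump_logit.grad)
```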
  13. By: Hadi Hosseini; Samarth Khanna; Ronak Singh
    Abstract: The rise of Large Language Models (LLMs) has driven progress in reasoning tasks -- from program synthesis to scientific hypothesis generation -- yet their ability to handle ranked preferences and structured algorithms in combinatorial domains remains underexplored. We study matching markets, a core framework behind applications like resource allocation and ride-sharing, which require reconciling individual ranked preferences to ensure stable outcomes. We evaluate several state-of-the-art models on a hierarchy of preference-based reasoning tasks -- ranging from stable-matching generation to instability detection, instability resolution, and fine-grained preference queries -- to systematically expose their logical and algorithmic limitations in handling ranked inputs. Surprisingly, even top-performing models with advanced reasoning struggle to resolve instability in large markets, often failing to identify blocking pairs or execute algorithms iteratively. We further show that parameter-efficient fine-tuning (LoRA) significantly improves performance in small markets, but fails to bring about a similar improvement on large instances, suggesting the need for more sophisticated strategies to improve LLMs' reasoning with larger-context inputs.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.04478
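    For context on the tasks being probed, here is a compact reference implementation of Gale-Shapley deferred acceptance, the classic algorithm that produces a stable matching (one with no blocking pair) from ranked preferences. The toy student/hospital market is illustrative; the paper evaluates LLMs on such tasks rather than on this code.
```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance: returns a stable matching in which no
    proposer-receiver pair would both prefer each other to their assigned
    partners. Preferences are dicts mapping an agent to a ranked list."""
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                 # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                                  # receiver -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # best receiver not yet proposed to
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:    # receiver prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)
    return {p: r for r, p in match.items()}

# Toy market with three agents on each side (rankings are illustrative).
students = {"s1": ["h1", "h2", "h3"], "s2": ["h1", "h3", "h2"], "s3": ["h2", "h1", "h3"]}
hospitals = {"h1": ["s2", "s1", "s3"], "h2": ["s1", "s3", "s2"], "h3": ["s3", "s2", "s1"]}
print(deferred_acceptance(students, hospitals))
```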
  14. By: Yang, Zhang (Department of Finance and Business Economics, Faculty of Business Administration / Asia-Pacific Academy of Economics and Management, University of Macau); Jianxiong Lin (QIFU Technology, China); Yihe Qian (Department of Finance and Business Economics, Faculty of Business Administration, University of Macau); Lianjie Shu (Faculty of Business Administration, University of Macau)
    Abstract: As a key enabler of poverty alleviation and equitable growth, financial inclusion aims to expand access to credit and financial services for underserved individuals and small businesses. However, the elevated default risk and data scarcity in inclusive lending present major challenges to traditional credit assessment tools. This study evaluates whether machine learning (ML) techniques can improve default prediction for small-business loans, thereby enhancing the effectiveness and fairness of credit allocation. Using proprietary loan-level data from a city commercial bank in China, we compare eight classification models—Logistic Regression, Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree, Random Forest, XGBoost, and LightGBM—under three sampling strategies to address class imbalance. Our findings reveal that undersampling significantly enhances model performance, and tree-based ML models, particularly XGBoost and Decision Tree, outperform traditional classifiers. Feature importance and misclassification analyses suggest that documentation completeness, demographic traits, and credit utilization are critical predictors of default. By combining robust empirical validation with model interpretability, this study contributes to the growing literature at the intersection of machine learning, credit risk, and financial development. Our findings offer actionable insights for policymakers, financial institutions, and data scientists working to build fairer and more effective credit systems in emerging markets.
    Keywords: machine learning, financial inclusion, small business, China, credit risk assessment
    JEL: G21 G32 C53 O16
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:boa:wpaper:202532
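    A minimal sketch of the class-imbalance pipeline the abstract highlights: undersample the majority (non-default) class in the training split only, then fit a tree-based classifier such as XGBoost and evaluate by AUC. The synthetic data stands in for the proprietary loan-level records.
```python
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for the loan data: a highly imbalanced binary default label
# with a handful of borrower features.
X, y = make_classification(n_samples=20000, n_features=12, weights=[0.95, 0.05],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Undersample the majority (non-default) class in the training set only.
X_res, y_res = RandomUnderSampler(random_state=0).fit_resample(X_train, y_train)

model = XGBClassifier(n_estimators=400, max_depth=4, learning_rate=0.05,
                      eval_metric="auc")
model.fit(X_res, y_res)
print("Test AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```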
  15. By: Issa Sugiura; Takashi Ishida; Taro Makino; Chieko Tazuke; Takanori Nakagawa; Kosuke Nakago; David Ha
    Abstract: Financial analysis presents complex challenges that could leverage large language model (LLM) capabilities. However, the scarcity of challenging financial datasets, particularly for Japanese financial data, impedes academic innovation in financial analytics. As LLMs advance, this lack of accessible research resources increasingly hinders their development and evaluation in this specialized domain. To address this gap, we introduce EDINET-Bench, an open-source Japanese financial benchmark designed to evaluate the performance of LLMs on challenging financial tasks including accounting fraud detection, earnings forecasting, and industry prediction. EDINET-Bench is constructed by downloading annual reports from the past 10 years from Japan's Electronic Disclosure for Investors' NETwork (EDINET) and automatically assigning labels corresponding to each evaluation task. Our experiments reveal that even state-of-the-art LLMs struggle, performing only slightly better than logistic regression in binary classification for fraud detection and earnings forecasting. These results highlight significant challenges in applying LLMs to real-world financial applications and underscore the need for domain-specific adaptation. Our dataset, benchmark construction code, and evaluation code are publicly available to facilitate future research in finance with LLMs.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.08762
  16. By: Arishi Orra; Aryan Bhambu; Himanshu Choudhary; Manoj Thakur; Selvaraju Natarajan
    Abstract: Portfolio optimization requires dynamic allocation of funds by balancing the risk and return tradeoff under dynamic market conditions. With the recent advancements in AI, Deep Reinforcement Learning (DRL) has gained prominence in providing adaptive and scalable strategies for portfolio optimization. However, the success of these strategies depends not only on their ability to adapt to market dynamics but also on the careful pre-selection of assets that influence overall portfolio performance. Incorporating the investor's preference in pre-selecting assets for a portfolio is essential in refining their investment strategies. This study proposes a volatility-guided DRL-based portfolio optimization framework that dynamically constructs portfolios based on investors' risk profiles. The Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model is utilized for volatility forecasting of stocks and categorizes them based on their volatility as aggressive, moderate, and conservative. The DRL agent is then employed to learn an optimal investment policy by interacting with the historical market data. The efficacy of the proposed methodology is established using stocks from the Dow 30 index. The proposed investor-specific DRL-based portfolios outperformed the baseline strategies by generating consistent risk-adjusted returns.
    Date: 2025–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.03760
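    A sketch of the volatility-guided pre-selection step described above, using the arch package: fit GARCH(1,1) to each stock's daily returns, forecast volatility, and sort stocks into conservative/moderate/aggressive buckets. The tickers, return series, and tercile thresholds are assumptions for illustration; the paper applies this step to the Dow 30.
```python
import numpy as np
import pandas as pd
from arch import arch_model

def forecast_volatility(returns, horizon=5):
    """Fit a GARCH(1,1) model to daily percentage returns and return the
    annualized volatility forecast averaged over the horizon."""
    res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    variance = res.forecast(horizon=horizon).variance.iloc[-1].mean()
    return np.sqrt(variance * 252)

# Hypothetical daily percentage returns for a few tickers.
rng = np.random.default_rng(0)
returns = {t: pd.Series(rng.normal(0, s, 750))
           for t, s in [("AAA", 1.0), ("BBB", 1.8), ("CCC", 3.0)]}
vols = pd.Series({t: forecast_volatility(r) for t, r in returns.items()})

# Tercile-based categorization into conservative / moderate / aggressive buckets
# (the exact thresholds used in the paper are not specified in the abstract).
buckets = pd.qcut(vols, 3, labels=["conservative", "moderate", "aggressive"])
print(pd.concat([vols.round(1), buckets], axis=1, keys=["ann_vol_pct", "bucket"]))
```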
  17. By: Dongmei Zhu; Ashley Davey; Harry Zheng
    Abstract: We study S-shaped utility maximisation with VaR constraint and unobservable drift coefficient. Using the Bayesian filter, the concavification principle, and the change of measure, we give a semi-closed integral representation for the dual value function and find a critical wealth level that determines if the constrained problem admits a unique optimal solution and Lagrange multiplier or is infeasible. We also propose three algorithms (Lagrange, simulation, deep neural network) to solve the problem and compare their performances with numerical examples.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.10103
  18. By: Shunxin Yao
    Abstract: OC-DeepIV is a neural network model designed for estimating causal effects. It characterizes heterogeneity by adding interaction features and reduces redundancy through orthogonal constraints. The model includes two feature extractors, one for the instrumental variable Z and the other for the covariate X*. The training process is divided into two stages: the first stage uses the mean squared error (MSE) loss function, and the second stage incorporates orthogonal regularization. Experimental results show that this model outperforms DeepIV and DML in terms of accuracy and stability. Future research directions include applying the model to real-world problems and handling scenarios with multiple treatment variables.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.02790
  19. By: Daniel Souza; Aldo Geuna; Jeff Rodríguez
    Abstract: We investigate the emergence of Deep Learning as a technoscientific field, emphasizing the role of open labeled datasets. Through qualitative and quantitative analyses, we evaluate the role of datasets like CIFAR-10 in advancing computer vision and object recognition, which are central to the Deep Learning revolution. Our findings highlight CIFAR-10’s crucial role and enduring influence on the field, as well as its importance in teaching ML techniques. Results also indicate that dataset characteristics such as size, number of instances, and number of categories, were key factors. Econometric analysis confirms that CIFAR-10, a small-but-sufficiently-large open dataset, played a significant and lasting role in technological advancements and had a major function in the development of the early scientific literature as shown by citation metrics.
    Keywords: Artificial Intelligence; Deep Learning; Emergence of technosciences; Open science; Open Labeled Datasets
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:cca:wpaper:738
  20. By: Haochuan (Kevin) Wang
    Abstract: Cryptocurrency price dynamics are driven largely by microstructural supply-demand imbalances in the limit order book (LOB), yet the highly noisy nature of LOB data complicates the signal extraction process. Prior research has demonstrated that deep-learning architectures can yield promising predictive performance on pre-processed equity and futures LOB data, but they often treat model complexity as an unqualified virtue. In this paper, we aim to examine whether adding extra hidden layers or parameters to "black-box-ish" neural networks genuinely enhances short-term price forecasting, or if gains are primarily attributable to data preprocessing and feature engineering. We benchmark a spectrum of models from interpretable baselines, logistic regression, XGBoost to deep architectures (DeepLOB, Conv1D+LSTM) on BTC/USDT LOB snapshots sampled at 100 ms to multi-second intervals using publicly available Bybit data. We introduce two data filtering pipelines (Kalman, Savitzky-Golay) and evaluate both binary (up/down) and ternary (up/flat/down) labeling schemes. Our analysis compares models on out-of-sample accuracy, latency, and robustness to noise. Results reveal that, with data preprocessing and hyperparameter tuning, simpler models can match and even exceed the performance of more complex networks, offering faster inference and greater interpretability.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.05764
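    A sketch of the "better inputs over deeper nets" idea: apply one of the two named filters (Savitzky-Golay) to a noisy mid-price series and fit an interpretable baseline on the smoothed returns. The simulated price path, window sizes, and feature construction are illustrative, not the Bybit BTC/USDT pipeline from the paper.
```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.linear_model import LogisticRegression

# Hypothetical mid-price series sampled from LOB snapshots (e.g. every 100 ms).
rng = np.random.default_rng(0)
mid = 100 + np.cumsum(rng.normal(0, 0.01, 5000)) + rng.normal(0, 0.02, 5000)  # noisy path

# Savitzky-Golay smoothing: one of the two filtering pipelines named in the paper.
# Note the centered filter uses future points; a live pipeline would need a causal variant.
mid_smooth = savgol_filter(mid, window_length=21, polyorder=3)

# Simple interpretable baseline: predict the direction of the next move (up/down)
# from the last k smoothed returns. Feature choices here are illustrative only.
k = 10
rets = np.diff(mid_smooth)
X = np.lib.stride_tricks.sliding_window_view(rets[:-1], k)
y = (rets[k:] > 0).astype(int)
split = int(0.8 * len(y))
clf = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
print("Out-of-sample accuracy:", round(clf.score(X[split:], y[split:]), 3))
```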
  21. By: Philipp Koch; Viktor Stojkoski; C\'esar A. Hidalgo
    Abstract: Can we use data on the biographies of historical figures to estimate the GDP per capita of countries and regions? Here we introduce a machine learning method to estimate the GDP per capita of dozens of countries and hundreds of regions in Europe and North America for the past 700 years starting from data on the places of birth, death, and occupations of hundreds of thousands of historical figures. We build an elastic net regression model to perform feature selection and generate out-of-sample estimates that explain 90% of the variance in known historical income levels. We use this model to generate GDP per capita estimates for countries, regions, and time periods for which this data is not available and externally validate our estimates by comparing them with four proxies of economic output: urbanization rates in the past 500 years, body height in the 18th century, wellbeing in 1850, and church building activity in the 14th and 15th century. Additionally, we show our estimates reproduce the well-known reversal of fortune between southwestern and northwestern Europe between 1300 and 1800 and find this is largely driven by countries and regions engaged in Atlantic trade. These findings validate the use of fine-grained biographical data as a method to produce historical GDP per capita estimates. We publish our estimates with confidence intervals together with all collected source data in a comprehensive dataset.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.09399
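    The estimation step described in the abstract, an elastic net with cross-validated penalties producing out-of-sample fits, can be sketched as below. The feature matrix of biography-derived counts and the R^2 values it attains are synthetic placeholders, not the paper's data or its reported 90% figure.
```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: counts of notable births, deaths and occupations
# per region-period cell (the paper builds such features from biographies).
rng = np.random.default_rng(0)
n_cells, n_features = 600, 40
X = rng.poisson(3.0, size=(n_cells, n_features)).astype(float)
true_beta = rng.normal(0, 1, n_features) * (rng.random(n_features) < 0.3)
log_gdp_pc = X @ true_beta * 0.05 + rng.normal(0, 0.3, n_cells)   # "known" income levels

# Elastic net with cross-validated penalties performs the feature selection and
# yields out-of-sample estimates, as described in the abstract.
model = make_pipeline(StandardScaler(), ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5))
r2 = cross_val_score(model, X, log_gdp_pc, cv=5, scoring="r2")
print("Out-of-sample R^2 per fold:", np.round(r2, 2))
```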
  22. By: Friese, Susanne PhD
    Abstract: The rapid emergence of generative AI tools challenges traditional assumptions about qualitative data analysis, particularly the central role of coding. This article introduces Conversational Analysis to the Power of AI (CAAI), a novel methodological framework that replaces coding with structured, dialogic interaction between researchers and large language models. CAAI reimagines analysis as a process of iterative questioning, synthesis, and reflexive interpretation rather than segmentation and categorization. Grounded in a hermeneutic epistemology and emphasizing methodological rigor, CAAI integrates inductive, deductive, and abductive reasoning strategies. It allows researchers to adapt procedures from established methods like Grounded Theory while embracing a distributed and co-constructive model of knowledge creation. The article outlines a five-step process for CAAI, discusses reliability and validity in this new paradigm, and positions the approach within broader shifts toward post-coding qualitative inquiry. CAAI offers a compelling alternative for researchers seeking to deepen interpretation, democratize analytic access, and expand the epistemic horizons of qualitative research in the age of AI.
    Date: 2025–04–26
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:6b52m_v2
  23. By: Xu Wang; Samy Wu Fung; Levon Nurbekyan
    Abstract: We develop a simple yet efficient Lagrangian method for computing equilibrium prices in a mean-field game price-formation model. We prove that equilibrium prices are optimal in terms of a suitable criterion and derive a primal-dual gradient-based algorithm for computing them. One of the highlights of our computational framework is the efficient, simple, and flexible implementation of the algorithm using modern automatic differentiation techniques. Our implementation is modular and admits a seamless extension to high-dimensional settings with more complex dynamics, costs, and equilibrium conditions. Additionally, automatic differentiation enables a versatile algorithm that requires only coding the cost functions of agents. It automatically handles the gradients of the costs, thereby eliminating the need to manually form the adjoint equations.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.04169
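    A toy illustration of the implementation pattern the abstract emphasizes: code only an agent cost function and let automatic differentiation supply the gradients for a primal-dual update, so no adjoint equations are derived by hand. The quadratic cost, the price term, and the step sizes are placeholders and bear no relation to the paper's mean-field game model or its convergence analysis.
```python
import torch

# Placeholder agent cost: quadratic running cost in the control plus a price term.
# This illustrates the autodiff pattern described in the abstract, not the cost
# structure of the paper's price-formation model.
def agent_cost(control, price, state):
    return 0.5 * control.pow(2).sum() + (price * control).sum() + 0.1 * state.pow(2).sum()

price = torch.tensor([1.2, 0.8, 1.0], requires_grad=True)     # dual variable (price)
control = torch.tensor([0.5, -0.3, 0.1], requires_grad=True)  # primal variable
state = torch.tensor([0.0, 0.2, -0.1])

cost = agent_cost(control, price, state)
grad_control, grad_price = torch.autograd.grad(cost, (control, price))

# One primal-dual gradient step: descend in the agents' controls, ascend in the
# price (step sizes are arbitrary here and chosen only for the example).
with torch.no_grad():
    control -= 0.1 * grad_control
    price += 0.1 * grad_price
print(control, price)
```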
  24. By: Moreno, William
    Abstract: Representing and reasoning with complex, uncertain, context-dependent, and value-laden knowledge remains a fundamental challenge in Artificial Intelligence (AI) and Knowledge Representation (KR). Existing frameworks often struggle to integrate diverse knowledge types, make underlying assumptions explicit, handle normative constraints, or provide robust justifications for inferences. This preprint introduces the Conditional Reasoning Framework (CRF) and its Orthogonal Knowledge Graph (OKG) as a novel computational and conceptual architecture designed to address these limitations. The CRF operationalizes conditional necessity through a quantifiable, counterfactual test derived from a generalization of J.L. Mackie's INUS condition, enabling context-dependent reasoning within the graph-based OKG. Its design is grounded in the novel Theory of Minimal Axiom Systems (TOMAS), which posits that meaningful representation requires at least two orthogonal (conceptually independent) foundational axioms; TOMAS provides a philosophical justification for the CRF's emphasis on axiom orthogonality and explicit context (W). Furthermore, the framework incorporates expectation calculus for handling uncertainty and integrates the "ought implies can" principle as a fundamental constraint for normative reasoning. By offering a principled method for structuring knowledge, analyzing dependencies (including diagnosing model limitations by identifying failures of expected necessary conditions), and integrating descriptive and prescriptive information, the CRF/OKG provides a promising foundation for developing more robust, transparent, and ethically-aware AI systems.
    Date: 2025–05–05
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:zwpnv_v3
  25. By: Lucia Stephanie B. Sibala; Novy Aila B. Rivas; Giovanna Fae R. Oguis
    Abstract: The study investigates the coffee value chain dynamics in Davao del Sur using an agent-based model. Three main factors driving interactions among key players were identified: trust, risk, and transaction costs. The model was constructed using NetLogo 6.3.0, with data collected from BACOFA members through a survey questionnaire at three data points. Five cases were explored, with each scenario simulated 1000 times. Findings suggest that producers often sell to the market rather than the cooperative due to higher prices. However, producers tend to prioritize trust in buyers and their risk attitude, leading to increased sales to the cooperative. The producer's risk attitude significantly influences their decision-making, affecting performance outcomes such as loans, demand, and price changes. All three factors play a role and exert varying impacts on the value chain. So, the stakeholders' decisions on prioritizing factors in improving relationships depend on their priorities. Nonetheless, simulations show that establishing a harmonious system benefiting all parties is possible. However, achieving this requires adjustments to demand, pricing, trust, and risk attitudes of key players, which may not align with the preferences of some parties in reality.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.05797
  26. By: Cao, Jingcun; Li, Xiaolin; Zhang, Lingling
    Abstract: Firms increasingly use a combination of image and text description when displaying products and engaging consumers. Existing research has examined consumers’ response to text and image stimuli separately but has yet to systematically consider how the semantic relationship between image and text impacts consumer choice. In this research, we conduct a series of multimethod empirical studies to examine the congruence between image- and text-based product representation. First, we propose a deep-learning approach to measure image-text congruence by building a state-of-the-art two-branch neural network model based on wide residual networks and bidirectional encoder representations from transformers. Next, we apply our method to data from an online reading platform and discover a U-shaped effect of image-text congruence: Consumers’ preference toward a product is higher when the congruence between the image and text representation is either high or low than when the congruence is at the medium level. We then conduct experiments to establish the causal effect of this finding and explore the underlying mechanisms. We further explore the generalizability of the proposed deep-learning model and our substantive finding in two additional settings. Our research contributes to the literature on consumer information processing and generates managerial implications for practitioners on how to strategically pair images and text on digital platforms.
    JEL: L81
    Date: 2025–05–09
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:128215
  27. By: MENG, WEI; Zhang, Xiaoyin
    Abstract: This paper analyzes the dynamics of the Israel-Iran conflict, with a forecast for 2024, using military insights inspired by Sun Tzu's Art of War in combination with genetic algorithms and complexity theory. It constructs a multi-segment simulation of a war between Israel and Iran and examines how different strategies affect combat resource consumption, morale, and international politics. The research methodology includes a genetic algorithm that optimizes strategies, a nonlinear interaction analysis drawing on complexity theory, and a calculus model that simulates resource depletion and changes in morale. The results indicate that Israel's rapid-strike strategy yields short-term superiority but deteriorates logistically and in morale as the war lengthens, whereas Iran can flexibly adapt to a long war of attrition through guerrilla and asymmetric warfare. The study concludes that a prolonged war would weaken Israel's combat capability and supply lines, that Iran's asymmetric tactics hold a growing advantage in this type of conflict, and that these findings can serve as a reference for strategic decision-making by nations confronting similar conflicts.
    Date: 2025–05–15
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:j42ra_v1
  28. By: Jose Portela; Eduardo S. Schwartz; Jaime Aparicio Garcia
    Abstract: Legal claims are increasingly being considered as an alternative asset class; however, there appears to be a lack of a standard methodology for valuing litigation risk. This paper proposes a dynamic real options framework for the valuation of legal claims, explicitly incorporating the uncertainty and sequential nature of litigation processes. We develop a continuous-time stochastic model that accounts for the main procedural milestones and uncertainties, enabling the simulation of diverse litigation trajectories to estimate the net present value of a claim. The model permits the decision-maker to optimally continue or abandon the litigation at various stages, thereby capturing the embedded option value and enhancing claim valuation. This approach offers a novel risk management and valuation tool for a range of stakeholders, including investors, third-party funders, claimants, defendants, legal practitioners, auditors, and insurers. We demonstrate the practical relevance of the methodology by applying it to an actual international investment arbitration case.
    JEL: G01 G11 K0
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:33790
  29. By: Robin Middelanis; Bramka Arga Jafino; Ruth Hill; Minh Cong Nguyen; Stephane Hallegatte
    Abstract: Most disaster risk assessments use damages to physical assets as their central metric, often neglecting distributional impacts and the coping and recovery capacity of affected people. To address this shortcoming, the concepts of well-being losses and socio-economic resilience—the ability to experience asset losses without a decline in well-being—have been proposed. This paper uses microsimulations to produce a global estimate of well-being losses from, and socio-economic resilience to, natural disasters, covering 132 countries. On average, each $1 in disaster-related asset losses results in well-being losses equivalent to a $2 uniform national drop in consumption, with significant variation within and across countries. The poorest income quintile within each country incurs only 9% of national asset losses but accounts for 33% of well-being losses. Compared to high-income countries, low-income countries experience 67% greater well-being losses per dollar of asset losses and require 56% more time to recover. Socio-economic resilience is uncorrelated with exposure or vulnerability to natural hazards. However, a 10 percent increase in GDP per capita is associated with a 0.9 percentage point gain in resilience, but this benefit arises indirectly—such as through higher rates of formal employment, better financial inclusion, and broader social protection coverage—rather than from higher income itself. This paper assesses ten policy options and finds that socio-economic and financial interventions (such as insurance and social protection) can effectively complement asset-focused measures (e.g., construction standards) and that interventions targeting low-income populations usually have higher returns in terms of avoided well-being losses per dollar invested.
    Date: 2025–05–21
    URL: https://d.repec.org/n?u=RePEc:wbk:wbrwps:11129
  30. By: Jeroen Struben (EM - EMLyon Business School); Derek Chan; Byomkesh Talukder; Laurette Dubé
    Abstract: Achieving food system transformation requires a deep understanding of the market mechanisms that underpin both the social benefits and the externalities of modern development. We examine how market dynamics affect the production and consumption of healthy and equitable diets in North America. Using causal loop diagramming, we show how three market feedback processes (industry capabilities, consumer category considerations, and systems and institutions) both constrain and enable food system transformation. Through behavioral-dynamic computational modeling, we demonstrate the ineffectiveness of isolated social or commercial interventions to achieve equitable access to nutritious foods across populations of varying socioeconomic statuses. Rather, self-sustaining transformations at scale require convergent innovations that bridge individual and collective action across typically siloed sectors, to achieve alignment between commercial, social, and environmental goals and activities. We discuss how this simulation-based analytical framework can inform policy for food system transformation, whether at the local, national, or global level.
    Date: 2025–05–07
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05083052

This nep-cmp issue is ©2025 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.