nep-cmp New Economics Papers
on Computational Economics
Issue of 2025–09–29
27 papers chosen by
Stan Miles, Thompson Rivers University


  1. A survey of statistical arbitrage pair trading with machine learning, deep learning, and reinforcement learning methods By Yufei Sun
  2. Trading-R1: Financial Trading with LLM Reasoning via Reinforcement Learning By Yijia Xiao; Edward Sun; Tong Chen; Fang Wu; Di Luo; Wei Wang
  3. Deep Hedging to Manage Tail Risk By Yuming Ma
  4. Why Bonds Fail Differently? Explainable Multimodal Learning for Multi-Class Default Prediction By Yi Lu; Aifan Ling; Chaoqun Wang; Yaxin Xu
  5. Enhancing ML Models Interpretability for Credit Scoring By Sagi Schwartz; Qinling Wang; Fang Fang
  6. Deep Learning for Solving Economic Models By Jesús Fernández-Villaverde
  7. ProteuS: A Generative Approach for Simulating Concept Drift in Financial Markets By Andrés L. Suárez-Cetrulo; Alejandro Cervantes; David Quintana
  8. Benchmarking Machine Learning Models for ESG Prediction in South Korea Using News-Derived Time Series By Kim, Yunwoo; Hwang, Junhyuk
  9. Demystifying planning application uncertainty: Using machine learning to predict and explain planning application assessment timeframes By Thackway, William; Soundararaj, Balamurugan; Pettit, Christopher
  10. Enhancing Disaster Evacuation Planning with Cognitive Agent-Based Models and Co-Creation By Hossein Moradi; Rouba Iskandar; Sebastian Rodriguez; Dhirendra Singh; Julie Dugdale; Dimitrios Tzempelikos; Athanasios Sfetsos; Evangelia Bakogianni; Evrydiki Pavlidi; Josué Díaz; Margalida Ribas; Alexandre Moragues; Joan Estrany
  11. Emergence of pluralistic ignorance: An agent-based approach By Katarzyna Sznajd-Weron; Barbara Kamińska
  12. Momentum-integrated Multi-task Stock Recommendation with Converge-based Optimization By Hao Wang; Jingshu Peng; Yanyan Shen; Xujia Li; Lei Chen
  13. A Research Agenda for the Economics of Transformative AI By Erik Brynjolfsson; Anton Korinek; Ajay K. Agrawal
  14. Beyond Expert Judgment: An Explainable Framework for Truth Discovery, Weak Supervision, and Learning-Based Ranking in Open-Source Intelligence Risk Identification By MENG, WEI
  15. Statistical Model Checking of NetLogo Models By Marco Pangallo; Daniele Giachini; Andrea Vandin
  16. Qualitative Research in an Era of AI: A Pragmatic Approach to Data Analysis, Workflow, and Computation By Abramson, Corey; Li, Zhuofan; Prendergast, Tara; Dohan, Daniel
  17. Meta-Learning Neural Process for Implied Volatility Surfaces with SABR-induced Priors By Jirong Zhuang; Xuan Wu
  18. Estimating National Weather Effects from the Ground Up By Daniel J. Wilson
  19. AI regulation: A primer for Latin American lawmakers By Eduardo Levy Yeyati; Ángeles Cortesi
  20. Economic and social outcomes of investment on research and development in Tajikistan’s agrifood system By Khakimov, Parviz; Aragie, Emerta A.; Goibov, Manuchehr; Ashurov, Timur
  21. Conditionnalité du crédit d'impôt recherche à un critère de versement de dividendes : un exercice de microsimulation By Pierre Courtioux; François Metivier
  22. Adaptive Temporal Fusion Transformers for Cryptocurrency Price Prediction By Arash Peik; Mohammad Ali Zare Chahooki; Amin Milani Fard; Mehdi Agha Sarram
  23. Volatility models in practice: Rough, Path-dependent or Markovian? By Eduardo Abi Jaber; Shaun Xiaoyuan Li
  24. Stabilising Lifetime PD Models under Forecast Uncertainty By Vahab Rostampour
  25. Hopf-Lax approximation for value functions of Lévy optimal control problems By Kupper, Michael; Nendel, Max; Sgarabottolo, Alessandro
  26. From Courtesy to Influence: Identifying the Role of China’s New Ambassador to Thailand through OSINT-Based Multilayered Networks and Inflection Point Early Warning By MENG, WEI
  27. Labor Market Dynamics, Monetary Policy Tradeoffs, and a Shortfalls Approach to Pursuing Maximum Employment By Brent Bundick; Isabel Cairó; Nicolas Petrosky-Nadeau

  1. By: Yufei Sun (Faculty of Economic Sciences, University of Warsaw)
    Abstract: Pair trading remains a cornerstone strategy in quantitative finance, having consistently attracted scholarly attention from both economists and computer scientists. Over recent decades, research has expanded beyond traditional linear frameworks—such as regression- and cointegration-based models—to embrace advanced methodologies, including machine learning (ML), deep learning (DL), reinforcement learning (RL), and deep reinforcement learning (DRL). These techniques have demonstrated superior capacity to capture nonlinear dependencies and complex dynamics in financial data, thereby enhancing predictive performance and strategy design. Building on these academic developments, practitioners are increasingly deploying DL models to forecast asset price movements and volatility in equity and foreign exchange markets, leveraging the advantages of artificial intelligence (AI) for trading. In parallel, DRL has gained prominence in algorithmic trading, where agents can autonomously learn optimal trading policies by interacting with market environments, enabling systems that move beyond price prediction to dynamic signal generation and portfolio allocation. This paper provides a comprehensive survey of ML-, DL-, RL-, and DRL-based approaches to pair trading within quantitative finance. By systematically reviewing existing studies and highlighting their methodological contributions, it offers researchers a structured foundation for replication and further development. In addition, the paper outlines promising avenues for future research that extend the application of AI-driven methods in statistical arbitrage and market microstructure analysis.
    Keywords: Pair Trading, Machine Learning, Deep Learning, Reinforcement Learning, Deep Reinforcement Learning, Artificial Intelligence, Quantitative Trading
    JEL: C4 C45 C55 C65 G11
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:war:wpaper:2025-22
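    A minimal illustrative sketch of the classical spread/z-score pair-trading signal that the surveyed ML, DL, and RL methods extend (static OLS hedge ratio; the window and entry/exit thresholds are hypothetical):
import numpy as np
import pandas as pd

def pair_trading_signal(px_a: pd.Series, px_b: pd.Series,
                        window: int = 60, entry: float = 2.0, exit_z: float = 0.5) -> pd.DataFrame:
    """Classical pairs signal on the spread log(A) - beta*log(B)."""
    la, lb = np.log(px_a), np.log(px_b)
    beta = np.polyfit(lb, la, 1)[0]                       # static hedge ratio (OLS slope)
    spread = la - beta * lb
    z = (spread - spread.rolling(window).mean()) / spread.rolling(window).std()
    pos = pd.Series(np.nan, index=z.index)
    pos[z > entry] = -1.0                                 # spread rich: short A, long B
    pos[z < -entry] = 1.0                                 # spread cheap: long A, short B
    pos[z.abs() < exit_z] = 0.0                           # close out near the mean
    return pd.DataFrame({"spread": spread, "zscore": z,
                         "position": pos.ffill().fillna(0.0)})
    The ML-, DL-, and RL-based approaches reviewed in the survey replace this fixed hedge ratio and the hand-set thresholds with learned, state-dependent rules.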
  2. By: Yijia Xiao; Edward Sun; Tong Chen; Fang Wu; Di Luo; Wei Wang
    Abstract: Developing professional, structured reasoning on par with human financial analysts and traders remains a central challenge in AI for finance, where markets demand interpretability and trust. Traditional time-series models lack explainability, while LLMs face challenges in turning natural-language analysis into disciplined, executable trades. Although reasoning LLMs have advanced in step-by-step planning and verification, their application to risk-sensitive financial decisions is underexplored. We present Trading-R1, a financially-aware model that incorporates strategic thinking and planning for comprehensive thesis composition, facts-grounded analysis, and volatility-adjusted decision making. Trading-R1 aligns reasoning with trading principles through supervised fine-tuning and reinforcement learning with a three-stage easy-to-hard curriculum. Training uses Tauric-TR1-DB, a 100k-sample corpus spanning 18 months, 14 equities, and five heterogeneous financial data sources. Evaluated on six major equities and ETFs, Trading-R1 demonstrates improved risk-adjusted returns and lower drawdowns compared to both open-source and proprietary instruction-following models as well as reasoning models. The system generates structured, evidence-based investment theses that support disciplined and interpretable trading decisions. Trading-R1 Terminal will be released at https://github.com/TauricResearch/Trading-R1.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.11420
  3. By: Yuming Ma
    Abstract: Extending Buehler et al.'s 2019 Deep Hedging paradigm, we employ deep neural networks to parameterize convex-risk minimization (CVaR/ES) for the portfolio tail-risk hedging problem. Through comprehensive numerical experiments on crisis-era bootstrap market simulators (customizable with transaction costs, risk budgets, liquidity constraints, and market impact), our end-to-end framework not only achieves significant one-day 99% CVaR reduction but also yields practical insights into friction-aware strategy adaptation, demonstrating robustness and operational viability in realistic markets.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.22611
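    A minimal sketch of the type of objective described above: a neural hedging policy trained to minimize 99% CVaR of the hedged loss via the Rockafellar-Uryasev representation, with proportional transaction costs. The network, toy price paths, cost level, and payoff are illustrative assumptions rather than the paper's setup:
import torch
import torch.nn as nn

class HedgePolicy(nn.Module):
    """Maps (time-to-maturity, log-price, current position) to the new position."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def cvar_loss(pnl, w, alpha=0.99):
    """Rockafellar-Uryasev CVaR of the loss (-pnl), minimized jointly over w."""
    return w + torch.relu(-pnl - w).mean() / (1.0 - alpha)

def hedged_pnl(policy, paths, payoff, cost=1e-3):
    """paths: (n_paths, n_steps+1) asset prices; payoff: terminal liability."""
    n, steps = paths.shape[0], paths.shape[1] - 1
    pos = torch.zeros(n)
    pnl = -payoff(paths[:, -1])                              # short the liability
    for t in range(steps):
        state = torch.stack([torch.full((n,), (steps - t) / steps),
                             paths[:, t].log(), pos], dim=-1)
        new_pos = policy(state)
        pnl = pnl - cost * (new_pos - pos).abs() * paths[:, t]   # transaction cost
        pnl = pnl + new_pos * (paths[:, t + 1] - paths[:, t])    # hedge gains
        pos = new_pos
    return pnl

policy, w = HedgePolicy(), torch.zeros((), requires_grad=True)
opt = torch.optim.Adam(list(policy.parameters()) + [w], lr=1e-3)
paths = torch.exp(torch.cumsum(0.02 * torch.randn(4096, 31), dim=1))  # toy positive paths
for step in range(200):
    opt.zero_grad()
    loss = cvar_loss(hedged_pnl(policy, paths, lambda s: torch.relu(s - 1.0)), w)
    loss.backward()
    opt.step()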
  4. By: Yi Lu; Aifan Ling; Chaoqun Wang; Yaxin Xu
    Abstract: In recent years, China's bond market has seen a surge in defaults amid regulatory reforms and macroeconomic volatility. Traditional machine learning models struggle to capture financial data's irregularity and temporal dependencies, while most deep learning models lack interpretability, which is critical for financial decision-making. To tackle these issues, we propose EMDLOT (Explainable Multimodal Deep Learning for Time-series), a novel framework for multi-class bond default prediction. EMDLOT integrates numerical time-series (financial/macroeconomic indicators) and unstructured textual data (bond prospectuses), uses a Time-Aware LSTM to handle irregular sequences, and adopts soft clustering and multi-level attention to boost interpretability. Experiments on 1994 Chinese firms (2015-2024) show EMDLOT outperforms traditional (e.g., XGBoost) and deep learning (e.g., LSTM) benchmarks in recall, F1-score, and mAP, especially in identifying default/extended firms. Ablation studies validate each component's value, and attention analyses reveal economically intuitive default drivers. This work provides a practical tool and a trustworthy framework for transparent financial risk modeling.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.10802
  5. By: Sagi Schwartz; Qinling Wang; Fang Fang
    Abstract: Predicting default is essential for banks to ensure profitability and financial stability. While modern machine learning methods often outperform traditional regression techniques, their lack of transparency limits their use in regulated environments. Explainable artificial intelligence (XAI) has emerged as a solution in domains like credit scoring. However, most XAI research focuses on post-hoc interpretation of black-box models, which does not produce models lightweight or transparent enough to meet regulatory requirements, such as those for Internal Ratings-Based (IRB) models. This paper proposes a hybrid approach: post-hoc interpretations of black-box models guide feature selection, followed by training glass-box models that maintain both predictive power and transparency. Using the Lending Club dataset, we demonstrate that this approach achieves performance comparable to a benchmark black-box model while using only 10 features - an 88.5% reduction. In our example, SHapley Additive exPlanations (SHAP) is used for feature selection, eXtreme Gradient Boosting (XGBoost) serves as the benchmark and the base black-box model, and Explainable Boosting Machine (EBM) and Penalized Logistic Tree Regression (PLTR) are the investigated glass-box models. We also show that model refinement using feature interaction analysis, correlation checks, and expert input can further enhance model interpretability and robustness.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.11389
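    A minimal sketch of the hybrid workflow described above on a generic tabular default dataset: a black-box XGBoost benchmark and SHAP produce a global feature ranking, and a transparent model is then trained on the top-k features. The paper's glass-box models are EBM and PLTR; a penalized logistic regression stands in here, and k = 10 follows the abstract:
import numpy as np
import shap
import xgboost as xgb
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def shap_guided_glassbox(X, y, k=10):
    """X: pandas DataFrame of borrower features; y: binary default labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # 1) Benchmark black-box model.
    black_box = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
    black_box.fit(X_tr, y_tr)

    # 2) Global SHAP ranking (for binary XGBoost, shap_values is (n_samples, n_features)).
    shap_values = shap.TreeExplainer(black_box).shap_values(X_tr)
    ranking = np.argsort(np.abs(shap_values).mean(axis=0))[::-1]
    top_k = list(X.columns[ranking[:k]])

    # 3) Transparent model restricted to the selected features.
    glass_box = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2", max_iter=1000))
    glass_box.fit(X_tr[top_k], y_tr)
    return black_box, glass_box, top_k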
  6. By: Jesús Fernández-Villaverde
    Abstract: The ongoing revolution in artificial intelligence, especially deep learning, is transforming research across many fields, including economics. Its impact is particularly strong in solving equilibrium economic models. These models often lack closed-form solutions, so economists have relied on numerical methods such as value function iteration, perturbation, and projection techniques. While powerful, these approaches face the curse of dimensionality, making global solutions computationally infeasible as the number of state variables increases. Recent advances in deep learning offer a new paradigm: flexible tools that efficiently approximate complex functions, manage high-dimensional problems, and expand the reach of quantitative economics. After introducing the basic concepts of deep learning, I illustrate the approach with the neoclassical growth model and discuss related ideas, including the double descent phenomenon and implicit regularization.
    JEL: C45 C61 C63 C68
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34250
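    A minimal sketch of the deep-learning solution method applied to the deterministic neoclassical growth model that the abstract mentions: a small network parameterizes the consumption policy and is trained to drive Euler-equation residuals to zero. The calibration, sampling range, and architecture are illustrative assumptions:
import torch
import torch.nn as nn

alpha, beta, delta = 0.33, 0.96, 0.1   # illustrative calibration

class ConsumptionPolicy(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, k):
        resources = k.pow(alpha) + (1.0 - delta) * k
        share = torch.sigmoid(self.net(k.log()))   # consumption share in (0, 1)
        return share * resources                   # consumption is always feasible

policy = ConsumptionPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(5000):
    k = torch.exp(torch.empty(256, 1).uniform_(-2.0, 2.0))   # sample capital stocks
    c = policy(k)
    k_next = k.pow(alpha) + (1.0 - delta) * k - c
    c_next = policy(k_next)
    # Euler equation with log utility: 1/c = beta * (1/c') * (alpha*k'^(alpha-1) + 1 - delta)
    residual = beta * (c / c_next) * (alpha * k_next.pow(alpha - 1.0) + 1.0 - delta) - 1.0
    loss = residual.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()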
  7. By: Andrés L. Suárez-Cetrulo; Alejandro Cervantes; David Quintana
    Abstract: Financial markets are complex, non-stationary systems where the underlying data distributions can shift over time, a phenomenon known as regime change in finance and as concept drift in the machine learning literature. These shifts, often triggered by major economic events, pose a significant challenge for traditional statistical and machine learning models. A fundamental problem in developing and validating adaptive algorithms is the lack of a ground truth in real-world financial data, making it difficult to evaluate a model's ability to detect and recover from these drifts. This paper addresses this challenge by introducing a novel framework, named ProteuS, for generating semi-synthetic financial time series with pre-defined structural breaks. Our methodology involves fitting ARMA-GARCH models to real-world ETF data to capture distinct market regimes, and then simulating realistic, gradual, and abrupt transitions between them. The resulting datasets, which include a comprehensive set of technical indicators, provide a controlled environment with a known ground truth of regime changes. An analysis of the generated data confirms the complexity of the task, revealing significant overlap between the different market states. We aim to provide the research community with a tool for the rigorous evaluation of concept drift detection and adaptation mechanisms, paving the way for more robust financial forecasting models.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.11844
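    A simplified sketch, in the spirit of the framework described above, of generating semi-synthetic returns with a known structural break: two AR(1)-GARCH(1,1) regimes joined by an abrupt switch. The paper fits regimes to real ETF data and also simulates gradual transitions; the parameter values below are hypothetical:
import numpy as np

def ar_garch_path(n, mu, phi, omega, a, b, rng):
    """Simulate n returns with an AR(1) mean and GARCH(1,1) innovations."""
    r = np.zeros(n)
    sigma2 = omega / (1.0 - a - b)          # start at the unconditional variance
    prev_eps = 0.0
    for t in range(n):
        sigma2 = omega + a * prev_eps**2 + b * sigma2
        eps = np.sqrt(sigma2) * rng.standard_normal()
        r[t] = mu + phi * (r[t - 1] if t > 0 else 0.0) + eps
        prev_eps = eps
    return r

rng = np.random.default_rng(0)
calm   = dict(mu=0.0004, phi=0.05, omega=1e-6, a=0.05, b=0.90)    # low-volatility regime
crisis = dict(mu=-0.0010, phi=0.10, omega=5e-6, a=0.12, b=0.85)   # high-volatility regime
returns = np.concatenate([ar_garch_path(750, rng=rng, **calm),
                          ar_garch_path(250, rng=rng, **crisis)])
drift_point = 750   # ground-truth location of the abrupt regime change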
  8. By: Kim, Yunwoo; Hwang, Junhyuk
    Abstract: Existing ESG ratings have limitations like disclosure delays, inconsistencies, and uneven coverage, particularly in non-English markets. This paper addresses these issues by establishing the first machine learning benchmark for ESG prediction in the Korean market using news-derived time-series features. A standardized dataset of 278 Korean firms was constructed, and monthly sentiment and ESG-relevance features were generated from news using Korean-specific language models. A mask-aware CNN explicitly handles missing data by distinguishing observed months from imputed ones. The model achieved a Mean Absolute Error (MAE) of 17.9, a Root Mean Squared Error (RMSE) of 22.0, an R² of 0.12, and a Spearman’s ρ of 0.38, demonstrating that temporal modeling and explicit handling of missing data are crucial for improving predictive accuracy.
    Date: 2025–09–12
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:v2738_v1
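    One plausible reading of the mask-aware design described above, sketched in PyTorch: the binary observation mask enters as extra input channels so the network can distinguish observed months from imputed ones. The architecture and sizes are illustrative assumptions:
import torch
import torch.nn as nn

class MaskAwareCNN(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2 * n_features, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(hidden, 1)          # regression head for the ESG score

    def forward(self, x, mask):
        """x, mask: (batch, n_features, n_months); mask is 1 for observed months."""
        z = self.conv(torch.cat([x * mask, mask], dim=1)).squeeze(-1)
        return self.head(z).squeeze(-1)

model = MaskAwareCNN(n_features=4)
x = torch.randn(8, 4, 24)                          # 8 firms, 4 features, 24 months
mask = (torch.rand(8, 4, 24) > 0.3).float()        # roughly 30% of months missing
pred = model(x, mask)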
  9. By: Thackway, William; Soundararaj, Balamurugan; Pettit, Christopher
    Abstract: Despite housing supply shortages in financialised housing markets and acknowledgement of planning application (PA) assessment times as a supply side constraint, reliable and accessible information on PA assessment timeframes is limited. It is in this context that we built a model to predict and explain PA assessment timeframes in New South Wales, Australia. We constructed a dataset of 17,000 PAs (submitted over 3 years) comprising PA attributes, environmental and zoning restrictions, and features derived from PA descriptions using natural language processing techniques. Quantile regression was applied using machine learning modelling to predict probabilistic intervals for assessment timeframes. We then employed an advanced model explanation tool to analyse feature contributions on an overall and individual PA basis. The best performing model, an extreme gradient boosted machine (XGB), achieved an R² of 0.431, predicting 60.9% of assessment times within one month of actual values. While performance is moderate, the model significantly improves upon previous studies and the current best practice in NSW, which is simply average estimates by council area for PA assessment timeframes. The paper concludes by outlining suggestions for further improving model performance and discussing the benefits of a predictive tool for planners.
    Date: 2025–09–18
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:prm25_v1
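    A minimal sketch of quantile regression with gradient boosting for probabilistic assessment-timeframe intervals, as described above. The quantile levels and hyperparameters are assumptions, and scikit-learn's quantile objective stands in for the paper's XGBoost implementation:
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def fit_quantile_models(X, y, quantiles=(0.1, 0.5, 0.9)):
    """Fit one gradient-boosting model per quantile of the assessment time y."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    models, intervals = {}, {}
    for q in quantiles:
        m = GradientBoostingRegressor(loss="quantile", alpha=q,
                                      n_estimators=400, max_depth=3, learning_rate=0.05)
        m.fit(X_tr, y_tr)
        models[q] = m
        intervals[q] = m.predict(X_te)     # lower / median / upper predictions
    return models, intervals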
  10. By: Hossein Moradi (RMIT Europe - Royal Melbourne Institute of Technology - Europe); Rouba Iskandar (LIG - Laboratoire d'Informatique de Grenoble - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes); Sebastian Rodriguez (RMIT University - Royal Melbourne Institute of Technology University); Dhirendra Singh (CSIRO Data61 [Sydney] - CSIRO - Commonwealth Scientific and Industrial Research Organisation [Australia]); Julie Dugdale (Institut Informatique et Mathématiques Appliquées de Grenoble (IMAG), LIG - Laboratoire d'Informatique de Grenoble - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes); Dimitrios Tzempelikos; Athanasios Sfetsos (NCSR - National Center for Scientific Research "Demokritos"); Evangelia Bakogianni (NCSR - National Center for Scientific Research "Demokritos"); Evrydiki Pavlidi (NCSR - National Center for Scientific Research "Demokritos"); Josué Díaz; Margalida Ribas (UIB - Universitat de les Illes Balears = Universidad de las Islas Baleares = University of the Balearic Islands); Alexandre Moragues (UIB - Universitat de les Illes Balears = Universidad de las Islas Baleares = University of the Balearic Islands); Joan Estrany (UIB - Universitat de les Illes Balears = Universidad de las Islas Baleares = University of the Balearic Islands)
    Abstract: Agent-based models (ABMs) are increasingly used in disaster evacuation simulation to capture system level dynamics. While ABMs are often combined with human behavior models (HBMs), few approaches integrate these with infrastructure and demographic data that are carefully modeled using local knowledge, along with hazard-specific impacts and policy settings. Even fewer embed this integration within a co-creation loop that involves local stakeholders throughout the entire development lifecycle, from conception and design to implementation, testing, and beyond. This paper introduces the methodology that we developed to address this gap by combining a structured co-creation process with technical simulation development. The co-creation process engages local stakeholders, planners, and experts to iteratively shape evacuation scenarios, define assumptions, and validate outcomes, ensuring the model aligns with local realities. These inputs are translated into a multi-dimensional simulation framework built in MATSim, integrating network and infrastructure models, hazard effects, population, and behavior modeling enhanced through Belief-Desire-Intention cognitive architectures. We applied this methodology in different case study areas, demonstrating its capacity to simulate heterogeneous evacuation dynamics and provide diverse performance metrics. Finally, we explore how this methodology can be applied to other hazards, geographic regions, and evacuation scenarios, offering pathways for broader application and future development.
    Keywords: Disaster Preparedness, Disaster Evacuation Simulation, Co-creation Processes, Human Behavior Models, Agent-Based Models
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05267465
  11. By: Katarzyna Sznajd-Weron; Barbara Kamińska
    Abstract: Pluralistic ignorance is a puzzling social psychological phenomenon in which the majority of group members privately reject a norm yet mistakenly believe that most others accept it. Consequently, they publicly comply with the norm. This phenomenon has significant implications for politics, economics, and organizational dynamics because it can mask widespread support for change and hinder collective responses to large-scale societal challenges. The aim of this work is to demonstrate how agent-based modeling, a computational approach well-suited for studying complex social systems, can be applied to investigate pluralistic ignorance. Rather than providing a systematic literature review, we focus on several models, including our own two models based on the psychological Social Response Context Model, as well as two other representative models: one of the first and most influential computational models of self-enforcing norms, and a model of opinion expression based on a silence game. For all of these models, we provide custom NetLogo implementations, publicly available at https://barbarakaminska.github.io/NetLogo-Pluralistic-ignorance/, which allow users not only to run their own simulations but also to follow the algorithms step by step. In conclusion, we note that despite differences in assumptions and structures, these models consistently reproduce pluralistic ignorance, suggesting that it may be a robust emergent phenomenon.
    Keywords: Pluralistic ignorance; Social Response Context Model; Collective adaptation; Opinion dynamics; Agent-based modeling; NetLogo; Complex systems
    JEL: C63 D72 D91
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ahh:wpaper:worms2508
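    A minimal Python sketch (not one of the paper's NetLogo models) of the core mechanism behind pluralistic ignorance: most agents privately reject a norm, yet each keeps complying publicly as long as it perceives a complying majority around it, so near-universal public compliance can persist. Population size, reference-group size, and the conformity threshold are illustrative assumptions:
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps, threshold = 200, 50, 0.5
private_support = rng.random(n_agents) < 0.3          # only 30% privately support the norm
public_compliance = np.ones(n_agents, dtype=bool)     # but everyone starts out complying

for _ in range(n_steps):
    for i in rng.permutation(n_agents):
        reference_group = rng.choice(n_agents, size=8, replace=False)
        perceived_compliance = public_compliance[reference_group].mean()
        # An agent expresses dissent only if it privately rejects the norm AND
        # already sees enough visible non-compliance around it.
        public_compliance[i] = private_support[i] or (perceived_compliance > threshold)

print(f"private support: {private_support.mean():.2f}, "
      f"public compliance: {public_compliance.mean():.2f}")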
  12. By: Hao Wang; Jingshu Peng; Yanyan Shen; Xujia Li; Lei Chen
    Abstract: Stock recommendation is critical in Fintech applications, which use price series and alternative information to estimate future stock performance. Although deep learning models are prevalent in stock recommendation systems, traditional time-series forecasting training often fails to capture stock trends and rankings simultaneously, which are essential consideration factors for investors. To tackle this issue, we introduce a Multi-Task Learning (MTL) framework for stock recommendation, Momentum-integrated Multi-task Stock Recommendation with Converge-based Optimization (MiM-StocR). To improve the model's ability to capture short-term trends, we incorporate a momentum line indicator in model training. To prioritize top-performing stocks and optimize investment allocation, we propose a list-wise ranking loss function called Adaptive-k ApproxNDCG. Moreover, due to the volatility and uncertainty of the stock market, existing MTL frameworks face overfitting issues when applied to stock time series. To mitigate this issue, we introduce the Converge-based Quad-Balancing (CQB) method. We conducted extensive experiments on three stock benchmarks: SEE50, CSI 100, and CSI 300. MiM-StocR outperforms state-of-the-art MTL baselines across both ranking and profitability evaluations.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.10461
  13. By: Erik Brynjolfsson; Anton Korinek; Ajay K. Agrawal
    Abstract: As we approach Transformative Artificial Intelligence (TAI), there is an urgent need to advance our understanding of how it could reshape our economic models, institutions and policies. We propose a research agenda for the economics of TAI by identifying nine Grand Challenges: economic growth, innovation, income distribution, decision-making power, geoeconomics, information flows, safety risks, human well-being, and transition dynamics. By accelerating work in these areas, researchers can develop insights and tools to help fulfill the economic potential of TAI.
    JEL: A11 O33 O40
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34256
  14. By: MENG, WEI
    Abstract: In open-source intelligence (OSINT) research, traditional risk identification methods reliant on expert scoring face growing challenges due to their high subjectivity, cost, and lack of scalability. This study aims to propose and validate an algorithmic framework that transcends expert judgment. Centered on truth discovery, weakly supervised learning, and learning-based ranking, it enables automated, explainable risk identification within complex, multi-source heterogeneous data. The study first constructs a hierarchical-quota sampling system, acquiring and deduplicating data from four source categories: institutional authorities, official statements, mainstream and international reports, and visual materials. Subsequently, a truth discovery algorithm estimates source credibility to replace expert weighting. Weakly supervised labeling functions generate initial annotations, which are then aggregated by generative models to form robust labels. Finally, a learning ranking model dynamically prioritizes risk trajectories, with explainability ensured through Explainable AI techniques (e.g., SHAP, Grad-CAM). Results demonstrate that this framework reliably identifies risk signals across multiple time windows and control conditions. The classifier achieves PR-AUC improvements exceeding expert baselines, with average absolute error in inflection point localization maintained below 1 hour. It exhibits high consistency and robustness across cross-domain datasets. The study concludes that algorithmic expert-scoring replacement not only excels in accuracy and efficiency but also significantly outperforms traditional models in transparency and reproducibility, offering a systematic, scalable, and cutting-edge approach for OSINT risk research.
    Date: 2025–09–14
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:5642u_v1
  15. By: Marco Pangallo; Daniele Giachini; Andrea Vandin
    Abstract: Agent-based models (ABMs) are gaining increasing traction in several domains, due to their ability to represent complex systems that are not easily expressible with classical mathematical models. This expressivity and richness come at a cost: ABMs can typically be analyzed only through simulation, making their analysis challenging. Specifically, when studying the output of ABMs, the analyst is often confronted with practical questions such as: (i) how many independent replications should be run? (ii) how many initial time steps should be discarded as a warm-up? (iii) after the warm-up, how long should the model run? (iv) what are the right parameter values? Analysts usually resort to rules of thumb and experimentation, which lack statistical rigor. This is mainly because addressing these points takes time, and analysts prefer to spend their limited time improving the model. In this paper, we propose a methodology, drawing on the field of Statistical Model Checking, to automate the process and provide guarantees of statistical rigor for ABMs written in NetLogo, one of the most popular ABM platforms. We discuss MultiVeStA, a tool that dramatically reduces the time and human intervention needed to run statistically rigorous checks on ABM outputs, and introduce its integration with NetLogo. Using two ABMs from the NetLogo library, we showcase MultiVeStA's analysis capabilities for NetLogo ABMs, as well as a novel application to statistically rigorous calibration. Our tool-chain makes it immediate to perform statistical checks with NetLogo models, promoting more rigorous and reliable analyses of ABM outputs.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.10977
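    A minimal sketch of the statistical criterion that tools like MultiVeStA automate for question (i) above: keep adding independent replications until the 95% confidence-interval half-width of the mean output statistic falls below a tolerance. The tolerance and the toy model are assumptions:
import numpy as np
from scipy import stats

def required_replications(run_model, tol=0.01, alpha=0.05, min_reps=20, max_reps=10_000):
    """run_model() executes one independent replication and returns a scalar output."""
    samples = [run_model() for _ in range(min_reps)]
    while True:
        n = len(samples)
        half_width = stats.t.ppf(1 - alpha / 2, n - 1) * np.std(samples, ddof=1) / np.sqrt(n)
        if half_width < tol or n >= max_reps:
            break
        samples.append(run_model())
    return np.mean(samples), half_width, n

# Toy example: the "model" output is a noisy scalar statistic.
rng = np.random.default_rng(0)
mean, half_width, n_reps = required_replications(lambda: rng.normal(0.5, 0.1), tol=0.005)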
  16. By: Abramson, Corey; Li, Zhuofan; Prendergast, Tara; Dohan, Daniel
    Abstract: Rapid computational developments—particularly the proliferation of artificial intelligence (AI)—increasingly shape social scientific research while raising new questions about in-depth qualitative methods such as ethnography and interviewing. Building on classic debates about using computers to analyze qualitative data, we revisit longstanding concerns and assess possibilities and dangers in an era of automation, AI chatbots, and "big data." We first historicize developments by revisiting classical and emergent concerns about qualitative analysis with computers. We then introduce a typology of contemporary modes of engagement—streamlining workflows, scaling up projects, hybrid analytical approaches, and the sociology of computation—alongside rejection of computational analyses. We illustrate these approaches with detailed workflow examples from a large-scale ethnographic study and guidance for solo researchers. We argue for a pragmatic sociological approach that moves beyond dualisms of technological optimism versus rejection to show how computational tools—simultaneously dangerous and generative—can be adapted to support longstanding qualitative aims when used carefully in ways aligned with core methodological commitments.
    Date: 2025–09–16
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:7bsgy_v1
  17. By: Jirong Zhuang; Xuan Wu
    Abstract: Constructing the implied volatility surface (IVS) is reframed as a meta-learning problem: by training across trading days, the model learns a general process that reconstructs a full IVS from a few quotes, eliminating daily recalibration. We introduce the Volatility Neural Process, an attention-based model that uses two-stage training: pre-training on SABR-generated surfaces to encode a financial prior, followed by fine-tuning on market data. On S&P 500 options (2006-2023; out-of-sample 2019-2023), our model outperforms SABR, SSVI, Gaussian Process, and an ablation trained only on real data. Relative to the ablation, the SABR-induced prior reduces RMSE by about 40% and dominates in mid- and long-maturity regions where quotes are sparse. The learned prior suppresses large errors, providing a practical, data-efficient route to stable IVS construction with a single deployable model.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.11928
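    A sketch of how SABR-generated pre-training surfaces could be produced, using the standard Hagan et al. (2002) lognormal approximation. The parameter ranges and strike grid are assumptions, not the paper's sampling scheme:
import numpy as np

def hagan_sabr_vol(F, K, T, alpha, beta, rho, nu):
    """Approximate Black implied volatility under SABR (Hagan et al., 2002)."""
    if np.isclose(F, K):
        term = ((1 - beta) ** 2 / 24 * alpha**2 / F ** (2 - 2 * beta)
                + rho * beta * nu * alpha / (4 * F ** (1 - beta))
                + (2 - 3 * rho**2) / 24 * nu**2)
        return alpha / F ** (1 - beta) * (1 + term * T)
    logFK = np.log(F / K)
    FK = (F * K) ** ((1 - beta) / 2)
    z = (nu / alpha) * FK * logFK
    x_z = np.log((np.sqrt(1 - 2 * rho * z + z**2) + z - rho) / (1 - rho))
    denom = FK * (1 + (1 - beta) ** 2 / 24 * logFK**2 + (1 - beta) ** 4 / 1920 * logFK**4)
    term = ((1 - beta) ** 2 / 24 * alpha**2 / FK**2
            + rho * beta * nu * alpha / (4 * FK)
            + (2 - 3 * rho**2) / 24 * nu**2)
    return alpha / denom * (z / x_z) * (1 + term * T)

# One synthetic smile for a randomly drawn SABR parameter set (prior-generating step).
rng = np.random.default_rng(0)
params = dict(alpha=rng.uniform(0.1, 0.5), beta=0.5,
              rho=rng.uniform(-0.9, 0.0), nu=rng.uniform(0.2, 1.5))
strikes = np.linspace(0.8, 1.2, 9)
smile = [hagan_sabr_vol(F=1.0, K=k, T=0.5, **params) for k in strikes]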
  18. By: Daniel J. Wilson
    Abstract: Understanding the effects of weather on macroeconomic data is critically important, but it is hampered by limited time series observations. Utilizing geographically granular panel data provides many more observations but introduces a “missing intercept” problem: “global” effects (e.g., nationwide spillovers and general-equilibrium effects) are absorbed by time fixed effects. Standard solutions are infeasible when the number of global regressors is large. To overcome these problems and estimate granular, global, and total weather effects, we implement a two-step approach utilizing machine learning techniques. We apply this approach to estimate weather effects on U.S. monthly employment growth, obtaining several novel findings: (1) weather, and especially its lags, has substantial explanatory power for local employment growth, (2) shocks to both granular and global weather have significant immediate impacts on a broad set of macroeconomic outcomes, (3) responses to granular shocks are short-lived while those to global shocks are more persistent, (4) favorable weather shocks are often more impactful than unfavorable shocks, and (5) responses of most macroeconomic outcomes to weather shocks have been stable over time but the consumption response has fallen.
    Keywords: weather; Macroeconomic fluctuations; employment growth; granular shocks
    JEL: Q52 Q54 R11
    Date: 2025–09–23
    URL: https://d.repec.org/n?u=RePEc:fip:fedfwp:101766
  19. By: Eduardo Levy Yeyati; Ángeles Cortesi
    Abstract: Artificial intelligence (AI), particularly generative AI, has evolved rapidly, capturing the attention of policy makers, and raising important questions about regulation. This primer provides Latin American lawmakers a comprehensive overview of global AI regulatory efforts, proposes a taxonomy that categorizes the diverse approaches within the region’s socio-economic context, together with a set of guidelines and a toolkit of innovative strategies to address AI regulation in a flexible and forward-thinking manner.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:udt:wpgobi:202411
  20. By: Khakimov, Parviz; Aragie, Emerta A.; Goibov, Manuchehr; Ashurov, Timur
    Abstract: Findings from the World Bank’s agriculture sector public expenditure review (World Bank 2021) indicate that public expenditure on the agriculture sector remains relatively small, at less than one percent of GDP, though it grew significantly between 2015 and 2020, and that the sector relies heavily on donor financing (54 percent). There is notable underinvestment in R&D, at 0.7 percent of total public expenditure in the agriculture sector between 2016 and 2019, which impacts productivity and climate resilience. In this brief, to evaluate the potential impact of investment in research and development (R&D) on accelerating agricultural transformation and inclusiveness in Tajikistan’s agrifood system (AFS), we rely on IFPRI’s Rural Investment and Policy Analysis (RIAPA) economywide dynamic computable general equilibrium (CGE) model, which incorporates household survey-based microsimulation and investment modules and simulates the functioning of a market economy comprising markets for products and factors, including land, labor, and capital (IFPRI 2023).
    Keywords: investment; research; development; agrifood systems; agricultural sector; computable general equilibrium models; Tajikistan; Asia; Central Asia
    Date: 2025–06–25
    URL: https://d.repec.org/n?u=RePEc:fpr:ceaspb:29
  21. By: Pierre Courtioux (De Vinci Higher Education (DVRC) et Centre d'Economie de la Sorbonne); François Metivier (Université Paris Cité et IPGP)
    Abstract: Based on a microsimulation analysis over the period 2009-2019, this note presents the results of different scenarios for making the French R&D tax credit (CIR - Crédit d'Impôt Recherche) conditional on a dividend payment criterion. It shows that 27% of companies declaring R&D expenditure for the CIR in a given year pay dividends to their shareholders. Furthermore, 14% of companies declaring R&D expenditure eligible for the CIR increased their dividend payments in the same year. Depending on the scenario adopted, the introduction of a condition on the non-payment of dividends or the absence of an increase in payments
    Keywords: R&D tax credit; dividend; microsimulation; France
    JEL: H25 O30 O38
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:mse:cesdoc:25016
  22. By: Arash Peik; Mohammad Ali Zare Chahooki; Amin Milani Fard; Mehdi Agha Sarram
    Abstract: Precise short-term price prediction in the highly volatile cryptocurrency market is critical for informed trading strategies. Although Temporal Fusion Transformers (TFTs) have shown potential, their direct use often struggles in the face of the market's non-stationary nature and extreme volatility. This paper introduces an adaptive TFT modeling approach leveraging dynamic subseries lengths and pattern-based categorization to enhance short-term forecasting. We propose a novel segmentation method where subseries end at relative maxima, identified when the price increase from the preceding minimum surpasses a threshold, thus capturing significant upward movements, which act as key markers for the end of a growth phase, while potentially filtering the noise. Crucially, the fixed-length pattern ending each subseries determines the category assigned to the subsequent variable-length subseries, grouping typical market responses that follow similar preceding conditions. A distinct TFT model trained for each category is specialized in predicting the evolution of these subsequent subseries based on their initial steps after the preceding peak. Experimental results on ETH-USDT 10-minute data over a two-month test period demonstrate that our adaptive approach significantly outperforms baseline fixed-length TFT and LSTM models in prediction accuracy and simulated trading profitability. Our combination of adaptive segmentation and pattern-conditioned forecasting enables more robust and responsive cryptocurrency price prediction.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.10542
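    A minimal sketch of the segmentation rule described above: a subseries ends when the price gain from the preceding running minimum exceeds a threshold, marking the end of a growth phase. The threshold value is an illustrative assumption:
import numpy as np

def segment_at_relative_maxima(prices: np.ndarray, threshold: float = 0.01):
    """Return index positions where subseries end (significant upward moves)."""
    breakpoints = []
    running_min = prices[0]
    for t in range(1, len(prices)):
        running_min = min(running_min, prices[t])
        if prices[t] / running_min - 1.0 >= threshold:
            breakpoints.append(t)          # end of the current growth phase
            running_min = prices[t]        # restart the search after this peak
    return breakpoints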
  23. By: Eduardo Abi Jaber (CMAP - Centre de Mathématiques Appliquées de l'Ecole polytechnique - Inria - Institut National de Recherche en Informatique et en Automatique - X - École polytechnique - IP Paris - Institut Polytechnique de Paris - CNRS - Centre National de la Recherche Scientifique); Shaun Xiaoyuan Li (UP1 - Université Paris 1 Panthéon-Sorbonne)
    Abstract: An extensive empirical study of the class of Volterra Bergomi models using SPX options data between 2011 and 2022 reveals the following fact-check on two fundamental claims echoed in the rough volatility literature: Do rough volatility models with Hurst index H ∈ (0, 1/2) really capture the SPX implied volatility surface well with very few parameters? No, rough volatility models are inconsistent with the global shape of SPX smiles. They suffer from severe structural limitations imposed by the roughness component, with the Hurst parameter H ∈ (0, 1/2) controlling the smile in a poor way. In particular, the SPX at-the-money skew is incompatible with the power-law shape generated by rough volatility models. The skew of rough volatility models increases too fast on the short end, and decays too slowly on the longer end where "negative" H is sometimes needed. Do rough volatility models really outperform their classical Markovian counterparts consistently? No, for short maturities they underperform their one-factor Markovian counterpart with the same number of parameters. For longer maturities, they do not systematically outperform the one-factor model and significantly underperform when compared to an under-parametrized two-factor Markovian model with only one additional calibratable parameter. On the positive side: our study identifies a (non-rough) path-dependent Bergomi model and an under-parametrized two-factor Markovian Bergomi model that consistently outperform their rough counterpart in capturing SPX smiles between one week and three years with only 3 to 4 calibratable parameters.
    Keywords: Neural Networks, Calibration, Pricing, Stochastic volatility, SPX options
    Date: 2025–05–07
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-04372797
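    For orientation, the short-maturity power-law at-the-money skew referenced above is a standard rough-volatility result (stated here in generic notation, not taken from the paper): rough models with Hurst index H produce
\[
  \mathcal{S}(T) := \left.\frac{\partial \sigma_{\mathrm{BS}}(k, T)}{\partial k}\right|_{k=0}
  \;\sim\; c\, T^{\,H - 1/2}, \qquad H \in (0, \tfrac{1}{2}),
\]
    which explodes as T goes to 0 and decays slowly in T; the abstract argues this shape rises too fast at short maturities and decays too slowly at long maturities relative to observed SPX skews.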
  24. By: Vahab Rostampour
    Abstract: Estimating lifetime probabilities of default (PDs) under IFRS 9 and CECL requires projecting point-in-time transition matrices over multiple years. A persistent weakness is that macroeconomic forecast errors compound across horizons, producing unstable and volatile PD term structures. This paper reformulates the problem in a state-space framework and shows that a direct Kalman filter leaves non-vanishing variability. We then introduce an anchored observation model, which incorporates a neutral long-run economic state into the filter. The resulting error dynamics exhibit asymptotic stochastic stability, ensuring convergence in probability of the lifetime PD term structure. Simulation on a synthetic corporate portfolio confirms that anchoring reduces forecast noise and delivers smoother, more interpretable projections.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.10586
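    One possible reading of the anchored observation model described above, sketched as a single linear-Gaussian filter step in which a pseudo-observation pulls the latent macroeconomic state toward a neutral long-run level. All matrices, and the anchor level and its variance, are illustrative assumptions rather than the paper's specification:
import numpy as np

def anchored_kalman_step(x, P, y, A, Q, H, R, x_bar, R_anchor):
    """One predict/update step with an extra anchor pseudo-observation x_bar."""
    # Predict.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Augment the real observation y with a direct pseudo-observation of the state.
    n = len(x)
    H_aug = np.vstack([H, np.eye(n)])
    y_aug = np.concatenate([y, x_bar])
    R_aug = np.block([[R, np.zeros((R.shape[0], n))],
                      [np.zeros((n, R.shape[0])), R_anchor]])
    # Update.
    S = H_aug @ P_pred @ H_aug.T + R_aug
    K = P_pred @ H_aug.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y_aug - H_aug @ x_pred)
    P_new = (np.eye(n) - K @ H_aug) @ P_pred
    return x_new, P_new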
  25. By: Kupper, Michael (Center for Mathematical Economics, Bielefeld University); Nendel, Max (Center for Mathematical Economics, Bielefeld University); Sgarabottolo, Alessandro (Center for Mathematical Economics, Bielefeld University)
    Abstract: In this paper, we investigate stochastic versions of the Hopf-Lax formula which are based on compositions of the Hopf-Lax operator with the transition kernel of a Lévy process taking values in a separable Banach space. We show that, depending on the order of the composition, one obtains upper and lower bounds for the value function of a stochastic optimal control problem associated to the drift controlled Lévy dynamics. Dynamic consistency is restored by iterating the resulting operators. Moreover, the value function of the control problem is approximated both from above and below as the number of iterations tends to infinity, and we provide explicit convergence rates and guarantees for the approximation procedure.
    Keywords: Hopf-Lax formula, Lévy process, optimal control problem, nonlinear Lie-Trotter formula, Nisio semigroup, Wasserstein perturbation
    Date: 2025–08–18
    URL: https://d.repec.org/n?u=RePEc:bie:wpaper:747
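    A schematic statement of the two operators whose compositions the abstract refers to (notation is illustrative, not the paper's): for a convex cost \ell and a Lévy process (L_t),
\[
  (H_t f)(x) = \inf_{y}\Big\{ f(y) + t\,\ell\!\Big(\tfrac{y-x}{t}\Big)\Big\},
  \qquad
  (P_t f)(x) = \mathbb{E}\big[f(x + L_t)\big],
\]
    and composing them in the two possible orders, then iterating over finer partitions of the time interval, yields the upper and lower bounds that converge to the value function of the drift-controlled Lévy problem.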
  26. By: MENG, WEI
    Abstract: This study aims to examine how China's newly appointed ambassador to Thailand employs event networks and temporal dynamics to demonstrate issue entry and structural embedding in early diplomatic practices, thereby revealing the priorities and potential trends in China's diplomacy toward Thailand. Existing research predominantly focuses on macro-policy levels, lacking systematic quantitative analysis of event-level diplomatic activities. This study seeks to fill this gap. Methodologically, it employs an event-level observation-computational empirical design, constructing five-layer networks (administrative, legislative, multilateral, social, and media) and time series based on open-source intelligence (OSINT). The analytical process follows the HCLS paradigm: identifying structural hubs (Hub) via the Bridge Center Early Warning Index (BCEW), detecting rhythmic inflection points (Change) using CUSUM, BOCPD, and PELT methods, characterizing lead-response lag relationships (Lag) between issues through cross-correlation and Hawkes processes, and translating multidimensional evidence into issue priority scores (Score) using AHP→TOPSIS. Results indicate that the administrative and multilateral layers exhibit significant hub status within the network, while security and multilateral issues show statistically significant rhythmic inflection points within short-term windows. “Security→Administrative” and “Multilateral→UN-ESCAP” demonstrate strong coupling at zero lag, whereas legislative channel coupling is weaker and transient. Multi-criteria ranking indicates that security, digital cooperation, and multilateral rules form the priority issue sequence, remaining robust to weight perturbations. Integrating four evidence chains reveals that China's recent diplomatic focus toward Thailand centers on amplifying issue linkage through administrative and multilateral platforms, gradually shifting toward narrative coupling of rule-building and public diplomacy in the medium term. In conclusion, this study not only proposes a reproducible, falsifiable event-level diplomatic analysis methodology but also reveals the logical chain of “hub prioritization—issue triggering—platform amplification—narrative coupling—trend insight” in China-Thailand relations. This research offers a quantitative perspective for understanding the micro-operational mechanisms of Chinese diplomacy while providing empirical evidence for policy formulation and regional cooperation.
    Date: 2025–09–19
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:ywv9r_v1
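    A minimal sketch of the one-sided CUSUM detector, one of the change-point methods listed above, applied to a standardized event-intensity series; the reference value and alarm threshold are hypothetical:
import numpy as np

def cusum_alarms(series: np.ndarray, k: float = 0.5, h: float = 5.0):
    """Flag upward shifts: alarm whenever the cumulative sum S_t exceeds h."""
    z = (series - series.mean()) / series.std()
    s, alarms = 0.0, []
    for t, x in enumerate(z):
        s = max(0.0, s + x - k)       # accumulate positive deviations beyond k
        if s > h:
            alarms.append(t)          # candidate rhythmic inflection point
            s = 0.0                   # restart after an alarm
    return alarms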
  27. By: Brent Bundick; Isabel Cairó; Nicolas Petrosky-Nadeau
    Abstract: This paper reviews recent academic studies to assess the implications of adopting a shortfalls, rather than a deviations, approach to pursuing maximum employment. Model-based simulations from these studies suggest three main findings. First, shortfalls rules generate inflationary pressure relative to deviations rules, which offsets downward pressure on inflation stemming from the presence of the effective lower bound. Second, since monetary policy leans against these inflationary pressures, a shortfalls rule implies a limited effect on average outcomes in the labor market. Finally, studies suggest that monetary policy can offset higher-than-desired average inflation under a shortfalls rule by leaning more strongly against deviations of inflation from the 2 percent objective, thereby keeping longer-term inflation expectations well anchored.
    Keywords: Asymmetric monetary policy strategies; Maximum employment; Effective lower bound
    JEL: E32 E52 E58
    Date: 2025–08–22
    URL: https://d.repec.org/n?u=RePEc:fip:fedgfe:2025-68
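    A stylized contrast between the two strategies reviewed above (illustrative notation, not a specific rule from the cited studies), with u_t the unemployment rate and u_t^* its estimated longer-run natural rate:
\[
  \text{deviations rule:}\quad
  i_t = r^* + \pi_t + \phi_\pi(\pi_t - \pi^*) + \phi_u\,(u_t^* - u_t),
\]
\[
  \text{shortfalls rule:}\quad
  i_t = r^* + \pi_t + \phi_\pi(\pi_t - \pi^*) + \phi_u\,\min\{\,u_t^* - u_t,\; 0\,\},
\]
    so the shortfalls rule eases when employment falls short of its maximum (u_t above u_t^*) but does not tighten merely because unemployment falls below u_t^*, which is the source of the relative inflationary pressure the reviewed studies describe.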

This nep-cmp issue is ©2025 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.