nep-cmp New Economics Papers
on Computational Economics
Issue of 2026–04–13
twenty papers chosen by
Stan Miles, Thompson Rivers University


  1. Validating Large Language Model Annotations By Anne Lundgaard Hansen
  2. Returns to Education in the United States: A Comparison of OLS and Double Machine Learning Methods By Helal, Al Mansor; Hiraki, Ryotaro; Patrinos, Harry Anthony
  3. School Governance and Learner Performance in Sub-Saharan Africa: A Neural Networks Approach By Sylvain K. Assienin; Auguste K. Kouakou; Christian K. Nda; Loukou L. E. Yobouet
  4. Machine Learning Approaches for Improving Demand Forecasting Accuracy in Retail Supply Chains By Abdelfatah, Omar Sharafeldin Mohamed
  5. Debiasing LLMs by Fine-tuning By Zhenyu Gao; Wenxi Jiang; Yutong Yan
  6. Using large language models as a source of human behavioral data in social science experiments By van Loon, Austin; Kanopka, Klint
  7. Amortized Inference for Correlated Discrete Choice Models via Equivariant Neural Networks By Easton K. Huch; Michael P. Keane
  8. Quantum Computing for Financial Transformation: A Review of Optimisation, Pricing, Risk, Machine Learning, and Post-Quantum Security By Hui Gong; Akash Sedai; Thomas Schroeder; Francesca Medda
  9. PolySwarm: A Multi-Agent Large Language Model Framework for Prediction Market Trading and Latency Arbitrage By Rajat M. Barot; Arjun S. Borkhatariya
  10. Anticipatory Reinforcement Learning: From Generative Path-Laws to Distributional Value Functions By Daniel Bloch
  11. How AI Aggregation Affects Knowledge By Daron Acemoglu; Tianyi Lin; Asuman Ozdaglar; James Siderius
  12. SBBTS: A Unified Schrödinger-Bass Framework for Synthetic Financial Time Series By Alexandre Alouadi; Grégoire Loeper; Célian Marsala; Othmane Mazhar; Huyên Pham
  13. Beyond Black-Scholes: A Computational Framework for Option Pricing Using Heston, GARCH, and Jump Diffusion Models By Karmanpartap Singh Sidhu; Pranshi Saxena
  14. How AI Aggregation Affects Knowledge By Daron Acemoglu; Tianyi Lin; Asuman Ozdaglar; James Siderius
  15. Regime-aware conditional neural processes with multi-criteria decision support for operational electricity price forecasting By Abhinav Das; Stephan Schlüter; Lorenz Schneider
  16. Financial Anomaly Detection for the Canadian Market By Luigi Caputi; Nicholas Meadows
  17. Reproducibility: an open-source tool for computational hypothesis testing in natural language By Jimez, Bruno Oliveira Costa
  18. A Benchmark of Classical and Deep Learning Models for Agricultural Commodity Price Forecasting on A Novel Bangladeshi Market Price Dataset By Tashreef Muhammad; Tahsin Ahmed; Meherun Farzana; Md. Mahmudul Hasan; Abrar Eyasir; Md. Emon Khan; Mahafuzul Islam Shawon; Ferdous Mondol; Mahmudul Hasan; Muhammad Ibrahim
  19. Intraday Decision Support for Traders: Explainable CNN-Based Directional Price Forecasting from Candlestick Chart Images By Kazim, Zeeshan
  20. Modelling global trade with optimal transport By Gaskin, Thomas; Demirel, Guven; Wolfram, Marie-Therese; Duncan, Andrew

  1. By: Anne Lundgaard Hansen
    Abstract: This paper proposes a validation framework for LLM-generated measurements when reliable benchmarks are unavailable. Validity is established by testing whether an LLM can reconstruct passages from annotated labels while maintaining semantic consistency with the original text. The framework avoids circular reasoning by establishing testable prerequisite properties that must be met for a validation to be considered successful. Application to news article data demonstrates that the framework serves as a practical alternative to human benchmarking, offering advantages in objectivity, scalability, and cost-effectiveness while identifying cases where LLMs capture economic meaning that human evaluators miss.
    Keywords: Economic measurement; Machine learning; Unstructured data; Sentiment; Computational techniques
    JEL: C18 C45 C80
    Date: 2026–03–30
    URL: https://d.repec.org/n?u=RePEc:fip:fedgfe:103001
  2. By: Helal, Al Mansor; Hiraki, Ryotaro; Patrinos, Harry Anthony
    Abstract: This study examines the economic returns to education in the U.S. using 2024 CPS data and compares Ordinary Least Squares (OLS) regression with a Double Machine Learning (DML) framework incorporating models such as random forests, boosted trees, lasso, GAMs, and neural networks (MLP). Results show consistent returns of 8 to 9 percent per additional year of schooling across methods. Simulations reveal that all predictors perform well under linear assumptions if hyperparameters are optimally adjusted, while OLS/Lasso suffer from nonlinearity. Findings suggest that OLS remains robust in low-dimensional, near-linear contexts, offering practical guidance for economists and policymakers balancing model complexity and interpretability in education research.
    Keywords: Returns to education, Machine learning
    JEL: I20 J31 J24 D62 O15
    Date: 2026
    URL: https://d.repec.org/n?u=RePEc:zbw:glodps:1733
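For readers new to Double Machine Learning, the partialling-out idea behind papers like this one can be sketched in a few lines. This is an illustrative toy, not the authors' code: it simulates a returns-to-schooling model and uses a nearest-neighbour regressor as a stand-in for the flexible nuisance learners (random forests, boosted trees, etc.), with 2-fold cross-fitting.

```python
import random

random.seed(0)

def knn_predict(train_x, train_y, x, k=10):
    # nearest-neighbour regression: a stand-in for the flexible nuisance learners
    idx = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))[:k]
    return sum(train_y[i] for i in idx) / k

# toy data: log-wage y = theta * schooling d + g(ability x) + noise, d depends on x
theta = 0.08                                  # "true" return per year of schooling
n = 600
x = [random.gauss(0, 1) for _ in range(n)]
d = [12 + 2 * xi + random.gauss(0, 1) for xi in x]
y = [theta * di + 0.5 * xi * xi + random.gauss(0, 0.1) for di, xi in zip(d, x)]

# 2-fold cross-fitting: residualize y and d on x, then regress residual on residual
half = n // 2
folds = [(range(0, half), range(half, n)), (range(half, n), range(0, half))]
num = den = 0.0
for train, test in folds:
    tx = [x[i] for i in train]
    ty = [y[i] for i in train]
    td = [d[i] for i in train]
    for i in test:
        ry = y[i] - knn_predict(tx, ty, x[i])   # outcome residual
        rd = d[i] - knn_predict(tx, td, x[i])   # treatment residual
        num += rd * ry
        den += rd * rd
theta_hat = num / den                           # partialling-out estimate of theta
```

Because the nuisance functions are learned out-of-fold and residualized away, the final residual-on-residual regression recovers the schooling coefficient even though ability enters nonlinearly.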
  3. By: Sylvain K. Assienin (UJloG - Université Jean Lorougnon Guédé); Auguste K. Kouakou (UJloG - Université Jean Lorougnon Guédé); Christian K. Nda (UJloG - Université Jean Lorougnon Guédé); Loukou L. E. Yobouet (Université Alassane Ouattara [Bouaké, Côte d'Ivoire])
    Abstract: The aim of this paper is to analyse the impact of school governance on learner performance in Sub-Saharan Africa, in the face of persistently low performance in the region, revealed by the PASEC 2019 report. The study uses an econometric model followed by machine learning models (Logistic Regression, Random Forest, Extra Trees Classifier, Extreme Gradient Boosting, Artificial Neural Networks) to explore the relationships between school results and governance factors measured by school management, pedagogical practices and relations with stakeholders. The results show that artificial neural network models perform better than conventional approaches in terms of accuracy and explainability. Explainability by Shapley values shows that the quality of administrative and pedagogical management, benevolent school-student relations, and activities to promote the best students significantly improve performance. The study suggests capacity building for managers in order to improve the quality of administrative and pedagogical management. It also highlights the need to promote rigorous administrative governance, based on effective practices and adapted to local realities. In addition, specific strategies should be put in place to reward high-performing students, while encouraging professional collaboration between education stakeholders. Finally, a review of parental involvement practices is recommended in order to avoid inappropriate expectations likely to be detrimental to learners' performance.
    Keywords: Shapley values, neural networks, school performance, school governance
    Date: 2025–08–31
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05547822
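The Shapley-value explainability used in this paper can be illustrated with an exact, brute-force computation that averages marginal contributions over all feature orderings. The scoring function below is an invented toy, not the study's model:

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    # exact Shapley values by averaging marginal contributions over all orderings
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return [p / len(perms) for p in phi]

# invented score: management + 2*pedagogy + a management-relations interaction
def score(z):
    management, pedagogy, relations = z
    return management + 2 * pedagogy + 0.5 * management * relations

phi = shapley_values(score, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# efficiency: the attributions sum to score(x) - score(baseline) = 3.5
```

The interaction term is split equally between the two features involved, which is exactly the fairness property that makes Shapley values attractive for governance-factor attribution (real pipelines approximate this with the SHAP library rather than enumerating permutations).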
  4. By: Abdelfatah, Omar Sharafeldin Mohamed
    Abstract: Accurate demand forecasting remains one of the most critical yet persistently challenging functions in retail supply chain management. Traditional statistical forecasting methods such as ARIMA and exponential smoothing have long served as industry standards; however, their limited capacity to capture nonlinear demand patterns, seasonal volatility, and external market signals has prompted growing interest in machine learning (ML) alternatives. This study investigates the comparative effectiveness of multiple ML approaches, including Random Forest, Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM) neural networks, and hybrid ensemble models, against traditional baseline methods in the context of retail supply chain demand forecasting. Employing a quantitative research design, the study utilizes a panel dataset comprising 36 months of point-of-sale (POS) transaction records, promotional calendars, macroeconomic indicators, and weather data from 14 retail organizations operating across grocery, fashion, and consumer electronics segments. Forecasting accuracy is evaluated using Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), and Forecast Bias metrics across multiple product categories and forecasting horizons (1-week, 4-week, and 12-week ahead). Results demonstrate that ensemble ML models, particularly hybrid LSTM-XGBoost architectures, achieve statistically significant improvements in forecasting accuracy over traditional methods, with MAPE reductions averaging 28.6% at the 4-week horizon. Feature importance analysis identifies promotional activity, competitor pricing signals, and lagged POS data as the most influential demand drivers. The study further reveals that ML forecasting benefits are heterogeneous across product categories, with the highest gains observed in high-velocity, promotion-sensitive SKUs and smallest gains in slow-moving, low-volatility items. A practical implementation framework is proposed, offering retail supply chain practitioners a structured pathway from data readiness assessment through model deployment and ongoing performance monitoring.
    Date: 2026–04–03
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:4z9be_v1
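The three evaluation metrics named in the abstract (MAPE, RMSE, Forecast Bias) have standard definitions; a minimal sketch with invented demand numbers:

```python
import math

def mape(actual, forecast):
    # mean absolute percentage error, in percent of actual demand
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def forecast_bias(actual, forecast):
    # positive values mean systematic over-forecasting
    return 100 * sum(f - a for a, f in zip(actual, forecast)) / sum(actual)

actual = [100, 120, 80, 110]       # invented weekly demand
forecast = [90, 130, 85, 110]
```

Note that MAPE is undefined for zero-demand periods, one reason studies like this also report RMSE and a signed bias measure.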
  5. By: Zhenyu Gao; Wenxi Jiang; Yutong Yan
    Abstract: Prior research shows that large language models (LLMs) exhibit systematic extrapolation bias when forming predictions from both experimental and real-world data, and that prompt-based approaches appear limited in alleviating this bias. We propose a supervised fine-tuning (SFT) approach that uses Low-Rank Adaptation (LoRA) to train off-the-shelf LLMs on instruction datasets constructed from rational benchmark forecasts. By intervening at the parameter level, SFT changes how LLMs map observed information into forecasts and thereby mitigates extrapolation bias. We evaluate the fine-tuned model in two settings: controlled forecasting experiments and cross-sectional stock return prediction. In both settings, fine-tuning corrects the extrapolative bias out-of-sample, establishing a low-cost and generalizable method for debiasing LLMs.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.02921
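The LoRA mechanism referenced here replaces a full weight update with a low-rank one, W + (alpha/r)·BA, where only the small matrices A and B are trained and B starts at zero so fine-tuning begins exactly at the base model. A toy forward pass (plain lists, made-up sizes; real implementations use libraries such as peft rather than hand-rolled matrices):

```python
def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0, r=2):
    # y = W x + (alpha / r) * B (A x); only the low-rank A and B are trained
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

d = 4
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen weights
A = [[0.1] * d for _ in range(2)]   # r x d, small init in practice
B = [[0.0] * 2 for _ in range(d)]   # d x r, zeros: the adapter starts as a no-op
x = [1.0, 2.0, 3.0, 4.0]
y = lora_forward(W, A, B, x)        # equals W x before any training step
```

This is why SFT with LoRA is "low-cost": gradients flow only through the r·(2d) adapter parameters while the base LLM stays frozen.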
  6. By: van Loon, Austin; Kanopka, Klint (New York University)
    Abstract: Large language models (LLMs) have prompted proposals to replace human subjects in social science experiments with simulated responses. Empirical evaluations suggest that this practice---often called silicon sampling---can sometimes approximate human behavior but is unreliable. We delineate where this approach may still provide value and where it may not, but primarily study an alternative approach: one in which model-based predictions are used not as substitutes for human data, but as auxiliary measurements within randomized experiments. We formalize the inference of causal estimands from mixed-subjects randomized controlled trials, in which outcomes are observed for a subset of units while predictions are available for all units. Under transparent design conditions, we derive a family of estimators that remain unbiased for the average treatment effect in finite samples while exploiting predictions to reduce variance. We characterize when prediction-powered, calibration-based, arm-specifically tuned, and difference-in-predictions estimators improve precision, and we provide a software package which operationalizes these results and aids researchers to jointly select estimators and allocate budgets between human data collection and prediction generation. Together, our results show how generative artificial intelligence can improve experimental social science without compromising scientific validity.
    Date: 2026–04–03
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:y74mu_v1
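The "prediction-powered" style of estimator described in this abstract can be sketched as: take the difference in mean model predictions over all units, then correct it with per-arm residuals from the labeled subsample. This is a deliberate simplification of the paper's family of estimators, run on simulated data:

```python
import random

random.seed(1)

def ppi_ate(y_lab, f_lab, f_all, arm_lab, arm_all):
    # difference in mean predictions over ALL units, plus a per-arm "rectifier"
    # from residuals on the labeled subsample
    def mean(v):
        return sum(v) / len(v)
    pred_diff = (mean([f for f, a in zip(f_all, arm_all) if a == 1])
                 - mean([f for f, a in zip(f_all, arm_all) if a == 0]))
    rect1 = mean([y - f for y, f, a in zip(y_lab, f_lab, arm_lab) if a == 1])
    rect0 = mean([y - f for y, f, a in zip(y_lab, f_lab, arm_lab) if a == 0])
    return pred_diff + rect1 - rect0

# simulated trial: true ATE = 2.0; predictions are biased upward by 0.5
n, n_lab = 2000, 400
arm_all = [i % 2 for i in range(n)]
y_all = [2.0 * a + random.gauss(0, 1) for a in arm_all]        # observed for labeled only
f_all = [2.0 * a + 0.5 + random.gauss(0, 0.3) for a in arm_all]
est = ppi_ate(y_all[:n_lab], f_all[:n_lab], f_all, arm_all[:n_lab], arm_all)
```

The rectifier terms keep the estimator centered on the true effect even when the model's predictions are biased, which is the sense in which LLM outputs serve as auxiliary measurements rather than substitutes for human data.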
  7. By: Easton K. Huch; Michael P. Keane
    Abstract: Discrete choice models are fundamental tools in management science, economics, and marketing for understanding and predicting decision-making. Logit-based models are dominant in applied work, largely due to their convenient closed-form expressions for choice probabilities. However, these models entail restrictive assumptions on the stochastic utility component, constraining our ability to capture realistic and theoretically grounded choice behavior—most notably, substitution patterns. In this work, we propose an amortized inference approach using a neural network emulator to approximate choice probabilities for general error distributions, including those with correlated errors. Our proposal includes a specialized neural network architecture and accompanying training procedures designed to respect the invariance properties of discrete choice models. We provide group-theoretic foundations for the architecture, including a proof of universal approximation given a minimal set of invariant features. Once trained, the emulator enables rapid likelihood evaluation and gradient computation. We use Sobolev training, augmenting the likelihood loss with a gradient-matching penalty so that the emulator learns both choice probabilities and their derivatives. We show that emulator-based maximum likelihood estimators are consistent and asymptotically normal under mild approximation conditions, and we provide sandwich standard errors that remain valid even with imperfect likelihood approximation. Simulations show significant gains over the GHK simulator in accuracy and speed.
    JEL: C10 C13 C15 C25 C35 C45
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:35037
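The object the authors' neural emulator approximates, choice probabilities under correlated error distributions, can also be estimated by brute-force simulation. The crude frequency simulator below is far noisier and slower than GHK or the proposed emulator, but it shows the target quantity (all numbers are illustrative):

```python
import random

random.seed(3)

def probit_choice_probs(utilities, chol, n_draws=20_000):
    # crude frequency simulator: draw correlated normal errors via a Cholesky
    # factor of the error covariance, pick the max-utility alternative
    k = len(utilities)
    counts = [0] * k
    for _ in range(n_draws):
        z = [random.gauss(0, 1) for _ in range(k)]
        eps = [sum(chol[i][j] * z[j] for j in range(i + 1)) for i in range(k)]
        total = [u + e for u, e in zip(utilities, eps)]
        counts[total.index(max(total))] += 1
    return [c / n_draws for c in counts]

# alternatives 1 and 2 have correlated errors (close substitutes), 3 is independent
chol = [[1.0, 0.0, 0.0],
        [0.8, 0.6, 0.0],
        [0.0, 0.0, 1.0]]
probs = probit_choice_probs([0.5, 0.5, 0.0], chol)
```

The correlation makes alternatives 1 and 2 substitute for each other rather than for alternative 3, exactly the substitution pattern logit-based models cannot express.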
  8. By: Hui Gong; Akash Sedai; Thomas Schroeder; Francesca Medda
    Abstract: Quantum computing is becoming strategically relevant to finance because several core financial bottlenecks are already defined by combinatorial search, expectation estimation, rare-event analysis, representation learning, and long-horizon cryptographic resilience. This review examines that landscape across five connected domains: constrained portfolio optimisation, derivative pricing, tail-risk and scenario estimation, quantum machine learning, and post-quantum security. Rather than treating these topics as isolated demonstrations, the article studies them as linked layers of a financial-computation stack. Across all five domains, the review applies a common evaluative logic: identify the financial bottleneck, specify the relevant quantum primitive, compare it with an explicit classical benchmark, and assess the result under realistic implementation and governance constraints. The main conclusion is measured but consequential. The strongest near-term case for quantum finance lies in carefully designed hybrid workflows rather than blanket claims of universal advantage. Quantum optimisation is most credible when constrained search dominates; amplitude-estimation methods matter most when repeated expectation evaluation is the binding cost; quantum machine learning remains task dependent; and post-quantum cryptography is already strategically necessary because financial infrastructures must migrate before fault-tolerant attacks arrive. By combining system-level synthesis with locally reproducible small-scale case studies on simulated qubit registers, the article is intended both as a review of the field and as a handbook-style entry point for future work.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.08180
  9. By: Rajat M. Barot; Arjun S. Borkhatariya
    Abstract: This paper presents PolySwarm, a novel multi-agent large language model (LLM) framework designed for real-time prediction market trading and latency arbitrage on decentralized platforms such as Polymarket. PolySwarm deploys a swarm of 50 diverse LLM personas that concurrently evaluate binary outcome markets, aggregating individual probability estimates through confidence-weighted Bayesian combination of swarm consensus with market-implied probabilities, and applying quarter-Kelly position sizing for risk-controlled execution. The system incorporates an information-theoretic market analysis engine using Kullback-Leibler (KL) divergence and Jensen-Shannon (JS) divergence to detect cross-market inefficiencies and negation pair mispricings. A latency arbitrage module exploits stale Polymarket prices by deriving CEX-implied probabilities from a log-normal pricing model and executing trades within the human reaction-time window. We provide a full architectural description, implementation details, and evaluation methodology using Brier scores, calibration analysis, and log-loss metrics benchmarked against human superforecaster performance. We further discuss open challenges including hallucination in agent pools, computational cost at scale, regulatory exposure, and feedback-loop risk, and outline five priority directions for future research. Experimental results demonstrate that swarm aggregation consistently outperforms single-model baselines in probability calibration on Polymarket prediction tasks.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.03888
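Two of the framework's building blocks, confidence-weighted pooling of probabilities and quarter-Kelly position sizing, are standard and easy to sketch. The log-odds pooling below is one plausible reading of "confidence-weighted Bayesian combination", not necessarily the authors' exact rule:

```python
import math

def combine(swarm_probs, weights, market_prob, market_weight):
    # confidence-weighted pooling in log-odds space
    def logit(p):
        return math.log(p / (1 - p))
    num = (sum(w * logit(p) for p, w in zip(swarm_probs, weights))
           + market_weight * logit(market_prob))
    z = num / (sum(weights) + market_weight)
    return 1 / (1 + math.exp(-z))

def quarter_kelly(p, price):
    # Kelly fraction for a binary contract bought at `price` in (0, 1), scaled to 1/4
    b = (1 - price) / price          # net odds per unit staked
    f = (p * b - (1 - p)) / b        # full Kelly
    return max(0.0, f / 4)
```

With a 60% consensus probability on a contract priced at 0.50, quarter-Kelly stakes 5% of the bankroll; when the edge is negative the position is zero rather than short, a common risk-control simplification.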
  10. By: Daniel Bloch
    Abstract: This paper introduces Anticipatory Reinforcement Learning (ARL), a novel framework designed to bridge the gap between non-Markovian decision processes and classical reinforcement learning architectures, specifically under the constraint of a single observed trajectory. In environments characterised by jump-diffusions and structural breaks, traditional state-based methods often fail to capture the essential path-dependent geometry required for accurate foresight. We resolve this by lifting the state space into a signature-augmented manifold, where the history of the process is embedded as a dynamical coordinate. By utilising a self-consistent field approach, the agent maintains an anticipated proxy of the future path-law, allowing for a deterministic evaluation of expected returns. This transition from stochastic branching to a single-pass linear evaluation significantly reduces computational complexity and variance. We prove that this framework preserves fundamental contraction properties and ensures stable generalisation even in the presence of heavy-tailed noise. Our results demonstrate that by grounding reinforcement learning in the topological features of path-space, agents can achieve proactive risk management and superior policy stability in highly volatile, continuous-time environments.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.04662
  11. By: Daron Acemoglu; Tianyi Lin; Asuman Ozdaglar; James Siderius
    Abstract: Artificial intelligence (AI) changes social learning when aggregated outputs become training data for future predictions. To study this, we extend the DeGroot model by introducing an AI aggregator that trains on population beliefs and feeds synthesized signals back to agents. We define the learning gap as the deviation of long-run beliefs from the efficient benchmark, allowing us to capture how AI aggregation affects learning. Our main result identifies a threshold in the speed of updating: when the aggregator updates too quickly, there is no positive-measure set of training weights that robustly improves learning across a broad class of environments, whereas such weights exist when updating is sufficiently slow. We then compare global and local architectures. Local aggregators trained on proximate or topic-specific data robustly improve learning in all environments. Consequently, replacing specialized local aggregators with a single global aggregator worsens learning in at least one dimension of the state.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.04906
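The underlying dynamic, DeGroot belief updating with an AI aggregator in the loop, can be sketched as follows. The trust matrix, mixing weight, and update speed are toy choices, not the paper's model calibration:

```python
def degroot_step(beliefs, trust, agg_weight, agg_belief):
    # each agent mixes its neighbours' beliefs with the aggregator's signal
    n = len(beliefs)
    return [(1 - agg_weight) * sum(trust[i][j] * beliefs[j] for j in range(n))
            + agg_weight * agg_belief for i in range(n)]

def aggregator_update(agg_belief, beliefs, speed):
    # the aggregator retrains on current population beliefs at a given speed
    return (1 - speed) * agg_belief + speed * sum(beliefs) / len(beliefs)

beliefs = [0.2, 0.8, 0.5]
trust = [[0.5, 0.25, 0.25],
         [0.25, 0.5, 0.25],
         [0.25, 0.25, 0.5]]
agg = 0.5
for _ in range(200):
    agg = aggregator_update(agg, beliefs, speed=0.1)
    beliefs = degroot_step(beliefs, trust, agg_weight=0.3, agg_belief=agg)
# with this symmetric network, everyone converges to the initial average belief
```

The `speed` parameter is the quantity the paper's threshold result is about: when the aggregator retrains too fast on beliefs it has itself shaped, the feedback loop can degrade rather than improve learning.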
  12. By: Alexandre Alouadi; Grégoire Loeper; Célian Marsala; Othmane Mazhar; Huyên Pham
    Abstract: We study the problem of generating synthetic time series that reproduce both marginal distributions and temporal dynamics, a central challenge in financial machine learning. Existing approaches typically fail to jointly model drift and stochastic volatility, as diffusion-based methods fix the volatility while martingale transport models ignore drift. We introduce the Schrödinger-Bass Bridge for Time Series (SBBTS), a unified framework that extends the Schrödinger-Bass formulation to multi-step time series. The method constructs a diffusion process that jointly calibrates drift and volatility and admits a tractable decomposition into conditional transport problems, enabling efficient learning. Numerical experiments on the Heston model demonstrate that SBBTS accurately recovers stochastic volatility and correlation parameters that prior Schrödinger Bridge methods fail to capture. Applied to S&P 500 data, SBBTS-generated synthetic time series consistently improve downstream forecasting performance when used for data augmentation, yielding higher classification accuracy and Sharpe ratio compared to real-data-only training. These results show that SBBTS provides a practical and effective framework for realistic time series generation and data augmentation in financial applications.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.07159
  13. By: Karmanpartap Singh Sidhu; Pranshi Saxena
    Abstract: This research addresses accurate option pricing by employing models beyond the traditional Black-Scholes framework. While Black-Scholes provides a closed-form solution, it is limited by assumptions of constant volatility, no dividends, and continuous price movements. To overcome these limitations, we use Monte Carlo simulation alongside the GARCH model, Heston stochastic volatility model, and Merton jump-diffusion model. The Black-Scholes-Monte Carlo method simulates diverse stock price paths using geometric Brownian motion. The GARCH model forecasts time-varying volatility from historical data. The Heston model incorporates stochastic volatility to capture volatility clustering and skew. The Merton jump-diffusion model adds sudden price jumps via a Poisson process. Results show the Heston model consistently produces estimates closer to market prices, while the Merton model performs well for volatile assets with sudden price movements. The GARCH model provides improved volatility forecasts for future option price prediction. All experiments used live market data from November 2024.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.06068
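The baseline in this comparison, Black-Scholes Monte Carlo under geometric Brownian motion, can be checked against the closed form. A minimal sketch with illustrative parameters, not the paper's November 2024 market data:

```python
import math
import random

random.seed(42)

def bs_call(s, k, t, r, sigma):
    # Black-Scholes closed form for a European call
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda v: 0.5 * (1 + math.erf(v / math.sqrt(2)))
    return s * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def mc_call(s, k, t, r, sigma, n=200_000):
    # simulate terminal prices under GBM, discount the average payoff
    total = 0.0
    for _ in range(n):
        z = random.gauss(0, 1)
        st = s * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n

closed = bs_call(100, 100, 1.0, 0.05, 0.2)      # about 10.45
simulated = mc_call(100, 100, 1.0, 0.05, 0.2)   # agrees up to Monte Carlo error
```

The Heston, GARCH, and jump-diffusion models in the paper modify the simulation step (stochastic or time-varying sigma, added Poisson jumps) while keeping this same discounted-expected-payoff structure.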
  14. By: Daron Acemoglu; Tianyi Lin; Asuman Ozdaglar; James Siderius
    Abstract: Artificial intelligence (AI) changes social learning when aggregated outputs become training data for future predictions. To study this, we extend the DeGroot model by introducing an AI aggregator that trains on population beliefs and feeds synthesized signals back to agents. We define the learning gap as the deviation of long-run beliefs from the efficient benchmark, allowing us to capture how AI aggregation affects learning. Our main result identifies a threshold in the speed of updating: when the aggregator updates too quickly, there is no positive-measure set of training weights that robustly improves learning across a broad class of environments, whereas such weights exist when updating is sufficiently slow. We then compare global and local architectures. Local aggregators trained on proximate or topic-specific data robustly improve learning in all environments. Consequently, replacing specialized local aggregators with a single global aggregator worsens learning in at least one dimension of the state.
    JEL: D80 D83 D85
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:35036
  15. By: Abhinav Das (Universität Ulm (Germany, Ulm)); Stephan Schlüter (Technische Hochschule Ulm (Germany, Ulm)); Lorenz Schneider (EM - EMLyon Business School)
    Abstract: This work integrates Bayesian regime detection with conditional neural processes for 24-hour electricity price forecasting in the German, French, and Norwegian markets. Regimes are inferred via a disentangled sticky hierarchical Dirichlet process hidden Markov model (DS-HDP-HMM). For each regime, an independent conditional neural process (CNP) learns localized mappings from input contexts to 24-dimensional hourly price trajectories; final forecasts are produced as regime-weighted mixtures of the regime-specific CNP outputs. Temporal robustness and cross-market generalization are evaluated on Germany (2021–2023) and on France and Norway (2023). We benchmark against deep neural networks (DNN), the Lasso estimated autoregressive (LEAR) model, extreme gradient boosting (XGBoost), Bayesian long short-term memory (BLSTM), and the temporal fusion transformer (TFT), and assess downstream value through battery storage optimization. Results indicate that the proposed regime-aware CNP often delivers higher profits or lower costs, while DNN can be exceptionally competitive in specific cost-minimization settings. Because point accuracy does not necessarily translate into operational optimality, we apply the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to aggregate forecasting and operational criteria. TOPSIS ranks the CNP as the leading model for 2023 and, overall, as the most balanced and consistently preferred solution across the considered markets.
    Keywords: Battery energy storage systems, Regime-aware prediction, MCDM, Electricity price forecasting
    Date: 2026–05–01
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-05562231
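TOPSIS, the multi-criteria aggregation used here, ranks alternatives by relative closeness to an ideal point after weighted normalization. A sketch with two invented criteria (forecast error, lower is better; storage profit, higher is better):

```python
import math

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True if higher is better
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# invented criteria values for three forecasting models
models = [[10.0, 120.0],
          [12.0, 100.0],
          [15.0, 110.0]]
scores = topsis(models, weights=[0.5, 0.5], benefit=[False, True])
best = scores.index(max(scores))
```

Aggregating accuracy and operational criteria this way is what lets the paper rank models even when point accuracy and storage-optimization value disagree.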
  16. By: Luigi Caputi; Nicholas Meadows
    Abstract: In this work we evaluate the performance of three classes of methods for detecting financial anomalies: topological data analysis (TDA), principal component analysis (PCA), and Neural Network-based approaches. We apply these methods to the TSX-60 data to identify major financial stress events in the Canadian stock market. We show how neural network-based methods (such as GlocalKD and One-Shot GIN(E)) and TDA methods achieve the strongest performance. The effectiveness of TDA in detecting financial anomalies suggests that global topological properties are meaningful in distinguishing financial stress events.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.02549
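Of the three method classes, the PCA approach is the simplest to sketch: score each day by its reconstruction error from the leading principal component, so days that break the market's usual co-movement stand out. A stdlib-only illustration on synthetic returns (power iteration for the first component; not the authors' pipeline):

```python
import random

random.seed(7)

def first_pc(data):
    # power iteration on the covariance matrix gives the leading principal component
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(100):
        proj = [sum(r[j] * v[j] for j in range(d)) for r in centered]
        w = [sum(p * r[j] for p, r in zip(proj, centered)) / n for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return means, v

def reconstruction_error(row, means, v):
    c = [row[j] - means[j] for j in range(len(row))]
    t = sum(cj * vj for cj, vj in zip(c, v))
    return sum((cj - t * vj) ** 2 for cj, vj in zip(c, v)) ** 0.5

# two synthetic "asset returns" that normally move together, plus one stress day
data = []
for _ in range(200):
    r = random.gauss(0, 1)
    data.append([r, r + random.gauss(0, 0.1)])
data.append([3.0, -3.0])             # the co-movement breaks down
means, v = first_pc(data)
errors = [reconstruction_error(row, means, v) for row in data]
anomaly = errors.index(max(errors))  # index 200: the stress day
```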
  17. By: Jimez, Bruno Oliveira Costa
    Abstract: We present Reproducibility, an open-source software system for quantitative evaluation of scientific hypotheses formulated in natural language. The system converts scientific statements written in Portuguese or English into executable mathematical functions, assesses their internal consistency through Monte Carlo simulation, and measures their predictive power against real empirical data from open sources (World Bank, NASA POWER, IBGE) or user-supplied CSV files. We propose a composite reproducibility score — combining simulated consistency and empirical fit — and validate it against five cases with expected behaviour established by the literature.
    Date: 2026–03–31
    URL: https://d.repec.org/n?u=RePEc:osf:metaar:7enxu_v1
  18. By: Tashreef Muhammad; Tahsin Ahmed; Meherun Farzana; Md. Mahmudul Hasan; Abrar Eyasir; Md. Emon Khan; Mahafuzul Islam Shawon; Ferdous Mondol; Mahmudul Hasan; Muhammad Ibrahim
    Abstract: Accurate short-term forecasting of agricultural commodity prices is critical for food security planning and smallholder income stabilisation in developing economies, yet machine-learning-ready datasets for this purpose remain scarce in South Asia. This paper makes two contributions. First, we introduce AgriPriceBD, a benchmark dataset of 1,779 daily retail mid-prices for five Bangladeshi commodities - garlic, chickpea, green chilli, cucumber, and sweet pumpkin - spanning July 2020 to June 2025, extracted from government reports via an LLM-assisted digitisation pipeline. Second, we evaluate seven forecasting approaches spanning classical models - naïve persistence, SARIMA, and Prophet - and deep learning architectures - BiLSTM, Transformer, Time2Vec-enhanced Transformer, and Informer - with Diebold-Mariano statistical significance tests. Commodity price forecastability is fundamentally heterogeneous: naïve persistence dominates on near-random-walk commodities. Time2Vec temporal encoding provides no statistically significant advantage over fixed sinusoidal encoding and causes catastrophic degradation on green chilli (+146.1% MAE, p
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.06227
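Two ingredients of this benchmark, the naïve-persistence baseline and the Diebold-Mariano test, can be sketched together. The DM statistic below uses squared-error loss with no autocorrelation correction, a simplification relative to standard practice, and the prices are invented:

```python
import math

def diebold_mariano(e1, e2):
    # DM statistic for equal predictive accuracy under squared-error loss
    # (one-step-ahead, no HAC correction: a deliberate simplification)
    d = [a * a - b * b for a, b in zip(e1, e2)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# naive persistence: forecast tomorrow's price as today's price
prices = [10.0, 11.0, 10.5, 11.5, 12.0, 11.8, 12.5, 13.0, 12.7, 13.2]
naive_errors = [prices[t + 1] - prices[t] for t in range(len(prices) - 1)]
const_errors = [prices[t + 1] - 12.0 for t in range(len(prices) - 1)]  # flat forecast
dm = diebold_mariano(naive_errors, const_errors)  # negative: persistence wins here
```

On a near-random-walk series like this one, persistence is hard to beat, which is the paper's point about heterogeneous forecastability.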
  19. By: Kazim, Zeeshan
    Abstract: Short-horizon trader-support systems in financial markets should be judged not only by predictive skill but also by auditability, inspectability, and suitability for decision support under uncertainty. This paper studies an auditable 20-bar chart-image branch built from a retained MOEX Si futures artifact bundle. Starting from 14 contract-level one-minute candle files plus an auxiliary participant-position file, the pipeline resamples to 15-minute candles, selects the daily front contract by realized volume, applies a conservative liquidity filter, constructs a clean continuous-front series of 29,926 bars across 502 trading days, and generates 10,350 same-day, same-contract 20-bar windows rendered as 64 x 60 grayscale candlestick images. A three-block convolutional neural network (CNN) and a Grad-CAM-style local explanation layer are then embedded in a dashboard-centered inspection workflow. On the held-out test split (1,646 windows), the model attains 0.552 accuracy, 0.530 balanced accuracy, 0.408 F1, 0.557 ROC-AUC, and 0.063 MCC. Performance is modest and, at the default 0.50 threshold, trails the naive majority-class baseline on raw accuracy, while remaining better than random in threshold-free ranking terms. Results are contract-concentrated: SiZ5 materially outperforms SiU5. Quantitative explanation summaries show that heat mass is concentrated mainly in the price panel (0.807) rather than volume (0.193), but remains broad and only weakly focused on the final bar (0.043). The contribution is therefore bounded: not novelty of chart-image CNN forecasting or Grad-CAM in futures, both of which already exist, but the integration of chronology-aware futures engineering, auditable chart-image modeling, local explanation outputs, and row-level dashboard inspection in a trader-facing decision-support artifact. Fee-aware economic validation and formal user evaluation remain separated as pending next-stage work.
    Date: 2026–03–31
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:btnz8_v1
  20. By: Gaskin, Thomas; Demirel, Guven; Wolfram, Marie-Therese; Duncan, Andrew
    Abstract: Global trade is shaped by a complex mix of factors beyond supply and demand, including tangible variables like transport costs and tariffs, as well as less quantifiable influences such as political and economic relations. Traditionally, economists model trade using gravity models, which rely on explicit covariates that might struggle to capture these subtler drivers of trade. In this work, we employ optimal transport and a deep neural network to learn a time-dependent cost function from data, without imposing a specific functional form. This approach consistently outperforms traditional gravity models in accuracy and has similar performance to three-way gravity models, while providing natural uncertainty quantification. Applying our framework to global food and agricultural trade, we show that low income countries experienced disproportionately higher increases in trade costs due to the war in Ukraine’s impact on wheat markets. We also analyse the effects of free-trade agreements and trade disputes with China, as well as Brexit’s impact on British trade with Europe, uncovering hidden patterns that trade volumes alone cannot reveal.
    Keywords: REF fund 2025/2026
    JEL: L81
    Date: 2026–02–19
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:137330
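The optimal-transport machinery at the core of this paper can be illustrated with entropic regularization and Sinkhorn scaling, a common computational route. In the paper the cost function is learned by a deep neural network; here the cost matrix and marginals are toy values:

```python
import math

def sinkhorn(supply, demand, cost, reg=0.1, iters=500):
    # entropic-regularized optimal transport via alternating Sinkhorn scalings
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u = [1.0] * len(supply)
    v = [1.0] * len(demand)
    for _ in range(iters):
        u = [s / sum(K[i][j] * v[j] for j in range(len(v)))
             for i, s in enumerate(supply)]
        v = [d / sum(K[i][j] * u[i] for i in range(len(u)))
             for j, d in enumerate(demand)]
    return [[u[i] * K[i][j] * v[j] for j in range(len(v))] for i in range(len(u))]

# toy trade problem: two exporters, two importers, cheap "domestic" routes
supply = [0.6, 0.4]
demand = [0.5, 0.5]
cost = [[0.0, 1.0],
        [1.0, 0.0]]
plan = sinkhorn(supply, demand, cost)   # rows sum to supply, columns to demand
```

Inverting this map, inferring the cost matrix from observed trade flows, is the learning problem the authors solve with a neural parameterization.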

This nep-cmp issue is ©2026 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the Griffith Business School of Griffith University in Australia.