| on Computational Economics |
| By: | Alexander Eliseev (Bank of Russia, Russian Federation); Sergei Seleznev (Bank of Russia, Russian Federation) |
| Abstract: | Large language models (LLMs) are a type of machine learning tool that economists have started to apply in their empirical research. One such application is macroeconomic forecasting with backtesting of LLMs, even though they are trained on the same data that is used to estimate their forecasting performance. Can these in-sample accuracy results be extrapolated to the model’s out-of-sample performance? To answer this question, we developed a family of prompt sensitivity tests and two members of this family, which we call the fake date tests. These tests aim to detect two types of biases in LLMs’ in-sample forecasts: lookahead bias and context bias. According to the empirical results, none of the modern LLMs tested in this study passed our tests, signaling the presence of biases in their in-sample forecasts. |
| Keywords: | large language models, macroeconomic forecasting, lookahead bias, context bias |
| JEL: | C12 C52 C53 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:bkr:wpaper:wps167 |
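A fake date test of the kind this abstract describes can be sketched minimally: hold the data fixed, change only the stated "current" date in the prompt, and check whether the forecast moves. Everything below (the prompt template, the `forecast` callable, the toy stand-in model) is an illustrative assumption, not the authors' protocol.

```python
def make_prompt(series_name, history, as_of_date):
    """Forecasting prompt that states an explicit 'current' date."""
    obs = ", ".join(f"{v:.1f}" for v in history)
    return (f"Today is {as_of_date}. Recent {series_name} readings: {obs}. "
            "Forecast the next value as a single number.")

def fake_date_test(forecast, series_name, history, true_date, fake_date):
    """Query the same model twice, changing only the stated date.

    A nonzero gap means the forecast is sensitive to the date token
    alone, consistent with lookahead or context bias."""
    f_true = forecast(make_prompt(series_name, history, true_date))
    f_fake = forecast(make_prompt(series_name, history, fake_date))
    return {"true": f_true, "fake": f_fake, "gap": abs(f_true - f_fake)}

# Toy deterministic stand-in for an LLM that leaks the stated year
# into its answer, so the test flags it.
def leaky_model(prompt):
    year = int(prompt.split("Today is ")[1][:4])
    return 2.0 + 0.1 * (year - 2020)

result = fake_date_test(leaky_model, "inflation", [2.1, 2.3, 2.2],
                        "2023-06-30", "2019-06-30")
```

A model that genuinely ignored the date would yield a zero gap; the leaky stand-in above yields a gap of 0.4.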
| By: | Ricardo Crisostomo; Diana Mykhalyuk |
| Abstract: | This paper investigates whether large language models (LLMs) can generate reliable stock market predictions. We evaluate four state-of-the-art models - ChatGPT, Gemini, DeepSeek, and Perplexity - across three prompting strategies: a naive query, a structured approach, and chain-of-thought reasoning. Our results show that LLM-generated recommendations are hindered by recurring reasoning failures, including financial misconceptions, carryover errors, and reliance on outdated or hallucinated information. When appropriately guided and supervised, LLMs demonstrate the capacity to outperform the market, but realizing LLMs' full potential requires substantial human oversight. We also find that grounding stock recommendations in official regulatory filings increases their forecasting accuracy. Overall, our findings underscore the need for robust safeguards and validation when deploying LLMs in financial markets. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.19944 |
| By: | Vegard H. Larsen; Leif Anders Thorsrud |
| Abstract: | Building on recent advances in Natural Language Processing and modeling of sequences, we study how a multimodal Transformer-based deep learning architecture can be used for measurement and structural narrative attribution in macroeconomics. The framework we propose combines (news) text and (macroeconomic) time series information using cross-attention mechanisms, easily incorporates differences in data frequencies and reporting delays, and can be used together with Reinforcement Learning to produce structurally coherent summaries of high-frequency news flows. Applied and tested on both simulated and real-world data out-of-sample, the results we obtain are encouraging. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:bny:wpaper:0147 |
| By: | Lukas Gonon; Antoine Jacquier; Marcel Mordarski |
| Abstract: | We provide here a universal approximation theorem with precise quantitative error bounds for noisy quantum neural networks. We focus on applications to Quantitative Finance, where target functions are often given as expectations. We further provide a detailed numerical analysis, testing our results on actual noisy quantum hardware. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.02064 |
| By: | Mehmet Caner; Agostino Capponi; Nathan Sun; Jonathan Y. Tan |
| Abstract: | We introduce a new agentic artificial intelligence (AI) platform for portfolio management. Our architecture consists of three layers. First, two large language model (LLM) agents are assigned specialized tasks: one agent screens for firms with desirable fundamentals, while a sentiment analysis agent screens for firms with desirable news. Second, these agents deliberate to generate and agree upon buy and sell signals from a large portfolio, substantially narrowing the pool of candidate assets. Finally, we apply a high-dimensional precision matrix estimation procedure to determine optimal portfolio weights. A defining theoretical feature of our framework is that the number of assets in the portfolio is itself a random variable, realized through the screening process. We introduce the concept of sensible screening and establish that, under mild screening errors, the squared Sharpe ratio of the screened portfolio consistently estimates its target. Empirically, our method achieves superior Sharpe ratios relative to an unscreened baseline portfolio and to conventional screening approaches, evaluated on S&P 500 data over the period 2020--2024. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.23300 |
| By: | Minkey Chang |
| Abstract: | We study whether a risk-sensitive objective from asset-pricing theory -- recursive utility -- improves reinforcement learning for portfolio allocation. The Bellman equation under recursive utility involves a certainty equivalent (CE) of future value that has no closed form under observed returns; we approximate it by $K$-sample Monte Carlo and train actor-critic (PPO, A2C) on the resulting value target and an approximate advantage estimate (AAE) that generalizes the Bellman residual to multi-step with state-dependent weights. This formulation applies only to critic-based algorithms. On 10 chronological train/test splits of South Korean ETF data, the recursive-utility agent improves on the discounted (naive) baseline in Sharpe ratio, max drawdown, and cumulative return. Derivations, world model and metrics, and full result tables are in the appendices. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.22880 |
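The K-sample Monte Carlo certainty equivalent at the core of this approach can be sketched as follows, assuming a power (CRRA-style) aggregator; the paper's exact Epstein-Zin recursion may use a different functional form, and the payoff distribution here is a toy.

```python
import random

def certainty_equivalent_mc(sample_value, K, gamma, rng):
    """K-sample Monte Carlo certainty equivalent under a power
    aggregator: CE = (E[V^(1-gamma)])^(1/(1-gamma)), for positive
    values V and gamma > 0, gamma != 1 (illustrative form only)."""
    draws = [sample_value(rng) for _ in range(K)]
    moment = sum(v ** (1.0 - gamma) for v in draws) / K
    return moment ** (1.0 / (1.0 - gamma))

# Toy positive future value with some dispersion.
sample = lambda r: 1.0 + abs(r.gauss(0.0, 0.2))

ce = certainty_equivalent_mc(sample, K=10_000, gamma=5.0,
                             rng=random.Random(0))

# Plain Monte Carlo mean of the same seeded draws, for comparison.
rng2 = random.Random(0)
mc_mean = sum(sample(rng2) for _ in range(10_000)) / 10_000
```

For gamma > 1 the certainty equivalent sits strictly below the plain Monte Carlo mean; that risk penalty is what the recursive-utility objective feeds into the Bellman value target in place of a naive expectation.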
| By: | Easton Huch; Michael Keane |
| Abstract: | Discrete choice models are fundamental tools in management science, economics, and marketing for understanding and predicting decision-making. Logit-based models are dominant in applied work, largely due to their convenient closed-form expressions for choice probabilities. However, these models entail restrictive assumptions on the stochastic utility component, constraining our ability to capture realistic and theoretically grounded choice behavior - most notably, substitution patterns. In this work, we propose an amortized inference approach using a neural network emulator to approximate choice probabilities for general error distributions, including those with correlated errors. Our proposal includes a specialized neural network architecture and accompanying training procedures designed to respect the invariance properties of discrete choice models. We provide group-theoretic foundations for the architecture, including a proof of universal approximation given a minimal set of invariant features. Once trained, the emulator enables rapid likelihood evaluation and gradient computation. We use Sobolev training, augmenting the likelihood loss with a gradient-matching penalty so that the emulator learns both choice probabilities and their derivatives. We show that emulator-based maximum likelihood estimators are consistent and asymptotically normal under mild approximation conditions, and we provide sandwich standard errors that remain valid even with imperfect likelihood approximation. Simulations show significant gains over the GHK simulator in accuracy and speed. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.24705 |
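The Sobolev training objective mentioned in the abstract has a simple shape: a value-matching loss plus a gradient-matching penalty. The scalar toy below (a linear "emulator" fit to a known target and its derivative) only illustrates that shape; in the paper the value term is a likelihood over choice probabilities and the gradients come from a neural emulator.

```python
def sobolev_loss(w, xs, target, target_grad, lam):
    """Toy Sobolev objective for a scalar linear emulator f_w(x) = w*x:
    a value-matching term plus a gradient-matching penalty of weight
    lam (illustrative stand-in for the paper's likelihood-based loss)."""
    value_term = sum((w * x - target(x)) ** 2 for x in xs)
    grad_term = sum((w - target_grad(x)) ** 2 for x in xs)  # d/dx (w*x) = w
    return value_term + lam * grad_term

target = lambda x: 2.0 * x        # stand-in for the true mapping
target_grad = lambda x: 2.0       # its known derivative
xs = [0.5, 1.0, 1.5]
lam = 0.1

# Minimize the Sobolev objective by plain gradient descent on w.
w, lr = 0.0, 0.05
for _ in range(200):
    g = (sum(2.0 * (w * x - target(x)) * x for x in xs)
         + lam * sum(2.0 * (w - target_grad(x)) for x in xs))
    w -= lr * g
```

Matching derivatives as well as values is what lets a trained emulator deliver accurate score functions, which is exactly what maximum likelihood estimation needs downstream.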
| By: | Kemal Kirtac |
| Abstract: | This paper studies whether a lightweight trained aggregator can combine diverse zero-shot large language model judgments into a stronger downstream signal for corporate disclosure classification. Zero-shot LLMs can read disclosures without task-specific fine-tuning, but their predictions often vary across prompts, reasoning styles, and model families. I address this problem with a multi-agent framework in which three zero-shot agents independently read each disclosure and output a sentiment label, a confidence score, and a short rationale. A logistic meta-classifier then aggregates these signals to predict next-day stock return direction. I use a sample of 18,420 U.S. corporate disclosures issued by Nasdaq and S&P 500 firms between 2018 and 2024, matched to next-day stock returns. Results show that the trained aggregator outperforms all single agents, majority vote, confidence-weighted voting, and a FinBERT baseline. Balanced accuracy rises from 0.561 for the best single agent to 0.612 for the trained aggregator, with the largest gains in disclosures combining strong current performance with weak guidance or elevated risk. The results suggest that zero-shot LLM agents capture complementary financial signals and that supervised aggregation can turn cross-agent disagreement into a more useful classification target. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.20965 |
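The aggregation step this abstract describes (a logistic meta-classifier over three agents' labels and confidences) can be sketched in a few lines. The feature construction and toy training data below are assumptions for illustration; the paper additionally uses rationale text, which is omitted here.

```python
import math

def features(agent_outputs):
    """Flatten three agents' (label, confidence) pairs into one vector;
    labels are +1 (positive) / -1 (negative) sentiment."""
    return [label * conf for label, conf in agent_outputs]

def predict(w, b, x):
    """Logistic meta-classifier: P(next-day return is up)."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=500, lr=0.5):
    """Fit the meta-classifier weights by stochastic gradient descent."""
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy disclosures: agent 1 tracks the outcome, agents 2-3 are noise.
data = [
    (features([(+1, 0.9), (-1, 0.5), (+1, 0.4)]), 1),
    (features([(+1, 0.8), (+1, 0.5), (-1, 0.6)]), 1),
    (features([(-1, 0.9), (+1, 0.5), (+1, 0.4)]), 0),
    (features([(-1, 0.7), (-1, 0.5), (-1, 0.6)]), 0),
]
w, b = train(data)
```

Unlike majority or confidence-weighted voting, the trained aggregator can learn to down-weight an uninformative agent entirely, which is where the reported gain over voting baselines plausibly comes from.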
| By: | Paolo Pellizzari (Ca’ Foscari University of Venice); Francesca Parpinel (Ca’ Foscari University of Venice) |
| Abstract: | In light of the Madoff case, we present an Agent-Based Model of a Ponzi scheme. Agents are initially inclined to invest in the scam because they believe their wealth will increase, even though the fraudster dissipates it without making any investment. We stress that the main characteristic of such schemes is the growing discrepancy between the perceived wealth and the actual total amount of money in the impostor's possession. The tendency gradually reverses, and more agents withdraw their wealth (and made-up profits), once trust is lost as a result of negative news about the economy. We examine how long it takes for the fraud to be exposed and bankruptcy to be filed, in relation to the volume of news entering the market. We also investigate the impact on the time to bankruptcy of a special agent dubbed Markopolos (inspired by a real person), who can quickly "convince" the agents he encounters to disinvest. Although the Markopolos effect appears statistically significant, it is weak compared with the effect of the news flow and the subsequent widespread loss of faith and redemptions. |
| Keywords: | Agent-Based Model, Ponzi Schemes, NetLogo |
| JEL: | C63 C88 D83 K42 G11 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:ven:wpaper:2026:10 |
| By: | Nabeel Ahmad Saidd |
| Abstract: | Applying reinforcement learning (RL) to foreign exchange (Forex) trading remains challenging because realistic environments, well-defined reward functions, and expressive action spaces must all be provided simultaneously, yet many prior studies rely on simplified simulators, single scalar rewards, and restricted action representations, limiting both interpretability and practical relevance. This paper presents a modular RL framework designed to address these limitations through three tightly integrated components: a friction-aware execution engine that enforces strict anti-lookahead semantics, with observations at time t, execution at time t+1, and mark-to-market at time t+1, while incorporating realistic costs such as spread, commission, slippage, rollover financing, and margin-triggered liquidation; a decomposable 11-component reward architecture with fixed weights and per-step diagnostic logging to enable systematic ablation and component-level attribution; and a 10-action discrete interface with legal-action masking that encodes explicit trading primitives while enforcing margin-aware feasibility constraints. Empirical evaluation on EURUSD focuses on learning dynamics rather than generalization and reveals strongly non-monotonic reward interactions, where additional penalties do not reliably improve outcomes; the full reward configuration achieves the highest training Sharpe (0.765) and cumulative return (57.09 percent). The expanded action space increases return but also turnover and reduces Sharpe relative to a conservative 3-action baseline, indicating a return-activity trade-off under a fixed training budget, while scaling-enabled variants consistently reduce drawdown, with the combined configuration achieving the strongest endpoint performance. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.00031 |
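The anti-lookahead semantics this abstract specifies (observe at t, execute at t+1, mark to market at t+1) reduce to a careful ordering of operations in the backtest loop. The sketch below shows only that ordering; the paper's frictions (spread, commission, slippage, rollover financing, margin liquidation) and its 10-action interface are omitted.

```python
def backtest(prices, signal):
    """Minimal anti-lookahead loop: the signal at time t sees only
    prices[:t+1]; the resulting position takes effect from t+1, so the
    return realized over (t, t+1] accrues to the position chosen at
    t-1. Frictions are omitted in this sketch."""
    position, equity = 0.0, [1.0]
    for t in range(len(prices) - 1):
        target = signal(prices[: t + 1])       # information set ends at t
        ret = prices[t + 1] / prices[t] - 1.0  # realized over (t, t+1]
        equity.append(equity[-1] * (1.0 + position * ret))
        position = target                      # executes at t+1
    return equity

# An always-long signal earns nothing in the first period: its first
# order only executes one step after the first observation.
equity = backtest([100.0, 101.0, 102.0, 103.0], lambda history: 1.0)
```

Reversing the last two statements in the loop body would silently grant the agent same-step execution, the classic lookahead bug this design rules out.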
| By: | Hongyang Yang; Boyu Zhang; Yang She; Xinyu Liao; Xiaoli Zhang |
| Abstract: | We present FinRL-X, a modular and deployment-consistent trading architecture that unifies data processing, strategy construction, backtesting, and broker execution under a weight-centric interface. While existing open-source platforms are often backtesting- or model-centric, they rarely provide system-level consistency between research evaluation and live deployment. FinRL-X addresses this gap through a composable strategy pipeline that integrates stock selection, portfolio allocation, timing, and portfolio-level risk overlays within a unified protocol. The framework supports both rule-based and AI-driven components, including reinforcement learning allocators and LLM-based sentiment signals, without altering downstream execution semantics. FinRL-X provides an extensible foundation for reproducible, end-to-end quantitative trading research and deployment. The official FinRL-X implementation is available at https://github.com/AI4Finance-Foundation/FinRL-Trading. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.21330 |
| By: | DiGiuseppe, Matthew (Leiden University); Fu, Xuelong; Flynn, Michael E (Kansas State University) |
| Abstract: | Trade data are available at a high level of disaggregation, allowing scholars to examine flows of highly specific goods. Yet the sheer number of goods classifications (5,000+) makes it difficult to analyze trade flows and tariff policy at a mid-level of aggregation beyond a few existing categorizations. Here, we outline a method that can scale---not merely classify---traded goods on researcher-defined dimensions that are orthogonal to existing classification schemes. We propose that the embedded knowledge in large language models (LLMs) can be used to conduct pairwise comparisons (PWCs) of Harmonized System (HS) product descriptions by determining their relative proximity to a specific concept. A Bayesian Bradley--Terry model then uses these PWCs to place individual items on a latent scale of interest. These estimates and their associated uncertainty can then be used for downstream descriptive or causal analysis. |
| Date: | 2026–03–27 |
| URL: | https://d.repec.org/n?u=RePEc:osf:socarx:t8wdg_v1 |
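The Bradley--Terry step that turns pairwise comparisons into a latent scale can be sketched with the classical MM (Zermelo) point-estimate updates. The paper fits a Bayesian version with uncertainty quantification; this sketch, with made-up comparison counts, returns normalized point estimates only.

```python
def bradley_terry(n_items, wins, iters=200):
    """Bradley-Terry strengths from pairwise comparisons via the
    classical MM (Zermelo) updates, where P(i beats j) = s_i/(s_i+s_j).
    `wins[(i, j)]` counts comparisons in which i beat j; the comparison
    graph must be connected for the estimates to be identified."""
    s = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            w_i = sum(w for (a, _), w in wins.items() if a == i)
            denom = 0.0
            for (a, b), w in wins.items():
                if i in (a, b):
                    j = b if a == i else a
                    denom += w / (s[i] + s[j])
            new.append(w_i / denom if denom else s[i])
        total = sum(new)
        s = [v / total for v in new]  # normalize (scale is arbitrary)
    return s

# Toy LLM pairwise judgments over three HS product descriptions:
# item 0 is judged closest to the target concept most often.
wins = {(0, 1): 8, (1, 0): 2, (1, 2): 7, (2, 1): 3, (0, 2): 9, (2, 0): 1}
scale = bradley_terry(3, wins)
```

The resulting strengths order the items on the latent dimension; the Bayesian formulation in the paper additionally yields credible intervals for downstream analysis.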
| By: | Sampat, Khushi (University of Warwick) |
| Abstract: | This paper studies how decentralised neural agents trained by regret minimisation learn equilibrium behaviour in static games and whether such learning can be extended beyond Nash equilibria. The analysis proceeds in two parts. The first chapter examines equilibrium selection in coordination games with multiple Nash equilibria. Building on recent evidence that neural agents trained across large distributions of games systematically favour risk-dominant equilibria, the chapter introduces a structured pre-training curriculum designed to instil a bias toward payoff-dominant outcomes in Stag Hunt environments. While pre-training successfully induces efficient coordination in these games, the results show that this bias is rapidly eroded under subsequent adversarial training on heterogeneous games, where play reverts to mixed or risk-sensitive equilibria. The second chapter investigates whether decentralised learners can acquire correlated equilibrium behaviour when coordination requires conditioning on private signals. Initial experiments demonstrate that standard personal regret objectives lead agents to ignore mediator signals and converge to unconditional Nash strategies. This limitation is overcome by replacing personal regret with a squared obedience (swap) regret objective. Under this modified objective, neural agents successfully learn signal-contingent behaviour and generalise correlated equilibrium strategies to unseen coordination games. Together, the findings clarify the capabilities and limitations of regret-based learning as a mechanism for equilibrium formation in strategic environments. |
| Keywords: | Correlated Equilibrium, Regret Minimisation, Deep Reinforcement Learning, Neural Networks, Game Theory |
| JEL: | C72 C63 C73 D83 C61 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:wrk:wrkesp:99 |
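The obedience condition underlying the swap-regret objective in this abstract can be made concrete with a small diagnostic: for each mediator recommendation, compute the gain from deviating to the best fixed alternative action. This is an illustrative calculation, not the paper's squared training objective, and the payoff tables are toy examples.

```python
def obedience_regret(recommendations, payoffs):
    """Per-recommendation obedience regret for one agent: for each
    recommended action a, the gain from swapping a for the best fixed
    deviation, holding other players' realized actions fixed.
    `payoffs[t][b]` is the payoff the agent would have received in
    round t by playing b. Zero regret for every a is the obedience
    condition of a correlated equilibrium."""
    n_actions = len(payoffs[0])
    regret = {}
    for a in set(recommendations):
        rounds = [t for t, r in enumerate(recommendations) if r == a]
        followed = sum(payoffs[t][a] for t in rounds)
        best_dev = max(sum(payoffs[t][b] for t in rounds)
                       for b in range(n_actions))
        regret[a] = max(0.0, best_dev - followed)
    return regret

# Signals worth obeying: following the recommendation is optimal in
# every round, so regret is zero for both signals.
obey = obedience_regret([0, 1, 0, 1],
                        [[1.0, 0.0], [0.2, 1.0], [1.0, 0.3], [0.0, 1.0]])

# Signals worth ignoring: action 1 always pays more than the
# recommended action 0, so obedience regret is positive.
disobey = obedience_regret([0, 0], [[0.0, 1.0], [0.0, 1.0]])
```

An agent minimising only personal (external) regret can ignore the signal entirely, which is exactly the failure mode the abstract reports; driving the per-signal quantities above to zero forces signal-contingent play.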
| By: | Nabeel Ahmad Saidd |
| Abstract: | Multi-horizon price forecasting is central to portfolio allocation, risk management, and algorithmic trading, yet deep learning architectures have proliferated faster than rigorous financial benchmarks can evaluate them. This study provides a controlled comparison of nine architectures (Autoformer, DLinear, iTransformer, LSTM, ModernTCN, N-HiTS, PatchTST, TimesNet, and TimeXer) spanning Transformer, MLP, CNN, and RNN families across cryptocurrency, forex, and equity index markets at 4-hour and 24-hour horizons. A total of 918 experiments were conducted under a strict five-stage protocol including fixed-seed Bayesian hyperparameter optimization, configuration freezing per asset class, multi-seed retraining, uncertainty aggregation, and statistical validation. ModernTCN achieves the best mean rank (1.333) with a 75 percent first-place rate, followed by PatchTST (2.000). Results reveal a clear three-tier ranking structure and show that architecture explains nearly all performance variance, while seed randomness is negligible. Rankings remain stable across horizons despite 2 to 2.5 times error amplification. Directional accuracy remains near 50 percent across all configurations, indicating that MSE-trained models lack directional skill at hourly resolution. The findings highlight the importance of architectural inductive bias over raw parameter count and provide reproducible guidance for multi-step financial forecasting. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.16886 |
| By: | Joohyoung Jeon; Hongchul Lee |
| Abstract: | For LLM trading agents to be genuinely trustworthy, they must demonstrate understanding of market dynamics rather than exploitation of memorized ticker associations. Building responsible multi-agent systems demands rigorous signal validation: proving that predictions reflect legitimate patterns, not pre-trained recall. We address two sources of spurious performance: memorization bias from ticker-specific pre-training, and survivorship bias from flawed backtesting. Our approach is to blindfold the agents--anonymizing all identifiers--and verify whether meaningful signals persist. BlindTrade anonymizes tickers and company names, and four LLM agents output scores along with reasoning. We construct a GNN graph from reasoning embeddings and trade using PPO-DSR policy. On 2025 YTD (through 2025-08-01), we achieved Sharpe 1.40 +/- 0.22 across 20 seeds and validated signal legitimacy through negative control experiments. To assess robustness beyond a single OOS window, we additionally evaluate an extended period (2024--2025), revealing market-regime dependency: the policy excels in volatile conditions but shows reduced alpha in trending bull markets. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.17692 |
| By: | Victor Medina-Olivares; Wangzhen Xia; Stefan Lessmann; Nadja Klein |
| Abstract: | We propose a semi-structured discrete-time multi-state model to analyse mortgage delinquency transitions. This model combines an easy-to-understand structured additive predictor, which includes linear effects and smooth functions of time and covariates, with a flexible neural network component that captures complex nonlinearities and higher-order interactions. To ensure identifiability when covariates are present in both components, we orthogonalise the unstructured part relative to the structured design. For discrete-time competing transitions, we derive exact transformations that map binary logistic models to valid competing transition probabilities, avoiding the need for continuous-time approximations. In simulations, our framework effectively recovers structured baseline and covariate effects while using the neural component to detect interaction patterns. We demonstrate the method using the Freddie Mac Single-Family Loan-Level Dataset, employing an out-of-time test design. Compared with a structured generalised additive benchmark, the semi-structured model provides modest but consistent gains in discrimination across the earliest prediction spans, while maintaining similar Brier scores. Adding macroeconomic indicators provides limited incremental benefit in this out-of-time evaluation and does not materially change the estimated borrower-, loan-, or duration-driven effects. Overall, semi-structured multi-state modelling offers a practical compromise between transparent effect estimates and flexible pattern learning, with potential applications beyond credit-transition forecasting. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.26309 |
| By: | Yijia Chen |
| Abstract: | The proliferation of diverse, high-leverage trading instruments in modern financial markets presents a complex, "noisy" environment, leading to a critical question: which trading strategies are evolutionarily viable? To investigate this, we construct a large-scale agent-based model, "MAS-Utopia", comprising 10,000 agents with five distinct archetypes. This society is immersed in five years of high-frequency data under a counterfactual baseline: zero transaction friction and a robust Unconditional Basic Income (UBI) safety net. The simulation reveals a powerful evolutionary convergence. Strategies that attempt to fight the market's current - namely Mean-Reversion ("buy-the-dip") - prove structurally fragile. In contrast, the Trend-Following archetype, which adapts to the market's flow, emerges as the dominant phenotype. Translating this finding, we architect an LLM-driven system that emulates this successful logic. Our findings offer profound implications, echoing the ancient wisdom of "Be Water": for investors, it demonstrates that survival is achieved not by rigid opposition, but by disciplined alignment with the prevailing current; for markets, it critiques tools that encourage contrarian gambling; for society, it underscores the stabilizing power of economic safety nets. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.29593 |
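The two competing archetypes at the heart of this simulation reduce to a pair of opposite rules; the moving-average versions below are illustrative stand-ins, not the paper's agent implementations.

```python
def trend_signal(prices, window=20):
    """Trend-following archetype: long when price sits above its
    moving average (adapting to the market's flow), flat otherwise."""
    if len(prices) < window:
        return 0.0
    ma = sum(prices[-window:]) / window
    return 1.0 if prices[-1] > ma else 0.0

def mean_reversion_signal(prices, window=20):
    """Mean-reversion ("buy-the-dip") archetype: long when price sits
    below its moving average, betting on a snap back."""
    if len(prices) < window:
        return 0.0
    ma = sum(prices[-window:]) / window
    return 1.0 if prices[-1] < ma else 0.0

uptrend = [float(p) for p in range(1, 31)]  # persistently rising prices
```

In a persistent trend the trend follower is long while the contrarian sits out or fights the move; the structural fragility the simulation identifies follows from that asymmetry compounding over time.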
| By: | Hongyang Yang; Yanxin Zhang; Yang She; Yue Xiao; Hao Wu; Yiyang Zhang; Jiapeng Hou; Rongshan Zhang |
| Abstract: | Housing selection is a high-stakes and largely irreversible decision problem. We study housing consultation as a decision-support interface for housing selection. Existing housing platforms and many LLM-based assistants often reduce this process to ranking or recommendation, resulting in opaque reasoning, brittle multi-constraint handling, and limited guarantees on factuality. We present HabitatAgent, the first LLM-powered multi-agent architecture for end-to-end housing consultation. HabitatAgent comprises four specialized agent roles: Memory, Retrieval, Generation, and Validation. The Memory Agent maintains multi-layer user memory through internal stages for constraint extraction, memory fusion, and verification-gated updates; the Retrieval Agent performs hybrid vector--graph retrieval (GraphRAG); the Generation Agent produces evidence-referenced recommendations and explanations; and the Validation Agent applies multi-tier verification and targeted remediation. Together, these agents provide an auditable and reliable workflow for end-to-end housing consultation. We evaluate HabitatAgent on 100 real user consultation scenarios (300 multi-turn question--answer pairs) under an end-to-end correctness protocol. A strong single-stage baseline (Dense+Rerank) achieves 75% accuracy, while HabitatAgent reaches 95%. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.00556 |
| By: | Jonas Stein; Shannon Cruz; Davide Grossi; Martina Testori |
| Abstract: | A core tenet underpinning the conception of contemporary information networks, such as social media platforms, is that users should not be constrained in the amount of information they can freely and willingly exchange with one another about a given topic. By means of a computational agent-based model, we show how even in groups of truth-seeking and cooperative agents with perfect information-processing abilities, unconstrained information exchange may lead to detrimental effects on the correctness of the group's beliefs. If unconstrained information exchange can be detrimental even among such idealized agents, it is prudent to assume it can also be so in practice. We therefore argue that constraints on information flow should be carefully considered in the design of communication networks with substantial societal impact, such as social media platforms. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.01838 |
| By: | Tianzuo Hu |
| Abstract: | We propose a Neural Hidden Markov Model (HMM) with Adaptive Granularity Attention (AGA) for high-frequency order flow modeling. The model addresses the challenge of capturing multi-scale temporal dynamics in financial markets, where fine-grained microstructure signals and coarse-grained liquidity trends coexist. The proposed framework integrates parallel multi-resolution encoders, including a dilated convolutional network for tick-level patterns and a wavelet-LSTM module for low-frequency dynamics. A gating mechanism conditioned on local volatility and transaction intensity adaptively fuses multi-scale representations, while a multi-head attention layer further enhances temporal dependency modeling. Within this architecture, a Neural HMM with conditional normalizing flow emissions is employed to jointly model latent market regimes and complex observation distributions. Empirical results on high-frequency limit order book data demonstrate that the proposed model outperforms fixed-resolution baselines in predicting short-term price movements and liquidity shocks. The adaptive granularity mechanism enables the model to dynamically adjust its focus across time scales, providing improved performance particularly during volatile market conditions. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2603.20456 |
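The adaptive granularity gate this abstract describes (a fusion of fine- and coarse-scale representations conditioned on local volatility and transaction intensity) can be sketched as a scalar convex combination. The weights below are illustrative assumptions, not fitted parameters from the paper, which uses learned encoders and multi-head attention around this mechanism.

```python
import math

def gate(volatility, intensity, w=(2.0, 1.0), b=-1.0):
    """Scalar gate in (0, 1) conditioned on local volatility and
    transaction intensity (illustrative weights, not fitted ones)."""
    z = w[0] * volatility + w[1] * intensity + b
    return 1.0 / (1.0 + math.exp(-z))

def fuse(fine, coarse, volatility, intensity):
    """Convex combination of fine- (tick-level) and coarse-scale
    feature vectors: high volatility/intensity shifts weight toward
    the fine-grained representation."""
    g = gate(volatility, intensity)
    return [g * f + (1.0 - g) * c for f, c in zip(fine, coarse)]

calm = fuse([1.0, 0.0], [0.0, 1.0], volatility=0.1, intensity=0.2)
storm = fuse([1.0, 0.0], [0.0, 1.0], volatility=2.0, intensity=1.5)
```

This is the mechanism that lets the model lean on tick-level microstructure features in volatile conditions and on low-frequency liquidity features in calm ones.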
| By: | James Giesecke; Xiujian Peng |
| Abstract: | China has experienced remarkable economic growth since its reform and opening-up in the early 1980s. Although growth has moderated as the economy has matured and undergone structural adjustment, China has maintained relatively strong performance, averaging 5-6% annually between 2010 and 2024. Using an economy-wide dynamic computable general equilibrium model of the Chinese economy, CHINAGEM, and a historical/decomposition approach, this study identifies the key drivers of growth over 2012-2017 and 2017-2022. Productivity growth emerges as the dominant driver in both periods. In contrast, declining employment exerts a negative effect, which intensifies in 2017-2022 due to a sharper contraction in labour supply associated with population ageing. External demand and a rising preference for domestically produced goods also contributed positively to growth. Looking ahead, the projected decline in China's working-age population will place sustained downward pressure on labour supply. These findings underscore the central role of productivity growth in offsetting demographic headwinds. Policies that foster technological progress and innovation, alongside investment in human capital and skills, will be critical to sustaining long-term economic growth. |
| Keywords: | Economic Growth, Decomposition, Historical simulation, CGE model, China |
| JEL: | O47 O53 C68 |
| Date: | 2025–03 |
| URL: | https://d.repec.org/n?u=RePEc:cop:wpaper:g-366 |
| By: | Florence Paquette; Tania Belabbas; Emmanuel Hamel; Anne MacKay |
| Abstract: | We develop a quantum algorithm to price discretely monitored lookback options in the Black-Scholes framework using imaginary time evolution. By rewriting the pricing PDE as a Schrodinger-type equation, the problem becomes the imaginary time evolution of a quantum state under a non-Hermitian Hamiltonian. This evolution is approximated with the Variational Quantum imaginary time evolution (VarQITE) method, which replaces the exact non-unitary dynamics with a parameterized, hardware-efficient quantum circuit. A central challenge arises from jump conditions caused by the discrete updating of the running maximum. This feature is not present in standard quantum treatments of European or Asian options. To address this, we propose two quantum-compatible formulations: (i) a sequential approach that models jumps via dedicated jump Hamiltonians applied at monitoring dates, and (ii) a simultaneous multi-function evolution that removes explicit jumps at the expense of an increased number of dimensions. We compare both approaches in terms of qubit resources, circuit complexity and numerical accuracy, and benchmark them against Monte Carlo simulations. Our results show that discretely monitored, path-dependent options with jump conditions can be handled within a variational quantum framework, paving the way toward the quantum pricing of more complex derivatives with non-smooth dynamics. |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.00389 |