New Economics Papers on Econometrics |
By: | Zeqi Wu; Meilin Wang; Wei Huang; Zheng Zhang |
Abstract: | Estimation and inference of treatment effects under unconfounded treatment assignments often suffer from bias and the 'curse of dimensionality' due to the nonparametric estimation of nuisance parameters for high-dimensional confounders. Although state-of-the-art debiased methods have been proposed for binary treatments under particular treatment models, they can be unstable in small samples, and directly extending them to general treatment models can be computationally demanding. We propose a balanced neural network weighting method for general treatment models, which leverages deep neural networks to alleviate the curse of dimensionality while retaining optimal covariate balance through calibration, thereby achieving debiased and robust estimation. Our method accommodates a wide range of treatment models, including average, quantile, distributional, and asymmetric least squares treatment effects, for discrete, continuous, and mixed treatments. Under regularity conditions, we show that our estimator achieves rate double robustness and $\sqrt{N}$-asymptotic normality, and its asymptotic variance attains the semiparametric efficiency bound. We further develop a statistical inference procedure based on the weighted bootstrap, which avoids estimating the efficient influence/score functions. Simulation results reveal that the proposed method consistently outperforms existing alternatives, especially when the sample size is small. Applications to the 401(k) dataset and the Mother's Significant Features dataset further illustrate the practical value of the method for estimating both average and quantile treatment effects under binary and continuous treatments, respectively. (A simplified calibration-weighting sketch follows this entry.) |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.04044 |
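The following is not the paper's balanced neural network weighting estimator; it is a minimal sketch of the calibration idea behind covariate-balancing weights in the simplest linear case, namely entropy-balancing-style weights for the ATT under a binary treatment. The data-generating process, the use of scipy's BFGS on the dual problem, and the choice of balancing only covariate means are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical data: binary treatment D, confounders X, outcome Y with a
# constant treatment effect of 2 on the treated.
N, p = 500, 3
X = rng.normal(size=(N, p))
D = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
Y = X @ np.array([1.0, 0.5, -0.5]) + 2.0 * D + rng.normal(size=N)

X0, Y0 = X[D == 0], Y[D == 0]
target = X[D == 1].mean(axis=0)          # treated covariate means to be matched

def dual(lam):
    # Dual objective of entropy-balancing calibration: its minimizer yields
    # control-unit weights that balance the covariate means.
    return np.log(np.mean(np.exp((X0 - target) @ lam)))

lam_hat = minimize(dual, np.zeros(p), method="BFGS").x
w = np.exp((X0 - target) @ lam_hat)
w /= w.sum()                              # calibration weights on the controls

print("balance error:", np.abs(w @ X0 - target).max())
print("ATT estimate :", Y[D == 1].mean() - w @ Y0)   # should be close to 2
```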
By: | Jianhua Mei; Fu Ouyang; Thomas T. Yang |
Abstract: | We propose a novel and computationally efficient approach for nonparametric conditional density estimation in high-dimensional settings that achieves dimension reduction without imposing restrictive distributional or functional form assumptions. To uncover the underlying sparsity structure of the data, we develop an innovative conditional dependence measure and a modified cross-validation procedure that enables data-driven variable selection, thereby circumventing the need for subjective threshold selection. We demonstrate the practical utility of our dimension-reduced conditional density estimation by applying it to doubly robust estimators for average treatment effects. Notably, our proposed procedure is able to select relevant variables for nonparametric propensity score estimation and also inherently reduce the dimensionality of outcome regressions through a refined ignorability condition. We evaluate the finite-sample properties of our approach through comprehensive simulation studies and an empirical study on the effects of 401(k) eligibility on savings using SIPP data. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.22312 |
By: | Giuseppe Cavaliere; Adam McCloskey; Rasmus S. Pedersen; Anders Rahbek |
Abstract: | Limit distributions of likelihood ratio statistics are well known to be discontinuous in the presence of nuisance parameters at the boundary of the parameter space, leading to size distortions when standard critical values are used for testing. In this paper, we propose a new and simple way of constructing critical values that yields uniformly correct asymptotic size, regardless of whether nuisance parameters are at, near, or far from the boundary of the parameter space. Importantly, the proposed critical values are trivial to compute and at the same time deliver powerful tests in most settings. In comparison to existing size-correction methods, the new approach exploits the monotonicity of the two components of the limiting distribution of the likelihood ratio statistic, in conjunction with rectangular confidence sets for the nuisance parameters, to gain computational tractability. Uniform validity is established for likelihood ratio tests based on the new critical values, and we illustrate their construction in two key examples: (i) testing a coefficient of interest in the classical linear regression model with non-negativity constraints on control coefficients, and (ii) testing for the presence of exogenous variables in autoregressive conditional heteroskedasticity (ARCH) models with exogenous regressors. Simulations confirm that the tests have desirable size and power properties. A brief empirical illustration demonstrates the usefulness of the proposed test for detecting spillovers and ARCH effects. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.19603 |
By: | Otilia Boldea; Alastair R. Hall |
Abstract: | We review recent developments in detecting and estimating multiple change-points in time series models with exogenous and endogenous regressors, panel data models, and factor models. This review differs from others in multiple ways: (1) it focuses on inference about the change-points in slope parameters, rather than in the mean of the dependent variable - the latter being common in the statistical literature; (2) it focuses on detecting - via sequential testing and other methods - multiple change-points, and only discusses one change-point when methods for multiple change-points are not available; (3) it is meant as a practitioner's guide for empirical macroeconomists first, and as a result, it focuses only on the methods derived under the most general assumptions relevant to macroeconomic applications. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.22204 |
By: | Sung Hoon Choi; Donggyu Kim |
Abstract: | Based on Itô semimartingale models, several studies have proposed methods for forecasting intraday volatility using high-frequency financial data. These approaches typically rely on restrictive parametric assumptions and are often vulnerable to model misspecification. To address this issue, we introduce a novel nonparametric method for predicting the future intraday instantaneous volatility process during trading hours, which leverages both previous days' data and the current day's observed intraday data. Our approach imposes an interday-by-intraday matrix representation of the instantaneous volatility, which is decomposed into a low-rank conditional expectation component and a noise matrix. To predict the future conditional expected volatility vector, we exploit this low-rank structure and propose the Structural Intraday-volatility Prediction (SIP) procedure. We establish the asymptotic properties of the SIP estimator and demonstrate its effectiveness through an out-of-sample prediction study using real high-frequency trading data. (A toy low-rank forecasting sketch follows this entry.) |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.22173 |
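A toy illustration of the low-rank idea only, not the authors' SIP procedure: stack daily intraday volatility curves into a days-by-bins matrix, extract the dominant factor by truncated SVD, and forecast the next day's curve by extrapolating the factor score with an AR(1). The simulated rank-one structure, the dimensions, and the AR(1) forecasting rule are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated interday-by-intraday volatility matrix: D days x M intraday bins,
# built as a rank-one signal (persistent daily level times a U-shaped intraday
# pattern) plus noise.
D, M, r = 250, 78, 1
u = np.linspace(-1.0, 1.0, M)
intraday_shape = 1.0 + 0.8 * u**2                 # U-shaped intraday pattern
level = np.empty(D)
level[0] = 1.0
for t in range(1, D):                              # AR(1) daily volatility level
    level[t] = 0.2 + 0.8 * level[t - 1] + 0.05 * rng.normal()
V = np.outer(level, intraday_shape) + 0.05 * rng.normal(size=(D, M))

# Estimate the low-rank conditional-expectation component by truncated SVD.
U, s, Vt = np.linalg.svd(V, full_matrices=False)
scores = U[:, :r] * s[:r]                          # daily factor scores
loadings = Vt[:r, :]                               # intraday factor curves

# Forecast tomorrow's factor score with a simple AR(1) fit, then map it back
# to an intraday volatility curve through the estimated loadings.
y, x = scores[1:, 0], scores[:-1, 0]
phi = np.dot(x - x.mean(), y - y.mean()) / np.dot(x - x.mean(), x - x.mean())
c = y.mean() - phi * x.mean()
vol_forecast = (c + phi * scores[-1, 0]) * loadings[0]
print(vol_forecast[:5])                            # predicted start-of-day volatility
```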
By: | Justin Whitehouse; Morgane Austern; Vasilis Syrgkanis |
Abstract: | Constructing confidence intervals for the value of an optimal treatment policy is an important problem in causal inference. Insight into the optimal policy value can guide the development of reward-maximizing, individualized treatment regimes. However, because the functional that defines the optimal value is non-differentiable, standard semiparametric approaches to inference are not directly applicable. Existing approaches for handling this non-differentiability fall roughly into two camps. In one camp are estimators based on constructing smooth approximations of the optimal value. These approaches are computationally lightweight, but typically place unrealistic parametric assumptions on outcome regressions. In the other camp are approaches that directly debias the non-smooth objective. These approaches do not place parametric assumptions on nuisance functions, but they either require the computation of intractably many nuisance estimates, assume unrealistic $L^\infty$ nuisance convergence rates, or make strong margin assumptions that rule out non-response to treatment. In this paper, we revisit the problem of constructing smooth approximations of non-differentiable functionals. By carefully controlling first-order bias and second-order remainders, we show that a softmax smoothing-based estimator can be used to estimate parameters that are specified as a maximum of scores involving nuisance components. In particular, this includes the value of the optimal treatment policy as a special case. Our estimator attains $\sqrt{n}$ convergence rates, avoids parametric restrictions and unrealistic margin assumptions, and is often statistically efficient. (A minimal softmax-smoothing sketch follows this entry.) |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.11780 |
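A minimal sketch of softmax smoothing of a maximum of scores; the softmax-weighted average below is one common smoothing and omits the paper's first-order bias corrections and nuisance estimation. The two-arm conditional-mean scores are hypothetical.

```python
import numpy as np

def softmax_smooth_max(scores, beta):
    """Smooth approximation of max_a scores[..., a]: the softmax-weighted average
    sum_a softmax(beta * s)_a * s_a. It approaches the hard maximum as beta
    grows, while staying differentiable in the scores for finite beta."""
    z = beta * scores
    z = z - z.max(axis=-1, keepdims=True)       # numerical stability
    w = np.exp(z)
    w = w / w.sum(axis=-1, keepdims=True)
    return (w * scores).sum(axis=-1)

# Hypothetical conditional-mean scores mu_a(X) for two candidate treatments;
# a smoothed plug-in for the optimal policy value averages the smoothed max.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
mu = np.column_stack([0.5 * X[:, 0], 0.3 * X[:, 1] + 0.1])

for beta in (1.0, 10.0, 100.0):
    print(beta, softmax_smooth_max(mu, beta).mean())
print("hard max:", np.max(mu, axis=1).mean())
```

As beta grows, the smoothed value approaches the hard maximum while remaining differentiable, which is what makes gradient-based and plug-in arguments tractable.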
By: | Joel L. Horowitz; Sokbae Lee |
Abstract: | This paper presents a computationally efficient method for binary classification using Manski's (1975, 1985) maximum score model when covariates are discretely distributed and parameters are partially but not point identified. We establish conditions under which it is minimax optimal to allow for either non-classification or random classification and derive finite-sample and asymptotic lower bounds on the probability of correct classification. We also describe an extension of our method to continuous covariates. Our approach avoids the computational difficulty of maximum score estimation by reformulating the problem as two linear programs. Compared to parametric and nonparametric methods, our method balances extrapolation ability with minimal distributional assumptions. Monte Carlo simulations and empirical applications demonstrate its effectiveness and practical relevance. |
Date: | 2025–08–05 |
URL: | https://d.repec.org/n?u=RePEc:azt:cemmap:16/25 |
By: | Goel, Deepti |
Abstract: | The most widely used textbooks on Introductory Econometrics conflate three distinct population parameters: the population regression function (PRF), the conditional expectation function (CEF), and the causal effect. They also incorrectly suggest, and sometimes state, that the Conditional Mean Zero assumption implies causal interpretation of regression coefficients. I highlight these issues and show that by incorporating new notation these limitations can easily be overcome. |
Keywords: | Regressions, Least Squares, Conditional Mean Zero, Causal Inference |
JEL: | A22 C18 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:zbw:glodps:1646 |
By: | Andrei Voronin |
Abstract: | Many causal and structural parameters in economics can be identified and estimated by computing the value of an optimization program over all distributions consistent with the model and the data. Existing tools apply when the data is discrete, or when only disjoint marginals of the distribution are identified, which is restrictive in many applications. We develop a general framework that yields sharp bounds on a linear functional of the unknown true distribution under i) an arbitrary collection of identified joint subdistributions and ii) structural conditions, such as (conditional) independence. We encode the identification restrictions as a continuous collection of moments of characteristic kernels, and use duality and approximation theory to rewrite the infinite-dimensional program over Borel measures as a finite-dimensional program that is simple to compute. Our approach yields a consistent estimator that is $\sqrt{n}$-uniformly valid for the sharp bounds. In the special case of empirical optimal transport with Lipschitz cost, where the minimax rate is $n^{2/d}$, our method yields a uniformly consistent estimator with an asymmetric rate, converging at $\sqrt{n}$ uniformly from one side. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.22422 |
By: | Bingqi Liu; Kangqiang Li; Tianxiao Pang |
Abstract: | Bayesian quantile regression based on the asymmetric Laplace distribution (ALD) likelihood suffers from two fundamental limitations: the non-differentiability of the check loss precludes gradient-based Markov chain Monte Carlo (MCMC) methods, and the posterior mean provides biased quantile estimates. We propose Bayesian smoothed quantile regression (BSQR), which replaces the check loss with a kernel-smoothed version, creating a continuously differentiable likelihood. This smoothing has two crucial consequences: it enables efficient Hamiltonian Monte Carlo sampling, and it yields a consistent posterior distribution, thereby resolving the inferential bias of the standard approach. We further establish conditions for posterior propriety under various priors (including improper and hierarchical priors) and characterize how the kernel choice affects posterior concentration and computational efficiency. Extensive simulations validate our theoretical findings, demonstrating that BSQR achieves up to a 50% reduction in predictive check loss at extreme quantiles compared to ALD-based methods, while improving MCMC efficiency by 20–40% in effective sample size. An empirical application to financial risk measurement during the COVID-19 era illustrates BSQR's practical advantages in capturing dynamic systemic risk. The BSQR framework provides a theoretically grounded and computationally efficient solution to longstanding challenges in Bayesian quantile regression, with compact-support kernels such as the uniform and triangular emerging as particularly effective choices. (A sketch of the kernel-smoothed check loss follows this entry.) |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.01738 |
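A small frequentist sketch of the kernel-smoothed check loss with a uniform kernel, for which the convolution has a simple closed form, used here in a plain M-estimation fit rather than the paper's Bayesian HMC sampler. The data-generating process, quantile level, and bandwidth are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_check_loss(u, tau, h):
    """Check loss convolved with a uniform kernel on [-h, h] (closed form):
    equal to the usual check loss for |u| >= h and quadratic for |u| < h,
    hence continuously differentiable everywhere."""
    return np.where(u >= h, tau * u,
           np.where(u <= -h, (tau - 1.0) * u,
                    u**2 / (4.0 * h) + (2.0 * tau - 1.0) * u / 2.0 + h / 4.0))

# Illustrative data with a known conditional quantile:
# Q_tau(y | x) = (1 + 0.5 z_tau) + (2 + 0.5 z_tau) x, with z_0.9 ~ 1.28.
rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * rng.normal(size=n)

tau, h = 0.9, 0.1
Xmat = np.column_stack([np.ones(n), x])
fit = minimize(lambda b: smoothed_check_loss(y - Xmat @ b, tau, h).mean(),
               np.zeros(2), method="BFGS")
print(fit.x)        # roughly (1.64, 2.64) for tau = 0.9
```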
By: | Melody Huang; Cory McCartan |
Abstract: | To conduct causal inference in observational settings, researchers must rely on certain identifying assumptions. In practice, these assumptions are unlikely to hold exactly. This paper considers the bias of selection-on-observables, instrumental variables, and proximal inference estimates under violations of their identifying assumptions. We develop bias expressions for IV and proximal inference that show how violations of their respective assumptions are amplified by any unmeasured confounding in the outcome variable. We propose a set of sensitivity tools that quantify the sensitivity of different identification strategies, along with an augmented bias contour plot that visualizes the relationship between these strategies. We argue that the act of choosing an identification strategy implicitly expresses a belief about the degree of violations that must be present in alternative identification strategies. Even when researchers intend to conduct an IV or proximal analysis, a sensitivity analysis comparing different identification strategies can help to better understand the implications of each set of assumptions. Throughout, we compare the different approaches in a re-analysis of the impact of state surveillance on the incidence of protest in Communist Poland. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.23743 |
By: | Chakattrai Sookkongwaree; Tattep Lakmuang; Chainarong Amornbunchornvej |
Abstract: | Understanding causal relationships in time series is fundamental to many domains, including neuroscience, economics, and behavioral science. Granger causality is one of the best-known techniques for inferring causality in time series. Typically, Granger causality frameworks impose a strong fixed-lag assumption between cause and effect, which is often unrealistic in complex systems. While recent work on variable-lag Granger causality (VLGC) addresses this limitation by allowing a cause to influence an effect with different time lags at each time point, it does not account for the fact that causal interactions may vary not only in time delay but also across frequency bands. For example, in brain signals, alpha-band activity may influence another region with a shorter delay than slower delta-band oscillations. In this work, we formalize Multi-Band Variable-Lag Granger Causality (MB-VLGC) and propose a novel framework that generalizes traditional VLGC by explicitly modeling frequency-dependent causal delays. We provide a formal definition of MB-VLGC, demonstrate its theoretical soundness, and propose an efficient inference pipeline. Extensive experiments across multiple domains demonstrate that our framework significantly outperforms existing methods on both synthetic and real-world datasets, confirming its broad applicability to time series data in general. Code and datasets are publicly available. (A simplified band-by-band sketch follows this entry.) |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.00658 |
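A simplified band-by-band check, not the authors' MB-VLGC pipeline: both series are band-pass filtered and a fixed-lag Granger-type F-test is run within each band, whereas MB-VLGC additionally allows the lag to vary over time. The sampling rate, band edges, lag order, and simulated coupling are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(0)

# Simulated signals at 100 Hz: y is driven by the 8-12 Hz ("alpha"-like)
# component of x with a two-sample delay, plus noise.
fs, n, lag = 100, 3000, 2

def bandpass(sig, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

x = rng.normal(size=n)
x_alpha = bandpass(x, 8, 12)
y = np.concatenate([np.zeros(lag), x_alpha[:-lag]]) + 0.5 * rng.normal(size=n)

def granger_f(cause, effect, p=5):
    """F statistic for 'cause Granger-causes effect' with p lags of each series."""
    T = len(effect)
    Y = effect[p:]
    lags_e = np.column_stack([effect[p - k:T - k] for k in range(1, p + 1)])
    lags_c = np.column_stack([cause[p - k:T - k] for k in range(1, p + 1)])
    X_r = np.column_stack([np.ones(T - p), lags_e])     # restricted model
    X_u = np.column_stack([X_r, lags_c])                # unrestricted model
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    return ((rss(X_r) - rss(X_u)) / p) / (rss(X_u) / (T - p - X_u.shape[1]))

# Filter both series into a band, then test for lagged influence within the band;
# the F statistic should be far larger in the alpha band than in the delta band.
for name, (lo, hi) in {"delta (1-4 Hz)": (1, 4), "alpha (8-12 Hz)": (8, 12)}.items():
    print(name, round(granger_f(bandpass(x, lo, hi), bandpass(y, lo, hi)), 1))
```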
By: | Chengwang Liao; Zhentao Shi; Yapeng Zheng |
Abstract: | The synthetic control method (SCM) is widely used to construct the counterfactual of a treated unit from data on control units in a donor pool. Allowing the donor pool to contain more control units than time periods, we propose a novel machine learning algorithm, named SCM-relaxation, for counterfactual prediction. Our relaxation approach minimizes an information-theoretic measure of the weights subject to a set of relaxed linear inequality constraints in addition to the simplex constraint. When the donor pool exhibits a group structure, SCM-relaxation assigns approximately equal weights within each group, thereby diversifying the prediction risk. Asymptotically, the proposed estimator achieves oracle performance in terms of out-of-sample prediction accuracy. We demonstrate the method through Monte Carlo simulations and an empirical application that assesses the economic impact of Brexit on the United Kingdom's real GDP. (A rough sketch of the relaxed weighting problem follows this entry.) |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.01793 |
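A rough sketch of the relaxation idea, not the paper's estimator or its tuning theory: donor weights minimize the KL divergence from uniform subject to the simplex constraint and relaxed pre-treatment fit constraints |Y0 w - y1| <= delta. The simulated donor pool, the tolerance delta, and the use of scipy's SLSQP solver are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Made-up donor pool: J = 40 control units over T0 = 20 pre-treatment periods
# (more donors than periods), with only the first five donors truly relevant.
T0, J = 20, 40
Y0 = np.linspace(0.0, 1.0, T0)[:, None] + rng.normal(size=(T0, J))
w_true = np.zeros(J)
w_true[:5] = 0.2
y1 = Y0 @ w_true + 0.05 * rng.normal(size=T0)

delta = 0.3   # relaxation of the exact pre-treatment fit, |Y0 w - y1| <= delta

def kl_from_uniform(w, eps=1e-12):
    # Information-theoretic criterion: KL divergence of w from equal weights.
    return np.sum(w * np.log(np.maximum(w, eps) * J))

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},            # simplex
    {"type": "ineq", "fun": lambda w: y1 + delta - Y0 @ w},    # relaxed fit (upper)
    {"type": "ineq", "fun": lambda w: Y0 @ w - y1 + delta},    # relaxed fit (lower)
]
res = minimize(kl_from_uniform, np.full(J, 1.0 / J), method="SLSQP",
               bounds=[(0.0, 1.0)] * J, constraints=constraints)
w_hat = res.x
print("weight on relevant donors   :", round(w_hat[:5].sum(), 2))
print("max pre-treatment fit error :", round(np.abs(Y0 @ w_hat - y1).max(), 2))
```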
By: | Sebastian Calónico; Sebastian Galiani |
Abstract: | Empirical research in the social and medical sciences frequently involves testing multiple hypotheses simultaneously, increasing the risk of false positives due to chance. Classical multiple testing procedures, such as the Bonferroni correction, control the family-wise error rate (FWER) but tend to be overly conservative, reducing statistical power. Stepwise alternatives like the Holm and Hochberg procedures offer improved power while maintaining error control under certain dependence structures. However, these standard approaches typically ignore hierarchical relationships among hypotheses—structures that are common in settings such as clinical trials and program evaluations, where outcomes are often logically or causally linked. Hierarchical multiple testing procedures—including fixed-sequence, fallback, and gatekeeping methods—explicitly incorporate these relationships, providing more powerful and interpretable frameworks for inference. This paper reviews key hierarchical methods, compares their statistical properties and practical trade-offs, and discusses implications for applied empirical research. (Minimal implementations of the Holm and fixed-sequence procedures follow this entry.) |
JEL: | C1 C11 C13 C15 C18 |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34050 |
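For reference, minimal implementations of two procedures discussed in the review: the Holm step-down correction and a fixed-sequence (hierarchical) procedure in which hypotheses are tested in a pre-specified order at full level alpha. The p-values are made up.

```python
import numpy as np

def holm(pvals, alpha=0.05):
    """Holm step-down procedure: controls the FWER under arbitrary dependence."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(np.argsort(p)):
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break                      # stop at the first non-rejection
    return reject

def fixed_sequence(pvals, alpha=0.05):
    """Fixed-sequence testing: hypotheses are pre-ordered, each is tested at
    full level alpha, and testing stops at the first failure."""
    reject = []
    for p in pvals:
        reject.append(p <= alpha)
        if p > alpha:
            break
    return reject + [False] * (len(pvals) - len(reject))

# Example: primary endpoint first, then secondary endpoints in a pre-specified order.
pvals = [0.004, 0.030, 0.020, 0.200]
print(holm(pvals))            # rejects only the first hypothesis
print(fixed_sequence(pvals))  # rejects the first three hypotheses
```

With these p-values the Holm procedure rejects only the first hypothesis, whereas the fixed-sequence procedure rejects the first three, illustrating the power gain from exploiting a credible hierarchy.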
By: | Zequn Jin; Gaoqian Xu; Xi Zheng; Yahong Zhou |
Abstract: | This paper develops a robust and efficient method for policy learning from observational data in the presence of unobserved confounding, complementing existing instrumental variable (IV) based approaches. We employ the marginal sensitivity model (MSM) to relax the commonly used yet restrictive unconfoundedness assumption by introducing a sensitivity parameter that captures the extent of selection bias induced by unobserved confounders. Building on this framework, we consider two distributionally robust welfare criteria, defined as the worst-case welfare and policy improvement functions, evaluated over an uncertainty set of counterfactual distributions characterized by the MSM. Closed-form expressions for both welfare criteria are derived. Leveraging these identification results, we construct doubly robust scores and estimate the robust policies by maximizing the proposed criteria. Our approach accommodates flexible machine learning methods for estimating nuisance components, even when these converge at moderately slow rates. We establish asymptotic regret bounds for the resulting policies, providing a robust guarantee against the most adversarial confounding scenario. The proposed method is evaluated through extensive simulation studies and empirical applications to the JTPA study and the Head Start program. (A crude worst-case weighting sketch follows this entry.) |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.20550 |
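A crude sketch of a worst-case mean under bounded weights, not the paper's welfare criteria or doubly robust scores: the true inverse-propensity weights are assumed to lie within a factor Lambda of the nominal ones (a simplified box, not the exact MSM parametrization), and the extremes of the weighted mean over that box are found by a sort-and-scan, since the optimum sets each weight to a bound according to a threshold in the outcome.

```python
import numpy as np

def worst_case_mean_range(Y, w_nominal, Lambda):
    """Extremes of the weighted mean sum(w*Y)/sum(w) when each weight may lie
    anywhere in [w_nominal/Lambda, w_nominal*Lambda]."""
    def extreme(maximize):
        order = np.argsort(-Y) if maximize else np.argsort(Y)
        y = Y[order]
        a = w_nominal[order] / Lambda      # lower weight bounds
        b = w_nominal[order] * Lambda      # upper weight bounds
        vals = []
        for k in range(len(y) + 1):
            w = np.concatenate([b[:k], a[k:]])   # upper bound on the first k units
            vals.append(np.dot(w, y) / w.sum())
        return max(vals) if maximize else min(vals)
    return extreme(False), extreme(True)

# Hypothetical observational data: logistic propensity score, treated arm only.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-X))
D = rng.binomial(1, p)
Y = X + 1.0 * D + rng.normal(size=n)

Y1, w1 = Y[D == 1], 1.0 / p[D == 1]               # outcomes and nominal IPW weights
print(worst_case_mean_range(Y1, w1, Lambda=1.0))  # point estimate of E[Y(1)]
print(worst_case_mean_range(Y1, w1, Lambda=1.5))  # interval under confounding
```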
By: | James A. Duffy; Xiyu Jiao |
Abstract: | We consider the problem of performing inference on the number of common stochastic trends when data is generated by a cointegrated CKSVAR (a two-regime, piecewise-linear SVAR; Mavroeidis, 2021), using a modified version of the Breitung (2002) multivariate variance ratio test that is robust to the presence of nonlinear cointegration (of a known form). To derive the asymptotics of our test statistic, we prove a fundamental LLN-type result for a class of stable but nonstationary autoregressive processes, using a novel dual linear process approximation. We show that our modified test yields correct inferences regarding the number of common trends in such a system, whereas the unmodified test tends to infer a higher number of common trends than are actually present, when cointegrating relations are nonlinear. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.22869 |
By: | Martín Almuzara; Manuel Arellano; Richard Blundell; Stéphane Bonhomme |
Abstract: | We propose a nonlinear framework to study the dynamic transmission of aggregate and idiosyncratic shocks to household income that exploits both macro and micro data. Our approach allows us to examine empirically the following questions: (a) How do business-cycle fluctuations modulate the persistence of heterogeneous individual histories and the risk faced by households? (b) How do aggregate and idiosyncratic shocks propagate over time for households in different macro and micro states? (c) How do these shocks shape the cost of business-cycle risk? We develop new identification and estimation techniques, and provide a detailed empirical analysis combining macro time series for the U.S. and a time series of household panels from the PSID. |
Date: | 2025–08–07 |
URL: | https://d.repec.org/n?u=RePEc:azt:cemmap:17/25 |
By: | Raanan Sulitzeanu-Kenan; Micha Mandel; Yosef Rinott |
Abstract: | A central challenge in any study of the effects of beliefs on outcomes, such as decisions and behavior, is the risk of omitted variables bias. Omitted variables, frequently unmeasured or even unknown, can induce correlations between beliefs and decisions that are not genuinely causal, in which case the omitted variables act as confounders. To address the challenge of causal inference, researchers frequently rely on information provision experiments to randomly manipulate beliefs. The information supplied in these experiments can serve as an instrumental variable (IV), enabling causal inference, so long as it influences decisions exclusively through its impact on beliefs. However, providing varying information to participants to shape their beliefs can raise both methodological and ethical concerns. Methodological concerns arise from potential violations of the exclusion restriction assumption; such violations may stem from information source effects, in which attitudes toward the source affect the outcome decision directly, thereby introducing a confounder. An ethical concern arises from manipulating the provided information, as it may involve deceiving participants. This paper proposes and empirically demonstrates a new method for treating beliefs and estimating their effects, the Anchoring-Based Causal Design (ABCD), which avoids deception and source influences. ABCD combines the cognitive mechanism known as anchoring with IV estimation. Instead of providing substantive information, the method employs a deliberately non-informative procedure in which participants compare their self-assessment of a concept to a randomly assigned anchor value. We present the method and the results of eight experiments demonstrating its application, strengths, and limitations. We conclude by discussing the potential of this design for advancing experimental social science. (A toy IV sketch of the anchoring idea follows this entry.) |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.01677 |
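A toy two-stage least squares illustration of the anchoring-as-instrument idea: a randomly assigned anchor shifts the reported belief, an unobserved confounder biases OLS, and the just-identified IV estimator recovers the causal coefficient. All coefficients and the data-generating process are hypothetical and far simpler than the designs in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical anchoring experiment: a randomly assigned anchor Z nudges the
# reported belief B; an unobserved confounder U shifts both the belief and the
# decision Y; the true causal effect of B on Y is 0.5.
n = 5000
Z = rng.choice([10.0, 90.0], size=n)                 # random anchor value
U = rng.normal(size=n)
B = 40.0 + 0.2 * Z + 2.0 * U + rng.normal(size=n)    # belief, moved by the anchor
Y = 0.5 * B + 3.0 * U + rng.normal(size=n)           # decision / outcome

X = np.column_stack([np.ones(n), B])                 # regressors (constant, belief)
W = np.column_stack([np.ones(n), Z])                 # instruments (constant, anchor)

beta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
beta_iv = np.linalg.solve(W.T @ X, W.T @ Y)          # just-identified IV / 2SLS
print("OLS slope:", round(beta_ols[1], 3), "(biased upward by U)")
print("IV  slope:", round(beta_iv[1], 3), "(close to the true 0.5)")
```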
By: | Silva Lopes, Artur |
Abstract: | This book provides a comprehensive and systematic review of most of the literature on the univariate analysis of trends in economic time series. It also provides original insights and criticisms on some of the topics that are addressed. Its chapter structure is as follows. 1 Introduction (preliminary issues). 2 Historical perspective. 3 Modeling the trend. 4 Decomposition methods. 5 Testing for the presence of a trend. Annex: A brief introduction to filters. |
Keywords: | trend, long-run, low-frequency, linear trend, nonlinear trend, decomposition of time series, filtering, detrending, business cycles |
JEL: | B23 C22 C51 C52 E32 O47 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:zbw:esprep:323383 |
By: | Austin Feng; Francesco Ruggieri |
Abstract: | We propose a structural approach to extrapolate average partial effects away from the cutoff in regression discontinuity designs (RDDs). Our focus is on applications that exploit closely contested school district referenda to estimate the effects of changes in education spending on local economic outcomes. We embed these outcomes in a spatial equilibrium model of local jurisdictions in which fiscal policy is determined by majority rule voting. This integration provides a microfoundation for the running variable, the share of voters who approve a ballot initiative, and enables identification of structural parameters using RDD coefficients. We then leverage the model to simulate the effects of counterfactual referenda over a broad range of proposed spending changes. These scenarios imply realizations of the running variable away from the threshold, allowing extrapolation of RDD estimates to nonmarginal referenda. Applying the method to school expenditure ballot measures in Wisconsin, we document substantial heterogeneity in housing price capitalization across the approval margin. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.02658 |
By: | Pablo Quintana (UNCuyo); Marcos Herrera-Gómez (CIANECO/CONICET/Universidad Nacional de Río Cuarto) |
Abstract: | Identifying regions that are both spatially contiguous and internally homogeneous remains a core challenge in spatial analysis and regional economics, especially with the increasing complexity of modern datasets. These challenges are particularly pronounced when working with socioeconomic data that evolve over time. This paper presents a novel methodology for spatio-temporal regionalization—Spatial Deep Embedded Clustering (SDEC)—which integrates deep learning with spatially constrained clustering to effectively process time series data. The approach uses autoencoders to capture hidden temporal patterns and reduce dimensionality before clustering, ensuring that both spatial contiguity and temporal coherence are maintained. Through Monte Carlo simulations, we show that SDEC significantly outperforms traditional methods in capturing complex temporal patterns while preserving spatial structure. Using empirical examples, we demonstrate that the proposed framework provides a robust, scalable, and data-driven tool for researchers and policymakers working in public health, urban planning, and regional economic analysis. (A simplified spatially constrained clustering sketch follows this entry.) |
Keywords: | Spatial clustering, Spatial Data Science, Spatio-temporal Classification, Territorial analysis. |
JEL: | C23 C45 C63 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:aoz:wpaper:368 |
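A simplified stand-in for the SDEC pipeline, not the authors' method: PCA replaces the autoencoder for temporal dimensionality reduction, and scikit-learn's connectivity-constrained agglomerative clustering enforces spatial contiguity. The simulated regions, seasonal patterns, neighbourhood graph, and number of clusters are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Simulated spatio-temporal data: 200 regions with random coordinates, each with
# a 48-period series; regions west of x = 5 follow one seasonal pattern, the rest
# another.
n, T = 200, 48
coords = rng.uniform(0.0, 10.0, size=(n, 2))
t = np.arange(T)
pattern = np.where(coords[:, :1] < 5.0,
                   np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12))
series = pattern + 0.3 * rng.normal(size=(n, T))

# Step 1: compress each region's series to a low-dimensional embedding
# (the paper uses an autoencoder; PCA is a simple linear stand-in).
embedding = PCA(n_components=4).fit_transform(series)

# Step 2: cluster the embeddings under a spatial contiguity constraint, so that
# regions can only merge with their spatial neighbours.
connectivity = kneighbors_graph(coords, n_neighbors=8, include_self=False)
labels = AgglomerativeClustering(n_clusters=2, connectivity=connectivity,
                                 linkage="ward").fit_predict(embedding)
print(np.bincount(labels))   # cluster sizes; labels should track the east-west split
```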
By: | Lee, Yongseok (University of Florida); Leite, Walter |
Abstract: | Researchers using propensity score analysis (PSA) to estimate treatment effects from secondary data may have to handle data that are missing not at random (MNAR). Existing methods for PSA with MNAR data use logistic regression to model the missing-data mechanisms, thus requiring manual specification of functional forms, and are difficult to implement with a large number of covariates. To overcome these limitations, this study proposes alternatives to existing methods by replacing logistic regression with a random forest. It also introduces the Dual-Forest Proximity imputation method, which leverages two types of random-forest proximity matrices and incorporates missing-pattern information in each matrix. Results from a Monte Carlo simulation show that Dual-Forest Proximity imputation reduces bias more effectively than existing and alternative methods across various types of MNAR mechanisms. A case study is also provided using data from the National Longitudinal Survey of Youth 1979 (NLSY79). (A single-forest proximity-imputation sketch follows this entry.) |
Date: | 2025–07–07 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:ex8ad_v1 |
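A single-forest proximity-imputation sketch, not the proposed Dual-Forest Proximity method (which combines two proximity matrices and missing-pattern information): a random forest is fit on complete cases, proximities are computed from shared terminal nodes, and each missing value is imputed as a proximity-weighted average of observed values. The MNAR mechanism and forest settings are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy MNAR-style setup: covariate x2 is more likely to be missing when its own
# (unobserved) value is large.
n = 1000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)
missing = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-x2))

# Fit a forest predicting x2 from the observed covariate using complete cases.
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
rf.fit(x1[~missing].reshape(-1, 1), x2[~missing])

# Proximity of unit i to unit j = share of trees in which they land in the same
# terminal node; impute each missing x2 as a proximity-weighted average of the
# observed x2 values.
leaves_all = rf.apply(x1.reshape(-1, 1))    # (n, n_trees) terminal-node ids
leaves_obs = leaves_all[~missing]
x2_imputed = np.where(missing, np.nan, x2)
for i in np.where(missing)[0]:
    prox = (leaves_all[i] == leaves_obs).mean(axis=1)
    x2_imputed[i] = np.average(x2[~missing], weights=prox + 1e-12)

print("imputation RMSE:", np.sqrt(np.mean((x2_imputed[missing] - x2[missing]) ** 2)))
```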
By: | Arne Henningsen (Department of Food and Resource Economics, University of Copenhagen, Denmark); Guy Low (Business Economics Group, Wageningen University & Research, The Netherlands); David Wuepper (Institute for Food and Resource Economics, University of Bonn, Germany); Tobias Dalhaus (Business Economics Group, Wageningen University & Research, The Netherlands); Hugo Storm (Institute for Food and Resource Economics, University of Bonn, Germany); Dagim Belay (Department of Food and Resource Economics, University of Copenhagen, Denmark); Stefan Hirsch (Department of Management in Agribusiness, University of Hohenheim, Germany) |
Abstract: | Most research questions in agricultural and applied economics are of a causal nature, i.e., how one or more variables (e.g., policies, prices, the weather) affect one or more other variables (e.g., income, crop yields, pollution). Only some of these research questions can be studied experimentally; most empirical studies in agricultural and applied economics thus rely on observational data. However, estimating causal effects with observational data requires appropriate research designs and a transparent discussion of all identifying assumptions, together with empirical evidence to assess the probability that they hold. This paper provides an overview of approaches that are frequently used in agricultural and applied economics to estimate causal effects with observational data. It then offers advice and guidelines for agricultural and applied economists who intend to estimate causal effects with observational data, e.g., on how to assess and discuss the chosen identification strategies in their publications. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.02310 |