New Economics Papers on Econometrics |
By: | Tingting Cheng; Jiachen Cong; Fei Liu; Xuanbin Yang |
Abstract: | In this paper, we propose a novel factor-augmented forecasting regression model with a binary response variable. We develop a maximum likelihood estimation method for the regression parameters and establish the asymptotic properties of the resulting estimators. Monte Carlo simulation results show that the proposed estimation method performs very well in finite samples. Finally, we demonstrate the usefulness of the proposed model through an application to U.S. recession forecasting. The proposed model consistently outperforms conventional Probit regression across both in-sample and out-of-sample exercises, by effectively utilizing high-dimensional information through latent factors. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.16462 |
By: | Giuseppe Cavaliere; Luca Fanelli; Iliyan Georgiev |
Abstract: | Violation of the assumptions underlying classical (Gaussian) limit theory frequently leads to unreliable statistical inference. This paper shows the novel result that the bootstrap can detect such violations by means of simple and powerful tests which (a) induce no pre-testing bias, (b) can be performed using the same critical values in a broad range of applications, and (c) are consistent against deviations from asymptotic normality. By focusing on the discrepancy between the conditional distribution of a bootstrap statistic and the (limiting) Gaussian distribution which obtains under valid specification, we show how to assess whether this discrepancy is large enough to indicate specification invalidity. The method, which is computationally straightforward, only requires measuring the discrepancy between the bootstrap and the Gaussian distributions based on a sample of i.i.d. draws of the bootstrap statistic. We derive sufficient conditions for the randomness in the data to mix with the randomness in the bootstrap repetitions in a way such that (a), (b) and (c) above hold. To demonstrate the practical relevance and broad applicability of our diagnostic procedure, we discuss five scenarios where the asymptotic Gaussian approximation may fail: (i) weak instruments in instrumental variable regression; (ii) non-stationarity in autoregressive time series; (iii) parameters near or at the boundary of the parameter space; (iv) infinite variance innovations in a location model for i.i.d. data; (v) invalidity of the delta method due to (near-)rank deficiency in the implied Jacobian matrix. An illustration drawn from the empirical macroeconomic literature concludes. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.01351 |
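The core diagnostic idea, measuring how far a sample of i.i.d. bootstrap draws of a statistic sits from its standard normal limit, can be sketched as follows. This simplified illustration uses a studentized mean and a Kolmogorov-Smirnov distance; the paper's actual tests and critical values differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, B = 200, 999

x = rng.normal(size=n)                       # well-behaved data

# B i.i.d. bootstrap draws of the centered, studentized mean
boot = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    boot[b] = np.sqrt(n) * (xb.mean() - x.mean()) / xb.std(ddof=1)

# Discrepancy between the bootstrap draws and the N(0, 1) limit:
# small when Gaussian asymptotics are valid, large when they fail
ks = stats.kstest(boot, "norm")
```

Under a failure such as infinite-variance innovations, the same bootstrap sample would concentrate far from N(0, 1) and the distance would be large, which is the signal the paper's tests exploit.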
By: | Enes Dilber; Colin Gray |
Abstract: | When using observational causal models, practitioners often want to disentangle the effects of many related, partially-overlapping treatments. Examples include estimating treatment effects of different marketing touchpoints, ordering different types of products, or signing up for different services. Common approaches that estimate separate treatment coefficients are too noisy for practical decision-making. We propose a computationally light model that uses a customized ridge regression to move between a heterogeneous and a homogeneous model: it substantially reduces MSE for the effects of each individual sub-treatment while allowing us to easily reconstruct the effects of an aggregated treatment. We demonstrate the properties of this estimator in theory and simulation, and illustrate how it has unlocked targeted decision-making at Wayfair. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.01202 |
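The move between heterogeneous and homogeneous models can be illustrated with a ridge penalty on deviations from a common coefficient. The sketch below uses a linear model with simulated overlapping binary treatments and a hand-picked penalty; it is an assumption-laden stand-in for the customized estimator, not Wayfair's production code:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 500, 8                                # observations, sub-treatments

X = rng.binomial(1, 0.3, size=(n, k)).astype(float)
beta_true = 1.0 + 0.1 * rng.normal(size=k)   # nearly homogeneous true effects
y = X @ beta_true + rng.normal(size=n)

def shrink_ridge(X, y, lam):
    """Ridge that penalizes deviations of each sub-treatment coefficient
    from the common mean, leaving the common level unpenalized."""
    k = X.shape[1]
    M = np.eye(k) - np.ones((k, k)) / k      # projects out the common level
    return np.linalg.solve(X.T @ X + lam * M, X.T @ y)

b_het = shrink_ridge(X, y, 0.0)              # heterogeneous model (plain OLS)
b_hom = shrink_ridge(X, y, 1e4)              # nearly homogeneous model
```

As the penalty grows, the per-treatment coefficients collapse toward their common mean, while the aggregated effect (the average coefficient) remains easy to read off; a small penalty recovers the noisy fully heterogeneous fit.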
By: | Pedro Picchetti |
Abstract: | This paper studies the partial identification of treatment effects in Instrumental Variables (IV) settings with binary outcomes under violations of independence. I derive the identified sets for the treatment parameters of interest in this setting, as well as breakdown values for conclusions regarding the true treatment effects. I derive $\sqrt{N}$-consistent nonparametric estimators for the bounds of treatment effects and for breakdown values. These results can be used to assess the robustness of empirical conclusions obtained under the assumption that the instrument is independent of potential quantities, which is a pervasive concern in studies that use IV methods with observational data. In the empirical application, I show that the conclusions regarding the effects of family size on female unemployment using same-sex siblings as the instrument are highly sensitive to violations of independence. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.10242 |
By: | Ziyi Liu |
Abstract: | I propose a cohort-anchored framework for robust inference in event studies with staggered adoption. Robust inference based on aggregated event-study coefficients, as in Rambachan and Roth (2023), can be misleading because pre- and post-treatment coefficients are identified from different cohort compositions and the not-yet-treated control group changes over time. To address these issues, I work at the cohort-period level and introduce the "block bias": the parallel-trends violation for a cohort relative to its anchored initial control group, whose interpretation is consistent across pre- and post-treatment periods. For both the imputation estimator and the estimator in Callaway and Sant'Anna (2021) that uses not-yet-treated units as controls, I show an invertible decomposition linking these estimators' biases in post-treatment periods to block biases. This allows researchers to place transparent restrictions on block biases (e.g., Relative Magnitudes and Second Differences) and conduct robust inference using the algorithm from Rambachan and Roth (2023). In simulations, when parallel-trends violations differ across cohorts, my framework yields better-centered (and sometimes narrower) confidence sets than the aggregated approach. In a reanalysis of the effect of minimum-wage changes on teen employment in the Callaway and Sant'Anna (2021) application, my inference framework with the Second Differences restriction yields confidence sets centered well below zero, indicating robust negative effects, whereas inference based on aggregated coefficients yields sets centered near zero. The proposed framework is most useful when there are several cohorts, adequate within-cohort precision, and substantial cross-cohort heterogeneity. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.01829 |
By: | Rahul Singh; Moses Stewart |
Abstract: | Standard regression discontinuity design (RDD) models rely on the continuity of expected potential outcomes at the cutoff. The standard continuity assumption can be violated by strategic manipulation of the running variable, which is realistic when the cutoff is widely known and when the treatment of interest is a social program or government benefit. In this work, we identify the treatment effect despite such a violation, by leveraging a placebo treatment and a placebo outcome. We introduce a local instrumental variable estimator. Our estimator decomposes into two terms: the standard RDD estimator of the target outcome's discontinuity, and a new adjustment term based on the placebo outcome's discontinuity. We show that our estimator is consistent, and we justify a robust bias-corrected inference procedure. Our method expands the applicability of RDD to settings with strategic behavior around the cutoff, which commonly arise in social science. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.12693 |
By: | Tzvetan Moev |
Abstract: | Synthetic Control methods have recently gained considerable attention in applications with only one treated unit. Their popularity is partly based on the key insight that we can predict good synthetic counterfactuals for our treated unit. However, this insight of predicting counterfactuals is generalisable to microeconometric settings where we often observe many treated units. We propose the Correlated Synthetic Controls (CSC) estimator for such situations: intuitively, it creates synthetic controls that are correlated across individuals with similar observables. When treatment assignment is correlated with unobservables, we show that the CSC estimator has more desirable theoretical properties than the difference-in-differences estimator. We also utilise CSC in practice to obtain heterogeneous treatment effects in the well-known Mariel Boatlift study, leveraging additional information from the PSID. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.08918 |
By: | Monica Billio (Ca’ Foscari University of Venice; Venice centre in Economic and Risk Analytics); Roberto Casarin (European Centre for Living Technology; Venice centre in Economic and Risk Analytics); Fausto Corradin (Ca’ Foscari University of Venice); Antonio Peruzzi (Ca’ Foscari University of Venice) |
Abstract: | Bayes Factor (BF) is one of the tools used in Bayesian analysis for model selection. The predictive BF finds application in detecting outliers, which are relevant sources of estimation and forecast errors. An efficient framework for outlier detection is provided and purposely designed for large multidimensional datasets. Online detection and analytical tractability guarantee the procedure's efficiency. The proposed sequential Bayesian monitoring extends the univariate setup to a matrix–variate one. Prior perturbation based on power discounting is applied to obtain tractable predictive BFs. This way, computationally intensive procedures used in Bayesian Analysis are not required. The conditions leading to inconclusive responses in outlier identification are derived, and some robust approaches are proposed that exploit the predictive BF's variability to improve the standard discounting method. The effectiveness of the procedure is studied using simulated data. An illustration is provided through applications to relevant benchmark datasets from macroeconomics and finance. |
Keywords: | Bayesian Modelling, Bayes Factor, Sequential Model Assessment, Outliers |
JEL: | C11 C22 E31 F10 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:ven:wpaper:2025:14 |
By: | Matias D. Cattaneo; Filippo Palomba |
Abstract: | It is common practice to incorporate additional covariates in empirical economics. In the context of Regression Discontinuity (RD) designs, covariate adjustment plays multiple roles, making it essential to understand its impact on analysis and conclusions. Typically implemented via local least squares regressions, covariate adjustment can serve three distinct purposes: (i) improving the efficiency of RD average causal effect estimators, (ii) learning about heterogeneous RD policy effects, and (iii) changing the RD parameter of interest. This article discusses and illustrates empirically how to leverage covariates effectively in RD designs. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.14311 |
By: | Mehmet Caner; Agostino Capponi; Mihailo Stojnic |
Abstract: | We introduce prototype consistent model-free, dense precision matrix estimators that have broad application in economics. Using quadratic form concentration inequalities and novel algebraic characterizations of confounding dimension reductions, we are able to: (i) obtain non-asymptotic bounds for precision matrix estimation errors and also (ii) consistency in high dimensions; (iii) uncover an intrinsic tradeoff between the signal-to-noise ratio and the underlying dimensions; and (iv) avoid exact population sparsity assumptions. In addition to its desirable theoretical properties, a thorough empirical study of the S&P 500 index shows that a tuning parameter-free special case of our general estimator exhibits a doubly ascending Sharpe Ratio pattern, thereby establishing a link with the famous double descent phenomenon prominent in the recent statistical and machine learning literature. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.04663 |
By: | Victor Chernozhukov; Iván Fernández-Val; Jonas Meier; Aico van Vuuren; Francis Vella |
Abstract: | We employ distribution regression (DR) to estimate the joint distribution of two outcome variables conditional on chosen covariates. While Bivariate Distribution Regression (BDR) is useful in a variety of settings, it is particularly valuable when some dependence between the outcomes persists after accounting for the impact of the covariates. Our analysis relies on a result from Chernozhukov et al. (2018) which shows that any conditional joint distribution has a local Gaussian representation. We describe how BDR can be implemented and present some associated functionals of interest. As modeling the unexplained dependence is a key feature of BDR, we focus on functionals related to this dependence. We decompose the difference between the joint distributions for different groups into composition, marginal and sorting effects. We provide a similar decomposition for the transition matrices which describe how location in the distribution in one of the outcomes is associated with location in the other. Our theoretical contributions are the derivation of the properties of these estimated functionals and appropriate procedures for inference. Our empirical illustration focuses on intergenerational mobility. Using the Panel Study of Income Dynamics data, we model the joint distribution of parents' and children's earnings. By comparing the observed distribution with constructed counterfactuals, we isolate the impact of observable and unobservable factors on the observed joint distribution. We also evaluate the forces responsible for the difference between the transition matrices of sons and daughters. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.12716 |
By: | Hoang Giang Pham; Tien Mai; Minh Ha Hoang |
Abstract: | In this paper, we revisit parameter estimation for multinomial logit (MNL), nested logit (NL), and tree-nested logit (TNL) models through the framework of convex conic optimization. Traditional approaches typically solve the maximum likelihood estimation (MLE) problem using gradient-based methods, which are sensitive to step-size selection and initialization, and may therefore suffer from slow or unstable convergence. In contrast, we propose a novel estimation strategy that reformulates these models as conic optimization problems, enabling more robust and reliable estimation procedures. Specifically, we show that the MLE for MNL admits an equivalent exponential cone program (ECP). For NL and TNL, we prove that when the dissimilarity (scale) parameters are fixed, the estimation problem is convex and likewise reducible to an ECP. Leveraging these results, we design a two-stage procedure: an outer loop that updates the scale parameters and an inner loop that solves the ECP to update the utility coefficients. The inner problems are handled by interior-point methods with iteration counts that grow only logarithmically in the target accuracy, as implemented in off-the-shelf solvers (e.g., MOSEK). Extensive experiments across estimation instances of varying size show that our conic approach attains better MLE solutions, greater robustness to initialization, and substantial speedups compared to standard gradient-based MLE, particularly on large-scale instances with high-dimensional specifications and large choice sets. Our findings establish exponential cone programming as a practical and scalable alternative for estimating a broad class of discrete choice models. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.01562 |
By: | Xin Liu (Washington State University); David M. Kaplan (University of Missouri) |
Abstract: | We consider quantile regression when the outcome is the log of a non-negative variable that can equal zero. Unlike the analogous mean regression, this is well-defined if the quantile level is not low enough to include the extensive margin, but "log-like" transformations are used in practice due to computational obstacles. We provide computational solutions and diagnostics, as well as theoretical results including identification, coefficient interpretation under proper specification, characterization of the misspecified log-linear model's estimand, and sensitivity of this estimand to changes in the conditional distribution. To illustrate these results, we revisit an empirical study of armed-group and civilian violence. |
Keywords: | equivariance, misspecification, multiplicative error, percent effect, robustness |
JEL: | C21 C23 |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:umc:wpaper:2509 |
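The key identification fact, quantile equivariance to monotone transforms, is easy to verify numerically. A minimal sketch on simulated data with a point mass at zero (an assumed data-generating process, not the paper's application):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Non-negative outcome with a point mass at zero (about 20% extensive margin)
y = np.exp(rng.normal(size=n)) * rng.binomial(1, 0.8, size=n)

tau = 0.5                                    # median lies above the mass at zero
with np.errstate(divide="ignore"):
    log_y = np.log(y)                        # zeros map to -inf

# Quantile equivariance to monotone transforms: as long as the tau-th
# quantile is strictly positive, Q_tau(log y) = log Q_tau(y), so the
# log-outcome quantile estimand is well-defined despite the zeros.
q_y = np.quantile(y, tau, method="lower")
q_log = np.quantile(log_y, tau, method="lower")
```

For a quantile level low enough that the extensive margin is included, the quantile of log y is minus infinity and the estimand breaks down, which is exactly the boundary case the paper's diagnostics target.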
By: | Romuald Meango; Marc Henry; Ismael Mourifie |
Abstract: | Can stated preferences inform counterfactual analyses of actual choice? This research proposes a novel approach for researchers who have access to both stated choices in hypothetical scenarios and actual choices, matched or unmatched. The key idea is to use stated choices to identify the distribution of individual unobserved heterogeneity. If this unobserved heterogeneity is the source of endogeneity, the researcher can correct for its influence in a demand function estimation using actual choices and recover causal effects. Bounds on causal effects are derived in the case where stated choices and actual choices are observed in unmatched data sets. These data combination bounds are of independent interest. We derive a valid bootstrap inference for the bounds and show its good performance in a simulation experiment. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.13552 |
By: | Albert Chiu |
Abstract: | We introduce an algorithm for identifying interpretable subgroups with elevated treatment effects, given an estimate of individual or conditional average treatment effects (CATE). Subgroups are characterized by "rule sets", easy-to-understand statements of the form (Condition A AND Condition B) OR (Condition C), which can capture high-order interactions while retaining interpretability. Our method complements existing approaches for estimating the CATE, which often produce high-dimensional and uninterpretable results, by summarizing and extracting critical information from fitted models to aid decision making, policy implementation, and scientific understanding. We propose an objective function that trades off subgroup size and effect size, and varying the hyperparameter that controls this trade-off results in a "frontier" of Pareto optimal rule sets, none of which dominates the others across all criteria. Valid inference is achievable through sample splitting. We demonstrate the utility and limitations of our method using simulated and empirical examples. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.09494 |
By: | Qian Wu; David M. Kaplan (University of Missouri) |
Abstract: | We develop multiple testing methodology to assess the evidence that an outcome variable’s distribution (not just mean) is "stochastically increasing" in a covariate. Such a relationship holds globally if at each possible outcome value, the conditional CDF evaluated at that value is decreasing in the covariate. Rather than test that single global null hypothesis, we use multiple testing to separately evaluate each constituent conditional CDF inequality. Inverting our multiple testing procedure that controls familywise error rate, we construct "inner" and "outer" confidence sets for the true set of inequalities consistent with stochastic increasingness. Simulations show reasonable finite-sample properties. Empirically, we apply our methodology to study the education gradient in health. Practically, we provide code implementing our methodology and replicating our results. |
Keywords: | confidence set, familywise error rate, health, life satisfaction |
JEL: | C25 I10 |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:umc:wpaper:2511 |
By: | Jaap H. Abbring; Øystein Daljord; Fedor Iskhakov |
Abstract: | We study the identification of dynamic discrete choice models with sophisticated, quasi-hyperbolic time preferences under exclusion restrictions. We consider both standard finite horizon problems and empirically useful infinite horizon ones, which we prove to always have solutions. We reduce identification to finding the present-bias and standard discount factors that solve a system of polynomial equations with coefficients determined by the data and use this to bound the cardinality of the identified set. The discount factors are usually identified, but hard to precisely estimate, because exclusion restrictions do not capture the defining feature of present bias, preference reversals, well. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.07286 |
By: | Sheng Chao Ho |
Abstract: | This paper studies nonparametric empirical Bayes methods in a heterogeneous parameters framework that features unknown means and variances. We provide extended Tweedie's formulae that express the (infeasible) optimal estimators of heterogeneous parameters, such as unit-specific means or quantiles, in terms of the density of certain sufficient statistics. These are used to propose feasible versions with nearly parametric regret bounds of the order of $(\log n)^\kappa / n$. The estimators are employed in a study of teachers' value-added, where we find that allowing for heterogeneous variances across teachers is crucial for delivering optimal estimates of teacher quality and for detecting low-performing teachers. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.02293 |
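The classical univariate, known-variance Tweedie formula that the paper extends can be sketched directly: the posterior mean equals the observation plus the noise variance times the score of the marginal density. This is the textbook version with a kernel density plug-in and simulated data, not the authors' extended formulae:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
n, sigma = 5_000, 1.0

theta = rng.normal(loc=2.0, scale=1.0, size=n)   # heterogeneous true means
z = theta + sigma * rng.normal(size=n)           # noisy unit-level estimates

# Tweedie's formula: E[theta | z] = z + sigma^2 * d/dz log f(z), where f is
# the marginal density of z, here estimated by a kernel density estimator
f = gaussian_kde(z)
eps = 1e-3
score = (np.log(f(z + eps)) - np.log(f(z - eps))) / (2 * eps)
theta_hat = z + sigma**2 * score                 # empirical Bayes estimates
```

The shrinkage produced by the estimated score reduces mean squared error relative to using the raw estimates, which is the regret criterion the paper's bounds formalize.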
By: | Cheuk Hang Leung; Yijun Li; Qi Wu |
Abstract: | Fintech lending has become a central mechanism through which digital platforms stimulate consumption, offering dynamic, personalized credit limits that directly shape the purchasing power of consumers. Although prior research shows that higher limits increase average spending, scalar-based outcomes obscure the heterogeneous distributional nature of consumer responses. This paper addresses this gap by proposing a new causal inference framework that estimates how continuous changes in the credit limit affect the entire distribution of consumer spending. We formalize distributional causal effects within the Wasserstein space and introduce a robust Distributional Double Machine Learning estimator, supported by asymptotic theory to ensure consistency and validity. To implement this estimator, we design a deep learning architecture comprising two components: a Neural Functional Regression Net to capture complex, nonlinear relationships between treatments, covariates, and distributional outcomes, and a Conditional Normalizing Flow Net to estimate generalized propensity scores under continuous treatment. Numerical experiments demonstrate that the proposed estimator accurately recovers distributional effects in a range of data-generating scenarios. Applying our framework to transaction-level data from a major BigTech platform, we find that increased credit limits primarily shift consumers towards higher-value purchases rather than uniformly increasing spending, offering new insights for personalized marketing strategies and digital consumer finance. |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2509.03063 |
By: | Atika Aouri; Philipp Otto |
Abstract: | We introduce a heterogeneous spatiotemporal GARCH model for geostatistical data or processes on networks, e.g., for modelling and predicting financial return volatility across firms in a latent spatial framework. The model combines classical GARCH(p, q) dynamics with spatially correlated innovations and spatially varying parameters, estimated using local likelihood methods. Spatial dependence is introduced through a geostatistical covariance structure on the innovation process, capturing contemporaneous cross-sectional correlation. This dependence propagates into the volatility dynamics via the recursive GARCH structure, allowing the model to reflect spatial spillovers and contagion effects in a parsimonious and interpretable way. In addition, this modelling framework allows for spatial volatility predictions at unobserved locations. In an empirical application, we demonstrate how the model can be applied to financial stock networks. Unlike other spatial GARCH models, our framework does not rely on a fixed adjacency matrix; instead, spatial proximity is defined in a proxy space constructed from balance sheet characteristics. Using daily log returns of 50 publicly listed firms over a one-year period, we evaluate the model's predictive performance in a cross-validation study. |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.20101 |
By: | Stanisław M. S. Halkiewicz |
Abstract: | This paper introduces the Non-Additive Difference-in-Differences (NA-DiD) framework, which extends classical DiD by using non-additive measures, specifically the Choquet integral, for effect aggregation. It serves as a novel econometric tool for impact evaluation, particularly in settings with non-additive treatment effects. First, we introduce the integral representation of the classical DiD model and then extend it to non-additive measures, thereby deriving the formulae for NA-DiD estimation. We then establish its theoretical properties. Applying NA-DiD to a simulated hospital hygiene intervention, we find that classical DiD can overestimate treatment effects, for example by failing to account for compliance erosion. In contrast, NA-DiD provides a more accurate estimate by incorporating non-linear aggregation. The Julia implementation of the techniques used and introduced in this article is provided in the appendices. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.12690 |
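The non-additive aggregation step, the discrete Choquet integral, can be sketched in a few lines. This is a generic Python illustration with a made-up capacity (the paper itself ships a Julia implementation in its appendices):

```python
import numpy as np

def choquet(values, capacity):
    """Discrete Choquet integral of `values` with respect to `capacity`,
    a set function (frozenset -> weight) with capacity[frozenset()] = 0
    that is monotone under set inclusion. Coincides with a weighted mean
    when the capacity is additive; otherwise aggregation is non-additive."""
    order = np.argsort(values)               # criteria sorted ascending
    total, prev = 0.0, 0.0
    for rank, i in enumerate(order):
        upper = frozenset(int(j) for j in order[rank:])  # values >= values[i]
        total += (values[i] - prev) * capacity[upper]
        prev = values[i]
    return total

# Sub-additive capacity over two criteria (joint weight below the sum)
cap = {frozenset(): 0.0, frozenset({0}): 0.6,
       frozenset({1}): 0.6, frozenset({0, 1}): 1.0}
x = np.array([0.2, 0.8])
```

With an additive capacity of 0.5 per criterion the integral reduces to the ordinary mean, 0.5; the sub-additive capacity above down-weights the increment of the larger criterion, giving 0.2 * 1.0 + 0.6 * 0.6 = 0.56. This kind of non-linear aggregation is what lets NA-DiD encode effects such as compliance erosion.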
By: | Haotian Deng |
Abstract: | This paper develops a framework for identifying treatment effects when a policy simultaneously alters both the incentive to participate and the outcome of interest -- such as hiring decisions and wages in response to employment subsidies, or working decisions and wages in response to job trainings. This framework was inspired by my PhD project on a Belgian reform that subsidised first-time hiring, inducing entry by marginal firms while also changing the wages they pay. Standard methods addressing selection-into-treatment concepts (like Heckman selection equations and local average treatment effects), or before-after comparisons (including simple DiD or RDD), cannot isolate effects at this shifting margin where treatment defines who is observed. I introduce marginality-weighted estimands that recover causal effects among policy-induced entrants, offering a policy-relevant alternative in settings with endogenous selection. This method can thus be applied widely to understanding the economic impacts of public programmes, especially in fields relying largely on reduced-form causal inference estimation (e.g. labour economics, development economics, health economics). |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2508.21583 |
By: | Xin Liu (Washington State University); David M. Kaplan (University of Missouri) |
Abstract: | We propose inference methods to compare two continuous distributions across their support, with two main innovations. First, unlike previous such multiple testing procedures, ours apply much more broadly by allowing non-iid sampling, whenever a Donsker theorem holds. We also establish our procedures' coherence and consonance. Second, we invert these procedures into confidence sets for the set of points where certain (in)equalities hold. For one-sided inference, these points have an economic interpretation in terms of restricted first-order stochastic dominance. All our methods provide much richer results than existing global tests that report only a single "reject" or "not reject" decision. |
JEL: | C21 C23 |
Date: | 2025–09 |
URL: | https://d.repec.org/n?u=RePEc:umc:wpaper:2510 |
By: | Klieber, Karin; Coulombe, Philippe Goulet |
Abstract: | Local projections (LPs) are widely used in empirical macroeconomics to estimate impulse responses to policy interventions. Yet, in many ways, they are black boxes. It is often unclear what mechanism or historical episodes drive a particular estimate. We introduce a new decomposition of LP estimates into the sum of contributions of historical events, which is the product, for each time stamp, of a weight and the realization of the response variable. In the least squares case, we show that these weights admit two interpretations. First, they represent purified and standardized shocks. Second, they serve as proximity scores between the projected policy intervention and past interventions in the sample. Notably, this second interpretation extends naturally to machine learning methods, many of which yield impulse responses that, while nonlinear in predictors, still aggregate past outcomes linearly via proximity-based weights. Applying this framework to shocks in monetary and fiscal policy, global temperature, and the excess bond premium, we find that easily identifiable events—such as Nixon’s interference with the Fed, stagflation, World War II, and the Mount Agung volcanic eruption—emerge as dominant drivers of often heavily concentrated impulse response estimates. |
Keywords: | climate, financial shocks, fiscal multipliers, local projections, monetary policy |
JEL: | C32 C53 E31 E52 E62 |
Date: | 2025–08 |
URL: | https://d.repec.org/n?u=RePEc:ecb:ecbwps:20253105 |
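In the least squares case, the decomposition of an LP estimate into per-period contributions is a small linear-algebra identity and can be sketched directly (simulated shock and outcome; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(6)
T, h = 300, 4                                # sample size, projection horizon

shock = rng.normal(size=T)
y = np.zeros(T + h)
y[h:] += 0.7 * shock                         # true impulse response of 0.7 at h
y += rng.normal(size=T + h)

# Local projection: regress y_{t+h} on an intercept and the shock
X = np.column_stack([np.ones(T), shock])
Y = y[h:h + T]
beta = np.linalg.solve(X.T @ X, X.T @ Y)

# Decomposition: the LP estimate equals a weighted sum of realized outcomes,
# with per-period weights from the shock's row of (X'X)^{-1} X'
w = np.linalg.inv(X.T @ X)[1] @ X.T          # one weight per time stamp
contrib = w * Y                              # historical contributions
```

Sorting `contrib` by magnitude reveals which historical periods dominate the estimate, which is how the paper surfaces episodes such as stagflation or wartime as drivers of an impulse response.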
By: | Marco Piccininni; Eric J. Tchetgen Tchetgen; Mats J. Stensrud |
Abstract: | We address an ambiguity in identification strategies using difference-in-differences, which are widely applied in empirical research, particularly in economics. The assumption commonly referred to as the "no-anticipation assumption" states that treatment has no effect on outcomes before its implementation. However, because standard causal models rely on a temporal structure in which causes precede effects, such an assumption seems to be inherently satisfied. This raises the question of whether the assumption is repeatedly stated out of redundancy or because the formal statements fail to capture the intended subject-matter interpretation. We argue that confusion surrounding the no-anticipation assumption arises from ambiguity in the intervention considered and that current formulations of the assumption are ambiguous. Therefore, new definitions and identification results are proposed. |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2507.12891 |