nep-ecm New Economics Papers
on Econometrics
Issue of 2026–03–30
33 papers chosen by
Sune Karlsson, Örebro universitet


  1. Testing the Exclusion Restriction in IV Models Using Non-Gaussianity: A LiNGAM-Based Approach By Fernando Delbianco
  2. Double Machine Learning for Static Panel Data with Instrumental Variables: New Method and Applications By Anna Baiardi; Paul S. Clarke; Andrea A. Naghi; Annalivia Polselli
  3. Specification testing for binary choice model via maximum score By Ota, Yuta; Otsu, Taisuke
  4. Triple/Double-Debiased Lasso By Denis Chetverikov; Jesper R.-V. Sørensen; Aleh Tsyvinski
  5. Maximum softly penalized likelihood in factor analysis By Sterzinger, Philipp; Kosmidis, Ioannis; Moustaki, Irini
  6. Event-Study Designs for Discrete Outcomes under Transition Independence By Young Ahn; Hiroyuki Kasahara
  7. Doubly Robust Estimation of Treatment Effects in Staggered Difference-in-Differences with Time-Varying Covariates By Yuhao Deng; Le Kang
  8. Bandwidth Selection for Spatial HAC Standard Errors By Alexander Lehner
  9. Extreme value inference for heterogeneous heavy-tailed data: A derandomization theory By Daouia, Abdelaati; Hachem, Joseph; Stupfler, Gilles
  10. A latent variable approach to learning high-dimensional multivariate longitudinal data By Lee, Sze Ming; Chen, Yunxiao; Sit, Tony
  11. Online Learning in Semiparametric Econometric Models By Xiaohong Chen; Elie Tamer; Qingsong Yao
  12. Causal Inference under Algorithmic Interference: Identification and Estimation without SUTVA in Platform Economies By Jakub Ryłow
  13. TEA-Time: Transporting Effects Across Time By Harsh Parikh; Gabriel Levin-Konigsberg; Dominique Perrault-Joncas; Alexander Volfovsky
  14. Identification Verification for Structural Vector Autoregressions with Sparse Heterogeneous Markov Switching Heteroskedasticity By Fei Shang; Tomasz Woźniak
  15. Statistical Inference for Score Decompositions By Timo Dimitriadis; Marius Puke
  16. Testing Full Mediation of Treatment Effects and the Identifiability of Causal Mechanisms By Martin Huber; Kevin Kloiber; Lukáš Lafférs
  17. Focused Weighted-Average Least Squares Estimator By Shou-Yung Yin
  18. Identification and Counterfactual Analysis in Incomplete Models with Support and Moment Restrictions By Lixiong Li
  19. Tractable Identification of Strategic Network Formation Models with Unobserved Heterogeneity By Wayne Yuan Gao; Ming Li; Zhengyan Xu
  20. Thin Sets Are Not Equally Thin: Minimax Learning of Submanifold Integrals By Xiaohong Chen; Wayne Yuan Gao
  21. Synthetic Control Misconceptions: Recommendations for Practice By Robert Pickett; Jennifer Hill; Sarah Cowan
  22. Quantile-based modeling of scale dynamics in financial returns for Value-at-Risk and Expected Shortfall forecasting By Xiaochun Liu; Richard Luger
  23. Bayesian Indicator-Saturated Regression for Climate Policy Evaluation By Lucas D. Konrad; Lukas Vashold; Jesus Crespo Cuaresma
  24. Identifying Common Trend Determinants in Panel Data By Yoonseok Lee; Peter C. B. Phillips; Suyong Song; Donggyu Sul
  25. Reserve Demand Estimation with Minimal Theory By Ricardo Lagos; Gastón Navarro
  26. Heterogeneous Elasticities, Aggregation, and Retransformation Bias By Ellen Munroe; Alexander Newton; Meet Shah
  27. Same Error, Different Function: The Optimizer as an Implicit Prior in Financial Time Series By Federico Vittorio Cortesi; Giuseppe Iannone; Giulia Crippa; Tomaso Poggio; Pierfrancesco Beneventano
  28. A New Model of Trend Inflation Using Disaggregates, Survey Expectations, and Uncertainty By Ellis W. Tallman; Saeed Zaman
  29. Measuring the depth of multidimensional poverty with ordinal data By Fernando Flores Tavares
  30. Gaussian Process-Based Mortality Monitoring using Multivariate Cumulative Sum Procedures By Barigou, Karim; Loisel, Stéphane; Salhi, Yahia; Vigneron, Rayane
  31. A Multi-Criteria Fair Gaussian Regressor for Insurance Premium By Jamotton, Charlotte; Hainaut, Donatien
  32. The Gibbs Posterior and Parametric Portfolio Choice By Christopher G. Lamoureux
  33. Neural Demand Estimation with Habit Formation and Rationality Constraints By Marta Grzeskiewicz

  1. By: Fernando Delbianco
    Abstract: Instrumental variable (IV) methods rely critically on the exclusion restriction, which is untestable in exactly-identified models under standard assumptions. We propose a framework combining IV analysis with the LiNGAM method to test this restriction by exploiting non-Gaussianity in the data. Under non-Gaussian structural errors, the exclusion violation parameter is point-identified without additional instruments. Five complementary tests (bootstrap percentile, asymptotic normal, permutation, likelihood ratio, and independence-based) are introduced to assess the restriction under varying data conditions. Monte Carlo simulations and an empirical application to the Card (1995) dataset demonstrate controlled Type I error rates and reasonable power against economically relevant violations.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.13505
  2. By: Anna Baiardi; Paul S. Clarke; Andrea A. Naghi; Annalivia Polselli
    Abstract: Panel data methods are widely used in empirical analysis to address unobserved heterogeneity, but causal inference remains challenging when treatments are endogenous and confounders are high-dimensional. Standard instrumental variables (IV) estimators, such as two-stage least squares (2SLS), become unreliable when instrument validity requires flexibly conditioning on many covariates with potentially nonlinear effects. This paper develops a Double Machine Learning estimator for static panel models with endogenous treatments (panel IV DML) and introduces weak-identification diagnostics for it. We revisit three influential migration studies that use shift-share instruments. In these settings, instrument validity depends on rich covariate adjustment. In one application, panel IV DML strengthens the predictive power of the instrument and broadly confirms 2SLS results. In the other cases, flexible adjustment makes the instruments weak, leading to substantially more cautious causal inference than conventional 2SLS. Monte Carlo evidence supports these findings, showing that panel IV DML improves estimation accuracy under strong instruments and delivers more reliable inference under weak identification.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.20464
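    A minimal cross-sectional sketch of the cross-fitted partialling-out IV (DML-PLIV) construction that estimators of this kind build on; this is not the authors' panel implementation (which additionally handles fixed effects and weak-IV diagnostics), and the function name is our own illustration:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import KFold

        def dml_pliv(y, d, z, X, n_folds=5, seed=0):
            """Cross-fitted partially linear IV: residualize y, d, z on X,
            then solve the moment E[(y_res - beta * d_res) * z_res] = 0."""
            y_res, d_res, z_res = (np.zeros_like(v, dtype=float) for v in (y, d, z))
            for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
                for target, res in ((y, y_res), (d, d_res), (z, z_res)):
                    rf = RandomForestRegressor(n_estimators=200, random_state=seed)
                    rf.fit(X[train], target[train])
                    res[test] = target[test] - rf.predict(X[test])
            beta = (z_res @ y_res) / (z_res @ d_res)
            strength = (z_res @ d_res) / len(y)   # crude first-stage strength check
            return beta, strength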
  3. By: Ota, Yuta; Otsu, Taisuke
    Abstract: This paper proposes a Hausman-type statistic to test the specification of a parametric binary choice model by comparing the maximum likelihood estimator with the maximum score estimator. Although the two estimators converge at different rates, comparing them is still informative for detecting misspecification of parametric models. A simulation study illustrates that the proposed test offers better size properties than the conventional information matrix test and exhibits reasonable power against common forms of misspecification, such as heavy-tailed distributions and heteroskedasticity.
    Keywords: binary choice; cube root asymptotics; maximum score; specification testing
    JEL: J1
    Date: 2026–05–31
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:137590
  4. By: Denis Chetverikov (University of California, Los Angeles); Jesper R.-V. Sørensen (University of Copenhagen); Aleh Tsyvinski (Yale University)
    Abstract: In this paper, we propose a triple (or double-debiased) Lasso estimator for inference on a low-dimensional parameter in high-dimensional linear regression models. The estimator is based on a moment function that satisfies not only first- but also second-order Neyman orthogonality conditions, thereby eliminating both the leading bias and the second-order bias induced by regularization. We derive an asymptotic linear representation for the proposed estimator and show that its remainder terms are never larger and are often smaller in order than those in the corresponding asymptotic linear representation for the standard double Lasso estimator. Because of this improvement, the triple Lasso estimator often yields more accurate finite-sample inference and confidence intervals with better coverage. Monte Carlo simulations confirm these gains. In addition, we provide a general recursive formula for constructing higher-order Neyman orthogonal moment functions in Z-estimation problems, which underlies the proposed estimator as a special case.
    Date: 2026–03–23
    URL: https://d.repec.org/n?u=RePEc:cwl:cwldpp:2507
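    For reference, with score function psi(W; theta, eta), target parameter theta_0, and nuisance eta_0, the first- and second-order Neyman orthogonality conditions the abstract refers to take the standard Gateaux-derivative form (notation ours, not the paper's):

        \frac{\partial}{\partial r}\,\mathbb{E}\big[\psi\big(W;\theta_0,\eta_0+r(\eta-\eta_0)\big)\big]\Big|_{r=0}=0,
        \qquad
        \frac{\partial^2}{\partial r^2}\,\mathbb{E}\big[\psi\big(W;\theta_0,\eta_0+r(\eta-\eta_0)\big)\big]\Big|_{r=0}=0,

    for all admissible nuisance directions eta: the first condition removes the leading regularization bias, the second the next-order bias.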
  5. By: Sterzinger, Philipp; Kosmidis, Ioannis; Moustaki, Irini
    Abstract: Estimation in exploratory factor analysis often yields estimates on the boundary of the parameter space. Such occurrences, called Heywood cases, are characterised by non-positive variance estimates and can cause numerical instability, convergence failures, and misleading inferences. We derive sufficient conditions on the model and on a penalty added to the log-likelihood function that guarantee the existence of maximum penalised likelihood estimates in the interior of the parameter space, and that the corresponding estimators possess the desirable asymptotic properties expected of the maximum likelihood estimator, namely consistency and asymptotic normality. Consistency and asymptotic normality follow when penalisation is soft enough, in a way that adapts to the accumulation of information about the model parameters. We formally show, for the first time, that the penalties of Akaike (1987) and Hirose et al. (2011) to the log-likelihood of the normal linear factor model satisfy the conditions for existence and hence deal with Heywood cases. Their vanilla versions, though, can have questionable finite-sample properties in estimation, inference, and model selection. Our maximum softly-penalised likelihood framework ensures that the resulting estimation and inference procedures are asymptotically optimal. Through comprehensive simulation studies and real data analyses, we illustrate the desirable finite-sample properties of the maximum softly penalised likelihood estimators.
    Keywords: Heywood cases; infinite estimates; singular variance components
    JEL: C1
    Date: 2026–02–18
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:137057
  6. By: Young Ahn; Hiroyuki Kasahara
    Abstract: We develop a new identification strategy for average treatment effects on the treated (ATT) in panel data with discrete outcomes. Standard difference-in-differences (DiD) relies on parallel trends, which is frequently violated in categorical settings due to mean reversion, out-of-bounds counterfactuals, and ill-defined trends for multi-category outcomes. We propose an alternative identification strategy with transition independence: absent treatment, transition dynamics conditional on pre-treatment outcomes are identical between control and treated groups. To capture unobserved heterogeneity, we introduce a latent-type Markov structure delivering type-specific and aggregate treatment effects from short panels. Three empirical applications yield ATT estimates substantially different from conventional DiD.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.07914
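    In potential-outcome notation (ours), the transition-independence condition says that for untreated potential outcomes Y_t(0), treatment indicator D, and pre-treatment outcome Y_{t-1},

        \Pr\big(Y_t(0)=y' \mid Y_{t-1}=y,\; D=1\big) \;=\; \Pr\big(Y_t(0)=y' \mid Y_{t-1}=y,\; D=0\big),

    so applying the control group's estimated transition matrix to the treated group's pre-treatment outcome distribution yields the counterfactual needed for the ATT, with no appeal to trends in means.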
  7. By: Yuhao Deng; Le Kang
    Abstract: The difference-in-differences (DiD) design is a quasi-experimental method for estimating treatment effects. In staggered DiD with multiple treatment groups and periods, estimation based on the two-way fixed effects model yields negative weights when averaging heterogeneous group-period treatment effects into an overall effect. To address this issue, we first define group-period average treatment effects on the treated (ATT), and then define groupwise, periodwise, dynamic, and overall ATTs nonparametrically, so that the estimands are model-free. We propose doubly robust estimators for these types of ATTs in the form of augmented inverse variance weighting (AIVW). The proposed framework allows time-varying covariates that partially explain the time trends in outcomes. Even if part of the working models is misspecified, the proposed estimators still consistently estimate the parameter of interest. The asymptotic variance can be explicitly computed from influence functions. Under a homoskedastic working model, the AIVW estimator is simplified to an augmented inverse probability weighting (AIPW) estimator. We demonstrate the desirable properties of the proposed estimators through simulation and an application that compares the effects of a parallel admission mechanism with immediate admission on the China National College Entrance Examination.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.04080
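    For orientation, the AIPW form that the proposed AIVW estimator reduces to under a homoskedastic working model resembles the familiar doubly robust DiD score for a single group-period cell (notation ours; the paper's estimands aggregate many such cells). With outcome change ΔY, group indicator D, propensity model ê(X), and control outcome-change regression m̂(X):

        \widehat{\mathrm{ATT}} \;=\; \frac{\mathbb{E}_n\Big[\Big(D-\tfrac{\hat e(X)(1-D)}{1-\hat e(X)}\Big)\big(\Delta Y-\hat m(X)\big)\Big]}{\mathbb{E}_n[D]},

    which stays consistent if either ê or m̂ is correctly specified.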
  8. By: Alexander Lehner
    Abstract: Spatial autocorrelation in regression models can lead to downward biased standard errors and thus incorrect inference. The most common correction in applied economics is the spatial heteroskedasticity and autocorrelation consistent (HAC) standard error estimator introduced by Conley (1999). A critical input is the kernel bandwidth: the distance within which residuals are allowed to be correlated. However, this is still an unresolved problem and there is no formal guidance in the literature. In this paper, I first document that the relationship between the bandwidth and the magnitude of spatial HAC standard errors is inverse-U shaped. This implies that both too narrow and too wide bandwidths lead to underestimated standard errors, contradicting the conventional wisdom that wider bandwidths yield more conservative inference. I then propose a simple, non-parametric, data-driven bandwidth selector based on the empirical covariogram of regression residuals. In extensive Monte Carlo experiments calibrated to empirically relevant spatial correlation structures across the contiguous United States, I show that the proposed method controls the false positive rate at or near the nominal 5% level across a wide range of spatial correlation intensities and sample configurations. I compare six kernel functions and find that the Bartlett and Epanechnikov kernels deliver the best size control. An empirical application using U.S. county-level data illustrates the practical relevance of the method. The R package SpatialInference implements the proposed bandwidth selection method.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.03997
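    A minimal numpy sketch of the two ingredients described above: a Bartlett-kernel spatial HAC variance and a bandwidth chosen where the residual covariogram first turns non-positive. This is one plausible reading of the selector; the paper's exact rule ships in its R package SpatialInference.

        import numpy as np

        def covariogram_bandwidth(u, coords, n_bins=30):
            """Left edge of the first distance bin where the empirical
            covariogram of OLS residuals u becomes non-positive."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
            iu = np.triu_indices_from(d, k=1)
            dist, prod = d[iu], np.outer(u, u)[iu]
            edges = np.linspace(0.0, dist.max(), n_bins + 1)
            for lo, hi in zip(edges[:-1], edges[1:]):
                m = (dist >= lo) & (dist < hi)
                if m.any() and prod[m].mean() <= 0:
                    return max(lo, edges[1])     # keep the bandwidth positive
            return edges[-1]

        def conley_se(X, u, coords, h):
            """Bartlett-kernel Conley standard errors for OLS coefficients."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
            K = np.clip(1.0 - d / h, 0.0, None)  # Bartlett weights, zero beyond h
            Xu = X * u[:, None]
            meat = Xu.T @ K @ Xu                 # sum_ij K_ij x_i u_i u_j x_j'
            bread = np.linalg.inv(X.T @ X)
            return np.sqrt(np.diag(bread @ meat @ bread))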
  9. By: Daouia, Abdelaati; Hachem, Joseph; Stupfler, Gilles
    Abstract: A major mathematical difficulty in studying extreme value parameter estimators defined as empirical mean excesses is their reliance on high order statistics above a random threshold. Based on simple yet novel derandomization arguments, we provide sufficient conditions for deriving the joint asymptotic distribution of so-called tail empirical excesses and Expected Shortfall with the underlying threshold level. This high-level result allows for a strong degree of heterogeneity in the data-generating process as well as serial dependence. When the observations are independent and their average distribution is heavy-tailed, we obtain asymptotic normality results for the Hill estimator of the extreme value index, the Weissman estimator of extreme quantiles, and two estimators of Expected Shortfall above an extreme level, under substantially weaker, yet easily verifiable and interpretable conditions than those prevailing in the recent literature. In particular, we establish precise closed-form expressions for the asymptotic bias and variance of each estimator. Our assumptions hold in a wide range of models where existing results may not apply, including scenarios of contaminated samples, pooled samples from several populations, heterogeneous location-scale models and the situation where observed covariate information is ignored. We discuss practical consequences of our results on simulated data and two real data applications to cyber risk and financial risk management.
    Keywords: Derandomization; Expected Shortfall; Extreme quantile; Heavy tails; Heterogeneity; Hill estimator
    Date: 2026–03–17
    URL: https://d.repec.org/n?u=RePEc:tse:wpaper:131598
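    For reference, the classical estimators studied here take the following standard forms, written for order statistics X_(1) <= ... <= X_(n) and an intermediate sequence k (the paper's heterogeneous-data results generalize these):

        \hat\gamma_k=\frac{1}{k}\sum_{i=1}^{k}\log\frac{X_{(n-i+1)}}{X_{(n-k)}}\ \ (\text{Hill}),
        \qquad
        \hat q_{1-p}=X_{(n-k)}\Big(\frac{k}{np}\Big)^{\hat\gamma_k}\ \ (\text{Weissman}),

    and, using the Pareto-tail approximation (for gamma < 1), the Expected Shortfall above the level 1-p can be estimated as \hat q_{1-p}/(1-\hat\gamma_k).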
  10. By: Lee, Sze Ming; Chen, Yunxiao; Sit, Tony
    Abstract: High-dimensional multivariate longitudinal data, which arise when many outcome variables are measured repeatedly over time, are becoming increasingly common in social, behavioral and health sciences. We propose a latent variable model for drawing statistical inferences on covariate effects and predicting future outcomes based on high-dimensional multivariate longitudinal data. This model introduces unobserved factors to account for the between-variable and across-time dependence and assist the prediction. Statistical inference and prediction tools are developed under a general setting that allows outcome variables to be of mixed types and possibly unobserved for certain time points, for example, due to right censoring. A central limit theorem is established for drawing statistical inferences on regression coefficients. Additionally, an information criterion is introduced to choose the number of factors. The proposed model is applied to customer grocery shopping records to predict and understand shopping behavior.
    Keywords: factor model; missing data; recurrent event data
    JEL: C1
    Date: 2026–03–16
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:130619
  11. By: Xiaohong Chen; Elie Tamer; Qingsong Yao
    Abstract: Data in modern economic and financial applications often arrive as a stream, requiring models and inference to be updated in real time, yet most semiparametric methods remain batch-based and computationally impractical in large-scale streaming settings. We develop an online learning framework for semiparametric monotone index models with an unknown monotone link function. Our approach uses a two-phase learning paradigm. In a warm-start phase, we introduce a new online algorithm for the finite-dimensional parameter that is globally stable, yielding consistent estimation from arbitrary initialization. In a subsequent rate-optimal phase, we update the finite-dimensional parameter using an orthogonalized score while learning the unknown link via an online sieve method; this phase achieves optimal convergence rates for both components. The procedure processes only the most recent data batch, making it suitable when data cannot be stored (e.g., due to memory, privacy, or security constraints), and its resulting parameter trajectories enable online inference, such as confidence regions on parameters and policy-effect analysis, with negligible additional computation. Experiments on both simulated and real data show adequate performance, especially relative to full-sample methods.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.08614
  12. By: Jakub Ryłow (Faculty of Economic Sciences, University of Warsaw)
    Abstract: The Stable Unit Treatment Value Assumption (SUTVA) fails systematically in platform economies where a deterministic algorithm mediates all interactions, making interference structural and mechanistically knowable. We introduce algorithmic interference — a structural potential-outcomes model in which spillovers flow through the platform's known decision rule — and construct the Debiased Algorithmic Instrumental Variable (DAIV) estimator: a cross-fitted semiparametric procedure combining Double Machine Learning with the IV equation implied by the algorithmic mechanism. Under local algorithmic monotonicity (LAM), both the ATE and CATE are point identified; without LAM the sharp identified set is characterised. DAIV is sqrt(n)-consistent, asymptotically normal, and semiparametrically efficient, with a formal LAM test supplied. A synthetic ride-sharing example (n = 10,000) shows that standard DML overstates the treatment effect by 52% relative to DAIV; a Hausman-type specification test strongly rejects the absence of algorithmic interference.
    Keywords: algorithmic interference, SUTVA, potential outcomes, causal identification, double machine learning, platform economies, instrumental variables, semiparametric efficiency, partial identification
    JEL: C14 C21 C26 D47 L86
    Date: 2026
    URL: https://d.repec.org/n?u=RePEc:war:wpaper:2026-7
  13. By: Harsh Parikh; Gabriel Levin-Konigsberg; Dominique Perrault-Joncas; Alexander Volfovsky
    Abstract: Treatment effects estimated from randomized controlled trials are local not only to the study population but also to the time at which the trial was conducted. We develop a framework for temporal transportation: extrapolating treatment effects to time periods where no experiment was conducted. We target the transported average treatment effect (TATE) and show that under a separable temporal effects assumption, the TATE decomposes into an observed average treatment effect and a temporal ratio. We provide two identification strategies -- one using replicated trials comparing the same treatments at different times, another using common treatment arms observed across time -- and develop doubly robust, semiparametrically efficient estimators for each. Monte Carlo simulations confirm that both estimators achieve nominal coverage, with the common arm strategy yielding substantial efficiency gains when its stronger assumptions hold. We apply our methods to A/B tests from the Upworthy Research Archive, demonstrating that the two strategies exhibit a variance-bias tradeoff: the common arm approach offers greater precision but may incur bias when treatments interact heterogeneously with temporal factors.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.07018
  14. By: Fei Shang (Guangdong University of Foreign Studies); Tomasz Wo\'zniak (University of Melbourne)
    Abstract: We propose a structural vector autoregressive model with a new and flexible specification of the volatility process which we call Sparse Heterogeneous Markov-Switching Heteroskedasticity. In this model, the conditional variance of each structural shock changes in time according to its own Markov process. Additionally, it features a sparse representation of Markov processes, in which the number of regimes is set to exceed that of the data-generating process, with some regimes allowed to have zero occurrences throughout the sample. We complement these developments with a definition of a new distribution for normalised conditional variances that facilitates Gibbs sampling and identification verification. In effect, our model: (i) normalises the system and estimates the structural parameters more precisely than popular alternatives; (ii) can be used to verify homoskedasticity reliably and, thus, inform identification through heteroskedasticity; and (iii) features excellent forecasting performance comparable with Stochastic Volatility. Finally, revisiting a prominent macro-financial structural system, we provide evidence for the identification of the US monetary policy shock via heteroskedasticity, with estimates consistent with those reported in the literature.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.16035
  15. By: Timo Dimitriadis; Marius Puke
    Abstract: We introduce inference methods for score decompositions, which partition scoring functions for predictive assessment into three interpretable components: miscalibration, discrimination, and uncertainty. Our estimation and inference relies on a linear recalibration of the forecasts, which is applicable to general multi-step ahead point forecasts such as means and quantiles due to its validity for both smooth and non-smooth scoring functions. This approach ensures desirable finite-sample properties, enables asymptotic inference, and establishes a direct connection to the classical Mincer-Zarnowitz regression. The resulting inference framework facilitates tests for equal forecast calibration or discrimination, which yield three key advantages. They enhance the information content of predictive ability tests by decomposing scores, deliver higher statistical power in certain scenarios, and formally connect scoring-function-based evaluation to traditional calibration tests, such as financial backtests. Applications demonstrate the method's utility. We find that for survey inflation forecasts, discrimination abilities can differ significantly even when overall predictive ability does not. In an application to financial risk models, our tests provide deeper insights into the calibration and information content of volatility and Value-at-Risk forecasts. By disentangling forecast accuracy from backtest performance, the method exposes critical shortcomings in current banking regulation.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.04275
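    A minimal numpy sketch of the decomposition via linear (Mincer-Zarnowitz) recalibration, specialized to the squared-error score for mean forecasts; the paper's framework covers general scoring functions and multi-step quantile forecasts.

        import numpy as np

        def score_decomposition(x, y):
            """Split the mean squared-error score of forecasts x for outcomes y
            into miscalibration (MCB), discrimination (DSC), and uncertainty
            (UNC), so that mean score = MCB - DSC + UNC."""
            b, a = np.polyfit(x, y, 1)            # Mincer-Zarnowitz regression
            x_rc = a + b * x                      # linearly recalibrated forecasts
            s = np.mean((x - y) ** 2)             # score of the original forecasts
            s_rc = np.mean((x_rc - y) ** 2)       # score after recalibration
            s_mg = np.mean((y.mean() - y) ** 2)   # score of the marginal forecast
            return s - s_rc, s_mg - s_rc, s_mg    # MCB, DSC, UNC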
  16. By: Martin Huber; Kevin Kloiber; Luk\'a\v{s} Laff\'ers
  16. By: Martin Huber; Kevin Kloiber; Lukáš Lafférs
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.04109
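    The testable implication at the heart of the paper can be written compactly: with treatment D randomly assigned given covariates X, mediators M, and outcome Y, full mediation together with identifiable mechanisms implies the conditional independence

        D \;\perp\!\!\!\perp\; Y \;\mid\; M, X,

    and this is the restriction the double machine learning procedure tests.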
  17. By: Shou-Yung Yin
    Abstract: We propose a focused weighted-average least squares (FWALS) estimator that addresses the computational burden of focused model averaging. By semi-orthogonalizing auxiliary regressors, the weighting problem is reduced from $2^{k_2}$ sub-models to at most $k_2$ regressor-wise weights, yielding a tractable sub-optimal procedure. Under local-to-zero conditions, we derive the limiting distribution of FWALS for smooth focused functions and provide a plug-in AMSE criterion for data-driven weight selection. Simulations show that FWALS closely matches the focused information criterion (FIC) benchmark and delivers stable performance when the focused function targets an impulse response. Prior-based WALS can be competitive in some settings, but its performance depends on the signal regime and on the design of the focused parameter. Overall, FWALS offers a practical and robust alternative with substantial computational savings.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.03008
  18. By: Lixiong Li
    Abstract: This paper develops a unified identification framework for counterfactual analysis in incomplete models characterized by support and moment restrictions. I demonstrate that identifying structural parameters and conducting counterfactual analyses are isomorphic tasks. By embedding counterfactual restrictions within an augmented structural model specification, this approach bypasses the conventional "estimate-then-simulate" workflow and the need to simulate outcomes from models with set predictions. To make this approach operational, I extend sharp identification results for the support-function approach beyond the integrable boundedness condition that is imposed in sharp random-set characterizations but may be violated in economically relevant counterfactual analyses. Under minimal regularity conditions, I prove that the support-function approach remains sharp for the moment closure of the identified set. Furthermore, I introduce an irreducibility condition requiring all support implications to be made explicit. I show that for irreducible models, the identified set and its moment closure are statistically indistinguishable in finite samples. Together, these results justify using support-function methods in counterfactual settings where traditional sharpness fails and clarify the distinct roles of support and moment restrictions in empirical practice.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.07722
  19. By: Wayne Yuan Gao; Ming Li; Zhengyan Xu
    Abstract: We develop a tractable identification approach for strategic network formation models with both strategic link interdependence and individual unobserved heterogeneity (fixed effects). The key challenge is that endogenous network statistics (e.g., number of common friends) enter the link formation equation, while the mapping from model primitives to equilibrium network structure is generally intractable. Our approach sidesteps this difficulty using a "bounding-by-c" technique that treats endogenous covariates as random variables and exploits monotonicity restrictions to obtain identifying information. We derive a system of identifying restrictions based on subnetwork configurations: tetrad-based restrictions that completely eliminate all individual fixed effects, triad-based restrictions that partially difference out fixed effects, and general weighted cycle-based restrictions, along with point identification results. Preliminary simulations show that our approach can deliver informative bounds on the structural parameters.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.08634
  20. By: Xiaohong Chen (Yale University); Wayne Yuan Gao (University of Pennsylvania)
    Abstract: Many economic parameters are identified by "thin sets" (submanifolds with Lebesgue measure zero) and hence are difficult to recover from data in an ambient space. This paper provides a unified theory for estimation and inference of such "thin-set" identified functionals. We show that thin sets are not equally thin: their intrinsic dimensionality m matters in a precise manner. For a nonparametric regression $h_0$ with Hölder smoothness s and d-dimensional covariates in the ambient space, we show that $n^{-s/(2s+d-m)}$ is the minimax optimal rate of estimating linear and nonlinear (e.g., quadratic, upper contour) integrals of $h_0$ on an m-dimensional submanifold ($0 \le m < d$).
    Date: 2026–03–05
    URL: https://d.repec.org/n?u=RePEc:cwl:cwldpp:2450r1
  21. By: Robert Pickett; Jennifer Hill; Sarah Cowan
    Abstract: To estimate the causal effect of an intervention, researchers need to identify a control group that represents what might have happened to the treatment group in the absence of that intervention. This is challenging without a randomized experiment and further complicated when few units (possibly only one) are treated. Nevertheless, when data are available on units over time, synthetic control (SC) methods provide an opportunity to construct a valid comparison by differentially weighting control units that did not receive the treatment so that their resulting pre-treatment trajectory is similar to that of the treated unit. The hope is that this weighted "pseudo-counterfactual" can serve as a valid counterfactual in the post-treatment time period. Since its origin over twenty years ago, SC has been used over 5,000 times in the literature (Web of Science, December 2025), leading to a proliferation of descriptions of the method and guidance on proper usage that is not always accurate and does not always align with what the original developers appear to have intended. As such, a number of accepted pieces of wisdom have arisen: (1) SC is robust to various implementations; (2) covariates are unnecessary; and (3) pre-treatment prediction error should guide model selection. We describe each in detail and conduct simulations suggesting, both for standard and alternative implementations of SC, that these purported truths are not supported by empirical evidence and thus represent misconceptions about best practice. Instead of relying on these misconceptions, we offer practical advice for more cautious implementation and interpretation of results.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.19211
  22. By: Xiaochun Liu; Richard Luger
    Abstract: We introduce a semiparametric approach for forecasting Value-at-Risk (VaR) and Expected Shortfall (ES) by modeling the conditional scale of financial returns, defined as the difference between two specified quantiles, via restricted quantile regression. Focusing on downside risk, VaR is derived from the left-tail quantile of rescaled returns, and ES is approximated by averaging quantiles below the VaR level. The method delivers robust, distribution-free estimates of extreme losses and captures skewness, heavy tails, and leverage effects. Simulation experiments and empirical analysis show that it often outperforms established models, including GARCH and joint VaR-ES conditional-quantile approaches. An application to daily returns on major international stock indices, spanning the COVID-19 period, highlights its effectiveness in capturing risk dynamics.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.02357
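    The ES step described above (averaging quantiles below the VaR level) has a simple empirical counterpart; a minimal sketch for a vector of rescaled returns r (the paper applies this to conditional quantiles implied by the fitted scale model):

        import numpy as np

        def var_es(r, alpha=0.05, m=50):
            """VaR as the alpha-quantile of rescaled returns; ES approximated
            by averaging quantiles at m midpoint levels below alpha."""
            var = np.quantile(r, alpha)
            levels = alpha * (np.arange(1, m + 1) - 0.5) / m
            es = np.quantile(r, levels).mean()
            return var, es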
  23. By: Lucas D. Konrad; Lukas Vashold; Jesus Crespo Cuaresma
    Abstract: Structural break identification methods are an important tool for evaluating the effectiveness of climate change mitigation policies. In this paper, we introduce a unified probabilistic framework for detecting structural breaks with unknown timing and arbitrary sequence in longitudinal data. The proposed Bayesian setup uses indicator-saturated regression and a spike-and-slab prior with an inverse-moment density as the slab component to ensure model selection consistency. Simulation results show that the method outperforms comparable frequentist approaches, particularly in environments with a high probability of structural breaks. We apply the framework to identify and evaluate the effects of climate policies in the European road transport sector.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.04997
  24. By: Yoonseok Lee (Syracuse University); Peter C. B. Phillips (Yale University, University of Auckland, and Singapore Management University); Suyong Song (University of Iowa); Donggyu Sul (University of Texas at Dallas)
    Abstract: This paper develops a novel method for identifying observable determinants of latent common trends in nonstationary panel data, which are typically removed or controlled in two-way fixed effects regressions. By examining cross sectional dispersion processes, we assess whether panel series exhibit distributional convergence toward specific observed time series, revealing them as long run determinants of the underlying latent trend. The approach also offers a new perspective on cointegration between time series and panel data, focusing on the relative variation of the panel data with respect to the cointegration error. Applying this method to U.S. state-level crime rates demonstrates that the percentage of young adults is a key determinant of violent crime trends, while the incarceration rate drives property crime trends. These findings, which differ from standard two-way fixed effects analysis results, provide a compelling explanation for the sharp decline in U.S. crime rates since the early 1990s.
    Date: 2026–03–10
    URL: https://d.repec.org/n?u=RePEc:cwl:cwldpp:2504
  25. By: Ricardo Lagos; Gastón Navarro
    Abstract: We propose a new reserve-demand estimation strategy: a middle ground between atheoretical reduced-form econometric approaches and fully structural quantitative-theoretic approaches. The strategy consists of an econometric specification that satisfies core restrictions implied by theory and controls for changes in administered-rate spreads that induce rotations and shifts in reserve demand. The resulting approach is as user-friendly as existing reduced-form econometric methods but improves upon them by incorporating a minimal set of theoretical restrictions that any reserve demand must satisfy. We apply this approach to U.S. data and obtain reserve-demand estimates that are broadly consistent with the structural estimates.
    JEL: E4 E41 E5 E50
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34972
  26. By: Ellen Munroe; Alexander Newton; Meet Shah
    Abstract: Economists often interpret estimates from linear regressions with log dependent variables as elasticities. However, the coefficients from log-log regressions estimate the elasticity of the geometric mean of $y_i|x_i$, not the arithmetic mean. The unbounded difference between the two is known as retransformation bias and can take either sign. We develop a specification-robust debiased estimator of the average arithmetic elasticity and re-estimate 50 results from papers published in top-five journals in 2020. We find that 19 are significantly different, with the median absolute difference being 65% of the OLS elasticity estimate. Furthermore, we show that standard instrumental variables assumptions with log dependent variables do not identify the elasticity. We specify a control function approach and re-estimate papers that use 2SLS with log dependent variables. We find that 13 of 19 such results are significantly different between the two approaches. Retransformation bias arises as a result of heterogeneous responses. The geometric mean elasticity corresponds to the average response. Arithmetic and geometric means are elements of the power mean family. We show that power mean elasticities are sufficient statistics for a common class of decision problems.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.12536
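    The geometric-versus-arithmetic gap is easy to reproduce. In the simulation below (our illustration, not the paper's estimator), the log-scale error variance grows with log x, so the elasticity of the arithmetic mean E[y|x] equals beta + gamma/2 while log-log OLS recovers beta; Poisson pseudo-maximum-likelihood targets the arithmetic-mean elasticity:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n, beta, gamma = 200_000, 1.0, 0.6
        lx = rng.uniform(0.0, 3.0, n)                          # log x
        ly = beta * lx + rng.normal(0.0, np.sqrt(gamma * lx))  # Var(u|x) = gamma*log x
        y = np.exp(ly)                                         # E[y|x] = x**(beta + gamma/2)

        Z = sm.add_constant(lx)
        print(sm.OLS(ly, Z).fit().params[1])                   # ~1.0: geometric-mean elasticity
        print(sm.GLM(y, Z, family=sm.families.Poisson()).fit().params[1])
        # ~1.3: arithmetic-mean elasticity (beta + gamma/2)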
  27. By: Federico Vittorio Cortesi; Giuseppe Iannone; Giulia Crippa; Tomaso Poggio; Pierfrancesco Beneventano
    Abstract: Neural networks applied to financial time series operate in a regime of underspecification, where different predictors achieve indistinguishable out-of-sample error. Using large-scale volatility forecasting for S&P 500 stocks, we show that different model-training-pipeline pairs with identical test loss learn qualitatively different functions. Across architectures, predictive accuracy remains unchanged, yet optimizer choice reshapes non-linear response profiles and temporal dependence differently. These divergences have material consequences for decisions: volatility-ranked portfolios trace a near-vertical Sharpe-turnover frontier, with nearly 3× turnover dispersion at comparable Sharpe ratios. We conclude that in underspecified settings, optimization acts as a consequential source of inductive bias, so model evaluation should extend beyond scalar loss to encompass functional and decision-level implications.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.02620
  28. By: Ellis W. Tallman; Saeed Zaman
    Abstract: This paper develops a new empirical model that estimates trend inflation by combining modeling features that have advanced the literature on trend inflation over the past two decades. These features include incorporating information about long-term inflation expectations from surveys in a flexible way, modeling aggregate inflation via sectoral data (goods and services), allowing for stochastic volatility (SV) in the shocks to the trend and transitory components of inflation, allowing for a time-varying price Phillips curve, and allowing for time-varying uncertainty effects on the level of inflation. We estimate the model using state-of-the-art Bayesian methods. We document the competitive properties of the new model compared to variants that include only a subset of the above features. The new model provides a more interpretable historical decomposition of inflation data than the models it extends. The decomposition suggests that uncertainty effects play a greater role than cyclical effects in explaining inflation fluctuations.
    Keywords: disaggregates of inflation; inflation uncertainty; trend inflation; inflation expectations; nonlinear state space; Bayesian methods
    JEL: C11 C32 E31
    Date: 2026–03–24
    URL: https://d.repec.org/n?u=RePEc:fip:fedcwq:102922
  29. By: Fernando Flores Tavares
    Abstract: This paper proposes a positional poverty gap measure of multidimensional poverty within the Alkire-Foster counting framework. The measure captures the depth of deprivations even when indicators are ordinal, unlike the standard poverty gap, which requires cardinal variables. The proposed method draws on the fuzzy set literature and introduces a distribution-based measure of deprivation depth using the empirical cumulative distribution of each indicator, with the most deprived group as the benchmark. For each deprived individual, the method assigns a score based on the individual's relative position in the distribution. Depth is thus expressed as a difference in distributional positions, motivating the label positional poverty gap. The paper demonstrates that this measure preserves the identification and aggregation structure of the counting approach and satisfies its axiomatic properties when the reference distribution remains fixed over time. The framework remains flexible because it accommodates different identification rules, deprivation cutoffs, and variable types. Overall, it offers a simple, meaningful, and theoretically grounded way to incorporate depth into multidimensional poverty measurement with ordinal data.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.15149
  30. By: Barigou, Karim (Université catholique de Louvain, LIDAM/ISBA, Belgium); Loisel, Stéphane; Salhi, Yahia; Vigneron, Rayane
    Abstract: This paper proposes an online multivariate cumulative sum (MCUSUM) monitoring procedure for detecting changes in mortality dynamics, with direct applications to mortality and longevity risk management for insurers and pension funds. The method is built on Gaussian process (GP) non-parametric mortality forecasts, and performs surveillance in real time by tracking multivariate forecast errors across ages. We develop MCUSUM schemes targeting two practically relevant forms of change: (i) a change in level, corresponding to an abrupt proportional shift in mortality rates, and (ii) a change in trend, corresponding to a shift in the rate of mortality improvement. In both cases, one-sided monitoring rules allow the practitioner to focus on either adverse mortality shocks or adverse longevity developments. By explicitly exploiting dependence between age groups, the proposed multivariate approach improves detection performance relative to collections of univariate control charts. We evaluate the procedure through simulation experiments and empirical applications to recent mortality data from France, Japan, Canada, and the USA, and we further illustrate its use on a real-world life insurance portfolio. Finally, we document the impact of age-pattern changes consistent with rectangularization of mortality curves and discuss how such dynamics can affect prospective monitoring and the interpretation of detection signals.
    Keywords: Mortality modeling ; Change-point detection ; Gaussian processes ; longevity risk management ; rectangularization
    Date: 2026–02–26
    URL: https://d.repec.org/n?u=RePEc:aiz:louvad:2026004
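    For readers unfamiliar with MCUSUM schemes, a minimal sketch of the classical Crosier-type recursion on standardized forecast-error vectors (the paper's GP-based, one-sided charts build on this idea; the reference value k and alarm threshold h are tuning constants):

        import numpy as np

        def mcusum_alarm(errors, Sigma_inv, k=0.5, h=5.0):
            """Return the first alarm time (or None) of a multivariate CUSUM
            run over a (T x p) array of forecast-error vectors."""
            s = np.zeros(errors.shape[1])
            for t, e in enumerate(errors):
                c = np.sqrt((s + e) @ Sigma_inv @ (s + e))   # norm of updated sum
                s = np.zeros_like(s) if c <= k else (s + e) * (1.0 - k / c)
                if np.sqrt(s @ Sigma_inv @ s) > h:           # chart statistic
                    return t
            return None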
  31. By: Jamotton, Charlotte (Université catholique de Louvain, LIDAM/ISBA, Belgium); Hainaut, Donatien (Université catholique de Louvain, LIDAM/ISBA, Belgium)
    Abstract: This article studies how multiple notions of fairness can be incorporated into a single Bayesian non-parametric regression framework for insurance pricing, with a focus on claim frequency modeling under a log-link. We consider a Generalized Gaussian Process Regression (GGPR) model for count data with risk exposure and introduce fairness interventions in its architecture. Specifically, we address notions of individual fairness by altering the kernel structure to control the similarity between policies (e.g., to mitigate omitted variable bias). We also address group-level fairness by enforcing demographic parity through linear constraints affecting the posterior. This modified GGPR architecture allows us to jointly enforce multiple fairness definitions, spanning both group and individual-level criteria, within a single probabilistic model. We empirically explore trade-offs with actuarial fairness and how different fairness criteria interact when combined. The results highlight the importance of adopting a multi-criteria, context-aware approach to fairness in insurance pricing.
    Keywords: Gaussian process regression ; count data ; confounding bias ; actuarial fairness ; omitted variable bias ; indirect discrimination ; direct discrimination
    Date: 2026–02–23
    URL: https://d.repec.org/n?u=RePEc:aiz:louvad:2026003
  32. By: Christopher G. Lamoureux
    Abstract: Parametric portfolio policies are subject to estimation risk. I develop a generalized Bayesian framework that updates the investor's prior into a posterior distribution over characteristic tilts and out-of-sample returns; the update is the unique belief-updating rule consistent with the investor's utility function and requires no model for the return-generating process. The Gibbs posterior is the closest distribution to the prior in Kullback-Leibler divergence subject to utility maximization. The posterior's scaling parameter $\lambda$ controls the weight placed on data relative to the prior. I develop a KNEEDLE algorithm to select the optimal $\lambda^*$ in-sample by trading off posterior precision against numerical fragility, eliminating the need for out-of-sample validation. I apply this to U.S. equities (1955-2024) and confirm that characteristic-based gains concentrate pre-2000. I find that $\lambda^*$ varies meaningfully with risk aversion and depends on higher-order moments.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.02455
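    In the standard generalized-Bayes notation (ours), with prior pi, in-sample utility \bar U_n(\theta) of the tilt vector theta, and learning rate lambda, the Gibbs posterior is

        \pi_\lambda(\theta\mid\text{data}) \;\propto\; \exp\{\lambda\,\bar U_n(\theta)\}\,\pi(\theta),
        \qquad
        \pi_\lambda \;=\; \arg\min_{\rho}\Big\{\mathrm{KL}(\rho\,\|\,\pi)-\lambda\,\mathbb{E}_{\rho}\big[\bar U_n(\theta)\big]\Big\},

    the second display formalizing "closest to the prior in Kullback-Leibler divergence subject to utility maximization".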
  33. By: Marta Grzeskiewicz
    Abstract: We develop a flexible neural demand system for continuous budget allocation that estimates budget shares on the simplex by minimizing KL divergence. Shares are produced via a softmax of a state-dependent preference scorer and disciplined with regularity penalties (monotonicity, Slutsky symmetry) to support coherent comparative statics and welfare analysis without imposing a parametric utility form. State dependence enters through a habit stock defined as an exponentially weighted moving average of past consumption. Simulations recover elasticities and welfare accurately and show sizable gains when habit formation is present. In our empirical application using Dominick's analgesics data, adding habit reduces out-of-sample error by about 33%, reshapes substitution patterns, and increases compensating-variation (CV) losses from a 10% ibuprofen price rise by about 15-16% relative to a static model. The code is available at https://github.com/martagrz/neural_demand_habit.
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2603.02331
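    A minimal numpy sketch of the share generation and loss described above, with a stand-in linear scorer in place of the paper's penalized neural network (all weights and data here are toy illustrations):

        import numpy as np

        rng = np.random.default_rng(0)
        T, goods = 100, 4
        state = rng.normal(size=(T, goods))              # e.g., log prices
        W = rng.normal(scale=0.1, size=(goods, goods))   # stand-in scorer weights
        V = rng.normal(scale=0.1, size=(goods, goods))   # habit loadings
        rho = 0.9                                        # habit persistence

        def softmax(v):
            e = np.exp(v - v.max())
            return e / e.sum()

        habit, shares = np.zeros(goods), []
        for t in range(T):
            w = softmax(state[t] @ W + habit @ V)        # budget shares on the simplex
            shares.append(w)
            habit = rho * habit + (1 - rho) * w          # EWMA habit stock
                                                         # (shares proxy consumption here)
        w_obs = rng.dirichlet(np.ones(goods), size=T)    # toy observed shares
        w_hat = np.array(shares)
        kl = np.mean(np.sum(w_obs * np.log(w_obs / np.clip(w_hat, 1e-12, None)), axis=1))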

This nep-ecm issue is ©2026 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the Griffith Business School of Griffith University in Australia.