nep-ecm New Economics Papers
on Econometrics
Issue of 2026–01–12
29 papers chosen by
Sune Karlsson, Örebro universitet


  1. Inference in partially identified moment models via regularized optimal transport By Grigory Franguridi; Laura Liu
  2. Modewise Additive Factor Model for Matrix Time Series By Elynn Chen; Yuefeng Han; Jiayu Li; Ke Xu
  3. What Is a Causal Effect When Firms Interact? Counterfactuals and Interdependence By Mariluz Mate
  4. Inference for Forecasting Accuracy: Pooled versus Individual Estimators in High-dimensional Panel Data By Tim Kutta; Martin Schumann; Holger Dette
  5. Compound Estimation for Binomials By Yan Chen; Lihua Lei
  6. Imputation-Powered Inference for Missing Covariates By Junting Duan; Markus Pelger
  7. Extrapolating LATE with Weak IVs By Muyang Ren
  8. Sharp Structure-Agnostic Lower Bounds for General Functional Estimation By Jikai Jin; Vasilis Syrgkanis
  9. Difference-in-Kinks Design By Böckerman, Petri; Jysmä, Sami; Kanninen, Ohto
  10. Efficient two-stage estimation of cyclical ARCH models By Aknouche, Abdelhakim; Bentarzi, Mohamed
  11. Asymptotic and finite-sample distributions of one- and two-sample empirical relative entropy, with application to change-point detection By Matthieu Garcin; Louis Perot
  12. Evaluating Counterfactual Policies Using Instruments By Michal Kolesár; José Luis Montiel Olea; Jonathan Roth
  13. Causal-Policy Forest for End-to-End Policy Learning By Masahiro Kato
  14. Difference-in-Differences using Double Negative Controls and Graph Neural Networks for Unmeasured Network Confounding By Zihan Zhang; Lianyan Fu; Dehui Wang
  15. Harvesting Differences-in-Differences and Event-Study Evidence By Alberto Abadie; Joshua Angrist; Brigham Frandsen; Jörn-Steffen Pischke
  16. Canonical correlation regression with noisy data By Isaac Meza; Rahul Singh
  17. TWICE: Tree-based Wage Inference with Clustering and Estimation By Aslan Bakirov; Francesco Del Prato; Paolo Zacchia
  18. Testing Monotonicity in a Finite Population By Jiafeng Chen; Jonathan Roth; Jann Spiess
  19. Continuous time asymptotic representations for adaptive experiments By Karun Adusumilli
  20. Scaling Causal Mediation for Complex Systems: A Framework for Root Cause Analysis By Alessandro Casadei; Sreyoshi Bhaduri; Rohit Malshe; Pavan Mullapudi; Raj Ratan; Ankush Pole; Arkajit Rakshit
  21. Nonparametric Identification of Demand without Exogenous Product Characteristics By Kirill Borusyak; Jiafeng Chen; Peter Hull; Lihua Lei
  22. Econometric Modeling of Input-Driven Output Risk through a Versatile CES Production Function By Ali Zeytoon-Nejad; Barry Goodwin
  23. Exponentially weighted estimands and the exponential family: filtering, prediction and smoothing By Simon Donker van Heel; Neil Shephard
  24. Random Placement but Real Bias By Schmandt, Marco; Tielkes, Constantin; Weinhardt, Felix
  25. How (Not) to Identify Demand Elasticities in Dynamic Asset Markets By Jules H. van Binsbergen; Benjamin David; Christian C. Opp
  26. Decomposing the Output Gap. Robust Univariate and Multivariate Hodrick–Prescott Filtering with Extreme Observations By Håvard Hungnes
  27. A Lifecycle Estimator of Intergenerational Income Mobility By Ursula Mello; Martin Nybom; Jan Stuhler
  28. Mixed Frequency Data in a Heterogenous Sticky Price Model By Andersson, Jonas; Nilsen, Øivind Anti; Skaug, Hans Julius
  29. Time Series Clustering in High Dimensional Cointegration Analysis: The Case of African Swine Fever in China By Peng, Rundong; Mallory, Mindy; Ma, Meilin; Wang, H. Holly

  1. By: Grigory Franguridi; Laura Liu
    Abstract: Partial identification often arises when the joint distribution of the data is known only up to its marginals. We consider the corresponding partially identified GMM model and develop a methodology for identification, estimation, and inference in this model. We characterize the sharp identified set for the parameter of interest via a support-function/optimal-transport (OT) representation. For estimation, we employ entropic regularization, which provides a smooth approximation to classical OT and can be computed efficiently by the Sinkhorn algorithm. We also propose a statistic for testing hypotheses and constructing confidence regions for the identified set. To derive the asymptotic distribution of this statistic, we establish a novel central limit theorem for the entropic OT value under general smooth costs. We then obtain valid critical values using the bootstrap for directionally differentiable functionals of Fang and Santos (2019). The resulting testing procedure controls size locally uniformly, including at parameter values on the boundary of the identified set. We illustrate its performance in a Monte Carlo simulation. Our methodology is applicable to a wide range of empirical settings, such as panels with attrition and refreshment samples, nonlinear treatment effects, nonparametric instrumental variables without large-support conditions, and Euler equations with repeated cross-sections.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.18084
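    Sketch: A minimal Sinkhorn iteration for the entropic OT value at the core of the estimation step. The uniform marginals, quadratic cost, and regularization level are illustrative assumptions; the paper's GMM embedding and bootstrap inference are not reproduced here.

    import numpy as np

    def sinkhorn_value(a, b, C, eps=0.05, n_iter=2000):
        # entropic OT: min_P <P, C> + eps * KL(P || a b^T) with marginals (a, b)
        K = np.exp(-C / eps)                 # Gibbs kernel
        u = np.ones_like(a)
        for _ in range(n_iter):
            v = b / (K.T @ u)                # match the column marginal
            u = a / (K @ v)                  # match the row marginal
        P = u[:, None] * K * v[None, :]      # entropic transport plan
        return float(np.sum(P * C)), P

    # toy example: uniform marginals on a grid with quadratic cost
    x = np.linspace(0.0, 1.0, 50)
    a = b = np.full(50, 1 / 50)
    cost, plan = sinkhorn_value(a, b, (x[:, None] - x[None, :]) ** 2)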
  2. By: Elynn Chen; Yuefeng Han; Jiayu Li; Ke Xu
    Abstract: We introduce a Modewise Additive Factor Model (MAFM) for matrix-valued time series that captures row-specific and column-specific latent effects through an additive structure, offering greater flexibility than multiplicative frameworks such as Tucker and CP factor models. In MAFM, each observation decomposes into a row-factor component, a column-factor component, and noise, allowing distinct sources of variation along different modes to be modeled separately. We develop a computationally efficient two-stage estimation procedure: Modewise Inner-product Eigendecomposition (MINE) for initialization, followed by Complement-Projected Alternating Subspace Estimation (COMPAS) for iterative refinement. The key methodological innovation is that orthogonal complement projections completely eliminate cross-modal interference when estimating each loading space. We establish convergence rates for the estimated factor loading matrices under proper conditions. We further derive asymptotic distributions for the loading matrix estimators and develop consistent covariance estimators, yielding a data-driven inference framework that enables confidence interval construction and hypothesis testing. As a technical contribution of independent interest, we establish matrix Bernstein inequalities for quadratic forms of dependent matrix time series. Numerical experiments on synthetic and real data demonstrate the advantages of the proposed method over existing approaches.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.25025
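    Sketch: A schematic of the two-stage logic for the additive model X_t = row component + column component + noise: modewise eigendecomposition to initialize, then alternating re-estimation after projecting out the other mode's estimated space. This generic version only mimics the MINE/COMPAS structure described above; the factor ranks and update details are assumptions, not the authors' algorithm.

    import numpy as np

    def top_eigvecs(M, k):
        w, V = np.linalg.eigh(M)
        return V[:, np.argsort(w)[::-1][:k]]

    def additive_loadings(X, k_row, k_col, n_iter=10):
        # X: list of p x q observation matrices
        p, q = X[0].shape
        R = top_eigvecs(sum(x @ x.T for x in X), k_row)   # modewise init, row space
        C = top_eigvecs(sum(x.T @ x for x in X), k_col)   # modewise init, column space
        for _ in range(n_iter):
            Pc = np.eye(q) - C @ C.T     # complement projection removes the column component
            R = top_eigvecs(sum((x @ Pc) @ (x @ Pc).T for x in X), k_row)
            Pr = np.eye(p) - R @ R.T     # complement projection removes the row component
            C = top_eigvecs(sum((Pr @ x).T @ (Pr @ x) for x in X), k_col)
        return R, C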
  3. By: Mariluz Mate
    Abstract: Many empirical studies estimate causal effects in environments where economic units interact through spatial or network connections. In such settings, outcomes are jointly determined, and treatment-induced shocks propagate across economically connected units. A growing literature highlights identification challenges in these models and questions the causal interpretation of estimated spillovers. This paper argues that the problem is more fundamental. Under interdependence, causal effects are not uniquely defined objects even when the interaction structure is correctly specified or consistently learned, and even under ideal identifying conditions. We develop a causal framework for firm-level economies in which interaction structures are unobserved but can be learned from predetermined characteristics. We show that learning the network, while necessary to model interdependence, is not sufficient for causal interpretation. Instead, causal conclusions hinge on explicit counterfactual assumptions governing how outcomes adjust following a treatment change. We formalize three economically meaningful counterfactual regimes (partial equilibrium, local interaction, and network-consistent equilibrium) and show that standard spatial autoregressive estimates map into distinct causal effects depending on the counterfactual adopted. We derive identification conditions for each regime and demonstrate that equilibrium causal effects require substantially stronger assumptions than direct or local effects. A Monte Carlo simulation illustrates that equilibrium and partial-equilibrium effects differ mechanically even before estimation, and that network feedback can amplify bias when identifying assumptions fail. Taken together, our results clarify what existing spatial and network estimators can and cannot identify and provide practical guidance for empirical research in interdependent economic environments.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.00279
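    Sketch: The three counterfactual regimes can be made concrete in a linear spatial-autoregressive toy economy y = rho*W*y + beta*d + e. The interaction matrix, rho, and beta below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    W = rng.random((n, n))
    np.fill_diagonal(W, 0.0)
    W /= W.sum(axis=1, keepdims=True)       # row-standardized interaction matrix
    rho, beta = 0.4, 1.0
    d = np.zeros(n)
    d[0] = 1.0                              # treat firm 0

    partial_eq = beta * d                                          # partial equilibrium: no adjustment elsewhere
    local = beta * d + rho * W @ (beta * d)                        # local interaction: one round of spillovers
    equilibrium = np.linalg.solve(np.eye(n) - rho * W, beta * d)   # network-consistent equilibrium: full feedback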
  4. By: Tim Kutta; Martin Schumann; Holger Dette
    Abstract: Panels with large time $(T)$ and cross-sectional $(N)$ dimensions are a key data structure in social sciences and other fields. A central question in panel data analysis is whether to pool data across individuals or to estimate separate models. Pooled estimators typically have lower variance but may suffer from bias, creating a fundamental trade-off for optimal estimation. We develop a new inference method to compare the forecasting performance of pooled and individual estimators. Specifically, we propose a confidence interval for the difference between their forecasting errors and establish its asymptotic validity. Our theory allows for complex temporal and cross-sectional dependence in the model errors and covers scenarios where $N$ can be much larger than $T$, including the independent case under the classical condition $N/T^2 \to 0$. The finite-sample properties of the proposed method are examined in an extensive simulation study.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.15592
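    Sketch: The pooled-versus-individual trade-off in a toy slope-heterogeneity simulation, with a normal-approximation interval for the difference in squared forecast errors. Cross-sectional independence is assumed here, which the paper's theory relaxes.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 200, 40
    beta_i = 1.0 + rng.normal(0.0, 0.3, N)                 # heterogeneous true slopes
    x = rng.normal(size=(N, T + 1))
    y = beta_i[:, None] * x + rng.normal(size=(N, T + 1))

    xtr, ytr = x[:, :T], y[:, :T]
    b_pool = np.sum(xtr * ytr) / np.sum(xtr ** 2)                  # one pooled slope
    b_ind = np.sum(xtr * ytr, axis=1) / np.sum(xtr ** 2, axis=1)   # unit-specific slopes

    e_pool = (y[:, T] - b_pool * x[:, T]) ** 2     # squared forecast errors at t = T
    e_ind = (y[:, T] - b_ind * x[:, T]) ** 2
    diff = e_pool - e_ind                          # positive values favor individual estimation
    ci = diff.mean() + np.array([-1.96, 1.96]) * diff.std(ddof=1) / np.sqrt(N)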
  5. By: Yan Chen; Lihua Lei
    Abstract: Estimating the means of multiple binomial outcomes is a common problem in many applications -- assessing intergenerational mobility of census tracts, estimating prevalence of infectious diseases across countries, and measuring click-through rates for different demographic groups. The most standard approach is to report the plain average of each outcome. Despite its simplicity, the estimates are noisy when the sample sizes or mean parameters are small. In contrast, Empirical Bayes (EB) methods can boost the average accuracy by borrowing information across tasks. Nevertheless, EB methods require a Bayesian model where the parameters are sampled from a prior distribution which, unlike the commonly-studied Gaussian case, is unidentified due to discreteness of binomial measurements. Even if the prior distribution is known, the computation is difficult when the sample sizes are heterogeneous as there is no simple joint conjugate prior for the sample size and mean parameter. In this paper, we consider the compound decision framework which treats the sample size and mean parameters as fixed quantities. We develop an approximate Stein's Unbiased Risk Estimator (SURE) for the average mean squared error given any class of estimators. For a class of machine learning-assisted linear shrinkage estimators, we establish asymptotic optimality, regret bounds, and valid inference. Unlike existing work, we work with the binomials directly without resorting to Gaussian approximations. This allows us to work with small sample sizes and/or mean parameters in both one-sample and two-sample settings. We demonstrate our approach using three datasets on firm discrimination, education outcomes, and innovation rates.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.25042
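    Sketch: A stripped-down risk-estimate-tuned linear shrinkage for binomial proportions, working with binomial variances directly rather than Gaussian approximations. The single common weight, the fixed grand mean, and the crude plug-in for the squared gap are simplifications; the paper's approximate SURE and ML-assisted estimator class are far more refined.

    import numpy as np

    def shrink_binomial(s, n, grid=np.linspace(0.0, 1.0, 101)):
        x = s / n                                       # raw proportions: s successes out of n
        var_hat = x * (1 - x) / np.maximum(n - 1, 1)    # unbiased for p(1-p)/n
        m = x.mean()
        gap = (x - m) ** 2 - var_hat                    # rough unbiased proxy for (p_i - m)^2
        risk = [np.sum(w ** 2 * var_hat + (1 - w) ** 2 * gap) for w in grid]
        w = grid[int(np.argmin(risk))]
        return w * x + (1 - w) * m                      # shrink every proportion toward the grand mean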
  6. By: Junting Duan; Markus Pelger
    Abstract: Missing covariate data is a prevalent problem in empirical research. We provide a novel framework for handling missing covariate data for estimation and inference in downstream tasks. Our general framework provides an automatic and easy-to-use pipeline for empirical researchers: First, missing values are imputed using virtually any imputation method under general observation patterns. Second, we automatically correct for the imputation bias and adaptively weight the imputed values according to their quality. Third, we use all available data, including imputed observations, to obtain more precise point estimates for the downstream task with valid confidence intervals. Our approach ensures valid inference while improving statistical efficiency by leveraging all available data. We establish the asymptotic normality of the proposed estimator under general missing data patterns and a broad class of imputation methods. Through simulations, we demonstrate the superior performance of our approach over natural benchmarks, as it achieves both lower bias and variance while being robust to imputation quality. In a comprehensive empirical study of the dependence of equity markets on carbon emissions, we show that properly accounting for missing emissions data yields no evidence of correlation between stock returns and emissions directly produced by companies, but a negative correlation with value chain emissions.
    JEL: C01 C1 C10 C12 C14 C19 C5 C53 C55 C58
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34535
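    Sketch: For the simplest downstream task, a covariate mean, the impute-then-debias idea fits in a few lines, in the spirit of prediction-powered inference. The missing-completely-at-random setting and the plain-mean estimand are simplifying assumptions relative to the paper's general pipeline.

    import numpy as np

    def debiased_mean(x_obs, xhat_obs, xhat_mis):
        # x_obs: observed covariate values; xhat_obs, xhat_mis: imputations for
        # the observed and missing units. Use imputations on the missing units,
        # then correct with the imputation residuals on the observed units.
        n, m = len(x_obs), len(xhat_mis)
        est = xhat_mis.mean() + (x_obs - xhat_obs).mean()
        se = np.sqrt(xhat_mis.var(ddof=1) / m + (x_obs - xhat_obs).var(ddof=1) / n)
        return est, est + np.array([-1.96, 1.96]) * se    # point estimate and 95% CI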
  7. By: Muyang Ren
    Abstract: To evaluate the effectiveness of a counterfactual policy, it is often necessary to extrapolate treatment effects on compliers to broader populations. This extrapolation relies on exogenous variation in instruments, which is often weak in practice. This limited variation leads to invalid confidence intervals that are typically too short and cannot be accurately detected by classical methods. For instance, the F-test may falsely conclude that the instruments are strong. Consequently, I develop inference results that are valid even with limited variation in the instruments. These results lead to asymptotically valid confidence sets for various linear functionals of marginal treatment effects, including LATE, ATE, ATT, and policy-relevant treatment effects, regardless of identification strength. This is the first paper to provide weak instrument robust inference results for this class of parameters. Finally, I illustrate my results using data from Agan, Doleac, and Harvey (2023) to analyze counterfactual policies of changing prosecutors' leniency and their effects on reducing recidivism.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.23854
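    Sketch: The classical building block for identification-robust inference is test inversion; an Anderson-Rubin confidence set for a single endogenous regressor is shown below. The linear model, single instrument, and grid are illustrative assumptions; the paper targets a much broader class of MTE functionals.

    import numpy as np
    from scipy import stats

    def ar_confidence_set(y, d, z, grid, alpha=0.05):
        crit = stats.chi2.ppf(1 - alpha, df=1)
        zc = z - z.mean()
        keep = []
        for b in grid:
            e = y - b * d                  # structural residual under H0: beta = b
            e = e - e.mean()
            ar = (zc @ e) ** 2 / np.sum(zc ** 2 * e ** 2)   # heteroskedasticity-robust AR statistic
            if ar <= crit:
                keep.append(b)             # b is not rejected: keep it in the set
        return np.array(keep)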
  8. By: Jikai Jin; Vasilis Syrgkanis
    Abstract: The design of efficient nonparametric estimators has long been a central problem in statistics, machine learning, and decision making. Classical optimal procedures often rely on strong structural assumptions, which can be misspecified in practice and complicate deployment. This limitation has sparked growing interest in structure-agnostic approaches -- methods that debias black-box nuisance estimates without imposing structural priors. Understanding the fundamental limits of these methods is therefore crucial. This paper provides a systematic investigation of the optimal error rates achievable by structure-agnostic estimators. We first show that, for estimating the average treatment effect (ATE), a central parameter in causal inference, doubly robust learning attains optimal structure-agnostic error rates. We then extend our analysis to a general class of functionals that depend on unknown nuisance functions and establish the structure-agnostic optimality of debiased/double machine learning (DML). We distinguish two regimes -- one where double robustness is attainable and one where it is not -- leading to different optimal rates for first-order debiasing, and show that DML is optimal in both regimes. Finally, we instantiate our general lower bounds by deriving explicit optimal rates that recover existing results and extend to additional estimands of interest. Our results provide theoretical validation for widely used first-order debiasing methods and guidance for practitioners seeking optimal approaches in the absence of structural assumptions. This paper generalizes and subsumes the ATE lower bound established in Jin and Syrgkanis (2024) by the same authors.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.17341
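    Sketch: For the ATE, the first-order debiasing analyzed above is the cross-fitted AIPW/DML estimator; a generic version with black-box nuisance learners follows. The gradient-boosting learners and the clipping level are arbitrary stand-ins.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
    from sklearn.model_selection import KFold

    def dml_ate(x, d, y, n_splits=5):
        # x: covariates, d in {0, 1}: treatment, y: outcome
        psi = np.zeros(len(y))
        for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(x):
            e = GradientBoostingClassifier().fit(x[tr], d[tr]).predict_proba(x[te])[:, 1]
            e = np.clip(e, 0.01, 0.99)    # guard against extreme propensities
            mu1 = GradientBoostingRegressor().fit(x[tr][d[tr] == 1], y[tr][d[tr] == 1]).predict(x[te])
            mu0 = GradientBoostingRegressor().fit(x[tr][d[tr] == 0], y[tr][d[tr] == 0]).predict(x[te])
            # AIPW score: plug-in difference plus inverse-propensity residual corrections
            psi[te] = mu1 - mu0 + d[te] * (y[te] - mu1) / e - (1 - d[te]) * (y[te] - mu0) / (1 - e)
        return psi.mean(), psi.std(ddof=1) / np.sqrt(len(y))   # estimate and standard error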
  9. By: Böckerman, Petri (University of Jyväskylä); Jysmä, Sami (Labour Institute for Economic Research); Kanninen, Ohto (LABORE Labour Institute for Economic Research)
    Abstract: This paper introduces the Difference-in-Kinks (DiK) design, an econometric framework that extends the standard regression kink design to settings in which the slope of a policy rule varies over time. By combining the key features of the regression kink and difference-in-differences approaches, the DiK design identifies causal effects from variation in kink intensity over time. We formalize both sharp and fuzzy versions of the estimator and derive the identification conditions under a parallel-trends assumption. Applying DiK to Finland’s 2011 guarantee pension reform demonstrates that changes in marginal incentives significantly increased the probability of retirement, while the standard regression kink design would have obtained implausibly large estimates in the opposite direction. The DiK design thus offers a flexible framework for policy evaluation in dynamic, nonlinear environments.
    Keywords: policy evaluation, difference-in-differences, regression kink design, causal inference, treatment effects
    JEL: C21 C14 J26
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp18313
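    Sketch: The sharp version of the estimator can be read off a single interacted regression: the change across periods in the slope change at the kink. The variable names and the single post-reform dummy are illustrative assumptions.

    import numpy as np

    def dik_estimate(y, x, post, kink=0.0):
        # x: running variable; post: 1 after the reform changes the rule's slope
        k = np.maximum(x - kink, 0.0)     # kink (slope-change) term
        X = np.column_stack([np.ones_like(x), x, k, post, post * x, post * k])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef[5]                    # DiK: difference in kink coefficients across periods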
  10. By: Aknouche, Abdelhakim; Bentarzi, Mohamed
    Abstract: Two estimation algorithms for Periodic Autoregressive Conditionally Heteroskedastic (PARCH) models are developed in this work. The first is the two-stage weighted least squares (2S-WLS) algorithm, which adapts the ordinary least squares method for use in the periodic ARCH framework. The second, 2S-RLS, is an adaptation of the former for recursive online estimation contexts. Both algorithms produce consistent and asymptotically normally distributed estimators. Furthermore, the second method is particularly well-suited to capturing the dynamic characteristics of financial time series that are increasingly being observed at high frequencies. It also enables effective monitoring of positivity and periodic stationarity constraints.
    Keywords: Periodic ARCH, recursive online estimation, two-stage weighted least squares, two-stage recursive least squares, asymptotic normality.
    JEL: C10 C13
    Date: 2025–12–20
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:127417
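    Sketch: The 2S-WLS idea for a first-order periodic ARCH, E[y_t^2 | past] = omega_s + alpha_s * y_{t-1}^2 with s = t mod S: OLS on squared observations, then reweighting by the fitted conditional variance. The first-order lag structure and the variance clipping are simplifying assumptions.

    import numpy as np

    def twostage_wls_parch(y, S):
        y2 = y ** 2
        T = len(y)
        t = np.arange(1, T)
        X = np.zeros((T - 1, 2 * S))
        for s in range(S):
            m = (t % S) == s
            X[m, s] = 1.0                 # omega_s: period-specific intercept
            X[m, S + s] = y2[t[m] - 1]    # alpha_s * y_{t-1}^2
        z = y2[1:]
        b_ols, *_ = np.linalg.lstsq(X, z, rcond=None)    # stage 1: OLS on squared data
        h = np.clip(X @ b_ols, 1e-6, None)               # fitted conditional variances
        b_wls, *_ = np.linalg.lstsq(X / h[:, None], z / h, rcond=None)   # stage 2: weights 1/h_t^2
        return b_wls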
  11. By: Matthieu Garcin; Louis Perot
    Abstract: Relative entropy, as a divergence metric between two distributions, can be used for offline change-point detection and extends classical methods that mainly rely on moment-based discrepancies. To build a statistical test suitable for this context, we study the distribution of empirical relative entropy and derive several types of approximations: concentration inequalities for finite samples, asymptotic distributions, and Berry-Esseen bounds in a pre-asymptotic regime. For the latter, we introduce a new approach to obtain Berry-Esseen inequalities for nonlinear functions of sum statistics under some convexity assumptions. Our theoretical contributions cover both one- and two-sample empirical relative entropies. We then detail a change-point detection procedure built on relative entropy and compare it, through extensive simulations, with classical methods based on moments or on information criteria. Finally, we illustrate its practical relevance on two real datasets involving temperature series and volatility of stock indices.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.16411
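    Sketch: An offline detector built on two-sample empirical relative entropy reduces to a scan over candidate split points. The histogram estimator, smoothing constant, and plain argmax are assumptions; the paper's tests rest on the distributional theory it develops.

    import numpy as np

    def kl_hat(a, b, bins):
        # empirical relative entropy D(P_a || P_b) from histograms on common bins
        p, _ = np.histogram(a, bins=bins)
        q, _ = np.histogram(b, bins=bins)
        p = (p + 0.5) / (p + 0.5).sum()   # light smoothing avoids log(0)
        q = (q + 0.5) / (q + 0.5).sum()
        return float(np.sum(p * np.log(p / q)))

    def scan_change_point(x, bins, trim=30):
        # candidate split maximizing the divergence between the two segments
        stats = [kl_hat(x[:k], x[k:], bins) for k in range(trim, len(x) - trim)]
        return trim + int(np.argmax(stats))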
  12. By: Michal Kolesár; José Luis Montiel Olea; Jonathan Roth
    Abstract: We study settings in which a researcher has an instrumental variable (IV) and seeks to evaluate the effects of a counterfactual policy that alters treatment assignment, such as a directive encouraging randomly assigned judges to release more defendants. We develop a general and computationally tractable framework for computing sharp bounds on the effects of such policies. Our approach does not require the often tenuous IV monotonicity assumption. Moreover, for an important class of policy exercises, we show that IV monotonicity -- while crucial for a causal interpretation of two-stage least squares -- does not tighten the bounds on the counterfactual policy impact. We analyze the identifying power of alternative restrictions, including the policy invariance assumption used in the marginal treatment effect literature, and develop a relaxation of this assumption. We illustrate our framework using applications to quasi-random assignment of bail judges in New York City and prosecutors in Massachusetts.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.24096
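    Sketch: In the binary-instrument, binary-treatment, binary-outcome special case, sharp bounds of this kind are a small linear program over latent response types (the classic Balke-Pearl construction); the version below bounds the ATE without imposing IV monotonicity. It is a special case standing in for the paper's general counterfactual-policy programs.

    import numpy as np
    from itertools import product
    from scipy.optimize import linprog

    TYPES = list(product([0, 1], repeat=4))    # latent types (D(0), D(1), Y(0), Y(1))

    def ate_bounds(p):
        # p[z][d][y] = observed P(Y = y, D = d | Z = z)
        A_eq, b_eq = [], []
        for z, d, y in product([0, 1], repeat=3):
            row = [1.0 if (t[z] == d and t[2 + d] == y) else 0.0 for t in TYPES]
            A_eq.append(row)
            b_eq.append(p[z][d][y])
        A_eq.append([1.0] * len(TYPES))        # type probabilities sum to one
        b_eq.append(1.0)
        c = np.array([t[3] - t[2] for t in TYPES], dtype=float)   # Y(1) - Y(0) by type
        lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(TYPES))
        hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(TYPES))
        return lo.fun, -hi.fun                 # sharp lower and upper ATE bounds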
  13. By: Masahiro Kato
    Abstract: This study proposes an end-to-end algorithm for policy learning in causal inference. We observe data consisting of covariates, treatment assignments, and outcomes, where only the outcome corresponding to the assigned treatment is observed. The goal of policy learning is to train, from the observed data, a policy (a function that recommends an optimal treatment for each individual) that maximizes the policy value. In this study, we first show that maximizing the policy value is equivalent to minimizing the mean squared error for the conditional average treatment effect (CATE) under $\{-1, 1\}$ restricted regression models. Based on this finding, we modify the causal forest, an end-to-end CATE estimation algorithm, for policy learning. We refer to our algorithm as the causal-policy forest. Our algorithm has three advantages. First, it is a simple modification of an existing, widely used CATE estimation method; it therefore helps bridge the gap between policy learning and CATE estimation in practice. Second, while existing studies typically estimate nuisance parameters for policy learning as a separate task, our algorithm trains the policy in a more end-to-end manner. Third, as in standard decision trees and random forests, we train the models efficiently, avoiding computational intractability.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.22846
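    Sketch: The motivating equivalence (policy value maximization as CATE regression with recommendations in {-1, 1}) can be mimicked generically: regress unbiased pseudo-outcomes on covariates with a forest and recommend the sign. This stand-in uses a plain random forest and known propensities, not the authors' modified causal forest.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def policy_from_scores(x, d, y, e):
        # d in {0, 1}; e: known treatment propensities (e.g., from an experiment)
        gamma = d * y / e - (1 - d) * y / (1 - e)    # unbiased pseudo-outcome for the CATE
        forest = RandomForestRegressor(n_estimators=500, min_samples_leaf=20).fit(x, gamma)
        return lambda xnew: np.where(forest.predict(xnew) > 0, 1, -1)   # policy in {-1, 1}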
  14. By: Zihan Zhang; Lianyan Fu; Dehui Wang
    Abstract: Estimating causal effects from observational network data faces dual challenges of network interference and unmeasured confounding. To address this, we propose a general Difference-in-Differences framework that integrates double negative controls (DNC) and graph neural networks (GNNs). Based on the modified parallel trends assumption and DNC, semiparametric identification of direct and indirect causal effects is established. We then propose doubly robust estimators. Specifically, an approach combining GNNs with the generalized method of moments is developed to estimate the functions of high-dimensional covariates and network structure. Furthermore, we derive the estimator's asymptotic normality under the $\psi$-network dependence and approximate neighborhood interference. Simulations show the finite-sample performance of our estimators. Finally, we apply our method to analyze the impact of China's green credit policy on corporate green innovation.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.00603
  15. By: Alberto Abadie; Joshua Angrist; Brigham Frandsen; Jörn-Steffen Pischke
    Abstract: This paper surveys econometric innovations related to differences-in-differences estimators and event-study models with time-varying treatment effects. Our discussion highlights tricky normalization issues, heterogeneous policy effects, the interpretation of exposure designs, pretrends pretesting, and the ever-bothersome question of logs versus levels. Key ideas are illustrated with applications.
    JEL: C23 C5
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34550
  16. By: Isaac Meza; Rahul Singh
    Abstract: We study instrumental variable regression in data rich environments. The goal is to estimate a linear model from many noisy covariates and many noisy instruments. Our key assumption is that true covariates and true instruments are repetitive, though possibly different in nature; they each reflect a few underlying factors, however those underlying factors may be misaligned. We analyze a family of estimators based on two stage least squares with spectral regularization: canonical correlations between covariates and instruments are learned in the first stage, which are used as regressors in the second stage. As a theoretical contribution, we derive upper and lower bounds on estimation error, proving optimality of the method with noisy data. As a practical contribution, we provide guidance on which types of spectral regularization to use in different regimes.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.22697
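    Sketch: One member of the estimator family, written directly: whiten, take the top-k canonical directions between covariates and instruments from an SVD of the cross-covariance, then run IV on the canonical variates. Rank-k truncation is only one of the spectral regularizers the paper analyzes.

    import numpy as np

    def sqrtm_psd(S):
        w, V = np.linalg.eigh(S)
        return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

    def cca_iv(y, X, Z, k):
        n = len(y)
        Xc, Zc = X - X.mean(0), Z - Z.mean(0)
        Wx = np.linalg.pinv(sqrtm_psd(Xc.T @ Xc / n))   # whitening for covariates
        Wz = np.linalg.pinv(sqrtm_psd(Zc.T @ Zc / n))   # whitening for instruments
        U, s, Vt = np.linalg.svd(Wx @ (Xc.T @ Zc / n) @ Wz)
        A, B = Wx @ U[:, :k], Wz @ Vt[:k].T             # top-k canonical directions
        Xk, Zk = Xc @ A, Zc @ B                         # canonical variates
        beta_k, *_ = np.linalg.lstsq(Zk.T @ Xk, Zk.T @ (y - y.mean()), rcond=None)
        return A @ beta_k                               # coefficients on the original covariates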
  17. By: Aslan Bakirov; Francesco Del Prato; Paolo Zacchia
    Abstract: How much do worker skills, firm pay policies, and their interaction contribute to wage inequality? Standard approaches rely on latent fixed effects identified through worker mobility, but sparse networks inflate variance estimates, additivity assumptions rule out complementarities, and the resulting decompositions lack interpretability. We propose TWICE (Tree-based Wage Inference with Clustering and Estimation), a framework that models the conditional wage function directly from observables using gradient-boosted trees, replacing latent effects with interpretable, observable-anchored partitions. This trades off the ability to capture idiosyncratic unobservables for robustness to sampling noise and out-of-sample portability. Applied to Portuguese administrative data, TWICE outperforms linear benchmarks out of sample and reveals that sorting and non-additive interactions explain substantially more wage dispersion than implied by standard AKM estimates.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.00776
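    Sketch: The modeling step, minus the partitioning and decomposition machinery: fit the conditional wage function on worker and firm observables with gradient-boosted trees and compare out-of-sample fit against an additive linear benchmark. The synthetic data with a worker-firm interaction is purely illustrative.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 6))          # columns: worker and firm observables
    w = X[:, 0] + 0.5 * X[:, 3] + 0.7 * X[:, 0] * X[:, 3] + rng.normal(0.0, 0.5, 5000)

    Xtr, Xte, wtr, wte = train_test_split(X, w, random_state=0)
    gbm = GradientBoostingRegressor().fit(Xtr, wtr)   # captures non-additive interactions
    lin = LinearRegression().fit(Xtr, wtr)            # additive benchmark
    print(gbm.score(Xte, wte), lin.score(Xte, wte))   # out-of-sample R^2 comparison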
  18. By: Jiafeng Chen; Jonathan Roth; Jann Spiess
    Abstract: We consider the extent to which we can learn from a completely randomized experiment whether everyone has treatment effects that are weakly of the same sign, a condition we call monotonicity. From a classical sampling perspective, it is well-known that monotonicity is untestable. By contrast, we show from the design-based perspective -- in which the units in the population are fixed and only treatment assignment is stochastic -- that the distribution of treatment effects in the finite population (and hence whether monotonicity holds) is formally identified. We argue, however, that the usual definition of identification is unnatural in the design-based setting because it imagines knowing the distribution of outcomes over different treatment assignments for the same units. We thus evaluate the informativeness of the data by the extent to which it enables frequentist testing and Bayesian updating. We show that frequentist tests can have nontrivial power against some alternatives, but power is generically limited. Likewise, we show that there exist (non-degenerate) Bayesian priors that never update about whether monotonicity holds. We conclude that, despite the formal identification result, the ability to learn about monotonicity from data in practice is severely limited.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.25032
  19. By: Karun Adusumilli
    Abstract: This article develops a continuous-time asymptotic framework for analyzing adaptive experiments -- settings in which data collection and treatment assignment evolve dynamically in response to incoming information. A key challenge in analyzing fully adaptive experiments, where the assignment policy is updated after each observation, is that the sequence of policy rules often lacks a well-defined asymptotic limit. To address this, we focus instead on the empirical allocation process, which captures the fraction of observations assigned to each treatment over time. We show that, under general conditions, any adaptive experiment and its associated empirical allocation process can be approximated by a limit experiment defined by Gaussian diffusions with unknown drifts and a corresponding continuous-time allocation process. This limit representation facilitates the analysis of optimal decision rules by reducing the dimensionality of the state-space and leveraging the tractability of Gaussian diffusions. We apply the framework to derive optimal estimators, analyze in-sample regret for adaptive experiments, and construct e-processes for anytime-valid inference. Notably, we introduce the first definition of any-time and any-experiment valid inference for multi-treatment settings.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.00739
  20. By: Alessandro Casadei; Sreyoshi Bhaduri; Rohit Malshe; Pavan Mullapudi; Raj Ratan; Ankush Pole; Arkajit Rakshit
    Abstract: Modern operational systems, ranging from logistics and cloud infrastructure to industrial IoT, are governed by complex, interdependent processes. Understanding how interventions propagate through such systems requires causal inference methods that go beyond direct effects to quantify mediated pathways. Traditional mediation analysis, while effective in simple settings, fails to scale to the high-dimensional directed acyclic graphs (DAGs) encountered in practice, particularly when multiple treatments and mediators interact. In this paper, we propose a scalable mediation analysis framework tailored for large causal DAGs involving multiple treatments and mediators. Our approach systematically decomposes total effects into interpretable direct and indirect components. We demonstrate its practical utility through applied case studies in fulfillment center logistics, where complex dependencies and non-controllable factors often obscure root causes.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.14764
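    Sketch: In the linear-SEM special case, decomposing a total effect into direct and mediated components is path-tracing on the DAG: each directed path contributes the product of its edge coefficients. The toy graph is an assumption; the paper's framework is built for much larger DAGs with multiple treatments and mediators.

    # edge coefficients of a toy linear structural model on a DAG
    EDGES = {("T", "M1"): 0.5, ("M1", "Y"): 0.8,
             ("T", "M2"): 0.4, ("M2", "Y"): -0.2, ("T", "Y"): 0.3}

    def total_effect(src, dst):
        # sum over directed paths of the product of edge coefficients
        if src == dst:
            return 1.0
        return sum(w * total_effect(v, dst) for (u, v), w in EDGES.items() if u == src)

    direct = EDGES[("T", "Y")]
    indirect = total_effect("T", "Y") - direct     # mediated through M1 and M2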
  21. By: Kirill Borusyak; Jiafeng Chen; Peter Hull; Lihua Lei
    Abstract: We study the identification of differentiated product demand with exogenous supply-side instruments, allowing product characteristics to be endogenous. Past analyses have argued that exogenous characteristic-based instruments are essentially necessary given a sufficiently flexible demand model with a suitable index restriction. We show, however, that price counterfactuals are nonparametrically identified by recentered instruments -- which combine exogenous shocks to prices with endogenous product characteristics -- under a weaker index restriction and a new condition we term faithfulness. We argue that faithfulness, like the usual completeness condition for nonparametric identification with instruments, can be viewed as a technical requirement on the richness of identifying variation rather than a substantive economic restriction, and we show that it holds under a variety of non-nested conditions on either price-setting or the index.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.23211
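    Sketch: The recentering step itself is mechanical once counterfactual shock draws are available: subtract from the constructed instrument its mean over those draws, which removes the component predictable from the endogenous characteristics alone. The generic function signature is an assumption.

    import numpy as np

    def recenter(make_z, shocks, shock_draws, chars):
        # make_z(shocks, chars): builds one instrument value per product by
        # combining exogenous shocks with (possibly endogenous) characteristics
        z = make_z(shocks, chars)
        mu = np.mean([make_z(s, chars) for s in shock_draws], axis=0)
        return z - mu                     # recentered instrument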
  22. By: Ali Zeytoon-Nejad; Barry Goodwin
    Abstract: The conventional functional form of the Constant-Elasticity-of-Substitution (CES) production function is a general production function nesting a number of other forms of production functions. Examples of such functions include Leontief, Cobb-Douglas, and linear production functions. Nevertheless, the conventional form of the CES production specification is still restrictive in multiple aspects. One example is that, by the conventional construction of this function, increasing input use must always increase the variability of output quantity. This paper proposes a generalized variant of the CES production function that allows for various input effects on the probability distribution of output. Failure to allow for this possible input-output risk structure is indeed one of the limitations of the conventional form of the CES production function. This limitation may result in false inferences about input-driven output risk. In light of this, the present paper proposes a solution to this problem. First, it is shown that the familiar CES formulation suffers from very restrictive structural assumptions regarding risk considerations, and that such restrictions may lead to biased and inefficient estimates of production quantity and production risk. Following the general theme of Just and Pope's approach, a CES-based production-function specification that overcomes this shortcoming of the original CES production function is introduced, and a three-stage Nonlinear Least-Squares (NLS) estimation procedure for the estimation of the proposed functional form is presented. To illustrate the proposed approaches, two empirical applications in irrigation and fertilizer response using the famous Hexem-Heady experimental dataset are provided. Finally, implications for modeling input-driven production risks are discussed.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.20910
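    Sketch: A Just-and-Pope-style three-stage NLS with a generic mean function and a power variance (risk) function. The toy power mean stands in for the paper's CES specification, and positive input levels are assumed.

    import numpy as np
    from scipy.optimize import least_squares

    def f(theta, x):                      # toy mean function in place of the CES form
        return theta[0] * x ** theta[1]

    def fit_just_pope(x, y):
        # stage 1: NLS for the mean function
        r1 = least_squares(lambda th: y - f(th, x), x0=[1.0, 0.5])
        # stage 2: log squared residuals identify the risk function h(x) = exp(c) * x**g
        e2 = np.log((y - f(r1.x, x)) ** 2 + 1e-12)
        G = np.column_stack([np.ones_like(x), np.log(x)])
        c, g = np.linalg.lstsq(G, e2, rcond=None)[0]
        w = np.exp(0.5 * (c + g * np.log(x)))     # fitted conditional standard deviation
        # stage 3: weighted NLS for the mean, reweighting by the estimated risk
        r3 = least_squares(lambda th: (y - f(th, x)) / w, x0=r1.x)
        return r3.x, (c, g)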
  23. By: Simon Donker van Heel; Neil Shephard
    Abstract: We propose maximizing a discounted version of a convex combination of the log-likelihood and the corresponding expected log-likelihood, which yields a filter, predictor, and smoother for time series. This paper then focuses on working out the implications of this in the case of the canonical exponential family. The results are simple exact filters, predictors and smoothers with linear recursions. A theory for these models is developed and the models are illustrated on simulated and real data.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.16745
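    Sketch: For Poisson counts with a conjugate gamma state, a discounted filter of this flavor collapses to two linear recursions on the sufficient statistics. The discount factor and initial values are assumptions; the paper's exponential-family treatment is general.

    import numpy as np

    def discounted_poisson_filter(y, delta=0.95, a0=1.0, b0=1.0):
        a, b = a0, b0
        means = []
        for yt in y:
            a = delta * a + yt            # discounted event count
            b = delta * b + 1.0           # discounted observation count
            means.append(a / b)           # filtered mean of the Poisson rate
        return np.array(means)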
  24. By: Schmandt, Marco (Technische Universität Berlin); Tielkes, Constantin (European University Viadrina, Frankfurt / Oder); Weinhardt, Felix (European University Viadrina, Frankfurt / Oder)
    Abstract: Many studies exploit the random placement of individuals into groups such as schools or regions to estimate the effects of group-level variables on these individuals. Assuming a simple data generating process, we show that the typical estimate contains three components: the causal effect of interest, “multiple-treatment bias” (MTB), and “mobility bias” (MB). The extent of these biases depends on the interrelations of group-level variables and onward mobility. We develop a checklist that can be used to assess the relevance of the biases based on observable quantities. We apply this framework to novel administrative data on randomly placed refugees in Germany and confirm empirically that MTB and MB cannot be ignored. The biases can even switch the signs of estimates of popular group-level variables, despite random placement. We discuss implications for the literature and alternative “ideal experiments”.
    Keywords: refugee integration, peer effects, group assignment, random placement, random dispersal policy
    JEL: F22 O15 R23
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp18319
  25. By: Jules H. van Binsbergen; Benjamin David; Christian C. Opp
    Abstract: We evaluate approaches to estimating demand elasticities in dynamic asset markets, both theoretically and empirically. We establish strict, necessary conditions that the dynamics of instrumented asset price variation must satisfy for valid identification. We illustrate these insights in a general equilibrium model of dynamic trade and derive the magnitude of biases that arise when these conditions are violated. Estimates based on static IO models are severely biased when the instrumented price variation is persistent or predictable. Empirically, we show that commonly used instruments yield elasticity estimates that are off by orders of magnitude, or even have the wrong sign. In contrast to standard multiplier calculations, our theory characterizes the dynamic asset market interventions required to sustain a given price path support process, with direct implications for policies such as Quantitative Easing (QE).
    JEL: E10 G10
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:34528
  26. By: Håvard Hungnes (Statistics Norway)
    Abstract: This paper introduces two methodological improvements to the Hodrick–Prescott (HP) filter for decomposing GDP into trend and cycle components. First, we propose a robust univariate filter that accounts for extreme observations – such as the COVID-19 pandemic – by treating them as additive outliers. Second, we develop a multivariate HP filter that incorporates time-varying, import-adjusted budget shares of GDP sub-components. This adaptive weighting minimizes cyclical variance and yields a more stable trend estimate. Applying the framework to U.S. data, we find that private investment is the dominant source of cyclical fluctuations, while government expenditure exhibits a persistent counter-cyclical pattern. The proposed approach enhances real-time policy analysis by reducing endpoint bias and improving the identification of cyclical dynamics.
    Keywords: output gap; Hodrick–Prescott filter; robust filtering; multivariate decomposition; additive outliers; time-varying budget shares; business cycle analysis
    JEL: E32 C22 E37 C43 C51
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:ssb:dispap:1031
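    Sketch: The robust univariate step, sketched as an HP filter with additive-outlier dummies at known extreme dates, solved by concentrating out the trend. The dense solve and known outlier dates are simplifications.

    import numpy as np

    def hp_robust(y, lam=1600.0, outliers=()):
        # minimize ||y - tau - E g||^2 + lam * ||D2 tau||^2 jointly in (tau, g)
        T = len(y)
        D2 = np.diff(np.eye(T), n=2, axis=0)       # second-difference operator
        A = np.eye(T) + lam * D2.T @ D2            # HP normal equations: A tau = y
        if not outliers:
            return np.linalg.solve(A, y)
        E = np.zeros((T, len(outliers)))
        for j, t in enumerate(outliers):
            E[t, j] = 1.0                          # dummy for each extreme observation
        Ainv = np.linalg.inv(A)
        M = E.T @ (E - Ainv @ E)                   # concentrated system for the outlier effects
        g = np.linalg.solve(M, E.T @ (y - Ainv @ y))
        return Ainv @ (y - E @ g)                  # trend with outliers dummied out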
  27. By: Ursula Mello; Martin Nybom; Jan Stuhler
    Abstract: Lacking lifetime income data, most intergenerational mobility estimates are subject to lifecycle bias. Using long income series from Sweden and the US, we illustrate that standard correction methods struggle to account for one important property of income processes: children from affluent families experience faster income growth, even conditional on their own characteristics. We propose a lifecycle estimator that captures this pattern and performs well across different settings. We apply the estimator to study mobility trends, including for recent cohorts that could not be considered in prior work. Despite rising income inequality, intergenerational mobility remained largely stable in both countries.
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2512.15368
  28. By: Andersson, Jonas (Dept. of Business and Management Science, Norwegian School of Economics and Business Administration); Nilsen, Øivind Anti (Dept. of Economics, Norwegian School of Economics and Business Administration); Skaug, Hans Julius (Dept. of Mathematics, University of Bergen)
    Abstract: We develop a model to estimate price-adjustment behavior when prices are observed more frequently than key explanatory variables such as wage costs. We propose a mixed-frequency stochastic (S, s)-model that accommodates infrequently observed costs and allows for plant-, product-, and season-specific heterogeneity. The model is estimated using a likelihood-based nonlinear state-space approach, enabling estimation as if all variables were observed at the same frequency. Applied to monthly price survey data matched with plant-level annual cost data for manufacturing producers, the model yields precise estimates of both cost pass-through and price-inaction thresholds, and reduces the blurring of intermittent price changes.
    Keywords: Latent Variables; Pass-Through; Panel Data
    JEL: C34 D43 E31 E37 L16
    Date: 2025–12–30
    URL: https://d.repec.org/n?u=RePEc:hhs:nhheco:2025_021
  29. By: Peng, Rundong; Mallory, Mindy; Ma, Meilin; Wang, H. Holly
    Abstract: Time series data have been extensively utilized in agricultural price analysis, with the Vector Auto-Regressive (VAR) and Vector Error Correction Model (VECM) being foundational tools. Over the past three decades, the availability of disaggregated agricultural commodity price data has increased, resulting in high-dimensional datasets. The efficacy of VECM and Johansen’s maximum likelihood test diminishes with increased dimensionality due to exponential growth in the required time series length, implying difficulty in extracting cointegrating relationships in high-dimensional data. This article addresses this challenge by employing time series clustering to reduce data dimensionality. Clusters are formed based on price similarity, dynamically adjusted for a specified time period using hierarchical clustering with dynamic time warping. With clustered time series, we extract the mean price of each cluster and apply Johansen’s framework to estimate cointegration relationships. Applied to the Chinese hog market before and after the 2018 African Swine Fever outbreak, we show that the cointegrating relationship has changed, suggesting less inter-provincial trade. The study identifies clusters based on price similarity and shows the advantages of this method compared to traditional geographical clustering.
    Keywords: Production Economics
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ags:aaea25:360950
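    Sketch: The dimension-reduction stage: DTW distances between price series, average-linkage hierarchical clustering, and cluster labels whose mean prices can then be passed to a standard Johansen test (not shown). The unconstrained DTW and the linkage choice are assumptions.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def dtw(a, b):
        # dynamic time warping distance via the standard recursion
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i, j] = abs(a[i - 1] - b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def cluster_series(series, k):
        n = len(series)
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = dtw(series[i], series[j])
        Z = linkage(squareform(dist), method="average")
        return fcluster(Z, t=k, criterion="maxclust")   # cluster label per series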

This nep-ecm issue is ©2026 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.