nep-ecm New Economics Papers
on Econometrics
Issue of 2026–04–13
29 papers chosen by
Sune Karlsson, Örebro universitet


  1. Robust Inference for Time Series Quantile Regression: A Dependent Wild Bootstrap-Based Approach By Zongwu Cai; Wei Long
  2. Confidence Sets under Weak Identification: Theory and Practice By Gustavo Schlemper; Marcelo J. Moreira
  3. Quantifying Omitted Variable Bias in Nonlinear Instrumental Variable Estimators By Yu-Min Yen
  4. Identification and Inference in Nonlinear Dynamic Network Models By Diego Vallarino
  5. Serial-correlation testing in error component models with moderately small T By Sebastian Kripfganz; Mehdi Hosseinkouchack; Matei Demetrescu
  6. Partially Conditional Average Treatment Effect on Treated for Difference-in-Differences By Jiale Yang; Zongwu Cai; Qixian Zhong
  7. Flexible Imputation of Incomplete Network Data By Ge Sun; Weisheng Zhang
  8. Robust Priors in Nonlinear Panel Models with Individual and Time Effects By Zizhong Yan; Zhengyu Zhang; Mingli Chen; Jingrong Li; Iván Fernández-Val
  9. Representativeness and Efficiency in Overidentified IV By Chun Pang Chow; Hiroyuki Kasahara
  10. Assessing Sensitivity to IV Exclusion and Exogeneity without First Stage Monotonicity By Paul Diegert; Matthew A. Masten; Alexandre Poirier
  11. Nonparametric Identification and Estimation of Production Functions Invariant to Productivity Dynamics By Rentaro Utamaru
  12. Generalized Poisson Dynamic Network Models By Giulia Carallo; Roberto Casarin; Antonio Peruzzi
  13. Seasonality in Mixed Causal-Noncausal Processes By Tomás del Barrio Castro; Alain Hecq; Sean Telg
  14. Identification in Dynamic Dyadic Network Formation Models with Fixed Effects By Wayne Yuan Gao; Yi Niu
  15. Unified Mixture Sampler for State-Space Models: Application to Stochastic Conditional Duration Models By Daichi Hiraki; Yasuhiro Omori
  16. Dynamic Factor Stochastic Volatility-in-Mean VAR for Large Macroeconomic Panels By Daichi Hiraki; Siddhartha Chib; Yasuhiro Omori
  17. Multiple monetary policy shocks from daily data: A heteroskedasticity IV approach By Marc Burri; Daniel Kaufmann
  18. Identification in (Endogenously) Nonlinear SVARs Is Easier Than You Think By James A. Duffy; Sophocles Mavroeidis
  19. Linear estimations of dynamic fixed effects logit models only with time effects By Yoshitsugu Kitazawa
  20. Testing for Monotone Equilibrium Strategies in Games of Incomplete Information By Yu-Chin Hsu; Tong Li; Chu-An Liu; Hidenori Takahashi
  21. You've Got to be Efficient: Ambiguity, Misspecification and Variational Preferences By Karun Adusumilli
  22. Using large language models as a source of human behavioral data in social science experiments By van Loon, Austin; Kanopka, Klint
  23. SBBTS: A Unified Schrödinger-Bass Framework for Synthetic Financial Time Series By Alexandre Alouadi; Grégoire Loeper; Célian Marsala; Othmane Mazhar; Huyên Pham
  24. Subjective Earnings and Employment Dynamics By Manuel Arellano; Orazio Attanasio; Margherita Borella; Mariacristina De Nardi; Gonzalo Paz-Pardo
  25. Climate-Aware Copula Models for Sovereign Rating Migration Risk By Marina Palaisti
  26. A Dynamic Factor Model for Level and Volatility By Haroon Mumtaz; Sofia Velasco
  27. An econometrician's guide to optimal transport By Alfred Galichon; Marc Henry
  28. Robust Testing Of the Allais Paradox By Paired Choices vs. Paired Valuations By Federico Echenique; Gerelt Tserenjigmid
  29. Measuring What Cannot Be Surveyed: LLMs as Instruments for Latent Cognitive Variables in Labor Economics By Cristian Espinal Maya

  1. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Wei Long (Department of Economics, Tulane University, New Orleans, LA 70118, USA)
    Abstract: Quantile regression is widely used to study heterogeneous effects, but inference in time series settings remains challenging when regression errors are serially correlated. Building on the dependent wild bootstrap, we develop an inference procedure for linear time series quantile regression that reweights the restricted quantile score with tapered multipliers and employs a one-step bootstrap update together with HAC-based studentization. The procedure avoids repeatedly solving a non-smooth quantile regression problem within each bootstrap draw while targeting the same inferential object as robust HAC testing. Under strong mixing and standard smoothness and bandwidth conditions, we establish asymptotic validity of the bootstrap test and derive its local power under Pitman alternatives. Monte Carlo results indicate improved size control relative to conventional and robust HAC methods, especially under strong dependence, with only modest differences in power. An application to the determinants of U.S. housing prices over the past four decades illustrates the practical usefulness of the method.
    Keywords: Time series quantile regression; Dependent wild bootstrap; HAC inference
    JEL: C12 C22 C46
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:kan:wpaper:202612
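The dependent wild bootstrap that this paper builds on can be illustrated with a minimal sketch of its multiplier construction (the textbook device of Shao (2010), not the authors' full procedure; the function name and interface are invented for illustration): multipliers are mean-zero, unit-variance, and correlated within a bandwidth, which a scaled moving sum of i.i.d. normals delivers with a Bartlett-type taper.

```python
import numpy as np

def dwb_multipliers(n, l, rng):
    """Dependent wild bootstrap multipliers: mean 0, variance 1,
    cov(w_t, w_s) = max(0, 1 - |t - s| / l), a Bartlett taper.
    Built as a moving sum of l i.i.d. N(0,1) draws, scaled by 1/sqrt(l)."""
    e = rng.standard_normal(n + l - 1)
    # moving sum of l consecutive normals; scaling gives unit variance
    return np.convolve(e, np.ones(l), mode="valid") / np.sqrt(l)
```

Multiplying residuals (or, as here, restricted quantile scores) by such a series preserves serial dependence within the bandwidth while randomizing across bootstrap draws.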
  2. By: Gustavo Schlemper; Marcelo J. Moreira
    Abstract: We develop new methods for constructing confidence sets and intervals in linear instrumental variables (IV) models based on tests that remain valid under weak identification and under heteroskedastic, autocorrelated, or clustered errors. In practice, researchers typically recover such sets by grid search, a procedure that can miss parts of the confidence region, truncate unbounded sets, and deliver misleading inference. We replace grid inversion with exact and approximation-based methods that are both reliable and computationally efficient. Our approach exploits the polynomial and rational structure of the Anderson-Rubin and Lagrange multiplier statistics to obtain exact confidence sets via polynomial root finding. For the conditional quasi-likelihood ratio test, we derive an exact inversion algorithm based on the geometry of the statistic and its critical value function. For more general conditional tests, we construct polynomial approximations whose coverage error vanishes with approximation degree, allowing numerical accuracy to be made arbitrarily high. In many empirical applications with weak instruments, standard grid methods produce incorrect confidence regions, while our procedures reliably recover sets with correct nominal coverage. The framework extends beyond linear IV to models with piecewise polynomial or rational moment conditions, offering a general tool for reliable weak-identification robust inference.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.04279
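The idea of recovering a confidence set by root finding rather than grid search can be sketched in the simplest homoskedastic scalar-IV case (a classical textbook construction, not this paper's general algorithm; names are illustrative): inverting the Anderson-Rubin test reduces to a quadratic inequality in beta, so the confidence set is an interval, the complement of an interval, the whole line, or empty, and is obtained exactly from the quadratic's roots.

```python
import numpy as np
from scipy import stats

def ar_confidence_set(y, x, Z, alpha=0.05):
    """Exact Anderson-Rubin confidence set for y = x*beta + u with
    instruments Z, assuming homoskedastic errors. The acceptance region
    {beta : AR(beta) <= F critical value} is the solution set of a
    quadratic inequality a*beta^2 + b*beta + c <= 0."""
    n, k = Z.shape
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection onto span(Z)
    crit = stats.f.ppf(1 - alpha, k, n - k) * k / (n - k)
    Q = P - crit * (np.eye(n) - P)             # P_Z - crit * M_Z
    a = x @ Q @ x
    b = -2.0 * (x @ Q @ y)
    c = y @ Q @ y
    disc = b * b - 4 * a * c
    if disc < 0:
        return "whole line" if a < 0 else "empty"
    r1, r2 = sorted(np.roots([a, b, c]).real)
    # a > 0: bounded interval; a < 0: unbounded complement of an interval
    return (r1, r2) if a > 0 else ("complement", r1, r2)
```

The unbounded cases arise naturally under weak identification, which is exactly what grid inversion tends to truncate or miss.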
  3. By: Yu-Min Yen
    Abstract: We develop a framework for quantifying omitted variable bias (OVB) in nonlinear instrumental variable (IV) estimators, including the local average treatment effect (LATE), the LATE for the treated (LATT), and the partially linear IV model (PLIVM). Extending sensitivity analysis beyond linear settings, we derive bias decompositions, establish partial identification bounds, and construct OVB-adjusted confidence intervals. We estimate OVB bounds and conduct inference using double machine learning (DML), allowing flexible control for high-dimensional covariates. An application to the U.S. Job Training Partnership Act (JTPA) experiment shows that, at conventional significance levels, first-stage compliance estimates are robust to omitted variables, whereas intention-to-treat and treatment effects are more sensitive. Program impacts are robust and significant for females but fragile for males.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.03544
  4. By: Diego Vallarino
    Abstract: We study identification and inference in nonlinear dynamic systems defined on unknown interaction networks. The system evolves through an unobserved dependence matrix governing cross-sectional shock propagation via a nonlinear operator. We show that the network structure is not generically identified, and that identification requires sufficient spectral heterogeneity. In particular, identification arises when the network induces non-exchangeable covariance patterns through heterogeneous amplification of eigenmodes. When the spectrum is concentrated, dependence becomes observationally equivalent to common shocks or scalar heterogeneity, leading to non-identification. We provide necessary and sufficient conditions for identification, characterize observational equivalence classes, and propose a semiparametric estimator with asymptotic theory. We also develop tests for network dependence whose power depends on spectral properties of the interaction matrix. The results apply to a broad class of economic models, including production networks, contagion models, and dynamic interaction systems.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.04961
  5. By: Sebastian Kripfganz (Department of Economics, University of Exeter); Mehdi Hosseinkouchack (EBS Business School, EBS University); Matei Demetrescu (Department of Statistics, TU Dortmund University)
    Abstract: When testing for unrestricted serial correlation in linear panel data models, the number of moment restrictions under the null hypothesis of no such correlation increases quadratically in the number of time periods T. Portmanteau tests designed for fixed T can quickly lose power even for time horizons that are typically still considered as small. To circumvent this problem, we propose refinements motivated by strategies to reduce the number of instruments in the estimation of dynamic panel data models. Furthermore, we propose a new test based on covariances between first differences and the longer differences that encompass them. Our test yields substantial power improvements against moving-average and autoregressive alternatives. It retains high power under random-walk alternatives and high variances of the group-specific error component. Moreover, we demonstrate that serial-correlation tests based on regression residuals can suffer from severe power losses when the initial estimator is inconsistent under the alternative. Finally, we re-analyze a widely used data set for the estimation of dynamic employment equations. Contrary to previous evidence, but in line with our power comparisons, our proposed test uncovers statistical evidence for the presence of serial correlation. Taken at face value, this in turn implies that the original regression results suffer from estimator inconsistency.
    Keywords: serial correlation, specification testing, panel data, dimensionality reduction, first differences, long differences
    JEL: C12 C23 C52
    Date: 2026–04–02
    URL: https://d.repec.org/n?u=RePEc:exe:wpaper:2603
  6. By: Jiale Yang (Department of Statistics and Data Science, Xiamen University, Xiamen, Fujian, China); Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Qixian Zhong (Department of Statistics and Data Science, Xiamen University, Xiamen, Fujian, China)
    Abstract: Causal inference has been widely applied across various fields, with Difference-in-Differences (DiD) emerging as one of the most popular methods for estimating the causal effect of a treatment or policy change in settings where treatment timing is staggered or occurs at a specific point in time. However, in many scenarios, not all covariates are of primary interest. This paper studies the partially conditional average treatment effect on the treated and extends it to multi-period DiD settings under the assumption of conditional parallel trends. The proposed approach is a two-stage doubly robust estimator, which allows us both to capture the heterogeneity and to enhance the interpretability of the treatment effects by focusing on a subset of covariates of interest. The property of double robustness ensures a consistent estimator as long as either the outcome regression model or the propensity score model is correctly specified. Beyond the proposed new methodology, we derive asymptotic theories by establishing the convergence rate and asymptotic normality of the nonparametric estimator and further examine the double robustness and practical applicability through several simulation studies. Finally, the proposed methods are used to study the impact of the Deferred Action for Childhood Arrivals program on educational outcomes for non-citizen immigrants in the US.
    Keywords: Double robustness; Heterogeneity; Kernel smoothing; Local linear regression; Multi-period DiD.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:kan:wpaper:202611
  7. By: Ge Sun; Weisheng Zhang
    Abstract: Sampled network data are common in empirical research because collecting full network information is costly, but using sampled networks can lead to biased estimates. We propose a nonparametric imputation method for sampled networks and show that empirical analysis based on imputed networks yields consistent parameter estimates. Our approach imputes missing network links by combining a projection onto covariates with a local two-way fixed-effects regression, which avoids parametric assumptions, does not rely on low-rank restrictions, and flexibly accommodates both observed covariates and unobserved heterogeneity. We establish entrywise convergence rates for the imputed matrix and prove the consistency of GMM estimators based on the imputed network. We further derive the convergence rate of the corresponding estimator in the linear-in-means peer-effects model. Simulations show strong performance of our method both in terms of imputation accuracy and in downstream empirical analysis. We illustrate our method with an application to the microfinance network data of Banerjee et al. (2013).
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.03171
  8. By: Zizhong Yan; Zhengyu Zhang; Mingli Chen; Jingrong Li; Iván Fernández-Val
    Abstract: We develop likelihood-based bias reduction for nonlinear panel models with additive individual and time effects. In two-way panels, integrated-likelihood corrections are attractive but challenging because the required integration is high dimensional and standard Laplace approximations may fail when the parameter dimension grows with the sample size. We propose a target-centered full-exponential Laplace--cumulant expansion that exploits the sparse higher-order derivative structure implied by additive effects, delivering a tractable approximation with a negligible remainder under large-$N, T$ asymptotics. The expansion motivates robust priors that yield bias reduction for both common parameters and fixed effects. We provide implementations for binary, ordered, and multinomial response models with two-way effects. For average partial effects, we show that the remaining first-order bias has a simple variance form and can be removed by a closed-form adjustment. Monte Carlo experiments and an empirical illustration show substantial bias reduction with accurate inference.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.03663
  9. By: Chun Pang Chow; Hiroyuki Kasahara
    Abstract: Under heterogeneous treatment effects, the GMM weighting matrix in overidentified IV models dictates the estimand. We show that efficient GMM downweights high-variance instruments and frequently assigns negative weights that undermine causal interpretation. Moreover, GMM cannot simultaneously achieve efficiency and accommodate researcher-specified weights. We resolve this trade-off by developing the Representative Targeting (RT) estimator. By averaging instrument-specific Wald estimators under Positive Regression Dependence, RT ensures non-negative weights while achieving the semiparametric efficiency bound for its targeted estimand. We demonstrate the heterogeneity penalty empirically in a class-size experiment and apply RT to recover the Policy-Relevant Treatment Effect within a patent leniency design.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.07131
  10. By: Paul Diegert; Matthew A. Masten; Alexandre Poirier
    Abstract: Exclusion and exogeneity are core assumptions in instrumental variable (IV) analyses, but their empirical validity is often debated. This paper develops new sensitivity analyses for these assumptions. Our results accommodate arbitrary heterogeneity in treatment effects and do not impose any monotonicity requirements on the first stage. Specifically, we derive identified sets for the marginal distributions of potential outcomes and their functionals, like average treatment effects, under a broad class of nonparametric relaxations of the exclusion and exogeneity assumptions. These identified sets are characterized as solutions to linear programs and have desirable theoretical properties. We explain how to estimate these solutions using computationally tractable methods even when the linear program is infinite-dimensional. We illustrate these methods with an empirical application to peer effects in movie viewership, using weather as a potentially imperfect instrument.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.07604
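To illustrate what it means for an identified set to be "characterized as solutions to linear programs", here is a deliberately tiny, classical example (Manski-style worst-case bounds under missing outcomes, not this paper's nonparametric relaxations; the function and cell values are invented for illustration): the bounds on a partially identified parameter are the optimal values of a minimization and a maximization LP over distributions consistent with the observed moments.

```python
import numpy as np
from scipy.optimize import linprog

def manski_bounds(p_y1_d1, p_d1):
    """Worst-case bounds on P(Y(1)=1) when Y(1) is observed only for
    treated units (D=1). Decision variables: joint pmf q over
    (Y(1), D) in {0,1}^2, ordered [q00, q01, q10, q11], q_yd = P(Y(1)=y, D=d).
    Lower/upper bound = min/max of P(Y(1)=1) over feasible pmfs."""
    A_eq = [[0, 0, 0, 1],   # P(Y(1)=1, D=1) is observed
            [0, 1, 0, 1],   # P(D=1) is observed
            [1, 1, 1, 1]]   # pmf sums to one
    b_eq = [p_y1_d1, p_d1, 1.0]
    c = np.array([0, 0, 1, 1])  # objective: P(Y(1)=1) = q10 + q11
    lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    return lo, hi
```

The paper's programs are far richer (and can be infinite-dimensional), but the logic is the same: the identified set is the range of the target functional over the constraint set.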
  11. By: Rentaro Utamaru
    Abstract: Production function estimates underpin the measurement of firm-level markups, allocative efficiency, and the productivity effects of policy interventions. Since Olley and Pakes (1996), every major proxy variable estimator has identified the production function through a first-order Markov assumption on unobserved productivity; I show that misspecification of this assumption generates persistent upward bias in the materials elasticity that propagates into overestimated markups and inflated treatment effects. I replace the Markov restriction with conditional independence across three intermediate input demands, a static condition grounded in input market segmentation, and establish nonparametric identification from a single cross-section. I develop a GMM estimator and establish consistency and asymptotic normality. Monte Carlo simulations confirm that the proposed estimator is unbiased across Markov and non-Markov environments, while the standard estimator exhibits persistent bias of up to 63 percent of the true materials elasticity. In 502 Japanese manufacturing industries, the proposed method yields systematically lower markups than the standard method across the entire distribution (median 0.93 vs. 1.03), reducing the share of industries with markups above unity from 54 to 37 percent. In a difference-in-differences analysis of the 2011 Tohoku earthquake, the standard method overstates the productivity loss by 0.40 percentage points, roughly $3.6 billion (400 billion yen) per year.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.04458
  12. By: Giulia Carallo; Roberto Casarin; Antonio Peruzzi
    Abstract: Count-weighted temporal networks often exhibit unequal dispersion in the edge weights, which cannot be fully explained by modelling observational heterogeneity through latent factors in the conditional mean. Therefore, we propose new dynamic network model classes exploiting the Generalized Poisson distribution to capture both under- and overdispersion. We consider three different dynamic specifications: latent factor dynamics, autoregressive dynamics, and latent position dynamics, and study some theoretical properties of the random networks, showing the impact of the dispersion parameter on the random network's connectivity. After discussing the parameter identification strategy, we present a Bayesian inference procedure along with a posterior sampling algorithm. A numerical illustration demonstrates the effectiveness of the designed algorithm and provides estimates of the misspecification bias when unequal dispersion is neglected. Our new models are then applied to two relevant dynamic datasets considered in previous studies: a set of bike-sharing dynamic networks and a set of dynamic media networks. Our results highlight the importance of explicitly modeling overdispersion for both an accurate in-sample fit and out-of-sample performance.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.05838
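The Generalized Poisson distribution at the core of these models can be written down directly; a minimal sketch assuming the standard Consul-Jain parameterization (the paper's exact parameterization is not shown here), computed in log space to avoid overflow at large counts:

```python
import math

def gpois_pmf(k, theta, lam):
    """Generalized Poisson pmf (Consul-Jain form):
    P(X=k) = theta * (theta + k*lam)**(k-1) * exp(-theta - k*lam) / k!
    Mean = theta/(1-lam), variance = theta/(1-lam)**3, so
    lam > 0 gives overdispersion, lam < 0 underdispersion, lam = 0 Poisson."""
    if theta + k * lam <= 0:
        return 0.0
    logp = (math.log(theta) + (k - 1) * math.log(theta + k * lam)
            - theta - k * lam - math.lgamma(k + 1))
    return math.exp(logp)
```

The single extra parameter lam is what lets one model class cover both under- and overdispersed edge weights, which a latent factor in the conditional mean alone cannot do.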
  13. By: Tomás del Barrio Castro; Alain Hecq; Sean Telg
    Abstract: This paper investigates the role of complex and negative roots in mixed causal-noncausal autoregressive (MAR) models. Using partial fraction decompositions, we show that seasonal roots can always be isolated in the moving average representation of purely causal and noncausal AR models. We find that this result extends to the MAR model, which means that no new joint seasonal effects can be generated despite the multiplicative structure of the causal and noncausal polynomials. This result has important consequences for the MAR model selection procedure, and these are extensively studied in a Monte Carlo simulation study. An empirical application on COVID-19 and soybean data illustrates the main findings of the paper.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.07040
  14. By: Wayne Yuan Gao; Yi Niu
    Abstract: This paper establishes (set) identification results in a dynamic dyadic network formation model with time-varying observed covariates, lagged local network statistics, and unobserved heterogeneity in the form of fixed effects. Our framework accommodates observed-covariate homophily, transitivity through common friends, second-order or indirect-friend effects, and more general local subgraph statistics within a single dynamic index model. The analysis combines two complementary ways of handling fixed effects: inequalities that integrate out time-invariant dyad heterogeneity by treating each dyad as a short panel, and signed-subgraph comparisons that difference out fixed effects algebraically through intertemporal variation within each dyad. We show that the semiparametric identifying restrictions can be sharpened using either or both of the following assumptions: (i) the errors are serially independent with a known distribution; (ii) the pairwise fixed effects take the form of additive individual fixed effects. Combining (i) and (ii) under i.i.d. logit shocks, we obtain an exact conditional logit representation and provide sufficient conditions for point identification.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.07488
  15. By: Daichi Hiraki; Yasuhiro Omori
    Abstract: We propose a unified mixture sampler (UMS) that provides a universal estimation framework for nonlinear state-space models with "exp-exp" likelihood kernels. Unlike existing methods that require deriving new mixture approximations for each specific distribution, our approach dynamically adapts the standard ten-component mixture from Omori et al. (2007) through a deterministic re-centering and rescaling algorithm. Applying this to the stochastic conditional duration (SCD) model, we demonstrate that the proposed sampler can efficiently handle unknown shape parameters - such as those in Weibull or Gamma distributions - by updating mixture components near-instantaneously during MCMC iterations. The UMS not only simplifies implementation but also ensures exact inference via a lightweight Metropolis-Hastings step. Numerical examples show that our method substantially outperforms the conventional slice sampling approach, significantly reducing autocorrelation in MCMC samples while maintaining high computational efficiency. This unified framework encompasses a wide range of applications, including logit, Poisson, and various SCD model specifications, providing a highly efficient alternative to model-specific samplers.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.04517
  16. By: Daichi Hiraki; Siddhartha Chib; Yasuhiro Omori
    Abstract: We develop a dynamic factor stochastic volatility-in-mean (SVM) specification for vector autoregressions (VARs) that embeds an SVM component within a dynamic factor stochastic volatility structure. A small number of latent volatility factors capture common movements in conditional variances, while volatility enters the conditional mean of the VAR. This specification allows time-varying uncertainty to influence macroeconomic dynamics through both second moments and expected outcomes while preserving tractability in large panels. We construct an efficient Markov chain Monte Carlo algorithm for estimation in this high-dimensional, non-Gaussian setting. Using quarterly data on twenty variables from the FRED-QD database, we compare predictive performance with the benchmark stochastic volatility VAR model. The dynamic factor SVM specification delivers superior forecasts for more variables during major macroeconomic disruptions such as the 2008 global financial crisis. The results indicate that allowing volatility to enter the mean captures an important transmission channel in macroeconomic dynamics.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.04529
  17. By: Marc Burri; Daniel Kaufmann
    Abstract: We extend the heteroskedasticity IV estimator of Rigobon and Sack (2004) from one to multiple monetary policy shocks by imposing recursive zero restrictions on the impact matrix. Unlike high-frequency identification, the approach requires neither intraday tick data nor precise announcement timestamps, making it applicable to countries or historical periods where such data are unavailable. Applied to US FOMC announcements, we find causal effects similar to those of high-frequency identification. The heteroskedasticity-based instrument passes weak-instrument tests for the target shock, whereas high-frequency surprises fail. For the path shock, we also find strong heteroskedasticity-based instruments in key specifications, and we show that the underlying shocks are similar to those based on high-frequency identification.
    Keywords: Monetary policy shocks, causal effects, forward guidance, heteroskedasticity, high-frequency, instrumental variables
    JEL: C3 E3 E4 E5
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:irn:wpaper:26-06
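The single-shock version of the Rigobon and Sack (2004) estimator that this paper extends has a simple moment form, sketched below under the usual identifying assumption that only the policy-shock variance shifts between announcement and control days (a generic illustration, not the authors' multi-shock recursive scheme; names are invented):

```python
import numpy as np

def het_iv(policy_t, asset_t, policy_c, asset_c):
    """Heteroskedasticity-based IV for the response of an asset price
    to a policy variable, identified from the CHANGE in second moments
    between announcement (T) and control (C) days:
        beta = [cov_T(p, a) - cov_C(p, a)] / [var_T(p) - var_C(p)].
    Valid when the policy-shock variance rises on announcement days
    while the variances of all other shocks stay constant."""
    dcov = np.cov(policy_t, asset_t)[0, 1] - np.cov(policy_c, asset_c)[0, 1]
    dvar = policy_t.var(ddof=1) - policy_c.var(ddof=1)
    return dcov / dvar
```

Because only daily second moments enter, the estimator needs neither intraday tick data nor announcement timestamps, which is the practical appeal the abstract emphasizes.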
  18. By: James A. Duffy; Sophocles Mavroeidis
    Abstract: We study identification in structural vector autoregressions (SVARs) in which the endogenous variables enter nonlinearly on the left-hand side of the model, a feature we term endogenous nonlinearity, to distinguish it from the more familiar case in which nonlinearity arises only through exogenous or predetermined variables. This class of models accommodates asymmetric impact multipliers, endogenous regime switching, and occasionally binding constraints. We show that, under weak regularity conditions, the model parameters and structural shocks are (nonparametrically) identified up to an orthogonal transformation, exactly as in a linear SVAR. Our results have the powerful implication that most existing identification schemes for linear SVARs extend directly to our nonlinear setting, with the number of restrictions required to achieve exact identification remaining unchanged. We specialise our results to piecewise affine SVARs, which provide a convenient framework for the modelling of endogenous regime switching, and their smooth transition counterparts. We illustrate our methodology with an application to the nonlinear Phillips curve, providing a test for the presence of nonlinearity that is robust to the choice of identifying assumptions, and finding significant evidence for state-dependent inflation dynamics.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.07718
  19. By: Yoshitsugu Kitazawa (Faculty of Economics, Kyushu Sangyo University)
    Abstract: This paper proposes linear estimation methods for dynamic fixed effects logit models with only time effects (i.e., models with only time dummies and models with only time trends). The linear estimators point-identify transformations of the parameters of interest for these models when five or more time periods are available, and thereby point-identify the parameters of interest themselves. It follows that root-N consistent estimation is attainable for these models. Monte Carlo results corroborate this conclusion.
    Keywords: dynamic panel logit models; fixed effects; time dummies; time trends; point-identification; root-N consistent estimators; Monte Carlo experiments
    JEL: C23 C25 C26
    Date: 2026–03
    URL: https://d.repec.org/n?u=RePEc:kyu:dpaper:87
  20. By: Yu-Chin Hsu; Tong Li; Chu-An Liu; Hidenori Takahashi
    Abstract: This paper develops a unified framework for testing monotonicity of Bayesian Nash equilibrium strategies in unobserved types in games of incomplete information. We show that, under symmetric independent private types, monotonicity of differentiable equilibrium strategies is equivalent to monotonicity of a quasi-inverse strategy identified from observed actions. This allows the problem to be reformulated as testing a countable set of moment inequalities involving unconditional expectations. We propose a Cramér-von Mises-type statistic with bootstrap critical values. The method accommodates covariates and game heterogeneity. Monte Carlo simulations demonstrate finite-sample performance, and an application to procurement auctions illustrates cartel detection.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.06643
  21. By: Karun Adusumilli
    Abstract: This article introduces a framework for evaluating statistical decisions under both prior ambiguity and likelihood misspecification. We begin with an ambiguity set - a frequentist model that pairs a possibly misspecified likelihood with every possible prior - and uniformly expand it by a Kullback-Leibler radius to accommodate likelihood misspecification. We show that optimal decisions under this framework are equivalent to minimax decisions with an exponentially tilted loss function. Misspecification manifests as an exponential tilting of the loss, while ambiguity corresponds to a search for the least favorable prior. This separation between ambiguity and misspecification enables local asymptotic analysis under global misspecification, achieved by localizing the priors alone. Remarkably, for both estimation and treatment assignment, we show that optimal decisions coincide with those under correct specification, regardless of the degree of misspecification. These results extend to semi-parametric models. As a practical consequence, our findings imply that practitioners should prefer maximum likelihood over the simulated method of moments, and efficient GMM estimators - such as two-step GMM - over diagonally weighted alternatives.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.05327
  22. By: van Loon, Austin; Kanopka, Klint (New York University)
    Abstract: Large language models (LLMs) have prompted proposals to replace human subjects in social science experiments with simulated responses. Empirical evaluations suggest that this practice---often called silicon sampling---can sometimes approximate human behavior but is unreliable. We delineate where this approach may still provide value and where it may not, but primarily study an alternative approach: one in which model-based predictions are used not as substitutes for human data, but as auxiliary measurements within randomized experiments. We formalize the inference of causal estimands from mixed-subjects randomized controlled trials, in which outcomes are observed for a subset of units while predictions are available for all units. Under transparent design conditions, we derive a family of estimators that remain unbiased for the average treatment effect in finite samples while exploiting predictions to reduce variance. We characterize when prediction-powered, calibration-based, arm-specifically tuned, and difference-in-predictions estimators improve precision, and we provide a software package that operationalizes these results and helps researchers jointly select estimators and allocate budgets between human data collection and prediction generation. Together, our results show how generative artificial intelligence can improve experimental social science without compromising scientific validity.
    Date: 2026–04–03
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:y74mu_v1
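The mixed-subjects idea can be illustrated with a generic prediction-powered estimator: use model predictions on all units, then debias with the labeled subset where true outcomes are observed. This is a sketch in the spirit of prediction-powered inference, not the paper's estimator family or software package; the simulated data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppi_mean(preds_all, y_labeled, preds_labeled):
    # Prediction-powered mean: predictions averaged over all units,
    # plus a debiasing correction estimated on the labeled subset.
    return preds_all.mean() + (y_labeled - preds_labeled).mean()

# Hypothetical experiment: n units per arm, outcomes observed
# (labeled) for only n_lab of them, predictions available for all.
n, n_lab = 2000, 300

def simulate_arm(effect):
    y = effect + rng.normal(size=n)                   # true outcomes
    preds = y + rng.normal(scale=0.3, size=n) - 0.1   # noisy, biased predictions
    return y, preds

y1, f1 = simulate_arm(effect=1.0)   # treatment arm
y0, f0 = simulate_arm(effect=0.0)   # control arm
lab = rng.choice(n, size=n_lab, replace=False)

# Arm-wise prediction-powered means; their difference estimates the ATE.
ate_hat = (ppi_mean(f1, y1[lab], f1[lab])
           - ppi_mean(f0, y0[lab], f0[lab]))
```

The correction term removes the predictions' systematic bias (here -0.1 by construction), so the estimator stays centered on the true effect of 1.0 while the predictions absorb most of the sampling noise.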
  23. By: Alexandre Alouadi; Gr\'egoire Loeper; C\'elian Marsala; Othmane Mazhar; Huy\^en Pham
    Abstract: We study the problem of generating synthetic time series that reproduce both marginal distributions and temporal dynamics, a central challenge in financial machine learning. Existing approaches typically fail to jointly model drift and stochastic volatility, as diffusion-based methods fix the volatility while martingale transport models ignore drift. We introduce the Schr\"odinger-Bass Bridge for Time Series (SBBTS), a unified framework that extends the Schr\"odinger-Bass formulation to multi-step time series. The method constructs a diffusion process that jointly calibrates drift and volatility and admits a tractable decomposition into conditional transport problems, enabling efficient learning. Numerical experiments on the Heston model demonstrate that SBBTS accurately recovers stochastic volatility and correlation parameters that prior Schr\"odinger Bridge methods fail to capture. Applied to S&P 500 data, SBBTS-generated synthetic time series consistently improve downstream forecasting performance when used for data augmentation, yielding higher classification accuracy and Sharpe ratio compared to real-data-only training. These results show that SBBTS provides a practical and effective framework for realistic time series generation and data augmentation in financial applications.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.07159
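The Heston model used as the benchmark above is a standard two-factor diffusion; a minimal full-truncation Euler simulation makes the target dynamics concrete (parameters are hypothetical, and this is the benchmark model only, not the SBBTS method itself):

```python
import numpy as np

def simulate_heston(n_paths=1000, n_steps=250, dt=1/250,
                    s0=100.0, v0=0.04, mu=0.05,
                    kappa=2.0, theta=0.04, xi=0.3, rho=-0.7,
                    seed=0):
    """Full-truncation Euler scheme for the Heston model:
       dS = mu*S dt + sqrt(v)*S dW1,  dv = kappa*(theta - v) dt + xi*sqrt(v) dW2,
       with corr(dW1, dW2) = rho."""
    rng = np.random.default_rng(seed)
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)   # full truncation keeps the variance usable
        s = s * np.exp((mu - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v = v + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return s, v

paths, var = simulate_heston()   # terminal prices and variances at T = 1
```

The log-Euler step for S keeps prices strictly positive; the truncation `vp` handles the variance process dipping below zero, a standard fix for the square-root diffusion.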
  24. By: Manuel Arellano; Orazio Attanasio; Margherita Borella; Mariacristina De Nardi; Gonzalo Paz-Pardo
    Abstract: We develop a new approach to estimating earnings, job, and employment dynamics using subjective expectations data from the NY Fed Survey of Consumer Expectations. These data provide beliefs about future earnings offers and acceptance probabilities, offering direct information on counterfactual outcomes and enabling identification under weaker assumptions. Our framework avoids biases from selection and unobserved heterogeneity that affect models using realized outcomes. First-step fixed-effects regressions identify risk, persistence, and transition effects; second-step GMM recovers the covariance structure of unobserved heterogeneities such as ability, mobility, and match quality. We find lower risk and persistence of the individual productivity component than in prior work, but greater heterogeneity in ability and match quality. Simulations show that reduced-form estimates overstate the persistence and volatility of individual-level productivity due to job transitions and sorting. After accounting for heterogeneity, volatility declines and becomes flat across the earnings distribution. These results underscore the value of expectations data.
    JEL: C23 C8 D15 J01
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:35027
  25. By: Marina Palaisti
    Abstract: This paper develops a copula-based time-series framework for modelling sovereign credit rating activity and its dependence dynamics, with extensions incorporating climate risk. We introduce a mixed-difference transformation that maps discrete annual counts of sovereign rating actions into a continuous domain, enabling flexible copula modelling. Building on a MAG(1) copula process, we extend the framework to a MAGMAR(1,1) specification combining moving-aggregate and autoregressive dependence, and establish consistency and asymptotic normality of the associated maximum likelihood estimators. The empirical analysis uses a multi-agency panel of sovereign ratings and country-level carbon intensity, aggregated to an annual measure of global rating activity. Results reveal strong nonlinear dependence and pronounced clustering of high-activity years, with the Gumbel MAGMAR(1,1) specification delivering the strongest empirical performance among the models considered, while standard Markov copulas and Poisson count models perform substantially worse. Climate covariates improve marginal models but do not materially enhance dependence dynamics, suggesting limited incremental explanatory power of the chosen aggregate climate proxy. The results highlight the value of parsimonious copula-based models for sovereign migration risk and stress testing.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.07567
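The paper's mixed-difference transformation is not reproduced here, but the general device it serves — mapping discrete counts into a continuous domain for copula modelling — can be illustrated with the standard distributional transform (jittering), which spreads each discrete value uniformly over its probability mass:

```python
import numpy as np

def jitter_counts(x, rng):
    # Distributional transform: U = F(x-1) + V * (F(x) - F(x-1)),
    # with V ~ Uniform(0,1) and F the empirical CDF of the counts.
    # The resulting U are continuous and approximately Uniform(0,1),
    # so they can feed a copula model directly.
    x = np.asarray(x)
    vals, counts = np.unique(x, return_counts=True)
    cdf = np.cumsum(counts) / x.size                       # F at each value
    F = dict(zip(vals, cdf))
    F_minus = dict(zip(vals, np.concatenate(([0.0], cdf[:-1]))))
    v = rng.uniform(size=x.size)
    return np.array([F_minus[xi] + vi * (F[xi] - F_minus[xi])
                     for xi, vi in zip(x, v)])

rng = np.random.default_rng(1)
counts = rng.poisson(3.0, size=5000)   # hypothetical annual count data
u = jitter_counts(counts, rng)
```

By construction each observation is jittered within its own CDF interval, so ties in the discrete counts disappear while ranks are preserved up to the jitter.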
  26. By: Haroon Mumtaz; Sofia Velasco
    Abstract: This paper develops a dynamic factor model in which common level and volatility factors evolve jointly, allowing conditional means and variances to interact endogenously within a large-information setting. The joint evolution of these factors provides a tractable framework for modeling risk, as fluctuations in volatility affect both the dispersion and the location of outcomes, generating state-dependent and asymmetric tail risks in predictive distributions. Volatility is captured by latent common factors that drive co-movement in second moments across a large panel, while heavy-tailed idiosyncratic shocks absorb transitory outliers and isolate persistent uncertainty dynamics. The framework embeds these interactions directly within a factor structure, allowing risk to arise endogenously from the joint dynamics of the system rather than being imposed through reduced-form approaches. Empirically, the model delivers systematic improvements in density forecast accuracy, particularly in the tails of the predictive distribution and at medium horizons. An application to international inflation highlights a dominant global level component in advanced economies and stronger regional and volatility contributions in emerging and developing economies, pointing to substantial heterogeneity in the role of uncertainty across countries.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.03681
  27. By: Alfred Galichon; Marc Henry
    Abstract: We provide an overview of optimal transport theory and its applications to econometric methodology. This review is specifically designed for practitioners, be they econometric theorists or applied econometricians. The review of applications to econometrics is organized around the particular aspects of the mathematical theory of optimal transport that they rely on.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.04227
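A minimal concrete instance of optimal transport, useful as a first point of contact for practitioners (a standard fact, not specific to this review): on the real line, the optimal coupling between two equal-size empirical measures pairs order statistics, so the Wasserstein-1 distance is a mean absolute difference of sorted samples.

```python
import numpy as np

def wasserstein_1d(x, y):
    # For equal-size empirical measures on R, the optimal transport plan
    # matches order statistics, so W1 reduces to the mean absolute
    # difference of the sorted samples.
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    assert x.size == y.size
    return np.mean(np.abs(x - y))

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=10000)
y = rng.normal(0.5, 1.0, size=10000)   # same shape, shifted by 0.5
w1 = wasserstein_1d(x, y)              # close to the location shift 0.5
```

For a pure location shift of the same distribution, W1 equals the shift exactly, which makes this a convenient sanity check for transport-based estimators.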
  28. By: Federico Echenique; Gerelt Tserenjigmid
    Abstract: McGranaghan, Nielsen, O'Donoghue, Somerville, and Sprenger [2024] argue that standard paired choice tests for the common ratio effect are structurally biased when choice is stochastic, proposing valuation tests as a robust alternative. Using valuation tests, they find no systematic evidence for the common ratio effect, seemingly overturning much of the extant literature. We evaluate this conclusion in light of stochastic choice theory. We demonstrate that valuation tests are inherently biased and lack predictive power under standard expected utility assumptions. In contrast, we advocate for a ``strong'' paired choice test, proving it remains robustly unbiased across standard models of stochastic choice. Applying this strong test to existing experimental data, we find that the common ratio effect remains highly prevalent.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.06050
  29. By: Cristian Espinal Maya
    Abstract: This paper establishes the theoretical and practical foundations for using Large Language Models (LLMs) as measurement instruments for latent economic variables -- specifically variables that describe the cognitive content of occupational tasks at a level of granularity not achievable with existing survey instruments. I formalize four conditions under which LLM-generated scores constitute valid instruments: semantic exogeneity, construct relevance, monotonicity, and model invariance. I then apply this framework to the Augmented Human Capital Index (AHC_o), constructed from 18,796 O*NET task statements scored by Claude Haiku 4.5, and validated against six existing AI exposure indices. The index shows strong convergent validity (r = 0.85 with Eloundou GPT-gamma, r = 0.79 with Felten AIOE) and discriminant validity. Principal component analysis confirms that AI-related occupational measures span two distinct dimensions -- augmentation and substitution. Inter-rater reliability across two LLM models (n = 3,666 paired scores) yields Pearson r = 0.76 and Krippendorff's alpha = 0.71. Prompt sensitivity analysis across four alternative framings shows that task-level rankings are robust. Obviously Related Instrumental Variables (ORIV) estimation recovers coefficients 25% larger than OLS, consistent with classical measurement error attenuation. The methodology generalizes beyond labor economics to any domain where semantic content must be quantified at scale.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.02403
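The two reliability statistics reported above have simple closed forms for two raters with complete interval-scale data. A sketch of both, on hypothetical paired scores (this is one standard formulation of Krippendorff's alpha, alpha = 1 - D_o/D_e, not the paper's own code):

```python
import numpy as np

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

def krippendorff_alpha_interval(x, y):
    # Two raters, interval metric, no missing data. D_o is the mean
    # squared within-unit disagreement; D_e is the expected disagreement
    # across all pairable values, which for this case equals twice the
    # sample variance of the pooled scores.
    x, y = np.asarray(x, float), np.asarray(y, float)
    d_o = np.mean((x - y) ** 2)
    z = np.concatenate([x, y])
    d_e = 2.0 * z.var(ddof=1)
    return 1.0 - d_o / d_e

# Hypothetical paired scores from two models rating the same items.
rng = np.random.default_rng(3)
truth = rng.uniform(0, 10, size=500)
rater_a = truth + rng.normal(scale=1.0, size=500)
rater_b = truth + rng.normal(scale=1.0, size=500)
r = pearson_r(rater_a, rater_b)
alpha = krippendorff_alpha_interval(rater_a, rater_b)
```

Unlike Pearson's r, which is invariant to each rater's scale and location, alpha penalizes absolute disagreement, so a rater with a systematic offset lowers alpha but not r.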

This nep-ecm issue is ©2026 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the Griffith Business School of Griffith University in Australia.