nep-ecm New Economics Papers
on Econometrics
Issue of 2026–04–27
23 papers chosen by
Sune Karlsson, Örebro universitet


  1. Mixed difference integer-valued GARCH model for Z-valued time series By Aknouche, Abdelhakim; Francq, Christian; Goto, Yuichi
  2. Jackknife Instrumental Variable Inference By Federico Crudu; Giovanni Mellace; Zsolt Sándor
  3. Factor-Augmented Panel Regressions and Variance-Weighted Treatment Effects By Artūras Juodis; Martin Weidner
  4. Integrating Diagnostic Checks into Estimation By Reca Sarfati; Vod Vilfort
  5. Nonparametric Point Identification of Treatment Effect Distributions via Rank Stickiness By Tengyuan Liang
  6. Generalized Bayesian Composite Quantile Regression with an Application to Equity Premium Forecasting By Hardy, Nicolas; Korobilis, Dimitris
  7. Subsample-based Estimation under Dynamic Contamination By Yukai Yang; Rickard Sandberg
  8. Bootstrap consistency for general double/debiased machine learning estimators By Ziming Lin; Fang Han
  9. The realized copula of volatility By Kim Christensen; Wenjing Liu; Zhi Liu; Yoann Potiron
  10. Informativeness under Model Uncertainty: Shadow Prices and Ridge Penalties By Jieun Lee; Esfandiar Maasoumi
  11. Identifying relationship-level effects using covariance restrictions By Olivier De Jonghe; Daniel Lewis
  12. Clustered Local Projections for Time-Varying Models By Ana Maria Herrera; Elena Pesavento; Alessia Scudiero
  13. Batch-Adaptive Causal Annotations By Ezinne Nwankwo; Lauri Goldkind; Angela Zhou
  14. Quantum Bayesian inference: an exploration By Jon Frost; Carlos Madeira; Yash Rastogi; Harald Uhlig
  15. Recent Advances in Causal Analysis of the Stochastic Frontier Model By Samuele Centorrino; Christopher F. Parmeter
  16. Causal inference for social network formation By Maximilian Kasy; Elizabeth Linos; Sanaz Mobasseri
  17. Text as Priors By Ge, S.; Li, S.; Linton, O. B.; Su, W.
  18. Path-Explosive Behaviour in Economic Time Series: A Realization-Centred Exploratory Framework By José Francisco Perles-Ribes
  19. Post-Screening Portfolio Selection By Yoshimasa Uematsu; Shinya Tanaka
  20. Flexible Bayesian Models for Time-Varying Income Distributions By David Gunawan
  21. True and Pseudo-True Parameters By Isaiah Andrews; Harvey Barnhard; Jacob Carlson
  22. Biases in Loan Recovery-Rate Estimation By Jaime Leyva; Tiago Pinheiro
  23. Orthogonal reparametrization of the Nelson-Siegel-Svensson interest rate curve model: conditioning, diagnostics, and identifiability By Robert Flassig; Emrah Gülay; Daniel Guterding

  1. By: Aknouche, Abdelhakim; Francq, Christian; Goto, Yuichi
    Abstract: In this paper, we introduce flexible observation-driven Z-valued time series models constructed from mixtures of negative and non-negative components. Compared to models based on the standard Skellam distribution or on a difference of two integer-valued variables, our specification offers greater versatility. For example, it easily allows for skewness and bimodality. Furthermore, the observation of one component of the mixture makes interpretation and statistical analysis easier. We establish conditions for stationarity and mixing, and develop a mixed Poisson quasi-maximum likelihood estimator with proven asymptotic properties. A portmanteau test is proposed to diagnose residual serial dependence. The finite-sample performance of the methodology is assessed via simulation, and an empirical application on tick prices demonstrates its practical usefulness.
    Keywords: Discrete difference distribution; GARCH for tick-by-tick data; Mixed difference; Mixed Poisson QMLE; Random-weighting bootstrap; Z-valued time series.
    JEL: C12 C13 C22 C25 C58
    Date: 2026–03–13
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:128358
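    Sketch: A minimal simulation of the idea behind a Z-valued series built as a difference of counts with GARCH-type feedback. The INGARCH(1,1) recursion and all parameter values are illustrative stand-ins, not the authors' exact mixed-difference specification.
      import numpy as np

      rng = np.random.default_rng(0)
      T = 1000
      omega, alpha, beta = 0.2, 0.3, 0.5          # hypothetical INGARCH parameters
      lam_p = lam_n = omega / (1 - alpha - beta)  # start at the unconditional mean
      z = np.zeros(T, dtype=int)
      for t in range(T):
          p = rng.poisson(lam_p)                  # non-negative count component
          n = rng.poisson(lam_n)                  # negative count component
          z[t] = p - n                            # observed Z-valued outcome
          # feedback: each intensity reacts to its own last count
          lam_p = omega + alpha * p + beta * lam_p
          lam_n = omega + alpha * n + beta * lam_n
      print("mean:", z.mean(), "variance:", z.var())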
  2. By: Federico Crudu; Giovanni Mellace; Zsolt Sándor
    Abstract: This paper introduces a class of jackknife-based test statistics for linear regression models with endogeneity and heteroskedasticity in the presence of many potentially weak instrumental variables. The tests may be used when considering hypotheses on the full parameter vector or hypotheses defined as linear restrictions. We show that in the limit and under the null the proposed statistics are distributed as a combination of chi-squares, but by modifying the objective function we derive more familiar chi-square limits. An extensive simulation study shows the competitive finite-sample properties of the proposed tests, in particular against Anderson-Rubin-type statistics. Finally, we provide an empirical illustration that applies the proposed tests to study the effect of alcohol consumption on body mass index, using genetic variants from the UK Biobank as instrumental variables.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.15437
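    Sketch: The paper's contribution is the family of test statistics, but the leave-one-out construction at their core can be previewed with a JIVE1-style point estimate on a simulated many-weak-instruments design (all parameter values below are hypothetical):
      import numpy as np

      rng = np.random.default_rng(1)
      n, k = 500, 30                         # many, individually weak instruments
      Z = rng.standard_normal((n, k))
      u = rng.standard_normal(n)
      x = Z @ np.full(k, 0.1) + 0.8 * u + rng.standard_normal(n)  # endogenous regressor
      y = 1.0 * x + u                        # true coefficient is 1

      P = Z @ np.linalg.solve(Z.T @ Z, Z.T)  # projection onto the instrument space
      h = np.diag(P)
      xhat_loo = (P @ x - h * x) / (1 - h)   # leave-one-out first-stage fitted values
      beta_jive = (xhat_loo @ y) / (xhat_loo @ x)
      beta_2sls = (x @ P @ y) / (x @ P @ x)  # biased toward OLS with many instruments
      print(f"2SLS: {beta_2sls:.3f}  JIVE: {beta_jive:.3f}  (true 1.0)")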
  3. By: Artūras Juodis; Martin Weidner
    Abstract: We revisit panel regressions with unobserved heterogeneity through the lens of variance-weighted average treatment effects. Building on established results for cross-sectional OLS and one-way fixed effects panels, we show that two-way panel estimators with latent factors, specifically the principal components estimator of Greenaway-McGrevy, Han and Sul (2012) and the interactive fixed effects estimator of Bai (2009), also converge to interpretable estimands under fully nonparametric assumptions. Both estimators consistently estimate the same variance-weighted average of unit-time-specific treatment effects, where the weights are proportional to the conditional variance of the regressor given the unobserved heterogeneity. The result requires the number of estimated factors to grow with the sample size and applies to the single regressor case. We discuss the challenges that arise when extending to multiple regressors and to inference.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.18078
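    Sketch: A compact illustration of the Bai (2009)-style interactive fixed effects iteration for the single-regressor case the paper studies: alternate between OLS given estimated factors and principal components of the residuals. The DGP and dimensions are illustrative.
      import numpy as np

      rng = np.random.default_rng(2)
      N, T, R = 100, 80, 2                       # units, periods, factors
      F = rng.standard_normal((T, R))            # latent factors
      L = rng.standard_normal((N, R))            # loadings
      X = L @ F.T + rng.standard_normal((N, T))  # regressor correlated with factors
      Y = 0.5 * X + L @ F.T + rng.standard_normal((N, T))

      beta = 0.0
      for _ in range(200):
          U = Y - beta * X                       # residuals given current beta
          vals, vecs = np.linalg.eigh(U.T @ U)   # top-R eigenvectors = factor estimates
          Fhat = vecs[:, -R:]
          M = np.eye(T) - Fhat @ Fhat.T          # annihilator of the factor space
          beta_new = np.sum((X @ M) * Y) / np.sum((X @ M) * X)
          if abs(beta_new - beta) < 1e-10:
              break
          beta = beta_new
      print("interactive-FE estimate:", round(beta, 3), "(true 0.5)")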
  4. By: Reca Sarfati; Vod Vilfort
    Abstract: Empirical researchers often use diagnostic checks to assess the plausibility of their modeling assumptions, such as testing for covariate balance in RCTs, pre-trends in event studies, or instrument validity in IV designs. While these checks are traditionally treated as external hurdles to estimation, we argue they should be integrated into the estimation process itself. In particular, we propose residualizing one's baseline estimator against the vector of diagnostic check statistics to remove the component of baseline sampling variation explained by the diagnostic checks. This residualized estimator offers researchers a "free lunch," delivering three properties simultaneously: (i) eliminating inference distortions from check-based selective reporting; (ii) reducing variance without changing the estimand when the baseline model is correctly specified; and (iii) minimizing worst-case bias under bounded local misspecification within the class of linear adjustments. We apply our method to the RCT in Kaur et al. (2024) and find that, even in a setting where all balance checks pass comfortably, residualization increases the magnitude of the baseline point estimate and reduces its standard error, equivalent to approximately a 10% increase in sample size.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.16690
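    Sketch: The residualization step has a transparent one-covariate analogue: project the baseline estimator on the balance-check statistic and subtract the explained sampling variation. The Monte Carlo projection coefficient below stands in for the plug-in covariance a practitioner would compute from a single sample; the toy RCT is hypothetical.
      import numpy as np

      rng = np.random.default_rng(3)

      def one_trial(n=400, tau=1.0):
          d = rng.integers(0, 2, n)                    # random assignment
          x = rng.standard_normal(n)                   # baseline covariate
          y = tau * d + 2.0 * x + rng.standard_normal(n)
          theta = y[d == 1].mean() - y[d == 0].mean()  # baseline estimator
          check = x[d == 1].mean() - x[d == 0].mean()  # balance check (mean zero)
          return theta, check

      draws = np.array([one_trial() for _ in range(2000)])
      theta, check = draws[:, 0], draws[:, 1]
      b = np.cov(theta, check)[0, 1] / check.var()     # projection coefficient
      theta_res = theta - b * check                    # residualized estimator
      print("sd baseline:", round(theta.std(), 3),
            " sd residualized:", round(theta_res.std(), 3))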
  5. By: Tengyuan Liang
    Abstract: Treatment effect distributions are not identified without restrictions on the joint distribution of potential outcomes. Existing approaches either impose rank preservation -- a strong assumption -- or derive partial identification bounds that are often wide. We show that a single scalar parameter, rank stickiness, suffices for nonparametric point identification while permitting rank violations. The identified joint distribution -- the coupling that maximizes average rank correlation subject to a relative entropy constraint, which we call the Bregman-Sinkhorn copula -- is uniquely determined by the marginals and rank stickiness. Its conditional distribution is an exponential tilt of the marginal with a Bregman divergence as the exponent, yielding closed-form conditional moments and rank violation probabilities; the copula nests the comonotonic and Gaussian copulas as special cases. The empirical Bregman-Sinkhorn copula converges at the parametric $\sqrt{n}$-rate with a Gaussian process limit, despite the infinite-dimensional parameter space. We apply the framework to estimate the full treatment effect distribution, derive a variance estimator for the average treatment effect tighter than the Fréchet--Hoeffding and Neyman bounds, and extend to observational studies under unconfoundedness.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.21548
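    Sketch: The kind of coupling involved can be illustrated with plain Sinkhorn iterations: a Gibbs kernel that rewards rank agreement, scaled to match the two empirical marginals. Here epsilon plays the role of the rank-stickiness parameter; the exact Bregman divergence in the paper's objective is not reproduced.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 200
      y0 = np.sort(rng.standard_normal(n))     # marginal of Y(0)
      y1 = np.sort(rng.gumbel(size=n))         # marginal of Y(1)
      u = (np.arange(1, n + 1) - 0.5) / n      # normalized ranks
      eps = 0.05                               # "stickiness": smaller = more comonotone
      K = np.exp(np.outer(u, u) / eps)         # kernel rewarding rank agreement
      a = b = np.full(n, 1.0 / n)              # uniform marginal weights
      v = np.ones(n)
      for _ in range(500):                     # Sinkhorn scaling iterations
          w = a / (K @ v)
          v = b / (K.T @ w)
      pi = w[:, None] * K * v[None, :]         # entropic coupling of the marginals
      te = y1[None, :] - y0[:, None]           # implied treatment effects
      m = np.sum(pi * te)
      sd = np.sqrt(np.sum(pi * te ** 2) - m ** 2)
      print("TE mean:", round(m, 3), " TE sd under the coupling:", round(sd, 3))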
  6. By: Hardy, Nicolas; Korobilis, Dimitris
    Abstract: Composite quantile regression (CQR) is a robust and efficient estimator under heavy-tailed and contaminated errors. Existing Bayesian extensions rely on working likelihoods that require latent-variable augmentation and can deliver poorly calibrated credible intervals. We develop generalized Bayesian CQR, which exponentiates the composite quantile loss directly, targeting the same objective as frequentist CQR. Because generalized Bayes replaces point optimization with posterior averaging over the loss surface, it is especially relevant under heavy-tailed errors where the composite quantile loss flattens near its minimum. In generalized Bayes, posterior dispersion depends on a learning rate that we calibrate by matching marginal variances to their frequentist sandwich counterparts. The resulting credible intervals achieve near-nominal coverage in cross-sectional settings and substantially reduce the undercoverage of i.i.d. intervals under serial dependence, with a residual shortfall under high persistence that mirrors the finite-sample bias of frequentist HAC inference. The calibration has a closed-form solution under flat priors and extends to normal and spike-and-slab LASSO priors for shrinkage and variable selection. Sampling uses standard Metropolis-Hastings with no latent variables, achieving roughly 100-fold computational gains over likelihood-based Bayesian CQR at a common quantile grid. Monte Carlo experiments show competitive or improved point estimation relative to frequentist CQR, reliable coverage, and robust variable selection across Gaussian, heavy-tailed, and contaminated error distributions. An equity premium forecasting application demonstrates that the efficiency and robustness gains translate into economically meaningful improvements in out-of-sample portfolio performance.
    Keywords: Composite quantile regression, Gibbs posterior, Generalized Bayes, Learning rate calibration, Equity premium forecasting, Spike-and-slab priors
    JEL: C11 C14 C21 C52 C53 E37 G17
    Date: 2026–04–14
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:128752
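    Sketch: A bare-bones version of the sampler: exponentiate the composite quantile loss with a learning rate eta and run random-walk Metropolis-Hastings under a flat prior. The fixed eta below replaces the paper's sandwich-matching calibration, and the DGP is illustrative.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 300
      x = rng.standard_normal(n)
      y = 2.0 * x + rng.standard_t(df=3, size=n)  # heavy-tailed errors, slope 2
      taus = np.array([0.25, 0.5, 0.75])          # common quantile grid

      def comp_loss(theta):
          b, beta = theta[:-1], theta[-1]         # per-quantile intercepts, one slope
          r = y[None, :] - b[:, None] - beta * x[None, :]
          return np.sum(np.where(r >= 0, taus[:, None] * r, (taus[:, None] - 1) * r))

      eta = 1.0                                   # learning rate (calibrated in the paper)
      theta = np.zeros(4)
      logp = -eta * comp_loss(theta)              # Gibbs posterior, flat prior
      draws = []
      for _ in range(5000):
          prop = theta + 0.05 * rng.standard_normal(4)
          logp_prop = -eta * comp_loss(prop)
          if np.log(rng.uniform()) < logp_prop - logp:
              theta, logp = prop, logp_prop       # accept; no latent variables anywhere
          draws.append(theta[-1])
      print("posterior mean slope:", round(float(np.mean(draws[2000:])), 3), "(true 2.0)")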
  7. By: Yukai Yang; Rickard Sandberg
    Abstract: Subsample-based estimation is a standard tool for achieving robustness to outliers in econometric models. This paper shows that, in dynamic time series settings, such procedures are fundamentally invalid under contamination, even under oracle knowledge of contamination locations. The key issue is that contamination propagates through the model's residual filter and distorts the estimation criterion itself. As a result, removing contaminated observations does not, in general, restore the uncontaminated objective or ensure consistency. We characterise this failure as a structural incompatibility between pointwise subsampling and residual propagation. To address it, we propose a propagation-compatible transformation of index sets, formalised through a patch removal operator that removes the residual footprint of contamination. Under suitable conditions, the proposed operator leaves the estimator asymptotically unchanged under the uncontaminated model, while restoring consistency for the clean-data parameter under contamination. The results apply to a broad class of residual-based estimators and show that valid subsample-based estimation in dynamic models requires explicit control of residual propagation.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.17676
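    Sketch: The propagation problem is easy to reproduce in an AR(1): an additive outlier at time t contaminates both the residual at t and, through the lag, the residual at t+1. Dropping only time t (pointwise removal) leaves the outlier in as a regressor; the patch below also drops t+1. This is a one-lag illustration, not the paper's general patch removal operator.
      import numpy as np

      rng = np.random.default_rng(14)
      T, phi = 2000, 0.7
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = phi * y[t - 1] + rng.standard_normal()
      bad = rng.choice(np.arange(1, T - 1), 100, replace=False)  # contamination times
      yc = y.copy()
      yc[bad] += 15.0                                            # additive outliers

      def ar1_est(drop):
          # least squares on residual pairs (y_t, y_{t-1}), skipping dropped indices
          idx = np.setdiff1d(np.arange(1, T), drop)
          return (yc[idx] @ yc[idx - 1]) / (yc[idx - 1] @ yc[idx - 1])

      pointwise = ar1_est(bad)                   # oracle pointwise removal: still biased
      patch = ar1_est(np.union1d(bad, bad + 1))  # remove the propagated residual too
      print(f"pointwise: {pointwise:.3f}  patch: {patch:.3f}  (true phi 0.7)")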
  8. By: Ziming Lin; Fang Han
    Abstract: Double/debiased machine learning (DML) provides a general framework for inference with high-dimensional or otherwise complex nuisance parameters by combining Neyman-orthogonal scores with cross-fitting, thereby circumventing classical Donsker-type conditions in many modern machine-learning settings. Despite its strong empirical performance, bootstrap inference for DML estimators has received little theoretical justification. This is particularly noteworthy since bootstrap methods are suggested and used for inference on DML estimators, even though bootstrap procedures can fail for estimators that are root-$n$ consistent and asymptotically normal. This paper fills this gap by establishing bootstrap validity for DML estimators under general exchangeably weighted resampling schemes, with Efron's bootstrap as a special case. Under exactly the same conditions required for the validity of DML itself, we prove that the bootstrap law converges conditionally weakly to the sampling law of the original estimator.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.17239
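    Sketch: Once cross-fitting has produced Neyman-orthogonal scores, the exchangeably weighted bootstrap acts only on the averaging step. The sketch below simulates the scores directly (m stands in for cross-fitted AIPW-type pseudo-outcomes) and uses Dirichlet weights, one member of the exchangeable family; multinomial weights would give Efron's bootstrap.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 1000
      m = 2.0 + rng.standard_t(df=5, size=n)   # stand-in orthogonal scores, theta = 2
      theta_hat = m.mean()

      B = 2000
      boots = np.empty(B)
      for i in range(B):
          w = rng.dirichlet(np.ones(n))        # exchangeable (Bayesian-bootstrap) weights
          boots[i] = w @ m                     # reweighted DML average
      print(f"theta: {theta_hat:.3f}  bootstrap SE: {boots.std():.4f}  "
            f"plug-in SE: {m.std() / np.sqrt(n):.4f}")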
  9. By: Kim Christensen; Wenjing Liu; Zhi Liu; Yoann Potiron
    Abstract: We study a new measure of codependency in the second moment of a continuous-time multivariate asset price process, which we name the realized copula of volatility. The statistic is based on local volatility estimates constructed from high-frequency asset returns and affords a nonparametric estimator of the empirical copula of the latent stochastic volatility. We show consistency of our estimator with in-fill asymptotic theory, either with a fixed or increasing time span. In the latter setting, we derive a functional central limit theorem for the empirical process associated with the measurement error of the time-invariant marginal copula of volatility. We also develop a goodness-of-fit test to evaluate hypotheses about the shape of the latter. In a simulation study, we demonstrate that our estimator is a good proxy of both the empirical and marginal copula of volatility, even with a moderate amount of high-frequency data recorded over a relatively short sample. The goodness-of-fit test is found to exhibit size control and excellent power. We implement our framework on high-frequency transaction data from futures contracts that track the U.S. equity and Treasury bond markets. A Gumbel copula is found to offer a near-perfect bind between the realized variance processes in these data.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.15811
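    Sketch: The object itself is easy to mimic on simulated data: estimate local (here, daily) variances of two assets from intraday returns, then rank-transform the estimates to the copula scale. The one-factor volatility DGP below is a toy, not the paper's model.
      import numpy as np

      rng = np.random.default_rng(7)
      days, m = 250, 390                       # days, intraday returns per day
      f = rng.standard_normal(days)            # common volatility factor
      s1 = np.exp(0.5 * f + 0.3 * rng.standard_normal(days))
      s2 = np.exp(0.5 * f + 0.3 * rng.standard_normal(days))
      r1 = s1[:, None] / np.sqrt(m) * rng.standard_normal((days, m))
      r2 = s2[:, None] / np.sqrt(m) * rng.standard_normal((days, m))

      rv1 = (r1 ** 2).sum(axis=1)              # local realized variances
      rv2 = (r2 ** 2).sum(axis=1)
      u = rv1.argsort().argsort() / (days - 1) # rank transform -> copula scale
      v = rv2.argsort().argsort() / (days - 1)
      print("Spearman rho of the volatility copula:", round(np.corrcoef(u, v)[0, 1], 3))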
  10. By: Jieun Lee; Esfandiar Maasoumi
    Abstract: We develop inference under model uncertainty due to weak, noisy, multiple candidate restrictions and theories, and nuisance control covariates. A unified framework is given with degrees of misspecification and corresponding shadow prices, based on a Lagrangian constrained optimization approach, and a data-driven tolerance parameter selected via a Stein-type (shrinkage) risk criterion. A debiasing step is based on Karush-Kuhn-Tucker conditions. We introduce individual shadow prices (ISP) for different restrictions to measure empirical relevance and propose a plateau rule to separate signal from noise. We establish consistency and asymptotic normality of the estimators and characterize the ISP. Simulations and an application to a Solow growth model illustrate the method's practical usefulness.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.15571
  11. By: Olivier De Jonghe; Daniel Lewis
    Abstract: We propose a new model in which relationship-specific effects or shocks are identified in a bipartite network under mild covariance restrictions, generalizing the influential Abowd et al. (1999) framework. For example, separate demand shocks are identified for each bank from which a firm borrows. We show how previous approaches break down when confronted with such heterogeneity, while our novel identification strategy yields a simple estimator that is consistent and asymptotically normal, under weaker network density assumptions than previous approaches. The methodology performs well in empirically calibrated simulations. We apply our approach to identify relationship-level credit demand and supply shocks for thousands of firms and banks across nine Euro-area countries and three distinct economic episodes. We formally reject the Abowd et al. (1999) assumptions in nearly every country-period and show that within-firm/bank shock variation is of comparable scale to between-firm/bank variation. We document considerable bias in Abowd et al. (1999) style estimates and associated regressions, while finding significant deleterious effects of the post-2022 monetary contraction on exposed firms. We highlight novel heterogeneity in the transmission of monetary policy.
    Date: 2026–04–16
    URL: https://d.repec.org/n?u=RePEc:azt:cemmap:06/26
  12. By: Ana Maria Herrera; Elena Pesavento; Alessia Scudiero
    Abstract: We propose a clustered local projection (clustered LP) method to estimate impulse response functions in a class of time-varying models where parameter variation is linked to a low-dimensional matrix of observables. We show that the clustered LP recovers the conditional average response when the driving variables are exogenous and a weighted average of the conditional marginal effects when they are endogenous. We propose an iterative estimation method that first classifies the data using k-means, estimates impulse response functions via GMM, and evaluates differences across clustered LP estimates. Our Monte Carlo simulations illustrate the ability of clustered LP to approximate the conditional average response function. We employ our technique to examine how uncertainty influences the transmission of a contractionary monetary policy shock to the 5- and 10-year U.S. nominal Treasury yields. Our estimation results suggest that macroeconomic and monetary policy uncertainty operate through complementary but distinct channels: the former primarily amplifies the risk compensation embedded in the term premium, while the latter governs the speed and persistence with which markets revise their expectations about the future rate path following a monetary policy shock.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.18778
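    Sketch: The two moving parts, k-means classification on the driving observables and per-cluster projections, fit in a few lines. The sketch below uses a one-variable regime DGP and reads off the impact (horizon-zero) response in each cluster; everything about the design is illustrative.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(8)
      T = 600
      unc = rng.standard_normal(T)             # observable driving the response
      shock = rng.standard_normal(T)           # identified exogenous shock
      beta_t = np.where(unc > 0, 1.5, 0.5)     # regime-dependent impulse response
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = 0.5 * y[t - 1] + beta_t[t] * shock[t] + 0.3 * rng.standard_normal()

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unc.reshape(-1, 1))
      for c in (0, 1):
          sel = labels == c
          X = np.column_stack([np.ones(sel.sum()), shock[sel]])
          b = np.linalg.lstsq(X, y[sel], rcond=None)[0]
          print(f"cluster {c}: impact response {b[1]:.2f}")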
  13. By: Ezinne Nwankwo; Lauri Goldkind; Angela Zhou
    Abstract: Estimating the causal effects of interventions is crucial to policy and decision-making, yet outcome data are often missing or subject to non-standard measurement error. While ground-truth outcomes can sometimes be obtained through costly data annotation or follow-up, budget constraints typically allow only a fraction of the dataset to be labeled. We address this challenge by optimizing which data points should be sampled for outcome information in order to improve efficiency in average treatment effect estimation with missing outcomes. We derive a closed-form solution for the optimal batch sampling probability by minimizing the asymptotic variance of a doubly robust estimator for causal inference with missing outcomes. Motivated by our street outreach partners, we extend the framework to costly annotations of unstructured data, such as text or images in healthcare and social services. Across simulated and real-world datasets, including one of outreach interventions in homelessness services, our approach achieves substantially lower mean-squared error and recovers the AIPW estimate with fewer labels than existing baselines. In practice, we show that our method can match confidence intervals obtained with 361 random samples using only 90 optimized samples, saving 75% of the labeling budget.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.10605
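    Sketch: The logic of the allocation can be previewed with a Neyman-style rule: label outcomes with probability proportional to the conditional standard deviation of the influence score, then apply an inverse-probability correction. The known conditional sd and the Horvitz-Thompson form below are simplifying assumptions, not the paper's exact closed-form solution.
      import numpy as np

      rng = np.random.default_rng(9)
      n, budget = 5000, 0.2                       # fraction of outcomes we can label
      x = rng.uniform(size=n)
      sd = 0.2 + 2.0 * x                          # conditional sd of the score (assumed known)
      score = sd * rng.standard_normal(n)         # influence scores, mean zero

      def se(pi, reps=500):
          est = [np.mean((rng.uniform(size=n) < pi) / pi * score) for _ in range(reps)]
          return np.std(est)                      # sd of the IPW-corrected estimate

      pi_unif = np.full(n, budget)                # label uniformly at random
      pi_opt = np.clip(sd / sd.mean() * budget, 0.01, 1.0)  # proportional to sd
      print("SE uniform:", round(se(pi_unif), 4), " SE optimized:", round(se(pi_opt), 4))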
  14. By: Jon Frost; Carlos Madeira; Yash Rastogi; Harald Uhlig
    Abstract: This paper introduces a framework for performing Bayesian inference using quantum computation. It presents a proof-of-concept quantum algorithm that performs posterior sampling. We provide an accessible introduction to quantum computation for economists and a practical demonstration of quantum-based posterior sampling for Bayesian estimation. Our key contribution is the preparation of a quantum state whose measurement yields samples from a discretised posterior distribution. While the proposed approach does not yet offer computational speedups over classical techniques such as Markov Chain Monte Carlo, it demonstrates the feasibility of simulating Bayesian inference with quantum computation. This work serves as a first step in integrating quantum computation into the econometrician's toolbox. It highlights both the conceptual promise and practical challenges – especially those related to quantum state preparation – in leveraging quantum computation for Bayesian inference.
    Keywords: quantum computing; Bayesian estimator; Bayesian inference; Markov chain Monte Carlo (MCMC) algorithms; Gibbs sampling
    JEL: C11 C20 C30 C50 C60
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:bis:biswps:1342
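    Sketch: What the quantum state encodes can be previewed classically: discretise the parameter on a grid of 2^q points, compute posterior weights, and sample from them. The quantum algorithm prepares a state whose measurement probabilities equal these weights; the stand-in below simply normalizes and draws on a classical machine.
      import numpy as np

      rng = np.random.default_rng(13)
      data = rng.normal(1.0, 1.0, size=50)         # observations with unknown mean
      grid = np.linspace(-3, 3, 2 ** 6)            # 6 "qubits" worth of grid points
      loglik = -0.5 * ((data[:, None] - grid[None, :]) ** 2).sum(axis=0)
      prior = np.full(grid.size, 1.0 / grid.size)  # flat prior on the grid
      post = np.exp(loglik - loglik.max()) * prior
      post /= post.sum()                           # the |amplitude|^2 a circuit would encode
      samples = rng.choice(grid, size=1000, p=post)
      print("posterior mean:", round(samples.mean(), 3), " sd:", round(samples.std(), 3))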
  15. By: Samuele Centorrino; Christopher F. Parmeter
    Abstract: Causal inference methods (instrumental variables, difference-in-differences, regression discontinuity, etc.) are primary tools used across many social science milieus. One area where their application has lagged, however, is the study of productivity and efficiency. A main reason for this is that the nature of the stochastic frontier model does not immediately lend itself to a causal framework when interest hinges on an error component of the model. This paper reviews the nascent literature on attempts to merge the stochastic frontier literature with causal inference methods. We discuss modeling approaches and empirical issues that are likely to be relevant for applied researchers in this area. This review shows how the stochastic frontier model can be placed within the confines of causal analysis, surveys existing work that has already made inroads in this area, addresses challenges that have yet to be met, and discusses core findings.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.19693
  16. By: Maximilian Kasy; Elizabeth Linos; Sanaz Mobasseri
    Abstract: This paper develops a framework for identification, estimation, and inference on the causal mechanisms driving endogenous social network formation. Identification is challenging because of unobserved confounders and reverse causality; inference is complicated by questions of equilibrium and sampling. We leverage repeated observations of a network over time and random variation in initial ties to address challenges to causal identification. Our design-based approach sidesteps questions of sampling and asymptotics by treating both the set of nodes (individuals) and potential outcomes as non-random. We apply our approach to data from a large professional services firm, where new hires are randomly assigned to project teams within offices. We estimate the causal effect on tie formation of indirect ties, network degree, and local network density. Indirect ties have a strong and significant positive effect on tie formation, while the effects of degree and density are smaller and less robust.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.17952
  17. By: Ge, S.; Li, S.; Linton, O. B.; Su, W.
    Abstract: This paper studies how textual information can be used as prior information in high-dimensional penalized estimation. Rather than treating text as an additional set of regressors, we use textual analysis to construct two classes of priors: linkage priors, which encode the relative relevance of covariates, and direction priors, which encode prior information about coefficient signs. We incorporate these priors through weighted and asymmetric LASSO procedures. We show that, when the priors are sufficiently informative, they improve variable selection by relaxing the irrepresentable condition required for selection consistency of the standard LASSO, especially in settings with strongly correlated covariates. We illustrate the framework in three applications based on Chinese financial news: cross-firm return prediction, large precision matrix estimation for portfolio construction, and high-dimensional text regression with sentiment-based sign restrictions. Overall, the results show that text can enhance high-dimensional estimation not only as data, but also as a source of economically meaningful prior information.
    Date: 2026–04–13
    URL: https://d.repec.org/n?u=RePEc:cam:camdae:2630
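    Sketch: A linkage prior is just a per-coefficient penalty weight, and a weighted LASSO reduces to a plain LASSO after column rescaling, since penalizing w_j|b_j| is equivalent to penalizing |c_j| with c_j = w_j b_j on columns X_j / w_j. The weights and tuning value below are illustrative, standing in for text-derived relevance scores.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(10)
      n, p = 200, 50
      X = rng.standard_normal((n, p))
      beta = np.zeros(p)
      beta[:3] = 1.0                        # three truly relevant covariates
      y = X @ beta + rng.standard_normal(n)

      w = np.ones(p)
      w[:3] = 0.2                           # text prior: first three flagged as relevant
      fit = Lasso(alpha=0.3, fit_intercept=False).fit(X / w, y)
      b_weighted = fit.coef_ / w            # undo the rescaling
      print("selected covariates:", np.flatnonzero(b_weighted != 0))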
  18. By: José Francisco Perles-Ribes
    Abstract: We propose a descriptive, realization-centred framework for detecting and characterising explosive and co-explosive behaviour in economic time series, which we term path-explosive behaviour. Departing from the data-generating-process (DGP) perspective that underlies recursive unit root testing, the approach operates directly on observable path properties of the realised series. Four diagnostic layers -- level geometry, growth rate dynamics, normalised curvature, and log-space behaviour -- yield statistics that discriminate between genuine self-reinforcing multiplicative growth and I(2) dynamics without distributional assumptions or asymptotic critical values. Two theoretically motivated absolute gate thresholds screen detected episodes before a composite intensity score is assigned. Co-explosive behaviour between pairs of series is assessed at the episode level through a Jaccard co-occurrence index and non-parametric intensity concordance measures. The theoretical motivation draws on the path dependence and planning irreversibility literatures to argue that, in settings where discrete institutional decisions shape growth trajectories, a realization-centred characterisation is epistemically more appropriate than a DGP-based test. A simulation study across four DGP regimes validates the framework's discriminating power and conservatism. An empirical application to real house prices, commodity prices, public debt, and Spanish tourism destinations illustrates the empirical content of the path-explosive concept and distinguishes it from speculative bubble detection.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.16186
  19. By: Yoshimasa Uematsu; Shinya Tanaka
    Abstract: We propose post-screening portfolio selection (PS$^2$), a two-step framework for high-dimensional mean--variance investing. First, assets are screened by Lasso-type regression of a constant on excess returns without an intercept. Second, portfolio weights are estimated on the selected set using standard low-dimensional methods. Because strong factors can destroy sparsity in real data, we further introduce PS$^2$ with factors (FPS$^2$), which defactors returns before screening and allows factor investing in the final step. We establish theoretical guarantees, and simulations and an empirical application show competitive performance, especially when sparse screening is appropriate or strong factors are explicitly accommodated.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.17593
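    Sketch: Both steps are short in code: screen with a LASSO regression of the constant 1 on excess returns without an intercept, then compute plug-in mean-variance weights on the survivors. The simulated return panel and tuning parameter are illustrative, and the factor-adjusted FPS$^2$ step is omitted.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(11)
      T, p = 240, 100
      mu = np.zeros(p)
      mu[:5] = 0.01                                 # only a few assets carry premia
      R = mu + 0.05 * rng.standard_normal((T, p))   # excess returns

      coef = Lasso(alpha=0.008, fit_intercept=False).fit(R, np.ones(T)).coef_
      sel = np.flatnonzero(coef)                    # step 1: screened asset set
      Rs = R[:, sel]
      wgt = np.linalg.solve(np.cov(Rs.T), Rs.mean(axis=0))  # step 2: MV weights
      wgt /= wgt.sum()
      print("selected assets:", sel)
      print("portfolio weights:", np.round(wgt, 2))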
  20. By: David Gunawan
    Abstract: Survey data are widely used to study how income inequality, poverty, and welfare evolve over time. A common practice is to estimate the income distribution separately for each year, treating annual observations as independent cross-sections. For population subgroups with relatively small sample sizes, however, this approach can produce unstable parameter estimates, imprecise inference for inequality and poverty measures, and potentially misleading posterior probabilities of Lorenz and stochastic dominance. This paper develops flexible Bayesian models for time-varying income distributions that borrow strength across adjacent years by allowing the parameters of income distributions to evolve dynamically. We consider a random walk specification and an extended model with shrinkage priors. The proposed framework yields coherent inference for the full income distributions over time, as well as for associated inequality measures, poverty indices, and dominance probabilities. Simulation studies show that, relative to independent year-by-year models, the proposed approach produces substantially more precise and stable inference, while avoiding spurious variation in welfare comparisons. An application to two subgroups in the Household, Income and Labour Dynamics in Australia survey, Aboriginal Australians and residents of the Australian Capital Territory (ACT), shows that the dynamic models deliver improved inference for income distributions and related welfare measures, and can change conclusions about distributional dominance over time.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.21258
  21. By: Isaiah Andrews; Harvey Barnhard; Jacob Carlson
    Abstract: Parameter estimates in misspecified models converge to pseudo-true parameter values, which minimize a population objective function. Pseudo-true values often differ from quantities of economic interest, raising questions of how, if at all, they are relevant for decision-making. To study this question we consider Bayesian decision-makers facing a linear population minimum distance problem. Within a class of priors motivated by the minimum distance objective, we characterize prior sequences under which posteriors concentrate on the pseudo-true value. This convergence is fragile to small changes in priors, implying that pseudo-true values are relevant for decision-making only in special cases. Constructive results are nevertheless possible in this setting, and we derive simple confidence intervals that guarantee correct average coverage for the true parameter under every prior in the class we study, with no bound on the magnitude of misspecification.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.15563
  22. By: Jaime Leyva; Tiago Pinheiro
    Abstract: Sample selection and correlation between recoveries and the time to recovery are inherent to loan recovery datasets, leading to biased and inconsistent estimators of recovery rates. We characterize and quantify the biases of two common estimators, and we propose a class of estimators that corrects them. Simulations show that the bias increases with shorter observation windows and longer time to recovery. Real-world data on firm loans broadly confirms simulation results and shows that, even with sixty months of data, the biases of the two common estimators are at least ten percentage points and can be as high as twenty percentage points. These findings highlight the importance of addressing sample selection in recovery rate modeling for improved credit risk assessment and regulatory compliance.
    Date: 2026
    URL: https://d.repec.org/n?u=RePEc:ptu:wpaper:w202602
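    Sketch: The window bias is easy to reproduce: when recoveries are negatively correlated with the time to recovery and only loans resolved inside the observation window are recorded, the resolved-only mean is pulled up. All distributional choices below are illustrative.
      import numpy as np

      rng = np.random.default_rng(12)
      n = 100_000
      ttr = rng.exponential(24, n)                 # time to recovery, in months
      rec = np.clip(0.9 - 0.01 * ttr + 0.15 * rng.standard_normal(n), 0, 1)

      print("true mean recovery:", round(rec.mean(), 3))
      for window in (24, 60, 120):                 # months of observed data
          resolved = ttr <= window                 # only these loans are seen resolved
          print(f"window {window:3d}m: resolved-only mean {rec[resolved].mean():.3f}")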
  23. By: Robert Flassig; Emrah Gülay; Daniel Guterding
    Abstract: The Nelson-Siegel-Svensson (NSS) interest rate curve model yields a separable nonlinear least-squares problem whose inner linear block is often ill-conditioned because the basis functions become nearly collinear. We analyze this instability via an exact orthogonal reparametrization of the design matrix. A thin QR decomposition produces orthogonal linear parameters for which, conditional on the nonlinear parameters, the Fisher information matrix is diagonal. We also derive a finite-horizon analytical orthogonalization: on $[0, T]$, the $4\times 4$ continuous Gram matrix has closed-form entries involving exponentials, logarithms, and the exponential integral $E_1$, yielding an explicit horizon-dependent orthogonal NSS basis. Together with Jacobian-rank and profile-likelihood arguments, this representation clarifies the degenerate manifold $\lambda_1=\lambda_2$, where the Svensson extension loses two degrees of freedom. Orthogonalization leaves the least-squares fit and uncertainty of the original linear parameters unchanged, but isolates the conditioning structure. When the decay parameters are estimated jointly, the full first-order covariance in orthogonal coordinates admits an explicit Schur-complement form. The approach also yields a scalar identifiability diagnostic through the QR element $R_{44}$ and separates model reduction from numerical instability. Synthetic experiments confirm that orthogonal parametrization eliminates correlations among the linear parameters and keeps their conditional uncertainty uniform. A daily U.S. Treasury study on a reduced fixed 9-tenor grid from 1981 to 2026 shows smoother orthogonal parameter series than classical NSS parameters while the moving QR basis remains nearly constant.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.19290
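    Sketch: The core numerical point can be checked directly: build the four NSS basis columns on a maturity grid, compare conditioning before and after a thin QR, and watch the diagnostic element R[3, 3] (the paper's R_44) collapse as the two decay parameters approach each other. Grid and parameter values are illustrative.
      import numpy as np

      t = np.linspace(0.25, 30, 120)               # maturities in years

      def nss_basis(lam1, lam2):
          f1 = (1 - np.exp(-lam1 * t)) / (lam1 * t)
          f2 = (1 - np.exp(-lam2 * t)) / (lam2 * t)
          return np.column_stack([np.ones_like(t), f1,
                                  f1 - np.exp(-lam1 * t),   # NS curvature term
                                  f2 - np.exp(-lam2 * t)])  # Svensson curvature term

      for lam2 in (1.5, 0.52):                     # well-separated vs nearly degenerate
          X = nss_basis(0.5, lam2)
          Q, R = np.linalg.qr(X)                   # thin QR: orthogonal regressors Q
          print(f"lam2={lam2}: cond(X)={np.linalg.cond(X):.1e}, "
                f"cond(Q)={np.linalg.cond(Q):.2f}, |R44|={abs(R[3, 3]):.2e}")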

This nep-ecm issue is ©2026 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the Griffith Business School of Griffith University in Australia.