nep-ecm New Economics Papers
on Econometrics
Issue of 2025–06–16
thirty papers chosen by
Sune Karlsson, Örebro universitet


  1. Model-based Estimation of Difference-in-Differences with Staggered Treatments By Siddhartha Chib; Kenichi Shimizu
  2. Design-Based Inference under Random Potential Outcomes via Riesz Representation By Yukai Yang
  3. Detecting multiple change points in linear models with heteroscedastic errors By Lajos Horvath; Gregory Rice; Yuqian Zhao
  4. Model Checks in a Kernel Ridge Regression Framework By Yuhao Li
  5. Estimating the Number of Components in Panel Data Finite Mixture Regression Models with an Application to Production Function Heterogeneity By Yu Hao; Hiroyuki Kasahara
  6. What Makes Treatment Effects Identifiable? Characterizations and Estimators Beyond Unconfoundedness By Yang Cai; Alkis Kalavasis; Katerina Mamali; Anay Mehrotra; Manolis Zampetakis
  7. Evaluating financial tail risk forecasts: Testing Equal Predictive Ability By Lukas Bauer
  8. Identification and estimation of dynamic random coefficient models By Wooyong Lee
  9. Causal Inference with Endogenous Price Response By Przemyslaw Jeziorski; Dingzhe Leng; Stephan Seiler
  10. Sub-Gaussian Estimation of the Scatter Matrix in Ultra-High Dimensional Elliptical Factor Models with (2 + ε)th Moment By Yi Ding; Xinghua Zheng
  11. Frequentist Model Averaging with Nash Bargaining: A Stochastic Dominance Approach By Stelios Arvanitis
  12. Causal Inference in Counterbalanced Within-Subjects Designs By Justin Ho; Jonathan Min
  13. Large Bayesian VARs for Binary and Censored Variables By Joshua C. C. Chan; Michael Pfarrhofer
  14. Data Fusion for Partial Identification of Causal Effects By Quinn Lanners; Cynthia Rudin; Alexander Volfovsky; Harsh Parikh
  15. A Powerful Chi-Square Specification Test with Support Vectors By Yuhao Li; Xiaojun Song
  16. Bayesian Deep Learning for Discrete Choice By Daniel F. Villarraga; Ricardo A. Daziano
  17. The Conventional Impulse Response Prior in VAR Models with Sign Restrictions By Atsushi Inoue; Lutz Kilian
  18. Large structural VARs with multiple linear shock and impact inequality restrictions By Lukas Berend; Jan Prüser
  19. Slope Consistency of Quasi-Maximum Likelihood Estimator for Binary Choice Models By Yoosoon Chang; Joon Y. Park; Guo Yan
  20. Using Discrepancies to Correct for False Matches in Historical Linked Data By Yuya Sasaki; Ariell Zimran
  21. Empirically Implementing a Social Welfare Inference Framework By Charles Beach; Russell Davidson
  22. Reconciling Engineers and Economists: the Case of a Cost Function for the Distribution of Gas By Florens, Jean-Pierre; Fève, Frédérique; Simar, Léopold
  23. Tails of Cross-Sectional Return Distributions at High Frequencies By Torben G. Andersen; Yi Ding; Viktor Todorov
  24. Random Utility with Aggregated Alternatives By Yuexin Liao; Kota Saito; Alec Sandroni
  25. On the Estimation of Climate Normals and Anomalies By Tommaso Proietti; Alessandro Giovannelli
  26. Latent Variable Estimation in Bayesian Black-Litterman Models By Thomas Y. L. Lin; Jerry Yao-Chieh Hu; Paul W. Chiou; Peter Lin
  27. Computation of Policy Counterfactuals in Sequence Space By James Hebden; Fabian Winkler
  28. Deep Impulse Response Functions for Macroeconomic Dynamics: A Hybrid LSTM-Wavelet Approach Compared to an ANN-Wavelet and VECM Models By Bahaa Aly, Tarek
  29. Machine-learning Growth at Risk By Tobias Adrian; Hongqi Chen; Max-Sebastian Dovì; Ji Hyung Lee
  30. Recalibrating binary probabilistic classifiers By Dirk Tasche

  1. By: Siddhartha Chib; Kenichi Shimizu
    Abstract: We propose a model-based framework for estimating treatment effects in Difference-in-Differences (DiD) designs with multiple time-periods and variation in treatment timing. We first present a simple model for potential outcomes that respects the identifying conditions for the average treatment effects on the treated (ATT's). The model-based perspective is particularly valuable in applications with small sample sizes, where existing estimators that rely on asymptotic arguments may yield poor approximations to the sampling distribution of group-time ATT's. To improve parsimony and guide prior elicitation, we reparametrize the model in a way that reduces the effective number of parameters. Prior information about treatment effects is incorporated through black-box training sample priors and, in small-sample settings, by thick-tailed t-priors that shrink ATT's of small magnitudes toward zero. We provide a straightforward and computationally efficient Bayesian estimation procedure and establish a Bernstein-von Mises-type result that justifies posterior inference for the treatment effects. Simulation studies confirm that our method performs well in both large and small samples, offering credible uncertainty quantification even in settings that challenge standard estimators. We illustrate the practical value of the method through an empirical application that examines the effect of minimum wage increases on teen employment in the United States.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.18391
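    A minimal Python sketch (illustrative; not the authors' Bayesian estimator) of the group-time ATT building block that staggered DiD frameworks target, computed as simple 2x2 difference-in-differences against never-treated units on a simulated panel:
      # Group-time ATT(g, t) from a simulated staggered-adoption panel,
      # using never-treated units as the comparison group.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      n, T = 200, 6
      group = rng.choice([3, 5, 0], size=n, p=[0.4, 0.3, 0.3])   # first treatment period; 0 = never treated
      unit_fe = rng.normal(0, 1, n)
      rows = []
      for i in range(n):
          for t in range(1, T + 1):
              treated = group[i] != 0 and t >= group[i]
              y = unit_fe[i] + 0.5 * t + (1.0 if treated else 0.0) + rng.normal(0, 1)
              rows.append((i, t, group[i], y))
      df = pd.DataFrame(rows, columns=["id", "t", "g", "y"])

      def att_gt(df, g, t, base=None):
          """2x2 difference-in-differences ATT(g, t) versus never-treated units."""
          base = g - 1 if base is None else base          # last pre-treatment period
          treat, ctrl = df[df.g == g], df[df.g == 0]
          d_treat = treat[treat.t == t].y.mean() - treat[treat.t == base].y.mean()
          d_ctrl = ctrl[ctrl.t == t].y.mean() - ctrl[ctrl.t == base].y.mean()
          return d_treat - d_ctrl

      print("ATT(3, 4):", round(att_gt(df, g=3, t=4), 2))  # close to the true effect of 1.0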
  2. By: Yukai Yang
    Abstract: We introduce a general framework for design-based causal inference that accommodates stochastic potential outcomes, thereby extending the classical Neyman-Rubin setup in which outcomes are treated as fixed. In our formulation, each unit's potential outcome is modelled as a function $\tilde{y}_i(z, \omega)$, where $\omega$ denotes latent randomness external to the treatment assignment. Building on recent work that connects design-based estimation with the Riesz representation theorem, we construct causal estimators by embedding potential outcomes in a Hilbert space and defining treatment effects as linear functionals. This allows us to derive unbiased and consistent estimators, even when potential outcomes exhibit random variation. The framework retains the key advantage of design-based analysis, namely, the use of a known randomisation scheme for identification, while enabling inference in settings with inherent stochasticity. We establish large-sample properties under local dependence, provide a variance estimator compatible with sparse dependency structures, and illustrate the method through a simulation. Our results unify design-based reasoning with random-outcome modelling, broadening the applicability of causal inference in complex experimental environments.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.01324
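    A minimal design-based sketch in Python (illustrative only): a Horvitz-Thompson estimate of the average treatment effect under complete randomization with known assignment probabilities, where the simulated potential outcomes carry unit-level noise to mimic stochastic rather than fixed potential outcomes:
      # Horvitz-Thompson ATE estimator under complete randomization with a known
      # assignment scheme; potential outcomes are drawn with noise to mimic the
      # stochastic-potential-outcome setting described in the abstract.
      import numpy as np

      rng = np.random.default_rng(1)
      n, n_treat, tau = 500, 250, 2.0
      mu0 = rng.normal(0, 1, n)

      z = np.zeros(n, dtype=int)
      z[rng.choice(n, n_treat, replace=False)] = 1       # known randomisation scheme
      p = n_treat / n                                    # P(Z_i = 1), known by design

      y0 = mu0 + rng.normal(0, 0.5, n)                   # realised stochastic potential outcomes
      y1 = mu0 + tau + rng.normal(0, 0.5, n)
      y = np.where(z == 1, y1, y0)                       # observed outcome

      ate_ht = np.mean(z * y / p - (1 - z) * y / (1 - p))
      print("Horvitz-Thompson ATE estimate:", round(ate_ht, 3))  # close to 2.0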
  3. By: Lajos Horvath; Gregory Rice; Yuqian Zhao
    Abstract: The problem of detecting change points in the regression parameters of a linear regression model with errors and covariates exhibiting heteroscedasticity is considered. Asymptotic results for weighted functionals of the cumulative sum (CUSUM) processes of model residuals are established when the model errors are weakly dependent and non-stationary, allowing for either abrupt or smooth changes in their variance. These theoretical results illuminate how to adapt standard change point test statistics for linear models to this setting. We study such adapted change-point tests in simulation experiments, along with a finite-sample adjustment to the proposed testing procedures. The results suggest that these methods perform well in practice for detecting multiple change points in the linear model parameters and controlling the Type I error rate in the presence of heteroscedasticity. We illustrate the use of these approaches in applications to test for instability in predictive regression models and explanatory asset pricing models.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.01296
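    For intuition, a bare-bones Python sketch of a residual CUSUM statistic for a coefficient break in a linear model (the paper's weighted functionals and heteroscedasticity adjustments are not reproduced here):
      # Unweighted CUSUM of OLS residuals for detecting a break in regression
      # coefficients, on simulated data with heteroscedastic errors.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 400
      x = rng.normal(size=(n, 1))
      beta = np.where(np.arange(n) < 250, 1.0, 2.0)          # slope changes at t = 250
      y = beta * x[:, 0] + rng.normal(scale=1 + 0.5 * (np.arange(n) > 200), size=n)

      X = np.column_stack([np.ones(n), x])
      resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

      sigma = resid.std(ddof=X.shape[1])
      cusum = np.cumsum(resid) / (sigma * np.sqrt(n))
      k_hat = int(np.argmax(np.abs(cusum)))
      print("max |CUSUM| =", round(np.abs(cusum).max(), 2), "at t =", k_hat)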
  4. By: Yuhao Li
    Abstract: We propose new reproducing kernel-based tests for model checking in conditional moment restriction models. By regressing estimated residuals on kernel functions via kernel ridge regression (KRR), we obtain a coefficient function in a reproducing kernel Hilbert space (RKHS) that is zero if and only if the model is correctly specified. We introduce two classes of test statistics: (i) projection-based tests, using RKHS inner products to capture global deviations, and (ii) random location tests, evaluating the KRR estimator at randomly chosen covariate points to detect local departures. The tests are consistent against fixed alternatives and sensitive to local alternatives at the $n^{-1/2}$ rate. When nuisance parameters are estimated, Neyman orthogonality projections ensure valid inference without repeated estimation in bootstrap samples. The random location tests are interpretable and can visualize model misspecification. Simulations show strong power and size control, especially in higher dimensions, outperforming existing methods.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.01161
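    A short Python sketch of the core idea (illustrative; not the full test with critical values, Neyman orthogonalization, or bootstrap): regress the residuals of the fitted parametric model on a Gaussian kernel via kernel ridge regression and use the squared RKHS norm of the fitted coefficient function as a raw misspecification measure:
      # KRR of residuals on a Gaussian kernel; the squared RKHS norm of the fit
      # is zero in population only if the null model is correctly specified.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 300
      x = rng.uniform(-2, 2, n)
      y = np.sin(2 * x) + rng.normal(0, 0.3, n)              # true model is nonlinear

      # Fit the (misspecified) linear null model and form residuals
      X = np.column_stack([np.ones(n), x])
      e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

      # Kernel ridge regression of residuals on x
      gamma, lam = 1.0, 1e-2
      K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)     # Gaussian kernel matrix
      alpha = np.linalg.solve(K + n * lam * np.eye(n), e)     # KRR coefficients
      rkhs_norm_sq = alpha @ K @ alpha                        # ||f_hat||^2 in the RKHS

      print("squared RKHS norm of fitted residual function:", round(rkhs_norm_sq, 4))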
  5. By: Yu Hao; Hiroyuki Kasahara
    Abstract: This paper develops statistical methods for determining the number of components in panel data finite mixture regression models with regression errors independently distributed as normal or more flexible normal mixtures. We analyze the asymptotic properties of the likelihood ratio test (LRT) and information criteria (AIC and BIC) for model selection in both conditionally independent and dynamic panel settings. Unlike cross-sectional normal mixture models, we show that panel data structures eliminate higher-order degeneracy problems while retaining issues of unbounded likelihood and infinite Fisher information. Addressing these challenges, we derive the asymptotic null distribution of the LRT statistic as the maximum of random variables and develop a sequential testing procedure for consistent selection of the number of components. Our theoretical analysis also establishes the consistency of BIC and the inconsistency of AIC. Empirical application to Chilean manufacturing data reveals significant heterogeneity in production technology, with substantial variation in output elasticities of material inputs and factor-augmented technological processes within narrowly defined industries, indicating plant-specific variation in production functions beyond Hicks-neutral technological differences. These findings contrast sharply with the standard practice of assuming a homogeneous production function and highlight the necessity of accounting for unobserved plant heterogeneity in empirical production analysis.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.09666
  6. By: Yang Cai; Alkis Kalavasis; Katerina Mamali; Anay Mehrotra; Manolis Zampetakis
    Abstract: Most of the widely used estimators of the average treatment effect (ATE) in causal inference rely on the assumptions of unconfoundedness and overlap. Unconfoundedness requires that the observed covariates account for all correlations between the outcome and treatment. Overlap requires the existence of randomness in treatment decisions for all individuals. Nevertheless, many types of studies frequently violate unconfoundedness or overlap, for instance, observational studies with deterministic treatment decisions -- popularly known as Regression Discontinuity designs -- violate overlap. In this paper, we initiate the study of general conditions that enable the identification of the average treatment effect, extending beyond unconfoundedness and overlap. In particular, following the paradigm of statistical learning theory, we provide an interpretable condition that is sufficient and nearly necessary for the identification of ATE. Moreover, this condition characterizes the identification of the average treatment effect on the treated (ATT) and can be used to characterize other treatment effects as well. To illustrate the utility of our condition, we present several well-studied scenarios where our condition is satisfied and, hence, we prove that ATE can be identified in regimes that prior works could not capture. For example, under mild assumptions on the data distributions, this holds for the models proposed by Tan (2006) and Rosenbaum (2002), and the Regression Discontinuity design model introduced by Thistlethwaite and Campbell (1960). For each of these scenarios, we also show that, under natural additional assumptions, ATE can be estimated from finite samples. We believe these findings open new avenues for bridging learning-theoretic insights and causal inference methodologies, particularly in observational studies with complex treatment mechanisms.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.04194
  7. By: Lukas Bauer
    Abstract: This paper provides comprehensive simulation results on the finite sample properties of the Diebold-Mariano (DM) test by Diebold and Mariano (1995) and the model confidence set (MCS) testing procedure by Hansen et al. (2011) applied to the asymmetric loss functions specific to financial tail risk forecasts, such as Value-at-Risk (VaR) and Expected Shortfall (ES). We focus on statistical loss functions that are strictly consistent in the sense of Gneiting (2011a). We find that the tests show little power against models that underestimate the tail risk at the most extreme quantile levels, while the finite sample properties generally improve with the quantile level and the out-of-sample size. For small quantile levels and out-of-sample sizes of up to two years, we observe heavily skewed test statistics and non-negligible type III errors, which implies that researchers should be cautious about using standard normal or bootstrapped critical values. We demonstrate both empirically and theoretically how these unfavorable finite sample results relate to the asymmetric loss functions and the time-varying volatility inherent in financial return data.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.23333
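    An illustrative Python sketch of a Diebold-Mariano comparison of two VaR forecasts under the strictly consistent quantile ("pinball") loss with a simple Bartlett-kernel HAC variance; the small-sample caveats documented in the paper apply to exactly this kind of statistic:
      # DM test of equal predictive ability for two VaR forecasts under the
      # strictly consistent quantile loss.
      import numpy as np
      from scipy.stats import norm

      def quantile_loss(var_forecast, r, alpha):
          """Strictly consistent loss for the alpha-quantile (VaR reported as a quantile of returns)."""
          return (alpha - (r < var_forecast)) * (r - var_forecast)

      def dm_test(loss_a, loss_b, lags=5):
          d = loss_a - loss_b
          n, dbar = len(d), (loss_a - loss_b).mean()
          hac = np.var(d, ddof=0)
          for k in range(1, lags + 1):
              cov = np.cov(d[k:], d[:-k], ddof=0)[0, 1]
              hac += 2 * (1 - k / (lags + 1)) * cov           # Bartlett weights
          stat = dbar / np.sqrt(hac / n)
          return stat, 2 * norm.sf(abs(stat))

      rng = np.random.default_rng(4)
      n, alpha = 500, 0.05
      r = rng.standard_t(df=5, size=n) * 0.01                 # daily returns
      var_a = np.full(n, np.quantile(r, alpha))               # reasonable VaR forecast
      var_b = 0.5 * var_a                                     # underestimates tail risk

      stat, pval = dm_test(quantile_loss(var_a, r, alpha), quantile_loss(var_b, r, alpha))
      print(f"DM statistic = {stat:.2f}, p-value = {pval:.3f}")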
  8. By: Wooyong Lee
    Abstract: I study panel data linear models with predetermined regressors (such as lagged dependent variables) where coefficients are individual-specific, allowing for heterogeneity in the effects of the regressors on the dependent variable. I show that the model is not point-identified in a short panel context but rather partially identified, and I characterize the identified sets for the mean, variance, and CDF of the coefficient distribution. This characterization is general, accommodating discrete, continuous, and unbounded data, and it leads to computationally tractable estimation and inference procedures. I apply the method to study lifecycle earnings dynamics among U.S. households using the Panel Study of Income Dynamics (PSID) dataset. The results suggest substantial unobserved heterogeneity in earnings persistence, implying that households face varying levels of earnings risk which, in turn, contribute to heterogeneity in their consumption and savings behaviors.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.01600
  9. By: Przemyslaw Jeziorski; Dingzhe Leng; Stephan Seiler
    Abstract: We study the estimation of causal treatment effects on demand when treatment is randomly assigned but prices adjust in response to treatment. We show that regressions of demand on treatment or on treatment and price lead to biased estimates of the direct treatment effect. The bias in both cases depends on the correlation of price with treatment and points in the same direction. In most cases including an endogenous price control reduces bias but does not remove it. We show how to test whether bias from an endogenous price response arises and how to recover an unbiased treatment effect (holding price constant) using a price instrument. We apply our approach to the estimation of the impact of feature advertising across several product categories using supermarket scanner data and show that the bias when not instrumenting for price can be substantial.
    Keywords: causal Inference, endogeneity, endogenous controls, instrumental variables
    JEL: C26 C31 D12 M31
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_11898
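    An illustrative Python sketch on simulated data (not the authors' scanner-data application): treatment is randomized but price responds to it, so OLS of demand on treatment and price is biased, while 2SLS using a hypothetical cost shifter as a price instrument recovers the direct treatment effect:
      # Randomized treatment with an endogenous price response: OLS with the
      # endogenous price control is biased; 2SLS with a cost-shifter instrument
      # recovers the direct treatment effect (holding price constant).
      import numpy as np

      rng = np.random.default_rng(5)
      n = 5000
      treat = rng.integers(0, 2, n)                      # randomly assigned treatment
      cost = rng.normal(0, 1, n)                         # cost shifter (price instrument)
      u = rng.normal(0, 1, n)                            # demand shock observed by the firm
      price = 1.0 + 0.5 * treat + 0.8 * cost + 0.6 * u   # price responds to treatment and demand
      demand = 2.0 * treat - 1.5 * price + u + rng.normal(0, 1, n)   # true direct effect = 2.0

      X = np.column_stack([np.ones(n), treat, price])
      b_ols = np.linalg.lstsq(X, demand, rcond=None)[0]

      # 2SLS: first stage for price on (1, treat, cost); second stage with fitted price
      Z = np.column_stack([np.ones(n), treat, cost])
      price_hat = Z @ np.linalg.lstsq(Z, price, rcond=None)[0]
      X2 = np.column_stack([np.ones(n), treat, price_hat])
      b_iv = np.linalg.lstsq(X2, demand, rcond=None)[0]

      print("OLS treatment coefficient :", round(b_ols[1], 2))   # biased
      print("2SLS treatment coefficient:", round(b_iv[1], 2))    # close to 2.0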
  10. By: Yi Ding (Faculty of Business Administration, University of Macau); Xinghua Zheng (Department of ISOM, Hong Kong University of Science and Technology)
    Abstract: We study the estimation of scatter matrices in elliptical factor models with (2 + ε)th moment. For such heavy-tailed data, robust estimators like the Huber-type estimator in Fan et al. (2018) cannot achieve a sub-Gaussian convergence rate. In this paper, we develop an idiosyncratic-projected self-normalization method to remove the effect of the heavy-tailed scalar component and propose a robust estimator of the scatter matrix that achieves the sub-Gaussian rate under an ultra-high dimensional setting. Such a high convergence rate leads to superior performance in estimating high-dimensional global minimum variance portfolios.
    Keywords: High-dimension, elliptical model, factor model, scatter matrix, robust estimation
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:boa:wpaper:202529
  11. By: Stelios Arvanitis (Department of Economics, AUEB)
    Abstract: Within the Frequentist Model Averaging framework for linear models, we introduce a multi-objective model averaging methodology that extends both the generalized Jackknife Model Averaging (JMA) and the Mallows Model Averaging (MMA) criteria. Our approach constructs estimators based on stochastic dominance principles and explores averaging methods that minimize multiple scalarizations of the joint criterion integrating MMA and JMA. Additionally, we propose an estimator that can be interpreted as a Nash bargaining solution between the competing scalar criteria. We establish the asymptotic properties of these estimators under both correct specification and global misspecification. Monte Carlo simulations demonstrate that some of the proposed averaging estimators outperform JMA and MMA in terms of MSE/MAE. In an empirical application to economic growth data, our model averaging methods assign greater weight to fundamental Solow-type growth variables while also incorporating regressors that capture the role of geography and institutional quality.
    Keywords: frequentist model averaging, Jackknife MA, Mallows MA, multi-objective optimization, stochastic dominance, approximate bound, ℓp-scalarization, Nash bargaining solution, growth regressions
    JEL: C51 C52
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:qed:wpaper:1535
  12. By: Justin Ho; Jonathan Min
    Abstract: Experimental designs are fundamental for estimating causal effects. In some fields, within-subjects designs, which expose participants to both control and treatment at different time periods, are used to address practical and logistical concerns. Counterbalancing, a common technique in within-subjects designs, aims to remove carryover effects by randomizing treatment sequences. Despite its appeal, counterbalancing relies on the assumption that carryover effects are symmetric and cancel out, which is often unverifiable a priori. In this paper, we formalize the challenges of counterbalanced within-subjects designs using the potential outcomes framework. We introduce sequential exchangeability as an additional identification assumption necessary for valid causal inference in these designs. To address identification concerns, we propose diagnostic checks, washout periods, covariate adjustments, and alternative experimental designs to the counterbalanced within-subjects design. Our findings demonstrate the limitations of counterbalancing and provide guidance on when and how within-subjects designs can be appropriately used for causal inference.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.03937
  13. By: Joshua C. C. Chan; Michael Pfarrhofer
    Abstract: We extend the standard VAR to jointly model the dynamics of binary, censored and continuous variables, and develop an efficient estimation approach that scales well to high-dimensional settings. In an out-of-sample forecasting exercise, we show that the proposed VARs forecast recessions and short-term interest rates well. We demonstrate the utility of the proposed framework using a wide range of empirical applications, including conditional forecasting and a structural analysis that examines the dynamic effects of a financial shock on recession probabilities.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.01422
  14. By: Quinn Lanners; Cynthia Rudin; Alexander Volfovsky; Harsh Parikh
    Abstract: Data fusion techniques integrate information from heterogeneous data sources to improve learning, generalization, and decision making across data sciences. In causal inference, these methods leverage rich observational data to improve causal effect estimation, while maintaining the trustworthiness of randomized controlled trials. Existing approaches often relax the strong no unobserved confounding assumption by instead assuming exchangeability of counterfactual outcomes across data sources. However, when both assumptions simultaneously fail - a common scenario in practice - current methods cannot identify or estimate causal effects. We address this limitation by proposing a novel partial identification framework that enables researchers to answer key questions such as: Is the causal effect positive or negative? and How severe must assumption violations be to overturn this conclusion? Our approach introduces interpretable sensitivity parameters that quantify assumption violations and derives corresponding causal effect bounds. We develop doubly robust estimators for these bounds and operationalize breakdown frontier analysis to understand how causal conclusions change as assumption violations increase. We apply our framework to the Project STAR study, which investigates the effect of classroom size on students' third-grade standardized test performance. Our analysis reveals that the Project STAR results are robust to simultaneous violations of key assumptions, both on average and across various subgroups of interest. This strengthens confidence in the study's conclusions despite potential unmeasured biases in the data.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.24296
  15. By: Yuhao Li; Xiaojun Song
    Abstract: Specification tests, such as Integrated Conditional Moment (ICM) and Kernel Conditional Moment (KCM) tests, are crucial for model validation but often lack power in finite samples. This paper proposes a novel framework to enhance specification test performance using Support Vector Machines (SVMs) for direction learning. We introduce two alternative SVM-based approaches: one maximizes the discrepancy between nonparametric and parametric classes, while the other maximizes the separation between residuals and the origin. Both approaches lead to a $t$-type test statistic that converges to a standard chi-square distribution under the null hypothesis. Our method is computationally efficient and capable of detecting any arbitrary alternative. Simulation studies demonstrate its superior performance compared to existing methods, particularly in large-dimensional settings.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.04414
  16. By: Daniel F. Villarraga; Ricardo A. Daziano
    Abstract: Discrete choice models (DCMs) are used to analyze individual decision-making in contexts such as transportation choices, political elections, and consumer preferences. DCMs play a central role in applied econometrics by enabling inference on key economic variables, such as marginal rates of substitution, rather than focusing solely on predicting choices on new unlabeled data. However, while traditional DCMs offer high interpretability and support for point and interval estimation of economic quantities, these models often underperform in predictive tasks compared to deep learning (DL) models. Despite their predictive advantages, DL models remain largely underutilized in discrete choice due to concerns about their lack of interpretability, unstable parameter estimates, and the absence of established methods for uncertainty quantification. Here, we introduce a deep learning model architecture specifically designed to integrate with approximate Bayesian inference methods, such as Stochastic Gradient Langevin Dynamics (SGLD). Our proposed model collapses to behaviorally informed hypotheses when data is limited, mitigating overfitting and instability in underspecified settings while retaining the flexibility to capture complex nonlinear relationships when sufficient data is available. We demonstrate our approach using SGLD through a Monte Carlo simulation study, evaluating both predictive metrics--such as out-of-sample balanced accuracy--and inferential metrics--such as empirical coverage for marginal rates of substitution interval estimates. Additionally, we present results from two empirical case studies: one using revealed mode choice data in NYC, and the other based on the widely used Swiss train choice stated preference data.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.18077
  17. By: Atsushi Inoue; Lutz Kilian
    Abstract: Some studies have expressed concern that the Gaussian-inverse Wishart-Haar prior typically employed in estimating sign-identified VAR models may be unintentionally informative about the implied prior for the structural impulse responses. We discuss how this prior may be reported and make explicit what impulse response priors a number of recently published studies specified, allowing the readers to decide whether they are comfortable with this prior. We discuss what features to look for in this prior in the absence of specific prior information about the responses, building on the notion of weakly informative priors in Gelman et al. (2013), and in the presence of such information. Our empirical examples illustrate that the Gaussian-inverse Wishart-Haar prior need not be unintentionally informative about the impulse responses. Moreover, even when it is, there are empirically verifiable conditions under which this fact becomes immaterial for the substantive conclusions.
    Keywords: Gaussian-inverse Wishart prior; Haar prior; impulse response; set identification
    JEL: C22 C32 C52 E31 Q43
    Date: 2025–05–09
    URL: https://d.repec.org/n?u=RePEc:fip:feddwp:99955
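    For readers who want to inspect this implied prior themselves, a standard Python sketch of drawing rotation matrices from the Haar measure (QR of a Gaussian matrix with a sign normalization) and mapping one reduced-form covariance draw into impact impulse responses:
      # Simulate the prior over impact impulse responses implied by the Haar
      # prior on rotation matrices, for a fixed reduced-form covariance draw.
      import numpy as np

      rng = np.random.default_rng(6)
      k = 3                                              # number of variables

      def haar_orthogonal(k, rng):
          X = rng.normal(size=(k, k))
          Q, R = np.linalg.qr(X)
          return Q @ np.diag(np.sign(np.diag(R)))        # sign fix for the Haar measure

      Sigma = np.array([[1.0, 0.3, 0.1],                 # one reduced-form covariance draw
                        [0.3, 1.0, 0.2],                 # (in practice drawn from its prior/posterior)
                        [0.1, 0.2, 1.0]])
      P = np.linalg.cholesky(Sigma)

      impact_draws = np.stack([P @ haar_orthogonal(k, rng) for _ in range(1000)])
      # impact_draws[d, i, j] is a prior draw of the impact response of variable i to shock j
      print("prior mean of impact responses:\n", impact_draws.mean(axis=0).round(2))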
  18. By: Lukas Berend; Jan Prüser
    Abstract: We propose a high-dimensional structural vector autoregression framework capable of accommodating a large number of linear inequality restrictions on impact impulse responses, structural shocks, and their element-wise products. Combining impact- and shock-inequality restrictions can be flexibly used to sharpen inference and to disentangle structurally interpretable shocks through sign and shock constraints. To estimate the model, we develop a highly efficient sampling algorithm that scales well with model dimension and the number of inequality restrictions on impact responses, as well as structural shocks. It remains computationally feasible even when existing algorithms may break down. To demonstrate the practical utility of our approach, we identify five structural shocks and examine the dynamic responses of thirty macroeconomic variables, highlighting the model's flexibility and feasibility in complex empirical settings. We provide empirical evidence that financial shocks are the most important driver of the dynamics of the business cycle.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.19244
  19. By: Yoosoon Chang; Joon Y. Park; Guo Yan
    Abstract: This paper revisits the slope consistency of QMLE for binary choice models. Ruud (1983, Econometrica) introduced a set of conditions under which QMLE may yield a constant multiple of the slope coefficient of binary choice models asymptotically. However, he did not fully establish slope consistency of QMLE, which requires the existence of a positive multiple of the slope coefficient identified as an interior maximizer of the population QMLE likelihood function over an appropriately restricted parameter space. We fill this gap by providing a formal proof for slope consistency under the same set of conditions for any binary choice model identified as in Horowitz (1992, Econometrica). Our result implies that logistic regression, which is used extensively in machine learning to analyze binary outcomes associated with a large number of covariates, yields a consistent estimate for the slope coefficient of binary choice models under suitable conditions.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.02327
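    A small Python simulation illustrating the slope-consistency claim (an illustration, not the paper's proof): data are generated from a probit and estimated by logit QMLE; the estimated slopes are approximately a common positive multiple of the true ones, so slope ratios are preserved:
      # Probit data generating process estimated by logit quasi-ML: slopes are
      # recovered up to a common positive scale factor.
      import numpy as np
      from scipy.stats import norm
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 50_000
      x1, x2 = rng.normal(size=n), rng.normal(size=n)
      beta = np.array([1.0, -0.5])
      p = norm.cdf(0.2 + beta[0] * x1 + beta[1] * x2)      # probit DGP
      y = rng.binomial(1, p)

      X = sm.add_constant(np.column_stack([x1, x2]))
      logit_fit = sm.Logit(y, X).fit(disp=0)               # quasi-ML under the (wrong) logit link

      b1, b2 = logit_fit.params[1], logit_fit.params[2]
      print("estimated slopes:", round(b1, 2), round(b2, 2))        # roughly c * (1.0, -0.5)
      print("slope ratio b1/b2:", round(b1 / b2, 2), "(true ratio:", beta[0] / beta[1], ")")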
  20. By: Yuya Sasaki; Ariell Zimran
    Abstract: We propose a method to correct estimates from historical linked data for bias arising from type-I error—"false matches." We estimate the rate of false matching from the disagreement rate in characteristics that should agree across the two linked datasets. Combined with an understanding of the empirical patterns arising from false matches, knowledge of this rate enables us to correct for bias from false matches. Our method enables correction of estimates of both population moments and regression coefficients with valid inference. We illustrate the properties of our method via simulation and demonstrate them using linked US census data.
    JEL: C10 C23 C49 C55 C81 C83 J61 J62 N30 N31 N32
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:33881
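    A stylized Python sketch of the idea (illustrative; the mixture-inversion formula and the assumption that a mislink attaches an unrelated record drawn from the general population are ours, not the authors' exact estimator): estimate the false-match rate from the disagreement rate in a characteristic that should agree across the linked datasets, then correct a sample mean:
      # Correct a linked-sample mean for false matches, assuming mislinks attach
      # records drawn from the general population (whose mean is known):
      #   E[y_linked] = (1 - p) * E[y | correct match] + p * E[y | population]
      import numpy as np

      rng = np.random.default_rng(8)
      n, p_true = 10_000, 0.15
      mu_group, mu_pop = 12.0, 10.0                       # group of interest vs. overall population

      false_match = rng.random(n) < p_true
      y_linked = np.where(false_match,
                          rng.normal(mu_pop, 2, n),       # mislinked record: someone else's outcome
                          rng.normal(mu_group, 2, n))     # correct link

      # Characteristic that should agree across datasets (e.g. birth year in a 50-year window)
      by_a = rng.integers(1850, 1900, n)
      by_b = np.where(false_match, rng.integers(1850, 1900, n), by_a)

      disagree = np.mean(by_a != by_b)
      p_hat = disagree / (1 - 1 / 50)                     # adjust for chance agreement among mislinks

      y_bar_corrected = (y_linked.mean() - p_hat * mu_pop) / (1 - p_hat)
      print("naive mean:", round(y_linked.mean(), 2),
            " corrected:", round(y_bar_corrected, 2), " truth:", mu_group)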
  21. By: Charles Beach; Russell Davidson (McGill University)
    Abstract: This paper builds on recent econometric developments that establish distribution-free statistical inference methods for quantile means and income shares of a sample distribution of microdata, and proposes an approach to empirically implement several dominance criteria for comparing economic well-being and general income inequality between distributions. It provides straightforward variance-covariance formulas in a set of practical empirical procedures for formally testing economic well-being and inequality comparisons such as rank dominance, Lorenz dominance and generalized Lorenz dominance between distributions. The tests and procedures are illustrated with Canadian census data between 2000 and 2020 on women's and men's incomes. It is found that both women's and men's economic well-being statistically significantly improved over this period, while income inequality significantly increased over 2000-15 and then fell over 2015-20.
    Keywords: social welfare tests, income distribution comparisons, implementing social welfare
    JEL: C10 D31 D63 I31
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:qed:wpaper:1530
  22. By: Florens, Jean-Pierre; Fève, Frédérique; Simar, Léopold
    Abstract: The analysis of cost functions is an important topic in econometrics, both for scientific studies and for industrial applications. The object of interest may be the cost of a firm or the cost of a specific production, in particular in the case of a proposal in a procurement. Engineering methods evaluate the technical cost given the main characteristics of the output, using the decomposition of the production process into elementary tasks, and are based on physical laws. The error terms in these models may be viewed as idiosyncratic shocks. The economist usually observes ex post the cost and the characteristics of the product. The difference between the theoretical cost and the observed one may be modeled by the inefficiency of the production process. In this case, econometric models are cost frontier models. In this paper we propose to take advantage of the situation where we have information from both approaches. We consider a system of two equations, one being a standard regression model (for the technical cost function) and one being a stochastic frontier model for the economic cost function where inefficiencies are explicitly introduced. We derive estimators of this joint model and establish their asymptotic properties. The models are presented in a classical parametric approach, with few assumptions on the stochastic properties of the joint error terms. We also suggest a way to extend the model to a nonparametric approach; the latter provides an original way to model and estimate nonparametric stochastic frontier models. The techniques are illustrated in the case of the cost function for the distribution of gas in France.
    Date: 2025–05–19
    URL: https://d.repec.org/n?u=RePEc:tse:wpaper:130551
  23. By: Torben G. Andersen (Finance Department, Kellogg School of Management, Northwestern University); Yi Ding (Faculty of Business Administration, University of Macau); Viktor Todorov (Finance Department, Kellogg School of Management, Northwestern University)
    Abstract: We develop nonparametric estimates for tail risk in the cross-section of asset prices at high frequencies. We show that the tail behavior of the cross-sectional return distribution depends on whether the time interval contains a systematic jump event. If so, the cross-sectional return tail is governed by the assets’ exposures to the systematic event while, otherwise, it is determined by the idiosyncratic jump tails of the stocks. We develop an estimator for the tail shape of the cross-sectional return distribution that displays distinct properties with and without systematic jumps. Empirically, we provide evidence for symmetric cross-sectional return tails at high frequencies that exhibit nontrivial and persistent time series variation. A hypothesis of equal cross-sectional return tail shapes during periods with and without systematic jump events is strongly rejected by the data.
    Keywords: Cross-sectional return distribution, extreme value theory, high-frequency data, tail risk
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:boa:wpaper:202530
  24. By: Yuexin Liao; Kota Saito; Alec Sandroni
    Abstract: This paper studies when discrete choice data involving aggregated alternatives such as categorical data or an outside option can be rationalized by a random utility model (RUM). Aggregation introduces ambiguity in composition: the underlying alternatives may differ across individuals and remain unobserved by the analyst. We characterize the observable implications of RUMs under such ambiguity and show that they are surprisingly weak, implying only monotonicity with respect to adding aggregated alternatives and standard RUM consistency on unaggregated menus. These are insufficient to justify the use of an aggregated RUM. We identify two sufficient conditions that restore full rationalizability: non-overlapping preferences and menu-independent aggregation. Simulations show that violations of these conditions generate estimation bias, highlighting the practical importance of how aggregated alternatives are defined.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.00372
  25. By: Tommaso Proietti (CEIS & DEF, University of Rome "Tor Vergata"); Alessandro Giovannelli (University of L’Aquila)
    Abstract: The quantification of the interannual component of variability in climatological time series is essential for the assessment and prediction of the El Niño-Southern Oscillation phenomenon. This is achieved by estimating the deviation of a climate variable (e.g., temperature, pressure, precipitation, or wind strength) from its normal conditions, defined by its baseline level and seasonal patterns. Climate normals are currently estimated by simple arithmetic averages calculated over the most recent 30-year period ending in a year divisible by 10. The suitability of the standard methodology has been questioned in the context of a changing climate, characterized by nonstationary conditions. The literature has focused on the choice of the bandwidth and the ability to account for trends induced by climate change. The paper contributes to the literature by proposing a regularized real-time filter based on local trigonometric regression, optimizing the estimation bias-variance trade-off in the presence of climate change, and by introducing a class of seasonal kernels enhancing the localization of the estimates of climate normals. Application to sea surface temperature series in the Niño 3.4 region and to zonal and trade wind strength in the equatorial and tropical Pacific region illustrates the relevance of our proposal.
    Keywords: Climate change; Seasonality; El Niño-Southern Oscillation; Local Trigonometric Regression.
    JEL: C22 C32 C53
    Date: 2025–06–04
    URL: https://d.repec.org/n?u=RePEc:rtv:ceisrp:602
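    A simplified Python sketch of the building block (a global trigonometric regression; the paper's real-time local filter and seasonal kernels are not implemented): fit a linear trend plus annual harmonics to a monthly series, take the fit as the climate normal and the residual as the anomaly:
      # Global trigonometric regression for monthly climate normals: linear trend
      # plus two annual harmonics; the anomaly is the observation minus the normal.
      import numpy as np

      rng = np.random.default_rng(9)
      months = np.arange(600)                                       # 50 years of monthly data
      sst = 26 + 0.002 * months + 1.5 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.4, 600)

      t = months / 12
      X = np.column_stack([np.ones_like(t), t,
                           np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                           np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
      beta = np.linalg.lstsq(X, sst, rcond=None)[0]

      normal = X @ beta                                             # estimated climate normal
      anomaly = sst - normal                                        # interannual component
      print("std of anomalies:", round(anomaly.std(), 3))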
  26. By: Thomas Y. L. Lin; Jerry Yao-Chieh Hu; Paul W. Chiou; Peter Lin
    Abstract: We revisit the Bayesian Black-Litterman (BL) portfolio model and remove its reliance on subjective investor views. Classical BL requires an investor "view": a forecast vector $q$ and its uncertainty matrix $\Omega$ that describe how much a chosen portfolio should outperform the market. Our key idea is to treat $(q, \Omega)$ as latent variables and learn them from market data within a single Bayesian network. Consequently, the resulting posterior estimation admits closed-form expression, enabling fast inference and stable portfolio weights. Building on these, we propose two mechanisms to capture how features interact with returns: shared-latent parametrization and feature-influenced views; both recover classical BL and Markowitz portfolios as special cases. Empirically, on 30-year Dow-Jones and 20-year sector-ETF data, we improve Sharpe ratios by 50% and cut turnover by 55% relative to Markowitz and the index baselines. This work turns BL into a fully data-driven, view-free, and coherent Bayesian framework for portfolio optimization.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.02185
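    For reference, a short Python sketch of the classical Black-Litterman posterior mean that the paper's latent-variable treatment of (q, Omega) generalises; all numbers are made up:
      # Classical Black-Litterman posterior mean for a single investor view.
      import numpy as np

      Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])     # asset return covariance
      w_mkt = np.array([0.6, 0.4])                       # market-cap weights
      delta, tau = 2.5, 0.05
      pi = delta * Sigma @ w_mkt                         # equilibrium (prior) mean returns

      P = np.array([[1.0, -1.0]])                        # view: asset 1 outperforms asset 2 ...
      q = np.array([0.02])                               # ... by 2%
      Omega = np.array([[0.001]])                        # view uncertainty

      A = np.linalg.inv(tau * Sigma)
      post_prec = A + P.T @ np.linalg.inv(Omega) @ P
      mu_bl = np.linalg.solve(post_prec, A @ pi + P.T @ np.linalg.inv(Omega) @ q)
      print("BL posterior mean returns:", mu_bl.round(4))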
  27. By: James Hebden; Fabian Winkler
    Abstract: We propose an efficient procedure to solve for policy counterfactuals in linear models with occasionally binding constraints in sequence space. Forecasts of the variables relevant for the policy problem, and their impulse responses to anticipated policy shocks, constitute sufficient information to construct valid counterfactuals. Knowledge of the structural model equations or filtering of structural shocks is not required. We solve for deterministic and stochastic paths under instrument rules as well as under optimal policy with commitment or subgame-perfect discretion. As an application, we compute counterfactuals of the U.S. economy after the pandemic shock of 2020 under several monetary policy regimes.
    Keywords: Sequence space; DSGE; Occasionally binding constraints; Optimal policy; Commitment; Discretion
    JEL: C61 C63 E52
    Date: 2024–09–01
    URL: https://d.repec.org/n?u=RePEc:fip:fedgfe:100035
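    A minimal Python sketch of the sequence-space idea with made-up numbers (no occasionally binding constraints or optimal policy): given a baseline forecast path and impulse responses to anticipated policy shocks, solve a linear system for the shocks that impose a counterfactual policy rule along the path:
      # Choose anticipated policy shocks so that a counterfactual rule r_t = 1.5*pi_t
      # holds along the forecast path; paths are baseline + impulse responses @ shocks.
      import numpy as np

      H = 8                                                # horizon
      pi_base = 0.02 + 0.01 * 0.8 ** np.arange(H)          # baseline inflation forecast
      r_base = np.full(H, 0.02)                            # baseline policy rate path

      # Impulse responses of (inflation, rate) to a policy shock announced for period j
      Theta_pi = np.array([[-0.3 * 0.7 ** max(t - j, 0) * (t >= j) for j in range(H)] for t in range(H)])
      Theta_r = np.eye(H)                                  # shock moves the rate one-for-one on impact

      # Rule r = 1.5*pi  =>  (Theta_r - 1.5*Theta_pi) eps = 1.5*pi_base - r_base
      eps = np.linalg.solve(Theta_r - 1.5 * Theta_pi, 1.5 * pi_base - r_base)
      pi_cf, r_cf = pi_base + Theta_pi @ eps, r_base + Theta_r @ eps
      print("rule holds along the path:", np.allclose(r_cf, 1.5 * pi_cf))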
  28. By: Bahaa Aly, Tarek
    Abstract: This study presents a novel hybrid framework that integrates Long Short-Term Memory (LSTM) networks with Daubechies wavelet transforms to estimate Deep Impulse Response Functions (DIRF) for monthly macroeconomic time series across five economies: Brazil, Egypt, Indonesia, the United States, and the United Kingdom. Eight key variables, yield curve latent factors (LEVEL, SLOPE, CURVATURE), foreign exchange rates, equity indices, central bank policy rates, GDP growth rates, and inflation rates, were modeled using the proposed LSTM-Wavelet approach and compared against an ANN-Wavelet hybrid and a traditional Vector Error Correction Model (VECM). The LSTM-Wavelet model achieved a superior overall median R2, outperforming the ANN-Wavelet and VECM. The approach excelled at capturing nonlinear dynamics and temporal dependencies for variables such as equity indices, policy rates, GDP, and inflation. Db4 was superior for capturing short- and medium-term patterns in macroeconomic variables like GDP, EQUITY, and FX, because its shorter filter and moderate smoothing excelled at isolating cyclical patterns in noisy, volatile data. Cumulative DIRFs revealed consistent cross-variable dynamics, e.g., yield curve shocks propagated to equity, FX, policy rates, GDP, and inflation, in line with economic theory. These findings underscored the hybrid model’s ability to capture nonlinearity and multiscale interactions in macroeconomic data, offering valuable insights for forecasting and policy analysis.
    Keywords: Deep Impulse Response Function, Long Short-Term Memory, Daubechies Wavelet transform, Macroeconomics, nonlinearity, Forecasting
    JEL: C5 C53 C58
    Date: 2025–05–30
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:124905
  29. By: Tobias Adrian; Hongqi Chen; Max-Sebastian Dovì; Ji Hyung Lee
    Abstract: We analyse growth vulnerabilities in the US using quantile partial correlation regression, a selection-based machine-learning method that achieves model selection consistency in time-series settings. We find that downside risk is primarily driven by financial, labour-market, and housing variables, with their importance changing over time. Decomposing downside risk into its individual components, we construct sector-specific indices that predict it, while controlling for information from other sectors, thereby isolating the downside risks emanating from each sector.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.00572
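    A basic growth-at-risk sketch in Python (the paper's quantile partial correlation selection step is not implemented): a 5th-percentile quantile regression of future growth on a simulated financial conditions index:
      # 5th-percentile quantile regression of future growth on financial conditions.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg

      rng = np.random.default_rng(11)
      n = 400
      nfci = rng.normal(size=n)                                  # financial conditions index
      # Fatter left tail of future growth when financial conditions are tight
      growth_next = 2.0 - 0.5 * nfci - np.abs(rng.normal(0, np.exp(0.4 * nfci), n))

      X = sm.add_constant(nfci)
      fit = QuantReg(growth_next, X).fit(q=0.05)
      print("GaR (5th percentile) slope on financial conditions:", round(fit.params[1], 2))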
  30. By: Dirk Tasche
    Abstract: Recalibration of binary probabilistic classifiers to a target prior probability is an important task in areas like credit risk management. We analyse methods for recalibration from a distribution shift perspective. Distribution shift assumptions linked to the area under the curve (AUC) of a probabilistic classifier are found to be useful for the design of meaningful recalibration methods. Two new methods, called parametric covariate shift with posterior drift (CSPD) and ROC-based quasi moment matching (QMM), are proposed and tested together with some other methods in an example setting. The outcomes of the test suggest that the QMM methods discussed in the paper can provide appropriately conservative results in evaluations with concave functionals, such as risk weight functions for credit risk.
    Date: 2025–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2505.19068
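    As a point of comparison, a Python sketch of the textbook prior-shift recalibration under a label-shift assumption (rescaling posterior odds by the ratio of target to training prior odds); this is a baseline method, not the paper's CSPD or QMM proposals:
      # Recalibrate scores to a new target prior under a label-shift ("prior
      # probability shift") assumption by rescaling the posterior odds.
      import numpy as np

      def recalibrate_to_prior(p, train_prior, target_prior):
          """Map P(Y=1|x) estimated under train_prior to the target prior via the odds ratio."""
          odds = p / (1 - p)
          shift = (target_prior / (1 - target_prior)) / (train_prior / (1 - train_prior))
          new_odds = odds * shift
          return new_odds / (1 + new_odds)

      scores = np.array([0.02, 0.10, 0.30, 0.70])
      print(recalibrate_to_prior(scores, train_prior=0.20, target_prior=0.05).round(3))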

This nep-ecm issue is ©2025 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.