nep-ecm New Economics Papers
on Econometrics
Issue of 2025–09–22
29 papers chosen by
Sune Karlsson, Örebro universitet


  1. Design-Based and Network Sampling-Based Uncertainties in Network Experiments By Kensuke Sakamoto; Yuya Shimizu
  2. Robust Inference when Nuisance Parameters may be Partially Identified with Applications to Synthetic Controls By Joseph Fry
  3. An Empirical Comparison of Weak-IV-Robust Procedures in Just-Identified Models By Wenze Li
  4. Dynamic Local Average Treatment Effects in Time Series By Alessandro Casini; Adam McCloskey; Luca Rolla; Raimondo Pala
  5. Robust Inference with High-Dimensional Instruments By Qu Feng; Sombut Jaidee; Wenjie Wang
  6. Optimal Estimation for General Gaussian Processes By Tetsuya Takabatake; Jun Yu; Chen Zhang
  7. Out-of-Sample Inference with Annual Benchmark Revisions By Silvia Goncalves; Michael W. McCracken; Yongxu Yao
  8. Analytic inference with two-way clustering By Laurent Davezies; Xavier D'Haultfœuille; Yannick Guyonvarch
  9. A Bayesian Gaussian Process Dynamic Factor Model By Tony Chernis; Niko Hauzenberger; Haroon Mumtaz; Michael Pfarrhofer
  10. Orthogonality conditions for convex regression By Sheng Dai; Timo Kuosmanen; Xun Zhou
  11. Single-Index Quantile Factor Model with Observed Characteristics By Ruofan Xu; Qingliang Fan
  12. Extrapolation in Regression Discontinuity Design Using Comonotonicity By Ben Deaner; Soonwoo Kwon
  13. Estimating Peer Effects Using Partial Network Data By Vincent Boucher; Aristide Houndetoungan
  14. Bayesian Inference for Confounding Variables and Limited Information By Ellis Scharfenaker; Duncan K. Foley
  15. Functional Regression with Nonstationarity and Error Contamination: Application to the Economic Impact of Climate Change By Kyungsik Nam; Won-Ki Seo
  16. Treatment Effects of Multi-Valued Treatments in Hyper-Rectangle Model By Xunkang Tian
  17. Causal Inference for Aggregated Treatment By Carolina Caetano; Gregorio Caetano; Brantly Callaway; Derek Dyal
  18. Finite-sample non-parametric bounds with an application to the causal effect of workforce gender diversity on firm performance By Lordan, Grace; Salehzadeh Nobari, Kaveh
  19. Testing parametric additive time-varying GARCH models By Niklas Ahlgren; Alexander Back; Timo Teräsvirta
  20. A Simplified Klein–Spady Estimator for Binary Choice Models By Hjertstrand, Per; Proctor, Andrew; Westerlund, Joakim
  21. Policy-relevant causal effect estimation using instrumental variables with interference By Didier Nibbering; Matthijs Oosterveen
  22. Bayesian Estimation of DSGE Models: An Update By Pablo A. Guerron-Quintana; James M. Nason
  23. On the Identification of Diagnostic Expectations: Econometric Insights from DSGE Models By Jinting Guo
  24. P-CRE-DML: A Novel Approach for Causal Inference in Non-Linear Panel Data By Amarendra Sharma
  25. High-Dimensional Stochastic Volatility Models: Applications to Macroeconomic Uncertainty in Québec and Canada By MD Nazmul Ahsan; Jean-Marie Dufour; Gabriel Rodriguez
  26. Empirical estimator of diversification quotient By Xia Han; Liyuan Lin; Mengshi Zhao
  27. Automated regime classification in multidimensional time series data using sliced Wasserstein k-means clustering By Luan, Qinmeng; Hamp, James
  28. Beyond GARCH: Bayesian Neural Stochastic Volatility By Guo, Hongfei; Marín Díazaraque, Juan Miguel; Veiga, Helena
  29. Polynomial Log-Marginals and Tweedie's Formula: When Is Bayes Possible? By Jyotishka Datta; Nicholas G. Polson

  1. By: Kensuke Sakamoto; Yuya Shimizu
    Abstract: OLS estimators are widely used in network experiments to estimate spillover effects via regressions on exposure mappings that summarize treatment and network structure. We study the causal interpretation and inference of such OLS estimators when both design-based uncertainty in treatment assignment and sampling-based uncertainty in network links are present. We show that correlations among elements of the exposure mapping can contaminate the OLS estimand, preventing it from aggregating heterogeneous spillover effects for clear causal interpretation. We derive the estimator's asymptotic distribution and propose a network-robust variance estimator. Simulations and an empirical application reveal sizable contamination bias and inflated spillover estimates.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.22989
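    As a point of reference for the abstract above, here is a minimal sketch, not taken from the paper, of the kind of exposure-mapping OLS regression it studies: the exposure mapping is own treatment plus the share of treated neighbors, and the network, assignment, and effect sizes are all invented for illustration.

```python
import numpy as np

def exposure_ols(y, d, A):
    """OLS of outcomes on a common exposure mapping: own treatment and
    the share of treated neighbors. Returns [intercept, direct, spillover].
    The paper studies when such coefficients aggregate heterogeneous
    spillover effects and how to do inference on them."""
    n = len(y)
    deg = A.sum(axis=1).astype(float)
    share = np.divide(A @ d, deg, out=np.zeros(n), where=deg > 0)
    X = np.column_stack([np.ones(n), d, share])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# toy usage: Erdos-Renyi network, random assignment, true spillover 0.5
rng = np.random.default_rng(0)
n = 400
A = (rng.uniform(size=(n, n)) < 0.02).astype(int)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-links
d = rng.binomial(1, 0.5, n).astype(float)
share = np.divide(A @ d, A.sum(1).astype(float),
                  out=np.zeros(n), where=A.sum(1) > 0)
y = 1.0 + 2.0 * d + 0.5 * share + rng.normal(size=n)
print(exposure_ols(y, d, A))
```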
  2. By: Joseph Fry
    Abstract: When conducting inference for the average treatment effect on the treated with a Synthetic Control Estimator, the vector of control weights is a nuisance parameter which is often constrained, high-dimensional, and may be only partially identified even when the average treatment effect on the treated is point-identified. All three of these features of a nuisance parameter can lead to failure of asymptotic normality for the estimate of the parameter of interest when using standard methods. I provide a new method yielding asymptotic normality for an estimate of the parameter of interest, even when all three of these complications are present. This is accomplished by first estimating the nuisance parameter using a regularization penalty to achieve a form of identification, and then estimating the parameter of interest using moment conditions that have been orthogonalized with respect to the nuisance parameter. I present high-level sufficient conditions for the estimator and verify these conditions in an example involving Synthetic Controls.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2507.00307
  3. By: Wenze Li
    Abstract: Instrumental variable (IV) regression is recognized by Angrist and Pischke (2008) as one of the five core methods for causal inference. This paper compares two leading approaches to inference under weak identification for just-identified IV models: the classical Anderson-Rubin (AR) procedure and the recently popular tF method proposed by Lee et al. (2022). Using replication data from the American Economic Review (AER) and Monte Carlo simulation experiments, we evaluate the two procedures in terms of statistical significance testing and confidence interval (CI) length. Empirically, we find that the AR procedure typically offers higher power and yields shorter CIs than the tF method. Nonetheless, as noted by Lee et al. (2022), tF has a theoretical advantage in expected CI length. Our findings suggest that the two procedures may be viewed as complementary tools in empirical applications involving potentially weak instruments.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.18001
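    For readers comparing the two procedures, a minimal sketch of the textbook Anderson-Rubin test and its confidence set obtained by test inversion in the just-identified one-instrument case; the data-generating process and all numbers below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

def ar_test(y, x, z, beta0):
    """Anderson-Rubin test of H0: beta = beta0 in y = beta*x + u.
    In the just-identified case this is the F-test (squared t-test)
    of the instrument in a regression of the null residual on z."""
    n = len(y)
    e = y - beta0 * x
    zc, ec = z - z.mean(), e - e.mean()
    g = (zc @ ec) / (zc @ zc)                 # reduced-form slope
    u = ec - g * zc
    se = np.sqrt((u @ u) / ((n - 2) * (zc @ zc)))
    ar = (g / se) ** 2
    return ar, stats.f.sf(ar, 1, n - 2)

# invert the test over a grid to get a 95% AR confidence set
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)
x = 0.3 * z + rng.normal(size=n)              # modest first stage
y = 1.0 * x + rng.normal(size=n)
kept = [b for b in np.linspace(-2, 4, 601) if ar_test(y, x, z, b)[1] > 0.05]
print(f"95% AR set: [{min(kept):.2f}, {max(kept):.2f}]")
```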
  4. By: Alessandro Casini; Adam McCloskey; Luca Rolla; Raimondo Pala
    Abstract: This paper discusses identification, estimation, and inference on dynamic local average treatment effects (LATEs) in instrumental variables (IVs) settings. First, we show that compliers--observations whose treatment status is affected by the instrument--can be identified individually in time series data using smoothness assumptions and local comparisons of treatment assignments. Second, we show that this result enables not only better interpretability of IV estimates but also direct testing of the exclusion restriction by comparing outcomes among identified non-compliers across instrument values. Third, we document pervasive weak identification in applied work using IVs with time series data by surveying recent publications in leading economics journals. However, we find that strong identification often holds in large subsamples for which the instrument induces changes in the treatment. Motivated by this, we introduce a method based on dynamic programming to detect the most strongly-identified subsample and show how to use this subsample to improve estimation and inference. We also develop new identification-robust inference procedures that focus on the most strongly-identified subsample, offering efficiency gains relative to existing full sample identification-robust inference when identification fails over parts of the sample. Finally, we apply our results to heteroskedasticity-based identification of monetary policy effects. We find that about 75% of observations are compliers (i.e., cases where the variance of the policy shifts up on FOMC announcement days), and we fail to reject the exclusion restriction. Estimation using the most strongly-identified subsample helps reconcile conflicting IV and GMM estimates in the literature.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.12985
  5. By: Qu Feng; Sombut Jaidee; Wenjie Wang
    Abstract: We propose a weak-identification-robust test for linear instrumental variable (IV) regressions with high-dimensional instruments, whose number is allowed to exceed the sample size. In addition, our test is robust to general error dependence, such as network dependence and spatial dependence. The test statistic takes a self-normalized form and the asymptotic validity of the test is established by using random matrix theory. Simulation studies are conducted to assess the numerical performance of the test, confirming good size control and satisfactory power across a range of error dependence structures.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.23834
  6. By: Tetsuya Takabatake; Jun Yu; Chen Zhang
    Abstract: This paper proposes a novel exact maximum likelihood (ML) estimation method for general Gaussian processes, where all parameters are estimated jointly. The exact ML estimator (MLE) is consistent and asymptotically normally distributed. We prove the local asymptotic normality (LAN) property of the sequence of statistical experiments for general Gaussian processes in the sense of Le Cam, thereby enabling optimal estimation and facilitating statistical inference. The results rely solely on the asymptotic behavior of the spectral density near zero, allowing them to be widely applied. The established optimality not only addresses the gap left by Adenstedt (1974), who proposed an efficient but infeasible estimator for the long-run mean $\mu$, but also enables us to evaluate the finite-sample performance of the existing method, the commonly used plug-in MLE in which the sample mean is substituted into the likelihood. Our simulation results show that the plug-in MLE performs nearly as well as the exact MLE, alleviating concerns that inefficient estimation of $\mu$ would compromise the efficiency of the remaining parameter estimates.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.04987
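    Fractional Gaussian noise is a leading example of the general Gaussian processes covered by results of this kind. The sketch below illustrates the idea of the exact joint MLE, estimating the mean together with the other parameters through the full Gaussian likelihood; the fGn specification, sample size, and starting values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

def fgn_acov(H, sigma2, n):
    """Autocovariance of fractional Gaussian noise (fGn)."""
    k = np.arange(n, dtype=float)
    return 0.5 * sigma2 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))

def neg_loglik(params, y):
    """Exact Gaussian negative log-likelihood with the mean mu
    estimated jointly with (H, log sigma2)."""
    H, log_s2, mu = params
    if not 0.01 < H < 0.99:
        return 1e10
    n = len(y)
    idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    S = fgn_acov(H, np.exp(log_s2), n)[idx]    # Toeplitz covariance
    c = cho_factor(S)
    e = y - mu
    return 0.5 * (n * np.log(2 * np.pi)
                  + 2 * np.sum(np.log(np.diag(c[0])))
                  + e @ cho_solve(c, e))

# simulate fGn with H = 0.7, sigma2 = 1, mu = 0.5, then fit jointly
rng = np.random.default_rng(5)
n = 300
idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
y = 0.5 + np.linalg.cholesky(fgn_acov(0.7, 1.0, n)[idx]) @ rng.normal(size=n)
fit = minimize(neg_loglik, x0=[0.6, 0.0, 0.0], args=(y,), method="Nelder-Mead")
print(fit.x)   # approximately [0.7, 0.0, 0.5]
```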
  7. By: Silvia Goncalves; Michael W. McCracken; Yongxu Yao
    Abstract: This paper examines the properties of out-of-sample predictability tests evaluated with real-time data subject to annual benchmark revisions. The presence of both regular and annual revisions can create time heterogeneity in the moments of the real-time forecast evaluation function, which is not compatible with the standard covariance stationarity assumption used to derive the asymptotic theory of these tests. To cover both regular and annual revisions, we replace this standard assumption with a periodic covariance stationarity assumption that allows for periodic patterns of time heterogeneity. Despite the lack of stationarity, we show that the Clark and McCracken (2009) test statistic is robust to the presence of annual benchmark revisions. A similar robustness property is shared by the bootstrap test of Goncalves, McCracken, and Yao (2025). Monte Carlo experiments indicate that both tests have satisfactory finite-sample size and power even in modest samples. We conclude with an application to forecasting U.S. employment with real-time data.
    Keywords: real-time data; bootstrap; prediction; forecast evaluation
    JEL: C53 C12 C52
    Date: 2025–09–11
    URL: https://d.repec.org/n?u=RePEc:fip:fedlwp:101742
  8. By: Laurent Davezies; Xavier D'Haultfœuille; Yannick Guyonvarch
    Abstract: This paper studies analytic inference along two dimensions of clustering. In such setups, the commonly used approach has two drawbacks. First, the corresponding variance estimator is not necessarily positive. Second, inference is invalid in non-Gaussian regimes, namely when the estimator of the parameter of interest is not asymptotically Gaussian. We consider a simple fix that addresses both issues. In Gaussian regimes, the corresponding tests are asymptotically exact and equivalent to usual ones. Otherwise, the new tests are asymptotically conservative. We also establish their uniform validity over a certain class of data generating processes. Independently of our tests, we highlight potential issues with multiple testing and nonlinear estimators under two-way clustering. Finally, we compare our approach with existing ones through simulations.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.20749
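    For context, a sketch of the standard analytic two-way estimator of Cameron, Gelbach and Miller (2011), whose possible failure of positive semi-definiteness is the first drawback the abstract mentions; the paper's fix itself is not reproduced here.

```python
import numpy as np

def cluster_meat(X, u, ids):
    """Sum over clusters g of (X_g' u_g)(X_g' u_g)'."""
    S = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(ids):
        s = X[ids == g].T @ u[ids == g]
        S += np.outer(s, s)
    return S

def ols_twoway_vcov(y, X, id1, id2):
    """OLS with the standard analytic two-way cluster variance
    V = V_1 + V_2 - V_12 (Cameron, Gelbach and Miller, 2011).
    The subtraction can make V non-positive-semi-definite, one of
    the two problems the paper's modified estimator addresses."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    inter = np.array([f"{a}|{b}" for a, b in zip(id1, id2)])
    meat = (cluster_meat(X, u, id1) + cluster_meat(X, u, id2)
            - cluster_meat(X, u, inter))
    return beta, bread @ meat @ bread
```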
  9. By: Tony Chernis; Niko Hauzenberger; Haroon Mumtaz; Michael Pfarrhofer
    Abstract: We propose a dynamic factor model (DFM) where the latent factors are linked to observed variables with unknown and potentially nonlinear functions. The key novelty and source of flexibility of our approach is a nonparametric observation equation, specified via Gaussian Process (GP) priors for each series. Factor dynamics are modeled with a standard vector autoregression (VAR), which facilitates computation and interpretation. We discuss a computationally efficient estimation algorithm and consider two empirical applications. First, we forecast key series from the FRED-QD dataset and show that the model yields improvements in predictive accuracy relative to linear benchmarks. Second, we extract driving factors of global inflation dynamics with the GP-DFM, which allows for capturing international asymmetries.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.04928
  10. By: Sheng Dai; Timo Kuosmanen; Xun Zhou
    Abstract: Econometric identification generally relies on orthogonality conditions, which usually state that the random error term is uncorrelated with the explanatory variables. In convex regression, the orthogonality conditions for identification are unknown. Applying Lagrangian duality theory, we establish the sample orthogonality conditions for convex regression, including additive and multiplicative formulations of the regression model, with and without monotonicity and homogeneity constraints. We then propose a hybrid instrumental variable control function approach to mitigate the impact of potential endogeneity in convex regression. The superiority of the proposed approach is shown in a Monte Carlo study and examined in an empirical application to Chilean manufacturing data.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.21110
  11. By: Ruofan Xu; Qingliang Fan
    Abstract: We propose a characteristics-augmented quantile factor (QCF) model, where unknown factor loading functions are linked to a large set of observed individual-level (e.g., bond- or stock-specific) covariates via a single-index projection. The single-index specification offers a parsimonious, interpretable, and statistically efficient way to nonparametrically characterize the time-varying loadings, while avoiding the curse of dimensionality in flexible nonparametric models. Using a three-step sieve estimation procedure, the QCF model demonstrates high in-sample and out-of-sample accuracy in simulations. We establish asymptotic properties for estimators of the latent factor, loading functions, and index parameters. In an empirical study, we analyze the dynamic distributional structure of U.S. corporate bond returns from 2003 to 2020. Our method outperforms the benchmark quantile Fama-French five-factor model and quantile latent factor model, particularly in the tails ($\tau=0.05, 0.95$). The model reveals state-dependent risk exposures driven by characteristics such as bond and equity volatility, coupon, and spread. Finally, we provide economic interpretations of the latent factors.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.19586
  12. By: Ben Deaner; Soonwoo Kwon
    Abstract: We present a novel approach for extrapolating causal effects away from the margin between treatment and non-treatment in sharp regression discontinuity designs with multiple covariates. Our methods apply both to settings in which treatment is a function of multiple observables and settings in which treatment is determined based on a single running variable. Our key identifying assumption is that conditional average treated and untreated potential outcomes are comonotonic: covariate values associated with higher average untreated potential outcomes are also associated with higher average treated potential outcomes. We provide an estimation method based on local linear regression. Our estimands are weighted average causal effects, even if comonotonicity fails. We apply our methods to evaluate counterfactual mandatory summer school policies.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2507.00289
  13. By: Vincent Boucher; Aristide Houndetoungan
    Abstract: We study the estimation of peer effects through social networks when researchers do not observe the entire network structure. Special cases include sampled networks, censored networks, and misclassified links. We assume that researchers can obtain a consistent estimator of the distribution of the network. We show that this assumption is sufficient for estimating peer effects using a linear-in-means model. We provide an empirical application to the study of peer effects on students' academic achievement using the widely used Add Health database, and show that network data errors induce a large downward bias in estimated peer effects.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.08145
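    A sketch of the linear-in-means benchmark the paper builds on, estimated by 2SLS with friends-of-friends instruments under the assumption, relaxed by the paper, that the network is fully observed; illustrative only.

```python
import numpy as np

def linear_in_means_2sls(y, x, G):
    """2SLS for the linear-in-means model
        y = a + b*Gy + c*x + d*Gx + e,
    instrumenting the endogenous peer average Gy with friends-of-friends
    characteristics G(Gx) (Bramoulle, Djebbari and Fortin, 2009).
    G is a row-normalized adjacency matrix, assumed fully observed;
    that full-observation assumption is what the paper relaxes."""
    n = len(y)
    X = np.column_stack([np.ones(n), G @ y, x, G @ x])
    Z = np.column_stack([np.ones(n), G @ (G @ x), x, G @ x])
    return np.linalg.solve(Z.T @ X, Z.T @ y)    # just-identified IV
```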
  14. By: Ellis Scharfenaker; Duncan K. Foley
    Abstract: A central challenge in statistical inference is the presence of confounding variables that may distort observed associations between treatment and outcome. Conventional "causal" methods, grounded in assumptions such as ignorability, exclude the possibility of unobserved confounders, leading to posterior inferences that overstate certainty. We develop a Bayesian framework that relaxes these assumptions by introducing entropy-favoring priors over hypothesis spaces that explicitly allow for latent confounding variables and partial information. Using the case of Simpson's paradox, we demonstrate how this approach produces logically consistent posterior distributions that widen credible intervals in the presence of potential confounding. Our method provides a generalizable, information-theoretic foundation for more robust predictive inference in observational sciences.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.05520
  15. By: Kyungsik Nam; Won-Ki Seo
    Abstract: This paper studies a functional regression model with nonstationary dependent and explanatory functional observations, in which the nonstationary stochastic trends of the dependent variable are explained by those of the explanatory variable, and the functional observations may be error-contaminated. We develop novel autocovariance-based estimation and inference methods for this model. The methodology is broadly applicable to economic and statistical functional time series with nonstationary dynamics. To illustrate our methodology and its usefulness, we apply it to the evaluation of the global economic impact of climate change, an issue of intrinsic importance.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.08591
  16. By: Xunkang Tian
    Abstract: This study investigates the identification of marginal treatment responses within multi-valued treatment models. Extending the hyper-rectangle model introduced by Lee and Salanie (2018), this paper relaxes restrictive assumptions, including the requirement of known treatment selection thresholds and the dependence of treatments on all unobserved heterogeneity. By incorporating an additional ranked treatment assumption, this study demonstrates that marginal treatment responses can be point- or set-identified under a broader set of conditions. The framework further enables the derivation of various treatment effects from the marginal treatment responses. Additionally, this paper introduces a hypothesis testing method to evaluate the effectiveness of policies on treatment effects, enhancing its applicability to empirical policy analysis.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.05177
  17. By: Carolina Caetano; Gregorio Caetano; Brantly Callaway; Derek Dyal
    Abstract: In this paper, we study causal inference when the treatment variable is an aggregation of multiple sub-treatment variables. Researchers often report marginal causal effects for the aggregated treatment, implicitly assuming that the target parameter corresponds to a well-defined average of sub-treatment effects. We show that, even in an ideal scenario for causal inference such as random assignment, the weights underlying this average have some key undesirable properties: they are not unique, they can be negative, and, holding all else constant, these issues become exponentially more likely to occur as the number of sub-treatments increases and the support of each sub-treatment grows. We propose approaches to avoid these problems, depending on whether or not the sub-treatment variables are observed.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.22885
  18. By: Lordan, Grace; Salehzadeh Nobari, Kaveh
    Abstract: Classical Manski bounds identify average treatment effects under minimal assumptions but, in finite samples, assume that latent conditional expectations are bounded by the sample’s own extrema or that the population extrema are known a priori, which is often untrue in firm-level data with heavy tails. We develop a finite-sample, concentration-driven band (concATE) that replaces that assumption with a Dvoretzky–Kiefer–Wolfowitz tail bound, combines it with delta-method variance, and allocates size via Bonferroni. The band extends to a group-sequential design that controls the family-wise error rate when the first “significant” diversity threshold is data-chosen. Applied to 945 listed firms (2015 Q2–2022 Q1), concATE shows that senior-level gender diversity raises Tobin’s Q once representation exceeds ≈ 30% in growth sectors and ≈ 65% in cyclical sectors.
    JEL: C21 C14 M14 L25 J16
    Date: 2025–09–03
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:129445
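    The Dvoretzky–Kiefer–Wolfowitz inequality is the concentration ingredient behind concATE. The sketch below shows only that ingredient, a uniform confidence band for an empirical CDF; it is not the authors' full band construction, and the sample is simulated.

```python
import numpy as np

def dkw_eps(n, alpha):
    """DKW uniform band half-width: with probability at least 1 - alpha,
    sup_x |F_n(x) - F(x)| <= sqrt(log(2/alpha) / (2n))."""
    return np.sqrt(np.log(2.0 / alpha) / (2.0 * n))

# a 95% uniform confidence band for an empirical CDF
rng = np.random.default_rng(1)
n = 400
x = np.sort(rng.lognormal(size=n))       # heavy-tailed, like firm outcomes
Fn = np.arange(1, n + 1) / n
eps = dkw_eps(n, 0.05)
lower, upper = np.clip(Fn - eps, 0, 1), np.clip(Fn + eps, 0, 1)
print(f"DKW half-width at n = {n}: {eps:.3f}")
```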
  19. By: Niklas Ahlgren; Alexander Back; Timo Teräsvirta
    Abstract: We develop misspecification tests for building additive time-varying (ATV-)GARCH models. In the model, the volatility equation of the GARCH model is augmented by a deterministic time-varying intercept modeled as a linear combination of logistic transition functions. The intercept is specified by a sequence of tests, moving from specific to general. The first is a test of the standard stationary GARCH model against an ATV-GARCH model with one transition. The alternative model is unidentified under the null hypothesis, which makes the usual LM test invalid. To overcome this problem, we use the standard method of approximating the transition function by a Taylor expansion around the null hypothesis. Testing proceeds until the first non-rejection. We investigate the small-sample properties of the tests in a comprehensive simulation study. An application to the VIX index indicates that the volatility of the index is not constant over time but begins a slow increase around the 2007-2008 financial crisis.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.23821
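    To fix ideas about the alternative model, a simulation sketch of a GARCH(1, 1) whose intercept drifts through one logistic transition; all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_atv_garch(n, omega0=0.05, delta=0.1, gamma=10.0, c=0.5,
                       alpha=0.08, beta=0.90, rng=None):
    """Simulate an additive time-varying GARCH(1, 1): the intercept
    omega_t = omega0 + delta * logistic(gamma * (t/n - c)) drifts
    deterministically through a single logistic transition, the
    one-transition alternative of the paper's first test."""
    if rng is None:
        rng = np.random.default_rng(4)
    h, y = np.zeros(n), np.zeros(n)
    h[0] = omega0 / (1 - alpha - beta)
    for t in range(1, n):
        g = 1.0 / (1.0 + np.exp(-gamma * (t / n - c)))   # transition
        h[t] = omega0 + delta * g + alpha * y[t-1]**2 + beta * h[t-1]
        y[t] = np.sqrt(h[t]) * rng.normal()
    return y, h

returns, cond_var = simulate_atv_garch(2000)
```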
  20. By: Hjertstrand, Per (Research Institute of Industrial Economics (IFN)); Proctor, Andrew (Department of Economics, Ludwig Maximilian University of Munich); Westerlund, Joakim (Department of Economics, Lund University, Sweden)
    Abstract: One of the most cited studies within the field of binary choice models is that of Klein and Spady (1993), in which the authors propose an estimator that is not only non-parametric with respect to the choice density but also asymptotically efficient. However, while theoretically appealing, the estimator has been found to be difficult to implement and to have poor small-sample properties. This paper proposes a simplified version of the Klein–Spady estimator that is easy to implement, numerically more stable, and has excellent small-sample and asymptotic properties.
    Keywords: Binary choice; Maximum likelihood; Semi-parametric estimation
    JEL: C14 C25 D91
    Date: 2025–09–15
    URL: https://d.repec.org/n?u=RePEc:hhs:iuiwop:1535
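    For orientation, a sketch of the classical Klein and Spady (1993) objective: a leave-one-out kernel estimate of the choice probability evaluated inside the likelihood, with the first slope normalized for identification. The paper's simplified estimator is not reproduced here, and the bandwidth, data, and optimizer are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def ks_loglik(theta, y, X, h=0.3):
    """Negative classical Klein-Spady log-likelihood: P(y=1|x) is a
    leave-one-out kernel regression of y on the index x'b, with the
    first slope fixed at 1 for identification."""
    b = np.concatenate([[1.0], theta])
    v = X @ b
    d = (v[:, None] - v[None, :]) / h
    K = np.exp(-0.5 * d**2)                  # Gaussian kernel
    np.fill_diagonal(K, 0.0)                 # leave-one-out
    p = np.clip(K @ y / K.sum(axis=1), 1e-3, 1 - 1e-3)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 2))
y = (X @ np.array([1.0, -0.5]) + rng.logistic(size=n) > 0).astype(float)
fit = minimize(ks_loglik, x0=np.zeros(1), args=(y, X), method="Nelder-Mead")
print("estimated slope ratio b2/b1:", fit.x)   # approx -0.5
```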
  21. By: Didier Nibbering; Matthijs Oosterveen
    Abstract: Many policy evaluations using instrumental variable (IV) methods include individuals who interact with each other, potentially violating the standard IV assumptions. This paper defines and partially identifies direct and spillover effects with a clear policy-relevant interpretation under relatively mild assumptions on interference. Our framework accommodates both spillovers from the instrument to treatment and from treatment to outcomes and allows for multiple peers. By generalizing monotone treatment response and selection assumptions, we derive informative bounds on policy-relevant effects without restricting the type or direction of interference. The results extend IV estimation to more realistic social contexts, informing program evaluation and treatment scaling when interference is present.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.12538
  22. By: Pablo A. Guerron-Quintana; James M. Nason
    Abstract: This chapter surveys Bayesian methods for estimating dynamic stochastic general equilibrium (DSGE) models. We focus on New Keynesian DSGE (NKDSGE) models because of the ongoing interest shown in this class of models by economists in academic and policy-making institutions. Their interest stems from the ability of NKDSGE models to transmit monetary policy shocks into endogenous fluctuations at business cycle frequencies. Intuition about this propagation mechanism is developed by reviewing the structure of a canonical NKDSGE model. Estimation and evaluation of the NKDSGE model rest on detrending its optimality and equilibrium conditions to construct a linear approximation of the model, from which we solve for its linear decision rules. This solution is mapped into a linear state space model, which allows us to run the Kalman filter to generate predictions and updates of the detrended state and control variables along with the predictive likelihood of the linear approximate NKDSGE model. The predictions, updates, and likelihood are the inputs needed to operate the Metropolis-Hastings Markov chain Monte Carlo sampler from which we draw the posterior distribution of the NKDSGE model. The sampler also requires the analyst to pick priors for the NKDSGE model parameters and initial conditions to start the sampler. We review pseudo-code that implements this sampler before reporting estimates of a canonical NKDSGE model across samples that begin in 1982Q1 and end in 2019Q4, 2020Q4, 2021Q4, and 2022Q4. The estimates are compared across the four samples. The survey also gives a short history of DSGE model estimation and points to issues at the frontier of this research agenda.
    Keywords: dynamic stochastic general equilibrium, Bayesian, Metropolis-Hastings, Markov Chain Monte Carlo, Kalman filter, likelihood
    JEL: C32 E10 E32
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:een:camaaa:2025-52
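    The chapter's pipeline, a Kalman-filter likelihood inside a Metropolis-Hastings sampler, can be miniaturized. The sketch below uses a toy scalar state-space model standing in for the linearized NKDSGE solution, with a flat prior on the single parameter; everything about the model is an illustrative assumption, not the chapter's specification.

```python
import numpy as np

def kalman_loglik(y, rho, q, r):
    """Kalman-filter log-likelihood of the scalar state-space model
    s_t = rho*s_{t-1} + e_t (var q), y_t = s_t + v_t (var r)."""
    s, P, ll = 0.0, q / max(1 - rho**2, 1e-6), 0.0
    for yt in y:
        s, P = rho * s, rho**2 * P + q                  # predict
        F = P + r
        ll += -0.5 * (np.log(2 * np.pi * F) + (yt - s)**2 / F)
        K = P / F                                        # update
        s, P = s + K * (yt - s), (1 - K) * P
    return ll

def log_post(rho, y):
    """Flat prior on (-0.99, 0.99); posterior kernel = likelihood."""
    return kalman_loglik(y, rho, 1.0, 1.0) if abs(rho) < 0.99 else -np.inf

rng = np.random.default_rng(3)
s = np.zeros(200)
for t in range(1, 200):
    s[t] = 0.8 * s[t-1] + rng.normal()
y = s + rng.normal(size=200)

# random-walk Metropolis-Hastings over rho
rho, lp, draws = 0.0, log_post(0.0, y), []
for _ in range(5000):
    prop = rho + 0.1 * rng.normal()
    lp_prop = log_post(prop, y)
    if np.log(rng.uniform()) < lp_prop - lp:             # accept/reject
        rho, lp = prop, lp_prop
    draws.append(rho)
print("posterior mean of rho:", np.mean(draws[1000:]))   # near 0.8
```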
  23. By: Jinting Guo
    Abstract: This paper provides the first econometric evidence for diagnostic expectations (DE) in DSGE models. Using the identification framework of Qu and Tkachenko (2017), I show that DE generate dynamics unattainable under rational expectations (RE), with no RE parameterization capable of matching the volatility and persistence patterns implied by DE. Consequently, DE are not observationally equivalent to RE and constitute an endogenous source of macroeconomic fluctuations, distinct from both structural frictions and exogenous shocks. From an econometric perspective, DE preserve overall model identification but weaken the identification of shock variances. To ensure robust conclusions across estimation methods and equilibrium conditions, I extend Bayesian estimation with Sequential Monte Carlo sampling to the indeterminacy domain. These findings advance the econometric study of expectations and highlight the macroeconomic relevance of diagnostic beliefs.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.08472
  24. By: Amarendra Sharma
    Abstract: This paper introduces a novel Proxy-Enhanced Correlated Random Effects Double Machine Learning (P-CRE-DML) framework to estimate causal effects in panel data with non-linearities and unobserved heterogeneity. Combining Double Machine Learning (DML, Chernozhukov et al., 2018), Correlated Random Effects (CRE, Mundlak, 1978), and lagged variables (Arellano & Bond, 1991), and innovating within the CRE-DML framework (Chernozhukov et al., 2022; Clarke & Polselli, 2025; Fuhr & Papies, 2024), we apply P-CRE-DML to investigate the effect of social trust on GDP growth across 89 countries (2010-2020). We find a positive and statistically significant relationship between social trust and economic growth, in line with prior findings on the trust-growth relationship (e.g., Knack & Keefer, 1997). Furthermore, a Monte Carlo simulation demonstrates that P-CRE-DML has lower bias than CRE-DML and System GMM. P-CRE-DML thus offers a robust and flexible alternative for panel data causal inference, with applications beyond economic growth.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.23297
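    A schematic of the CRE-DML core that the paper innovates on: Mundlak means added to the controls, cross-fitted machine-learning nuisances, and a partialling-out final stage. The proxy enhancement (the "P" in P-CRE-DML) is not reproduced; the random forests and fold count are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def cre_dml_effect(y, d, X, unit):
    """Schematic CRE-DML: augment controls with unit means of X
    (Mundlak device), cross-fit nuisance regressions, then run the
    partialling-out estimator of Chernozhukov et al. (2018)."""
    # correlated random effects: append within-unit means of X
    means = np.vstack([X[unit == u].mean(axis=0) for u in unit])
    W = np.hstack([X, means])
    ry, rd = np.zeros_like(y), np.zeros_like(d)
    for train, test in KFold(5, shuffle=True, random_state=0).split(W):
        my = RandomForestRegressor(n_estimators=200, random_state=0)
        md = RandomForestRegressor(n_estimators=200, random_state=0)
        ry[test] = y[test] - my.fit(W[train], y[train]).predict(W[test])
        rd[test] = d[test] - md.fit(W[train], d[train]).predict(W[test])
    return (rd @ ry) / (rd @ rd)      # final OLS of residual on residual
```

    The last line is the partialling-out estimate of the treatment coefficient; a standard error would follow from the usual DML influence-function variance.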
  25. By: MD Nazmul Ahsan; Jean-Marie Dufour; Gabriel Rodriguez
    Abstract: Stochastic covariances are critical for macroeconomic and financial modelling, particularly in capturing uncertainty and dynamic interdependencies. This study introduces the Dynamic Factor Augmented VAR with Higher-order Multivariate Stochastic Volatility (DFAVAR-HMSV) framework, along with a computationally efficient estimation methodology. The proposed model captures complex dynamic interdependencies, leverage effects, and higher-order persistence in volatility structures. Applying this framework to construct uncertainty indices for Canada and Québec, the study provides critical insights into regional and national macroeconomic dynamics.
    Keywords: Macroeconomic uncertainty, multivariate stochastic volatility, dynamic factor models, high-dimensional econometrics, forecasting, policy analysis
    JEL: C32 C53 C55 E37
    Date: 2025–09–08
    URL: https://d.repec.org/n?u=RePEc:cir:cirpro:2025rp-19
  26. By: Xia Han; Liyuan Lin; Mengshi Zhao
    Abstract: The Diversification Quotient (DQ), introduced by Han et al. (2025), is a recently proposed measure of portfolio diversification that quantifies the reduction in a portfolio's risk-level parameter attributable to diversification. Grounded in a rigorous theoretical framework, DQ effectively captures heavy tails and common shocks and enhances efficiency in portfolio optimization. This paper further explores the convergence properties and asymptotic normality of empirical DQ estimators based on Value at Risk (VaR) and Expected Shortfall (ES), with explicit calculation of the asymptotic variance. In contrast to the diversification ratio (DR) proposed by Tasche (2007), which may exhibit diverging asymptotic variance due to its lack of location invariance, the DQ estimators demonstrate greater robustness under various distributional settings. We further evaluate their performance under elliptical distributions and conduct a simulation study to examine their finite-sample behavior. The results offer a solid statistical foundation for the application of DQ in financial risk management and decision-making.
    Date: 2025–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2506.20385
  27. By: Luan, Qinmeng; Hamp, James
    Abstract: Recent work has proposed Wasserstein k-means (Wk-means) clustering as a powerful method to classify regimes in time series data, and one-dimensional asset returns in particular. In this paper, we begin by studying in detail the behaviour of the Wasserstein k-means clustering algorithm applied to synthetic one-dimensional time series data. We extend the previous work by studying the dynamics of the clustering algorithm and how varying the hyperparameters affects performance across different random initialisations. We compute simple metrics that we find useful in identifying high-quality clusterings. We then extend the technique of Wasserstein k-means clustering to multidimensional time series data by approximating the multidimensional Wasserstein distance as a sliced Wasserstein distance, resulting in a method we call 'sliced Wasserstein k-means (sWk-means) clustering'. We apply the sWk-means clustering method to the problem of automated regime classification in multidimensional time series data, using synthetic data to demonstrate the validity and effectiveness of the approach. Finally, we show that the sWk-means method is able to identify distinct market regimes in real multidimensional financial time series, using publicly available foreign exchange spot rate data as a case study. We conclude with remarks about some limitations of our approach and potential complementary or alternative approaches.
    Keywords: Wasserstein metric; market regimes; regime classification; time series; unsupervised learning
    JEL: C14 C63
    Date: 2025–08–29
    URL: https://d.repec.org/n?u=RePEc:ehl:lserod:129537
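    The key computational object here is the sliced Wasserstein distance. Below is a minimal sketch assuming equal-length windows; the full sWk-means barycenter updates are omitted, and the two synthetic "regimes" are invented for illustration.

```python
import numpy as np

def sliced_w2(X, Y, n_proj=50, rng=None):
    """Sliced 2-Wasserstein distance between two equal-size point
    clouds X, Y of shape (n_points, dim): average over random
    directions of the 1-D W2 distance, which for equal-size samples
    is the root-mean-square gap between sorted projections."""
    if rng is None:
        rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)                 # random direction
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_proj)

# two synthetic 3-D "regimes": the distance separates them cleanly
rng = np.random.default_rng(6)
calm = rng.normal(0.0, 1.0, size=(250, 3))
stressed = rng.normal(0.0, 3.0, size=(250, 3))
print(sliced_w2(calm, stressed))                       # large
print(sliced_w2(calm, rng.normal(0, 1, (250, 3))))     # near zero
```

    A pairwise matrix of such distances between sliding windows can then be fed to any metric-based clusterer as a stand-in for the full sWk-means iteration.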
  28. By: Guo, Hongfei; Marín Díazaraque, Juan Miguel; Veiga, Helena
    Abstract: Accurately forecasting volatility is central to risk management, portfolio allocation, and asset pricing. While high-frequency realised measures have been shown to improve predictive accuracy, their value is not uniform across markets or horizons. This paper introduces a class of Bayesian neural network stochastic volatility (NN-SV) models that combine the flexibility of machine learning with the structure of stochastic volatility models. The specifications incorporate realised variance, jump variation, and semivariance from daily and intraday data, and model uncertainty is addressed through a Bayesian stacking ensemble that adaptively aggregates predictive distributions. Using data from the DAX, FTSE 100, and S&P 500 indices, the models are evaluated against classical GARCH and parametric SV benchmarks. The results show that the predictive content of high-frequency measures is horizon- and market-specific. The Bayesian ensemble further enhances robustness by exploiting complementary model strengths. Overall, NN-SV models not only outperform established benchmarks in many settings but also provide new insights into market-specific drivers of volatility dynamics.
    Keywords: Ensemble forecasts; GARCH; Neural networks; Realised volatility; Stochastic volatility
    JEL: C11 C32 C45 C53 C58
    Date: 2025–09–16
    URL: https://d.repec.org/n?u=RePEc:cte:wsrepe:47944
  29. By: Jyotishka Datta; Nicholas G. Polson
    Abstract: Motivated by Tweedie's formula for the Compound Decision problem, we examine the theoretical foundations of empirical Bayes estimators that directly model the marginal density $m(y)$. Our main result shows that polynomial log-marginals of degree $k \ge 3$ cannot arise from any valid prior distribution in exponential family models, while quadratic forms correspond exactly to Gaussian priors. This provides theoretical justification for why certain empirical Bayes decision rules, while practically useful, do not correspond to any formal Bayes procedures. We also strengthen the diagnostic by showing that a marginal is a Gaussian convolution only if it extends to a bounded solution of the heat equation in a neighborhood of the smoothing parameter, beyond the convexity of $c(y)=\tfrac12 y^2+\log m(y)$.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.05823
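    For reference, the Gaussian-model version of Tweedie's formula that the abstract builds on, with the quadratic (k = 2) case spelled out.

```latex
% Tweedie's formula in the Gaussian model y | \theta ~ N(\theta, \sigma^2):
% the posterior mean shifts y by the score of the marginal m(y).
\[
  \mathbb{E}[\theta \mid y] \;=\; y + \sigma^2 \,\frac{d}{dy}\log m(y).
\]
% A quadratic log-marginal, \log m(y) = a + b y + c y^2, yields the
% linear rule E[\theta | y] = y + \sigma^2 (b + 2 c y), exactly the
% posterior mean under a Gaussian prior (the k = 2 case above); the
% paper's main result is that degree k >= 3 cannot arise from any
% valid prior.
```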

This nep-ecm issue is ©2025 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.