nep-ecm New Economics Papers
on Econometrics
Issue of 2023‒07‒31
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Doubly Robust Estimation of Direct and Indirect Quantile Treatment Effects with Machine Learning By Yu-Chin Hsu; Martin Huber; Yu-Min Yen
  2. Specification testing with grouped fixed effects By Pigini, Claudia; Pionati, Alessandro; Valentini, Francesco
  3. Expected Shortfall LASSO By Sander Barendse
  4. Optimization of the Generalized Covariance Estimator in Noncausal Processes By Gianluca Cubadda; Francesco Giancaterini; Alain Hecq; Joann Jasiak
  5. A Nonparametric Test of $m$th-degree Inverse Stochastic Dominance By Shiyun Hu; Hongyi Jiang; Zhenting Sun
  6. Identification of Non-Additive Fixed Effects Models: Is the Return to Teacher Quality Homogeneous? By Jinyong Hahn; John D. Singleton; Neşe Yildiz
  7. Assessing Heterogeneity of Treatment Effects By Tetsuya Kaji; Jianfei Cao
  8. Nonparametric Causal Decomposition of Group Disparities By Ang Yu; Felix Elwert
  9. Modelling and Forecasting Macroeconomic Risk with Time Varying Skewness Stochastic Volatility Models By Andrea Renzetti
  10. Identifying Socially Disruptive Policies By Eric Auerbach; Yong Cai
  11. Successive one-sided Hodrick-Prescott filter with incremental filtering algorithm for nonlinear economic time series By Yuxia Liu; Qi Zhang; Wei Xiao; Tianguang Chu
  12. The Yule-Frisch-Waugh-Lovell Theorem By Deepankar Basu
  13. Formal Covariate Benchmarking to Bound Omitted Variable Bias By Deepankar Basu
  14. Identifying News Shocks from Forecasts By Jonathan J Adams; Philip Barrett
  15. Probabilistic forecasting of electricity prices using an augmented LMARX-model By Andersson, Jonas; Sheybanivaziri, Samaneh

  1. By: Yu-Chin Hsu; Martin Huber; Yu-Min Yen
    Abstract: We suggest double/debiased machine learning estimators of direct and indirect quantile treatment effects under a selection-on-observables assumption. This permits disentangling the causal effect of a binary treatment at a specific outcome rank into an indirect component that operates through an intermediate variable called a mediator and an (unmediated) direct impact. The proposed method is based on the efficient score functions of the cumulative distribution functions of potential outcomes, which are robust to certain misspecifications of the nuisance parameters, i.e., the outcome, treatment, and mediator models. We estimate these nuisance parameters by machine learning and use cross-fitting to reduce overfitting bias in the estimation of direct and indirect quantile treatment effects. We establish uniform consistency and asymptotic normality of our effect estimators. We also propose a multiplier bootstrap for statistical inference and show its validity. Finally, we investigate the finite sample performance of our method in a simulation study and apply it to empirical data from the National Job Corps Study to assess the direct and indirect earnings effects of training.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.01049&r=ecm
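    A minimal sketch of the cross-fitting logic behind double/debiased machine learning, using a simple doubly robust (AIPW) score for an average effect. The paper's score functions target quantile effects and include mediator models; everything below (data, learners, the ATE score) is illustrative only:

      # Cross-fitting: nuisance models are fit on training folds and the
      # efficient score is evaluated on the held-out fold.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      n = 2000
      X = rng.normal(size=(n, 5))                    # covariates
      p = 1 / (1 + np.exp(-X[:, 0]))                 # true propensity score
      D = rng.binomial(1, p)                         # binary treatment
      Y = D * 1.0 + X[:, 0] + rng.normal(size=n)     # outcome, true effect = 1

      scores = np.empty(n)
      for train, test in KFold(5, shuffle=True, random_state=0).split(X):
          ps = RandomForestClassifier(random_state=0).fit(X[train], D[train])
          m1 = RandomForestRegressor(random_state=0).fit(
              X[train][D[train] == 1], Y[train][D[train] == 1])
          m0 = RandomForestRegressor(random_state=0).fit(
              X[train][D[train] == 0], Y[train][D[train] == 0])
          e = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
          mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
          # Doubly robust (AIPW) score on the held-out fold.
          scores[test] = (mu1 - mu0
                          + D[test] * (Y[test] - mu1) / e
                          - (1 - D[test]) * (Y[test] - mu0) / (1 - e))

      ate, se = scores.mean(), scores.std(ddof=1) / np.sqrt(n)
      print(f"ATE estimate: {ate:.3f} (se {se:.3f})")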
  2. By: Pigini, Claudia; Pionati, Alessandro; Valentini, Francesco
    Abstract: We propose a bootstrap generalized Hausman test for the correct specification of unobserved heterogeneity in fixed-effects panel data models. We consider as null hypotheses two scenarios in which the unobserved heterogeneity is either time-invariant or specified as additive individual and time effects. We contrast the standard fixed-effects estimators with the recently developed two-step grouped fixed-effects estimator, which is consistent in the presence of time-varying heterogeneity under minimal specification and distributional assumptions for the unobserved effects. The Hausman test exploits the general formulation for the variance of the vector of contrasts, and critical values are computed via parametric percentile bootstrap so as to account for the non-centrality of the asymptotic χ² distribution arising from the incidental parameters and approximation biases. Monte Carlo evidence shows that the test has correct size and good power in both linear and nonlinear specifications.
    Keywords: Additive effects, Asymptotic bias, Hausman test, Parametric bootstrap, Time-varying heterogeneity
    JEL: C12 C23 C25
    Date: 2023–07–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:117821&r=ecm
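    As a worked equation, a Hausman statistic built on the variance of the vector of contrasts takes the familiar quadratic form below; the notation is generic, and the paper replaces the usual chi-squared critical values with a parametric percentile bootstrap:

      % \hat\theta_{FE}: standard fixed-effects estimator;
      % \hat\theta_{GFE}: two-step grouped fixed-effects estimator.
      H = \left( \hat\theta_{FE} - \hat\theta_{GFE} \right)^{\top}
          \widehat{\mathrm{Var}}\left( \hat\theta_{FE} - \hat\theta_{GFE} \right)^{-1}
          \left( \hat\theta_{FE} - \hat\theta_{GFE} \right)
      % Incidental-parameter and approximation biases make the asymptotic
      % \chi^2 distribution non-central, hence the bootstrap critical values.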
  3. By: Sander Barendse
    Abstract: We propose an $\ell_1$-penalized estimator for high-dimensional models of Expected Shortfall (ES). The estimator is obtained as the solution to a least-squares problem for an auxiliary dependent variable, which is defined as a transformation of the dependent variable and a pre-estimated tail quantile. Leveraging a sparsity condition, we derive a nonasymptotic bound on the prediction and estimator errors of the ES estimator, accounting for the estimation error in the dependent variable, and provide conditions under which the estimator is consistent. Our estimator is applicable to heavy-tailed time-series data, and we find that the number of parameters in the model may grow with the sample size at a rate that depends on the dependence and heavy-tailedness in the data. In an empirical application, we consider the systemic risk measure CoES with a set of regressors consisting of nonlinear transformations of a set of state variables. We find that the nonlinear model outperforms an unpenalized and untransformed benchmark considerably.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.01033&r=ecm
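    A sketch of the two-step construction described in the abstract: pre-estimate the tail quantile, form an auxiliary dependent variable whose conditional mean is the ES, then run an l1-penalized least-squares regression. The transformation below is the standard one for this two-step approach; the tuning choices and the exact estimator in the paper may differ:

      import numpy as np
      from sklearn.linear_model import Lasso, QuantileRegressor

      rng = np.random.default_rng(1)
      n, k, tau = 1000, 20, 0.05                # tau = tail probability level
      X = rng.normal(size=(n, k))
      y = X[:, 0] - X[:, 1] + rng.standard_t(df=5, size=n)  # heavy-tailed errors

      # Step 1: (penalized) quantile regression for the tau-quantile.
      qr = QuantileRegressor(quantile=tau, alpha=0.01, solver="highs").fit(X, y)
      q_hat = qr.predict(X)

      # Step 2: auxiliary variable with E[Z | X] = ES_tau(Y | X) when the
      # quantile model is correctly specified.
      Z = q_hat + (y - q_hat) * (y <= q_hat) / tau

      # Step 3: LASSO of Z on X yields a sparse ES model.
      es_model = Lasso(alpha=0.05).fit(X, Z)
      print("selected regressors:", np.flatnonzero(es_model.coef_))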
  4. By: Gianluca Cubadda; Francesco Giancaterini; Alain Hecq; Joann Jasiak
    Abstract: This paper investigates the performance of the Generalized Covariance estimator (GCov) in estimating mixed causal and noncausal Vector Autoregressive (VAR) models. The GCov estimator is a semi-parametric method that minimizes an objective function without making any assumptions about the error distribution and is based on nonlinear autocovariances to identify the causal and noncausal orders of the mixed VAR. When the number and type of nonlinear autocovariances included in the objective function of a GCov estimator are insufficient or inadequate, or the error density is too close to the Gaussian, identification issues can arise, resulting in local minima in the objective function of the estimator at parameter values associated with incorrect causal and noncausal orders. Then, depending on the starting point, the optimization algorithm may converge to a local minimum, leading to inaccurate estimates. To circumvent this issue, the paper proposes the use of the Simulated Annealing (SA) optimization algorithm as an alternative to conventional numerical optimization methods. The results demonstrate that the SA optimization algorithm performs effectively when applied to multivariate mixed VAR models, successfully eliminating the effects of local minima. The approach is illustrated by simulations and an empirical application of a bivariate mixed VAR model with commodity price series.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.14653&r=ecm
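    A minimal illustration of why a global optimizer helps here: a gradient-based search started near the wrong regime stalls in a local minimum, while simulated annealing explores the whole bounded parameter space. The objective below is a generic multimodal stand-in, not the GCov criterion:

      import numpy as np
      from scipy.optimize import dual_annealing, minimize

      def objective(theta):
          # Toy multimodal criterion: global minimum at (3, 3), with spurious
          # local minima (mimicking incorrect causal/noncausal orders).
          x, y = theta
          return ((x - 3) ** 2 + (y - 3) ** 2
                  + 10 * np.sin(2 * (x - 3)) ** 2
                  + 10 * np.sin(2 * (y - 3)) ** 2)

      local = minimize(objective, x0=[0.0, 0.0], method="BFGS")
      global_ = dual_annealing(objective, bounds=[(-10, 10), (-10, 10)], seed=0)
      print("local search       :", local.x, local.fun)
      print("simulated annealing:", global_.x, global_.fun)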
  5. By: Shiyun Hu; Hongyi Jiang; Zhenting Sun
    Abstract: This paper proposes a nonparametric test for $m$th-degree inverse stochastic dominance, a powerful tool for ranking distribution functions according to social welfare. We construct the test based on empirical process theory. The test is shown to be asymptotically size controlled and consistent. The good finite sample properties of the test are illustrated via Monte Carlo simulations. We apply our test to inequality growth in the United Kingdom from 1995 to 2010.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12271&r=ecm
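    For intuition about what an $m$th-degree inverse stochastic dominance test compares, the m = 2 case ranks distributions by their generalized Lorenz curves (integrated quantile functions). The naive bootstrap sup-statistic below only sketches this idea; the paper's empirical-process-based test and its size control are more refined:

      import numpy as np

      def gen_lorenz(sample, grid):
          # GL(p) = integral_0^p Q(t) dt, approximated on an equispaced p-grid.
          return np.cumsum(np.quantile(sample, grid)) * (grid[1] - grid[0])

      rng = np.random.default_rng(2)
      a = rng.lognormal(0.0, 0.5, size=800)   # income distribution A
      b = rng.lognormal(0.1, 0.7, size=800)   # income distribution B

      grid = np.linspace(0.01, 0.99, 99)
      # H0: A dominates B at degree 2, i.e. GL_A(p) >= GL_B(p) for all p.
      stat = np.max(gen_lorenz(b, grid) - gen_lorenz(a, grid))

      pooled = np.concatenate([a, b])         # naive pooled bootstrap
      boot = np.array([np.max(gen_lorenz(rng.choice(pooled, b.size), grid)
                              - gen_lorenz(rng.choice(pooled, a.size), grid))
                       for _ in range(499)])
      print("sup statistic:", stat, " bootstrap p-value:", np.mean(boot >= stat))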
  6. By: Jinyong Hahn; John D. Singleton; Neşe Yildiz
    Abstract: Panel or grouped data are often used to allow for unobserved individual heterogeneity in econometric models via fixed effects. In this paper, we discuss identification of a panel data model in which the unobserved heterogeneity both enters additively and interacts with treatment variables. We present identification and estimation methods for parameters of interest in this model under both strict and weak exogeneity assumptions. The key identification insight is that other periods' treatment variables are instruments for the unobserved fixed effects. We apply our proposed estimator to matched student-teacher data used to estimate value-added models of teacher quality. We show that the common assumption that the return to unobserved teacher quality is the same for all students is rejected by the data. We also present evidence that No Child Left Behind-era school accountability increased the effectiveness of teacher quality for lower performing students.
    JEL: C12 C14 C31 C36 C52 H75 I21 I24
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31384&r=ecm
  7. By: Tetsuya Kaji; Jianfei Cao
    Abstract: Treatment effect heterogeneity is of major interest in economics, but its assessment is often hindered by the fundamental lack of identification of the individual treatment effects. For example, we may want to assess the effect of insurance on the health of otherwise unhealthy individuals, but it is infeasible to insure only the unhealthy, and thus the causal effects for those individuals are not identified. Or we may be interested in the share of winners from a minimum wage increase, but without observing the counterfactual, the winners are not identified. Such heterogeneity is often assessed by quantile treatment effects, which do not come with a clear interpretation, and the takeaway can sometimes be equivocal. We show that, with the quantiles of the treated and control outcomes, the ranges of these quantities are identified and can be informative even when the average treatment effects are not significant. Two applications illustrate how these ranges can inform us about the heterogeneity of the treatment effects.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.15048&r=ecm
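    The paper derives identified ranges for heterogeneity measures from the quantiles of treated and control outcomes. In the same partial-identification spirit, the sketch below computes the classical Makarov bounds on the distribution of individual treatment effects from the two marginal distributions; it illustrates the flavor of such ranges, not the authors' exact construction:

      import numpy as np

      rng = np.random.default_rng(3)
      y1 = rng.normal(1.0, 1.0, size=5000)    # treated outcomes
      y0 = rng.normal(0.0, 1.5, size=5000)    # control outcomes

      def ecdf(sample, x):
          return np.searchsorted(np.sort(sample), x, side="right") / sample.size

      grid = np.linspace(-8, 10, 500)
      for delta in (0.0, 1.0, 2.0):
          diff = ecdf(y1, grid) - ecdf(y0, grid - delta)
          lower = max(diff.max(), 0.0)        # sup_y max{F1(y) - F0(y - d), 0}
          upper = 1.0 + min(diff.min(), 0.0)  # 1 + inf_y min{F1(y) - F0(y - d), 0}
          print(f"P(effect <= {delta}) is in [{lower:.2f}, {upper:.2f}]")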
  8. By: Ang Yu; Felix Elwert
    Abstract: We propose a causal framework for decomposing a group disparity in an outcome in terms of an intermediate treatment variable. Our framework captures the contributions of group differences in baseline potential outcome, treatment prevalence, average treatment effect, and selection into treatment. This framework is counterfactually formulated and readily informs policy interventions. The decomposition component for differential selection into treatment is particularly novel, revealing a new mechanism for explaining and ameliorating disparities. This framework reformulates the classic Kitagawa-Blinder-Oaxaca decomposition in causal terms, supplements causal mediation analysis by explaining group disparities instead of group effects, and resolves conceptual difficulties of recent random equalization decompositions. We also provide a conditional decomposition that allows researchers to incorporate covariates in defining the estimands and corresponding interventions. We develop nonparametric estimators based on efficient influence functions of the decompositions. We show that, under mild conditions, these estimators are $\sqrt{n}$-consistent, asymptotically normal, semiparametrically efficient, and doubly robust. We apply our framework to study the causal role of education in intergenerational income persistence. We find that both differential prevalence of and differential selection into college graduation significantly contribute to the disparity in income attainment between income origin groups.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.16591&r=ecm
  9. By: Andrea Renzetti
    Abstract: In this paper I propose a parametric framework for modelling and forecasting macroeconomic tail risk based on stochastic volatility models with Skew-Normal and Skew-t shocks featuring stochastic skewness. The paper develops posterior simulation samplers for Bayesian estimation of both univariate and VAR models of this type. In an application, I use the models to predict downside risk to GDP growth and I show that this approach represents a competitive alternative to quantile regression. Finally, estimating a medium scale VAR on US data I show that time varying skewness is a relevant feature of macroeconomic and financial shocks.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.09287&r=ecm
  10. By: Eric Auerbach; Yong Cai
    Abstract: Social disruption occurs when a policy creates or destroys many network connections between agents. It is a costly side effect of many interventions and so a growing empirical literature recommends measuring and accounting for social disruption when evaluating the welfare impact of a policy. However, there is currently little work characterizing what can actually be learned about social disruption from data in practice. In this paper, we consider the problem of identifying social disruption in a research design that is popular in the literature. We provide two sets of identification results. First, we show that social disruption is not generally point identified, but informative bounds can be constructed using the eigenvalues of the network adjacency matrices observed by the researcher. Second, we show that point identification follows from a theoretically motivated monotonicity condition, and we derive a closed form representation. We apply our methods in two empirical illustrations and find large policy effects that otherwise might be missed by alternatives in the literature.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.15000&r=ecm
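    The informative bounds in the paper are built from eigenvalues of the observed network adjacency matrices. The sketch below shows only that basic ingredient, comparing adjacency spectra before and after a simulated intervention that severs links:

      import numpy as np

      rng = np.random.default_rng(4)
      n = 100
      upper = np.triu(rng.random((n, n)) < 0.05, 1)   # toy network, pre-policy
      pre = (upper | upper.T).astype(float)

      severed = np.triu(rng.random((n, n)) < 0.3, 1)  # policy cuts ~30% of links
      upper_post = upper & ~severed
      post = (upper_post | upper_post.T).astype(float)

      # eigvalsh: eigenvalues of a symmetric matrix, sorted ascending.
      print("top eigenvalues pre :", np.linalg.eigvalsh(pre)[-3:])
      print("top eigenvalues post:", np.linalg.eigvalsh(post)[-3:])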
  11. By: Yuxia Liu; Qi Zhang; Wei Xiao; Tianguang Chu
    Abstract: We propose a successive one-sided Hodrick-Prescott (SOHP) filter that derives a trend estimate for a time series from a multiple time scale decomposition perspective. The idea is to apply the one-sided HP (OHP) filter recursively on the updated cyclical component to extract the trend residual on multiple time scales, thereby improving the trend estimate. To address the moving-horizon optimization that the SOHP filter requires, we present an incremental HP filtering algorithm, which greatly simplifies the inverse matrix operation involved and reduces the computational demand of basic HP filtering. The new algorithm also applies effectively to other HP-type filters, especially in large-size or expanding data scenarios. Numerical examples on real economic data show that the SOHP filter performs better than other known HP-type filters.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12439&r=ecm
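    A naive version of the two building blocks, assuming statsmodels is available: a one-sided HP filter obtained by re-filtering an expanding window and keeping each endpoint, and a successive pass that re-filters the cyclical component at a shorter time scale. The paper's incremental algorithm avoids re-solving the full problem at every step, which this sketch does not:

      import numpy as np
      from statsmodels.tsa.filters.hp_filter import hpfilter

      def one_sided_hp(x, lamb):
          """For each t, HP-filter x[:t+1] and keep the endpoint of the trend."""
          trend = np.full(len(x), np.nan)
          for t in range(4, len(x)):              # a few points needed to start
              _, tr = hpfilter(x[: t + 1], lamb=lamb)
              trend[t] = np.asarray(tr)[-1]
          return trend

      rng = np.random.default_rng(5)
      x = np.cumsum(rng.normal(0.1, 1.0, size=200))   # toy nonstationary series

      trend1 = one_sided_hp(x, lamb=1600)
      cycle1 = np.nan_to_num(x - trend1)          # zero-fill the burn-in NaNs
      # Successive pass: extract the trend residual left in the cycle; the
      # smaller lambda (a shorter time scale) is an illustrative choice.
      trend2 = one_sided_hp(cycle1, lamb=100)
      sohp_trend = trend1 + trend2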
  12. By: Deepankar Basu
    Abstract: This paper traces the historical and analytical development of what is known in the econometrics literature as the Frisch-Waugh-Lovell theorem. This theorem demonstrates that the coefficients on any subset of covariates in a multiple regression are equal to the coefficients in a regression of the residualized outcome variable on the residualized subset of covariates, where residualization uses the complement of the subset of covariates of interest. In this paper, I suggest that the theorem should be renamed the Yule-Frisch-Waugh-Lovell (YFWL) theorem to recognize the pioneering contribution of the statistician G. Udny Yule to its development. Second, I highlight recent work by the statistician P. Ding, which has extended the YFWL theorem to a comparison of estimated covariance matrices of coefficients from multiple and partial (i.e., residualized) regressions. Third, I show that, in cases where Ding's results do not apply, one can still resort to a computational method to conduct statistical inference about coefficients in multiple regressions using information from partial regressions.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.00369&r=ecm
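    A quick numerical check of the theorem, assuming nothing beyond NumPy: the coefficient on x1 from the full multiple regression equals the coefficient from regressing the residualized outcome on the residualized x1, where both are residualized on the remaining covariates:

      import numpy as np

      rng = np.random.default_rng(6)
      n = 500
      x1 = rng.normal(size=n)
      x2 = rng.normal(size=n) + 0.5 * x1          # correlated covariates
      y = 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

      def ols(X, y):
          return np.linalg.lstsq(X, y, rcond=None)[0]

      const = np.ones(n)
      beta_full = ols(np.column_stack([const, x1, x2]), y)   # y on [1, x1, x2]

      Z = np.column_stack([const, x2])            # residualize on [1, x2]
      ry = y - Z @ ols(Z, y)
      rx1 = x1 - Z @ ols(Z, x1)
      beta_partial = ols(rx1[:, None], ry)

      # Identical up to floating-point error, as the YFWL theorem asserts.
      print(beta_full[1], beta_partial[0])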
  13. By: Deepankar Basu
    Abstract: Covariate benchmarking is an important part of sensitivity analysis about omitted variable bias and can be used to bound the strength of the unobserved confounder using information and judgments about observed covariates. It is common to carry out formal covariate benchmarking after residualizing the unobserved confounder on the set of observed covariates. In this paper, I explain the rationale for and details of this procedure. I clarify some important aspects of the process of formal covariate benchmarking and highlight some of the difficulties of interpretation that researchers face in reasoning about the residualized part of unobserved confounders. I illustrate all the points with several empirical examples.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.10562&r=ecm
  14. By: Jonathan J Adams (Department of Economics, University of Florida); Philip Barrett (International Monetary Fund)
    Abstract: We propose a method to identify the anticipated components of macroeconomic shocks in a structural VAR: we include empirical forecasts about each time series in the VAR, which introduces enough linear restrictions to identify each structural shock and to further decompose each one into “news” and “surprise” shocks. We estimate our VAR on US time series using forecast data from the SPF, CBO, Federal Reserve, and asset prices. The fiscal stimulus and interest rate shocks that we identify have typical effects that comport with existing evidence. In our news-surprise decomposition, we find that news contributes to a third of US business cycle volatility, where the effect of fiscal shocks is mostly anticipated, while the effect of monetary policy shocks is mostly unexpected. Finally, we use the news structure of the shocks to estimate counterfactual policy rules, and compare the ability of fiscal and monetary policy to moderate output and inflation.
    JEL: C32 E32 E52 E62
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:ufl:wpaper:001010&r=ecm
  15. By: Andersson, Jonas (Dept. of Business and Management Science, Norwegian School of Economics); Sheybanivaziri, Samaneh (Dept. of Business and Management Science, Norwegian School of Economics)
    Abstract: In this paper, we study the performance of prediction intervals in situations applicable to electricity markets. To do so, we first introduce an extension of the logistic mixture autoregressive model with exogenous variables (LMARX; see Wong and Li, 2001) in which we allow for multiplicative seasonality and lagged mixture probabilities. The reason for using this model is the prevalence of spikes in electricity prices, a feature that creates a quickly varying, and sometimes bimodal, forecast distribution. The model is fitted to the price data from the electricity market forecasting competition GEFCom2014. Additionally, we compare the outcomes of our presumably more accurate representation of reality, the LMARX model, with other approaches widely used in the literature.
    Keywords: Prediction intervals; probabilistic forecasts; electricity prices; spikes; mixture models
    JEL: C10 C50 C53
    Date: 2023–07–11
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2023_011&r=ecm
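    To see why a logistic mixture produces the quickly varying, sometimes bimodal forecast distributions mentioned in the abstract, consider a two-component Gaussian mixture whose weight is logistic in lagged information. All functional forms and parameter values below are illustrative, not estimates from the paper:

      import numpy as np

      def predictive_density(grid, p_lag, x_lag):
          # Mixture weight: logistic in the lagged price and an exogenous
          # variable (e.g. forecast load); high values favor the spike regime.
          w = 1.0 / (1.0 + np.exp(-(-4.0 + 0.05 * p_lag + 2.0 * x_lag)))
          norm = lambda m, s: (np.exp(-0.5 * ((grid - m) / s) ** 2)
                               / (s * np.sqrt(2 * np.pi)))
          # A "normal" AR component plus a spike component with a much higher
          # mean and variance.
          return ((1 - w) * norm(0.8 * p_lag + 5.0, 4.0)
                  + w * norm(1.2 * p_lag + 60.0, 25.0))

      grid = np.linspace(-20, 250, 1000)
      calm = predictive_density(grid, p_lag=30.0, x_lag=0.0)    # unimodal
      tense = predictive_density(grid, p_lag=60.0, x_lag=0.5)   # bimodal
      # Prediction intervals follow from quantiles of these mixture densities.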

This nep-ecm issue is ©2023 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.