nep-ecm New Economics Papers
on Econometrics
Issue of 2023‒07‒24
nineteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Simple estimation of semiparametric models with measurement errors By Andrei Zeleneev; Kirill Evdokimov
  2. Marginal effects for probit and tobit with endogeneity By Kirill Evdokimov; Ilze Kalnina; Andrei Zeleneev
  3. Difference-in-Differences with Interference: A Finite Population Perspective By Ruonan Xu
  4. How to weight in moments matching: A new approach and applications to earnings dynamics By Andrew Shephard; Xu Cheng; Alejandro Sanchez-Becerra
  5. Narrowest Significance Pursuit: inference for multiple change-points in linear models By Fryzlewicz, Piotr
  6. Exact Likelihood for Inverse Gamma Stochastic Volatility Models By Roberto Leon-Gonzalez; Blessings Majoni
  7. Inference in IV models with clustered dependence, many instruments and weak identification By Johannes W. Ligtenberg
  8. Bias-Correction in Time Series Quantile Regression Models By Marian Vavra
  9. Assessment of generalised Bayesian structural equation models for continuous and binary data By Vamvourellis, Konstantinos; Kalogeropoulos, Konstantinos; Moustaki, Irini
  10. Blended Identification in Structural VARs By Andrea Carriero; Massimiliano Marcellino; Tommaso Tornese
  11. Can we falsify the justification of the validity of Wald confidence intervals of doubly robust functionals, without assumptions? By Lin Liu; Rajarshi Mukherjee; James M. Robins
  12. Price elasticity of electricity demand: Using instrumental variable regressions to address endogeneity and autocorrelation of high-frequency time series By Silvana Tiedemann; Raffaele Sgarlato; Lion Hirth
  13. Dynamic Causal Forests, with an Application to Payroll Tax Incidence in Norway By Evelina Gavrilova; Audun Langørgen; Floris T. Zoutman; Floris Zoutman
  14. Robust transformations for multiple regression via additivity and variance stabilization By Riani, Marco; Atkinson, Anthony C.; Corbellini, Aldo
  15. Deep Neural Network Estimation in Panel Data Models By Ilias Chronopoulos; Katerina Chrysikou; George Kapetanios; James Mitchell; Aristeidis Raftapostolos
  16. Semiparametric Efficiency Gains from Parametric Restrictions on the Generalized Propensity Score By Haruki Kono
  17. Flexible Bayesian MIDAS: time‑variation, group‑shrinkage and sparsity By Kohns, David; Potjagailo, Galina
  18. Design-based identification with formula instruments: A review By Kirill Borusyak; Peter Hull; Xavier Jaravel
  19. A regime switching limited information maximum likelihood estimator By Shakil, Golam Saroare; Marsh, Thomas L.

  1. By: Andrei Zeleneev; Kirill Evdokimov
    Abstract: We develop a practical way of addressing the Errors-In-Variables (EIV) problem in the Generalized Method of Moments (GMM) framework. We focus on settings in which the variability of the EIV is a fraction of that of the mismeasured variables, as is typical in empirical applications. For any initial set of moment conditions, our approach provides a "corrected" set of moment conditions that are robust to the EIV. We show that the GMM estimator based on these moments is √n-consistent, with the standard tests and confidence intervals providing valid inference. This is true even when the EIV are so large that naïve estimators (which ignore the EIV problem) may be heavily biased, with confidence intervals having 0% coverage. Our approach involves no nonparametric estimation, which is particularly important for applications with multiple covariates, and for settings with multivariate, serially correlated, or non-classical EIV.
    Date: 2023–06–28
    URL: http://d.repec.org/n?u=RePEc:azt:cemmap:10/23&r=ecm
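    The severity of the naïve bias the abstract mentions is easy to see in a toy simulation. The following Python sketch shows classical attenuation and the textbook reliability-ratio correction with known error variance — emphatically not the paper's moment-correction method, which needs no such knowledge, but it conveys why EIV-robust moments matter:
      # Illustration of EIV attenuation bias and the classical correction that
      # rescales OLS by the reliability ratio. This is NOT the paper's GMM
      # moment correction -- just the textbook case with known measurement-error
      # variance, for intuition.
      import numpy as np

      rng = np.random.default_rng(0)
      n, beta, sigma_e2 = 10_000, 1.0, 0.25    # true slope; EIV variance (known here)

      x_star = rng.normal(size=n)              # true regressor
      x = x_star + rng.normal(scale=np.sqrt(sigma_e2), size=n)  # mismeasured regressor
      y = beta * x_star + rng.normal(size=n)

      b_naive = (x @ y) / (x @ x)              # attenuated towards zero
      reliability = 1 - sigma_e2 / x.var()     # Var(x*) / Var(x)
      b_corrected = b_naive / reliability

      print(f"naive: {b_naive:.3f}, corrected: {b_corrected:.3f} (true {beta})")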
  2. By: Kirill Evdokimov; Ilze Kalnina; Andrei Zeleneev
    Abstract: When evaluating partial effects, it is important to distinguish between structural endogeneity and measurement errors. In contrast to linear models, these two sources of endogeneity affect partial effects differently in nonlinear models. We study this issue focusing on the Instrumental Variable (IV) Probit and Tobit models. We show that even when a valid IV is available, failing to differentiate between the two types of endogeneity can lead to either under- or over-estimation of the partial effects. We develop simple estimators of the bounds on the partial effects and provide easy-to-implement confidence intervals that correctly account for both types of endogeneity. We illustrate the methods in a Monte Carlo simulation and an empirical application.
    Date: 2023–06–28
    URL: http://d.repec.org/n?u=RePEc:azt:cemmap:11/23&r=ecm
  3. By: Ruonan Xu
    Abstract: In many scenarios, such as the evaluation of place-based policies, potential outcomes depend not only on a unit's own treatment but also on its neighbors' treatment. Despite this, difference-in-differences (DID) type estimators typically ignore such interference among neighbors. I show in this paper that the canonical DID estimators generally do not identify interesting causal effects in the presence of neighborhood interference. To incorporate the interference structure into DID estimation, I propose doubly robust estimators for the direct average treatment effect on the treated as well as the average spillover effects under a modified parallel trends assumption. When spillover effects are of interest, we often sample the entire population, so I adopt a finite population perspective in the sense that the estimands are defined as population averages and inference is conditional on the attributes of all population units. The general and unified approach in this paper relaxes common restrictions in the literature, such as partial interference and correctly specified spillover functions. Moreover, robust inference is discussed based on the asymptotic distribution of the proposed estimators.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12003&r=ecm
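    A stylized illustration of the estimands (not the paper's doubly robust estimators): with a binary own-treatment indicator and a hypothetical binary neighbor-exposure indicator, direct and spillover effects can be read off differences in outcome changes across cells, in the spirit of the modified parallel trends assumption:
      # Stylized DID with neighborhood interference: difference-in-means of
      # outcome changes across (own treatment, neighbor exposure) cells.
      # The paper develops doubly robust versions with finite-population
      # inference; this sketch only illustrates the estimands.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 5_000
      d = rng.binomial(1, 0.3, n)                         # own treatment
      exposure = rng.binomial(1, 0.3, n)                  # hypothetical: any treated neighbor
      dy = 0.5 * d + 0.2 * exposure + rng.normal(size=n)  # y_post - y_pre

      base = dy[(d == 0) & (exposure == 0)].mean()        # untreated, unexposed baseline
      direct = dy[(d == 1) & (exposure == 0)].mean() - base  # direct ATT, no spillover
      spill = dy[(d == 0) & (exposure == 1)].mean() - base   # spillover on the untreated

      print(f"direct effect ~ {direct:.2f}, spillover ~ {spill:.2f}")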
  4. By: Andrew Shephard; Xu Cheng; Alejandro Sanchez-Becerra
    Abstract: Following the seminal paper by Altonji and Segal (1996), empirical studies have widely embraced equal or diagonal weighting in minimum distance estimation to mitigate the finite-sample bias caused by sampling errors in the weighting matrix. This paper introduces a new weighting scheme that combines cross-fitting and regularized weighting matrix estimation. We also provide a new cross-fitting standard error, applying cross-fitting to estimate the asymptotic variance. In a many-moment asymptotic framework, we demonstrate the effectiveness of cross-fitting in eliminating a first-order asymptotic bias due to weighting matrix sampling errors. Additionally, we demonstrate that some economic models in the earnings dynamics literature meet certain sparsity conditions, ensuring that the proposed regularized weighting matrix behaves similarly to the oracle weighting matrix for these applications. Extensive simulation studies based on the earnings dynamics literature validate the superiority of our approach over commonly employed alternative weighting schemes.
    Date: 2023–06–30
    URL: http://d.repec.org/n?u=RePEc:azt:cemmap:13/23&r=ecm
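    The cross-fitting idea can be conveyed in a two-fold toy version: estimate the weighting matrix on one half of the sample, apply it to moments computed on the other half, and average across folds, so that sampling error in the weights is independent of the weighted moments. This omits the paper's regularization and many-moment framework:
      # Two-fold sketch of cross-fit minimum distance. Toy model with two
      # moments, E[w1] = theta and E[w2] = theta^2; the weighting matrix is
      # estimated on the opposite fold from the moments it weights.
      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(2)
      theta0, n = 1.5, 2_000
      w = np.column_stack([theta0 + rng.normal(size=n),
                           theta0**2 + rng.normal(size=n)])

      def md_estimate(moment_half, weight_half):
          m_hat = moment_half.mean(axis=0)
          W = np.linalg.inv(np.cov(weight_half, rowvar=False))  # efficient-style weights
          obj = lambda t: (m_hat - [t, t**2]) @ W @ (m_hat - [t, t**2])
          return minimize_scalar(obj, bounds=(0, 5), method="bounded").x

      half = n // 2
      est = 0.5 * (md_estimate(w[:half], w[half:]) + md_estimate(w[half:], w[:half]))
      print(f"cross-fit MD estimate: {est:.3f} (true {theta0})")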
  5. By: Fryzlewicz, Piotr
    Abstract: We propose Narrowest Significance Pursuit (NSP), a general and flexible methodology for automatically detecting localized regions in data sequences, each of which must contain a change-point (understood as an abrupt change in the parameters of an underlying linear model), at a prescribed global significance level. NSP works with a wide range of distributional assumptions on the errors, and guarantees important stochastic bounds which directly yield exact desired coverage probabilities, regardless of the form or number of the regressors. In contrast to the widely studied “post-selection inference” approach, NSP paves the way for the concept of “post-inference selection.” An implementation is available in the R package nsp. Supplementary materials for this article are available online.
    Keywords: confidence intervals; structural breaks; post-selection inference; wild binary segmentation; narrowest-over-threshold
    JEL: C1
    Date: 2023–06–09
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:118795&r=ecm
  6. By: Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies, Japan; Rimini Centre for Economic Analysis); Blessings Majoni (National Graduate Institute for Policy Studies, Japan)
    Abstract: We obtain a novel analytic expression of the likelihood for a stationary inverse gamma Stochastic Volatility (SV) model, which allows us to obtain the Maximum Likelihood Estimator for this nonlinear, non-Gaussian state space model. Further, we obtain both the filtering and smoothing distributions of the inverse volatilities as mixtures of gammas, and can therefore provide smoothed estimates of the volatility. We show that integrating out the volatilities yields a model that resembles a GARCH in the sense that the formulas are similar, which simplifies computations significantly. The model allows for fat tails in the observed data. We provide empirical applications using exchange rate data for seven currencies and quarterly inflation data for four countries. We find that the empirical fit of our proposed model is overall better than that of alternative models for the currency data of four countries and the inflation data of two countries.
    Keywords: Hypergeometric Function, Particle Filter, Parallel Computing, Euler Acceleration
    JEL: C32 C58
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:23-11&r=ecm
  7. By: Johannes W. Ligtenberg
    Abstract: Clustering reduces the effective sample size from the number of observations towards the number of clusters. For instrumental variable models this implies more restrictive requirements on the strength of the instruments and makes the number of instruments more quickly non-negligible relative to the effective sample size. Clustered data therefore increase the need for tests that are robust to many and weak instruments. However, none of the previously developed many and weak instrument robust tests can be applied to this type of data, as they all require independent observations. I therefore adapt two such tests to clustered data. First, I derive a cluster jackknife Anderson-Rubin test by removing clusters rather than individual observations from the Anderson-Rubin statistic. Second, I propose a cluster many instrument Anderson-Rubin test which improves on the first test by using a more efficient, but more complex, weighting matrix. I show that if the clusters satisfy an invariance assumption, the higher complexity poses no problems. By revisiting a study on the effect of queenly reign on war, I show the empirical relevance of the new tests.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.08559&r=ecm
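    For reference, the few-instrument benchmark that the paper's tests refine can be sketched directly: a cluster-robust Anderson-Rubin test that evaluates the instrument-residual covariance at the hypothesized coefficient, with the variance built from cluster-level scores (illustrative code, not the paper's cluster jackknife statistic):
      # Benchmark cluster-robust Anderson-Rubin test (few instruments): under
      # H0: beta = beta0, Z'e has mean zero; its variance is estimated by
      # summing outer products of within-cluster scores.
      import numpy as np
      from scipy.stats import chi2

      def cluster_ar_test(y, X, Z, cluster_ids, beta0):
          e = y - X @ beta0                              # restricted residuals
          score = Z.T @ e                                # k-vector, mean zero under H0
          V = np.zeros((Z.shape[1], Z.shape[1]))
          for g in np.unique(cluster_ids):
              s_g = Z[cluster_ids == g].T @ e[cluster_ids == g]
              V += np.outer(s_g, s_g)                    # cluster-level score outer products
          stat = score @ np.linalg.solve(V, score)
          return stat, chi2.sf(stat, df=Z.shape[1])      # chi2(k) under H0

      # toy data: 50 clusters, one endogenous regressor, two instruments
      rng = np.random.default_rng(3)
      G, m = 50, 40
      ids = np.repeat(np.arange(G), m)
      Z = rng.normal(size=(G * m, 2))
      u = rng.normal(size=G * m) + np.repeat(rng.normal(size=G), m)  # clustered error
      x = Z @ np.array([1.0, 0.5]) + u
      y = 0.8 * x + u
      print(cluster_ar_test(y, x[:, None], Z, ids, np.array([0.8])))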
  8. By: Marian Vavra (National Bank of Slovakia)
    Abstract: This paper examines the small sample properties of a linear programming estimator in time series quantile regression models. Under certain regularity conditions, the estimator produces consistent and asymptotically normally distributed estimates of model parameters. However, despite these desirable asymptotic properties, we find that the estimator performs rather poorly in small samples. We suggest the use of a subsampling method to correct for the bias and discuss a simple rule of thumb for setting the block size. Our simulation results show that the subsampling method can effectively reduce the bias at very low computational cost and without significantly increasing the root mean squared error of the estimated parameters. The importance of bias correction for economic policy is highlighted in a growth-at-risk application.
    JEL: C15 C22
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:svk:wpaper:1094&r=ecm
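    One generic subsampling bias-correction recipe (which may differ in its details from the paper's scheme): if the bias of the quantile-regression estimator is of order 1/n, estimates on blocks of size b carry a bias of order 1/b, so the bias constant can be solved for and removed:
      # Generic subsampling bias correction for a quantile regression slope:
      # from E[theta_b] - E[theta_n] ~ c(1/b - 1/n), the bias c/n at the full
      # sample size is (mean of block estimates - full estimate) * b/(n - b).
      # The paper's exact scheme and rule of thumb for b may differ.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg

      rng = np.random.default_rng(4)
      n, b, tau = 400, 80, 0.5
      x = rng.normal(size=n)
      y = 1.0 + 0.5 * x + rng.standard_t(df=3, size=n)    # fat-tailed errors

      def qr_slope(yy, xx):
          return QuantReg(yy, sm.add_constant(xx)).fit(q=tau).params[1]

      full = qr_slope(y, x)
      blocks = [qr_slope(y[s:s + b], x[s:s + b]) for s in range(0, n - b + 1, b // 2)]
      bias_hat = (np.mean(blocks) - full) * b / (n - b)
      print(f"raw: {full:.3f}, bias-corrected: {full - bias_hat:.3f}")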
  9. By: Vamvourellis, Konstantinos; Kalogeropoulos, Konstantinos; Moustaki, Irini
    Abstract: The paper proposes a novel model assessment paradigm aiming to address shortcomings of posterior predictive p-values, which provide the default metric of fit for Bayesian structural equation modelling (BSEM). The model framework presented in the paper focuses on the approximate zero approach (Psychological Methods, 17, 2012, 313), which involves formulating certain parameters (such as factor loadings) to be approximately zero through the use of informative priors, instead of explicitly setting them to zero. The introduced model assessment procedure monitors the out-of-sample predictive performance of the fitted model and, together with a list of guidelines we provide, allows one to investigate whether the hypothesised model is supported by the data. We incorporate scoring rules and cross-validation to supplement existing model assessment metrics for BSEM. The proposed tools can be applied to models for both continuous and binary data. The modelling of categorical and non-normally distributed continuous data is facilitated by the introduction of an item-individual random effect. We study the performance of the proposed methodology via simulation experiments as well as real data on the ‘Big-5’ personality scale and the Fagerstrom test for nicotine dependence.
    Keywords: Bayesian model assessment; cross-validation; factor analysis; scoring rules
    JEL: C1
    Date: 2023–07–04
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:119473&r=ecm
  10. By: Andrea Carriero; Massimiliano Marcellino; Tommaso Tornese
    Abstract: We propose a blended approach which combines identification via heteroskedasticity with the widely used methods of sign restrictions, narrative restrictions, and external instruments. Since heteroskedasticity in the reduced form can be exploited to point identify a set of orthogonal shocks, its use results in a sharp reduction of the potentially large identified sets stemming from the typical approaches. Conversely, the identifying information in the form of sign and narrative restrictions or external instruments can prove necessary when the conditions for point identification through heteroskedasticity are not met, and offers a natural solution to the labeling problem inherent in purely statistical identification strategies. As a result, we argue that blending these methods resolves their respective key issues and leverages their advantages, thereby sharpening identification. We illustrate the blended approach in an artificial data experiment first, and then apply it to several examples taken from the recent and influential literature. Specifically, we consider labour market shocks, oil market shocks, and monetary and fiscal policy shocks, and find that their effects can be rather different from those previously obtained with simpler identification strategies.
    Keywords: SVAR, Identification, Heteroskedasticity, Sign restrictions, Proxy variables
    JEL: C11 C32 D81 E32
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:baf:cbafwp:cbafwp23200&r=ecm
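    The point-identification step that the blending exploits can be sketched in a few lines: with two volatility regimes satisfying Sigma1 = B B' and Sigma2 = B Lambda B', a generalized eigendecomposition recovers the impact matrix B up to column sign and ordering — precisely the labeling ambiguity that sign and narrative restrictions resolve (illustrative code, not the authors' procedure):
      # Identification through heteroskedasticity, Rigobon-style: solve the
      # generalized eigenproblem Sigma2 v = lambda Sigma1 v. Eigenvalues give
      # the regime-2 shock variances; rescaled eigenvectors give B up to
      # column sign and ordering (the "labeling problem").
      import numpy as np
      from scipy.linalg import eig

      B = np.array([[1.0, 0.3], [-0.5, 1.0]])         # true impact matrix
      Lam = np.diag([4.0, 0.25])                      # regime-2 shock variances

      S1 = B @ B.T                                    # regime-1 covariance
      S2 = B @ Lam @ B.T                              # regime-2 covariance

      eigvals, V = eig(S2, S1)                        # columns of V span B'^{-1}
      D = np.sqrt(np.diag(V.T @ S1 @ V).real)         # rescale so that S1 = B B'
      B_hat = np.linalg.inv(V.T).real * D             # B up to sign and ordering

      print(np.round(eigvals.real, 2))                # diag(Lam), in some order
      print(np.round(B_hat, 2))                       # +/- columns of B, permuted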
  11. By: Lin Liu; Rajarshi Mukherjee; James M. Robins
    Abstract: In this article we develop a feasible version of the assumption-lean tests in Liu et al. (2020) that can falsify an analyst's justification for the validity of a reported nominal $(1-\alpha)$ Wald confidence interval (CI) centered at a double machine learning (DML) estimator for any member of the class of doubly robust (DR) functionals studied by Rotnitzky et al. (2021). The class of DR functionals is broad and of central importance in economics and biostatistics. It strictly includes both (i) the class of mean-square continuous functionals that can be written as an expectation of an affine functional of a conditional expectation, studied by Chernozhukov et al. (2022), and (ii) the class of functionals studied by Robins et al. (2008). The present state-of-the-art estimators for DR functionals $\psi$ are DML estimators $\hat{\psi}_{1}$. The bias of $\hat{\psi}_{1}$ depends on the product of the rates at which two nuisance functions $b$ and $p$ are estimated. Most commonly, an analyst justifies the validity of her Wald CIs by proving that, under her complexity-reducing assumptions, the Cauchy-Schwarz (CS) upper bound for the bias of $\hat{\psi}_{1}$ is $o(n^{-1/2})$. Thus if the hypothesis $H_{0}$: the CS upper bound is $o(n^{-1/2})$ is rejected by our test, we will have falsified the analyst's justification for the validity of her Wald CIs. In this work, we exhibit a valid assumption-lean falsification test of $H_{0}$, without relying on complexity-reducing assumptions on $b$, $p$, or their estimates $\hat{b}$, $\hat{p}$. Simulation experiments are conducted to demonstrate how the proposed assumption-lean test can be used in practice. An unavoidable limitation of our methodology is that no assumption-lean test of $H_{0}$, including ours, can be a consistent test. Thus failure of our test to reject is not meaningful evidence in favor of $H_{0}$.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.10590&r=ecm
  12. By: Silvana Tiedemann (Hertie School, Centre for Sustainability, Germany); Raffaele Sgarlato (Hertie School, Centre for Sustainability, Germany); Lion Hirth (Hertie School, Centre for Sustainability, Germany; Neon Neue Energieökonomik GmbH, Germany)
    Abstract: This paper examines empirical methods for estimating the response of aggregated electricity demand to high-frequency price signals, the short-term elasticity of electricity demand. We investigate how the endogeneity of prices and the autocorrelation of the time series, which are particularly pronounced at hourly granularity, affect and distort common estimators. After developing a controlled test environment with synthetic data that replicate key statistical properties of electricity demand, we show that not only is the ordinary least squares (OLS) estimator inconsistent (due to simultaneity), but so is a regular instrumental variable (IV) regression (due to autocorrelation). Using wind as an instrument, as is commonly done, may result in an estimate of the demand elasticity that is inflated by an order of magnitude. We visualize the reason for this bias using causal graphs and show that its magnitude depends on the autocorrelation of both the instrument and the dependent variable. We further incorporate and adapt two extensions of the IV estimation, conditional IV and nuisance IV, which have recently been proposed by Thams et al. (2022). We show that these extensions can identify the true short-term elasticity in a synthetic setting and are thus particularly promising for future empirical research in this field.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12863&r=ecm
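    A stylized contrast between plain IV and a conditional IV that augments both stages with lagged conditioning variables, in the spirit of the Thams et al. (2022) extensions the paper adapts (the data-generating process and variable names are illustrative only):
      # Plain IV vs. conditional IV in a toy dynamic demand model: the
      # autocorrelated instrument (wind) picks up omitted demand dynamics,
      # biasing plain IV; conditioning on lagged demand restores consistency.
      import numpy as np

      def tsls(y, X, Z):
          Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first-stage fitted values
          return np.linalg.lstsq(Xhat, y, rcond=None)[0]    # second-stage coefficients

      rng = np.random.default_rng(6)
      T, rho = 20_000, 0.8
      wind = np.zeros(T); price = np.zeros(T); q = np.zeros(T)
      for t in range(1, T):
          wind[t] = rho * wind[t - 1] + rng.normal()        # autocorrelated instrument
          price[t] = -wind[t] + rng.normal()                # wind shifts the price
          q[t] = -0.3 * price[t] + 0.4 * q[t - 1] + rng.normal()  # demand with dynamics

      y, p, w = q[1:], price[1:], wind[1:]
      plain = tsls(y, p[:, None], w[:, None])               # biased: omitted q_{t-1}
      lag_q = q[:-1][:, None]                               # conditioning set: lagged demand
      cond = tsls(y, np.hstack([p[:, None], lag_q]), np.hstack([w[:, None], lag_q]))
      print(f"plain IV: {plain[0]:.3f}, conditional IV: {cond[0]:.3f} (true -0.3)")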
  13. By: Evelina Gavrilova; Audun Langørgen; Floris T. Zoutman; Floris Zoutman
    Abstract: This paper develops a machine-learning method that allows researchers to estimate heterogeneous treatment effects with panel data in a setting with many covariates. Our method, which we name the dynamic causal forest (DCF) method, extends the causal-forest method of Wager and Athey (2018) by allowing for the estimation of dynamic treatment effects in a difference-in-differences setting. Regular causal forests require conditional independence to consistently estimate heterogeneous treatment effects. In contrast, DCFs provide a consistent estimate of heterogeneous treatment effects under the weaker assumption of parallel trends. DCFs can be used to create event-study plots which aid in the inspection of pre-trends and treatment effect dynamics. We provide an empirical application in which DCFs are used to estimate the incidence of the payroll tax on wages paid to employees. We consider treatment effect heterogeneity associated with personal- and firm-level variables. We find that on average the incidence of the tax is shifted onto workers through incidental payments, rather than contracted wages. Heterogeneity is mainly explained by firm- and workforce-level variables. Firms with a large and heterogeneous workforce are most effective in passing the incidence of the tax on to workers.
    Keywords: causal forest, treatment effect heterogeneity, payroll tax incidence, administrative data
    JEL: C18 H22 J31 M54
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10532&r=ecm
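    The authors' DCF implementation is their own; a rough off-the-shelf stand-in is to run a causal forest on first-differenced outcomes so that unit fixed effects drop out, for example with econml's CausalForestDML (hypothetical variable names; this omits the paper's event-study machinery):
      # Rough stand-in for the paper's dynamic causal forest: a causal forest
      # fit to differenced outcomes (y_post - y_pre), learning heterogeneous
      # effects over covariates X. NOT the authors' DCF implementation.
      import numpy as np
      from econml.dml import CausalForestDML
      from sklearn.linear_model import LinearRegression, LogisticRegression

      rng = np.random.default_rng(7)
      n = 4_000
      X = rng.normal(size=(n, 5))                        # firm/worker covariates
      T = rng.binomial(1, 0.5, n)                        # treatment (e.g., payroll tax cut)
      tau = 0.5 + 0.5 * (X[:, 0] > 0)                    # heterogeneous true effect
      dy = tau * T + 0.3 * X[:, 1] + rng.normal(size=n)  # differenced outcome

      cf = CausalForestDML(model_y=LinearRegression(),
                           model_t=LogisticRegression(),
                           discrete_treatment=True, random_state=0)
      cf.fit(dy, T, X=X)
      print(cf.effect(X[:5]))                            # unit-level effect estimates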
  14. By: Riani, Marco; Atkinson, Anthony C.; Corbellini, Aldo
    Abstract: Outliers can have a major effect on the estimated transformation of the response in linear regression models, just as they can on the estimates of the coefficients of the fitted model. The effect is more extreme in the Generalized Additive Models (GAMs) that are the subject of this article, as the forms of terms in the model can also be affected. We develop, describe and illustrate robust methods for the nonparametric transformation of the response and estimation of the terms in the model. Numerical integration is used to calculate the estimated variance stabilizing transformation. Robust regression provides outlier-free input to the polynomial smoothers used in the calculation of the response transformation and in the backfitting algorithm for estimation of the functions of the GAM. Our starting point was the AVAS (Additivity and VAriance Stabilization) algorithm of Tibshirani. Even if robustness is not required, we have made four further general, optional improvements to AVAS which greatly improve the performance of Tibshirani’s original Fortran program. We provide a publicly available and fully documented interactive program for our procedure, which is a robust form of Tibshirani’s AVAS allowing many forms of robust regression. We illustrate the efficacy of our procedure through data analyses. A refinement of the backfitting algorithm has interesting implications for robust model selection. Supplementary materials for this article are available online.
    Keywords: augmented star plot; AVAS; backfitting; forward search; generalized additive models; robust model selection
    JEL: C1
    Date: 2023–05–26
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:118699&r=ecm
  15. By: Ilias Chronopoulos; Katerina Chrysikou; George Kapetanios; James Mitchell; Aristeidis Raftapostolos
    Abstract: In this paper we study neural networks and their approximating power in panel data models. We provide asymptotic guarantees on deep feed-forward neural network estimation of the conditional mean, building on the work of Farrell et al. (2021), and explore latent patterns in the cross-section. We use the proposed estimators to forecast the progression of new COVID-19 cases across the G7 countries during the pandemic. We find significant forecasting gains over both linear panel and nonlinear time-series models. Containment or lockdown policies, as instigated at the national level by governments, are found to have out-of-sample predictive power for new COVID-19 cases. We illustrate how the use of partial derivatives can help open the “black box” of neural networks and facilitate semi-structural analysis: school and workplace closures are found to have been effective policies at restricting the progression of the pandemic across the G7 countries. But our methods illustrate significant heterogeneity and time variation in the effectiveness of specific containment policies.
    Keywords: Machine Learning; Neural Networks; Panel Data; Nonlinearity; Forecasting; COVID-19; Policy Interventions
    JEL: C33 C45
    Date: 2023–07–05
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwq:96408&r=ecm
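    A minimal pooled feed-forward sketch of this kind of conditional-mean estimator, including the partial-derivative readout the abstract describes (PyTorch; not the authors' architecture or COVID-19 application):
      # Minimal pooled feed-forward conditional-mean estimator for panel data,
      # with an average-partial-effect readout via automatic differentiation.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      N, T, k = 50, 40, 3                                # panel: N units, T periods
      X = torch.randn(N * T, k)                          # pooled covariates
      y = torch.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * torch.randn(N * T)

      net = nn.Sequential(nn.Linear(k, 32), nn.ReLU(),
                          nn.Linear(32, 32), nn.ReLU(),
                          nn.Linear(32, 1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)
      for epoch in range(500):                           # simple full-batch training
          opt.zero_grad()
          loss = nn.functional.mse_loss(net(X).squeeze(-1), y)
          loss.backward()
          opt.step()

      # the partial-derivative "semi-structural" readout the abstract mentions
      X.requires_grad_(True)
      dydx = torch.autograd.grad(net(X).sum(), X)[0]     # d E[y|x] / dx at each obs
      print(dydx.mean(dim=0))                            # average partial effects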
  16. By: Haruki Kono
    Abstract: Knowledge of the propensity score weakly improves efficiency when estimating causal parameters, but what kind of knowledge is more useful? To examine this, we first derive the semiparametric efficiency bound of multivalued treatment effects when the propensity score is correctly specified by a parametric model. We then reveal which parametric structure on the propensity score enhances efficiency even when the model is large. Finally, we apply the general theory we develop to a stratified experiment setup and find that knowing the strata improves efficiency, especially when the size of each stratum component is small.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.04177&r=ecm
  17. By: Kohns, David (Aalto University); Potjagailo, Galina (Bank of England)
    Abstract: We propose a mixed‑frequency regression prediction approach that models a time‑varying trend, stochastic volatility and fat tails in the variable of interest. The coefficients of high‑frequency indicators are regularised via a shrinkage prior that accounts for the grouping structure and within‑group correlation among lags. A new sparsification algorithm on the posterior, motivated by Bayesian decision theory, derives inclusion probabilities over lag groups, making the results easy to communicate without imposing sparsity a priori. An empirical application on nowcasting UK GDP growth suggests that group‑shrinkage in combination with the time‑varying components substantially increases nowcasting performance by reading signals from an economically meaningful subset of indicators, and that the time‑varying components help by allowing the model to switch between indicators. Over the data release cycle, signals initially stem from survey data and then shift towards a few ‘hard’ real activity indicators. During the Covid pandemic, the model performs relatively well since it shifts towards indicators for the service and housing sectors that capture the disruptions from economic lockdowns.
    Keywords: Bayesian MIDAS regressions; forecasting; time‑variation and fat tails; grouped horseshoe prior; decision analysis
    JEL: C11 C32 C44 C53 E37
    Date: 2023–06–02
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:1025&r=ecm
  18. By: Kirill Borusyak; Peter Hull; Xavier Jaravel
    Abstract: Many studies in economics use instruments or treatments which combine a set of exogenous shocks with other predetermined variables by a known formula. Examples include shift-share instruments and measures of social or spatial spillovers. We review recent econometric tools for this setting, which leverage the assignment process of the exogenous shocks and the structure of the formula for identification. We compare this design-based approach with conventional estimation strategies based on conditional unconfoundedness, and contrast it with alternative strategies that leverage a model for unobservables.
    Date: 2023–06–28
    URL: http://d.repec.org/n?u=RePEc:azt:cemmap:12/23&r=ecm
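    The formula-instrument construction is concrete enough to sketch: a shift-share instrument combines predetermined exposure shares with exogenous shocks by the known formula z_i = sum_k s_ik g_k, after which estimation is ordinary 2SLS (illustrative names and data):
      # Minimal shift-share ("formula instrument") construction: the instrument
      # for region i is z_i = sum_k s_ik * g_k, combining predetermined exposure
      # shares with exogenous sector-level shocks, then used in 2SLS.
      import numpy as np

      rng = np.random.default_rng(8)
      n_regions, n_sectors = 1_000, 30
      shares = rng.dirichlet(np.ones(n_sectors), size=n_regions)  # s_ik, rows sum to 1
      shocks = rng.normal(size=n_sectors)                         # exogenous shifts g_k

      z = shares @ shocks                                         # formula instrument
      x = z + rng.normal(size=n_regions)                          # endogenous exposure
      y = 0.7 * x + rng.normal(size=n_regions)                    # true effect 0.7

      beta_iv = (z @ y) / (z @ x)                                 # 2SLS, single instrument
      print(f"shift-share IV estimate: {beta_iv:.3f}")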
  19. By: Shakil, Golam Saroare; Marsh, Thomas L.
    Keywords: Research Methods/Statistical Methods, Agricultural and Food Policy, Agricultural Finance
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ags:aaea22:335839&r=ecm

This nep-ecm issue is ©2023 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.