nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒09‒27
twenty papers chosen by
Sune Karlsson
Örebro universitet

  1. Testing Conditional Independence in Macroeconomic Policy Evaluation for Time Series Data By Ying Fang; Ming Lin; Shengfang Tang; Zongwu Cai
  2. Short and Simple Confidence Intervals when the Directions of Some Effects are Known By Philipp Ketz; Adam McCloskey
  3. Regression Discontinuity Design with Potentially Many Covariates By Yoichi Arai; Taisuke Otsu; Myung Hwan Seo
  4. Do you know your biases? A Monte Carlo analysis of dynamic panel data estimators By Vadim Kufenko; Klaus Prettner
  5. Refining Set-Identification in VARs through Independence By Thorsten Drautzburg; Jonathan H. Wright
  6. Estimations of the Conditional Tail Average Treatment Effect By Le-Yu Chen; Yu-Min Yen
  7. Unifying Design-based Inference: On Bounding and Estimating the Variance of any Linear Estimator in any Experimental Design By Joel A. Middleton
  8. Tests for Group-Specific Heterogeneity in High-Dimensional Factor Models By Antoine Djogbenou; Razvan Sufana
  9. A Wavelet Method for Panel Models with Jump Discontinuities in the Parameters By Oualid Bada; Alois Kneip; Dominik Liebl; Tim Mensinger; James Gualtieri; Robin C. Sickles
  10. Implicit Copulas: An Overview By Michael Stanley Smith
  11. Reverse mode differentiation for DSGE models By Alfred Duncan
  12. "Particle Rolling MCMC with Double-Block Sampling " By Naoki Awaya; Yasuhiro Omori
  13. How Serious is the Measurement-Error Problem in Risk-Aversion Tasks? By Fabien Perez; Guillaume Hollard; Radu Vranceanu
  14. Gibbs posterior inference on a Lévy density under discrete sampling By Zhe Wang; Ryan Martin
  15. Algorithms for Inference in SVARs Identified with Sign and Zero Restrictions By Matthew Read
  16. Composite Likelihood for Stochastic Migration Model with Unobserved Factor By Antoine Djogbenou; Christian Gouriéroux; Joann Jasiak; Maygol Bandehali
  17. Estimation of Income Inequality from Grouped Data By Vanesa Jorda; José María Sarabia; Markus Jäntti
  18. did2s: Two-Stage Difference-in-Differences By Kyle Butts; John Gardner
  19. Trade, Gravity and Aggregation By Holger Breinlich; Dennis Novy; Joao M.C. Santos Silva
  20. Modeling and Analysis of Discrete Response Data: Applications to Public Opinion on Marijuana Legalization in the United States By Mohit Batham; Soudeh Mirghasemi; Mohammad Arshad Rahman; Manini Ojha

  1. By: Ying Fang (The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, Fujian 361005, China and Department of Statistics, School of Economics, Xiamen University, Xiamen, Fujian 361005, China); Ming Lin (The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, Fujian 361005, China and Department of Statistics, School of Economics, Xiamen University, Xiamen, Fujian 361005, China); Shengfang Tang (Department of Statistics, School of Economics, Xiamen University, Xiamen, Fujian 361005, China); Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: In this paper, we propose a new procedure to test the conditional independence assumption for macroeconomic policy evaluation in a time series context. The unconfoundedness assumption is transformed into a nonparametric conditional moment test using auxiliary variables that are allowed to affect potential outcomes, provided the dependence is fully captured by potential outcomes and observable controls. When the policy choice is binary, a nonparametric test is further developed for testing the unconfoundedness assumption conditional on the policy propensity score. The proposed test statistics are shown to have limiting normal distributions under the null hypotheses for time series data. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed test statistics. Finally, the proposed test method is applied to test conditional unconfoundedness in the real example considered in Angrist and Kuersteiner (2011).
    Keywords: Bootstrap; Macroeconomic policy evaluation; Moment test; Nonparametric estimation; Selection on observables; Treatment effect; Unconfoundedness assumption.
    JEL: C12 C13 C14 C23
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202118&r=
  2. By: Philipp Ketz; Adam McCloskey
    Abstract: We provide adaptive confidence intervals on a parameter of interest in the presence of nuisance parameters when some of the nuisance parameters have known signs. The confidence intervals are adaptive in the sense that they tend to be short at and near the points where the nuisance parameters are equal to zero. We focus our results primarily on the practical problem of inference on a coefficient of interest in the linear regression model when it is unclear whether or not it is necessary to include a subset of control variables whose partial effects on the dependent variable have known directions (signs). Our confidence intervals are trivial to compute and can provide significant length reductions relative to standard confidence intervals in cases for which the control variables do not have large effects. At the same time, they entail minimal length increases at any parameter values. We prove that our confidence intervals are asymptotically valid uniformly over the parameter space and illustrate their length properties in an empirical application to a factorial design field experiment and a Monte Carlo study calibrated to the empirical application.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.08222&r=
  3. By: Yoichi Arai; Taisuke Otsu; Myung Hwan Seo
    Abstract: This paper studies the case of possibly high-dimensional covariates in regression discontinuity design (RDD) analysis. In particular, we propose estimation and inference methods for RDD models with covariate selection which perform stably regardless of the number of covariates. The proposed methods combine the local approach using kernel weights with ℓ1-penalization to handle high-dimensional covariates, and this combination is new in the literature. We provide theoretical and numerical results which illustrate the usefulness of the proposed methods. Theoretically, we present risk and coverage properties for our point estimation and inference methods, respectively. Numerically, our simulation experiments and empirical example show that the proposed methods are robust to the number of covariates, in terms of bias and variance for point estimation, and in terms of coverage probability and interval length for inference.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.08351&r=
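    Illustration: a minimal Python sketch of the general idea in paper 3 — kernel-weighted lasso selection over many covariates, followed by a local linear fit at the cutoff. This is not the authors' estimator or tuning; all data, the bandwidth h, and the penalty alpha are invented for illustration.
```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, h, tau = 1000, 50, 0.3, 1.0           # obs, covariates, bandwidth, true jump
x = rng.uniform(-1, 1, n)                    # running variable, cutoff at 0
Z = rng.normal(size=(n, p))                  # many covariates; only the first 3 matter
y = (tau * (x >= 0) + 0.5 * x
     + Z[:, :3] @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n))

w = np.maximum(0.0, 1.0 - np.abs(x) / h)     # triangular kernel weights at the cutoff
keep, sw = w > 0, np.sqrt(w[w > 0])
D = (x >= 0).astype(float)
base = np.column_stack([np.ones(n), D, x, D * x])   # local linear terms

# step 1: kernel-weighted lasso (weights absorbed by sqrt-rescaling the rows)
sel = Lasso(alpha=0.1, fit_intercept=False).fit(
    np.column_stack([base, Z])[keep] * sw[:, None], y[keep] * sw)
chosen = np.flatnonzero(sel.coef_[4:])       # covariates surviving selection

# step 2: post-lasso weighted least squares with the selected covariates
X = np.column_stack([base, Z[:, chosen]])[keep]
fit = LinearRegression(fit_intercept=False).fit(X * sw[:, None], y[keep] * sw)
print("jump estimate:", round(fit.coef_[1], 2), " selected:", chosen)
```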
  4. By: Vadim Kufenko (Department of Economics, Vienna University of Economics and Business); Klaus Prettner (Department of Economics, Vienna University of Economics and Business)
    Abstract: We assess the performance of widely-used dynamic panel data estimators based on Monte Carlo simulations of a dynamic economic process. Knowing the true underlying coefficient of the autoregressive term, we show that most estimators exhibit a severe bias even in the absence of measurement errors, omitted variables, and endogeneity issues. We analyze how the bias changes with the sample size, the autoregressive coefficient, and the estimation options. Based on our insights, we recommend i) carefully choosing appropriate estimators given the underlying structure of the data and ii) scrutinizing the estimation results based on the insights of simulation studies.
    Keywords: Theory-Based Monte Carlo Simulation, Dynamic Panel Data Estimators, Estimator Bias, Robustness of Empirical Results
    JEL: C10 C13 C33 C52 C53 O47
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp316&r=
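    Illustration: the kind of theory-based Monte Carlo described in paper 4, here for the classic within-estimator (Nickell) bias in a dynamic panel. A hedged sketch with invented parameters, not the authors' simulation design.
```python
import numpy as np

rng = np.random.default_rng(1)
N, T, rho, reps = 100, 10, 0.8, 500    # panel dimensions and true AR coefficient
est = []
for _ in range(reps):
    alpha = rng.normal(size=N)                      # unit fixed effects
    y = np.zeros((N, T + 1))
    for t in range(T):
        y[:, t + 1] = alpha + rho * y[:, t] + rng.normal(size=N)
    y = y[:, 1:]                                    # drop the initial condition
    # within (fixed-effects / LSDV) estimator of rho
    ylag, ycur = y[:, :-1], y[:, 1:]
    ylag_d = ylag - ylag.mean(axis=1, keepdims=True)
    ycur_d = ycur - ycur.mean(axis=1, keepdims=True)
    est.append((ylag_d * ycur_d).sum() / (ylag_d**2).sum())

print(f"true rho = {rho}, mean FE estimate = {np.mean(est):.3f}")
# Nickell (1981): the bias is roughly -(1 + rho)/(T - 1) for large N
print("approx Nickell bias:", -(1 + rho) / (T - 1))
```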
  5. By: Thorsten Drautzburg; Jonathan H. Wright
    Abstract: Identification in VARs has traditionally relied mainly on second moments. Some researchers have also considered using higher moments, but there are concerns about the strength of the identification obtained in this way. In this paper, we propose refining existing identification schemes by augmenting sign restrictions with a requirement that rules out shocks whose higher moments significantly depart from independence. This approach does not assume that higher moments help with identification; it is robust to weak identification. In simulations, we show that it controls coverage well, in contrast to approaches that assume that the higher moments deliver point-identification. However, it requires large sample sizes and/or considerable non-normality to reduce the width of confidence intervals by much. We consider some empirical applications and find that the approach can reject many possible rotations. The resulting confidence sets for impulse responses may be non-convex, corresponding to disjoint parts of the space of rotation matrices. We show that in this case, augmenting sign and magnitude restrictions with an independence requirement can yield bigger gains.
    Keywords: vector-autoregression; sign restrictions; set-identification; weak identification; non-convex confidence set; independent shock
    JEL: C51 C32
    Date: 2021–09–17
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:93062&r=
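    Illustration: a toy version of paper 5's idea of screening rotations by higher-moment dependence of the implied shocks. The diagnostic below (correlation of squared shocks) is a simple stand-in, not the paper's test statistic; all numbers are invented.
```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
e_true = rng.laplace(size=(n, 2))                # independent, non-normal shocks
A = np.array([[1.0, 0.5], [0.3, 1.0]])           # structural impact matrix
u = e_true @ A.T                                 # reduced-form residuals

# whiten the residuals, then scan rotation angles
L = np.linalg.cholesky(np.cov(u.T))
w = u @ np.linalg.inv(L).T

def dep_stat(e):
    """Higher-moment dependence diagnostic: correlation of squared
    shocks, which is (approximately) zero under independence."""
    s = e**2
    return abs(np.corrcoef(s[:, 0], s[:, 1])[0, 1])

for theta in np.linspace(0, np.pi / 2, 7):
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(f"theta = {theta:.2f}, dependence stat = {dep_stat(w @ Q):.3f}")
# the statistic is smallest near the rotation that recovers independent shocks,
# so rotations with large values can be ruled out
```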
  6. By: Le-Yu Chen; Yu-Min Yen
    Abstract: We study estimation of the conditional tail average treatment effect (CTATE), defined as the difference between conditional tail expectations of potential outcomes. The CTATE can capture heterogeneity and deliver aggregated local information on treatment effects over different quantile levels, and is closely related to the notion of second-order stochastic dominance and the Lorenz curve. These properties render it a valuable tool for policy evaluation. We consider a semiparametric treatment effect framework under endogeneity for CTATE estimation, using a newly introduced class of loss functions that are jointly consistent for the conditional tail expectation and quantile. We establish the asymptotic theory of our proposed CTATE estimator and provide an efficient algorithm for its implementation. We then apply the method to evaluate the effects of participating in programs of the Job Training Partnership Act in the US.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.08793&r=
  7. By: Joel A. Middleton
    Abstract: This paper provides a design-based framework for variance (bound) estimation in experimental analysis. Results are applicable to virtually any combination of experimental design, linear estimator (e.g., difference-in-means, OLS, WLS) and variance bound, allowing for unified treatment and a basis for systematic study and comparison of designs using matrix spectral analysis. A proposed variance estimator reproduces Eicker-Huber-White (a.k.a. "robust", "heteroskedasticity-consistent", "sandwich", "White", "Huber-White", "HC", etc.) standard errors and "cluster-robust" standard errors as special cases. While past work has shown algebraic equivalences between design-based and the so-called "robust" standard errors under some designs, this paper motivates them for a wide array of design-estimator-bound triplets. In so doing, it provides a clearer and more general motivation for variance estimators.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.09220&r=
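    Illustration: one special case of the equivalences paper 7 discusses — in a completely randomized experiment, the HC2 standard error from OLS coincides with the Neyman design-based variance estimator. A sketch with simulated data.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
T = rng.permutation(np.repeat([0, 1], n // 2))   # completely randomized assignment
y = 1.0 + 2.0 * T + rng.normal(size=n)

# OLS difference-in-means with HC2 standard errors
fit = sm.OLS(y, sm.add_constant(T.astype(float))).fit(cov_type="HC2")

# Neyman design-based variance bound: s1^2/n1 + s0^2/n0
s1, s0 = y[T == 1].var(ddof=1), y[T == 0].var(ddof=1)
neyman_se = np.sqrt(s1 / (T == 1).sum() + s0 / (T == 0).sum())

print("HC2 SE:   ", fit.bse[1])
print("Neyman SE:", neyman_se)    # identical: the algebraic equivalence in this design
```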
  8. By: Antoine Djogbenou; Razvan Sufana
    Abstract: Standard high-dimensional factor models assume that the comovements in a large set of variables can be modeled using a small number of latent factors that affect all variables. In many relevant applications in economics and finance, heterogeneous comovements specific to some known groups of variables naturally arise, and reflect distinct cyclical movements within those groups. This paper develops two new statistical tests that can be used to investigate whether there is evidence supporting group-specific heterogeneity in the data. The first test statistic is designed for the alternative hypothesis of group-specific heterogeneity appearing in at least one pair of groups; the second is for the alternative of group-specific heterogeneity appearing in all pairs of groups. We show that the second moment of factor loadings changes across groups when heterogeneity is present, and use this feature to establish the theoretical validity of the tests. We also propose and prove the validity of a permutation approach for approximating the asymptotic distributions of the two test statistics. The simulations and the empirical financial application indicate that the proposed tests are useful for detecting group-specific heterogeneity.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.09049&r=
  9. By: Oualid Bada; Alois Kneip; Dominik Liebl; Tim Mensinger; James Gualtieri; Robin C. Sickles
    Abstract: While a substantial literature on structural-break change-point analysis exists for univariate time series, research on large panel data models has not been as extensive. In this paper, a novel method for estimating panel models with multiple structural changes is proposed. The breaks are allowed to occur at unknown points in time and may affect the multivariate slope parameters individually. Our method adapts Haar wavelets to the structure of the observed variables in order to detect the change points of the parameters consistently. We also develop methods to address endogenous regressors within our modeling framework. The asymptotic properties of our estimator are established. In our application, we examine the impact of algorithmic trading on standard measures of market quality, such as liquidity and volatility, over a time period that covers the financial meltdown that began in 2007. We are able to detect jumps in regression slope parameters automatically, without using ad-hoc subsample selection criteria.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.10950&r=
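    Illustration: a univariate toy example of how a Haar-type statistic locates a jump in a parameter (here, a mean shift). Paper 9's panel estimator is far more general; this invented sketch only shows the wavelet mechanics.
```python
import numpy as np

rng = np.random.default_rng(4)
n, brk = 200, 120
y = np.where(np.arange(n) < brk, 0.0, 1.5) + rng.normal(size=n)

# Haar-type statistic: scaled difference of adjacent window means, which is
# (up to scaling) the Haar wavelet coefficient centered at each location
h = 20
stat = np.array([
    np.sqrt(h / 2) * (y[t:t + h].mean() - y[t - h:t].mean())
    for t in range(h, n - h)
])
print("estimated break at t =", h + np.abs(stat).argmax())   # near the true 120
```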
  10. By: Michael Stanley Smith
    Abstract: Implicit copulas are the most common copula choice for modeling dependence in high dimensions. This broad class of copulas is introduced and surveyed, including elliptical copulas, skew $t$ copulas, factor copulas, time series copulas and regression copulas. The common auxiliary representation of implicit copulas is outlined, along with how it makes them both scalable and tractable for statistical modeling. Issues such as parameter identification, extended likelihoods for discrete or mixed data, parsimony in high dimensions, and simulation from the copula model are considered. Bayesian approaches to estimate the copula parameters, and to predict from an implicit copula model, are outlined. Particular attention is given to implicit copula processes constructed from time series and regression models, which are at the forefront of current research. Two econometric applications -- one from macroeconomic time series and the other from financial asset pricing -- illustrate the advantages of implicit copula models.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.04718&r=
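    Illustration: the simplest implicit copula — the Gaussian copula extracted from a multivariate normal and paired with arbitrary margins. A minimal sketch, not the more advanced constructions surveyed in paper 10.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# implicit (Gaussian) copula: the copula extracted from a multivariate normal
R = np.array([[1.0, 0.7], [0.7, 1.0]])       # copula correlation matrix
z = rng.multivariate_normal(np.zeros(2), R, size=10000)
u = stats.norm.cdf(z)                         # uniforms carrying Gaussian dependence

# pair the copula with arbitrary margins via inverse-CDF transforms
x1 = stats.gamma(a=2.0).ppf(u[:, 0])          # skewed margin
x2 = stats.t(df=4).ppf(u[:, 1])               # heavy-tailed margin
print("Spearman correlation:", round(float(stats.spearmanr(x1, x2)[0]), 3))
```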
  11. By: Alfred Duncan
    Keywords: DSGE; Reverse mode differentiation; Hamiltonian Monte Carlo; No U-Turn Sampler; Bayesian estimation
    JEL: C11 C13 C32
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:ukc:ukcedp:2108&r=
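    Illustration: paper 11 appears with keywords only, so as context here is a generic, minimal pure-Python sketch of reverse-mode differentiation itself (the technique named in the title); it is unrelated to the paper's actual DSGE implementation.
```python
import math

class Var:
    """Minimal reverse-mode AD node: a value plus links to parent nodes."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, list(parents), 0.0
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def log(x):
    return Var(math.log(x.value), [(x, 1.0 / x.value)])

def backward(out):
    """One reverse sweep over a topological order propagates
    d(out)/d(node) from the output back to every input."""
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += node.grad * local

# gradient of f(a, b) = log(a * b) + a at a = 2, b = 3
a, b = Var(2.0), Var(3.0)
f = log(a * b) + a
backward(f)
print(a.grad, b.grad)   # df/da = 1/a + 1 = 1.5, df/db = 1/b ≈ 0.333
```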
  12. By: Naoki Awaya (Graduate School of Economics, The University of Tokyo); Yasuhiro Omori (Faculty of Economics, The University of Tokyo)
    Abstract: An efficient particle Markov chain Monte Carlo methodology is proposed for the rolling-window estimation of state space models. The particles are updated to approximate the long sequence of posterior distributions as we move the estimation window. To overcome the well-known weight degeneracy problem that causes poor approximations, we introduce a practical double-block sampler with the conditional sequential Monte Carlo update, where we choose one lineage from multiple candidates for the set of current state variables. Our proposed sampler is justified in the augmented space through theoretical discussion. In illustrative examples, it is shown to accurately estimate the posterior distributions of the model parameters.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2021cf1175&r=
  13. By: Fabien Perez (CREST, ENSAE, INSEE); Guillaume Hollard (CREST, Ecole Polytechnique, CNRS); Radu Vranceanu (ESSEC Business School and THEMA)
    Abstract: This paper analyzes within-session test/retest data from four different tasks used to elicit risk attitudes. Maximum-likelihood and non-parametric estimation on 16 datasets reveals that, irrespective of the task, measurement error accounts for approximately 50% of the variance of the observed variable capturing risk attitudes. The consequences of this large noise element are evaluated by means of simulations. First, as predicted by theory, the coefficient on the risk measure in univariate OLS regressions is attenuated to approximately half of its true value, irrespective of the sample size. Second, the risk-attitude measure may spuriously appear to be insignificant, especially in small samples. Unlike the measurement error arising from within-individual variability, rounding has little influence on significance and biases. In the last part, we show that instrumental-variable estimation and the ORIV method developed by Gillen et al. (2019), both of which require test/retest data, can eliminate the attenuation bias but do not fully solve the insignificance problem in small samples. Increasing the number of observations to N=500 removes most of the insignificance issues.
    Keywords: Measurement error, Risk-aversion, Test/retest, ORIV, Sample size
    JEL: C18 C26 C91 D81
    Date: 2021–07–08
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2021-13&r=
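    Illustration: a small simulation of the two headline phenomena in paper 13 — attenuation of OLS to about half the true coefficient when measurement error is ~50% of observed variance, and correction by instrumenting one measure with its retest. Invented data; only the noise share is set to match the paper's estimate.
```python
import numpy as np

rng = np.random.default_rng(6)
n, beta = 200, 1.0
risk = rng.normal(size=n)                 # latent true risk attitude
y = beta * risk + rng.normal(size=n)      # outcome regressed on risk

# test/retest: two noisy measures with noise variance equal to the signal
# variance, i.e. measurement error is ~50% of the observed variance
m1 = risk + rng.normal(size=n)
m2 = risk + rng.normal(size=n)

c1 = np.cov(m1, y)
b_ols = c1[0, 1] / c1[0, 0]               # attenuated toward beta/2
b_iv = np.cov(m2, y)[0, 1] / np.cov(m2, m1)[0, 1]   # retest as instrument
print(f"OLS: {b_ols:.2f} (about {beta/2}), IV: {b_iv:.2f} (about {beta})")
```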
  14. By: Zhe Wang; Ryan Martin
    Abstract: In mathematical finance, Lévy processes are widely used for their ability to model both continuous variation and abrupt, discontinuous jumps. These jumps are practically relevant, so reliable inference on the feature that controls jump frequencies and magnitudes, namely, the Lévy density, is of critical importance. A specific obstacle to carrying out model-based (e.g., Bayesian) inference in such problems is that, for general Lévy processes, the likelihood is intractable. To overcome this obstacle, here we adopt a Gibbs posterior framework that updates a prior distribution using a suitable loss function instead of a likelihood. We establish asymptotic posterior concentration rates for the proposed Gibbs posterior. In particular, in the most interesting and practically relevant case, we give conditions under which the Gibbs posterior concentrates at (nearly) the minimax optimal rate, adaptive to the unknown smoothness of the true Lévy density.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.06567&r=
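    Illustration: the Gibbs-posterior idea in its simplest form — a prior updated through a loss function instead of a likelihood — on a one-parameter grid. This shows only the generic framework from paper 14, not its Lévy-density construction; data, prior and learning rate are invented.
```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(loc=2.0, scale=1.0, size=50)

theta = np.linspace(-2, 6, 801)                  # parameter grid
prior = np.exp(-0.5 * theta**2 / 9)              # N(0, 9) prior, unnormalized
eta = 1.0                                        # learning rate

# Gibbs posterior: prior * exp(-eta * n * empirical loss); no likelihood needed
loss = np.array([np.mean((data - t)**2) for t in theta])   # squared-error loss
post = prior * np.exp(-eta * len(data) * loss)
post /= np.trapz(post, theta)

print("Gibbs posterior mean:", round(np.trapz(theta * post, theta), 3))
# the posterior concentrates near the loss minimizer (the sample mean) as n grows
```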
  15. By: Matthew Read
    Abstract: I develop algorithms to facilitate Bayesian inference in structural vector autoregressions that are set-identified with sign and zero restrictions by showing that the system of restrictions is equivalent to a system of sign restrictions in a lower-dimensional space. Consequently, algorithms applicable under sign restrictions can be extended to allow for zero restrictions. Specifically, I extend algorithms proposed in Amir-Ahmadi and Drautzburg (2021) to check whether the identified set is nonempty and to sample from the identified set without rejection sampling. I compare the new algorithms to alternatives by applying them to variations of the model considered by Arias et al. (2019), who estimate the effects of US monetary policy using sign and zero restrictions on the monetary policy reaction function. The new algorithms are particularly useful when a large number of sign restrictions substantially truncate the identified set given the zero restrictions.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.10676&r=
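    Illustration: the standard accept/reject algorithm for sign restrictions that paper 15's algorithms improve upon (by accommodating zero restrictions and avoiding rejection sampling). A bivariate sketch with invented numbers.
```python
import numpy as np

rng = np.random.default_rng(8)
# reduced-form covariance of a bivariate VAR's residuals (illustrative numbers)
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
L = np.linalg.cholesky(Sigma)

draws = []
while len(draws) < 100:
    # draw Q uniformly (Haar measure) on the orthogonal group via QR
    Z = rng.normal(size=(2, 2))
    Q, R = np.linalg.qr(Z)
    Q = Q @ np.diag(np.sign(np.diag(R)))         # sign normalization for Haar
    B = L @ Q                                    # candidate impact matrix
    # sign restriction: shock 1 moves both variables up on impact
    if B[0, 0] > 0 and B[1, 0] > 0:
        draws.append(B)

impacts = np.array(draws)
print("impact of shock 1 on var 1:", round(impacts[:, 0, 0].min(), 2),
      "to", round(impacts[:, 0, 0].max(), 2))    # the identified set of impacts
```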
  16. By: Antoine Djogbenou; Christian Gouri\'eroux; Joann Jasiak; Maygol Bandehali
    Abstract: We introduce the Maximum Composite Likelihood (MCL) estimation method for the stochastic factor ordered Probit model of credit rating transitions of firms. This model is recommended to banks and financial institutions as part of internal credit risk assessment procedures under the Basel III regulations. Its exact likelihood function, however, involves a high-dimensional integral, which must be approximated numerically before it can be maximized. The associated estimates of migration risk and the corresponding required capital are generally quite sensitive to the quality of this approximation, leading to statistical regulatory arbitrage. The proposed MCL estimator instead maximizes the composite log-likelihood of the factor ordered Probit model. We present three MCL estimators of different complexity and prove their consistency and asymptotic normality. The performance of these estimators is examined in a simulation study.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.09043&r=
  17. By: Vanesa Jorda; José María Sarabia; Markus Jäntti
    Abstract: Grouped data in the form of income shares have conventionally been used to estimate income inequality due to the lack of individual records. We provide guidance on the choice between parametric and nonparametric methods and their estimation, for which we develop the GB2group R package. We present a systematic evaluation of the performance of parametric distributions in estimating economic inequality. The accuracy of these estimates is compared with that of estimates obtained by nonparametric techniques in more than 5000 datasets. Our results indicate that even the simplest parametric models provide reliable estimates of inequality measures. The nonparametric approach, however, fails to represent income distributions accurately.
    JEL: D31 C13 C18
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:lis:liswps:804&r=
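    Illustration: a sketch of paper 17's parametric-vs-nonparametric comparison using quintile shares from a simulated lognormal population. The GB2group package and the GB2 family are not used here; a one-parameter lognormal stands in, and all numbers are invented.
```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(9)
income = rng.lognormal(mean=0.0, sigma=0.8, size=100000)
true_gini = 2 * stats.norm.cdf(0.8 / np.sqrt(2)) - 1     # lognormal Gini formula

# grouped data: quintile income shares, as typically published
q = np.quantile(income, [0.2, 0.4, 0.6, 0.8])
groups = np.digitize(income, q)
shares = np.array([income[groups == g].sum() for g in range(5)]) / income.sum()

# nonparametric: Gini from the 5-point Lorenz curve (assumes within-group equality)
L_pts = np.concatenate([[0.0], np.cumsum(shares)])
p_pts = np.linspace(0, 1, 6)
gini_np = 1 - 2 * np.trapz(L_pts, p_pts)

# parametric: fit a lognormal sigma by matching the observed quintile shares,
# using the lognormal Lorenz curve L(p) = Phi(Phi^-1(p) - sigma)
def loss(sigma):
    z = stats.norm.ppf(p_pts[1:-1])
    Lz = stats.norm.cdf(z - sigma)
    model = np.diff(np.concatenate([[0.0], Lz, [1.0]]))
    return ((model - shares)**2).sum()

sigma_hat = optimize.minimize_scalar(loss, bounds=(0.1, 3.0), method="bounded").x
gini_par = 2 * stats.norm.cdf(sigma_hat / np.sqrt(2)) - 1
print(f"true {true_gini:.3f}, nonparametric {gini_np:.3f}, parametric {gini_par:.3f}")
# the grouped nonparametric estimate is biased downward; the parametric fit is close
```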
  18. By: Kyle Butts; John Gardner
    Abstract: Recent work has highlighted the difficulties of estimating difference-in-differences models when treatment occurs at different times for different units. This article introduces the R package did2s, which implements the estimator introduced in Gardner (2021). The article provides an approachable review of the underlying econometric theory and introduces the syntax for the function did2s. Further, the package provides a function, event_study, that offers a common syntax for all the modern event-study estimators, and plot_event_study to plot the results of each estimator.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.05913&r=
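    Illustration: did2s is an R package; the Python sketch below reproduces only the two-stage logic described in the abstract (stage 1: unit and period effects from untreated observations; stage 2: regress adjusted outcomes on treatment). Data are invented, and stage-2 standard errors would need the correction the package implements.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
N, T, tau = 50, 10, 2.0
df = pd.DataFrame([(i, t) for i in range(N) for t in range(T)],
                  columns=["unit", "time"])
g = rng.choice([4, 7, T + 1], size=N)            # staggered cohorts (T+1 = never)
df["treat"] = (df["time"] >= g[df["unit"]]).astype(int)
df["y"] = (rng.normal(size=N)[df["unit"]] + 0.3 * df["time"]
           + tau * df["treat"] + rng.normal(size=len(df)))

# stage 1: unit and time effects estimated from untreated observations only
stage1 = smf.ols("y ~ C(unit) + C(time)", data=df[df["treat"] == 0]).fit()
df["y_adj"] = df["y"] - stage1.predict(df)

# stage 2: regress the adjusted outcome on treatment
stage2 = smf.ols("y_adj ~ treat", data=df).fit()
print("two-stage DiD estimate:", round(stage2.params["treat"], 3))  # near tau = 2
```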
  19. By: Holger Breinlich (University of Surrey); Dennis Novy (University of Warwick); Joao M.C. Santos Silva (University of Surrey)
    Abstract: Gravity regressions are a common tool in the empirical international trade literature and serve an important function for many policy purposes. We study to what extent micro-level parameters can be recovered from gravity regressions estimated with aggregate data. We show that estimation of gravity equations in their original multiplicative form via Poisson pseudo maximum likelihood (PPML) is more robust to aggregation than estimation of log-linearized gravity equations via ordinary least squares (OLS). In the leading case where regressors do not vary at the micro level, PPML estimates obtained with aggregate data have a clear interpretation as trade-weighted averages of micro-level parameters that is not shared by OLS estimates. However, when regressors vary at the micro level, using disaggregated data is essential because in this case not even PPML can recover parameters of interest. We illustrate our results with an application to Baier and Bergstrand’s (2007) influential study of the effects of trade agreements on trade flows. We examine how their findings change when estimation is performed at different levels of aggregation, and explore the consequences of aggregation for predicting the effects of trade agreements.
    JEL: C23 C43 F14 F15 F17
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:sur:surrec:0721&r=
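    Illustration: the core PPML-vs-log-OLS contrast that underlies paper 19's aggregation results. With a multiplicative, heteroskedastic error, OLS on logs is badly biased while PPML recovers (approximately) the average micro-level effect. Invented data; this is not the paper's aggregation exercise.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 2000
fta = rng.binomial(1, 0.4, size=n)            # trade-agreement dummy
beta_i = rng.normal(0.5, 0.2, size=n)         # heterogeneous micro-level effects
s = 0.5 + 0.5 * fta                           # error dispersion depends on fta
err = np.exp(rng.normal(-s**2 / 2, s))        # multiplicative error, mean one given fta
trade = np.exp(1.0 + beta_i * fta) * err

X = sm.add_constant(fta.astype(float))
ppml = sm.GLM(trade, X, family=sm.families.Poisson()).fit()   # multiplicative form
ols = sm.OLS(np.log(trade), X).fit()                          # log-linearized form
print("PPML:", round(ppml.params[1], 3), " log-OLS:", round(ols.params[1], 3))
# PPML is near the average effect (~0.52); log-OLS is pulled far away (~0.13)
# because the log-error variance depends on the regressor
```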
  20. By: Mohit Batham; Soudeh Mirghasemi; Mohammad Arshad Rahman; Manini Ojha
    Abstract: This chapter presents an overview of a specific form of limited dependent variable models, namely discrete choice models, where the dependent (response or outcome) variable takes values which are discrete, inherently ordered, and characterized by an underlying continuous latent variable. Within this setting, the dependent variable may take only two discrete values (such as 0 and 1), giving rise to binary models (e.g., probit and logit models), or more than two values (say $j=1,2, \ldots, J$, where $J$ is some integer, typically small), giving rise to ordinal models (e.g., ordinal probit and ordinal logit models). In these models, the primary goal is to model the probability of responses/outcomes conditional on the covariates. We connect the outcomes of a discrete choice model to the random utility framework in economics, discuss estimation techniques, present the calculation of covariate effects, and describe measures to assess model fit. Some recent advances in discrete data modeling are also discussed. Following the theoretical review, we utilize the binary and ordinal models to analyze public opinion on marijuana legalization and the extent of legalization -- a socially relevant but controversial topic in the United States. We obtain several interesting results, including that past use of marijuana, beliefs about legalization, and political partisanship are important factors that shape public opinion.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.10122&r=
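    Illustration: the latent-variable construction of an ordinal probit described in paper 20's abstract, simulated and fit with statsmodels' OrderedModel. The covariates are invented placeholders for the paper's survey variables (e.g., beliefs, past use).
```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(12)
n = 2000
x1 = rng.normal(size=n)                       # placeholder continuous covariate
x2 = rng.binomial(1, 0.5, size=n)             # placeholder binary covariate
z = 0.8 * x1 + 0.6 * x2 + rng.normal(size=n)  # continuous latent variable

# cutpoints discretize the latent variable into an ordered response
y = np.digitize(z, [-0.5, 0.5, 1.5])          # 4 inherently ordered categories

fit = OrderedModel(y, np.column_stack([x1, x2]), distr="probit").fit(
    method="bfgs", disp=False)
print(fit.params[:2])                         # slopes, close to the true 0.8 and 0.6
```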

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments, please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.