nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒09‒20
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Nonparametric inference for extremal conditional quantiles By Daisuke Kurisu; Taisuke Otsu
  2. Standard Errors for Calibrated Parameters By Matthew D. Cocci; Mikkel Plagborg-Møller
  3. Semiparametric Estimation of Treatment Effects in Randomized Experiments By Susan Athey; Peter J. Bickel; Aiyou Chen; Guido W. Imbens; Michael Pollmann
  4. Inference in the Nonparametric Stochastic Frontier Model By Parmeter, Christopher F.; Simar, Léopold; Van Keilegom, Ingrid; Zelenyuk, Valentin
  5. Bias-Adjusted Treatment Effects Under Equal Selection By Deepankar Basu
  6. The tenets of indirect inference in Bayesian models By Perepolkin, Dmytro; Goodrich, Benjamin; Sahlin, Ullrika
  7. Bounding Sets for Treatment Effects with Proportional Selection By Deepankar Basu
  8. Optimal transport weights for causal inference By Eric Dunipace
  9. Evaluating forecast performance with state dependence By Florens Odendahl; Barbara Rossi; Tatevik Sekhposyan
  10. Predictability of Aggregated Time Series By Reinhard Ellwanger; Stephen Snudden
  11. A Framework for Using Value-Added in Regressions By Antoine Deeb
  12. Structural Estimation of Matching Markets with Transferable Utility By Alfred Galichon; Bernard Salanié
  13. Geographic Difference-in-Discontinuities By Kyle Butts

  1. By: Daisuke Kurisu; Taisuke Otsu
    Abstract: This paper studies the asymptotic properties of the local linear quantile estimator under extremal order quantile asymptotics and develops a practical inference method for conditional quantiles in extreme tail areas. Using a point process technique, the asymptotic distribution of the local linear quantile estimator is derived as the minimizer of a certain functional of a Poisson point process that involves nuisance parameters. To circumvent the difficulty of estimating those nuisance parameters, we propose a subsampling inference method for conditional extreme quantiles based on a self-normalized version of the local linear estimator. A simulation study illustrates the usefulness of our subsampling inference for investigating extremal phenomena.
    Keywords: Quantile regression, Extreme value theory, Point process, Subsampling
    JEL: C14
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:616&r=
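
The subsampling idea in item 1 can be illustrated in a much simpler setting. The sketch below uses a plain unconditional tail quantile and the standard sqrt(b) rate; it is not the paper's self-normalized local linear procedure (self-normalization is precisely what removes the need to know the extreme-value convergence rate), and all numbers are simulated for illustration.

```python
import numpy as np

# Generic subsampling confidence interval for a tail quantile (illustration only;
# the paper's method uses a self-normalized local linear estimator so that the
# unknown extreme-tail convergence rate need not be known or estimated).
rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=2000)          # heavy-tailed sample (simulated)
tau = 0.99                                   # tail quantile level
theta_hat = np.quantile(x, tau)

n, b, reps = len(x), 200, 2000               # subsample size b << n
stats = np.empty(reps)
for r in range(reps):
    sub = rng.choice(x, size=b, replace=False)
    # subsampling root: sqrt(b) * (subsample estimate - full-sample estimate)
    stats[r] = np.sqrt(b) * (np.quantile(sub, tau) - theta_hat)

lo_q, hi_q = np.quantile(stats, [0.025, 0.975])
ci = (theta_hat - hi_q / np.sqrt(n), theta_hat - lo_q / np.sqrt(n))
print(f"point estimate {theta_hat:.3f}, 95% subsampling CI ({ci[0]:.3f}, {ci[1]:.3f})")
```
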
  2. By: Matthew D. Cocci; Mikkel Plagborg-Møller
    Abstract: Calibration, the practice of choosing the parameters of a structural model to match certain empirical moments, can be viewed as minimum distance estimation. Existing standard error formulas for such estimators require a consistent estimate of the correlation structure of the empirical moments, which is often unavailable in practice. Instead, the variances of the individual empirical moments are usually readily estimable. Using only these variances, we derive conservative standard errors and confidence intervals for the structural parameters that are valid even under the worst-case correlation structure. In the over-identified case, we show that the moment weighting scheme that minimizes the worst-case estimator variance amounts to a moment selection problem with a simple solution. Finally, we develop tests of over-identifying or parameter restrictions. We apply our methods empirically to a model of menu cost pricing for multi-product firms and to a heterogeneous agent New Keynesian model.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.08109&r=
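
The central idea in item 2 can be seen in the exactly identified, locally linear case: when the estimator's sensitivity to each moment is known and only the marginal standard errors of the moments are available, the variance that is worst over all possible correlation structures is attained when the moments are perfectly correlated with suitably chosen signs. A minimal sketch with made-up sensitivities and moment standard errors; the paper's treatment of over-identification, optimal weighting, and testing is considerably richer.

```python
import numpy as np

# Worst-case standard error for a parameter that is (locally) a linear combination
# of empirical moments, theta_hat ~ theta + x' (m_hat - m), when only the marginal
# standard errors of the moments are known.
x = np.array([1.5, -0.8, 0.3])       # sensitivity of theta_hat to each moment (hypothetical)
se_m = np.array([0.10, 0.05, 0.20])  # standard errors of the individual moments (hypothetical)

se_independent = np.sqrt(np.sum((x * se_m) ** 2))  # valid only if moments are uncorrelated
se_worst_case = np.sum(np.abs(x) * se_m)           # valid under ANY correlation structure

print(f"SE assuming independence: {se_independent:.3f}")
print(f"Worst-case SE:            {se_worst_case:.3f}")
```
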
  3. By: Susan Athey; Peter J. Bickel; Aiyou Chen; Guido W. Imbens; Michael Pollmann
    Abstract: We develop new semiparametric methods for estimating treatment effects. We focus on a setting where the outcome distributions may be thick-tailed, treatment effects are small, sample sizes are large, and assignment is completely random. This setting is of particular interest in recent experimentation in tech companies. We propose using parametric models for the treatment effects, as opposed to parametric models for the full outcome distributions; this leads to semiparametric models for the outcome distributions. We derive the semiparametric efficiency bound for this setting and propose efficient estimators. In the case with a constant treatment effect, one of the proposed estimators has an interesting interpretation as a weighted average of quantile treatment effects, with the weights proportional to (minus) the second derivative of the log of the density of the potential outcomes. Our analysis also yields an extension of Huber's model and trimmed mean to allow for asymmetry, as well as a simplified condition on linear combinations of order statistics, which may be of independent interest.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.02603&r=
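
As a rough restatement of the constant-treatment-effect result mentioned in item 3, the efficient estimator can be read as a weighted average of quantile treatment effects. The display below is a hedged sketch of that representation; the evaluation point of the second derivative and the exact normalization are the editorially natural reading of the abstract, not formulas reproduced from the paper.

```latex
\hat\tau \;=\; \int_0^1 w(u)\,\widehat{\mathrm{QTE}}(u)\,du,
\qquad
w(u)\;\propto\; -\,\frac{\partial^2}{\partial y^2}\,\log f(y)\Big|_{y=F^{-1}(u)},
\qquad
\int_0^1 w(u)\,du \;=\; 1 .
```
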
  4. By: Parmeter, Christopher F. (University of Miami); Simar, Léopold (Université catholique de Louvain, LIDAM/ISBA, Belgium); Van Keilegom, Ingrid (Université catholique de Louvain, LIDAM/ISBA, Belgium); Zelenyuk, Valentin (University of Queensland)
    Abstract: This paper is the first in the literature to discuss in detail how to conduct various types of inference in the stochastic frontier model when it is estimated using non-parametric methods. We discuss a general and versatile inferential technique that allows for a range of practical hypotheses of interest to be tested. We also discuss several challenges that currently exist in this framework in an effort to alert researchers to potential pitfalls. Namely, it appears that when one wishes to estimate a stochastic frontier in a fully non-parametric framework, separability between inputs and determinants of inefficiency is an essential ingredient for the correct empirical size of a test. We showcase the performance of the test with a variety of Monte Carlo simulations.
    Keywords: Stochastic Frontier Analysis, Efficiency, Productivity Analysis, Local-Polynomial Least-Squares
    JEL: C1 C14 C13
    Date: 2021–09–09
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021029&r=
  5. By: Deepankar Basu (Department of Economics, University of Massachusetts Amherst)
    Abstract: In a recent contribution, Oster (2019) proposed a method to generate bounds on treatment effects in the presence of unobservable confounders. The method can only be implemented if a crucial problem of non-uniqueness is addressed. In this paper I demonstrate that one of the proposed ways to address non-uniqueness, which relies on computing bias-adjusted treatment effects under the assumption of equal selection on observables and unobservables, is problematic on several counts. First, additional assumptions, which cannot be justified on theoretical grounds, are needed to ensure a unique solution; second, the method will not work when the estimate of the treatment effect declines with the addition of controls; and third, the solution, and therefore conclusions about bias, can change dramatically even for small deviations from equal selection.
    Keywords: treatment effect, omitted variable bias
    JEL: C21
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ums:papers:2021-05&r=
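
For orientation on item 5, the bias-adjusted treatment effect is often stated through the widely cited approximation from Oster (2019) below, where the dotted quantities come from the short regression (treatment only), the tilded quantities from the intermediate regression (all observable controls added), and delta = 1 corresponds to equal selection. This display is an approximation given here for context only; the exact estimator solves a cubic, which is where the non-uniqueness discussed in the abstract arises.

```latex
\beta^{*} \;\approx\; \tilde{\beta}
\;-\;\delta\,\bigl(\dot{\beta}-\tilde{\beta}\bigr)\,
\frac{R_{\max}-\tilde{R}}{\tilde{R}-\dot{R}},
\qquad \delta = 1 \ \text{(equal selection)} .
```
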
  6. By: Perepolkin, Dmytro; Goodrich, Benjamin; Sahlin, Ullrika
    Abstract: This paper extends the application of Bayesian inference to probability distributions defined in terms of their quantile functions. We describe the method of *indirect likelihood*, to be used in Bayesian models whose sampling distributions lack an explicit cumulative distribution function. We provide examples and demonstrate the equivalence of the "quantile-based" (indirect) likelihood to the conventional "density-defined" (direct) likelihood. We consider practical aspects of the numerical inversion of the quantile function by root-finding, as required by the indirect likelihood method. In particular, we consider the problem of ensuring the validity of an arbitrary quantile function with the help of Chebyshev polynomials, and we provide practical guidance and implementations of these algorithms in Stan and R. We also use the same approach to propose the definition of an *indirect prior* and discuss situations where it can be useful.
    Date: 2021–09–09
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:enzgs&r=
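
The mechanics of the quantile-based (indirect) likelihood in item 6 reduce to two steps: numerically invert the quantile function Q(p; theta) to obtain u = F(x), then evaluate the density as the reciprocal of the quantile density q(p) = dQ/dp at that point. A minimal Python sketch using the exponential distribution, so the result can be checked against the ordinary density-based likelihood; the paper's implementations are in Stan and R and include validity checks via Chebyshev polynomials, none of which is reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

def Q(p, rate):
    return -np.log1p(-p) / rate          # exponential quantile function

def q(p, rate):
    return 1.0 / (rate * (1.0 - p))      # quantile density dQ/dp

def indirect_loglik(x, rate):
    total = 0.0
    for xi in x:
        # invert Q by root-finding to get u = F(xi)
        u = brentq(lambda p: Q(p, rate) - xi, 1e-12, 1 - 1e-12)
        total += -np.log(q(u, rate))     # log f(xi) = -log q(F(xi))
    return total

rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 2.0, size=200)   # simulated data, true rate = 2
print(indirect_loglik(x, 2.0))                 # quantile-based log-likelihood
print(np.sum(np.log(2.0) - 2.0 * x))           # density-based log-likelihood (should match)
```
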
  7. By: Deepankar Basu (Department of Economics, University of Massachusetts Amherst)
    Abstract: In linear econometric models with proportional selection on unobservables, the omitted variable bias in estimated treatment effects is given by the roots of a cubic equation involving estimated parameters from a short and an intermediate regression, the former excluding and the latter including all observable controls. The roots of the cubic are functions of delta, the degree of proportional selection on unobservables, and R_max, the R-squared in a hypothetical long regression that includes the unobservable confounder and all observable controls. In this paper a simple method is proposed to compute the roots of the cubic over meaningful regions of the delta-R_max plane and to use the roots to construct bounding sets for the true treatment effect. The proposed method is illustrated with both a simulated and an observational data set.
    Keywords: treatment effect, omitted variable bias
    JEL: C21
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ums:papers:2021-10&r=
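
The grid procedure in item 7 amounts to solving the cubic at each point of a (delta, R_max) grid and collecting the real roots. The sketch below shows only that scaffolding: the coefficient functions and all numbers are hypothetical placeholders, not the expressions derived in the paper or in Oster (2019).

```python
import itertools
import numpy as np

def cubic_coefficients(delta, r_max, beta_short, beta_int):
    # HYPOTHETICAL placeholder coefficients; in practice these are the paper's
    # expressions built from the short- and intermediate-regression estimates.
    return [1.0,
            delta * (beta_short - beta_int),
            r_max - 0.5,
            delta * r_max * (beta_short - beta_int)]

beta_short, beta_int = 0.80, 0.55          # hypothetical treatment-effect estimates
deltas = np.linspace(0.5, 2.0, 16)         # degree of proportional selection
r_maxes = np.linspace(0.6, 1.0, 9)         # R-squared of the hypothetical long regression

biases = []
for delta, r_max in itertools.product(deltas, r_maxes):
    for root in np.roots(cubic_coefficients(delta, r_max, beta_short, beta_int)):
        if abs(root.imag) < 1e-10:         # keep real roots only (candidate biases)
            biases.append(root.real)

# Bounding set for the true effect: intermediate estimate minus the candidate biases.
print(f"illustrative bounding set: [{beta_int - max(biases):.3f}, {beta_int - min(biases):.3f}]")
```
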
  8. By: Eric Dunipace
    Abstract: Weighting methods are a common tool for de-biasing estimates of causal effects. Although there is an increasing number of seemingly disparate methods, many of them can be folded into one unifying regime: causal optimal transport. This new method directly targets distributional balance by minimizing optimal transport distances between treatment and control groups or, more generally, between a source and a target population. Our approach is model-free but can also incorporate moments or any other functions of the covariates that the researcher wishes to balance. We find that causal optimal transport outperforms competitor methods when both the propensity score and the outcome models are misspecified, indicating that it is a robust alternative to common weighting methods. Finally, we demonstrate the utility of our method in an external control study examining the effect of misoprostol versus oxytocin for the treatment of post-partum hemorrhage.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.01991&r=
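
A deliberately simplified special case conveys the flavour of item 8: fix a uniform marginal on the treated units, leave the control weights free, and add an entropic penalty; the optimal coupling then has a closed softmax form, and each control unit's weight is the mass it receives. This is not the paper's estimator (which handles balance functions, penalties, and general source/target populations); the covariates and the regularization strength below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
X_treat = rng.normal(1.0, 1.0, size=(50, 2))    # covariates, treated group (simulated)
X_ctrl = rng.normal(0.0, 1.0, size=(200, 2))    # covariates, control group (simulated)

# Pairwise squared Euclidean cost between treated and control units.
C = ((X_treat[:, None, :] - X_ctrl[None, :, :]) ** 2).sum(axis=2)

eps = 0.5                                               # entropic regularization (tuning choice)
K = np.exp(-(C - C.min(axis=1, keepdims=True)) / eps)   # numerically stabilized kernel
pi = K / K.sum(axis=1, keepdims=True) / len(X_treat)    # coupling with uniform treated marginal

w = pi.sum(axis=0)                                      # transport-based weight on each control unit
print(w.sum(), (w * X_ctrl[:, 0]).sum())                # weights sum to 1; weighted control mean of X1
```

The weighted control mean of the first covariate is pulled toward the treated mean, which is the distributional-balance idea behind the method.
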
  9. By: Florens Odendahl; Barbara Rossi; Tatevik Sekhposyan
    Abstract: We propose a novel forecast evaluation methodology to assess models' absolute and relative forecasting performance when it is a state-dependent function of economic variables. In our framework, the forecasting performance, measured by a forecast error loss function, is modeled via a hard or smooth threshold model with unknown threshold values. Existing tests either assume a constant out-of-sample forecast performance or use non-parametric techniques robust to time-variation; consequently, they may lack power against state-dependent predictability. Our tests can be applied to relative forecast comparisons, forecast encompassing, forecast efficiency, and, more generally, moment-based tests of forecast evaluation. Monte Carlo results suggest that our proposed tests perform well in finite samples and have better power than existing tests in selecting the best forecast or assessing its efficiency in the presence of state dependence. Our tests uncover "pockets of predictability" in U.S. equity premia; although the term spread is not a useful predictor on average over the sample, it forecasts significantly better than the benchmark forecast when real GDP growth is low. In addition, we find that leading indicators, such as measures of vacancy postings and new orders for durable goods, improve the forecasts of U.S. industrial production when financial conditions are tight.
    Keywords: State dependence, forecast evaluation, predictive ability testing, moment-based tests, pockets of predictability
    JEL: C52 C53 E17 G17
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1800&r=
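
For orientation on item 9, the simplest version of a state-dependent comparison regresses the loss differential between two forecasts on indicators for the state variable lying below or above a threshold. The sketch below fixes the threshold and uses ordinary HAC inference on simulated data; the paper's tests treat the threshold as unknown and derive the appropriate non-standard critical values, which this sketch does not do.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 400
s = rng.normal(size=T)                                 # state variable (e.g., real GDP growth), simulated
d = 0.4 * (s < -0.5) + rng.normal(size=T)              # loss differential: model 2 wins only in the low state

gamma = -0.5                                           # threshold (assumed known here)
X = np.column_stack([(s <= gamma).astype(float), (s > gamma).astype(float)])
res = sm.OLS(d, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})  # Newey-West style HAC
print(res.summary())   # state-specific mean loss differentials and their t-statistics
```
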
  10. By: Reinhard Ellwanger; Stephen Snudden (Wilfrid Laurier University)
    Abstract: Macroeconomic series are often aggregated from higher-frequency data. We show that this seemingly innocent feature has far-reaching consequences for the predictability of such series. First, the series are predictable by construction. Second, conventional tests of predictability are less informative about the data-generating process than frequently assumed. Third, a simple improvement to the conventional test leads to a sizeable correction, making it necessary to re-evaluate existing forecasting approaches. Fourth, forecasting models should be estimated with end-of-period observations even when the goal is to forecast the aggregated series. We highlight the relevance of these insights for forecasts of several macroeconomic variables.
    Keywords: Forecasting and Prediction Methods, Interest Rates, Exchange Rates, Asset Prices, Oil Prices, Commodity Prices
    JEL: C1 C53 E47 F37 G17 Q47
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:wlu:lcerpa:bm0127&r=
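
The first point in item 10, that temporal aggregation by averaging makes the aggregated series predictable by construction, is the classic Working (1960) effect and is easy to reproduce: period averages of a random walk have first differences with autocorrelation near 0.25, while end-of-period sampling leaves the changes unpredictable. A small simulation with assumed parameters, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m, periods = 22, 20_000                       # 22 "days" per period, many periods
x = np.cumsum(rng.normal(size=m * periods))   # daily random walk
x = x.reshape(periods, m)

avg = x.mean(axis=1)                          # period-average series
eop = x[:, -1]                                # end-of-period series

def ac1(y):
    d = np.diff(y)
    return np.corrcoef(d[:-1], d[1:])[0, 1]   # first-order autocorrelation of changes

print(f"AC(1) of period-average changes: {ac1(avg):.3f}   (theory ~ 0.25)")
print(f"AC(1) of end-of-period changes:  {ac1(eop):.3f}   (theory ~ 0)")
```
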
  11. By: Antoine Deeb
    Abstract: Estimated value-added (VA) measures have become popular metrics of worker and institutional quality among economists and policy makers, and recent studies increasingly use these measures either as dependent or explanatory variables in regressions. For example, VA is used as an explanatory variable when examining the relationship between teacher VA and students' long-run outcomes. Due to the multi-step nature of VA estimation and the correlations between the observable characteristics of the students and true teacher quality, the standard errors researchers routinely use when including VA measures in OLS regressions are incorrect. In this paper, I construct correct standard errors for regressions that use VA as an explanatory variable and for regressions where VA is the outcome. I do so by showing how the assumptions underpinning VA models naturally lead to a generalized method of moments (GMM) framework. I propose corrected standard error estimators derived using GMM, and discuss the need to adjust standard errors under different sets of assumptions. Finally, I show that models using VA as an explanatory variable can be written as overidentified systems resembling instrumental variable systems, and propose a more efficient estimator.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.01741&r=
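
The GMM idea in item 11 is, roughly, to stack the moment conditions that define the value-added estimates together with the moment conditions of the second-step regression, so that estimation error in the VA measure propagates into the standard error of the second-step coefficient. A stylized two-block version is sketched below; the notation is illustrative and ignores shrinkage and the other refinements in the paper.

```latex
\mathbb{E}\!\left[\begin{array}{c}
\bar{y}_{j}-\mu_{j}\\[3pt]
\bigl(z_{i}-\gamma_{0}-\gamma_{1}\,\mu_{j(i)}\bigr)\,(1,\ \mu_{j(i)})^{\prime}
\end{array}\right]=0 ,
```

where mu_j is teacher j's value added, y-bar_j the corresponding residualized mean student score, z_i the long-run outcome in the second step, and j(i) student i's teacher; joint estimation of (mu, gamma) yields standard errors for gamma_1 that account for mu being estimated.
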
  12. By: Alfred Galichon; Bernard Salani\'e
    Abstract: This paper provides an introduction to structural estimation methods for matching markets with transferable utility.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.07932&r=
  13. By: Kyle Butts
    Abstract: A recent econometric literature has critiqued the use of regression discontinuities where administrative borders serve as the 'cutoff'. Identification in this context is difficult since multiple treatments can change at the cutoff and individuals can easily sort on either side of the border. This note extends the difference-in-discontinuities framework discussed in Grembi et al. (2016) to a geographic setting. The paper formalizes the identifying assumptions in this context, which allow for the removal of time-invariant sorting and compound treatments, in a manner similar to the difference-in-differences methodology.
    Date: 2021–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2109.07406&r=
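
To fix ideas for item 13, the difference-in-discontinuities estimand of Grembi et al. (2016) contrasts the discontinuity at the cutoff after the policy change with the discontinuity before it; in the geographic version the running variable is signed distance to the administrative border. A hedged sketch of that estimand, with notation chosen here for illustration rather than taken from the note:

```latex
\hat{\tau}_{\text{diff-in-disc}}
=\Bigl[\lim_{x\downarrow 0}\mathbb{E}\bigl(Y_{\mathrm{post}}\mid x\bigr)
      -\lim_{x\uparrow 0}\mathbb{E}\bigl(Y_{\mathrm{post}}\mid x\bigr)\Bigr]
-\Bigl[\lim_{x\downarrow 0}\mathbb{E}\bigl(Y_{\mathrm{pre}}\mid x\bigr)
      -\lim_{x\uparrow 0}\mathbb{E}\bigl(Y_{\mathrm{pre}}\mid x\bigr)\Bigr],
```

where x is signed distance to the border, "post" a period after treatment changes on one side, and "pre" a period before.
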

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.