Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics
2021-09-20
Nonparametric inference for extremal conditional quantiles
http://d.repec.org/n?u=RePEc:cep:stiecm:616&r=&r=ecm
This paper studies the asymptotic properties of the local linear quantile estimator under extremal order quantile asymptotics, and develops a practical inference method for conditional quantiles in extreme tail areas. Using a point process technique, the asymptotic distribution of the local linear quantile estimator is derived as the minimizer of a certain functional of a Poisson point process that involves nuisance parameters. To circumvent the difficulty of estimating these nuisance parameters, we propose a subsampling inference method for conditional extreme quantiles based on a self-normalized version of the local linear estimator. A simulation study illustrates the usefulness of our subsampling inference for investigating extremal phenomena.
Daisuke Kurisu
Taisuke Otsu
Quantile regression, Extreme value theory, Point process, Subsampling
2021-09
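As a rough illustration of the estimator studied in the paper above, the following sketch runs a local linear quantile regression at a single point in a far tail. The kernel, bandwidth, data-generating process, and optimizer are my own choices for the example, not the authors':

```python
# Sketch: local linear quantile regression at a point x0 for an extreme tau.
# Minimizes the kernel-weighted check loss; all tuning choices are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0, 1, n)
y = 1 + 2 * x + rng.standard_t(df=3, size=n)  # thick-tailed errors

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def local_linear_quantile(x, y, x0, tau, h):
    # Epanechnikov kernel weights around x0
    z = (x - x0) / h
    w = np.where(np.abs(z) <= 1, 0.75 * (1 - z**2), 0.0)
    def objective(theta):
        a, b = theta
        return np.sum(w * check_loss(y - a - b * (x - x0), tau))
    res = minimize(objective, x0=np.array([np.quantile(y, tau), 0.0]),
                   method="Nelder-Mead")
    return res.x[0]  # estimated conditional tau-quantile at x0

q_hat = local_linear_quantile(x, y, x0=0.5, tau=0.95, h=0.2)
```

The paper's subsampling procedure would then be applied to a self-normalized version of this estimator rather than plugging in estimated nuisance parameters.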
Standard Errors for Calibrated Parameters
http://d.repec.org/n?u=RePEc:arx:papers:2109.08109&r=&r=ecm
Calibration, the practice of choosing the parameters of a structural model to match certain empirical moments, can be viewed as minimum distance estimation. Existing standard error formulas for such estimators require a consistent estimate of the correlation structure of the empirical moments, which is often unavailable in practice. Instead, the variances of the individual empirical moments are usually readily estimable. Using only these variances, we derive conservative standard errors and confidence intervals for the structural parameters that are valid even under the worst-case correlation structure. In the over-identified case, we show that the moment weighting scheme that minimizes the worst-case estimator variance amounts to a moment selection problem with a simple solution. Finally, we develop tests of over-identifying or parameter restrictions. We apply our methods empirically to a model of menu cost pricing for multi-product firms and to a heterogeneous agent New Keynesian model.
Matthew D. Cocci
Mikkel Plagborg-Møller
2021-09
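A small numerical sketch of the conservative idea in the paper above: if the estimator is approximately a linear combination of empirical moments whose variances are known but whose correlations are not, the variance is maximized when the moments are perfectly correlated with signs aligned with the weights. The weights and moment standard errors below are made-up numbers for illustration:

```python
# Worst-case SE sketch: theta_hat ~ sum_j lam_j * m_hat_j with unknown moment
# correlations gives SE_worst = sum_j |lam_j| * sigma_j. Numbers are made up.
import numpy as np

lam = np.array([0.8, -0.5, 0.3])      # hypothetical influence weights
sigma = np.array([0.10, 0.20, 0.05])  # estimated moment standard errors

se_worst = np.sum(np.abs(lam) * sigma)

# Sanity check against random valid correlation matrices: none can exceed it.
rng = np.random.default_rng(1)
best = 0.0
for _ in range(2000):
    A = rng.standard_normal((3, 3))
    C = A @ A.T
    d = np.sqrt(np.diag(C))
    R = C / np.outer(d, d)            # a random correlation matrix
    V = np.outer(sigma, sigma) * R
    best = max(best, float(lam @ V @ lam))
assert best <= se_worst**2 + 1e-12
```

The paper's full procedure additionally handles the over-identified case, where the moment weighting itself is chosen to minimize this worst-case variance.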
Semiparametric Estimation of Treatment Effects in Randomized Experiments
http://d.repec.org/n?u=RePEc:arx:papers:2109.02603&r=&r=ecm
We develop new semiparametric methods for estimating treatment effects. We focus on a setting where the outcome distributions may be thick-tailed, treatment effects are small, sample sizes are large, and assignment is completely random. This setting is of particular interest in recent experimentation in tech companies. We propose using parametric models for the treatment effects, as opposed to parametric models for the full outcome distributions. This leads to semiparametric models for the outcome distributions. We derive the semiparametric efficiency bound for this setting and propose efficient estimators. In the case with a constant treatment effect, one of the proposed estimators has an interesting interpretation as a weighted average of quantile treatment effects, with the weights proportional to (minus) the second derivative of the log of the density of the potential outcomes. Our analysis also results in an extension of Huber's model and trimmed mean to include asymmetry, and in a simplified condition on linear combinations of order statistics, which may be of independent interest.
Susan Athey
Peter J. Bickel
Aiyou Chen
Guido W. Imbens
Michael Pollmann
2021-09
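A worked special case of the weighting result in the abstract above (my own illustration, not taken from the paper): for normal potential outcomes the weights proportional to minus the second derivative of the log density are constant, so the estimator reduces to an unweighted average of quantile treatment effects.

```latex
% Normal case: -(log f)'' is constant, so the quantile weights are flat.
f(y) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(y-\mu)^2}{2\sigma^2}\right),
\qquad
\log f(y) = -\frac{(y-\mu)^2}{2\sigma^2} + \mathrm{const},
\qquad
-\frac{d^2}{dy^2}\log f(y) = \frac{1}{\sigma^2}.
```

For thick-tailed densities the second derivative of the log density shrinks in the tails, so extreme quantiles receive less weight, which is the intuition behind the efficiency gain in this setting.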
Inference in the Nonparametric Stochastic Frontier Model
http://d.repec.org/n?u=RePEc:aiz:louvad:2021029&r=&r=ecm
This paper is the first in the literature to discuss in detail how to conduct various types of inference in the stochastic frontier model when it is estimated using non-parametric methods. We discuss a general and versatile inferential technique that allows for a range of practical hypotheses of interest to be tested. We also discuss several challenges that currently exist in this framework in an effort to alert researchers to potential pitfalls. Namely, it appears that when one wishes to estimate a stochastic frontier in a fully non-parametric framework, separability between inputs and determinants of inefficiency is an essential ingredient for the correct empirical size of a test. We showcase the performance of the test with a variety of Monte Carlo simulations.
Parmeter, Christopher F.
Simar, LĂ©opold
Van Keilegom, Ingrid
Zelenyuk, Valentin
Stochastic Frontier Analysis, Efficiency, Productivity Analysis, Local-Polynomial Least-Squares
2021-09-09
Bias-Adjusted Treatment Effects Under Equal Selection
http://d.repec.org/n?u=RePEc:ums:papers:2021-05&r=&r=ecm
In a recent contribution, Oster (2019) has proposed a method to generate bounds on treatment effects in the presence of unobservable confounders. The method can only be implemented if a crucial problem of non-uniqueness is addressed. In this paper I demonstrate that one of the proposed methods to address non-uniqueness, which relies on computing bias-adjusted treatment effects under the assumption of equal selection on observables and unobservables, is problematic on several counts. First, additional assumptions, which cannot be justified on theoretical grounds, are needed to ensure a unique solution; second, the method will not work when the estimate of the treatment effect declines with the addition of controls; and third, the solution, and therefore conclusions about bias, can change dramatically if we deviate from equal selection even by a small magnitude.
Deepankar Basu
treatment effect, omitted variable bias
2021
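For context on the object being critiqued above, here is a sketch of the widely used approximate bias-adjusted treatment effect from Oster (2019); the exact version solves a cubic, and the paper's critique concerns pinning down a unique solution. All numbers below are invented for illustration:

```python
# Sketch of Oster's (2019) approximate bias-adjusted treatment effect.
# beta_s, R_s: short regression (no controls); beta_i, R_i: intermediate
# regression (all observable controls); delta: degree of proportional
# selection; R_max: R-squared of the hypothetical long regression.
def bias_adjusted_beta(beta_s, R_s, beta_i, R_i, delta, R_max):
    # beta* ~= beta_i - delta * (beta_s - beta_i) * (R_max - R_i) / (R_i - R_s)
    return beta_i - delta * (beta_s - beta_i) * (R_max - R_i) / (R_i - R_s)

b = bias_adjusted_beta(beta_s=0.50, R_s=0.10, beta_i=0.40, R_i=0.30,
                       delta=1.0, R_max=0.50)
```

Note how the formula presumes the estimate moves toward zero as controls are added (beta_s > beta_i here); the paper argues the approach breaks down when that pattern fails.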
The tenets of indirect inference in Bayesian models
http://d.repec.org/n?u=RePEc:osf:osfxxx:enzgs&r=&r=ecm
This paper extends the application of Bayesian inference to probability distributions defined in terms of their quantile functions. We describe the method of *indirect likelihood* to be used in Bayesian models with sampling distributions that lack an explicit cumulative distribution function. We provide examples and demonstrate the equivalence of the "quantile-based" (indirect) likelihood to the conventional "density-defined" (direct) likelihood. We consider practical aspects of the numerical inversion of the quantile function by root-finding, as required by the indirect likelihood method. In particular, we consider the problem of ensuring the validity of an arbitrary quantile function with the help of Chebyshev polynomials, and provide useful tips and implementations of these algorithms in Stan and R. We also extend the same method to propose the definition of an *indirect prior* and discuss the situations where it can be useful.
Perepolkin, Dmytro
Goodrich, Benjamin
Sahlin, Ullrika
2021-09-09
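The indirect-likelihood idea in the paper above can be sketched in a few lines: when a distribution is specified only by its quantile function Q(p), the density at a point x is 1/Q'(p*) where p* solves Q(p*) = x. The exponential distribution is used here purely as a check case, since its density is also available in closed form (the paper's implementations are in Stan and R, not Python):

```python
# Indirect ("quantile-based") likelihood sketch: f(x) = 1 / Q'(p*), Q(p*) = x.
import math
from scipy.optimize import brentq

lam = 2.0
def Q(p):
    return -math.log1p(-p) / lam          # exponential quantile function

def q_density(p):
    return 1.0 / (lam * (1.0 - p))        # Q'(p), the quantile density

def indirect_pdf(x):
    p_star = brentq(lambda p: Q(p) - x, 1e-12, 1 - 1e-12)  # invert Q by root-finding
    return 1.0 / q_density(p_star)        # f(x) = 1 / Q'(p*)

x = 0.7
indirect = indirect_pdf(x)
direct = lam * math.exp(-lam * x)         # closed-form exponential pdf
```

The two values agree, illustrating the equivalence of the indirect and direct likelihoods that the paper establishes in general.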
Bounding Sets for Treatment Effects with Proportional Selection
http://d.repec.org/n?u=RePEc:ums:papers:2021-10&r=&r=ecm
In linear econometric models with proportional selection on unobservables, the omitted variable bias in an estimated treatment effect is a root of a cubic equation involving estimated parameters from a short and an intermediate regression, the former excluding and the latter including all observable controls. The roots of the cubic are functions of delta, the degree of proportional selection on unobservables, and R_max, the R-squared in a hypothetical long regression that includes the unobservable confounder and all observable controls. In this paper, a simple method is proposed to compute the roots of the cubic over meaningful regions of the delta-R_max plane and to use the roots to construct bounding sets for the true treatment effect. The proposed method is illustrated with both a simulated and an observational data set.
Deepankar Basu
treatment effect, omitted variable bias
2021
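The computation the paper proposes can be sketched generically: sweep a grid over the delta-R_max plane, solve the cubic at each point, and collect the real roots into a bounding set. The coefficient mapping below is an arbitrary placeholder, not Basu's actual formulas, which depend on the short- and intermediate-regression estimates:

```python
# Sketch: real roots of a cubic over a (delta, R_max) grid, collected into a
# bounding set. Coefficients are placeholders, NOT the paper's formulas.
import numpy as np

def real_roots(coeffs):
    r = np.roots(coeffs)                  # all (complex) roots of the cubic
    return r.real[np.abs(r.imag) < 1e-8]  # keep the real ones

deltas = np.linspace(0.5, 1.5, 21)
R_maxs = np.linspace(0.4, 0.9, 21)
bias_set = []
for d in deltas:
    for rm in R_maxs:
        coeffs = [d, -rm, 0.2 * d, -0.05 * rm]  # hypothetical mapping
        bias_set.extend(real_roots(coeffs).tolist())

lo, hi = min(bias_set), max(bias_set)     # bounding interval for the bias
```

Every real cubic has at least one real root, so each grid point contributes to the set; multiple real roots at a single grid point are the non-uniqueness issue discussed in the companion paper above.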
Optimal transport weights for causal inference
http://d.repec.org/n?u=RePEc:arx:papers:2109.01991&r=&r=ecm
Weighting methods are a common tool to de-bias estimates of causal effects. Although there is an increasing number of seemingly disparate methods, many of them can be folded into one unifying regime: causal optimal transport. This new method directly targets distributional balance by minimizing optimal transport distances between treatment and control groups or, more generally, between a source and a target population. Our approach is model-free but can also incorporate moments or any other functions of the covariates that the researcher wishes to balance. We find that causal optimal transport outperforms competing methods when both the propensity score and outcome models are misspecified, indicating it is a robust alternative to common weighting methods. Finally, we demonstrate the utility of our method in an external control study examining the effect of misoprostol versus oxytocin for the treatment of post-partum hemorrhage.
Eric Dunipace
2021-09
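A stripped-down version of the idea above: choose weights on the control sample by minimizing an optimal transport distance to the treated sample, posed as a linear program over the transport plan. With the control-side masses left fully free (as here) the LP reduces to matching each treated unit to its closest control; the paper's actual method adds balance constraints and penalties to avoid such degenerate solutions, so this is an illustration only:

```python
# Sketch: OT-based weights for controls via a linear program. Each treated
# unit receives mass 1/m; control masses are free and become the weights.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, size=8)   # source (control) covariates
treated = rng.normal(0.5, 1.0, size=6)   # target (treated) covariates
n, m = len(control), len(treated)
cost = (control[:, None] - treated[None, :]) ** 2  # squared-distance cost

A_eq, b_eq = [], []
for j in range(m):
    col = np.zeros(n * m)
    col[j::m] = 1.0                      # column j of the flattened plan
    A_eq.append(col)
    b_eq.append(1.0 / m)

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
plan = res.x.reshape(n, m)
weights = plan.sum(axis=1)               # weights on control units, sum to 1
```

Using these weights, a weighted mean of control outcomes estimates the counterfactual untreated mean for the treated group.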
Evaluating forecast performance with state dependence
http://d.repec.org/n?u=RePEc:upf:upfgen:1800&r=&r=ecm
We propose a novel forecast evaluation methodology to assess models' absolute and relative forecasting performance when it is a state-dependent function of economic variables. In our framework, the forecasting performance, measured by a forecast error loss function, is modeled via a hard or smooth threshold model with unknown threshold values. Existing tests either assume a constant out-of-sample forecast performance or use non-parametric techniques robust to time-variation; consequently, they may lack power against state-dependent predictability. Our tests can be applied to relative forecast comparisons, forecast encompassing, forecast efficiency, and, more generally, moment-based tests of forecast evaluation. Monte Carlo results suggest that our proposed tests perform well in finite samples and have better power than existing tests in selecting the best forecast or assessing its efficiency in the presence of state dependence. Our tests uncover "pockets of predictability" in U.S. equity premia; although the term spread is not a useful predictor on average over the sample, it forecasts significantly better than the benchmark forecast when real GDP growth is low. In addition, we find that leading indicators, such as measures of vacancy postings and new orders for durable goods, improve the forecasts of U.S. industrial production when financial conditions are tight.
Florens Odendahl
Barbara Rossi
Tatevik Sekhposyan
State dependence, forecast evaluation, predictive ability testing, moment-based tests; pockets of predictability
2021-07
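The core threshold idea in the paper above can be sketched simply: split the loss differential between two forecasts by whether a state variable falls below a candidate threshold, and compare regime means. A real test would search over the threshold with proper (e.g., HAC-robust) inference; the data-generating process and grid here are invented for illustration:

```python
# Sketch: state-dependent forecast comparison via a hard threshold on s_t.
# d_t = L(e1_t) - L(e2_t); model 2 is better only when the state is low.
import numpy as np

rng = np.random.default_rng(0)
T = 400
s = rng.normal(size=T)                        # state variable (e.g., GDP growth)
d = np.where(s < 0, 0.5, 0.0) + rng.normal(scale=1.0, size=T)

def regime_means(d, s, c):
    low = s <= c
    return d[low].mean(), d[~low].mean()

grid = np.quantile(s, np.linspace(0.15, 0.85, 15))  # candidate thresholds
gaps = [abs(np.subtract(*regime_means(d, s, c))) for c in grid]
c_hat = grid[int(np.argmax(gaps))]            # threshold with the largest gap
m_low, m_high = regime_means(d, s, c_hat)
```

A constant-performance test averaging over all of d would miss this pattern, which is the "pockets of predictability" point the paper makes for the term spread and U.S. equity premia.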
Predictability of Aggregated Time Series
http://d.repec.org/n?u=RePEc:wlu:lcerpa:bm0127&r=&r=ecm
Macroeconomic series are often aggregated from higher-frequency data. We show that this seemingly innocent feature has far-reaching consequences for the predictability of such series. First, the series are predictable by construction. Second, conventional tests of predictability are less informative about the data-generating process than frequently assumed. Third, a simple improvement to the conventional test leads to a sizeable correction, making it necessary to re-evaluate existing forecasting approaches. Fourth, forecasting models should be estimated with end-of-period observations even when the goal is to forecast the aggregated series. We highlight the relevance of these insights for forecasts of several macroeconomic variables.
Reinhard Ellwanger
Stephen Snudden
Forecasting and Prediction Methods, Interest Rates, Exchange Rates, Asset Prices, Oil Prices, Commodity Prices
2021
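The first point above, that aggregated series are predictable by construction, is a Working (1960)-style effect and is easy to reproduce in simulation (my own illustration, not the authors' code): averaging a random walk within periods makes the growth rate of the aggregate autocorrelated, approaching 0.25 as the number of sub-period observations grows, while end-of-period growth remains white noise:

```python
# Sketch: temporal aggregation of a random walk induces autocorrelation in
# growth rates of period averages, but not of end-of-period observations.
import numpy as np

rng = np.random.default_rng(0)
k, T = 3, 200_000                              # k high-frequency obs per period
x = np.cumsum(rng.standard_normal(k * T))      # random walk
avg = x.reshape(T, k).mean(axis=1)             # period averages
eop = x.reshape(T, k)[:, -1]                   # end-of-period observations

def acf1(z):
    z = z - z.mean()
    return float(np.dot(z[1:], z[:-1]) / np.dot(z, z))

rho_avg = acf1(np.diff(avg))    # positive by construction (~0.25 as k grows)
rho_eop = acf1(np.diff(eop))    # ~ 0: end-of-period growth is unpredictable
```

This is also why the paper recommends estimating forecasting models with end-of-period observations even when the object of interest is the aggregated series.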
A Framework for Using Value-Added in Regressions
http://d.repec.org/n?u=RePEc:arx:papers:2109.01741&r=&r=ecm
Estimated value-added (VA) measures have become popular metrics of worker and institutional quality among economists and policy makers, and recent studies increasingly use these measures either as dependent or explanatory variables in regressions. For example, VA is used as an explanatory variable when examining the relationship between teacher VA and students' long-run outcomes. Due to the multi-step nature of VA estimation and the correlations between the observable characteristics of the students and true teacher quality, the standard errors researchers routinely use when including VA measures in OLS regressions are incorrect. In this paper, I construct correct standard errors for regressions that use VA as an explanatory variable and for regressions where VA is the outcome. I do so by showing how the assumptions underpinning VA models naturally lead to a generalized method of moments (GMM) framework. I propose corrected standard error estimators derived using GMM, and discuss the need to adjust standard errors under different sets of assumptions. Finally, I show that models using VA as an explanatory variable can be written as overidentified systems resembling instrumental variable systems, and propose a more efficient estimator.
Antoine Deeb
2021-09
Structural Estimation of Matching Markets with Transferable Utility
http://d.repec.org/n?u=RePEc:arx:papers:2109.07932&r=&r=ecm
This paper provides an introduction to structural estimation methods for matching markets with transferable utility.
Alfred Galichon
Bernard Salanié
2021-09
Geographic Difference-in-Discontinuities
http://d.repec.org/n?u=RePEc:arx:papers:2109.07406&r=&r=ecm
A recent econometric literature has critiqued the use of regression discontinuities where administrative borders serve as the 'cutoff'. Identification in this context is difficult since multiple treatments can change at the cutoff and individuals can easily sort on either side of the border. This note extends the difference-in-discontinuities framework discussed in Grembi et al. (2016) to a geographic setting. The paper formalizes the identifying assumptions in this context, which allow for the removal of time-invariant sorting and compound treatments, similar to the difference-in-differences methodology.
Kyle Butts
2021-09
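The basic estimator behind the framework above is the post-period RD jump at the border minus the pre-period jump, which differences out a time-invariant border discontinuity. The simulation, bandwidth, and local-linear fit below are my own illustrative choices:

```python
# Sketch: geographic difference-in-discontinuities. A fixed border jump of
# 1.0 exists in both periods; treatment adds 0.5 in the post period only.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
dist = rng.uniform(-1, 1, n)            # signed distance to the border
post = rng.integers(0, 2, n)            # before/after indicator
treat_side = (dist >= 0).astype(float)
y = (2.0 * dist + 1.0 * treat_side + 0.5 * treat_side * post
     + rng.normal(scale=0.3, size=n))

def rd_jump(dist, y, h=0.25):
    # local-linear fit on each side of the cutoff, rectangular kernel
    def side_intercept(mask):
        X = np.column_stack([np.ones(mask.sum()), dist[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]                  # fitted value at the cutoff
    right = (dist >= 0) & (dist <= h)
    left = (dist < 0) & (dist >= -h)
    return side_intercept(right) - side_intercept(left)

jump_post = rd_jump(dist[post == 1], y[post == 1])
jump_pre = rd_jump(dist[post == 0], y[post == 0])
diff_in_disc = jump_post - jump_pre     # recovers the 0.5 treatment jump
```

Each single-period jump is contaminated by the fixed border discontinuity; only the difference isolates the treatment effect, which is the identification point the note formalizes.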