NEP: New Economics Papers on Econometrics
By: | Jun Ma; Vadim Marmer; Zhengfei Yu |
Abstract: | In nonseparable triangular models with a binary endogenous treatment and a binary instrumental variable, Vuong and Xu (2017) show that the individual treatment effects (ITEs) are identifiable. Feng, Vuong and Xu (2019) show that a kernel density estimator that uses nonparametrically estimated ITEs as observations is uniformly consistent for the density of the ITE. In this paper, we establish the asymptotic normality of the density estimator of Feng, Vuong and Xu (2019) and show that, despite their faster rate of convergence, the ITEs' estimation errors have a non-negligible effect on the asymptotic distribution of the density estimator. We propose asymptotically valid standard errors for the density of the ITE that account for the ITEs' estimation errors, together with a bias correction. Furthermore, we develop uniform confidence bands for the density of the ITE using nonparametric or jackknife multiplier bootstrap critical values. Our uniform confidence bands have asymptotically correct coverage probabilities with polynomial error rates and can be used for inference on the shape of the ITE's distribution. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.05559&r= |
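A minimal sketch of the second stage above: a kernel density estimator evaluated at already-estimated ITEs. The construction of the ITE estimates, the corrected standard errors, and the bias correction are the paper's contributions and are not reproduced; the naive standard error below is valid only if the ITEs were observed without error, which is exactly the assumption the paper shows fails. Names and data are illustrative.

```python
import numpy as np

def ite_density(delta_hat, grid, h):
    """Gaussian-kernel density estimate over estimated ITEs.

    delta_hat : estimated individual treatment effects (from a first stage)
    grid      : evaluation points
    h         : bandwidth
    """
    u = (grid[:, None] - delta_hat[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    f_hat = K.mean(axis=1) / h
    # Naive pointwise SE; Ma, Marmer and Yu show the ITEs' estimation
    # errors change the asymptotic variance, so this is illustrative only.
    roughness = 1 / (2 * np.sqrt(np.pi))   # integral of K^2, Gaussian kernel
    se_naive = np.sqrt(f_hat * roughness / (delta_hat.size * h))
    return f_hat, se_naive

# hypothetical first-stage output
rng = np.random.default_rng(0)
f_hat, se = ite_density(rng.normal(1.0, 0.5, 500), np.linspace(-1, 3, 81), h=0.2)
```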
By: | Claudia Noack; Tomasz Olma; Christoph Rothe |
Abstract: | Empirical regression discontinuity (RD) studies often use covariates to increase the precision of their estimates. In this paper, we propose a novel class of estimators that use such covariate information more efficiently than the linear adjustment estimators that are currently used widely in practice. Our approach can accommodate a possibly large number of either discrete or continuous covariates. It involves running a standard RD analysis with an appropriately modified outcome variable, which takes the form of the difference between the original outcome and a function of the covariates. We characterize the function that leads to the estimator with the smallest asymptotic variance, and show how it can be estimated via modern machine learning, nonparametric regression, or classical parametric methods. The resulting estimator is easy to implement, as tuning parameters can be chosen as in a conventional RD analysis. An extensive simulation study illustrates the performance of our approach. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.07942&r= |
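For concreteness, a stylized sketch of the modified-outcome approach: fit an adjustment function of the covariates (here by plain least squares, the "linear adjustment" baseline; the paper shows how machine learning or nonparametric regression can be used instead to minimize the asymptotic variance), subtract it from the outcome, and run a standard local linear RD on the result. Function names and tuning choices are illustrative, not the paper's.

```python
import numpy as np

def rd_covariate_adjusted(y, z, X, cutoff=0.0, h=1.0):
    """Sharp RD estimate using a covariate-adjusted outcome M = Y - f(X)."""
    zc = z - cutoff
    d = (zc >= 0).astype(float)
    # Linear adjustment: regress Y on the covariates only (not on Z),
    # pooled across both sides of the cutoff.
    XX = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(XX, y, rcond=None)
    m = y - XX @ beta                      # modified outcome
    # Standard local linear RD step with a triangular kernel.
    w = np.clip(1 - np.abs(zc) / h, 0, None)
    keep = w > 0
    R = np.column_stack([np.ones(keep.sum()), d[keep], zc[keep], d[keep] * zc[keep]])
    sw = np.sqrt(w[keep])
    coef, *_ = np.linalg.lstsq(R * sw[:, None], m[keep] * sw, rcond=None)
    return coef[1]                          # jump in E[M | Z] at the cutoff
```

Because the same function of the covariates is subtracted on both sides of the cutoff, the modification leaves the estimand unchanged while, for a good choice of the adjustment, reducing the residual variance.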
By: | Javier Espinosa; Christian Hennig |
Abstract: | The proportional odds cumulative logit model (POCLM) is a standard regression model for an ordinal response. Ordinality of predictors can be incorporated through monotonicity constraints on the corresponding parameters. It is shown that estimators defined by optimization, such as maximum likelihood estimators, for an unconstrained model and for parameters in the interior of the parameter space of a constrained model are asymptotically equivalent. This is used to derive asymptotic confidence regions and tests for the constrained model, involving simple modifications for finite samples. The finite-sample coverage probability of the confidence regions is investigated by simulation. Tests concern the effect of individual variables, monotonicity, and a specified monotonicity direction. The methodology is applied to real data related to the assessment of school performance. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.04946&r= |
By: | Andrew Chesher (Institute for Fiscal Studies and University College London); Adam Rosen (Institute for Fiscal Studies and Duke University) |
Abstract: | Models of simultaneous discrete choice may be incomplete, delivering multiple values of outcomes at certain values of the latent variables and covariates, and incoherent, delivering no values. Alternative approaches to accommodating incompleteness and incoherence are considered in a unifying framework afforded by the Generalized Instrumental Variable models introduced in Chesher and Rosen (2017). Sharp identification regions for parameters and functions of interest defined by systems of conditional moment equalities and inequalities are provided. Almost all empirical analysis of simultaneous discrete choice uses models that include parametric specifications of the distribution of unobserved variables. The paper provides characterizations of identified sets and outer regions for structural functions and parameters allowing for any distribution of unobservables independent of exogenous variables. The methods are applied to the models and data of Mazzeo (2002) and Kline and Tamer (2016) in order to study the sensitivity of empirical results to restrictions on equilibrium selection and the distribution of unobservable payoff shifters, respectively. Confidence intervals for individual parameter components are provided using a recently developed inference approach from Belloni, Bugni, and Chernozhukov (2018). The relaxation of equilibrium selection and distributional restrictions in these applications is found to greatly increase the width of resulting confidence intervals, but nonetheless the models continue to sign strategic interaction parameters. |
Date: | 2020–02–20 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:9/20&r= |
By: | Daisuke Kurisu; Taisuke Otsu |
Abstract: | By utilizing intermediate Gaussian approximations, this paper establishes asymptotic linear representations of nonparametric deconvolution estimators for the classical measurement error model with repeated measurements. Our result is applied to derive confidence bands for the density and distribution functions of the error-free variable of interest and to establish faster convergence rates of the estimators than the ones obtained in the existing literature. |
Keywords: | Measurement error, Deconvolution, Confidence band |
JEL: | C14 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:615&r= |
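The repeated-measurements setting can be made concrete with the textbook deconvolution density estimator: with y1 = X + e1 and y2 = X + e2 and symmetric iid errors, the error characteristic function is recovered from y1 - y2, and the density of X follows by Fourier inversion. This sketch is not the paper's estimator (which establishes asymptotic linear representations, faster rates, and confidence bands); kernel and regularization choices are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid

def deconv_density(y1, y2, x_grid, h):
    """Deconvolution KDE of f_X from repeated measurements y1, y2 = X + error.

    Assumes symmetric iid errors independent of X, so that
    phi_{y1-y2}(t) = |phi_e(t)|^2 identifies the error char. function.
    """
    t = np.linspace(-1 / h, 1 / h, 501)
    y = np.concatenate([y1, y2])
    phi_y = np.exp(1j * np.outer(t, y)).mean(axis=1)             # cf of Y
    phi_d = np.exp(1j * np.outer(t, y1 - y2)).mean(axis=1).real  # cf of y1-y2
    phi_e = np.sqrt(np.clip(phi_d, 1e-3, None))                  # crude floor
    phi_K = (1 - (t * h) ** 2) ** 3           # kernel cf, supported on |th| <= 1
    integrand = phi_K * phi_y / phi_e
    # Fourier inversion: f_X(x) = (2*pi)^{-1} * int e^{-itx} phi_X(t) phi_K(th) dt
    vals = trapezoid(np.exp(-1j * np.outer(x_grid, t)) * integrand[None, :], t, axis=1)
    return np.clip(vals.real / (2 * np.pi), 0, None)
```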
By: | Pasha Andreyanov; Grigory Franguridi |
Abstract: | In a classic model of the first-price auction, we propose a nonparametric estimator of the quantile function of bidders' valuations, based on weighted bid spacings. We derive the Bahadur-Kiefer expansion of this estimator with a pivotal influence function and an explicit uniform remainder rate. This expansion allows us to develop a simple algorithm for the associated uniform confidence bands that does not rely on the bootstrap. Monte Carlo experiments show satisfactory statistical and computational performance of the estimator and the confidence bands. Estimation and inference for related functionals are also considered. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.13856&r= |
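For concreteness, a sketch of a spacings-based estimator in the symmetric independent-private-values benchmark, where the bidder's first-order condition gives the valuation quantile function v(q) = b(q) + q b'(q) / (I - 1) and b'(q) can be estimated by rescaled spacings of the ordered bids. The paper's exact weighting, the Bahadur-Kiefer expansion, and the bootstrap-free bands are its contributions and are not reproduced here.

```python
import numpy as np

def valuation_quantile(bids, n_bidders):
    """Estimate the valuation quantile function from pooled first-price bids.

    Uses v(q) = b(q) + q * b'(q) / (I - 1), approximating the bid quantile
    density b'(q) by rescaled spacings of the order statistics.
    """
    b = np.sort(np.asarray(bids, dtype=float))
    n = b.size
    q = np.arange(1, n) / n          # interior quantile levels
    db = n * np.diff(b)              # spacings-based estimate of b'(q)
    return q, b[:-1] + q * db / (n_bidders - 1)
```

In practice raw spacings are noisy, which is precisely why the weighting scheme and the uniform remainder-rate theory in the paper matter.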
By: | Florian Huber; Gary Koop |
Abstract: | Macroeconomists using large datasets often face the choice of working with either a large Vector Autoregression (VAR) or a factor model. In this paper, we develop methods for combining the two using a subspace shrinkage prior. Subspace priors shrink towards a class of functions rather than directly forcing the parameters of a model towards some pre-specified location. We develop a conjugate VAR prior that shrinks towards the subspace defined by a factor model. Our approach allows for estimating the strength of the shrinkage as well as the number of factors. After establishing the theoretical properties of our proposed prior, we carry out simulations and apply it to US macroeconomic data. The simulations show that our framework successfully detects the number of factors. In a forecasting exercise involving a large macroeconomic dataset, we find that combining VARs with factor models using our prior can lead to forecast improvements. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.07804&r= |
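A loose frequentist caricature of the idea, assuming nothing from the paper beyond "shrink a VAR toward a factor structure": pull the OLS VAR(1) coefficient matrix toward its best rank-k approximation. The paper instead formulates a conjugate Bayesian subspace prior and estimates both the shrinkage weight and the number of factors; the names below are hypothetical.

```python
import numpy as np

def var_subspace_shrinkage(Y, lam, k):
    """Shrink an OLS VAR(1) coefficient matrix toward a rank-k (factor-like)
    approximation: B = (1 - lam) * B_ols + lam * B_rank_k.

    Y   : T x N array of observations
    lam : shrinkage weight in [0, 1]
    k   : number of factors defining the subspace
    """
    X, Z = Y[1:], Y[:-1]                       # regress y_t on y_{t-1}
    B_ols, *_ = np.linalg.lstsq(Z, X, rcond=None)
    U, s, Vt = np.linalg.svd(B_ols, full_matrices=False)
    B_rank_k = (U[:, :k] * s[:k]) @ Vt[:k]     # best rank-k approximation
    return (1 - lam) * B_ols + lam * B_rank_k
```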
By: | Shosei Sakaguchi |
Abstract: | This paper studies identification and inference in transformation models with endogenous censoring. Many kinds of duration models, such as the accelerated failure time model, the proportional hazard model, and the mixed proportional hazard model, can be viewed as transformation models. We allow the censoring of a duration outcome to be arbitrarily correlated with observed covariates and unobserved heterogeneity. We impose no parametric restrictions on either the transformation function or the distribution function of the unobserved heterogeneity. In this setting, we develop bounds on the regression parameters and the transformation function, characterized by conditional moment inequalities involving U-statistics. We provide inference methods by constructing an inference approach for conditional moment inequality models in which the sample analogs of moments are U-statistics. We apply the proposed inference methods to evaluate the effect of heart transplants on patients' survival time using data from the Stanford Heart Transplant Study. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.00928&r= |
By: | Marc Hallin; Daniel Hlubinka; Sarka Hudecova |
Abstract: | Extending rank-based inference to a multivariate setting such as multiple-output regression or MANOVA with unspecified d-dimensional error density has remained an open problem for more than half a century. None of the many solutions proposed so far enjoys the combination of distribution-freeness and efficiency that makes rank-based inference a successful tool in the univariate setting. A concept of center-outward multivariate ranks and signs based on measure transportation ideas has been introduced recently. Center-outward ranks and signs are not only distribution-free but achieve in dimension d > 1 the (essential) maximal ancillarity property of traditional univariate ranks. In this paper, we show that fully distribution-free testing procedures based on center-outward ranks can achieve parametric efficiency. We establish the Hájek representation and asymptotic normality results required in the construction of such tests in multiple-output regression and MANOVA models. Simulations and an empirical study demonstrate the excellent performance of the proposed procedures. |
Keywords: | Distribution-free tests; Multivariate ranks; Multivariate signs; Hájek representation. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/327641&r= |
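A small sketch of how empirical center-outward ranks can be computed in dimension 2, following the measure-transportation construction the abstract refers to: the sample is optimally matched (squared Euclidean cost) to a regular grid on the unit disk, and the radius and direction of each point's grid image give its center-outward rank and sign. Grid sizes and names are illustrative; the sketch requires len(X) == n_radii * n_dirs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def center_outward_ranks(X, n_radii, n_dirs):
    """Empirical center-outward ranks/signs of a 2-d sample X (n x 2)."""
    radii = np.arange(1, n_radii + 1) / (n_radii + 1)
    angles = 2 * np.pi * np.arange(n_dirs) / n_dirs
    grid = np.array([[r * np.cos(a), r * np.sin(a)]
                     for r in radii for a in angles])
    cost = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)
    _, cols = linear_sum_assignment(cost)      # optimal coupling with the grid
    assigned = grid[cols]                      # grid image of each observation
    ranks = np.linalg.norm(assigned, axis=1)   # radial part: rank in (0, 1)
    signs = assigned / ranks[:, None]          # directional part: sign
    return ranks, signs
```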
By: | Carsten Chong; Thomas Delerue; Guoying Li |
Abstract: | We consider the problem of estimating volatility for high-frequency data when the observed process is the sum of a continuous Itô semimartingale and a noise process that locally behaves like fractional Brownian motion with Hurst parameter H. The resulting class of processes, which we call mixed semimartingales, generalizes the mixed fractional Brownian motion introduced by Cheridito [Bernoulli 7 (2001) 913-934] to time-dependent and stochastic volatility. Based on central limit theorems for variation functionals, we derive consistent estimators and asymptotic confidence intervals for H and the integrated volatilities of both the semimartingale and the noise part, in all cases where these quantities are identifiable. When applied to recent stock price data, we find strong empirical evidence for the presence of fractional noise, with Hurst parameters H that vary considerably over time and between assets. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.16149&r= |
By: | Bo E. Honoré; Chris Muris; Martin Weidner |
Abstract: | We study a dynamic ordered logit model for panel data with fixed effects. We establish the validity of a set of moment conditions that are free of the fixed effects and that can be computed using four or more periods of data. We establish sufficient conditions for these moment conditions to identify the regression coefficients, the autoregressive parameters, and the threshold parameters. The parameters can be estimated using generalized method of moments. We document the performance of this estimator using Monte Carlo simulations and an empirical illustration to self-reported health status using the British Household Panel Survey. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.03253&r= |
By: | Jeffrey D. Michler; Anna Josephson |
Abstract: | We provide a review of recent developments in the calculation of standard errors and test statistics for statistical inference. While much of the focus of the last two decades in economics has been on generating unbiased coefficients, recent years have seen a variety of advances in correcting for non-standard standard errors. We synthesize these recent advances in addressing challenges to conventional inference, such as heteroskedasticity, clustering, serial correlation, and testing multiple hypotheses. We also discuss recent advances in numerical methods, such as the bootstrap, the wild bootstrap, and randomization inference. We make three specific recommendations. First, applied economists need to clearly articulate the challenges to statistical inference that are present in their data as well as the source of those challenges. Second, modern computing power and statistical software mean that applied economists have no excuse for not correctly calculating their standard errors and test statistics. Third, because complicated sampling strategies and research designs make it difficult to work out the correct formula for standard errors and test statistics, we believe it should become standard practice in applied economics to rely on asymptotic refinements to the distribution of an estimator or test statistic via bootstrapping. Throughout, we reference built-in and user-written Stata commands that allow one to quickly calculate accurate standard errors and relevant test statistics. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.09736&r= |
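One procedure the review covers, the wild cluster bootstrap with the null imposed, admits a compact sketch. This is a bare-bones illustration (Rademacher weights, no small-sample corrections), assuming a linear model with one tested coefficient; for serious work the review points to built-in and user-written Stata commands.

```python
import numpy as np

def wild_cluster_bootstrap_p(y, X, cluster, j, n_boot=999, seed=0):
    """Wild cluster bootstrap p-value for H0: beta_j = 0 (null imposed)."""
    rng = np.random.default_rng(seed)
    ids = np.unique(cluster)

    def tstat(yy):
        b, *_ = np.linalg.lstsq(X, yy, rcond=None)
        u = yy - X @ b
        bread = np.linalg.inv(X.T @ X)
        meat = sum(np.outer(X[cluster == g].T @ u[cluster == g],
                            X[cluster == g].T @ u[cluster == g]) for g in ids)
        V = bread @ meat @ bread               # cluster-robust variance
        return b[j] / np.sqrt(V[j, j])

    Xr = np.delete(X, j, axis=1)               # refit with the null imposed
    br, *_ = np.linalg.lstsq(Xr, y, rcond=None)
    u_r = y - Xr @ br
    t0 = tstat(y)
    t_b = np.empty(n_boot)
    for s in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=ids.size)   # one sign per cluster
        t_b[s] = tstat(Xr @ br + u_r * w[np.searchsorted(ids, cluster)])
    return float((np.abs(t_b) >= np.abs(t0)).mean())
```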
By: | Stefano Giglio; Dacheng Xiu; Dake Zhang |
Abstract: | Estimation and testing of factor models in asset pricing requires choosing a set of test assets. The choice of test assets determines how well different factor risk premia can be identified: if only a few assets are exposed to a factor, that factor is weak, which makes standard estimation and inference incorrect. In other words, the strength of a factor is not an inherent property of the factor: it is a property of the cross-section used in the analysis. We propose a novel way to select assets from a universe of test assets and to estimate the risk premium of a factor of interest, as well as the entire stochastic discount factor, that explicitly accounts for weak factors and test assets with highly correlated risk exposures. We refer to our methodology as supervised principal component analysis (SPCA), because it iterates an asset-selection step and a principal-component estimation step. We provide the asymptotic properties of our estimator and compare its limiting behavior with that of alternative estimators proposed in the recent literature, which rely on PCA, Ridge, Lasso, and Partial Least Squares (PLS). We find that SPCA is superior in the presence of weak factors, both in theory and in finite samples. We illustrate the use of SPCA by applying it to estimate the risk premia of several tradable and nontradable factors, to evaluate asset managers’ performance, and to de-noise asset pricing factors. |
JEL: | C58 G12 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:29002&r= |
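A stylized rendering of the SPCA iteration described above, with hypothetical names: alternate between selecting the assets most correlated with the current factor estimate and re-extracting the factor as their first principal component. The paper's estimator additionally recovers risk premia and the stochastic discount factor, which this sketch does not attempt.

```python
import numpy as np

def spca_factor(R, g, n_select, n_iter=10):
    """Supervised-PCA-style loop.

    R        : T x N matrix of de-meaned test-asset returns
    g        : length-T proxy for the factor of interest
    n_select : number of assets kept at each selection step
    """
    f = g - g.mean()
    for _ in range(n_iter):
        corr = np.abs(R.T @ f) / (np.linalg.norm(R, axis=0) * np.linalg.norm(f))
        sel = np.argsort(corr)[-n_select:]   # assets most exposed to f
        U, s, _ = np.linalg.svd(R[:, sel], full_matrices=False)
        f = U[:, 0] * s[0]                   # first PC of the selected assets
        f -= f.mean()
    return f, sel
```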
By: | Walter Beckert (Institute for Fiscal Studies and Birkbeck, University of London); Daniel Kaliski (Institute for Fiscal Studies and Birkbeck) |
Abstract: | We investigate the consequences of discreteness in the assignment variable in regression-discontinuity designs for cases where the outcome variable is itself discrete. We find that constructing confidence intervals with the correct level of coverage in these cases is sensitive to the assumed distribution of unobserved heterogeneity. Because local linear estimators are improperly centered, a smaller variance of the unobserved heterogeneity in discrete outcomes actually requires larger confidence intervals: standard confidence intervals become narrower around a biased estimator, leading to a higher-than-nominal false positive rate. We provide a method for mapping structural assumptions about the distribution and variance of unobserved heterogeneity to the construction of "honest" confidence intervals that have the correct level of coverage. An application to retirement behavior reveals that the spike in retirement at age 62 in the United States can be reconciled with a wider range of values for the variance of unobserved heterogeneity (due to reservation wages or offers) than the spike at age 65. |
Date: | 2019–12–09 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:67/19&r= |
By: | Guy Tchuente |
Abstract: | The finite sample properties of estimators are usually understood or approximated using asymptotic theories. Two main asymptotic constructions have been used to characterize the presence of many instruments. The first assumes that the number of instruments increases with the sample size. I demonstrate that in this case, one of the key assumptions used in the asymptotic construction may imply that the number of "effective" instruments should be finite, resulting in an internal contradiction. The second asymptotic representation considers that the number of instrumental variables (IVs) may be finite, infinite, or even a continuum; the number does not change with the sample size. In this scenario, the regularized estimator obtained depends on the topology imposed on the set of instruments as well as on a regularization parameter. These restrictions may induce a bias or restrict the set of admissible instruments, but the assumptions are internally coherent. The limitations of many-instrument asymptotic assumptions provide support for finite-sample distributional studies to better understand the behavior of many-IV estimators. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.15003&r= |
By: | Zdzislaw Burda; Andrzej Jarosz |
Abstract: | A non-linear shrinkage estimator of large-dimensional covariance matrices is derived in a setting of auto-correlated samples, thus generalizing the recent formula by Ledoit-Péché. The calculation is facilitated by random matrix theory. The result is turned into an efficient algorithm, and an associated Python library, shrinkage, with the help of the Ledoit-Wolf kernel estimation technique. An example of exponentially-decaying auto-correlations is presented. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.01352&r= |
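Since the paper's optimal formula for auto-correlated samples is not reproduced here, the sketch below shows the simplest nonlinear shrinkage rule for the iid case, eigenvalue clipping at the Marchenko-Pastur edge, purely to make the notion of non-linear (eigenvalue-by-eigenvalue) shrinkage concrete. For the actual estimator, see the paper and its Python library, shrinkage.

```python
import numpy as np

def clip_covariance(X):
    """Random-matrix 'clipping' of a sample covariance (iid rows assumed).

    Eigenvalues below the Marchenko-Pastur upper edge are treated as noise
    and flattened to their average; the eigenvectors are kept.
    """
    T, N = X.shape
    S = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(S)
    sigma2 = np.median(evals)                    # crude noise-scale proxy
    edge = sigma2 * (1 + np.sqrt(N / T)) ** 2    # Marchenko-Pastur upper edge
    noise = evals < edge
    if noise.any():
        evals = evals.copy()
        evals[noise] = evals[noise].mean()       # flatten the noise bulk
    return (evecs * evals) @ evecs.T
```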
By: | Martin Bladt; Alexander J. McNeil |
Abstract: | Stationary and ergodic time series can be constructed using an s-vine decomposition based on sets of bivariate copula functions. The extension of such processes to infinite copula sequences is considered and shown to yield a rich class of models that generalizes Gaussian ARMA and ARFIMA processes to allow both non-Gaussian marginal behaviour and a non-Gaussian description of the serial partial dependence structure. Extensions of classical causal and invertible representations of linear processes to general s-vine processes are proposed and investigated. A practical and parsimonious method for parameterizing s-vine processes using the Kendall partial autocorrelation function is developed. The potential of the resulting models to give improved statistical fits in many applications is indicated with an example using macroeconomic data. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.00960&r= |
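The simplest member of the class above, an order-1 s-vine (a stationary copula-based Markov chain), can be simulated in a few lines. This sketch uses a single bivariate Gaussian copula between consecutive observations, so its correlation parameter plays the role of the first partial autocorrelation; deeper vine levels and non-Gaussian pair copulas, which the paper allows, are omitted.

```python
import numpy as np
from scipy.stats import norm

def svine_order1(n, rho, seed=0):
    """Stationary Markov chain with uniform margins and a Gaussian pair copula.

    Apply any quantile function to the output to obtain other margins
    without changing the serial dependence structure.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = z[0]                                  # stationary start: N(0, 1)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * z[t]
    return norm.cdf(x)                           # uniform margins
```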
By: | Gaurab Aryal; Federico Zincenko |
Abstract: | We propose an empirical framework for Cournot oligopoly with private information about costs. First, considering a linear demand with a random intercept, we characterize the Bayesian Cournot-Nash equilibrium and determine its testable implications. Then we establish nonparametric identification of the joint distribution of demand and technology shocks and of the firm-specific cost distributions. Finally, we propose a likelihood-based estimation method and apply it to the global crude oil market. Using counterfactuals, we quantify the effect on consumer welfare of firms sharing information about their costs. We also extend the model to include either firm-specific conduct parameters, nonlinear demand, or selective entry. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.15035&r= |
By: | Giacomo Bormetti; Fulvio Corsi |
Abstract: | We propose an observation-driven time-varying SVAR model where, in agreement with the Lucas critique, structural shocks drive both the evolution of the macro variables and the dynamics of the VAR parameters. Contrary to existing approaches, where parameters follow a stochastic process with random and exogenous shocks, our observation-driven specification lets the evolution of the parameters be driven by realized past structural shocks, thus opening the possibility of gauging the impact of observed shocks and hypothetical policy interventions on the future evolution of the economic system. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.05263&r= |
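A univariate caricature of the observation-driven mechanism, assuming nothing from the paper beyond the idea that realized shocks update the parameters: a time-varying AR(1) whose coefficient follows a score-type recursion in the past prediction error. The paper's specification is a full SVAR with structural shocks; the recursion below is only illustrative.

```python
import numpy as np

def tv_ar1_filter(y, phi0=0.5, kappa=0.01):
    """Filter a time-varying AR(1) coefficient from realized shocks.

    The update kappa * eps_t * y_{t-1} is a gradient step on the squared
    one-step prediction error -- a score-driven recursion, so the
    parameter path is a deterministic function of observed data.
    """
    T = len(y)
    phi_path = np.empty(T - 1)
    eps = np.empty(T - 1)
    phi = phi0
    for t in range(1, T):
        eps[t - 1] = y[t] - phi * y[t - 1]       # realized shock at time t
        phi_path[t - 1] = phi
        phi += kappa * eps[t - 1] * y[t - 1]     # shock-driven parameter update
    return phi_path, eps
```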
By: | Brantly Callaway; Tong Li; Irina Murtazashvili |
Abstract: | This paper considers nonlinear measures of intergenerational income mobility such as (i) the effect of parents' permanent income on the entire distribution of their child's permanent income, (ii) transition matrices, and (iii) rank-rank correlations, among others. A central issue in the literature on intergenerational income mobility is that the researcher typically observes annual income rather than permanent income. Following the existing literature, we treat annual income as a measured-with-error version of permanent income. Studying these types of distributional effects, which are inherently nonlinear, while simultaneously allowing for measurement error requires developing new methods. In particular, we develop a new approach to studying distributional effects with "two-sided" measurement error -- that is, measurement error in both an outcome and treatment variable in a general nonlinear model. Our idea is to impose restrictions on the reduced forms for the outcome and the treatment separately, and then to show that these restrictions imply that the joint distribution of the outcome and the treatment is identified, and, hence, any parameter that depends on this joint distribution is identified -- this includes essentially all parameters of interest in the intergenerational mobility literature. Importantly, we do not require an instrument or repeated observations to obtain identification. These results are new, and this part of the paper provides an independent contribution to the literature on nonlinear models with measurement error. We use our approach to study intergenerational mobility using recent data from the 1997 National Longitudinal Study of Youth. Accounting for measurement error notably reduces various estimates of intergenerational mobility relative to estimates coming directly from the observed data that ignore measurement error. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.09235&r= |
By: | Gautier Marti; Victor Goubet; Frank Nielsen |
Abstract: | We propose a methodology to approximate conditional distributions in the elliptope of correlation matrices based on conditional generative adversarial networks. We illustrate the methodology with an application from quantitative finance: Monte Carlo simulations of correlated returns to compare risk-based portfolio construction methods. Finally, we discuss current limitations and advocate for further exploration of the elliptope geometry to improve results. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.10606&r= |
By: | Alfred Galichon |
Abstract: | Optimal transport has become part of the standard quantitative economics toolbox. It is the framework of choice to describe models of matching with transfers, but beyond that, it makes it possible to extend quantile regression, identify discrete choice models, provide new algorithms for computing the random coefficient logit model, and generalize the gravity model in trade. This paper offers a brief review of the basics of the theory, its applications to economics, and some extensions. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2107.04700&r= |
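To make the toolbox concrete: the discrete Monge-Kantorovich problem behind matching-with-transfers models is a linear program and can be solved directly. The helper below and its toy data are illustrative; dedicated solvers (e.g. the POT library) scale far better.

```python
import numpy as np
from scipy.optimize import linprog

def ot_plan(p, q, C):
    """Solve min_pi <C, pi> s.t. pi @ 1 = p, pi.T @ 1 = q, pi >= 0.

    p, q : marginal probability vectors (lengths m, n)
    C    : m x n cost matrix (use C = -S to maximize a surplus S)
    """
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1           # row sums match p
    for j in range(n):
        A_eq[m + j, j::n] = 1                    # column sums match q
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
                  bounds=(0, None))
    return res.fun, res.x.reshape(m, n)

# toy example: match 3 worker types to 3 firm types
p = q = np.ones(3) / 3
C = np.array([[0., 2., 4.], [2., 0., 2.], [4., 2., 0.]])
total_cost, plan = ot_plan(p, q, C)
```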