NEP: New Economics Papers
on Econometrics
By: | Powell, David |
Abstract: | Panel data are often used in empirical work to account for additive fixed time and unit effects. More recently, the synthetic control estimator relaxes the assumption of additive fixed effects for case studies, using pre-treatment outcomes to create a weighted average of other units that best approximates the treated unit. The synthetic control estimator is currently limited to case studies in which the treatment variable can be represented by a single indicator variable. Applying this estimator more generally, such as to applications with multiple treatment variables or a continuous treatment variable, is problematic. This paper generalizes the case-study synthetic control estimator to permit estimation of the effect of multiple treatment variables, which can be discrete or continuous. The estimator jointly estimates the impact of the treatment variables and creates a synthetic control for each unit. Additive fixed effect models are a special case of this estimator. Because the number of units in panel data and synthetic control applications is often small, I discuss an inference procedure for fixed N. The estimation technique generates correlations across clusters, so the inference procedure also accounts for this dependence. Simulations show that the estimator works well even when additive fixed effect models do not. I estimate the impact of the minimum wage on the employment rate of teenagers, finding an elasticity of -0.44, substantially larger than estimates generated using additive fixed effect models, and reject the null hypothesis that there is no effect. |
Keywords: | synthetic control estimation, finite inference, minimum wage, teen employment, panel data, interactive fixed effects, correlated clusters |
JEL: | C33 J23 J31 |
Date: | 2016–02 |
URL: | http://d.repec.org/n?u=RePEc:ran:wpaper:1142&r=ecm |
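Sketch: | For context on what the paper generalizes, the classic case-study synthetic control step can be written in a few lines: choose non-negative donor weights summing to one so the weighted donors best track the treated unit's pre-treatment outcomes. The Python sketch below uses simulated data and illustrative names; it is the textbook case-study estimator, not the paper's generalized one. |
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T0, J = 20, 10                      # pre-treatment periods, donor units (assumed)
Y0 = rng.normal(size=(T0, J))       # donor outcomes, T0 x J
w_true = np.array([0.6, 0.4] + [0.0] * (J - 2))
y1 = Y0 @ w_true + rng.normal(scale=0.05, size=T0)   # treated unit's outcomes

def loss(w):
    resid = y1 - Y0 @ w             # pre-treatment fit of the synthetic control
    return resid @ resid

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)   # weights sum to one
bounds = [(0.0, 1.0)] * J                                  # weights non-negative
res = minimize(loss, np.full(J, 1.0 / J), bounds=bounds,
               constraints=cons, method="SLSQP")
print(np.round(res.x, 3))           # weight concentrates on donors 0 and 1
```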
By: | Arbués, Ignacio; Ledo, Ramiro; Matilla-García, Mariano |
Abstract: | There are a number of econometric tools to deal with the different types of situations in which cointegration can appear: I(1), I(2), seasonal, polynomial, etc. There are also different kinds of Vector Error Correction models related to these situations. We propose a unified theoretical and practical framework to deal with many of these situations. To this aim, (i) a general class of models is introduced in this paper and (ii) an automatic method to identify models, based on estimating the Smith form of an autoregressive model, is provided. Our simulations demonstrate the power of the newly proposed methodology, and an empirical example illustrates its use. |
Keywords: | time series, unit root, cointegration, error correction, model identification, Smith form |
JEL: | C01 C22 C32 C51 C52 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:zbw:ifwedp:201633&r=ecm |
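Sketch: | The paper's automatic identification runs through the Smith form of the autoregressive polynomial. As a point of reference, the standard I(1)-only workflow it subsumes (select a cointegration rank, then fit a VECM) looks as follows in statsmodels, on simulated data with one shared stochastic trend. |
```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(1)
n = 300
common = np.cumsum(rng.normal(size=n))            # shared I(1) trend
y = np.column_stack([common + rng.normal(size=n),
                     0.5 * common + rng.normal(size=n)])

# Johansen trace test picks the cointegration rank (should find rank 1 here)
rank = select_coint_rank(y, det_order=0, k_ar_diff=1, method="trace")
model = VECM(y, k_ar_diff=1, coint_rank=rank.rank, deterministic="n")
print(rank.rank, model.fit().beta.round(2))        # rank and cointegrating vector
```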
By: | Clark, Todd E. (Federal Reserve Bank of Cleveland); Carriero, Andrea; Marcellino, Massimiliano |
Abstract: | Recent research has shown that a reliable vector autoregressive (VAR) model for forecasting and structural analysis of macroeconomic data requires a large set of variables and modeling time variation in their volatilities. Yet, due to computational complexity, no papers so far jointly allow for stochastic volatilities and large datasets. Moreover, existing homoskedastic VAR models for large datasets substantially restrict the allowed prior distributions on the parameters. In this paper we propose a new Bayesian estimation procedure for (possibly very large) VARs featuring time-varying volatilities and general priors. This is important both for reduced-form applications, such as forecasting, and for more structural applications, such as computing response functions to structural shocks. We show empirically that the new estimation procedure indeed performs very well at both tasks. |
Keywords: | forecasting; models; structural shocks |
JEL: | C11 C13 C33 C53 |
Date: | 2016–06–30 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedcwp:1617&r=ecm |
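Sketch: | To fix ideas, this is the kind of data-generating process the estimator targets: a VAR whose innovation volatilities drift as random walks in logs. The Bayesian sampler itself is beyond a digest snippet; everything below is a simulation under assumed parameter values. |
```python
import numpy as np

rng = np.random.default_rng(2)
n_vars, T = 3, 500
B = 0.5 * np.eye(n_vars)            # VAR(1) coefficient matrix (assumed)
# lower-triangular impact matrix with unit diagonal, as in triangularized VARs
A_inv = np.tril(rng.normal(scale=0.2, size=(n_vars, n_vars)), -1) + np.eye(n_vars)

y = np.zeros((T, n_vars))
log_vol = np.zeros(n_vars)
for t in range(1, T):
    log_vol += rng.normal(scale=0.1, size=n_vars)   # random-walk log volatilities
    shocks = A_inv @ (np.exp(0.5 * log_vol) * rng.normal(size=n_vars))
    y[t] = B @ y[t - 1] + shocks    # VAR dynamics with time-varying volatility
```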
By: | Chan, Mark K.; Kwok, Simon |
Abstract: | We develop an alternative estimator for policy evaluation in the presence of interactive fixed effects. It extends the two-stage procedure of Pesaran (2006) to a difference-in-differences-type program evaluation framework, and extracts principal components from the control group to form factor proxies. Consistency and asymptotic distributions are derived under stationary factors as well as nonstationary factors of any integration order. Simulation exercises demonstrate the excellent performance of our estimator relative to existing methods. We present empirical results from microeconomic and macroeconomic applications. We find that our estimator generates the most robust treatment effect estimates, and that our weights for control-group units deliver a strong economic interpretation of the nature of the underlying factors. |
Keywords: | Program evaluation; Interactive fixed effects; Difference-in-differences |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:syd:wpaper:2016-11&r=ecm |
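Sketch: | The core mechanic, extracting principal components from control-group outcomes to proxy the latent factors and then augmenting the evaluation regression with them, can be sketched as follows on simulated data. The names and the two-factor design are illustrative assumptions, not the paper's specification. |
```python
import numpy as np

rng = np.random.default_rng(3)
T, n_ctrl = 100, 40
f = np.cumsum(rng.normal(size=(T, 2)), axis=0)     # latent (nonstationary) factors
lam_c = rng.normal(size=(2, n_ctrl))
Y_ctrl = f @ lam_c + rng.normal(size=(T, n_ctrl))  # control-group outcomes

# factor proxies: leading principal components of demeaned control outcomes
Yc = Y_ctrl - Y_ctrl.mean(axis=0)
_, _, vt = np.linalg.svd(Yc, full_matrices=False)
proxies = Yc @ vt[:2].T                            # T x 2 factor proxies

# treated unit loads on the same factors; treatment adds 1.5 after period 60
treat_post = (np.arange(T) >= 60).astype(float)
y_treat = f @ rng.normal(size=2) + 1.5 * treat_post + rng.normal(size=T)

X = np.column_stack([np.ones(T), treat_post, proxies])
beta = np.linalg.lstsq(X, y_treat, rcond=None)[0]
print(round(beta[1], 2))                           # close to the true effect 1.5
```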
By: | Huber, Martin; Camponovo, Lorenzo; Bodory, Hugo; Lechner, Michael |
Abstract: | We introduce a wild bootstrap algorithm for the approximation of the sampling distribution of pair or one-to-many propensity score matching estimators. Unlike the conventional iid bootstrap, the proposed wild bootstrap approach does not construct bootstrap samples by randomly resampling from the observations with uniform weights. Instead, it fixes the covariates and constructs the bootstrap approximation by perturbing the martingale representation for matching estimators. We also conduct a simulation study in which the suggested wild bootstrap performs well even when the sample size is relatively small. Finally, we provide an empirical illustration by analyzing an information intervention in rural development programs. |
Keywords: | Inference; Propensity Score Matching Estimators; Wild Bootstrap |
JEL: | C14 C15 C21 |
Date: | 2016–07–07 |
URL: | http://d.repec.org/n?u=RePEc:fri:fribow:fribow00470&r=ecm |
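Sketch: | The proposed scheme perturbs each observation's term in the estimator's martingale (linear) representation rather than redrawing observations. A generic version of that perturbation step with Rademacher multipliers is below; psi is a placeholder for the matching estimator's influence terms, which in practice come from the matched-pairs construction. |
```python
import numpy as np

rng = np.random.default_rng(4)
n, B = 500, 999
psi = rng.normal(size=n)                 # placeholder influence terms (assumed)
theta_hat = psi.mean()                   # stand-in point estimate

boot = np.empty(B)
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)  # Rademacher multipliers, covariates fixed
    boot[b] = theta_hat + (v * (psi - theta_hat)).mean()

se = boot.std(ddof=1)                    # wild-bootstrap standard error
print(round(se, 4), round(psi.std(ddof=1) / np.sqrt(n), 4))   # agree closely
```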
By: | Matthew Masten (Institute for Fiscal Studies); Alexandre Poirier (Institute for Fiscal Studies) |
Abstract: | We analyze identification of nonseparable models under three kinds of exogeneity assumptions weaker than full statistical independence. The first is based on quantile independence. Selection on unobservables drives deviations from full independence. We show that such deviations based on quantile independence require non-monotonic and oscillatory propensity scores. Our second and third approaches are based on a distance-from-independence metric, using either a conditional cdf or a propensity score. Under all three approaches we obtain simple analytical characterizations of identified sets for various parameters of interest. We do this in three models: the exogenous regressor model of Matzkin (2003), the instrumental variable model of Chernozhukov and Hansen (2005), and the binary choice model with nonparametric latent utility of Matzkin (1992). |
Keywords: | Nonparametric Identification, Partial Identification, Sensitivity Analysis, Nonseparable Models, Selection on Unobservables, Instrumental Variables, Binary Choice |
JEL: | C14 C21 C25 C26 C51 |
Date: | 2016–06–21 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:26/16&r=ecm |
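Sketch: | As a reference point for the identified sets discussed above, the no-assumptions (Manski-style) bounds on the average treatment effect for a bounded outcome are easy to compute; the paper's intermediate assumptions shrink this outer set toward point identification. Simulated data, purely illustrative. |
```python
import numpy as np

rng = np.random.default_rng(12)
n = 10_000
d = rng.integers(0, 2, size=n).astype(float)
y = np.clip(0.5 + 0.2 * d + 0.2 * rng.normal(size=n), 0.0, 1.0)   # Y in [0, 1]

p = d.mean()
# bound E[Y(1)] by setting the unobserved Y(1) for untreated units to 0 or 1
ey1_lo = (y * d).mean() + 0.0 * (1 - p)
ey1_hi = (y * d).mean() + 1.0 * (1 - p)
ey0_lo = (y * (1 - d)).mean() + 0.0 * p
ey0_hi = (y * (1 - d)).mean() + 1.0 * p
print(round(ey1_lo - ey0_hi, 2), round(ey1_hi - ey0_lo, 2))   # wide ATE bounds
```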
By: | Joonhwi Joo (University of Chicago); Ali Hortacsu (University of Chicago) |
Abstract: | We develop a demand estimation framework with observed and unobserved product characteristics based on CES preferences. We show that our demand system can nest the logit demand system with observed and unobserved product characteristics, which has been widely used since Berry (1994) and Berry et al. (1995). Furthermore, the demand system we develop can directly accommodate zero market shares by separating the extensive and the intensive margins. We apply our framework to scanner data on cola sales and show that estimated demand curves can even be upward sloping if zero market shares are not properly accommodated. |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:red:sed016:36&r=ecm |
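Sketch: | A quick illustration of why zeros matter: log market shares are undefined at zero, so estimating a log-linearized demand curve on positive shares only selects on the error term and distorts the price coefficient. The simulated CES-style shares and the recording cutoff below are assumptions for illustration. |
```python
import numpy as np

rng = np.random.default_rng(5)
J, sigma = 200, 3.0                         # products, elasticity of substitution
p = np.exp(rng.normal(scale=0.3, size=J))   # prices
quality = np.exp(rng.normal(size=J))        # unobserved product characteristics
w = quality * p ** (1 - sigma)
w /= w.sum()                                # CES expenditure shares
w = np.where(w < 0.002, 0.0, w)             # small shares recorded as zeros

keep = w > 0                                # dropping zeros = selected sample
slope = np.polyfit(np.log(p[keep]), np.log(w[keep]), 1)[0]
print(round(slope, 2), 1 - sigma)           # selection pulls slope away from -2
```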
By: | Yuan Liao (Rutgers University); Anna Simoni (CREST) |
Abstract: | Inference on partially identified models plays an important role in econometrics. This paper proposes novel Bayesian procedures for these models when the identified set is closed and convex and so is completely characterized by its support function. We shed new light on the connection between Bayesian and frequentist inference for partially identified convex models. We construct Bayesian credible sets for the identified set and uniform credible bands for the support function, as well as a Bayesian procedure for marginal inference, where we may be interested in just one component of the partially identified parameter. Importantly, our procedure is shown to be an asymptotically valid frequentist procedure as well. It is computationally efficient, and we describe several algorithms to implement it. We also construct confidence sets for the partially identified parameter by using the posterior distribution of the support function and show that they have correct frequentist coverage asymptotically. In addition, we establish a local linear approximation of the support function which facilitates set inference and numerical implementation of our method, and allows us to establish the Bernstein-von Mises theorem of the posterior distribution of the support function. |
Keywords: | partial identification, Bayesian credible sets, support function, moment inequality models, Bernstein-von Mises theorem |
JEL: | C11 |
Date: | 2016–07–05 |
URL: | http://d.repec.org/n?u=RePEc:rut:rutres:201607&r=ecm |
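Sketch: | The support function h(q) = sup over t in S of q't fully describes a closed convex set S. For the simplest case, an interval, posterior draws of the endpoints induce a posterior over h, and pointwise quantiles give a credible band, as sketched below with simulated stand-in draws. |
```python
import numpy as np

rng = np.random.default_rng(6)
draws_lo = rng.normal(-1.0, 0.05, size=2000)   # posterior draws of lower endpoint
draws_hi = rng.normal(2.0, 0.05, size=2000)    # posterior draws of upper endpoint

for q in (-1.0, 1.0):                          # directions on the unit sphere in R
    h = np.where(q > 0, q * draws_hi, q * draws_lo)   # h(q) = max(q*lo, q*hi)
    lo_band, hi_band = np.quantile(h, [0.025, 0.975]) # pointwise credible band
    print(q, round(lo_band, 2), round(hi_band, 2))
```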
By: | Somwrita Sarkar; Sanjay Chawla |
Abstract: | Inference methods in traditional statistics, machine learning and data mining assume that data are generated from an independent and identically distributed (iid) process. Spatial data exhibit behavior for which the iid assumption must be relaxed. For example, the standard approach in spatial regression is to assume the existence of a contiguity matrix which captures the spatial autoregressive properties of the data. However, all spatial methods to date have assumed that the contiguity matrix is given a priori or can be estimated with a spatial similarity function. In this paper we propose a convex optimization formulation of the spatial autoregressive (SAR) model in which both the contiguity matrix and the non-spatial regression parameters are unknown and inferred from the data. We solve the problem using the alternating direction method of multipliers (ADMM), which yields a solution that is both robust and efficient. While our approach is general, we use data from the housing markets of Boston and Sydney to both guide the analysis and validate our results. A novel side effect of our approach is the automatic discovery of spatial clusters, which translate to submarkets in the housing data sets. |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1607.01999&r=ecm |
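Sketch: | The authors solve for the contiguity matrix and regression coefficients jointly with ADMM. The toy below is a much simpler alternating proximal-gradient scheme in the same spirit (it ignores the simultaneity in a SAR model, so estimates are biased); it is not the paper's algorithm, and all names and values are illustrative. |
```python
import numpy as np

rng = np.random.default_rng(7)
n, T, lam, eta = 10, 200, 0.05, 0.05
W_true = np.zeros((n, n))
W_true[0, 1] = W_true[1, 0] = 0.4            # two spatially linked units
x = rng.normal(size=(n, T))
beta_true = 2.0
Y = np.linalg.solve(np.eye(n) - W_true, beta_true * x + rng.normal(size=(n, T)))

W = np.zeros((n, n))
for _ in range(300):
    beta = ((Y - W @ Y) * x).sum() / (x * x).sum()   # OLS step for beta given W
    E = Y - beta * x                                 # spatial target given beta
    G = (W @ Y - E) @ Y.T / T                        # gradient of 0.5||E - WY||^2/T
    W = W - eta * G
    W = np.sign(W) * np.maximum(np.abs(W) - eta * lam, 0.0)  # L1 soft-threshold
    np.fill_diagonal(W, 0.0)                         # no self-contiguity
print(W[:2, :2].round(2))                            # recovered links, units 0-1
```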
By: | Brennan S. Thompson (Department of Economics, Ryerson University); Matthew D. Webb (Department of Economics, Carleton University) |
Abstract: | In this paper, we utilize a recently proposed graphical procedure to show how multiple treatments can be compared while controlling the familywise error rate (the probability of finding one or more spurious differences between the parameters of interest). Monte Carlo simulations suggest that this procedure adequately controls the familywise error rate in finite samples and has average power nearly identical to that of a max-$T$ procedure. We demonstrate the flexibility of our proposed approach using two different empirical examples. |
Keywords: | multiple comparisons; familywise error rate; treatment effects; bootstrap |
Date: | 2015–01 |
URL: | http://d.repec.org/n?u=RePEc:rye:wpaper:wp063&r=ecm |
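Sketch: | The max-$T$ benchmark mentioned above can be sketched directly: compare each treatment's t-statistic with the bootstrap distribution of the maximum absolute t-statistic under the recentered null, which controls the familywise error rate across all comparisons. Simulated data and names are illustrative. |
```python
import numpy as np

rng = np.random.default_rng(8)
n, k, B = 200, 4, 999
effects = np.array([0.0, 0.0, 0.5, 0.8])         # two true nulls, two real effects
X = rng.normal(size=(n, k)) + effects            # one column per treatment arm
t_stats = X.mean(axis=0) / (X.std(axis=0, ddof=1) / np.sqrt(n))

max_t = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)             # nonparametric bootstrap draw
    Xb = X[idx] - X.mean(axis=0)                 # recenter: impose the null
    tb = Xb.mean(axis=0) / (Xb.std(axis=0, ddof=1) / np.sqrt(n))
    max_t[b] = np.abs(tb).max()

crit = np.quantile(max_t, 0.95)                  # FWER-controlling critical value
print(np.abs(t_stats) > crit)                    # arms 2 and 3 rejected
```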
By: | Daouia, Abdelaati; Florens, Jean-Pierre; Simar, Léopold |
Abstract: | The aim of this paper is to construct a robust nonparametric estimator of the production frontier. The main tool is a concept of robust regression boundary defined as a special probability-weighted moment (PWM). We first study this problem under a regression model with one-sided errors, where the regression function defines the achievable maximum output for a given level of input usage and the regression error defines the inefficiency term. Then we consider a stochastic frontier model where the regression errors are assumed to be composite: it is more realistic to assume that the actually observed outputs are contaminated by stochastic noise, so the additive regression errors in the frontier model are composed of this noise term and the one-sided inefficiency term. In contrast to the one-sided error model, where the direct use of empirical PWMs is fruitful, the composite error problem requires a substantially different treatment based on deconvolution techniques. To ensure identifiability of the model, the noise must be assumed independent Gaussian. The estimation of the robust PWM frontiers, including the true regression boundary, then requires computing a survival function estimator from an ill-posed equation. A Tikhonov-regularized solution is constructed and nonparametric frontier estimation is performed. We unravel the asymptotic behavior of the resulting frontier estimators in both the one-sided and composite error models. The procedure is easy and fast to implement, and practical guidelines for the necessary computations are given via a simulated example. The usefulness of the approach is illustrated with two concrete data sets from the delivery services sector. |
Keywords: | Deconvolution, Nonparametric estimation, Probability-weighted moment, Production function, Robustness, Stochastic frontier, Tikhonov regularization. |
JEL: | C1 C13 C14 C49 |
Date: | 2016–06 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:30543&r=ecm |
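Sketch: | In the one-sided error model, a PWM-type robust boundary is close in spirit to the expected-maximum (order-m) frontier, which has a closed-form empirical version: the expected maximum of m output draws among units using no more input than x. The sketch below computes it from the empirical cdf on simulated data; the paper's deconvolution and Tikhonov machinery is not reproduced. |
```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 500, 30
x = rng.uniform(0.1, 1.0, size=n)
y = x ** 0.5 * rng.uniform(0.3, 1.0, size=n)     # true frontier: y = sqrt(x)

def order_m_frontier(x0):
    yk = np.sort(y[x <= x0])                     # outputs of comparable units
    k = yk.size
    j = np.arange(1, k + 1)
    weights = (j / k) ** m - ((j - 1) / k) ** m  # P(max of m draws equals y_(j))
    return (weights * yk).sum()                  # E[max of m draws | X <= x0]

for x0 in (0.25, 0.5, 1.0):
    print(x0, round(order_m_frontier(x0), 2), round(np.sqrt(x0), 2))
```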
By: | de Luna, Xavier (Umeå University); Fowler, Philip (Umeå University); Johansson, Per (Uppsala University) |
Abstract: | Proxy variables are often used in linear regression models with the aim of removing potential confounding bias. In this paper we formalise proxy variables within the potential outcome framework, giving conditions under which it can be shown that causal effects are nonparametrically identified. We characterise two types of proxy variables and give concrete examples where the proxy conditions introduced may hold by design. |
Keywords: | average treatment effect, observational studies, potential outcomes, unobserved confounders |
JEL: | C14 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10057&r=ecm |
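Sketch: | A toy version of the setting: an unobserved confounder drives both treatment and outcome, and a proxy carries the confounding information. Here the proxy captures the confounder exactly up to scale, a deliberately favorable case of the conditions the paper formalizes; all values are illustrative. |
```python
import numpy as np

rng = np.random.default_rng(10)
n = 100_000
u = rng.normal(size=n)                        # unobserved confounder
p = 2.0 * u                                   # proxy: confounder up to scale
d = (u + rng.normal(size=n) > 0).astype(float)   # treatment depends on u
y = 1.0 * d + u + rng.normal(size=n)          # true treatment effect = 1.0

for controls in ([np.ones(n)], [np.ones(n), p]):
    X = np.column_stack([d] + controls)
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    print(round(b, 2))                         # biased (~2.1), then ~1.0 with proxy
```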
By: | Audrey Laporte; Adrian Rohit Dass; Brian S. Ferguson |
Keywords: | Arellano-Bond, dynamic panel |
JEL: | C23 I12 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:cch:wpaper:160008&r=ecm |
By: | Duleep, Harriet (College of William and Mary); Liu, Xingfei (University of Alberta) |
Abstract: | The importance of using natural experiments and experimental data in economic research has long been recognized. Yet it is only in recent years that these approaches have become an integral part of the economist's analytical toolbox, thanks to the efforts of Meyer, Card, Peters, Krueger, Gruber, and others. This use has shed new light on a variety of public policy issues and has already posed a major challenge to some tightly held beliefs in economics, most vividly illustrated by the finding of a positive effect of a minimum wage increase on the employment of low-wage workers. Although currently in vogue in economic research, the analysis of experimental data and natural experiments could be substantially strengthened. This paper discusses how analysts could increase the precision with which they measure treatment effects. An underlying theme is how best to measure the effect of a treatment on a variable, as opposed to explaining a level or change in a variable. |
Keywords: | precision of treatment effects, differences in averages, average of differences, experimental approach, natural experiment, policy evaluation |
JEL: | C1 J1 |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10055&r=ecm |
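Sketch: | The distinction the paper turns on is easy to see with paired pre/post data: the difference in averages and the average of differences coincide as point estimates, but the paired standard error exploits within-unit correlation and is far more precise. The simulated values below are illustrative. |
```python
import numpy as np

rng = np.random.default_rng(11)
n = 400
ability = rng.normal(size=n)                       # persistent unit effect
pre = ability + rng.normal(scale=0.3, size=n)
post = ability + 0.25 + rng.normal(scale=0.3, size=n)   # treatment adds 0.25

# SE treating pre and post as independent samples (difference in averages)
se_unpaired = np.sqrt(pre.var(ddof=1) / n + post.var(ddof=1) / n)
# SE of the within-unit differences (average of differences)
diff = post - pre
se_paired = diff.std(ddof=1) / np.sqrt(n)
print(round(diff.mean(), 3), round(se_unpaired, 3), round(se_paired, 3))
```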