NEP: New Economics Papers on Econometrics |
By: | Paul Clarke; Frank Windmeijer |
Abstract: | Instrumental variables (IVs) can be used to construct estimators of exposure effects on the outcomes of studies affected by non-ignorable selection of the exposure. Estimators which fail to adjust for the effects of non-ignorable selection will be biased and inconsistent. Such situations commonly arise in observational studies, but even randomised controlled trials can be affected by non-ignorable participant non-compliance. In this paper, we review IV estimators for studies in which the outcome is binary. Recent work on identification is interpreted using an integrated structural modelling and potential outcomes framework, within which we consider the links between different approaches developed in statistics and econometrics. The implicit assumptions required for bounding causal effects and point-identification by each estimator are highlighted and compared within our framework. Finally, the implications for practice are discussed. |
Keywords: | bounds, causal inference, generalized method of moments, local average treatment effects, marginal structural models, non-compliance, parameter identification, potential outcomes, structural mean models, structural models |
JEL: | C13 C14 |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:bri:cmpowp:10/239&r=ecm |
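To make the IV idea in the abstract above concrete, here is a minimal sketch of the simplest estimand the paper's literature works with: the Wald ratio of two intention-to-treat effects, which under the usual IV assumptions identifies the local average treatment effect. The data below are hypothetical and simulated; the bounding results and structural mean models the paper reviews are not reproduced.

```python
# A minimal Wald/IV sketch (not the authors' estimators): binary instrument Z,
# binary exposure D, binary outcome Y, with non-ignorable selection of D.
import numpy as np

def wald_iv(y, d, z):
    """IV (Wald) estimate: ITT effect on Y divided by ITT effect on D."""
    itt_y = y[z == 1].mean() - y[z == 0].mean()   # reduced-form effect
    itt_d = d[z == 1].mean() - d[z == 0].mean()   # first-stage effect
    return itt_y / itt_d

# Toy data: an unobserved confounder U drives both exposure and outcome.
rng = np.random.default_rng(0)
n = 100_000
u = rng.binomial(1, 0.5, n)                     # unobserved confounder
z = rng.binomial(1, 0.5, n)                     # randomised instrument
d = rng.binomial(1, 0.2 + 0.5 * z + 0.2 * u)    # exposure, imperfect compliance
y = rng.binomial(1, 0.1 + 0.3 * d + 0.3 * u)    # binary outcome, true effect 0.3

print("naive difference:", y[d == 1].mean() - y[d == 0].mean())  # biased by U
print("Wald/IV estimate:", wald_iv(y, d, z))                     # close to 0.3
```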
By: | Joel Horowitz (Institute for Fiscal Studies and Northwestern University); Sokbae 'Simon' Lee (Institute for Fiscal Studies and University College London) |
Abstract: | <p><p>This paper is concerned with developing uniform confidence bands for functions estimated nonparametrically with instrumental variables. We show that a sieve nonparametric instrumental variables estimator is pointwise asymptotically normally distributed. The asymptotic normality result holds in both mildly and severely ill-posed cases. We present an interpolation method to obtain a uniform confidence band and show that the bootstrap can be used to obtain the required critical values. Monte Carlo experiments illustrate the finite-sample performance of the uniform confidence band.</P> </p><p><p>This paper is a revised version of CWP18/09.</P></p> |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:19/10&r=ecm |
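The point estimator studied in the abstract above can be sketched in a few lines: approximate the unknown function by a polynomial sieve and use a larger polynomial basis in the instrument as instruments, so the coefficients come from ordinary 2SLS in basis functions. This is a toy version with an assumed data-generating process, not the authors' implementation; the uniform band and bootstrap critical values are not shown.

```python
# A minimal sieve NPIV sketch: y = g(x) + e with E[e | w] = 0, x endogenous.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
w = rng.uniform(-1, 1, n)                 # instrument
v = rng.normal(0, 0.3, n)
x = 0.8 * w + v                           # endogenous regressor
eps = v + rng.normal(0, 0.1, n)           # correlated with x, not with w
g = lambda t: t**3 - t                    # structural function (in sieve span)
y = g(x) + eps

def poly_basis(t, k):
    return np.column_stack([t**j for j in range(k)])

P = poly_basis(x, 4)                      # sieve for g(x)
B = poly_basis(w, 6)                      # instrument basis, m >= k
Pi = B @ np.linalg.lstsq(B, P, rcond=None)[0]   # first-stage fitted values
beta = np.linalg.lstsq(Pi, y, rcond=None)[0]    # 2SLS coefficients

x_grid = np.linspace(-1, 1, 5)
print(np.round(poly_basis(x_grid, 4) @ beta, 2))  # estimated g on a grid
print(np.round(g(x_grid), 2))                     # true g for comparison
```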
By: | James G. MacKinnon (Queen's University); Morten Ørregaard Nielsen (Queen's University and CREATES) |
Abstract: | We calculate numerically the asymptotic distribution functions of likelihood ratio tests for fractional unit roots and cointegration rank. Because these distributions depend on a real-valued parameter, b, which must be estimated, simple tabulation is not feasible. Partly due to the presence of this parameter, the choice of model specification for the response surface regressions used to obtain the numerical distribution functions is more involved than is usually the case. We deal with model uncertainty by model averaging rather than by model selection. We make available a computer program which, given the dimension of the problem, q, and a value of b, provides either a set of critical values or the asymptotic P value for any value of the likelihood ratio statistic. The use of this program is illustrated by means of an empirical example involving opinion poll data. |
Keywords: | cofractional process, fractional unit root, fractional cointegration, response surface regression, cointegration rank, numerical distribution function, model averaging |
JEL: | C12 C16 C22 C32 |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1240&r=ecm |
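The response surface idea behind such numerical distribution functions can be illustrated with a standard (non-fractional) Dickey-Fuller statistic as a stand-in: simulate the statistic at several sample sizes, record a finite-sample quantile, and regress it on powers of 1/T so the intercept estimates the asymptotic critical value. The fractional case with its parameter b, and the model averaging over response surface specifications, are what the paper adds and are not reproduced here.

```python
# A toy response surface sketch (not the authors' program).
import numpy as np

rng = np.random.default_rng(2)

def df_tstat(T):
    """Dickey-Fuller t-statistic (no constant) under the unit-root null."""
    y = np.cumsum(rng.normal(size=T))
    dy, ylag = np.diff(y), y[:-1]
    rho = ylag @ dy / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (T - 2) / (ylag @ ylag))
    return rho / se

Ts, reps = [25, 50, 100, 200, 400], 10_000
q05 = [np.quantile([df_tstat(T) for _ in range(reps)], 0.05) for T in Ts]

# Response surface: q(T) = theta0 + theta1/T + theta2/T^2 + error.
X = np.column_stack([np.ones(len(Ts)), 1 / np.array(Ts), 1 / np.array(Ts)**2])
theta = np.linalg.lstsq(X, q05, rcond=None)[0]
print("asymptotic 5% critical value estimate:", round(theta[0], 2))  # ~ -1.95
```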
By: | Sokbae 'Simon' Lee (Institute for Fiscal Studies and University College London); Arthur Lewbel (Institute for Fiscal Studies and Boston College) |
Abstract: | <p>We provide new conditions for identification of accelerated failure time competing risks models. These include Roy models and some auction models. In our set up, unknown regression functions and the joint survivor function of latent disturbance terms are all nonparametric. We show that this model is identified given covariates that are independent of latent errors, provided that a certain rank condition is satisfied. We present a simple example in which our rank condition for identification is verified. Our identification strategy does not depend on identification at infinity or near zero, and it does not require exclusion assumptions. Given our identification, we show estimation can be accomplished using sieves.</p> |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:14/10&r=ecm |
By: | Tatsuya Kubokawa (Faculty of Economics, University of Tokyo) |
Abstract: | The empirical best linear unbiased predictor (EBLUP), or empirical Bayes estimator (EB), in the linear mixed model is recognized as useful for small area estimation because it can increase estimation precision by using information from related areas. Two measures of the uncertainty of EBLUP are the estimate of the mean squared error (MSE) and the confidence interval, which have been studied under second-order accuracy in the literature. This paper provides general analytical results for these two measures in a unified framework: we derive conditions on general consistent estimators of the variance components under which third-order accuracy holds for the MSE estimator and the confidence interval in general linear mixed normal models. Those conditions are shown to be satisfied not only by the maximum likelihood (ML) and restricted maximum likelihood (REML) estimators, but also by other estimators, including the Prasad-Rao and Fay-Herriot estimators in specific models. |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2010cf754&r=ecm |
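For readers unfamiliar with EBLUP, a minimal Fay-Herriot sketch follows, with toy data and a simplified moment estimator of the area-level variance standing in for the ML/REML and Prasad-Rao/Fay-Herriot estimators the paper analyzes; the MSE and confidence-interval corrections that are the paper's subject are not shown.

```python
# A minimal Fay-Herriot EBLUP sketch: shrink direct area estimates toward a
# regression synthetic estimate, with weight depending on the variance ratio.
import numpy as np

rng = np.random.default_rng(3)
m = 40                                   # number of small areas
x = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])
beta_true, A_true = np.array([1.0, 2.0]), 0.5
D = rng.uniform(0.2, 1.0, m)             # known sampling variances
theta = x @ beta_true + rng.normal(0, np.sqrt(A_true), m)  # true area means
y = theta + rng.normal(0, np.sqrt(D))    # direct survey estimates

# Simplified moment estimator of A (in the spirit of Prasad-Rao).
beta_ols = np.linalg.lstsq(x, y, rcond=None)[0]
resid = y - x @ beta_ols
A_hat = max(0.0, (resid @ resid - D.sum()) / (m - x.shape[1]))

# GLS for beta, then EBLUP: gamma close to 1 trusts the direct estimate.
w = 1 / (A_hat + D)
beta_gls = np.linalg.solve(x.T @ (w[:, None] * x), x.T @ (w * y))
gamma = A_hat / (A_hat + D)
eblup = gamma * y + (1 - gamma) * (x @ beta_gls)
print("MSE direct vs EBLUP:",
      round(np.mean((y - theta)**2), 3), round(np.mean((eblup - theta)**2), 3))
```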
By: | Alexandre Belloni; Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology) |
Abstract: | <p><p><p><p><p><p>In this paper we study post-penalized estimators which apply ordinary, unpenalized linear regression to the model selected by first-step penalized estimators, typically LASSO. It is well known that LASSO can estimate the regression function at nearly the oracle rate, and is thus hard to improve upon. We show that post-LASSO performs at least as well as LASSO in terms of the rate of convergence, and has the advantage of a smaller bias. Remarkably, this performance occurs even if the LASSO-based model selection 'fails' in the sense of missing some components of the 'true' regression model. By the 'true' model we mean here the best s-dimensional approximation to the regression function chosen by the oracle. Furthermore, post-LASSO can perform strictly better than LASSO, in the sense of a strictly faster rate of convergence, if the LASSO-based model selection correctly includes all components of the 'true' model as a subset and also achieves a sufficient sparsity. In the extreme case, when LASSO perfectly selects the 'true' model, the post-LASSO estimator becomes the oracle estimator. An important ingredient in our analysis is a new sparsity bound on the dimension of the model selected by LASSO which guarantees that this dimension is at most of the same order as the dimension of the 'true' model. Our rate results are non-asymptotic and hold in both parametric and nonparametric models. Moreover, our analysis is not limited to the LASSO estimator in the first step, but also applies to other estimators, for example, the trimmed LASSO, Dantzig selector, or any other estimator with good rates and good sparsity. Our analysis covers both traditional trimming and a new practical, completely data-driven trimming scheme that induces maximal sparsity subject to maintaining a certain goodness-of-fit. The latter scheme has theoretical guarantees similar to those of LASSO or post-LASSO, but it dominates these procedures as well as traditional trimming in a wide variety of experiments.</p></p></p></p></p></p> |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:13/10&r=ecm |
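The two-step procedure itself is simple to sketch. Below is a minimal version using scikit-learn on simulated sparse data; the penalty level alpha is an arbitrary illustrative choice, not the paper's theoretically justified one.

```python
# A minimal post-LASSO sketch: step 1 runs LASSO for model selection, step 2
# refits the selected regressors by OLS to remove shrinkage bias.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(4)
n, p, s = 200, 500, 5                    # n << p, true model has s terms
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)       # step 1: penalized selection
selected = np.flatnonzero(lasso.coef_)   # support chosen by LASSO
post = LinearRegression().fit(X[:, selected], y)  # step 2: unpenalized refit

print("selected variables:", selected)
print("LASSO coefs:     ", np.round(lasso.coef_[selected], 2))  # shrunk
print("post-LASSO coefs:", np.round(post.coef_, 2))             # closer to 1
```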
By: | J. M. C. Santos Silva; Silvana Tenreyro; Frank Windmeijer (Institute for Fiscal Studies and University of Bristol) |
Abstract: | <p>In many economic applications, the variate of interest is non-negative and its distribution is characterized by a mass-point at zero and a long right-tail. Many regression strategies have been proposed to deal with data of this type. Although there has been a long debate in the literature on the appropriateness of different models, formal statistical tests to choose between the competing specifications, or to assess the validity of the preferred model, are not often used in practice. In this paper we propose a novel and simple regression-based specification test that can be used to test these models against each other.</p> |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:20/10&r=ecm |
By: | Schluter, C.; Trede, M. |
Abstract: | While earnings processes are commonly unobservable income flows which evolve in continuous time, observable income data are usually discrete, having been aggregated over time. We consider continuous-time earnings processes, specifically (non-linearly) transformed Ornstein-Uhlenbeck processes, and the associated integrated, i.e. time-aggregated, process. Both processes are characterised, and we show that time aggregation alters important statistical properties. The parameters of the earnings process are estimable by GMM, and the finite-sample properties of the estimator are investigated. Our methods are applied to annual earnings data for the US. It is demonstrated that the model replicates important features of the earnings distribution well. |
Keywords: | integrated non-linearly transformed Ornstein-Uhlenbeck process, temporal aggregation |
JEL: | D31 C01 C22 C51 J31 |
Date: | 2010–07–01 |
URL: | http://d.repec.org/n?u=RePEc:stn:sotoec:1014&r=ecm |
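The time-aggregation point can be seen in a minimal simulation: an Ornstein-Uhlenbeck process simulated via its exact discretization, then compared point-sampled versus period-averaged. The non-linear transformation and the GMM estimation from the paper are not reproduced, and the parameter values are arbitrary.

```python
# A minimal sketch: time aggregation changes the autocorrelation of an OU
# process. With theta = 1 and unit periods, AC(1) of the point-sampled series
# is about exp(-1) = 0.37, while the period-averaged series is more persistent.
import numpy as np

rng = np.random.default_rng(5)
theta, sigma = 1.0, 0.5                  # mean reversion and volatility
dt, steps, periods = 0.02, 50, 2000      # 50 fine steps per unit period
n = steps * periods

x = np.empty(n)
x[0] = 0.0
a = np.exp(-theta * dt)                  # exact OU transition coefficient
sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
for t in range(1, n):
    x[t] = a * x[t - 1] + sd * rng.normal()

sampled = x[::steps]                             # point-in-time observations
aggregated = x.reshape(periods, steps).mean(1)   # time-aggregated observations

def ac1(z):
    z = z - z.mean()
    return (z[1:] @ z[:-1]) / (z @ z)

print("AC(1) point-sampled:", round(ac1(sampled), 3))    # ~ 0.37
print("AC(1) aggregated:   ", round(ac1(aggregated), 3)) # noticeably higher
```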
By: | Theodore W Anderson (Institute for Fiscal Studies and Stanford) |
Abstract: | <p>Consider testing the null hypothesis that a single structural equation has specified coefficients. The alternative hypothesis is that the relevant part of the reduced form matrix has proper rank, that is, that the equation is identified. The usual linear model with normal disturbances is invariant with respect to linear transformations of the endogenous and of the exogenous variables. When the disturbance covariance matrix is known, it can be set to the identity, and the invariance of the endogenous variables is with respect to orthogonal transformations. The likelihood ratio test is invariant with respect to these transformations and is the best invariant test. Furthermore it is admissible in the class of all tests. Any other test has lower power and/or higher significance level.</p> |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:18/10&r=ecm |
By: | Ariel Pakes (Institute for Fiscal Studies and Harvard University) |
Abstract: | <p>Behavioral choice models generate inequalities which, when combined with additional assumptions, can be used as a basis for estimation. This paper considers two sets of such assumptions and uses them in two empirical examples. The second example examines the structure of payments resulting from the upstream interactions in a vertical market. We then mimic the empirical setting for this example in a numerical analysis which computes actual equilibria, examines how their characteristics vary with the market setting, and compares them to the empirical results. The final section uses the numerical results in a Monte Carlo analysis of the robustness of the two approaches to estimation to their underlying assumptions.</p> |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:21/10&r=ecm |
By: | Francisco Peñaranda; Enrique Sentana |
Abstract: | Two main approaches are commonly used to empirically evaluate linear factor pricing models: regression and SDF methods, with centred and uncentred versions of the latter. We show that, unlike standard two-step or iterated GMM procedures, single-step estimators such as continuously updated GMM yield numerically identical values for prices of risk, pricing errors, Jensen's alphas and overidentifying restrictions tests, irrespective of the validity of the model. Therefore, there is arguably a single approach regardless of whether the factors are traded or not, and of whether excess or gross returns are used. We illustrate our results by revisiting Lustig and Verdelhan's (2007) empirical analysis of currency returns. |
Keywords: | CU-GMM, Factor pricing models, Forward premium puzzle, Generalised Empirical Likelihood, Stochastic discount factor. |
JEL: | G11 G12 C12 C13 |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:1229&r=ecm |
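A minimal sketch of the single-step estimator discussed above: CU-GMM for a one-factor centred SDF model on simulated excess returns. Its defining feature is that the weighting matrix is recomputed at every trial parameter value. The data-generating process and normalization below are toy assumptions, not those of the paper.

```python
# A minimal CU-GMM sketch for a linear SDF model m = 1 - b*(f - E[f]),
# with pricing moments E[m * r] = 0 for a vector of excess returns r.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
T, N = 1000, 5
f = rng.normal(0.05, 0.2, T)                    # traded factor
betas = np.linspace(0.5, 1.5, N)
r = 0.05 * betas + np.outer(f - 0.05, betas) + rng.normal(0, 0.1, (T, N))

def cu_objective(b):
    m = 1.0 - b * (f - f.mean())                # centred linear SDF
    g = m[:, None] * r                          # moment functions, T x N
    gbar = g.mean(0)
    S = np.cov(g.T)                             # weight matrix at this b
    return T * gbar @ np.linalg.solve(S, gbar)  # CU-GMM criterion

res = minimize_scalar(cu_objective, bounds=(-20, 20), method="bounded")
print("CU-GMM price of risk b:", round(res.x, 2))   # ~ 1.25 for this DGP
print("J-statistic:", round(res.fun, 2))            # chi2(N-1) under the model
```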
By: | Nicholas Christakis; James Fowler; Guido Imbens (Institute for Fiscal Studies and Harvard University); Karthik Kalyanaraman (Institute for Fiscal Studies and UCL) |
Abstract: | <p>We develop and analyze a tractable empirical model for strategic network formation that can be estimated with data from a single network at a single point in time. We model the network formation as a sequential process where in each period a single randomly selected pair of agents has the opportunity to form a link. Conditional on such an opportunity, a link will be formed if both agents view the link as beneficial to them. They base their decision on their own characateristics, the characteristics of the potential partner, and on features of the current state of the network, such as whether the the two potential partners already have friends in common. A key assumption is that agents do not take into account possible future changes to the network. This assumption avoids complications with the presence of multiple equilibria, and also greatly simplifies the computational burden of anlyzing these models. We use Bayesian markov-chain-monte-carlo methods to obtain draws from the posterior distribution of interest. We apply our methods to a social network of 669 high school students, with, on average, 4.6 friends. We then use the model to evaluate the effect of an alternative assignment to classes on the topology of the network.</p> |
Date: | 2010–06 |
URL: | http://d.repec.org/n?u=RePEc:ifs:cemmap:16/10&r=ecm |
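The sequential formation process in the abstract above is straightforward to simulate. This is a toy sketch with an assumed utility function (homophily, friends in common, a degree cost) rather than the authors' specification, and without the Bayesian MCMC estimation step.

```python
# A minimal simulation of myopic, mutual-consent sequential link formation.
import numpy as np

rng = np.random.default_rng(7)
n_agents, n_periods = 100, 20_000
x = rng.normal(size=n_agents)                    # one observed characteristic
A = np.zeros((n_agents, n_agents), dtype=bool)   # adjacency matrix

def utility(i, j):
    """Toy marginal utility of the link (i, j) for agent i."""
    homophily = -abs(x[i] - x[j])                # prefer similar partners
    common = np.count_nonzero(A[i] & A[j])       # friends in common
    degree_cost = 0.3 * A[i].sum()               # costly to maintain links
    return 0.5 + homophily + 0.5 * common - degree_cost

for _ in range(n_periods):
    i, j = rng.choice(n_agents, size=2, replace=False)  # meeting opportunity
    if not A[i, j] and utility(i, j) > 0 and utility(j, i) > 0:
        A[i, j] = A[j, i] = True                 # link forms by mutual consent

print("average degree:", A.sum() / n_agents)
```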
By: | Vollmer, Sebastian; Holzmann, Hajo; Weisbrod, Julian |
Abstract: | We analyze the cross-national distribution of GDP per capita and its evolution from 1970 to 2003. We argue that peaks are not a suitable measure for distinct growth regimes, because the number of peaks is not invariant under strictly monotonic transformations of the data (e.g. original vs. log scale). Instead, we model the distribution as a finite mixture, and determine its number of components (and hence of distinct growth regimes) from the data by rigorous statistical testing. We find that the distribution appears to have only two components in 1970-1975, but consists of three components from 1976 onwards. The level of GDP per capita stagnated in the poorest component, and the richest component grew much faster than the medium component. These findings empirically confirm the predictions of unified growth theory. |
Keywords: | twin peaks, economic growth, convergence |
JEL: | C12 O11 O47 F01 |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:han:dpaper:dp-452&r=ecm |
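A minimal sketch of the mixture approach, on placeholder data: fit finite normal mixtures to log GDP per capita for different numbers of components and compare. The authors determine the number of components by rigorous statistical testing; BIC is used below only as a simple stand-in.

```python
# Fit normal mixtures with k = 1..4 components to simulated "log GDP" data
# drawn from three regimes, and compare fits by BIC (lower is better).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
log_gdp = np.concatenate([rng.normal(6.5, 0.4, 60),    # poor regime
                          rng.normal(8.0, 0.5, 70),    # middle regime
                          rng.normal(9.8, 0.4, 50)]    # rich regime
                         ).reshape(-1, 1)

for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(log_gdp)
    print(f"components={k}  BIC={gm.bic(log_gdp):.1f}")
```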
By: | Alfarano, Simone; Camacho, Eva; Domènech, Josep |
Abstract: | Our contribution studies the possibility of implementing a genetic algorithm in order to reproduce some characteristics of a simple laboratory experiment with human subjects. The novelty of our paper lies in the estimation of the key parameters of the algorithm and in the analysis of the characteristics of the estimator. |
Keywords: | Estimation; genetic algorithms; experiments |
JEL: | C13 C63 |
Date: | 2010–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:24138&r=ecm |
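For readers unfamiliar with the tool, a generic genetic algorithm sketch follows, with a toy objective and arbitrary tuning parameters; the paper's algorithm, the experiment it mimics, and the estimated key parameters are not reproduced here.

```python
# A minimal genetic algorithm: a population of scalar candidates evolves by
# selection, crossover and mutation toward the maximum of a toy objective.
import numpy as np

rng = np.random.default_rng(9)
fitness = lambda v: -(v - 0.7)**2        # toy objective, maximum at 0.7
pop = rng.uniform(0, 1, 50)              # initial population

for _ in range(100):
    pop = pop[np.argsort(fitness(pop))][25:]   # selection: keep better half
    parents = rng.choice(pop, size=(25, 2))    # crossover: average two parents
    children = parents.mean(axis=1)
    children += rng.normal(0, 0.05, 25)        # mutation: small perturbation
    pop = np.concatenate([pop, children])

print("best candidate:", round(pop[np.argmax(fitness(pop))], 3))  # ~ 0.7
```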
By: | James J. Heckman (University of Chicago, University College Dublin, Yale University and the American Bar Foundation); Seong Hyeok Moon (Department of Economics, University of Chicago); Rodrigo Pinto (Department of Economics, University of Chicago); Peter A. Savelyev (Department of Economics, University of Chicago); Adam Yavitz (Economic Research Center, University of Chicago) |
Abstract: | Social experiments are powerful sources of information about the effectiveness of interventions. In practice, initial randomization plans are almost always compromised. Multiple hypotheses are frequently tested. "Significant" effects are often reported with p-values that do not account for preliminary screening from a large candidate pool of possible effects. This paper develops tools for analyzing data from experiments as they are actually implemented. We apply these tools to analyze the influential HighScope Perry Preschool Program. The Perry program was a social experiment that provided preschool education and home visits to disadvantaged children during their preschool years. It was evaluated by the method of random assignment. Both treatments and controls have been followed from age 3 through age 40. Previous analyses of the Perry data assume that the planned randomization protocol was implemented. In fact, as in many social experiments, the intended randomization protocol was compromised. Accounting for compromised randomization, multiple-hypothesis testing, and small sample sizes, we find statistically significant and economically important program effects for both males and females. We also examine the representativeness of the Perry study. |
Keywords: | early childhood intervention; compromised randomization; social experiment; multiple-hypothesis testing |
JEL: | I21 C93 J15 |
Date: | 2010–07–22 |
URL: | http://d.repec.org/n?u=RePEc:ucd:wpaper:201034&r=ecm |
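A minimal sketch of the kind of permutation-based multiple-hypothesis testing this setting calls for: a single-step maxT adjustment on toy data with a small sample and many outcomes. The paper's tools also account for compromised randomization, which is not shown here.

```python
# Single-step maxT permutation adjustment: compare each observed t-statistic
# to the permutation distribution of the maximum |t| across all outcomes.
import numpy as np

rng = np.random.default_rng(10)
n, k = 60, 8                              # small sample, many outcomes
treat = rng.permutation(np.repeat([0, 1], n // 2))
Y = rng.normal(size=(n, k))
Y[treat == 1, 0] += 1.0                   # only outcome 0 has a true effect

def tstats(y, d):
    """Welch t-statistics for treatment-control differences, per outcome."""
    diff = y[d == 1].mean(0) - y[d == 0].mean(0)
    se = np.sqrt(y[d == 1].var(0, ddof=1) / (d == 1).sum()
                 + y[d == 0].var(0, ddof=1) / (d == 0).sum())
    return diff / se

t_obs = tstats(Y, treat)
max_null = np.array([np.abs(tstats(Y, rng.permutation(treat))).max()
                     for _ in range(2000)])
p_adj = (max_null[:, None] >= np.abs(t_obs)).mean(0)  # familywise-adjusted
print("adjusted p-values:", np.round(p_adj, 3))  # small only for outcome 0
```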
By: | Ludlow, Jorge |
Abstract: | Economic models that incorporate expectations require non-causal time series theory. We provide a general method useful for solving in closed form any forward-looking linear rational expectations multivariate model. An anticipative VARMA model is likely to explain a behavioral relation where a tentative future guides today's action. The work develops general conditions for obtaining the unique stationary closed-form solution, backward or forward, thereby extending the well-known results on causal invertible multivariate models, and shows that to incorporate non-causal models one should rely on complex analysis. |
Keywords: | Anticipative Time Series; anticipative VARMA; anticipative model; backward looking; forward looking; linear processes; linear filter; non-causal model |
JEL: | C32 C50 C22 |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:24139&r=ecm |
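The flavour of the forward solution can be seen in the simplest scalar case, a purely noncausal AR(1); the paper's multivariate VARMA results generalize this expansion.

```latex
% With lag operator L and |\rho| < 1, the noncausal AR(1)
% (1 - \rho L^{-1}) y_t = \varepsilon_t, i.e. y_t - \rho y_{t+1} = \varepsilon_t,
% has the unique stationary solution obtained by expanding forward:
\[
  y_t = (1 - \rho L^{-1})^{-1}\varepsilon_t
      = \sum_{j=0}^{\infty} \rho^{\,j}\,\varepsilon_{t+j},
  \qquad |\rho| < 1,
\]
% so today's value is a discounted sum of future shocks. In the multivariate
% case \rho becomes a coefficient matrix, and the location of the roots of the
% lag polynomial (inside or outside the unit circle) determines whether the
% stationary solution runs backward or forward.
```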
By: | Duo Qin (Queen Mary, University of London) |
Abstract: | This study examines the evolution of econometric research in business cycle analysis during the 1960-90 period. It shows how the research was dominated by an assimilation of the tradition of NBER business cycle analysis by the Haavelmo-Cowles Commission approach, catalysed by time-series statistical methods. Methodological consequences of the assimilation are critically evaluated in light of the meagre achievement of the research in predicting the current global recession. |
Keywords: | Business cycles, NBER, Forecasting |
JEL: | B23 |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp669&r=ecm |
By: | Hospido, Laura (Bank of Spain) |
Abstract: | This paper develops an error components model that is used to examine the impact of job changes on the dynamics and variance of individual log earnings. I use data on work histories drawn from the Panel Study of Income Dynamics (PSID) that make it possible to distinguish between voluntary and involuntary job-to-job changes. The potential endogeneity of job mobility in relation to earnings is circumvented by means of an instrumental variable estimation method that also allows me to control for unobserved individual-job specific heterogeneity. Once individual and job-specific effects are controlled for, persistence within jobs is almost zero, whereas persistence across jobs is significant but small. |
Keywords: | panel data, dynamic models, individual-job specific fixed effects, job changes, individual wages |
JEL: | C23 J31 |
Date: | 2010–07 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp5088&r=ecm |
By: | Buss, Ginters |
Abstract: | I describe preliminary results for a seasonal decomposition procedure based on a modified Hodrick-Prescott (Leser) filter. The procedure is simpler to implement than the two currently most popular seasonal decomposition procedures: the X-11 family of filters developed by the U.S. Census Bureau and SEATS developed by the Bank of Spain. A case study of Latvia's quarterly gross domestic product shows that the procedure is able to extract a stable seasonal component while still allowing for structural changes in seasonality. |
Keywords: | seasonal decomposition; Hodrick-Prescott filter; quarterly GDP |
JEL: | C13 C14 C22 |
Date: | 2010–07–28 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:24133&r=ecm |
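A minimal sketch of the standard Hodrick-Prescott (Leser) filter that the proposed procedure modifies: the trend solves a penalized least-squares problem, min over tau of sum (y - tau)^2 + lambda * sum (second differences of tau)^2, which reduces to a single linear solve. The seasonal modification itself is not reproduced; lambda = 1600 is the conventional quarterly choice, and the data are a toy series.

```python
# The standard HP filter as a linear solve: trend = (I + lam * D'D)^{-1} y,
# where D is the second-difference operator.
import numpy as np

def hp_filter(y, lam=1600.0):
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):               # build second-difference operator
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return trend, y - trend              # trend and remainder

# Toy quarterly series: linear trend + stable seasonal pattern + noise.
rng = np.random.default_rng(11)
t = np.arange(80)
y = 0.5 * t + np.tile([2.0, -1.0, 0.5, -1.5], 20) + rng.normal(0, 0.3, 80)
trend, remainder = hp_filter(y)
print("trend slope approx:", round(np.diff(trend).mean(), 2))  # ~ 0.5
```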