
on Econometrics 
By:  A. Belloni; D. Chen; Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology); Christian Hansen (Institute for Fiscal Studies and Chicago GSB) 
Abstract:  We develop results for the use of LASSO and Post-LASSO methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p, that apply even when p is much larger than the sample size, n. We rigorously develop asymptotic distribution and inference theory for the resulting IV estimators and provide conditions under which these estimators are asymptotically oracle-efficient. In simulation experiments, the LASSO-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument-robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the LASSO-based IV estimator substantially reduces estimated standard errors, allowing one to draw much more precise conclusions about the economic effects of these decisions. Optimal instruments are conditional expectations; in developing the IV results, we also establish a series of new results for LASSO and Post-LASSO estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. Specifically, we develop the asymptotic theory for these estimators that allows for non-Gaussian, heteroscedastic disturbances, which is important for econometric applications. By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for these estimators that are as sharp as in the homoscedastic Gaussian case under the weak condition that log p = o(n^{1/3}). Moreover, as a practical innovation, we provide a fully data-driven method for choosing the user-specified penalty that must be provided in obtaining LASSO and Post-LASSO estimates and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances. 
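The first-stage idea can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the paper's data-driven plug-in penalty is replaced here by cross-validation, and the simulated data and all names are my own.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 500                          # many instruments: p >> n is allowed
Z = rng.standard_normal((n, p))
v = rng.standard_normal(n)               # first-stage error
x = 2.0 * Z[:, 0] + Z[:, 1] + v          # sparse first stage
y = 1.0 * x + v + 0.5 * rng.standard_normal(n)  # x endogenous: error shares v

# First stage: Lasso selects among the p instruments and forms the
# predicted optimal instrument (cross-validated penalty here, for brevity).
x_hat = LassoCV(cv=5).fit(Z, x).predict(Z)

# Second stage: IV using the fitted value as the single instrument,
# beta = (x_hat' y) / (x_hat' x); the true coefficient is 1.
beta_iv = (x_hat @ y) / (x_hat @ x)
```

OLS would be biased upward here because the structural error contains v; the Lasso-selected instrument recovers an estimate close to the true coefficient of 1.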
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:31/10&r=ecm 
By:  Zenetti, German 
Abstract:  In this note on the paper by Jiang, Manchanda & Rossi (2009), I discuss a simple alternative estimation method for the multinomial logit model for aggregated data, the so-called BLP model, named after Berry, Levinsohn & Pakes (1995). The estimation is conducted through a Bayesian approach similar to that of Jiang et al. (2009). In contrast to their method, however, the time-intensive contraction mapping for assessing the mean utility in every iteration step of the estimation procedure is not needed. This is because the likelihood function is computed via a special case of the control function method (Petrin & Train 2002; Park & Gupta 2009), and hence a full random walk MCMC algorithm can be applied. Unlike Park & Gupta (2009), the uncorrelated error, which is explicitly introduced through the control function procedure, is not integrated out but sampled with a random walk MCMC. The proposed procedure makes it possible to use the whole information in the data set for the estimation and, beyond that, accelerates the computation. 
Keywords:  Bayesian estimation; random coefficient logit; aggregate share models 
JEL:  M3 C11 
Date:  2010–11–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:26449&r=ecm 
By:  Ralf Becker; Adam Clements; Robert O'Neill 
Abstract:  In this paper we propose a novel methodology for forecasting variance-covariance matrices (VCM) using kernel estimates. While the popular RiskMetrics methodology can be seen as a special case of our methodology, the generalisation is significant as it allows the researcher to use a number of variables to determine the kernel weights of past VCM. The complexity of the methodology scales with the number of explanatory variables used and not with the size of the VCM. This, together with the automatic positive definiteness of the VCM forecasts, is a major improvement on currently available forecasting methods. An empirical analysis establishes the usefulness of our proposed methodology. 
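The core mechanics — kernel weights on past outer products, driven by a conditioning variable, with positive semi-definiteness coming for free — can be sketched as follows. The Gaussian kernel, the conditioning variable, and all names are illustrative choices of mine, not the authors' specification.

```python
import numpy as np

def kernel_vcm_forecast(returns, z, z_now, bandwidth=1.0):
    """Forecast a variance-covariance matrix (VCM) as a kernel-weighted
    average of past outer products r_t r_t'.  Weights depend on how close
    each period's conditioning variable z_t is to its current value z_now;
    a convex combination of positive semi-definite matrices is itself PSD."""
    w = np.exp(-0.5 * ((np.asarray(z) - z_now) / bandwidth) ** 2)
    w /= w.sum()                                        # weights sum to one
    outer = returns[:, :, None] * returns[:, None, :]   # r_t r_t' for all t
    return np.tensordot(w, outer, axes=1)

rng = np.random.default_rng(1)
r = rng.standard_normal((500, 3))     # 500 periods of 3 asset returns
z = np.abs(r).sum(axis=1)             # a toy conditioning variable
H = kernel_vcm_forecast(r, z, z_now=z[-1])
```

Setting all weights via a single exponentially-decaying function of time would recover a RiskMetrics-style smoother as a special case.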
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:man:cgbcrp:151&r=ecm 
By:  Søren Johansen (Department of Economics, University of Copenhagen and CREATES, University of Aarhus); Katarina Juselius (Department of Economics, University of Copenhagen, University of Aarhus) 
Abstract:  It is well known that if X(t) is a nonstationary process and Y(t) is a linear function of X(t), then cointegration of Y(t) implies cointegration of X(t). We want to find an analogous result for common trends if X(t) is generated by a finite order VAR. We first show that Y(t) has an infinite order VAR representation in terms of its prediction errors, which are a linear process in the prediction error for X(t). We then apply this result to show that the limits of the common trends for Y(t) are linear functions of the common trends for X(t). We illustrate the findings with a small analysis of the term structure of interest rates. 
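Schematically, using the standard Granger representation (the notation is mine, not the paper's):

```latex
X_t = C\sum_{i=1}^{t}\varepsilon_i + u_t, \qquad u_t \ \text{stationary},
\quad\Longrightarrow\quad
Y_t = B X_t = BC\sum_{i=1}^{t}\varepsilon_i + B u_t ,
```

so the common trends of Y(t) are linear functions, via BC, of the common trends of X(t); the paper makes this precise through the infinite-order VAR representation of Y(t) and its prediction errors.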
Keywords:  Cointegration vectors, common trends, prediction errors. 
JEL:  C32 
Date:  2010–10–31 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201072&r=ecm 
By:  George Kapetanios (Queen Mary, University of London); James Mitchell (NIESR); Yongcheol Shin (University of Leeds) 
Abstract:  This paper proposes a new panel model of cross-sectional dependence. The model has a number of potential structural interpretations that relate to economic phenomena such as herding in financial markets. On an econometric level it provides a flexible approach to the modelling of interactions across panel units and can generate endogenous cross-sectional dependence that can resemble such dependence arising in a variety of existing models such as factor or spatial models. We discuss the theoretical properties of the model and ways in which inference can be carried out. We supplement this analysis with a detailed Monte Carlo study and two empirical illustrations. 
Keywords:  Cross-sectional dependence, Nonlinearity, Factor models, Panel models, Fixed effects 
JEL:  C31 C32 C33 G14 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp673&r=ecm 
By:  Marek Jarociński (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany); Albert Marcet (London School of Economics) 
Abstract:  We propose a benchmark prior for the estimation of vector autoregressions: a prior about initial growth rates of the modeled series. We first show that the Bayesian vs frequentist small sample bias controversy is driven by different default initial conditions. These initial conditions are usually arbitrary and our prior serves to replace them in an intuitive way. To implement this prior we develop a technique for translating priors about observables into priors about parameters. We find that our prior makes a big difference for the estimated persistence of output responses to monetary policy shocks in the United States. JEL Classification: C11, C22, C32. 
Keywords:  Vector Autoregression, Initial Condition, Bayesian Estimation, Prior about Growth Rate, Monetary Policy Shocks, Small Sample Distribution, Bias Correction. 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101263&r=ecm 
By:  Goos P.; Gilmour S.G. 
Abstract:  Many factorial experiments yield categorical response data. Moreover, the experiments are often run under a restricted randomization for logistical reasons and/or because of time and cost constraints. The combination of categorical data and restricted randomization necessitates the use of generalized linear mixed models. In this paper, we demonstrate the use of Hasse diagrams for laying out the randomization structure of a complex factorial design involving seven two-level factors, four three-level factors and a five-level factor, and three repeated observations for each experimental unit. The Hasse diagrams form the basis of the mixed model analysis of the ordered categorical data produced by the experiment. We also discuss the added value of categorical data over binary data and difficulties with the estimation of variance components and, consequently, with the statistical inference. Finally, we show how to deal with repeats in the presence of categorical data, and describe a general strategy for building a suitable generalized linear mixed model. 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:ant:wpaper:2010021&r=ecm 
By:  Giovanni Lombardo (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany) 
Abstract:  We show how to use a simple perturbation method to solve nonlinear rational expectation models. Drawing on the applied mathematics literature, we propose a method consisting of series expansions of the nonlinear system around a known solution. The variables are represented in terms of their orders of approximation with respect to a perturbation parameter. The final solution, therefore, is the sum of the different orders. This approach gives formal grounding to the idea that each order of approximation is solved recursively, taking the lower orders of approximation as given. Therefore, this method is not subject to the ambiguity concerning the order of the variables in the resulting state-space representation that has been discussed, for example, by Kim et al. (2008). Provided that the model is locally stable, the approximation technique discussed in this paper delivers stable solutions at any order of approximation. JEL Classification: C63, E0. 
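The recursive, order-by-order logic can be illustrated on a toy scalar equation (entirely my own example, not the paper's model): writing the unknown as a series in the perturbation parameter, each order's coefficient is solved taking the lower orders as given.

```python
import sympy as sp

sigma = sp.symbols('sigma')
y0 = 1   # known solution of the sigma = 0 problem: y - 1 = 0

# Toy model: f(y, sigma) = y - (1/2)*sigma*y**2 - 1 = 0.
# Expand y = y0 + y1*sigma + y2*sigma**2 + ... and collect powers of sigma.
y1, y2 = sp.symbols('y1 y2')
y_series = y0 + y1 * sigma + y2 * sigma**2
f = y_series - sp.Rational(1, 2) * sigma * y_series**2 - 1
coeffs = sp.Poly(sp.expand(f), sigma).all_coeffs()[::-1]  # by power of sigma

# First order: solved on its own.  Second order: uses the first-order result.
sol1 = sp.solve(coeffs[1], y1)[0]
sol2 = sp.solve(coeffs[2].subs(y1, sol1), y2)[0]
print(sol1, sol2)   # 1/2 1/2
```

The exact solution y = (1 - sqrt(1 - 2*sigma))/sigma has Taylor coefficients 1/2 at both orders, confirming the recursion.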
Keywords:  Solving dynamic stochastic general equilibrium models, Perturbation methods, Series expansions, Nonlinear difference equations. 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101264&r=ecm 
By:  Akter, Sonia; Bennett, Jeff 
Abstract:  The numerical certainty scale (NCS) and polychotomous choice (PC) methods are two widely used techniques for measuring preference uncertainty in contingent valuation (CV) studies. The NCS follows a numerical scale and the PC is based on a verbal scale. This report presents results of two experiments that use these preference uncertainty measurement techniques. The first experiment was designed to compare and contrast the uncertainty scores obtained from the NCS and the PC method. The second experiment was conducted to test a preference uncertainty measurement scale that combines verbal expressions with numerical and graphical interpretations: a composite certainty scale (CCS). The construct validity of the certainty scores obtained from these three techniques was tested by estimating three separate ordered probit regression models. The results of the study can be summarised in three key findings. First, the PC method generates a higher proportion of 'yes' responses than the conventional dichotomous choice elicitation format. Second, the CCS method generates a significantly higher proportion of certain responses than the NCS and the PC methods. Finally, the NCS method performs poorly in terms of construct validity. Overall, the verbal measures perform better than the numerical measure. The CCS is a promising method for measuring preference uncertainty in CV studies. To better understand its strengths and weaknesses, however, further empirical applications are needed. 
Keywords:  preference uncertainty, contingent valuation, numerical certainty scale, polychotomous choice method, composite certainty scale, climate change, Australia, Environmental Economics and Policy, Research Methods/Statistical Methods, Q51, Q54 
Date:  2010–01 
URL:  http://d.repec.org/n?u=RePEc:ags:eerhrr:94942&r=ecm 
By:  Franziska Schulze 
Abstract:  This paper proposes a spatial panel model for German matching functions to avoid the possibly biased and inefficient estimates that result from ignoring spatial dependence. We provide empirical evidence for the presence of spatial dependencies in matching data. Based on an official data set containing monthly information for 176 local employment offices, we show that neglecting spatial dependencies in the data results in overestimated coefficients. To incorporate spatial information into our model, we use data on commuting relations between local employment offices. Furthermore, our results suggest that a dynamic specification is more appropriate for matching functions. 
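The use of commuting relations as spatial information can be sketched as building a row-standardised weight matrix from commuting flows and forming spatial lags of a regional variable. The numbers and names below are illustrative, not from the authors' data.

```python
import numpy as np

def spatial_lag(flows, x):
    """Row-standardise a matrix of commuting flows between regions into a
    spatial weight matrix W (zero diagonal, rows summing to one), then
    return the spatial lag W @ x of a regional variable x."""
    W = np.asarray(flows, dtype=float)
    np.fill_diagonal(W, 0.0)                  # no self-weight
    row_sums = W.sum(axis=1, keepdims=True)
    W = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
    return W @ x

# Three toy regions with asymmetric commuting flows (illustrative numbers).
flows = [[0, 10, 5],
         [8,  0, 2],
         [1,  4, 0]]
vacancies = np.array([100.0, 50.0, 20.0])
print(spatial_lag(flows, vacancies))   # → [40. 84. 60.]
```

Each region's lagged value is a flow-weighted average of its neighbours, which is the kind of regressor a spatial panel specification would add to the matching function.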
Keywords:  Empirical Matching, Geographic Labor Mobility, Spatial Dependence, Regional Unemployment 
JEL:  C21 C23 J64 J63 R12 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2010054&r=ecm 