nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒11‒13
ten papers chosen by
Sune Karlsson
Orebro University

  1. Sparse models and methods for optimal instruments with an application to eminent domain By A. Belloni; D. Chen; Victor Chernozhukov; Christian Hansen
  2. A Note on 'Bayesian analysis of the random coefficient model using aggregate data', an alternative approach By Zenetti, German
  3. A Kernel Technique for Forecasting the Variance-Covariance Matrix By Ralf Becker; Adam Clements; Robert O'Neill
  4. An invariance property of the common trends under linear transformations of the data By Søren Johansen; Katarina Juselius
  5. A Nonlinear Panel Model of Cross-sectional Dependence By George Kapetanios; James Mitchell; Yongcheol Shin
  6. Autoregressions in small samples, priors about observables and initial conditions By Marek Jarociński; Albert Marcet
  7. Analyzing Categorical Data from Split-Plot and Other Multi-Stratum Experiments By Goos P.; Gilmour S.G.
  8. On approximating DSGE models by series expansions By Giovanni Lombardo
  9. Testing construct validity of verbal versus numerical measures of preference uncertainty in contingent valuation By Akter, Sonia; Bennett, Jeff
  10. Spatial Dependencies in German Matching Functions By Franziska Schulze

  1. By: A. Belloni; D. Chen; Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology); Christian Hansen (Institute for Fiscal Studies and Chicago GSB)
    Abstract: We develop results for the use of LASSO and Post-LASSO methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p, that apply even when p is much larger than the sample size, n. We rigorously develop asymptotic distribution and inference theory for the resulting IV estimators and provide conditions under which these estimators are asymptotically oracle-efficient. In simulation experiments, the LASSO-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument-robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the LASSO-based IV estimator substantially reduces estimated standard errors, allowing one to draw much more precise conclusions about the economic effects of these decisions. Optimal instruments are conditional expectations, and in developing the IV results we also establish a series of new results for LASSO and Post-LASSO estimators of non-parametric conditional expectation functions which are of independent theoretical and practical interest. Specifically, we develop the asymptotic theory for these estimators that allows for non-Gaussian, heteroscedastic disturbances, which is important for econometric applications. By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for these estimators that are as sharp as in the homoscedastic Gaussian case under the weak condition that log p = o(n^(1/3)). Moreover, as a practical innovation, we provide a fully data-driven method for choosing the user-specified penalty that must be provided in obtaining LASSO and Post-LASSO estimates, and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances.
    Date: 2010–10
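A minimal numerical sketch of the idea in this abstract: run LASSO on many instruments to select a sparse first stage, refit OLS on the selected instruments (Post-LASSO), and use the fitted values as the estimated optimal instrument in IV. The simple coordinate-descent implementation, the rule-of-thumb penalty, and all variable names are illustrative assumptions, not the authors' code or their data-driven penalty choice.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent LASSO (illustrative sketch)."""
    n, p = X.shape
    b = np.zeros(p)
    col_norm2 = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding column j, then soft-threshold
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm2[j]
    return b

rng = np.random.default_rng(0)
n, p = 400, 60                           # many instruments, few relevant
Z = rng.standard_normal((n, p))
pi = np.zeros(p)
pi[:3] = [1.0, 0.8, 0.6]                 # sparse first stage
v = rng.standard_normal(n)
x = Z @ pi + v                           # endogenous regressor
e = 0.8 * v + rng.standard_normal(n)     # structural error correlated with x
beta_true = 2.0
y = beta_true * x + e

lam = 2.0 * np.sqrt(n * np.log(p))       # rough rule-of-thumb penalty
S = np.flatnonzero(lasso_cd(Z, x, lam))  # instruments selected by LASSO
pi_post, *_ = np.linalg.lstsq(Z[:, S], x, rcond=None)  # Post-LASSO refit
x_hat = Z[:, S] @ pi_post                # estimated optimal instrument
beta_iv = (x_hat @ y) / (x_hat @ x)      # IV estimate using x_hat
```

With this design, naive OLS of y on x is biased upward by the correlated error, while beta_iv should land near the true value of 2.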
  2. By: Zenetti, German
    Abstract: In this note on Jiang, Manchanda & Rossi (2009), I discuss a simple alternative method for estimating the multinomial logit model for aggregated data, the so-called BLP model, named after Berry, Levinsohn & Pakes (1995). The estimation is Bayesian, similar to Jiang et al. (2009), but unlike their approach it does not require the time-intensive contraction mapping that recovers the mean utility in every iteration of the estimation procedure. This is because the likelihood function is computed via a special case of the control function method (Petrin & Train 2002; Park & Gupta 2009), and hence a full random-walk MCMC algorithm can be applied. In contrast to Park & Gupta (2009), the uncorrelated error that the control function procedure explicitly introduces is not integrated out but is sampled with a random-walk MCMC. The proposed procedure makes it possible to use all of the information in the data set and, in addition, accelerates the computation.
    Keywords: Bayesian estimation; random coefficient logit; aggregate share models
    JEL: M3 C11
    Date: 2010–11–05
  3. By: Ralf Becker; Adam Clements; Robert O'Neill
    Abstract: In this paper we propose a novel methodology for forecasting variance-covariance matrices (VCM) using kernel estimates. While the popular RiskMetrics methodology can be seen as a special case of our methodology, the generalisation is significant as it allows the researcher to use a number of variables to determine the kernel weights of past VCMs. The complexity of the methodology scales with the number of explanatory variables used and not with the size of the VCM. This, together with the automatic positive definiteness of the VCM forecasts, is a major improvement over currently available forecasting methods. An empirical analysis establishes the usefulness of the proposed methodology.
    Date: 2010
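A small sketch of the kind of estimator this abstract describes: forecast the variance-covariance matrix as a kernel-weighted average of past outer products of returns, with weights driven by a conditioning variable. The Gaussian kernel, the choice of the time index as conditioning variable (which mimics a RiskMetrics-style decay scheme), and all names are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def kernel_vcm_forecast(returns, z, z_now, bandwidth):
    """Kernel-weighted average of past outer products r_t r_t'.

    Past periods are weighted by how close their conditioning variable
    z_t is to its current value z_now; choosing z_t = t gives weights
    that decay into the past, in the spirit of RiskMetrics smoothing.
    """
    w = np.exp(-0.5 * ((z - z_now) / bandwidth) ** 2)  # Gaussian kernel
    w = w / w.sum()
    # weighted sum of PSD rank-one matrices -> forecast is PSD by construction
    return np.einsum("t,ti,tj->ij", w, returns, returns)

rng = np.random.default_rng(1)
T, k = 500, 5
r = rng.standard_normal((T, k))          # simulated return panel
z = np.arange(T, dtype=float)            # condition on the time index
vcm = kernel_vcm_forecast(r, z, z_now=float(T), bandwidth=30.0)
```

Note that positive semi-definiteness of the forecast never has to be imposed: it is inherited from the rank-one building blocks, which is the property the abstract highlights.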
  4. By: Søren Johansen (Department of Economics, University of Copenhagen and CREATES, University of Aarhus); Katarina Juselius (Department of Economics, University of Copenhagen, University of Aarhus)
    Abstract: It is well known that if X(t) is a nonstationary process and Y(t) is a linear function of X(t), then cointegration of Y(t) implies cointegration of X(t). We want to find an analogous result for common trends when X(t) is generated by a finite-order VAR. We first show that Y(t) has an infinite-order VAR representation in terms of its prediction errors, which are a linear process in the prediction errors for X(t). We then apply this result to show that the limits of the common trends for Y(t) are linear functions of the common trends for X(t). We illustrate the findings with a small analysis of the term structure of interest rates.
    Keywords: Cointegration vectors, common trends, prediction errors.
    JEL: C32
    Date: 2010–10–31
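The flavor of the result can be seen from a schematic common-trends (Granger-type) representation; this is a sketch of the intuition, not the paper's precise statement:

```latex
% schematic: common trends of X(t) and of a linear transformation Y(t) = A X(t)
X(t) = C \sum_{i=1}^{t} \varepsilon_i + \text{(stationary terms)}, \qquad
Y(t) = A X(t) = (AC) \sum_{i=1}^{t} \varepsilon_i + \text{(stationary terms)}
```

so the common trends of Y(t) are the linear transforms, through AC, of the common trends of X(t). The paper's contribution is to establish the analogous statement rigorously via the infinite-order VAR representation of Y(t) and its prediction errors.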
  5. By: George Kapetanios (Queen Mary, University of London); James Mitchell (NIESR); Yongcheol Shin (University of Leeds)
    Abstract: This paper proposes a new panel model of cross-sectional dependence. The model has a number of potential structural interpretations that relate to economic phenomena such as herding in financial markets. On an econometric level it provides a flexible approach to the modelling of interactions across panel units and can generate endogenous cross-sectional dependence that can resemble such dependence arising in a variety of existing models such as factor or spatial models. We discuss the theoretical properties of the model and ways in which inference can be carried out. We supplement this analysis with a detailed Monte Carlo study and two empirical illustrations.
    Keywords: Cross-sectional dependence, Nonlinearity, Factor models, Panel models, Fixed effects
    JEL: C31 C32 C33 G14
    Date: 2010–11
  6. By: Marek Jarociński (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.); Albert Marcet (London School of Economics.)
    Abstract: We propose a benchmark prior for the estimation of vector autoregressions: a prior about initial growth rates of the modeled series. We first show that the Bayesian vs frequentist small sample bias controversy is driven by different default initial conditions. These initial conditions are usually arbitrary and our prior serves to replace them in an intuitive way. To implement this prior we develop a technique for translating priors about observables into priors about parameters. We find that our prior makes a big difference for the estimated persistence of output responses to monetary policy shocks in the United States.
    Keywords: Vector Autoregression, Initial Condition, Bayesian Estimation, Prior about Growth Rate, Monetary Policy Shocks, Small Sample Distribution, Bias Correction.
    JEL: C11 C22 C32
    Date: 2010–11
  7. By: Goos P.; Gilmour S.G.
    Abstract: Many factorial experiments yield categorical response data. Moreover, the experiments are often run under a restricted randomization for logistical reasons and/or because of time and cost constraints. The combination of categorical data and restricted randomization necessitates the use of generalized linear mixed models. In this paper, we demonstrate the use of Hasse diagrams for laying out the randomization structure of a complex factorial design involving seven two-level factors, four three-level factors and a five-level factor, and three repeated observations for each experimental unit. The Hasse diagrams form the basis of the mixed model analysis of the ordered categorical data produced by the experiment. We also discuss the added value of categorical data over binary data and difficulties with the estimation of variance components and, consequently, with the statistical inference. Finally, we show how to deal with repeats in the presence of categorical data, and describe a general strategy for building a suitable generalized linear mixed model.
    Date: 2010–09
  8. By: Giovanni Lombardo (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany.)
    Abstract: We show how to use a simple perturbation method to solve non-linear rational expectations models. Drawing on the applied mathematics literature, we propose a method consisting of series expansions of the non-linear system around a known solution. The variables are represented in terms of their orders of approximation with respect to a perturbation parameter. The final solution, therefore, is the sum of the different orders. This approach gives formal grounding to the idea that each order of approximation is solved recursively, taking the lower orders of approximation as given. Consequently, this method is not subject to the ambiguity concerning the order of the variables in the resulting state-space representation that has been discussed, for example, by Kim et al. (2008). Provided that the model is locally stable, the approximation technique discussed in this paper delivers stable solutions at any order of approximation.
    Keywords: Solving dynamic stochastic general equilibrium models, Perturbation methods, Series expansions, Non-linear difference equations.
    JEL: C63 E0
    Date: 2010–11
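A toy illustration of the recursive order-by-order logic the abstract describes (a scalar fixed-point equation, not the paper's DSGE algorithm): solve x = a + eps*x^2 by writing x as a power series in eps; the coefficient at each order is determined entirely by lower orders, so the orders are solved recursively given the ones below.

```python
import math

def series_solution(a, order):
    """Solve x = a + eps * x**2 as a power series x = sum_n c[n] * eps**n.

    Matching powers of eps: c[0] = a, and for n >= 1 the coefficient
    c[n] = sum_{i+j=n-1} c[i]*c[j] depends only on lower-order
    coefficients, so each order is solved taking the lower orders as given.
    """
    c = [float(a)]
    for n in range(1, order + 1):
        c.append(sum(c[i] * c[n - 1 - i] for i in range(n)))
    return c

# Compare the truncated series with the exact root of the quadratic,
# x(eps) = (1 - sqrt(1 - 4*a*eps)) / (2*eps), for a small perturbation.
a, eps = 1.0, 0.01
c = series_solution(a, 6)
approx = sum(cn * eps**n for n, cn in enumerate(c))
exact = (1 - math.sqrt(1 - 4 * a * eps)) / (2 * eps)
```

For a = 1 the coefficients are the Catalan numbers, and the truncated sum agrees with the exact root to high precision for small eps.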
  9. By: Akter, Sonia; Bennett, Jeff
    Abstract: The numerical certainty scale (NCS) and polychotomous choice (PC) methods are two widely used techniques for measuring preference uncertainty in contingent valuation (CV) studies. The NCS follows a numerical scale and the PC is based on a verbal scale. This report presents the results of two experiments that use these preference uncertainty measurement techniques. The first experiment was designed to compare and contrast the uncertainty scores obtained from the NCS and PC methods. The second experiment was conducted to test a preference uncertainty measurement scale that combines verbal expressions with numerical and graphical interpretations: a composite certainty scale (CCS). The construct validity of the certainty scores obtained from these three techniques was tested by estimating three separate ordered probit regression models. The results of the study can be summarised in three key findings. First, the PC method generates a higher proportion of 'yes' responses than the conventional dichotomous choice elicitation format. Second, the CCS method generates a significantly higher proportion of certain responses than the NCS and PC methods. Finally, the NCS method performs poorly in terms of construct validity. Overall, the verbal measures perform better than the numerical measure. The CCS is a promising method for measuring preference uncertainty in CV studies; however, further empirical applications are needed to better understand its strengths and weaknesses.
    Keywords: preference uncertainty, contingent valuation, numerical certainty scale, polychotomous choice method, composite certainty scale, climate change, Australia, Environmental Economics and Policy, Research Methods/Statistical Methods
    JEL: Q51 Q54
    Date: 2010–01
  10. By: Franziska Schulze
    Abstract: This paper proposes a spatial panel model for German matching functions to avoid the possibly biased and inefficient estimates that arise when spatial dependence is ignored. We provide empirical evidence for the presence of spatial dependencies in matching data. Based on an official data set containing monthly information for 176 local employment offices, we show that neglecting spatial dependencies in the data results in overestimated coefficients. To incorporate spatial information into our model, we use data on commuting relations between local employment offices. Furthermore, our results suggest that a dynamic specification is more appropriate for matching functions.
    Keywords: Empirical Matching, Geographic Labor Mobility, Spatial Dependence, Regional Unemployment
    JEL: C21 C23 J64 J63 R12
    Date: 2010–11
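A minimal sketch of how commuting relations can carry spatial information into such a model: build a row-standardized spatial weight matrix from commuting flows and form the spatial lag of a regional variable, which can then enter the matching function as a regressor. The flow numbers and variable names below are made up for illustration; they are not the official German data.

```python
import numpy as np

# Hypothetical commuting flows among 4 regions (row i -> column j)
flows = np.array([[0.0, 30.0,  5.0,  0.0],
                  [30.0, 0.0, 10.0,  5.0],
                  [5.0, 10.0,  0.0, 20.0],
                  [0.0,  5.0, 20.0,  0.0]])
W = flows / flows.sum(axis=1, keepdims=True)  # row-standardized weights
m = np.array([2.0, 3.0, 1.5, 2.5])            # e.g. log matches by region
Wm = W @ m                                    # spatial lag: commuting-weighted
                                              # average of neighbouring regions
```

Each element of Wm is a weighted average of the other regions' values, with weights proportional to commuting intensity, which is how spatial dependence enters the regression.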

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.