nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒08‒21
sixteen papers chosen by
Sune Karlsson
Örebro University

  1. Specification Tests of Parametric Dynamic Conditional Quantiles By Juan Carlos Escanciano; Carlos Velasco
  2. GEL methods for non-smooth moment indicators By Paulo Parente; Richard Smith
  3. Treating missing values in INAR(1) models By Andersson, Jonas; Karlis, Dimitris
  4. Improving point and interval estimates of monotone functions by rearrangement By Victor Chernozhukov; Ivan Fernandez-Val; Alfred Galichon
  5. Improved Jive Estimators for Overidentified Linear Models with and without Heteroskedasticity By Daniel A. Ackerberg; Paul J. Devereux
  6. Zero variance in Markov chain Monte Carlo with an application to credit risk estimation By Tenconi Paolo
  7. Identification with imperfect instruments By Aviv Nevo; Adam Rosen
  8. More on confidence intervals for partially identified parameters By Jörg Stoye
  9. Sharp identification regions in games By Arie Beresteanu; Ilya Molchanov; Francesca Molinari
  10. Testing for stochastic monotonicity By Sokbae 'Simon' Lee; Oliver Linton; Yoon-Jae Whang
  11. A goodness-of-fit test for copulas By Prokhorov, Artem
  12. Exploring nonlinearity with random field regression By Derek Bond; Michael J Harrison; Edward J O’Brien
  13. The Hessian Method (Highly Efficient State Smoothing, In a Nutshell) By McCAUSLAND, William
  14. Generating functions and short recursions, with applications to the moments of quadratic forms in noncentral normal vectors By Grant Hillier; Raymond Kan; Xiaolu Wang
  15. Competing methods for representing random taste heterogeneity in discrete choice models By Fosgerau, Mogens; Hess, Stephane
  16. Recent Developments in the Econometrics of Program Evaluation By Guido M. Imbens; Jeffrey M. Wooldridge

  1. By: Juan Carlos Escanciano (Indiana University Bloomington); Carlos Velasco (Universidad Carlos III de Madrid)
    Abstract: This article proposes omnibus specification tests of parametric dynamic quantile regression models. In contrast to existing procedures, we allow for a flexible and general specification framework in which possibly a continuum of quantiles is simultaneously specified. This is the case in many econometric applications, for both time series and cross-section data, that require a global diagnostic tool. We study the asymptotic distribution of the test statistics under fairly weak conditions on the serial dependence in the underlying data-generating process. It turns out that the asymptotic null distribution depends on the data-generating process and the hypothesized model. We propose a subsampling procedure for approximating the asymptotic critical values of the tests. An appealing property of the proposed tests is that they do not require estimation of the nonparametric (conditional) sparsity function. A Monte Carlo study compares the proposed tests and shows that the asymptotic results provide good approximations for small sample sizes. Finally, an application to some European stock indexes provides evidence that our methodology is a powerful and flexible alternative to standard backtesting procedures for evaluating market risk, using information from a range of quantiles in the lower tail of returns.
    JEL: C14 C52
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:inu:caeprp:2008-021&r=ecm
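    A minimal sketch of the generic subsampling scheme the abstract relies on for critical values. All names are illustrative, and a simple scalar statistic stands in for the authors' quantile-specification statistic:

      import numpy as np

      def subsampling_critical_value(data, statistic, b, alpha=0.05):
          # Recompute the statistic on every overlapping block of length b
          # and take the (1 - alpha) empirical quantile as the critical value.
          n = len(data)
          stats = [statistic(data[i:i + b]) for i in range(n - b + 1)]
          return np.quantile(stats, 1 - alpha)

      # Toy usage: sqrt(m)*|mean| as the statistic, applied to an AR(1) series.
      rng = np.random.default_rng(0)
      x = np.zeros(500)
      for t in range(1, 500):
          x[t] = 0.5 * x[t - 1] + rng.standard_normal()
      stat = lambda s: np.sqrt(len(s)) * abs(np.mean(s))
      print(stat(x) > subsampling_critical_value(x, stat, b=50))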
  2. By: Paulo Parente; Richard Smith (Institute for Fiscal Studies and University of Cambridge)
    Abstract: This paper considers the first-order large-sample properties of the GEL class of estimators for models specified by non-smooth moment indicators. The GEL class includes a number of estimators recently introduced as alternatives to the efficient GMM estimator, which may suffer from substantial biases in finite samples. These include EL, ET and the CUE. This paper also establishes the validity, in the non-smooth case, of tests for overidentifying restrictions and specification originally suggested for smooth moment indicators. In particular, a number of these tests avoid the need to estimate the Jacobian matrix, which may be problematic for the sample sizes typically encountered in practice.
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:19/08&r=ecm
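    For orientation, a minimal sketch of the CUE, one member of the GEL class named above, with smooth overidentified moments (illustrative names; the paper's contribution, extending the theory to non-smooth indicators, is not attempted here):

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(1)
      x = rng.normal(1.5, 1.0, size=500)

      def cue_objective(theta):
          # Two moments valid when x ~ N(theta, 1):
          # E[x - theta] = 0 and E[x^2 - theta^2 - 1] = 0.
          g = np.column_stack([x - theta, x**2 - theta**2 - 1])
          gbar = g.mean(axis=0)
          omega = np.cov(g.T)              # continuously updated weight matrix
          return gbar @ np.linalg.solve(omega, gbar)

      res = minimize_scalar(cue_objective, bounds=(0.0, 3.0), method='bounded')
      print(res.x)                          # close to the true mean 1.5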
  3. By: Andersson, Jonas (Dept. of Finance and Management Science, Norwegian School of Economics and Business Administration); Karlis, Dimitris (Department of Statistics, Athens University of Economics and Business)
    Abstract: Time series models for count data have attracted increasing interest in recent years. The existing literature refers to the case of fully observed data. In the present paper, methods for estimating the parameters of the first-order integer-valued autoregressive model in the presence of missing data are proposed. The first method maximizes a conditional likelihood constructed from the observed data using the k-step-ahead conditional distributions to account for the gaps. The second approach is based on an iterative scheme in which missing values are imputed in order to update the parameter estimates. The first method is useful when the predictive distributions have simple forms. We derive this approach in full detail for the case where the innovations follow a finite mixture of Poisson distributions. The second method is applicable when closed-form expressions for the conditional likelihood are unavailable or hard to derive. Simulation results and comparisons of the methods are reported. The proposed methods are applied to a data set concerning syndromic surveillance during the Athens 2004 Olympic Games.
    Keywords: Imputation; Markov Chain EM algorithm; mixed Poisson; discrete valued time series
    JEL: C32
    Date: 2008–08–13
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2008_014&r=ecm
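    A minimal simulation sketch of the INAR(1) recursion the paper builds on, assuming binomial thinning and plain Poisson innovations (the authors' estimators, using k-step-ahead conditional distributions and an EM-type imputation scheme, are not reproduced):

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate_inar1(n, alpha, lam):
          # X_t = alpha o X_{t-1} + eps_t, where "o" is binomial thinning
          # (each of the X_{t-1} counts survives with probability alpha)
          # and eps_t ~ Poisson(lam).
          x = np.empty(n, dtype=int)
          x[0] = rng.poisson(lam / (1 - alpha))   # start at the stationary mean
          for t in range(1, n):
              x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
          return x

      x = simulate_inar1(200, alpha=0.6, lam=2.0)
      # One-step conditional mean, the simplest building block of the
      # k-step-ahead predictive distributions used to bridge gaps:
      x_pred = 0.6 * x[:-1] + 2.0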
  4. By: Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology); Ivan Fernandez-Val; Alfred Galichon (Institute for Fiscal Studies and Ecole Polytechnique)
    Abstract: Suppose that a target function is monotonic, namely weakly increasing, and that an original estimate of this target function is available which is not weakly increasing. Many common estimation methods used in statistics produce such estimates. We show that these estimates can always be improved with no harm by using rearrangement techniques: the rearrangement methods, univariate and multivariate, transform the original estimate into a monotonic estimate, and the resulting estimate is closer to the true curve in common metrics than the original estimate. The improvement property of the rearrangement also extends to the construction of confidence bands for monotone functions. If l and u are the lower and upper endpoint functions of a simultaneous confidence interval [l,u] that covers the true function with probability 1-α, then the rearranged confidence interval, defined by the rearranged lower and upper endpoint functions, is shorter in length in common norms than the original interval and covers the true function with probability greater than or equal to 1-α. We illustrate the results with a computational example and an empirical example dealing with age-height growth charts. Please note: this paper is a revised version of cemmap working paper CWP09/07.
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:17/08&r=ecm
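    In the univariate case on a grid, the rearrangement is simply sorting the estimated values. A minimal sketch of the improvement property (the same sorting applies separately to the endpoint functions of a confidence band):

      import numpy as np

      grid = np.linspace(0, 1, 100)
      target = grid**2                          # true monotone function
      rough = target + 0.1 * np.sin(25 * grid)  # non-monotone original estimate
      mono = np.sort(rough)                     # monotone rearrangement

      # The rearranged estimate is weakly closer to the target in any Lp norm:
      for p in (1, 2):
          assert np.linalg.norm(mono - target, p) <= np.linalg.norm(rough - target, p)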
  5. By: Daniel A. Ackerberg (UCLA); Paul J. Devereux (University College Dublin)
    Abstract: We introduce two simple new variants of the Jackknife Instrumental Variables (JIVE) estimator for overidentified linear models and show that they are superior to the existing JIVE estimator, significantly improving on its small sample bias properties. We also compare our new estimators to existing Nagar (1959) type estimators. We show that, in models with heteroskedasticity, our estimators have superior properties to both the Nagar estimator and the related B2SLS estimator suggested in Donald and Newey (2001). These theoretical results are verified in a set of Monte Carlo experiments and then applied to estimating the returns to schooling using actual data.
    JEL: C31 J24
    Date: 2008–08–12
    URL: http://d.repec.org/n?u=RePEc:ucn:wpaper:200817&r=ecm
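    A minimal sketch of the basic JIVE construction the paper improves on: each observation is instrumented by its leave-one-out first-stage fitted value (illustrative names; the authors' improved variants modify this construction and are not reproduced):

      import numpy as np

      def jive1(y, X, Z):
          # Leave-one-out first-stage fits via the hat-matrix identity:
          # Xhat_(-i),i = (fitted_i - h_i * X_i) / (1 - h_i).
          P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
          h = np.diag(P)
          Xhat = (P @ X - h[:, None] * X) / (1 - h)[:, None]
          return np.linalg.solve(Xhat.T @ X, Xhat.T @ y)

      rng = np.random.default_rng(3)
      n = 1000
      Z = rng.standard_normal((n, 3))            # three instruments
      u = rng.standard_normal(n)                 # error correlated with x
      x = Z @ np.array([0.5, 0.3, 0.2]) + u + rng.standard_normal(n)
      y = 1.0 * x + u
      print(jive1(y, x[:, None], Z))             # close to the true coefficient 1.0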
  6. By: Tenconi Paolo (Department of Economics, University of Insubria, Italy)
    Abstract: We propose a general-purpose variance reduction technique for Markov chain Monte Carlo estimators based on the Zero-Variance principle introduced in the physics literature by Assaraf and Caffarel (1999). The potential of the new idea is illustrated with some toy examples and a real application to Bayesian inference for credit risk estimation.
    Keywords: Markov chain Monte Carlo, Metropolis-Hastings algorithm, Variance reduction, Zero-Variance principle.
    Date: 2008–04
    URL: http://d.repec.org/n?u=RePEc:ins:quaeco:qf0803&r=ecm
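    A toy illustration of the zero-variance idea, assuming a first-order control variate built from the score of the target (not necessarily the construction used in the paper). For this Gaussian example the optimal coefficient removes the Monte Carlo error entirely:

      import numpy as np

      rng = np.random.default_rng(4)

      # Random-walk Metropolis draws from N(0, 1).
      x = np.zeros(20000)
      for t in range(1, len(x)):
          prop = x[t - 1] + 2.0 * rng.standard_normal()
          if np.log(rng.uniform()) < 0.5 * (x[t - 1]**2 - prop**2):
              x[t] = prop
          else:
              x[t] = x[t - 1]

      f = x                         # estimate E[x] = 0
      score = -x                    # d/dx log pi(x) for pi = N(0,1); mean zero under pi
      a = -np.cov(score, f)[0, 1] / np.var(score, ddof=1)  # variance-minimizing coefficient
      f_zv = f + a * score          # here a = 1 up to rounding, so f_zv collapses to ~0
      print(f.mean(), f_zv.mean(), f_zv.var())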
  7. By: Aviv Nevo (Institute for Fiscal Studies and Berkeley); Adam Rosen (Institute for Fiscal Studies and University College London)
    Abstract: Dealing with endogenous regressors is a central challenge of applied research. The standard solution is to use instrumental variables that are assumed to be uncorrelated with unobservables. We instead assume (i) the correlation between the instrument and the error term has the same sign as the correlation between the endogenous regressor and the error term, and (ii) that the instrument is less correlated with the error term than is the endogenous regressor. Using these assumptions, we derive analytic bounds for the parameters. We demonstrate the method in two applications.
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:16/08&r=ecm
  8. By: Jörg Stoye (Institute for Fiscal Studies and New York University)
    Abstract: This paper extends Imbens and Manski's (2004) analysis of confidence intervals for interval-identified parameters. For their final result, Imbens and Manski implicitly assume superefficient estimation of a nuisance parameter. This appears to have gone unnoticed before, and it limits the result's applicability. I re-analyze the problem both with assumptions that merely weaken the superefficiency condition and with assumptions that remove it altogether. Imbens and Manski's confidence region is found to be valid under weaker assumptions than theirs, yet superefficiency is required. I also provide a different confidence interval that is valid under superefficiency but can be adapted to the general case, in which case it embeds a specification test for nonemptiness of the identified set. A methodological contribution is to notice that the difficulty of inference comes from a boundary problem regarding a nuisance parameter, clarifying the connection to other work on partial identification.
    JEL: C10 C14
    Date: 2008–04
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:11/08&r=ecm
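    For reference, a sketch of the baseline Imbens-Manski (2004) interval that the paper re-examines; the critical value interpolates between the one- and two-sided normal quantiles as the estimated identified set [lo, hi] widens (the paper's refinements concerning superefficiency are not reproduced):

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import brentq

      def imbens_manski_ci(lo, hi, se_lo, se_hi, alpha=0.05):
          # Solve Phi(c + delta/max(se)) - Phi(-c) = 1 - alpha for c,
          # then widen each endpoint by c standard errors.
          delta = max(hi - lo, 0.0)
          s = max(se_lo, se_hi)
          c = brentq(lambda c: norm.cdf(c + delta / s) - norm.cdf(-c) - (1 - alpha),
                     0.0, norm.ppf(1 - alpha / 2) + 1.0)
          return lo - c * se_lo, hi + c * se_hi

      print(imbens_manski_ci(0.2, 0.5, 0.05, 0.04))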
  9. By: Arie Beresteanu (Institute for Fiscal Studies and Duke); Ilya Molchanov; Francesca Molinari
    Abstract: We study identification in static, simultaneous-move finite games of complete information, where the presence of multiple Nash equilibria may lead to partial identification of the model parameters. The identification regions for these parameters proposed in the related literature are known not to be sharp. Using the theory of random sets, we show that the sharp identification region can be obtained as the set of minimizers of the distance from the conditional distribution of the game's outcomes given covariates to the conditional Aumann expectation, given covariates, of a properly defined random set. This is the random set of probability distributions over action profiles given profit shifters implied by mixed strategy Nash equilibria. The sharp identification region can be approximated arbitrarily accurately through a finite number of moment inequalities based on the support function of the conditional Aumann expectation. When only pure strategy Nash equilibria are played, the sharp identification region is exactly determined by a finite number of moment inequalities. We discuss how our results can be extended to other solution concepts, such as correlated equilibrium or rationality and rationalizability. We show that calculating the sharp identification region using our characterization is computationally feasible. We also provide a simple algorithm which finds the set of inequalities that need to be checked in order to ensure sharpness. We use examples analyzed in the literature to illustrate the gains in identification afforded by our method.
    Keywords: Identification, Random Sets, Aumann Expectation, Support Function, Capacity Functional, Normal Form Games, Inequality Constraints.
    JEL: C14 C72
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:15/08&r=ecm
  10. By: Sokbae 'Simon' Lee (Institute for Fiscal Studies and University College London); Oliver Linton (Institute for Fiscal Studies and London School of Economics); Yoon-Jae Whang (Institute for Fiscal Studies and Seoul National University)
    Abstract: We propose a test of the hypothesis of stochastic monotonicity. This hypothesis is of interest in many applications in economics. Our test is based on the supremum of a rescaled U-statistic. We show that its asymptotic distribution is Gumbel. The proof is difficult because the approximating Gaussian stochastic process contains both a stationary and a nonstationary part, and so we have to extend existing results that apply only to one case or the other. We also propose a refinement to the asymptotic approximation that we show works much better in finite samples. We apply our test to the study of intergenerational income mobility.
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:21/08&r=ecm
  11. By: Prokhorov, Artem
    Abstract: A new goodness-of-fit test of copulas is proposed. It is based on restrictions on certain elements of the information matrix and so relates to the White (1982) specification test. The test avoids the need to correctly specify and consistently estimate a parametric model for the marginal distributions. It involves neither kernel weighting and bandwidth selection nor a parametric bootstrap, and it is relatively simple compared to other available tests.
    JEL: C13
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:9998&r=ecm
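    The White (1982) equality the test builds on, illustrated on a toy univariate likelihood rather than a copula (the paper restricts particular elements of this matrix identity for copula scores; all names here are illustrative):

      import numpy as np

      rng = np.random.default_rng(5)
      x = rng.exponential(scale=0.5, size=5000)  # exponential with rate theta = 2

      theta = 1 / x.mean()           # MLE of the rate
      score = 1 / theta - x          # d/dtheta log f(x; theta)
      hess = -1 / theta**2           # d^2/dtheta^2 log f (constant in x here)
      opg = np.mean(score**2)        # outer-product-of-gradient term

      # Information-matrix equality: E[Hessian] + E[score^2] = 0 under
      # correct specification; large departures signal misspecification.
      print(hess + opg)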
  12. By: Derek Bond (University of Ulster); Michael J Harrison (University College Dublin); Edward J O’Brien (European Central Bank)
    Abstract: Random field regression models provide an extremely flexible way to investigate nonlinearity in economic data. This paper introduces a new approach to interpreting such models, which may allow for improved inference about the possible parametric specification of nonlinearity.
    Date: 2007–11–19
    URL: http://d.repec.org/n?u=RePEc:ucn:wpaper:200717&r=ecm
  13. By: McCAUSLAND, William
    Abstract: I introduce the HESSIAN method for semi-Gaussian state space models with univariate states. The vector of states a = (a^1, ..., a^n)' is Gaussian and the observed vector y = (y^1, ..., y^n)' need not be. I describe a close approximation g(a) to the density f(a|y). It is easy and fast to evaluate g(a) and draw from the approximate distribution. In particular, no simulation is required to approximate normalization constants. Applications include likelihood approximation using importance sampling and posterior simulation using Markov chain Monte Carlo (MCMC). HESSIAN is an acronym, but it also refers to the Hessian of log f(a|y), which figures prominently in the derivation. I compute my approximation for a basic stochastic volatility model and compare it with the multivariate Gaussian approximation described in Durbin and Koopman (1997) and Shephard and Pitt (1997). For a wide range of plausible parameter values, I estimate the variance of log f(a|y) - log g(a) with respect to the approximate density g(a). For my approximation, this variance is 330 to 39,000 times smaller.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:mtl:montde:2008-03&r=ecm
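    A toy illustration of why a close approximation g matters for the likelihood-approximation application named above: with importance sampling, L(y) = E_g[f(a, y)/g(a)], and the weight variance shrinks as g approaches the true conditional (in this assumed Gaussian toy model, g is exact, so the weights are constant):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(6)
      y = 1.3                        # one observation; model y|a ~ N(a,1), a ~ N(0,1)

      # The exact conditional a|y is N(y/2, 1/2); use it as the proposal g.
      a = rng.normal(y / 2, np.sqrt(0.5), size=10000)
      w = norm.pdf(y, loc=a) * norm.pdf(a) / norm.pdf(a, loc=y / 2, scale=np.sqrt(0.5))

      # The weight average estimates L(y); here y ~ N(0, 2) exactly.
      print(w.mean(), norm.pdf(y, scale=np.sqrt(2)), w.var())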
  14. By: Grant Hillier (Institute for Fiscal Studies and University of Southampton); Raymond Kan; Xiaolu Wang
    Abstract: Using generating functions, the top-order zonal polynomials that occur in much distribution theory under normality can be recursively related to other symmetric functions (power-sum and elementary symmetric functions; Ruben, Hillier, Kan, and Wang). Typically, in a recursion of this type the k-th object of interest, d_k say, is expressed in terms of all lower-order d_j's. In Hillier, Kan, and Wang we pointed out that, in the case of top-order zonal polynomials (and generalizations of them), a shorter (i.e., fixed-length) recursion can be deduced. The present paper shows that this argument generalizes to a large class of objects/generating functions. The results thus obtained are then applied to various problems involving quadratic forms in noncentral normal vectors.
    JEL: C16 C46 C63
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:14/08&r=ecm
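    A sketch of the standard full-length moments-from-cumulants recursion for q = x'Ax with x ~ N(0, I), the central special case of the moments the paper treats; the paper's point is that much shorter, fixed-length recursions exist, and the noncentral case adds mean-vector terms to the cumulants:

      import numpy as np
      from math import comb, factorial

      def quadratic_form_moments(A, K):
          # Cumulants of q = x'Ax for x ~ N(0, I): kappa_j = 2^(j-1) (j-1)! tr(A^j).
          n = A.shape[0]
          kappa, Aj = [], np.eye(n)
          for j in range(1, K + 1):
              Aj = Aj @ A
              kappa.append(2**(j - 1) * factorial(j - 1) * np.trace(Aj))
          # Moments from cumulants: m_k = sum_j C(k-1, j-1) kappa_j m_{k-j}.
          m = [1.0]
          for k in range(1, K + 1):
              m.append(sum(comb(k - 1, j - 1) * kappa[j - 1] * m[k - j]
                           for j in range(1, k + 1)))
          return m[1:]

      A = np.diag([1.0, 2.0, 3.0])
      print(quadratic_form_moments(A, 3))   # starts with E[q] = tr(A) = 6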
  15. By: Fosgerau, Mogens; Hess, Stephane
    Abstract: This paper reports the findings of a systematic study, using Monte Carlo experiments and a real dataset, aimed at comparing the performance of various ways of specifying random taste heterogeneity in a discrete choice model. Specifically, the analysis compares the performance of two recent advanced approaches against a background of four commonly used continuous distribution functions. The first of these two approaches improves on the flexibility of a base distribution by adding a series approximation using Legendre polynomials. The second approach uses a discrete mixture of multiple continuous distributions. Both approaches allow the researcher to increase the number of parameters as desired. The paper provides a range of evidence on the ability of the various approaches to recover various distributions from data. The two advanced approaches are comparable in terms of the likelihoods achieved, but each has its own advantages and disadvantages.
    Keywords: random taste heterogeneity; mixed logit; method of sieves; mixtures of distributions
    JEL: R40 C14
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:10038&r=ecm
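    A minimal simulated mixed logit sketch with one normally distributed coefficient (illustrative names); the approaches compared in the paper amount to richer ways of drawing beta below:

      import numpy as np

      rng = np.random.default_rng(7)

      def mixed_logit_prob(attr, b_mean, b_sd, R=1000):
          # Average the conditional logit formula over R draws of the
          # random coefficient beta ~ N(b_mean, b_sd^2).
          beta = rng.normal(b_mean, b_sd, size=R)           # (R,)
          v = beta[:, None] * attr[None, :]                 # utilities, (R, J)
          p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
          return p.mean(axis=0)

      attr = np.array([0.0, 1.0, 2.0])   # one attribute for three alternatives
      print(mixed_logit_prob(attr, b_mean=0.5, b_sd=1.0))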
  16. By: Guido M. Imbens; Jeffrey M. Wooldridge
    Abstract: Many empirical questions in economics and other social sciences depend on causal effects of programs or policy interventions. In the last two decades much research has been done on the econometric and statistical analysis of the effects of such programs or treatments. This recent theoretical literature has built on, and combined features of, earlier work in both the statistics and econometrics literatures. It has by now reached a level of maturity that makes it an important tool in many areas of empirical research in economics, including labor economics, public finance, development economics, industrial organization and other areas of empirical microeconomics. In this review we discuss some of the recent developments. We focus primarily on practical issues for empirical researchers, and also provide a historical overview of the area and give references to more technical research.
    JEL: C01
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14251&r=ecm

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.