nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒02‒12
fourteen papers chosen by
Sune Karlsson
Orebro University

  1. Inference on Time-Invariant Variables using Panel Data: A Pre-Test Estimator with an Application to the Returns to Schooling By Jean-Bernard Chatelain; Kirsten Ralf
  2. Seemingly Unrelated Regressions with Spatial Error Components By Badi H. Baltagi; Alain Pirotte
  3. A test of singularity for distribution functions By Victoria Zinde-Walsh; John Galbraith
  4. Unconditional Quantile Regression for Exogenous or Endogenous Treatment Variables By David Powell
  5. Unconditional Quantile Treatment Effects in the Presence of Covariates By David Powell
  6. Relative Efficiency of a Quantile Method for Estimating Parameters in Censored Two-Parameter Weibull Distributions By Jonsson, Robert
  7. Interpreting Dummy Variables in Semi-logarithmic Regression Models: Exact Distributional Results By David E. Giles
  8. Incorrectly accounting for taste heterogeneity in choice experiments: Does it really matter for welfare measurement? By Catalina M. Torres Figuerola; Nick Hanley; Sergio Colombo
  9. Moment Conditions and Neglected Endogeneity in Panel Data Models By Giorgio Calzolari; Laura Magazzini
  10. Estimating the Leverage Parameter of Continuous-time Stochastic Volatility Models Using High Frequency S&P 500 and VIX By Isao Ishida; Michael McAleer; Kosuke Oya
  11. Simple conservative confidence intervals for comparing matched proportions By Jonsson, Robert
  12. A multilevel approach for nonnegative matrix factorization By GILLIS, Nicolas; GLINEUR, François
  13. School system evaluation by value-added analysis under endogeneity By MANZI, Jorge; SAN MARTIN, Ernesto; VAN BELLEGEM, Sébastien
  14. How to Think About Time-Use Data: What Inferences Can We Make About Long- and Short-Run Time Use from Time Diaries? By Harley Frazis; Jay Stewart

  1. By: Jean-Bernard Chatelain (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Kirsten Ralf (PSE - Paris-Jourdan Sciences Economiques - CNRS : UMR8545 - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - Ecole des Ponts ParisTech - Ecole Normale Supérieure de Paris - ENS Paris)
    Abstract: This paper proposes a new pre-test estimator of panel data models including time-invariant variables, based upon the Mundlak-Krishnakumar estimator and an “unrestricted” Hausman-Taylor estimator. The paper evaluates the biases of currently used restricted estimators, which omit the average-over-time of at least one endogenous time-varying explanatory variable. Repeated Between, Ordinary Least Squares, two-stage restricted Between, Oaxaca-Geisler, Fixed Effect Vector Decomposition, and Generalized Least Squares estimators may lead to wrong conclusions regarding the statistical significance of the estimated parameter values of time-invariant variables.
    Keywords: Time-Invariant Variables, Panel data, Time-Series Cross-Sections, Pre-Test Estimator, Mundlak Estimator, Fixed Effects Vector Decomposition
    Date: 2010–01–15
    URL: http://d.repec.org/n?u=RePEc:hal:psewpa:hal-00492039&r=ecm
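    As a rough illustration of the Mundlak device that the pre-test estimator builds on, here is a minimal simulated sketch: adding the over-time average of an endogenous time-varying regressor lets pooled OLS recover the coefficients of both the time-varying and the time-invariant variable. All variable names and values are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
N, T = 500, 6

# Simulated panel: x varies over time, z is time-invariant, and the
# individual effect a_i is correlated with the over-time mean of x.
xbar = rng.normal(size=N)
a = 0.8 * xbar + rng.normal(size=N)              # endogeneity through mean(x)
x = np.repeat(xbar, T) + rng.normal(size=N * T)
z = np.repeat(rng.normal(size=N), T)             # time-invariant variable
y = 1.0 * x + 0.5 * z + np.repeat(a, T) + rng.normal(size=N * T)

# Mundlak device: include the individual time-average of x as a regressor.
x_mean = np.repeat(x.reshape(N, T).mean(axis=1), T)
fit = sm.OLS(y, sm.add_constant(np.column_stack([x, z, x_mean]))).fit()
print(fit.params)   # coefficients on x and z should be close to 1.0 and 0.5
```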
  2. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Alain Pirotte (ERMES (CNRS) and TEPP (CNRS), Université Panthéon-Assas Paris II, France INRETS-DEST, National Institute of Research on Transports and Safety, France)
    Abstract: This paper considers various estimators using panel data seemingly unrelated regressions (SUR) with spatial error correlation. The true data generating process is assumed to be SUR with spatial error of the autoregressive or moving average type. Moreover, the remainder term of the spatial process is assumed to follow an error component structure. Both maximum likelihood and generalized moments (GM) methods of estimation are used. Using Monte Carlo experiments, we check the performance of these estimators and their forecasts under misspecification of the spatial error process, various spatial weight matrices, and heterogeneous versus homogeneous panel data models.
    Keywords: Seemingly unrelated regressions, panel data, spatial dependence, heterogeneity, forecasting.
    JEL: C33
    Date: 2010–09
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:125&r=ecm
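    A minimal sketch of one variant of the data generating process the abstract describes: a spatial autoregressive (SAR) error whose innovations follow an error component structure, in the style of Kapoor-Kelejian-Prucha models. The weight matrix and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, lam = 50, 10, 0.4       # N regions, T periods, spatial AR parameter

# A simple row-normalized contiguity matrix on a line (a stand-in for W).
W = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i + 1):
        if 0 <= j < N:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

A_inv = np.linalg.inv(np.eye(N) - lam * W)   # (I - lambda * W)^{-1}

mu = rng.normal(size=N)                      # individual (region) component
u = np.empty((T, N))
for t in range(T):
    nu = rng.normal(size=N)                  # idiosyncratic remainder
    u[t] = A_inv @ (mu + nu)                 # SAR(1) error with components
```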
  3. By: Victoria Zinde-Walsh; John Galbraith
    Abstract: Many non- and semi-parametric estimators have asymptotic properties that have been established under conditions that exclude the possibility of singular parts in the distribution. It is thus important to be able to test for the absence of singularities. Methods of testing that focus on specific singularities do exist, but there are few generally applicable approaches. A general test based on kernel density estimation was proposed by Frigyesi and Hössjer (1998), but this statistic can diverge for some absolutely continuous distributions. Here we use a result in Zinde-Walsh (2008) to characterize distributions with varying degrees of smoothness, via functionals that reveal the behavior of the bias of the kernel density estimator. The statistics proposed here have well-defined asymptotic distributions that are asymptotically pivotal in some class of distributions (e.g. for continuous density) and diverge for distributions in an alternative class, at a rate that can be explicitly evaluated and controlled.
    Keywords: generalized function, kernel density estimator, singularity
    JEL: C14
    Date: 2011–01–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-06&r=ecm
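    The paper's statistic is not reproduced here, but the diagnostic idea can be illustrated: at an atom (a singular part), a kernel density estimate diverges like 1/h as the bandwidth shrinks, while it stabilizes at a continuity point of the density. A toy sketch with invented values:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n = 5000

# Mixture: an absolutely continuous part plus an atom (singular part) at 0.
smooth = rng.normal(size=n)
data = np.where(rng.random(n) < 0.3, 0.0, smooth)

# As the bandwidth factor shrinks, the KDE at the atom blows up (~ mass/h),
# whereas at a continuity point of the density it settles down.
for factor in (0.2, 0.05, 0.0125):
    kde = gaussian_kde(data, bw_method=factor)
    print(factor, kde(0.0)[0], kde(1.0)[0])
```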
  4. By: David Powell
    Abstract: This paper introduces an unconditional quantile regression (UQR) estimator that can be used for exogenous or endogenous treatment variables. Traditional quantile estimators provide conditional treatment effects. Typically, we are interested in unconditional quantiles, characterizing the distribution of the outcome variable for different values of the treatment variables. Conditioning on additional covariates, however, may be necessary for identification of these treatment effects. With conditional quantile models, the inclusion of additional covariates changes the interpretation of the estimates. The UQR and IV-UQR estimators allow one to condition on covariates without altering the interpretation. This estimator is a more general version of traditional quantile estimators.
    Keywords: unconditional quantile treatment effects, quantile regression, instrumental variables, identification
    JEL: C14 C31 C51
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:ran:wpaper:824&r=ecm
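    Powell's estimator itself is not reproduced here; for orientation, a minimal sketch of the related (but distinct) RIF-regression approach to unconditional quantiles of Firpo, Fortin and Lemieux (2009), for the exogenous-treatment case only, with simulated data:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

def rif_quantile_regression(y, X, tau):
    """RIF regression for an unconditional quantile (Firpo et al. 2009 style)."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]              # density estimate at the quantile
    rif = q + (tau - (y <= q)) / f_q         # recentered influence function
    return sm.OLS(rif, sm.add_constant(X)).fit()

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
d = (x + rng.normal(size=n) > 0).astype(float)   # treatment correlated with x
y = d + x + rng.normal(size=n)
print(rif_quantile_regression(y, np.column_stack([d, x]), 0.5).params)
```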
  5. By: David Powell
    Abstract: Many economic applications have found quantile models useful when the explanatory variables may have varying impacts throughout the distribution of the outcome variable. Traditional quantile estimators provide conditional quantile treatment effects. Typically, we are interested in unconditional quantiles, characterizing the distribution of the outcome variable for different values of the treatment variables. Conditioning on additional covariates, however, may be necessary for identification of these treatment effects. With conditional quantile models, the inclusion of additional covariates changes the interpretation of the estimates. This paper discusses identification of unconditional quantile treatment effects when it is necessary or simply desirable to condition on covariates. It discusses identification for both exogenous and endogenous treatment variables, which can be discrete or continuous, without functional form assumptions.
    Keywords: expectations, portfolio choice, subjective probabilities, uncertainty, Health and Retirement Study
    JEL: G11 D14 D81 D01
    Date: 2010–12
    URL: http://d.repec.org/n?u=RePEc:ran:wpaper:816&r=ecm
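    A toy simulation of the identification issue the abstract raises: the raw unconditional quantile contrast between treatment groups is confounded by a covariate that shifts both treatment and outcome, while the within-cell (conditional) contrast recovers the constant effect. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
x = rng.binomial(1, 0.5, size=n)            # binary covariate
d = rng.binomial(1, 0.2 + 0.6 * x)          # treatment more likely when x = 1
y = d + 2.0 * x + rng.normal(size=n)        # constant treatment effect of 1

tau = 0.9
# Raw unconditional quantile contrast by treatment status: confounded by x.
uq = np.quantile(y[d == 1], tau) - np.quantile(y[d == 0], tau)
# Conditional contrast within x-cells: recovers the effect of 1 at every tau.
cq = [np.quantile(y[(d == 1) & (x == v)], tau)
      - np.quantile(y[(d == 0) & (x == v)], tau) for v in (0, 1)]
print(uq, cq)
```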
  6. By: Jonsson, Robert (Department of Economics, School of Business, Economics and Law, University of Gothenburg)
    Abstract: In simulation studies the computer time can be much reduced by using censoring. Here a simple method based on quantiles (Q method) is compared with the Maximum Likelihood (ML) method when estimating the parameters in censored two-parameter Weibull distributions, the ML estimates being obtained using the SAS procedure NLMIXED. It is demonstrated that the estimators obtained by the Q method are less efficient than the ML estimators, but this can be compensated for by increasing the sample size, which nevertheless requires much less computer time than the ML method. The ML estimates can only be obtained by an iterative process, and this opens the possibility for failures in the sense that reasonable estimates are presented as unreliable, or anomalous estimates are presented as reliable. Such anomalies were never obtained with the Q method.
    Keywords: Relative Efficiency; Quantile Method; Censored Two-Parameter Weibull Distributions
    JEL: C10
    Date: 2011–02–04
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2010_003&r=ecm
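    A minimal sketch of a quantile (Q) method for the two-parameter Weibull, shown here without censoring; with censored data the two probability levels would be chosen below the censoring point. This is a generic textbook construction, not necessarily the paper's exact variant.

```python
import numpy as np

def weibull_quantile_fit(sample, p1=0.25, p2=0.75):
    """Estimate Weibull shape c and scale b from two sample quantiles.

    Uses F(t) = 1 - exp(-(t/b)^c), i.e. log(-log(1-p)) = c*(log t - log b).
    With censored data, p1 and p2 would be chosen below the censoring point.
    """
    t1, t2 = np.quantile(sample, [p1, p2])
    g1, g2 = np.log(-np.log(1 - p1)), np.log(-np.log(1 - p2))
    c = (g2 - g1) / (np.log(t2) - np.log(t1))
    b = t1 / (-np.log(1 - p1)) ** (1 / c)
    return c, b

rng = np.random.default_rng(5)
data = 2.0 * rng.weibull(1.5, size=10_000)   # true shape 1.5, scale 2.0
print(weibull_quantile_fit(data))            # roughly (1.5, 2.0)
```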
  7. By: David E. Giles (Department of Economics, University of Victoria)
    Abstract: Care must be taken when interpreting the coefficients of dummy variables in semi-logarithmic regression models. Existing results in the literature provide the best unbiased estimator of the percentage change in the dependent variable, implied by the coefficient of a dummy variable, and of the variance of this estimator. We extend these results by establishing the exact sampling distribution of an unbiased estimator of the implied percentage change. This distribution is non-normal, and is positively skewed in small samples. We discuss the construction of bootstrap confidence intervals for the implied percentage change, and illustrate our various results with two applications: one involving a wage equation, and one involving the construction of a hedonic price index for computer disk drives.
    Keywords: Semi-logarithmic regression, dummy variable, percentage change, confidence interval
    JEL: C13 C20 C52
    Date: 2011–01–31
    URL: http://d.repec.org/n?u=RePEc:vic:vicewp:1101&r=ecm
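    For orientation, the simple Kennedy (1981)-style variance adjustment that this literature builds on (the paper's exact unbiased estimator and its sampling distribution involve more machinery):

```python
import numpy as np

def dummy_pct_change(c_hat, var_c_hat):
    """Percentage effect of a dummy variable in a semi-log regression.

    Naive: 100*(exp(c)-1), which is biased upward in small samples.
    Kennedy (1981): subtract half the estimated variance of c first.
    """
    naive = 100.0 * (np.exp(c_hat) - 1.0)
    kennedy = 100.0 * (np.exp(c_hat - 0.5 * var_c_hat) - 1.0)
    return naive, kennedy

print(dummy_pct_change(0.20, 0.01))   # illustrative c = 0.20, Var(c) = 0.01
```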
  8. By: Catalina M. Torres Figuerola (Centre de Recerca Econòmica (UIB · Sa Nostra)); Nick Hanley (University of Stirling); Sergio Colombo (Agricultural Economics Department (IFAPA))
    Abstract: A range of empirical approaches to representing preference heterogeneity have emerged in choice modelling. Researchers have been able to explore the differences that the selection of a particular approach makes to welfare measures in a particular dataset, and have been able to implement a number of tests for which approach best fits a particular set of data. However, the question of the degree of error in welfare estimation arising from an inappropriate choice of empirical approach has not been addressed. In this paper, we use Monte Carlo analysis to address this question. Given the high popularity of both the random parameter logit (RPL) and latent class models among choice modellers, we examine the errors in welfare estimates from using the incorrect model to account for taste heterogeneity. Our main finding is that using an RPL specification with log-normally distributed preferences seems the best bet.
    Keywords: Preference heterogeneity, welfare measurement, accuracy, efficiency, choice experiments, Monte Carlo analysis
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:pdm:wpaper:2011/1&r=ecm
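    A small sketch of the welfare quantity at stake: mean willingness to pay when the attribute coefficient is log-normally distributed, as in the RPL specification the authors favour, with a fixed negative cost coefficient. All parameter values are invented.

```python
import numpy as np

# Illustrative parameters: attribute coefficient beta ~ lognormal(mu, sigma),
# cost coefficient fixed and negative, so WTP = beta / (-beta_cost).
mu, sigma, beta_cost = 0.0, 0.8, -1.5

mean_wtp = np.exp(mu + sigma**2 / 2) / (-beta_cost)   # E[beta] / (-beta_cost)

# Simulation check of the closed form.
rng = np.random.default_rng(6)
draws = rng.lognormal(mu, sigma, size=1_000_000)
print(mean_wtp, (draws / -beta_cost).mean())
```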
  9. By: Giorgio Calzolari (University of Florence); Laura Magazzini (Department of Economics (University of Verona))
    Abstract: This paper develops a new moment condition for the estimation of linear panel data models. When added to the set of instruments devised by Anderson and Hsiao (1981, 1982) for the dynamic model, the proposed approach can outperform the GMM methods customarily employed for estimation. The proposal builds on the properties of iterated GLS, which, contrary to conventional wisdom, can lead to a consistent estimator in particular cases where endogeneity of the explanatory variables is neglected. The targets achieved are a reduction in the number of moment conditions and a better performance than the most widely adopted techniques.
    Keywords: panel data, dynamic model, GMM estimation, endogeneity
    JEL: C23
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:ver:wpaper:02/2011&r=ecm
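    For context, a minimal simulated sketch of the Anderson-Hsiao instrument set to which the proposed moment condition is added: first-difference the dynamic panel model and instrument the lagged difference with the twice-lagged level. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, gamma = 1000, 8, 0.5

# Dynamic panel y_it = gamma * y_i,t-1 + a_i + e_it with fixed effects a_i.
a = rng.normal(size=N)
y = np.zeros((N, T))
y[:, 0] = a + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = gamma * y[:, t - 1] + a + rng.normal(size=N)

# First differences remove a_i; instrument dy_{t-1} with the level y_{t-2}.
dy  = (y[:, 3:] - y[:, 2:-1]).ravel()    # dy_t
dy1 = (y[:, 2:-1] - y[:, 1:-2]).ravel()  # dy_{t-1}, endogenous regressor
z   = y[:, 1:-2].ravel()                 # y_{t-2}, Anderson-Hsiao instrument

gamma_iv = (z @ dy) / (z @ dy1)          # just-identified IV estimate
print(gamma_iv)                          # should be close to 0.5
```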
  10. By: Isao Ishida; Michael McAleer (University of Canterbury); Kosuke Oya
    Abstract: This paper proposes a new method for estimating continuous-time stochastic volatility (SV) models for the S&P 500 stock index process using intraday high-frequency observations of both the S&P 500 index and the Chicago Board Options Exchange (CBOE) implied (or expected) volatility index (VIX). Intraday high-frequency data have become readily available for an increasing number of financial assets and their derivatives in recent years, but it is well known that attempts to estimate the parameters of popular continuous-time models can lead to nonsensical estimates due to severe intraday seasonality. A primary purpose of the paper is to estimate the leverage parameter, ρ, that is, the correlation between the two Brownian motions driving the diffusive components of the price process and its spot variance process, respectively. We show that, under the special case of Heston’s (1993) square-root SV model without measurement errors, the “realized leverage”, or the realized covariation of the price and VIX processes divided by the product of the realized volatilities of the two processes, converges to ρ in probability as the time intervals between observations shrink to zero, even if the length of the whole sample period is fixed. Finite sample simulation results show that the proposed estimator delivers accurate estimates of the leverage parameter, unlike existing methods.
    Keywords: Continuous time; high frequency data; stochastic volatility; S&P 500; implied volatility; VIX
    JEL: G13 G32
    Date: 2011–02–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:11/11&r=ecm
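    A minimal sketch of the “realized leverage” quantity described in the abstract, applied here to a toy pair of correlated random walks standing in for high-frequency log price and log VIX series (the Heston dynamics are not simulated):

```python
import numpy as np

def realized_leverage(log_price, log_vix):
    """Realized covariation over the product of realized volatilities."""
    r = np.diff(log_price)                   # high-frequency log returns
    v = np.diff(log_vix)                     # high-frequency VIX log changes
    return np.sum(r * v) / np.sqrt(np.sum(r**2) * np.sum(v**2))

# Toy check: two correlated random walks; the statistic recovers rho.
rng = np.random.default_rng(8)
n, rho = 23_400, -0.7                        # e.g. one day of 1-second steps
e1 = rng.normal(size=n)
e2 = rho * e1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
print(realized_leverage(np.cumsum(e1), np.cumsum(e2)))   # near -0.7
```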
  11. By: Jonsson, Robert (Department of Economics)
    Abstract: Unconditional confidence intervals (CIs) for the difference between marginal proportions in matched pairs data have essentially been based on improvements of Wald’s large-sample statistic. The latter are approximate and non-conservative. In some situations it may be of importance that CIs are conservative, e.g. when claiming bio-equivalence in small samples. Existing methods for constructing conservative CIs are computer intensive and are not suitable for sample size determination in planned studies. This paper presents a new simple method by which conservative CIs are readily computed. The method gives CIs that are comparable with earlier conservative methods concerning coverage probabilities and lengths. However, the new method can only be used if the proportions in the discordant cells, p and q, satisfy a certain condition, but this is luckily the case in most applications and several examples are given. The new method is compared with previously suggested approximate and exact methods in large-scale simulations.
    Keywords: Binomial variables; Conservative limits; Pivotal statistic
    JEL: C10
    Date: 2011–02–04
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_001&r=ecm
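    For reference, the non-conservative Wald-type interval that serves as the baseline the paper improves upon, for the difference of marginal proportions computed from the discordant counts of a matched-pairs table; the counts in the usage line are invented.

```python
import numpy as np
from scipy.stats import norm

def wald_ci_matched(b, c, n, level=0.95):
    """Wald CI for the difference of marginal proportions in matched pairs.

    b and c are the discordant-cell counts and n the number of pairs; this
    is the approximate, non-conservative large-sample baseline.
    """
    d = (b - c) / n
    se = np.sqrt(b + c - (b - c) ** 2 / n) / n
    z = norm.ppf(0.5 + level / 2)
    return d - z * se, d + z * se

print(wald_ci_matched(b=25, c=10, n=200))    # illustrative counts
```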
  12. By: GILLIS, Nicolas (Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium); GLINEUR, François (Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium)
    Abstract: Nonnegative Matrix Factorization (NMF) is the problem of approximating a nonnegative matrix with the product of two low-rank nonnegative matrices and has been shown to be particularly useful in many applications, e.g., in text mining, image processing, computational biology, etc. In this paper, we explain how algorithms for NMF can be embedded into the framework of multilevel methods in order to accelerate their convergence. This technique can be applied in situations where data admit a good approximate representation in a lower dimensional space through linear transformations preserving nonnegativity. A simple multilevel strategy is described and is experimentally shown to speed up significantly three popular NMF algorithms (alternating nonnegative least squares, multiplicative updates and hierarchical alternating least squares) on several standard image datasets.
    Keywords: nonnegative matrix factorization, algorithms, multigrid and multilevel methods, image processing
    Date: 2010–07–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2010047&r=ecm
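    As a fine-level baseline for the multilevel acceleration described in the abstract, a minimal sketch of the Lee-Seung multiplicative updates, one of the three algorithms the paper speeds up (the coarsening and prolongation steps themselves are not sketched):

```python
import numpy as np

def nmf_multiplicative(V, r, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H, all entries >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, keep nonnegativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(60, 40)))
W, H = nmf_multiplicative(V, r=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative fit error
```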
  13. By: MANZI, Jorge (Measurement Center MIDE UC, Pontificia Universidad Católica de Chile, Chile); SAN MARTIN, Ernesto (Measurement Center MIDE UC & Dep. Of Statistics, Pontificia Universidad Católica de Chile, Chile); VAN BELLEGEM, Sébastien (Toulouse School of Economics, France; Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium)
    Abstract: Value-added analysis is a common tool for analysing school performance. In this paper, we analyse the SIMCE panel data, which provide individual scores for about 200,000 students in Chile and whose aim is to rank schools according to their educational achievement. Based on the data collection procedure and on empirical evidence, we argue that the exogeneity of some covariates is questionable, meaning that a nonvanishing correlation appears between the school-specific effect and some covariates. We show the impact of this phenomenon on the calculation of the value-added and on the ranking, and provide an estimation method based on instrumental variables to correct the endogeneity bias. Revisiting the definition of the value-added, we propose a new calculation robust to endogeneity that we illustrate on the SIMCE data.
    Keywords: value-added, school effectiveness, multilevel model, endogeneity, instrumental variables
    JEL: C33 C51 I21
    Date: 2010–07–01
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2010046&r=ecm
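    A minimal sketch of a classical value-added calculation under exogeneity, the benchmark whose failure the paper addresses: regress scores on covariates and average residuals by school. In this simulation the covariate is exogenous by construction; correlating it with the school effect would bias the ranking, which is the paper's point. All values are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n_schools, n_per = 100, 50
school = np.repeat(np.arange(n_schools), n_per)
effect = rng.normal(size=n_schools)            # true school effects

prior = rng.normal(size=n_schools * n_per)     # prior score, exogenous here
score = 0.7 * prior + effect[school] + rng.normal(size=n_schools * n_per)

# Classical value-added: regress out covariates, average residuals by school.
resid = sm.OLS(score, sm.add_constant(prior)).fit().resid
va = np.array([resid[school == s].mean() for s in range(n_schools)])
print(np.corrcoef(va, effect)[0, 1])           # high only under exogeneity
```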
  14. By: Harley Frazis (U.S. Bureau of Labor Statistics); Jay Stewart (U.S. Bureau of Labor Statistics)
    Abstract: Time-use researchers are typically interested in the time use of individuals, but time-use data are samples of person-days. Given day-to-day variation in how people spend their time, this distinction is analytically important. We examine the conditions necessary to make inferences about the time use of individuals from a sample of person-days. We also discuss whether and how surveys covering multiple household members or multiple days improve on single-diary surveys.
    Keywords: Time use, survey methods, estimation
    JEL: C81 D13
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:bls:wpaper:ec100100&r=ecm
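    A toy simulation of the person-day versus person distinction: with heterogeneous daily participation probabilities, a single-day diary identifies the share of person-days with an activity, not the share of persons who do it over a longer horizon. All values are invented.

```python
import numpy as np

rng = np.random.default_rng(10)
n_persons, n_days = 10_000, 365

# Heterogeneous daily participation probabilities across persons.
p = rng.beta(0.5, 4.0, size=n_persons)
diary_day = rng.random(n_persons) < p        # one sampled diary day per person

print(diary_day.mean())                      # share of person-days: ~ E[p]
print((1 - (1 - p) ** n_days).mean())        # share of persons doing the
                                             # activity at least once in a year
```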

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.