nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒06‒18
six papers chosen by
Sune Karlsson
Orebro University

  1. Joint and Marginal Diagnostic Tests for Conditional Mean and Variance Specifications By Juan Carlos Escanciano
  2. A Markov Chain Monte Carlo Multiple Imputation Procedure for Dealing with Item Nonresponse in the German SAVE Survey By Daniel Schunk
  3. Evaluating the Precision of Estimators of Quantile-Based Risk Measures By Cotter, John; Dowd, Kevin
  4. Estimating Inefficiency and Total Factor Productivity: An Application to Irish Dairy Farming By James Carroll; Carol Newman; Fiona Thorne
  5. Regression Coefficient Identification Decay in the Presence of Infrequent Classification Errors By Kreider, Brent
  6. A new approach for analyzing multiple bounded WTP data - Certainty dependent payment card intervals By Broberg, Thomas

  1. By: Juan Carlos Escanciano (Indiana University Bloomington)
    Abstract: This article proposes a general class of joint and marginal diagnostic tests for parametric conditional mean and variance models of possibly nonlinear, non-Markovian time series sequences. The use of joint and marginal tests is motivated by the fact that marginal tests for the conditional variance may lead to misleading conclusions when the conditional mean is misspecified. The new tests are based on a generalized spectral approach and, unlike existing procedures, require neither choosing a lag order that depends on the sample size nor smoothing the data. Moreover, the proposed tests are robust to higher-order dependence of unknown form, in particular to conditional skewness and kurtosis. It turns out that the asymptotic null distributions of the new tests depend on the data-generating process, so a new bootstrap procedure is proposed and theoretically justified. A simulation study compares the finite-sample performance of the proposed and competing tests and shows that our tests can play a valuable role in time series modeling. Finally, an application to the S&P 500 highlights the merits of our approach.
    JEL: C12 C14 C52
    Date: 2007–06
  2. By: Daniel Schunk (Mannheim Research Institute for the Economics of Aging (MEA))
    Abstract: Important empirical information on household behavior is obtained from surveys. However, various interdependent factors that can only be controlled to a limited extent lead to unit and item nonresponse, and missing data on certain items is a frequent source of difficulties in statistical practice. This paper presents the theoretical underpinnings of a Markov Chain Monte Carlo multiple imputation procedure and applies this procedure to a socio-economic survey of German households, the SAVE survey. I discuss convergence properties and results of the iterative multiple imputation method and I compare them briefly with other imputation approaches. Concerning missing data in the SAVE survey, the results suggest that item nonresponse is not occurring randomly but is related to the included covariates. The analysis further indicates that there might be differences in the character of nonresponse across asset types. Concerning the methodology of imputation, the paper underlines that it would be of particular interest to apply different imputation methods to the same dataset and to compare the findings.
    Date: 2007–05–30
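The core idea behind draw-based multiple imputation, as described in the abstract, can be illustrated with a minimal sketch. This is not Schunk's SAVE procedure — the function name, the single-covariate normal regression model, and the approximate parameter draw are all illustrative assumptions — but it shows the iterate-and-redraw structure that distinguishes multiple imputation from single mean-filling:

```python
import numpy as np

def multiple_impute(y, x, n_imputations=5, n_iter=20, seed=0):
    """Minimal sketch of draw-based multiple imputation for a variable y
    with missing entries, given a fully observed covariate x.
    Each imputation alternates between (a) refitting a regression on the
    current completed data and (b) redrawing the missing y values from
    that fit, with parameter noise to propagate estimation uncertainty."""
    rng = np.random.default_rng(seed)
    miss = np.isnan(y)
    completed = []
    for _ in range(n_imputations):
        y_fill = y.copy()
        y_fill[miss] = np.nanmean(y)          # crude starting values
        for _ in range(n_iter):
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y_fill, rcond=None)
            resid = y_fill - X @ beta
            sigma = resid.std(ddof=2)
            # crude approximate posterior draw of the coefficients,
            # then a draw of the missing outcomes given those coefficients
            beta_draw = beta + rng.normal(0.0, sigma, 2) / np.sqrt(len(y))
            y_fill[miss] = (X[miss] @ beta_draw
                            + rng.normal(0.0, sigma, miss.sum()))
        completed.append(y_fill.copy())
    return completed
```

The several completed datasets are then analyzed separately and the results combined (e.g. with Rubin's rules), so that the final standard errors reflect imputation uncertainty rather than treating filled-in values as observed.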
  3. By: Cotter, John; Dowd, Kevin
    Abstract: This paper examines the precision of estimators of quantile-based risk measures (Value at Risk, Expected Shortfall, spectral risk measures). It first addresses the question of how to estimate the precision of these estimators, and proposes a Monte Carlo method that is free of some of the limitations of existing approaches. It then investigates the distribution of risk estimators, and presents simulation results suggesting that the common practice of relying on asymptotic normality results may be unreliable at the sample sizes typically available in practice. Finally, it investigates the relationship between the precision of different risk estimators and the distribution of underlying losses (or returns), and draws a number of useful conclusions.
    JEL: G00
    Date: 2007
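The basic logic of a Monte Carlo precision assessment for a quantile-based risk measure can be sketched briefly. This is not the authors' specific method; it assumes, purely for illustration, standard normal losses and a historical-simulation VaR estimator, and simply records how the estimate varies across repeated samples:

```python
import numpy as np

def historical_var(losses, alpha=0.95):
    """Historical-simulation VaR: the alpha-quantile of the loss sample."""
    return np.quantile(losses, alpha)

def var_sampling_spread(n_obs=500, alpha=0.95, n_trials=2000, seed=0):
    """Draw many loss samples from an assumed distribution and record the
    VaR estimate from each, to gauge the estimator's sampling precision."""
    rng = np.random.default_rng(seed)
    return np.array([historical_var(rng.standard_normal(n_obs), alpha)
                     for _ in range(n_trials)])

est = var_sampling_spread()
# The spread of `est` (e.g. its standard deviation) measures the estimator's
# precision; the true 95% VaR of a standard normal loss is about 1.645.
```

The same loop applied to Expected Shortfall or a spectral risk measure — just a different statistic inside `historical_var` — lets the precision of different risk estimators be compared under a common loss distribution, which is the kind of comparison the abstract describes.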
  4. By: James Carroll (Department of Economics, Trinity College Dublin); Carol Newman (Department of Economics, Trinity College Dublin); Fiona Thorne (Rural Economy Research Centre, Teagasc, Dublin, Ireland)
    Abstract: This paper compares standard stochastic frontier models for panel data with a number of recently developed models designed to remove unobserved heterogeneity from the inefficiency component. Results are used to construct a generalised Malmquist total factor productivity (TFP) index. We conclude that the choice of approach makes little difference where the purpose of the study is to analyse aggregate trends in TFP and its components. However, where inefficiency estimates and their dispersion are of interest, attention should be paid to how the analyst’s interpretation of inefficiency relates to the underlying assumptions of the model that is used.
    Keywords: Efficiency, panel data, total factor productivity, stochastic production frontier, ‘true’ effects models, dairy sector
    JEL: D24 Q12
    Date: 2007–06
  5. By: Kreider, Brent
    Abstract: Although validation data consistently reveal the presence of large degrees of reporting error in popular survey datasets, measurement issues are mostly ignored in empirical research. Recent evidence from Bound et al. (2001) and Black et al. (2003) suggests that reporting errors routinely violate all of the classical measurement error assumptions. This paper highlights the potential severity of the identification problem for regression coefficients given the presence of even infrequent arbitrary classification errors in a binary regressor. In an experimental setting, health insurance misclassification rates of less than 1.3 percent generate double-digit percentage point ranges of uncertainty about the variable's true marginal effect on the use of health services.
    Keywords: Nonclassical measurement error, health insurance, corrupt sampling, binary regressor, classification error
    Date: 2007–06–12
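The flavor of the problem can be seen in a small simulation. This sketch covers only the easiest case — purely random (nondifferential) misclassification, with illustrative numbers not taken from the paper — and even there a roughly 1.3 percent error rate visibly attenuates the OLS estimate of a binary regressor's effect; the paper's identification-decay argument concerns arbitrary classification errors, which can do considerably worse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
d = rng.binomial(1, 0.10, n)                 # true binary regressor (e.g. insured)
y = 1.0 + 0.5 * d + rng.normal(0.0, 1.0, n)  # true marginal effect: 0.5

flip = rng.random(n) < 0.013                 # ~1.3% random classification errors
d_obs = np.where(flip, 1 - d, d)             # misreported regressor

def ols_slope(reg, out):
    """OLS slope of `out` on `reg` (with intercept)."""
    return np.cov(reg, out, ddof=1)[0, 1] / np.var(reg, ddof=1)

slope_true = ols_slope(d, y)      # close to 0.5
slope_obs = ols_slope(d_obs, y)   # noticeably attenuated toward zero
```

With only 10 percent of observations having d = 1, flipping 1.3 percent of reports at random shrinks the estimated effect by roughly a tenth; bounding the effect under arbitrary (possibly systematic) errors of the same frequency widens the uncertainty much further.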
  6. By: Broberg, Thomas (Department of Economics, Umeå University)
    Abstract: In this paper we analyze the multiple-bounded (MB) format, in which uncertainty is directly incorporated into the WTP question. We introduce a new approach to estimating mean and median willingness to pay (WTP) from MB data that allows respondents to expand their WTP intervals by shifting the upper bound: less certain respondents state wider WTP intervals. This differs from the approach of Welsh and Poe (1998) (WP), which shifts the entire WTP interval and is likely to overestimate mean and median WTP when uncertainty is introduced. To compare our expansion approach empirically with the WP approach, we use survey data from 2004 that elicited WTP for the implementation of a predator protection policy in Sweden. In addition to its more intuitive appeal, our results indicate that the interval expansion approach fits the data better and yields a narrower range of WTP estimates. It also estimates mean and median WTP with greater precision when preference uncertainty is considered, and its estimates are less sensitive to alternative distributional assumptions.
    Keywords: contingent valuation; preference uncertainty; elicitation format; multiple-bounded; payment card; willingness to pay; predators
    JEL: C81 Q20 Q26 Q28
    Date: 2007–06–07
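Both the WP approach and the expansion approach build on interval-censored estimation of WTP from payment-card brackets. The sketch below is a generic baseline of that kind, not the authors' estimator: the log-normal WTP distribution, the card thresholds, and the parameter values are all illustrative assumptions. It fits mean/median WTP by maximum likelihood when each respondent is only known to lie in a bracket:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
true_mu, true_sigma = 3.0, 0.5
w = rng.normal(true_mu, true_sigma, n)        # latent log-WTP

# Illustrative payment-card thresholds; each respondent's WTP
# falls into one bracket [lo, hi).
card = np.array([0.01, 5, 10, 20, 40, 80, 160, 10_000.0])
idx = np.searchsorted(card, np.exp(w))        # first threshold above WTP
lo, hi = card[idx - 1], card[idx]

def negloglik(params):
    """Interval-censored log-normal likelihood for bracketed WTP:
    each respondent contributes P(log lo < log-WTP < log hi)."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    p = (norm.cdf((np.log(hi) - mu) / sigma)
         - norm.cdf((np.log(lo) - mu) / sigma))
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = minimize(negloglik, x0=[2.0, 1.0], method="Nelder-Mead",
               options={"maxiter": 2000})
mu_hat, sigma_hat = res.x
median_wtp = np.exp(mu_hat)       # median WTP under the log-normal model
```

The expansion approach described in the abstract would, under stated certainty levels, widen `hi` for less certain respondents before evaluating the same kind of interval likelihood, whereas the WP approach shifts both `lo` and `hi`.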

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP homepage. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.