nep-dcm New Economics Papers
on Discrete Choice Models
Issue of 2013‒07‒20
eight papers chosen by
Edoardo Marcucci
Universita' di Roma Tre

  1. Testing for state dependence in binary panel data with individual covariates By Bartolucci, Francesco; Nigro, Valentina; Pigini, Claudia
  2. Merger Simulation with Nested Logit Demand - Implementation using Stata By Björnerstedt, Jonas; Verboven, Frank
  3. Valuing User Preferences for Improvements in Public Nature Trails Around the Sundays River Estuary, Eastern Cape, South Africa By Deborah E. Lee, Stephen G. Hosking and Mario du Preez
  4. Consistency and Aggregation in Individual Choice Under Uncertainty By Jeff Birchby; Gary Gigliotti; Barry Sopher
  5. Change versus choice: eliciting attitudes to fair compensations By John Bone; Paolo Crosetto; John D. Hey; Carmen Pasca
  6. A Proposed Estimator for Dynamic Probit Models By Gao, Wei; Yao, Qiwei; Bergsman, Wicher
  7. Nudging Energy Efficiency Behavior: The Role of Information Labels By Newell, Richard G.; Siikamäki, Juha
  8. Comparison of Parametric and Semi-Parametric Binary Response Models By Xiangjin Shen; Shiliang Li; Hiroki Tsurumi

  1. By: Bartolucci, Francesco; Nigro, Valentina; Pigini, Claudia
    Abstract: We propose a test for state dependence in binary panel data under the dynamic logit model with individual covariates. To this end, we rely on a quadratic exponential model in which the association between the response variables is accounted for differently than in more standard formulations. The level of association is measured by a single parameter that may be estimated by a conditional maximum likelihood approach. Under the dynamic logit model, the conditional estimator of this parameter converges to zero when the hypothesis of absence of state dependence is true. This allows us to implement a Wald test for this hypothesis which is very simple to perform and attains the nominal significance level under any structure of the individual covariates. Through an extensive simulation study, we find that our test has good finite-sample properties and is more robust to the presence of (autocorrelated) covariates in the model specification than other existing testing procedures for state dependence. The test is illustrated by an application based on data from the Panel Study of Income Dynamics.
    Keywords: conditional inference, dynamic logit model, quadratic exponential model, Wald test
    JEL: C12 C23
    Date: 2013–07–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:48233&r=dcm
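    Note: A minimal sketch of the test statistic described above, in generic notation (the symbol psi is ours, not necessarily the paper's). With \hat{\psi} the conditional maximum likelihood estimate of the association parameter and \hat{se}(\hat{\psi}) its estimated standard error, the Wald statistic is
        W = \left( \hat{\psi} / \hat{se}(\hat{\psi}) \right)^2 \sim \chi^2_1 \quad \text{under } H_0: \psi = 0 \text{ (no state dependence)},
    and the null is rejected at level alpha when W exceeds the chi-squared(1) critical value for that level.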
  2. By: Björnerstedt, Jonas (Swedish Competition Authority); Verboven, Frank (University of Leuven)
    Abstract: In this article we show how to implement merger simulation in Stata after estimating an aggregate nested logit demand system with a linear regression model. We also show how to implement merger simulation when the demand parameters are not estimated, but instead calibrated to be consistent with outside information on average price elasticities and profit margins.
    Keywords: mergersim; merger simulation; aggregate nested logit model; unit demand; constant expenditures demand
    JEL: C63 C87 D40 L10
    Date: 2013–04–15
    URL: http://d.repec.org/n?u=RePEc:hhs:kkveco:2013_002&r=dcm
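    Note: The linear regression referred to above is typically the Berry (1994) inversion of the one-level aggregate nested logit; as a sketch in standard notation (not necessarily the exact specification used by the mergersim package),
        \ln s_{jt} - \ln s_{0t} = x_{jt}'\beta - \alpha p_{jt} + \sigma \ln s_{j|g,t} + \xi_{jt},
    where s_{jt} is the market share of product j in market t, s_{0t} the outside-good share, s_{j|g,t} the within-group share, and \sigma \in [0,1) the nesting parameter. The estimated (or calibrated) \alpha and \sigma, together with observed prices and shares, determine the own- and cross-price elasticities fed into the post-merger price simulation.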
  3. By: Deborah E. Lee, Stephen G. Hosking and Mario du Preez
    Abstract: Many valuations have been made of changes to in-estuary attributes, but few have been made of out-of-estuary attributes. From a recreation perspective, an important type of out-of-estuary attribute is the availability of public paths by which to access attractive features of the estuary environment. This paper values an improvement in the level of public access in the form of an additional nature trail along the banks of the Sundays River Estuary in the Eastern Cape, but does not compare this value with the costs. By means of choice experiment modelling it is estimated that in 2010 the marginal willingness-to-pay for an investment in a nature trail was R34 per user per annum. In order to determine whether the development of this trail is efficient, this benefit (R34 per user per annum) needs to be compared to the cost of the development, an analysis that remains to be done. However, this finding does serve to provide guidance on how much funding could efficiently be allocated to such a development - about R1.22 million, assuming a social discount rate of 8.38%.
    Keywords: Estuary, willingness to pay, choice experiment, public access, recreational attributes
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:rza:wpaper:353&r=dcm
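    Note: The link between the annual and capital figures quoted above is a simple perpetuity capitalization; on our reading (this calculation is not reported in the abstract),
        \text{PV per user} \approx R34 / 0.0838 \approx R406,
    so the R1.22 million guideline corresponds to an aggregate annual benefit of roughly R102,000 (R1.22 million x 0.0838) capitalized at the 8.38% social discount rate; the abstract does not report the number of users this implies (around 3,000 on this reading).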
  4. By: Jeff Birchby (Rutgers University); Gary Gigliotti (Rutgers University); Barry Sopher (Rutgers University)
    Abstract: It is common in studies of individual choice behavior to report averages of the behavior under consideration. In the social sciences the mean is, indeed, often the quantity of interest, but at times focusing on the mean can be misleading. For example, it is well known in labor economics that failure to account for individual differences may lead to incorrect inference about the nature of hazard functions for unemployment duration. If all workers have constant hazard functions independent of duration, simple aggregation will nonetheless lead to the inference that the hazard function is state-dependent, with the hazard of leaving unemployment declining with duration of unemployment. Similarly, a recent study in psychology has shown that the “learning curve,” a monotonically increasing function of response to a stimulus, is better understood as an average representation of individual response functions that are, in fact, more step-function-like. As such, the learning curve as commonly understood is a misleading representation of the behavior of any one individual. These observations motivate us to consider the question of possible aggregation bias in the realm of choice under uncertainty. In particular, Cumulative Prospect Theory posits a weighting function through which probabilities are transformed into decision weights. An inverted S-shaped weighting function is commonly taken to be “the” appropriate weighting function, based on quite a number of experimental studies. This particular version of the weighting function implies, in simple two-outcome lotteries, that an individual will tend to overweight small (near 0) probabilities and to underweight large (near 1) probabilities. A natural question to ask, suggested by both the hazard function and the learning curve examples, is whether this weighting function is not, similarly, an artifact of aggregation. Of course, no one believes that every individual’s behavior can be accounted for by a single weighting function. Studies have shown that there can be considerable variation in estimated weighting functions across individuals. But no one, to our knowledge, has systematically addressed the question of whether, in fact, one can meaningfully use a single weighting function, even as a rhetorical device, to accurately discuss individual choice behavior. If most individuals indeed do have an inverted S-shaped weighting function, then this representation of choice behavior is not misleading, provided it is clear that one is discussing the behavior of “most,” not all, individuals. We focus on the reliability of estimated weighting functions. We study the problem of determining the parameters of the Cumulative Prospect Theory function. Using responses to paired sets of choice questions, it is possible to derive estimates for a two-parameter version of the Cumulative Prospect Theory choice function (using a power function for the value function and Prelec’s one-parameter version of the weighting function). By analyzing multiple such pairs of choice questions, we are able to also investigate the consistency of these estimates. Our main finding is that there is, in general, considerable variation at the individual level in the choice parameters implied by the responses to the different pairs of choice questions. The modal choice pattern observed is one consistent with expected value maximization, and there is considerably less variation (again, at the individual level) in the parameters implied by those who appear to be maximizing expected value on one pair of choice questions than for those who never choose in this way. But these individuals account for only about one-fifth to one-sixth of subjects. For the rest of the subjects, it is rare that any two pairs of estimates are the same, and often the implied parameters
    Keywords: uncertainty, prospect theory, aggregation, consistency
    JEL: C9 D8
    Date: 2013–01–18
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201301&r=dcm
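    Note: The two-parameter specification mentioned above combines a power value function with Prelec's one-parameter probability weighting function; in the standard notation (symbols ours),
        v(x) = x^{\beta}, \qquad w(p) = \exp\{-(-\ln p)^{\alpha}\}, \qquad \alpha, \beta > 0,
    so a simple gamble paying x with probability p (and 0 otherwise) is valued as w(p) v(x). For \alpha < 1 the weighting function is inverted S-shaped, overweighting small and underweighting large probabilities, while \alpha = \beta = 1 reduces to expected value maximization, the modal choice pattern reported above.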
  5. By: John Bone (University of York); Paolo Crosetto (Max Planck Institute of Economics, Jena); John D. Hey (University of York); Carmen Pasca (University of York)
    Abstract: This paper reports an experiment designed to elicit social preferences over income compensation schemes, where income differences between subjects have two independent components: one due to chosen effort and the other due to random chance. These differences can be compensated through social dividends, according to principles chosen beforehand by the subjects themselves, either from behind a stylised Rawlsian veil of ignorance or from outside the society in which the principles will be implemented. We test in particular the attractiveness of Luck Egalitarianism, which compensates inequalities due to chance but not those due to choice. We find modest but not overwhelming support for these principles, suggesting that subjects' actual preferences are more complex.
    Keywords: chance, choice, envy-freeness, fairness, luck, luck egalitarianism, responsibility
    JEL: D31 D63 C91
    Date: 2013–07–12
    URL: http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2013-029&r=dcm
  6. By: Gao, Wei; Yao, Qiwei; Bergsman, Wicher
    Abstract: In this paper, new estimation methods are proposed for dynamic and static probit models with panel data. Simulation studies show that the proposed estimators work relatively well.
    Keywords: Dynamic and static probit models; Panel data; Generalized Linear models
    JEL: C13
    Date: 2013–07–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:48336&r=dcm
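    Note: The abstract does not spell out the specification; as a generic sketch, a dynamic probit model for panel data has the latent-variable form
        y_{it} = 1\{\gamma y_{i,t-1} + x_{it}'\beta + \alpha_i + \varepsilon_{it} > 0\}, \qquad \varepsilon_{it} \sim N(0,1),
    with an individual effect \alpha_i and the lagged outcome y_{i,t-1} capturing state dependence; the static probit omits the \gamma y_{i,t-1} term.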
  7. By: Newell, Richard G.; Siikamäki, Juha (Resources for the Future)
    Abstract: We evaluate the effectiveness of energy efficiency labeling in guiding household appliance choice decisions. Using a carefully designed choice experiment with several alternative labeling treatments, we disentangle the relative importance of different types of information and intertemporal behavior (i.e., discounting) in guiding energy efficiency behavior. We find that simple information on the economic value of saving energy was the most important element guiding more cost-efficient investments in appliance energy efficiency, with information on physical energy use and carbon dioxide emissions having additional but lesser importance. The degree to which the current EnergyGuide label guided cost-efficient decisions depended importantly on the discount rate assumed appropriate for the analysis. Using individual discount rates separately elicited in our study, we find that the current EnergyGuide label came very close to guiding cost-efficient decisions, on average. However, using a uniform five percent rate for discounting—which was much lower than the average individual elicited rate—the EnergyGuide label led to choices that resulted in a one-third undervaluation of energy efficiency. We find that labels that not only nudged people with dispassionate monetary or physical information, but also endorsed a model (with Energy Star) or gave a suggestive grade to a model (as with the EU-style label), had a substantial impact in encouraging the choice of appliances with higher energy efficiency. Our results reinforce the centrality of views on intertemporal choice and discounting, both in terms of understanding individual behavior and in guiding public policy decisions.
    Keywords: energy efficiency behavior, gap, information label, discounting, time preference gap, choice experiment, mixed logit
    JEL: C91 D12 D91 D83 H43 Q41 Q48
    Date: 2013–07–03
    URL: http://d.repec.org/n?u=RePEc:rff:dpaper:dp-13-17&r=dcm
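    Note: The discounting comparisons above rest on the usual present-value calculation; as a sketch in generic notation (not the paper's),
        PV(\text{energy savings}) = \sum_{t=1}^{T} \frac{\Delta E \cdot p_E}{(1+r)^{t}},
    where \Delta E is the annual energy saved, p_E the energy price, T the appliance lifetime, and r the discount rate. A lower r (such as the uniform 5 percent rate) raises the present value of future savings, so choices that look nearly cost-efficient under individuals' own higher elicited rates fall short of the 5 percent benchmark, producing the roughly one-third undervaluation reported above.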
  8. By: Xiangjin Shen (Rutgers University, Economics Department); Shiliang Li (Rutgers University, Statistics Department); Hiroki Tsurumi (Rutgers University, Economics Department)
    Abstract: A Bayesian semi-parametric estimation of the binary response model using Markov chain Monte Carlo algorithms is proposed. The performances of the parametric and semi-parametric models are presented. The mean squared error, the receiver operating characteristic curve, and the marginal effect are used as the model selection criteria. Simulated data and Monte Carlo experiments show that, unless the binary data are extremely unbalanced, the semi-parametric and parametric models perform equally well. However, if the data are extremely unbalanced, the maximum likelihood estimation does not converge, whereas the Bayesian algorithms do. An application is also presented.
    Keywords: Semi-parametric binary response models, Markov Chain Monte Carlo algorithms, Kernel densities, Optimal bandwidth, Receiver operating characteristic curve
    JEL: C14 C35 C11
    Date: 2013–07–12
    URL: http://d.repec.org/n?u=RePEc:rut:rutres:201308&r=dcm
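    Note: One of the model-selection criteria above, the receiver operating characteristic (ROC) curve, is straightforward to compute once each model returns fitted probabilities; a minimal, self-contained Python sketch (the outcomes and probabilities below are simulated placeholders, not the paper's data):
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 500
        y = rng.integers(0, 2, size=n)  # observed binary outcomes

        # Hypothetical fitted probabilities from two competing models
        # (stand-ins for a parametric probit and a semi-parametric fit).
        p_probit = np.clip(0.35 * y + rng.uniform(0.0, 0.65, size=n), 0.0, 1.0)
        p_semipar = np.clip(0.30 * y + rng.uniform(0.0, 0.70, size=n), 0.0, 1.0)

        # Area under the ROC curve: how well each model ranks the 1s above the 0s.
        print("parametric probit AUC :", roc_auc_score(y, p_probit))
        print("semi-parametric   AUC :", roc_auc_score(y, p_semipar))
    The mean squared error criterion can be computed from the same fitted probabilities, e.g. np.mean((y - p_probit) ** 2).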

This nep-dcm issue is ©2013 by Edoardo Marcucci. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.