nep-dcm New Economics Papers
on Discrete Choice Models
Issue of 2023‒09‒11
seven papers chosen by
Edoardo Marcucci, Università degli studi Roma Tre


  1. Torch-Choice: A PyTorch Package for Large-Scale Choice Modelling with Python By Du, Tianyu; Kanodia, Ayush; Athey, Susan
  2. Optimizing B2B Product Offers with Machine Learning, Mixed Logit, and Nonlinear Programming By John V. Colias; Stella Park; Elizabeth Horn
  3. Behavioural welfare analysis and revealed preference: Theory and experimental evidence By Caliari, Daniele
  4. Risk Preferences Over Health: Empirical Estimates and Implications for Healthcare Decision-Making By Karen Mulligan; Drishti Baid; Jason N. Doctor; Charles E. Phelps; Darius N. Lakdawalla
  5. Rationality is not consistency By Caliari, Daniele
  6. A Robust Method for Microforecasting and Estimation of Random Effects By Raffaella Giacomini; Sokbae Lee; Silvia Sarpietro
  7. Threshold Regression in Heterogeneous Panel Data with Interactive Fixed Effects By Marco Barassi; Yiannis Karavias; Chongxian Zhu

  1. By: Du, Tianyu (Stanford U); Kanodia, Ayush (Stanford U); Athey, Susan (Stanford U)
    Abstract: torch-choice is an open-source library for flexible, fast choice modeling with Python and PyTorch. It provides a ChoiceDataset data structure to manage databases flexibly and memory-efficiently. The paper demonstrates how to construct a ChoiceDataset from databases of various formats and surveys the functionality of ChoiceDataset. The package implements two widely used models, the multinomial logit and nested logit models, and supports regularization during model estimation. It can take advantage of GPUs for estimation, allowing it to scale to massive datasets while remaining computationally efficient. Models can be initialized using either R-style formula strings or Python dictionaries. We conclude with a comparison of the computational efficiency of torch-choice and mlogit in R as (1) the number of observations increases, (2) the number of covariates increases, and (3) the item set expands. Finally, we demonstrate the scalability of torch-choice on large-scale datasets.
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:4106&r=dcm
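    A minimal sketch of the kind of computation the abstract describes: GPU-accelerated maximum-likelihood estimation of a multinomial logit in plain PyTorch. This is not the torch-choice API; ChoiceDataset construction, formula parsing, regularization, and the nested logit model are omitted, and all names and values below are illustrative assumptions.

      # Plain-PyTorch multinomial logit, illustrative only (not torch-choice code).
      import torch

      device = "cuda" if torch.cuda.is_available() else "cpu"
      n_obs, n_items, n_covariates = 10_000, 5, 3
      torch.manual_seed(0)

      # Item-level covariates for every observation: (n_obs, n_items, n_covariates).
      X = torch.randn(n_obs, n_items, n_covariates, device=device)
      true_beta = torch.tensor([1.0, -0.5, 0.25], device=device)
      choices = torch.distributions.Categorical(logits=X @ true_beta).sample()

      # Maximum likelihood via L-BFGS on the negative log-likelihood.
      beta = torch.zeros(n_covariates, device=device, requires_grad=True)
      optimizer = torch.optim.LBFGS([beta], max_iter=100)

      def closure():
          optimizer.zero_grad()
          loss = torch.nn.functional.cross_entropy(X @ beta, choices)
          loss.backward()
          return loss

      optimizer.step(closure)
      print(beta.detach().cpu())  # should be close to true_beta

    The same loop runs unchanged on CPU or GPU, which is the scalability point the abstract emphasizes.
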
  2. By: John V. Colias (Decision Analyst); Stella Park (AT&T); Elizabeth Horn (Decision Analyst)
    Abstract: In B2B markets, value-based pricing and selling have become an important alternative to discounting. This study outlines a modeling method that uses customer data (product offers made to each current or potential customer, features, discounts, and customer purchase decisions) to estimate a mixed logit choice model. The model is estimated via hierarchical Bayes and machine learning, delivering customer-level parameter estimates. The customer-level estimates are then fed into a nonlinear programming next-offer maximization problem to select the optimal features and discount level for customer segments, where segments are defined by loyalty and discount elasticity. The mixed logit model is grounded in economic theory (the random utility model) and predicts both customer perceived value for, and response to, alternative future sales offers. The methodology can be implemented to support value-based pricing and selling efforts. Contributions to the literature include: (a) the use of customer-level parameter estimates from a mixed logit model, delivered via a hierarchical Bayes estimation procedure, to support value-based pricing decisions; (b) validation that customer-level mixed logit modeling can deliver strong predictive accuracy which, while not as high as random forest, compares favorably; and (c) a nonlinear programming problem that uses customer-level mixed logit estimates to select optimal features and discounts.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.07830&r=dcm
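    The next-offer optimization step can be illustrated with a stylized single-customer sketch: given hypothetical customer-level logit coefficients, choose the discount that maximizes expected margin. The utility specification, parameter values, and bounds below are assumptions for illustration, not the authors' model.

      # Stylized next-offer optimization for one customer (illustrative values).
      import numpy as np
      from scipy.optimize import minimize_scalar

      list_price, unit_cost = 100.0, 60.0
      intercept, beta_price, beta_feature = 3.0, -0.05, 0.9   # assumed customer-level estimates
      feature_on = 1.0                                        # candidate offer includes the feature

      def expected_margin(discount):
          net_price = list_price * (1.0 - discount)
          utility = intercept + beta_price * net_price + beta_feature * feature_on
          purchase_prob = 1.0 / (1.0 + np.exp(-utility))      # binary logit purchase probability
          return purchase_prob * (net_price - unit_cost)

      # Search discounts between 0% and 30% for the margin-maximizing offer.
      res = minimize_scalar(lambda d: -expected_margin(d), bounds=(0.0, 0.3), method="bounded")
      print(f"optimal discount: {res.x:.1%}, expected margin: {-res.fun:.2f}")

    In the paper this step is a richer nonlinear program over several features and segment-level constraints; the sketch only shows how customer-level coefficients translate into an offer decision.
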
  3. By: Caliari, Daniele
    Abstract: Behavioural welfare economics provides tools to elicit welfare preferences when individuals use nonstandard behavioural models. Current proposals either require assumptions on the models or elicit preferences that become coarser and coarser as the dataset grows. We propose an informational property, Informational Responsiveness, that solves the coarseness problem and, as the dataset grows, characterizes the family of welfare preference elicitation tools that elicit the underlying utility function of a broad family of stochastic models, which we call preference monotonic models. We therefore argue that Informational Responsiveness is an important property of preference elicitation tools. We then test the property in an experiment in which participants first face a sequence of questions about time and risk outcomes and then report their preferences over a subset of the alternatives. We find that preference elicitation tools satisfying our requirement provide a significantly better match between the elicited and the reported welfare relations.
    Keywords: Behavioural welfare economics, Bounded rationality, Stochastic choice, Revealed preference
    JEL: D0 D6
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:wzbeoc:spii2023303&r=dcm
  4. By: Karen Mulligan; Drishti Baid; Jason N. Doctor; Charles E. Phelps; Darius N. Lakdawalla
    Abstract: Recent research has documented a link between consumer risk preferences over health and the willingness to pay (WTP) for medical technologies. However, the absence of empirical health risk preference estimates has so far limited the implementation of this generalized risk-adjusted cost-effectiveness (GRACE) theory, which addresses several limitations of traditional cost-effectiveness analysis (CEA). To address this gap, we elicit from a nationally representative U.S. sample individual risk preference parameters over health-related quality of life (HRQoL) that shed light on health risk attitudes and enable GRACE valuation of medical technology. We find that individuals exhibit risk-seeking preferences at low levels of health, switch to risk-averse preferences at health equal to 0.485 (measured on a zero-to-one scale), and become most risk-averse when their health is perfect (coefficient of relative risk aversion = 4.36). The risk preference estimates imply an empirical premium for disease severity: each unit of health is worth three times more to patients with serious health conditions (health equal to 0.5) than to those who are perfectly healthy. They also imply that traditional CEA overvalues treatments for the mildest diseases by more than a factor of two. Use of traditional CEA thus both overstimulates innovation in treatments for mild diseases and underprovides innovation in treatments for severe diseases.
    JEL: I11 I18
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31524&r=dcm
  5. By: Caliari, Daniele
    Abstract: We challenge the standard definition of economic rationality as consistency by drawing a novel distinction between two kinds of axioms in decision theory: consistency axioms and preference axioms. We argue that this distinction has been overlooked by the literature and that, as a result, evidence that consistency is a proxy for decision-making ability is often based on incorrect identification strategies. We conduct an experiment to investigate the factors that drive violations of consistency alone. While we find no evidence that consistency axioms are a proxy for decision-making ability, we provide suggestive evidence that some preference axioms are, confirming their potential role as confounding factors. Overall, our experimental evidence raises doubts about the choice of language that equates consistency with rationality in economics.
    Keywords: Decision Theory, Experimental Design, Consistency, Rationality
    JEL: D00 D90 D91
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:wzbeoc:spii2023304&r=dcm
  6. By: Raffaella Giacomini; Sokbae Lee; Silvia Sarpietro
    Abstract: We propose a method for forecasting individual outcomes and estimating random effects in linear panel data models and value-added models when the panel has a short time dimension. The method is robust and trivial to implement, and it requires minimal assumptions. The idea is to take a weighted average of time-series and pooled forecasts/estimators, with individual weights based on time-series information. We show the forecast optimality of the individual weights, both in terms of minimax regret and of mean squared forecast error. We then provide feasible weights that ensure good performance under weaker assumptions than those required by existing approaches. Unlike existing shrinkage methods, our approach borrows the strength, but avoids the tyranny, of the majority, by targeting individual (instead of group) accuracy and letting the data decide how much strength each individual should borrow. Unlike existing empirical Bayes methods, our frequentist approach requires no distributional assumptions and is, in fact, particularly advantageous in the presence of features such as heavy tails that would make a fully nonparametric procedure problematic.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.01596&r=dcm
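    The core idea, a unit-specific convex combination of the time-series and pooled estimators, can be sketched in a few lines of numpy. The weights below (a simple noise-to-signal shrinkage rule) are stand-ins for illustration, not the paper's minimax-regret optimal or feasible weights.

      # Illustrative combination of time-series and pooled estimators on a short panel.
      import numpy as np

      rng = np.random.default_rng(0)
      n_units, T = 200, 4                          # short time dimension
      alpha = rng.normal(0.0, 1.0, n_units)        # unit-level random effects
      y = alpha[:, None] + rng.normal(0.0, 2.0, (n_units, T))

      ts_estimate = y.mean(axis=1)                 # unit-by-unit time-series mean
      pooled_estimate = y.mean()                   # grand mean, identical for all units

      # Unit-specific weights: lean on the pool more when the unit's own data are noisy.
      noise_var = y.var(axis=1, ddof=1) / T
      signal_var = max(ts_estimate.var(ddof=1) - noise_var.mean(), 1e-8)
      w = signal_var / (signal_var + noise_var)

      combined = w * ts_estimate + (1.0 - w) * pooled_estimate
      for name, est in [("time series", ts_estimate),
                        ("pooled", np.full(n_units, pooled_estimate)),
                        ("combined", combined)]:
          print(name, "MSE:", round(np.mean((est - alpha) ** 2), 3))

    In this simulation the combined estimate typically attains lower MSE than either benchmark, which is the kind of individual-level gain the paper formalizes.
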
  7. By: Marco Barassi (University of Birmingham); Yiannis Karavias (University of Birmingham); Chongxian Zhu (University of Birmingham)
    Abstract: This paper introduces unit-specific heterogeneity into panel data threshold regression. Both slope coefficients and threshold parameters are allowed to vary by unit. The heterogeneous threshold parameters arise through a unit-specific empirical quantile transformation of a common underlying threshold parameter, which is estimated efficiently from the whole panel. In the errors, the unobserved heterogeneity of the panel takes the general form of interactive fixed effects. The newly introduced parameter heterogeneity has implications for model identification, estimation, interpretation, and asymptotic inference. The assumption of a shrinking threshold magnitude now implies shrinking heterogeneity and leads to faster rates of convergence for the estimators than previously encountered. The asymptotic theory for the proposed estimators is derived, and Monte Carlo simulations demonstrate its usefulness in small samples. The new model is employed to examine the Feldstein-Horioka puzzle, and it is found that the trade liberalization policies of the 1980s significantly impacted cross-country capital mobility.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2308.04057&r=dcm
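    A small simulation sketch of the data-generating structure described above: a common threshold level tau is translated into unit-specific thresholds through each unit's empirical quantile function, with regime-dependent, unit-specific slopes. Estimation, interactive fixed effects, and the asymptotic theory are not shown; all values are illustrative assumptions.

      # Illustrative data-generating structure only (not the authors' estimator).
      import numpy as np

      rng = np.random.default_rng(1)
      n_units, T = 50, 30
      tau = 0.7                                    # common underlying threshold (a quantile level)

      # Threshold variable with unit-specific distributions.
      scale_i = 1.0 + rng.random(n_units)[:, None]
      q = rng.gamma(shape=2.0, scale=scale_i, size=(n_units, T))
      gamma_i = np.quantile(q, tau, axis=1)        # unit-specific thresholds from the common tau

      # Unit-specific slopes in each regime.
      beta_low = 0.5 + 0.1 * rng.standard_normal(n_units)
      beta_high = 1.2 + 0.1 * rng.standard_normal(n_units)

      x = rng.standard_normal((n_units, T))
      upper = q > gamma_i[:, None]
      slope = np.where(upper, beta_high[:, None], beta_low[:, None])
      y = slope * x + rng.normal(scale=0.5, size=(n_units, T))

      print("share of observations in the upper regime:", upper.mean())
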

This nep-dcm issue is ©2023 by Edoardo Marcucci. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.