on Discrete Choice Models
| By: | Mohammad Ghaderi |
| Abstract: | This paper introduces the attention-entropy random utility (AERU) model, a behavioral model of discrete choice in which a decision-maker endogenously allocates attention across subsets of attributes in order to increase subjective confidence by reducing ex post choice uncertainty, and subsequently chooses an option based solely on the attended information. By endogenizing attention, the decision problem is reformulated from “which alternative to choose” to “which informational cues to process,” with the observed choice emerging as the outcome of this attentional allocation. The AERU framework nests random utility model (RUM)-like behavior under transparent conditions, yet it is not restricted by Luce’s independence of irrelevant alternatives (IIA), order-independence, or regularity. This flexibility enables AERU to capture key context effects in a disciplined manner and to generate sharp, testable predictions regarding the conditions for each context effect. From an empirical standpoint, AERU preserves the parsimony of the multinomial logit, requiring only a single additional attention parameter. Employing a scalable estimation procedure based on block coordinate ascent combined with a quasi-Newton method, I provide results from computational experiments demonstrating that AERU can produce better in-sample and out-of-sample predictions. Overall, AERU provides a flexible, parsimonious, and interpretable model of boundedly rational choice with a clear behavioral foundation and implications for context effects. |
| Keywords: | discrete choice, endogenous attention, entropy, subjective confidence, random utility, context effects, regularity |
| JEL: | D91 C35 D01 D83 C63 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:upf:upfgen:1936 |
| URL: | https://d.repec.org/n?u=RePEc:bge:wpaper:1552 |
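The following is a purely illustrative Python sketch of the mechanism this abstract describes: attention is allocated across attribute subsets to reduce the entropy of the induced choice distribution (i.e., to raise subjective confidence), and the observed choice mixes the subset-conditional choices. The functional forms, the single attention parameter `kappa`, and the toy data are assumptions for illustration, not the paper's exact formulation or estimator.

```python
# Minimal illustrative sketch (not the paper's formulation): a chooser scores
# each attribute subset by the negative entropy of the logit choice distribution
# it induces, turns these scores into attention weights via one parameter kappa,
# and mixes the subset-conditional choice probabilities.
import itertools
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def choice_entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def aeru_sketch(X, beta, kappa=1.0):
    """X: (alternatives, attributes) matrix; beta: attribute weights (placeholders)."""
    n_attr = X.shape[1]
    subsets = [s for r in range(1, n_attr + 1)
               for s in itertools.combinations(range(n_attr), r)]
    probs, scores = [], []
    for s in subsets:
        p = softmax(X[:, list(s)] @ beta[list(s)])  # choice probs from attended attributes only
        probs.append(p)
        scores.append(-choice_entropy(p))           # confidence = low ex post choice uncertainty
    attention = softmax(kappa * np.array(scores))   # endogenous attention over attribute subsets
    return attention @ np.vstack(probs)             # observed choice probabilities

# Hypothetical 3-alternative, 2-attribute example
X = np.array([[1.0, 0.2], [0.4, 0.9], [0.6, 0.6]])
print(aeru_sketch(X, beta=np.array([1.0, 1.0]), kappa=2.0))
```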
| By: | Victor Aguirregabiria; Hui Liu; Yao Luo |
| Abstract: | We propose a fast algorithm for computing the GMM estimator in the BLP demand model (Berry, Levinsohn, and Pakes, 1995). Inspired by nested pseudo-likelihood methods for dynamic discrete choice models, our approach avoids repeatedly solving the inverse demand system by swapping the order of the GMM optimization and the fixed-point computation. We show that, by fixing consumer-level outside-option probabilities, BLP’s market-share–mean-utility inversion becomes closed-form and, crucially, separable across products, yielding a nested pseudo-GMM algorithm with analytic gradients. The resulting estimator scales dramatically better with the number of products and is naturally suited for parallel and multithreaded implementation. In the inner loop, outside-option probabilities are treated as fixed objects while a pseudo-GMM criterion is minimized with respect to the structural parameters, substantially reducing computational cost. Monte Carlo simulations and an empirical application show that our method is significantly faster than the fastest existing alternatives, with efficiency gains that grow more than proportionally in the number of products. We provide MATLAB and Julia code to facilitate implementation. |
| Keywords: | Random Coefficients Logit; Sufficient Statistics; Market Share Inversion; Newton-Kantorovich Iteration; Asymptotic Properties; LCBO |
| JEL: | C23 C25 C51 C61 D12 L11 |
| Date: | 2026–02–04 |
| URL: | https://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-819 |
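As a hedged illustration of the separability result stated in the abstract, the fragment below computes mean utilities in closed form once consumer-level outside-option probabilities are held fixed: under a random-coefficients logit, shares then satisfy s_j = sum_i w_i p_i0 exp(delta_j + mu_ij), so delta_j = log s_j - log(sum_i w_i p_i0 exp(mu_ij)), product by product. Variable names and inputs are placeholders; the authors supply their own MATLAB and Julia implementations.

```python
# Illustrative sketch of the fixed-outside-probability inversion, not the
# authors' code. With p_i0 fixed, delta_j is closed form and separable across j.
import numpy as np

def delta_given_outside_probs(shares, mu, p0, w):
    """shares: (J,) market shares; mu: (I, J) consumer-specific utility deviations;
    p0: (I,) fixed outside-option probabilities; w: (I,) consumer weights."""
    denom = (w * p0) @ np.exp(mu)          # (J,) -- independent across products
    return np.log(shares) - np.log(denom)

# Hypothetical example with I=4 simulated consumers and J=3 products
rng = np.random.default_rng(0)
mu = rng.normal(scale=0.5, size=(4, 3))
p0 = np.full(4, 0.4)                        # fixed outside-option probabilities
w = np.full(4, 0.25)
shares = np.array([0.2, 0.25, 0.15])
print(delta_given_outside_probs(shares, mu, p0, w))
```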
| By: | Fabien Giauque; Mehdi Farsi |
| Abstract: | Dynamic social norms have been recognized as a promising approach to promote energy sufficiency. By highlighting trends and future shifts rather than current states, dynamic norms allow for a better focus on emerging norms that are not widely adopted. While existing studies predominantly examine behavioral outcomes, the underlying processes and trade-offs remain to be explored. This paper uses a discrete choice experiment (DCE) combined with a randomized controlled trial to study electricity saving preferences under various dynamic norms. An emphasis is placed on the rationale for the norm changes. The results show that dynamic norms framed in terms of growing concerns about energy supply security positively affect the electricity saving goal, whereas those framed around climate change do not. The heterogeneity analyses suggest that dynamic norms shape behavior through two complementary mechanisms: they generate new preferences while simultaneously reinforcing existing ones. The concluding analysis identifies four distinct groups that vary systematically in their preferences for electricity sufficiency. |
| Keywords: | Electricity saving; Dynamic Norms; Energy supply security; Climate change; Discrete choice experiment; Latent Class Model; Mixed Logit Model; Value-Belief-Norm Theory |
| JEL: | D12 D91 Q48 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:irn:wpaper:26-01 |
| By: | Fry, Tim R. L.; Longmire, Ritchard |
| Abstract: | The Consumer Panel of Australia data collected by The Roy Morgan Research Centre is used to investigate the determinants of brand choice behavior in the laundry detergent market. To reduce potential problems of heterogeneity in the market, all purchases of the eleven leading brands in the Melbourne metropolitan area over the period July 1992 to June 1993 are considered. The purchases over this period are divided into an estimation sample and a holdout sample. A mixed logit model is then fitted using the estimation sample. Price, brand loyalty, household size and household income are found to be significant in explaining brand choice behavior. A test for the potentially restrictive property of Independence of Irrelevant Alternatives (IIA) is carried out, and IIA is found to be acceptable for these data. The model appears to fit well and estimated elasticities are presented. The estimated model is then validated using the holdout sample. It is found that the model is very good at predicting the brand choices made by consumers in the holdout sample. |
| Keywords: | Research and Development/Tech Change/Emerging Technologies, Research Methods/Statistical Methods |
| URL: | https://d.repec.org/n?u=RePEc:ags:monebs:267919 |
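The abstract reports a test of IIA; the snippet below is a hedged sketch of the standard Hausman-McFadden version of such a test, applied to hypothetical placeholder estimates rather than the paper's data. The inputs are coefficient vectors and covariance matrices from the full choice set and from a restricted choice set with one alternative dropped.

```python
# Hedged sketch of a Hausman-McFadden IIA test on placeholder numbers.
# Under IIA, the common coefficients should not change systematically when
# an alternative is removed from the choice set.
import numpy as np
from scipy.stats import chi2

def hausman_mcfadden(b_full, V_full, b_sub, V_sub):
    d = b_sub - b_full
    dV = V_sub - V_full
    stat = float(d @ np.linalg.pinv(dV) @ d)   # pinv guards against a non-PD difference
    df = len(d)
    return stat, 1.0 - chi2.cdf(stat, df)

# Hypothetical 2-coefficient example
b_full = np.array([-1.20, 0.85]); V_full = np.diag([0.010, 0.020])
b_sub  = np.array([-1.15, 0.90]); V_sub  = np.diag([0.015, 0.028])
stat, pval = hausman_mcfadden(b_full, V_full, b_sub, V_sub)
print(f"Hausman-McFadden statistic = {stat:.2f}, p-value = {pval:.3f}")
```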
| By: | Hoang Giang Pham; Tien Mai |
| Abstract: | We study assortment and price optimization under the generalized nested logit (GNL) model, one of the most general and flexible modeling frameworks in discrete choice modeling. Despite its modeling advantages, optimization under GNL is highly challenging: even the pure assortment problem is NP-hard, and existing approaches rely on approximation schemes or are limited to simple cardinality constraints. In this paper, we develop the first exact and near-exact algorithms for constrained assortment and joint assortment–pricing optimization (JAP) under GNL. Our approach reformulates the problem into bilinear and exponential-cone convex programs and exploits convexity, concavity, and submodularity properties to generate strong cutting planes within a Branch-and-Cut framework (B&C). We further extend this framework to the mixed GNL (MGNL) model, capturing heterogeneous customer segments, and to JAP with discrete prices. For the continuous pricing case, we propose a near-exact algorithm based on piecewise-linear approximation (PWLA) that achieves arbitrarily high precision under general linear constraints. Extensive computational experiments demonstrate that our methods substantially outperform state-of-the-art approximation approaches in both solution quality and scalability. In particular, we are able to solve large-scale instances with up to 1000 products and 20 nests, and to obtain near-optimal solutions for continuous pricing problems with negligible optimality gaps. To the best of our knowledge, this work resolves several open problems in assortment and price optimization under GNL. |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.04220 |
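As a hedged illustration of the objective such assortment problems optimize, the sketch below evaluates expected revenue under standard GNL choice probabilities (with the no-purchase utility normalized to zero) and brute-forces a tiny instance. The nesting structure, parameters, and enumeration are illustrative placeholders; they stand in for, rather than reproduce, the paper's Branch-and-Cut and conic reformulations.

```python
# Hedged sketch: expected revenue of an assortment under standard GNL choice
# probabilities, with a small brute-force search over assortments.
import itertools
import numpy as np

def gnl_revenue(S, r, V, alpha, mu):
    """S: offered product indices; r: (J,) revenues; V: (J,) utilities;
    alpha: (J, M) allocation weights (rows sum to 1); mu: (M,) nest parameters in (0,1]."""
    if not S:
        return 0.0
    S = list(S)
    M = alpha.shape[1]
    total, rev = 1.0, 0.0                    # "1.0" is the outside-option term
    inc, within = [], []
    for m in range(M):
        terms = (alpha[S, m] * np.exp(V[S])) ** (1.0 / mu[m])
        I_m = terms.sum()
        inc.append(I_m ** mu[m])             # nest inclusive value
        within.append(terms / I_m)           # within-nest choice probabilities
    total += sum(inc)
    for m in range(M):
        rev += (inc[m] / total) * float(np.dot(within[m], r[S]))
    return rev

# Tiny hypothetical instance: 4 products, 2 nests; enumerate all assortments
rng = np.random.default_rng(1)
r = np.array([10.0, 8.0, 6.0, 4.0]); V = np.array([0.5, 0.8, 1.0, 1.2])
alpha = rng.dirichlet(np.ones(2), size=4); mu = np.array([0.6, 0.9])
best = max((gnl_revenue(S, r, V, alpha, mu), S)
           for k in range(5) for S in itertools.combinations(range(4), k))
print(best)
```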
| By: | Claire Greene; Oz Shy; Joanna Stavins |
| Abstract: | This paper investigates the degree to which merchants influence consumers’ choice of how they pay for transactions. Using data from the Survey and Diary of Consumer Payments Choice, we examine consumers’ adherence to their preferred payment method when making in-person transactions. We also investigate whether merchants are able to steer consumers away from their preferred payment method. We characterize preferences for paying with cash or cards according to consumers’ income, level of education, and employment status. We find that consumers make most payments with their preferred method. When consumers pay with a non-preferred method, it is due only in small part to merchants’ refusal to accept that payment method. If a merchant accepts card payments, consumers who prefer paying with cards are not likely to pay with cash for large-value transactions or for gas or groceries. Discounts on cash purchases do not affect the probability of consumers deviating from using cards and paying with cash. Finally, the paper identifies “inertia” effects, which lead consumers to use the same payment method for consecutive purchases. |
| Keywords: | consumer payments; consumer payment choice; merchant steering; discounts; surcharges |
| JEL: | D14 E42 |
| Date: | 2026–01–01 |
| URL: | https://d.repec.org/n?u=RePEc:fip:fedbwp:102380 |
| By: | Bejarano, Hernan; Busso, Matías; Santos, Juan Francisco |
| Abstract: | We study how individuals in six Latin American countries value public versus private provision of education and healthcare using a survey experiment. Respondents were randomly assigned to vignettes that vary income, service quality, and provider type. Perceived quality is the main driver of choices: the probability of selecting a private provider roughly doubles when public quality falls from 80 to 20 percent, while income has a smaller effect. Higher institutional trust lowers the likelihood of switching to private providers but does not affect willingness to pay once individuals choose private provision. The multi-country design supports external validity and reveals similar behavioral responses across contexts. The results show that improving service quality and rebuilding institutional trust can reduce reliance on private provision. |
| Keywords: | Stated Preferences; willingness to pay; Public versus Private Provision; service quality |
| JEL: | D12 H42 I21 I18 O54 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:idb:brikps:14477 |
| By: | Harris, Mark N.; Macquarie, Lachlan R.; Siouclis, Anthony J. |
| Abstract: | Recent advances in computing power have brought the use of computer intensive estimation methods of binary panel data models within the reach of the applied researcher. The aim of this paper is to apply some of these techniques to a marketing data set and compare the results. In addition, their small sample performance is examined via Monte Carlo simulation experiments. The first estimation technique used was maximum likelihood estimation of the cross section probit (ignoring heterogeneity). The remaining techniques estimated the binary panel probit model using standard maximum likelihood; the Solomon-Cox approximation to this likelihood; and finally the Gibbs sampler to obtain Bayesian estimates. The results suggested that, in most cases, standard maximum likelihood estimation of the binary panel probit model was the preferred technique, primarily because it is readily available to applied practitioners. However, when the variance of the heterogeneity term is small, the computational simplicity of the Solomon-Cox approximation may prove attractive. In large samples, the Gibbs sampler was also found to perform well. |
| Keywords: | Research Methods/Statistical Methods |
| URL: | https://d.repec.org/n?u=RePEc:ags:monebs:267940 |
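For readers unfamiliar with the "standard maximum likelihood" benchmark the abstract refers to, the sketch below integrates a normal heterogeneity term out of a binary panel probit by Gauss-Hermite quadrature on simulated placeholder data. The data, parameter names, and quadrature order are assumptions; the paper's Solomon-Cox approximation and Gibbs sampler are not shown.

```python
# Hedged sketch: random-effects panel probit estimated by ML, integrating the
# heterogeneity term out with Gauss-Hermite quadrature (placeholder data).
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, X, nodes, weights):
    """y: (N, T) outcomes; X: (N, T, K) regressors; params = (beta, log sigma)."""
    beta, sigma = params[:-1], np.exp(params[-1])
    q = 2 * y - 1                                        # +1 / -1 coding
    xb = X @ beta                                        # (N, T)
    lik = 0.0
    for node, w in zip(nodes, weights):                  # integrate out the random effect
        a = np.sqrt(2.0) * node * sigma
        lik += (w / np.sqrt(np.pi)) * norm.cdf(q * (xb + a)).prod(axis=1)
    return -np.log(lik + 1e-300).sum()

# Simulated placeholder panel: N=200 individuals, T=5 periods, K=2 regressors
rng = np.random.default_rng(0)
N, T, K = 200, 5, 2
X = rng.normal(size=(N, T, K))
alpha_i = 0.8 * rng.normal(size=(N, 1))                  # true heterogeneity, sd 0.8
y = (X @ np.array([1.0, -0.5]) + alpha_i + rng.normal(size=(N, T)) > 0).astype(float)
nodes, weights = hermgauss(15)
res = minimize(neg_loglik, x0=np.zeros(K + 1), args=(y, X, nodes, weights), method="BFGS")
print("beta_hat:", res.x[:K], "sigma_hat:", np.exp(res.x[K]))
```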
| By: | Birkenstock, Maren; Röder, Norbert; Thiermann, Insa; Buschmann, Christoph; Feindt, Peter |
| Abstract: | Background – With the new Common Agricultural Policy (CAP), EU member states (MS) gained flexibility in the design of agri-environmental measures (AEM). In particular, MS are encouraged to determine AEM payment levels based on a marginal supplier approach. Analytical determination of payment levels would require sufficient information about the distribution of cost structures. As this data is generally lacking, calculations are typically based on assumptions. The question arises whether MS use the ensuing discretionary scope to design environmentally ambitious policies or primarily income-generating farm payments. Objective – Prioritising objectives and implementation options under budget constraints, an essential task in policy design, is particularly difficult when developing schemes to support the provision of public goods. By definition, public goods lack a market value, therefore their cost-effectiveness is difficult to assess. We examine how scientific experts assess trade-offs between the remuneration level for AEM and the achievable environmental effectiveness of a funding scheme. Method – In a discrete choice survey, experts with a track record in dealing with European agri-environmental challenges were asked to choose between different schemes and the status quo. These experts are an important group as they influence scientific and political debates on the future CAP. The attributes presented in the choice set were the environmental effectiveness of a CAP strategic plan (CSP), the share of agricultural area enrolled in agri-environmental measures (AEM), the share of ‘dark-green’ measures, and ‘payment to farmer’. Results & Discussion – The results show that a higher environmental effectiveness of the CSP and a higher share of agricultural area enrolled in AEM increased the likelihood that experts selected a funding scheme. Higher levels of ‘payment to farmer’ decreased the selection probability. In order to achieve more ambitious CSPs, experts regarded higher payments for AEM as acceptable. A latent class analysis revealed preference heterogeneity among experts, reflecting different disciplinary and geographical perspectives. |
| Keywords: | Environmental Economics and Policy |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:ags:eaae25:391391 |
| By: | Joshua S. Gans |
| Abstract: | Machine learning systems embed preferences either in training losses or through post-processing of calibrated predictions. Applying information design methods from Strack and Yang (2024), this paper provides decision-problem-agnostic conditions under which separation, that is, training preference-free and applying preferences ex post, is optimal. Unlike prior work that requires specifying downstream objectives, the welfare results here apply uniformly across decision problems. The key primitive is a diminishing-value-of-information condition: relative to a fixed (normalised) preference-free loss, preference embedding makes informativeness less valuable at the margin, inducing a mean-preserving contraction of learned posteriors. Because the value of information is convex in beliefs, preference-free training weakly dominates for any expected utility decision problem. This provides theoretical foundations for modular AI pipelines that learn calibrated probabilities and implement asymmetric costs through downstream decision rules. However, separation requires users to implement optimal decision rules. When cognitive constraints bind, as documented in human-AI decision-making, preference embedding can dominate by automating threshold computation. These results provide design guidance: preserve optionality through post-processing when objectives may shift; embed preferences when decision-stage frictions dominate. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.18732 |
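A minimal sketch of the modular "separation" pipeline the abstract argues for: probabilities are learned preference-free and calibrated, and asymmetric error costs are applied ex post as a decision threshold, acting if and only if p >= c_FP / (c_FP + c_FN) when correct decisions are costless. The cost figures and the calibrated probability below are hypothetical.

```python
# Minimal sketch: preferences applied as a downstream threshold rule on a
# calibrated probability, rather than embedded in the training loss.
def decide(p_calibrated, cost_false_pos, cost_false_neg):
    """Expected-cost-minimizing action given calibrated P(y=1|x) and asymmetric
    error costs (correct decisions assumed costless)."""
    threshold = cost_false_pos / (cost_false_pos + cost_false_neg)
    return p_calibrated >= threshold, threshold

# The same calibrated model serves two downstream users with different preferences
for c_fp, c_fn in [(1.0, 1.0), (1.0, 9.0)]:
    action, thr = decide(0.30, c_fp, c_fn)
    print(f"costs ({c_fp}, {c_fn}): threshold {thr:.2f}, act = {action}")
```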
| By: | Quitzé Valenzuela-Stookey |
| Abstract: | A principal must allocate a set of heterogeneous tasks (or objects) among multiple agents. The principal has preferences over the allocation. Each agent has preferences over which tasks they are assigned, which are their private information. The principal is constrained by the fact that each agent has the right to demand some status-quo task assignment. I characterize the conditions under which the principal can gain by delegating some control over the assignment to the agents. Within a large class of delegation mechanisms, I then characterize those that are obviously strategy-proof (OSP), and provide guidance for choosing among OSP mechanisms. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.20035 |
| By: | Annaji Sarma (National Institute of Fashion Technology, Hyderabad, India); Achanta Rajyalakshmi (National Institute of Fashion Technology, Hyderabad, India) |
| Abstract: | The integration of sustainability into luxury fashion is transforming the industry, driven by increasing consumer awareness, environmental concerns, and evolving societal values. This study explores the intersection of sustainability and luxury, focusing on consumer perceptions, preferences, and the challenges faced by luxury brands in adopting sustainable practices. Based on survey data, findings reveal that consumers prioritize eco-friendly production, ethical sourcing, and transparency in supply chains, with 89% of respondents agreeing that sustainability is essential for the future of luxury fashion. Key challenges for brands include the high cost of sustainable materials, maintaining exclusivity, and overcoming established business models. The study also highlights the unique potential of the Indian luxury market, where growing consumer awareness and homegrown brands contribute to sustainability efforts. Consumers increasingly align luxury with timeless quality, durability, and ethical practices, and are willing to pay a premium for sustainable luxury products. The research concludes that luxury brands must innovate and integrate sustainability into their core strategies to maintain relevance and credibility. By leading the shift toward sustainability, the luxury fashion industry can redefine its value proposition and influence the broader fashion industry, fostering a more sustainable and ethical future. |
| Keywords: | sustainability, Gen Z, luxury fashion, consumer behavior, ethical sourcing, brand innovation |
| Date: | 2025–08 |
| URL: | https://d.repec.org/n?u=RePEc:smo:raiswp:0554 |
| By: | Eric Fortier (PhD Candidate, Simon Fraser University) |
| Abstract: | This study examines how specification choices in local projections influence the estimation of impulse responses to monetary policy shocks. Using monthly U.S. data from 1983 to 2007 and the Aruoba and Drechsel (2024) shock series, I systematically compare levels and long-differences specifications across 12 control sets and multiple lag lengths. The results are evaluated both qualitatively, by benchmarking impulse responses against theory, standard beliefs, and prior evidence, and quantitatively, using information criteria (AIC, BIC, CV). The findings show that the long-differences specification produces distorted long-term dynamics. In contrast, the levels specification, when paired with a robust control set, generates well-behaved responses. These results stress the importance of careful specification and provide practical guidance for researchers applying local projections to monetary policy shocks. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:sfu:sfudps:dp26-01 |
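The following is a hedged sketch of a levels local-projection specification of the kind the paper compares, run on simulated placeholder data rather than the monthly U.S. series and Aruoba-Drechsel (2024) shocks used in the study; the control set and lag length are illustrative stand-ins for the paper's 12 control sets.

```python
# Hedged sketch: levels local projections. For each horizon h, regress y_{t+h}
# on the shock at t plus lagged controls, and read the impulse response from
# the shock coefficient. Data and lag choices are placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T, H, L = 300, 24, 2                       # sample length, max horizon, lags
shock = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):                      # placeholder persistent outcome series
    y[t] = 0.8 * y[t - 1] - 0.5 * shock[t] + rng.normal(scale=0.3)

irf = []
for h in range(H + 1):
    rows = range(L, T - h)
    lhs = np.array([y[t + h] for t in rows])                  # y_{t+h}, in levels
    rhs = np.column_stack([
        [shock[t] for t in rows],                             # shock_t
        *[[y[t - l] for t in rows] for l in range(1, L + 1)],       # lagged outcome controls
        *[[shock[t - l] for t in rows] for l in range(1, L + 1)],   # lagged shock controls
    ])
    fit = sm.OLS(lhs, sm.add_constant(rhs)).fit(cov_type="HAC", cov_kwds={"maxlags": h + 1})
    irf.append(fit.params[1])              # coefficient on shock_t at horizon h

print(np.round(irf[:6], 3))                # impulse response at horizons 0..5
```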