nep-upt New Economics Papers
on Utility Models and Prospect Theory
Issue of 2025–03–17
fourteen papers chosen by
Alexander Harin


  1. Decision theory and the "almost implies near" phenomenon By Christopher P Chambers; Federico Echenique
  2. Source Theory: A Tractable and Positive Ambiguity Theory By Aurélien Baillon; Han Bleichrodt; Chen Li; Peter P. Wakker
  3. From Decision in Risk to Decision in Time - and Return: A Restatement of Probability Discounting By Marc-Arthur Diaye; André Lapidus; Christian Schmidt
  4. Impartial utilitarianism on infinite utility streams By Kensei Nakamura
  5. Social Choice Rules with Responsibility for Individual Skills By Kensei Nakamura
  6. Properties of Path-Independent Choice Correspondences and Their Applications to Efficient and Stable Matchings By Keisuke Bando; Kenzo Imamura; Yasushi Kawase
  7. Measure of Morality: A Mathematical Theory of Egalitarian Ethics By Shuang Wei
  8. It's Not All Black and White: Degree of Truthfulness for Risk-Avoiding Agents By Eden Hartman; Erel Segal-Halevi; Biaoshuai Tao
  9. Non-Monetary Mechanism Design without Distributional Information: Using Scarce Audits Wisely By Yan Dai; Moise Blanchard; Patrick Jaillet
  10. Weak independence of irrelevant alternatives and generalized Nash bargaining solutions By Kensei Nakamura
  11. Degrees of Freedom Analysis: A mixed method for theory building, decision making, and prediction By Tractenberg, Rochelle E.
  12. Knightian Uncertainty and Bayesian Entrepreneurship By Joshua S. Gans
  13. Beyond the Median Voter Theorem: A New Framework for Ideological Positioning By Shitong Wang
  14. Policy Design in Long-Run Welfare Dynamics By Jiduan Wu; Rediet Abebe; Moritz Hardt; Ana-Andreea Stoica

  1. By: Christopher P Chambers; Federico Echenique
    Abstract: We propose to relax traditional axioms in decision theory by incorporating a measurement, or degree, of satisfaction. For example, if the independence axiom of expected utility theory is violated, we can measure the size of the violation. This measure allows us to derive an approximation guarantee for a utility representation that aligns with the unmodified version of the axiom. Almost satisfying the axiom then implies a utility that is near one representing preferences satisfying the axiom exactly. We develop specific examples drawn from expected utility theory under risk and uncertainty.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.07126
  2. By: Aurélien Baillon (EM - EMLyon Business School, GATE Lyon Saint-Étienne - Groupe d'Analyse et de Théorie Economique Lyon - Saint-Etienne - UL2 - Université Lumière - Lyon 2 - UJM - Université Jean Monnet - Saint-Étienne - EM - EMLyon Business School - CNRS - Centre National de la Recherche Scientifique); Han Bleichrodt (Universidad de Alicante); Chen Li (Erasmus University Rotterdam); Peter P. Wakker (Erasmus University Rotterdam)
    Abstract: This paper introduces source theory, a new theory for decision under ambiguity (unknown probabilities). It shows how Savage's subjective probabilities, with source-dependent nonlinear weighting functions, can model Ellsberg's ambiguity. It can do so in Savage's framework of state-contingent assets, permits nonexpected utility for risk, and avoids multistage complications. It is tractable, shows ambiguity attitudes through simple graphs, is empirically realistic, and can be used prescriptively. We provide a new tool to analyze weighting functions: pmatchers. They give Arrow–Pratt-like transformations but operate "within" rather than "outside" functions. We further show that ambiguity perception and inverse S probability weighting, seemingly unrelated concepts, are two sides of the same "insensitivity" coin.
    Keywords: subjective beliefs, ambiguity aversion, Ellsberg paradox, source of uncertainty
    Date: 2025–02–12
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-04964898
  3. By: Marc-Arthur Diaye (CES - Centre d'économie de la Sorbonne - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); André Lapidus (PHARE - Philosophie, Histoire et Analyse des Représentations Économiques - UP1 - Université Paris 1 Panthéon-Sorbonne); Christian Schmidt (PHARE - Philosophie, Histoire et Analyse des Représentations Économiques - UP1 - Université Paris 1 Panthéon-Sorbonne)
    Abstract: This paper aims to restate, in a decision-theory framework, the results of some significant contributions to the literature on probability discounting that followed the publication of the pioneering article by Rachlin et al. We provide a restatement of probability discounting, usually limited to the case of two-outcome lotteries, in terms of rank-dependent utility, in which the utilities of the outcomes of n-outcome lotteries are weighted by probabilities transformed after their transposition into time delays. This formalism makes the typical cases of rationality in time and in risk mutually exclusive, but allows looser types of rationality. The resulting attitudes toward probability and toward risk are then determined by the values of the two parameters involved in the probability-discounting procedure: one related to impatience and pessimism, and one related to time-consistency and the separation between non-optimism and non-pessimism. A simulation illustrates these results through the characteristics of the probability-transformation function.
    Keywords: Probability discounting, Time discounting, Logarithmic time perception, Rank-dependent utility, Rationality, Attitude toward probabilities, Attitude toward risk
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:hal:journl:hal-03256606
  4. By: Kensei Nakamura
    Abstract: When evaluating policies that affect future generations, the most commonly used criterion is the discounted utilitarian rule. However, in terms of intergenerational fairness, it is difficult to justify prioritizing the current generation over future ones. This paper axiomatically examines impartial utilitarian rules over infinite-dimensional utility streams. We provide simple characterizations of the social welfare ordering that evaluates utility streams by their long-run average, on the domain where that average is well defined. Furthermore, we characterize what the same axioms imply on a more general domain, the set of bounded streams. Some of these results are closely related to Banach limits, a well-known generalization of the classical limit concept for streams. Thus, this paper can be seen as proposing an appealing subclass of Banach limits through axiomatic analysis.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.04934
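The contrast between the long-run average criterion of paper 4 and the discounted utilitarian rule is easy to see numerically. Below is a minimal sketch; the two utility streams and the discount factor are hypothetical illustrations, not from the paper. The discounted rule favors a stream that front-loads utility, while the long-run average favors the stream that is better for almost all future generations.

```python
def long_run_average(stream, horizon):
    """Cesaro mean of the first `horizon` terms of a utility stream.
    The impartial criterion ranks streams by the limit of this
    average as the horizon grows (when that limit exists)."""
    return sum(stream(t) for t in range(horizon)) / horizon

def discounted_sum(stream, beta, horizon):
    """Discounted utilitarian value, which weights early generations more."""
    return sum(beta**t * stream(t) for t in range(horizon))

# Two hypothetical streams: `early` front-loads utility, `late` back-loads it.
early = lambda t: 1.0 if t < 50 else 0.5
late = lambda t: 0.5 if t < 50 else 1.0

# The discounted rule prefers `early`; the long-run average prefers `late`.
```

With these streams, the disagreement between the two criteria appears at any sufficiently long horizon, which is the kind of divergence the axiomatic analysis is about.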
  5. By: Kensei Nakamura
    Abstract: This paper examines normatively acceptable criteria for evaluating social states when individuals are responsible for their skills or productivity and these factors should be accounted for. We consider social choice rules over sets of feasible utility vectors à la Nash's (1950) bargaining problem. First, we identify necessary and sufficient conditions for choice rules to be rationalized by welfare orderings or functions over ability-normalized utility vectors. These general results provide a basis for exploring novel choice rules that use this normalization and for giving them axiomatic foundations. By adding natural axioms, we propose and axiomatize a new class of choice rules, which can be viewed as combinations of three key principles: distribution according to individuals' abilities, utilitarianism, and egalitarianism. Furthermore, we show that at the axiomatic level, this class of choice rules is closely related to the classical bargaining solution introduced by Kalai and Smorodinsky (1975).
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.04989
  6. By: Keisuke Bando; Kenzo Imamura; Yasushi Kawase
    Abstract: Choice correspondences are crucial in decision-making, especially when faced with indifferences or ties. While tie-breaking can transform a choice correspondence into a choice function, it often introduces inefficiencies. This paper introduces a novel notion of path independence (PI) for choice correspondences, extending the existing concept of PI for choice functions. Intuitively, a choice correspondence is PI if any consistent tie-breaking produces a PI choice function. This new notion yields several important properties. First, PI choice correspondences are rationalizable, meaning they can be represented as the maximization of a utility function. This extends a core feature of PI for choice functions. Second, we demonstrate that the set of choices selected by a PI choice correspondence from any subset forms a generalized matroid, revealing that PI choice correspondences have a well-behaved combinatorial structure. Third, we establish that choice correspondences rationalized by ordinally concave functions inherently satisfy the PI condition. This aligns with recent findings that a choice function satisfies PI if and only if it can be rationalized by an ordinally concave function. Building on these theoretical foundations, we explore stable and efficient matchings under PI choice correspondences. Specifically, we investigate constrained efficient matchings, which are efficient (for one side of the market) within the set of stable matchings. Under responsive choice correspondences, such matchings are characterized by cycles. However, this cycle-based characterization fails in more general settings. We demonstrate that when the choice correspondence of each school satisfies both PI and a monotonicity condition, a similar cycle-based characterization is restored. These findings provide new insights into matching theory and its practical applications.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.09265
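The path-independence condition for choice functions that paper 6 generalizes is Plott's: C(A ∪ B) = C(C(A) ∪ B) for all menus A and B. On a small universe the definition can be checked by brute force. A minimal sketch; the utility function and the two choice rules below are illustrative examples, not from the paper.

```python
from itertools import combinations

def nonempty_subsets(xs):
    """All nonempty subsets of a finite universe."""
    return [set(s) for r in range(1, len(xs) + 1) for s in combinations(xs, r)]

def is_path_independent(universe, choice):
    """Brute-force check of Plott path independence:
    choice(A | B) == choice(choice(A) | B) for all nonempty menus A, B."""
    for a in nonempty_subsets(universe):
        for b in nonempty_subsets(universe):
            if choice(a | b) != choice(choice(a) | b):
                return False
    return True

# A utility-maximizing choice function is path independent ...
util = {'x': 3, 'y': 2, 'z': 1}
best = lambda s: {max(s, key=util.get)}

# ... while a rule that selects everything from the full menu but
# maximizes on smaller menus violates path independence.
quirky = lambda s: set(s) if len(s) == 3 else {max(s, key=util.get)}
```

For instance, `quirky` fails because C({x, y, z}) is the whole menu, while C(C({x, y}) ∪ {z}) shrinks to a singleton.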
  7. By: Shuang Wei
    Abstract: This paper develops a rigorous mathematical framework for egalitarian ethics by integrating formal tools from economics and mathematics. We motivate the formalism by examining the limitations of conventional informal approaches through constructed examples, such as a probabilistic variant of the trolley dilemma and comparisons of unequal distributions. Our formal model, based on canonical welfare economics, simultaneously accounts for total utility and the distribution of outcomes. The analysis reveals deficiencies in traditional statistical measures and establishes impossibility theorems for rank-weighted approaches. We derive representation theorems that axiomatize key inequality measures, including the Gini coefficient and a generalized Atkinson index, providing a coherent, axiomatic foundation for normative philosophy.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.00039
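The Gini coefficient that paper 7 axiomatizes has a standard closed form as half the relative mean absolute difference: G = sum_{i, j} |x_i - x_j| / (2 n^2 mean). A minimal sketch of that textbook formula (not the paper's representation theorem):

```python
def gini(xs):
    """Gini coefficient via the mean-absolute-difference formula:
    G = sum over all ordered pairs |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(xs)
    mean = sum(xs) / n
    mad = sum(abs(a - b) for a in xs for b in xs)
    return mad / (2 * n * n * mean)
```

Perfect equality gives G = 0, and concentrating everything on one of n individuals gives G = (n - 1) / n, the maximum for a population of that size.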
  8. By: Eden Hartman; Erel Segal-Halevi; Biaoshuai Tao
    Abstract: The classic notion of truthfulness requires that no agent has a profitable manipulation -- an untruthful report that, for some combination of reports of the other agents, increases her utility. This strong notion implicitly assumes that the manipulating agent either knows what all other agents are going to report, or is willing to take the risk and act as-if she knows their reports. Without knowledge of the others' reports, most manipulations are risky -- they might decrease the manipulator's utility for some other combinations of reports by the other agents. Accordingly, a recent paper (Bu, Song and Tao, ``On the existence of truthful fair cake cutting mechanisms'', Artificial Intelligence 319 (2023), 103904) suggests a relaxed notion, which we refer to as risk-avoiding truthfulness (RAT), which requires only that no agent can gain from a safe manipulation -- one that is sometimes beneficial and never harmful. Truthfulness and RAT are two extremes: the former considers manipulators with complete knowledge of others, whereas the latter considers manipulators with no knowledge at all. In reality, agents often know about some -- but not all -- of the other agents. This paper introduces the RAT-degree of a mechanism, defined as the smallest number of agents whose reports, if known, may allow another agent to safely manipulate, or $n$ if there is no such number. This notion interpolates between classic truthfulness (degree $n$) and RAT (degree at least $1$): a mechanism with a higher RAT-degree is harder to manipulate safely. To illustrate the generality and applicability of this concept, we analyze the RAT-degree of prominent mechanisms across various social choice settings, including auctions, indivisible goods allocations, cake-cutting, voting, and stable matchings.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.18805
  9. By: Yan Dai; Moise Blanchard; Patrick Jaillet
    Abstract: We study a repeated resource allocation problem with strategic agents where monetary transfers are disallowed and the central planner has no prior information on agents' utility distributions. In light of Arrow's impossibility theorem, acquiring information about agent preferences through some form of feedback is necessary. We assume that the central planner can request powerful but expensive audits on the winner in any round, revealing the true utility of the winner in that round. We design a mechanism achieving $T$-independent $O(K^2)$ regret in social welfare while requesting $O(K^3 \log T)$ audits in expectation, where $K$ is the number of agents and $T$ is the number of rounds. We also show an $\Omega(K)$ lower bound on the regret and an $\Omega(1)$ lower bound on the number of audits when having low regret. Algorithmically, we show that incentive-compatibility can be mostly enforced with an accurate estimation of the winning probability of each agent under truthful reporting. To do so, we impose future punishments and introduce a *flagging* component, allowing agents to flag any biased estimate (we show that doing so aligns with individual incentives). On the technical side, without monetary transfers and distributional information, the central planner cannot ensure that truthful reporting is exactly an equilibrium. Instead, we characterize the equilibrium via a reduction to a simpler *auxiliary game*, in which agents cannot strategize until late in the $T$ rounds of the allocation problem. The tools developed therein may be of independent interest for other mechanism design problems in which the revelation principle cannot be readily applied.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.08412
  10. By: Kensei Nakamura
    Abstract: In Nash's (1950) seminal result, independence of irrelevant alternatives (IIA) plays a central role, but it has long been a subject of criticism in axiomatic bargaining theory. This paper examines the implication of a weak version of IIA in multi-valued bargaining solutions defined on non-convex bargaining problems. We show that if a solution satisfies weak IIA together with standard axioms, it can be represented, like the Nash solution, using weighted products of normalized utility levels. In this representation, the weight assigned to players for evaluating each agreement is determined endogenously through a two-stage optimization process. These solutions bridge the two dominant solution concepts, the Nash solution and the Kalai-Smorodinsky solution (Kalai and Smorodinsky, 1975). Furthermore, we consider special cases of these solutions in the context of bargaining over linear production technologies.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.06157
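The two solution concepts that paper 10 bridges are easy to compare on a finite (hence non-convex) feasible set. A minimal sketch with the disagreement point normalized to the origin; the feasible set below is hypothetical. The Nash solution maximizes the product of utilities, while the Kalai-Smorodinsky solution maximizes the minimum of utilities normalized by each player's ideal payoff.

```python
def nash_solution(points):
    """Nash bargaining on a finite feasible set: all points maximizing
    the product of utility gains (disagreement point at the origin)."""
    best = max(u * v for u, v in points)
    return [p for p in points if p[0] * p[1] == best]

def kalai_smorodinsky(points):
    """Kalai-Smorodinsky: maximize the minimum of utilities normalized
    by each player's ideal (maximum attainable) payoff."""
    a1 = max(u for u, _ in points)
    a2 = max(v for _, v in points)
    best = max(min(u / a1, v / a2) for u, v in points)
    return [p for p in points if min(p[0] / a1, p[1] / a2) == best]

# A hypothetical non-convex problem on which the two solutions disagree:
feasible = [(6, 1), (3, 2), (2, 5)]
```

On `feasible`, the Nash product selects (2, 5), while the normalized maximin of Kalai-Smorodinsky selects (3, 2), illustrating why a single family interpolating between the two is of interest.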
  11. By: Tractenberg, Rochelle E. (Georgetown University)
    Abstract: Degrees of Freedom Analysis (DoFA) is a method originally published in 1975 that combines qualitative analysis, to summarize narrative data, with quantification, to summarize how well the qualitative results align with or support a theory being built. This paper discusses recent adaptations of the method to facilitate decision making and prediction when theory building is not the investigator's focus. Eleven applications of the method across a variety of disciplines and materials are discussed. These examples highlight the flexibility and utility of DoFA in rigorous and reproducible analyses involving qualitative materials that would otherwise be challenging to analyze and summarize.
    Date: 2023–05–06
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:r5a7z_v1
  12. By: Joshua S. Gans
    Abstract: This paper examines the relationship between Knightian uncertainty and Bayesian approaches to entrepreneurship. Using Bewley's formal model of uncertainty and incomplete preferences, it demonstrates that key predictions from Bayesian entrepreneurship remain robust when accounting for Knightian uncertainty, particularly regarding experimentation, venture financing, and strategic choice. The analysis shows that while Knightian uncertainty creates a more challenging decision environment, it maintains consistency with the three pillars of Bayesian entrepreneurship: heterogeneous beliefs, stronger entrepreneurial priors, and Bayesian updating. The paper also explores connections to effectuation theory, finding that formal uncertainty models can bridge different entrepreneurial methodologies.
    JEL: D81 O30
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:33507
  13. By: Shitong Wang
    Abstract: This paper revisits the limitations of the Median Voter Theorem and introduces a novel framework to analyze the optimal economic ideological positions of political parties. By incorporating Nash equilibrium, we examine the mechanisms and elasticity of ideal deviation costs, voter distribution, and policy feasibility. Our findings show that an increase in a party's ideal deviation cost shifts its optimal ideological position closer to its ideal point. Additionally, if a voter distribution can be expressed as a positive linear combination of two other distributions, its equilibrium point must lie within the interval defined by the equilibrium points of the latter two. We also find that decreasing feasibility costs incentivize governments, regardless of political orientation, to increase fiscal expenditures (e.g., welfare) and reduce fiscal revenues (e.g., taxes). This dynamic highlights the fiscal pressures commonly faced by democratic nations under globalization. Moreover, we demonstrate that even with uncertain voter distributions, parties can identify optimal ideological positions to maximize their utility. Lastly, we explain why the proposed framework cannot be applied to community ideologies due to their fundamentally different nature. This study provides new theoretical insights into political strategies and establishes a foundation for future empirical research.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2502.06562
  14. By: Jiduan Wu; Rediet Abebe; Moritz Hardt; Ana-Andreea Stoica
    Abstract: Improving social welfare is a complex challenge requiring policymakers to optimize objectives across multiple time horizons. Evaluating the impact of such policies presents a fundamental challenge, as those that appear suboptimal in the short run may yield significant long-term benefits. We tackle this challenge by analyzing the long-term dynamics of two prominent policy frameworks: Rawlsian policies, which prioritize those with the greatest need, and utilitarian policies, which maximize immediate welfare gains. Conventional wisdom suggests these policies are at odds, as Rawlsian policies are assumed to come at the cost of reducing the average social welfare, which their utilitarian counterparts directly optimize. We challenge this assumption by analyzing these policies in a sequential decision-making framework where individuals' welfare levels stochastically decay over time, and policymakers can intervene to prevent this decay. Under reasonable assumptions, we prove that interventions following Rawlsian policies can outperform utilitarian policies in the long run, even when the latter dominate in the short run. We characterize the exact conditions under which Rawlsian policies can outperform utilitarian policies. We further illustrate our theoretical findings using simulations, which highlight the risks of evaluating policies based solely on their short-term effects. Our results underscore the necessity of considering long-term horizons in designing and evaluating welfare policies; the true efficacy of even well-established policies may only emerge over time.
    Date: 2025–03
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2503.00632
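The sequential setting of paper 14 can be sketched in a few lines: welfare levels decay stochastically each round, and a planner with a fixed per-round budget chooses whom to treat. Everything below is an illustrative toy (the parameters, dynamics, and policies are assumptions, not the paper's model), and the sketch makes no claim about which policy wins; the paper's point is precisely that the ranking depends on the long-run dynamics.

```python
import random

def simulate(policy, n=100, rounds=2000, budget=5,
             decay_p=0.1, decay=1.0, boost=2.0, seed=0):
    """Toy welfare dynamics: each round, every individual's welfare
    drops by `decay` with probability `decay_p` (floored at 0), then
    the planner boosts the `budget` individuals chosen by `policy`.
    Returns the final average welfare."""
    rng = random.Random(seed)
    w = [10.0] * n
    for _ in range(rounds):
        for i in range(n):
            if rng.random() < decay_p:
                w[i] = max(0.0, w[i] - decay)
        for i in policy(w, budget):
            w[i] += boost
    return sum(w) / n

def rawlsian(w, k):
    """Treat the k worst-off individuals."""
    return sorted(range(len(w)), key=lambda i: w[i])[:k]

def utilitarian_myopic(w, k):
    """A myopic stand-in for maximizing immediate gains: treat the k
    best-off (in this toy model per-treatment gains are uniform)."""
    return sorted(range(len(w)), key=lambda i: -w[i])[:k]
```

Running both policies on the same seed and comparing averages across horizons reproduces, in miniature, the kind of short-run versus long-run comparison the paper's simulations formalize.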

This nep-upt issue is ©2025 by Alexander Harin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.