nep-upt New Economics Papers
on Utility Models and Prospect Theory
Issue of 2026–02–16
seventeen papers chosen by
Alexander Harin


  1. History-dependent Preferences: An Axiomatic Perspective By Dolgopolov, Artur; Karos, Dominik; Lehrer, Ehud
  2. Prospect Theory as Active Inference: A Metabolic Account of Risk-Sensitive Decision Making By Palumbo, Riccardo; Bortolotti, Alessandro; Sacco, Pier Luigi
  3. A Simple Nested Test of AIDS By McLaren, Keith R.; Fry, Jane M.; Fry, Tim R. L.
  4. Exact Value Solution to the Equity Premium Puzzle By Atilla Aras
  5. A System of Demand Equations Satisfying Effectively Global Curvature Conditions By Cooper, Russel J.; McLaren, Keith R.; Parameswaran, Priya
  6. A System of Demand Equations Satisfying Effectively Global Regularity Conditions By Cooper, Russel J.; McLaren, Keith R.
  7. Health Dynamics and Annuitization Decisions: The Case of Social Security By Diego Ascarza-Mendoza; Alex Carrasco
  8. Wealth Preferences and Wealth Inequality: Experimental Evidence By Nobuyuki Hanaki; Yuta Shimodaira
  9. Chasing Tails: How Do People Respond to Wait Time Distributions? By Evgeny Kagan; Kyle Hyndman; Andrew Davis
  10. Endogenous Inequality Aversion: Decision criteria for triage and other ethical tradeoffs By Federico Echenique; Teddy Mekonnen; M. Bumin Yenmez
  11. Modelling the Probability of Youth Unemployment in Australia: 1985-1988 By Harris, Mark N.
  12. Calibrating Behavioral Parameters with Large Language Models By Brandon Yee; Krishna Sharma
  13. Information Design and Mechanism Design: An Integrated Framework By Dirk Bergemann; Tibor Heumann; Stephen Morris
  14. Accessibility of Pareto Optima By Bernard Cornet
  15. Consistency in collective decision-making under uncertainty: an axiomatic approach By Stéphane Gonzalez; Le-Nhat-Linh Huynh
  16. Smart Lotteries in School Choice: Ex-ante Pareto-Improvement with Ex-post Stability By Haris Aziz; Péter Biró; Gergely Csáji; Tom Demeulemeester
  17. Time-Inhomogeneous Volatility Aversion for Financial Applications of Reinforcement Learning By Federico Cacciamani; Roberto Daluiso; Marco Pinciroli; Michele Trapletti; Edoardo Vittori

  1. By: Dolgopolov, Artur (Center for Mathematical Economics, Bielefeld University); Karos, Dominik (Center for Mathematical Economics, Bielefeld University); Lehrer, Ehud (Center for Mathematical Economics, Bielefeld University)
    Abstract: This paper develops an axiomatic framework for decision making when preferences depend not only on the current alternative but also on the past frequency with which alternatives have been chosen. We identify key independence axioms that characterize frequency-dependent preferences. In addition, we derive representation results for preference structures that separate the intrinsic utility of an alternative from the effect associated with its consumption frequency. The framework provides a foundation for modeling variety-seeking behavior, while remaining closely connected to classical utility theory and extending it to encompass history-sensitive preferences.
    Keywords: History-dependent Preferences, Repeated decision problem, time-inconsistent preferences, habit formation
    Date: 2026–01–28
    URL: https://d.repec.org/n?u=RePEc:bie:wpaper:762
  2. By: Palumbo, Riccardo; Bortolotti, Alessandro; Sacco, Pier Luigi
    Abstract: Prospect theory's characteristic patterns (loss aversion, reference dependence, and nonlinear probability weighting) have generally been interpreted as cognitive biases, i.e. as evidence of bounded rationality. This paper proposes a conceptual framework for understanding these phenomena through the lens of active inference and the free energy principle. We argue that prospect theory's central features are consistent with computationally efficient solutions to decision-making under uncertainty within the thermodynamic constraints of neural computation. Loss aversion implements adaptive precision-weighting of prediction errors, allocating greater computational resources to negative deviations that threaten survival. Reference dependence implements efficient predictive coding, transmitting only surprising deviations from expectations. Probability weighting reflects optimal precision allocation across the probability range when maintaining full Bayesian representations would exceed metabolic budgets. This framework is supported by converging evidence: neuroimaging studies show unified value coding with asymmetric precision for losses; pharmacological manipulations reveal dissociable neurotransmitter systems for value encoding versus loss sensitivity; and metabolic manipulations including hypoxia, glucose depletion, and circadian mismatch modulate prospect theory parameters in predicted directions. Developmental evidence shows that children display probability weighting patterns opposite to those of adults, with gradual transformation through experience pointing to calibration rather than genetic determination. We propose that prospect theory patterns reflect how biological systems navigate uncertainty under fundamental energetic constraints, with implications for understanding decision-making architecture and for reconceptualizing rationality.
    Date: 2026–01–30
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:2d9v5_v1
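The probability weighting discussed in this abstract is most often written in the one-parameter inverse-S form of Tversky and Kahneman (1992). A minimal Python sketch of that standard form (the functional form and the gamma = 0.61 benchmark come from the general prospect theory literature, not from parameters estimated in this paper):

```python
# Standard one-parameter probability weighting function (Tversky & Kahneman 1992).
# Illustrative only: the paper above reinterprets such curves metabolically;
# it does not define this exact functional form.
def weight(p: float, gamma: float = 0.61) -> float:
    """Inverse-S weighting: overweights small p, underweights moderate-to-large p."""
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.01, 0.5, 0.99):
    print(f"w({p}) = {weight(p):.3f}")
```

With gamma below 1 the curve overweights small probabilities and underweights moderate and large ones, the adult pattern that the abstract contrasts with children's.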
  3. By: McLaren, Keith R.; Fry, Jane M.; Fry, Tim R. L.
    Abstract: The MPIGLOG specification of an indirect utility function gives rise to Cooper and McLaren's (1992) Modified AIDS (MAIDS) specification, which nests AIDS. Following the 'combined' approach outlined by Fry, Fry and McLaren (1993), we transform our deterministic equations to logratio form for estimation. This procedure not only restricts the shares implied by the model to the unit simplex, but also provides a transparent representation of the restriction implied by AIDS. We estimate MAIDS (with and without the AIDS restriction imposed) using the 'combined' approach and proceed to test the AIDS restriction.
    Keywords: Research Methods/Statistical Methods
    URL: https://d.repec.org/n?u=RePEc:ags:monebs:267417
  4. By: Atilla Aras
    Abstract: The aim of this article is to provide a solution to the equity premium puzzle without using calibrated values. Prior derived models used calibrated values of the subjective time discount factor because four unknown variables had to be determined from only three equations, and the resulting calculated values and risk-behavior determinations were incompatible with the empirical literature. In the new model derived in this article, four unknown variables are calculated from four equations. The subjective time discount factor and the coefficient of relative risk aversion are found to be 0.9581 and 1.0319, respectively, values that are compatible with empirical studies; micro and macro estimates of the CRRA value thus affirm each other for the first time in the literature. Furthermore, equity and risk-free asset investors are pinned down as insufficiently risk-loving, which can be considered a type of risk-averse behavior. Hence the calculated values and risk-attitude determination align with the empirical literature, showing that the derived model is valid and makes the CCAPM work under the same assumptions as prior derived models.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.11687
  5. By: Cooper, Russel J.; McLaren, Keith R.; Parameswaran, Priya
    Abstract: The MPIGLOG specification of an indirect utility function leads to a parametric representation in terms of expenditure and two unit cost functions. Specification of these unit cost functions in terms of regular, flexible functions leads to the notion of an "effectively globally regular" system of demand equations. Three examples demonstrate the success of such a specification in achieving regularity and flexibility.
    Keywords: Demand and Price Analysis
    URL: https://d.repec.org/n?u=RePEc:ags:monebs:267295
  6. By: Cooper, Russel J.; McLaren, Keith R.
    Abstract: A parametric specification of an indirect utility function in terms of expenditure and two unit cost functions is proposed. Specification of these unit cost functions in terms of regular functions leads to the notion of an "effectively globally regular" system of demand equations; that is, a system of demand equations that is regular over a cone in expenditure-price space, and for which the regular region includes all points in any given sample, and all values of nominal expenditure and prices generating higher values of real expenditure than the sample minimum. This general model nests a number of popular demand systems, such as the Linear Expenditure System, as special cases. An empirical application demonstrates the value of the generalization.
    Keywords: Research Methods/Statistical Methods
    URL: https://d.repec.org/n?u=RePEc:ags:monebs:267397
  7. By: Diego Ascarza-Mendoza (School of Government and Public Transformation, Tecnológico de Monterrey); Alex Carrasco (Massachusetts Institute of Technology)
    Abstract: Why do two out of three Americans claim Social Security benefits before reaching their Full Retirement Age? Why do even fairly wealthy people so often claim early? This paper resolves this puzzle by extending a standard incomplete-markets life-cycle model to incorporate health dynamics and bequest motives. Relative to the existing literature, health plays a broader role, affecting not only medical expenses and mortality but also the marginal utility of consumption directly. This role of health is disciplined using microdata on consumption, assets, income, and health from the Health and Retirement Study (HRS) and the Consumption and Activities Mail Survey (CAMS). The calibrated model successfully replicates the fraction of early claimers. Counterfactual exercises show that health-dependent preferences and bequest motives are crucial for this result. The model's success is explained by a novel channel arising from the interaction between the negative effect of worsening health on the marginal utility of consumption, the downward health trend due to aging, and bequest motives. These elements reduce the gains from delaying by (1) making individuals more impatient and (2) increasing the strength of bequest motives relative to future consumption. The results suggest that governments aiming to insure against longevity must consider the complementary interaction between individual incentives to insure against longevity risk and health risks.
    Keywords: Health, Marginal Utility, Frailty Index, Social Security, Annuities
    JEL: F13 D72 D83 C91
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:gnt:wpaper:22
  8. By: Nobuyuki Hanaki; Yuta Shimodaira
    Abstract: Some researchers claim that a preference for wealth accumulation is the main cause of the long-run stagnation of the Japanese economy. A theoretical implication of people having such a preference, particularly the assumption that the marginal utility of wealth accumulation has a positive lower bound while that of consumption does not, is a widening of wealth inequality. We experimentally test this theoretical prediction by inducing a wealth preference in the laboratory. We find partial support for this prediction: wealth inequality widens when initial inequality is large, but not when it is small. This is because high-wealth participants tend to overconsume more than lower-wealth participants, partly offsetting the effect of the induced preference for wealth accumulation on the widening of wealth inequality. Activating participants’ status concerns by displaying their ranking in accumulated wealth has only a limited impact on the expansion of wealth inequality.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:dpr:wpaper:1260rr
  9. By: Evgeny Kagan; Kyle Hyndman; Andrew Davis
    Abstract: We use a series of pre-registered, incentive-compatible online experiments to investigate how people evaluate and choose among different waiting-time distributions. Our main findings are threefold. First, consistent with prior literature, people show an aversion to both longer expected waits and higher variance. Second, and more surprisingly, moment-based utility models fail to capture preferences when distributions have thick right tails: decision-makers strongly prefer distributions with long right tails (where probability mass is spread more evenly over a larger support) over tails that exhibit a spike near the maximum possible value, even when controlling for mean, variance, and higher moments. Conditional Value at Risk (CVaR) utility models, commonly used in portfolio theory, predict these choices well. Third, when given a choice, decision-makers overwhelmingly seek information about right-tail outcomes. These results have practical implications for service operations: (1) service designs that create a spike in long waiting times (such as priority or dedicated-queue designs) may be particularly aversive; (2) when informativeness is the goal, providers should prioritize sharing right-tail probabilities or percentiles; and (3) to increase service uptake, providers can strategically disclose (or withhold) distributional information depending on right-tail shape.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.06263
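The CVaR criterion that the authors report fitting choices well can be illustrated on empirical wait-time samples as the mean of the worst tail. A minimal sketch (the two example distributions and the alpha level are hypothetical, not the paper's stimuli):

```python
# Empirical Conditional Value at Risk (CVaR) for waiting times:
# the mean of the worst (1 - alpha) fraction of outcomes.
# Illustrative sketch only; the paper's exact estimator is not reproduced here.
def cvar(samples: list, alpha: float = 0.9) -> float:
    """Average of the k longest waits, where k covers the worst (1 - alpha) tail."""
    ordered = sorted(samples)                # ascending: longest waits last
    k = max(1, int(round((1 - alpha) * len(ordered))))
    return sum(ordered[-k:]) / k

# Two hypothetical distributions with equal mean (5.0) but different right tails:
spike = [4.0] * 9 + [14.0]                   # mass spiked near the maximum wait
spread = [2, 3, 4, 5, 5, 5, 5, 6, 7, 8]      # mass spread evenly over the support
print(cvar(spike), cvar(spread))
```

Both samples have mean 5, but the spiked tail yields a much worse CVaR, consistent with the reported aversion to spikes near the maximum possible wait.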
  10. By: Federico Echenique; Teddy Mekonnen; M. Bumin Yenmez
    Abstract: Medical "Crisis Standards of Care" call for a utilitarian allocation of scarce resources in emergencies, while favoring the worst-off under normal conditions. Inspired by such triage rules, we introduce social welfare functions whose distributive tradeoffs depend on the prevailing level of aggregate welfare. These functions are inherently self-referential: they take the welfare level as an input, even though that level is itself determined by the function. In our formulation, inequality aversion varies with welfare and is therefore self-referential. We provide an axiomatic foundation for a family of social welfare functions that move from Rawlsian to utilitarian criteria as overall welfare falls, thereby formalizing triage guidelines. We also derive the converse case, in which the social objective shifts from Rawlsianism toward utilitarianism as welfare increases.
    Date: 2026–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2601.22250
  11. By: Harris, Mark N.
    Abstract: This paper attempts to explain how particular personal characteristics affect the probability of unemployment of Australian youth. For this purpose the Australian Longitudinal Survey (1985-1988) is utilised and a Univariate Equicorrelated Random Effects Probit model applied to the data. The Survey appears to be affected by endogenous attrition, the source of which was found to be nationality and education levels. These processes were accounted for in the estimation procedures. This study appears to be the first attempt to analyse this particular data as a panel data set in a Random Utility Discrete Choice context. Results indicate that age, education and financial housing commitments exert a positive influence on the probability of unemployment. Also, there is evidence to suggest that the disabled are discriminated against, and that reservation wages exert a strong negative effect.
    Keywords: Labor and Human Capital, Public Economics, Research and Development/Tech Change/Emerging Technologies, Research Methods/Statistical Methods
    URL: https://d.repec.org/n?u=RePEc:ags:monebs:267628
  12. By: Brandon Yee; Krishna Sharma
    Abstract: Behavioral parameters such as loss aversion, herding, and extrapolation are central to asset pricing models but remain difficult to measure reliably. We develop a framework that treats large language models (LLMs) as calibrated measurement instruments for behavioral parameters. Using four models and 24,000 agent-scenario pairs, we document systematic rationality bias in baseline LLM behavior, including attenuated loss aversion, weak herding, and near-zero disposition effects relative to human benchmarks. Profile-based calibration induces large, stable, and theoretically coherent shifts in several parameters, with calibrated loss aversion, herding, extrapolation, and anchoring reaching or exceeding benchmark magnitudes. To assess external validity, we embed calibrated parameters in an agent-based asset pricing model, where calibrated extrapolation generates short-horizon momentum and long-horizon reversal patterns consistent with empirical evidence. Our results establish measurement ranges, calibration functions, and explicit boundaries for eight canonical behavioral biases.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.01022
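A canonical calibration target for a loss-aversion parameter is the prospect-theory value function. A minimal sketch (the parameter values are the standard human benchmarks from Tversky and Kahneman 1992, not this paper's LLM estimates):

```python
# Canonical prospect-theory value function: concave over gains, convex and
# steeper (by the loss-aversion factor lam) over losses.
# Benchmark parameters lam = 2.25, alpha = 0.88 (Tversky & Kahneman 1992);
# not the calibrated LLM values from the paper above.
def pt_value(x: float, lam: float = 2.25, alpha: float = 0.88) -> float:
    """Subjective value of a monetary outcome x relative to the reference point."""
    return x**alpha if x >= 0 else -lam * (-x) ** alpha

# Losses loom larger than equal-sized gains:
print(pt_value(100.0), pt_value(-100.0))
```

The "attenuated loss aversion" the abstract documents would correspond to baseline LLM behavior implying a lam much closer to 1 than the human benchmark.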
  13. By: Dirk Bergemann (Yale University); Tibor Heumann (Instituto de Economía, Pontificia Universidad Católica de Chile); Stephen Morris (Massachusetts Institute of Technology)
    Abstract: We develop an integrated framework for information design and mechanism design in screening environments with quasilinear utility. Using the tools of majorization theory and quantile functions, we show that both information design and mechanism design problems reduce to maximizing linear functionals subject to majorization constraints. For mechanism design, the designer chooses allocations weakly majorized by the exogenous inventory. For information design, the designer chooses information structures that are majorized by the prior distribution. When the designer can choose both the mechanism and the information structure simultaneously, then the joint optimization problem becomes bilinear with two majorization constraints. We show that pooling of values and associated allocations is always optimal in this case. Our approach unifies classic results in auction theory and screening, extends them to information design settings, and provides new insights into the welfare effects of jointly optimizing allocation and information.
    Date: 2026–01–23
    URL: https://d.repec.org/n?u=RePEc:cwl:cwldpp:2494
  14. By: Bernard Cornet (Department of Economics, University of Kansas, Lawrence, KS 66045, USA)
    Abstract: Non-tatonnement processes and planning procedures have been defined in various economic contexts as dynamic processes for reaching efficient allocations, with or without price adjustment, satisfying the property that, along the process, the utility of every agent is non-decreasing and transactions can occur. This makes a clear distinction with tatonnement processes, whose goal is to reach competitive equilibria, with transactions occurring only at equilibrium. In this paper, we provide sufficient conditions guaranteeing that every Pareto optimum that every agent weakly prefers to some given initial situation is accessible by a monotone efficient dynamic process. The framework considered is general enough to encompass the accessibility of Pareto optima by a non-tatonnement barter process in an exchange economy, the neutrality of the MDP procedure in an economy with public goods, and other types of planning procedures.
    Keywords: Non-tatonnement process, Barter and exchange process, Planning procedure, Pareto optima.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:kan:wpaper:202605
  15. By: Stéphane Gonzalez (Université Jean Monnet Saint-Etienne, CNRS, Université Lumière Lyon 2, emlyon business school, GATE, 42023 Lyon, France); Le-Nhat-Linh Huynh (Université Jean Monnet Saint-Etienne, CNRS, Université Lumière Lyon 2, emlyon business school, GATE, 42023 Lyon, France)
    Abstract: We study collective decision-making when individual preferences depend on the state of the world. The paper introduces an axiom, Aggregation Consistency, linking the way society aggregates utilities across individuals with the way each individual aggregates outcomes across states. The axiom requires that any alternative preferred in every realized state remains preferred before uncertainty is resolved. Combined with standard social choice and aggregation principles, it implies that the same functional form must govern both interpersonal and intrapersonal aggregation. Under familiar conditions, this yields two canonical families of solutions: generalized utilitarian rules based on quasi-arithmetic means, and Rawlsian rules based on minimum or maximum operators. The analysis unifies utilitarian and egalitarian criteria within a single axiomatic framework for collective choice under uncertainty.
    Date: 2026
    URL: https://d.repec.org/n?u=RePEc:gat:wpaper:2604
  16. By: Haris Aziz; Péter Biró; Gergely Csáji; Tom Demeulemeester
    Abstract: In a typical school choice application, the students have strict preferences over the schools, while the schools have coarse priorities over the students based on distance and enrolled siblings. The outcome of a centralized admission mechanism is then usually obtained by the Deferred Acceptance (DA) algorithm with random tie-breaking. Therefore, every possible outcome of this mechanism is a stable solution for the coarse priorities that arises with a certain probability. This implies a probabilistic assignment, in which the admission probability for each student-school pair is specified. In this paper, we propose a new efficiency-improving stable "smart lottery" mechanism. We aim to improve the probabilistic assignment ex-ante in a stochastic-dominance sense, while ensuring that the improved random matching is still ex-post stable, meaning that it can be decomposed into stable matchings with respect to the original coarse priorities. Therefore, this smart lottery mechanism can provide a clear Pareto-improvement in expectation for any cardinal utilities compared to the standard DA-with-lottery solution, without sacrificing the stability of the final outcome. We show that although the underlying computational problem is NP-hard, we can solve it using advanced optimization techniques such as integer programming with column generation. We conduct computational experiments on generated and real instances. Our results show that the welfare gains from our mechanism are substantially larger than the expected gains from standard methods that realize efficiency improvements after ties have already been broken.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.10679
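The baseline mechanism this paper improves on, student-proposing Deferred Acceptance with a uniform lottery breaking ties in coarse priorities, can be sketched as follows (the instance, names, and priority classes are made up for illustration; this is not the authors' smart-lottery mechanism):

```python
import random

# Minimal student-proposing Deferred Acceptance with random tie-breaking.
# Priorities are coarse (one integer class per student, lower is better);
# ties within a class are broken by a uniform lottery. Illustrative sketch only.
def da_with_lottery(prefs, priority_class, capacity, seed=0):
    rng = random.Random(seed)
    lottery = {s: rng.random() for s in prefs}        # uniform tie-breaker
    rank = lambda s: (priority_class[s], lottery[s])  # lower tuple = higher priority
    next_choice = {s: 0 for s in prefs}               # next school to propose to
    held = {c: [] for c in capacity}                  # tentatively admitted students
    free = list(prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                                  # s has exhausted all schools
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=rank)
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())                # reject the worst-ranked admit
    return {s: c for c, admits in held.items() for s in admits}

# Hypothetical instance: "ann" has top priority; "bob" and "cal" are tied.
prefs = {"ann": ["east", "west"], "bob": ["east", "west"], "cal": ["east", "west"]}
priority_class = {"ann": 0, "bob": 1, "cal": 1}
match = da_with_lottery(prefs, priority_class, {"east": 1, "west": 2})
print(match)
```

In larger instances with binding capacities, different lottery draws yield different stable matchings; the paper's mechanism reshapes the induced distribution over these matchings while keeping every realized outcome ex-post stable.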
  17. By: Federico Cacciamani; Roberto Daluiso; Marco Pinciroli; Michele Trapletti; Edoardo Vittori
    Abstract: Sequential decision problems arise frequently in finance, and reinforcement learning (RL) emerges as a promising optimisation tool that does not require analytical tractability. However, the objective of classical RL is the expected cumulative reward, while financial applications typically require a trade-off between return and risk. In this work, we focus on settings where one cares about the time split of the total return, ruling out most risk-aware generalisations of RL, which optimise a risk measure defined on the latter. We note that a preference for homogeneous splits, which we found satisfactory for hedging, can be unfit for other problems, and we therefore propose a new risk metric that still penalises the uncertainty of the single rewards but allows for an arbitrary planning of their target levels. We study the properties of the resulting objective and the generalisation of learning algorithms to optimise it. Finally, we show numerical results on toy examples.
    Date: 2026–02
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2602.12030

This nep-upt issue is ©2026 by Alexander Harin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.