nep-upt New Economics Papers
on Utility Models and Prospect Theory
Issue of 2017‒01‒22
ten papers chosen by



  1. Hammond’s Equity Principle and the Measurement of Ordinal Inequalities By Nicolas Gravel; Brice Magdalou; Patrick Moyes
  2. Can forbidden zones for the expectation explain noise influence in behavioral economics and decision sciences? By Harin, Alexander
  3. Property rights and loss aversion in contests By Subhasish M. Chowdhury; Joo Young Jeon; Abhijit Ramalingam
  4. Fair Utilitarianism By Marc Fleurbaey; Stéphane Zuber
  5. The newsvendor problem with convex risk By Balbás, Alejandro; Charron, Jean Philippe
  6. Spaces for agreement: a theory of time-stochastic dominance and an application to climate change By Simon Dietz; Nicoleta Anca Matei
  7. Hyperbolic discounting can be good for your health By Strulik, Holger; Trimborn, Timo
  8. Forecasting the equity risk premium with frequency-decomposed predictors By Faria, Gonçalo; Verona, Fabio
  9. Interpolating between matching and hedonic pricing models By Brendan Pass
  10. Dynamic game under ambiguity: the sequential bargaining example, and a new "coase conjecture" By Besanko, David; Tong, Jian; Wu, Jianjun

  1. By: Nicolas Gravel; Brice Magdalou; Patrick Moyes
    Abstract: What would be the analogue of the Lorenz quasi-ordering when the variable of interest is of a purely ordinal nature? We argue that it is possible to derive such a criterion by substituting for the Pigou-Dalton transfer used in the standard inequality literature what we refer to as a Hammond progressive transfer. According to this criterion, one distribution of utilities is considered to be less unequal than another if it is judged better by both the lexicographic extensions of the maximin and the minimax, henceforth referred to as the leximin and the antileximax, respectively. If one imposes in addition that an increase in someone’s utility makes the society better off, then one is left with the leximin, while the requirement that social welfare increases as a result of a decrease in one person’s utility gives the antileximax criterion. Incidentally, the paper provides an alternative and simple characterisation of the leximin principle widely used in the social choice and welfare literature.
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:lam:wpaper:17-03&r=upt
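The leximin and antileximax orderings invoked in this abstract can be made concrete with a small sketch (my own illustration, not the authors' formal characterisation): sort each utility distribution and compare lexicographically, ascending for the leximin and descending for the antileximax.

```python
def leximin_prefers(u, v):
    """True if distribution u is weakly preferred to v under leximin:
    sort utilities in ascending order and compare lexicographically,
    so the worst-off position is decisive first."""
    return sorted(u) >= sorted(v)

def antileximax_prefers(u, v):
    """True if u is weakly preferred to v under antileximax: sort in
    descending order and prefer the lexicographically smaller list,
    so the best-off position is decisive first."""
    return sorted(u, reverse=True) <= sorted(v, reverse=True)

# A Hammond progressive transfer (take from a richer person, give to a
# poorer one, without reversing their ranks) improves both criteria:
before = [1, 5]
after = [2, 4]
assert leximin_prefers(after, before)
assert antileximax_prefers(after, before)
```

Unlike a Pigou-Dalton transfer, a Hammond transfer need not preserve the total amount transferred, which is what makes the resulting criterion purely ordinal.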
  2. By: Harin, Alexander
    Abstract: The present article is devoted to discrete random variables that take a limited number of values in finite closed intervals. I prove that if non-zero lower bounds exist for the variances of the variables, then non-zero bounds, or forbidden zones, exist for their expectations near the boundaries of the intervals. This article is motivated by the need for rigorous theoretical support for the analysis of the influence of scattering and noise on data in behavioral economics and decision sciences.
    Keywords: probability; dispersion; variance; noise; economics; utility theory; prospect theory; behavioral economics; decision sciences;
    JEL: C02 C1 D8 D81 D84
    Date: 2017–01–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:76240&r=upt
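The flavour of this result can be checked numerically using the standard Bhatia-Davis-type bound, Var(X) <= (E[X] - a)(b - E[X]) for a variable supported on [a, b]. The sketch below, with an assumed variance bound of my own choosing, is an illustration rather than the paper's proof:

```python
def max_variance(mu, a=0.0, b=1.0):
    # Bhatia-Davis-type bound: any random variable on [a, b] with
    # mean mu has variance at most (mu - a) * (b - mu).
    return (mu - a) * (b - mu)

sigma2_min = 0.04  # assumed non-zero lower bound on the variance
# A mean is "forbidden" if no variable with that mean can reach the
# required variance, i.e. its maximal variance falls below sigma2_min.
grid = [k / 100 for k in range(101)]
forbidden = [mu for mu in grid if max_variance(mu) < sigma2_min]
low_zone = [mu for mu in forbidden if mu < 0.5]
high_zone = [mu for mu in forbidden if mu > 0.5]
# The forbidden means cluster near the boundaries 0 and 1.
```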
  3. By: Subhasish M. Chowdhury (University of East Anglia); Joo Young Jeon (University of East Anglia); Abhijit Ramalingam (University of East Anglia)
    Abstract: We analyze the effects of property rights and the resulting loss aversion on contest outcomes. We study three treatments: in ‘gain’ two players start with no prize and make sunk bids to win a prize; whereas in ‘loss’ both the subjects start with prizes and whoever loses the contest loses their prize. Finally, in ‘mixed’ one player starts with a prize, which stays with him if he wins but is transferred to the rival otherwise. Since the differences among the treatments arise only from framing, the expected utility or the standard loss aversion models predict no difference in bids across treatments. We introduce a new model with loss aversion in which the property rights are made salient. This model predicts average bids in descending order in the loss, the mixed, and the gain treatment; and higher bids by the player with property rights in the mixed treatment. The results from a laboratory experiment broadly support these predictions. In the laboratory, no significant difference is found in bids in the loss (gain) treatment versus bids by property rights holder (non-holder) in the mixed treatment. A model incorporating both loss aversion and social preference explains this result.
    Keywords: contest, experiment, framing, property rights, loss aversion
    JEL: C91 C72 D23 D74
    Date: 2016–09–23
    URL: http://d.repec.org/n?u=RePEc:uea:wcbess:16-14&r=upt
  4. By: Marc Fleurbaey (Woodrow Wilson School and Center for Human Values - Princeton University); Stéphane Zuber (Centre d'Economie de la Sorbonne - Paris School of Economics)
    Abstract: Utilitarianism is a prominent approach to social justice that has played a central role in economic theory. A key issue for utilitarianism is to define how utilities should be measured and compared. This paper draws on Harsanyi's approach (Harsanyi, 1955) to derive utilities from choices in risky situations. We introduce a new normalization of utilities that ensures that: 1) a transfer from a rich individual to a poor one is welfare enhancing, and 2) populations with more risk averse people have lower welfare. We propose normative principles that reflect these fairness requirements and characterize fair utilitarianism. We also study some implications of fair utilitarianism for risk sharing and collective risk aversion.
    Keywords: Fairness; utilitarianism; risk sharing; collective risk aversion
    JEL: D63 D81
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:17005&r=upt
  5. By: Balbás, Alejandro; Charron, Jean Philippe
    Abstract: The newsvendor problem is a classical topic in Management Science and Operations Research. It deals with purchasing and pricing strategies when at least one deadline is involved. In this paper we will assume that the decision is driven by an optimization problem involving both expected profits and risks. As a main novelty, risks will be given by a convex risk measure, including the usual utility functions. This approach will allow us to find necessary and sufficient optimality conditions under very general frameworks, since we will not need any specific assumption about the demand distribution.
    Keywords: Saddle point conditions; Utility function; Convex risk measure; Risk; News vendor problem
    JEL: M30 M21 M50
    Date: 2016–12–12
    URL: http://d.repec.org/n?u=RePEc:cte:idrepe:23950&r=upt
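For orientation, the classical risk-neutral newsvendor baseline, which the paper generalises by replacing the expectation-only objective with a convex risk measure, picks the order quantity at the critical fractile F^-1((p - c)/p). A sketch with assumed prices and an assumed discrete uniform demand:

```python
# Risk-neutral newsvendor baseline (illustrative only; not the
# paper's convex-risk formulation).
price, cost = 10.0, 4.0   # assumed selling price and unit cost
demands = range(101)      # assumed discrete uniform demand on {0, ..., 100}
prob = 1.0 / len(demands)

def expected_profit(q):
    # Each unit sold earns `price`; unsold units are worthless.
    return sum(prob * (price * min(d, q) - cost * q) for d in demands)

best_q = max(demands, key=expected_profit)
# Critical fractile: smallest q with F(q) >= (price - cost) / price
fractile = (price - cost) / price
analytic_q = next(q for q in demands if (q + 1) * prob >= fractile)
assert best_q == analytic_q
```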
  6. By: Simon Dietz; Nicoleta Anca Matei
    Abstract: Many investments involve both a long time-horizon and risky returns. Making investment decisions thus requires assumptions about time and risk preferences. Such assumptions are frequently contested, particularly in the public sector, and there is no immediate prospect of universal agreement. Motivated by these observations, we develop a theory and method of finding ‘spaces for agreement’. These are combinations of classes of discount and utility function, for which one investment dominates another (or ‘almost’ does so), so that all decision-makers whose preferences can be represented by such combinations would agree on the option to be chosen. The theory is built on combining the insights of stochastic dominance on the one hand, and time dominance on the other, thus offering a nonparametric approach to inter-temporal, risky choice. We go on to apply the theory to the controversy over climate policy evaluation and show with the help of a popular simulation model that, in fact, even tough carbon emissions targets would be chosen by almost everyone, barring those with arguably ‘extreme’ preferences.
    Keywords: almost stochastic dominance; climate change; discounting; integrated assessment; project appraisal; risk aversion; stochastic dominance; time dominance; time-stochastic dominance
    JEL: D61 H43 Q54
    Date: 2016–01–11
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:64182&r=upt
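The stochastic-dominance building block of the theory can be sketched in a few lines. The illustration below checks first-order dominance between empirical samples; the paper's time-stochastic criterion is considerably richer:

```python
def first_order_dominates(a, b, grid):
    """True if the empirical cdf of sample `a` lies at or below that
    of `b` at every point of `grid`, i.e. `a` first-order
    stochastically dominates `b` on that grid."""
    def cdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return all(cdf(a, x) <= cdf(b, x) for x in grid)

# A payoff distribution shifted upward dominates the original:
base = [1, 2, 3, 4]
shifted = [2, 3, 4, 5]
assert first_order_dominates(shifted, base, range(0, 7))
```

When dominance holds for every decision-maker in a class of preferences, that class is a "space for agreement" in the authors' sense.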
  7. By: Strulik, Holger; Trimborn, Timo
    Abstract: It has been argued that hyperbolic discounting of future gains and losses leads to time-inconsistent behavior and thereby, in the context of health economics, too little investment in health and too much indulgence in unhealthy consumption. Here, we challenge this view. We set up a life-cycle model of human aging and longevity in which individuals discount the future hyperbolically and make time-consistent decisions. This allows us to disentangle the role of discounting from the time consistency issue. We show that hyperbolically discounting individuals, under a reasonable normalization, invest more in their health than they would if they had a constant rate of time preference. Using a calibrated life-cycle model of human aging, we predict that the average U.S. American lives about 4 years longer with hyperbolic discounting than he would if he had applied a constant discount rate. The reason is that, under hyperbolic discounting, experiences in old age receive a relatively high weight in lifetime utility. In an extension we show that the introduction of health-dependent survival probability motivates an increasing discount rate for the elderly and, in the aggregate, a U-shaped pattern of the discount rate with respect to age.
    Keywords: discount rates, present bias, health behavior, aging, longevity
    JEL: D03 D11 D91 I10 I12
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:tuweco:112016&r=upt
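The mechanism the abstract describes, hyperbolic discount weights declining more slowly than exponential ones at long horizons, is easy to check directly. A sketch with hypothetical parameter values of my own, not the authors' calibration:

```python
import math

def exponential(t, rho=0.05):
    # Constant-rate discounting: weight exp(-rho * t).
    return math.exp(-rho * t)

def hyperbolic(t, alpha=0.1, beta=0.05):
    # Generalized hyperbolic discounting: (1 + alpha*t)^(-beta/alpha).
    return (1.0 + alpha * t) ** (-beta / alpha)

# Both start at weight 1 and fall with the horizon, but the hyperbolic
# weight stays higher at long horizons, so old-age utility counts for
# relatively more in lifetime utility.
for t in (0, 10, 40, 70):
    print(t, exponential(t), hyperbolic(t))
```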
  8. By: Faria, Gonçalo; Verona, Fabio
    Abstract: We show that the out-of-sample forecast of the equity risk premium can be significantly improved by taking into account the frequency-domain relationship between the equity risk premium and several potential predictors. We consider fifteen predictors from the existing literature, for the out-of-sample forecasting period from January 1990 to December 2014. The best result achieved for individual predictors is a monthly out-of-sample R2 of 2.98% and utility gains of 549 basis points per year for a mean-variance investor. This performance is improved even further when the individual forecasts from the frequency-decomposed predictors are combined. These results are robust for different subsamples, including the Great Moderation period, the Great Financial Crisis period and, more generically, periods of bad, normal and good economic growth. The strong and robust performance of this method comes from its ability to disentangle the information aggregated in the original time series of each variable, which allows us to isolate the frequencies of the predictors with the highest predictive power from the noisy parts.
    JEL: C58 G11 G12 G17
    Date: 2017–01–03
    URL: http://d.repec.org/n?u=RePEc:bof:bofrdp:2017_001&r=upt
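As a crude stand-in for the frequency decomposition the paper relies on (the authors use far more refined tools), a predictor series can be split into a smooth low-frequency component and a high-frequency residual, and the two parts used separately:

```python
def two_band_decomposition(series, window=4):
    """Split a series into a low-frequency component (trailing moving
    average) and a high-frequency residual. A toy stand-in for the
    frequency decomposition used in the paper."""
    low = []
    for i in range(len(series)):
        start = max(0, i - window + 1)
        low.append(sum(series[start:i + 1]) / (i + 1 - start))
    high = [x - l for x, l in zip(series, low)]
    return low, high

series = [1.0, 2.0, 1.5, 3.0, 2.5, 4.0]
low, high = two_band_decomposition(series)
# The two bands reconstruct the original series exactly.
assert all(abs(l + h - x) < 1e-9 for l, h, x in zip(low, high, series))
```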
  9. By: Brendan Pass
    Abstract: We consider the theoretical properties of a model which encompasses bi-partite matching under transferable utility on the one hand, and hedonic pricing on the other. This framework is intimately connected to tripartite matching problems (known as multi-marginal optimal transport problems in the mathematical literature). We exploit this relationship in two ways; first, we show that a known structural result from multi-marginal optimal transport can be used to establish an upper bound on the dimension of the support of stable matchings. Next, assuming the distribution of agents on one side of the market is continuous, we identify a condition on their preferences that ensures purity and uniqueness of the stable matching; this condition is a variant of a known condition in the mathematical literature, which guarantees analogous properties in the multi-marginal optimal transport problem. We exhibit several examples of surplus functions for which our condition is satisfied, as well as some for which it fails.
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1701.04431&r=upt
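In the transferable-utility case, a stable bipartite matching maximises total surplus, which can be brute-forced in tiny markets. An illustrative sketch with an assumed surplus matrix; the paper works in far greater generality, with continuous distributions of agents:

```python
from itertools import permutations

def surplus_maximizing_match(surplus):
    """Brute-force the surplus-maximizing assignment in a small
    bipartite transferable-utility market; with TU this coincides
    with the stable matching."""
    n = len(surplus)
    best = max(permutations(range(n)),
               key=lambda p: sum(surplus[i][p[i]] for i in range(n)))
    return best, sum(surplus[i][best[i]] for i in range(n))

# Assumed 3x3 surplus matrix s[i][j] for agent i matched to agent j
s = [[4, 1, 0],
     [2, 3, 1],
     [0, 2, 5]]
match, total = surplus_maximizing_match(s)
```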
  10. By: Besanko, David; Tong, Jian; Wu, Jianjun
    Abstract: Conventional Bayesian games of incomplete information are limited in their ability to represent severe incompleteness of information. Using an illustrative example of (seller offer) sequential bargaining with one-sided incomplete information, we analyze a dynamic game under ambiguity. The novelty of our model is the stark assumption that the seller has complete ignorance---represented by the set of all plausible prior distributions---over the buyer's type. We propose a new equilibrium concept---Perfect Objectivist Equilibrium (POE)---in which multiple priors and full Bayesian updating characterize the belief system, and the uninformed player maximizes the infimum expected utility over non-weakly-dominated strategies. We provide a novel justification for refining POE through Markov perfection, and obtain a unique refined equilibrium. This results in a New "Coase Conjecture"---a competitive outcome arising from an apparent monopoly, which does not require the discount rate to approach zero, and is robust to reversion caused by reputation equilibria.
    Date: 2016–12–16
    URL: http://d.repec.org/n?u=RePEc:stn:sotoec:1606&r=upt
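The decision rule behind the POE concept, maximising the infimum of expected utility over the full set of priors, can be sketched in a toy two-type setting. The payoffs below are hypothetical and of my own choosing, not the paper's bargaining model:

```python
def maxmin_choice(payoffs, priors):
    """Pick the action maximising the worst-case (infimum over the
    prior set) expected payoff -- the maxmin rule described above."""
    def worst_case(action):
        return min(sum(p * u for p, u in zip(prior, payoffs[action]))
                   for prior in priors)
    return max(payoffs, key=worst_case)

# payoffs[action][buyer_type]; the seller is completely ignorant of
# the type, so the prior set contains every plausible distribution.
payoffs = {"high_price": [5, 0], "low_price": [2, 2]}
priors = [(q / 10, 1 - q / 10) for q in range(11)]
# The safe action wins: its worst case (2) beats high_price's (0).
assert maxmin_choice(payoffs, priors) == "low_price"
```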

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.