nep-upt New Economics Papers
on Utility Models and Prospect Theory
Issue of 2015–07–25
twelve papers chosen by
Alexander Harin
Modern University for the Humanities

  1. Stochastic dominance, risk and disappointment: a synthesis. By Thierry Chauveau
  2. Solving the utility maximization problem with CES and Cobb-Douglas utility function via mathematical inequalities By Vedran Kojić
  3. Muckenhoupt's $(A_p)$ condition and the existence of the optimal martingale measure By Dmitry Kramkov; Kim Weston
  4. Rational insurance with linear utility and perfect information By Ole Peters; Alexander Adamou
  5. Asymmetries and Portfolio Choice By Dahlquist, Magnus; Farago, Adam; Tédongap, Roméo
  6. Axiomatization of the Choquet integral for heterogeneous product sets By Mikhail Timonin
  7. Diminishing Marginal Rates of Substitution and Quasi-concavity By Layson, Stephen
  8. Measurement Scales and Welfarist Social Choice By Michael Morreau; John A Weymark
  9. Losing Face By Thomas Gall; David Reinstein
  10. Idealizations of uncertainty, and lessons from artificial intelligence By Smith, Robert Elliott
  11. Information Design By Ina A Taneva
  12. Jump-Starting the Euro Area Recovery: Would a Rise in Core Fiscal Spending Help the Periphery? By Blanchard, Olivier; Erceg, Christopher J.; Lindé, Jesper

  1. By: Thierry Chauveau (Centre d'Economie de la Sorbonne)
    Abstract: The theory of disappointment of Loomes and Sugden [1986] has never been given an axiomatic foundation. This article, in which a theory of disappointment is derived from a simple set of axioms, fills this gap. The new theory is close to that of Loomes and Sugden, although the functional representing the preferences of the decision-maker is now lottery-dependent. Preferences exhibit four properties of interest: (a) risk-averse and risk-prone investors actually behave differently; (b) risk is defined in a way consistent with risk aversion; (c) the functional is nothing but the negative of a convex measure of risk (Föllmer and Schied [2002]) when constant marginal utility is assumed; and (d) violations of the second-order stochastic dominance property are allowed for when monetary values are taken into account (but not when "utils" are substituted for them). Moreover, the preorder induced by stochastic dominance over utils is as "close" to the preorder of preferences as possible, and utility functions may be elicited through experimental testing.
    Keywords: Disappointment; risk-aversion; expected utility; risk premium; stochastic dominance; subjective risk
    JEL: D81
    Date: 2014–06
  2. By: Vedran Kojić (Faculty of Economics and Business, University of Zagreb)
    Abstract: This paper presents a new, non-calculus approach to solving the utility maximization problem with a CES utility function, as well as with a Cobb-Douglas utility function, in the case of n ≥ 2 commodities. Instead of using the Lagrange multiplier method or some other method based on differential calculus, these two maximization problems are solved by using Jensen's inequality and the weighted arithmetic-geometric mean (weighted AM-GM) inequality. In comparison with calculus methods, this approach does not require checking the first- and second-order conditions.
    Keywords: Utility maximization problem, CES and Cobb-Douglas utility function, mathematical inequalities, without calculus
    JEL: C69 D11
    Date: 2015–07–15
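The Cobb-Douglas closed form that the weighted AM-GM argument delivers can be checked numerically. The sketch below is our own illustration with made-up parameters, not taken from the paper: it verifies that the share-rule demand x_i* = a_i·m / (p_i·Σa) weakly dominates every other bundle on the budget line.

```python
# A minimal numerical check (illustrative parameters, not from the paper)
# of the closed form implied by the weighted AM-GM inequality for the
# Cobb-Douglas case: x_i* = a_i * m / (p_i * sum(a)).
import random

def cobb_douglas(x, a):
    """Cobb-Douglas utility u(x) = prod x_i ** a_i."""
    u = 1.0
    for xi, ai in zip(x, a):
        u *= xi ** ai
    return u

def demand(a, p, m):
    """Closed-form demand: spend the budget share a_i / sum(a) on good i."""
    s = sum(a)
    return [ai * m / (pi * s) for ai, pi in zip(a, p)]

a, p, m = [0.3, 0.7], [2.0, 5.0], 100.0
xstar = demand(a, p, m)          # [15.0, 14.0]
ustar = cobb_douglas(xstar, a)

# No other bundle on the budget line should do strictly better.
random.seed(0)
for _ in range(1000):
    t = random.random()          # fraction of income spent on good 1
    x = [t * m / p[0], (1.0 - t) * m / p[1]]
    assert cobb_douglas(x, a) <= ustar + 1e-9
```

The appeal of the inequality approach is visible here: the candidate optimum comes from a closed form, with no derivatives or second-order conditions to verify.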
  3. By: Dmitry Kramkov; Kim Weston
    Abstract: In the problem of optimal investment with utility function defined on $(0,\infty)$, we formulate sufficient conditions for the dual optimizer to be a uniformly integrable martingale. Our key requirement consists of the existence of a martingale measure whose density process satisfies the probabilistic Muckenhoupt $(A_p)$ condition for the power $p=1/(1-a)$, where $a\in (0,1)$ is a lower bound on the relative risk-aversion of the utility function. We construct a counterexample showing that this $(A_p)$ condition is sharp.
    Date: 2015–07
  4. By: Ole Peters; Alexander Adamou
    Abstract: We present a mathematical solution to the insurance puzzle. Our solution only uses time-average growth rates and makes no reference to risk preferences. The insurance puzzle is this: according to the expectation value of wealth, buying insurance is only rational at a price that makes it irrational to sell insurance. There is no price that is beneficial to both the buyer and the seller of an insurance contract. The puzzle of why insurance contracts exist is traditionally resolved by appealing to utility theory, asymmetric information, or a mix of both. Here we note that the expectation value is the wrong starting point -- a legacy from the early days of probability theory. It is the wrong starting point because not even the most basic models of wealth (random walks) are stationary, and what the individual experiences over time is not the expectation value. We use the standard model of noisy exponential growth and compute time-average growth rates instead of expectation values of wealth. In this new paradigm insurance contracts exist that are beneficial for both parties.
    Date: 2015–07
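The gap between the two starting points the abstract contrasts can be simulated directly. The sketch below is our own illustration with arbitrary parameters mu and sigma (not the paper's): under noisy exponential growth, a single wealth trajectory grows at roughly mu - sigma^2/2, well below the expectation growth rate mu.

```python
# Our illustration of the paper's premise (parameters are ours): wealth
# follows noisy exponential growth dW/W = mu dt + sigma dX. The ensemble
# expectation grows at rate mu, but the growth rate experienced along a
# single trajectory is close to mu - sigma**2 / 2.
import math
import random

random.seed(1)
mu, sigma, dt, n = 0.05, 0.30, 0.01, 200_000

log_w = 0.0
for _ in range(n):
    log_w += (mu - 0.5 * sigma ** 2) * dt \
             + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)

T = n * dt
time_avg = log_w / T   # growth rate experienced along this trajectory
# time_avg is typically near mu - sigma**2 / 2 = 0.005, far below
# mu = 0.05 -- which is why reasoning from expectation values alone
# can mislead about what an individual experiences over time.
```

With a time-average criterion, a fair premium can raise the buyer's growth rate (by removing volatility) while also raising the seller's, which is the paper's resolution of the puzzle.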
  5. By: Dahlquist, Magnus; Farago, Adam; Tédongap, Roméo
    Abstract: We examine the portfolio choice of an investor with generalized disappointment aversion preferences who faces returns described by a normal-exponential model. We derive a three-fund separation strategy: the investor allocates wealth to a risk-free asset, a standard mean-variance efficient fund, and an additional fund reflecting return asymmetries. The optimal portfolio is characterized by the investor's endogenous effective risk aversion and implicit asymmetry aversion. We find that disappointment aversion is associated with much larger asymmetry aversion than are standard preferences. Our model explains patterns in popular portfolio advice and provides a reason for shifting from bonds to stocks as the investment horizon increases.
    Keywords: Asset allocation; Downside risk
    JEL: G11
    Date: 2015–07
  6. By: Mikhail Timonin
    Abstract: We prove a representation theorem for the Choquet integral model. The preference relation is defined on a two-dimensional heterogeneous product set $X = X_1 \times X_2$ where elements of $X_1$ and $X_2$ are not necessarily comparable with each other. However, making such comparisons in a meaningful way is necessary for the construction of the Choquet integral (and any rank-dependent model). We construct the representation, study its uniqueness properties, and look at applications in multicriteria decision analysis, state-dependent utility theory, and social choice. Previous axiomatizations of this model, developed for decision making under uncertainty, relied heavily on the notion of comonotonicity and that of a "constant act". However, that requires $X$ to have a special structure, namely, all factors of this set must be identical. Our characterization does not assume commensurateness of criteria a priori, so defining comonotonicity becomes impossible.
    Date: 2015–07
  7. By: Layson, Stephen (University of North Carolina at Greensboro, Department of Economics)
    Abstract: Only in the 2-good case is a diminishing marginal rate of substitution equivalent to quasi-concavity of the utility function. When there are more than 2 goods, the conditions for quasi-concavity, expressed in terms of bordered Hessians, are very unintuitive and tedious to implement. This paper demonstrates, however, that a constant or diminishing marginal rate of substitution between any good and a composite good, consisting of all other goods, is equivalent to quasi-concavity. A new method for checking quasi-concavity is demonstrated that is sometimes easier to use than the traditional method of checking the signs of the bordered Hessians.
    Keywords: Marginal Rates; Substitution; Quasi-concavity
    JEL: D01 D11
    Date: 2015–07–17
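The 2-good equivalence the abstract starts from is easy to see numerically. The sketch below is our own example (u(x, y) = x·y, not taken from the paper): it traces an indifference curve and confirms that the MRS strictly falls as x rises, consistent with quasi-concavity.

```python
# Our own 2-good illustration (not the paper's method): along an
# indifference curve of u(x, y) = x * y, the marginal rate of
# substitution MRS = MU_x / MU_y = y / x should strictly fall as x
# rises -- the diminishing-MRS property that, in the 2-good case,
# is equivalent to quasi-concavity.
def mrs_on_curve(x, u_level):
    y = u_level / x      # solve u(x, y) = x * y = u_level for y
    return y / x         # MRS = y / x for this utility function

level = 10.0
xs = [0.5 + 0.1 * i for i in range(50)]
vals = [mrs_on_curve(x, level) for x in xs]
assert all(a > b for a, b in zip(vals, vals[1:]))   # strictly diminishing
```

The paper's contribution is precisely that with more than 2 goods this simple pairwise check fails, and the MRS must instead be taken against a composite of all other goods.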
  8. By: Michael Morreau (UiT - The Arctic University of Norway); John A Weymark (Vanderbilt University)
    Abstract: The social welfare functional approach to social choice theory fails to distinguish a genuine change in individual well-beings from a merely representational change due to the use of different measurement scales. A generalization of the concept of a social welfare functional is introduced that explicitly takes account of the scales that are used to measure well-beings so as to distinguish between these two kinds of changes. This generalization of the standard theoretical framework results in a more satisfactory formulation of welfarism, the doctrine that social alternatives are evaluated and socially ranked solely in terms of the well-beings of the relevant individuals. This scale-dependent form of welfarism is axiomatized using this framework. The implications of this approach for characterizing classes of social welfare orderings are also considered.
    Keywords: grading; measurement scales; social welfare functionals; utility aggregation; welfarism
    JEL: D7 D6
    Date: 2015–07–13
  9. By: Thomas Gall; David Reinstein
    Abstract: When Al makes an offer to Betty that Betty observes and rejects, Al may “lose face”. This loss of face (LoF) may cost Al utility, either directly or through reputation effects. This can lead to fewer offers and inefficiency in the context of bilateral matching problems, e.g., the marriage market, research partnering, and international negotiations. We offer a simple model with asymmetric information, a continuous signal of an individual’s binary type, and a linear marriage production function. We add a primitive LoF term, characterize the stable equilibria, compare the benchmark without LoF to a case where only one side is vulnerable to LoF, and present comparative statics. A small amount of LoF has no effect on low types’ behavior, but will make high types on both sides more selective. A stronger LoF drives high types out of the market and makes low types reverse snobs, further reducing welfare. LoF also makes rejecting strictly preferred to being rejected, making the “high types reject” equilibrium stable. We can eliminate the effects of LoF by letting the vulnerable side move second, or by setting up a “Conditionally Anonymous Environment” that only reveals when both parties say yes. We motivate our model with a variety of empirical examples, and we suggest policy and managerial implications.
    Date: 2015–07–20
  10. By: Smith, Robert Elliott
    Abstract: Making decisions under uncertainty is at the core of human decision-making, particularly economic decision-making. In economics, a distinction is often made between quantifiable uncertainty (risk) and un-quantifiable uncertainty (Knight, Uncertainty and Profit, 1921). However, this distinction is often ignored by, in effect, the quantification of unquantifiable uncertainty, through the assumption of subjective probabilities in the mind of the human decision makers (Savage, The Foundations of Statistics, 1954). This idea is also reflected in developments in artificial intelligence (AI). However, there are serious reasons to doubt this assumption, which are relevant to both AI and economics. Some of the reasons for doubt relate directly to problems that AI has faced historically, that remain unsolved, but little regarded. AI can proceed on a prescriptive agenda, making engineered systems that aid humans in decision-making, despite the fact that these problems may mean that the models involved have serious departures from real human decision-making, particularly under uncertainty. However, in descriptive uses of AI and similar ideas (like the modelling of decision-making agents in economics), it is important to have a clear understanding of what has been learned from AI about these issues. This paper will look at AI history in this light, to illustrate what can be expected from models of human decision-making under uncertainty that proceed from these assumptions. Alternative models of uncertainty are discussed, along with their implications for examining in vivo human decision-making uncertainty in economics.
    Keywords: uncertainty,probability,Bayesian,artificial intelligence
    JEL: B59
    Date: 2015
  11. By: Ina A Taneva
    Abstract: There are two ways of creating incentives for interacting agents to behave in a desired way. One is by providing appropriate payoff incentives, which is the subject of mechanism design. The other is by choosing the information that agents observe, which we refer to as information design. We consider a model of symmetric information where a designer chooses and announces the information structure about a payoff-relevant state. The interacting agents observe the signal realizations and take actions which affect the welfare of both the designer and the agents. We characterize the general finite approach to deriving the optimal information structure for the designer - the one that maximizes the designer's ex ante expected utility subject to agents playing a Bayes Nash equilibrium. We then apply the general approach to a symmetric two-state, two-agent, two-action environment in a parameterized underlying game and fully characterize the optimal information structure: it is never strictly optimal for the designer to use conditionally independent private signals; the optimal information structure may be a public signal or may consist of correlated private signals. Finally, we examine how changes in the underlying game affect the designer's maximum payoff. This exercise provides a joint mechanism/information design perspective.
    Keywords: information design, implementation, incomplete information, Bayes correlated equilibrium, sender-receiver games
    JEL: C72 D72 D82 D83
    Date: 2015–02–11
  12. By: Blanchard, Olivier (International Monetary Fund); Erceg, Christopher J. (Federal Reserve Board); Lindé, Jesper (Research Department, Central Bank of Sweden)
    Abstract: We show that a fiscal expansion by the core economies of the euro area would have a large and positive impact on periphery GDP assuming that policy rates remain low for a prolonged period. Under our preferred model specification, an expansion of core government spending equal to one percent of euro area GDP would boost periphery GDP around 1 percent in a liquidity trap lasting three years, about half as large as the effect on core GDP. Accordingly, under a standard ad hoc loss function involving output and inflation gaps, increasing core spending would generate substantial welfare improvements, especially in the periphery. The benefits are considerably smaller under a utility-based welfare measure, reflecting in part that higher net exports play a material role in raising periphery GDP.
    Keywords: Monetary Policy; Fiscal Policy; Liquidity Trap; Zero Bound Constraint; DSGE Model; Currency Union
    JEL: E52 E58
    Date: 2015–07–01

This nep-upt issue is ©2015 by Alexander Harin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.