NEP: New Economics Papers
on Utility Models and Prospect Theory
Issue of 2019‒02‒25
sixteen papers chosen by
By: | Kanin Anantanasuwong; Roy Kouwenberg; Olivia S. Mitchell; Kim Peijnenburg
Abstract: | Using an incentivized survey and a representative sample of investors, we elicit ambiguity attitudes toward a familiar company stock, a local stock index, a foreign stock index, and a cryptocurrency. We separately estimate ambiguity aversion (ambiguity preferences) and perceived ambiguity levels (perceptions about ambiguity), while controlling for unknown likelihood beliefs. We show that ambiguity aversion is highly correlated across different assets and can be summarized by a single underlying factor. By contrast, individuals’ perceived ambiguity levels differ depending on the type of asset and cannot be summarized by a single underlying factor. Perceived ambiguity is mitigated by financial literacy and education, while the preference component is correlated with risk aversion. Perceived ambiguity proves to be related to actual investment choices, validating our measure. Finally, our results imply that policies enhancing financial literacy and knowledge of financial markets can help stimulate equity market participation and reduce inequality, as these reduce people’s perceived levels of ambiguity about financial assets. |
JEL: | C93 D14 D81 |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:25561&r=all |
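The abstract's claim that ambiguity aversion across assets "can be summarized by a single underlying factor" is the kind of statement typically checked with a principal-component decomposition of the cross-asset correlation matrix. A minimal sketch on synthetic data — the four-asset setup, the latent-factor data-generating process, and the noise level are all illustrative assumptions, not the paper's estimation procedure:

```python
import numpy as np

# Synthetic ambiguity-aversion scores for 500 hypothetical investors and
# 4 assets, driven by one common latent factor plus idiosyncratic noise.
rng = np.random.default_rng(0)
common = rng.normal(size=500)                # one latent factor per investor
noise = 0.3 * rng.normal(size=(500, 4))      # asset-specific noise
aversion = common[:, None] + noise           # 4 correlated asset-level scores

# Share of variance explained by the first principal component of the
# correlation matrix: close to 1 indicates a single underlying factor.
corr = np.corrcoef(aversion, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]
share = eigvals[0] / eigvals.sum()
print(f"first principal component explains {share:.0%} of the variance")
```

With a dominant common factor, as here, the first component absorbs most of the variance; for the paper's perceived-ambiguity finding one would instead expect the variance to spread over several components.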
By: | Patrick Schmidt |
Abstract: | I consider the elicitation of ambiguous beliefs about an event. I introduce a mechanism that identifies an interval of probabilities (representing ambiguity perception) for several classes of ambiguity-averse preferences. The agent reveals her preference for mixing binarized bets on the uncertain event and its complement under varying betting odds. Under ambiguity aversion, mixing is informative about the interval of beliefs. In particular, the mechanism makes it possible to distinguish ambiguous beliefs from point beliefs, and identifies the belief interval for maxmin preferences. For ambiguity-averse smooth second-order and variational preferences, the mechanism reveals inner bounds for the belief interval, which are sharp under additional assumptions. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.07447&r=all |
By: | Jean-Pierre Fouque; Ruimeng Hu |
Abstract: | Empirical studies indicate the presence of multi-scales in the volatility of underlying assets: a fast-scale on the order of days and a slow-scale on the order of months. In our previous works, we have studied the portfolio optimization problem in a Markovian setting under each single scale, the slow one in [Fouque and Hu, SIAM J. Control Optim., 55 (2017), 1990-2023], and the fast one in [Hu, Proceedings of IEEE CDC 2018, accepted]. This paper is dedicated to the analysis when the two scales coexist in a Markovian setting. We study the terminal wealth utility maximization problem when the volatility is driven by both fast- and slow-scale factors. We first propose a zeroth-order strategy, and rigorously establish the first order approximation of the associated problem value. This is done by analyzing the corresponding linear partial differential equation (PDE) via regular and singular perturbation techniques, as in the single-scale cases. Then, we show the asymptotic optimality of our proposed strategy within a specific family of admissible controls. Interestingly, we highlight that a pure PDE approach does not work in the multi-scale case and, instead, we use the so-called epsilon-martingale decomposition. This completes the analysis of portfolio optimization in both fast mean-reverting and slowly-varying Markovian stochastic environments. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.06883&r=all |
By: | John Gathergood; David Hirshleifer; David Leake; Hiroaki Sakaguchi; Neil Stewart |
Abstract: | Individual investors buying multiple stocks on the same day often use a naïve diversification 1/N heuristic, dividing purchase value equally across stocks. Yet very few investors maintain a 1/N portfolio allocation. Instead, investors appear to narrowly frame their buy-day decision independently of their portfolio, applying the 1/N heuristic only for new purchases. The use of this heuristic decreases, but does not disappear, as financial stakes and investor trading experience increase. These findings indicate that the simple heuristics individual investors use in practice depart further from rationality than is often assumed even in behavioral models of investment decisions. |
JEL: | D14 D53 D91 G02 G11 G12 |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:25567&r=all |
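The buy-day heuristic the abstract describes is mechanically simple: divide the day's purchase value equally across the stocks bought that day, ignoring the rest of the portfolio. A minimal sketch — the function name and ticker symbols are hypothetical, not from the paper:

```python
def one_over_n_allocation(budget, tickers):
    """Naive 1/N heuristic: split a buy-day budget equally across the
    stocks purchased that day, independently of existing holdings."""
    share = budget / len(tickers)
    return {t: share for t in tickers}

# Example: a budget of 9,000 split across three same-day purchases.
allocation = one_over_n_allocation(9000.0, ["AAA", "BBB", "CCC"])
```

The narrow framing the paper documents is precisely that this rule is applied only to the new purchases: the resulting overall portfolio generally does not remain equal-weighted.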
By: | Levon Barseghyan; Francesca Molinari; Matthew Thirkettle |
Abstract: | This paper is concerned with learning decision makers' (DMs) preferences using data on observed choices from a finite set of risky alternatives with monetary outcomes. We propose a discrete choice model with unobserved heterogeneity in consideration sets (the collection of alternatives considered by DMs) and unobserved heterogeneity in standard risk aversion. In this framework, stochastic choice is driven both by different rankings of alternatives induced by unobserved heterogeneity in risk preferences and by different sets of alternatives considered. We obtain sufficient conditions for semi-nonparametric point identification of both the distribution of unobserved heterogeneity in preferences and the distribution of consideration sets. Our method yields an estimator that is easy to compute and that can be used in markets with a large number of alternatives. We apply our method to a dataset on property insurance purchases. We find that although households are on average strongly risk averse, they consider lower coverages more frequently than higher coverages. Finally, we estimate the monetary losses associated with limited consideration in our application. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.06629&r=all |
By: | Jason S. Anquandah; Leonid V. Bogachev |
Abstract: | Managing unemployment is one of the key issues in social policy. Unemployment insurance schemes are designed to cushion the financial and morale blow of losing a job, but also to encourage the unemployed to seek new jobs more proactively, owing to the continuous reduction of benefit payments. In the present paper, a simple model of unemployment insurance is proposed, with a focus on the optimality of the individual's entry to the scheme. The corresponding optimal stopping problem is solved, and its similarities to and differences from the perpetual American call option are discussed. Beyond a purely financial point of view, we argue that in the actuarial context optimal decisions should take into account other possible preferences through a suitable utility function. Some examples in this direction are worked out. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.06175&r=all |
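As background for the comparison the abstract draws, the classical perpetual American call (under standard Black-Scholes dynamics with dividend yield q > 0) has a closed-form optimal exercise boundary. A minimal sketch of that textbook result — the parameter values are illustrative and not taken from the paper:

```python
import math

def perpetual_call_threshold(r, q, sigma, strike):
    """Optimal exercise boundary S* = K * beta / (beta - 1) for a perpetual
    American call, where beta > 1 is the positive root of the quadratic
    0.5*sigma^2 * b*(b-1) + (r - q)*b - r = 0 (requires q > 0)."""
    a = 0.5 * sigma**2
    b = r - q - 0.5 * sigma**2
    beta = (-b + math.sqrt(b**2 + 4.0 * a * r)) / (2.0 * a)
    return strike * beta / (beta - 1.0)

# Example: r = 5%, dividend yield 3%, volatility 20%, strike 100.
boundary = perpetual_call_threshold(0.05, 0.03, 0.2, 100.0)
```

Exercising is optimal the first time the underlying reaches this boundary; with q = 0 the boundary is infinite and early exercise is never optimal, which is one of the differences from the insurance-entry problem the paper emphasizes.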
By: | Nina Hestermann; Yves Le Yaouanq |
Abstract: | We study the inference and experimentation problem of an agent in a situation where the outcomes depend on the individual’s intrinsic ability and on an external variable. We analyze the mistakes made by decision-makers who hold inaccurate prior beliefs about their ability. Overconfident individuals take too much credit for their successes and excessively blame external factors if they fail. They are too easily dissatisfied with their environment, which leads them to experiment in variable environments and revise their self-confidence over time. In contrast, underconfident decision-makers might be trapped in low-quality environments and incur perpetual utility losses. |
Keywords: | learning, experimentation, overconfidence, attribution bias |
JEL: | D83 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_7501&r=all |
By: | Nicole B\"auerle; Tomer Shushi |
Abstract: | The certainty equivalent is a utility-based measure: the certain amount at which an investor is indifferent between that amount and an investment that carries some uncertainty. It therefore plays an essential role in utility-based decision making. One of the most extensively used risk measures is Value at Risk, investigated and used by both researchers and practitioners as a powerful tool that measures risk at some quantile level, allowing a focus on extreme losses. In this paper, we propose a natural generalization of the certainty equivalent that, like the Value at Risk measure, focuses on the tail of the loss distribution, and thus on extreme financial and insurance risk events. We then investigate the fundamental properties of the proposed measure and show its unique features and implications for the risk measurement process. Furthermore, we derive formulas for truncated elliptical models of losses and provide formulas for selected members of such models. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.06941&r=all |
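The two building blocks the abstract combines can be sketched numerically: a certainty equivalent (here under CARA utility, an illustrative choice) and a Value-at-Risk quantile, with the tail version applying the certainty equivalent only to losses beyond VaR. This is a hedged sketch of the construction, not the paper's exact definitions or its elliptical-model formulas:

```python
import numpy as np

def certainty_equivalent_cara(wealth, a):
    """CE = u^{-1}(E[u(W)]) for CARA utility u(x) = -exp(-a*x)."""
    return -np.log(np.mean(np.exp(-a * np.asarray(wealth)))) / a

def value_at_risk(losses, level):
    """Loss quantile at confidence level `level`, e.g. 0.95."""
    return np.quantile(np.asarray(losses), level)

def tail_certainty_equivalent(losses, a, level):
    """Certainty equivalent of losses conditional on exceeding VaR:
    a sketch of a CE that, like VaR, focuses on the loss tail."""
    losses = np.asarray(losses)
    tail = losses[losses >= value_at_risk(losses, level)]
    # A loss L corresponds to wealth -L; negate the wealth CE to get
    # the certain loss equivalent of the tail.
    return -certainty_equivalent_cara(-tail, a)
```

By Jensen's inequality the tail certainty equivalent exceeds the mean tail loss for a risk-averse (concave) utility, so it is a more conservative tail summary than the expected shortfall.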
By: | Valeria Di Cosmo (Economic and Social Research Institute and Fondazione Eni Enrico Mattei); Elisa Trujillo-Baute (University of Barcelona and Barcelona Institute of Economics) |
Abstract: | The benefits of smoothing demand peaks in the electricity market have been widely recognised. European countries such as Spain and some of the Scandinavian countries have recently given consumers the option of facing spot prices instead of fixed tariffs determined by retailers. This paper develops a theoretical model to study the relations between risk-averse consumers, retailers, and producers, in both the spot and forward markets, when consumers are able to choose between fixed tariffs and wholesale prices. The model is calibrated on a real market case - Spain - where spot tariffs were introduced alongside flat tariffs for household consumers in 2014. Finally, simulations of agents' behaviour and market performance, depending on consumers' risk aversion and the number of producers, are used to analyse the implications of the model. Our results show that the quantities retailers and producers trade in the forward market are positively related to the loss aversion of consumers. The quantities bought by retailers in the forward market are negatively related to the skewness of spot prices. By contrast, the quantities sold forward by producers are positively related to the skewness of spot prices (a high probability of high prices increases forward sales) and to total market demand. In the spot market, the degree of consumers' loss aversion determines the quantity retailers buy in the spot market but has no direct effect on spot prices. |
Keywords: | Electricity Spot Market, Electricity Forward Market, Risk Aversion |
JEL: | D40 L11 Q41 |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:fem:femwpa:2018.31&r=all |
By: | Charles Bellemare; Alexander Sebald |
Abstract: | We derive bounds on the causal effect of belief-dependent preferences (reciprocity and guilt aversion) on choices in sequential two-player games without exploiting information or data on the (higher-order) beliefs of players. We show how informative bounds can be derived by exploiting a specific invariance property common to those preferences. We illustrate our approach by analyzing data from an experiment conducted in Denmark. Our approach produces tight bounds on the causal effect of reciprocity in the games we consider. These bounds suggest there exists significant reciprocity in our population – a result also substantiated by the participants’ answers to a post-experimental questionnaire. On the other hand, our approach yields highly implausible estimates of guilt aversion. We contrast our estimated bounds with point estimates obtained using data on self-declared higher-order beliefs, keeping all other aspects of the model unchanged. We find that point estimates fall within our estimated bounds, suggesting that the elicited higher-order belief data in our experiment are weakly (if at all) affected by a potential endogeneity problem due to, e.g., false consensus effects. |
Keywords: | belief-dependent preferences, partial identification |
JEL: | C93 D63 D84 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_7505&r=all |
By: | Ahlfeldt, Gabriel; Maennig, Wolfgang; Mueller, Steffen |
Abstract: | We provide the first systematic documentation and analysis of a generation gap in direct democracy outcomes across a wide range of topics, using post-election survey data covering more than 300 Swiss referenda and four decades. We find that young voters are more likely to support reform projects that are politically liberal, support the young, or protect the environment. We separate age and cohort effects without imposing functional-form constraints, using a panel rank regression approach. The aging effect on political orientation is robust to controlling for arbitrary cohort effects and appears to be driven by expected utility maximization and not by habituation-induced status-quo bias. In Switzerland, population ageing is already affecting direct democracy outcomes: five referenda since 2004 would have had a different outcome had the population distribution remained at 1981 levels. |
Keywords: | age; cohort; direct democracy; generation gap; referendum; reform; status quo; utility
JEL: | D7 H3 |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:13449&r=all |
By: | Dirk Bergemann (Cowles Foundation, Yale University); Benjamin Brooks (Dept. of Economics, University of Chicago); Stephen Morris (Dept. of Economics, Princeton University) |
Abstract: | We describe a methodology for making counterfactual predictions when the information held by strategic agents is a latent parameter. The analyst observes behavior which is rationalized by a Bayesian model in which agents maximize expected utility, given partial and differential information about payoff-relevant states of the world. A counterfactual prediction is desired about behavior in another strategic setting, under the hypothesis that the distribution of and agents’ information about the state are held fixed. When the data and the desired counterfactual prediction pertain to environments with finitely many states, players, and actions, there is a finite dimensional description of the sharp counterfactual prediction, even though the latent parameter, the type space, is infinite dimensional. |
Keywords: | Counterfactuals, Bayes correlated equilibrium, Information structure, Type space, Linear program |
JEL: | C72 D44 D82 D83 |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2162r&r=all |
By: | Alvino, Letizia (Tilburg University, School of Economics and Management); Constantinides, Efthymios; Franco, Massimo |
Abstract: | Understanding consumers’ decision-making process is one of the most important goals in Marketing. However, the traditional tools (e.g., surveys, personal interviews, and observations) used in Marketing research are often inadequate for analysing and studying consumer behaviour. Since people’s decisions are influenced by several unconscious mental processes, consumers very often do not want to, or do not know how to, explain their choices. For this reason, Neuromarketing research has grown in popularity. Neuromarketing uses both psychological and Neuroscience techniques to analyse the neurological and psychological mechanisms that underlie human decisions and behaviours. Hence, studying these mechanisms is useful for explaining consumers’ responses to marketing stimuli. This paper (1) provides an overview of current and previous research in Neuromarketing and (2) analyses the use of Marginal Utility theory in Neuromarketing. In fact, there is remarkably little direct empirical evidence of the use of Marginal Utility in Neuromarketing studies. Marginal Utility is an essential economic parameter affecting satisfaction and one of the most important elements of the consumer’s decision-making process. Through the use of Marginal Utility theory, economists can measure satisfaction, which largely affects the consumer’s decision-making process. The research gap between Neuromarketing and the use of Marginal Utility theory is discussed in this paper. We describe why Neuromarketing studies should take this parameter into account. We conclude with our vision of potential research at the intersection of Marginal Utility and Neuromarketing. |
Date: | 2018 |
URL: | http://d.repec.org/n?u=RePEc:tiu:tiutis:b3e61951-9032-4cb4-b075-163f27c92f51&r=all |
By: | Gregorio Curello; Ludvig Sinander |
Abstract: | Most comparisons of preferences have the structure of single-crossing dominance. We examine the lattice structure of single-crossing dominance, proving characterisation, existence and uniqueness results for minimum upper bounds of arbitrary sets of preferences. We apply these theorems to monotone comparative statics, ambiguity- and risk-aversion, social choice, and politically correct discourse. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.07260&r=all |
By: | Chloé Michel; Michelle Sovinsky; Steven Stern |
Abstract: | Using rich data from the Panel Study of Income Dynamics on breast cancer diagnosis and lifestyle choices, we estimate how being diagnosed influences smoking, drinking, and exercising habits for more than 9,000 women over the period from 1999 to 2011. These data allow us to learn more about the trade-offs women are willing to make between participating in unhealthy (but enjoyable) habits and increasing one’s life expectancy. Our parameter estimates indicate that breast cancer diagnosis (and recency of diagnosis) impacts lifestyle choices. However, the impact of diagnosis has a different effect on smoking, drinking, and exercising behavior, and the impact also depends upon the recency of the diagnosis. We find that women who had a diagnosis recently in their lives (within the last five years) exercise less and smoke less but do not change their drinking habits relative to healthy women. These changes in behavior are not always consistent with information provided to the public on breast cancer risk factors. However, we find that these choices are rationalized when one considers the overall value of life, where lifestyle choices increase the utility of living. For a woman diagnosed with breast cancer, our results indicate that a woman will smoke only if the value placed on smoking is greater than 6% of the total utility from being alive. We find the threshold is lower for drinking, where drinking has a positive impact on the value of life if the value placed on drinking is greater than 3% of the total utility from being alive. Finally, a woman with breast cancer will find it valuable to engage in exercise even when it brings disutility of 3% of the value of living. Using conventional estimates for the value of a year of life, we find that these choices imply smoking is valued at about $49,000 per year for smokers, drinking is valued at about $29,500 per year for drinkers, and exercising is valued at about $28,200 for exercisers. |
Date: | 2018–02 |
URL: | http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2019_069&r=all |
By: | Alan Krause |
Abstract: | We examine the nonlinear taxation of labour income and savings when the government places more weight on the welfare of the elderly than of young people. Our analysis is motivated by the observation that the elderly are more likely to vote. Compared to optimal taxation under a utilitarian social welfare function, we show that savings are subsidised, and young low-skill workers face a higher marginal labour tax rate. We also show that the lifetime utility of low-skill individuals is reduced, and that of high-skill individuals is increased, relative to optimal taxation under utilitarianism. An extension of the model to include generation-specific public spending is also considered. |
Keywords: | generational policy; nonlinear taxation. |
JEL: | H21 H42 |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:yor:yorken:19/02&r=all |