NEP: New Economics Papers on Utility Models and Prospect Theory
Issue of 2023‒03‒20
seventeen papers chosen by
By: | Hans-Martin von Gaudecker; Axel Wogrolly; Christian Zimpelmann |
Abstract: | This paper analyzes the stability and distribution of ambiguity attitudes using a broad population sample. Using high-powered incentives, we collected six waves of data on ambiguity attitudes about financial markets—our main application—and climate change. Estimating a structural stochastic choice model, we obtain three individual-level parameters: ambiguity aversion, likelihood insensitivity, and the magnitude of decision errors. These parameters are very heterogeneous in the population. At the same time, they are stable over time and largely stable across domains. We summarize heterogeneity in these three dimensions using a discrete classification approach with four types. Each group makes up 20-30% of the sample. One group comes close to the behavior of expected utility maximizers. Two types are characterized by high likelihood insensitivity; one of them is ambiguity averse and the other ambiguity seeking. Members of the final group have large error parameters; robust conclusions about their ambiguity attitudes are difficult. Observed characteristics vary between groups in plausible ways. Ambiguity types predict risky asset holdings in the expected fashion, even after controlling for many covariates. |
Keywords: | ambiguity attitudes; temporal stability; domain specificity; sociodemographic factors; cluster analysis; household portfolio choice |
JEL: | D81 G41 C38 D14 |
Date: | 2021–04 |
URL: | http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2021_272v2&r=upt |
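A minimal sketch of the kind of discrete classification mentioned in the abstract above: simulated individual-level parameters (ambiguity aversion, likelihood insensitivity, error magnitude) are grouped into four types with k-means. The synthetic data, the k-means algorithm, and scikit-learn are stand-ins chosen for illustration; the paper's own estimation and classification procedure is not reproduced here.

    # Illustrative only: synthetic parameters and k-means stand in for the
    # paper's structural estimates and its discrete classification approach.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical individual-level parameters (not the paper's data):
    # ambiguity aversion, likelihood insensitivity, decision-error magnitude.
    params = np.column_stack([
        rng.normal(0.0, 0.3, n),
        rng.beta(2, 2, n),
        rng.lognormal(-1.0, 0.5, n),
    ])

    # Standardize so that no single parameter dominates the distance metric.
    z = (params - params.mean(axis=0)) / params.std(axis=0)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z)

    # Group shares and mean parameters per group, analogous to the 20-30%
    # shares and type profiles summarized in the abstract.
    for g in range(4):
        members = params[labels == g]
        print(f"group {g}: share={(labels == g).mean():.2f}, means={members.mean(axis=0).round(2)}")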
By: | Mats Köster; Paul Voss |
Abstract: | We develop a theory of conversations. Two agents with different interests take turns choosing the topic of the conversation. Talking about a single topic allows them to delve deeper, making the conversation more informative (or enjoyable). To capture this dynamic, we assume that the marginal utility from conversing increases when the agents stay on topic. The equilibrium conversation is extreme: it either maximizes or minimizes welfare. Long conversations are deep and thus efficient. Short ones are often superficial. The topic of a deep conversation depends in subtle ways on who speaks when. Applications range from echo chambers to team production. |
Keywords: | communication, information acquisition, team production |
JEL: | D83 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_10275&r=upt |
By: | Davide Carpentiere; Alfio Giarlotta; Stephen Watson |
Abstract: | All possible types of deterministic choice behavior are classified by their degree of irrationality. This classification is performed in three steps: (1) select a benchmark of rationality, for which this degree is zero; (2) endow the set of choices with a metric to measure deviations from rationality; and (3) compute the distance of any choice behavior from the selected benchmark. The natural candidate for step 1 is the family of all rationalizable behaviors. A possible candidate for step 2 is the metric described by Klamler (2008), who incorrectly claims this is the only one satisfying five intuitive properties. While proving a correct characterization of this metric, we determine the causes of its low discriminating power, and design a high-discerning variation of it. In step 3 we use this new metric to establish the minimum distance of any choice behavior from the benchmark of rationality. We conclude by describing a measure of stochastic irrationality, which employs the random utility model as a benchmark of rationality, and the Block-Marschak polynomials to measure deviations from it. |
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2302.13656&r=upt |
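A toy version of the three-step procedure described in the abstract above, for deterministic choice over a three-element set: the benchmark is the family of choice functions rationalizable by a strict linear order, and deviations are measured by counting the menus on which two choice functions disagree. This counting metric is an illustrative assumption, not the Klamler (2008) metric or the authors' high-discerning variation.

    # Illustrative sketch: minimum distance of a deterministic choice function
    # from the benchmark of rationalizable (linear-order-maximizing) behavior,
    # using a simple disagreement count (an assumed metric, not Klamler's).
    from itertools import combinations, permutations

    X = ("a", "b", "c")
    menus = [m for k in (2, 3) for m in combinations(X, k)]

    def rational_choice(order):
        # Choice function induced by a strict linear order: pick the best item.
        rank = {x: i for i, x in enumerate(order)}
        return {m: min(m, key=lambda x: rank[x]) for m in menus}

    benchmark = [rational_choice(order) for order in permutations(X)]

    def distance(c1, c2):
        # Number of menus on which the two choice functions disagree.
        return sum(c1[m] != c2[m] for m in menus)

    # A non-rationalizable choice function: its pairwise choices are cyclic.
    c = {("a", "b"): "a", ("b", "c"): "b", ("a", "c"): "c", ("a", "b", "c"): "a"}

    print("degree of irrationality:", min(distance(c, r) for r in benchmark))  # prints 1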
By: | Laurens Cherchye; Bram De Rock; Frederic Vermeulen |
Abstract: | In its most basic form, the classical theory of consumer behaviour describes how a consumer allocates a given budget to a set of goods and services, taking the prices of these goods and services as given. Although the most important implications of this theory have been known at least since Hicks's (1939) Value and Capital, it took another fifteen years before the theory was brought in its entirety to real-world data. This was done by Richard Stone (1954), who applied the Linear Expenditure System (LES) to British aggregate demand data. As shown by Geary (1950), the underlying preferences of the LES are of the Stone-Geary type. In other words, if one maximizes a direct utility function that represents Stone-Geary preferences subject to the consumer's budget constraint, one obtains the LES as the relation between the quantities purchased by the consumer, her budget, and the prices she faces. As is well known, this system of Marshallian demand functions satisfies all the theoretical implications of the maximization of rational preferences subject to a linear budget constraint. First, it satisfies adding-up: the sum of all the expenses on the different goods and services purchased by the consumer equals the consumer's budget. Second, demand is homogeneous of degree zero in prices and budget: if one multiplies all prices and the budget by, say, two, the quantities purchased remain unaffected (the consumer does not suffer from money illusion). Finally, the Slutsky matrix, which contains all the Hicksian or compensated price effects, is symmetric and negative semidefinite. The latter implies, among other things, that the consumer's Hicksian or compensated demand for a given good can never increase following a price increase of that good, ceteris paribus. Stone's work thus meant that, for the first time, a demand system was estimated that could in principle satisfy all the theoretical implications of the classical theory of consumer behaviour. In turn, this meant that, for the first time, an estimate of (in Stone's case) an average consumer's preferences was obtained from real-world data. An important feature of the LES is that it makes specific assumptions on the relation between the quantities purchased by the consumer, her budget, and the prices she faces. These, in turn, imply a particular specification of the consumer's preferences (in this case, the Stone-Geary type). These specific assumptions are far from innocuous: they potentially rule out consumer behaviour that is theoretically possible. For example, income elasticities derived from the LES are all positive, which means that the LES can only capture consumer behaviour for goods and services that are normal; inferior goods cannot be modelled by means of the LES. The same applies to the substitutability pattern between the modelled goods and services: with Stone-Geary preferences, all goods are substitutes for each other, and complementary goods are ruled out by construction. Over the years, more general systems of demand equations have been proposed in the literature that allow the econometrician to capture richer behavioural patterns than those that can be modelled by means of the LES.
Examples of such, often widely used, demand systems are the Rotterdam model of Barten (1964) and Theil (1965), the translog model of Christensen, Jorgenson and Lau (1975), the Almost Ideal Demand System of Deaton and Muellbauer (1980), the Quadratic Almost Ideal Demand System (QUAIDS) of Banks, Blundell and Lewbel (1997), and the Exact Affine Stone Index (EASI) demand system of Lewbel and Pendakur (2009). Still, all these demand systems have in common that they impose additional structure on the form consumer behaviour can take, which goes beyond the pure theory of consumer behaviour. In other words, theory gives only limited guidance on the specific functional form of demand or of the consumer's preferences. The approach just described can be called a parametric approach: the functional form of the preferences or the demand system is assumed to be known from the outset by the econometrician, while the unknown parameters in this functional form are estimated by means of econometric techniques applied to data on observed consumer behaviour. The strength of the parametric approach is that it not only allows econometricians to easily apply and test the theory of the consumer's utility-maximizing behaviour, but also opens up a toolbox of instruments that are directly useful for evaluating economic policy. Think of the estimation of price and income elasticities, or the calculation of Hicksian equivalent and compensating variations to evaluate the distributional effects of price changes coming from, for example, indirect tax reforms such as an increase in the taxes on gasoline or the introduction of a sugar tax. The main disadvantage of the parametric approach, though, is that it is prone to misspecification. As mentioned before, the particular functional form for the demand system used by the econometrician goes beyond the pure theory of consumption behaviour. The parametric approach adds these assumptions, on top of other, mainly statistical, assumptions needed to bring the theory to the data, and they might not fit the data at hand. A rejection of, say, Slutsky symmetry may be due either to the theory of consumer behaviour being inappropriate for explaining observed demand behaviour, or to the use of a functional specification that is not suitable for the data at hand. The nonparametric approach is an alternative way to bring the theory of consumer behaviour to the data. In a nutshell, the nonparametric approach aims to analyse consumer behaviour starting from the pure theory of consumer behaviour, while imposing only the minimal additional assumptions needed to bring the theory to the data. Most importantly, it aims to analyse consumer behaviour without assuming a specific system of demand equations or specific preferences for the consumer. The term "nonparametric approach" has multiple meanings, though. In what follows, we concentrate on two meanings that have figured prominently in the applied demand literature. The first refers to the theory of revealed preference, initially proposed by Samuelson (1938, 1948). The second refers to applications of consumer behaviour in which the relation between demand, income and/or prices can take a very general shape that does not refer to known parametric forms of demand or preferences.
This general shape is then typically estimated by means of nonparametric regression techniques. We end this encyclopedia entry with a discussion of a final nonparametric approach that combines the revealed preference approach with nonparametric (or semi-parametric) regression. |
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/356680&r=upt |
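For reference, the Stone-Geary/LES link discussed in the preceding abstract can be stated compactly (standard textbook derivation; notation is ours). Maximizing a Stone-Geary utility function subject to the linear budget constraint,

    \max_{q}\ U(q)=\sum_{i}\beta_{i}\ln(q_{i}-\gamma_{i}),
    \qquad \beta_{i}>0,\ \ \sum_{i}\beta_{i}=1,\ \ q_{i}>\gamma_{i},
    \qquad \text{s.t.}\ \ \sum_{i}p_{i}q_{i}=y,

yields the Linear Expenditure System

    p_{i}q_{i}\;=\;p_{i}\gamma_{i}\;+\;\beta_{i}\Bigl(y-\sum_{k}p_{k}\gamma_{k}\Bigr),

so each good receives its subsistence expenditure p_i γ_i plus a fixed share β_i of supernumerary income. Adding-up, homogeneity of degree zero, and Slutsky symmetry can be verified directly from these equations, while the positive marginal budget shares β_i are what rule out inferior goods.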
By: | Oluwaseun Fajana; Chen Zheng; Masaki Mori |
Abstract: | This paper rethinks the claim of a negative association between social interaction and the decision to move from renting into homeownership. A potential shortcoming of the related literature is that social interaction is taken as exogenous, neglecting the problem of moral hazard arising from information asymmetry. In this paper, these shortcomings are confronted conceptually with reference to Granovetter's (1973) tie-strength hypothesis and empirically using self-reported information from a representative British household panel. The conclusions suggest that the value of social interaction for homeownership may be uncovered where it is observed indirectly in the individual's utility function, as an other-regarding preference, rather than directly in the individual's deep preferences for homeownership. Consequences for social and housing policy are further discussed. |
Keywords: | Decision Framing; Homeownership; Information Diffusion; Word of Mouth |
JEL: | R3 |
Date: | 2022–01–01 |
URL: | http://d.repec.org/n?u=RePEc:arz:wpaper:2022_166&r=upt |
By: | Raluca Ursu; Stephan Seiler; Elisabeth Honka |
Abstract: | We provide a detailed overview of the empirical implementation of the sequential search model proposed by Weitzman (1979). We discuss the assumptions underlying the model, the identification of search cost and preference parameters, the necessary normalizations of utility parameters, counterfactuals that require a search model framework, and different estimation approaches. The goal of this paper is to consolidate knowledge and provide a unified treatment of various aspects of sequential search models that are relevant for empirical work. |
Keywords: | sequential search model |
JEL: | D43 D83 L13 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_10264&r=upt |
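To accompany the overview above, here is a minimal sketch of the optimal policy in the Weitzman (1979) sequential search model in its textbook form: each alternative gets a reservation value solving the indifference condition between its search cost and the expected gain from inspecting it; alternatives are searched in descending order of reservation values; search stops once the best realized utility exceeds the highest remaining reservation value. The normal utility distributions and the parameter values below are assumptions for illustration, not the paper's estimation approach.

    # Sketch of Weitzman (1979) optimal sequential search ("Pandora's rule").
    # Distributions and parameter values are illustrative assumptions.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    def reservation_value(mu, sigma, cost):
        # z solves cost = E[max(U - z, 0)] for U ~ N(mu, sigma^2).
        gain = lambda z: (mu - z) * norm.cdf((mu - z) / sigma) \
                         + sigma * norm.pdf((mu - z) / sigma) - cost
        return brentq(gain, mu - 20 * sigma, mu + 20 * sigma)

    rng = np.random.default_rng(1)
    mu = np.array([1.0, 0.8, 0.5])        # pre-search expected utilities
    sigma = np.array([1.0, 1.0, 1.0])
    cost = np.array([0.05, 0.10, 0.02])   # search costs

    z = np.array([reservation_value(m, s, c) for m, s, c in zip(mu, sigma, cost)])
    order = np.argsort(-z)                # selection rule: highest reservation value first

    best, searched = -np.inf, []
    for j in order:
        if best >= z[j]:                  # stopping rule
            break
        searched.append(j)
        best = max(best, rng.normal(mu[j], sigma[j]))  # choice rule: keep the best draw

    print("reservation values:", z.round(3))
    print("searched (in order):", searched, "best sampled utility:", round(best, 3))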
By: | Ani Guerdjikova (GAEL - Laboratoire d'Economie Appliquée de Grenoble - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes); John Quiggin (University of Queensland [Brisbane]) |
Abstract: | We consider an infinite-horizon economy with differential awareness in the form of coarsening. Agents with limited awareness are averse to unfavorable surprises. As a result, their optimal trades are measurable with respect to their respective awareness partitions. We define an equilibrium with differential awareness and illustrate how the obtained equilibrium allocations observationally differ from those in economies with full awareness. In particular, economies with differential awareness can exhibit (i) lack of insurance against idiosyncratic risk; (ii) partial insurance against aggregate risk; (iii) biased state prices even when beliefs are correct; and (iv) overpricing of assets which pay on events with low aggregate payoffs. We next adapt the results of Guerdjikova and Quiggin (2019) to show that agents with different levels of awareness can survive and influence prices in the limit. In this sense, the characteristics identified above would persist in the long run. Moreover, differential awareness can lead to belief heterogeneity even in the limit. This contrasts with the classical result of Blume and Easley (2006) stating that only agents with beliefs closest to the truth can survive. Finally, we examine the individual welfare implications of bounded awareness. If an increase in awareness comes at the cost of wrong beliefs over the larger state space, bounded awareness can simultaneously increase individual welfare (with respect to the truth) and help avoid ruin. In this sense, heuristics which constrain agents to invest in "assets they understand" can be both ecologically rational in the sense of Gigerenzer (2007) and improve the stability of financial markets by allowing a larger set of agents to survive. |
Keywords: | ambiguity, ambiguity-aversion, survival
JEL: | D50 D81
Date: | 2023–01–30 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03962427&r=upt |
By: | Ani Guerdjikova (GAEL - Laboratoire d'Economie Appliquée de Grenoble - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes); Jürgen Eichberger (Universität Heidelberg [Heidelberg]) |
Abstract: | In this paper, we provide a novel framework for decision making under uncertainty based on information available in the form of a data set of cases. A case contains information about an action taken, an outcome obtained, and other circumstances that were recorded with the action and the outcome. The set of actions, the set of outcomes, and the set of possibly relevant recorded characteristics are derived from the cases in the data set. The information from the data set induces a belief function over outcomes for each action. From a decision maker's preferences over belief functions, one can derive a representation evaluating outcomes according to the α-maxmin criterion. New data affects behavioral parameters, such as awareness, ambiguity, and ambiguity attitude, and may suggest a classification of data into states. |
Keywords: | partial information, case-based decisions, data, objective ambiguity, subjective ambiguity attitudes
JEL: | D81
Date: | 2023–01–21 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03962412&r=upt |
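For readers unfamiliar with the criterion named in the abstract above, the α-maxmin evaluation of an action a with belief function Bel(·|a) over outcomes is standardly written as follows (notation ours; the paper's exact representation and axioms are not reproduced here):

    V(a)\;=\;\alpha\,\min_{p\,\in\,\operatorname{core}(Bel(\cdot\mid a))}\ \sum_{x}u(x)\,p(x)
    \;+\;(1-\alpha)\,\max_{p\,\in\,\operatorname{core}(Bel(\cdot\mid a))}\ \sum_{x}u(x)\,p(x),

where the core of the belief function collects all probability measures that dominate it, u is the utility index over outcomes, and α in [0,1] captures ambiguity attitude (α = 1 corresponds to maxmin, α = 0 to maxmax).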
By: | Manganelli, Simone |
Abstract: | Bayesian decisions are observationally identical to decisions with judgment. Decisions with judgment test whether a judgmental decision is optimal and, in case of rejection, move to the closest boundary of the confidence interval, for a given confidence level. The resulting decisions condition on sample realizations, which are used to construct the confidence interval itself. Bayesian decisions condition on sample realizations twice: with the tested hypothesis and with the choice of the confidence level. The second conditioning reveals that Bayesian decision makers have an ex ante confidence level equal to one, which is equivalent to assuming uncertainty-neutral behavior. Robust Bayesian decisions are characterized by an ex ante confidence level strictly lower than one and are therefore uncertainty averse. |
Keywords: | ambiguity aversion, confidence intervals, hypothesis testing, statistical decision theory
JEL: | C1 C11 C12 C13
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20232786&r=upt |
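A stylized sketch of the decision rule described in the abstract above, under simplifying assumptions (a decision equal to a normal population mean, known variance, two-sided test): keep the judgmental decision if it is not rejected, otherwise move to the closest boundary of the confidence interval. This illustrates the stated rule only, not the paper's general framework.

    # Stylized "decision with judgment": keep the judgmental decision if it lies
    # inside the confidence interval for the mean; otherwise move to the closest
    # boundary of that interval. Known variance is an illustrative assumption.
    import numpy as np
    from scipy.stats import norm

    def decision_with_judgment(sample, judgment, sigma, confidence=0.95):
        xbar = np.mean(sample)
        half = norm.ppf(0.5 + confidence / 2) * sigma / np.sqrt(len(sample))
        lo, hi = xbar - half, xbar + half
        if lo <= judgment <= hi:             # judgment not rejected: keep it
            return judgment
        return lo if judgment < lo else hi   # rejected: move to closest CI boundary

    rng = np.random.default_rng(2)
    sample = rng.normal(1.0, 2.0, size=100)  # illustrative data with sigma = 2 known

    print(decision_with_judgment(sample, judgment=0.0, sigma=2.0))  # typically moved up
    print(decision_with_judgment(sample, judgment=1.0, sigma=2.0))  # typically kept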
By: | Sean, Duffy; John, Smith |
Abstract: | Noise is a pervasive feature of economic choice. However, standard economics experiments are not well equipped to study this noise because of a design constraint: preferences are either unknown or only imperfectly measured by the experimenter. In such designs, where the optimal choice is not observable to the analyst, many important questions about the noise in apparently random choice cannot be addressed. We design an experiment to better understand stochastic choice by directing subjects to make incentivized binary choices between lines. Subjects are paid a function of the length of the selected line, so subjects will attempt to select the longer of the lines. We find a gradual (not sudden) relationship between the difference in the lengths of the lines and the probability of an optimal choice. Our analysis suggests that the errors are better described by a Gumbel distribution than by a normal distribution, and our simulated data increase our confidence in this inference. We find evidence that suboptimal choices are associated with longer response times than optimal choices, which appears to be consistent with the predictions of Fudenberg, Strack, and Strzalecki (2018), although the relationship between response time and the optimality of choice becomes weaker across trials. In our experiment, 54 of 56 triples are consistent with Strong Stochastic Transitivity, and this is the median outcome in our simulated data. Finally, we find a relationship between choice and attention, although we find strong evidence that the relationship is endogenous. |
Keywords: | Stochastic transitivity, choice theory, judgment, memory, search |
JEL: | C91 D12 |
Date: | 2023–02–17 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:116382&r=upt |
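As a minimal illustration of the error structure discussed in the abstract above, the sketch below simulates binary line-length choices with additive Gumbel noise (which implies logistic choice probabilities) and with normal noise (probit), and reports how the frequency of optimal choices rises gradually with the length difference. The noise scale and the grid of length differences are assumptions for illustration; this is not the authors' design or estimation procedure.

    # Simulated binary choices between two lines: the longer line is optimal,
    # but perceived lengths are noisy. Gumbel noise implies logistic ("logit")
    # choice probabilities; normal noise implies probit. Scales are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    diffs = np.array([1, 2, 4, 8, 16])   # length differences (hypothetical units)
    n = 20000
    scale = 5.0

    for noise in ("gumbel", "normal"):
        freq_optimal = []
        for d in diffs:
            if noise == "gumbel":
                eps = rng.gumbel(0.0, scale, (n, 2))
            else:
                eps = rng.normal(0.0, scale, (n, 2))
            perceived = np.array([d, 0.0]) + eps      # option 0 is longer by d
            freq_optimal.append((perceived[:, 0] > perceived[:, 1]).mean())
        print(noise, [round(f, 3) for f in freq_optimal])
    # The frequency of optimal choices increases gradually with the difference,
    # consistent with the gradual (not sudden) relationship reported above.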
By: | Fang, Yi; Niu, Hui; Lin, Yuen |
Abstract: | We propose a simple algorithm for ex-ante valuation based on prospect theory. Our results reveal a strong and robust pricing effect associated with predicted values based on prospect theory (PV) in the US market: stocks with higher ex-ante PV are associated with higher returns. Our findings indicate that no equilibrium exists for ex-ante PV. Our evidence shows that liquidity has a limited impact on the ex-ante PV effect, which is concentrated mainly in liquid stocks. In general, liquidity, equilibrium, and the limits of arbitrage are crucial to understanding the ex-ante PV effect. |
Keywords: | ex-ante valuation; prospect theory; equilibrium; liquidity; crash; jackpot |
JEL: | G02 G11 G12 G14 G17 |
Date: | 2023–01–01 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:116386&r=upt |
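To fix ideas, the sketch below evaluates a simple binary return prospect with the Tversky and Kahneman (1992) value and probability-weighting functions, the usual building blocks of prospect-theory valuations of this kind. The binary prospect and the use of the standard TK92 parameter estimates are illustrative assumptions; the authors' ex-ante PV algorithm itself is not reproduced here.

    # Cumulative prospect theory value of a binary return prospect, using the
    # Tversky-Kahneman (1992) functional forms and parameter estimates.
    # The prospect itself is illustrative, not the paper's PV construction.

    ALPHA, LAM = 0.88, 2.25            # value-function curvature and loss aversion
    GAMMA_GAIN, GAMMA_LOSS = 0.61, 0.69

    def value(x):
        # Reference-dependent value function (reference point = zero return).
        return x ** ALPHA if x >= 0 else -LAM * (-x) ** ALPHA

    def weight(p, gamma):
        # TK92 probability weighting: overweights small probabilities.
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    def cpt_binary(gain, p_gain, loss):
        # One gain with probability p_gain, one loss with probability 1 - p_gain.
        return weight(p_gain, GAMMA_GAIN) * value(gain) \
             + weight(1 - p_gain, GAMMA_LOSS) * value(loss)

    # Example: 10% chance of a +50% return, 90% chance of a -5% return.
    print(round(cpt_binary(0.50, 0.10, -0.05), 4))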
By: | Zheng, Jiakun; Couprie, Helene; Hopfensitz, Astrid |
Abstract: | A total of 101 real couples participated in a controlled experimental risk-taking task involving variations in household and individual income risks, while controlling for ex-ante income inequality. Our design disentangles the effect of household risk, of intra-household risk inequality, and of ex-post pay-off inequality. We find that most couples (about 79%) pooled their risk at the household level when risks were borne symmetrically, but a significant proportion of couples (about 36%) failed to do so when individual risks were borne asymmetrically. Furthermore, we find that intra-household risk inequality has a larger impact on non-married couples than on married ones. |
Keywords: | Experiment; Income pooling; Household risk taking; Inequality |
JEL: | C91 C92 D19 D81 |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:116537&r=upt |
By: | Roman Frydman (Department of Economics, New York University); Morten Nyboe Tabor (Institute for New Economic Thinking) |
Abstract: | We open a New Keynesian Phillips curve model to nonrecurring structural shifts in its parameters and propose a novel implementation of Muth's hypothesis to represent market participants' inflation expectations under Knightian uncertainty arising from such shifts. We refer to our approach as the Knight-Muth hypothesis (KMH). We find empirical support for KMH's core premise that processes driving inflation time-series and inflation forecasts undergo nonrecurring structural shifts. In contrast to the rational expectations hypothesis and behavioral specifications, KMH reconciles model consistency with an autonomous role for participants' expectations in driving aggregate outcomes and the influence of psychological factors on those expectations. |
Keywords: | Expectations; Structural Shifts; Unforeseeable Change; Knightian Uncertainty; Muth's Hypothesis. |
JEL: | D83 D84 E31 E37 |
Date: | 2022–12–01 |
URL: | http://d.repec.org/n?u=RePEc:thk:wpaper:inetwp194&r=upt |
By: | Davide Carpentiere; Angelo Petralia |
Abstract: | Many bounded rationality approaches discussed in the literature are models of limited consideration. We provide a novel representation and data interpretation for some of the analyzed behavioral patterns. Moreover, we characterize a testable choice procedure that allows the experimenter to uniquely infer limited consideration from irrational features of the observed behavior. |
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2302.00978&r=upt |
By: | Pablo Garcia Sanchez (Banque centrale du Luxembourg, Département Economie et Recherche); Luca Marchiori (Banque centrale du Luxembourg, Département Economie et Recherche); Olivier Pierrard (Banque centrale du Luxembourg, Département Economie et Recherche) |
Abstract: | Long-term care (LTC) expenditures of the elderly are high in developed countries and will grow further with population aging. In addition, LTC costs are heterogeneous across individuals and unknown early in life. In this paper, we add uncertainty over the arrival and magnitude of future LTC costs into a life-cycle model with endogenous aging, and we analyze how this affects the optimal behavior of agents. We show that uncertainty boosts precautionary savings, lowers investment in preventive care, and weakens the effectiveness of subsidies to encourage prevention. Our results therefore suggest that uncertainty should not be ignored in models that study positive or normative aspects of health investment. |
Keywords: | health; long-term care costs; uncertainty; stochastic model |
JEL: | C60 D15 D81 I12 I18 |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:ctl:louvir:2023006&r=upt |
By: | Jean-Marc Bonnisseau (Centre d'Economie de la Sorbonne - Université Paris1 Pantheon-Sorbonne, Paris School of Economics); Alain Chateauneuf (Centre d'Economie de la Sorbonne); Jean-Pierre Drugeon (Paris School of Economics, CNRS) |
Abstract: | In this paper, we show that by merely fixing upper and lower bounds for the stream of consumptions, we can compute the optimal planning of consumptions independently of the explicit sequence of discount factors, as long as they are assumed to be strictly decreasing. The optimal solution is unique and exhibits two regimes with a pivotal period in the middle. The same principle applies to future reimbursements of a debt as soon as we assume the discount factors to be strictly increasing. One therefore obtains plans satisfying a form of intergenerational fairness: the highest effort is borne by the first generations and then decreases for the remaining ones. Furthermore, we show that the solution is time consistent, and we study the link with the standard discounted utilitarian model. |
Keywords: | intertemporal allocation; multiple regimes; discount rates; fairness; probability intervals; capacities |
JEL: | D11 D90 |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:mse:cesdoc:23004&r=upt |
By: | IDA Takanori; ISHIHARA Takunori; ITO Koichiro; KIDO Daido; KITAGAWA Toru; SAKAGUCHI Shosei; SASAKI Shusaku |
Abstract: | We develop an optimal policy assignment rule that integrates two distinctive approaches commonly used in economics: targeting by observable characteristics and targeting through self-selection. Our method uses experimental or quasi-experimental data to identify who should be treated, who should be untreated, and who should self-select to achieve a policymaker's objective. Applying this method to a randomized controlled trial on a residential energy rebate program, we find that targeting that leverages both observable data and self-selection outperforms conventional targeting for a standard utilitarian welfare function and for welfare functions that balance the equity-efficiency trade-off. We highlight that the LATE framework (Imbens and Angrist, 1994) can be used to investigate the mechanism behind our approach. By introducing new estimators, called the LATEs for takers and non-takers, we show that our method allows policymakers to identify whose self-selection would be valuable and whose would be harmful to social welfare. |
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:eti:dpaper:23011&r=upt |