on Utility Models and Prospect Theory |
By: | Felix Brandt; Patrick Lederer |
Abstract: | An important -- but very demanding -- property in collective decision-making is strategyproofness, which requires that voters cannot benefit from submitting insincere preferences. Gibbard (1977) has shown that only rather unattractive rules are strategyproof, even when allowing for randomization. However, Gibbard's theorem is based on a rather strong interpretation of strategyproofness, which deems a manipulation successful if it increases the voter's expected utility for at least one utility function consistent with his ordinal preferences. In this paper, we study weak strategyproofness, which deems a manipulation successful if it increases the voter's expected utility for all utility functions consistent with his ordinal preferences. We show how to systematically design attractive, weakly strategyproof social decision schemes (SDSs) and explore their limitations for both strict and weak preferences. In particular, for strict preferences, we show that there are weakly strategyproof SDSs that are either ex post efficient or Condorcet-consistent, while neither even-chance SDSs nor pairwise SDSs satisfy both properties and weak strategyproofness at the same time. By contrast, for the case of weak preferences, we discuss two sweeping impossibility results that preclude the existence of appealing weakly strategyproof SDSs. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.11977 |
By: | Flora C. Shi; Stephen Bates; Martin J. Wainwright |
Abstract: | Statistical protocols are often used for decision-making involving multiple parties, each with their own incentives, private information, and ability to influence the distributional properties of the data. We study a game-theoretic version of hypothesis testing in which a statistician, also known as a principal, interacts with strategic agents that can generate data. The statistician seeks to design a testing protocol with controlled error, while the data-generating agents, guided by their utility and prior information, choose whether or not to opt in based on expected utility maximization. This strategic behavior affects the data observed by the statistician and, consequently, the associated testing error. We analyze this problem for general concave and monotonic utility functions and prove an upper bound on the Bayes false discovery rate (FDR). Underlying this bound is a form of prior elicitation: we show how an agent's choice to opt in implies a certain upper bound on their prior null probability. Our FDR bound is unimprovable in a strong sense, achieving equality at a single point for an individual agent and at any countable number of points for a population of agents. We also demonstrate that our testing protocols exhibit a desirable maximin property when the principal's utility is considered. To illustrate the qualitative predictions of our theory, we examine the effects of risk aversion, reward stochasticity, and signal-to-noise ratio, as well as the implications for the Food and Drug Administration's testing protocols. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.16452 |
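The prior-elicitation logic in the abstract — that an agent's decision to opt in reveals an upper bound on their prior null probability — can be illustrated with a deliberately stylized calculation. Everything below (risk-neutral utility form, reward, participation cost, test level and power) is an assumption for illustration, not the paper's model.

```python
def opt_in_prior_bound(u_reward, cost, alpha, power):
    """Largest prior null probability pi0 at which opting in is worthwhile.

    Stylized setup (all assumptions, not the paper's exact model): the
    agent pays `cost` to enter a level-`alpha` test with the given
    `power`, and receives utility `u_reward` only if the test rejects.
    Expected utility of opting in:
        EU(pi0) = [pi0 * alpha + (1 - pi0) * power] * u_reward - cost
    EU decreases in pi0 (since power > alpha), so opting in (EU >= 0)
    reveals pi0 <= the bound returned here.
    """
    if power <= alpha:
        raise ValueError("power must exceed the test level")
    return (power * u_reward - cost) / ((power - alpha) * u_reward)

# A hypothetical agent with u_reward=10, cost=4, alpha=0.05, power=0.8
# who opts in thereby reveals pi0 <= (0.8*10 - 4) / (0.75*10).
bound = opt_in_prior_bound(u_reward=10.0, cost=4.0, alpha=0.05, power=0.8)
```

At the bound itself the expected utility of opting in is exactly zero, which is the sense in which opting in is informative about the prior.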
By: | Ruodu Wang; Qinyu Wu |
Abstract: | We obtain a full characterization of consistency with respect to higher-order stochastic dominance within the rank-dependent utility model. Unlike earlier results in the literature, we do not assume any conditions, such as differentiability or continuity, on the utility function or the probability weighting function. It turns out that the level of generality we offer leads to models with a discontinuous probability weighting function that nonetheless satisfy prudence. In particular, the corresponding probability weighting function can only have a jump at 1, and must be linear on [0, 1). |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.15350 |
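A concrete weighting function consistent with this characterization — linear on $[0,1)$ with a jump at 1 — can be written as follows (an illustrative instance, not one proposed in the paper):

```latex
w(p) =
\begin{cases}
(1-\beta)\,p, & 0 \le p < 1,\\
1, & p = 1,
\end{cases}
\qquad \beta \in (0,1],
```

so that $w$ is increasing and linear on $[0,1)$, jumps by $\beta$ at $p=1$, and recovers the identity weighting as $\beta \to 0$.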
By: | Guohui Guan; Zongxia Liang; Yi Xia |
Abstract: | This paper studies robust reinsurance and investment games among competitive insurers. Model uncertainty is characterized by a class of equivalent probability measures. Each insurer is concerned with relative performance under the worst-case scenario. Insurers' surplus processes are approximated by drifted Brownian motion with common and idiosyncratic insurance risks. The insurers can purchase proportional reinsurance to share the insurance risk, with the reinsurance premium calculated by the variance principle. We consider an incomplete market driven by the 4/2 stochastic volatility model. This paper formulates the robust mean-field game for a non-linear system originating from the variance principle and the 4/2 model. For the case of an exponential utility function, we derive closed-form solutions for the $n$-insurer game and the corresponding mean-field game. We show that relative concerns lead to new hedging terms in the investment and reinsurance strategies. Model uncertainty can significantly change the insurers' hedging demands. The hedging demands in the investment-reinsurance strategies exhibit highly non-linear dependence on the insurers' competitive coefficients, risk aversion coefficients, and ambiguity aversion coefficients. Finally, numerical results demonstrate the herd effect of competition. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.09157 |
By: | Christopher Turansick |
Abstract: | We study a dynamic random utility model that allows for consumption dependence. We axiomatically analyze this model and find insights that allow us to distinguish between behavior that arises due to consumption dependence and behavior that arises due to state dependence. Building on our axiomatic analysis, we develop a hypothesis test for consumption dependent random utility. We show that our hypothesis test offers computational improvements over the natural extension of Kitamura and Stoye (2018) to our environment. Finally, we consider a parametric application of our model and show how an analyst can predict the long run perturbation to market shares due to habit formation using choice data from only two periods. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.05344 |
By: | Grégory Ponthière (ENS Rennes - École normale supérieure - Rennes, CREM - Centre de recherche en économie et management - UNICAEN - Université de Caen Normandie - NU - Normandie Université - UR - Université de Rennes - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | Nozick's ‘utility monster’ is often regarded as impossible, because one life cannot be better than a large number of other lives. Against that view, I propose a purely marginalist account of the utility monster, defining the monster by a higher sensitivity of well-being to resources (instead of a larger total well-being), and I introduce the concept of the collective utility monster to account for resource predation by a group. Since longevity strengthens the sensitivity of well-being to resources, large groups of long-lived persons may, if their longevity advantage is sufficiently strong, fall under the concept of the collective utility monster, contrary to moral intuition. |
Keywords: | Longevity, mortality, inequalities, utilitarianism, Nozick’s utility monster |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04834045 |
By: | Ester Sudano |
Abstract: | We model stochastic choices with categorization, resulting from the preliminary step of grouping alternatives into homogeneous disjoint classes. The agent randomly chooses one class among those available, then randomly picks an item within the selected class. We give a formal definition of a choice generated by this procedure and provide a characterization. The characterizing properties allow an external observer to detect that categorization is applied. Under a more general interpretation, the model describes the observed choice as the composition of independent subchoices. This composition preserves rationalizability by random utility maximization. A generalization of the model subsumes the Luce model and the Nested Logit. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.03554 |
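The two-stage procedure described above — pick a class, then pick an item within it — composes into simple product probabilities. A minimal sketch (the categories and weights are hypothetical, and the Luce-style proportional weights within each stage are one convenient assumption):

```python
def choice_probabilities(menu, categories, class_weight, item_weight):
    """Stochastic choice from `menu` via categorization:
    P(x | menu) = P(class of x | menu) * P(x | class of x, menu),
    with Luce-style (proportional-weight) subchoices at each stage."""
    # Restrict each class to the alternatives actually on the menu.
    available = {c: [x for x in items if x in menu]
                 for c, items in categories.items()}
    available = {c: items for c, items in available.items() if items}
    total_cw = sum(class_weight[c] for c in available)
    probs = {}
    for c, items in available.items():
        p_class = class_weight[c] / total_cw
        total_iw = sum(item_weight[x] for x in items)
        for x in items:
            probs[x] = p_class * item_weight[x] / total_iw
    return probs

# Example: two classes with unequal salience; all items equally weighted.
p = choice_probabilities(
    menu={"a1", "a2", "b1"},
    categories={"A": ["a1", "a2"], "B": ["b1"]},
    class_weight={"A": 2.0, "B": 1.0},
    item_weight={"a1": 1.0, "a2": 1.0, "b1": 1.0},
)
# p["a1"] = (2/3) * (1/2) = 1/3, and the probabilities sum to 1.
```

Because each stage is itself a random subchoice, the composed rule stays within the random-utility family, which is the preservation property the abstract highlights.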
By: | Huy Chau; Duy Nguyen; Thai Nguyen |
Abstract: | In a reinforcement learning (RL) framework, we study the exploratory version of the continuous-time expected utility (EU) maximization problem with a portfolio constraint that includes widely used financial regulations such as short-selling constraints and borrowing prohibition. The optimal feedback policy of the exploratory unconstrained classical EU problem is shown to be Gaussian. In the case where the portfolio weight is constrained to a given interval, the corresponding constrained optimal exploratory policy follows a truncated Gaussian distribution. We verify that the closed-form optimal solutions obtained for logarithmic and quadratic utility, in both the unconstrained and constrained settings, converge to their non-exploratory expected utility counterparts as the exploration weight goes to zero. Finally, we establish a policy improvement theorem and devise an implementable reinforcement learning algorithm by casting the optimization problem in a martingale framework. Our numerical examples show that exploration leads to an optimal wealth process that is more dispersed, with heavier tails, than in the case without exploration; this effect weakens as the exploration parameter decreases. Moreover, the numerical implementation confirms the intuition that a broader domain of investment opportunities necessitates a higher exploration cost. Notably, when subjected to both short-selling and money-borrowing constraints, the exploration cost becomes negligible compared to the unconstrained case. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.10692 |
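The constrained exploratory policy above is a truncated Gaussian over the admissible portfolio weights. A stdlib-only sketch via rejection sampling (the mean, standard deviation, and constraint interval are placeholder values, not the paper's optimal-policy formulas):

```python
import random

def sample_truncated_gaussian(mean, std, low, high, rng):
    """Draw from a Gaussian restricted to [low, high] by rejection.

    Adequate as a sketch when [low, high] carries non-negligible mass;
    the paper's constrained exploratory policy has this truncated form,
    with mean/std determined by the model.
    """
    if low > high:
        raise ValueError("empty support")
    while True:
        x = rng.gauss(mean, std)
        if low <= x <= high:
            return x

# Portfolio weight confined to [0, 1]: short-selling and borrowing
# both prohibited. Policy parameters here are illustrative.
rng = random.Random(0)
weights = [sample_truncated_gaussian(0.6, 0.5, 0.0, 1.0, rng)
           for _ in range(1000)]
```

Rejection sampling is the simplest correct approach here; an inverse-CDF sampler would be preferable if the constraint interval sat far in the Gaussian's tail.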
By: | Zongxia Liang; Sheng Wang; Jianming Xia |
Abstract: | This paper discusses a nonlinear integral equation arising from a class of time-consistent portfolio selection problem. We propose a unified framework requiring minimal assumptions, such as right-continuity of market coefficients and square-integrability of the market price of risk. Our main contribution is proving the existence and uniqueness of the square-integrable solution for the integral equation under mild conditions. Illustrative applications include the mean-variance portfolio selection and the utility maximization with random risk aversion. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.02446 |
By: | N. J. Chater; R. S. MacKay |
Abstract: | An axiomatic approach to macroeconomics based on the mathematical structure of thermodynamics is presented. It deduces relations between aggregate properties of an economy, concerning quantities and flows of goods and money, prices and the value of money, without any recourse to microeconomic foundations about the preferences and actions of individual economic agents. The approach has three important payoffs. 1) It provides a new and solid foundation for aspects of standard macroeconomic theory such as the existence of market prices, the value of money, the meaning of inflation, the symmetry and negative-definiteness of the macro-Slutsky matrix, and the Le Chatelier-Samuelson principle, without relying on implausibly strong rationality assumptions about individual microeconomic agents. 2) The approach generates new results, including implications for money flow and trade when two or more economies are put in contact, in terms of new concepts such as economic entropy, economic temperature, goods' values and money capacity. Some of these are related to standard economic concepts (e.g., the marginal utility of money, market prices), yet our approach derives them at a purely macroeconomic level and gives them a meaning independent of the usual restrictions. Other concepts, such as economic entropy and temperature, have no direct counterparts in standard economics, but they have important economic interpretations and implications, as aggregate utility and the inverse marginal aggregate utility of money, respectively. 3) This analysis promises to open up new frontiers in macroeconomics by building a bridge to ideas from non-equilibrium thermodynamics. More broadly, we hope that the economic analogue of entropy (governing the possible transitions between states of economic systems) may prove to be as fruitful for the social sciences as entropy has been in the natural sciences. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.00886 |
By: | Ken-ichi Hashimoto (Graduate School of Economics, Kobe University); Ryonghun Im (School of Economics, Kwansei Gakuin University); Takuma Kunieda (School of Economics, Kwansei Gakuin University); Akihisa Shibata (Institute of Economic Research, Kyoto University) |
Abstract: | By applying a simple dynamic general equilibrium model without exogenous shocks inhabited by infinitely lived capitalists and workers, we show that a higher degree of relative risk aversion can destabilize an economy. In traditional real business cycle (RBC) theory, a higher degree of relative risk aversion dampens the amplitude of the consumption fluctuations caused by exogenous shocks through consumption smoothing. However, a higher degree of relative risk aversion combined with a high degree of elasticity of the marginal product of capital can also lead to the emergence of a nonlinear mechanism that causes endogenous business fluctuations. The nontrivial steady state loses stability due to the higher degree of relative risk aversion; thus, endogenous business fluctuations can occur. This result suggests that for a deeper understanding of boom-bust cycles, researchers should merge exogenous and endogenous business fluctuations when investigating economies. |
Keywords: | endogenous business fluctuations, relative risk aversion, dynamic general equilibrium, instability |
JEL: | E1 E2 E3 |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:kgu:wpaper:285 |
By: | Thomas F Epper (LEM - Lille économie management - UMR 9221 - UA - Université d'Artois - UCL - Université catholique de Lille - Université de Lille - CNRS - Centre National de la Recherche Scientifique, CNRS - Centre National de la Recherche Scientifique, IÉSEG School Of Management [Puteaux]); Helga Fehr-Duda (UZH - Universität Zürich [Zürich] = University of Zurich) |
Abstract: | Standard economic models view risk taking and time discounting as two independent dimensions of decision making. However, mounting experimental evidence demonstrates striking parallels in patterns of risk taking and time discounting behavior and systematic interaction effects, which suggests that there may be common underlying forces driving these interactions. Here we show that the inherent uncertainty associated with future prospects together with individuals' proneness to probability weighting generates a unifying framework for explaining a large number of puzzling behavioral regularities: delay-dependent risk tolerance, aversion to sequential resolution of uncertainty, preferences for the timing of the resolution of uncertainty, the differential discounting of risky and certain outcomes, hyperbolic discounting, subadditive discounting, and the order dependence of prospect valuation. Furthermore, all these phenomena can be predicted simultaneously with the same set of preference parameters. |
Keywords: | risk preferences, time preferences, preference interaction, increasing risk tolerance |
Date: | 2024–02–01 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-03473431 |
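The paper's central mechanism — future prospects are inherently uncertain, and nonlinear probability weighting of that uncertainty distorts effective discounting — can be sketched numerically. The Prelec weighting form and the per-period survival probability below are illustrative assumptions, not the authors' calibration:

```python
import math

def prelec_weight(p, alpha=0.5):
    """Prelec probability weighting w(p) = exp(-(-ln p)^alpha); an
    inverse-S shape for alpha < 1 (an assumed functional form)."""
    if p <= 0.0:
        return 0.0
    return math.exp(-((-math.log(p)) ** alpha))

def effective_discount(t, survival=0.95, alpha=0.5):
    # A nominally certain payoff at delay t actually materializes with
    # probability survival**t; the agent evaluates the weighted version.
    return prelec_weight(survival ** t, alpha)

def implied_rate(t, survival=0.95, alpha=0.5):
    """One-period discount rate between delays t and t+1."""
    return (effective_discount(t, survival, alpha)
            / effective_discount(t + 1, survival, alpha) - 1.0)

# Implied per-period rates decline with delay: a hyperbolic-style
# discounting pattern emerges from risk plus probability weighting alone,
# without any time preference built in.
rates = [implied_rate(t) for t in range(1, 6)]
```

With linear weighting (alpha = 1) the same construction yields a constant per-period rate, which is the sense in which the weighting, not the risk itself, drives the puzzle.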
By: | Dirk Bergemann; Marek Bojko; Paul Dütting; Renato Paes Leme; Haifeng Xu; Song Zuo |
Abstract: | We study mechanism design when agents hold private information about both their preferences and a common payoff-relevant state. We show that standard message-driven mechanisms cannot implement socially efficient allocations when agents have multidimensional types, even under favorable conditions. To overcome this limitation, we propose data-driven mechanisms that leverage additional post-allocation information, modeled as an estimator of the payoff-relevant state. Our data-driven mechanisms extend the classic Vickrey-Clarke-Groves class. We show that they achieve exact implementation in posterior equilibrium when the state is either fully revealed or the utility is linear in an unbiased estimator. We also show that they achieve approximate implementation with a consistent estimator, converging to exact implementation as the estimator converges, and present bounds on the convergence rate. We demonstrate applications to digital advertising auctions and large language model (LLM)-based mechanisms, where user engagement naturally reveals relevant information. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.16132 |
By: | Yannai A. Gonczarowski; Ella Segev |
Abstract: | We axiomatically define a cardinal social inefficiency function, which, given a set of alternatives and individuals' vNM preferences over the alternatives, assigns a unique number -- the social inefficiency -- to each alternative. These numbers -- and not only their order -- are uniquely defined by our axioms despite no exogenously given interpersonal comparison, outside option, or disagreement point. We interpret these numbers as per capita losses in endogenously normalized utility. We apply our social inefficiency function to a setting in which interpersonal comparison is notoriously hard to justify -- object allocation without money -- leveraging techniques from computer science to prove an approximate-efficiency result for the Random Serial Dictatorship mechanism. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.11984 |
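The Random Serial Dictatorship mechanism analyzed above is simple to state: draw a uniformly random priority order over agents, and let each agent in turn take their favorite object still available. A minimal sketch (the preference profile is hypothetical):

```python
import random

def random_serial_dictatorship(preferences, rng):
    """preferences: dict mapping agent -> list of objects, best first.
    Returns a dict mapping agent -> assigned object."""
    order = list(preferences)
    rng.shuffle(order)                     # uniformly random priority order
    remaining = set().union(*preferences.values())
    allocation = {}
    for agent in order:
        # Each agent takes their most-preferred remaining object.
        pick = next(o for o in preferences[agent] if o in remaining)
        allocation[agent] = pick
        remaining.discard(pick)
    return allocation

prefs = {
    "ann": ["x", "y", "z"],
    "bob": ["x", "z", "y"],
    "cat": ["y", "x", "z"],
}
alloc = random_serial_dictatorship(prefs, random.Random(7))
```

With as many objects as agents and complete rankings, every run assigns each object to exactly one agent, whatever the drawn order.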
By: | Guillaume Bataille (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique, AMU - Aix Marseille Université, CNRS - Centre National de la Recherche Scientifique) |
Abstract: | This paper derives closed‐form solutions for strategic, simultaneous harvesting in a predator–prey system. Using a parametric constraint, it establishes the existence and uniqueness of a linear feedback‐Nash equilibrium involving two specialized fleets and allows for continuous-time results for a class of payoffs with constant elasticity of the marginal utility. These results contribute to the scarce literature on analytically tractable predator–prey models with endogenous harvesting. A discussion based on industry size effects is provided to highlight the role played by biological versus strategic interactions in the multispecies context. Recommendations for Resource Managers: This model presents a thorough examination of the economic inefficiencies inherent in the exploitation dynamics of two interdependent species, elucidating the complex interplay between ecological interactions and economic outcomes. The size of the fishing industries constitutes a significant variable that must be integrated into the formulation of pertinent policy recommendations. This constitutes an advancement towards a more time‐consistent approach to Ecosystem‐Based Fishery Management (EBFM). |
Keywords: | common‐pool resource, dynamic games, fisheries, predator–prey relationship |
Date: | 2024–10–15 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04793204 |
By: | Cuong Le; Tien Mai; Ngan Ha Duong; Minh Hoang Ha |
Abstract: | We study a competitive facility location problem, where customer behavior is modeled and predicted using a discrete choice random utility model. The goal is to strategically place new facilities to maximize the overall captured customer demand in a competitive marketplace. In this work, we introduce two novel considerations. First, the total customer demand in the market is not fixed but is modeled as an increasing function of the customers' total utilities. Second, we incorporate a new term into the objective function, aiming to balance the firm's benefits and customer satisfaction. Our new formulation exhibits a highly nonlinear structure and cannot be solved directly by existing approaches. To address this, we first demonstrate that, under a concave market-expansion function, the objective function is concave and submodular, allowing a $(1-1/e)$-approximate solution via a simple polynomial-time greedy algorithm. We then develop a new method, called Inner-approximation, which enables us to approximate the mixed-integer nonlinear problem (MINLP), with arbitrary precision, by an MILP without introducing additional integer variables. We further demonstrate that our inner-approximation method consistently yields lower approximations than the outer-approximation methods typically used in the literature. Moreover, we extend our setting by considering a general (non-concave) market-expansion function and show that the Inner-approximation mechanism still enables us to approximate the resulting MINLP, with arbitrary precision, by an MILP. To further enhance this MILP, we show how to significantly reduce the number of additional binary variables by leveraging concave areas of the objective function. Extensive experiments demonstrate the efficiency of our approaches. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.17021 |
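The greedy route mentioned in the abstract rests on the classic result that greedy selection gives a (1 − 1/e) guarantee for monotone submodular maximization under a cardinality constraint. The sketch below uses a stand-in captured-demand objective (MNL-style shares with a concave, logarithmic market expansion); the data and functional form are illustrative, not the paper's formulation:

```python
import math

def captured_demand(open_set, cand_util, comp_util):
    """Stand-in objective: per customer, market demand expands concavely
    with total utility, and the firm captures an MNL-style share."""
    total = 0.0
    for utils, comp in zip(cand_util, comp_util):
        ours = sum(utils[j] for j in open_set)
        market = math.log(1.0 + ours + comp)        # concave expansion
        share = ours / (ours + comp) if ours + comp > 0 else 0.0
        total += market * share
    return total

def greedy_locations(k, n_locations, cand_util, comp_util):
    """Greedy selection: repeatedly add the facility with the largest
    marginal gain. For monotone submodular objectives this achieves a
    (1 - 1/e) approximation."""
    chosen = set()
    for _ in range(k):
        candidates = [j for j in range(n_locations) if j not in chosen]
        best = max(candidates,
                   key=lambda j: captured_demand(chosen | {j},
                                                 cand_util, comp_util))
        chosen.add(best)
    return chosen

# Two customers, three candidate locations, fixed competitor utilities.
cand_util = [[1.0, 0.5, 0.2], [0.3, 0.8, 0.4]]
comp_util = [1.0, 0.5]
chosen = greedy_locations(2, 3, cand_util, comp_util)
```

The guarantee requires the objective to actually be monotone and submodular, which the paper establishes for its formulation under a concave market-expansion function; the toy objective here is only a stand-in for experimentation.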
By: | Bletzinger, Tilman; Lemke, Wolfgang; Renne, Jean-Paul |
Abstract: | Inflation risk premiums tend to be positive in an economy mainly hit by supply shocks, and negative if demand shocks dominate. Risk premiums also fluctuate with risk aversion. We shed light on this nexus in a linear-quadratic equilibrium macro-finance model featuring time variation in the inflation-consumption correlation and in risk aversion. We obtain analytical solutions for real and nominal yield curves and for risk premiums. While changes in the inflation-consumption correlation drive nominal yields, changes in risk aversion drive real yields and act as an amplifier on nominal yields. Combining a trend-cycle specification of real consumption with hysteresis effects generates an upward-sloping real yield curve. Estimating the model on US data from 1961 to 2019 confirms substantial time variation in inflation risk premiums: distinctly positive in the earlier part of our sample, especially during the 1980s, and turning negative with the onset of the new millennium. |
JEL: | E43 E44 C32 |
Keywords: | demand and supply, inflation risk premiums, risk aversion, term structure model |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:ecb:ecbwps:20253012 |
By: | Ilke Aydogan (LEM - Lille économie management - UMR 9221 - UA - Université d'Artois - UCL - Université catholique de Lille - Université de Lille - CNRS - Centre National de la Recherche Scientifique); Loïc Berger (CNRS - Centre National de la Recherche Scientifique, IÉSEG School Of Management [Puteaux], EIEE - European Institute on Economics and the Environment, CMCC - Centro Euro-Mediterraneo per i Cambiamenti Climatici [Bologna]); Vincent Théroude (BETA - Bureau d'Économie Théorique et Appliquée - AgroParisTech - UNISTRA - Université de Strasbourg - Université de Haute-Alsace (UHA) - Université de Haute-Alsace (UHA) Mulhouse - Colmar - UL - Université de Lorraine - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement) |
Abstract: | We investigate the validity of a double random incentive system where only a subset of subjects is paid for one of their choices. By focusing on individual decision-making under risk and ambiguity, we show that using either a standard random incentive system, where all subjects are paid, or a double random system, where only 10% of subjects are paid, yields similar preference elicitation results. These findings suggest that adopting a double random incentive system could significantly reduce experimental costs and logistic efforts, thereby facilitating the exploration of individual decision-making in larger-scale and higher-stakes experiments. |
Keywords: | Experimental methodology, Payment methods, Incentives, Ambiguity elicitation |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04818422 |
By: | Neil Christy; Amanda Ellen Kowalski |
Abstract: | We present a design-based model of a randomized experiment in which the observed outcomes are informative about the joint distribution of potential outcomes within the experimental sample. We derive a likelihood function that maintains curvature with respect to the joint distribution of potential outcomes, even when holding the marginal distributions of potential outcomes constant -- curvature that is not maintained in a sampling-based likelihood that imposes a large sample assumption. Our proposed decision rule guesses the joint distribution of potential outcomes in the sample as the distribution that maximizes the likelihood. We show that this decision rule is Bayes optimal under a uniform prior. Our optimal decision rule differs from and significantly outperforms a "monotonicity" decision rule that assumes no defiers or no compliers. For sample sizes ranging from 2 to 40, we show that the Bayes expected utility of the optimal rule increases relative to the monotonicity rule as the sample size increases. In two experiments in health care, we show that the joint distribution of potential outcomes that maximizes the likelihood need not include compliers even when the average outcome in the intervention group exceeds the average outcome in the control group, and that the maximizer of the likelihood may include both compliers and defiers, even when the average intervention effect is large and statistically significant. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.16352 |
By: | Clémentine Bouleau (PSE - Paris School of Economics - UP1 - Université Paris 1 Panthéon-Sorbonne - ENS-PSL - École normale supérieure - Paris - PSL - Université Paris Sciences et Lettres - EHESS - École des hautes études en sciences sociales - ENPC - École nationale des ponts et chaussées - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, CES - Centre d'économie de la Sorbonne - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Nicolas Jacquemet (PSE - Paris School of Economics - UP1 - Université Paris 1 Panthéon-Sorbonne - ENS-PSL - École normale supérieure - Paris - PSL - Université Paris Sciences et Lettres - EHESS - École des hautes études en sciences sociales - ENPC - École nationale des ponts et chaussées - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, CES - Centre d'économie de la Sorbonne - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Maël Lebreton (PSE - Paris School of Economics - UP1 - Université Paris 1 Panthéon-Sorbonne - ENS-PSL - École normale supérieure - Paris - PSL - Université Paris Sciences et Lettres - EHESS - École des hautes études en sciences sociales - ENPC - École nationale des ponts et chaussées - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Paris 1 Panthéon-Sorbonne - ENS-PSL - École normale supérieure - Paris - PSL - Université Paris Sciences et Lettres - EHESS - École des hautes études en sciences sociales - ENPC - École nationale des ponts et chaussées - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, UNIGE - Université de Genève = University of Geneva) |
Abstract: | Whether individuals feel confident about their own actions, choices, or statements being correct, and how these confidence levels differ between individuals, are two key primitives for countless behavioral theories and phenomena. In cognitive tasks, individual confidence is typically measured as the average of reports about choice accuracy, but how reliable the resulting characterization of within- and between-individual confidence is remains surprisingly undocumented. Here, we perform a large-scale resampling exercise in the Confidence Database to investigate the reliability of individual confidence estimates and of comparisons across individuals' confidence levels. Our results show that confidence estimates are more stable than their choice-accuracy counterparts, reaching a reliability plateau after roughly 50 trials, regardless of a number of task design characteristics. While constituting a reliability upper bound for task-based confidence measures, and thereby leaving open the question of the reliability of the construct itself, these results characterize the robustness of past and future task designs. |
Keywords: | Confidence, Accuracy, Reliability, Design of experiments, Multiple trials |
Date: | 2025–01 |
URL: | https://d.repec.org/n?u=RePEc:hal:cesptp:halshs-04893009 |
By: | Thomas Epper (LEM - Lille économie management - UMR 9221 - UA - Université d'Artois - UCL - Université catholique de Lille - Université de Lille - CNRS - Centre National de la Recherche Scientifique); Ernst Fehr (UCPH - University of Copenhagen = Københavns Universitet); Claus Thustrup Kreiner (UCPH - University of Copenhagen = Københavns Universitet); Søren Leth-Petersen (UCPH - University of Copenhagen = Københavns Universitet); Isabel Skak Olufsen (UCPH - University of Copenhagen = Københavns Universitet); Peer Ebbesen Skov (AUT - Auckland University of Technology) |
Abstract: | Rising inequality has brought redistribution back onto the political agenda. In theory, inequality aversion drives people's support for redistribution. People can dislike both advantageous inequality (comparison relative to those worse off) and disadvantageous inequality (comparison relative to those better off). Existing experimental evidence reveals substantial variation across people in these preferences. However, evidence is scarce on the broader role of these two distinct forms of inequality aversion for redistribution in society. We provide evidence by exploiting a unique combination of data. We use an incentivized experiment to measure inequality aversion in a large population sample (≈9,000 20- to 64-year-old Danes). We link the elicited inequality aversion to survey information on individuals' support for public redistribution (policies that reduce income differences) and administrative records revealing their private redistribution (real-life donations to charity). In addition, the link to administrative data enables us to include a large battery of controls in the empirical analysis. Theory predicts that support for public redistribution increases with both types of inequality aversion, while private redistribution should increase with advantageous inequality aversion but decrease with disadvantageous inequality aversion. A strong dislike for disadvantageous inequality makes people willing to sacrifice their own income to reduce the income of people who are better off, thereby reducing the distance to people with more income than themselves. Public redistribution schemes achieve this, but private donations to charity do not. Our empirical results provide strong support for these predictions, with quantitatively large effects compared to other predictors. |
Date: | 2024–09–17 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04816620 |
By: | Karen Frilya Celine; Warut Suksompong; Sheung Man Yuen |
Abstract: | Allocating indivisible goods is a ubiquitous task in fair division. We study additive welfarist rules, an important class of rules which choose an allocation that maximizes the sum of some function of the agents' utilities. Prior work has shown that the maximum Nash welfare (MNW) rule is the unique additive welfarist rule that guarantees envy-freeness up to one good (EF1). We strengthen this result by showing that MNW remains the only additive welfarist rule that ensures EF1 for identical-good instances, two-value instances, as well as normalized instances with three or more agents. On the other hand, if the agents' utilities are integers, we demonstrate that several other rules offer the EF1 guarantee, and provide characterizations of these rules for various classes of instances. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.15472 |
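For tiny instances, the maximum Nash welfare rule and the EF1 property referenced above can both be checked by brute force. A sketch with hypothetical additive utilities (exponential enumeration, so purely illustrative):

```python
from itertools import product

def bundle_value(agent, bundle, utils):
    # Additive utilities: value of a bundle is the sum over its goods.
    return sum(utils[agent][g] for g in bundle)

def mnw_allocation(utils, n_goods):
    """Enumerate all allocations and return one maximizing the product
    of agents' utilities (the maximum Nash welfare rule)."""
    n_agents = len(utils)
    best, best_nw = None, -1.0
    for owner in product(range(n_agents), repeat=n_goods):
        alloc = [[g for g in range(n_goods) if owner[g] == i]
                 for i in range(n_agents)]
        nw = 1.0
        for i in range(n_agents):
            nw *= bundle_value(i, alloc[i], utils)
        if nw > best_nw:
            best, best_nw = alloc, nw
    return best

def is_ef1(alloc, utils):
    """Envy-freeness up to one good: any envy must vanish after
    removing some single good from the envied agent's bundle."""
    for i in range(len(utils)):
        for j in range(len(utils)):
            if i == j:
                continue
            own = bundle_value(i, alloc[i], utils)
            if own >= bundle_value(i, alloc[j], utils):
                continue
            if not any(own >= bundle_value(i, alloc[j], utils) - utils[i][g]
                       for g in alloc[j]):
                return False
    return True

utils = [[4, 1, 2], [2, 3, 1]]      # hypothetical additive utilities
alloc = mnw_allocation(utils, n_goods=3)
```

On this instance the MNW allocation gives goods 0 and 2 to agent 0 and good 1 to agent 1, and it passes the EF1 check, consistent with the guarantee the abstract discusses.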
By: | Guohui Guan; Zongxia Liang; Yi Xia |
Abstract: | This paper investigates robust stochastic differential games among insurers under model uncertainty and stochastic volatility. The surplus processes of ambiguity-averse insurers (AAIs) are characterized by drifted Brownian motion with both common and idiosyncratic insurance risks. To mitigate these risks, AAIs can purchase proportional reinsurance. Besides, AAIs allocate their wealth in a financial market consisting of cash, and a stock characterized by the 4/2 stochastic volatility model. AAIs compete with each other based on relative performance with the mean-variance criterion under the worst-case scenario. This paper formulates a robust time-consistent mean-field game in a non-linear system. The AAIs seek robust, time-consistent response strategies to achieve Nash equilibrium strategies in the game. We introduce $n$-dimensional extended Hamilton-Jacobi-Bellman-Isaacs (HJBI) equations and corresponding verification theorems under compatible conditions. Semi-closed forms of the robust $n$-insurer equilibrium and mean-field equilibrium are derived, relying on coupled Riccati equations. Suitable conditions are presented to ensure the existence and uniqueness of the coupled Riccati equation as well as the integrability in the verification theorem. As the number of AAIs increases, the results in the $n$-insurer game converge to those in the mean-field game. Numerical examples are provided to illustrate economic behaviors in the games, highlighting the herd effect of competition on the AAIs. |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2412.09171 |