on Utility Models and Prospect Theory
By: | Stelios Arvanitis (Department of Economics, AUEB) |
Abstract: | This paper utilizes a Banach-type fixed point theorem in a functorial context to develop Universal Choice Spaces for addressing decision problems, focusing on expected utility and preference uncertainty. This generates an infinite sequence of optimal selection problems involving probability measures on utility sets. Each solution at a given stage addresses the preference ambiguity from the previous stage, enabling optimal choices at that level. The Universal Choice Space is characterized as a collection of finite-dimensional vectors of probability distributions, with the mth component being an arbitrary probability measure relevant to the mth stage of the problem. The space is derived as the canonical fixed point of a suitable endofunctor on an enriched category and simultaneously as the colimit of the sequence of iterations of this functor, starting from a suitable object. |
Keywords: | Expected utility, ambiguity of preferences, infinite regress, enriched category, endofunctor, canonical fixed point, initial algebra, colimit, universal choice space |
JEL: | D81 |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:qed:wpaper:1534 |
By: | Yuexin Liao; Kota Saito; Alec Sandroni |
Abstract: | This paper studies when discrete choice data involving aggregated alternatives such as categorical data or an outside option can be rationalized by a random utility model (RUM). Aggregation introduces ambiguity in composition: the underlying alternatives may differ across individuals and remain unobserved by the analyst. We characterize the observable implications of RUMs under such ambiguity and show that they are surprisingly weak, implying only monotonicity with respect to adding aggregated alternatives and standard RUM consistency on unaggregated menus. These are insufficient to justify the use of an aggregated RUM. We identify two sufficient conditions that restore full rationalizability: non-overlapping preferences and menu-independent aggregation. Simulations show that violations of these conditions generate estimation bias, highlighting the practical importance of how aggregated alternatives are defined. |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2506.00372 |
By: | Lidia Ceriani; Paolo Verme |
Abstract: | Despite the growing numbers of forcibly displaced persons worldwide, many people living under conflict choose not to flee. Individuals face two lotteries - staying or leaving - characterized by two distributions of potential outcomes. This paper proposes to model the choice between these two lotteries using quantile maximization as opposed to expected utility theory. The paper posits that risk-averse individuals aim at minimizing losses by choosing the lottery with the best outcome at the lower end of the distribution, whereas risk-tolerant individuals aim at maximizing gains by choosing the lottery with the best outcome at the higher end of the distribution. Using a rich set of household and conflict panel data from Nigeria, the paper finds that risk-tolerant individuals have a significant preference for staying and risk-averse individuals have a significant preference for fleeing, in line with the predictions of the quantile maximization model. These findings are in contrast to findings on economic migrants, and call for separate policies toward economic and forced migrants. |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2505.03405 |
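The quantile-maximization rule described in the abstract can be sketched in a few lines: the decision maker compares the two lotteries at a single quantile tau, low for risk-averse individuals and high for risk-tolerant ones. The outcome vectors and tau values below are illustrative, not taken from the paper.

```python
import numpy as np

def quantile_choice(stay_outcomes, leave_outcomes, tau):
    """Pick the lottery with the higher tau-quantile outcome."""
    if np.quantile(stay_outcomes, tau) >= np.quantile(leave_outcomes, tau):
        return "stay"
    return "leave"

# Illustrative outcome distributions: staying has both the worst floor and
# the best ceiling; fleeing is compressed and safer.
stay = [0.0, 2.0, 10.0]
leave = [1.0, 2.0, 3.0]

choice_averse = quantile_choice(stay, leave, tau=0.1)    # compares lower tails
choice_tolerant = quantile_choice(stay, leave, tau=0.9)  # compares upper tails
```

With these illustrative distributions the rule reproduces the paper's prediction: the low-quantile (risk-averse) comparison favors fleeing, the high-quantile (risk-tolerant) comparison favors staying.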
By: | Stelios Arvanitis (Department of Economics, AUEB) |
Abstract: | We develop and implement methods for determining whether relaxing sparsity constraints on portfolios improves the investment opportunity set for risk-averse investors. We formulate a new estimation procedure for sparse second-order stochastic spanning based on a greedy algorithm and Linear Programming. We show that the sparse solution is optimally recovered asymptotically, whether or not spanning holds. From large equity datasets, we estimate the expected utility loss due to possible under-diversification, and find that there is no benefit from expanding a sparse opportunity set beyond 45 assets. The optimal sparse portfolio invests in 10 industry sectors and cuts tail risk when compared to a sparse mean-variance portfolio. On a rolling-window basis, the number of assets shrinks to 25 in crisis periods, while standard factor models cannot explain the performance of the sparse portfolios.
Keywords: | Nonparametric estimation, stochastic dominance, spanning, under-diversification, greedy algorithm, Linear Programming |
JEL: | C13 C14 C44 C58 C61 D81 G11 |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:qed:wpaper:1532 |
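The greedy step behind sparse portfolio selection can be illustrated as forward selection under an expected-utility criterion. This is a simplified sketch: equal weighting and a CARA utility stand in for the paper's LP-based stochastic-spanning procedure, which this code does not implement.

```python
import numpy as np

def greedy_sparse_portfolio(returns, k):
    """Forward selection: at each step add the asset whose inclusion in an
    equal-weighted portfolio most improves expected utility."""
    def expected_utility(r):
        return np.mean(-np.exp(-r))   # hypothetical CARA utility, risk aversion 1
    chosen = []
    for _ in range(k):
        candidates = [j for j in range(returns.shape[1]) if j not in chosen]
        best = max(candidates,
                   key=lambda j: expected_utility(returns[:, chosen + [j]].mean(axis=1)))
        chosen.append(best)
    return chosen

# Two periods of returns for three assets; asset 2 dominates the others.
R = np.array([[0.10, 0.00, 0.30],
              [0.05, 0.02, 0.25]])
picked = greedy_sparse_portfolio(R, k=2)
```

The greedy loop adds one asset per iteration, so the sparsity level k is a direct input, mirroring how the paper varies the size of the sparse opportunity set.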
By: | Leo Kurata; Kensei Nakamura |
Abstract: | This paper studies preference aggregation under uncertainty in the multi-profile framework introduced by Sprumont (2018, 2019) and characterizes a new class of aggregation rules that can address classical concerns about Harsanyi's (1955) utilitarian rules. Our class of aggregation rules, which we call relative fair aggregation rules, is grounded in three key ideas: utilitarianism, egalitarianism, and the 0-1 normalization. These rules are parameterized by a set of weights over individuals. Each ambiguous alternative is evaluated by computing the minimum weighted sum of the 0-1 normalized utility levels within that weight set. For the characterization, we propose two novel key axioms, weak preference for mixing and restricted certainty independence, developed using a new method of objectively randomizing outcomes even within the fully uncertain Savagean framework. Furthermore, we show that relative utilitarian aggregation rules can be identified from the above class by imposing an axiom stronger than restricted certainty independence, and that the Rawlsian maximin version can be derived by considering strong preference for mixing instead.
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2505.03232 |
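The evaluation rule for relative fair aggregation (each alternative scored by the minimum, over a weight set, of the weighted sum of 0-1 normalized utilities) can be sketched directly; the utility matrix and weight set below are hypothetical.

```python
import numpy as np

def normalize_01(u):
    """0-1 normalize each individual's utilities across alternatives."""
    u = np.asarray(u, dtype=float)
    lo = u.min(axis=1, keepdims=True)
    hi = u.max(axis=1, keepdims=True)
    return (u - lo) / (hi - lo)

def relative_fair_value(utilities, weight_set, alternative):
    """Score one alternative: the minimum weighted sum of normalized utilities
    over the weight set (utilitarian sum, egalitarian minimum)."""
    v = normalize_01(utilities)[:, alternative]   # normalized utility per person
    return min(np.dot(w, v) for w in weight_set)

# Two individuals, three alternatives (rows: people, columns: alternatives).
U = [[0.0, 5.0, 10.0],
     [8.0, 4.0, 0.0]]
W = [np.array([0.5, 0.5]), np.array([0.8, 0.2])]  # hypothetical weight set

scores = [relative_fair_value(U, W, a) for a in range(3)]
```

The min over the weight set is what gives the rule its egalitarian flavor: an alternative that favors one individual heavily is evaluated under the weights least favorable to it.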
By: | Christian Callaghan |
Abstract: | This paper introduces Experiential Matrix Theory (EMT), a general theory of growth, employment, and technological change for the age of artificial intelligence (AI). EMT redefines utility as the alignment between production and an evolving, infinite-dimensional matrix of human experiential needs, thereby extending classical utility frameworks and integrating ideas from the capabilities approach of Sen and Nussbaum into formal economic optimisation modelling. We model the economy as a dynamic control system in which AI collapses ideation and coordination costs, transforming production into a real-time vector of experience-aligned outputs. Under this structure, the production function becomes a continuously learning map from goods to experiential utility, and economic success is redefined as convergence toward an asymptotic utility frontier. Using Pontryagin's Maximum Principle in an infinite-dimensional setting, we derive conditions under which AI-aligned output paths are asymptotically optimal, and prove that unemployment is Pareto-inefficient wherever unmet needs and idle human capacities persist. On this foundation, we establish Alignment Economics as a new research field dedicated to understanding and designing economic systems in which technological, institutional, and ethical architectures co-evolve. EMT thereby reframes policy, welfare, and coordination as problems of dynamic alignment, not static allocation, and provides a mathematically defensible framework for realigning economic production with human flourishing. As ideation costs collapse and new experiential needs become addressable, EMT shows that economic growth can evolve into an inclusive, meaning-centred process: formally grounded, ethically structured, and AI-enabled.
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2505.19045 |
By: | Mingshi Chen; Tracy Xiao Liu; You Shan; Shu Wang; Songfa Zhong; Yanju Zhou |
Abstract: | Choice consistency with utility maximization is a fundamental assumption in economic analysis and is extensively measured across various contexts. Here we investigate the generalizability of consistency measures derived from purchasing decisions using supermarket scanner data and budgetary decisions from lab-in-the-field experiments. We observe a lack of correlation between consistency scores from supermarket purchasing decisions and those from risky decisions in the experiment. However, we observe moderate correlations among experimental tasks and low to moderate correlations across purchasing categories and time periods within the supermarket. These results suggest that choice consistency may be characterized as a multidimensional skill set. |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2505.05275 |
By: | Leonardo Tariffi (University of Barcelona) |
Abstract: | I first write a partial equilibrium model “à la Rogoff” in which there are relative prices of non-tradable goods in terms of prices of tradable goods. I find that the behaviour of the real exchange rate shows structural breaks in the short term. Secondly, I explain that any change in the real exchange rate is transitory in the long run. I obtain a general equilibrium model after adding a utility function to the partial equilibrium model. In the general equilibrium model, an increase in the consumption of tradables keeps the real exchange rate (RER) constant over time.
Keywords: | Exchange rate, Non-tradable goods, General equilibrium model, Dynamics |
JEL: | F31 F41 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:ewp:wpaper:476web |
By: | Haochuan Wang |
Abstract: | Understanding how market participants react to shocks such as scheduled macroeconomic news is crucial for both traders and policymakers. We develop a calibrated data generation process (DGP) that embeds four stylized trader archetypes (retail, pension, institutional, and hedge funds) into an extended CAPM augmented by CPI surprises. Each agent's order-size choice is driven by a softmax discrete choice rule over small, medium, and large trades, where utility depends on risk aversion, surprise magnitude, and liquidity. We analyze each agent's reaction to shocks; Monte Carlo experiments show that higher-information, lower-aversion agents take systematically larger positions and achieve higher average wealth. Retail investors underreact on average, exhibiting smaller allocations and more dispersed outcomes, and ambient liquidity amplifies the sensitivity of order flow to surprise shocks. Our framework offers a transparent benchmark for analyzing order-flow dynamics around macro releases and suggests how real-time flow data could inform news-impact inference.
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2505.01962 |
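The softmax discrete choice rule over trade sizes can be sketched as follows. The utility specification (linear in the absolute surprise and liquidity, with a quadratic size penalty scaled by risk aversion) is a hypothetical stand-in for the paper's calibrated form, as are all parameter values.

```python
import numpy as np

def order_size_probs(surprise, risk_aversion, liquidity, sizes=(1.0, 5.0, 10.0)):
    """Softmax choice probabilities over small/medium/large trade sizes."""
    sizes = np.asarray(sizes)
    utility = sizes * abs(surprise) * liquidity - risk_aversion * sizes ** 2
    expu = np.exp(utility - utility.max())   # subtract max for numerical stability
    return expu / expu.sum()

# Same CPI surprise; a low-aversion hedge fund vs a high-aversion retail trader.
hedge = order_size_probs(surprise=0.5, risk_aversion=0.02, liquidity=1.0)
retail = order_size_probs(surprise=0.5, risk_aversion=0.20, liquidity=1.0)
```

Under this stylized utility, the low-aversion agent puts most probability on the large trade and the high-aversion agent on the small one, matching the qualitative pattern the abstract reports.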
By: | Carlos Alos Ferrer; Johannes Buckenmaier; Michele Garagnani |
Abstract: | Economic decisions are noisy due to errors and cognitive imprecision. Often, they are also systematically biased by heuristics or behavioral rules of thumb, creating behavioral anomalies which challenge established economic theories. The interaction of noise and bias, however, has been mostly neglected, and recent work suggests that received behavioral anomalies might just be due to regularities in the noise. This contribution formalizes the idea that decision makers might follow a mixture of rules of behavior combining cognitively imprecise value maximization and computationally simpler shortcuts. The model delivers new testable predictions which we validate in two experiments, focusing on biases in probability judgments and the certainty effect in lottery choice, respectively. Our findings suggest that neither cognitive imprecision nor a multiplicity of behavioral rules suffices to explain received patterns in economic decision making. However, jointly modeling (cognitive) noise in value maximization and biases arising from simpler cognitive shortcuts delivers a unified framework which can parsimoniously explain deviations from normative prescriptions across domains.
Keywords: | Cognitive Imprecision, Strength of Preference, Noise, Decision Biases, Belief Updating, Certainty Heuristic |
JEL: | D01 D81 D87 D91 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:lan:wpaper:423483206 |
By: | Shanyu Han; Yang Liu; Xiang Yu |
Abstract: | We propose a reinforcement learning (RL) framework under a broad class of risk objectives, characterized by convex scoring functions. This class covers many common risk measures, such as variance, Expected Shortfall, entropic Value-at-Risk, and mean-risk utility. To resolve the time-inconsistency issue, we consider an augmented state space and an auxiliary variable and recast the problem as a two-state optimization problem. We propose a customized Actor-Critic algorithm and establish some theoretical approximation guarantees. A key theoretical contribution is that our results do not require the Markov decision process to be continuous. Additionally, we propose an auxiliary variable sampling method inspired by the alternating minimization algorithm, which is convergent under certain conditions. We validate our approach in simulation experiments with a financial application in statistical arbitrage trading, demonstrating the effectiveness of the algorithm. |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2505.04553 |
By: | Molina, Jose Alberto; Salvatierra, Alba; Velilla, Jorge |
Abstract: | This paper analyzes work from home (WFH) from a household perspective, focusing on its relationships with spouses’ wages, household labor supply, expenditures, and chores. We use a collective model that predicts that WFH decisions result from joint utility maximization. Using data from the PSID (2011-2021), we find that both partners’ wages and hours are associated with their own and their spouse’s WFH status in pooled specifications, but these associations weaken substantially when accounting for endogeneity and unobserved heterogeneity. Instrumental variable estimates suggest that wage effects are partly driven by occupational sorting, while fixed effects models reveal that changes in WFH status are strongly correlated across spouses but largely unrelated to short-term changes in wages or hours. The implications point to the need for models of remote work that incorporate intra-household dynamics, and to the importance of recognizing WFH as a negotiated outcome rather than an individual choice.
Keywords: | Work from home; collective model; PSID data
JEL: | D10 D79 J22 |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:124906 |
By: | Lei Bill Wang; Sooa Ahn |
Abstract: | This paper examines why eligible households do not participate in welfare programs. Under the assumption that there exist some observed fully attentive groups, we model take-up as a two-stage process: attention followed by choice. We do so with two novel approaches. Drawing inspiration from the literature on demand estimation for stochastically attentive consumers, Approach I is semiparametric, with a nonparametric attention function and a parametric choice function. It uses fully attentive households to identify choice utility parameters and then uses the entire population to identify the attention probabilities. By augmenting Approach I with a random effect that simultaneously affects the attention and choice stages, Approach II allows household-level unobserved heterogeneity and dependence between attention and choice even after conditioning on observed covariates. Applied to NLSY panel data for WIC participation, both approaches consistently point to two empirical findings with regard to heterogeneous policy targeting. (1) As an infant ages towards 12 months and beyond, attention probability drops dramatically while choice probability steadily decreases. Finding (1) suggests that exit prevention is key to increasing the take-up rate: once a household exits the program when the infant ages close to 12 months old, it is unlikely to rejoin due to low attention. A value-increasing solution is predicted to be effective in promoting take-up by reducing exit probability; an attention-raising solution is predicted to be ineffective. (2) More highly educated households are less attentive but more likely to enroll if attentive. Finding (2) suggests that running informational campaigns with parenting student groups at higher education institutions could be an effective strategy for boosting take-up.
Date: | 2025–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2506.03457 |
By: | Henrike Sternberg (Technical University of Munich, TUM School of Social Sciences and Technology & TUM School of Management, Munich School of Politics and Public Policy & Friedrich-Alexander-Universität Erlangen-Nürnberg); Janina Isabel Steinert (Technical University of Munich, TUM School of Social Sciences and Technology & TUM School of Medicine and Health, Munich School of Politics and Public Policy); Tim Büthe (Technical University of Munich, TUM School of Social Sciences and Technology & TUM School of Management, Munich School of Politics and Public Policy & Duke University, Sanford School of Public Policy)
Abstract: | This paper examines how inequality aversion shapes public support for international redistributive policies. We investigate this question in the context of the global allocation of vaccines during the Covid-19 pandemic, using online survey data from incentivized behavioral games and a discrete choice experiment conducted with German citizens in April 2021 (N = 2,402). We distinguish between aversion to advantageous inequality (others worse off, the ’guilt’ parameter) and aversion to disadvantageous inequality (others better off, the ’envy’ parameter). These two forms of inequality aversion shape German citizens’ attitudes towards the cross-country allocation of resources in distinct ways: while higher levels of the guilt parameter significantly increase respondents’ likelihood to prioritize an equitable vaccine allocation, the envy parameter is associated with lower support thereof. These findings suggest that inequality aversion matters for citizens’ support for redistribution beyond the national level and emphasize that distinguishing between both forms of inequality aversion is crucial.
Keywords: | Distributional preferences; Inequality aversion; International inequality; Covid-19 pandemic; Support for vaccine donations; Survey experiment |
JEL: | C83 D63 D91 H87 I14 I18 |
Date: | 2025–07 |
URL: | https://d.repec.org/n?u=RePEc:aiw:wpaper:43 |
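The guilt and envy parameters correspond to the two penalty terms of a Fehr-Schmidt-style utility function, a standard specification behind such measures (the functional form and the allocation payoffs below are illustrative, not the paper's estimates).

```python
def inequality_averse_utility(own, other, guilt, envy):
    """Fehr-Schmidt-style utility: 'guilt' penalizes advantageous inequality
    (own > other); 'envy' penalizes disadvantageous inequality (other > own)."""
    return own - guilt * max(own - other, 0.0) - envy * max(other - own, 0.0)

# Keep most doses (payoffs 8 vs 2) or share equitably (5 vs 5).
high_guilt_share = inequality_averse_utility(5.0, 5.0, guilt=0.6, envy=0.8)
high_guilt_keep = inequality_averse_utility(8.0, 2.0, guilt=0.6, envy=0.8)
low_guilt_keep = inequality_averse_utility(8.0, 2.0, guilt=0.1, envy=0.8)
```

With a high guilt parameter the equitable split yields higher utility than keeping the larger share, mirroring the abstract's finding that guilt predicts support for equitable vaccine allocation.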
By: | Jussupow, Ekaterina; Benbasat, Izak; Heinzl, Armin |
Abstract: | People respond in conflicting ways to decision support from algorithms and humans. On the one hand, they fail to benefit from algorithms due to algorithm aversion, rejecting decisions provided by algorithms more frequently than those made by humans. On the other hand, many prefer algorithmic over human advice, an effect known as algorithm appreciation. However, we currently lack a shared understanding of these constructs’ meaning and measurement, resulting in a lack of theoretical integration of empirical findings. Thus, in this research note, we conceptualize algorithm aversion as the preference for humans over algorithms in decision-making and analyze approaches in current research to measure this preference. First, we outline the implications of focusing on a specific understanding of algorithms as computational procedures or as embedded in material or nonmaterial objects. Then, we classify four decision configurations that distinguish individuals’ evaluations of algorithms, human advisors, their own judgments, or combinations of these. Consequently, we develop a classification scheme that provides guidance for future research to develop more specific hypotheses on the direction of preferences (aversion vs. appreciation) and the effect of moderators.
Date: | 2024–12–01 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:154733 |
By: | Omar Besbes; Yash Kanoria; Akshit Kumar |
Abstract: | Individuals often navigate several options with incomplete knowledge of their own preferences. Information provisioning tools such as public rankings and personalized recommendations have become central to helping individuals make choices, yet their value proposition under different marketplace environments remains unexplored. This paper studies a stylized model to explore the impact of these tools in two marketplace settings: uncapacitated supply, where items can be selected by any number of agents, and capacitated supply, where each item is constrained to be matched to a single agent. We model the agent's utility as a weighted combination of a common term, which depends only on the item and reflects its population-level quality, and an idiosyncratic term, which depends on the agent-item pair and captures individual-specific tastes. Public rankings reveal the common term, while personalized recommendations reveal both terms. In the supply-unconstrained setting, both public rankings and personalized recommendations improve welfare, with their relative value determined by the degree of preference heterogeneity: public rankings are effective when preferences are relatively homogeneous, while personalized recommendations become critical as heterogeneity increases. In contrast, in the supply-constrained setting, revealing just the common term, as public rankings do, provides limited benefit, since the total common value available is limited by capacity constraints; personalized recommendations, by revealing both common and idiosyncratic terms, significantly enhance welfare by enabling agents to match with items they idiosyncratically value highly. These results illustrate the interplay between supply constraints and preference heterogeneity in determining the effectiveness of information provisioning tools, offering insights for their design and deployment in diverse settings.
Date: | 2025–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2506.03369 |
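The welfare comparison in the uncapacitated case can be reproduced in a small simulation: public rankings send every agent to the item with the highest common term, while personalized recommendations let each agent maximize full (common plus idiosyncratic) utility. The distributions and the weight on the idiosyncratic term are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_items, beta = 200, 20, 1.0    # beta weights the idiosyncratic term

common = rng.normal(size=n_items)                # population-level item quality
idio = rng.normal(size=(n_agents, n_items))      # agent-item specific tastes
utility = common + beta * idio

# Uncapacitated supply: any number of agents may select the same item.
ranked = utility[:, common.argmax()].mean()      # all follow the public ranking
recommended = utility.max(axis=1).mean()         # each follows a personalized pick
```

Since each agent's personalized pick maximizes that agent's own utility, average welfare under recommendations weakly dominates the ranking benchmark here by construction; the interesting comparisons in the paper arise when supply is capacitated and agents compete for items.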