on Economic Design |
| By: | Dirk Bergemann (Yale University); Tibor Heumann (Instituto de Economía, Pontificia Universidad Católica de Chile); Stephen Morris (Massachusetts Institute of Technology) |
| Abstract: | We develop an integrated framework for information design and mechanism design in screening environments with quasilinear utility. Using the tools of majorization theory and quantile functions, we show that both information design and mechanism design problems reduce to maximizing linear functionals subject to majorization constraints. For mechanism design, the designer chooses allocations weakly majorized by the exogenous inventory. For information design, the designer chooses information structures that are majorized by the prior distribution. When the designer can choose both the mechanism and the information structure simultaneously, then the joint optimization problem becomes bilinear with two majorization constraints. We show that pooling of values and associated allocations is always optimal in this case. Our approach unifies classic results in auction theory and screening, extends them to information design settings, and provides new insights into the welfare effects of jointly optimizing allocation and information. |
| Date: | 2026–01–23 |
| URL: | https://d.repec.org/n?u=RePEc:cwl:cwldpp:2494 |
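The majorization order that drives both reductions in this abstract is easy to check numerically. A minimal sketch, using the standard Hardy-Littlewood-Polya partial-sum characterization (function names are ours, not the paper's):

```python
# x is weakly majorized by y if every partial sum of x's largest entries is
# bounded by the corresponding partial sum for y; plain majorization adds
# the requirement that the totals coincide.

def partial_sums_desc(v):
    """Partial sums of the entries of v sorted in decreasing order."""
    out, total = [], 0.0
    for x in sorted(v, reverse=True):
        total += x
        out.append(total)
    return out

def weakly_majorized(x, y, tol=1e-9):
    """True if x is weakly majorized by y (equal lengths assumed)."""
    return all(a <= b + tol
               for a, b in zip(partial_sums_desc(x), partial_sums_desc(y)))

def majorized(x, y, tol=1e-9):
    """True if x is majorized by y: weak majorization plus equal totals."""
    return weakly_majorized(x, y, tol) and abs(sum(x) - sum(y)) <= tol

inventory = [3.0, 2.0, 1.0]
pooled = [2.0, 2.0, 2.0]   # a fully pooled allocation with the same total
print(majorized(pooled, inventory))                  # True: pooling is majorized by the inventory
print(weakly_majorized([1.0, 1.0, 1.0], inventory))  # True: smaller totals are allowed
```

The "pooling is optimal" result in the abstract concerns exactly such pooled vectors, which sit at the bottom of the majorization order for a given total.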
| By: | Maor Ben Zaquen; Ron Holzman |
| Abstract: | Given a set of $n$ individuals with strict preferences over $m$ indivisible objects, the Random Serial Dictatorship (RSD) mechanism is a method for allocating objects to individuals in a way that is efficient, fair, and incentive-compatible. A random order of individuals is first drawn, and each individual, following this order, selects their most preferred available object. The procedure continues until either all objects have been assigned or all individuals have received an object. RSD is widely recognized for its application in fair allocation problems involving indivisible goods, such as school placements and housing assignments. Despite its extensive use, a comprehensive axiomatic characterization has remained incomplete. For the balanced case $n=m=3$, Bogomolnaia and Moulin have shown that RSD is uniquely characterized by Ex-Post Efficiency, Equal Treatment of Equals, and Strategy-Proofness. The possibility of extending this characterization to larger markets had been a long-standing open question, which Basteck and Ehlers recently answered in the negative for all markets with $n, m\geq5$. This work completes the picture by identifying exactly for which pairs $\left(n, m\right)$ these three axioms uniquely characterize the RSD mechanism and for which pairs they admit multiple mechanisms. In the latter cases, we construct explicit alternatives satisfying the axioms and examine whether augmenting the set of axioms could rule out these alternatives. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.01224 |
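The RSD procedure described in the abstract is short enough to state in full; a minimal sketch (names are ours):

```python
# Random Serial Dictatorship: draw a uniformly random order of individuals,
# then let each one pick their most preferred object still available.
import random

def random_serial_dictatorship(preferences, objects, rng=None):
    """preferences maps each individual to a ranking of objects, best first."""
    rng = rng or random.Random()
    order = list(preferences)
    rng.shuffle(order)
    available = set(objects)
    assignment = {}
    for agent in order:
        for obj in preferences[agent]:
            if obj in available:
                assignment[agent] = obj
                available.remove(obj)
                break
        if not available:
            break   # all objects assigned
    return assignment

prefs = {"Ann": ["a", "b", "c"], "Bob": ["a", "c", "b"], "Cai": ["b", "a", "c"]}
print(random_serial_dictatorship(prefs, ["a", "b", "c"], random.Random(0)))
```

Each realized order yields an ex-post efficient assignment; the randomization over orders is what delivers Equal Treatment of Equals in expectation.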
| By: | Haris Aziz; Péter Biró; Gergely Csáji; Tom Demeulemeester |
| Abstract: | In a typical school choice application, the students have strict preferences over the schools while the schools have coarse priorities over the students based on their distance and their enrolled siblings. The outcome of a centralized admission mechanism is then usually obtained by the Deferred Acceptance (DA) algorithm with random tie-breaking. Therefore, every possible outcome of this mechanism is a stable solution for the coarse priorities that will arise with certain probability. This implies a probabilistic assignment, where the admission probability for each student-school pair is specified. In this paper, we propose a new efficiency-improving stable "smart lottery" mechanism. We aim to improve the probabilistic assignment ex-ante in a stochastic dominance sense, while ensuring that the improved random matching is still ex-post stable, meaning that it can be decomposed into stable matchings regarding the original coarse priorities. Therefore, this smart lottery mechanism can provide a clear Pareto-improvement in expectation for any cardinal utilities compared to the standard DA with lottery solution, without sacrificing the stability of the final outcome. We show that although the underlying computational problem is NP-hard, we can solve the problem by using advanced optimization techniques such as integer programming with column generation. We conduct computational experiments on generated and real instances. Our results show that the welfare gains by our mechanism are substantially larger than the expected gains by standard methods that realize efficiency improvements after ties have already been broken. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.10679 |
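The benchmark the paper improves on, DA with random tie-breaking, can be sketched as follows (a minimal student-proposing version with one up-front lottery; all names are ours, not the paper's code):

```python
import random

def deferred_acceptance(student_prefs, priority_class, capacity, rng):
    """Student-proposing DA. priority_class[c][s] is school c's coarse
    priority class for student s (lower is better); ties are broken by one
    uniform lottery number per student, drawn once and used at every school."""
    lottery = {s: rng.random() for s in student_prefs}
    rank = {c: {s: (priority_class[c][s], lottery[s]) for s in student_prefs}
            for c in priority_class}
    next_choice = {s: 0 for s in student_prefs}
    held = {c: [] for c in priority_class}
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                      # s exhausted their list; stays unmatched
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda t: rank[c][t])
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())    # reject the lowest-ranked holder
    return {s: c for c, studs in held.items() for s in studs}

prefs = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["A", "B"]}
classes = {"A": {"s1": 0, "s2": 0, "s3": 1}, "B": {"s1": 0, "s2": 0, "s3": 0}}
match = deferred_acceptance(prefs, classes, {"A": 1, "B": 2}, random.Random(7))
print(match)   # s3 is always rejected from A; which of s1, s2 wins A depends on the lottery
```

Each lottery realization yields one stable matching for the coarse priorities; the paper's "smart lottery" reweights the distribution over such stable matchings rather than drawing tie-breakers uniformly.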
| By: | Amirmahdi Mirfakhar; Xuchuang Wang; Mengfan Xu; Hedyeh Beyhaghi; Mohammad Hajiesmaili |
| Abstract: | Two-sided matching markets rely on preferences from both sides, yet it is often impractical to evaluate preferences. Participants, therefore, conduct a limited number of interviews, which provide early, noisy impressions and shape final decisions. We study bandit learning in matching markets with interviews, modeling interviews as "low-cost hints" that reveal partial preference information to both sides. Our framework departs from existing work by allowing firm-side uncertainty: firms, like agents, may be unsure of their own preferences and can make early hiring mistakes by hiring less preferred agents. To handle this, we extend the firm's action space to allow "strategic deferral" (choosing not to hire in a round), enabling recovery from suboptimal hires and supporting decentralized learning without coordination. We design novel algorithms for (i) a centralized setting with an omniscient interview allocator and (ii) decentralized settings with two types of firm-side feedback. Across all settings, our algorithms achieve time-independent regret, a substantial improvement over the $O(\log T)$ regret bounds known for learning stable matchings without interviews. Also, in mildly structured markets, decentralized performance matches the centralized counterpart up to polynomial factors in the number of agents and firms. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.12224 |
| By: | Filip Tokarski |
| Abstract: | I study the welfare-maximizing allocation of heterogeneous goods when monetary transfers are prohibited. Agents have private cardinal values, and the designer chooses a non-monetary mechanism subject to incentive compatibility and aggregate supply constraints. I characterize implementable allocations and give sufficient conditions under which the optimum coincides with a competitive equilibrium with equal incomes (CEEI). When these conditions fail, I characterize the optimum for two symmetric goods. I show that when narrow preference margins between goods predict greater need, the designer can sometimes benefit from distorting CEEI by offering a menu containing pure options and bundles. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.00487 |
| By: | Jason Allen; Jakub Kastl; Milena Wittwer |
| Abstract: | We introduce a framework for estimating demand across multiple assets with bidding data. Unlike existing methods, our approach does not rely on price instruments, which are often difficult to obtain. We describe the data requirements for implementation and illustrate its versatility using two applications: message-level data from Nasdaq and bidder-level data from Canadian Treasury bill auctions. We argue that understanding demand systems is a crucial factor in assessing the impact of market design on price stability and liquidity. |
| JEL: | C14 D44 E58 G10 G12 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34774 |
| By: | Andreas Kleiner; Benny Moldovanu; Philipp Strack |
| Abstract: | A key insight is that many, seemingly different, economic problems share a common mathematical structure: they all involve the maximization of a functional over sets of monotonic functions that are either majorized by, or majorize, a given function. We first present new, simpler proofs for the main characterization results of the extreme points of sets defined by monotonicity and majorization constraints obtained by Kleiner, Moldovanu, and Strack (2021). We then demonstrate how the characterization results can be fruitfully applied to a broad range of economic applications, from auction and information design to decision problems under risk such as optimal stopping. Finally, we conclude with an overview of recent, related work that extends these characterizations to settings with additional constraints, multidimensional state spaces, and alternative stochastic orders. |
| Date: | 2026–01–06 |
| URL: | https://d.repec.org/n?u=RePEc:cwl:cwldpp:2492 |
| By: | Joshua S. Gans |
| Abstract: | Machine learning systems embed preferences either in training losses or through post-processing of calibrated predictions. Applying information design methods from Strack and Yang (2024), this paper provides decision-problem-agnostic conditions under which separation (training preference-free and applying preferences ex post) is optimal. Unlike prior work that requires specifying downstream objectives, the welfare results here apply uniformly across decision problems. The key primitive is a diminishing-value-of-information condition: relative to a fixed (normalised) preference-free loss, preference embedding makes informativeness less valuable at the margin, inducing a mean-preserving contraction of learned posteriors. Because the value of information is convex in beliefs, preference-free training weakly dominates for any expected-utility decision problem. This provides theoretical foundations for modular AI pipelines that learn calibrated probabilities and implement asymmetric costs through downstream decision rules. However, separation requires users to implement optimal decision rules. When cognitive constraints bind, as documented in human-AI decision-making, preference embedding can dominate by automating threshold computation. These results provide design guidance: preserve optionality through post-processing when objectives may shift; embed preferences when decision-stage frictions dominate. |
| JEL: | C45 C53 D81 D82 D83 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34780 |
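The convexity argument in the abstract can be made concrete with a two-action example. A minimal sketch (the payoff numbers are our own illustration, not from the paper):

```python
# The value of a posterior belief p = Pr(state = 1) is V(p) = max over
# actions of expected utility, which is convex in p. A mean-preserving
# contraction of posteriors (the effect the paper attributes to preference
# embedding) therefore weakly lowers the expected value a downstream
# optimal decision rule can attain.

U = {("act", 1): 1.0, ("act", 0): -3.0, ("wait", 1): 0.0, ("wait", 0): 0.0}

def V(p):
    """Value of deciding optimally at belief p."""
    return max(p * U[(a, 1)] + (1 - p) * U[(a, 0)] for a in ("act", "wait"))

spread = [0.1, 0.9]       # calibrated, informative posteriors (mean 0.5)
contracted = [0.4, 0.6]   # same mean, less informative
ev_spread = sum(V(p) for p in spread) / 2
ev_contracted = sum(V(p) for p in contracted) / 2
print(ev_spread, ev_contracted)   # 0.3 vs 0.0: spread-out posteriors are worth weakly more
```

Here the contracted posteriors never cross the action threshold, so the decision maker always waits and the information has no value; the calibrated posteriors do cross it and are strictly valuable.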
| By: | Kolagani Paramahamsa |
| Abstract: | We investigate a seller's revenue-maximizing mechanism in a setting where a desirable good is sold together with an undesirable bad (e.g., advertisements) that generates third-party revenue. The buyer's private information is two-dimensional: valuation for the good and willingness to pay to avoid the bad. Following the duality framework of Daskalakis, Deckelbaum, and Tzamos (2017), whose results extend to our setting, we formulate the seller's problem using a transformed measure $\mu$ that depends on the third-party payment $k$. We provide a near-characterization for optimality of three pricing mechanisms commonly used in practice -- the Good-Only, Ad-Tiered, and Single-Bundle Posted Price -- and introduce a new class of tractable, interpretable two-dimensional orthant conditions on $\mu$ for sufficiency. Economically, $k$ yields a clean comparative static: low $k$ excludes the bad, intermediate $k$ separates ad-tolerant and ad-averse buyers, and high $k$ bundles ads for all types. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.22404 |
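The three pricing mechanisms named in the abstract can be compared numerically in a toy version of the setting. The sketch below assumes uniform types on $[0, 1]^2$ and a coarse price grid; both are our simplifications, not the paper's measure-theoretic analysis:

```python
# Buyer types (v, w): value for the good and willingness to pay to avoid
# the ad. The seller earns the price plus the third-party payment k
# whenever an ad-carrying option sells.
from itertools import product

def revenue(menu, k, grid=21):
    """Average revenue when the buyer picks the best option from `menu`
    (a list of (price, has_ad) pairs) or opts out."""
    pts = [i / (grid - 1) for i in range(grid)]
    total = 0.0
    for v, w in product(pts, pts):
        best_u, best_rev = 0.0, 0.0          # outside option
        for price, has_ad in menu:
            u = v - price - (w if has_ad else 0.0)
            if u > best_u + 1e-12:
                best_u, best_rev = u, price + (k if has_ad else 0.0)
        total += best_rev
    return total / (grid * grid)

def best(mechanism, k, prices=tuple(i / 10 for i in range(22))):
    """Optimal revenue over the price grid. Prices above 1 price an option
    out, so the two-option Ad-Tiered menu nests both single-option menus."""
    if mechanism == "good_only":
        menus = ([(p, False)] for p in prices)
    elif mechanism == "single_bundle":
        menus = ([(q, True)] for q in prices)
    else:                                     # ad-tiered two-option menu
        menus = ([(p, False), (q, True)] for p in prices for q in prices)
    return max(revenue(m, k) for m in menus)

k = 0.4
r_good, r_bundle, r_menu = (best(m, k) for m in ("good_only", "single_bundle", "ad_tiered"))
print(r_good, r_bundle, r_menu)   # the menu can only improve on either single option
```

Varying `k` in this toy model reproduces the comparative static sketched in the abstract: for small `k` the ad-free option dominates the optimal menu, while for large `k` the bundled option does.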
| By: | David Lancashire |
| Abstract: | Achieving incentive compatibility under informational decentralization is impossible within the class of direct and revelation-equivalent mechanisms typically studied in economics and computer science. We show that these impossibility results are conditional by identifying a narrow class of non-revelation-equivalent mechanisms that sustain enforcement by inferring preferences indirectly through parallel, uncorrelatable games. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.01790 |
| By: | Navin Kartik (Department of Economics, Yale University); Elliot Lipnowski (Department of Economics, Yale University); Harry Pei (Department of Economics, Northwestern University) |
| Abstract: | Does electoral replacement ensure that officeholders eventually act in voters' interests? We study a reputational model of accountability. Voters observe incumbents' performance and decide whether to replace them. Politicians may be "good" types who always exert effort or opportunists who may shirk. We find that good long-run outcomes are always attainable, though the mechanism and its robustness depend on economic conditions. In environments conducive to incentive provision, some equilibria feature sustained effort, yet others exhibit some long-run shirking. In the complementary case, opportunists are never fully disciplined, but selection dominates: every equilibrium eventually settles on a good politician, yielding permanent effort. |
| Date: | 2025–12–15 |
| URL: | https://d.repec.org/n?u=RePEc:cwl:cwldpp:2483 |
| By: | J. Ignacio Conde-Ruiz; Clara I. González; Miguel Díaz Salazar |
| Abstract: | This paper combines artificial intelligence with economic modeling to design evaluation committees that are both efficient and fair in the presence of gender differences in economic research orientation. We develop a dynamic framework in which research evaluation depends on the thematic similarity between evaluators and researchers. The model shows that while topic-balanced committees maximize welfare, this gender-neutral allocation is dynamically unstable, leading to the persistent dominance of the group initially overrepresented in evaluation committees. Guided by these predictions, we employ unsupervised machine learning to extract research profiles for male and female researchers from articles published in leading economics journals between 2000 and 2025. We characterize optimal balanced committees within this multidimensional latent topic space and introduce the Gender-Topic Alignment Index (GTAI) to measure the alignment between committee expertise and female-prevalent research areas. Our simulations demonstrate that AI-based committee designs closely approximate the welfare-maximizing benchmark. In contrast, traditional headcount-based quotas often fail to achieve balance and may even disadvantage the groups they intend to support. We conclude that AI-based tools can significantly optimize institutional design for editorial boards, tenure committees, and grant panels. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:fda:fdaddt:2026-01 |
| By: | Heather N. Fogarty; Sooie-Hoe Loke; Nicholas F. Marshall; Enrique A. Thomann |
| Abstract: | This paper studies decentralized risk-sharing on networks. In particular, we consider a model where agents are nodes in a given network structure. Agents directly connected by edges in the network are referred to as friends. We study actuarially fair risk-sharing under the assumption that only friends can share risk, and we characterize the optimal signed linear risk-sharing rule in this network setting. Subsequently, we consider a special case of this model where all the friends of an agent take on an equal share of the agent's risk, and establish a connection to the graph Laplacian. Our results are illustrated with several examples. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.05155 |
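The equal-share special case and its Laplacian connection can be sketched directly. Assuming each agent passes a fraction `a` of their own risk to every friend (and receives likewise), post-sharing positions are $Y = (I - aL)X$ with $L = D - A$ the graph Laplacian; the names below are ours, not the paper's:

```python
def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph, as nested lists."""
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    return L

def equal_share(X, edges, a):
    """Post-sharing positions Y = (I - a*L) X: each agent passes a fraction
    a of their risk to every friend and receives the same from each friend.
    Every row and column of L sums to zero, so total risk is conserved."""
    n = len(X)
    L = laplacian(n, edges)
    return [X[i] - a * sum(L[i][j] * X[j] for j in range(n)) for i in range(n)]

X = [10.0, 0.0, 2.0, 4.0]
edges = [(0, 1), (1, 2), (2, 3)]   # a path network: only neighbours are friends
Y = equal_share(X, edges, 0.25)
print(Y, sum(Y))                   # total risk is preserved: 16.0
```

The rule smooths exposures toward the network average while keeping all transfers between friends, matching the "only friends can share risk" constraint.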
| By: | Manuel Eberl; Patrick Lederer |
| Abstract: | In rank aggregation, the goal is to combine multiple input rankings into a single output ranking. In this paper, we analyze rank aggregation methods, so-called social welfare functions (SWFs), with respect to strategyproofness, which requires that no agent can misreport his ranking to obtain an output ranking that is closer to his true ranking in terms of the Kemeny distance. As our main result, we show that no anonymous SWF satisfies unanimity and strategyproofness when there are at least four alternatives. This result is proven by SAT solving, a computer-aided theorem proving technique, and verified by Isabelle, a highly trustworthy interactive proof assistant. Further, we prove by hand that strategyproofness is incompatible with majority consistency, a variant of Condorcet-consistency for SWFs. Lastly, we show that all SWFs in two natural classes have a large incentive ratio and are thus highly manipulable. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.06582 |
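The Kemeny distance and the brute-force manipulability check underlying such computer-aided results can be sketched as follows. Borda is our stand-in SWF here; the paper's impossibility covers every anonymous, unanimous SWF with at least four alternatives:

```python
from itertools import combinations, permutations

def kemeny(r1, r2):
    """Number of pairs of alternatives on which the two rankings disagree
    (rankings given as tuples, best first)."""
    p1 = {x: i for i, x in enumerate(r1)}
    p2 = {x: i for i, x in enumerate(r2)}
    return sum(1 for x, y in combinations(r1, 2)
               if (p1[x] < p1[y]) != (p2[x] < p2[y]))

def borda_swf(profile):
    """Anonymous SWF: rank alternatives by total Borda score, ties alphabetical."""
    m = len(profile[0])
    score = {x: 0 for x in profile[0]}
    for r in profile:
        for i, x in enumerate(r):
            score[x] += m - 1 - i
    return tuple(sorted(score, key=lambda x: (-score[x], x)))

truth = ("a", "b", "c")
others = [("b", "c", "a"), ("c", "b", "a")]
honest_out = borda_swf([truth] + others)
# Try every misreport, looking for an output strictly closer to `truth`.
best_lie = min(permutations(truth),
               key=lambda r: kemeny(borda_swf([r] + others), truth))
print(kemeny(honest_out, truth),
      kemeny(borda_swf([best_lie] + others), truth))   # 2 2: no misreport helps here
```

At this particular three-alternative profile no misreport helps; the paper's theorem shows that with four or more alternatives some profile always admits a profitable one.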
| By: | Stéphane Gonzalez (Université Jean Monnet Saint-Etienne, CNRS, Université Lumière Lyon 2, emlyon business school, GATE, 42023 Lyon, France); Le-Nhat-Linh Huynh (Université Jean Monnet Saint-Etienne, CNRS, Université Lumière Lyon 2, emlyon business school, GATE, 42023 Lyon, France) |
| Abstract: | We study collective decision-making when individual preferences depend on the state of the world. The paper introduces an axiom, Aggregation Consistency, linking the way society aggregates utilities across individuals with the way each individual aggregates outcomes across states. The axiom requires that any alternative preferred in every realized state remains preferred before uncertainty is resolved. Combined with standard social choice and aggregation principles, it implies that the same functional form must govern both interpersonal and intrapersonal aggregation. Under familiar conditions, this yields two canonical families of solutions: generalized utilitarian rules based on quasi-arithmetic means, and Rawlsian rules based on minimum or maximum operators. The analysis unifies utilitarian and egalitarian criteria within a single axiomatic framework for collective choice under uncertainty. |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:gat:wpaper:2604 |
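The two canonical families in the abstract can be illustrated numerically: generalized utilitarian rules are quasi-arithmetic means $M(u) = f^{-1}\bigl(\tfrac{1}{n}\sum_i f(u_i)\bigr)$, and the Rawlsian pole arises as a limit of increasingly concave generators. The specific generators below are our illustrative choices:

```python
import math

def quasi_arithmetic_mean(values, f, f_inv):
    """Generalized utilitarian aggregation: f_inv of the average of f(u_i)."""
    return f_inv(sum(map(f, values)) / len(values))

u = [1.0, 4.0, 9.0]
arithmetic = quasi_arithmetic_mean(u, lambda x: x, lambda y: y)   # plain utilitarian
geometric = quasi_arithmetic_mean(u, math.log, math.exp)          # another quasi-arithmetic rule
t = 50.0   # with f(x) = -exp(-t*x), the mean approaches min(u) as t grows
rawls_like = quasi_arithmetic_mean(u, lambda x: -math.exp(-t * x),
                                   lambda y: -math.log(-y) / t)
print(arithmetic, geometric, rawls_like)   # the last is close to min(u) = 1
```

The limit behaviour is why the axiomatic framework can nest both utilitarian and Rawlsian (min/max) criteria in one family.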
| By: | Umutcan Salman (University of Padova) |
| Abstract: | This paper studies the problem of reallocating objects to agents while taking into account agents’ endowments, object capacities and agents’ preferences. The goal is to find a Pareto efficient and individually rational allocation that minimizes the number of individuals who need to change from their initial allocation to the final one. We call this problem MINDIST. We establish an NP-completeness result for MINDIST. We also show that MINDIST remains NP-complete when individual preferences are restricted to be binary, meaning that each individual ranks at most two objects. Finally, we present an integer programming formulation to solve small to moderately sized instances of these NP-hard problems. |
| Keywords: | Object allocation, Pareto-efficiency, Individual Rationality, Computational complexity, Minimum changes |
| URL: | https://d.repec.org/n?u=RePEc:pad:wpaper:0323 |
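For intuition, MINDIST can be solved by brute force on a tiny unit-capacity instance (the paper handles general capacities via integer programming; all names below are ours):

```python
# Among individually rational, Pareto-efficient reallocations, pick one that
# changes the fewest agents' objects. Any Pareto improvement over an IR
# allocation is itself IR, so domination only needs checking within the IR set.
from itertools import permutations

def mindist(endowment, prefs):
    """prefs[i] ranks objects, most preferred first; endowment[i] is i's object."""
    n = len(endowment)
    rank = [{o: k for k, o in enumerate(p)} for p in prefs]

    def individually_rational(alloc):
        return all(rank[i][alloc[i]] <= rank[i][endowment[i]] for i in range(n))

    feasible = [a for a in permutations(endowment) if individually_rational(a)]

    def dominated(a):   # some feasible b weakly better for all, strictly for one?
        return any(all(rank[i][b[i]] <= rank[i][a[i]] for i in range(n)) and
                   any(rank[i][b[i]] < rank[i][a[i]] for i in range(n))
                   for b in feasible)

    efficient = [a for a in feasible if not dominated(a)]
    return min(efficient, key=lambda a: sum(a[i] != endowment[i] for i in range(n)))

endowment = ("x", "y", "z")
prefs = [["y", "x", "z"], ["x", "y", "z"], ["z", "x", "y"]]
print(mindist(endowment, prefs))   # agents 0 and 1 swap; agent 2 keeps z
```

The NP-hardness results mean this enumeration cannot scale; the paper's integer program replaces it for realistic instances.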
| By: | Federico Echenique; Teddy Mekonnen; M. Bumin Yenmez |
| Abstract: | Medical "Crisis Standards of Care" call for a utilitarian allocation of scarce resources in emergencies, while favoring the worst-off under normal conditions. Inspired by such triage rules, we introduce social welfare functions whose distributive tradeoffs depend on the prevailing level of aggregate welfare. These functions are inherently self-referential: they take the welfare level as an input, even though that level is itself determined by the function. In our formulation, inequality aversion varies with welfare and is therefore self-referential. We provide an axiomatic foundation for a family of social welfare functions that move from Rawlsian to utilitarian criteria as overall welfare falls, thereby formalizing triage guidelines. We also derive the converse case, in which the social objective shifts from Rawlsianism toward utilitarianism as welfare increases. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.22250 |
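The self-referential structure in the abstract amounts to a fixed-point problem. A toy sketch in that spirit, where the utilitarian weight $t(W)$ rises as welfare $W$ falls, so $W$ solves $W = t(W)\,\mathrm{mean}(u) + (1 - t(W))\,\min(u)$; the functional form of $t$ is our illustration, not the paper's axiomatized family:

```python
def triage_welfare(u, theta, w0=0.0, iters=200):
    """Solve the self-referential welfare level by simple fixed-point
    iteration: W = theta(W)*mean(u) + (1 - theta(W))*min(u)."""
    lo, hi = min(u), sum(u) / len(u)
    W = w0
    for _ in range(iters):
        t = theta(W)
        W = t * hi + (1 - t) * lo
    return W

theta = lambda W: 1.0 / (1.0 + W)   # utilitarian weight falls as welfare rises
W_normal = triage_welfare([5.0, 9.0, 10.0], theta)   # high welfare: nearly Rawlsian
W_crisis = triage_welfare([0.5, 0.9, 1.0], theta)    # low welfare: nearly utilitarian
print(theta(W_normal), theta(W_crisis))
```

Because the map is a contraction here, the iteration converges; in the crisis profile the utilitarian weight exceeds one half, while in the normal profile the criterion is close to Rawlsian, mirroring the triage logic the paper axiomatizes.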