| on Microeconomics |
| By: | Ian Ball; Deniz Kattwinkel; Jan Knoepfle |
| Abstract: | Two horizontally differentiated firms compete for consumers who are partially informed about their future preferences. The firms screen consumers by offering menus of option contracts. Each consumer enters contracts with both firms. Subsequently, each consumer learns his preferences and purchases only one product. We find the unique equilibrium. Relative to spot pricing, consumption is distorted because each consumer is endogenously locked into one firm. If contracting is sufficiently early, so that consumers are less informed and hence less differentiated, consumers benefit; this reverses the conclusion in the monopoly case. Exclusive contracting further benefits consumers by intensifying competition. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.08144 |
| By: | Andrei Iakovlev |
| Abstract: | When multiple informative equilibria are possible in a general cheap talk game, how much information can a principal guarantee herself? To answer this question, I define the notion of worst-case implementation: implementation via the worst non-trivial equilibrium of a mechanism. Under this objective, standard full-commitment mechanisms fail, yielding the principal no more than her no-communication payoff. Partial commitment, however, can provide a strict improvement. The possibility of facing a strategic, uncommitted principal disciplines the agent's reporting incentives across all equilibria. I characterize the worst-case optimal mechanism and payoff under weak assumptions on the players' preferences. The optimal mechanism has a simple two-message structure. The agent's messages are polarizing, designed to maximize their strategic impact on the uncommitted principal's actions. If full commitment is interpreted as decision automation, these results highlight a fundamental complementarity between automated and human decision-makers: the presence of a human aligns the agent's incentives to reveal information, while the automated system leverages these informative reports to take accurate actions. This strategic interaction is often overlooked by the literature that compares the two based on standalone decision accuracy. Applications of the model include bail-setting automation, fintech lending, delegation, lobbying, and audit design. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.13645 |
| By: | Piotr Dworczak; Alex Smolin |
| Abstract: | An agent chooses an action using her private information combined with recommendations from an informed but potentially misaligned adviser. With a known alignment probability, the adviser reports his signal truthfully; with remaining probability, the adviser can send an arbitrary message. We characterize the decision rule that maximizes the agent's worst-case expected payoff. Every optimal rule admits a trust region representation in belief space: advice is taken at face value when it induces a posterior within the trust region; otherwise, the agent acts as if the posterior were on the trust region's boundary. We derive thresholds on the alignment probability above which the adviser's presence strictly benefits the agent and fully characterize the solution in binary-state as well as binary-action environments. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.09490 |
| By: | Morteza Honarvar; Joanna Krysta; Eric Tang |
| Abstract: | A designer offers vertically-differentiated positions to agents in the absence of transfers. Agents have private outside options and may reject their offers ex-post. The designer has preferences over the quantity of agents who accept each position. We show that under a general condition on the distribution of outside options, an optimal mechanism for the designer offers all agents an identical lottery, and we characterize this mechanism. When our condition does not hold, the optimal mechanism may require screening agents by offering a menu of distinct lotteries. Our results follow from a decomposition of agents' participation probabilities in any feasible mechanism. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.10428 |
| By: | Florian Mudekereza |
| Abstract: | We propose a new principal-agent framework where a principal communicates a roadmap -- a set of plausible outcome models and a prior belief over them -- to guide an agent who is learning the value of innovation. The agent trusts the prior but fears that each model is misspecified (or incorrect). In dynamic contracting, we find an impossibility result: the agent may fall into a breakthrough trap, where early unexplained success can raise his misspecification concerns to the point that no contract can motivate him to continue innovating. We also obtain an upper bound on the frequency of innovative activity that tightens as the degree of misspecification increases, which then causes innovation cycles to emerge endogenously in the long run. In static contracting, we show that diversifying the roadmap increases the principal's profit by reducing the agent's exposure to idiosyncratic epistemic risk. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.18879 |
| By: | Yeon-Koo Che; Longjian Li; Tianling Luo |
| Abstract: | We study how a decision-maker (DM) learns from data of unknown quality to form robust, ''general-purpose'' posterior beliefs. We develop a framework for robust learning and belief formation under a minimax-regret criterion, cast as a zero-sum game: the DM chooses posterior beliefs to minimize ex-ante regret, while an adversarial Nature selects the data-generating process (DGP). We show that, in large samples of $n$ signal draws, Nature optimally induces ambiguity by choosing a process whose precision converges to that of an uninformative signal at the rate $1/\sqrt{n}$. As a result, learning against the adversarial DGP is nontrivial as well as incomplete: the DM's ex-ante regret remains strictly positive even with an infinite amount of data. However, when the true DGP is fixed and informative (even if only slightly), our DM with a robust updating rule eventually learns the state with enough data. Still, learning occurs at a sub-exponential rate -- quantifying the asymptotic price of robustness -- and it exhibits ''under-inference'' bias. Our framework provides a decision-theoretic dual to the local alternatives method in asymptotic statistics, deriving the characteristic $1/\sqrt{n}$-scaling endogenously from the signal ambiguity. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.15246 |
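The $1/\sqrt{n}$ scaling in the abstract above can be illustrated with a standard back-of-the-envelope Gaussian calculation (a reader's sketch, not taken from the paper): if each of $n$ conditionally i.i.d. signals has precision $\varepsilon_n^2$ and Nature scales $\varepsilon_n = c/\sqrt{n}$, the total information in the sample stays bounded.

```latex
% Gaussian sufficient-statistic sketch (illustrative, not the paper's model):
% each of n signals contributes posterior precision eps_n^2, so
\[
  \underbrace{n \cdot \varepsilon_n^2}_{\text{total precision gain}}
  = n \cdot \frac{c^2}{n} = c^2 < \infty ,
\]
% and the posterior never fully concentrates: regret stays bounded away
% from zero even as n grows, matching the abstract's incomplete learning.
```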
| By: | Brian Roberson |
| Abstract: | We study incentive design when multiple principals simultaneously design mechanisms for their respective teams in environments with strategic spillovers. In this environment, each principal's set of incentive-compatible mechanisms--those that satisfy their own agents' incentive compatibility constraints--depends on the mechanisms offered by the other teams. Following a classic example by Myerson (1982), such games may lack equilibrium due to discontinuities in the correspondence of incentive-compatible mechanisms. We establish general conditions for equilibrium existence by introducing a novel approach that involves tracking both the outcome distributions along the truthful-obedient path and the sets of outcome distributions achievable through unilateral deviations, thereby providing a foundation for analyzing a wide range of multi-principal mechanism design with team production and agency problems. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.20281 |
| By: | Florian Brandl; Wanying Huang; Atulya Jain |
| Abstract: | We study whether a social planner can improve the efficiency of learning, measured by the expected total welfare loss, in a sequential decision-making environment. Agents arrive in order and each makes a binary action based on their private signal and the social information they observe. The planner can intervene by jointly designing the social information disclosed to agents and offering monetary transfers contingent on agents' actions. We show that, despite such flexibility, efficient learning cannot be restored with a finite budget: whenever learning is inefficient without intervention, no combination of information disclosure and transfers can achieve efficient learning while keeping total expected transfers finite. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.08812 |
| By: | Nemanja Antic; Harry Pei |
| Abstract: | We develop an overlapping generations model where each agent observes a verifiable private signal about the state and, with positive probability, also receives signals disclosed by his predecessor. The agent then takes an action and decides which signals to pass on. Each agent's action has a positive externality on his predecessor and his optimal action increases in his belief about the state. We show that as the communication friction vanishes, agents become increasingly selective in disclosing information. As the probability that messages reach the next generation approaches one, all signals except those with the highest likelihood ratio will be concealed in equilibrium. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.09406 |
| By: | Joshua Bißbort; Daniel Heyen; Soheil Shayegh |
| Abstract: | Advice plays a central role in health, personal finance, and energy-efficiency decisions. We study how a benevolent expert should design verifiable advice—such as whether to commission a diagnostic test of different accuracy—when the agent is behaviorally biased, either neglecting payoff-relevant considerations or updating beliefs in a systematic, non-Bayesian way. The expert both informs the agent about underlying risk and persuades the agent away from choices driven by bias. In a Bayesian persuasion framework with a binary safe-versus-risky decision and moderate (monotone) distortions, we show that the expert’s payoff need not be monotone in informativeness: intermediate information can reduce welfare relative to no information. Nonetheless, full disclosure remains optimal. |
| Keywords: | expert advice, risky choice, Bayesian persuasion, information design, behavioral bias, non-Bayesian updating, full disclosure |
| JEL: | D82 D81 D03 D83 I18 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_12482 |
| By: | Zhihao Tang; Shixin Wang |
| Abstract: | In practice, auction data are often endogenously censored and anonymous, revealing only limited outcome statistics rather than full bid profiles. We study robust auction design when the seller observes only aggregated, anonymous order statistics and seeks to maximize worst-case expected revenue over all product distributions consistent with the observed statistic. We show that simple and widely used mechanisms are robustly optimal. Specifically, posted pricing is robustly optimal given the distribution of the highest value; the Myerson auction designed for the unique consistent i.i.d. distribution is robustly optimal given the lowest value distribution; and the second-price auction with an optimal reserve is robustly optimal when an intermediate order statistic is observed and the implied i.i.d. distribution is regular above its reserve. More generally, for a broad class of monotone symmetric mechanisms depending only on the top k order statistics, including multi-unit and position auctions, the worst-case revenue is attained under the i.i.d. distribution consistent with the observed k-th order statistic. Our results provide a tractable foundation for non-discriminatory auction design, where fairness and privacy are intrinsic consequences of the information structure rather than imposed constraints. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.20429 |
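The posted-pricing result above can be made concrete with a toy calculation (a reader's illustration, not the paper's worst-case construction): if the highest value among bidders is distributed U[0,1], a posted price $p$ sells with probability $1-p$, so expected revenue is $p(1-p)$, maximized at $p = 1/2$.

```python
# Toy illustration: optimal posted price against a U[0,1] highest value.
# (Simplified from the paper's setting, which optimizes worst-case revenue.)
def posted_price_revenue(p: float) -> float:
    """Expected revenue from posting price p when the highest value is U[0,1]."""
    return p * (1.0 - p)

# Grid search for the revenue-maximizing posted price.
grid = [i / 1000 for i in range(1001)]
best_p = max(grid, key=posted_price_revenue)

print(best_p, posted_price_revenue(best_p))  # 0.5 0.25
```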
| By: | Frank Yang |
| Abstract: | A principal screens an agent with an arbitrary set of allocations $X$. The agent's preferences over allocations are comonotonic. A subset of allocations $X^*\subseteq X$ is a surplus-elasticity frontier if (i) any other allocation has a demand curve that is pointwise lower and less elastic than some allocation in $X^*$ and (ii) the allocations in $X^*$ can be ordered in terms of their demand curves such that a higher demand curve is more inelastic. We show that any surplus-elasticity frontier is an optimal menu. Moreover, if the incremental demand curves along the frontier are also ordered by their elasticities, then the frontier is optimal even among stochastic mechanisms. The result is agnostic to type distributions and redistributive welfare weights -- the same frontier remains optimal for a broad class of objectives. As applications, we show how these results immediately yield new insights into optimal bundling, optimal taxation, sequential screening, selling information, and regulating a data-rich monopolist. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.20087 |
| By: | Andres Espitia; Edwin Muñoz-Rodríguez |
| Abstract: | Appropriate decisions depend on information gathered beforehand, yet such information is often obtained through intermediaries with biased preferences. Motivated by settings such as testing and recertification in organ transplantation, we study the problem faced by a decision-maker who can only access costly information through an agent with misaligned preferences. In a dynamic framework with exogenous decision timing, we ask how requests for verifiable information (evidence) should be scheduled and their implications for the quality of attained choices. When the agent's incentives are ignored, evidence requests do not condition on previously reported information. However, such policies may be susceptible to strategic manipulation by the agent. We show that, in these cases, optimal requests should be biased: additional evidence is more likely to be sought when previous reports favor the agent's preferred outcome. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.13879 |
| By: | Manik Dhar; Kunal Mittal; Clayton Thomas |
| Abstract: | Among two-candidate elections that treat the candidates symmetrically and never result in a tie, which voting rules are fair? A natural requirement is that each voter exerts an equal influence over the outcome, i.e., is equally likely to swing the election one way or the other. A voter's influence has been formalized in two canonical ways: the Shapley-Shubik (1954) index and the Banzhaf (1964) index. We consider both indices, and ask: Which electorate sizes admit a fair voting rule (under the respective index)? For an odd number $n$ of voters, simple majority rule is an example of a fair voting rule. However, when $n$ is even, fair voting rules can be challenging to identify, and a diverse literature has studied this problem under different notions of fairness. Our main results completely characterize which values of $n$ admit fair voting rules under the two canonical indices we consider. For the Shapley-Shubik index, a fair voting rule exists for $n>1$ if and only if $n$ is not a power of $2$. For the Banzhaf index, a fair voting rule exists for all $n$ except $2$, $4$, and $8$. Along the way, we show how the Shapley-Shubik and Banzhaf indices relate to the winning coalitions of the voting rule, and compare these indices to previously considered notions of fairness. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.13894 |
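Two of the abstract's claims can be checked by brute force for tiny electorates (a reader's sketch; the encoding of "candidate symmetry" as $f(\neg x) = \neg f(x)$ is an assumption): no fair rule exists for $n = 2$ under the Banzhaf index, while simple majority is fair for $n = 3$.

```python
from itertools import product

def banzhaf_counts(f, n):
    """For each voter, count profiles where flipping their vote flips the outcome."""
    counts = [0] * n
    for profile in product((0, 1), repeat=n):
        for i in range(n):
            flipped = list(profile)
            flipped[i] ^= 1
            if f(profile) != f(tuple(flipped)):
                counts[i] += 1
    return counts

def neutral_rules(n):
    """All candidate-symmetric, never-tied rules: f(1-x) = 1 - f(x) on every profile."""
    profiles = list(product((0, 1), repeat=n))
    for table in product((0, 1), repeat=len(profiles)):
        rule = dict(zip(profiles, table))
        if all(rule[tuple(1 - v for v in p)] == 1 - rule[p] for p in profiles):
            yield rule.__getitem__

# n = 2: no neutral rule gives both voters equal influence (all four are dictatorial).
fair_2 = [banzhaf_counts(f, 2) for f in neutral_rules(2)
          if len(set(banzhaf_counts(f, 2))) == 1]
print(fair_2)  # []

# n = 3: simple majority is fair -- every voter is pivotal equally often.
majority = lambda p: int(sum(p) >= 2)
print(banzhaf_counts(majority, 3))  # [4, 4, 4]
```

This matches the characterization quoted above: $n = 2$ is one of the excluded sizes for the Banzhaf index, while any odd $n$ admits majority rule.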
| By: | Antoine Dubus; Patrick Legros |
| Abstract: | Firms may share data to discover potential synergies between their data sets and algorithms, eventually leading to more efficient mergers and acquisitions (M&A) decisions. However, data sharing also modifies the competitive balance when firms do not merge, and a company may be reluctant to share data with potential rivals. Under general conditions, we show that firms benefit from (partially) sharing data. By doing so, they can merge conditionally based on high synergies. Compared to a laissez-faire situation, the presence of a regulator allowing or refusing the M&A may increase or decrease data sharing, with a concomitant increase or decrease in consumer surplus. Hence, regulation can lower the surplus of the consumers it is willing to protect. We revisit the Google/Fitbit acquisition through the lens of this interplay between strategic data sharing and antitrust policy. |
| Keywords: | Artificial intelligence; Synergies; Mergers and acquisitions; Incomplete information; Antitrust |
| JEL: | G34 K21 L10 L21 L24 L50 L86 |
| Date: | 2026–02–05 |
| URL: | https://d.repec.org/n?u=RePEc:eca:wpaper:2013/403154 |
| By: | Cheaheon Lim |
| Abstract: | This paper develops a theory of learning under ambiguity induced by the decision maker's beliefs about the collection of data correlated with the true state of the world. Within our framework, two classical results on Bayesian learning extend to the setting with ambiguity: experiments are equivalent to distributions over posterior beliefs, and Blackwell's more informative and more valuable orders coincide. When applied to the setting of robust Bayesian analysis, our results clarify the source of time inconsistency in the Gamma-minimax problem and provide an argument in favor of the conditional Gamma-minimax criterion. We also apply our results to a persuasion game to illustrate that our model provides a natural benchmark for communication under ambiguity. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.07634 |
| By: | Takanori ADACHI; Naoshi DOI |
| Abstract: | How does a change in marginal costs affect the final consumer price in imperfectly competitive markets, where price-setting firms can also adjust product quality? In this paper, we study cost pass-through in such an environment. For both symmetric and heterogeneous firms, we show that pass-through for price and quality can be derived in terms of sufficient statistics that do not depend on any particular demand specification, namely the first- and second-order elasticities of market demand, the Lerner index of market power, and equilibrium prices and quality choices. In addition, we obtain explicit pass-through formulas under firm symmetry. We then argue that under multinomial and random-coefficient logit demand systems, firms may respond to an increase in operational marginal costs by both lowering prices and reducing product quality when the number of symmetric firms is sufficiently small. Overall, our numerical analysis suggests that the random-coefficient logit model is more flexible than the multinomial logit model in that it allows price pass-through to exceed one, which is not possible under the multinomial logit. In addition, quality pass-through can be positive under random-coefficient logit demand. |
| Keywords: | Endogenous quality; Pass-through; Sufficient statistics; Oligopoly. |
| JEL: | D43 H22 L11 L13 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:kue:epaper:e-25-013 |
| By: | Felix Brandt; Haoyuan Chen; Chris Dong; Patrick Lederer; Alexander Schlenga |
| Abstract: | A central problem in multiagent systems is the fair assignment of objects to agents. In this paper, we initiate the analysis of classic majoritarian social choice functions in assignment. Exploiting the special structure of the assignment domain, we show a number of surprising results with no counterparts in general social choice. In particular, we establish a near one-to-one correspondence between preference profiles and majority graphs. This correspondence implies that key properties of assignments -- such as Pareto-optimality, least unpopularity, and mixed popularity -- can be determined solely by the associated majority graph. We further show that all Pareto-optimal assignments are semi-popular and belong to the top cycle. Elements of the top cycle can thus easily be found via serial dictatorships. Our main result is a complete characterization of the top cycle, which implies the top cycle can only consist of one, two, all but two, all but one, or all assignments. By contrast, we find that the uncovered set contains only very few assignments. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.14816 |
| By: | Federico Echenique (UC Berkeley - University of California [Berkeley] - UC - University of California); Matías Núñez (CREST - Centre de Recherche en Économie et Statistique - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz] - Groupe ENSAE-ENSAI - Groupe des Écoles Nationales d'Économie et Statistique - X - École polytechnique - IP Paris - Institut Polytechnique de Paris - ENSAE Paris - École Nationale de la Statistique et de l'Administration Économique - Groupe ENSAE-ENSAI - Groupe des Écoles Nationales d'Économie et Statistique - IP Paris - Institut Polytechnique de Paris - CNRS - Centre National de la Recherche Scientifique) |
| Abstract: | We describe a sequential mechanism that fully implements the set of efficient outcomes in environments with quasi-linear utilities. The mechanism asks agents to take turns in defining prices for each outcome, with a final player choosing an outcome for all: Price & Choose. The choice triggers a sequence of payments, from each agent to the preceding agent. We present several extensions. First, payoff inequalities may be reduced by endogenizing the order of play. Second, our results extend to a model without quasi-linear utility, to a setting with an outside option, to robustness against max-min behavior, and to caps on prices. |
| Keywords: | Prices, Mechanism, Subgame-perfect implementation, Efficiency |
| Date: | 2025–05–01 |
| URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-05511714 |
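The efficiency logic of Price & Choose in the two-player case can be sketched numerically (a reader's illustration with made-up payoffs; it shows only why payoff-equalizing prices induce an efficient choice, not the full subgame-perfect analysis): if player 1 prices so that her own payoff $u_1(x) + p(x)$ is constant, player 2's best response maximizes total surplus.

```python
# Hypothetical quasi-linear payoffs over three outcomes (not from the paper).
u1 = {"A": 3.0, "B": 1.0, "C": 0.0}
u2 = {"A": 0.0, "B": 4.0, "C": 2.0}

K = 3.0                                   # player 1's target payoff
prices = {x: K - u1[x] for x in u1}       # equalizing prices: u1[x] + p[x] = K

# Player 2 picks the outcome maximizing u2[x] - p[x] = u1[x] + u2[x] - K,
# i.e. the surplus-maximizing outcome, regardless of K.
choice = max(u2, key=lambda x: u2[x] - prices[x])
efficient = max(u1, key=lambda x: u1[x] + u2[x])

print(choice)  # B
assert choice == efficient
```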
| By: | Lorenzo Portaluri |
| Abstract: | This paper develops a theoretical framework to study how the quality of public information shapes the intensity of revolt in global games of regime change. Building on the canonical literature, I model citizens deciding whether to attack a regime, where intensity determines both effectiveness and failure costs. I extend the framework by endogenizing total conflict intensity through the strategic interaction of vanguard groups seeking to maximize the potential of the attack, and by including the regime's response. The analysis reveals a non-monotonic "transparency trap": at intermediate beliefs, the relationship between information quality and total violence becomes U-shaped. Intensity is high when information is scarce (serving as a substitute coordination device), is minimized at intermediate levels, and surges again when high transparency facilitates violent coordination. These dynamics persist when intensity is the outcome of decentralized strategic choice. Moreover, as the number of competing vanguard groups increases, so does the equilibrium intensity. I empirically test these predictions drawing on 177 events from the Revolutionary Episodes dataset (1900–2014), combined with historical Freedom of Expression indices. The results provide robust support for the U-shaped hypothesis and confirm that higher vanguard competition structurally escalates conflict. These findings highlight that transparency reforms can have counterintuitive effects, providing relevant policy implications. |
| Keywords: | Global Games, Regime Change, Public Information, Revolution, Violence. |
| JEL: | D72 D74 H56 P16 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:mib:wpaper:570 |
| By: | Jason Hartline |
| Abstract: | The Vickrey-Clarke-Groves (VCG) mechanism is infamously revenue non-monotone in combinatorial auctions. That is, when a buyer increases their value for a bundle of items, the total auction revenue may decrease. Combinatorial auctions exhibit complementarities, which broadly result in complexities in auction theory. This brief note shows that non-monotonicity in multi-item auctions is not a result of complementarities; in fact, VCG is revenue non-monotone even in matching markets. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.20439 |
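The phenomenon in the note above is easy to reproduce on a hypothetical instance (constructed here for illustration, not taken from the paper): two items, three unit-demand bidders, where raising one bidder's value strictly lowers VCG revenue.

```python
from itertools import permutations

def vcg(values):
    """Brute-force VCG for unit-demand bidders. values: bidder -> {item: value}.
    Returns (welfare, revenue); fine for tiny instances only."""
    bidders = list(values)
    items = list(next(iter(values.values())))
    def best(active):
        # Max-weight matching by enumerating ordered bidder assignments to items.
        best_w, best_m = 0.0, {}
        for perm in permutations(active, min(len(active), len(items))):
            w = sum(values[b][it] for b, it in zip(perm, items))
            if w > best_w:
                best_w, best_m = w, dict(zip(perm, items))
        return best_w, best_m
    W, match = best(bidders)
    revenue = 0.0
    for b in match:  # VCG payment: externality imposed on the other bidders
        W_minus, _ = best([x for x in bidders if x != b])
        revenue += W_minus - (W - values[b][match[b]])
    return W, revenue

low  = {1: {"a": 1, "b": 0}, 2: {"a": 1, "b": 1}, 3: {"a": 0, "b": 1}}
high = {1: {"a": 1, "b": 0}, 2: {"a": 2, "b": 1}, 3: {"a": 0, "b": 1}}  # bidder 2 raises v(a)

print(vcg(low)[1], vcg(high)[1])  # 2.0 1.0
```

When bidder 2's value for item a rises from 1 to 2, the efficient matching shifts and bidder 3's payment collapses to zero, so total revenue falls from 2 to 1: revenue non-monotonicity with no complementarities.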
| By: | Tim J. Boonen; Kenneth Tsz Hin Ng; Tak Wa Ng; Thai Nguyen |
| Abstract: | We propose a peer-to-peer (P2P) insurance scheme comprising a risk-sharing pool and a reinsurer. A plan manager determines how risks are allocated among members and ceded to the reinsurer, while the reinsurer sets the reinsurance loading. Our work focuses on the strategic interaction between the plan manager and the reinsurer, and this focus leads to two game-theoretic contract designs: a Pareto design and a Bowley design, for which we derive closed-form optimal contracts. In the Pareto design, cooperation between the reinsurer and the plan manager leads to multiple Pareto-optimal contracts, which are further refined by introducing the notion of coalitional stability. In contrast, the Bowley design yields a unique optimal contract through a leader-follower framework, and we provide a rigorous verification of the individual rationality constraints via pointwise comparisons of payoff vectors. Comparing the two designs, we prove that the Bowley-optimal contract is never Pareto optimal and typically yields lower total welfare. In our numerical examples, the presence of reinsurance improves welfare, especially with Pareto designs and a less risk-averse reinsurer. We further analyze the impact of the single-loading restriction, which disproportionately favors members with riskier losses. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.14223 |
| By: | Saurabh Amin; Amine Bennouna; Daniel Huttenlocher; Dingwen Kong; Liang Lyu; Asuman Ozdaglar |
| Abstract: | We develop a decision-theoretic model of human-AI interaction to study when AI assistance improves or impairs human decision-making. A human decision-maker observes private information and receives a recommendation from an AI system, but may combine these signals imperfectly. We show that the effect of AI assistance decomposes into two main forces: the marginal informational value of the AI beyond what the human already knows, and a behavioral distortion arising from how the human uses the AI's recommendation. Central to our analysis is a micro-founded measure of informational overlap between human and AI knowledge. We study an empirically relevant form of imperfect decision-making -- correlation neglect -- whereby humans treat AI recommendations as independent of their own information despite shared evidence. Under this model, we characterize how overlap and AI capabilities shape the Human-AI interaction regime between augmentation, impairment, complementarity, and automation, and draw key insights. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.14331 |
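Correlation neglect as described above can be quantified in a toy Gaussian setting (a reader's illustration with made-up variances; the paper's model is more general): human and AI signals share a common noise term, and treating them as independent double-counts the shared evidence.

```python
# State t ~ N(0,1).  Signals h = t + e_c + e_h and a = t + e_c + e_a,
# with e_c the shared ("overlap") noise; all noises independent N(0,1).
# True joint: Var(h) = Var(a) = 3, Cov(h,a) = 2, Cov(t,h) = Cov(t,a) = 1.
S = [[3.0, 2.0], [2.0, 3.0]]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]            # = 5
Sinv = [[ S[1][1] / det, -S[0][1] / det],
        [-S[1][0] / det,  S[0][0] / det]]

# Bayesian posterior mean weights on (h, a): [1,1] @ Sinv = (1/5, 1/5).
w_bayes = [Sinv[0][0] + Sinv[1][0], Sinv[0][1] + Sinv[1][1]]
mse_bayes = 1.0 - (w_bayes[0] + w_bayes[1])            # = 1 - 2/5 = 0.6

# Correlation neglect: each signal looks like t + noise of variance 2, so the
# naive posterior mean is (h/2 + a/2) / (1 + 1/2 + 1/2) = (h + a)/4.
# Its true error is ((h+a)/4 - t) = (-2t + 2e_c + e_h + e_a)/4, hence:
mse_neglect = (4 + 4 + 1 + 1) / 16.0                   # = 0.625

print(mse_bayes, mse_neglect)  # ~0.6 vs 0.625: neglect strictly hurts
```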
| By: | Anand, Kartik; König, Philipp Johann |
| Abstract: | This article provides a practical overview for applying the global games approach to solve models with multiple equilibria that are often used in discussions on financial and macroprudential policies. Global games offer a tractable approach to resolve multiple equilibria by introducing incomplete information, thereby yielding unique equilibrium predictions. The article proceeds along the lines of a simple regime change game with strategic complementarities. Starting from the canonical regime change game with homogeneous players, it extends the discussion to include heterogeneous groups of players and interlinkages across different institutions with different sets of players. These extensions highlight not only how strategic complementarities can amplify fragility across players and institutions but also how heterogeneity and interlinkages affect the design of micro- and macroprudential policy interventions. Finally, the article briefly discusses the application of global games to dynamic coordination games. |
| Keywords: | Global Games, Multiple Equilibria, Coordination Games |
| JEL: | C72 D82 G01 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:zbw:bubdps:337465 |
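The canonical homogeneous-player regime change game that the overview above starts from has a well-known closed form under a diffuse prior and normal signal noise: the equilibrium regime threshold is $\theta^* = 1 - c$, independent of the noise scale. A small numerical sketch (a reader's illustration of the standard result, not code from the article):

```python
import math

def Phi(z: float) -> float:
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(p: float) -> float:
    """Inverse normal CDF by bisection (accurate enough for an illustration)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return (lo + hi) / 2

def regime_change_threshold(c: float, sigma: float) -> float:
    """Equilibrium regime threshold in the canonical global game:
    signals x = theta + sigma*eps, diffuse prior, attack cost c.
    Indifference:  Phi((theta* - x*)/sigma) = c  =>  x* = theta* - sigma*Phi_inv(c)
    Critical mass: Phi((x* - theta*)/sigma) = theta*
    Substituting gives theta* = Phi(-Phi_inv(c)) = 1 - c, for any sigma."""
    z_c = Phi_inv(c)
    theta_star = Phi(-z_c)
    x_star = theta_star - sigma * z_c
    # Sanity check: the critical-mass condition holds at the solution.
    assert abs(Phi((x_star - theta_star) / sigma) - theta_star) < 1e-6
    return theta_star

print(regime_change_threshold(0.3, 0.05))  # ~0.7 = 1 - c, independent of sigma
```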
| By: | Hugo Hopenhayn; Maryam Saeedi |
| Abstract: | We study optimal simple rating systems that partition sellers into a finite number of tiers. We show that optimal ratings must be threshold partitions, and that for linear supply and Cournot competition with constant marginal cost, optimal thresholds solve a k-means clustering problem requiring only the quality distribution. For convex (concave) supply functions, optimal thresholds are higher (lower) than the k-means solution. For log-concave distributions, two-tier certification captures at least 50 percent of maximum welfare gains from full disclosure, with five tiers typically achieving over 90 percent. Applications to eBay and Medicare Advantage data illustrate our method. |
| JEL: | D21 D47 D60 D82 L11 |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34889 |
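The k-means connection in the abstract above can be illustrated with a generic one-dimensional Lloyd's algorithm (a reader's sketch with made-up quality data, not the paper's implementation): cluster the quality distribution into $k$ tiers and read tier boundaries off the midpoints between adjacent cluster centers.

```python
# Toy tier design via 1-D k-means (Lloyd's algorithm), k = 2 tiers.
def kmeans_1d(qualities, k, iters=100):
    """Lloyd's algorithm in one dimension; returns sorted cluster centers."""
    xs = sorted(qualities)
    # Initialize centers at evenly spaced quantiles of the sample.
    centers = [xs[(2 * j + 1) * len(xs) // (2 * k)] for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda idx: abs(x - centers[idx]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

qualities = [1.0, 1.2, 1.1, 5.0, 5.2, 4.9, 5.1]   # two obvious quality groups
centers = kmeans_1d(qualities, k=2)
thresholds = [(a + b) / 2 for a, b in zip(centers, centers[1:])]
print(centers, thresholds)  # centers near 1.1 and 5.05; one boundary in between
```

Per the abstract, these k-means thresholds are exactly optimal only in the linear-supply Cournot case; convex (concave) supply shifts the optimal thresholds up (down).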
| By: | Olivier Cailloux (LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris Sciences et Lettres - CNRS - Centre National de la Recherche Scientifique); Matías Núñez (CREST - Centre de Recherche en Économie et Statistique - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz] - Groupe ENSAE-ENSAI - Groupe des Écoles Nationales d'Économie et Statistique - X - École polytechnique - IP Paris - Institut Polytechnique de Paris - ENSAE Paris - École Nationale de la Statistique et de l'Administration Économique - Groupe ENSAE-ENSAI - Groupe des Écoles Nationales d'Économie et Statistique - IP Paris - Institut Polytechnique de Paris - CNRS - Centre National de la Recherche Scientifique); M. Remzi Sanver (LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris Sciences et Lettres - CNRS - Centre National de la Recherche Scientifique) |
| Abstract: | We consider two-person ordinal collective choice from an axiomatic perspective. We identify two principles: minimal Rawlsianism (the chosen alternatives belong to the upper-half of both individuals' preferences) and the equal loss principle (the chosen alternatives ensure that both individuals concede "as equally as possible" from their highest ranked alternative). The equal loss principle has variants of different strength, depending on the precise definition of "as equally as possible". We consider all prominent ordinal two-person social choice rules of the literature and explore which of these principles they satisfy. Moreover, we show that minimal Rawlsianism is logically incompatible with one version of the equal loss principle that we call the minimal dispersion principle. On the other hand, there are social choice rules that satisfy the Rawlsian minimal dispersion principle where the minimal dispersion principle is restricted to alternatives within the upper-half of both individuals' preferences. |
| Date: | 2024–11–10 |
| URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-05511713 |
| By: | Hans Gersbach |
| Abstract: | We provide a rationale for Co-Voting, a decision-making procedure that blends elements of direct and representative democracy to mitigate their main inefficiencies. A randomly selected group of citizens receives voting rights on specific issues, with their collective decision aggregated with parliament’s decision according to a pre-specified weight. Using a simple model, we show that Co-Voting acts as an insurance device against both uninformed decisions in direct democracy and decision biases in representative democracy. We further introduce Co-Del-Voting, which adds strategic delegation to parliament and strictly outperforms both systems. Finally, we outline possible extensions and a roadmap for implementation. |
| Keywords: | direct democracy, representative democracy, constitution, co-voting, biases, information asymmetry |
| JEL: | D02 D70 D72 D82 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_12429 |
| By: | Federico Echenique; Teddy Mekonnen; M. Bumin Yenmez |
| Abstract: | We develop a general framework for incorporating distributional preferences in market design. We identify the structural properties of these preferences that guarantee the path independence of choice rules. In decentralized settings, a greedy rule uniquely maximizes these preferences; in centralized markets, the associated deferred-acceptance mechanism uniquely implements them. This framework subsumes canonical models, such as reserves and matroids, while accommodating complex objectives involving intersectional identities that lie beyond the scope of existing approaches. Our analysis provides unified axiomatic foundations and comparative statics for a broad class of distributional policies. |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2602.08035 |