NEP: New Economics Papers on Microeconomics
| By: | Bouvard, Matthieu; Jullien, Bruno; Martimort, David |
| Abstract: | We study how the organizational structure of producers affects competition between systems. We model systems as differentiated bundles of complementary components, where components within each system are produced either by a single firm (integration) or by two distinct firms (disintegration). When information about buyers' preferences is symmetric, disintegration typically increases prices and reduces total welfare as the less efficient system gains market share relative to integration. In addition, when buyers' preferences are private information, disintegration magnifies the quality distortions suppliers introduce to screen buyers and further reduces the market share of the more efficient system. Overall, the analysis suggests that technological standards that facilitate the combination of components from different suppliers can have adverse effects. |
| Keywords: | Composite goods; suppliers' organization; competition; double marginalization. |
| JEL: | D82 |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:tse:wpaper:131250 |
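
The price effect of disintegration described in this abstract follows the classic double-marginalization logic for complements. The textbook Cournot-complements benchmark below makes the comparison concrete; the linear demand and zero costs are illustrative assumptions, not the paper's model:

```latex
% Textbook Cournot-complements benchmark, not the paper's model:
% one system = two perfectly complementary components, zero costs,
% and linear bundle demand q = a - P at total price P.
% Integration: a single firm chooses P to maximize P(a - P):
\[ P^{I} = \tfrac{a}{2}. \]
% Disintegration: firm $i \in \{1,2\}$ prices its component at $p_i$,
% maximizing $p_i(a - p_1 - p_2)$; the first-order conditions
% $a - 2p_i - p_j = 0$ give $p_1 = p_2 = a/3$, hence
\[ P^{D} = \tfrac{2a}{3} \;>\; \tfrac{a}{2} = P^{I}, \]
% so the disintegrated system carries a higher price, as in the abstract.
```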
| By: | Navin Kartik; Elliot Lipnowski; Harry Pei |
| Abstract: | Does electoral replacement ensure that officeholders eventually act in voters' interests? We study a reputational model of accountability. Voters observe incumbents' performance and decide whether to replace them. Politicians may be "good" types who always exert effort or opportunists who may shirk. We find that good long-run outcomes are always attainable, though the mechanism and its robustness depend on economic conditions. In environments conducive to incentive provision, some equilibria feature sustained effort, yet others exhibit some long-run shirking. In the complementary case, opportunists are never fully disciplined, but selection dominates: every equilibrium eventually settles on a good politician, yielding permanent effort. |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2512.13351 |
| By: | Yifan Dai; Drew Fudenberg; Harry Pei |
| Abstract: | A sender first publicly commits to an experiment and then can privately run additional experiments and selectively disclose their outcomes to a receiver. The sender has private information about the maximal number of additional experiments they can perform (i.e., their type). We show that the sender cannot attain their commitment payoff in any equilibrium if (i) the receiver is sufficiently uncertain about their type and (ii) the sender could benefit from selective disclosure after conducting their full-commitment optimal experiment. Otherwise, there can be equilibria where the sender obtains their commitment payoff. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.05914 |
| By: | Safal Raman Aryal |
| Abstract: | Standard decision theory seeks conditions under which a preference relation can be compressed into a single real-valued function. However, when preferences are incomplete or intransitive, a single function fails to capture the agent's evaluative structure. Recent literature on multi-utility representations suggests that such preferences are better represented by families of functions. This paper provides a canonical and intrinsic geometric characterization of this family. We construct the ledger group $U(P)$, a partially ordered group that faithfully encodes the native structure of the agent's preferences in terms of trade-offs. We show that the set of all admissible utility functions is precisely the dual cone $U^*$ of this structure. This perspective shifts the focus of utility theory from the existence of a specific map to the geometry of the measurement space itself. We demonstrate the power of this framework by explicitly reconstructing the standard multi-attribute utility representation as the intersection of the abstract dual cone with a subspace of continuous functionals, and showing the impossibility of this for a set of lexicographic preferences. |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2512.07991 |
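
For context, a minimal statement of the multi-utility representation the abstract builds on, in generic notation; the construction of the ledger group $U(P)$ itself is in the paper:

```latex
% Generic statement (in the spirit of Evren and Ok, 2011); notation is
% illustrative, not the paper's. A possibly incomplete preorder
% $\succsim$ on $X$ is represented by a family $\mathcal{U}$ when
\[ x \succsim y \iff u(x) \ge u(y) \quad \text{for all } u \in \mathcal{U}. \]
% For a partially ordered group $(G, +, \le)$, the dual cone collects
% the monotone additive functionals:
\[ G^{*} = \{\, f : G \to \mathbb{R} \mid f \text{ additive},\ g \ge 0 \Rightarrow f(g) \ge 0 \,\}, \]
% and the abstract's claim reads: the admissible utilities are exactly
% the elements of $U(P)^{*}$.
```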
| By: | Navin Kartik; Frances Xu Lee; Wing Suen |
| Abstract: | We study voluntary disclosure with multiple biased senders who may bear costs for disclosing or concealing their private information. Under relevant assumptions, disclosures are strategic substitutes under a disclosure cost but complements under a concealment cost. Additional senders thus impede any sender's disclosure under a disclosure cost but promote it under a concealment cost. In the former case, a decision maker can be harmed by additional senders, even when senders have opposing interests. The effects under both kinds of message costs turn on how a sender, when concealing his information, expects others' messages to systematically sway the decision maker's belief. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.10048 |
| By: | Srinivas Tunuguntla; Carl F. Mela; Jason Pratt |
| Abstract: | Digital advertising platforms and publishers sell ad inventory that conveys targeting information, such as demographic, contextual, or behavioral audience segments, to advertisers. While revealing this information improves ad relevance, it can reduce competition and lower auction revenues. To resolve this trade-off, this paper develops a general auction mechanism -- the Information-Bundling Position Auction (IBPA) mechanism -- that leverages the targeting information to maximize publisher revenue across both search and display advertising environments. The proposed mechanism treats the ad inventory type as the publisher's private information and allocates impressions by comparing advertisers' marginal revenues. We show that IBPA resolves the trade-off between targeting precision and market thickness: publisher revenue is increasing in information granularity and decreasing in disclosure granularity. Moreover, IBPA dominates the generalized second-price (GSP) auction for any distribution of advertiser valuations and under any information or disclosure regime. We also characterize computationally efficient approximations that preserve these guarantees. Using auction-level data from a large retail media platform, we estimate advertiser valuation distributions and simulate counterfactual outcomes. Relative to GSP, IBPA increases publisher revenue by 68%, allocation rate by 19pp, advertiser welfare by 29%, and total welfare by 54%. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.09541 |
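
The allocation-by-marginal-revenue step the abstract attributes to IBPA has a standard Myerson-style core. The sketch below shows only that core under an assumed uniform value distribution; it is not the full IBPA mechanism, and all names are illustrative:

```python
# Minimal sketch of a marginal-revenue (Myerson virtual value) allocation,
# the building block the abstract says IBPA uses; NOT the full IBPA
# mechanism. Names and the uniform-distribution assumption are mine.

def virtual_value(v: float, hi: float = 1.0) -> float:
    """psi(v) = v - (1 - F(v)) / f(v); for v ~ Uniform[lo, hi] this
    simplifies to v - (hi - v), with the lower bound cancelling out."""
    return v - (hi - v)

def allocate(valuations: list[float]) -> int | None:
    """Award the impression to the bidder with the highest nonnegative
    marginal revenue; withhold it (return None) if all are negative."""
    psis = [virtual_value(v) for v in valuations]
    best = max(range(len(psis)), key=lambda i: psis[i])
    return best if psis[best] >= 0 else None

print(allocate([0.9, 0.7, 0.3]))  # -> 0 (psi = 0.8 is highest)
print(allocate([0.3, 0.2]))       # -> None (all virtual values negative)
```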
| By: | Joshua S. Gans |
| Abstract: | This paper shows that income effects create an endogenous barrier to arbitrage, allowing price discrimination to survive costless resale. A monopolist sells an indivisible good to consumers with heterogeneous incomes who can freely resell. When the good is strictly normal, a consumer's reservation price to resell increases as the purchase price decreases—lower prices leave buyers wealthier and raise their valuation of the good. The monopolist exploits this by subsidising low-income consumers to raise their reservation prices to a target that high-income consumers must also pay. The optimal schedule increases dollar-for-dollar with income in the subsidised segment, weakly dominates uniform pricing, and achieves the first-best allocation when the entire market is served. We show the mechanism extends beyond income effects: low substitutability with market alternatives generates large reservation-price responses even when income sensitivity is modest. Sustaining discrimination requires market power at the individual level—consumer-specific quantity limits—not merely aggregate output restrictions. Extensions examine multiple monopolists and endogenous privacy choices. |
| JEL: | D11 D42 L12 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34669 |
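
The abstract's key comparative static, that a lower purchase price raises the reservation resale price when the good is normal, can be written in one line; the notation is mine, not the paper's:

```latex
% One-line version of the abstract's mechanism; notation is mine, not
% the paper's. Let $r(w)$ be an owner's reservation (resale) price at
% money holding $w$, defined by the indifference
\[ u(1, w) = u(0, w + r(w)). \]
% The good being strictly normal means $r'(w) > 0$: richer owners value
% it more. A consumer with income $m$ who bought at price $p$ holds
% $w = m - p$, so her price to part with the good is
\[ r(m - p), \quad \text{decreasing in } p. \]
% A subsidy (lower $p$) therefore raises the resale reservation price,
% which the monopolist pushes up to the price charged to the rich.
```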
| By: | Angelo Enrico Petralia |
| Abstract: | We investigate the choice of a decision maker (DM) who harms herself by maximizing, in each menu, some distortion of her true preference in which the first $i$ alternatives are moved, in reverse order, to the bottom. This pattern has no empirical power, but it allows us to define a degree of self-punishment, which measures the extent of the denial of pleasure adopted by the DM. We characterize irrational choices displaying the lowest degree of self-punishment, and we fully identify the preferences that explain the DM's picks by a minimal denial of pleasure. These datasets account for some well-known selection biases, such as second-best procedures and the avoidance of the handicapped. Necessary and sufficient conditions for the estimation of the degree of self-punishment of a choice are singled out. Moreover, the linear orders whose harmful distortions justify choice data are partially elicited. Finally, we offer a simple characterization of the choice behavior that exhibits the highest degree of self-punishment, and we show that this subclass comprises almost all choices. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.01421 |
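
The distortion is concrete enough to state in a few lines of code. This is a minimal sketch under my own naming, not the paper's formalism:

```python
# Minimal sketch of the distortion described in the abstract above: the
# first i alternatives of the true preference are moved, in reverse
# order, to the bottom, and the DM maximizes the distorted order on
# each menu. Function and variable names are mine, not the paper's.

def distort(preference: list[str], i: int) -> list[str]:
    """Move the top i alternatives, in reverse order, to the bottom."""
    return preference[i:] + preference[:i][::-1]

def choose(menu: set[str], preference: list[str], i: int) -> str:
    """Pick the distorted-order maximum among the menu's alternatives."""
    for x in distort(preference, i):
        if x in menu:
            return x
    raise ValueError("menu contains no ranked alternative")

true_pref = ["a", "b", "c", "d"]              # a is truly best
print(distort(true_pref, 1))                  # ['b', 'c', 'd', 'a']
print(choose({"a", "b", "c"}, true_pref, 1))  # 'b': a second-best pattern
```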
| By: | Zafer Kanik; Zaruhi Hakobyan |
| Abstract: | Social media platforms systematically reward popularity but not authenticity, incentivizing users to strategically tailor their expression for attention. We develop a utilitarian framework addressing strategic expression in social media. Agents hold fixed heterogeneous authentic opinions and derive (i) utility gains from the popularity of their own posts, measured by likes received, and (ii) utility gains (losses) from exposure to content that aligns with (diverges from) their authentic opinion. Social media interaction acts as a state-dependent welfare amplifier: light topics generate Pareto improvements, whereas intense topics make everyone worse off in a polarized society (e.g., political debates during elections). Moreover, strategic expression amplifies social media polarization during polarized events while dampening it during unified events (e.g., national celebrations). Consequently, strategic distortions magnify welfare outcomes, expanding aggregate gains on light topics while exacerbating losses on intense, polarized ones. Counterintuitively, strategic agents often face a popularity trap: posting a more popular opinion is individually optimal, yet collective action by similar agents eliminates their authentic opinion from the platform, leaving them worse off than under the authentic-expression benchmark. Preference-based algorithms, widely used by platforms, or homophilic exposures discipline popularity-driven behavior, narrowing the popularity-trap region and limiting its welfare effects. Our framework fills a critical gap in the social media literature by providing a microfoundation for user welfare that maps to observable metrics, while also introducing popularity incentives as a previously unexplored channel in social networks, distinct from the canonical mechanisms of conformity, learning, persuasion, and (mis)information transmission. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.01370 |
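
One way to read parts (i) and (ii) of the abstract is as a two-term utility. The functional form below is purely illustrative, with every symbol an assumption of mine rather than the paper's specification:

```latex
% Purely illustrative reading of parts (i)-(ii); every symbol here is
% an assumption of mine, not the paper's specification. Agent $i$ holds
% a fixed authentic opinion $\theta_i$ and strategically posts $x_i$:
\[ u_i(x) = \alpha\, L_i(x_i) + \beta \sum_{j \in N(i)} s(x_j, \theta_i), \]
% where $L_i(x_i)$ is the number of likes received (popularity) and
% $s(\cdot, \cdot)$ is positive for aligned content, negative for
% divergent content. The popularity trap: each agent gains by tilting
% $x_i$ away from $\theta_i$ toward the popular opinion, yet when all
% similar agents do so, their authentic opinion vanishes from the feed.
```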
| By: | Romain Biard (Université Marie et Louis Pasteur, LmB, UMR6623, F-25000 Besançon, France); Mostapha Diss (Université Marie et Louis Pasteur, CRESE UR3190, F-25000 Besançon, France); Salma Larabi (Université Marie et Louis Pasteur, CRESE UR3190, F-25000 Besançon, France) |
| Abstract: | We propose a weighted minority voting mechanism within a two-round sequential voting process, in which all individuals retain their voting rights in the second round but with different weights depending on the first-round outcome. In a utilitarian framework where individuals have a given utility function that depends on the outcomes of each round, first-round winners are identified and vote with reduced weight in the second round, while losers retain full weight. By giving greater weight to first-round losers, this design ensures that first-round winners continue to contribute to the final decision without dominating it, thereby mitigating repeated disadvantages for losers. We then compare the expected aggregate utility of society across different levels of the second-round weight assigned to first-round winners, including both the simple majority rule – where all voters carry equal weight in both rounds – and the limiting case of minority voting – where first-round winners receive no weight in the second round. To do so, we analyze two models: one in which individual utility derives solely from material payoffs, and another in which a form of harmony is considered, whereby individuals incur a utility loss if others repeatedly belong to the losing minority. This analysis allows us to assess how strategic behavior affects the effectiveness of the proposed mechanism. |
| Keywords: | Voting, Minority Voting, Simple Majority, Utilitarianism, Harmony. |
| JEL: | C72 D70 D71 D72 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:crb:wpaper:2026-01 |
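
A stylized simulation of the two-round rule is easy to write down. The ballot encoding and names below are my own reading of the abstract, with w = 1 as the simple-majority benchmark and w = 0 as the minority-voting limit:

```python
# Stylized sketch of the mechanism in the abstract: everyone votes in
# both rounds, but first-round winners carry a reduced weight w in
# [0, 1] in the second round (w = 1: simple majority; w = 0: pure
# minority voting). Ballots are booleans; names are mine.

def weighted_minority_vote(round1: list[bool], round2: list[bool],
                           w: float) -> bool:
    """Return the second-round outcome under loser-favoring weights."""
    outcome1 = sum(round1) > len(round1) / 2           # first-round winner
    weights = [w if b == outcome1 else 1.0 for b in round1]
    yes = sum(wt for wt, b in zip(weights, round2) if b)
    return yes > sum(weights) / 2

r1 = [True, True, True, False, False]   # 3-2: the 'True' side wins round 1
r2 = [True, True, True, False, False]   # same ballots cast in round 2
print(weighted_minority_vote(r1, r2, w=1.0))  # True  (simple majority)
print(weighted_minority_vote(r1, r2, w=0.0))  # False (losers decide)
```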
| By: | Giuseppe Attanasi; Giuseppe Ciccarone; Giovanni Di Bartolomeo |
| Abstract: | We propose a simple model that connects creativity to rational inattention, introducing a new formal channel through which imprecise information generates creative benefits. While imprecision usually entails costs, it can also make creativity a complementary dimension of information acquisition, reshaping the trade-off between attention and decision quality. Our main result is that creativity reduces the effective cost of information processing. |
| Keywords: | Selective Attention; Information Processing Costs; Cognitive Constraints; Innovation |
| JEL: | D90 O31 D80 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:sap:wpaper:wp268 |
| By: | Masaki Aoyagi |
| Abstract: | This paper studies the assignment of a treatment by a social planner when the valuations of the treatment are interdependent across individuals in the population. Specifically, an individual's valuation of the treatment is influenced by the treatment status of some group of individuals and is positive if and only if at least one member of the group is treated. The identities of those who have positive spillovers on an individual constitute his private type, and the social planner assigns the treatment based on the reported types, aiming to maximize the number of treated individuals less subsidies. We study the properties of an assignment mechanism that offers a subsidy to a single individual and provides the treatment to everyone over whom this individual has positive spillovers, either directly or indirectly. We use the percolation theorem of McDiarmid (1981) to show that the number of treated individuals under such a mechanism is independent of whether the spillover relationship is reciprocal, and is asymptotically optimal as the population grows large. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:dpr:wpaper:1302 |
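
The mechanism's treatment set is a reachability computation on the reported spillover graph. A minimal sketch follows; the graph, names, and encoding are illustrative, and the asymptotic-optimality argument in the paper relies on percolation, not on this code:

```python
# Sketch of the allocation rule described in the abstract: subsidize one
# seed individual and treat everyone the seed reaches, directly or
# indirectly, through reported spillover links. Illustrative only.
from collections import deque

def treated_set(spillovers: dict[int, set[int]], seed: int) -> set[int]:
    """All individuals reachable from `seed` along spillover edges."""
    seen, queue = {seed}, deque([seed])
    while queue:
        i = queue.popleft()
        for j in spillovers.get(i, set()) - seen:
            seen.add(j)
            queue.append(j)
    return seen

# i -> set of individuals on whom i has a positive spillover
g = {0: {1, 2}, 1: {3}, 2: set(), 3: set(), 4: {0}}
print(treated_set(g, seed=0))  # {0, 1, 2, 3}; individual 4 is untreated
```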
| By: | Tom Johnston; Michael Savery; Alex Scott; Bassel Tarbush |
| Abstract: | We study the typical structure of games in terms of their connectivity properties. A game is said to be `connected' if it has a pure Nash equilibrium and the property that there is a best-response path from every action profile which is not a pure Nash equilibrium to every pure Nash equilibrium, and it is generic if it has no indifferences. In previous work we showed that, among all $n$-player $k$-action generic games that admit a pure Nash equilibrium, the fraction that are connected tends to $1$ as $n$ gets sufficiently large relative to $k$. The present paper considers the large-$k$ regime, which behaves differently: we show that the connected fraction tends to $1-\zeta_n$ as $k$ gets large, where $\zeta_n>0$. In other words, a constant fraction of many-action games are not connected. However, $\zeta_n$ is small and tends to $0$ rapidly with $n$, so as $n$ increases all but a vanishingly small fraction of many-player-many-action games are connected. Since connectedness is conducive to equilibrium convergence we obtain, by implication, that there is a simple adaptive dynamic that is guaranteed to lead to a pure Nash equilibrium in all but a vanishingly small fraction of generic games that have one. Our results are based on new probabilistic and combinatorial arguments which allow us to address the large-$k$ regime that the approach used in our previous work could not tackle. We thus complement our previous work to provide a more complete picture of game connectivity across different regimes. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.05965 |
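
Connectedness matters because best-response paths lead to pure equilibria. Below is a toy version of such an adaptive dynamic for two-player games; it is illustrative, not the authors' construction:

```python
# Hedged sketch of a "simple adaptive dynamic" of the kind the abstract
# alludes to: best-response dynamics on a 2-player game given by payoff
# matrices A (row player) and B (column player). In a connected game,
# such paths reach a pure Nash equilibrium.
import numpy as np

def best_response_dynamics(A: np.ndarray, B: np.ndarray,
                           start=(0, 0), max_steps=100):
    """Iterate unilateral best responses; return a pure NE or None."""
    i, j = start
    for _ in range(max_steps):
        bi = int(np.argmax(A[:, j]))   # row player's best response to j
        bj = int(np.argmax(B[i, :]))   # column player's best response to i
        if bi == i and bj == j:
            return (i, j)              # mutual best responses: pure NE
        i, j = bi, (bj if bi == i else j)  # one player moves at a time
    return None                        # cycled without finding a pure NE

A = np.array([[3, 0], [2, 1]])  # coordination-like payoffs, row player
B = np.array([[3, 2], [0, 1]])  # column player
print(best_response_dynamics(A, B, start=(1, 0)))  # (0, 0), a pure NE
```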
| By: | Antonio Cabrales; Esther Hauk |
| Abstract: | This paper develops a model to understand the conditions under which groups within a society choose between a collaborative and an individualist approach to education. A key feature of the model is the presence of externalities, which can lead to multiple equilibria. This framework helps explain the persistence of diverse local educational cultures, even within relatively homogeneous countries. These features yield important and subtle insights for public policy. Policymakers may need to focus either on shifting beliefs or enhancing the abilities of parents and teachers. We also analyze the incentives driving segregation in education and explore potential policy responses. |
| Keywords: | Collaborative learning, coordination, education policy, externalities, local interaction, multiple equilibria, parental educational styles, peer effects, school choice, segregation |
| JEL: | I21 D62 C72 I28 |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:bge:wpaper:1546 |
| By: | Shengyu Cao; Ming Hu |
| Abstract: | We study how delegating pricing to large language models (LLMs) can facilitate collusion in a duopoly when both sellers rely on the same pre-trained model. The LLM is characterized by (i) a propensity parameter capturing its internal bias toward high-price recommendations and (ii) an output-fidelity parameter measuring how tightly outputs track that bias; the propensity evolves through retraining. We show that configuring LLMs for robustness and reproducibility can induce collusion via a phase transition: there exists a critical output-fidelity threshold that pins down long-run behavior. Below it, competitive pricing is the unique long-run outcome. Above it, the system is bistable, with competitive and collusive pricing both locally stable and the realized outcome determined by the model's initial preference. The collusive regime resembles tacit collusion: prices are elevated on average, yet occasional low-price recommendations provide plausible deniability. With perfect fidelity, full collusion emerges from any interior initial condition. For finite training batches of size $b$, infrequent retraining (driven by computational costs) further amplifies collusion: conditional on starting in the collusive basin, the probability of collusion approaches one as $b$ grows, since larger batches dampen stochastic fluctuations that might otherwise tip the system toward competition. The indeterminacy region shrinks at rate $O(1/\sqrt{b})$. |
| Date: | 2026–01 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2601.01279 |
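
To illustrate how an output-fidelity threshold can separate a unique competitive outcome from bistability, here is a deliberately stylized dynamic of my own construction. It mimics the abstract's ingredients (propensity, fidelity, batch retraining) but is not the paper's model, and its critical fidelity of 0.5 is an artifact of the chosen reinforcement curve:

```python
# Stylized dynamic (my construction, not the paper's model) in which an
# output-fidelity threshold creates bistability under batch retraining.
# rho: propensity toward the high price; phi: output fidelity;
# f: a reinforcement curve that amplifies the dominant tendency.
import random

def step(rho: float, phi: float, b: int) -> float:
    f = rho**2 / (rho**2 + (1 - rho)**2)        # reinforcement curve
    q = (1 - phi) / 2 + phi * f                 # P(high-price output)
    highs = sum(random.random() < q for _ in range(b))
    return highs / b                            # retrain on the batch

def long_run(rho0: float, phi: float, b: int, t: int = 500) -> float:
    rho = rho0
    for _ in range(t):
        rho = step(rho, phi, b)
    return rho

random.seed(0)
# In this toy, the critical fidelity is phi* = 0.5: below it the
# propensity drifts to 1/2 from any start; above it, the start decides
# the long-run regime, and larger batches b damp escapes from a basin.
print(round(long_run(0.9, phi=0.3, b=400), 2))  # ~0.5  (unique outcome)
print(round(long_run(0.9, phi=0.9, b=400), 2))  # ~0.95 (collusive basin)
print(round(long_run(0.1, phi=0.9, b=400), 2))  # ~0.05 (competitive basin)
```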