on Microeconomics |
| By: | Roland Bénabou (Princeton University); Jean Tirole (TSE-R - Toulouse School of Economics - UT Capitole - Université Toulouse Capitole - Comue de Toulouse - Communauté d'universités et établissements de Toulouse - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, IAST - Institute for Advanced Study in Toulouse) |
| Abstract: | We analyze how private decisions and optimal public policies are shaped by personal and societal preferences, material incentives, and social norms. We show how honor and stigma interact with incentives and derive optimal taxation. We then analyze the expressive role of law as embodying society's values and identify when it calls for a weakening or a strengthening of incentives. The law should be softened when it signals agents' general willingness to contribute to the public good and toughened when it signals social externalities. We also shed light on norms-based interventions, societies' resistance to economists' messages, and the avoidance of cruel and unusual punishments. |
| Keywords: | Expressive law, Social norms, Incentives, Motivation |
| Date: | 2026–02 |
| URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-05577272 |
| By: | Rui Sun; Yi Zhang |
| Abstract: | A seller investigates a buyer before setting prices, balancing the cost of acquiring information against the gain from tailoring the contract to the buyer's private type. The optimal signal is coarse: no matter how rich the type space, the seller never needs more than three outcomes per buyer. The bound equals the number of independent post-signal decisions plus one, a quantity we call the effective policy dimension. Screening involves two decisions, whether to allocate and what to charge, giving the ternary bound. Limited liability is the source: without it, the price is pinned by the envelope, only the allocation decision remains, and signals are binary as in monitoring. The Myerson exclusion rule is an artifact of not investigating. With investigation, every marginal buyer trades with positive probability, governed by a universal function that connects information design to rational inattention. The bound holds for any strictly convex information cost. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.04405 |
| By: | Sergei Kichko (Department of Economics and Management, University of Trento and CESifo); Marco A. Marini (Department of Social and Economic Sciences, Sapienza University of Rome); Riccardo D. Saulle (Department of Economics and Management, University of Padova); Jacques-Francois Thisse (CORE-UCLouvain and CEPR) |
| Abstract: | This paper extends the CES model of monopolistic competition to the case where varieties are both horizontally and vertically differentiated. A distinctive feature of our model is the presence of a network externality, which operates through the number of varieties available at each quality level. Depending on the quality gap, there are corner equilibria in which consumers purchase only high-quality or low-quality varieties, or an interior equilibrium in which consumers are split between the two qualities. Unlike the CES model of monopolistic competition, the equilibrium is never efficient and the market may even select the outcome with the lowest surplus. |
| Keywords: | Monopolistic competition, vertical differentiation, horizontal differentiation |
| JEL: | D42 D43 L1 L12 L13 L41 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:fem:femwpa:2026.11 |
| By: | Tsuyoshi Toshimitsu (School of Economics, Kwansei Gakuin University) |
| Abstract: | Network connectivity is an important function in network industries. Based on the framework of a Hotelling model, we consider the impact of connectivity between network goods on product R&D incentives and profits. To explore the problems, we focus on the following three perspectives: market coverage (i.e., full and partial coverage), consumer expectations (i.e., rational and active expectations), and asymmetric firms (i.e., a high-quality and a low-quality firm). Our findings are as follows. In the full market coverage case, the impact of connectivity on product R&D activities and profits depends on the type of consumer expectations and the difference in the quality of the firms. However, in the partial market coverage case, as connectivity improves, product R&D activities and profits increase, irrespective of the type of consumer expectations and the difference in the quality of the firms. |
| Keywords: | Network externality, Connectivity, Compatibility, Horizontal interoperability, R&D competition, Market coverage, Consumers' expectations, Firms' heterogeneity, Quality |
| JEL: | L13 L15 L31 L32 D43 |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:kgu:wpaper:309 |
| By: | Tsuyoshi Toshimitsu (School of Economics, Kwansei Gakuin University) |
| Abstract: | We conduct welfare analysis of an improvement in compatibility in a network goods market, where firms compete on price and research and development (R&D) activity. Using a Hotelling model, we explore the impact of compatibility on a firm's R&D activity and on producer surplus, consumer surplus, and social welfare. Focusing on the difference in the formation of consumer expectations for network sizes, i.e., rational and active expectations, we demonstrate the following. First, under rational (active) expectations, an improvement in compatibility reduces (does not affect) a firm's R&D activity, but increases (decreases) consumer surplus. However, except for perfect compatibility, although the level of R&D activity is greater under rational expectations than under active expectations, consumer surplus is smaller under rational expectations than under active expectations. Second, regardless of the difference in the formation of consumer expectations, an improvement in compatibility increases producer surplus and social welfare. In addition, producer surplus and social welfare are greater under rational expectations than under active expectations. Finally, we consider the implications of social optimality for perfect compatibility. |
| Keywords: | Network externality, compatibility, strategic R&D competition, Hotelling linear market, fulfilled expectation equilibrium, rational expectation, active expectation |
| JEL: | L13 L15 L31 L32 D43 |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:kgu:wpaper:310 |
| By: | Ryoga Doi; Kensei Nakamura |
| Abstract: | This paper studies a dominance relation among scoring rules with respect to avoiding the selection of the Condorcet loser. In a voting model with three or more alternatives, we say that a scoring rule $f$ Condorcet-loser-dominates (CL-dominates) another scoring rule $g$ if the set of profiles where $f$ selects a Condorcet loser is a proper subset of the set where $g$ does. We show that the Borda rule not only CL-dominates all other scoring rules, but also is the only scoring rule that CL-dominates some scoring rule. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.05916 |
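The Condorcet-loser property that drives the dominance relation above can be checked computationally. The following is a minimal illustrative sketch (not the authors' formal framework): on the classic 9-voter profile 4×(a≻b≻c), 3×(b≻c≻a), 2×(c≻b≻a), plurality with score vector (1, 0, 0) elects the Condorcet loser a, while Borda with (2, 1, 0) elects b.

```python
def condorcet_loser(profile, alts):
    """Alternative that loses every pairwise majority contest, or None.
    A ranking r places r[0] first; r.index(a) > r.index(b) means the
    voter ranks a below b."""
    n = len(profile)
    for a in alts:
        if all(sum(r.index(a) > r.index(b) for r in profile) > n / 2
               for b in alts if b != a):
            return a
    return None

def scoring_winner(profile, alts, scores):
    """Winner under the scoring rule awarding scores[i] points to rank i
    (ties broken by order in alts)."""
    total = {a: sum(scores[r.index(a)] for r in profile) for a in alts}
    return max(total, key=total.get)

profile = [('a', 'b', 'c')] * 4 + [('b', 'c', 'a')] * 3 + [('c', 'b', 'a')] * 2
```

Here `condorcet_loser(profile, ['a','b','c'])` returns `'a'`, which plurality selects and Borda avoids, consistent with the dominance direction stated in the abstract.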
| By: | Ashwin Kambhampati |
| Abstract: | A central challenge in mechanism design is to identify mechanisms whose performance is robust under uncertainty about the environment. The maxmin optimality criterion is commonly used for this purpose, but it often yields a large and economically uninformative set of mechanisms. This paper proposes a lexicographic approach to refining the maxmin criterion and characterizes the efficiency of optimal mechanisms. In canonical screening and auction environments, the strongest refinement, proper robustness, selects ex post efficient mechanisms. By contrast, in a public good provision environment, it identifies the precise form of optimal inefficiencies, which become severe in large economies. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.06105 |
| By: | Mathieu Martin; Linus Thierry Nana Noumi; Zéphirin Nganmeni; Ashley Piggins (CY Cergy Paris Université, THEMA) |
| Abstract: | In spatial voting games, valence is traditionally modeled as a non-ideological attribute that is uniformly assigned to a candidate by all voters, independent of their policy preferences. In its original formulation, additive valence is assumed to be entirely detached from the candidates' policy considerations. In this paper, we explore an alternative framework in which additive valence interacts with the candidates' policy platforms. Each candidate possesses an individual valence level, but voters choose to recognize this valence only if the candidate is perceived as competent in defending their proposed policy. This perceived competence is assumed to be common knowledge among voters. The core objective of this study is to determine the conditions under which Nash equilibria arise in the context of electoral competition with policy-dependent additive valence. |
| Keywords: | Spatial voting, Electoral competition, Dual valence, Equilibrium |
| JEL: | D70 D71 D72 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:ema:worpap:2026-03 |
| By: | Alessandro Doldi; Marco Frittelli; Marco Maggis |
| Abstract: | Within a general semimartingale framework, we study the relationship between collective market efficiency and individual rationality. We derive a necessary and sufficient condition for the existence of (possibly zero-sum) exchanges among agents that strictly increase their indirect utilities and characterize this condition in terms of the compatibility between agents' preferences and collective pricing measures. The framework applies to both continuous- and discrete-time models and clarifies when cooperation leads to a strict improvement in each participating agent's indirect utility. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.02862 |
| By: | Mathieu Martin; Linus Thierry Nana Noumi; Zéphirin Nganmeni; Ashley Piggins (CY Cergy Paris Université, THEMA) |
| Abstract: | A long-standing foundational problem in the spatial theory of politics is the generic emptiness of the majority core when there is more than one dimension in the policy space. This implies that, in general, we cannot predict where win-motivated candidates will locate in an electoral contest decided by majority rule. We assume that the candidates face some uncertainty: they observe each voter’s ideal point in the policy space but not their indifference surfaces. Given any proper spatial voting game, we first identify the set of imprudent positions in the space. If a candidate adopts an imprudent position, then there exists a position for their opponent that will defeat them for certain. We introduce a new concept, the prudent core, as the set of positions that are not imprudent in this sense. We show that the prudent core is always non-empty. With majority voting and an odd number of voters, the prudent core equals the dimension-by-dimension median. The prudent core equals the majority core whenever the latter is non-empty. |
| Keywords: | Spatial theory of politics, median voter theorem, prudent core, prudence |
| JEL: | D71 D72 D81 |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:ema:worpap:2026-04 |
| By: | Negin Golrezaei; MohammadTaghi Hajiaghayi; Suho Shin |
| Abstract: | In the contest design problem, there are $n$ strategic contestants, each of whom decides an effort level. A contest designer with a fixed budget must then design a mechanism that allocates a prize $p_i$ to the $i$-th rank based on the outcome, to incentivize contestants to exert higher costly efforts and induce high-quality outcomes. In this paper, we significantly deepen our understanding of optimal mechanisms under general settings by considering nonconvex objectives in contestants' qualities. Notably, our results accommodate the following objectives: (i) any convex combination of user welfare (motivated by recommender systems) and the average quality of contestants, and (ii) arbitrary posynomials over quality, both of which may be neither convex nor concave. In particular, these subsume classic measures such as social welfare, order statistics, and (inverse) S-shaped functions, which have received little or no attention in the contest literature to the best of our knowledge. Surprisingly, across all these regimes, we show that the optimal mechanism is highly structured: it allocates a potentially higher prize to the first-ranked contestant, zero to the last-ranked one, and equal prizes to all intermediate contestants, i.e., $p_1 \ge p_2 = \ldots = p_{n-1} \ge p_n = 0$. Thanks to the structural characterization, we obtain a fully polynomial-time approximation scheme given a value oracle. Our technical results rely on Schur-convexity of Bernstein basis polynomial-weighted functions, total positivity, and the variation-diminishing property. En route to our results, we obtain a surprising reduction from a structured high-dimensional nonconvex optimization to a single-dimensional optimization by connecting the shape of the gradient sequences of the objective function to the number of transition points in the optimum, which might be of independent interest. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.04844 |
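The characterized prize shape $p_1 \ge p_2 = \ldots = p_{n-1} \ge p_n = 0$ reduces the design space to essentially one free parameter given the budget. A minimal sketch (the `top_share` parameter is purely illustrative, not the paper's optimum, which depends on the objective):

```python
def structured_prizes(n, budget, top_share):
    """Build a prize vector of the characterized shape
    p1 >= p2 = ... = p_{n-1} >= p_n = 0 that exhausts the budget.
    top_share = p1 / budget is a free illustrative parameter."""
    p1 = top_share * budget
    mid = (budget - p1) / (n - 2)   # equal prizes for ranks 2..n-1
    assert p1 >= mid >= 0, "top_share too small for the characterized shape"
    return [p1] + [mid] * (n - 2) + [0.0]
```

For instance, `structured_prizes(5, 100, 0.4)` gives `[40.0, 20.0, 20.0, 20.0, 0.0]`: one elevated top prize, equal intermediate prizes, nothing for last.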
| By: | Edmond Baranès (Unknown); Ulrich Hege (TSE-R - Toulouse School of Economics - UT Capitole - Université Toulouse Capitole - Comue de Toulouse - Communauté d'universités et établissements de Toulouse - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Jin-Hyuk Kim (Unknown) |
| Abstract: | We present a stylized model of three entrepreneurial financing methods based on two tradeoffs. First, token financing and crowdfunding reveal consumer-investors' demand for the product prior to investment, but upfront purchase weakens the entrepreneur's incentive to deliver. Second, token financing permits a bubble component in token value, but reduces consumer surplus. |
| Keywords: | crowdfunding, entrepreneurial financing, initial coin offering, token regulation, utility token |
| Date: | 2026–03–30 |
| URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-05578939 |
| By: | Daron Acemoglu; Tianyi Lin; Asuman Ozdaglar; James Siderius |
| Abstract: | Artificial intelligence (AI) changes social learning when aggregated outputs become training data for future predictions. To study this, we extend the DeGroot model by introducing an AI aggregator that trains on population beliefs and feeds synthesized signals back to agents. We define the learning gap as the deviation of long-run beliefs from the efficient benchmark, allowing us to capture how AI aggregation affects learning. Our main result identifies a threshold in the speed of updating: when the aggregator updates too quickly, there is no positive-measure set of training weights that robustly improves learning across a broad class of environments, whereas such weights exist when updating is sufficiently slow. We then compare global and local architectures. Local aggregators trained on proximate or topic-specific data robustly improve learning in all environments. Consequently, replacing specialized local aggregators with a single global aggregator worsens learning in at least one dimension of the state. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.04906 |
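The feedback loop described above — the aggregator trains on population beliefs and feeds a synthesized signal back into agents' updating — can be sketched in a few lines. This is an illustrative toy, not the paper's model: the mixing weight `alpha`, the updating speed `lam`, and the averaging rule are assumptions for demonstration only.

```python
def simulate(W, beliefs, alpha, lam, T=200):
    """DeGroot updating with a feedback aggregator (illustrative sketch).

    W       -- row-stochastic trust matrix, as a list of rows
    beliefs -- initial agent beliefs
    alpha   -- weight agents place on the aggregator's synthesized signal
    lam     -- speed at which the aggregator retrains on current beliefs
    """
    x = list(beliefs)
    n = len(x)
    agg = sum(x) / n                                     # aggregator's initial state
    for _ in range(T):
        agg = (1 - lam) * agg + lam * sum(x) / n         # aggregator trains on beliefs
        x = [(1 - alpha) * sum(W[i][j] * x[j] for j in range(n)) + alpha * agg
             for i in range(n)]                          # agents mix neighbors + signal
    return x, agg
```

With a doubly stochastic `W` the population mean is preserved, so beliefs and the aggregator converge together to the initial mean; the paper's question is when such feedback helps or hurts relative to the efficient benchmark.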
| By: | Rui Sun |
| Abstract: | A principal with cheap capital optimally forces her counterparty to borrow at above-market rates. The reason: the form of finance is a screening device. Advances provide liquidity but pool types; contingent transfers separate types, but, because they are not pledgeable, impose financing costs. The optimal contract preserves outside-finance exposure to maintain screening power. Two sufficient statistics pin down the optimal advance share. With complementary counterparties, a uniform subsidy that cheapens finance across every relationship can reduce the value of each. This explains the coexistence of early payment and contingent compensation in trade credit, venture capital, and internal capital markets. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.06447 |
| By: | Denis Claude (LEDi - Laboratoire d'Economie de Dijon [Dijon] - UBE - Université Bourgogne Europe); Mabel Tidball (CEE-M - Centre d'Economie de l'Environnement - Montpellier - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - Institut Agro Montpellier - Institut Agro - Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement - UM - Université de Montpellier) |
| Abstract: | This paper revisits Heinrich F. von Stackelberg's original description of leader-follower games under incomplete information, exploring how learning dynamics shape strategic interaction. The leader iteratively updates its conjecture about the follower's reaction function before choosing an activity level that maximizes its payoff. The follower, in turn, responds optimally to each activity level, revealing information that the leader uses to refine its conjecture. Assuming linear conjectures, a smooth updating process à la Jean-Marie and Tidball [2006], and quadratic payoff functions, we establish conditions under which the learning process converges asymptotically to a self-confirming steady state. We characterize the resulting activity levels and payoffs in two canonical environments: a sequential partnership game and a sequential duopoly game with quantity competition. We then compare the learning outcomes to both the (complete information) Stackelberg and the cartel solution. In the process, we find conditions under which the lack of information and the resulting strategic ambiguity lead to higher joint payoffs, and under which usual intuitions about the first-mover advantage need qualifications. |
| Keywords: | Leader-follower game, incomplete information, conjectures |
| Date: | 2026 |
| URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-05571970 |
| By: | SHIMIZU, Chihiro |
| Abstract: | We develop a dynamic theory of housing market collapse in which population decline interacts with information friction to produce irreversible market death. Declining transactions raise valuation uncertainty, eroding broker profitability and eliminating the intermediation channel through which most transactions are completed. We embed Jovanovic (1982)–Hopenhayn (1992) industry dynamics with forward-looking broker value functions and establish three main theorems, each proved in full: a tipping-point theorem characterising the separatrix between functioning and dead-market attractors; a dual-exit acceleration theorem showing that economic and demographic exit interact multiplicatively to compress the collapse timeline; and a welfare theorem establishing that disclosure is socially underprovided, with a convex marginal social benefit. The model delivers sharp monotone comparative statics throughout. |
| Keywords: | Housing market death, broker exit, information externalities, tipping points, dual-exit dynamics, akiya crisis |
| JEL: | D83 J11 R21 D92 R23 |
| Date: | 2026–03 |
| URL: | https://d.repec.org/n?u=RePEc:hit:rcesrs:dp26-10 |
| By: | Yu-Chin Hsu; Tong Li; Chu-An Liu; Hidenori Takahashi |
| Abstract: | This paper develops a unified framework for testing monotonicity of Bayesian Nash equilibrium strategies in unobserved types in games of incomplete information. We show that, under symmetric independent private types, monotonicity of differentiable equilibrium strategies is equivalent to monotonicity of a quasi-inverse strategy identified from observed actions. This allows the problem to be reformulated as testing a countable set of moment inequalities involving unconditional expectations. We propose a Cramér-von Mises-type statistic with bootstrap critical values. The method accommodates covariates and game heterogeneity. Monte Carlo simulations demonstrate finite-sample performance, and an application to procurement auctions illustrates cartel detection. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.06643 |
| By: | Karun Adusumilli |
| Abstract: | This article introduces a framework for evaluating statistical decisions under both prior ambiguity and likelihood misspecification. We begin with an ambiguity set - a frequentist model that pairs a possibly misspecified likelihood with every possible prior - and uniformly expand it by a Kullback-Leibler radius to accommodate likelihood misspecification. We show that optimal decisions under this framework are equivalent to minimax decisions with an exponentially tilted loss function. Misspecification manifests as an exponential tilting of the loss, while ambiguity corresponds to a search for the least favorable prior. This separation between ambiguity and misspecification enables local asymptotic analysis under global misspecification, achieved by localizing the priors alone. Remarkably, for both estimation and treatment assignment, we show that optimal decisions coincide with those under correct specification, regardless of the degree of misspecification. These results extend to semi-parametric models. As a practical consequence, our findings imply that practitioners should prefer maximum likelihood over the simulated method of moments, and efficient GMM estimators - such as two-step GMM - over diagonally weighted alternatives. |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:arx:papers:2604.05327 |
| By: | Franz Dietrich (Centre d'Economie de la Sorbonne, Paris School of Economics, CNRS) |
| Abstract: | Economists routinely measure individual welfare by (von Neumann-Morgenstern) utility, for instance when analysing welfare intensity, social welfare, or welfare inequality. Is this welfare measure justified? Natural working hypotheses turn out to imply a different measure. It overcomes familiar problems of utility, by faithfully capturing non-ordinal information, such as welfare intensity - despite still resting on purely ordinal evidence, such as revealed preferences or self-reported welfare comparisons. Social welfare analysis changes when based on this new individual welfare measure rather than utility. For instance, Harsanyi's 'utilitarian theorem' now supports prioritarianism. We compare the standard utility-based versions of utilitarianism and prioritarianism with new versions based on our welfare measure. We show that utility is a hybrid object determined by two rival influences: welfare and the attitude to intrinsic risk, i.e., to risk in welfare. A new version of Harsanyi's theorem shows that Harsanyi makes the questionable implicit assumption that society is neutral to intrinsic risk, overruling people's risk attitudes. We thus propose risk-impartial utilitarianism, which adopts people's (average) risk attitude. |
| Keywords: | welfare; utility; risk attitude; social welfare; utilitarianism; Harsanyi-Sen debate; Harsanyi's Theorem; Bernoulli's hypothesis |
| JEL: | D00 D60 D63 D69 D70 D80 |
| Date: | 2025–01 |
| URL: | https://d.repec.org/n?u=RePEc:mse:cesdoc:25003rrr |
| By: | Alma Cohen; Alon Klement; Zvika Neeman; Eilon Solan |
| Abstract: | In many institutional settings, k items are selected with the goal of representing the underlying distribution of claims, opinions, or characteristics in a large population. We study environments with two adversarial parties whose preferences over the selected items are commonly known and opposed. We propose the Quantile Mechanism: one party partitions the population into k disjoint subsets, and the other selects one item from each subset. We show that this procedure is optimally representative among all feasible mechanisms, and illustrate its use in jury selection, multi-district litigation, and committee formation. |
| JEL: | C7 D7 D82 K4 |
| Date: | 2026–04 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:35031 |
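The Quantile Mechanism's two-step structure (one party partitions, the opposed party picks one item per block) can be illustrated in a simple one-dimensional special case. This sketch assumes scalar items, a partition into k contiguous quantile blocks, and an opposed party whose preference is for extreme values; the paper's environment is more general, so treat this as a toy.

```python
def quantile_mechanism(values, k, minimizer_picks=True):
    """Illustrative 1-D sketch of the two-stage procedure.

    The partitioning party splits the sorted population into k contiguous
    quantile blocks; the opposed party then selects its favorite item in
    each block (here, simply the min or the max of the block)."""
    v = sorted(values)
    blocks = [v[i * len(v) // k:(i + 1) * len(v) // k] for i in range(k)]
    pick = min if minimizer_picks else max
    return [pick(b) for b in blocks]
```

With `values = range(12)` and `k = 3`, a minimizing picker returns one low item per tercile (`[0, 4, 8]`) and a maximizing picker one high item per tercile (`[3, 7, 11]`): each party's distortion is confined within a quantile block, which is the intuition behind the representativeness result.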