nep-mic New Economics Papers
on Microeconomics
Issue of 2025–11–17
twenty-one papers chosen by
Jing-Yuan Chiou, National Taipei University


  1. Mechanism Design with Information Leakage By Samuel Häfner; Marek Pycia; Haoyuan Zeng
  2. Screening Information By Arrora, Falak
  3. Identity-Compatible Auctions By Haoyuan Zeng
  4. Strategy-Proof Social Choice Correspondences and Single-Peaked Preferences By Carmelo Rodríguez Álvarez
  5. Persuasive Selection in Signaling Games By Haoyuan Zeng
  6. Worker Selection and Efficiency By Yonghang Ji; Allen Vong
  7. Data-Driven Platform Encroachment By Chongwoo Choe; Antoine Dubus; Noriaki Matsushima; Shiva Shekhar
  8. Mediation and worker performance By Allen Vong
  9. Subjective inference By Andrew Mackenzie
  10. Information Completeness and Incompleteness By Towne, Bryce Petofi
  11. Automation and Task Allocation Under Asymmetric Information By Quitzé Valenzuela-Stookey
  12. A characterization of strategy-proof probabilistic assignment rules By Sai Praneeth Donthu; Souvik Roy; Soumyarup Sadhukhan; Gogulapati Sreedurga
  13. Research Waves By Mariagiovanna Baccara; Gilat Levy; Ronny Razin
  14. Hope, Signals, and Silicon: A Game-Theoretic Model of the Pre-Doctoral Academic Labor Market in the Age of AI By Shaohui Wang
  15. Characterizations of Proportional Division Value in TU-Games via Fixed-Population Consistency By Yukihiko Funaki; Yukio Koriyama; Satoshi Nakada; Yuki Tamura
  16. Different Forms of Imbalance in Strongly Playable Discrete Games I: Two-Player RPS Games By Itai Maimon
  17. Mapping Power Relations: A Geometric Framework for Game-Theoretic Analysis By Daniele De Luca
  18. Fisher Meets Lindahl: A Unified Duality Framework for Market Equilibrium By Yixin Tao; Weiqiang Zheng
  19. Some economics of artificial super intelligence By Henry A. Thompson
  20. The Gatekeeping Expert's Dilemma By Shunsuke Matsuno
  21. The Cost of Optimally Acquired Information By Alexander W. Bloedel; Weijie Zhong

  1. By: Samuel Häfner; Marek Pycia; Haoyuan Zeng
    Abstract: We study the design of mechanisms -- e.g., auctions -- when the designer does not control information flows between mechanism participants. A mechanism equilibrium is leakage-proof if no player conditions their actions on leaked information, a property distinct from ex-post incentive compatibility. Only leakage-proof mechanisms can implement social choice functions in environments with leakage. Efficient auctions need to be leakage-proof, while revenue-maximizing ones need not be. Second-price and ascending auctions are leakage-proof; first-price auctions are not; and whether descending auctions are leakage-proof depends on the tie-breaking rule.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.00715
  2. By: Arrora, Falak (University of Warwick)
    Abstract: How does the presence of fake news affect incentives to acquire legitimate information? I study a model of costly information acquisition where either an honest or a fake sender communicates with a receiver through a platform. The honest sender sends a true but noisy signal, whereas the fake sender sends a false and uninformative signal. The platform can verify the signal’s authenticity; however, it faces a tradeoff. Fake news, although harmful to the receiver, makes her more skeptical and increases the honest sender’s incentives to acquire more precise information. The platform commits to a policy that specifies the screening probability and a disclosure rule. My central finding is that the screening policy that maximizes the receiver’s welfare often requires tolerating fake news, even when screening is costless. Moreover, not informing the receiver even when a message has been screened and found to be true is sometimes better than full transparency, because it keeps the receiver skeptical. These findings suggest that complete moderation and fact-checking of content may inadvertently leave the receiver worse off.
    Keywords: Information acquisition; communication game; fake news; platforms; fact-checking
    JEL: C72 D82 D83
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:wrk:warwec:1586
  3. By: Haoyuan Zeng
    Abstract: This paper studies the incentives of the seller and buyers to shill bid in a single-item auction. An auction is seller identity-compatible if the seller cannot profit from pretending to be one or more bidders via fake identities. It is buyer identity-compatible if no buyer profits from posing as more than one bidder. Lit auctions reveal the number of bidders, whereas dark auctions conceal the information. We characterize three classic selling mechanisms -- first-price, second-price, and posted-price -- based on identity compatibility. We show the importance of concealing the number of bidders, which enables the implementation of a broader range of outcome rules. In particular, no optimal lit auction is ex-post seller identity-compatible, while the dark first-price auction (with reserve) achieves the goal.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.00723
  4. By: Carmelo Rodríguez Álvarez (Instituto Complutense de Análisis Económico (ICAE), Universidad Complutense de Madrid (Spain))
    Abstract: We consider strategy-proof social choice correspondences (SCCs) -- mappings from preference profiles to sets of alternatives -- when individuals are endowed with single-peaked preferences over alternatives. We interpret the selected sets of alternatives as the basis for lotteries that determine the final social choice, and assume that agents’ preferences over sets are consistent with Expected Utility Theory and Bayesian updating from an initial probability assessment over the full set of alternatives. We exploit the relation between SCCs and probabilistic decision schemes -- mappings from preference profiles to lotteries over alternatives -- to characterize the family of SCCs that satisfy strategy-proofness and unanimity for arbitrary initial probability assessments. We extend the analysis to multi-dimensional convex spaces of alternatives under the uniform initial probability assessment.
    Keywords: Strategy-Proofness; Single-Peaked Preferences; Social Choice Correspondences.
    JEL: C71 D71
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ucm:doicae:2506
  5. By: Haoyuan Zeng
    Abstract: This paper introduces a novel criterion, persuasiveness, to select equilibria in signaling games. In response to the Stiglitz critique, persuasiveness focuses on the comparison across equilibria. An equilibrium is more persuasive than an alternative if the set of types of the sender who prefer the alternative would sequentially deviate to the former once other types have done so -- that is, if an unraveling occurs. Persuasiveness has strong selective power: it uniquely selects an equilibrium outcome in monotone signaling games. Moreover, in non-monotone signaling games, persuasiveness refines predictions beyond existing selection criteria. Notably, it can also select equilibria in cheap-talk games, where standard equilibrium refinements for signaling games have no selective power.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.00718
  6. By: Yonghang Ji; Allen Vong
    Abstract: We study a model where a manager repeatedly selects one worker from a group of homogeneous workers to perform a task. We characterize the largest set of parameters under which an equilibrium achieving efficient worker performance exists. We then show that this is precisely the set of parameters for which the following manager's strategy constitutes an efficient equilibrium: the manager cyclically orders all workers; if the task is undesirable (resp., desirable), a worker is selected until good (resp., bad) performance, after which the manager randomizes between reselecting him and moving to the next worker; and the reselection probability is set as high as effort incentives permit. Our findings extend to the repeated selection of multiple workers.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.05338
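    A minimal simulation sketch (Python) of the cyclic selection rule described in the entry above. The reselection probability q, the performance probability p_good, and the i.i.d. performance draws are illustrative assumptions of this sketch, not taken from the paper, where performance is an equilibrium object:

      import random

      def select_workers(n_workers, n_periods, q=0.7, p_good=0.6,
                         undesirable=True, seed=0):
          """Cyclic selection: retain the current worker until good
          (resp., bad) performance on an undesirable (resp., desirable)
          task, then reselect him with probability q or move to the
          next worker in the cycle. All parameters are illustrative."""
          rng = random.Random(seed)
          current, history = 0, []
          for _ in range(n_periods):
              good = rng.random() < p_good  # stand-in for equilibrium effort
              history.append((current, good))
              trigger = good if undesirable else not good
              if trigger and rng.random() > q:
                  current = (current + 1) % n_workers  # advance the cycle
          return history

      for worker, good in select_workers(3, 10):
          print(f"worker {worker}: {'good' if good else 'bad'}")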
  7. By: Chongwoo Choe; Antoine Dubus; Noriaki Matsushima; Shiva Shekhar
    Abstract: Marketplace platforms are central players in online retail and are in an advantageous position to leverage data generated by third-party sellers. This paper analyzes how a platform's encroachment decision -- whether to enter its marketplace as a direct competitor -- is shaped by regulations that restrict its use of seller data. We show that the platform's encroachment decision follows a non-monotonic pattern: it enters against sellers with either relatively low or sufficiently high brand value, but remains a pure intermediary for intermediate brand values. A data-ban regulation alters this strategy by making the platform more likely to exclude low brand-value sellers and more likely to accommodate high brand-value sellers. The implication is that, while such regulation can enhance competition in markets with high-value sellers, it can inadvertently harm sellers and reduce consumer surplus in emerging markets, where sellers typically lack brand recognition and depend on platform visibility. These results underscore the need for more nuanced regulatory approaches -- promoting data sharing in emerging markets and targeted bans in mature, established markets -- to better balance welfare and competition.
    Keywords: marketplace platforms, data regulations, Digital Markets Act, innovation
    JEL: L21 L51 L42
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_12233
  8. By: Allen Vong
    Abstract: I study how a firm uses mediated communication with a worker and its clients to maximize worker performance over time. I find that optimal mediation involves occasional randomizations, secret from clients, between two continuations. In one, the worker cuts corners and then retains his current continuation utility. In the other, the worker exerts effort and then receives the highest continuation equilibrium utility less a minimal penalty for underperformance. These randomizations eventually disappear, replaced by canonical carrot-and-stick incentives. Optimal mediation Pareto-improves upon no mediation for both the worker and the average client if and only if the worker is sufficiently patient.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.02436
  9. By: Andrew Mackenzie
    Abstract: An agent observes a clue, and an analyst observes an inference: a ranking of events on the basis of how corroborated they are by the clue. We prove that if the inference satisfies the axioms of Villegas (1964) except for the classic qualitative probability axiom of monotonicity, then it has a unique normalized signed measure representation (Theorem 1). Moreover, if the inference also declares the largest event equivalent to the smallest event, then it can be represented as a difference between a posterior and a prior such that the former is the conditional probability of the latter with respect to an assessed event that is interpreted as a clue guess. Across these Bayesian representations, the posterior is unique, all guesses are in a suitable sense equivalent, and the prior is determined by the weight it assigns to each possible guess (Theorem 2). However, observation of a prior and posterior compatible with the inference could reveal that all of these guesses are wrong.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.00173
  10. By: Towne, Bryce Petofi
    Abstract: This paper orthogonalizes information symmetry (who holds information) from information completeness (whether available information suffices for a target decision or inference). We formalize two dimensions of completeness: static (relative to a finite time window) and dynamic (absolute, intertemporal). Static incompleteness arises when currently admissible signals are insufficient to identify the target; dynamic incompleteness arises when further, in-scope and decision-relevant information is foreseeably forthcoming. Within a measurable "scope" and an information filtration, we define probabilistic relevance as primitive, relate it to decision relevance via Blackwell monotonicity, and characterize static completeness as (approximate) point identification while dynamic completeness requires the absence of any future relevant arrivals. We distinguish institutional regimes—closed versus expandable scope—and show non-confirmability results in expandable scopes: finite evidence cannot certify either the absence of omitted relevant variables (static) or the absence of future relevant information (dynamic). We provide audit-style certificates (sufficient conditions) and falsifiers (stop-tests) for both notions, and connect the framework to partial identification, robust decision-making, rational inattention, and optimal stopping. The key implication is normative and methodological: in open environments, "best" actions are conditional and time-indexed rather than absolutely correct, so governance and empirical practice should report identification-set diameters, treat scope as a policy lever, and prefer falsifiable, continuous disclosure over once-and-for-all attestations.
    Date: 2025–11–06
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:gqpsf_v1
  11. By: Quitz\'e Valenzuela-Stookey
    Abstract: A firm can complete the tasks needed to produce output using either machines or workers. Unlike machines, workers have private information about their preferences over tasks. I study how this information asymmetry shapes the mechanism used by the firm to allocate tasks across workers and machines. I identify important qualitative differences between the mechanisms used when information frictions are large versus small. When information frictions are small, tasks are substitutes: automating one task lowers the marginal cost of other tasks and reduces the surplus generated by workers. When frictions are large, tasks can become complements: automation can raise the marginal cost of other tasks and increase the surplus generated by workers. The results extend to a setting with multiple firms competing for workers.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.02675
  12. By: Sai Praneeth Donthu; Souvik Roy; Soumyarup Sadhukhan; Gogulapati Sreedurga
    Abstract: We study the classical probabilistic assignment problem, where finitely many indivisible objects are to be probabilistically or proportionally assigned among an equal number of agents. Each agent has an initial deterministic endowment and a strict preference over the objects. While the deterministic version of this problem is well understood, most notably through the characterization of the Top Trading Cycles (TTC) rule by Ma (1994), much less is known in the probabilistic setting. Motivated by practical considerations, we introduce a weakened incentive requirement, namely SD-top-strategy-proofness, which precludes only those manipulations that increase the probability of an agent's top-ranked object. Our first main result shows that, on any free pair at the top (FPT) domain (Sen, 2011), the TTC rule is the unique probabilistic assignment rule satisfying SD-Pareto efficiency, SD-individual rationality, and SD-top-strategy-proofness. We further show that this characterization remains valid when Pareto efficiency is replaced by the weaker notion of SD-pair efficiency, provided the domain satisfies the slightly stronger free triple at the top (FTT) condition (Sen, 2011). Finally, we extend these results to the ex post notions of efficiency and individual rationality. Together, our findings generalize the classical deterministic results of Ma (1994) and Ekici (2024) along three dimensions: extending them from deterministic to probabilistic settings, from full strategy-proofness to top-strategy-proofness, and from the unrestricted domain to the more general FPT and FTT domains.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.04142
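    Since the characterization above builds on Ma's (1994) Top Trading Cycles rule, a minimal deterministic TTC sketch (Python) may help fix ideas. The dictionary-based representation is an assumption of this sketch; the paper's probabilistic rules and top-strategy-proofness notions are not implemented here:

      def top_trading_cycles(endowment, preferences):
          """Each agent i owns object endowment[i]; preferences[i] ranks
          objects best-to-worst. Agents point at the owner of their best
          remaining object; every cycle trades and exits the market."""
          owner = {obj: agent for agent, obj in endowment.items()}
          remaining = set(endowment)
          assignment = {}
          while remaining:
              points_to = {}
              for i in remaining:
                  best = next(o for o in preferences[i]
                              if owner.get(o) in remaining)
                  points_to[i] = owner[best]
              walk, i = [], next(iter(remaining))
              while i not in walk:  # follow pointers until a cycle closes
                  walk.append(i)
                  i = points_to[i]
              cycle = walk[walk.index(i):]
              for j in cycle:  # each cycle member gets the object pointed at
                  assignment[j] = endowment[points_to[j]]
              remaining -= set(cycle)
          return assignment

      print(top_trading_cycles(
          endowment={"a": 1, "b": 2, "c": 3},
          preferences={"a": [2, 1, 3], "b": [1, 2, 3], "c": [1, 2, 3]}))
      # assigns a -> 2, b -> 1, c -> 3 (printed dict order may vary)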
  13. By: Mariagiovanna Baccara; Gilat Levy; Ronny Razin
    Abstract: Competing research waves start and grow as scientists choose their specialization driven by career incentives. We build a strategic experimentation framework where agents irreversibly choose between two risky fields, and information arrives faster as more agents specialize in a field. In the "bad news" case, if no news arrives, all agents join a bandwagon wave into one field. In the "good news" case, both fields are explored in two sequential surges, followed by slow entry into the initially inferior field. We describe how the equilibrium depends on the information-production technology, and assess the impact of first-mover advantages, congestion, and deadlines.
    Keywords: strategic experimentation, research specialisation
    JEL: D7
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_12248
  14. By: Shaohui Wang
    Abstract: This paper develops a unified game-theoretic account of how generative AI reshapes the pre-doctoral "hope-labor" market linking Principal Investigators (PIs), Research Assistants (RAs), and PhD admissions. We integrate (i) a PI-RA relational-contract stage, (ii) a task-based production technology in which AI is both substitute (automation) and complement (augmentation/leveling), and (iii) a capacity-constrained admissions tournament that converts absolute output into relative rank. The model yields four results. First, AI has a dual and thresholded effect on RA demand: when automation dominates, AI substitutes for RA labor; when augmentation dominates, small elite teams become more valuable. Second, heterogeneous PI objectives endogenously segment the RA market: quantity-maximizing PIs adopt automation and scale "project-manager" RAs, whereas quality-maximizing PIs adopt augmentation and cultivate "idea-generator" RAs. Third, a symmetric productivity shock triggers a signaling arms race: more "strong" signals flood a fixed-slot tournament, depressing the admission probability attached to any given signal and potentially lowering RA welfare despite higher productivity. Fourth, AI degrades the informational content of polished routine artifacts, creating a novel moral-hazard channel ("effort laundering") that shifts credible recommendations toward process-visible, non-automatable creative contributions. We discuss welfare and equity implications, including over-recruitment with thin mentoring, selectively misleading letters, and opaque pipelines, and outline light-touch governance (process visibility, AI-use disclosure, and limited viva/replication checks) that preserves efficiency while reducing unethical supervision and screening practices.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.00068
  15. By: Yukihiko Funaki; Yukio Koriyama; Satoshi Nakada; Yuki Tamura
    Abstract: We study the proportional division value in TU-games, which distributes the worth of the grand coalition in proportion to each player's stand-alone worth. Focusing on fixed-population consistency, we characterize the proportional division value through three types of axioms: a homogeneity axiom, composition axioms, and a nullified-game consistency axiom. The homogeneity axiom captures scale invariance with respect to the grand coalition's worth. The composition axioms ensure that payoffs remain consistent when the game is decomposed and recomposed. The nullified-game consistency axiom requires that when some players' payoffs are fixed, the solution for the remaining players, computed in the game adjusted to account for these fixed payoffs, coincides with their original payoffs. Together with efficiency and a fairness-related axiom, these axioms characterize the proportional division value.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.05001
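    For reference, the value characterized above has a standard closed form: in a TU-game $(N, v)$ with nonzero stand-alone worths,
    $$ PD_i(N, v) = \frac{v(\{i\})}{\sum_{j \in N} v(\{j\})} \, v(N), \qquad i \in N, $$
    so each player's share of the grand coalition's worth $v(N)$ is proportional to her stand-alone worth $v(\{i\})$.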
  16. By: Itai Maimon
    Abstract: We construct several definitions of imbalance and playability, both of which are related to the existence of dominated strategies. Specifically, a maximally balanced game and a playable game cannot have dominated strategies for any player. In this context, imbalance acts as a measure of inequality in strategy, similar to measures of inequality in wealth or population dynamics. Conversely, playability is a slight strengthening of the condition that a game has no dominated strategies. It is more accurately aligned with the intuition that all strategies should see play. We show that these balance definitions are natural by exhibiting a (2n+1)-RPS that maximizes all proposed imbalance definitions among playable RPS games. We demonstrate here that this form of imbalance aligns with the prevailing notion that different definitions of inequality for economic and game-theoretic distributions must agree on both the maximal and minimal cases. In the sequel paper, we utilize these definitions for multiplayer games to demonstrate that a generalization of this imbalanced RPS is at least nearly maximally imbalanced while remaining playable for under 50 players.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.00374
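    For orientation, the smallest member of this family is the classic balanced case $n = 1$: Rock-Paper-Scissors, with row-player payoff matrix
    $$ A = \begin{pmatrix} 0 & -1 & 1 \\ 1 & 0 & -1 \\ -1 & 1 & 0 \end{pmatrix}, $$
    a skew-symmetric matrix in which every strategy beats exactly one other. A balanced $(2n+1)$-RPS extends this so that each of the $2n+1$ strategies beats $n$ others; the paper studies how far such games can depart from this balanced benchmark while remaining playable.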
  17. By: Daniele De Luca
    Abstract: This paper introduces a geometric framework for analyzing power relations in games, independent of their strategic form. We define a canonical preference space where each player's relational stance is a normalized vector. This model eliminates the arbitrariness of selecting utility functions, a limitation of recent approaches. We show how classical concepts -- bargaining power, dependence, reciprocity -- are recovered and generalized within this space. The analysis proceeds in two steps: projecting a game's payoffs and outcomes onto the space, and then reducing the resulting landscape using key metrics. These include a Center of Mass (CoM) and structural indices for Hierarchy (H) and Reciprocity (R). Applications to canonical games (Prisoner's Dilemma, Battle of the Sexes) and economic models (Cournot duopoly) demonstrate that the framework reveals underlying structural similarities across different strategic settings and provides a quantitative characterization of relational dynamics. It thus bridges cooperative and non-cooperative game theory by conceptualizing power as a structural property of the mapping from preferences to equilibria.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.07287
  18. By: Yixin Tao; Weiqiang Zheng
    Abstract: The Fisher market equilibrium for private goods and the Lindahl equilibrium for public goods are classic and fundamental solution concepts for market equilibria. While Fisher market equilibria have been well studied, the theoretical foundations for Lindahl equilibria remain substantially underdeveloped. In this work, we propose a unified duality framework for market equilibria. We show that Lindahl equilibria of a public goods market correspond to Fisher market equilibria in a dual Fisher market with dual utilities, and vice versa. The dual utility is based on the indirect utility, and the correspondence between the two equilibria works by exchanging the roles of allocations and prices. Using the duality framework, we address the gaps concerning the computation and dynamics of Lindahl equilibria and obtain new insights and developments for Fisher market equilibria. First, we leverage this duality to analyze welfare properties of Lindahl equilibria. For concave homogeneous utilities, we prove that a Lindahl equilibrium maximizes Nash Social Welfare (NSW). For concave non-homogeneous utilities, we show that a Lindahl equilibrium achieves a $(1/e)^{1/e}$ approximation to the optimal NSW, and the approximation ratio is tight. Second, we apply the duality framework to market dynamics, including proportional response dynamics (PRD) and tâtonnement. We obtain new market dynamics for Lindahl equilibria from market dynamics in the dual Fisher market. We also use duality to extend PRD to markets with total complements utilities, the dual class of gross substitutes utilities. Finally, we apply the duality framework to markets with chores. We propose a program for private chores with general convex homogeneous disutilities that avoids the "poles" issue and whose KKT points correspond to Fisher market equilibria. We also initiate the study of the Lindahl equilibrium for public chores.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.04572
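    To gauge the magnitude of the tight guarantee cited above for concave non-homogeneous utilities:
    $$ (1/e)^{1/e} = e^{-1/e} \approx e^{-0.368} \approx 0.692, $$
    so a Lindahl equilibrium attains at least roughly 69% of the optimal Nash Social Welfare.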
  19. By: Henry A. Thompson
    Abstract: Conventional wisdom holds that a misaligned artificial superintelligence (ASI) will destroy humanity. But the problem of constraining a powerful agent is not new. I apply classic economic logic of interjurisdictional competition, all-encompassing interest, and trading on credit to the threat of misaligned ASI. Using a simple model, I show that an acquisitive ASI refrains from full predation under surprisingly weak conditions. When humans can flee to rivals, inter-ASI competition creates a market that tempers predation. When trapped by a monopolist ASI, its "encompassing interest" in humanity's output makes it a rational autocrat rather than a ravager. And when the ASI has no long-term stake, our ability to withhold future output incentivizes it to trade on credit rather than steal. In each extension, humanity's welfare progressively worsens. But each case suggests that catastrophe is not a foregone conclusion. The dismal science, ironically, offers an optimistic take on our superintelligent future.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.06613
  20. By: Shunsuke Matsuno
    Abstract: This paper studies how experts with veto power -- gatekeeping experts -- influence agents through communication. Their expertise informs agents' decisions, while veto power provides discipline. Gatekeepers face a dilemma: transparent communication can invite gaming, while opacity wastes expertise. How can gatekeeping experts guide behavior without being gamed? Many economic settings feature this tradeoff, including bank stress tests, environmental regulations, and financial auditing. Using financial auditing as the primary setting, I show that strategic vagueness resolves this dilemma: by revealing just enough to prevent the manager from inflating the report, the auditor guides the manager while minimizing opportunities for manipulation. This theoretical lens provides a novel rationale for why auditors predominantly accept clients' financial reports. Comparative statics reveal that greater gatekeeper independence or expertise sometimes dampens communication. This paper offers insights into why gatekeepers who lack direct control can still be effective.
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.00031
  21. By: Alexander W. Bloedel; Weijie Zhong
    Abstract: This paper introduces a framework for modeling the cost of information acquisition based on the principle of cost-minimization. We study the reduced-form "indirect cost" of information generated by the sequential minimization of a primitive "direct cost" function. Indirect cost functions: (i) are characterized by a novel recursive property, "sequential learning-proofness"; (ii) provide an optimization foundation for the popular class of "uniformly posterior separable" costs; and (iii) can often be tractably calculated from their underlying direct costs. We apply the framework by identifying fundamental modeling tradeoffs in the rational inattention literature and proposing two new indirect cost functions that balance these tradeoffs.
    Date: 2025–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2511.05466

This nep-mic issue is ©2025 by Jing-Yuan Chiou. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.