nep-mic New Economics Papers
on Microeconomics
Issue of 2024‒03‒18
seventeen papers chosen by
Jing-Yuan Chiou, National Taipei University


  1. Content Moderation with Opaque Policies By Scott Duke Kominers; Jesse M. Shapiro
  2. Career Concerns and Incentive Compatible Task Design By Masaki Aoyagi; Maxime Menuet
  3. How Do Digital Advertising Auctions Impact Product Prices? By Dirk Bergemann; Alessandro Bonatti; Nicholas Wu
  4. Persuading a Learning Agent By Tao Lin; Yiling Chen
  5. Insourcing Vs Outsourcing in Vertical Structure By Dongsoo Shin; Roland Strausz
  6. The impossibility of non-manipulable probability aggregation By Franz Dietrich; Christian List
  7. A Unified Approach to Second and Third Degree Price Discrimination By Dirk Bergemann; Tibor Heumann; Michael C. Wang
  8. Privacy Preserving Signals By Kai Hao Yang; Philipp Strack
  9. Sequentially Stable Outcomes By Francesc Dilmé
  10. The Limits of Price Discrimination Under Privacy Constraints By Alireza Fallah; Michael I. Jordan; Ali Makhdoumi; Azarakhsh Malekian
  11. Censored Beliefs and Wishful Thinking By Jarrod Burgh; Emerson Melo
  12. On Three-Layer Data Markets By Alireza Fallah; Michael I. Jordan; Ali Makhdoumi; Azarakhsh Malekian
  13. Dueling Over Dessert, Mastering the Art of Repeated Cake Cutting By Simina Brânzei; MohammadTaghi Hajiaghayi; Reed Phillips; Suho Shin; Kun Wang
  14. Misinterpreting Yourself By Paul Heidhues; Botond Koszegi; Philipp Strack
  15. Norms among heterogeneous agents: a rational-choice model By Zdybel, Karol B.
  16. Civil Liberties and Social Structure By Selman Erol; Camilo Garcia-Jimeno
  17. The Strategic Value of Data Sharing in Interdependent Markets By Hemant Bhargava; Antoine Dubus; David Ronayne; Shiva Shekhar

  1. By: Scott Duke Kominers; Jesse M. Shapiro
    Abstract: A sender sends a signal about a state to a receiver who takes an action that determines a payoff. A moderator can block some or all of the sender's signal before it reaches the receiver. When the moderator's policy is transparent to the receiver, the moderator can improve the payoff by blocking false or harmful signals. When the moderator's policy is opaque, however, the receiver may not trust the moderator. In that case, the moderator can guarantee an improved outcome only by blocking signals that enable harmful acts. Blocking signals that encourage false beliefs can be counterproductive.
    JEL: D47 D82 D83 L82 L86
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:32156&r=mic
  2. By: Masaki Aoyagi; Maxime Menuet
    Abstract: This paper studies the optimal disclosure of information about an agent’s talent when it consists of two components. The agent observes the first component of his talent as his private type, and reports it to a principal to perform a task which reveals the second component of his talent. Based on the report and performance, the principal discloses information to a firm that pays the agent a wage equal to his expected talent. We study incentive compatible disclosure rules that minimize the mismatch between the agent’s true talent and his wage. The optimal rule entails full disclosure when the agent’s talent is a supermodular function of the two components, but entails partial pooling when it is submodular. Under a mild degree of submodularity, we show that the optimal disclosure rule is obtained as a solution to a linear programming problem, and identify the number of messages required under the optimal rule. We relate it to the agent’s incentive compatibility conditions, and show that each pooling message has binary support.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:dpr:wpaper:1232&r=mic
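    For intuition, the linear program referenced in the abstract plausibly has the following shape (a schematic reconstruction under generic notation, not the authors' exact formulation). Write $\mu(t, p)$ for the joint distribution of the reported component $t$ and observed performance $p$, $\theta(t, p)$ for the realized talent, and take the wage $w(m)$ attached to each message $m$ as fixed; the principal then chooses a disclosure rule $\pi(m \mid t, p)$ to solve
      $$\min_{\pi \ge 0} \sum_{t, p, m} \mu(t, p)\, \pi(m \mid t, p)\, \bigl(\theta(t, p) - w(m)\bigr)^2$$
    subject to $\sum_m \pi(m \mid t, p) = 1$ for all $(t, p)$ and to linear incentive-compatibility constraints requiring that truthful reports maximize the agent's expected wage. For fixed wages the objective and constraints are linear in $\pi$; consistency then pins down $w(m) = \mathbb{E}[\theta \mid m]$.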
  3. By: Dirk Bergemann (Yale University); Alessandro Bonatti (MIT); Nicholas Wu (Yale University)
    Abstract: We ask how the advertising mechanisms of digital platforms impact product prices. We present a model that integrates three fundamental features of digital advertising markets: (i) advertisers can reach customers on and off-platform, (ii) additional data enhances the value of matching advertisers and consumers, and (iii) bidding follows auction-like mechanisms. We compare data-augmented auctions, which leverage the platform's data advantage to improve match quality, with managed campaign mechanisms, where advertisers' budgets are transformed into personalized matches and prices through auto-bidding algorithms. In data-augmented second-price auctions, advertisers increase off-platform product prices to boost their competitiveness on-platform. This leads to socially efficient allocations on-platform, but inefficient allocations off-platform due to high product prices. The platform-optimal mechanism is a sophisticated managed campaign that conditions on-platform prices for sponsored products on off-platform prices set by all advertisers. Relative to auctions, the optimal managed campaign raises off-platform product prices and further reduces consumer surplus.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2367&r=mic
  4. By: Tao Lin; Yiling Chen
    Abstract: We study a repeated Bayesian persuasion problem (and more generally, any generalized principal-agent problem with complete information) where the principal does not have commitment power and the agent uses algorithms to learn to respond to the principal's signals. We reduce this problem to a one-shot generalized principal-agent problem with an approximately-best-responding agent. This reduction allows us to show the following: if the agent uses contextual no-regret learning algorithms, then the principal can guarantee a utility that is arbitrarily close to the principal's optimal utility in the classic non-learning model with commitment; if the agent uses contextual no-swap-regret learning algorithms, then the principal cannot obtain any utility significantly more than the optimal utility in the non-learning model with commitment. The difference between the principal's obtainable utility in the learning model and the non-learning model is bounded by the agent's regret (swap-regret). If the agent uses mean-based learning algorithms (which can be no-regret but not no-swap-regret), then the principal can do significantly better than in the non-learning model. These conclusions hold not only for Bayesian persuasion, but also for any generalized principal-agent problem with complete information, including Stackelberg games and contract design.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09721&r=mic
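    To illustrate the contextual no-regret benchmark in the abstract, here is a minimal simulation sketch; the signaling scheme, payoffs, and all numbers are illustrative assumptions, not taken from the paper. The receiver runs one exponential-weights (Hedge) learner per signal realization, treating the signal as the context, and her per-signal play converges to the best response that the commitment benchmark assumes.

        import numpy as np

        rng = np.random.default_rng(0)
        T, prior, q, eta = 20000, 0.3, 0.35, 0.1
        # prior = P(state = 1); q = P(signal 1 | state = 0): a hypothetical
        # scheme under which signal 1 carries a posterior of about 0.55.

        def payoff(action, state):
            # Acting (action = 1) pays +1 in the good state and -1 otherwise.
            return 0.0 if action == 0 else (1.0 if state else -1.0)

        # One Hedge learner per signal realization ("contextual" learning).
        weights = {sig: np.ones(2) for sig in (0, 1)}
        counts = {sig: np.zeros(2) for sig in (0, 1)}

        for _ in range(T):
            state = rng.random() < prior
            sig = 1 if (state or rng.random() < q) else 0
            p = weights[sig] / weights[sig].sum()
            action = rng.choice(2, p=p)
            counts[sig][action] += 1
            # Full-information feedback: the state is revealed ex post, so
            # both actions' counterfactual payoffs update the weights.
            for a in (0, 1):
                weights[sig][a] *= np.exp(eta * payoff(a, state))
            weights[sig] /= weights[sig].sum()   # guard against overflow

        for sig in (0, 1):
            print(f"signal {sig}: action frequencies {counts[sig] / counts[sig].sum()}")
        # Expected: after signal 1 the learner (almost) always acts; after
        # signal 0 it (almost) never does.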
  5. By: Dongsoo Shin (Santa Clara University); Roland Strausz (HU Berlin)
    Abstract: We study an agency model with vertical hierarchy: the principal, the prime-agent, and the sub-agent. The principal faces a project that needs both agents' services. Due to costly communication, the principal receives a report only from the prime-agent, who receives a report from the sub-agent. The principal can directly incentivize each agent by setting individual transfers (insourcing), or set only one overall transfer to an independent organization in which the prime-agent hires the sub-agent (outsourcing). We show that insourcing is always optimal when the principal can perfectly process the prime-agent's report. When the principal's information process is limited, however, outsourcing can be the prevailing mode of operation. In addition, insourcing under a limited information process is prone to collusion between the agents, whereas no possibility of collusion arises with outsourcing.
    Keywords: information process; sourcing policy; vertical structure
    JEL: D86 L23 L25
    Date: 2024–02–14
    URL: http://d.repec.org/n?u=RePEc:rco:dpaper:495&r=mic
  6. By: Franz Dietrich (CNRS & Centre d'Economie de la Sorbonne, Paris School of Economics); Christian List (LMU Munich)
    Abstract: A probability aggregation rule assigns to each profile of probability functions across a group of individuals (representing their individual probability assignments to some propositions) a collective probability function (representing the group's probability assignment). The rule is "non-manipulable" if no group member can manipulate the collective probability for any proposition in the direction of his or her own probability by misrepresenting his or her probability function ("strategic voting"). We show that, except in trivial cases, no probability aggregation rule satisfying two mild conditions (non-dictatorship and consensus preservation) is non-manipulable.
    Keywords: opinion pooling; social choice theory; non-manipulability; strategy-proofness; impossibility theorem; judgment aggregation; Gibbard-Satterthwaite Theorem
    JEL: D70 D71 D8
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:24001&r=mic
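    In symbols (a schematic restatement under generic notation): an aggregation rule $F$ maps each profile $(P_1, \dots, P_n)$ of individual probability functions to a collective probability function $F(P_1, \dots, P_n)$. The rule is manipulable if there exist a profile, an individual $i$, a proposition $A$, and a misreport $P_i'$ with
      $$\bigl|F(P_1, \dots, P_i', \dots, P_n)(A) - P_i(A)\bigr| < \bigl|F(P_1, \dots, P_i, \dots, P_n)(A) - P_i(A)\bigr|,$$
    that is, misreporting pulls the collective probability of $A$ strictly closer to $i$'s true probability; non-manipulability is the absence of any such instance.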
  7. By: Dirk Bergemann (Yale University); Tibor Heumann (Pontificia Universidad Catolica de Chile); Michael C. Wang (Yale University)
    Abstract: We analyze the welfare impact of a monopolist able to segment a multiproduct market and offer differentiated price menus within each segment. We characterize a family of extremal distributions such that all achievable welfare outcomes can be reached by selecting segments from within these distributions. This family of distributions arises as the solution to the problem of finding the consumer-surplus-maximizing distribution of values for multigood markets. With these results, we analyze the effect of segmentation on consumer surplus and prices in both interior and extremal markets, including conditions under which there exists a segmentation benefiting all consumers. Finally, we present an efficient algorithm for computing segmentations.
    Date: 2024–01–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2376&r=mic
  8. By: Kai Hao Yang (Yale University); Philipp Strack (Yale University)
    Abstract: A signal is privacy-preserving with respect to a collection of privacy sets if the posterior probability assigned to every privacy set remains unchanged conditional on any signal realization. We characterize the privacy-preserving signals for an arbitrary state space and arbitrary privacy sets. A signal is privacy-preserving if and only if it is a garbling of a reordered quantile signal. These signals are equivalent to couplings, which in turn lead to a characterization of optimal privacy-preserving signals for a decision-maker. We demonstrate applications of this characterization in the contexts of algorithmic fairness, price discrimination, and information design.
    Date: 2023–07–03
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2379&r=mic
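    The defining condition is straightforward to verify numerically on a finite state space. Below is a minimal sketch; the function name and the example numbers are illustrative, not from the paper. A signal, given as a likelihood matrix, is tested for whether every privacy set's posterior probability equals its prior under every signal realization.

        import numpy as np

        def is_privacy_preserving(prior, likelihood, privacy_sets, tol=1e-9):
            """Check P(A | s) == P(A) for every privacy set A and signal s.
            prior: shape (n_states,); likelihood[state, s] = P(s | state)."""
            joint = prior[:, None] * likelihood      # P(state, s)
            p_s = joint.sum(axis=0)                  # signal marginals
            for A in privacy_sets:
                idx = list(A)
                pA = prior[idx].sum()
                for s in range(likelihood.shape[1]):
                    if p_s[s] > tol and abs(joint[idx, s].sum() / p_s[s] - pA) > tol:
                        return False
            return True

        # Hypothetical example: four equally likely states, privacy sets
        # {0, 1} and {2, 3}. The signal pairs state 0 with state 2 and
        # state 1 with state 3 (a coupling across privacy sets), revealing
        # information while leaving both privacy sets' posteriors at 1/2.
        prior = np.full(4, 0.25)
        likelihood = np.array([[1.0, 0.0],
                               [0.0, 1.0],
                               [1.0, 0.0],
                               [0.0, 1.0]])
        print(is_privacy_preserving(prior, likelihood, [{0, 1}, {2, 3}]))  # True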
  9. By: Francesc Dilmé
    Abstract: This paper introduces and analyzes sequentially stable outcomes in extensive-form games. An outcome ω is sequentially stable if, for any ε > 0 and any small enough perturbation of the players’ behavior, there is an ε-perturbation of the players’ payoffs and a corresponding equilibrium with outcome close to ω. Sequentially stable outcomes exist for all finite games and are outcomes of sequential equilibria. They are closely related to stable sets of equilibria and satisfy versions of forward induction, iterated strict equilibrium dominance, and invariance to simultaneous moves. In signaling games, sequentially stable outcomes pass the standard selection criteria, and when payoffs are generic, they coincide with outcomes of stable sets of equilibria.
    Keywords: Sequential stability, stable outcome, signaling games
    JEL: C72 C73
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2024_511&r=mic
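    Spelling out the quantifiers in the definition above (a schematic restatement): an outcome $\omega$ is sequentially stable if for every $\varepsilon > 0$ there is $\bar\delta > 0$ such that, for every perturbation of the players' behavior of size at most $\bar\delta$, some $\varepsilon$-perturbation of the players' payoffs admits an equilibrium of the perturbed game whose outcome lies within $\varepsilon$ of $\omega$. Note the order of quantifiers: the payoff perturbation may be chosen after, and depend on, the behavior perturbation.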
  10. By: Alireza Fallah; Michael I. Jordan; Ali Makhdoumi; Azarakhsh Malekian
    Abstract: We consider a producer's problem of selling a product to a continuum of privacy-conscious consumers, where the producer can implement third-degree price discrimination, offering different prices to different market segments. In the absence of privacy constraints, Bergemann, Brooks, and Morris [2015] characterize the set of all possible consumer-producer utilities, showing that it is a triangle. We consider a privacy mechanism that provides a degree of protection by probabilistically masking each market segment, and we establish that the resultant set of all consumer-producer utilities forms a convex polygon, characterized explicitly as a linear mapping of a certain high-dimensional convex polytope into $\mathbb{R}^2$. This characterization enables us to investigate the impact of the privacy mechanism on both producer and consumer utilities. In particular, we establish that the privacy constraint always hurts the producer by reducing both the maximum and minimum utility achievable. From the consumer's perspective, although the privacy mechanism ensures an increase in the minimum utility compared to the non-private scenario, interestingly, it may reduce the maximum utility. Finally, we demonstrate that increasing the privacy level does not necessarily intensify these effects. For instance, the maximum utility for the producer or the minimum utility for the consumer may exhibit nonmonotonic behavior in response to an increase in the privacy level.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.08223&r=mic
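    For reference, the benchmark surplus set of Bergemann, Brooks, and Morris [2015] mentioned in the abstract is simple to compute for a discrete market. A minimal sketch with hypothetical numbers follows; the paper's contribution is then to characterize how probabilistic masking deforms this triangle into a convex polygon.

        import numpy as np

        # Hypothetical discrete market: consumer values and their masses.
        values = np.array([1.0, 2.0, 3.0])
        probs = np.array([0.4, 0.3, 0.3])

        # Uniform monopoly profit: the best single posted price (for a
        # discrete distribution, some value in the support is optimal).
        pi_star = max(p * probs[values >= p].sum() for p in values)
        W = float(values @ probs)   # total surplus if everyone trades

        # Vertices of the achievable (consumer, producer) surplus triangle:
        print((0.0, pi_star))          # producer at her uniform guarantee, consumers at zero
        print((W - pi_star, pi_star))  # consumer-optimal segmentation: efficient trade
        print((0.0, W))                # perfect discrimination: producer takes everything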
  11. By: Jarrod Burgh; Emerson Melo
    Abstract: We present a model of wishful thinking that comprehensively incorporates both the costs and benefits associated with biased beliefs. Our findings reveal that wishful thinking behavior can be accurately characterized as equivalent to superquantile-utility maximization within the domain of threshold beliefs distortion cost functions. By leveraging this equivalence, we establish conditions that elucidate when an optimistic decision-maker exhibits a preference for choices characterized by positive skewness and increased risk.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.01892&r=mic
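    For concreteness, the superquantile (the conditional value-at-risk of the utility distribution, stated here up to the paper's exact normalization) is
      $$\overline{Q}_\alpha(U) = \frac{1}{1 - \alpha} \int_\alpha^1 Q_\beta(U)\, d\beta, \qquad \alpha \in [0, 1),$$
    where $Q_\beta(U)$ is the $\beta$-quantile of $U$; for continuous distributions this is the expected utility conditional on landing in the upper $1 - \alpha$ tail. An agent maximizing a superquantile therefore overweights favorable outcomes, which is the sense in which the equivalence captures optimism and the taste for positive skewness.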
  12. By: Alireza Fallah; Michael I. Jordan; Ali Makhdoumi; Azarakhsh Malekian
    Abstract: We study a three-layer data market comprising users (data owners), platforms, and a data buyer. Each user benefits from platform services in exchange for data, incurring privacy loss when their data, albeit noisily, is shared with the buyer. The user chooses platforms to share data with, while platforms decide on data noise levels and pricing before selling to the buyer. The buyer selects platforms to purchase data from. We model these interactions via a multi-stage game, focusing on the subgame-perfect Nash equilibrium. We find that when the buyer places a high value on user data (and platforms can command high prices), all platforms offer services to the user, who joins and shares data with every platform. Conversely, when the buyer's valuation of user data is low, only large platforms with low service costs can afford to serve users. In this scenario, users exclusively join and share data with these low-cost platforms. Interestingly, increased competition benefits the buyer, not the user: as the number of platforms increases, the user utility does not improve while the buyer utility improves. However, increased competition improves the overall utilitarian welfare. Building on our analysis, we then study regulations to improve the user utility. We discover that banning data sharing maximizes user utility only when all platforms are low-cost. In mixed markets of high- and low-cost platforms, users prefer a minimum noise mandate over a sharing ban. Imposing this mandate on high-cost platforms and banning data sharing for low-cost ones further enhances user utility.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09697&r=mic
  13. By: Simina Brânzei; MohammadTaghi Hajiaghayi; Reed Phillips; Suho Shin; Kun Wang
    Abstract: We consider the setting of repeated fair division between two players, denoted Alice and Bob, with private valuations over a cake. In each round, a new cake arrives, which is identical to the ones in previous rounds. Alice cuts the cake at a point of her choice, while Bob chooses the left piece or the right piece, leaving the remainder for Alice. We consider two versions: sequential, where Bob observes Alice's cut point before choosing left/right, and simultaneous, where he only observes her cut point after making his choice. The simultaneous version was first considered by Aumann and Maschler (1995). We observe that if Bob is almost myopic and chooses his favorite piece too often, then he can be systematically exploited by Alice through a strategy akin to a binary search. This strategy allows Alice to approximate Bob's preferences with increasing precision, thereby securing a disproportionate share of the resource over time. We analyze the limits of how much a player can exploit the other one and show that fair utility profiles are in fact achievable. Specifically, the players can enforce the equitable utility profile of $(1/2, 1/2)$ in the limit on every trajectory of play, by keeping the other player's utility to approximately $1/2$ on average while guaranteeing they themselves get at least approximately $1/2$ on average. We show this theorem using a connection with Blackwell approachability. Finally, we analyze a natural dynamic known as fictitious play, where players best respond to the empirical distribution of the other player. We show that fictitious play converges to the equitable utility profile of $(1/2, 1/2)$ at a rate of $O(1/\sqrt{T})$.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.08547&r=mic
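    The binary-search exploitation described in the abstract is simple enough to sketch. The simulation below rests on illustrative assumptions, not the paper's formal model: Bob's value of the left piece crosses 1/2 at a single private point m (his median cut), and Alice's own valuation is uniform, so she simply wants the longer piece.

        M_BOB = 0.62   # Bob's private median cut point (hypothetical)

        def bob_myopic_choice(x, m=M_BOB):
            """A fully myopic Bob takes the piece he currently values more:
            the left piece iff the cut x lies to the right of his median m."""
            return "left" if x > m else "right"

        # Alice binary-searches for m: each myopic choice reveals one bit.
        lo, hi = 0.0, 1.0
        for _ in range(30):
            x = (lo + hi) / 2
            if bob_myopic_choice(x) == "left":
                hi = x          # Bob's median lies below the cut
            else:
                lo = x          # Bob's median lies above the cut

        m_hat, eps = (lo + hi) / 2, 1e-3
        # Alice now cuts just shy of the estimated median on whichever side
        # leaves her the longer piece, securing about max(m, 1 - m) >= 1/2.
        cut = m_hat + eps if m_hat < 0.5 else m_hat - eps
        alice = (1 - cut) if bob_myopic_choice(cut) == "left" else cut
        print(f"estimated median {m_hat:.4f}; Alice's share {alice:.4f}")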
  14. By: Paul Heidhues (Heinrich Heine University Dusseldorf); Botond Koszegi (Central European University (CEU)); Philipp Strack (Yale University)
    Abstract: We model an agent who stubbornly underestimates how much his behavior is driven by undesirable motives, and, attributing his behavior to other considerations, updates his views about those considerations. We study general properties of the model, and then apply the framework to identify novel implications of partially naive present bias. In many stable situations, the agent appears realistic in that he eventually predicts his behavior well. His unrealistic self-view does, however, manifest itself in several other ways. First, in basic settings he always comes to act in a more present-biased manner than a sophisticated agent. Second, he systematically mispredicts how he will react when circumstances change, such as when incentives for forward-looking behavior increase or he is placed in a new, ex-ante identical environment. Third, even for physically non-addictive products, he follows empirically realistic addiction-like consumption dynamics that he does not anticipate. Fourth, he holds beliefs that, when compared to those of other agents, display puzzling correlations between logically unrelated issues. Our model implies that existing empirical tests of sophistication in intertemporal choice can reach incorrect conclusions. Indeed, we argue that some previous findings are more consistent with our model than with a model of correctly specified learning.
    Date: 2023–01–18
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2378&r=mic
  15. By: Zdybel, Karol B.
    Abstract: Spontaneous norms, or simply norms, can be defined as rules of conduct that emerge without intentional design and in the absence of purposeful external coordination. While the law and economics scholarship has formally analyzed spontaneous norms, the analysis has typically been limited to scenarios where agents possess complete information about the interaction structure, including others' understanding of desirable and undesirable outcomes. In contrast, this paper examines spontaneous norms under the assumption of agent heterogeneity and private preferences. By employing a game-theoretical framework, the analysis reveals that a norm's lifecycle can be divided into a formative phase and a long-run phase. The formative phase crucially shapes the norm's content and is itself critically dependent on the initial beliefs that agents hold about each other. Moreover, spontaneous norms are resilient to minor shocks to the belief structure but disintegrate when the magnitude of shocks becomes significant. In the final part, the paper highlights the broader implications of its findings, indicating applications in general law and economics, legal anthropology and history, and the sociology of social norms.
    Keywords: Spontaneous norms, Social norms, Custom, Private assessment, Legal history
    JEL: K00 K10 K39 P48 Z13
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:zbw:ilewps:78&r=mic
  16. By: Selman Erol; Camilo Garcia-Jimeno
    Abstract: Governments use coercion to aggregate distributed information relevant to governmental objectives—from the prosecution of regime-stability threats to terrorism or epidemics. A cohesive social structure facilitates this task, as reliable information will often come from friends and acquaintances. A cohesive citizenry can more easily exercise collective action to resist such intrusions, however. We present an equilibrium theory where this tension mediates the joint determination of social structure and civil liberties. We show that segregation and unequal treatment sustain each other as coordination failures: citizens choose to segregate along the lines of an arbitrary trait only when the government exercises unequal treatment as a function of the trait, and the government engages in unequal treatment only when citizens choose to segregate based on the trait. We characterize when unequal treatment against a minority or a majority can be sustained, and how equilibrium social cohesiveness and civil liberties respond to the arrival of widespread surveillance technologies, shocks to collective perceptions about the likelihood of threats or the importance of privacy, or to community norms such as codes of silence.
    Keywords: Civil liberties; segregation; information aggregation
    JEL: D23 D73 D85
    Date: 2024–02–14
    URL: http://d.repec.org/n?u=RePEc:fip:fedhwp:97780&r=mic
  17. By: Hemant Bhargava (University of California, Davis); Antoine Dubus (ETH Zurich); David Ronayne (ESMT Berlin); Shiva Shekhar (Tilburg School of Economics and Management)
    Abstract: Large, generalist technology firms (so-called “big-tech” firms), powerful in their primary market, routinely enter secondary markets consisting of specialist firms. Naturally, one might expect a specialist firm to be fiercely protective of its data as a way to maintain its market position in the secondary market. Counter to this intuition, we demonstrate that a specialist firm willingly shares its market data with an intruding tech generalist. We do so by developing a model of cross-market competition in which data collected via consumer usage in each market is an input into product quality in both markets. We show that a specialist firm shares its data to strategically create co-dependence between the two firms, thereby softening competition and transforming the generalist firm from a traditional competitor into a co-opetitor. For the generalist intruder, data from the specialist firm substitute for its own investments in product quality in the secondary market. As such, the act of sharing data makes the intruder a stakeholder in the valuable data collected by the specialist, and consequently in the specialist’s continued success. Moreover, while the firms benefit from data sharing, consumers can be worse off due to weaker price competition and lower investments in innovation. Our results have managerial and policy implications, notably on account of backlash against data collection and the market power of big-tech firms.
    Keywords: data-driven quality improvements; externalities; co-opetition; data sharing
    Date: 2024–02–17
    URL: http://d.repec.org/n?u=RePEc:rco:dpaper:498&r=mic

This nep-mic issue is ©2024 by Jing-Yuan Chiou. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments, please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.