nep-mic New Economics Papers
on Microeconomics
Issue of 2024‒04‒22
seventeen papers chosen by
Jing-Yuan Chiou, National Taipei University


  1. Confidence and Organizations By Andres Espitia
  2. Inflated Recommendations By Martin Peitz; Anton Sobolev
  3. Auctions with Dynamic Scoring By Martino Banchio; Aranyak Mehta; Andres Perlroth
  4. Spatial multiproduct competition By Moez Kilani; André de Palma
  5. Bidder-Optimal Information Structures in Auctions By Dirk Bergemann; Tibor Heumann; Stephen Morris
  6. Identification of Information Structures in Bayesian Games By Masaki Miyashita
  7. Score-based mechanisms By Eduardo Perez-Richet; Vasiliki Skreta
  8. Interconnected Conflict By Dziubiński, M.; Goyal, S.; Zhou, J.
  9. Safe Implementation By Malachy James Gavan; Antonio Penta
  10. Algorithmic Information Disclosure in Optimal Auctions By Yang Cai; Yingkai Li; Jinzhao Wu
  11. News Media as Suppliers of Narratives (and Information) By Kfir Eliaz; Ran Spiegler
  12. Entangled vs. Separable Choice By Nail Kashaev; Martin Pl\'avala; Victor H. Aguiar
  13. Volatility and Resilience of Democratic Public-Good Provision By Hans Gersbach; Fikri Pitsuwan; Giovanni Valvassori Bolgè
  14. Repeated Innovations and Excessive Spin-Offs By Mella-Barral, P.; Sabourian, H.
  15. Correlated Equilibrium Strategies with Multiple Independent Randomization Devices By Yohan Pelosse
  16. A stricter canon: general Luce models for arbitrary menu sets By José A. Rodrigues-Neto; Matthew Ryan; James Taylor
  17. Cycle conditions for “Luce rationality” By José A. Rodrigues-Neto; Matthew Ryan; James Taylor

  1. By: Andres Espitia
    Abstract: Miscalibrated beliefs generally compromise the quality of workers' decisions. Why might a firm prefer to hire an individual known to be overconfident? In this paper, I explore the role of such biases when members of the organization disagree about the right course of action. I present a model in which an agent uses his private information to make a choice on behalf of a principal. In this setting, I consider what I call the belief design problem: how would the principal like the agent to interpret his observations? I provide conditions under which the solution indicates a preference for a well-calibrated, an underconfident, or an overconfident agent. A well-calibrated agent is preferred if and only if his information does not affect the expected difference in the players' preferred actions. Overconfidence is optimal when the principal seeks to adjust actions beyond what a well-calibrated agent would do.
    Keywords: principal-agent, overconfidence, belief design
    JEL: D82 D83 D91
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2024_521&r=mic
  2. By: Martin Peitz; Anton Sobolev
    Abstract: Biased recommendations arise naturally in markets with heterogeneous consumers. We study a model in which a monopolist offers an experience good to a population of consumers with heterogeneous tastes and makes personalized purchase recommendations. We provide conditions under which a firm makes welfare-reducing purchase recommendations with positive probability, resulting in inflated recommendations. We extend this insight to a setting in which an intermediary makes the recommendations, whereas a seller sets the retail price. Regulatory interventions that forbid inflated recommendations may lead to higher social welfare or may backfire.
    Keywords: recommendation bias, recommender system, asymmetric information, experience good, intermediation
    JEL: L12 L15 D21 D42 M37
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2022_336v2&r=mic
  3. By: Martino Banchio; Aranyak Mehta; Andres Perlroth
    Abstract: We study the design of auctions with dynamic scoring, which allocate a single item according to a given scoring rule. We are motivated by online advertising auctions when users interact with a platform over the course of a session. The platform ranks ads based on a combination of bids and quality scores, and updates the quality scores throughout the session based on the user's online activity. The platform must decide when to show an ad during the session. By delaying the auction, the auctioneer acquires information about an ad's quality, improving her chances of selecting a high-quality ad. However, information is costly, because delay reduces market thickness and, in turn, revenue. When should the auctioneer allocate the impression to balance these forces? We develop a theoretical model to study the effect of market design on the trade-off between market thickness and information. In particular, we focus on first- and second-price auctions. The auctioneer can commit to the auction format, but not to its timing: her decision can thus be cast as a real options problem. We show that under optimal stopping the first-price auction allocates efficiently but with delay, whereas the second-price auction generates more revenue by avoiding delay. The auctioneer benefits from introducing reserve prices, more so in a first-price auction.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.11022&r=mic
  4. By: Moez Kilani; André de Palma (Université de Cergy-Pontoise, THEMA)
    Abstract: We analyze spatial competition on a circle between firms that have multiple outlets and face quadratic transport costs. The equilibrium is that of a two-stage Nash game: firms first choose their locations and then set their prices. We solve simple multi-outlet cases analytically, but the general case requires an algorithm that enumerates all non-isomorphic configurations. While price equilibria are explicit and unique, solving the full two-stage game requires numerical methods. In the location game, we consider two scenarios: either firms cannot jump an outlet over a competitor's outlet, or firms may locate outlets anywhere on the circle. The solution balances cannibalization, market protection, and spatial monopoly power. We compare prices, profits, and transport costs across all possible configurations. With flexible locations, each firm's market area is contiguous and, surprisingly, each firm acts as a spatial monopoly. If regulation forces each firm to set the same price at all of its outlets, head-to-head competition prevails, lowering the firms' profits but leaving consumers better off. (An illustrative sketch of the circular transport-cost setup follows this entry.)
    Keywords: Spatial competition, circle, multi-product oligopoly, price-location equilibria, coin change problem
    JEL: L13 R32 R53
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ema:worpap:2023-18&r=mic
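    The following is a minimal illustrative sketch of the kind of setup described in the abstract above: consumers located on a circle of circumference 1, quadratic transport costs, and purchase from the outlet offering the lowest full price. The outlet locations, prices, and transport-cost parameter t are hypothetical and are not taken from the paper.
      def circle_distance(x, y):
          """Shortest arc distance between two points on a circle of circumference 1."""
          d = abs(x - y) % 1.0
          return min(d, 1.0 - d)

      def full_price(consumer, outlet_location, mill_price, t=1.0):
          """Mill price plus quadratic transport cost t * d(consumer, outlet)**2."""
          return mill_price + t * circle_distance(consumer, outlet_location) ** 2

      # Hypothetical outlets as (location, price); the consumer buys where the full price is lowest.
      outlets = [(0.00, 0.40), (0.25, 0.35), (0.50, 0.40), (0.75, 0.35)]
      consumer = 0.10
      best = min(outlets, key=lambda o: full_price(consumer, o[0], o[1]))
      print("consumer at", consumer, "buys from the outlet located at", best[0])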
  5. By: Dirk Bergemann (Yale University); Tibor Heumann (Pontificia Universidad Catolica de Chile); Stephen Morris (Massachusetts Institute of Technology)
    Abstract: We characterize the bidders' surplus-maximizing information structure in an optimal auction for a single-unit good, together with extensions to multi-unit and multi-good problems. The bidders seek to balance participation (and the avoidance of exclusion) against efficiency. The information structure that maximizes the bidders' surplus is given by a generalized Pareto distribution at the center of the demand distribution, and features complete information disclosure at either end of the distribution. (The standard form of the generalized Pareto distribution is recalled after this entry.)
    Date: 2024–02–09
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2375r1&r=mic
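    For reference, the standard generalized Pareto distribution named in the abstract above has survival function (the paper's exact parameterization and support are not reproduced here)
      \[
      1 - F(x) = \Bigl(1 + \tfrac{\xi\,(x-\mu)}{\sigma}\Bigr)^{-1/\xi}, \qquad \sigma > 0,
      \]
      with the exponential limit $1 - F(x) = e^{-(x-\mu)/\sigma}$ as $\xi \to 0$.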
  6. By: Masaki Miyashita
    Abstract: To what extent can an external observer observing an equilibrium action distribution in an incomplete information game infer the underlying information structure? We investigate this issue in a general linear-quadratic-Gaussian framework. A simple class of canonical information structures is offered and proves rich enough to rationalize any possible equilibrium action distribution that can arise under an arbitrary information structure. We show that the class is parsimonious in the sense that the relevant parameters can be uniquely pinned down by an observed equilibrium outcome, up to some qualifications. Our result implies, for example, that the accuracy of each agent's signal about the state is identified, as measured by how much observing the signal reduces the state variance. Moreover, we show that a canonical information structure characterizes the lower bound on the amount by which each agent's signal can reduce the state variance, across all observationally equivalent information structures. The lower bound is tight, for example, when the actual information structure is uni-dimensional, or when there are no strategic interactions among agents, but in general, there is a gap since agents' strategic motives confound their private information about fundamental and strategic uncertainty.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.11333&r=mic
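    As a point of reference for the identification result above, one standard way to measure how much a Gaussian signal reduces the state variance is the following textbook illustration, which assumes a one-dimensional additive signal and is not the paper's canonical class: if $\theta \sim N(\mu, \sigma_\theta^2)$ and an agent observes $s = \theta + \varepsilon$ with independent noise $\varepsilon \sim N(0, \sigma_\varepsilon^2)$, then
      \[
      \operatorname{Var}(\theta) - \operatorname{Var}(\theta \mid s)
      = \sigma_\theta^{2} - \frac{\sigma_\theta^{2}\,\sigma_\varepsilon^{2}}{\sigma_\theta^{2} + \sigma_\varepsilon^{2}}
      = \frac{\sigma_\theta^{4}}{\sigma_\theta^{2} + \sigma_\varepsilon^{2}} .
      \]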
  7. By: Eduardo Perez-Richet; Vasiliki Skreta
    Abstract: We propose a mechanism design framework that incorporates both soft information, which can be freely manipulated, and semi-hard information, which entails a cost for falsification. The framework captures various contexts such as school choice, public housing, organ transplantation, and the manipulation of classification algorithms. We first provide a canonical class of mechanisms for these settings. The key idea is to treat the submission of hard information as an observable and payoff-relevant action and the contractible part of the mechanism as a mapping from submitted scores to a distribution over decisions (a score-based decision rule). Each type report triggers a distribution over score submission requests and a distribution over decision rules. We provide conditions under which score-based mechanisms are without loss of generality, that is, conditions under which the agent makes no type report and decides, without a mediator, what score to submit in a score-based decision rule. We proceed to characterize optimal approval mechanisms in the presence of manipulable hard information. In several leading settings optimal mechanisms are score-based (and thus do not rely on soft information) and involve costly screening. The solution methodology we employ is suitable for both concave and quadratic cost functions and is applicable to a wide range of contexts in economics and computer science.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.08031&r=mic
  8. By: Dziubiński, M.; Goyal, S.; Zhou, J.
    Abstract: We study a model of conflict with multiple battlefields and the possibility of investment spillovers between battlefields. The outcome of the conflict at each battlefield is determined by the Tullock contest success function, based on the efforts assigned to that battlefield as well as efforts spilling over from neighbouring battlefields. We characterize the Nash equilibria of this model and uncover a network invariance result: equilibrium payoffs, equilibrium total expenditure, and equilibrium probabilities of winning individual battlefields are independent of the network of spillovers. We show that the network invariance holds for any contest success function that is homogeneous of degree zero and has the no-tie property. We define a network index that characterizes the equilibrium effort assignments of the players, and we show that the index satisfies neighbourhood inclusion and can therefore be considered a network centrality measure. (An illustrative sketch of the Tullock contest success function follows this entry.)
    Keywords: Conflict, Investments, Models, Networks
    Date: 2024–02–21
    URL: http://d.repec.org/n?u=RePEc:cam:camjip:2403&r=mic
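    The following is a minimal illustrative sketch of the basic Tullock contest success function mentioned in the abstract above, with a player's effective effort on a battlefield equal to her own assignment plus a share of the effort spilling over from neighbouring battlefields. The spillover weight delta and the two-battlefield network are hypothetical and not taken from the paper.
      def tullock_win_prob(own_effort, rival_effort):
          """Basic Tullock contest success function: win probability x / (x + y)."""
          total = own_effort + rival_effort
          if total == 0:
              return 0.5  # tie-breaking convention; the CSFs considered in the paper have a no-tie property
          return own_effort / total

      def effective_effort(assignments, neighbours, battlefield, delta):
          """Own effort on a battlefield plus a spillover share of neighbouring efforts."""
          spill = sum(assignments[b] for b in neighbours[battlefield])
          return assignments[battlefield] + delta * spill

      # Two battlefields linked to each other; one player's hypothetical effort assignments.
      assignments = {0: 2.0, 1: 1.0}
      neighbours = {0: [1], 1: [0]}
      x0 = effective_effort(assignments, neighbours, battlefield=0, delta=0.5)
      print(tullock_win_prob(x0, rival_effort=3.0))  # homogeneous of degree zero: scaling both efforts leaves this unchanged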
  9. By: Malachy James Gavan; Antonio Penta
    Abstract: We introduce Safe Implementation, a framework for implementation theory that adds to the standard requirements the restriction that agents’ deviations induce outcomes that are acceptable. Our primitives therefore include both a Social Choice Correspondence, as standard, and an Acceptability Correspondence, each mapping every state of the world to a subset of allocations. This framework generalizes standard notions of implementation and can accommodate a variety of questions, including robustness with respect to mistakes in play, behavioral considerations, state-dependent feasibility restrictions, limited commitment, etc. We provide results both for general solution concepts and for Nash Equilibrium. For the latter, we identify necessary and sufficient conditions (namely, Comonotonicity and Safe No-Veto) that restrict the joint behavior of the Social Choice and Acceptability Correspondences and that generalize Maskin’s (1977) conditions. We show that these conditions are quite permissive in important economic applications, but that Safe Implementation can be very demanding in environments with ‘rich’ preferences, regardless of the underlying solution concept.
    Keywords: Mechanism Design, Implementation, Robustness, Safe Implementation, Comonotonicity, Safe No-Veto
    JEL: C72 D82
    URL: http://d.repec.org/n?u=RePEc:liv:livedp:202401&r=mic
  10. By: Yang Cai; Yingkai Li; Jinzhao Wu
    Abstract: This paper studies a joint design problem where a seller can design both the signal structures for the agents to learn their values, and the allocation and payment rules for selling the item. In his seminal work, Myerson (1981) shows how to design the optimal auction with exogenous signals. We show that the problem becomes NP-hard when the seller also has the ability to design the signal structures. Our main result is a polynomial-time approximation scheme (PTAS) for computing the optimal joint design with at most an $\epsilon$ multiplicative loss in expected revenue. Moreover, we show that in our joint design problem, the seller can significantly reduce the information rent of the agents by providing partial information, which ensures a revenue that is at least $1 - \frac{1}{e}$ of the optimal welfare for all valuation distributions.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.08145&r=mic
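    For context, the exogenous-signal benchmark of Myerson (1981) cited in the abstract above awards the item, for regular symmetric value distributions, to the bidder with the highest nonnegative virtual value. The minimal sketch below works this out for a hypothetical uniform [0, 1] value distribution; the joint signal-and-auction design problem studied in the paper is not implemented here.
      def virtual_value_uniform(v):
          """Myerson virtual value phi(v) = v - (1 - F(v)) / f(v) for v ~ Uniform[0, 1]."""
          return v - (1.0 - v)  # F(v) = v and f(v) = 1, so phi(v) = 2v - 1

      # The revenue-optimal reserve price solves phi(r) = 0, giving r = 1/2 in this example.
      reserve = 0.5
      assert abs(virtual_value_uniform(reserve)) < 1e-12
      print("optimal reserve with uniform [0, 1] values:", reserve)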
  11. By: Kfir Eliaz; Ran Spiegler
    Abstract: We present a model of news media that shape consumer beliefs by providing information (signals about an exogenous state) and narratives (models of what determines outcomes). To amplify consumers' engagement, media maximize consumers' anticipatory utility. Focusing on a class of separable consumer preferences, we show that a monopolistic media platform facing homogeneous consumers provides a false "empowering" narrative coupled with an optimistically biased signal. Consumer heterogeneity gives rise to a novel menu-design problem due to a "data externality" among consumers. The optimal menu features multiple narratives and creates polarized beliefs. These effects also arise in a competitive media market model.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.09155&r=mic
  12. By: Nail Kashaev; Martin Pl\'avala; Victor H. Aguiar
    Abstract: We study joint probabilistic choice rules that describe the behavior of two decision makers, each facing a possibly different menu. These choice rules are separable when they can be factored into autonomous choices from each individual solely correlated through their individual probabilistic choice rules. Despite recent interest in studying such rules, a complete characterization of the restrictions on them remains an open question. A reasonable conjecture is that such restrictions on separable joint choice can be factored into individual choice restrictions. We name these restrictions separable and show that this conjecture is true if and only if the probabilistic choice rule of at least one decision maker uniquely identifies the distribution over deterministic choice rules. Otherwise, entangled choice rules exist that satisfy separable restrictions yet are not separable. The possibility of entangled choice complicates the characterization of separable choice since one needs to augment the separable restrictions with the new emerging ones.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.09045&r=mic
  13. By: Hans Gersbach; Fikri Pitsuwan; Giovanni Valvassori Bolgè
    Abstract: We examine democratic public-good provision with heterogeneous legislators. Decisions are taken by majority rule and an agenda-setter proposes a level of the public good, taxes, and subsidies. Members are heterogeneous with respect to their benefits from the public good. We find that, depending on the status quo public-good level, the agenda-setter will form a coalition with the agents who most desire, or least desire, the public good, and we may observe ‘strange bedfellow’ coalitions. Moreover, public-good provision is a non-monotonic function of the status quo public-good level. In the dynamic setting, public-good provision fluctuates endogenously, even if the agenda-setter stays the same over time. Moreover, the more polarized the legislature is, the higher is the volatility of public-good provision and the longer it may take for a society to recover from negative shocks to public-good provision. We illustrate these findings for a two-party system with polarized parties.
    Keywords: legislative bargaining, coalition, public goods, polarization, resilience
    JEL: C73 D72 H50
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_11004&r=mic
  14. By: Mella-Barral, P.; Sabourian, H.
    Abstract: Firms can voluntarily create independent firms to implement their technologically distant innovations and capture their value through capital markets. We argue that when firms repeatedly compete to make innovations, there is inefficient external implementation of innovations and “excessive” creation of such firms. This inefficiency is most exacerbated in the early stages of an industry, when the number of firms is still limited.
    Keywords: Repeated Innovations, Spin-Offs, Voluntary Firm Creation
    JEL: M13 O31 O33
    Date: 2023–06–30
    URL: http://d.repec.org/n?u=RePEc:cam:camjip:2312&r=mic
  15. By: Yohan Pelosse (Humanities and Social Sciences, Swansea University)
    Abstract: A primitive assumption underlying Aumann (1974, 1987) is that all players of a game may correlate their strategies by agreeing on a single common ‘public roulette’. A natural extension of this idea is to study strategies when the assumption of a single random device common to all players (public roulette) is dropped and (arbitrary) disjoint subsets of players forming a coalition structure are allowed to use independent random devices (private roulettes) à la Aumann. Under multiple independent random devices, the coalitions’ mixed strategies form an equilibrium of the induced non-cooperative game played across the coalitions – the ‘partitioned game’ – when the profile of such coalitions’ strategies is a profile of correlated equilibria. These correlated equilibria, which are the mutual joint best responses of the coalitions, are called the Nash coalitional correlated equilibria (NCCEs) of the game. The paper identifies various classes of finite and infinite games in which there exists a non-empty set of NCCEs lying outside the regular correlated equilibrium distributions of the game. We notably relate the class of NCCEs to the ‘coalitional equilibria’ introduced in Ray and Vohra (1997) to construct their ‘Equilibrium Binding Agreements’. In a ‘coalitional equilibrium’, coalitions’ best responses are defined by Pareto dominance, and their existence is not guaranteed in arbitrary games without the use of correlated mixed strategies. We characterize a family of games where the existence of a non-empty set of non-trivial NCCEs is guaranteed to coincide with a subset of coalitional equilibria. Most of our results are based on the characterization of the induced non-cooperative ‘partitioned game’ played across the coalitions.
    JEL: C72 C92 D83
    Date: 2024–03–17
    URL: http://d.repec.org/n?u=RePEc:swn:wpaper:2024-05&r=mic
  16. By: José A. Rodrigues-Neto (Research School of Economics, Australian National University); Matthew Ryan (Department of Economics and Finance, Auckland University of Technology); James Taylor (Research School of Economics, Australian National University)
    Abstract: The classical Luce model (Luce, 1959) assumes positivity of random choice: each available alternative is chosen with strictly positive probability. The model is characterised by Luce's choice axiom. Ahumada and Ülkü (2018) and (independently) Echenique and Saito (2019) define the general Luce model (GLM), which relaxes the positivity assumption, and show that it is characterised by a cyclical independence (CI) axiom. Cerreia-Vioglio et al. (2021) subsequently proved that the choice axiom characterises an important special case of the GLM in which a rational choice function (i.e., one that may be rationalised by a weak order) first selects the acceptable alternatives from the given menu, with any residual indifference resolved randomly in Luce fashion. The choice axiom is thus revealed as a fundamental “canon of probabilistic rationality”. This result assumes that choice behaviour is specified for all non-empty, finite menus that can be constructed from a given universe, X, of alternatives. We relax this assumption by allowing choice behaviour to be specified for an arbitrary collection of non-empty, finite menus. In this context, we show that the Cerreia-Vioglio et al. (2021) result obtains when the choice axiom is replaced with a mild strengthening of CI. The latter condition implies the choice axiom, thus providing a “stricter canon”. (An illustrative statement of the classical Luce rule follows this entry.)
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:aut:wpaper:2024-04&r=mic
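    The following is a minimal sketch of the classical Luce rule referred to above, with hypothetical weights; it illustrates the choice axiom (the relative odds between two alternatives do not depend on the menu), not the paper's general Luce model.
      def luce_choice_probs(menu, weights):
          """Classical Luce rule: choose x from the menu with probability w(x) / sum of w over the menu."""
          total = sum(weights[x] for x in menu)
          return {x: weights[x] / total for x in menu}

      weights = {"a": 3.0, "b": 2.0, "c": 1.0}  # hypothetical strictly positive Luce weights
      print(luce_choice_probs(["a", "b", "c"], weights))  # a: 1/2, b: 1/3, c: 1/6
      print(luce_choice_probs(["a", "b"], weights))       # a: 3/5, b: 2/5; the odds a:b stay 3:2 (choice axiom)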
  17. By: José A. Rodrigues-Neto (Research School of Economics, Australian National University); Matthew Ryan (Department of Economics and Finance, Auckland University of Technology); James Taylor (Research School of Economics, Australian National University)
    Abstract: We extend and refine conditions for “Luce rationality” (i.e., the existence of a Luce – or logit – model) in the context of stochastic choice. When choice probabilities satisfy positivity, we show that the cyclical independence (CI) condition of Ahumada and Ülkü (2018) and Echenique and Saito (2019) is necessary and sufficient for Luce rationality, even if choice is only observed for a restricted set of menus. We then adapt results from the cycles approach (Rodrigues-Neto, 2009) to the common prior problem (Harsanyi, 1967-1968) to refine the CI condition by reducing the number of cycle equations that need to be checked. A general algorithm is provided to identify a minimal sufficient set of equations (depending on the collection of menus for which choice is observed). Three cases are discussed in detail: (i) when choice is only observed from binary menus; (ii) when all menus contain a common default; and (iii) when all menus contain an element from a common binary default set. Investigation of case (i) leads to a refinement of the famous product rule. (The classical three-alternative form of the product rule is recalled after this entry.)
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:aut:wpaper:2024-03&r=mic
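    For reference, the product rule mentioned in the abstract above requires, in its classical three-alternative form for binary menus with strictly positive choice probabilities p(x, y) of choosing x over y, that every cycle of odds multiplies to one:
      \[
      \frac{p(x,y)}{p(y,x)} \cdot \frac{p(y,z)}{p(z,y)} \cdot \frac{p(z,x)}{p(x,z)} = 1 .
      \]
      The paper's refinement identifies which such cycle equations actually need to be checked for a given collection of menus; only the textbook form is shown here.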

This nep-mic issue is ©2024 by Jing-Yuan Chiou. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.