nep-mic New Economics Papers
on Microeconomics
Issue of 2020‒04‒06
fourteen papers chosen by
Jing-Yuan Chiou
National Taipei University

  1. Commitment and Conflict in Multilateral Bargaining By Miettinen, Topi; Vanberg, Christoph
  2. Policy Announcement Design By Anna Cieslak; Semyon Malamud; Andreas Schrimpf
  3. Power to Ignore: Locally Ordinal Bayesian Incentive Compatibility By Miho Hong; Semin Kim
  4. Discounting by committee By Millner, Antony; Healey, Andrew
  5. Collusive Market Allocations By Iossa, Elisabetta; Loertscher, Simon; Marx, Leslie; Rey, Patrick
  6. Decision-making with partial information By Eichberger, Jürgen; Pasichnichenko, Illia
  7. Welfare Economics in Large Worlds: Welfare and Public Policies in an Uncertain Environment By Guilhem Lecouteux
  8. Security Design in Non-Exclusive Markets with Asymmetric Information By Vladimir Asriyan; Victoria Vanasco
  9. Recurrent Preemption Games By Hitoshi Matsushima
  10. Litigation and Settlement under Loss Aversion By Argenton, Cedric; Wang, Xiaoyu
  11. Cold play: Learning across bimatrix games By Lensberg, Terje; Schenk-Hoppé, Klaus R.
  12. Signaling Valence in Primary Elections By Giovanni Andreottola
  13. Scandals, Media Competition and Political Accountability By Giovanni Andreottola; Antoni-Italo de Moragas
  14. Big Data and Democracy By van Gils, Freek; Müller, Wieland; Prüfer, Jens

  1. By: Miettinen, Topi; Vanberg, Christoph
    Abstract: We extend the Baron and Ferejohn (1989) model of multilateral bargaining by allowing the players to attempt to commit to a bargaining position prior to negotiating. If successful, commitment binds a player to reject any proposal which allocates to her a share below a self-imposed threshold. Any such attempted commitment fails and decays with an exogenously given probability. We characterize and compare symmetric stationary subgame perfect equilibria under unanimity rule and majority rules. Under unanimity rule, there are potentially many equilibria which can be ordered from the least to the most inefficient, according to how many commitment attempts must fail in order for an agreement to arise. The most inefficient equilibrium exists independently of the number of players, and the delay in this equilibrium is increasing in the number of players. In addition, more efficient commitment profiles cannot be sustained in equilibrium if the number of players is sufficiently large. The expected inefficiency due to delay at both the least and the most efficient equilibrium increases as the number of players increases. Under any (super)majority rule, however, there is no equilibrium with delay or inefficiency. The reason is that competition to be included in the winning coalition discourages attempts to commit to an aggressive bargaining position. We also show that inefficiencies related to unanimity decision making may be aggravated by longer lags between consecutive bargaining rounds. The predicted patterns are by and large consistent with observed inefficiencies in many international arenas, including the European Union, the WTO, and the UNFCCC. The results suggest that the unanimity rule is particularly damaging if the number of legislators is large and the time lags between consecutive sessions are long.
    Keywords: bargaining; commitment; conflict; delay; environmental agreements; international negotiations; legislative; majority; multilateral; unanimity
    Date: 2020–03–30
  2. By: Anna Cieslak (Duke University - Fuqua School of Business); Semyon Malamud (Ecole Polytechnique Federale de Lausanne; Centre for Economic Policy Research (CEPR); Swiss Finance Institute); Andreas Schrimpf (Bank for International Settlements (BIS) - Monetary and Economic Department)
    Abstract: We study the general problem of information design for a policymaker, a central bank, that communicates its private information (the "state") to the public. We show that it is optimal for the policymaker to partition the state space into a finite number of "clusters" and to communicate to the public to which cluster the state belongs. Optimal communication is more precise when the policymaker's beliefs conform with prior public expectations, but is more vague in case of divergence. We characterize the policymaker's trade-offs via a novel object, the information relevance matrix, and label its eigenvectors as principal information components (PICs). PICs with the highest eigenvalues determine the dimensions of information with the highest welfare sensitivity and, hence, are the ones that the policymaker should be most precise about.
    Keywords: Central Bank Announcements, Learning, Bayesian Persuasion, Information Design
    JEL: D82 D83 E52 E58
    Date: 2020–01
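The ranking step behind the principal information components is an ordinary eigendecomposition: the eigenvectors of the information relevance matrix, sorted by eigenvalue, pick out the information dimensions with the highest welfare sensitivity. A minimal sketch of that step, using an invented symmetric 3×3 matrix in place of the paper's actual object:

```python
import numpy as np

# Hypothetical symmetric "information relevance matrix"; the entries are
# invented for illustration (the paper derives this object from welfare
# sensitivities, which is not reproduced here).
R = np.array([
    [2.0, 0.5, 0.0],
    [0.5, 1.0, 0.3],
    [0.0, 0.3, 0.2],
])

# Eigenvectors play the role of "principal information components" (PICs);
# larger eigenvalues mark dimensions the policymaker should be more precise about.
eigvals, eigvecs = np.linalg.eigh(R)   # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]      # re-sort descending
pics = eigvecs[:, order]
print("eigenvalues (descending):", eigvals[order])
print("leading PIC:", pics[:, 0])
```

The leading column of `pics` is the direction of the state space along which communication precision matters most, under this toy matrix.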
  3. By: Miho Hong (Yale University); Semin Kim (Yonsei University)
    Abstract: We investigate the locally ordinal notion of Bayesian incentive compatibility (LOBIC) of deterministic voting mechanisms. We consider a standard Bayesian environment where agents have private and strict preference orderings on a finite set of alternatives. The domains of preferences over alternatives that we study are even larger than a broad class of domains whose constituents include the unrestricted domain, the single-peaked domain, and the single-dipped domain. With independent and generic priors, we show that LOBIC of a mechanism combined with unanimity implies the tops-only property. Furthermore, we find a subclass of the domains where a mechanism with LOBIC and unanimity is dictatorial. We study the sufficiency of local incentive constraints for full incentive constraints and the relationship between LOBIC and dominant strategy incentive compatibility.
    Keywords: Incentive compatibility; Local incentive compatibility; Tops-only property; Dictatorship; Connected domains; Unanimity
    JEL: C72 D01 D02 D72 D82
    Date: 2020–03
  4. By: Millner, Antony; Healey, Andrew
    Abstract: We study a dynamic social choice problem in which a sequence of committees must decide how to consume a public asset. A committee convened at time t decides on consumption at t, accounting for the behaviour of future committees. Committee members disagree about the appropriate value of the pure rate of time preference, but must nevertheless reach a decision. If each committee aggregates its members' preferences in a utilitarian manner, the collective preferences of successive committees will be time inconsistent, and they will implement inefficient consumption plans. If, however, committees decide on the level of consumption by a majoritarian vote in each period, they may improve on the consumption plans implemented by utilitarian committees. Using a simple model, we show that this occurs in empirically plausible cases. An application to the problem of choosing the social discount rate is discussed.
    Keywords: collective decisions; intertemporal choice; time inconsistency; social discounting; ES/K006576/1
    JEL: D60 D71 D90
    Date: 2018–11–01
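The time inconsistency of utilitarian aggregation has a simple mechanical source: averaging exponential discount factors with different rates yields a discount function whose implied instantaneous rate declines over time, so it is no longer exponential. A toy illustration with two invented rates (not the paper's calibration):

```python
import math

# Committee members disagree on the pure rate of time preference
# (rates invented for illustration).
rates = [0.01, 0.05]

def collective_discount(t):
    """Utilitarian aggregation: the average of members' exponential discount factors."""
    return sum(math.exp(-r * t) for r in rates) / len(rates)

def implied_rate(t, dt=1e-4):
    """Instantaneous discount rate -D'(t)/D(t) of the aggregate, by forward difference."""
    d0 = collective_discount(t)
    d1 = collective_discount(t + dt)
    return -(d1 - d0) / (dt * d0)

# The implied rate starts at the average of the individual rates and falls
# toward the lowest rate as the impatient member's weight decays away,
# so collective preferences are time inconsistent.
print(implied_rate(0.0), implied_rate(100.0))
```

At t = 0 the implied rate is the mean of the two rates (0.03); far in the future it approaches the more patient member's 0.01, which is the declining-discount-rate pattern underlying the inconsistency.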
  5. By: Iossa, Elisabetta; Loertscher, Simon; Marx, Leslie; Rey, Patrick
    Abstract: Collusive schemes by suppliers often take the form of allocating customers or markets among cartel members. We analyze incentives for suppliers to initiate and sustain such collusive schemes in a repeated procurement setting. We show that, contrary to some prevailing beliefs, staggered (versus synchronized) purchasing does not make collusion more difficult to sustain or initiate. Buyer defensive measures include synchronized rather than staggered purchasing, first-price rather than second-price auctions, more aggressive or secret reserve prices, longer contract lengths, withholding information, and avoiding observable registration procedures. Inefficiency induced by defensive measures is an often unrecognized social cost of collusive conduct.
    Keywords: synchronized vs staggered purchasing; sustainability and initiation of collusion; coordinated effects
    JEL: D44 D82 L41
    Date: 2020–03
  6. By: Eichberger, Jürgen; Pasichnichenko, Illia
    Abstract: In this paper, we study choice under uncertainty with belief functions on a set of outcomes as objects of choice. Belief functions describe what is objectively known about the probabilities of outcomes. We assume that decision makers have preferences over belief functions that reflect both their valuation of outcomes and the information available about the likelihood of outcomes. We provide axioms which characterize a preference representation for belief functions that captures what is (objectively) known about the likelihood of outcomes and combines it with subjective beliefs about unknown probabilities according to the “principle of insufficient reason”. The approach is novel in its treatment of partial information and in its axiomatization of the uniform distribution in the case of ignorance. Moreover, our treatment of partial information yields a natural distinction between ambiguity and ambiguity attitude.
    Date: 2020–03–30
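The "principle of insufficient reason" step in the representation can be made concrete: a belief function assigns mass to sets of outcomes, and under ignorance each set's mass is spread uniformly over its elements. A small sketch with invented masses (this covers only the uniform-split step, not the paper's full axiomatized preference representation):

```python
# A belief function assigns mass to *sets* of outcomes rather than to
# individual outcomes; the masses below are invented for illustration.
mass = {
    frozenset({"a"}): 0.5,           # known: outcome a has probability 0.5
    frozenset({"b", "c"}): 0.3,      # known only that "b or c" has probability 0.3
    frozenset({"a", "b", "c"}): 0.2, # complete ignorance about the remaining mass
}

def uniform_split(mass):
    """Apply the principle of insufficient reason: split each focal set's
    mass uniformly over its elements, yielding a probability over outcomes."""
    prob = {}
    for focal_set, m in mass.items():
        share = m / len(focal_set)
        for outcome in focal_set:
            prob[outcome] = prob.get(outcome, 0.0) + share
    return prob

prob = uniform_split(mass)
# a gets 0.5 + 0.2/3; b and c each get 0.3/2 + 0.2/3
```

The gap between what is pinned down by the set-level masses and what the uniform split adds is what allows the paper's distinction between ambiguity and ambiguity attitude.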
  7. By: Guilhem Lecouteux (Université Côte d'Azur; GREDEG CNRS)
    Abstract: The aim of this paper is first to review the different conceptions of welfare advanced in the literature on behavioural welfare economics. I then argue that Savage’s distinction between small and large worlds offers the adequate framework to conceptualise the problem of inferring a notion of welfare from possibly incoherent individual choices. I distinguish between welfarist, behaviourist, constitutional, and procedural approaches to the reconciliation problem, and show that they offer complementary solutions depending on the nature of the uncertainty of the choice problem, and on the epistemic position of the theoretician with respect to the agent we intend to model.
    Keywords: reconciliation problem, nudge, boost, large worlds, welfare
    JEL: A11 B40 D01 D63 D91
    Date: 2020–03
  8. By: Vladimir Asriyan; Victoria Vanasco
    Abstract: We revisit the classic problem of a seller (e.g., a firm) who is privately informed about her asset and needs to raise funds from uninformed buyers (e.g., investors) by issuing securities backed by her asset cash flows. In our setting, buyers post menus of contracts to screen the seller, but the seller cannot commit to accept contracts from only one buyer, i.e., markets are non-exclusive. We show that an equilibrium of this screening game always exists; it is unique and features semi-pooling allocations for a wide range of parameters. In equilibrium, the seller tranches her asset cash flows into a debt security (senior tranche) and a levered-equity security (junior tranche). Whereas the seller of a high quality asset only issues her senior tranche, the seller of a low quality asset issues both tranches, but to distinct buyers. Consistent with this, whereas the senior tranche is priced at a pooling valuation, the junior tranche is priced at a low valuation. Our theory's positive predictions are consistent with recent empirical evidence on the issuance and pricing of mortgage-backed securities, and we analyze its normative implications within the context of recent reforms aimed at enhancing the transparency of financial markets.
    Keywords: adverse selection, security design, non-exclusivity, tranching, liquidity, securitization, transparency, opacity, complexity, market design, regulation
    JEL: G14 G18 D47 D82 D86
    Date: 2019–12
  9. By: Hitoshi Matsushima
    Abstract: I consider a new model of an infinitely repeated preemption game with random matching, termed the recurrent preemption game, wherein each player's discount factor depends on whether she wins the current game. This model describes sequential strategic technology adoptions in which a company becomes outdated by failing to maintain a position at the forefront of innovation. Assuming incomplete information about the presence of a rival, I clarify how the prominence of the innovator's dilemma influences the degree of excessive competition in preemption. I also reveal interesting properties demonstrated by the unique symmetric Nash equilibrium of the recurrent preemption game.
    Date: 2020–02
  10. By: Argenton, Cedric (Tilburg University, TILEC); Wang, Xiaoyu (Tilburg University, TILEC)
    Abstract: In this paper, we investigate how loss aversion affects people's behavior in civil litigation. We find that, for small claims, a loss-averse plaintiff demands a higher offer than a loss-neutral plaintiff in order to maintain her threat to proceed to trial. For larger claims, a loss-averse plaintiff demands a lower offer to increase the settlement probability, because a loss at trial is especially painful to her. We also investigate how various policies affect loss-averse litigants' settlement decisions. Only a reduction in the asymmetry of information about trial odds uniformly leads to higher settlement rates.
    Keywords: settlement; loss aversion; Asymmetric Information
    JEL: D82 K41
    Date: 2020
  11. By: Lensberg, Terje; Schenk-Hoppé, Klaus R.
    Abstract: We study one-shot play in the set of all bimatrix games by a large population of agents. The agents never see the same game twice, but they can learn ‘across games’ by developing solution concepts that tell them how to play new games. Each agent’s individual solution concept is represented by a computer program, and natural selection is applied to derive stochastically stable solution concepts. Our aim is to develop a theory predicting how experienced agents would play in one-shot games.
    Keywords: One-shot games, solution concepts, genetic programming, evolutionary stability.
    JEL: C63 C73 C90
    Date: 2020–03–10
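The selection pressure on solution concepts across many one-shot games can be caricatured in a few lines: candidate rules map a payoff matrix to an action, are scored on a stream of random 2×2 bimatrix games they never see twice, and the worst scorers are dropped. This is a hypothetical toy with hand-written rules and a bare selection step (no genetic programming, crossover, or mutation), not the authors' implementation:

```python
import random

random.seed(0)

def maximin(payoffs):
    """Pick the row whose worst-case payoff is largest."""
    return max(range(2), key=lambda i: min(payoffs[i]))

def best_reply_to_uniform(payoffs):
    """Pick the row with the highest average payoff (opponent assumed uniform)."""
    return max(range(2), key=lambda i: sum(payoffs[i]) / 2)

def pessimist(payoffs):
    """Always play row 0 (a deliberately naive rule)."""
    return 0

def fitness(rule, opponent_rule, n_games=2000):
    """Average row payoff over random 2x2 bimatrix games, each played once."""
    total = 0.0
    for _ in range(n_games):
        row_pay = [[random.random() for _ in range(2)] for _ in range(2)]
        col_pay = [[random.random() for _ in range(2)] for _ in range(2)]
        i = rule(row_pay)
        # The column player applies its rule to its own payoff matrix,
        # viewed from its perspective (transposed).
        col_view = [[col_pay[r][c] for r in range(2)] for c in range(2)]
        j = opponent_rule(col_view)
        total += row_pay[i][j]
    return total / n_games

rules = [maximin, best_reply_to_uniform, pessimist]
scores = {r.__name__: fitness(r, best_reply_to_uniform) for r in rules}
# Selection step: keep the better-scoring half of the candidate population.
survivors = sorted(rules, key=lambda r: scores[r.__name__], reverse=True)[:2]
```

On uniform random payoffs, the naive `pessimist` rule averages about 0.5 while the responsive rules do better, so selection discards it; iterating this kind of step with program mutation is the flavor of the paper's genetic-programming approach.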
  12. By: Giovanni Andreottola (Università di Napoli Federico II and CSEF)
    Abstract: I build a model of two-stage (primary and general) elections in which primary election candidates differ in terms of a privately observed quality dimension (valence). I show that primary election candidates have the incentive to signal their valence by means of their policy platform choice. There can be two types of separating equilibria in primary elections: an extremist equilibrium, in which valent candidates choose more extreme policies than non-valent ones, and a centrist one, in which valent candidates instead move close to the incumbent from the opposing party. The ideology of primary election voters is the main driver of the choice of one versus the other separating strategy. I also study the conditions under which party voters benefit from primaries, as well as those under which primaries increase a party's probability of winning the general election. Finally, I assess the effects of incumbency advantage/disadvantage, explore alternative patterns of valence observability and extend the model to account for both parties holding primaries.
    Date: 2020–03–20
  13. By: Giovanni Andreottola (Università di Napoli Federico II and CSEF); Antoni-Italo de Moragas (Colegio Universitario de Estudios Financieros (CUNEF))
    Abstract: We present a model of a media market in which a set of news outlets compete to break news. In our model, each outlet receives some information on whether a politician in office is corrupt. Media outlets can decide whether to break the story immediately or wait and fact-check, taking into account that if another outlet breaks the news, the profit opportunity disappears. We show that as the number of competitors increases, each outlet becomes more likely to break the news without fact-checking. Therefore, as the number of media outlets increases, the incumbent politician is more likely to be accused of corruption by the media: this makes the re-election of incumbents more difficult and increases political turnover. In particular, we show that if voters consult with higher priority the media outlets that report about a scandal, increasing the number of competitors decreases the probability of having an honest politician in office.
    Date: 2020–03–10
  14. By: van Gils, Freek (Tilburg University, Center For Economic Research); Müller, Wieland (Tilburg University, Center For Economic Research); Prüfer, Jens (Tilburg University, Center For Economic Research)
    Abstract: Recent technological developments have raised concerns about threats to democracy because of their potential to distort election outcomes: (a) data-driven voter research enabling political microtargeting, and (b) growing news consumption via social media and news aggregators that obfuscate the origin of news items, leading to voters’ unawareness about a news sender’s identity. We provide a theoretical framework in which we can analyze the effects that microtargeting by political interest groups and unawareness have on election outcomes in comparison to “conventional” news reporting. We show which voter groups suffer from which technological development, (a) or (b). While both microtargeting and unawareness have negative effects on voter welfare, we show that only unawareness can flip an election. Our model framework allows the theory-based discussion of policy proposals, such as to ban microtargeting or to require news platforms to signal the political orientation of a news item’s originator.
    Keywords: disinformation; interest groups; news platforms; microtargeting; voter awareness
    JEL: C72 D72 D82 D83
    Date: 2020

This nep-mic issue is ©2020 by Jing-Yuan Chiou. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.