nep-mic New Economics Papers
on Microeconomics
Issue of 2018‒10‒15
Nine papers chosen by
Jing-Yuan Chiou
National Taipei University

  1. Optimal Law Enforcement with Ordered Leniency By Claudia M. Landeo; Kathryn E. Spier
  2. Discriminating Against Captive Customers By Armstrong, Mark; Vickers, John
  3. Why Echo Chambers are Useful By Ole Jann; Christoph Schottmüller
  4. Belief-weighted Nash aggregation of Savage preferences By SPRUMONT, Yves
  5. Robust scoring rules By Tsakas, Elias
  6. Extensions of the Shapley Value for Environments with Externalities By Inés Macho-Stadler; David Pérez-Castrillo; David Wettstein
  7. Precision may harm: The comparative statics of imprecise judgement By HORAN, Sean; MANZINI, Paola
  8. Why do voters elect less qualified candidates? By Mizuno, Nobuhiro; Okazawa, Ryosuke
  9. All or Nothing: State Capacity and Optimal Public Goods Provision By Felix Bierbrauer; Justus Winkelmann

  1. By: Claudia M. Landeo (University of Alberta); Kathryn E. Spier (Harvard Law School and NBER)
    Abstract: This paper studies the design of enforcement policies to detect and deter harmful short-term activities committed by groups of injurers. With an ordered-leniency policy, the degree of leniency granted to an injurer who self-reports depends on his or her position in the self-reporting queue. By creating a “race to the courthouse,” ordered-leniency policies lead to faster detection and stronger deterrence of illegal activities. The socially-optimal level of deterrence can be obtained at zero cost when the externalities associated with the harmful activities are not too high. Without leniency for self-reporting, the enforcement cost is strictly positive and there is underdeterrence of harmful activities relative to the first-best level. Hence, ordered-leniency policies are welfare improving. Our findings for environments with groups of injurers complement Kaplow and Shavell's (1994) results for single-injurer environments.
    Keywords: Law Enforcement, Ordered Leniency, Self-Reporting, Leniency, Harmful Externalities, Non-Cooperative Games, Prisoners' Dilemma Game, Coordination Game, Risk Dominance, Pareto Dominance, Corporate Misconduct, White-Collar Crime, Securities Fraud, Insider Trading, Market Manipulation, Whistleblowers, Plea Bargaining, Tax Evasion, Environmental Policy Enforcement
    JEL: C72 D86 K10 L23
    Date: 2018–09
  2. By: Armstrong, Mark; Vickers, John
    Abstract: We analyze a market where some consumers only consider buying from a specific seller, while other consumers choose the best deal from among several sellers. We show that, relative to uniform pricing, discrimination against captive customers harms consumers in aggregate when sellers are approximately symmetric, but tends to benefit consumers in sufficiently asymmetric markets.
    Keywords: Price discrimination; captive customers; consideration sets
    JEL: D43 D8 L13
    Date: 2018–10
  3. By: Ole Jann; Christoph Schottmüller
    Abstract: Why do people appear to forgo information by sorting into “echo chambers”? We construct a highly tractable multi-sender, multi-receiver cheap talk game in which players choose with whom to communicate. We show that segregation into small, homogeneous groups can improve everybody’s information and generate Pareto improvements. Polarized preferences create a need for segregation; uncertainty about preferences magnifies this need. Using data from Twitter, we document several behavioral patterns that are consistent with the results of our model.
    Keywords: Asymmetric Information, Echo Chambers, Polarization, Debate, Cheap Talk, Information Aggregation, Twitter
    JEL: D72 D82 D83 D85
    Date: 2018–10–02
  4. By: SPRUMONT, Yves
    Abstract: The 'belief-weighted Nash social welfare functions' are methods for aggregating Savage preferences defined over a set of acts. Each such method works as follows. Fix a 0-normalized subjective expected utility representation of every possible preference and assign a vector of individual weights to each profile of beliefs. To compute the social preference at a given preference profile, rank the acts according to the weighted product of the individual 0-normalized subjective expected utilities they yield, where the weights are those associated with the belief profile generated by the preference profile. We show that these social welfare functions are characterized by the weak Pareto principle, a continuity axiom, and the following informational robustness property: the social ranking of two acts is unaffected by the addition of any outcome that every individual deems at least as good as the one she originally found worst. This makes the belief-weighted Nash social welfare functions appealing in contexts where the 'best' relevant outcome for an individual is difficult to identify.
    Keywords: Preference aggregation; uncertainty; subjective expected utility; Nash product
    JEL: D63 D71
    Date: 2018
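As a sketch, the aggregation rule described in the abstract can be written as follows (our own illustrative notation, not necessarily the paper's): let U_i denote individual i's 0-normalized subjective expected utility and w_i(p) the weight assigned to i at the belief profile p generated by the preference profile.

```latex
% Belief-weighted Nash aggregation (illustrative notation):
% act f is socially at least as good as act g iff
f \succsim_{\mathrm{soc}} g
\iff
\prod_{i=1}^{n} U_i(f)^{\,w_i(p)} \;\ge\; \prod_{i=1}^{n} U_i(g)^{\,w_i(p)}
```

The 0-normalization pins down each U_i up to a positive scale factor, which is what makes the weighted product well-defined as a social ranking.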
  5. By: Tsakas, Elias (General Economics 1 (Micro))
    Abstract: We study elicitation of latent (prior) beliefs when the agent can acquire information via a costly attention strategy. We introduce a mechanism that simultaneously makes it strictly dominant to (a) not acquire any information, and (b) report truthfully. We call such a mechanism a robust scoring rule. Robust scoring rules are important for several reasons. Theoretically, they are crucial for establishing that decision-theoretic models under uncertainty are testable. From an applied point of view, they are needed for eliciting unbiased estimates of population beliefs. We prove that a robust scoring rule exists under mild axioms on the attention costs. These axioms are shown to characterize the class of posterior-separable cost functions. Our existence proof is constructive, thus identifying an entire class of robust scoring rules. Subsequently, we show that we can arbitrarily approximate the agent's prior beliefs with a quadratic scoring rule. The same holds true for a discrete scoring rule. Finally, we show that the prior beliefs can be approximated even when we are uncertain about the exact specification of the agent's attention costs.
    Keywords: belief elicitation, prior beliefs, rational inattention, hidden information costs, posterior-separability, Shannon entropy, population beliefs, testing decision-theoretic models
    JEL: C91 D81 D82 D83 D87
    Date: 2018–10–08
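For context, the classical quadratic (Brier) scoring rule over states 1, …, K pays an agent who reports the distribution q the following amount when state k realizes (textbook form; the paper's exact construction may differ):

```latex
% Quadratic (Brier) scoring rule. The expected score under belief p is
%   E_p[S(q)] = 2 \sum_k p_k q_k - \sum_j q_j^2,
% which is maximized at the truthful report q = p.
S_k(q) \;=\; 2\,q_k \;-\; \sum_{j=1}^{K} q_j^{2}
```

Truth-telling is optimal for an agent who holds belief p and acquires no further information, which is why quadratic rules are a natural benchmark for belief elicitation.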
  6. By: Inés Macho-Stadler; David Pérez-Castrillo; David Wettstein
    Abstract: Shapley (1953a) formulates his proposal of a value for cooperative games with transferable utility in characteristic function form, that is, for games where the resources every group of players has available to distribute among its members only depend on the members of the group. However, the worth of a coalition of agents often depends on the organization of the rest of the players. The existence of externalities is one of the key ingredients in most interesting economic, social, or political environments. Thrall and Lucas (1963) provide the first formal description of settings with externalities by introducing the games in partition function form. In this chapter, we present the extensions of the Shapley value to this larger set of games. The different approaches that lead to the Shapley value in characteristic function form games (axiomatic, marginalistic, potential, dividends, non-cooperative) provide alternative routes for addressing the question of the most suitable extension of the Shapley value for the set of games in partition function form.
    Keywords: Shapley value, externalities
    JEL: C71 D62
    Date: 2018–10
  7. By: HORAN, Sean; MANZINI, Paola
    Abstract: We consider an agent whose information about the objects of choice is imperfect in two respects: first, their values are perceived with ‘error’; and, second, the realised values cannot be discriminated with absolute ‘precision’. Reasons for imprecise discrimination include limitations in sensory perception, memory function, or the technology that experts use to communicate with decision-makers. We study the effect of increasing precision on the quality of decision-making. When values are perceived ‘without’ error, more precision is unambiguously beneficial. We show that this ceases to be true when values are perceived ‘with’ error. As a practical implication, our results establish conditions where it is counter-productive for an expert to use a finer signalling scheme to communicate with a decision-maker.
    Keywords: Stochastic choice; imprecise perception
    JEL: D01
    Date: 2018
  8. By: Mizuno, Nobuhiro; Okazawa, Ryosuke
    Abstract: Voters sometimes vote for seemingly less qualified candidates; the winners of elections are sometimes less competent than the losers in light of candidates' observable characteristics, such as their past careers. To explain this fact, we develop a political agency model with repeated elections in which a voter elects a policy maker from among candidates with different competency (valence) levels. We show that politicians' competency is negatively related to political accountability when the challenger in the future election is likely to be incompetent. When this negative relation exists, voters prefer to elect an incompetent candidate if they emphasize politicians' policy choices over their competency. The negative relation between competency and accountability is possible because voters cannot commit to future voting strategies. Furthermore, voters' private information about how they evaluate candidates' competency generates a complementary mechanism leading to the negative relation between competency and accountability. This mechanism implies that voters' anti-elitism can be rational ex post even if it is groundless in the first place.
    Keywords: Candidates' competency, Political agency, Repeated elections, Private information, Signaling
    JEL: D72 D82
    Date: 2018–09–27
  9. By: Felix Bierbrauer; Justus Winkelmann
    Abstract: We study the provision of public goods. Different public goods can be bundled provided there is enough capacity, i.e. resources to pay for all the public goods in the bundle. The analysis focuses on the all-or-nothing mechanism: expand provision as far as resources permit if no one vetoes; otherwise, stick to the status quo. We show that the probability of the “all” outcome converges to one as the capacity becomes unbounded. We also provide conditions under which the all-or-nothing mechanism is ex ante welfare-maximizing, even though, ex post, it involves an overprovision of public goods.
    Keywords: public goods, bundling, state capacity, mechanism design
    JEL: D79 D82 H41
    Date: 2018
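The decision rule in the abstract can be made concrete with a small sketch. The function below is our own toy formalization, not the paper's mechanism: the names, types, and the assumption that goods are considered in a fixed order are all illustrative.

```python
def all_or_nothing(vetoes, costs, capacity):
    """Stylized all-or-nothing rule: provide the largest affordable bundle
    of public goods unless any agent vetoes.

    vetoes   -- list of booleans, one per agent (True = veto)
    costs    -- per-good costs, in the order goods would be added
    capacity -- total resources available to pay for the bundle
    Returns the set of indices of provided goods (empty set = status quo).
    """
    if any(vetoes):
        return set()                  # any veto -> stick to the status quo
    provided, spent = set(), 0.0
    for i, cost in enumerate(costs):
        if spent + cost <= capacity:  # expand as far as resources permit
            provided.add(i)
            spent += cost
    return provided
```

For example, with no vetoes, costs [1.0, 2.0, 5.0] and capacity 4.0, the rule provides goods {0, 1}; with any veto it provides nothing. As the capacity grows, the provided bundle eventually covers every good, which mirrors the paper's "all" outcome.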

This nep-mic issue is ©2018 by Jing-Yuan Chiou. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at the NEP homepage. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.