nep-mic New Economics Papers
on Microeconomics
Issue of 2016‒08‒21
eleven papers chosen by
Jing-Yuan Chiou
National Taipei University

  1. The curse of long horizons By Bhaskar, Venkataraman; Mailath, George
  2. Bayesian Posteriors For Arbitrarily Rare Events By Drew Fudenberg; Kevin He; Lorens Imhof
  3. Endogenous Public Information and Welfare in Market Games By Xavier Vives
  4. Marketmaking Middlemen By Pieter Gautier; Bo Hu; Makoto Watanabe
  5. Rational allocation of attention in decision-making By Schmitt, Stefanie Yvonne
  6. Delegation of quality control in value chains By Saak, Alexander E.
  7. Secret ballots and costly information gathering: the jury size problem revisited By Guha, Brishti
  8. Strategic sequential voting By González-Díaz, Julio; Herold, Florian; Domínguez, Diego
  9. Patentability, R&D direction, and cumulative innovation By Chen, Yongmin; Pan, Shiyuan; Zhang, Tianle
  10. Common Belief Revisited By Romeo Matthew Balanquit
  11. Threshold Bank-run Equilibrium in Dynamic Games By Romeo Matthew Balanquit

  1. By: Bhaskar, Venkataraman; Mailath, George
    Abstract: We study dynamic moral hazard with symmetric ex ante uncertainty about the difficulty of the job. The principal and agent update their beliefs about the difficulty as they observe output. Effort is private and the principal can only offer spot contracts. The agent has an additional incentive to shirk beyond the disutility of effort when the principal induces effort: shirking results in the principal having incorrect beliefs. We show that the effort inducing contract must provide increasingly high powered incentives as the length of the relationship increases. Thus it is never optimal to always induce effort in very long relationships.
    Keywords: differences in beliefs; high-powered incentives; moral hazard; principal-agency
    JEL: D01 D23 D86 J30
    Date: 2016–08
  2. By: Drew Fudenberg; Kevin He; Lorens Imhof
    Abstract: Each period, either a blue die or a red die is tossed. The two dice land on side $\bar{k}$ with unknown probabilities $p_{\bar{k}}$ and $q_{\bar{k}}$, which can be arbitrarily low. Given a data-generating process where $p_{\bar{k}}\ge q_{\bar{k}}$, we are interested in how much data is required to guarantee that with high probability the observer's Bayesian posterior mean for $p_{\bar{k}}$ exceeds that for $q_{\bar{k}}$. If the prior is positive on the interior of the simplex and vanishes no faster than polynomially to zero at the simplex boundaries, then for every $\epsilon>0$, there exists $N\in\mathbb{N}$ so that the observer obtains such an inference after n periods with probability at least $1-\epsilon$ whenever $np_{\bar{k}}\ge N$. This result can fail if the prior vanishes to zero exponentially fast at the boundary.
    Date: 2016–08
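    The theorem's condition $np_{\bar{k}}\ge N$ says the data requirement scales with $1/p_{\bar{k}}$. A small simulation can illustrate this (a sketch only, not the paper's construction; the six-sided dice, the face probabilities 0.004 and 0.001, and the uniform Dirichlet prior are illustrative assumptions, though a uniform prior does satisfy the theorem's polynomial-tail hypothesis):

```python
import random

def correct_ranking_freq(n, p_k, q_k, trials=300, faces=6, seed=0):
    """Fraction of simulated datasets in which the Bayesian posterior mean
    for p_k (blue die) strictly exceeds that for q_k (red die) after n
    tosses of each die, under a uniform Dirichlet(1,...,1) prior."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        blue = sum(rng.random() < p_k for _ in range(n))  # hits on face k
        red = sum(rng.random() < q_k for _ in range(n))
        # Posterior mean of a face probability under Dirichlet(1,...,1):
        # (observed hits + 1) / (n + faces)
        if (blue + 1) / (n + faces) > (red + 1) / (n + faces):
            wins += 1
    return wins / trials

# Rare faces with p_k = 0.004 >= q_k = 0.001:
print(correct_ranking_freq(200, 0.004, 0.001))    # n*p_k small: ranking often wrong
print(correct_ranking_freq(20000, 0.004, 0.001))  # n*p_k large: ranking almost always right
```

    With a symmetric prior and equal sample sizes, the posterior comparison reduces to comparing raw counts; the point of the exercise is only that a reliable ranking of rare-event probabilities needs $n$ on the order of $1/p_{\bar{k}}$, not a fixed sample size.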
  3. By: Xavier Vives (IESE)
    Abstract: This paper performs a welfare analysis of markets with private information in which agents condition on prices, in the rational expectations tradition. Price-contingent strategies introduce two externalities in the use of private information: a pecuniary externality and a learning externality. The pecuniary externality induces agents to put too much weight on private information and, in the standard case where the allocative role of the price prevails over its informational role, it overwhelms the learning externality, which pushes in the opposite direction. The price may be very informative, but at the cost of excessive dispersion in the actions of agents. The welfare loss at the market solution may be increasing in the precision of private information. The analysis provides insights into optimal business cycle policy and a rationale for a Tobin-like tax on financial transactions.
    Date: 2016
  4. By: Pieter Gautier (VU University Amsterdam, the Netherlands); Bo Hu (VU University Amsterdam, the Netherlands); Makoto Watanabe (VU University Amsterdam, the Netherlands)
    Abstract: This paper develops a model in which market structure is determined endogenously by the choice of intermediation mode. We consider two representative modes of intermediation that are widely used in real-life markets: one is a middleman mode, in which an intermediary holds inventories, stocked from sellers, for resale to buyers; the other is a market-making mode, in which an intermediary offers a platform for buyers and sellers to trade with each other. In our model, buyers and sellers can simultaneously search in an outside market and use the intermediation service. We show that a marketmaking middleman, who adopts a mixture of these two intermediation modes, can emerge in a directed search equilibrium.
    Keywords: Middlemen; Marketmakers; Platform; Directed Search
    JEL: D4 G2 L1 L8 R1
    Date: 2016–08–09
  5. By: Schmitt, Stefanie Yvonne
    Abstract: This paper proposes a model of attention allocation in decision-making. Attention has various definitions across the literature; here, I understand attention as the selection of information for costly processing. The paper investigates how an agent rationally allocates attention. The resulting attention allocation is context-dependent and influences choice quality. Besides inattention, two strategies of allocating attention prevail. These strategies share similarities with bottom-up and top-down attention, concepts reported in the psychological literature. Exploring firms' strategic considerations reveals an incentive for firms to produce high quality and highlight quality if consumers expect low quality, and to exploit consumers by producing low quality and shrouding quality if consumers expect high quality.
    Keywords: rational attention, information-processing, decision-making, shrouding
    JEL: D10 D03 D81 D83 L15
    Date: 2016
  6. By: Saak, Alexander E.
    Abstract: This paper studies the decision of a firm that sells an experience good to delegate quality control to an independent monitor. In an infinitely repeated game consumers’ trust provides incentives to (1) acquire information about whether the good is defective and (2) withhold the good from sale if it is defective. If third-party reports are observable to consumers, delegation of monitoring lessens the first and dispenses with the second moral hazard concern but also creates agency costs due to either limited liability or lack of commitment. In equilibrium the firm controls quality without an independent monitor only if trades are sufficiently frequent and consumer information about quality is sufficiently precise. This result holds under different assumptions about feasible contracts, collusion, verifiability of reports, joint inspections, and the number of firms that hire the third-party monitor. If third-party reports are not publicly observed, delegation can be optimal only if two or more firms hire the third-party monitor because then both moral hazard concerns are present under delegation.
    Keywords: quality controls, monitoring techniques, food safety, repeated game, trust, imperfect monitoring, moral hazard, value chains
    Date: 2016
  7. By: Guha, Brishti
    Abstract: Suppose paying attention during jury trials is costly, but that jurors do not pool information (as in contemporary Brazil, or ancient Athens). If inattentive jurors are as likely to be wrong as right, I find that small jury panels work better as long as identical jurors behave symmetrically. If not paying attention makes error more likely than not, jurors may co-ordinate on two different symmetric outcomes: a “high-attention” one or a “low-attention” one. If social norms stigmatize shirking, jurors co-ordinate on the high-attention equilibrium, and a smaller jury yields better outcomes. However, increasing jury size up to a finite bound works better if norms are tolerant of shirking, in which case co-ordination on the low-attention outcome results. If the cost of attention is high, a bare majority of jurors pays attention, and efficiency increases in jury size up to a bound. The model also applies to elections and referendums.
    Keywords: Jury size, pivotal voters, secret ballots, multiple equilibria, costly information.
    JEL: D72 D82 K40
    Date: 2016–08–12
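    The baseline the paper departs from is the classic Condorcet jury calculation, in which per-juror accuracy is fixed and larger juries always help. A short sketch makes that baseline concrete (the attention probabilities and the 0.9 accuracy of an attentive juror are illustrative assumptions, not the paper's calibration; the paper's contribution is that endogenous, costly attention can reverse the comparative static):

```python
from math import comb

def majority_correct(n, r):
    """Probability that a strict majority of n jurors (n odd) votes
    correctly, when each juror is independently correct with prob r."""
    return sum(comb(n, k) * r**k * (1 - r)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Each juror pays attention with probability a; attentive jurors are right
# with probability 0.9, inattentive jurors are right only by chance (0.5),
# matching the abstract's first case.
for a in (0.8, 0.3):
    r = a * 0.9 + (1 - a) * 0.5  # per-juror accuracy with exogenous attention
    print(a, [round(majority_correct(n, r), 3) for n in (3, 9, 25)])
```

    With attention held fixed, r > 1/2 and accuracy rises in jury size; the free-riding problem in the paper arises precisely because attention is a choice and each juror's incentive to pay it falls as the panel grows.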
  8. By: González-Díaz, Julio; Herold, Florian; Domínguez, Diego
    Abstract: In this paper, we study the potential implications of a novel yet natural voting system: strategic sequential voting. Each voter has one vote and can choose when to cast it. After each voting period, the current count of votes is publicized, enabling subsequent voters to use this information. Given the complexity of the general model, in this paper we study a simplified two-period setting. We find that, in elections involving three or more candidates, voters with a strong preference for one particular candidate have a strategic incentive to vote in an early period to signal that candidate's viability. Voters who are more interested in preventing a particular candidate from winning have an incentive to vote in a later period, when they will be better able to tell which other candidate will most likely beat the one they dislike. Strategic sequential voting may therefore result in voters coordinating their choices, mitigating the problem of a Condorcet loser winning an election due to mis-coordination. Furthermore, a (relatively) strong intensity of preference for the preferred candidate can be partially expressed by voting early, possibly swaying the choice of remaining voters.
    Keywords: sequential voting, elections, endogenous timing, strategic timing
    JEL: D72 D71 C72
    Date: 2016
  9. By: Chen, Yongmin; Pan, Shiyuan; Zhang, Tianle
    Abstract: We present a model of cumulative innovation where firms can conduct R&D in both a safe and a risky direction. Innovations in the risky direction produce quality improvements with higher expected sizes and variances. As patentability standards rise, an innovation in the risky direction is less likely to receive a patent that replaces the current technology. This decreases the static incentive for new entrants to conduct risky R&D, but increases their dynamic incentive because of the longer duration, and hence higher reward, of incumbency. These forces, together with a strategic-substitution effect and a market-structure effect, result in an inverted-U-shaped relationship between R&D intensity and patentability standards in the risky direction, and a U-shaped relationship in the safe direction. There exists a patentability standard that induces the efficient innovation direction, whereas R&D is biased towards (against) the risky direction under lower (higher) standards. The optimal patentability standard may distort the R&D direction in order to raise an industry innovation rate that would otherwise be socially deficient.
    Keywords: cumulative innovation, patentability standards, R&D intensity, R&D direction, rate of innovation, innovation direction
    JEL: L1 O3
    Date: 2016–08
  10. By: Romeo Matthew Balanquit (School of Economics, University of the Philippines Diliman)
    Abstract: This study presents how selection of equilibrium in a game with many equilibria can be made possible when the common knowledge assumption (CKA) is replaced by the notion of common belief. Essentially, this idea of pinning down an equilibrium by weakening the CKA is the central feature of the global game approach, which introduces a natural perturbation on games with complete information. We argue that since common belief is another form of departure from the CKA, it can also obtain the equilibrium-selection results attained by the global game framework; we provide necessary and sufficient conditions. Following the program of weakening the CKA, we weaken the notion of common belief further to provide a less stringent and more natural way of believing an event. We call this belief process iterated quasi-common p-belief, a generalization to many players of two-person iterated p-belief. It is shown that this converges to the standard notion of common p-belief when the number of players is sufficiently large. Moreover, the agreeing-to-disagree result for beliefs (Monderer & Samet, 1989 and Neeman, 1996a) can also be given a generalized form, parameterized by the number of players.
    Keywords: common p-belief; common knowledge assumption; global games
    JEL: D83 C70
    Date: 2016–08
  11. By: Romeo Matthew Balanquit (School of Economics, University of the Philippines Diliman)
    Abstract: This study sets a bank-run equilibrium analysis in a dynamic and incomplete-information environment where agents can reconsider attempts to run on the bank over time. The typical static bank-run model is extended in this paper to capture the learning dynamics of agents through time, giving bank-run analysis a more realistic feature. Apart from employing a self-fulfilling framework in this model, where agents' actions are strategic complements, we allow agents to update over time their beliefs about the strength of the fundamentals, which is not commonly known. In particular, we extend the bank-run model analyzed by Goldstein and Pauzner (Journal of Finance 2005) and build it on the dynamic global games framework studied by Angeletos (Econometrica 2007). We present how a simple recursive setup can generate a unique monotone perfect Bayesian Nash equilibrium and show how the probability of a bank run is affected through time by the inflow of information and knowledge of the previous state outcome. Finally, it is also shown that when an unobservable shock is introduced, multiplicity of equilibria can result in this dynamic learning process.
    Keywords: threshold bank-run, monotone perfect Bayesian Nash equilibrium, dynamic global games
    JEL: C73 D82 G10
    Date: 2016–08

This nep-mic issue is ©2016 by Jing-Yuan Chiou. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP website. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.