nep-mic New Economics Papers
on Microeconomics
Issue of 2017‒02‒12
24 papers chosen by
Jing-Yuan Chiou
National Taipei University

  1. Discount Pricing By Armstrong, Mark; Chen, Yongmin
  2. On the Equivalence of Bilateral and Collective Mechanism Design By Yu Chen
  3. Cooperation, Competition and Entry in a Tullock Contest By GRANDJEAN, G.; TELLONE, D.; VERGOTE, W.
  4. Centralization versus Separation of Regulatory Institutions By Foarta, Dana; Sugaya, Takuo
  5. Cautious and Globally Ambiguity Averse By Özgür Evren
  6. Formation of Coalition Structures as a Non-Cooperative Game By Dmitry Levando
  7. A Polynomial Optimization Approach to Principal-Agent Problems By Philipp Renner; Karl Schmedders
  8. One-Switch Discount Functions By Nina Anchugina
  9. Optimal Risk Sharing with Limited Liability By Semyon Malamud; Huaxia Rui; Andrew B. Whinston
  10. Upstream Monopoly and Downstream Information Sharing By Pio Baake; Andreas Harasser
  11. Centralizing Disconnected Markets? An Irrelevance Result By Wittwer, Milena
  12. The Sorry Clause By Srivastava, Vatsalya
  13. Cournot oligopoly with randomly arriving producers By Pierre Bernhard; Marc Deschamps
  14. A criterion to compare mechanisms when solutions are not unique, with applications to constrained school choice By DECERF, Benoit; VAN DER LINDEN, Martin
  15. Vertical Mergers in Platform Markets By Jérôme Pouyet; Thomas Trégouët
  16. Type-Compatible Equilibria in Signalling Games By Drew Fudenberg; Kevin He
  17. Reexamining the Schmalensee effect By Kim, Jeong-Yoo
  18. Fairness and well-being measurement By FLEURBAEY, Marc; MANIQUET, François
  19. How does the probability of wrongful conviction affect the standard of proof? By Marie Obidzinski; Yves Oytana
  20. Choice Overload and Asymmetric Regret By Gökhan Buturak; Özgür Evren
  21. Mindreading and Endogenous Beliefs in Games By Lauren Larrouy; Guilhem Lecouteux
  22. Screening Multiple Uninformed Experts By Francisco Barreras
  23. The Swing Voter's Curse in Social Networks By Berno Buechel; Lydia Mechtenberg
  24. Distrust in Experts and the Origins of Disagreement By Alice Hsiaw; Ing-Haw Cheng

  1. By: Armstrong, Mark; Chen, Yongmin
    Abstract: We investigate the marketing practice of framing a price as a discount from an earlier price. We discuss two reasons why a discounted price---rather than a merely low price---can make a rational consumer more willing to purchase. First, a high initial price can indicate that the seller has chosen to supply a high-quality product. Second, when a seller with limited stock runs a clearance sale, later consumers infer that an unsold product may be of poor quality; but if the initial price was higher, they do not downgrade their evaluation of quality as much. In either case, if able to do so, a seller has an incentive to engage in fictitious pricing, where the reported initial price is exaggerated.
    Keywords: Reference dependence, sales tactics, false advertising, fictitious pricing, consumer protection
    JEL: D42 D83 L15 M31 M37
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:76681&r=mic
  2. By: Yu Chen (University of Graz)
    Abstract: We explore the theoretical justification for adopting bilateral mechanism design, which is a simplification of canonical collective mechanism design, in general multi-agency contracting games under Bayesian Nash equilibrium. We establish interim payoff equivalence between collective and bilateral mechanism design in the quasi-separable environment, in which interdependent valuations and correlated types are allowed. We employ interim payoff equivalence to further show the equivalence between optimal bilateral and collective mechanism design when the principal's payoff exhibits certain relations with the separate agents' payoffs. Our analysis can also incorporate individual rationality and budget balance constraints, as well as asymptotic equivalence.
    Keywords: Bayesian Nash equilibrium, bilateral mechanism, collective mechanism, interim payoff equivalence, quasi-separable environment
    JEL: C72 D82 D86
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:grz:wpaper:2017-01&r=mic
  3. By: GRANDJEAN, G.; TELLONE, D.; VERGOTE, W. (Université catholique de Louvain, CORE, Belgium)
    Abstract: We propose a model of network formation in a Tullock contest. Agents first form their partnerships and then choose their investment in the contest. While a link improves the strength of an agent, it also improves the position of her rival. It is thus not obvious whether agents will decide to cooperate. We characterize all pairwise equilibrium networks and find that the network formation process can act as a barrier to entry to the contest. We then analyze the impact of network formation on total surplus and find that a social planner can increase total surplus by creating more asymmetry between agents, as long as this does not reduce the number of participating agents. We show that barriers to entry may either hurt total surplus, as the winner of the prize does not exploit all the possible network benefits, or improve total surplus, since less rent is dissipated when competition becomes less fierce. Finally, when networking acts as an endogenous barrier to entry, no pairwise equilibrium network is efficient.
    Keywords: Network Formation, Tullock Contest, Participation Constraints, Efficiency
    JEL: D72 D85
    Date: 2016–07–31
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2016032&r=mic
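    As a point of reference for the contest technology named above, here is a minimal sketch of a standard Tullock contest success function with agent-specific strengths; treating the strength parameters as the channel through which links matter is an editorial assumption, not the paper's exact specification.
```latex
% Standard Tullock contest: agent i wins the prize V with probability
% proportional to her strength-weighted investment x_i.
\[
  p_i(x_1,\dots,x_n) \;=\; \frac{s_i\,x_i}{\sum_{j=1}^{n} s_j\,x_j},
  \qquad
  \pi_i \;=\; p_i(x_1,\dots,x_n)\,V - x_i,
\]
% where s_i is a strength parameter that, in the paper's setting, would be
% shaped by agent i's links (an illustrative assumption here).
```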
  4. By: Foarta, Dana (Stanford University); Sugaya, Takuo (Stanford University)
    Abstract: Why might a country choose a decentralized rather than a centralized regulatory structure? And what might reverse this choice? We consider a core institutional relationship in regulation. A regulator exerts effort towards a final outcome, but an oversight authority can intervene, which prevents the final outcome from being reached. We examine the choice between institutional centralization and separation, where centralization affords the oversight authority more information on the probable outcome. This creates a static trade-off in which more information lowers regulatory effort. Dynamically, institutional separation improves the screening of regulators. This leads to switching between centralization and separation in equilibrium.
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:3489&r=mic
  5. By: Özgür Evren (New Economic School)
    Abstract: I study ambiguity attitudes in Uzi Segal's recursive non-expected utility model. I show that according to this model, the negative certainty independence axiom over simple lotteries is equivalent to a robust, or global form of ambiguity aversion that requires ambiguity averse behavior irrespective of the number of states and the decision maker's second-order belief. Thus, the recursive cautious expected utility model is the only subclass of Segal's model that robustly predicts ambiguity aversion. Similarly, the independence axiom over lotteries is equivalent to a robust form of ambiguity neutrality. In fact, any non-expected utility preference over lotteries coupled with a suitable second-order belief over three states produces either the Ellsberg paradox or the opposite mode of behavior. Finally, I propose a definition of a mean-preserving spread for second-order beliefs that is equivalent to increasing ambiguity aversion for every recursive preference that satisfies the negative certainty independence axiom.
    Keywords: Ambiguity Aversion, Ellsberg Paradox, Allais Paradox, Negative Certainty Independence, Reduction of Compound Lotteries, Increasing Second-Order Uncertainty
    JEL: D81
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:cfr:cefirw:w0236&r=mic
  6. By: Dmitry Levando (National Research University Higher School of Economics)
    Abstract: The paper defines a family of nested non-cooperative simultaneous finite games to study coalition structure formation with intra- and inter-coalition externalities. The novelty of the paper is that the definition of every game embeds a coalition structure formation mechanism. Every game has two outcomes: an allocation of players over coalitions and a payoff profile for every player. The family is parametrized by the maximum coalition size allowed in a coalition structure (a partition) of a game. For every partition a player has a partition-specific set of strategies. The mechanism partitions the strategy set of the game (a Cartesian product) into partition-specific strategy domains, which makes every partition itself a non-cooperative game with partition-specific payoffs for every player. Payoffs are assigned separately for every partition and are independent of the mechanism. Every game in the family has an equilibrium in mixed strategies. The equilibrium can generate more than one coalition and encompasses intra- and inter-group externalities, which distinguishes it from the Shapley value. The presence of an individual payoff allocation distinguishes it from strong Nash equilibrium, coalition-proof equilibrium, and some other equilibrium concepts. The paper demonstrates some applications of the proposed toolkit: non-cooperative foundations of cooperation within a coalition, Bayesian games, stochastic games, and the construction of a non-cooperative criterion of coalition structure stability.
    Keywords: noncooperative Games
    JEL: C72
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:hig:wpaper:157/ec/2017&r=mic
  7. By: Philipp Renner (Stanford University); Karl Schmedders (University of Zurich)
    Abstract: This paper presents a new method for the analysis of moral hazard principal-agent problems. The new approach avoids the stringent assumptions on the distribution of outcomes made by the classical first-order approach and instead only requires the agent's expected utility to be a rational function of the action. This assumption allows for a reformulation of the agent's utility maximization problem as an equivalent system of equations and inequalities. This reformulation in turn transforms the principal's utility maximization problem into a nonlinear program. Under the additional assumptions that the principal's expected utility is a polynomial and the agent's expected utility is rational in the wage, the final nonlinear program can be solved to global optimality. The paper also shows how to first approximate expected utility functions that are not rational by polynomials, so that the polynomial optimization approach can be applied to compute an approximate solution to non-polynomial problems. Finally, the paper demonstrates that the polynomial optimization approach extends to principal-agent models with multi-dimensional action sets.
    Keywords: Principal-agent model, moral hazard, first order approach, polynomials
    JEL: C63 D80 D82
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp1235&r=mic
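    To make the structure described above concrete, the following Python sketch sets up a toy moral-hazard problem in which the agent's expected utility is a rational function of the action (the paper's key assumption) and solves it by brute-force nested optimization. This is purely illustrative: it is not the authors' polynomial optimization reformulation, and the functional forms and parameter values are editorial assumptions.
```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy moral-hazard problem (illustration only, not the paper's method).
# Outcome y_H occurs with probability p(a) = a / (1 + a), y_L otherwise,
# so the agent's expected utility is a rational function of the action a.
y_L, y_H = 0.0, 10.0
reservation_utility = 1.0   # agent's outside option (assumed value)

def agent_best_response(w_L, w_H):
    """Agent maximizes p(a)*sqrt(w_H) + (1 - p(a))*sqrt(w_L) - a over a >= 0."""
    def neg_eu(a):
        p = a / (1.0 + a)
        return -(p * np.sqrt(w_H) + (1.0 - p) * np.sqrt(w_L) - a)
    res = minimize_scalar(neg_eu, bounds=(0.0, 50.0), method="bounded")
    return res.x, -res.fun

def principal_profit(w_L, w_H):
    a, eu = agent_best_response(w_L, w_H)
    if eu < reservation_utility:          # participation constraint
        return -np.inf
    p = a / (1.0 + a)
    return p * (y_H - w_H) + (1.0 - p) * (y_L - w_L)

# Brute-force outer search over wage schedules on a coarse grid.
grid = np.linspace(0.0, 5.0, 51)
best = max((principal_profit(wl, wh), wl, wh) for wl in grid for wh in grid)
print("profit %.3f at w_L = %.2f, w_H = %.2f" % best)
```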
  8. By: Nina Anchugina
    Abstract: Bell (1988) introduced the one-switch property for preferences over sequences of dated outcomes. This property concerns the effect of adding a common delay to two such sequences: it says that the preference ranking of the delayed sequences is either independent of the delay, or else there is a unique delay such that one strict ranking prevails for shorter delays and the opposite strict ranking for longer delays. For preferences that have a discounted utility (DU) representation, Bell (1988) argues that the only discount functions consistent with the one-switch property are sums of exponentials. This paper proves that discount functions of the linear times exponential form also satisfy the one-switch property. We further demonstrate that preferences which have a DU representation with a linear times exponential discount function exhibit increasing impatience (Takeuchi (2011)). We also clarify an ambiguity in the original Bell (1988) definition of the one-switch property by distinguishing a weak one-switch property from the (strong) one-switch property. We show that the one-switch property and the weak one-switch property definitions are equivalent in a continuous-time version of the Anscombe and Aumann (1963) setting.
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1702.02254&r=mic
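    For readers unfamiliar with the functional forms named in the abstract, the two families of discount functions under discussion look as follows; the sign restrictions needed for D to be positive and decreasing are omitted here as an editorial simplification.
```latex
% Bell's (1988) sum-of-exponentials family (two terms shown) and the
% linear-times-exponential family this paper shows also to be one-switch:
\[
  D(t) \;=\; a_1 e^{-r_1 t} + a_2 e^{-r_2 t}
  \qquad\text{and}\qquad
  D(t) \;=\; (a + b\,t)\,e^{-r t}.
\]
% One-switch property: for any two dated outcome streams, the ranking of
% their commonly delayed versions either never reverses or reverses exactly
% once as the common delay increases.
```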
  9. By: Semyon Malamud (Ecole Polytechnique Federale de Lausanne, Swiss Finance Institute, and Centre for Economic Policy Research (CEPR)); Huaxia Rui (University of Rochester); Andrew B. Whinston (University of Texas at Austin)
    Abstract: We solve the general problem of optimal risk sharing among a finite number of agents with limited liability. We show that the optimal allocation is characterized by endogenously determined ranks assigned to the participating agents and a hierarchical structure of risk sharing, where all agents take on risks only above the agent-specific thresholds determined by their ranks. When all agents have CARA utilities, linear risk sharing is optimal between two adjacent thresholds. We use our general characterization of optimal risk sharing with limited liability to solve the problem of optimal insurance design with multiple insurers. We show that the optimal thresholds, or deductibles, can be efficiently calculated through the fixed point of a contraction mapping. We then use this contraction mapping technique to derive a number of comparative statics results for optimal insurance design and its dependence on microeconomic characteristics.
    Keywords: optimal risk sharing, limited liability, optimal insurance design
    JEL: A10 D86 G22
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp1205&r=mic
  10. By: Pio Baake; Andreas Harasser
    Abstract: We analyze a vertical structure with an upstream monopoly and two downstream retailers. Demand is uncertain but each retailer receives an informative private signal about the state of the demand. We construct an incentive compatible and ex ante balanced mechanism which induces the retailers to share their information truthfully. Information sharing can be profitable for the retailers but is likely to be detrimental for social welfare.
    Keywords: information sharing, upstream monopoly, vertical relations
    JEL: D82 L13 L14
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1635&r=mic
  11. By: Wittwer, Milena
    Abstract: This article compares centralized with disconnected markets in which n>2 strategic agents trade two perfectly divisible goods. In a multi-goods uniform-price double auction (centralized market) traders can make their demand for one good contingent on the price of the other good. Interlinking demands across goods is - by design - not possible when each good is traded in separate, single-good uniform-price double auctions (disconnected market). Here, agents are constrained in the way they can submit their joint preferences. I show for a class of models that equilibrium allocations and efficiency of centralized and disconnected markets nevertheless coincide when the total supply of the goods is known or perfectly correlated. This suggests that disconnected markets perform as well as centralized markets when the underlying uncertainty that governs the goods' market prices is perfectly correlated.
    Keywords: Disconnected markets, divisible goods, multi-unit double auctions, trading
    JEL: D44 D47 D82 G14
    Date: 2017–02–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:76534&r=mic
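    As a stylized reminder of how a uniform-price auction clears when traders submit demand schedules, here is a minimal single-good example in Python; the linear schedules and parameter values are editorial assumptions, and the example abstracts from the strategic and multi-good features analyzed in the paper.
```python
import numpy as np

# Stylized single-good uniform-price clearing (illustration only).
# Each trader submits a linear net-demand schedule q_i(p) = a_i - b_i * p
# (negative quantities are sales); the market clears where aggregate net
# demand equals the exogenous supply Q.
a = np.array([10.0, 8.0, 6.0])   # hypothetical intercepts
b = np.array([2.0, 1.5, 1.0])    # hypothetical slopes
Q = 9.0                          # total supply

# sum_i (a_i - b_i * p) = Q  =>  p* = (sum a_i - Q) / sum b_i
p_star = (a.sum() - Q) / b.sum()
q = a - b * p_star               # allocations at the uniform clearing price
print("clearing price:", p_star, "allocations:", q, "total:", q.sum())
```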
  12. By: Srivastava, Vatsalya (Tilburg University, TILEC)
    Abstract: When players face uncertainty in choosing actions, undesirable outcomes cannot be avoided. Accidental defections caused by uncertainty that does not depend on the level of care require a mechanism to reconcile the players. This paper shows the existence of a perfect sorry equilibrium in a game of imperfect public monitoring. In the sorry equilibrium, a costly apology is self-imposed in case of accidental defection, making private information public and allowing cooperation to resume. The cost of the apology required to sustain this equilibrium is calculated, the efficiency characteristics of the equilibrium are evaluated, and outcomes are compared to those of other bilateral social governance mechanisms and of formal legal systems. It is argued that with the possibility of accidental defections, other social mechanisms have limitations, while formal legal systems can generate perverse incentives. Apologies can therefore serve as a useful institution of economic governance.
    Keywords: apology; sorry; imperfect public monitoring; uncertainty; social norms; economic governance; legal institutions; incentives; courts
    JEL: D8 K4 Z10
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:tiu:tiutil:9340f3b1-ebf3-46b9-8ffd-352d7fa81724&r=mic
  13. By: Pierre Bernhard (BIOCORE - Biological control of artificial ecosystems - LOV - Laboratoire d'océanographie de Villefranche - UPMC - Université Pierre et Marie Curie - Paris 6 - INSU - CNRS - Centre National de la Recherche Scientifique - CRISAM - Inria Sophia Antipolis - Méditerranée - Inria - Institut National de Recherche en Informatique et en Automatique - INRA - Institut National de la Recherche Agronomique); Marc Deschamps (CRESE - Centre de REcherches sur les Stratégies Economiques - UFC - UFC - Université de Franche-Comté, UBFC - Université Bourgogne Franche-Comté)
    Abstract: The Cournot model of oligopoly is a central model of strategic interaction between competing firms, from both a theoretical and an applied perspective (e.g., antitrust). As such, it is an essential tool in the economics toolbox and a continuing stimulus for research. Although the literature on it is huge and deep, there is, as far as we know, a "mouse hole" which has not yet been studied: Cournot oligopoly with randomly arriving producers. In a companion paper [Bernhard and Deschamps, 2016b] we proposed a rather general model of a discrete dynamic decision process in which producers arrive according to a Bernoulli random process, and we gave some examples relating to oligopoly theory (Cournot, Stackelberg, cartel). In this paper we study Cournot oligopoly with random entry in discrete (Bernoulli) and continuous (Poisson) time, whether the time horizon is finite or infinite. Moreover, we consider both constant and variable probabilities of entry or densities of arrival. In this framework, we are able to provide algorithms answering four classical questions: (1) What is the expected profit for a firm inside the Cournot oligopoly at the beginning of the game? (2) How do individual quantities evolve? (3) How do market quantities evolve? (4) How does the market price evolve?
    Keywords: Cournot market structure, Bernoulli process of entry, Poisson density of arrivals, Dynamic Programming
    Date: 2016–11–01
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01413910&r=mic
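    One quick way to build intuition for question (1) is a Monte Carlo simulation in which, each period, the currently active firms play the static symmetric Cournot equilibrium and at most one new producer arrives with a fixed Bernoulli probability. The Python sketch below is purely illustrative: it uses textbook linear-demand Cournot formulas, ignores discounting, and is not the dynamic-programming algorithm developed in the paper; all parameter values are editorial assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

# Linear inverse demand P = A - B * Q and constant marginal cost c.
A, B, c = 100.0, 1.0, 10.0
p_entry = 0.3          # per-period Bernoulli probability of one new entrant
T = 20                 # finite horizon (periods)
n_sims = 10_000

def static_cournot(n):
    """Symmetric Cournot equilibrium with n firms (textbook formulas)."""
    q = (A - c) / (B * (n + 1))          # individual quantity
    price = A - B * n * q                # market price
    profit = (price - c) * q             # per-firm profit
    return q, price, profit

cumulative_profit = np.zeros(n_sims)
for s in range(n_sims):
    n = 1                                # start with a single incumbent
    total = 0.0
    for t in range(T):
        _, _, pi = static_cournot(n)
        total += pi                      # no discounting, for simplicity
        n += rng.binomial(1, p_entry)    # Bernoulli arrival of an entrant
    cumulative_profit[s] = total

print("expected cumulative profit of the incumbent: %.1f" % cumulative_profit.mean())
```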
  14. By: DECERF, Benoit (Université de Namur); VAN DER LINDEN, Martin (Vanderbilt University)
    Abstract: We introduce a new criterion to compare the properties of mechanisms when the solution concept used induces multiple solutions. Our criterion generalizes previous approaches in the literature. We use our criterion to compare the stability of constrained versions of the Boston (BOS) and deferred acceptance (DA) school choice mechanisms in which students can only rank a subset of the schools they could potentially access. When students play a Nash equilibrium, we show that there is a stability cost to increasing the number of schools students can rank in DA. On the other hand, when students only play undominated strategies, increasing the number of schools students can rank increases stability. We find similar results for BOS. We also compare BOS and DA. Whatever the number of schools students can rank, we find that BOS is more stable than DA in Nash equilibrium, but less stable in undominated strategies.
    Keywords: Multiple solutions, School choice, Stability, Boston mechanism, Deferred acceptance mechanism, Nash equilibrium, Undominated strategy
    JEL: C78 D47 D82 I
    Date: 2016–10–04
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2016033&r=mic
  15. By: Jérôme Pouyet (PSE - Paris-Jourdan Sciences Economiques - CNRS - Centre National de la Recherche Scientifique - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENS Paris - École normale supérieure - Paris - École des Ponts ParisTech (ENPC), PSE - Paris School of Economics); Thomas Trégouët (THEMA - Théorie économique, modélisation et applications - Université de Cergy Pontoise - CNRS - Centre National de la Recherche Scientifique)
    Abstract: We analyze the competitive impact of vertical integration between a platform and a manufacturer when platforms provide operating systems for devices sold by manufacturers to customers, and customers care about the applications developed for the operating systems. Two-sided network effects between customers and developers create strategic substitutability between manufacturers' prices. When it brings efficiency gains, vertical integration increases consumer surplus, is not profitable when network effects are strong, and benefits the non-integrated manufacturer. When developers bear a cost to make their applications available on a platform, manufacturers boost the participation of developers by affiliating with the same platform. This creates some market power for the integrated firm, and vertical integration then harms consumers, is always profitable, and leads to foreclosure. Introducing developer fees highlights that not only the level but also the structure of indirect network effects matters for the competitive analysis.
    Keywords: Vertical integration, two-sided markets, network effects
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:hal:psewpa:halshs-01410077&r=mic
  16. By: Drew Fudenberg; Kevin He
    Abstract: The key issue in selecting between equilibria in signalling games is determining how receivers will interpret deviations from the path of play. We develop a foundation for these off-path beliefs, and an associated equilibrium refinement, in a model where equilibrium arises from non-equilibrium learning by long-lived senders and receivers. In our model, non-equilibrium signals are sent by young senders as experiments to learn about receivers' behavior, and different types of senders have different incentives for these various experiments. Using the Gittins index (Gittins, 1979), we characterize which sender types use each signal more often, leading to a constraint we call the "compatibility criterion" on the receiver's off-path beliefs and to the concept of a "type-compatible equilibrium." We compare type-compatible equilibria to signalling-game refinements such as the Intuitive Criterion (Cho and Kreps, 1987) and divine equilibrium (Banks and Sobel, 1987).
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1702.01819&r=mic
  17. By: Kim, Jeong-Yoo
    Abstract: The author reexamines the Schmalensee effect from a dynamic perspective. Schmalensee's argument that high quality can be signaled by high prices is based on the assumption that higher quality necessarily incurs a higher production cost. In this paper, the author argues that firms producing high-quality products have a stronger incentive to lower the marginal cost of production, because they can then sell larger quantities than low-quality firms can. If this dynamic effect is large enough, the Schmalensee effect degenerates and, thus, low prices signal high quality. This result differs from the Nelson effect, which relies on the assumption that only the high-quality product can generate repeat purchases, because the result remains valid even if low-quality products can also be purchased repeatedly. The author characterizes a separating equilibrium in which a high-quality monopolist invests more to reduce cost and, as a result, charges a lower price. Separation is possible due to a difference in the quantities sold in the second period across qualities.
    Keywords: experience good, quality, signal, Schmalensee effect
    JEL: D82 L15
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:zbw:ifwedp:20173&r=mic
  18. By: FLEURBAEY, Marc (Princeton University); MANIQUET, François (Université catholique de Louvain, CORE, Belgium)
    Abstract: We assume that economic justice requires resources to be allocated fairly, and we construct individual well-being measures that embody fairness principles in interpersonal comparisons. These measures are required to respect agents’ preferences. Across preferences, well-being comparisons are required to depend on comparisons of the bundles of resources consumed by agents. We axiomatically justify two main families of well-being measures, reminiscent of the ray utility and money-metric utility functions.
    Keywords: fairness, well-being measure, preferences
    JEL: D63 I32
    Date: 2016–11–22
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2016040&r=mic
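    For reference, the two benchmark constructions the abstract alludes to are standard; below are textbook-style definitions of the money-metric utility and of a ray (reference-bundle) utility. These are editorial reminders, not necessarily the exact measures axiomatized in the paper.
```latex
% Money-metric utility: the minimum expenditure, at reference prices p,
% needed to reach the indifference level of agent i's bundle z_i:
\[
  m_i(p, z_i) \;=\; \min_{q}\ \bigl\{\, p \cdot q \;:\; u_i(q) \ge u_i(z_i) \,\bigr\}.
\]
% Ray utility: the scaling \lambda of a fixed reference bundle \omega that
% agent i finds indifferent to her own bundle z_i:
\[
  r_i(z_i) \;=\; \lambda \quad\text{such that}\quad u_i(\lambda\,\omega) = u_i(z_i).
\]
```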
  19. By: Marie Obidzinski (Université Panthéon-Assas, CRED); Yves Oytana (Université Bourgogne Franche-Comté, CRESE)
    Abstract: The paper inquires into the impact of mistakes of identity (ID errors) on the optimal standard of proof. A mistake of identity is defined as an error such that an individual is punished for someone else’s crime; and for the same crime, the criminal is falsely acquitted. Therefore, the decision to engage in a criminal activity generates a negative externality, as the expected number of ID errors increases. Thus, our objective is to understand how public law enforcement can deal with this type of error by means of the standard of proof. Our main results are twofold. First, we show that when ID errors occur, the under-deterrence issue is exacerbated. Second, we find that the optimal standard of proof may be higher or lower than without ID errors, depending on the crime rate at equilibrium and on the impact of the standard of proof on (i) the probability of an acquittal error for each crime committed, (ii) the probability of convicting an innocent person when an acquittal error arises, and (iii) the level of deterrence.
    Keywords: Mistakes of identity, standard of proof, deterrence
    JEL: K4
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:crb:wpaper:2017-02&r=mic
  20. By: Gökhan Buturak (Freelance Researcher); Özgür Evren (New Economic School)
    Abstract: We propose a model of "choice overload" which refers to a stronger tendency to select the default option in larger choice problems. Our main finding is a behavioral characterization of an asymmetric regret representation that depicts a decision maker who does not consider the possibility of experiencing regret for choosing the default option. By contrast, the value of ordinary alternatives is subject to regret. The calculus of regret for ordinary alternatives is identical to that in Sarver's (2008) anticipated regret model, despite the fact that the primitives of the two theories are different. Our model can also be applied to choice problems with the option to defer the decision.
    Keywords: Choice overload, anticipated regret, subjective states, choice deferral
    JEL: D11 D81
    Date: 2016–12
    URL: http://d.repec.org/n?u=RePEc:cfr:cefirw:w0235&r=mic
  21. By: Lauren Larrouy (Université Côte d'Azur; GREDEG CNRS); Guilhem Lecouteux (Université Côte d'Azur; GREDEG CNRS)
    Abstract: We argue that a Bayesian explanation of strategic choices in games requires introducing a psychological theory of belief formation. We highlight that beliefs in epistemic game theory are derived from the actual choice of the players, and cannot therefore explain why Bayesian rational players should play the strategy they actually chose. We introduce the players’ capacity of mindreading in a game theoretical framework with the simulation theory, and characterise the beliefs that Bayes rational players could endogenously form in games. We show in particular that those beliefs need not be ratifiable, and therefore that rational players can form action-dependent beliefs.
    Keywords: prior beliefs, mindreading, simulation, action-dependent beliefs, choice under uncertainty
    JEL: B41 C72 D81
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:gre:wpaper:2017-01&r=mic
  22. By: Francisco Barreras
    Abstract: Testing the validity of claims made by self-proclaimed experts can be impossible when they are tested in isolation, even with infinite observations at the disposal of the tester. However, in a multiple-expert setting it is possible to design a contract that only informed experts accept and uninformed experts reject. The tester can pit competing theories against each other and take advantage of the uncertainty experts have about the other experts' types. This contract works even when there is only a single data point to evaluate.
    Keywords: Self-proclaimed, isolation, Uninformed Experts, uncertainty
    Date: 2017–01–24
    URL: http://d.repec.org/n?u=RePEc:col:000508:015282&r=mic
  23. By: Berno Buechel (University of St. Gallen and Liechtenstein-Institute); Lydia Mechtenberg (University of Hamburg)
    Abstract: We study private communication in social networks prior to a majority vote on two alternative policies. Some (or all) agents receive a private imperfect signal about which policy is correct. They can, but need not, recommend a policy to their neighbors in the social network prior to the vote. We show theoretically and empirically that communication can undermine the efficiency of the vote and hence reduce welfare in a common-interest setting. Both the efficiency and the existence of fully informative equilibria, in which vote recommendations are always truthfully given and followed, hinge on the structure of the communication network. If some voters have distinctly larger audiences than others, their neighbors should not follow their vote recommendations; however, they may do so in equilibrium. We test the model in a lab experiment and find strong support for the comparative statics and, more generally, for the importance of the network structure for voting behavior.
    Keywords: Strategic Voting, Social Networks, Swing Voter's Curse, Information Aggregation
    JEL: D72 D83 D85 C91
    Date: 2017–01
    URL: http://d.repec.org/n?u=RePEc:fem:femwpa:2017.05&r=mic
  24. By: Alice Hsiaw (Brandeis University); Ing-Haw Cheng (Brandeis University)
    Abstract: Disagreements about substance and expert credibility often go hand-in-hand and are hard to resolve on several issues including economics, climate science, and medicine. We argue that disagreement arises because individuals overinterpret how much they can learn when both substance and credibility are uncertain. Our learning bias predicts that: 1) Disagreement about credibility drives disagreement about substance, 2) First impressions of credibility drive long-lasting disagreement, 3) Under-trust is difficult to unravel, 4) Encountering experts in different order generates disagreement, and 5) Confirmation bias and/or its opposite arise endogenously. These effects provide a theory for the origins of disagreement.
    Keywords: disagreement, polarization, learning, expectations, experts
    Date: 2016–10
    URL: http://d.repec.org/n?u=RePEc:brd:wpaper:110r2&r=mic

This nep-mic issue is ©2017 by Jing-Yuan Chiou. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.