
on Microeconomics 
By:  Lionel DE BOISDEFFRE 
Abstract:  We consider a pure exchange economy in which agents, possibly asymmetrically informed, exchange on spot markets and on incomplete financial markets, with no model of how future prices are determined, and keep private their own characteristics, anticipations and beliefs. We show that they face an incompressible uncertainty, represented by a so-called "minimum uncertainty set", which typically adds to the exogenous uncertainty on tomorrow's state of nature an endogenous uncertainty on tomorrow's possible spot prices, depending on all agents' private beliefs today. At equilibrium, all consumers expect the 'true' price in each realizable state as a possible outcome, and elect optimal strategies, ex ante, which clear markets, ex post. Our main theorem states that a sequential equilibrium exists in this model, under standard conditions, as long as agents' prior anticipations, which may be refined from observing markets, embed the minimum uncertainty set. This claim is stronger than the classical generic existence results that followed Hart (1975) and Radner (1979), which were based on rational expectations of equilibrium prices. 
Keywords:  Sequential equilibrium, Temporary equilibrium, Perfect foresight, Existence, Rational expectations, Financial markets, Asymmetric information, Arbitrage 
JEL:  D52 
Date:  2017–04 
URL:  http://d.repec.org/n?u=RePEc:tac:wpaper:20162017_9&r=mic 
By:  Tajika, Tomoya 
Abstract:  Consistent behavior is widely observed: a person clings to his/her initial opinion and ignores later information that may be more accurate. We explain such behavior by proposing a model in which a reputation-concerned expert has two opportunities to recommend a choice to someone. Before making each recommendation, the expert receives a signal whose accuracy depends on his ability; the second signal is always more accurate. Since a high-ability expert is less likely to receive conflicting signals, the expert has an incentive to pretend to have high ability by recommending the same choice at every opportunity. This results in the persistence of the initial opinion even when following the second signal would be the efficient choice. Further, we consider the case in which the expert has the option to remain silent at the first opportunity, which would allow him to send only the more accurate signal and conceal the receipt of conflicting signals. Nevertheless, we find that the expert has an incentive to break silence at the first opportunity and still persists with the initial opinion, which is the driving force behind the expert's snap decision. 
Keywords:  Reputation, herding, persistence of the initial opinion, snap decision 
JEL:  D82 D83 D90 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:hit:hituec:661&r=mic 
By:  Rene J.R. van den Brink (Vrije Universiteit Amsterdam; Tinbergen Institute, The Netherlands); Agnieszka Rusinowska (Paris School of Economics - CNRS, University Paris 1) 
Abstract:  In this paper, we connect the social network theory on centrality measures to the economic theory of preferences and utility. Using the fact that networks form a special class of cooperative TU-games, we provide a foundation for the degree measure as a von Neumann-Morgenstern expected utility function reflecting preferences over being in different positions in different networks. The famous degree measure assigns to every position in a weighted network the sum of the weights of all links with its neighbours. A crucial property of a preference relation over network positions is neutrality to ordinary risk. If a preference relation over network positions satisfies this property and some regularity properties, then it must be represented by a utility function that is a multiple of the degree centrality measure. We show this in three steps. First, we characterize the degree measure as a centrality measure for weighted networks using four natural axioms. Second, we relate these network centrality axioms to properties of preference relations over positions in networks. Third, we show that the expected utility function is equal to a multiple of the degree measure if and only if it represents a regular preference relation that is neutral to ordinary risk. Similarly, we characterize a class of affine combinations of the out-degree and in-degree measure in weighted directed networks and deliver its interpretation as a von Neumann-Morgenstern expected utility function. 
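A minimal illustrative sketch (not from the paper) of the degree measure as defined in the abstract: the value of a position is the sum of the weights of all links incident to it. The network, node names, and weights below are invented for the example.

```python
# Degree measure of a position in a weighted (undirected) network:
# the sum of the weights of all links incident to that position.
# Names and weights are purely illustrative.

def degree_measure(network, node):
    """Sum of the weights of all links containing `node`."""
    return sum(weight for (u, v), weight in network.items() if node in (u, v))

# A small weighted network: each undirected link (u, v) carries a weight.
network = {
    ("a", "b"): 2.0,
    ("a", "c"): 1.5,
    ("b", "c"): 0.5,
}

print(degree_measure(network, "a"))  # 3.5 = 2.0 + 1.5
```

Per the paper's result, a regular, risk-neutral preference relation over such positions would be represented by a utility that is a positive multiple of this measure.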
Keywords:  Weighted network; network centrality; utility function; degree centrality; von Neumann-Morgenstern expected utility function; cooperative TU-game; weighted directed network. 
JEL:  D81 D85 C02 
Date:  2017–07–25 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20170065&r=mic 
By:  Schmitz, Patrick W. 
Abstract:  In the Grossman-Hart-Moore property rights approach to the theory of the firm, it is usually assumed that information is symmetric. Ownership matters for investment incentives, provided that investments are partly relationship-specific. We study the case of completely relationship-specific investments (i.e., the disagreement payoffs do not depend on the investments). It turns out that if there is asymmetric information, then ownership matters for investment incentives and for the expected total surplus. Specifically, giving ownership to party B can be optimal, even when only party A has to make an investment decision and even when the owner's expected disagreement payoff is larger under A-ownership. 
Keywords:  Incomplete Contracts; Investment incentives; private information; Property rights; relationship specificity 
JEL:  D23 D82 D86 L23 L24 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:12174&r=mic 
By:  Hitoshi Matsushima (Faculty of Economics, The University of Tokyo); Shunya Noda (Department of Economics, Stanford University) 
Abstract:  We investigate general mechanism design problems in which agents can take hidden actions that influence the state distribution. Their action choices exert significant externality effects on their valuation functions through this influence. We characterize all mechanisms that resolve the hidden action problem (i.e., that induce a targeted action profile). A variety of action choices shrinks the set of mechanisms that induce the targeted action profile, leading to ex-post equivalence properties with respect to payoffs, payments, and revenues. When the agents can take unilateral deviations to change the state distribution in various directions (i.e., when the action profile satisfies richness), pure-VCG mechanisms (the simplest form of the canonical VCG mechanism, implemented via open-bid descending procedures that determine the losers' compensation) are the only mechanisms that induce an efficient action profile. By contrast, the popular pivot mechanism, implemented by ascending auctions that determine the winner's payment, generally fails to induce any efficient action profile. 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2017cf1057&r=mic 
By:  Osório, António (António Miguel) 
Abstract:  This paper examines different Brownian information structures over varying time intervals. We focus on the non-limit case and on the trade-offs between information quality and quantity when deciding whether to cooperate or defect in a prisoners' dilemma game. In the best-case scenario, the information quality gains are strong enough that agents can substitute information quantity with information quality. In the second-best scenario, the information quality gains are weak and must be compensated for with additional information quantity; information quality improves, but not quickly enough to dispense with the use of information quantity. For sufficiently large time intervals, information degrades and monitoring becomes based mostly on information quantity. The results depend crucially on the particular information structure and on the rate at which information quality improves or decays relative to the discounting incentives. 
Keywords:  Repeated games, frequent monitoring, information quantity, information quality 
JEL:  C73 D82 D86 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:urv:wpaper:2072/290761&r=mic 
By:  Schneider, Tim; Bizer, Kilian 
Abstract:  We investigate a market in which experts face a moral hazard problem because they need to invest in costly but unobservable effort to identify consumer problems. Experts have either high or low qualification and can invest either high or low effort in their diagnosis. High-skilled experts are able to identify problems with some probability even with low effort, while low-skilled experts then always give false recommendations. Experts compete for consumers by setting prices for diagnosis and service. Consumers can visit multiple experts, which enables endogenous verifiability of diagnoses. We show that with a sufficient number of high-skilled experts, stable second-best and perfectly non-degenerate equilibria are possible even with flexible prices, although they depend on transaction costs being relatively low. By contrast, with a small share of high-skilled experts in the market, setting fixed prices can be beneficial for society. 
Keywords:  credence goods, expert market, moral hazard, qualification, competition, second opinions, diagnostic effort 
JEL:  L10 D82 D40 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:zbw:cegedp:317&r=mic 
By:  Itzik Fadlon; David Laibson 
Abstract:  Resource allocations are jointly determined by the actions of social planners and households. In this paper we highlight the distinction between planner optimization and household optimization. We show that planner optimization is a substitute for household optimization and that this is true even when there are information asymmetries, so that households know more about their preferences than planners. Our analysis illustrates the scope for misattribution in economic analysis. Are seemingly optimal allocations caused by optimizing households, or are such allocations caused by planners who paternalistically influence myopic and passive households? We show that widely studied allocative optimality conditions that are implied by household optimization also arise in an economy with a rational planner who uses tools such as default savings and Social Security to influence the choices of nonoptimizing households. Many classical optimization conditions do not resolve the question of household optimization. Pseudorationality arises when rational planners elicit approximately optimal behavior from nonoptimizing households. 
JEL:  D14 E21 H0 H55 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:23620&r=mic 
By:  Myatt, David P (London Business School); Wallace, Chris (University of Leicester) 
Abstract:  In an asymmetric coordination (or anti-coordination) game, players acquire and use signals about a payoff-relevant fundamental from multiple costly information sources. Some sources have greater clarity than others and generate signals that are more correlated and so more public. Players wish to take actions close to the fundamental but also close to (or far away from) others' actions. This paper studies how asymmetries in the game, represented as the weights that link players to neighbours on a network, affect how they use and acquire information. Relatively centrally located players (in the sense of Bonacich, applied to the dependence of players' payoffs on the actions of others) acquire fewer signals from relatively clear information sources; they acquire less information in total; and they place more emphasis on relatively public signals. 
Keywords:  Networks; Bonacich centrality; information acquisition and use; public and private information 
JEL:  C72 D83 D85 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:wrk:wcreta:32&r=mic 
By:  Craig McLaren (Department of Economics, University of California Riverside) 
Abstract:  This paper argues that "gerrymandering," understood here to mean the intentional redrawing of legislative district boundaries to benefit a given party, robs opposition voters of implicit bargaining power. Using the Median Voter Theorem and statistical examples, it argues that the presence of minority voters in a legislative district influences the majority party's choice of candidate whenever minority voters are present in sufficient number to pose a credible challenge. When, through gerrymandering, lawmakers ensure that minority voters cannot mount such a challenge, they deny such voters equal protection under the law. 
Keywords:  Election, Election Law, Voter Protection, Voting, Voting Law, Gill v. Whitford, gerrymandering, median voter theorem 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:ucr:wpaper:201706&r=mic 
By:  Dasgupta, Indraneel (Indian Statistical Institute); Neogi, Ranajoy Guha (Indian Statistical Institute) 
Abstract:  We model a contest between two groups of equal population size over the division of a group-specific public good. Each group is fragmented into subgroups. Each subgroup allocates effort between production and contestation. There is perfect coordination within subgroups, but subgroups cannot coordinate with one another. All subgroups choose effort allocations simultaneously. We find that aggregate rent-seeking rises, social welfare falls, and both communities are worse off when the dominant subgroups within both communities increase their population shares relative to the respective average subgroup population. Any unilateral increase in fragmentation within a group reduces conflict and makes its opponent better off. The fragmenting community itself may, however, be better off as well, even though its share of the public good falls. Thus, a reduced share of public good provisioning cannot be used to infer a negative welfare implication for the losing community. 
Keywords:  contest, group-specific public good, local public good, ethnic conflict, within-group fragmentation 
JEL:  D72 D74 O10 O20 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp10881&r=mic 
By:  Reisinger, Markus; Thomes, Tim Paul 
Abstract:  We investigate how the structure of the distribution channel affects tacit collusion between manufacturers. When manufacturers sell through a common retailer, we find that, contrary to the conventional understanding of tacit collusion (that firms act to maximize industry profits), colluding manufacturers strategically induce double marginalization, so that retail prices are above the monopoly level. This lowers industry profits but increases the profit share that manufacturers appropriate from the retailer. Comparing common distribution with independent (exclusive) distribution, we show that the latter facilitates collusion. Despite this result, common retailing leads to lower welfare because a common retailer monopolizes the downstream market. For the case of independent retailing, we also demonstrate that contract offers that are observable to the rival retailer are not necessarily beneficial for collusive purposes. 
Keywords:  tacit collusion, contract observability, common retailing, independent (exclusive) retailing, two-part tariffs, wholesale price contracts 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:zbw:dicedp:261&r=mic 
By:  Burzoni, M. (Center for Mathematical Economics, Bielefeld University); Riedel, Frank (Center for Mathematical Economics, Bielefeld University); Soner, H.M. (Center for Mathematical Economics, Bielefeld University) 
Abstract:  We reconsider the microeconomic foundations of financial economics under Knightian Uncertainty. In a general framework, we discuss the absence of arbitrage, its relation to economic viability, and the existence of suitable nonlinear pricing expectations. Classical financial markets under risk and no ambiguity are contained as special cases, including various forms of the Efficient Market Hypothesis. For Knightian uncertainty, our approach unifies recent versions of the Fundamental Theorem of Asset Pricing under a common framework. 
Keywords:  Robust Finance, No Arbitrage, Viability, Knightian Uncertainty 
Date:  2017–07–24 
URL:  http://d.repec.org/n?u=RePEc:bie:wpaper:575&r=mic 
By:  Hitoshi Matsushima (Faculty of Economics, The University of Tokyo) 
Abstract:  We investigate the implementation of social choice functions under severe restrictions on mechanisms, such as boundedness, permitting only tiny transfers, and uniqueness of the ex-post iteratively undominated strategy profile. We assume that some partial information about the state is verifiable. We consider the dynamic aspect of information acquisition: players share information, but the timing of receiving information differs across players. Exploiting this aspect, the central planner designs a dynamic, rather than a static, mechanism, in which each player announces what he (or she) knows about the state at multiple stages separated by sufficient intervals. By demonstrating a sufficient condition on the state and on this dynamic aspect, namely full detection, we show that a wide variety of social choice functions are uniquely implementable even if the range of players' lies that the verified information can directly detect is quite narrow. With full detection, we can detect all possible lies, not through the verified information alone, but by processing a chain of detections triggered by this information. This paper assumes neither expected utility nor quasi-linearity. 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2017cf1058&r=mic 
By:  Christian Trudeau (Department of Economics, University of Windsor); Juan VidalPuga (Research Group of Economic Analysis and Departamento de Estatistica e IO, Universidade de Vigo) 
Abstract:  We introduce a new family of cooperative games for which the nucleolus and the Shapley value coincide. These so-called clique games are such that players are divided into cliques, with the value created by a coalition increasing linearly with the number of agents belonging to the same clique. Agents can belong to multiple cliques, but for any pair of cliques at most a single agent belongs to their intersection. Finally, if two players do not belong to the same clique, there is at most one way to link them through a chain of players, with any two adjacent players in the chain belonging to a common clique. We provide multiple examples of clique games, chief among them minimum cost spanning tree problems. This allows us to obtain new correspondence results between the nucleolus and the Shapley value, as well as other cost sharing methods, for the minimum cost spanning tree problem. 
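A hedged sketch of one plausible reading of such a game (the exact functional form is our assumption, not taken from the paper): with cliques {1,2} and {2,3} overlapping in the single agent 2, take v(S) to be the sum over cliques C of max(0, |S ∩ C| - 1), and compute the Shapley value by brute force over player orderings.

```python
from itertools import permutations

# Illustrative clique game (functional form assumed, not the paper's):
# v(S) = sum over cliques C of max(0, |S & C| - 1).
cliques = [{1, 2}, {2, 3}]  # overlap in exactly one agent, as required
players = [1, 2, 3]

def v(coalition):
    return sum(max(0, len(coalition & c) - 1) for c in cliques)

def shapley(players, v):
    """Brute-force Shapley value: average marginal contribution over all orders."""
    value = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for i in order:
            value[i] += v(coalition | {i}) - v(coalition)
            coalition.add(i)
    return {i: value[i] / len(orders) for i in players}

print(shapley(players, v))  # {1: 0.5, 2: 1.0, 3: 0.5}
```

The "bridge" agent 2 receives the largest share; the paper's result is that for clique games this allocation also coincides with the nucleolus.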
Keywords:  nucleolus; Shapley value; clique; minimum cost spanning tree. 
JEL:  C71 D63 
Date:  2017–07 
URL:  http://d.repec.org/n?u=RePEc:wis:wpaper:1705&r=mic 
By:  Antonio JiménezMartínez (Division of Economics, CIDE) 
Abstract:  This paper considers a population of agents that are connected through a network that allows them to aggregate locally their pieces of private information about some uncertain (exogenous) parameter of interest. The agents wish to match their actions to the true value of the parameter and to the actions of the other agents. I ask how the design of (interim) efficient (minimally connected) networks depends on the level of complementarity in the agents’ actions. When the level of complementarity is either low or high, efficient networks are characterized by a high number of different neighborhoods and, as a consequence, by low levels of connectivity. For intermediate levels of complementarity in actions, efficient networks tend to feature low numbers of highly connected neighborhoods. The implications of this paper are relevant in security environments where agents are naturally interpreted as analysts who try to forecast the value of a parameter that describes a potential threat to security. 
Keywords:  Networks, information aggregation, beauty contests, strategic complementarity, efficiency 
JEL:  C72 D83 D84 D85 
Date:  2016–10 
URL:  http://d.repec.org/n?u=RePEc:emc:wpaper:dte601&r=mic 
By:  Antonio JiménezMartínez (Division of Economics, CIDE); Óscar GonzálezGuerra (Division of Economics, CIDE) 
Abstract:  This paper proposes a framework of second-degree discrimination with two different versions of a service that are served in random networks with positive externalities. In the model, consumers must choose between purchasing a premium version of the service or a free version that comes with advertising about a certain good (unrelated to the service). The ads attached to the free version influence the free version adopters' opinions and, given the induced effects on the good's sales, they affect the optimal pricing of the premium version. We relate the optimal pricing strategy to the underlying hazard rate and degree distribution of the random network. Under increasing hazard rates, hazard rate dominance always implies higher prices for the service. In some applications of the model, decreasing hazard rates are often associated with extreme situations where only the free version of the service is provided. The model provides foundations for empirical analysis, since key features of social networks can be related to their underlying hazard rate functions and degree distributions. 
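A brief illustrative sketch (parameters invented, not from the paper) of the hazard rate of a degree distribution, h(k) = P(k) / P(degree >= k), checking the increasing-hazard-rate case the abstract highlights for a truncated Poisson degree distribution, whose log-concave pmf yields an increasing hazard rate.

```python
import math

def hazard_rates(pmf):
    """Hazard rate h(k) = p(k) / tail(k) for a finite pmf given as a list."""
    tail = sum(pmf)
    rates = []
    for p in pmf:
        rates.append(p / tail)  # probability of degree k given degree >= k
        tail -= p
    return rates

# Poisson degree distribution, truncated at kmax (illustrative parameters).
lam, kmax = 2.0, 10
poisson = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(kmax + 1)]

h = hazard_rates(poisson)
# Log-concave pmf => increasing hazard rate (the abstract's "good" case).
assert all(h[i] < h[i + 1] for i in range(len(h) - 1))
```

Under such increasing hazard rates, the paper's result is that hazard rate dominance of one network over another implies a higher premium price.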
Keywords:  Social networks, second-degree discrimination, advertising, degree distributions, hazard rate 
JEL:  D83 D85 L1 M3 
Date:  2016–09 
URL:  http://d.repec.org/n?u=RePEc:emc:wpaper:dte600&r=mic 
By:  Massimo Marinacci; Luigi Montrucchio 
Abstract:  We establish sufficient conditions that ensure the uniqueness of Tarski-type fixed points of monotone operators. Several applications are presented. 
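A minimal sketch (the operator is an invented example, not from the paper) of the Tarski setting: a monotone operator on a finite complete lattice (subsets of {1,...,5} ordered by inclusion) has a least and a greatest fixed point, reachable by iterating from the bottom and top; when the two coincide, the fixed point is unique, the situation whose sufficient conditions the paper studies.

```python
# Kleene-style iteration to the least/greatest fixed points guaranteed by
# Tarski's theorem on a finite complete lattice. Operator is illustrative.

UNIVERSE = frozenset(range(1, 6))

def f(s):
    """A monotone operator: always include 1, plus the successor of each member."""
    return frozenset({1} | {x + 1 for x in s if x < 5})

def iterate_to_fixed_point(f, start):
    current = start
    while f(current) != current:
        current = f(current)
    return current

least = iterate_to_fixed_point(f, frozenset())   # iterate up from the bottom
greatest = iterate_to_fixed_point(f, UNIVERSE)   # iterate down from the top
assert least == greatest == UNIVERSE             # unique fixed point here
```

Here every fixed point must contain 1, hence 2, and so on, forcing uniqueness; the paper's conditions characterize when this kind of coincidence of the extreme fixed points holds in general.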
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:igi:igierp:604&r=mic 