
on Microeconomics 
By:  Philippe Jehiel; Jakub Steiner 
Abstract:  A decision-maker acquires payoff-relevant information until she reaches her storage capacity, at which point she either terminates the decision-making and chooses an action, or discards some information. By conditioning the probability of termination on the information collected, she controls the correlation between the payoff state and her terminal action. We provide an optimality condition for the emerging stochastic choice. The condition highlights the benefits of selective memory applied to the extracted signals. The constrained-optimal choice rule exhibits (i) confirmation bias, (ii) speed-accuracy complementarity, (iii) overweighting of rare events, and (iv) a salience effect. 
Keywords:  bounded rationality; cognitive constraints; information processing; stochastic choice; confirmation bias; speed-accuracy complementarity; probability weighting; salience 
JEL:  D03 D80 D81 D83 D89 D90 
Date:  2018–07 
URL:  http://d.repec.org/n?u=RePEc:cer:papers:wp621&r=mic 
By:  Miltiadis Makris (Department of Economics, University of Southampton); Ludovic Renou (Queen Mary University of London) 
Abstract:  We consider multistage games, where at each stage, players receive private signals about past and current states, past actions and past signals, and choose actions. We fully characterize the distributions over actions, states, and signals that obtain in any (sequential) communication equilibrium of any expansion of multistage games, i.e., when players can receive additional signals about past and current states, past actions, and past and current signals (including the additional past signals). We interpret our results as revelation principles for information design problems. We apply our characterization to bilateral bargaining problems. 
Keywords:  multistage games, information design, communication equilibrium, sequential communication equilibrium, information structures, Bayes correlated equilibrium, revelation principle 
JEL:  C73 D82 
Date:  2018–06–25 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:861&r=mic 
By:  Haraguchi, Junichi; Hirose, Kosuke 
Abstract:  We investigate the endogenous order of moves in a price-setting mixed oligopoly comprising two private firms and one public firm. We show that sequential moves emerge as the equilibrium of the observable delay game. Specifically, in equilibrium one of the private firms and the public firm set their prices in period 1, and the other private firm does so in period 2, provided their goods are not significantly differentiated. This contrasts sharply with a mixed duopoly, where the simultaneous-move game is the unique equilibrium. We also discuss a number of extensions and the robustness of our result. 
Keywords:  Mixed Markets; Endogenous Timing; Stackelberg 
JEL:  H44 L13 
Date:  2018–06–12 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:87285&r=mic 
By:  Jean-Pierre Drugeon (PSE - Paris School of Economics, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique); Thai Ha-Huy (EPEE - Centre d'Etudes des Politiques Economiques - UEVE - Université d'Évry-Val-d'Essonne) 
Abstract:  This article builds an axiomatization of intertemporal trade-offs that gives an explicit account of the distant future and therefore encompasses motives related to sustainability, transmission to offspring, and altruism. The focus is on separable representations, and the approach is completed with a decision-theoretic, index-based approach applied to utility streams. This highlights the limits of the tail-intensity requisites commonly used for the evaluation of utility streams: here these are superseded and replaced by an axiomatic approach to optimal myopia degrees, which in turn precedes the determination of optimal discount rates. The overall approach is anchored in a new and explicit proof of a temporal decomposition of the preference orders between the distant future and the close future, itself directly related to the determination of the optimal myopia degrees. The argument is shown to provide a novel understanding of temporal biases, with scope for a distant-future bias when the finite-dimensional component is influenced by the infinite-dimensional one. The reference to robust orders and pessimism-like axioms finally allows tractable representations of the indexes to be determined. 
JEL:  D11 D15 D90 
Keywords:  Discount, Temporal Order Decompositions, Infinite Dimensional Topologies, Axiomatization, Myopia 
Date:  2018–04 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs01761962&r=mic 
By:  Robin Lindsey (University of Alberta [Edmonton]); André De Palma (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Hugo Silva (Instituto Superior Técnico - Technical University of Lisbon) 
Abstract:  Individual users often control a significant share of total traffic flows. Examples include airlines, rail and maritime freight shippers, urban goods delivery companies, and passenger transportation network companies. These users have an incentive to internalize the congestion delays their own vehicles impose on each other by adjusting the timing of their trips. We investigate simultaneous trip-timing decisions by large users and small users in a dynamic model of congestion. Unlike previous work, we allow for heterogeneity of trip-timing preferences and for the presence of small users such as individual commuters and fringe airlines. We derive the optimal fleet departure schedule for a large user as a best response to the aggregate departure rate of other users. We show that when the vehicles in a large user's fleet have a sufficiently dispersed distribution of desired arrival times, there may exist a pure-strategy Nash equilibrium (PSNE) in which the large user schedules vehicles when there is a queue. This resolves the problem of non-existence of a PSNE identified in Silva et al. (2017) for the case of symmetric large users. We also develop examples to identify the conditions under which a PSNE exists. The examples illustrate how self-internalization of congestion by a large user can affect the nature of equilibrium and the travel costs that it and other users incur. 
Keywords:  departure-time decisions, bottleneck model, congestion, schedule delay costs, large users, user heterogeneity, existence of Nash equilibrium 
Date:  2018–04–06 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal01760135&r=mic 
By:  Masahiko Hattori; Atsuhiro Satoh; Yasuhito Tanaka 
Abstract:  We consider a symmetric multi-player zero-sum game with two strategic variables. There are $n$ players, $n\geq 3$, each denoted by $i$. The two strategic variables are $t_i$ and $s_i$, $i\in \{1, \dots, n\}$, and they are related by invertible functions. Using Sion's minimax theorem, we show that the Nash equilibria in the following three cases are equivalent: 1. all players choose $t_i$, $i\in \{1, \dots, n\}$, as their strategic variables; 2. some players choose $t_i$'s and the other players choose $s_i$'s; 3. all players choose $s_i$, $i\in \{1, \dots, n\}$. 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1806.07203&r=mic 
By:  Atsuhiro Satoh; Yasuhito Tanaka 
Abstract:  We consider the relation between Sion's minimax theorem for a continuous function and a Nash equilibrium in an asymmetric multi-player zero-sum game in which one player differs from the others and the game is symmetric among the remaining players. We show the following: 1. the existence of a Nash equilibrium that is symmetric among the players other than the distinguished one implies Sion's minimax theorem for pairs consisting of the distinguished player and one of the others, with symmetry among the remaining players; 2. Sion's minimax theorem for such pairs, with symmetry among the remaining players, implies the existence of a Nash equilibrium that is symmetric among the players other than the distinguished one. Thus, the two statements are equivalent. 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1806.07253&r=mic 
By:  Pierre Martinon (Commands - Control, Optimization, Models, Methods and Applications for Nonlinear Dynamical Systems - CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique - Inria Saclay - Ile de France - Inria - Institut National de Recherche en Informatique et en Automatique - UMA - Unité de Mathématiques Appliquées - Univ. Paris-Saclay, ENSTA ParisTech - École Nationale Supérieure de Techniques Avancées - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique); Pierre Picard (Département d'Économie de l'École Polytechnique - X - École polytechnique); Anasuya Raj (Département d'Économie de l'École Polytechnique - X - École polytechnique) 
Abstract:  We analyze the design of optimal medical insurance under ex post moral hazard, i.e., when illness severity cannot be observed by insurers and policyholders decide for themselves on their health expenditures. The trade-off between ex ante risk sharing and ex post incentive compatibility is analyzed in an optimal revelation mechanism under hidden information and risk aversion. The optimal contract provides partial insurance at the margin, with a deductible when insurers' rates are affected by a positive loading, and it may also include an upper limit on coverage. The potential to audit the health state leads to an upper limit on out-of-pocket expenses. 
Keywords:  optimal control, health insurance, ex post moral hazard, audit, background risk 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal01348551&r=mic 
By:  Sakshi Gupta (Department of Economics, Columbia University, New York); Ram Singh (Department of Economics, Delhi School of Economics) 
Abstract:  The 'ratio form' probability-of-success function dominates the existing literature on contests. Very few works have focused on 'difference form' functions, notwithstanding their robust theoretical foundations and intuitive appeal in several contexts. Assuming the cost of effort to be linear, Hirshleifer (1989) and Baik (1998) argued that under difference-form contests there is no interior pure-strategy Nash equilibrium. In contrast, the existence of an interior pure-strategy Nash equilibrium is well known for ratio-form contest functions. In this paper we use strictly convex cost functions and demonstrate the existence of a pure-strategy Nash equilibrium for the difference form. Moreover, we show that several properties of the equilibria and the comparative statics for the difference form closely resemble those for the ratio form. However, unlike the ratio form, under a difference-form contest the existence of a pure-strategy Nash equilibrium is sensitive to the value of the prize. 
Date:  2018–05 
URL:  http://d.repec.org/n?u=RePEc:cde:cdewps:288&r=mic 
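As a hedged numerical illustration of the difference-form result summarized above (a sketch under our own assumptions, not the authors' model: the two-player logit success function, the quadratic cost c*x**2/2, and all function and parameter names are ours), the first-order condition V*k*p*(1-p) = c*x yields an interior symmetric candidate effort x* = k*V/(4*c), which can be checked against unilateral deviations:

```python
import math

def symmetric_effort(V, k, c):
    """Interior symmetric candidate of a two-player logit (difference-form)
    contest with prize V, sensitivity k and strictly convex cost c*x**2/2.
    At a symmetric profile p = 1/2, so V*k*p*(1-p) = c*x gives x = k*V/(4*c)."""
    return k * V / (4.0 * c)

def payoff(x1, x2, V, k, c):
    """Player 1's expected payoff: the success probability is the logit
    (difference-form) function of the effort gap x1 - x2."""
    p1 = 1.0 / (1.0 + math.exp(k * (x2 - x1)))
    return V * p1 - c * x1 ** 2 / 2.0
```

With V = k = c = 1 the candidate effort is 0.25 and no unilateral deviation on a fine grid improves on it; raising the prize far enough makes the zero-effort deviation profitable, consistent with the abstract's claim that existence is sensitive to the value of the prize.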
By:  Anderson, Simon P; Foros, Øystein; Kind, Hans Jarle 
Abstract:  Consumer "multihoming" (watching two TV channels, or buying two news magazines) has surprisingly important effects on market equilibrium and performance in (two-sided) media markets. We show this by introducing consumer multihoming and advertising finance into the classic circle model of product differentiation. When consumers multihome (attend more than one platform), media platforms can charge only incremental-value prices to advertisers. Entry or merger leaves consumer prices unchanged under consumer multihoming, but leaves advertiser prices unchanged under singlehoming: multihoming flips the side of the market on which platforms compete. In contrast to standard circle-model results, equilibrium product variety can be insufficient under multihoming. 
Keywords:  circle model; equilibrium product variety; media platforms; multihoming; two-sided markets; incremental-value prices; merger; singlehoming 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:13022&r=mic 
By:  Baccara, Mariagiovanna; Lee, SangMok; Yariv, Leeat 
Abstract:  We study a dynamic matching environment where individuals arrive sequentially. There is a trade-off between waiting for a thicker market, which allows for higher-quality matches, and minimizing agents' waiting costs. The optimal mechanism accumulates a stock of incongruent pairs up to a threshold and matches all others assortatively and instantaneously. In discretionary settings, a similar protocol ensues in equilibrium, but expected queues are inefficiently long. We quantify the welfare gain from centralization, which can be substantial even for low waiting costs. We also evaluate the welfare improvements generated by transfer schemes and alternative priority protocols. 
Keywords:  dynamic matching; market design; mechanism design; organ donation 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:12986&r=mic 
By:  Aleksei Yu. Kondratev (National Research University Higher School of Economics); Alexander S. Nesterov (National Research University Higher School of Economics) 
Abstract:  We study practically relevant aspects of popularity in two-sided matching where only one side has preferences. A matching is called popular if there does not exist another matching that is preferred by a simple majority. We show that for a matching to be popular it is necessary and sufficient that no coalition of size up to 3 decides to exchange their houses by simple majority. We then constructively show that a market where such coalitions meet at random converges to a popular matching whenever one exists. 
Keywords:  twosided matching, popular matching, random paths, house allocation, assignment problem 
JEL:  Z 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:hig:wpaper:195/ec/2018&r=mic 
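A minimal sketch of the coalition dynamics described above, under our own assumptions (the house-allocation instance, the function names, and the majority rule "more coalition members gain than lose" are our illustrative reading, not the authors' formal definitions):

```python
import itertools

def improving_exchange(matching, prefs):
    """Search for a coalition of size 2 or 3 whose cyclic exchange of houses
    is preferred by a strict majority of its members (more members strictly
    better off than strictly worse off); return the new matching, else None.
    matching[a] is the house held by agent a; prefs[a] ranks houses best-first."""
    rank = [{h: i for i, h in enumerate(p)} for p in prefs]
    n = len(matching)
    for size in (2, 3):
        for coal in itertools.permutations(range(n), size):
            new = list(matching)
            for i, a in enumerate(coal):
                new[a] = matching[coal[(i + 1) % size]]  # take the next member's house
            better = sum(rank[a][new[a]] < rank[a][matching[a]] for a in coal)
            worse = sum(rank[a][new[a]] > rank[a][matching[a]] for a in coal)
            if better > worse:
                return new
    return None

def coalition_dynamics(matching, prefs, max_steps=1000):
    """Apply improving exchanges until none remains (a necessary condition
    for popularity, per the abstract) or the step budget is exhausted."""
    for _ in range(max_steps):
        nxt = improving_exchange(matching, prefs)
        if nxt is None:
            break
        matching = nxt
    return matching
```

On a 3-agent instance in which agent i ranks house i first, the dynamics move any initial allocation to the assortative matching, which no coalition of size up to 3 can then overturn by majority.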
By:  Kwiek, Maksymilian (University of Southampton); Marreiros, Helia (Universidade Catolica Portuguesa, Porto); Vlassopoulos, Michael (University of Southampton) 
Abstract:  We study communication in committees selecting one of two alternatives when consensus is required and agents have private information about their preferences. Delaying the decision is costly, so a form of multi-player war of attrition emerges. Waiting allows voters to express the intensity of their preferences and may help to select the correct alternative more often than simple majority would. In a series of laboratory experiments, we investigate how various rules affect the outcome reached. We vary the amount of feedback and the communication protocol available to voters: complete secrecy about the pattern of support; feedback about this support; public communication; and within-group communication. The feedback, no-communication mechanism is worse than the no-feedback benchmark on all measures of welfare: the efficient alternative is chosen less often, waiting costs are higher, and thus net welfare is lower. Our headline result is that adding communication restores net efficiency, but in different ways. Public communication does poorly in terms of selecting the correct alternative but limits the cost of delay, while group communication improves allocative efficiency but has at best a moderate effect on delay. 
Keywords:  voting, intensity of preferences, supermajority, conclave, war of attrition, communication 
JEL:  C78 C92 D72 D74 
Date:  2018–06 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp11595&r=mic 
By:  Christophe Labreuche (Thales Research and Technology [Palaiseau] - THALES); Michel Grabisch (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics) 
Abstract:  In many multi-criteria decision problems, one can construct with the decision maker several reference levels on the attributes such that some decision strategies are conditional on comparisons with these reference levels. The classical models (such as the Choquet integral) cannot represent these preferences. We are therefore interested in two models. The first is the Choquet integral with respect to a p-ary capacity combined with utility functions, where the p-ary capacity is obtained from the reference levels. The second is a specialization of the Generalized Additive Independence (GAI) model, discretized to accommodate the reference levels. These two models share common properties (monotonicity, continuity, proper weighting, …) but differ in their interpolation methods (the Lovász extension for the Choquet integral, the multilinear extension for the GAI model). A drawback of the Choquet integral with respect to a p-ary capacity is that it cannot accommodate decision strategies that are completely independent of one another across the domains bounded by two successive reference levels. We show that this is not the case for the GAI model. 
Keywords:  multiple criteria analysis, Generalized Additive Independence, Choquet integral, reference levels, interpolation, GAI 
Date:  2018–03 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs01815028&r=mic 
By:  Kerem Ugurlu 
Abstract:  We consider the classical Merton problem of terminal wealth maximization over a finite horizon. We assume that the drift of the stock follows an Ornstein-Uhlenbeck process and that its volatility follows a GARCH(1) process; in particular, both the mean and the volatility are unbounded. We assume there is Knightian uncertainty on the parameters of both the mean and the volatility. Taking an investor with a logarithmic utility function, we solve the corresponding utility maximization problem explicitly. To the best of our knowledge, this is the first work on utility maximization with unbounded mean and volatility under Knightian uncertainty with non-dominated priors. 
Date:  2018–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1807.05773&r=mic 
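The paper itself treats Ornstein-Uhlenbeck drift and GARCH(1) volatility; as a far simpler hedged sketch of the robust logic (constant parameters, interval uncertainty, and the function name are our assumptions, not the paper's model), recall that with logarithmic utility the Merton fraction is myopic, pi = (mu - r)/sigma**2, so a worst-case investor holding a long position uses the lowest drift and the highest volatility in the uncertainty set:

```python
def robust_merton_fraction(mu_range, sigma_range, r):
    """Robust myopic Merton fraction under log utility with interval
    (Knightian) uncertainty mu in [mu_lo, mu_hi], sigma in [sig_lo, sig_hi].
    The expected log-growth r + pi*(mu - r) - 0.5*pi**2*sigma**2 is, for
    pi >= 0, minimized by the lowest drift and the highest volatility, so
    the max-min investor plugs that worst case into pi = (mu - r)/sigma**2."""
    mu_lo, _ = mu_range
    _, sig_hi = sigma_range
    pi = (mu_lo - r) / sig_hi ** 2
    # Keep to a long position; the adversary argument above assumes pi >= 0.
    return max(pi, 0.0)
```

With mu in [0.04, 0.08], sigma in [0.15, 0.25] and r = 0.01, the robust fraction is (0.04 - 0.01)/0.25**2 = 0.48, versus roughly 1.33 at the most favorable parameters.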
By:  Claude Fluet; Thomas Lanzi 
Abstract:  Two parties with opposed interests invest in acquiring evidence, which they may only partially disclose. The decision maker then adjudicates. This setup is compared with one permitting cross-examination of the other party's report. The decision maker can then better assess whether a report was deceitful through the withholding of evidence. Nevertheless, decision-making need not be improved. The parties invest less in gathering evidence because they are less able to successfully manipulate information and because cross-examination is a substitute for potentially countering the other party. From the decision maker's standpoint, there is too much cross-examination at the expense of too little direct evidence. 
Keywords:  disclosure, persuasion, evidence, adversarial, cross-examination, judicial procedures 
JEL:  D82 K41 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:lvl:crrecr:1811&r=mic 
By:  Mario Gilli; Yuan Li 
Abstract:  The literature on the functioning of autocracies has not analyzed the consequences of the fact that policies have multiple dimensions and that these dimensions are perceived with different biases by the public. This fact is all the more striking in autocracies, where the public perception of policies' effects might be partially manipulated. We try to fill this gap. This paper makes three contributions to the literature on the functioning of autocratic regimes. First, we show that, perhaps counterintuitively, the probabilities of both fully efficient and fully inefficient policies decrease as opacity increases, while the probability of partially efficient policies behaves in the opposite way. This implies that the probabilities of efficient policies on different policy dimensions diverge as opacity increases, which provides an explanation for the observed heterogeneity of policies within an autocracy. Second, the expected probability of a coup is non-monotone with respect to opacity, so that at intermediate levels an increase in opacity might actually increase the likelihood of a selectorate coup. Finally, the expected probability of a citizens' revolt might also be non-monotone with respect to opacity, so that the likelihood of a revolt might actually increase as opacity increases. We conclude that the effect of bias in the public perception of some policy dimension on authoritarian regime stability is non-monotone. These results provide a reason why transition periods are dangerous for a dictator. 
Keywords:  multidimensional policies, public perception, political stability 
JEL:  D02 H11 D74 
Date:  2018–07–13 
URL:  http://d.repec.org/n?u=RePEc:mib:wpaper:383&r=mic 
By:  Jean-Pierre Drugeon (PSE - Paris School of Economics, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique); Thai Ha-Huy (EPEE - Centre d'Etudes des Politiques Economiques - UEVE - Université d'Évry-Val-d'Essonne); Thi-Do-Hanh Nguyen (Vietnam Maritime University) 
Abstract:  This article establishes a dynamic programming argument for a maximin optimization problem in which the agent performs a minimization over a set of discount rates. Even though the maximin criterion results in a program that is neither convex nor stationary over time, we prove that a careful appeal to extended dynamic programming principles and a maximin functional equation allows these difficulties to be circumvented and a time-consistent optimal sequence to be recovered. This in turn delivers a stationary dynamic programming argument. 
Keywords:  maximin principle, non-convexities, value function, policy function, supermodularity 
Date:  2018–04 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs01761997&r=mic 