
on Microeconomics 
By:  Leung, B. T. K. 
Abstract:  This paper examines how learning behavior changes with the complexity of the inference problem and the individual's cognitive ability, comparing optimal learning behavior with bounded memory in small and big worlds. A learning problem is a small world if the state space is much smaller than the size of the bounded memory, and a big world otherwise. I show, first, that optimal learning behavior is almost Bayesian in small worlds but differs significantly from Bayesian behavior in big worlds. Second, ignorant learning behaviors, e.g., the availability heuristic, correlation neglect, and persistent overconfidence, are never optimal in small worlds but could be optimal in big worlds. Third, different individuals are bound to agree in small worlds but could disagree, and even be bound to disagree, in big worlds. These results suggest that the complexity of a learning problem, relative to the cognitive ability of individuals, could explain a wide range of anomalies in learning behavior. 
Keywords:  Learning, Bounded Memory, Bayesian, Ignorance, Disagreement 
JEL:  D83 D91 
Date:  2020–09–08 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:2085&r=all 
By:  Christian Ewerhart; Guang-Zhen Sun 
Abstract:  We characterize the equilibrium set of the n-player Hirshleifer contest with homogeneous valuations. A symmetric equilibrium always exists. It necessarily corresponds to multilateral peace for sufficiently high noise and uses finite-support randomized strategies otherwise. Asymmetric equilibria are feasible for n≥3 contestants only, and only for sufficiently small noise. In pure strategies, any asymmetric equilibrium corresponds to one-sided dominance, but there is also a variety of payoff-inequivalent mixed-strategy equilibria for small noise. For arbitrarily small noise, at least two contestants engage in cut-throat competition, while any others ultimately become inactive. Of some conceptual interest is the observation that, for n sufficiently large, the unique equilibrium is multilateral peace. 
Keywords:  Hirshleifer contest, Nash equilibrium, rent dissipation, difference-form contest, all-pay auction 
JEL:  C72 D72 D74 
Date:  2020–08 
URL:  http://d.repec.org/n?u=RePEc:zur:econwp:361&r=all 
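The "peace for sufficient noise" result in the Ewerhart–Sun abstract above can be illustrated numerically. The sketch below assumes a two-player logistic difference-form success function with decisiveness parameter alpha, a standard Hirshleifer specification; the functional form and parameter values are illustrative, not taken from the paper. It checks by grid search that zero effort is a best response to zero effort when noise is high (alpha small):

```python
import numpy as np

def payoff(x_i, x_j, v=1.0, alpha=2.0):
    # Logistic (difference-form) success function: the win probability
    # depends only on the effort difference x_i - x_j.
    p_win = 1.0 / (1.0 + np.exp(alpha * (x_j - x_i)))
    return v * p_win - x_i

grid = np.linspace(0.0, 1.0, 1001)  # candidate effort levels

# Best response to an opponent who plays zero effort ("peace").
br_to_peace = grid[np.argmax(payoff(grid, 0.0))]
print(br_to_peace)  # 0.0: with high noise, peace is a symmetric equilibrium
```

With low noise (e.g., alpha = 20), the marginal return to effort at zero exceeds its cost, and the best response to peace becomes strictly positive, consistent with the randomized-strategy regime described in the abstract.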
By:  Wenjia Ba; Haim Mendelson; Mingxi Zhu 
Abstract:  We study the implications of selling through a voice-based virtual assistant (VA). The seller has a set of products available, and the VA decides which product to offer and at what price, seeking to maximize its revenue, consumer surplus, or total surplus. The consumer is impatient and rational, seeking to maximize her expected utility given the information available to her. The VA selects products based on the consumer's request and other information available to it, and then presents them sequentially. Once a product is presented and priced, the consumer evaluates it and decides whether to make a purchase. The consumer's valuation of each product comprises a pre-evaluation value, which is common knowledge, and a post-evaluation component, which is private to the consumer. We solve for the equilibria and develop efficient algorithms for implementing the solution. We examine the effects of information asymmetry on the outcomes and study how incentive misalignment depends on the distribution of private valuations. We find that monotone rankings are optimal in the cases of a highly patient or impatient consumer and provide a good approximation for other levels of patience. The relationship between products' expected valuations and prices depends on the consumer's patience level and is monotone increasing (decreasing) when the consumer is highly impatient (patient). Also, the seller's share of total surplus decreases in the amount of private information. We compare the VA to a traditional web-based interface, where multiple products are presented simultaneously on each page. We find that, within a page, the higher-value products are priced lower than the lower-value products when the private valuations are exponentially distributed. Finally, the web-based interface generally achieves higher profits for the seller than a VA, due to the greater commitment power inherent in its presentation. 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2009.03719&r=all 
By:  Carl Heese 
Abstract:  This paper studies theoretically how endogenous attention to politics affects social welfare and its distribution. When citizens' information about uncertain policy consequences is exogenous, a median voter theorem holds. When information is endogenous, attention shifts election outcomes in a direction that is welfare-improving. For a large class of settings, election outcomes maximize a weighted welfare rule. The implicit decision weight of voters with higher utilities is higher, but less so when information is cheaper. In general, decision weights are proportional to how informed voters are. The results imply that uninformed voters have effectively almost no voting power, that the ability to access and interpret information is a critical determinant of democratic participation, and that elections are susceptible to third-party manipulation of voter information. 
Keywords:  Voting, Information Aggregation, Attention, Costly Information Acquisition, Welfare 
JEL:  D72 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2020_209&r=all 
By:  Alfredo Salgado 
Abstract:  In this paper, we establish sufficient conditions on the domain of preferences and agents' behavior in order to characterize the existence of stable assignments in many-to-one matching problems with externalities. The set of stable matchings depends on what agents believe other agents will do if they deviate. Such sets of reactions are called estimation functions, or simply estimations. We show that, unless some restrictions are imposed on agents' preferences, there is no constraint on agents' behavior that assures the existence of stable matchings. In addition, we introduce a condition on preferences called bottom q-substitutability that guarantees the existence of at least one stable matching when the set of estimations includes all possible matches. Finally, we analyze a notion of the core and its relation with the set of stable assignments. 
Keywords:  Two-sided matching; Externalities; Stability; Estimation functions; Pessimistic agents; Core 
JEL:  C71 C78 D62 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:bdm:wpaper:202003&r=all 
By:  Hörner, Johannes; Klein, Nicolas 
Abstract:  This paper considers a class of experimentation games with Lévy bandits encompassing those of Bolton and Harris (1999) and Keller, Rady and Cripps (2005). Its main result is that efficient (perfect Bayesian) equilibria exist whenever players’ payoffs have a diffusion component. Hence, the trade-offs emphasized in the literature do not rely on the intrinsic nature of bandit models but on the commonly adopted solution concept (MPE). This is not an artifact of continuous time: we prove that such equilibria arise as limits of equilibria in the discrete-time game. Furthermore, it suffices to relax the solution concept to strongly symmetric equilibrium. 
Keywords:  Two-Armed Bandit; Bayesian Learning; Strategic Experimentation; Strongly Symmetric Equilibrium 
Date:  2020–08–04 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:124603&r=all 
By:  LUO Chenghong (CORE, UCLouvain and Ca' Foscari University); MAULEON Ana (Université Saint-Louis, Bruxelles); VANNETELBOSCH Vincent (CORE, UCLouvain) 
Abstract:  We propose the notion of coalition-proof stability for predicting the networks that could emerge when group deviations are allowed. A network is coalition-proof stable if there exists no coalition that has a credible group deviation. A coalition is said to have a credible group deviation if there is a profitable group deviation to some network and there is no subcoalition of the deviating players that has a subsequent credible group deviation. Coalition-proof stability is a coarsening of strong stability. There is no relationship between the set of coalition-proof stable networks and the set of networks induced by a coalition-proof Nash equilibrium of Myerson’s linking game. Contrary to coalition-proof stability, coalition-proof Nash equilibria of Myerson’s linking game tend to support unreasonable networks. 
Keywords:  friendship networks; stable sets; myopic and farsighted players; assimilation; segregation 
JEL:  A14 C70 D20 
Date:  2020–02–11 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2020018&r=all 
By:  Maryam Saeedi; Ali Shourideh 
Abstract:  We study the design of optimal rating systems in the presence of adverse selection and moral hazard. Buyers and sellers interact in a competitive market where goods are vertically differentiated according to their qualities. Sellers differ in their cost of quality provision, which is private information to them. An intermediary observes sellers' quality and chooses a rating system, i.e., a signal of quality for buyers, in order to incentivize sellers to produce high-quality goods. We provide a full characterization of the set of payoffs and qualities that can arise in equilibrium under an arbitrary rating system. We use this characterization to analyze Pareto optimal rating systems both when the seller's quality choice is deterministic and when it is random. 
Date:  2020–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2008.09529&r=all 
By:  Jian Wu; William B. Haskell; Wenjie Huang; Huifu Xu 
Abstract:  In behavioural economics, a decision maker's preferences are expressed by choice functions. Preference robust optimization (PRO) is concerned with problems where the decision maker's preferences are ambiguous, and the optimal decision is based on a robust choice function with respect to a preference ambiguity set. In this paper, we propose a PRO model to support choice functions that are: (i) monotonic (prefer more to less), (ii) quasi-concave (prefer diversification), and (iii) multi-attribute (have multiple objectives/criteria). As our main result, we show that the robust choice function can be constructed efficiently by solving a sequence of linear programming problems. The robust choice function can then be optimized efficiently by solving a sequence of convex optimization problems. Our numerical experiments for the portfolio optimization and capital allocation problems show that our method is practical and scalable. 
Date:  2020–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2008.13309&r=all 
By:  Georgiadis, George; Szentes, Balázs 
Abstract:  This paper considers a Principal–Agent model with hidden action in which the Principal can monitor the Agent by acquiring independent signals conditional on effort at a constant marginal cost. The Principal aims to implement a target effort level at minimal cost. The main result of the paper is that the optimal information-acquisition strategy is a two-threshold policy and, consequently, the equilibrium contract specifies two possible wages for the Agent. This result provides a rationale for the frequently observed single-bonus wage contracts. 
JEL:  J1 
Date:  2020–03–29 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:104062&r=all 
By:  Yeon-Koo Che; Jinwoo Kim; Fuhito Kojima; Christopher Thomas Ryan 
Abstract:  We characterize Pareto optimality via sequential utilitarian welfare maximization: a utility vector $u$ is Pareto optimal if and only if there exists a finite sequence of nonnegative (and eventually positive) welfare weights such that $u$ maximizes utilitarian welfare under each successive weight vector among the previous set of maximizers. The characterization can further be related to the maximization of a piecewise-linear concave social welfare function and to sequential bargaining among agents à la generalized Nash bargaining. We provide conditions enabling simpler utilitarian characterizations and a version of the second welfare theorem. 
Date:  2020–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2008.10819&r=all 
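The sequential-maximization characterization in the Che–Kim–Kojima–Ryan abstract above can be illustrated on a toy finite feasible set; the set of utility vectors and the weight sequence below are illustrative, not from the paper. Successively maximizing weighted utilitarian welfare among the previous stage's maximizers selects a Pareto optimal vector:

```python
def sequential_max(points, weight_seq):
    # At each stage, keep only the maximizers of the current weighted
    # utilitarian welfare among the survivors of the previous stage.
    survivors = list(points)
    for w in weight_seq:
        best = max(sum(wi * ui for wi, ui in zip(w, u)) for u in survivors)
        survivors = [u for u in survivors
                     if sum(wi * ui for wi, ui in zip(w, u)) == best]
    return survivors

def is_pareto_optimal(u, points):
    # u is Pareto optimal if no feasible v weakly dominates it strictly.
    return not any(all(v[i] >= u[i] for i in range(len(u)))
                   and any(v[i] > u[i] for i in range(len(u)))
                   for v in points)

feasible = [(3, 1), (2, 2), (1, 3), (1, 1), (3, 0)]
# Nonnegative weights with the final vector strictly positive:
# stage 1 weights agent 1 only; stage 2 breaks the tie with (1, 1).
picked = sequential_max(feasible, [(1, 0), (1, 1)])
print(picked)  # [(3, 1)]
print(is_pareto_optimal(picked[0], feasible))  # True
```

Stage 1 keeps (3, 1) and (3, 0); stage 2 selects (3, 1), which no feasible vector dominates, consistent with the "only if" direction of the characterization on this toy set.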
By:  Vaissman Guinsburg, Pedro 
Abstract:  I study the problem of firms that publicly disclose verifiable information to each other, in the form of Blackwell experiments, before engaging in strategic decisions. The signals designed can be interpreted either as statistical reports or as slices of physical quantities, i.e., market segments. Before the state of the world is realized, firms choose a signal policy, an estimation technique, about a private individual payoff state, and are then forced to publicize the results of the investigations to all other firms before engaging in price or quantity competition. Because signals are made public, when a firm tries to assess its individual payoff, it also ends up revealing the same information to its opponents. Full disclosure enables companies to adapt to local market fundamentals at the expense of releasing crucial information to competitors. Partial revelation, on the other hand, makes companies lose optimality of their decisions with regard to the true state of the world, but enables them to commit to an aggressive policy of preclusion that increases the frequency of a favorable distribution of players' actions. Whereas partial revelation acts as a commitment device and precludes entry in otherwise competitive markets, inducing insensitivity of decisions with respect to local fundamentals, decentralized decision making is a dominant strategy when the profile of competitors is constant across markets or when a company cannot influence the extensive-margin entry decision of a competitor with more or less disclosure of information. Since decentralization acts as a way to correlate decisions with local market fundamentals, and running one single policy in multiple states of the world acts as a commitment device to deter competitors, I describe a trade-off between commitment over a distribution of actions and correlation with states of the world. 
Keywords:  Information Design, Oligopoly and Market Power 
JEL:  D43 D89 
Date:  2020–05–31 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:101496&r=all 
By:  Bruno Jullien; Wilfried Sand-Zantman 
Abstract:  We propose an analysis of platform competition based on the academic literature with a view towards competition policy. First, we discuss to what extent competition can emerge in digital markets and show which forms it can take. In particular, we underline the role of dynamics, but also of platform differentiation, consumer multi-homing, and beliefs, in allowing competition in platform markets. Second, we analyse competition policy issues and discuss how rules designed for standard markets perform in two-sided markets. We show that multi-sided externalities create new opportunities for anti-competitive conduct, often related to pricing and contractual imperfections. 
Keywords:  networks, platforms, markets, competition policy 
JEL:  L13 L41 L86 D82 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_8463&r=all 
By:  Matthew Harrison-Trainor 
Abstract:  In an election in which each voter ranks all of the candidates, we consider the head-to-head results between each pair of candidates and form a labeled directed graph, called the margin graph, which contains the margin of victory of each candidate over each of the other candidates. A central issue in developing voting methods is that there can be cycles in this graph, where candidate $\mathsf{A}$ defeats candidate $\mathsf{B}$, $\mathsf{B}$ defeats $\mathsf{C}$, and $\mathsf{C}$ defeats $\mathsf{A}$. In this paper we apply the central limit theorem, graph homology, and linear algebra to analyze how likely such situations are to occur for large numbers of voters. There is a large literature on analyzing the probability of having a majority winner; our analysis is more fine-grained. The result of our analysis is that in elections with the number of voters going to infinity, margin graphs that are more cyclic in a certain precise sense are less likely to occur. 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2009.02979&r=all 
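For a concrete instance of the cyclic margin graphs described in the Harrison-Trainor abstract above, the classic three-voter Condorcet profile can be checked directly. The following sketch (function names are illustrative, not from the paper) builds the margin graph from ranked ballots and tests for a majority 3-cycle:

```python
from itertools import combinations, permutations

def margin_graph(ballots, candidates):
    # A directed edge (a, b) with label m > 0 means a beats b
    # head-to-head by margin m.
    edges = {}
    for a, b in combinations(candidates, 2):
        m = sum(+1 if r.index(a) < r.index(b) else -1 for r in ballots)
        if m > 0:
            edges[(a, b)] = m
        elif m < 0:
            edges[(b, a)] = -m
    return edges

def has_three_cycle(edges, candidates):
    # Look for a directed 3-cycle a -> b -> c -> a among the
    # majority edges.
    return any((a, b) in edges and (b, c) in edges and (c, a) in edges
               for a, b, c in permutations(candidates, 3))

# Classic Condorcet paradox profile: each candidate beats one other
# candidate head-to-head by margin 1.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
g = margin_graph(ballots, ("A", "B", "C"))
print(has_three_cycle(g, ("A", "B", "C")))  # True
```

Here A beats B, B beats C, and C beats A, each by margin 1, so the margin graph is a directed cycle even though every individual ballot is a strict linear order.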
By:  Yi-Hsuan Lin 
Abstract:  In random expected utility (Gul and Pesendorfer, 2006), the distribution of preferences is uniquely recoverable from random choice. This paper shows through two examples that such uniqueness fails in general if risk preferences are random but do not conform to expected utility theory. In the first, non-uniqueness obtains even if all preferences are confined to the betweenness class (Dekel, 1986) and are suitably monotone. The second example illustrates random choice behavior consistent with random expected utility that is also consistent with random non-expected utility. On the other hand, we find that if risk preferences conform to weighted utility theory (Chew, 1983) and are monotone in first-order stochastic dominance, random choice again uniquely identifies the distribution of preferences. Finally, we argue that, depending on the domain of risk preferences, uniqueness may be restored if joint distributions of choice across a limited number of feasible sets are available. 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2009.04173&r=all 
By:  David Hirshleifer; Joshua Plotkin 
Abstract:  Biased information about the payoffs received by others can drive innovation, risk-taking, and investment booms. We study this cultural phenomenon using a model based on two premises. The first premise is a tendency for large successes, and the actions that lead to them, to be more salient to onlookers than small successes or failures. The second premise is selection neglect: the failure of observers to adjust for biased observation. In our model, each firm in sequence chooses to adopt or to reject a project that has two possible payoffs, one positive and one negative. The probability of success is higher in the high state of the world than in the low state. Each firm observes the payoffs received by past adopters before making its decision, but there is a chance that an adopter's outcome will be censored, especially if the payoff was negative. Failure to account for biased censoring causes firms to become overly optimistic, leading to irrational booms in adoption. Booms may eventually collapse, or they may last forever. We describe these effects as a form of cultural evolution, with adoption or rejection viewed as traits transmitted between firms. Evolution here is driven not only by differential copying of successful traits, but also by cognitive reasoning about which traits are more likely to succeed, quantified using the Price Equation to decompose the effects of mutation pressure and evolutionary selection. This account provides a new explanation for investment booms, merger and IPO waves, and waves of technological innovation. 
JEL:  D03 D21 D53 D83 D92 G02 G3 M2 O31 O35 
Date:  2020–08 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:27735&r=all 
By:  Markus Dertwinkel-Kalt; Jonas Frey; Mats Köster 
Abstract:  While many puzzles in static choices under risk can be explained by a preference for positive and an aversion toward negative skewness, little is known about the implications of such skewness preferences for decision making in dynamic problems. Indeed, skewness preferences might play an even bigger role in dynamic environments because, even if the underlying stochastic process is symmetric, the agent can endogenously create a skewed distribution of returns through the choice of her stopping strategy. Guided by salience theory, we theoretically and experimentally analyze the implications of skewness preferences for optimal stopping problems. We find strong support for all salience-based predictions in a laboratory experiment, and we verify that salience theory coherently explains skewness preferences revealed in static and dynamic decisions. Based on these findings, we conclude that the static salience model, unlike (static) cumulative prospect theory, can be reasonably applied to dynamic decision problems. Our results have important implications for common optimal stopping problems such as when to sell an asset, when to stop gambling, when to enter the job market or retire, and when to stop searching for a house or a spouse. 
Keywords:  salience theory, prospect theory, skewness preferences, behavioural stopping 
JEL:  D01 D81 D90 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_8496&r=all 
By:  David Bounie (Télécom ParisTech); Antoine Dubus (Télécom ParisTech); Patrick Waelbroeck (Ecole Nationale Supérieure des Télécommunications de Bretagne) 
Abstract:  This article investigates the strategies of a data broker selling information to one or to two competing firms. The data broker combines segments of the consumer demand that allow firms to third-degree price discriminate among consumers. We show that the data broker (1) sells information on consumers with the highest willingness to pay; (2) keeps consumers with low willingness to pay unidentified. The data broker strategically chooses to withhold information on consumer demand to soften competition between firms. These results hold under first-degree price discrimination, which is a limit case when information is perfect. 
Keywords:  Data broker, Information structure, Price discrimination 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal01794886&r=all 
By:  Moszoro, Marian; Spiller, Pablo 
Abstract:  Do public agents undertake socially inefficient activities to protect themselves? In politically contestable markets, part of the lack of flexibility in the design and implementation of the public procurement process reflects public agents' risk adaptations to limit the political hazards from opportunistic third parties: political opponents, competitors, and interest groups. Reduced flexibility limits the likelihood of opportunistic challenges, while externalizing the associated adaptation costs to the public at large. We study this matter and provide a comprehensive theoretical framework with empirically testable predictions. 
Keywords:  Transaction Costs, Bureaucracy, Procurement 
JEL:  D23 D73 H57 
Date:  2019–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:102692&r=all 
By:  Anqi Li; Lin Hu; Ilya Segal 
Abstract:  We study a model of electoral accountability and selection (EAS) in which voters with heterogeneous horizontal preferences pay limited attention to the incumbent's performance using personalized news aggregators. Extreme voters' news aggregators exhibit an own-party bias, which hampers their ability to discern good from bad performance. While this effect alone would undermine EAS, there is a countervailing effect stemming from the disagreement between extreme voters, which makes the centrist voter pivotal and could potentially improve EAS. Thus, increasing mass polarization and shrinking attention spans have ambiguous effects on EAS, whereas nuanced regulations of news aggregators unambiguously improve EAS and voter welfare. 
Date:  2020–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2009.03761&r=all 
By:  Shota Ichihashi 
Abstract:  I study a model of competing data intermediaries (e.g., online platforms and data brokers) that collect personal data from consumers and sell it to downstream firms. Competition in this market has a limited impact in terms of benefits to consumers: If intermediaries offer high compensation for their data, then consumers may share this data with multiple intermediaries, which lowers its downstream price and hurts intermediaries. As intermediaries anticipate this problem, they offer low compensation for this data. Competing intermediaries can earn a monopoly profit if and only if firms’ data acquisition unambiguously hurts consumers. I generalize the results to include arbitrary consumer preferences and study the information design of data intermediaries. The results provide new insights into when competition among data intermediaries benefits consumers, and they highlight the limits of competition in improving efficiency in the market for data. 
Keywords:  Economic models 
JEL:  D80 L12 
Date:  2020–07 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:2028&r=all 