on Microeconomics |
By: | Stephen Morris; Takashi Ui |
Abstract: | Consider an analyst who models a strategic situation using an incomplete information game. The true game may involve correlated, duplicated belief hierarchies, but the analyst lacks knowledge of the correlation structure and can only approximate each belief hierarchy. To make predictions in this setting, the analyst uses belief-invariant Bayes correlated equilibria (BIBCE) and seeks to determine which one is justifiable. We address this question by introducing the notion of robustness: a BIBCE is robust if, for every nearby incomplete information game, there exists a BIBCE close to it. Our main result provides a sufficient condition for robustness using a generalized potential function. In a supermodular potential game, a robust BIBCE is a Bayes Nash equilibrium, whereas this need not hold in other classes of games. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.19075 |
By: | Davide Carpentiere; Angelo Petralia |
Abstract: | We show that many models of choice can be alternatively represented as special cases of choice with limited attention (Masatlioglu, Nakajima, and Ozbay, 2012), and we single out the properties of the unobserved attention filters that explain the observed choices. Moreover, for each specification, we infer information about the decision maker's attention and preferences from irrational features of choice data. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.14879 |
By: | Siddarth Srinivasan; Ezra Karger; Michiel Bakker; Yiling Chen |
Abstract: | Common sense suggests that when individuals explain why they believe something, we can arrive at more accurate conclusions than when they simply state what they believe. Yet, there is no known mechanism that provides incentives to elicit explanations for beliefs from agents. This likely stems from the fact that standard Bayesian models make assumptions (like conditional independence of signals) that, in order to show efficient information aggregation, preempt the need for explanations. A natural justification for the value of explanations is that agents' beliefs tend to be drawn from overlapping sources of information, so agents' belief reports do not reveal all that needs to be known. Indeed, this work argues that rationales (explanations of an agent's private information) lead to more efficient aggregation by allowing agents to efficiently identify what information they share and what information is new. Building on this model of rationales, we present a novel 'deliberation mechanism' to elicit rationales from agents, in which truthful reporting of beliefs and rationales is a perfect Bayesian equilibrium. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.13410 |
By: | Kento Hashimoto; Keita Kuwahara; Reo Nonaka |
Abstract: | Finding the optimal (revenue-maximizing) mechanism to sell multiple items has been a prominent and notoriously difficult open problem. Existing work has mainly focused on deriving analytical results tailored to a particular class of problems (for example, Giannakopoulos (2015) and Yang (2023)). The present paper explores the possibility of a generally applicable methodology of Automated Mechanism Design (AMD). We first employ the deep learning algorithm developed by Dütting et al. (2023) to numerically solve small-sized problems; the results are then generalized by educated guesswork and finally rigorously verified through duality. By focusing on a single buyer who can consume one item, our approach leads to two key contributions: establishing a much simpler way to verify the optimality of a wide range of problems, and discovering a completely new result about the optimality of grand bundling. First, we show that selling each item at an identical price (or equivalently, selling the grand bundle of all items) is optimal for any number of items when the value distributions belong to a class that includes the uniform distribution as a special case. Different items are allowed to have different distributions. Second, for each number of items, we establish necessary and sufficient conditions that $c$ must satisfy for grand bundling to be optimal when the value distribution is uniform over an interval $[c, c + 1]$. This latter model does not satisfy the previously known sufficient conditions for the optimality of grand bundling (Haghpanah and Hartline, 2021). Our results are in contrast to the only known results for $n$ items (for any $n$), Giannakopoulos (2015) and Daskalakis et al. (2017), which consider a single buyer with additive preferences whose item values are narrowly restricted to be i.i.d. according to a uniform or exponential distribution. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.10086 |
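The abstract's first result concerns a single unit-demand buyer facing an identical posted price for every item. A minimal sketch of that pricing calculation, assuming i.i.d. U[0, 1] values (an illustrative special case we chose, not the paper's general distribution class): the buyer purchases their favorite item iff the maximum value is at least the price, so expected revenue is p(1 - p^n), which a grid search can maximize.

```python
def identical_price_revenue(p, n):
    # A single unit-demand buyer with n i.i.d. U[0,1] values buys their
    # favorite item iff the maximum value is at least p, so
    # P(sale) = 1 - p**n and expected revenue is p * (1 - p**n).
    return p * (1.0 - p ** n)

def best_identical_price(n, grid=10000):
    # Grid search over the identical posted price on [0, 1].
    best_p, best_r = 0.0, 0.0
    for k in range(grid + 1):
        p = k / grid
        r = identical_price_revenue(p, n)
        if r > best_r:
            best_p, best_r = p, r
    return best_p, best_r
```

For n = 1 this recovers the textbook monopoly price 1/2 with revenue 1/4. The paper's contribution is proving that such an identical price is optimal among all mechanisms for a whole class of (possibly non-identical) distributions, which this sketch does not attempt.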
By: | Yuichiro Kamada; Shunya Noda |
Abstract: | We develop a dynamic model of the Bitcoin market in which users set fees themselves and miners decide whether to operate and whom to validate based on those fees. Our analysis reveals how, in equilibrium, users adjust their bids in response to short-term congestion (i.e., the amount of pending transactions), how miners decide when to start operating based on the level of congestion, and how the interplay between these two factors shapes the overall market dynamics. Miners hold off on operating when congestion is mild, which harms social welfare. However, we show that a block reward (a fixed reward paid to miners upon block production) can mitigate these inefficiencies. We characterize the socially optimal block reward and demonstrate that it is always positive, suggesting that Bitcoin's halving schedule may be suboptimal. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.15505 |
By: | Philipp Denter |
Abstract: | I analyze an election involving two parties that are both office- and policy-motivated and that are ideologically polarized. One party may possess a valence advantage. The parties compete by proposing policies on a second policy issue. The analysis reveals a subtle relationship between ideological polarization and policy polarization. If ideologies are highly dispersed, there is a U-shaped relationship between ideological polarization and platform polarization. In contrast, if ideological dispersion is limited, increasing ideological polarization generally results in policy moderation. In both cases, valence plays no role in policy polarization. Finally, as in Buisseret and van Weelden (2022), adding ideological polarization adds nuance to the effects of increasing valence: both high- and low-valence candidates may adopt more extreme positions, depending on the electorate's degree of ideological polarization. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.14712 |
By: | Robert Day; Benjamin Lubin |
Abstract: | We use valid inequalities (cuts) of the binary integer program for winner determination in a combinatorial auction (CA) as "artificial items" that can be interpreted intuitively and priced to generate Artificial Walrasian Equilibria. While the lack of an integer programming gap is sufficient to guarantee a Walrasian equilibrium, we show that it does not guarantee a "price-match equilibrium" (PME), a refinement that we introduce, in which prices are justified by an iso-revenue outcome for any hypothetical removal of a single bidder. We prove the existence of PME for any CA and characterize their economic properties and computation. We implement minimally artificial PME rules and compare them with other prominent CA payment rules in the literature. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.15893 |
By: | Cesare Carissimo; Jan Nagler; Heinrich Nax |
Abstract: | We investigate the dynamics of Q-learning in a class of generalized Braess paradox games. These games represent an important class of network routing games where the associated stage-game Nash equilibria do not constitute social optima. We provide a full convergence analysis of Q-learning with varying parameters and learning rates. A wide range of phenomena emerges, broadly either settling into Nash or cycling continuously in ways reminiscent of "Edgeworth cycles" (i.e. jumping suddenly from Nash toward social optimum and then deteriorating gradually back to Nash). Our results reveal an important incentive incompatibility when thinking in terms of a meta-game being played by the designers of the individual Q-learners who set their agents' parameters. Indeed, Nash equilibria of the meta-game are characterized by heterogeneous parameters, and resulting outcomes achieve little to no cooperation beyond Nash. In conclusion, we suggest a novel perspective for thinking about regulation and collusion, and discuss the implications of our results for Bertrand oligopoly pricing games. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.18984 |
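A minimal sketch of the setup the abstract studies: independent epsilon-greedy Q-learners repeatedly choosing routes in a Braess network. The cost parameterization below is the common textbook one (unit-cost outer edges, load-proportional congestible edges, free bridge), not necessarily the paper's generalized class.

```python
import random

ROUTES = ("up", "down", "cross")

def route_costs(actions):
    # Braess network: outer edges cost 1, congestible edges cost (load / n),
    # and the bridge is free.  Route "cross" uses both congestible edges,
    # so all-cross is the (inefficient) stage-game Nash profile.
    n = len(actions)
    top = sum(a in ("up", "cross") for a in actions) / n    # load on first congestible edge
    bot = sum(a in ("down", "cross") for a in actions) / n  # load on second congestible edge
    cost = {"up": top + 1.0, "down": 1.0 + bot, "cross": top + bot}
    return [cost[a] for a in actions]

def q_learning(n_agents=4, rounds=20000, alpha=0.1, eps=0.05, seed=0):
    # Stateless (stage-game) Q-learning: each agent keeps one Q-value per
    # route, explores with probability eps, and otherwise picks the route
    # with the lowest estimated cost.
    rng = random.Random(seed)
    Q = [{r: 0.0 for r in ROUTES} for _ in range(n_agents)]
    for _ in range(rounds):
        acts = [rng.choice(ROUTES) if rng.random() < eps
                else min(Q[i], key=Q[i].get)
                for i in range(n_agents)]
        for i, (a, c) in enumerate(zip(acts, route_costs(acts))):
            Q[i][a] += alpha * (c - Q[i][a])
    return Q
```

With four agents, all-cross costs 2.0 per agent while an even up/down split costs 1.5, reproducing the paradox; varying alpha and eps per agent is the "meta-game" the abstract refers to.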
By: | Rachitesh Kumar; Omar Mouchtaki |
Abstract: | First-price auctions are one of the most popular mechanisms for selling goods and services, with applications ranging from display advertising to timber sales. Unlike their close cousin, the second-price auction, first-price auctions do not admit a dominant strategy. Instead, each buyer must design a bidding strategy that maps values to bids -- a task that is often challenging due to the lack of prior knowledge about competing bids. To address this challenge, we conduct a principled analysis of prior-independent bidding strategies for first-price auctions using worst-case regret as the performance measure. First, we develop a technique to evaluate the worst-case regret for (almost) any given value distribution and bidding strategy, reducing the complex task of ascertaining the worst-case competing-bid distribution to a simple line search. Next, building on our evaluation technique, we minimize worst-case regret and characterize a minimax-optimal bidding strategy for every value distribution. We achieve this by explicitly constructing a bidding strategy as a solution to an ordinary differential equation, and by proving its optimality for the intricate infinite-dimensional minimax problem underlying worst-case regret minimization. Our construction provides a systematic and computationally tractable procedure for deriving minimax-optimal bidding strategies. When the value distribution is continuous, it yields a deterministic strategy that maps each value to a single bid. We also show that our minimax strategy significantly outperforms the uniform-bid-shading strategies advanced by prior work. Our result allows us to precisely quantify, through minimax regret, the performance loss due to a lack of knowledge about competing bids. We leverage this to analyze the impact of the value distribution on the performance loss, and find that it decreases as the buyer's values become more dispersed. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.09907 |
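The regret evaluation the abstract describes can be illustrated in miniature for a single value: the ex-post regret of a bid against a realized highest competing bid, maximized by a one-dimensional search. This is a toy version under an assumed [0, 1] bid range and ties broken for the buyer; the paper's technique handles full value and competing-bid distributions and proves minimax optimality via an ODE construction.

```python
def worst_case_regret(v, b, grid=10001):
    # Ex-post regret of bidding b with value v in a first-price auction,
    # against a realized highest competing bid d:
    #   hindsight utility: win by matching d whenever profitable -> max(v - d, 0)
    #   achieved utility:  (v - b) if b >= d, else 0
    # The worst case over d in [0, 1] is found by a one-dimensional grid
    # search, echoing the line-search reduction in the abstract.
    worst = 0.0
    for k in range(grid):
        d = k / (grid - 1)
        hindsight = max(v - d, 0.0)
        achieved = (v - b) if b >= d else 0.0
        worst = max(worst, hindsight - achieved)
    return worst
```

For v = 1, a bid b suffers regret b when competing bids are negligible (overpayment) and nearly 1 - b when they sit just above b (a missed win), so the best single bid here is v/2 with worst-case regret v/2.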
By: | Nahed Eddai (GAEL - Laboratoire d'Economie Appliquée de Grenoble - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes, IÉSEG School Of Management [Puteaux]); Ani Guerdjikova (GAEL - Laboratoire d'Economie Appliquée de Grenoble - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes) |
Abstract: | We analyze the effect of strategic ambiguity and heterogeneous attitudes towards such ambiguity on optimal mitigation and adaptation. Pessimistic players tend to invest more in mitigation, while optimists favor adaptation. When adaptation is more expensive than mitigation, three types of equilibria can arise depending on the level and distribution of ambiguity aversion: (i) a mitigation equilibrium, (ii) an adaptation equilibrium, and (iii) a mixed equilibrium with both adaptation and mitigation. The interaction between ambiguity attitudes and wealth distribution plays a crucial role in aggregate environmental policy: a wealth transfer from pessimistic to optimistic agents increases total mitigation. A similar result applies to the choice of an optimal mitigation subsidy, which is shown to increase in optimism but decrease following a transfer of income towards the more optimistic players. Finally, we show that under strategic ambiguity, the introduction of a non-binding standard can impact agents' beliefs about their opponents' behavior and, as a result, lower total equilibrium mitigation. Our results highlight the need to account for attitudes towards strategic ambiguity in the design of economic policies targeting climate change. They may also shed light on the slow convergence of environmental policies across countries. |
Keywords: | Climate policy, Ambiguity, Heterogeneity, Choquet expected utility |
Date: | 2023–04 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-03590990 |
By: | Guanyu Jin; Roger J. A. Laeven; Dick den Hertog |
Abstract: | This paper studies distributionally robust optimization for a large class of risk measures with ambiguity sets defined by $\phi$-divergences. The risk measures are allowed to be non-linear in probabilities, are represented by a Choquet integral possibly induced by a probability weighting function, and include many well-known examples (for example, CVaR, Mean-Median Deviation, Gini-type). Optimization for this class of robust risk measures is challenging due to their rank-dependent nature. We show that for many types of probability weighting functions, including concave, convex, and inverse $S$-shaped, the robust optimization problem can be reformulated into a rank-independent problem. In the case of a concave probability weighting function, the problem can be further reformulated into a convex optimization problem with finitely many constraints that admits explicit conic representability for a collection of canonical examples. While the number of constraints in general scales exponentially with the dimension of the state space, we circumvent this curse of dimensionality and provide two types of algorithms that yield tight upper and lower bounds on the exact optimal value and are formally shown to converge asymptotically. This is illustrated numerically in two examples: a robust newsvendor problem and a robust portfolio choice problem. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.11780 |
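CVaR, one of the abstract's canonical examples, has a simple empirical form as the average of the worst alpha-fraction of losses; on an equally weighted sample it is the Choquet integral under the concave distortion g(t) = min(t/alpha, 1). A minimal sketch:

```python
def cvar(losses, alpha):
    # Empirical CVaR_alpha: the average of the worst alpha-fraction of the
    # sample, with fractional weight on the boundary observation.  This is
    # the Choquet integral of the empirical distribution under the concave
    # distortion g(t) = min(t / alpha, 1).
    if not 0.0 < alpha <= 1.0:
        raise ValueError("alpha must lie in (0, 1]")
    xs = sorted(losses, reverse=True)
    n = len(xs)
    m = n * alpha            # tail size in (possibly fractional) samples
    full = int(m)            # observations taken with full weight
    frac = m - full          # fractional weight on the next observation
    total = sum(xs[:full]) + (frac * xs[full] if full < n else 0.0)
    return total / m
```

At alpha = 1 this reduces to the mean; as alpha shrinks it approaches the maximum loss, tracing out the rank-dependent behavior that makes robust optimization over such measures hard.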
By: | Frank Yang; Kai Hao Yang |
Abstract: | We characterize the extreme points of multidimensional monotone functions from $[0, 1]^n$ to $[0, 1]$, as well as the extreme points of the set of one-dimensional marginals of these functions. These characterizations lead to new results in various mechanism design and information design problems, including public good provision with interdependent values; interim efficient bilateral trade mechanisms; asymmetric reduced form auctions; and optimal private private information structures. As another application, we also present a mechanism anti-equivalence theorem for two-agent, two-alternative social choice problems: a mechanism is payoff-equivalent to a deterministic, dominant-strategy incentive compatible (DIC) mechanism if and only if they are ex-post equivalent. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.18876 |
By: | Péter Bayer (TSE-R - Toulouse School of Economics - UT Capitole - Université Toulouse Capitole - UT - Université de Toulouse - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Ani Guerdjikova (GAEL - Laboratoire d'Economie Appliquée de Grenoble - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes) |
Abstract: | We analyze a model of endogenous two-sided network formation where players are affected by uncertainty about their opponents' decisions. We model this uncertainty using the notion of equilibrium under ambiguity as in Eichberger and Kelsey (2014). Unlike the set of Nash equilibria, the set of equilibria under ambiguity does not always include underconnected and thus inefficient networks such as the empty network. On the other hand, it may include networks with unreciprocated, one-way links, which comes with an efficiency loss as linking efforts are costly. We characterize equilibria under ambiguity and provide conditions under which increased player optimism comes with an increase in connectivity and realized benefits in equilibrium. Next, we analyze network realignment under a myopic updating process with optimistic shocks and derive a global stability condition of efficient networks in the sense of Kandori et al. (1993). Under this condition, a subset of the Pareto optimal equilibrium networks is reached, specifically, networks that maximize the players' total benefits of connections. |
Keywords: | Pessimism, Optimism, Pareto-optimality, Equilibrium selection, Ambiguity, Network formation |
Date: | 2024–08–22 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-03005107 |
By: | Yuxin Liu; M. Amin Rahimian |
Abstract: | In a differentially private sequential learning setting, agents introduce endogenous noise into their actions to maintain privacy. Applying this to a standard sequential learning model leads to different outcomes for continuous vs. binary signals. For continuous signals with a nonzero privacy budget, we introduce a novel smoothed randomized response mechanism that adapts noise based on distance to a threshold, unlike traditional randomized response, which applies uniform noise. This enables agents' actions to better reflect both private signals and observed history, accelerating asymptotic learning speed to $\Theta_{\epsilon}(\log(n))$, compared to $\Theta(\sqrt{\log(n)})$ in the non-private regime where the privacy budget is infinite. Moreover, in the non-private setting, the expected stopping time for the first correct decision and the number of incorrect actions diverge, meaning early agents may make mistakes for an unreasonably long period. In contrast, under a finite privacy budget $\epsilon \in (0, 1)$, both remain finite, highlighting a stark contrast between private and non-private learning. Learning with continuous signals in the private regime is more efficient, as smoothed randomized response enhances the log-likelihood ratio over time, improving information aggregation. Conversely, for binary signals, differential privacy noise hinders learning, as agents tend to use a constant randomized response strategy before an information cascade forms, reducing action informativeness and hampering the overall process. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.19525 |
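For intuition, here is classic binary randomized response together with a heavily hedged guess at the "smoothed" variant: a flip probability that decays with the signal's distance to the decision threshold. The `smoothed_response` function is our illustrative reading of the abstract, not the paper's mechanism, and this toy version does not by itself certify a formal epsilon-differential-privacy guarantee.

```python
import math
import random

def randomized_response(bit, eps, rng=random):
    # Classic binary randomized response: report the true bit with
    # probability e^eps / (1 + e^eps), which is eps-differentially private.
    p_true = math.exp(eps) / (1.0 + math.exp(eps))
    return bit if rng.random() < p_true else 1 - bit

def smoothed_response(signal, threshold, eps, rng=random):
    # Hypothetical "smoothed" variant (our reading of the abstract, NOT the
    # paper's exact mechanism): the intended action is the threshold
    # decision, and the flip probability decays with the signal's distance
    # to the threshold, so confident signals are flipped less often.
    action = int(signal >= threshold)
    p_flip = 1.0 / (1.0 + math.exp(eps * (1.0 + abs(signal - threshold))))
    return action if rng.random() >= p_flip else 1 - action
```

The qualitative point the abstract makes is visible here: distance-adaptive noise lets reported actions carry more of the continuous signal's information than a uniform flip probability would.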
By: | Jack Hirsch; Eric Tang |
Abstract: | We study the effect of providing information to agents who queue before a scarce good is distributed at a fixed time. When agents have quasi-linear utility in time spent waiting, they choose entry times as they would bids in a descending auction. An information designer can influence their behavior by providing updates about the length of the queue. Many natural information policies release "sudden bad news," which occurs when agents learn that the queue is longer than previously believed. We show that sudden bad news can cause assortative inefficiency by prompting a mass of agents to simultaneously attempt to join the queue. As a result, if the value distribution has an increasing (decreasing) hazard rate, information policies that release sudden bad news increase (decrease) total surplus, relative to releasing no information. When agents face entry costs to join the queue and the value distribution has a decreasing hazard rate, an information designer maximizes total surplus by announcing only when the queue is full. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.19553 |
By: | Agustin G. Bonifacio |
Abstract: | In the problem of fully allocating a social endowment of perfectly divisible commodities among a group of agents with multidimensional single-peaked preferences, we study strategy-proof rules that are not Pareto-dominated by other strategy-proof rules. Specifically, we: (i) establish a sufficient condition for a rule to be Pareto-undominated strategy-proof; (ii) introduce a broad class of rules satisfying this property by extending the family of "sequential allotment rules" to the multidimensional setting; and (iii) provide a new characterization of the "multidimensional uniform rule" involving Pareto-undominated strategy-proofness. Results (i) and (iii) generalize previous findings that were only applicable to the two-agent case. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.17682 |
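The one-dimensional uniform rule, the building block of the multidimensional rule the abstract characterizes, is easy to state: under excess demand, cap every agent's allotment at a common level lambda; under excess supply, floor it at lambda; in both cases lambda is chosen so allotments exactly exhaust the endowment. A bisection sketch:

```python
def uniform_rule(peaks, endowment, iters=100):
    # One-dimensional uniform rule for single-peaked preferences: under
    # excess demand allot min(peak, lam), under excess supply allot
    # max(peak, lam).  The total allotted is monotone in lam, so bisection
    # finds the level that exactly exhausts the endowment.
    excess_demand = sum(peaks) >= endowment
    if excess_demand:
        f = lambda lam: sum(min(p, lam) for p in peaks)
        lo, hi = 0.0, max(peaks)
    else:
        f = lambda lam: sum(max(p, lam) for p in peaks)
        lo, hi = 0.0, endowment
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(mid) < endowment:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    return [min(p, lam) if excess_demand else max(p, lam) for p in peaks]
```

For example, with peaks (0.2, 0.5, 0.9) and endowment 1.2, demand exceeds supply and lambda = 0.5, giving allotments (0.2, 0.5, 0.5). The multidimensional rule in the paper applies this logic commodity by commodity.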
By: | Xiaoyun Qiu; Liren Shan |
Abstract: | How should one jointly design tests and the arrangement of agencies to administer these tests (the testing procedure)? To answer this question, we analyze a model where a principal must use multiple tests to screen an agent with a multi-dimensional type, knowing that the agent can change his type at a cost. We identify a new tradeoff between setting difficult tests and using a difficult testing procedure. We compare two settings: (1) the agent only misrepresents his type (manipulation) and (2) the agent improves his actual type (investment). Examples include interviews, regulations, and data classification. We show that in the manipulation setting, combining stringent tests with an easy procedure, i.e., offering tests sequentially in a fixed order, is optimal. In contrast, in the investment setting, combining non-stringent tests with a difficult procedure, i.e., offering tests simultaneously, is optimal; under mild conditions, however, offering them sequentially in a random order may be just as good. Our results suggest that whether the agent manipulates or invests in his type determines which arrangement of agencies is optimal. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.12264 |
By: | Haris Aziz; Md. Shahidul Islam; Szilvia Pápai |
Abstract: | We consider a two-sided matching problem in which the agents on one side have dichotomous preferences and the other side, representing institutions, has strict preferences (priorities). It captures several important applications in matching market design in which the agents are only interested in getting matched to an acceptable institution, including centralized daycare assignment and healthcare rationing. We present a compelling new mechanism that satisfies many prominent and desirable properties, including individual rationality, maximum size, fairness, Pareto-efficiency on both sides, strategyproofness on both sides, non-bossiness, and polynomial running time. As a result, we answer the open question of whether there exists a mechanism that is agent-strategyproof, maximum, fair, and non-bossy. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.09962 |
By: | Roger F. Sewell |
Abstract: | In 1950 Arrow famously showed that there is no social welfare function satisfying four basic conditions. In 1976, on the other hand, Gibbard and Sonnenschein showed that there does exist a unique probabilistic social welfare method that satisfies a different set of strictly stronger conditions. In this paper we discuss a deterministic electoral method satisfying those same stronger conditions in an appropriate sense; it is not a counterexample to either of these theorems. We attach a simple reference implementation written in C with executables for Linux and Windows. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.07444 |