on Microeconomics |
By: | Joshua S. Gans |
Abstract: | This paper examines the relationship between Knightian uncertainty and Bayesian approaches to entrepreneurship. Using Bewley's formal model of uncertainty and incomplete preferences, it demonstrates that key predictions from Bayesian entrepreneurship remain robust when accounting for Knightian uncertainty, particularly regarding experimentation, venture financing, and strategic choice. The analysis shows that while Knightian uncertainty creates a more challenging decision environment, it maintains consistency with the three pillars of Bayesian entrepreneurship: heterogeneous beliefs, stronger entrepreneurial priors, and Bayesian updating. The paper also explores connections to effectuation theory, finding that formal uncertainty models can bridge different entrepreneurial methodologies. |
JEL: | D81 O30 |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33507 |
By: | Dirk Bergemann; Rahul Deb |
Abstract: | We study the robust sequential screening problem of a monopolist seller of multiple cloud computing services facing a buyer who has private information about his demand distribution for these services. At the time of contracting, the buyer knows the distribution of his demand of various services and the seller simply knows the mean of the buyer's total demand. We show that a simple "committed spend mechanism" is robustly optimal: it provides the seller with the highest profit guarantee against all demand distributions that have the known total mean demand. This mechanism requires the buyer to commit to a minimum total usage and a corresponding base payment; the buyer can choose the individual quantities of each service and is free to consume additional units (over the committed total usage) at a fixed marginal price. This result provides theoretical support for prevalent cloud computing pricing practices while highlighting the robustness of simple pricing schemes in environments with complex uncertainty. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.07168 |
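The payment structure of the committed spend mechanism described above can be sketched in a few lines; the commitment level, base payment, and marginal price below are illustrative assumptions, not values from the paper.

```python
def committed_spend_charge(usage, committed_units, base_payment, marginal_price):
    """Total charge under a committed spend contract: the buyer owes the base
    payment regardless of realized usage, plus a fixed marginal price for any
    units consumed beyond the committed total."""
    overage = max(0.0, usage - committed_units)
    return base_payment + marginal_price * overage

# Illustrative contract: commit to 100 units for a base payment of 80,
# with extra units priced at 1.2 each.
print(committed_spend_charge(60, 100, 80.0, 1.2))   # under-consumption: base only -> 80.0
print(committed_spend_charge(150, 100, 80.0, 1.2))  # 50 overage units -> 80 + 60 = 140.0
```

The buyer bears the risk of under-consumption (paying the base even at low usage), which is exactly what the minimum-usage commitment extracts.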
By: | Steven Kivinen; Christoph Kuzmics |
Abstract: | An informed Advisor and an uninformed Decision-Maker engage in repeated cheap talk communication in always new (stochastically independent) decision problems. They have a conflict of interest over which action should be implemented at least in some cases. Our main result is that, while the Decision-Maker's optimal payoff is attainable in some subgame perfect equilibrium (by force of the usual folk theorem), no payoff profile close to the Decision-Maker's optimal one is immune to renegotiation. Pareto efficient renegotiation-proof equilibria are typically attainable, and they entail a compromise between the Advisor and the Decision-Maker. This could take the form of the Advisor being truthful and the Decision-Maker not utilizing this information to their own full advantage, or the Advisor being somewhat liberal with the truth and the Decision-Maker, while fully aware of this, pretending to believe the Advisor. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.08296 |
By: | Dirk Bergemann; Alessandro Bonatti; Alex Smolin |
Abstract: | We develop an economic framework to analyze the optimal pricing and product design of Large Language Models (LLMs). Our framework captures several key features of LLMs: variable operational costs of processing input and output tokens; the ability to customize models through fine-tuning; and high-dimensional user heterogeneity in terms of task requirements and error sensitivity. In our model, a monopolistic seller offers multiple versions of LLMs through a menu of products. The optimal pricing structure depends on whether token allocation across tasks is contractible and whether users face scale constraints. Users with similar aggregate value-scale characteristics choose similar levels of fine-tuning and token consumption. The optimal mechanism can be implemented through menus of two-part tariffs, with higher markups for more intensive users. Our results rationalize observed industry practices such as tiered pricing based on model customization and usage levels.
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.07736 |
By: | Frederic Koessler; Marco Scarsini; Tristan Tomala |
Abstract: | This paper studies the implementation of Bayes correlated equilibria in symmetric Bayesian nonatomic games, using direct information structures and obedient strategies. The main results demonstrate full implementation in a class of games with positive cost externalities. Specifically, if the game admits a strictly convex potential in every state, then for every Bayes correlated equilibrium outcome with finite support and rational action distributions, there exists a direct information structure that implements this outcome under all equilibria. When the potential is only weakly convex, we show that all equilibria implement the same expected social cost. Additionally, all Bayes correlated equilibria, including those with infinite support or irrational action distributions, are approximately implemented. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.05920 |
By: | Christopher P Chambers; Federico Echenique |
Abstract: | We propose to relax traditional axioms in decision theory by incorporating a measurement, or degree, of satisfaction. For example, if the independence axiom of expected utility theory is violated, we can measure the size of the violation. This measure allows us to derive an approximation guarantee for a utility representation that aligns with the unmodified version of the axiom. A preference that almost satisfies the axiom is thus guaranteed to be near one with an exact utility representation. We develop specific examples drawn from expected utility theory under risk and uncertainty.
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.07126 |
By: | Dirk Bergemann; Michael C. Wang |
Abstract: | We consider a seller who offers services to a buyer with multi-unit demand. Prior to the realization of demand, the buyer receives a noisy signal of their future demand, and the seller can design contracts based on the reported value of this signal. Thus, the buyer can contract with the service provider for an unknown level of future consumption, such as in the market for cloud computing resources or software services. We characterize the optimal dynamic contract, extending the classic sequential screening framework to a nonlinear and multi-unit setting. The optimal mechanism gives discounts to buyers who report higher signals, but in exchange they must provide larger fixed payments. We then describe how the optimal mechanism can be implemented by two common forms of contracts observed in practice, the two-part tariff and the committed spend contract. Finally, we use extensions of our base model to shed light on policy-focused questions, such as analyzing how the optimal contract changes when the buyer faces commitment costs, or when there are liquid spot markets. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.08022 |
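The two-part tariff implementation mentioned above rests on self-selection from a menu; a minimal sketch with assumed numbers (the menu values are not from the paper) shows how a buyer expecting higher demand picks the larger fixed payment in exchange for a discount.

```python
# Illustrative menu of two-part tariffs: a higher fixed fee buys a lower
# per-unit price, so buyers sort themselves by expected demand.
menu = [
    {"fixed": 0.0,  "per_unit": 2.0},   # intended for low expected demand
    {"fixed": 50.0, "per_unit": 1.0},   # intended for high expected demand
]

def best_tariff(expected_usage):
    """Pick the tariff minimizing the buyer's expected total payment."""
    return min(menu, key=lambda t: t["fixed"] + t["per_unit"] * expected_usage)

print(best_tariff(20)["fixed"])   # 0.0: low type avoids the fixed fee
print(best_tariff(100)["fixed"])  # 50.0: high type pays the fee for the discount
```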
By: | Kensei Nakamura |
Abstract: | In Nash's (1950) seminal result, independence of irrelevant alternatives (IIA) plays a central role, but it has long been a subject of criticism in axiomatic bargaining theory. This paper examines the implication of a weak version of IIA in multi-valued bargaining solutions defined on non-convex bargaining problems. We show that if a solution satisfies weak IIA together with standard axioms, it can be represented, like the Nash solution, using weighted products of normalized utility levels. In this representation, the weight assigned to players for evaluating each agreement is determined endogenously through a two-stage optimization process. These solutions bridge the two dominant solution concepts, the Nash solution and the Kalai-Smorodinsky solution (Kalai and Smorodinsky, 1975). Furthermore, we consider special cases of these solutions in the context of bargaining over linear production technologies. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.06157 |
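The two solution concepts the paper bridges can be contrasted numerically on a toy bargaining problem; the frontier below is an illustrative assumption chosen so that the two solutions differ, not an example from the paper.

```python
# Grid search over the frontier of a non-symmetric bargaining set:
# utility pairs (u1, 1 - u1**2) for u1 in [0, 1], disagreement point (0, 0).
N = 100000
grid = [i / N for i in range(N + 1)]

# Nash solution: maximize the product of utility gains over the frontier.
nash = max(grid, key=lambda u: u * (1 - u**2))

# Kalai-Smorodinsky solution: the frontier point proportional to the ideal
# point (max u1, max u2) = (1, 1), i.e. where u1 equals u2 = 1 - u1**2 here.
ks = min(grid, key=lambda u: abs(u - (1 - u**2)))

print(round(nash, 3))  # 0.577 (= 1/sqrt(3))
print(round(ks, 3))    # 0.618 (the positive root of u**2 + u = 1)
```

On a symmetric convex problem the two solutions coincide; curvature like the above is what separates them, which is where an interpolating family of solutions has bite.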
By: | Claude Crampes (TSE-R - Toulouse School of Economics - UT Capitole - Université Toulouse Capitole - UT - Université de Toulouse - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Antonio Estache (ULB - Université libre de Bruxelles) |
Abstract: | The paper makes the case for a systematic ex-ante assessment of the distributional impact of the efficiency-enhancing innovations that regulatory sandboxes are expected to test. It shows how prior formal modeling of these innovations can reveal the need to control, in the sandbox design, for otherwise underestimated but predictable distributional effects. Failing to do so is likely to lead to underestimating efficiency-equity trade-offs and other distributional issues, which may undermine the political sustainability of otherwise welfare-enhancing innovations. Simple industrial organization models will often suffice to identify the potential issues at the design stage.
Keywords: | Regulatory sandboxes, Innovation, Governance, Anti-trust, Regulation, Efficiency, Equity, Quality standards |
Date: | 2025 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04953563 |
By: | Aram Grigoryan; Markus Möller |
Keywords: | Allocation Problems, Robust Market Design, Opaque Announcements, Strategy-Proofness, Transparency
Abstract: | We introduce a framework where the announcements of a clearinghouse about the allocation process are opaque in the sense that there can be more than one outcome compatible with a realization of type reports. We ask whether desirable properties can be ensured under opacity in a robust sense. A property can be guaranteed under an opaque announcement if every mechanism compatible with it satisfies the property. We find an impossibility result: strategy-proofness cannot be guaranteed under any level of opacity. In contrast, in some environments, weak Maskin monotonicity and non-bossiness can be guaranteed under opacity.
JEL: | C78 D47 D82 |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2025_653 |
By: | Vikram Manjunath; Alexander Westkamp |
Abstract: | We consider the balanced exchange of bundles of indivisible goods. We are interested in mechanisms that only rely on marginal preferences over individual objects even though agents' actual preferences compare bundles. Such mechanisms play an important role in two-sided matching but have not received much attention in exchange settings. We show that individually rational and Pareto-efficient marginal mechanisms exist if and only if no agent ever ranks any of her endowed objects lower than in her second indifference class. We call such marginal preferences trichotomous. In proving sufficiency, we define mechanisms, which are constrained versions of serial dictatorship, that achieve both desiderata based only on agents' marginal preferences. We then turn to strategy-proofness. An individually rational, efficient and strategy-proof mechanism, marginal or not, exists if and only if each agent's marginal preference is not only trichotomous but also contains no non-endowed object in her second indifference class. We call such marginal preferences strongly trichotomous. For such preferences, our mechanisms reduce to the class of strategy-proof mechanisms introduced in Manjunath and Westkamp (2018). For trichotomous preferences, while our variants of serial dictatorship are not strategy-proof, they are truncation-proof and not obviously manipulable (Troyan and Morrill, 2020).
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.06499 |
By: | Matthew Stephenson; Andrew Miller; Xyn Sun; Bhargav Annem; Rohan Parikh |
Abstract: | We study a fundamental challenge in the economics of innovation: an inventor must reveal details of a new idea to secure compensation or funding, yet such disclosure risks expropriation. We present a model in which a seller (inventor) and buyer (investor) bargain over an information good under the threat of hold-up. In the classical setting, the seller withholds disclosure to avoid misappropriation, leading to inefficiency. We show that trusted execution environments (TEEs) combined with AI agents can mitigate and even fully eliminate this hold-up problem. By delegating the disclosure and payment decisions to tamper-proof programs, the seller can safely reveal the invention without risking expropriation, achieving full disclosure and an efficient ex post transfer. Moreover, even if the invention's value exceeds a threshold that TEEs can fully secure, partial disclosure still improves outcomes compared to no disclosure. Recognizing that real AI agents are imperfect, we model "agent errors" in payments or disclosures and demonstrate that budget caps and acceptance thresholds suffice to preserve most of the efficiency gains. Our results imply that cryptographic or hardware-based solutions can function as an "ironclad NDA", substantially mitigating the fundamental disclosure-appropriation paradox first identified by Arrow (1962) and Nelson (1959). This has far-reaching policy implications for fostering R&D, technology transfer, and collaboration.
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.07924 |
By: | Shengyuan Huang; Wenjun Mei; Xiaoguang Yang; Zhigang Cao |
Abstract: | This paper studies allocation mechanisms in max-flow games with players' capacities as private information. We first show that no core-selection mechanism is truthful: there may exist a player whose payoff increases if she under-reports her capacity when a core-selection mechanism is adopted. We then introduce five desirable properties for mechanisms in max-flow games: DSIC (truthful reporting is a dominant strategy), SIR (individual rationality and positive payoff for each player contributing positively to at least one coalition), SP (no edge has an incentive to split into parallel edges), MP (no parallel edges have incentives to merge), and CM (a player's payoff does not decrease as another player's capacity and max-flow increase). While the Shapley value mechanism satisfies DSIC and SIR, it fails to meet SP, MP and CM. We propose a new mechanism based on minimal cuts that satisfies all five properties.
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.08248 |
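The Shapley value mechanism discussed above can be computed directly on a tiny max-flow game; the network and capacities below are illustrative assumptions, not examples from the paper.

```python
from itertools import permutations

# 3-player max-flow game: player 1 owns a direct edge s->t with capacity 3;
# players 2 and 3 own the two edges of a path s->a->t with capacities 2 and 4.
# A coalition's value is the max flow it can route using only members' edges.
v = {
    frozenset(): 0, frozenset({1}): 3, frozenset({2}): 0, frozenset({3}): 0,
    frozenset({1, 2}): 3, frozenset({1, 3}): 3, frozenset({2, 3}): 2,
    frozenset({1, 2, 3}): 5,
}

def shapley(players, v):
    """Shapley value: average marginal contribution over all join orders."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            phi[i] += v[coalition | {i}] - v[coalition]
            coalition |= {i}
    return {i: phi[i] / len(orders) for i in players}

print(shapley([1, 2, 3], v))  # {1: 3.0, 2: 1.0, 3: 1.0}
```

Note that players 2 and 3 each receive 1 although the path they jointly own carries 2 units of flow; splitting or merging edges changes such payoffs, which is what the SP and MP properties rule out.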
By: | Kyung Hwan Baik; Dongwoo Lee |
Abstract: | We study contests in which two groups compete to win (or not to win) a group-specific public-good/bad prize. Each player in the groups can exert two types of effort: one to help her own group win the prize, and one to sabotage her own group's chances of winning it. The players in the groups choose their effort levels simultaneously and independently. We introduce a specific form of contest success function that determines each group's probability of winning the prize, taking into account players' sabotage activities. We show that two types of pure-strategy Nash equilibrium occur, depending on parameter values: one without sabotage activities and one with sabotage activities. In the first type, only the highest-valuation player in each group expends positive effort, whereas, in the second type, only the lowest-valuation player in each group expends positive effort.
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.08100 |
By: | Aram Grigoryan; Markus Möller |
Keywords: | Matching and Allocation, Auditability, Deviation Detection
Abstract: | In centralized mechanisms and platforms, participants do not fully observe each other's type reports. Hence, if there is a deviation from the promised mechanism, participants may be unable to detect it. We formalize a notion of auditability that captures how easy or hard it is to detect deviations from a mechanism. We find a stark contrast between the auditabilities of prominent mechanisms. We also provide tight characterizations of maximally auditable classes of allocation mechanisms.
JEL: | C78 D47 D82 |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2025_652 |
By: | Yu Gui; Bahar Taşkesen
Abstract: | We introduce the Statistical Equilibrium of Optimistic Beliefs (SE-OB) for the mixed extension of finite normal-form games, drawing insights from discrete choice theory. Departing from the conventional best responders of Nash equilibrium and the better responders of quantal response equilibrium, we reconceptualize player behavior as that of optimistic better responders. In this setting, the players assume that their expected payoffs are subject to random perturbations, and form optimistic beliefs by selecting the distribution of perturbations that maximizes their highest anticipated payoffs among belief sets. In doing so, SE-OB subsumes and extends the existing equilibrium concepts. The player's view of the existence of perturbations in their payoffs reflects an inherent risk sensitivity, and thus, each player is equipped with a risk-preference function for every action. We demonstrate that every Nash equilibrium of a game, where expected payoffs are regularized with the risk-preference functions of the players, corresponds to an SE-OB in the original game, provided that the belief sets coincide with the feasible set of a multi-marginal optimal transport problem with marginals determined by risk-preference functions. Building on this connection, we propose an algorithm for repeated games among risk-sensitive players under optimistic beliefs when only zeroth-order feedback is available. We prove that, under appropriate conditions, the algorithm converges to an SE-OB. Our convergence analysis offers key insights into the strategic behaviors for equilibrium attainment: a player's risk sensitivity enhances equilibrium stability, while forming optimistic beliefs in the face of ambiguity helps to mitigate overly aggressive strategies over time. As a byproduct, our approach delivers the first generic convergent algorithm for general-form structural QRE beyond the classical logit-QRE.
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.09569 |
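The logit-QRE that the abstract cites as the classical special case can be computed by damped fixed-point iteration on a small game; the payoff matrices and precision parameter below are illustrative assumptions, and this sketch is not the paper's algorithm.

```python
import math

def logit_qre_2x2(A, B, lam=2.0, iters=5000, step=0.1):
    """Damped fixed-point iteration for a logit quantal response equilibrium
    of a 2x2 game. A[i][j] is the row payoff and B[i][j] the column payoff
    when row plays i and column plays j; lam is the logit precision
    (lam -> infinity approaches exact best response)."""
    p = q = 0.5  # prob. row plays action 0, prob. column plays action 0
    for _ in range(iters):
        # expected payoff of each action against the opponent's current mixture
        u0 = q * A[0][0] + (1 - q) * A[0][1]
        u1 = q * A[1][0] + (1 - q) * A[1][1]
        w0 = p * B[0][0] + (1 - p) * B[1][0]
        w1 = p * B[0][1] + (1 - p) * B[1][1]
        p_new = math.exp(lam * u0) / (math.exp(lam * u0) + math.exp(lam * u1))
        q_new = math.exp(lam * w0) / (math.exp(lam * w0) + math.exp(lam * w1))
        p += step * (p_new - p)   # damped update for stability
        q += step * (q_new - q)
    return p, q

# An asymmetric matching-pennies-like game (illustrative payoffs):
A = [[4, 0], [0, 1]]   # row wants to match, with a bigger prize on (0, 0)
B = [[0, 1], [1, 0]]   # column wants to mismatch
p, q = logit_qre_2x2(A, B)
print(round(p, 3), round(q, 3))
```

Unlike a Nash equilibrium, the logit-QRE shifts smoothly with the payoff asymmetry, since better (not best) responses put positive weight on every action.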
By: | Kensei Nakamura |
Abstract: | This paper examines normatively acceptable criteria for evaluating social states when individuals are responsible for their skills or productivity and these factors should be accounted for. We consider social choice rules over sets of feasible utility vectors à la Nash's (1950) bargaining problem. First, we identify necessary and sufficient conditions for choice rules to be rationalized by welfare orderings or functions over ability-normalized utility vectors. These general results provide a foundation for exploring novel choice rules with the normalization and providing their axiomatic foundations. By adding natural axioms, we propose and axiomatize a new class of choice rules, which can be viewed as combinations of three key principles: distribution according to individuals' abilities, utilitarianism, and egalitarianism. Furthermore, we show that at the axiomatic level, this class of choice rules is closely related to the classical bargaining solution introduced by Kalai and Smorodinsky (1975).
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.04989 |
By: | Zhe Zhang; Young Kwark; Srinivasan Raghunathan |
Abstract: | The use of sponsored product listings in prominent positions of consumer search results has made e-commerce platforms, which traditionally serve as marketplaces for third-party sellers to reach consumers, a major medium for those sellers to advertise their products. On the other hand, regulators have expressed anti-trust concerns about an e-commerce platform's integration of marketplace and advertising functions; they argue that such integration benefits the platform and sellers at the expense of consumers and society and have proposed separating the advertising function from those platforms. We show, contrary to regulators' concerns, that separating the advertising function from the e-commerce platform benefits the sellers, hurts the consumers, and does not necessarily benefit the social welfare. A key driver of our findings is that an independent advertising firm, which relies solely on advertising revenue, has at most as strong an economic incentive to improve targeting precision as an e-commerce platform that also serves as the advertising medium, even if both have the same ability to target consumers. This is because an improvement in targeting precision enhances the marketplace commission by softening the price competition between sellers, but hurts the advertising revenue by softening the competition for prominent ad positions.
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.08548 |
By: | Yan Dai; Moise Blanchard; Patrick Jaillet |
Abstract: | We study a repeated resource allocation problem with strategic agents where monetary transfers are disallowed and the central planner has no prior information on agents' utility distributions. In light of Arrow's impossibility theorem, acquiring information about agent preferences through some form of feedback is necessary. We assume that the central planner can request powerful but expensive audits on the winner in any round, revealing the true utility of the winner in that round. We design a mechanism achieving $T$-independent $O(K^2)$ regret in social welfare while requesting $O(K^3 \log T)$ audits in expectation, where $K$ is the number of agents and $T$ is the number of rounds. We also show an $\Omega(K)$ lower bound on the regret and an $\Omega(1)$ lower bound on the number of audits when having low regret. Algorithmically, we show that incentive-compatibility can be mostly enforced with an accurate estimation of the winning probability of each agent under truthful reporting. To do so, we impose future punishments and introduce a *flagging* component, allowing agents to flag any biased estimate (we show that doing so aligns with individual incentives). On the technical side, without monetary transfers and distributional information, the central planner cannot ensure that truthful reporting is exactly an equilibrium. Instead, we characterize the equilibrium via a reduction to a simpler *auxiliary game*, in which agents cannot strategize until late in the $T$ rounds of the allocation problem. The tools developed therein may be of independent interest for other mechanism design problems in which the revelation principle cannot be readily applied. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.08412 |
By: | Eden Hartman; Erel Segal-Halevi; Biaoshuai Tao |
Abstract: | The classic notion of truthfulness requires that no agent has a profitable manipulation -- an untruthful report that, for some combination of reports of the other agents, increases her utility. This strong notion implicitly assumes that the manipulating agent either knows what all other agents are going to report, or is willing to take the risk and act as-if she knows their reports. Without knowledge of the others' reports, most manipulations are risky -- they might decrease the manipulator's utility for some other combinations of reports by the other agents. Accordingly, a recent paper (Bu, Song and Tao, "On the existence of truthful fair cake cutting mechanisms", Artificial Intelligence 319 (2023), 103904) suggests a relaxed notion, which we refer to as risk-avoiding truthfulness (RAT), which requires only that no agent can gain from a safe manipulation -- one that is sometimes beneficial and never harmful. Truthfulness and RAT are two extremes: the former considers manipulators with complete knowledge of others, whereas the latter considers manipulators with no knowledge at all. In reality, agents often know about some -- but not all -- of the other agents. This paper introduces the RAT-degree of a mechanism, defined as the smallest number of agents whose reports, if known, may allow another agent to safely manipulate, or $n$ if there is no such number. This notion interpolates between classic truthfulness (degree $n$) and RAT (degree at least $1$): a mechanism with a higher RAT-degree is harder to manipulate safely. To illustrate the generality and applicability of this concept, we analyze the RAT-degree of prominent mechanisms across various social choice settings, including auctions, indivisible goods allocations, cake-cutting, voting, and stable matchings.
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.18805 |
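The classic truthfulness notion that the RAT-degree relaxes can be checked by brute force in a small discrete setting; the second-price auction below is a standard textbook illustration (an assumption of this sketch, not an example taken from the paper).

```python
from itertools import product

# Brute-force check of classic truthfulness in a sealed-bid second-price
# auction with two bidders on a small discrete bid grid.
bids = range(0, 6)

def utility(value, my_bid, other_bid):
    # Second-price rule: win only with the strictly higher bid (ties go to
    # the opponent, for simplicity) and pay the opponent's bid.
    if my_bid > other_bid:
        return value - other_bid
    return 0

# Truthfulness: for every value, bidding the value is optimal against every
# possible opponent bid -- no profitable manipulation exists at all.
truthful = all(
    utility(v, v, b) >= utility(v, m, b)
    for v, m, b in product(bids, bids, bids)
)
print(truthful)  # True
```

The RAT-degree refines this all-quantified check by asking how many opponents' reports a manipulator would need to know before some deviation becomes safe.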
By: | Shitong Wang |
Abstract: | This paper revisits the limitations of the Median Voter Theorem and introduces a novel framework to analyze the optimal economic ideological positions of political parties. By incorporating Nash equilibrium, we examine the mechanisms and elasticity of ideal deviation costs, voter distribution, and policy feasibility. Our findings show that an increase in a party's ideal deviation cost shifts its optimal ideological position closer to its ideal point. Additionally, if a voter distribution can be expressed as a positive linear combination of two other distributions, its equilibrium point must lie within the interval defined by the equilibrium points of the latter two. We also find that decreasing feasibility costs incentivize governments, regardless of political orientation, to increase fiscal expenditures (e.g., welfare) and reduce fiscal revenues (e.g., taxes). This dynamic highlights the fiscal pressures commonly faced by democratic nations under globalization. Moreover, we demonstrate that even with uncertain voter distributions, parties can identify optimal ideological positions to maximize their utility. Lastly, we explain why the proposed framework cannot be applied to community ideologies due to their fundamentally different nature. This study provides new theoretical insights into political strategies and establishes a foundation for future empirical research. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.06562 |
By: | David Easley; Yoav Kolumbus; Eva Tardos |
Abstract: | We analyze the performance of heterogeneous learning agents in asset markets with stochastic payoffs. Our agents aim to maximize the expected growth rate of their wealth but have different theories on how to learn this best. We focus on comparing Bayesian and no-regret learners in market dynamics. Bayesian learners with a prior over a finite set of models that assign positive prior probability to the correct model have posterior probabilities that converge exponentially to the correct model. Consequently, they survive even in the presence of agents who invest according to the correct model of the stochastic process. Bayesians with a continuum prior converge to the correct model at a rate of $O((\log T)/T)$. Online learning theory provides no-regret algorithms for maximizing the log of wealth in this setting, achieving a worst-case regret bound of $O(\log T)$ without assuming a steady underlying stochastic process but comparing to the best fixed investment rule. This regret, as we observe, is of the same order of magnitude as that of a Bayesian learner with a continuum prior. However, we show that even such low regret may not be sufficient for survival in asset markets: an agent can have regret as low as $O(\log T)$, but still vanish in market dynamics when competing against agents who invest according to the correct model or even against a perfect Bayesian with a finite prior. On the other hand, we show that Bayesian learning is fragile, while no-regret learning requires less knowledge of the environment and is therefore more robust. Any no-regret learner will drive out of the market an imperfect Bayesian whose finite prior or update rule has even small errors. We formally establish the relationship between notions of survival, vanishing, and market domination studied in economics and the framework of regret minimization, thus bridging these theories. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.08597 |
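The exponential posterior convergence claimed above for a finite prior is easy to reproduce in simulation; the two candidate models and sample size below are illustrative assumptions.

```python
import math, random

random.seed(0)

# Two candidate models of a binary asset payoff: heads-probability 0.6 (the
# true model) versus 0.4. A Bayesian with a flat finite prior over both puts
# positive mass on the truth, and her log posterior odds grow linearly in T,
# i.e. the posterior converges exponentially to the correct model.
true_p, alt_p = 0.6, 0.4
log_odds = 0.0  # log P(true model | data) - log P(alt model | data)
T = 2000
for _ in range(T):
    x = 1 if random.random() < true_p else 0
    log_odds += math.log(true_p if x else 1 - true_p)
    log_odds -= math.log(alt_p if x else 1 - alt_p)

posterior_true = 1 / (1 + math.exp(-log_odds))
print(posterior_true > 0.999)  # True: the posterior concentrates on the truth
```

The expected slope of the log odds is the Kullback-Leibler divergence between the models (about 0.081 per observation here), which is the exponential rate the abstract contrasts with the $O((\log T)/T)$ rate of a continuum prior.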
By: | Aurélien Baillon (EM - EMLyon Business School, GATE Lyon Saint-Étienne - Groupe d'Analyse et de Théorie Economique Lyon - Saint-Etienne - UL2 - Université Lumière - Lyon 2 - UJM - Université Jean Monnet - Saint-Étienne - EM - EMLyon Business School - CNRS - Centre National de la Recherche Scientifique); Han Bleichrodt (Universidad de Alicante); Chen Li (Erasmus University Rotterdam); Peter P. Wakker (Erasmus University Rotterdam) |
Abstract: | This paper introduces source theory, a new theory for decision under ambiguity (unknown probabilities). It shows how Savage's subjective probabilities, with source-dependent nonlinear weighting functions, can model Ellsberg's ambiguity. It can do so in Savage's framework of state-contingent assets, permits nonexpected utility for risk, and avoids multistage complications. It is tractable, shows ambiguity attitudes through simple graphs, is empirically realistic, and can be used prescriptively. We provide a new tool to analyze weighting functions: pmatchers. They give Arrow–Pratt-like transformations but operate "within" rather than "outside" functions. We further show that ambiguity perception and inverse S probability weighting, seemingly unrelated concepts, are two sides of the same "insensitivity" coin. |
Keywords: | subjective beliefs, ambiguity aversion, Ellsberg paradox, source of uncertainty |
Date: | 2025–02–12 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04964898 |
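The inverse-S probability weighting that the abstract ties to ambiguity perception can be illustrated with Prelec's (1998) weighting function, a standard inverse-S example used here as an assumption; it is not the specific functional form of source theory.

```python
import math

def prelec_weight(p, alpha=0.65):
    """Prelec probability weighting function w(p) = exp(-(-ln p)**alpha).
    For alpha < 1 it is inverse-S shaped: small probabilities are
    overweighted and large probabilities underweighted -- the
    'insensitivity' pattern toward the extremes."""
    if p == 0:
        return 0.0
    return math.exp(-((-math.log(p)) ** alpha))

print(prelec_weight(0.01) > 0.01)  # True: rare events are overweighted
print(prelec_weight(0.99) < 0.99)  # True: near-certain events are underweighted
```

Making alpha source-dependent is one simple way to capture the paper's idea that different sources of uncertainty induce different degrees of insensitivity.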
By: | Harry Pei |
Abstract: | Each period, two players bargain over a unit of surplus. Each player chooses between remaining flexible and committing to a take-it-or-leave-it offer at a cost. If players' committed demands are incompatible, then the current-period surplus is destroyed in the conflict. When both players are flexible, the surplus is split according to the status quo, which is the division in the last period where there was no conflict. We show that when players are patient and the cost of commitment is small, there exists a class of symmetric Markov Perfect equilibria that are asymptotically efficient and renegotiation proof, in which players commit to fair demands in almost all periods.
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.01053 |
By: | Zhiming Feng |
Abstract: | The optimal bundling problem is the design of bundles (and prices) that maximize the expected virtual surplus, constrained by individual rationality and incentive compatibility. I focus on a relaxed, constraint-free problem that maximizes the expected virtual surplus with deterministic allocations. I show that when the difference of two virtual value functions for any pair of stochastic bundles is single-crossing, the minimal optimal menu for the relaxed problem is the set of all undominated bundles, where dominance refers to the natural comparison of virtual values across the type space. Under single-crossing differences, this comparison simplifies to comparisons on boundary types, enabling an algorithm to generate the minimal optimal menu. I further show that when the difference of two virtual value functions for any pair of stochastic bundles is monotonic, there exists an endogenous order among deterministic bundles. This order justifies that the minimal optimal menu for the relaxed problem is also optimal for the nonrelaxed problem. I also characterize the conditions under which consumers' valuation functions and distributions allow the virtual value functions to exhibit monotonic differences. This characterization, combined with the dominance notion, solves related bundling problems such as distributional robustness, bundling with additive values, and the optimality of pure, nested, and relatively novel singleton-bundle menus. As a side result, I discover new properties of single-crossing and monotonic differences in convex environments.
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.07863 |
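The dominance comparison underlying the abstract above (a bundle is dropped from the menu if another bundle has a weakly higher virtual value at every type and a strictly higher one somewhere) is easy to sketch on a finite type grid. The function name, the toy virtual values, and the discretization are illustrative assumptions, not the paper's construction.

```python
def undominated_bundles(bundles, types, virtual_value):
    """Return the bundles not dominated by any other bundle, where b' dominates
    b if virtual_value(b', t) >= virtual_value(b, t) at every type t, with
    strict inequality at some t (finite type grid; illustrative only)."""
    result = []
    for b in bundles:
        dominated = False
        for other in bundles:
            if other == b:
                continue
            weakly = all(virtual_value(other, t) >= virtual_value(b, t)
                         for t in types)
            strictly = any(virtual_value(other, t) > virtual_value(b, t)
                           for t in types)
            if weakly and strictly:
                dominated = True
                break
        if not dominated:
            result.append(b)
    return result

# Toy example: virtual value of a bundle is t * size - penalty, with
# purely illustrative sizes and penalties.
bundles = ["A", "B", "AB"]
sizes = {"A": 1, "B": 1, "AB": 2}
penalty = {"A": 1, "B": 2, "AB": 2}
vv = lambda b, t: t * sizes[b] - penalty[b]
```

In this toy instance, "A" dominates "B" and the grand bundle "AB" dominates "A", so the minimal menu collapses to the grand bundle alone; with single-crossing differences the paper shows this comparison need only be run at boundary types.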
By: | Keisuke Bando; Kenzo Imamura; Yasushi Kawase |
Abstract: | Choice correspondences are crucial in decision-making, especially when faced with indifferences or ties. While tie-breaking can transform a choice correspondence into a choice function, it often introduces inefficiencies. This paper introduces a novel notion of path-independence (PI) for choice correspondences, extending the existing concept of PI for choice functions. Intuitively, a choice correspondence is PI if any consistent tie-breaking produces a PI choice function. This new notion yields several important properties. First, PI choice correspondences are rationalizable, meaning they can be represented as the maximization of a utility function. This extends a core feature of PI in choice functions. Second, we demonstrate that the set of choices selected by a PI choice correspondence for any subset forms a generalized matroid, revealing that PI choice correspondences possess a desirable structural property. Third, we establish that choice correspondences rationalized by ordinally concave functions inherently satisfy the PI condition. This aligns with recent findings that a choice function satisfies PI if and only if it can be rationalized by an ordinally concave function. Building on these theoretical foundations, we explore stable and efficient matchings under PI choice correspondences. Specifically, we investigate constrained efficient matchings, which are efficient (for one side of the market) within the set of stable matchings. Under responsive choice correspondences, such matchings are characterized by cycles. However, this cycle-based characterization fails in more general settings. We demonstrate that when the choice correspondence of each school satisfies both PI and monotonicity conditions, a similar cycle-based characterization is restored. These findings provide new insights into matching theory and its practical applications. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.09265 |
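For choice functions, path independence is the condition C(A ∪ B) = C(C(A) ∪ B): choosing from a union can be done by first shortlisting from one part. A brute-force checker over a small universe makes the condition concrete; the function names and the two example rules below are illustrative, not from the paper.

```python
from itertools import combinations

def nonempty_subsets(universe):
    items = list(universe)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def is_path_independent(choice, universe):
    """Check C(A | B) == C(C(A) | B) for all nonempty subsets A, B, where
    `choice` maps a frozenset to the frozenset of chosen elements."""
    subsets = nonempty_subsets(universe)
    return all(choice(a | b) == choice(choice(a) | b)
               for a in subsets for b in subsets)

# Maximizing a utility function is path independent...
utility = {"x": 3, "y": 2, "z": 1}
maximize = lambda s: frozenset({max(s, key=utility.__getitem__)})

# ...while picking the second-best element (when available) is not:
second_best = (lambda s: frozenset({sorted(s, key=utility.__getitem__)[-2]})
               if len(s) > 1 else frozenset(s))
```

The failure of `second_best` is instructive: shortlisting A = {x, y} to {y} and then choosing from {y, z} yields {z}, while choosing directly from {x, y, z} yields {y}. The paper's contribution is to lift this condition from functions to correspondences via consistent tie-breaking.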
By: | Raj Pabari; Udaya Ghai; Dominique Perrault-Joncas; Kari Torkkola; Orit Ronen; Dhruv Madeka; Dean Foster; Omer Gottesman |
Abstract: | We introduce and analyze a variation of the Bertrand game in which the revenue is shared between two players. This game models situations in which one economic agent can provide goods/services to consumers either directly or through an independent seller/contractor in return for a share of the revenue. We analyze the equilibria of this game, and show how they can predict different business outcomes as a function of the players' costs and the transferred revenue shares. Importantly, we identify game parameters for which independent sellers can simultaneously increase the original player's payoff while increasing consumer surplus. We then extend the shared-revenue Bertrand game by considering the shared revenue proportion as an action and giving the independent seller an outside option to sell elsewhere. This work constitutes a first step towards a general theory for how partnership and sharing of resources between economic agents can lead to more efficient markets and improve the outcomes of both agents as well as consumers. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.07952 |
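A discretized version of the shared-revenue Bertrand game can be explored by brute force. This is a sketch under assumed conventions (unit-demand consumer with a value cap, ties split the market, the winner transfers a fixed share of revenue to the other player); the function names, sharing rule, and grid are illustrative, not the paper's specification.

```python
def payoffs(p1, p2, c1, c2, share, value):
    """Shared-revenue Bertrand stage payoffs: the lower-priced seller serves
    the consumer (if price <= value), pays its cost, and transfers `share`
    of the revenue to the other player; ties split the market."""
    if min(p1, p2) > value:
        return (0.0, 0.0)
    if p1 < p2:
        return ((1 - share) * p1 - c1, share * p1)
    if p2 < p1:
        return (share * p2, (1 - share) * p2 - c2)
    # Tie: each player serves half the market in each role.
    return (0.5 * ((1 - share) * p1 - c1) + 0.5 * share * p2,
            0.5 * ((1 - share) * p2 - c2) + 0.5 * share * p1)

def pure_nash(grid, c1, c2, share, value):
    """Enumerate pure-strategy Nash equilibria on a finite price grid."""
    equilibria = []
    for p1 in grid:
        for p2 in grid:
            u1, u2 = payoffs(p1, p2, c1, c2, share, value)
            best1 = all(u1 >= payoffs(q, p2, c1, c2, share, value)[0]
                        for q in grid)
            best2 = all(u2 >= payoffs(p1, q, c1, c2, share, value)[1]
                        for q in grid)
            if best1 and best2:
                equilibria.append((p1, p2))
    return equilibria
```

With `share = 0` and zero costs this reduces to classical Bertrand competition, where marginal-cost pricing is an equilibrium; raising `share` changes the loser's payoff from zero to a slice of the winner's revenue, which is the channel the paper exploits to make an independent seller profitable for both sides.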
By: | Ruiqin Wang; Cagil Kocyigit; Napat Rujeerapaiboon |
Abstract: | We study a mechanism design problem where a seller aims to allocate a good to multiple bidders, each with a private value. The seller supports or favors a specific group, referred to as the minority group. Specifically, the seller requires that allocations to the minority group are at least a predetermined fraction (equity level) of those made to the rest of the bidders. Such constraints arise in various settings, including government procurement and corporate supply chain policies that prioritize small businesses, environmentally responsible suppliers, or enterprises owned by historically disadvantaged individuals. We analyze two variants of this problem: stochastic mechanism design, which assumes bidders' values follow a known distribution and seeks to maximize expected revenue, and regret-based mechanism design, which makes no distributional assumptions and aims to minimize the worst-case regret. We characterize a closed-form optimal stochastic mechanism, propose a closed-form regret-based mechanism, and establish that the ex-post regret under the latter is at most a constant multiple (dependent on the equity level) of the optimal worst-case regret. We further show that this approximation constant is at most 1.31 across different equity levels. Both mechanisms can be interpreted as set-asides, a common policy tool that reserves a fraction of goods for minority groups. Furthermore, numerical results demonstrate that the stochastic mechanism performs well when the bidders' value distribution is accurately estimated, while the regret-based mechanism exhibits greater robustness under estimation errors. |
Date: | 2025–02 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2502.08369 |
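The equity constraint in the abstract above, and its set-aside interpretation, can be illustrated with a naive allocation rule for one divisible unit: award everything to the highest bidder, then set aside for the best minority bidder the minimal fraction that satisfies the constraint. This is a toy sketch of the constraint only, not the paper's optimal stochastic or regret-based mechanism; all names and numbers are assumptions.

```python
def set_aside_allocation(bids, minority, equity_level):
    """Allocate one divisible unit so that the minority group's allocation is
    at least equity_level times the allocation to the remaining bidders.
    bids: list of bids; minority: parallel list of booleans."""
    n = len(bids)
    best = max(range(n), key=lambda i: bids[i])
    x = [0.0] * n
    x[best] = 1.0
    if minority[best]:
        return x  # constraint already slack: the winner is a minority bidder
    best_min = max((i for i in range(n) if minority[i]),
                   key=lambda i: bids[i], default=None)
    if best_min is None:
        return x  # no minority bidders to set aside for
    # Need x_min >= equity_level * x_rest with x_min + x_rest = 1,
    # so the minimal set-aside is equity_level / (1 + equity_level).
    x_min = equity_level / (1.0 + equity_level)
    x[best] = 1.0 - x_min
    x[best_min] = x_min
    return x
```

For instance, with an equity level of 0.25 the minimal set-aside is 0.25/1.25 = 20% of the good. The paper's contribution is to show that (suitably designed) mechanisms of this set-aside form are optimal for expected revenue and near-optimal for worst-case regret, which this naive rule does not attempt.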