New Economics Papers on Microeconomics
By: | Manuel Foerster (Bielefeld University); Daniel Habermacher (Universidad de los Andes) |
Abstract: | We revisit the trade-off between keeping authority and granting decision rights to an informed agent. We introduce transfers, allowing the agent to charge a fee for her services, but she may also offer the principal a side payment. In equilibrium, the principal's contracting decision maximizes the aggregate payoff. In particular, introducing transfers changes the contracting decision from centralization to delegation and improves efficiency if delegation maximizes the aggregate payoff but requires a side payment. We then introduce general delegation mechanisms. We first show that the agent, behaving ex ante as a social planner would, restricts the discretion of her interim self in equilibrium. We then derive the optimal delegation set and show that centralization will occur with optimal delegation only if it is informative. Our results contribute to the debate over subsidiaries in multinational corporations, showing how transfers can induce the parties to act in the headquarters' interest. |
Keywords: | Principal-agent problem, communication, (optimal) delegation, transfers, subsidiaries, private information |
JEL: | D23 D83 D61 D82 C72 |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:aoz:wpaper:361 |
By: | Ophir Friedler; Hu Fu; Anna Karlin; Ariana Tang |
Abstract: | Platforms design the form of presentation by which sellers are shown to the buyers. This design not only shapes the buyers' experience but also leads to different market equilibria or dynamics. One component in this design is through the platform's mediation of the search frictions experienced by the buyers for different sellers. We take a model of monopolistic competition and show that, on one hand, when all sellers have the same inspection costs, the market sees no stable price since the sellers always have incentives to undercut each other, and, on the other hand, the platform may stabilize the price by giving prominence to one seller chosen by a carefully designed mechanism. This calls to mind Amazon's Buy Box. We study natural mechanisms for choosing the prominent seller, characterize the range of equilibrium prices implementable by them, and find that in certain scenarios the buyers' surplus improves as the search friction increases. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.14793 |
By: | Denis Kojevnikov; Kyungchul Song |
Abstract: | We consider incomplete information finite-player games where players may hold mutually inconsistent beliefs without a common prior. We introduce absolute continuity of beliefs, extending the classical notion of absolutely continuous information in Milgrom and Weber (1985), and prove that a Bayesian equilibrium exists under broad conditions. Applying these results to games with rich type spaces that accommodate infinite belief hierarchies, we show that when the analyst's game has a type space satisfying absolute continuity of beliefs, the actual game played according to the belief hierarchies induced by the type space has a Bayesian equilibrium for a wide class of games. We provide examples that illustrate practical applications of our findings. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.16240 |
By: | Florian Brandl |
Abstract: | We consider long-lived agents who interact repeatedly in a social network. In each period, each agent learns about an unknown state by observing a private signal and her neighbors' actions in the previous period before taking an action herself. Our main result shows that the learning rate of the slowest learning agent is bounded from above independently of the number of agents, the network structure, and the agents' strategies. Applying this result to equilibrium learning with rational agents shows that the learning rate of all agents in any equilibrium is bounded under general conditions. This extends recent findings on equilibrium learning and demonstrates that the limitation stems from an inherent tradeoff between optimal action choices and information revelation rather than strategic considerations. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.12136 |
By: | Srihari Govindan; Robert B. Wilson |
Abstract: | A solution concept that is a refinement of Nash equilibria selects for each finite game a nonempty collection of closed and connected subsets of Nash equilibria as solutions. We impose three axioms for such solution concepts. The axiom of backward induction requires each solution to contain a quasi-perfect equilibrium. Two invariance axioms posit that solutions of a game are the same as those of a game obtained by the addition of strategically irrelevant strategies and players. Stability satisfies these axioms; and any solution concept that satisfies them must, for generic extensive-form games, select from among its stable outcomes. A strengthening of the two invariance axioms provides an analogous axiomatization of components of equilibria with a nonzero index. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.16908 |
By: | Kun Zhang |
Abstract: | Standard procurement models assume that the buyer knows the quality of the good at the time of procurement; however, in many settings, the quality is learned only long after the transaction. We study procurement problems in which the buyer's valuation of the supplied good depends directly on its quality, which is unverifiable and unobservable to the buyer. For a broad class of procurement problems, we identify procurement mechanisms maximizing any weighted average of the buyer's expected payoff and social surplus. The optimal mechanism can be implemented by an auction that restricts sellers to submit bids within specific intervals. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.15555 |
By: | Sam Ganzfried |
Abstract: | Dominance is a fundamental concept in game theory. In strategic-form games dominated strategies can be identified in polynomial time. As a consequence, iterative removal of dominated strategies can be performed efficiently as a preprocessing step for reducing the size of a game before computing a Nash equilibrium. For imperfect-information games in extensive form, we could convert the game to strategic form and then iteratively remove dominated strategies in the same way; however, this conversion may cause an exponential blowup in game size. In this paper we define and study the concept of dominated actions in imperfect-information games. Our main result is a polynomial-time algorithm for determining whether an action is dominated (strictly or weakly) by any mixed strategy in n-player games, which can be extended to an algorithm for iteratively removing dominated actions. This allows us to efficiently reduce the size of the game tree as a preprocessing step for Nash equilibrium computation. We explore the role of dominated actions empirically in the "All In or Fold" No-Limit Texas Hold'em poker variant. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.09716 |
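A minimal sketch of the strategic-form building block that the entry above generalizes: testing whether a pure strategy is strictly dominated by some mixed strategy reduces to a small linear program. The payoff matrix is hypothetical; the paper's contribution, extending the test to actions in imperfect-information extensive-form games, is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(A, i):
    """Test whether row strategy i of payoff matrix A (row player's payoffs)
    is strictly dominated by a mixed strategy over the remaining rows.
    Solves: max eps s.t. sum_k x_k * A[k, j] >= A[i, j] + eps for all j,
    with x a probability vector; i is dominated iff the optimal eps > 0."""
    A = np.asarray(A, dtype=float)
    others = [k for k in range(A.shape[0]) if k != i]
    n, m = len(others), A.shape[1]
    c = np.zeros(n + 1); c[-1] = -1.0   # variables (x, eps); minimize -eps
    A_ub = np.hstack([-A[others].T, np.ones((m, 1))])  # -x.A[:, j] + eps <= -A[i, j]
    b_ub = -A[i]
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum(x) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.status == 0 and -res.fun > 1e-9

# The middle row is dominated by an equal mix of the other two rows:
print(strictly_dominated([[3, 0], [1, 1], [0, 3]], 1))  # True
```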
By: | Xiaoyan Xu; Weishi Lim; Xing Zhang; Jeff Cai |
Abstract: | In many service markets, expert providers possess an information advantage over consumers regarding the necessary services, creating opportunities for fraudulent practices. These may involve overtreatment through unnecessary services or undertreatment with ineffective solutions that fail to address consumers' problems. When issues are resolved, consumers exit the market; when unresolved, they must decide whether to revisit the initial provider or seek a new one. Little is known about how repeated interactions and the consumer search process influence expert fraud and consumer welfare in such markets. We develop a dynamic game-theoretic model to examine the role of consumer search behavior and repeated interactions between consumers and service providers. We find that overtreatment and undertreatment can arise simultaneously in equilibrium. Interestingly, undertreatment, being less costly for the consumer, can initially act as a "hook" to induce acceptance of a minor treatment recommendation. When this minor treatment fails to resolve the issue, it can generate additional demand for a more expensive and serious treatment. This arises when the cost of revisiting the initial provider is lower than that of searching for a new one. The extent of undertreatment exhibits a non-monotonic relationship with consumers' ex ante belief about the nature of their problems and the market's ethical level. Our results shed light on how market ethical levels, provider capabilities and capacities, and consumer privacy protection policies interact with undertreatment and affect consumer welfare. Specifically, consumer welfare can decrease as the market becomes more ethical. Enhancing providers' diagnostic capabilities and capacities can exacerbate undertreatment. Providing access to consumers' diagnosis histories can help mitigate the undertreatment issue and improve consumer welfare. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.21175 |
By: | Fuhito Kojima; Bobak Pakzad-Hurson |
Abstract: | We study a variation of the price competition model a la Bertrand, in which firms must offer menus of contracts that obey monotonicity constraints, e.g., wages that rise with worker productivity to comport with equal pay legislation. While such constraints limit firms' ability to undercut their competitors, we show that Bertrand's classic result still holds: competition drives firm profits to zero and leads to efficient allocations without rationing. Our findings suggest that Bertrand's logic extends to a broader variety of markets, including labor and product markets that are subject to real-world constraints on pricing across workers and products. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.16842 |
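The undercutting logic that the entry above builds on fits in a few lines: with homogeneous goods, each firm's best response is to shave the rival's price until price reaches marginal cost, leaving zero profit. A toy sketch with hypothetical COST and TICK parameters; the paper's monotone-menu constraints are not modeled.

```python
# Bertrand undercutting dynamic: firms alternate best responses, each
# slightly undercutting the rival whenever a positive margin remains.
COST, TICK = 10.0, 0.01

def best_response(rival_price):
    # Undercut while profitable; never price below marginal cost.
    return max(COST, rival_price - TICK)

p1, p2 = 25.0, 30.0
for _ in range(10_000):
    p1, p2 = best_response(p2), best_response(p1)
print(p1, p2)  # both converge to COST, i.e., zero profit
```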
By: | Luke Boosey; Philip Brookins; Dmitry Ryvkin |
Abstract: | We study information disclosure policies for contests among groups. Each player endogenously decides whether or not to participate in competition as a member of their group. Within-group aggregation of effort is best-shot, i.e., each group's performance is determined by the highest investment among its members. We consider a generalized all-pay auction setting, in which the group with the highest performance wins the contest with certainty. Players' values for winning are private information at the entry stage, but may be disclosed at the competition stage. We compare three disclosure policies: (i) no disclosure, when the number of entrants remains unknown and their values private; (ii) within-group disclosure, when this information is disclosed within each group but not across groups; and (iii) full disclosure, when the information about entrants is disclosed across groups. For the benchmark case of contests between individuals, information disclosure always reduces expected aggregate investment. However, this is no longer true in group contests: Within-group disclosure unambiguously raises aggregate investment, while the effect of full disclosure is ambiguous. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.20092 |
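The contest technology in the entry above is easy to encode: within-group aggregation is best-shot (a group's performance is its members' largest investment) and the highest-performing group wins with certainty. Entry decisions and disclosure policies are left out; the investment numbers are hypothetical.

```python
def group_contest_winner(investments_by_group):
    """Best-shot aggregation: a group's performance equals its members'
    maximum investment; the highest-performing group wins the contest
    outright (generalized all-pay: every investment is sunk regardless)."""
    performance = [max(group, default=0.0) for group in investments_by_group]
    return performance.index(max(performance)), performance

# Two groups of three players; zeros are members who chose not to enter.
groups = [[0.7, 0.0, 0.4], [0.9, 0.2, 0.0]]
print(group_contest_winner(groups))  # (1, [0.7, 0.9]): group 1 wins
```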
By: | Rey, Patrick; Loertscher, Simon; Marx, Leslie |
Abstract: | We develop the procurement analogue to an all-pay auction for an independent private values model with identical distributions. In this all-receive procurement auction (ARPA), suppliers simultaneously submit bids. Suppliers with bids below the reserve are paid their bids; suppliers with bids above the reserve receive no payment and produce nothing. The supplier with the largest bid below the reserve produces the good. With appropriately chosen reserves, which decrease in the number of suppliers, the ARPA is efficient and, given increasing virtual costs, implements the optimal procurement. Appropriately adjusted, ARPAs implement the optimal procurement in general. ARPAs can render supply chains resilient to nonanticipated liquidity shocks. |
Keywords: | Resilience; Liquidity shocks; All-pay auctions; Multiple-receive procurement auctions |
JEL: | D44 D82 L41 |
Date: | 2025–04–29 |
URL: | https://d.repec.org/n?u=RePEc:tse:wpaper:130525 |
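One reading of the ARPA allocation rule as stated above, as a sketch: suppliers bidding at or below the reserve are paid their bids, those above receive nothing, and the supplier with the largest bid below the reserve produces the good. Bids and reserve are hypothetical, and the paper's optimal reserve schedule is taken as given.

```python
def arpa_outcome(bids, reserve):
    """All-receive procurement auction (ARPA): every supplier bidding at or
    below the reserve is paid its bid; the supplier with the largest such
    bid produces the good. Returns (payments, producing supplier or None)."""
    payments = [b if b <= reserve else 0.0 for b in bids]
    eligible = [i for i, b in enumerate(bids) if b <= reserve]
    producer = max(eligible, key=lambda i: bids[i]) if eligible else None
    return payments, producer

bids, reserve = [4.0, 7.5, 9.2, 11.0], 10.0
print(arpa_outcome(bids, reserve))  # ([4.0, 7.5, 9.2, 0.0], 2)
```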
By: | Nikhil Kumar |
Abstract: | This paper examines the market for AI models in which firms compete to provide accurate model predictions and consumers exhibit heterogeneous preferences for model accuracy. We develop a consumer-firm duopoly model to analyze how competition affects firms' incentives to improve model accuracy. Each firm aims to minimize its model's error, but this choice can often be suboptimal. Counterintuitively, we find that in a competitive market, firms that improve overall accuracy do not necessarily improve their profits. Rather, each firm's optimal decision is to invest further on the error dimension where it has a competitive advantage. By decomposing model errors into false positive and false negative rates, firms can reduce errors in each dimension through investments. Firms are strictly better off investing on their superior dimension and strictly worse off with investments on their inferior dimension. Profitable investments adversely affect consumers but increase overall welfare. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.13375 |
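The error decomposition the entry above relies on is standard: a binary model's error splits into a false positive rate and a false negative rate, the two dimensions on which the duopolists invest. A minimal sketch with hypothetical labels and predictions:

```python
import numpy as np

def error_rates(y_true, y_pred):
    """Decompose a binary classifier's error into false positive rate
    (errors on true negatives) and false negative rate (errors on true
    positives), the two investment dimensions in the model above."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fpr = np.mean(y_pred[y_true == 0] == 1)
    fnr = np.mean(y_pred[y_true == 1] == 0)
    return fpr, fnr

y_true = [0, 0, 0, 1, 1, 1, 1, 0]
y_pred = [0, 1, 0, 1, 0, 1, 1, 0]
print(error_rates(y_true, y_pred))  # (0.25, 0.25)
```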
By: | Jens Leth Hougaard (University of Copenhagen, DK-1958 Frederiksberg C, Denmark); Mich Tvede (School of Economics, University of Sheffield, Sheffield S1 4DT, UK) |
Abstract: | We consider network formation. A set of locations can be connected in various network configurations. Every network has a cost, and every agent has an individual value for every network. A planner aims at implementing a welfare-maximizing network and allocating the resulting cost, but information is asymmetric: agents are fully informed and the planner is ignorant. We study full implementation in Nash and strong Nash equilibria. We show that the correspondence consisting of welfare-maximizing networks and individually rational cost allocations is implementable. We construct a welfare-maximizing and individually rational solution that is minimal in the set of upper hemi-continuous and Nash-implementable solutions. Full implementation of single-valued solutions such as the Shapley value is not possible. |
Keywords: | Networks; Welfare maximization; Nash Implementation; Strong Nash Implementation |
JEL: | C70 C72 D71 D85 |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:shf:wpaper:2025005 |
By: | Gurkirat Wadhwa; Veeraruna Kavitha |
Abstract: | Considering a supply chain with partial vertical integration, we seek answers to several questions related to the cooperation-competition friction abundant in such networks. Such a supply chain can represent a supplier with an in-house production unit that attempts to control an out-house production unit via the said friction. The two production units can have different sets of loyal customer bases, and the aim of the manufacturer-supplier duo would be to get the best out of the two customer bases. Our analysis shows that under certain market conditions, an optimal strategy might be to allow both units to earn positive profits, particularly when they hold similar market power and when customer loyalty is high. In cases of weaker customer loyalty, however, the optimal approach may involve pressuring the out-house unit to operate at minimal profits. Even more intriguing is the scenario where the out-house unit has greater market power and customer loyalty remains strong; here, it may be optimal for the in-house unit to operate at a loss, just enough to dismantle the downstream monopoly. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.09591 |
By: | Chengfeng Shen; Felix Kübler; Yucheng Yang; Zhennan Zhou |
Abstract: | We develop a new method to efficiently solve for optimal lotteries in models with non-convexities. In order to employ a Lagrangian framework, we prove that the value of the saddle point that characterizes the optimal lottery is the same as the value of the dual of the deterministic problem. Our algorithm solves the dual of the deterministic problem via sub-gradient descent. We prove that the optimal lottery can be directly computed from the deterministic optima that occur along the iterations. We analyze the computational complexity of our algorithm and show that the worst-case complexity is often orders of magnitude better than the one arising from a linear programming approach. We apply the method to two canonical problems with private information. First, we solve a principal-agent moral-hazard problem, demonstrating that our approach delivers substantial improvements in speed and scalability over traditional linear programming methods. Second, we study an optimal taxation problem with hidden types, which was previously considered computationally infeasible, and examine under which conditions the optimal contract will involve lotteries. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.15997 |
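A toy version of the computational idea in the entry above, under strong simplifying assumptions: for a small non-convex problem, sub-gradient ascent on the multiplier solves the dual of the deterministic problem, the dual value equals the optimal lottery's value, and the deterministic optima visited along the iterations reveal the lottery's support. The objective and constraint below are hypothetical.

```python
import numpy as np

# Choose x in {0,1,2,3} to minimize f(x) = -x subject to E[g(x)] <= 0,
# g(x) = x**2 - 2. The best deterministic point x = 1 yields -1; a lottery
# mixing x = 1 and x = 2 (weight 1/3 on x = 2) achieves -4/3.
X = np.array([0.0, 1.0, 2.0, 3.0])
f = lambda x: -x
g = lambda x: x ** 2 - 2

lam, support = 0.0, []
for k in range(1, 20001):
    x_k = X[np.argmin(f(X) + lam * g(X))]   # deterministic optimum at lam
    support.append(float(x_k))
    lam = max(0.0, lam + g(x_k) / k)        # sub-gradient step on the dual

print(round(lam, 3))                                  # ~0.333
print(round(float(np.min(f(X) + lam * g(X))), 3))     # dual value ~ -1.333
print(sorted(set(support[-1000:])))                   # lottery support [1.0, 2.0]
```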
By: | Takashi Izumo |
Abstract: | In everyday life, we frequently make coarse-grained judgments. When we say that Olivia and Noah excel in mathematics, we disregard the specific differences in their mathematical abilities. Similarly, when we claim that a particular automobile manufacturer produces high-quality cars, we overlook the minor variations among individual vehicles. These coarse-grained assessments are distinct from erroneous or deceptive judgments, such as those resulting from student cheating or false advertising by corporations. Despite the prevalence of such judgments, little attention has been given to their underlying mathematical structure. In this paper, we introduce the concept of coarse-graining into game theory, analyzing games where players may perceive different payoffs as identical while preserving the underlying order structure. We call it a Coarse-Grained Game (CGG). This framework allows us to examine the rational inference processes that arise when players equate distinct micro-level payoffs at a macro level, and to explore how Nash equilibria are preserved or altered as a result. Our key findings suggest that CGGs possess several desirable properties that make them suitable for modeling phenomena in the social sciences. This paper demonstrates two such applications: first, in cases of overly minor product updates, consumers may encounter an equilibrium selection problem, resulting in market behavior that is not driven by objective quality differences; second, the lemon market can be analyzed not only through objective information asymmetry but also through asymmetries in perceptual resolution or recognition ability. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.17598 |
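A minimal sketch of the coarse-graining idea described above, with hypothetical payoffs: players perceive payoffs only up to a band (the map preserves the order structure), and after an overly minor product update the two coordination equilibria become perceptually identical, which is exactly the equilibrium selection problem the abstract points to.

```python
import numpy as np
from itertools import product

def pure_nash(A, B):
    """Pure-strategy Nash equilibria of a bimatrix game
    (A: row player's payoffs, B: column player's payoffs)."""
    return [(i, j)
            for i, j in product(range(A.shape[0]), range(A.shape[1]))
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()]

def coarsen(M, band=1.0):
    """Coarse-graining map: payoffs within the same band are perceived as
    identical, while the ordering of bands is preserved."""
    return np.floor(np.asarray(M) / band)

# Coordinating on an updated product (payoff 2.4) vs. the old one (2.0):
A = np.array([[2.0, 0.0], [0.0, 2.4]])
B = A.copy()
print(pure_nash(A, B))          # [(0, 0), (1, 1)]; (1, 1) Pareto-dominates
Ac, Bc = coarsen(A), coarsen(B)
print(pure_nash(Ac, Bc))        # the same two equilibria survive...
print(Ac[1, 1] == Ac[0, 0])     # ...but True: the quality edge is no longer
                                # perceived, hence a selection problem
```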
By: | Stephen Martin |
Abstract: | The paper analyzes a two-stage Main Street model. In the second stage, taking locations as given, each of two firms sets price to maximize own profit, subject to the constraint that it is not profitable for the other firm to undercut its price at its location. We find constrained price best-response equations and pure-strategy equilibrium prices for all pairs of locations. In the first stage, firms noncooperatively pick locations to maximize second-stage payoffs. We find location best-response equations and equilibrium locations. Equilibrium locations are efficient in the sense of minimizing transportation cost. |
Keywords: | Main Street, Hotelling, price undercutting |
JEL: | C72 D21 D43 L13 |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:pur:prukra:1354 |
By: | Yiyin Cao; Chuangyin Dang |
Abstract: | Sequential equilibrium requires a consistent assessment and sequential rationality, where the consistent assessment emerges from a convergent sequence of totally mixed behavioral strategies and associated beliefs. However, the original definition lacks explicit guidance on constructing such convergent sequences. To overcome this difficulty, this paper presents a characterization of sequential equilibrium by introducing $\varepsilon$-perfect $\gamma$-sequential equilibrium with local sequential rationality. For any $\gamma>0$, we establish a perfect $\gamma$-sequential equilibrium as a limit point of a sequence of $\varepsilon_k$-perfect $\gamma$-sequential equilibria with $\varepsilon_k\to 0$. A sequential equilibrium is then derived from a limit point of a sequence of perfect $\gamma_q$-sequential equilibria with $\gamma_q\to 0$. This characterization systematizes the construction of convergent sequences, enables the analytical determination of sequential equilibria, and yields a polynomial system that serves as a necessary and sufficient condition for $\varepsilon$-perfect $\gamma$-sequential equilibrium. Exploiting the characterization, we develop a differentiable path-following method to compute a sequential equilibrium. |
Date: | 2025–03 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2503.19493 |
By: | Tuong Manh Vu; Ernesto Carrella; Robert Axtell; Omar A. Guerrero |
Abstract: | We develop a model where firms determine the price at which they sell their differentiated goods, the volume that they produce, and the inputs (types and amounts) that they purchase from other firms. A steady-state production network emerges endogenously without resorting to assumptions such as equilibrium or perfect knowledge about production technologies. Through a simple version of reinforcement learning, firms with heterogeneous technologies cope with uncertainty and maximize profits. Due to this learning process, firms can adapt to shocks such as demand shifts, supplier/client closures, productivity changes, and production technology modifications, effectively reshaping the production network. To demonstrate the potential of this model, we analyze the upstream and downstream impact of demand and productivity shocks. |
Date: | 2025–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2504.16010 |
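A stripped-down version of the learning mechanism described above: a firm that does not know the demand it faces can still locate a profit-maximizing price by simple reinforcement learning. The demand function, price grid, and epsilon-greedy rule are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)
PRICES = np.linspace(1.0, 10.0, 10)   # candidate price grid
COST = 2.0

def demand(p):
    """Hypothetical noisy linear demand, unknown to the learning firm."""
    return max(0.0, 12.0 - p + rng.normal(0.0, 0.5))

q_values = np.zeros(len(PRICES))      # running profit estimate per price
counts = np.zeros(len(PRICES))
for t in range(5000):
    if rng.random() < 0.1:                     # explore a random price
        a = int(rng.integers(len(PRICES)))
    else:                                      # exploit the current best
        a = int(np.argmax(q_values))
    profit = (PRICES[a] - COST) * demand(PRICES[a])
    counts[a] += 1
    q_values[a] += (profit - q_values[a]) / counts[a]

print(PRICES[np.argmax(q_values)])    # close to the true optimum p* = 7
```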
By: | Duncan K. Foley; Ellis Scharfenaker |
Abstract: | Bayes' theorem incorporates distinct types of information through the likelihood and prior. Direct observations of state variables enter the likelihood and modify posterior probabilities through consistent updating. Information in terms of expected values of state variables modifies posterior probabilities by constraining prior probabilities to be consistent with the information. Constraints on the prior can be exact, limiting hypothetical frequency distributions to only those that satisfy the constraints, or approximate, allowing residual deviations from the exact constraint within some degree of tolerance. When the model parameters and constraint tolerances are known, posterior probability follows directly from Bayes' theorem. When parameters and tolerances are unknown, a prior for them must be specified. When the system is close to statistical equilibrium, the computation of posterior probabilities is simplified due to the concentration of the prior on the maximum entropy hypothesis. From this point of view, the relationship between maximum entropy reasoning and Bayes' theorem is that maximum entropy reasoning is a special case of Bayesian inference with a constrained entropy-favoring prior. |
Keywords: | Bayesian inference, Maximum entropy, Priors, Information theory, Statistical equilibrium |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:uta:papers:2024-03 |
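The expected-value constraints described above have a well-known closed form: the entropy-maximizing distribution on a finite support subject to a mean constraint is exponential-family, with the Lagrange multiplier pinned down by the constraint. A sketch using the classic die example (the numbers are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)      # support: faces of a die
target_mean = 4.5        # the information: E[x] = 4.5

def mean_given(lam):
    """Mean of the tilted distribution p_i proportional to exp(lam * x_i)."""
    w = np.exp(lam * x)
    return float((x * w).sum() / w.sum())

# Solve for the multiplier that makes the constraint bind exactly.
lam = brentq(lambda l: mean_given(l) - target_mean, -5.0, 5.0)
p = np.exp(lam * x)
p /= p.sum()
print(np.round(p, 4))    # probabilities tilt toward the high faces
print(float(p @ x))      # 4.5: maximum entropy subject to the constraint
```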
By: | Aïd, René; Bonesini, Ofelia; Callegaro, Giorgia; Campi, Luciano |
Abstract: | We frame dynamic persuasion as a partial-observation stochastic control Leader-Follower game with an ergodic criterion. The Receiver controls the dynamics of a multidimensional unobserved state process. Information is provided to the Receiver through a device designed by the Sender that generates the observation process. The commitment of the Sender is enforced. We develop this approach in the case where all dynamics are linear and the preferences of the Receiver are linear-quadratic. We prove a verification theorem for the existence and uniqueness of the solution of the HJB equation satisfied by the Receiver's value function. An extension to the case of persuasion of a mean field of interacting Receivers is also provided. We illustrate this approach in two applications: the provision of information to electricity consumers with a smart meter designed by an electricity producer, and the information provided by carbon footprint accounting rules to companies engaged in a best-in-class emissions reduction effort. In the first application, we link the benefits of information provision to the mispricing of electricity production. In the second, we show that even in the absence of information costs, it might be optimal for the regulator to blur the information available to firms to prevent them from coordinating on a higher carbon footprint in order to reduce their cost of reaching a below-average emission target. |
Keywords: | persuasion; filtering; ergodic control; Stackelberg games; mean field games; smart meters; carbon footprint |
JEL: | C61 C73 D82 D83 Q51 |
Date: | 2025–07–31 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:127889 |
By: | Tsuyoshi Toshimitsu (School of Economics, Kwansei Gakuin University) |
Abstract: | Network connectivity, compatibility, and horizontal interoperability are important functions in network industries. Based on the framework of a Hotelling model, we consider the impact of connectivity between network goods on incentives to innovate and on profits. We focus on the role of market coverage (i.e., full and partial coverage) and consumer expectations (i.e., rational and active expectations). We demonstrate that in the case of full market coverage, as the degree of connectivity increases, research and development (R&D) activities decrease but profits increase. Then, relaxing the assumption of full market coverage, we demonstrate that in the case of partial market coverage, as the degree of connectivity increases, R&D activities and profits both increase. Furthermore, for the full market coverage case, we examine the case in which consumers form expectations actively and demonstrate that an improvement in connectivity does not affect R&D activities but increases profits. |
Keywords: | Network externality, Connectivity, Compatibility, Horizontal interoperability, R&D competition, Market coverage, Consumer expectations |
JEL: | L13 L15 L31 L32 D43 |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:kgu:wpaper:293 |
By: | Darpoe, Erik; Dominguez, Alvaro; Martin-Rodriguez, Maria |
Abstract: | In a scenario featuring two distinct player types, we examine the pairwise stability of stationary networks where agents engage in infinite-horizon bargaining games akin to Manea's framework. Link formation and maintenance costs are contingent upon communication ease and complementarities, with connections between individuals of different types becoming less expensive when complementarities are sufficiently strong. In such instances, various bipartite components emerge as stable, characterized by a lack of direct connections between players of the same type. These components exhibit inequitable distributions of surplus, resulting in asymmetric splits among linked individuals. This contrasts with scenarios where connections between individuals of the same type are less costly, leading to predominantly equitable stable components. Our findings highlight how complementarities and the relative scarcity of certain types can influence the fairness of bargaining outcomes within networks. |
Keywords: | Bargaining, Heterogeneity, Networks, Pairwise stability |
JEL: | C72 C78 D85 |
URL: | https://d.repec.org/n?u=RePEc:agi:wpaper:02000085 |
By: | Ohnishi, Kazuhiro |
Abstract: | An existing study examines an international mixed duopoly involving a state-owned public firm and a foreign private firm, focusing on their timing choices for quantities and showing that the state-owned public firm should act as the leader. This result differs from that for an endogenous-timing mixed duopoly model where a state-owned public firm coexists with a domestic private firm. We investigate the endogenous order of moves in a mixed duopoly model where a state-owned public firm competes with a private firm that is partially foreign-owned. Specifically, we explore the desirable role of the state-owned public firm, either as a leader or a follower, and present the equilibrium outcome of the model. Our findings reveal that the equilibrium differs depending on whether the foreign ownership ratio of the private firm is low or high. |
Keywords: | Endogenous timing; Mixed oligopoly; Partial foreign ownership; Stackelberg |
JEL: | C72 D21 F23 L13 L32 |
Date: | 2025–05–02 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:124662 |
By: | Heesen, Remco; Bright, Liam Kofi |
Abstract: | It might seem obvious that the scientific process should not be biased. We strive for reliable inference, and systematically skewing the results of inquiry apparently conflicts with this. Publication bias—which involves only publishing certain types of results—seems particularly troubling and has been blamed for the replication crisis. While we ultimately agree, there are considerable nuances to take into account. Using a Bayesian model of scientific reasoning we show that a scientist who is aware of publication bias can (theoretically) interpret the published literature so as to avoid acquiring biased beliefs. Moreover, in some highly specific circumstances she might prefer not to bother with policies designed to mitigate or reduce the presence of publication bias—it would impose a cost in time or effort that she would not see any benefit in paying. However, we also argue that science as a social endeavour is made worse off by publication bias. This is because the social benefits of science are largely secured via go-between agents, various non-experts who nonetheless need to make use of or convey the results of scientific inquiry if its fruits are to be enjoyed by society at large. These are unlikely to be well-informed enough to account for publication bias appropriately. As such, we conclude, the costs of having to implement policies like mandatory pre-registration are worth imposing on scientists, even if they would perhaps not view these costs as worth paying for their own sake. The benefits are reaped by the go-between agents, and we argue that their perspective is quite properly favoured when deciding how to govern scientific institutions. |
Keywords: | replication crisis; philosophy of statistics; publication bias; preregistration; file-drawer effect; REF fund |
JEL: | C1 |
Date: | 2025–04–30 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:127420 |
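The central mechanism above, that a reader aware of the publication filter can de-bias inference from the published record while a naive reader over-updates, can be made concrete with a two-hypothesis example. All probabilities below are hypothetical, not the paper's calibration.

```python
PRIOR = 0.5                         # P(real effect)
P_POS = {True: 0.8, False: 0.05}    # P(significant result | effect present?)
PUB_NULL = 0.2                      # null results are published with prob 0.2;
                                    # significant results are always published

def posterior_after_published_positive(aware):
    """Posterior that the effect is real after seeing one published
    significant study, with or without correcting for publication bias."""
    like = {}
    for theta in (True, False):
        if aware:   # truncated likelihood: condition on being published
            pub = P_POS[theta] + PUB_NULL * (1 - P_POS[theta])
            like[theta] = P_POS[theta] / pub
        else:       # naive: treat the published record as the full sample
            like[theta] = P_POS[theta]
    num = like[True] * PRIOR
    return num / (num + like[False] * (1 - PRIOR))

print(round(posterior_after_published_positive(aware=False), 3))  # 0.941
print(round(posterior_after_published_positive(aware=True), 3))   # 0.821
```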
By: | Jennie Ebihara; Ryuichiro Izumi (Department of Economics, Wesleyan University) |
Abstract: | We study how the speed of withdrawals affects bank fragility by examining two dimensions: unpredictability in outflows and frictions in the timing of policy intervention. We extend Ennis and Keister (2009) by introducing uncertainty into the policymaker’s ex-post suspension problem. When withdrawals are more unpredictable, the policymaker intervenes earlier, making the bank less fragile. In contrast, the frequency of intervention opportunities may have a non-monotonic effect: a modest decrease delays suspension and increases fragility, but when opportunities become sufficiently infrequent, the authority suspends earlier to avoid costly delays. |
Keywords: | Fast bank runs, Suspensions, Ex-post optimal intervention |
JEL: | G21 G28 E58 |
Date: | 2025–05 |
URL: | https://d.repec.org/n?u=RePEc:wes:weswpa:2025-004 |