nep-cta New Economics Papers
on Contract Theory and Applications
Issue of 2025–09–15
five papers chosen by
Guillem Roig, University of Melbourne


  1. Price cap regulation with limited commitment By Bouvard, Matthieu; Jullien, Bruno
  2. Advising with Threshold Tests: Complexity, Signaling, and Effort By Georgy Lukyanov; Mark Izgarshev
  3. Endogenous Quality in Social Learning By Georgy Lukyanov; Konstantin Shamruk; Ekaterina Logina
  4. Mutual Reputation and Trust in a Repeated Sender-Receiver Game By Georgy Lukyanov
  5. Incentives for Digital Twins: Task-Based Productivity Enhancements with Generative AI By Catherine Wu; Arun Sundararajan

  1. By: Bouvard, Matthieu; Jullien, Bruno
    Abstract: We consider the price-cap regulation of a monopolistic network operator when the regulator has limited commitment. Operating the network requires fixed investments, and the regulator has the opportunity to unilaterally revise the price cap at random times. When the regulator maximizes consumer surplus, he has an incentive to lower the price cap once the operator's fixed investments are sunk. This hold-up problem gives rise to two types of inefficiencies. In one type of equilibrium, the operator breaks even but strategically under-invests to induce the regulator to maintain the price cap. In another type of equilibrium, the operator makes strictly positive profits, and periods of high investment and high prices are followed by periods of low prices and capacity decline. Overall, the model suggests that the regulator's lack of commitment limits the deployment of network infrastructures.
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:tse:wpaper:130871
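    A minimal two-period sketch of the hold-up logic, in Python, with diminishing returns to capacity; the demand curvature, margin, cost, and revision probability are illustrative assumptions, not the paper's:

      import math

      # Two-period hold-up sketch: the operator sinks capacity K at unit cost c
      # and earns margin (p_cap - mc) on sqrt(K) units per period (diminishing
      # returns). With probability lam the regulator revises the cap down to
      # marginal cost after K is sunk, wiping out the period-2 margin.
      def operator_profit(K, p_cap=0.9, mc=0.2, c=1.0, lam=0.5):
          margin = p_cap - mc
          expected_periods = 1 + (1 - lam)  # period-2 margin survives w.p. 1 - lam
          return margin * math.sqrt(K) * expected_periods - c * K

      K_grid = [k / 100 for k in range(101)]
      for lam in (0.0, 0.5, 0.9):
          K_star = max(K_grid, key=lambda K: operator_profit(K, lam=lam))
          print(f"revision risk lam={lam}: chosen capacity K* = {K_star:.2f}")

    As the revision probability rises, the chosen capacity falls, mirroring the strategic under-investment described in the abstract.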
  2. By: Georgy Lukyanov; Mark Izgarshev
    Abstract: A benevolent advisor observes a project's complexity and posts a pass-fail threshold before the agent chooses effort. The project succeeds only if ability and effort together clear complexity. We compare two informational regimes. In the naive regime, the threshold is treated as non-informative; in the sophisticated regime, the threshold is a signal and the agent updates beliefs. We characterize equilibrium threshold policies and show that the optimal threshold rises with complexity under mild regularity. We then give primitives-based sufficient conditions that guarantee separating, pooling, or semi-separating outcomes. In a benchmark with uniform ability, exponential complexity, and power costs, we provide explicit parameter regions that partition the space by equilibrium type; a standard refinement eliminates most pooling. The results yield transparent comparative statics and welfare comparisons across regimes.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.20540
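    A small sketch in the spirit of the paper's benchmark (exponential complexity, power effort costs), comparing effort when the threshold is treated as uninformative with a separating regime where the posted threshold reveals complexity; the prize, rate, cost exponent, and grid are illustrative assumptions, not the paper's:

      import math

      V, RATE = 1.0, 1.0  # prize and complexity rate (hypothetical values)
      E_GRID = [k / 200 for k in range(601)]  # effort grid on [0, 3]

      def naive_effort(a):
          # naive regime: the threshold carries no information, so the agent
          # best-responds to the exponential prior over complexity c,
          # succeeding whenever a + e >= c
          def payoff(e):
              return V * (1 - math.exp(-RATE * (a + e))) - e ** 2
          return max(E_GRID, key=payoff)

      def sophisticated_effort(a, c):
          # separating outcome: the threshold reveals c, so the agent exerts
          # exactly the effort needed to clear it, or gives up if too costly
          need = max(c - a, 0.0)
          return need if V - need ** 2 >= 0 else 0.0

      for a in (0.2, 0.5, 0.8):
          print(f"ability {a}: naive effort {naive_effort(a):.2f}, "
                f"sophisticated effort at c=1.0 -> {sophisticated_effort(a, 1.0):.2f}")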
  3. By: Georgy Lukyanov; Konstantin Shamruk; Ekaterina Logina
    Abstract: We study a dynamic reputation model with a fixed posted price where only purchases are public. A long-lived seller chooses costly quality; each buyer observes the purchase history and a private signal. Under a Markov selection, beliefs split into two cascades, where actions are unresponsive and investment is zero, and an interior region where the seller invests. The policy is inverse-U in reputation and produces two patterns: Early Resolution (rapid absorption at the optimistic cascade) and Double Hump (two investment episodes). Higher signal precision at fixed prices enlarges cascades and can reduce investment. We compare welfare and analyze two design levers: flexible pricing, which can keep actions informative and remove cascades for patient sellers, and public outcome disclosure, which makes purchases more informative and expands investment.
    Date: 2025–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2508.20539
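    The cascade mechanics can be sketched with standard binary-signal Bayesian updating at a fixed price; the price, signal precision, and horizon below are illustrative assumptions, not the paper's calibration:

      import random

      P, Q = 0.5, 0.7  # posted price (as a belief threshold) and signal precision

      def posterior(belief, good_signal):
          # Bayes update of the probability that quality is high
          like = Q if good_signal else 1 - Q
          other = 1 - Q if good_signal else Q
          return belief * like / (belief * like + (1 - belief) * other)

      def in_cascade(belief):
          # cascade: both signal realizations lead to the same action,
          # so purchases stop revealing private information
          buy_even_after_bad = posterior(belief, False) >= P
          skip_even_after_good = posterior(belief, True) < P
          return buy_even_after_bad or skip_even_after_good

      def simulate(belief, quality_high, steps=30, seed=1):
          rng = random.Random(seed)
          for t in range(steps):
              if in_cascade(belief):
                  return belief, t  # learning stops in the cascade region
              signal = rng.random() < (Q if quality_high else 1 - Q)
              buys = posterior(belief, signal) >= P
              # outside a cascade the action reveals the signal, so observers
              # update the public belief on the purchase decision itself
              belief = posterior(belief, buys)
          return belief, steps

      b, t = simulate(0.5, quality_high=True)
      print(f"public belief {b:.3f} frozen after {t} purchase observations")

    Once in_cascade holds, actions are unresponsive to signals, matching the regions in which the abstract reports zero investment.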
  4. By: Georgy Lukyanov
    Abstract: We study a repeated sender-receiver game where inspections are public but the sender's action is hidden unless inspected. A detected deception ends the relationship or triggers a finite punishment. We show the public state is one-dimensional and prove existence of a stationary equilibrium with cutoff inspection and monotone deception. The sender's mixing pins down a closed-form total inspection probability at the cutoff, and a finite punishment phase implements the same cutoffs as termination. We extend to noisy checks, silent audits, and rare public alarms, preserving the Markov structure and continuity as transparency vanishes or becomes full. The model yields testable implications for auditing, certification, and platform governance: tapering inspections with reputation, bunching of terminations after inspection spurts, and sharper cutoffs as temptation rises relative to costs.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.04035
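    The mixing step has the flavor of a standard inspection game; here is a tiny sketch of the indifference condition that pins down the total inspection probability at the cutoff, with illustrative payoffs (the paper's closed form will differ):

      # At the cutoff the sender is indifferent between honesty (normalized to 0)
      # and deception: gain * (1 - q) - loss * q = 0, so q* = gain / (gain + loss),
      # where `loss` is the continuation value forfeited on detection.
      def cutoff_inspection_prob(gain, loss):
          return gain / (gain + loss)

      for gain in (1.0, 2.0, 3.0):
          q = cutoff_inspection_prob(gain, loss=4.0)
          print(f"temptation {gain} vs continuation value 4.0: inspect with q* = {q:.2f}")

    The required inspection probability rises with temptation relative to the continuation value at stake, in the spirit of the comparative statics above.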
  5. By: Catherine Wu; Arun Sundararajan
    Abstract: Generative AI is a technology that depends in part on human participation in training and improving its automation potential. We focus on the development of an "AI twin" that could complement its creator's efforts, enabling them to produce higher-quality output in their individual style. However, AI twins could also, over time, replace individual humans. We analyze this trade-off using a principal-agent model in which agents have the opportunity to invest in training an AI twin, leading to a lower cost of effort, a higher probability of success, or both. We propose a new framework to situate the model, in which the tasks performed vary in the ease with which AI output can be improved by the human (the task's "editability") and in the extent to which a non-expert can assess the quality of output (its "verifiability"). Our synthesis of recent empirical studies indicates that productivity gains from the use of generative AI are higher overall when task editability is higher, while non-experts enjoy greater relative productivity gains for tasks with higher verifiability. We show that during investment a strategic agent will trade off improvements in quality and ease of effort to preserve their wage bargaining power. Tasks with high verifiability and low editability are most aligned with a worker's incentives to train their twin, but for tasks where the stakes are low, this alignment is constrained by the risk of displacement. Our results suggest that sustained improvements in company-sponsored generative AI will require nuanced design of human incentives, and that public policy which encourages balancing worker returns with generative AI improvements could yield more sustained long-run productivity gains.
    Date: 2025–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2509.08732
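    A sketch of the displacement trade-off: the agent splits a training budget between making the twin more capable (raising success probability but eroding the agent's wage share) and making their own effort cheaper; the functional forms and parameters are hypothetical, not the paper's:

      B = 1.0                  # training budget
      P0, ALPHA = 0.5, 0.4     # baseline success probability and capability gain
      C0 = 0.3                 # baseline effort cost
      BETA0, DELTA = 0.6, 0.3  # bargaining share and displacement sensitivity
      SURPLUS = 2.0            # value of a successful task

      def agent_payoff(x):
          # x of the budget goes to twin capability, B - x to effort-cost reduction
          p = min(P0 + ALPHA * x, 1.0)       # twin-assisted success probability
          cost = C0 / (1.0 + (B - x))        # cheaper effort from the remainder
          share = BETA0 * (1.0 - DELTA * x)  # wage share erodes as the twin improves
          return share * SURPLUS * p - cost

      grid = [k / 100 for k in range(101)]
      x_star = max(grid, key=agent_payoff)
      print(f"agent puts {x_star:.2f} of the budget into capability, "
            f"{B - x_star:.2f} into effort-cost reduction")

    With these numbers the optimum is interior: the agent stops short of maximal capability investment because each capability gain also erodes their bargaining position, echoing the abstract's wage-preservation result.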

This nep-cta issue is ©2025 by Guillem Roig. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.