nep-upt New Economics Papers
on Utility Models and Prospect Theory
Issue of 2026–04–20
seven papers chosen by
Alexander Harin


  1. Investing Is Compression By Oscar Stiffelman
  2. Training Neural Networks Embedded in Dynamic Discrete Choice Models By Ecenur Oguz; Robert L. Bray
  3. Risk-Constrained Kelly for Mutually Exclusive Outcomes: CRRA Support Invariance and Logarithmic One-Dimensional Calibration By Christopher D. Long
  4. Learning Preferences from Conjoint Data: A Structural Deep Learning Approach By Avidit Acharya; Jens Hainmueller; Yiqing Xu
  5. On the Snowballing Welfare Effects of Cartels and the Allocation of Fines By Marc Deschamps; Dongshuang Hou; Aymeric Lardon; Christian Trudeau
  6. Technological Changes and Equilibrium Wage Structure: A Cooperative Game Approach By Hideo Konishi; Ryo Tsukamoto
  7. Mechanism Design for Investment Regulation under Herding By Huisheng Wang; H. Vicky Zhao

  1. By: Oscar Stiffelman
    Abstract: In 1956 John Kelly wrote a paper at Bell Labs describing the relationship between gambling and information theory. What became known as the Kelly criterion is an objective (utility) function together with a closed-form solution in simple cases. The economist Paul Samuelson argued that it was an arbitrary utility function, and he successfully kept it out of mainstream economics. But he was wrong. We now know, largely through the work of Tom Cover at Stanford, that Kelly's proposal is objectively optimal: it maximizes long-term wealth, it minimizes the risk of ruin, and in a game-theoretic sense, it is competitively optimal, even over the short term. One of Cover's most surprising contributions to portfolio theory was the universal portfolio, related to universal compression in information theory, which performs asymptotically as well as the best constant-rebalanced portfolio in hindsight. Although the algorithm itself is very abstract, one of the key steps Cover used -- rewriting the multi-period investing problem as a sum of products rather than a product of sums -- reveals the information structure of the investing problem, making it accessible to the techniques of information theory. That same technique is applied here to show that even in the most general form, Kelly's objective factors the investing problem into three terms: a money term, an entropy term, and a divergence term. Because the first two terms are independent of the allocation, the only way to maximize the compounding growth rate is to minimize the friction from the divergence term, which measures, in bits, the difference between the chosen distribution and the unknown true distribution. This means that investing is, fundamentally, a compression problem.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.10758
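The three-term factorization the abstract describes has a classical special case for a horse race with payout odds (Cover and Thomas, Ch. 6): the expected log-growth of betting fractions b splits exactly into a money term (odds), an entropy term, and a KL-divergence term, and only the divergence depends on the allocation. A minimal sketch with hypothetical numbers (not the paper's general factorization):

```python
import math

def growth_rate(p, b, odds):
    """Expected log-growth per round when betting fraction b[i] of wealth
    on outcome i at the given odds, with true probabilities p."""
    return sum(pi * math.log(bi * oi) for pi, bi, oi in zip(p, b, odds))

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p)

def kl(p, b):
    return sum(pi * math.log(pi / bi) for pi, bi in zip(p, b))

p    = [0.5, 0.3, 0.2]    # true outcome probabilities (hypothetical)
odds = [2.2, 3.5, 4.8]    # payout per unit bet (hypothetical)
b    = [0.4, 0.35, 0.25]  # chosen betting fractions, summing to 1

# Decomposition: growth = money term - entropy term - divergence term
money = sum(pi * math.log(oi) for pi, oi in zip(p, odds))
lhs = growth_rate(p, b, odds)
rhs = money - entropy(p) - kl(p, b)
assert abs(lhs - rhs) < 1e-12

# Only the divergence depends on b, so b = p (Kelly betting) is optimal:
assert growth_rate(p, p, odds) > growth_rate(p, b, odds)
```

Since the money and entropy terms are fixed by the market and by nature, maximizing growth is exactly minimizing D(p||b), the "compression friction" the abstract names.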
  2. By: Ecenur Oguz; Robert L. Bray
    Abstract: We develop the first general-purpose estimator for infinite-horizon dynamic discrete choice models whose estimation problem, after pre-computation, is unencumbered by large systems of linear equations -- either imposed as constraints or embedded in the objective function. Our unnested fixed point (UFXP) and optimal unnested fixed point (OUFXP) estimators exploit a dual representation of Bellman's equation to separate the utility parameters from the dynamic programming fixed point. We establish the consistency and asymptotic normality of UFXP and OUFXP, as well as the efficiency of the latter. Our estimators enable researchers to model utility functions non-parametrically via flexible neural-network approximations.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.09736
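For context, the fixed point that nested estimators must solve inside every likelihood evaluation (and that UFXP sidesteps) is the log-sum-exp Bellman equation arising from i.i.d. type-1 extreme value shocks. A minimal sketch of that inner loop for a toy two-state, two-action model with hypothetical utilities and transitions (not the paper's estimator):

```python
import math

beta = 0.95                               # discount factor (hypothetical)
u = {(0, 0): 0.0, (0, 1): 1.0,            # flow utility u(state, action)
     (1, 0): 0.5, (1, 1): -0.2}
P = {(0, 0): [0.9, 0.1], (0, 1): [0.3, 0.7],   # P(next state | state, action)
     (1, 0): [0.5, 0.5], (1, 1): [0.8, 0.2]}

def bellman(V):
    """Log-sum-exp Bellman operator for extreme-value taste shocks."""
    out = []
    for s in (0, 1):
        vals = [u[s, a] + beta * sum(P[s, a][t] * V[t] for t in (0, 1))
                for a in (0, 1)]
        m = max(vals)                      # stabilized log-sum-exp
        out.append(m + math.log(sum(math.exp(v - m) for v in vals)))
    return out

# Successive approximation: the operator is a beta-contraction.
V = [0.0, 0.0]
for _ in range(2000):
    V_new = bellman(V)
    if max(abs(a - b) for a, b in zip(V_new, V)) < 1e-12:
        V = V_new
        break
    V = V_new

# Conditional choice probabilities then follow as a softmax over
# the choice-specific values inside bellman().
```

Re-solving this fixed point (or carrying it as constraints) at every parameter guess is what makes nested approaches costly once the utility function is a neural network.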
  3. By: Christopher D. Long
    Abstract: We study the finite mutually exclusive outcome version of risk-constrained Kelly optimization with explicit state prices. The market has outcome probabilities $p_i>0$, state prices $q_i>0$, terminal wealths $W_i=c+x_i/q_i$, and a drawdown-surrogate constraint \[ \sum_{i=1}^n p_i W_i^{-\lambda}\le 1, \qquad \lambda>0. \] For constant relative risk aversion utility, we work primarily in the standard overround regime $\sum_i q_i>1$, where every optimizer is necessarily non-full-support. Under the usual unique likelihood-ratio prefix hypothesis for the unconstrained problem, we prove that the constrained optimizer has exactly the same active set. Thus, in the regime where the prefix theorem is meaningful, the risk constraint deforms the funded wealth profile but does not change the active set. The support is therefore invariant across both the CRRA parameter and the drawdown-surrogate parameter. We then isolate the logarithmic case $\gamma=1$. Once the common active prefix is known, the constrained problem reduces to a one-dimensional outer calibration together with independent one-dimensional inner equations on the active states. In this case we prove existence, uniqueness, and monotonicity for the inner solves, derive a complete calibration theorem, and record the resulting structured algorithm. We treat the fair and subfair regimes only as boundary cases: full-support phenomena can occur there, so the overround prefix theory no longer yields a parallel exact description of comparable sharpness. A numerical example illustrates how the risk constraint alters the funded wealth profile while leaving support unchanged.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.11577
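The "likelihood-ratio prefix" structure the abstract invokes is classical in the unconstrained logarithmic case with a cash reserve (gambling with a cash option, as in Cover and Thomas): sort states by p_i/q_i, fund a prefix, hold the rest as cash c = (1-P_A)/(1-Q_A), with W_i = p_i/q_i on funded states. A minimal sketch with hypothetical numbers (not the paper's constrained calibration algorithm):

```python
def log_optimal(p, q):
    """Unconstrained log-utility optimizer with initial wealth 1, cash c,
    stakes x[i], and terminal wealth W_i = c + x[i]/q[i], in the overround
    regime sum(q) > 1. The active set is a prefix in the ratio p_i/q_i."""
    order = sorted(range(len(p)), key=lambda i: p[i] / q[i], reverse=True)
    P = Q = 0.0
    active = []
    for i in order:
        if Q + q[i] >= 1.0:
            break                              # cannot fund and keep cash
        t = (1.0 - (P + p[i])) / (1.0 - (Q + q[i]))
        if p[i] / q[i] <= t:
            break                              # ratio fell below threshold
        P += p[i]
        Q += q[i]
        active.append(i)
    c = (1.0 - P) / (1.0 - Q)                  # cash = wealth on unfunded states
    x = [0.0] * len(p)
    for i in active:
        x[i] = q[i] * (p[i] / q[i] - c)        # stake so that W_i = p_i / q_i
    return c, x, active

# Example: overround market, sum(q) = 1.2; only state 0 gets funded.
c, x, active = log_optimal([0.5, 0.3, 0.2], [0.4, 0.4, 0.4])
```

In this example c = 5/6, the budget c + sum(x) = 1 is exhausted, and the support is the single state with the highest likelihood ratio; the paper's result is that the risk constraint deforms the funded wealth profile without changing this active set.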
  4. By: Avidit Acharya; Jens Hainmueller; Yiqing Xu
    Abstract: Conjoint experiments randomize multidimensional profiles, offering a powerful design for recovering structural preference parameters -- including marginal rates of substitution, willingness to pay, and the distribution of preferences across a population. Yet the dominant approach in political science has focused on nonparametric causal estimands that do not leverage this potential. We propose a structural approach that embeds a deep neural network within a random utility logit model, allowing preference parameters to vary as a fully flexible function of respondent characteristics. The neural network addresses the concern that a parametric specification may not capture the true data-generating process, while double/debiased machine learning provides valid inference on average preference parameters. We apply our method to three prominent conjoint studies and find rich preference heterogeneity masked by reduced-form averages: a near-zero gender effect coexists with 83% preferring female candidates, opposition to undemocratic behavior is near-universal but varies sharply in intensity, and progressive tax preferences cut across every partisan subgroup.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.10845
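The building block being estimated is a binary logit over profile pairs, with respondent-specific preference weights. A minimal sketch, in which a hypothetical linear map stands in for the paper's deep neural network and all numbers are invented:

```python
import math

def choose_prob(beta, x_a, x_b):
    """Probability a respondent picks profile A over B under a binary
    random-utility logit: P(A) = sigmoid(beta . (x_a - x_b))."""
    z = sum(bi * (xa - xb) for bi, xa, xb in zip(beta, x_a, x_b))
    return 1.0 / (1.0 + math.exp(-z))

def beta_of(z_resp, W, b0):
    """Respondent-specific weights as a function of characteristics.
    A linear map stands in here for the flexible neural network."""
    return [sum(wij * zj for wij, zj in zip(row, z_resp)) + b0i
            for row, b0i in zip(W, b0)]

# Hypothetical numbers: 2 respondent covariates -> 3 attribute weights.
W  = [[0.5, -0.2], [0.0, 1.0], [0.3, 0.3]]
b0 = [0.1, -0.4, 0.2]
z  = [1.0, 0.5]                    # respondent characteristics
beta = beta_of(z, W, b0)

x_a = [1.0, 0.0, 1.0]              # attribute vectors of the two profiles
x_b = [0.0, 1.0, 0.0]
pA = choose_prob(beta, x_a, x_b)
```

Once beta varies with z, structural quantities follow directly: e.g. a marginal rate of substitution between attributes i and j for this respondent is beta[i]/beta[j], and averaging over the z distribution recovers the population-level parameters the paper targets.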
  5. By: Marc Deschamps (Université Marie et Louis Pasteur); Dongshuang Hou (Department of Applied Mathematics, Northwestern Polytechnical University); Aymeric Lardon (Université Jean Monnet Saint-Étienne, CNRS, Université Lyon 2, GATE Lyon Saint-Étienne); Christian Trudeau (Department of Economics, University of Windsor)
    Abstract: We consider a homogeneous Cournot oligopoly where the inverse demand function is obtained by the utility maximization of a representative consumer, and firms may operate at different marginal costs. Assuming that some firms form a cartel while others remain independent, we introduce three new classes of TU-games, referred to as welfare TU-games, corresponding respectively to consumer surplus, total profit, and total welfare. Our results show that the games associated with consumer surplus and total welfare are monotonically decreasing and concave, highlighting a snowball effect of cartel formation on these two welfare measures. In contrast, the game associated with total profit is never superadditive, but it is monotonically increasing and concave when the number of firms is sufficiently small. Furthermore, we apply allocation methods, including the Shapley value and the serial method, to determine ex ante fair fines that firms must pay for participating in the cartel, allowing fines to be differentiated both by the order of arrival in the cartel and by the firms' technologies. For instance, in certain scenarios, some inefficient firms may receive lower fines for joining the cartel due to cost synergies.
    Keywords: Cournot competition; Cartel; Welfare; Shapley value; Antitrust.
    JEL: C71 D43 K21 L40
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:wis:wpaper:2601
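The Shapley value mentioned as a fine-allocation rule is exactly an average of marginal contributions over all arrival orders, which is why it can differentiate fines by order of arrival. A minimal sketch with a hypothetical three-firm characteristic function (not the paper's welfare games):

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley value: each player's marginal contribution to the coalition
    of earlier arrivals, averaged over all arrival orders. v maps
    frozensets of players to worths."""
    phi = {i: 0.0 for i in players}
    for order in permutations(players):
        coalition = frozenset()
        for i in order:
            phi[i] += v[coalition | {i}] - v[coalition]
            coalition = coalition | {i}
    n_fact = factorial(len(players))
    return {i: phi[i] / n_fact for i in players}

# Hypothetical total-fine game: v(S) = harm attributable to cartel S.
v = {frozenset(): 0.0,
     frozenset({1}): 0.0, frozenset({2}): 0.0, frozenset({3}): 0.0,
     frozenset({1, 2}): 4.0, frozenset({1, 3}): 3.0, frozenset({2, 3}): 2.0,
     frozenset({1, 2, 3}): 9.0}
fines = shapley([1, 2, 3], v)

# Efficiency: the fines exactly exhaust the grand-coalition harm.
assert abs(sum(fines.values()) - v[frozenset({1, 2, 3})]) < 1e-9
```

Here the fines come out as (3.5, 3.0, 2.5) for firms 1, 2, 3: firms whose arrival creates more incremental harm pay more, even though the singleton worths are identical.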
  6. By: Hideo Konishi (Boston College); Ryo Tsukamoto (Boston College)
    Abstract: In recent years, lumpy technological innovations have occurred with increasing frequency, affecting workers’ wages in ways that depend on their skills and abilities. This paper proposes a framework to evaluate which types of workers gain or lose during the interim period following the introduction of a major technological innovation. Using a cooperative game-theoretic approach, we develop a model of large labor markets with finitely many types of atomless workers and a finite set of available technologies, each described by a pair consisting of a labor input vector and an output value. The equilibrium wage structure is characterized by the f-core (the coalition structure core with finite-membership coalitions, as in Kaneko and Wooders (1986)) of an atomless transferable utility game generated by the model. The generically unique equilibrium wage rates can be computed efficiently, as the f-core allocation maximizes total production value. Within this framework, we analyze how the equilibrium wage structure for heterogeneous worker types changes when a new technology becomes available. We show that, under mild conditions, the introduction of a relevant and efficient technology almost always disadvantages at least one type of labor, while benefiting another. However, when there are more than two labor types, identifying which types gain and which lose is generally nontrivial. We also show that if the population of a given worker type increases, holding available technologies fixed, the wage rate for that type weakly decreases.
    Keywords: technological innovation, wage structure, f-core, large game, activity analysis
    JEL: C71 D33 J31
    Date: 2026–04–06
    URL: https://d.repec.org/n?u=RePEc:boc:bocoec:1109
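The remark that equilibrium wages can be computed by maximizing total production value has a standard linear-programming reading: wages are the dual prices of the output-maximization problem, so on active technologies they satisfy zero profit (wage bill per unit equals output value). A minimal sketch with a hypothetical two-type, two-technology economy, assuming both technologies are active at the optimum (not the paper's general model):

```python
def solve2(a, b):
    """Solve the 2x2 linear system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

# Hypothetical economy: technology t uses labor vector A[t] per unit
# of activity and produces output value v[t]; L is the worker population.
A = [[1.0, 1.0],    # tech 0: one unit of each worker type
     [2.0, 1.0]]    # tech 1: two of type 0, one of type 1
v = [3.0, 4.0]
L = [10.0, 8.0]

# Efficient allocation with both technologies active: labor markets clear,
# so activity levels y solve  A^T y = L.
y = solve2([[A[0][0], A[1][0]], [A[0][1], A[1][1]]], L)
total_output = sum(vi * yi for vi, yi in zip(v, y))

# Equilibrium wages: zero profit on active technologies,  A w = v.
w = solve2(A, v)

# With full employment, the wage bill exhausts total output.
assert abs(sum(wi * Li for wi, Li in zip(w, L)) - total_output) < 1e-9
```

In this example y = (6, 2), total output is 26, and wages come out as w = (1, 2); introducing a new technology would change which zero-profit equations bind and thus reshuffle the wage vector, which is the comparative-statics exercise the paper studies.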
  7. By: Huisheng Wang; H. Vicky Zhao
    Abstract: Herding, where investors imitate others' decisions rather than relying on their own analysis, is a prevalent phenomenon in financial markets. Excessive herding distorts rational decisions, amplifies volatility, and can be exploited by manipulators to harm the market. Traditional regulatory tools, such as information disclosure and transaction restrictions, are often imprecise and lack theoretical guarantees for effectiveness. This calls for a quantitative approach to regulating herding. We propose a regulator-leader-follower trilateral game framework based on optimal control theory to study the complex dynamics among the three parties. The leader makes rational decisions, the follower maximizes utility while aligning with the leader's decisions, and the regulator designs a mechanism to maximize social welfare and minimize regulatory cost. We derive the follower's decisions and the regulator's mechanisms, theoretically analyze the impact of regulation on decisions, and investigate effective mechanisms to improve social welfare.
    Date: 2026–04
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2604.11100
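The follower's trade-off between private information and alignment with the leader can be pictured with an entirely hypothetical one-shot quadratic objective (a toy illustration, not the paper's optimal control model): maximizing -(x - theta)^2 - kappa * (x - x_L)^2 gives a decision that is a weighted average of the private signal theta and the leader's decision x_L, with the herding weight kappa setting the mix.

```python
def follower_decision(theta, x_leader, kappa):
    """Maximizer of the toy objective -(x - theta)^2 - kappa*(x - x_leader)^2:
    first-order condition gives a weighted average of the private signal
    and the leader's decision. kappa >= 0 is the herding weight."""
    return (theta + kappa * x_leader) / (1.0 + kappa)

theta, x_leader = 1.0, 3.0                        # hypothetical signals
weak   = follower_decision(theta, x_leader, 0.1)  # mild herding
strong = follower_decision(theta, x_leader, 5.0)  # strong herding

# Larger kappa pulls the decision away from the private signal
# and toward the leader.
assert abs(weak - theta) < abs(strong - theta)
```

In this toy picture, a regulatory mechanism that effectively lowers kappa (e.g. by penalizing pure imitation) moves decisions back toward private information; the paper's contribution is to make that intuition precise in a dynamic setting with explicit welfare and regulatory-cost objectives.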

This nep-upt issue is ©2026 by Alexander Harin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the Griffith Business School of Griffith University in Australia.