nep-upt New Economics Papers
on Utility Models and Prospect Theory
Issue of 2009‒08‒30
thirteen papers chosen by
Alexander Harin
Modern University for the Humanities

  1. Shape invariant modelling pricing kernels and risk aversion By Maria Grith; Wolfgang Härdle; Juhyun Park
  2. Utility, games, and narratives By Fioretti, Guido
  3. Updating Choquet valuation and discounting information arrivals By André Lapied; Robert Kast
  4. Gratuitous Violence and the Rational Offender Model By Foreman-Peck, James; Moore, Simon
  5. The effect of uncertainty on decision making about climate change mitigation. A numerical approach of stochastic control By Thomas S. Lontzek; Daiju Narita
  6. Weak moral motivation leads to the decline of voluntary contributions By Charles FIGUIERES; Marc WILLINGER; David MASCLET
  7. Long-term risk management for utility companies: the next challenges By René Aïd
  8. Statistical modelling of financial crashes: Rapid growth, illusion of certainty and contagion By John M. Fry
  9. Willingness to Pay to Reduce Future Risk By Jim Engle-Warnick; Julie Héroux; Claude Montmarquette
  10. Decision Making Using Rating Systems: When Scale Meets Binary By Bargagliotti, Anna E.; Li, Lingfang (Ivy)
  11. Exploring Time-Varying Jump Intensities: Evidence from S&P500 Returns and Options By Peter Christoffersen; Kris Jacobs; Chayawat Ornthanalai
  12. The Forward- and the Equity-Premium Puzzles: Two Symptoms of the Same Illness? By Costa, Carlos Eugênio da; Issler, João Victor; Matos, Paulo F.
  13. Option-Implied Measures of Equity Risk By Bo-Young Chang; Peter Christoffersen; Kris Jacobs; Gregory Vainberg

  1. By: Maria Grith; Wolfgang Härdle; Juhyun Park
    Abstract: Pricing kernels play a major role in quantifying risk aversion and investors' preferences. Several empirical studies have reported that pricing kernels exhibit a common pattern across different markets. Comparisons are mostly made by visual inspection and occasionally by numerical summaries. With the increasing amount of information updated every day, the empirical pricing kernels can be viewed as an object evolving over time. We propose a systematic modelling approach to describing the evolution of the empirical pricing kernels. The approach is based on shape invariant models. It captures the common features contained in the shape of the functions and at the same time characterises the variability between the pricing kernels based on a few interpretable parameters. The method is demonstrated with European options and return values of the DAX index.
    Keywords: pricing kernels, risk aversion, risk neutral density
    JEL: C14 C32 G12
    Date: 2009–08
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009-041&r=upt
  2. By: Fioretti, Guido
    Abstract: This paper provides a general overview of theories and tools to model individual and collective decision-making. In particular, stress is laid on the interaction of several decision-makers. A substantial part of this paper is devoted to utility maximization and its application to collective decision-making, Game Theory. However, the pitfalls of utility maximization are thoroughly discussed, and the radically alternative approach of viewing decision-making as constructing narratives is presented with its emerging computational tools.
    Keywords: Interactions; Collective Decision-Making
    JEL: C79 D79
    Date: 2009–08–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:16976&r=upt
  3. By: André Lapied (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales - CNRS : UMR6579); Robert Kast (LAMETA - Laboratoire Montpellierain d'économie théorique et appliquée - CNRS : UMR5474 - INRA : UR1135 - CIHEAM - Université Montpellier I - Montpellier SupAgro)
    Abstract: We explore different possible definitions for conditional Choquet integrals and their implications for updating capacities. Many recent works consider relaxing dynamic consistency within Choquet Expected Utility models, but all of them deal with models where time is not explicitly introduced. We confront the different definitions with dynamic consistency when information arrives along with time through a Choquet version of the Net Present Value. We show that only one definition is dynamically consistent in a decision model where time is discounted according to the agent's preferences. However, it violates consequentialism because all future outcomes must be taken into consideration.
    Keywords: Conditional Expectations, Updating, Choquet Expected Utility, Learning, Dynamic Consistency, Discounting
    Date: 2009–08–21
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-00410532_v1&r=upt
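At the heart of this literature is the discrete Choquet integral: order the outcomes, then weight each by the capacity difference of its upper-level sets. A minimal Python sketch of the unconditional integral (the capacities used in the test case are illustrative; the paper's conditional and updated versions are not reproduced here):

```python
def choquet_integral(values, capacity):
    """Discrete Choquet integral of an act with respect to a capacity.

    values:   dict mapping each state to a real-valued outcome.
    capacity: function from a frozenset of states to [0, 1]; it should
              be monotone with capacity(empty set) = 0 and
              capacity(all states) = 1.
    """
    states = sorted(values, key=values.get)  # states in ascending outcome order
    total = 0.0
    # Sum_i f(x_(i)) * [v({x_(i),...,x_(n)}) - v({x_(i+1),...,x_(n)})]
    for i in range(len(states)):
        upper = frozenset(states[i:])
        rest = frozenset(states[i + 1:])
        total += values[states[i]] * (capacity(upper) - capacity(rest))
    return total
```

With an additive capacity (a probability measure) the Choquet integral reduces to the ordinary expectation; with a convex capacity it lies below it, the standard representation of ambiguity aversion in Choquet Expected Utility.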
  4. By: Foreman-Peck, James (Cardiff Business School); Moore, Simon
    Abstract: Rational offender models assume that individuals choose whether to offend by weighing the rewards against the chances of apprehension and the penalty if caught. While evidence indicates that rational theory is applicable to acquisitive crimes, the explanatory power for gratuitous non-fatal violent offending has not been evaluated. Lottery-type questions elicited risk attitudes and time preferences from respondents in a street survey. Admitted violent behaviour was predictable on the basis of some of these responses. Consistent with the rational model, less risk averse and more impatient individuals were more liable to violence. Such people were also more likely to be victims of violence. In line with a 'subjective' version of the rational model, respondents with lower estimates of average violence conviction chances and of fines were more prone to be violent.
    Keywords: Violence; alcohol; risk; intertemporal choice; rational offending
    JEL: D81 D9 K14
    Date: 2009–08
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2009/12&r=upt
  5. By: Thomas S. Lontzek; Daiju Narita
    Abstract: We apply standardized numerical techniques of stochastic optimization (Judd [1998]) to the climate change issue. The model captures the feature that the effects of uncertainty differ with the agent's level of risk aversion. A major finding is that the effect of stochasticity on emission control can differ even in sign as parameters vary: introducing stochasticity may increase or decrease emission control depending on the parameter settings. In other words, uncertainty about climatic trends may induce precautionary emission reductions, but it may also drive money away from abatement.
    Keywords: climate change and uncertainties, stochastic control, climate policy
    JEL: C63 Q54 D81
    Date: 2009–08
    URL: http://d.repec.org/n?u=RePEc:kie:kieliw:1539&r=upt
  6. By: Charles FIGUIERES; Marc WILLINGER; David MASCLET
    Abstract: This paper provides a general framework that accounts for the decay of the average contribution observed in most experiments on voluntary contributions to a public good. Each player balances her material utility loss from contributing against her psychological utility loss from deviating from her moral ideal. The novel and central idea of our model is that people's moral motivation is "weak": their judgement about the right contribution to a public good can evolve in the course of interactions, depending partly on observed past contributions and partly on an intrinsic "moral ideal". Under the assumption of weakly morally motivated agents, average voluntary contributions can decline with repetition of the game. Our model also explains other regularities observed in experiments, in particular the phenomenon of over-contribution relative to the Nash prediction and the so-called restart effect, and it is compatible with the conditional cooperation hypothesis.
    Date: 2009–08
    URL: http://d.repec.org/n?u=RePEc:lam:wpaper:09-09&r=upt
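The mechanism described in the abstract, a moral reference point that drifts toward observed behaviour, can be mimicked in a few lines. The following is a toy simulation, not the authors' model; all functional forms and parameter names (`weight`, `ideal`, `material_cost`) are illustrative assumptions:

```python
import numpy as np

def simulate_contributions(n=10, rounds=10, weight=0.7, ideal=0.8,
                           material_cost=0.6, seed=1):
    """Toy sketch of 'weak moral motivation': each player's moral
    reference point mixes an intrinsic ideal with the observed average
    past contribution, and she contributes a fraction of that reference
    net of the material cost of giving. Illustrative only."""
    rng = np.random.default_rng(seed)
    refs = np.full(n, ideal)            # initial moral reference points
    path = []
    for _ in range(rounds):
        # contribution trades off the moral reference against material cost
        c = np.clip(refs * (1 - material_cost) + rng.normal(0, 0.02, n), 0, 1)
        avg = c.mean()
        path.append(avg)
        # the reference drifts toward the observed average contribution
        refs = weight * avg + (1 - weight) * ideal
    return path
```

Under these assumed parameters the average contribution starts positive (above the zero Nash prediction) and declines over the rounds, the qualitative pattern the paper sets out to explain.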
  7. By: René Aïd (EDF R&D - EDF, FiME Lab - Laboratoire de Finance des Marchés d'Energie - Université Paris Dauphine - Paris IX - CREST - EDF R&D)
    Abstract: Since the liberalisation of European energy markets at the beginning of the 1990s, electricity monopolies have gone through a profound evolution. From an industrial organisation point of view, they lost the monopoly on their historical business but gained the capacity to develop in any sector. Companies went public and had to upgrade their financial risk management processes to international standards, implementing modern risk management concepts and reporting processes (VaR, EaR...). Even though important evolutions have been accomplished, we argue here that the long-term risk management process of utility companies has not yet reached full maturity and still faces two main challenges. The first concerns the time consistency of long-term and mid-term risk management processes. We show that consistency issues arise from the different classical financial parameters carrying information on a firm's risk aversion (cost of capital and short-term risk limits) and from concepts inherited from the monopoly period, such as the loss of load value, that are still involved in utility companies' decision-making. The second challenge concerns the need for quantitative models with which utilities can assess their business model. With deregulation, utilities have to address the question of their boundaries. Although intuition can provide insights into the benefits of some firm structures, such as vertical integration, only sound and tractable quantitative models can assess the optimality of the different possible firm structures.
    Keywords: electricity markets; risk management; investment decision; long-term risk
    Date: 2008–12–30
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00409030_v4&r=upt
  8. By: John M. Fry
    Abstract: We develop a rational expectations model of financial bubbles and study ways in which a generic risk-return interplay is incorporated into prices. We retain the interpretation of the leading Johansen-Ledoit-Sornette model, namely, that the price must rise prior to a crash in order to compensate a representative investor for the level of risk. This is accompanied, in our stochastic model, by an illusion of certainty as described by a decreasing volatility function. The basic model is then extended to incorporate multivariate bubbles and contagion, non-Gaussian models and models based on stochastic volatility. Only in a stochastic volatility model where the mean of the log-returns is considered fixed does volatility increase prior to a crash.
    Keywords: Financial crashes, super-exponential growth, illusion of certainty, contagion, housing-bubble.
    JEL: C00 E30 G10
    Date: 2009–10–08
    URL: http://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2009_10&r=upt
  9. By: Jim Engle-Warnick; Julie Héroux; Claude Montmarquette
    Abstract: We elicit subjects' willingness to pay to reduce future risk. In our experiments, subjects are given a cash endowment and a risky lottery. They report their willingness to pay to exchange the risky lottery for a safe one. Subjects play the lottery either immediately, eight weeks later, or twenty-five weeks later. Thus, both the lottery and the future are sources of uncertainty in our experiments. In two additional treatments, we control for future uncertainty with a continuation probability, constant and independent across periods, that simulates the chances of not returning to play the lottery after eight and twenty-five periods. We find evidence for present bias in both the time-delay sessions and the continuation probability sessions, suggesting that this bias robustly persists in environments including both risk and future uncertainty, and suggesting that the stopping rule may be a tool to continue study in this area without the need to delay payments into the future.
    Keywords: Hyperbolic discounting, uncertainty, risk, experiments
    JEL: C91 D81
    Date: 2009–08–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2009s-37&r=upt
  10. By: Bargagliotti, Anna E.; Li, Lingfang (Ivy)
    Abstract: Rating systems measuring the quality of products and services (i.e., the state of the world) are widely used to solve the asymmetric information problem in markets. Decision makers typically make binary decisions, such as buy/hold/sell, based on aggregated individuals' opinions presented in the form of ratings. Problems arise, however, when different rating metrics and aggregation procedures translate the same underlying popular opinion into different conclusions about the true state of the world. This paper investigates the inconsistency problem by examining the mathematical structure of the metrics and their relationship to the aggregation rules. It is shown that at the individual level, the only scale metric (1, . . . , N) that reports people's opinions equivalently in a binary metric (-1, 0, 1) is one where N is odd and N-1 is not divisible by 4. At the aggregation level, however, the inconsistencies persist regardless of which scale metric is used. In addition, this paper provides simple tools to determine whether the binary and scale rating systems report the same information at the individual level, as well as when the systems differ at the aggregation level.
    Keywords: rating, ranking, preference, asymmetric information
    JEL: D70 D82
    Date: 2009–08–25
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:16947&r=upt
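The paper's individual-level condition (N odd and N-1 not divisible by 4, i.e. N = 3, 7, 11, ...) is easy to enumerate. A small sketch; the midpoint projection `to_binary` below is one natural scale-to-binary mapping chosen here for illustration, not necessarily the paper's construction:

```python
def to_binary(rating, n):
    """Project a rating on the 1..n scale onto {-1, 0, 1} by its
    position relative to the scale midpoint (an illustrative choice)."""
    mid = (n + 1) / 2
    return (rating > mid) - (rating < mid)

def meets_condition(n):
    """The paper's individual-level condition: n odd, n-1 not divisible by 4."""
    return n % 2 == 1 and (n - 1) % 4 != 0
```

Enumerating n from 2 to 15 with `meets_condition` selects 3, 7, 11 and 15: every odd scale with a well-defined midpoint, except those (5, 9, 13) where n-1 is a multiple of 4.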
  11. By: Peter Christoffersen; Kris Jacobs; Chayawat Ornthanalai
    Abstract: Standard empirical investigations of jump dynamics in returns and volatility are fairly complicated due to the presence of latent continuous-time factors. We present a new discrete-time framework that combines heteroskedastic processes with rich specifications of jumps in returns and volatility. Our models can be estimated with ease using standard maximum likelihood techniques. We provide a tractable risk neutralization framework for this class of models which allows for separate modeling of risk premia for the jump and normal innovations. We anchor our models in the literature by providing continuous time limits of the models. The models are evaluated by fitting a long sample of S&P500 index returns, and by valuing a large sample of options. We find strong empirical support for time-varying jump intensities. A model with jump intensity that is affine in the conditional variance performs particularly well both in return fitting and option valuation. Our implementation allows for multiple jumps per day, and the data indicate support for this model feature, most notably on Black Monday in October 1987. Our results also confirm the importance of jump risk premia for option valuation: jumps cannot significantly improve the performance of option pricing models unless sizeable jump risk premia are present.
    Keywords: compound Poisson process, option valuation, filtering, volatility jumps, jump risk premia, time-varying jump intensity, heteroskedasticity
    JEL: G12
    Date: 2009–08–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2009s-34&r=upt
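The core ingredients, heteroskedastic variance plus a compound-Poisson jump whose intensity is affine in the conditional variance, can be sketched in discrete time as follows. This is a toy data-generating process with made-up parameter values, not the authors' estimated model or its risk neutralization:

```python
import numpy as np

def simulate_returns(T=1000, omega=1e-6, alpha=0.05, beta=0.9,
                     lam0=0.01, lam1=50.0, mu_j=-0.02, sig_j=0.03, seed=0):
    """Toy discrete-time model: GARCH(1,1)-style conditional variance
    plus a compound-Poisson jump whose intensity is affine in that
    variance. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    h = omega / (1 - alpha - beta)       # start at the unconditional variance
    r = np.empty(T)
    for t in range(T):
        lam = lam0 + lam1 * h            # affine jump intensity
        n_jumps = rng.poisson(lam)       # allows multiple jumps per day
        jump = rng.normal(mu_j, sig_j, n_jumps).sum() if n_jumps else 0.0
        z = rng.normal(0.0, np.sqrt(h))  # normal innovation
        r[t] = z + jump
        h = omega + alpha * z**2 + beta * h   # update conditional variance
    return r
```

Because the daily jump count is Poisson rather than binary, several jumps can land on the same day; that is the model feature the authors find support for on Black Monday in October 1987.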
  12. By: Costa, Carlos Eugênio da; Issler, João Victor; Matos, Paulo F.
    Abstract: We build a pricing kernel using only US domestic assets data and check whether it accounts for foreign-market stylized facts that escape consumption-based models. By interpreting our stochastic discount factor as the projection of a pricing kernel from a fully specified model in the space of returns, our results indicate that a model that accounts for the behavior of domestic assets goes a long way toward accounting for the behavior of foreign assets. We address predictability issues associated with the forward premium puzzle by: i) using instruments that are known to forecast excess returns in the moment restrictions associated with Euler equations, and ii) pricing Lustig and Verdelhan (2007)'s foreign-currency portfolios. Our results indicate that the relevant state variables that explain foreign-currency market asset prices are also the driving forces behind the behavior of U.S. domestic assets.
    Date: 2009–08–12
    URL: http://d.repec.org/n?u=RePEc:fgv:epgewp:697&r=upt
  13. By: Bo-Young Chang; Peter Christoffersen; Kris Jacobs; Gregory Vainberg
    Abstract: Equity risk measured by beta is of great interest to both academics and practitioners. Existing estimates of beta use historical returns. Many studies have found option-implied volatility to be a strong predictor of future realized volatility. We find that option-implied volatility and skewness are also good predictors of future realized beta. Motivated by this finding, we establish a set of assumptions needed to construct a beta estimate from option-implied return moments using equity and index options. This beta can be computed using only option data on a single day. It is therefore potentially able to reflect sudden changes in the structure of the underlying company.
    Keywords: market beta; CAPM; historical; capital budgeting; model-free moments
    JEL: G12
    Date: 2009–08–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2009s-33&r=upt
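A beta built from option-implied moments typically combines the implied variances and skewnesses of the stock and the index. The sketch below uses the functional form associated with this line of work, beta = (skew_i/skew_m)^(1/3) * (var_i/var_m)^(1/2); treat it as an illustration with hypothetical inputs rather than the paper's exact estimator:

```python
def implied_beta(var_i, skew_i, var_m, skew_m):
    """Option-implied beta of the form
        beta = (skew_i / skew_m) ** (1/3) * (var_i / var_m) ** 0.5,
    combining risk-neutral variance and skewness of the stock (i) and
    the index (m). Inputs here are hypothetical moments, not estimates
    from the paper."""
    ratio = skew_i / skew_m
    if ratio <= 0:
        raise ValueError("skewness ratio must be positive for a real cube root")
    return ratio ** (1 / 3) * (var_i / var_m) ** 0.5
```

Equity and index skewness are typically both negative, so the skewness ratio is positive and the cube root is real; everything on the right-hand side can be extracted from a single day of option prices, which is the practical appeal the abstract highlights.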

This nep-upt issue is ©2009 by Alexander Harin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.