nep-big New Economics Papers
on Big Data
Issue of 2017‒11‒19
two papers chosen by
Tom Coupé
University of Canterbury

  1. The Effect of Positive Mood on Cooperation in Repeated Interaction By Proto, Eugenio; Sgroi, Daniel; Nazneen, Mahnaz
  2. Machine learning for dynamic incentive problems By Philipp Renner; Simon Scheidegger

  1. By: Proto, Eugenio (Department of Economics, University of Warwick, CAGE and IZA); Sgroi, Daniel (Department of Economics, University of Warwick, CAGE and Nuffield College, University of Oxford); Nazneen, Mahnaz (Department of Economics, University of Warwick)
    Abstract: Existing research supports two opposing mechanisms through which positive mood might affect cooperation. Some studies have suggested that positive mood produces more altruistic, open and helpful behavior, fostering cooperation. However, there is contrasting research supporting the idea that positive mood produces more assertiveness, inward-orientation and reduced use of information, hampering cooperation. We find evidence that suggests the second hypothesis dominates when playing the repeated Prisoner's Dilemma. Players in an induced positive mood tend to cooperate less than players in a neutral mood setting. This holds regardless of uncertainty surrounding the number of repetitions or whether pre-play communication has taken place. This finding is consistent with a text analysis of the pre-play communication between players, indicating that subjects in a more positive mood use more inward-oriented, more negative and less positive language. To the best of our knowledge, we are the first to use text analysis in pre-play communication.
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:1141&r=big
  2. By: Philipp Renner; Simon Scheidegger
    Abstract: We propose a generic method for solving infinite-horizon, discrete-time dynamic incentive problems with hidden states. We first combine set-valued dynamic programming techniques with Bayesian Gaussian mixture models to determine irregularly shaped equilibrium value correspondences. Second, we generate training data from those pre-computed feasible sets to recursively solve the dynamic incentive problem by a massively parallelized Gaussian process machine learning algorithm. This combination enables us to analyze models of a complexity that was previously considered to be intractable. To demonstrate the broad applicability of our framework, we compute solutions for models of repeated agency with history dependence, many types, and varying preferences.
    Keywords: Dynamic Contracts, Principal-Agent Model, Dynamic Programming, Machine Learning, Gaussian Processes, High-performance Computing
    JEL: C61 C73 D82 D86 E61
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:lan:wpaper:203620397&r=big
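
    The core machine-learning step in this abstract — using Gaussian process regression to approximate a value function from pre-computed training points — can be illustrated with a minimal sketch. The kernel choice, the promised-utility grid, and the toy value function below are hypothetical stand-ins, not the authors' implementation (which is set-valued, massively parallel, and trained on computed feasible sets):

    ```python
    import numpy as np

    def rbf_kernel(X1, X2, length_scale=0.5, variance=1.0):
        """Squared-exponential (RBF) covariance between two 1-D input sets."""
        d = X1[:, None] - X2[None, :]
        return variance * np.exp(-0.5 * (d / length_scale) ** 2)

    def gp_posterior_mean(X_train, y_train, X_test, noise=1e-6):
        """GP regression: posterior mean at X_test given noisy observations."""
        K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
        K_s = rbf_kernel(X_test, X_train)
        # Solve K @ alpha = y rather than inverting K (numerically stabler).
        alpha = np.linalg.solve(K, y_train)
        return K_s @ alpha

    # Hypothetical value function sampled on a promised-utility grid,
    # standing in for the paper's pre-computed feasible-set data.
    X_train = np.linspace(0.0, 1.0, 20)
    y_train = np.log(1.0 + X_train)
    X_test = np.array([0.25, 0.75])
    pred = gp_posterior_mean(X_train, y_train, X_test)
    ```

    In a dynamic-programming loop, such a GP surrogate would be re-fit at each value-iteration step, so the solver only ever evaluates the Bellman operator at a modest number of grid points.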

This nep-big issue is ©2017 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.