NEP: New Economics Papers on Cognitive and Behavioural Economics
Issue of 2024–10–14
Three papers chosen by Marco Novarese, Università degli Studi del Piemonte Orientale
By: Jin, Shuxian; Spadaro, Giuliana (Vrije Universiteit Amsterdam); Balliet, Daniel
Abstract: Cooperation underlies the ability of groups to realize collective benefits (e.g., the creation of public goods). Yet cooperation can be difficult to achieve when people face situations with conflicting interests between what is best for individuals and what is best for the collective (i.e., social dilemmas). To address this challenge, groups can implement rules about structural changes in a situation. But which institutional rules best facilitate cooperation? Theoretically, rules can be made to affect structural features of a social dilemma, such as the possible actions, outcomes, and people involved. We derived 13 pre-registered hypotheses from existing work and collected six decades of empirical research to test how nine structural features influence cooperation within prisoner’s dilemmas and public goods dilemmas. We do this by meta-analyzing mean levels of cooperation across studies (Study 1, k = 2,340, N = 229,528) and by examining how manipulations of these structural features in social dilemmas affect cooperation within studies (Study 2, k = 909). Results indicated that a lower conflict of interests was associated with higher cooperation, and that (1) the implementation of sanctions (i.e., reward and punishment of behaviors) and (2) allowing for communication most strongly enhanced cooperation. However, we found inconsistent support for the hypotheses that group size and matching design affect cooperation. Other structural features (e.g., symmetry of dilemmas, sequential decision making, payment) were not associated with cooperation. Overall, these findings inform which institutional rules can (or cannot) facilitate cooperation.
Date: 2024–09–02
URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:9r2qb
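
The meta-analytic pooling described for Study 1 can be illustrated with a minimal random-effects sketch. This is not the authors' actual pipeline: the choice of the DerSimonian–Laird estimator, the variable names, and the toy inputs below are assumptions for illustration only.

# Minimal sketch of a random-effects meta-analysis of mean cooperation
# levels. Illustrative only; not the authors' pipeline.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study-level effects with the DerSimonian-Laird estimator."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                       # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled mean
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_star = 1.0 / (variances + tau2)         # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Toy data: mean cooperation rates and their sampling variances
rates = [0.42, 0.55, 0.61, 0.48]
varis = [0.002, 0.004, 0.003, 0.005]
mean, se, tau2 = dersimonian_laird(rates, varis)
print(f"pooled cooperation rate: {mean:.3f} (SE {se:.3f}, tau^2 {tau2:.4f})")

With k = 2,340 studies the same estimator would simply be fed a longer vector of study-level means and variances, typically alongside moderator analyses for each structural feature.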
By: Bašić, Zvonimir (University of Glasgow); Bortolotti, Stefania (University of Bologna); Salicath, Daniel (NAV Norwegian Labour and Welfare Administration); Schmidt, Stefan (Max Planck Institute for Research on Collective Goods); Schneider, Sebastian O. (Max Planck Institute for Research on Collective Goods); Sutter, Matthias (Max Planck Institute for Research on Collective Goods)
Abstract: Incentives are supposed to increase effort, yet individuals react differently to them. We examine this heterogeneity by investigating how personal characteristics, preferences, and socio-economic background relate to incentives and performance in a real-effort task. We analyze the performance of 1,933 high-school students under a Fixed, Variable, or Tournament payment scheme. Productivity and beliefs about relative performance, but hardly any personal characteristics, play a decisive role in performance when payment schemes are exogenously imposed. Only when students can choose their payment scheme do personality traits, economic preferences, and socio-economic background matter. As we show, algorithmic assignment of payment schemes could improve performance, earnings, and utility.
Keywords: effort, productivity, incentives, personality traits, preferences, socio-economic background, ability, heterogeneity, sorting, algorithm, lab-in-the-field experiment
JEL: C93 D91 J24 J41
Date: 2024–09
URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp17287
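
The closing claim about algorithmic assignment can be sketched as: fit a separate performance model per payment scheme and assign each person the scheme with the highest predicted output. The features, model class, and simulated data below are illustrative assumptions, not the paper's specification.

# Minimal sketch of algorithmic assignment of payment schemes.
# Features, model, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
schemes = ["Fixed", "Variable", "Tournament"]

# Toy training data: characteristics X and performance y per scheme
X = rng.normal(size=(300, 4))  # e.g., ability, risk tolerance, beliefs, SES
models = {}
for s in schemes:
    # simulate scheme-specific returns to characteristics
    y = X @ rng.normal(size=4) + rng.normal(scale=0.5, size=300)
    models[s] = LinearRegression().fit(X, y)

# Assign each new student the scheme with the highest predicted performance
X_new = rng.normal(size=(5, 4))
preds = np.column_stack([models[s].predict(X_new) for s in schemes])
assigned = [schemes[i] for i in preds.argmax(axis=1)]
print(assigned)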
By: Ziyan Cui; Ning Li; Huaikang Zhou
Abstract: Artificial Intelligence (AI) is increasingly being integrated into scientific research, particularly in the social sciences, where understanding human behavior is critical. Large Language Models (LLMs) like GPT-4 have shown promise in replicating human-like responses in various psychological experiments. However, the extent to which LLMs can effectively replace human subjects across diverse experimental contexts remains unclear. Here, we conduct a large-scale study replicating 154 psychological experiments from top social science journals, covering 618 main effects and 138 interaction effects, using GPT-4 as a simulated participant. We find that GPT-4 successfully replicates 76.0 percent of main effects and 47.0 percent of interaction effects observed in the original studies, closely mirroring human responses in both direction and significance. However, only 19.44 percent of GPT-4’s replicated confidence intervals contain the original effect sizes, with the majority of replicated effect sizes exceeding the 95 percent confidence interval of the original studies. Additionally, GPT-4 produces unexpected significant results in 71.6 percent of cases where the original studies reported null findings, suggesting potential overestimation or false positives. Our results demonstrate the potential of LLMs as powerful tools in psychological research but also emphasize the need for caution in interpreting AI-driven findings. While LLMs can complement human studies, they cannot yet fully replace the nuanced insights provided by human subjects.
Date: 2024–08
URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.00128
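
The replication metrics in this abstract reduce to two checks per effect: does the GPT-4 estimate agree with the original in direction and significance, and does the replicated 95 percent confidence interval contain the original effect size? A minimal sketch under those assumptions (toy numbers, normal-approximation CIs; not the authors' code):

# Minimal sketch of the replication metrics described in the abstract.
# Toy inputs and normal-approximation CIs; illustrative only.
import numpy as np

def replication_metrics(orig_effect, rep_effect, rep_se):
    orig = np.asarray(orig_effect, float)
    rep = np.asarray(rep_effect, float)
    se = np.asarray(rep_se, float)
    lo, hi = rep - 1.96 * se, rep + 1.96 * se  # replicated 95% CIs
    same_sign = np.sign(orig) == np.sign(rep)  # direction agreement
    significant = (lo > 0) | (hi < 0)          # replication significant
    ci_covers = (orig >= lo) & (orig <= hi)    # CI contains original effect
    return same_sign & significant, ci_covers

replicated, covered = replication_metrics(
    orig_effect=[0.40, 0.25, -0.30, 0.10],
    rep_effect=[0.85, 0.30, -0.70, 0.02],
    rep_se=[0.10, 0.08, 0.12, 0.05],
)
print(f"replicated: {replicated.mean():.0%}, CI coverage: {covered.mean():.0%}")

Aggregating the first flag over the 618 main effects and 138 interaction effects yields the reported replication rates, and the second yields the 19.44 percent coverage figure.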