NEP: New Economics Papers
on Cognitive and Behavioural Economics

Issue of 2026-01-05
five papers chosen by Marco Novarese, Università degli Studi del Piemonte Orientale
| By: | Yigit Oezcelik (University of Liverpool); Michel Tolksdorf (TU Berlin) |
| Abstract: | We conduct an online experiment to assess the effect of the anchoring bias on consumer ratings. We depart from the canonical anchoring literature by implementing non-numerical (visual) anchors in a framed rating task. We compare three anchoring conditions, with either high, low, or socially derived anchors present, against two control conditions: one without anchors and one without framing. Our framing replicates the common observation of overrating. We document asymmetric non-numerical anchoring effects that help explain overrating. Both high anchors and socially derived anchors lead to significant overrating compared to the control condition without anchors. The latter finding is driven by instances of high social anchors. The upward rating bias is exacerbated in a social context, where participants exhibit more trust in anchors. In contrast, low anchors and instances of low social anchors have no effect compared to the control condition without anchors. Beyond consumer ratings, our results may have broader implications for online judgment environments, such as surveys, crowdfunding platforms, and other user interfaces that employ visual indicators (stars, bars, or progress displays). |
| Keywords: | anchoring bias; consumer judgment; economic experiment; online feedback systems; user interface design |
| JEL: | C91 D80 D91 |
| Date: | 2025-12-22 |
| URL: | https://d.repec.org/n?u=RePEc:rco:dpaper:556 |
| By: | James C. Cox (Georgia State University); Cary Deck (University of Alabama and Chapman University, Economic Science Institute); Laura Razzolini (University of Alabama); Vjollca Sadiraj (Georgia State University) |
| Abstract: | Deviations from choices predicted by self-regarding preferences have regularly been observed in standard dictator games. Such behavior is not inconsistent with conventional preference theory or revealed preference theory, which accommodate other-regarding preferences. By contrast, experiments in which giving nothing is not the least generous feasible act produce data that are inconsistent with conventional preference theory, including social preference models, and suggest the possible relevance of reference point models. Two such models are the reference-dependent theory of riskless choice with loss aversion (sketched after this entry) and choice monotonicity in moral reference points. Our experiment includes novel treatments designed to challenge both theoretical models of reference dependence and conventional rational choice theory by poking holes in, or adding to, the dictator's feasible set, along with changes to the initial endowment of the players. Our design creates tests that at most one of these models can pass. However, we do not find that any of these models fully captures behavior. In part, this result is due to behavior in some treatments that differs from previous experiments, for reasons attributable to implementation differences across studies. |
| Keywords: | Rational Choice Theory, Reference Dependence, Behavioral Models, Laboratory Experiments |
| JEL: | C7 C9 D9 |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:chu:wpaper:25-13 |
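The reference-dependent theory of riskless choice with loss aversion named in this abstract is usually traced to Tversky and Kahneman (1991), where outcomes are evaluated against a reference point and losses loom larger than gains. A minimal sketch, assuming the standard piecewise-linear form (the paper's own specification may differ):

```latex
% Value of outcome x relative to reference point r,
% with loss-aversion coefficient \lambda > 1.
v(x \mid r) =
\begin{cases}
  x - r, & x \ge r \\
  \lambda\,(x - r), & x < r
\end{cases}
```

Shifting the reference point r, for instance by changing the players' initial endowments as in this design, relabels the same final allocation as a gain or a loss, which is what gives such treatments their bite.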
| By: | Ala Avoyan; Mauricio Ribeiro; Andrew Schotter |
| Abstract: | There are two ways in which people usually engage with contracts, that is, compensation schemes for executing tasks. They can choose between them (contract choice) or allocate time across them (contract time allocation). In this paper, we study how people behave in each of these problems. A standard model suggests that drafting a cost-effective contract that both induces an agent to choose it and to allocate time to it presents a significant challenge. However, our experimental results indicate that this tradeoff may be less pronounced than the model predicts, due to what we call the attractiveness bias: a tendency for subjects to allocate more time to contracts they find appealing, even when the model suggests those contracts should receive relatively little time. |
| Date: | 2025-04-02 |
| URL: | https://d.repec.org/n?u=RePEc:bri:uobdis:25/808 |
| By: | Taisuke Imai; Salvatore Nunnari; Jilong Wu; Ferdinand M. Vieider |
| Abstract: | We present a meta-analysis of prospect theory (PT) parameters, summarizing data from 166 papers reporting 812 estimates. These parameters capture risk-taking propensities and thus hold interest beyond PT. We develop an inverse-variance weighted method that accounts for correlations in PT parameters and imputes missing information on standard errors (a minimal sketch of inverse-variance weighting follows this entry). The mean patterns align with the stylized facts of diminishing sensitivity towards outcomes and probabilities discussed in PT. Beyond this, the analysis yields several new insights: 1) between-study variation in parameters is vast; 2) heterogeneity is difficult to explain with observable study characteristics; and 3) the strongest predictors are experimental and measurement indicators, revealing systematic violations of procedure invariance. These findings highlight the promise of cognitive accounts of behavior in organizing unexplained variation in risk-taking, which we discuss. |
| Keywords: | prospect theory, probability weighting function, meta-analysis |
| JEL: | C11 D81 D91 |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:ces:ceswps:_12334 |
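The pooling underlying this abstract is inverse-variance weighting: each estimate is weighted by the reciprocal of its squared standard error, so precisely measured studies count more. A minimal Python sketch of the basic fixed-effect version, together with one common one-parameter probability weighting function; the paper's actual method additionally handles correlated parameters and imputed standard errors, and all numbers below are illustrative, not from the paper:

```python
import numpy as np

def ivw_mean(estimates, std_errors):
    """Fixed-effect inverse-variance weighted pooled estimate.

    Each estimate is weighted by 1 / SE^2, so more precisely
    measured studies dominate the pooled mean.
    """
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

def tk_weight(p, gamma=0.61):
    """Tversky & Kahneman (1992) one-parameter probability weighting
    function: inverse-S shaped for gamma < 1, capturing diminishing
    sensitivity to probabilities."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

# Toy example: three hypothetical loss-aversion (lambda) estimates.
pooled, se = ivw_mean([2.25, 1.80, 1.40], [0.30, 0.15, 0.45])
print(f"pooled lambda = {pooled:.3f} (SE = {se:.3f})")
print(f"w(0.1) = {tk_weight(0.1):.3f}  # small probabilities overweighted")
```

In this scheme a study with half the standard error receives four times the weight, which is why between-study variation in precision matters as much as variation in the point estimates themselves.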
| By: | Prochazka, Jakub; Zhou, Jing; Coita, Ioana-Florina; Akhtar, Shumi |
| Abstract: | This report describes a computational reproduction of Lee and Chung's (2024) paper, which examined whether using ChatGPT (GPT-3.5) enhances creativity in adults compared to web-search assistance or no assistance. The authors presented six randomized controlled experiments showing that ChatGPT-assisted responses were rated as significantly more creative (effect sizes ranging from Cohen's d = 0.32 to 1.88). These effects were robust across diverse tasks and contexts. We first computationally reproduced all the main results using the original dataset and code, obtaining the same results as those presented by the authors in their paper. During the reproduction process, we identified two minor coding errors and one typographical error in the original table, none of which affected the substantive conclusions. Second, we re-created the main analysis of Experiments 1 and 3 by writing new R code; our results again matched those in the original paper. Overall, based on our analyses, the study is fully computationally reproducible from raw data, although only with access to the original code, owing to undocumented cleaning steps, exclusion criteria that are not described, and missing codebooks. Several analyses in the original paper showed that ideas generated by ChatGPT are rated as similarly creative regardless of whether people modify them. We contributed to this conclusion by introducing a new robustness check using response time as a proxy for human effort in modifying ChatGPT outputs (a sketch of this check follows this entry). Using data from Experiment 3, we found no significant correlation between response time and creativity in the ChatGPT condition (r = −.079, p = .449) and no moderating effect of response time on the effect of ChatGPT use on creativity. This suggests that human effort does not incrementally improve creativity beyond ChatGPT's contribution. Taken together, our findings support the original claim that using ChatGPT increases creativity regardless of human input. |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:zbw:i4rdps:274 |
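The robustness check described in this abstract reduces to two steps: a correlation between response time and rated creativity within the ChatGPT condition, and an interaction test for moderation. A minimal Python sketch on synthetic data; the original check was done in R on Experiment 3's raw data, so every variable name and number below is an illustrative stand-in:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-in data: a treatment dummy, a response-time proxy
# for human effort, and a creativity rating with no true moderation.
df = pd.DataFrame({
    "chatgpt": rng.integers(0, 2, n),          # 1 = ChatGPT condition
    "response_time": rng.gamma(2.0, 30.0, n),  # seconds spent editing
})
df["creativity"] = 3 + 0.5 * df["chatgpt"] + rng.normal(0, 1, n)

# Step 1: correlation between effort and creativity, ChatGPT condition only.
gpt = df[df["chatgpt"] == 1]
r, p = stats.pearsonr(gpt["response_time"], gpt["creativity"])
print(f"r = {r:.3f}, p = {p:.3f}")

# Step 2: moderation test. A nonsignificant chatgpt:response_time
# interaction coefficient means response time does not moderate the
# effect of ChatGPT use on creativity.
model = smf.ols("creativity ~ chatgpt * response_time", data=df).fit()
print(model.summary().tables[1])
```

Using response time as an effort proxy is the design choice doing the work here: if extra human editing mattered, longer times should correlate with higher ratings and the interaction term should be significant; the reported null results on both steps support the original claim.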