nep-cbe New Economics Papers
on Cognitive and Behavioural Economics
Issue of 2022‒06‒13
six papers chosen by
Marco Novarese
Università degli Studi del Piemonte Orientale

  1. Trust Can Be Learned. Order of moves and agents' behavior in two trust games By Mario A. Maggioni; Domenico Rossignoli
  2. Lab vs online experiments: no differences By Benjamin Prissé; Diego Jorrat
  3. Do Losses Trigger Deliberative Reasoning? By Carpenter, Jeffrey P.; Munro, David
  4. How Do Reward Versus Penalty Framed Incentives Affect Diagnostic Performance in Auditing? By Bright (Yue) Hong; Timothy W. Shields
  5. Complexity and Choice By Yuval Salant; Jorg L. Spenkuch
  6. Can defaults change behavior when post-intervention effort is required? Evidence from education By Behlen, Lars; Himmler, Oliver; Jaeckle, Robert

  1. By: Mario A. Maggioni; Domenico Rossignoli
    Abstract: In this paper, we devise a randomized experiment to test whether the order of play in two Trust Games influences the observed level of trust displayed by Trustors (as measured by the share of endowment sent to Trustees). We find that playing Trustor in the second game increases the average share sent to the Trustee. We suggest a role for information acquisition and learning due to the different order in which subjects play the Trustor role.
    JEL: C91 D83 D91
    Date: 2022
  2. By: Benjamin Prissé (LoyolaBehLab, Universidad Loyola Andalucía); Diego Jorrat (LoyolaBehLab, Universidad Loyola Andalucía)
    Abstract: We ran an experiment to study whether lack of control over the experimental environment affects experimental results. Subjects were recruited following standard procedures and randomly assigned to complete the experiment online or in the laboratory. The experimental design is otherwise identical between conditions. Results suggest that there are no differences between conditions, except for a larger percentage of online subjects donating nothing in the Dictator Game.
    Keywords: Time Preferences, CTB, Experiments.
    JEL: C91 C93 D15
    Date: 2022–04
  3. By: Carpenter, Jeffrey P. (Middlebury College); Munro, David (Middlebury College)
    Abstract: There is a large literature evaluating the dual process model of cognition, including the biases and heuristics it implies. To advance this literature, we focus on what triggers decision makers to switch from the intuitive process (aka System 1) to the more deliberative process (aka System 2). Based on previous studies indicating that potential losses increase cognitive effort, we posit that losses may also differentially trigger System 2 reasoning. To evaluate this hypothesis, we design an experiment based on a task developed to distinguish between System 1 and System 2 thinking: the cognitive reflection task. Replicating previous research, we find that losses elicit more effort (measured by the time spent on the task and the incidence of correct answers). However, we also find that losses differentially reduce the incidence of intuitive answers, consistent with triggering System 2. To complement these results, we test the robustness of our findings using aggregated data, subgroup analysis, and the imposition of a cognitive load to hinder the activation of System 2.
    Keywords: dual process theory, cognitive effort, loss, experiment
    JEL: C9 D9
    Date: 2022–05
  4. By: Bright (Yue) Hong (School of Accountancy & MIS, Driehaus College of Business, DePaul University); Timothy W. Shields (Argyros School of Business and Management, Economic Science Institute, Chapman University)
    Abstract: Prior research examines how rewards versus economically equivalent penalties affect effort. However, accountants perform various diagnostic analyses that involve more than exerting effort. For example, auditors often need to identify whether a material misstatement is the underlying cause of a phenomenon among the possible causes. Testing helps identify the cause, but testing is costly. When participants are incentivized to test accurately (rather than test more) and objectively (unbiased between testing and not testing), we find that framing the incentives as rewards versus equivalent penalties increases testing by lowering the subjective testing criterion and by increasing the assessed risk of material misstatement. However, testing increases primarily when a misstatement is absent, causing more false alarms under a reward frame with no improvement in misstatement detection. Penalties are pervasive in auditing. Our study suggests that rewards are more effective for increasing testing, and that increasing testing blindly can impair audit efficiency.
    Keywords: Frame, rewards, penalties, objectivity, accuracy, judgment, diagnostic tasks, experiment, auditing
    JEL: C92 D82 D81 M40
    Date: 2022
  5. By: Yuval Salant; Jorg L. Spenkuch
    Abstract: We develop a model of satisficing with evaluation errors that incorporates complexity at the level of individual alternatives. We test the model predictions in a novel data set with information on hundreds of millions of chess moves by experienced players. Consistent with the theory, complex optimal moves are chosen less frequently than simpler ones. Choice frequencies of suboptimal moves follow the opposite pattern. The former finding distinguishes satisficing from a large class of maximization-based models. We further document that skill and time moderate the adverse effect of complexity, and that they complement each other in doing so. Finally, we provide evidence that suboptimal behavior also hinges on the composition of the choice set but not its size. Our findings provide some of the first evidence on the importance of complexity outside of the laboratory.
    JEL: D00 D01 D03 D9 D90
    Date: 2022–04
  6. By: Behlen, Lars; Himmler, Oliver; Jaeckle, Robert
    Abstract: Little is known about the effectiveness of defaults when moving the target outcome requires substantial post-intervention effort. In two field experiments, we change the university exam sign-up procedure to “opt-out” for a single exam (Exp1) and for many exams (Exp2). Both interventions increase sign-up at the beginning of the semester. Downstream, at the end of the semester, opt-out increases exam participation for a single exam but not for many exams. For the single exam, effects on passing are heterogeneous: students responsive to unrelated university requests convert increased sign-ups into passed exams. For non-responsive students, increased sign-ups result in failed exams due to no-shows. Defaults can thus be effective but need to be carefully targeted.
    Keywords: Default, Randomized Field Experiment, Higher Education
    JEL: C93 I23
    Date: 2022–05–03

This nep-cbe issue is ©2022 by Marco Novarese. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.