nep-cbe New Economics Papers
on Cognitive and Behavioural Economics
Issue of 2026–01–12
five papers chosen by
Marco Novarese, Università degli Studi del Piemonte Orientale


  1. Separating Preferences from Endogenous Effort and Cognitive Noise in Observed Decisions By Belzil, Christian; Jagelka, Tomáš
  2. Morality Meets Risk: What Makes a Good Excuse for Selfishness By Wanxin Dong; Jiakun Zheng
  3. Outcome- and Sign-Dependent Time Preferences: An Incentivized Intertemporal Choice Experiment Involving Effort and Money By Shohei Yamamoto; Shotaro Shiba; Nobuyuki Hanaki
  4. Overreaction in Expectations under Signal Extraction: Experimental Evidence By John Duffy; Nobuyuki Hanaki; Donghoon Yoo
  5. Knowing (not) to know: Explainable artificial intelligence and human metacognition By von Zahn, Moritz; Liebich, Lena; Jussupow, Ekaterina; Hinz, Oliver; Bauer, Kevin

  1. By: Belzil, Christian (Ecole Polytechnique, Paris); Jagelka, Tomáš (University of Bonn)
    Abstract: We develop a micro-founded framework to account for individuals' effort and cognitive noise, which confound estimates of preferences based on observed behavior. Using a large-scale experimental dataset, we find that observed decision noise responds to the costs and benefits of exerting effort on individual choice tasks, as predicted by our model. We estimate that failure to properly account for decision errors due to (rational) inattention on a more complex, but commonly used, task design biases estimates of risk aversion by 50% for the median individual. Effort propensities recovered from preference elicitation tasks generalize to other settings and predict performance on an OECD-sponsored achievement test used to make international comparisons. Furthermore, accounting for endogenous effort allows us to empirically reconcile competing models of discrete choice.
    Keywords: cognitive noise, endogenous effort, stochastic choice models, latent attributes, economic preferences, complexity, experimental design, achievement tests
    JEL: D91 C40
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp18315
  2. By: Wanxin Dong (School of Finance, Renmin University of China); Jiakun Zheng (Aix-Marseille Univ., CNRS, AMSE, Marseille, France)
    Abstract: Prior work finds that individuals are often less prosocial when they can exploit uncertainty as an excuse. Whereas that work has largely examined excuses in the gain domain, this paper investigates their relevance in both the gain and loss domains. In our laboratory experiment, participants evaluated risky payoffs for themselves and their partners in either the gain or loss domain, with or without interpersonal trade-offs. We found that participants exhibited excuse-driven risk behavior in both domains. We also documented significant individual heterogeneity in the extent of excuse-driven behavior, influenced by factors such as individuals’ risk preferences, beliefs about others’ risk preferences, and the size of the risk. We present a self-signaling model that incorporates self-image concerns to explain our experimental findings. We show that excuse-driven risk behavior arises because people misattribute their selfish behavior to risk preferences rather than to a reduced level of altruism.
    Keywords: Prosocial behavior, Risk preferences, Self-image, Misattribution, Experiment
    JEL: D71 D80 D91
    Date: 2025–12
    URL: https://d.repec.org/n?u=RePEc:aim:wpaimx:2522
  3. By: Shohei Yamamoto; Shotaro Shiba; Nobuyuki Hanaki
    Abstract: Previous research has consistently identified differences in time preferences between effort and monetary decisions. However, the root cause of this difference, whether it stemmed from the intrinsic nature of the outcomes or from the associated pleasurable or unpleasant experiences, remained unresolved. In response, we conducted novel two-stage experiments employing a 2 × 2 design contrasting outcomes (money and effort) and domains (gain and loss). This approach allowed for the incentivization of all decisions, including those involving future monetary losses. Our study reveals that while there is no significant difference in present bias between monetary and effort-based choices, the degree of time inconsistency differs significantly, indicating outcome-dependent preferences. Across both experiments, we consistently found no evidence of present bias in any of the four conditions.
    Date: 2024–02
    URL: https://d.repec.org/n?u=RePEc:dpr:wpaper:1230r
  4. By: John Duffy; Nobuyuki Hanaki; Donghoon Yoo
    Abstract: We experimentally evaluate three behavioral models of expectation formation that predict overreaction to new information: overconfidence in private signals, misperceptions about the persistence of the data-generating process (DGP), and diagnostic expectations. In our main experiment, participants repeatedly forecast the contemporaneous and one-step-ahead values of a random variable. They are incentivized for accuracy, informed of the exact DGP and its past history, and provided with noisy signals about the unobserved contemporaneous value. One treatment features a persistent AR(1) process, while another has no persistence. We also report on an experiment with no noisy signals. At the individual level, we find systematic overreaction even when the DGP is not persistent and regardless of whether a signal-extraction problem is present. By contrast, consensus (mean) forecasts exhibit underreaction, consistent with evidence from other studies. Overall, our results indicate that misperceptions about persistence provide the most compelling explanation for the observed patterns of expectation formation. (An illustrative sketch of this signal-extraction environment follows this entry.)
    Date: 2025–10
    URL: https://d.repec.org/n?u=RePEc:dpr:wpaper:1293
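    To make the forecasting environment above concrete, here is a minimal simulation sketch of an AR(1) data-generating process with noisy private signals about the unobserved contemporaneous value. All parameter values, function names, and the Bayesian benchmark below are illustrative assumptions, not the paper's actual design or calibration.

      import numpy as np

      def simulate_signal_extraction(T=50, rho=0.7, sigma_u=1.0, sigma_e=1.0, seed=0):
          """Simulate an AR(1) state x_t and noisy signals s_t = x_t + noise.
          Forecasters see past realizations of x and the current signal s_t,
          but not the current x_t itself (illustrative setup only)."""
          rng = np.random.default_rng(seed)
          x = np.zeros(T)   # fundamental process
          s = np.zeros(T)   # noisy private signals
          for t in range(1, T):
              x[t] = rho * x[t - 1] + rng.normal(0.0, sigma_u)  # persistent AR(1) step
              s[t] = x[t] + rng.normal(0.0, sigma_e)            # signal = state + noise
          return x, s

      def bayesian_nowcast(s_t, x_prev, rho, sigma_u, sigma_e):
          """Signal-extraction benchmark: weighted average of the prior mean
          (rho * x_prev) and the noisy signal, with a Kalman-style weight."""
          prior_mean = rho * x_prev
          weight = sigma_u**2 / (sigma_u**2 + sigma_e**2)
          return prior_mean + weight * (s_t - prior_mean)

      x, s = simulate_signal_extraction()
      nowcasts = [bayesian_nowcast(s[t], x[t - 1], 0.7, 1.0, 1.0) for t in range(1, len(x))]
      print(f"nowcast at t=10: {nowcasts[9]:.3f} vs. true value {x[10]:.3f}")

    In this framing, overreaction corresponds to weighting the noisy signal more heavily (or perceiving the persistence rho as larger) than the Bayesian benchmark warrants; the no-persistence treatment sets rho = 0.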
  5. By: von Zahn, Moritz; Liebich, Lena; Jussupow, Ekaterina; Hinz, Oliver; Bauer, Kevin
    Abstract: The use of explainable AI (XAI) methods to render the prediction logic of black-box AI interpretable to humans is becoming more popular and more widely used in practice, among other things due to regulatory requirements such as the EU AI Act. Previous research on human-XAI interaction has shown that explainability may help mitigate black-box problems but can also unintentionally alter individuals' cognitive processes, e.g., distorting their reasoning and inducing information overload. While empirical evidence on the impact of XAI on how individuals "think" is growing, it has been largely overlooked whether XAI can also affect individuals' "thinking about thinking", i.e., metacognition, which theory conceptualizes as monitoring and controlling the thinking processes studied previously. Aiming to take a first step toward filling this gap, we investigate whether XAI affects confidence calibration and, thereby, decisions to transfer decision-making responsibility to AI, on the meta-level of cognition. We conduct two incentivized experiments in which human experts repeatedly perform prediction tasks, with the option to delegate each task to an AI. We exogenously vary whether participants initially receive explanations that reveal the AI's underlying prediction logic. We find that XAI improves individuals' metaknowledge (the alignment between confidence and actual performance) and partially enhances confidence sensitivity (the variation of confidence with task performance). These metacognitive shifts causally increase both the frequency and effectiveness of human-to-AI delegation decisions. Interestingly, these effects occur only when explanations reveal to individuals that the AI's logic diverges from their own, leading to a systematic reduction in confidence. Our findings suggest that XAI can correct overconfidence, at the potential cost of lowering confidence even when individuals perform well. Both effects influence decisions to cede responsibility to AI, highlighting metacognition as a central mechanism in human-XAI collaboration. (A sketch illustrating simple calibration and sensitivity measures follows this entry.)
    Keywords: Explainable Artificial Intelligence, Metacognition, Metaknowledge, Delegation, Machine Learning, Human-AI Collaboration
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:zbw:safewp:334511
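    As a rough illustration of the metacognitive measures described above, the sketch below computes a simple calibration gap (mean confidence minus mean accuracy) and a confidence-sensitivity measure (correlation between reported confidence and task performance) from hypothetical per-task data. The variable names and formulas are illustrative assumptions; the paper's exact operationalizations may differ.

      import numpy as np

      def metacognition_measures(confidence, correct):
          """Two illustrative metacognition measures from per-task data.
          confidence : self-reported confidence in [0, 1] for each task
          correct    : 0/1 indicator of whether the prediction was right"""
          confidence = np.asarray(confidence, dtype=float)
          correct = np.asarray(correct, dtype=float)
          calibration_gap = confidence.mean() - correct.mean()    # > 0 suggests overconfidence
          sensitivity = np.corrcoef(confidence, correct)[0, 1]    # does confidence track performance?
          return calibration_gap, sensitivity

      # Hypothetical participant: fairly confident, right about 60% of the time.
      conf = [0.9, 0.8, 0.85, 0.7, 0.75, 0.9, 0.6, 0.8]
      hits = [1, 0, 1, 1, 0, 1, 0, 1]
      gap, sens = metacognition_measures(conf, hits)
      print(f"calibration gap: {gap:+.2f}, confidence sensitivity: {sens:+.2f}")

    Under the paper's findings, explanations that reveal a divergent AI logic would be expected to shrink the calibration gap (less overconfidence) and raise sensitivity, which in turn shapes delegation decisions.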

This nep-cbe issue is ©2026 by Marco Novarese. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.