New Economics Papers on Neuroeconomics
| By: | Xuqian Ma; Michelle N. Layvant; Edward Miguel; Eric Ochieng; Ajay Pillarisetti; Michael W. Walker |
| Abstract: | We estimate the short-term cognitive effects of fine particulate matter (PM2.5) exposure using highly time-resolved, individual-level data collected during cognitive testing in Kenya. By linking real-time portable monitor readings to Harmonized Cognitive Assessment Protocol (HCAP) scores, we identify acute impacts of pollution on general and domain-specific cognition. Higher PM2.5 exposure during testing is associated with lower cognitive performance, particularly in memory, executive function, and visuospatial tasks. Nonlinear models suggest threshold effects, with larger declines at higher exposure levels. Notably, effects are significantly larger among more educated individuals, possibly due to greater task demands or lower chronic exposure that limits physiological adaptation. Given that cognitive impairment is evident even at PM2.5 levels below Kenya’s annual regulatory threshold of 35 μg/m³, the findings suggest that short-term exposure may impose underappreciated human capital costs that current regulatory standards fail to mitigate. The results highlight the potential cognitive and economic returns to interventions that reduce air pollution exposures in low-resource settings. |
| JEL: | I10 Q53 Q56 |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:34557 |
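The nonlinear, threshold-style relationship described in the abstract above could be captured by a binned-exposure specification along the following lines (an editor's sketch for illustration only; the notation and controls are assumptions, not the authors' model):

    y_{it} = \alpha + \sum_{k} \beta_k \, \mathbf{1}\{\overline{PM}_{it} \in b_k\} + X_i'\gamma + \varepsilon_{it}

where y_{it} is individual i's score on HCAP domain t, \overline{PM}_{it} is the mean PM2.5 reading from the portable monitor during that test, b_k are exposure bins, and X_i collects individual controls. Threshold effects correspond to disproportionately large negative \beta_k in the upper bins, while the abstract's regulatory point is that some \beta_k are already negative for bins below 35 μg/m³.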
| By: | Belzil, Christian (Ecole Polytechnique, Paris); Jagelka, Tomáš (University of Bonn) |
| Abstract: | We develop a micro-founded framework to account for individuals' effort and cognitive noise, which confound estimates of preferences based on observed behavior. Using a large-scale experimental dataset, we find that observed decision noise responds to the costs and benefits of exerting effort on individual choice tasks, as predicted by our model. We estimate that failure to properly account for decision errors due to (rational) inattention on a more complex, but commonly used, task design biases estimates of risk aversion by 50% for the median individual. Effort propensities recovered from preference elicitation tasks generalize to other settings and predict performance on an OECD-sponsored achievement test used to make international comparisons. Furthermore, accounting for endogenous effort allows us to empirically reconcile competing models of discrete choice. |
| Keywords: | cognitive noise, endogenous effort, stochastic choice models, latent attributes, economic preferences, complexity, experimental design, achievement tests |
| JEL: | D91 C40 |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18315 |
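One stylized way to write the kind of model sketched in the abstract above is a discrete-choice specification in which choice precision depends on endogenously chosen effort (an editor's illustration; the functional forms are assumptions, not the authors' estimating equations):

    P_i(A \mid e) = \frac{\exp\{\lambda(e)\, U_i(A)\}}{\exp\{\lambda(e)\, U_i(A)\} + \exp\{\lambda(e)\, U_i(B)\}}, \qquad e_i^{*} = \arg\max_{e}\; \mathbb{E}[U_i \mid e] - c_i(e)

where \lambda(e) is choice precision, increasing in effort e, U_i(\cdot) embeds the individual's risk and time preferences, and c_i(e) is an individual-specific effort cost. Treating \lambda as fixed when effort in fact responds to the stakes and complexity of the task loads decision errors onto the preference parameters, which is consistent with the roughly 50% bias in risk aversion the authors report for the median individual.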
| By: | Inoue, Atsushi (Nippon Institute for Research Advancement); Tanaka, Ryuichi (University of Tokyo) |
| Abstract: | This study investigates the effects of bullying victimization on cognitive, school engagement, and friendship outcomes using panel data collected from elementary school students in a Japanese city. Employing a value-added model that controls for prior outcomes, we find that bullying victimization significantly impairs both cognitive outcomes and school engagement and weakens friendship formation. Furthermore, a high prevalence of bullying victimization within the classroom negatively impacts cognitive outcomes in subsequent years. These findings underscore the importance of effective school bullying prevention in fostering human and social capital among school-aged children. |
| Keywords: | school engagement, academic performance, school bullying, friendship, Japan |
| JEL: | I21 |
| Date: | 2025–12 |
| URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp18318 |
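A minimal sketch of the value-added specification described in the abstract above (variable names are the editor's assumptions, not the authors' notation):

    y_{i,t} = \alpha + \beta\, \mathrm{Victim}_{i,t} + \lambda\, y_{i,t-1} + X_{i,t}'\gamma + \varepsilon_{i,t}

where y_{i,t} is student i's outcome in year t (test score, school engagement, or a friendship measure), \mathrm{Victim}_{i,t} indicates bullying victimization, and the lagged outcome y_{i,t-1} absorbs prior achievement so that \beta is identified from changes relative to the student's own baseline. The classroom-level result corresponds to adding the classroom victimization rate as an additional regressor.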
| By: | Takeshi Ojima; Shinsuke Ikeda |
| Abstract: | If dishonest behavior stems from a self-control problem, then offering the option to commit to honesty will reduce dishonesty, provided that it lowers the self-control costs of being honest. To test this theoretical prediction, we conducted an incentivized online experiment in which participants could cheat at a game of rock-paper-scissors. Treatment groups were offered a hard Honesty-Commitment Option (HCO), either at random or in every round, which participants could use to prevent themselves from cheating. Our between- and within-subject analyses reveal that the HCO provision significantly reduced cheating rates by approximately 64%. Evidence suggests that the commitment device works by lowering self-control costs rather than through an observer effect, and that this channel is more pronounced among individuals with low cognitive reflection. Further analyses reveal two key dynamics. First, an individual's frequency of not using the HCO reliably predicts their propensity to cheat when the option is unavailable. Second, repeatedly deciding not to use the commitment device can become habitual, diminishing the effect of HCO provision on cheating over time. This research highlights the effectiveness of honesty-commitment devices in policy design while also noting that their disuse can become habitual, pointing to a new dynamic in the study of cheating. |
| Date: | 2025–11 |
| URL: | https://d.repec.org/n?u=RePEc:dpr:wpaper:1295 |
| By: | von Zahn, Moritz; Liebich, Lena; Jussupow, Ekaterina; Hinz, Oliver; Bauer, Kevin |
| Abstract: | Explainable AI (XAI) methods, which render the prediction logic of black-box AI interpretable to humans, are becoming increasingly widespread in practice, driven in part by regulatory requirements such as the EU AI Act. Previous research on human-XAI interaction has shown that explainability may help mitigate black-box problems but can also unintentionally alter individuals' cognitive processes, e.g., by distorting their reasoning or inducing information overload. While empirical evidence on how XAI affects the way individuals "think" is growing, it has been largely overlooked whether XAI can also affect individuals' "thinking about thinking", i.e., metacognition, which theory conceptualizes as monitoring and controlling these thinking processes. As a first step toward filling this gap, we investigate whether XAI affects confidence calibration at the meta-level of cognition and, thereby, decisions to transfer decision-making responsibility to AI. We conduct two incentivized experiments in which human experts repeatedly perform prediction tasks, with the option to delegate each task to an AI. We exogenously vary whether participants initially receive explanations that reveal the AI's underlying prediction logic. We find that XAI improves individuals' metaknowledge (the alignment between confidence and actual performance) and partially enhances confidence sensitivity (the variation of confidence with task performance). These metacognitive shifts causally increase both the frequency and effectiveness of human-to-AI delegation decisions. Interestingly, these effects only occur when explanations reveal to individuals that the AI's logic diverges from their own, leading to a systematic reduction in confidence. Our findings suggest that XAI can correct overconfidence, at the potential cost of lowering confidence even when individuals perform well. Both effects influence decisions to cede responsibility to AI, highlighting metacognition as a central mechanism in human-XAI collaboration. |
| Keywords: | Explainable Artificial Intelligence, Metacognition, Metaknowledge, Delegation, Machine Learning, Human-AI Collaboration |
| Date: | 2025 |
| URL: | https://d.repec.org/n?u=RePEc:zbw:safewp:334511 |
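The two metacognitive constructs in the abstract above are often operationalized roughly as follows (an editor's sketch of standard measures, not necessarily the ones used in the paper): metaknowledge as the (negative) absolute gap between a person's average confidence and average accuracy, and confidence sensitivity as the within-person slope of confidence on correctness,

    \text{metaknowledge}_i = -\left| \overline{\mathrm{conf}}_i - \overline{\mathrm{acc}}_i \right|, \qquad \mathrm{conf}_{ij} = \alpha_i + \beta_i\, \mathrm{correct}_{ij} + u_{ij},

with larger \beta_i indicating higher sensitivity. Under this reading, the reported effects amount to explanations tightening the confidence–accuracy gap and partially steepening the slope, which in turn improves delegation decisions.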