New Economics Papers on Cognitive and Behavioural Economics
Issue of 2021‒08‒23
Ten papers chosen by Marco Novarese, Università degli Studi del Piemonte Orientale
By: | Bigoni, Maria (University of Bologna); Dragone, Davide (University of Bologna); Luchini, Stéphane (Aix-Marseille University); Prati, Alberto (Aix-Marseille University) |
Abstract: | We study time preferences by means of a longitudinal lab experiment involving both monetary and non-monetary rewards (leisure). Our novel design allows us to measure whether participants prefer to anticipate or delay gratification, without imposing any structural assumption on the instantaneous utility, intertemporal utility, or discounting functions. We find that most people prefer to anticipate monetary rewards (positive time preferences for money) but delay non-monetary rewards (negative time preferences for leisure). These results cannot be explained by personal timetables and heterogeneous preferences alone. They invite a reconsideration of the psychological interpretation of the discount factor, and they suggest that assuming discounting is consistent across domains can lead to non-negligible prediction errors in models involving non-monetary decisions, such as labor supply models. |
Keywords: | consistency across domains, negative discounting, laboratory experiment, non-monetary rewards |
JEL: | C91 D01 D91 J22 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp14590&r= |
By: | Stefania Bortolotti (Economics Department, University of Bologna & IZA); Felix Kölle (Department of Economics, University of Cologne, Albertus-Magnus-Platz, 50923 Cologne, Germany); Lukas Wenner (Department of Economics, University of Cologne) |
Abstract: | In social and economic interactions, individuals often exploit informational asymmetries and behave dishonestly to pursue private ends. In many of these situations, the costs and benefits of dishonest behavior do not accrue immediately or at the same time. In this paper, we experimentally investigate the role of time in dishonesty. Contrary to our predictions, we find that neither delaying the gains from cheating nor increasing temporal engagement with one's own unethical behavior reduces the likelihood of cheating. Furthermore, allowing for a delay between the time when private information is obtained and when it is reported does not affect cheating in our experiment. |
Keywords: | Dishonesty, cheating, delay, discounting, experiment |
JEL: | C91 D82 D91 |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:ajk:ajkdps:111&r= |
By: | Bartos, Vojtech (University of Munich); Bauer, Michal (Charles University, Prague); Chytilová, Julie (Charles University, Prague); Levely, Ian (King's College London) |
Abstract: | We test whether an environment of poverty affects time preferences through purely psychological channels. We measured discount rates among farmers in Uganda who made decisions about when to enjoy entertainment instead of working. To circumvent the role of economic constraints, we experimentally induced thoughts about poverty-related problems, using priming techniques. We find that thinking about poverty increases the preference to consume entertainment early and to delay work. Using monitoring tools similar to eye tracking, a novel feature for this subject pool, we show that this effect is unlikely to be driven by less careful decision-making processes. |
Keywords: | poverty, scarcity, time preferences, self-control, inattention |
JEL: | C93 D91 O12 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp14607&r= |
By: | Rui (Aruhan) Shi |
Abstract: | This exercise offers an innovative learning mechanism to model an economic agent's decision-making process using a deep reinforcement learning algorithm. In particular, the AI agent is born into an economic environment with no information on the underlying economic structure or its own preferences. I model how the AI agent learns from square one in terms of how it collects and processes information. It is able to learn in real time through constantly interacting with the environment and adjusting its actions accordingly (i.e., online learning). I illustrate that the economic agent under deep reinforcement learning is adaptive to changes in a given environment in real time. AI agents differ in their ways of collecting and processing information, which leads to different learning behaviours and welfare distinctions. The chosen economic structure can be generalised to other decision-making processes and economic models. |
Keywords: | expectation formation, exploration, deep reinforcement learning, bounded rationality, stochastic optimal growth |
JEL: | C45 D83 D84 E21 E70 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_9255&r= |
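The abstract above describes an agent that learns a consumption-savings policy online, with no prior knowledge of the environment's structure. As a rough illustration of that idea only, the sketch below uses a simple tabular Q-learner on a discretised stochastic growth environment rather than the paper's deep reinforcement learning algorithm; the grids, shock process, and parameter values are illustrative assumptions, not the paper's.

```python
# Minimal sketch (not the paper's implementation): online learning by
# interaction in a stochastic optimal growth setting, with no prior
# knowledge of technology or preferences encoded in the learner.
import numpy as np

rng = np.random.default_rng(0)
alpha_prod = 0.36                          # Cobb-Douglas technology (assumed)
k_grid = np.linspace(0.1, 10.0, 50)        # discretised capital stock
s_grid = np.linspace(0.05, 0.95, 19)       # actions: savings rates
Q = np.zeros((len(k_grid), len(s_grid)))   # action-value table, starts at zero
lr, gamma, eps = 0.1, 0.95, 0.2            # learning rate, discount, exploration

def nearest(grid, x):
    return int(np.argmin(np.abs(grid - x)))

k_idx = nearest(k_grid, 1.0)               # initial capital
for t in range(200_000):
    # epsilon-greedy: mostly exploit current estimates, sometimes explore
    a_idx = rng.integers(len(s_grid)) if rng.random() < eps else int(Q[k_idx].argmax())
    shock = rng.lognormal(mean=0.0, sigma=0.1)
    output = shock * k_grid[k_idx] ** alpha_prod
    consumption = (1 - s_grid[a_idx]) * output
    k_next_idx = nearest(k_grid, s_grid[a_idx] * output)
    reward = np.log(max(consumption, 1e-8))          # log utility (assumed)
    # online temporal-difference update from a single interaction
    td_target = reward + gamma * Q[k_next_idx].max()
    Q[k_idx, a_idx] += lr * (td_target - Q[k_idx, a_idx])
    k_idx = k_next_idx

print("learned savings rate at median capital:",
      s_grid[int(Q[nearest(k_grid, 5.0)].argmax())])
```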
By: | Yonatan Berman; Mark Kirstein |
Abstract: | An important but understudied question in economics is how people choose when facing uncertainty in the timing of events. Here we study preferences over time lotteries, in which the payment amount is certain but the payment time is uncertain. Expected discounted utility theory (EDUT) predicts decision makers to be risk-seeking over time lotteries. We explore a normative model of growth-optimality, in which decision makers maximise the long-term growth rate of their wealth. Revisiting experimental evidence on time lotteries, we find that growth-optimality accords better with the evidence than EDUT. We outline future experiments to scrutinise further the plausibility of growth-optimality. |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2108.08366&r= |
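The EDUT prediction cited in the abstract has a one-line justification via Jensen's inequality. The sketch below assumes exponential discounting with factor δ ∈ (0,1) and a certain payment x at a random time T; the functional form is an assumption for the illustration, not the paper's full model.

```latex
% Under expected discounted utility, a certain payment x at random time T is
% worth E[\delta^T] u(x). Because \delta^t is convex in t for \delta \in (0,1),
% Jensen's inequality gives
\mathbb{E}\!\left[\delta^{T}\right] u(x) \;\ge\; \delta^{\mathbb{E}[T]}\, u(x),
% so the time lottery is weakly preferred to receiving x at the mean time:
% EDUT predicts risk-seeking behaviour over time lotteries.
```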
By: | De Grauwe, Paul; Foresti, Pasquale |
Abstract: | In this paper, we study the effects of government spending in a behavioral macroeconomic model in which agents have limited cognitive capabilities and use simple heuristics to form their expectations. Thanks to a learning mechanism, however, agents can revise their forecasting rule according to its performance. This feature produces endogenous and self-fulfilling waves of optimistic and pessimistic beliefs (animal spirits). The framework allows us to show that the short-run spending multiplier is state dependent: it is stronger under either extreme optimism or extreme pessimism and weaker in periods of tranquility. Furthermore, the more the central bank focuses on output-gap stabilization, the smaller the multiplier. We also show that periods of increasing public debt are characterized by intense pessimism, while intense optimism occurs in periods of decreasing debt. This allows us to show that governments face a trade-off between stabilizing animal spirits and stabilizing public debt. Finally, we show that this trade-off also has implications for the stabilization of the output gap. |
Keywords: | animal spirits; behavioral DSGE model; fiscal policy; policy state-dependent effects; public debt; spending multiplier |
JEL: | E10 E32 E62 D83 |
Date: | 2020–03–01 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:103500&r= |
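The self-fulfilling waves of optimism and pessimism described above typically arise from discrete-choice switching between simple forecasting heuristics whose popularity depends on past performance. The sketch below shows that switching mechanism in isolation; the toy output-gap dynamics, the two-rule setup, and all parameter values are illustrative assumptions, not the authors' model.

```python
# Stylized sketch of performance-based heuristic switching ("animal spirits").
import numpy as np

rng = np.random.default_rng(1)
T, gamma_choice, memory = 400, 2.0, 0.5     # horizon, intensity of choice, forgetting
g = 1.0                                     # optimistic (+g) / pessimistic (-g) forecast bias
perf_opt = perf_pes = 0.0                   # discounted squared forecast errors
y = 0.0                                     # output gap
frac_opt_path = []

for t in range(T):
    # logit choice: rules that forecast better recently attract more agents
    w_opt, w_pes = np.exp(-gamma_choice * perf_opt), np.exp(-gamma_choice * perf_pes)
    frac_opt = w_opt / (w_opt + w_pes)
    expected_y = frac_opt * g + (1 - frac_opt) * (-g)     # market forecast
    y = 0.7 * expected_y + 0.2 * y + rng.normal(0, 0.5)   # toy output-gap dynamics
    # update each rule's performance with its own squared forecast error
    perf_opt = memory * perf_opt + (1 - memory) * (y - g) ** 2
    perf_pes = memory * perf_pes + (1 - memory) * (y + g) ** 2
    frac_opt_path.append(frac_opt)

print("share of periods dominated by optimists:",
      np.mean(np.array(frac_opt_path) > 0.8))
```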
By: | Si Chen; Carl Heese |
Abstract: | The literature on motivated reasoning argues that people skew their beliefs so as to feel moral while acting selfishly. We study the information acquisition of decision-makers who have both a motive to form positive moral self-views and a motive to act selfishly. Theoretically and experimentally, we find that the motive to act selfishly makes individuals 'fish for good news': they are more likely to continue (stop) acquiring information when the information received so far mostly suggests that acting selfishly is harmful (harmless) to others. We find that fishing for good news may improve social welfare. Finally, more intelligent individuals have a higher tendency to fish for good news. |
Keywords: | Motivated Beliefs, Social Preferences, Information Preferences, Bayesian Persuasion, Belief Utility |
JEL: | D90 D91 |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2021_223v3&r= |
By: | Benno Torgler |
Abstract: | The field of behavioral taxation dates back at least to the 1950s. In this contribution I explore the opportunities and challenges in the area, with a particular focus on tax compliance. I focus on the data required to make further progress, discussing what can be improved when working with surveys and how the field could benefit from open government data initiatives. I also examine collaborative efforts among scientists as well as with the government or the tax administration, and many potential areas of exploration. The opportunities currently emerging from digitalization provide not only interesting avenues for collaboration but also natural ways of using tools such as lab and field experiments. In addition, I discuss potential dangers faced by the field of behavioral economics that also threaten the field of behavioral taxation. |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:cra:wpaper:2021-25&r= |
By: | Steve J. Bickley; Benno Torgler |
Abstract: | As artificial intelligence (AI) thrives and propagates through modern life, a key question is how to include humans in future AI. Despite human involvement at every stage of the production process, from conception and design through to implementation, modern AI is still often criticized for its black-box characteristics: sometimes we do not know what really goes on inside, or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by their creators, beyond those commonly discussed (e.g., trolley problems and their variants), to which solutions cannot be hard-coded and are often still up for debate. Given the sensitivity of such social and ethical dilemmas and their implications for human society at large, when and if our AI makes the wrong choice we need to understand how it got there in order to make corrections and prevent recurrences. This is particularly true in situations where human livelihoods are at stake (e.g., health, well-being, finance, law) or when major individual or household decisions are taken. Doing so requires opening up the black box of AI, especially as AI systems act, interact, and adapt in a human world and interact with other AI in that world. In this article, we argue for the application of cognitive architectures for ethical AI, in particular for their potential contributions to AI transparency, explainability, and accountability. We need to understand how our AI gets to the solutions it does, and we should seek to do this at a deeper level, in terms of the machine equivalents of motivations, attitudes, values, and so on. The path to future AI is long and winding, but it could arrive faster than we think. In order to harness the positive potential outcomes of AI for humans and society (and avoid the negatives), we need to understand AI more fully in the first place, and we expect this will simultaneously contribute to a greater understanding of their human counterparts. |
Keywords: | Artificial Intelligence; Ethics; Cognitive Architectures; Intelligent Systems; Ethical AI; Society |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:cra:wpaper:2021-27&r= |
By: | Ian Ball (Department of Economics, MIT); Jose-Antonio Espin-Sanchez (Cowles Foundation, Yale University) |
Abstract: | We introduce experimental persuasion between Sender and Receiver. Sender chooses an experiment to perform from a feasible set of experiments. Receiver observes the realization of this experiment and chooses an action. We characterize optimal persuasion in this baseline regime and in an alternative regime in which Sender can commit to garble the outcome of the experiment. Our model includes Bayesian persuasion as the special case in which every experiment is feasible; however, our analysis does not require concavification. Since we focus on experiments rather than beliefs, we can accommodate general preferences, including costly experiments and non-Bayesian inference. |
Keywords: | Experiments, Beliefs, Decision Making, Information, Bayesian |
JEL: | D81 D82 D83 |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2298&r= |
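As a rough illustration of the setup described above, the sketch below assumes a binary state, a hand-picked feasible set of experiments, and simple payoffs (all assumptions made for the example, not the authors' formal model): the Sender chooses an experiment, the Receiver observes its realization, updates by Bayes' rule, and acts.

```python
# Minimal sketch: Sender picks the feasible experiment that maximises her
# expected payoff, given a Receiver who best-responds to the posterior.
prior = 0.5                                # P(state = 1)
# Feasible experiments: (P(signal "high" | state 0), P(signal "high" | state 1)).
feasible_experiments = {
    "uninformative": (0.5, 0.5),
    "noisy":         (0.3, 0.8),
    "precise":       (0.1, 0.9),
}

def receiver_action(posterior):
    # Receiver takes action 1 only if she believes state 1 is likely enough.
    return 1 if posterior >= 0.6 else 0

def sender_payoff(experiment):
    p_high_given_0, p_high_given_1 = experiment
    payoff = 0.0
    for signal_is_high in (True, False):
        like_1 = p_high_given_1 if signal_is_high else 1 - p_high_given_1
        like_0 = p_high_given_0 if signal_is_high else 1 - p_high_given_0
        p_signal = prior * like_1 + (1 - prior) * like_0
        if p_signal == 0:
            continue
        posterior = prior * like_1 / p_signal             # Bayes' rule
        payoff += p_signal * receiver_action(posterior)   # Sender wants action 1
    return payoff

best = max(feasible_experiments, key=lambda name: sender_payoff(feasible_experiments[name]))
print("Sender's optimal feasible experiment:", best)      # "noisy" under these assumptions
```

Under these illustrative numbers the partially informative experiment beats both the fully revealing and the uninformative one, which is the familiar persuasion logic; restricting the feasible set is what distinguishes this setting from standard Bayesian persuasion.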