
on Utility Models and Prospect Theory 
By:  Mamoru Kaneko (Waseda University) 
Abstract:  We reformulate expected utility theory from the viewpoint of bounded rationality by introducing probability grids and a cognitive bound; we restrict permissible probabilities to decimal (ℓ-ary, in general) fractions of finite depths up to a given cognitive bound. We distinguish between measurements of utilities from pure alternatives and their extensions to lotteries involving more risks. Our theory is constructive from the viewpoint of the decision maker. When the cognitive bound is small, the preference relation involves many incomparabilities, but these diminish as the cognitive bound is relaxed. Similarly, the EU hypothesis would hold more fully for a weaker cognitive bound. The main part of the paper is a study of preferences including incomparabilities in cases with finite cognitive bounds; we give representation theorems in terms of 2-dimensional vector-valued utility functions. We exemplify the theory with one experimental result reported by Kahneman and Tversky. 
Keywords:  Expected Utility; Measurement of Utility; Bounded Rationality; Probability 
JEL:  C72 C79 C91 
Date:  2019–04 
URL:  http://d.repec.org/n?u=RePEc:wap:wpaper:1902&r=all 
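The probability-grid idea above lends itself to a small illustration: permissible probabilities are restricted to base-ℓ fractions of finite depth, and a deeper grid requires a larger cognitive bound. A minimal sketch, assuming base-10 grids and a simple expressibility check (not the paper's formal apparatus):

```python
from fractions import Fraction

def probability_grid(base=10, depth=2):
    """All probabilities expressible as m / base**depth, m = 0..base**depth."""
    n = base ** depth
    return [Fraction(m, n) for m in range(n + 1)]

def min_depth(p, base=10, max_depth=5):
    """Smallest grid depth at which probability p is exactly expressible,
    or None if it exceeds the cognitive bound max_depth."""
    p = Fraction(p)
    for k in range(max_depth + 1):
        if (p * base ** k).denominator == 1:
            return k
    return None
```

For example, 3/10 sits at depth 1 and 1/4 at depth 2, while 1/3 is not expressible at any finite decimal depth, so a decision maker with a finite cognitive bound cannot compare lotteries that require it exactly.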
By:  Franz Dietrich (Centre d'Economie de la Sorbonne, Paris School of Economics) 
Abstract:  Can a group be an orthodox rational agent? This requires the group's aggregate preferences to follow expected utility (static rationality) and to evolve by Bayesian updating (dynamic rationality). Group rationality is possible, but the only preference aggregation rules which achieve it (and are minimally Paretian and continuous) are the linear-geometric rules, which combine individual values linearly and individual beliefs geometrically. Linear-geometric preference aggregation contrasts with classic linear-linear preference aggregation, which combines both values and beliefs linearly and achieves only static rationality. Our characterisation of linear-geometric preference aggregation implies as corollaries a characterisation of linear value aggregation (Harsanyi's Theorem) and a characterisation of geometric belief aggregation. 
Keywords:  rational group agent; uncertainty; preference aggregation; opinion pooling; static versus dynamic rationality; expected-utility hypothesis; Bayesianism; group rationality versus Paretianism; spurious unanimity; ex-ante versus ex-post Pareto 
JEL:  D7 D8 
Date:  2020–06 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:20014r&r=all 
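The linear-geometric rule described above can be sketched numerically: values are combined as a weighted arithmetic mean, beliefs as a weighted geometric mean renormalized to sum to one. A hedged illustration (the weights and the two-state example are assumptions for demonstration, not from the paper):

```python
import numpy as np

def aggregate_values_linear(values, weights):
    """Weighted linear aggregation of state-contingent utilities.
    values: (n_agents, n_states) array; weights: length n_agents."""
    return np.average(np.asarray(values), axis=0, weights=weights)

def aggregate_beliefs_geometric(beliefs, weights):
    """Weighted geometric pooling of strictly positive probability vectors,
    renormalized so the pooled belief sums to one."""
    logp = np.log(np.asarray(beliefs))  # (n_agents, n_states)
    pooled = np.exp(np.average(logp, axis=0, weights=weights))
    return pooled / pooled.sum()
```

Geometric pooling is the step that preserves Bayesian updating: updating each individual belief and then pooling gives the same result as pooling and then updating, which linear pooling does not guarantee.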
By:  Peter G. Hansen 
Abstract:  I introduce novel preference formulations which capture aversion to ambiguity about unknown and potentially time-varying volatility. I compare these preferences with Gilboa and Schmeidler's maxmin expected utility as well as variational formulations of ambiguity aversion. The impact of ambiguity aversion is illustrated in a simple static model of portfolio choice, as well as a dynamic model of optimal contracting under repeated moral hazard. Implications for investor beliefs, optimal design of corporate securities, and asset pricing are explored. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.12306&r=all 
By:  Kazuyuki Sasakura (Faculty of Political Science and Economics, Waseda University) 
Abstract:  This paper provides a simple example of a utility function with two consumption goods which can be calculated by hand to produce a Giffen good. It is based on the theoretical result of Kubler, Selden, and Wei (2013). Using a model of portfolio selection with a risk-free asset and a risky asset, they showed that the risk-free asset becomes a Giffen good if the utility belongs to the HARA family. This paper investigates their result further in a usual microeconomic setting and derives the conditions for one of the consumption goods to be a Giffen good from a broader perspective. 
Keywords:  HARA family; Decreasing relative risk aversion; Giffen good; Slutsky equation; Ratio effect 
JEL:  D11 D01 G11 
Date:  2019–06 
URL:  http://d.repec.org/n?u=RePEc:wap:wpaper:1908&r=all 
By:  Saleh Afroogh 
Abstract:  Decision theorists propose a normative theory of rational choice. Traditionally, they assume that they should provide constant and invariant principles as criteria for rational decisions and, indirectly, for agents. They seek a decision theory that invariably works for all agents all the time. They believe that a rational agent should follow a certain principle, perhaps the principle of maximizing expected utility, everywhere, all the time. Regardless of the given context, these principles are considered, in this sense, context-independent. Furthermore, decision theorists usually assume that the relevant agents at work are ideal agents, and they believe that non-ideal agents should follow them so that their decisions qualify as rational. These principles are universal rules. I will refer to this context-independent and universal approach in traditional decision theory as Invariantism. This approach is, implicitly or explicitly, adopted by theories proposed on the basis of these two assumptions. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.08914&r=all 
By:  Aifan Ling; Jianjun Miao; Neng Wang 
Abstract:  We study how investors' preferences for robustness influence corporate investment, financing, and compensation decisions and valuation in a financial contracting model with agency. We characterize the robust contract and show that early liquidation can be optimal when investors are sufficiently ambiguity averse. We implement the robust contract by debt, equity, cash, and a financial derivative asset. The derivative is used to hedge against the investors' concern that the entrepreneur may be overly optimistic. Our calibrated model generates a sizable equity premium and credit spread, and implies that ambiguity aversion lowers Tobin's q, average investment, and investment volatility. The entrepreneur values the project at an internal rate of return 3.5% per annum higher than investors do. 
JEL:  D81 E22 G12 G32 J33 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:28367&r=all 
By:  Douadia Bougherara (CEEM - Centre d'Economie de l'Environnement, Montpellier, Université de Montpellier, CNRS, Montpellier SupAgro, Institut Agro, INRAE); Margaux Lapierre (CEEM); Raphaële Préget (CEEM); Alexandre Sauquet (CEEM) 
Abstract:  Nearly all Agri-Environmental Schemes (AES) offer farmers stable annual payments over the duration of the contract. Yet AES are often intended to be a transition tool, so decreasing payment sequences would appear particularly attractive for farmers. The standard discounted utility model supports this notion by predicting that individuals will prefer a decreasing sequence of payments if the total sum of outcomes is constant. Nevertheless, the literature shows that numerous mechanisms, such as increasing productivity, anticipatory pleasure, and loss aversion, can incline farmers to favor an increasing sequence of payments. To understand what drives farmers' preferences for different payment sequences, we propose a review of the mechanisms highlighted by the literature in psychology and economics. We then analyze farmers' preferences for stable, increasing, or decreasing payments through a choice experiment (CE) survey of 123 French farmers, about 15% of those contacted. Overall, farmers do not present a clear willingness to depart from the usual stable payments. Moreover, we find a significant aversion to decreasing payments among farmers with a lower discount rate and among those more willing to take risks than the median farmer, contradicting the discounted utility model. 
Keywords:  Sequences of outcomes, Agri-Environmental Schemes, Discounted utility, Farming practices, Cover crops, Choice experiment 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal03103886&r=all 
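The discounted utility prediction cited above, that a decreasing payment sequence is preferred when the total is held constant, can be checked in a few lines. A toy sketch (the payment amounts and the 5% discount rate are illustrative assumptions):

```python
def discounted_sum(payments, discount_rate):
    """Present value of a payment sequence under exponential discounting.
    payments[t] is received at the end of period t (t = 0, 1, 2, ...)."""
    return sum(p / (1 + discount_rate) ** t for t, p in enumerate(payments))

# Same total (600), opposite orderings: front-loading wins under discounting.
decreasing = discounted_sum([300, 200, 100], 0.05)
increasing = discounted_sum([100, 200, 300], 0.05)
```

With any positive discount rate the decreasing sequence has the higher present value, which is exactly the prediction the surveyed farmers' choices push against.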
By:  Youssef M. Aboutaleb; Mazen Danaf; Yifei Xie; Moshe Ben-Akiva 
Abstract:  This paper discusses capabilities that are essential to models applied in policy analysis settings and the limitations of direct applications of off-the-shelf machine learning methodologies to such settings. Traditional econometric methodologies for building discrete choice models for policy analysis involve combining data with modeling assumptions guided by subject-matter considerations. Such considerations are typically most useful in specifying the systematic component of random utility discrete choice models but are typically of limited aid in determining the form of the random component. We identify an area where machine learning paradigms can be leveraged, namely in specifying and systematically selecting the best specification of the random component of the utility equations. We review two recent novel applications where mixed-integer optimization and cross-validation are used to algorithmically select optimal specifications for the random utility components of nested logit and logit mixture models subject to interpretability constraints. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.10261&r=all 
By:  Dilshani N Ranawaka (Centre for Poverty Analysis, Sri Lanka, dilshani@cepa.lk); Rushan Ranawaka (University of Colombo, Sri Lanka,) 
Abstract:  Economics and Abhidharma (in-depth dharma teachings by the Buddha) rarely emerge together in interdisciplinary discussions due to the contradictory nature of the two disciplines. Economics is identified as the dismal science, which attempts to understand how limited resources should be distributed in a way that satisfies unlimited wants so as to maximize satisfaction (utility), with demand being one of the core concepts. Demand is understood to occur for three reasons: the willingness to buy, the ability to pay, now, in the present. With the surfacing of neuro-economics, which looks at how thoughts and the brain function when making decisions, Abhidharma can be very useful in providing insights into how thoughts function. According to Abhidharma, "greed" is the nucleus when demand is examined at a micro level. Citta (consciousness), also known as thoughts, and chaithasika, identified as the thought processes in Abhidharma, can be used to guide and provide a different outlook on the concept of 'demand' in Economics. This paper pursues the knowledge of the eight folds of lobha (greed), one of the theoretical elements of Abhidharma, in assessing how demand in Economics occurs. 
Keywords:  Abhidharma, Demand, Neuro-Economics, Greed, Micro-analysis 
Date:  2020–10 
URL:  http://d.repec.org/n?u=RePEc:smo:bpaper:024dr&r=all 
By:  Enrico Lupi (Sant'Anna Scuola Universitaria Superiore, Pisa) 
Abstract:  This work analyzes dynamic competition among an infinite number of managers acting in a financial market with a riskless bond and a risky asset. Each player competes against infinitely many competitors for money flows that depend on her relative performance. We assume that each manager attempts to outperform the industry average performance. We find a closed formula for the optimal policy. We show that when all the agents are identical (the homogeneous case), the competition induced by the convex incentive affects both the manager's risk aversion and her optimal policy. The change in risk aversion and the shift in risk-taking behavior have opposite effects on the manager's optimal policy. In the homogeneous case the two effects perfectly offset each other and the optimal policy coincides with the usual Merton policy. We also characterize the optimal solution in an extended framework allowing for heterogeneous groups of managers. In this case the two opposing forces acting on the manager's choice do not balance each other, and there is room to analyze the change in the risk-taking optimal behavior of managers, and of the whole industry, as a function of the parameters of the managers' utility function as well as the relative weights of the groups in the population. We study the welfare loss of investors who let their money be managed by managers, in relation to the level of competition in the market. 
Keywords:  Tournament incentives, Portfolio choice, Strategic interaction, Relative performance, Welfare 
Date:  2020–05–28 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:486&r=all 
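The "usual Merton policy" referred to above has a well-known closed form for CRRA utility: the optimal fraction of wealth in the risky asset is the risk premium divided by risk aversion times variance. A minimal sketch (the parameter values below are illustrative, not the paper's calibration):

```python
def merton_weight(mu, r, sigma, gamma):
    """Merton's constant optimal fraction of wealth in the risky asset
    for a CRRA investor: (mu - r) / (gamma * sigma**2).
    mu: risky drift, r: riskless rate, sigma: volatility, gamma: risk aversion."""
    return (mu - r) / (gamma * sigma ** 2)

# e.g. 6% risk premium, 20% volatility, risk aversion 3 -> 50% in the risky asset
w = merton_weight(mu=0.08, r=0.02, sigma=0.2, gamma=3.0)
```

In the homogeneous case of the paper, the two offsetting competition effects leave exactly this weight as the optimal policy.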
By:  Yasushi Asako (Faculty of Political Science and Economics, Waseda University) 
Abstract:  Political parties and candidates usually prefer making ambiguous promises. This study identifies the conditions under which candidates choose ambiguous promises in equilibrium, given convex utility functions of voters. The results show that in a deterministic model, no equilibrium exists when voters have convex utility functions. However, in a probabilistic voting model, candidates make ambiguous promises in equilibrium when (i) voters have convex utility functions, and (ii) the distribution of voters' most preferred policies is polarized. 
Keywords:  elections; political ambiguity; public promise; campaign platform; probabilistic voting; polarization 
JEL:  D71 D72 
Date:  2019–05 
URL:  http://d.repec.org/n?u=RePEc:wap:wpaper:1906&r=all 
By:  Georges Sfeir; Filipe Rodrigues; Maya AbouZeid 
Abstract:  We present a Gaussian Process - Latent Class Choice Model (GP-LCCM) to integrate a nonparametric class of probabilistic machine learning within discrete choice models (DCMs). Gaussian Processes (GPs) are kernel-based algorithms that incorporate expert knowledge by assuming priors over latent functions rather than priors over parameters, which makes them more flexible in addressing nonlinear problems. By integrating a Gaussian Process within an LCCM structure, we aim at improving discrete representations of unobserved heterogeneity. The proposed model assigns individuals probabilistically to behaviorally homogeneous clusters (latent classes) using GPs and simultaneously estimates class-specific choice models by relying on random utility models. Furthermore, we derive and implement an Expectation-Maximization (EM) algorithm to jointly estimate/infer the hyperparameters of the GP kernel function and the class-specific choice parameters by relying on a Laplace approximation and gradient-based numerical optimization methods, respectively. The model is tested on two different mode choice applications and compared against different LCCM benchmarks. Results show that GP-LCCM allows for a more complex and flexible representation of heterogeneity and improves both in-sample fit and out-of-sample predictive power. Moreover, behavioral and economic interpretability is maintained at the class-specific choice model level, while local interpretation of the latent classes can still be achieved, although the nonparametric character of GPs lessens the transparency of the model. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.12252&r=all 
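The GP ingredient described above, a prior over latent functions induced by a kernel, can be illustrated in a few lines. A sketch assuming a squared-exponential kernel on 1-D inputs (the paper's actual kernel choice and inputs may differ):

```python
import numpy as np

def rbf_kernel(x, y, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel matrix between 1-D input arrays."""
    d = np.subtract.outer(x, y)
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def sample_prior(x, n_samples=3, seed=0):
    """Draw latent functions from a zero-mean GP prior evaluated at inputs x."""
    K = rbf_kernel(x, x) + 1e-9 * np.eye(len(x))  # jitter for numerical stability
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(np.zeros(len(x)), K, size=n_samples)
```

Each sampled row is one latent function; in a GP-LCCM-style model such latent functions would feed class-membership probabilities instead of a parametric membership equation.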
By:  Daeyung Gim; Hyungbin Park 
Abstract:  This paper treats the Merton problem of how to invest in safe and risky assets to maximize an investor's utility, with investment opportunities modeled by a $d$-dimensional state process. The problem is represented by a partial differential equation with an optimizing term: the Hamilton-Jacobi-Bellman equation. The main purpose of this paper is to solve partial differential equations derived from Hamilton-Jacobi-Bellman equations with a deep learning algorithm: the Deep Galerkin method, first suggested by Sirignano and Spiliopoulos (2018). We then apply the algorithm to obtain the solution of the PDE under some model settings and compare it with the one from the finite difference method. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.12387&r=all 
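The finite difference method used above as the comparison benchmark can be illustrated on a toy PDE. A minimal sketch of an explicit scheme for the 1-D heat equation (the paper's HJB equations are far richer; this only shows the mechanics of the baseline method):

```python
import numpy as np

def heat_explicit(u0, dx, dt, steps):
    """Explicit finite-difference scheme for u_t = u_xx with fixed (Dirichlet)
    boundary values. Stable when dt <= dx**2 / 2."""
    u = np.asarray(u0, dtype=float).copy()
    lam = dt / dx ** 2
    for _ in range(steps):
        # second central difference in space, forward Euler step in time
        u[1:-1] += lam * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# sine initial condition on [0, 1]; exact solution decays as exp(-pi**2 * t)
x = np.linspace(0.0, 1.0, 51)
u = heat_explicit(np.sin(np.pi * x), dx=0.02, dt=1e-4, steps=100)
```

The Deep Galerkin method instead trains a neural network to minimize the PDE residual at sampled points, which avoids the grid and hence scales better in the $d$-dimensional state dimension.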
By:  Pablo Guillen (Faculty of Economics, The University of Sydney); Róbert F. Veszteg (School of Political Science and Economics, Waseda University) 
Abstract:  We introduce two novel matching mechanisms, Reverse Top Trading Cycles (RTTC) and Reverse Deferred Acceptance (RDA), with the purpose of challenging the idea that the theoretical property of strategy-proofness induces high rates of truth-telling in economic experiments. RTTC and RDA are identical to the celebrated Top Trading Cycles (TTC) and Deferred Acceptance (DA) mechanisms, respectively, in all their theoretical properties except that their dominant-strategy equilibrium is to report one's preferences in the order opposite to the way they were induced. With the focal truth-telling strategy being out of equilibrium, we are able to measure cleanly how much of the truth-telling reported for strategy-proof mechanisms is compatible with rational behavior and how much of it is caused by confused decision-makers following a default (very focal) strategy without understanding the structure of the game. In a school-allocation setting, we find that roughly half of the observed truth-telling under TTC and DA is the result of naïve (non-strategic) behavior. Only 13-29% of participants' actions in RTTC and RDA are compatible with rational behavior. Furthermore, by looking at the responses of those seemingly rational participants in control tasks, it becomes clear that even they lack a basic understanding of the game's incentives. We argue that the use of a default option, confusion, and other behavioral biases account for the vast majority of truthful play in both TTC and DA in laboratory experiments. 
Keywords:  matching; strategy-proofness; truth-telling; focal point; rationality; laboratory experiment; school choice; revelation principle 
JEL:  C78 D47 C91 
Date:  2019–08 
URL:  http://d.repec.org/n?u=RePEc:wap:wpaper:1913&r=all 
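The Top Trading Cycles mechanism that RTTC reverses can be sketched for the classic Shapley-Scarf housing market, where each agent owns one house. A minimal implementation (the school-allocation variant used in the experiment adds priorities and capacities, which are omitted here):

```python
def top_trading_cycles(prefs, owner):
    """prefs[a]: houses in decreasing preference order for agent a;
    owner[h]: the agent who initially owns house h.
    Returns a dict agent -> assigned house (Shapley-Scarf housing market)."""
    assignment = {}
    active = set(prefs)
    while active:
        # each active agent points at the owner of their best remaining house
        point = {a: next(h for h in prefs[a] if owner[h] in active)
                 for a in active}
        # follow the pointers until a cycle closes
        a, seen = next(iter(active)), []
        while a not in seen:
            seen.append(a)
            a = owner[point[a]]
        cycle = seen[seen.index(a):]
        # everyone in the cycle trades and leaves the market
        for b in cycle:
            assignment[b] = point[b]
        active -= set(cycle)
    return assignment
```

Under TTC, reporting true preferences is a dominant strategy; the reversed RTTC makes the dominant strategy the opposite ranking, which is what lets the authors separate rational play from focal truth-telling.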
By:  Bai, Hang (U of Connecticut); Zhang, Lu (Ohio State U) 
Abstract:  Labor market frictions are crucial for the equity premium in production economies. A dynamic stochastic general equilibrium model with recursive utility, search frictions, and capital accumulation yields a high equity premium of 4.26% per annum, a stock market volatility of 11.8%, and a low average interest rate of 1.59%, while simultaneously retaining plausible business cycle dynamics. The equity premium and stock market volatility are strongly countercyclical, while the interest rate and consumption growth are largely unpredictable. Because of wage inertia, dividends are procyclical despite consumption smoothing via capital investment. The welfare cost of business cycles is huge, 29%. 
JEL:  E32 E44 G12 J23 
Date:  2020–10 
URL:  http://d.repec.org/n?u=RePEc:ecl:ohidic:202023&r=all 
By:  Elyes Jouini (PJSE - Paris Jourdan Sciences Economiques, Université Panthéon-Sorbonne, ENS Paris, PSL, EHESS, ENPC, CNRS, INRAE; CEREMADE - CEntre de REcherches en MAthématiques de la DEcision, Université Paris Dauphine-PSL, CNRS) 
Abstract:  In both the arbitrage and utility pricing approaches, fictitious completion appears as a powerful tool that permits extending complete-markets results to an incomplete-markets framework. Does this technique permit a characterization of the equilibrium pricing interval? This note provides a negative answer. 
Date:  2020–08 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:halshs03048797&r=all 
By:  Julia M. Puaschunder (The New School, Parsons School of Design, USA) 
Abstract:  Standard economic models primarily capture human beings as rational utility maximizers in the homo oeconomicus model. Behavioral economics has addressed the fallibility of human decision making in a wide range of studies, including laboratory and field experiments as well as big data. The currently ongoing COVID-19 crisis now underlines the importance of a healthy work environment. The medicine of the future is believed to prevent diseases instead of just treating their consequences. There is an expected shift from modern medicine's focus on acute treatment towards addressing the inherently underlying preventive measures so that diseases have a more favorable trajectory or are even avoidable at all. The homo praeventicus model may focus on preventing diseases and working in advance on favorable immune conditions that avert negative outbreaks of pandemics or determine a healthier state when falling sick. The homo praeventicus offers a remedy for chronic diseases and a reduction of the global cost escalation for medical care. Because we have to live with environmental burdens on our health, a change of direction towards prevention is recommended and the implementation of homo praeventicus models envisioned. 
Keywords:  AI, Artificial Intelligence, Coronavirus, COVID-19, Discounting, Healthcare, Homo oeconomicus, Homo praeventicus, Medical care, Precautionary principle, Prevention, Trajectory 
Date:  2020–10 
URL:  http://d.repec.org/n?u=RePEc:smo:bpaper:023jpm&r=all 
By:  Alexander L. Brown; Taisuke Imai; Ferdinand M. Vieider; Colin Camerer 
Abstract:  Loss aversion is one of the most widely used concepts in behavioral economics. We conduct a large-scale interdisciplinary meta-analysis to systematically accumulate knowledge from numerous empirical estimates of the loss aversion coefficient reported during the past couple of decades. We examine 607 empirical estimates of loss aversion from 150 articles in economics, psychology, neuroscience, and several other disciplines. Our analysis indicates that the mean loss aversion coefficient is between 1.8 and 2.1. We also document how reported estimates vary depending on the observable characteristics of the study design. 
Keywords:  loss aversion, prospect theory, meta-analysis 
JEL:  D81 D90 C90 C11 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_8848&r=all 
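The loss aversion coefficient being meta-analyzed above is the lambda in the Kahneman-Tversky value function, which is concave for gains and convex but steeper for losses. A sketch using the conventional curvature alpha = 0.88 and lambda = 2.0, close to the meta-analytic mean of 1.8-2.1 (parameter values are illustrative):

```python
def pt_value(x, alpha=0.88, lam=2.0):
    """Prospect-theory value function relative to a reference point of 0:
    x**alpha for gains, -lam * (-x)**alpha for losses.
    lam is the loss aversion coefficient."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha
```

With lambda near 2, a loss hurts roughly twice as much as an equal-sized gain pleases, which is the everyday reading of the 1.8-2.1 range reported in the paper.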
By:  Ruth Cadaoas Tacneng; Klarizze Anne Martin Puzon 
Abstract:  What is the effect of gender priming on solidarity behaviour? We explore a two-player solidarity game where players can insure each other against the risk of losses. In the utility function, priming is represented as the 'change in weight' given to the other player's payoff. We test this experimentally in a developing-country setting, the Philippines. We consider a treatment that involves reminding subjects of their gender. We find that, without priming, there are no statistically significant gender differences in the solidarity game. 
Keywords:  Gender, priming, Gender differences, Philippines, Dice game, Behaviour, Risk attitudes, Insurance 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:unu:wpaper:wp202124&r=all 
By:  Paul Frijters; Christian Krekel; Aydogan Ulker 
Abstract:  Is wellbeing higher if the same number of negative events is spread out rather than bunched in time? Should positive events be spread out or bunched? We answer these questions exploiting quarterly data on six positive and twelve negative life events in the Household, Income and Labour Dynamics in Australia panel. Accounting for selection, anticipation, and adaptation, we find a tipping point when it comes to negative events: once people experience about two negative events, their wellbeing depreciates disproportionally as more and more events occur in a given period. For positive events, effects are weakly decreasing in size. So for a person's wellbeing, both the good and the bad should be spread out rather than bunched in time, corresponding to the classic economic presumption of concave utility rather than Machiavelli's prescript of inflicting all injuries at once. Yet the differences are small, with complete smoothing of all negative events over all people and periods calculated to yield no more than a 1-2% reduction in the total negative wellbeing impact of negative events. 
Keywords:  wellbeing, mental health, life events, nonlinearities, hedonic adaptation, welfare analysis 
JEL:  I31 D1 P35 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:cep:cepdps:dp1680&r=all 
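The tipping-point finding above amounts to a convex per-period cost of negative events, under which spreading events over time beats bunching them. A toy sketch (the exponent 1.5 is an illustrative assumption, not an estimate from the paper):

```python
def period_wellbeing(n_events, exponent=1.5):
    """Wellbeing loss within one period from n negative events; a convex cost
    (exponent > 1) captures disproportionate depreciation as events pile up."""
    return -(n_events ** exponent)

def total_wellbeing(events_per_period):
    """Total wellbeing over a sequence of periods."""
    return sum(period_wellbeing(n) for n in events_per_period)
```

With a convex cost, one event per quarter for four quarters costs less total wellbeing than four events in a single quarter, which is the spreading-beats-bunching conclusion.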
By:  Jin Hyuk Choi; Tae Ung Gang 
Abstract:  We consider an optimal investment problem to maximize expected utility of terminal wealth, in an illiquid market with search frictions and transaction costs. In the market model, an investor's attempt to transact succeeds only at the arrival times of a Poisson process, and the investor pays proportional transaction costs when a transaction succeeds. We characterize the no-trade region describing the optimal trading strategy. We provide asymptotic expansions of the boundaries of the no-trade region and the value function for small transaction costs. The asymptotic analysis implies that the effects of the transaction costs are more pronounced in markets with less search friction. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.09936&r=all 
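The market structure described above, trades succeeding only at Poisson arrival times and incurring proportional costs with a no-trade region around a target, can be caricatured in a short simulation. A sketch under assumed parameter values; the paper derives the optimal band, while here it is fixed exogenously:

```python
import numpy as np

def simulate_band_policy(T=10.0, arrival_rate=5.0, target=0.5, band=0.1,
                         mu=0.07, sigma=0.2, r=0.01, cost=0.005, seed=1):
    """Toy no-trade-band strategy: the risky fraction drifts with returns, and
    at Poisson arrival times the investor rebalances to the nearest band edge,
    paying proportional transaction costs. Returns final wealth."""
    rng = np.random.default_rng(seed)
    dt = 0.001
    n = int(T / dt)
    arrivals = rng.random(n) < arrival_rate * dt  # Poisson arrivals on a grid
    wealth, frac = 1.0, target
    for i in range(n):
        z = rng.standard_normal()
        risky = frac * wealth * np.exp((mu - 0.5 * sigma**2) * dt
                                       + sigma * np.sqrt(dt) * z)
        safe = (1 - frac) * wealth * np.exp(r * dt)
        wealth = risky + safe
        frac = risky / wealth
        if arrivals[i] and abs(frac - target) > band:
            edge = target + band * np.sign(frac - target)  # nearest band edge
            wealth -= cost * abs(frac - edge) * wealth     # proportional cost
            frac = edge
    return wealth
```

Because rebalancing is only possible at arrival times, the investor can be stranded outside the band; the paper's asymptotics make precise how this interacts with the size of the transaction costs.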
By:  Bary Pradelski (POLARIS - Performance analysis and optimization of LARge Infrastructures and Systems, LIG - Laboratoire d'Informatique de Grenoble, CNRS, Inria, Grenoble INP); Heinrich Nax 
Abstract:  In twosided markets with transferable utility ('assignment games'), we study the dynamics of trade arrangements and price adjustments as agents from the two market sides stochastically match, break up, and rematch in their pursuit of better opportunities. The underlying model of individual adjustments is based on the behavioral theories of adaptive learning and aspiration adjustment. Dynamics induced by this model converge to approximately optimal and stable market outcomes, but this convergence may be (exponentially) slow. We introduce the notion of a 'market sentiment' that governs which of the two market sides is temporarily more or less amenable to price adjustments, and show that such a feature may significantly speed up convergence. 
Keywords:  market psychology, convergence time, matching markets, assignment games, core, evolutionary game theory 
Date:  2020–03 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal03100116&r=all 
By:  Anja Brumme; Wolfgang Buchholz; Dirk Rübbelke 
Abstract:  In this paper we demonstrate how the impure public good model can be converted into a pure public good model with satiation of private consumption, which can be handled more easily, by using a variation of the aggregative game approach devised by Cornes and Hartley (2007). We point out the conditions on impure public good utility functions that allow for this conversion, through which the analysis of Nash equilibria can be conducted in a unified way for the impure and the pure public good model and which facilitates comparative statics analysis for impure public goods. Our approach also offers new insights into the determinants of becoming a contributor to the public good in the impure case, as well as into the non-neutral effects of income transfers on Nash equilibria when the public good is impure. 
Keywords:  impure public goods, warm-glow giving, Nash equilibria, aggregative game approach 
JEL:  C72 D64 H41 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_8852&r=all 
By:  Basu, Arnab K. (Cornell University); Dimova, Ralitza (University of Manchester) 
Abstract:  This paper revisits the causes behind child labor supply by focusing on an aspect that has received little attention: the link between the household head's risk and time preferences and observed child labor supply. We develop a theoretical model and empirically test for this causality using data from the seventh round of the Ethiopian Rural Household Survey. We find child labor to be increasing in both higher adult discount rates and higher degrees of risk aversion, and this finding is robust across alternative empirical approaches. Higher discount rates favor current consumption which is financed in part by child labor income while high risk aversion to future income (due to either low or uncertain returns to education) favor child labor at the expense of schooling. 
Keywords:  risk and time preferences, education, child labor, Ethiopia 
JEL:  C93 J43 O55 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp14062&r=all 
By:  Bednar, William; Pretnar, Nick 
Abstract:  We construct a general equilibrium model with home production where consumers choose how to spend their off-market time using market consumption purchases. The time-intensities and productivities of different home production activities determine the degree to which variation in income and relative market prices affects both the composition of expenditure and market labor hours per worker. When accounting for time to consume, homothetic utility functions can still generate nonlinear expansion paths as wages increase. For the United States, substitution effects due to relative price changes dominate income effects from wage growth in contributing to the rise in the services share and the fall in hours per worker. Quality improvements to goods and services have roughly kept pace with each other, so that changes to sectoral production efficiencies are the primary driver of relative price variation. 
Keywords:  household production, labor-leisure, time use, aggregate consumption, structural change, technical change, services, goods 
JEL:  D13 E2 O3 
Date:  2020–10–19 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:103730&r=all 