New Economics Papers on Utility Models and Prospect Theories
By: | Chris van Klaveren; Bernard M.S. van Praag; Henriette Maassen van den Brink |
Abstract: | In this paper an empirical model is developed where the collective household model is used as a basic framework to describe the time allocation problem. The collective model views household behavior as the outcome of maximizing a household utility function which is a weighted sum of the utility functions of the male and the female. The empirical research that has been done is mainly focused on testing and refuting the unitary model. Moreover, in the bulk of the time allocation literature the main accent still lies on the development of theory. The novelty of this paper is that we empirically estimate the two individual utility functions and the household power weight distribution, which is parameterized per household. The model is estimated on a sub-sample of the British Household Panel Survey, consisting of two-earner households. The empirical results suggest that: (1) Given that the weight distribution is wage dependent, preferences of males and females differ, which rejects the unitary model; (2) The power differences are mainly explained by differences in the ratio of the partners' hourly wages; (3) Although there are significant individual variations, on average the power distribution in two-earner families is about even; (4) The male tends to be marginally more productive in performing household tasks than the female; (5) The preference for total household production is influenced by family size for the female but not for the male; (6) Both males and females have a backward bending labor supply curve. |
Keywords: | collective household models, labor supply, intra-household, time allocation |
JEL: | D12 D13 J22 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_1716&r=upt |
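For readers unfamiliar with the collective framework sketched in the abstract above, the household programme can be written, in generic notation that is only illustrative of (not identical to) the authors' specification, as

```latex
\max_{\ell_m,\,\ell_f,\,x,\,H}\;
  \mu(w_m, w_f, z)\, U_m(\ell_m, x, H)
  \;+\; \bigl(1 - \mu(w_m, w_f, z)\bigr)\, U_f(\ell_f, x, H)
\quad \text{s.t. the household time and budget constraints,}
```

where U_m and U_f are the individual utility functions, H is household production, the ell_i are leisure/time uses, x is consumption, and mu in [0,1] is the power weight, parameterized per household as a function of the wages w_m, w_f and characteristics z. Result (2) in the abstract corresponds to mu depending mainly on the ratio of hourly wages, whereas a weight that does not respond to such distribution factors is what the unitary model would imply.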
By: | Richard Batley |
Abstract: | As the demands placed on transport systems have increased relative to extensions in supply, problems of network unreliability have become ever more prevalent. The response of some transport users has been to accommodate expectations of unreliability in their decision-making, particularly through their trip scheduling. In the analysis of trip scheduling, Small's (1982) approach has received considerable support. Small extends the microeconomic theory of time allocation (e.g. Becker, 1965; De Serpa, 1971), accounting for scheduling constraints in the specification of both utility and its associated constraints. Small makes the theory operational by means of the random utility model (RUM). This involves converting the continuous departure time variable into discrete departure time segments, specifying the utility of each departure time segment as a function of several components (specifically journey time, schedule delay and the penalty of late arrival), and adopting particular distributional assumptions concerning the random error terms of contiguous departure time segments (whilst his 1982 paper assumes IID errors, Small's 1987 paper considers a more complex pattern of covariance). A fundamental limitation of Small's approach is that individuals make choices under certainty, an assumption that is clearly unrealistic in the context of urban travel choice. The response of microeconomic theory to such a challenge is to reformulate the objective problem from the maximisation of utility to one of maximising expected utility, with particular reference to the works of von Neumann & Morgenstern (1947) and Savage (1954). Bates et al. (2001) apply this extension to departure time choice, but specify choice as being over continuous time; the latter carries the advantage of simplifying some of the calculations of optimal departure time. Moreover, Bates et al. offer an account of departure time choice under uncertainty, but retain a deterministic representation. Batley & Daly (2004) develop these ideas further by reconciling the analyses of Small (1982) and Bates et al. Drawing on early contributions to the RUM literature by Marschak et al. (1963), Batley and Daly propose a probabilistic model of departure time choice under uncertainty, based on an objective function of random expected utility maximisation. Despite this progression in the generality and sophistication of methods, significant challenges to the normative validity of RUM and transport network models remain. Of increasing prominence in transport research is the conjecture that expected utility maximisation may represent an inappropriate objective of choice under uncertainty. Significant evidence for this conjecture exists, and a variety of alternative objectives have been proposed instead; Kahneman & Tversky (2000) offer a useful compendium of such papers. With regard to these alternatives, Kahneman & Tversky's (1979) own Prospect Theory commands considerable support as a theoretical panacea for choice under uncertainty. This theory distinguishes between two phases in the choice process: editing and evaluation. Editing may involve several stages, the so-called 'coding', 'combination', 'cancellation', 'simplification' and 'rejection of dominated alternatives'. Evaluation involves a value function that is defined on deviations from some reference point, and is characterised by concavity for gains and convexity for losses, with the function being steeper for losses than for gains. 
The present paper begins by formalising the earlier ideas of Batley and Daly (2004); the paper thus presents a theoretical exposition of a random expected utility model of departure time choice. The workings of the model are then illustrated by means of a numerical example. The scope of the analysis is subsequently widened to consider the possibility of divergence from the objective of expected utility maximisation. An interesting feature of this discussion is the consideration of the relationship between Prospect Theory and a generalised representation of the random expected utility model. In considering this relationship, the paper draws on Batley & Daly's (2003) investigation of the equivalence between RUM and elimination-by-aspects (Tversky, 1972); the latter represents one example of a possible 'editing' model within Prospect Theory. Again, the extended model is illustrated by example. |
Date: | 2005–08 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p750&r=upt |
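As background to the abstract above, the scheduling utility in the Small (1982) tradition and its expected-utility extension are usually written along the following lines (a generic sketch; the paper's exact specification may differ):

```latex
U(t_d) = \beta\, T(t_d) + \gamma\, \mathrm{SDE}(t_d) + \delta\, \mathrm{SDL}(t_d) + \theta\, D_L(t_d),
\qquad
\mathrm{SDE} = \max(0,\ \mathrm{PAT} - t_a),
\quad
\mathrm{SDL} = \max(0,\ t_a - \mathrm{PAT}),
```

where t_d is the departure time, t_a the resulting arrival time, T the travel time, PAT the preferred arrival time and D_L a lateness dummy. Under uncertain travel time the deterministic objective is replaced by the expectation E[U(t_d)] taken over the travel time distribution, which is the random expected utility object that Batley & Daly embed in a RUM, and which Prospect Theory would instead evaluate with a reference-dependent value function and decision weights.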
By: | Francoise Forges; Enrico Minelli |
Abstract: | Afriat (1967) showed the equivalence of the strong axiom of revealed preference and the existence of a solution to a set of linear inequalities. From this solution he constructed a utility function rationalizing the choices of a competitive consumer. We extend Afriat’s theorem to a class of nonlinear budget sets. We thereby obtain testable implications of rational behavior for a wide class of economic environments, and a constructive method to derive individual preferences from observed choices. In an application to market games, we identify a set of observable restrictions characterizing Nash equilibrium outcomes. |
Keywords: | GARP, rational choice, revealed preferences, market games, SARP, WARP |
JEL: | C72 D11 D43 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_1703&r=upt |
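To make the revealed-preference machinery concrete for the classical linear-budget case that Afriat (1967) covered (the paper itself extends this to nonlinear budget sets), a minimal GARP check might look as follows; the function name and toy data are purely illustrative:

```python
import numpy as np

def garp_violations(prices, quantities):
    """Check the Generalised Axiom of Revealed Preference on observations
    (p_t, x_t): x_t is directly (weakly) revealed preferred to x_s if
    p_t . x_t >= p_t . x_s.  GARP fails if x_t is revealed preferred to x_s
    (transitively) while x_s is strictly directly revealed preferred to x_t."""
    P, X = np.asarray(prices, float), np.asarray(quantities, float)
    T = P.shape[0]
    expend = P @ X.T                                  # expend[t, s] = p_t . x_s
    direct = expend.diagonal()[:, None] >= expend     # direct weak revealed preference
    R = direct.copy()                                 # transitive closure (Warshall)
    for k in range(T):
        R |= R[:, [k]] & R[[k], :]
    strict = expend.diagonal()[:, None] > expend      # strict revealed preference
    return [(t, s) for t in range(T) for s in range(T) if R[t, s] and strict[s, t]]

# toy data: two goods, three observed choices
prices = [[1.0, 2.0], [2.0, 1.0], [1.0, 1.0]]
quantities = [[3.0, 1.0], [1.0, 3.0], [2.0, 2.0]]
print(garp_violations(prices, quantities))            # [] means the data pass GARP
```

Afriat's theorem then says that data passing this kind of test can be rationalized by a utility function constructed from the solution to the associated set of linear inequalities, which is the machinery the paper carries over to nonlinear budget sets.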
By: | Antoinette Baujard (CREM - CNRS) |
Abstract: | A wide diversity of rankings of opportunity sets is characterized in what is now commonly called the freedom of choice literature. We claim the normative content of each of these propositions can be analyzed and clarified through a typology. We distinguish two kinds of rankings: rankings according to one prudential value and rankings according to several prudential values. In the first case, we present rankings according to freedom. Different rankings correspond to different meanings of freedom: freedom of choice, freedom as autonomy, freedom as exercise of significant choices, negative freedom, positive freedom and utility. In the second case, the rankings may capture several prudential values at once, namely utility and freedom. We organize the presentation around different forms of commensurability between prudential values: weighting, trumping, equal consideration and discontinuity. |
Keywords: | opportunity sets, freedom of choice, prudential values, plurality, overall well-being, typology. |
JEL: | D11 D63 I31 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:tut:cremwp:200611&r=upt |
By: | Dirk Van Amelsfort; Michiel Bliemer |
Abstract: | We are developing a dynamic modeling framework in which we can evaluate the effects of different road pricing measures on individual choice behavior as well as on a network level. Important parts of this framework are different choice models which forecast the route, departure time and mode choice behavior of travelers under road pricing in the Netherlands. In this paper we discuss the setup of the experiment in detail and present our findings about dealing with uncertainty, travel time and schedule delays in the utility functions. To develop the desired choice models a stated choice experiment was conducted. In this experiment respondents were presented with four alternatives, which can be described as follows: Alternative A: pay for preferred travel conditions. Alternative B: adjust arrival time and pay less. Alternative C: adjust route and pay less. Alternative D: adjust mode to avoid paying the charge. The four alternatives differ mainly in price, travel time, time of departure/arrival and mode, and are based on the respondents' current morning commute characteristics. The travel time in the experiment is based on the free-flow travel time for the home-to-work trip and the trip length, both reported by the respondent. We calculate the level of travel time by setting a certain part of the trip length to be in free-flow conditions and computing a free-flow and a congested part of the travel time. Adding the free-flow and the congested travel time gives the total minimum travel time for the trip; it is a minimum because we add an uncertainty margin to it, creating the maximum travel time. The level of uncertainty we introduced between the minimum and maximum travel time was based on the difference between the reported average and free-flow travel times. In simpler words than used here, we told respondents that the actual travel time for the trip is unknown, but that every travel time between the minimum and the maximum has an equal chance of occurring. As a consequence of introducing uncertainty in travel time, the arrival time also receives the same margin. Using the data from the experiment we estimated choice models following the schedule delay framework of Vickrey (1969) and Small (1987), assigning penalties to shifts from the preferred time of departure/arrival to earlier or later times. In the models we used the minimum travel time and the expected travel time (the average of minimum and maximum). Using the expected travel time already incorporates some of the uncertainty (half) in the travel time attribute, making the separate uncertainty attribute in the utility function insignificant. The parameter values and values of time do not differ between the minimum and expected travel time specifications. Initially, we looked at schedule delays only from an arrival time perspective. Here we also distinguished between schedule delays based on the minimum arrival time and on the expected arrival time (the average of minimum and maximum). Again, when using expected schedule delays the uncertainty is included in the schedule delays and a separate uncertainty attribute in the utility function is not significant. There is another issue involved when looking at the preferred arrival time of the respondents; there are three cases to take into account: 1. If the minimum and maximum arrival times are both earlier than the preferred arrival time we are certain about a schedule delay early situation (based on minimum or expected schedule delays). 
2. If the minimum and maximum arrival times are both later than the preferred arrival time we are certain about a schedule delay late situation (based on minimum or expected schedule delays). 3. The scheduling situation is undetermined when the preferred arrival time lies between the minimum and maximum arrival time. In this case we use an expected schedule delay, assuming a uniform distribution of arrival times between the minimum and maximum arrival time. Parameter values for the two approaches are very different, and results from the minimum arrival time approach are more in line with expectations. There is thus a choice to account for uncertainty in the utility function through the expected travel time, through expected schedule delays, or as a separate attribute. In the paper we discuss the effects of the different approaches. We extended our models to also include schedule delays based on the preferred departure time. In the departure time scheduling components uncertainty is not included. Results show that the departure schedule delay late term is significant and substantial, together with significant arrival schedule delay early and late terms. A further extension of the model takes into account the amount of flexibility in departure and arrival times for each respondent. The results will be included in this paper. |
Date: | 2005–08 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p721&r=upt |
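A small sketch of the expected schedule-delay calculation described in the abstract above for the undetermined case, assuming (as the authors do) a uniform distribution of arrival times between the minimum and maximum; the function name, units and example values are purely illustrative:

```python
def expected_schedule_delays(t_min, t_max, pat):
    """Expected schedule delay early/late when the arrival time is uniform
    on [t_min, t_max] and pat is the preferred arrival time (all in minutes).
    SDE = max(0, pat - t), SDL = max(0, t - pat)."""
    if t_max <= t_min:                       # no uncertainty: deterministic delays
        return max(0.0, pat - t_min), max(0.0, t_min - pat)
    width = t_max - t_min
    if pat <= t_min:                         # certainly late
        return 0.0, (t_min + t_max) / 2.0 - pat
    if pat >= t_max:                         # certainly early
        return pat - (t_min + t_max) / 2.0, 0.0
    # undetermined case: integrate the early and late triangles around pat
    sde = 0.5 * (pat - t_min) ** 2 / width
    sdl = 0.5 * (t_max - pat) ** 2 / width
    return sde, sdl

# arrival between 8:40 and 9:10 (minutes after 8:00), preferred arrival 9:00
print(expected_schedule_delays(40, 70, 60))  # roughly 6.7 min early, 1.7 min late
```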
By: | Dellaert, B.G.C.; Stremersch, S. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University) |
Abstract: | Increasingly, firms adopt mass customization, which allows consumers to customize products by self-selecting their most preferred composition of the product from a predefined set of modules. For example, PC vendors such as Dell allow customers to customize their PC by choosing the type of processor, memory size, monitor, etc. However, how such firms configure the mass customization process determines the utility a consumer may obtain and the complexity a consumer may face in the mass customization task. Mass customization configurations may differ in four important ways; we take the example of the personal computer industry. First, a firm may offer few or many product modules that can be mass customized (e.g., only allow consumers to customize the memory and processor of a PC, or allow consumers to customize any module of the PC) and few or many levels to choose from per mass customizable module (e.g., for mass customization of the processor, only two or many more processing speeds are available). Second, a firm may offer the consumer a choice only between very similar module levels (e.g., a 17-inch or 18-inch screen) or between very different module levels (e.g., a 15-inch or 21-inch screen). Third, a firm may individually price the modules within a mass customization configuration (e.g., showing the price of the different processors the consumer may choose from) along with pricing the total product, or the firm may show only the total product price (e.g., the price of the different processors is not shown, but only the computer's total price is shown). Fourth, the firm may show a default version (e.g., for the processor, the configuration contains a pre-selected processing speed, which may be a high-end or low-end processor), which consumers may then customize, or the firm may not show a default version and let consumers start from scratch in composing the product. The authors find that the choices that firms make in configuring the mass customization process affect the product utility consumers can achieve in mass customization. The reason is that the mass customization configuration affects how closely the consumer may approach his or her ideal product by mass customizing. Mass customization configurations also affect consumers' perception of the complexity of mass customization, as they affect how many cognitive steps a consumer needs to make in the decision process. Both product utility and complexity in the end determine the utility consumers derive from using a certain mass customization configuration, which in turn will determine main outcome variables for marketers, such as total product sales, satisfaction with the product and the firm, referral behavior and loyalty. The study offers good news for those who wish to provide many mass customization options to consumers, because we find that within the rather large range of modules and module levels we manipulated in this study, consumers did not perceive significant increases in complexity, while they were indeed able to achieve higher product utility. Second, our results imply that firms, when increasing the number of module levels, should typically offer consumers more additional options in the most popular range of a module and fewer additional options at the extremes. Third, pricing should preferably be presented only at the total product level, rather than at the module and product level. We find that this approach reduces complexity and increases product utility. 
Fourth, firms should offer a default version that consumers can use as a starting point for mass customization, as doing so minimizes the complexity to consumers. The best default version to start out with is a base default version, because this type of default version allows the consumer to most closely approach his or her ideal product. The reason is that consumers, when presented with an advanced default, may buy a product that is more advanced than they actually need. We also found that expert consumers are ideal targets for mass customization offerings. Compared to novice consumers, expert consumers experience lower complexity in mass customization, and complexity has a less negative influence on the product utility they obtain in the mass customization process. In general, reducing complexity in the mass customization configuration is a promising strategy for firms, as it not only increases the utility of the entire process for consumers, but also allows them to compose products that more closely fit their ideal product. |
Keywords: | mass customization;consumer choice;complexity;utility;PC buying;mass customized products;customization; |
Date: | 2004–11–17 |
URL: | http://d.repec.org/n?u=RePEc:dgr:eureri:30001946&r=upt |
By: | Hartley, Roger (University of Manchester); Lanot, Gauthier (Keele University); Walker, Ian (University of Warwick) |
Abstract: | This paper analyses the behaviour of contestants in one of the most popular TV gameshows ever in order to estimate risk aversion. This gameshow has a number of features that make it well suited for our analysis: the format is extremely straightforward, it involves no strategic decision-making, we have a large number of observations, and the prizes are cash, paid immediately, and cover a large range – from £100 up to £1 million. Our data sources have the virtue that we are able to check the representativeness of the gameshow participants. Even though the CRRA model is extremely restrictive, we find that a coefficient of relative risk aversion close to unity fits the data across a wide range of wealth remarkably well. |
Keywords: | Risk aversion ; gameshow |
JEL: | D81 C93 C23 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:wrk:warwec:747&r=upt |
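For reference, the CRRA specification referred to in the abstract above is usually written as follows (a standard textbook form, not necessarily the paper's exact parameterisation):

```latex
u(w) = \frac{w^{1-\rho} - 1}{1-\rho},
\qquad
-\,\frac{w\, u''(w)}{u'(w)} = \rho,
\qquad
u(w) \to \ln w \ \ \text{as}\ \rho \to 1,
```

so a coefficient of relative risk aversion close to unity, as estimated from the gameshow data, corresponds approximately to logarithmic utility over wealth.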
By: | Stephane Hess; Denis Bolduc; John Polak |
Abstract: | The area of discrete choice modelling has developed rapidly in recent years. In particular, continuing refinements of the Generalised Extreme Value (GEV) model family have permitted the representation of increasingly complex patterns of substitution and parallel advances in estimation capability have led to the increased use of model forms requiring simulation in estimation and application. One model form especially, namely the Mixed Multinomial Logit (MMNL) model, is being used ever more widely. Aside from allowing for random variations in tastes across decision-makers in a Random Coefficients Logit (RCL) framework, this model additionally allows for the representation of inter-alternative correlation as well as heteroscedasticity in an Error Components Logit (ECL) framework, enabling the model to approximate any Random Utility model arbitrarily closely. While the various developments discussed above have led to gradual gains in modelling flexibility, little effort has gone into the development of model forms allowing for a representation of heterogeneity across respondents in the correlation structure in place between alternatives. Such correlation heterogeneity is however possibly a crucial factor in the variation of choice-making behaviour across decision-makers, given the potential presence of individual-specific terms in the unobserved part of utility of multiple alternatives. To the authors' knowledge, there has so far only been one application of a model allowing for such heterogeneity, by Bhat (1997). In this Covariance NL model, the logsum parameters themselves are a function of socio-demographic attributes of the decision-makers, such that the correlation heterogeneity is explained with the help of these attributes. While the results by Bhat show the presence of statistically significant levels of covariance heterogeneity, the improvements in terms of model performance are almost negligible. While it is possible to interpret this as a lack of covariance heterogeneity in the data, another explanation is possible. It is clearly imaginable that a major part of the covariance heterogeneity cannot be explained in a deterministic fashion, either due to data limitations, or because of the presence of actual random variation, in a situation analogous to the case of random taste heterogeneity that cannot be explained in a deterministic fashion. In this paper, we propose two different ways of modelling such random variations in the correlation structure across individuals. The first approach is based on the use of an underlying GEV structure, while the second approach consists of an extension of the ECL model. In the former approach, the choice probabilities are given by integration of underlying GEV choice probabilities, such as Nested Logit, over the assumed distribution of the structural parameters. In the most basic specification, the structural parameters are specified as simple random variables, where appropriate choices of statistical distributions and/or mathematical transforms guarantee that the resulting structural parameters fall into the permissible range of values. Several extensions are then discussed in the paper that allow for a mixture of random and deterministic variations in the correlation structure. 
In an ECL model, correlation across alternatives is introduced with the help of normally distributed error-terms with a mean of zero that are shared by alternatives that are closer substitutes for each other, with the extent of correlation being determined by the estimates of the standard deviations of the error-components. The extension of this model to a structure allowing for random covariance heterogeneity is again divided into two parts. In the first approach, correlation is assumed to vary purely randomly; this is obtained through simple integration over the distribution of the standard deviations of the error-terms, superseding the integration over the distribution of the error-components with a specific draw for the standard deviations. The second extension is similar to the one used in the GEV case, with the standard deviations being composed of a deterministic term and a random term, either as a pure deviation, or in the form of random coefficients in the parameterisation of the distribution of the standard deviations. We next show that our Covariance GEV (CGEV) model generalises all existing GEV model structures, while the Covariance ECL (CECL) model can theoretically approximate all RUM models arbitrarily closely. Although this also means that the CECL model can closely replicate the behaviour of the CGEV model, there are some differences between the two models, which can be related to the differences in the underlying error-structure of the base models (GEV vs ECL). The CECL model has the advantage of implicitly allowing for heteroscedasticity, although this is also possible with the CGEV model, by adding appropriate error-components, leading to an EC-CGEV model. In terms of estimation, the CECL model has a run-time advantage for basic nesting structures, when the number of error-components, and hence dimensions of integration, is low enough not to counteract the gains made by being based on a more straightforward integrand (MNL vs advanced GEV). However, in more complicated structures, this advantage disappears, in a situation that is analogous to the case of Mixed GEV models compared to ECL models. A final disadvantage of the CECL model structure comes in the form of an additional set of identification conditions. The paper presents applications of these model structures to both cross-sectional and panel datasets from the field of travel behaviour analysis. The applications illustrate the gains in model performance that can be obtained with our proposed structures when compared to models governed by a homogeneous covariance structure assumption. As expected, the gains in performance are more important in the case of data with repeated observations for the same individual, where the notion of individual-specific substitution patterns applies more directly. The applications also confirm the slight differences between the CGEV and CECL models discussed above. The paper concludes with a discussion of how the two structures can be extended to allow for random taste heterogeneity. The resulting models thus allow for random variations in choice behaviour both in the evaluation of measured attributes and in the correlation across alternatives in the unobserved utility terms. This further increases the flexibility of the two model structures, and their potential for analysing complex behaviour in transport and other areas of research. |
Date: | 2005–08 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p375&r=upt |
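A rough sketch of the first ('Covariance GEV') idea described in the abstract above: nested logit probabilities are simulated by averaging over random draws of the logsum (structural) parameter, with a logistic transform keeping each draw inside the permissible range. The distributional choice, the transform and the toy data below are illustrative assumptions, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_logit_probs(V, nests, lam):
    """Two-level nested logit probabilities for one decision-maker.
    V: deterministic utilities (n_alt,), nests: list of index arrays,
    lam: logsum parameter in (0, 1], shared by all nests for simplicity."""
    probs = np.zeros_like(V)
    logsums = np.array([lam * np.log(np.sum(np.exp(V[m] / lam))) for m in nests])
    p_nest = np.exp(logsums) / np.sum(np.exp(logsums))
    for m, pn in zip(nests, p_nest):
        within = np.exp(V[m] / lam) / np.sum(np.exp(V[m] / lam))
        probs[m] = pn * within
    return probs

def covariance_gev_probs(V, nests, mu, sigma, n_draws=2000):
    """Average nested logit probabilities over random draws of the logsum
    parameter: lam = logistic(mu + sigma * eta) keeps each draw in (0, 1),
    giving purely random covariance heterogeneity across individuals."""
    etas = rng.standard_normal(n_draws)
    lams = 1.0 / (1.0 + np.exp(-(mu + sigma * etas)))
    return np.mean([nested_logit_probs(V, nests, lam) for lam in lams], axis=0)

V = np.array([0.5, 0.2, -0.1])             # deterministic utilities of 3 alternatives
nests = [np.array([0, 1]), np.array([2])]  # alternatives 0 and 1 are closer substitutes
print(covariance_gev_probs(V, nests, mu=1.0, sigma=0.8))
```

Replacing the purely random lam with a deterministic function of covariates plus a random term would give the mixed deterministic/random specification the abstract also mentions.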
By: | Gerard De Jong; Andrew Daly; Marits Pieters; Toon Van der Hoorn |
Abstract: | Transport infrastructure projects in The Netherlands are appraised ex ante by using cost-benefit analysis (CBA) procedures following the so-called 'OEI-guidelines'. The project benefits for travellers are incorporated in the form of changes in demand (e.g. from the Dutch national model system, LMS, or the regional models, NRM) and changes in the generalised travel costs (using values of time from Stated Preference studies to monetise travel time savings), and applying the rule of half. While a number of short-term improvements to the current procedures have been proposed, it is also interesting to consider a more radical approach using explicit measures of consumer surplus, obtained by integrating the demand models directly. These measures are called logsums, from their functional form. The advantages that logsums would bring to the appraisal procedure are that they can incorporate a degree of heterogeneity in the population, while also being theoretically more correct and in many cases easier to calculate. In this context, the Transport Research Centre (AVV) of the Dutch Ministry of Transport, Public Works and Water Management has commissioned RAND Europe to undertake a study comparing the conventional approach to the use of the logsum change as a measure of the change in consumer surplus that would result from a transport infrastructure project. The paper is based on the work conducted in the study. The paper opens with a review of the literature on the use of logsums as a measure of consumer surplus change in project appraisal and evaluation. It then goes on to describe a case study with the Dutch National Model System (LMS) for transport in which three methods are compared for a specific project (a high-speed magnetic hover train that would connect the four main cities in the Randstad: Amsterdam, The Hague, Rotterdam and Utrecht): a. the 'classical' CBA approach as described above, b. the improved 'classical' CBA approach (introducing a number of short-term improvements) and c. the logsum approach (as a long-term improvement). The direct effects of a particular policy on travellers can be measured as the change in consumer surplus that results from that policy (there can also be indirect and external effects that may not be covered in the consumer surplus change). The consumer surplus associated with a set of alternatives is, under the logit assumptions, relatively easy to calculate. By definition, a person's consumer surplus is the utility, in money terms, that the person receives in the choice situation. If the unobserved component of utility is independently and identically distributed extreme value and utility is linear in income, then the expected consumer surplus becomes the log of the denominator of a logit choice probability, divided by the marginal utility of income, plus arbitrary constants. This is often called the 'logsum'. Total consumer surplus in the population can be calculated as a weighted sum of logsums over a sample of decision-makers, with the weights reflecting the number of people in the population who face the same representative utilities as the sampled person. The change in consumer surplus is calculated as the difference between the logsum under the conditions before the change and after the change (e.g. introduction of a policy). The arbitrary constants drop out. However, to calculate this change in consumer surplus, the researcher must know the marginal utility of income. 
Usually a price or cost variable enters the representative utility and, if it enters in a consistent linear additive fashion, the negative of its coefficient is by definition the marginal utility of income. If the marginal utility of income is not constant with respect to income, as is the case in the LMS and NRM, a far more complex formula is needed, or an indirect approach has to be taken. This paper will review the theoretical literature on the use of the logsum as an evaluation measure, including both the original papers on this topic from the seventies and the work on the income effect in the nineties. Recent application studies that used the logsum for evaluation purposes will also be reviewed. Finally, outcomes of runs with the LMS will be reported for the three different approaches mentioned above (including the logsum approach) for evaluating the direct effects of transport policies and projects. Different methods for monetising the logsum change will be compared. |
Date: | 2005–08 |
URL: | http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p158&r=upt |
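The logsum measure discussed in the abstract above is the standard one: under the logit assumptions, with utility linear in income and marginal utility of income alpha_n for person n, the expected consumer surplus change from a project is

```latex
\Delta \mathbb{E}[\mathrm{CS}_n]
  = \frac{1}{\alpha_n}
    \left[ \ln \sum_{j \in J} e^{V_{nj}^{\text{after}}}
         - \ln \sum_{j \in J} e^{V_{nj}^{\text{before}}} \right],
\qquad
\Delta \mathrm{CS} = \sum_n w_n\, \Delta \mathbb{E}[\mathrm{CS}_n],
```

where the V_nj are representative utilities, the arbitrary constants cancel in the difference, and the w_n are expansion weights over the sample. When the marginal utility of income varies with income, as in the LMS and NRM, the simple division by alpha_n is no longer valid, which is the complication the paper addresses.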
By: | Matthew Adler (University of Pennsylvania Law School) |
Abstract: | "Individual risk" currently plays a major role in risk assessment and in the regulatory practices of the health and safety agencies that employ risk assessment, such as EPA, FDA, OSHA, NRC, CPSC, and others. Risk assessors use the term "population risk" to mean the number of deaths caused by some hazard. By contrast, "individual risk" is the incremental probability of death that the hazard imposes on some particular person. Regulatory decision procedures keyed to individual risk are widespread. This is true both for the regulation of toxic chemicals (the heartland of risk assessment), and for other health hazards, such as radiation and pathogens; and regulatory agencies are now beginning to employ individual risk criteria for evaluating safety threats, such as occupational injuries. Sometimes, agencies look to the risk imposed on the maximally exposed individual; in other contexts, the regulatory focus is on the average individual's risk, or perhaps the risk of a person incurring an above-average but nonmaximal exposure. Sometimes, agencies seek to regulate hazards so as to reduce the individual risk level (to the maximally exposed, high-end, or average individual) below 1 in 1 million. Sometimes, instead, a risk level of 1 in 100,000 or 1 in 10,000 or even 1 in 1000 is seen as de minimis. In short, the construct of individual risk plays a variety of decisional roles, but the construct itself is quite pervasive. This Article launches a systematic critique of agency decisionmaking keyed to individual risk. Part I unpacks the construct, and shows how it invokes a frequentist rather than Bayesian conception of probability. Part II surveys agency practice, describing the wide range of regulatory contexts where individual risk levels are wholly or partly determinative of agency choice: these include most of the EPA's major programs for regulating toxins (air pollutants under the Clean Air Act, water pollutants under the Clean Water Act and Safe Drinking Water Act, toxic waste dumps under the Superfund statute, hazardous wastes under RCRA, and pesticides under FIFRA) as well as the FDA's regulation of food safety, OSHA regulation of workplace health and safety risks, NRC licensing of nuclear reactors, and the CPSC's regulation of risky consumer products. In the remainder of the Article, I demonstrate that frequentist individual risk is a problematic basis for regulatory choice, across a range of moral views. Part III focuses on welfare consequentialism: the moral view underlying welfare economics and cost-benefit analysis. I argue that the sort of risk relevant to welfare consequentialism is Bayesian, not frequentist. Part IV explores the subtle, but crucial difference between frequentist and Bayesian risk. Part V moves beyond cost-benefit analysis and examines nonwelfarist moral views: specifically, safety-focused, deontological, contractualist, and democratic views. Here too, I suggest, regulatory reliance on frequentist individual risk should be seen as problematic. Part VI argues that current practices (as described at length in Part II) are doubly misguided: not only do they focus on frequentist rather than Bayesian risk, but they are also insensitive to population size. In short, the Article provides a wide ranging, critical analysis of contemporary risk assessment and risk regulation. The perspective offered here is that of the sympathetic critic. 
Risk assessment itself - the enterprise of quantifying health and safety threats - represents a great leap forward for public rationality, and should not be abandoned. Rather, the current conception of risk assessment needs to be reworked. Risk needs to be seen in Bayesian rather than frequentist terms. And regulatory choice procedures must be driven by population risk or some other measure of the seriousness of health and safety hazards that is sensitive to the size of the exposed population - not the risk that some particular person (whatever her place in the exposure distribution) incurs. |
Keywords: | Risk
URL: | http://d.repec.org/n?u=RePEc:bep:upennl:upenn_wps-1013&r=upt |
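A back-of-the-envelope illustration of the population-size insensitivity criticised in the article above (the numbers are purely illustrative): two hazards with the same 1-in-1-million individual risk pass the same de minimis test, yet differ enormously in expected deaths once the size N of the exposed population is taken into account.

```latex
\text{expected deaths} \approx p \times N:
\qquad 10^{-6} \times 10^{3} = 0.001,
\qquad 10^{-6} \times 10^{8} = 100.
```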
By: | Valeria DeBonis; Luca Spataro |
Abstract: | The issue of inheritance taxation is very similar to that of capital income taxation, once they are analyzed within the optimal taxation framework: should one tax own future consumption and estate (i.e. prospective heirs' consumption) more than own present consumption? As for capital income taxation, starting from the seminal works by Judd (1985) and Chamley (1986), the issue of dynamic optimal capital income taxation has been analyzed by a number of researchers. In particular, Judd (1999) has shown that the zero tax rate result stems from the fact that a tax on capital income is equivalent to a tax on future consumption: thus, capital income should not be taxed if the elasticity of consumption is constant over time. However, while in infinitely lived representative agent (ILRA) models this condition is necessarily satisfied in the long run, along the transition path it holds only if the utility function is assumed to be (weakly) separable in consumption and leisure and homothetic in consumption. Another source of taxation can derive from the presence of externalities, which gives room to nonzero taxation as a Pigouvian correction device. Abandoning the standard ILRA framework in favour of overlapping generations models with life cycle (OLG-LC) has delivered another important case of nonzero capital income taxation. This outcome can be understood by reckoning that in such a setup optimal consumption and labor (or, more precisely, the general equilibrium elasticity of consumption) are generally not constant over life and even at the steady state, due to life-cycle behavior. A similar reasoning can be applied to estate taxation. Note that this corresponds to a differential treatment of savings for own future consumption, on the one hand, and of savings for bequest, on the other hand. Thus, the first aspect to note is that the optimality of a nonzero tax on capital income does not necessarily imply the optimality of a nonzero tax on estates. In fact the latter can be justified on arguments analogous to those presented above: a nonzero estate tax could stem either from the violation of (weak) separability between "expenditure" on estate and (previous period) leisure, or from a difference between the donor's and the donee's general equilibrium elasticities of consumption, according to the framework being analyzed. Another reason for levying a tax on inheritance could be correcting for an externality. Atkinson (1971) and Stiglitz (1987) consider the positive externality deriving from the fact that transfers benefit those who receive them. Holtz-Eakin et al. (1993), Imbens et al. (1999) and Joulfaian et al. (1994) consider instead the negative externality deriving, in the presence of an income tax, from a fall in heirs' labor efforts. In the field of estates and transfers in general, the analysis of the motives for giving is another important aspect. In fact, different motives are associated with different forms of utility functions and, as a consequence, with different policy effects. Altruism, joy of giving, exchange related motives and accidental bequests have been widely studied in the literature (see Davies, 1996; Masson and Pestieau, 1997; Stark, 1999; Kaplow, 2001). In this paper we consider altruism motivated bequests. However, we introduce an element that is not considered in the existing models, i.e. the presence of migration. Moreover, we allow for a disconnection in the economy, in that we assume altruism to be limited to own descendants. 
This element turns out to be a relevant determinant of taxation once it is embedded in the social welfare function, precisely in the sense that the policymaker takes into account the demographic evolution of the population. In fact, the zero capital income and inheritance tax result applies only if the disconnection of the economy is disregarded. We identify instead a number of ways in which the demographic evolution of the population can be accounted for within the social welfare function via appropriate intergenerational weights, leading to different combinations of the inheritance and capital income tax rates, with at least one of them being nonzero. The work proceeds as follows: in section 2 we present the model and derive the equilibrium conditions for the decentralized economy. Next, we characterize the Ramsey problem by adopting the primal approach. Finally, we present the results, focusing on the new ones. Concluding remarks and a technical appendix end the work. |
Keywords: | optimal dynamic taxation, migration, altruism, inheritance taxation, capital income taxation |
JEL: | E62 H21 |
Date: | 2006–05 |
URL: | http://d.repec.org/n?u=RePEc:wpc:wplist:wp11_06&r=upt |
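The equivalence invoked in the abstract above (a tax on capital income is a tax on future consumption) can be seen in a standard sketch that is background to, rather than taken from, the paper: with interest rate r and a constant capital income tax tau, a unit of consumption given up at date t buys 1 + r(1 - tau) rather than 1 + r units one period later, so the implicit tax on consumption k periods ahead is

```latex
\left.\frac{p_{t+k}}{p_t}\right|_{\tau>0}
\Big/
\left.\frac{p_{t+k}}{p_t}\right|_{\tau=0}
  = \left[ \frac{1+r}{1+r(1-\tau)} \right]^{k},
```

a wedge that grows without bound in the horizon k. A zero long-run rate is therefore optimal when the (general equilibrium) elasticity of consumption is constant over time, which is the benchmark against which the paper's migration and limited-altruism results are set.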
By: | Geert Bekaert; Eric Engstrom; Yuhang Xing |
Abstract: | We identify the relative importance of changes in the conditional variance of fundamentals (which we call “uncertainty”) and changes in risk aversion (“risk” for short) in the determination of the term structure, equity prices and risk premiums. Theoretically, we introduce persistent time-varying uncertainty about the fundamentals in an external habit model. The model matches the dynamics of dividend and consumption growth, including their volatility dynamics and many salient asset market phenomena. While the variation in dividend yields and the equity risk premium is primarily driven by risk, uncertainty plays a large role in the term structure and is the driver of counter-cyclical volatility of asset returns. |
JEL: | G12 G15 E44 |
Date: | 2006–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12248&r=upt |
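For context, external habit preferences of the kind referred to in the abstract above are typically written in the Campbell-Cochrane form shown below (a generic sketch; the paper's exact specification, which adds persistent time-varying uncertainty about fundamentals, may differ):

```latex
u(C_t) = \frac{(C_t - X_t)^{1-\gamma} - 1}{1-\gamma},
\qquad
S_t \equiv \frac{C_t - X_t}{C_t},
\qquad
-\,\frac{C_t\, u''(C_t)}{u'(C_t)} = \frac{\gamma}{S_t},
```

so effective risk aversion moves inversely with the surplus consumption ratio S_t (the "risk" channel), while the paper's "uncertainty" channel is the time-varying conditional variance of consumption and dividend growth itself.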