NEP: New Economics Papers on Utility Models and Prospect Theory
By: | Ya'acov Ritov; Wolfgang Härdle |
Abstract: | We consider two semiparametric models for the weight function in a biased sample model. The object of our interest parametrizes the weight function, and it is either Euclidean or non-Euclidean. One of the models discussed in this paper is motivated by the estimation of the mixing distribution of individual utility functions in the DAX market. |
Keywords: | Mixture distribution, Inverse problem, Risk aversion, Exponential mixture, Empirical pricing kernel, DAX, Market utility function. |
JEL: | C10 C14 D01 D81 |
Date: | 2007–05 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2007-024&r=upt |
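For context, the biased-sampling setup the abstract refers to is standard, sketched here with notation that is an assumption rather than taken from the paper: instead of draws from the target density f, one observes draws from the reweighted density

\[
g(y) \;=\; \frac{w(y)\, f(y)}{\int w(u)\, f(u)\, du},
\]

and the object of interest is the parameter, Euclidean or non-Euclidean, that indexes the weight function w.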
By: | Enzo Giacomini; Wolfgang Härdle |
Abstract: | Information about investors' risk preferences is essential for modelling a wide range of quantitative finance applications. Valuable information related to preferences can be extracted from option prices through pricing kernels. In this paper, pricing kernels and their term structure are estimated in a time-varying approach from DAX and ODAX data using a dynamic semiparametric factor model (DSFM). The DSFM smooths in time and space simultaneously, approximating complex dynamic structures by basis functions and a time series of loading coefficients. Contradicting standard risk-aversion assumptions, the estimated pricing kernels indicate risk proclivity at certain levels of return. The analysis of the time series of loading coefficients allows a better understanding of the dynamic behaviour of investors' preferences towards risk. |
Keywords: | Dynamic Semiparametric Estimation, Pricing Kernel, Risk Aversion. |
JEL: | C14 G13 |
Date: | 2007–05 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2007-025&r=upt |
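For readers unfamiliar with the DSFM, the decomposition the abstract describes can be sketched in generic form (the notation is an assumption, not taken from the paper): a dynamic surface m_t(x), here the pricing kernel at date t as a function of the state x, is approximated as

\[
\widehat{m}_t(x) \;=\; \sum_{l=0}^{L} \widehat{Z}_{t,l}\; \widehat{m}_l(x),
\]

where the basis functions \widehat{m}_l are estimated by smoothing in the space variable and \widehat{Z}_{t,l} is the time series of loading coefficients whose dynamics carry the information about preferences.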
By: | Fernando A. Broner; Guido Lorenzoni; Sergio L. Schmukler |
Abstract: | We argue that emerging economies borrow short term due to the high risk premium charged by bondholders on long-term debt. First, we present a model where the debt maturity structure is the outcome of a risk sharing problem between the government and bondholders. By issuing long-term debt, the government lowers the probability of a rollover crisis, transferring risk to bondholders. In equilibrium, this risk is reflected in a higher risk premium and borrowing cost. Therefore, the government faces a trade-off between safer long-term debt and cheaper short-term debt. Second, we construct a new database of sovereign bond prices and issuance. We show that emerging economies pay a positive term premium (a higher risk premium on long-term bonds than on short-term bonds). During crises, the term premium increases, with issuance shifting towards shorter maturities. The evidence suggests that international investors' time-varying risk aversion is crucial to understand the debt structure in emerging economies. |
JEL: | E43 F30 F34 G15 |
Date: | 2007–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:13076&r=upt |
By: | Francisco Penaranda |
Abstract: | This paper surveys asset allocation methods that extend the traditional approach. An important feature of the traditional approach is that it measures the risk-return tradeoff in terms of the mean and variance of final wealth. However, there are other important features, concerning the investor’s wealth, information, and horizon, that are not always made explicit: the investor makes a single portfolio choice based only on the mean and variance of her final financial wealth, and she knows the relevant parameters in that computation. First, the paper describes traditional portfolio choice based on four basic assumptions; the remaining sections relax those assumptions in turn. Each section describes the corresponding equilibrium implications in terms of portfolio advice and asset pricing. |
Keywords: | Mean-Variance Analysis, Background Risks, Estimation Error, Expected Utility, Multi-Period Portfolio Choice. |
JEL: | D81 G11 G12 |
Date: | 2007–03 |
URL: | http://d.repec.org/n?u=RePEc:fmg:fmgdps:dp587&r=upt |
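As a point of reference for the traditional approach that the survey starts from, the one-period mean-variance problem can be written in its textbook form (notation assumed here, not taken from the paper):

\[
\max_{w}\;\; w^{\top}\mu \;-\; \frac{\gamma}{2}\; w^{\top}\Sigma\, w,
\]

where w is the vector of portfolio weights, \mu and \Sigma are the mean vector and covariance matrix of returns, and \gamma indexes risk aversion. The extensions surveyed relax, in turn, the assumptions hidden in this single line: one period, mean-variance preferences, financial wealth only, and known parameters.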
By: | Giuseppe Attanasi, Luca Corazzini, Francesco Passarelli (ISLA, Università Bocconi, Milano) |
Abstract: | Voting is a lottery in which an individual wins if she belongs to the majority and loses if she falls into the minority. The probabilities of winning and losing depend on the voting rules. The risk of losing can be reduced by raising the majority threshold; this, however, also lowers the chance of winning. We compute an individual's preferred majority threshold as a function of her risk attitudes, her voting power, and her priors about how the other individuals will vote. We find that the optimal threshold is higher when an individual is more risk averse, less powerful, and less optimistic about the chance that the others will vote as she does. De facto, raising the threshold is a form of protection against the risk of being tyrannized by an unfavorable majority. |
Keywords: | optimal majority rule, super-majority, risk aversion, weighted votes, voter optimism. |
JEL: | D72 D81 H11 |
Date: | 2007–05 |
URL: | http://d.repec.org/n?u=RePEc:slp:islawp:islawp28&r=upt |
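A minimal expected-utility formalization of this trade-off, with notation assumed here rather than taken from the paper: for a majority threshold q, let p_win(q) be the probability that a measure the individual favors passes and p_lose(q) the probability that a measure passes against her, both decreasing in q. Her preferred threshold is

\[
q^{*} \;=\; \arg\max_{q}\;\; p_{\mathrm{win}}(q)\, u(W) \;+\; p_{\mathrm{lose}}(q)\, u(L) \;+\; \bigl(1 - p_{\mathrm{win}}(q) - p_{\mathrm{lose}}(q)\bigr)\, u(S),
\]

where W, L and S are the payoffs from winning, losing and the status quo. The more concave u is (the more risk averse the voter), the more weight the avoided loss carries, pushing q^{*} upward.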
By: | Kenneth Clements (Business School, The University of Western Australia) |
Abstract: | As an empirical regularity for broad commodity groups, we show that price elasticities of demand are scattered around the value of minus one-half. We also show that this finding is not inconsistent with the utility-maximising theory of the consumer under the conditions of preference independence. When nothing is known about the price-sensitivity of a good, a reasonable first approximation to its price elasticity is thus minus one-half. |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:uwa:wpaper:06-14&r=upt |
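The link between this regularity and preference independence can be sketched with a standard approximation from this literature (notation assumed here): under preference independence, the own-price elasticity of good i is approximately

\[
\eta_{ii} \;\approx\; \phi\, \eta_i,
\]

where \eta_i is the income elasticity of good i and \phi is the income flexibility. With the commonly estimated \phi \approx -1/2 and \eta_i \approx 1 for broad commodity groups, \eta_{ii} \approx -1/2 follows.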
By: | Perroni, Carlo (University of Warwick); Proto, Eugenio (University of Warwick) |
Abstract: | We analyze a two-sector, general-equilibrium model of productive matching and sorting, where risky production is carried out by pairs of individuals, both exerting effort. Risk-neutral (entrepreneurial) individuals can match either with other risk-neutral individuals or – acting as employers/insurers – with risk-averse (non-entrepreneurial) individuals. Although the latter option has the potential to generate more surplus, when effort is unobservable and risk is high, the moral hazard problem in mixed matches may be too severe for mixing to be attractive to both risk-aversion types, leading to a segregated equilibrium in which risk-averse individuals select low-risk, low-yielding activities. An increase in the return of the riskier sector can then trigger a switch from a mixed to a segregated equilibrium, causing aggregate output to fall. |
Keywords: | Entrepreneurship ; Matching ; Natural Resources |
JEL: | C78 J41 O12 O13 |
Date: | 2007 |
URL: | http://d.repec.org/n?u=RePEc:wrk:warwec:796&r=upt |
By: | Santos-Pinto, Luís |
Abstract: | The prediction of asymmetric equilibria with Stackelberg outcomes is clearly the most frequent result in the endogenous timing literature. Several experiments have tried to validate this prediction empirically, but failed to find support for it. By contrast, the experiments find that simultaneous-move outcomes are modal and that behavior in endogenous timing games is quite heterogeneous. This paper generalizes Hamilton and Slutsky’s (1990) endogenous timing games by assuming that players are averse to inequality in payoffs. I explore the theoretical implications of inequity aversion and compare them to the empirical evidence. I find that this explanation is able to organize most of the experimental evidence on endogenous timing games. However, inequity aversion is not able to explain delay in Hamilton and Slutsky’s endogenous timing games. |
Keywords: | Endogenous Timing; Cournot; Stackelberg; Inequity Aversion. |
JEL: | D43 D63 L13 C72 |
Date: | 2006–02–06 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:3142&r=upt |
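For reference, one standard formalization of aversion to payoff inequality is that of Fehr and Schmidt (1999); whether the paper adopts exactly this specification cannot be read off the abstract. In a two-player game with material payoffs \pi_i and \pi_j, player i's utility is

\[
U_i(\pi_i, \pi_j) \;=\; \pi_i \;-\; \alpha_i \max\{\pi_j - \pi_i,\, 0\} \;-\; \beta_i \max\{\pi_i - \pi_j,\, 0\},
\]

with \alpha_i \ge \beta_i and 0 \le \beta_i < 1, so that disadvantageous inequality (weighted by \alpha_i) hurts more than advantageous inequality (weighted by \beta_i).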
By: | Jun Ma |
Abstract: | As first pointed out by Mehra and Prescott (1985), the excess return of equities over the risk-free rate, roughly 6%, is too high to be readily reconciled with a standard intertemporal model. Recently, Bansal and Yaron (2000, 2004) have demonstrated a resolution of the equity premium puzzle when high persistence in the consumption growth process is combined with the Generalized Expected Utility (GEU) specification of Epstein and Zin (1989, 1991). However, Nelson and Startz (2006) and Ma, Nelson, and Startz (2006) have shown that standard estimates of persistence are generally spurious in time series models that are weakly identified. This motivates this paper's re-examination of the evidence for that resolution. Using the identification-robust Anderson-Rubin-type test proposed by Ma and Nelson (2006), I show that weak identification may account for the apparent resolution: valid confidence regions and tests reject high persistence in consumption growth. The possibility of integrated expectations is also examined, using the Median Unbiased Estimator of Stock and Watson (1998), and little supporting evidence is found. Evidently, the equity premium puzzle remains just that. |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2006-21-r&r=upt |
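For reference, the Epstein-Zin (GEU) preferences underlying the Bansal-Yaron resolution take the standard recursive form (notation assumed here):

\[
U_t \;=\; \Bigl[\,(1-\delta)\, C_t^{\,1-1/\psi} \;+\; \delta\,\bigl(\mathrm{E}_t\bigl[U_{t+1}^{\,1-\gamma}\bigr]\bigr)^{\frac{1-1/\psi}{1-\gamma}}\Bigr]^{\frac{1}{1-1/\psi}},
\]

which separates the coefficient of relative risk aversion \gamma from the elasticity of intertemporal substitution \psi. The resolution requires both this separation and highly persistent consumption growth; it is the persistence that the paper's identification-robust tests reject.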
By: | Santos-Pinto, Luís |
Abstract: | This paper extends the Cournot and Bertrand models of strategic interaction between firms by assuming that managers are not only profit maximizers, but also have preferences for reciprocity or are averse to inequity. A reciprocal manager responds to unkind behavior of rivals with unkind actions and to kind behavior of rivals with kind actions. An inequity-averse manager likes to reduce the difference between own profits and the rivals’ profits. The paper finds that if firms with reciprocal managers compete à la Cournot, then they may be able to sustain “collusive” outcomes under a constructive reciprocity equilibrium. By contrast, Stackelberg warfare may emerge under a destructive reciprocity equilibrium. If there is Cournot competition between firms and their managers are averse to advantageous (disadvantageous) inequity, then firms are better (worse) off than if managers only care about maximizing profits. If firms compete à la Bertrand, then only under very restrictive conditions will managers’ preferences for reciprocity or inequity aversion have an impact on equilibrium outcomes. |
Keywords: | Reciprocity; Inequity Aversion; Cournot; Bertrand. |
JEL: | D43 D63 L21 L13 |
Date: | 2006–05–17 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:3143&r=upt |
By: | Patrick Bonnel (LET, Laboratoire d'économie des transports, CNRS UMR 5593, Université Lumière Lyon II, Ecole Nationale des Travaux Publics de l'Etat) |
Abstract: | In spite of the fact that disaggregate modelling has undergone considerable development over the last twenty years, many studies are still based on aggregate modelling. In France, for example, aggregate models are still in much more common use than disaggregate models, even for modal split. The estimation of aggregate models therefore remains an important issue.

In France, most studies can draw on behavioural data from household surveys, which are conducted every ten years in most French conurbations. These surveys provide data on the socioeconomic characteristics of individuals and of the households to which they belong, and on modal choice for all the trips made the day before the survey. The sampling rate is generally 1% of the population, which gives about 50,000 trips for a conurbation of 1 million inhabitants. However, matrices that contain several hundred rows and columns are frequently used. We therefore have to fill several modal matrices of more than 10,000 cells each (in the case of a small matrix with only 100 rows) with fewer than 50,000 trips (to take the above example). Obviously, the matrices will contain a large number of empty cells and the precision of almost all the cells will be very low. It is consequently not possible to estimate the model at this level of zoning.

The solution which is generally chosen is to aggregate zones. This must comply with two contradictory objectives:
- the number of zones must be as small as possible in order to increase the number of surveyed trips that can be used during estimation, and hence the accuracy of the O-D matrices for the trips made on each mode;
- the zones must be as small as possible in order to produce accurate data for the explanatory variables, such as the generalized cost of each transport mode. When the size of a zone increases, it is more difficult to evaluate the access and egress times for public transport, and there are several alternative routes with different travel times between each origin zone and each destination zone. More uncertainty is therefore attached to the generalized cost that represents the quality of service between the two zones. The generally adopted solution is a weighted average of the generalized costs computed from the most disaggregated matrix, but there is no guarantee that this weighted mean is accurate for the origin-destination pair in question.

Even when the best compromise has been struck, some of the matrix cells are generally empty or insufficiently precise. The usual remedy is to keep only the cells for which the data is sufficiently precise, by selecting those in which the number of surveyed trips exceeds a certain threshold. However, this means rejecting part of the data. When a fairly large number of zones is used, the origin-destination pairs selected for estimation mainly involve trips within the centre of the conurbation or radial trips between the centre and the suburbs, and these are also the pairs for which public transport's share is generally highest. The result is to reduce the variance of the data and therefore the quality of the estimation.

To cope with this problem we propose a different aggregation process, which makes it possible to retain all the trips and to use a more disaggregate zoning system. The principle of the method is very simple. We apply it to the model most commonly used for modal split, the logit model. When there are only two modes of transport, the share of each mode is obtained directly from the difference in utility between the two modes through the logit function. We can therefore aggregate the origin-destination pairs for which the differences in utility are very close, in order to obtain enough surveyed trips to ensure sufficient accuracy. This is justified by the fact that the data used to compute the utility of each mode is generally as accurate, or even more accurate, at a more disaggregate level of zoning. The difficulty is that the utility function coefficients have to be estimated at the same time as the logit model, so an iterative process is necessary. Its steps are summarised below (a minimal code sketch follows this entry):
- select initial values of the utility function coefficients for the two transport modes, for example from a previous study or from a calibration performed with the classical method described in Section 1.2;
- compute the utility of each mode with the current coefficients, then the difference in utility for each O-D pair, in the smallest-scale zoning system for which sufficiently accurate explanatory variables are available (hence with very limited zonal aggregation, or none at all);
- rank the O-D pairs by increasing utility difference;
- aggregate the O-D pairs on the basis of closeness of utility difference: take the O-D pair with the smallest utility difference and combine it with the next pair in the ranking, continuing until the number of surveyed trips in the grouping exceeds a threshold chosen according to the accuracy required for the trip flow estimates; then start the second grouping, and so on until every O-D pair has been assigned to a group;
- for each new class of O-D pairs, compute the values of the explanatory variables that enter the utility functions as the weighted average of the values for the O-D pairs in the class;
- re-estimate the utility function coefficients.

This process is repeated until the values of the utility function coefficients converge. We have tested the method on the Lyon conurbation with data from the most recent household travel survey, conducted in 1995/96, and have run a variety of tests in order to identify the best application of the method and to check the stability of the results. The method appears always to produce better results than the more traditional method based on zoning aggregation. The paper presents both the methodology and the results obtained from the different aggregation methods. In particular, we analyse how the choice of zoning system affects the results of the estimation. |
Keywords: | Aggregate modelling ; Modal choice ; Zoning system ; Urban mobility ; Conurbation (Lyon, France) ; Estimation method |
Date: | 2007–04–30 |
URL: | http://d.repec.org/n?u=RePEc:hal:papers:halshs-00092335_v1&r=upt |
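The iterative aggregation procedure summarised in the abstract above can be sketched in code. With two modes, the binary logit gives mode 1 a share of 1/(1 + e^{-(V1 - V2)}) for an O-D pair with utility difference V1 - V2. The sketch below is illustrative only: the data layout, the grouped weighted-least-squares step (a simple stand-in for whatever estimator the paper actually uses), and all names are assumptions.

    import numpy as np

    def aggregate_by_utility_difference(d_util, trips, threshold):
        """Group O-D pairs, ranked by utility difference, until each
        group contains at least `threshold` surveyed trips."""
        order = np.argsort(d_util)
        groups, current, count = [], [], 0
        for idx in order:
            current.append(idx)
            count += trips[idx]
            if count >= threshold:
                groups.append(current)
                current, count = [], 0
        if current:                      # last, possibly undersized, group
            groups.append(current)
        return groups

    def estimate_modal_split(X1, X2, trips1, trips2, beta0, threshold,
                             max_iter=50, tol=1e-6):
        """Iterate: compute utility differences with the current
        coefficients, regroup O-D pairs, re-estimate a binary logit on
        the grouped data, and repeat until the coefficients converge.
        X1, X2 are (n_pairs, k) explanatory variables for each mode;
        trips1, trips2 are surveyed trips per O-D pair on each mode."""
        beta = np.asarray(beta0, dtype=float)
        trips = trips1 + trips2
        for _ in range(max_iter):
            d_util = (X1 - X2) @ beta    # utility difference per O-D pair
            groups = aggregate_by_utility_difference(d_util, trips, threshold)
            Z, share, weight = [], [], []
            for g in groups:
                w = trips[g].astype(float)
                Z.append(np.average((X1 - X2)[g], axis=0, weights=w))
                share.append(trips1[g].sum() / w.sum())
                weight.append(w.sum())
            Z, share, weight = map(np.asarray, (Z, share, weight))
            # Grouped-logit re-estimation by weighted least squares on
            # the empirical log-odds of choosing mode 1.
            guard = 0.5 / weight         # keep shares away from 0 and 1
            p = np.clip(share, guard, 1 - guard)
            y = np.log(p / (1 - p))
            W = np.diag(weight)
            beta_new = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ y)
            if np.max(np.abs(beta_new - beta)) < tol:
                return beta_new
            beta = beta_new
        return beta

Because the grouping itself depends on the current coefficients, a single pass is not enough; the grouping and the estimation have to be iterated together, exactly as the abstract describes.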
By: | Keith R. McLaren; K.K. Gary Wong |
Abstract: | In this paper, we utilize the notion of "effective global regularity" and the intuition stemming from Cooper and McLaren's (1996) General Exponential Form to develop a family of "composite" (product and ratio) direct, inverse and mixed demand systems. Apart from having larger regularity regions, the resulting specifications are also of potentially arbitrary rank, and so can better approximate non-linear Engel curves. We also make extensive use of duality theory and a numerical inversion estimation method to rectify the endogeneity problem encountered in the estimation of mixed demand systems. We illustrate the techniques by estimating different types of demand systems for Japanese quarterly meat and fish consumption. Results generally indicate that the proposed methods are promising and may prove beneficial for modeling systems of direct, inverse and mixed demand functions in the future. |
Keywords: | Effective Global Regularity; Mixed Demands; Conditional Indirect Utility Functions; Numerical Inversion Estimation Method |
JEL: | D11 D12 |
Date: | 2007–05 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2007-2&r=upt |
By: | Annette Kirstein; Roland Kirstein (Faculty of Economics and Management, Otto-von-Guericke University Magdeburg) |
Abstract: | In this paper we experimentally test a theory of boundedly rational behavior in a "lemons" market. We analyze two different market designs, for which perfect rationality implies complete and partial market collapse, respectively. Our empirical observations deviate substantially from the predictions of rational choice theory: even after 20 repetitions, the actual outcome is closer to efficiency than expected. We examine to what extent the theory of iterated reasoning contributes to the explanation of these observations. Perfectly rational behavior requires a player to perform an infinite number of iterative reasoning steps; boundedly rational players, however, carry out only a limited number of such iterations. We determined the iteration type of each player independently of their market behavior, and a significant correlation exists between the iteration types and the observed price offers. |
Keywords: | bounded rationality, market failure, adverse selection, regulatory failure, paternalistic regulation |
JEL: | D8 C7 B4 |
Date: | 2007–04 |
URL: | http://d.repec.org/n?u=RePEc:mag:wpaper:07014&r=upt |
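To see why perfect rationality involves an infinite number of reasoning steps here, consider a textbook lemons market in the spirit of Akerlof; the numbers below are illustrative assumptions, not the parameters of the experiment.

    def iterated_offer(steps, value_premium=1.5, p0=0.75):
        """Iterated best-response price offers in a textbook lemons
        market: quality v is uniform on [0, 1], a seller accepts an
        offer p iff v <= p, so the average quality traded at p is p/2
        and a buyer who values quality at value_premium * v is willing
        to offer value_premium * p / 2 in response."""
        p = p0
        for _ in range(steps):
            p = value_premium * p / 2.0  # one more reasoning step
        return p

    # A level-0 buyer offers 0.75; each reasoning step multiplies the
    # offer by 0.75, so complete collapse (p -> 0) is reached only in
    # the limit of infinitely many iterations:
    for k in (0, 1, 5, 20):
        print(k, round(iterated_offer(k), 4))

A player who stops after finitely many iterations still offers a strictly positive price, which is one way a limited number of reasoning steps can keep outcomes closer to efficiency than the full-rationality prediction.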
By: | Jeanette Brosig (Department of Economics, University of Cologne); Thomas Riechmann (Faculty of Economics and Management, Otto-von-Guericke University Magdeburg); Joachim Weimann (Faculty of Economics and Management, Otto-von-Guericke University Magdeburg) |
Abstract: | This paper puts three of the most prominent specifications of ‘other-regarding’ preferences to the experimental test, namely the theories developed by Charness and Rabin, by Fehr and Schmidt, and by Andreoni and Miller. In a series of experiments based on various dictator and prisoner’s dilemma games, we try to uncover which of these concepts, or the classical selfish approach, is able to explain most of our experimental findings. The experiments are special in two respects: first, we investigate the consistency of individual behavior within and across different classes of games; second, we analyze the stability of individual behavior over time by running the same experiments on the same subjects at several points in time. Our results demonstrate that in the first wave of experiments, all theories of other-regarding preferences explain a high share of individual decisions. Other-regarding preferences seem to wash out over time, however: in the final wave, it is the classical theory of selfish behavior that delivers the best explanation. Stable behavior over time is observed only for subjects who behave strictly selfishly. Most subjects behave consistently with regard to at least one of the theories within the same class of games, but are much less consistent across games. |
Keywords: | individual preferences, consistency, stability, experimental economics |
JEL: | C91 C90 C72 C73 |
Date: | 2007–02 |
URL: | http://d.repec.org/n?u=RePEc:mag:wpaper:07005&r=upt |