nep-upt New Economics Papers
on Utility Models and Prospect Theories
Issue of 2005–11–19
seven papers chosen by
Alexander Harin
Modern University for the Humanities

  1. An Experimental Investigation of Alternatives to Expected Utility Using Pricing Data By Andrea Morone; Ulrich Schmidt
  2. Estimating the Stochastic Discount Factor without a Utility Function By Fabio Araujo; Joao Victor Issler
  3. Asset Pricing and Loss Aversion By Willi Semmler; Lars Grüne
  4. Expectation Formation and Endogenous Fluctuations in Aggregate Demand By Maciej K. Dudek
  5. Empirical Estimation Results of a Collective Household Time Allocation Model By Chris van Klaveren; Bernard M.S. van Praag; Henriëtte Maassen van den Brink
  6. Unexploited Connections Between Intra- and Inter-temporal Allocation By Thomas F. Crossley; Hamish W. Low
  7. Operational risk management and new computational needs in banks By Duc PHAM-HI

  1. By: Andrea Morone; Ulrich Schmidt
    Abstract: Experimental research on decision making under risk has so far always employed choice data to evaluate the empirical performance of expected utility and the alternative non-expected utility theories. The present paper performs a similar analysis that relies on pricing data instead of choice data. Since pricing data lead in many cases to a different ordering of lotteries than choices (e.g. the preference reversal phenomenon), our analysis may yield fundamentally different results than preceding investigations. We elicit three different types of pricing data: willingness-to-pay, willingness-to-accept and certainty equivalents under the Becker-DeGroot-Marschak (BDM) incentive mechanism. One of our main results shows that the comparative performance of the individual theories differs significantly under these three types of pricing data.
    Keywords: expected utility, non-expected utility, experiments, WTP, WTA, BDM
    JEL: C91 D81
    Date: 2005–11
    URL: http://d.repec.org/n?u=RePEc:esi:discus:2005-28&r=upt
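    A minimal sketch of the BDM mechanism named above, in Python. The binary lottery, the uniform distribution of the random offer, and the selling frame are illustrative assumptions made here, not the authors' experimental design:

      import random

      def bdm_round(stated_price, lottery, max_offer):
          """One round of the Becker-DeGroot-Marschak mechanism (selling frame).

          The subject states a price for the lottery, then a random offer is
          drawn. If the offer is at least the stated price, the subject sells
          at the offer; otherwise the subject plays the lottery. Stating one's
          true certainty equivalent is the optimal strategy.
          """
          offer = random.uniform(0.0, max_offer)
          if offer >= stated_price:
              return offer                          # sell at the random offer
          prize, prob = lottery                     # play the lottery instead
          return prize if random.random() < prob else 0.0

      # Illustrative call: a lottery paying 10 with probability 0.5, priced at 4.
      payoff = bdm_round(stated_price=4.0, lottery=(10.0, 0.5), max_offer=10.0)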
  2. By: Fabio Araujo; Joao Victor Issler
    Abstract: In this paper we take seriously the consequences of the Pricing Equation in constructing a novel consistent estimator of the stochastic discount factor (SDF) using panel data. Under general conditions it depends exclusively on appropriate averages of asset returns, and its computation is a direct exercise, as long as one has enough observations for our asymptotic results to apply. We identify the logarithm of the SDF using the fact that it is the serial-correlation "common feature" in every asset return of the economy. Our estimator does not depend on any parametric function representing preferences, or on consumption data. This property allows its use in testing different preference specifications commonly employed in finance and in macroeconomics, as well as in investigating the existence of several puzzles involving intertemporal substitution, such as the equity-premium puzzle. It is also straightforward to construct an estimator of the risk-free rate based on our SDF estimator. When applied to quarterly data on U.S. real returns from 1972:1 through 2002:4, our estimator of the SDF is close to unity most of the time and yields an equivalent average annual real discount rate of 2.46%. When we examined the appropriateness of different functional forms to represent preferences, we concluded that standard preference representations used in the literature on intertemporal substitution cannot be rejected by the data. Moreover, estimates of the relative risk-aversion coefficient are close to what can be expected a priori -- between 1 and 2, statistically significant, and not different from unity in testing. A direct test of the equity-premium puzzle using our SDF estimator cannot reject the null that the discounted equity premium in the U.S. has mean zero. However, when consumption-based SDF estimates are employed in the same test, the null is rejected. Further empirical investigation shows that our SDF estimator has a large negative correlation with the equity premium, whereas those of consumption-based estimates are usually too small in absolute value, generating the equity-premium puzzle.
    Keywords: common features, stochastic discount factor
    JEL: C32 G12
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:202&r=upt
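    The Pricing Equation referred to above is the standard moment condition, for every asset $i$,
    \[
    \mathbb{E}_t\!\left[ M_{t+1}\, R_{i,t+1} \right] = 1 ,
    \]
    where $M$ is the SDF and $R_i$ a gross return. Purely as an illustration of why averages of returns can identify $M$ (this is not the authors' common-feature estimator), replacing the expectation with a cross-sectional average over $N$ assets suggests
    \[
    \widehat{M}_{t+1} \;=\; \left( \frac{1}{N} \sum_{i=1}^{N} R_{i,t+1} \right)^{-1} .
    \]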
  3. By: Willi Semmler; Lars Grüne (Economics, New School University)
    Abstract: Using standard preferences for asset pricing has not been very successful in matching asset price characteristics, such as the risk-free interest rate, equity premium and the Sharpe ratio, to time series data. Behavioral finance has recently proposed more realistic preferences, such as preferences with loss aversion, for modeling asset pricing. Research has now started to explore the implications of behaviorally founded preferences for asset price characteristics. Yet the solution of those models is intricate and depends on the solution techniques employed. In this paper a stochastic version of a dynamic programming method with an adaptive grid scheme is applied to compute the above-mentioned asset price characteristics of a model with loss aversion in preferences. Since, as shown in Grüne and Semmler (2004), our method produces only negligible errors, it is suitable as a solution technique for such models with a more intricate decision structure.
    Keywords: asset pricing, preferences with loss aversion, behavioral finance, equity premium, dynamic programming
    JEL: G1 G12
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:199&r=upt
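    A minimal sketch of the kind of computation involved, in Python: value iteration on a fixed wealth grid for a toy portfolio problem with piecewise-linear loss-averse utility over gains and losses. The grid, the two-state return distribution, and all parameter values are assumptions made here for illustration; the paper's method additionally adapts the grid to local error estimates.

      import numpy as np

      LAMBDA = 2.25                        # loss-aversion coefficient (assumed)
      def u(gain):
          # piecewise-linear utility: losses weighted LAMBDA times gains
          return np.where(gain >= 0.0, gain, LAMBDA * gain)

      beta = 0.95                                  # discount factor (assumed)
      wealth = np.linspace(0.1, 10.0, 200)         # fixed wealth grid
      returns = np.array([1.08, 0.96])             # two-state risky gross return
      probs = np.array([0.5, 0.5])

      V = np.zeros_like(wealth)
      for _ in range(1000):                        # value iteration
          V_new = np.full_like(wealth, -np.inf)
          for frac in np.linspace(0.0, 1.0, 21):   # share held in the risky asset
              w_next = wealth[:, None] * (1.0 - frac + frac * returns[None, :])
              gain = w_next - wealth[:, None]      # gain/loss vs. current wealth
              EV = (u(gain) + beta * np.interp(w_next, wealth, V)) @ probs
              V_new = np.maximum(V_new, EV)        # best allocation at each node
          if np.max(np.abs(V_new - V)) < 1e-8:
              break
          V = V_new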
  4. By: Maciej K. Dudek
    Abstract: The paper recognizes that expectations and the process of their formation are subject to standard decision making and are determined as part of equilibrium. Accordingly, the paper presents a basic framework in which the form of expectation formation is a choice variable. At any point in time rational economic agents decide, on the basis of the level of utility, what expectation formation technology to use and, as a consequence, what expectations to hold. As economic decisions are conditioned on expectations, holding proper, i.e. rational, expectations eliminates the possibility of ex ante inefficiencies. The choice of expectation formation technology is not trivial, as the paper assumes that information gathering and processing are costly. Consequently, economic agents must make informed decisions with regard to the quality of the expectation formation technologies they wish to use. The paper shows that agents' optimization over expectations not only adds realism, but also can carry non-trivial implications for the behavior of macroeconomic variables. Specifically, the paper illustrates that endogenous expectation revisions can be a source of permanent oscillations in aggregate demand and can prevent an economy from settling into a steady state. In addition, the paper quantifies intangible notions such as overheating, overborrowing, and the output gap. Finally, the paper shows that active policy measures can limit the inefficiencies resulting from output fluctuations.
    Keywords: Business Cycles, Expectation Formation, Costly Information Acquisition.
    JEL: D84 E32
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:263&r=upt
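    The basic trade-off, choosing the expectation formation technology whose accuracy justifies its information cost, can be sketched in a few lines of Python. The menu of technologies, the quadratic loss, and all numbers below are illustrative assumptions made here, not the paper's specification:

      # Each technology: (name, forecast-error variance, information cost).
      technologies = [
          ("naive expectations",    1.00, 0.00),
          ("adaptive expectations", 0.40, 0.05),
          ("rational expectations", 0.10, 0.25),
      ]

      def net_burden(error_variance, cost, loss_weight=0.5):
          # expected quadratic forecast loss (assumed) plus the information cost
          return loss_weight * error_variance + cost

      best = min(technologies, key=lambda t: net_burden(t[1], t[2]))
      # With these numbers the agent picks adaptive expectations: cheaper than
      # full rationality, more accurate than naive forecasting. A change in
      # costs can trigger an endogenous revision of how expectations are formed.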
  5. By: Chris van Klaveren (Faculty of Economics and Econometrics, Universiteit van Amsterdam); Bernard M.S. van Praag (Faculty of Economics and Econometrics, Universiteit van Amsterdam); Henriëtte Maassen van den Brink (Faculty of Economics and Econometrics, Universiteit van Amsterdam)
    Abstract: An empirical model is developed in which the collective household model is used as the basic framework to describe the time allocation problem. The collective model views household behavior as the outcome of maximizing a household utility function that is a weighted sum of the utility functions of the male and the female. In this paper we estimate the two individual utility functions and the household power weight distribution, which is parameterized per household. The model is estimated on a sub-sample of the British Household Panel Survey consisting of two-earner households. The empirical results suggest that: (1) given that the weight distribution is wage-dependent, the preferences of males and females differ, which rejects the unitary model; (2) the male and female utility functions are weighted differently in the household utility function; (3) the power differences are explained by differences in the ratio of the partners' hourly wages, the presence of young children, and non-labor household income; (4) both males and females have a backward-bending labor supply curve.
    Keywords: Collective household models; Labor supply; Time allocation
    JEL: D12 D13 J22
    Date: 2005–10–21
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20050096&r=upt
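    In its simplest form, the household programme estimated above can be written as follows; this is a generic statement of the collective model, with notation chosen here rather than taken from the paper:
    \[
    \max_{c,\, l_m,\, l_f} \;\; \mu(z)\, U_m(c, l_m) + \bigl(1 - \mu(z)\bigr)\, U_f(c, l_f)
    \quad \text{s.t.} \quad c = w_m (T - l_m) + w_f (T - l_f) + y ,
    \]
    where $\mu(z)$ is the household power weight, driven by distribution factors $z$ such as the ratio of hourly wages $w_m / w_f$, the presence of young children, and non-labor income $y$; $l_m$ and $l_f$ denote leisure (or home) time and $T$ is the time endowment.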
  6. By: Thomas F. Crossley; Hamish W. Low
    Abstract: This paper shows that a power utility specification of preferences over total expenditure (i.e. CRRA preferences) implies that intratemporal demands are in the PIGL/PIGLOG class. This class generates (at most) rank-two demand systems, and we can test the validity of power utility on cross-section data. Further, if we maintain the assumption of power utility and within-period preferences are not homothetic, then the intertemporal preference parameter is identified by the curvature of Engel curves. Under the power utility assumption, neither Euler equation estimation nor structural consumption function estimation is necessary to identify the power parameter. In our empirical work, we use demand data to estimate the power utility parameter and to test the assumption of the power utility representation. We find estimates of the power parameter larger than those obtained from Euler equation estimation, but we reject the power specification of within-period utility.
    Keywords: elasticity of intertemporal substitution, Euler equation estimation, demand systems
    JEL: D91 E21 D12
    Date: 2005–09
    URL: http://d.repec.org/n?u=RePEc:mcm:sedapp:131&r=upt
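    The link the paper exploits can be stated compactly; the following is a textbook statement of the two demand forms, not the paper's notation. Power (CRRA) utility over total expenditure $x$ restricts budget shares to the PIGL/PIGLOG class,
    \[
    w_i(x, p) \;=\; \alpha_i(p) + \beta_i(p)\, \frac{x^{\kappa} - 1}{\kappa} ,
    \qquad \kappa \to 0 :\;\; w_i(x, p) = \alpha_i(p) + \beta_i(p) \log x ,
    \]
    which has rank (at most) two. Because the exponent $\kappa$ is tied to the power utility parameter, the curvature of Engel curves in $x$ identifies the intertemporal parameter whenever $\beta_i(p) \neq 0$, i.e. whenever within-period preferences are not homothetic.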
  7. By: Duc PHAM-HI (Systemes Informations & Finance, Ecole Centrale Electronique)
    Abstract: Basel II banking regulation introduces new needs for computational schemes. They involve both optimal stochastic control and large-scale simulation of decision processes for preventing low-frequency, high-loss-impact events. This paper first states the problem and presents its parameters. It then spells out the equations that represent rational risk management behavior and link the variables together: Lévy processes are used to model operational risk losses where calibration against historical loss databases is possible; where it is not, qualitative variables such as the quality of the business environment and of internal controls can provide both cost-side and profit-side impacts. Other control variables include the business growth rate and the efficiency of risk mitigation. The economic value of a policy is maximized by solving the resulting Hamilton-Jacobi-Bellman (HJB) type equation. Computational complexity arises from embedded interactions between three levels:
    * programming a globally optimal dynamic expenditure budget in the Basel II context;
    * arbitraging between the cost of risk-reduction policies (as measured by organizational qualitative scorecards and insurance buying) and the impact of the incurred losses themselves, which implies modeling the efficiency of the process through which forward-looking measures of threat minimization can actually reduce stochastic losses;
    * and optimally allocating capital according to profitability across subsidiaries and business lines.
    The paper next reviews the different types of approaches that can be envisaged for deriving a sound budgetary policy for operational risk management from this HJB equation. It is argued that while this complex, high-dimensional problem can be solved with the usual simplifications (a Galerkin approach, imposing Merton-form solutions, a viscosity approach, ad hoc utility functions that yield closed-form solutions, etc.), the main interest of the model lies in exploring the scenarios in an adaptive learning framework (MDP, partially observed MDP, Q-learning, neuro-dynamic programming, greedy algorithms, etc.). This makes more sense from a management point of view, and solutions are more easily communicated to, and accepted by, operational-level staff in banks through the explicit scenarios that can be derived. This kind of approach combines different computational techniques such as POMDPs, stochastic control theory, and learning algorithms under uncertainty and incomplete information. The paper concludes by presenting the benefits of such a consistent computational approach to managing budgets, as opposed to a policy of operational risk management made up of disconnected expenditures. Such consistency satisfies the qualifying criteria for banks applying for the Advanced Measurement Approach (AMA), which allows large savings in regulatory capital charges under the Basel II Accord.
    Keywords: Operational risk management, HJB equation, Lévy processes, budget optimization, capital allocation
    JEL: G21
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:355&r=upt
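    The HJB-type equation mentioned above has, generically, the stationary form below; the notation is assumed here (one state variable, a finite-activity loss process) and is not the paper's exact specification. With $V$ the economic value, $x$ the bank's free capital, $u$ the risk-mitigation expenditure, and $\rho$ a discount rate,
    \[
    \rho\, V(x) \;=\; \max_{u} \Bigl\{ f(x, u) + b(x, u)\, V'(x) + \tfrac{1}{2} \sigma^2(x)\, V''(x) + \int_0^{\infty} \bigl[ V(x - \ell) - V(x) \bigr]\, \nu_u(d\ell) \Bigr\} ,
    \]
    where $f$ is running profit net of mitigation and insurance costs, and $\nu_u$ is the Lévy measure of operational losses $\ell$, shifted toward smaller or rarer losses as $u$ increases.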

This nep-upt issue is ©2005 by Alexander Harin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.