
NEP: New Economics Papers on Econometrics 
By:  Carlos Martins-Filho (Department of Economics, University of Colorado); Feng Yao (Department of Economics, West Virginia University) 
Abstract:  We consider the estimation of a nonparametric stochastic frontier model with a composite error density which is known up to a finite parameter vector. Our primary interest is in the estimation of the parameter vector, as it provides the basis for estimation of firm-specific (in)efficiency. Our frontier model is similar to that of Fan et al. (1996), but here we extend their work in that: a) we establish the asymptotic properties of their estimation procedure, and b) we propose and establish the asymptotic properties of an alternative estimator based on the maximization of a conditional profile likelihood function. The estimator proposed in Fan et al. (1996) is asymptotically normally distributed but has a bias which does not vanish as the sample size n → ∞. In contrast, our proposed estimator is asymptotically normally distributed and correctly centered at the true value of the parameter vector. In addition, our estimator is shown to be efficient in a broad class of semiparametric estimators. Our estimation procedure provides a fast converging alternative to the recently proposed estimator in Kumbhakar et al. (2007). A Monte Carlo study is performed to shed light on the finite sample properties of these competing estimators. 
Keywords:  stochastic frontier models; nonparametric frontiers; profile likelihood estimation. 
JEL:  C14 C22 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:wvu:wpaper:1009&r=ecm 
By:  Feng Yao (Department of Economics, West Virginia University); Junsen Zhang (Department of Economics, The Chinese University of Hong Kong) 
Abstract:  We consider the estimation of a semiparametric regression model where the data are independently and identically distributed. Our primary interest is in the estimation of the parameter vector, where the associated regressors are correlated with the errors and contain both continuous and discrete variables. We propose three estimators by adapting the frameworks of Robinson (1988) and Li and Stengos (1996) and establish their asymptotic properties. They are asymptotically normally distributed and correctly centered at the true value of the parameter vector. Among a class of semiparametric IV estimators with conditional moment restrictions, the first two are efficient under conditional homoskedasticity and the last one is efficient under heteroskedasticity. They allow the reduced form to be nonparametric, are asymptotically equivalent to semiparametric IV estimators that optimally select the instrument, and reach the semiparametric efficiency bounds in Chamberlain (1992). A Monte Carlo study is performed to shed light on the finite sample properties of these competing estimators. Their applicability is illustrated with an empirical data set. 
Keywords:  Instrumental variables, semiparametric regression, efficient estimation. 
JEL:  C14 C21 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:wvu:wpaper:1011&r=ecm 
By:  Johannes, Jan; Van Bellegem, Sébastien; Vanhems, Anne 
Abstract:  We consider the nonparametric regression model with an additive error that is correlated with the explanatory variables. We suppose the existence of instrumental variables that are used in this model for the identification and the estimation of the regression function. Nonparametric estimation by instrumental variables is an ill-posed linear inverse problem with an unknown but estimable operator. We provide a new estimator of the regression function using an iterative regularization method (the Landweber-Fridman method). The optimal number of iterations and the convergence of the mean square error of the resulting estimator are derived under both mild and severe degrees of ill-posedness. A Monte Carlo exercise shows the impact of some parameters on the estimator and confirms the reasonable finite sample performance of the new estimator. 
Keywords:  Nonparametric estimation; Instrumental variable; Ill-posed inverse problem 
JEL:  C14 C30 
Date:  2010–07–16 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:23124&r=ecm 
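The Landweber-Fridman iteration in the abstract above can be sketched numerically for a generic discretized linear ill-posed problem. This is a minimal illustration, not the paper's estimator: the operator (a discretized integration map), the signal, the step size, and the iteration count are all illustrative choices. The key point the sketch shows is that the iteration count plays the role of the regularization parameter.

```python
import numpy as np

# Landweber-Fridman iteration for the ill-posed linear problem y = K x + noise:
#   x_{k+1} = x_k + tau * K' (y - K x_k),   with tau < 2 / ||K||^2.
# Stopping early regularizes; iterating too long amplifies the noise.

rng = np.random.default_rng(0)
n = 50
K = np.tril(np.ones((n, n))) / n           # smoothing (integration-like) operator
x_true = np.sin(np.linspace(0, np.pi, n))  # signal to recover
y = K @ x_true + 0.001 * rng.standard_normal(n)

tau = 1.0 / np.linalg.norm(K, 2) ** 2      # safe step size (spectral norm)
x = np.zeros(n)
for _ in range(2000):                      # iteration count = regularization choice
    x = x + tau * K.T @ (y - K @ x)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Direct inversion of K would blow the noise up through the small singular values; the truncated iteration filters them instead.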
By:  Concepción Ausín; Pedro Galeano; Pulak Ghosh 
Abstract:  Financial time series analysis deals with the understanding of data collected on financial markets. Several parametric distribution models have been entertained for describing, estimating and predicting the dynamics of financial time series. Alternatively, this article considers a Bayesian semiparametric approach. In particular, the usual parametric distributional assumptions of the GARCH-type models are relaxed by entertaining the class of location-scale mixtures of Gaussian distributions with a Dirichlet process prior on the mixing distribution, leading to a Dirichlet process mixture model. The proposed specification allows for greater flexibility in capturing both the skewness and kurtosis frequently observed in financial returns. The Bayesian model provides statistical inference with finite sample validity. Furthermore, it is also possible to obtain predictive distributions for the Value at Risk (VaR), which has become the most widely used measure of market risk for practitioners. Through a simulation study, we demonstrate the performance of the proposed semiparametric method and compare results with the ones from a normal distribution assumption. We also demonstrate the superiority of our proposed semiparametric method using real data from the Bombay Stock Exchange Index (BSE30) and the Hang Seng Index (HSI). 
Keywords:  Bayesian estimation, Deviance information criterion, Dirichlet process mixture, Financial time series, Location-scale Gaussian mixture, Markov chain Monte Carlo 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws103822&r=ecm 
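The location-scale Gaussian mixture with a Dirichlet process prior described above can be sketched with a truncated stick-breaking construction. The base-measure choices, the concentration parameter, and the truncation level below are illustrative assumptions, not the authors' specification; the sketch only shows how such a mixture generates return innovations with flexible skewness and kurtosis.

```python
import numpy as np

# Truncated stick-breaking draw from a Dirichlet process mixture prior:
# weights w_k = v_k * prod_{j<k}(1 - v_j), with v_k ~ Beta(1, alpha);
# each atom is a (location, scale) pair, giving a location-scale
# mixture of Gaussians for the innovations.

rng = np.random.default_rng(1)
alpha, K = 2.0, 30                      # concentration and truncation level (assumed)

v = rng.beta(1.0, alpha, size=K)
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
w = w / w.sum()                         # renormalize after truncation

mu = rng.normal(0.0, 1.0, size=K)       # atom locations (illustrative base measure)
sigma = 1.0 / np.sqrt(rng.gamma(2.0, 1.0, size=K))  # atom scales

# Draw n innovations from the resulting location-scale mixture
n = 5000
comp = rng.choice(K, size=n, p=w)
eps = rng.normal(mu[comp], sigma[comp])
```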
By:  Florens, Jean-Pierre; Schwarz, Maik; Van Bellegem, Sébastien 
Abstract:  A new nonparametric estimator of a production frontier is defined and studied when the data set of production units is contaminated by measurement error. The measurement error is assumed to be an additive normal random variable on the input variable, but its variance is unknown. The estimator is a modification of the m-frontier, which necessitates the computation of a consistent estimator of the conditional survival function of the input variable given the output variable. In this paper, the identification and the consistency of a new estimator of the survival function are proved in the presence of additive noise with unknown variance. The performance of the estimator is also studied through simulated data. 
Date:  2010–05 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:22897&r=ecm 
By:  Florens, Jean-Pierre; Simoni, Anna 
Abstract:  We propose a Quasi-Bayesian nonparametric approach to estimating the structural relationship φ among endogenous variables when instruments are available. We show that the posterior distribution of φ is inconsistent in the frequentist sense. We interpret this fact as the ill-posedness of the Bayesian inverse problem defined by the relation that characterizes the structural function φ. To solve this problem, we construct a regularized posterior distribution, based on a Tikhonov regularization of the inverse of the marginal variance of the sample, which is justified by a penalized projection argument. This regularized posterior distribution is consistent in the frequentist sense, and its mean can be interpreted as the mean of the exact posterior distribution resulting from a Gaussian prior distribution with a shrinking covariance operator. 
JEL:  C11 C14 C30 
Date:  2010–03 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:22895&r=ecm 
By:  David I. Harvey; Stephen J. Leybourne; A. M. Robert Taylor 
Abstract:  It is well known that it is vital to account for trend breaks when testing for a unit root. In practice, uncertainty exists over whether or not a trend break is present and, if it is, where it is located. Harris et al. (2009) and Carrion-i-Silvestre et al. (2009) propose procedures which account for both of these forms of uncertainty. Each uses what amounts to a pre-test for a trend break, accounting for a trend break (with the associated break fraction estimated from the data) in the unit root procedure only where the pre-test signals a break. Assuming the break magnitude is fixed (independent of sample size), these authors show that their methods achieve near asymptotically efficient unit root inference in both trend break and no trend break environments. These asymptotic results are, however, somewhat at odds with the finite sample simulations reported in both papers. These show the presence of pronounced "valleys" in the finite sample power functions (when mapped as functions of the break magnitude) of the tests, such that power is initially high for very small breaks, then decreases as the break magnitude increases, before increasing again. Here we show that treating the break magnitude as local to zero (in a Pitman drift sense) allows the asymptotic analysis to very closely approximate this finite sample effect, thereby providing useful analytical insights into the observed phenomenon. In response to this problem we propose practical solutions, based either on the use of a with-break unit root test with adaptive critical values, or on a union of rejections principle taken across with-break and without-break unit root tests. The former is shown to eliminate power valleys but at the expense of power when no break is present, while the latter considerably mitigates the valleys while not losing all the power gains available when no break exists. 
Keywords:  Unit root test; local trend break; asymptotic local power; union of rejections; adaptive critical values 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:not:notgts:10/05&r=ecm 
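The union of rejections principle mentioned above can be sketched in a few lines. The scaling constant applied to the critical values below is purely illustrative (the paper tabulates the appropriate constants); the sketch only shows the logic: reject the unit root null if either test rejects, with critical values scaled so the union keeps its nominal size.

```python
# Union-of-rejections sketch for left-tailed unit root tests: reject when
# either statistic falls below its scaled critical value. The scale factor
# 1.10 is an illustrative placeholder, not the paper's tabulated constant.

def union_of_rejections(t_nobreak, t_break, cv_nobreak, cv_break, scale=1.10):
    """Critical values are negative; scaling by a factor > 1 makes each
    individual test more conservative, offsetting the union's size."""
    return (t_nobreak < scale * cv_nobreak) or (t_break < scale * cv_break)
```

For example, with (hypothetical) 5% critical values of -3.41 and -3.80, a no-break statistic of -4.0 rejects, while statistics of -2.0 and -3.0 do not.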
By:  Peter Fuleky (University of Hawaii); Eric Zivot (University of Washington) 
Abstract:  The Efficient Method of Moments (EMM) estimator popularized by Gallant and Tauchen (1996) is an indirect inference estimator based on the simulated auxiliary score evaluated at the sample estimate of the auxiliary parameters. We study an alternative estimator that uses the sample auxiliary score evaluated at the simulated binding function which maps the structural parameters of interest to the auxiliary parameters. We show that the alternative estimator has the same asymptotic properties as the EMM estimator but in finite samples behaves more like the distance-based indirect inference estimator of Gourieroux, Monfort and Renault (1993). 
Date:  2010–06 
URL:  http://d.repec.org/n?u=RePEc:udb:wpaper:uwec201008&r=ecm 
By:  Dong, Yingying 
Abstract:  Regression Discontinuity (RD) models identify local treatment effects by associating a discrete change in the mean outcome with a corresponding discrete change in the probability of treatment at a known threshold of a running variable. This paper shows that it is possible to identify RD model treatment effects without a discontinuity. The intuition is that identification can come from a slope change (a kink) instead of a discrete level change (a jump) in the treatment probability. Formally this can be shown using L'Hôpital's rule. The identification results are interpreted intuitively using instrumental variable models. Estimators are proposed that can be applied in the presence or absence of a discontinuity, by exploiting either a jump or a kink. 
Keywords:  Regression Discontinuity; Fuzzy design; Average treatment effect; Identification; Jump; Kink; Threshold 
JEL:  C21 C25 
Date:  2010–08–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:25461&r=ecm 
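The kink-based identification idea above can be illustrated on simulated data. This is a sketch under assumed functional forms, not the paper's estimator: the treatment effect is recovered as the change in the slope of the outcome at the cutoff divided by the change in the slope of the treatment probability (the IV-type ratio delivered by L'Hôpital's rule), using simple local linear fits on each side.

```python
import numpy as np

# Kink (slope-change) estimator: effect = (slope change in E[Y|X]) /
# (slope change in P(T=1|X)) at the cutoff. Local linear fit each side.

def slope_change(x, y, cutoff, bandwidth):
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    b_left = np.polyfit(x[left] - cutoff, y[left], 1)[0]
    b_right = np.polyfit(x[right] - cutoff, y[right], 1)[0]
    return b_right - b_left

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 50000)
p = 0.2 + 0.5 * np.maximum(x, 0)            # kink, but NO jump, in P(T=1|X)
t = (rng.uniform(size=x.size) < p).astype(float)
effect_true = 2.0                            # assumed true treatment effect
y = 1.0 + effect_true * t + 0.3 * x + 0.1 * rng.standard_normal(x.size)

kink_y = slope_change(x, y, 0.0, 0.5)        # slope change of the outcome
kink_p = slope_change(x, t, 0.0, 0.5)        # slope change of treatment prob.
effect_hat = kink_y / kink_p                 # ratio identifies the effect
```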
By:  Evarist Stoja; Arnold Polanski 
Abstract:  We propose two simple evaluation methods for time-varying density forecasts of continuous higher dimensional random variables. Both methods are based on the probability integral transformation for unidimensional forecasts. The first method tests multinormal densities and relies on the rotation of the coordinate system. The advantage of the second method is not only its applicability to any continuous distribution but also the evaluation of the forecast accuracy in specific regions of its domain as defined by the user’s interest. We show that the latter property is particularly useful for evaluating a multidimensional generalization of the Value at Risk. In simulations and in an empirical study, we examine the performance of both tests. 
Keywords:  Multivariate Density Forecast Evaluation, Probability Integral Transformation, Multidimensional Value at Risk, Monte Carlo Simulations 
JEL:  C52 C53 
Date:  2009–12 
URL:  http://d.repec.org/n?u=RePEc:bri:uobdis:09/617&r=ecm 
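The probability integral transformation underlying both methods above can be sketched for the unidimensional case. This is a generic illustration (a Kolmogorov-Smirnov-type distance computed from scratch), not the authors' multivariate tests: if the forecast CDF is correct, the transformed values are i.i.d. Uniform(0,1), and a misspecified forecast shows up as a larger distance from the uniform CDF.

```python
import numpy as np
from math import erf

# PIT check: u_t = F(y_t) should be Uniform(0,1) under a correct forecast F.

def norm_cdf(z, sd=1.0):
    # standard/scaled normal CDF via the error function
    return np.array([0.5 * (1.0 + erf(v / (sd * np.sqrt(2.0)))) for v in z])

def ks_uniform(u):
    # Kolmogorov-Smirnov-type distance of u from Uniform(0,1)
    u_sorted = np.sort(u)
    grid = np.arange(1, u.size + 1) / u.size
    return np.max(np.abs(u_sorted - grid))

rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.0, size=2000)          # realized values

ks_good = ks_uniform(norm_cdf(y))            # correct N(0,1) forecast
ks_bad = ks_uniform(norm_cdf(y, sd=2.0))     # overdispersed N(0,4) forecast
```

The overdispersed forecast piles the PIT values near 0.5, so its distance from uniformity is visibly larger.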
By:  Carlos Martins-Filho (Department of Economics, University of Colorado); Feng Yao (Department of Economics, West Virginia University) 
Abstract:  The sum of two independent random variables with normal and half-normal densities has a skew-normal density (Azzalini, 1985). In this note we show that this skew-normal density satisfies all assumptions required in establishing the asymptotic properties of the estimators discussed in Martins-Filho and Yao (2010). 
Keywords:  skew-normal density; semiparametric stochastic frontiers. 
JEL:  C14 C22 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:wvu:wpaper:1010&r=ecm 
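The Azzalini (1985) result cited above is easy to check by Monte Carlo. The sketch below simulates the sum of a normal and an independent half-normal variable and compares its sample mean to the skew-normal moment formula; the particular scale parameters are illustrative.

```python
import numpy as np
from math import sqrt, pi

# If v ~ N(0, sigma_v^2) and u is half-normal with scale sigma_u, then
# e = v + u is skew-normal with omega^2 = sigma_v^2 + sigma_u^2 and
# lambda = sigma_u / sigma_v. Its mean is omega * delta * sqrt(2/pi)
# with delta = lambda / sqrt(1 + lambda^2) = sigma_u / omega, i.e.
# simply sigma_u * sqrt(2/pi).

rng = np.random.default_rng(4)
sigma_v, sigma_u = 1.0, 0.8                 # illustrative scales

v = rng.normal(0.0, sigma_v, size=200_000)
u = np.abs(rng.normal(0.0, sigma_u, size=200_000))  # half-normal draw
e = v + u

mean_theory = sigma_u * sqrt(2.0 / pi)      # skew-normal mean
mean_mc = e.mean()
skew_mc = np.mean((e - mean_mc) ** 3)       # should be positive (right skew)
```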
By:  Ewa M. Syczewska (Warsaw School of Economics) 
Abstract:  The aim of this paper is to study properties of the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test, introduced in Kwiatkowski et al. (1992). The null hypothesis of the test corresponds to stationarity of a series, the alternative to its nonstationarity. The distribution of the test statistic is nonstandard and, as shown in the original paper, converges asymptotically to a functional of Brownian bridges. The authors produced tables of critical values based on the asymptotic approximation. Here we present results of a simulation experiment aimed at studying the small-sample properties of the test and its empirical power. 
Keywords:  KPSS test, stationarity, integration, empirical power of KPSS test 
JEL:  C12 C16 
Date:  2010–09–23 
URL:  http://d.repec.org/n?u=RePEc:wse:wpaper:45&r=ecm 
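The KPSS statistic studied above can be computed in a few lines, which is essentially what a simulation experiment of this kind repeats many times. The sketch uses a lag-0 long-run variance estimate, which suffices for i.i.d. errors; a kernel-based estimate would be needed for serially correlated data.

```python
import numpy as np

# KPSS level-stationarity statistic:
#   eta = sum_t S_t^2 / (n^2 * s^2),
# where S_t are partial sums of the demeaned series and s^2 estimates
# the long-run variance (here: sample variance, i.e. zero lags).

def kpss_stat(y):
    e = y - y.mean()          # residuals from a level-only regression
    s_t = np.cumsum(e)        # partial sums
    s2 = (e ** 2).mean()      # long-run variance estimate, lag 0
    n = y.size
    return (s_t ** 2).sum() / (n ** 2 * s2)

rng = np.random.default_rng(5)
stat_stationary = kpss_stat(rng.standard_normal(500))        # null holds
stat_random_walk = kpss_stat(np.cumsum(rng.standard_normal(500)))  # alternative
```

Under the null the statistic is small (the asymptotic 5% critical value is about 0.463), while under a random walk it diverges.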
By:  Florens, JeanPierre; Simoni, Anna 
Abstract:  We consider statistical linear inverse problems in Hilbert spaces of the type Ŷ = Kx + U, where we want to estimate the function x from indirect noisy functional observations Ŷ. In several applications the operator K has an inverse that is not continuous on the whole space of reference; this phenomenon is known as ill-posedness of the inverse problem. We use a Bayesian approach and a conjugate-Gaussian model. For a very general specification of the probability model, the posterior distribution of x is known to be inconsistent in a frequentist sense. Our first contribution consists in constructing a class of Gaussian prior distributions on x that shrink with the measurement error U; we show that, under mild conditions, the corresponding posterior distribution is consistent in a frequentist sense and converges at the optimal rate of contraction. Then, a class of posterior mean estimators for x is given. We propose an empirical Bayes procedure for selecting an estimator in this class that mimics the posterior mean with the smallest risk on the true x. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:22884&r=ecm 
By:  Reed, W. Robert; Webb, Rachel S. 
Abstract:  Non-spherical errors, namely heteroscedasticity, serial correlation and cross-sectional correlation, are commonly present in panel data sets. These can cause significant problems for econometric analyses. The FGLS(Parks) estimator has been demonstrated to produce considerable efficiency gains in these settings. However, it suffers from underestimation of coefficient standard errors, oftentimes severe. Potentially, jackknifing the FGLS(Parks) estimator could allow one to maintain the efficiency advantages of FGLS(Parks) while producing more reliable estimates of coefficient standard errors. Accordingly, this study investigates the performance of the jackknife estimator of FGLS(Parks) using Monte Carlo experimentation. We find that jackknifing can, in narrowly defined situations, substantially improve the estimation of coefficient standard errors. However, its overall performance is not sufficient to make it a viable alternative to other panel data estimators. 
Keywords:  Panel Data estimation, Parks model, cross-sectional correlation, jackknife, Monte Carlo 
JEL:  C23 C15 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:zbw:ifwedp:201023&r=ecm 
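The jackknife idea behind the study above can be sketched for a simple estimator. This is a delete-one jackknife for OLS coefficients, not the FGLS(Parks) implementation (which would delete one cross-sectional unit at a time); the mechanics are the same.

```python
import numpy as np

# Delete-one jackknife standard errors: re-estimate the coefficients n
# times, each time leaving out one observation, and use the spread of
# the leave-one-out estimates around their mean.

def jackknife_se(X, y):
    n = X.shape[0]
    beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        keep = np.arange(n) != i
        betas[i] = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    beta_bar = betas.mean(axis=0)
    var = (n - 1) / n * ((betas - beta_bar) ** 2).sum(axis=0)
    return beta_full, np.sqrt(var)

rng = np.random.default_rng(6)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
beta_hat, se = jackknife_se(X, y)
```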
By:  Polasek, Wolfgang (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria, and Faculty of Science, University of Porto, Porto, Portugal); Sellner, Richard (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria) 
Abstract:  Flow data across regions can be modeled by spatial econometric models; see LeSage and Pace (2009). Recently, regional studies have become interested in the aggregation and disaggregation of flow models, because trade data cannot be obtained at a disaggregated level but are published at an aggregate level. Furthermore, missing data in disaggregated flow models occur quite often, since detailed measurements are often not possible at all observation points in time and space. In this paper we develop classical and Bayesian methods to complete flow data. The Chow and Lin (1971) method was developed for completing disaggregated incomplete time series data. We extend this method in a general framework to spatially correlated flow data using the cross-sectional Chow-Lin method of Polasek et al. (2009). The missing disaggregated data can be obtained either by feasible GLS prediction or by a Bayesian (posterior) predictive density. 
Keywords:  Missing values in spatial econometrics, MCMC, non-spatial Chow-Lin (CL) and spatial Chow-Lin (SCL) methods, spatial internal flow (SIF) models, origin and destination (OD) data 
JEL:  C11 C15 C52 E17 R12 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihsesp:255&r=ecm 
By:  Matteo Mogliani 
Abstract:  The aim of this paper is to study the performance of residual-based tests for cointegration in the presence of multiple deterministic structural breaks via Monte Carlo simulations. We consider the KPSS-type LM tests proposed in Carrion-i-Silvestre and Sansò (2006) and in Bartley, Lee and Strazicich (2001), as well as the Schmidt and Phillips-type LM tests proposed in Westerlund and Edgerton (2007). This exercise allows us to cover a wide set of single-equation cointegration estimators. Monte Carlo experiments reveal a trade-off between size and power distortions across tests and models. KPSS-type tests display large size distortions under multiple breaks scenarios, while Schmidt and Phillips-type tests appear well-sized across all simulations. However, when regressors are endogenous, the former group of tests displays quite high power against the alternative hypothesis, while the latter shows severely low power. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:pse:psecon:201022&r=ecm 
By:  LAURENT, Sébastien (Maastricht University, The Netherlands; Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium); ROMBOUTS, Jeroen V. K. (HEC Montréal, CIRANO, CIRPEE; Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium); VIOLANTE, Francesco (Université de Namur, CeReFim, B-5000 Namur, Belgium; Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium) 
Abstract:  This paper addresses the question of the selection of multivariate GARCH models in terms of variance matrix forecasting accuracy, with a particular focus on relatively large scale problems. We consider 10 assets from NYSE and NASDAQ and compare 125 model-based one-step-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the Superior Predictive Ability (SPA) tests. Model performances are evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over/under predictions. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods, i.e. the dot-com bubble, the set of superior models is composed of more sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effect in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption like constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007-2008 financial crisis, accounting for nonstationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently of the period, the best models do not provide significantly better forecasts than the DCC model of Engle (2002) with leverage in the conditional variances of the returns. 
Keywords:  variance matrix, forecasting, multivariate GARCH, loss function, model confidence set, superior predictive ability 
JEL:  C10 C32 C51 C52 C53 G10 
Date:  2010–05–01 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2010025&r=ecm 
By:  Chernobai, Anna; Menn, Christian; Rachev, Svetlozar T.; Trück, Stefan 
Abstract:  The recently finalized Basel II Capital Accord requires banks to adopt a procedure to estimate the operational risk capital charge. The Advanced Measurement Approaches, which are currently mandated for all large internationally active US banks, require the use of historical operational loss data. Operational loss databases are typically subject to a minimum recording threshold of roughly $10,000. We demonstrate that ignoring such thresholds leads to biases in the corresponding parameter estimates. Using publicly available operational loss data, we analyze the effects of model misspecification on the resulting expected loss, Value-at-Risk, and Conditional Value-at-Risk figures and show that underestimation of the regulatory capital is a consequence of such model error. The choice of an adequate loss distribution is conducted via in-sample goodness-of-fit procedures and backtesting, using both classical and robust methodologies. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:zbw:kitwps:4&r=ecm 
By:  Güner, Biliana; Rachev, Svetlozar T.; Edelman, Daniel; Fabozzi, Frank J. 
Abstract:  Recently, a body of academic literature has focused on the area of stable distributions and their application potential for improving our understanding of the risk of hedge funds. At the same time, research has sprung up that applies standard Bayesian methods to hedge fund evaluation. Little or no academic attention has been paid to the combination of these two topics. In this paper, we consider Bayesian inference for alpha-stable distributions with particular regard to hedge fund performance and risk assessment. After constructing Bayesian estimators for alpha-stable distributions in the context of an ARMA-GARCH time series model with stable innovations, we compare our risk evaluation and prediction results to the predictions of several competing conditional and unconditional models that are estimated in both the frequentist and Bayesian setting. We find that the conditional Bayesian model with stable innovations has superior risk prediction capabilities compared with other approaches and, in particular, produced better risk forecasts of the abnormally large losses that some hedge funds sustained in the months of September and October 2008. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:zbw:kitwps:1&r=ecm 
By:  SBRANA, Giacomo (Université de Strasbourg, BETA, F-67085 Strasbourg, France); SILVESTRINI, Andrea (Bank of Italy, Economics, Research and International Relations Area, Economic and Financial Statistics Department, I-00184 Roma, Italy) 
Abstract:  In this paper we propose a unified framework to analyse contemporaneous and temporal aggregation of exponential smoothing (EWMA) models. Focusing on a vector IMA(1,1) model, we obtain a closed form representation for the parameters of the contemporaneously and temporally aggregated process as a function of the parameters of the original one. In the framework of EWMA estimates of volatility, we present an application dealing with Value-at-Risk (VaR) prediction at different sampling frequencies for an equally weighted portfolio composed of multiple indices. We apply the aggregation results by inferring the decay factor in the portfolio volatility equation from the estimated vector IMA(1,1) model of squared returns. Empirical results show that VaR predictions delivered using this suggested approach are at least as accurate as those obtained by applying the standard univariate RiskMetrics™ methodology. 
Keywords:  contemporaneous and temporal aggregation, EWMA, volatility, Value-at-Risk 
JEL:  C10 C32 C43 
Date:  2010–07–01 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2010039&r=ecm 
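The EWMA volatility recursion that underlies the abstract above, together with a one-step-ahead VaR, can be sketched as follows. The decay factor 0.94, the normal-quantile assumption, and the simulated returns are illustrative (RiskMetrics-style conventions), not the paper's inferred parameters.

```python
import numpy as np

# EWMA variance recursion: sigma2_t = decay * sigma2_{t-1} + (1 - decay) * r_{t-1}^2.
# One-step 1% VaR under conditional normality uses the quantile z = -2.326.

def ewma_var(returns, decay=0.94, z_alpha=-2.326):
    sigma2 = np.var(returns[:20])          # initialize on an early window
    for r in returns:
        sigma2 = decay * sigma2 + (1.0 - decay) * r ** 2
    one_step_vol = np.sqrt(sigma2)
    return z_alpha * one_step_vol          # 1% one-day VaR, as a return

rng = np.random.default_rng(7)
returns = 0.01 * rng.standard_normal(1000)  # i.i.d. returns, ~1% daily vol
var_1pct = ewma_var(returns)
```

With a true daily volatility of 1%, the one-day 1% VaR should land near -2.3%.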
By:  Florens, Jean-Pierre; Simon, Guillaume 
Abstract:  The objective of the paper is to develop the theory of endogeneity in dynamic models in discrete and continuous time, in particular for diffusions and counting processes. We first provide an extension of the separable setup to a separable dynamic framework given in terms of a semimartingale decomposition. Then we define our function of interest as a stopping time for an additional noise process, whose role is played by a Brownian motion for diffusions and by a Poisson process for counting processes. 
JEL:  C14 C32 C51 
Date:  2010–04 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:22896&r=ecm 
By:  Cazals, Catherine; Dudley, Paul; Florens, Jean-Pierre; Jones, Michael 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:22880&r=ecm 
By:  Laurent Lamy 
Abstract:  Brendstrup (2007) and Brendstrup and Paarsch (2006) claim that sequential English auction models with multi-unit demand can be identified from the distribution of the last stage winning price and without any assumption on bidding behavior in the earliest stages. We show that their identification strategy is not correct and that non-identification occurs even if equilibrium behavior is assumed in the earliest stages. For two-stage sequential auctions, an estimation procedure that has an equilibrium foundation and that uses the winning price at both stages is developed and supported by Monte Carlo experiments. Identification under general affiliated multi-unit demand schemes is also investigated. 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:pse:psecon:201016&r=ecm 
By:  Evarist Stoja; Richard D. F. Harris; Fatih Yilmaz 
Abstract:  In this paper, we investigate the long run dynamics of the intraday range of the GBP/USD, JPY/USD and CHF/USD exchange rates. We use a nonparametric filter to extract the low frequency component of the intraday range, and model the cyclical deviation of the range from the long run trend as a stationary autoregressive process. We find that the long run trend is time-varying but highly persistent, while the cyclical component is strongly mean reverting. This has important implications for modelling and forecasting volatility over both short and long horizons. As an illustration, we use the cyclical volatility model to generate out-of-sample forecasts of exchange rate volatility for horizons of up to one year under the assumption that the long run trend is fully persistent. As a benchmark, we compare the forecasts of the cyclical volatility model with those of the two-factor intraday range-based EGARCH model of Brandt and Jones (2006). Not only is the cyclical volatility model significantly easier to estimate than the EGARCH model, but it also offers a substantial improvement in out-of-sample forecast performance. 
Keywords:  Conditional volatility, Intraday range, Hodrick-Prescott filter 
JEL:  C15 C22 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:bri:uobdis:10/618&r=ecm 
By:  Michael Lechner 
Abstract:  This survey gives a brief overview of the literature on the difference-in-differences (DiD) estimation strategy and discusses major issues using a treatment effect perspective. In this sense, the survey gives a somewhat different view on DiD than the standard textbook discussion of the difference-in-differences model, but it will also not be as complete as the latter. The survey also contains a couple of extensions to the literature, for example, a discussion of and suggestions for nonlinear DiD as well as DiD based on propensity-score-type matching methods. 
Keywords:  Causal inference, counterfactual analysis, before-after-treatment-control design, control group design with pre-test and post-test 
JEL:  C21 C23 C31 C33 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:usg:dp2010:201028&r=ecm 
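The canonical 2x2 DiD comparison surveyed above can be sketched in a few lines on simulated group means; the group gap, time trend, and treatment effect below are made-up values for illustration.

```python
import numpy as np

# Difference-in-differences: (treated post - treated pre) minus
# (control post - control pre) nets out the fixed group gap and the
# common time trend, leaving the treatment effect.

def did(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) \
         - (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))

rng = np.random.default_rng(9)
n = 2000
group_gap, time_trend, effect = 1.0, 0.5, 2.0   # illustrative parameters

y_ctrl_pre = 0.0 + 0.1 * rng.standard_normal(n)
y_ctrl_post = time_trend + 0.1 * rng.standard_normal(n)
y_treat_pre = group_gap + 0.1 * rng.standard_normal(n)
y_treat_post = group_gap + time_trend + effect + 0.1 * rng.standard_normal(n)

effect_hat = did(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post)
```

A naive post-period comparison would return effect + group_gap; the double difference removes both confounders under the parallel-trends assumption.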
By:  Kevin C. Cheng 
Abstract:  Building on the widely used double-lognormal approach of Bahra (1997), this paper presents a multi-lognormal approach with restrictions to extract risk-neutral probability density functions (RNPs) for various asset classes. The contributions are twofold: first, on the technical side, the paper proposes useful transformations/restrictions to Bahra's original formulation for achieving economically sensible outcomes. In addition, the paper compares the statistical properties of the estimated RNPs among major asset classes, including commodities, the S&P 500, the dollar/euro exchange rate, and the US 10-year Treasury Note. Finally, a Monte Carlo study suggests that the multi-lognormal approach outperforms the double-lognormal approach. 
Date:  2010–08–02 
URL:  http://d.repec.org/n?u=RePEc:imf:imfwpa:10/181&r=ecm 