nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒05‒30
23 papers chosen by
Sune Karlsson
Orebro University

  1. Penalized Sieve Estimation and Inference of Semi-Nonparametric Dynamic Models: A Selective Review By Xiaohong Chen
  2. WALS estimation and forecasting in factor-based dynamic models with an application to Armenia By Poghosyan, K.; Magnus, J.R.
  3. Nonclassical Measurement Error in a Nonlinear (Duration) Model By Gutknecht, Daniel
  4. Identification of jumps in financial price series By Hellström, Jörgen; Lönnbark, Carl
  5. Efficient, regression-based estimation of dynamic asset pricing models By Tobias Adrian; Richard K. Crump; Emanuel Moench
  6. Conditionally Efficient Estimation of Long-run Relationships Using Mixed-frequency Time Series By J. Isaac Miller
  7. Forecasting Value-at-Risk Using Nonlinear Regression Quantiles and the Intra-day Range By Cathy W. S. Chen; Richard Gerlach; Bruce B. K. Hwang; Michael McAleer
  8. Bayesian VARs: specification choices and forecast accuracy By Andrea Carriero; Todd Clark; Massimiliano Marcellino
  9. Meta-Regression Approximations to Reduce Publication Selection Bias By T.D. Stanley; Hristos Doucouliagos
  10. Two competitive models and their identification problem: The ESTAR and TSTAR model By Heinen, Florian; Michael, Stefanie; Sibbertsen, Philipp
  11. Asymptotic Variance Estimator for Two-Step Semiparametric Estimators By Daniel Ackerberg; Xiaohong Chen; Jinyong Hahn
  12. The simple econometrics of tail dependence By Maarten R.C. van Oordt; Chen Zhou
  13. Forecasting Value-at-Risk Using Nonlinear Regression Quantiles and the Intra-day Range By Cathy W. S. Chen; Richard Gerlach; Bruce B. K. Hwang; Michael McAleer
  14. CVaR sensitivity with respect to tail thickness By Stoyanov, Stoyan V.; Rachev, Svetlozar T.; Fabozzi, Frank J.
  15. A note on testing for purchasing power parity By Heinen, Florian
  16. Identification of Insurance Models with Multidimensional Screening By Gaurab Aryal; Isabelle Perrigne; Quang Vuong
  17. State-Observation Sampling and the Econometrics of Learning Models By Laurent E. Calvet; Veronika Czellar
  18. Computing the Jacobian in spatial models: an applied survey. By Bivand, Roger
  19. Chaos detection in economics. Metric versus topological tools By Faggini, Marisa
  20. Tempered infinitely divisible distributions and processes By Bianchi, Michele Leonardo; Rachev, Svetlozar T.; Kim, Young Shin; Fabozzi, Frank J.
  21. Tempered stable and tempered infinitely divisible GARCH models By Kim, Young Shin; Rachev, Svetlozar T.; Bianchi, Michele Leonardo; Fabozzi, Frank J.
  22. Fat-tailed models for risk estimation By Stoyanov, Stoyan V.; Rachev, Svetlozar T.; Racheva-Iotova, Boryana; Fabozzi, Frank J.
  23. A variant of radial measure capable of dealing with negative inputs and outputs in data envelopment analysis By Cheng, Gang; Zervopoulos, Panagiotis; Qian, Zhenhua

  1. By: Xiaohong Chen (Cowles Foundation, Yale University)
    Abstract: In this selective review, we first provide some empirical examples that motivate the usefulness of semi-nonparametric techniques in modelling economic and financial time series. We describe popular classes of semi-nonparametric dynamic models and some temporal dependence properties. We then present penalized sieve extremum (PSE) estimation as a general method for semi-nonparametric models with cross-sectional, panel, time series, or spatial data. The method is especially powerful in estimating difficult ill-posed inverse problems such as semi-nonparametric mixtures or conditional moment restrictions. We review recent advances on inference and large sample properties of the PSE estimators, which include (1) consistency and convergence rates of the PSE estimator of the nonparametric part; (2) limiting distributions of plug-in PSE estimators of functionals that are either smooth (i.e., root-n estimable) or non-smooth (i.e., slower than root-n estimable); (3) simple criterion-based inference for plug-in PSE estimation of smooth or non-smooth functionals; and (4) root-n asymptotic normality of semiparametric two-step estimators and their consistent variance estimators. Examples from dynamic asset pricing, nonlinear spatial VAR, semiparametric GARCH, and copula-based multivariate financial models are used to illustrate the general results.
    Keywords: Nonlinear time series, Temporal dependence, Tail dependence, Penalized sieve M estimation, Penalized sieve minimum distance, Semiparametric two-step, Nonlinear ill-posed inverse, Mixtures, Conditional moment restrictions, Nonparametric endogeneity, Dynamic asset pricing, Varying coefficient VAR, GARCH, Copulas, Value-at-risk
    JEL: C13 C14 C20
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1804&r=ecm
  2. By: Poghosyan, K.; Magnus, J.R. (Tilburg University, Center for Economic Research)
    Abstract: Two model averaging approaches, the well-known BMA and the recently developed WALS, are used and compared in estimating and forecasting dynamic factor models. Both methods combine frequentist estimators using Bayesian weights. We apply our framework to the Armenian economy using quarterly data from 2000–2010, and we estimate and forecast real GDP and inflation dynamics.
    Keywords: Dynamic models; Factor analysis; Model averaging; Monte Carlo; Armenia
    JEL: C11 C13 C52 C53 E52 E58
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2011054&r=ecm
  3. By: Gutknecht, Daniel
    Abstract: In this paper, we study nonclassical measurement error in the continuous dependent variable of a semiparametric transformation model. The latter is a popular choice in practice, nesting various nonlinear duration and censored regression models. The main complication arises because we allow the (additive) measurement error to be correlated with a (continuous) component of the regressors as well as with the true, unobserved dependent variable itself. This problem has not yet been studied in the literature, but we argue that it is relevant for various empirical setups with mismeasured, continuous survey data such as earnings or durations. We develop a framework to identify and consistently estimate (up to scale) the parameter vector of the transformation model. Our estimator links a two-step control function approach of Imbens and Newey (2009) with a rank estimator similar to Khan (2001) and is shown to have desirable asymptotic properties. We prove that the `m out of n' bootstrap can be used to obtain a consistent approximation of the asymptotic variance, and study the estimator's finite sample performance in a Monte Carlo simulation. To illustrate the empirical usefulness of our procedure, we estimate an earnings equation model using annual data from the Health and Retirement Study (HRS). We find some evidence of a bias in the coefficients of years of education and age, emphasizing once again the importance of adjusting for potential measurement error bias in empirical work.
    Keywords: Nonclassical Measurement Error; Dependent Variable; Control Function; Rank Estimator
    JEL: C14 C34
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:961&r=ecm
  4. By: Hellström, Jörgen (Umeå School of Business, Umeå University); Lönnbark, Carl (Department of Economics, Umeå University)
    Abstract: The paper outlines and tests, by means of Monte Carlo simulations, a simple strategy of using existing non-parametric tests for jumps at the daily frequency to identify jumps at higher sampling frequencies. The suggested strategy allows for identification of the number of jumps and jump times during a day, as well as the size and direction (negative or positive) of the jumps. The method is important for facilitating detailed empirical studies concerning, for example, the causes of jumps in financial price series at finer levels than the daily. The Monte Carlo study reveals that the strategy works reasonably well, particularly for lower jump intensities. An application of the strategy to the Handelsbanken stock is provided.
    Keywords: Financial econometrics; jumps; realized variance; bipower variation; stock price
    JEL: C14 C15 G12
    Date: 2011–05–20
    URL: http://d.repec.org/n?u=RePEc:hhs:umnees:0827&r=ecm
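    A minimal sketch of the daily-frequency ingredients such non-parametric jump tests build on: realized variance picks up both diffusion and jump variation, while bipower variation is robust to jumps, so their gap flags jump days. The simulated returns and the injected jump below are purely illustrative, not the paper's design.

        import numpy as np

        def realized_variance(returns):
            """Sum of squared intra-day returns for one day."""
            return np.sum(returns ** 2)

        def bipower_variation(returns):
            """Jump-robust variance estimate (Barndorff-Nielsen & Shephard)."""
            return (np.pi / 2) * np.sum(np.abs(returns[1:]) * np.abs(returns[:-1]))

        def relative_jump(returns):
            """Share of realized variance attributable to jumps; large values
            suggest a jump day."""
            rv = realized_variance(returns)
            return max(rv - bipower_variation(returns), 0.0) / rv

        # Illustrative day: 78 five-minute returns with one injected jump.
        rng = np.random.default_rng(0)
        r = rng.normal(0, 0.001, 78)
        r[40] += 0.01  # hypothetical jump
        print(f"relative jump measure: {relative_jump(r):.3f}")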
  5. By: Tobias Adrian; Richard K. Crump; Emanuel Moench
    Abstract: We study regression-based estimators for beta representations of dynamic asset pricing models with affine and exponentially affine pricing kernel specifications. These estimators extend static cross-sectional asset pricing estimators to settings where prices of risk vary with observed state variables. We identify conditions under which four-stage regression-based estimators are efficient and also present alternative, closed-form linearized maximum likelihood (LML) estimators. We provide multi-stage standard errors necessary to conduct inference for asset pricing tests. In empirical applications, we find that time-varying prices of risk are pervasive, thus favoring dynamic cross-sectional asset pricing models over standard unconditional specifications.
    Keywords: Asset pricing ; Econometric models ; Risk ; Regression analysis
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:fip:fednsr:493&r=ecm
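    For orientation, a sketch of the static two-pass cross-sectional estimator that the paper's regression-based, multi-stage estimators extend to time-varying prices of risk. All data, dimensions, and parameter values below are simulated and illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        T, N, K = 600, 25, 2                          # periods, assets, factors
        f = rng.normal(0, 1, (T, K))                  # factor innovations
        beta = rng.uniform(0.5, 1.5, (N, K))          # true exposures
        lam = np.array([0.4, 0.2])                    # true (constant) prices of risk
        R = beta @ lam + f @ beta.T + rng.normal(0, 1, (T, N))

        # Pass 1: time-series regressions recover each asset's betas.
        X = np.column_stack([np.ones(T), f])
        beta_hat = np.linalg.lstsq(X, R, rcond=None)[0][1:].T        # N x K

        # Pass 2: cross-sectional regression of mean returns on the
        # estimated betas recovers the prices of risk.
        Z = np.column_stack([np.ones(N), beta_hat])
        lam_hat = np.linalg.lstsq(Z, R.mean(axis=0), rcond=None)[0][1:]
        print("estimated prices of risk:", lam_hat.round(2))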
  6. By: J. Isaac Miller (Department of Economics, University of Missouri-Columbia)
    Abstract: I analyze efficient estimation of a cointegrating vector when the regressand is observed at a lower frequency than the regressors. Previous authors have examined the effects of specific temporal aggregation or sampling schemes, finding conventionally efficient techniques to be efficient only when both the regressand and the regressors are average sampled. Using an alternative method for analyzing aggregation under more general weighting schemes, I derive an efficiency bound that is conditional on the type of aggregation used on the regressand. This conditional bound differs from the unconditional bound defined by the full-information high-frequency data generating process. I modify a conventionally efficient estimator, canonical cointegrating regression (CCR), to accommodate cases in which the aggregation weights are either unknown or known. In the unknown case, the correlation structure of the error term generally confounds identification of the conditionally efficient weights. In the commonly assumed known case, the correlation structure may be utilized to offset the potential information loss from aggregation, resulting in a conditionally efficient estimator.
    Keywords: cointegration, mixed-frequency time series, temporal aggregation, canonical cointegrating regression
    JEL: C13 C22
    Date: 2011–05–19
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1103&r=ecm
  7. By: Cathy W. S. Chen; Richard Gerlach; Bruce B. K. Hwang; Michael McAleer (University of Canterbury)
    Abstract: Value-at-Risk (VaR) is commonly used for financial risk measurement. It has recently become even more important, especially during the 2008-09 global financial crisis. We propose some novel nonlinear threshold conditional autoregressive VaR (CAViaR) models that incorporate intra-day price ranges. Model estimation and inference are performed using the Bayesian approach via the link with the Skewed-Laplace distribution. We examine how a range of risk models perform during the 2008-09 financial crisis, and evaluate how the crisis affects the performance of risk models via forecasting VaR. Empirical analysis is conducted on five Asia-Pacific Economic Cooperation stock market indices as well as two exchange rate series. We examine violation rates, back-testing criteria, market risk charges and quantile loss function values to measure and assess the forecasting performance of a variety of risk models. The proposed threshold CAViaR model, incorporating range information, is shown to forecast VaR more efficiently than other models, across the series considered, which should be useful for financial practitioners.
    Keywords: Value-at-Risk; CAViaR model; Skewed-Laplace distribution; intra-day range; backtesting; Markov chain Monte Carlo
    Date: 2011–05–18
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:11/22&r=ecm
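    For concreteness, a sketch of a symmetric-absolute-value-style CAViaR recursion driven by the intra-day range, together with the quantile (tick) loss used to compare VaR forecasts. The paper's threshold specifications and Bayesian (MCMC) estimation are considerably richer than this; the parameter values and signs here are illustrative.

        import numpy as np

        def caviar_range(r, day_range, beta, alpha=0.01):
            """CAViaR-type recursion: tomorrow's VaR quantile responds to
            today's quantile and today's intra-day range. The beta values
            would be estimated (in the paper, via MCMC)."""
            q = np.empty_like(r)
            q[0] = np.quantile(r, alpha)              # empirical start value
            for t in range(1, len(r)):
                q[t] = beta[0] + beta[1] * q[t - 1] - beta[2] * day_range[t - 1]
            return q

        def tick_loss(r, q, alpha=0.01):
            """Quantile loss: the criterion used to assess VaR forecasts."""
            u = r - q
            return np.mean((alpha - (u < 0).astype(float)) * u)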
  8. By: Andrea Carriero; Todd Clark; Massimiliano Marcellino
    Abstract: In this paper we examine how the forecasting performance of Bayesian VARs is affected by a number of specification choices. In the baseline case, we use a Normal-Inverted Wishart prior that, when combined with a (pseudo-) iterated approach, makes the analytical computation of multi-step forecasts feasible and simple, in particular when using standard and fixed values for the tightness and the lag length. We then assess the role of the optimal choice of the tightness, of the lag length and of both; compare alternative approaches to multi-step forecasting (direct, iterated, and pseudo-iterated); discuss the treatment of the error variance and of cross-variable shrinkage; and address a set of additional issues, including the size of the VAR, modeling in levels or growth rates, and the extent of forecast bias induced by shrinkage. We obtain a large set of empirical results, but we can summarize them by saying that we find very small losses (and sometimes even gains) from the adoption of specification choices that make BVAR modeling quick and easy. This finding could therefore further enhance the diffusion of the BVAR as an econometric tool for a vast range of applications.
    Keywords: Bayesian statistical decision theory ; Forecasting ; Vector autoregression
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwp:1112&r=ecm
  9. By: T.D. Stanley; Hristos Doucouliagos
    Abstract: Publication selection bias represents a serious challenge to the integrity of all empirical sciences. We develop meta-regression approximations that are shown to reduce this bias and outperform conventional meta-analytic methods. Our approach is derived from Taylor polynomial approximations to the conditional mean of a truncated distribution. Monte Carlo simulations demonstrate how a new hybrid estimator provides a practical solution. These meta-regression methods are applied to several policy-relevant areas of research, including antidepressant effectiveness, the value of a statistical life, and the employment effect of minimum wages, and they alter what we think we know.
    Keywords: meta-regression; publication selection bias; systematic reviews; truncation
    Date: 2011–05–25
    URL: http://d.repec.org/n?u=RePEc:dkn:econwp:eco_2011_4&r=ecm
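    A sketch of the PET-PEESE flavor of meta-regression approximation this line of work develops: regress reported effects on their standard errors (PET) or squared standard errors (PEESE) by precision-weighted least squares, and read a selection-corrected effect off the intercept. This is a simplified stand-in for the paper's hybrid estimator, not its exact recipe.

        import numpy as np

        def pet_peese(effects, ses):
            """Precision-weighted meta-regressions; returns the two intercepts,
            each an estimate of the effect corrected for publication selection."""
            w = 1.0 / ses ** 2
            def wls_intercept(x):
                X = np.column_stack([np.ones_like(x), x])
                XtW = X.T * w                     # same as X.T @ diag(w)
                return np.linalg.solve(XtW @ X, XtW @ effects)[0]
            pet = wls_intercept(ses)              # effect_i = b0 + b1*SE_i
            peese = wls_intercept(ses ** 2)       # effect_i = b0 + b1*SE_i^2
            return pet, peese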
  10. By: Heinen, Florian; Michael, Stefanie; Sibbertsen, Philipp
    Abstract: Determining good parameter estimates in ESTAR models is known to be difficult. We show that the phenomenon of strongly biased estimates is a consequence of the so-called identification problem, the problem of properly distinguishing the transition function under extreme parameter combinations. This happens in particular for either very small or very large values of the error term variance. Furthermore, we introduce a new alternative model, the TSTAR model, which has similar properties to the ESTAR model but reduces the effects of the identification problem. We also derive a linearity test and a unit root test for this model.
    Keywords: Nonlinearities, Smooth transition, Linearity testing, Unit root testing, Real exchange rates
    JEL: C12 C22 C52
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-474&r=ecm
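    The identification problem is easy to see numerically: for small values of the transition parameter, the ESTAR transition function G(y) = 1 - exp(-gamma*(y - c)^2) is approximately gamma*(y - c)^2, so gamma and the outer regression coefficients enter the model essentially only through their product. A tiny illustration (values chosen for exposition, not taken from the paper):

        import numpy as np

        def estar_G(y, gamma, c=0.0):
            """ESTAR transition function."""
            return 1.0 - np.exp(-gamma * (y - c) ** 2)

        y = np.linspace(-2.0, 2.0, 5)
        # G(y)/gamma is almost the same curve for different small gammas,
        # so the data can hardly tell gamma apart from a rescaled slope.
        for g in (0.01, 0.02):
            print(g, np.round(estar_G(y, g) / g, 3))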
  11. By: Daniel Ackerberg (Dept. of Economics, UCLA); Xiaohong Chen (Cowles Foundation, Yale University); Jinyong Hahn (Dept. of Economics, UCLA)
    Abstract: The goal of this paper is to develop techniques to simplify semiparametric inference. We do this by deriving a number of numerical equivalence results. These illustrate that in many cases, one can obtain estimates of semiparametric variances using standard formulas derived in the already-well-known parametric literature. This means that for computational purposes, an empirical researcher can ignore the semiparametric nature of the problem and do all calculations "as if" it were a parametric situation. We hope that this simplicity will promote the use of semiparametric procedures.
    Keywords: Two-step semiparametrics
    JEL: C14
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1803&r=ecm
  12. By: Maarten R.C. van Oordt; Chen Zhou
    Abstract: The aim of this paper is to show that measures of tail dependence can be estimated in a convenient way by regression analysis. This yields the same estimates as the non-parametric method within the multivariate Extreme Value Theory framework. The advantage of the regression approach lies in its straightforward extension to the estimation of higher-dimensional tail dependence. We provide an example on international stock markets. The regression approach to tail dependence can be applied to estimate several measures of the systemic importance of financial institutions proposed in the literature.
    Keywords: Tail dependence; Regression analysis; Extreme Value Theory; Systemic risk
    JEL: C14
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:dnb:dnbwpp:296&r=ecm
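    For reference, the standard nonparametric EVT estimator that the regression approach is shown to reproduce: the fraction of the k most extreme observations of one series whose counterpart is simultaneously among the k most extreme of the other. The function name below is ours, not the paper's.

        import numpy as np

        def upper_tail_dependence(x, y, k):
            """Nonparametric tail-dependence estimate using the k largest
            observations of each series (k grows slowly with sample size)."""
            n = len(x)
            joint = (x > np.sort(x)[n - k]) & (y > np.sort(y)[n - k])
            return joint.sum() / k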
  13. By: Cathy W. S. Chen (Graduate Institute of Statistics and Actuarial Science, Feng Chia University); Richard Gerlach (University of Sydney Business School); Bruce B. K. Hwang (Graduate Institute of Statistics and Actuarial Science, Feng Chia University); Michael McAleer (Erasmus University Rotterdam, Tinbergen Institute, The Netherlands, Complutense University of Madrid, and Institute of Economic Research, Kyoto University)
    Abstract: Value-at-Risk (VaR) is commonly used for financial risk measurement. It has recently become even more important, especially during the 2008-09 global financial crisis. We propose some novel nonlinear threshold conditional autoregressive VaR (CAViaR) models that incorporate intra-day price ranges. Model estimation and inference are performed using the Bayesian approach via the link with the Skewed-Laplace distribution. We examine how a range of risk models perform during the 2008-09 financial crisis, and evaluate how the crisis affects the performance of risk models via forecasting VaR. Empirical analysis is conducted on five Asia-Pacific Economic Cooperation stock market indices as well as two exchange rate series. We examine violation rates, back-testing criteria, market risk charges and quantile loss function values to measure and assess the forecasting performance of a variety of risk models. The proposed threshold CAViaR model, incorporating range information, is shown to forecast VaR more efficiently than other models, across the series considered, which should be useful for financial practitioners.
    Keywords: Value-at-Risk; CAViaR model; Skewed-Laplace distribution; intra-day range; backtesting; Markov chain Monte Carlo
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:775&r=ecm
  14. By: Stoyanov, Stoyan V.; Rachev, Svetlozar T.; Fabozzi, Frank J.
    Abstract: We consider the sensitivity of conditional value-at-risk (CVaR) with respect to the tail index, assuming regularly varying tails and exponential and faster-than-exponential tail decay for the return distribution. We compare it to the CVaR sensitivity with respect to the scale parameter for the stable Paretian, Student's t, and generalized Gaussian laws, and discuss implications for the modeling of daily returns and marginal rebalancing decisions. Finally, we explore empirically the impact on the asymptotic variability of the CVaR estimator with daily returns, which are a standard choice of return frequency for risk estimation.
    Keywords: fat-tailed distributions, regularly varying tails, conditional value-at-risk, marginal rebalancing, asymptotic variability
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:kitwps:29&r=ecm
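    The abstract's object of study is easy to reproduce numerically: Monte Carlo CVaR of a Student's t return distribution rises sharply as the tail index nu falls. A sketch, with sample size and quantile level chosen purely for illustration:

        import numpy as np
        from scipy import stats

        def cvar_student_t(nu, alpha=0.01, n=200_000, seed=0):
            """Monte Carlo CVaR (expected shortfall): the average loss
            beyond the alpha-quantile of a Student's t distribution."""
            r = stats.t.rvs(df=nu, size=n, random_state=seed)
            var = np.quantile(r, alpha)
            return -r[r <= var].mean()

        for nu in (3, 5, 10, 30):
            print(f"nu={nu:>2}:  CVaR(1%) = {cvar_student_t(nu):.2f}")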
  15. By: Heinen, Florian
    Abstract: We examine the asymptotic behavior of unit root tests against nonlinear alternatives of the exponential smooth transition type if the data is erroneously nonlinearly transformed. We show analytically and by a Monte Carlo study that the probability of rejecting the correct null of a random walk depends heavily on the type of data transformation.
    Keywords: Unit roots, Misspecification, Nonlinear data transformation, Purchasing Power Parity
    JEL: C12 C22 F31
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-471&r=ecm
  16. By: Gaurab Aryal; Isabelle Perrigne; Quang Vuong
    Abstract: We study the identification of an insurance model with multidimensional screening, where insurees are characterized by risk and risk aversion. The model is solved using the concept of certainty equivalence under constant absolute risk aversion and an unspecified joint distribution of risk and risk aversion. The paper then analyzes how data availability constrains identification under four data scenarios, from the ideal situation to a more realistic one. The observed number of accidents for each insuree plays a key role in identifying the model. In the first part, we consider the case of a continuum of coverages offered to each insuree, whether the damage distribution is fully observed or truncated. Truncation arises because an insuree files a claim only when the accident involves a damage above the deductible. Despite bunching due to multidimensional screening, we show that the joint distribution of risk and risk aversion is identified. In the second part, we consider the case of a finite number of coverages offered to each insuree. When the full damage distribution is observed, we show that, despite additional pooling due to the finite number of contracts, the joint distribution of risk and risk aversion is identified under a full support assumption and a conditional independence assumption involving the car characteristics. When the damage distribution is truncated, the joint distribution is identified up to the probability that the damage is above the deductible. In the third part, we derive the restrictions imposed by the model on observables for the fourth scenario. We also propose several identification strategies for the damage probability at the deductible. These identification results are further exploited in a companion paper developing an estimation method with an application to insurance data.
    JEL: C14 L62 D82 D86
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:acb:cbeeco:2011-538&r=ecm
  17. By: Laurent E. Calvet; Veronika Czellar
    Abstract: In nonlinear state-space models, sequential learning about the hidden state can proceed by particle filtering when the density of the observation conditional on the state is available analytically (e.g. Gordon et al., 1993). This condition need not hold in complex environments, such as the incomplete-information equilibrium models considered in financial economics. In this paper, we make two contributions to the learning literature. First, we introduce a new filtering method, the state-observation sampling (SOS) filter, for general state-space models with intractable observation densities. Second, we develop an indirect inference-based estimator for a large class of incomplete-information economies. We demonstrate the good performance of these techniques on an asset pricing model with investor learning applied to over 80 years of daily equity returns.
    Date: 2011–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1105.4519&r=ecm
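    For context, a sketch of the bootstrap particle filter of Gordon et al. (1993) that the abstract takes as its baseline, here for a toy linear-Gaussian state-space model. Note that the weighting step requires the observation density in closed form, which is precisely the requirement the SOS filter is designed to relax. Model and parameter values are illustrative.

        import numpy as np

        def bootstrap_filter(y, n_part=1_000, phi=0.9, sx=1.0, sy=1.0, seed=0):
            """Bootstrap particle filter for the toy model
            x_t = phi*x_{t-1} + N(0, sx^2),  y_t = x_t + N(0, sy^2)."""
            rng = np.random.default_rng(seed)
            x = rng.normal(0.0, sx, n_part)
            means = []
            for yt in y:
                x = phi * x + rng.normal(0.0, sx, n_part)     # propagate
                w = np.exp(-0.5 * ((yt - x) / sy) ** 2)       # obs. density
                x = rng.choice(x, size=n_part, p=w / w.sum()) # resample
                means.append(x.mean())
            return np.array(means)                            # filtered means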
  18. By: Bivand, Roger (Dept. of Economics, Norwegian School of Economics and Business Administration)
    Abstract: Despite attempts to get around the Jacobian in fitting spatial econometric models by using GMM and other approximations, it remains a central problem for maximum likelihood estimation. In principle, and for smaller data sets, the use of the eigenvalues of the spatial weights matrix provides a very rapid and satisfactory resolution. For somewhat larger problems, including those induced in spatial panel and dyadic (network) problems, solving the eigenproblem is not as attractive, and a number of alternatives have been proposed. This paper surveys a selection of these alternatives and comments on their relative usefulness.
    Keywords: Spatial autoregression; Maximum likelihood estimation; Jacobian computation; Econometric software.
    JEL: C13 C21 C87
    Date: 2010–08–17
    URL: http://d.repec.org/n?u=RePEc:hhs:nhheco:2010_020&r=ecm
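    The eigenvalue route the abstract recommends for smaller problems rests on the identity log|I - rho*W| = sum_i log(1 - rho*lambda_i): the eigenproblem is solved once, after which the Jacobian is essentially free at every rho the likelihood optimizer visits. A sketch with randomly generated sparse weights for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 300
        W = rng.random((n, n)) * (rng.random((n, n)) < 0.05)   # toy weights
        W /= W.sum(axis=1, keepdims=True)                      # row-standardize
        lam = np.linalg.eigvals(W)                             # one-off cost

        def log_jacobian(rho):
            # log|I - rho*W| via the precomputed eigenvalues
            return np.sum(np.log(1.0 - rho * lam)).real

        print(log_jacobian(0.5))
        print(np.linalg.slogdet(np.eye(n) - 0.5 * W)[1])       # direct check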
  19. By: Faggini, Marisa
    Abstract: In their paper, Frank, Gencay, and Stengos (1988) analyze quarterly macroeconomic data from 1960 to 1988 for West Germany, Italy, Japan and England. The goal was to check for the presence of deterministic chaos. To ensure that the data analyzed were stationary, they first differenced the series and then tried a linear fit. Using a reasonable AR specification for each time series, they concluded that the time series showed different structures. In particular, nonlinear structure was present in the time series for Japan. Nevertheless, the application of metric tools for detecting chaos (correlation dimension and Lyapunov exponent) did not show the presence of chaos in any of the time series. Starting from this conclusion, we apply a topological tool, Visual Recurrence Analysis, to these time series to compare the results. The purpose is to verify whether the analysis performed by a topological tool could give results different from those obtained using metric tools.
    Keywords: economic time series; chaos; topological tools
    JEL: E32 B22
    Date: 2010–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:30928&r=ecm
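    The topological tools discussed rest on recurrence plots. A sketch of the underlying recurrence matrix for a delay-embedded series; the embedding dimension, delay, and threshold below are illustrative defaults, not the settings used in the paper.

        import numpy as np

        def recurrence_matrix(series, dim=3, delay=1, eps_frac=0.1):
            """R[i, j] = 1 when the delay-embedded states at times i and j
            lie within eps of each other; VRA-style tools read the visual
            patterns of this matrix."""
            series = np.asarray(series, dtype=float)
            n = len(series) - (dim - 1) * delay
            emb = np.column_stack([series[i * delay : i * delay + n]
                                   for i in range(dim)])
            d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
            return (d < eps_frac * d.max()).astype(int)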
  20. By: Bianchi, Michele Leonardo; Rachev, Svetlozar T.; Kim, Young Shin; Fabozzi, Frank J.
    Abstract: In this paper, we construct the new class of tempered infinitely divisible (TID) distributions. Taking the tempered stable distribution class, as introduced in the seminal work of Rosiński, as the starting point, a modification of the tempering function allows one to obtain suitable properties. In particular, TID distributions may have exponential moments of any order and preserve all the desirable properties of the Rosiński setting. Furthermore, we prove that the modified tempered stable distribution is TID and give some further parametric examples.
    Keywords: stable distributions, tempered stable distributions, tempered infinitely divisible distributions, modified tempered stable distributions
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:kitwps:26&r=ecm
  21. By: Kim, Young Shin; Rachev, Svetlozar T.; Bianchi, Michele Leonardo; Fabozzi, Frank J.
    Abstract: In this paper, we introduce a new GARCH model with infinitely divisible distributed innovations, referred to as the rapidly decreasing tempered stable (RDTS) GARCH model. This model allows the description of several stylized empirical facts observed for stock and index returns, such as volatility clustering and the non-zero skewness and excess kurtosis of the residual distribution. Furthermore, we review the classical tempered stable (CTS) GARCH model, which has similar statistical properties. By considering a proper density transformation between infinitely divisible random variables, these GARCH models allow one to find the risk-neutral price process, and hence they can be applied to option pricing. We propose algorithms to generate scenarios based on GARCH models with CTS and RDTS innovations. To investigate the performance of these GARCH models, we report parameter estimates for the Dow Jones Industrial Average (DJIA) index and stocks included in this index, and, to demonstrate their advantages, we calculate option prices based on these models. It should be noted that only historical data on the underlying asset and on the risk-free rate are taken into account to evaluate option prices.
    Keywords: tempered infinitely divisible distribution, tempered stable distribution, rapidly decreasing tempered stable distribution, GARCH model, option pricing
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:kitwps:28&r=ecm
  22. By: Stoyanov, Stoyan V.; Rachev, Svetlozar T.; Racheva-Iotova, Boryana; Fabozzi, Frank J.
    Abstract: In the post-crisis era, financial institutions seem to be more aware of the risks posed by extreme events. Even though there are attempts to adapt methodologies drawing from the vast academic literature on the topic, there is also skepticism that fat-tailed models are needed. In this paper, we address the common criticism and discuss three popular methods for extreme risk modeling based on full distribution modeling and extreme value theory.
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:kitwps:30&r=ecm
  23. By: Cheng, Gang; Zervopoulos, Panagiotis; Qian, Zhenhua
    Abstract: Data envelopment analysis (DEA) is a linear programming methodology to evaluate the relative technical efficiency of each member of a set of peer decision making units (DMUs) with multiple inputs and multiple outputs. It has been widely used to measure performance in many areas. A weakness of the traditional DEA model is that it cannot deal with negative input or output values. There have been many studies exploring this issue, and various approaches have been proposed. In this paper, we develop a variant of the traditional radial model (VRM) whereby original values are replaced with absolute values as the basis for quantifying the proportion of improvements needed to reach the frontier. The new radial measure is unit invariant and can deal with all cases of negative data. In addition, the VRM preserves the property of proportionate improvement of the traditional radial model, and provides exactly the same results in the cases that the traditional radial model can handle. Examples show the advantages of the new approach.
    Keywords: Data Envelopment Analysis; Negative data in DEA; Variant of radial measure; Unit invariance
    JEL: C02 C61 C67
    Date: 2011–05–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:30951&r=ecm
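    For context, the traditional input-oriented radial (CCR) envelopment program that the proposed VRM modifies, written as a linear program. It requires strictly positive data, which is exactly the limitation the paper addresses; the function name and dimensions below are ours, not the paper's.

        import numpy as np
        from scipy.optimize import linprog

        def ccr_input_efficiency(X, Y, j0):
            """min theta  s.t.  X @ lam <= theta * x_j0,  Y @ lam >= y_j0,
            lam >= 0.  X: inputs (m x n), Y: outputs (s x n), all positive;
            j0: index of the unit under evaluation."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                      # objective: theta
            A_ub = np.vstack([np.c_[-X[:, [j0]], X],         # X lam <= theta*x0
                              np.c_[np.zeros((s, 1)), -Y]])  # Y lam >= y0
            b_ub = np.r_[np.zeros(m), -Y[:, j0]]
            bounds = [(None, None)] + [(0.0, None)] * n      # theta free, lam >= 0
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.fun                                   # efficiency in (0, 1]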

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.