nep-rmg New Economics Papers
on Risk Management
Issue of 2011‒03‒05
thirteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Calibration of structural and reduced-form recovery models By Alexander F. R. Koivusalo; Rudi Schäfer
  2. Portfolio Insurance under a risk-measure constraint By Carmine De Franco; Peter Tankov
  3. Measuring Portfolio Diversification By Ulrich Kirchner; Caroline Zunckel
  4. Minimizing shortfall risk for multiple assets derivatives By Michal Barski
  5. Measuring High-Frequency Causality Between Returns, Realized Volatility and Implied Volatility By Jean-Marie Dufour; René Garcia; Abderrahim Taamouti
  6. Contracts for environmental outcomes: the use of financial contracts in environmental markets By Nemes, Veronica; La Nauze, Andrea; O'Neill, James
  7. Multivariate High-Frequency-Based Volatility (HEAVY) Models By Diaa Noureldin; Neil Shephard; Kevin Sheppard
  8. Regulation of Credit Rating Agencies - Evidence from Recent Crisis By Mai Hassan; Christian Kalhoefer
  9. Does Size of Banks Really Matter? Evidence from CDS Market Data By İlker Arslan
  10. Ruin probabilities in tough times - Part 2 - Heavy-traffic approximation for fractionally differentiated random walks in the domain of attraction of a non-Gaussian stable distribution By Ph. Barbe; W. P. McCormick
  11. On Mean-Variance Analysis By Yang Li; Traian A Pirvu
  12. Modeling Default Probabilities: the case of Brazil By Benjamin M. Tabak; Daniel O. Cajueiro; A. Luduvice
  13. The canonical econophysics approach to the flash crash of May 6, 2010 By Mazzeu, Joao; Otuki, Thiago; Da Silva, Sergio

  1. By: Alexander F. R. Koivusalo; Rudi Schäfer
    Abstract: In recent years, research on credit risk modelling has mainly focused on default probabilities. Recovery rates are usually modelled independently; quite often they are even assumed to be constant. Then, however, the structural connection between recovery rates and default probabilities is lost and the tails of the loss distribution can be underestimated considerably. The problem of underestimating tail losses becomes even more severe when calibration issues are taken into account. To demonstrate this, we choose a Merton-type structural model as our reference system. Diffusion and jump-diffusion are considered as underlying processes. We run Monte Carlo simulations of this model and calibrate different recovery models to the simulation data. For simplicity, we take the default probabilities directly from the simulation data. We compare a reduced-form model for recoveries with a constant recovery approach. In addition, we consider a functional dependence between recovery rates and default probabilities. This dependence can be derived analytically for the diffusion case. We find that the constant recovery approach drastically and systematically underestimates the tail of the loss distribution. The reduced-form recovery model shows better results when all simulation data are used for calibration. However, if we restrict the simulation data used for calibration, the results for the reduced-form model deteriorate. We find the most reliable and stable results when we make use of the functional dependence between recovery rates and default probabilities.
    Date: 2011–02
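    As a rough illustration of the kind of experiment the abstract describes, the sketch below runs a Monte Carlo of a Merton-type diffusion model (terminal asset value below the debt face triggers default, with a structural recovery of V_T/F) and contrasts it with a constant-recovery assumption. All parameters are hypothetical and not taken from the paper; this is a minimal sketch, not the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical parameters (illustrative only): one obligor, unit exposure.
    V0, mu, sigma, T, F = 1.0, 0.05, 0.25, 1.0, 0.85  # asset value, drift, vol, horizon, debt face
    n = 100_000

    # Terminal asset values under geometric Brownian motion (the diffusion case).
    Z = rng.standard_normal(n)
    VT = V0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

    default = VT < F
    pd_hat = default.mean()                      # simulated default probability

    # Structural recovery: creditors receive V_T / F of face value on default.
    loss_struct = np.where(default, 1.0 - VT / F, 0.0)

    # Constant-recovery alternative: fix recovery at its simulated mean.
    R_const = (VT[default] / F).mean()
    loss_const = np.where(default, 1.0 - R_const, 0.0)

    # The constant-recovery loss never exceeds 1 - R_const, while the
    # structural loss does -- the tail underestimation the paper describes.
    print(pd_hat, loss_struct.max(), loss_const.max())
    ```

    Even in this toy setup the constant-recovery loss distribution is capped well below the largest structural losses, which is the mechanism behind the underestimated tails.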
  2. By: Carmine De Franco; Peter Tankov
    Abstract: We study the problem of portfolio insurance from the point of view of a fund manager who guarantees to the investor that the portfolio value at maturity will be above a fixed threshold. If, at maturity, the portfolio value is below the guaranteed level, a third party will refund the investor up to the guarantee. In exchange for this protection, the third party imposes a limit on the risk exposure of the fund manager, in the form of a convex monetary risk measure. The fund manager therefore tries to maximize the investor's utility function subject to the risk-measure constraint. We give a full solution to this nonconvex optimization problem in the complete market setting and show in particular that the choice of the risk measure is crucial for the optimal portfolio to exist. Explicit results are provided for the entropic risk measure (for which the optimal portfolio always exists) and for the class of spectral risk measures (for which the optimal portfolio may fail to exist in some cases).
    Date: 2011–02
  3. By: Ulrich Kirchner; Caroline Zunckel
    Abstract: In the marketplace, diversification reduces risk and provides protection against extreme events by ensuring that one is not overly exposed to individual occurrences. We argue that diversification is best measured by characteristics of the combined portfolio of assets and introduce a measure based on the information entropy of the probability distribution for the final portfolio asset value. For Gaussian assets the measure is a logarithmic function of the variance, and combining independent Gaussian assets of equal variance adds a fixed amount to the measure. The advantages of this measure include that it naturally extends to any type of distribution and that it takes all moments into account. Furthermore, it can be used in cases of undefined weights (zero-cost assets) or undefined moments. We present examples which apply this measure to derivative overlays.
    Date: 2011–02
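    The Gaussian case mentioned in the abstract can be checked directly: the differential entropy of a Gaussian is 0.5*ln(2*pi*e*var), so pooling independent equal-variance assets changes it by a fixed additive amount. The sketch below illustrates this with a hypothetical variance; the sign convention of the paper's actual measure is an assumption here.

    ```python
    import math

    def gaussian_entropy(var: float) -> float:
        """Differential entropy (in nats) of a Gaussian with variance `var`."""
        return 0.5 * math.log(2 * math.pi * math.e * var)

    sigma2 = 0.04  # hypothetical single-asset return variance

    # An equally weighted portfolio of n independent assets with variance
    # sigma2 has variance sigma2 / n, so entropy shifts by 0.5*ln(n).
    h1 = gaussian_entropy(sigma2)       # one asset
    h2 = gaussian_entropy(sigma2 / 2)   # two assets
    h4 = gaussian_entropy(sigma2 / 4)   # four assets

    # Each doubling of the number of independent equal-variance assets
    # changes the entropy by the same constant, 0.5*ln(2): an additive effect.
    step1 = h1 - h2
    step2 = h2 - h4
    print(round(step1, 6), round(step2, 6))  # → 0.346574 0.346574
    ```

    The equal steps show why, for Gaussian assets, the entropy-based measure behaves additively when independent assets are combined.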
  4. By: Michal Barski
    Abstract: The risk minimization problem $\mathbf{E}[l((H-X_T^{x,\pi})^{+})]\overset{\pi}{\longrightarrow}\min$ in the Black-Scholes framework with correlation is studied. General formulas for the minimal risk function and the cost reduction function are derived for options $H$ depending on multiple underlyings. The cases of a linear and of a strictly convex loss function $l$ are examined. Explicit computations for $l(x)=x$ and $l(x)=x^p$, with $p>1$, are presented for digital, quanto, outperformance and spread options. The method is based on the quantile hedging approach presented in \cite{FL1}, \cite{FL2} and developed for multidimensional options in \cite{Barski}.
    Date: 2011–02
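    The shortfall functional E[l((H - X_T)^+)] in the abstract is easy to evaluate by simulation for a fixed strategy. The sketch below does this for a digital claim in a one-asset Black-Scholes market, comparing an all-bond and an all-stock strategy under the linear loss l(x)=x and the convex loss l(x)=x^2. Parameters and strategies are hypothetical; the paper derives optimal strategies analytically, which this sketch does not attempt.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical Black-Scholes market (illustrative parameters).
    S0, mu, sigma, r, T = 100.0, 0.08, 0.2, 0.02, 1.0
    K = 105.0    # digital option H = 1_{S_T > K}
    x0 = 0.3     # initial capital, below the claim's superhedging cost
    n = 200_000

    ST = S0 * np.exp((mu - 0.5 * sigma**2) * T
                     + sigma * np.sqrt(T) * rng.standard_normal(n))
    H = (ST > K).astype(float)

    def shortfall_risk(XT, p):
        """Monte Carlo estimate of E[ l((H - X_T)^+) ] with l(x) = x**p."""
        return np.mean(np.maximum(H - XT, 0.0) ** p)

    # Two simple strategies with the same initial capital x0:
    XT_bond = np.full(n, x0 * np.exp(r * T))   # all in the riskless asset
    XT_stock = x0 * ST / S0                    # all in the stock

    for p in (1, 2):  # linear and strictly convex loss
        print(p, shortfall_risk(XT_bond, p), shortfall_risk(XT_stock, p))
    ```

    Since the shortfall (H - X_T)^+ lies in [0, 1) here, the convex loss p=2 down-weights small shortfalls relative to the linear loss, which is the distinction between the two cases the paper treats.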
  5. By: Jean-Marie Dufour; René Garcia; Abderrahim Taamouti
    Abstract: In this paper, we provide evidence on two alternative mechanisms of interaction between returns and volatilities: the leverage effect and the volatility feedback effect. We stress the importance of distinguishing between realized volatility and implied volatility, and find that implied volatilities are essential for assessing the volatility feedback effect. The leverage hypothesis asserts that return shocks lead to changes in conditional volatility, while the volatility feedback effect theory assumes that return shocks can be caused by changes in conditional volatility through a time-varying risk premium. Observing that a central difference between these alternative explanations lies in the direction of causality, we consider vector autoregressive models of returns and realized volatility, and we measure these effects, along with the time lags involved, through the short-run and long-run causality measures proposed in Dufour and Taamouti (2010), as opposed to simple correlations. We analyze 5-minute observations on S&P 500 Index futures contracts, the associated realized volatilities (before and after filtering jumps through the bispectrum) and implied volatilities. Using only returns and realized volatility, we find a strong dynamic leverage effect over the first three days. The volatility feedback effect appears to be negligible at all horizons. By contrast, when implied volatility is considered, a volatility feedback effect becomes apparent, whereas the leverage effect is almost the same. These results can be explained by the fact that the volatility feedback effect works through implied volatility, which contains important information on future volatility through its nonlinear relation with forward-looking option prices. In addition, we study the dynamic impact of news on returns and volatility. First, to detect possible dynamic asymmetry, we separate good from bad return news and find a much stronger impact of bad return news (as opposed to good return news) on volatility. Second, we introduce a concept of news based on the difference between implied and realized volatilities (the variance risk premium) and find that a positive variance risk premium (an anticipated increase in variance) has more impact on returns than a negative one.
    Keywords: Volatility asymmetry, leverage effect, volatility feedback effect, risk premium, variance risk premium, multi-horizon causality, causality measure, high-frequency data, realized volatility, bipower variation, implied volatility
    JEL: G1 G12 G14 C1 C12 C15 C32 C51 C53
    Date: 2011–02–01
  6. By: Nemes, Veronica; La Nauze, Andrea; O'Neill, James
    Abstract: In environmental markets, parties frequently exchange obligations through environmental contracts. These contracts imply a distribution of risk between parties. The main focus of our paper is to identify contracts that enable risk in environmental markets to be reduced, distributed at least cost, or managed efficiently. The risks that we consider are: moral hazard risk, price risk, exogenous environmental risk, measurement risk and production risk. The first section of our paper outlines some of the contracts currently utilised in financial and insurance markets to achieve these objectives. These are: futures and options contracts, spread contracts, weather contracts and catastrophe bonds. We then provide a snapshot of current applications of these contracts, both in real markets and in the literature. Finally, we discuss some possible applications in the environmental sector and indicate how the use of these contracts may alter the way government manages environmental assets and responsibilities. We also suggest a staged process for the introduction of contracts that recognises the current limitations faced by government. This paper does not propose new or novel contracts for tackling the problems of risk in exchange. Rather, it extends the application of existing contractual arrangements to a new type of problem: environmental markets.
    Keywords: Environmental Economics and Policy,
    Date: 2011
  7. By: Diaa Noureldin; Neil Shephard; Kevin Sheppard
    Abstract: This paper introduces a new class of multivariate volatility models that utilizes high-frequency data. We discuss the models’ dynamics and highlight their differences from multivariate GARCH models. We also discuss their covariance targeting specification and provide closed-form formulas for multi-step forecasts. Estimation and inference strategies are outlined. Empirical results suggest that the HEAVY model outperforms the multivariate GARCH model out-of-sample, with the gains being particularly significant at short forecast horizons. Forecast gains are obtained for both forecast variances and correlations.
    Keywords: HEAVY model, GARCH, multivariate volatility, realized covariance, covariance targeting, multi-step forecasting, Wishart distribution
    JEL: C32 C52
    Date: 2011
  8. By: Mai Hassan (Faculty of Management Technology, The German University in Cairo); Christian Kalhoefer (Faculty of Management Technology, The German University in Cairo)
    Abstract: The importance of ratings for investors’ decisions and for perceptions of the financial health of a nation points to the need for credit rating agencies to be regulated in some way. Regulators and market participants believed that the credit rating agencies need to abide by standards of corporate governance and supervision due to their pivotal role in the US subprime crisis. This belief was amplified recently because the rating agencies were deeply involved in the European debt crisis after various sovereign debt ratings were significantly downgraded. The paper therefore highlights the critique of the agencies’ role in the two most recent crises and reviews the regulation proposals which would subject the rating agencies to behavioral standards.
    Keywords: Credit rating agencies, subprime, Euro crisis
    JEL: G15 G24 G38
    Date: 2011–02
  9. By: İlker Arslan (Department of Economics, Izmir University of Economics)
    Abstract: In this study we examine whether markets take into account the phenomenon of Too Big to Fail. Using CDS market data, which reflect the risk markets attribute to banks, we calculate the default probabilities of banks over one, two, and three years. We then regress these results on financial variables such as total assets, total shareholders’ equity and net income. We subsequently extend the study by repeating the regression analysis with Return on Assets as the dependent variable. We find that markets give more importance to the profitability of a bank than to its size when pricing its riskiness. We conclude that Too Big to Fail may not be as valid a notion as commonly thought; Too Profitable to Fail may be a better description.
    Keywords: Banking, Too Big to Fail, CDS Market
    JEL: G21 G28
    Date: 2010–11
  10. By: Ph. Barbe (CNRS); W. P. McCormick (UGA)
    Abstract: Motivated by applications to insurance mathematics, we prove some heavy-traffic limit theorems for processes which encompass the fractionally differentiated random walk as well as some FARIMA processes, when the innovations are in the domain of attraction of a non-Gaussian stable distribution.
    Date: 2011–02
  11. By: Yang Li; Traian A Pirvu
    Abstract: This paper considers the mean variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is the well posedness of the optimization problem. We find examples in which this problem is well posed. Numerical experiments provide the efficient frontier. The methodology developed in this paper can be also applied to pricing and hedging in incomplete markets.
    Date: 2011–02
  12. By: Benjamin M. Tabak; Daniel O. Cajueiro; A. Luduvice
    Date: 2011–01
  13. By: Mazzeu, Joao; Otuki, Thiago; Da Silva, Sergio
    Abstract: We carry out a statistical physics analysis of the flash crash of May 6, 2010 using data from the Dow Jones Industrial Average index sampled at a one-minute frequency from September 1, 2009 to May 31, 2010. We evaluate the hypothesis of a non-Gaussian Lévy-stable distribution to model the data and pay particular attention to the behavior of the distribution tails. We conclude that there is non-Gaussian scaling and thus that the flash crash cannot be considered an anomaly. From the study of the tails, we find that the flash crash followed a power-law pattern outside the Lévy regime which was not the inverse cubic law. Finally, we show that the time-dependent variance of the DJIA-index returns, which the Lévy-stable model does not track, can be modeled in a straightforward manner by a GARCH(1,1) process.
    Keywords: flash crash; econophysics; stable distribution; extreme events
    JEL: C46
    Date: 2011
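    The GARCH(1,1) process mentioned at the end of the abstract has a simple variance recursion, sigma2[t] = omega + alpha*r[t-1]^2 + beta*sigma2[t-1]. The sketch below simulates such a series and filters it with the same recursion; the coefficients are hypothetical, not the values the authors estimate for the DJIA data.

    ```python
    import numpy as np

    def garch11_variance(returns, omega, alpha, beta):
        """Conditional variance path of a GARCH(1,1):
           sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
        sigma2 = np.empty_like(returns)
        sigma2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
        for t in range(1, len(returns)):
            sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
        return sigma2

    # Simulate a GARCH(1,1) series with hypothetical coefficients.
    rng = np.random.default_rng(2)
    omega, alpha, beta = 1e-6, 0.08, 0.90
    n = 5_000
    r = np.empty(n)
    s2 = omega / (1.0 - alpha - beta)
    for t in range(n):
        r[t] = np.sqrt(s2) * rng.standard_normal()
        s2 = omega + alpha * r[t] ** 2 + beta * s2

    sigma2 = garch11_variance(r, omega, alpha, beta)
    # The filtered variance is time-varying, producing volatility clustering
    # even with Gaussian innovations -- the feature a plain Lévy-stable fit misses.
    print(sigma2.min(), sigma2.mean(), sigma2.max())
    ```

    Applied to one-minute index returns, the same recursion would capture the time-varying variance that the abstract says the Lévy-stable model leaves unexplained.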

This nep-rmg issue is ©2011 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.