
NEP: New Economics Papers on Risk Management
Issue of 2017–10–08
fourteen papers chosen by
By:  Alexandre Belloni; Mingli Chen; Victor Chernozhukov 
Abstract:  We propose Quantile Graphical Models (QGMs) to characterize predictive and conditional independence relationships within a set of random variables of interest. This framework is intended to quantify dependence in non-Gaussian settings, which are ubiquitous in many econometric applications. We consider two distinct QGMs. First, Conditional Independence QGMs characterize conditional independence at each quantile index, revealing the distributional dependence structure. Second, Predictive QGMs characterize the best linear predictor under asymmetric loss functions. Under Gaussianity these notions essentially coincide, but non-Gaussian settings lead us to different models, as prediction and conditional independence are fundamentally different properties. Combined, the models complement methods based on normal and nonparanormal distributions that study mean predictability and use covariance and precision matrices for conditional independence. We also propose estimators for each QGM. The estimators are based on high-dimensional techniques, including (a continuum of) $\ell_{1}$-penalized quantile regressions and low-biased equations, which allow us to handle the potentially large number of variables. We build upon recent results to obtain valid choices of the penalty parameters and rates of convergence. These results are derived without any assumptions on separation from zero and are uniformly valid across a wide range of models. Under the additional assumption that the coefficients are well-separated from zero, we can consistently estimate the graph associated with the dependence structure by hard thresholding the proposed estimators. Further, we show how QGMs can be used to measure systemic risk contributions and the impact of downside movements in the market on the dependence structure of assets' returns. 
Date:  2016–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1607.00286&r=rmg 
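The quantile regressions underlying both QGMs minimize the asymmetric "check" (pinball) loss. As a minimal pure-Python sketch of that building block (the helper names and toy data are illustrative, not from the paper), the τ-th sample quantile can be recovered by minimizing total pinball loss over candidate values:

```python
def pinball_loss(u, tau):
    # check loss: rho_tau(u) = u * (tau - 1{u < 0})
    return u * tau if u >= 0 else u * (tau - 1.0)

def sample_quantile(ys, tau):
    # a minimizer of the total pinball loss over constants is a tau-quantile;
    # searching over the data points themselves suffices for a sample quantile
    return min(ys, key=lambda q: sum(pinball_loss(y - q, tau) for y in ys))

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
median = sample_quantile(data, 0.5)  # a 0.5-quantile of the data
q90 = sample_quantile(data, 0.9)     # a 0.9-quantile of the data
```

The penalized estimators in the paper add an $\ell_1$ term to this objective; the loss itself is unchanged.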
By:  Igor Halperin 
Abstract:  We propose an analytical approach to the computation of tail probabilities of compound distributions whose individual components have heavy tails. Our approach is based on the contour integration method and gives rise to a representation of the tail probability of a compound distribution in the form of a rapidly convergent one-dimensional integral involving a discontinuity of the imaginary part of its moment generating function across a branch cut. The latter integral can be evaluated in quadratures or, alternatively, represented as an asymptotic expansion. Our approach thus offers a viable alternative (especially at high percentile levels) to more standard methods such as Monte Carlo or the Fast Fourier Transform, traditionally used for such problems. As a practical application, we use our method to compute the operational Value at Risk (VaR) of a financial institution, where individual losses are modeled as spliced distributions whose large-loss components are given by power-law or lognormal distributions. Finally, we briefly discuss extensions of the present formalism to the calculation of tail probabilities of compound distributions made of compound distributions with heavy tails. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.01227&r=rmg 
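For context, the brute-force baseline the analytical method aims to beat at high percentiles is a plain Monte Carlo estimate of the compound tail. A sketch for a compound Poisson sum of heavy-tailed (Pareto) losses, with parameter values that are illustrative rather than taken from the paper:

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's algorithm for a Poisson(lam) variate (fine for small lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def compound_tail_mc(lam, alpha, threshold, n_sims=20_000, seed=0):
    # estimate P(S > threshold) for S = sum of N Pareto(alpha) losses, N ~ Poisson(lam)
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_sims):
        n = poisson_draw(rng, lam)
        s = sum(rng.paretovariate(alpha) for _ in range(n))
        if s > threshold:
            exceed += 1
    return exceed / n_sims

p_low = compound_tail_mc(lam=2.0, alpha=2.5, threshold=5.0)
p_high = compound_tail_mc(lam=2.0, alpha=2.5, threshold=50.0)
```

With the same seed, both calls replay identical simulated paths, so the estimated tail is monotone in the threshold by construction; the deep-tail estimate `p_high` is exactly where Monte Carlo becomes expensive and the paper's integral representation pays off.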
By:  Jérôme Spielmann (LAREMA) 
Abstract:  In this note, we study the ultimate ruin probabilities of a real-valued Lévy process X with light-tailed negative jumps. It is well-known that, for such Lévy processes, the probability of ruin decreases as an exponential function, with a rate given by the root of the Laplace exponent, as the initial value goes to infinity. Under the additional assumption that X has integrable positive jumps, we show how a finer analysis of the Laplace exponent in fact gives a complete description of the bounds on the probability of ruin for this class of Lévy processes. This leads to the identification of a case that is not considered in the literature, for which we give an example. We then apply the result to various risk models, in particular the Cramér-Lundberg model perturbed by Brownian motion. 
Date:  2017–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1709.10295&r=rmg 
By:  Kamil Kladivko; Mihail Zervos 
Abstract:  We consider the problem of employee stock option (ESO) valuation in continuous time. In particular, we consider models that assume that an appropriate random time serves as a proxy for anything that causes the ESO holder to exercise the option early; namely, it reflects the ESO holder's job termination risk as well as early exercise behaviour. In this context, we study the problem of ESO valuation by means of mean-variance hedging. Our analysis is based on dynamic programming and uses PDE techniques. We also express the ESO value that we derive as the expected discounted payoff that the ESO yields with respect to an equivalent martingale measure, which coincides with neither the minimal martingale measure nor the variance-optimal measure. Furthermore, we present a numerical study that illustrates aspects of our theoretical results. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.00897&r=rmg 
By:  Jinglun Yao; Sabine Laurent; Brice Bénaben 
Abstract:  Implied volatilities form a well-known structure of smile or surface which accommodates the Bachelier model and observed market prices of interest rate options. For the swaptions that we study, three parameters are taken into account for indexing the implied volatilities, forming a "volatility cube": strike (or moneyness), time to maturity of the option contract, and duration of the underlying swap contract. It should be noted that the implied volatility structure changes across time, which makes it important to study its dynamics in order to manage volatility risk well. As volatilities are correlated across the cube, it is preferable to decompose the dynamics on orthogonal principal components, which is the idea of the Karhunen-Loève decomposition adopted in this article. The projections on principal components are investigated by Filtered Historical Simulation in order to predict the Value at Risk (VaR), which is then examined by standard tests and the no-arbitrage condition to ensure its appropriateness. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.00859&r=rmg 
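The Karhunen-Loève (principal component) step described above amounts to an eigendecomposition of the covariance of implied-volatility moves across the cube. A minimal sketch of extracting the leading component by power iteration on a toy 2×2 covariance matrix (the matrix is illustrative, not swaption data):

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def leading_component(A, iters=200):
    # power iteration: returns (largest eigenvalue, unit eigenvector)
    # of a symmetric positive semi-definite matrix A
    v = [1.0] * len(A)
    for _ in range(iters):
        w = matvec(A, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(x * y for x, y in zip(matvec(A, v), v))
    return lam, v

cov = [[2.0, 1.0], [1.0, 2.0]]   # toy covariance of two correlated vol moves
lam, v = leading_component(cov)  # eigenvalues are 3 and 1; leading vector ∝ (1, 1)
```

Projecting vol moves onto the top few such components, and bootstrapping those projections, is the essence of the Filtered Historical Simulation step.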
By:  Torchiani, Ingo; Heidorn, Thomas; Schmaltz, Christian 
Abstract:  We propose a new method for measuring how far away banks are from complying with a multi-ratio regulatory framework. We suggest measuring the effort a bank has to make to reach compliance as an additional portfolio, which is derived from a microeconomic banking model. This compliance portfolio provides an integrated measure of the shortfalls resulting from a new regulatory framework. Our method complements the descriptive reporting of individual shortfalls per ratio when monitoring banks' progress toward compliance with a new regulatory framework. We apply our concept to a sample of 46 German banks in order to quantify the effects of the interdependencies of the Basel III capital and liquidity requirements. Comparing our portfolio approach to the shortfalls reported in the Basel III monitoring, we find that the reported shortfalls tend to underestimate the required capital and to overestimate the required stable funding. However, compared to the overall level of the reported shortfalls, the effects resulting from the interdependencies of the Basel III ratios are found to be rather small. 
Keywords:  Basel III, linear programming, impact studies, integrated shortfall 
JEL:  G21 C61 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdps:262017&r=rmg 
By:  Zachariah Peterson 
Abstract:  Kelly's Criterion is well known among gamblers and investors as a method for maximizing the returns one would expect to observe over long periods of betting or investing. These ideas are conspicuously absent from portfolio optimization problems in the financial and automation literature. This paper will show how Kelly's Criterion can be incorporated into standard portfolio optimization models. The model developed here combines risk and return into a single objective function by incorporating a risk parameter. This model is then solved for a portfolio of 10 stocks from a major stock exchange using a differential evolution algorithm. Monte Carlo calculations are used to verify the accuracy of the results obtained from differential evolution. The results show that evolutionary algorithms can be successfully applied to solve a portfolio optimization problem where returns are calculated by applying Kelly's Criterion to each of the assets in the portfolio. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.00431&r=rmg 
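For a single binary bet, the Kelly prescription that the paper extends to portfolios has the closed form f* = (pb − q)/b, and can also be recovered numerically by maximizing expected log-growth — a toy one-dimensional analogue of the differential-evolution search used in the paper (parameter values are ours, not the paper's):

```python
import math

def expected_log_growth(f, p, b):
    # E[log of wealth multiplier] when betting fraction f at odds b:1 with win prob p
    return p * math.log(1.0 + f * b) + (1.0 - p) * math.log(1.0 - f)

def kelly_fraction_numeric(p, b, grid=10_000):
    # brute-force search for the maximizer over f in [0, 1)
    candidates = [i / grid for i in range(grid)]
    return max(candidates, key=lambda f: expected_log_growth(f, p, b))

p, b = 0.6, 1.0
f_closed = (p * b - (1.0 - p)) / b        # Kelly's closed form: 0.2
f_numeric = kelly_fraction_numeric(p, b)  # should agree up to grid spacing
```

Replacing this grid search with differential evolution, and the single bet with a vector of asset weights, gives the shape of the optimization the paper solves.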
By:  ChungHan Hsieh; B. Ross Barmish 
Abstract:  Control of drawdown, that is, the control of the drops in wealth over time from peaks to subsequent lows, is of great concern from a risk management perspective. With this motivation in mind, the focal point of this paper is to address the drawdown issue in a stock trading context. Although our analysis can be carried out without reference to control theory, to make the work accessible to this community, we use the language of feedback systems. The take-off point for the results to follow, which we call the Drawdown Modulation Lemma, characterizes any investment which guarantees that the percentage drawdown is no greater than a pre-specified level with probability one. With the aid of this lemma, we introduce a new scheme which we call drawdown-modulated feedback control. To illustrate the power of the theory, we consider a drawdown-constrained version of the well-known Kelly Optimization Problem, which involves maximizing the expected logarithmic growth of the trader's account value. As the drawdown parameter dmax in our new formulation tends to one, we recover existing results as a special case. This new theory leads to an optimal investment strategy whose application is illustrated via an example with historical stock-price data. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.01503&r=rmg 
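The quantity being controlled, percentage drawdown, is easy to state concretely: the largest peak-to-trough drop of the account value, as a fraction of the running peak. A minimal sketch (the toy price path is ours):

```python
def max_percentage_drawdown(wealth):
    # largest drop from a running peak, as a fraction of that peak
    peak = wealth[0]
    worst = 0.0
    for v in wealth:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

path = [100.0, 120.0, 90.0, 110.0, 80.0]
dd = max_percentage_drawdown(path)  # worst drop: from peak 120 down to 80
```

The lemma in the paper bounds this quantity below dmax with probability one; the function above is only the ex-post measurement.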
By:  ChungHan Hsieh; B. Ross Barmish 
Abstract:  The focal point of this paper is the issue of "drawdown", which arises in recursive betting scenarios and related applications in the stock market. Roughly speaking, drawdown is understood to mean drops in wealth over time from peaks to subsequent lows. Motivated by the fact that this issue is of paramount concern to conservative investors, we dispense with the classical variance as the risk metric and work with drawdown and mean return as the risk-reward pair. In this setting, the main results of this paper address the so-called "efficiency" of linear time-invariant (LTI) investment feedback strategies, which correspond to Markowitz-style schemes in the finance literature. Our analysis begins with the following principle, widely used in finance: given two investment opportunities, if one of them has higher risk and lower return, it will be deemed inefficient, or strictly dominated, and generally rejected in the marketplace. In this framework, with the risk-reward pair described above, our main result is that classical Markowitz-style strategies are inefficient. To establish this, we use a new investment strategy which involves a time-varying linear feedback block K(k), called the drawdown modulator. Using this instead of the original LTI feedback block K in the Markowitz scheme, the desired domination is obtained. As a bonus, it is also seen that the modulator assures a worst-case level of drawdown protection with probability one. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.01501&r=rmg 
By:  Rangan Gupta (University of Pretoria, Pretoria, South Africa and IPAG Business School, Paris, France); Tahir Suleman (School of Economics and Finance, Victoria University of Wellington, New Zealand and School of Business, Wellington Institute of Technology, New Zealand); Mark E. Wohar (College of Business Administration, University of Nebraska at Omaha, Omaha, USA and School of Business and Economics, Loughborough University, Leicestershire, UK) 
Abstract:  This paper provides empirical evidence for the theoretical claim that rare disaster risks have predictive power for exchange rate returns and volatility, using a nonparametric quantile-based methodology. Using dollar-based exchange rates for Brazil, Russia, India, China, and South Africa, the quantile-causality test shows that rare disaster risks indeed affect both returns and volatility over the majority of their respective conditional distributions. In addition, these effects are much stronger when compared to those using the British pound, especially in terms of currency returns. 
Keywords:  Exchange Rate Returns and Volatility, Rare Disasters, Nonparametric Quantile Causality 
JEL:  C22 C58 G14 G15 
Date:  2017–09 
URL:  http://d.repec.org/n?u=RePEc:pre:wpaper:201767&r=rmg 
By:  Victor Chernozhukov; Iván Fernández-Val; Tetsuya Kaji 
Abstract:  Extremal quantile regression, i.e. quantile regression applied to the tails of the conditional distribution, has an increasing number of economic and financial applications, such as value-at-risk, production frontiers, determinants of low infant birth weights, and auction models. This chapter provides an overview of recent developments in the theory and empirics of extremal quantile regression. The advances in the theory have relied on the use of extreme value approximations to the law of the Koenker and Bassett (1978) quantile regression estimator. Extreme value laws have not only been shown to provide more accurate approximations than Gaussian laws at the tails, but have also served as the basis for developing bias-corrected estimators and inference methods using simulation and suitable variations of bootstrap and subsampling. The applicability of these methods is illustrated with two empirical examples on conditional value-at-risk and financial contagion. 
Date:  2016–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1612.06850&r=rmg 
By:  ChungHan Hsieh; B. Ross Barmish 
Abstract:  The focal point of this paper is the so-called Kelly Criterion, a prescription for optimal resource allocation among a set of gambles which are repeated over time. The criterion calls for maximization of the expected value of the logarithmic growth of wealth. While a significant literature exists providing the rationale for such an optimization, this paper concentrates on the limitations of the Kelly-based theory. To this end, we fill a void in published results by providing specific examples quantifying the difficulties encountered when Taylor-style approximations are used and when wealth drawdowns are considered. For the case of drawdown, we describe some research directions which we feel are promising for improvement of the theory. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.01787&r=rmg 
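One of the difficulties the paper quantifies, the error from Taylor-style approximation of the log, can be illustrated with a single asymmetric bet; the numbers below are our own toy example, not the paper's. The second-order surrogate E[log(1+fR)] ≈ fE[R] − f²E[R²]/2 puts its optimum at E[R]/E[R²], which differs from the exact Kelly fraction:

```python
# asymmetric binary bet: win 2 per unit staked with prob 0.4, else lose the stake
p, b = 0.4, 2.0
q = 1.0 - p

# exact Kelly fraction for this bet
f_exact = (p * b - q) / b       # (0.8 - 0.6) / 2 = 0.1

# second-order Taylor surrogate: maximize f*E[R] - 0.5 * f^2 * E[R^2]
ER = p * b - q                  # E[R]   = 0.2
ER2 = p * b ** 2 + q            # E[R^2] = 2.2
f_taylor = ER / ER2             # ~0.0909, below the exact 0.1
```

The gap grows with the asymmetry of the payoff, which is one reason the paper argues such approximations must be handled with care.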
By:  Kathrin Glau; Paul Herold; Dilip B. Madan; Christian Pötz 
Abstract:  The implied volatility is a crucial element of any financial toolbox, since it is used for quoting and hedging options as well as for model calibration. In contrast to the Black-Scholes formula, its inverse, the implied volatility, is not explicitly available, and numerical approximation is required. We propose a bivariate interpolation of the implied volatility surface based on Chebyshev polynomials. This yields a closed-form approximation of the implied volatility which is easy to implement and to maintain. We prove a subexponential error decay. This allows us to obtain an accuracy close to machine precision with polynomials of low degree. We compare the performance of the method in terms of runtime and accuracy to the most common reference methods. In contrast to existing interpolation methods, the proposed method is able to compute the implied volatility for all relevant option data. In this context, numerical experiments confirm a considerable increase in efficiency, especially for large data sets. 
Date:  2017–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1710.01797&r=rmg 
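The mechanics of Chebyshev interpolation can be sketched in one dimension (the paper's construction is bivariate, and the target function below is a stand-in, not an implied-volatility map): sample the function at Chebyshev nodes, compute coefficients by the discrete cosine formula, and evaluate with the Clenshaw recurrence. The fast error decay is visible even at low degree:

```python
import math

def cheb_nodes(n):
    # Chebyshev points of the first kind on [-1, 1]
    return [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]

def cheb_coeffs(f, n):
    # discrete cosine transform of samples at the Chebyshev nodes
    fs = [f(x) for x in cheb_nodes(n)]
    cs = [2.0 / n * sum(fs[j] * math.cos(math.pi * k * (j + 0.5) / n)
                        for j in range(n)) for k in range(n)]
    cs[0] /= 2.0
    return cs

def cheb_eval(cs, x):
    # Clenshaw recurrence for sum_k cs[k] * T_k(x)
    b1 = b2 = 0.0
    for c in reversed(cs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return cs[0] + x * b1 - b2

cs = cheb_coeffs(math.exp, 12)  # degree-11 interpolant of exp on [-1, 1]
err = abs(cheb_eval(cs, 0.3) - math.exp(0.3))
```

For a smooth target like this, twelve nodes already push the pointwise error far below 1e-8, a small-scale illustration of the error decay the paper proves for implied volatility.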
By:  Taupo, Tauisi 
Abstract:  This paper examines the financing of disaster risk management. Future climate and disaster risks are predicted to impose increasing financial pressure on the governments of low-lying atoll nations. The aftermath of a disaster, such as a cyclone, requires financial means for quick response and recovery. We quantify the appropriate levels of financial support for expected disasters in Tuvalu and Kiribati by building on the likely disaster costs calculated by the Pacific Catastrophe Risk Assessment and Financing Initiative (PCRAFI). To these, we add estimates of the potential effects of distant cyclones, droughts, sea level rise, and climate change, as they are predicted to affect low-lying atoll islands. This paper focuses on the potential contribution of the sovereign wealth funds (SWFs) of Tuvalu and Kiribati in reducing reliance on foreign aid for ex-post disaster risk management. We forecast the future size of the SWFs using Monte Carlo simulations and an autoregressive integrated moving average (ARIMA) model. We examine the long-term sustainability of the SWFs and the feasibility of extending their mandate to disaster recovery. 
Keywords:  Sovereign wealth fund, Disasters, Tuvalu, Kiribati, Disaster fund, Sustainability 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:vuw:vuwecf:6633&r=rmg 
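The Monte Carlo side of such a forecasting exercise can be sketched as a simple solvency simulation for a fund paying out a fixed disaster-recovery draw each year; the return, volatility, and draw figures below are illustrative placeholders, not estimates for Tuvalu or Kiribati:

```python
import random

def solvency_probability(balance, years, mean_ret, vol, annual_draw,
                         n_paths=5_000, seed=1):
    # fraction of simulated paths on which the fund never hits zero
    rng = random.Random(seed)
    solvent = 0
    for _ in range(n_paths):
        b = balance
        for _ in range(years):
            b = b * (1.0 + rng.gauss(mean_ret, vol)) - annual_draw
            if b <= 0.0:
                break
        else:
            solvent += 1
    return solvent / n_paths

p_light = solvency_probability(100.0, 20, 0.05, 0.10, 2.0)   # modest annual draw
p_heavy = solvency_probability(100.0, 20, 0.05, 0.10, 10.0)  # heavy annual draw
```

Calibrating the return process (e.g. with an ARIMA model fitted to fund history, as in the paper) and the draw sizes (from PCRAFI-style loss estimates) turns this toy into the sustainability analysis described above.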