
New Economics Papers on Risk Management 
By:  Leonardo Gambacorta; Sudipto Karmakar 
Abstract:  The global financial crisis has highlighted the limitations of risk-sensitive bank capital ratios. To tackle this problem, the Basel III regulatory framework has introduced a minimum leverage ratio, defined as a bank's Tier 1 capital over an exposure measure, which is independent of risk assessment. Using a medium-sized DSGE model that features a banking sector, financial frictions and various economic agents with differing degrees of creditworthiness, we seek to answer three questions: 1) How does the leverage ratio behave over the cycle compared with the risk-weighted asset ratio? 2) What are the costs and benefits of introducing a leverage ratio, in terms of the levels and volatilities of some key macro variables of interest? 3) What can we learn about the interaction of the two regulatory ratios in the long run? The main answers are the following: 1) The leverage ratio acts as a backstop to the risk-sensitive capital requirement: it is a tight constraint during a boom and a soft constraint in a bust; 2) the net benefits of introducing the leverage ratio could be substantial; 3) the steady-state value of the regulatory minima for the two ratios strongly depends on the riskiness and the composition of bank lending portfolios. 
JEL:  G21 G28 G32 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:ptu:wpaper:w201616&r=rmg 
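The two ratios contrasted in the abstract above can be illustrated with a toy balance sheet. All figures and risk weights below are illustrative assumptions, not taken from the paper; the point is only that the leverage ratio ignores risk weights while the risk-weighted ratio does not.

```python
# Toy comparison of the two Basel III ratios for a hypothetical bank.
# All balance-sheet figures and risk weights are assumed for illustration.
tier1_capital = 8.0  # bn

# Exposure classes: (amount, regulatory risk weight) -- assumed values
exposures = {"sovereign": (40.0, 0.0), "mortgage": (60.0, 0.35), "corporate": (50.0, 1.0)}

total_exposure = sum(amount for amount, _ in exposures.values())
risk_weighted_assets = sum(amount * w for amount, w in exposures.values())

leverage_ratio = tier1_capital / total_exposure   # risk-insensitive measure
rwa_ratio = tier1_capital / risk_weighted_assets  # risk-sensitive measure

print(f"leverage ratio: {leverage_ratio:.3f}")  # 8 / 150 = 0.053
print(f"RWA ratio:      {rwa_ratio:.3f}")       # 8 / 71  = 0.113
```

Shifting lending toward high-risk-weight assets raises RWA and lowers the risk-weighted ratio while leaving the leverage ratio unchanged, which is the mechanism behind the "backstop" interpretation.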
By:  International Monetary Fund. 
Abstract:  Since the 2010 IMF FSAP update, Finland's Contingency Planning and Crisis Management (CPCM) framework, including bank recovery and resolution, has improved. The establishment of the Banking Union brought about fundamental changes: the advent of the Single Supervisory Mechanism (SSM) and the Single Resolution Mechanism (SRM); the initiation of ECB supervision over systemic banks; and the subjection of all banks to recovery and resolution planning. These actions complemented previously introduced EU-wide systemic risk monitoring through the European Systemic Risk Board. Consequently, Finland has enacted a host of new legislation and has established a national resolution authority. It has also revised its deposit insurance system. 
Keywords:  Financial Sector Assessment Program; Banks; Bank resolution; Bank supervision; Financial crisis; Monetary policy; Finland 
Date:  2017–01–11 
URL:  http://d.repec.org/n?u=RePEc:imf:imfscr:17/4&r=rmg 
By:  David E. Allen (Centre for Applied Financial Studies, University of South Australia, School of Mathematics and Statistics); Michael McAleer (Department of Quantitative Finance, College of Technology Management, National Tsing Hua University); Abhay K. Singh (School of Business and Law, Edith Cowan University) 
Abstract:  This paper features a tri-criteria analysis of Eurekahedge fund strategy index data. We use nine Eurekahedge equally weighted main strategy indices for the portfolio analysis. The tri-criteria analysis features three objectives: return, risk, and dispersion of risk, in a Multi-Criteria Optimisation (MCO) portfolio analysis. We vary the MCO return and risk targets and contrast the results with four more standard portfolio optimisation criteria, namely the tangency portfolio (MSR), the most diversified portfolio (MDP), the global minimum variance portfolio (GMV), and portfolios based on minimising expected shortfall (ERC). Backtests of the chosen portfolios for this hedge fund data set indicate that the use of MCO is accompanied by uncertainty about the a priori choice of optimal parameter settings for the decision criteria. The empirical results do not appear to outperform more standard bi-criteria portfolio analyses in the backtests undertaken on our hedge fund index data. 
Keywords:  MCO; Portfolio Analysis; Hedge Fund Strategies; Multi-Criteria Optimisation 
JEL:  G15 G17 G32 C58 D53 
Date:  2017–01–23 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20170013&r=rmg 
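Two of the benchmark portfolios named in the abstract above have closed forms that are easy to sketch. The following is a minimal illustration on simulated returns (the four-asset universe, return moments, and zero risk-free rate are assumptions; nothing here uses the Eurekahedge data):

```python
import numpy as np

# Sketch of the global minimum variance (GMV) and tangency / maximum Sharpe
# ratio (MSR) portfolios on simulated strategy returns.
rng = np.random.default_rng(0)
returns = rng.normal(0.005, 0.02, size=(500, 4))  # 500 periods, 4 strategies

mu = returns.mean(axis=0)
sigma = np.cov(returns, rowvar=False)
sigma_inv = np.linalg.inv(sigma)
ones = np.ones(len(mu))

# GMV: minimise w' Sigma w subject to w' 1 = 1
w_gmv = sigma_inv @ ones / (ones @ sigma_inv @ ones)

# Tangency portfolio (zero risk-free rate assumed): proportional to Sigma^-1 mu
w_msr = sigma_inv @ mu / (ones @ sigma_inv @ mu)

print(w_gmv, w_msr)  # both weight vectors sum to one
```

The MCO approach in the paper instead searches over a frontier of return, risk, and risk-dispersion targets, so it has no comparable closed form.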
By:  Jiro Akahori; Flavia Barsotti; Yuri Imamura 
Abstract:  The aim of this paper is to provide a mathematical contribution on the semi-static hedging of timing risk associated with positions in American-style options under a multidimensional market model. Barrier options are considered, and semi-static hedges are studied and discussed for a fairly large class of underlying price dynamics. Timing risk is identified with the uncertainty about the time at which the payoff of the barrier option is due. Starting from the work of Carr and Picron (1999), where the authors show that timing risk can be hedged via static positions in plain vanilla options, the present paper extends their static hedge formula by giving sufficient conditions to decompose a generalized timing risk into an integral of knock-in options in a multidimensional market model. A dedicated study of the semi-static hedge is then conducted by defining the corresponding strategy based on positions in barrier options. The proposed methodology allows one to construct not only first-order hedges but also higher-order semi-static hedges, which can be interpreted as asymptotic expansions of the hedging error. The convergence of these higher-order semi-static hedges to an exact hedge is shown. The main theoretical results are illustrated for i) a symmetric case and ii) a one-dimensional case, where the first-order and second-order hedging errors are derived in analytic closed form. The materiality of the hedging benefit of going from order one to order two, by reiterating the timing-risk hedging strategy, is assessed through numerical evidence, showing that order two can yield a more than 90% reduction of the hedging 'cost' relative to order one (depending on the specific barrier option characteristics). 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1701.05695&r=rmg 
By:  Zhidong Bai (KLASMOE and School of Mathematics and Statistics, Northeast Normal University, China); Hua Li; Michael McAleer; Wing-Keung Wong 
Abstract:  This paper considers the portfolio problem for high-dimensional data, when both the dimension and the sample size are large. We analyze the traditional Markowitz mean-variance (MV) portfolio using large-dimensional random matrix theory, and find that the spectral distribution of the sample covariance matrix is the main factor causing the expected return of the traditional MV portfolio to overestimate that of the theoretical MV portfolio. We suggest correcting the spectrum of the sample covariance matrix, yielding the spectrally-corrected sample covariance, and using it to improve the traditional MV portfolio into a spectrally-corrected MV portfolio. In the expressions for the expected return and risk of the MV portfolio, the population covariance matrix always enters through a quadratic form, which guides MV portfolio estimation. We derive the limiting behavior of this quadratic form under the spectrally-corrected sample covariance matrix, and explain its superior performance relative to the sample covariance as the dimension increases to infinity proportionally with the sample size. Moreover, this paper deduces the limiting behavior of the expected return and risk of the spectrally-corrected MV portfolio, and illustrates its superior properties. In simulations, we compare the spectrally-corrected estimates with the traditional and bootstrap-corrected estimates, and show that the spectrally-corrected estimates perform best in terms of portfolio return and portfolio risk. We also compare the performance of the proposed estimator with different optimal portfolio estimates on real data from the S&P 500. The empirical findings are consistent with the theory developed in the paper. 
Keywords:  Markowitz mean-variance optimization, Optimal return, Optimal portfolio allocation, Large random matrix, Bootstrap method, Spectrally-corrected covariance matrix 
JEL:  G11 C13 C61 
Date:  2016–12 
URL:  http://d.repec.org/n?u=RePEc:ucm:doicae:1705&r=rmg 
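The overestimation phenomenon that the abstract above corrects is easy to reproduce numerically. The sketch below (all parameters assumed; the identity population covariance is chosen purely for illustration) compares the plug-in estimate of the quadratic form μ'Σ⁻¹μ, which drives the MV portfolio's expected return, against its true value when the dimension is half the sample size:

```python
import numpy as np

# When p is comparable to n, the plug-in quadratic form m' S^-1 m computed
# from the sample covariance S grossly overestimates mu' Sigma^-1 mu.
rng = np.random.default_rng(1)
p, n = 100, 200
mu = np.full(p, 0.01)
true_opt = mu @ mu  # population Sigma = I, so mu' Sigma^-1 mu = mu' mu

plug_in = []
for _ in range(20):
    X = rng.normal(0.01, 1.0, size=(n, p))  # n observations of p assets
    S = np.cov(X, rowvar=False)
    m = X.mean(axis=0)
    plug_in.append(m @ np.linalg.inv(S) @ m)  # plug-in estimate

print(np.mean(plug_in), true_opt)  # the plug-in average is markedly larger
```

The paper's spectral correction (not reproduced here) shrinks the eigenvalues of S so that this quadratic form is estimated consistently as p/n stays bounded away from zero.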
By:  Andrzej Ruszczynski; Jianing Yao 
Abstract:  We propose a numerical recipe for risk evaluation defined by a backward stochastic differential equation. Using dual representation of the risk measure, we convert the risk valuation to a stochastic control problem where the control is a certain Radon-Nikodym derivative process. By exploring the maximum principle, we show that a piecewise-constant dual control provides a good approximation on a short interval. A dynamic programming algorithm extends the approximation to a finite time horizon. Finally, we illustrate the application of the procedure to risk management in conjunction with nested simulation. 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1701.06234&r=rmg 
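The nested-simulation setting mentioned at the end of the abstract above can be sketched with a toy example. This is not the paper's BSDE scheme; it only illustrates the outer/inner Monte Carlo structure on which such risk evaluations are run. The lognormal dynamics, the call payoff, and all parameters are assumptions:

```python
import numpy as np

# Nested simulation of a tail risk measure: outer paths generate scenarios,
# inner paths revalue the position in each scenario, and a conditional tail
# expectation is taken over the revalued losses.
rng = np.random.default_rng(2)
s0, vol, horizon = 100.0, 0.2, 0.25
n_outer, n_inner = 2000, 200

# Outer stage: risk-factor scenarios at the risk horizon (driftless lognormal)
s_h = s0 * np.exp(-0.5 * vol**2 * horizon
                  + vol * np.sqrt(horizon) * rng.normal(size=n_outer))

# Inner stage: revalue a claim max(S - K, 0) a further tau ahead, per scenario
tau, k = 0.25, 100.0
z = rng.normal(size=(n_outer, n_inner))
s_t = s_h[:, None] * np.exp(-0.5 * vol**2 * tau + vol * np.sqrt(tau) * z)
values = np.maximum(s_t - k, 0.0).mean(axis=1)

# Tail risk of the loss distribution: mean loss beyond the 95% quantile
losses = values.mean() - values
var95 = np.quantile(losses, 0.95)
cvar95 = losses[losses >= var95].mean()
print(cvar95)
```

The paper's contribution is to replace the brute-force inner stage with an approximation driven by a piecewise-constant dual control, which this sketch does not attempt.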
By:  Martin DUDLER (Quantica Capital); Bruno GMUER (Quantica Capital); Semyon MALAMUD (Ecole Polytechnique Fédérale de Lausanne and Swiss Finance Institute) 
Abstract:  We introduce a new class of momentum strategies, the risk-adjusted time series momentum (RAMOM) strategies, which are based on averages of past futures returns normalized by their volatility. We test these strategies on a universe of 64 liquid futures contracts and show that RAMOM strategies outperform the time series momentum (TSMOM) strategies of Ooi, Moskowitz, and Pedersen (2012) for almost all combinations of holding and lookback periods. This outperformance is driven by a striking new stylized fact that we document: for almost all of the 64 futures contracts, independent of the asset class, realized futures volatility is contemporaneously negatively related to the Fama and French (1987) market (MKT), value (HML), and momentum (UMD) factors. As a result, RAMOM returns have a natural, built-in exposure to the MKT, HML, and UMD factors and outperform TSMOM returns precisely in times when (some of) the factors deliver good returns. In particular, RAMOM allows investors to gain significant exposure to the Fama and French factors without actually trading the very large stock universe. Furthermore, the dollar turnover of RAMOM strategies is about 40% lower than that of TSMOM, implying a drastic reduction in trading costs. We construct measures of momentum-specific volatility, both within and across asset classes, and show how these volatility measures can be used for risk management. We find that momentum risk management significantly increases Sharpe ratios, but at the same time may lead to more pronounced negative skewness and tail risk. Furthermore, momentum risk management leads to a much lower exposure to the market, value, and momentum factors; as a result, risk-managed momentum returns offer much higher diversification benefits than standard momentum returns. 
Keywords:  Momentum, risk, return, volatility, trend following 
JEL:  C41 G11 
URL:  http://d.repec.org/n?u=RePEc:chf:rpseri:rp1471&r=rmg 
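The core RAMOM idea, averaging past returns and normalizing by their volatility, can be sketched in a few lines. The lookback length, the simulated return series, and the sign-based position rule below are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

# Minimal sketch of a risk-adjusted time-series momentum signal: the average
# of past returns divided by their realised volatility over the same window.
def ramom_signal(returns, lookback=60):
    window = returns[-lookback:]
    vol = window.std(ddof=1)
    return window.mean() / vol if vol > 0 else 0.0

rng = np.random.default_rng(3)
futures_returns = rng.normal(0.0005, 0.01, size=500)  # one simulated contract

signal = ramom_signal(futures_returns)
position = np.sign(signal)  # long if risk-adjusted momentum is positive
print(signal, position)
```

Because the signal is already scaled by volatility, positions need no separate ex-post volatility targeting, which is one channel behind the lower turnover reported in the abstract.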
By:  Peter Csoka (“Momentum” Game Theory Research Group, Centre for Economic and Regional Studies, Hungarian Academy of Sciences, and Corvinus University of Budapest); P. Jean-Jacques Herings (Department of Economics, Maastricht University, The Netherlands) 
Abstract:  The most important rule to determine payments in real-life bankruptcy problems is the proportional rule. Many bankruptcy problems are characterized by network aspects and default may occur as a result of contagion. Indeed, in financial networks with defaulting agents, the values of the agents' assets are endogenous as they depend on the extent to which claims on other agents can be collected. These network aspects make an axiomatic analysis challenging. This paper is the first to provide an axiomatization of the proportional rule in financial networks. Our two central axioms are impartiality and non-manipulability by identical agents. The other axioms are claims boundedness, limited liability, priority of creditors, and continuity. 
Keywords:  financial networks, systemic risk, bankruptcy rules, proportional rule 
JEL:  C71 G10 
Date:  2016–12 
URL:  http://d.repec.org/n?u=RePEc:has:discpr:1701&r=rmg 
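The endogeneity described in the abstract above, where each agent's assets depend on what others pay, can be sketched with a proportional-rule clearing computation in the Eisenberg-Noe style. The three-bank liability matrix and outside assets below are assumed toy numbers:

```python
import numpy as np

# Proportional-rule clearing in a small financial network: each bank pays
# all it owes, or all it has, pro rata across its creditors.
liabilities = np.array([[0.0, 10.0, 5.0],
                        [4.0,  0.0, 6.0],
                        [2.0,  3.0, 0.0]])  # L[i, j]: bank i owes bank j
outside_assets = np.array([6.0, 4.0, 1.0])

total_owed = liabilities.sum(axis=1)
pi = liabilities / total_owed[:, None]  # proportional rule: pro-rata shares

# Fixed-point iteration for the clearing payment vector
payments = total_owed.copy()
for _ in range(200):
    assets = outside_assets + pi.T @ payments  # endogenous asset values
    payments = np.minimum(total_owed, assets)

print(payments)  # bank 0 defaults: it pays 12 of the 15 it owes
```

Here banks 1 and 2 pay in full while bank 0 pays pro rata, illustrating why axioms such as limited liability and priority of creditors must be stated on the network as a whole.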
By:  Andreas Fagereng (Statistics Norway); Luigi Guiso (EIEF); Luigi Pistaferri (Stanford University) 
Abstract:  We propose a new approach to identify the strength of the precautionary motive and the extent of self-insurance in response to earnings risk, based on Euler equation estimates. To address endogeneity problems, we use Norwegian administrative data and instrument consumption and earnings volatility with the variance of firm-specific shocks. The instrument is valid because firms pass some of their productivity shocks on to wages; moreover, for most workers, firm shocks are hard to avoid. Our estimates suggest a coefficient of relative prudence of around 2, in a very plausible range. 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:eie:wpaper:1702&r=rmg 
By:  Rubén Loaiza-Maya; Michael S. Smith; Worapree Maneesoonthorn 
Abstract:  We propose parametric copulas that capture serial dependence in stationary heteroskedastic time series. We develop our copula for first order Markov series, and extend it to higher orders and multivariate series. We derive the copula of a volatility proxy, based on which we propose new measures of volatility dependence, including comovement and spillover in multivariate series. In general, these depend upon the marginal distributions of the series. Using exchange rate returns, we show that the resulting copula models can capture their marginal distributions more accurately than univariate and multivariate GARCH models, and produce more accurate value at risk forecasts. 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1701.07152&r=rmg 
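The first-order Markov construction in the abstract above can be illustrated with the simplest case: a Gaussian copula for the serial dependence combined with an arbitrary margin. The copula parameter, sample size, and the exponential margin below are assumptions chosen only to make the sketch self-contained:

```python
from math import erf, sqrt

import numpy as np

# A stationary first-order Markov series whose serial dependence is a
# Gaussian copula with parameter rho, with a non-Gaussian (exponential)
# margin imposed through the probability integral transform.
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(4)
rho, n = 0.7, 5000

# Latent Gaussian AR(1) supplies the copula; only its ranks matter
z = np.empty(n)
z[0] = rng.normal()
for t in range(1, n):
    z[t] = rho * z[t - 1] + sqrt(1.0 - rho**2) * rng.normal()

u = np.array([norm_cdf(v) for v in z])  # uniform margins: the copula sample
x = -np.log(1.0 - u)                    # exponential(1) margin via inverse CDF

print(np.corrcoef(u[:-1], u[1:])[0, 1])  # positive serial dependence in u
```

The paper's copulas are built instead from heteroskedastic (e.g. stochastic volatility) latent series, so that the implied volatility dependence, not just the level dependence, is captured.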
By:  Marian Gidea 
Abstract:  We develop a method based on topological data analysis to detect early signs of critical transitions in financial data. From the time series of multiple stock prices, we build time-dependent correlation networks, which exhibit topological structures. We compute the persistent homology associated with these structures in order to track the changes in topology when approaching a critical transition. As a case study, we investigate a portfolio of stocks during a period prior to the US financial crisis of 2007-2008, and show the presence of early signs of the critical transition. 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1701.06081&r=rmg 
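The first stage of the pipeline in the abstract above, turning windows of return series into time-dependent correlation networks, can be sketched as follows. The prices are simulated and the window sizes are assumptions; the standard correlation distance d_ij = sqrt(2(1 - rho_ij)) is used, and the persistent-homology stage (which requires a TDA library) is not reproduced:

```python
import numpy as np

# Build a sequence of time-dependent correlation distance matrices from
# windows of multiple return series; each matrix defines the metric network
# on which persistent homology would subsequently be computed.
rng = np.random.default_rng(5)
n_days, n_stocks, window = 250, 10, 50
log_returns = rng.normal(0.0, 0.01, size=(n_days, n_stocks))

distance_networks = []
for start in range(0, n_days - window + 1, window):
    rho = np.corrcoef(log_returns[start:start + window], rowvar=False)
    d = np.sqrt(np.maximum(2.0 * (1.0 - rho), 0.0))  # correlation distance
    distance_networks.append(d)

print(len(distance_networks), distance_networks[0].shape)
```

Tracking how the persistence diagrams of these successive metric spaces change over time is what provides the early-warning signal in the paper.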
By:  Amir T. Payandeh Najafabadi; Ali Panahi Bazaz 
Abstract:  A usual reinsurance policy for insurance companies admits one or two layers of payment deductions. Under the optimality criterion of minimizing the conditional tail expectation (CTE) risk measure of the insurer's total risk, this article generalizes an optimal stop-loss reinsurance policy to an optimal multi-layer reinsurance policy. To obtain such an optimal multi-layer policy, the article starts from a given optimal stop-loss reinsurance policy $f(\cdot)$. In the first step, it partitions the interval $[0,\infty)$ into two intervals $[0,M_1)$ and $[M_1,\infty)$. By shifting the origin of the Cartesian coordinate system to $(M_{1},f(M_{1}))$, it shows that, under the CTE criterion, $f(x)I_{[0, M_1)}(x)+(f(M_1)+f(x-M_1))I_{[M_1,\infty)}(x)$ is again an optimal policy. This extension procedure can be repeated to obtain an optimal $k$-layer reinsurance policy. Finally, the unknown parameters of the optimal multi-layer reinsurance policy are estimated using additional appropriate criteria. Three simulation-based studies demonstrate: (1) the practical applications of our findings, and (2) how one may employ other appropriate criteria to estimate the unknown parameters of an optimal multi-layer contract. The multi-layer reinsurance policy is optimal in the same sense as the original stop-loss reinsurance policy, and it satisfies additional optimality criteria which the original policy does not. Under the criterion of minimizing a general translative and monotone risk measure $\rho(\cdot)$ of either the insurer's total risk or both the insurer's and the reinsurer's total risks, the article (in its discussion) also extends a given optimal reinsurance contract $f(\cdot)$ to a multi-layer and continuous reinsurance policy. 
Date:  2017–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1701.05447&r=rmg 
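The objects in the abstract above, a layered reinsurance payout and the CTE risk measure, can be sketched numerically. The lognormal loss distribution, the retention of 5, and the two layer attachment points are illustrative assumptions, not the paper's optimal parameters:

```python
import numpy as np

# A layered cession function and an empirical conditional tail expectation.
def layered_cession(x, layers):
    """Reinsurer pays the part of loss x falling in each (attach, detach) layer."""
    return sum(min(max(x - a, 0.0), d - a) for a, d in layers)

def cte(losses, alpha=0.95):
    """Conditional tail expectation: mean loss beyond the alpha-quantile."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(6)
gross = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)  # simulated losses

layers = [(5.0, 10.0), (10.0, 20.0)]  # two layers above a retention of 5
ceded = np.array([layered_cession(x, layers) for x in gross])
retained = gross - ceded

print(cte(retained), cte(gross))  # the layers lower the insurer's CTE
```

The paper's procedure chooses the attachment points so that the retained risk is CTE-optimal at every layer, rather than fixing them in advance as this sketch does.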