nep-rmg New Economics Papers
on Risk Management
Issue of 2017‒01‒29
twelve papers chosen by

  1. Leverage and Risk Weighted Capital Requirements By Leonardo Gambacorta; Sudipto Karmakar
  2. Finland: Financial Sector Assessment Program; Technical Note-Contingency Planning and Crisis Management By International Monetary Fund.
  3. A Multi-Criteria Portfolio Analysis of Hedge Fund Strategies By David E. Allen; Michael McAleer; Abhay K. Singh
  4. The Value of Timing Risk By Jiro Akahori; Flavia Barsotti; Yuri Imamura
  5. Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization By Zhidong Bai; Hua Li; Michael McAleer; Wing-Keung Wong
  6. A Dual Method For Backward Stochastic Differential Equations with Application to Risk Valuation By Andrzej Ruszczynski; Jianing Yao
  7. Risk-Adjusted Time Series Momentum By Martin DUDLER; Bruno GMUER; Semyon MALAMUD
  8. An Axiomatization of the Proportional Rule in Financial Networks By Peter Csoka; P. Jean-Jacques Herings
  9. Firm-Related Risk and Precautionary Saving Response By Andreas Fagereng; Luigi Guiso; Luigi Pistaferri
  10. Time Series Copulas for Heteroskedastic Data By Rubén Loaiza-Maya; Michael S. Smith; Worapree Maneesoonthorn
  11. Topological data analysis of critical transitions in financial networks By Marian Gidea
  12. An Optimal Multi-layer Reinsurance Policy under Conditional Tail Expectation By Amir T. Payandeh Najafabadi; Ali Panahi Bazaz

  1. By: Leonardo Gambacorta; Sudipto Karmakar
    Abstract: The global financial crisis has highlighted the limitations of risk-sensitive bank capital ratios. To tackle this problem, the Basel III regulatory framework has introduced a minimum leverage ratio, defined as a bank's Tier 1 capital over an exposure measure, which is independent of risk assessment. Using a medium-sized DSGE model that features a banking sector, financial frictions and various economic agents with differing degrees of creditworthiness, we seek to answer three questions: 1) How does the leverage ratio behave over the cycle compared with the risk-weighted asset ratio? 2) What are the costs and the benefits of introducing a leverage ratio, in terms of the levels and volatilities of some key macro variables of interest? 3) What can we learn about the interaction of the two regulatory ratios in the long run? The main answers are the following: 1) The leverage ratio acts as a backstop to the risk-sensitive capital requirement: it is a tight constraint during a boom and a soft constraint in a bust; 2) the net benefits of introducing the leverage ratio could be substantial; 3) the steady-state values of the regulatory minima for the two ratios depend strongly on the riskiness and the composition of bank lending portfolios.
    JEL: G21 G28 G32
    Date: 2016
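The two regulatory ratios the abstract contrasts are straightforward to compute; a minimal sketch follows, with hypothetical balance-sheet numbers and Basel-style risk weights that are illustrative, not from the paper:

```python
# Hypothetical illustration of the two regulatory ratios discussed above:
# the risk-weighted capital ratio and the Basel III leverage ratio.

def risk_weighted_ratio(tier1, exposures, risk_weights):
    """Tier 1 capital over risk-weighted assets."""
    rwa = sum(e * w for e, w in zip(exposures, risk_weights))
    return tier1 / rwa

def leverage_ratio(tier1, exposures):
    """Tier 1 capital over the (unweighted) exposure measure."""
    return tier1 / sum(exposures)

# Illustrative balance sheet: mortgages, corporate loans, sovereign bonds.
tier1 = 8.0
exposures = [60.0, 30.0, 10.0]      # notional exposures
risk_weights = [0.35, 1.00, 0.00]   # hypothetical Basel-style weights

print(round(risk_weighted_ratio(tier1, exposures, risk_weights), 3))  # 0.157
print(round(leverage_ratio(tier1, exposures), 3))                     # 0.08
```

Because the leverage ratio ignores risk weights, it does not loosen when risk weights compress in a boom, which is why it can act as a backstop to the risk-sensitive ratio.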
  2. By: International Monetary Fund.
    Abstract: Since the 2010 IMF FSAP update, Finland’s Contingency Planning and Crisis Management (CPCM) framework, including bank recovery and resolution, has improved. The establishment of the Banking Union brought about fundamental changes: the advent of the Single Supervisory Mechanism (SSM) and the Single Resolution Mechanism (SRM); the initiation of ECB supervision over systemic banks; and the subjection of all banks to recovery and resolution planning. These actions complemented previously introduced EU-wide systemic risk monitoring through the European Systemic Risk Board. Consequently, Finland has enacted a host of new legislation and has established a national resolution authority. It has also revised its deposit insurance system.
    Keywords: Financial Sector Assessment Program;Banks;Bank resolution;Bank supervision;Financial crisis;Monetary policy;Finland;
    Date: 2017–01–11
  3. By: David E. Allen (Centre for Applied Financial Studies, University of South Australia, School of Mathematics and Statistics,); Michael McAleer (Department of Quantitative Finance, College of Technology Management, National Tsing Hua University,); Abhay K. Singh (School of Business and Law, Edith Cowan University)
    Abstract: This paper features a tri-criteria analysis of Eurekahedge hedge fund strategy index data. We use nine Eurekahedge equally weighted main strategy indices for the portfolio analysis. The tri-criteria analysis features three objectives: return, risk, and dispersion of risk, in a Multi-Criteria Optimisation (MCO) portfolio analysis. We vary the MCO return and risk targets and contrast the results with four more standard portfolio optimisation criteria, namely the tangency portfolio (MSR), the most diversified portfolio (MDP), the global minimum variance portfolio (GMW), and portfolios based on minimising expected shortfall (ERC). Backtests of the chosen portfolios for this hedge fund data set indicate that the use of MCO is accompanied by uncertainty about the a priori choice of optimal parameter settings for the decision criteria. The empirical results do not appear to outperform more standard bi-criteria portfolio analyses in the backtests undertaken on our hedge fund index data.
    Keywords: MCO; Portfolio Analysis; Hedge Fund Strategies; Multi-Criteria Optimisation
    JEL: G15 G17 G32 C58 D53
    Date: 2017–01–23
  4. By: Jiro Akahori; Flavia Barsotti; Yuri Imamura
    Abstract: The aim of this paper is to provide a mathematical contribution to the semi-static hedging of timing risk associated with positions in American-style options under a multi-dimensional market model. Barrier options are considered, and semi-static hedges are studied and discussed for a fairly large class of underlying price dynamics. Timing risk is identified with the uncertainty associated with the time at which the payoff payment of the barrier option is due. Starting from the work of Carr and Picron (1999), who show that timing risk can be hedged via static positions in plain vanilla options, the present paper extends their static hedge formula by giving sufficient conditions to decompose a generalized timing risk into an integral of knock-in options in a multi-dimensional market model. A dedicated study of the semi-static hedge is then conducted by defining the corresponding strategy based on positions in barrier options. The proposed methodology allows one to construct not only first-order hedges but also higher-order semi-static hedges, which can be interpreted as asymptotic expansions of the hedging error. The convergence of these higher-order semi-static hedges to an exact hedge is shown. The main theoretical results are illustrated for (i) a symmetric case and (ii) a one-dimensional case, where the first-order and second-order hedging errors are derived in analytic closed form. The materiality of the hedging benefit of going from order one to order two by re-iterating the timing risk hedging strategy is discussed through numerical evidence, showing that order two can achieve a reduction of more than 90% in the hedging 'cost' relative to order one (depending on the specific barrier option characteristics).
    Date: 2017–01
  5. By: Zhidong Bai (KLASMOE and School of Mathematics and Statistics, Northeast Normal University, China.); Hua Li; Michael McAleer; Wing-Keung Wong
    Abstract: This paper considers the portfolio problem for high-dimensional data, where the dimension and the sample size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio using large-dimensional matrix theory, and find that the spectral distribution of the sample covariance matrix is the main factor causing the expected return of the traditional MV portfolio to overestimate that of the theoretical MV portfolio. We suggest correcting the spectrum of the sample covariance matrix to obtain a spectrally-corrected sample covariance, and use it to improve the traditional MV portfolio into a spectrally-corrected MV portfolio. In the expressions for the expected return and risk of the MV portfolio, the population covariance matrix always appears in a quadratic form, which guides the MV portfolio estimation. We derive the limiting behavior of this quadratic form with the spectrally-corrected sample covariance matrix, and explain its superior performance relative to the sample covariance as the dimension increases to infinity proportionally with the sample size. Moreover, this paper derives the limiting behavior of the expected return and risk of the spectrally-corrected MV portfolio, and illustrates its superior properties. In simulations, we compare the spectrally-corrected estimates with the traditional and bootstrap-corrected estimates, and show that the spectrally-corrected estimates perform best in terms of portfolio return and portfolio risk. We also compare the performance of the proposed estimator with different optimal portfolio estimates on real data from the S&P 500. The empirical findings are consistent with the theory developed in the paper.
    Keywords: Markowitz mean-variance optimization, Optimal return, Optimal portfolio allocation, Large random matrix, Bootstrap method, Spectrally-corrected covariance matrix.
    JEL: G11 C13 C61
    Date: 2016–12
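To fix notation, here is a minimal sketch of plug-in Markowitz MV weights together with a generic eigenvalue-cleaning step; the clipping rule below is a common illustrative stand-in, not the paper's spectrally-corrected estimator:

```python
import numpy as np

# Sketch of plug-in Markowitz mean-variance weights, and a simple
# eigenvalue-clipping covariance correction. The clipping rule is a
# generic stand-in, not the paper's spectrally-corrected estimator.

def mv_weights(mu, sigma):
    """Fully invested MV weights, proportional to Sigma^{-1} mu."""
    w = np.linalg.solve(sigma, mu)
    return w / w.sum()

def clip_eigenvalues(sample_cov, floor=None):
    """Replace small sample eigenvalues by their average (generic cleaning)."""
    vals, vecs = np.linalg.eigh(sample_cov)
    if floor is None:
        floor = vals.mean()
    cleaned = np.where(vals < floor, vals[vals < floor].mean(), vals)
    return (vecs * cleaned) @ vecs.T    # V diag(cleaned) V^T

rng = np.random.default_rng(0)
p, n = 50, 100                          # dimension comparable to sample size
returns = rng.standard_normal((n, p)) * 0.02
mu = returns.mean(axis=0) + 0.001
sample_cov = np.cov(returns, rowvar=False)

w_plugin = mv_weights(mu, sample_cov)
w_cleaned = mv_weights(mu, clip_eigenvalues(sample_cov))
print(round(w_plugin.sum(), 6), round(w_cleaned.sum(), 6))  # 1.0 1.0
```

When p is of the same order as n, the small sample eigenvalues are badly biased downward, so inverting the raw sample covariance inflates the apparent optimal return; cleaning the spectrum before inversion addresses exactly that quadratic-form distortion.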
  6. By: Andrzej Ruszczynski; Jianing Yao
    Abstract: We propose a numerical recipe for risk evaluation defined by a backward stochastic differential equation. Using the dual representation of the risk measure, we convert the risk valuation to a stochastic control problem in which the control is a certain Radon-Nikodym derivative process. By exploiting the maximum principle, we show that a piecewise-constant dual control provides a good approximation on a short interval. A dynamic programming algorithm extends the approximation to a finite time horizon. Finally, we illustrate the application of the procedure to risk management in conjunction with nested simulation.
    Date: 2017–01
  7. By: Martin DUDLER (Quantica Capital); Bruno GMUER (Quantica Capital); Semyon MALAMUD (Ecole Polytechnique Fédérale de Lausanne and Swiss Finance Institute)
    Abstract: We introduce a new class of momentum strategies, the risk-adjusted time series momentum (RAMOM) strategies, which are based on averages of past futures returns, normalized by their volatility. We test these strategies on a universe of 64 liquid futures contracts and show that RAMOM strategies outperform the time series momentum (TSMOM) strategies of Moskowitz, Ooi, and Pedersen (2012) for almost all combinations of holding and look-back periods. This outperformance is driven by a striking new stylized fact that we document: for almost all of the 64 futures contracts, independent of the asset class, realized futures volatility is contemporaneously negatively related to the Fama and French market (MKT), value (HML), and momentum (UMD) factors. As a result, RAMOM returns have a natural, built-in exposure to the MKT, HML, and UMD factors and outperform TSMOM returns precisely in times when (some of) the factors deliver good returns. In particular, RAMOM allows investors to gain significant exposure to the Fama and French factors without actually trading the very large stock universe. Furthermore, the dollar turnover of RAMOM strategies is about 40% lower than that of TSMOM, implying a drastic reduction in trading costs. We construct measures of momentum-specific volatility, both within and across asset classes, and show how these volatility measures can be used for risk management. We find that momentum risk management significantly increases Sharpe ratios, but at the same time may lead to more pronounced negative skewness and tail risk. Furthermore, momentum risk management leads to a much lower exposure to the market, value, and momentum factors; as a result, risk-managed momentum returns offer much higher diversification benefits than standard momentum returns.
    Keywords: Momentum, risk, return, volatility, trend following
    JEL: C41 G11
  8. By: Peter Csoka (“Momentum” Game Theory Research Group - Centre for Economic and Regional Studies, Hungarian Academy of Sciences and Corvinus University of Budapest); P. Jean-Jacques Herings (Department of Economics, Maastricht University, The Netherlands)
    Abstract: The most important rule to determine payments in real-life bankruptcy problems is the proportional rule. Many bankruptcy problems are characterized by network aspects and default may occur as a result of contagion. Indeed, in financial networks with defaulting agents, the values of the agents' assets are endogenous as they depend on the extent to which claims on other agents can be collected. These network aspects make an axiomatic analysis challenging. This paper is the first to provide an axiomatization of the proportional rule in financial networks. Our two central axioms are impartiality and non-manipulability by identical agents. The other axioms are claims boundedness, limited liability, priority of creditors, and continuity.
    Keywords: financial networks, systemic risk, bankruptcy rules, proportional rule
    JEL: C71 G10
    Date: 2016–12
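The proportional rule itself is simple to state for a single defaulting agent; the sketch below shows an estate divided in proportion to claims, and omits the network fixed-point aspect (endogenous asset values under contagion) that the paper axiomatizes:

```python
# Sketch of the proportional bankruptcy rule discussed above: when an
# agent's estate cannot cover all claims, each creditor is paid in
# proportion to its claim. The network/contagion aspect is omitted.

def proportional_rule(estate, claims):
    """Divide the estate among creditors in proportion to their claims."""
    total = sum(claims)
    if total <= estate:               # enough to pay everyone in full
        return list(claims)
    return [estate * c / total for c in claims]

payments = proportional_rule(60.0, [30.0, 50.0, 20.0])
print(payments)  # [18.0, 30.0, 12.0]
```

Note that the rule automatically satisfies claims boundedness (no creditor is paid more than its claim) and limited liability (total payments never exceed the estate), two of the axioms listed in the abstract.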
  9. By: Andreas Fagereng (Statistics Norway); Luigi Guiso (EIEF); Luigi Pistaferri (Stanford University)
    Abstract: We propose a new approach to identify the strength of the precautionary motive and the extent of self-insurance in response to earnings risk based on Euler equation estimates. To address endogeneity problems, we use Norwegian administrative data and instrument consumption and earnings volatility with the variance of firm-specific shocks. The instrument is valid because firms pass some of their productivity shocks onto wages; moreover, for most workers firm shocks are hard to avoid. Our estimates suggest a coefficient of relative prudence of 2, in a very plausible range.
    Date: 2017
  10. By: Rubén Loaiza-Maya; Michael S. Smith; Worapree Maneesoonthorn
    Abstract: We propose parametric copulas that capture serial dependence in stationary heteroskedastic time series. We develop our copula for first order Markov series, and extend it to higher orders and multivariate series. We derive the copula of a volatility proxy, based on which we propose new measures of volatility dependence, including co-movement and spillover in multivariate series. In general, these depend upon the marginal distributions of the series. Using exchange rate returns, we show that the resulting copula models can capture their marginal distributions more accurately than univariate and multivariate GARCH models, and produce more accurate value at risk forecasts.
    Date: 2017–01
  11. By: Marian Gidea
    Abstract: We develop a method based on topological data analysis to detect early signs of critical transitions in financial data. From the time series of multiple stock prices, we build time-dependent correlation networks, which exhibit topological structures. We compute the persistent homology associated with these structures in order to track the changes in topology when approaching a critical transition. As a case study, we investigate a portfolio of stocks during the period prior to the US financial crisis of 2007-2008, and show the presence of early signs of the critical transition.
    Date: 2017–01
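The first step described above, turning multiple price series into a correlation-based structure, can be sketched as below. The distance transform d_ij = sqrt(2(1 - rho_ij)) is a common choice, not necessarily the paper's exact construction, and the persistent homology computation itself (via a TDA library and a Vietoris-Rips filtration) is omitted:

```python
import numpy as np

# Sketch: map log-return correlations of several stocks to a distance
# matrix, the input to a persistent homology computation (omitted here).
# d_ij = sqrt(2 * (1 - rho_ij)) is a standard correlation distance.

def correlation_distances(prices):
    """Log-return correlations mapped to a distance matrix."""
    log_returns = np.diff(np.log(prices), axis=0)
    rho = np.corrcoef(log_returns, rowvar=False)
    return np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))

rng = np.random.default_rng(2)
# Simulated prices for 5 stocks over 120 days (illustrative data only).
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal((120, 5)), axis=0))
dist = correlation_distances(prices)
print(dist.shape, round(float(dist[0, 0]), 6))  # (5, 5) 0.0
```

Computing this matrix over a rolling window yields the time-dependent networks whose changing topology the paper tracks as a transition approaches.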
  12. By: Amir T. Payandeh Najafabadi; Ali Panahi Bazaz
    Abstract: A typical reinsurance policy admits one or two layers of payment deductions. Under the criterion of minimizing the conditional tail expectation (CTE) risk measure of the insurer's total risk, this article generalizes an optimal stop-loss reinsurance policy to an optimal multi-layer reinsurance policy. To construct such a policy, the article starts from a given optimal stop-loss reinsurance policy $f(\cdot)$. In the first step, it partitions the interval $[0,\infty)$ into $[0,M_1)$ and $[M_1,\infty),$ shifts the origin of the Cartesian coordinate system to $(M_{1},f(M_{1})),$ and shows that, under the CTE criterion, $f(x)I_{[0, M_1)}(x)+(f(M_1)+f(x-M_1))I_{[M_1,\infty)}(x)$ is again an optimal policy. Repeating this extension procedure yields an optimal $k$-layer reinsurance policy. Finally, the unknown parameters of the optimal multi-layer reinsurance policy are estimated using additional appropriate criteria. Three simulation-based studies demonstrate (1) the practical applications of these findings and (2) how one may employ other appropriate criteria to estimate the unknown parameters of an optimal multi-layer contract. The multi-layer reinsurance policy is optimal in the same sense as the original stop-loss policy, and it satisfies additional optimality criteria that the original policy does not. Under the criterion of minimizing a general translative and monotone risk measure $\rho(\cdot)$ of either the insurer's total risk or both the insurer's and the reinsurer's total risks, the article (in its discussion) also extends a given optimal reinsurance contract $f(\cdot)$ to a multi-layer and continuous reinsurance policy.
    Date: 2017–01
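The CTE criterion and a one-layer stop-loss treaty can be sketched as follows; the loss distribution, confidence level, and retention are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch of the conditional tail expectation (CTE) criterion used above,
# and of the insurer's retained loss under a simple stop-loss treaty with
# retention d (the reinsurer pays the excess over d). All parameter values
# and the loss distribution are illustrative.

def cte(losses, level=0.95):
    """Average loss beyond the level-quantile (a.k.a. expected shortfall)."""
    var = np.quantile(losses, level)
    return losses[losses >= var].mean()

rng = np.random.default_rng(3)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

d = 2.0                               # stop-loss retention
retained = np.minimum(losses, d)      # insurer's loss after reinsurance

print(cte(retained) <= cte(losses))   # True: reinsurance trims the tail CTE
```

A multi-layer policy of the kind the paper constructs would replace the single cap above with a piecewise ceded amount over the intervals $[0,M_1), [M_1,M_2),\dots$, each layer re-applying the stop-loss shape.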

General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.