nep-rmg New Economics Papers
on Risk Management
Issue of 2018‒10‒29
fourteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Alternative methods of estimating the longevity risk By Catalina Bolancé; Montserrat Guillén; Arelly Ornelas
  2. A Survey of Systemic Risk Indicators By Antonio Di Cesare; Anna Rogantini Picco
  3. Credit Risk Analysis Using Machine and Deep Learning Models By Dominique Guegan; Peter Addo; Bertrand Hassani
  4. The De Vylder-Goovaerts conjecture holds true within the diffusion limit By Stefan Ankirchner; Christophette Blanchet-Scalliet; Nabil Kazi-Tani
  5. Who bears interest rate risk? By Hoffmann, Peter; Langfield, Sam; Pierobon, Federico; Vuillemey, Guillaume
  6. Combining uncertainty with uncertainty to get certainty? Efficiency analysis for regulation purposes By Andor, Mark A.; Parmeter, Christopher; Sommer, Stephan
  8. Learning the Flood Risk By Chen, Zhenshan; Towe, Charles A.
  9. Estimating Cost of Volatility Risk in Agricultural Commodity Markets By Yan, Lei; Garcia, Philip
  10. Solvency tuned premium for a composite loss distribution By Alexandre Brouste; Anis Matoussi; Tom Rohmer; Christophe Dutang; Vanessa Désert; Erwan Gales; Pierre Golhen; Bérengère Milleville; Willie Lekeufack
  11. The global effects of global risk and uncertainty By Bonciani, Dario; Ricci, Martino
  12. Extreme Events and Serial Dependence in Commodity Prices By Park, Eunchun; Maples, Josh
  13. Identifying shocks via time-varying volatility By Lewis, Daniel J.
  14. Multivariate stochastic volatility with co-heteroscedasticity By Joshua Chan; Arnaud Doucet; Roberto León-González; Rodney W. Strachan

  1. By: Catalina Bolancé (RISKCENTER-IREA, Department of Econometrics, Statistics and Applied Economics, Universitat de Barcelona); Montserrat Guillén (RISKCENTER-IREA, Department of Econometrics, Statistics and Applied Economics, Universitat de Barcelona); Arelly Ornelas (RISKCENTER-IREA, Department of Econometrics, Statistics and Applied Economics, Universitat de Barcelona)
    Abstract: The aim of this paper is to estimate longevity risk and its trend according to the age of the individual. We focus on individuals over 65 and use value-at-risk to measure longevity risk. We propose an alternative methodology based on estimating the truncated cumulative distribution function and its quantiles, and we apply a robust estimation method for fitting parametric distributions. Finally, we compare parametric and nonparametric estimates of longevity risk.
    Keywords: Longevity, value-at-risk, nonparametric inference.
    Date: 2018–10
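    As a rough illustration of the nonparametric value-at-risk measure described above, the sketch below computes an empirical quantile of a truncated (over-65) lifetime distribution. The lifetimes are purely synthetic stand-ins; the distribution and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ages at death conditional on surviving to 65
# (Weibull-shaped residual lifetimes; purely illustrative numbers).
ages = 65.0 + rng.weibull(2.2, size=100_000) * 12.0

def longevity_var(ages_at_death, alpha=0.99):
    """Nonparametric value-at-risk of the truncated lifetime distribution:
    the age that a fraction alpha of the 65+ population does not exceed."""
    return np.quantile(ages_at_death, alpha)

var99 = longevity_var(ages, 0.99)
```

    A parametric alternative would fit a distribution to the same truncated sample and read the quantile off the fitted model, which is the comparison the paper carries out.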
  2. By: Antonio Di Cesare (Bank of Italy); Anna Rogantini Picco (European University Institute)
    Abstract: The aim of this survey is to provide a rigorous but not overly technical introduction to several systemic risk indicators frequently used in official publications by institutions involved in macroprudential analysis and policy. The selected indicators are classified using three taxonomies. The first adopts the point of view of regulators and policy-makers, whose attention is usually focused on the implementability and forward-looking nature of the indicators. The second taxonomy highlights the features that are most relevant for researchers, i.e., reliance on a sound theoretical background and the use of advanced analytical techniques. The third taxonomy classifies the indicators according to the specific aspects of systemic risk that they capture. For each indicator, both general and technical descriptions are provided, as well as specific examples.
    Keywords: systemic risk, financial stability, systemic risk indicators
    JEL: G21 G28 G14 C13
    Date: 2018–10
  3. By: Dominique Guegan (UP1 - Université Panthéon-Sorbonne, CES - Centre d'économie de la Sorbonne - CNRS - Centre National de la Recherche Scientifique - UP1 - Université Panthéon-Sorbonne, Labex ReFi - UP1 - Université Panthéon-Sorbonne, IPAG Business School - IPAG BUSINESS SCHOOL PARIS, University of Ca’ Foscari [Venice, Italy]); Peter Addo (AFD - Agence française de développement, Labex ReFi - UP1 - Université Panthéon-Sorbonne); Bertrand Hassani (Labex ReFi - UP1 - Université Panthéon-Sorbonne, CES - Centre d'économie de la Sorbonne - CNRS - Centre National de la Recherche Scientifique - UP1 - Université Panthéon-Sorbonne, Capgemini Consulting [Paris], UCL-CS - Computer science department [University College London] - UCL - University College of London [London])
    Abstract: Due to the advanced technology associated with Big Data, data availability and computing power, most banks and lending institutions are renewing their business models. Credit risk prediction, monitoring, model reliability and effective loan processing are key to decision-making and transparency. In this work, we build binary classifiers based on machine and deep learning models on real data to predict loan default probabilities. The top 10 most important features from these models are selected and then used in the modeling process to test the stability of the binary classifiers by comparing their performance on separate data. We observe that the tree-based models are more stable than the models based on multilayer artificial neural networks. This raises several questions about the intensive use of deep learning systems in enterprises.
    Keywords: financial regulation,deep learning,Big data,data science,credit risk
    Date: 2018
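    The feature-selection-and-refit step described in the abstract can be sketched as follows. This is an illustrative pipeline on synthetic data, not the authors' models or data set: the classifier choice, sample sizes and score threshold are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a loan-default data set.
X, y = make_classification(n_samples=4000, n_features=25, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Fit a tree-based classifier and keep only the 10 most important features,
# mirroring the paper's selection step.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top10 = np.argsort(rf.feature_importances_)[-10:]

# Refit on the reduced feature set and score stability on held-out data.
rf10 = RandomForestClassifier(n_estimators=200, random_state=0)
rf10.fit(X_tr[:, top10], y_tr)
auc = roc_auc_score(y_te, rf10.predict_proba(X_te[:, top10])[:, 1])
```

    The paper's stability comparison repeats this exercise for neural-network classifiers and contrasts their out-of-sample performance with the tree-based models.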
  4. By: Stefan Ankirchner (Institut für Mathematik - Friedrich-Schiller-Universität Jena); Christophette Blanchet-Scalliet (ICJ - Institut Camille Jordan [Villeurbanne] - ECL - École Centrale de Lyon - Université de Lyon - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon - INSA Lyon - Institut National des Sciences Appliquées de Lyon - Université de Lyon - INSA - Institut National des Sciences Appliquées - UJM - Université Jean Monnet [Saint-Étienne] - CNRS - Centre National de la Recherche Scientifique); Nabil Kazi-Tani (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon)
    Abstract: The De Vylder and Goovaerts conjecture is an open problem in risk theory, stating that the finite-time ruin probability in a standard risk model is greater than or equal to the corresponding ruin probability evaluated in an associated model with equalized claim amounts. Here, equalized means that the jump sizes of the associated model are equal to the average jump in the initial model between 0 and a terminal time T. In this paper, we consider the diffusion approximations of both the standard risk model and its associated risk model. We prove that the associated model, when conveniently renormalized, converges in distribution to a Gaussian process satisfying a simple SDE. We then compute the probability that this diffusion hits the level 0 before time T and compare it with the same probability for the diffusion approximation of the standard risk model. We conclude that the De Vylder and Goovaerts conjecture holds true for the diffusion limits.
    Keywords: Risk theory,Ruin probability,Equalized claims,Diffusion approximations
    Date: 2018–10–04
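    The finite-time hitting probability for a drifted Brownian surplus, which is the kind of quantity compared in the diffusion limit above, has a classical closed form. The sketch below computes it and cross-checks against a Monte Carlo simulation; the surplus parameters are generic illustrations, not taken from the paper.

```python
import numpy as np
from math import erf, exp, sqrt

def Phi(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ruin_before_T(u, mu, sigma, T):
    """P(inf_{0<=t<=T} u + mu*t + sigma*W_t <= 0): classical finite-time
    ruin probability of a Brownian surplus with initial capital u."""
    s = sigma * sqrt(T)
    return (Phi((-u - mu * T) / s)
            + exp(-2.0 * mu * u / sigma**2) * Phi((-u + mu * T) / s))

# Monte Carlo cross-check on a discretized grid (illustrative parameters).
rng = np.random.default_rng(1)
u, mu, sigma, T, n, m = 2.0, 0.5, 1.0, 5.0, 1000, 5000
dt = T / n
steps = mu * dt + sigma * sqrt(dt) * rng.standard_normal((m, n))
paths = u + np.cumsum(steps, axis=1)
mc = np.mean(paths.min(axis=1) <= 0.0)
analytic = ruin_before_T(u, mu, sigma, T)
```

    The discretized minimum slightly understates the continuous-time ruin probability, so the Monte Carlo estimate sits a little below the closed form.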
  5. By: Hoffmann, Peter; Langfield, Sam; Pierobon, Federico; Vuillemey, Guillaume
    Abstract: We study the allocation of interest rate risk within the European banking sector using novel data. Banks’ exposure to interest rate risk is small on aggregate, but heterogeneous in the cross-section. In contrast to conventional wisdom, net worth is increasing in interest rates for approximately half of the institutions in our sample. Cross-sectional variation in banks’ exposures is driven by cross-country differences in loan-rate fixation conventions for mortgages. Banks use derivatives to partially hedge on-balance-sheet exposures. Residual exposures imply that changes in interest rates have redistributive effects within the banking sector.
    Keywords: Banking, Hedging, Interest Rate Risk, Risk Management
    JEL: G21 E43 E44
    Date: 2018–09
  6. By: Andor, Mark A.; Parmeter, Christopher; Sommer, Stephan
    Abstract: Data envelopment analysis (DEA) and stochastic frontier analysis (SFA), as well as combinations thereof, are widely applied in incentive regulation practice, where the assessment of efficiency plays a major role in regulation design and benchmarking. Using a Monte Carlo simulation experiment, this paper compares the performance of six alternative methods commonly applied by regulators. Our results demonstrate that combination approaches, such as taking the maximum or the mean over DEA and SFA efficiency scores, have certain practical merits and might offer a useful alternative to strict reliance on a single method. In particular, the results highlight that taking the maximum not only minimizes the risk of underestimation, but can also improve the precision of efficiency estimation. Based on our results, we give recommendations for the estimation of individual efficiencies for regulation purposes and beyond.
    Keywords: data envelopment analysis,stochastic frontier analysis,efficiency analysis,regulation,network operators
    JEL: C10 C50 D24 L50
    Date: 2018
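    The two combination rules compared in the paper are simple element-wise operations on the per-firm efficiency scores. A minimal sketch, with hypothetical scores for eight firms (the DEA/SFA estimation itself is not reproduced here):

```python
import numpy as np

# Hypothetical efficiency scores from the two methods (illustrative numbers).
dea = np.array([0.95, 0.80, 0.70, 0.88, 0.60, 0.75, 0.90, 0.65])
sfa = np.array([0.90, 0.85, 0.78, 0.80, 0.72, 0.70, 0.95, 0.60])

# The combination approaches studied: element-wise maximum and mean.
combined_max = np.maximum(dea, sfa)
combined_mean = (dea + sfa) / 2.0
```

    The max rule guarantees that no firm's combined score falls below either method's estimate, which is why it guards against underestimating efficiency.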
  7. By: Waithira, Waweru Caroline
    Keywords: Livestock Production/Industries, Risk and Uncertainty
    Date: 2017–08
  8. By: Chen, Zhenshan; Towe, Charles A.
    Keywords: Risk and Uncertainty, Natural Resource Economics, Research Methods/Econometrics/Stats
    Date: 2018–06–20
  9. By: Yan, Lei; Garcia, Philip
    Keywords: Risk and Uncertainty, Food and Agricultural Marketing, Ag Finance and Farm Management
    Date: 2018–06–20
  10. By: Alexandre Brouste (LMM - Laboratoire Manceau de Mathématiques - UM - Le Mans Université); Anis Matoussi (Département de Mathématiques [Le Mans] - UM - Le Mans Université); Tom Rohmer (LMM - Laboratoire Manceau de Mathématiques - UM - Le Mans Université); Christophe Dutang (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - Université Paris-Dauphine - CNRS - Centre National de la Recherche Scientifique); Vanessa Désert (IRA - Institut du Risque et de l'Assurance, Le Mans, MMA - Mutuelles du Mans Assurances); Erwan Gales (MMA - Mutuelles du Mans Assurances); Pierre Golhen (MMA - Mutuelles du Mans Assurances); Bérengère Milleville (MMA - Mutuelles du Mans Assurances); Willie Lekeufack (MMA - Mutuelles du Mans Assurances, ISFA - Institut de Science Financière et d'Assurances)
    Abstract: A parametric framework is proposed to model both attritional and atypical claims for insurance pricing. This model relies on a classical Generalized Linear Model for attritional claims and a non-standard Generalized Pareto distribution regression model for atypical claims. Maximum likelihood estimators (closed-form for the Generalized Linear Model part and computed with an Iterated Weighted Least Squares procedure for the Generalized Pareto distribution regression part) are proposed to calibrate the model. Two premium principles (the expected value principle and the standard deviation principle) are computed on a real data set of fire coverage for a corporate line of business. In our methodology, the safety loading in the two premium principles is tuned to meet a solvency constraint, so that the premium caps a high-level quantile of the aggregate annual claim distribution over a reference portfolio.
    Keywords: commercial lines,non-life insurance,pricing,composite distribution,solvency criterion
    Date: 2018–09–28
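    The solvency-tuning step described above can be sketched numerically: simulate aggregate annual claims, pick a target quantile, and back out the safety loading for each premium principle. The claim-generating process and target level below are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated aggregate annual claims S over a reference portfolio
# (compound Poisson with lognormal severities; purely illustrative).
n_sim = 20_000
counts = rng.poisson(100, n_sim)
S = np.array([rng.lognormal(0.0, 1.0, k).sum() for k in counts])

q = np.quantile(S, 0.995)       # solvency target: high-level quantile of S
mean, sd = S.mean(), S.std()

# Tune the safety loading so each premium principle caps the target quantile.
theta_ev = q / mean - 1.0       # expected value principle: (1+theta)*E[S]
theta_sd = (q - mean) / sd      # standard deviation principle: E[S]+theta*sd
```

    With either loading, the portfolio-level premium equals the 99.5% quantile of the simulated aggregate claim distribution, which is the solvency constraint the paper imposes.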
  11. By: Bonciani, Dario; Ricci, Martino
    Abstract: In this paper, we analyse the effects of a shock to global financial uncertainty and risk aversion on real economic activity. To this end, we extract a global factor, which explains approximately 40% of the variance of about 1,000 risky asset returns from around the world. We then study how shocks to the factor affect economic activity in 36 advanced and emerging small open economies by estimating local projections in a panel regression framework. We find the output responses to be quite heterogeneous across countries but, in general, negative and persistent. Furthermore, the effects of shocks to the global factor are stronger in countries with a higher degree of trade and/or financial openness, as well as in countries with higher levels of external debt, less developed financial sectors, and higher risk ratings.
    Keywords: global financial cycle, local projection, macroeconomic transmission, panel data
    JEL: C30 F41 E32 F65
    Date: 2018–09
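    Extracting a common factor from a large panel of returns, as described above, amounts to taking the first principal component. A minimal sketch on a synthetic panel (the factor structure, loadings and panel dimensions are illustrative assumptions, far smaller than the ~1,000 series used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: one common ("global") factor plus idiosyncratic noise.
T_obs, N = 300, 200
global_factor = rng.standard_normal(T_obs)
loadings = 0.8 + 0.4 * rng.random(N)
returns = np.outer(global_factor, loadings) + rng.standard_normal((T_obs, N))

# First principal component of the demeaned panel = estimated global factor.
X = returns - returns.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
factor_hat = U[:, 0] * s[0]
share = s[0]**2 / (s**2).sum()   # variance share explained by the factor
corr = np.corrcoef(factor_hat, global_factor)[0, 1]
```

    The estimated factor is identified only up to sign and scale, so it is its correlation with the true factor (and the variance share it explains) that matters.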
  12. By: Park, Eunchun; Maples, Josh
    Keywords: Risk and Uncertainty, Ag Finance and Farm Management, Research Methods/Econometrics/Stats
    Date: 2018–06–20
  13. By: Lewis, Daniel J. (Federal Reserve Bank of New York)
    Abstract: An n-variable structural vector auto-regression (SVAR) can be identified (up to shock order) from the evolution of the residual covariance across time if the structural shocks exhibit heteroskedasticity (Rigobon (2003), Sentana and Fiorentini (2001)). However, the path of residual covariances is available only under specific parametric assumptions on the variance process. I propose a new identification argument that identifies the SVAR up to shock orderings using the autocovariance structure of second moments of the residuals implied by an arbitrary stochastic process for the shock variances. These higher moments are available without parametric assumptions like those required by existing approaches. I offer intuitive criteria to select among shock orderings; this selection does not impact inference asymptotically. The identification scheme performs well in simulations. I apply it to the debate on fiscal multipliers. I obtain estimates that are lower than those of Blanchard and Perotti (2002) and Mertens and Ravn (2014), but in line with those of more recent studies.
    Keywords: identification; impulse response function; structural shocks; SVAR; fiscal multiplier; time-varying volatility; heteroskedasticity
    JEL: C32 C58 E20 E62 H30
    Date: 2018–10–01
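    The classical two-regime case that this paper generalizes (Rigobon 2003) identifies the structural rotation by simultaneously diagonalizing the residual covariance matrices of the two volatility regimes. The sketch below illustrates that mechanism with a known impact matrix; it is the textbook parametric version, not the paper's new autocovariance-based scheme.

```python
import numpy as np
from scipy.linalg import eigh

# Structural impact matrix B and regime-specific shock variances (assumed).
B = np.array([[1.0, 0.5],
              [0.3, 1.0]])
lam1 = np.diag([1.0, 1.0])    # regime-1 shock variances
lam2 = np.diag([4.0, 0.25])   # regime-2 shock variances
sigma1 = B @ lam1 @ B.T       # reduced-form residual covariance, regime 1
sigma2 = B @ lam2 @ B.T       # reduced-form residual covariance, regime 2

# Generalized eigenvectors of (sigma1, sigma2) simultaneously diagonalize
# both covariances, recovering the structural rotation up to scale and
# shock ordering -- the same indeterminacy the paper's criteria address.
eigvals, V = eigh(sigma1, sigma2)
D1 = V.T @ sigma1 @ V   # diagonal
D2 = V.T @ sigma2 @ V   # identity, by the eigh normalization
```

    The heteroskedasticity is what does the work: if the shock variances were proportional across regimes, the eigenvalues would be equal and the rotation would not be pinned down.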
  14. By: Joshua Chan; Arnaud Doucet; Roberto León-González; Rodney W. Strachan
    Abstract: This paper develops a new methodology that decomposes shocks into homoscedastic and heteroscedastic components. This specification implies that there exist linear combinations of heteroscedastic variables that eliminate heteroscedasticity; that is, these linear combinations are homoscedastic, a property we call co-heteroscedasticity. The heteroscedastic part of the model uses a multivariate stochastic volatility inverse Wishart process. The resulting model is invariant to the ordering of the variables, which we show is important for impulse response analysis and also matters more generally, e.g., for volatility estimation and variance decompositions. The specification allows estimation in moderately high dimensions. The computational strategy uses a novel particle filter algorithm, a reparameterization that substantially improves algorithmic convergence, and an alternating-order particle Gibbs sampler that reduces the number of particles needed for accurate estimation. We provide two empirical applications: one to exchange rate data and another to a large Vector Autoregression (VAR) of US macroeconomic variables. We find strong evidence for co-heteroscedasticity and, in the second application, estimate the impact of monetary policy on the homoscedastic and heteroscedastic components of macroeconomic variables.
    Keywords: Markov Chain Monte Carlo, Gibbs Sampling, Flexible Parametric Model, Particle Filter, Co-heteroscedasticity, state-space, reparameterization, alternating-order
    JEL: C11 C15
    Date: 2018–10
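    The co-heteroscedasticity property can be illustrated with a toy two-variable system: both series load on one stochastic-volatility shock, and the loading-orthogonal combination is homoscedastic. This is a minimal sketch with an assumed log-AR(1) volatility process, not the paper's inverse-Wishart model.

```python
import numpy as np

rng = np.random.default_rng(11)

# One heteroscedastic common shock f with log-AR(1) stochastic volatility.
T = 20_000
h = np.zeros(T)
for t in range(1, T):
    h[t] = 0.98 * h[t - 1] + 0.2 * rng.standard_normal()
f = np.exp(h / 2) * rng.standard_normal(T)

a = np.array([1.0, 0.5])          # loadings on the common shock (assumed)
e = rng.standard_normal((T, 2))   # homoscedastic idiosyncratic shocks
y = np.outer(f, a) + e            # observed heteroscedastic variables

w = np.array([0.5, -1.0])         # w @ a == 0: kills the SV component
combo = y @ w                     # co-heteroscedastic = homoscedastic combo

def vol_of_vol(x, window=250):
    # Dispersion of non-overlapping rolling variances, relative to their
    # mean: large for heteroscedastic series, small for homoscedastic ones.
    v = np.array([x[i:i + window].var()
                  for i in range(0, len(x) - window, window)])
    return v.std() / v.mean()

hetero = vol_of_vol(y[:, 0])
homo = vol_of_vol(combo)
```

    Any weight vector orthogonal to the loadings works; the model's job is to estimate how many such homoscedastic directions exist and where they point.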

This nep-rmg issue is ©2018 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP website. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.