NEP: New Economics Papers on Risk Management
Issue of 2019‒02‒11
sixteen papers chosen by
By: | Guglielmo D'Amico; Filippo Petroni; Philippe Regnault; Stefania Scocchera; Loriano Storchi |
Abstract: | In this paper, we propose a methodology based on piecewise homogeneous Markov chains for credit ratings, together with a multivariate model of credit spreads, to evaluate financial risk in the European Union (EU). Two main aspects are considered: how financial risk is distributed among the European countries, and how large the total risk is. The first aspect is evaluated by means of the expected value of a dynamic entropy measure. The second is addressed by computing the evolution of the total credit spread over time. Moreover, the covariance between countries' total spreads allows us to detect contagion within the EU. The methodology is applied to real data on 24 countries from the three major rating agencies: Moody's, Standard and Poor's, and Fitch. The results suggest that both financial risk inequality and the value of the total risk increase over time, at rates that differ across rating agencies, and that the dependence structure is characterized by strong correlation between most European countries. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.00691&r=all |
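To make the modeling ingredients concrete, here is a minimal Python sketch (not the authors' calibrated model) of a piecewise homogeneous rating Markov chain, with the Shannon entropy of the predicted rating distribution as a crude stand-in for the paper's dynamic entropy measure. The transition matrices, breakpoint, and initial distribution are all invented for illustration.

```python
import numpy as np

# Toy piecewise homogeneous rating chain: the transition matrix switches
# at a breakpoint (e.g. a regime change detected in the rating histories).
# All numbers here are invented for illustration.
P1 = np.array([[0.90, 0.08, 0.02],   # states: A, B, C (C = worst rating)
               [0.05, 0.85, 0.10],
               [0.00, 0.10, 0.90]])
P2 = np.array([[0.80, 0.15, 0.05],   # post-break regime: more downgrades
               [0.03, 0.77, 0.20],
               [0.00, 0.05, 0.95]])

def entropy(p):
    """Shannon entropy of a probability vector (0 log 0 := 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

dist = np.array([0.6, 0.3, 0.1])     # initial cross-country rating mix
for t in range(10):
    P = P1 if t < 5 else P2          # piecewise homogeneity: break at t = 5
    dist = dist @ P
    print(f"t={t+1:2d}  dist={np.round(dist, 3)}  entropy={entropy(dist):.3f}")
```

Rising entropy of the rating distribution is one simple way to read "risk becoming more dispersed across countries"; the paper's measure is dynamic and more refined.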
By: | Isabel Casas (BCAM; and Department of Business Economics, University of Southern Denmark); Xiuping Mao (School of Finance, Zhongnan University of Economics and Law); Helena Veiga (Department of Statistics and Instituto Flores de Lemus, Universidad Carlos III de Madrid; and BRU-IUL, Instituto Universitário de Lisboa) |
Abstract: | This study explores the predictive power of new estimators of the equity variance risk premium and conditional variance for future excess stock market returns, economic activity, and financial instability, both during and after the last global financial crisis. These estimators are obtained from new parametric and semiparametric asymmetric extensions of the heterogeneous autoregressive model. Using these new specifications, we find that the equity variance risk premium predicts future excess stock returns, whereas conditional variance predicts them only at long horizons. Moreover, a comparison of the overall results reveals that conditional variance gains predictive power during the global financial crisis. Furthermore, both the variance risk premium and conditional variance predict future financial instability, whereas conditional variance is the only predictor of economic activity at all horizons. Before the global financial crisis, the new parametric asymmetric specification of the heterogeneous autoregressive model gains predictive power relative to previous work in the literature. During the crisis, however, it is the new time-varying coefficient models that show considerably higher predictive power for stock market returns and financial instability, suggesting that an extreme volatility period requires models that can adapt quickly to turmoil. |
Keywords: | Net measures, Nonparametric methods, Predictability, Realized variance, Variance risk premium, VIX |
JEL: | C22 C51 C52 C53 C58 G17 |
Date: | 2018–03–05 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-10&r=all |
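The variance risk premium in this literature is commonly computed as option-implied variance minus a forecast of realized variance. The sketch below builds that textbook version with a plain HAR forecast on synthetic data; the paper's own estimators are asymmetric parametric and semiparametric extensions of HAR, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily realized variance (persistent log-AR(1)) and a noisy
# "implied variance" series carrying a premium.  Invented for illustration.
T = 1500
lrv = np.zeros(T)
for t in range(1, T):
    lrv[t] = 0.98 * lrv[t - 1] + 0.2 * rng.standard_normal()
rv = 1e-4 * np.exp(lrv)                       # realized variance
iv = rv * 1.3 + 2e-5 * rng.random(T)          # implied variance with a premium

def har_forecast(rv):
    """One-step HAR-RV forecast: regress RV(t+1) on daily/weekly/monthly RV."""
    d = rv[21:-1]
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv) - 1)])
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv) - 1)])
    X = np.column_stack([np.ones_like(d), d, w, m])
    beta, *_ = np.linalg.lstsq(X, rv[22:], rcond=None)
    xlast = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    return xlast @ beta

erv = har_forecast(rv)
vrp = iv[-1] - erv        # variance risk premium: implied minus expected RV
print(f"E[RV] = {erv:.3e}, IV = {iv[-1]:.3e}, VRP = {vrp:.3e}")
```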
By: | Torben G. Andersen (Northwestern University and CREATES); Nicola Fusari (The Johns Hopkins University Carey Business School); Viktor Todorov (Northwestern University) |
Abstract: | We study short-term market risks implied by weekly S&P 500 index options. The introduction of weekly options has dramatically shifted the maturity profile of traded options over the last five years, with a substantial proportion now having expiry within one week. Such short-dated options provide a direct way to study volatility and jump risks. Unlike longer-dated options, they are largely insensitive to the risk of intertemporal shifts in the economic environment. Adopting a novel semi-nonparametric approach, we uncover variation in the negative jump tail risk which is not spanned by market volatility and helps predict future equity returns. Incidents of tail shape shifts coincide with mispricing of standard parametric models for longer-dated options. As such, our approach allows for easy identification of periods of heightened concerns about negative tail events that are not always "signaled" by the level of market volatility and elude standard asset pricing models. |
Keywords: | Options, Jumps, Stochastic Volatility, Extreme Events, Time-Varying Tail Risk, Return Predictability |
JEL: | C51 C52 G12 |
Date: | 2018–01–15 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-08&r=all |
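Short-dated option strips pin down the risk-neutral distribution of near-term returns. The authors use a semi-nonparametric procedure; as a simpler stand-in, the sketch below applies the standard Breeden-Litzenberger relation, q(K) = exp(rT) * d^2 P / dK^2, to a synthetic one-week put strip with an invented volatility smile, and reads off the risk-neutral left-tail probability.

```python
import numpy as np
from scipy.stats import norm

def bs_put(S, K, T, r, sigma):
    """Black-Scholes European put price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

S0, r, T = 2800.0, 0.02, 5 / 252          # one-week S&P-like option strip
K = np.linspace(2300, 3100, 161)          # dense strike grid
# Toy smile: higher implied vol at low strikes mimics priced left-tail risk.
iv = 0.15 + 0.35 * np.maximum(0, (S0 - K) / S0)
puts = bs_put(S0, K, T, r, iv)

# Breeden-Litzenberger: q(K) = exp(rT) * d^2 P / dK^2, by finite differences.
dK = K[1] - K[0]
q = np.exp(r * T) * np.diff(puts, 2) / dK**2
Kmid = K[1:-1]

# Risk-neutral probability of a drop of more than 5% within the week.
tail = q[Kmid < 0.95 * S0].sum() * dK
print(f"risk-neutral P(S_T < 0.95 * S0) = {tail:.4f}")
```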
By: | Leopoldo Catania (Aarhus University and CREATES); Tommaso Proietti (CEIS & DEF, University of Rome "Tor Vergata") |
Abstract: | The prediction of volatility is of primary importance for business applications in risk management, asset allocation and the pricing of derivative instruments. This paper proposes a novel measurement model that accounts for the possibly time-varying interaction of realized volatility and asset returns, through a bivariate model designed to capture the main stylised facts: (i) the long memory of the volatility process, (ii) the heavy-tailedness of the returns distribution, and (iii) the negative dependence between volatility and daily market returns. We assess the relevance of "volatility in volatility" and time-varying "leverage" effects for the out-of-sample forecasting performance of the model, and evaluate density forecasts of the future level of market volatility. The empirical results show that our specification can outperform the benchmark HAR-RV model in terms of both point and density forecasts. |
Keywords: | realized volatility, forecasting, leverage effect, volatility in volatility |
Date: | 2019–02–06 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:450&r=all |
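Density forecasts of volatility are typically compared via the average log predictive score. The sketch below illustrates this on synthetic data with a leverage-like effect, comparing a random-walk density to a Gaussian AR(1)-with-leverage density; the paper's bivariate model and the HAR-RV benchmark are both richer than either toy forecaster here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic log realized volatility with a leverage-like response to returns.
T = 1000
ret = 0.01 * rng.standard_normal(T)
logrv = np.zeros(T)
for t in range(1, T):
    logrv[t] = (0.9 * logrv[t - 1] - 5.0 * ret[t - 1]   # negative returns raise vol
                + 0.3 * rng.standard_normal())

# Two one-step-ahead Gaussian density forecasts for log RV:
# (a) a random walk, (b) an AR(1) with a leverage term, fit by least squares.
X = np.column_stack([np.ones(T - 1), logrv[:-1], ret[:-1]])
beta, *_ = np.linalg.lstsq(X, logrv[1:], rcond=None)
mu_lv, sd_lv = X @ beta, (logrv[1:] - X @ beta).std()
mu_rw, sd_rw = logrv[:-1], np.diff(logrv).std()

# Average log predictive score (higher is better).
for name, mu, sd in [("random walk", mu_rw, sd_rw), ("AR + leverage", mu_lv, sd_lv)]:
    print(f"{name}: {norm.logpdf(logrv[1:], mu, sd).mean():.3f}")
```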
By: | Torben G. Andersen (Northwestern University and CREATES); Nicola Fusari (The Johns Hopkins University Carey Business School); Viktor Todorov (Northwestern University) |
Abstract: | We study the dynamic relation between market risks and risk premia using time series of index option surfaces. We find that priced left tail risk cannot be spanned by market volatility (and its components) and introduce a new tail factor. This tail factor has no incremental predictive power for future volatility and jump risks, beyond current and past volatility, but is critical in predicting future market equity and variance risk premia. Our findings suggest a wide wedge between the dynamics of market risks and their compensation, with the latter typically displaying a far more persistent reaction following market crises. |
Keywords: | Option Pricing, Risk Premia, Jumps, Stochastic Volatility, Return Predictability, Risk Aversion, Extreme Events |
JEL: | C51 C52 G12 |
Date: | 2018–01–15 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-07&r=all |
By: | Torben G. Andersen (Northwestern University and CREATES); Nicola Fusari (The Johns Hopkins University Carey Business School); Viktor Todorov (Northwestern University) |
Abstract: | We explore the pricing of tail risk as manifest in index options across international equity markets. The risk premium associated with negative tail events displays persistent shifts, unrelated to volatility. This tail risk premium is a potent predictor of future equity returns, while option-implied volatility only forecasts the future return variation. Hence, compensation for negative jump risk is the primary driver of the equity premium across all indices, whereas the reward for pure diffusive variance risk is largely unrelated to future equity returns. We also document pronounced commonalities, suggesting a high degree of integration among the major global equity markets. |
Keywords: | Equity Risk Premium, International Option Markets, Predictability, Tail Risk, Variance Risk Premium |
JEL: | G12 G13 G15 G17 |
Date: | 2018–01–10 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2018-02&r=all |
By: | Asaf Cohen; Virginia R. Young |
Abstract: | We analyze the probability of ruin for the scaled classical Cramér-Lundberg (CL) risk process and the corresponding diffusion approximation. The scaling, introduced to the actuarial literature by Iglehart (1969), amounts to multiplying the Poisson rate $\lambda$ by $n$, dividing the claim severity by $\sqrt{n}$, and adjusting the premium rate so that net premium income remains constant. Therefore, we think of the associated diffusion approximation as being "asymptotic for large values of $\lambda$". We are the first to use a comparison method to prove convergence of the probability of ruin for the scaled CL process and to derive the rate of convergence. Specifically, we prove a comparison lemma for the corresponding integro-differential equation and use it to prove that the probability of ruin for the scaled CL process converges to the probability of ruin for the limiting diffusion process. Moreover, we show that the rate of convergence for the ruin probability is of order $O(n^{-1/2})$, and that the convergence is uniform with respect to the surplus. To the best of our knowledge, this is the first rate of convergence achieved for these ruin probabilities, and we show that it is the tightest one. For the case of exponentially distributed claims, we are able to improve on the diffusion approximation, attaining a uniform $O(n^{-1})$ rate of convergence. We also include two examples that illustrate our results. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.00706&r=all |
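For exponentially distributed claims, both the scaled ruin probability and its diffusion limit are available in closed form, so the stated $O(n^{-1/2})$ rate can be checked numerically. A minimal sketch, with invented base parameters:

```python
import numpy as np

# Exact ruin probability for the scaled Cramer-Lundberg process with
# exponential claims versus its diffusion approximation.  The scaling
# follows the abstract: Poisson rate n*lam, claims divided by sqrt(n),
# premium adjusted so the net premium income c - lam*mu stays constant.
lam, mu, c, u = 1.0, 1.0, 1.2, 3.0     # base rate, mean claim, premium, surplus

def psi_exact(n):
    lam_n, mu_n = n * lam, mu / np.sqrt(n)
    c_n = (c - lam * mu) + lam_n * mu_n          # keeps net income constant
    theta = c_n / (lam_n * mu_n) - 1.0           # safety loading
    return np.exp(-theta * u / (mu_n * (1 + theta))) / (1 + theta)

# Diffusion limit: drift c - lam*mu, variance lam_n * E[X_n^2] = 2*lam*mu^2,
# so psi_D(u) = exp(-2 * (c - lam*mu) * u / (2*lam*mu^2)).
psi_diff = np.exp(-(c - lam * mu) * u / (lam * mu**2))

for n in [1, 10, 100, 1000, 10000]:
    err = abs(psi_exact(n) - psi_diff)
    print(f"n={n:6d}  psi_n={psi_exact(n):.6f}  |error|*sqrt(n)={err * np.sqrt(n):.4f}")
```

The scaled-by-$\sqrt{n}$ error stabilizes as $n$ grows, consistent with the $O(n^{-1/2})$ rate claimed in the abstract.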
By: | Alexis Louaas (X-DEP-ECO - Département d'Économie de l'École Polytechnique - X - École polytechnique); Pierre Picard (X - École polytechnique) |
Abstract: | We analyze the insurance of nuclear liability risk, from theoretical and applied standpoints. Firstly, we characterize the optimal insurance scheme for a low-probability industrial accident, such as a nuclear catastrophe, in a model of collective risk-sharing. Using catastrophe bond data, we then evaluate the cost of capital sustaining such an insurance mechanism. Finally, we characterize the individual lotteries associated with the risk of a nuclear accident in France, and we estimate the optimal coverage. We conclude that the corporate liability limit currently in force is likely to be inferior to the socially optimal level. |
Date: | 2019–01–28 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01996648&r=all |
By: | Buse, Rebekka; Schienle, Melanie; Urban, Jörg |
Abstract: | We study the impact of changes in regulations and policy interventions on systemic risk among European sovereigns, measured as volatility spillovers in the respective credit risk markets. Our unique intraday CDS dataset allows for precise measurement of the effectiveness of these events in a network setting. In particular, it allows us to discern interventions that entail significant changes in network cross-effects, with appropriate bootstrap confidence intervals. We show that it was mainly regulatory changes, notably the ban on trading naked sovereign CDS in 2012 and the new ISDA regulations in 2014, that were most effective in reducing systemic risk. In comparison, we find that the effect of policy interventions was minor and generally not sustainable. In particular, they had a significant impact only when implemented for the first time and when targeting more than one country. For the volatility spillover channels, we generally find balanced networks with no fragmentation over time. |
Keywords: | Financial Crises, Policy and Regulation, Financial Stability and Systemic Risk in the Eurozone, High-Frequency CDS, Bootstrap Spillover-Measures |
JEL: | G20 G01 G17 C32 G28 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:kitwps:125&r=all |
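The paper's network estimator works on intraday CDS data with bootstrap inference. As a much simpler illustration of measuring volatility spillovers as network cross-effects, the sketch below computes a Diebold-Yilmaz-style spillover index from the forecast-error variance decomposition of a known VAR(1) with Cholesky-orthogonalized shocks. All parameter values are invented.

```python
import numpy as np

# Toy spillover index for the CDS volatilities of 3 sovereigns.
A = np.array([[0.5, 0.3, 0.0],       # VAR(1) coefficient matrix
              [0.1, 0.4, 0.2],
              [0.0, 0.2, 0.5]])
Sigma = np.array([[1.0, 0.3, 0.1],   # shock covariance matrix
                  [0.3, 1.0, 0.3],
                  [0.1, 0.3, 1.0]])
P = np.linalg.cholesky(Sigma)        # orthogonalize shocks (Cholesky)

H, k = 10, A.shape[0]
theta = np.zeros((k, k))             # theta[i, j]: FE variance of i due to shock j
Phi = np.eye(k)                      # MA coefficients: Phi_h = A^h
for h in range(H):
    impact = Phi @ P
    theta += impact**2
    Phi = A @ Phi

theta /= theta.sum(axis=1, keepdims=True)   # row-normalize to shares
spillover = 1 - np.trace(theta) / k         # off-diagonal share = spillovers
print(np.round(theta, 3))
print(f"total spillover index: {spillover:.2%}")
```

A regulatory event that "works" in this framework is one that significantly shrinks the off-diagonal entries of the (estimated) table above.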
By: | Bormann, Carsten; Schienle, Melanie |
Abstract: | An accurate assessment of tail inequalities and tail asymmetries of financial returns is key for risk management and portfolio allocation. We propose a new test procedure for detecting the full extent of such structural differences in the dependence of bivariate extreme returns. We decompose the testing problem into piecewise multiple comparisons of Cramér-von Mises distances of tail copulas. In this way, tail regions that cause differences in extreme dependence can be located and consequently be targeted by financial strategies. We derive the asymptotic properties of the test and provide a bootstrap approximation for finite samples. Moreover, we account for the multiplicity of the piecewise tail copula comparisons by adjusting individual p-values according to multiple testing techniques. Monte Carlo simulations demonstrate the test's superior finite-sample properties for common financial tail risk models, in both the i.i.d. and the sequentially dependent case. Over the last 90 years of US stock market data, our test detects up to 20% more tail asymmetries than competing tests, which can be attributed to the presence of non-standard tail dependence structures. We also find evidence of diminishing tail asymmetries during every major financial crisis (except the 2007-09 crisis), reflecting a risk-return trade-off for extreme returns. |
Keywords: | tail dependence, tail copulas, tail asymmetry, tail inequality, extreme values, multiple testing |
JEL: | C12 C53 C58 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:kitwps:122&r=all |
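A minimal sketch of the main building block, the empirical tail copula, and a Cramér-von Mises-type distance between lower- and upper-tail dependence, on simulated returns with asymmetric crash risk. The paper's actual test adds piecewise comparisons over tail regions, bootstrap critical values, and multiplicity adjustments, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def lower_tail_copula(x, y, k, grid):
    """Empirical lower tail copula: Lambda(u, v) ~ (n/k) * C(ku/n, kv/n)."""
    n = len(x)
    rx = x.argsort().argsort() + 1          # ranks 1..n
    ry = y.argsort().argsort() + 1
    L = np.empty((len(grid), len(grid)))
    for i, u in enumerate(grid):
        for j, v in enumerate(grid):
            L[i, j] = np.mean((rx <= k * u) & (ry <= k * v)) * n / k
    return L

# Returns with asymmetric dependence: joint crashes more likely than joint booms.
z = rng.standard_normal((5000, 2))
crash = rng.random(5000) < 0.05
z[crash] -= 3.0                              # common negative shocks
x, y = z[:, 0], z[:, 1] + 0.5 * rng.standard_normal(5000)

k = 200                                      # number of tail observations used
grid = np.linspace(0.1, 1.0, 10)
L_low = lower_tail_copula(x, y, k, grid)     # joint-crash dependence
L_up = lower_tail_copula(-x, -y, k, grid)    # joint-boom dependence

cvm = np.mean((L_low - L_up) ** 2)           # Cramer-von Mises-type distance
print(f"CvM distance between lower and upper tail copulas: {cvm:.4f}")
```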
By: | Yong Liu; Alan P. Ker |
Abstract: | County crop yield data from the United States Department of Agriculture - National Agricultural Statistics Service has been, and continues to be, extensively used in the literature as well as in practice. The most notable practical example is crop insurance, as the Risk Management Agency uses the data to set guarantees, estimate premium rates, and calculate indemnities for its area programs. In many applications, including crop insurance, yield data are detrended, adjusted for possible heteroscedasticity, and then assumed to be independent and identically distributed. For most major crop-region combinations, county yield data exist from the 1950s onwards and reflect very significant innovations in both seed and farm management technologies; innovations that have likely moved mass all around the support of the yield distribution. Despite correcting for movements in the first two moments of the yield data generating process (dgp), these innovations cast doubt on the identically distributed assumption. This manuscript considers the question of how much historical yield data should be used in empirical analyses. The answer obviously depends on the empirical application, crop-region combination, econometric methodology, and chosen loss function. Nonetheless, we hope to provide some guidance by tackling this question in three ways. First, we use distributional tests to assess if and when the adjusted yield data may result from different dgps. Second, we consider the application to crop insurance by using an out-of-sample rating game -- commonly employed in the literature -- to compare rates from the full versus historically restricted data sets. Third, we estimate flexible time-varying dgps and then simulate to quantify the additional error when the identically distributed assumption is erroneously imposed. Our findings suggest that, despite accounting for time-varying movements in the first two moments, using yield data more than 30 years old increases estimation error. Given that discarding historical data is unappetizing, particularly in applications with relatively small T, we investigate three methodologies that re-incorporate the discarded data while explicitly acknowledging: (i) that the retained and discarded data are from different dgps; and (ii) that the extent and form of those differences are unknown. Our results suggest gains in efficiency may be realized by using these more flexible methodologies. While our results are most applicable to the crop insurance literature, we suggest proceeding with caution when using historical yield data in other applications as well. |
Keywords: | Agricultural and Food Policy |
Date: | 2019–01–30 |
URL: | http://d.repec.org/n?u=RePEc:ags:uguiwp:283559&r=all |
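One simple way to ask "how much history is too much" is to compare the empirical distribution of recent detrended yields against older windows with a two-sample test. The sketch below does this with Kolmogorov-Smirnov tests on synthetic yields whose early regime has fatter tails despite matched second moments; the paper's distributional tests and out-of-sample rating game are more elaborate.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

# Synthetic detrended county yields, 1950-2018: the early regime has fatter
# tails (mimicking older seed/management technology) even after the first
# two moments are matched by standardization.  All values are invented.
years = np.arange(1950, 2019)
old = rng.standard_t(df=3, size=(years < 1990).sum())
new = rng.standard_normal((years >= 1990).sum())
yields = np.concatenate([old / old.std(), new])   # variance-adjusted series

# Compare the most recent 30 years against successively older 10-year windows.
recent = yields[-30:]
for start in range(0, len(yields) - 30, 10):
    older = yields[start:start + 10]
    stat, p = ks_2samp(older, recent)
    print(f"{years[start]}-{years[start] + 9}: KS p-value = {p:.3f}")
```

Small p-values for the oldest windows would flag data that, despite moment adjustments, plausibly come from a different dgp.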
By: | Revathi Anil Kumar; Mark Chamness |
Abstract: | Managing data storage growth is of crucial importance to businesses. Poor practices can lead to large data and financial losses. Access to storage information, along with timely action informed by capacity forecasting, is essential to avoid these losses. In addition, high accuracy of capacity forecast estimates and ease of interpretability play an important role for any customer-facing tool. In this paper, we introduce Stochastic Estimated Risk (SER), a tool developed at Nutanix that is in production. SER shifts the focus from forecasting a single estimated date of reaching full capacity to predicting the risk associated with running out of storage capacity. Using a Brownian motion with drift model, SER estimates the probability that a system will run out of capacity within a specific time frame. Our results show that, for systems with non-linear growth patterns, this probabilistic approach is more accurate and credible than regression or ensemble forecasting models. |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1901.10552&r=all |
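Under a Brownian-motion-with-drift model for used capacity, the probability of hitting full capacity by a horizon T is the classical first-passage-time probability. A sketch of that calculation follows; the abstract does not spell out SER's exact formulation, so the parameter names and values here are assumptions.

```python
import numpy as np
from scipy.stats import norm

def prob_full_by(T, x0, cap, mu, sigma):
    """P(storage hits `cap` within T days) for Brownian motion with drift,
    X_t = x0 + mu*t + sigma*W_t, via the standard first-passage formula."""
    a = cap - x0                         # distance to full capacity
    if a <= 0:
        return 1.0
    s = sigma * np.sqrt(T)
    return (norm.cdf((mu * T - a) / s)
            + np.exp(2 * mu * a / sigma**2) * norm.cdf((-a - mu * T) / s))

# Example: 70 TB used of 100 TB, growing 0.2 TB/day with 1.5 TB/day noise.
for T in (30, 90, 180):
    print(f"risk of running out within {T:3d} days: "
          f"{prob_full_by(T, x0=70, cap=100, mu=0.2, sigma=1.5):.2%}")
```

Reporting a risk curve over horizons, rather than a single "full on date X" estimate, is exactly the reframing the abstract describes.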
By: | Canals-Cerda, Jose J. (Federal Reserve Bank of Philadelphia) |
Abstract: | The Current Expected Credit Loss (CECL) framework represents a new approach for calculating the allowance for credit losses. Credit cards are the most common form of revolving consumer credit and are likely to present conceptual and modeling challenges during CECL implementation. We look back at nine years of account-level credit card data, starting in 2008, over a period encompassing the bulk of the Great Recession as well as several years of economic recovery. We analyze the performance of the CECL framework under plausible assumptions about the allocation of future payments to existing credit card loans, a key implementation element. Our analysis focuses on three major themes: defaults, balances, and credit loss. We find that allowances are significantly impacted by the specific payment allocation assumptions as well as by downturn economic conditions. We also compare projected allowances with realized credit losses and observe a significant divergence resulting from the revolving nature of credit card portfolios. We extend our analysis across segments of the portfolio with different risk profiles. Interestingly, the less risky segments of the portfolio are proportionally more impacted by the specific payment assumptions and downturn economic conditions. Our findings suggest that the effect of the new allowance framework on a specific credit card portfolio will depend critically on its risk profile, so our findings should be interpreted qualitatively rather than quantitatively. Finally, our goal is to gain a better understanding of the sensitivity of allowances to plausible variations in assumptions about the allocation of future payments to existing credit card loans; thus, we do not offer specific best-practice guidance. |
Keywords: | expected credit losses; allowances; unconditionally cancelable; revolving credit; credit loss |
JEL: | G21 G28 M41 |
Date: | 2019–01–31 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedpwp:19-8&r=all |
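A stylized version of the mechanics discussed here: project the runoff of the existing balance under one payment-allocation assumption and accumulate expected lifetime losses. All parameters are invented, and real CECL implementations segment the portfolio and condition losses on economic scenarios.

```python
# Toy CECL-style allowance for a revolving credit card balance under one
# payment-allocation assumption (all payments retire the *existing* balance,
# a FIFO-like rule); new draws are excluded, as CECL requires for
# unconditionally cancelable lines.  Parameters are invented.
balance = 1000.0
pay_rate = 0.06      # monthly payment as a share of the existing balance
pd_month = 0.004     # monthly probability of default
lgd = 0.85           # loss given default

allowance, month = 0.0, 0
while balance > 1.0:
    month += 1
    allowance += balance * pd_month * lgd       # expected loss on surviving balance
    balance *= (1 - pay_rate) * (1 - pd_month)  # survivors pay down the line
print(f"runoff horizon: {month} months, lifetime allowance: {allowance:.2f}")
```

Shrinking pay_rate (a stingier allocation of future payments to existing loans) lengthens the runoff horizon and mechanically inflates the allowance, which is the sensitivity the paper quantifies.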
By: | Sarah Bensalem (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Nicolás Hernández Santibáñez (Department of Mathematics - University of Michigan - University of Michigan [Ann Arbor]); Nabil Kazi-Tani (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon) |
Abstract: | This paper studies an equilibrium model between an insurance buyer and an insurance seller, where both parties' risk preferences are given by convex risk measures. The interaction is modeled through a Stackelberg-type game, in which the insurance seller plays first by offering prices, in the form of safety loadings. The insurance buyer then chooses his optimal proportional insurance share and his optimal prevention effort in order to minimize his risk measure. The loss distribution is given by a family of stochastically ordered probability measures, indexed by the prevention effort. We give special attention to the problems of self-insurance and self-protection. We prove that the formulated game admits a unique equilibrium, which we can solve explicitly by further specifying the agents' criteria and the loss distribution. For self-insurance, we also consider an adverse selection setting, where an insurance buyer's type is given by his loss probability, and study the screening and shutdown contracts. Finally, we provide case studies in which we explicitly apply our theoretical results. |
Keywords: | Coherent risk measures, Stackelberg game, Prevention, Self-insurance, Self-protection |
Date: | 2019–01–16 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-01983433&r=all |
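A toy numerical version of the Stackelberg structure, with the entropic risk measure standing in for a generic convex risk measure, a Bernoulli loss whose probability decreases in effort (self-protection), and an expected-value premium principle. Functional forms and parameters are invented; the paper solves the equilibrium analytically under more general assumptions.

```python
import numpy as np

def entropic(x, p, gamma):
    """Entropic (convex) risk measure of a discrete position x with probs p."""
    return np.log(np.sum(p * np.exp(gamma * x))) / gamma

L, p0 = 10.0, 0.30             # loss size and baseline loss probability
g_buyer, g_seller = 0.5, 0.1   # risk-aversion parameters (invented)

def buyer_best_reply(theta):
    """Buyer picks insurance share a and prevention effort e to minimize
    the entropic risk of effort cost + premium + retained loss."""
    best = (np.inf, 0.0, 0.0)
    for a in np.linspace(0, 1, 51):
        for e in np.linspace(0, 2, 41):
            p = p0 * np.exp(-e)                    # self-protection effect
            prem = (1 + theta) * a * p * L         # expected-value premium
            pos = np.array([e + prem + (1 - a) * L, e + prem])  # loss / no loss
            r = entropic(pos, np.array([p, 1 - p]), g_buyer)
            if r < best[0]:
                best = (r, a, e)
    return best

# Stackelberg: the seller moves first, choosing the safety loading theta that
# minimizes the entropic risk of its position (indemnity paid minus premium),
# anticipating the buyer's best reply.
best = (np.inf, 0.0)
for theta in np.linspace(0.0, 1.0, 26):
    _, a, e = buyer_best_reply(theta)
    p = p0 * np.exp(-e)
    prem = (1 + theta) * a * p * L
    pos = np.array([a * L - prem, -prem])
    r = entropic(pos, np.array([p, 1 - p]), g_seller)
    if r < best[0]:
        best = (r, theta)

_, a, e = buyer_best_reply(best[1])
print(f"equilibrium loading={best[1]:.2f}, buyer share={a:.2f}, effort={e:.2f}")
```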
By: | Irina Georgescu; Jani Kinnunen |
Abstract: | Classical portfolio theory deals with finding the optimal proportions in which an agent divides wealth between a risk-free asset and a probabilistic risky asset. Formulating and solving the problem depend on how the risk is represented and on how it, combined with the utility function, defines a notion of expected utility. In this paper, the risk is a fuzzy variable and the notion of expected utility is defined in the setting of Liu's credibility theory, so the portfolio choice problem is formulated as an optimization problem in which the objective function is a credibilistic expected utility. We prove several approximate formulas for the optimal allocation to the credibilistic risky asset. These formulas involve two types of parameters: various credibilistic moments of fuzzy variables (expected value, variance, skewness and kurtosis) and the risk aversion, prudence and temperance indicators of the utility function. |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1901.08986&r=all |
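For a symmetric triangular fuzzy variable, the credibilistic expected value and variance have simple closed forms (E = center, V = spread^2 / 6), which the sketch below verifies against the defining Choquet-type integral before plugging them into the classic first-order approximation of the optimal risky share. The paper's higher-order formulas, with skewness, kurtosis, prudence and temperance terms, are not reproduced here.

```python
import numpy as np

# Symmetric triangular fuzzy excess return xi = (m - s, m, m + s) in Liu's
# credibility theory.  Closed forms: E[xi] = m, V[xi] = s^2 / 6.
m, s = 0.02, 0.30                      # center and spread (invented values)
E, V = m, s**2 / 6

# Numerical check of V = int_0^inf Cr{(xi - E)^2 >= r} dr, where
# Cr{B} = (Pos{B} + 1 - Pos{B^c}) / 2 and Pos is the sup of the membership.
# Here Pos{|xi - m| >= sqrt(r)} = 1 - sqrt(r)/s, and Pos{B^c} = 1 (B^c contains m).
rs = np.linspace(0.0, s**2, 100001)[1:]
cr = 0.5 * (1.0 - np.sqrt(rs) / s)
print(f"V closed form {V:.6f} vs numerical {np.trapz(cr, rs):.6f}")

# First-order (mean-variance level) approximation of the optimal risky share.
A = 3.0                                # Arrow-Pratt risk aversion (invented)
print(f"approximate optimal risky share: {E / (A * V):.3f}")
```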
By: | Davradakis, Emmanouil; Santos, Ricardo |
Abstract: | The purpose of this working paper is to provide a primer on financial technology and on Blockchain, while shedding light on the impact they may have on the financial industry. FinTech, the financial technology and innovation that competes with traditional financial methods in the delivery of financial services, has the potential to improve the reach of financial services to the broader public and to facilitate the creation of credit records, especially in the developing world. Some Blockchain applications, like cryptocurrencies, could be problematic: cryptocurrencies cannot substitute for traditional money due to the high risk of debasement, the lack of trust, and the high inefficiencies relating to the cost in electricity and human effort required to clear cryptocurrency transactions. Cryptocurrencies' high volatility renders them a poor means of payment and store of value, while resembling a fraudulent investment operation. Yet other Blockchain applications, like Blockchain securities, could facilitate the functioning of International Financial Institutions (IFIs), given the volume of securities they issue, as Blockchain securities enable almost instantaneous trade confirmation, affirmation, allocation and settlement; reconciliations become superfluous, releasing collateral to be used for other purposes in the market. IFIs could promote awareness and understanding of Blockchain technology among different IFI services and launch Blockchain labs to pilot projects that can improve governance and social outcomes in the developing world. Financial inclusion, at the core of the IFI mandate, could be enhanced by investing in FinTechs that facilitate access to payment systems. IFIs could also consider developing Blockchain software aimed at improving transparency and efficiency in the public resources that finance development projects. IFIs could promote Blockchain applications in several sectors, such as agricultural lending, where Blockchain technology is used in the supply chain to improve transparency and efficiency in agricultural and commodity production; other sectors include transport and logistics and even energy distribution. IFIs could benefit from FinTechs' know-how in the analysis of big data in order to better understand the investment gaps and financing needs of prospective clients. Finally, FinTechs' know-how could be used by IFIs to streamline their internal processes for credit underwriting and risk management. |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:eibwps:201901&r=all |