NEP: New Economics Papers
on Risk Management
Issue of 2022‒09‒19
twenty-one papers chosen by
By: | Lee, David |
Abstract: | This article describes a valuation framework for building the most common kinds of cancellation schedules and cancellation events. The model can price generic cancellable derivatives accurately, making it useful for derivatives trading and risk management. (A generic backward-induction pricing sketch follows this entry.)
Keywords: | cancellable structured note, derivatives valuation |
JEL: | C52 C58 C63 G12 G13 G17 |
Date: | 2022–08–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:114147&r= |
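The abstract above does not spell out the pricing machinery, so the sketch below only illustrates the generic idea behind valuing a cancellable note: backward induction on a toy (uncalibrated, hypothetical) binomial short-rate lattice, with the issuer cancelling at par whenever continuation is more expensive. This is a minimal sketch of the standard technique, not the paper's specific framework.

```python
import numpy as np

# Toy issuer-cancellable fixed-coupon note priced by backward induction on a
# simple recombining binomial short-rate lattice (uncalibrated, illustrative).
par, coupon = 100.0, 4.0                 # annual coupon paid at each period end
r0, u, d, p = 0.03, 1.25, 0.80, 0.5      # hypothetical short-rate tree parameters
n_periods = 5
cancel_dates = {2, 3, 4}                 # periods at which the issuer may cancel at par

# Short rate at period t, node j (j up-moves): r0 * u**j * d**(t-j)
value = np.full(n_periods + 1, par + coupon)     # terminal payoff: par + last coupon

for t in range(n_periods - 1, -1, -1):
    rates = r0 * u ** np.arange(t + 1) * d ** (t - np.arange(t + 1))
    disc = 1.0 / (1.0 + rates)
    cont = disc * (p * value[1:t + 2] + (1 - p) * value[:t + 1])  # discounted expectation
    cashflow = coupon if t > 0 else 0.0
    value = cont + cashflow
    if t in cancel_dates:
        # Issuer cancels when continuing is more expensive than redeeming at par
        value = np.minimum(value, par + cashflow)

print(f"Cancellable note value: {value[0]:.4f}")
```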
By: | Avik Das; Dr. Devanjali Nandi Das |
Abstract: | Purpose: In the context of the COVID-19 pandemic of 2020-21, this paper attempts to capture the interconnectedness and volatility transmission dynamics between India and the G7 countries. The nature of the change in volatility spillover effects and time-varying conditional correlation among the G7 countries and India is investigated. Methodology: To assess the volatility spillover effects, bivariate BEKK and t-DCC(1,1) GARCH(1,1) models are used. Our research shows how the dynamics of volatility spillover between India and the G7 countries shift before and during COVID-19. Findings: The findings reveal that the extent of volatility spillover altered during COVID-19 compared to the pre-COVID environment. During the pandemic, a sharp increase in conditional correlation indicates an increase in systematic risk between countries. Originality: The study contributes to a better understanding of the dynamics of volatility spillover between the G7 countries and India. Asset managers and foreign corporations can use the changing spillover dynamics to improve investment decisions and implement effective hedging measures to protect their interests. Furthermore, this research will assist financial regulators in assessing market risk in the future owing to crises such as COVID-19. (A univariate GARCH(1,1) fitting sketch follows this entry.)
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2208.09148&r= |
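A minimal univariate building block for the kind of spillover analysis described above: fit GARCH(1,1) with Student-t errors to each return series with the `arch` package and inspect the rolling correlation of the standardized residuals. The full bivariate BEKK / t-DCC estimation in the paper requires a dedicated multivariate GARCH routine; the index names and placeholder data below are assumptions.

```python
import numpy as np
import pandas as pd
from arch import arch_model

def garch_std_resid(returns: pd.Series) -> pd.Series:
    """Fit GARCH(1,1) with Student-t errors; return standardized residuals."""
    res = arch_model(returns * 100, mean="Constant", vol="GARCH",
                     p=1, q=1, dist="t").fit(disp="off")
    return res.resid / res.conditional_volatility

rng = np.random.default_rng(0)
idx = pd.bdate_range("2018-01-01", periods=1_000)
returns = pd.DataFrame({                  # placeholder daily returns, hypothetical indices
    "NIFTY50": rng.standard_t(5, 1_000) * 0.010,
    "SP500": rng.standard_t(5, 1_000) * 0.008,
}, index=idx)

z = returns.apply(garch_std_resid)        # standardized residuals per market
print(z["NIFTY50"].rolling(250).corr(z["SP500"]).dropna().tail())  # crude DCC proxy
```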
By: | Emanuel Sommer; Karoline Bax; Claudia Czado |
Abstract: | Accurate estimation of different risk measures for financial portfolios is of utmost importance for financial institutions and regulators alike; however, many existing models fail to incorporate high-dimensional dependence structures adequately. To overcome this problem and capture complex cross-dependence structures, we use the flexible class of vine copulas and introduce a conditional estimation approach focusing on a stress factor. Furthermore, we compute conditional portfolio-level risk measure estimates by simulating portfolio-level forecasts conditionally on a stress factor. We then introduce a quantile-based approach to observe the behavior of the risk measures given a particular state of the conditioning asset or assets. In particular, this can generate valuable insights in stress testing situations. In a case study on Spanish stocks, we show, using different stress factors, that the portfolio is quite robust against strong market downtrends in the American market. At the same time, we find no evidence of this behavior with respect to the European market. The novel algorithms presented are combined in the R package portvine, which is publicly available on CRAN. (A risk-measure estimation sketch follows this entry.)
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2208.09156&r= |
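The paper's conditional vine-copula simulation is implemented in the R package portvine; the sketch below (in Python, for consistency with the other examples in this issue) only illustrates the last step: turning simulated portfolio-return scenarios into Value-at-Risk and Expected Shortfall estimates. The scenario distribution is a placeholder assumption.

```python
import numpy as np

def var_es(simulated_returns: np.ndarray, alpha: float = 0.975):
    """VaR and ES of the loss distribution at level alpha (losses = -returns)."""
    losses = -np.asarray(simulated_returns)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()      # average loss beyond the VaR level
    return var, es

rng = np.random.default_rng(1)
scenarios = rng.standard_t(df=4, size=100_000) * 0.01   # placeholder conditional scenarios
var, es = var_es(scenarios)
print(f"VaR 97.5%: {var:.4f}, ES 97.5%: {es:.4f}")
```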
By: | Lyons, Paul (Central Bank of Ireland); Rice, Jonathan (Central Bank of Ireland) |
Abstract: | Risk-weighted assets for Irish residential mortgage lending are high in a European context. In this note, we explore the main contributors to these higher mortgage risk weights. One key driver is the underlying credit quality of the stock of outstanding mortgages. Mortgage default rates are higher in Ireland than in many other European countries, both historically and in recent years. The majority of recent defaults stem from loans originated before the global financial crisis, highlighting the central role of these loans, issued under weaker lending standards, in pushing up risk weights. A second key driver of higher mortgage risk weights is higher modelled loss-given-default. Irish loss rates on mortgage defaults that occurred in the financial crisis years (2009-2013) are more severe than those observed in most other EU countries, predominantly due to the longer time taken to resolve defaulted loans in Ireland, associated with a particularly severe crisis. Moving forward, as banks originate new loans with lower probabilities of default, these will replace crisis-period loans and place downward pressure on mortgage risk weights. Regulatory reforms such as the introduction of the ‘output floor’ under Basel III will narrow the gap between overall Irish risk weights and those in other countries. Nevertheless, the risk weight applicable to Irish mortgages will likely remain at the higher end of EU comparisons over the medium term. (An illustrative IRB risk-weight calculation follows this entry.)
Date: | 2022–02 |
URL: | http://d.repec.org/n?u=RePEc:cbi:fsnote:1/fs/22&r= |
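To make the PD/LGD mechanism above concrete, here is a simplified version of the Basel IRB risk-weight formula for residential mortgages (fixed asset correlation of 0.15), showing how higher PD and LGD push risk weights up. Regulatory add-ons, floors and scaling factors are omitted; the example PD/LGD values are hypothetical.

```python
from scipy.stats import norm

def irb_mortgage_risk_weight(pd_: float, lgd: float, corr: float = 0.15) -> float:
    """Risk weight (in % of exposure) under a simplified IRB retail mortgage formula."""
    k = lgd * norm.cdf(
        (norm.ppf(pd_) + corr ** 0.5 * norm.ppf(0.999)) / (1 - corr) ** 0.5
    ) - pd_ * lgd                      # unexpected-loss capital requirement
    return k * 12.5 * 100              # convert the capital charge to a risk weight in %

print(irb_mortgage_risk_weight(pd_=0.01, lgd=0.20))   # low PD/LGD loan
print(irb_mortgage_risk_weight(pd_=0.05, lgd=0.45))   # weaker, crisis-era style loan
```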
By: | Amine Ouazad |
Abstract: | Measuring beliefs about natural disasters is challenging. Deep out-of-the-money options allow investors to hedge at a range of strikes and time horizons, so the 3-dimensional surface of firm-level option prices provides information on (i) skewed and fat-tailed beliefs about the impact of natural disaster risk across space and time at daily frequency; and (ii) the covariance of wildfire-exposed stocks with investors' marginal utility of wealth. Each publicly traded company's daily surface of option prices is matched with its network of establishments and wildfire perimeters over two decades. First, wildfires affect investors' risk-neutral probabilities at short and long maturities; investors price asymmetric downward tail risk and a probability of upward jumps. The volatility smile is more pronounced. Second, comparing risk-neutral and physical distributions reveals the option-implied risk aversion with respect to wildfire-exposed stock prices. Investors' marginal utility of wealth is correlated with wildfire shocks. Option-implied risk aversion identifies the wildfire-exposed share of portfolios. For risk aversions consistent with Barro (2012), equity options suggest that (i) investors hold larger shares of wildfire-exposed stocks than the market portfolio; or (ii) investors may have more pessimistic beliefs about wildfires' impacts than observed returns suggest, such as pricing low-probability unrealized downward tail risk. We calibrate options with models featuring both upward and downward risk. Results are consistent with a significant pricing of downward jumps. (A risk-neutral density sketch follows this entry.)
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2208.06930&r= |
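One standard way to read risk-neutral beliefs out of an option-price surface, in the spirit of the entry above, is the Breeden-Litzenberger relation: the risk-neutral density is the discounted second derivative of the call price with respect to the strike. The prices below come from a placeholder Black-Scholes model purely for illustration; in practice they would be observed quotes.

```python
import numpy as np
from scipy.stats import norm

def bs_call(s, k, t, r, vol):
    d1 = (np.log(s / k) + (r + 0.5 * vol ** 2) * t) / (vol * np.sqrt(t))
    d2 = d1 - vol * np.sqrt(t)
    return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

s0, t, r = 100.0, 0.5, 0.02
strikes = np.linspace(60, 140, 161)
calls = bs_call(s0, strikes, t, r, vol=0.35)      # replace with an observed call curve

dk = strikes[1] - strikes[0]
density = np.exp(r * t) * np.gradient(np.gradient(calls, dk), dk)  # q(K) = e^{rT} d2C/dK2
print(f"density integrates to ~{np.trapz(density, strikes):.3f}")  # should be close to 1
```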
By: | Nicklas Werge (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistique et Modélisation - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UPCité - Université Paris Cité) |
Abstract: | Financial markets tend to switch between various market regimes over time, making models that assume stationarity unreliable. We construct a regime-switching model, independent of asset class, for risk-adjusted return predictions based on hidden Markov models. This framework can distinguish between market regimes in a wide range of financial markets such as the commodity, currency, stock, and fixed income markets. The proposed method employs sticky features that directly affect regime stickiness and thereby change turnover levels. We investigate our metric for risk-adjusted return predictions by analyzing daily financial market changes over almost twenty years. Out-of-sample results show accurate detection of bull, bear, and high-volatility periods, improving risk-adjusted returns while keeping turnover at a preferable level. (A hidden Markov model sketch follows this entry.)
Keywords: | Hidden Markov model,Financial time series,Non-stationarity,Regime Switching,Prediction markets,Trading strategies |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03313129&r= |
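A minimal regime-detection sketch in the spirit of the entry above: fit a Gaussian hidden Markov model to daily returns and label each day with its most likely regime. The paper's "sticky" transition features and its risk-adjusted-return metric are not reproduced here; the return series is a simulated placeholder.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
# Placeholder return series: a calm regime followed by a volatile regime.
returns = np.concatenate([rng.normal(0.0005, 0.005, 750),
                          rng.normal(-0.001, 0.02, 250)]).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200,
                    random_state=0)
model.fit(returns)
states = model.predict(returns)                     # most likely regime per day
print("regime means:", model.means_.ravel())
print("share of days per regime:", np.bincount(states) / len(states))
```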
By: | Boyi Jin |
Abstract: | This paper proposes a novel portfolio optimization model using an improved deep reinforcement learning algorithm. The objective function of the optimization model is the weighted sum of the expectation and the value at risk (VaR) of the portfolio cumulative return. The proposed algorithm is based on an actor-critic architecture, in which the main task of the critic network is to learn the distribution of the portfolio cumulative return using quantile regression, and the actor network outputs the optimal portfolio weights by maximizing the objective function above. Meanwhile, we exploit a linear transformation function to allow short selling of assets. Finally, a multi-process method called Ape-X is used to accelerate deep reinforcement learning training. To validate the proposed approach, we conduct backtests for two representative portfolios and observe that the proposed model is superior to the benchmark strategies. (A quantile-loss and objective sketch follows this entry.)
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2208.10707&r= |
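A sketch of two ingredients named above, under hypothetical shapes and parameter names (taus, alpha, lam): the quantile-regression (pinball) loss a critic could minimize to learn the return distribution, and an objective mixing the expected cumulative return with its VaR. This is not the paper's network architecture, only the scalar pieces.

```python
import numpy as np

def pinball_loss(pred_quantiles: np.ndarray, realized: float,
                 taus: np.ndarray) -> float:
    """Average quantile (pinball) loss of predicted quantiles vs a realized return."""
    err = realized - pred_quantiles
    return float(np.mean(np.maximum(taus * err, (taus - 1.0) * err)))

def objective(pred_quantiles: np.ndarray, taus: np.ndarray,
              alpha: float = 0.05, lam: float = 0.5) -> float:
    """Weighted sum of expected cumulative return and its alpha-quantile (return VaR)."""
    expected = pred_quantiles.mean()                      # crude mean from quantiles
    var_alpha = np.interp(alpha, taus, pred_quantiles)    # alpha-quantile of return
    return (1 - lam) * expected + lam * var_alpha         # maximize: reward mean, penalize tail

taus = np.linspace(0.05, 0.95, 19)
quantiles = np.quantile(np.random.default_rng(3).normal(0.02, 0.1, 10_000), taus)
print(pinball_loss(quantiles, realized=0.01, taus=taus), objective(quantiles, taus))
```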
By: | Braga, Maria Debora; Nava, Consuelo R.; Zoia, Maria Grazia (University of Turin) |
Abstract: | In this paper, a risk parity strategy based on portfolio kurtosis as the reference measure is introduced. This strategy allocates the asset weights in a portfolio so as to distribute responsibility for the large dispersion of portfolio returns homogeneously, since portfolio kurtosis puts more weight on extreme outcomes than the standard deviation does. The goal of the strategy is therefore not the minimization of kurtosis, but rather its "fair diversification" among assets. An original closed-form expression for portfolio kurtosis is devised to set up the optimization problem for this type of risk parity strategy. The latter is then compared with the one based on standard deviation using data from a global equity investment universe and an out-of-sample analysis. The kurtosis-based risk parity strategy has interesting portfolio effects, with lights and shadows. It outperforms the traditional risk parity strategy according to the main risk-adjusted performance measures. In terms of asset allocation solutions, it provides extremely unbalanced and more erratic portfolio weights (albeit without excluding any component) in comparison to those of the traditional risk parity strategy. (A standard risk parity sketch follows this entry.)
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:uto:dipeco:202208&r= |
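For reference, here is the traditional risk parity allocation the paper benchmarks against: weights are chosen so that each asset's volatility contribution is equal. The paper's variant instead equalizes kurtosis contributions using its closed-form portfolio kurtosis; that expression is not reproduced here, and the covariance matrix below is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def risk_parity_weights(cov: np.ndarray) -> np.ndarray:
    n = cov.shape[0]

    def objective(w):
        port_vol = np.sqrt(w @ cov @ w)
        contrib = w * (cov @ w) / port_vol          # each asset's volatility contribution
        return np.sum((contrib - port_vol / n) ** 2)

    res = minimize(objective, np.full(n, 1.0 / n),
                   bounds=[(1e-6, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

cov = np.array([[0.040, 0.006, 0.002],
                [0.006, 0.090, 0.010],
                [0.002, 0.010, 0.160]])             # hypothetical 3-asset covariance
print(risk_parity_weights(cov).round(3))            # lower-volatility assets get more weight
```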
By: | Battulga Gankhuu |
Abstract: | Because the asset value of a private company is not observable except in quarterly reports, structural models have not been developed for private companies. For this reason, this paper attempts to develop Merton's structural model for a private company by using the dividend discount model (DDM). We obtain closed-form formulas for the risk-neutral equity and liability values and the default probability of a private company. The paper also provides ML estimators and an EM algorithm for the model's parameters. (A classical Merton-model sketch follows this entry.)
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2208.01974&r= |
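The classical Merton structural model that the paper extends treats equity as a call option on firm assets, with risk-neutral default probability N(-d2). A minimal sketch with hypothetical input values:

```python
import numpy as np
from scipy.stats import norm

def merton(asset_value, debt_face, t, r, asset_vol):
    d1 = (np.log(asset_value / debt_face) + (r + 0.5 * asset_vol ** 2) * t) / (
        asset_vol * np.sqrt(t))
    d2 = d1 - asset_vol * np.sqrt(t)
    equity = asset_value * norm.cdf(d1) - debt_face * np.exp(-r * t) * norm.cdf(d2)
    liability = asset_value - equity                 # value of the risky debt
    default_prob = norm.cdf(-d2)                     # risk-neutral default probability
    return equity, liability, default_prob

print(merton(asset_value=120.0, debt_face=100.0, t=1.0, r=0.03, asset_vol=0.25))
```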
By: | Gapeev, Pavel V.; Jeanblanc, Monique |
Abstract: | We continue to study a credit risk model of a financial market introduced recently by the authors, in which the dynamics of intensity rates of two default times are described by linear combinations of three independent geometric Brownian motions. The dynamics of two default-free risky asset prices are modeled by two geometric Brownian motions that are not independent of the ones describing the default intensity rates. We obtain closed form expressions for the no-arbitrage prices of some first-to-default and second-to-default European style contingent claims given the reference filtration initially and progressively enlarged by the two successive default times. The accessible default-free reference filtration is generated by the standard Brownian motions driving the model. |
Keywords: | successive default times; first-to-default and second-to-default options; geometric Brownian motion; initial and progressive enlargements of filtrations |
JEL: | F3 G3 |
Date: | 2021–08–06 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:110750&r= |
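The paper above derives closed-form prices; the sketch below only illustrates the underlying intensity-based default mechanism by crude Monte Carlo, in a simplified setting with one correlated geometric Brownian motion per intensity (rather than the paper's linear combinations of three independent GBMs). Default times are obtained by inverting the integrated intensity against unit-exponential thresholds; the claim priced is a toy digital paying 1 at maturity if at least one default occurs. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps, T, r = 20_000, 100, 1.0, 0.02
dt = T / n_steps
mu, sigma, lam0 = np.array([0.0, 0.0]), np.array([0.5, 0.7]), np.array([0.02, 0.04])

corr = np.array([[1.0, 0.4], [0.4, 1.0]])
dW = rng.standard_normal((n_paths, n_steps, 2)) @ np.linalg.cholesky(corr).T * np.sqrt(dt)

log_lam = np.log(lam0) + np.cumsum((mu - 0.5 * sigma ** 2) * dt + sigma * dW, axis=1)
cum_intensity = np.cumsum(np.exp(log_lam) * dt, axis=1)      # integrated intensities

thresholds = rng.exponential(size=(n_paths, 2))              # unit-exponential triggers
defaulted_by_T = cum_intensity[:, -1, :] >= thresholds       # default before maturity?
price = np.exp(-r * T) * defaulted_by_T.any(axis=1).mean()   # digital first-to-default claim
print(f"toy first-to-default digital price: {price:.4f}")
```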
By: | Yuyu Chen; Paul Embrechts; Ruodu Wang |
Abstract: | We show the perhaps surprising inequality that the weighted average of iid ultra heavy-tailed (i.e., infinite mean) Pareto losses is larger than a standalone loss in the sense of first-order stochastic dominance. This result is further generalized to allow for a random total number and random weights of Pareto losses and for the losses to be triggered by catastrophic events. We discuss several implications of these results via an equilibrium analysis in a risk exchange market. First, diversification of ultra heavy-tailed Pareto losses increases portfolio risk, and thus a diversification penalty exists. Second, agents with ultra heavy-tailed Pareto losses will not share risks in a market equilibrium. Third, transferring losses from agents bearing Pareto losses to external parties without any losses may arrive at an equilibrium that benefits every party involved. Empirical studies show that our new inequality can be observed in real datasets that fit ultra heavy tails well. (A short simulation follows this entry.)
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2208.08471&r= |
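A quick simulation of the diversification penalty described above: for iid Pareto losses with infinite mean (tail index below 1), the equally weighted average exceeds high thresholds more often than a standalone loss. The tail index and thresholds below are illustrative choices, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, n_losses, n_sims = 0.8, 10, 200_000          # alpha < 1: infinite mean

standalone = rng.pareto(alpha, n_sims) + 1.0        # classical Pareto(alpha) on [1, inf)
pooled = (rng.pareto(alpha, (n_sims, n_losses)) + 1.0).mean(axis=1)

for threshold in (5, 20, 100):
    print(threshold,
          "standalone exceedance:", np.mean(standalone > threshold).round(4),
          "diversified exceedance:", np.mean(pooled > threshold).round(4))
```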
By: | Karl Friedrich Siburg; Christopher Strothmann; Gregor Wei{\ss} |
Abstract: | We introduce a new stochastic order for the tail dependence between random variables. We then study different measures of tail dependence which are monotone in the proposed order, thereby extending various known tail dependence coefficients from the literature. We apply our concepts in an empirical study where we investigate the tail dependence for different pairs of S&P 500 stocks and indices, and illustrate the advantage of our measures of tail dependence over the classical tail dependence coefficient. |
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2208.10319&r= |
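For context on the classical tail dependence coefficient that the entry above generalizes, here is its simple empirical version: the conditional probability that one series exceeds its u-quantile given the other does, evaluated at a high threshold u. The data below are simulated placeholders for two heavy-tailed, dependent return series.

```python
import numpy as np

def upper_tail_dependence(x: np.ndarray, y: np.ndarray, u: float = 0.95) -> float:
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    joint = np.mean((x > qx) & (y > qy))
    return joint / (1.0 - u)      # estimate of P(Y > F_Y^{-1}(u) | X > F_X^{-1}(u))

rng = np.random.default_rng(6)
z = rng.standard_t(df=3, size=(100_000, 2)) @ np.linalg.cholesky(
    np.array([[1.0, 0.6], [0.6, 1.0]])).T          # dependent heavy-tailed placeholders
print(upper_tail_dependence(z[:, 0], z[:, 1], u=0.95))
```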
By: | Hamed Taherdoost (Hamta Business Corporation) |
Abstract: | One of the main stages in a research study is data collection, which enables the researcher to find answers to the research questions. Data collection is the process of gathering data with the aim of gaining insights into the research topic. There are different types of data and, accordingly, different data collection methods. However, it may be challenging for researchers to select the most appropriate data collection method for the type of data used in the research. This article aims to provide a comprehensive source on data collection methods, defining the data collection process and discussing the main types of data. The possible methodologies for gathering data are then explained based on these categories, and the advantages and disadvantages of these methods are outlined. Finally, the main challenges of data collection are listed, and the last section reviews ethical considerations in the data collection process.
Keywords: | information systems,risk management,risk assessment,fraud detection,risk control,risk policy,continuity planning |
Date: | 2021–12–09 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03741848&r= |
By: | Giacomo De Giorgi; Costanza Naguib |
Abstract: | We analyze the impact of credit default on individual trajectories. Using a proprietary dataset for the years 2004-2020, we find that after default individuals relocate to cheaper areas. Importantly, default has long-lasting negative effects on income, credit score, total credit limit, and home-ownership status. |
Keywords: | mobility, bankruptcy, default, credit, income |
JEL: | J61 G51 D12 |
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:ube:dpvwib:dp2206&r= |
By: | Marie Michaelides; Mathieu Pigeon; H\'el\`ene Cossette |
Abstract: | The occurrence of a claim often impacts not one but multiple insurance coverages provided in the contract. To account for this multivariate feature, we propose a new individual claims reserving model built around the activation of the different coverages to predict the reserve amounts. Using the framework of multinomial logistic regression, we model the activation of the different insurance coverages for each claim and their development in the following years, i.e., the activation of other coverages in later years and all the possible payments that might result from them. As such, the model allows us to complete the individual development of the open claims in the portfolio. Using a recent automobile dataset from a major Canadian insurance company, we demonstrate that this approach generates accurate predictions of total reserves as well as of the reserves per insurance coverage. This gives the insurer better insight into the dynamics of its claims reserves. (A multinomial logistic sketch follows this entry.)
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2208.08430&r= |
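A sketch of the multinomial-logistic building block described above: given claim-level features, predict which combination of coverages a claim activates. The feature and label names are hypothetical, and the paper's full model additionally handles the year-by-year development of open claims and the resulting payments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(5_000, 3))              # e.g., driver age, vehicle age, region score
coverage = rng.choice(["bodily_injury", "property_damage", "both"], size=5_000,
                      p=[0.5, 0.3, 0.2])     # placeholder activation labels

clf = LogisticRegression(max_iter=1_000)     # softmax over the three activation outcomes
clf.fit(X, coverage)
print(clf.classes_)
print(clf.predict_proba(X[:3]))              # activation probabilities per coverage
```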
By: | Maulana Harris Muhajir (Neoma Business School) |
Abstract: | In the aftermath of the global financial crisis of 2008, macrofinancial linkages have gained more attention from policymakers as primary issues for financial system stability. A clearer understanding of the drivers of the probability of default (PD) may help predict whether a bank will default on its portfolio liabilities. This presentation develops a method to assess a bank's PD based on a multivariate copula distribution, capturing nonlinear relationships between variables with complex data structures. We then use the generalized method of moments (GMM) to examine the relationship between PD and bank performance (bank-specific indicators) and macroeconomic indicators. Our findings illustrate some critical links between PD and the macroeconomic environment. For example, empirical evidence suggests that bank-specific indicators such as the CET 1 ratio, inefficiency ratio, and deposit ratio are negatively and statistically significantly related to a bank's PD. When we examined the structural and macroeconomic variables, we found that the policy rate, the real exchange rate, economic growth, and the unemployment rate may reduce the PD. We also found that central state-owned banks tend to carry higher risk than other bank groups and that regional state-owned banks in the central region have the greatest likelihood of default. (A copula-fitting sketch follows this entry.)
Date: | 2022–08–11 |
URL: | http://d.repec.org/n?u=RePEc:boc:usug22:10&r= |
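One step suggested by the entry above, sketched under placeholder data: fitting a Gaussian copula to capture the dependence between bank-specific and macro variables. Pseudo-observations are obtained from ranks, mapped to normal scores, and their correlation matrix parameterizes the copula. The paper additionally estimates a GMM regression, which is not reproduced here.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_corr(data: np.ndarray) -> np.ndarray:
    n = data.shape[0]
    pseudo_obs = rankdata(data, axis=0) / (n + 1.0)     # ranks mapped into (0, 1)
    normal_scores = norm.ppf(pseudo_obs)
    return np.corrcoef(normal_scores, rowvar=False)     # copula correlation matrix

rng = np.random.default_rng(8)
sample = rng.multivariate_normal(np.zeros(3),
                                 [[1.0, 0.5, 0.2],
                                  [0.5, 1.0, 0.3],
                                  [0.2, 0.3, 1.0]], size=2_000)
sample[:, 2] = np.exp(sample[:, 2])        # make one margin non-Gaussian on purpose
print(gaussian_copula_corr(sample).round(2))
```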
By: | Refk Selmi (ESC PAU - Ecole Supérieure de Commerce, Pau Business School) |
Abstract: | The ongoing Russian-Ukrainian war, along with the sanctions imposed on Russia, poses a major shock to the world economy, merely two years after the COVID-19 pandemic. Accordingly, global economic policy uncertainty has surged due to the resulting spiraling energy prices and economic disruptions. This paper uses a quantile-on-quantile approach to compare the ability of Bitcoin to hedge the economic policy uncertainty (EPU) of major global Bitcoin exchange markets (China, Japan, Korea and the United States) for the periods before and after the COVID-19 pandemic and Russia's invasion of Ukraine. The results reveal that, prior to the pandemic, significant rises in EPU led to high Bitcoin returns. After the outbreak of COVID-19 and the recent war in Ukraine, the hedging effectiveness of Bitcoin weakens owing to its tight correlation with stocks in times of rising inflation expectations and the global central banks' hawkish response to them. Moreover, the Bitcoin hedging property is country-specific and depends on Bitcoin market conditions and uncertainty levels. We explain this heterogeneity by differences across countries in the recognition of Bitcoin as legal tender, Bitcoin trading volume, exchange market maturity, and investors' attitudes towards risk. (A quantile regression sketch follows this entry.)
Keywords: | Bitcoin,the COVID-19,the war in Ukraine,the economic policy uncertainty,hedge,country-specific analysis |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03737131&r= |
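A simplified cousin of the quantile-on-quantile approach in the entry above: regress Bitcoin returns on an EPU-change series at several quantiles of the return distribution. The column names and simulated data are assumptions; the full QQ method also conditions on quantiles of EPU itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
data = pd.DataFrame({"epu_change": rng.normal(size=1_000)})
data["btc_return"] = 0.2 * data["epu_change"] + rng.standard_t(df=4, size=1_000)

X = sm.add_constant(data[["epu_change"]])
for tau in (0.1, 0.5, 0.9):
    res = sm.QuantReg(data["btc_return"], X).fit(q=tau)   # quantile regression at level tau
    print(f"tau={tau}: EPU coefficient = {res.params['epu_change']:.3f}")
```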
By: | Matias D. Cattaneo; Richard K. Crump; Weining Wang |
Abstract: | Beta-sorted portfolios -- portfolios comprised of assets with similar covariation with selected risk factors -- are a popular tool in empirical finance for analyzing models of (conditional) expected returns. Despite their widespread use, little is known about their statistical properties, in contrast to comparable procedures such as two-pass regressions. We formally investigate the properties of beta-sorted portfolio returns by casting the procedure as a two-step nonparametric estimator with a nonparametric first step and a beta-adaptive portfolio construction. Our framework rationalizes the well-known estimation algorithm with precise economic and statistical assumptions on the general data generating process and characterizes its key features. We study beta-sorted portfolios both for a single cross-section and for aggregation over time (e.g., the grand mean), offering conditions that ensure consistency and asymptotic normality along with new uniform inference procedures allowing for uncertainty quantification and testing of various hypotheses relevant in financial applications. We also highlight some limitations of current empirical practices and discuss what inferences can and cannot be drawn from returns to beta-sorted portfolios for either a single cross-section or across the whole sample. Finally, we illustrate the functionality of our new procedures in an empirical application. (A beta-sorting sketch follows this entry.)
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2208.10974&r= |
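A bare-bones version of the procedure analyzed above, on simulated placeholder data: estimate each asset's beta on a factor over a formation window, sort assets into quintiles by beta, then compute the bins' equal-weighted returns in the holding window. The inference theory developed in the paper is not reproduced here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(10)
n_assets, n_days = 200, 500
factor = pd.Series(rng.normal(0, 0.01, n_days))
betas_true = rng.uniform(0.2, 1.8, n_assets)
returns = pd.DataFrame(np.outer(factor, betas_true)
                       + rng.normal(0, 0.02, (n_days, n_assets)))

formation, holding = returns.iloc[:250], returns.iloc[250:]
betas_hat = formation.apply(lambda col: np.cov(col, factor[:250])[0, 1]
                            / factor[:250].var())
quintile = pd.qcut(betas_hat, 5, labels=False)          # 0 = low beta, 4 = high beta

portfolio_returns = {q: holding.loc[:, quintile == q].mean(axis=1).mean()
                     for q in range(5)}
print(portfolio_returns)                                 # average holding-period return per bin
```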
By: | Jean-Marc Le Caillec (IMT Atlantique - ITI - Département lmage et Traitement Information - IMT Atlantique - IMT Atlantique - IMT - Institut Mines-Télécom [Paris]) |
Abstract: | In this paper, we present the results of nonlinearity detection in hedge fund price returns. The main challenge is the small length of the time series, since returns for this kind of asset are updated once a month. As usual, the nonlinearity of the return time series is a key point in accurately assessing the risk of an asset, since the normality assumption is rarely met in financial data. The basic idea for overcoming the lack of robustness of hypothesis testing on small time series is to merge several hypothesis tests to improve the final decision (i.e., whether the return time series is linear or not). Several aspects of the index/decision fusion, such as the fusion topology as well as the information shared by several hypothesis tests, have to be carefully investigated to design a robust decision process. This decision rule is applied to two databases of hedge fund price returns (TASS and SP). In particular, the linearity assumption is generally accepted for the factorial model. However, funds with detected nonlinearity in their returns are generally correlated with exchange rates. Since exchange rates evolve nonlinearly, the nonlinearity is explained by this risk factor and not by a nonlinear dependence on the risk factors. (A p-value fusion sketch follows this entry.)
Keywords: | nonlinearity detection,decision fusion,hedge funds,price return model |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03739132&r= |
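One simple flavor of the test-fusion idea above: combine p-values from several nonlinearity tests run on the same short return series into a single decision using Fisher's method. The p-values below are hypothetical placeholders standing in for the outputs of individual tests; the paper studies richer fusion topologies and the dependence between tests.

```python
from scipy.stats import combine_pvalues

p_values = [0.08, 0.15, 0.03, 0.21]            # hypothetical per-test p-values
stat, combined_p = combine_pvalues(p_values, method="fisher")
print(f"Fisher statistic = {stat:.2f}, combined p-value = {combined_p:.4f}")
print("reject linearity at 5%" if combined_p < 0.05 else "fail to reject linearity")
```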
By: | Carlos Alberto Piscarreta Pinto Ferreira |
Abstract: | Although there is an extensive literature on volatility in financial markets, to our knowledge, few empirical studies specifically focus on the drivers of the volatility of sovereign bond yields. This empirical paper aims to fill part of this gap and to provide more up-to-date empirical insights. We add to previous work by examining the issue simultaneously across a broad set of advanced economies. Our analysis shows that sovereign bond unconditional volatility exhibits mean reversion and persistence. Bond yield volatility responds to proximate market movements and global risk. However, that response is found to be uneven across geographies, asymmetric in some cases and possibly time-varying. The impact of macro and policy uncertainty depends on the specific uncertainty measures used and is rarely very meaningful. (A volatility-persistence sketch follows this entry.)
Keywords: | Volatility, Bond Market, Public Debt, Sovereign Risk, Panel Data, Fixed Effects |
JEL: | C23 E44 G11 G15 H63 |
Date: | 2022–07 |
URL: | http://d.repec.org/n?u=RePEc:ise:remwps:wp02412022&r= |
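An illustrative check of the persistence and mean reversion reported above, not the paper's panel fixed-effects setup: build a monthly realized-volatility series from daily yield changes and fit an AR(1). On real yield data, a positive lag coefficient well below one is consistent with persistent but mean-reverting volatility; the series below is a simulated placeholder.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(11)
dates = pd.bdate_range("2005-01-03", periods=4_000)
daily_changes = pd.Series(rng.normal(0, 0.03, len(dates)), index=dates)  # daily yield changes

monthly_vol = daily_changes.resample("M").std()          # monthly realized volatility
res = AutoReg(monthly_vol.dropna(), lags=1, trend="c").fit()
print(res.params)                                        # constant and lag-1 (persistence) coefficient
```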
By: | Angelopoulos, Anastasios N. (?); Bates, Stephen (?); Candes, Emmanuel J. (?); Jordan, Michael I. (?); Lei, Lihua (Stanford U) |
Abstract: | We introduce a framework for calibrating machine learning models so that their predictions satisfy explicit, finite-sample statistical guarantees. Our calibration algorithm works with any underlying model and (unknown) data-generating distribution and does not require model refitting. The framework addresses, among other examples, false discovery rate control in multi-label classification, intersection-over-union control in instance segmentation, and the simultaneous control of the type-1 error of outlier detection and confidence set coverage in classification or regression. Our main insight is to reframe the risk-control problem as multiple hypothesis testing, enabling techniques and mathematical arguments different from those in the previous literature. We use our framework to provide new calibration methods for several core machine learning tasks with detailed worked examples in computer vision and tabular medical data. |
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:ecl:stabus:4030&r= |
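A minimal split-conformal sketch in the spirit of the framework above: calibrate a scalar threshold on held-out residuals so that prediction intervals reach a target coverage with a finite-sample guarantee. This is only the simplest special case; the paper's framework controls more general risks (e.g., false discovery rate, intersection-over-union) and works with any underlying model. Data and model below are placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(12)
X = rng.normal(size=(2_000, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=2_000)

# Split: fit on one half, calibrate the interval width on the other half.
X_fit, y_fit, X_cal, y_cal = X[:1_000], y[:1_000], X[1_000:], y[1_000:]
model = LinearRegression().fit(X_fit, y_fit)

alpha = 0.1
scores = np.abs(y_cal - model.predict(X_cal))                      # conformity scores
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n,
                method="higher")                                   # finite-sample quantile

X_test = rng.normal(size=(5, 5))
preds = model.predict(X_test)
for lo, hi in zip(preds - q, preds + q):
    print(f"[{lo:.2f}, {hi:.2f}]")                                 # ~90% coverage intervals
```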