nep-rmg New Economics Papers
on Risk Management
Issue of 2020‒10‒19
28 papers chosen by
Stan Miles
Thompson Rivers University

  1. Asymmetric Volatility Effects in Risk Management: An Empirical Analysis using a Stock Index Futures By Benavides Guillermo
  2. Market risk: Exponential weighting in the value-at-risk calculation By Broll, Udo; Förster, Andreas
  3. lCARE -- localizing Conditional AutoRegressive Expectiles By Xiu Xu; Andrija Mihoci; Wolfgang Karl Härdle
  4. COVID-19, economic policy uncertainty and stock market crash risk By Zhifeng Liu; Toan Luu Duc Huynh; Jianjun Sun; Peng-Fei Dai
  5. Basis key to hedging success By Schulz, Lee
  6. Prudential Regulation in Financial Networks By Mohamed Belhaj; Renaud Bourlès; Frédéric Deroïan
  7. On the Pricing of Currency Options under Variance Gamma Process By Azwar Abdulsalam; Gowri Jayprakash; Abhijeet Chandra
  8. Prudential Regulation in Financial Networks By Mohamed Belhaj; Renaud Bourlès; Frédéric Deroïan
  9. Revising the Impact of Financial and Non-Financial Global Stock Market Volatility Shocks By Kang, Wensheng; Ratti, Ronald A.; Vespignani, Joaquin L.
  10. Liquidations: DeFi on a Knife-edge By Daniel Perez; Sam M. Werner; Jiahua Xu; Benjamin Livshits
  11. Crash-sensitive Kelly Strategy built on a modified Kreuser-Sornette bubble model tested over three decades of twenty equity indices By J-C Gerlach; Jerome L Kreuser; Didier Sornette
  12. An AI approach to measuring financial risk By Lining Yu; Wolfgang Karl Härdle; Lukas Borke; Thijs Benschop
  13. Learning Time Varying Risk Preferences from Investment Portfolios using Inverse Optimization with Applications on Mutual Funds By Shi Yu; Yuxin Chen; Chaosheng Dong
  14. Time your hedge with Deep Reinforcement Learning By Eric Benhamou; David Saltiel; Sandrine Ungari; Abhishek Mukhopadhyay
  15. A Return Based Measure of Firm Quality By Ravi Jagannathan; Yang Zhang
  16. Transparency, Auditability and eXplainability of Machine Learning Models in Credit Scoring By Michael Bücker; Gero Szepannek; Alicja Gosiewska; Przemyslaw Biecek
  17. COVID-19 and SME Failures By Pierre-Olivier Gourinchas; Ṣebnem Kalemli-Özcan; Veronika Penciakova; Nick Sander
  18. Interpreting Unconditional Quantile Regression with Conditional Independence By David M. Kaplan
  19. The impact of Basel IV on real estate financing By Demary, Markus; Voigtländer, Michael
  20. Modeling and analysis of the effect of COVID-19 on the stock price: V and L-shape recovery By Ajit Mahata; Anish Rai; Om Prakash; Md Nurujjaman
  21. Learning Classifiers under Delayed Feedback with a Time Window Assumption By Masahiro Kato; Shota Yasui
  22. Minimum Variance Hedging: Levels versus first Difference By Prehn, Sören
  23. The Distribution of COVID-19 Related Risks By Patrick Baylis; Pierre-Loup Beauregard; Marie Connolly; Nicole Fortin; David A. Green; Pablo Gutierrez Cubillos; Sam Gyetvay; Catherine Haeck; Timea Laura Molnar; Gaëlle Simard-Duplain; Henry E. Siu; Maria teNyenhuis; Casey Warman
  24. Online Action Learning in High Dimensions: A New Exploration Rule for Contextual $\epsilon_t$-Greedy Heuristics By Claudio C. Flores; Marcelo C. Medeiros
  25. Price, Volatility and the Second-Order Economic Theory By Victor Olkhov
  26. Firm-Level Risk Exposures and Stock Returns in the Wake of COVID-19 By Steven J. Davis; Stephen Hansen; Cristhian Seminario-Amez
  27. Hierarchical PCA and Modeling Asset Correlations By Marco Avellaneda; Juan Andrés Serur
  28. Bet against the trend and cash in profits. By Raquel Almeida Ramos; Federico Bassi; Dany Lang

  1. By: Benavides Guillermo
    Abstract: In this research paper, ARCH-type models and option implied volatilities (IV) are applied to estimate the Value-at-Risk (VaR) of a stock index futures portfolio for several time horizons. The relevance of asymmetries in the volatility estimation is considered. The empirical analysis is performed on futures contracts of both the Standard & Poor's 500 Index and the Mexican Stock Exchange. According to the results, the IV model is superior in terms of precision compared to the ARCH-type models. Under both methodologies there are relevant statistical gains when asymmetries are included. These gains range from 4 to around 150 basis points of minimum capital risk requirements. This research documents the importance of taking asymmetric (leverage) effects into account in volatility forecasts for risk management analysis.
    Keywords: Asymmetric volatility;Backtesting;GARCH;TARCH;Implied volatility;Stock index futures;Value at Risk;Mexico
    JEL: C15 C22 C53 E31 E37
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:bdm:wpaper:2020-10&r=all
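As a hedged illustration of the entry above: the sketch below filters a GJR/TARCH-style asymmetric variance (the model family the paper draws on) through a return series and converts the final variance into a one-day VaR. All parameter values, the 95% quantile, and the toy return series are illustrative assumptions, not estimates from the paper.

```python
import math

def gjr_garch_var(returns, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.90,
                  z=1.645):
    """One-day 95% VaR from a GJR/TARCH(1,1) variance recursion.
    gamma adds extra weight to negative shocks (the leverage effect).
    Parameters are illustrative, not estimated."""
    # seed the recursion with the sample variance (a common shortcut)
    mean = sum(returns) / len(returns)
    h = sum((r - mean) ** 2 for r in returns) / len(returns)
    for r in returns:
        neg = 1.0 if r < 0 else 0.0  # indicator of a negative shock
        h = omega + (alpha + gamma * neg) * r ** 2 + beta * h
    return z * math.sqrt(h)

# toy series: a calm stretch followed by a sharp loss
rets = [0.001, -0.002, 0.0015, -0.001, 0.002, -0.03]
print(gjr_garch_var(rets))
```

The asymmetry is visible directly: a large negative final return raises the VaR more than a positive return of the same magnitude would.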
  2. By: Broll, Udo; Förster, Andreas
    Abstract: When measuring market risk, credit institutions and Alternative Investment Fund Managers may deviate from equally weighting historical data in their Value-at-Risk calculation and instead use an exponential time series weighting. Exponential weighting is very popular in the Value-at-Risk calculation because it takes changes in market volatility into account immediately, so the VaR adapts quickly. In less volatile market phases, this leads to a reduction in VaR and thus to lower own funds requirements for credit institutions. However, with exponential weighting high past volatility is quickly forgotten, and the VaR may be underestimated. To prevent this, credit institutions and Alternative Investment Fund Managers are not completely free to choose a weighting (decay) factor. This article describes the legal requirements and deals with the calculation of the permissible weighting factor. As an example, we use the exchange rate between the euro and the Polish zloty to estimate the Value-at-Risk. We show the calculation of the weighting factor with two different approaches. The article also discusses exceptions to the general legal requirements.
    Keywords: risk management,market risk,exponentially weighted moving average,weighting scheme,Value-at-Risk
    JEL: C22 G18 G28
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:tudcep:0420&r=all
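To make the abstract's mechanism concrete, here is a minimal RiskMetrics-style EWMA VaR sketch. The decay factors, the 95% quantile, and the toy series are illustrative assumptions; the regulatory floor on the decay factor that the article analyzes is not modeled here.

```python
import math

def ewma_var(returns, lam=0.94, z=1.645):
    """One-day VaR from an exponentially weighted moving-average
    variance. lam is the decay factor: the weight on a squared return
    k periods old falls off as lam**k."""
    h = returns[0] ** 2  # seed with the first squared return
    for r in returns[1:]:
        h = lam * h + (1.0 - lam) * r ** 2
    return z * math.sqrt(h)

# a volatile episode followed by a calm stretch
rets = [-0.04, 0.035, -0.03] + [0.001, -0.001] * 15
# a faster-forgetting (smaller) decay factor discards the old
# turbulence sooner and yields a lower, possibly too low, VaR
print(ewma_var(rets, lam=0.97), ewma_var(rets, lam=0.85))
```

This is exactly the underestimation risk the article describes: the smaller the decay factor, the faster a past volatility spike drops out of the estimate.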
  3. By: Xiu Xu; Andrija Mihoci; Wolfgang Karl Härdle
    Abstract: We account for time-varying parameters in the conditional expectile-based value at risk (EVaR) model. EVaR downside risk is more sensitive to the magnitude of portfolio losses than the quantile-based value at risk (QVaR). Rather than fitting expectile models over ad-hoc fixed data windows, this study focuses on parameter instability of tail risk dynamics by utilising a local parametric approach. Our framework yields a data-driven optimal interval length at each time point, selected by a sequential test. Empirical evidence from three stock markets over 2005-2016 shows that the selected lengths account for approximately 3-6 months of daily observations. The method performs favorably compared to models with one-year fixed intervals, as well as to quantile-based candidates, when employing a time invariant portfolio protection (TIPP) strategy for the DAX, FTSE 100 and S&P 500 portfolios. The tail risk measure implied by our model provides valuable insights for asset allocation and portfolio insurance.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.13215&r=all
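A sketch of the expectile at the heart of EVaR, computed via the standard asymmetric-least-squares fixed point. The localizing (adaptive-window) procedure of the paper is not reproduced; the sample data are invented.

```python
def expectile(xs, tau=0.05, iters=100):
    """Sample expectile by the asymmetric-least-squares fixed point:
    e is a weighted mean of the data with weight tau on observations
    above e and (1 - tau) on observations at or below e. Unlike a
    quantile, e reacts to the *magnitude* of tail observations."""
    e = sum(xs) / len(xs)  # start at the mean (the tau = 0.5 expectile)
    for _ in range(iters):
        w = [tau if x > e else 1.0 - tau for x in xs]
        e = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return e

xs = [-0.05, -0.02, -0.01, 0.0, 0.01, 0.02, 0.05]
print(expectile(xs, 0.05), expectile(xs, 0.5))
```

Enlarging the worst loss pulls the low expectile further down even when the loss's rank is unchanged, which is the sensitivity-to-magnitude property the abstract contrasts with QVaR.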
  4. By: Zhifeng Liu; Toan Luu Duc Huynh; Jianjun Sun; Peng-Fei Dai
    Abstract: This paper investigates the impact of economic policy uncertainty (EPU) on the crash risk of the US stock market during the COVID-19 pandemic. To this end, we use the GARCH-S (GARCH with skewness) model to estimate daily skewness as a proxy for stock market crash risk. The empirical results show a significantly negative correlation between EPU and stock market crash risk, indicating that rising EPU increases crash risk. Moreover, the negative correlation becomes stronger after the global COVID-19 outbreak, showing that the crash risk of the US stock market is more strongly affected by EPU during the epidemic.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.01043&r=all
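The paper's crash-risk proxy is conditional skewness from a GARCH-S model; as a rough, hedged stand-in, plain sample skewness over a return window conveys the idea. The data below are invented.

```python
def skewness(xs):
    """Sample skewness: the third standardized central moment. More
    negative values indicate a fatter left tail, i.e. higher crash risk."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n   # variance
    m3 = sum((x - m) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5

calm = [0.01, -0.01, 0.012, -0.008, 0.009, -0.011]
crashy = calm + [-0.08]  # one large down move drags skewness negative
print(skewness(calm), skewness(crashy))
```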
  5. By: Schulz, Lee
    Abstract: Livestock Outlook: Understand how fed cattle risk management tools work.
    Date: 2019–11–15
    URL: http://d.repec.org/n?u=RePEc:isu:genstf:201911150800001718&r=all
  6. By: Mohamed Belhaj (IMF - Middle East Center for Economics and Finance); Renaud Bourlès (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique - AMU - Aix Marseille Université, IUF - Institut Universitaire de France - M.E.N.E.S.R. - Ministère de l'Education nationale, de l’Enseignement supérieur et de la Recherche); Frédéric Deroïan (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique - AMU - Aix Marseille Université)
    Abstract: We analyze risk-taking regulation when financial institutions are linked through shareholdings. We model regulation as an upper bound on institutions' default probability, and pin down the corresponding limits on risk-taking as a function of the shareholding network. We show that these limits depend on an original centrality measure that relies on the cross-shareholding network twice: (i) through a risk-sharing effect coming from complementarities in risk-taking and (ii) through a resource effect that creates heterogeneity among institutions. When risk is large, we find that the risk-sharing effect relies on a simple centrality measure: the ratio between Bonacich and self-loop centralities. More generally, we show that an increase in cross-shareholding increases optimal risk-taking through the risk-sharing effect, but that the resource effect can be detrimental to some banks. We show how optimal risk-taking levels can be implemented through cash or capital requirements, and analyze complementary interventions through key-player analyses. We finally illustrate our model using real-world financial data and discuss extensions incorporating debt networks, correlated investment portfolios and endogenous networks.
    Keywords: prudential regulation, financial network, risk-taking
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-02950881&r=all
  7. By: Azwar Abdulsalam; Gowri Jayprakash; Abhijeet Chandra
    Abstract: The pricing of currency options is largely dependent on the dynamic relationship between a pair of currencies. Typically, the pricing of options with payoffs dependent on multiple assets becomes tricky for reasons such as the non-Gaussian distributions of financial variables and non-linear macroeconomic relations between these markets. We study options on the US dollar-Indian rupee (USD-INR) currency pair and test several pricing formulas to evaluate their performance under different volatility regimes. We show the performance of the variance gamma and the symmetric variance gamma models during different volatility periods as well as for different moneyness levels, in comparison to the modified Black-Scholes model. In all cases, the variance gamma model outperforms Black-Scholes. This can be attributed to the control of the kurtosis and skewness of the distribution that the variance gamma model allows. Our findings support the superiority of the variance gamma process for currency option pricing and its use in better risk management strategies.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.14113&r=all
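As a hedged sketch of the process named in the title: a Monte Carlo price for a European call whose log-return follows a Variance Gamma process (gamma-time-changed Brownian motion, Madan-Carr-Chang parameterization). The sigma/theta/nu values are illustrative, not calibrated to USD-INR, and this is simulation rather than the closed-form pricing the paper tests.

```python
import math, random

def vg_call_price(s0, k, t, r, sigma=0.12, theta=-0.14, nu=0.2,
                  n=100_000, seed=7):
    """Monte Carlo price of a European call under a Variance Gamma
    log-return X = theta*G + sigma*W(G), with G a gamma time change
    of mean t and variance nu*t. Parameters are illustrative."""
    rng = random.Random(seed)
    # martingale (drift) correction so that E[S_T] = s0 * exp(r*t)
    omega = math.log(1.0 - theta * nu - 0.5 * sigma ** 2 * nu) / nu
    payoff_sum = 0.0
    for _ in range(n):
        g = rng.gammavariate(t / nu, nu)          # gamma time change
        x = theta * g + sigma * math.sqrt(g) * rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r + omega) * t + x)
        payoff_sum += max(st - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n

# an at-the-money 3-month call at a USD-INR-style spot level
print(vg_call_price(74.0, 74.0, 0.25, 0.04))
```

Negative theta skews the return distribution left and nu fattens its tails, which is the kurtosis/skewness control the abstract credits for the model's outperformance.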
  8. By: Mohamed Belhaj (IMF - Middle East Center for Economics and Finance (CEF)); Renaud Bourlès (Aix-Marseille Univ, CNRS, Ecole Centrale, AMSE, Marseille, France); Frédéric Deroïan (Aix-Marseille Univ, CNRS, AMSE, Marseille, France)
    Abstract: We analyze risk-taking regulation when financial institutions are linked through shareholdings. We model regulation as an upper bound on institutions' default probability, and pin down the corresponding limits on risk-taking as a function of the shareholding network. We show that these limits depend on an original centrality measure that relies on the cross-shareholding network twice: (i) through a risk-sharing effect coming from complementarities in risk-taking and (ii) through a resource effect that creates heterogeneity among institutions. When risk is large, we find that the risk-sharing effect relies on a simple centrality measure: the ratio between Bonacich and self-loop centralities. More generally, we show that an increase in cross-shareholding increases optimal risk-taking through the risk-sharing effect, but that the resource effect can be detrimental to some banks. We show how optimal risk-taking levels can be implemented through cash or capital requirements, and analyze complementary interventions through key-player analyses. We finally illustrate our model using real-world financial data and discuss extensions incorporating debt networks, correlated investment portfolios and endogenous networks.
    Keywords: financial network, risk-taking, prudential regulation
    JEL: C72 D85
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:aim:wpaimx:2030&r=all
  9. By: Kang, Wensheng; Ratti, Ronald A.; Vespignani, Joaquin L.
    Abstract: We decompose global stock market volatility shocks into shocks of financial origin and shocks of non-financial origin. Global stock market volatility shocks arising from financial sources reduce global output and inflation substantially more than shocks from non-financial sources. Financial stock market volatility shocks forecast 16.85% and 16.88% of the variation in global growth and inflation, respectively. In contrast, non-financial stock market volatility shocks forecast only 8.0% and 2.19% of the variation in global growth and inflation. Beyond this marked difference, the global interest/policy rate responds similarly to both shocks.
    Keywords: Global, Stock market volatility Shocks, Monetary Policy, FAVAR
    JEL: E00 E02 E3 E40
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:103019&r=all
  10. By: Daniel Perez; Sam M. Werner; Jiahua Xu; Benjamin Livshits
    Abstract: The trustless nature of permissionless blockchains renders overcollateralization a key safety component relied upon by decentralized finance (DeFi) protocols. Nonetheless, factors such as price volatility may undermine this mechanism. In order to protect protocols from suffering losses, undercollateralized positions can be liquidated. In this paper, we present the first in-depth empirical analysis of liquidations on protocols for loanable funds (PLFs). We examine Compound, one of the most widely used PLFs, for a period starting from its conception to September 2020. We analyze participants' behavior, and risk appetite in particular, to elucidate recent developments in the dynamics of the protocol. Furthermore, we assess how this has changed with a modification in Compound's incentive structure and show that variations of only 3% in an asset's price can result in over 10m USD becoming liquidable. To further understand the implications of this, we investigate the efficiency of liquidators. We find that liquidators' efficiency has improved significantly over time, with currently over 70% of liquidable positions being immediately liquidated. Lastly, we discuss how a false sense of security, fostered by a misconception of the stability of non-custodial stablecoins, increases the overall liquidation risk faced by Compound participants.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.13235&r=all
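A hedged, simplified sketch of the knife-edge the abstract describes: Compound-style accounting reduced to a single collateral asset with no interest accrual. The collateral factor and position sizes are invented, but they show how a mere 3% price move can flip a position from safe to liquidable.

```python
def is_liquidable(collateral_amount, collateral_price, collateral_factor,
                  borrowed_value):
    """A position can be liquidated once its borrow value exceeds its
    discounted collateral value (simplified: one collateral asset,
    no accrued interest)."""
    borrow_limit = collateral_amount * collateral_price * collateral_factor
    return borrowed_value > borrow_limit

# a position borrowed almost up to its limit at a price of 100
amount, factor, borrowed = 10.0, 0.75, 740.0
print(is_liquidable(amount, 100.0, factor, borrowed))         # -> False (safe)
print(is_liquidable(amount, 100.0 * 0.97, factor, borrowed))  # -> True (3% drop)
```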
  11. By: J-C Gerlach (ETH Zürich - Department of Management, Technology, and Economics (D-MTEC)); Jerome L Kreuser (ETH Zurich); Didier Sornette (ETH Zürich - Department of Management, Technology, and Economics (D-MTEC); Swiss Finance Institute; Southern University of Science and Technology; Tokyo Institute of Technology)
    Abstract: We present a modified version of the super-exponential rational expectations “Efficient Crashes” bubble model of Kreuser and Sornette (2019), with a different formulation of the expected return that makes clearer the additive nature of corrective jumps. We derive a Kelly trading strategy for the new model. We combine the strategy with a simplified estimation procedure for the model parameters from price time series. We optimize the control parameters of the trading strategy by maximizing the return-weighted accuracy of trades. This enables us to predict the out-of-sample optimal investment purely based on in-sample calibration of the model on historical data. Our approach solves the difficult problem of selecting the portfolio rebalancing time, as we endogenize it as an optimization parameter. We develop an ex-ante backtest that allows us to test our strategy on twenty equity asset indices. We find that our trading strategy achieves positive trading performance for 95% of tested assets and outperforms the buy-and-hold strategy in terms of CAGR and Sharpe ratio in 60% of cases. In our simulations, we do not allow for any short trading or leverage; we simply simulate allocating 0-100% of one’s capital between a risk-free and the risky asset over time. The optimal rebalancing periods are mostly of around a month’s duration; thus, the model does not overtrade, ensuring reasonable trading costs. Furthermore, during crashes, the model reduces the invested amount of capital sufficiently soon to reduce the impact of price drawdowns. In addition to the Dotcom bubble, the great financial crisis of 2008 and other historical crashes, our study also covers the most recent crash in March 2020 that happened globally as a consequence of the economic shutdowns imposed in reaction to the spread of the Coronavirus across the world.
    Keywords: financial bubbles, efficient crashes, positive feedback, rational expectation, Kelly criterion, optimal investment, Covid-19 crash
    JEL: C53 G01 G17
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2085&r=all
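For orientation, here is the textbook Kelly allocation rule for a single risky asset, clipped to the 0-100% no-leverage, no-shorting range the backtest above uses. This is the standard lognormal Kelly fraction, not the paper's bubble-model-specific derivation; all numbers are illustrative.

```python
def kelly_fraction(mu, sigma2, rf=0.0):
    """Continuous-time Kelly allocation to one risky asset:
    f* = (mu - rf) / sigma^2, clipped to [0, 1] to match a
    no-leverage, no-shorting setup."""
    f = (mu - rf) / sigma2
    return max(0.0, min(1.0, f))

# expected excess return 6%/yr at 20% vol: raw Kelly is 1.5, capped at 1
print(kelly_fraction(0.06, 0.04))   # -> 1.0
# crash regime: negative expected return -> move fully to cash
print(kelly_fraction(-0.10, 0.09))  # -> 0.0
```

In a crash-aware model like the one above, the expected return input turns negative as bubble indicators build up, which is what drives capital out of the risky asset before drawdowns.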
  12. By: Lining Yu; Wolfgang Karl Härdle; Lukas Borke; Thijs Benschop
    Abstract: Artificial intelligence (AI) brings about new quantitative techniques to assess the state of an economy. Here we describe a new measure for systemic risk: the Financial Risk Meter (FRM). This measure is based on the penalization parameter (lambda) of a linear quantile lasso regression. The FRM is calculated by taking the average of the penalization parameters over the 100 largest US publicly traded financial institutions. We demonstrate the suitability of this AI-based risk measure by comparing the proposed FRM to other measures for systemic risk, such as VIX, SRISK and Google Trends. We find that mutual Granger causality exists between the FRM and these measures, which supports the validity of the FRM as a systemic risk measure. The implementation of this project is carried out using parallel computing; the code is published on www.quantlet.de with keyword FRM. The R package RiskAnalytics is another tool for integrating and facilitating the research, calculation and analysis methods around the FRM project. Visualizations and the up-to-date FRM can be found on hu.berlin/frm.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.13222&r=all
  13. By: Shi Yu; Yuxin Chen; Chaosheng Dong
    Abstract: The fundamental principle in Modern Portfolio Theory (MPT) is based on the quantification of the portfolio's risk related to performance. Although MPT has made huge impacts on the investment world and prompted the success and prevalence of passive investing, it still has shortcomings in real-world applications. One of the main challenges is that the level of risk an investor can endure, known as \emph{risk-preference}, is a subjective choice that is tightly related to psychology and behavioral science in decision making. This paper presents a novel approach of measuring risk preference from existing portfolios using inverse optimization on the mean-variance portfolio allocation framework. Our approach allows the learner to continuously estimate real-time risk preferences using concurrent observed portfolios and market price data. We demonstrate our methods on real market data that consists of 20 years of asset pricing and 10 years of mutual fund portfolio holdings. Moreover, the quantified risk preference parameters are validated with two well-known risk measurements currently applied in the field. The proposed methods could lead to practical and fruitful innovations in automated/personalized portfolio management, such as Robo-advising, to augment financial advisors' decision intelligence in a long-term investment horizon.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.01687&r=all
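A toy version of the inverse problem in the entry above: if an observed portfolio solved an unconstrained mean-variance problem, w = (1/gamma) inv(cov) mu, the risk-aversion coefficient can be read off in closed form as gamma = (w . mu) / (w' cov w). The paper solves a full constrained inverse-optimization problem; this closed form, and all numbers below, are illustrative assumptions.

```python
def implied_risk_aversion(w, mu, cov):
    """Recover gamma from an observed unconstrained mean-variance
    portfolio w = (1/gamma) * inv(cov) @ mu, by rearranging to
    gamma = (w . mu) / (w' cov w)."""
    n = len(w)
    w_mu = sum(w[i] * mu[i] for i in range(n))
    w_cov_w = sum(w[i] * cov[i][j] * w[j]
                  for i in range(n) for j in range(n))
    return w_mu / w_cov_w

mu = [0.08, 0.03]
cov = [[0.04, 0.006], [0.006, 0.01]]
gamma_true = 4.0
# build a portfolio from the known gamma (2x2 matrix inverse by hand),
# then recover gamma from the portfolio alone
det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
inv_mu = [(cov[1][1] * mu[0] - cov[0][1] * mu[1]) / det,
          (cov[0][0] * mu[1] - cov[1][0] * mu[0]) / det]
w = [x / gamma_true for x in inv_mu]
print(implied_risk_aversion(w, mu, cov))  # recovers gamma_true up to float error
```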
  14. By: Eric Benhamou; David Saltiel; Sandrine Ungari; Abhishek Mukhopadhyay
    Abstract: Can an asset manager plan the optimal timing for her/his hedging strategies given market conditions? The standard approach based on Markowitz or other more or less sophisticated financial rules aims to find the best portfolio allocation thanks to forecasted expected returns and risk but fails to fully relate market conditions to hedging strategies decision. In contrast, Deep Reinforcement Learning (DRL) can tackle this challenge by creating a dynamic dependency between market information and hedging strategies allocation decisions. In this paper, we present a realistic and augmented DRL framework that: (i) uses additional contextual information to decide an action, (ii) has a one period lag between observations and actions to account for one day lag turnover of common asset managers to rebalance their hedge, (iii) is fully tested in terms of stability and robustness thanks to a repetitive train test method called anchored walk forward training, similar in spirit to k fold cross validation for time series and (iv) allows managing leverage of our hedging strategy. Our experiment for an augmented asset manager interested in sizing and timing his hedges shows that our approach achieves superior returns and lower risk.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.14136&r=all
  15. By: Ravi Jagannathan; Yang Zhang
    Abstract: We show that superior performance relative to peers during stressful times identifies higher quality firms as measured by conventional historical financial statement based measures as well as default probability measures. Quality measured this way is persistent, but different from price momentum. Further, a managed portfolio that takes a long position in top quintile (Stable) firms and a short position in bottom quintile (Vulnerable) firms earns superior risk-adjusted returns in excess of the risk-free rate. The portfolio has an annualized Fama and French three-factor alpha of 5.2% (t=5.04) and a five-factor alpha of 3.3% (t=3.38).
    JEL: G0 G10 G11 G12
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27859&r=all
  16. By: Michael Bücker; Gero Szepannek; Alicja Gosiewska; Przemyslaw Biecek
    Abstract: A major requirement for credit scoring models is to provide a maximally accurate risk prediction. Additionally, regulators demand these models to be transparent and auditable. Thus, in credit scoring, very simple predictive models such as logistic regression or decision trees are still widely used and the superior predictive power of modern machine learning algorithms cannot be fully leveraged. Significant potential is therefore missed, leading to higher reserves or more credit defaults. This paper works out different dimensions that have to be considered for making credit scoring models understandable and presents a framework for making ``black box'' machine learning models transparent, auditable and explainable. Following this framework, we present an overview of techniques, demonstrate how they can be applied in credit scoring and how results compare to the interpretability of score cards. A real world case study shows that a comparable degree of interpretability can be achieved while machine learning techniques keep their ability to improve predictive power.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.13384&r=all
  17. By: Pierre-Olivier Gourinchas; Ṣebnem Kalemli-Özcan; Veronika Penciakova; Nick Sander
    Abstract: We estimate the impact of the COVID-19 crisis on business failures among small and medium size enterprises (SMEs) in seventeen countries using a large representative firm-level database. We use a simple model of firm cost-minimization and measure each firm’s liquidity shortfall during and after COVID-19. Our framework allows for a rich combination of sectoral and aggregate supply, productivity, and demand shocks. We estimate a large increase in the failure rate of SMEs under COVID-19 of nearly 9 percentage points, absent government support. Accommodation & Food Services, Arts, Entertainment & Recreation, Education, and Other Services are among the most affected sectors. The jobs at risk due to COVID-19 related SME business failures represent 3.1 percent of private sector employment. Despite the large impact on business failures and employment, we estimate only moderate effects on the financial sector: the share of Non Performing Loans on bank balance sheets would increase by up to 11 percentage points, representing 0.3 percent of banks’ assets and resulting in a 0.75 percentage point decline in the common equity Tier-1 capital ratio. We evaluate the cost and effectiveness of various policy interventions. The fiscal cost of an intervention that narrowly targets at risk firms can be modest (0.54% of GDP). However, at a similar level of effectiveness, non-targeted subsidies can be substantially more expensive (1.82% of GDP). Our results have important implications for the severity of the COVID-19 recession, the design of policies, and the speed of the recovery.
    JEL: D2 E65 G33
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27877&r=all
  18. By: David M. Kaplan
    Abstract: This note provides additional interpretation for the counterfactual outcome distribution and corresponding unconditional quantile "effects" defined and estimated by Firpo, Fortin, and Lemieux (2009) and Chernozhukov, Fernández-Val, and Melly (2013). With conditional independence of the policy variable of interest, these methods estimate the policy effect for certain types of policies, but not others. In particular, they estimate the effect of a policy change that itself satisfies conditional independence.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.03606&r=all
  19. By: Demary, Markus; Voigtländer, Michael
    Abstract: The new bank regulations generally summarised as Basel IV include the introduction of an output floor. This means that banks are allowed less deviation from standard approaches when using internal models. This change will have far-reaching consequences. According to estimates by the European Banking Authority (EBA), German banks alone will have to increase their minimum capital requirements by around 20 percent; overall, Basel IV will increase capital requirements by 38 percent in Germany and by an average of 26 percent across the EU. Banks are therefore facing major challenges. Due to the difficult economic situation, they cannot realise capital increases simply by withholding profits or by obtaining increased capital from investors. It is therefore likely that they will become more involved in government financing, since this does not require equity investment, and similarly likely that they will use their remaining equity primarily where they can achieve the highest margins, i.e. with relatively risky financing. In addition, securitisation and cooperation with credit funds are also becoming more likely, which means less transparency, along with more risk being shifted to the shadow banking sector. For borrowers, the reforms could go hand in hand with higher interest rates. These high costs are not offset by social advantages, since lending in the countries particularly affected by the reforms is relatively prudent. Overall, it is therefore advisable not only to postpone the reforms in their current conception, but to fundamentally reconsider them.
    JEL: E44 E51 G21
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:iwkpps:182020&r=all
  20. By: Ajit Mahata; Anish Rai; Om Prakash; Md Nurujjaman
    Abstract: The emergence of the COVID-19 pandemic, a new and novel risk factor, led to a stock price crash due to a rapid and synchronous sell-off by investors. However, within a short period, the quality sectors started recovering from the bottom. We develop a model of stock price movement to explain such phenomena based on institutional fund flow and financial antifragility, which represents the financial health of a company. The model assumes that during the crash the stock price does not depend on the financial antifragility of the company. We study the effects of shock length and the antifragility parameter on the stock price during the crisis period using synthetic and real fund flow data. We observe that the possibility of recovery of a quality stock decreases as the shock length increases beyond a specific period. On the other hand, a quality stock with higher antifragility shows a V-shaped recovery and outperforms others. The shock length and recovery length of quality stocks are almost equal, as seen in the Indian market. Financially stressed stocks, i.e., stocks with negative antifragility, showed an L-shaped recovery during the pandemic. The results show that under uncertainty such as COVID-19, investors restructure their portfolios toward quality stocks to de-risk their investments. The study may help investors make the right investment decisions during a crisis.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.13076&r=all
  21. By: Masahiro Kato; Shota Yasui
    Abstract: We consider training a binary classifier under delayed feedback (DF Learning). In DF Learning, we first receive negative samples; subsequently, some samples turn positive. This problem is conceivable in various real-world applications such as online advertisements, where the user action takes place long after the first click. Owing to the delayed feedback, simply separating the positive and negative data causes a sample selection bias. One solution is to assume that a long time window after first observing a sample reduces the sample selection bias. However, existing studies report that only using a portion of all samples based on the time window assumption yields suboptimal performance, while using all samples along with the time window assumption improves empirical performance. Extending these existing studies, we propose a method with an unbiased and convex empirical risk constructed from all samples under the time window assumption. We provide experimental results to demonstrate the effectiveness of the proposed method using a real traffic log dataset.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.13092&r=all
  22. By: Prehn, Sören
    Keywords: Risk and Uncertainty
    Date: 2020–09–18
    URL: http://d.repec.org/n?u=RePEc:ags:gewi18:305608&r=all
  23. By: Patrick Baylis; Pierre-Loup Beauregard; Marie Connolly; Nicole Fortin; David A. Green; Pablo Gutierrez Cubillos; Sam Gyetvay; Catherine Haeck; Timea Laura Molnar; Gaëlle Simard-Duplain; Henry E. Siu; Maria teNyenhuis; Casey Warman
    Abstract: This paper documents two COVID-related risks, viral risk and employment risk, and their distributions across the Canadian population. The measurement of viral risk is based on the VSE COVID Risk/Reward Assessment Tool, created to assist policymakers in determining the impacts of economic shutdowns and re-openings over the course of the pandemic. We document that women are more concentrated in high viral risk occupations and that this is the source of their greater employment loss over the course of the pandemic so far. They were also less likely to maintain one form of contact with their former employers, reducing employment recovery rates. Low educated workers face the same virus risk rates as high educated workers but much higher employment losses. Based on a rough counterfactual exercise, this is largely accounted for by their lower likelihood of switching to working from home which, in turn, is related to living conditions such as living in crowded dwellings. For both women and the low educated, existing inequities in their occupational distributions and living situations have resulted in them bearing a disproportionate amount of the risk emerging from the pandemic. Assortative matching in couples has tended to exacerbate risk inequities.
    JEL: E32 I18 J15 J16 J21
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27881&r=all
  24. By: Claudio C. Flores; Marcelo C. Medeiros
    Abstract: Bandit problems are pervasive in various fields of research and are also present in several practical applications. Examples, including dynamic pricing, assortment selection, and the design of auctions and incentives, span a large number of sequential treatment experiments. Different applications impose distinct levels of restrictions on viable actions. Some favor diversity of outcomes, while others require harmful actions to be closely monitored or mainly avoided. In this paper, we extend one of the most popular bandit solutions, the original $\epsilon_t$-greedy heuristic, to high-dimensional contexts. Moreover, we introduce a competing exploration mechanism that relies on search sets based on order statistics. We view our proposals as alternatives for cases where pluralism is valued or, in the opposite direction, cases where the end-user should carefully tune the range of exploration of new actions. We find reasonable bounds for the cumulative regret of a decaying $\epsilon_t$-greedy heuristic in both cases, and we provide an upper bound for the initialization phase implying that the regret bounds under order-statistics exploration are at most equal to, and typically better than, those obtained when random search is the sole exploration mechanism. Additionally, we show that end-users have sufficient flexibility to avoid harmful actions, since any cardinality for the higher-order statistics can be used to achieve a stricter upper bound. In a simulation exercise, we show that the algorithms proposed in this paper outperform simple and adapted counterparts.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.13961&r=all
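The baseline that item 24 extends is the classic decaying $\epsilon_t$-greedy heuristic. A minimal sketch (exploration schedule $\epsilon_t = \min(1, c/t)$ is a standard choice, not necessarily the paper's; the order-statistics search-set variant is not reproduced here):

```python
import random

def decaying_eps_greedy(reward_fns, horizon, c=1.0, seed=0):
    """Decaying eps_t-greedy: at step t, explore a random arm with
    probability eps_t = min(1, c/t); otherwise exploit the arm with the
    highest empirical mean reward so far."""
    rng = random.Random(seed)
    k = len(reward_fns)
    counts = [0] * k
    means = [0.0] * k
    for t in range(1, horizon + 1):
        eps = min(1.0, c / t)
        if rng.random() < eps:
            arm = rng.randrange(k)                       # explore
        else:
            arm = max(range(k), key=lambda a: means[a])  # exploit
        r = reward_fns[arm](rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]     # running mean
    return means, counts
```

Because $\sum_t \epsilon_t$ diverges while $\epsilon_t \to 0$, every arm keeps being explored occasionally, which is what makes logarithmic-order regret bounds attainable for this family of heuristics.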
  25. By: Victor Olkhov
    Abstract: This paper considers price volatility as the motivation for describing second-degree economic variables: trades and expectations aggregated during a certain time interval {\Delta}. We call this the second-order economic theory. The n-th degree products of the costs and volumes of trades performed by economic agents during the interval {\Delta} determine the n-th statistical moments of price. The first two price statistical moments define volatility. To model volatility, one needs a description of the squares of trades aggregated during the interval {\Delta}. To describe the price probability distribution, one would need all n-th statistical moments of price, but that is almost impossible. We define the squares of agents' trades and the macro expectations that govern the second-degree trades aggregated during the interval {\Delta}. We assume that agents perform trades under the action of multiple expectations. We derive equations for the second-degree trades and expectations in economic space, where economic space is taken to be a continuum of numerical risk grades. Numerical risk grades have been discussed for at least 80 years. We propose that econometrics permits risk assessment for almost all economic agents. Agents' risk ratings distribute agents over economic space and define the densities of macro second-degree trades and expectations. In the linear approximation, we derive the mean-square price and volatility disturbances as functions of the disturbances of the first- and second-degree trades. In a simple approximation, numerous expectations and their perturbations can cause small harmonic oscillations of the second-degree trade disturbances and induce harmonic oscillations of the price and volatility perturbations.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.14278&r=all
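One way to read the moment construction in item 25 (notation ours, a plausible rendering rather than the paper's exact definitions): if, during the interval {\Delta}, agent i trades volume U_i at price p_i with cost C_i = p_i U_i, then the n-th price statistical moment is determined by the n-th degree sums of costs and volumes:

```latex
p(n;\Delta) \;=\; \frac{\sum_i C_i^{\,n}}{\sum_i U_i^{\,n}}
            \;=\; \frac{\sum_i p_i^{\,n}\, U_i^{\,n}}{\sum_i U_i^{\,n}},
\qquad
\sigma^2(\Delta) \;=\; p(2;\Delta) \;-\; p(1;\Delta)^2 .
```

The first two moments give volatility, which is why modeling volatility requires the second-degree aggregates (the sums of C_i^2 and U_i^2) of trades over {\Delta}, as the abstract states.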
  26. By: Steven J. Davis; Stephen Hansen; Cristhian Seminario-Amez
    Abstract: Firm-level stock returns differ enormously in reaction to COVID-19 news. We characterize these reactions using the Risk Factors discussions in pre-pandemic 10-K filings and two text-analytic approaches: expert-curated dictionaries and supervised machine learning (ML). Bad COVID-19 news lowers returns for firms with high exposures to travel, traditional retail, aircraft production and energy supply -- directly and via downstream demand linkages -- and raises them for firms with high exposures to healthcare policy, e-commerce, web services, drug trials and materials that feed into supply chains for semiconductors, cloud computing and telecommunications. Monetary and fiscal policy responses to the pandemic strongly impact firm-level returns as well, but differently than pandemic news. Despite methodological differences, dictionary and ML approaches yield remarkably congruent return predictions. Importantly though, ML operates on a vastly larger feature space, yielding richer characterizations of risk exposures and outperforming the dictionary approach in goodness-of-fit. By integrating elements of both approaches, we uncover new risk factors and sharpen our explanations for firm-level returns. To illustrate the broader utility of our methods, we also apply them to explain firm-level returns in reaction to the March 2020 Super Tuesday election results.
    JEL: E44 G12 G14 G18
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27867&r=all
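The dictionary side of the method in item 26 can be sketched as simple term counting over a 10-K Risk Factors section; the dictionary names and terms below are invented for illustration and are not the authors' curated dictionaries:

```python
import re
from collections import Counter

# Hypothetical expert-curated dictionaries (illustrative terms only).
DICTIONARIES = {
    "travel":    {"airline", "hotel", "travel", "passenger"},
    "ecommerce": {"online", "e-commerce", "shipping", "platform"},
}

def exposure_scores(risk_factor_text):
    """Score a Risk Factors section as the share of tokens hitting each
    dictionary -- a crude proxy for the exposure measures described."""
    tokens = re.findall(r"[a-z\-]+", risk_factor_text.lower())
    total = len(tokens) or 1
    counts = Counter(tokens)
    return {name: sum(counts[t] for t in terms) / total
            for name, terms in DICTIONARIES.items()}
```

The supervised-ML approach in the paper replaces these fixed term lists with weights learned over a much larger feature space, which is what yields the richer risk characterizations the abstract reports.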
  27. By: Marco Avellaneda; Juan Andr\'es Serur
    Abstract: Modeling cross-sectional correlations between thousands of stocks, across countries and industries, can be challenging. In this paper, we demonstrate the advantages of using Hierarchical Principal Component Analysis (HPCA) over classic PCA. We also introduce a statistical clustering algorithm for identifying homogeneous clusters of stocks, or "synthetic sectors". We apply these methods to study cross-sectional correlations in the US, Europe, China, and Emerging Markets.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.04140&r=all
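The hierarchical idea in item 27 can be sketched in two stages: extract one principal factor per sector, then run PCA across the sector factors. This is a generic HPCA sketch under our own assumptions, not the authors' implementation:

```python
import numpy as np

def first_pc(X):
    """Loadings of the first principal component of a T x N matrix."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, -1]          # eigenvector of the largest eigenvalue

def hpca_sector_factors(returns, sectors):
    """Hierarchical PCA sketch: one factor per sector, then PCA on the
    sector factors. `sectors` maps sector name -> list of column indices."""
    factors = {}
    for name, cols in sectors.items():
        block = returns[:, cols]
        w = first_pc(block)
        factors[name] = block @ w           # sector factor time series
    F = np.column_stack(list(factors.values()))
    top_w = first_pc(F)                     # loadings of cross-sector factor
    return factors, top_w
```

Imposing the sector hierarchy before the top-level PCA is what keeps the correlation model tractable when N runs to thousands of stocks, since each eigenproblem stays small.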
  28. By: Raquel Almeida Ramos; Federico Bassi (Università Cattolica del Sacro Cuore; Dipartimento di Economia e Finanza, Università Cattolica del Sacro Cuore); Dany Lang
    Abstract: This paper intends to contribute to the theoretical literature on the determinants of exchange rate fluctuations. We build an agent-based model, based on behavioral assumptions inspired by the literature on behavioral finance and by empirical surveys of the behavior of foreign exchange professionals. In our artificial economy with two countries, traders can speculate on both exchange and interest rates, and allocate their wealth across heterogeneous assets. Fundamentalists use both fundamental and technical analysis, while chartists employ only the latter, and are either trend followers or trend contrarians. In our model, trend contrarians and cash-in mechanisms provide sufficient stability conditions, and allow us to explain and replicate most stylized facts of foreign exchange markets, namely (i) the excess volatility of the exchange rate with respect to its fundamentals, (ii) booms, busts and precarious equilibria, (iii) clusters of volatility, (iv) long memory and (v) fat tails.
    Keywords: Foreign exchange markets, clusters of volatility, fat tails, heterogeneous beliefs, agent-based models, stock-flow consistent models.
    JEL: D40 D84 G11 G12
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:ctc:serie1:def090&r=all
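The expectation rules behind the trader types in item 28 can be sketched as follows; the functional forms and parameter names (phi, chi) are ours, chosen to illustrate the fundamentalist/trend-follower/contrarian distinction, not the paper's calibrated equations:

```python
def expected_return(kind, price, fundamental, trend, phi=0.1, chi=0.3):
    """Stylized one-period expectation rules (illustrative only):
    fundamentalists expect reversion toward the fundamental plus a
    technical (trend) term; trend followers extrapolate the recent
    trend; contrarians bet against it."""
    if kind == "fundamentalist":
        return phi * (fundamental - price) + chi * trend
    if kind == "trend_follower":
        return chi * trend
    if kind == "contrarian":
        return -chi * trend
    raise ValueError(f"unknown trader type: {kind}")
```

In a model of this family, the contrarians' negative response to the trend is one of the forces that damps bubbles, which is consistent with the abstract's claim that trend contrarians (together with cash-in mechanisms) supply the stability conditions.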

This nep-rmg issue is ©2020 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.