nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒12‒20
twenty-two papers chosen by



  1. A Multi-criteria Approach to Evolve Sparse Neural Architectures for Stock Market Forecasting By Faizal Hafiz; Jan Broekaert; Davide La Torre; Akshya Swain
  2. Exploration of the Parameter Space in Macroeconomic Agent-Based Models By Karl Naumann-Woleske; Max Sina Knicker; Michael Benzaquen; Jean-Philippe Bouchaud
  3. Time Series Forecasting with Ensembled Stochastic Differential Equations Driven by Lévy Noise By Luxuan Yang; Ting Gao; Yubin Lu; Jinqiao Duan; Tao Liu
  4. Machine Learning Methods: Potential for Deposit Insurance By Defina, Ryan
  5. Uncovering Heterogeneous Regional Impacts of Chinese Monetary Policy By Tsang, Andrew
  6. Public Policymaking for International Agricultural Trade using Association Rules and Ensemble Machine Learning By Feras A. Batarseh; Munisamy Gopinath; Anderson Monken; Zhengrong Gu
  7. Quantitative Discourse Analysis at Scale - AI, NLP and the Transformer Revolution By Lachlan O'Neill; Nandini Anantharama; Wray Buntine; Simon D Angus
  8. A transformer-based model for default prediction in mid-cap corporate markets By Kamesh Korangi; Christophe Mues; Cristián Bravo
  9. Graph Auto-Encoders for Financial Clustering By Edward Turner
  10. Using Machine Learning to Predict Nosocomial Infections and Medical Accidents in a NICU By Beltempo, Marc; Bresson, Georges; Lacroix, Guy
  11. UnFEAR: Unsupervised Feature Extraction Clustering with an Application to Crisis Regimes Classification By Mr. Jorge A Chan-Lau
  12. Investing in a cryptocurrency price bubble: speculative Ponzi schemes and cyclic stochastic price pumps By Misha Perepelitsa
  13. Deep Hedging: Learning to Remove the Drift under Trading Frictions with Minimal Equivalent Near-Martingale Measures By Hans Buehler; Phillip Murray; Mikko S. Pakkanen; Ben Wood
  14. A Parsimonious Macroeconomic ABM for Labor Market Regulations By Caner Ates; Dietmar Maringer
  15. Quantum algorithms for numerical differentiation of expected values with respect to parameters By Koichi Miyamoto
  16. EUROLAB: A Multidimensional Labour Supply-Demand Model for EU countries By NARAZANI Edlira; COLOMBINO Ugo; PALMA FERNANDEZ Bianey
  17. General Equilibrium Effects of Insurance Expansions: Evidence from Long-Term Care Labor Markets By Martin Hackmann; Joerg Heining; Roman Klimke; Maria Polyakova; Holger Seibert
  18. Predicting Macroeconomic and Macrofinancial Stress in Low-Income Countries By Mr. Irineu E de Carvalho Filho; Hans Weisfeld; Fei Liu; Mr. Fabio Comelli; Mr. Andrea F Presbitero; Alexis Meyer-Cirkel; Mrs. Sandra V Lizarazo Ruiz; Klaus-Peter Hellwig; Rahul Giri; Chengyu Huang
  19. Assessing the fiscal-monetary policy mix in the euro area By Bańkowski, Krzysztof; Christoffel, Kai; Faria, Thomas
  20. A Lifecycle Approach to Insurance Solvency By Yuechen Dai; Tonghui Xu
  21. Sources and Transmission of Country Risk By Tarek Alexander Hassan; Jesse Schreger; Markus Schwedeler; Ahmed Tahoun
  22. A CBA of APC: analysing approaches to procyclicality reduction in CCP initial margin models By Murphy, David; Vause, Nicholas

  1. By: Faizal Hafiz; Jan Broekaert; Davide La Torre; Akshya Swain
    Abstract: This study proposes a new framework to evolve efficacious yet parsimonious neural architectures for predicting the movement of stock market indices using technical indicators as inputs. Given the sparse signal-to-noise ratio implied by the Efficient Market Hypothesis, developing machine learning methods to predict market movements from technical indicators has proven to be a challenging problem. To this end, the neural architecture search is posed as a multi-criteria optimization problem that balances the efficacy of architectures against their complexity. In addition, the implications of the different dominant trading tendencies that may be present in the pre-COVID and within-COVID time periods are investigated. An $\epsilon$-constraint framework is proposed as a remedy to extract any concordant information underlying the possibly conflicting pre- and within-COVID data. Further, a new search paradigm, Two-Dimensional Swarms (2DS), is proposed for the multi-criteria neural architecture search; it explicitly integrates sparsity as an additional search dimension in particle swarms. A detailed comparative evaluation of the proposed approach is carried out against a genetic algorithm and several combinations of empirical neural design rules with a filter-based feature selection method (mRMR) as baseline approaches. The results convincingly demonstrate that the proposed approach can evolve parsimonious networks with better generalization capabilities. [Ed.: a toy sketch of $\epsilon$-constraint selection follows this entry.]
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.08060&r=
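    Editor's sketch: the $\epsilon$-constraint selection described above can be illustrated in a few lines of Python. The candidates, accuracies and parameter counts below are invented; only the selection logic reflects the technique named in the abstract.
      # Hypothetical candidates: (validation accuracy, number of weights).
      candidates = [
          (0.61, 1200), (0.63, 5400), (0.64, 21000), (0.59, 800), (0.62, 2600),
      ]

      def epsilon_constraint_best(candidates, eps):
          """Maximise accuracy subject to the complexity budget n_weights <= eps."""
          feasible = [c for c in candidates if c[1] <= eps]
          return max(feasible, key=lambda c: c[0]) if feasible else None

      # Sweeping eps traces out the efficacy/complexity trade-off curve.
      for eps in (1000, 3000, 10000, 50000):
          print(eps, epsilon_constraint_best(candidates, eps))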
  2. By: Karl Naumann-Woleske; Max Sina Knicker; Michael Benzaquen; Jean-Philippe Bouchaud
    Abstract: Agent-Based Models (ABM) are computational scenario-generators, which can be used to predict the possible future outcomes of the complex system they represent. To better understand the robustness of these predictions, it is necessary to understand the full scope of the possible phenomena the model can generate. Most often, due to high-dimensional parameter spaces, this is a computationally expensive task. Inspired by ideas from systems biology, we show that for multiple macroeconomic models, including an agent-based model and several Dynamic Stochastic General Equilibrium (DSGE) models, there are only a few stiff parameter combinations that have strong effects, while the remaining sloppy directions are irrelevant. This suggests an algorithm that efficiently explores the space of parameters by primarily moving along the stiff directions. We apply our algorithm to a medium-sized agent-based model and show that it recovers all possible dynamics of the unemployment rate. The application of this method to agent-based models may lead to a more thorough and robust understanding of their features, and provide enhanced parameter sensitivity analyses. Several promising paths for future research are discussed. [Ed.: a toy sketch of stiff-direction extraction follows this entry.]
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.08654&r=
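    Editor's sketch: one common way to find "stiff" and "sloppy" directions is through the eigenvectors of a Hessian (or Fisher information matrix) of the model's fit measure; the quadratic toy loss below is invented to make the point concrete.
      import numpy as np

      def loss(theta):  # stand-in for a model's calibration loss
          A = np.diag([100.0, 1.0, 0.01])  # stiff, moderate and sloppy directions
          return 0.5 * theta @ A @ theta

      def hessian_fd(f, theta, h=1e-4):
          """Finite-difference Hessian of f at theta."""
          n = len(theta)
          H = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
                  H[i, j] = (f(theta+ei+ej) - f(theta+ei-ej)
                             - f(theta-ei+ej) + f(theta-ei-ej)) / (4 * h * h)
          return H

      w, v = np.linalg.eigh(hessian_fd(loss, np.array([0.3, 0.3, 0.3])))
      print(w)         # large eigenvalues mark the stiff directions
      print(v[:, -1])  # explore the parameter space primarily along this vector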
  3. By: Luxuan Yang; Ting Gao; Yubin Lu; Jinqiao Duan; Tao Liu
    Abstract: With the fast development of modern deep learning techniques, the study of dynamical systems and neural networks is increasingly benefiting each other in many ways. Since uncertainties often arise in real-world observations, stochastic differential equations (SDEs) come to play an important role. To be more specific, in this paper we use a collection of SDEs equipped with neural networks to predict the long-term trend of noisy time series that exhibit large jumps and pronounced distribution shift. Our contributions are threefold. First, we use the phase-space reconstruction method to extract the intrinsic dimension of the time series data, so as to determine the input structure of our forecasting model. Second, we explore SDEs driven by $\alpha$-stable Lévy motion to model the time series data and solve the problem through neural network approximation. Third, we construct an attention mechanism to achieve multi-time-step prediction. Finally, we illustrate our method by applying it to stock market time series prediction and show that the results outperform several baseline deep learning models. [Ed.: a toy sketch of an $\alpha$-stable Euler-Maruyama step follows this entry.]
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.13164&r=
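    Editor's sketch: an Euler-Maruyama step for an SDE driven by $\alpha$-stable Lévy noise, $dX_t = f(X_t)dt + dL_t$. The drift, alpha and step size below are illustrative, not the paper's; scipy's levy_stable supplies the heavy-tailed increments.
      import numpy as np
      from scipy.stats import levy_stable

      alpha, dt, n_steps = 1.7, 1 / 252, 252     # illustrative values
      f = lambda x: 0.05 * (1.0 - x)             # toy mean-reverting drift

      x = np.empty(n_steps + 1)
      x[0] = 1.0
      for t in range(n_steps):
          dL = dt ** (1 / alpha) * levy_stable.rvs(alpha, 0.0)  # self-similar increment
          x[t + 1] = x[t] + f(x[t]) * dt + dL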
  4. By: Defina, Ryan
    Abstract: The field of deposit insurance is yet to fully realise the potential of machine learning and the substantial benefits it may present to its operational and policy-oriented activities. There are practical opportunities available (some specified in this paper) that can assist in improving deposit insurers’ relationship with the technology. Sharing experiences and learnings via international engagement and collaboration is fundamental to developing global best practices in this space.
    Keywords: deposit insurance; machine learning
    JEL: G21
    Date: 2021–09–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:110712&r=
  5. By: Tsang, Andrew
    Abstract: This paper applies causal machine learning methods to analyze the heterogeneous regional impacts of monetary policy in China. The method uncovers the heterogeneous regional impacts of different monetary policy stances on the provincial figures for real GDP growth, CPI inflation and loan growth compared to the national averages. The varying effects of expansionary and contractionary monetary policy phases on Chinese provinces are highlighted and explained. Subsequently, applying interpretable machine learning, the empirical results show that the credit channel is the main channel affecting the regional impacts of monetary policy. An immediate conclusion from the uneven provincial responses to the “one size fits all” monetary policy is that the different policymakers should coordinate their efforts to search for the optimal fiscal and monetary policy mix.
    Keywords: China, monetary policy, regional heterogeneity, machine learning, shadow banking
    JEL: C54 C61 E52 R11
    Date: 2021–07–28
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:110703&r=
  6. By: Feras A. Batarseh; Munisamy Gopinath; Anderson Monken; Zhengrong Gu
    Abstract: International economics has a long history of improving our understanding of the factors causing trade, and of the consequences of the free flow of goods and services across countries. The recent shocks to the free trade regime, especially trade disputes among major economies, as well as black swan events such as trade wars and pandemics, raise the need for improved predictions to inform policy decisions. AI methods are allowing economists to solve such prediction problems in new ways. In this manuscript, we present novel methods that predict and associate food and agricultural commodities traded internationally. Association Rules (AR) analysis has been deployed successfully for economic scenarios at the consumer or store level, such as market basket analysis. In our work, however, we present an analysis of import and export associations and their effects on commodity trade flows. Moreover, ensemble machine learning methods are developed to provide improved agricultural trade predictions, implications of outlier events, and quantitative pointers to policy makers. [Ed.: a toy association-rules sketch follows this entry.]
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.07508&r=
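    Editor's sketch: a minimal association-rules pass in the style described above, using the third-party mlxtend package (an assumption; the paper does not name its software). The one-hot trade table is invented.
      import pandas as pd
      from mlxtend.frequent_patterns import apriori, association_rules

      # Hypothetical rows = country-years, columns = commodities traded.
      flows = pd.DataFrame(
          [[1, 1, 0], [1, 1, 1], [0, 1, 1], [1, 0, 1]],
          columns=["wheat", "soybeans", "corn"],
      ).astype(bool)

      itemsets = apriori(flows, min_support=0.5, use_colnames=True)
      rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
      print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])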
  7. By: Lachlan O'Neill (SoDa Laboratories, Monash Business School); Nandini Anantharama (SoDa Laboratories, Monash Business School); Wray Buntine (Faculty of Information Technology, Monash University); Simon D Angus (Dept. of Economics and SoDa Laboratories, Monash Business School)
    Abstract: Empirical social science requires structured data. Traditionally, these data have arisen from statistical agencies, surveys, or other controlled settings. But what of language, political speech, and discourse more generally? Can text be data? Until very recently, the journey from text to data has relied on human coding, severely limiting study scope. Here, we introduce natural language processing (NLP), a field of artificial intelligence (AI), and its application to discourse analysis at scale. We introduce AI/NLP’s key terminology, concepts, and techniques, and demonstrate its application to the social sciences. In so doing, we emphasise a major shift in AI/NLP technological capability now underway, due largely to the development of transformer models. Our aim is to provide quantitative social scientists with both a guide to state-of-the-art AI/NLP in general, and something of a road-map for the transformer revolution now sweeping through the landscape. [Ed.: a minimal transformer-pipeline sketch follows this entry.]
    Keywords: text as data, artificial intelligence, machine learning, natural language processing, transformer models
    JEL: C45 C52 C55
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:ajr:sodwps:2021-12&r=
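    Editor's sketch: the "transformer revolution" the survey describes is accessible in a few lines via the Hugging Face transformers library; the example sentence is invented, and the pipeline downloads the library's default sentiment checkpoint on first run.
      from transformers import pipeline

      classifier = pipeline("sentiment-analysis")
      print(classifier(["The committee strongly opposes the proposed amendment."]))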
  8. By: Kamesh Korangi; Christophe Mues; Cristi\'an Bravo
    Abstract: In this paper, we study mid-cap companies, i.e. publicly traded companies with less than US $10 billion in market capitalisation. Using a large dataset of US mid-cap companies observed over 30 years, we look to predict the default probability term structure over the medium term and understand which data sources (i.e. fundamental, market or pricing data) contribute most to the default risk. Whereas existing methods typically require that data from different time periods are first aggregated and turned into cross-sectional features, we frame the problem as a multi-label time-series classification problem. We adapt transformer models, a state-of-the-art deep learning architecture emanating from the natural language processing domain, to the credit risk modelling setting. We also interpret the predictions of these models using attention heat maps. To optimise the model further, we present a custom loss function for multi-label classification and a novel multi-channel architecture with differential training that gives the model the ability to use all input data efficiently. Our results show the proposed deep learning architecture's superior performance, resulting in a 13% improvement in AUC (Area Under the receiver operating characteristic Curve) over traditional models. We also demonstrate how to produce an importance ranking for the different data sources and the temporal relationships using a Shapley approach specific to these models. [Ed.: a toy multi-label loss sketch follows this entry.]
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.09902&r=
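    Editor's sketch: the multi-label framing can be illustrated with a standard PyTorch loss; the paper's custom loss and transformer architecture are not reproduced here, and all shapes and data below are invented.
      import torch
      import torch.nn as nn

      n_firms, n_horizons = 8, 5                # one default label per horizon
      logits = torch.randn(n_firms, n_horizons, requires_grad=True)
      labels = torch.randint(0, 2, (n_firms, n_horizons)).float()

      loss = nn.BCEWithLogitsLoss()(logits, labels)  # standard multi-label loss
      loss.backward()                                # gradients for training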
  9. By: Edward Turner
    Abstract: Deep learning has shown remarkable results on Euclidean data (e.g. audio, images, text); however, this type of data is limited in the amount of relational information it can hold. In mathematics we can model more general relational data in a graph structure while retaining Euclidean data as associated node or edge features. Due to the ubiquity of graph data, and its ability to hold multiple dimensions of information, graph deep learning has become a fast-emerging field. We look at applying and optimising graph deep learning on a finance graph to produce more informed clusters of companies. Having clusters produced from multiple streams of data can be highly useful in quantitative finance; not only does it allow clusters to be tailored to the specific task, but the combination of multiple streams allows for cross-source pattern recognition that would otherwise have gone unnoticed. This can provide financial institutions with an edge over competitors, which is crucial in the heavily optimised world of trading. In this paper we use news co-occurrence and stock prices as our data combination. We optimise our model to achieve an average testing precision of 78% and find a clear improvement in clustering capabilities when dual data sources are used; cluster purity rises from 32% for vertex data alone and 42% for edge data alone to 64% when both are used, in comparison to ground-truth Bloomberg clusters. The framework we provide utilises unsupervised learning, which we view as key for future work given the volume of unlabelled data in financial markets. [Ed.: a minimal graph auto-encoder sketch follows this entry.]
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.13519&r=
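    Editor's sketch: a minimal one-layer graph auto-encoder in PyTorch: encode nodes with a normalised-adjacency convolution, reconstruct edges with an inner-product decoder. The toy graph is random; the paper's graph uses news co-occurrence edges and price-based features.
      import torch
      import torch.nn as nn

      n, d, k = 6, 4, 2
      X = torch.randn(n, d)                         # node (company) features
      A = (torch.rand(n, n) > 0.5).float()
      A = ((A + A.T) > 0).float()                   # toy symmetric adjacency

      A_hat = A + torch.eye(n)                      # self-loops
      deg = A_hat.sum(1)
      A_norm = A_hat / torch.outer(deg.sqrt(), deg.sqrt())

      W = nn.Parameter(torch.randn(d, k) * 0.1)
      opt = torch.optim.Adam([W], lr=0.01)
      for _ in range(200):
          Z = torch.relu(A_norm @ X @ W)            # one-layer GCN encoder
          A_rec = torch.sigmoid(Z @ Z.T)            # inner-product decoder
          loss = nn.functional.binary_cross_entropy(A_rec, (A_hat > 0).float())
          opt.zero_grad(); loss.backward(); opt.step()
      # The embeddings Z can then be clustered (e.g. k-means) into company groups.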
  10. By: Beltempo, Marc (McGill University); Bresson, Georges (University of Paris 2); Lacroix, Guy (Université Laval)
    Abstract: Background: Adult studies have shown that nursing overtime and unit overcrowding are associated with increased adverse patient events, but there exists little evidence for the Neonatal Intensive Care Unit (NICU). Objectives: To predict the onset of nosocomial infections and medical accidents in a NICU using machine learning models. Subjects: Retrospective study on the 7,438 neonates admitted to the CHU de Québec NICU (capacity of 51 beds) from 10 April 2008 to 28 March 2013. Daily administrative data on nursing overtime hours, total regular hours, number of admissions, patient characteristics, as well as information on nosocomial infections and on the timing and type of medical errors, were retrieved from various hospital-level datasets. Methodology: We use a generalized mixed effects regression tree model (GMERT) to build prediction trees for the two outcomes. Neonates' characteristics and daily exposure to numerous covariates are used in the model. GMERT is suitable for binary outcomes and is a recent extension of the standard tree-based method. The model allows us to determine the most important predictors. Results: DRG severity level, regular hours of work, overtime, admission rates, birth weight and occupation rates are the main predictors for both outcomes. On the other hand, gestational age, C-section, multiple births, medical/surgical and number of admissions are poor predictors. Conclusion: Prediction trees (predictors and split points) provide a useful management tool to prevent undesirable health outcomes in a NICU.
    Keywords: neonatal health outcomes, nursing overtime, machine learning, mixed effects regression tree
    JEL: I1 J2 C11 C14 C23
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13099&r=
  11. By: Mr. Jorge A Chan-Lau
    Abstract: We introduce unFEAR, Unsupervised Feature Extraction Clustering, to identify economic crisis regimes. Given labeled crisis and non-crisis episodes and the corresponding feature values, unFEAR uses unsupervised representation learning and a novel mode contrastive autoencoder to group episodes into time-invariant non-overlapping clusters, each of which could be identified with a different regime. The likelihood that a country may experience an economic crisis could be set equal to its cluster crisis frequency. Moreover, unFEAR could serve as a first step towards developing cluster-specific crisis prediction models tailored to each crisis regime.
    Keywords: clustering; unsupervised feature extraction; autoencoder; deep learning; biased label problem; crisis prediction; WP; crisis frequency; crisis observation; crisis risk; crisis data points; machine learning; Early warning systems; Global
    Date: 2020–11–25
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2020/262&r=
  12. By: Misha Perepelitsa
    Abstract: The problem of investing in a cryptocurrency market requires a good understanding of the processes that regulate the price of the currency. In this paper we offer a view of a cryptocurrency market as a self-organized speculative Ponzi scheme that operates on the platform of a price bubble spontaneously created by traders. The synergy between investors and traders creates interesting dynamical patterns in the price and the systematic risk of the system. We use microscale, agent-based models to simulate the system behavior and derive macroscale ODE models to estimate such parameters as the return rate and the total value of investments. We provide a formula for the total risk of the system as a sum of two independent components, one characteristic of the price bubble and the other of the investor behavior.
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.11315&r=
  13. By: Hans Buehler; Phillip Murray; Mikko S. Pakkanen; Ben Wood
    Abstract: We present a numerically efficient approach for learning minimal equivalent martingale measures for market simulators of tradable instruments, e.g. for a spot price and options written on the same underlying. In the presence of transaction costs and trading restrictions, we relax the results to learning minimal equivalent "near-martingale measures" under which expected returns remain within prevailing bid/ask spreads. Our approach to thus "removing the drift" in a high-dimensional complex space is entirely model-free and can be applied to any market simulator which does not exhibit classic arbitrage. The resulting model can be used for risk-neutral pricing, or, in the case of transaction costs or trading constraints, for "Deep Hedging". We demonstrate our approach by applying it to two market simulators, an auto-regressive discrete-time stochastic implied volatility model and a Generative Adversarial Network (GAN) based simulator, both of which are trained on historical data of option prices under the statistical measure to produce realistic samples of spot and option prices. We comment on robustness with respect to estimation error of the original market simulator.
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.07844&r=
  14. By: Caner Ates; Dietmar Maringer
    Abstract: The literature on macroeconomic agent-based models (MABMs) has gained growing attention since the early 2000s. Most MABMs dealing with market regulations have focused on the financial market. In contrast, only a small number of MABMs investigate the effects of labor market regulations. In this paper, we provide a parsimonious yet extendable agent-based model that focuses on labor market dynamics within a macroeconomic framework, suitable for analyzing labor market regulations such as minimum wages and employment protection legislation. The model is stock-flow consistent and small-scale, i.e., only workers and firms interact in the goods and labor markets. There are two types of workers, skilled and unskilled, and firms produce according to a CES production function, which allows for substitutability between the two types of workers. A one-factor-at-a-time (OFAT) sensitivity analysis is performed to gain insights into the mechanisms and patterns produced by the model. Results show that the model is sensitive to the minimum wage parameter and that, for reasonable values of the minimum wage, income inequality decreases while aggregate consumption rises. Overall, the results suggest that the model can be used to further investigate the aggregate and distributional effects of labor market regulations. [Ed.: a toy CES sketch follows this entry.]
    Keywords: Labor market; minimum wage; stock-flow consistent; macroeconomic agent-based model; CATS.
    Date: 2021–12–09
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2021/46&r=
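    Editor's sketch: the CES production function with skilled and unskilled labor mentioned above; parameter values are illustrative, not the paper's calibration.
      # Y = A * (a * Ls^rho + (1 - a) * Lu^rho)^(1/rho), sigma = 1 / (1 - rho)
      def ces_output(Ls, Lu, A=1.0, a=0.5, rho=0.5):
          return A * (a * Ls**rho + (1 - a) * Lu**rho) ** (1 / rho)

      print(ces_output(Ls=10, Lu=20))            # baseline
      print(ces_output(Ls=10, Lu=20, rho=0.9))   # workers closer to substitutes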
  15. By: Koichi Miyamoto
    Abstract: The quantum algorithms for Monte Carlo integration (QMCI), which are based on quantum amplitude estimation (QAE), speed up expected-value calculation compared with classical counterparts, and have been widely investigated along with their applications to industrial problems such as financial derivative pricing. In this paper, we consider an expected value of a function of a stochastic variable and a real-valued parameter, and how to calculate derivatives of the expectation with respect to the parameter. This problem is related to calculating sensitivities of financial derivatives, and is therefore of industrial importance. Based on QMCI and the general-order central difference formula for numerical differentiation, we propose two quantum methods for this problem and evaluate their complexities. The first, which we call the naive iteration method, simply calculates the formula by iterative computations and additions of its terms, and then estimates its expected value by QAE. The second, which we name the sum-in-QAE method, performs the summation of the terms at the same time as the sum over the possible values of the stochastic variable in a single QAE. We see that, depending on the smoothness of the function and the number of qubits available, either of the two methods is better than the other. In particular, when the function is nonsmooth or we want to save on the number of qubits, the sum-in-QAE method can be advantageous. [Ed.: a classical central-difference sketch follows this entry.]
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2111.11016&r=
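    Editor's sketch: the classical central-difference formulas that the paper wraps inside QMCI/QAE, shown here in ordinary Python for a known test function.
      import numpy as np

      def central_diff_2(f, x, h=1e-5):
          """Second-order accurate first derivative."""
          return (f(x + h) - f(x - h)) / (2 * h)

      def central_diff_4(f, x, h=1e-3):
          """Fourth-order accurate first derivative."""
          return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

      print(central_diff_2(np.sin, 1.0), central_diff_4(np.sin, 1.0), np.cos(1.0))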
  16. By: NARAZANI Edlira (European Commission - JRC); COLOMBINO Ugo; PALMA FERNANDEZ Bianey (European Commission - JRC)
    Abstract: This paper describes EUROLAB, a labour supply-demand microsimulation model that relies on EUROMOD, the static microsimulation model for the European Union countries. EUROLAB is built on a multidimensional discrete choice model of labour supply and accounts for involuntary unemployment. The model estimates individual changes in supplied hours of work and participation as a reaction to a hypothetical or real tax-transfer reform, often referred to in the literature as “second-order” effects. Furthermore, the model allows for demand-side effects in the labour market, which, depending on how elastic the market is, lead to a different labour supply once the market reaches equilibrium. The model is unique in covering 27 countries under the same specification of preferences, the same representation of the opportunity set, and the same concepts of income and working hours. We illustrate the usefulness of the model with several examples, using both the one-dimensional and multidimensional versions of EUROLAB. Potential extensions of the model are also discussed.
    Keywords: Behavioural Models, Discrete Choice Modelling, Labour supply, Labour market equilibrium
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:ipt:taxref:202115&r=
  17. By: Martin Hackmann (UCLA, NBER, and CESifo); Joerg Heining (Institut für Arbeitsmarkt- und Berufsforschung (IAB)); Roman Klimke (Harvard University); Maria Polyakova (Stanford University, NBER and CESifo); Holger Seibert (Institut für Arbeitsmarkt- und Berufsforschung (IAB))
    Abstract: Arrow (1963) hypothesized that demand-side moral hazard induced by health insurance leads to supply-side expansions in healthcare markets. Capturing these effects empirically has been challenging, as non-marginal insurance expansions are rare and detailed data on healthcare labor and capital is sparse. We combine administrative labor market data with the geographic variation in the rollout of a universal insurance program—the introduction of long-term care (LTC) insurance in Germany in 1995—to document a substantial expansion of the inpatient LTC labor market in response to insurance expansion. A 10 percentage point expansion in the share of insured elderly leads to 0.05 (7%) more inpatient LTC firms and four (13%) more workers per 1,000 elderly in Germany. Wages did not rise, but the quality of newly hired workers declined. We find suggestive evidence of a reduction in old-age mortality. Using a machine learning algorithm, we characterize counterfactual labor market biographies of potential inpatient LTC hires, finding that the reform moved workers into LTC jobs from unemployment and out of the labor force rather than from other sectors of the economy. We estimate that employing these additional workers in LTC is socially efficient if patients value the care provided by these workers at least at 25% of the market price for care. We show conceptually that, in the spirit of Harberger (1971), in a second-best equilibrium in which supply-side labor markets do not clear at perfectly competitive wages, subsidies for healthcare consumption along with the associated demand-side moral hazard can be welfare-enhancing.
    Keywords: long-term care, universal insurance expansion, Germany, LTC labor market, second-best efficiency
    JEL: D61 I11 I13 J21 J23
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:upj:weupjo:21-357&r=
  18. By: Mr. Irineu E de Carvalho Filho; Hans Weisfeld; Fei Liu; Mr. Fabio Comelli; Mr. Andrea F Presbitero; Alexis Meyer-Cirkel; Mrs. Sandra V Lizarazo Ruiz; Klaus-Peter Hellwig; Rahul Giri; Chengyu Huang
    Abstract: In recent years, Fund staff has prepared cross-country analyses of macroeconomic vulnerabilities in low-income countries, focusing on the risk of sharp declines in economic growth and of debt distress. We discuss routes to broadening this focus by adding several macroeconomic and macrofinancial vulnerability concepts. The associated early warning systems draw on advances in predictive modeling.
    Keywords: Early warning systems; crisis prediction; machine learning; low-income countries; inflation crisis; stress episode; crisis concept; crisis probability; missed crisis; LIC inflation trend; Inflation; Banking crises; Commodity prices; Global
    Date: 2020–12–18
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2020/289&r=
  19. By: Bańkowski, Krzysztof; Christoffel, Kai; Faria, Thomas
    Abstract: This paper attempts to gauge the effects of various fiscal and monetary policy rules on macroeconomic outcomes in the euro area. It consists of two major parts – a historical assessment and an assessment based on an extended scenario until 2030 – and it builds on ECB-BASE, a semi-structural model for the euro area. The historical analysis (until end-2019, ‘pre-pandemic’) demonstrates that a consistently countercyclical fiscal policy could have created a fiscal buffer in good economic times and would have been able to eliminate a large portion of the second downturn in the euro area. In turn, the post-pandemic simulations until 2030 reveal that certain combinations of policy rules can be particularly powerful in reaching favourable macroeconomic outcomes (i.e. recovering pandemic output losses and bringing inflation close to the ECB target). These consist of expansionary-for-longer fiscal policy, which maintains support for longer than usually prescribed, and lower-for-longer monetary policy, which keeps rates lower for longer than stipulated by a standard central bank reaction function. Moreover, we demonstrate that in the current macroeconomic situation, fiscal and monetary policies reinforce each other and mutually create space for each other. This provides a strong case for coordination of the two policies in this situation. [Ed.: a toy policy-rule sketch follows this entry.]
    Keywords: fiscal rules, joint analysis of fiscal and monetary policy, model simulations, monetary policy rules
    JEL: E32 E62 E63
    Date: 2021–12
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20212623&r=
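    Editor's sketch: a stylised contrast between a standard monetary reaction function and a "lower-for-longer" variant that delays lift-off from the effective lower bound; coefficients are textbook Taylor-rule values, not ECB-BASE's.
      def taylor_rate(pi, gap, r_star=0.5, pi_star=2.0, phi_pi=1.5, phi_y=0.5, elb=-0.5):
          """Standard reaction function, truncated at the effective lower bound."""
          return max(elb, r_star + pi_star + phi_pi * (pi - pi_star) + phi_y * gap)

      def lower_for_longer(pi, gap, t, hold_until, elb=-0.5, **kw):
          """Hold the rate at the lower bound until period `hold_until`."""
          return elb if t < hold_until else taylor_rate(pi, gap, elb=elb, **kw)

      path = [(0.5, -3.0), (1.2, -1.5), (2.1, 0.5), (2.4, 1.0)]  # (inflation, gap)
      for t, (pi, gap) in enumerate(path):
          print(t, taylor_rate(pi, gap), lower_for_longer(pi, gap, t, hold_until=2))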
  20. By: Yuechen Dai; Tonghui Xu
    Abstract: At present, most well-known insurance regulatory bodies focus on reviewing the solvency of insurance companies within a one-year period. However, the operation of insurance companies is a long-term business, with most policyholders planning to hold a policy over many years, not just one. This research adopts a new perspective for measuring the insolvency risk faced by insurance companies over a longer time period by estimating their full expected lifetime (the number of periods into the future that an insurer can be expected to remain solvent, given its initial capital reserves), which has significance for insurance regulation. The research uses Python numerical methods to simulate the operating conditions of insurance companies with different initial reserves, and to capture the period in which a company becomes insolvent. The results show that, as one would expect, the higher the initial reserve fund, the longer the company can be expected to remain in business before insolvency. In addition, our simulation model helps to explain how the relevant probability density for the insolvency date, given an initial reserve fund, can be estimated. By comparing different probability density functions, we find that a lognormal density provides a reasonable starting point for the density in question. [Ed.: a toy ruin-time simulation follows this entry.]
    Keywords: Insurance regulation, simulation, insolvency
    JEL: C02 C15 C63
    Date: 2021–11–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:21/13&r=
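    Editor's sketch: a toy version of the simulation the abstract describes: draw random claims each period, track the reserve, record the insolvency date, and fit a lognormal to the resulting dates. All distributions and parameters are invented.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def insolvency_period(reserve, premium=105.0, horizon=200):
          """First period in which the reserve turns negative (or the horizon)."""
          for t in range(1, horizon + 1):
              reserve += premium - rng.gamma(shape=2.0, scale=50.0)  # toy claims
              if reserve < 0:
                  return t
          return horizon

      ruin_times = np.array([insolvency_period(reserve=100.0) for _ in range(5000)])
      shape, loc, scale = stats.lognorm.fit(ruin_times, floc=0)  # lognormal fit
      print(ruin_times.mean(), shape, scale)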
  21. By: Tarek Alexander Hassan; Jesse Schreger; Markus Schwedeler; Ahmed Tahoun
    Abstract: We use textual analysis of earnings conference calls held by listed firms around the world to measure the amount of risk that managers and investors at each firm associate with each country at each point in time. Flexibly aggregating this firm-country-quarter-level data allows us to systematically identify spikes in perceived country risk (“crises”) and document their source and pattern of transmission to foreign firms. While this pattern usually follows a gravity structure, it often changes dramatically during crises. For example, while crises originating in developed countries propagate disproportionately to foreign financial firms, emerging market crises transmit less financially and more to traditionally exposed countries. We apply our measures to show that (i) elevated perceptions of a country's riskiness, particularly those of foreign and financial firms, are associated with significant falls in local asset prices, capital outflows, and reductions in firm-level investment and employment; (ii) risk transmitted from foreign countries affects the investment decisions of domestic firms; and (iii) heterogeneous currency loadings on perceived global risk can help explain the cross-country pattern of interest rates and currency risk premia.
    JEL: D21 F23 F3 F30 G15
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:29526&r=
  22. By: Murphy, David (London School of Economics and Political Science); Vause, Nicholas (Bank of England)
    Abstract: Following a period of relative calm, many derivative users received large margin calls as financial market volatility spiked amid the onset of the Covid‑19 global pandemic in March 2020. This reinvigorated the policy debate about dampening such ‘procyclicality’ of margin requirements. In this paper, we suggest how margin setters and policymakers might measure procyclicality and target particular levels of it. This procyclicality management involves recalibrating margin model parameters or applying anti-procyclicality (APC) tools. Different options reduce procyclicality by varying amounts, and do so at different costs, which we measure using the average additional margin required over the cycle. Thus, we perform a cost-benefit analysis (CBA) of the different options. We illustrate our approach using a popular type of margin model – filtered historical simulation value-at-risk – on simple portfolios, presenting the costs and benefits of varying a key model parameter and applying a number of different APC tools, including those in European legislation. [Ed.: a toy FHS sketch follows this entry.]
    Keywords: Central counterparty; cost-benefit analysis; derivatives clearing; initial margin models; mandatory clearing; procyclicality
    JEL: G17
    Date: 2021–11–19
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0950&r=
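    Editor's sketch: the core of filtered historical simulation VaR: devolatilise returns with an EWMA filter, then rescale the standardised residuals to current volatility and read off a quantile. Parameter values and data are illustrative, not the paper's calibration.
      import numpy as np

      def fhs_var(returns, lam=0.94, q=0.99):
          returns = np.asarray(returns, dtype=float)
          var_t = np.empty_like(returns)
          var_t[0] = returns.var()
          for t in range(1, len(returns)):            # EWMA variance recursion
              var_t[t] = lam * var_t[t - 1] + (1 - lam) * returns[t - 1] ** 2
          z = returns / np.sqrt(var_t)                # standardised residuals
          scaled = z * np.sqrt(var_t[-1])             # rescale to today's volatility
          return -np.quantile(scaled, 1 - q)          # q-level loss quantile (margin)

      rng = np.random.default_rng(1)
      print(fhs_var(rng.normal(0.0, 0.01, 1000)))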

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.