nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒06‒28
twenty-one papers chosen by



  1. Constrained Classification and Policy Learning By Toru Kitagawa; Shosei Sakaguchi; Aleksey Tetenov
  2. Machine Learning in U.S. Bank Merger Prediction: A Text-Based Approach By Katsafados, Apostolos G.; Leledakis, George N.; Pyrgiotakis, Emmanouil G.; Androutsopoulos, Ion; Fergadiotis, Manos
  3. A News-based Machine Learning Model for Adaptive Asset Pricing By Liao Zhu; Haoxuan Wu; Martin T. Wells
  4. Next-Day Bitcoin Price Forecast Based on Artificial Intelligence Methods By Liping Yang
  5. Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport By Hengxu Lin; Dong Zhou; Weiqing Liu; Jiang Bian
  6. Using machine learning to predict patent lawsuits By Juranek, Steffen; Otneim, Håkon
  7. Generative Adversarial Networks in finance: an overview By Florian Eckerli; Joerg Osterrieder
  8. Artificial Intelligence, Ethics, and Diffused Pivotality By Victor Klockmann; Alicia von Schenk; Marie Villeval
  9. Computing Equilibria of Stochastic Heterogeneous Agent Models Using Decision Rule Histories By Marcelo Veracierto
  10. A Two-Step Framework for Arbitrage-Free Prediction of the Implied Volatility Surface By Wenyong Zhang; Lingfei Li; Gongqiu Zhang
  11. Unbiased Self-Play By Shohei Ohsawa
  12. Fund2Vec: Mutual Funds Similarity using Graph Learning By Vipul Satone; Dhruv Desai; Dhagash Mehta
  13. Adversarial Attacks on Deep Models for Financial Transaction Records By Ivan Fursov; Matvey Morozov; Nina Kaploukhaya; Elizaveta Kovtun; Rodrigo Rivera-Castro; Gleb Gusev; Dmitry Babaev; Ivan Kireev; Alexey Zaytsev; Evgeny Burnaev
  14. 3D Tensor-based Deep Learning Models for Predicting Option Price By Muyang Ge; Shen Zhou; Shijun Luo; Boping Tian
  15. Design and Analysis of Robust Deep Learning Models for Stock Price Prediction By Jaydip Sen; Sidra Mehtab
  16. On the cost-effective temporal allocation of credits in conservation offsets when habitat restoration takes time and is uncertain By Drechsler, Martin
  17. Quantum Portfolio Optimization with Investment Bands and Target Volatility By Samuel Palmer; Serkan Sahin; Rodrigo Hernandez; Samuel Mugel; Roman Orus
  18. Active labour market policies for the long-term unemployed: New evidence from causal machine learning By Goller, Daniel; Harrer, Tamara; Lechner, Michael; Wolff, Joachim
  19. Credit spread approximation and improvement using random forest regression By Mathieu Mercadier; Jean-Pierre Lardy
  20. Artificial Intelligence, Ethics, and Intergenerational Responsibility By Victor Klockmann; Alicia von Schenk; Marie Villeval
  21. Sample Recycling Method -- A New Approach to Efficient Nested Monte Carlo Simulations By Runhuan Feng; Peng Li

  1. By: Toru Kitagawa; Shosei Sakaguchi; Aleksey Tetenov
    Abstract: Modern machine learning approaches to classification, including AdaBoost, support vector machines, and deep neural networks, utilize surrogate loss techniques to circumvent the computational complexity of minimizing empirical classification risk. These techniques are also useful for causal policy learning problems, since estimation of individualized treatment rules can be cast as a weighted (cost-sensitive) classification problem. Consistency of the surrogate loss approaches studied in Zhang (2004) and Bartlett et al. (2006) crucially relies on the assumption of correct specification, meaning that the specified set of classifiers is rich enough to contain a first-best classifier. This assumption is, however, less credible when the set of classifiers is constrained by interpretability or fairness, leaving the applicability of surrogate loss based algorithms unknown in such second-best scenarios. This paper studies consistency of surrogate loss procedures under a constrained set of classifiers without assuming correct specification. We show that in the setting where the constraint restricts the classifier's prediction set only, hinge losses (i.e., $\ell_1$-support vector machines) are the only surrogate losses that preserve consistency in second-best scenarios. If the constraint additionally restricts the functional form of the classifier, consistency of a surrogate loss approach is not guaranteed even with hinge loss. We therefore characterize conditions for the constrained set of classifiers that can guarantee consistency of hinge risk minimizing classifiers. Exploiting our theoretical results, we develop robust and computationally attractive hinge loss based procedures for a monotone classification problem.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.12886&r=
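    A minimal sketch of the cost-sensitive (weighted) hinge-risk idea discussed in the abstract above, assuming a linear policy class and synthetic treatment-effect estimates; all names and data are illustrative, not the authors' code:

      import numpy as np

      # Toy policy-learning setup: treat (+1) or not (-1). Labels come from the
      # sign of estimated treatment effects, weights from their magnitude.
      rng = np.random.default_rng(0)
      n, d = 500, 3
      X = rng.normal(size=(n, d))
      tau = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)  # made-up effect estimates
      a, w = np.sign(tau), np.abs(tau)

      # Subgradient descent on the weighted hinge risk
      # (1/n) * sum_i w_i * max(0, 1 - a_i * theta'x_i)
      theta = np.zeros(d)
      for _ in range(200):
          margin = a * (X @ theta)
          active = (margin < 1.0).astype(float)           # points inside the hinge
          grad = -((w * active * a)[:, None] * X).mean(axis=0)
          theta -= 0.1 * grad

      policy = X @ theta > 0    # treat iff the linear score is positive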
  2. By: Katsafados, Apostolos G.; Leledakis, George N.; Pyrgiotakis, Emmanouil G.; Androutsopoulos, Ion; Fergadiotis, Manos
    Abstract: This paper investigates the role of textual information in a U.S. bank merger prediction task. Our intuition behind this approach is that text could reduce bank opacity and allow us to understand better the strategic options of banking firms. We retrieve textual information from bank annual reports using a sample of 9,207 U.S. bank-year observations during the period 1994-2016. To predict bidders and targets, we use textual information along with financial variables as inputs to several machine learning models. Our key findings suggest that: (1) when textual information is used as a single type of input, the predictive accuracy of our models is similar, or even better, compared to the models using only financial variables as inputs, and (2) when we jointly use textual information and financial variables as inputs, the predictive accuracy of our models is substantially improved compared to models using a single type of input. Therefore, our findings highlight the importance of textual information in a bank merger prediction task.
    Keywords: Bank merger prediction; Textual analysis; Natural language processing; Machine learning
    JEL: C38 C45 G1 G2 G21 G3 G34
    Date: 2021–06–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:108272&r=
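    As a loose illustration of the joint text-plus-financials input design described above, a sketch on toy data (not the authors' pipeline; the feature names are assumptions):

      from scipy.sparse import csr_matrix, hstack
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      texts = ["we expanded our branch network and fee income grew",
               "loan losses increased amid weak regional demand"]
      financials = [[0.11, 1.3], [0.04, 0.9]]       # e.g. ROE, market-to-book (made up)
      y = [1, 0]                                    # 1 = later became a bidder/target

      X_text = TfidfVectorizer().fit_transform(texts)
      X = hstack([X_text, csr_matrix(financials)])  # joint text + financial inputs
      clf = LogisticRegression(max_iter=1000).fit(X, y)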
  3. By: Liao Zhu; Haoxuan Wu; Martin T. Wells
    Abstract: The paper proposes a new asset pricing model -- the News Embedding UMAP Selection (NEUS) model -- to explain and predict stock returns based on financial news. Using a combination of machine learning algorithms, we first derive a company embedding vector for each basis asset from the financial news. We then obtain a collection of basis assets based on their company embeddings. After that, for each stock, we select the basis assets to explain and predict the stock's return with high-dimensional statistical methods. The new model is shown to have significantly better fitting and prediction power than the Fama-French 5-factor model.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.07103&r=
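    One way the embed-then-select pipeline above could look in code, under strong simplifications (assumes the third-party umap-learn package; embeddings, groupings, and returns are synthetic stand-ins):

      import numpy as np
      import umap                                   # pip install umap-learn
      from sklearn.cluster import KMeans
      from sklearn.linear_model import Lasso

      emb = np.random.rand(50, 300)                 # stand-in news-based company embeddings
      low = umap.UMAP(n_components=5).fit_transform(emb)

      # Group companies in embedding space and take one representative per group
      # as a "basis asset" (an illustrative stand-in for the paper's construction).
      labels = KMeans(n_clusters=10, n_init=10).fit_predict(low)
      basis = [int(np.where(labels == k)[0][0]) for k in range(10)]

      returns = np.random.randn(250, 50)            # daily returns of the 50 companies
      target = np.random.randn(250)                 # a stock to be explained
      betas = Lasso(alpha=0.01).fit(returns[:, basis], target).coef_  # sparse selection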
  4. By: Liping Yang
    Abstract: In recent years, Bitcoin price prediction has attracted the interest of researchers and investors. However, the accuracy of previous studies is not good enough. Machine learning and deep learning methods have been shown to have strong predictive ability in this area. This paper proposes a method that combines Ensemble Empirical Mode Decomposition (EEMD) with a deep learning method called long short-term memory (LSTM) to study the problem of next-day Bitcoin price forecasting.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.12961&r=
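    A hedged sketch of an EEMD-plus-LSTM pipeline of the kind the abstract describes (assumes the third-party PyEMD and TensorFlow/Keras packages; the data and hyperparameters are placeholders):

      import numpy as np
      from PyEMD import EEMD                        # pip install EMD-signal
      from tensorflow.keras import Sequential
      from tensorflow.keras.layers import LSTM, Dense

      price = np.cumsum(np.random.randn(300)) + 100.0    # stand-in for Bitcoin closes
      imfs = EEMD().eemd(price)                          # decompose into intrinsic mode functions

      def windows(series, lag=10):
          X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
          return X[..., None], series[lag:]

      forecast = np.zeros(len(price) - 10)
      for imf in imfs:                                   # one small LSTM per component
          X, y = windows(imf)
          model = Sequential([LSTM(16, input_shape=(10, 1)), Dense(1)])
          model.compile(optimizer="adam", loss="mse")
          model.fit(X, y, epochs=2, verbose=0)
          forecast += model.predict(X, verbose=0).ravel()  # recombine component forecasts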
  5. By: Hengxu Lin; Dong Zhou; Weiqing Liu; Jiang Bian
    Abstract: Successful quantitative investment usually relies on precise predictions of the future movement of the stock price. Recently, machine learning based solutions have shown their capacity to give more accurate stock prediction and become indispensable components in modern quantitative investment systems. However, the i.i.d. assumption behind existing methods is inconsistent with the existence of diverse trading patterns in the stock market, which inevitably limits their ability to achieve better stock prediction performance. In this paper, we propose a novel architecture, Temporal Routing Adaptor (TRA), to empower existing stock prediction models with the ability to model multiple stock trading patterns. Essentially, TRA is a lightweight module that consists of a set of independent predictors for learning multiple patterns as well as a router to dispatch samples to different predictors. Nevertheless, the lack of explicit pattern identifiers makes it quite challenging to train an effective TRA-based model. To tackle this challenge, we further design a learning algorithm based on Optimal Transport (OT) to obtain the optimal sample to predictor assignment and effectively optimize the router with such assignment through an auxiliary loss term. Experiments on the real-world stock ranking task show that compared to the state-of-the-art baselines, e.g., Attention LSTM and Transformer, the proposed method can improve information coefficient (IC) from 0.053 to 0.059 and 0.051 to 0.056 respectively. Our dataset and code used in this work are publicly available: https://github.com/microsoft/qlib.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.12950&r=
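    The optimal-transport step above can be pictured with a minimal entropy-regularized (Sinkhorn) routine for assigning samples to predictors; this is an illustrative stand-in, not the authors' implementation:

      import numpy as np

      def sinkhorn(cost, n_iter=100, eps=0.05):
          """Entropy-regularized OT plan between uniform marginals."""
          n, m = cost.shape
          r, c = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
          K = np.exp(-cost / eps)
          u = np.ones(n)
          for _ in range(n_iter):
              v = c / (K.T @ u)                 # match predictor marginal
              u = r / (K @ v)                   # match sample marginal
          return u[:, None] * K * v[None, :]

      losses = np.random.rand(8, 3)             # loss of 8 samples under 3 predictors
      plan = sinkhorn(losses)
      route = plan.argmax(axis=1)               # soft plan -> hard sample-to-predictor routing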
  6. By: Juranek, Steffen (Dept. of Business and Management Science, Norwegian School of Economics); Otneim, Håkon (Dept. of Business and Management Science, Norwegian School of Economics)
    Abstract: We use machine learning methods to predict which patents end up in court, using the population of US patents granted between 2002 and 2005. We analyze how the different dimensions of an empirical analysis affect prediction performance - the number of observations, the number of patent characteristics, and the model choice. We find that extending the set of patent characteristics has the biggest impact on prediction performance. Small samples not only have low predictive performance; their predictions are also particularly unstable. However, only samples of intermediate size are required for reasonably stable performance. The model choice matters, too: more sophisticated machine learning methods can provide additional value over a simple logistic regression. Our results provide practical advice to anyone building patent litigation models, e.g., for litigation insurance or patent management more generally.
    Keywords: Patents; litigation; prediction; machine learning
    JEL: K00 K41 O34
    Date: 2021–06–22
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2021_006&r=
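    In the spirit of the model-choice comparison above, a toy benchmark of a simple logistic regression against a more flexible learner might look like this (synthetic data, with litigation treated as a rare event):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=2000, n_features=20,
                                 weights=[0.97], random_state=0)  # ~3% litigated
      for model in (LogisticRegression(max_iter=1000),
                    GradientBoostingClassifier()):
          auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
          print(type(model).__name__, round(auc, 3))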
  7. By: Florian Eckerli; Joerg Osterrieder
    Abstract: Modelling in finance is a challenging task: the data often has complex statistical properties and its inner workings are largely unknown. Deep learning algorithms are making progress in the field of data-driven modelling, but the lack of sufficient data to train these models is currently holding back several new applications. Generative Adversarial Networks (GANs) are a family of neural network architectures that has achieved good results in image generation and is being successfully applied to generate time series and other types of financial data. The purpose of this study is to present an overview of how GANs work, of their capabilities and limitations in the current state of research with financial data, and of some practical applications in the industry. As a proof of concept, three known GAN architectures were tested on financial time series, and the generated data was evaluated on its statistical properties, yielding solid results. Finally, it was shown that GANs have made considerable progress in their finance applications and can be a solid additional tool for data scientists in this field.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.06364&r=
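    For readers new to the architecture, a bare-bones GAN on one-dimensional "return" windows looks roughly as follows; this is a pedagogical sketch in PyTorch with toy Gaussian data, unrelated to the three architectures the study actually tests:

      import torch
      import torch.nn as nn

      real = 0.01 * torch.randn(512, 20)                # stand-in return windows
      G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 20))
      D = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
      opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
      opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
      bce = nn.BCEWithLogitsLoss()

      for step in range(200):
          fake = G(torch.randn(512, 8))
          # Discriminator: tell real windows from generated ones
          loss_d = (bce(D(real), torch.ones(512, 1)) +
                    bce(D(fake.detach()), torch.zeros(512, 1)))
          opt_d.zero_grad()
          loss_d.backward()
          opt_d.step()
          # Generator: fool the discriminator
          loss_g = bce(D(fake), torch.ones(512, 1))
          opt_g.zero_grad()
          loss_g.backward()
          opt_g.step()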
  8. By: Victor Klockmann (Goethe-University Frankfurt am Main, Max Planck Institute for Human Development - Max-Planck-Gesellschaft); Alicia von Schenk (Goethe-University Frankfurt am Main, Max Planck Institute for Human Development - Max-Planck-Gesellschaft); Marie Villeval (GATE Lyon Saint-Étienne - Groupe d'analyse et de théorie économique - CNRS - Centre National de la Recherche Scientifique - Université de Lyon - UJM - Université Jean Monnet [Saint-Étienne] - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon - UL2 - Université Lumière - Lyon 2 - ENS Lyon - École normale supérieure - Lyon, IZA - Forschungsinstitut zur Zukunft der Arbeit - Institute of Labor Economics)
    Abstract: With Big Data, decisions made by machine learning algorithms depend on training data generated by many individuals. In an experiment, we identify the effect of varying individual responsibility for moral choices of an artificially intelligent algorithm. Across treatments, we manipulated the sources of training data and thus the impact of each individual's decisions on the algorithm. Reducing or diffusing pivotality for algorithmic choices increased the share of selfish decisions. Once the generated training data exclusively affected others' payoffs, individuals opted for more egalitarian payoff allocations. These results suggest that Big Data offers a "moral wiggle room" for selfish behavior.
    Keywords: Artificial Intelligence,Pivotality,Ethics,Externalities,Experiment
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-03237453&r=
  9. By: Marcelo Veracierto
    Abstract: This paper introduces a general method for computing equilibria with heterogeneous agents and aggregate shocks that is particularly suitable for economies with private information. Instead of the cross-sectional distribution of agents across individual states, the method uses as a state variable a vector of spline coefficients describing a long history of past individual decision rules. Applying the computational method to a Mirrlees RBC economy with a known analytical solution recovers the solution perfectly well. This test provides considerable confidence in the accuracy of the method.
    Keywords: Computational methods; heterogeneous agents; business cycles; private information
    JEL: C63 D82 E32
    Date: 2020–02–18
    URL: http://d.repec.org/n?u=RePEc:fip:fedhwp:92776&r=
  10. By: Wenyong Zhang; Lingfei Li; Gongqiu Zhang
    Abstract: We propose a two-step framework for predicting the implied volatility surface over time without static arbitrage. In the first step, we select features to represent the surface and predict them over time. In the second step, we use the predicted features to construct the implied volatility surface using a deep neural network (DNN) model by incorporating constraints that prevent static arbitrage. We consider three methods to extract features from the implied volatility data: principal component analysis, variational autoencoder and sampling the surface, and we predict these features using LSTM. Using a long time series of implied volatility data for S&P500 index options to train our models, we find that sampling the surface with DNN for surface construction achieves the smallest error in out-of-sample prediction. Furthermore, the DNN model for surface construction not only removes static arbitrage, but also significantly reduces the prediction error compared with a standard interpolation method. Our framework can also be used to simulate the dynamics of the implied volatility surface without static arbitrage.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.07177&r=
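    The first step (feature extraction plus time-series prediction) can be sketched as follows, here with PCA features and an LSTM on toy data; the arbitrage-free DNN reconstruction of the second step is omitted:

      import numpy as np
      from sklearn.decomposition import PCA
      from tensorflow.keras import Sequential
      from tensorflow.keras.layers import LSTM, Dense

      surfaces = np.random.rand(250, 80).astype("float32")   # 250 days of a flattened 8x10 IV grid
      pca = PCA(n_components=3)
      feats = pca.fit_transform(surfaces)                    # low-dimensional surface features

      lag = 5
      X = np.stack([feats[i:i + lag] for i in range(len(feats) - lag)])
      y = feats[lag:]
      model = Sequential([LSTM(32, input_shape=(lag, 3)), Dense(3)])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X, y, epochs=2, verbose=0)

      next_feats = model.predict(X[-1:], verbose=0)          # predicted next-day features
      next_surface = pca.inverse_transform(next_feats)       # naive reconstruction (no constraints)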
  11. By: Shohei Ohsawa
    Abstract: We present a general optimization framework for emergent belief-state representation without any supervision. We employ the common configuration of multiagent reinforcement learning and communication to improve exploration coverage over an environment by leveraging the knowledge of each agent. In this paper, we find that recurrent neural nets (RNNs) with shared weights are highly biased in partially observable environments because of their noncooperativity. To address this, we design an unbiased version of self-play via mechanism design, also known as reverse game theory, to obtain unbiased knowledge at the Bayesian Nash equilibrium. The key idea is to add imaginary rewards using the peer prediction mechanism, i.e., a mechanism for mutually criticizing information in a decentralized environment. Numerical analyses, including StarCraft exploration tasks with up to 20 agents and off-the-shelf RNNs, demonstrate state-of-the-art performance.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.03007&r=
  12. By: Vipul Satone; Dhruv Desai; Dhagash Mehta
    Abstract: Identifying mutual funds that are similar with respect to their underlying portfolios has found many applications in financial services, ranging from fund recommender systems and competitor analysis to portfolio analytics, marketing, and sales. Traditional methods are either qualitative, and hence prone to biases and often not reproducible, or known not to capture all the nuances (non-linearities) among the portfolios from the raw data. We propose a radically new approach that identifies similar funds based on a weighted bipartite network representation of funds and their underlying assets, using a machine learning method called Node2Vec that learns an embedded low-dimensional representation of the network. We call the embedding Fund2Vec. Ours is the first-ever study of the weighted bipartite network representation of the funds-assets network in its original form that identifies structural similarity among portfolios, as opposed to merely portfolio overlaps.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.12987&r=
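    A compact sketch of the bipartite-network embedding idea (assumes the third-party networkx and node2vec packages; fund names and holdings are made up):

      import networkx as nx
      from node2vec import Node2Vec                 # pip install node2vec

      G = nx.Graph()
      holdings = [("fundA", "AAPL", 0.30), ("fundA", "MSFT", 0.25),
                  ("fundB", "AAPL", 0.28), ("fundB", "XOM", 0.20)]
      G.add_weighted_edges_from(holdings)           # funds on one side, assets on the other

      n2v = Node2Vec(G, dimensions=16, walk_length=10, num_walks=50, weight_key="weight")
      model = n2v.fit(window=5, min_count=1)        # word2vec over the biased random walks
      print(model.wv.similarity("fundA", "fundB"))  # structural similarity of the two funds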
  13. By: Ivan Fursov; Matvey Morozov; Nina Kaploukhaya; Elizaveta Kovtun; Rodrigo Rivera-Castro; Gleb Gusev; Dmitry Babaev; Ivan Kireev; Alexey Zaytsev; Evgeny Burnaev
    Abstract: Machine learning models that use transaction records as inputs are popular among financial institutions. The most efficient models use deep-learning architectures similar to those in the NLP community, which poses a challenge because of their tremendous number of parameters and limited robustness. In particular, deep-learning models are vulnerable to adversarial attacks: a small change in the input harms the model's output. In this work, we examine adversarial attacks on transaction records data and defences against these attacks. Transaction records have a different structure than canonical NLP or time series data, as neighbouring records are less connected than words in sentences, and each record consists of both a discrete merchant code and a continuous transaction amount. We consider a black-box attack scenario, where the attacker doesn't know the true decision model, and pay special attention to adding transaction tokens to the end of a sequence. These limitations yield a more realistic scenario, previously unexplored in the NLP literature. The proposed adversarial attacks and the respective defences demonstrate remarkable performance on relevant datasets from the financial industry. Our results show that a couple of generated transactions are sufficient to fool a deep-learning model. Further, we improve model robustness via adversarial training or separate detection of adversarial examples. This work shows that embedding protection from adversarial attacks improves model robustness, allowing wider adoption of deep models for transaction records in banking and finance.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.08361&r=
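    The append-to-the-end attack described above can be caricatured in a few lines: greedily add whichever merchant-code token most degrades the score returned by a black-box model (the scoring function below is a hypothetical stand-in):

      def model_score(sequence):                # stand-in for the unknown decision model
          return (sum(hash(t) % 97 for t in sequence) % 100) / 100.0

      vocab = [f"mcc_{i}" for i in range(50)]   # candidate merchant-code tokens
      seq = ["mcc_3", "mcc_17", "mcc_9"]        # victim's transaction history
      for _ in range(2):                        # a couple of appended transactions
          worst = min(vocab, key=lambda t: model_score(seq + [t]))
          seq.append(worst)                     # append the most harmful token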
  14. By: Muyang Ge; Shen Zhou; Shijun Luo; Boping Tian
    Abstract: Option pricing is a significant problem for option risk management and trading. In this article, we utilize a framework to present financial data from different sources. The data is processed and represented in the form of 2D tensors in three channels. Furthermore, we propose two deep learning models that can deal with 3D tensor data. Experiments performed on a Chinese market option dataset demonstrate the practicability of the proposed strategies compared with commonly used approaches, including the Black-Scholes model and a vector-based LSTM.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.02916&r=
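    Reading the data description above concretely, a small CNN over two-dimensional, three-channel tensors could be set up as follows (shapes, layers, and data are assumptions for illustration):

      import numpy as np
      from tensorflow.keras import Sequential
      from tensorflow.keras.layers import Conv2D, Flatten, Dense

      X = np.random.rand(64, 8, 8, 3).astype("float32")   # samples x height x width x 3 channels
      y = np.random.rand(64, 1).astype("float32")         # option prices (toy targets)

      model = Sequential([Conv2D(16, 3, activation="relu", input_shape=(8, 8, 3)),
                          Flatten(),
                          Dense(1)])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X, y, epochs=2, verbose=0)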
  15. By: Jaydip Sen; Sidra Mehtab
    Abstract: Building predictive models for robust and accurate prediction of stock prices and stock price movement is a challenging research problem. The well-known efficient market hypothesis holds that accurate prediction of future stock prices is impossible in an efficient stock market, as stock prices are assumed to be purely stochastic. However, numerous works have demonstrated that it is possible to predict future stock prices with a high level of precision using sophisticated algorithms, model architectures, and the selection of appropriate variables in the models. This chapter proposes a collection of predictive regression models built on deep learning architectures for robust and precise prediction of the future prices of a stock listed in the diversified sectors of the National Stock Exchange (NSE) of India. The Metastock tool is used to download historical stock prices over a period of two years (2013-2014) at 5-minute intervals. While the records for the first year are used to train the models, the testing is carried out using the remaining records. The design approaches of all the models and their performance results are presented in detail. The models are also compared based on their execution time and prediction accuracy.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.09664&r=
  16. By: Drechsler, Martin
    Abstract: Tradable permits or offsetting schemes are increasingly used as an instrument for the conservation of biodiversity on private lands. Since the restoration of degraded land often involves uncertainties and time lags, conservation biologists have strongly recommended that credits in conservation offset schemes be awarded only upon completion of the restoration process. Otherwise, it is claimed, the instrument is likely to fail on the objective of no net loss in species habitat and biodiversity. What these arguments ignore, however, is that such a scheme design may incur higher economic costs than a design in which credits are awarded already at the initiation of the restoration process. In the present paper, a generic agent-based ecological-economic simulation model is developed to explore the pros and cons of the two scheme designs, in particular their cost-effectiveness. The model considers spatially heterogeneous and dynamic conservation costs, risk aversion and time preferences of the landowners, as well as uncertainty in the duration and success of the restoration process. It turns out that, especially under fast change of the conservation costs, awarding credits at the initiation of restoration can be more cost-effective than awarding them upon completion of restoration.
    Keywords: agent-based modelling, conservation offsets, ecological-economic modelling, habitat restoration, uncertainty
    JEL: Q15 Q57
    Date: 2021–04–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:108209&r=
  17. By: Samuel Palmer; Serkan Sahin; Rodrigo Hernandez; Samuel Mugel; Roman Orus
    Abstract: In this paper we show how to implement in a simple way some complex real-life constraints on the portfolio optimization problem, so that it becomes amenable to quantum optimization algorithms. Specifically, first we explain how to obtain the best investment portfolio with a given target risk. This is important in order to produce portfolios with different risk profiles, as typically offered by financial institutions. Second, we show how to implement individual investment bands, i.e., minimum and maximum possible investments for each asset. This is also important in order to impose diversification and avoid corner solutions. Quite remarkably, we show how to build the constrained cost function as a quadratic unconstrained binary optimization (QUBO) problem, this being the natural input of quantum annealers. The validity of our implementation is proven by finding the efficient frontier, using D-Wave Hybrid and its Advantage quantum processor, on static portfolios taking assets from the S&P500. We use three different subsets of this index. First, the S&P100, which consists of 100 of the largest companies of the S&P500; second, the 200 best-performing companies of the S&P500; and third, the full S&P500 itself. Our results show how practical daily constraints found in quantitative finance can be implemented in a simple way in current NISQ quantum processors, with real data, and under realistic market conditions. In combination with clustering algorithms, our methods would make it possible to replicate the behaviour of more complex indexes, such as the Nasdaq Composite, in turn being particularly useful to build and replicate Exchange Traded Funds (ETFs).
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.06735&r=
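    To make the QUBO construction tangible, here is a toy three-asset version with a binary weight encoding and a budget penalty; it is a sketch of the problem class only, not the paper's exact cost function:

      import numpy as np

      mu = np.array([0.08, 0.12, 0.10])           # expected returns (made up)
      Sigma = np.diag([0.04, 0.09, 0.05])         # toy covariance matrix
      n, K = 3, 2                                 # assets, bits per asset
      gamma, lam = 1.0, 4.0                       # risk aversion, budget penalty

      E = np.kron(np.eye(n), [0.25, 0.5])         # w = E @ x maps the bit vector x to weights
      s = E.sum(axis=0)
      Q = gamma * E.T @ Sigma @ E + lam * np.outer(s, s)
      Q += np.diag(-E.T @ mu - 2 * lam * s)       # linear terms live on the diagonal

      # Brute-force check over all 2^(n*K) bit strings (an annealer would do this part)
      best = min((np.array(x) for x in np.ndindex(*([2] * (n * K)))),
                 key=lambda x: x @ Q @ x)
      print("weights:", E @ best)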
  18. By: Goller, Daniel; Harrer, Tamara; Lechner, Michael; Wolff, Joachim
    Abstract: We investigate the effectiveness of three different job-search and training programmes for German long-term unemployed persons. On the basis of an extensive administrative data set, we evaluated the effects of those programmes on various levels of aggregation using causal machine learning. We found participants to benefit from the investigated programmes, with placement services being the most effective. Effects are realised quickly and are long-lasting for any programme. While the effects are rather homogeneous for men, we found differential effects for women across various characteristics. Women benefit in particular when local labour market conditions improve. Regarding the mechanism allocating the unemployed to the different programmes, we found the observed allocation to be as effective as a random allocation. Therefore, we propose data-driven rules for the allocation of the unemployed to the respective labour market programmes that would improve on the status quo.
    Keywords: Policy evaluation, Modified Causal Forest (MCF), active labour market programmes, conditional average treatment effect (CATE)
    JEL: J08 J68
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2021:08&r=
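    As a generic stand-in for the causal machine learning step (the authors use a Modified Causal Forest; the T-learner below is a deliberately simpler substitute on synthetic data):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 4))                 # characteristics of the unemployed
      d = rng.integers(0, 2, size=1000)              # programme participation (toy)
      y = 0.5 * d * (X[:, 0] > 0) + rng.normal(size=1000)  # outcome with heterogeneous effects

      m1 = RandomForestRegressor(random_state=0).fit(X[d == 1], y[d == 1])
      m0 = RandomForestRegressor(random_state=0).fit(X[d == 0], y[d == 0])
      cate = m1.predict(X) - m0.predict(X)           # conditional average treatment effects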
  19. By: Mathieu Mercadier; Jean-Pierre Lardy
    Abstract: Credit Default Swap (CDS) levels provide a market appreciation of companies' default risk. These derivatives are not always available, creating a need for CDS approximations. This paper offers a simple, global and transparent CDS structural approximation, which contrasts with more complex and proprietary approximations currently in use. This Equity-to-Credit formula (E2C), inspired by CreditGrades, obtains better CDS approximations, according to empirical analyses based on a large sample spanning 2016-2018. A random forest regression run with this E2C formula and selected additional financial data results in an 87.3% out-of-sample accuracy in CDS approximations. The transparency property of this algorithm confirms the predominance of the E2C estimate, and the impact of companies' debt rating and size, in predicting their CDS.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.07358&r=
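    A sketch of the regression step described above, with an E2C-style estimate plus coarse firm features as inputs (all data synthetic; the feature names are placeholders):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)
      e2c = rng.uniform(50, 400, size=500)             # structural CDS estimate, in bps
      rating = rng.integers(1, 10, size=500)           # coarse debt-rating bucket
      size = rng.uniform(1, 100, size=500)             # firm size
      X = np.column_stack([e2c, rating, size])
      cds = e2c * rng.uniform(0.8, 1.2, size=500)      # toy observed CDS levels

      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, cds)
      print(dict(zip(["e2c", "rating", "size"],
                     rf.feature_importances_.round(2))))  # transparency: which inputs dominate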
  20. By: Victor Klockmann (Goethe-University Frankfurt am Main, Max Planck Institute for Human Development - Max-Planck-Gesellschaft); Alicia von Schenk (Goethe-University Frankfurt am Main, Max Planck Institute for Human Development - Max-Planck-Gesellschaft); Marie Villeval (GATE Lyon Saint-Étienne - Groupe d'analyse et de théorie économique - CNRS - Centre National de la Recherche Scientifique - Université de Lyon - UJM - Université Jean Monnet [Saint-Étienne] - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon - UL2 - Université Lumière - Lyon 2 - ENS Lyon - École normale supérieure - Lyon, IZA - Forschungsinstitut zur Zukunft der Arbeit - Institute of Labor Economics)
    Abstract: Humans shape the behavior of artificially intelligent algorithms. One mechanism is the training these systems receive through the passive observation of human behavior and the data we constantly generate. In a laboratory experiment with a sequence of dictator games, we let participants' choices train an algorithm. Thereby, they create an externality on the future decision making of an intelligent system that will affect future participants. We test how information about the training of artificial intelligence affects the prosociality and selfishness of human behavior. We find that making individuals aware of the consequences of their training on the well-being of future generations changes behavior, but only when individuals bear the risk of being harmed themselves by future algorithmic choices. Only in that case does the externality of artificial intelligence training induce a significantly higher share of egalitarian decisions in the present.
    Keywords: Artificial Intelligence,Morality,Prosociality,Generations,Externalities
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-03237437&r=
  21. By: Runhuan Feng; Peng Li
    Abstract: Nested stochastic modeling has been on the rise in many fields of the financial industry. Such modeling arises whenever certain components of a stochastic model are stochastically determined by other models. There are at least two main areas of application: (1) portfolio risk management in the banking sector and (2) principle-based reserving and capital requirements in the insurance sector. As financial instrument values often change with economic fundamentals, the risk management of a portfolio (outer loop) often requires the assessment of financial positions subject to changes in risk factors in the immediate future. The valuation of financial positions (inner loop) is based on projections of cashflows and risk factors into the distant future. The nesting of such stochastic modeling can be computationally challenging. Most existing techniques to speed up nested simulations are based on curve fitting: the main idea is to establish a functional relationship between the inner-loop estimator and risk factors by running a limited set of economic scenarios, and, instead of running inner-loop simulations, to make inner-loop estimates by feeding other scenarios into the fitted curve. This paper presents a non-conventional approach based on the concept of sample recycling. Its essence is to run inner-loop estimation for a small set of outer-loop scenarios and to find inner-loop estimates under other outer-loop scenarios by recycling the known inner-loop paths. This new approach can be much more efficient when traditional techniques are difficult to implement in practice.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.06028&r=
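    A stylized illustration of the recycling idea: simulate one set of inner-loop paths, then reuse them under several outer-loop scenarios. This shortcut is valid here only because the toy model is geometric Brownian motion, where the outer scenario enters multiplicatively; it illustrates the concept, not the paper's general method:

      import numpy as np

      rng = np.random.default_rng(0)
      K, r, sigma, T = 100.0, 0.02, 0.2, 1.0
      Z = rng.standard_normal(100_000)                  # one inner sample set, simulated once
      growth = np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

      for s0 in (90.0, 100.0, 110.0):                   # outer-loop stock levels
          payoff = np.maximum(s0 * growth - K, 0.0)     # recycle the same paths
          print(s0, round(np.exp(-r * T) * payoff.mean(), 2))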

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.