NEP: New Economics Papers on Computational Economics
Issue of 2021‒09‒13
twenty-two papers chosen by
By: | Lachlan O'Neill (Faculty of Information Technology, Monash University); Simon D Angus (Dept. of Economics & SoDa Laboratories, Monash Business School, Monash University); Satya Borgohain (SoDa Laboratories, Monash Business School, Monash University); Nader Chmait (Faculty of Information Technology, Monash University); David Dowe (Faculty of Information Technology, Monash University) |
Abstract: | As the discipline has evolved, research in machine learning has been focused more and more on creating more powerful neural networks, without regard for the interpretability of these networks. Such “black-box models” yield state-of-the-art results, but we cannot understand why they make a particular decision or prediction. Sometimes this is acceptable, but often it is not. We propose a novel architecture, Regression Networks, which combines the power of neural networks with the understandability of regression analysis. While some methods for combining these exist in the literature, our architecture generalizes these approaches by taking interactions into account, offering the power of a dense neural network without forsaking interpretability. We demonstrate that the models exceed the state-of-the-art performance of interpretable models on several benchmark datasets, matching the power of a dense neural network. Finally, we discuss how these techniques can be generalized to other neural architectures, such as convolutional and recurrent neural networks. |
Keywords: | machine learning, policy evaluation, neural networks, regression, classification |
JEL: | C45 C14 C52 |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:ajr:sodwps:2021-09&r= |
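A generic illustration of the trade-off this abstract discusses — an interpretable regression with explicit pairwise interaction terms versus a dense neural network — is sketched below. This is not the authors' Regression Networks architecture; the dataset and model settings are placeholders chosen only to make the comparison runnable.

```python
# Sketch only: contrasts an interpretable regression with explicit interaction
# terms against a dense feed-forward network. This is NOT the Regression
# Networks architecture proposed in the paper, just a generic baseline comparison.
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

X, y = make_friedman1(n_samples=2000, noise=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable model: linear in the features and their pairwise interactions,
# so every coefficient can be read off and inspected.
interpretable = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    Ridge(alpha=1.0),
)

# Black-box benchmark: a dense feed-forward network.
black_box = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)

for name, model in [("regression + interactions", interpretable),
                    ("dense neural network", black_box)]:
    model.fit(X_tr, y_tr)
    print(name, "R^2 on test:", round(model.score(X_te, y_te), 3))
```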
By: | Bastien Lextrait |
Abstract: | Small and Medium Size Enterprises (SMEs) are critical actors in the fabric of the economy. Their growth is often limited by the difficulty in obtaining financing. The Basel II accords enforced the obligation for banks to estimate the probability of default of their obligors. Currently used models are limited by the simplicity of their architecture and the available data. State-of-the-art machine learning models are not widely used because they are often considered as black boxes that cannot be easily explained or interpreted. We propose a methodology to combine high predictive power and powerful explainability using various Gradient Boosting Decision Tree (GBDT) implementations such as the LightGBM algorithm and SHapley Additive exPlanation (SHAP) values as a post-prediction explanation model. SHAP values are among the most recent methods quantifying with consistency the impact of each input feature over the credit score. This model is developed and tested using a nation-wide sample of French companies, with a highly unbalanced positive event ratio. The performances of GBDT models are compared with traditional credit scoring algorithms such as Support Vector Machine (SVM) and Logistic Regression. LightGBM provides the best performance over the test sample, while being fast to train and economically sound. Results obtained from the SHAP values analysis are consistent with previous socio-economic studies, in that they can pinpoint known influential economic factors among hundreds of other features. Providing such a level of explainability to complex models may convince regulators to accept their use in automated credit scoring, which could ultimately benefit both borrowers and lenders. |
Keywords: | Credit scoring, SMEs, Machine Learning, Gradient Boosting, Interpretability |
JEL: | C53 C63 M21 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:drm:wpaper:2021-25&r= |
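A minimal sketch of the GBDT-plus-SHAP workflow described in the abstract above, using the LightGBM and shap packages. The simulated credit data, the imbalance ratio and all hyperparameters are assumptions; the real scoring sample and features are not reproduced here.

```python
# Sketch: LightGBM classifier for default prediction, explained post hoc with
# SHAP values. Data are synthetic placeholders with a highly unbalanced
# positive event ratio, as described in the abstract.
import numpy as np
import lightgbm as lgb
import shap
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=40, weights=[0.97, 0.03],
                           random_state=0)  # imbalanced default events
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05,
                         class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

# SHAP values attribute each prediction to input features with consistency,
# which is what makes the complex model explainable after the fact.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te)
# older shap versions return a list [class 0, class 1] for binary classifiers
sv = shap_values[1] if isinstance(shap_values, list) else shap_values
mean_abs = np.abs(sv).mean(axis=0)            # global importance per feature
print(np.argsort(mean_abs)[::-1][:10])        # ten most influential features
```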
By: | Marcelo Cajias; Joseph-Alexander Zeitler |
Abstract: | In the light of the rise of the World Wide Web, there is an intense debate about the potential impact of online user-generated data on classical economics. This paper is one of the first to analyze housing demand on that account by employing a large internet search dataset from a housing market platform. Focusing on the German rental housing market, we employ the variable ‘contacts per listing’ as a measure of demand intensity. Apart from traditional economic methods, we apply state-of-the-art artificial intelligence, the XGBoost algorithm, to quantify the factors that lead an apartment to be in demand. Since machine learning algorithms alone cannot establish the causal relationship between the independent and dependent variables, we make use of eXplainable AI (XAI) techniques to further show the economic meaning and inferences of our results. These suggest that hedonic, socioeconomic and spatial aspects all influence search intensity. We further find differences in temporal dynamics and geographical variations. Additionally, we compare our results to alternative parametric models and find evidence of the superiority of our nonparametric model. Overall, our findings entail some potentially very important implications for both researchers and practitioners. |
Keywords: | eXtreme Gradient Boosting; Machine Learning; online user-generated search data; Residential Real Estate |
JEL: | R3 |
Date: | 2021–01–01 |
URL: | http://d.repec.org/n?u=RePEc:arz:wpaper:eres2021_70&r= |
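A sketch of the kind of demand-intensity regression the abstract describes: XGBoost predicts contacts per listing from hedonic and spatial features, and permutation importance gives a model-agnostic view of which features matter. The feature names, target construction and simulated data below are placeholders, not the authors' platform data or their specific XAI toolkit.

```python
# Sketch: gradient-boosted regression of demand intensity on listing features,
# followed by permutation importance. Data and variable names are invented.
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
listings = pd.DataFrame({
    "rent_per_sqm": rng.normal(12, 3, 5000),
    "size_sqm": rng.normal(70, 20, 5000),
    "dist_to_center_km": rng.exponential(5, 5000),
    "year_built": rng.integers(1950, 2020, 5000),
})
contacts = (50 - 2 * listings["rent_per_sqm"] - 1.5 * listings["dist_to_center_km"]
            + rng.normal(0, 5, 5000))        # synthetic 'contacts per listing'

X_tr, X_te, y_tr, y_te = train_test_split(listings, contacts, random_state=0)
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(listings.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:20s} {imp:.3f}")
```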
By: | Lucien Boulet |
Abstract: | Several academics have studied the ability of hybrid models mixing univariate Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models and neural networks to deliver better volatility predictions than purely econometric models. Despite presenting very promising results, the generalization of such models to the multivariate case has yet to be studied. Moreover, very few papers have examined the ability of neural networks to predict the covariance matrix of asset returns, and all use a rather small number of assets, thus not addressing what is known as the curse of dimensionality. The goal of this paper is to investigate the ability of hybrid models, mixing GARCH processes and neural networks, to forecast covariance matrices of asset returns. To do so, we propose a new model, based on multivariate GARCHs, that decomposes the predictions into volatilities and correlations. The volatilities are forecast using hybrid neural networks, while the correlations follow a traditional econometric process. After implementing the models in a minimum variance portfolio framework, our results are as follows. First, the addition of GARCH parameters as inputs is beneficial to the proposed model. Second, the use of one-hot encoding to help the neural network differentiate between stocks improves the performance. Third, the new model is very promising, as it not only outperforms the equally weighted portfolio but also outperforms, by a significant margin, its econometric counterpart that uses univariate GARCHs to predict the volatilities. |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2109.01044&r= |
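A rough sketch of the hybrid idea described above: fit univariate GARCH(1,1) models per stock, then feed their conditional volatilities (plus a one-hot stock identifier) into a neural network that forecasts next-period volatility. The correlation block (e.g. a DCC-type process) is omitted, the data are simulated, and the volatility proxy is a simplification; this is not the paper's model.

```python
# Sketch: GARCH outputs + one-hot stock IDs as inputs to a volatility-forecasting
# neural network. Correlations would be handled by a separate econometric model.
import numpy as np
from arch import arch_model
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_days, n_stocks = 1000, 3
returns = rng.standard_normal((n_days, n_stocks))   # placeholder return panel (%)

features, targets = [], []
for i in range(n_stocks):
    res = arch_model(returns[:, i], vol="GARCH", p=1, q=1).fit(disp="off")
    cond_vol = res.conditional_volatility
    one_hot = np.eye(n_stocks)[i]
    for t in range(len(cond_vol) - 1):
        # inputs: today's GARCH volatility + stock identity;
        # target: tomorrow's volatility proxy (absolute return)
        features.append(np.concatenate(([cond_vol[t]], one_hot)))
        targets.append(abs(returns[t + 1, i]))

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
net.fit(np.array(features), np.array(targets))
print("in-sample R^2:", round(net.score(np.array(features), np.array(targets)), 3))
```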
By: | Saeed Marzban; Erick Delage; Jonathan Yumeng Li |
Abstract: | Recently, equal risk pricing, a framework for fair derivative pricing, was extended to consider dynamic risk measures. However, all current implementations either employ a static risk measure that violates time consistency, or are based on traditional dynamic programming solution schemes that are impracticable in problems with a large number of underlying assets (due to the curse of dimensionality) or with incomplete asset dynamics information. In this paper, we extend for the first time a famous off-policy deterministic actor-critic deep reinforcement learning (ACRL) algorithm to the problem of solving a risk-averse Markov decision process that models risk using a time-consistent recursive expectile risk measure. This new ACRL algorithm allows us to identify high-quality time-consistent hedging policies (and equal risk prices) for options, such as basket options, that cannot be handled using traditional methods, or in contexts where only historical trajectories of the underlying assets are available. Our numerical experiments, which involve both a simple vanilla option and a more exotic basket option, confirm that the new ACRL algorithm can produce 1) in simple environments, nearly optimal hedging policies and highly accurate prices, simultaneously for a range of maturities; 2) in complex environments, good quality policies and prices using a reasonable amount of computing resources; and 3) overall, hedging strategies that actually outperform the strategies produced using static risk measures when the risk is evaluated at later points in time. |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2109.04001&r= |
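For readers unfamiliar with the risk measure used above: the tau-expectile of a loss distribution is the minimizer of an asymmetrically weighted squared error. The one-period numerical sketch below illustrates the static expectile only; the paper works with a time-consistent recursive (dynamic) formulation inside a reinforcement learning loop, which is not reproduced here.

```python
# The tau-expectile e of a sample X minimizes
#   E[ |tau - 1{X <= e}| * (X - e)^2 ],
# i.e. squared deviations above e get weight tau and those below get 1 - tau.
import numpy as np
from scipy.optimize import minimize_scalar

def expectile(x, tau):
    x = np.asarray(x, dtype=float)
    def loss(e):
        w = np.where(x > e, tau, 1.0 - tau)
        return np.mean(w * (x - e) ** 2)
    return minimize_scalar(loss, bounds=(x.min(), x.max()), method="bounded").x

losses = np.random.default_rng(0).normal(0.0, 1.0, 100_000)
print(expectile(losses, 0.5))   # ~0.0: the 0.5-expectile is the mean
print(expectile(losses, 0.9))   # > 0: large losses are penalized more heavily
```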
By: | Diego Ferraro; Daniela Blanco; Sebastián Pessah; Rodrigo Castro |
Abstract: | Agricultural systems experience land-use changes that are driven by population growth and intensification of technological inputs. This results in land-use and cover change (LUCC) dynamics representing a complex landscape transformation process. In order to study the LUCC process we developed a spatially explicit agent-based model in the form of a cellular automaton implemented with the Cell-DEVS formalism. The resulting model, called AgroDEVS, is used for predicting LUCC dynamics along with their associated economic and environmental changes. AgroDEVS is structured using behavioral rules and functions representing a) crop yields, b) weather conditions, c) economic profit, d) farmer preferences, e) technology level adoption and f) natural resources consumption based on embodied energy accounting. Using data from a typical location of the Pampa region (Argentina) for the 1988-2015 period, simulation exercises showed that the economic goals were achieved, on average, in 6 out of every 10 years, but the environmental thresholds were only met in 1.9 out of 10 years. In a set of 50-year simulations, LUCC patterns quickly converge towards the most profitable crop sequences, with no noticeable tradeoff between the economic and environmental conditions. |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2109.01031&r= |
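A toy grid-based land-use sketch in the spirit of the model described above: each cell holds a crop choice and stochastically switches toward the currently most profitable option. This is a generic illustration of the modelling style, not the AgroDEVS / Cell-DEVS model; crop names, profit margins and switching rates are invented.

```python
# Toy land-use change simulation: cells adopt the year's most profitable crop
# with some stickiness, so the landscape converges toward profitable sequences.
import numpy as np

rng = np.random.default_rng(0)
crops = ["soy", "maize", "wheat-soy"]
profit = {"soy": 1.0, "maize": 0.7, "wheat-soy": 0.8}     # hypothetical margins
grid = rng.integers(0, len(crops), size=(50, 50))          # initial landscape

def step(grid):
    new = grid.copy()
    noise = rng.normal(0, 0.2, size=len(crops))            # yearly profit shocks
    scores = np.array([profit[c] for c in crops]) + noise
    # each cell adopts this year's most profitable crop with probability 0.3,
    # otherwise it keeps its current land use (sticky farmer preferences)
    switch = rng.random(grid.shape) < 0.3
    new[switch] = scores.argmax()
    return new

for year in range(50):
    grid = step(grid)

shares = np.bincount(grid.ravel(), minlength=len(crops)) / grid.size
print(dict(zip(crops, shares.round(2))))                   # long-run land-use shares
```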
By: | Paul Bilokon; David Finkelstein |
Abstract: | The principal component analysis (PCA) is a staple statistical and unsupervised machine learning technique in finance. The application of PCA in a financial setting is associated with several technical difficulties, such as numerical instability and nonstationarity. We attempt to resolve them by proposing two new variants of PCA: an iterated principal component analysis (IPCA) and an exponentially weighted moving principal component analysis (EWMPCA). Both variants rely on the Ogita-Aishima iteration as a crucial step. |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2108.13072&r= |
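A sketch of an exponentially weighted moving PCA: maintain an exponentially weighted covariance matrix and re-extract principal components as new observations arrive. The paper's IPCA and EWMPCA variants additionally rely on the Ogita-Aishima iteration to refine the eigendecomposition; that refinement step is not reproduced here, and the half-life and data are arbitrary.

```python
# Sketch: exponentially weighted moving covariance + eigendecomposition,
# i.e. a naive EWM-PCA without the Ogita-Aishima refinement used in the paper.
import numpy as np

def ewm_pca(returns, halflife=60, n_components=3):
    lam = 0.5 ** (1.0 / halflife)                 # decay factor per observation
    n_assets = returns.shape[1]
    mean = np.zeros(n_assets)
    cov = np.eye(n_assets) * 1e-6
    components = []
    for r in returns:
        mean = lam * mean + (1 - lam) * r
        d = r - mean
        cov = lam * cov + (1 - lam) * np.outer(d, d)
        vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
        components.append(vecs[:, ::-1][:, :n_components])
    return components

rng = np.random.default_rng(0)
rets = rng.normal(0, 0.01, size=(500, 10))        # placeholder return panel
pcs = ewm_pca(rets)
print(pcs[-1].shape)                              # (10, 3) loadings at the last date
```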
By: | Makariou, Despoina; Barrieu, Pauline; Chen, Yining |
Abstract: | We introduce a random forest approach to enable spread prediction in the primary catastrophe bond market. In a purely predictive framework, we assess the importance of catastrophe spread predictors using permutation and minimal depth methods. The whole population of non-life catastrophe bonds issued from December 2009 to May 2018 is used. We find that the random forest has at least as good prediction performance as our benchmark, linear regression, in the temporal context, and better prediction performance in the non-temporal one. The random forest also performs better than the benchmark when multiple predictors are excluded in accordance with the importance rankings or at random, which indicates that the random forest extracts information from existing predictors more effectively and captures interactions better without the need to specify them. The results of the random forest, in terms of both prediction accuracy and minimal depth importance, are stable. There is only a small divergence between the drivers of catastrophe bond spreads in the predictive versus the explanatory framework. We believe that the use of random forests can speed up investment decisions in the catastrophe bond industry, both for would-be issuers and investors. |
Keywords: | catastrophe bond pricing; interactions; machine learning in insurance; minimal depth-importance; permutation importance; primary market spread prediction; random forest; stability |
JEL: | G22 |
Date: | 2021–07–30 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:111529&r= |
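A sketch of the benchmark comparison described above: a random forest versus linear regression for spread prediction, plus the exercise of dropping the least important predictors and re-checking performance. The spread data and predictors are simulated placeholders, not the catastrophe bond sample, and impurity-based importance stands in for the paper's permutation and minimal depth measures.

```python
# Sketch: random forest vs. linear regression benchmark, then re-evaluation
# after keeping only the forest's top-ranked predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 800
X = rng.normal(size=(n, 10))                     # e.g. expected loss, peril, rating...
y = 3 * X[:, 0] + 2 * X[:, 1] * X[:, 2] + rng.normal(0, 0.5, n)   # interaction term

rf = RandomForestRegressor(n_estimators=500, random_state=0)
lin = LinearRegression()
print("RF  R^2:", cross_val_score(rf, X, y, cv=5).mean().round(3))
print("OLS R^2:", cross_val_score(lin, X, y, cv=5).mean().round(3))

# Drop the predictors the forest ranks as least important and re-evaluate;
# a small accuracy loss suggests the forest exploits the remaining predictors
# (and their interactions) effectively, as the abstract argues.
rf.fit(X, y)
keep = np.argsort(rf.feature_importances_)[-5:]
print("RF on top-5 features:", cross_val_score(rf, X[:, keep], y, cv=5).mean().round(3))
```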
By: | Brunori, Paolo (London School of Economics); Hufe, Paul (LMU Munich); Mahler, Daniel Gerszon (World Bank) |
Abstract: | In this paper we propose the use of machine learning methods to estimate inequality of opportunity. We illustrate how our proposed methods—conditional inference regression trees and forests—represent a substantial improvement over existing estimation approaches. First, they reduce the risk of ad-hoc model selection. Second, they establish estimation models by trading off upward and downward bias in inequality of opportunity estimates. The advantages of regression trees and forests are illustrated by an empirical application for a cross-section of 31 European countries. We show that arbitrary model selection may lead to significant biases in inequality of opportunity estimates relative to our preferred method. These biases are reflected in both point estimates and country rankings. Our results illustrate the practical importance of leveraging machine learning algorithms to avoid giving misleading information about the level of inequality of opportunity in different societies to policymakers and the general public. |
Keywords: | equality of opportunity, machine learning, random forests |
JEL: | D31 D63 C38 |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp14689&r= |
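A sketch of the ex-ante inequality-of-opportunity pipeline underlying the approach above: predict outcomes from circumstance variables with a forest, then compute an inequality index over the fitted values. The paper uses conditional inference trees and forests, which differ from the CART-based forest used here; the circumstance variables, income process and tuning values are invented.

```python
# Sketch: forest-based prediction of income from circumstances, then the mean
# log deviation (MLD) of the fitted values as an inequality-of-opportunity estimate.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
circumstances = pd.DataFrame({
    "parental_education": rng.integers(0, 3, n),
    "sex": rng.integers(0, 2, n),
    "birth_region": rng.integers(0, 5, n),
})
income = np.exp(10 + 0.3 * circumstances["parental_education"]
                + 0.1 * circumstances["sex"] + rng.normal(0, 0.5, n))

forest = RandomForestRegressor(n_estimators=300, min_samples_leaf=50, random_state=0)
forest.fit(circumstances, income)
fitted = forest.predict(circumstances)

def mean_log_deviation(x):
    return float(np.mean(np.log(np.mean(x) / x)))

# Inequality of the fitted ("circumstance-explained") incomes is the
# ex-ante estimate of inequality of opportunity.
print("Total inequality (MLD):    ", round(mean_log_deviation(income), 4))
print("Inequality of opportunity: ", round(mean_log_deviation(fitted), 4))
```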
By: | Gnecco Giorgio; Nutarelli Federico; Riccaboni Massimo |
Abstract: | This work applies Matrix Completion (MC) -- a class of machine-learning methods commonly used in the context of recommendation systems -- to analyse economic complexity. MC is applied to reconstruct the Revealed Comparative Advantage (RCA) matrix, whose elements express the relative advantage of countries in given classes of products, as evidenced by yearly trade flows. A high-accuracy binary classifier is derived from the application of MC, with the aim of discriminating between elements of the RCA matrix that are, respectively, higher or lower than one. We introduce a novel Matrix cOmpletion iNdex of Economic complexitY (MONEY) based on MC, which is related to the predictability of countries' RCA (the lower the predictability, the higher the complexity). Unlike previously developed indices of economic complexity, the MONEY index takes into account the various singular vectors of the matrix reconstructed by MC, whereas other indices are based on only one or two eigenvectors of a suitable symmetric matrix derived from the RCA matrix. Finally, MC is compared with a state-of-the-art economic complexity index (GENEPY). We show that the false positive rate per country of a binary classifier, constructed starting from the average entry-wise output of MC, can be used as a proxy for GENEPY. |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2109.03930&r= |
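A minimal matrix-completion sketch: reconstruct a partially observed matrix by iterative low-rank SVD imputation. The paper applies MC to the RCA matrix and then classifies entries as above or below one; the matrix below is synthetic, and the rank and iteration settings are arbitrary rather than the authors' choices.

```python
# Sketch: iterative SVD-based imputation of a partially observed low-rank matrix.
import numpy as np

rng = np.random.default_rng(0)
true = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 200))   # low-rank "RCA-like" matrix
mask = rng.random(true.shape) < 0.7                           # 70% of entries observed
observed = np.where(mask, true, np.nan)

def complete(M, mask, rank=5, n_iter=100):
    filled = np.where(mask, M, 0.0)            # initialize missing entries at 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        filled = np.where(mask, M, low_rank)   # keep observed entries, impute the rest
    return low_rank

reconstructed = complete(observed, mask)
err = np.abs(reconstructed - true)[~mask].mean()
print("mean abs error on held-out entries:", round(err, 3))

# In the paper's RCA setting, the subsequent binary classification of entries
# relative to a threshold of one would amount to: reconstructed > 1.
```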
By: | Philipp Maximilian Mueller; Björn-Martin Kurzrock |
Abstract: | In real estate transactions, the parties generally have limited time to provide and process information. Using building documentation and digital building data may help to obtain an unbiased view of the asset. In practice, it is still particularly difficult to assess the physical structure of a building due to shortcomings in data structure and quality. Machine learning may improve the speed and accuracy of information processing and of the results. This requires structured documents and the application of a taxonomy of unambiguous document classes. In this paper, prioritized document classes from previous research (Müller, Päuser, Kurzrock 2020) are supplemented with key information for technical due diligence reports. The key information is derived from the analysis of n=35 due diligence reports. Based on the analyzed reports and the identified key information, a checklist for technical due diligence is derived. The checklist will serve as a basis for a standardized reporting structure. The paper provides fundamentals for generating a (semi-)automated standardized due diligence report with a focus on the technical assessment of the asset. The paper includes recommendations for improving the machine readability of documents and indicates the potential for (partially) automated due diligence processes. It concludes with the challenges on the way to automated information extraction in due diligence processes and the potential for digital real estate management. |
Keywords: | digital building documentation; Document Classification; Due diligence; Machine Learning |
JEL: | R3 |
Date: | 2021–01–01 |
URL: | http://d.repec.org/n?u=RePEc:arz:wpaper:eres2021_64&r= |
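A sketch of one building block that could support the (semi-)automated reporting described above: a text classifier that assigns documents to predefined due-diligence classes. The class labels, example snippets and model choice are invented placeholders, not the taxonomy from the cited research.

```python
# Sketch: TF-IDF features + a linear classifier for due-diligence document classes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

documents = [
    "annual maintenance report for HVAC systems, inspection dates and defects",
    "lease agreement between landlord and tenant, rent schedule and term",
    "fire safety certificate issued by the local building authority",
    "capex plan for roof refurbishment and facade repair over five years",
]
labels = ["maintenance", "lease", "permit", "capex"]   # hypothetical document classes

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(documents, labels)
print(clf.predict(["inspection protocol for elevator maintenance"]))
```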
By: | Hasbi, Maude; Bohlin, Erik |
Abstract: | Based on a unique and exhaustive database, including micro-level cross-sectional data on 23 million observations over nine years, from 2009 to 2017, we assess whether broadband quality has an impact on income and unemployment reduction. Overall, the results do not show any significant effect of download speed on either income or the unemployment rate. However, after distinguishing by educational attainment and city size, we obtain heterogeneous results. While we highlight a substitution effect between low-skilled workers and broadband in smaller cities, we also show that broadband quality has a positive impact on unemployment reduction for low-skilled workers in bigger cities. However, the model predicts a negative effect of broadband quality on both the median income and the unemployment rate in areas with a higher proportion of college graduates. This result tends to support analyses showing that, with the progress made in machine learning and artificial intelligence and the increasing availability of big data, job computerization is expanding to the sphere of high-income cognitive jobs. |
Keywords: | Broadband Quality, Fibre, Income, Unemployment, Artificial Intelligence |
JEL: | L13 L50 L96 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:itsb21:238026&r= |
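A sketch of the type of specification the abstract describes: regress the local unemployment rate on broadband download speed, interacted with city size and education, using the statsmodels formula API. The variable names and simulated data are placeholders for the confidential micro data; the paper's actual identification strategy is not reproduced.

```python
# Sketch: OLS with speed-by-city-size and speed-by-education interactions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "download_speed": rng.gamma(2, 20, n),               # Mbit/s
    "big_city": rng.integers(0, 2, n),
    "share_college": rng.uniform(0.05, 0.5, n),
})
df["unemployment"] = (8 - 0.01 * df["download_speed"] * df["big_city"]
                      + rng.normal(0, 1, n))

model = smf.ols(
    "unemployment ~ download_speed * big_city + download_speed * share_college",
    data=df,
).fit(cov_type="HC1")                                     # heteroskedasticity-robust SEs
print(model.summary().tables[1])
```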
By: | Halkos, George; Tsilika, Kyriaki |
Abstract: | The main purpose of the present study is to feature the computational practice of green policy performance measurement. Computing the progress of the green economy includes topics such as indicators and measures to characterize environmental sustainability, methodological issues in indicating and presenting spatiotemporal patterns of resource use and pollution, computational frameworks for comparisons of environmental management among economies / economic sectors / socio-economic systems, and computational techniques to define the structure, dynamics, and change in ecosystems. Results are discussed in support of green policies. |
Keywords: | Computational Economics; Sustainability. |
JEL: | C60 C61 C63 C80 C81 C88 |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:109632&r= |
By: | Ugo Colombino; Nizamul Islam |
Abstract: | As a response to a changing labour market scenario and to the concerns for increasing costs and bad incentives of traditional income support policies, the last decades have witnessed, in many countries, reforms introducing more sophisticated designs of means-testing, eligibility and tagging. In this paper, we consider an alternative direction of reform that points towards universality, unconditionality and simplicity. Our main research question is whether tax-transfer rules designed according to these alternative criteria might be superior to the current one and could therefore be proposed as a policy reform. We adopt a computational approach to the design of optimal tax-transfer rules, within a flexible class. The exercise is applied to France, Germany, Italy, Luxembourg, Spain and the United Kingdom. The results suggest some common features in all the countries. The optimal tax-transfer rules feature a universal unconditional basic income or, equivalently, a negative income tax with a guaranteed minimum income. The tax profiles are much flatter than the current ones. For most social welfare criteria, and most countries, the simulated tax-transfer rules are superior to the current ones. These results confirm that policy reforms inspired by the principle of Universal Basic Income and Flat Tax might have good chances to dominate the current tax-transfer rules. |
Keywords: | empirical optimal taxation; microsimulation; microeconometrics; social welfare evaluation of tax-transfer rules |
JEL: | C60 H20 H30 |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:irs:cepswp:2021-06&r= |
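The equivalence the abstract mentions, between a universal unconditional basic income with a flat tax and a negative income tax with a guaranteed minimum income, can be seen in a few lines; the tax rate and guaranteed amount below are purely illustrative, not the paper's estimated optimal parameters.

```python
# With gross income y, flat tax rate t and universal transfer G, net income is
#   UBI + flat tax:       net = G + (1 - t) * y
#   Negative income tax:  net = y - t * (y - G / t) = G + (1 - t) * y
# so the two parameterizations trace the same budget line.
def net_income_ubi(y, t=0.4, G=8000):
    return G + (1 - t) * y

def net_income_nit(y, t=0.4, G=8000):
    breakeven = G / t                 # income at which the net transfer is zero
    return y - t * (y - breakeven)

for y in [0, 10_000, 20_000, 50_000]:
    assert abs(net_income_ubi(y) - net_income_nit(y)) < 1e-9
    print(y, net_income_ubi(y))
```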
By: | Castro-Iragorri, C; Ramírez, J |
Abstract: | Principal components analysis (PCA) is a statistical approach to building factor models in finance. PCA is also a particular case of a type of neural network known as an autoencoder. Recently, autoencoders have been successfully applied to factor models in financial applications (Gu et al., 2020; Heaton and Polson, 2017). We study the relationship between autoencoders and dynamic term structure models, and we propose different approaches for forecasting. We compare the forecasting accuracy of dynamic factor models based on autoencoders, the classical term structure models proposed in Diebold and Li (2006), and neural network-based approaches for time series forecasting. Empirically, we test the forecasting performance of autoencoders using U.S. yield curve data over the last 35 years. Preliminary results indicate that a hybrid approach using autoencoders and vector autoregressions, framed as a dynamic term structure model, provides an accurate forecast that is consistent throughout the sample. This hybrid approach overcomes in-sample overfitting and structural changes in the data. |
Keywords: | autoencoders, factor models, principal components, recurrent neural networks |
JEL: | C45 C53 C58 |
Date: | 2021–07–29 |
URL: | http://d.repec.org/n?u=RePEc:col:000092:019431&r= |
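A sketch of the classical Diebold-Li benchmark referenced above: project each date's yield curve onto Nelson-Siegel loadings to obtain three factors (level, slope, curvature), then forecast the factors with AR(1) models. The paper's contribution replaces or augments this with autoencoder-based factors; the yields below are simulated placeholders and the decay parameter is the conventional 0.0609 value.

```python
# Sketch: Nelson-Siegel / Diebold-Li factor extraction by cross-sectional OLS,
# then AR(1) forecasts of the factors mapped back to a yield curve.
import numpy as np
import statsmodels.api as sm

maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120], dtype=float)   # months
lam = 0.0609                                                           # Diebold-Li decay
x = lam * maturities
loadings = np.column_stack([
    np.ones_like(x),
    (1 - np.exp(-x)) / x,
    (1 - np.exp(-x)) / x - np.exp(-x),
])

rng = np.random.default_rng(0)
true_factors = np.cumsum(rng.normal(0, 0.05, size=(500, 3)), axis=0) + [4, -1, 0.5]
yields = true_factors @ loadings.T + rng.normal(0, 0.02, size=(500, len(maturities)))

# Step 1: cross-sectional OLS per date recovers the three factors
factors = np.linalg.lstsq(loadings, yields.T, rcond=None)[0].T

# Step 2: univariate AR(1) forecast for each factor, then map back to yields
forecast = []
for k in range(3):
    ar = sm.tsa.ARIMA(factors[:, k], order=(1, 0, 0)).fit()
    forecast.append(ar.forecast(1)[0])
print("one-step-ahead yield curve:", (np.array(forecast) @ loadings.T).round(3))
```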
By: | Söhnke M. Bartram; Jürgen Branke; Mehrshad Motahari (Cambridge Judge Business School, University of Cambridge) |
Abstract: | Artificial intelligence (AI) has a growing presence in asset management and has revolutionized the sector in many ways. It has improved portfolio management, trading, and risk management practices by increasing efficiency, accuracy, and compliance. In particular, AI techniques help construct portfolios based on more accurate risk and returns forecasts and under more complex constraints. Trading algorithms utilize AI to devise novel trading signals and execute trades with lower transaction costs, and AI improves risk modelling and forecasting by generating insights from new sources of data. Finally, robo-advisors owe a large part of their success to AI techniques. At the same time, the use of AI can create new risks and challenges, for instance as a result of model opacity, complexity, and reliance on data integrity. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:jbs:wpaper:20202001&r= |
By: | Janssen, Patrick; Sadowski, Bert M. |
Abstract: | Within the discussion on bias in algorithmic selection, fairness interventions are increasingly becoming a popular means to generate more socially responsible outcomes. The paper uses a modified framework based on Rambachan et al. (2020) to empirically investigate the extent to which bias mitigation techniques can provide a more socially responsible outcome and prevent bias in algorithms. Using the algorithmic auditing tool AI Fairness 360 on a synthetically biased dataset, the paper applies different bias mitigation techniques at the pre-processing, in-processing and post-processing stages of algorithmic selection to account for fairness. The data analysis is aimed at detecting violations of group fairness definitions in trained classifiers. In contrast to previous research, the empirical analysis focuses on the outcomes produced by decisions and the incentive problems behind fairness. The paper shows that training binary classifiers on synthetically generated biased data while treating the algorithms with bias mitigation techniques leads to a decrease in both social welfare and predictive accuracy in 43% of the cases tested. The results of our empirical study demonstrate that fairness interventions, which are designed to correct for bias, often lead to worse societal outcomes. Based on these results, we propose that algorithmic selection involves a trade-off between prediction accuracy and fairness of outcomes. Furthermore, we suggest that bias mitigation techniques certainly have a place in algorithmic selection, but that they have to be evaluated in the context of welfare economics. |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:itsb21:238032&r= |
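A sketch of one pre-processing fairness intervention of the kind applied above: Kamiran-Calders style reweighing assigns each (group, label) cell the weight P(group)·P(label)/P(group, label), so that group and label are statistically independent in the weighted data. AI Fairness 360 ships a pre-processor of this type; the plain-pandas version below and its synthetic data are only a generic illustration, not the paper's setup, and the in-processing and post-processing stages are omitted.

```python
# Sketch: reweighing as a pre-processing bias mitigation step, then a group
# fairness check on the resulting classifier's positive prediction rates.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({"group": rng.integers(0, 2, n)})
df["x"] = rng.normal(df["group"] * 0.5, 1.0)                     # feature correlated with group
df["label"] = (df["x"] + rng.normal(0, 1, n) > 0.5).astype(int)  # biased outcomes

p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / len(df)
weights = df.apply(lambda r: p_group[r["group"]] * p_label[r["label"]]
                   / p_joint[(r["group"], r["label"])], axis=1)

clf = LogisticRegression().fit(df[["x"]], df["label"], sample_weight=weights)

# Group fairness check: compare positive prediction rates across groups
preds = clf.predict(df[["x"]])
print(pd.Series(preds).groupby(df["group"]).mean())
```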
By: | Shenghao Feng; Xiujian Peng; Philip Adams |
Abstract: | This study investigates the energy and economic implications of China's carbon neutrality path over the period 2020 to 2060. We use a recursive dynamic CGE model, CHINAGEM-E, to conduct the analysis. Notable advancements over the original CHINAGEM model include: 1) detailed energy sector disaggregation, 2) a new electricity generation nesting structure, and 3) carbon capture and storage (CCS) mechanisms. Our simulation shows that to achieve carbon neutrality in 2060, China needs to change its energy consumption structure significantly. Coal and gas consumption will decline dramatically, while the demand for renewable energy, especially solar and wind energy, will increase considerably. However, the negative effects of the dramatic carbon emission reduction on China's macro economy are limited. In particular, by 2060 real GDP will be 1.36 percent lower in the carbon neutrality scenario (CNS) than in the base case scenario. The carbon price will reach 1614 CNY per tonne of carbon dioxide in 2060 in the CNS. The substantial changes in China's energy structure imply significant changes to its fossil fuel imports. China's import demand for coal, crude oil and gas will all fall sharply. By 2060, China's imports of coal and gas will be more than 60% lower and its oil imports will be around 50% lower than their respective base-case levels. |
Keywords: | Carbon neutrality, economic implication, energy consumption, China, CGE |
JEL: | C68 Q4 |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:cop:wpaper:g-318&r= |
By: | Felix Brandt; Carsten Lausberg |
Abstract: | This paper explores the stock returns of German real estate companies from 1991 to 2019. In contrast to previous studies, we use a forward-looking approach and alternative risk measures to better reflect investor behavior. First, the paper constructs a traditional five-factor Arbitrage Pricing Theory model to measure the sensitivity of real estate stock returns to the stock, bond and real estate markets as well as to inflation and the overall economic development. The analysis shows that German real estate stocks are more affected by changes in the economy and the stock market than by changes in the real estate market. We then apply a pseudo geometric Brownian motion concept combined with a Monte Carlo simulation to model future asset prices. Value at risk and conditional value at risk are used to quantify the downside risk for an investor in listed real estate. The paper finds that listed real estate is less risky than the general stock market, which is in line with our expectations. |
Keywords: | Asset Pricing; Germany; Monte Carlo Simulation; real estate |
JEL: | R3 |
Date: | 2021–01–01 |
URL: | http://d.repec.org/n?u=RePEc:arz:wpaper:eres2021_86&r= |
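A sketch of the simulation step described above: model future prices with a geometric Brownian motion, simulate many paths and read value at risk (VaR) and conditional value at risk (CVaR) off the simulated return distribution. The drift, volatility and horizon are illustrative values, not the paper's estimates for German real estate stocks.

```python
# Sketch: Monte Carlo GBM terminal prices, then empirical 95% VaR and CVaR.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.05, 0.18        # annual drift and volatility (hypothetical)
horizon, n_paths = 1.0, 100_000
s0 = 100.0

z = rng.standard_normal(n_paths)
s_T = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
returns = s_T / s0 - 1.0

alpha = 0.95
var = -np.quantile(returns, 1 - alpha)       # loss not exceeded with 95% confidence
cvar = -returns[returns <= -var].mean()      # expected loss beyond the VaR
print(f"95% VaR:  {var:.3f}")
print(f"95% CVaR: {cvar:.3f}")
```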
By: | Delzeit, Ruth; Heimann, Tobias; Schünemann, Franziska; Söder, Mareike |
Abstract: | The goal of this technical paper is to present in a transparent way a detailed description of the DART-BIO model - the bioeconomy and land use version of the DART model. A key feature of the DART-BIO model is the explicit representation of the vegetable oil industry and the biofuel sector. The paper describes the construction and aggregation of the database used for the DART-BIO model. Furthermore, the theoretical structure of the model is laid out, including the crucial assumptions, elasticities and parameters embedded in the model. |
Keywords: | CGE model, bioeconomy, climate policy, land use |
JEL: | C68 Q16 Q24 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:ifwkwp:2195&r= |
By: | Minke Remmerswaal (CPB Netherlands Bureau for Economic Policy Analysis); Jan Boone (CPB Netherlands Bureau for Economic Policy Analysis) |
Abstract: | Demand-side cost-sharing schemes reduce moral hazard in healthcare at the expense of out-of-pocket risk and equity. With a structural microsimulation model, we show that shifting the starting point of the deductible away from zero to 400 euros for all insured individuals leads to an average 4 percent reduction in healthcare expenditure and 47 percent lower out-of-pocket payments. We use administrative healthcare expenditure data and focus on the price-elastic part of the Dutch population to analyze the differences between the cost-sharing schemes. The model is estimated with a Bayesian mixture model to capture the distributions of healthcare expenditure, with which we predict the effects of cost-sharing schemes that are not present in our data. |
JEL: | I11 I13 I14 |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:cpb:discus:415&r= |
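A toy illustration (not the paper's Bayesian mixture microsimulation) of the policy comparison described above: under a shifted deductible, the first 400 euros of spending are fully covered and cost sharing applies only to the next tranche. The deductible size used below is an assumption for illustration, and behavioural responses, which the paper estimates, are ignored here.

```python
# Out-of-pocket (OOP) payments under a standard deductible versus a deductible
# whose starting point is shifted to 400 euros. The 385-euro deductible size is
# an illustrative assumption, roughly in line with the Dutch setting.
import numpy as np

def oop_standard(spending, deductible=385):
    # the first `deductible` euros of spending are paid out of pocket
    return np.minimum(spending, deductible)

def oop_shifted(spending, start=400, deductible=385):
    # spending below `start` is fully covered; cost sharing applies between
    # `start` and `start + deductible`
    return np.clip(spending - start, 0, deductible)

spending = np.array([0, 200, 500, 1000, 5000], dtype=float)
print("standard:", oop_standard(spending))
print("shifted: ", oop_shifted(spending))
```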
By: | Benjamin Avanzi; Gregory Clive Taylor; Melantha Wang |
Abstract: | In this paper, we first introduce a simulator of case estimates of incurred losses, called `SPLICE` (Synthetic Paid Loss and Incurred Cost Experience). In three modules, case estimates are simulated in continuous time, and a record is output for each individual claim. Revisions of the case estimates are also simulated as a sequence over the lifetime of the claim, in a number of different situations. Furthermore, some dependencies in relation to case estimates of incurred losses are incorporated, particularly recognizing certain properties of case estimates that are found in practice. For example, the magnitude of revisions depends on ultimate claim size, as does the distribution of the revisions over time. Some of these revisions occur in response to the occurrence of claim payments, and so `SPLICE` requires input of simulated per-claim payment histories. The claim data can be summarized by accident and payment "periods" whose duration is an arbitrary choice (e.g. month, quarter, etc.) available to the user. `SPLICE` is built on an existing simulator of individual claim experience called `SynthETIC`, available on CRAN (Avanzi et al., 2021a,b), which offers flexible modelling of occurrence and notification, as well as the timing and magnitude of individual partial payments. This is in contrast with the incurred losses, which constitute the additional contribution of `SPLICE`. The inclusion of incurred loss estimates provides a facility that almost no other simulator offers. |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2109.04058&r= |