nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒01‒03
fifteen papers chosen by

  1. Neural networks-based algorithms for stochastic control and PDEs in finance By Maximilien Germain; Huyên Pham; Xavier Warin
  2. The Fairness of Credit Scoring Models By Christophe HURLIN; Christophe PERIGNON; Sébastien SAURIN
  3. Machine Learning Applications in Operational Practice: Practical Recommendations for Workplace Co-Determination By Thieltges, Andree
  4. An Improved Reinforcement Learning Model Based on Sentiment Analysis By Yizhuo Li; Peng Zhou; Fangyi Li; Xiao Yang
  5. Pricing equity-linked life insurance contracts with multiple risk factors by neural networks By Karim Barigou; Lukasz Delong
  6. The impact of transparency policies on local flexibility markets in electrical distribution networks: A case study with artificial neural network forecasts By Erik Heilmann
  7. Exploration of machine learning algorithms for maritime risk applications By Knapp, S.; van de Velden, M.
  8. Model-Based Recursive Partitioning to Estimate Unfair Health Inequalities in the United Kingdom Household Longitudinal Study By Brunori, Paolo; Davillas, Apostolos; Jones, Andrew M.; Scarchilli, Giovanna
  9. An explicit split point procedure in model-based trees allowing for a quick fitting of GLM trees and GLM forests By Christophe Dutang; Quentin Guibert
  10. Structured Additive Regression and Tree Boosting By Michael Mayer; Steven C. Bourassa; Martin Hoesli; Donato Scognamiglio
  11. On the role of risk aversion and market design in capacity expansion planning By Fraunholz, Christoph; Miskiw, Kim K.; Kraft, Emil; Fichtner, Wolf; Weber, Christoph
  12. Using CRETH to make quantities add up without efficiency bias By Mark Horridge
  13. Pricing Bermudan options using regression trees/random forests By Zineb El Filali Ech-Chafiq; Pierre Henry-Labordere; Jérôme Lelong
  14. Using Text Analysis to Gauge the Reasons for Respondents' Assessment in the Economy Watchers Survey By Tomoaki Mikami; Hiroaki Yamagata; Jouchi Nakajima
  15. Approximating Bayes in the 21st Century By Gael M. Martin; David T. Frazier; Christian P. Robert

  1. By: Maximilien Germain (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistiques et Modélisations - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UP - Université de Paris, EDF R&D - EDF R&D - EDF - EDF, EDF - EDF); Huyên Pham (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistiques et Modélisations - UPD7 - Université Paris Diderot - Paris 7 - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique, FiME Lab - Laboratoire de Finance des Marchés d'Energie - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CREST - EDF R&D - EDF R&D - EDF - EDF); Xavier Warin (EDF R&D - EDF R&D - EDF - EDF, FiME Lab - Laboratoire de Finance des Marchés d'Energie - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CREST - EDF R&D - EDF R&D - EDF - EDF, EDF - EDF)
    Abstract: This paper presents machine learning techniques and deep reinforcement learning-based algorithms for the efficient resolution of nonlinear partial differential equations and dynamic optimization problems arising in investment decisions and derivative pricing in financial engineering. We survey recent results in the literature, present new developments, notably in the fully nonlinear case, and compare the different schemes, illustrated by numerical tests on various financial applications. We conclude by highlighting some future research directions.
    Date: 2021
  2. By: Christophe HURLIN; Christophe PERIGNON; Sébastien SAURIN
    Keywords: Discrimination, Credit markets, Machine Learning, Artificial intelligence
    Date: 2021
  3. By: Thieltges, Andree
    Abstract: AI models and machine learning applications are finding their way into everyday corporate practice and are subject to co-determination. To take account of and protect the interests and rights of employees, the current provisions in company IT agreements should be scrutinized and examined for their practical suitability. The evaluation "Machine Learning Applications in Operational Practice" illustrates possible courses of action based on provisions drawn from a total of 29 concluded works and service agreements. The results were discussed in workshops with works and staff councils, and relevant regulatory aspects concerning AI models and machine learning applications were derived from them. They form part of the recommendations for action presented here.
    Keywords: Data, Data protection, Personality rights, Performance monitoring, Behavior monitoring, Overfitting, Underfitting, Big Data, Black Box, Data Mining, HR Analytics
    Date: 2020
  4. By: Yizhuo Li; Peng Zhou; Fangyi Li; Xiao Yang
    Abstract: With the development of artificial intelligence technology, quantitative trading systems based on reinforcement learning have emerged in the stock market. The authors combine the deep Q-network (DQN) from reinforcement learning with the sentiment indicator ARBR to build a high-frequency stock trading model. To improve the model's performance, the PCA algorithm is used to reduce the dimensionality of the feature vector, the influence of market sentiment on the balance of long and short positions is incorporated into the state space of the trading model, and an LSTM layer replaces the fully connected layer to address the traditional DQN model's limited storage of empirical data. Cumulative return and the Sharpe ratio are used to evaluate the model's performance, with double moving averages and other strategies serving as benchmarks. The results show that the improved model proposed by the authors is far superior to the benchmark models in terms of return, achieving a maximum annualized rate of return of 54.5%, which demonstrates that it can significantly increase reinforcement learning performance in stock trading.
    Date: 2021–11
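The paper does not spell out how ARBR is constructed, but AR (a popularity indicator) and BR (a willingness indicator) are conventional sentiment indicators built from intraday price relationships. A minimal sketch, assuming the standard definitions (the authors' exact window and variant are not given in the abstract):

```python
import numpy as np

def arbr(open_, high, low, close, window=26):
    """Compute the AR and BR sentiment indicators over a rolling window.

    AR compares intraday strength (high - open) against intraday weakness
    (open - low); BR does the same relative to the previous close.
    Returns (AR, BR) for the most recent `window` bars.
    """
    o, h, l, c = (np.asarray(x, dtype=float) for x in (open_, high, low, close))
    prev_close = c[:-1]                # BR needs the previous close,
    o, h, l = o[1:], h[1:], l[1:]      # so drop the first bar everywhere
    o, h, l, prev_close = (a[-window:] for a in (o, h, l, prev_close))
    ar = 100.0 * (h - o).sum() / max((o - l).sum(), 1e-12)
    br = 100.0 * np.clip(h - prev_close, 0, None).sum() / max(
        np.clip(prev_close - l, 0, None).sum(), 1e-12)
    return ar, br
```

Values near 100 indicate balanced sentiment; the gap between AR and BR is one way to quantify the "long-short power" the abstract refers to.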
  5. By: Karim Barigou (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Lukasz Delong (Warsaw School of Economics - Institute of Econometrics)
    Abstract: This paper considers the pricing of equity-linked life insurance contracts with death and survival benefits in a general model with multiple stochastic risk factors: interest rate, equity, volatility, unsystematic and systematic mortality. We price the equity-linked contracts by assuming that the insurer hedges the risks to reduce the local variance of the net asset value process and requires a compensation for the non-hedgeable part of the liability in the form of an instantaneous standard deviation risk margin. The price can then be expressed as the solution of a system of non-linear partial differential equations. We reformulate the problem as a backward stochastic differential equation with jumps and solve it numerically by the use of efficient neural networks. Sensitivity analysis is performed with respect to initial parameters and an analysis of the accuracy of the approximation of the true price with our neural networks is provided.
    Keywords: Equity-linked contracts, Neural networks, Stochastic mortality, BSDEs with jumps, Hull-White stochastic interest rates, Heston model
    Date: 2021–11–10
  6. By: Erik Heilmann (University of Kassel)
    Abstract: The energy transition brings various challenges of a technical, economic and organizational nature. One major topic, especially in zonal electricity systems, is the organization of future congestion management. The local flexibility market (LFM) is an often-discussed concept for market-based congestion management. As in the energy system as a whole, the market transparency of LFMs can influence individual bidders' behavior. In this context, this paper investigates the predictability of the network status and of an LFM's outcome under a given transparency policy. For this, forecast models based on artificial neural networks (ANN) are implemented on synthetic network and LFM data. Three defined transparency policies determine the amount of input data used for the models. The results suggest that the transparency policy can influence the predictability of the network status and the LFM outcome, but appropriate forecasts are generally feasible. Therefore, the transparency policy should not conceal information but provide a level playing field for all parties involved. The provision of semi-disaggregated data at the network area level can be suitable for bidders' decision making and reduces transaction costs.
    Keywords: Local flexibility markets, Market transparency, Transparency policy, Artificial neural network forecast
    JEL: L94 L98 Q41 Q47
    Date: 2021
  7. By: Knapp, S.; van de Velden, M.
    Abstract: Predicted incident probabilities at the ship level allow maritime stakeholders to manage and pre-empt incident risk effectively in several ways: enhanced targeting of ship inspections, improved domain awareness, and better risk exposure assessments for strategic planning and asset allocation. Using a unique and comprehensive global dataset of 1.2 million observations covering 2014 to 2020, this study explores 144 model variants from the field of machine learning (18 random forest variants for each of 8 incident endpoints of interest) with the aim of enhancing prediction capabilities for maritime applications. An additional point of interest is to determine and highlight the relative importance of over 500 evaluated covariates. The results differ for each endpoint of interest and confirm, based on a full year of out-of-sample evaluation, that random forest methods improve prediction capabilities. Targeting the top 10% most risky vessels would improve predictions by a factor of 2.7 to 4.9 compared to random selection. Balanced random forests and random forests with balanced training variants outperform regular random forests, where the final choice of variant also depends on the aggregation type and the use of probabilities in the application areas of interest. The covariate groups most important for predicting incident risk relate to beneficial ownership, the safety management company, and the size and age of the vessel; the importance of these factors is similar across the endpoints of interest considered here.
    Keywords: ship specific risk, safety quality, reducing false negative events, risk exposure estimation, machine learning, case weighting, subsampling, random forest, sampling, evaluation metrics, top decile lift, variable importance
    Date: 2021–12–13
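The 2.7 to 4.9 improvement over random selection corresponds to the top decile lift metric listed in the keywords: the incident rate among the 10% highest-scored ships divided by the overall incident rate. A minimal sketch of how such a lift is computed (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def top_decile_lift(y_true, scores):
    """Ratio of the incident rate among the 10% highest-scored ships
    to the overall incident rate (1.0 = no better than random)."""
    y = np.asarray(y_true, dtype=float)
    s = np.asarray(scores, dtype=float)
    k = max(1, int(round(0.10 * len(y))))
    top = np.argsort(-s)[:k]          # indices of the top decile by score
    return y[top].mean() / y.mean()   # decile incident rate / base rate
```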
  8. By: Brunori, Paolo (London School of Economics); Davillas, Apostolos (University of East Anglia); Jones, Andrew M. (University of York); Scarchilli, Giovanna (University of Trento)
    Abstract: We measure unfair health inequality in the UK using a novel data-driven empirical approach. We explain health variability as the result of circumstances beyond individual control and health-related behaviours. We do this using model-based recursive partitioning, a supervised machine learning algorithm. Unlike usual tree-based algorithms, model-based recursive partitioning not only identifies social groups with different expected levels of health but also unveils the heterogeneity of the relationship linking behaviours and health outcomes across groups. The empirical application is conducted using the UK Household Longitudinal Study. We show that unfair inequality is a substantial fraction of the total explained health variability. This finding holds no matter which exact definition of fairness is adopted: using both the fairness gap and direct unfairness measures, each evaluated at different reference values for circumstances or effort.
    Keywords: machine learning, health equity, inequality of opportunity, unhealthy lifestyle behaviours
    JEL: I14 D63
    Date: 2021–12
  9. By: Christophe Dutang (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - CNRS - Centre National de la Recherche Scientifique - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres); Quentin Guibert (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - CNRS - Centre National de la Recherche Scientifique - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres)
    Abstract: Classification and regression trees (CART) prove to be a true alternative to fully parametric models such as linear models (LM) and generalized linear models (GLM). Although CART suffer from a biased variable selection issue, they are commonly applied to various topics and used for tree ensembles and random forests because of their simplicity and computation speed. Conditional inference tree and model-based tree algorithms, in which variable selection is tackled via fluctuation tests, are known to give more accurate and interpretable results than CART, but yield longer computation times. Using a closed-form maximum likelihood estimator for GLM, this paper proposes a split point procedure based on the explicit likelihood in order to save time when searching for the best split for a given splitting variable. A simulation study for non-Gaussian responses is performed to assess the computational gain when building GLM trees. We also propose a benchmark on simulated and empirical datasets of GLM trees against CART, conditional inference trees and LM trees in order to identify situations where GLM trees are efficient. This approach is extended to multiway split trees and log-transformed distributions. Making GLM trees possible through a new split point procedure allows us to investigate the use of GLM in ensemble methods. We propose a numerical comparison of GLM forests against other random forest-type approaches. Our simulation analyses show cases where GLM forests are good challengers to random forests.
    Keywords: GLM, model-based recursive partitioning, GLM trees, random forest, GLM forest
    Date: 2021–11–11
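The core idea, scoring each candidate split by its explicit likelihood using a closed-form MLE in each child node, can be sketched for a Poisson response, where the MLE of the mean is simply the sample mean. This is a simplified illustration of the principle, not the authors' exact procedure:

```python
import numpy as np

def best_poisson_split(x, y):
    """Exhaustive split search for a Poisson response using the
    closed-form MLE (the sample mean) in each child node.

    Returns (split_point, log_likelihood) for the best binary split of the
    single covariate x. Constant terms (log y!) are dropped, since they do
    not affect the comparison between splits.
    """
    order = np.argsort(x)
    x, y = np.asarray(x)[order], np.asarray(y, dtype=float)[order]

    def loglik(v):
        mu = v.mean()                  # closed-form Poisson MLE
        if mu == 0:                    # all-zero node contributes 0
            return 0.0
        return (v * np.log(mu) - mu).sum()

    best = (None, -np.inf)
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:           # cannot split between tied values
            continue
        ll = loglik(y[:i]) + loglik(y[i:])
        if ll > best[1]:
            best = ((x[i - 1] + x[i]) / 2.0, ll)
    return best
```

Because the child-node fits are closed-form, each candidate split costs only a couple of mean computations, which is the source of the speed-up over refitting a GLM per candidate split.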
  10. By: Michael Mayer (Schweizerische Mobiliar Versicherungsgesellschaft); Steven C. Bourassa (Florida Atlantic University); Martin Hoesli (University of Geneva - Geneva School of Economics and Management (GSEM); Swiss Finance Institute; University of Aberdeen - Business School); Donato Scognamiglio (IAZI AG and University of Bern)
    Abstract: Structured additive regression (STAR) models are a rich class of regression models that include the generalized linear model (GLM) and the generalized additive model (GAM). STAR models can be fitted by Bayesian approaches, component-wise gradient boosting, penalized least-squares, and deep learning. Using feature interaction constraints, we show that such models can also be implemented with the gradient boosting powerhouses XGBoost and LightGBM, thereby benefiting from their excellent predictive capabilities. Furthermore, we show how STAR models can be used for supervised dimension reduction and explain under what circumstances covariate effects of such models can be described in a transparent way. We illustrate the methodology with case studies pertaining to house price modeling, with very encouraging results regarding both interpretability and predictive performance.
    Keywords: machine learning, structured additive regression, gradient boosting, interpretability, transparency
    JEL: C13 C21 C45 C51 C52 C55 R31
    Date: 2021–09
  11. By: Fraunholz, Christoph; Miskiw, Kim K.; Kraft, Emil; Fichtner, Wolf; Weber, Christoph
    Abstract: Investment decisions in competitive power markets are based upon thorough profitability assessments. In practice, investors typically show a high degree of risk aversion, which is the main argument for the capacity mechanisms being implemented around the world. In order to investigate the interdependencies between investors' risk aversion and market design, we extend the agent-based electricity market model PowerACE to account for long-term uncertainties. This allows us to model capacity expansion planning from an agent perspective and with different risk preferences. The enhanced model is then applied in a multi-country case study of the European electricity market. Our results show that assuming risk-averse rather than risk-neutral investors leads to slightly reduced investments in dispatchable capacity, higher wholesale electricity prices, and reduced levels of resource adequacy. These effects are more pronounced in an energy-only market than under a capacity mechanism. Moreover, uncoordinated changes in market design may also lead to negative cross-border effects.
    Keywords: Agent-based simulation, Capacity expansion planning, Risk aversion, Electricity market design, Energy-only market, Capacity mechanism
    Date: 2021
  12. By: Mark Horridge
    Abstract: Modern CGE models can boast considerable sectoral detail. However, it is obvious that the output of (say) electronic components must be quite heterogeneous. Hence, since Leontief, multisectoral models have tended to measure quantities not in physical units but in effective economic units (usually initial-dollars-worth). The CET functional form, a close cousin of CES, is used to allocate a fixed resource between alternate uses; for example, land between crops, or workers between sectors. It works well when both input and output quantities are measured in initial-dollars-worth, such as land rental values. Because CET chooses a crop mix to maximize revenue, it is welfare-neutral -- a small change in land allocation will not affect land's contribution to GDP. This is a desirable property. But CET translates poorly into physical units: we typically find that if percent changes in (effective) land use are interpreted as percent changes in crop areas, then total land area is not fixed. This can be a problem for reporting results, or for interfacing a CGE model with ecological or agronomic models that work in physical units. The CRETH functional form is a generalization of CET that has in the past been used, like CET, to allocate a fixed resource (measured in effective units) between alternate uses. In this usage, CRETH is like CET but with more parameter flexibility. Here we show that CRETH land supply functions can instead be interpreted more literally: as the first-order condition of a revenue-maximizing problem in which a land-owner allocates a fixed acreage of land between uses. Used in this way, CRETH (a) allows reported land areas to add up properly, and (b) has the desirable property that small changes in land allocation do not affect land's contribution to GDP (so avoiding efficiency bias).
    Keywords: Land use, CGE, CET, CRETH, Welfare impacts
    JEL: C68 Q15 Q24 I31
    Date: 2021–12
  13. By: Zineb El Filali Ech-Chafiq (DAO - Données, Apprentissage et Optimisation - LJK - Laboratoire Jean Kuntzmann - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes, Natixis); Pierre Henry-Labordere (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique, Natixis); Jérôme Lelong (DAO - Données, Apprentissage et Optimisation - LJK - Laboratoire Jean Kuntzmann - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes)
    Abstract: The value of an American option is the maximized value of the discounted cash flows from the option. At each time step, one needs to compare the immediate exercise value with the continuation value and exercise as soon as the exercise value is strictly greater than the continuation value. We can formulate this problem as a dynamic programming equation, where the main difficulty comes from the computation of the conditional expectations representing the continuation values at each time step. In (Longstaff and Schwartz, 2001), these conditional expectations were estimated using regressions on a finite-dimensional vector space (typically a polynomial basis). In this paper, we follow the same algorithm, except that the conditional expectations are estimated using regression trees or random forests. We discuss the convergence of the LS algorithm when the standard least squares regression is replaced with regression trees. Finally, we present some numerical results with regression trees and random forests. The random forest algorithm gives excellent results in high dimensions.
    Keywords: Regression trees, Random forests, Bermudan options, Optimal stopping
    Date: 2021–11–19
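A minimal sketch of the Longstaff-Schwartz backward induction described above, using a piecewise-constant fit on quantile bins of the spot as a one-level stand-in for a regression tree (to stay dependency-free). All parameters are illustrative; this is not the authors' implementation:

```python
import numpy as np

def bermudan_put_lsmc(s0=100.0, strike=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_steps=10, n_paths=20000, n_bins=20, seed=0):
    """Price a Bermudan put by Longstaff-Schwartz, estimating the
    continuation value with a piecewise-constant (binned) regression."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths on the exercise grid.
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)
    disc = np.exp(-r * dt)

    cash = np.maximum(strike - s[:, -1], 0.0)      # exercise at maturity
    for t in range(n_steps - 2, -1, -1):
        cash *= disc                               # discount one step back
        payoff = np.maximum(strike - s[:, t], 0.0)
        itm = payoff > 0                           # regress only on ITM paths
        if itm.sum() > n_bins:
            edges = np.quantile(s[itm, t], np.linspace(0, 1, n_bins + 1))
            idx = np.clip(np.searchsorted(edges, s[itm, t]) - 1, 0, n_bins - 1)
            cont = np.array([cash[itm][idx == b].mean() if np.any(idx == b)
                             else 0.0 for b in range(n_bins)])[idx]
            exercise = itm.copy()
            exercise[itm] = payoff[itm] > cont     # exercise if payoff beats
            cash[exercise] = payoff[exercise]      # estimated continuation
    return disc * cash.mean()
```

Replacing the binned estimator with a fitted regression tree or random forest recovers the paper's scheme; the backward induction itself is unchanged.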
  14. By: Tomoaki Mikami (Bank of Japan); Hiroaki Yamagata (Bank of Japan); Jouchi Nakajima (Bank of Japan)
    Abstract: The Economy Watchers Survey released monthly by the Cabinet Office provides not only the headline diffusion index of the economic assessment of survey respondents (so-called "economy watchers") but also textual data from respondents' comments giving reasons for their assessment. Employing such data, this article presents an example of the use of text analysis, which has attracted increasing attention in recent years. Following Tsuruga and Okazaki (2017) and Otaka and Kan (2018), we construct co-occurrence network diagrams to explore what issues economy watchers focus on. The co-occurrence network diagrams drawn using data for mid-2021 show that economy watchers mainly focused on the State of Emergency and business restrictions related to COVID-19, developments in the vaccination process, and the shortage of semiconductors for automobile production. Our analysis shows that textual data are useful for assessing the economy, and it is important to continue making efforts to improve text analysis methods.
    Keywords: Big data; Text analysis; Economy Watchers Survey; Co-occurrence network diagram
    Date: 2021–12–20
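A co-occurrence network diagram is built from counts of term pairs that appear in the same comment: terms become nodes and the pair counts become edge weights. A minimal sketch of the counting step (whitespace tokenization is a simplifying assumption; the authors work with Japanese text, which would need a morphological tokenizer):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(comments, min_count=1):
    """Count how often each pair of distinct terms appears in the same
    comment -- the edge weights of a co-occurrence network diagram."""
    edges = Counter()
    for comment in comments:
        terms = sorted(set(comment.split()))       # unique terms per comment
        edges.update(combinations(terms, 2))       # every unordered pair
    return {pair: n for pair, n in edges.items() if n >= min_count}
```

The resulting dictionary can be handed to any graph-drawing library; raising `min_count` prunes rare pairs so that only the dominant themes remain visible.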
  15. By: Gael M. Martin; David T. Frazier; Christian P. Robert
    Abstract: The 21st century has seen an enormous growth in the development and use of approximate Bayesian methods. Such methods produce computational solutions to certain `intractable' statistical problems that challenge exact methods like Markov chain Monte Carlo: for instance, models with unavailable likelihoods, high-dimensional models, and models featuring large data sets. These approximate methods are the subject of this review. The aim is to help new researchers in particular -- and more generally those interested in adopting a Bayesian approach to empirical work -- distinguish between different approximate techniques; understand the sense in which they are approximate; appreciate when and why particular methods are useful; and see the ways in which they can be combined.
    Keywords: Approximate Bayesian inference, intractable Bayesian problems, approximate Bayesian computation, Bayesian synthetic likelihood, variational Bayes, integrated nested Laplace approximation
    Date: 2021
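One of the families covered by the review, approximate Bayesian computation (ABC), can be illustrated with a minimal rejection sampler: draw parameters from the prior, simulate data, and keep the draws whose summary statistic falls closest to the observed one. The toy model and the tolerance rule below are illustrative choices, not prescriptions from the review:

```python
import numpy as np

def abc_rejection(observed, prior_sampler, simulator, summary,
                  n_draws=10000, quantile=0.01, seed=0):
    """Rejection ABC: keep the prior draws whose simulated summary lies
    within the closest `quantile` fraction of the observed summary."""
    rng = np.random.default_rng(seed)
    theta = np.array([prior_sampler(rng) for _ in range(n_draws)])
    obs_summary = summary(observed)
    dist = np.array([abs(summary(simulator(t, rng)) - obs_summary)
                     for t in theta])
    eps = np.quantile(dist, quantile)              # adaptive tolerance
    return theta[dist <= eps]                      # approximate posterior draws

# Toy example: infer the mean of a normal with known unit variance.
rng0 = np.random.default_rng(42)
obs = rng0.normal(2.0, 1.0, size=100)
post = abc_rejection(
    observed=obs,
    prior_sampler=lambda rng: rng.uniform(-10, 10),
    simulator=lambda mu, rng: rng.normal(mu, 1.0, size=100),
    summary=np.mean,
)
```

The accepted draws approximate the posterior only as well as the summary statistic captures the data; the review's other families (synthetic likelihood, variational Bayes, INLA) trade this simulation cost for different approximation assumptions.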

General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.