nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒11‒15
twenty-one papers chosen by
Stan Miles
Thompson Rivers University

  2. Machine learning in explaining nonprofit organizations’ participation : a driving factors analysis approach By Zhanxue Gong; Xiyuan Li; Jiawen Liu; Yeming Gong
  3. Agent-Based Computational Economics: Overview and Brief History By Tesfatsion, Leigh
  4. Deep Learning Algorithms for Hedging with Frictions By Xiaofei Shi; Daran Xu; Zhanhao Zhang
  5. Multidimensionality of Land Ownership Among Men and Women in Sub-Saharan Africa: A Machine Learning Clustering Exercise By Kilic, Talip; Hasanbasri, Ardina; Moylan, Heather; Koolwal, Gayatri
  6. Speaking the same language: A machine learning approach to classify skills in Burning Glass Technologies data By Julie Lassébie; Luca Marcolin; Marieke Vandeweyer; Benjamin Vignal
  7. Data-driven Hedging of Stock Index Options via Deep Learning By Jie Chen; Lingfei Li
  8. Deep Asymptotic Expansion: Application to Financial Mathematics (forthcoming in the proceedings of IEEE CSDE 2021) By Yuga Iguchi; Riu Naito; Yusuke Okano; Akihiko Takahashi; Toshihiro Yamada
  9. "Deep Asymptotic Expansion with Weak Approximation" By Yuga Iguchi; Riu Naito; Yusuke Okano; Akihiko Takahashi; Toshihiro Yamada
  10. A Meta-Method for Portfolio Management Using Machine Learning for Adaptive Strategy Selection By Damian Kisiel; Denise Gorse
  11. Land-use hysteresis triggered by staggered payment schemes for more permanent biodiversity conservation By Drechsler, Martin; Grimm, Volker
  12. Understanding corporate default using Random Forest: The role of accounting and market information By Alessandro Bitetto; Stefano Filomeni; Michele Modina
  13. Finding Needles in Haystacks: Multiple-Imputation Record Linkage Using Machine Learning By John M. Abowd; Joelle Abramowitz; Margaret C. Levenstein; Kristin McCue; Dhiren Patki; Trivellore Raghunathan; Ann M. Rodgers; Matthew D. Shapiro; Nada Wasi; Dawn Zinsser
  14. Quantification of Economic Uncertainty: a deep learning approach By Gillmann, Niels; Kim, Alisa
  15. Boosting Tax Revenues with Mixed-Frequency Data in the Aftermath of Covid-19: The Case of New York By Kajal Lahiri; Cheng Yang
  16. LifeSim: a lifecourse dynamic microsimulation model of the millennium birth cohort in England By Skarda, Ieva; Asaria, Miqdad; Cookson, Richard
  17. Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing By Christian Bayer; Chiheb Ben Hammouda; Raúl Tempone
  18. Parameterized Explanations for Investor / Company Matching By Simerjot Kaur; Ivan Brugere; Andrea Stefanucci; Armineh Nourbakhsh; Sameena Shah; Manuela Veloso
  19. It's not always about the money, sometimes it's about sending a message: Evidence of Informational Content in Monetary Policy Announcements By Yong Cai; Santiago Camara; Nicholas Capel
  20. EU in the global Artificial Intelligence landscape By RIGHI Riccardo; LOPEZ COBO Montserrat; SAMOILI Sofia; CARDONA Melisande; VAZQUEZ-PRADA BAILLET Miguel; DE PRATO Giuditta
  21. AI Watch. Defining Artificial Intelligence 2.0. Towards an operational definition and taxonomy of AI for the AI landscape By Sofia Samoili; Montserrat Lopez Cobo; Blagoj Delipetrev; Fernando Martinez-Plumed; Emilia Gomez; Giuditta De Prato

  1. By: Rasolomanana, Onjaniaina Mianin’Harizo
    Abstract: This paper presents an ensemble neural network using a small data set in the context of bankruptcy prediction. The individual models of the ensemble are trained on data of different types. We compare the performance of three neural network models: one using a single type of data, one combining both data types in a single data frame, and one using ensemble learning. The results show that the ensemble model outperformed both the individual and the combined model. This suggests that with scarce training data, especially when multiple data types are available, an ensemble neural network can improve prediction accuracy.
    Keywords: ensemble neural network, small dataset, combined data, bankruptcy prediction
    Date: 2021–10
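As an illustrative sketch only (the paper's data, architectures and training details are not given here; all features, labels and model sizes below are hypothetical), the ensemble idea in item 1 — train a separate network on each data type, then average the members' predicted probabilities — can be expressed as:

```python
# Hypothetical sketch of an ensemble of neural networks, each trained on a
# different type of data, with predicted probabilities averaged across members.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200
X_ratio = rng.normal(size=(n, 5))   # stand-in for financial-ratio features
X_macro = rng.normal(size=(n, 3))   # stand-in for a second data type
y = (X_ratio[:, 0] + X_macro[:, 0] > 0).astype(int)  # toy bankruptcy label

members = []
for X in (X_ratio, X_macro):
    m = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    members.append(m.fit(X, y))

# Ensemble output: average the class-1 probabilities of the members.
proba = np.mean(
    [m.predict_proba(X)[:, 1] for m, X in zip(members, (X_ratio, X_macro))],
    axis=0,
)
accuracy = ((proba > 0.5).astype(int) == y).mean()
```

Each member sees only its own data type, which is what allows the averaged ensemble to use information no single member has.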
  2. By: Zhanxue Gong (emlyon business school); Xiyuan Li; Jiawen Liu; Yeming Gong
    Abstract: The construction of smart cities requires the participation of nonprofit organizations, but the driving factors behind that participation remain poorly analysed. To address this, using structural equation modelling as the research method, this study constructs a machine-learning-based public satisfaction relationship model for nonprofit organizations participating in smart-city construction planning. Corresponding hypotheses are formulated and data are collected through questionnaires, with responses scored on a ten-point Likert scale, and deep learning is applied in conjunction with the model. The research shows that the model established in this study yields good analytical results and has practical value: it can provide suggestions for optimization and theoretical references for subsequent research.
    Keywords: public satisfaction, smart city, non-profit organization, machine learning, AI-based management, artificial intelligence
    Date: 2019–12–01
  3. By: Tesfatsion, Leigh
    Abstract: Scientists seek to understand how real-world systems work. Models devised for scientific purposes must always simplify reality. However, ideally, a modeling approach should be flexible as well as logically rigorous; it should permit scientists to tailor model simplifications appropriately for specific purposes at hand. Modeling flexibility and logical rigor have been the two key goals motivating the development of Agent-based Computational Economics (ACE), a variant of agent-based modeling adhering to seven specific modeling principles. This perspective provides an overview of ACE and a brief history of its development.
    Date: 2021–11–08
  4. By: Xiaofei Shi; Daran Xu; Zhanhao Zhang
    Abstract: This work studies optimal hedging problems in frictional markets with general convex transaction costs on the trading rates. We show that, under a smallness assumption on the magnitude of the transaction costs, the leading-order approximation of the optimal trading speed can be identified through the solution to a nonlinear SDE. Unfortunately, models with arbitrary state dynamics generally lead to a nonlinear forward-backward SDE system for which well-posedness results are unavailable. However, the optimal trading strategy can still be found numerically using modern deep learning algorithms. Among various deep learning structures, the most popular choices are the FBSDE solver introduced in the spirit of [32] and the deep hedging algorithm pioneered by [12, 14, 15, 16, 35, 36, 45, 47]. We implement these deep learning algorithms with calibrated parameters from [26] and compare the numerical results with the leading-order approximations. This work documents the performance of different learning-based algorithms and provides a better understanding of their advantages and drawbacks.
    Date: 2021–11
  5. By: Kilic, Talip; Hasanbasri, Ardina; Moylan, Heather; Koolwal, Gayatri
    Keywords: Labor and Human Capital, Research and Development/Tech Change/Emerging Technologies
    Date: 2021–08
  6. By: Julie Lassébie; Luca Marcolin; Marieke Vandeweyer; Benjamin Vignal
    Abstract: This report presents a methodology to classify skill requirements in online job postings into a pre-existing expert-driven taxonomy of broader skill categories. The proposed approach uses a semi-supervised Machine Learning algorithm and relies on the actual meaning and definition of the skills. It allows for the classification of more than 17 000 unique skill keywords contained in the Burning Glass dataset into 61 categories. The outcome of the classification exercise is validated using O*NET information on skills by occupations, and by benchmarking the results of some empirical descriptive exercises against the existing literature. Compared to a manual classification, the proposed approach organises large amounts of skills information in an analytically tractable form, and with considerable savings in time and human resources.
    JEL: C45 C55 J23 J24 J63
    Date: 2021–11–11
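A much-simplified stand-in for the semi-supervised classification in item 6 (the toy 3-d "embeddings", category names and keyword below are fabricated; the actual method relies on the meaning and definition of the skills): assign each skill keyword to the taxonomy category whose seed terms it is closest to in an embedding space.

```python
# Fabricated example: classify a skill keyword into the category whose
# seed-term embeddings have the highest mean cosine similarity to it.
import numpy as np

# Pretend 3-d embeddings for seed terms of two taxonomy categories.
category_seeds = {
    "programming": np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]),
    "communication": np.array([[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]]),
}
keyword_vec = np.array([0.85, 0.15, 0.05])  # e.g. the keyword "python scripting"

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(vec, seeds):
    """Pick the category with the highest mean cosine similarity to vec."""
    scores = {cat: np.mean([cosine(vec, s) for s in mat])
              for cat, mat in seeds.items()}
    return max(scores, key=scores.get)

label = classify(keyword_vec, category_seeds)  # → "programming"
```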
  7. By: Jie Chen; Lingfei Li
    Abstract: We develop deep learning models to learn the hedge ratio for S&P500 index options directly from options data. We compare different combinations of features and show that a feedforward neural network model with time to maturity, Black-Scholes delta and a sentiment variable (VIX for calls and index return for puts) as input features performs the best in the out-of-sample test. This model significantly outperforms the standard hedging practice that uses the Black-Scholes delta and a recent data-driven model. Our results demonstrate the importance of market sentiment for hedging efficiency, a factor previously ignored in developing hedging strategies.
    Date: 2021–11
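The feature set described in item 7 (time to maturity, Black-Scholes delta, and a sentiment variable) can be sketched with a feedforward network on synthetic data; the target construction below is an assumption for illustration, not the paper's data pipeline.

```python
# Hypothetical sketch: learn a hedge ratio from option features with a
# feedforward network (synthetic data; not the paper's dataset or model).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 500
ttm = rng.uniform(0.05, 1.0, n)        # time to maturity (years)
bs_delta = rng.uniform(0.05, 0.95, n)  # Black-Scholes delta
vix = rng.uniform(10.0, 40.0, n)       # sentiment proxy (VIX level, for calls)

# Toy target: the "true" hedge ratio deviates from the BS delta with sentiment.
hedge = bs_delta + 0.002 * (vix - 20.0) * (1.0 - ttm) + rng.normal(0, 0.01, n)

X = np.column_stack([ttm, bs_delta, vix])
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(X, hedge)

# Compare mean squared error against the naive BS-delta hedge.
mse_net = float(np.mean((net.predict(X) - hedge) ** 2))
mse_bs = float(np.mean((bs_delta - hedge) ** 2))
```

The comparison of `mse_net` against `mse_bs` mirrors the paper's benchmark of the learned hedge against the standard Black-Scholes-delta practice.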
  8. By: Yuga Iguchi (MUFG Bank, Tokyo, Japan & UCL London, UK); Riu Naito (Japan Post Insurance & Hitotsubashi University, Tokyo, Japan); Yusuke Okano (SMBC Nikko Securities, Tokyo, Japan); Akihiko Takahashi (University of Tokyo, Tokyo, Japan); Toshihiro Yamada (Hitotsubashi University & JST, Tokyo, Japan)
    Abstract: The paper proposes a new computational scheme for diffusion semigroups based on an asymptotic expansion with weak approximation and a deep learning algorithm to solve high-dimensional Kolmogorov partial differential equations (PDEs). In particular, we give a spatial approximation for the solution of d-dimensional PDEs on a range [a, b]^d without suffering from the curse of dimensionality.
    Date: 2021–11
  9. By: Yuga Iguchi (MUFG Bank); Riu Naito (Japan Post Insurance and Hitotsubashi University); Yusuke Okano (SMBC Nikko Securities); Akihiko Takahashi (Faculty of Economics, The University of Tokyo); Toshihiro Yamada (Graduate School of Economics, Hitotsubashi University and Japan Science and Technology Agency (JST))
    Abstract: The paper proposes a new computational scheme for diffusion semigroups based on an asymptotic expansion with weak approximation and a deep learning algorithm to solve high-dimensional Kolmogorov partial differential equations (PDEs). In particular, we give a spatial approximation for the solution of d-dimensional PDEs on a range [a, b]^d without suffering from the curse of dimensionality.
    Date: 2021–11
  10. By: Damian Kisiel; Denise Gorse
    Abstract: This work proposes a novel portfolio management technique, the Meta Portfolio Method (MPM), inspired by the successes of meta approaches in the field of bioinformatics and elsewhere. The MPM uses XGBoost to learn how to switch between two risk-based portfolio allocation strategies, the Hierarchical Risk Parity (HRP) and more classical Naïve Risk Parity (NRP). It is demonstrated that the MPM is able to successfully take advantage of the best characteristics of each strategy (the NRP's fast growth during market uptrends, and the HRP's protection against drawdowns during market turmoil). As a result, the MPM is shown to possess an excellent out-of-sample risk-reward profile, as measured by the Sharpe ratio, and in addition offers a high degree of interpretability of its asset allocation decisions.
    Date: 2021–11
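The "meta" idea in item 10 — a boosted classifier decides each period which allocation rule to follow — can be sketched as below. This is a toy stand-in: a scikit-learn gradient boosting classifier substitutes for XGBoost, an equal-weight rule substitutes for the (more involved) HRP, and the features and label construction are assumptions.

```python
# Toy sketch of a meta-method: a classifier picks, per period, which of two
# allocation rules to follow, based on trailing-window features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
T, k = 300, 4
returns = rng.normal(0.0005, 0.01, size=(T, k))  # synthetic asset returns

def nrp_weights(window):
    """Naive risk parity: weights inversely proportional to volatility."""
    inv_vol = 1.0 / window.std(axis=0)
    return inv_vol / inv_vol.sum()

def equal_weights(window):
    return np.full(window.shape[1], 1.0 / window.shape[1])

# Features: trailing volatility and mean return; label: which rule would
# have done better in the next period (a toy proxy for the paper's setup).
X, y = [], []
for t in range(20, T - 1):
    win = returns[t - 20:t]
    X.append([win.std(), win.mean()])
    r_next = returns[t]
    y.append(int(nrp_weights(win) @ r_next > equal_weights(win) @ r_next))

meta = GradientBoostingClassifier(random_state=0).fit(X, y)
choice = meta.predict([[returns[-20:].std(), returns[-20:].mean()]])[0]
```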
  11. By: Drechsler, Martin; Grimm, Volker
    Abstract: Making conservation payment schemes permanent, so that conservation efforts are retained even after the payment has been stopped, is a major challenge. Another challenge is to design conservation schemes so that they counteract the ongoing spatial fragmentation of species habitat. The agglomeration bonus, in which a bonus is added to a flat payment if the conservation activity is carried out in the neighbourhood of other conserved land, has been shown to induce the establishment of spatially contiguous habitat. In the present paper we show, with a generic spatially explicit agent-based simulation model, that the interactions between the landowners in an agglomeration bonus scheme can lead to hysteresis in the land-use dynamics, implying permanence of the scheme. It is shown that this permanence translates into efficiency gains, especially if discount rates are low and the spatial heterogeneity of conservation costs is high.
    Keywords: agent-based model, agglomeration bonus, conservation payment, land use, permanence
    JEL: C63 Q24 Q57 Q58
    Date: 2022–10–25
  12. By: Alessandro Bitetto (University of Pavia); Stefano Filomeni (University of Essex); Michele Modina (University of Molise)
    Abstract: Recent evidence highlights the importance of hybrid credit scoring models in evaluating borrowers’ creditworthiness. However, current hybrid models neglect the role of public-peer market information, in addition to accounting information, in default prediction. This paper fills this gap in the literature by providing novel evidence on the impact of market information in predicting corporate defaults for unlisted firms. We employ a sample of 10,136 Italian micro-, small-, and mid-sized enterprises (MSMEs) that borrowed from 113 cooperative banks over 2012–2014 to examine whether the market pricing of public firms adds information to accounting measures in predicting default of private firms. Specifically, we estimate the probability of default (PD) of MSMEs using the equity prices of size- and industry-matched public firms, and then adopt advanced statistical techniques based on a parametric algorithm (Multivariate Adaptive Regression Splines) and a non-parametric machine learning model (Random Forest). Moreover, using Shapley values, we assess the relevance of market information in predicting corporate credit risk. Firstly, we show the predictive power of Merton’s PD for default prediction for unlisted firms. Secondly, we show the increased predictive power of credit risk models that consider both Merton’s PD and accounting information to assess corporate credit risk. We trust that the results of this paper contribute to the current debate on safeguarding the continuity and resilience of the banking sector. Indeed, banks’ hybrid credit scoring methodologies that also embed market information prove successful in assessing the credit risk of unlisted firms and could be useful for forward-looking financial risk management frameworks.
    Keywords: Default Risk, Distance to Default, Machine Learning, Merton model, SME, PD, SHAP, Autoencoder, Random Forest, XAI
    JEL: C52 C53 D82 D83 G21 G22
    Date: 2021–10
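The Merton probability of default used as a market-based input in item 12 has a compact closed form: the firm defaults when its asset value falls below the face value of debt at the horizon. A minimal sketch with illustrative parameter values (not from the paper's sample):

```python
# Merton-model probability of default: P(asset value < debt at horizon T),
# assuming lognormal asset dynamics. Parameter values are illustrative only.
from math import log, sqrt
from statistics import NormalDist

def merton_pd(V, D, mu, sigma, T=1.0):
    """Distance-to-default and resulting PD under the Merton model.

    V: firm asset value, D: face value of debt, mu: asset drift,
    sigma: asset volatility, T: horizon in years.
    """
    dd = (log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return NormalDist().cdf(-dd)

pd_safe = merton_pd(V=150.0, D=100.0, mu=0.05, sigma=0.2)   # well-capitalised
pd_risky = merton_pd(V=105.0, D=100.0, mu=0.05, sigma=0.4)  # thin equity cushion
```

For unlisted MSMEs, the paper proxies the unobservable market inputs with equity prices of size- and industry-matched public firms before feeding the PD into the statistical models.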
  13. By: John M. Abowd; Joelle Abramowitz; Margaret C. Levenstein; Kristin McCue; Dhiren Patki; Trivellore Raghunathan; Ann M. Rodgers; Matthew D. Shapiro; Nada Wasi; Dawn Zinsser
    Abstract: This paper considers the problem of record linkage between a household-level survey and an establishment-level frame in the absence of unique identifiers. Linkage between frames in this setting is challenging because the distribution of employment across establishments is highly skewed. To address these difficulties, this paper develops a probabilistic record linkage methodology that combines machine learning (ML) with multiple imputation (MI). This ML-MI methodology is applied to link survey respondents in the Health and Retirement Study to their workplaces in the Census Business Register. The linked data reveal new evidence that non-sampling errors in household survey data are correlated with respondents’ workplace characteristics.
    Date: 2021–11
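The multiple-imputation side of the ML-MI methodology in item 13 can be sketched as follows: instead of committing to the single best-scoring link, draw several imputed links in proportion to model-estimated match probabilities, so that linkage uncertainty propagates into later analysis. The employer names and probabilities below are fabricated, and the classifier that would produce the scores is assumed given.

```python
# Toy sketch: draw m imputed links for one survey record from candidate
# employers, weighted by (assumed) model-estimated match probabilities.
import random

candidates = {"Acme Corp": 0.60, "Acme Inc": 0.30, "Ajax LLC": 0.10}

def impute_links(candidates, m=5, seed=0):
    """Return m imputed links, one candidate drawn per implicate."""
    rng = random.Random(seed)
    names = list(candidates)
    weights = [candidates[n] for n in names]
    return [rng.choices(names, weights=weights, k=1)[0] for _ in range(m)]

implicates = impute_links(candidates, m=5)
```

Analyses are then run once per implicate and combined, rather than on a single hard-matched file.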
  14. By: Gillmann, Niels; Kim, Alisa
    JEL: E00
    Date: 2021
  15. By: Kajal Lahiri; Cheng Yang
    Abstract: We forecast New York state tax revenues with a mixed-frequency model using a number of machine learning techniques. We found boosting with two dynamic factors extracted from a select list of New York and U.S. leading indicators did best in terms of correctly updating revenues for the fiscal year in direct multi-step out-of-sample forecasts. These forecasts were found to be informationally efficient over 18 monthly horizons. In addition to boosting with factors, we also studied the advisability of restricting boosting to select the most recent macro variables to capture abrupt structural changes. Since the COVID-19 pandemic upended all government budgets, our boosted forecasts were used to monitor revenues in real time for the fiscal year 2021. Our estimates showed a drastic year-over-year decline in real revenues by over 16% in May 2020, followed by several upward nowcast revisions that led to a recovery to -1% in March 2021, which was close to the actual annual value of -1.6%.
    Keywords: revenue forecasting, machine learning, real time forecasting, mixed frequency, fiscal policy
    JEL: C22 C32 C50 C53 E62
    Date: 2021
  16. By: Skarda, Ieva; Asaria, Miqdad; Cookson, Richard
    Abstract: We present a dynamic microsimulation model for childhood policy analysis that models developmental, economic, social and health outcomes from birth to death for each child in the Millennium Birth Cohort (MCS) in England, together with public costs and a summary wellbeing measure. The model is a discrete event simulation in discrete time (annual periods), implemented in R, which progresses 100,000 individuals through each year of their lives from birth in the year 2000 to death. From age 0 to 18 the model draws observational data from the MCS, with explicit modelling of only a few derived outcomes (mental health, conduct disorder, mortality, health-related quality of life, public costs and a general wellbeing metric). During adulthood, all outcomes are modelled dynamically using explicit networks of stochastic process equations, with separate networks for working age and retirement. Our equations are parameterised using effect estimates from existing studies combined with target outcome levels from up-to-date administrative and survey data. We present our baseline projections and a simple validation check against external data from the British Cohort Study 1970 and Understanding Society survey.
    Keywords: childhood; conduct problems; inequality; lifecourse; policy evaluation; simulation; skills; well-being; SRF-2013-06-015; 205427/Z/16/Z
    JEL: N0
    Date: 2021–10–25
  17. By: Christian Bayer; Chiheb Ben Hammouda; Raúl Tempone
    Abstract: When approximating the expectation of a functional of a stochastic process, the efficiency and performance of deterministic quadrature methods, such as sparse grid quadrature and quasi-Monte Carlo (QMC) methods, may critically depend on the regularity of the integrand. To overcome this issue and reveal the available regularity, we consider cases in which analytic smoothing cannot be performed, and introduce a novel numerical smoothing approach by combining a root finding algorithm with one-dimensional integration with respect to a single well-selected variable. We prove that under appropriate conditions, the resulting function of the remaining variables is a highly smooth function, potentially affording the improved efficiency of adaptive sparse grid quadrature (ASGQ) and QMC methods, particularly when combined with hierarchical transformations (i.e., Brownian bridge and Richardson extrapolation on the weak error). This approach facilitates the effective treatment of high dimensionality. Our study is motivated by option pricing problems, and our focus is on dynamics where the discretization of the asset price is necessary. Based on our analysis and numerical experiments, we show the advantages of combining numerical smoothing with the ASGQ and QMC methods over ASGQ and QMC methods without smoothing and the Monte Carlo approach.
    Date: 2021–11
  18. By: Simerjot Kaur; Ivan Brugere; Andrea Stefanucci; Armineh Nourbakhsh; Sameena Shah; Manuela Veloso
    Abstract: Matching companies and investors is usually considered a highly specialized decision-making process. Building an AI agent that can automate such a recommendation process can significantly help reduce costs and eliminate human biases and errors. However, the limited sample size of financial datasets, and the need not only for good recommendations but also for explanations of why a particular recommendation is being made, make this a challenging problem. In this work we propose a representation-learning-based recommendation engine that works extremely well with small datasets and demonstrate how it can be coupled with a parameterized explanation generation engine to build an explainable recommendation system for investor-company matching. We compare the performance of our system with human-generated recommendations and demonstrate the ability of our algorithm to perform extremely well on this task. We also highlight how explainability helps with real-life adoption of our system.
    Date: 2021–10
  19. By: Yong Cai; Santiago Camara; Nicholas Capel
    Abstract: This paper introduces a transparent framework to identify the informational content of FOMC announcements. We do so by modelling the expectations of the FOMC and private-sector agents using state-of-the-art computational linguistics tools on both FOMC statements and New York Times articles. We identify the informational content of FOMC announcements as the projection of high-frequency movements in financial assets onto differences in expectations. Our recovered series is intuitively reasonable and shows that information disclosure has a significant impact on the yields of short-term government bonds.
    Date: 2021–11
  20. By: RIGHI Riccardo (European Commission - JRC); LOPEZ COBO Montserrat (European Commission - JRC); SAMOILI Sofia (European Commission - JRC); CARDONA Melisande (European Commission - JRC); VAZQUEZ-PRADA BAILLET Miguel (European Commission - JRC); DE PRATO Giuditta (European Commission - JRC)
    Abstract: The brief presents the results of an analysis of the worldwide AI ecosystem for the period 2009-2020, obtained by applying the Techno-Economic ecoSystem (TES) analytical approach. The TES approach makes it possible to map the worldwide AI ecosystem by considering the main AI-related industrial, innovation and research activities, and all the economic players involved in them (i.e. firms, research institutes, governmental institutions). The brief analyses the position of the EU in the international context, vis-à-vis the United States, China, and the other main players in the landscape, in terms of the size of the AI ecosystem, specialisation in AI areas, AI firms and AI R&D capacities. It follows with an in-depth analysis of the EU ecosystem, with a section devoted to the impact of EC-funded projects on the EU AI ecosystem.
    Keywords: artificial intelligence, ecosystem, AI firms, AI R&D
    Date: 2021–11
  21. By: Sofia Samoili (European Commission - JRC); Montserrat Lopez Cobo (European Commission - JRC); Blagoj Delipetrev (European Commission - JRC); Fernando Martinez-Plumed (Universitat Politecnica de Valencia); Emilia Gomez (European Commission - JRC); Giuditta De Prato (European Commission - JRC)
    Abstract: We present here the second edition of our research aimed at establishing an operational definition of artificial intelligence (AI), which we refer to in the activities of AI Watch. This edition builds on the first report, published in February 2020, and complements it with several recent developments. Since then, the European Commission has proposed a regulatory framework on artificial intelligence (AI Act) that establishes a legal definition of AI, which we incorporate in the current review. In addition to this legal definition, an operational definition is still needed to better delineate the boundaries and analysis of the AI Watch AI landscape. The proposed AI Watch operational definition consists of an iterative method providing a concise taxonomy and list of keywords that characterise the core domains of the AI research field, complemented by transversal topics such as AI applications or ethical and philosophical considerations, in line with the wider monitoring objective of AI Watch. The AI taxonomy is designed to inform the AI Watch landscape analysis and is also expected to cover applications of AI in closely related technological domains, such as robotics (in a broader sense), neuroscience or the internet of things. The literature considered for the qualitative analysis of existing definitions and taxonomies has been enlarged to include recently published reports from the three complementary perspectives considered in this work: policy, research and industry. The collection of definitions published between 1955 and 2021, and the summary of the main features of the concept of AI appearing in the relevant literature, are therefore another valuable output of this work. Finally, alternative approaches to studying AI are also briefly presented in this new edition of the report. These include the classification of AI according to: families of algorithms and the theoretical models behind them; cognitive abilities reproduced by AI; and functions performed by AI. Applications of AI may also be grouped along other dimensions, such as the economic sector in which they are found, or their business functions. These approaches, complementary to the taxonomy used for the analysis of the AI Watch international landscape, are useful for gaining a wider understanding of the AI domain, and are suitable for use in studies related to these dimensions.
    Keywords: artificial intelligence, ai watch, ai definition, ai taxonomy, ai keywords
    Date: 2021–10

This nep-cmp issue is ©2021 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the project website. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.