nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒04‒25
fifteen papers chosen by



  1. Original Data Vs High Performance Augmented Data for ANN Prediction of Glycemic Status in Diabetes Patients By Massaro, Alessandro; Magaletti, Nicola; Giardinelli, Vito O. M.; Cosoli, Gabriele; Leogrande, Angelo; Cannone, Francesco
  2. Nowcasting GDP - A Scalable Approach Using DFM, Machine Learning and Novel Data, Applied to European Economies By Mr. Jean-Francois Dauphin; Marzie Taheri Sanjani; Mrs. Nujin Suphaphiphat; Mr. Kamil Dybczak; Hanqi Zhang; Morgan Maneely; Yifei Wang
  3. e-Government in Europe. A Machine Learning Approach By Leogrande, Angelo; Magaletti, Nicola; Cosoli, Gabriele; Massaro, Alessandro
  4. ICT Specialists in Europe By Leogrande, Angelo; Magaletti, Nicola; Cosoli, Gabriele; Giardinelli, Vito; Massaro, Alessandro
  5. The Hidden Cost of Smoking: Rent Premia in the Housing Market By Cigdem Gedikli; Robert Hill; Oleksandr Talavera; Okan Yilmaz
  6. Machine learning in international trade research - evaluating the impact of trade agreements By Breinlich, Holger; Corradi, Valentina; Rocha, Nadia; Ruta, Michele; Silva, J.M.C. Santos; Zylkin, Tom
  7. Deep Reinforcement Learning and Convex Mean-Variance Optimisation for Portfolio Management By Ruan Pretorius; Terence van Zyl
  8. Computers, Programming and Dynamic General Equilibrium Macroeconomic Modeling By Bongers, Anelí; Molinari, Benedetto; Torres, José L.
  9. The Ensemble Approach to Forecasting: A Review and Synthesis By Hao Wu; David Levinson
  10. Hidden hazards and Screening Policy: Predicting Undetected Lead Exposure in Illinois Using Machine Learning By Abbasi, A; Gazze, L; Pals, B
  11. Risk Caused by the Propagation of Earthquake Losses through the Economy Using Spatial CGE Models By Jose Antonio Leon; Mario Ordaz; Eduardo A. Haddad; Inacio F. Araujo
  12. Evaluating the impact of automation in long-haul trucking using USAGE-Hwy By Catherine Taylor; Robert Waschik
  13. The application of techniques derived from artificial intelligence to the prediction of the solvency of bank customers: case of the application of the CART-type decision tree (DT) By Karim Amzile; Rajaa Amzile
  14. Anomaly Detection applied to Money Laundering Detection using Ensemble Learning By Otero Gomez, Daniel; Agudelo, Santiago Cartagena; Patiño, Andres Ospina; Lopez-Rojas, Edgar
  15. Implementing and managing Algorithmic Decision-Making in the public sector By Rocco, Salvatore

  1. By: Massaro, Alessandro; Magaletti, Nicola; Giardinelli, Vito O. M.; Cosoli, Gabriele; Leogrande, Angelo; Cannone, Francesco
    Abstract: In the following article, a comparative analysis between Original Data (OD) and Augmented Data (AD) is carried out for the prediction of glycemic status in patients with diabetes. Specifically, the OD, consisting of the time series of a patient's glycemic status, are compared with AD. The AD are obtained by randomisation around the average value using five different ranges, and are then processed by a Machine Learning (ML) algorithm for prediction. The adopted ML algorithm is the Artificial Neural Network (ANN) Multilayer Perceptron (MLP). To optimise the prediction, two different data-partitioning scenarios for selecting training datasets are analysed. The results show that processing AD, generated by randomising the data in different ranges around the average value, yields better performance than processing the OD, in terms of minimising the statistical errors of the self-learning models. The largest error decrease achieved is 75.4% compared with ANN-MLP processing of the original dataset. The paper also includes a discussion of the economic and managerial impact of AD in the healthcare sector.
    Keywords: ANN-Artificial Neural Network, Augmented Data Generation, Telemedicine, EHealthcare, Model Optimization.
    JEL: O30 O31 O32 O33 O34
    Date: 2022–04–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:112638&r=
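    The augmentation step described above (randomising observations around the series average within a given range, then training an ANN-MLP) can be illustrated with a small sketch. This is not the authors' implementation: the toy glucose series, the range width, the lag structure and the network size are all assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      glucose = 100 + 15 * np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 5, 200)  # toy series

      def augment(series, spread, copies=5):
          # synthetic copies: randomise each deviation from the series average
          mean = series.mean()
          return [mean + (series - mean) * (1 + rng.uniform(-spread, spread, len(series)))
                  for _ in range(copies)]

      def to_supervised(series, lags=5):
          X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
          return X, series[lags:]

      X, y = to_supervised(glucose)
      split = int(0.8 * len(X))
      for label, train_set in [("original", [glucose]), ("augmented", augment(glucose, 0.1))]:
          Xa = np.vstack([to_supervised(s)[0][:split] for s in train_set])
          ya = np.hstack([to_supervised(s)[1][:split] for s in train_set])
          model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xa, ya)
          mae = np.abs(model.predict(X[split:]) - y[split:]).mean()
          print(label, "hold-out MAE:", round(mae, 2))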
  2. By: Mr. Jean-Francois Dauphin; Marzie Taheri Sanjani; Mrs. Nujin Suphaphiphat; Mr. Kamil Dybczak; Hanqi Zhang; Morgan Maneely; Yifei Wang
    Abstract: This paper describes recent work to strengthen nowcasting capacity at the IMF’s European department. It motivates and compiles datasets of standard and nontraditional variables, such as Google search and air quality. It applies standard dynamic factor models (DFMs) and several machine learning (ML) algorithms to nowcast GDP growth across a heterogeneous group of European economies during normal and crisis times. Most of our methods significantly outperform the AR(1) benchmark model. Our DFMs tend to perform better during normal times, while many of the ML methods we used performed strongly at identifying turning points. Our approach is easily applicable to other countries, subject to data availability.
    Keywords: Nowcasting, Factor Model, Machine Learning, Large Data Sets
    Date: 2022–03–11
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2022/052&r=
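    A stripped-down version of the paper's horse race, nowcasting a synthetic growth series with an AR(1) benchmark and a simple machine-learning model, might look as follows; the data, the single indicator and the random-forest choice are placeholders, not the IMF dataset or the dynamic factor model used in the paper.

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)
      T = 120
      indicator = rng.normal(size=T)                             # e.g. a Google-search-style proxy
      gdp = 0.5 * np.roll(indicator, 1) + rng.normal(0, 0.5, T)  # synthetic GDP growth
      train = 100

      # AR(1) benchmark, estimated on the training window and forecast out of sample
      ar1 = AutoReg(gdp[:train], lags=1).fit()
      ar1_pred = ar1.predict(start=train, end=T - 1)

      # simple ML nowcast using the lagged indicator as a predictor
      X, y = indicator[:-1].reshape(-1, 1), gdp[1:]
      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:train - 1], y[:train - 1])
      rf_pred = rf.predict(X[train - 1:])

      rmse = lambda pred, actual: np.sqrt(np.mean((pred - actual) ** 2))
      print("AR(1) RMSE:", round(rmse(ar1_pred, gdp[train:]), 3))
      print("RF RMSE:   ", round(rmse(rf_pred, gdp[train:]), 3))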
  3. By: Leogrande, Angelo; Magaletti, Nicola; Cosoli, Gabriele; Massaro, Alessandro
    Abstract: The following article analyzes the determinants of e-government in 28 European countries between 2016 and 2021. The DESI-Digital Economy and Society Index database was used. The econometric analysis involved the use of the Panel Data with Fixed Effects and Panel Data with Random Effects methods. The results show that the value of “e-Government” is negatively associated with “Fast BB (NGA) coverage”, “Female ICT specialists”, “e-Invoices” and “Big data”, and positively associated with “Open Data”, “e-Government Users”, “ICT for environmental sustainability”, “Artificial intelligence”, “Cloud”, “SMEs with at least a basic level of digital intensity”, “ICT Specialists”, “At least 1 Gbps take-up”, “At least 100 Mbps fixed BB take-up” and “Fixed Very High Capacity Network (VHCN) coverage”. A cluster analysis was then carried out using the unsupervised k-Means algorithm optimized with the Silhouette coefficient, identifying 4 clusters. Finally, a comparison was made between eight different machine learning algorithms using "augmented data". The most efficient algorithm in predicting the value of e-government, both for the historical series and with augmented data, is the ANN-Artificial Neural Network.
    Keywords: Innovation, and Invention: Processes and Incentives; Management of Technological Innovation and R&D; Diffusion Processes; Open Innovation.
    JEL: O30 O31 O32 O33 O34
    Date: 2022–03–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:112242&r=
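    The clustering step described above, k-Means with the number of clusters chosen by the Silhouette coefficient, can be sketched as follows; the matrix below is a random placeholder standing in for the DESI indicators, not the paper's data.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      rng = np.random.default_rng(2)
      X = rng.normal(size=(28, 10))      # 28 countries x 10 hypothetical digital indicators

      scores = {}
      for k in range(2, 8):
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          scores[k] = silhouette_score(X, labels)

      best_k = max(scores, key=scores.get)
      print("silhouette by k:", {k: round(v, 3) for k, v in scores.items()})
      print("chosen number of clusters:", best_k)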
  4. By: Leogrande, Angelo; Magaletti, Nicola; Cosoli, Gabriele; Giardinelli, Vito; Massaro, Alessandro
    Abstract: The following article estimates the value of ICT Specialists in Europe between 2016 and 2021 for 28 European countries. The data were analyzed using the following econometric techniques: Panel Data with Fixed Effects, Panel Data with Random Effects, WLS and Pooled OLS. The results show that the value of ICT Specialists in Europe is positively associated with the following variables: "Desi Index", "SMEs with at least a basic level of digital intensity", "At least 100 Mbps fixed BB take-up", and negatively associated with the following variables: "4G Coverage", "5G Coverage", "5G Readiness", "Fixed broadband coverage", "e-Government", "At least Basic Digital Skills", "Fixed broadband take-up", "Broadband price index", "Integration of Digital Technology". Subsequently, two European clusters were found by value of "ICT Specialists" using the k-Means clustering algorithm optimized by using the Silhouette coefficient. Eight different machine learning algorithms were then compared to predict the value of "ICT Specialists" in Europe. The results show that the best prediction algorithm is the ANN-Artificial Neural Network, with an estimated growth of 12.53%. Finally, "augmented data" were obtained through the use of the ANN-Artificial Neural Network, and a new prediction based on these data estimated growth of the predicted variable equal to 3.18%.
    Keywords: Innovation, and Invention: Processes and Incentives; Management of Technological Innovation and R&D; Diffusion Processes; Open Innovation.
    JEL: O30 O31 O32 O33 O34
    Date: 2022–03–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:112241&r=
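    Three of the panel estimators listed above (fixed effects, random effects, pooled OLS) can be illustrated on a toy country-year panel. The sketch assumes the Python package linearmodels, and the variable names are placeholders rather than the DESI series actually used in the paper.

      import numpy as np
      import pandas as pd
      from linearmodels.panel import PanelOLS, RandomEffects, PooledOLS

      rng = np.random.default_rng(3)
      idx = pd.MultiIndex.from_product([range(28), range(2016, 2022)], names=["country", "year"])
      df = pd.DataFrame({"desi": rng.normal(size=len(idx)),
                         "bb_takeup": rng.normal(size=len(idx))}, index=idx)
      df["ict_specialists"] = 0.4 * df["desi"] + 0.2 * df["bb_takeup"] + rng.normal(0, 1, len(idx))

      exog = df[["desi", "bb_takeup"]]
      fe = PanelOLS(df["ict_specialists"], exog, entity_effects=True).fit()   # fixed effects
      re = RandomEffects(df["ict_specialists"], exog).fit()                   # random effects
      pooled = PooledOLS(df["ict_specialists"], exog).fit()                   # pooled OLS
      print(fe.params, re.params, pooled.params, sep="\n\n")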
  5. By: Cigdem Gedikli (Swansea University); Robert Hill (University of Graz); Oleksandr Talavera (University of Birmingham); Okan Yilmaz (Swansea University)
    Abstract: In this paper, we provide novel evidence on the additional costs associated with smoking. While it may not be surprising that smokers pay a rent premium, we are the first to quantify the size of this premium. Our approach is innovative in that we use text mining methods that extract implicit information on landlords' attitudes to smoking directly from Zoopla UK rental listings. Applying hedonic, matching and machine-learning methods to the text-mined data, we find a positive smoking rent premium of around 6 percent. This translates into 14.40GBP of indirect costs, in addition to 40GBP of weekly spending on cigarettes estimated for an average smoker in the UK.
    Keywords: Smoking; Rental market; Hedonic regression; Matching; Text mining; Random forest; Smoking rent premium; Contracting frictions
    JEL: I30 R21 R31
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:bir:birmec:22-06&r=
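    The pipeline described above, extracting a smoking-related flag from listing text and then estimating a hedonic rent premium, might look roughly like the toy sketch below. The listings, covariates and keyword rule are invented and only stand in for the Zoopla data and the paper's text-mining and matching steps.

      import re
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      listings = pd.DataFrame({
          "text": ["No smokers please", "Smokers welcome", "Recently refurbished flat",
                   "Smoking permitted on balcony", "Strictly non-smoking property"],
          "rent": [900, 960, 880, 950, 890],
          "bedrooms": [2, 3, 1, 2, 2],
      })

      # crude keyword rule standing in for the paper's text-mining step
      allows = re.compile(r"smokers welcome|smoking permitted", re.I)
      forbids = re.compile(r"no smok|non-smoking", re.I)
      listings["smoking_allowed"] = listings["text"].apply(
          lambda t: 1 if allows.search(t) else (0 if forbids.search(t) else np.nan))

      # hedonic regression of log rent on the smoking flag plus controls
      sample = listings.dropna(subset=["smoking_allowed"])
      X = sm.add_constant(sample[["smoking_allowed", "bedrooms"]])
      fit = sm.OLS(np.log(sample["rent"]), X).fit()
      print(fit.params)  # coefficient on smoking_allowed approximates the log rent premium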
  6. By: Breinlich, Holger; Corradi, Valentina; Rocha, Nadia; Ruta, Michele; Silva, J.M.C. Santos; Zylkin, Tom
    Abstract: Modern trade agreements contain a large number of provisions in addition to tariff reductions, in areas as diverse as services trade, competition policy, trade-related investment measures, or public procurement. Existing research has struggled with overfitting and severe multicollinearity problems when trying to estimate the effects of these provisions on trade flows. Building on recent developments in the machine learning and variable selection literature, this paper proposes data-driven methods for selecting the most important provisions and quantifying their impact on trade flows, without the need to make ad hoc assumptions on how to aggregate individual provisions. The analysis finds that provisions related to antidumping, competition policy, technical barriers to trade, and trade facilitation are associated with enhancing the trade-increasing effect of trade agreements.
    Keywords: lasso; machine learning; preferential trade agreements; deep trade agreements
    JEL: F14 F15 F17
    Date: 2021–06–16
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:114379&r=
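    The variable-selection idea, letting a lasso pick the relevant provisions among many collinear dummies, can be illustrated on simulated data as below. The paper's actual estimators embed lasso-type selection in a gravity setting, so this sketch only shows the selection step on made-up provision dummies and trade flows.

      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(4)
      n_pairs, n_provisions = 500, 50
      provisions = rng.integers(0, 2, size=(n_pairs, n_provisions)).astype(float)

      # only a handful of provisions truly matter for (log) trade flows
      true_effect = np.zeros(n_provisions)
      true_effect[[3, 7, 21]] = [0.4, 0.3, 0.2]
      log_trade = provisions @ true_effect + rng.normal(0, 0.5, n_pairs)

      lasso = LassoCV(cv=5, random_state=0).fit(provisions, log_trade)
      print("provisions selected by the lasso:", np.flatnonzero(lasso.coef_ != 0))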
  7. By: Ruan Pretorius; Terence van Zyl
    Abstract: Traditional portfolio management methods can incorporate specific investor preferences but rely on accurate forecasts of asset returns and covariances. Reinforcement learning (RL) methods do not rely on these explicit forecasts and are better suited for multi-stage decision processes. To address limitations of the evaluated research, experiments were conducted on three markets in different economies with different overall trends. By incorporating specific investor preferences into our RL models' reward functions, a more comprehensive comparison could be made to traditional methods in risk-return space. Transaction costs were also modelled more realistically by including nonlinear changes introduced by market volatility and trading volume. The results of this study suggest that there can be an advantage to using RL methods compared to traditional convex mean-variance optimisation methods under certain market conditions. Our RL models could significantly outperform traditional single-period optimisation (SPO) and multi-period optimisation (MPO) models in upward trending markets, but only up to specific risk limits. In sideways trending markets, the performance of SPO and MPO models can be closely matched by our RL models for the majority of the excess risk range tested. The specific market conditions under which these models could outperform each other highlight the importance of a more comprehensive comparison of Pareto optimal frontiers in risk-return space. These frontiers give investors a more granular view of which models might provide better performance for their specific risk tolerance or return targets.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.11318&r=
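    The traditional single-period mean-variance optimisation (SPO) baseline against which the RL agents are compared can be written compactly. The sketch assumes the cvxpy package, and the expected returns, covariance matrix and risk cap below are made up rather than taken from the paper.

      import numpy as np
      import cvxpy as cp

      mu = np.array([0.08, 0.05, 0.03])                 # expected asset returns (assumed)
      sigma = np.array([[0.10, 0.02, 0.01],
                        [0.02, 0.06, 0.01],
                        [0.01, 0.01, 0.03]])            # covariance matrix (assumed)
      risk_cap = 0.05                                   # maximum accepted portfolio variance

      w = cp.Variable(3)                                # portfolio weights
      constraints = [cp.sum(w) == 1, w >= 0, cp.quad_form(w, sigma) <= risk_cap]
      cp.Problem(cp.Maximize(mu @ w), constraints).solve()
      print("optimal weights:", np.round(w.value, 3))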
  8. By: Bongers, Anelí; Molinari, Benedetto; Torres, José L.
    Abstract: Dynamic stochastic general equilibrium (DSGE) models nowadays undertake the bulk of macroeconomic analysis. Their widespread use during the last 40 years reflects their usefulness as a scientific laboratory in which to study the aggregate economy and its responses to different shocks, to carry out counterfactual experiments and to perform policy evaluation. A key characteristic of DSGE models is that their computation is numerical and requires intensive computational power and the handling of numerical methods. In fact, the main advances in macroeconomic modeling since the 1980s have been possible only because of the increasing computational power of computers. This power has supported the expansion of DSGE models into ever more accurate reproductions of the actual economy, making them the prevailing modeling strategy and the dominant paradigm in contemporary macroeconomics. Along with DSGE models, specific computer languages have been developed to facilitate simulations, estimations and comparisons of the aggregate economies represented by DSGE models. Knowledge of these languages, together with expertise in programming and computers, has become an essential part of the profession for macroeconomists at both the academic and the professional level.
    Keywords: Dynamic stochastic general equilibrium models; Computers; Programming languages; Codes; Computational economics; Dynare.
    JEL: C61 C63 C88 E37
    Date: 2022–03–22
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:112505&r=
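    As a small illustration of the numerical flavour of this kind of modeling, the sketch below solves a deterministic growth model by value function iteration. It is a textbook toy with arbitrary parameter values, not a full DSGE model and not the authors' code.

      import numpy as np

      alpha, beta, delta = 0.36, 0.96, 0.08             # technology, discounting, depreciation
      k_grid = np.linspace(0.5, 10.0, 200)              # capital grid
      V = np.zeros(len(k_grid))

      for _ in range(1000):                             # Bellman iteration until convergence
          c = k_grid[:, None] ** alpha + (1 - delta) * k_grid[:, None] - k_grid[None, :]
          u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
          V_new = (u + beta * V[None, :]).max(axis=1)
          if np.max(np.abs(V_new - V)) < 1e-8:
              V = V_new
              break
          V = V_new

      policy = (u + beta * V[None, :]).argmax(axis=1)   # index of optimal next-period capital
      steady_state = k_grid[np.argmin(np.abs(k_grid[policy] - k_grid))]
      print("approximate steady-state capital:", round(steady_state, 2))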
  9. By: Hao Wu; David Levinson (TransportLab, School of Civil Engineering, University of Sydney)
    Abstract: Ensemble forecasting is a modeling approach that combines data sources and models of different types, with alternative assumptions, using distinct pattern recognition methods. The aim is to use all available information in predictions, without the limiting and arbitrary choices and dependencies resulting from a single statistical or machine learning approach, a single functional form, or a limited data source. Uncertainties are systematically accounted for. Outputs of ensemble models can be presented as a range of possibilities, to indicate the amount of uncertainty in modeling. We review methods and applications of ensemble models both within and outside of transport research. The review finds that ensemble forecasting generally improves forecast accuracy and robustness in many fields, particularly in weather forecasting, where the method originated. We note that ensemble methods are highly siloed across disciplines, and that both the knowledge and the application of ensemble forecasting are lacking in transport. In this paper we review and synthesize methods of ensemble forecasting within a unifying framework, categorizing ensemble methods into two broad and not mutually exclusive categories, namely combining models and combining data; this framework further extends to ensembles of ensembles. We apply ensemble forecasting to transport-related cases, which shows the potential of ensemble models in improving forecast accuracy and reliability. This paper sheds light on the apparatus of ensemble forecasting, which we hope contributes to the better understanding and wider adoption of ensemble models.
    Keywords: Ensemble forecasting, Combining models, Data fusion, Ensembles of ensembles
    JEL: R41 C93
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:nex:wpaper:ensembleapproachforecasting&r=
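    A minimal example of the "combining models" category, equal-weight averaging of two different forecasters on toy data, is sketched below; real ensemble schemes may instead weight members by past performance or combine data sources as well.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(5)
      X = rng.uniform(0, 10, size=(300, 3))
      y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.5, 300)
      X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]

      members = [LinearRegression().fit(X_tr, y_tr),
                 GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)]
      preds = np.column_stack([m.predict(X_te) for m in members])
      ensemble = preds.mean(axis=1)                     # equal-weight combination

      rmse = lambda p: np.sqrt(np.mean((p - y_te) ** 2))
      for name, p in zip(["linear", "boosting", "ensemble"], [preds[:, 0], preds[:, 1], ensemble]):
          print(name, "RMSE:", round(rmse(p), 3))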
  10. By: Abbasi, A (University of California San Francisco); Gazze, L (University of Warwick); Pals, B (New York University)
    Abstract: Lead exposure remains a significant threat to children’s health despite decades of policies aimed at getting the lead out of homes and neighborhoods. Generally, lead hazards are identified through inspections triggered by high blood lead levels (BLLs) in children. Yet, it is unclear how best to screen children for lead exposure to balance the costs of screening and the potential benefits of early detection, treatment, and lead hazard removal. While some states require universal screening, others employ a targeted approach, but no regime achieves 100% compliance. We estimate the extent and geographic distribution of undetected lead poisoning in Illinois. We then compare the estimated detection rate of a universal screening program to the current targeted screening policy under different compliance levels. To do so, we link 2010-2016 Illinois lead test records to 2010-2014 birth records, demographics, and housing data. We train a random forest classifier that predicts the likelihood a child has a BLL above 5µg/dL. We estimate that 10,613 untested children had a BLL≥5µg/dL in addition to the 18,115 detected cases. Due to the unequal spatial distribution of lead hazards, 60% of these undetected cases should have been screened under the current policy, suggesting limited benefits from universal screening.
    Keywords: Lead Poisoning, Environmental Health, Screening
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:cge:wacage:612&r=
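    A classifier of the kind described, predicting which children are likely to have a blood lead level of 5µg/dL or more, can be sketched on synthetic features; the variables below are invented and do not correspond to the linked Illinois birth, demographic and housing records.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(6)
      n = 5000
      housing_year = rng.integers(1900, 2015, n)        # older housing -> more lead paint risk
      poverty_rate = rng.uniform(0, 1, n)
      risk = 0.03 + 0.25 * (housing_year < 1950) + 0.10 * poverty_rate
      elevated_bll = rng.uniform(size=n) < risk         # synthetic indicator of BLL >= 5 ug/dL

      X = np.column_stack([housing_year, poverty_rate])
      X_tr, X_te, y_tr, y_te = train_test_split(X, elevated_bll, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))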
  11. By: Jose Antonio Leon; Mario Ordaz; Eduardo A. Haddad; Inacio F. Araujo
    Abstract: A country's economy is exposed to shocks induced by natural and man-made disasters. This paper presents an effort to estimate, in a systematic and probabilistic way, the national and regional economic consequences of the occurrence of earthquakes. Beyond production losses, our model computes standard risk metrics for multiple components of the economy, such as employment, GDP, GRP, inflation, export volume, etc. The proposed approach is illustrated with an example developed for Chile, whose results are the first of their kind. The results reveal that the average annual loss (AAL) of gross output, GDP and export volume in Chile is 277, 305 and 62 million dollars respectively, while the AAL of employment is 7,786 workers. The Santiago Metropolitan Region concentrates ~43% of the total production AAL, while the Valparaíso Region is the riskiest, with a regional production AAL of 0.21%. We also present loss exceedance curves for different components of the Chilean economy at both the national and the regional level.
    Keywords: Probabilistic risk assessment; Natural disasters; Earthquakes; Spatial CGE model
    JEL: C55 C68 R12
    Date: 2022–03–24
    URL: http://d.repec.org/n?u=RePEc:spa:wpaper:2022wpecon11&r=
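    The average annual loss (AAL) and loss exceedance metrics reported above come from event-based probabilistic risk models; a stripped-down version of that calculation, with a made-up event set rather than the Chilean model, is shown below.

      import numpy as np

      event_loss = np.array([5.0, 50.0, 400.0, 2000.0])    # loss per event (USD millions)
      annual_rate = np.array([0.5, 0.05, 0.005, 0.0005])   # mean number of such events per year

      aal = np.sum(event_loss * annual_rate)               # average annual loss
      print("AAL (USD millions per year):", aal)
      for threshold in event_loss:                         # loss exceedance rates
          rate = annual_rate[event_loss >= threshold].sum()
          print(f"annual rate of losses >= {threshold:7.1f}: {rate}")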
  12. By: Catherine Taylor; Robert Waschik
    Abstract: We evaluate the macroeconomic effects of the introduction of automation in the long-haul trucking sectors in the United States, along with the output and employment impacts in the long-haul trucking sector itself, using the purpose-built computable general equilibrium (CGE) USAGE-Hwy model. We simulate the automation of long-haul trucking in the US by assuming that the fleet of long-haul trucks is converted for automation technology over the period 2021-2050 following a 'fast', 'medium' or 'slow' adoption path. After accounting for the cost of converting the fleet for automation, the efficiency and safety improvements contribute to an increase in real GDP and welfare in the US in 2050 of between 0.35-0.40 per cent. Despite the fact that automation technology obviates the need for most long-haul truck drivers, hiring of long-haul truck drivers remains positive throughout the simulation period in all scenarios, except for a five-year period under the 'fast' adoption of automation. Over this five-year period, at most 10,000 long-haul truck drivers per year are laid off. Given an annual occupational turnover rate for truck drivers of 10.5 per cent, the annual turnover of short-haul truck drivers in 2018 was almost 138,000, implying that the issue of layoffs of long-haul truck drivers should not be a significant concern when considering the adoption of automation in long-haul trucking.
    Keywords: autonomous vehicles, driverless trucks, computable general equilibrium
    JEL: O18 O33 C68
    Date: 2022–04
    URL: http://d.repec.org/n?u=RePEc:cop:wpaper:g-326&r=
  13. By: Karim Amzile; Rajaa Amzile
    Abstract: In this study we applied the CART-type decision tree (DT-CART) method, a technique from artificial intelligence, to the prediction of the solvency of bank customers, using historical customer data. We followed a standard data mining process. We first preprocessed the data, removing rows with outliers, missing values or empty columns. We then fixed the variable to be explained (the dependent variable, or target) and eliminated the explanatory (independent) variables that were not significant, using univariate analysis and the correlation matrix. Finally, we applied the CART decision tree method using the SPSS tool. After building the model (DT-CART), we evaluated and tested its performance: the accuracy and precision of the model are 71%, corresponding to an error rate of 29%. This allows us to conclude that the model performs at a fairly good level in terms of precision and predictive power, in particular for predicting the solvency of our banking customers.
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2203.13001&r=
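    A CART-style decision tree of the kind described can be sketched with scikit-learn on synthetic customer features (the paper itself uses SPSS on real bank data); the variables and the depth limit below are assumptions made for illustration.

      import numpy as np
      import pandas as pd
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(7)
      n = 1000
      df = pd.DataFrame({"income": rng.normal(3000, 800, n),
                         "debt_ratio": rng.uniform(0, 1, n),
                         "late_payments": rng.poisson(1, n)})
      df["solvent"] = ((df["income"] > 2500) & (df["debt_ratio"] < 0.6)
                       & (df["late_payments"] < 3)).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="solvent"), df["solvent"],
                                                test_size=0.3, random_state=0)
      tree = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)  # CART splits on Gini
      tree.fit(X_tr, y_tr)
      print("hold-out accuracy:", round(accuracy_score(y_te, tree.predict(X_te)), 2))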
  14. By: Otero Gomez, Daniel; Agudelo, Santiago Cartagena; Patiño, Andres Ospina; Lopez-Rojas, Edgar
    Abstract: Financial crime and, specifically, the illegal business of money laundering are increasing dramatically with the expansion of modern technology and global communication, resulting in the loss of billions of dollars worldwide each year. Money laundering, the process that transforms the proceeds of crime into clean, legitimate assets, is a common phenomenon around the world. Irregularly obtained money is generally cleaned up through transfers involving banks or companies, see Walker (1999). Hence, one of the main problems remains finding an efficient way to identify suspicious actors and transactions; in each operation, attention should be paid to the type, amount, motive, frequency, and consistency with previous activity and the geographic area. This identification must be the result of a process that cannot be based solely on individual judgment but must, at least in part, be automated. Although prevention technologies are the best way to reduce fraud, fraudsters are adaptive and, given time, will usually find ways to overcome such measures, see Perols (2011). We therefore propose to enrich this set of information by building an anomaly detection model for money transfer operations, in order to benefit from the power of artificial intelligence. Anti-money laundering is a complex problem, but we believe artificial intelligence can play a powerful role in this area.
    Date: 2021–12–08
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:f84ht&r=
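    One ensemble-based way to flag anomalous transfers, an Isolation Forest over simple transaction features, is sketched below; the features and contamination rate are invented, and the paper's actual model may differ.

      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(8)
      normal = np.column_stack([rng.normal(200, 50, 980),      # transfer amount
                                rng.poisson(5, 980)])          # transfers per week
      suspicious = np.column_stack([rng.normal(9000, 1000, 20),
                                    rng.poisson(40, 20)])
      X = np.vstack([normal, suspicious])

      detector = IsolationForest(n_estimators=200, contamination=0.02, random_state=0).fit(X)
      flags = detector.predict(X)                              # -1 = anomalous, 1 = normal
      print("flagged transactions:", int((flags == -1).sum()), "out of", len(X))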
  15. By: Rocco, Salvatore
    Abstract: This paper examines the current evolution of Artificial Intelligence (AI) systems for “algorithmic decision-making” (ADM) in the public sector (§1). In particular, it will focus on the challenges brought by such new uses of AI in the field of governance and public administration. From a review of the rising global scholarship on the matter, three strands of research are hereby expanded. First, the technical approach (§2). To close the gaps between law, policy and technology, it is indeed necessary to understand what an AI system is and why and how it can affect decision-making. Second, the legal and “algor-ethical” approach (§3). This is aimed at showing the big picture wherein the governance concerns arise – namely, the wider framework of principles and key practices needed to secure a good use of AI in the public sector against its potential risks and misuses. Third, as the core subject of this analysis, the governance approach stricto sensu (§4). This aims to trace back the renowned issue of the “governance of AI” to essentially four major sets of challenges which ADM poses in the public management chain: (i) defining clear goals and responsibilities; (ii) gaining competency and knowledge; (iii) managing and involving stakeholders; (iv) managing and auditing risks.
    Date: 2022–03–27
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:ex93w&r=

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.