nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒05‒18
fourteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Forecasting Foreign Exchange Rate Movements with k-Nearest-Neighbour, Ridge Regression and Feed-Forward Neural Networks By Milan Fičura
  2. Denise: Deep Learning based Robust PCA for Positive Semidefinite Matrices By Calypso Herrera; Florian Krach; Josef Teichmann
  3. Estimating Full Lipschitz Constants of Deep Neural Networks By Calypso Herrera; Florian Krach; Josef Teichmann
  4. Capability accumulation and product innovation: an agent-based perspective By Claudius Graebner; Anna Hornykewycz
  5. An Economic Approach to Regulating Algorithms By Ashesh Rambachan; Jon Kleinberg; Sendhil Mullainathan; Jens Ludwig
  6. Regional economic resilience in the European Union: a numerical general equilibrium analysis By Filippo Di Pietro; Patrizio Lecca; Simone Salotti
  7. Public policies and the art of catching up: matching the historical evidence with a multi-country agent-based model By Giovanni Dosi; Andrea Roventini; Emanuele Russo
  8. The Impact of the Wuhan Covid-19 Lockdown on Air Pollution and Health: A Machine Learning and Augmented Synthetic Control Approach By Matthew A Cole; Robert J R Elliott; Bowen Liu
  9. Intergenerational transfers within the family and the role for old age survival By Fanny A. Kluge; Tobias C. Vogt
  10. Political referenda and investment: evidence from Scotland By Azqueta-Gavaldon, Andres
  11. Local Environmental Quality and Heterogeneity in an OLG Agent-Based Model with Network Externalities By Andrea Caravaggio; Mauro Sodini
  12. Measuring the Occupational Impact of AI: Tasks, Cognitive Abilities and AI Benchmarks By Songul Tolan; Annarosa Pesole; Fernando Martinez-Plumed; Enrique Fernandez-Macias; José Hernandez-Orallo; Emilia Gomez
  13. Exact and heuristic algorithms for scheduling jobs with time windows on unrelated parallel machines By Tadumadze, Giorgi; Emde, Simon; Diefenbach, Heiko
  14. Drawing policy suggestions to fight Covid-19 from hardly reliable data. A machine-learning contribution on lockdowns analysis. By Bonacini, Luca; Gallo, Giovanni; Patriarca, Fabrizio

  1. By: Milan Fičura
    Abstract: Three classes of data-mining methods (k-Nearest Neighbour, Ridge Regression and Multilayer Perceptron Feed-Forward Neural Networks) are applied for the purpose of quantitative trading on 10 simulated time series, as well as on real-world time series of 10 currency exchange rates ranging from 1.11.1999 to 12.6.2015. Each method is tested in multiple variants. The k-NN algorithm is applied alternatively with the Euclidean, Manhattan, Mahalanobis and Maximum distance functions. Ridge regression is applied in linear and quadratic form, and the feed-forward neural network with either 1, 2 or 3 hidden layers. In addition, Principal Component Analysis (PCA) is optionally applied for dimensionality reduction of the predictor set, and the meta-parameters of the methods are optimised on a validation sample. In the simulation study, a Stochastic-Volatility Jump-Diffusion model, extended alternatively with 10 different non-linear conditional mean patterns, is used to simulate the asset price behaviour to which the tested methods are applied. Either past price movements or past returns are used as predictors. The results show that no single method was able to profit on all of the non-linear patterns in the simulated time series; instead, different methods worked well for different patterns. When past price movements were used as predictors, quadratic ridge regression achieved the most robust results, followed by some of the k-NN methods. When past returns were used, k-NN-based methods were the most consistently profitable, followed by linear and quadratic ridge regression. Neural networks, while profitable on some of the time series, failed to profit on most of the others. No evidence was found that PCA improved the results of the tested methods in a systematic way. 
In the second part of the study, the models were applied to empirical foreign exchange rate time series. Overall, the profitability of the methods was rather low, with most of them ending with a loss on most of the currencies. The most profitable currency was EURUSD, followed by EURJPY, GBPJPY and EURGBP. The most successful methods were linear ridge regression and the Manhattan-distance-based k-NN, both of which ended with profits on most of the time series (unlike the other methods). Finally, a forward selection procedure using linear ridge regression was applied to extend the original predictor set with technical indicators. The procedure achieved limited success in improving the out-of-sample results for the linear ridge regression model, but not for the other models.
    Keywords: Ridge regression, k-Nearest Neighbour, Artificial Neural Networks, Principal Component Analysis, Exchange rate forecasting, Investment strategy, Market efficiency
    JEL: C45 C63 G11 G14 G17
    Date: 2019–11–13
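To make the method variants above concrete, here is a minimal sketch of k-NN forecasting with selectable distance functions (Euclidean, Manhattan and Maximum, as named in the abstract); the toy data, window length and k are invented for illustration and are not the paper's settings:

```python
import numpy as np

def knn_forecast(X_train, y_train, x_new, k=5, metric="euclidean"):
    """Predict the next return as the mean target of the k training
    lag-vectors closest to x_new under the chosen distance function."""
    diff = X_train - x_new
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(axis=1))
    elif metric == "manhattan":
        d = np.abs(diff).sum(axis=1)
    elif metric == "maximum":              # Chebyshev distance
        d = np.abs(diff).max(axis=1)
    else:
        raise ValueError(f"unknown metric: {metric}")
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Toy example: predict the next return from the previous 3 returns.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 500)                       # simulated returns
X = np.column_stack([r[i:i - 3] for i in range(3)])  # lag vectors
y = r[3:]                                            # next-step targets
pred = knn_forecast(X[:-1], y[:-1], X[-1], k=5, metric="manhattan")
```

The Mahalanobis variant would additionally require the inverse covariance matrix of the predictors; it is omitted here for brevity.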
  2. By: Calypso Herrera (Department of Mathematics, ETH Zürich, Switzerland); Florian Krach (Department of Mathematics, ETH Zürich, Switzerland); Josef Teichmann (Department of Mathematics, ETH Zürich, Switzerland)
    Abstract: We introduce Denise, a deep learning based algorithm for decomposing positive semidefinite matrices into the sum of a low rank plus a sparse matrix. The deep neural network is trained on a randomly generated dataset using the Cholesky factorization. This method, benchmarked on synthetic datasets as well as on some S&P500 stock returns covariance matrices, achieves comparable results to several state-of-the-art techniques, while outperforming all existing algorithms in terms of computational time. Finally, theoretical results concerning the convergence of the training are derived.
    Date: 2020–04
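The low-rank-plus-sparse decomposition that Denise learns can be illustrated with a classical, non-learned baseline (this is not Denise's algorithm, only the target decomposition): keep the top eigenpairs of the PSD matrix as the low-rank part and soft-threshold the residual into the sparse part. The rank and threshold below are arbitrary illustrative choices:

```python
import numpy as np

def lowrank_plus_sparse(M, rank=2, tau=0.05):
    """Decompose a PSD matrix M into L + S: L is built from the
    top-`rank` eigenpairs (so L = U U^T is PSD and low-rank), and the
    residual is soft-thresholded entrywise to give the sparse part S."""
    w, V = np.linalg.eigh(M)                   # ascending eigenvalues
    top = np.argsort(w)[::-1][:rank]           # indices of the largest
    U = V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))
    L = U @ U.T
    R = M - L
    S = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)  # soft threshold
    return L, S

# PSD test matrix: rank-2 structure plus sparse diagonal noise.
rng = np.random.default_rng(1)
B = rng.normal(size=(6, 2))
M = B @ B.T + np.diag(rng.uniform(0.1, 0.3, 6))
L, S = lowrank_plus_sparse(M, rank=2, tau=0.01)
```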
  3. By: Calypso Herrera (Department of Mathematics, ETH Zürich, Switzerland); Florian Krach (Department of Mathematics, ETH Zürich, Switzerland); Josef Teichmann (Department of Mathematics, ETH Zürich, Switzerland)
    Abstract: We estimate the Lipschitz constants of the gradient of a deep neural network and the network itself with respect to the full set of parameters. We first develop estimates for a deep feed-forward densely connected network and then, in a more general framework, for all neural networks that can be represented as solutions of controlled ordinary differential equations, where time appears as continuous depth. These estimates can be used to set the step size of stochastic gradient descent methods, which is illustrated for one example method.
    Date: 2020–04
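The abstract's use case, setting SGD step sizes from Lipschitz estimates, can be illustrated with the standard layer-wise bound on input sensitivity (a simpler quantity than the paper's full-parameter constants; the network sizes and the 1/L step-size rule are illustrative assumptions):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """For a feed-forward network with 1-Lipschitz activations (ReLU,
    tanh), the product of the layers' spectral norms upper-bounds the
    network's Lipschitz constant with respect to its input."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

rng = np.random.default_rng(2)
W1 = rng.normal(0.0, 0.5, (16, 8))   # first layer: R^8 -> R^16
W2 = rng.normal(0.0, 0.5, (4, 16))   # second layer: R^16 -> R^4
L = lipschitz_upper_bound([W1, W2])
eta = 1.0 / L   # heuristic: scale the SGD step size inversely with L
```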
  4. By: Claudius Graebner (Institute for Socio-Economics, University of Duisburg-Essen, Germany; Institute for Comprehensive Analysis of the Economy, Johannes Kepler University Linz, Austria); Anna Hornykewycz (Institute for Comprehensive Analysis of the Economy, Johannes Kepler University Linz, Austria)
    Abstract: The paper studies the relevance of product heterogeneity for innovation dynamics using an agent-based model. The vantage point is a short review of the empirical relevance of capability accumulation for innovation processes and an assessment of how these processes are modelled theoretically in evolutionary micro- and macroeconomic models. This review shows that the macroeconomic literature has so far focused on process innovations. To facilitate the consideration of empirical and microeconomic insights on product innovation in macroeconomic models, a simple agent-based model, which may later serve as an innovation module in macroeconomic models, is introduced. Following up on recent empirical results, products in the model are heterogeneous in terms of their complexity and differ in their relatedness to each other. The model is used to study the theoretical implications of different topological structures underlying product relatedness by conducting simulations with different ‘product spaces’. The analysis suggests that the topological structure of the product space, the assumed relationship between product complexity and centrality, and the relevance of product complexity in price-setting dynamics all have significant but nontrivial implications, and deserve further attention in evolutionary macroeconomics. To this end, the model presented here may serve as a first step towards a module to be integrated into such a more comprehensive model framework.
    Date: 2020–04
  5. By: Ashesh Rambachan; Jon Kleinberg; Sendhil Mullainathan; Jens Ludwig
    Abstract: There is growing concern about "algorithmic bias" - that predictive algorithms used in decision-making might bake in or exacerbate discrimination in society. When will these "biases" arise? What should be done about them? We argue that such questions are naturally answered using the tools of welfare economics: a social welfare function for the policymaker, a private objective function for the algorithm designer and a model of their information sets and interaction. We build such a model that allows the training data to exhibit a wide range of "biases." Prevailing wisdom is that biased data change how the algorithm is trained and whether an algorithm should be used at all. In contrast, we find two striking irrelevance results. First, when the social planner builds the algorithm, her equity preference has no effect on the training procedure. So long as the data, however biased, contain signal, they will be used and the algorithm built on top will be the same. Any characteristic that is predictive of the outcome of interest, including group membership, will be used. Second, we study how the social planner regulates private (possibly discriminatory) actors building algorithms. Optimal regulation depends crucially on the disclosure regime. Absent disclosure, algorithms are regulated much like human decision-makers: disparate impact and disparate treatment rules dictate what is allowed. In contrast, under stringent disclosure of all underlying algorithmic inputs (data, training procedure and decision rule), once again we find an irrelevance result: private actors can use any predictive characteristic. Additionally, now algorithms strictly reduce the extent of discrimination against protected groups relative to a world in which humans make all the decisions. As these results run counter to prevailing wisdom on algorithmic bias, at a minimum, they provide a baseline set of assumptions that must be altered to generate different conclusions.
    JEL: C54 D6 J7 K00
    Date: 2020–05
  6. By: Filippo Di Pietro (Universidad de Sevilla); Patrizio Lecca (European Commission - JRC); Simone Salotti (European Commission - JRC)
    Abstract: Using a spatial general equilibrium model, this paper investigates the resilience of EU regions under three alternative recessionary shocks, each of them activating different economic adjustments and mechanisms. We measure the vulnerability, resistance, and recoverability of regions, and we identify key regional features affecting regions' ability to withstand and recover from unexpected external shocks. The analysis reveals that the response of regions varies with the nature of the external disturbance and with pre-shock regional characteristics. Finally, resilience also appears to depend on factor mobility.
    Keywords: Rhomolo, Region, Growth, computable general equilibrium model, regional economic resilience, economic shocks.
    JEL: C68 R13 R15
    Date: 2020–04
  7. By: Giovanni Dosi; Andrea Roventini; Emanuele Russo
    Abstract: In this paper, we study the effects of industrial policies on international convergence using a multi-country agent-based model which builds upon Dosi et al. (2019b). The model features a group of microfounded economies, with evolving industries, populated by heterogeneous firms that compete in international markets. In each country, technological change is driven by firms' search and innovation activities, while aggregate demand formation and distribution follow Keynesian dynamics. Interactions among countries take place via trade flows and international technological imitation. We employ the model to assess the different strategies that laggard countries can adopt to catch up with leaders: market-friendly policies; industrial policies targeting the development of firms' capabilities and R&D investments, as well as trade restrictions for infant-industry protection; and protectionist policies focusing on tariffs only. We find that markets cannot do the magic: in the absence of government intervention, laggards continue to fall behind. On the contrary, industrial policies can successfully drive international convergence between leaders and laggards, while protectionism alone is not sufficient to support catching up, and countries get stuck in a sort of middle-income trap. Finally, in a global trade war in which developed economies impose retaliatory tariffs, both laggards and leaders are worse off and world productivity growth slows down.
    Keywords: Endogenous growth; catching up; technology-gaps; industrial policies; agent-based models.
    Date: 2020–05–08
  8. By: Matthew A Cole (University of Birmingham); Robert J R Elliott (University of Birmingham); Bowen Liu (University of Birmingham)
    Abstract: We quantify the impact of the Wuhan Covid-19 lockdown on concentrations of four air pollutants using a two-step approach. First, we use machine learning to remove the confounding effects of weather conditions on pollution concentrations. Second, we use a new Augmented Synthetic Control Method (Ben-Michael et al. 2019) to estimate the impact of the lockdown on weather-normalised pollution relative to a control group of cities that were not in lockdown. We find NO2 concentrations fell by as much as 24 µg/m3 during the lockdown (a reduction of 63% from the pre-lockdown level), while PM10 concentrations fell by a similar amount but for a shorter period. The lockdown had no discernible impact on concentrations of SO2 or CO. We calculate that the reduction of NO2 concentrations could have prevented as many as 496 deaths in Wuhan city, 3,368 deaths in Hubei province and 10,822 deaths in China as a whole.
    Keywords: Air pollution, Covid-19, machine learning, synthetic control, health.
    JEL: Q53 Q52 I18 I15 C21 C23
    Date: 2020–05
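Step one of the two-step approach, weather normalisation, can be sketched as follows, with ordinary least squares standing in for the authors' machine-learning model and entirely synthetic pollution and weather data:

```python
import numpy as np

def weather_normalise(pollution, weather):
    """Regress pollution on weather covariates and keep the part the
    weather cannot explain, re-centred on the observed mean. (OLS is a
    linear stand-in for the paper's machine-learning model.)"""
    X = np.column_stack([np.ones(len(weather)), weather])
    beta, *_ = np.linalg.lstsq(X, pollution, rcond=None)
    fitted = X @ beta
    return pollution - fitted + pollution.mean()

# Synthetic daily data: NO2 driven down by temperature and wind.
rng = np.random.default_rng(3)
temp = rng.normal(15.0, 5.0, 365)
wind = rng.normal(3.0, 1.0, 365)
no2 = 30.0 - 0.8 * temp - 2.0 * wind + rng.normal(0.0, 2.0, 365)
norm_no2 = weather_normalise(no2, np.column_stack([temp, wind]))
```

The weather-normalised series can then be handed to the synthetic-control step, which compares treated and control cities.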
  9. By: Fanny A. Kluge (Max Planck Institute for Demographic Research, Rostock, Germany); Tobias C. Vogt (Max Planck Institute for Demographic Research, Rostock, Germany)
    Abstract: In this paper, we study the relationship between income and old-age survival via the indirect pathway of private transfers. Our analysis focuses on intergenerational transfers within the family as an important, but so far less investigated, link between income and improved old-age survival. We use an agent-based model to simulate an exchange relationship between two generations in a family and incorporate realistic demographic, economic and time-use data for Germany. We find that older parents transfer increasing shares of their pensions to their offspring and receive informal care or emotional support in return. This exchange is mutually beneficial, as younger generations are in greater need of financial subsidies and older ones of contact and care. Our inductive approach adds to our understanding of how income is spread within the family and how older family members can benefit from exchanging money for care.
    JEL: J1 Z0
    Date: 2020
  10. By: Azqueta-Gavaldon, Andres
    Abstract: We present evidence that referenda have a significant, detrimental effect on investment. Employing an unsupervised machine learning algorithm over the period 2008-2017, we construct three important uncertainty indices underlying reports in the Scottish news media: Scottish independence (IndyRef)-related uncertainty, Brexit-related uncertainty, and Scottish policy-related uncertainty. Examining the relationship of these indices with investment in a longitudinal panel of 3,589 Scottish firms, the evidence suggests that Brexit-related uncertainty is more strongly associated with investment than IndyRef-related uncertainty. Our preferred specification suggests that a one-standard-deviation increase in Brexit uncertainty foreshadows a reduction in investment of 8% on average in the following year. In addition, we find that the uncertainty associated with the Scottish independence referendum, while negligible at the aggregate level, relates more strongly to the investment of listed firms as well as of those operating on the border with England. We also present evidence of greater sensitivity to these indices among firms that are financially constrained or whose investment is to a greater degree irreversible.
    JEL: C80 D80 E22 E66 G18 G31
    Keywords: investment, machine learning, political uncertainty, textual-data
    Date: 2020–05
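A heavily simplified sketch of how a news-based uncertainty index is built: the paper uses an unsupervised topic model, for which a keyword filter stands in here, and the three articles are invented. Per month, the index is the share of articles matching the topic-and-uncertainty filter, standardised across months:

```python
import numpy as np

# Invented miniature corpus of (month, article text) pairs.
articles = [
    ("2016-06", "brexit vote raises uncertainty for exporters"),
    ("2016-06", "football results from the weekend"),
    ("2016-07", "firms delay plans amid brexit uncertainty"),
]

def uncertainty_index(articles, topic="brexit", term="uncertainty"):
    """Monthly share of articles mentioning both the topic and an
    uncertainty term, standardised to mean zero and unit variance."""
    months = sorted({m for m, _ in articles})
    share = np.array([
        np.mean([topic in txt and term in txt
                 for m2, txt in articles if m2 == m])
        for m in months
    ])
    return months, (share - share.mean()) / share.std()

months, idx = uncertainty_index(articles)
```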
  11. By: Andrea Caravaggio; Mauro Sodini
    Abstract: Most theoretical contributions on the relationship between the economy and the environment assume that the environment is a good distributed homogeneously among agents. The aim of this work is to weaken this hypothesis and to allow the environment to have a local character, even if it is conditioned, through externalities, by choices made at the global level. In particular, adapting the classical framework introduced in John and Pecchenino (1994) for analyzing the dynamic relationship between the environment and the economic process, we propose an OLG agent-based model in which agents may have different initial environmental endowments or may be heterogeneous in their preferences. What emerges is that, despite the attention devoted to local environmental aspects, the network externalities (determined through the scheme of Moore neighbourhoods) play a fundamental role in shaping environmental dynamics and may induce the emergence of chaotic dynamics. On the other hand, the heterogeneity of preferences and/or initial conditions plays an ambiguous role: depending on the weight of the network externalities and the impact of consumption and/or defensive expenditures, heterogeneity may stabilize or destabilize the system.
    Keywords: Agent-Based Models; Overlapping Generations; Local Environment; Network Externalities
    JEL: C63 D62 O13 Q2
    Date: 2020–05–01
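The Moore-neighbourhood externality mentioned in the abstract can be sketched as a local average on a grid of environmental endowments (the wrap-around boundaries and equal weights are illustrative assumptions, not the paper's exact specification):

```python
import numpy as np

def moore_externality(E):
    """Average environmental quality over each cell's Moore
    neighbourhood (the 8 surrounding cells plus the cell itself) on a
    torus: one simple way to operationalise network externalities."""
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    return sum(np.roll(np.roll(E, di, axis=0), dj, axis=1)
               for di, dj in shifts) / 9.0

E = np.arange(16, dtype=float).reshape(4, 4)  # toy local endowments
ext = moore_externality(E)                    # externality each agent faces
```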
  12. By: Songul Tolan (European Commission – JRC); Annarosa Pesole (European Commission - JRC); Fernando Martinez-Plumed (European Commission - JRC); Enrique Fernandez-Macias (European Commission - JRC); José Hernandez-Orallo (Universitat Politècnica de València); Emilia Gomez (European Commission - JRC)
    Abstract: In this paper, we develop a framework for analysing the impact of AI on occupations. Leaving aside the debates on robotisation, digitalisation and online platforms as well as workplace automation, we focus on the occupational impact of AI that is driven by rapid progress in machine learning. In our framework, we map 59 generic tasks from several worker surveys and databases to 14 cognitive abilities (which we extract from the cognitive science literature), and these in turn to a comprehensive list of 328 AI benchmarks used to evaluate progress in AI techniques. Using these cognitive abilities as an intermediate mapping, instead of mapping task characteristics directly to AI tasks, allows for an analysis of AI's occupational impact that goes beyond automation. An application of our framework to occupational databases gives insights into the abilities through which AI is most likely to affect jobs and allows for a ranking of occupations with respect to AI impact. Moreover, we find that some jobs that were traditionally less affected by previous waves of automation may now be subject to relatively higher AI impact.
    Keywords: artificial intelligence, occupations, tasks
    Date: 2020–04
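In its simplest form, the two-stage mapping (tasks to cognitive abilities, abilities to AI benchmarks) reduces to chained weighted averages; all numbers below are invented miniatures, not values from the paper:

```python
import numpy as np

# Hypothetical miniature mapping: 2 tasks x 2 cognitive abilities.
task_to_ability = np.array([
    [1.0, 0.2],   # e.g. a routine-calculation task
    [0.1, 0.9],   # e.g. a social-interaction task
])
# Hypothetical AI-benchmark research intensity per ability.
ability_to_ai = np.array([0.8, 0.1])

# AI exposure per task, then per occupation via its task shares.
task_exposure = task_to_ability @ ability_to_ai
occupation_task_share = np.array([0.7, 0.3])   # hypothetical occupation
occupation_exposure = float(occupation_task_share @ task_exposure)
```

The occupation ranking in the paper then orders occupations by such exposure scores.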
  13. By: Tadumadze, Giorgi; Emde, Simon; Diefenbach, Heiko
    Date: 2020
  14. By: Bonacini, Luca; Gallo, Giovanni; Patriarca, Fabrizio
    Abstract: Feedback control-based mitigation strategies for COVID-19 are threatened by the delay before an infection is detected in official data. This delay also depends on behavioural, technological and procedural issues beyond the incubation period. We provide a machine learning procedure to identify structural breaks in the dynamics of detected positive cases using territorial-level panel data. In our case study, Italy, three structural breaks are found, and they can be related to the three national-level restrictive measures: the school closure, the main lockdown and the shutdown of non-essential economic activities. This allows us to assess the detection delays, their considerable variability across the different measures adopted, and the relative effectiveness of each measure. Accordingly, we draw some policy suggestions to support feedback control-based mitigation policies so as to decrease their risk of failure, including the further role that wide swab-testing campaigns may play in reducing the detection delay. Finally, by exploiting the large heterogeneity in the features of Italian provinces, we stress some drawbacks of the specific features of the restrictive measures and of their sequence of adoption, among them the side effects of the main lockdown on social and economic inequalities.
    Keywords: Covid-19, coronavirus, lockdown, feedback control, mitigation strategies
    JEL: C63 I14 I18
    Date: 2020
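The structural-break step can be illustrated with a least-squares scan for a single break in a series' mean (a simple stand-in for the paper's machine-learning procedure; the toy growth-rate series and segment bounds are invented):

```python
import numpy as np

def single_break(y, min_seg=5):
    """Return the break point that minimises the pooled within-segment
    sum of squared errors when the series mean is allowed to shift once."""
    best_t, best_sse = None, np.inf
    for t in range(min_seg, len(y) - min_seg):
        sse = (((y[:t] - y[:t].mean()) ** 2).sum()
               + ((y[t:] - y[t:].mean()) ** 2).sum())
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t

# Toy daily growth rates: fast spread, then a lockdown-like slowdown.
rng = np.random.default_rng(4)
growth = np.concatenate([rng.normal(0.25, 0.02, 30),
                         rng.normal(0.05, 0.02, 30)])
t_break = single_break(growth)   # should land near day 30
```

Comparing such estimated break dates with the official dates of the restrictive measures is what reveals the detection delay.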

This nep-cmp issue is ©2020 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.