nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒06‒08
sixteen papers chosen by
Stan Miles
Thompson Rivers University

  1. GEMPACK manual By Mark Horridge; Michael Jerie; Dean Mustakinov; Florian Schiffmann
  2. Application of Nonlinear Autoregressive with Exogenous Input (NARX) neural network in macroeconomic forecasting, national goal setting and global competitiveness assessment By Liyang Tang
  3. Multi-View Graph Convolutional Networks for Relationship-Driven Stock Prediction By Jiexia Ye; Juanjuan Zhao; Kejiang Ye; Chengzhong Xu
  4. Finding and verifying the nucleolus of cooperative games By Márton Benedek; Jörg Fliege; Tri-Dung Nguyen
  5. Simulating the dynamics of individual adaptation to floods By Katrin Erdlenbruch; Bruno Bonté
  6. Machine Learning Econometrics: Bayesian algorithms and methods By Korobilis, Dimitris; Pettenuzzo, Davide
  7. Which bills are lobbied? Predicting and interpreting lobbying activity in the US By Ivan Slobozhan; Peter Ormosi; Rajesh Sharma
  8. Evaluation of the economic impact of natural gas transport interruptions in Peru By Vásquez Cordano, Arturo Leonardo
  9. Public policies and the art of catching up: matching the historical evidence with a multi-country agent-based model By Giovanni Dosi; Andrea Roventini; Emanuele Russo
  10. Beyond the weights: A multicriteria approach to evaluate Inequality in Education By Giuseppe Coco; Raffaele Lagravinese; Giuliano Resce
  11. A Predator-Prey Model of Unemployment and W-shaped Recession in the COVID-19 Pandemic By Maria Cristina Barbieri Góes; Ettore Gallo
  12. The Aino 3.0 model By Silvo, Aino; Verona, Fabio
  13. Wages, Hires, and Labor Market Concentration By Marinescu, Ioana E.; Ouss, Ivan; Pape, Louis-Daniel
  14. Predicting the COVID-19 Pandemic in Canada and the US By Ba Chu; Shafiullah Qureshi
  15. Basic income: simulations in preparation for an experiment By Mahdi Ben Jelloul; Antoine Bozio; Sophie Cottet; Brice Fabre; Claire Leroy
  16. Corruption in the Times of Pandemia By Gallego, Jorge; Prem, Mounu; Vargas, Juan F.

  1. By: Mark Horridge; Michael Jerie; Dean Mustakinov; Florian Schiffmann
    Abstract: GEMPACK (General Equilibrium Modelling PACKage) is a modelling system for CGE economic models, used at the Centre of Policy Studies (CoPS) in Melbourne, Australia, and sold to other CGE modellers. Some of the better-known CGE models solved using GEMPACK are the GTAP model of world trade, and the VURM, ORANI-G, USAGE and TERM models used at CoPS. All these models share a distinctive feature: they are formulated as a system of differential equations in percentage-change form; however, this is not required by GEMPACK. A characteristic feature of CGE models is that an initial solution for the model can be readily constructed from a table of transaction values (such as an input-output table or a social accounting matrix) that satisfies certain basic accounting restrictions. GEMPACK builds on this feature by formulating the CGE model as an initial value problem which is solved using standard techniques. The GEMPACK user specifies a model by constructing a text file listing model equations and variables, and showing how variables relate to value flows stored on an initial data file. GEMPACK translates this file into a computer program which solves the model, i.e., computes how model variables might change in response to an external shock. The original equation system is linearized (reformulated as a system of first-order partial differential equations). If most variables are expressed in terms of percentage changes (akin to log changes), the coefficients of the linearized system are usually very simple functions of database value flows. Computer algebra is used at this point to greatly reduce (by substitution) the size of the system. The system is then solved by multistep methods such as the Euler method, the midpoint method or Gragg's modified midpoint method, all of which require the solution of a large system of linear equations, accomplished by sparse matrix techniques. Richardson extrapolation is used to improve accuracy. The final result is an accurate solution of the original non-linear equations. This linearized approach, originally devised to solve medium-sized CGE models on early computers, has since proved capable (on modern computers) of solving very large models quite quickly. Additionally, it has lent itself to some interesting extensions, such as: a Gaussian quadrature method of estimating confidence intervals for model results from known distributions of shock or parameter values; a way to formulate inequality constraints or non-differentiable equations as complementarities; and a technique to decompose changes in model variables due to several shocks into components due to each individual shock. The underlying numerical approach is complemented by several GUI programs that ease viewing of the large multidimensional arrays often found in CGE databases, manage complex (e.g., multi-period) simulations, and allow interactive exploration and explanation of simulation results. The document provides a complete description of GEMPACK features, instructions for use, and many examples.
    Keywords: Computable General Equilibrium, CGE, software, GEMPACK
    JEL: C63 C68 D58
    Date: 2019–05
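The solution strategy the abstract describes (linearize, integrate as an initial value problem with a low-order method, then Richardson-extrapolate) can be illustrated on a one-variable toy equation. This is a minimal sketch, not GEMPACK's actual implementation: the equation dy/dt = y and step counts are chosen only so the exact answer (e) is known.

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) with n Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: y          # toy linearized model: dy/dt = y
exact = math.e              # true solution at t = 1

y_n  = euler(f, 1.0, 0.0, 1.0, 50)
y_2n = euler(f, 1.0, 0.0, 1.0, 100)
# Euler is first-order, so halving the step roughly halves the error;
# Richardson extrapolation cancels the leading error term:
y_rich = 2 * y_2n - y_n

print(abs(y_n - exact), abs(y_2n - exact), abs(y_rich - exact))
```

Combining the 50-step and 100-step solutions as 2*y_2n - y_n removes the O(h) error term, which is the role Richardson extrapolation plays on top of GEMPACK's multistep solutions.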
  2. By: Liyang Tang
    Abstract: This paper selects the NARX neural network as its method on the basis of a literature review, and constructs specific NARX neural networks for application scenarios involving macroeconomic forecasting, national goal setting and global competitiveness assessment. Through case studies on China, the US and the Eurozone, the study explores how the choice of exogenous inputs (limited and partial versus abundant and comprehensive; a small set of the most relevant inputs versus a large set covering all major aspects of the macroeconomy; economy-wide inputs only versus both economy-wide and sector-level inputs) affects the forecasting performance of NARX neural networks for specific macroeconomic indicators or indices. Through a case study on Russia, the paper explores how a limited set of the most relevant exogenous inputs versus an abundant and comprehensive set influences the prediction performance of NARX neural networks for national goal setting. Finally, comparative studies of NARX forecasts of the Global Competitiveness Index (GCI) for various economies are conducted, in order to explore whether a NARX network trained on the GCI-related data of some economies can make sufficiently accurate predictions about the GCIs of other economies, and whether a network trained on data from one type of economy predicts the GCIs of economies of the same type more accurately than those of a different type. Based on these applications, the paper provides policy recommendations on applying fully trained NARX neural networks that are assessed as qualified to assist, or even replace, the deductive and inductive abilities of the human brain in a variety of appropriate tasks.
    Date: 2020–05
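The NARX architecture regresses the current value of a target series on lags of itself and of exogenous inputs. As a minimal sketch of that lag structure only, substituting a linear ARX for the neural network and using an entirely synthetic data-generating process, one can build the lagged design matrix and recover the coefficients by ordinary least squares:

```python
import math

# Synthetic data: y depends on its own lag and a lagged exogenous input x
# (hypothetical coefficients 0.5, 0.3 and intercept 1.0, for illustration).
T = 200
x = [math.sin(0.1 * t) for t in range(T)]
y = [0.0]
for t in range(1, T):
    y.append(0.5 * y[t - 1] + 0.3 * x[t - 1] + 1.0)

# NARX-style design: each row stacks lagged y, lagged x, and a constant.
rows = [[y[t - 1], x[t - 1], 1.0] for t in range(1, T)]
targets = y[1:]

# Ordinary least squares via the 3x3 normal equations (Gaussian elimination).
k = 3
A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
b = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(k)]
for i in range(k):                      # forward elimination with pivoting
    p = max(range(i, k), key=lambda r: abs(A[r][i]))
    A[i], A[p] = A[p], A[i]; b[i], b[p] = b[p], b[i]
    for r in range(i + 1, k):
        m = A[r][i] / A[i][i]
        A[r] = [a - m * c for a, c in zip(A[r], A[i])]
        b[r] -= m * b[i]
coef = [0.0] * k
for i in reversed(range(k)):            # back substitution
    coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]

print(coef)  # should recover roughly [0.5, 0.3, 1.0]
```

A NARX network replaces this linear map with a trained nonlinear one, but the feeding of lagged endogenous and exogenous values is the same.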
  3. By: Jiexia Ye; Juanjuan Zhao; Kejiang Ye; Chengzhong Xu
    Abstract: Stock price movement prediction is commonly accepted as a very challenging task due to the extremely volatile nature of financial markets. Previous works typically focus on understanding the temporal dependency of stock price movement based on the history of individual stock movement, but they do not take the complex relationships among the stocks involved into consideration. However, it is well known that an individual stock price is correlated with the prices of other stocks. To address this, we propose a deep learning-based framework which utilizes a recurrent neural network (RNN) and a graph convolutional network (GCN) to predict stock movement. Specifically, we first use the RNN to model the temporal dependency of each stock's price movement based on its own information from past time slices; we then employ the GCN to model the influence of related stocks based on three novel graphs which represent the shareholder, industry and concept relationships among stocks based on investment decisions. Experiments on two stock indexes in the Chinese market show that our model outperforms other baselines. To the best of our knowledge, this is the first work to incorporate multiple relationships among stocks into a GCN-based deep learning framework for predicting stock price movement.
    Date: 2020–05
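A single graph-convolution step, the building block stacked on top of the RNN outputs in such frameworks, aggregates each stock's features over its neighbours through a normalized adjacency matrix. A minimal sketch with a hypothetical three-stock shareholder graph and untrained, illustrative weights:

```python
import math

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

# Hypothetical shareholder graph over three stocks: 0 and 1 linked, 2 isolated.
A = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 0]]
n = len(A)
A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]  # add self-loops
deg = [sum(row) for row in A_hat]
# Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
A_norm = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)] for i in range(n)]

H = [[1.0, 0.0],   # per-stock input features (e.g. RNN summaries of price history)
     [0.0, 1.0],
     [1.0, 1.0]]
W = [[1.0, -1.0],  # layer weights (illustrative, untrained)
     [0.5,  0.5]]

Z = matmul(matmul(A_norm, H), W)
H_out = [[max(0.0, z) for z in row] for row in Z]  # ReLU activation
print(H_out)
```

Linked stocks 0 and 1 end up with identical aggregated features, while the isolated stock 2 keeps only its own information, which is exactly how relationship graphs let one stock's signal influence another's prediction.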
  4. By: Márton Benedek (Institute of Economics, Centre for Economic and Regional Studies, Tóth Kálmán utca 4, 1097 Budapest, Hungary; Corvinus University of Budapest, Fővám tér 8, 1093 Budapest, Hungary; Budapest University of Technology and Economics, Egry József u. 1, 1111 Budapest, Hungary); Jörg Fliege (Mathematical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, United Kingdom); Tri-Dung Nguyen (Mathematical Sciences, Business School and CORMSIS, University of Southampton, Southampton, SO17 1BJ, United Kingdom)
    Abstract: The nucleolus offers a desirable payoff-sharing solution in cooperative games, thanks to its attractive properties. Although computing the nucleolus is very challenging, the Kohlberg criterion offers a method for verifying whether a solution is the nucleolus in relatively small games (number of players n at most 15). This approach becomes more challenging for larger games as the criterion involves possibly exponentially large collections of coalitions, with each collection being potentially exponentially large. The aim of this work is twofold. First, we develop an improved Kohlberg criterion that involves checking the `balancedness' of at most (n-1) sets of coalitions. Second, we exploit these results and introduce a novel descent-based constructive algorithm to find the nucleolus efficiently. We demonstrate the performance of the new algorithms by comparing them with existing methods over different types of games. Our contribution also includes the first open-source code for computing the nucleolus of moderately large games.
    Keywords: nucleolus, cooperative games, Kohlberg criterion, computation
    JEL: C71 C61
    Date: 2020–05
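The nucleolus lexicographically minimizes the non-increasingly sorted vector of coalition excesses e(S, x) = v(S) - x(S). The sketch below illustrates only this defining criterion, not the paper's algorithm, on a toy three-player game in which only the grand coalition generates value:

```python
from itertools import combinations

players = (1, 2, 3)
# Toy game: every proper coalition is worth 0, the grand coalition is worth 1.
v = {frozenset(c): 0.0 for r in range(1, 3) for c in combinations(players, r)}
v[frozenset(players)] = 1.0

def sorted_excesses(x):
    """Excesses e(S, x) = v(S) - x(S) over proper nonempty coalitions, largest first."""
    exc = []
    for r in range(1, len(players)):
        for c in combinations(players, r):
            exc.append(v[frozenset(c)] - sum(x[i] for i in c))
    return sorted(exc, reverse=True)

symmetric = {1: 1 / 3, 2: 1 / 3, 3: 1 / 3}
skewed = {1: 0.5, 2: 0.25, 3: 0.25}

print(sorted_excesses(symmetric))
print(sorted_excesses(skewed))
```

The symmetric split yields the lexicographically smaller excess vector, consistent with it being the nucleolus of this symmetric game; the paper's contribution is to verify and find such points efficiently in much larger games, where enumerating all coalitions as done here is infeasible.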
  5. By: Katrin Erdlenbruch (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - INRA - Institut National de la Recherche Agronomique - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier, UMR G-EAU - Gestion de l'Eau, Acteurs, Usages - Cirad - Centre de Coopération Internationale en Recherche Agronomique pour le Développement - IRD - Institut de Recherche pour le Développement - AgroParisTech - IRSTEA - Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier); Bruno Bonté (UMR G-EAU - Gestion de l'Eau, Acteurs, Usages - Cirad - Centre de Coopération Internationale en Recherche Agronomique pour le Développement - IRD - Institut de Recherche pour le Développement - AgroParisTech - IRSTEA - Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier)
    Abstract: Individual adaptation measures are an important tool for households to reduce the negative consequences of floods. Although people's motivations to adopt such measures are widely studied in the literature, the diffusion of adaptations within a given population is less well described. In this paper, we build a dynamic agent-based model which simulates the adoption of individual adaptation measures and enables evaluation of the efficiency of different communication policies. We run our model using an original dataset based on a survey in France. We test the importance of the different parameters of our model by implementing a global sensitivity analysis. We then compare the ranking and performance of different communication policies under different model settings. We show that in all settings, targeted policies that address both risk and coping possibilities perform best in supporting individual adaptation. Moreover, we show that two dynamic parameters are of particular importance: the delay between the motivation to act and the implementation of the measure, and the time during which households stick to a given adaptation measure.
    Date: 2018–06
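The adoption dynamics can be caricatured as a threshold agent-based model: each household becomes motivated once its own risk perception, reinforced by the share of adopters it observes, crosses a threshold, and implements the measure only after a delay. All parameters below are hypothetical and not taken from the paper's survey data:

```python
import random

random.seed(42)

N = 200
DELAY = 3            # periods between motivation and implementation (hypothetical)
risk = [random.random() for _ in range(N)]   # heterogeneous risk perception
motivated_at = [None] * N
adopted = [False] * N

adoption_share = []
for t in range(30):
    share = sum(adopted) / N
    adoption_share.append(share)
    for i in range(N):
        if adopted[i]:
            continue
        # Motivation: own risk perception reinforced by observed adopters.
        if motivated_at[i] is None and risk[i] + 0.5 * share > 0.7:
            motivated_at[i] = t
        # Implementation only after the delay has elapsed.
        if motivated_at[i] is not None and t - motivated_at[i] >= DELAY:
            adopted[i] = True

print(adoption_share[0], adoption_share[-1])
```

The delay parameter produces the staircase-like diffusion curves that make its calibration matter so much in the paper's sensitivity analysis: lengthening DELAY slows every wave of adoption.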
  6. By: Korobilis, Dimitris; Pettenuzzo, Davide
    Abstract: As the amount of economic and other data generated worldwide grows ever larger, a challenge for future generations of econometricians will be to master efficient algorithms for inference in empirical models with large information sets. This chapter provides a review of popular estimation algorithms for Bayesian inference in econometrics and surveys alternative algorithms developed in machine learning and computer science that allow for efficient computation in high-dimensional settings. The focus is on the scalability and parallelizability of each algorithm, as well as their ability to be adopted in various empirical settings in economics and finance.
    Keywords: MCMC; approximate inference; scalability; parallel computation
    JEL: C11 C15 C49 C88
    Date: 2020–05–05
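The workhorse algorithm in this literature, Markov chain Monte Carlo, fits in a few lines. A minimal sketch: a random-walk Metropolis sampler targeting a toy standard-normal posterior, chosen only so that the correct answer is known:

```python
import math
import random

random.seed(0)

def log_post(theta):
    """Log-density of the (toy) posterior: standard normal, up to a constant."""
    return -0.5 * theta * theta

def metropolis(n_draws, step=1.0):
    theta = 0.0
    draws = []
    for _ in range(n_draws):
        prop = theta + random.gauss(0.0, step)   # random-walk proposal
        # Accept with probability min(1, post(prop) / post(theta)).
        if random.random() < math.exp(min(0.0, log_post(prop) - log_post(theta))):
            theta = prop
        draws.append(theta)
    return draws

draws = metropolis(20000)
burned = draws[2000:]                            # discard burn-in
mean = sum(burned) / len(burned)
var = sum((d - mean) ** 2 for d in burned) / len(burned)
print(round(mean, 2), round(var, 2))
```

The chapter's themes of scalability and parallelizability show up even here: the draws are serially dependent, which is exactly why approximate and parallelizable alternatives to plain MCMC are attractive in high dimensions.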
  7. By: Ivan Slobozhan; Peter Ormosi; Rajesh Sharma
    Abstract: Using lobbying data, we offer several experiments applying machine learning techniques to predict whether a piece of legislation (a US bill) has been subjected to lobbying activities or not. We also investigate the influence of the intensity of the lobbying activity on how discernible a lobbied bill is from one that was not subject to lobbying. We compare the performance of a number of different models (logistic regression, random forest, CNN and LSTM) and text embedding representations (BOW, TF-IDF, GloVe, Law2Vec). We report ROC AUC scores above 0.85 and accuracy of 78%. Model performance improves significantly (0.95 ROC AUC and 88% accuracy) when bills with higher lobbying intensity are considered. We also propose a method that could be used for unlabelled data, and through it we show that there is a considerably large number of previously unlabelled US bills for which our predictions suggest that some lobbying activity took place. We believe our method could potentially contribute to the enforcement of the US Lobbying Disclosure Act (LDA) by indicating the bills that were likely to have been affected by lobbying but were not filed as such.
    Date: 2020–04
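One of the compared text representations, TF-IDF, weights each term by its within-document frequency times the log inverse document frequency. A sketch on a made-up three-document corpus standing in for bill texts:

```python
import math
from collections import Counter

# Toy corpus standing in for bill texts (entirely made up).
docs = [
    "tax credit for renewable energy producers",
    "renewable energy grid modernization act",
    "highway funding and tax reform",
]
tokenized = [d.split() for d in docs]
n_docs = len(docs)

# Document frequency of each term.
df = Counter(t for doc in tokenized for t in set(doc))

def tfidf(doc):
    """Term frequency times log inverse document frequency."""
    tf = Counter(doc)
    return {t: (tf[t] / len(doc)) * math.log(n_docs / df[t]) for t in tf}

weights = tfidf(tokenized[0])
# Terms shared across documents get lower weight; terms unique to one bill rank highest.
print(sorted(weights, key=weights.get, reverse=True))
```

These sparse vectors are what a downstream classifier such as the paper's logistic regression consumes; the embedding-based representations (GloVe, Law2Vec) replace them with dense learned vectors.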
  8. By: Vásquez Cordano, Arturo Leonardo
    Abstract: Drawing on the results of a computable general equilibrium (CGE) analysis, this paper shows the impact on the Peruvian economy of natural gas supply interruptions caused by potential restrictions in the transport system of the Camisea project, located in the Cusco region of Peru. Through a simulation exercise using a CGE model, it estimates that the social cost to the Peruvian economy of a one-day interruption in the natural gas supply amounts, in the baseline scenario, to US$ 335 million, equivalent to 0.21% of Peruvian GDP in real terms. In a catastrophic scenario in which the natural gas supply is suspended for three months, social losses could exceed US$ 30 billion, equivalent to roughly 19% of Peruvian GDP. The paper closes with a discussion of policy recommendations to mitigate the impact of natural gas supply cuts in Peru.
    Keywords: Liquefied Natural Gas; extractive activities; Camisea project; LPG; renewable resources; disasters; gas pipeline constraints; computable general equilibrium (CGE); Peru; economic impact evaluation
    JEL: C58 D57 D58 L95 Q35 Q41 Q54
    Date: 2019–06
  9. By: Giovanni Dosi (Institute of Economics and EMbeDS, Scuola Superiore Sant’Anna, Pisa (Italy)); Andrea Roventini (OFCE Sciences Po, Sophia-Antipolis (France), Institute of Economics and EMbeDS, Scuola Superiore Sant’Anna, Pisa (Italy)); Emanuele Russo (Institute of Economics and EMbeDS, Scuola Superiore Sant’Anna, Pisa (Italy))
    Abstract: In this paper, we study the effects of industrial policies on international convergence using a multi-country agent-based model which builds upon Dosi et al. (2019b). The model features a group of microfounded economies, with evolving industries, populated by heterogeneous firms that compete in international markets. In each country, technological change is driven by firms' search and innovation activities, while aggregate demand formation and distribution follow Keynesian dynamics. Interactions among countries take place via trade flows and international technological imitation. We employ the model to assess the different strategies that laggard countries can adopt to catch up with leaders: market-friendly policies; industrial policies targeting the development of firms' capabilities and R&D investments, as well as trade restrictions for infant industry protection; and protectionist policies focusing on tariffs only. We find that markets alone cannot do the magic: in the absence of government intervention, laggards continue to fall behind. On the contrary, industrial policies can successfully drive international convergence between leaders and laggards, while protectionism alone is not sufficient to support catching up, and countries get stuck in a sort of middle-income trap. Finally, in a global trade war in which developed economies impose retaliatory tariffs, both laggards and leaders are worse off and world productivity growth slows down.
    Keywords: Endogenous growth, catching up, technology-gaps, industrial policies, agent-based models.
    JEL: F41 F43 O4 O3
    Date: 2020–06
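The paper's central mechanism, laggards closing the technology gap only when policy supports imitation and capability building, can be caricatured in a two-country reduced form. The growth rates and the imitation coefficient below are hypothetical and are not the paper's model:

```python
def simulate(imitation_rate, periods=100):
    """Leader productivity grows at a steady rate; the laggard adds a catch-up
    term proportional to the technology gap (hypothetical reduced form)."""
    leader, laggard = 1.0, 0.5
    for _ in range(periods):
        gap = leader - laggard
        leader *= 1.02
        laggard = laggard * 1.01 + imitation_rate * gap
    return laggard / leader          # relative productivity of the laggard

print(round(simulate(0.0), 3))       # no imitation support: falling behind
print(round(simulate(0.05), 3))      # active capability building: catching up
```

Without the catch-up term the productivity ratio shrinks toward zero, while a modest imitation rate stabilizes it well above the starting point, mirroring the paper's contrast between market-friendly and capability-building policies.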
  10. By: Giuseppe Coco (Università degli Studi di Firenze); Raffaele Lagravinese (Università degli Studi di Bari "Aldo Moro"); Giuliano Resce (Italian National Research Council (CNR))
    Abstract: This paper proposes the use of a new technique, Stochastic Multicriteria Acceptability Analysis (SMAA), to evaluate education quality at the school level from the multidimensional PISA database. SMAA produces rankings with Monte Carlo generation of weights to estimate the probability that each school occupies a given position in the aggregate ranking, thus avoiding any arbitrary intervention by researchers. We use the rankings in four waves of PISA assessments to compare SMAA outcomes with Benefit of the Doubt (BoD), showing that the differentiation of weights matters. Considering the whole set of feasible weights by means of SMAA, we then estimate multidimensional inequality in education, and we disentangle inequality into a 'within' and a 'between' country component, in addition to a component due to overlapping, using the multidimensional ANOGI. We find that, over time, inequality within countries has increased substantially. Overlapping among countries, particularly in the upper part of the distribution, has also increased quite substantially, suggesting that excellence is spreading across countries.
    Keywords: Education inequality, PISA, SMAA, ANOGI, anywhere and somewhere
    JEL: I14 C44
    Date: 2020–05
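SMAA's rank-acceptability indices are the probabilities that each alternative attains each rank when the criterion weights are drawn at random. A sketch with three hypothetical schools scored on two normalized criteria (scores are made up):

```python
import random

random.seed(1)

# Hypothetical school scores on two normalized criteria (e.g. reading, maths).
scores = {"A": (0.9, 0.4), "B": (0.6, 0.7), "C": (0.4, 0.9)}

n_draws = 10000
rank_acceptability = {s: [0, 0, 0] for s in scores}
for _ in range(n_draws):
    w = random.random()                      # random weight on criterion 1
    composite = {s: w * a + (1 - w) * b for s, (a, b) in scores.items()}
    ranking = sorted(scores, key=composite.get, reverse=True)
    for pos, s in enumerate(ranking):
        rank_acceptability[s][pos] += 1      # count how often each rank occurs

for s, counts in rank_acceptability.items():
    print(s, [c / n_draws for c in counts])
```

With these scores, school B is never ranked first under any weighting yet is robustly second, information that a single fixed set of weights (the "arbitrary intervention" the paper avoids) would hide.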
  11. By: Maria Cristina Barbieri Góes (Department of Economics, Università degli Studi Roma Tre); Ettore Gallo (Department of Economics, New School for Social Research)
    Abstract: The paper presents a predator-prey model which captures the interactions between the unemployment rate and the COVID-19 infection rate. The model shows that lockdown measures can effectively reduce the infection rate, but at the cost of a higher unemployment rate. The solution of the system makes the case for an endemic equilibrium of COVID-19 infections, hence producing waves in the unemployment rate in the absence of widespread immunity and/or vaccination. Furthermore, we simulate the model, calibrating it for the US. The simulation shows the dramatic effects on unemployment and on overall economic activity produced by potential recurrent waves of COVID-19, leading to a series of W-shaped recessions that, in the absence of an adequate policy response, jeopardize the return to the normal trend in the medium run.
    Keywords: COVID-19, unemployment rate, jobless recovery, W-shaped recession
    JEL: E24 E60 H51 I18
    Date: 2020–05
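A Lotka-Volterra-style sketch of the mechanism, with the infection rate as "prey" and the unemployment rate as "predator", reproduces the recurring waves. The coefficients and initial conditions are hypothetical, chosen only to generate oscillations, and are not the paper's US calibration:

```python
# Predator-prey sketch: infection rate i is the "prey", unemployment u the
# "predator" (all coefficients hypothetical, for illustration only).
a, b, c, d = 0.8, 2.0, 1.5, 0.6
i, u = 0.2, 0.2
dt = 0.01

peaks = 0
prev2 = prev = i
for _ in range(10000):            # 100 time units of Euler integration
    di = (a - b * u) * i          # infections grow, curbed by lockdown/unemployment
    du = (c * i - d) * u          # unemployment rises while infections are high
    i += dt * di
    u += dt * du
    if prev > prev2 and prev > i:  # count local maxima: one per infection wave
        peaks += 1
    prev2, prev = prev, i

print("infection waves counted:", peaks)
```

Each infection peak drags the unemployment rate up with a lag, which is the model's route to a W-shaped (indeed repeatedly W-shaped) recession in the absence of immunity or vaccination.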
  12. By: Silvo, Aino; Verona, Fabio
    Abstract: In this paper we present Aino 3.0, the latest vintage of the dynamic stochastic general equilibrium (DSGE) model used at the Bank of Finland for policy analysis. Aino 3.0 is a small open economy DSGE model at the intersection of the recent literatures on so-called TANK (“Two-Agent New Keynesian”) and MONK (“Mortgages in New Keynesian”) models. It aims at capturing the most relevant macro-financial linkages in the Finnish economy and provides a rich laboratory for the analysis of various macroeconomic and macroprudential policies. We show how the availability of a durable consumption good (housing), on the one hand, and the presence of credit-constrained households, on the other, affect the transmission of key macroeconomic and financial shocks. We also illustrate how these new transmission channels affect model dynamics compared to the previous model vintage (the Aino 2.0 model of Kilponen et al., 2016).
    JEL: E21 E32 E44 F41 R31
    Date: 2020–05–26
  13. By: Marinescu, Ioana E. (University of Pennsylvania); Ouss, Ivan (CREST (ENSAE)); Pape, Louis-Daniel (CREST (ENSAE))
    Abstract: How does employer market power affect workers? We compute the concentration of new hires by occupation and commuting zone in France using linked employer-employee data. Using instrumental variables with worker and firm fixed effects, we find that a 10% increase in labor market concentration decreases hires by 12.4% and the wages of new hires by nearly 0.9%, as hypothesized by monopsony theory. Based on a simple merger simulation, we find that a merger between the top two employers in the retail industry would be the most damaging, with about 24 million euros in annual lost wages for new hires and a decrease of about 8,000 in annual hires.
    Keywords: labor market concentration, wages, hires, merger simulation
    JEL: J31 J32 J42 L13 J51 L40 L41 L44
    Date: 2020–05
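Labor market concentration in this literature is usually measured by a Herfindahl-Hirschman index (HHI) of employers' shares of new hires within an occupation-by-commuting-zone cell. A sketch with entirely made-up hire counts:

```python
# Hypothetical counts of new hires by employer in two local labor markets
# (occupation x commuting-zone cells); all numbers are made up.
markets = {
    "cashiers/zone-A": {"firm1": 80, "firm2": 15, "firm3": 5},
    "nurses/zone-B": {"firm1": 30, "firm2": 25, "firm3": 25, "firm4": 20},
}

def hires_hhi(hires):
    """Herfindahl-Hirschman index of hiring shares, on the 0-10000 scale."""
    total = sum(hires.values())
    return sum((100 * h / total) ** 2 for h in hires.values())

for market, hires in markets.items():
    print(market, round(hires_hhi(hires)))
```

On the usual 0-10,000 scale the first market (HHI 6650) is highly concentrated and the second (HHI 2550) moderately so; a merger simulation like the paper's recomputes such indices after combining the top employers' hire counts.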
  14. By: Ba Chu (Department of Economics, Carleton University); Shafiullah Qureshi (Department of Economics, Carleton University)
    Abstract: Our proposed time series model with a quartic trend function predicts that the peak of confirmed coronavirus cases has passed in Canada and the US, and that the pandemic will end around June 2020 in the best scenario and towards the end of 2020 in the worst scenario. Both the bootstrap distance-based test of independence and the XGBoost algorithm reveal a strong link between the coronavirus case count and relevant Google Trends features (defined by the search intensities of various keywords that the public entered into the Google search engine during this pandemic).
    Keywords: COVID-19; prediction; machine learning; Google Trends
    Date: 2020–05–04
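A quartic trend implies a turning point that can be located numerically once the coefficients are estimated. The coefficients below are purely illustrative (a rise-and-fall shape), not fitted to the paper's data:

```python
# Hypothetical quartic trend for daily case counts (illustrative coefficients,
# not fitted to real data): rises, peaks, then declines over the sample window.
def trend(t):
    return 100 + 40 * t + 3 * t**2 - 0.5 * t**3 + 0.002 * t**4

# Locate the peak by scanning the daily grid for the largest fitted value.
days = range(0, 201)
peak_day = max(days, key=trend)
print("peak day:", peak_day, "fitted cases:", round(trend(peak_day)))
```

In the paper's setup, the analogous exercise on the fitted trend is what dates the passing of the case-count peak; the Google Trends features then enter a separate predictive model (XGBoost) rather than the trend itself.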
  15. By: Mahdi Ben Jelloul (IPP - Institut des politiques publiques); Antoine Bozio (IPP - Institut des politiques publiques, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Sophie Cottet (IPP - Institut des politiques publiques, PSE - Paris School of Economics, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique); Brice Fabre (PSE - Paris School of Economics, IPP - Institut des politiques publiques, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique); Claire Leroy (IPP - Institut des politiques publiques)
    Abstract: The current social benefits system is debated along many dimensions: non-take-up of minimum income benefits, the stacking of multiple schemes, restrictive eligibility conditions for young people, etc. Facing these issues, 13 departmental councils (Ardèche, Ariège, Aude, Dordogne, Gers, Gironde, Haute-Garonne, Ille-et-Vilaine, Landes, Lot-et-Garonne, Meurthe-et-Moselle, Nièvre and Seine-Saint-Denis) have launched a project to experiment with a basic income that would simplify the existing system and be open, subject to a means test, to every individual above a certain age. A prerequisite for implementing this project is defining the reform scenario or scenarios to be tested. This report contributes to that goal by evaluating ex ante the budgetary and redistributive effects of several reform scenarios defined by the participating departmental councils. Using the TAXIPP 1.0 microsimulation model, which draws on both administrative tax data and survey data, the report proposes two ways of simplifying the existing system: replacing the Revenu de Solidarité Active (RSA) and the activity bonus (prime d'activité) with a single simplified scheme, on the one hand, and integrating housing benefits into the new unified scheme, on the other. In particular, it evaluates the effects of opening these schemes to individuals aged 18 to 24, who are currently the most affected by poverty.
    Date: 2018–06
  16. By: Gallego, Jorge; Prem, Mounu; Vargas, Juan F.
    Abstract: The public health crisis caused by the COVID-19 pandemic, coupled with the subsequent economic emergency and social turmoil, has pushed governments to substantially and swiftly increase spending. Because of the pressing nature of the crisis, public procurement rules and procedures have been relaxed in many places in order to expedite transactions. However, this may also create opportunities for corruption. Using contract-level information on public spending from Colombia's e-procurement platform, and a difference-in-differences identification strategy, we find that municipalities classified by a machine learning algorithm as traditionally more prone to corruption react to the pandemic-led spending surge by using a larger proportion of discretionary non-competitive contracts and increasing their average value. This is especially so in the case of contracts to procure crisis-related goods and services. Our evidence suggests that large negative shocks that require fast and massive spending may increase corruption, thus at least partially offsetting the mitigating effects of this fiscal instrument.
    Keywords: Corruption; COVID-19; Public procurement; Machine learning
    JEL: H57 D73 I18 H75
    Date: 2020–05
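The difference-in-differences design compares the change in discretionary contracting for municipalities flagged as corruption-prone against the change for the rest. With made-up pre/post averages the estimator is a one-liner:

```python
# Made-up average shares of discretionary (non-competitive) contracts,
# before and after the pandemic-led spending surge.
high_risk = {"pre": 0.40, "post": 0.55}   # flagged by the ML algorithm
low_risk = {"pre": 0.35, "post": 0.42}    # not flagged

# Difference-in-differences: change for treated minus change for controls.
did = (high_risk["post"] - high_risk["pre"]) - (low_risk["post"] - low_risk["pre"])
print(round(did, 2))
```

The actual study estimates this at the contract level with controls and fixed effects, but the identifying comparison is exactly this double difference.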

This nep-cmp issue is ©2020 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.