nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒11‒30
twenty-one papers chosen by
Stan Miles
Thompson Rivers University

  1. Inequality and imbalances: a monetary union agent-based model By Alberto Cardaci; Francesco Saraceno
  2. Modelling the Long Term Potential Macroeconomic Impact of Brexit on Wales By Sangeeta Khorana; Badri Narayanan G; Nicholas Perdikis
  3. Exploration of model performances in the presence of heterogeneous preferences and random effects utilities awareness By Gusarov, N.; Talebijmalabad, A.; Joly, I.
  4. Prospects and challenges of quantum finance By Adam Bouland; Wim van Dam; Hamed Joorati; Iordanis Kerenidis; Anupam Prakash
  5. Predicting well-being based on features visible from space – the case of Warsaw By Krystian Andruszek; Piotr Wójcik
  6. Population synthesis for urban resident modeling using deep generative models By Martin Johnsen; Oliver Brandt; Sergio Garrido; Francisco C. Pereira
  7. COVID-Town: An Integrated Economic-Epidemiological Agent-Based Model By Patrick Mellacher
  8. Risk Caused by the Propagation of Earthquake Losses through the Economy Using Spatial CGE Models By León, José Antonio; Ordaz, Mario; Haddad, Eduardo; Araújo, Inácio
  9. Identifying Consumer Preferences from User- and Crowd-Generated Digital Footprints by Leveraging Machine Learning and Natural Language Processing By Jikhan Jeong
  10. Forecasting the Spread of SARS-CoV-2 is inherently Ambiguous given the Current State of Virus Research By Koenen, Melissa; Balvert, Marleen; Brekelmans, Ruud; Stienen, Valentijn; Wagenaar, Joris
  11. We have just explained real convergence factors using machine learning By Piotr Wójcik; Bartłomiej Wieczorek
  12. Assessing Short‑Term and Long‑Term Economic and Environmental Effects of the COVID‑19 Crisis in France By Paul Malliet; Frédéric Reynés; Gissela Landa; Meriem Hamdi‑cherif; Aurélien Saussay
  13. Liquidity in resolution: estimating possible liquidity gaps for specific banks in resolution and in a systemic crisis By Parisi, Laura; Chalamandaris, Dimitrios; Amamou, Raschid; Torstensson, Pär; Baumann, Andreas
  14. Double blind vs. open review: an evolutionary game logit-simulating the behavior of authors and reviewers By Mantas Radzvilas; Francesco De Pretis; William Peden; Daniele Tortoli; Barbara Osimani
  15. Analysis and Forecasting of Financial Time Series Using CNN and LSTM-Based Deep Learning Models By Sidra Mehtab; Jaydip Sen; Subhasis Dasgupta
  16. China's Missing Pigs: Correcting China's Hog Inventory Data Using a Machine Learning Approach By Shao, Yongtong; Xiong, Tao; Li, Minghao; Hayes, Dermot; Zhang, Wendong; Xie, Wei
  17. Predicting United States Policy Outcomes with Random Forests By Shawn K. McGuire; Charles B. Delahunt
  18. microWELT: Microsimulation Projection of Full Generational Accounts for Austria and Spain By Martin Spielauer; Thomas Horvath; Marian Fink; Gemma Abio; Guadalupe Souto Nieves; Concepció Patxot
  19. Public Policies And The Art Of Catching Up: Matching The Historical Evidence With A Multi-Country Agent-Based Model By Giovanni Dosi; Andrea Roventini; Emmanuele Russo
  20. North and South: A Regional Model of the UK By Minford, Patrick; Gai, Yue; Meenagh, David
  21. Mostly Harmless Machine Learning: Learning Optimal Instruments in Linear IV Models By Jiafeng Chen; Daniel L. Chen; Greg Lewis

  1. By: Alberto Cardaci (Lombardy Advanced School of Economics Milan); Francesco Saraceno (Observatoire français des conjonctures économiques)
    Abstract: Our paper investigates the impact of rising inequality in a two-country macroeconomic model with an agent-based household sector characterized by peer effects in consumption. In particular, the model highlights the role of inequality in determining diverging balance of payments dynamics within a currency union. Inequality may drive the two countries into different growth patterns: where peer effects in consumption interact with higher credit availability, rising income inequality leads to the emergence of a debt-led growth regime. Where social norms determine weaker emulation and credit availability is lower, an export-led regime arises. Eventually, a crisis emerges endogenously due to the sudden stop of capital flows from the net lending country, triggered by the excessive risk associated with the dramatic amount of private debt accumulated by households in the borrowing country. Monte Carlo simulations for a wide range of calibrations confirm the robustness of our results.
    Keywords: Inequality; Current account; Currency union; Agent-based model
    JEL: C63 D31 E21 F32 F43
    Date: 2019–07
  2. By: Sangeeta Khorana (Department of Accounting, Finance and Economics, Bournemouth University); Badri Narayanan G (School of Environmental and Forestry Sciences, University of Washington-Seattle); Nicholas Perdikis (Aberystwyth University, SY23 3AL)
    Abstract: This paper employs a computable general equilibrium (CGE) dynamic simulation model to analyse how Brexit is likely to impact the Welsh economy. The model simulates two potential future trade relationship scenarios between the United Kingdom (UK) and European Union (EU) for 29 March 2019: (a) No-deal Brexit, i.e. trading partners revert to World Trade Organization (WTO) rules; (b) Limited transition period and/or extension of Article 50. The model demonstrates how Welsh exports and imports, output, prices and employment are likely to be affected by Brexit in the long term. The scenarios modelled present a negative forecast for the Welsh (and UK) economy and industry, and show that the macroeconomic variables are sensitive to the policy disruption caused by Brexit. Projections show gross domestic product (GDP), GDP per capita, trade, investment and employment losses for the Welsh economy. A no-deal Brexit, which sees the UK reverting to trading with the EU on WTO terms, generates the largest losses for Wales (and the UK) in the long term. In light of the results, it is important to avoid a no-deal Brexit that sees high losses and tariff barriers returning. A transition period arrangement or an extension to Article 50 also projects long-term losses for Wales. However, losses depend on the length of the transition period, and the results show that a longer transition minimises losses for Wales (and the UK). From a policy perspective, a deal with an extended transition period should be agreed between the UK and EU as soon as possible to enable the continuation of existing EU-Wales trading arrangements.
    Keywords: Brexit; EU and UK; local economic impact; CGE modelling
    JEL: F13 F15 F17 C68
    Date: 2019–02
  3. By: Gusarov, N.; Talebijmalabad, A.; Joly, I.
    Abstract: This work is a cross-disciplinary study of econometric and machine learning (ML) models applied to consumer choice preference modelling. To bridge the interdisciplinary gap, a simulation and theory-testing framework is proposed. It incorporates all essential steps, from hypothetical setting generation to the comparison of various performance metrics. The flexibility of the framework in theory-testing and model comparison over economic and statistical indicators is illustrated based on the work of Michaud, Llerena and Joly (2012). Two datasets are generated using predefined utility functions simulating the presence of homogeneous and heterogeneous individual preferences for alternatives' attributes. Then, three models drawn from the econometrics and ML disciplines are estimated and compared. The study demonstrates the efficiency of the proposed methodological approach, which successfully captures the differences between models from the two fields under both homogeneous and heterogeneous consumer preferences.
    JEL: C25 C45 C52 C80 C90
    Date: 2020
  4. By: Adam Bouland; Wim van Dam; Hamed Joorati; Iordanis Kerenidis; Anupam Prakash
    Abstract: Quantum computers are expected to have substantial impact on the finance industry, as they will be able to solve certain problems considerably faster than the best known classical algorithms. In this article we describe such potential applications of quantum computing to finance, starting with the state-of-the-art and focusing in particular on recent works by the QC Ware team. We consider quantum speedups for Monte Carlo methods, portfolio optimization, and machine learning. For each application we describe the extent of quantum speedup possible and estimate the quantum resources required to achieve a practical speedup. The near-term relevance of these quantum finance algorithms varies widely across applications - some of them are heuristic algorithms designed to be amenable to near-term prototype quantum computers, while others are proven speedups which require larger-scale quantum computers to implement. We also describe powerful ways to bring these speedups closer to experimental feasibility - in particular describing lower depth algorithms for Monte Carlo methods and quantum machine learning, as well as quantum annealing heuristics for portfolio optimization. This article is targeted at financial professionals and no particular background in quantum computation is assumed.
    Date: 2020–11
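    A minimal classical baseline helps make the claimed speedup concrete: standard Monte Carlo pricing has estimation error shrinking as 1/sqrt(N) in the number of samples N, while quantum amplitude estimation improves this to roughly 1/N. The sketch below (illustrative parameters, not drawn from the paper) prices a European call classically:

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, n, seed=0):
    """Classical Monte Carlo price of a European call under geometric
    Brownian motion. The standard error shrinks as 1/sqrt(n); quantum
    amplitude estimation would shrink it roughly as 1/n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        # Terminal price under risk-neutral GBM dynamics.
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        total += max(st - k, 0.0)
    return math.exp(-r * t) * total / n

# Illustrative (made-up) contract: at-the-money call, one year to expiry.
price = mc_call_price(s0=100.0, k=100.0, r=0.01, sigma=0.2, t=1.0, n=50_000)
```

With 50,000 samples the estimate lands near the Black-Scholes value of about 8.4 for these parameters; amplitude estimation could reach comparable accuracy with on the order of the square root as many oracle calls, which is the quadratic speedup the article discusses.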
  5. By: Krystian Andruszek (Data Science Lab WNE UW); Piotr Wójcik (Faculty of Economic Sciences, Data Science Lab WNE UW, University of Warsaw)
    Abstract: In recent years, the availability of satellite imagery has grown rapidly. In addition, deep neural networks have gained popularity and become widely used in various applications. This article applies innovative deep learning and machine learning methods to data describing objects visible from space. High-resolution daytime satellite images are used to extract features for particular areas with the use of transfer learning and convolutional neural networks. The extracted features are then used in machine learning models (LASSO and random forest) as predictors of various socio-economic indicators. The analysis is performed at the local level of Warsaw districts. The findings from such an approach can help provide an almost continuous measurement of economic well-being, independently of statistical offices.
    Keywords: well-being, economic indicators, Open Street Map, satellite images, Warsaw
    JEL: I31 R12 O18 C14
    Date: 2020
  6. By: Martin Johnsen; Oliver Brandt; Sergio Garrido; Francisco C. Pereira
    Abstract: The impacts of new real estate developments are strongly associated with their population distribution (types and compositions of households, incomes, social demographics), conditioned on aspects such as dwelling typology, price, location, and floor level. This paper presents a machine-learning-based method to model the population distribution of upcoming developments of new buildings within larger neighborhood/condo settings. We use a real data set from Ecopark Township, a real estate development project in Hanoi, Vietnam, where we study two machine learning algorithms from the deep generative models literature to create a population of synthetic agents: the Conditional Variational Auto-Encoder (CVAE) and Conditional Generative Adversarial Networks (CGAN). A large experimental study shows that the CVAE outperforms both the empirical distribution, a non-trivial baseline model, and the CGAN in estimating the population distribution of new real estate development projects.
    Date: 2020–11
  7. By: Patrick Mellacher
    Abstract: I develop a novel macroeconomic-epidemiological agent-based model to study the impact of the COVID-19 pandemic under varying policy scenarios. Agents differ with regard to their profession, family status and age, and interact with other agents at home, at work or during leisure activities. The model makes it possible to implement and test actually used or counterfactual policies, such as closing schools or the leisure industry, explicitly in the model, in order to explore their impact on the spread of the virus and their economic consequences. The model is calibrated with German statistical data on time use, demography, households, firm demography, employment, company profits and wages. I set up a baseline scenario based on the German containment policies and fit the epidemiological parameters of the simulation to the observed German death curve and an estimated infection curve of the first COVID-19 wave. My model suggests that had policymakers acted one week later, the death toll of the first wave in Germany would have been 180% higher, whereas it would have been 60% lower had the policies been enacted a week earlier. I finally discuss two stylized fiscal policy scenarios: procyclical (zero-deficit) and anticyclical fiscal policy. In the zero-deficit scenario a vicious circle emerges, in which the economic recession spreads from the high-interaction leisure industry to the rest of the economy. Even after the virus is eliminated and the restrictions are lifted, the economic recovery is incomplete. Anticyclical fiscal policy, on the other hand, limits the economic losses and allows for a V-shaped recovery, without increasing the number of deaths. These results suggest that an optimal response to the pandemic, whether aiming at containment or holding out for a vaccine, combines the early introduction of containment measures to keep the number of infected low with expansionary fiscal policy to keep output in lower-risk sectors high.
    Date: 2020–11
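    The mechanics of such a model can be illustrated with a deliberately tiny susceptible-infectious-recovered (SIR) sketch — far simpler than COVID-Town, with made-up parameters — in which a lockdown scales down daily contacts and earlier intervention lowers the cumulative number of infections:

```python
import random

def simulate(n_agents=2000, contacts=8, p_infect=0.05, p_recover=0.1,
             lockdown_day=30, lockdown_factor=0.2, days=120, seed=42):
    """Toy agent-based SIR epidemic: each day every infectious agent meets
    a random set of others; from lockdown_day on, contacts are scaled down.
    Returns the cumulative number of ever-infected agents."""
    rng = random.Random(seed)
    state = ["S"] * n_agents
    for i in rng.sample(range(n_agents), 10):
        state[i] = "I"
    total_infected = 10
    for day in range(days):
        k = contacts if day < lockdown_day else max(1, int(contacts * lockdown_factor))
        newly = []
        for i, s in enumerate(state):
            if s != "I":
                continue
            for j in rng.sample(range(n_agents), k):
                if state[j] == "S" and rng.random() < p_infect:
                    newly.append(j)
            if rng.random() < p_recover:
                state[i] = "R"
        for j in newly:
            if state[j] == "S":
                state[j] = "I"
                total_infected += 1
    return total_infected

late = simulate(lockdown_day=40)   # restrictions arrive after the epidemic builds
early = simulate(lockdown_day=10)  # earlier restrictions, smaller epidemic
```

The qualitative pattern — earlier containment, fewer infections — mirrors the paper's timing result, though the calibrated model adds professions, households, firms and fiscal policy on top of this skeleton.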
  8. By: León, José Antonio; Ordaz, Mario; Haddad, Eduardo (Departamento de Economia, Universidade de São Paulo); Araújo, Inácio (Departamento de Economia, Universidade de São Paulo)
    Abstract: A country's economy is exposed to shocks induced by natural and man-made disasters. This paper presents an effort to estimate, in a systematic and probabilistic manner, the national and regional economic consequences of earthquakes. In addition to addressing production losses, our model computes standard risk metrics for multiple components of the economy, such as employment, GDP, gross regional product, inflation, export volume, etc. The proposed approach is illustrated with an example developed for Chile, whose results are the first of their kind. The results reveal that the average annual loss (AAL) of gross output, GDP, and export volume in Chile amounts to 277, 305 and 62 million dollars respectively, while the AAL of employment is 7,786 workers. The Santiago Metropolitan Region concentrates ~43% of the total production AAL, while the Valparaíso Region is the riskiest, with a regional production AAL of 0.21%. We also present loss exceedance curves for different components of the Chilean economy at both the national and regional levels.
    Keywords: Natural disasters; Earthquake; Seismic risk model; CGE model
    JEL: C68 Q54 R10
    Date: 2020–11–14
  9. By: Jikhan Jeong
    Abstract: Inexperienced consumers may have high uncertainty about experience goods that require technical knowledge and skills to operate effectively; therefore, experienced consumers' prior reviews can be useful for inexperienced ones. However, the one-sided review system only provides the opportunity for consumers to write a review as a buyer and contains no feedback from the seller's side, so the information displayed about individual buyers is limited. This study analyzes consumers' digital footprints (DFs) to identify and predict unobserved consumer preferences from online product reviews. It makes use of Python coding along with high-performance computing to extract reviewers' DFs for a specific product group (programmable thermostats) from a dataset of 141 million Amazon reviews. It identifies consumers' sentiment toward product content dimensions (PCDs) extracted from review text by applying topic modeling and domain expert annotations. Questionable reviews (posted by "suspicious one-time reviewers" and "always-the-same rating reviewers") are excluded. This paper obtains three main results. First, I find that the factors that affect consumer ratings are: (a) users' DFs (e.g., length of the product review, average rating across all categories, volume of prior reviews overall and in sub-categories), (b) reviewers' attitudes toward eight product content dimensions (smart connectivity, easiness, energy saving, functionality, support, price value, privacy, and the Amazon effect), and (c) other prior reviewers' DFs (e.g., length of the review summary). All the heteroskedastic ordered probit models with DF and sentiment variables show a better model fit than the base model. This paper is the first to identify the effect of the service quality of the online platform on ratings.
Second, extreme gradient boosting (XGBoost) obtains the highest F1 score for predicting the ratings of potential consumers before they make a purchase or write a review. All the models containing DF and sentiment variables show higher prediction performance than the base model. Classifications with a lower range of labels (three-class or binary) show better prediction performance than the five-star rating classification, although performance for the minority class is low. Third, a convolutional neural network (CNN) on top of Bidirectional Encoder Representations from Transformers (BERT) embeddings shows the highest F1 score for classifying consumers' sentiment toward a specific PCD. Overall, the approach developed in this paper is applicable, scalable, and interpretable for distinguishing important drivers of consumer reviews for different goods in a specific industry, and can be used by industry to identify and predict unobserved consumer preferences and sentiment associated with product content dimensions.
    JEL: D80 M21 M31 C45
    Date: 2020–11–10
  10. By: Koenen, Melissa (Tilburg University, Center For Economic Research); Balvert, Marleen (Tilburg University, Center For Economic Research); Brekelmans, Ruud (Tilburg University, Center For Economic Research); Stienen, Valentijn (Tilburg University, Center For Economic Research); Wagenaar, Joris (Tilburg University, Center For Economic Research)
    Keywords: SARS-CoV-2; Simulation model; Epidemiologic; Virus and disease progression characteristics
    Date: 2020
  11. By: Piotr Wójcik (Faculty of Economic Sciences, Data Science Lab WNE UW, University of Warsaw); Bartłomiej Wieczorek (Data Science Lab WNE UW)
    Abstract: There are several competing empirical approaches to identifying the factors of real economic convergence. However, all previous studies of cross-country convergence assume a linear model specification. This article takes a novel approach and shows the application of several machine learning tools to this topic, discussing their advantages over other methods, including the possibility of identifying nonlinear relationships without any a priori assumption about their shape. The results suggest that the conditional convergence observed in earlier studies could have been a result of inappropriate model specification. We find that in a correct, non-linear approach, initial GDP is not (strongly) correlated with growth. In addition, the tools of interpretable machine learning allow us to discover the shape of the relationship between average growth and initial GDP. Based on these tools we demonstrate the occurrence of club convergence.
    Keywords: cross-country convergence, conditional convergence, determinants, machine learning, non-linear
    JEL: O47 C14 C52
    Date: 2020
  12. By: Paul Malliet (Observatoire français des conjonctures économiques); Frédéric Reynés (Observatoire français des conjonctures économiques); Gissela Landa (Observatoire français des conjonctures économiques); Meriem Hamdi‑cherif; Aurélien Saussay (Observatoire français des conjonctures économiques)
    Abstract: In response to the COVID-19 health crisis, the French government imposed drastic lockdown measures for a period of 55 days. This paper provides a quantitative assessment of the economic and environmental impacts of these measures in the short and long term. We use a Computable General Equilibrium model designed to assess the impacts of environmental and energy policies at the macroeconomic and sectoral levels. We find that the lockdown has led to a significant decrease in economic output of 5% of GDP, but a positive environmental impact with a 6.6% reduction in CO2 emissions in 2020. Both decreases are temporary: economic and environmental indicators return to their baseline trajectory after a few years. CO2 emissions even end up significantly higher after the COVID-19 crisis when we account for persistently low oil prices. We then investigate whether implementing carbon pricing can still yield positive macroeconomic dividends in the post-COVID recovery. We find that implementing ambitious carbon pricing speeds up economic recovery while significantly reducing CO2 emissions. By maintaining high fossil fuel prices, carbon taxation reduces the imports of fossil energy and stimulates energy efficiency investments, while the full redistribution of tax proceeds does not hamper the recovery.
    Keywords: Carbon tax; CO2 emissions; Macroeconomic modeling; Neo-Keynesian CGE model; Post-COVID economy
    JEL: E12 E17 E27 E37 E47 D57 D58
    Date: 2020
  13. By: Parisi, Laura; Chalamandaris, Dimitrios; Amamou, Raschid; Torstensson, Pär; Baumann, Andreas
    Abstract: This paper contributes to the debate on liquidity in resolution by providing a quantitative assessment of liquidity gaps of banks in resolution in the euro area. It estimates possible ranges of liquidity gaps for significant banks under different assumptions and scenarios. The findings suggest that, while the average liquidity gaps in resolution are limited, the averages hide significant outliers. The paper thus shows that, under adverse circumstances, the instruments currently available to provide liquidity support to financial institutions in the euro area would be insufficient.
    JEL: G01 G21 G28 G33 C63
    Keywords: bank runs, contagion, Liquidity, Monte Carlo simulations, resolution, systemic crisis
    Date: 2020–11
  14. By: Mantas Radzvilas; Francesco De Pretis; William Peden; Daniele Tortoli; Barbara Osimani
    Abstract: Despite the tremendous successes of science in providing knowledge and technologies, the Replication Crisis has highlighted that scientific institutions have much room for improvement. Peer review is one target of criticism and suggested reforms. However, despite numerous controversies over peer-review systems, plus the obvious complexity of the incentives affecting the decisions of authors and reviewers, there has been very little systematic and strategic analysis of peer-review systems. In this paper, we begin to address this feature of the peer-review literature by applying the tools of game theory. We use simulations to develop an evolutionary model based around a game played by authors and reviewers, before exploring some of its tendencies. In particular, we examine the relative impact of double-blind peer review and open review on incentivising reviewer effort under a variety of parameters. We also compare (a) the impact of one review system versus another with (b) other alterations, such as higher costs of reviewing. We find that there is no reliable difference between peer-review systems in our model. Furthermore, under some conditions, higher payoffs for good reviewing can lead to less (rather than more) author effort under open review. Finally, compared to the other parameters that we vary, it is the exogenous utility of author effort that makes an important and reliable difference in our model, which raises the possibility that peer review might not be an important target for institutional reforms.
    Date: 2020–11
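    The "logit" in the title refers to quantal-response behaviour: players choose better-paying actions more often, but not always, with a temperature parameter governing the noise. A minimal sketch with hypothetical payoffs (not the paper's calibration):

```python
import math
import random

def logit_choice(payoffs, temperature, rng):
    """Quantal-response (logit) rule: pick action a with probability
    proportional to exp(payoffs[a] / temperature). High temperature
    means near-random choice; low temperature approaches best response."""
    weights = [math.exp(p / temperature) for p in payoffs]
    r = rng.random() * sum(weights)
    cum = 0.0
    for action, w in enumerate(weights):
        cum += w
        if r <= cum:
            return action
    return len(weights) - 1

def effort_share(rounds=10_000, shirk_payoff=0.8, effort_payoff=1.0,
                 temperature=0.25, seed=0):
    """Share of rounds in which a logit reviewer exerts effort (action 1)
    rather than shirking (action 0). Payoffs here are hypothetical."""
    rng = random.Random(seed)
    choices = (logit_choice([shirk_payoff, effort_payoff], temperature, rng)
               for _ in range(rounds))
    return sum(choices) / rounds

share = effort_share()
```

With effort paying 1.0, shirking paying 0.8 and temperature 0.25, the logit reviewer exerts effort roughly 69% of the time — the logistic function of the payoff gap divided by the temperature.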
  15. By: Sidra Mehtab; Jaydip Sen; Subhasis Dasgupta
    Abstract: Prediction of stock prices and stock price movement patterns has always been a critical area of research. While the well-known efficient market hypothesis rules out any possibility of accurate prediction of stock prices, there are formal propositions in the literature demonstrating that accurate modeling of the predictive systems can enable us to predict stock prices with a very high level of accuracy. In this paper, we present a suite of deep learning-based regression models that yields a very high level of accuracy in stock price prediction. To build our predictive models, we use the historical stock price data of a well-known company listed on the National Stock Exchange (NSE) of India during the period December 31, 2012 to January 9, 2015. The stock prices are recorded at five-minute intervals during each working day of the week. Using these extremely granular stock price data, we build four convolutional neural network (CNN) and five long short-term memory (LSTM)-based deep learning models for accurate forecasting of future stock prices. We provide detailed results on the forecasting accuracy of all our proposed models based on their execution time and their root mean square error (RMSE) values.
    Date: 2020–11
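    The RMSE criterion the authors report is simple to state; the sketch below computes it for a naive last-value forecast on made-up prices — the kind of baseline any CNN or LSTM model should beat:

```python
import math

def rmse(actual, predicted):
    """Root mean square error, the accuracy metric used to rank the models."""
    assert len(actual) == len(predicted)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Made-up five-minute prices; the naive forecast predicts each price
# by the previous observation.
prices = [101.2, 101.5, 101.1, 101.8, 102.0, 101.7]
naive_forecast = prices[:-1]
error = rmse(prices[1:], naive_forecast)
```

For these made-up numbers the naive baseline scores an RMSE of about 0.42; on real data, a deep learning model is only useful if it comes in below the equivalent baseline figure.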
  16. By: Shao, Yongtong; Xiong, Tao; Li, Minghao; Hayes, Dermot; Zhang, Wendong; Xie, Wei
    Abstract: Small sample sizes often limit forecasting tasks such as the prediction of production, yield, and consumption of agricultural products. Machine learning offers an appealing alternative to traditional forecasting methods. In particular, Support Vector Regression has superior forecasting performance in small-sample applications. In this article, we introduce Support Vector Regression via an application to China's hog market. Since 2014, China's hog inventory data have shown an abnormal decline that contradicts price and consumption trends. We use Support Vector Regression to predict the true inventory based on the price-inventory relationship before 2014. We show that, in this application with a small sample size, Support Vector Regression outperforms neural networks, random forests, and linear regression. Predicted hog inventory decreased by 3.9% from November 2013 to September 2017, instead of the 25.4% decrease in the reported data.
    Date: 2020–01–01
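    The correction logic — fit the price-inventory relationship on the trusted pre-2014 data, then predict what inventory should have been afterwards — can be sketched with plain least squares standing in for Support Vector Regression (synthetic numbers, not the paper's data):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x. A linear stand-in for
    the paper's Support Vector Regression; the train-on-trusted-data,
    predict-afterwards logic is the same."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Synthetic pre-2014 observations (not the paper's data): inventory
# falls as price rises, so the fitted relationship lets us back out
# what inventory "should" be from prices observed later.
pre2014_price = [10.0, 11.0, 12.0, 13.0, 14.0]
pre2014_inventory = [470.0, 455.0, 442.0, 431.0, 415.0]
a, b = fit_line(pre2014_price, pre2014_inventory)

observed_price = 13.5                      # a later, trusted price signal
predicted_inventory = a + b * observed_price
```

The paper's contribution is to replace the linear fit with SVR, which tolerates the small sample while capturing a more flexible price-inventory relationship.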
  17. By: Shawn K. McGuire; Charles B. Delahunt (University of Washington, Seattle, WA)
    Abstract: Two decades of U.S. government legislative outcomes, as well as the policy preferences of rich people, the general population, and diverse interest groups, were captured in a detailed dataset curated and analyzed by Gilens, Page et al. (2014). They found that the preferences of the rich correlated strongly with policy outcomes, while the preferences of the general population did not, except via a linkage with rich people's preferences. Their analysis applied the tools of classical statistical inference, in particular logistic regression. In this paper we analyze the Gilens dataset using the complementary tools of Random Forest classifiers (RFs), from machine learning. We present two primary findings, concerning prediction and inference respectively: (i) Holdout test sets can be predicted with approximately 70% balanced accuracy by models that consult only the preferences of rich people and a small number of powerful interest groups, as well as policy area labels. These results include retrodiction, where models trained on pre-1997 cases predicted "future" (post-1997) cases. The 20% gain in accuracy over baseline (chance), in this detailed but noisy dataset, indicates the high importance of a few wealthy players in U.S. policy outcomes, and aligns with a body of research indicating that the U.S. government has significant plutocratic tendencies. (ii) The feature selection methods of RF models identify especially salient subsets of interest groups (economic players). These can be used to further investigate the dynamics of governmental policy making, and also offer an example of the potential value of RF feature selection methods for inference on datasets such as this one.
    Keywords: political economy, financial crisis, political parties, political money.
    JEL: G20 L5 N22 D72 G38 P16 K22
    Date: 2020–10–02
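    Balanced accuracy, the metric quoted above, is the unweighted mean of per-class recall, which prevents a majority-class guesser from looking good on imbalanced policy-outcome data. A short sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Unweighted mean of per-class recall: a majority-class guesser
    scores only 1 / n_classes, however imbalanced the data."""
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

# Toy imbalanced sample: always predicting "pass" gets 90% raw accuracy
# on these labels but only 50% balanced accuracy.
y_true = ["pass"] * 9 + ["fail"]
y_pred = ["pass"] * 10
score = balanced_accuracy(y_true, y_pred)
```

Against this 50% chance level, the paper's roughly 70% balanced accuracy is the "20% gain over baseline" referred to in the abstract.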
  18. By: Martin Spielauer (WIFO); Thomas Horvath; Marian Fink; Gemma Abio; Guadalupe Souto Nieves; Concepció Patxot (University of Barcelona)
    Abstract: This paper studies the effect of population ageing on the inter- and intra-generational redistribution of income from a longitudinal perspective, comparing lifetime measures of income and transfers by generation, gender, education and family characteristics. To this end, we incorporate new disaggregated National Transfer Account (NTA) data and concepts of generational accounting into the dynamic microsimulation model microWELT. This bottom-up modelling strategy makes it possible to project, for each generation and socio-demographic group, the net present value of expected transfers. microWELT delivers detailed sociodemographic projections that are consistent with Eurostat population projections while additionally providing the required detail concerning changes in the population composition by education and family characteristics. The model also allows mechanisms to be incorporated that balance budgets over time in response to population ageing. Our study compares the results for Spain and Austria. We find significant differences in the role of private and public transfers related to parenthood. While in both countries parents privately transfer substantially more money to others, the Austrian welfare state fully compensates for these differences through public transfers to parents. No such compensation is observed in Spain.
    Keywords: Microsimulation, Education, Demographic Change, National Transfer Accounts
    Date: 2020–11–12
  19. By: Giovanni Dosi (Laboratory of Economics and Management); Andrea Roventini; Emmanuele Russo (Scuola Superiore Sant'Anna)
    Abstract: In this paper, we study the effects of industrial policies on international convergence using a multi-country agent-based model which builds upon Dosi et al. (2019b). The model features a group of microfounded economies, with evolving industries, populated by heterogeneous firms that compete in international markets. In each country, technological change is driven by firms' search and innovation activities, while aggregate demand formation and distribution follow Keynesian dynamics. Interactions among countries take place via trade flows and international technological imitation. We employ the model to assess the different strategies that laggard countries can adopt to catch up with leaders: market-friendly policies; industrial policies targeting the development of firms' capabilities and R&D investments, as well as trade restrictions for infant industry protection; and protectionist policies focusing on tariffs only. We find that markets cannot do the magic: in the absence of government intervention, laggards will continue to fall behind. By contrast, industrial policies can successfully drive international convergence between leaders and laggards, while protectionism alone is not sufficient to support catching up, and countries get stuck in a sort of middle-income trap. Finally, in a global trade war, where developed economies impose retaliatory tariffs, both laggards and leaders are worse off and world productivity growth slows down.
    Keywords: Endogenous growth; Catching up; Technology-gaps; Industrial policies; Agent-based models
    JEL: F41 F43 O4 O3
    Date: 2020–05–06
  20. By: Minford, Patrick (Cardiff Business School); Gai, Yue (Cardiff Business School); Meenagh, David (Cardiff Business School)
    Abstract: We set up a two-region model to study the policy challenge of bringing the North's income up to the level of the South in the UK. The model focuses on labour costs as the driver of output gains through the international competitiveness channel. The empirical results show that the regional model's behaviour fits the behaviour of the regional UK data over the period 1986Q1 to 2019Q4, using the demanding Indirect Inference method. We also carry out a Monte Carlo power test, which shows that the empirical results we obtain are trustworthy and can provide us with a reliable guide for policy reform. The results suggest that in response to tax cuts and labour market reforms, GDP in the North increases almost twice as much as GDP in the South. Given that a broad programme of tax cuts and regulatory reform would more than pay for itself in the long run, it must be considered a highly attractive political agenda.
    Keywords: Regional study; DSGE model; Policy implication; Indirect Inference
    JEL: E32 E60 P48
    Date: 2020–11
  21. By: Jiafeng Chen; Daniel L. Chen; Greg Lewis
    Abstract: We provide some simple theoretical results that justify incorporating machine learning in a standard linear instrumental variable setting, prevalent in empirical research in economics. Machine learning techniques, combined with sample-splitting, extract nonlinear variation in the instrument that may dramatically improve estimation precision and robustness by boosting instrument strength. The analysis is straightforward in the absence of covariates. The presence of linearly included exogenous covariates complicates identification, as the researcher would like to prevent nonlinearities in the covariates from providing the identifying variation. Our procedure can be effectively adapted to account for this complication, based on an argument by Chamberlain (1992). Our method preserves standard intuitions and interpretations of linear instrumental variable methods and provides a simple, user-friendly upgrade to the applied economics toolbox. We illustrate our method with an example in law and criminal justice, examining the causal effect of appellate court reversals on district court sentencing decisions.
    Date: 2020–11
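    The core idea — learn a flexible first stage on one half of the sample, then use its out-of-sample fitted values as the instrument on the other half — can be sketched with a crude piecewise-constant learner on synthetic data (an illustration of the split-sample logic, not the authors' estimator):

```python
import random

def fit_binned_first_stage(z, x, n_bins=10):
    """Crude 'machine learning' first stage: a piecewise-constant fit of
    x on z (mean of x within z-bins), learned on half of the sample.
    Stands in for the flexible learners the paper allows."""
    pairs = sorted(zip(z, x))
    size = len(pairs) // n_bins
    edges, means = [], []
    for b in range(n_bins):
        chunk = pairs[b * size:] if b == n_bins - 1 else pairs[b * size:(b + 1) * size]
        edges.append(chunk[-1][0])
        means.append(sum(xi for _, xi in chunk) / len(chunk))

    def predict(zi):
        for e, m in zip(edges, means):
            if zi <= e:
                return m
        return means[-1]
    return predict

def iv_slope(w, x, y):
    """Instrumental-variable slope cov(w, y) / cov(w, x)."""
    n = len(w)
    mw, mx, my = sum(w) / n, sum(x) / n, sum(y) / n
    num = sum((wi - mw) * (yi - my) for wi, yi in zip(w, y))
    den = sum((wi - mw) * (xi - mx) for wi, xi in zip(w, x))
    return num / den

# Synthetic data: the instrument z moves x only through z**2, and the
# confounder u biases naive OLS of y on x. The true effect is 2.0.
rng = random.Random(0)
n = 4000
z = [rng.uniform(-2.0, 2.0) for _ in range(n)]
u = [rng.gauss(0.0, 1.0) for _ in range(n)]
x = [zi * zi + ui + rng.gauss(0.0, 0.5) for zi, ui in zip(z, u)]
y = [2.0 * xi + ui + rng.gauss(0.0, 0.5) for xi, ui in zip(x, u)]

# Sample splitting: learn the first stage on the first half, then use
# its out-of-sample predictions as the instrument on the second half.
half = n // 2
predict = fit_binned_first_stage(z[:half], x[:half])
w = [predict(zi) for zi in z[half:]]
beta = iv_slope(w, x[half:], y[half:])
```

A linear first stage would be nearly useless here — the covariance of z with z² is zero on a symmetric support — so all of the instrument strength comes from the nonlinear variation the learner extracts, which is exactly the boost the paper describes.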

This nep-cmp issue is ©2020 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.