nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒09‒20
ten papers chosen by
Stan Miles
Thompson Rivers University

  1. Simple Matching Protocols for Agent-based Models By Andrea Borsato
  2. Artificial Intelligence in the Field of Economics By Steve J. Bickley; Ho Fai Chan; Benno Torgler
  3. Nowcasting Growth Rates of Export and Import Value Volumes by Commodity Group By Maiorova, Ksenia; Fokin, Nikita
  4. Investigating the determinants of successful budgeting with SVM and Binary models By Hariharan, Naveen Kunnathuvalappil
  5. Housing-Price Prediction in Colombia using Machine Learning By Otero Gomez, Daniel; Manrique, Miguel Angel Correa; Sierra, Omar Becerra; Laniado, Henry; Mateus C, Rafael; Millan, David Andres Romero
  6. Integrating R Machine Learning Algorithms in Stata using rcall: A Tutorial By Ebad F. Haghish
  7. WaveCorr: Correlation-savvy Deep Reinforcement Learning for Portfolio Management By Saeed Marzban; Erick Delage; Jonathan Yumeng Li; Jeremie Desgagne-Bouchard; Carl Dussault
  8. Using Satellite Imagery and Machine Learning to Estimate the Livelihood Impact of Electricity Access By Nathan Ratledge; Gabe Cadamuro; Brandon de la Cuesta; Matthieu Stigler; Marshall Burke
  9. Useful results for the simulation of non-optimal economies with heterogeneous agents By Pierri, Damian Rene

  1. By: Andrea Borsato
    Abstract: The main purpose of this article is to show how simple matching protocols suitable for agent-based models can be developed from scratch. Keeping the features of the underlying economy to a minimum, I develop, detail, and present the code for three matching processes. Their small size and flexibility may encourage non-expert students to engage with this stream of literature and address a variety of research topics.
    Keywords: Agent-based Modelling, Matching Protocols, Computer Simulation, Linear Matrix Algebra, R.
    JEL: A20 C63 E10 O10 O30
    Date: 2021
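The paper's matching code is written in R; as a language-agnostic illustration of what such a protocol looks like, here is a minimal sketch (in Python, not the author's code) of one of the simplest matching processes used in agent-based models: random one-to-one matching of two sides of a market. All names are illustrative.

```python
import random

def random_matching(buyers, sellers, rng=random):
    """Randomly pair each buyer with a distinct seller (one-to-one).

    If one side is longer, its surplus agents remain unmatched this round."""
    shuffled = list(sellers)
    rng.shuffle(shuffled)
    return list(zip(buyers, shuffled))

# three buyers meet four sellers; one seller stays unmatched
pairs = random_matching(["b1", "b2", "b3"], ["s1", "s2", "s3", "s4"])
```

In a full agent-based model this step would be repeated each period, with the matched pairs then trading according to the model's pricing rule.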
  2. By: Steve J. Bickley; Ho Fai Chan; Benno Torgler
    Abstract: The history of AI in economics is long and winding, much the same as the evolving field of AI itself. Economists have engaged with AI since its beginnings, albeit in varying degrees and with changing focus across time and places. In this study, we have explored the diffusion of AI and different AI methods (e.g., machine learning, deep learning, neural networks, expert systems, knowledge-based systems) through and within economic subfields, taking a scientometrics approach. In particular, we centre our accompanying discussion of AI in economics around the problems of economic calculation and social planning as proposed by Hayek. To map the history of AI within and between economic subfields, we construct two datasets containing bibliometric information on economics papers based on search query results from the Scopus database and the EconPapers (and IDEAs/RePEc) repository. We present descriptive results that map the use and discussion of AI in economics over time, place, and subfield. In doing so, we also characterise the authors and affiliations of those engaging with AI in economics. Additionally, we find positive correlations between quality of institutional affiliation and engagement with or focus on AI in economics, and negative correlations between the Human Development Index and the share of learning-based AI papers.
    Keywords: Artificial Intelligence; Machine Learning; Economics; Scientometrics; Science of Science; Bibliometrics
    JEL: B40 N01 A14
    Date: 2021–09
  3. By: Maiorova, Ksenia; Fokin, Nikita
    Abstract: In this paper we consider a set of machine learning and econometrics models, namely Elastic Net, Random Forest, XGBoost and SSVS, as applied to nowcasting a large dataset of USD volumes of Russian exports and imports by commodity group. We use lags of the volumes of export and import commodity groups, prices for some imported and exported goods, and other variables, due to which the curse of dimensionality becomes quite acute. The models we use are very popular and have proven themselves well in forecasting in the presence of the curse of dimensionality, when the number of model parameters exceeds the number of observations. The best model is a weighted combination of the machine learning methods, which outperforms the ARIMA benchmark model in nowcasting the volumes of both exports and imports. For the largest commodity groups, we often obtain significantly more accurate nowcasts than the ARIMA model, according to the Diebold-Mariano test. In addition, the nowcasts turn out to be quite close to the historical forecasts of the Bank of Russia, which were constructed under similar conditions.
    Keywords: nowcasting; foreign trade; curse of dimensionality; machine learning; Russian economy
    JEL: C52 C53 C55 F17
    Date: 2020–06
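Elastic Net, one of the methods the authors use, handles exactly the p > n setting the abstract describes by combining L1 and L2 penalties, which shrinks most coefficients to zero. A minimal sketch (synthetic data, not the paper's trade dataset; the parameter values are illustrative):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_obs, n_pred = 40, 120                    # more predictors than observations
X = rng.normal(size=(n_obs, n_pred))
beta = np.zeros(n_pred)
beta[:5] = 1.0                             # only five predictors truly matter
y = X @ beta + 0.1 * rng.normal(size=n_obs)

# l1_ratio=0.5 mixes lasso (sparsity) and ridge (stability) penalties
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
n_active = int(np.sum(model.coef_ != 0))   # sparse solution despite p > n
```

The L1 component is what keeps the fit well-posed when the number of parameters exceeds the number of observations, the situation the abstract calls the curse of dimensionality.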
  4. By: Hariharan, Naveen Kunnathuvalappil
    Abstract: Learning the determinants of successful project budgeting is crucial. This research attempts to empirically identify the determinants of a successful budget. To this end, this work applied three different supervised machine learning algorithms for classification: Support Vector Machine (SVM), logistic regression, and probit regression, with data from 470 projects. Five features were selected: coordination, participation, budget control, communication, and motivation. The SVM analysis showed that SVM can predict successful and failed budgets with fairly good accuracy. The results from the logistic and probit regressions showed that if managers properly focus on coordination, participation, budget control, and communication, the probability of success in project budgeting increases.
    Date: 2021–09–08
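The classification setup the abstract describes can be sketched as follows: five features, a binary success label, and an SVM classifier. This is a toy reconstruction on synthetic data, not the paper's 470-project dataset; the feature names come from the abstract, everything else is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# five features per project: coordination, participation, budget control,
# communication, motivation (names from the abstract; data is synthetic)
X = rng.uniform(0.0, 1.0, size=(200, 5))
# toy label: success driven mainly by the first four features
y = (X[:, :4].mean(axis=1) > 0.5).astype(int)

clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)          # in-sample accuracy of the fitted classifier
```

In the paper's setting one would of course evaluate on held-out projects rather than in-sample.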
  5. By: Otero Gomez, Daniel; Manrique, Miguel Angel Correa; Sierra, Omar Becerra; Laniado, Henry; Mateus C, Rafael; Millan, David Andres Romero
    Abstract: It is common practice to price a house without proper evaluation studies performed for assurance. The purpose of this study is therefore to provide an explanatory model by establishing parameters for accuracy in interpreting and projecting housing prices. In addition, it is intended to establish proper data preprocessing practices in order to increase the accuracy of machine learning algorithms. Indeed, according to our literature review, there are few articles and reports on the use of machine learning tools for the prediction of property prices in Colombia. The dataset on which the research is built was provided by an existing real estate company. It contains nearly 940,000 items (housing advertisements) posted on the platform from 2018 to 2020. The database was enriched using statistical imputation techniques. Housing price prediction was performed using Decision Tree regressors and LightGBM, thus yielding better alternatives for house price prediction in Colombia. Moreover, to measure the accuracy of the proposed models, the Root Mean Squared Logarithmic Error (RMSLE) statistical indicator was used. The best cross-validation results obtained were 0.25354±0.00699 for LightGBM, 0.25296±0.00511 for the Bagging Regressor, and 0.25312±0.00559 for the ExtraTree Regressor with Bagging Regressor; no statistically significant difference was found between their performances.
    Date: 2020–09–02
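RMSLE, the evaluation metric reported above, penalizes relative rather than absolute price errors, which is why it is a common choice for house prices spanning several orders of magnitude. A minimal sketch of the standard definition (illustrative prices, not the paper's data):

```python
import math

def rmsle(y_true, y_pred):
    """Root Mean Squared Logarithmic Error:
    sqrt(mean((log(1 + pred) - log(1 + true))^2))."""
    return math.sqrt(
        sum((math.log1p(p) - math.log1p(t)) ** 2
            for t, p in zip(y_true, y_pred)) / len(y_true)
    )

# two hypothetical listings: one over- and one under-predicted
err = rmsle([100_000, 250_000], [110_000, 240_000])
```

Because the metric works on log(1 + price), a 10% error on a cheap house and a 10% error on an expensive one contribute roughly equally.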
  6. By: Ebad F. Haghish (Department of Psychology, University of Oslo, Norway)
    Abstract: rcall is a Stata package that integrates R and R packages in Stata and supports seamless two-way data communication between R and Stata. The package offers two modes of data communication: 1) interactive and 2) non-interactive. In the first part of the presentation, I will introduce the latest updates of the package (version 3.0) and how to use it in practice for data analysis (interactive mode). The second part of the presentation concerns developing Stata packages with rcall (non-interactive mode) and how to defensively embed R and R packages within Stata programs. All examples in the presentation, whether for data analysis or package development, are based on embedding R machine learning algorithms in Stata and using them in practice.
    Date: 2021–09–12
  7. By: Saeed Marzban; Erick Delage; Jonathan Yumeng Li; Jeremie Desgagne-Bouchard; Carl Dussault
    Abstract: The problem of portfolio management represents an important and challenging class of dynamic decision making problems, where rebalancing decisions need to be made over time with consideration of many factors such as investor preferences, trading environments, and market conditions. In this paper, we present a new portfolio policy network architecture for deep reinforcement learning (DRL) that can exploit cross-asset dependency information more effectively and achieve better performance than state-of-the-art architectures. In particular, we introduce a new property, referred to as "asset permutation invariance", for portfolio policy networks that exploit multi-asset time series data, and design the first portfolio policy network, named WaveCorr, that preserves this invariance property when treating asset correlation information. At the core of our design is an innovative permutation invariant correlation processing layer. An extensive set of experiments is conducted using data from both the Canadian (TSX) and American (S&P 500) stock markets, and WaveCorr consistently outperforms other architectures with an impressive 3%-25% absolute improvement in terms of average annual return, and up to more than 200% relative improvement in average Sharpe ratio. We also measured an improvement of a factor of up to 5 in the stability of performance under random choices of initial asset ordering and weights. The stability of the network has been found particularly valuable by our industrial partner.
    Date: 2021–09
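The "asset permutation invariance" property the abstract introduces can be illustrated with a toy correlation-processing step (not the WaveCorr layer itself, just a minimal numerical sketch of the property): reordering the assets in the input should reorder the outputs in exactly the same way, rather than change their values.

```python
import numpy as np

def corr_features(returns):
    """Toy correlation-processing step: each asset's feature is the mean of
    its correlations with all assets (a permutation-equivariant map)."""
    corr = np.corrcoef(returns)          # assets x assets correlation matrix
    return corr.mean(axis=1)

rng = np.random.default_rng(0)
returns = rng.normal(size=(4, 250))      # 4 assets, 250 simulated daily returns
perm = np.array([2, 0, 3, 1])            # an arbitrary reordering of the assets

base = corr_features(returns)
permuted = corr_features(returns[perm])  # reorder assets, then recompute
ok = np.allclose(permuted, base[perm])   # outputs are permuted identically
```

A policy network built only from such layers gives the same portfolio regardless of how the assets happen to be ordered in the data, which is what the paper's stability results measure.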
  8. By: Nathan Ratledge; Gabe Cadamuro; Brandon de la Cuesta; Matthieu Stigler; Marshall Burke
    Abstract: In many regions of the world, sparse data on key economic outcomes inhibits the development, targeting, and evaluation of public policy. We demonstrate how advancements in satellite imagery and machine learning can help ameliorate these data and inference challenges. In the context of an expansion of the electrical grid across Uganda, we show how a combination of satellite imagery and computer vision can be used to develop local-level livelihood measurements appropriate for inferring the causal impact of electricity access on livelihoods. We then show how ML-based inference techniques deliver more reliable estimates of the causal impact of electrification than traditional alternatives when applied to these data. We estimate that grid access improves village-level asset wealth in rural Uganda by 0.17 standard deviations, more than doubling the growth rate over our study period relative to untreated areas. Our results provide country-scale evidence on the impact of a key infrastructure investment, and provide a low-cost, generalizable approach to future policy evaluation in data sparse environments.
    Date: 2021–09
  9. By: Pierri, Damian Rene
    Abstract: This paper deals with infinite-horizon non-optimal economies with aggregate uncertainty and a finite number of heterogeneous agents. It derives sufficient conditions for the existence of a recursive structure and of ergodic, stationary, and non-stationary equilibria. It also answers the following question: is it possible to derive a general framework which guarantees that numerical simulations truly reflect the behavior of endogenous variables in the model? We provide sufficient conditions for an affirmative answer in endowment economies with incomplete markets and uncountable exogenous shocks. These conditions guarantee the ergodicity of the process and hold for a particular selection mechanism. For economies with finitely many shocks, or for an arbitrary selection in economies with uncountable shocks, it is only possible to show that a computable, time-independent, recursive representation generates a stationary Markov process. The results in this paper suggest that a well-defined stochastic steady state in heterogeneous-agent models is often sensitive to the initial conditions of the economy, a fact which implies that heterogeneity may have irreversible, long-lasting effects.
    Keywords: Non-Optimal Economies; Markov Equilibrium; Heterogeneous Agents; Simulations
    JEL: C63 C68 D52 D58
    Date: 2021–09–07
  10. By: Hariharan, Naveen Kunnathuvalappil
    Abstract: Only when the input data are reliable can mathematical models and business intelligence systems for decision-making produce accurate and effective outputs. However, data taken from primary sources and gathered in a data mart may contain several anomalies that analysts must identify and correct. This research covers the activities involved in creating a high-quality dataset for business intelligence and data mining. Three techniques are addressed to achieve this goal: data validation, which detects and reduces anomalies and inconsistencies; data modification, which enhances the precision and robustness of learning algorithms; and data reduction, which produces a set of data with fewer features and records that is just as informative as the original dataset.
    Date: 2019–12–22
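Two of the three techniques named above, validation and reduction, can be sketched in a few lines. This is an illustrative toy (the field names, ranges, and records are invented, not from the paper):

```python
def validate(records, lo=0.0, hi=1e7):
    """Data validation: drop records whose 'price' is missing or out of a
    plausible range, the kind of anomaly the abstract describes."""
    return [r for r in records
            if r.get("price") is not None and lo <= r["price"] <= hi]

def reduce_features(records, keep=("price", "area")):
    """Data reduction: keep only the fields judged most informative."""
    return [{k: r[k] for k in keep if k in r} for r in records]

raw = [{"price": 250.0, "area": 80, "note": "x"},   # valid
       {"price": None, "area": 55},                 # missing value
       {"price": -10.0, "area": 60}]                # out-of-range anomaly
clean = reduce_features(validate(raw))
```

In a real pipeline the validation rules would come from domain constraints and the reduction step from a feature-selection procedure, but the shape of the workflow is the same.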

This nep-cmp issue is ©2021 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.