nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒08‒30
nineteen papers chosen by



  1. Whatever it takes to understand a central banker - Embedding their words using neural networks. By Martin Baumgaertner; Johannes Zahner
  2. Consumer credit risk analysis via machine learning algorithms By Baikulakov Shalkar; Belgibayev Zanggar
  3. Economic Determinants of Regional Trade Agreements Revisited Using Machine Learning By Simon Blöthner; Mario Larch
  4. Machine learning using Stata/Python By Giovanni Cerulli
  5. Using Deep Learning Neural Networks to Predict the Knowledge Economy Index for Developing and Emerging Economies By Andres, Antonio Rodriguez; Otero, Abraham; Amavilah, Voxi Heinrich
  6. Predicting Information Avoidance Behavior using Machine Learning By Meerza, Syed Imran Ali; Brooks, Kathleen R.; Gustafson, Christopher R.; Yiannaka, Amalia
  7. Forecasting the opening, low and high prices of financial market indices using an association of LSTM neural networks By Gabriel de Oliveira Guedes Nogueira; Marcel Otoboni de Lima
  8. How to Assess Country Risk: The Vulnerability Exercise Approach Using Machine Learning By International Monetary Fund
  9. Machine Learning on residential electricity consumption: Which households are more responsive to weather? By Jieyi Kang; David Reiner
  10. Double Machine Learning and Bad Controls – A Cautionary Tale By Paul Hünermund; Beyers Louw; Itamar Caspi
  11. Grounded reality meets machine learning: A deep-narrative analysis framework for energy policy research By Ramit Debnath; Sarah Darby; Ronita Bardhan; Kamiar Mohaddes; Minna Sunikka-Blank
  12. Deep Signature FBSDE Algorithm By Qi Feng; Man Luo; Zhaoyu Zhang
  13. Moving average options: Machine Learning and Gauss-Hermite quadrature for a double non-Markovian problem By Ludovic Goudenège; Andrea Molent; Antonino Zanette
  14. Forecasting student enrollment using time series models and recurrent neural networks By Parvez, Rezwanul; Ali Meerza, Syed Imran; Hasan Khan Chowdhury, Nazea
  15. Loss-Based Variational Bayes Prediction By David T. Frazier; Ruben Loaiza-Maya; Gael M. Martin; Bonsoo Koo
  16. Robust Risk-Aware Reinforcement Learning By Sebastian Jaimungal; Silvana Pesenti; Ye Sheng Wang; Hariom Tatsat
  17. DIGNAR-19 Toolkit Manual By Luis-Felipe Zanna; Mr. Giovanni Melina; Mr. Zamid Aligishiev
  18. Applying Artificial Intelligence in Agriculture: Evidence from Washington State Apple Orchards By Amin, Modhurima D.; Badruddoza, Syed; Mantle, Steve
  19. Adaptive Gradient Descent Methods for Computing Implied Volatility By Yixiao Lu; Yihong Wang; Tinggan Yang

  1. By: Martin Baumgaertner (THM Business School); Johannes Zahner (Philipps-Universitaet Marburg)
    Abstract: Dictionary approaches are at the forefront of current techniques for quantifying central bank communication. This paper proposes embeddings – a language model trained using machine learning techniques – to locate words and documents in a multidimensional vector space. To accomplish this, we gather a text corpus that is unparalleled in size and diversity in the central bank communication literature, and introduce a novel approach to text quantification from computational linguistics. Utilizing this corpus of over 23,000 documents from over 130 central banks, we are able to provide high-quality text representations (embeddings) for central banks. Finally, we demonstrate the applicability of embeddings through several examples in the fields of monetary policy surprises, financial uncertainty, and gender bias.
    Keywords: Word Embedding, Neural Network, Central Bank Communication, Natural Language Processing, Transfer Learning
    JEL: C45 C53 E52 Z13
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:mar:magkse:202130&r=
  2. By: Baikulakov Shalkar (Center for the Development of Payment and Financial Technologies); Belgibayev Zanggar (National Bank of Kazakhstan)
    Abstract: This project is an attempt to assess the creditworthiness of individuals through machine learning algorithms, based on regulatory data provided by second-tier banks to the central bank. Assessing borrower creditworthiness allows the central bank to investigate the quality of loans issued by second-tier banks and to predict potential systemic risks. Two linear and six nonlinear classification methods were developed (linear: logistic regression, stochastic gradient descent; nonlinear: neural networks, kNN, decision tree, random forest, XGBoost, naïve Bayes), and the algorithms were compared on accuracy, precision, and several other metrics. The nonlinear models deliver more accurate predictions than the linear models. In particular, the random forest and kNN classifiers on oversampled data demonstrated the most promising results.
    Keywords: consumer credits, machine learning, bank regulation, stochastic gradient descent (linear model), logistic regression (linear model), kNN (neighbors), random forest classifier (ensemble), decision tree (tree), Gaussian naïve Bayes, XGBoost, neural network (MLP classifier)
    JEL: G21 G28 E37 E51
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:aob:wpaper:21&r=
  3. By: Simon Blöthner; Mario Larch
    Abstract: While traditional empirical models using determinants like size and trade costs are able to predict RTA formation reasonably well, we demonstrate that allowing for machine-detected non-linear patterns improves the predictive power of RTA formation substantially. We employ machine learning methods and find that the fitted tree-based methods and neural networks deliver sharper and more accurate predictions than the probit model. For the majority of models, allowing for fixed effects increases the predictive performance considerably. We apply our models to predict the likelihood of RTA formation of the EU and the United States with their respective trading partners.
    Keywords: Regional Trade Agreements, neural networks, tree-based methods, high-dimensional fixed effects
    JEL: F14 F15 C45 C53
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_9233&r=
  4. By: Giovanni Cerulli (IRCrES-CNR)
    Abstract: We present two related Stata modules, r_ml_stata and c_ml_stata, for fitting popular machine learning (ML) methods in both regression and classification settings. Using the recent Stata/Python integration platform (sfi) of Stata 16, these commands provide optimal hyperparameter tuning via K-fold cross-validated grid search. More specifically, they make use of the Python Scikit-learn API to carry out both cross-validation and outcome/label prediction.
    Date: 2021–08–07
    URL: http://d.repec.org/n?u=RePEc:boc:scon21:25&r=
  5. By: Andres, Antonio Rodriguez; Otero, Abraham; Amavilah, Voxi Heinrich
    Abstract: Missing values and the inconsistency of the measures of the knowledge economy remain vexing problems that hamper policy-making and future research in developing and emerging economies. This paper contributes to the new and evolving literature that seeks to advance better understanding of the importance of the knowledge economy for policy and further research in developing and emerging economies. In this paper we use a supervised deep learning neural network (DLNN) approach to predict the knowledge economy index of 71 developing and emerging economies during the 1995-2017 period. Applied in combination with a data imputation procedure based on the K-closest neighbor algorithm, DLNN is capable of handling missing data problems better than alternative methods. A 10-fold validation of the DLNN yielded low quadratic and absolute errors (0.382 ± 0.065). The results are robust and efficient, and the model’s predictive power is high. There is a difference in the predictive power when we disaggregate countries into all emerging economies versus emerging Central European countries. We explain this result and leave the rest to future endeavors. Overall, this research has filled in gaps due to missing data, thereby allowing for effective policy strategies. At the aggregate level, development agencies, including the World Bank, which originated the KEI, would benefit from our approach until substitutes come along.
    Keywords: Machine deep learning neural networks; developing economies, emerging economies, knowledge economy, knowledge economy index, World Bank
    JEL: C45 C53 O38 O41 O57 P41
    Date: 2021–04–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:109137&r=
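The K-closest-neighbor imputation step the paper combines with the DLNN can be illustrated with a minimal pure-Python sketch. This is not the authors' code; filling a missing entry with the column mean over the k nearest complete rows is one common variant, assumed here for illustration:

```python
import math

def knn_impute(rows, k=3):
    """Fill None entries with the column mean over the k nearest complete rows,
    measuring Euclidean distance on the columns observed in the target row."""
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        obs = [j for j, v in enumerate(r) if v is not None]
        nearest = sorted(complete,
                         key=lambda c: math.sqrt(sum((r[j] - c[j]) ** 2
                                                     for j in obs)))[:k]
        out.append([v if v is not None else
                    sum(c[j] for c in nearest) / len(nearest)
                    for j, v in enumerate(r)])
    return out

# The missing value is filled with the mean of column 1 over the 3 neighbors.
data = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.0, None]]
filled = knn_impute(data, k=3)
```

In practice one would standardize columns before computing distances; scikit-learn's KNNImputer implements a production version of this idea.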
  6. By: Meerza, Syed Imran Ali; Brooks, Kathleen R.; Gustafson, Christopher R.; Yiannaka, Amalia
    Keywords: Institutional and Behavioral Economics, Research Methods/Statistical Methods, Health Economics and Policy
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:ags:aaea21:312876&r=
  7. By: Gabriel de Oliveira Guedes Nogueira; Marcel Otoboni de Lima
    Abstract: To make good investment decisions, it is vitally important for an investor to be able to analyze financial time series well. Within this context, studies forecasting the values and trends of stock prices have become more relevant. Currently, there are different approaches to the task. The two main ones are the historical analysis of stock prices and technical indicators, and the analysis of sentiment in news, blogs and tweets about the market. Some of the most used statistical and artificial intelligence techniques are genetic algorithms, Support Vector Machines (SVM) and different architectures of artificial neural networks. This work proposes the improvement of a model based on the association of three distinct LSTM neural networks, each acting in parallel to predict the opening, low and high prices of stock exchange indices on the day following the analysis. The dataset is composed of historical data from more than 10 indices from the world's largest stock exchanges. The results demonstrate that the model is able to predict trends and stock prices with reasonable accuracy.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.10065&r=
  8. By: International Monetary Fund
    Abstract: The IMF’s Vulnerability Exercise (VE) is a cross-country exercise that identifies country-specific near-term macroeconomic risks. As a key element of the Fund’s broader risk architecture, the VE is a bottom-up, multi-sectoral approach to risk assessments for all IMF member countries. The VE modeling toolkit is regularly updated in response to global economic developments and the latest modeling innovations. The new generation of VE models presented here leverages machine-learning algorithms. The models can better capture interactions between different parts of the economy and non-linear relationships that are not well measured in “normal times.” The performance of machine-learning-based models is evaluated against more conventional models in a horse-race format. The paper also presents direct, transparent methods for communicating model results.
    Keywords: Risk Assessment, Supervised Machine Learning, Prediction, Sudden Stop, Exchange Market Pressure, Fiscal Crisis, Debt, Financial Crisis, Economic Crisis, Economic Growth
    Date: 2021–05–07
    URL: http://d.repec.org/n?u=RePEc:imf:imftnm:2021/003&r=
  9. By: Jieyi Kang (Department of Land Economy, University of Cambridge); David Reiner (EPRG, CJBS, University of Cambridge)
    Keywords: Weather sensitivity, smart metering data, unsupervised learning, clusters, residential electricity, consumption patterns, Ireland
    JEL: C55 D12 R22 Q41
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:enp:wpaper:eprg2113&r=
  10. By: Paul Hünermund (Copenhagen Business School); Beyers Louw (Maastricht University); Itamar Caspi (Bank of Israel)
    Abstract: Double machine learning (DML) is becoming an increasingly popular tool for automated model selection in high-dimensional settings. At its core, DML assumes unconfoundedness, or exogeneity of all considered controls, which is likely to be violated if the covariate space is large. In this paper, we lay out a theory of bad controls building on the graph-theoretic approach to causality. We then demonstrate, based on simulation studies and an application to real-world data, that DML is very sensitive to the inclusion of bad controls and exhibits considerable bias even with only a few endogenous variables present in the conditioning set. The extent of this bias depends on the precise nature of the assumed causal model, which calls into question the ability to select appropriate controls for regressions in a purely data-driven way.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.11294&r=
  11. By: Ramit Debnath (EPRG, CJBS, University of Cambridge); Sarah Darby (University of Oxford); Ronita Bardhan (Department of Architecture, University of Cambridge); Kamiar Mohaddes (EPRG, CJBS, University of Cambridge); Minna Sunikka-Blank (Department of Architecture, University of Cambridge)
    Keywords: energy policy, narratives, topic modelling, computational social science, text analysis, methodological framework
    JEL: Q40 Q48 R28
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:enp:wpaper:eprg2019&r=
  12. By: Qi Feng; Man Luo; Zhaoyu Zhang
    Abstract: We propose a deep signature/log-signature FBSDE algorithm to solve forward-backward stochastic differential equations (FBSDEs) with state- and path-dependent features. By incorporating the deep signature/log-signature transformation into the recurrent neural network (RNN) model, our algorithm shortens the training time, improves the accuracy, and extends the time horizon compared to methods in the existing literature. Moreover, our algorithms can be applied to a wide range of applications such as state- and path-dependent option pricing involving high-frequency data, model ambiguity, and stochastic games, which are linked to parabolic partial differential equations (PDEs) and path-dependent PDEs (PPDEs). Lastly, we also derive the convergence analysis of the deep signature/log-signature FBSDE algorithm.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.10504&r=
  13. By: Ludovic Goudenège; Andrea Molent; Antonino Zanette
    Abstract: Evaluating moving average options is a tough computational challenge for the energy and commodity markets, as the payoff depends on the prices of an underlying observed over a moving window; when a long window is considered, the pricing problem becomes high-dimensional. We present an efficient method for pricing Bermudan-style moving average options, based on Gaussian Process Regression and Gauss-Hermite quadrature, thus named GPR-GHQ. Specifically, the proposed algorithm proceeds backward in time and, at each time step, the continuation value is computed at only a few points by Gauss-Hermite quadrature and then learned through Gaussian Process Regression. We test the proposed approach in the Black-Scholes model, where the GPR-GHQ method is made even more efficient by exploiting the positive homogeneity of the continuation value, which allows one to reduce the problem size. Positive homogeneity is also exploited to develop a binomial Markov chain, which is able to deal efficiently with medium-long windows. Secondly, we test GPR-GHQ in the Clewlow-Strickland model, the reference framework for modeling prices of energy commodities. Finally, we consider a challenging problem which involves a double non-Markovian feature, namely the rough Bergomi model. In this case, the pricing problem is even harder since the whole history of the volatility process impacts the future distribution of the process. The manuscript includes a numerical investigation, which shows that GPR-GHQ is very accurate and able to handle options with a very long window, thus overcoming the problem of high dimensionality.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.11141&r=
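The quadrature building block of GPR-GHQ, evaluating an expectation over a Gaussian increment at only a few nodes, reduces to standard Gauss-Hermite quadrature. A minimal sketch for E[f(Z)] with Z ~ N(0,1), using hardcoded 5-point nodes and weights (an illustration of the technique, not the paper's code):

```python
import math

# 5-point Gauss-Hermite nodes and weights (weight function exp(-x^2)).
GH_NODES = [-2.020182870456086, -0.9585724646138185, 0.0,
            0.9585724646138185, 2.020182870456086]
GH_WEIGHTS = [0.019953242059045913, 0.3936193231522412, 0.9453087204829419,
              0.3936193231522412, 0.019953242059045913]

def gauss_hermite_expectation(f):
    """Approximate E[f(Z)], Z ~ N(0,1), via the substitution z = sqrt(2) * x."""
    return sum(w * f(math.sqrt(2.0) * x)
               for x, w in zip(GH_NODES, GH_WEIGHTS)) / math.sqrt(math.pi)
```

Five nodes integrate polynomials up to degree 9 exactly against the Gaussian weight, which is why a handful of quadrature points per time step can suffice for smooth continuation values.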
  14. By: Parvez, Rezwanul; Ali Meerza, Syed Imran; Hasan Khan Chowdhury, Nazea
    Keywords: Teaching/Communication/Extension/Profession, Community/Rural/Urban Development, Institutional and Behavioral Economics
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:ags:aaea21:312912&r=
  15. By: David T. Frazier; Ruben Loaiza-Maya; Gael M. Martin; Bonsoo Koo
    Abstract: We propose a new method for Bayesian prediction that caters for models with a large number of parameters and is robust to model misspecification. Given a class of high-dimensional (but parametric) predictive models, this new approach constructs a posterior predictive using a variational approximation to a loss-based, or Gibbs, posterior that is directly focused on predictive accuracy. The theoretical behavior of the new prediction approach is analyzed and a form of optimality demonstrated. Applications to both simulated and empirical data using high-dimensional Bayesian neural network and autoregressive mixture models demonstrate that the approach provides more accurate results than various alternatives, including misspecified likelihood-based predictions.
    Keywords: loss-based Bayesian forecasting, variational inference, Gibbs posteriors, proper scoring rules, Bayesian neural networks, M4 forecasting competition
    JEL: C11 C53 C58
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2021-8&r=
  16. By: Sebastian Jaimungal; Silvana Pesenti; Ye Sheng Wang; Hariom Tatsat
    Abstract: We present a reinforcement learning (RL) approach for robust optimisation of risk-aware performance criteria. To allow agents to express a wide variety of risk-reward profiles, we assess the value of a policy using rank dependent expected utility (RDEU). RDEU allows the agent to seek gains, while simultaneously protecting themselves against downside events. To robustify optimal policies against model uncertainty, we assess a policy not by its distribution, but rather, by the worst possible distribution that lies within a Wasserstein ball around it. Thus, our problem formulation may be viewed as an actor choosing a policy (the outer problem), and the adversary then acting to worsen the performance of that strategy (the inner problem). We develop explicit policy gradient formulae for the inner and outer problems, and demonstrate their efficacy on three prototypical financial problems: robust portfolio allocation, optimising a benchmark, and statistical arbitrage.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.10403&r=
  17. By: Luis-Felipe Zanna; Mr. Giovanni Melina; Mr. Zamid Aligishiev
    Abstract: This note is a user’s manual for the DIGNAR-19 toolkit, an application aimed at facilitating the use of the DIGNAR-19 model by economists with little to no knowledge of Matlab and Dynare, via a user-friendly Excel-based interface. The toolkit comprises three tools—the simulation tool, the graphing tool, and the realism tool—that translate the contents of an Excel input file into instructions for Matlab/Dynare programs. These programs are executed behind the scenes. Outputs are saved in a separate Excel file and can also be visualized in customizable charts.
    Keywords: COVID-19, Natural Resources, Public Investment, Debt Sustainability; simulation tool; toolkit manual; realism tool; policy scenario analysis; DIGNAR-19 toolkit; Labor supply; Public investment spending; Natural resources; Fiscal stance
    Date: 2021–06–23
    URL: http://d.repec.org/n?u=RePEc:imf:imftnm:2021/007&r=
  18. By: Amin, Modhurima D.; Badruddoza, Syed; Mantle, Steve
    Keywords: Productivity Analysis, Research Methods/Statistical Methods, Agribusiness
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:ags:aaea21:312764&r=
  19. By: Yixiao Lu; Yihong Wang; Tinggan Yang
    Abstract: In this paper, a new numerical method based on adaptive gradient descent optimizers is provided for computing the implied volatility from the Black-Scholes (B-S) option pricing model. It is shown that the new method is more accurate than the closed-form approximation. Compared with the Newton-Raphson method, the new method obtains a reliable rate of convergence and tends to be less sensitive to the starting point.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.07035&r=
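The idea can be sketched by treating implied volatility as the minimizer of a squared pricing error and applying an adaptive optimizer. The sketch below uses Adam-style updates with arbitrary hyperparameters; it is an illustration under those assumptions, not the paper's algorithm:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol_adam(price, S, K, T, r, sigma0=0.5,
                     lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    """Minimize (BS(sigma) - price)^2 with Adam; gradient = 2 * error * vega."""
    sigma, m, v = sigma0, 0.0, 0.0
    for t in range(1, steps + 1):
        d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
        vega = S * math.sqrt(T) * math.exp(-0.5 * d1 ** 2) / math.sqrt(2.0 * math.pi)
        g = 2.0 * (bs_call(S, K, T, r, sigma) - price) * vega
        m = beta1 * m + (1.0 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1.0 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1.0 - beta1 ** t)
        v_hat = v / (1.0 - beta2 ** t)
        sigma -= lr * m_hat / (math.sqrt(v_hat) + eps)
        sigma = min(max(sigma, 0.01), 5.0)         # keep volatility in a sane range
    return sigma

# Recover the volatility used to generate a price (iv should be close to 0.2).
price = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
iv = implied_vol_adam(price, 100.0, 100.0, 1.0, 0.05)
```

A Newton-Raphson step would instead divide the pricing error by vega directly, which converges faster near the solution but can misbehave where vega is tiny; the adaptive update's normalized step size is what makes it less sensitive to the starting point.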

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.