nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒05‒06
fourteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Gated deep neural networks for implied volatility surfaces By Yu Zheng; Yongxin Yang; Bowei Chen
  2. Curriculum Learning in Deep Neural Networks for Financial Forecasting By Allison Koenecke; Amita Gajewar
  3. Decomposition of intra-household disparity sensitive fuzzy multi-dimensional poverty index: A study of vulnerability through Machine Learning By Sen, Sugata
  4. Offline Multi-Action Policy Learning: Generalization and Optimization By Zhou, Zhengyuan; Athey, Susan; Wager, Stefan
  5. Legal Responsibility in Investment Decisions Using Algorithms and AI By Makoto Chiba; Mikari Kashima; Kenta Sekiguchi
  6. Machine Learning Methods Economists Should Know About By Athey, Susan; Imbens, Guido W.
  7. Supervised Machine Learning for Eliciting Individual Reservation Values By John A. Clithero; Jae Joon Lee; Joshua Tasoff
  8. Optimal execution with rough path signatures By Jasdeep Kalsi; Terry Lyons; Imanol Perez Arribas
  9. Distributional and welfare effects of replacing monetary benefits with Universal Basic Income in Spain By Badenes-Plá, Nuria; Gambau-Suelves, Borja; Navas Román, María
  10. Statistical Learning for Probability-Constrained Stochastic Optimal Control By Alessandro Balata; Michael Ludkovski; Aditya Maheshwari; Jan Palczewski
  11. Causally Driven Incremental Multi Touch Attribution Using a Recurrent Neural Network By Du, Ruihuan; Zhong, Yu; Nair, Harikesh S.; Cui, Bo; Shou, Ruyang
  12. Evaluating Welfare and Economic Effects of Raised Fertility By Makarski, Krzysztof; Tyrowicz, Joanna; Malec, Magda
  13. The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand By Acemoglu, Daron; Restrepo, Pascual
  14. A New Organizational Chassis for Artificial Intelligence - Exploring Organizational Readiness Factors By Pumplun, Luisa; Tauchert, Christoph; Heidt, Margareta

  1. By: Yu Zheng; Yongxin Yang; Bowei Chen
    Abstract: In this paper, we propose a gated deep neural network model to predict implied volatility surfaces. Financial conditions and empirical regularities related to the implied volatility, including no static arbitrage, boundary behaviour, asymptotic slope and the volatility smile, are incorporated into the neural network architecture design and calibration. These properties are also satisfied empirically by option data on the S&P 500 over a ten-year period. Our proposed model outperforms the widely used surface stochastic volatility inspired (SSVI) model on mean absolute percentage error in both in-sample and out-of-sample datasets. Methodologically, this study contributes to the emerging trend of applying state-of-the-art information technology to business studies: our model provides a framework for integrating data-driven machine learning algorithms with financial theory, and this framework can be extended and applied to other problems in finance and other business fields.
    Date: 2019–04
  2. By: Allison Koenecke; Amita Gajewar
    Abstract: For any financial organization, computing accurate quarterly forecasts for various products is one of the most critical operations. As the granularity at which forecasts are needed increases, traditional statistical time series models may not scale well. We apply deep neural networks in the forecasting domain by experimenting with techniques from Natural Language Processing (Encoder-Decoder LSTMs) and Computer Vision (Dilated CNNs), as well as incorporating transfer learning. A novel contribution of this paper is the application of curriculum learning to neural network models built for time series forecasting. We illustrate the performance of our models using Microsoft's revenue data corresponding to Enterprise, and Small, Medium & Corporate products, spanning approximately 60 regions across the globe for 8 different business segments, and totaling on the order of tens of billions of USD. We compare our models' performance to the ensemble model of traditional statistics and machine learning techniques currently used by Microsoft Finance. With this in-production model as a baseline, our experiments yield an approximately 30% improvement in overall accuracy on test data. We find that our curriculum learning LSTM-based model performs best, showing that it is reasonable to implement our proposed methods without overfitting on medium-sized data.
    Date: 2019–04
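The core idea of curriculum learning in abstract 2 — presenting easier training examples before harder ones — can be sketched in a few lines. The code below is an illustrative toy, not the paper's LSTM system: it uses scikit-learn's `MLPRegressor` with incremental `partial_fit` updates, and takes window volatility as a made-up difficulty score.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy data: feature windows of a noisy series; "difficulty" = window volatility.
X = rng.normal(size=(500, 8))
y = X.sum(axis=1) + rng.normal(scale=0.1, size=500)
difficulty = X.std(axis=1)

# Curriculum: sort samples from easy (low volatility) to hard (high volatility).
order = np.argsort(difficulty)
model = MLPRegressor(hidden_layer_sizes=(16,), random_state=0)

# Train in stages; partial_fit keeps the learned weights between stages,
# so later (harder) stages refine a model already shaped by easy examples.
for stage in np.array_split(order, 3):  # easy -> medium -> hard
    for _ in range(50):                 # a few passes per stage
        model.partial_fit(X[stage], y[stage])
```

Swapping the difficulty score (e.g., forecast horizon or series noise level) changes the curriculum without touching the training loop.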
  3. By: Sen, Sugata
    Abstract: Traditional multi-dimensional measures have failed to properly capture the vulnerability of human beings to poverty. Among the reasons for this shortcoming are the failure of existing measures to recognise the graduality inherent in the concept of poverty and the disparities in wealth distribution within the household. This work therefore develops a measure of households' vulnerability to becoming poor in a multidimensional perspective, incorporating intra-household disparities and graduality in the causal factors. Dimensional decomposition of the developed vulnerability measure is also within the purview of this work. An integrated mathematical framework is developed to estimate vulnerability and dimensional influences with the help of artificial intelligence.
    Keywords: Poverty, Vulnerability, Fuzzy logic, Intra-household disparity, Shapley Value Decomposition, Machine Learning, LIME
    JEL: C63 I32
    Date: 2019–04–28
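The "graduality" that abstract 3 refers to is the fuzzy-set idea that poverty is a matter of degree rather than a binary state. A minimal sketch of one common form of fuzzy poverty membership (the thresholds and the linear ramp are illustrative assumptions, not the paper's specification):

```python
def poverty_membership(income: float, z_low: float, z_high: float) -> float:
    """Fuzzy poverty membership: 1 below z_low (fully poor), 0 above z_high
    (fully non-poor), declining linearly in between."""
    if income <= z_low:
        return 1.0
    if income >= z_high:
        return 0.0
    return (z_high - income) / (z_high - z_low)

# A household earning 150 with thresholds (100, 200) is "half poor".
degree = poverty_membership(150, z_low=100, z_high=200)
```

Aggregating such membership degrees across dimensions (and across household members, to capture intra-household disparity) yields a graded index instead of a headcount.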
  4. By: Zhou, Zhengyuan (Department of Electrical Engineering, Stanford University); Athey, Susan (Graduate School of Business, Stanford University); Wager, Stefan (Graduate School of Business, Stanford University)
    Abstract: In many settings, a decision-maker wishes to learn a rule, or policy, that maps from observable characteristics of an individual to an action. Examples include selecting offers, prices, advertisements, or emails to send to consumers, as well as the problem of determining which medication to prescribe to a patient. While there is a growing body of literature devoted to this problem, most existing results are focused on the case where data comes from a randomized experiment, and further, there are only two possible actions, such as giving a drug to a patient or not. In this paper, we study the offline multi-action policy learning problem with observational data, where the policy may need to respect budget constraints or belong to a restricted policy class such as decision trees. We build on the theory of efficient semi-parametric inference in order to propose and implement a policy learning algorithm that achieves asymptotically minimax-optimal regret. To the best of our knowledge, this is the first result of this type in the multi-action setup, and it provides a substantial performance improvement over the existing learning algorithms. We then consider additional computational challenges that arise in implementing our method for the case where the policy is restricted to take the form of a decision tree. We propose two different approaches, one using a mixed integer program formulation and the other using a tree-search based algorithm.
    Date: 2018–10
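The semi-parametric machinery abstract 4 builds on can be illustrated with doubly robust (AIPW) scores on simulated observational data. This is a minimal sketch of the scoring idea only — the data, models, and the unrestricted argmax policy are illustrative stand-ins for the paper's budget- and tree-constrained policy classes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n, n_actions = 2000, 3
X = rng.normal(size=(n, 2))
A = rng.integers(n_actions, size=n)  # logged (observed) actions
Y = X[:, 0] * (A == 1) + X[:, 1] * (A == 2) + rng.normal(scale=0.5, size=n)

# Nuisance models fit from the observational data:
# propensities e_a(x) and outcome regressions mu_a(x).
e = LogisticRegression().fit(X, A).predict_proba(X)
mu = np.column_stack([
    LinearRegression().fit(X[A == a], Y[A == a]).predict(X)
    for a in range(n_actions)
])

# Doubly robust (AIPW) score for each candidate action: the outcome model,
# plus an inverse-propensity-weighted residual correction for the action
# that was actually taken.
scores = mu.copy()
idx = np.arange(n)
scores[idx, A] += (Y - mu[idx, A]) / e[idx, A]

# Plug-in policy: pick the action with the highest estimated score.
policy = scores.argmax(axis=1)
```

In the paper the maximisation is instead carried out over a restricted class (e.g., depth-limited decision trees), which is what creates the mixed-integer-program and tree-search formulations mentioned above.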
  5. By: Makoto Chiba (Bank of Japan); Mikari Kashima (Bank of Japan); Kenta Sekiguchi (Bank of Japan)
    Abstract: This article provides an overview of the report released by a study group on legal issues regarding financial investments using algorithms/artificial intelligence (AI). The report focuses on legal issues regarding automated or black-boxed financial investment decisions made using algorithms/AI. Specifically, the report discusses the points for consideration in applying laws regarding (1) regulations and civil liability issues surrounding business operators for investment management or investment advisory activities, and (2) regulations on market misconduct. The report shows that the application of some existing laws requires the presence of a certain mental state (such as purpose and intent), which is unlikely to be present in the case of investment decisions made using algorithms/AI. To deal with this problem, the report considers the necessity of introducing new legislation.
    Keywords: algorithm; artificial intelligence; AI; investment decision; duty to explain; duty of due care of a prudent manager; market manipulation; insider trading
    JEL: K22
    Date: 2019–04–26
  6. By: Athey, Susan (Graduate School of Business, Stanford University, SIEPR, and NBER); Imbens, Guido W. (Graduate School of Business and Department of Economics, Stanford)
    Abstract: We discuss the relevance of the recent Machine Learning (ML) literature for economics and econometrics. First we discuss the differences in goals, methods and settings between the ML literature and the traditional econometrics and statistics literatures. Then we discuss some specific methods from the machine learning literature that we view as important for empirical researchers in economics. These include supervised learning methods for regression and classification, unsupervised learning methods, as well as matrix completion methods. Finally, we highlight newly developed methods at the intersection of ML and econometrics that typically perform better than either off-the-shelf ML or more traditional econometric methods when applied to particular classes of problems, including causal inference for average treatment effects, optimal policy estimation, and estimation of the counterfactual effect of price changes in consumer choice models.
    Date: 2019–03
  7. By: John A. Clithero; Jae Joon Lee; Joshua Tasoff
    Abstract: Direct elicitation, guided by theory, is the standard method for eliciting individual-level latent variables. We present an alternative approach, supervised machine learning (SML), and apply it to measuring individual valuations for goods. We find that the approach is superior for predicting out-of-sample individual purchases relative to a canonical direct-elicitation approach, the Becker-DeGroot-Marschak (BDM) method. The BDM is imprecise and systematically biased by understating valuations. We characterize the performance of SML using a variety of estimation methods and data. The simulation results suggest that prices set by SML would increase revenue by 22% over the BDM, using the same data.
    Date: 2019–04
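The supervised-machine-learning alternative to direct elicitation in abstract 7 can be sketched on simulated purchase data: train a classifier on observed (price, purchase) pairs, then read off the price at which the predicted purchase probability crosses one half as the estimated reservation value. Everything here (the data-generating process, the single-feature model, the 0.5 crossing rule) is an illustrative assumption, not the authors' estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 3000
value = rng.uniform(2, 10, size=n)   # latent valuations (never observed)
price = rng.uniform(0, 12, size=n)
buy = (value >= price).astype(int)   # purchase iff the price is below valuation

# Supervised step: learn P(buy | price) from observed choices only.
clf = LogisticRegression().fit(price.reshape(-1, 1), buy)

def reservation_value(clf, grid=np.linspace(0, 12, 1201)):
    """Price at which the predicted purchase probability crosses 0.5."""
    p = clf.predict_proba(grid.reshape(-1, 1))[:, 1]
    return grid[np.argmin(np.abs(p - 0.5))]
```

With only price as a feature this recovers a population-level valuation; adding individual covariates to the classifier turns it into the individual-level elicitation the paper compares against BDM.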
  8. By: Jasdeep Kalsi; Terry Lyons; Imanol Perez Arribas
    Abstract: We present a method for obtaining approximate solutions to the problem of optimal execution, based on a signature method. The framework is general, only requiring that the price process is a geometric rough path and the price impact function is a continuous function of the trading speed. Following an approximation of the optimisation problem, we are able to calculate an optimal solution for the trading speed in the space of linear functions on a truncation of the signature of the price process. We provide strong numerical evidence illustrating the accuracy and flexibility of the approach. Our numerical investigation both examines cases where exact solutions are known, demonstrating that the method accurately approximates these solutions, and models where exact solutions are not known. In the latter case, we obtain favourable comparisons with standard execution strategies.
    Date: 2019–05
  9. By: Badenes-Plá, Nuria; Gambau-Suelves, Borja; Navas Román, María
    Abstract: This paper quantifies the redistributive effects on progressivity, poverty and welfare that would occur if the monetary benefits currently in place in the Spanish system were replaced by a spending-neutral alternative: a universal basic income (UBI) granted to everyone. We calculate two scenarios: one in which the benefit system is replaced by a basic income, and another in which retirement pensions are maintained, with the rest of the monetary benefits being distributed via a UBI. The simulations are carried out using EUROMOD. The implementation of a UBI, even a very radical one that eliminates the existing benefit system, could be economically sustainable, as redistributive as the current system, almost as poverty-reducing as the one in force (or more so in some dimensions), and a generator of greater welfare.
    Date: 2019–04–22
  10. By: Alessandro Balata; Michael Ludkovski; Aditya Maheshwari; Jan Palczewski
    Abstract: We investigate Monte Carlo based algorithms for solving stochastic control problems with probabilistic constraints. Our motivation comes from microgrid management, where the controller tries to optimally dispatch a diesel generator while maintaining low probability of blackouts. The key questions we investigate are empirical simulation procedures for learning the admissible control set that is specified implicitly through a probability constraint on the system state. We propose a variety of relevant statistical tools including logistic regression, Gaussian process regression, quantile regression and support vector machines, which we then incorporate into an overall Regression Monte Carlo (RMC) framework for approximate dynamic programming. Our results indicate that using logistic or Gaussian process regression to estimate the admissibility probability outperforms the other options. Our algorithms offer an efficient and reliable extension of RMC to probability-constrained control. We illustrate our findings with two case studies for the microgrid problem.
    Date: 2019–04
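The "learn the admissible set from simulations" step in abstract 10 reduces, for the logistic-regression variant, to a classification problem: simulate one-step outcomes, label each state safe or unsafe, and keep the states whose predicted safety probability clears the constraint level. The toy microgrid state and safety rule below are made-up stand-ins for the paper's dynamics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy microgrid-style state: battery charge level x in [0, 1].
# Simulated one-step outcome: "safe" iff random demand does not exceed charge.
x = rng.uniform(0, 1, size=2000)
demand = rng.uniform(0, 0.6, size=2000)
safe = (x >= demand).astype(int)

# Learn p(safe | x) from the simulated labels, then define the admissible
# set implicitly as {x : p(safe | x) >= 1 - alpha}.
clf = LogisticRegression().fit(x.reshape(-1, 1), safe)
alpha = 0.05
p_safe = clf.predict_proba(np.array([[0.1], [0.9]]))[:, 1]
admissible = p_safe >= 1 - alpha
```

In the full RMC framework this fitted admissibility boundary is re-estimated at each backward-induction step, restricting the controls over which the value function is optimised.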
  11. By: Du, Ruihuan (?); Zhong, Yu (?); Nair, Harikesh S. (Stanford University Graduate School of Business); Cui, Bo (?); Shou, Ruyang (?)
    Abstract: This paper describes a practical system for Multi Touch Attribution (MTA) for use by a publisher of digital ads. We developed this system for, an eCommerce company, which is also a publisher of digital ads in China. The approach has two steps. The first step (“response modeling”) fits a user-level model for purchase of a product as a function of the user’s exposure to ads. The second (“credit allocation”) uses the fitted model to allocate the incremental part of the observed purchase due to advertising, to the ads the user is exposed to over the previous T days. To implement step one, we train a Recurrent Neural Network (RNN) on user-level conversion and exposure data. The RNN has the advantage of flexibly handling the sequential dependence in the data in a semi-parametric way. The specific RNN formulation we implement captures the impact of advertising intensity, timing, competition, and user-heterogeneity, which are known to be relevant to ad-response. To implement step two, we compute Shapley Values, which have the advantage of having axiomatic foundations and satisfying fairness considerations. The specific formulation of the Shapley Value we implement respects incrementality by allocating the overall incremental improvement in conversion to the exposed ads, while handling the sequence-dependence of exposures on the observed outcomes. The system is in production at, and scales to handle the high dimensionality of the problem on the platform (attribution of the orders of about 300M users, for roughly 160K brands, across 200+ ad-types, served about 80B ad-impressions over a typical 15-day period).
    Date: 2019–01
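The credit-allocation step in abstract 11 rests on the Shapley value: each ad's credit is its average marginal contribution over all orderings of the exposed ads. A self-contained exact computation for a tiny example (the channel names and the coalition lift numbers are invented for illustration; the paper's production formulation additionally handles sequence dependence and scale):

```python
from itertools import combinations
from math import factorial

def shapley(channels, v):
    """Exact Shapley value of each channel under coalition value function v."""
    n = len(channels)
    phi = {}
    for c in channels:
        others = [x for x in channels if x != c]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Weight = probability that coalition S precedes c
                # in a uniformly random ordering of all channels.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {c}) - v(set(S)))
        phi[c] = total
    return phi

# Made-up incremental conversion lift for each coalition of ad types.
lift = {frozenset(): 0.0,
        frozenset({"search"}): 0.04,
        frozenset({"display"}): 0.01,
        frozenset({"search", "display"}): 0.06}
phi = shapley(["search", "display"], lambda S: lift[frozenset(S)])
```

By the efficiency axiom, the per-channel values sum exactly to the full-coalition lift, which is the "incrementality-respecting" property the abstract emphasises.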
  12. By: Makarski, Krzysztof (Warsaw School of Economics); Tyrowicz, Joanna (University of Warsaw); Malec, Magda (Warsaw School of Economics)
    Abstract: Many countries consider raising fertility through pro-family policies as a solution to the fiscal pressure stemming from longevity. However, an increased number of births implies immediate private costs and only delayed public benefits of a younger and larger population. We propose using an overlapping generations model with a rich family structure to quantify the effects of simulated increases in birth rates. We analyze the overall macroeconomic and welfare effects of these simulated paths relative to the status quo. We also study the distribution of these effects across cohorts and the sensitivity of the final effects to the assumed target value and path of increased fertility. Since our study seeks to quantify the possible effects of pro-natalistic policies, we focus on the public costs and benefits of having children. We find that the fiscal effects are positive but fall short of natalistic expenditures in many countries. The sign and the size of both welfare and fiscal effects depend on the patterns of increased fertility.
    Keywords: fertility, welfare, natalistic policies, overlapping generations model
    JEL: H55 E17 C60 C68 E21 D63
    Date: 2019–04
  13. By: Acemoglu, Daron (MIT); Restrepo, Pascual (Boston University)
    Abstract: Artificial Intelligence is set to influence every aspect of our lives, not least the way production is organized. AI, as a technology platform, can automate tasks previously performed by labor or create new tasks and activities in which humans can be productively employed. Recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labor can be productively employed. The consequences of this choice have been stagnating labor demand, declining labor share in national income, rising inequality and lower productivity growth. The current tendency is to develop AI in the direction of further automation, but this might mean missing out on the promise of the "right" kind of AI with better economic and social outcomes.
    Keywords: automation, artificial intelligence, jobs, inequality, innovation, labor demand, productivity, tasks, technology, wages
    JEL: J23 J24
    Date: 2019–04
  14. By: Pumplun, Luisa; Tauchert, Christoph; Heidt, Margareta
    Date: 2019–06–08

This nep-cmp issue is ©2019 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.