nep-big New Economics Papers
on Big Data
Issue of 2019‒10‒21
eleven papers chosen by
Tom Coupé
University of Canterbury

  1. Measuring the Completeness of Theories By Drew Fudenberg; Jon Kleinberg; Annie Liang; Sendhil Mullainathan
  2. Review of national policy initiatives in support of digital and AI-driven innovation By Caroline Paunov; Sandra Planes-Satorra; Greta Ravelli
  3. Testing the employment impact of automation, robots and AI: A survey and some methodological issues By Laura Barbieri; Chiara Mussida; Mariacristina Piva; Marco Vivarelli
  4. The Paradox of Big Data By Smith, Gary
  5. Predicting Auction Price of Vehicle License Plate with Deep Residual Learning By Vinci Chow
  6. Principled estimation of regression discontinuity designs with covariates: a machine learning approach By Jason Anastasopoulos
  7. Nowcasting and forecasting US recessions: Evidence from the Super Learner By Maas, Benedikt
  8. An Inertial Newton Algorithm for Deep Learning By Bolte, Jérôme; Castera, Camille; Pauwels, Edouard; Févotte, Cédric
  9. Incorporating Fine-grained Events in Stock Movement Prediction By Deli Chen; Yanyan Zou; Keiko Harimoto; Ruihan Bao; Xuancheng Ren; Xu Sun
  10. Conservative set valued fields, automatic differentiation, stochastic gradient methods and deep learning By Bolte, Jérôme; Pauwels, Edouard
  11. Be Wary of Black-Box Trading Algorithms By Smith, Gary

  1. By: Drew Fudenberg; Jon Kleinberg; Annie Liang; Sendhil Mullainathan
    Abstract: We use machine learning to provide a tractable measure of the amount of predictable variation in the data that a theory captures, which we call its "completeness." We apply this measure to three problems: assigning certain equivalents to lotteries, initial play in games, and human generation of random sequences. We discover considerable variation in the completeness of existing models, which sheds light on whether to focus on developing better models with the same features or instead to look for new features that will improve predictions. We also illustrate how and why completeness varies with the experiments considered, which highlights the role that the choice of experiments plays.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.07022&r=all
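The "completeness" measure described above can be illustrated as a normalized error-reduction ratio: how much of the gap between a naive baseline and the best achievable predictor a theory closes. A minimal sketch, assuming hypothetical error rates for the baseline, the theory, and a machine learning benchmark (the function and numbers are illustrative, not the paper's exact estimator):

```python
def completeness(err_naive, err_model, err_best):
    """Share of the predictable-but-unexplained gap a model closes.

    0 = no better than the naive baseline, 1 = matches the best
    achievable predictor (e.g. a flexible ML benchmark).
    """
    return (err_naive - err_model) / (err_naive - err_best)

# Hypothetical numbers: the naive rule errs 40% of the time,
# the theory 25%, the ML benchmark 20%.
score = completeness(0.40, 0.25, 0.20)
```

A score of 0.75 here would mean the theory captures three-quarters of the predictable variation that the naive rule leaves on the table.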
  2. By: Caroline Paunov (OECD); Sandra Planes-Satorra (OECD); Greta Ravelli (OECD)
    Abstract: What can we learn from new policies implemented in different OECD countries to foster digital and AI-driven innovation? This document reviews and extracts lessons from 12 national policy initiatives (four AI strategies and eight policy programmes) aimed at supporting breakthrough digital and AI-driven innovation and the application of those innovations by industry. Most selected policy initiatives actively involve multiple stakeholders from public research, industry and government, have mixed public-private funding models and seek international co-operation on AI. AI and digital research and innovation centres encourage interdisciplinarity, reduce hierarchies within centres and increase the autonomy of staff to enhance centres’ agility and spur creativity. AI strategies set specific actions to strengthen AI research and capabilities, support business adoption of AI and develop standards for the ethical use of AI. Responsible data-access and sharing regulations, infrastructure investments, and measures to ensure that AI contributes to sustainable and inclusive growth are other priorities.
    Keywords: artificial intelligence strategies, digital innovation, digital technologies, innovation policy
    JEL: O30 O31 O33 O38 O25 I28
    Date: 2019–10–17
    URL: http://d.repec.org/n?u=RePEc:oec:stiaac:79-en&r=all
  3. By: Laura Barbieri (Dipartimento di Scienze Economiche e Sociali, DISCE, Università Cattolica del Sacro Cuore); Chiara Mussida (Dipartimento di Scienze Economiche e Sociali, DISCE, Università Cattolica del Sacro Cuore); Mariacristina Piva (Dipartimento di Politica Economica, DISCE, Università Cattolica del Sacro Cuore); Marco Vivarelli (Dipartimento di Politica Economica, DISCE, Università Cattolica del Sacro Cuore - UNU-MERIT, Maastricht, The Netherlands and IZA, Bonn, Germany)
    Abstract: The present technological revolution, characterized by the pervasive and growing presence of robots, automation, Artificial Intelligence and machine learning, is going to transform societies and economic systems. However, this is not the first technological revolution humankind has faced, but it is probably the very first one with such an accelerated diffusion pace, involving all industrial sectors. Studying its mechanisms and consequences (will the world turn into a jobless society or not?), mainly with regard to labor market dynamics, is a crucial matter. This paper aims to provide an updated picture of the main empirical evidence on the relationship between new technologies and employment, covering the overall consequences for the number of employees, the tasks required, and wage/inequality effects.
    Keywords: technology, innovation, employment, skill, task, routine
    JEL: O33
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:ctc:serie5:dipe0006&r=all
  4. By: Smith, Gary (Pomona College)
    Abstract: Data mining is often used to discover patterns in Big Data. It is tempting to believe that because an unearthed pattern is unusual it must be meaningful, but patterns are inevitable in Big Data and usually meaningless. The paradox of Big Data is that data mining is most seductive when there are a large number of variables, but a large number of variables exacerbates the perils of data mining.
    Keywords: data mining, big data, machine learning
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:clm:pomwps:1003&r=all
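The mechanism behind this paradox can be demonstrated with pure noise: the more unrelated variables are screened, the larger the strongest spurious correlation with a random target tends to be. A minimal simulation sketch (all names and parameters are illustrative):

```python
import random

def max_abs_corr(n_rows, n_cols, seed=0):
    """Max |correlation| between a random target and n_cols independent noise columns."""
    rng = random.Random(seed)
    y = [rng.gauss(0, 1) for _ in range(n_rows)]

    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((z - mb) ** 2 for z in b)
        return cov / (va * vb) ** 0.5

    best = 0.0
    for _ in range(n_cols):
        x = [rng.gauss(0, 1) for _ in range(n_rows)]
        best = max(best, abs(corr(x, y)))
    return best

# Screening 500 noise columns typically turns up a much stronger
# "pattern" than screening 5, even though every column is pure noise.
few = max_abs_corr(50, 5)
many = max_abs_corr(50, 500)
```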
  5. By: Vinci Chow
    Abstract: Due to superstition, license plates with desirable combinations of characters are highly sought after in China, fetching prices that can reach into the millions in government-held auctions. Despite the high stakes involved, there has been essentially no attempt to provide price estimates for license plates. We present an end-to-end neural network model that simultaneously predicts the auction price, gives the distribution of prices, and produces latent feature vectors. While both types of neural network architectures we consider outperform simpler machine learning methods, convolutional networks outperform recurrent networks for comparable training time or model complexity. The resulting model powers our online price estimator and search engine.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.04879&r=all
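Character-based architectures like those described above need plate strings encoded as fixed-length index sequences before they reach an embedding layer. A minimal preprocessing sketch, with a hypothetical vocabulary and padding scheme (not the paper's actual pipeline):

```python
# Hypothetical character vocabulary: letters then digits.
VOCAB = {ch: i for i, ch in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")}

def encode_plate(plate, max_len=8, pad=len(VOCAB)):
    """Map a plate string to fixed-length integer indices for an embedding layer."""
    ids = [VOCAB[ch] for ch in plate.upper() if ch in VOCAB]
    return (ids + [pad] * max_len)[:max_len]

x = encode_plate("XX888")
```

Both a convolutional and a recurrent network can consume such sequences; the paper's finding is that, at comparable cost, the convolutional variant predicts auction prices better.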
  6. By: Jason Anastasopoulos
    Abstract: The regression discontinuity design (RDD) has become the "gold standard" for causal inference with observational data. Local average treatment effects (LATE) for RDDs are often estimated using local linear regressions with pre-treatment covariates typically added to increase the efficiency of treatment effect estimates, but their inclusion can have large impacts on LATE point estimates and standard errors, particularly in small samples. In this paper, I propose a principled, efficiency-maximizing approach for covariate adjustment of LATE in RDDs. This approach allows researchers to combine context-specific, substantive insights with automated model selection via a novel adaptive lasso algorithm. When combined with currently existing robust estimation methods, this approach improves the efficiency of LATE estimates in RDDs with pre-treatment covariates. The approach will be implemented in a forthcoming R package, AdaptiveRDD, which can be used to estimate and compare treatment effects generated by this approach with those from extant approaches.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.06381&r=all
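The adaptive lasso mentioned above penalizes each coefficient in inverse proportion to the magnitude of a pilot estimate, so covariates that look weak in the pilot fit are shrunk exactly to zero while strong ones are barely penalized. A generic coordinate-descent sketch, not the paper's algorithm; the data and pilot estimates are hypothetical:

```python
def soft(z, t):
    # Soft-thresholding operator used in lasso coordinate descent.
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def adaptive_lasso(X, y, lam, pilot, iters=200, eps=1e-8):
    """Coordinate descent with the penalty on beta_j scaled by 1/|pilot_j|."""
    n, p = len(X), len(X[0])
    w = [1.0 / (abs(b) + eps) for b in pilot]  # adaptive weights
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual excluding coordinate j.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            denom = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft(rho, lam * w[j]) / denom
    return beta

# Toy data: y depends only on the first covariate (y = 2 * x1).
X = [[1.0, 0.5], [2.0, -0.5], [3.0, 0.5], [4.0, -0.5]]
y = [2.0, 4.0, 6.0, 8.0]
pilot = [2.0, 0.01]  # hypothetical pilot (e.g. OLS) estimates
beta = adaptive_lasso(X, y, lam=0.1, pilot=pilot)
```

With a tiny pilot estimate for the second covariate, its penalty weight is large and its coefficient is driven to exactly zero, while the first coefficient stays near 2.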
  7. By: Maas, Benedikt
    Abstract: This paper introduces the Super Learner to nowcast and forecast the probability of a US recession in the current quarter and future quarters. The Super Learner is an algorithm that selects an optimal weighted average from several machine learning algorithms. In this paper, elastic net, random forests, gradient boosting machines and kernel support vector machines are used as underlying base learners of the Super Learner, which is trained with real-time vintages of the FRED-MD database as input data. The Super Learner’s ability to categorise future time periods into recessions versus expansions is compared with eight different alternatives based on probit models. The relative model performance is evaluated based on receiver operating characteristic (ROC) curves. In summary, the Super Learner predicts a recession very reliably across all forecast horizons, although it is defeated by different individual benchmark models on each horizon.
    Keywords: Machine Learning; Nowcasting; Forecasting; Business cycle analysis
    JEL: C32 C53 E32
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:96408&r=all
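The Super Learner's core step, choosing convex combination weights over base learners by minimizing held-out loss, can be sketched with two toy "learners" and a grid search. A deliberately simplified stand-in (real implementations cross-validate each base learner's predictions first; all data here are hypothetical):

```python
def super_learner_weight(ys, preds_a, preds_b, grid=21):
    """Pick the convex weight w minimizing squared error of w*a + (1-w)*b."""
    best_w, best_loss = 0.0, float("inf")
    for k in range(grid):
        w = k / (grid - 1)
        loss = sum((w * a + (1 - w) * b - y) ** 2
                   for y, a, b in zip(ys, preds_a, preds_b))
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w

# Hypothetical held-out outcomes and base-learner predictions:
ys      = [1.0, 0.0, 1.0, 1.0, 0.0]
preds_a = [0.9, 0.2, 0.8, 0.7, 0.1]   # learner A: quite accurate
preds_b = [0.5, 0.5, 0.5, 0.5, 0.5]   # learner B: uninformative
w = super_learner_weight(ys, preds_a, preds_b)
```

Because learner B carries no information here, the search puts all weight on learner A; with several genuinely complementary learners, the optimal ensemble typically mixes them.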
  8. By: Bolte, Jérôme; Castera, Camille; Pauwels, Edouard; Févotte, Cédric
    Abstract: We devise a learning algorithm for possibly nonsmooth deep neural networks featuring inertia and Newtonian directional intelligence only by means of a backpropagation oracle. Our algorithm, called INDIAN, has an appealing mechanical interpretation, making the role of its two hyperparameters transparent. An elementary phase space lifting allows both for its implementation and its theoretical study under very general assumptions. We handle in particular a stochastic version of our method (which encompasses usual mini-batch approaches) for nonsmooth activation functions (such as ReLU). Our algorithm shows high efficiency and reaches state of the art on image classification problems.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:123630&r=all
  9. By: Deli Chen; Yanyan Zou; Keiko Harimoto; Ruihan Bao; Xuancheng Ren; Xu Sun
    Abstract: Considering event structure information has proven helpful in text-based stock movement prediction. However, existing work mainly adopts coarse-grained events, which lose the specific semantic information of diverse event types. In this work, we propose to incorporate fine-grained events in stock movement prediction. First, we propose a professional finance event dictionary built by domain experts and use it to extract fine-grained events automatically from finance news. Then we design a neural model that combines finance news with fine-grained event structure and stock trade data to predict stock movement. In addition, to improve the generalizability of the proposed method, we design an advanced model that uses the extracted fine-grained events as distant-supervision labels to train a multi-task framework of event extraction and stock prediction. The experimental results show that our method outperforms all the baselines and has good generalizability.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.05078&r=all
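Dictionary-based fine-grained event extraction of the kind described above amounts to matching expert-curated trigger phrases and mapping them to event types. A minimal sketch; the dictionary entries are hypothetical, not from the paper's resource:

```python
# Hypothetical fine-grained event dictionary: trigger phrase -> event type.
EVENT_DICT = {
    "share buyback": "BuyBack",
    "cuts dividend": "DividendCut",
    "ceo resigns": "ExecutiveChange",
    "profit warning": "ProfitWarning",
}

def extract_events(headline):
    """Return the fine-grained event types whose trigger phrases appear."""
    text = headline.lower()
    return [etype for trigger, etype in EVENT_DICT.items() if trigger in text]

events = extract_events("Acme announces share buyback as CEO resigns")
```

In the paper's distant-supervision setup, labels produced this way supervise a neural event extractor trained jointly with the stock movement predictor.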
  10. By: Bolte, Jérôme; Pauwels, Edouard
    Abstract: Modern problems in AI or in numerical analysis require nonsmooth approaches with a flexible calculus. We introduce generalized derivatives called conservative fields for which we develop a calculus and provide representation formulas. Functions having a conservative field are called path differentiable: convex, concave, Clarke regular and any semialgebraic Lipschitz continuous functions are path differentiable. Using Whitney stratification techniques for semialgebraic and definable sets, our model provides variational formulas for nonsmooth automatic differentiation oracles, as for instance the famous backpropagation algorithm in deep learning. Our differential model is applied to establish the convergence in values of nonsmooth stochastic gradient methods as they are implemented in practice.
    Keywords: Deep Learning, Automatic differentiation, Backpropagation algorithm, Nonsmooth stochastic optimization, Definable sets, o-minimal structures, Stochastic gradient, Clarke subdifferential, First order methods
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:123631&r=all
  11. By: Smith, Gary (Pomona College)
    Abstract: Black-box algorithms now account for nearly a third of all U.S. stock trades. It is a mistake to think that these algorithms possess superhuman intelligence. In reality, computers do not have the common sense and wisdom that humans have accumulated by living. Trading algorithms are particularly dangerous because they are so efficient at discovering statistical patterns—but so utterly useless in judging whether the discovered patterns are meaningful.
    Keywords: algorithmic trading, black box trading, quants, artificial intelligence
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:clm:pomwps:1007&r=all

This nep-big issue is ©2019 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.