nep-big New Economics Papers
on Big Data
Issue of 2021‒01‒25
28 papers chosen by
Tom Coupé
University of Canterbury

  1. Advanced Technologies Adoption and Use by U.S. Firms: Evidence from the Annual Business Survey By Nikolas Zolas; Zachary Kroff; Erik Brynjolfsson; Kristina McElheran; David Beede; Catherine Buffington; Nathan Goldschlag
  2. Answering the Queen: Machine Learning and Financial Crises By Jeremy Fouliard; Michael Howell; Hélène Rey
  3. Comparing Conventional and Machine-Learning Approaches to Risk Assessment in Domestic Abuse Cases By Jeffrey Grogger; Sean Gupta; Ria Ivandic; Tom Kirchmaier
  4. Bankruptcy prediction using disclosure text features By Sridhar Ravula
  5. Now- and Backcasting Initial Claims with High-Dimensional Daily Internet Search-Volume Data By Daniel Borup; David E. Rapach; Erik Christian Montes Schütte
  6. A machine learning approach to volatility forecasting By Kim Christensen; Mathias Siggaard; Bezirgen Veliyev
  7. Deep Portfolio Optimization via Distributional Prediction of Residual Factors By Kentaro Imajo; Kentaro Minami; Katsuya Ito; Kei Nakagawa
  8. Nowcasting Indonesia’s GDP Growth Using Machine Learning Algorithms By Tamara, Novian; Dwi Muchisha, Nadya; Andriansyah, Andriansyah; Soleh, Agus M
  9. The Deep Parametric PDE Method: Application to Option Pricing By Kathrin Glau; Linus Wunderlich
  10. Deep Learning, Predictability, and Optimal Portfolio Returns By Mykola Babiak; Jozef Barunik
  11. AVM and high dimensional data: Do ridge, the lasso or the elastic net provide an "automated" solution? By Hinrichs, Nils; Kolbe, Jens; Werwatz, Axel
  12. AI Watch : AI Uptake in Health and Healthcare, 2020 By DE NIGRIS Sarah; CRAGLIA Massimo; NEPELSKI Daniel; HRADEC Jiri; GOMEZ-GONZALES Emilio; GOMEZ GUTIERREZ Emilia; VAZQUEZ-PRADA BAILLET Miguel; RIGHI Riccardo; DE PRATO Giuditta; LOPEZ COBO Montserrat; SAMOILI Sofia; CARDONA Melisande
  13. Reducing the credit gap in Mexico in an environment of uncertainty generated by the COVID-19 pandemic: A data science (machine learning) approach By Rodríguez-García, Jair Hissarly; Venegas-Martínez, Francisco
  14. Building Cross-Sectional Systematic Strategies By Learning to Rank By Daniel Poh; Bryan Lim; Stefan Zohren; Stephen Roberts
  15. Machine Learning as Natural Experiment: Method and Deployment at Japanese Firms (Japanese) By NARITA Yusuke; AIHARA Shunsuke; SAITO Yuta; MATSUTANI Megumi; YATA Kohei
  16. AI Watch Assessing Technology Readiness Levels for Artificial Intelligence By Fernando Martinez-Plumed; Emilia Gomez Gutierrez; Jose Hernandez-Orallo
  17. Deep Reinforcement Learning for Stock Portfolio Optimization By Le Trung Hieu
  18. Financial Intermediation and Technology: What’s Old, What’s New? By Arnoud W.A. Boot; Peter Hoffmann; Luc Laeven; Lev Ratnovski
  19. Public Procurement and Innovation for Human-Centered Artificial Intelligence By Naudé, Wim; Dimitri, Nicola
  20. The Variational Method of Moments By Andrew Bennett; Nathan Kallus
  21. Recurrent Neural Networks for Stochastic Control Problems with Delay By Jiequn Han; Ruimeng Hu
  22. Machine Learning Systems in Clinics – How Mature Is the Adoption Process in Medical Diagnostics? By Pumplun, Luisa; Fecho, Mariska; Islam, Nihal; Buxmann, Peter
  23. How Artificial Intelligence is Making Transport Safer, Cleaner, More Reliable and Efficient in Emerging Markets By Maria Lopez Conde; Ian Twinn
  24. Mining the Relationship Between COVID-19 Sentiment and Market Performance By Ziyuan Xia; Jeffery Chen
  25. To Use or Not to Use Artificial Intelligence? A Framework for the Ideation and Evaluation of Problems to Be Solved with Artificial Intelligence By Sturm, Timo; Fecho, Mariska; Buxmann, Peter
  26. To Use or Not to Use Artificial Intelligence? A Framework for the Ideation and Evaluation of Problems to Be Solved with Artificial Intelligence By Sturm, Timo; Fecho, Mariska; Buxmann, Peter
  27. Deep learning for efficient frontier calculation in finance By Xavier Warin
  28. Tensoring volatility calibration: Calibration of the rough Bergomi volatility model via Chebyshev Tensors By Mariano Zeron; Ignacio Ruiz

  1. By: Nikolas Zolas; Zachary Kroff; Erik Brynjolfsson; Kristina McElheran; David Beede; Catherine Buffington; Nathan Goldschlag
    Abstract: We introduce a new survey module intended to complement and expand research on the causes and consequences of advanced technology adoption. The 2018 Annual Business Survey (ABS), conducted by the Census Bureau in partnership with the National Center for Science and Engineering Statistics (NCSES), provides comprehensive and timely information on the diffusion among U.S. firms of advanced technologies including artificial intelligence (AI), cloud computing, robotics, and the digitization of business information. The 2018 ABS is a large, nationally representative sample of over 850,000 firms covering all private, nonfarm sectors of the economy. We describe the motivation for and development of the technology module in the ABS, and provide a first look at technology adoption and use patterns across firms and sectors. We find that digitization is quite widespread, as is some use of cloud computing. In contrast, advanced technology adoption is rare and generally skewed towards larger and older firms. Adoption patterns are consistent with a hierarchy of increasing technological sophistication, in which most firms that adopt AI or other advanced business technologies also use the other, more widely diffused technologies. Finally, while few firms are at the technology frontier, they tend to be large, so the technology exposure of the average worker is significantly higher than the firm-level adoption rates alone suggest. These new data will be available to qualified researchers on approved projects in the Federal Statistical Research Data Center network.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:cen:wpaper:20-40&r=all
  2. By: Jeremy Fouliard; Michael Howell; Hélène Rey
    Abstract: Financial crises cause economic, social and political havoc. Macroprudential policies are gaining traction but are still severely under-researched compared to monetary and fiscal policy. We use the general framework of sequential prediction, also called online machine learning, to forecast crises out-of-sample. Our methodology is based on model averaging and is meta-statistical, since we can incorporate any predictive model of crises into our set of experts and test its ability to add information. We are able to predict systemic financial crises twelve quarters ahead out-of-sample with a high signal-to-noise ratio in most cases. We analyse which experts provide the most information for our predictions at each point in time and for each country, allowing us to gain some insight into the economic mechanisms underlying the build-up of risk in economies.
    JEL: G01 G15
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:28302&r=all
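    Sketch: The abstract describes online model averaging over a set of expert crisis predictors. Purely as an illustration (the loss function, learning rate and all names below are our assumptions, not the paper's), a minimal exponentially weighted average forecaster in Python:

      import numpy as np

      def ewa_forecaster(expert_preds, outcomes, eta=2.0):
          # expert_preds: (T, K) crisis-probability forecasts from K experts
          # outcomes:     (T,) realized 0/1 crisis indicators
          # Returns the (T,) aggregated forecasts, each formed before the
          # corresponding outcome is observed.
          T, K = expert_preds.shape
          log_w = np.zeros(K)                  # log-weights avoid underflow
          agg = np.empty(T)
          for t in range(T):
              w = np.exp(log_w - log_w.max())
              w /= w.sum()
              agg[t] = w @ expert_preds[t]     # weighted average forecast
              loss = (expert_preds[t] - outcomes[t]) ** 2
              log_w -= eta * loss              # multiplicative weight update
          return agg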
  3. By: Jeffrey Grogger; Sean Gupta; Ria Ivandic; Tom Kirchmaier
    Abstract: We compare predictions from a conventional protocol-based approach to risk assessment with those based on a machine-learning approach. We first show that the conventional predictions are less accurate than, and have similar rates of negative prediction error as, a simple Bayes classifier that makes use only of the base failure rate. Machine learning algorithms based on the underlying risk assessment questionnaire do better under the assumption that negative prediction errors are more costly than positive prediction errors. Machine learning models based on two-year criminal histories do even better. Indeed, adding the protocol-based features to the criminal histories adds little to the predictive adequacy of the model. We suggest using the predictions based on criminal histories to prioritize incoming calls for service, and devising a more sensitive instrument to distinguish true from false positives that result from this initial screening.
    JEL: K14 K36
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:28293&r=all
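    Sketch: The comparison hinges on negative prediction errors (missed high-risk cases) being costlier than positive ones. A standard way to encode such asymmetric costs, shown here only for illustration, is to lower the classification threshold on predicted probabilities:

      import numpy as np

      def cost_sensitive_labels(p_hat, cost_fn, cost_fp):
          # Flag as high-risk whenever the expected cost of a miss exceeds
          # that of a false alarm: predict 1 iff p > c_fp / (c_fp + c_fn).
          threshold = cost_fp / (cost_fp + cost_fn)
          return (p_hat > threshold).astype(int)

      p_hat = np.array([0.05, 0.15, 0.40, 0.70])   # model's risk estimates
      print(cost_sensitive_labels(p_hat, cost_fn=10.0, cost_fp=1.0))
      # threshold = 1/11, so all but the first case are flagged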
  4. By: Sridhar Ravula
    Abstract: Predicting a public firm's bankruptcy is an important financial research problem because of the downside risks to security prices. Traditional methods rely on accounting metrics, which suffer from shortcomings such as window dressing and a retrospective focus. While disclosure-text-based metrics overcome some of these issues, current methods focus excessively on disclosure tone and sentiment. There is a need to relate meaningful signals in the disclosure text to financial outcomes and to quantify the disclosure text data. This work proposes a new distress dictionary based on the sentences managers use to explain financial status. It demonstrates significant differences in linguistic features between bankrupt and non-bankrupt firms. Further, using a large sample of 500 bankrupt firms, it builds predictive models and compares their performance against two dictionaries used in financial text analysis. This research shows that the proposed distress dictionary captures unique information from disclosures and that the predictive models based on its features have the highest accuracy.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.00719&r=all
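    Sketch: The paper's distress dictionary itself is not reproduced in the abstract, so the phrases below are placeholders; dictionary-based text features of this general kind can be extracted as simple counts:

      import re

      # Placeholder phrases; the paper builds its own sentence-based dictionary.
      DISTRESS_TERMS = ["going concern", "substantial doubt", "default", "covenant"]

      def distress_features(text):
          text = text.lower()
          return {t: len(re.findall(re.escape(t), text)) for t in DISTRESS_TERMS}

      print(distress_features(
          "There is substantial doubt about the Company's ability to continue "
          "as a going concern following the covenant default."))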
  5. By: Daniel Borup (Aarhus University, CREATES and the Danish Finance Institute (DFI)); David E. Rapach (Washington University in St. Louis and Saint Louis University); Erik Christian Montes Schütte (Aarhus University, CREATES and the Danish Finance Institute (DFI))
    Abstract: We generate a sequence of now- and backcasts of weekly unemployment insurance initial claims (UI) based on a rich trove of daily Google Trends (GT) search-volume data for terms related to unemployment. To harness the information in a high-dimensional set of daily GT terms, we estimate predictive models using machine-learning techniques in a mixed-frequency framework. In a simulated out-of-sample exercise, now- and backcasts of weekly UI that incorporate the information in the daily GT terms substantially outperform models that ignore the information. The relevance of GT terms for predicting UI is strongly linked to the COVID-19 crisis.
    Keywords: Unemployment insurance, Internet search, Mixed-frequency data, Penalized regression, Neural network, Variable importance
    JEL: C45 C53 C55 E24 E27 J65
    Date: 2021–01–11
    URL: http://d.repec.org/n?u=RePEc:aah:create:2021-02&r=all
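    Sketch: One simple way to feed high-dimensional daily search data into a weekly penalized regression is to stack each week's daily observations side by side (a U-MIDAS-style treatment; the paper's exact mixed-frequency setup and variables are not given in the abstract, so everything below is illustrative):

      import numpy as np
      from sklearn.linear_model import LassoCV
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      def weekly_design(X_daily, n_weeks):
          # Stack each week's 7 daily observations of every term:
          # (n_weeks * 7, n_terms) -> (n_weeks, 7 * n_terms)
          n_terms = X_daily.shape[1]
          return X_daily[: n_weeks * 7].reshape(n_weeks, 7 * n_terms)

      rng = np.random.default_rng(0)
      n_weeks, n_terms = 150, 40                   # synthetic stand-in data
      X = weekly_design(rng.normal(size=(n_weeks * 7, n_terms)), n_weeks)
      y = rng.normal(size=n_weeks)                 # stand-in for initial claims

      model = make_pipeline(StandardScaler(), LassoCV(cv=5))
      model.fit(X[:-1], y[:-1])                    # estimate on past weeks
      nowcast = model.predict(X[-1:])              # nowcast the current week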
  6. By: Kim Christensen (Aarhus University and CREATES); Mathias Siggaard (Aarhus University and CREATES); Bezirgen Veliyev (Aarhus University and CREATES)
    Abstract: We show that machine learning (ML) algorithms improve one-day-ahead forecasts of realized variance from 29 Dow Jones Industrial Average index stocks over the sample period 2001-2017. We inspect several ML approaches: regularization, tree-based algorithms, and neural networks. Off-the-shelf ML implementations beat the Heterogeneous AutoRegressive (HAR) model, even when the only predictors employed are the daily, weekly, and monthly lags of realized variance. Moreover, ML algorithms are capable of extracting substantially more information from additional predictors of volatility, including firm-specific characteristics and macroeconomic indicators, relative to an extended HAR model (HAR-X). ML automatically deciphers the often nonlinear relationships among the variables, making it possible to identify the key associations driving volatility. Using accumulated local effect (ALE) plots, we show that there is general agreement about the set of most dominant predictors, but disagreement about their ranking. We investigate the robustness of ML when a large number of irrelevant variables, exhibiting serial correlation and conditional heteroscedasticity, are added to the information set, and we document sustained forecasting improvements in this setting as well.
    Keywords: Gradient boosting, high-frequency data, machine learning, neural network, random forest, realized variance, regularization, volatility forecasting
    JEL: C10 C50
    Date: 2021–01–18
    URL: http://d.repec.org/n?u=RePEc:aah:create:2021-03&r=all
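    Sketch: Using only the daily, weekly and monthly lags of realized variance, an off-the-shelf ML model can be pitted against the HAR benchmark roughly as follows (synthetic data; hyperparameters are placeholders):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression

      def har_features(rv):
          # HAR predictors: daily RV and its 5-day and 22-day trailing means.
          t = np.arange(21, len(rv) - 1)
          d = rv[t]
          w = np.array([rv[i - 4:i + 1].mean() for i in t])
          m = np.array([rv[i - 21:i + 1].mean() for i in t])
          return np.column_stack([d, w, m]), rv[t + 1]

      rv = np.abs(np.random.default_rng(1).normal(size=1000))  # stand-in RV
      X, y = har_features(rv)
      split = 800
      har = LinearRegression().fit(X[:split], y[:split])  # HAR = OLS on 3 lags
      rf = RandomForestRegressor(n_estimators=300, random_state=0)
      rf.fit(X[:split], y[:split])                        # same 3 predictors
      for name, mdl in [("HAR", har), ("RF", rf)]:
          mse = np.mean((mdl.predict(X[split:]) - y[split:]) ** 2)
          print(name, mse)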
  7. By: Kentaro Imajo; Kentaro Minami; Katsuya Ito; Kei Nakagawa
    Abstract: Recent developments in deep learning techniques have motivated intensive research in machine learning-aided stock trading strategies. However, since the financial market has a highly non-stationary nature hindering the application of typical data-hungry machine learning methods, leveraging financial inductive biases is important to ensure better sample efficiency and robustness. In this study, we propose a novel method of constructing a portfolio based on predicting the distribution of a financial quantity called residual factors, which is known to be generally useful for hedging the risk exposure to common market factors. The key technical ingredients are twofold. First, we introduce a computationally efficient extraction method for the residual information, which can be easily combined with various prediction algorithms. Second, we propose a novel neural network architecture that allows us to incorporate widely acknowledged financial inductive biases such as amplitude invariance and time-scale invariance. We demonstrate the efficacy of our method on U.S. and Japanese stock market data. Through ablation experiments, we also verify that each individual technique contributes to improving the performance of trading strategies. We anticipate our techniques may have wide applications in various financial problems.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.07245&r=all
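    Sketch: The paper's own residual-extraction method is not detailed in the abstract; a generic way to strip common market factors from a panel of returns, leaving residual returns, is via principal components:

      import numpy as np

      def residual_returns(returns, n_factors=3):
          # returns: (T, N) panel. Project out the top principal components,
          # which proxy for common market factors; keep what is left over.
          R = returns - returns.mean(axis=0)
          U, S, Vt = np.linalg.svd(R, full_matrices=False)
          common = (U[:, :n_factors] * S[:n_factors]) @ Vt[:n_factors]
          return R - common

      rng = np.random.default_rng(2)
      market = rng.normal(size=(500, 1))                 # one common factor
      panel = market @ rng.normal(size=(1, 30)) + 0.5 * rng.normal(size=(500, 30))
      resid = residual_returns(panel, n_factors=1)       # (500, 30) residuals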
  8. By: Tamara, Novian; Dwi Muchisha, Nadya; Andriansyah, Andriansyah; Soleh, Agus M
    Abstract: GDP is very important to monitor in real time because of its usefulness for policy making. We built and compared ML models to forecast Indonesia's GDP growth in real time. We used 18 variables consisting of quarterly macroeconomic and financial market statistics. We evaluated the performance of six popular ML algorithms (Random Forest, LASSO, Ridge, Elastic Net, Neural Networks, and Support Vector Machines) in producing real-time forecasts of GDP growth over the 2013:Q3 to 2019:Q4 period. We used the RMSE, MAD, and Pearson correlation coefficient as measures of forecast accuracy. The results show that all of these models outperformed the AR(1) benchmark. The individual model with the best performance was Random Forest. To obtain more accurate forecasts, we ran forecast combinations using equal weighting and lasso regression. The best model was obtained from the forecast combination using lasso regression with selected ML models, namely Random Forest, Ridge, Support Vector Machine, and Neural Network.
    Keywords: Nowcasting, Indonesian GDP, Machine Learning
    JEL: C55 E30 O40
    Date: 2020–06–26
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:105235&r=all
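    Sketch: The final step described in the abstract, combining individual model forecasts with lasso-estimated weights, reduces to a penalized regression of realized growth on the forecasts (synthetic numbers below):

      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(3)
      y = rng.normal(size=26)                   # ~26 quarters, 2013:Q3-2019:Q4
      F = y[:, None] + 0.3 * rng.normal(size=(26, 4))  # stand-in model forecasts

      equal_weight = F.mean(axis=1)             # equal-weighting benchmark
      combiner = LassoCV(cv=3).fit(F, y)        # lasso-weighted combination
      lasso_combo = combiner.predict(F)
      print(combiner.coef_)                     # zero-weight models are dropped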
  9. By: Kathrin Glau; Linus Wunderlich
    Abstract: We propose the deep parametric PDE method to solve high-dimensional parametric partial differential equations. A single neural network approximates the solution of a whole family of PDEs after being trained without the need for sample solutions. As a practical application, we compute option prices in the multivariate Black-Scholes model. After a single training phase, the prices for different times, states and model parameters are available in milliseconds. We evaluate the accuracy in the price and a generalisation of the implied volatility with examples of up to 25 dimensions. A comparison with alternative machine learning approaches confirms the effectiveness of the approach.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.06211&r=all
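    Sketch: The core object is a single network taking time, state and model parameters jointly as input, so that one trained network covers a whole family of PDEs. A minimal PyTorch skeleton of such a network (the architecture details are our assumptions, not the paper's):

      import torch
      import torch.nn as nn

      class ParametricPricer(nn.Module):
          # u(t, x, theta): one network for all times, states and parameters.
          def __init__(self, dim_state, dim_param, width=64):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(1 + dim_state + dim_param, width), nn.Tanh(),
                  nn.Linear(width, width), nn.Tanh(),
                  nn.Linear(width, 1),
              )

          def forward(self, t, x, theta):
              return self.net(torch.cat([t, x, theta], dim=-1))

      u = ParametricPricer(dim_state=5, dim_param=3)
      price = u(torch.rand(10, 1), torch.rand(10, 5), torch.rand(10, 3))
      # Training would penalize the Black-Scholes PDE residual, computed by
      # differentiating u with respect to t and x via torch.autograd, plus a
      # terminal payoff condition; no precomputed sample prices are needed.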
  10. By: Mykola Babiak; Jozef Barunik
    Abstract: We study dynamic portfolio choice of a long-horizon investor who uses deep learning methods to predict equity returns when forming optimal portfolios. Our results show statistically and economically significant benefits from using deep learning to form optimal portfolios through certainty equivalent returns and Sharpe ratios. Return predictability via deep learning also generates substantially improved portfolio performance across different subsamples, particularly during recessionary periods. These gains are robust to including transaction costs, short-selling and borrowing constraints.
    Keywords: return predictability; portfolio allocation; machine learning; neural networks; empirical asset pricing
    JEL: C45 C53 E37 G11 G17
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:cer:papers:wp677&r=all
  11. By: Hinrichs, Nils; Kolbe, Jens; Werwatz, Axel
    Abstract: In this paper, we apply Ridge Regression, the Lasso and the Elastic Net to a rich and reliable data set of condominiums sold in Berlin, Germany, between 1996 and 2013. We compare their predictive performance in a rolling-window design to that of a simple linear OLS procedure. Our results suggest that Ridge Regression, the Lasso and the Elastic Net show potential as AVM procedures but need to be handled with care because of their uneven prediction performance. At least in our application, these procedures are not the "automated" solution to Automated Valuation Modeling that they may seem to be.
    Keywords: Automated valuation, Machine learning, Elastic Net, Forecast performance
    JEL: R31 C14
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:forlwp:222020&r=all
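    Sketch: The rolling-window horse race described in the abstract can be set up as follows (synthetic stand-in data; window length and tuning grids are placeholders):

      import numpy as np
      from sklearn.linear_model import (ElasticNetCV, LassoCV, LinearRegression,
                                        RidgeCV)

      rng = np.random.default_rng(4)
      n, p = 600, 25
      X = rng.normal(size=(n, p))                    # stand-in hedonic features
      y = X[:, :5] @ rng.normal(size=5) + rng.normal(size=n)  # stand-in prices

      window = 400
      models = {
          "ols": LinearRegression(),
          "ridge": RidgeCV(alphas=np.logspace(-3, 3, 13)),
          "lasso": LassoCV(cv=5),
          "enet": ElasticNetCV(cv=5),
      }
      errors = {name: [] for name in models}
      for start in range(0, n - window - 1, 25):     # roll the window forward
          tr = slice(start, start + window)
          te = start + window                        # predict the next sale
          for name, m in models.items():
              m.fit(X[tr], y[tr])
              errors[name].append(float((m.predict(X[te:te + 1])[0] - y[te]) ** 2))
      for name, e in errors.items():
          print(name, np.mean(e))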
  12. By: DE NIGRIS Sarah (European Commission - JRC); CRAGLIA Massimo (European Commission - JRC); NEPELSKI Daniel (European Commission - JRC); HRADEC Jiri (European Commission - JRC); GOMEZ-GONZALES Emilio; GOMEZ GUTIERREZ Emilia (European Commission - JRC); VAZQUEZ-PRADA BAILLET Miguel (European Commission - JRC); RIGHI Riccardo (European Commission - JRC); DE PRATO Giuditta (European Commission - JRC); LOPEZ COBO Montserrat (European Commission - JRC); SAMOILI Sofia (European Commission - JRC); CARDONA Melisande (European Commission - JRC)
    Abstract: This document presents a sectoral analysis of AI in health and healthcare for AI Watch, the knowledge service of the European Commission monitoring the development, uptake and impact of Artificial Intelligence for Europe. Its main aim is to act as a benchmark for future editions of the report, so that changes in the uptake and impact of AI in healthcare can be assessed over time, in line with the mission of AI Watch. The report recognises that we are still at an early stage in the adoption of AI and that AI offers many opportunities: in the short term, for improved efficiency in administrative and operational processes, and in the medium to long term, for clinical applications, patient care, and increased citizen empowerment. At the same time, AI applications in this sensitive sector raise many ethical and societal issues, and shaping the direction of development so as to maximise the benefits while reducing the risks is a key issue. In the global context, Europe is well positioned, with a strong research base and excellent health data, which are the prerequisites for the development of beneficial AI applications. Where Europe is less well placed is in translating research and innovation into industrial applications, and in venture capital funding able to support innovative companies in setting themselves up and scaling up once successful. There are, however, noticeable exceptions, such as BioNTech, which is leading the development of one of the COVID-19 vaccines. It should also be noted that many AI-enabled health start-ups are in the area of drug discovery, i.e. the domain of BioNTech. Investment in education and training of the healthcare workforce, as well as creating environments for multidisciplinary exchange of knowledge between software developers and health practitioners, are other key areas. The report recognises that many important policy developments already in the making will shape future directions, including the European Strategy for Data, which is setting up a common data space for health; a risk-based regulatory framework for AI to be put in place by the end of 2020; and the forthcoming launch of the Horizon Europe programme as well as the Digital Europe Programme, with large investments in AI, computing infrastructure, cybersecurity and training. The COVID-19 crisis has also acted as a booster to the adoption of AI in health and the digital transition of business, research, education and public administration. Furthermore, the unprecedented investments of the Recovery Plan agreed in July 2020 may fuel development in digital technologies and health beyond expectation. We are therefore at the junction of a potentially extraordinary period of change, which we will be able to measure in future years against the baseline set by this report.
    Keywords: artificial intelligence, health, health care, technology uptake
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc122675&r=all
  13. By: Rodríguez-García, Jair Hissarly; Venegas-Martínez, Francisco
    Abstract: The efficient and transparent granting of microcredit through digital platforms to individuals who carry out economic activities, who seek to maintain their own employment and that of their workers, and who lack access to the conventional financial system is, without a doubt, an urgent problem to be solved in the health crisis Mexico is currently going through. This research develops several credit-risk models and strategies to promote credit inclusion in Mexico in a fair and sustainable manner in an environment of uncertainty generated by the present and expected ravages of the COVID-19 pandemic. To this end, the data science approach of machine learning is used; in particular, the tools employed are decision tree regression, random forests, radial basis functions, boosting, K-Nearest Neighbor (KNN), and neural networks.
    Keywords: credit risk, data science, credit markets, financial institutions, financial inclusion
    JEL: G23
    Date: 2021–01–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:105133&r=all
  14. By: Daniel Poh; Bryan Lim; Stefan Zohren; Stephen Roberts
    Abstract: The success of a cross-sectional systematic strategy depends critically on accurately ranking assets prior to portfolio construction. Contemporary techniques perform this ranking step either with simple heuristics or by sorting the outputs of standard regression or classification models, which have been demonstrated to be sub-optimal for ranking in other domains (e.g. information retrieval). To address this deficiency, we propose a framework to enhance cross-sectional portfolios by incorporating learning-to-rank algorithms, which improve ranking accuracy by learning pairwise and listwise structures across instruments. Using cross-sectional momentum as a demonstrative case study, we show that modern machine learning ranking algorithms can substantially improve the trading performance of cross-sectional strategies, boosting Sharpe ratios approximately threefold relative to traditional approaches.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.07149&r=all
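    Sketch: The pairwise idea is to learn a scoring function that is penalized whenever it orders a pair of instruments against their realized returns. A minimal linear RankNet-style trainer (features, settings and names are illustrative, not the paper's):

      import numpy as np

      def fit_pairwise_ranker(X, y, n_steps=5000, lr=0.01, seed=0):
          # X: (N, d) instrument features; y: (N,) subsequent returns that
          # define the target ordering. Logistic pairwise (RankNet-style) loss.
          rng = np.random.default_rng(seed)
          w = np.zeros(X.shape[1])
          for _ in range(n_steps):
              i, j = rng.choice(len(y), size=2, replace=False)
              if y[i] == y[j]:
                  continue
              sign = 1.0 if y[i] > y[j] else -1.0
              margin = sign * (X[i] - X[j]) @ w
              # gradient step on log(1 + exp(-margin))
              w += lr * sign * (X[i] - X[j]) / (1.0 + np.exp(margin))
          return w

      rng = np.random.default_rng(5)
      X = rng.normal(size=(100, 8))                # e.g. momentum features
      y = X[:, 0] + 0.5 * rng.normal(size=100)     # stand-in future returns
      w = fit_pairwise_ranker(X, y)
      ranking = np.argsort(-(X @ w))               # long the top, short the bottom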
  15. By: NARITA Yusuke; AIHARA Shunsuke; SAITO Yuta; MATSUTANI Megumi; YATA Kohei
    Abstract: From public policy to business, machine learning and other algorithms produce a growing portion of treatment decisions and recommendations. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to characterize the sources of causal-effect identification for a class of stochastic and deterministic algorithms. This identification result translates into consistent estimators of causal effects and the counterfactual performance of new algorithms. We apply our method to improve a large-scale fashion e-commerce platform (ZOZOTOWN). We conclude by providing public policy applications.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:eti:rdpsjp:20045&r=all
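    Sketch: The identification idea, namely that an algorithm's recommendation is quasi-random conditional on its observable inputs and can therefore instrument the treatment actually taken, can be illustrated with just-identified IV on simulated data:

      import numpy as np

      rng = np.random.default_rng(6)
      n = 5000
      W = rng.normal(size=(n, 3))                      # the algorithm's inputs
      Z = (W @ np.array([0.5, -0.2, 0.1])              # algorithmic recommendation
           + rng.normal(size=n) > 0).astype(float)
      D = (0.8 * Z + 0.3 * rng.normal(size=n) > 0.4).astype(float)  # uptake
      y = 2.0 * D + W @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

      X = np.column_stack([np.ones(n), D, W])          # structural regressors
      Zm = np.column_stack([np.ones(n), Z, W])         # instruments, controlling W
      beta = np.linalg.solve(Zm.T @ X, Zm.T @ y)       # just-identified IV
      print(beta[1])                                   # close to the true effect 2.0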
  16. By: Fernando Martinez-Plumed (European Commission - JRC); Emilia Gomez Gutierrez (European Commission - JRC); Jose Hernandez-Orallo (Universitat Politècnica de Valencia)
    Abstract: Artificial Intelligence (AI) offers the potential to transform our lives in radical ways. However, the main unanswered questions about this foreseen transformation are when and how this is going to happen. Not only do we lack the tools to determine what achievements will be attained in the near future, but we even underestimate what various technologies in AI are capable of today. Many so-called breakthroughs in AI are simply associated with highly-cited research papers or good performance on some particular benchmarks. Certainly, the translation from papers and benchmark performance to products is faster in AI than in other non-digital sectors. However, it is still the case that research breakthroughs do not directly translate to a technology that is ready to use in real-world environments. This document describes an exemplar-based methodology to categorise and assess several AI research and development technologies by mapping them into Technology Readiness Levels (TRL), i.e., maturity and availability levels. We first interpret the nine TRLs in the context of AI and identify different categories in AI to which they can be assigned. We then introduce new bidimensional plots, called readiness-vs-generality charts, where we see that higher TRLs are achievable for low-generality technologies focusing on narrow or specific abilities, while low TRLs are still out of reach for more general capabilities. We include numerous examples of AI technologies in a variety of fields, and show their readiness-vs-generality charts, serving as exemplars. Finally, we use the dynamics of several AI technology exemplars at different generality layers and moments of time to forecast some short-term and mid-term trends for AI.
    Keywords: Artificial Intelligence, Technology Readiness Level, Technology
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc122014&r=all
  17. By: Le Trung Hieu
    Abstract: Stock portfolio optimization is the process of continually redistributing money across a pool of stocks. In this paper, we formulate the problem so that Reinforcement Learning can be applied to the task properly. To maintain realistic assumptions about the market, we also incorporate transaction costs and a risk factor into the state. On top of that, we apply several state-of-the-art Deep Reinforcement Learning algorithms for comparison. Since the action space is continuous, the formulation was tested under a family of state-of-the-art continuous policy-gradient algorithms: Deep Deterministic Policy Gradient (DDPG), Generalized Deterministic Policy Gradient (GDPG) and Proximal Policy Optimization (PPO), of which the former two perform much better than the last. Next, we present an end-to-end solution for the task, with Minimum Variance Portfolio Theory for stock subset selection and the Wavelet Transform for extracting multi-frequency data patterns. We discuss observations and hypotheses about the results, as well as possible future research directions.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.06325&r=all
  18. By: Arnoud W.A. Boot; Peter Hoffmann; Luc Laeven; Lev Ratnovski
    Abstract: We study the effects of technological change on financial intermediation, distinguishing between innovations in information (data collection and processing) and communication (relationships and distribution). Both follow historic trends towards an increased use of hard information and less in-person interaction, which are accelerating rapidly. We point to more recent innovations, such as the combination of data abundance and artificial intelligence, and the rise of digital platforms. We argue that in particular the rise of new communication channels can lead to the vertical and horizontal disintegration of the traditional bank business model. Specialized providers of financial services can chip away activities that do not rely on access to balance sheets, while platforms can interject themselves between banks and customers. We discuss limitations to these challenges, and the resulting policy implications.
    Keywords: Financial services; Banking; Communications in revenue administration; Technological innovation; Financial statements
    Date: 2020–08–07
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2020/161&r=all
  19. By: Naudé, Wim (University College Cork); Dimitri, Nicola (University of Siena)
    Abstract: The possible negative consequences of Artificial Intelligence (AI) have given rise to calls for public policy to ensure that it is safe, and to prevent improper use and misuse. Human-centered AI (HCAI) draws on ethical principles and puts forth actionable guidelines in this regard. So far, however, these have lacked strong incentives for adherence. In this paper we contribute to the debate on HCAI by arguing that public procurement and innovation (PPaI) can be used to incentivize HCAI. We dissect the literature on PPaI and HCAI and provide a simple theoretical model to show that procurement of innovative AI solutions underpinned by ethical considerations can provide the incentives that scholars have called for. Our argument in favor of PPaI for HCAI is also an argument for the more innovative use of public procurement, and is consistent with calls for mission-oriented and challenge-led innovation policies. Our paper also contributes to the emerging literature on public entrepreneurship, given that PPaI for HCAI can advance the transformation of society, but only under uncertainty.
    Keywords: artificial intelligence, data, innovation, public procurement, ethics
    JEL: H57 D02 O38 O32
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp14021&r=all
  20. By: Andrew Bennett; Nathan Kallus
    Abstract: The conditional moment problem is a powerful formulation for describing structural causal parameters in terms of observables, a prominent example being instrumental variable regression. A standard approach is to reduce the problem to a finite set of marginal moment conditions and apply the optimally weighted generalized method of moments (OWGMM), but this requires that we know a finite set of identifying moments, can still be inefficient even when identifying, and can be unwieldy and impractical if we use a growing sieve of moments. Motivated by a variational minimax reformulation of OWGMM, we define a very general class of estimators for the conditional moment problem, which we term the variational method of moments (VMM) and which naturally enables controlling infinitely many moments. We provide a detailed theoretical analysis of multiple VMM estimators, including ones based on kernel methods and neural networks, and provide appropriate conditions under which these estimators are consistent, asymptotically normal, and semiparametrically efficient in the full conditional moment model. This is in contrast to other recently proposed methods for solving conditional moment problems based on adversarial machine learning, which do not incorporate optimal weighting, do not establish asymptotic normality, and are not semiparametrically efficient.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.09422&r=all
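    Sketch: The sense in which VMM controls infinitely many moments rests on a standard equivalence (notation ours, not necessarily the paper's): the conditional moment restriction holds if and only if every test-function-weighted unconditional moment vanishes,

      \mathbb{E}\left[\psi(Z;\theta_0)\mid X\right] = 0
      \quad\Longleftrightarrow\quad
      \mathbb{E}\left[f(X)^{\top}\psi(Z;\theta_0)\right] = 0
      \ \text{ for all suitable test functions } f.

    A minimax estimator that searches over a rich class of test functions f thus implicitly enforces all of these unconditional moments at once; instrumental variable regression is the special case psi(Z; theta) = Y - g(T; theta) with X the instrument vector.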
  21. By: Jiequn Han; Ruimeng Hu
    Abstract: Stochastic control problems with delay are challenging due to the path-dependent feature of the system and thus its intrinsically high dimension. In this paper, we propose and systematically study deep neural network-based algorithms to solve stochastic control problems with delay features. Specifically, we employ neural networks for sequence modeling (e.g., recurrent neural networks such as long short-term memory) to parameterize the policy and optimize the objective function. The proposed algorithms are tested on three benchmark examples: a linear-quadratic problem, optimal consumption with fixed finite delay, and portfolio optimization with complete memory. In particular, we notice that the architecture of recurrent neural networks naturally captures the path-dependent feature with much flexibility and yields better performance, with more efficient and stable training of the network, than feedforward networks. This superiority is especially evident in the case of portfolio optimization with complete memory, which features infinite delay.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.01385&r=all
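    Sketch: The key architectural point is that a recurrent network's hidden state summarizes the whole path, which a feedforward policy cannot see. A minimal PyTorch parameterization (sizes and the LSTM choice for the sequence model are illustrative):

      import torch
      import torch.nn as nn

      class LSTMPolicy(nn.Module):
          # The LSTM hidden state carries the path history (X_s)_{s<=t},
          # giving the policy access to delayed, path-dependent features.
          def __init__(self, dim_state, dim_action, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(dim_state, hidden, batch_first=True)
              self.head = nn.Linear(hidden, dim_action)

          def forward(self, paths):                # paths: (batch, T, dim_state)
              h, _ = self.lstm(paths)
              return self.head(h)                  # one action per time step

      policy = LSTMPolicy(dim_state=1, dim_action=1)
      actions = policy(torch.randn(16, 50, 1))     # 16 simulated paths, 50 steps
      # Training: simulate the controlled dynamics forward, accumulate the
      # objective along each path, and optimize by stochastic gradient descent.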
  22. By: Pumplun, Luisa; Fecho, Mariska; Islam, Nihal; Buxmann, Peter
    Date: 2021–01–05
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:124660&r=all
  23. By: Maria Lopez Conde; Ian Twinn
    Keywords: Transport - Transport and Trade Logistics; Transport - Transport Economics Policy and Planning; Transport - Roads & Highways; Transport - Railways; Urban Development - Transport in Urban Areas; Information and Communication Technologies - Information Technology
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:wbk:wboper:33387&r=all
  24. By: Ziyuan Xia; Jeffery Chen
    Abstract: At the beginning of the COVID-19 outbreak in March, we observed one of the largest stock market crashes in history; the months that followed saw a volatile, bullish climb back to pre-pandemic levels and beyond. In this paper we study stock market behavior during the initial few months of the COVID-19 pandemic in relation to COVID-19 sentiment. Using text sentiment analysis of Twitter data, we look at tweets that contain key words related to the COVID-19 pandemic and at their sentiment, to understand whether sentiment can be used as an indicator of stock market performance. Previous research has applied natural language processing and text sentiment analysis to understanding stock market performance; given how pervasive the impact of COVID-19 is on the economy, we extend the application of these techniques to the relationship between COVID-19 and stock market performance. Our findings show a strong relationship between stock market performance and COVID-19 sentiment derived from tweets, which could be used to predict stock market performance in the future.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.02587&r=all
  25. By: Sturm, Timo; Fecho, Mariska; Buxmann, Peter
    Date: 2021–01–07
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:124702&r=all
  26. By: Sturm, Timo; Fecho, Mariska; Buxmann, Peter
    Date: 2021–01–05
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:124636&r=all
  27. By: Xavier Warin
    Abstract: We propose deep neural network algorithms to calculate the efficient frontier in some Mean-Variance and Mean-CVaR portfolio optimization problems. We show that we are able to deal with such problems when both the dimension of the state and the dimension of the control are high. Adding further constraints, we compare different formulations and show that a new projected feedforward network is able to handle global constraints on the portfolio weights while outperforming classical penalization methods. All developed formulations are compared with one another; depending on the problem and its dimension, some formulations may be preferred.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2101.02044&r=all
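    Sketch: A "projected" network enforces portfolio constraints by mapping raw outputs through a projection layer. For long-only weights summing to one, the classic Euclidean projection onto the simplex is one such layer (our illustration; the paper's constraints and projection may differ):

      import numpy as np

      def project_to_simplex(v):
          # Euclidean projection of v onto {w : w >= 0, sum(w) = 1}
          u = np.sort(v)[::-1]
          css = np.cumsum(u)
          j = np.arange(1, len(v) + 1)
          rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
          theta = (1.0 - css[rho]) / (rho + 1.0)
          return np.maximum(v + theta, 0.0)

      raw = np.array([0.3, -1.2, 2.0, 0.1])      # raw network outputs
      w = project_to_simplex(raw)
      print(w, w.sum())                          # nonnegative weights, sum = 1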
  28. By: Mariano Zeron; Ignacio Ruiz
    Abstract: Inspired by a series of remarkable papers in recent years that use Deep Neural Nets to substantially speed up the calibration of pricing models, we investigate the use of Chebyshev Tensors instead of Deep Neural Nets. Given that Chebyshev Tensors can, under certain circumstances, be more efficient than Deep Neural Nets at exploring the input space of the function to be approximated, owing to their exponential convergence, the calibration of pricing models seems, a priori, a good case for Chebyshev Tensors. In this piece of research, we built Chebyshev Tensors, either directly or with the help of the Tensor Extension Algorithms, to tackle the computational bottleneck associated with calibrating the rough Bergomi volatility model. The results are encouraging: the accuracy of model calibration via Chebyshev Tensors is similar to that obtained with Deep Neural Nets, while the building effort was between 5 and 100 times lower in the experiments run. Our tests indicate that, when using Chebyshev Tensors, calibration of the rough Bergomi volatility model is around 40,000 times more efficient than brute-force calibration using the pricing function directly.
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2012.07440&r=all
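    Sketch: In one dimension the build-then-evaluate idea reduces to Chebyshev interpolation of the expensive pricing map; the tensor constructions in the paper extend this to several calibration parameters at once (the function and degree below are placeholders):

      import numpy as np
      from numpy.polynomial import chebyshev as C

      def build_cheb_proxy(f, a, b, degree=20):
          # Sample f once at Chebyshev points ("build"), then evaluate the
          # fitted series cheaply many times during calibration.
          k = np.arange(degree + 1)
          nodes = np.cos(np.pi * (k + 0.5) / (degree + 1))  # Chebyshev nodes
          x = 0.5 * (b - a) * nodes + 0.5 * (b + a)         # map to [a, b]
          coeffs = C.chebfit(x, f(x), degree)
          return lambda q: C.chebval(q, coeffs)

      expensive = lambda v: np.exp(-3.0 * v) * np.sin(8.0 * v)  # stand-in pricer
      proxy = build_cheb_proxy(expensive, 0.0, 1.0)
      grid = np.linspace(0.0, 1.0, 7)
      print(np.max(np.abs(proxy(grid) - expensive(grid))))  # small approx. error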

This nep-big issue is ©2021 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.