nep-big New Economics Papers
on Big Data
Issue of 2021‒04‒19
25 papers chosen by
Tom Coupé
University of Canterbury

  1. Measuring National Life Satisfaction with Music By Benetos, Emmanouil; Ragano, Alessandro; Sgroi, Daniel; Tuckwell, Anthony
  2. Attacking and Defending Multiple Valuable Secrets in a Big Data World By Kai A. Konrad
  3. Financial Markets Prediction with Deep Learning By Jia Wang; Tong Sun; Benyuan Liu; Yu Cao; Degang Wang
  4. Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models By Przemysław Biecek; Marcin Chlebus; Janusz Gajda; Alicja Gosiewska; Anna Kozak; Dominik Ogonowski; Jakub Sztachelski; Piotr Wojewnik
  5. Event-Driven LSTM For Forex Price Prediction By Ling Qi; Matloob Khushi; Josiah Poon
  6. Using Machine Learning and Qualitative Interviews to Design a Five-Question Women's Agency Index By Seema Jayachandran; Monica Biradavolu; Jan Cooper
  7. Local mortality estimates during the COVID-19 pandemic in Italy By Augusto Cerqua; Roberta Di Stefano; Marco Letta; Sara Miccoli
  8. Forecasting UK inflation bottom up By Joseph, Andreas; Kalamara, Eleni; Kapetanios, George; Potjagailo, Galina
  9. A comparative study of Different Machine Learning Regressors For Stock Market Prediction By Nazish Ashfaq; Zubair Nawaz; Muhammad Ilyas
  10. Uncovering commercial activity in informal cities By Daniel Straulino; Juan C. Saldarriaga; Jairo A. Gómez; Juan C. Duque; Neave O'Clery
  11. Is the Covid equity bubble rational? A machine learning answer By Jean Jacques Ohana; Eric Benhamou; David Saltiel; Beatrice Guez
  12. Recurrent Dictionary Learning for State-Space Models with an Application in Stock Forecasting By Shalini Sharma; Víctor Elvira; Emilie Chouzenoux; Angshul Majumdar
  13. Autocalibration and Tweedie-dominance for insurance pricing with machine learning By Denuit, Michel; Charpentier, Arthur; Trufin, Julien
  14. Monetary policy, Twitter and financial markets: evidence from social media traffic By Donato Masciandaro; Davide Romelli; Gaia Rubera
  15. CLVSA: A Convolutional LSTM Based Variational Sequence-to-Sequence Model with Attention for Predicting Trends of Financial Markets By Jia Wang; Tong Sun; Benyuan Liu; Yu Cao; Hongwei Zhu
  16. Do Workfare Programs Live Up to Their Promises? Experimental Evidence from Cote D’Ivoire By Marianne Bertrand; Bruno Crépon; Alicia Marguerie; Patrick Premand
  17. Making Sense of the AI Landscape By Margherita Pagani; Renaud Champion
  18. Profitability Analysis in Stock Investment Using an LSTM-Based Deep Learning Model By Jaydip Sen; Abhishek Dutta; Sidra Mehtab
  19. ICT's Wide Web: a System-Level Analysis of ICT's Industrial Diffusion with Algorithmic Links By Ekaterina Prytkova
  20. Analysis of bank leverage via dynamical systems and deep neural networks By Fabrizio Lillo; Giulia Livieri; Stefano Marmi; Anton Solomko; Sandro Vaienti
  21. Boosting cost-complexity pruned trees On Tweedie responses: the ABT machine By Trufin, Julien; Denuit, Michel
  22. The Effect of Sport in Online Dating: Evidence from Causal Machine Learning By Daniel Boller; Michael Lechner; Gabriel Okasa
  23. Black-box model risk in finance By Samuel N. Cohen; Derek Snow; Lukasz Szpruch
  24. Assessing the Impact of COVID-19 on Trade: a Machine Learning Counterfactual Analysis By Marco Dueñas; Víctor Ortiz; Massimo Riccaboni; Francesco Serti
  25. Response versus gradient boosting trees, GLMs and neural networks under Tweedie loss and log-link By Hainaut, Donatien; Trufin, Julien; Denuit, Michel

  1. By: Benetos, Emmanouil (Queen Mary, University of London); Ragano, Alessandro (The Alan Turing Institute); Sgroi, Daniel (University of Warwick); Tuckwell, Anthony (University of Warwick)
    Abstract: National life satisfaction is an important way to measure societal well-being and since 2011 has been used to judge the effectiveness of government policy across the world. However, a paucity of historical data limits long-run comparisons with other measures. We construct a new measure based on the emotional content of music. We first train a machine learning model using 191 different audio features embedded within music and use this model to construct a long-run Music Valence Index derived from chart-topping songs. This index correlates strongly and significantly with survey-based life satisfaction and outperforms an equivalent text-based measure. Our results have implications for the role of music in society, and validate a new use of music as a long-run measure of public sentiment.
    Keywords: historical subjective wellbeing, life satisfaction, music, sound data, language, big data
    JEL: C8 N3 N4 O1 D6
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp14258&r=all
  2. By: Kai A. Konrad
    Abstract: This paper studies the attack-and-defence game between a web user and a whole set of players over this user’s ‘valuable secrets.’ The number and type of these valuable secrets are the user’s private information. Attempts to tap information as well as privacy protection are costly. The multiplicity of secrets is of strategic value for the holders of these secrets. Users with few secrets keep their secrets private with some probability, even though they do not protect them. Users with many secrets protect their secrets at a cost that is smaller than the value of the secrets protected. The analysis also accounts for multiple redundant information channels with cost asymmetries, relating the analysis to attack-and-defence games with a weakest link.
    Keywords: OR in societal problem analysis, big-data, privacy, web user, conflict, information rents, valuable secrets, attack-and-defence, multiple attackers, multiple defence items, multi-front contest.
    JEL: D18 D72 D74 D82
    Date: 2019–05
    URL: http://d.repec.org/n?u=RePEc:mpi:wpaper:tax-mpg-rps-2019-05&r=all
  3. By: Jia Wang; Tong Sun; Benyuan Liu; Yu Cao; Degang Wang
    Abstract: Financial markets are difficult to predict due to their complex system dynamics. Although some recent studies use machine learning techniques for financial market prediction, they do not offer satisfactory performance on financial returns. We propose a novel one-dimensional convolutional neural network (CNN) model to predict financial market movement. The customized one-dimensional convolutional layers scan financial trading data through time, while different types of data, such as prices and volume, share parameters (kernels) with each other. Our model automatically extracts features instead of relying on traditional technical indicators, and thus avoids biases caused by the selection of technical indicators and their pre-defined coefficients. We evaluate the performance of our prediction model with strict backtesting on historical trading data of six futures from January 2010 to October 2017. The experimental results show that our CNN model can extract more generalized and informative features than traditional technical indicators, and achieves more robust and profitable financial performance than previous machine learning approaches.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.05413&r=all
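    A minimal PyTorch sketch of the kind of architecture described in this abstract: one-dimensional convolutions that slide along the time axis over multi-channel trading data (e.g. prices and volume as input channels feeding shared kernels). Layer sizes, window length and the channel layout are illustrative assumptions, not the authors' specification.

        import torch
        import torch.nn as nn

        class PriceCNN(nn.Module):
            """1-D CNN scanning a window of trading data through time.
            Input shape: (batch, channels, window), with channels such as
            (open, high, low, close, volume)."""
            def __init__(self, n_channels=5, window=60):
                super().__init__()
                self.features = nn.Sequential(
                    # kernels slide along time; every input channel feeds the
                    # same convolutions, so the channels share parameters
                    nn.Conv1d(n_channels, 16, kernel_size=5), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=3), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.head = nn.Linear(32, 2)  # predicted movement: up / down

            def forward(self, x):
                return self.head(self.features(x).squeeze(-1))

        model = PriceCNN()
        logits = model(torch.randn(8, 5, 60))  # 8 samples, 5 channels, 60 bars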
  4. By: Przemysław Biecek; Marcin Chlebus; Janusz Gajda; Alicja Gosiewska; Anna Kozak; Dominik Ogonowski; Jakub Sztachelski; Piotr Wojewnik
    Abstract: The rapid development of advanced modelling techniques gives an opportunity to build increasingly accurate tools. As usual, however, everything comes at a price, and here the price is the loss of a model's interpretability as its accuracy and precision grow. For managers who must control and effectively manage credit risk, and for regulators who must be convinced of model quality, this price is too high. In this paper, we show how to take credit scoring analytics to the next level: we compare various predictive models (logistic regression, logistic regression with weight-of-evidence transformations, and modern artificial intelligence algorithms) and show that advanced tree-based models give the best results in predicting client default. More importantly, we also show how to augment advanced models with techniques that make them interpretable and more accessible to credit risk practitioners, resolving the crucial obstacle to widespread deployment of more complex 'black box' models such as random forests, gradient boosted or extreme gradient boosted trees. All this is demonstrated on a large dataset obtained from the Polish Credit Bureau, to which all banks and most lending companies in the country report credit files; in this paper the data from lending companies were used. The paper then compares state-of-the-art best practices in credit risk modelling with advanced modern statistical tools boosted by the latest developments in the interpretability and explainability of artificial intelligence algorithms. We believe this is a valuable contribution to the presentation of different modelling tools and, more importantly, to showing which methods can be used to gain insight into and understanding of AI methods in a credit risk context.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.06735&r=all
  5. By: Ling Qi; Matloob Khushi; Josiah Poon
    Abstract: The majority of studies in the field of AI-guided financial trading focus on purely applying machine learning algorithms to continuous historical price and technical analysis data. However, due to the non-stationary and highly volatile nature of the Forex market, most algorithms fail when put into real practice. We develop novel event-driven features which indicate a change of trend in direction. We then build deep learning models to predict a retracement point, providing a perfect entry point to gain maximum profit. We use a simple recurrent neural network (RNN) as our baseline model and compare it with long short-term memory (LSTM), bidirectional long short-term memory (BiLSTM) and gated recurrent unit (GRU) models. Our experimental results show that the proposed event-driven feature selection together with the proposed models can form a robust prediction system which supports accurate trading strategies with minimal risk. Our best model on 15-minute interval data for the EUR/GBP currency pair achieved RME 0.006x10^(-3), RMSE 2.407x10^(-3), MAE 1.708x10^(-3) and MAPE 0.194%, outperforming previous studies.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.01499&r=all
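    The accuracy figures quoted above use standard error metrics; here is a minimal NumPy sketch of RMSE, MAE and MAPE (RME is omitted, as its definition is not given in the abstract):

        import numpy as np

        def rmse(y_true, y_pred):
            return np.sqrt(np.mean((y_true - y_pred) ** 2))

        def mae(y_true, y_pred):
            return np.mean(np.abs(y_true - y_pred))

        def mape(y_true, y_pred):
            # mean absolute percentage error, reported in percent
            return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))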
  6. By: Seema Jayachandran; Monica Biradavolu; Jan Cooper
    Abstract: We propose a new method to design a short survey measure of a complex concept such as women’s agency. The approach combines mixed-methods data collection and machine learning. We select the best survey questions based on how strongly correlated they are with a “gold standard” measure of the concept derived from qualitative interviews. In our application, we measure agency for 209 women in Haryana, India, first, through a semi-structured interview and, second, through a large set of close-ended questions. We use qualitative coding methods to score each woman’s agency based on the interview, which we treat as her true agency. To identify the close-ended questions most predictive of the “truth,” we apply statistical algorithms that build on LASSO and random forest but constrain how many variables are selected for the model (five in our case). The resulting five-question index is as strongly correlated with the coded qualitative interview as is an index that uses all of the candidate questions. This approach of selecting survey questions based on their statistical correspondence to coded qualitative interviews could be used to design short survey modules for many other latent constructs.
    Keywords: women’s empowerment, survey design, feature selection, psychometrics
    JEL: C83 D13 J16 O12
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_8984&r=all
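    A schematic sketch of the constrained selection step described above, ranking candidate questions by random-forest importance and keeping the top five. The data, column count and the simple averaging into an index are illustrative assumptions, not the authors' exact algorithm (which builds on LASSO and random forest with a cardinality constraint):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.integers(0, 5, size=(209, 50)).astype(float)  # candidate questions
        y = rng.normal(size=209)            # coded agency score from the interviews

        forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
        # keep the five questions most predictive of the interview-based score
        top5 = np.argsort(forest.feature_importances_)[-5:]
        index = X[:, top5].mean(axis=1)     # simple five-question index
        print("selected question indices:", sorted(top5))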
  7. By: Augusto Cerqua (Sapienza Università di Roma); Roberta Di Stefano (Sapienza Università di Roma); Marco Letta (Sapienza Università di Roma); Sara Miccoli (Sapienza Università di Roma)
    Abstract: Estimates of the real death toll of the COVID-19 pandemic have proven to be problematic in many countries, Italy being no exception. Mortality estimates at the local level are even more uncertain as they require stringent conditions, such as granularity and accuracy of the data at hand, which are rarely met. The ‘official’ approach adopted by public institutions to estimate the ‘excess mortality’ during the pandemic draws on a comparison between observed all-cause mortality data for 2020 and averages of mortality figures in the past years for the same period. In this paper, we apply the recently developed machine learning control method to build a more realistic counterfactual scenario of mortality in the absence of COVID-19. We demonstrate that supervised machine learning techniques outperform the official method by substantially improving prediction accuracy of local mortality in ‘ordinary’ years, especially in small- and medium-sized municipalities. We then apply the best-performing algorithms to derive estimates of local excess mortality for the period between February and June 2020. Such estimates allow us to provide insights about the demographic evolution of the pandemic throughout the country. To help improve diagnostic and monitoring efforts, our dataset is freely available to the research community.
    Keywords: COVID-19, coronavirus, local mortality, Italy, machine learning, counterfactual building
    JEL: C21 C52 I10 J11
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:ahy:wpaper:wp6&r=all
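    The counterfactual logic can be sketched as follows: train a model on 'ordinary' pre-pandemic years only, predict what 2020 mortality would have been without COVID-19, and read excess mortality off the observed-minus-predicted gap. The file name and feature columns below are placeholders, not the authors' data:

        import pandas as pd
        from sklearn.ensemble import GradientBoostingRegressor

        # one row per municipality-year; 'deaths' is all-cause mortality
        df = pd.read_csv("municipal_mortality.csv")        # hypothetical file
        features = ["population", "share_over_65", "deaths_lag1", "deaths_lag2"]

        train = df[df.year < 2020]                         # ordinary years only
        test = df[df.year == 2020]

        model = GradientBoostingRegressor().fit(train[features], train["deaths"])
        counterfactual = model.predict(test[features])     # no-COVID scenario

        excess = test["deaths"].to_numpy() - counterfactual
        print("estimated excess deaths in 2020:", excess.sum())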
  8. By: Joseph, Andreas (Bank of England); Kalamara, Eleni (King’s College London); Kapetanios, George (King’s College London); Potjagailo, Galina (Bank of England)
    Abstract: We forecast CPI inflation in the United Kingdom up to one year ahead using a large set of monthly disaggregated CPI item series combined with a wide set of forecasting tools, including dimensionality reduction techniques, shrinkage methods and non-linear machine learning models. We find that exploiting CPI item series over the period 2011–19 yields strong improvements in forecasting UK inflation against an autoregressive benchmark, above and beyond the gains from macroeconomic predictors. Ridge regression and other shrinkage methods perform best across specifications that include item-level data, yielding gains in relative forecast accuracy of up to 70% at the one-year horizon. Our results suggest that the combination of a large and relevant information set with efficient penalisation is key to good forecasting performance for this problem. We also provide a model-agnostic approach to the general problem of model interpretability in high-dimensional settings, based on model Shapley values, partial re-aggregation and statistical testing. This allows us to identify CPI divisions that consistently drive aggregate inflation forecasts across models and specifications, as well as to assess model differences going beyond forecast accuracy.
    Keywords: Inflation; forecasting; machine learning; state space models; CPI disaggregated data; Shapley values
    JEL: C32 C45 C53 C55 E37
    Date: 2021–03–26
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0915&r=all
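    The core forecasting step, shrinkage regression of future headline inflation on many disaggregated CPI item series, can be sketched as a direct one-year-ahead forecast with ridge. The data here is synthetic and the Bank's actual specification is considerably richer:

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        h = 12                            # forecast horizon in months
        X = np.random.randn(120, 85)      # monthly CPI item inflation rates
        headline = X.mean(axis=1)         # stand-in for aggregate CPI inflation

        # direct forecast: regress inflation at t+h on items observed at t
        model = make_pipeline(StandardScaler(), Ridge(alpha=10.0))
        model.fit(X[:-h], headline[h:])
        forecast = model.predict(X[-1:])  # one-year-ahead prediction
        print("inflation forecast:", forecast[0])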
  9. By: Nazish Ashfaq; Zubair Nawaz; Muhammad Ilyas
    Abstract: For the development of successful share trading strategies, forecasting the course of the stock market index is important. Effective prediction of closing stock prices can guarantee investors attractive benefits. Machine learning algorithms have the ability to process historical stock patterns and forecast closing prices almost reliably. In this article, we intensively study the NASDAQ stock market and choose a portfolio of ten companies belonging to different sectors. The objective is to predict the next day's opening stock price using historical data. To fulfil this task, nine different machine learning regressors are applied to the data and evaluated using MSE and R2 as performance metrics.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.07469&r=all
  10. By: Daniel Straulino; Juan C. Saldarriaga; Jairo A. Gómez; Juan C. Duque; Neave O'Clery
    Abstract: Knowledge of the spatial organisation of economic activity within a city is key to policy concerns. However, in developing cities with high levels of informality, this information is often unavailable. Recent progress in machine learning together with the availability of street imagery offers an affordable and easily automated solution. Here we propose an algorithm that can detect what we call 'visible firms' using street view imagery. Using Medellín, Colombia as a case study, we illustrate how this approach can be used to uncover previously unseen economic activity. Applying spatial analysis to our dataset we detect a polycentric structure with five distinct clusters located in both the established centre and peripheral areas. Comparing the density of visible and registered firms, we find that informal activity concentrates in poor but densely populated areas. Our findings highlight the large gap between what is captured in official data and the reality on the ground.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.04545&r=all
  11. By: Jean Jacques Ohana; Eric Benhamou (MILES - Machine Intelligence and Learning Systems - LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique, LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique); David Saltiel; Beatrice Guez
    Abstract: Is the Covid equity bubble rational? In 2020, stock prices ballooned, with the S&P 500 gaining 16% and the tech-heavy Nasdaq soaring 43%, while fundamentals deteriorated: GDP forecasts fell, sales and revenue estimates shrank, and government deficits grew. To answer this fundamental question with as little bias as possible, we explore a gradient boosting decision trees (GBDT) approach that enables us to crunch numerous variables and let the data speak. We define a crisis regime to distinguish specific downturns in stock markets from normal rising equity markets. We test our approach and report improved accuracy of GBDT over other ML methods. Thanks to Shapley values, we are able to identify the most important features, making this work innovative and a suitable answer to the question of whether the current equity level is justified.
    Date: 2021–04–05
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03189799&r=all
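    A minimal sketch of such a pipeline: a gradient-boosted classifier for the crisis/normal regime plus Shapley values computed with the shap package. Data and features are synthetic placeholders:

        import numpy as np
        import shap
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(1)
        X = rng.normal(size=(1000, 6))    # e.g. valuations, rates, credit spreads
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=1000) > 0).astype(int)

        model = GradientBoostingClassifier().fit(X, y)

        # Shapley values attribute each regime prediction to the input features
        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X)
        importance = np.abs(shap_values).mean(axis=0)
        print("mean |SHAP| per feature:", importance.round(3))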
  12. By: Shalini Sharma (IIIT-Delhi - Indraprastha Institute of Information Technology [New Delhi]); Víctor Elvira (School of Mathematics - University of Edinburgh - University of Edinburgh); Emilie Chouzenoux (OPIS - OPtimisation Imagerie et Santé - CVN - Centre de vision numérique - CentraleSupélec - Université Paris-Saclay - Inria - Institut National de Recherche en Informatique et en Automatique - Inria Saclay - Ile de France - Inria - Institut National de Recherche en Informatique et en Automatique); Angshul Majumdar (IIIT-Delhi - Indraprastha Institute of Information Technology [New Delhi])
    Abstract: In this work, we introduce a new modeling and inferential tool for dynamical processing of time series. The approach is called recurrent dictionary learning (RDL). The proposed model reads as a linear Gaussian Markovian state-space model involving two linear operators, the state evolution and the observation matrices, which we assume to be unknown. These two unknown operators (which can be interpreted as dictionaries) and the sequence of hidden states are jointly learnt via an expectation-maximization algorithm. The RDL model gathers several advantages, namely online processing, probabilistic inference, and the high model expressiveness usually typical of neural networks. RDL is particularly well suited for stock forecasting. Its performance is illustrated on two problems: next-day forecasting (a regression problem) and next-day trading (a classification problem), given past stock market observations. Experimental results show that our proposed method outperforms state-of-the-art stock analysis models such as CNN-TA, MFNN, and LSTM.
    Keywords: Stock Forecasting, Recurrent dictionary learning, Kalman filter, expectation-maximization, dynamical modeling, uncertainty quantification
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03184841&r=all
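    The inferential engine behind such a linear Gaussian state-space model is the Kalman filter; a bare-bones NumPy version of its predict/update recursion is below. Here the state-evolution matrix A and observation matrix C are taken as given, whereas RDL learns them jointly with the states via EM:

        import numpy as np

        def kalman_filter(y, A, C, Q, R, x0, P0):
            """y: (T, dy) observations; returns the filtered state means."""
            x, P = x0, P0
            means = []
            for t in range(len(y)):
                # predict: propagate the state through the evolution matrix A
                x, P = A @ x, A @ P @ A.T + Q
                # update: correct the prediction with the observation via C
                S = C @ P @ C.T + R
                K = P @ C.T @ np.linalg.inv(S)          # Kalman gain
                x = x + K @ (y[t] - C @ x)
                P = (np.eye(len(x)) - K @ C) @ P
                means.append(x)
            return np.array(means)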
  13. By: Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium); Charpentier, Arthur (UQAM); Trufin, Julien (ULB)
    Abstract: Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. Often in practice, there are nevertheless endless debates about the choice of the right loss function to be used to train the machine learning model, as well as about the appropriate metric to assess the performances of competing models. Also, the sum of fitted values can depart from the observed totals to a large extent and this often confuses actuarial analysts. The lack of balance inherent to training models by minimizing deviance outside the familiar GLM with canonical link setting has been empirically documented in Wüthrich (2019, 2020) who attributes it to the early stopping rule in gradient descent methods for model fitting. The present paper aims to further study this phenomenon when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Autocalibration is then proposed as a remedy. This new method to correct for bias adds an extra local GLM step to the analysis. Theoretically, it is shown that it implements the autocalibration concept in pure premium calculation and ensures that balance also holds on a local scale, not only at portfolio level as with existing bias-correction techniques. The convex order appears to be the natural tool to compare competing models, putting a new light on the diagnostic graphs and associated metrics proposed by Denuit et al. (2019).
    Keywords: Risk classification ; Tweedie distribution family ; Concentration curve ; Bregman loss ; Convex order
    Date: 2021–03–04
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021013&r=all
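    For reference, the unit Tweedie deviance minimized when training such models is, for power parameter p outside {1, 2}, the standard expression

        d_p(y, \mu) = 2 \left( \frac{y^{2-p}}{(1-p)(2-p)} - \frac{y \, \mu^{1-p}}{1-p} + \frac{\mu^{2-p}}{2-p} \right),

    which recovers the Poisson deviance as p -> 1 and the gamma deviance as p -> 2. The autocalibration step proposed in the paper is a correction applied after a model has been trained by minimizing this deviance.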
  14. By: Donato Masciandaro; Davide Romelli; Gaia Rubera
    Abstract: How does central bank communication affect financial markets? This paper shows that the monetary policy announcements of three major central banks, i.e. the European Central Bank, the Federal Reserve and the Bank of England, trigger significant discussions on monetary policy on Twitter. Using machine learning techniques we identify Twitter messages related to monetary policy around the release of monetary policy decisions and we build a metric of the similarity between the policy announcement and Twitter traffic before and after the announcement. We interpret large changes in the similarity of tweets and announcements as a proxy for monetary policy surprise and show that market volatility spikes after the announcement whenever changes in similarity are high. These findings suggest that social media discussions on central bank communication are aligned with bond and stock market reactions.
    Keywords: monetary policy, central bank communication, financial markets, social media, Twitter, Federal Reserve, European Central Bank, Bank of England
    JEL: E44 E52 E58 G14 G15 G41
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:baf:cbafwp:cbafwp20160&r=all
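    One standard way to build the announcement-tweet similarity metric described above is TF-IDF vectors compared by cosine similarity; a schematic sketch with toy texts (the paper's text-processing pipeline may differ):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        announcement = "The Governing Council decided to keep rates unchanged"
        tweets_before = ["markets expect the ECB to hold rates", "no change expected"]
        tweets_after = ["ECB keeps rates unchanged as expected", "dovish surprise?"]

        vec = TfidfVectorizer().fit([announcement] + tweets_before + tweets_after)

        def similarity(tweets):
            a = vec.transform([announcement])
            t = vec.transform([" ".join(tweets)])
            return cosine_similarity(a, t)[0, 0]

        # a large change in similarity around the release proxies for surprise
        print(similarity(tweets_before), similarity(tweets_after))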
  15. By: Jia Wang; Tong Sun; Benyuan Liu; Yu Cao; Hongwei Zhu
    Abstract: Financial markets are a complex dynamical system. The complexity comes from the interaction between a market and its participants: the integrated outcome of the activities of all participants determines the market's trend, while the market's trend affects the activities of participants. These interwoven interactions keep financial markets evolving. Inspired by stochastic recurrent models that successfully capture the variability observed in natural sequential data such as speech and video, we propose CLVSA, a hybrid model that consists of stochastic recurrent networks, the sequence-to-sequence architecture, self- and inter-attention mechanisms, and convolutional LSTM units, to capture the variationally underlying features in raw financial trading data. Our model outperforms basic models, such as convolutional neural networks, vanilla LSTM networks, and sequence-to-sequence models with attention, based on backtesting results of six futures from January 2010 to December 2017. Our experimental results show that, by introducing an approximate posterior, CLVSA takes advantage of an extra regularizer based on the Kullback-Leibler divergence to prevent itself from falling into overfitting traps.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.04041&r=all
  16. By: Marianne Bertrand; Bruno Crépon; Alicia Marguerie; Patrick Premand
    Abstract: Workfare programs are one of the most popular social protection and employment policy instruments in the developing world. They evoke the promise of efficient targeting, as well as immediate and lasting impacts on participants’ employment, earnings, skills and behaviors. This paper evaluates contemporaneous and post-program impacts of a public works intervention in Côte d’Ivoire. The program was randomized among urban youths who self-selected to participate and provided seven months of employment at the formal minimum wage. Randomized subsets of beneficiaries also received complementary training on basic entrepreneurship or job search skills. During the program, results show limited impacts on the likelihood of employment, but a shift toward wage jobs, higher earnings and savings, as well as changes in work habits and behaviors. Fifteen months after the program ended, savings stocks remain higher, but there are no lasting impacts on employment or behaviors, and only limited impacts on earnings. Machine learning techniques are applied to assess whether program targeting can be improved. Significant heterogeneity in impacts on earnings is found during the program but not post-program. Departing from self-targeting improves performance: a range of practical targeting mechanisms achieve impacts close to a machine learning benchmark by maximizing contemporaneous impacts without reducing post-program impacts. Impacts on earnings remain substantially below program costs even under improved targeting.
    JEL: C93 H53 I38 J24 O12
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:28664&r=all
  17. By: Margherita Pagani (emlyon business school); Renaud Champion
    Abstract: Through a survey of more than 800 AI systems, we identify four distinct types of AI-supported tasks.
    Keywords: Artificial intelligence,Ethics
    Date: 2020–11–17
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03188248&r=all
  18. By: Jaydip Sen; Abhishek Dutta; Sidra Mehtab
    Abstract: Designing robust systems for the precise prediction of future stock prices has always been considered a very challenging research problem. Even more challenging is building a system for constructing an optimum portfolio of stocks based on the forecasted future stock prices. We present a deep learning-based regression model built on a long short-term memory (LSTM) network that automatically scrapes the web and extracts historical stock prices based on a stock's ticker name for a specified pair of start and end dates, and forecasts the future stock prices. We deploy the model on 75 significant stocks chosen from 15 critical sectors of the Indian stock market. For each of the stocks, the model is evaluated for its forecast accuracy. Moreover, the predicted values of the stock prices are used as the basis for investment decisions, and the returns on the investments are computed. Extensive results are presented on the performance of the model. The analysis of the results demonstrates the efficacy and effectiveness of the system and enables us to compare the profitability of the sectors from the point of view of stock market investors.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.06259&r=all
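    The regression core of such a system, an LSTM trained on sliding windows of past closing prices, can be sketched in PyTorch as follows; the web-scraping step is omitted, and the window length, layer sizes and training settings are illustrative assumptions:

        import torch
        import torch.nn as nn

        class StockLSTM(nn.Module):
            def __init__(self, hidden=32):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                                    batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):             # x: (batch, window, 1) prices
                out, _ = self.lstm(x)
                return self.head(out[:, -1])  # forecast the next closing price

        model = StockLSTM()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        x = torch.randn(64, 50, 1)            # 64 windows of 50 past prices
        y = torch.randn(64, 1)                # next-day price targets
        for _ in range(10):                   # illustrative training loop
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()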
  19. By: Ekaterina Prytkova (Friedrich Schiller University Jena, School of Economics)
    Abstract: This paper seeks to contribute to the understanding of diffusion patterns and relatedness within ICT as a technology system in the EU28 region. Considering ICT as a technology system, I first break down ICT into a set of distinct technologies employing OECD and WIPO classifications. Then, using text analysis and the Algorithmic Links with Probabilities method, I construct industry–technology links to connect industries with ICT and track ICT's diffusion over the period 1977-2020. The analysis highlights the heterogeneity of the technologies that constitute the ICT cluster. Because not all ICTs are pervasive and not all ICTs are key technologies, industries differ in their reliance on them. The results indicate that the ICT cluster shows signs of a "phase transition", passing through the phase of building the bulk elements of the infrastructure and, around the 2000s, entering the phase of working on functionality for business application deployment and users' convenience. This transition is marked by the surging relevance of ICT technologies such as mobile communication, information analysis, security, and human interfaces. Studying ICT as a cluster allows each ICT technology to be put in context and compared in relative terms; this is especially important for the discussion of novel and fast-growing technologies such as Artificial Intelligence (AI). Concerning the structure of industry reliance on the ICT cluster, ICT's penetration is characterized by increasing scope but unevenly distributed scale; the intensity of the connections varies significantly with the industry and the distinct ICT technology. Remarkably, a closer look at AI technologies reveals, in line with the current literature, a wide array of "shallow" connections with industries. Finally, I calculate relatedness metrics to estimate proximity among ICT technologies. The analysis reveals differences in the underlying knowledge base across the overwhelming majority of ICT technologies but a similar structure in their application base.
    Keywords: ICT, algorithmic links, artificial intelligence, relatedness, industry–technology nexus
    JEL: O33 O52 O14
    Date: 2021–03–29
    URL: http://d.repec.org/n?u=RePEc:jrp:jrpwrp:2021-005&r=all
  20. By: Fabrizio Lillo; Giulia Livieri; Stefano Marmi; Anton Solomko; Sandro Vaienti
    Abstract: We consider a model of a simple financial system consisting of a leveraged investor that invests in a risky asset and manages risk by using Value-at-Risk (VaR). The VaR is estimated from past data via an adaptive expectation scheme. We show that the leverage dynamics can be described by a dynamical system of slow-fast type associated with a unimodal map on [0,1] with an additive heteroscedastic noise whose variance is related to the portfolio rebalancing frequency to target leverage. In the absence of noise the model is purely deterministic and the parameter space splits into two regions: (i) a region with a globally attracting fixed point or a 2-cycle; (ii) a dynamical core region, where the map can exhibit chaotic behavior. Whenever the model is randomly perturbed, we prove the existence of a unique stationary density with bounded variation, the stochastic stability of the process, and the almost certain existence and continuity of the Lyapunov exponent for the stationary measure. We then use deep neural networks to estimate the map parameters from a short time series. Using this method, we estimate the model on a large dataset of US commercial banks over the period 2001-2014. We find that the parameters of a substantial fraction of banks lie in the dynamical core, and their leverage time series are consistent with chaotic behavior. We also present evidence that the leverage time series of large banks tend to exhibit chaoticity more frequently than those of small banks.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.04960&r=all
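    The dynamical-systems machinery can be illustrated with a toy version: iterate a unimodal map on [0,1] under additive heteroscedastic noise and estimate the Lyapunov exponent along the orbit (a positive value signals chaos). The logistic map below stands in for the paper's leverage map, and the noise scaling is an assumption:

        import numpy as np

        def lyapunov(a=3.9, sigma=0.01, T=100_000, x0=0.3, seed=0):
            rng = np.random.default_rng(seed)
            x, log_deriv = x0, 0.0
            for _ in range(T):
                # derivative of the deterministic map at the current point
                log_deriv += np.log(abs(a * (1 - 2 * x)) + 1e-12)
                noise = sigma * x * rng.normal()   # heteroscedastic: scales with x
                x = np.clip(a * x * (1 - x) + noise, 1e-12, 1 - 1e-12)
            return log_deriv / T                   # Lyapunov exponent estimate

        print("estimated Lyapunov exponent:", lyapunov())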
  21. By: Trufin, Julien (Université Libre de Bruxelles); Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium)
    Abstract: This paper proposes a new boosting machine based on forward stagewise additive modeling with cost-complexity pruned trees. In the Tweedie case, it deals directly with observed responses, not gradients of the loss function. Trees included in the score progressively reduce to the root-node one, in an adaptive way. The proposed Adaptive Boosting Tree (ABT) machine thus stops automatically at that time, avoiding the need to resort to the time-consuming cross-validation approach. A case study performed on motor third-party liability insurance claim data demonstrates the performance of the proposed ABT machine for ratemaking, in comparison with regular gradient boosting trees.
    Keywords: Risk classification ; Boosting ; Gradient Boosting ; Regression Trees ; Cost-complexity pruning
    Date: 2021–03–09
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021015&r=all
  22. By: Daniel Boller; Michael Lechner; Gabriel Okasa
    Abstract: Online dating has emerged as a key platform for human mating. Previous research focused on socio-demographic characteristics to explain human mating in online dating environments, neglecting the commonly recognized relevance of sport. This research investigates the effect of sport activity on human mating by exploiting a unique data set from an online dating platform. We leverage recent advances in the causal machine learning literature to estimate the causal effect of sport frequency on contact chances. We find that for male users, doing sport on a weekly basis increases the probability of receiving a first message from a woman by 50%, relative to not doing sport at all. For female users, we do not find evidence of such an effect. In addition, for male users the effect increases with higher income.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.04601&r=all
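    As a schematic stand-in for the causal machine learning estimators used in the paper, here is a simple T-learner on synthetic data: fit separate contact-chance models for sporty and non-sporty profiles and average the difference in predicted probabilities (the authors' actual estimator is more sophisticated):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(2)
        X = rng.normal(size=(5000, 4))          # e.g. age, income, education
        sport = rng.integers(0, 2, size=5000)   # 1 = does sport weekly
        p = 1 / (1 + np.exp(-(0.3 * X[:, 1] + 0.4 * sport)))
        message = rng.binomial(1, p)            # received a first message

        m1 = RandomForestClassifier().fit(X[sport == 1], message[sport == 1])
        m0 = RandomForestClassifier().fit(X[sport == 0], message[sport == 0])

        # average effect: mean difference of predicted contact chances
        ate = (m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]).mean()
        print("estimated effect of weekly sport on contact probability:", ate)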
  23. By: Samuel N. Cohen; Derek Snow; Lukasz Szpruch
    Abstract: Machine learning models are increasingly used in a wide variety of financial settings. The difficulty of understanding the inner workings of these systems, combined with their wide applicability, has the potential to lead to significant new risks for users; these risks need to be understood and quantified. In this sub-chapter, we focus on a well-studied application of machine learning techniques: the pricing and hedging of financial options. Our aim is to highlight the various sources of risk that the introduction of machine learning emphasises or de-emphasises, and the possible risk mitigation and management strategies that are available.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.04757&r=all
  24. By: Marco Dueñas; Víctor Ortiz; Massimo Riccaboni; Francesco Serti
    Abstract: By interpreting exporters' dynamics as a complex learning process, this paper constitutes the first attempt to investigate the effectiveness of different Machine Learning (ML) techniques in predicting firms' trade status. We focus on the probability of Colombian firms surviving in the export market under two different scenarios: a COVID-19 setting and a non-COVID-19 counterfactual situation. By comparing the resulting predictions, we estimate the individual treatment effect of the COVID-19 shock on firms' outcomes. Finally, we use recursive partitioning methods to identify subgroups with differential treatment effects. We find that, besides the temporal dimension, the main factors predicting treatment heterogeneity are interactions between firm size and industry.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.04570&r=all
  25. By: Hainaut, Donatien (Université catholique de Louvain, LIDAM/ISBA, Belgium); Trufin, Julien (Université Libre de Bruxelles); Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium)
    Abstract: Thanks to its outstanding performance, boosting has rapidly gained wide acceptance among actuaries. To speed up calculations, boosting is often applied to gradients of the loss function, not to responses (hence the name gradient boosting). When the model is trained by minimizing Poisson deviance, this amounts to applying the least-squares principle to raw residuals. This exposes gradient boosting to the same problems that led to replacing least squares with the Poisson GLM for analyzing low counts (typically, the number of reported claims at policy level in personal lines). This paper shows that boosting can be conducted directly on the response under a Tweedie loss function and log-link, by adapting the weights at each step. Numerical illustrations demonstrate improved performance compared to gradient boosting when trees, GLMs and neural networks are used as weak learners.
    Keywords: Risk classification ; Boosting ; Gradient Boosting ; Regression Trees ; GLM ; Neural Networks
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021012&r=all
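    One plausible reading of the response-boosting idea under log-link, sketched below: at each step, fit a tree to the ratio of observed responses to current fitted values using working weights derived from the Tweedie variance function, then update the fit multiplicatively. This is a sketch under stated assumptions, not the authors' exact algorithm:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        def response_boost(X, y, p=1.5, n_steps=50, lr=0.5, depth=2):
            """Illustrative Tweedie response boosting with log-link."""
            mu = np.full(len(y), y.mean())   # initial fitted premiums
            trees = []
            for _ in range(n_steps):
                w = mu ** (2 - p)            # working weights (variance mu**p)
                ratio = y / mu               # fit the response ratio, not a gradient
                tree = DecisionTreeRegressor(max_depth=depth)
                tree.fit(X, ratio, sample_weight=w)
                # multiplicative update = additive on the log-link scale
                mu *= np.clip(tree.predict(X), 1e-6, None) ** lr
                trees.append(tree)
            return mu, trees

        # usage with synthetic Tweedie-like claim amounts
        rng = np.random.default_rng(3)
        X = rng.normal(size=(2000, 4))
        y = rng.poisson(1.0, 2000) * rng.gamma(2.0, 100.0, 2000)
        mu, trees = response_boost(X, y)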

This nep-big issue is ©2021 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.