nep-big New Economics Papers
on Big Data
Issue of 2020‒09‒14
24 papers chosen by
Tom Coupé
University of Canterbury

  1. On the Origin(s) and Development of "Big Data": The Phenomenon, the Term, and the Discipline By Francis X. Diebold
  2. Data vs collateral By Leonardo Gambacorta; Yiping Huang; Zhenhua Li; Han Qiu; Shu Chen
  3. How is Machine Learning Useful for Macroeconomic Forecasting? By Philippe Goulet Coulombe; Maxime Leroux; Dalibor Stevanovic; Stéphane Surprenant
  4. Predicting monetary policy using artificial neural networks By Hinterlang, Natascha
  5. Learning low-frequency temporal patterns for quantitative trading By Joel da Costa; Tim Gebbie
  6. Image Processing Tools for Financial Time Series Classification By Bairui Du; Paolo Barucca
  7. Pattern recognition of financial institutions’ payment behavior By Carlos León; Paolo Barucca; Oscar Acero; Gerardo Gage; Fabio Ortega
  8. News-driven inflation expectations and information rigidities By Vegard H. Larsen; Leif Anders Thorsrud; Julia Zhulanova
  9. Convergence of Deep Fictitious Play for Stochastic Differential Games By Jiequn Han; Ruimeng Hu; Jihao Long
  10. Layoffs, Inequity and COVID-19: A Longitudinal Study of the Journalism Jobs Crisis in Australia from 2012 to 2020 By Nik Dawson; Sacha Molitorisz; Marian-Andrei Rizoiu; Peter Fray
  11. Mastering the Art of Cookbook Medicine: Machine Learning, Randomized Trials, and Misallocation By Jason Abaluck; Leila Agha; David C. Chan Jr; Daniel Singer; Diana Zhu
  12. A Blockchain Transaction Graph based Machine Learning Method for Bitcoin Price Prediction By Xiao Li; Weili Wu
  13. Share Price Prediction of Aerospace Relevant Companies with Recurrent Neural Networks based on PCA By Linyu Zheng; Hongmei He
  14. High-Resolution Poverty Maps in Sub-Saharan Africa By Kamwoo Lee; Jeanine Braithwaite
  15. Analysing a built-in advantage in asymmetric darts contests using causal machine learning By Goller, Daniel
  16. GA-MSSR: Genetic Algorithm Maximizing Sharpe and Sterling Ratio Method for RoboTrading By Zezheng Zhang; Matloob Khushi
  17. Data, global development, and COVID-19: Lessons and consequences By Wim Naudé; Ricardo Vinuesa
  18. DeepFolio: Convolutional Neural Networks for Portfolios with Limit Order Book Data By Aiusha Sangadiev; Rodrigo Rivera-Castro; Kirill Stepanov; Andrey Poddubny; Kirill Bubenchikov; Nikita Bekezin; Polina Pilyugina; Evgeny Burnaev
  19. InClass Nets: Independent Classifier Networks for Nonparametric Estimation of Conditional Independence Mixture Models and Unsupervised Classification By Konstantin T. Matchev; Prasanth Shyamsundar
  20. Cognitive Performance in the Home Office - Evidence from Professional Chess By Künn, Steffen; Seel, Christian; Zegners, Dainis
  21. Can Urbanization Improve Household Welfare? Evidence from Ethiopia By Kibrom A. Abay; Luca Tiberti; Tsega G. Mezgebo; Meron Endale
  22. Group Testing in a Pandemic: The Role of Frequent Testing, Correlated Risk, and Machine Learning By Ned Augenblick; Jonathan T. Kolstad; Ziad Obermeyer; Ao Wang
  23. Deep Learning for Constrained Utility Maximisation By Ashley Davey; Harry Zheng
  24. Modernization of the Administration of Justice through Artificial Intelligence By Manuel José Cepeda E.; Guillermo Otálora L.

  1. By: Francis X. Diebold
    Abstract: I investigate Big Data, the phenomenon, the term, and the discipline, with emphasis on origins of the term, in industry and academics, in computer science and statistics/econometrics. Big Data the phenomenon continues unabated, Big Data the term is now firmly entrenched, and Big Data the discipline is emerging.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.05835&r=all
  2. By: Leonardo Gambacorta; Yiping Huang; Zhenhua Li; Han Qiu; Shu Chen
    Abstract: The use of massive amounts of data by large technology firms (big techs) to assess firms’ creditworthiness could reduce the need for collateral in solving asymmetric information problems in credit markets. Using a unique dataset of more than 2 million Chinese firms that received credit from both an important big tech firm (Ant Group) and traditional commercial banks, this paper investigates how different forms of credit correlate with local economic activity, house prices and firm characteristics. We find that big tech credit does not correlate with local business conditions and house prices when controlling for demand factors, but reacts strongly to changes in firm characteristics, such as transaction volumes and network scores used to calculate firm credit ratings. By contrast, both secured and unsecured bank credit react significantly to local house prices, which incorporate useful information on the environment in which clients operate and on their creditworthiness. This evidence implies that a greater use of big tech credit – granted on the basis of machine learning and big data – could reduce the importance of collateral in credit markets and potentially weaken the financial accelerator mechanism.
    Keywords: big tech, big data, collateral, banks, asymmetric information, credit markets
    JEL: D22 G31 R30
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:bis:biswps:881&r=all
  3. By: Philippe Goulet Coulombe; Maxime Leroux; Dalibor Stevanovic; Stéphane Surprenant
    Abstract: We move beyond "Is Machine Learning Useful for Macroeconomic Forecasting?" by adding the "how". The current forecasting literature has focused on matching specific variables and horizons with a particularly successful algorithm. In contrast, we study the usefulness of the underlying features driving ML gains over standard macroeconometric methods. We distinguish four so-called features (nonlinearities, regularization, cross-validation and alternative loss function) and study their behavior in both data-rich and data-poor environments. To do so, we design experiments that allow us to identify the "treatment" effects of interest. We conclude that (i) nonlinearity is the true game changer for macroeconomic prediction, (ii) the standard factor model remains the best regularization, (iii) K-fold cross-validation is the best practice and (iv) the $L_2$ loss is preferred to the $\bar \epsilon$-insensitive in-sample loss. The forecasting gains of nonlinear techniques are associated with high macroeconomic uncertainty, financial stress and housing bubble bursts. This suggests that Machine Learning is useful for macroeconomic forecasting mostly by capturing important nonlinearities that arise in the context of uncertainty and financial frictions.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.12477&r=all
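    A minimal sketch of the loss comparison in feature (iv) above, contrasting a squared (L2) in-sample loss with an epsilon-insensitive one via scikit-learn; the synthetic predictors, target and hyperparameters are illustrative assumptions, not the paper's setup:

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.svm import LinearSVR

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 10))          # stand-in for a macro predictor panel
      y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=200)

      l2_model = Ridge(alpha=1.0).fit(X, y)   # minimizes the sum of squared errors
      eps_model = LinearSVR(epsilon=0.1, C=1.0, max_iter=10000).fit(X, y)  # ignores errors below epsilon

      print("L2 in-sample MSE: ", np.mean((l2_model.predict(X) - y) ** 2))
      print("eps in-sample MSE:", np.mean((eps_model.predict(X) - y) ** 2))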
  4. By: Hinterlang, Natascha
    Abstract: This paper analyses the forecasting performance of monetary policy reaction functions using the U.S. Federal Reserve's Greenbook real-time data. The results indicate that artificial neural networks are able to predict the nominal interest rate better than linear and nonlinear Taylor rule models as well as univariate processes. While in-sample measures usually imply forward-looking behaviour by the central bank, using nowcasts of the explanatory variables seems to be better suited for forecasting purposes. Overall, the evidence suggests that U.S. monetary policy behaviour between 1987 and 2012 was nonlinear.
    Keywords: Forecasting, Monetary Policy, Artificial Neural Network, Taylor Rule, Reaction Function
    JEL: C45 E47 E52
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:442020&r=all
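    A hedged sketch of the comparison described above: a small feedforward network predicting the policy rate against a linear Taylor-rule-style benchmark. The variables and data below are synthetic placeholders, not the Greenbook series, and the network architecture is an assumption:

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      inflation = rng.normal(2.0, 1.0, 300)
      output_gap = rng.normal(0.0, 2.0, 300)
      X = np.column_stack([inflation, output_gap])
      # A nonlinear "true" reaction function, so the ANN has something to gain.
      rate = (2.0 + 1.5 * inflation + 0.5 * output_gap
              + 0.3 * np.maximum(output_gap, 0) + rng.normal(0, 0.2, 300))

      split = 250                              # pseudo out-of-sample split
      ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=1).fit(X[:split], rate[:split])
      taylor = LinearRegression().fit(X[:split], rate[:split])
      print("ANN out-of-sample MSE:   ", np.mean((ann.predict(X[split:]) - rate[split:]) ** 2))
      print("Linear out-of-sample MSE:", np.mean((taylor.predict(X[split:]) - rate[split:]) ** 2))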
  5. By: Joel da Costa; Tim Gebbie
    Abstract: We consider the viability of a modularised, mechanistic online machine learning framework for learning signals in low-frequency financial time series data. The framework is demonstrated on daily sampled closing-price time-series data from JSE equity markets. The input patterns are vectors of pre-processed sequences of daily, weekly and monthly or quarterly sampled feature changes. The data processing is split into a batch-processed step, in which features are learnt using a stacked autoencoder via unsupervised learning, after which both batch and online supervised learning are carried out on these learnt features, with the output being a point prediction of measured time-series feature fluctuations. Weight initializations are implemented with restricted Boltzmann machine pre-training and variance-based initializations. Historical simulations are then run using an online feedforward neural network initialised with the weights from the batch training and validation step. The validity of the results is assessed under a rigorous evaluation of backtest overfitting, using both combinatorially symmetric cross-validation and probabilistic and deflated Sharpe ratios. The results are used to develop a view on the phenomenology of financial markets and the value of complex historical data analysis for trading under the unstable adaptive dynamics that characterise financial markets.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.09481&r=all
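    A minimal PyTorch sketch of the two-stage pipeline described above: (i) batch-learn features with an autoencoder via unsupervised learning, (ii) feed the learnt features to a small readout updated online, one observation at a time. Shapes, layer sizes and the SGD step are illustrative assumptions, and the RBM pre-training is omitted:

      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      X = torch.randn(500, 24)                 # stand-in for pre-processed feature changes
      y = X[:, :3].sum(dim=1, keepdim=True) + 0.1 * torch.randn(500, 1)

      encoder = nn.Sequential(nn.Linear(24, 8), nn.Tanh())
      decoder = nn.Sequential(nn.Linear(8, 24))
      ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
      for _ in range(200):                     # unsupervised batch step
          ae_opt.zero_grad()
          loss = nn.functional.mse_loss(decoder(encoder(X)), X)
          loss.backward()
          ae_opt.step()

      head = nn.Linear(8, 1)                   # supervised readout, updated online
      head_opt = torch.optim.SGD(head.parameters(), lr=1e-2)
      for t in range(500):                     # one-pass online learning
          x_t, y_t = X[t:t + 1], y[t:t + 1]
          head_opt.zero_grad()
          pred = head(encoder(x_t).detach())   # features held fixed after batch training
          nn.functional.mse_loss(pred, y_t).backward()
          head_opt.step()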
  6. By: Bairui Du; Paolo Barucca
    Abstract: Time series prediction is a challenge for many complex systems, yet in finance predictions are hindered by the very nature of how financial markets work. In efficient markets, opportunities for stock price predictions leading to profitable trades are supposed to disappear rapidly. In the growing industry of high-frequency trading, the competition over extracting predictions of stock prices from the increasing amount of available information, in order to perform profitable trades, is becoming ever more severe. With the development of big data analysis and advanced deep learning methodologies, traders hope to fruitfully analyse market information, e.g. price time series, through machine learning. Spot prices of stocks provide a simple snapshot representation of a financial market. Stock prices fluctuate over time, affected by numerous factors, and the prediction of their changes is at the core of both long-term and short-term financial investing. The collective patterns of price movements are generally referred to as market states. As a paramount example, when stock prices follow an upward trend it is called a bull market, and when they follow a downward trend it is called a bear market.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.06042&r=all
  7. By: Carlos León (Banco de la República de Colombia); Paolo Barucca (University College London, United Kingdom); Oscar Acero (Banco de la República de Colombia); Gerardo Gage (Centro de Estudios Monetarios Latinoamericanos (CEMLA), México); Fabio Ortega (Banco de la República de Colombia)
    Abstract: We present a general supervised machine learning methodology to represent the payment behavior of financial institutions, starting from a database of transactions in the Colombian large-value payment system. The methodology learns a feedforward artificial neural network parameterization that represents payment patterns through 113 features corresponding to financial institutions' contribution to payments, funding habits, payment timing, payment concentration, centrality in the payments network, and systemic impact due to failure to pay. The representation is then used to test the coherence of an institution's out-of-sample payment patterns with its characteristic patterns. The performance is remarkable, with an out-of-sample classification error of around three percent. The performance is robust to reductions in the number of features by unsupervised feature selection. We also verify that the network centrality and systemic impact features definitively enhance the performance of the methodology. For financial authorities, this is a first step towards the automated detection of anomalous behavior by individual financial institutions in payment systems.
    Keywords: Payments, neural networks, feature selection, machine learning, pattern recognition
    JEL: C45 E42 G21
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:bdr:borrec:1130&r=all
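    A hedged sketch of the classification task above: learn each institution's characteristic payment pattern from feature vectors, then check how often out-of-sample observations are assigned back to the right institution. The 113 features are simulated here; the real ones are listed in the paper, and the network size is an assumption:

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      n_institutions, obs_per_inst, n_features = 20, 50, 113
      centers = rng.normal(size=(n_institutions, n_features))    # characteristic patterns
      X = np.vstack([c + 0.3 * rng.normal(size=(obs_per_inst, n_features)) for c in centers])
      y = np.repeat(np.arange(n_institutions), obs_per_inst)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=2, stratify=y)
      clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=2).fit(X_tr, y_tr)
      print("out-of-sample classification error:", 1 - clf.score(X_te, y_te))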
  8. By: Vegard H. Larsen (Norges Bank and Centre for Applied Macroeconomics and Commodity Prices, BI Norwegian Business School); Leif Anders Thorsrud (Norges Bank and Centre for Applied Macroeconomics and Commodity Prices, BI Norwegian Business School); Julia Zhulanova (Centre for Applied Macroeconomics and Commodity Prices, BI Norwegian Business School)
    Abstract: We investigate the role played by the media in the expectations formation process of households. Using a novel news-topic-based approach we show that news types the media choose to report on, e.g., fiscal policy, health, and politics, are good predictors of households' stated inflation expectations. In turn, in a noisy information model setting, augmented with a simple media channel, we document that the underlying time series properties of relevant news topics explain the time-varying information rigidity among households. As such, we not only provide a novel estimate showing the degree to which information rigidities among households varies across time, but also provide, using a large news corpus and machine learning algorithms, robust and new evidence highlighting the role of the media for understanding inflation expectations and information rigidities.
    Keywords: expectations, media, machine learning, inflation
    JEL: C11 C53 D83 D84 E13 E31 E37
    Date: 2019–02–20
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2019_05&r=all
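    A hedged sketch of the news-topic idea: estimate topic shares from a news corpus and use them as predictors of stated inflation expectations. LDA is used here as a stand-in for the paper's topic model, and the corpus and survey series are toy placeholders:

      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.linear_model import LinearRegression

      docs = ["government budget deficit fiscal policy",
              "hospital health virus outbreak",
              "election parliament politics vote",
              "budget tax fiscal spending",
              "vaccine health care virus"]
      counts = CountVectorizer().fit_transform(docs)       # document-term matrix
      topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

      expectations = np.array([2.1, 1.8, 2.0, 2.3, 1.7])   # toy stated-expectations series
      reg = LinearRegression().fit(topics, expectations)
      print("topic shares -> expectations R^2:", reg.score(topics, expectations))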
  9. By: Jiequn Han; Ruimeng Hu; Jihao Long
    Abstract: Stochastic differential games have been used extensively to model agents' competition in finance, for instance, in P2P lending platforms in the fintech industry, the banking system for systemic risk, and insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel efficient tool for finding Markovian Nash equilibria of large $N$-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into $N$ sub-optimization problems, and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method, in parallel and repeatedly. In this paper, under appropriate conditions, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an $\epsilon$-Nash equilibrium. We generalize the algorithm by proposing a new approach to decoupling the games, and present numerical results for large population games showing the empirical convergence of the algorithm beyond the technical assumptions of the theorems.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.05519&r=all
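    A toy numeric sketch of the fictitious-play decoupling described above: at each stage every player solves its own sub-problem with the other players' strategies frozen. The paper solves each sub-problem with a deep BSDE solver; here the best response is available in closed form for an assumed linear-quadratic game, purely to illustrate the decoupled structure:

      import numpy as np

      rng = np.random.default_rng(3)
      N, beta = 10, 0.5
      theta = rng.normal(size=N)               # player-specific targets
      a = np.zeros(N)                          # initial strategies

      for stage in range(100):                 # fictitious-play-style iterations
          a_new = np.empty(N)
          for i in range(N):                   # N decoupled sub-problems
              others_mean = (a.sum() - a[i]) / (N - 1)
              a_new[i] = theta[i] + beta * others_mean   # closed-form best response
          if np.max(np.abs(a_new - a)) < 1e-10:
              break
          a = a_new

      print("converged strategies (approximate Nash):", np.round(a, 3))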
  10. By: Nik Dawson; Sacha Molitorisz; Marian-Andrei Rizoiu; Peter Fray
    Abstract: In Australia and beyond, journalism is reportedly an industry in crisis, a crisis exacerbated by COVID-19. However, the evidence revealing the crisis is often anecdotal or limited in scope. In this unprecedented longitudinal research, we draw on data from the Australian journalism jobs market from January 2012 until March 2020. Using Data Science and Machine Learning techniques, we analyse two distinct data sets: job advertisements (ads) data comprising 3,698 journalist job ads from a corpus of over 6.7 million Australian job ads; and official employment data from the Australian Bureau of Statistics. Having matched and analysed both sources, we address both the demand for and supply of journalists in Australia over this critical period. The data show that the crisis is real, but there are also surprises. Counter-intuitively, the number of journalism job ads in Australia rose from 2012 until 2016, before falling into decline. Less surprisingly, for the entire period studied the figures reveal extreme volatility, characterised by large and erratic fluctuations. The data also clearly show that COVID-19 has significantly worsened the crisis. We can also tease out more granular findings, including: that there are now more women than men journalists in Australia, but that gender inequity is worsening, with women journalists getting younger and worse-paid just as men journalists are, on average, getting older and better-paid; that, despite the crisis besetting the industry, the demand for journalism skills has increased; and that the skills sought by journalism job ads increasingly include social media and generalist communications.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.12459&r=all
  11. By: Jason Abaluck; Leila Agha; David C. Chan Jr; Daniel Singer; Diana Zhu
    Abstract: The application of machine learning (ML) to randomized controlled trials (RCTs) can quantify and improve misallocation in healthcare. We study the decision to prescribe anticoagulants for atrial fibrillation patients; anticoagulation reduces stroke risk but increases hemorrhage risk. We combine observational data on treatment choice and guideline use with ML estimates of heterogeneous treatment effects from eight RCTs. When physicians adopt a clinical guideline, treatment decisions shift towards the recommendation but adherence remains far from perfect. Improving guideline adherence would produce larger gains than informing physicians about guidelines. Adherence to an optimal rule would prevent 47% more strokes without increasing hemorrhages.
    JEL: I11 I18 O33
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27467&r=all
  12. By: Xiao Li; Weili Wu
    Abstract: Bitcoin, one of the most popular cryptocurrencies, has recently attracted much attention from investors, making Bitcoin price prediction a rising academic topic that can provide valuable insights and suggestions. Existing Bitcoin prediction work mostly relies on manual feature engineering, designing features or factors from multiple areas, including Bitcoin blockchain information, finance, and social media sentiment. Such feature engineering not only requires substantial human effort, but the effectiveness of the intuitively designed features cannot be guaranteed. In this paper, we aim to mine the abundant patterns encoded in Bitcoin transactions, and propose a k-order transaction graph to reveal patterns at different scopes. We propose transaction-graph-based features to encode these patterns automatically. A novel prediction method is proposed that accepts the features and makes price predictions, taking advantage of particular patterns from different historical periods. Comparison experiments demonstrate that the proposed method outperforms the most recent state-of-the-art methods.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.09667&r=all
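    A hedged sketch of graph-based features for price prediction. The paper's k-order transaction graph is specific to Bitcoin transaction structure; the sketch below only illustrates the generic pipeline, assumed for illustration: build a daily transaction graph, summarize it into a feature vector, and feed the features to any downstream model:

      import numpy as np
      import networkx as nx

      def graph_features(edges):
          """Summarize one day's transaction graph into a small feature vector."""
          g = nx.DiGraph()
          g.add_edges_from(edges)
          degrees = [d for _, d in g.degree()]
          return np.array([g.number_of_nodes(), g.number_of_edges(),
                           np.mean(degrees), np.max(degrees)])

      rng = np.random.default_rng(4)
      daily_edges = [[(rng.integers(0, 50), rng.integers(0, 50)) for _ in range(200)]
                     for _ in range(30)]       # 30 toy "days" of address-to-address transfers
      X = np.vstack([graph_features(e) for e in daily_edges])
      print("feature matrix:", X.shape)        # one row per day, ready for any classifier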
  13. By: Linyu Zheng; Hongmei He
    Abstract: The capital market plays a vital role in the marketing operations of the aerospace industry. However, due to the uncertainty and complexity of the stock market and many cyclical factors, the stock prices of listed aerospace companies fluctuate significantly, which makes share price prediction challenging. To improve the prediction of share prices in the aerospace sector and to better understand the impact of various indicators on stock prices, we provide a hybrid prediction model combining Principal Component Analysis (PCA) and Recurrent Neural Networks (RNNs). We investigated two types of aerospace companies (a manufacturer and an operator). The experimental results show that PCA can improve both the accuracy and the efficiency of prediction. Various factors can influence the performance of prediction models, such as financial data, extracted features, optimisation algorithms, and the parameters of the prediction model. The selection of features may depend on the stability of the historical data: technical features may be the first option when the share price is stable, whereas fundamental features may be better when the share price fluctuates strongly. The appropriate delays of the RNN also depend on the stability of the historical data for different types of companies: short-term historical data yield more accurate predictions for aerospace manufacturers, whereas long-term historical data work better for aerospace operating airlines. The developed model could serve as an intelligent agent in an automatic stock prediction system, with which the financial industry could make prompt decisions on economic strategies and business activities based on the predicted future share price, thus improving the return on investment. COVID-19 currently affects aerospace industries severely; the developed approach can be used to predict the share prices of aerospace companies in the post-COVID-19 period.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.11788&r=all
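    A hedged sketch of the PCA + RNN combination: compress many indicators into a few principal components, then feed component sequences to an LSTM that predicts the next share price. All data, the look-back window and the layer sizes are synthetic illustrative assumptions:

      import numpy as np
      import torch
      import torch.nn as nn
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(5)
      raw = rng.normal(size=(300, 30)).cumsum(axis=0)       # 30 correlated toy indicators
      price = raw[:, :5].mean(axis=1)                       # toy target series
      comps = PCA(n_components=5).fit_transform(raw)        # dimensionality reduction

      delay = 10                                            # RNN look-back window
      X = torch.tensor(np.stack([comps[t - delay:t] for t in range(delay, 300)]),
                       dtype=torch.float32)
      y = torch.tensor(price[delay:], dtype=torch.float32).unsqueeze(1)

      lstm, head = nn.LSTM(5, 16, batch_first=True), nn.Linear(16, 1)
      opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-2)
      for _ in range(200):
          opt.zero_grad()
          out, _ = lstm(X)                                  # (batch, delay, hidden)
          loss = nn.functional.mse_loss(head(out[:, -1]), y)
          loss.backward()
          opt.step()
      print("final in-sample MSE:", float(loss))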
  14. By: Kamwoo Lee; Jeanine Braithwaite
    Abstract: Up-to-date poverty maps are an important tool for policymakers, but until now they have been prohibitively expensive to produce. We propose a generalizable prediction methodology to produce poverty maps at the village level using geospatial data and machine learning algorithms. We tested the proposed method for 25 Sub-Saharan African countries and validated the resulting maps against survey data. The proposed method can increase the validity of both single-country and cross-country estimations, leading to higher precision in the poverty maps of the 25 countries than previously available. More importantly, our cross-country estimation enables the creation of poverty maps when it is not practical or cost-effective to field new national household surveys, as is the case with many Sub-Saharan African countries and other low- and middle-income countries.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.00544&r=all
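    A hedged sketch of the cross-country estimation idea: train on geospatial features from some countries and validate on held-out countries, so the model can map poverty where no recent survey exists. The features, targets and the gradient-boosting learner are simulated illustrative placeholders for the paper's geospatial data and models:

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import GroupKFold, cross_val_score

      rng = np.random.default_rng(6)
      n_villages = 500
      X = rng.normal(size=(n_villages, 8))                 # e.g. night lights, roads, land cover
      poverty = 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.3, size=n_villages)
      country = rng.integers(0, 25, size=n_villages)       # 25 countries, as in the paper

      # GroupKFold keeps whole countries out of the training folds.
      scores = cross_val_score(GradientBoostingRegressor(random_state=6), X, poverty,
                               groups=country, cv=GroupKFold(n_splits=5), scoring="r2")
      print("held-out-country R^2 per fold:", np.round(scores, 3))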
  15. By: Goller, Daniel
    Abstract: We analyse a sequential contest with two players in darts, where one of the contestants enjoys a technical advantage. Using methods from the causal machine learning literature, we analyse the built-in advantage, namely that the first mover has potentially more, but never fewer, moves. Our empirical findings suggest that the first mover has an 8.6 percentage-point higher probability of winning the match as a result of the technical advantage. Contestants with low performance measures and little experience have the highest built-in advantage. With regard to the fairness principle that contestants with equal abilities should have equal winning probabilities, this contest is ex-ante fair in the case of equal built-in advantages for both competitors and a randomized starting right. Nevertheless, the contest design produces unequal winning probabilities for equally skilled contestants because of asymmetries in the built-in advantage associated with social pressure for contestants competing at home and away.
    Keywords: Causal machine learning, heterogeneity, contest design, social pressure, built-in advantage, incentives, performance, darts
    JEL: C14 D02 D20 Z20
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2020:13&r=all
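    A hedged sketch of heterogeneous-effect estimation in this spirit, using a simple T-learner as a stand-in for the paper's dedicated causal machine learning estimators. "Treatment" is the randomized first-mover right, the outcome is match victory, and the covariates, the data-generating process and the 8.6 pp baseline effect (taken from the abstract) are toy assumptions:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(7)
      n = 2000
      perf = rng.normal(size=n)                      # performance measure
      exp_ = rng.normal(size=n)                      # experience
      first = rng.integers(0, 2, size=n)             # randomized starting right
      # Built-in advantage is larger for weaker, less experienced players.
      p_win = 0.5 + first * (0.086 - 0.03 * perf - 0.02 * exp_)
      win = rng.binomial(1, np.clip(p_win, 0, 1))

      X = np.column_stack([perf, exp_])
      m1 = RandomForestRegressor(random_state=7).fit(X[first == 1], win[first == 1])
      m0 = RandomForestRegressor(random_state=7).fit(X[first == 0], win[first == 0])
      cate = m1.predict(X) - m0.predict(X)           # conditional first-mover advantage
      print("average estimated advantage:", round(cate.mean(), 3))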
  16. By: Zezheng Zhang; Matloob Khushi
    Abstract: Foreign exchange is the largest financial market in the world, and also one of the most volatile. Technical analysis plays an important role in the forex market, and trading algorithms are designed using machine learning techniques. Most of the literature uses historical price information and technical indicators for training. However, the noisy nature of the market affects the consistency and profitability of such algorithms. To address this problem, we designed trading-rule features derived from technical indicators and trading rules, with the parameters of the technical indicators optimized to maximize trading performance. We also propose a novel cost function that computes a risk-adjusted return, the Sharpe and Sterling Ratio (SSR), in an effort to reduce the variance and the magnitude of drawdowns. An automatic robotic trading (RoboTrading) strategy is designed with the proposed Genetic Algorithm Maximizing Sharpe and Sterling Ratio (GA-MSSR) model. The experiment was conducted on intraday data for 6 major currency pairs from 2018 to 2019. The results consistently show significant positive returns, and the performance of the trading system is superior when using the optimized rule-based features. The highest return obtained was 320% annually, on the 5-minute AUDUSD currency pair. Moreover, the proposed model achieves the best performance on risk factors, including maximum drawdown and variance in returns, compared to benchmark models. The code can be accessed at https://github.com/zzzac/rule-based-forextrading-system
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.09471&r=all
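    A hedged sketch of a genetic algorithm maximizing a risk-adjusted fitness over rule weights. The exact SSR definition is the paper's; the combination below (Sharpe ratio plus a Sterling-style return-to-drawdown ratio) is an assumption for illustration, as are the toy signals, returns and GA settings:

      import numpy as np

      rng = np.random.default_rng(8)
      T, n_rules = 1000, 4
      signals = rng.choice([-1, 0, 1], size=(T, n_rules))   # toy trading-rule outputs
      returns = rng.normal(0, 0.001, size=T)                # toy asset returns

      def fitness(w):
          r = (signals @ w) * returns                       # strategy P&L series
          equity = np.cumsum(r)
          drawdown = np.maximum.accumulate(equity) - equity
          sharpe = r.mean() / (r.std() + 1e-9)
          sterling = r.mean() / (drawdown.mean() + 1e-9)
          return sharpe + sterling                          # assumed SSR-style objective

      pop = rng.normal(size=(50, n_rules))                  # initial population of rule weights
      for gen in range(100):
          fit = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(fit)[-25:]]              # selection: keep the best half
          children = parents[rng.integers(0, 25, 25)] + 0.1 * rng.normal(size=(25, n_rules))
          pop = np.vstack([parents, children])              # mutation-only offspring
      print("best fitness:", max(fitness(w) for w in pop))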
  17. By: Wim Naudé; Ricardo Vinuesa
    Abstract: The COVID-19 pandemic holds at least seven lessons for the relationship between data-driven decision making, the use of artificial intelligence, and development.
    Keywords: data science, data, artificial intelligence, COVID-19, Development, global crisis
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:unu:wpaper:wp-2020-109&r=all
  18. By: Aiusha Sangadiev; Rodrigo Rivera-Castro; Kirill Stepanov; Andrey Poddubny; Kirill Bubenchikov; Nikita Bekezin; Polina Pilyugina; Evgeny Burnaev
    Abstract: This work proposes DeepFolio, a new model for deep portfolio management based on data from limit order books (LOB). DeepFolio solves problems found in the state-of-the-art for LOB data in predicting price movements. Our evaluation consists of two scenarios using a large dataset of millions of time series. The improvements deliver superior results both when data are abundant and when they are scarce. The experiments show that DeepFolio outperforms the state-of-the-art on the benchmark FI-2010 LOB dataset. Further, we use DeepFolio for optimal portfolio allocation of crypto-assets with rebalancing. For this purpose, we use two loss functions: Sharpe ratio loss and minimum volatility risk. We show that DeepFolio outperforms portfolio allocation techniques widely used in the literature.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.12152&r=all
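    A hedged sketch of one of the two loss functions mentioned above: a differentiable Sharpe-ratio loss applied to portfolio weights produced by a network. The network, the toy features and returns, and the absence of rebalancing costs are illustrative assumptions:

      import torch
      import torch.nn as nn

      torch.manual_seed(1)
      asset_returns = 0.001 * torch.randn(500, 4)           # toy returns for 4 assets
      features = torch.randn(500, 10)                       # toy LOB-derived features

      net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 4), nn.Softmax(dim=1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)
      for _ in range(300):
          opt.zero_grad()
          weights = net(features)                           # long-only weights summing to 1
          port = (weights * asset_returns).sum(dim=1)       # per-period portfolio return
          sharpe = port.mean() / (port.std() + 1e-9)
          (-sharpe).backward()                              # maximize Sharpe = minimize -Sharpe
          opt.step()
      print("in-sample Sharpe:", float(sharpe))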
  19. By: Konstantin T. Matchev; Prasanth Shyamsundar
    Abstract: We introduce a new machine-learning-based approach, which we call the Independent Classifier networks (InClass nets) technique, for the nonparametric estimation of conditional independence mixture models (CIMMs). We approach the estimation of a CIMM as a multi-class classification problem, since dividing the dataset into different categories naturally leads to the estimation of the mixture model. InClass nets consist of multiple independent classifier neural networks (NNs), each of which handles one of the variates of the CIMM. Fitting the CIMM to the data is performed by simultaneously training the individual NNs using suitable cost functions. The ability of NNs to approximate arbitrary functions makes our technique nonparametric. Further leveraging the power of NNs, we allow the conditionally independent variates of the model to be individually high-dimensional, which is the main advantage of our technique over existing non-machine-learning-based approaches. We derive some new results on the nonparametric identifiability of bivariate CIMMs, in the form of a necessary and a (different) sufficient condition for a bivariate CIMM to be identifiable. We provide a public implementation of InClass nets as a Python package called RainDancesVI and validate our InClass nets technique with several worked out examples. Our method also has applications in unsupervised and semi-supervised classification problems.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.00131&r=all
  20. By: Künn, Steffen (Maastricht University); Seel, Christian (Maastricht University); Zegners, Dainis (Erasmus University Rotterdam)
    Abstract: During the recent COVID-19 pandemic, traditional (offline) chess tournaments were prohibited and instead held online. We exploit this as a unique setting to assess the impact of moving offline tasks online on the cognitive performance of individuals. We use the Artificial Intelligence embodied in a powerful chess engine to assess the quality of chess moves and associated errors. Using within-player comparisons, we find a statistically and economically significant decrease in performance when competing online compared to competing offline. Our results suggest that teleworking might have adverse effects on workers performing cognitive tasks.
    Keywords: productivity, teleworking, chess, COVID-19
    JEL: H12 L23 M11 M54
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13491&r=all
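    A hedged sketch of the move-quality measurement: score each position with a chess engine before and after a player's move, so the evaluation drop measures the move's error. This assumes the python-chess package and a local Stockfish binary on the PATH; the game fragment and search depth are placeholders:

      import chess
      import chess.engine

      engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumed binary on PATH

      board = chess.Board()
      moves = ["e2e4", "e7e5", "g1f3", "b8c6"]                   # toy game fragment
      errors = []
      for uci in moves:
          before = engine.analyse(board, chess.engine.Limit(depth=12))["score"].pov(board.turn)
          board.push_uci(uci)
          after = engine.analyse(board, chess.engine.Limit(depth=12))["score"].pov(not board.turn)
          # Centipawn loss from the mover's perspective (mate scores capped).
          errors.append(before.score(mate_score=10000) - after.score(mate_score=10000))
      engine.quit()
      print("centipawn loss per move:", errors)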
  21. By: Kibrom A. Abay; Luca Tiberti; Tsega G. Mezgebo; Meron Endale
    Abstract: Despite evolving evidence that Africa is experiencing urbanization in a different way, empirical evaluations of the welfare implications of urban-development programs in Africa remain scant. We investigated the welfare implications of recent urbanization in rural areas and small towns in Ethiopia using household-level longitudinal data and satellite-based night-light intensity. Controlling for time-invariant unobserved heterogeneity (across individuals and localities) and exploiting intertemporal and interspatial variation in satellite-based night-light intensity, we found that urbanization, as measured by night-light intensity, was associated with significant welfare improvement. In particular, we found that a one-unit increase in night-light intensity was associated with an improvement in household welfare of about 2%. Much of this was driven by the increase in labor-market participation in the non-farm sector, mainly salaried employment, induced by urbanization. Other potential impact pathways, such as an increase in consumer prices or migration explained little (if any) of the change in household welfare. Finally, our quantile and inequality analyses suggested that the observed urbanization had a negligible effect on the distribution of household welfare. Our results can inform public policy debates on the consequences and implications of urban expansion in Africa.
    Keywords: urbanization, night-light intensity, welfare, labor-market outcomes, Ethiopia, sub-Saharan Africa
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:lvl:pmmacr:2020-02&r=all
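    A hedged sketch of the fixed-effects specification described above: regress household welfare on night-light intensity with entity and time effects, here via the linearmodels package on simulated data. The variables, the 0.02 coefficient (taken from the abstract's ~2% figure) and the panel dimensions are illustrative assumptions:

      import numpy as np
      import pandas as pd
      from linearmodels.panel import PanelOLS

      rng = np.random.default_rng(9)
      households, years = 200, 3
      idx = pd.MultiIndex.from_product([range(households), range(years)],
                                       names=["household", "year"])
      df = pd.DataFrame(index=idx)
      df["nightlight"] = rng.gamma(2.0, 1.0, size=len(df))
      df["welfare"] = 0.02 * df["nightlight"] + rng.normal(scale=0.1, size=len(df))

      # Household and year effects absorb time-invariant unobserved heterogeneity.
      res = PanelOLS(df["welfare"], df[["nightlight"]],
                     entity_effects=True, time_effects=True).fit()
      print(res.params)        # per-unit welfare change, ~0.02 by construction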
  22. By: Ned Augenblick; Jonathan T. Kolstad; Ziad Obermeyer; Ao Wang
    Abstract: Group testing increases efficiency by pooling patient specimens and clearing the entire group with one negative test. Optimal grouping strategy is well studied in one-off testing scenarios with reasonably well-known prevalence rates and no correlations in risk. We discuss how the strategy changes in a pandemic environment with repeated testing, rapid local infection spread, and highly uncertain risk. First, repeated testing mechanically lowers prevalence at the time of the next test. This increases testing efficiency, such that increasing frequency by x times only increases expected tests by around √x rather than x. However, this calculation omits a further benefit of frequent testing: infected people are quickly removed from the population, which lowers prevalence and generates further efficiency. Accounting for this decline in intra-group spread, we show that increasing frequency can paradoxically reduce the total testing cost. Second, we show that group size and efficiency increases with intra-group risk correlation, which is expected in natural test groupings based on proximity. Third, because optimal groupings depend on uncertain risk and correlation, we show how better estimates from machine learning can drive large efficiency gains. We conclude that frequent group testing, aided by machine learning, is a promising and inexpensive surveillance strategy.
    JEL: I1 I18
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27457&r=all
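    A worked numeric check of the frequency claim above, using classic Dorfman pooling as a simple stand-in: the expected number of tests per person with group size k and prevalence p is 1/k + 1 - (1-p)^k. Testing x times more often roughly divides per-round prevalence by x, and the optimized per-round cost scales like sqrt(p), so the total cost grows like sqrt(x) rather than x. The baseline prevalence is an illustrative assumption:

      import numpy as np

      def tests_per_person(p, k):
          return 1 / k + 1 - (1 - p) ** k

      def optimized_cost(p, k_max=200):
          return min(tests_per_person(p, k) for k in range(2, k_max))

      p = 0.01                                   # baseline prevalence per testing round
      for x in [1, 2, 4, 8]:                     # test x times more frequently
          total = x * optimized_cost(p / x)      # x rounds at prevalence ~ p/x
          print(f"x={x}: total cost {total:.3f}, "
                f"sqrt-x prediction {np.sqrt(x) * optimized_cost(p):.3f}")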
  23. By: Ashley Davey; Harry Zheng
    Abstract: This paper proposes two algorithms for solving stochastic control problems with deep reinforcement learning, with a focus on the utility maximisation problem. The first algorithm solves Markovian problems via the Hamilton Jacobi Bellman (HJB) equation. We solve this highly nonlinear partial differential equation (PDE) with a second order backward stochastic differential equation (2BSDE) formulation. The convex structure of the problem allows us to describe a dual problem that can either verify the original primal approach or bypass some of the complexity. The second algorithm utilises the full power of the duality method to solve non-Markovian problems, which are often beyond the scope of stochastic control solvers in the existing literature. We solve an adjoint BSDE that satisfies the dual optimality conditions. We apply these algorithms to problems with power, log and non-HARA utilities in the Black-Scholes, the Heston stochastic volatility, and path dependent volatility models. Numerical experiments show highly accurate results with low computational cost, supporting our proposed algorithms.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.11757&r=all
  24. By: Manuel José Cepeda E.; Guillermo Otálora L.
    Abstract: Artificial intelligence technologies are being used in all kinds of interactions between consumers and firms, as well as between governments and citizens. Governments around the world are using artificial intelligence to improve the delivery of their services, and at the same time some judicial systems are using it to automate parts of judicial proceedings and make them more accessible. This paper proposes that Colombia use artificial intelligence tools in the justice system in three areas: (i) the management of legal knowledge, (ii) the management of information for public policy on justice, and (iii) the management of the judicial process. In these three areas it would be possible to automate specific tasks, freeing up judges' time to concentrate on more complex work. The paper maintains that there should be no "robot judges" and argues that artificial intelligence should be applied to strengthen the justice system and support judges. It discusses the advantages of artificial intelligence as a support for judges, as well as the challenges of adopting artificial intelligence technologies in the justice system, especially the ethical and institutional aspects. Finally, it makes recommendations, among them to exploit the potential of the digital case file to design concrete artificial intelligence solutions and to begin applying them gradually, but soon. The private sector is advised, among other things, to develop concrete artificial intelligence solutions for dispute resolution that can later be taken up by the judicial sector.
    Keywords: Administration of Justice, Justice Reform, Artificial Intelligence, Modernization of the Justice System, Digitalization of the Justice System, Public Policy, Colombia
    JEL: D63
    Date: 2020–07–31
    URL: http://d.repec.org/n?u=RePEc:col:000124:018369&r=all

This nep-big issue is ©2020 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.