nep-big New Economics Papers
on Big Data
Issue of 2021‒08‒09
twenty papers chosen by
Tom Coupé
University of Canterbury

  1. AI in Finance: Challenges, Techniques and Opportunities By Longbing Cao
  2. Sentiment and uncertainty about regulation By Tara M. Sinclair; Zhoudan Xie
  3. Using Twitter to Track Immigration Sentiment During Early Stages of the COVID-19 Pandemic By Rowe, Francisco; Mahony, Michael; Graells-Garrido, Eduardo; Rango, Marzia; Sievers, Niklas
  4. Machine Learning and Factor-Based Portfolio Optimization By Thomas Conlon; John Cotter; Iason Kynigakis
  5. The impact of machine learning and big data on credit markets By Eccles, Peter; Grout, Paul; Siciliani, Paolo; Zalewska, Anna
  6. Data Analytics and Machine Learning paradigm to gauge performances combining classification, ranking and sorting for system analysis By Andrea Pontiggia; Giovanni Fasano
  7. Artificial Intelligence and China’s Grand Strategy By Leo S.F. Lin
  8. Algorithm is Experiment: Machine Learning, Market Design, and Policy Eligibility Rules By NARITA Yusuke; YATA Kohei
  9. Stock Movement Prediction with Financial News using Contextualized Embedding from BERT By Qinkai Chen
  10. "Asymptotic Expansion and Deep Neural Networks Overcome the Curse of Dimensionality in the Numerical Approximation of Kolmogorov Partial Differential Equations with Nonlinear Coefficients" By Akihiko Takahashi; Toshihiro Yamada
  11. A Data-driven Explainable Case-based Reasoning Approach for Financial Risk Detection By Wei Li; Florentina Paraschiv; Georgios Sermpinis
  12. Systematizing the Lexicon of Platforms in Information Systems: A Data-Driven Study By Christian Bartelheimer
  13. The AI Economist: Optimal Economic Policy Design via Two-level Deep Reinforcement Learning By Stephan Zheng; Alexander Trott; Sunil Srinivasa; David C. Parkes; Richard Socher
  14. Online Supplements: Combining Crowd and Machine Intelligence to Detect False News in Social Media By Wei, Xuan; Zhang, Zhu; Zhang, Mingyue; Chen, Weiyun; Zeng, Daniel Dajun
  15. COVID-19 Tourism Recovery in the ASEAN and East Asia Region: Asymmetric Patterns and Implications By Stathis Polyzos; Anestis Fotiadis; Aristeidis Samitas
  16. Regional Climate Model Emulator Based on Deep Learning: Concept and First Evaluation of a Novel Hybrid Downscaling Approach By Gadat, Sébastien; Corre, Lola; Doury, Antoine; Ribes, Aurélien; Somot, Samuel
  17. "Deep Asymptotic Expansion with Weak Approximation" By Yuga Iguchi; Riu Naito; Yusuke Okano; Akihiko Takahashi; Toshihiro Yamada
  18. A data-science-driven short-term analysis of Amazon, Apple, Google, and Microsoft stocks By Shubham Ekapure; Nuruddin Jiruwala; Sohan Patnaik; Indranil SenGupta
  19. Implementing the BBE Agent-Based Model of a Sports-Betting Exchange By Dave Cliff; James Hawkins; James Keen; Roberto Lau-Soto
  20. Tracking the Ups and Downs in Indonesia’s Economic Activity During COVID-19 Using Mobility Index: Evidence from Provinces in Java and Bali By Yose Rizal Damuri; Prabaning Tyas; Haryo Aswicahyono; Lionel Priyadi; Stella Kusumawardhani; Ega Kurnia Yazid

  1. By: Longbing Cao
    Abstract: AI in finance broadly refers to the application of AI techniques in financial businesses. Work in this area has been ongoing for decades, with both classic and modern AI techniques applied to increasingly broad areas of finance, the economy and society. In contrast to either discussing the problems, aspects and opportunities of finance that have benefited from specific AI techniques, in particular some new-generation AI and data science (AIDS) areas, or reviewing the progress of applying specific techniques to resolving certain financial problems, this review offers a comprehensive and dense roadmap of the overwhelming challenges, techniques and opportunities of AI research in finance over the past decades. The landscape and challenges of financial businesses and data are first outlined, followed by a comprehensive categorization and a dense overview of the decades of AI research in finance. We then structure and illustrate the data-driven analytics and learning of financial businesses and data. This is followed by a comparison, critique and discussion of classic versus modern AI techniques for finance. Lastly, open issues and opportunities address future AI-empowered finance and finance-motivated AI research.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.09051&r=
  2. By: Tara M. Sinclair; Zhoudan Xie
    Abstract: Regulatory policy can create economic and social benefits, but poorly designed or excessive regulation may generate substantial adverse effects on the economy. In this paper, we present measures of sentiment and uncertainty about regulation in the U.S. over time and examine their relationships with macroeconomic performance. We construct the measures using lexicon-based sentiment analysis of an original news corpus, which covers 493,418 news articles related to regulation from seven leading U.S. newspapers. As a result, we build monthly indexes of sentiment and uncertainty about regulation and categorical indexes for 14 regulatory policy areas from January 1985 to August 2020. Impulse response functions indicate that a negative shock to sentiment about regulation is associated with large, persistent drops in future output and employment, while increased regulatory uncertainty overall reduces output and employment temporarily. These results suggest that sentiment about regulation plays a more important economic role than uncertainty about regulation. Furthermore, economic outcomes are particularly sensitive to sentiment around transportation regulation and to uncertainty around labor regulation.
    Keywords: Regulation, text analysis, NLP, sentiment analysis, uncertainty
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2021-54&r=
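    As an illustration of the lexicon-based construction described in the entry above, here is a minimal Python sketch of a monthly news sentiment index; the word lists, example articles and scoring rule are placeholders, not the authors' actual lexicon or corpus.
      # Minimal sketch of a lexicon-based monthly sentiment index (toy lexicon and articles).
      import pandas as pd

      POSITIVE = {"benefit", "efficient", "streamline", "improve"}   # assumed toy lexicon
      NEGATIVE = {"burden", "costly", "uncertain", "restrictive"}    # assumed toy lexicon

      def article_sentiment(text):
          # Net sentiment = (positive hits - negative hits) / number of tokens.
          tokens = [t.strip(".,;:!?").lower() for t in text.split()]
          pos = sum(t in POSITIVE for t in tokens)
          neg = sum(t in NEGATIVE for t in tokens)
          return (pos - neg) / max(len(tokens), 1)

      articles = pd.DataFrame({
          "date": pd.to_datetime(["2020-01-05", "2020-01-20", "2020-02-02"]),
          "text": ["New rules streamline permits",
                   "Regulation seen as a costly burden",
                   "Outlook for the sector remains uncertain"],
      })
      articles["score"] = articles["text"].apply(article_sentiment)
      monthly_index = articles.set_index("date")["score"].resample("M").mean()
      print(monthly_index)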
  3. By: Rowe, Francisco (University of Liverpool); Mahony, Michael; Graells-Garrido, Eduardo; Rango, Marzia; Sievers, Niklas
    Abstract: In 2020, the world faced an unprecedented challenge in tackling and understanding the spread and impacts of COVID-19. Large-scale coordinated efforts have been dedicated to understanding the global health and economic implications of the pandemic. Yet the rapid spread of discrimination and xenophobia against specific populations, particularly migrants and individuals of Asian descent, has largely been neglected. Understanding public attitudes towards migration is essential to counter discrimination against immigrants and promote social cohesion. Traditional data sources for monitoring public opinion – ethnographies, interviews, and surveys – are often limited by small samples, high cost, low temporal frequency, slow collection and release, and coarse spatial resolution. New forms of data, particularly from social media, can help overcome these limitations. While some bias exists, social media data are produced at an unprecedented temporal frequency and geographical granularity, are collected globally and are accessible in real time. Drawing on a data set of 30.39 million tweets and natural language processing, this paper aims to measure shifts in public sentiment about migration during the early stages of the COVID-19 pandemic in Germany, Italy, Spain, the United Kingdom and the United States. Results show an increase in migration-related tweets along with COVID-19 cases during national lockdowns in all five countries. Yet we find no evidence of a significant increase in anti-immigration sentiment, as rises in the volume of negative messages are offset by comparable increases in positive messages. Additionally, we present evidence of growing social polarisation concerning migration, with high concentrations of strongly positive and strongly negative sentiments.
    Date: 2021–07–25
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:pc3za&r=
  4. By: Thomas Conlon (Smurfit Graduate Business School, University College Dublin); John Cotter (Smurfit Graduate Business School, University College Dublin); Iason Kynigakis (Smurfit Graduate Business School, University College Dublin)
    Abstract: We examine machine learning and factor-based portfolio optimization. We find that factors based on autoencoder neural networks exhibit a weaker relationship with commonly used characteristic-sorted portfolios than popular dimensionality reduction techniques. Machine learning methods also lead to covariance and portfolio weight structures that diverge from simpler estimators. Minimum-variance portfolios using latent factors derived from autoencoders and sparse methods outperform simpler benchmarks in terms of risk minimization. These effects are amplified for investors with an increased sensitivity to risk-adjusted returns, during high volatility periods or when accounting for tail risk. Covariance matrices with a time-varying error component improve portfolio performance at a cost of higher turnover.
    Keywords: Autoencoder, Covariance matrix, Dimensionality reduction, Factor models, Machine learning, Minimum-variance, Principal component analysis, Partial least squares, Portfolio optimization, Sparse principal component analysis, Sparse partial least squares
    JEL: C38 C4 C45 C5 C58 G1 G11
    Date: 2021–03–11
    URL: http://d.repec.org/n?u=RePEc:ucd:wpaper:202111&r=
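    To make the portfolio construction in the entry above concrete, the following is a minimal sketch of a factor-implied covariance matrix and global minimum-variance weights; PCA stands in for the autoencoder and the return panel is simulated, so this illustrates the general pipeline rather than the paper's estimators.
      # Sketch: factor-implied covariance (PCA as a stand-in for the autoencoder) and
      # global minimum-variance weights w = inv(Sigma) 1 / (1' inv(Sigma) 1).
      import numpy as np

      rng = np.random.default_rng(0)
      R = rng.normal(size=(500, 25))           # toy T x N panel of excess returns

      k = 3                                    # assumed number of latent factors
      Rc = R - R.mean(axis=0)
      U, S, Vt = np.linalg.svd(Rc, full_matrices=False)
      F = U[:, :k] * S[:k]                     # factor scores
      B = Vt[:k].T                             # loadings
      resid = Rc - F @ B.T
      Sigma = B @ np.cov(F, rowvar=False) @ B.T + np.diag(resid.var(axis=0))

      ones = np.ones(Sigma.shape[0])
      w = np.linalg.solve(Sigma, ones)
      w /= w.sum()                             # minimum-variance portfolio weights
      print(w.round(3))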
  5. By: Eccles, Peter (Bank of England); Grout, Paul (Bank of England); Siciliani, Paolo (Bank of England); Zalewska, Anna (University of Bath)
    Abstract: There is evidence that machine learning (ML) can improve the screening of risky borrowers, but the empirical literature gives diverse answers as to the impact of ML on credit markets. We provide a model in which traditional banks compete with fintech (innovative) banks that screen borrowers using ML technology and show that the impact of the adoption of the ML technology on credit markets depends on the characteristics of the market (eg borrower mix, cost of innovation, the intensity of competition, precision of the innovative technology, etc.). We provide a series of scenarios. For example, we show that if implementing ML technology is relatively expensive and lower-risk borrowers are a significant proportion of all risky borrowers, then all risky borrowers will be worse off following the introduction of ML, even when the lower-risk borrowers can be separated perfectly from others. At the other extreme, we show that if costs of implementing ML are low and there are few lower-risk borrowers, then lower-risk borrowers gain from the introduction of ML, at the expense of higher-risk and safe borrowers. Implications for policy, including the potential for tension between micro and macroprudential policies, are explored.
    Keywords: Adverse selection; banking; big data; capital requirements; credit markets; fintech; machine learning; prudential regulation
    JEL: G21 G28 G32
    Date: 2021–07–09
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0930&r=
  6. By: Andrea Pontiggia (Dept. of Management, Università Ca' Foscari Venice); Giovanni Fasano (Dept. of Management, Università Ca' Foscari Venice)
    Abstract: We consider the problem of measuring the performance of members of a given group of homogeneous individuals. We provide both an analysis relying on Machine Learning paradigms and a numerical study based on three conceptually different real applications. A key aspect of the proposed approach is its data-driven framework, in which guidelines for evaluating individuals’ performance are derived from the data associated with the entire group. This makes our analysis and its outcomes quite versatile, so that a number of real problems can be studied from the proposed general perspective.
    Keywords: Performance Analysis, Data Analytics, Support Vector Machines, Human Resources
    JEL: M51 C38
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:vnm:wpdman:182&r=
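    Since the keywords of the entry above point to Support Vector Machines, a minimal sketch of the general idea follows: a classifier is trained on the whole group's records and then used to score and rank individual members. The features and the data-derived labels are illustrative placeholders, not the paper's specification.
      # Sketch: an SVM trained on the whole group's data, then used to score and rank members.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(5)
      X = rng.normal(size=(200, 4))                 # toy performance indicators per member
      y = (X.mean(axis=1) > 0).astype(int)          # toy benchmark derived from the group itself

      clf = SVC(kernel="rbf", probability=True).fit(X, y)
      scores = clf.predict_proba(X)[:, 1]           # relative standing of each member
      ranking = np.argsort(-scores)                 # members ordered from strongest to weakest
      print(ranking[:10])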
  7. By: Leo S.F. Lin (The University of Southern Mississippi, USA)
    Abstract: This paper offers a preliminary study analyzing the role of artificial intelligence (AI) in the People’s Republic of China’s grand strategy. It examines the discourse on the role of artificial intelligence in China and how it fits into China’s grand strategy policies. In particular, the paper focuses on three grand strategy themes: the leader’s perception, grand strategy means, and grand strategy ends. China’s evolving national interests and strategic ideas are the central concern of its grand strategy. Beijing has the most ambitious AI strategy of all nations and provides the most resources for AI development. Since 2017, the development of AI has become part of China’s grand strategy plans, setting out goals to build a domestic artificial intelligence industry. The AI sector has turned into a national priority and was included in President Xi Jinping’s grand vision for China. China’s goal is to make the country “the world’s premier artificial intelligence innovation center” by 2030. Ultimately, AI will foster new national leadership and establish the key fundamentals for great economic power. There are many AI applications across several grand strategy means, including military and economic policies. This paper uses a qualitative content analysis method to examine the case. Data were collected from Chinese leaders’ speeches, government statements, official publications, and Chinese state media. The paper concludes that AI will become one of the key components of China’s grand strategy means, including economic, military, and intelligence capabilities. By promoting AI technology, China’s grand strategy ends are to maintain national power, national face, and international reputation.
    Keywords: China’s grand strategy, Artificial intelligence, China dream
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:smo:scmowp:01240&r=
  8. By: NARITA Yusuke; YATA Kohei
    Abstract: Algorithms produce a growing portion of decisions and recommendations both in policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic algorithms. Our estimator is shown to be consistent and asymptotically normal for well-defined causal effects. A key special case of our estimator is a high-dimensional regression discontinuity design. The proofs use tools from differential geometry and geometric measure theory, which may be of independent interest. The practical performance of our method is first demonstrated in a high-dimensional simulation resembling decision-making by machine learning algorithms. Our estimator has smaller mean squared errors compared to alternative estimators. We finally apply our estimator to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, where more than $10 billion worth of relief funding is allocated to hospitals via an algorithmic rule. The estimates suggest that the relief funding has little effect on COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit substantial selection bias.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:eti:dpaper:21057&r=
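    The regression discontinuity intuition behind the entry above can be sketched in one dimension as follows; the eligibility rule, bandwidth and data are illustrative, and the paper's estimator handles high-dimensional, possibly stochastic rules.
      # Stylized local-linear RDD around a deterministic algorithmic eligibility cutoff.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      score = rng.uniform(-1, 1, 2000)              # running variable fed to the algorithm
      treated = (score >= 0).astype(float)          # deterministic eligibility rule
      y = 0.5 * score + 2.0 * treated + rng.normal(size=2000)   # outcome, true effect = 2

      h = 0.25                                      # illustrative bandwidth
      win = np.abs(score) <= h
      X = sm.add_constant(np.column_stack(
          [treated[win], score[win], score[win] * treated[win]]))
      fit = sm.OLS(y[win], X).fit()
      print(fit.params[1])                          # local estimate of the jump at the cutoff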
  9. By: Qinkai Chen
    Abstract: News events can greatly influence equity markets. In this paper, we are interested in predicting the short-term movement of stock prices after financial news events using only the headlines of the news. To achieve this goal, we introduce a new text mining method called Fine-Tuned Contextualized-Embedding Recurrent Neural Network (FT-CE-RNN). Compared with previous approaches which use static vector representations of the news (static embedding), our model uses contextualized vector representations of the headlines (contextualized embeddings) generated from Bidirectional Encoder Representations from Transformers (BERT). Our model obtains the state-of-the-art result on this stock movement prediction task. It shows significant improvement compared with other baseline models, in both accuracy and trading simulations. Through various trading simulations based on millions of headlines from Bloomberg News, we demonstrate the ability of this model in real scenarios.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.08721&r=
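    The embedding step described in the entry above can be sketched with the Hugging Face transformers library as below; the model name and the untrained linear head are assumptions for illustration and do not reproduce the FT-CE-RNN architecture or its fine-tuning.
      # Sketch: contextualized [CLS] embeddings of headlines from a pretrained BERT,
      # fed to a toy (untrained) up/down classification head.
      import torch
      from transformers import AutoTokenizer, AutoModel

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      bert = AutoModel.from_pretrained("bert-base-uncased")

      headlines = ["Company X beats earnings expectations",
                   "Regulator opens probe into Company Y"]
      batch = tokenizer(headlines, padding=True, truncation=True, return_tensors="pt")
      with torch.no_grad():
          out = bert(**batch)
      emb = out.last_hidden_state[:, 0, :]          # one contextualized vector per headline

      head = torch.nn.Linear(emb.shape[1], 2)       # toy classification head
      print(head(emb).softmax(dim=-1))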
  10. By: Akihiko Takahashi (Faculty of Economics, The University of Tokyo); Toshihiro Yamada (Graduate School of Economics, Hitotsubashi University and Japan Science and Technology Agency (JST))
    Abstract: This paper proposes a new spatial approximation method without the curse of dimensionality for solving high-dimensional partial differential equations (PDEs) by using an asymptotic expansion method with a deep learning-based algorithm. In particular, a mathematical justification of the spatial approximation is provided, and a numerical example for a 100-dimensional Kolmogorov PDE shows the effectiveness of our method.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2021cf1167&r=
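    For reference, Kolmogorov (backward) PDEs of the kind targeted in the entry above can be written in the generic form below, with drift b and diffusion matrix a possibly nonlinear in x; the notation is standard and not necessarily the paper's.
      \begin{aligned}
      \partial_t u(t,x) &= \sum_{i=1}^{d} b_i(x)\,\partial_{x_i} u(t,x)
        + \frac{1}{2}\sum_{i,j=1}^{d} a_{ij}(x)\,\partial_{x_i}\partial_{x_j} u(t,x), \\
      u(0,x) &= f(x), \qquad x \in \mathbb{R}^{d},\quad d \gg 1 \ (\text{e.g. } d = 100).
      \end{aligned}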
  11. By: Wei Li; Florentina Paraschiv; Georgios Sermpinis
    Abstract: The rapid development of artificial intelligence methods has contributed to their wide application in forecasting various financial risks in recent years. This study introduces a novel explainable case-based reasoning (CBR) approach that does not require rich expertise in financial risk. Compared with other black-box algorithms, the explainable CBR system allows a natural economic interpretation of results. Indeed, the empirical results emphasize the interpretability of the CBR system in predicting financial risk, which is essential for both financial companies and their customers. In addition, our results show that the proposed automatically designed CBR system achieves good prediction performance compared to other artificial intelligence methods, overcoming the main drawback of a standard CBR system, namely its heavy dependence on prior domain knowledge about the corresponding field.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.08808&r=
  12. By: Christian Bartelheimer (Paderborn University; Paderborn University; Hamm-Lippstadt University of Applied Sciences; Paderborn University)
    Abstract: While the Information Systems (IS) discipline has researched digital platforms extensively, the body of knowledge appertaining to platforms still appears fragmented and lacking conceptual consistency. Based on automated text mining and unsupervised machine learning, we collect, analyze, and interpret the IS discipline's comprehensive research on platforms—comprising 11,049 papers spanning 44 years of research activity. From a cluster analysis of these concepts’ semantically most similar words, we identify six research streams on platforms, each with its own platform terms. Based on interpreting the identified concepts vis-à-vis the extant research and considering a temporal perspective on the concepts’ application, we present a taxonomy and a lexicon of platform concepts to guide further research on platforms in the IS discipline. Researchers and managers can build on our results to position their work appropriately, applying the platform concepts that fit their results best. On a community level, we contribute to establishing a more consistent view of digital platforms as a prevalent topic in IS research that keeps advancing to new frontiers.
    Keywords: platform; text mining; machine learning; data communications; interpretive research; systems design and implementation
    JEL: C71 D85 L22
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:pdn:dispap:79&r=
  13. By: Stephan Zheng; Alexander Trott; Sunil Srinivasa; David C. Parkes; Richard Socher
    Abstract: AI and reinforcement learning (RL) have improved many areas, but are not yet widely adopted in economic policy design, mechanism design, or economics at large. At the same time, current economic methodology is limited by a lack of counterfactual data, simplistic behavioral models, and limited opportunities to experiment with policies and evaluate behavioral responses. Here we show that machine-learning-based economic simulation is a powerful policy and mechanism design framework to overcome these limitations. The AI Economist is a two-level, deep RL framework that trains both agents and a social planner who co-adapt, providing a tractable solution to the highly unstable and novel two-level RL challenge. From a simple specification of an economy, we learn rational agent behaviors that adapt to learned planner policies and vice versa. We demonstrate the efficacy of the AI Economist on the problem of optimal taxation. In simple one-step economies, the AI Economist recovers the optimal tax policy of economic theory. In complex, dynamic economies, the AI Economist substantially improves both utilitarian social welfare and the trade-off between equality and productivity over baselines. It does so despite emergent tax-gaming strategies, while accounting for agent interactions and behavioral change more accurately than economic theory. These results demonstrate for the first time that two-level, deep RL can be used for understanding and as a complement to theory for economic design, unlocking a new computational learning-based approach to understanding economic policy.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.02755&r=
  14. By: Wei, Xuan; Zhang, Zhu; Zhang, Mingyue; Chen, Weiyun; Zeng, Daniel Dajun
    Abstract: These are the online supplements to the paper "Combining Crowd and Machine Intelligence to Detect False News in Social Media", which is forthcoming in Management Information Systems Quarterly.
    Date: 2021–07–04
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:6svp9&r=
  15. By: Stathis Polyzos; Anestis Fotiadis; Aristeidis Samitas (College of Business, Zayed University, Abu Dhabi, UAE)
    Abstract: The aim of this paper is to produce forecasts of tourism flows and tourism revenue for ASEAN and East Asian countries after the end of the COVID-19 pandemic. By implementing two different machine-learning methodologies (the Long Short-Term Memory neural network and the Generalised Additive Model) and using different training data sets, we aim to forecast the recovery patterns of these data series for the first 12 months after the end of the crisis. We thus produce a baseline forecast, based on the averages of our different models, as well as worst- and best-case scenarios. We show that recovery is asymmetric across the group of countries in the ASEAN and East Asia region and that recovery in tourism revenue is generally slower than in tourist arrivals. We show significant losses of approximately 48%, persistent after 12 months, for some countries, while others display increases of approximately 40% compared to pre-crisis levels. Our work aims to quantify the projected drop in tourist arrivals and tourism revenue for ASEAN and East Asian countries over the coming months. The results of the proposed research can be used by policymakers as they determine recovery plans, in which tourism will undoubtedly play a very important role.
    Keywords: COVID-19, tourism, deep learning, ASEAN, East Asia
    JEL: H12 P46 Z32
    Date: 2021–06–08
    URL: http://d.repec.org/n?u=RePEc:era:wpaper:dp-2021-12&r=
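    As a minimal sketch of the LSTM side of the entry above, the following Keras snippet fits a one-step-ahead forecaster to a toy monthly series; the look-back window, architecture and data are illustrative assumptions rather than the authors' specification.
      # Sketch: univariate LSTM one-step-ahead forecaster on a toy monthly series.
      import numpy as np
      import tensorflow as tf

      rng = np.random.default_rng(2)
      series = 100 + 10 * np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(scale=2, size=120)

      window = 12                                   # illustrative look-back of one year
      X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
      y = series[window:]

      model = tf.keras.Sequential([
          tf.keras.Input(shape=(window, 1)),
          tf.keras.layers.LSTM(32),
          tf.keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X, y, epochs=20, verbose=0)
      print(model.predict(X[-1:], verbose=0))       # forecast for the next month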
  16. By: Gadat, Sébastien; Corre, Lola; Doury, Antoine; Ribes, Aurélien; Somot, Samuel
    Abstract: Providing reliable information on climate change at the local scale remains a challenge of prime importance for impact studies and policymakers. Here, we propose a novel hybrid downscaling method combining the strengths of both empirical statistical downscaling methods and Regional Climate Models (RCMs). The aim of this tool is to enlarge the size of high-resolution RCM simulation ensembles at low cost. We build a statistical RCM-emulator by estimating the downscaling function included in the RCM. This framework allows us to learn the relationship between large-scale predictors and a local surface variable of interest over the RCM domain in present and future climate. Furthermore, the emulator relies on a neural network architecture, which grants computational efficiency. The RCM-emulator developed in this study is trained to produce daily maps of near-surface temperature at the RCM resolution (12 km). The emulator demonstrates an excellent ability to reproduce the complex spatial structure and daily variability simulated by the RCM, and in particular the way the RCM locally refines the low-resolution climate patterns. Training in future climate appears to be a key feature of our emulator. Moreover, there is a huge computational benefit in running the emulator rather than the RCM, since training the emulator takes about two hours on a GPU and prediction is nearly instantaneous. However, further work is needed to improve the way the RCM-emulator reproduces some of the temperature extremes and the intensity of climate change, and to extend the proposed methodology to different regions, GCMs, RCMs, and variables of interest.
    Keywords: Emulator, Hybrid downscaling, Regional Climate Modeling, Statistical Downscaling, Deep Neural Network, Machine Learning
    Date: 2021–07–21
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:125808&r=
  17. By: Yuga Iguchi (MUFG Bank); Riu Naito (Japan Post Insurance and Hitotsubashi University); Yusuke Okano (SMBC Nikko Securities); Akihiko Takahashi (Faculty of Economics, The University of Tokyo); Toshihiro Yamada (Graduate School of Economics, Hitotsubashi University and Japan Science and Technology Agency (JST))
    Abstract: This paper proposes a new spatial approximation method without the curse of dimensionality for solving high-dimensional partial differential equations (PDEs) by using an asymptotic expansion method with a deep learning-based algorithm. In particular, a mathematical justification of the spatial approximation is provided, and a numerical example for a 100-dimensional Kolmogorov PDE shows the effectiveness of our method.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2021cf1168&r=
  18. By: Shubham Ekapure; Nuruddin Jiruwala; Sohan Patnaik; Indranil SenGupta
    Abstract: In this paper, we implement a combination of technical analysis and machine/deep learning-based analysis to build a trend classification model. The goal of the paper is to apprehend short-term market movement and incorporate it to improve the underlying stochastic model. The analysis presented in this paper can also be implemented in a model-independent fashion. We execute a data-science-driven technique that makes short-term forecasts dependent on the price trends of current stock market data. Based on the analysis, three different labels are generated for a data set: +1 (buy signal), 0 (hold signal), or -1 (sell signal). We propose a detailed analysis of four major stocks: Amazon, Apple, Google, and Microsoft. We implement various technical indicators to label the data set according to the trend and train various models for trend estimation. Statistical analyses of the outputs and classification results are presented.
    Date: 2021–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2107.14695&r=
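    The labelling idea in the entry above can be sketched as follows: compute a simple indicator (here a moving-average gap, an illustrative stand-in for the paper's technical indicators) and map it to +1/0/-1 with a dead-band threshold.
      # Sketch: label a toy price series as +1 (buy), 0 (hold), -1 (sell) from a moving-average gap.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(3)
      price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

      fast = price.rolling(10).mean()
      slow = price.rolling(50).mean()
      gap = (fast - slow) / slow

      threshold = 0.01                          # assumed dead-band around zero
      label = pd.Series(0, index=price.index)
      label[gap > threshold] = 1                # buy signal
      label[gap < -threshold] = -1              # sell signal
      print(label.value_counts())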
  19. By: Dave Cliff; James Hawkins; James Keen; Roberto Lau-Soto
    Abstract: We describe three independent implementations of a new agent-based model (ABM) that simulates a contemporary sports-betting exchange, such as those offered commercially by companies including Betfair, Smarkets, and Betdaq. The motivation for constructing this ABM, which is known as the Bristol Betting Exchange (BBE), is so that it can serve as a synthetic data generator, producing large volumes of data that can be used to develop and test new betting strategies via advanced data analytics and machine learning techniques. Betting exchanges act as online platforms on which bettors can find willing counterparties to a bet, and they do this in a way that is directly comparable to the manner in which electronic financial exchanges, such as major stock markets, act as platforms that allow traders to find willing counterparties to buy from or sell to: the platform aggregates and anonymises orders from multiple participants, showing a summary of the market that is updated in real-time. In the first instance, BBE is aimed primarily at producing synthetic data for in-play betting (also known as in-race or in-game betting) where bettors can place bets on the outcome of a track-race event, such as a horse race, after the race has started and for as long as the race is underway, with betting only ceasing when the race ends. The rationale for, and design of, BBE has been described in detail in a previous paper that we summarise here, before discussing our comparative results which contrast a single-threaded implementation in Python, a multi-threaded implementation in Python, and an implementation where Python header-code calls simulations of the track-racing events written in OpenCL that execute on a 640-core GPU -- this runs approximately 1000 times faster than the single-threaded Python. Our source-code for BBE is freely available on GitHub.
    Date: 2021–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2108.02419&r=
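    The order-aggregation behaviour described in the entry above can be sketched in a few lines; this toy snippet only illustrates how an exchange summarises anonymous back/lay orders by price level and is not the BBE implementation (which is available on GitHub).
      # Sketch: aggregating anonymous back/lay orders into the summary ladder an exchange displays.
      from collections import defaultdict

      orders = [                                   # (side, decimal odds, stake) from many bettors
          ("back", 3.0, 50), ("back", 3.0, 25), ("back", 3.2, 10),
          ("lay", 3.4, 40), ("lay", 3.4, 20), ("lay", 3.6, 15),
      ]

      book = {"back": defaultdict(float), "lay": defaultdict(float)}
      for side, odds, stake in orders:
          book[side][odds] += stake                # aggregate and anonymise by price level

      best_back = max(book["back"])                # highest odds currently available to back
      best_lay = min(book["lay"])                  # lowest odds currently available to lay
      print(best_back, book["back"][best_back], best_lay, book["lay"][best_lay])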
  20. By: Yose Rizal Damuri (Centre for Strategic and International Studies (CSIS), Indonesia); Prabaning Tyas (Tenggara Strategics, Indonesia); Haryo Aswicahyono (Centre for Strategic and International Studies (CSIS), Indonesia); Lionel Priyadi (Tenggara Strategics, Indonesia); Stella Kusumawardhani (Tenggara Strategics, Indonesia); Ega Kurnia Yazid (Centre for Strategic and International Studies (CSIS), Indonesia)
    Abstract: A timely and reliable prediction of economic activity is crucial for policymaking, especially in the current COVID-19 pandemic situation, which requires real-time decisions. However, making frequent predictions is challenging due to the substantial delays in releasing aggregate economic data. This study aims to nowcast Indonesia’s economic activity during the COVID-19 pandemic using the novel high-frequency Facebook Mobility Index as a predictor. Employing mixed-frequency, mixed-data sampling, and benchmark least-squares models, we expanded the mobility index and used it to track the growth dynamics of the gross regional domestic product of provinces in Java and Bali, and applied a bottom-up approach to estimate the aggregated economic growth of these provinces. Our results suggested that the daily Facebook Mobility Index was a considerably reliable predictor for projecting economic activity in a timely manner. All models almost consistently produced reliable directional predictions. Notably, we found the mixed-data-sampling autoregressive model to be slightly superior to the other models in terms of overall precision and directional predictive accuracy across observations.
    Keywords: COVID-19, nowcasting, GDP, mobility, Mixed-frequency
    JEL: C20 C53 R11
    Date: 2021–07–05
    URL: http://d.repec.org/n?u=RePEc:era:wpaper:dp-2021-18&r=
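    The benchmark least-squares idea in the entry above can be sketched as a simple bridge regression: average the daily mobility index to the quarterly frequency and regress growth on it. The data and specification below are toy placeholders, not the study's mixed-frequency or MIDAS models.
      # Sketch of a bridge regression: daily mobility averaged to quarters, then quarterly
      # growth regressed on it and used to nowcast the latest quarter (toy data).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      days = pd.date_range("2020-01-01", "2021-06-30", freq="D")
      mobility = pd.Series(rng.normal(0, 1, len(days)).cumsum() / 10, index=days)

      q_mobility = mobility.resample("Q").mean()
      growth = 0.8 * q_mobility + rng.normal(0, 0.3, len(q_mobility))   # toy quarterly growth

      X = sm.add_constant(q_mobility.values)
      fit = sm.OLS(growth.values, X).fit()
      nowcast = fit.predict(np.array([[1.0, q_mobility.iloc[-1]]]))     # latest-quarter nowcast
      print(fit.params, nowcast)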

This nep-big issue is ©2021 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.