nep-big New Economics Papers
on Big Data
Issue of 2022‒12‒19
28 papers chosen by
Tom Coupé
University of Canterbury

  1. Humans Feel Too Special for Machines to Score Their Morals By Purcell, Zoe; Bonnefon, Jean-François
  2. FinBERT-LSTM: Deep Learning based stock price prediction using News Sentiment Analysis By Shayan Halder
  3. Spatial Machine Learning – New Opportunities for Regional Science By Katarzyna Kopczewska
  4. Online Investor Sentiment via Machine Learnings By Zongwu Cai; Pixiong Chen
  5. A Nowcasting Model of Industrial Production using Alternative Data and Machine Learning Approaches By Kakuho Furukawa; Ryohei Hisano; Yukio Minoura; Tomoyuki Yagi
  6. Effects of Artificial Intelligence, Big Data Analytics, and Business Intelligence on Digital Transformation in UAE Telecommunication Firms By , editor2021; Younus, Ahmed Muayad
  7. On the application of Machine Learning in telecommunications forecasting: A comparison By Petre, Konstantin; Varoutas, Dimitris
  8. Online Dynamic Portfolio Choices By Zongwu Cai; Pixiong Chen
  9. Redirect the Probability Approach in Econometrics Towards PAC Learning By Duo Qin
  10. Intellectual Property Protection Lost and Competition: An Examination Using Machine Learning By Utku U. Acikalin; Tolga Caskurlu; Gerard Hoberg; Gordon M. Phillips
  11. ESG Factors and Firms’ Credit Risk By Bonacorsi, Laura; Cerasi, Vittoria; Galfrascoli, Paola; Manera, Matteo
  12. Consumer credit in the age of AI: Beyond anti-discrimination law By Langenbucher, Katja
  13. The Effect of Artificial Intelligence on Job Performance in China's Small and Medium-Sized Enterprises (SMEs) By , editor2021; Younus, Ahmed Muayad
  14. Forecasting Bitcoin volatility spikes from whale transactions and CryptoQuant data using Synthesizer Transformer models By Dorien Herremans; Kah Wee Low
  15. Humans Feel Too Special for Machines to Score Their Morals By Bonnefon, Jean-François; Purcell, Zoe
  16. Exploring New Ways to Classify Industries for Energy Analysis and Modeling By Liz Wachs; Colin McMillan; Gale Boyd; Matt Doolin
  17. Using multimodal learning and deep generative models for corporate bankruptcy prediction By Rogelio A. Mancisidor
  18. Digital skills for all? From computer literacy to AI skills in online job advertisements By Matteo Sostero; Songül Tolan
  19. Regulating Algorithmic Learning in Digital Platform Ecosystems through Data Sharing and Data Siloing: Consequences for Innovation and Welfare By Krämer, Jan; Shekhar, Shiva; Hofmann, Janina
  20. Tone of Mass Media News Affect Pledge Amounts in Reward Crowdfunding Campaign By Wesley Mendes-da-Silva; Israel José dos Santos Felipe; Cristiana Cerqueira Leal; Marcelo Otone Aguiar
  21. Motif-aware temporal GCN for fraud detection in signed cryptocurrency trust networks By Chong Mo; Song Li; Geoffrey K. F. Tso; Jiandong Zhou; Yiyan Qi; Mingjie Zhu
  22. On Pricing of Discrete Asian and Lookback Options under the Heston Model By Leonardo Perotti; Lech A. Grzelak
  23. Unconfoundedness with Network Interference By Michael P. Leung; Pantelis Loupos
  24. Evaluating COVID-19’s Impact on Firm Performance in the CAREC Region Using Night-Time Light Data: Azerbaijan, Georgia, Kazakhstan, and Mongolia By Karymshakov, Kamalbek; Azhgaliyeva, Dina; Mishra, Ranjeeta; Aseinov, Dastan
  25. Enhanced Bayesian Neural Networks for Macroeconomics and Finance By Niko Hauzenberger; Florian Huber; Karin Klieber; Massimiliano Marcellino
  26. The Future Economics of Artificial Intelligence: Mythical Agents, a Singleton and the Dark Forest By Naudé, Wim
  27. Newly Developed Flexible Grid Trading Model Combined ANN and SSO algorithm By Wei-Chang Yeh; Yu-Hsin Hsieh; Chia-Ling Huang
  28. New Online Investor Sentiment and Asset Returns By Zongwu Cai; Pixiong Chen

  1. By: Purcell, Zoe; Bonnefon, Jean-François
    Abstract: Artificial Intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems —enabling people and organizations to form judgements of others at scale. However, it also poses significant ethical challenges and is, subsequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand the attraction or resistance that people have for AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and resist for this reason the introduction of moral scoring by AI.
    Keywords: Artificial Intelligence; social credit scoring, ethics; consumer psychology
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:tse:iastwp:127526&r=big
  2. By: Shayan Halder
    Abstract: The economy depends heavily on the stock market: an uptrend usually corresponds to prosperity, while a downtrend correlates with recession. Predicting the stock market has thus long been a centre of research and experiment. Being able to predict short-term movements in the market enables investors to reap greater returns on their investments. Stock prices are extremely volatile and sensitive to financial news. In this paper we use deep learning networks to predict stock prices, assimilating financial, business and technology news articles that present information about the market. First, we create a simple Multilayer Perceptron (MLP) network, then expand into a more complex Recurrent Neural Network (RNN), namely Long Short-Term Memory (LSTM), and finally propose the FinBERT-LSTM model, which integrates news article sentiments to predict stock prices with greater accuracy by analysing short-term market information. We then train the models on NASDAQ-100 index stock data and New York Times news articles and evaluate the performance of the MLP, LSTM, and FinBERT-LSTM models using mean absolute error (MAE), mean absolute percentage error (MAPE) and accuracy metrics.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.07392&r=big
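    [Editor's note] A minimal sketch of the two error metrics the abstract names (MAE and MAPE). The numbers below are illustrative placeholders, not the paper's NASDAQ-100 results:

```python
def mae(actual, predicted):
    # Mean absolute error: average magnitude of prediction errors.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    # Mean absolute percentage error, expressed in percent.
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical index levels and model predictions.
actual = [100.0, 102.0, 101.0, 105.0]
predicted = [101.0, 101.0, 103.0, 104.0]
print(mae(actual, predicted))   # error in index points
print(mape(actual, predicted))  # same error in percent
```

    MAPE is scale-free, which is why such studies often report it alongside MAE when comparing models across price levels.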
  3. By: Katarzyna Kopczewska (Faculty of Economic Sciences, University of Warsaw)
    Abstract: This paper is a methodological guide to using machine learning in the spatial context. It provides an overview of the existing spatial toolbox proposed in the literature: unsupervised learning, which deals with clustering of spatial data, and supervised learning, which displaces classical spatial econometrics. It shows the potential and the traps of using this developing methodology. It catalogues and comments on the usage of spatial clustering methods (for locations and values, separately and jointly) for mapping, bootstrapping, cross-validation, GWR modelling, and density indicators. It shows details of spatial machine learning models, combined with spatial data integration, modelling, model fine-tuning and prediction, to deal with spatial autocorrelation and big data. The paper delineates "already available" and "forthcoming" methods and offers inspiration for transplanting modern quantitative methods from other thematic areas into regional science research.
    Keywords: spatial machine learning, clustering, spatial covariates, spatial cross-validation, spatial autocorrelation
    JEL: C31 R10 C49
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2021-16&r=big
  4. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Pixiong Chen (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: In this paper, we propose utilizing machine learning methods to determine the expected aggregate stock market risk premium based on online investor sentiment. Our empirical studies provide strong evidence that some machine learning methods, such as extreme gradient boosting and random forests, show significant predictive ability in terms of out-of-sample R-squared with high-dimensional investor sentiment proxies. They also outperform traditional linear models, which reveals a possible unobserved nonlinear relationship between online investor sentiment and the risk premium. Moreover, this predictability based on online investor sentiment has economic value: it improves portfolio performance, in terms of certainty equivalent return gain and Sharpe ratio, for investors who need to decide the optimal asset allocation.
    JEL: C45 C55 C58 G11 G17
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202217&r=big
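    [Editor's note] The out-of-sample R-squared mentioned above is commonly defined against a historical-mean benchmark (as in the return-predictability literature). A minimal sketch with illustrative numbers, not the paper's estimates:

```python
def oos_r2(actual, model_pred, bench_pred):
    # Out-of-sample R-squared: 1 - SSE(model) / SSE(benchmark).
    # Positive values mean the model beats the benchmark out of sample.
    sse_model = sum((a - m) ** 2 for a, m in zip(actual, model_pred))
    sse_bench = sum((a - b) ** 2 for a, b in zip(actual, bench_pred))
    return 1.0 - sse_model / sse_bench

# Hypothetical monthly excess returns.
actual = [0.02, -0.01, 0.03, 0.00]
model_pred = [0.015, -0.005, 0.025, 0.005]   # e.g. a boosted-tree forecast
bench_pred = [0.01, 0.01, 0.01, 0.01]        # e.g. rolling historical mean
print(oos_r2(actual, model_pred, bench_pred))
```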
  5. By: Kakuho Furukawa (Bank of Japan); Ryohei Hisano (The University of Tokyo); Yukio Minoura (Bank of Japan); Tomoyuki Yagi (Bank of Japan)
    Abstract: Recent years have seen a growing trend of utilizing "alternative data" in addition to traditional statistical data in order to understand and assess economic conditions in real time. In this paper, we construct a nowcasting model for the Indices of Industrial Production (IIP), which measure production activity in the manufacturing sector in Japan. The model has the following characteristics: First, it uses alternative data (mobility data and electricity demand data) that are available in real time and can nowcast the IIP one to two months before their official release. Second, the model employs machine learning techniques to improve nowcasting accuracy by endogenously changing, depending on the economic situation, the mixing ratio between nowcast values based on traditional economic statistics (the Indices of Industrial Production Forecast) and nowcast values based on alternative data. The estimation results show that by applying machine learning techniques to alternative data, production activity can be nowcasted with high accuracy, including during the large fluctuations that accompanied the spread of the COVID-19 pandemic.
    Keywords: Industrial production; Mobility data; Electricity data; Nowcasting; Machine learning; COVID-19
    JEL: C49 C55 E23 E27
    Date: 2022–11–25
    URL: http://d.repec.org/n?u=RePEc:boj:bojwps:wp22e16&r=big
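    [Editor's note] The core mechanism described above — an endogenously varying mixing ratio between two nowcasts — can be sketched as a convex combination. The weighting rule and all numbers here are illustrative assumptions, not the Bank of Japan model:

```python
def combine_nowcasts(stat_nowcast, alt_nowcast, weight):
    # Convex combination of two nowcasts; 'weight' is the share placed on
    # the alternative-data nowcast. In the paper's spirit, a learner would
    # set this weight depending on the economic situation.
    assert 0.0 <= weight <= 1.0
    return weight * alt_nowcast + (1.0 - weight) * stat_nowcast

# In calm periods, lean on the statistics-based nowcast...
print(combine_nowcasts(101.0, 98.0, 0.2))
# ...in turbulent periods (e.g. COVID-19), lean on mobility/electricity data.
print(combine_nowcasts(101.0, 98.0, 0.8))
```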
  6. By: , editor2021; Younus, Ahmed Muayad
    Abstract: This research’s primary objective is to investigate the impact of artificial intelligence, big data analytics, and business intelligence on digital transformation in UAE telecommunications companies. Following the completion of the sample checking procedure, 200 samples were collected. The Amos program was used to process all the collected data. The findings demonstrate a set of relationships and linkages that can enhance digital transformation. Moreover, all three hypotheses (H1, H2, and H3) were found to be valid and significant. This study concluded that artificial intelligence, big data analytics, and business intelligence have a positive impact on developing and enhancing digital transformation.
    Date: 2022–06–06
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:8usrk&r=big
  7. By: Petre, Konstantin; Varoutas, Dimitris
    Abstract: Over the past few decades, a large number of research papers have been published on forecasting ICT products using various diffusion models such as the logistic, Gompertz, and Bass models. Much less research has been done on the application of time series forecasting in ICT, such as the ARIMA model, which seems to be an attractive alternative. More recently, with advances in computational power, machine learning and artificial intelligence have become popular due to their superior performance over classical models in many areas of concern. In this paper, broadband penetration is analysed separately for all OECD countries, to determine which model is superior in most cases and phases in time. Although diffusion models are dedicated to this purpose, the ARIMA model has nevertheless proven a good alternative in many previous works. In this study, a new approach using LSTM networks stands out as a promising method for projecting the diffusion of high-technology innovations.
    Keywords: Diffusion models, ARIMA, LSTM, broadband penetration forecasting
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:itse22:265665&r=big
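    [Editor's note] For readers unfamiliar with the diffusion models compared above, the logistic curve is the simplest: penetration follows an S-shape toward a saturation level. A sketch with made-up parameters, not fitted OECD values:

```python
import math

def logistic_diffusion(t, saturation, midpoint, rate):
    # Classic logistic diffusion curve: penetration approaches 'saturation',
    # with its inflection (fastest growth) at t = midpoint.
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

# Illustrative: saturation of 40 subscriptions per 100 inhabitants,
# fastest growth in year 10, growth-rate parameter 0.5.
for year in (0, 5, 10, 15, 20):
    print(year, logistic_diffusion(year, 40.0, 10.0, 0.5))
```

    Gompertz and Bass models replace this functional form with asymmetric or imitation-driven dynamics; ARIMA and LSTM approaches instead learn the trajectory from the time series itself.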
  8. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Pixiong Chen (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: In this paper, we study online investor sentiment and propose using the nonparametric generalized method of moments of Cai (2003) to estimate the portfolio policy via an investor sentiment index for optimal asset allocation. We find that portfolio performance is improved by introducing flexibility into the portfolio policy and by incorporating investor sentiment, constructed using machine learning methods, as a predictor that captures the time variation of investment opportunities. It is shown that market timing based on investor sentiment is nonlinear and varies across assets.
    JEL: C14 C44 G11 G17
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202218&r=big
  9. By: Duo Qin (Department of Economics, SOAS University of London)
    Abstract: The infiltration of machine learning (ML) methods into econometrics has remained relatively slow compared with their extensive applications in many other disciplines. The bottleneck is traced to two key factors – a communal nescience of the theoretical foundation of ML and an outdated probability foundation. The present study undertakes an overhaul of the probability approach of Haavelmo (1944) in light of ML theories of learnability, centred upon the notion of probably approximately correct (PAC) learning. The study argues for a reorientation of the probability approach towards assisting decision making for model learning and selection purposes. The first part of the study is presented here.
    Keywords: probability; uncertainty; machine learning; hypothesis testing; knowledge; representation
    JEL: C10 C18 B40
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:soa:wpaper:249&r=big
  10. By: Utku U. Acikalin; Tolga Caskurlu; Gerard Hoberg; Gordon M. Phillips
    Abstract: We examine the impact of lost intellectual property protection on innovation, competition, acquisitions, lawsuits and employment agreements. We consider firms whose ability to protect intellectual property (IP) using patents is weakened following the Alice Corp. vs. CLS Bank International Supreme Court decision. This decision has impacted patents in multiple areas including business methods, software, and bioinformatics. We use state-of-the-art machine learning techniques to identify firms’ existing patent portfolios’ potential exposure to the Alice decision. While all affected firms decrease patenting post-Alice, we find an unequal impact of decreased patent protection. Large affected firms benefit as their sales and market valuations increase, and their exposure to lawsuits decreases. They also acquire fewer firms post-Alice. Small affected firms lose as they face increased competition, product-market encroachment, and lower profits and valuations. They increase R&D and have their employees sign more nondisclosure agreements.
    JEL: D43 G34 O31 O33 O34
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:30671&r=big
  11. By: Bonacorsi, Laura; Cerasi, Vittoria; Galfrascoli, Paola; Manera, Matteo
    Abstract: We study the relationship between the risk of default and Environmental, Social and Governance (ESG) factors using Supervised Machine Learning (SML) techniques on a cross-section of European listed companies. Our proxy for credit risk is the z-score originally proposed by Altman (1968). We consider an extensive number of ESG raw factors sourced from the rating provider MSCI as potential explanatory variables. In a first stage we show, using different SML methods such as LASSO and Random Forest, that a selection of ESG factors, in addition to the usual accounting ratios, helps explain a firm’s probability of default. In a second stage, we measure the impact of the selected variables on the risk of default. Our approach provides a novel perspective on which environmental, social responsibility and governance characteristics may reinforce the credit score of individual companies.
    Keywords: Financial Economics, Productivity Analysis, Research Methods/ Statistical Methods
    Date: 2022–11–29
    URL: http://d.repec.org/n?u=RePEc:ags:feemwp:329521&r=big
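    [Editor's note] The Altman (1968) z-score used above as the credit-risk proxy is, as commonly stated, a fixed linear combination of five accounting ratios. A sketch with hypothetical inputs; the cutoffs in the comments are the conventional rules of thumb, not this paper's:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    # Altman (1968) z-score, as commonly stated:
    #   Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5
    # X1 working capital / total assets, X2 retained earnings / total assets,
    # X3 EBIT / total assets, X4 market value of equity / total liabilities,
    # X5 sales / total assets. Lower Z is conventionally read as higher
    # default risk (roughly, distress below ~1.81, safe above ~2.99).
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

# Hypothetical firm.
z = altman_z(0.1, 0.2, 0.15, 1.5, 1.1)
print(z)
```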
  12. By: Langenbucher, Katja
    Abstract: Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change in recent years, due to the advent of big data and machine learning. For some, this holds the promise of inclusion and better access to finance. Invisible prime applicants perform better under AI than under traditional metrics. Broader data and more refined models help to detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historic training data shape algorithms, biases distort results, and data as well as model quality are not always assured. Against this background, an intense debate over algorithmic discrimination has developed. This paper takes a first step towards developing principles of fair lending in the age of AI. It submits that there are fundamental difficulties in fitting algorithmic discrimination into the traditional regime of anti-discrimination laws. Received doctrine, with its focus on causation, is in many cases ill-equipped to deal with algorithmic decision-making under both disparate treatment and disparate impact doctrine. The paper concludes with a suggestion to reorient the discussion and an attempt to outline the contours of fair lending law in the age of AI.
    Keywords: credit scoring methodology, AI enabled credit scoring, AI borrower classification, responsible lending, credit scoring regulation, financial privacy, statistical discrimination
    JEL: C18 C32 K12 K23 K33 K40 J14 O31 O33
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:lawfin:42&r=big
  13. By: , editor2021; Younus, Ahmed Muayad
    Abstract: Applications of artificial intelligence (AI) in business have garnered much attention in recent years, but the implementation issues posed by AI have not been addressed. The purpose of this study was to shed light on the effect of artificial intelligence and its associated variables on job performance. Privacy, consent, security, scalability, the role of corporations, and the changing nature of business are examined, with a focus on small and medium-sized enterprises (SMEs) in China's business sector. To collect data from the random sample, a questionnaire was constructed; the sample included 220 managers. The study took a descriptive approach and analyzed the data using SPSS. The findings indicated that artificial intelligence has a statistically significant effect on job performance. Additionally, the findings indicated that gender, academic credentials, and years of experience all have a statistically significant impact on work performance. If the implementation science community wants to aid the general adoption of AI in business, the concerns outlined in this research will demand significant attention in the coming years.
    Date: 2022–06–10
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:qg8x7&r=big
  14. By: Dorien Herremans; Kah Wee Low
    Abstract: The cryptocurrency market is highly volatile compared to traditional financial markets. Hence, forecasting its volatility is crucial for risk management. In this paper, we investigate CryptoQuant data (e.g. on-chain analytics, exchange and miner data) and whale-alert tweets, and explore their relationship to Bitcoin's next-day volatility, with a focus on extreme volatility spikes. We propose a deep learning Synthesizer Transformer model for forecasting volatility. Our results show that the model outperforms existing state-of-the-art models when forecasting extreme volatility spikes for Bitcoin using CryptoQuant data as well as whale-alert tweets. We analysed our model with the Captum XAI library to investigate which features are most important. We also backtested our prediction results with different baseline trading strategies and the results show that we are able to minimize drawdown while keeping steady profits. Our findings underscore that the proposed method is a useful tool for forecasting extreme volatility movements in the Bitcoin market.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.08281&r=big
  15. By: Bonnefon, Jean-François; Purcell, Zoe
    Abstract: Artificial Intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems —enabling people and organizations to form judgements of others at scale. However, it also poses significant ethical challenges and is, subsequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand the attraction or resistance that people have for AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and resist for this reason the introduction of moral scoring by AI.
    JEL: D91
    Date: 2022–11–25
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:127527&r=big
  16. By: Liz Wachs; Colin McMillan; Gale Boyd; Matt Doolin
    Abstract: Combustion, other emitting processes and fossil energy use outside the power sector have become urgent concerns given the United States’ commitment to achieving net-zero greenhouse gas emissions by 2050. Industry is an important end user of energy and relies on fossil fuels used directly for process heating and as feedstocks for a diverse range of applications. Fuel and energy use by industry is heterogeneous, meaning even a single product group can vary broadly in its production routes and associated energy use. In the United States, the North American Industry Classification System (NAICS) serves as the standard for statistical data collection and reporting. In turn, data based on NAICS are the foundation of most United States energy modeling. Thus, the effectiveness of NAICS at representing energy use is a limiting condition for current expansive planning to improve energy efficiency and alternatives to fossil fuels in industry. Facility-level data could be used to build more detail into heterogeneous sectors and thus supplement data from Bureau of the Census and U.S. Energy Information Administration reporting at NAICS code levels, but such data are scarce. This work explores alternative classification schemes for industry based on energy use characteristics and validates an approach to estimate facility-level energy use from publicly available greenhouse gas emissions data from the U.S. Environmental Protection Agency (EPA). The approaches in this study can facilitate understanding of current, as well as possible future, energy demand. First, current approaches to the construction of industrial taxonomies are summarized along with their usefulness for industrial energy modeling. Unsupervised machine learning techniques are then used to detect clusters in data reported from the U.S. Department of Energy’s Industrial Assessment Center program. 
Clusters of Industrial Assessment Center data show similar levels of correlation between energy use and explanatory variables as three-digit NAICS codes. Interestingly, the clusters each include a large cross section of NAICS codes, which lends additional support to the idea that NAICS may not be particularly suited for correlation between energy use and the variables studied. Fewer clusters are needed for the same level of correlation as shown in NAICS codes. Initial assessment shows a reasonable level of separation using support vector machines with higher than 80% accuracy, so machine learning approaches may be promising for further analysis. The IAC data are focused on small and medium-sized facilities and are biased toward higher energy users for a given facility type. Cladistics, an approach for classification developed in biology, is adapted to energy and process characteristics of industries. Cladistics applied to industrial systems seeks to understand the progression of organizations and technology as a type of evolution, wherein traits are inherited from previous systems but evolve due to the emergence of inventions and variations and a selection process driven by adaptation to pressures and favorable outcomes. A cladogram is presented for evolutionary directions in the iron and steel sector. Cladograms are a promising tool for constructing scenarios and summarizing directions of sectoral innovation. The cladogram of iron and steel is based on the drivers of energy use in the sector. Phylogenetic inference is similar to machine learning approaches as it is based on a machine-led search of the solution space, thereby avoiding some of the subjectivity of other classification systems. Our prototype approach for constructing an industry cladogram is based on process characteristics according to the innovation framework derived from Schumpeter to capture evolution in a given sector. 
The resulting cladogram represents a snapshot in time based on detailed study of process characteristics. This work could be an important tool for the design of scenarios for more detailed modeling. Cladograms reveal groupings of emerging or dominant processes and their implications in a way that may be helpful for policymakers and entrepreneurs, allowing them to see the larger picture, other good ideas, or competitors. Constructing a cladogram could be a good first step in the analysis of many industries (e.g. nitrogenous fertilizer production, ethyl alcohol manufacturing), to understand their heterogeneity, emerging trends, and coherent groupings of related innovations. Finally, validation is performed for facility-level energy estimates from the EPA Greenhouse Gas Reporting Program. Facility-level data availability continues to be a major challenge for industrial modeling. The method outlined by McMillan et al. (2016) and McMillan and Ruth (2019) allows estimation of facility-level energy use based on mandatory greenhouse gas reporting. The validation provided here is an important step toward further use of these data for industrial energy modeling.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:cen:wpaper:22-49&r=big
  17. By: Rogelio A. Mancisidor
    Abstract: This research introduces for the first time the concept of multimodal learning in bankruptcy prediction models. We use the Conditional Multimodal Discriminative (CMMD) model to learn multimodal representations that embed information from accounting, market, and textual modalities. The CMMD model needs a sample with all data modalities for model training. At test time, the CMMD model only needs access to accounting and market modalities to generate multimodal representations, which are further used to make bankruptcy predictions. This fact makes the use of bankruptcy prediction models using textual data realistic and possible, since accounting and market data are available for all companies unlike textual data. The empirical results in this research show that the classification performance of our proposed methodology is superior compared to that of a large number of traditional classifier models. We also show that our proposed methodology solves the limitation of previous bankruptcy models using textual data, as they can only make predictions for a small proportion of companies. Finally, based on multimodal representations, we introduce an index that is able to capture the uncertainty of the financial situation of companies during periods of financial distress.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.08405&r=big
  18. By: Matteo Sostero (European Commission – JRC); Songül Tolan (European Commission – JRC)
    Abstract: The digital transition of the economy is widely expected to change the nature of work. This may happen both by creating new digital job profiles and by digitising existing jobs. We track trends in demand for digital skills across occupations, using data from over 60 million online job advertisements in the United Kingdom over 2012-2020, the longest-running such data source in Europe. Although online job advertisements tend to understate the prevalence of basic digital competence (like computer literacy or office software) compared to representative surveys, they are particularly precise in tracking skills related to emerging digital technologies. We classify over 13,000 different skills required by employers in the data into clusters, through a community-detection algorithm based on the co-occurrence of skills in job advertisements. We identify several clusters that relate to advanced digital skills in emerging domains. We also find that digital skills are at the core of some “non-digital” domains, like the administrative and clerical cluster. Advanced digital skills also pay a notable wage premium: skills in the AI & Big Data cluster are associated with about 10.8% higher offered wages, compared to similar advertisements. For skills in the Advanced ICT cluster, the wage premium is about 15.9%, and for ICT Support the premium is about 6.3%. Overall, online job advertisements provide a unique view into the process of competence definition of emerging skill profiles.
    Keywords: Digital Transformation, Future of Work, Digital Skills, Artificial Intelligence
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:ipt:laedte:202207&r=big
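    [Editor's note] The co-occurrence input to the community-detection step described above can be sketched simply: treat each advertisement as a set of skills and count how often pairs appear together. The toy advertisements below are invented, and a real pipeline would feed these counts into a graph-clustering algorithm:

```python
from collections import Counter
from itertools import combinations

# Each (hypothetical) job ad is a set of required skills.
ads = [
    {"python", "machine learning", "sql"},
    {"python", "machine learning", "statistics"},
    {"excel", "bookkeeping"},
    {"excel", "bookkeeping", "sql"},
]

# Count co-occurring skill pairs; keys are sorted so (a, b) == (b, a).
cooccurrence = Counter()
for ad in ads:
    for a, b in combinations(sorted(ad), 2):
        cooccurrence[(a, b)] += 1

# Frequent pairs hint at the clusters community detection would
# recover at the scale of millions of advertisements.
print(cooccurrence.most_common(3))
```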
  19. By: Krämer, Jan; Shekhar, Shiva; Hofmann, Janina
    Abstract: Algorithmic learning gives rise to data-driven network effects, which allow a dominant platform to reinforce its dominant market position. Data-driven network effects can also spill over to related markets and thereby allow a platform to leverage its dominant position. This has led policymakers to propose data siloing and mandated data sharing remedies for dominant data-driven platforms in order to keep digital markets open and contestable. While data siloing seeks to prevent the spillover of data-driven network effects generated by algorithmic learning to other markets, data sharing seeks to share this externality with rival firms. Using a game-theoretic model, we investigate the impacts of both types of regulation. Our results bear important policy implications, as we demonstrate that data siloing and data sharing are potentially harmful remedies, which can reduce the innovation incentives of the regulated platform and can lead to overall lower consumer surplus and total welfare.
    Keywords: Data-driven network effects, algorithmic learning, regulation, data sharing, data siloing
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:itse22:265645&r=big
  20. By: Wesley Mendes-da-Silva (São Paulo School of Business Administration of the Fundação Getulio Vargas); Israel José dos Santos Felipe (Federal University of Ouro Preto/Brazil and NIPE/Portugal); Cristiana Cerqueira Leal (School of Economics and Management & NIPE – Center for Research in Economics and Management, University of Minho); Marcelo Otone Aguiar (Federal University of Espirito Santo)
    Abstract: We study whether the tone of the daily news in mass media, used as a proxy for market sentiment, affects the typical daily pledge amount in reward crowdfunding campaigns. Based on unique data from over 350,000 pledges in reward crowdfunding campaigns in over 2,600 cities in Brazil, we find that market sentiment affects the willingness of backers to make larger pledges. Our unprecedented results reveal that good news induces pledges of larger amounts. The effect of tone on pledge amounts is inhibited by the geographic distance between backer and entrepreneur, and reinforced by income inequality in the pledger’s city.
    Keywords: Natural Language Processing, Crowdfunding, Media, Investor sentiment
    JEL: L26 G32 G41 O31 C41 I31
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:nip:nipewp:2/2022&r=big
  21. By: Chong Mo; Song Li; Geoffrey K. F. Tso; Jiandong Zhou; Yiyan Qi; Mingjie Zhu
    Abstract: Graph convolutional networks (GCNs) are a class of artificial neural networks for processing data that can be represented as graphs. Since financial transactions can naturally be constructed as graphs, GCNs are widely applied in the financial industry, especially for financial fraud detection. In this paper, we focus on fraud detection in cryptocurrency trust networks. Most works in the literature focus on static networks; in this study, we instead consider the evolving nature of cryptocurrency networks and use local structural information, as well as balance theory, to guide the training process. More specifically, we compute motif matrices to capture the local topological information and then use them in the GCN aggregation process. The generated embedding at each snapshot is a weighted average of embeddings within a time window, where the weights are learnable parameters. Since the trust network carries a sign on each edge, balance theory is used to guide the training process. Experimental results on the bitcoin-alpha and bitcoin-otc datasets show that the proposed model outperforms those in the literature.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.13123&r=big
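    The motif-based aggregation described in the abstract above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it uses a single triangle-motif matrix and a one-layer GCN-style update, and it ignores the signed edges and temporal weighting; the function names and the toy graph are illustrative.

```python
import numpy as np

def triangle_motif_matrix(A):
    """M[i, j] = number of triangles that edge (i, j) participates in.
    For a symmetric 0/1 adjacency A, this is (A @ A) * A."""
    return (A @ A) * A

def motif_weighted_gcn_layer(A, X, W):
    """One GCN-style aggregation in which edge weights are boosted by
    local triangle counts (a simple stand-in for motif-guided training)."""
    M = triangle_motif_matrix(A)
    A_hat = A + M + np.eye(A.shape[0])  # adjacency + motif weights + self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)  # ReLU

# Toy graph: 4 nodes, one triangle (0-1-2) plus a pendant node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)  # one-hot node features
W = np.random.default_rng(0).normal(size=(4, 2))
H = motif_weighted_gcn_layer(A, X, W)
print(H.shape)  # (4, 2)
```

    Edges inside the triangle (e.g. 0-1) get extra weight from the motif matrix, while the pendant edge 2-3 does not, so structurally embedded edges contribute more to each node's embedding.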
  22. By: Leonardo Perotti; Lech A. Grzelak
    Abstract: We propose a new, data-driven approach for efficient pricing of fixed- and float-strike discrete arithmetic Asian and Lookback options when the underlying process is driven by the Heston model dynamics. The method proposed in this article extends our previous work, where the problem of sampling from time-integrated stochastic bridges was addressed. The model relies on the Seven-League scheme, in which artificial neural networks are employed to "learn" the distribution of the random variable of interest using stochastic collocation points. The method results in a robust procedure for Monte Carlo pricing. Furthermore, semi-analytic formulae for option pricing are provided in a simplified, yet general, framework. The model guarantees high accuracy and reduces computational time by up to a factor of thousands compared with classical Monte Carlo pricing schemes.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.03638&r=big
  23. By: Michael P. Leung; Pantelis Loupos
    Abstract: This paper studies nonparametric estimation of treatment and spillover effects using observational data from a single large network. We consider a model of network interference that allows for peer influence in selection into treatment or outcomes but requires influence to decay with network distance. In this setting, the network and covariates of all units can be potential sources of confounding, in contrast to existing work that assumes confounding is limited to a known, low-dimensional function of these objects. To estimate the first-stage nuisance functions of the doubly robust estimator, we propose to use graph neural networks, which are designed to approximate functions of graph-structured inputs. Under our model of interference, we derive primitive conditions for a network analog of approximate sparsity, which provides justification for the use of shallow architectures.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.07823&r=big
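    The doubly robust (AIPW) construction referenced in the abstract above combines an outcome model and a propensity model, and remains consistent if either one is correctly specified. The following is a minimal i.i.d. sketch in pure NumPy, with a Newton-fitted logistic regression and OLS standing in for the paper's graph-neural-network nuisance functions; it ignores network interference entirely, and the data-generating process is hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, d, iters=25):
    """Logistic regression fitted by Newton's method (intercept included);
    returns the fitted propensity scores."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = sigmoid(Xb @ beta)
        grad = Xb.T @ (d - p)
        hess = Xb.T @ (Xb * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return sigmoid(Xb @ beta)

def fit_linear(X_sub, y_sub, X_all):
    """OLS outcome model fitted on a subsample, predicted on all units."""
    Xb = np.column_stack([np.ones(len(X_sub)), X_sub])
    beta, *_ = np.linalg.lstsq(Xb, y_sub, rcond=None)
    return np.column_stack([np.ones(len(X_all)), X_all]) @ beta

def doubly_robust_ate(X, d, y):
    """AIPW estimator of the average treatment effect: consistent if either
    the propensity model or the outcome model is correctly specified."""
    ps = fit_logistic(X, d)
    mu1 = fit_linear(X[d == 1], y[d == 1], X)  # E[Y | X, D=1]
    mu0 = fit_linear(X[d == 0], y[d == 0], X)  # E[Y | X, D=0]
    return np.mean(mu1 - mu0
                   + d * (y - mu1) / ps
                   - (1 - d) * (y - mu0) / (1 - ps))

# Simulated confounded data with a true treatment effect of 2.0.
rng = np.random.default_rng(42)
n = 5000
X = rng.normal(size=(n, 3))
d = rng.binomial(1, sigmoid(X[:, 0]))
y = 2.0 * d + X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n)
ate = doubly_robust_ate(X, d, y)
print(round(ate, 2))
```

    The estimate recovers the true effect despite X[:, 0] confounding both treatment and outcome; the paper's contribution is to replace these simple nuisance models with graph neural networks under network interference.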
  24. By: Karymshakov, Kamalbek (Asian Development Bank Institute); Azhgaliyeva, Dina (Asian Development Bank Institute); Mishra, Ranjeeta (Asian Development Bank Institute); Aseinov, Dastan (Asian Development Bank Institute)
    Abstract: We examine economic activity, measured with firm performance indicators, using changes in the intensity of night-time light in four Central Asia Regional Economic Cooperation (CAREC) economies: Azerbaijan, Georgia, Kazakhstan, and Mongolia. The empirical analysis is based on the World Bank Enterprise Survey data for 2019 and a follow-up survey conducted during the coronavirus disease (COVID-19) pandemic. The enterprise survey dataset was enhanced with data on night-time light intensity from Google Earth and the strictness of “lockdown-style” policies. Using a probit regression model, we investigate the impact of COVID-19 on firm performance and night-time light in CAREC countries. Firm performance is measured using four variables: decreases in sales, demand, export share, and working hours. Our results show that, as night-time light increases, the likelihood of performance deterioration falls. Larger firms are more likely to maintain their performance than smaller firms. Sales in the manufacturing, clothing, and services sectors are more likely to decline than those in the food sector. Accordingly, the results point to a significant decline in the performance of firms operating in the service sector compared with those in the food sector during the pandemic.
    Keywords: Central Asia; COVID-19; big data; firm performance; gender; SMEs
    JEL: C13 C25 C55 L25
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:ris:adbiwp:1332&r=big
  25. By: Niko Hauzenberger; Florian Huber; Karin Klieber; Massimiliano Marcellino
    Abstract: We develop Bayesian neural networks (BNNs) that can model generic nonlinearities and time variation in (possibly large sets of) macroeconomic and financial variables. From a methodological point of view, we allow for a general specification of networks that can be applied to either dense or sparse datasets and that combines various activation functions, a possibly very large number of neurons, and stochastic volatility (SV) for the error term. From a computational point of view, we develop fast and efficient estimation algorithms for the general BNNs we introduce. From an empirical point of view, we show, both with simulated data and with a set of common macro and financial applications, that our BNNs can be of practical use, particularly so for observations in the tails of the cross-sectional or time-series distributions of the target variables.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.04752&r=big
  26. By: Naudé, Wim (RWTH Aachen University)
    Abstract: This paper contributes to the economics of AI by exploring three topics neglected by economists: (i) the notion of a Singularity (and Singleton); (ii) the existential risks that AI may pose to humanity, including that from an extraterrestrial AI in a Dark Forest universe; and (iii) the relevance of economics' Mythical Agent (homo economicus) for the design of value-aligned AI systems. From the perspective of expected utility maximization, which the fields of AI and economics share, these three topics are interrelated. Exploring them reveals several future avenues for economic research on AI and identifies areas where economic theory may benefit from a greater understanding of AI. Two further conclusions emerge: first, that a Singularity and existential risk from AI are still science fiction, which, however, should not preclude economics from bearing on the issues (it does not deter philosophers); and second, that economists should weigh in more on existential risk rather than let the topic lose credibility to the Pascalian fanaticism of longtermism.
    Keywords: technology, artificial intelligence, economics, growth, existential risk, longtermism, Fermi Paradox, Grabby Aliens
    JEL: O40 O33 D01 D64
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp15713&r=big
  27. By: Wei-Chang Yeh; Yu-Hsin Hsieh; Chia-Ling Huang
    Abstract: Owing to the continuous development of networking and computing technology, trading methods and strategies in financial markets have gradually shifted from traditional floor trading to electronic remote trading, and even to automated online trading executed by pre-programmed computer systems. Quantitative trading, whose main purpose is to formalize investment decisions into a fixed, quantifiable operating logic that eliminates emotional interference and the influence of subjective judgment, and to apply this logic to financial market activities in order to obtain excess profits above average returns, has attracted considerable attention in financial markets. Developing self-adjusting algorithms for automated trading has thus become a priority for academic research and financial practice. This work develops and studies a new flexible grid trading model that combines the Simplified Swarm Optimization (SSO) algorithm, which optimizes parameters for various market situations used as input values, with a fully connected neural network (FNN) and a Long Short-Term Memory (LSTM) model that train a quantitative trading model to automatically calculate and adjust the optimal trading parameters given the current market situation. The proposed model reduces investors' effort in the trading market, achieves a superior investment return rate and model robustness, and properly controls the balance between risk and return.
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.12839&r=big
  28. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Pixiong Chen (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: This paper proposes two data-driven econometric approaches to constructing online investor sentiment indices based on internet search queries, built by the partial least squares and LASSO methods, respectively. Examining the relationship between investor sentiment and the stock risk premium at the overall market level, we find that these sentiment indices have predictive power both in and out of sample, and that the out-of-sample predictability of the proposed online investor sentiment indices is robust across horizons. Moreover, our investor sentiment indices can also predict the returns of cross-sectional characteristic portfolios. This sentiment-based predictability has economic value, since it improves portfolio performance, in terms of certainty-equivalent return gain and Sharpe ratio, for investors who conduct optimal asset allocation.
    Keywords: Asset return; Data-driven method; Online investor sentiment; Partial least squares; Portfolio choice.
    JEL: C22 C53 G11 G17
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202216&r=big
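    The construction of the two indices described in the abstract above can be sketched as follows. This is only an illustration on simulated data, not the paper's procedure: a one-component PLS index that weights standardized search-query volumes by their covariance with returns, and a LASSO index fitted by cyclic coordinate descent; the tuning values and the data-generating process (two truly predictive queries out of eight) are hypothetical.

```python
import numpy as np

def pls_index(Q, r):
    """One-component partial least squares: weight each standardized search
    query by its covariance with returns, then aggregate into one index."""
    Qs = (Q - Q.mean(0)) / Q.std(0)
    w = Qs.T @ r
    w /= np.linalg.norm(w)
    return Qs @ w

def lasso_index(Q, r, lam=0.05, iters=200):
    """LASSO weights via cyclic coordinate descent (soft-thresholding);
    the sentiment index is the fitted combination of selected queries."""
    Qs = (Q - Q.mean(0)) / Q.std(0)
    n, p = Qs.shape
    beta = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            resid = r - Qs @ beta + Qs[:, j] * beta[j]  # partial residual
            rho = Qs[:, j] @ resid / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / (Qs[:, j] @ Qs[:, j] / n)
    return Qs @ beta, beta

# Simulated monthly data: 8 query volumes, only the first 2 predictive.
rng = np.random.default_rng(7)
T, p = 240, 8
Q = rng.normal(size=(T, p))
r = 0.5 * Q[:, 0] - 0.4 * Q[:, 1] + rng.normal(scale=0.5, size=T)
s_pls = pls_index(Q, r)
s_lasso, beta = lasso_index(Q, r)
print(np.corrcoef(s_pls, r)[0, 1], beta.round(2))
```

    The PLS index loads heavily on the two informative queries because their covariance with returns dominates the weights, while the LASSO penalty shrinks the six irrelevant query coefficients toward zero, which is the dimension-reduction role both methods play in the paper.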

This nep-big issue is ©2022 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.