nep-big New Economics Papers
on Big Data
Issue of 2021‒06‒21
29 papers chosen by
Tom Coupé
University of Canterbury

  1. Artificial intelligence masters’ programmes - An analysis of curricula building blocks By Juan Manuel Dodero
  2. Predicting French SME Failures: New Evidence from Machine Learning Techniques By Christophe Schalck; Meryem Schalck
  3. How Using Machine Learning Classification as a Variable in Regression Leads to Attenuation Bias and What to Do About It By Zhang, Han
  4. Signatured Deep Fictitious Play for Mean Field Games with Common Noise By Ming Min; Ruimeng Hu
  5. Modeling and forecasting production indices using artificial neural networks, taking into account intersectoral relationships and comparing the predictive qualities of various architectures By Kaukin Andrey; Kosarev Vladimir
  6. A Simple and General Debiased Machine Learning Theorem with Finite Sample Guarantees By Victor Chernozhukov; Whitney K. Newey; Rahul Singh
  7. Demand Estimation Using Managerial Responses to Automated Price Recommendations By Daniel Garcia; Juha Tolvanen; Alexander K. Wagner
  8. Online Trading Models in the Forex Market Considering Transaction Costs By Koya Ishikawa; Kazuhide Nakata
  9. Urban economics in a historical perspective: Recovering data with machine learning By Pierre-Philippe Combes; Laurent Gobillon; Yanos Zylberberg
  10. Mapping the NFT revolution: market trends, trade networks and visual features By Matthieu Nadini; Laura Alessandretti; Flavio Di Giacinto; Mauro Martino; Luca Maria Aiello; Andrea Baronchelli
  11. Classification of monetary and fiscal dominance regimes using machine learning techniques By Hinterlang, Natascha; Hollmayr, Josef
  12. Price graphs: Utilizing the structural information of financial time series for stock prediction By Junran Wu; Ke Xu; Xueyuan Chen; Shangzhe Li; Jichang Zhao
  13. Employment and Productivity Dynamics and Patent Applications Related to the Fourth Industrial Revolution (Japanese) By IKEUCHI Kenta
  14. Reservoir optimization and Machine Learning methods By Xavier Warin
  15. Economic Nowcasting with Long Short-Term Memory Artificial Neural Networks (LSTM) By Daniel Hopp
  16. Random feature neural networks learn Black-Scholes type PDEs without curse of dimensionality By Lukas Gonon
  17. Deep Learning Statistical Arbitrage By Jorge Guijarro-Ordonez; Markus Pelger; Greg Zanotti
  18. Certification Systems for Machine Learning: Lessons from Sustainability By Matus, Kira; Veale, Michael
  19. Deep reinforcement learning on a multi-asset environment for trading By Ali Hirsa; Joerg Osterrieder; Branka Hadji-Misheva; Jan-Alexander Posth
  20. Slow Momentum with Fast Reversion: A Trading Strategy Using Deep Learning and Changepoint Detection By Kieran Wood; Stephen Roberts; Stefan Zohren
  21. Regime Changes and Fiscal Sustainability in Kenya with Comparative Nonlinear Granger Causalities Across East-African Countries By William Nganga Irungu; Julien Chevallier; Simon Wagura Ndiritu
  22. Estimating air quality co-benefits of energy transition using machine learning By Da Zhang; Qingyi Wang; Shaojie Song; Simiao Chen; Mingwei Li; Lu Shen; Siqi Zheng; Bofeng Cai; Shenhao Wang
  23. Unequally Distributed Corona Infections across the Federal States (German) By Hübler, Olaf
  24. State of implementation of the OECD AI Principles: Insights from national AI policies By OECD
  25. Night lights in determining and assessing socio-economic processes By Trusov Alexandr; Botvich Dmitry; Maruev Sergey
  26. Host type and pricing on Airbnb: Seasonality and perceived market power By Georges Casamatta; Sauveur Giannoni; Daniel Brunstein; Johan Jouve
  27. ECB Communication: What Is It Telling Us? By Rokas Kaminskas; Modestas Stukas; Linas Jurksas
  28. Remote sensing of urban cyclone impact and resilience: Evidence from Idai By Peter Fisker; David Malmgren-Hansen; Thomas Pave Sohnesen
  29. Forced to Play Too Many Matches? A Deep Learning Assessment of Crowded Schedule By Stefano Cabras; Marco Delogu; J.D. Tena

  1. By: Juan Manuel Dodero (School of Engineering - University of Cadiz)
    Abstract: This report identifies building blocks of master programs on Artificial Intelligence (AI), on the basis of the existing programs available in the European Union. These building blocks provide a first analysis that requires acceptance and sharing by the AI community. The proposal analyses, first, the knowledge contents and, second, the educational competences declared as learning outcomes of 45 post-graduate academic master's programs related to AI from universities in 13 European countries (Belgium, Denmark, Finland, France, Germany, Italy, Ireland, Netherlands, Portugal, Spain, and Sweden in the EU; plus Switzerland and the United Kingdom). As a closely related and relevant part of Informatics and Computer Science, major AI-related curricula on data science have also been taken into consideration for the analysis. The definition of a specific AI curriculum besides data science curricula is motivated by the necessity of a deeper understanding of topics and skills of the former that build up the foundations of strong AI versus narrow AI, which is the general focus of the latter. The body of knowledge with the proposed building blocks for AI consists of a number of knowledge areas, which are classified as Essential, Core, General and Applied. First, the AI Essentials cover topics and competences from foundational disciplines that are fundamental to AI. Second, topics and competences showing a close interrelationship and specific to AI are classified in a set of AI Core domain-specific areas, plus one AI General area for non-domain-specific knowledge. Third, AI Applied areas are built on top of topics and competences required to develop AI applications and services from a more philosophical and ethical perspective. All the knowledge areas are refined into knowledge units and topics for the analysis.
As a result of studying core AI knowledge topics from the master programs sample, machine learning is observed to prevail, followed in order by: computer vision; human-computer interaction; knowledge representation and reasoning; natural language processing; planning, search and optimisation; and robotics and intelligent automation. A significant number of the master programs analysed focus heavily on machine learning topics, despite being initially classified in another domain. It is noteworthy that machine learning topics, along with selected topics on knowledge representation, depict a high degree of commonality in AI and data science programs. Finally, the competence-based analysis of the sample master programs' learning outcomes, based on Bloom's cognitive levels, shows that the understanding and creating cognitive levels are dominant, while analysing and evaluating are the scarcest. Another relevant outcome is that master programs on AI under the disciplinary lenses of engineering studies show a notable scarcity of competences related to informatics or computing, which are fundamental to AI.
    Keywords: artificial intelligence, competence-based curriculum, master program, higher education, digital skills
    Date: 2021–05
  2. By: Christophe Schalck; Meryem Schalck
    Abstract: The aim of this study is to provide new insights into French small and medium-sized enterprise (SME) failure prediction using a unique database of French SMEs over the 2012–2018 period that includes both financial and nonfinancial variables. We also include text variables related to the type of activity. We compare the predictive performance of three estimation methods: a dynamic Probit model, logistic Lasso regression, and the XGBoost algorithm. The results show that the XGBoost algorithm has the highest performance in predicting business failure from a broad dataset. We use SHAP values to interpret the results and identify the main factors of failure. Our analysis shows that both financial and nonfinancial variables are failure factors. Our results confirm the role of financial variables in predicting business failure, while self-employment is the factor that most strongly increases the probability of failure. The size of the SME is also a business failure factor. Our results show that a number of nonfinancial variables, such as localization and economic conditions, are drivers of SME failure. The results also show that certain activities are associated with lower predicted failure probability, while others are associated with higher predicted failure probability.
    Keywords: SME; failure prediction; Machine learning; XGBoost; SHAP values
    JEL: G33 C41 C46
    Date: 2021–01–01
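The XGBoost-plus-SHAP pipeline this entry describes can be sketched in a few lines. The snippet below is a hypothetical stand-in, not the authors' code: scikit-learn's gradient boosting and impurity-based feature importances substitute for XGBoost and SHAP values, and the firm features (leverage, liquidity, a self-employment flag, size) and their effects are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
# hypothetical firm features (not the paper's dataset)
leverage = rng.normal(0.5, 0.2, n)
liquidity = rng.normal(1.0, 0.3, n)
self_emp = rng.integers(0, 2, n)
size = rng.normal(10.0, 3.0, n)

# failure risk rises with leverage and self-employment, falls with liquidity
logit = 3.0 * leverage - 2.0 * liquidity + 1.5 * self_emp - 0.05 * size - 1.0
failed = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([leverage, liquidity, self_emp, size])
model = GradientBoostingClassifier(random_state=0).fit(X, failed)
acc = model.score(X, failed)              # in-sample accuracy
importances = model.feature_importances_  # rough analogue of a SHAP ranking
```

With this simulated signal, the fitted importances rank leverage well above firm size, mirroring the kind of factor ranking the paper extracts with SHAP values.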
  3. By: Zhang, Han (The Hong Kong University of Science and Technology)
    Abstract: Social scientists have increasingly been applying machine learning algorithms to "big data" to measure theoretical concepts they could not easily measure before, and then using these machine-predicted variables in a regression. This article first demonstrates that directly inserting binary predictions (i.e., classifications) without regard for prediction error will generally lead to attenuation bias in either slope coefficients or marginal effect estimates. We then propose several estimators to obtain consistent estimates of coefficients. The estimators require the existence of validation data, for which researchers have both the machine prediction and the true values. Such validation data are either automatically available during algorithm training or can be easily obtained. Monte Carlo simulations demonstrate the effectiveness of the proposed estimators. Finally, we summarize the usage pattern of machine learning predictions in 18 recent publications in top social science journals, apply our proposed estimators to two of them, and offer some practical recommendations.
    Date: 2021–05–29
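The attenuation mechanism this entry analyses is easy to reproduce: regressing an outcome on an error-ridden binary prediction shrinks the slope toward zero, and with known misclassification rates the bias factor can be undone. A minimal numpy simulation (a textbook known-error-rate correction, not the paper's proposed validation-data estimators):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.integers(0, 2, n)                  # true binary variable
y = 2.0 * x + rng.normal(0, 1, n)          # true slope = 2

# hypothetical classifier flipping 10% of labels in each class
x_hat = np.where(rng.random(n) < 0.10, 1 - x, x).astype(float)

def ols_slope(x, y):
    xc = x - x.mean()
    return (xc @ y) / (xc @ xc)

naive = ols_slope(x_hat, y)                # attenuated toward zero (about 1.6)
corrected = naive / (1 - 0.10 - 0.10)      # undo the known-error bias factor
```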
  4. By: Ming Min; Ruimeng Hu
    Abstract: Existing deep learning methods for solving mean-field games (MFGs) with common noise fix the sampled common noise paths and then solve the corresponding MFGs. This leads to a nested-loop structure with millions of simulations of common noise paths in order to produce accurate solutions, which results in prohibitive computational cost and limits the applications to a large extent. In this paper, based on the rough path theory, we propose a novel single-loop algorithm, named signatured deep fictitious play, by which we can work with the unfixed common noise setup to avoid the nested-loop structure and reduce the computational complexity significantly. The proposed algorithm can accurately capture the effect of common uncertainty changes on mean-field equilibria without further training of neural networks, as previously needed in the existing machine learning algorithms. The efficiency is supported by three applications, including linear-quadratic MFGs, mean-field portfolio game, and mean-field game of optimal consumption and investment. Overall, we provide a new point of view from the rough path theory to solve MFGs with common noise with significantly improved efficiency and an extensive range of applications. In addition, we report the first deep learning work to deal with extended MFGs (a mean-field interaction via both the states and controls) with common noise.
    Date: 2021–06
  5. By: Kaukin Andrey (Russian Presidential Academy of National Economy and Public Administration); Kosarev Vladimir (Russian Presidential Academy of National Economy and Public Administration)
    Abstract: This paper analyzes the possibilities of using convolutional and recurrent neural networks to predict the indices of industrial production of the Russian economy. Since the indices are asymmetric in periods of growth and decline, it was hypothesized that nonlinear methods would improve the quality of the forecast relative to linear ones.
    Keywords: convolutional neural networks, recurrent neural networks
    Date: 2021–01
  6. By: Victor Chernozhukov; Whitney K. Newey; Rahul Singh
    Abstract: Debiased machine learning is a meta algorithm based on bias correction and sample splitting to calculate confidence intervals for functionals (i.e. scalar summaries) of machine learning algorithms. For example, an analyst may desire the confidence interval for a treatment effect estimated with a neural network. We provide a nonasymptotic debiased machine learning theorem that encompasses any global or local functional of any machine learning algorithm that satisfies a few simple, interpretable conditions. Formally, we prove consistency, Gaussian approximation, and semiparametric efficiency by finite sample arguments. The rate of convergence is root-n for global functionals, and it degrades gracefully for local functionals. Our results culminate in a simple set of conditions that an analyst can use to translate modern learning theory rates into traditional statistical inference. The conditions reveal a new double robustness property for ill-posed inverse problems.
    Date: 2021–05
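The recipe behind debiased machine learning — cross-fit the nuisance functions, then run a residual-on-residual regression for the target functional — can be sketched for a partially linear model. This illustrates the meta algorithm only, not the paper's finite-sample theory; random forests stand in for an arbitrary learner and all functional forms are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 4000
x = rng.uniform(-2, 2, (n, 1))
d = np.cos(x[:, 0]) + rng.normal(0, 1, n)            # treatment with nuisance m(x)
theta = 1.5                                          # target coefficient
y = theta * d + np.sin(x[:, 0]) + rng.normal(0, 1, n)

def learner():
    return RandomForestRegressor(min_samples_leaf=25, random_state=0)

# two-fold cross-fitting: nuisances are fit on one half, residualised on the other
half = np.arange(n) < n // 2
v_res, y_res = np.empty(n), np.empty(n)
for train, test in [(half, ~half), (~half, half)]:
    v_res[test] = d[test] - learner().fit(x[train], d[train]).predict(x[test])
    y_res[test] = y[test] - learner().fit(x[train], y[train]).predict(x[test])

theta_hat = (v_res @ y_res) / (v_res @ v_res)        # residual-on-residual OLS
```

The cross-fitting step is what removes the own-observation overfitting bias; dropping it and fitting the nuisances on the full sample would reintroduce it.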
  7. By: Daniel Garcia; Juha Tolvanen; Alexander K. Wagner
    Abstract: We provide a new framework to identify demand elasticities in markets where managers rely on algorithmic recommendations for price setting, and apply it to a dataset containing bookings for a sample of mid-sized hotels in Europe. Using non-binding algorithmic price recommendations and observed delay in price adjustments by decision makers, we demonstrate that a control-function approach, combined with state-of-the-art model selection techniques, can be used to isolate exogenous price variation and identify demand elasticities across hotel room types and over time. We confirm these elasticity estimates with a difference-in-differences approach that leverages the same delays in price adjustments by decision makers. However, the difference-in-differences estimates are noisier and only yield consistent estimates if data are pooled across hotels. We then apply our control-function approach to two classic questions in the dynamic pricing literature: the evolution of price elasticity of demand over time as well as the effects of a transitory price change on future demand due to the presence of strategic buyers. Finally, we discuss how our empirical framework can be applied directly to other decision-making situations in which recommendation systems are used.
    Keywords: big data, causal inference, machine learning, revenue management, price recommendations, demand estimation
    JEL: L13 L83 D12
    Date: 2021
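The identification idea — use the non-binding recommendation as an instrument and absorb the endogenous part of price with a first-stage residual — can be sketched with a control function on simulated data. All numbers here (the elasticity of -1.8, the manager's response rule) are made up for illustration and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
r = rng.normal(4.0, 0.5, n)                          # algorithmic price recommendation
u = rng.normal(0, 1, n)                              # demand shock seen by the manager
log_p = 0.8 * r + 0.3 * u + rng.normal(0, 0.2, n)    # manager tilts price with demand
beta = -1.8                                          # true price elasticity
log_q = 5.0 + beta * log_p + u + rng.normal(0, 0.5, n)

def ols(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(log_p, log_q)[1]                         # biased: price co-moves with demand
v_hat = log_p - np.column_stack([np.ones(n), r]) @ ols(r, log_p)
cf = ols(np.column_stack([log_p, v_hat]), log_q)[1]  # control-function estimate
```

The first-stage residual v_hat carries the demand-driven part of the price, so conditional on it the remaining price variation comes only from the recommendation, and the coefficient on log price recovers the elasticity.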
  8. By: Koya Ishikawa; Kazuhide Nakata
    Abstract: In recent years, a wide range of investment models have been created using artificial intelligence. Automatic trading by artificial intelligence can expand the range of trading methods, such as by conferring the ability to operate 24 hours a day and the ability to trade with high frequency. Automatic trading can also be expected to trade with more information than is available to humans if it can sufficiently consider past data. In this paper, we propose an investment agent based on a deep reinforcement learning model, which is an artificial intelligence model. The model considers the transaction costs involved in actual trading and creates a framework for trading over a long period of time so that it can make a large profit on a single trade. In doing so, it can maximize the profit while keeping transaction costs low. In addition, in consideration of actual operations, we use online learning so that the system can continue to learn by constantly updating the latest online data instead of learning with static data. This makes it possible to trade in non-stationary financial markets by always incorporating current market trend information.
    Date: 2021–06
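The cost-aware objective this entry describes — maximize profit while keeping transaction costs low — reduces to charging a proportional fee on every position change. A minimal sketch of such a net-of-cost reward function (illustrative only, not the paper's reinforcement learning agent):

```python
import numpy as np

def pnl_net_of_costs(prices, positions, cost=1e-4):
    """Cumulative return of a position series, charging a proportional cost
    on every change of position (including the initial entry)."""
    prices = np.asarray(prices, float)
    positions = np.asarray(positions, float)
    returns = np.diff(prices) / prices[:-1]       # simple one-period returns
    gross = (positions[:-1] * returns).sum()      # position held into each period
    turnover = np.abs(np.diff(positions, prepend=0.0)).sum()
    return gross - cost * turnover

# hold one unit for two periods, then go flat
flat_after_two = pnl_net_of_costs([100, 101, 102, 101], [1, 1, 0, 0], cost=0.0)
```

A strategy that flips position every period pays the cost term on every step, which is exactly the incentive the paper exploits by training the agent to hold trades longer.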
  9. By: Pierre-Philippe Combes (Institut d'Études Politiques [IEP] - Paris, CNRS - Centre National de la Recherche Scientifique); Laurent Gobillon (PSE - Paris School of Economics - ENPC - École des Ponts ParisTech - ENS Paris - École normale supérieure - Paris - PSL - Université Paris sciences et lettres - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique - EHESS - École des hautes études en sciences sociales - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Paris 1 Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - PSL - Université Paris sciences et lettres - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Yanos Zylberberg (University of Bristol [Bristol])
    Abstract: A recent literature has used a historical perspective to better understand fundamental questions of urban economics. However, a wide range of historical documents of exceptional quality remain underutilised: their use has been hampered by their original format or by the massive amount of information to be recovered. In this paper, we describe how and when the flexibility and predictive power of machine learning can help researchers exploit the potential of these historical documents. We first discuss how important questions of urban economics rely on the analysis of historical data sources and the challenges associated with transcription and harmonisation of such data. We then explain how machine learning approaches may address some of these challenges and we discuss possible applications.
    Keywords: urban economics,history,machine learning
    Date: 2021–05
  10. By: Matthieu Nadini; Laura Alessandretti; Flavio Di Giacinto; Mauro Martino; Luca Maria Aiello; Andrea Baronchelli
    Abstract: Non-Fungible Tokens (NFTs) are digital assets that represent objects like art, videos, in-game items and music. They are traded online, often with cryptocurrency, and they are generally encoded as smart contracts on a blockchain. Media and public attention towards NFTs exploded in 2021, when the NFT art market experienced record sales while celebrating new star artists. However, little is known about the overall structure and evolution of the NFT market. Here, we analyse data concerning 6.1 million trades of 4.7 million NFTs generating a total trading volume of 935 million US dollars. Our data are obtained primarily from the Ethereum and WAX blockchains and cover the period between June 23, 2017 and April 27, 2021. First, we characterize the statistical properties of the market. Second, we build the network of interactions and show that traders have bursts of activity followed by inactive periods, and typically specialize in NFTs associated with similar objects. Third, we cluster objects associated with NFTs according to their visual features and show that NFTs within the same category tend to be visually homogeneous. Finally, we investigate the predictability of NFT sales. We use simple machine learning algorithms and find that prices can be best predicted by the sale history of the NFT collection, but also by some features describing the properties of the associated object (e.g., visual features of digital images). We anticipate that our analysis will be of interest to both researchers and practitioners and will spark further research on NFT production, adoption and trading in different contexts.
    Date: 2021–06
  11. By: Hinterlang, Natascha; Hollmayr, Josef
    Abstract: This paper identifies U.S. monetary and fiscal dominance regimes using machine learning techniques. The algorithms are trained and verified by employing simulated data from Markov-switching DSGE models, before they classify regimes from 1968-2017 using actual U.S. data. All machine learning methods outperform a standard logistic regression concerning the simulated data. Among those the Boosted Ensemble Trees classifier yields the best results. We find clear evidence of fiscal dominance before Volcker. Monetary dominance is detected between 1984-1988, before a fiscally led regime turns up around the stock market crash lasting until 1994. Until the beginning of the new century, monetary dominance is established, while the more recent evidence following the financial crisis is mixed with a tendency towards fiscal dominance.
    Keywords: Monetary-fiscal interaction, machine learning, classification, Markov-switching DSGE
    JEL: C38 E31 E63
    Date: 2021
  12. By: Junran Wu; Ke Xu; Xueyuan Chen; Shangzhe Li; Jichang Zhao
    Abstract: Stock prediction, with the purpose of forecasting the future price trends of stocks, is crucial for maximizing profits from stock investments. While great research efforts have been devoted to exploiting deep neural networks for improved stock prediction, the existing studies still suffer from two major issues. First, the long-range dependencies in time series are not sufficiently captured. Second, the chaotic property of financial time series fundamentally lowers prediction performance. In this study, we propose a novel framework to address both issues regarding stock prediction. Specifically, in terms of transforming time series into complex networks, we convert market price series into graphs. Then, structural information, referring to associations among temporal points and the node weights, is extracted from the mapped graphs to resolve the problems regarding long-range dependencies and the chaotic property. We take graph embeddings to represent the associations among temporal points as the prediction model inputs. Node weights are used as a priori knowledge to enhance the learning of temporal attention. The effectiveness of our proposed framework is validated using real-world stock data, and our approach obtains the best performance among several state-of-the-art benchmarks. Moreover, in the conducted trading simulations, our framework further obtains the highest cumulative profits. Our results supplement the existing applications of complex network methods in the financial realm and provide insightful implications for investment applications regarding decision support in financial markets.
    Date: 2021–06
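One common way to "convert market price series into graphs", as this entry describes, is the natural visibility graph; whether this matches the authors' exact mapping is not specified here, but it shows the idea of turning temporal points into nodes and structural relations into edges.

```python
import numpy as np

def visibility_graph(series):
    """Natural visibility graph: nodes are time points; i and j are linked
    when the straight line between (i, y_i) and (j, y_j) clears every bar
    in between."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

prices = [3.0, 1.0, 2.0, 1.5, 4.0]
edges = visibility_graph(prices)
```

Long-range edges such as (0, 4) connect distant peaks directly, which is how a graph representation can expose the long-range dependencies that a raw sequence model struggles to capture.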
  13. By: IKEUCHI Kenta
    Abstract: In recent years, the development of new digital technologies such as artificial intelligence (AI) and the Internet of Things (IoT) and their industrial applications have been attracting attention. These technological developments are collectively called the "Fourth Industrial Revolution", which is set to bring about major changes in industrial structure. On the other hand, previous research using national/industry-level data has pointed out that progress in digitization widens the productivity gap between companies and reduces the dynamics of the market. Therefore, this research analyzes the relationship between the development of technologies related to the Fourth Industrial Revolution, such as artificial intelligence and IoT, and market dynamics using Japanese firm-level micro datasets. The patent data are combined with the Basic Survey of Japanese Business Structure and Activities, Census of Manufacture, Economic Census for Business Frame, Economic Census for Business Activity and Establishment and Enterprise Census of Japan to build firm-level panel data, to examine how research and development activities related to the Fourth Industrial Revolution are associated with the productivity and employment growth of business establishments and firms, and to discuss the policy implications. The results of this study show that the development of technologies related to the Fourth Industrial Revolution, such as AI and IoT, is associated with the dynamics of productivity and employment in firms. The development of AI-related technologies has particularly benefited large firms, with limited benefits to small and medium-sized firms.
    Date: 2021–03
  14. By: Xavier Warin
    Abstract: After showing the efficiency of feedforward networks for estimating controls in high dimension in the global optimization of some storage problems, we develop a modification of an algorithm based on a dynamic programming principle. We show that classical feedforward networks are not effective for estimating Bellman values in reservoir problems and we propose some neural networks giving far better results. Finally, we develop a new algorithm mixing LP resolution and conditional cuts calculated by neural networks to solve some stochastic linear problems.
    Date: 2021–06
  15. By: Daniel Hopp
    Abstract: Artificial neural networks (ANNs) have been the catalyst for numerous advances in a variety of fields and disciplines in recent years. Their impact on economics, however, has been comparatively muted. One type of ANN, the long short-term memory network (LSTM), is particularly well-suited to dealing with economic time series. Here, the architecture's performance and characteristics are evaluated in comparison with the dynamic factor model (DFM), currently a popular choice in the field of economic nowcasting. LSTMs are found to produce superior results to DFMs in the nowcasting of three separate variables: global merchandise export values and volumes, and global services exports. Further advantages include their ability to handle large numbers of input features in a variety of time frequencies. A disadvantage, common to all ANNs, is the inability to ascribe contributions of input features to model outputs. In order to facilitate continued applied research of the methodology by avoiding the need for any knowledge of deep-learning libraries, an accompanying Python library was developed using PyTorch.
    Date: 2021–06
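For readers unfamiliar with the architecture, the gating that makes LSTMs well-suited to economic time series fits in a few lines of numpy. This is a bare single-cell forward pass with random weights, purely illustrative and unrelated to the paper's accompanying PyTorch library.

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One LSTM step: W maps [h; x] to the four gate pre-activations."""
    z = W @ np.concatenate([h, x]) + b
    d = len(h)
    i, f, o = (1 / (1 + np.exp(-z[k * d:(k + 1) * d])) for k in range(3))
    g = np.tanh(z[3 * d:])        # candidate cell update
    c_new = f * c + i * g         # memory mixes old state with gated new input
    h_new = o * np.tanh(c_new)    # output gate decides what the cell exposes
    return h_new, c_new

rng = np.random.default_rng(4)
d, m = 8, 3                       # hidden size, input size
W = rng.normal(0, 0.1, (4 * d, d + m))
b = np.zeros(4 * d)
h, c = np.zeros(d), np.zeros(d)
for t in range(12):               # run over a short synthetic input series
    h, c = lstm_step(rng.normal(size=m), h, c, W, b)
```

The forget gate f is what lets the cell state carry information across many periods, the property that makes the architecture attractive for nowcasting with mixed-frequency inputs.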
  16. By: Lukas Gonon
    Abstract: This article investigates the use of random feature neural networks for learning Kolmogorov partial (integro-)differential equations associated to Black-Scholes and more general exponential Lévy models. Random feature neural networks are single-hidden-layer feedforward neural networks in which only the output weights are trainable. This makes training particularly simple, but (a priori) reduces expressivity. Interestingly, this is not the case for Black-Scholes type PDEs, as we show here. We derive bounds for the prediction error of random neural networks for learning sufficiently non-degenerate Black-Scholes type models. A full error analysis is provided and it is shown that the derived bounds do not suffer from the curse of dimensionality. We also investigate an application of these results to basket options and validate the bounds numerically. These results prove that neural networks are able to learn solutions to Black-Scholes type PDEs without the curse of dimensionality. In addition, this provides an example of a relevant learning problem in which random feature neural networks are provably efficient.
    Date: 2021–06
  17. By: Jorge Guijarro-Ordonez; Markus Pelger; Greg Zanotti
    Abstract: Statistical arbitrage identifies and exploits temporal price differences between similar assets. We propose a unifying conceptual framework for statistical arbitrage and develop a novel deep learning solution, which finds commonality and time-series patterns from large panels in a data-driven and flexible way. First, we construct arbitrage portfolios of similar assets as residual portfolios from conditional latent asset pricing factors. Second, we extract the time series signals of these residual portfolios with one of the most powerful machine learning time-series solutions, a convolutional transformer. Last, we use these signals to form an optimal trading policy that maximizes risk-adjusted returns under constraints. We conduct a comprehensive empirical comparison study with daily large cap U.S. stocks. Our optimal trading strategy obtains a consistently high out-of-sample Sharpe ratio and substantially outperforms all benchmark approaches. It is orthogonal to common risk factors, and exploits asymmetric local trend and reversion patterns. Our strategies remain profitable after taking into account trading frictions and costs. Our findings suggest a high compensation for arbitrageurs to enforce the law of one price.
    Date: 2021–06
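The first two steps of the pipeline — residual portfolios from factor regressions, then a time-series signal on the residuals — can be sketched with one simulated factor. A plain mean-reversion rule stands in for the paper's convolutional transformer, and all loadings and scales are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_assets = 500, 4
factor = rng.normal(0, 0.01, T)                      # one latent market factor
betas = np.array([0.8, 1.0, 1.2, 0.9])               # hypothetical loadings
returns = factor[:, None] * betas + rng.normal(0, 0.005, (T, n_assets))

# step 1: residual (arbitrage) portfolios, here via per-asset OLS on the factor
beta_hat = returns.T @ factor / (factor @ factor)
residuals = returns - factor[:, None] * beta_hat

# step 2: a simple mean-reversion signal in place of a learned one
z = (residuals - residuals.mean(0)) / residuals.std(0)
positions = -np.sign(z)                              # fade residual deviations
```

By construction the residuals are orthogonal to the estimated factor, which is what makes the resulting strategy (approximately) market-neutral.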
  18. By: Matus, Kira; Veale, Michael
    Abstract: Forthcoming (open access) in Regulation and Governance. The increasing deployment of machine learning systems has raised many concerns about their varied negative societal impacts. Notable among policy proposals to mitigate these issues is the notion that (some) machine learning systems should be certified. In this paper, we illustrate how recent approaches to certifying machine learning may be building upon the wrong foundations and examine what better foundations may look like. While prominent approaches to date have centered on networking standards initiatives led by organizations including the IEEE or ISO, we argue that machine learning certification may be better grounded in the very different institutional structures found in the sustainability domain. We first illustrate how the policy challenges of machine learning and sustainability have significant structural similarities. Like many commodities, machine learning is characterized by credence properties that are difficult or impossible to observe, such as the characteristics of data collection, or carbon emissions from model training, as well as value chain issues, such as emerging core-periphery inequalities, networks of labor, and fragmented and modular value creation. We examine how focusing on networking standards, as is currently done, is likely to fail as a method to govern the credence properties of machine learning. While networking standards typically draw their adoption and enforcement from a functional need to conform in order to participate in a network, salient policy issues in machine learning benefit from no such dynamic. Finally, we apply existing research on certification systems for sustainability to the qualities and challenges of machine learning to generate lessons across the two, aiming to inform design considerations for emerging regimes.
    Date: 2021–06–02
  19. By: Ali Hirsa; Joerg Osterrieder; Branka Hadji-Misheva; Jan-Alexander Posth
    Abstract: Financial trading has been widely analyzed for decades with market participants and academics always looking for advanced methods to improve trading performance. Deep reinforcement learning (DRL), a recently reinvigorated method with significant success in multiple domains, still has to show its benefit in the financial markets. We use a deep Q-network (DQN) to design long-short trading strategies for futures contracts. The state space consists of volatility-normalized daily returns, with buying or selling being the reinforcement learning action and the total reward defined as the cumulative profits from our actions. Our trading strategy is trained and tested both on real and simulated price series and we compare the results with an index benchmark. We analyze how training based on a combination of artificial data and actual price series can be successfully deployed in real markets. The trained reinforcement learning agent is applied to trading the E-mini S&P 500 continuous futures contract. Our results in this study are preliminary and need further improvement.
    Date: 2021–06
  20. By: Kieran Wood; Stephen Roberts; Stefan Zohren
    Abstract: Momentum strategies are an important part of alternative investments and are at the heart of commodity trading advisors (CTAs). These strategies have however been found to have difficulties adjusting to rapid changes in market conditions, such as during the 2020 market crash. In particular, immediately after momentum turning points, where a trend reverses from an uptrend (downtrend) to a downtrend (uptrend), time-series momentum (TSMOM) strategies are prone to making bad bets. To improve the response to regime change, we introduce a novel approach, where we insert an online change-point detection (CPD) module into a Deep Momentum Network (DMN) [1904.04912] pipeline, which uses an LSTM deep-learning architecture to simultaneously learn both trend estimation and position sizing. Furthermore, our model is able to optimise the way in which it balances 1) a slow momentum strategy which exploits persisting trends, but does not overreact to localised price moves, and 2) a fast mean-reversion strategy regime by quickly flipping its position, then swapping it back again to exploit localised price moves. Our CPD module outputs a changepoint location and severity score, allowing our model to learn to respond to varying degrees of disequilibrium, or smaller and more localised changepoints, in a data-driven manner. Using a portfolio of 50 liquid, continuous futures contracts over the period 1990-2020, the addition of the CPD module leads to an improvement in Sharpe ratio of 33%. Even more notably, this module is especially beneficial in periods of significant nonstationarity, and in particular, over the most recent years tested (2015-2020) the performance boost is approximately 400%. This is especially interesting as traditional momentum strategies have been underperforming in this period.
    Date: 2021–05
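    The paper's CPD module is learned jointly with the network. As a rough standalone illustration of an online changepoint detector that, like the module described above, outputs a location and a severity score, a classical two-sided CUSUM can be sketched; the threshold and drift values below are arbitrary assumptions:

```python
def cusum_changepoint(series, threshold=5.0, drift=0.0):
    """Two-sided CUSUM: returns (index, severity) of the first detected
    change in mean, or (None, 0.0) if no excursion exceeds the threshold.
    Severity is the excursion size relative to the threshold."""
    mean = series[0]
    g_pos = g_neg = 0.0
    for i, x in enumerate(series[1:], start=1):
        g_pos = max(0.0, g_pos + (x - mean) - drift)
        g_neg = max(0.0, g_neg - (x - mean) - drift)
        if g_pos > threshold or g_neg > threshold:
            return i, max(g_pos, g_neg) / threshold
        mean += (x - mean) / (i + 1)  # running mean of the series so far
    return None, 0.0
```

    In a pipeline like the one described, the (location, severity) pair would be fed to the position-sizing network so it can react more aggressively to severe, recent regime breaks.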
  21. By: William Nganga Irungu; Julien Chevallier; Simon Wagura Ndiritu
    Abstract: This study investigates the nature of the fiscal policy regime in Kenya and the extent to which fiscal policy is sustainable in the long run, taking periodic regime changes into account. Markov switching models were used to determine fiscal policy regimes endogenously. Regime switching tests were used to test whether the No-Ponzi game condition and the debt stabilizing condition were met. The results established that the regime-switching model was suitable for explaining sustainable and unsustainable cycles. An investigation of fiscal policy regimes established that both sustainable and unsustainable regimes were dominant and that each lasted for an average of four years. There was evidence to suggest the existence of procyclical fiscal policy in Kenya. Regime switching tests for long-run sustainability suggested that the No-Ponzi game condition weakly holds in the Kenyan economy. Regime-based sensitivity analysis suggests that the persistence of the unsustainable regime for more than four years could threaten long-run fiscal sustainability. Sensitivity tests are conducted by resorting to (i) Self-Exciting Threshold Autoregressive models at the country level, and (ii) non-linear Granger causalities across a Feed-Forward Artificial Neural Network composed of East African countries (Burundi, Kenya, Rwanda, Tanzania and Uganda).
    Keywords: Fiscal policy; Markov-switching; No-Ponzi game condition; SETAR; Non-linear Granger causality; Feed-Forward Artificial Neural Network
    JEL: E62 F30 H61
    Date: 2020–01–01
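    For readers unfamiliar with the Markov-switching machinery used in the paper, the core of such models is the Hamilton filter, which tracks regime probabilities period by period. A minimal sketch for a two-regime Gaussian model follows; the means, variances and transition matrix here are fixed illustrative inputs, whereas in practice (and in the paper) they are estimated:

```python
import math

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities for a 2-regime Gaussian
    Markov-switching model with known means mu[k], std devs sigma[k]
    and transition matrix P[i][j] = Pr(s_t = j | s_{t-1} = i)."""
    def density(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    probs = [0.5, 0.5]  # flat prior over the two regimes
    filtered = []
    for x in y:
        # predict: propagate yesterday's probabilities through P
        pred = [sum(probs[i] * P[i][j] for i in range(2)) for j in range(2)]
        # update: weight by the likelihood of today's observation
        lik = [pred[j] * density(x, mu[j], sigma[j]) for j in range(2)]
        total = sum(lik)
        probs = [l / total for l in lik]
        filtered.append(probs)
    return filtered
```

    Applied to a fiscal reaction series, the filtered probabilities would classify each year as belonging to the sustainable or unsustainable regime, which is what allows regime durations (here, about four years on average) to be read off.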
  22. By: Da Zhang; Qingyi Wang; Shaojie Song; Simiao Chen; Mingwei Li; Lu Shen; Siqi Zheng; Bofeng Cai; Shenhao Wang
    Abstract: Estimating health benefits of reducing fossil fuel use from improved air quality provides important rationales for carbon emissions abatement. Simulating pollution concentration is a crucial step of the estimation, but traditional approaches often rely on complicated chemical transport models that require extensive expertise and computational resources. In this study, we develop a novel and succinct machine learning framework that is able to provide precise and robust annual average fine particle (PM2.5) concentration estimations directly from a high-resolution fossil energy use data set. The accessibility and applicability of this framework show great potentials of machine learning approaches for integrated assessment studies. Applications of the framework with Chinese data reveal highly heterogeneous health benefits of reducing fossil fuel use in different sectors and regions in China with a mean of \$34/tCO2 and a standard deviation of \$84/tCO2. Reducing rural and residential coal use offers the highest co-benefits with a mean of \$360/tCO2. Our findings prompt careful policy designs to maximize cost-effectiveness in the transition towards a carbon-neutral energy system.
    Date: 2021–05
  23. By: Hübler, Olaf
    Abstract: This paper examines the regional spread of COVID-19 infections. Variable reduction is performed using principal component analysis and the LARS and RLASSO selection procedures. The importance of state dependence, unobserved heterogeneity and structural breaks is tested. The empirical analysis shows that both regional structural variables and regionally aggregated personality traits matter for the differing spread of the coronavirus. The north-eastern federal states (Bundesländer) exhibit a lower degree of incidence. Regions with a high share of migrants show a higher incidence than others. If personality traits are neglected, the importance of the migration effect is overestimated. Schooling, risk of poverty and household size were identified as three further important characteristics. Federal states with a disproportionately large number of people without a school-leaving certificate tend to report fewer COVID-19 cases. The more pronounced the willingness to cooperate and emotional instability are, the higher the risk of infection. The estimates show a significantly positive relationship between infections and tests. The link between vaccinations and the number of infections is less clear-cut. Marked changes emerge across the three corona waves, concerning the importance of the share of migrants, of families at risk of poverty, and of the geographic location of the federal states.
    Keywords: COVID-19; federal states (Bundesländer); regional characteristics; personality traits; vaccinations; PCA tests; principal component analysis; machine learning; cluster-robust estimation; state dependence; unobserved characteristics; heterogeneity; corona waves; structural break
    JEL: C21 C23 I12 R12
    Date: 2021–06
  24. By: OECD
    Abstract: This is the first report on the state of implementation of the policy recommendations to governments contained in the OECD Principles on Artificial Intelligence adopted in May 2019. This report presents a conceptual framework, provides findings, identifies good practices, and examines emerging trends in AI policy, particularly on how countries are implementing the five recommendations to policy makers contained in the OECD AI Principles. The report builds both on the expert input provided at meetings of the OECD.AI Network of Experts working group on national AI policies that took place online from February 2020 to April 2021 and on the EC-OECD database of national AI strategies and policies. As policy makers and AI actors around the world move from principles to implementation, this report aims to inform the implementation of the OECD AI Principles. This report is also a contribution to the OECD AI Policy Observatory.
    Keywords: AI, artificial intelligence
    Date: 2021–06–18
  25. By: Trusov Alexandr (Russian Presidential Academy of National Economy and Public Administration); Botvich Dmitry (Russian Presidential Academy of National Economy and Public Administration); Maruev Sergey (Russian Presidential Academy of National Economy and Public Administration)
    Abstract: Over the long run, the dynamics of night lights on land in the visible range of the spectrum correlate with population density, GDP, and technological progress in general. Large archives of satellite imagery are currently available, allowing a retrospective analysis of this correlation over the past 30 years.
    Keywords: cartographic data analysis, satellite photography
    Date: 2021–01
  26. By: Georges Casamatta (Università di Corsica); Sauveur Giannoni (Università di Corsica); Daniel Brunstein (Università di Corsica); Johan Jouve (Università di Corsica; Università di Corsica)
    Abstract: The literature on short-term rentals emphasises the heterogeneity of the host population. Some argue that professional and opportunistic hosts differ in their pricing strategies. This study highlights how differences in market perception and information create a price differential between professional and non-professional players. Proposing an original and accurate definition of professional hosts, we rely on a large dataset of almost 9,000 properties and 73,000 observations to investigate the pricing behaviour of Airbnb sellers in Corsica (France). Using OLS and double machine learning methods, we demonstrate that a price differential exists between professional and opportunistic sellers. In addition, we assess the impact of seasonality in demand on the size and direction of this price differential. Professionals perceive a higher degree of market power than others during the peak season, which allows them to enhance their revenues.
    Date: 2021–05
  27. By: Rokas Kaminskas (Bank of Lithuania, ISM University of Management and Economics); Modestas Stukas (Bank of Lithuania); Linas Jurksas (Bank of Lithuania, Vilnius University)
    Abstract: This paper examines changing ECB communication and how it has impacted euro area financial markets over the past two decades. We applied a combination of topic modelling and sentiment analysis to more than 2,000 public speeches by ECB Executive Board members, as well as more than 200 ECB press conferences. Topic analysis revealed that the ECB’s main focus has shifted from strategy and objectives, at the inception of the euro area, to various policy actions during the global financial crisis and, more recently, to instruments and economic developments. Sentiment analysis showed an expected trend of a more negative communication tone during periods of turmoil and a gradual shift to a more dovish monetary policy tone over time. Regression analysis revealed that sentiment indices had the expected impact on financial market indicators, while press conferences showed substantially stronger effects than speeches.
    Keywords: ECB, speeches, press conferences, text analysis, sentiments, financial markets
    JEL: C80 E43 E44 E58 G12
    Date: 2021–05–11
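    A dictionary-based tone index of the kind commonly used in such sentiment analyses can be sketched as follows. The word lists here are toy placeholders, not the authors' dictionaries; real studies typically use curated domain lexicons:

```python
# Hypothetical word lists for illustration only.
DOVISH = {"accommodation", "easing", "stimulus", "downside", "weak"}
HAWKISH = {"tightening", "inflationary", "overheating", "hike", "robust"}

def tone_index(text):
    """Net hawkish-minus-dovish tone of a text, normalized to [-1, 1];
    returns 0.0 when no dictionary words appear."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    hawk = sum(w in HAWKISH for w in words)
    dove = sum(w in DOVISH for w in words)
    return 0.0 if hawk + dove == 0 else (hawk - dove) / (hawk + dove)
```

    An index like this, computed per speech or press conference and dated, yields the time series that can then be regressed on financial market indicators.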
  28. By: Peter Fisker; David Malmgren-Hansen; Thomas Pave Sohnesen
    Abstract: Cyclone Idai, the most devastating cyclone ever recorded in Southern Africa, caused havoc in large parts of central Mozambique, especially the port city of Beira, upon its landfall in March 2019. This study reviews and compares measurements of the impact, using various sources of remote sensing data.
    Keywords: Remote sensing, Natural disasters, Urban
    Date: 2021
  29. By: Stefano Cabras; Marco Delogu; J.D. Tena
    Abstract: Do important upcoming or recently completed scheduled tasks affect the current productivity of working teams? How is the impact (if any) modified by team size or by the external conditions workers face? We study this issue using association football data, where team performance is clearly defined and publicly observed before and after completing different activities (football matches). UEFA Champions League (CL) games affect European domestic league matches in a quasi-random fashion. We estimate this effect using a deep learning model, a novel strategy in this context, which allows us to control for many interacting confounding factors without imposing an ad hoc parametric specification. This approach is instrumental in estimating performance under ‘what if’ situations required in a causal analysis. We find that the dispersion of attention and effort across different tournaments significantly worsens domestic performance both before and after playing the CL match, with the larger impact in the latter case. Our results also suggest that this distortion is greater for small teams and that, compared to home teams, away teams react more conservatively by increasing their probability of drawing. We discuss the implications of these results for the multitasking literature.
    Keywords: multitasking, causal analysis, deep learning, sports economics

This nep-big issue is ©2021 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.