nep-big New Economics Papers
on Big Data
Issue of 2020‒07‒13
23 papers chosen by
Tom Coupé
University of Canterbury

  1. The Hard Problem of Prediction for Conflict Prevention By Mueller, H.; Rauh, C.
  2. Artificial Intelligence in Asset Management By Bartram, Söhnke M; Branke, Jürgen; Motahari, Mehrshad
  3. Real-Time Prediction of BITCOIN Price using Machine Learning Techniques and Public Sentiment Analysis By S M Raju; Ali Mohammad Tarif
  4. SECure: A Social and Environmental Certificate for AI Systems By Abhishek Gupta; Camylle Lanteigne; Sara Kingsley
  5. Priority to unemployed immigrants? A causal machine learning evaluation of training in Belgium By Bart Cockx; Michael Lechner; Joost Bollens
  6. Priority of Unemployed Immigrants? A Causal Machine Learning Evaluation of Training in Belgium By Bart Cockx; Michael Lechner; Joost Bollens
  7. Deep Stock Predictions By Akash Doshi; Alexander Issa; Puneet Sachdeva; Sina Rafati; Somnath Rakshit
  8. Quantum computing for Finance: state of the art and future prospects By Daniel J. Egger; Claudio Gambella; Jakub Marecek; Scott McFaddin; Martin Mevissen; Rudy Raymond; Andrea Simonetto; Stefan Woerner; Elena Yndurain
  9. Hybrid ARFIMA Wavelet Artificial Neural Network Model for DJIA Index Forecasting By Heni Boubaker; Giorgio Canarella; Rangan Gupta; Stephen M. Miller
  10. Identifying innovative actors in the Electricity Supply Industry using machine learning: an application to UK patent data By Geoffroy G Dolphin; Michael G Pollitt
  11. A Bayesian Time-Varying Autoregressive Model for Improved Short- and Long-Term Prediction By Christoph Berninger; Almond Stöcker; David Rügamer
  12. Situational educational planning: are open data and big data effective public policy tools? By Claus, Agustín
  13. A Data-driven Market Simulator for Small Data Environments By Hans Bühler; Blanka Horvath; Terry Lyons; Imanol Perez Arribas; Ben Wood
  14. The Role of Corporate Governance and Estimation Methods in Predicting Bankruptcy By Nawaf Almaskati; Ron Bird; Yue Lu; Danny Leung
  15. Distilling Large Information Sets to Forecast Commodity Returns: Automatic Variable Selection or Hidden Markov Models? By Massimo Guidolin; Manuela Pedio
  16. Selective Migration, Occupational Choice, and the Wage Returns to College Majors By Ransom, Tyler
  17. Ensemble Learning with Statistical and Structural Models By Jiaming Mao; Jingzhi Xu
  18. Identifying Innovative Actors in the Electricity Supply Industry Using Machine Learning: An Application to UK Patent Data By Dolphin, G.; Pollitt, M.
  19. Design and Evaluation of Personalized Free Trials By Hema Yoganarasimhan; Ebrahim Barzegary; Abhishek Pani
  20. Work That Can Be Done from Home: Evidence on Variation within and across Occupations and Industries By Adams-Prassl, Abigail; Boneva, Teodora; Golin, Marta; Rauh, Christopher
  21. An Artificial Intelligence Solution for Electricity Procurement in Forward Markets By Thibaut Théate; Sébastien Mathieu; Damien Ernst
  22. Monetary policy transmission mechanism in Poland: What do we know in 2019? By Tomasz Chmielewski; Andrzej Kocięcki; Tomasz Łyziak; Jan Przystupa; Ewa Stanisławska; Małgorzata Walerych; Ewa Wróbel
  23. COVID-19, Lockdowns and Well-Being: Evidence from Google Trends By Brodeur, Abel; Clark, Andrew E.; Fleche, Sarah; Powdthavee, Nattavudh

  1. By: Mueller, H.; Rauh, C.
    Abstract: There is a growing interest in prevention in several policy areas, and this provides a strong motivation for an improved integration of machine learning into models of decision making. In this article we propose a framework to tackle conflict prevention. A key problem of conflict forecasting for prevention is that predicting the start of conflict in previously peaceful countries needs to overcome a low baseline risk. To make progress on this hard problem, the project combines unsupervised with supervised machine learning. Specifically, a latent Dirichlet allocation (LDA) model is used for feature extraction from 4.1 million newspaper articles, and these features are then used in a random forest model to predict conflict. The output of the forecast model is then analyzed in a framework of cost minimization in which excessive intervention costs due to false positives can be traded off against the damages and destruction caused by conflict. News text is able to provide a useful forecast for the hard problem even when evaluated in such a cost-benefit framework. The aggregation into topics allows the forecast to rely on subtle signals from news which are positively or negatively related to conflict risk.
    Keywords: Conflict prediction, Conflict trap, Topic models, LDA, Random forest, News text, Machine learning
    JEL: F51 C53
    Date: 2020–03–10
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:2015&r=all
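The two-stage pipeline described above (unsupervised topic extraction feeding a supervised classifier) can be sketched with standard scikit-learn components. This is a toy illustration, not the authors' implementation: the corpus and labels below are invented placeholders standing in for the 4.1 million articles and conflict-onset outcomes.

```python
# Hypothetical sketch of an LDA-features-into-random-forest pipeline.
# Corpus and labels are toy placeholders, not the paper's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier

docs = ["peace talks resume", "armed clashes erupt", "ceasefire holds",
        "rebels attack town", "economy grows steadily", "troops shelled village"]
conflict_next_year = [0, 1, 0, 1, 0, 1]  # toy outcome labels

# Stage 1 (unsupervised): turn raw news text into topic shares.
counts = CountVectorizer().fit_transform(docs)
topic_shares = LatentDirichletAllocation(
    n_components=3, random_state=0).fit_transform(counts)

# Stage 2 (supervised): predict conflict onset from the topic shares.
forest = RandomForestClassifier(
    n_estimators=50, random_state=0).fit(topic_shares, conflict_next_year)
risk = forest.predict_proba(topic_shares)[:, 1]  # predicted conflict risk
```

In the cost-minimization step, such risk scores would be compared against a threshold chosen to trade off false-positive intervention costs against conflict damage.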
  2. By: Bartram, Söhnke M; Branke, Jürgen; Motahari, Mehrshad
    Abstract: Artificial intelligence (AI) has a growing presence in asset management and has revolutionized the sector in many ways. It has improved portfolio management, trading, and risk management practices by increasing efficiency, accuracy, and compliance. In particular, AI techniques help construct portfolios based on more accurate risk and returns forecasts and under more complex constraints. Trading algorithms utilize AI to devise novel trading signals and execute trades with lower transaction costs, and AI improves risk modelling and forecasting by generating insights from new sources of data. Finally, robo-advisors owe a large part of their success to AI techniques. At the same time, the use of AI can create new risks and challenges, for instance as a result of model opacity, complexity, and reliance on data integrity.
    Keywords: Algorithmic trading; decision trees; deep learning; evolutionary algorithms; Lasso; Machine Learning; neural networks; NLP; random forests; SVM
    JEL: G11 G17
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:14525&r=all
  3. By: S M Raju; Ali Mohammad Tarif
    Abstract: Bitcoin is the first decentralized digital cryptocurrency and has shown a significant increase in market capitalization in recent years. The objective of this paper is to determine the predictable price direction of Bitcoin in USD using machine learning techniques and sentiment analysis. Twitter and Reddit have attracted a great deal of attention from researchers studying public sentiment. We apply sentiment analysis and supervised machine learning principles to tweets extracted from Twitter and to Reddit posts, and we analyze the correlation between bitcoin price movements and sentiment in tweets. We explore several supervised machine learning algorithms to develop a prediction model and provide informative analysis of future market prices. Because the exact nature of a time series is difficult to evaluate, an ARIMA model often struggles to produce appropriate forecasts. We therefore also implement recurrent neural networks (RNNs) with long short-term memory (LSTM) cells, analyze the resulting time series predictions of bitcoin prices, and compare the predictability of bitcoin price and of sentiment in bitcoin tweets to the standard method (ARIMA). The RMSE (root-mean-square error) of the LSTM is 198.448 (single feature) and 197.515 (multi-feature), whereas the ARIMA model's RMSE is 209.263, showing that the multi-feature LSTM gives the more accurate result.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.14473&r=all
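The evaluation metric used above, root-mean-square error, is simple to state in code. The price and forecast series below are made-up numbers for illustration, not the paper's Bitcoin data; only the comparison logic (lower RMSE wins, as in the paper's LSTM vs ARIMA contest) is the point.

```python
# Minimal RMSE comparison between two hypothetical forecast series.
import math

def rmse(actual, predicted):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

actual = [9100.0, 9250.0, 9180.0, 9400.0]   # invented realized prices
lstm   = [9120.0, 9230.0, 9200.0, 9380.0]   # hypothetical LSTM forecasts
arima  = [9050.0, 9300.0, 9120.0, 9460.0]   # hypothetical ARIMA forecasts

# The model with the lower RMSE is the more accurate one, mirroring the
# paper's LSTM (~198) vs ARIMA (~209) comparison.
lstm_beats_arima = rmse(actual, lstm) < rmse(actual, arima)
```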
  4. By: Abhishek Gupta (Montreal AI Ethics Institute; Microsoft); Camylle Lanteigne (Montreal AI Ethics Institute; McGill University); Sara Kingsley (Carnegie Mellon University)
    Abstract: In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short-term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEED-esque certificate. Compute-efficient machine learning is the use of compressed network architectures that show only marginal decreases in accuracy. Federated learning augments the first pillar's impact through the use of techniques that distribute computational loads across idle capacity on devices. This is paired with the third pillar of data sovereignty to ensure the privacy of user data via techniques like use-based privacy and differential privacy. The final pillar ties all these factors together and certifies products and services in a standardized manner on their environmental and social impacts, allowing consumers to align their purchases with their values.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.06217&r=all
  5. By: Bart Cockx; Michael Lechner; Joost Bollens
    Abstract: Based on administrative data on the unemployed in Belgium, we estimate the labour market effects of three training programmes at various aggregation levels using Modified Causal Forests, a causal machine learning estimator. While all programmes have positive effects after the lock-in period, we find substantial heterogeneity across programmes and across the unemployed. Simulations show that “black-box” rules that reassign the unemployed to the programmes that maximise their estimated individual gains can considerably improve effectiveness: up to 20% more (less) time spent in (un)employment within a 30-month window. A shallow policy tree delivers a simple rule that realises about 70% of this gain.
    Keywords: Policy evaluation, active labour market policy, causal machine learning, modified causal forest, conditional average treatment effects
    JEL: C21 J68
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:20/998&r=all
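The "black-box" reassignment rule described above reduces, once individual gains are estimated, to an argmax per person. The gains matrix below is invented for illustration; in the paper it would come from the Modified Causal Forest estimates of individualized treatment effects.

```python
# Toy sketch of programme reassignment from estimated individual gains.
# Gains (e.g. months of extra employment) are hypothetical numbers.
estimated_gains = {
    "person_A": {"short_training": 1.2, "long_training": 0.4, "orientation": 0.1},
    "person_B": {"short_training": 0.2, "long_training": 1.8, "orientation": 0.3},
    "person_C": {"short_training": 0.5, "long_training": 0.6, "orientation": 0.9},
}

# Assign each unemployed person to the programme with the largest
# estimated individual gain.
assignment = {person: max(gains, key=gains.get)
              for person, gains in estimated_gains.items()}
# assignment == {"person_A": "short_training",
#                "person_B": "long_training",
#                "person_C": "orientation"}
```

The paper's shallow policy tree replaces this per-person argmax with a simple interpretable rule that recovers most of the gain.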
  6. By: Bart Cockx; Michael Lechner; Joost Bollens
    Abstract: Based on administrative data on the unemployed in Belgium, we estimate the labour market effects of three training programmes at various aggregation levels using Modified Causal Forests, a causal machine learning estimator. While all programmes have positive effects after the lock-in period, we find substantial heterogeneity across programmes and across the unemployed. Simulations show that “black-box” rules that reassign the unemployed to the programmes that maximise their estimated individual gains can considerably improve effectiveness: up to 20% more (less) time spent in (un)employment within a 30-month window. A shallow policy tree delivers a simple rule that realises about 70% of this gain.
    Keywords: policy evaluation, active labour market policy, causal machine learning, modified causal forest, conditional average treatment effects
    JEL: J68
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_8297&r=all
  7. By: Akash Doshi; Alexander Issa; Puneet Sachdeva; Sina Rafati; Somnath Rakshit
    Abstract: Forecasting stock prices can be interpreted as a time series prediction problem, for which Long Short-Term Memory (LSTM) neural networks are often used because their architecture is specifically built to solve such problems. In this paper, we consider the design of a trading strategy that performs portfolio optimization using LSTM stock price predictions for four different companies. We then customize the loss function used to train the LSTM to increase the profit earned. Moreover, we propose a data-driven approach for the optimal selection of window length and multi-step prediction length, and consider the addition of analyst calls as technical indicators to a multi-stack bidirectional LSTM strengthened by the addition of attention units. We find that the LSTM model with the customized loss function improves the performance of the trading bot over a regressive baseline such as ARIMA, while the addition of analyst calls improves performance for certain datasets.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.04992&r=all
  8. By: Daniel J. Egger; Claudio Gambella; Jakub Marecek; Scott McFaddin; Martin Mevissen; Rudy Raymond; Andrea Simonetto; Stefan Woerner; Elena Yndurain
    Abstract: This paper outlines our point of view regarding the applicability, state of the art, and potential of quantum computing for problems in finance. We provide an introduction to quantum computing as well as a survey on problem classes in finance that are computationally challenging classically and for which quantum computing algorithms are promising. In the main part, we describe in detail quantum algorithms for specific applications arising in financial services, such as those involving simulation, optimization, and machine learning problems. In addition, we include demonstrations of quantum algorithms on IBM Quantum back-ends and discuss the potential benefits of quantum algorithms for problems in financial services. We conclude with a summary of technical challenges and future prospects.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.14510&r=all
  9. By: Heni Boubaker (International University of Rabat, BEAR LAB, Technopolis Rabat-Shore Rocade Rabat-Sale, Morocco); Giorgio Canarella (Department of Economics, Lee Business School, University of Nevada, Las Vegas; Las Vegas, Nevada); Rangan Gupta (Department of Economics, University of Pretoria, Pretoria, 0002, South Africa); Stephen M. Miller (Department of Economics, Lee Business School, University of Nevada, Las Vegas; Las Vegas, Nevada)
    Abstract: This paper proposes a hybrid modelling approach for forecasting returns and volatilities of the stock market. The model, called the ARFIMA-WLLWNN model, integrates the advantages of the ARFIMA model, the wavelet decomposition technique (namely, the discrete MODWT with the Daubechies least asymmetric wavelet filter) and an artificial neural network (namely, the LLWNN neural network). The model develops through a two-phase approach. In phase one, a wavelet decomposition improves the forecasting accuracy of the LLWNN neural network, resulting in the Wavelet Local Linear Wavelet Neural Network (WLLWNN) model. The Back Propagation (BP) and Particle Swarm Optimization (PSO) learning algorithms optimize the WLLWNN structure. In phase two, the residuals of an ARFIMA model of the conditional mean become the input to the WLLWNN model. The hybrid ARFIMA-WLLWNN model is evaluated using daily closing prices for the Dow Jones Industrial Average (DJIA) index from 01/01/2010 to 02/11/2020. The experimental results indicate that the PSO-optimized version of the hybrid ARFIMA-WLLWNN outperforms the LLWNN, WLLWNN, ARFIMA-LLWNN, and ARFIMA-HYAPARCH models and provides more accurate out-of-sample forecasts over validation horizons of one, five and twenty-two days.
    Keywords: Wavelet decomposition, WLLWNN, Neural network, ARFIMA, HYGARCH
    JEL: C45 C58 G17
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:202056&r=all
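The wavelet-decomposition stage of the hybrid model can be illustrated with the simplest member of the wavelet family, a one-level Haar transform that splits a series into a smooth approximation and detail coefficients. This is illustrative only: the paper uses a MODWT with a Daubechies least-asymmetric filter, and the return series below is invented.

```python
# One level of the discrete Haar wavelet transform (toy example).
import math

def haar_step(series):
    """Split an even-length signal into approximation (smooth) and
    detail (fluctuation) coefficients via pairwise sums/differences."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(series[0::2], series[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(series[0::2], series[1::2])]
    return approx, detail

returns = [0.01, -0.02, 0.015, 0.005]  # invented return series
approx, detail = haar_step(returns)
```

In the hybrid model, it is such multi-scale components (from a richer wavelet filter) that feed the neural network stage.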
  10. By: Geoffroy G Dolphin (EPRG, CJBS, University of Cambridge); Michael G Pollitt (EPRG, CJBS, University of Cambridge)
    Keywords: innovation, electricity sector, machine learning
    JEL: L94 O31 O38
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:enp:wpaper:eprg2004&r=all
  11. By: Christoph Berninger; Almond Stöcker; David Rügamer
    Abstract: Motivated by the application to German interest rates, we propose a time-varying autoregressive model for the short- and long-term prediction of time series that exhibit temporary non-stationary behaviour but are assumed to mean-revert in the long run. We use a Bayesian formulation to incorporate prior assumptions on the mean-reverting process and thereby regularize predictions in the far future. We use MCMC-based inference, deriving the relevant full conditional distributions and employing a Metropolis-Hastings-within-Gibbs sampler to sample from the posterior (predictive) distribution. By combining data-driven short-term predictions with long-term distributional assumptions, our model is competitive with existing methods over short horizons while yielding reasonable predictions in the long run. We apply our model to interest rate data and contrast its forecasting performance with that of a 2-Additive-Factor Gaussian model as well as with the predictions of a dynamic Nelson-Siegel model.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.05750&r=all
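The long-horizon behaviour the model is built to capture can be illustrated with the textbook mean-reverting AR(1): the k-step-ahead conditional mean decays geometrically from the current level toward the long-run mean. All parameter values below are invented, and this stylized formula stands in for (rather than reproduces) the paper's Bayesian time-varying model.

```python
# k-step-ahead conditional mean of a stationary AR(1),
# x_t = mu + phi * (x_{t-1} - mu) + eps_t.
def ar1_forecast(x_t, mu, phi, k):
    """Predictive mean reverts geometrically to the long-run mean mu."""
    return mu + (phi ** k) * (x_t - mu)

mu, phi, x_t = 2.0, 0.9, 5.0   # invented long-run mean, persistence, current rate

short_horizon = ar1_forecast(x_t, mu, phi, 1)    # 4.7: dominated by current level
long_horizon = ar1_forecast(x_t, mu, phi, 100)   # ~2.0: pulled to the prior mean
```

The paper's Bayesian prior on the mean-reverting process plays the role of mu and phi here, regularizing far-future predictions while leaving short-term ones data-driven.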
  12. By: Claus, Agustín
    Abstract: The administration and governance of the Argentine education system face a structural challenge in fulfilling the effective right to education within the framework of the United Nations 2030 Sustainable Development Goals and the Digital and Planning Agenda for Latin America and the Caribbean of the Economic Commission for Latin America. Two schemes of educational administration coexist in the structure of the Argentine education system. On the one hand, school management is decentralized to the subnational level in the provinces; on the other hand, education financing is strongly centralized in the national State, which affects the governance of the subnational levels. This institutional, functional and management design has generated territorial and spatial inequality in the distribution of school supply relative to the social demand for education, together with inequity in the differential education-financing effort given the fiscal capacities of the subnational levels. This paper falls within the territorial foresight track convened for the 2017 Meeting of Experts on Multiscalar Planning and Territorial Development held in Santiago, Chile; its purpose is to reposition and recover the role of situational strategic planning in the education system of the Argentine Republic.
    Keywords: EDUCATION, EDUCATIONAL POLICY, EDUCATIONAL PLANNING, BIG DATA, DATABASES
    Date: 2019–09–28
    URL: http://d.repec.org/n?u=RePEc:ecr:col043:45619&r=all
  13. By: Hans Bühler; Blanka Horvath; Terry Lyons; Imanol Perez Arribas; Ben Wood
    Abstract: Neural network based data-driven market simulation unveils a new and flexible way of modelling financial time series without imposing assumptions on the underlying stochastic dynamics. Though in this sense generative market simulation is model-free, the concrete modelling choices are nevertheless decisive for the features of the simulated paths. We give a brief overview of currently used generative modelling approaches and performance evaluation metrics for financial time series, and address some of the challenges to achieve good results in the latter. We also contrast some classical approaches of market simulation with simulation based on generative modelling and highlight some advantages and pitfalls of the new approach. While most generative models tend to rely on large amounts of training data, we present here a generative model that works reliably in environments where the amount of available training data is notoriously small. Furthermore, we show how a rough paths perspective combined with a parsimonious Variational Autoencoder framework provides a powerful way for encoding and evaluating financial time series in such environments where available training data is scarce. Finally, we also propose a suitable performance evaluation metric for financial time series and discuss some connections of our Market Generator to deep hedging.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.14498&r=all
  14. By: Nawaf Almaskati (University of Waikato); Ron Bird (University of Waikato); Yue Lu (University of Waikato); Danny Leung (University of Technology Sydney)
    Abstract: In a sample covering bankruptcies of public US firms over the period 2000 to 2015, we find that adding governance variables significantly improves the classification power and prediction accuracy of various bankruptcy prediction models. We also find that the additional explanatory power provided by the governance measures grows the further we are from bankruptcy, which implies that governance variables tend to provide earlier and more accurate warnings of a firm’s bankruptcy potential. Our analysis of five statistical methods commonly used in the literature shows that, regardless of the bankruptcy model used, hazard analysis provides the best classification and out-of-sample forecast accuracy among the parametric methods. Nevertheless, non-parametric methods such as neural networks and data envelopment analysis appear to provide better classification accuracy regardless of the model selected.
    Keywords: corporate governance; bankruptcy studies; default prediction; non-parametric methods
    JEL: D81 G10 G14 G30 G32
    Date: 2019–07–31
    URL: http://d.repec.org/n?u=RePEc:wai:econwp:19/16&r=all
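The discrete-time hazard logic behind the preferred parametric approach above can be sketched in a few lines: a firm's cumulative bankruptcy probability over a horizon is built from per-period hazards. The hazard values below are invented; in the paper they would be fitted from financial and governance covariates.

```python
# Cumulative failure probability from per-period hazards (toy numbers).
def cumulative_failure_prob(hazards):
    """P(bankrupt by T) = 1 - prod_t (1 - h_t)."""
    survival = 1.0
    for h in hazards:
        survival *= (1.0 - h)
    return 1.0 - survival

yearly_hazards = [0.02, 0.05, 0.10]  # hypothetical rising distress over 3 years
p_bankrupt = cumulative_failure_prob(yearly_hazards)  # ~0.162
```

This horizon structure is one reason hazard models can exploit early-warning covariates, such as governance measures, that matter more far from the bankruptcy date.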
  15. By: Massimo Guidolin; Manuela Pedio
    Abstract: We investigate the out-of-sample, recursive predictive accuracy for (fully hedged) commodity futures returns of two sets of forecasting models: hidden Markov chain models, in which the coefficients of predictive regressions follow a regime-switching process, and stepwise variable selection algorithms, in which the coefficients of predictors not selected are set to zero. We perform the analysis under four alternative loss functions: the squared and absolute value losses, and the realized portfolio Sharpe ratio and mean-variance (MV) utility when the portfolio is built upon optimal weights computed by solving a standard MV portfolio problem. We find that neither HMMs nor stepwise regressions manage to systematically (or even just frequently) outperform a plain vanilla AR benchmark according to the RMSFE or MAFE statistical loss functions. However, stepwise variable selection methods in particular create economic value in out-of-sample mean-variance portfolio tests. Because we impose transaction costs not only ex post but also ex ante, so that an investor uses a model’s forecasts only when they increase expected utility, the economic value improvement is largest when transaction costs are taken into account.
    Keywords: Backward and forward stepwise regressions; hidden Markov models; out-of-sample forecasting; commodity futures returns; mean-variance portfolios
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:baf:cbafwp:cbafwp20140&r=all
  16. By: Ransom, Tyler (University of Oklahoma)
    Abstract: I examine the extent to which the returns to college majors are influenced by selective migration and occupational choice across locations in the US. To quantify the role of selection, I develop and estimate an extended Roy model of migration, occupational choice, and earnings where, upon completing their education, individuals choose a location in which to live and an occupation in which to work. In order to estimate this high-dimensional choice model, I make use of machine learning methods that allow for model selection and estimation simultaneously in a non-parametric setting. I find that OLS estimates of the returns to business and STEM majors relative to education majors are biased upward by 15% on average. Using estimates of the model, I characterize the migration behavior of different college majors and find that migration flows are twice as sensitive to occupational concentration as they are to wage returns.
    Keywords: college major, migration, occupation, Roy model
    JEL: I2 J3 R1
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13370&r=all
  17. By: Jiaming Mao; Jingzhi Xu
    Abstract: Statistical and structural modeling represent two distinct approaches to data analysis. In this paper, we propose a set of novel methods for combining statistical and structural models for improved prediction and causal inference. Our first proposed estimator is doubly robust in that it requires the correct specification of only the statistical or the structural model. Our second proposed estimator is a weighted ensemble that has the ability to outperform both models when they are both misspecified. Experiments demonstrate the potential of our estimators in various settings, including first-price auctions, dynamic models of entry and exit, and demand estimation with instrumental variables.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.05308&r=all
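Double robustness of the kind claimed above is typically illustrated with an AIPW-style estimator: it stays consistent if either the outcome model or the propensity model is correct. The sketch below is a generic AIPW average-treatment-effect estimator on invented data, not the paper's statistical-plus-structural construction.

```python
# Augmented inverse-propensity-weighted (AIPW) ATE on toy data.
def aipw_ate(data, m, e):
    """data: (covariate x, binary treatment d, outcome y) triples;
    m(x, d): outcome model; e(x): propensity model."""
    total = 0.0
    for x, d, y in data:
        mu1, mu0, p = m(x, 1), m(x, 0), e(x)
        total += (mu1 - mu0
                  + d * (y - mu1) / p
                  - (1 - d) * (y - mu0) / (1 - p))
    return total / len(data)

m = lambda x, d: 2.0 * d + x   # outcome model; true treatment effect is 2
e = lambda x: 0.5              # propensity model (as in a randomized design)
data = [(0.0, 1, 2.0), (0.0, 0, 0.0), (1.0, 1, 3.0), (1.0, 0, 1.0)]
ate = aipw_ate(data, m, e)     # 2.0 here, where both models are correct
```

The correction terms vanish when the outcome model fits, and reweight away outcome-model error when the propensity model fits, which is where the "either model suffices" property comes from.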
  18. By: Dolphin, G.; Pollitt, M.
    Abstract: The recent history of the Electricity Supply Industry (ESI) of major western economies was marked by two fundamental changes: a transition toward liberalised electricity markets and a policy-led push to decarbonise the electricity generation portfolio. These changes not only affected the pace and nature of innovation activity in the sector but also altered the set of innovative actors. The present paper provides a methodology to identify these actors, which we apply to priority patents filed at the UK Intellectual Property Office over the period 1955-2016. The analysis also indicates that (i) the recent increase in innovation activity originates overwhelmingly from upstream Original Equipment Manufacturers and (ii) innovation activity in `green' electricity supply technologies slowed down in recent years.
    Keywords: innovation, electricity sector, machine learning
    JEL: L94 O31 O38
    Date: 2020–03–03
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:2013&r=all
  19. By: Hema Yoganarasimhan; Ebrahim Barzegary; Abhishek Pani
    Abstract: Free trial promotions, where users are given a limited time to try the product for free, are a commonly used customer acquisition strategy in the Software as a Service (SaaS) industry. We examine how trial length affects users' responsiveness, and seek to quantify the gains from personalizing the length of free trial promotions. Our data come from a large-scale field experiment conducted by a leading SaaS firm, where new users were randomly assigned to 7, 14, or 30 days of free trial. First, we show that offering the 7-day trial to all consumers is the best uniform policy, with a 5.59% increase in subscriptions. Next, we develop a three-pronged framework for personalized policy design and evaluation. Using our framework, we develop seven personalized targeting policies based on linear regression, lasso, CART, random forest, XGBoost, causal tree, and causal forest, and evaluate their performance using the Inverse Propensity Score (IPS) estimator. We find that the personalized policy based on lasso performs the best, followed by the one based on XGBoost. In contrast, policies based on causal tree and causal forest perform poorly. We then link a method's effectiveness in designing policy with its ability to personalize the treatment sufficiently without over-fitting (i.e., capturing spurious heterogeneity). Next, we segment consumers based on their optimal trial length and derive some substantive insights on the drivers of user behavior in this context. Finally, we show that policies designed to maximize short-run conversions also perform well on long-run outcomes such as consumer loyalty and profitability.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.13420&r=all
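The IPS estimator used above to compare targeting policies offline has a compact form: with randomized trial data, a candidate policy's value is estimated by keeping only the users whose random assignment matches the policy's recommendation and reweighting their outcomes by the inverse assignment probability. The log entries and candidate policy below are invented placeholders.

```python
# Inverse Propensity Score (IPS) value of a candidate policy on toy logs.
def ips_value(logs, policy, propensity):
    """Mean of 1{policy(x) == a} * y / p(a) over logged (x, a, y)."""
    return sum((policy(x) == a) * y / propensity
               for x, a, y in logs) / len(logs)

logs = [  # (user covariate, randomly assigned trial length, subscribed?)
    ("heavy_user", 7, 1), ("light_user", 30, 0),
    ("heavy_user", 14, 1), ("light_user", 7, 1),
]
uniform_7 = lambda x: 7  # candidate policy: 7-day trial for everyone
value = ips_value(logs, uniform_7, propensity=1 / 3)  # each arm drawn w.p. 1/3
```

Only matching users contribute, so a policy can be evaluated without redeploying it, which is what makes the offline comparison of seven targeting policies feasible.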
  20. By: Adams-Prassl, Abigail (University of Oxford); Boneva, Teodora (University of Zurich); Golin, Marta (University of Oxford); Rauh, Christopher (University of Montreal)
    Abstract: Using large, geographically representative surveys from the US and UK, we document variation in the percentage of tasks workers can do from home. We highlight three dimensions of heterogeneity that have previously been neglected. First, the share of tasks that can be done from home varies considerably both across as well as within occupations and industries. The distribution of the share of tasks that can be done from home within occupations, industries, and occupation-industry pairs is systematic and remarkably consistent across countries and survey waves. Second, as the pandemic has progressed, the share of workers who can do all tasks from home has increased most in those occupations in which the pre-existing share was already high. Third, even within occupations and industries, we find that women can do fewer tasks from home. Using machine-learning methods, we extend our working-from-home measure to all disaggregated occupation-industry pairs. The measure we present in this paper is a critical input for models considering the possibility to work from home, including models used to assess the impact of the pandemic or design policies targeted at reopening the economy.
    Keywords: working from home, occupations, industry, Coronavirus, COVID-19, telework
    JEL: J21 J24
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13374&r=all
  21. By: Thibaut Théate; Sébastien Mathieu; Damien Ernst
    Abstract: Retailers and major consumers of electricity generally purchase a critical percentage of their estimated electricity needs years ahead on the forward markets. This long-term electricity procurement task consists of determining when to buy electricity so that the resulting energy cost is minimised, and the forecast consumption is covered. In this scientific article, the focus is set on a yearly base load product, named calendar (CAL), which is tradable up to three years ahead of the delivery period. This research paper introduces a novel algorithm providing recommendations to either buy electricity now or wait for a future opportunity based on the history of CAL prices. This algorithm relies on deep learning forecasting techniques and on an indicator quantifying the deviation from a perfectly uniform reference procurement strategy. Basically, a new purchase operation is advised when this mathematical indicator hits the trigger associated with the market direction predicted by the forecaster. On average, the proposed approach surpasses benchmark procurement strategies and achieves a reduction in costs of 1.65% with respect to the perfectly uniform reference procurement strategy achieving the mean electricity price. Moreover, in addition to automating the electricity procurement task, this algorithm demonstrates more consistent results throughout the years compared to the benchmark strategies.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.05784&r=all
  22. By: Tomasz Chmielewski (Narodowy Bank Polski); Andrzej Kocięcki (Narodowy Bank Polski); Tomasz Łyziak (Narodowy Bank Polski); Jan Przystupa (Narodowy Bank Polski); Ewa Stanisławska (Narodowy Bank Polski); Małgorzata Walerych (Narodowy Bank Polski); Ewa Wróbel (Narodowy Bank Polski)
    Abstract: The monetary policy of Narodowy Bank Polski (NBP), pursued in accordance with the assumptions of the inflation targeting strategy, remains conventional. The Polish central bank has the capacity to change the basic monetary policy instrument, i.e. the short-term interest rate, in both directions. Therefore, the aim of this report, as in its previous editions, is to analyse the transmission mechanism of conventional monetary policy. However, this does not mean that the analysis of the monetary policy transmission mechanism faces no limitations. The main problems constraining modelling in this area are the lack of variability of the NBP reference rate, the very low volatility of monetary policy shocks identified with various methods, the full predictability of monetary policy decisions in recent years, and the well-established expectations of the private sector that the NBP reference rate will remain stable in the near future. Under these circumstances, drawing conclusions on the strength and delays of the mechanism through which potential changes in the short-term interest rate would affect the economy is more difficult and more uncertain than before. It therefore seems likely that economic agents, accustomed to stable interest rates and expecting them to be maintained at the current level, may respond to potential changes in monetary policy parameters differently than in the past. This is illustrated by the high uncertainty of the current response functions of various variables to monetary policy shocks, obtained from models with time-varying parameters. For the above reasons, our view of the monetary policy transmission mechanism in Poland is multi-faceted in this report.
Although we show the results of standard models estimated on long samples, we attach greater importance to models with time-varying coefficients and we extend studies of the transmission mechanism at the microeconomic level, taking into account the heterogeneity of entities and their response to monetary policy decisions. In addition, we analyse the importance of various forms of central bank communication, including the text content (tone) of decision-makers’ documents, enabling the central bank to influence the expectations of private sector entities even if short-term interest rates do not change.
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:nbp:nbpmis:329&r=all
  23. By: Brodeur, Abel; Clark, Andrew E.; Fleche, Sarah; Powdthavee, Nattavudh
    Abstract: The COVID-19 pandemic has led many governments to implement lockdowns. While lockdowns may help to contain the spread of the virus, they may result in substantial damage to population well-being. We use Google Trends data to test whether the lockdowns implemented in Europe and America led to changes in searches for well-being-related topics. Using differences-in-differences and a regression discontinuity design to evaluate the causal effects of lockdown, we find a substantial increase in search intensity for boredom in Europe and the US. We also find a significant increase in searches for loneliness, worry and sadness, while searches for stress, suicide and divorce fell. Our results suggest that people's mental health may have been severely affected by the lockdown.
    Keywords: Boredom, COVID-19, Loneliness, Well-being
    JEL: I12 I31 J22
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:glodps:552&r=all
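The differences-in-differences strategy above compares the pre/post change in search intensity in locked-down regions with the change in comparison regions. The sketch below shows that arithmetic on invented numbers; it is not the paper's Google Trends data or its full regression specification.

```python
# Toy differences-in-differences estimate on invented search-index means.
def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate = (treated change) - (control change)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean search index for "boredom":
effect = did(treat_pre=40.0, treat_post=70.0, ctrl_pre=42.0, ctrl_post=48.0)
# effect == 24.0: a 24-point excess rise in boredom searches under lockdown
```

The identifying assumption, as usual for DiD, is that absent lockdown the two groups' search trends would have moved in parallel.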

This nep-big issue is ©2020 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.