nep-big New Economics Papers
on Big Data
Issue of 2020‒06‒08
twenty-one papers chosen by
Tom Coupé
University of Canterbury

  1. Which bills are lobbied? Predicting and interpreting lobbying activity in the US By Ivan Slobozhan; Peter Ormosi; Rajesh Sharma
  2. Forecasting Inflation in a Data-Rich Environment: The Benefits of Machine Learning Methods By Marcelo Medeiros; Gabriel Vasconcelos; Álvaro Veiga; Eduardo Zilberman
  3. Machine Learning Econometrics: Bayesian algorithms and methods By Korobilis, Dimitris; Pettenuzzo, Davide
  4. When are Google data useful to nowcast GDP? An approach via pre-selection and shrinkage By Laurent Ferrara; Anna Simoni
  5. How do we measure firm performance? A review of issues facing entrepreneurship researchers By Josh Siepel; Marcus Dejardin
  6. Predicting the COVID-19 Pandemic in Canada and the US By Ba Chu; Shafiullah Qureshi
  7. Analyse du discours médical sur Twitter®. Étude d’un corpus de tweets émis par des médecins généralistes entre juin 2012 et mars 2017 et contenant le hashtag #DocTocToc By A. Salles; J. Dufour; P. Hassanaly; P. Michel; C. Cabot; J. Grosjean
  8. World Seaborne Trade in Real Time: A Proof of Concept for Building AIS-based Nowcasts from Scratch By Diego A. Cerdeiro; Andras Komaromi; Yang Liu; Mamoon Saeed
  9. Corruption in the Times of Pandemia By Gallego, Jorge; Prem, Mounu; Vargas, Juan F.
  10. Application of Nonlinear Autoregressive with Exogenous Input (NARX) neural network in macroeconomic forecasting, national goal setting and global competitiveness assessment By Liyang Tang
  11. Multi-View Graph Convolutional Networks for Relationship-Driven Stock Prediction By Jiexia Ye; Juanjuan Zhao; Kejiang Ye; Chengzhong Xu
  12. A daily fever curve for the Swiss economy By Marc Burri; Daniel Kaufmann
  13. The perils of misusing remote sensing data: The case of forest cover By Fergusson, Leopoldo; Saavedra, Santiago; Vargas, Juan F.
  14. Banking Supervision, Monetary Policy and Risk-Taking: Big Data Evidence from 15 Credit Registers By Altavilla, Carlo; Boucinha, Miguel; Peydró, José-Luis; Smets, Frank
  15. A Taxonomy of Tasks for Assessing the Impact of New Technologies on Work By Enrique Fernandez-Macias; Martina Bisello
  16. Trust and Compliance to Public Health Policies in Times of COVID-19 By Bargain, Olivier; Aminjonov, Ulugbek
  17. "Interest Rate Model With Investor Attitude and Text Mining" By Souta Nakatani; Kiyohiko G. Nishimura; Taiga Saito; Akihiko Takahashi
  18. Consumers' Mobility, Expenditure and Online-Offline Substitution Response to COVID-19: Evidence from French Transaction Data By David Bounie; Youssouf Camara; John Galbraith
  19. Consumer Responses to the COVID-19 Crisis: Evidence from Bank Account Transaction Data By Asger Lau Andersen; Emil Toft Hansen; Niels Johannesen; Adam Sheridan
  20. "Retos para el análisis y la estimación de la distribución de probabilidad en Big-data" By Catalina Bolancé
  21. COVID-19, Lockdowns and Well-Being: Evidence from Google Trends By Brodeur, Abel; Clark, Andrew E.; Flèche, Sarah; Powdthavee, Nattavudh

  1. By: Ivan Slobozhan; Peter Ormosi; Rajesh Sharma
    Abstract: Using lobbying data from OpenSecrets.org, we offer several experiments applying machine learning techniques to predict whether a piece of legislation (US bill) has been subjected to lobbying activities or not. We also investigate how the intensity of the lobbying activity affects how discernible a lobbied bill is from one that was not subject to lobbying. We compare the performance of a number of different models (logistic regression, random forest, CNN and LSTM) and text embedding representations (BOW, TF-IDF, GloVe, Law2Vec). We report ROC AUC scores above 0.85 and accuracy of 78%. Model performance improves significantly (0.95 ROC AUC and 88% accuracy) when only bills with higher lobbying intensity are considered. We also propose a method that could be used for unlabelled data. Through this we show that there is a considerable number of previously unlabelled US bills for which our predictions suggest that some lobbying activity took place. We believe our method could contribute to the enforcement of the US Lobbying Disclosure Act (LDA) by flagging bills that were likely affected by lobbying but were not filed as such.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.06386&r=all
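    For readers who want to experiment with this type of bill classification, the minimal sketch below trains a TF-IDF plus logistic regression baseline and reports ROC AUC and accuracy; bills.csv and its text/lobbied columns are hypothetical stand-ins, not the authors' OpenSecrets-based dataset.
      # Minimal TF-IDF + logistic regression baseline for flagging lobbied bills.
      # The file bills.csv (columns: text, lobbied) is a hypothetical input.
      import pandas as pd
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score, accuracy_score

      df = pd.read_csv("bills.csv")
      X_train, X_test, y_train, y_test = train_test_split(
          df["text"], df["lobbied"], test_size=0.2, random_state=0, stratify=df["lobbied"])

      vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2), stop_words="english")
      clf = LogisticRegression(max_iter=1000)
      clf.fit(vec.fit_transform(X_train), y_train)

      probs = clf.predict_proba(vec.transform(X_test))[:, 1]
      print("ROC AUC:", roc_auc_score(y_test, probs))
      print("Accuracy:", accuracy_score(y_test, (probs > 0.5).astype(int)))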
  2. By: Marcelo Medeiros; Gabriel Vasconcelos; Álvaro Veiga; Eduardo Zilberman
    Abstract: Inflation forecasting is an important but difficult task. Here, we explore advances in machine learning (ML) methods and the availability of new datasets to forecast US inflation. Despite the skepticism in the previous literature, we show that ML models with a large number of covariates are systematically more accurate than the benchmarks. The ML method that deserves more attention is the random forest model, which dominates all other models. Its good performance is due not only to its specific method of variable selection but also to the potential nonlinearities between past key macroeconomic variables and inflation.
    Date: 2019–05
    URL: http://d.repec.org/n?u=RePEc:chb:bcchwp:834&r=all
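    As a rough template for this kind of exercise, the sketch below produces a direct h-step-ahead inflation forecast with a random forest on lagged predictors; macro.csv and the lag and horizon choices are hypothetical, not the authors' dataset or specification.
      # Direct h-step-ahead inflation forecast with a random forest on lagged predictors.
      # macro.csv (dates x predictors, including cpi_inflation) is a hypothetical file.
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      h, n_lags = 12, 4
      df = pd.read_csv("macro.csv", index_col=0, parse_dates=True)

      # Build lagged predictors and the h-step-ahead target.
      lags = pd.concat([df.shift(l).add_suffix(f"_lag{l}") for l in range(n_lags)], axis=1)
      data = lags.assign(y=df["cpi_inflation"].shift(-h)).dropna()

      train, test = data.iloc[:-60], data.iloc[-60:]          # hold out the last 60 months
      rf = RandomForestRegressor(n_estimators=500, random_state=0)
      rf.fit(train.drop(columns="y"), train["y"])
      rmse = ((rf.predict(test.drop(columns="y")) - test["y"]) ** 2).mean() ** 0.5
      print("out-of-sample RMSE:", rmse)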
  3. By: Korobilis, Dimitris; Pettenuzzo, Davide
    Abstract: As the amount of economic and other data generated worldwide increases vastly, a challenge for future generations of econometricians will be to master efficient algorithms for inference in empirical models with large information sets. This Chapter provides a review of popular estimation algorithms for Bayesian inference in econometrics and surveys alternative algorithms developed in machine learning and computing science that allow for efficient computation in high-dimensional settings. The focus is on scalability and parallelizability of each algorithm, as well as their ability to be adopted in various empirical settings in economics and finance.
    Keywords: MCMC; approximate inference; scalability; parallel computation
    JEL: C11 C15 C49 C88
    Date: 2020–05–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:100165&r=all
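    As a concrete reference point for the estimation algorithms the chapter surveys, the short sketch below implements a random-walk Metropolis sampler for the mean of a Gaussian model; the prior, data and tuning are illustrative assumptions, not taken from the chapter.
      # Random-walk Metropolis sampler for the mean of a Gaussian model with known variance.
      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.normal(1.0, 2.0, size=200)                    # observed sample
      # Log posterior: N(mu, 4) likelihood with a N(0, 100) prior on mu.
      log_post = lambda mu: -0.5 * ((data - mu) ** 2).sum() / 4.0 - 0.5 * mu ** 2 / 100.0

      draws, mu = [], 0.0
      for _ in range(5000):
          prop = mu + rng.normal(0, 0.3)                       # random-walk proposal
          if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
              mu = prop
          draws.append(mu)
      print("posterior mean estimate:", np.mean(draws[1000:]))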
  4. By: Laurent Ferrara; Anna Simoni
    Abstract: We analyse whether, and when, a large set of Google search data can be useful to increase GDP nowcasting accuracy once we control for information contained in official variables. We put forward a new approach that combines variable pre-selection and Ridge regularization and we provide theoretical results on the asymptotic behaviour of the estimator. Empirical results on the euro area show that Google data convey useful information for pseudo-real-time nowcasting of GDP growth during the first four weeks of the quarter, when macroeconomic information is lacking. However, as soon as official data become available, their relative nowcasting power vanishes. In addition, a true real-time analysis confirms that Google data constitute a reliable alternative when official data are lacking.
    Keywords: Nowcasting, Big data, Google search data, Sure Independence Screening, Ridge Regularization
    JEL: C53 C55 E37
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:drm:wpaper:2020-11&r=all
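    The two-step idea, pre-selection followed by Ridge regularization, can be sketched in a few lines of Python; the simulated data, the screening cutoff k and the penalty grid below are illustrative assumptions, not the paper's calibration.
      # Pre-selection by marginal correlation (Sure Independence Screening) followed by Ridge.
      import numpy as np
      from sklearn.linear_model import RidgeCV

      rng = np.random.default_rng(0)
      n, p, k = 60, 300, 20                        # few quarters, many search-term predictors
      X = rng.standard_normal((n, p))              # stand-in for standardized Google search indices
      y = X[:, :5] @ rng.standard_normal(5) + 0.5 * rng.standard_normal(n)   # GDP growth proxy

      # Step 1: screen predictors by absolute marginal correlation with the target.
      Xc, yc = X - X.mean(axis=0), y - y.mean()
      corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
      keep = np.argsort(corr)[-k:]

      # Step 2: Ridge on the screened subset, penalty chosen by cross-validation.
      model = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X[:, keep], y)
      print("selected columns:", sorted(keep), "ridge alpha:", model.alpha_)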
  5. By: Josh Siepel (SPRU - Science and Technology Policy Research - University of Sussex, University of Sussex [London, UK]); Marcus Dejardin (UCL - Université Catholique de Louvain, Université de Namur [Namur])
    Abstract: This paper aims to provide a succinct overview of the important challenges facing researchers seeking to perform firm-level research, along with an overview of the different data sources that may be used, and some techniques that can be employed to ensure that data are robust. An emphasis is put on the linked importance of research design and choice of data. We discuss quantitative data and, more specifically, the measures used to observe firm performance, and present different types of data sources that researchers may use when studying firm-level data, i.e. self-report data, official statistics, commercial data, combinations of data, and Big Data. We examine potential problems with data, from measurement to respondent and researcher errors. Finally, some key points and avenues for future research are briefly reviewed.
    Keywords: Firm Growth,Firm Performance,Methodology,Data Sources,Self-report Data,Official Data,Big Data
    Date: 2020–05–12
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-02571478&r=all
  6. By: Ba Chu (Department of Economics, Carleton University); Shafiullah Qureshi (Department of Economics, Carleton University)
    Abstract: Our proposed time series model with a quartic trend function predicts that the peak of confirmed coronavirus cases has passed in Canada and the US, while the end of the pandemic will come around June 2020 in the best scenario and towards the end of 2020 in the worst scenario. Both the bootstrap distance-based test of independence and the XGBoost algorithm reveal a strong link between the coronavirus case count and relevant Google Trends features (defined by search intensities of various keywords that the public entered in the Google internet search engine during this pandemic).
    Keywords: COVID-19; prediction; machine-learning; google trends
    Date: 2020–05–04
    URL: http://d.repec.org/n?u=RePEc:car:carecp:20-05&r=all
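    To see what the quartic trend component amounts to, the toy sketch below fits a fourth-order polynomial trend to a simulated cumulative case series and extrapolates it; the data are simulated, not the official Canadian or US counts used in the paper.
      # Fit and extrapolate a quartic time trend on a simulated cumulative case series.
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(80)                                          # days since the first recorded case
      cases = 50 * t**2 - 0.3 * t**3 + rng.normal(0, 500, t.size)  # toy cumulative case counts

      coeffs = np.polyfit(t, cases, deg=4)                       # quartic trend function
      horizon = np.arange(80, 140)
      projection = np.polyval(coeffs, horizon)
      print("projected cumulative cases 60 days ahead:", round(float(projection[-1])))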
  7. By: A. Salles (SESSTIM - U1252 INSERM - Aix Marseille Univ - UMR 259 IRD - Sciences Economiques et Sociales de la Santé & Traitement de l'Information Médicale - IRD - Institut de Recherche pour le Développement - AMU - Aix Marseille Université - INSERM - Institut National de la Santé et de la Recherche Médicale); J. Dufour (SESSTIM - U1252 INSERM - Aix Marseille Univ - UMR 259 IRD - Sciences Economiques et Sociales de la Santé & Traitement de l'Information Médicale - IRD - Institut de Recherche pour le Développement - AMU - Aix Marseille Université - INSERM - Institut National de la Santé et de la Recherche Médicale); P. Hassanaly (SESSTIM - U1252 INSERM - Aix Marseille Univ - UMR 259 IRD - Sciences Economiques et Sociales de la Santé & Traitement de l'Information Médicale - IRD - Institut de Recherche pour le Développement - AMU - Aix Marseille Université - INSERM - Institut National de la Santé et de la Recherche Médicale); P. Michel (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique - AMU - Aix Marseille Université); C. Cabot (CHU de Rouen, Département d’informatique et d’information médicales, TIBS - LITIS - Equipe Traitement de l'information en Biologie Santé - LITIS - Laboratoire d'Informatique, de Traitement de l'Information et des Systèmes - ULH - Université Le Havre Normandie - NU - Normandie Université - UNIROUEN - Université de Rouen Normandie - NU - Normandie Université - INSA Rouen Normandie - Institut national des sciences appliquées Rouen Normandie - INSA - Institut National des Sciences Appliquées - NU - Normandie Université); J. Grosjean (LIMICS - Laboratoire d'Informatique Médicale et Ingénierie des Connaissances en e-Santé - UP13 - Université Paris 13 - INSERM - Institut National de la Santé et de la Recherche Médicale - SU - Sorbonne Université, CHU de Rouen, Département d’informatique et d’information médicales)
    Abstract: Introduction: Information and communication technologies gave rise to Web 2.0, characterised by the emergence and use of new collaborative communication tools such as blogs, wikis, RSS feeds and social networks. By appropriating these tools, a participatory medicine based on the sharing of information and experience between professionals, patients and other health actors has developed. Since June 2012, a medical community has been exchanging on Twitter with the hashtag #DocTocToc, contributing to the emergence of e-health on this social network. The objective of this study is to analyse the main themes of the requests posted with the hashtag #DocTocToc by general practitioners between June 2012 and March 2017. Methods: Data collected by web scraping were used to build a corpus of tweets whose authors were identified manually, so that only tweets posted by general practitioners were retained. A pre-processing step transformed word forms that natural language processing software might not recognise. The corpus was analysed with two approaches: a lexical approach using the Iramuteq® software, and terminological indexing with the multi-terminology concept extractor (ECMT) of the Catalogue et index des sites médicaux francophones (CISMeF). Results: Of the 12,716 tweets collected, 7,366 were written by general practitioners and were analysed. The lexical approach identifies two broad lexical worlds, represented as a dendrogram: one related to medico-administrative requests concerning practice management and patients' social care, the other related to strictly medical requests. The terminological indexing method highlights the medical specialties generating requests for tele-expertise (gynaecology, neurology, infectious diseases, paediatrics, cardiology, dermatology) and allows them to be crossed with the purpose of the request (diagnosis or treatment). Conclusion: On Twitter®, the hashtag #DocTocToc is used by general practitioners as a space for informal sharing of health information but also for handling administrative and social problems. #DocTocToc operates as a large-scale practice-exchange group in which physicians rely on the opinions of their peers.
    Keywords: Big data, Communication, e-health, Twitter, Text mining
    Date: 2019–05
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02443385&r=all
  8. By: Diego A. Cerdeiro; Andras Komaromi; Yang Liu; Mamoon Saeed
    Abstract: Maritime data from the Automatic Identification System (AIS) have emerged as a potential source for real time information on trade activity. However, no globally applicable end-to-end solution has been published to transform raw AIS messages into economically meaningful, policy-relevant indicators of international trade. Our paper proposes and tests a set of algorithms to fill this gap. We build indicators of world seaborne trade using raw data from the radio signals that the global vessel fleet emits for navigational safety purposes. We leverage different machine-learning techniques to identify port boundaries, construct port-to-port voyages, and estimate trade volumes at the world, bilateral and within-country levels. Our methodology achieves a good fit with official trade statistics for many countries and for the world in aggregate. We also show the usefulness of our approach for sectoral analyses of crude oil trade, and for event studies such as Hurricane Maria and the effect of measures taken to contain the spread of the novel coronavirus. Going forward, ongoing refinements of our algorithms, additional data on vessel characteristics, and country-specific knowledge should help improve the performance of our general approach for several country cases.
    Date: 2020–05–14
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:20/57&r=all
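    One simple way to approximate the port-identification step described above is to cluster near-stationary vessel positions; the DBSCAN sketch below is a hedged illustration of that idea, with ais_messages.csv as a hypothetical input, and is not the authors' actual algorithm.
      # Cluster near-stationary AIS positions to delineate candidate port areas.
      # ais_messages.csv (columns: lat, lon, speed_over_ground) is a hypothetical file.
      import numpy as np
      from sklearn.cluster import DBSCAN

      ais = np.loadtxt("ais_messages.csv", delimiter=",", skiprows=1)
      stopped = ais[ais[:, 2] < 0.5]                        # keep near-zero speed reports

      coords_rad = np.radians(stopped[:, :2])               # haversine metric expects radians
      km_per_rad = 6371.0
      labels = DBSCAN(eps=2.0 / km_per_rad, min_samples=50, metric="haversine").fit_predict(coords_rad)
      n_ports = len(set(labels)) - (1 if -1 in labels else 0)
      print("candidate port clusters:", n_ports)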
  9. By: Gallego, Jorge; Prem, Mounu; Vargas, Juan F.
    Abstract: The public health crisis caused by the COVID-19 pandemic, coupled with the subsequent economic emergency and social turmoil, has pushed governments to substantially and swiftly increase spending. Because of the pressing nature of the crisis, public procurement rules and procedures have been relaxed in many places in order to expedite transactions. However, this may also create opportunities for corruption. Using contract-level information on public spending from Colombia's e-procurement platform, and a difference-in-differences identification strategy, we find that municipalities classified by a machine learning algorithm as traditionally more prone to corruption react to the pandemic-led spending surge by using a larger proportion of discretionary non-competitive contracts and increasing their average value. This is especially so in the case of contracts to procure crisis-related goods and services. Our evidence suggests that large negative shocks that require fast and massive spending may increase corruption, thus at least partially offsetting the mitigating effects of this fiscal instrument.
    Keywords: Corruption; COVID-19; Public procurement; Machine learning
    JEL: H57 D73 I18 H75
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:rie:riecdt:43&r=all
  10. By: Liyang Tang
    Abstract: Based on a literature review, this paper selects the NARX neural network as its method and constructs specific NARX neural networks for application scenarios involving macroeconomic forecasting, national goal setting and global competitiveness assessment. Through case studies on China, the US and the Eurozone, this study explores how different choices of exogenous inputs (limited and partial versus abundant and comprehensive, a small set of the most relevant inputs versus a large set covering all major aspects of the macro economy, and whole-area inputs versus both whole-area and subdivision-area inputs) affect the forecasting performance of NARX neural networks for specific macroeconomic indicators or indices. Through a case study on Russia, the paper explores how a limited set of the most relevant exogenous inputs, as opposed to an abundant and comprehensive set, influences the prediction performance of NARX neural networks for national goal setting. Finally, comparative studies apply NARX neural networks to forecast the Global Competitiveness Indices (GCIs) of various economies, exploring whether a network trained on GCI-related data from some economies can make sufficiently accurate predictions for other economies, and whether a network trained on data from one type of economy predicts GCIs of economies of the same type more accurately than those of a different type. Based on these applications, the paper provides policy recommendations on using fully trained and validated NARX neural networks to assist, or even replace, the deductive and inductive abilities of the human brain in a variety of appropriate tasks.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.08735&r=all
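    For readers unfamiliar with the NARX setup, the sketch below approximates it by feeding lagged own values and lagged exogenous inputs into a small feedforward network; the simulated data, lag orders and network size are illustrative assumptions rather than the paper's configurations.
      # NARX-style model: lagged own values plus lagged exogenous inputs as features of a small net.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      T = 300
      x = rng.standard_normal((T, 3))                      # exogenous inputs (e.g. macro indicators)
      y = np.zeros(T)
      for t in range(2, T):                                # simulated target with own and exogenous lags
          y[t] = 0.5 * y[t-1] - 0.2 * y[t-2] + np.tanh(x[t-1]).sum() + 0.1 * rng.standard_normal()

      p, q = 2, 1                                          # autoregressive and exogenous lag orders
      rows = range(max(p, q), T)
      features = np.array([np.r_[y[t-p:t], x[t-q:t].ravel()] for t in rows])
      target = y[list(rows)]

      narx = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      narx.fit(features[:-50], target[:-50])
      print("out-of-sample R^2:", narx.score(features[-50:], target[-50:]))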
  11. By: Jiexia Ye; Juanjuan Zhao; Kejiang Ye; Chengzhong Xu
    Abstract: Stock price movement prediction is commonly accepted as a very challenging task due to the extremely volatile nature of financial markets. Previous works typically focus on understanding the temporal dependency of stock price movement based on the history of individual stock movement, but they do not take the complex relationships among involved stocks into consideration. However, it is well known that an individual stock price is correlated with the prices of other stocks. To address this, we propose a deep learning-based framework which utilizes a recurrent neural network (RNN) and a graph convolutional network (GCN) to predict stock movement. Specifically, we first use the RNN to model the temporal dependency of each related stock's price movement based on its own information from past time slices; then we employ the GCN to model the influence of related stocks based on three novel graphs which represent the shareholder, industry and concept relationships among stocks based on investment decisions. Experiments on two stock indexes in the Chinese market show that our model outperforms other baselines. To the best of our knowledge, this is the first work to incorporate multiple relationships among involved stocks into a GCN-based deep learning framework for predicting stock price movement.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.04955&r=all
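    The core graph-convolution operation the paper builds on can be written in a few lines; the NumPy sketch below applies one symmetric-normalized GCN propagation step to a toy stock relationship graph (the RNN encoder and the multi-view combination are omitted).
      # One symmetric-normalized graph-convolution step on a toy stock relationship graph.
      import numpy as np

      rng = np.random.default_rng(0)
      n_stocks, d_in, d_out = 5, 8, 4
      A = np.array([[0, 1, 1, 0, 0],
                    [1, 0, 0, 1, 0],
                    [1, 0, 0, 0, 1],
                    [0, 1, 0, 0, 1],
                    [0, 0, 1, 1, 0]], dtype=float)          # toy relationship adjacency matrix
      H = rng.standard_normal((n_stocks, d_in))              # per-stock features (e.g. from an RNN encoder)
      W = rng.standard_normal((d_in, d_out)) * 0.1            # learnable layer weights

      A_hat = A + np.eye(n_stocks)                            # add self-loops
      D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
      H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)   # ReLU activation
      print(H_next.shape)                                     # (5, 4): updated stock representations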
  12. By: Marc Burri; Daniel Kaufmann
    Abstract: Because macroeconomic data is published with a substantial delay, assessing the health of the economy during the rapidly evolving Covid-19 crisis is challenging. We develop a fever curve for the Swiss economy using publicly available daily financial market and news data. The indicator can be computed with a delay of one day. Moreover, it is highly correlated with macroeconomic data and survey indicators of Swiss economic activity. Therefore, it provides timely and reliable warning signals if the health of the economy takes a turn for the worse.
    Keywords: Covid-19, Leading indicator, Financial market data, News sentiment, Forecasting, Switzerland
    JEL: E32 E37 C53
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:irn:wpaper:20-05&r=all
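    As a hedged illustration of how daily series can be aggregated into a single indicator, the sketch below standardizes and averages hypothetical daily inputs; the input file, the sign convention and the equal weighting are assumptions for illustration, not the authors' construction.
      # Standardize daily series on a rolling window and average them into one indicator.
      # daily_inputs.csv (e.g. returns, spreads, news sentiment) is a hypothetical file,
      # with each column assumed pre-signed so that higher values mean worse conditions.
      import pandas as pd

      daily = pd.read_csv("daily_inputs.csv", index_col=0, parse_dates=True)
      z = (daily - daily.rolling(365, min_periods=60).mean()) / daily.rolling(365, min_periods=60).std()
      fever = z.mean(axis=1)                                # equal-weight composite "fever" curve
      print(fever.tail())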
  13. By: Fergusson, Leopoldo; Saavedra, Santiago; Vargas, Juan F.
    Abstract: Research on deforestation has grown exponentially due to the availability of satellite-based measures of forest cover. One of the most popular is Global Forest Change (GFC). Using GFC, we estimate that the Colombian civil conflict increases ‘forest cover’. Using an alternative source that validates the same remote sensing images on the ground, we find the opposite effect. This occurs because, in spite of its name, GFC measures tree cover, including vegetation other than native forest. Most users of GFC seem unaware of this. In our case, most of the conflicting results are explained by GFC’s misclassification of oil palm crops as ‘forest’. Our findings call for caution when using automated classification of imagery for specific research questions.
    Keywords: Forest Cover; Conflict; Measurement
    JEL: D74 Q23 Q34
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:rie:riecdt:41&r=all
  14. By: Altavilla, Carlo; Boucinha, Miguel; Peydró, José-Luis; Smets, Frank
    Abstract: We analyse the effects of supranational versus national banking supervision on credit supply, and its interactions with monetary policy. For identification, we exploit: (i) a new, proprietary dataset based on 15 European credit registers; (ii) the institutional change leading to the centralisation of European banking supervision; (iii) high-frequency monetary policy surprises; (iv) differences across euro area countries, also vis-à-vis non-euro area countries. We show that supranational supervision reduces credit supply to firms with very high ex-ante and ex-post credit risk, while stimulating credit supply to firms without loan delinquencies. Moreover, the increased risk-sensitivity of credit supply driven by centralised supervision is stronger for banks operating in stressed countries. Exploiting heterogeneity across banks, we find that the mechanism driving the results is the higher quantity and quality of human resources available to the supranational supervisor rather than changes in incentives due to the reallocation of supervisory responsibility to the new institution. Finally, there are crucial complementarities between supervision and monetary policy: centralised supervision offsets excessive bank risk-taking induced by a more accommodative monetary policy stance, but does not offset more productive risk-taking. Overall, we show that using multiple credit registers, for the first time in the literature, is crucial for external validity.
    Keywords: supervision, banking, AnaCredit, monetary policy, Euro Area crisis
    JEL: E51 E52 E58 G01 G21 G28
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:esprep:216793&r=all
  15. By: Enrique Fernandez-Macias (European Commission - JRC); Martina Bisello (Eurofound)
    Abstract: In recent years, the increasing concern about the labour market implications of technological change has led economists to look in more detail at the structure of work content and job tasks. Incorporating insights from other traditions of task analysis, in particular from the labour process approach (Braverman, 1974), as well as from recent research on skills, work organisation and occupational change (see for instance Green, 2013; Cohen, 2016; Fernández-Macías and Hurley, 2017), in this paper we propose a new comprehensive and detailed taxonomy of tasks. Going beyond existing broad classifications, our taxonomy aims at connecting the substantive content of work with its organisational context by answering two key questions: what do people do at work and how do they do their work? For illustrative purposes, we show how our approach allows a better understanding of the impact of new technologies on work, by accounting for relevant ongoing transformations such as the diffusion of artificial intelligence and the unfolding of digital labour platforms.
    Keywords: tasks, technological change, occupations, labour markets, structural change, artificial intelligence, digital labour platforms, Europe.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:ipt:laedte:202004&r=all
  16. By: Bargain, Olivier (University of Aix-Marseille II); Aminjonov, Ulugbek (University of Bordeaux)
    Abstract: While degraded trust and cohesion within a country are often shown to have large socioeconomic impacts, they can also have dramatic consequences when compliance is required for collective survival. We illustrate this point in the context of the COVID-19 crisis. Policy responses all over the world aim to reduce social interaction and limit contagion. Using data on human mobility and political trust at regional level in Europe, we examine whether compliance with these containment policies depends on the level of trust in policy makers prior to the crisis. Using a double difference approach around the time of lockdown announcements, we find that high-trust regions decrease their mobility related to non-necessary activities significantly more than low-trust regions. We also exploit country and time variation in treatment using the daily strictness of national policies. The efficiency of policy stringency in terms of mobility reduction significantly increases with trust. The trust effect is nonlinear and increases with the degree of stringency. We assess how the impact of trust on mobility potentially translates into mortality growth rates.
    Keywords: COVID-19, political trust, policy stringency
    JEL: H12 I12 I18 Z18
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13205&r=all
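    To make the double-difference comparison concrete, a stylized two-by-two difference-in-differences regression is sketched below; mobility.csv and its post_lockdown and high_trust indicator columns are hypothetical stand-ins for the regional mobility and trust data used in the paper.
      # Stylized difference-in-differences around the lockdown announcement,
      # comparing high- and low-trust regions, with standard errors clustered by region.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("mobility.csv", parse_dates=["date"])   # columns: region, date, mobility, high_trust, post_lockdown
      model = smf.ols("mobility ~ post_lockdown * high_trust", data=df).fit(
          cov_type="cluster", cov_kwds={"groups": df["region"]})
      print(model.params["post_lockdown:high_trust"])          # the double-difference estimate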
  17. By: Souta Nakatani (Mitsubishi UFJ Trust Investment Technology Institute Co., Ltd.and Graduate School of Economics, The University of Tokyo); Kiyohiko G. Nishimura (National Graduate Institute for Policy Studies (GRIPS) and The University of Tokyo); Taiga Saito (Faculty of Economics, The University of Tokyo); Akihiko Takahashi (Faculty of Economics, The University of Tokyo)
    Abstract: This paper develops and estimates an interest rate model with investor attitude factors, which are extracted by a text mining method. First, we consider two contrasting attitudes (optimistic versus conservative) towards uncertainties about the Brownian motions driving the economy, develop an interest rate model, and obtain an empirical framework of the economy consisting of permanent and transitory factors. Second, we apply the framework to a bond market under the extremely low interest rate environment of recent years, and show that our three-factor model with level, steepening and flattening factors based on different investor attitudes is capable of explaining the yield curve in the Japanese government bond (JGB) market. Third, text mining of a large text base of daily financial news reports enables us to distinguish between steepening and flattening factors, and from these textual data we can identify events and economic conditions associated with them. We then estimate the yield curve and the three factors using frequencies of relevant word groups chosen from the textual data in addition to observed interest rates. Finally, we show that the estimated three factors, extracted only from bond market data, are able to explain movements in stock markets, in particular the Nikkei 225 index.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2020cf1152&r=all
  18. By: David Bounie (SES - Département Sciences Economiques et Sociales - Télécom ParisTech); Youssouf Camara; John Galbraith
    Abstract: This paper investigates a number of general phenomena connected with consumer behaviour in response to a severe economic shock, using billions of French card transactions measured before and during the COVID-19 epidemic. We examine changes in consumer mobility, anticipatory behaviour in response to announced restrictions, and the contrasts between the responses of online and traditional point-of-sale (off-line) consumption expenditures to the shock. We track hourly, daily and weekly responses as well as estimating an aggregate fixed-period impact effect via a difference-in-differences estimator. The results, particularly at the sectoral level, suggest that recourse to the online shopping option somewhat diminished the overall impact of the shock on consumption expenditure, thereby increasing the resiliency of the economy.
    Keywords: COVID-19, consumption expenditure, consumer mobility, online commerce, resiliency, transaction data
    Date: 2020–05–07
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02566443&r=all
  19. By: Asger Lau Andersen (CEBI, Department of Economics, University of Copenhagen); Emil Toft Hansen (CEBI, Department of Economics, University of Copenhagen); Niels Johannesen (CEBI, Department of Economics, University of Copenhagen); Adam Sheridan (CEBI, Department of Economics, University of Copenhagen)
    Abstract: This paper uses transaction-level customer data from the largest bank in Denmark to estimate the change in consumer spending caused by the COVID-19 pandemic and the resulting shutdown of the Danish economy. We find that aggregate spending was on average 27% below the counterfactual level without the pandemic in the seven weeks following the shutdown. The spending drop was mostly concentrated on goods and services whose supply was directly restricted by the shutdown, suggesting a limited role for spillovers to non-restricted sectors through demand in the short term. The spending drop was larger for individuals with more ex ante exposure to the adverse consequences of the crisis in the form of job loss, wealth destruction, severe disease and disrupted consumption patterns and, most notably, for individuals with an ex post realization of crisis-related unemployment.
    Keywords: COVID-19; consumer spending; pandemic; social distancing; shutdown
    JEL: D12 H31 I18
    Date: 2020–05–25
    URL: http://d.repec.org/n?u=RePEc:kud:kucebi:2018&r=all
  20. By: Catalina Bolancé (Department of Econometrics, Riskcenter-IREA, University of Barcelona, Av. Diagonal, 690, 08034 Barcelona, Spain.)
    Abstract: This paper reviews the main concepts related to the nonparametric estimation of the probability distribution when massive data are available and strongly right-skewed. Specifically, we study data representing positive losses that are highly heterogeneous and may therefore be very small (close to zero) or very large, and may moreover come from different probability distributions. We also show how, even with a large amount of data, censoring and truncation remain a missing-information problem that produces large biases in the estimated values. Some results on parametric estimation are also discussed from the perspective of massive data. Finally, we present some kernel-type estimators, already proposed in the literature, that address difficulties of the more classical kernel estimators when the data contain very extreme values that need to be modelled for risk quantification.
    Keywords: Univariate analysis, Parametric estimation, Nonparametric estimation, Censoring, Truncation, Risk quantification
    JEL: E30 E39 Y10
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:ira:wpaper:202007&r=all
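    As a small illustration of the transformed-kernel idea for right-skewed losses discussed in the survey, the sketch below estimates a classical kernel density on the log scale and maps it back; the simulated lognormal losses are purely illustrative.
      # Transformed kernel density estimate: KDE on log-losses, mapped back by change of variables.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      losses = rng.lognormal(mean=0.0, sigma=1.5, size=5000)   # heterogeneous positive losses

      kde_log = gaussian_kde(np.log(losses))                   # classical KDE on the log scale
      grid = np.linspace(losses.min(), np.quantile(losses, 0.99), 500)
      density = kde_log(np.log(grid)) / grid                   # f_X(x) = f_Y(log x) / x

      mass = float(np.sum(np.diff(grid) * (density[1:] + density[:-1]) / 2))
      print("probability mass over the plotted range (close to 1):", round(mass, 3))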
  21. By: Brodeur, Abel (University of Ottawa); Clark, Andrew E. (Paris School of Economics); Flèche, Sarah (Aix-Marseille University); Powdthavee, Nattavudh (University of Warwick)
    Abstract: The COVID-19 pandemic has led many governments to implement lockdowns. While lockdowns may help to contain the spread of the virus, they may result in substantial damage to population well-being. We use Google Trends data to test whether the lockdowns implemented in Europe and America led to changes in well-being-related topic search terms. Using difference-in-differences and a regression discontinuity design to evaluate the causal effects of lockdown, we find a substantial increase in the search intensity for boredom in Europe and the US. We also find a significant increase in searches for loneliness, worry and sadness, while searches for stress, suicide and divorce fell. Our results suggest that people's mental health may have been severely affected by the lockdown.
    Keywords: boredom, COVID-19, loneliness, well-being
    JEL: I12 I31 J22
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13204&r=all

This nep-big issue is ©2020 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.