nep-big New Economics Papers
on Big Data
Issue of 2020‒08‒17
23 papers chosen by
Tom Coupé
University of Canterbury

  1. Applications of artificial intelligence technologies on mental health research during COVID-19 By Hossain, Md Mahbub; McKyer, E. Lisako J.; Ma, Ping
  2. Measuring uncertainty at the regional level using newspaper text By Christopher Rauh
  3. Towards better understanding of complex machine learning models using Explainable Artificial Intelligence (XAI) - case of Credit Scoring modelling By Marta Kłosok; Marcin Chlebus
  4. AI Watch - Artificial Intelligence in public services: Overview of the use and impact of AI in public services in the EU By Gianluca MISURACA; Colin van Noordt
  5. Predicting prices of S&P500 index using classical methods and recurrent neural networks By Mateusz Kijewski; Robert Ślepaczuk
  6. Artificial Neural Networks Performance in WIG20 Index Options Pricing By Maciej Wysocki; Robert Ślepaczuk
  7. Grounded reality meets machine learning: A deep-narrative analysis framework for energy policy research By Debnath, R.; Darby, S.; Bardhan, R.; Mohaddes, K.; Sunikka-Blank, M.
  8. The potential influence of machine learning and data science on the future of economics: Overview of highly-cited research By Deshpande, Advait
  9. Building(s and) cities: delineating urban areas with a machine learning algorithm By Daniel Arribas-Bel; Miquel-Àngel Garcia-López; Elisabet Viladecans-Marsal
  10. The hard problem of prediction for conflict prevention By Hannes Mueller; Christopher Rauh
  11. Predicting flood insurance claims with hydrologic and socioeconomic demographics via machine learning: exploring the roles of topography, minority populations, and political dissimilarity By Knighton, James; Buchanan, Brian; Guzman, Christian; Elliott, Rebecca; White, Eric; Rahm, Brian
  12. Grade Expectations: How well can we predict future grades based on past performance? By Jake Anders; Catherine Dilnot; Lindsey Macmillan; Gill Wyness
  13. Mind the gap! Machine learning, ESG metrics and sustainable investment By Ariel Lanza; Enrico Bernardini; Ivan Faiella
  14. Measuring Inequality using Geospatial Data By Jaqueson Galimberti; Stefan Pichler; Regina Pleninger
  15. Choosing between explicit cartel formation and tacit collusion – An experiment By Maximilian Andres; Lisa Bruttel; Jana Friedrichsen
  16. Financial intermediation and technology: What’s old, what’s new? By Boot, Arnoud; Hoffmann, Peter; Laeven, Luc; Ratnovski, Lev
  17. Winners and losers from COVID-19: Evidence from Google search data for Egypt By Abay, Kibrom A.; Ibrahim, Hosam
  18. Central banks in parliaments: a text analysis of the parliamentary hearings of the Bank of England, the European Central Bank and the Federal Reserve By Fraccaroli, Nicolò; Giovannini, Alessandro; Jamet, Jean-Francois
  19. Digital Transformation of Public Service and Administration By Mishra, Mukesh Kumar
  20. Competitor Collaboration Before a Crisis By Sea Matilda Bez; Henry Chesbrough
  21. How Did COVID-19 and Stabilization Policies Affect Spending and Employment? A New Real-Time Economic Tracker Based on Private Sector Data By Raj Chetty; John N. Friedman; Nathaniel Hendren; Michael Stepner; The Opportunity Insights Team
  22. Using publicly available remote sensing products to evaluate REDD+ projects in Brazil By Gabriela Demarchi; Subervie Julie; Thibault Catry; Isabelle Tritsch
  23. Using publicly available remote sensing products to evaluate REDD+ projects in Brazil By Gabriela Demarchi; Subervie Julie; Thibault Catry; Isabelle Tritsch

  1. By: Hossain, Md Mahbub; McKyer, E. Lisako J.; Ma, Ping
    Abstract: The coronavirus disease (COVID-19) pandemic has impacted mental health globally. It is essential to deploy advanced research methodologies that may use complex data to draw meaningful inferences, facilitating mental health research and policymaking during this pandemic. Artificial intelligence (AI) technologies offer a wide range of opportunities to leverage advancements in data science in analyzing health records, behavioral data, social media content, and outcomes data on mental health. Several studies have reported the use of AI technologies such as support vector machines, neural networks, latent Dirichlet allocation, decision trees, and clustering to detect and treat depression, schizophrenia, Alzheimer’s disease, and other mental health problems. The applications of such technologies in the context of COVID-19 are still under development, which calls for further deployment of AI technologies in mental health research during this pandemic, using clinical and psychosocial data through technological partnerships and collaborations. Lastly, policy-level commitment and deployment of resources are needed to facilitate the use of robust AI technologies for assessing and addressing mental health problems during the COVID-19 pandemic.
    Date: 2020–06–23
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:w6c9b&r=all
  2. By: Christopher Rauh (Université de Montréal)
    Abstract: In this paper I present a methodology to provide uncertainty measures at the regional level in real time using the full bandwidth of news. In order to do so I download vast amounts of newspaper articles, summarize these into topics using unsupervised machine learning, and then show that the resulting topics foreshadow fluctuations in economic indicators. Given large regional disparities in economic performance and trends within countries, it is particularly important to have regional measures for a policymaker to tailor policy responses. I use a vector-autoregression model for the case of Canada, a large and diverse country, to show that the generated topics are significantly related to movements in economic performance indicators, inflation, and the unemployment rate at the national and provincial level. Evidence is provided that a composite index of the generated diverse topics can serve as a measure of uncertainty. Moreover, I show that some topics are general enough to have homogenous associations across provinces, while others are specific to fluctuations in certain regions.
    Keywords: Machine learning, Latent Dirichlet allocation, Newspaper text, Economic uncertainty, Topic model, Canada
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:mtl:montde:2019-07&r=all
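The unsupervised step described above, summarizing articles into topics with latent Dirichlet allocation, can be sketched in a few lines; the toy corpus and topic count below are illustrative, not the paper's data or pipeline:

```python
# Sketch: extract topics from news text with LDA and obtain per-article
# topic shares of the kind the paper aggregates into an uncertainty index.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = [
    "central bank raises interest rates amid inflation fears",
    "oil prices fall as global demand weakens",
    "provincial unemployment rises after factory closures",
    "uncertainty over trade policy weighs on business investment",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(articles)          # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
shares = lda.fit_transform(dtm)            # per-article topic shares; each row sums to 1

print(shares.shape)
```

In the paper's setup, such per-article topic shares would then be aggregated over time and regions before entering the vector-autoregression.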
  3. By: Marta Kłosok (Faculty of Economic Sciences, University of Warsaw); Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw)
    Abstract: In recent years many scientific journals have widely explored the topic of machine learning interpretability. It is important because the application of Artificial Intelligence is growing rapidly and its excellent performance holds huge potential for many fields. There is also a need to overcome the barriers faced by analysts implementing intelligent systems, the biggest of which is explaining why a model made a certain prediction. This work treats methods for understanding a black-box model from both the global and the local perspective. Numerous model-agnostic methods aimed at interpreting black-box behavior and the predictions generated by these complex structures are analyzed. Among them are: Permutation Feature Importance, Partial Dependence Plots, Individual Conditional Expectation curves, Accumulated Local Effects, techniques that approximate the black-box's predictions for single observations with surrogate models (interpretable white-boxes), and the Shapley values framework. Our survey leads to the question of the extent to which the presented tools enhance model transparency. All of the frameworks are examined in practice on a credit default data use case. The overview shows that each method has some limitations, but overall almost all of the summarized techniques produce reliable explanations and contribute to greater transparency and accountability of decision systems.
    Keywords: machine learning, explainable Artificial Intelligence, visualization techniques, model interpretation, variable importance
    JEL: C25
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2020-18&r=all
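As a concrete instance of one surveyed technique, Permutation Feature Importance can be computed in a few lines; the synthetic data below stands in for the paper's credit default use case:

```python
# Shuffle each feature in turn and measure the drop in model score:
# a large drop means the black-box relies heavily on that feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```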
  4. By: Gianluca MISURACA (European Commission - JRC); Colin van Noordt (Tallinn Technology University)
    Abstract: This report is published in the context of AI Watch, the European Commission knowledge service to monitor the development, uptake and impact of Artificial Intelligence (AI) for Europe, launched in December 2018 as part of the Coordinated Plan on the Development and Use of Artificial Intelligence Made in Europe. The report presents the findings from the mapping of the use of AI in support of public services in the EU. The analysis contributes to landscaping the current state of the art in the field and provides a preliminary overview of Member States' efforts to integrate AI into their government operations and adopt AI-enabled innovations in the public sector. From the analysis it emerges that interest in the use of AI within governments is growing, both to support the redesign of internal processes and policy-making mechanisms and to improve public service delivery and engagement with citizens. Governments across the EU are exploring the potential of AI to improve policy design and evaluation, while reorganising internal management at all governance levels. Indeed, when used in a responsible way, the combination of new, large data sources with advanced machine learning algorithms could radically improve the operating methods of the public sector, paving the way for pro-active public service delivery models and relieving resource-constrained organisations of mundane and repetitive tasks.
    Keywords: Artificial Intelligence, Public Services, Digital Governance, Innovation, Public Sector, European Union
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc120399&r=all
  5. By: Mateusz Kijewski (Quantitative Finance Research Group; Faculty of Economic Sciences, University of Warsaw); Robert Ślepaczuk (Quantitative Finance Research Group; Faculty of Economic Sciences, University of Warsaw)
    Abstract: This study implements algorithmic investment strategies with buy/sell signals based on classical methods and a recurrent neural network model (LSTM). The research compares the performance of the investment algorithms on the time series of the S&P 500 index covering 20 years of data, from 2000 to 2020. This paper presents an approach for dynamic optimization of parameters during the backtesting process using a rolling training-testing window. Every method was tested for robustness to changes in parameters and evaluated with appropriate performance statistics, e.g. Information Ratio and Maximum Drawdown. A combination of signals from the different methods was stable and outperformed the Buy & Hold benchmark, doubling its returns at the same level of risk. Detailed sensitivity analysis revealed that the classical methods using a rolling training-testing window were significantly more robust to changes in parameters than the LSTM model, in which hyperparameters were selected heuristically.
    Keywords: machine learning, recurrent neural networks, long short-term memory model, time series analysis, algorithmic investment strategies, systematic transactional systems, technical analysis, ARIMA model
    JEL: C4 C14 C45 C53 C58 G13
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2020-27&r=all
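The rolling training-testing window the abstract refers to can be sketched generically; the window lengths and the placeholder trend rule below are illustrative, not the paper's models:

```python
# Walk-forward skeleton: fit on train_len days, trade the next test_len days,
# then slide the window forward and re-fit.
import numpy as np

def rolling_backtest(prices, train_len=252, test_len=63):
    """Return a +1/-1 position signal for each day (0 before the first window)."""
    signals = np.zeros(len(prices))
    for start in range(0, len(prices) - train_len - test_len + 1, test_len):
        train = prices[start:start + train_len]
        # Placeholder "model": trade the sign of the training-window trend.
        direction = np.sign(train[-1] - train[0])
        signals[start + train_len:start + train_len + test_len] = direction
    return signals

prices = np.cumsum(np.random.default_rng(0).normal(0.01, 1.0, 1000)) + 100
sig = rolling_backtest(prices)
print(sig[250:256])
```

In the paper's setup, the placeholder rule would be replaced by the classical methods or the LSTM, with parameters re-optimized on each training window.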
  6. By: Maciej Wysocki (Quantitative Finance Research Group; Faculty of Economic Sciences, University of Warsaw); Robert Ślepaczuk (Quantitative Finance Research Group; Faculty of Economic Sciences, University of Warsaw)
    Abstract: In this paper the performance of artificial neural networks in option pricing is analyzed and compared with the results obtained from the Black-Scholes-Merton model based on historical volatility. The results are compared using various error metrics calculated separately for three moneyness ranges. A market data-driven approach is taken: the neural network is trained and tested on real-world data from the Warsaw Stock Exchange. The artificial neural network does not provide more accurate option prices; the Black-Scholes-Merton model turned out to be more precise and more robust to varying market conditions. In addition, the bias of the forecasts obtained from the neural network differs significantly between moneyness states.
    Keywords: option pricing, machine learning, artificial neural networks, implied volatility, supervised learning, index options, Black-Scholes-Merton model
    JEL: C4 C14 C45 C53 C58 G13
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2020-19&r=all
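The Black-Scholes-Merton benchmark the network is compared against is the textbook call-price formula; a minimal implementation follows (no dividends; the parameters are illustrative, not WIG20 data):

```python
# Textbook Black-Scholes-Merton European call price.
from math import log, sqrt, exp
from statistics import NormalDist

def bsm_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# At-the-money call, 1 year to expiry, 2% rate, 20% historical volatility
print(round(bsm_call(S=100, K=100, T=1.0, r=0.02, sigma=0.20), 2))
```

In the study, sigma would be the historical volatility estimate, and moneyness S/K defines the ranges over which the errors are compared.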
  7. By: Debnath, R.; Darby, S.; Bardhan, R.; Mohaddes, K.; Sunikka-Blank, M.
    Abstract: Text-based data sources like narratives and stories have become increasingly popular as critical insight generators in energy research and social science. However, their implications for policy application usually remain superficial and fail to fully exploit the state-of-the-art resources which the digital era holds for text analysis. This paper illustrates the potential of deep-narrative analysis in energy policy research using text analysis tools from the cutting-edge domain of computational social science, notably topic modelling. We argue that a nested application of topic modelling and grounded theory in narrative analysis promises advances in areas where manual-coding-driven narrative analysis has traditionally struggled with directionality biases, scaling, systematisation and repeatability. The nested application of the topic model and grounded theory goes beyond the frequentist approach of narrative analysis and introduces insight-generation capabilities based on the probability distribution of words and topics in a text corpus. In this manner, our proposed methodology deconstructs the corpus and enables the analyst to answer research questions based on the foundational elements of the text data structure. We verify theoretical compatibility through a meta-analysis of a state-of-the-art bibliographic database on energy policy, narratives and computational social science. Furthermore, we establish a proof-of-concept using a narrative-based case study on energy externalities in slum rehabilitation housing in Mumbai, India. We find that the nested application addresses the literature gap on the need for multidisciplinary methodologies that can systematically include qualitative evidence in policymaking.
    Keywords: energy policy, narratives, topic modelling, computational social science, text analysis, methodological framework
    JEL: Q40 Q48 R28
    Date: 2020–07–14
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:2062&r=all
  8. By: Deshpande, Advait
    Abstract: This working paper provides an overview of the potential influence of machine learning and data science on economics as a field. The findings presented are drawn from highly cited research which was identified based on Google Scholar searches. For each of the articles reviewed, this working paper covers what is likely to change and what is likely to remain unchanged in economics due to the emergence and increasing influence of machine learning and data science methods.
    Date: 2020–04–30
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:9nh8g&r=all
  9. By: Daniel Arribas-Bel (University of Liverpool); Miquel-Àngel Garcia-López (Universitat Autònoma de Barcelona & IEB); Elisabet Viladecans-Marsal (Universitat de Barcelona & IEB)
    Abstract: This paper proposes a novel methodology for delineating urban areas based on a machine learning algorithm that groups buildings within portions of space of sufficient density. To do so, we use the precise geolocation of all 12 million buildings in Spain. We exploit building heights to create a new dimension for urban areas, namely, the vertical land, which provides a more accurate measure of their size. To better understand their internal structure and to illustrate an additional use for our algorithm, we also identify employment centers within the delineated urban areas. We test the robustness of our method and compare our urban areas to other delineations obtained using administrative borders and commuting-based patterns. We show that: 1) our urban areas are more similar to the commuting-based delineations than the administrative boundaries but that they are more precisely measured; 2) when analyzing the urban areas’ size distribution, Zipf’s law appears to hold for their population, surface and vertical land; and 3) the impact of transportation improvements on the size of the urban areas is not underestimated.
    Keywords: Buildings, urban areas, city size, transportation, machine learning
    JEL: R12 R14 R2 R4
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:ieb:wpaper:doc2019-10&r=all
  10. By: Hannes Mueller (Institut d'Analisi Economica (CSIC)); Christopher Rauh (Université de Montréal)
    Abstract: There is a rising interest in conflict prevention and this interest provides a strong motivation for better conflict forecasting. A key problem of conflict forecasting for prevention is that predicting the start of conflict in previously peaceful countries is extremely hard. To make progress in this hard problem this project exploits both supervised and unsupervised machine learning. Specifically, the latent Dirichlet allocation (LDA) model is used for feature extraction from 3.8 million newspaper articles and these features are then used in a random forest model to predict conflict. We find that several features are negatively associated with the outbreak of conflict and these gain importance when predicting hard onsets. This is because the decision tree uses the text features in lower nodes where they are evaluated conditionally on conflict history, which allows the random forest to adapt to the hard problem and provides useful forecasts for prevention.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:mtl:montde:2019-02&r=all
  11. By: Knighton, James; Buchanan, Brian; Guzman, Christian; Elliott, Rebecca; White, Eric; Rahm, Brian
    Abstract: Current research on flooding risk often focuses on understanding hazards, de-emphasizing the complex pathways of exposure and vulnerability. We investigated the use of both hydrologic and social demographic data for flood exposure mapping with Random Forest (RF) regression and classification algorithms trained to predict both parcel- and tract-level flood insurance claims within New York State, US. Topographic characteristics best described flood claim frequency, but RF prediction skill was improved at both spatial scales when socioeconomic data was incorporated. Substantial improvements occurred at the tract-level when the percentage of minority residents, housing stock value and age, and the political dissimilarity index of voting precincts were used to predict insurance claims. Census tracts with higher numbers of claims and greater densities of low-lying tax parcels tended to have low proportions of minority residents, newer houses, and less political similarity to state level government. We compared this data-driven approach and a physically-based pluvial flood routing model for prediction of the spatial extents of flooding claims in two nearby catchments of differing land use. The floodplain we defined with physically based modeling agreed well with existing federal flood insurance rate maps, but underestimated the spatial extents of historical claim generating areas. In contrast, RF classification incorporating hydrologic and socioeconomic demographic data likely overestimated the flood-exposed areas. Our research indicates that quantitative incorporation of social data can improve flooding exposure estimates.
    Keywords: FEMA; flooding; flooding insurance claims; LIS-FLOOD; random forest; socio-hydrology; vulnerability
    JEL: R14 J01
    Date: 2020–10–15
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:105761&r=all
  12. By: Jake Anders (Centre for Education Policy and Equalising Opportunities, UCL Institute of Education, University College London); Catherine Dilnot (Oxford Brookes Business School); Lindsey Macmillan (Centre for Education Policy and Equalising Opportunities, UCL Institute of Education, University College London); Gill Wyness (Centre for Education Policy and Equalising Opportunities, UCL Institute of Education, University College London)
    Abstract: The Covid-19 pandemic has led to unprecedented disruption of England's education system, including the cancellation of all formal examinations. Instead of sitting exams, the class of 2020 will be assigned "calculated grades" based on predictions by their teachers. However, teacher predictions of pupil grades are a common feature of the English education system, with such predictions forming the basis of university applications in normal years. But previous research has shown these predictions are highly inaccurate, creating concern for teachers, pupils and parents. In this paper, we ask whether it is possible to improve on teachers' predictions using detailed measures of pupils' past performance and non-linear and machine learning approaches. Despite lacking teachers' informal knowledge, our models make modest improvements on the accuracy of teacher predictions, with around 1 in 4 pupils being correctly predicted. We show that predictions are improved where we have information on 'related' GCSEs. We also find heterogeneity in the ability to predict successfully, according to student achievement, school type and subject of study. Notably, high-achieving non-selective state school pupils are more likely to be under-predicted than their selective state and private school counterparts. Overall, the low rates of prediction, regardless of the approach taken, raise the question as to why predicted grades form such a crucial part of our education system.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:ucl:cepeow:20-14&r=all
  13. By: Ariel Lanza (Kellogg School of Management, Northwestern University (PhD student)); Enrico Bernardini (Banca d'Italia); Ivan Faiella (Banca d'Italia)
    Abstract: This work proposes a novel approach for overcoming the current inconsistencies in ESG scores by using Machine Learning (ML) techniques to identify those indicators that better contribute to the construction of efficient portfolios. ML can achieve this result without needing a model-based methodology, typical of the modern portfolio theory approaches. The ESG indicators identified by our approach show a discriminatory power that also holds after accounting for the contribution of the style factors identified by the Fama-French five-factor model and the macroeconomic factors of the BIRR model. The novelty of the paper is threefold: a) the large array of ESG metrics analysed, b) the model-free methodology ensured by ML and c) the disentangling of the contribution of ESG-specific metrics to the portfolio performance from both the traditional style and macroeconomic factors. According to our results, more information content may be extracted from the available raw ESG data for portfolio construction purposes and half of the ESG indicators identified using our approach are environmental. Among the environmental indicators, some refer to companies' exposure and ability to manage climate change risk, namely the transition risk.
    Keywords: portfolio construction, factor models, sustainable investment, ESG, machine learning
    JEL: C63 G11 Q56
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:bdi:opques:qef_561_20&r=all
  14. By: Jaqueson Galimberti (Faculty of Business, Economics and Law at AUT University); Stefan Pichler (KOF Swiss Economic Institute, ETH Zurich); Regina Pleninger (KOF Swiss Economic Institute, ETH Zurich)
    Abstract: The main challenge in studying economic inequality is limited data availability, which is particularly problematic in developing countries. We construct a measure of economic inequality for 234 countries and territories from 1992 to 2013 using satellite data on nighttime light emissions as well as gridded population data. Key methodological innovations include the use of varying levels of data aggregation, and a parsimonious calibration of the lights-prosperity relationship to match traditional inequality measures based on income data. Indeed, we obtain a measure that is significantly correlated with cross-country variation in income inequality. Subsequently, we provide three applications of the data in the fields of health economics and international finance. Our results show that light- and income-based inequality measures lead to similar results in terms of cross-country correlations, but not for the dynamics of inequality, suggesting that the light-based measure captures more enduring features of economic activity that are not directly reflected in income.
    Keywords: Nighttime lights, inequality, gridded population
    JEL: D63 E01 I14 O11 O47 O57
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:aut:wpaper:202007&r=all
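A lights-based inequality measure of the kind described can be illustrated with a population-weighted Gini over per-capita light emissions across grid cells; the numbers and the simple formula below are a toy stand-in for the paper's calibrated measure:

```python
# Population-weighted Gini coefficient over grid-cell light-per-capita values.
import numpy as np

def weighted_gini(x, w):
    """Gini of values x with weights w (e.g. cell populations); 0 = equality."""
    order = np.argsort(x)
    x = np.asarray(x, float)[order]
    w = np.asarray(w, float)[order]
    cum_w = np.cumsum(w)
    cum_xw = np.cumsum(x * w)
    prev_xw = cum_xw - x * w
    # Trapezoid approximation of twice the area between the Lorenz curve and equality
    return 1 - np.sum(w * (cum_xw + prev_xw)) / (cum_w[-1] * cum_xw[-1])

lights = np.array([0.1, 0.5, 2.0, 8.0])  # light per capita in four cells
pop = np.array([400, 300, 200, 100])     # cell populations
print(round(weighted_gini(lights, pop), 3))
```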
  15. By: Maximilian Andres (University of Potsdam); Lisa Bruttel (University of Potsdam); Jana Friedrichsen (HU Berlin, WZB Berlin Social Science Center, DIW Berlin)
    Abstract: Numerous studies investigate which sanctioning institutions prevent cartel formation but little is known as to how these sanctions work. We contribute to understanding the inner workings of cartels by studying experimentally the effect of sanctioning institutions on firms’ communication. Using machine learning to organize the chat communication into topics, we find that firms are significantly less likely to communicate explicitly about price fixing when sanctioning institutions are present. At the same time, average prices are lower when communication is less explicit. A mediation analysis suggests that sanctions are effective in hindering cartel formation not only because they introduce a risk of being fined but also by reducing the prevalence of explicit price communication.
    Keywords: cartel, collusion, communication, machine learning, experiment
    JEL: C92 D43 L41
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:pot:cepadp:19&r=all
  16. By: Boot, Arnoud; Hoffmann, Peter; Laeven, Luc; Ratnovski, Lev
    Abstract: We study the effects of technological change on financial intermediation, distinguishing between innovations in information (data collection and processing) and communication (relationships and distribution). Both follow historic trends towards an increased use of hard information and less in-person interaction, which are accelerating rapidly. We point to more recent innovations, such as the combination of data abundance and artificial intelligence, and the rise of digital platforms. We argue that in particular the rise of new communication channels can lead to the vertical and horizontal disintegration of the traditional bank business model. Specialized providers of financial services can chip away at activities that do not rely on access to balance sheets, while platforms can interpose themselves between banks and customers. We discuss limitations to these challenges, and the resulting policy implications.
    Keywords: communication, financial innovation, financial intermediation, fintech, information
    JEL: G20 G21 E58 O33
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20202438&r=all
  17. By: Abay, Kibrom A.; Ibrahim, Hosam
    Abstract: Evolving pieces of evidence show that services are hardest hit by the COVID-19 pandemic, both globally and in Egypt. Employing Google search data, we examine the implications of COVID-19 on demand for various services in Egypt.
    • We find that demand for those services that require face-to-face interaction, including hotels and restaurants, air travel and tourism services, dipped significantly after Egypt detected the first COVID-19 case, and more so after the Egyptian government introduced major restrictions and curfews. For instance, in the first two months of the outbreak of the pandemic, February and March, demand for hotel and restaurant services contracted by about 70 percent.
    • In contrast, demand for services that substitute for or reduce personal interactions, such as information and communications technologies (ICT) and delivery services, has enjoyed a significant boost. Demand for ICT services tripled, while demand for delivery services doubled in the four months since the outbreak of the pandemic.
    • Intuitively, these results suggest that individuals and enterprises operating in these sectors are expected to experience heterogeneous impacts and damages associated with the pandemic. Our results, along with other evolving evidence, reinforce that those services and sectors negatively affected by the outbreak and spread of COVID-19 deserve attention.
    • Finally, our analysis highlights the potential of near real-time "big data" to substitute and complement conventional data sources to estimate economic impacts and, hence, inform immediate and medium-term policy responses.
    Keywords: EGYPT, ARAB COUNTRIES, MIDDLE EAST, SOUTHWESTERN ASIA, ASIA, coronavirus, coronavirus disease, Coronavirinae, internet, Information and Communication Technologies (icts), demand, recreation, pandemics, technology, Covid-19, Google searching, Google trends, online search, Google search, lockdown
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:fpr:menapn:8&r=all
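The kind of comparison the note draws, indexing search volumes to a pre-outbreak baseline, can be illustrated with invented numbers (these are not the note's data):

```python
# Index two toy monthly search-volume series to their pre-outbreak average,
# so post-outbreak values read directly as multiples of the baseline.
hotel_searches = [100, 95, 90, 40, 30, 28]   # monthly index, Jan-Jun (invented)
ict_searches = [20, 22, 25, 55, 60, 62]

def relative_to_baseline(series, baseline_months=2):
    base = sum(series[:baseline_months]) / baseline_months
    return [round(v / base, 2) for v in series]

print(relative_to_baseline(hotel_searches))
print(relative_to_baseline(ict_searches))
```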
  18. By: Fraccaroli, Nicolò; Giovannini, Alessandro; Jamet, Jean-Francois
    Abstract: As the role of central banks expanded, demand for public scrutiny of their actions increased. This paper investigates whether parliamentary hearings, the main tool to hold central banks accountable, are fit for this purpose. Using text analysis, it detects the topics and sentiments in parliamentary hearings of the Bank of England, the European Central Bank and the Federal Reserve from 1999 to 2019. It shows that, while central bank objectives play the most relevant role in determining the topic, unemployment is negatively associated with the focus of hearings on price stability. Sentiments are more negative when uncertainty is higher and when inflation is more distant from the central bank’s inflation aim. These findings suggest that parliamentarians use hearings to scrutinise the performance of central banks in line with their objectives and economic developments, but also that uncertainty is associated with a higher perceived risk of under-performance of central banks.
    Keywords: central bank accountability, monetary policy, text analysis, uncertainty
    JEL: E02 E52 E58
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20202442&r=all
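Sentiment detection in hearings is typically lexicon-based; a toy sketch with invented word lists follows (the paper's actual lexicon and method are not reproduced here):

```python
# Toy dictionary-based sentiment: (positive - negative) hits over total hits.
POSITIVE = {"stability", "growth", "confidence", "recovery"}
NEGATIVE = {"uncertainty", "risk", "unemployment", "crisis"}

def sentiment(text):
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)  # in [-1, 1]; 0 if no lexicon hits

print(sentiment("uncertainty and risk weigh on growth"))
```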
  19. By: Mishra, Mukesh Kumar
    Abstract: This paper presents the results of a process-tracing study of digital transformation in India. As the world embraces Revolution 4.0, emerging technologies such as 5G and Artificial Intelligence, among others, will transform governance efficiency and effectiveness. As the current revolution sweeps the world and the IoT rides the wave, the reality is that governments of the future are destined to be digital by default. We need to prepare future administrations for this digital transformation. Governments have to set themselves political objectives to achieve greater trust in the system, including through responsiveness and transparency, and by providing opportunities for greater engagement by service users and citizens in general.
    Keywords: Digital Transformation, Smart Administration
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:esprep:222522&r=all
  20. By: Sea Matilda Bez (MRM - Montpellier Research in Management - UM1 - Université Montpellier 1 - UM3 - Université Paul-Valéry - Montpellier 3 - UM2 - Université Montpellier 2 - Sciences et Techniques - UPVD - Université de Perpignan Via Domitia - Groupe Sup de Co Montpellier (GSCM) - Montpellier Business School - UM - Université de Montpellier, Labex Entreprendre - UM - Université de Montpellier); Henry Chesbrough (University of California [Berkeley] - University of California)
    Abstract: For artificial intelligence (AI) technology to impact society positively, the major AI companies must coordinate their efforts and agree on safe practices. The social legitimacy of AI development depends on building a consensus among AI companies to prevent its potentially damaging downsides. Consortia like the Partnership on AI (PAI) aim to have AI competitors collaborate to flag risks in AI development and create solutions to manage those risks. PAI can apply valuable lessons learned from other industries about how to facilitate collective action but do so proactively rather than after the fact. The Dynamic Capabilities Framework of "sensing, seizing, and transforming" provides a process map for the AI industry to create processes to reduce the risk of a major disaster or crisis.
    Keywords: Artificial intelligence,Dynamic capabilities,Competitor collaboration
    Date: 2020–05–03
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02565068&r=all
  21. By: Raj Chetty; John N. Friedman; Nathaniel Hendren; Michael Stepner; The Opportunity Insights Team
    Abstract: We build a publicly available platform that tracks economic activity at a granular level in real time using anonymized data from private companies. We report daily statistics on consumer spending, business revenues, employment rates, and other key indicators disaggregated by county, industry, and income group. Using these data, we study the mechanisms through which COVID-19 affected the economy by analyzing heterogeneity in its impacts across geographic areas and income groups. We first show that high-income individuals reduced spending sharply in mid-March 2020, particularly in areas with high rates of COVID-19 infection and in sectors that require physical interaction. This reduction in spending greatly reduced the revenues of businesses that cater to high-income households in person, notably small businesses in affluent ZIP codes. These businesses laid off most of their low-income employees, leading to a surge in unemployment claims in affluent areas. Building on this diagnostic analysis, we use event study designs to estimate the causal effects of policies aimed at mitigating the adverse impacts of COVID. State-ordered reopenings of economies have little impact on local employment. Stimulus payments to low-income households increased consumer spending sharply, but had modest impacts on employment in the short run, perhaps because very little of the increased spending flowed to businesses most affected by the COVID-19 shock. Paycheck Protection Program loans have also had little impact on employment at small businesses. These results suggest that traditional macroeconomic tools – stimulating aggregate demand or providing liquidity to businesses – may have diminished capacity to restore employment when consumer spending is constrained by health concerns. During a pandemic, it may be more fruitful to mitigate economic hardship through social insurance.
More broadly, this analysis illustrates how real-time economic tracking using private sector data can help rapidly identify the origins of economic crises and facilitate ongoing evaluation of policy impacts.
    JEL: E0 H0 J0
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:27431&r=all
  22. By: Gabriela Demarchi (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier - Institut Agro - Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Subervie Julie (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier - Institut Agro - Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Thibault Catry (UMR 228 Espace-Dev, Espace pour le développement - IRD - Institut de Recherche pour le Développement - UPVD - Université de Perpignan Via Domitia - AU - Avignon Université - UR - Université de La Réunion - UM - Université de Montpellier - UG - Université de Guyane - UA - Université des Antilles); Isabelle Tritsch (UMR 228 Espace-Dev, Espace pour le développement - IRD - Institut de Recherche pour le Développement - UPVD - Université de Perpignan Via Domitia - AU - Avignon Université - UR - Université de La Réunion - UM - Université de Montpellier - UG - Université de Guyane - UA - Université des Antilles)
    Abstract: The continuation and improvement of REDD+ projects for curbing deforestation require rigorous impact evaluations of the effectiveness of existing on-the-ground interventions. Today, a number of global and regional remote sensing (RS) products are publicly available for detecting changes in forest cover worldwide. In this study, we assess the suitability of using these readily available products to evaluate the impact of local REDD+ projects targeting smallholders (owning plots of less than 100 ha) in the Brazilian Amazon. Firstly, we reconstruct forest loss between 2008 and 2017 for 17,066 farms located in the Transamazonian region, using data derived from two land-cover change datasets: Global Forest Change (GFC) and the Amazon Deforestation Monitoring Project (PRODES). Secondly, we evaluate the consistency between the two sources of data. Lastly, we estimate the long-term impact of a REDD+ project using both RS products. Results suggest that the deforestation estimates from the two datasets are statistically different and that GFC detects systematically higher rates of deforestation than PRODES. However, we estimate that an average of about 2 ha of forest was saved on each participating farm during the first years of the program, regardless of the source of data. These results suggest that these products may not be suitable for accurately monitoring and measuring deforestation at the farm level, but that they can be a useful source of data for impact assessments of forest conservation projects.
    Keywords: remote sensing products,deforestation,impact evaluation,Brazilian Amazon,REDD+
    Date: 2020–07–13
    URL: http://d.repec.org/n?u=RePEc:hal:wpceem:hal-02898225&r=all
  23. By: Gabriela Demarchi (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier - Institut Agro - Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Subervie Julie (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier - Institut Agro - Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Thibault Catry (UMR 228 Espace-Dev, Espace pour le développement - IRD - Institut de Recherche pour le Développement - UPVD - Université de Perpignan Via Domitia - AU - Avignon Université - UR - Université de La Réunion - UM - Université de Montpellier - UG - Université de Guyane - UA - Université des Antilles); Isabelle Tritsch (UMR 228 Espace-Dev, Espace pour le développement - IRD - Institut de Recherche pour le Développement - UPVD - Université de Perpignan Via Domitia - AU - Avignon Université - UR - Université de La Réunion - UM - Université de Montpellier - UG - Université de Guyane - UA - Université des Antilles)
    Abstract: The continuation and improvement of REDD+ projects for curbing deforestation require rigorous impact evaluations of the effectiveness of existing on-the-ground interventions. Today, a number of global and regional remote sensing (RS) products are publicly available for detecting changes in forest cover worldwide. In this study, we assess the suitability of using these readily available products to evaluate the impact of local REDD+ projects targeting smallholders (owning plots of less than 100 ha) in the Brazilian Amazon. Firstly, we reconstruct forest loss between 2008 and 2017 for 17,066 farms located in the Transamazonian region, using data derived from two land-cover change datasets: Global Forest Change (GFC) and the Amazon Deforestation Monitoring Project (PRODES). Secondly, we evaluate the consistency between the two sources of data. Lastly, we estimate the long-term impact of a REDD+ project using both RS products. Results suggest that the deforestation estimates from the two datasets are statistically different and that GFC detects systematically higher rates of deforestation than PRODES. However, we estimate that an average of about 2 ha of forest was saved on each participating farm during the first years of the program, regardless of the source of data. These results suggest that these products may not be suitable for accurately monitoring and measuring deforestation at the farm level, but that they can be a useful source of data for impact assessments of forest conservation projects.
    Keywords: remote sensing products,deforestation,impact evaluation,Brazilian Amazon,REDD+
    Date: 2020–07–13
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02898225&r=all

This nep-big issue is ©2020 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.