nep-big New Economics Papers
on Big Data
Issue of 2022‒04‒04
seventeen papers chosen by
Tom Coupé
University of Canterbury

  1. Explainable Artificial Intelligence: interpreting default forecasting models based on Machine Learning By Giuseppe Cascarino; Mirko Moscatelli; Fabio Parlapiano
  2. Volatility forecasting with machine learning and intraday commonality By Chao Zhang; Yihuang Zhang; Mihai Cucuringu; Zhongmin Qian
  3. Stock Embeddings: Learning Distributed Representations for Financial Assets By Rian Dolphin; Barry Smyth; Ruihai Dong
  4. Interpolation of temporal biodiversity change, loss, and gain across scales: a machine learning approach By Keil, Petr; Chase, Jonathan
  5. Reciprocity in Machine Learning By Mukund Sundararajan; Walid Krichene
  6. Predicting refugee flows from Ukraine with an approach to Big (Crisis) Data: a new opportunity for refugee and humanitarian studies By Jurić, Tado
  7. Stripping the Discount Curve - a Robust Machine Learning Approach By Damir Filipović; Markus Pelger; Ye Ye
  8. Inteligencia artificial: una reevaluación By Andrés Fernández Díaz; Benito Rodríguez Mallol
  9. Vers une meilleure compréhension de la transformation numérique optimisée par l'IA et de ses implications pour les PME manufacturières au Canada - Une recherche qualitative exploratoire By Amir Taherizadeh; Catherine Beaudry
  10. JAQ of All Trades: Job Mismatch, Firm Productivity and Managerial Quality By Luca Coraggio; Marco Pagano; Annalisa Scognamiglio; Joacim Tåg
  11. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach By Matthias Lalisse
  12. An SMP-Based Algorithm for Solving the Constrained Utility Maximization Problem via Deep Learning By Kristof Wiedermann
  13. The determinants of AI innovation across European firms By Igna, Ioana; Venturini, Francesco
  14. The Ebb of Fiat and the Flow of Cryptocurrency By De Castro, Angelo
  15. Market Power and Artificial Intelligence Work on Online Labour Markets By DUCH BROWN Nestor; GOMEZ-HERRERA Estrella; MUELLER-LANGER Frank; TOLAN Songul
  16. Do sustainable company stock prices increase with ESG scrutiny? Evidence using social media. By Kvam, Emilie; Molnar, Peter; Wankel, Ingvild; Odegaard, Bernt Arne
  17. Personalized Pricing, Competition and Welfare By Harold Houba; Evgenia Motchenkova; Hui Wang

  1. By: Giuseppe Cascarino (Bank of Italy); Mirko Moscatelli (Bank of Italy); Fabio Parlapiano (Bank of Italy)
    Abstract: Forecasting models based on machine learning (ML) algorithms have been shown to outperform traditional models in several applications. The lack of an easily interpretable functional form, however, is a major challenge for their adoption, especially when knowledge of the estimated relationships and an explanation of individual forecasts are needed, for instance due to regulatory requirements or when forecasts are used in policy making. We apply some of the most established methods from the eXplainable Artificial Intelligence (XAI) literature to shed light on the random forest corporate default forecasting model in Moscatelli et al. (2019), applied to Italian non-financial firms. The methods provide insight into the relative importance of financial and credit variables for predicting firms’ financial distress. We complement the analysis by showing how the importance of these variables in explaining default risk changes over the period 2009-19. When financial conditions deteriorate, the variables characterized by a more complex relationship with financial distress, such as firms’ liquidity and indebtedness indicators, become more important in predicting borrowers’ defaults. We also discuss how ML models could enhance the accuracy of credit assessment for borrowers with less developed credit relationships, such as smaller firms.
    Keywords: explainable artificial intelligence, model-agnostic explainability, artificial intelligence, machine learning, credit scoring, fintech
    JEL: G2 C52 C55 D83
    Date: 2022–03
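As a hedged illustration of the model-agnostic XAI idea the paper applies (synthetic data and scikit-learn's permutation importance, not the authors' exact methods or their Italian firm data), one might rank the drivers of a random-forest default classifier like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins for firm indicators (e.g. liquidity, leverage, ...)
X = rng.normal(size=(n, 4))
# Default risk driven mainly by the first two columns
logits = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * X[:, 2]
y = (logits + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Model-agnostic importance: accuracy drop when one feature is shuffled
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("features ranked by importance:", ranking.tolist())
```

Because the importance is computed from predictions alone, the same procedure works for any black-box forecaster, which is what makes it "model-agnostic".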
  2. By: Chao Zhang; Yihuang Zhang; Mihai Cucuringu; Zhongmin Qian
    Abstract: We apply machine learning models to forecast intraday realized volatility (RV), by exploiting commonality in intraday volatility via pooling stock data together, and by incorporating a proxy for the market volatility. Neural networks dominate linear regressions and tree models in terms of performance, due to their ability to uncover and model complex latent interactions among variables. Our findings remain robust when we apply trained models to new stocks that have not been included in the training set, thus providing new empirical evidence for a universal volatility mechanism among stocks. Finally, we propose a new approach to forecasting one-day-ahead RVs using past intraday RVs as predictors, and highlight interesting diurnal effects that aid the forecasting mechanism. The results demonstrate that the proposed methodology yields superior out-of-sample forecasts over a strong set of traditional baselines that only rely on past daily RVs.
    Date: 2022–02
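A rough sketch of the realized-volatility construction underlying such studies (synthetic returns and a plain HAR-style linear baseline, standing in for the richer intraday predictors and ML models of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_intraday = 300, 78          # e.g. 5-minute bars over a 6.5-hour session
# Crude volatility clustering: alternate calm and turbulent 25-day regimes
sigma = np.where(np.arange(n_days) % 50 < 25, 0.01, 0.03)
returns = rng.normal(0.0, sigma[:, None], size=(n_days, n_intraday))

# Daily realized volatility: sum of squared intraday returns
rv = (returns ** 2).sum(axis=1)

# HAR-style linear baseline: regress RV_{t+1} on daily/weekly/monthly RV means
def lagged_mean(x, k):
    return np.array([x[max(0, t - k + 1):t + 1].mean() for t in range(len(x))])

X = np.column_stack([np.ones(n_days), rv, lagged_mean(rv, 5), lagged_mean(rv, 22)])[:-1]
y = rv[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("in-sample R^2 of the linear baseline:", round(float(r2), 3))
```

Baselines of this linear form are the "traditional" benchmarks the abstract says the neural models outperform.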
  3. By: Rian Dolphin; Barry Smyth; Ruihai Dong
    Abstract: Identifying meaningful relationships between the price movements of financial assets is a challenging but important problem in a variety of financial applications. However, with recent research, particularly work using machine learning and deep learning techniques, focused mostly on price forecasting, the literature on modelling asset correlations has lagged somewhat. To address this, inspired by recent successes in natural language processing, we propose a neural model for training stock embeddings, which harnesses the dynamics of historical returns data in order to learn the nuanced relationships that exist between financial assets. We describe our approach in detail and discuss a number of ways that it can be used in the financial domain. Furthermore, we present evaluation results that demonstrate the utility of this approach, compared to several important benchmarks, on two real-world financial analytics tasks.
    Date: 2022–02
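One simple, hypothetical way to convey the idea of dense representations for assets (not the paper's neural model) is to embed stocks via a low-rank decomposition of their return correlations; co-moving stocks end up close in the embedding space:

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, n_stocks, dim = 500, 6, 2
sector = np.array([0, 0, 0, 1, 1, 1])          # two latent sectors
factors = rng.normal(size=(n_days, 2))
returns = factors[:, sector] + 0.5 * rng.normal(size=(n_days, n_stocks))

# Embed each stock using the leading components of its correlation structure
corr = np.corrcoef(returns, rowvar=False)
U, S, _ = np.linalg.svd(corr)
embeddings = U[:, :dim] * np.sqrt(S[:dim])     # one dense vector per stock

def dist(i, j):
    return float(np.linalg.norm(embeddings[i] - embeddings[j]))

print("same-sector distance :", round(dist(0, 1), 3))
print("cross-sector distance:", round(dist(0, 3), 3))
```

The paper's contribution is learning such vectors with a neural model trained on returns rather than a fixed matrix factorization, but the downstream uses (similarity search, clustering, hedging) are analogous.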
  4. By: Keil, Petr; Chase, Jonathan
    Abstract: 1. Estimates of temporal change of biodiversity, and its components loss and gain, are needed at local and geographical scales. However, we lack them because of data in-completeness, heterogeneity, and lack of temporal replication. Hence, we need a tool to integrate heterogeneous data and to account for their incompleteness. 2. We introduce spatiotemporal machine learning interpolation that can estimate cross-scale biodiversity change and its components. The approach naturally captures the expected and complex interactions between scale (grain), geography, data types, and drivers of change. As such it can integrate inventory data from reserves or countries with data from atlases and local survey plots. We present two flavors, both blending tree-based machine learning (random forests, boosted trees) with advances in ecolog-ical scaling: The first combines machine learning with species-area relationships (SAR method), the second with occupancy-area relationships (OAR method). 3. Using simulated data and an empirical example of global mammals and European plants, we show that tree-based machine learning effectively captures temporal biodi-versity change, loss, and gain across a continuum of spatial grains. This can be done despite the lack of time series data (i.e., it does not require temporal replication at sites), temporal biases in the amount of data, and highly uneven sampling area. These estimates can be mapped at any desired spatial resolution. 4. In all, this is a user-friendly and computationally fast approach with minimal require-ments on data format. It can integrate heterogeneous biodiversity data to obtain esti-mates of temporal biodiversity change, loss, and gain, that would otherwise be invisi-ble in the raw data alone.
    Date: 2022–03–15
  5. By: Mukund Sundararajan (Google); Walid Krichene (Google Research)
    Abstract: Machine learning is pervasive. It powers recommender systems such as Spotify, Instagram and YouTube, and health-care systems via models that predict sleep patterns, or the risk of disease. Individuals contribute data to these models and benefit from them. Are these contributions (outflows of influence) and benefits (inflows of influence) reciprocal? We propose measures of outflows, inflows and reciprocity building on previously proposed measures of training data influence. Our initial theoretical and empirical results indicate that under certain distributional assumptions, some classes of models are approximately reciprocal. We conclude with several open directions.
    Date: 2022–02
  6. By: Jurić, Tado
    Abstract: Background: This paper shows that Big Data and the so-called tools of digital demography, such as Google Trends (GT) and insights from social networks such as Instagram, Twitter and Facebook, can be useful for determining, estimating, and predicting the forced migration flows to the EU caused by the war in Ukraine. Objective: The objective of this study was to test the usefulness of Google Trends indexes to predict further forced migration from Ukraine to the EU (mainly to Germany) and gain demographic insights from social networks into the age and gender structure of refugees. Methods: The primary methodological concept of our approach is to monitor the digital trace of Internet searches in Ukrainian, Russian and English with the Google Trends analytical tool. Initially, keywords were chosen that are most predictive, specific, and common enough to predict the forced migration from Ukraine. We requested the data before and during the outbreak of the war and divided the keyword frequency for each migration-related query to standardise the data. We compared this search frequency index with official statistics from UNHCR to assess the significance of the results and correlations and to test the model's predictive potential. Since UNHCR does not yet have complete data on the demographic structure of refugees, to fill this gap we used three other alternative Big Data sources: Facebook, Twitter and Instagram. Results: All tested migration-related search queries about emigration planning from Ukraine show a positive linear association between the Google index and data from official UNHCR statistics; R2 = 0.1211 for searches in Russian and R2 = 0.1831 for searches in Ukrainian. We note that Ukrainians use the Russian language more often than Ukrainian to search for these terms. Increases in migration-related search activity in Ukraine, such as граница (Rus. border), кордону (Ukr. border), Польща (Poland), Германия (Rus. Germany), Німеччина (Ukr. Germany), and Угорщина and Венгрия (Hungary), correlate strongly with official UNHCR data for externally displaced persons from Ukraine. All three languages show that interest in Poland is the highest. When refugees arrive in nearby countries, searches for terms related to Germany, such as crossing the border + Germany, proliferate. This result confirms our hypothesis that one-third of all refugees will cross into Germany. According to Big Data insights, the total number of expected refugees is estimated at 5.4 million. The age group most represented is between 24 and 45 years (data for children are unavailable), and over 65% are women. Conclusion: The increase in migration-related search queries is correlated with the rise in the number of refugees from Ukraine in the EU; thus, this method allows reliable forecasts. Understanding the consequences of forced migration from Ukraine is crucial to enabling UNHCR and governments to develop optimal humanitarian strategies and prepare for refugee reception and possible integration. The benefit of this method is reliable estimates and forecasts that can allow governments and UNHCR to prepare for and better respond to the current humanitarian crisis.
    Keywords: refugee,Ukraine,Big Data,forced migration,Google Trends,UNHCR
    Date: 2022
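The paper's headline statistic, the fit between a search-interest index and official counts, amounts to a simple OLS R². A sketch on invented weekly figures (the paper uses real Google Trends indices and UNHCR statistics):

```python
import numpy as np

# Made-up weekly figures for illustration only
search_index = np.array([12., 18., 35., 60., 88., 100., 91., 75.])
arrivals = np.array([30., 45., 110., 290., 520., 640., 560., 430.])  # thousands

# OLS of official arrivals on the standardised search index, then report R^2
X = np.column_stack([np.ones_like(search_index), search_index])
beta, *_ = np.linalg.lstsq(X, arrivals, rcond=None)
pred = X @ beta
r2 = 1 - ((arrivals - pred) ** 2).sum() / ((arrivals - arrivals.mean()) ** 2).sum()
print("slope:", round(float(beta[1]), 2), "R^2:", round(float(r2), 3))
```

With real data the fit is far noisier (the paper reports R² of 0.12 and 0.18), which is why the authors treat the index as a leading indicator rather than a point forecast.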
  7. By: Damir Filipović (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute); Markus Pelger (Stanford University - Department of Management Science & Engineering); Ye Ye (Stanford University)
    Abstract: We introduce a robust, flexible and easy-to-implement method for estimating the yield curve from Treasury securities. This method is non-parametric and optimally learns basis functions in reproducing Hilbert spaces with an economically motivated smoothness reward. We provide a closed-form solution of our machine learning estimator as a simple kernel ridge regression, which is straightforward and fast to implement. We show in an extensive empirical study on U.S. Treasury securities, that our method strongly dominates all parametric and non-parametric benchmarks. Our method achieves substantially smaller out-of-sample yield and pricing errors, while being robust to outliers and data selection choices. We attribute the superior performance to the optimal trade-off between flexibility and smoothness, which positions our method as the new standard for yield curve estimation.
    Keywords: yield curve estimation, U.S. Treasury securities, term structure of interest rates, nonparametric method, machine learning in finance, reproducing kernel Hilbert space
    JEL: C14 C38 C55 E43 G12
    Date: 2022–03
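Kernel ridge regression, the estimator class to which the authors' closed-form solution belongs, can be illustrated on a synthetic yield curve. The kernel, penalty and data below are illustrative choices, not the authors' economically motivated specification:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
maturities = np.linspace(0.25, 30, 40).reshape(-1, 1)      # years
true_curve = 0.03 - 0.015 * np.exp(-maturities / 4)        # smooth upward-sloping yields
observed = true_curve + 0.0015 * rng.normal(size=true_curve.shape)

# Kernel ridge regression: closed-form fit with a smoothness penalty (alpha)
model = KernelRidge(alpha=1e-4, kernel="rbf", gamma=0.05)
model.fit(maturities, observed)

grid = np.linspace(0.25, 30, 200).reshape(-1, 1)
fitted = model.predict(grid)                               # curve at any maturity
rmse = np.sqrt(np.mean((model.predict(maturities) - true_curve) ** 2))
print("RMSE of the fitted curve vs. the true curve:", round(float(rmse), 5))
```

The penalty parameter plays the role of the paper's smoothness reward: larger values damp wiggles from noisy quotes at the cost of flexibility.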
  8. By: Andrés Fernández Díaz (Facultad de Ciencias Económicas y Empresariales. Universidad Complutense de Madrid.); Benito Rodríguez Mallol (Facultad de Ciencias Económicas y Empresariales. Universidad Complutense de Madrid. Title: Artificial Intelligence: A Reappraisal)
    Abstract: This article begins with a brief reference to the history of Artificial Intelligence (A.I.), highlighting the great figures of George Boole, Kurt Gödel and Alan Turing. The algebra of the first, the undecidability theorem of the second and the advanced machine of the third marked the fundamental milestones of this evolution. The use of the binary system and the introduction of quantum mechanics are explained; the latter is a great step forward, given the advantages of quantum computers and algorithms, along with quantum statistics and other new complementary technologies. Regarding the controversy over whether A.I. can outperform Human Intelligence (H.I.), we conclude that the enormous effort made on the way to the present allows us to affirm that Artificial Intelligence constitutes, in a certain sense, an asymptote of Human Intelligence.
    Keywords: A.I., Turing, computers, algorithms, quantum, H.I. Length: 43 pages
    Date: 2022
  9. By: Amir Taherizadeh; Catherine Beaudry
    Abstract: This report presents the main results of a qualitative and exploratory study aimed at explaining how artificial intelligence (AI), as a general-purpose technology (GPT), impacts firm-level productivity and employment. Analyzing primary and secondary data sources (including 27 interviews, reports, and panel discussions), we first develop a maturity spectrum of AI adoption and classify small and medium-sized enterprises (SMEs) that integrate AI into their work processes into four archetypes: The Wishful, The Achievers, The Leaders, and The Visionaries. By characterizing each archetype, we highlight the changes that need to take place for a firm to progress to the next stage of AI adoption. Second, we identify and explain seven barriers to the pervasive integration of AI among SMEs in manufacturing industries. Third, in three distinct case studies, we explore three AI projects conducted by Quebec-based AI-focused firms to show how machine learning (ML) integration into products and work processes can act as a productivity enhancer, and we identify its impact on firm-level employment. Overall, our results suggest that successful AI integration requires a firm-level digital transformation, which we illustrate as a continuum. In the early stages of adoption (firms in The Wishful and The Achievers classes), where AI adoption is project-centric, firms’ employment tends to increase in parallel with productivity gains; the much-needed upskilling of the existing workforce also occurs at the same time. Further, as firms integrate AI on an enterprise-wide scale (The Leaders and The Visionaries) and increase the level of their innovation activities, firm-level job losses occur in parallel with productivity gains. Next, we introduce indirect indicators of AI pervasiveness as more realistic measures of the rate of AI adoption by SMEs in a fluid stage. Finally, we propose four recommendations with implications for researchers, practitioners and policymakers. To cite this document: Taherizadeh, A. and Beaudry, C. (2021). Vers une meilleure compréhension de la transformation numérique optimisée par l’IA et de ses implications pour les PME manufacturières au Canada - Une recherche qualitative exploratoire (2022RP-04, CIRANO).
    Keywords: Artificial intelligence,Digital transformation,General Purpose Technology,Employment,Machine learning,Productivity,Manufacturing,Technological Innovation, Intelligence artificielle,Transformation numérique,Technologies à usage général,Emploi,Apprentissage automatique,Productivité,Secteur manufacturier,Innovation technologique
    JEL: C80 D20 L60 M50 O10
    Date: 2022–03–11
  10. By: Luca Coraggio (Università di Napoli Federico II); Marco Pagano (University of Naples Federico II, CSEF and EIEF.); Annalisa Scognamiglio (Università di Napoli Federico II and CSEF); Joacim Tåg (Research Institute of Industrial Economics (IFN))
    Abstract: Does the matching between workers and jobs help explain productivity differentials across firms? To address this question we develop a job-worker allocation quality measure (JAQ) by combining employer-employee administrative data with machine learning techniques. The proposed measure is positively and significantly associated with labor earnings over workers’ careers. At firm level, it features a robust positive correlation with firm productivity, and with managerial turnover leading to an improvement in the quality and experience of management. JAQ can be constructed for any employer-employee data including workers’ occupations, and used to explore the effect of corporate restructuring on workers’ allocation and careers.
    Keywords: jobs, workers, matching, mismatch, machine learning, productivity, management.
    JEL: D22 D23 D24 G34 J24 J31 J62 L22 L23 M12 M54
    Date: 2022–03–30
  11. By: Matthias Lalisse (Johns Hopkins University)
    Abstract: How much does money drive legislative outcomes in the United States? In this article, we use aggregated campaign finance data as well as a Transformer-based text-embedding model to predict roll call votes for legislation in the US Congress with more than 90% accuracy. In a series of model comparisons in which the input feature sets are varied, we investigate the extent to which campaign finance is predictive of voting behavior in comparison with variables like partisan affiliation. We find that the financial interests backing a legislator's campaigns are independently predictive in both chambers of Congress, but also uncover a sizable asymmetry between the Senate and the House of Representatives. These findings are cross-referenced with a Representational Similarity Analysis (RSA) linking legislators' financial and voting records, in which we show that "legislators who vote together get paid together", again discovering an asymmetry between the House and the Senate in the additional predictive power of campaign finance once party is accounted for. We suggest an explanation of these facts in terms of Thomas Ferguson's Investment Theory of Party Competition: due to a number of structural differences between the House and Senate, but chiefly the lower amortized cost of obtaining individuated influence with Senators, political investors prefer operating on the House using the party as a proxy.
    Keywords: campaign finance, congressional voting, investment theory of party competition, machine learning, Representational Similarity Analysis, political money
    JEL: H10 D72 P16 C45
    Date: 2022–02–22
  12. By: Kristof Wiedermann
    Abstract: We consider the utility maximization problem under convex constraints, with a view to theoretical results that allow the formulation of algorithmic solvers based on deep learning techniques. In particular, for the case of random coefficients, we prove a stochastic maximum principle (SMP), which also holds for utility functions $U$ for which $\mathrm{id}_{\mathbb{R}^{+}} \cdot U'$ is not necessarily nonincreasing, such as the power utility functions, thereby generalizing the SMP proved by Li and Zheng (2018). We use this SMP together with the strong duality property to define a new algorithm, which we call the deep primal SMP algorithm. Numerical examples illustrate the effectiveness of the proposed algorithm, in particular for higher-dimensional problems and problems with random coefficients, which are either path dependent or satisfy their own SDEs. Moreover, our numerical experiments for constrained problems show that the novel deep primal SMP algorithm overcomes the deep SMP algorithm's (see Davey and Zheng (2021)) weakness of erroneously producing the value of the corresponding unconstrained problem. Furthermore, in contrast to the deep controlled 2BSDE algorithm from Davey and Zheng (2021), this algorithm is also applicable to problems with path dependent coefficients. As the deep primal SMP algorithm yields the most accurate results in many of the problems we studied, we can highly recommend its usage. Moreover, we propose an epoch-based learning procedure which improved the results of our algorithm even further. Implementing a semi-recurrent network architecture for the control process also turned out to be a valuable advancement.
    Date: 2022–02
  13. By: Igna, Ioana (CIRCLE, Lund University); Venturini, Francesco (University of Perugia)
    Abstract: Using patent data for a panel sample of European companies between 1995 and 2016, we explore whether innovative success in Artificial Intelligence (AI) is related to firms’ earlier research in the area of Information and Communication Technology (ICT), and identify which company characteristics and external factors shape this performance. We show that AI innovation has been developed by the most prolific firms in the field of ICT, presents strong dynamic returns (learning effects), and benefits from complementarities with knowledge developed in network and communication technologies, high-speed computing and data analysis, and more recently in cognition and imaging. AI patent productivity increases with the scale of research but is lower in the presence of narrow and mature technological competencies of the firm. AI innovating companies are found to benefit from spillovers associated with innovations developed in the field of ICT by the business sector; this effect, however, is confined to frontier firms. Our findings suggest that, with the take-off of the new technology, the technological lead of top AI innovators has increased mainly due to the accumulation of internal competencies and the expanding knowledge base. These trends help explain the concentration process of the world’s data market.
    Keywords: AI; ICT; patenting; European firms
    JEL: O31 O32 O34
    Date: 2022–03–02
  14. By: De Castro, Angelo
    Abstract: This paper examines various models and strategies for the adoption of cryptocurrencies, arguments on the stabilization of their value, and the relationship between artificial intelligence and blockchain technology.
    Date: 2022–02–16
  15. By: DUCH BROWN Nestor (European Commission - JRC); GOMEZ-HERRERA Estrella; MUELLER-LANGER Frank; TOLAN Songul (European Commission - JRC)
    Abstract: We investigate three alternative but complementary indicators of market power on one of the largest online labour markets (OLMs) in Europe: (1) the elasticity of labour demand, (2) the elasticity of labour supply, and (3) the concentration of market shares. We explore how these indicators relate to an exogenous change in platform policy. In the middle of the observation period, the platform made it mandatory for employers to signal the rates they were willing to pay as given by the level of experience required to perform a project, i.e., entry, intermediate or expert level. We find a positive labour supply elasticity ranging between 0.06 and 0.15, which is higher for expert-level projects. We also find that the labour demand elasticity increased while the labour supply elasticity decreased after the policy change. Based on this, we argue that market-designing platform providers can influence the labour demand and supply elasticities on OLMs with the terms and conditions they set for the platform. We also explore the demand for and supply of AI-related labour on the OLM under study. We provide evidence for a significantly higher demand for AI-related labour (ranging from +1.4% to +4.1%) and a significantly lower supply of AI-related labour (ranging from -6.8% to -1.6%) than for other types of labour. We also find that workers on AI projects receive 3.0%-3.2% higher wages than workers on non-AI projects.
    Keywords: Online labour markets, artificial intelligence, market power, exogenous change in platform policy
    Date: 2022–02
  16. By: Kvam, Emilie (NTNU); Molnar, Peter (University of Stavanger); Wankel, Ingvild (NTNU); Odegaard, Bernt Arne (University of Stavanger)
    Abstract: We investigate the link between stock returns and ESG (Environmental, Social and Governance) concerns. The ESG concerns are measured by ESG-related sentiment extracted from Google Trends and Twitter, and also by the VIX index. We find that higher ESG scores are associated with lower stock returns on average. However, companies with high ESG scores deliver high returns in times of ESG concerns. Our results are consistent with the implications of equilibrium models of Pastor et al. (2021) and Pedersen et al. (2021) about the ESG score and changes in ESG concerns (preferences or news).
    Keywords: ESG investing; Social Media; Exclusion
    JEL: G10 G20
    Date: 2022–03–15
  17. By: Harold Houba (Vrije Universiteit Amsterdam); Evgenia Motchenkova (Vrije Universiteit Amsterdam); Hui Wang (Beijing Zhengjiang Science and Technology Co.)
    Abstract: Data-driven AI pricing algorithms in online markets collect consumer information and use it in their pricing technologies. In the simplest symmetric Hotelling model, such technologies reduce prices and profits. We extend Hotelling's model with vertically differentiated products, cost asymmetries and arbitrary adjustment costs. We provide a characterization of competition in personalized pricing: sellers compete in offering consumer surplus, personalized prices are constrained monopoly prices, and social welfare is maximal. For linear adjustment costs, adopting personalized pricing technology is a dominant strategy for both sellers. We derive conditions under which the most efficient seller increases her profit through personalized pricing. While aggregate consumer surplus increases, consumers with high switching costs may be hurt. Finally, we discuss several extensions of our approach, such as oligopoly.
    JEL: L1 D43 L13
    Date: 2022–02–24
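The characterization "personalized prices are constrained monopoly prices" can be illustrated in the textbook symmetric Hotelling case (equal marginal costs, no vertical differentiation); this is a standard baseline, not the paper's general model. With full information, the nearer seller wins each consumer at the price that leaves the consumer indifferent to buying from the rival pricing at cost:

```python
# Unit transport cost t, common marginal cost c; consumers on [0, 1],
# sellers located at the endpoints.
t, c = 1.0, 0.0

def personalized_price(x):
    """Bertrand-style personalized price paid by the consumer at location x."""
    d_near, d_far = sorted((x, 1 - x))
    # The winner charges the rival's transport-cost disadvantage
    return c + t * (d_far - d_near)

for x in (0.0, 0.25, 0.5):
    print(f"consumer at {x}: price {personalized_price(x):.2f}")
```

Every consumer buys from the nearer (here, equally efficient) seller, so transport costs are minimized and social welfare is maximal, while prices fall to cost at the market midpoint.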

This nep-big issue is ©2022 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.