nep-big New Economics Papers
on Big Data
Issue of 2023‒06‒19
27 papers chosen by
Tom Coupé
University of Canterbury

  1. Trillion Dollar Words: A New Financial Dataset, Task & Market Analysis By Agam Shah; Suvan Paturi; Sudheer Chava
  2. German farmers' perceived usefulness of satellite-based index insurance - Insights from a transtheoretical model By Nordmeyer, Eike Florenz
  3. A topic modeling perspective on investor uncertainty By Perico Ortiz, Daniel; Schnaubelt, Matthias; Seifert, Oleg
  4. The Newsvendor with Advice By Lin An; Andrew A. Li; Benjamin Moseley; R. Ravi
  5. Are Basel III requirements up to the task? Evidence from bankruptcy prediction models By Pierre Durand; Gaëtan Le Quang; Arnold Vialfont
  6. Environmental regulation and productivity growth in the euro area: testing the Porter hypothesis By Benatti, Nicola; Groiss, Martin; Kelly, Petra; Lopez-Garcia, Paloma
  7. Copula Variational LSTM for High-dimensional Cross-market Multivariate Dependence Modeling By Jia Xu; Longbing Cao
  8. Essays on the Adoption and Diffusion of Big Data Analytics and Artificial Intelligence Technology By Nicolas Ameye
  9. The fundamental value of art NFTs By Fridgen, Gilbert; Kräussl, Roman; Papageorgiou, Orestis; Tugnetti, Alessandro
  10. What Drives Tax Policy? Political, Institutional and Economic Determinants of State Tax Policy By Sarah Robinson; Alisa Tazhitdinova
  11. Measuring Consistency in Text-based Financial Forecasting Models By Linyi Yang; Yingpeng Ma; Yue Zhang
  12. Artificial neural networks to solve dynamic programming problems: A bias-corrected Monte Carlo operator By Julien Pascal
  13. The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research By Jonas Tallberg; Eva Erman; Markus Furendal; Johannes Geith; Mark Klamberg; Magnus Lundgren
  14. Unlocking the power of generative AI models and systems such as GPT-4 and ChatGPT for higher education: A guide for students and lecturers By Gimpel, Henner; Hall, Kristina; Decker, Stefan; Eymann, Torsten; Lämmermann, Luis; Mädche, Alexander; Röglinger, Maximilian; Ruiner, Caroline; Schoch, Manfred; Schoop, Mareike; Urbach, Nils; Vandrik, Steffen
  15. The Fast and The Studious? Ramadan Observance and Student Performance By Kyra Hanemaaijer; Olivier Marie; Marco Musumeci
  16. Temporal and Heterogeneous Graph Neural Network for Financial Time Series Prediction By Sheng Xiang; Dawei Cheng; Chencheng Shang; Ying Zhang; Yuqi Liang
  17. Information and Transparency: Using Machine Learning to Detect Communication By Brown, David P.; Cajueiro, Daniel O.; Eckert, Andrew; Silveira, Douglas
  18. Executive Voiced Laughter and Social Approval: An Explorative Machine Learning Study By Niklas Mueller; Steffen Klug; Andreas Koenig; Alexander Kathan; Lukas Christ; Bjoern Schuller; Shahin Amiriparian
  19. Using neural networks to predict the value of stocks based on news data By Borisenko Georgy
  20. An investigation of auctions in the Regional Greenhouse Gas Initiative By Khezr, Peyman; Pourkhanali, Armin
  21. A typology of Malian farmers and their credit repayment performance - An unsupervised machine learning approach By Olkers, Tim; Liu, Shuang; Mußhoff, Oliver
  22. How Do Political Connections of Firms Matter during an Economic Crisis? By Chen, Yutong; Chiplunkar, Gaurav; Sekhri, Sheetal; Sen, Anirban; Seth, Aaditeshwar
  23. E2EAI: End-to-End Deep Learning Framework for Active Investing By Zikai Wei; Bo Dai; Dahua Lin
  24. Personality Traits and Financial Outcomes By Claire Greene; Oz Shy; Joanna Stavins
  25. Can artificial intelligence improve the effectiveness of government support policies? By Kim, Minho; Han, Jaepil
  26. Deep learning detection of types of water-bodies using optical variables and ensembling By Nasir, Nida; Kansal, Afreen; Alshaltone, Omar; Barneih, Feras; Shanableh, Abdallah; Al-Shabi, Mohammad; Al Shammaa, Ahmed
  27. Zero is Not Hero Yet: Benchmarking Zero-Shot Performance of LLMs for Financial Tasks By Agam Shah; Sudheer Chava

  1. By: Agam Shah; Suvan Paturi; Sudheer Chava
    Abstract: Monetary policy pronouncements by the Federal Open Market Committee (FOMC) are a major driver of financial market returns. We construct the largest tokenized and annotated dataset of FOMC speeches, meeting minutes, and press conference transcripts in order to understand how monetary policy influences financial markets. In this study, we develop a novel task of hawkish-dovish classification and benchmark various pre-trained language models on the proposed dataset. Using the best-performing model (RoBERTa-large), we construct a measure of monetary policy stance for FOMC document release days. To evaluate the constructed measure, we study its impact on the treasury market, stock market, and macroeconomic indicators. Our dataset, models, and code are publicly available on Huggingface and GitHub under a CC BY-NC 4.0 license.
    Date: 2023–05
  2. By: Nordmeyer, Eike Florenz
    Abstract: Index insurance is a promising tool to mitigate drought-related income losses in agriculture. Yet the basis risk of index insurance based on meteorological observations inhibits farmers’ demand. To reduce this basis risk, the integration of satellite data has received research attention; however, farmers’ perceptions of satellite-based index insurance remain unknown. To derive initial insights into German farmers’ perceived usefulness (PU) of satellite-based index insurance, we surveyed 127 German farmers in a risk management context and applied a modified transtheoretical model of behavioral change (TTMC). This revealed detailed information on German farmers’ PU of satellite-based index insurance and its influencing factors. The results indicate that the average farmer perceives satellite-based index insurance as useful. In particular, a higher educational level in the agricultural context as well as higher trust in index insurance products increases farmers’ PU. Moreover, higher relative climate-related income losses increase farmers’ PU. The results are of importance to insurers interested in the drivers of farmers’ PU of upcoming satellite-based index insurance, and they offer a starting point for researchers focusing on the acceptance of index insurance and satellite data, as well as for further applications of the TTMC.
    Keywords: Risk and Uncertainty, Research and Development/Tech Change/Emerging Technologies
    Date: 2023–03
  3. By: Perico Ortiz, Daniel; Schnaubelt, Matthias; Seifert, Oleg
    Abstract: We leverage computational linguistics to determine how the narrative content of earnings conference calls influences investors' uncertainty about a firm's future valuation. By applying statistical topic modeling to a corpus of 18,254 conference calls, we extract topics and tones from both analyst questions and executive responses. Our findings show that incorporating the estimated topics significantly increases the explained variance of implied volatility changes of equity options. Furthermore, our approach enables us to disentangle the overall effect into tone and topic effects, with executive statements' topics having the largest net effect, while tones from analyst statements are particularly relevant for pricing call options.
    Keywords: Earnings Conference Calls, Option Implied Volatility, Natural Language Processing, Sentiment, Topic Modeling
    Date: 2023
  4. By: Lin An; Andrew A. Li; Benjamin Moseley; R. Ravi
    Abstract: The standard newsvendor model assumes a stochastic demand distribution as well as costs for overages and underages. The celebrated critical fractile formula can be used to determine the optimal inventory level. While the model has been leveraged in numerous applications, in practice more characteristics and features of the problem are often known. Using these features, it is common to employ machine learning to predict inventory levels rather than the classic newsvendor approach. An emerging line of work has shown how to incorporate machine-learned predictions into models to circumvent lower bounds and obtain improved performance. This paper develops the first newsvendor model that incorporates machine-learned predictions. The paper considers a repeated newsvendor setting with nonstationary demand. There is a prediction for each period's demand and, as is the case in machine learning, the prediction can be noisy. The goal is for an inventory management algorithm to take advantage of the prediction when it is of high quality and to have performance bounded by the best possible algorithm without a prediction when the prediction is highly inaccurate. This paper proposes a generic model of a nonstationary newsvendor without predictions and develops optimal upper and lower bounds on the regret. The paper then proposes an algorithm that takes a prediction as advice and, without a priori knowledge of the accuracy of the advice, achieves nearly optimal minimax regret. The performance matches the best possible had the accuracy been known in advance. We show the theory is predictive of practice on real data and demonstrate empirically that our algorithm has a 14% to 19% lower cost than a clairvoyant who knows the quality of the advice beforehand.
    Date: 2023–05
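The critical fractile formula the abstract mentions is easy to state concretely. A stdlib-only sketch for normally distributed demand (the demand parameters and cost numbers are made up for illustration):

```python
from statistics import NormalDist

def critical_fractile_quantity(mu, sigma, c_under, c_over):
    """Classic newsvendor: stock the demand quantile at cu / (cu + co)."""
    fractile = c_under / (c_under + c_over)
    return NormalDist(mu, sigma).inv_cdf(fractile)

# Demand ~ N(100, 20); underage costs 3 per unit, overage costs 1,
# so the optimal order sits at the 75th demand percentile, above the mean.
q = critical_fractile_quantity(100, 20, 3, 1)
```

The prediction-as-advice setting in the paper would replace this single fixed demand distribution with a per-period, possibly noisy forecast.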
  5. By: Pierre Durand (Université Paris Est Créteil, ERUDITE, 94010 Créteil Cedex, France); Gaëtan Le Quang (Univ Lyon, Université Lumière Lyon 2, GATE UMR 5824, F-69130 Ecully, France); Arnold Vialfont (Université Paris Est Créteil, ERUDITE, 94010 Créteil Cedex, France)
    Abstract: Using a database comprising US bank balance sheet variables covering the 2000-2018 period and the list of failed banks provided by the FDIC, we run various models to identify the main determinants of bank default. Among these models, Logistic Regression, Random Forest, Histogram-based Gradient Boosting Classification and Gradient Boosting Classification perform the best. Relying on various machine learning interpretation tools, we provide evidence that 1) capital is a stronger predictor of default than liquidity, and 2) Basel III capital requirements are set at too low a level. More precisely, looking at the impact of the interaction between capital ratios (the risk-weighted ratio and the simple leverage ratio) and the liquidity ratio (liquid assets over total assets) on the probability of default, we show that the influence of capital on the latter completely outweighs that of liquidity, which is in fact very limited. From a prudential perspective, this questions the recent stress put on liquidity regulation. Concerning capital requirements, we provide evidence that setting the risk-weighted ratio at 15% and the simple leverage ratio at 10% would significantly decrease the probability of default without hampering banks' activities. Overall, these results call for strengthening capital requirements while at the same time releasing the regulatory pressure put on liquidity.
    Keywords: Basel III; capital requirements ; liquidity regulation ; bankruptcy prediction models ; statistical learning ; classification
    JEL: C44 G21 G28
    Date: 2023
  6. By: Benatti, Nicola; Groiss, Martin; Kelly, Petra; Lopez-Garcia, Paloma
    Abstract: This paper analyses the impact of changes in environmental regulations on productivity growth at country- and firm-level. We exploit several data sources and the environmental policy stringency index to evaluate the Porter hypothesis, according to which firms’ productivity can benefit from more stringent environmental policies. By using panel local projections, we estimate the regulatory impact over a five-year horizon. The identification of causal impacts of regulatory changes is achieved by the estimation of firms’ CO2 emissions via a machine learning algorithm. At both country- and firm-level, policy tightening affects high-polluters’ productivity negatively and more strongly than that of their less-polluting peers. However, among high-polluting firms, large ones experience positive total factor productivity growth due to easier access to finance and greater innovativeness. Hence, we do not find support for the Porter hypothesis in general. However, for technology support policies and firms with the required resources, policy tightening can enhance productivity.
    Keywords: emissions, environmental regulation, euro area, Porter hypothesis, productivity
    JEL: O44 Q52 Q58
    Date: 2023–05
  7. By: Jia Xu; Longbing Cao
    Abstract: We address an important yet challenging problem - modeling high-dimensional dependencies across multivariates such as financial indicators in heterogeneous markets. In reality, a market couples with and influences others over time, and the financial variables within a market are also coupled. We make the first attempt to integrate variational sequential neural learning with copula-based dependence modeling to characterize both temporal observable and latent variable-based dependence degrees and structures across non-normal multivariates. Our variational neural network WPVC-VLSTM models variational sequential dependence degrees and structures across multivariate time series by variational long short-term memory networks and regular vine copula. The regular vine copula models non-normal and long-range distributional couplings across multiple dynamic variables. WPVC-VLSTM is verified in terms of both technical significance and portfolio forecasting performance. It outperforms benchmarks including linear models, stochastic volatility models, deep neural networks, and variational recurrent networks in cross-market portfolio forecasting.
    Date: 2023–05
  8. By: Nicolas Ameye
    Abstract: The motivation behind this thesis lies in developing the academic literature on, on the one hand, the impact of a specific technology on an organization’s strategy and, on the other hand, the characteristics and components driving and inhibiting the adoption and diffusion of a specific technology inside an organization. By investigating the drivers and challenges of adopting and diffusing Artificial Intelligence (AI) in an organization, this research aims to answer the following research question: “What are the main complements and antecedents to the adoption and diffusion of Big Data Analytics and Artificial Intelligence technology at an organizational level?” To answer that question, we must first understand how the established models of rank, order, stock and epidemic effects influence the adoption of AI technology. Different streams of work have highlighted these four main groups of factors affecting the diffusion of new technologies within or across firms, and this thesis examines how they influence both the adoption and diffusion of Artificial Intelligence technology across and within firms. Second, this thesis extends the established models to incorporate the effects of uncertainty and competitive intensity on the adoption behaviors of AI technologies among firms. Third, this thesis investigates how technological and managerial complementarities influence the adoption and diffusion of AI technology; extending the established models accordingly, we examine how such complementarities help facilitate inter-firm diffusion, drive intra-firm diffusion, and reduce the barriers to AI technology adoption. Fourth, this thesis investigates the discrepancies in adoption and use of AI technology between SMEs and large organizations, exploring the determinants and patterns of inter- and intra-firm diffusion at both levels. A first finding highlights the influence of industry-level adoption on a focal firm’s own adoption: the thesis points to the presence of herding behaviors by which firms tend to follow the crowd. As the share of adopters in the industry increases, the crowd gets bigger and provides a more compelling reason to adopt. However, as our results suggest, these herding behaviors are fragile, exacerbated by competitive forces, and counterbalanced by certain sources of uncertainty while strengthened by others. A second finding highlights the importance of pre-existing digital capabilities in the adoption of AI technology. The adoption of AI requires a high degree of maturity and a significant stock of complementary digital technology, most likely due to the cumulative nature of AI technology, which heavily relies on the information and process infrastructure of the firm. This implies that leapfrogging is very unlikely with AI, encouraging firms to build the right foundations (in terms of infrastructure, systems, processes and skills) early on.
    Keywords: Technology adoption; Artificial Intelligence
    Date: 2023–05–25
  9. By: Fridgen, Gilbert; Kräussl, Roman; Papageorgiou, Orestis; Tugnetti, Alessandro
    Abstract: This paper examines the level of speculation associated with art non-fungible tokens (NFTs), identifies the characteristics that confer value on them, and designs a profitable trading strategy based on our findings. We analyze 860,067 art NFTs that have been deployed on the Ethereum blockchain and have been involved in 317,950 sales, using machine learning methods to forecast the probability of sale, the trade frequency and the average price. We find that NFTs are highly speculative assets and that their price and recurrence of sale are heavily determined by the floor and the last sales prices, independent of any fundamental value.
    Keywords: Non-fungible tokens (NFTs), Machine Learning, Fundamental Value, Speculation, Ethereum, Blockchain
    JEL: C55 G11 Z11
    Date: 2023
  10. By: Sarah Robinson; Alisa Tazhitdinova
    Abstract: We collect detailed data on U.S. state personal income, corporate, sales, cigarette, gasoline, and alcohol taxes over the past 70 years to shed light on the determinants of state tax policies. We provide a comprehensive summary of how tax policy has changed over time, within and across states. We then use permutation analysis, variance decomposition, and machine learning techniques to show that the timing and magnitude of tax changes are not driven by economic needs, state politics, institutional rules, neighbor competition, or demographics. Altogether, these factors explain less than 20% of observed tax variation.
    JEL: D7 H2 H7
    Date: 2023–05
  11. By: Linyi Yang; Yingpeng Ma; Yue Zhang
    Abstract: Financial forecasting has been an important and active area of machine learning research, as even the most modest advantage in predictive accuracy can be parlayed into significant financial gains. Recent advances in natural language processing (NLP) bring the opportunity to leverage textual data, such as earnings reports of publicly traded companies, to predict the return rate for an asset. However, when dealing with such a sensitive task, the consistency of models -- their invariance under meaning-preserving alternations in input -- is a crucial property for building user trust. Despite this, current financial forecasting methods do not consider consistency. To address this problem, we propose FinTrust, an evaluation tool that assesses logical consistency in financial text. Using FinTrust, we show that the consistency of state-of-the-art NLP models for financial forecasting is poor. Our analysis of the performance degradation caused by meaning-preserving alternations suggests that current text-based methods are not suitable for robustly predicting market information. All resources are available on GitHub.
    Date: 2023–05
  12. By: Julien Pascal
    Abstract: Artificial Neural Networks (ANNs) are powerful tools that can solve dynamic programming problems arising in economics. In this context, estimating ANN parameters involves minimizing a loss function based on the model’s stochastic functional equations. In general, the expectations appearing in the loss function admit no closed-form solution, so numerical approximation techniques must be used. In this paper, I analyze a bias-corrected Monte Carlo operator (bc-MC) that approximates expectations by Monte Carlo. I show that the bc-MC operator is a generalization of the all-in-one expectation operator, already proposed in the literature. I propose a method to optimally set the hyperparameters defining the bc-MC operator and illustrate the findings numerically with well-known economic models. I also demonstrate that the bc-MC operator can scale to high-dimensional models. With just a few minutes of computing time, I find a global solution to an economic model with a kink in the decision function and more than 100 dimensions.
    Keywords: Dynamic programming, Artificial Neural Network, Machine Learning, Monte Carlo
    JEL: C45 C61 C63 C68 E32 E37
    Date: 2023–03
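The bc-MC operator itself is the paper's contribution; the stdlib-only sketch below only illustrates the underlying issue with squared expectations in Monte Carlo loss functions, using a generic setup of our own (the function name and the N(0, 1) example are ours, not the paper's):

```python
import random

def mc_squared_expectation(f, sampler, n, corrected=True, rng=None):
    """Monte Carlo estimate of E[f(X)]^2.

    The naive estimator squares one batch mean and is biased upward
    by Var(f(X))/n; multiplying the means of two independent batches
    removes that bias (the idea behind all-in-one-style operators).
    """
    rng = rng or random.Random(0)
    mean1 = sum(f(sampler(rng)) for _ in range(n)) / n
    if not corrected:
        return mean1 * mean1
    mean2 = sum(f(sampler(rng)) for _ in range(n)) / n
    return mean1 * mean2

# E[X]^2 = 0 for X ~ N(0, 1); the naive estimator is always >= 0,
# while the two-batch product is centered on the true value 0.
est = mc_squared_expectation(lambda x: x, lambda g: g.gauss(0, 1), 10)
```

In expectation the naive estimator overstates E[X]^2 = 0 by Var(X)/n = 0.1 in this example, while the two-batch product is unbiased.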
  13. By: Jonas Tallberg; Eva Erman; Markus Furendal; Johannes Geith; Mark Klamberg; Magnus Lundgren
    Abstract: Artificial intelligence (AI) represents a technological upheaval with the potential to change human society. Because of its transformative potential, AI is increasingly becoming subject to regulatory initiatives at the global level. Yet, so far, scholarship in political science and international relations has focused more on AI applications than on the emerging architecture of global AI regulation. The purpose of this article is to outline an agenda for research into the global governance of AI. The article distinguishes between two broad perspectives: an empirical approach, aimed at mapping and explaining global AI governance; and a normative approach, aimed at developing and applying standards for appropriate global AI governance. The two approaches offer questions, concepts, and theories that are helpful in gaining an understanding of the emerging global governance of AI. Conversely, exploring AI as a regulatory issue offers a critical opportunity to refine existing general approaches to the study of global governance.
    Date: 2023–05
  14. By: Gimpel, Henner; Hall, Kristina; Decker, Stefan; Eymann, Torsten; Lämmermann, Luis; Mädche, Alexander; Röglinger, Maximilian; Ruiner, Caroline; Schoch, Manfred; Schoop, Mareike; Urbach, Nils; Vandrik, Steffen
    Abstract: Generative AI technologies, such as large language models, have the potential to revolutionize much of our higher education teaching and learning. ChatGPT is an impressive, easy-to-use, publicly accessible system demonstrating the power of large language models such as GPT-4. Other comparable generative models are available for text processing, images, audio, video, and other outputs - and we expect a massive further performance increase, integration in larger software systems, and diffusion in the coming years. This technological development triggers substantial uncertainty and change in university-level teaching and learning. Students ask questions like: How can ChatGPT or other artificial intelligence tools support me? Am I allowed to use ChatGPT for a seminar or final paper, or is that cheating? How exactly do I use ChatGPT best? Are there other ways to access models such as GPT-4? Given that such tools are here to stay, what skills should I acquire, and what is obsolete? Lecturers ask similar questions from a different perspective: What skills should I teach? How can I test students' competencies rather than their ability to prompt generative AI models? How can I use ChatGPT and other systems based on generative AI to increase my efficiency or even improve my students' learning experience and outcomes? Even if the current discussion revolves around ChatGPT and GPT-4, these are only the forerunners of what we can expect from future generative AI-based models and tools. So even if you think ChatGPT is not yet technically mature, it is worth looking into its impact on higher education. This is where this whitepaper comes in. It looks at ChatGPT as a contemporary example of a conversational user interface that leverages large language models. The whitepaper looks at ChatGPT from the perspective of students and lecturers. 
It focuses on everyday areas of higher education: teaching courses, learning for an exam, crafting seminar papers and theses, and assessing students' learning outcomes and performance. For this purpose, we consider the chances and concrete application possibilities, the limits and risks of ChatGPT, and the underlying large language models. (...)
    Date: 2023
  15. By: Kyra Hanemaaijer (Erasmus University Rotterdam); Olivier Marie (Erasmus University Rotterdam); Marco Musumeci (Erasmus University Rotterdam)
    Abstract: What are the consequences of religious obligations conflicting with civic duties? We investigate this question by evaluating changes in the performance of practicing Muslim students when end-of-secondary-school exams and Ramadan overlapped in the Netherlands. Using administrative data on exam takers and a machine learning model to individually predict fasting probability, we estimate that the grades and pass rate of compliers dropped significantly. This negative impact was especially strong for low achievers and those from religiously segregated schools. Investigating mechanisms, we find suggestive evidence that not being able to sleep in the morning before an afternoon exam was particularly detrimental to performance.
    Keywords: Religion, Productivity, Ramadan, Education, The Netherlands
    JEL: I2 I24 Z12 J15
    Date: 2023–04–28
  16. By: Sheng Xiang; Dawei Cheng; Chencheng Shang; Ying Zhang; Yuqi Liang
    Abstract: The price movement prediction of the stock market has been a classical yet challenging problem, attracting the attention of both economists and computer scientists. In recent years, graph neural networks have significantly improved prediction performance by employing deep learning on company relations. However, existing relation graphs are usually constructed by handcrafted human labeling or natural language processing, which suffer from heavy resource requirements and low accuracy. Besides, they cannot effectively respond to dynamic changes in relation graphs. Therefore, in this paper, we propose a temporal and heterogeneous graph neural network-based (THGNN) approach to learn the dynamic relations among price movements in financial time series. In particular, we first generate the company relation graph for each trading day according to historic prices. Then we leverage a transformer encoder to encode the price movement information into temporal representations. Afterward, we propose a heterogeneous graph attention network to jointly optimize the embeddings of the financial time series data produced by the transformer encoder and infer the probability of target movements. Finally, we conduct extensive experiments on the stock markets of the United States and China. The results demonstrate the effectiveness and superior performance of our proposed method compared with state-of-the-art baselines. Moreover, we also deploy the proposed THGNN in a real-world quantitative algorithmic trading system; the accumulated portfolio return obtained by our method significantly outperforms the other baselines.
    Date: 2023–05
  17. By: Brown, David P. (University of Alberta, Department of Economics); Cajueiro, Daniel O. (University of Brasilia); Eckert, Andrew (University of Alberta, Department of Economics); Silveira, Douglas (University of Alberta, Department of Economics)
    Abstract: Information and data transparency have been shown to have an important impact on competitive behavior and market outcomes. Market transparency can enhance competition by allowing firms to respond efficiently to a changing market environment. However, a high degree of information can facilitate coordination by enhancing communication and the monitoring of rival behavior. A recent example highlighting concerns over the use of publicly available information to communicate across firms involves the Alberta wholesale electricity market. This market used to release anonymized information on firms’ pricing strategies in near real-time. Allegations were raised that firms were using unique patterns in their prices to reveal their identities to rival firms and coordinate on higher prices. This paper uses machine learning techniques to investigate how firms could use anonymized publicly available information to communicate with their rivals. These techniques can be employed as a possible screen to evaluate whether publicly available information can be used to identify rival behavior and facilitate coordination. Based on these results, regulators can determine if the degree of market transparency is detrimental to market competition.
    Keywords: Machine Learning; Electricity; Market Power; Competition Policy
    JEL: D43 L13 L50 L94 Q40
    Date: 2023–05–23
  18. By: Niklas Mueller; Steffen Klug; Andreas Koenig; Alexander Kathan; Lukas Christ; Bjoern Schuller; Shahin Amiriparian
    Abstract: We study voiced laughter in executive communication and its effect on social approval. Integrating research on laughter, affect-as-information, and infomediaries' social evaluations of firms, we hypothesize that voiced laughter in executive communication positively affects social approval, defined as audience perceptions of affinity towards an organization. We surmise that the effect of laughter is especially strong for joint laughter, i.e., the number of instances in a given communication venue in which the focal executive and the audience laugh simultaneously. Finally, combining the notions of affect-as-information and negativity bias in human cognition, we hypothesize that the positive effect of laughter on social approval increases with bad organizational performance. We find partial support for our ideas when testing them on panel data comprising 902 German Bundesliga soccer press conferences and media tenor, applying state-of-the-art machine learning approaches for laughter detection as well as sentiment analysis. Our findings contribute to research at the nexus of executive communication, strategic leadership, and social evaluations, especially by introducing laughter as a highly consequential but understudied social lubricant at the executive-infomediary interface. Our research is unique in focusing on reflexive microprocesses of social evaluations, rather than on the infomediary-routines perspective in infomediaries' evaluations. We also make methodological contributions.
    Date: 2023–05
  19. By: Borisenko Georgy (Department of Economics, Lomonosov Moscow State University)
    Abstract: This paper is devoted to forecasting the value of shares of large Russian companies traded on the Moscow Stock Exchange based on news. Transformer neural networks are used as forecasting models. Moreover, classical machine learning methods are also included in the analysis for comparison with the neural network approach. Major Russian news sources and Telegram channels are used as news data, and models trained on the different sources are compared. The study found that classical machine learning methods cope better with this task in the general case, although neural networks also show good quality. The paper also provides recommendations on the choice of a news source and the formulation of the task.
    Keywords: share price, news, neural network approach, Telegram
    JEL: C63 G14
    Date: 2023–05
  20. By: Khezr, Peyman; Pourkhanali, Armin
    Abstract: The Regional Greenhouse Gas Initiative (RGGI), as the largest cap-and-trade system in the United States, employs quarterly auctions to distribute emissions permits to firms. This study examines firm behavior and auction performance from both theoretical and empirical perspectives. We utilize auction theory to offer theoretical insights regarding the optimal bidding behavior of firms participating in these auctions. Subsequently, we analyze data from the past 58 RGGI auctions to assess the relevant parameters, employing panel random effects and machine learning models. Our findings indicate that most significant policy changes within RGGI, such as the Cost Containment Reserve, positively impacted the auction clearing price. Furthermore, we identify critical parameters, including the number of bidders and the extent of their demand in the auction, demonstrating their influence on the auction clearing price. This paper presents valuable policy insights for all cap-and-trade systems that allocate permits through auctions, as we employ data from an established market to substantiate the efficacy of policies and the importance of specific parameters.
    Keywords: Emissions permit, auctions, uniform-price, RGGI
    JEL: C5 D21 Q5
    Date: 2023–04–24
  21. By: Olkers, Tim; Liu, Shuang; Mußhoff, Oliver
    Abstract: The availability of formal credit is crucial for the development of the agricultural sector, as it can enhance farmers’ purchasing power to acquire inputs and agricultural technology. This, in turn, can increase productivity and resilience throughout the sector. The analysis of bank client and loan data in the agricultural sector of a developing country is therefore of interest. We explore who the clients of agricultural credit are and whether they can be grouped into distinct clusters using an unsupervised machine learning technique. We also investigate whether loan repayment performance differs across these clusters, using various logit regressions. According to our results, there are three distinct clusters of farmers in Mali that differ in personal characteristics (such as age or gender) as well as credit demand characteristics (e.g., loan amount, interest rate, credit duration, number of credits). The clusters also differ in their repayment performance. Hence, different instruments as well as communication designs are needed to meet the financial needs of the different clusters and to strengthen the resilience of different groups of farmers in Mali. Our findings provide an important foundation for the design of future agricultural policies and financial products for the agricultural sector, as they emphasise the heterogeneity of agricultural borrowers in general.
    Keywords: Agricultural Finance, Agricultural and Food Policy
    Date: 2023–03
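The paper does not specify which unsupervised technique was used; a minimal sketch of one common choice, k-means clustering on synthetic borrower features (the feature names and data here are purely illustrative, not from the study), could look like this:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy borrower features [age, scaled loan amount] -- hypothetical example
rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal([30, 1.0], 0.3, size=(50, 2)),   # younger, smaller loans
    rng.normal([55, 3.0], 0.3, size=(50, 2)),   # older, larger loans
])
_, labels = kmeans(X, k=2)
```

In a second step, a logit regression of repayment outcomes on cluster membership, as described in the abstract, would compare repayment performance across the recovered groups.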
  22. By: Chen, Yutong (University of Virginia); Chiplunkar, Gaurav (University of Virginia); Sekhri, Sheetal (University of Virginia); Sen, Anirban (Microsoft Corporation); Seth, Aaditeshwar (Indian Institute of Technology Delhi)
    Abstract: We use a new machine learning-enabled, social network based measurement technique to assemble a novel dataset of firms' political connections in India. Leveraging this data along with a long panel of detailed financial transactions of firms, we study how political connections matter during an economic downturn. Using a synthetic difference-in-differences framework, we find that connected firms had 8-10% gains in income, sales, and TFPR that persisted for over three years following the crisis. We unpack various mechanisms and show that connected firms were able to delay their short-term payments to suppliers and creditors, delay debt and interest payments, decrease expensive long-term borrowings from banks in favor of short-term non-collateral ones, and increase investments in productive assets such as computers and software. Our method to determine political connections is portable to other applications and contexts.
    Keywords: political connections, firms, crisis
    JEL: O16 D22 D73
    Date: 2023–05
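The synthetic difference-in-differences estimator used in the paper additionally reweights control units and pre-treatment periods; as a much simpler illustration of the underlying contrast, here is the canonical two-by-two difference-in-differences estimate (the numbers are hypothetical, chosen so the treatment effect is 8):

```python
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Canonical 2x2 difference-in-differences:
    (treated post - treated pre) - (control post - control pre)."""
    return (np.mean(y_treat_post) - np.mean(y_treat_pre)) \
         - (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre))

# Toy outcomes: treated firms gain 8 units beyond the common trend
effect = did_estimate([10, 12], [20, 22], [10, 12], [12, 14])
# (21 - 11) - (13 - 11) = 8
```

The synthetic variant replaces the raw control mean with a weighted combination of controls chosen to match the treated units' pre-treatment trajectory.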
  23. By: Zikai Wei; Bo Dai; Dahua Lin
    Abstract: Active investing aims to construct a portfolio of assets that are believed to be relatively profitable in the markets, with one popular method being to construct a portfolio via factor-based strategies. In recent years, there have been increasing efforts to apply deep learning to pursue "deep factors" with more active returns or promising pipelines for asset trend prediction. However, the question of how to construct an active investment portfolio via an end-to-end deep learning framework (E2E) is still open and rarely addressed in existing works. In this paper, we are the first to propose an E2E framework that covers almost the entire process of factor investing: factor selection, factor combination, stock selection, and portfolio construction. Extensive experiments on real stock market data demonstrate the effectiveness of our end-to-end deep learning framework in active investing.
    Date: 2023–05
  24. By: Claire Greene; Oz Shy; Joanna Stavins
    Abstract: The Big Five personality traits—openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism—are widely used in understanding human behavior. Using data collected from a survey and diary of consumer payment choice, we investigate how the Big Five traits affect three financial outcomes: being unbanked, holding a credit card, and carrying credit card debt. Although each personality trait is correlated with each of the financial outcomes we examine, these correlations mostly become statistically insignificant when we control for demographics and income in regressions. Carrying credit card debt (revolving), however, is significantly affected by conscientiousness, openness, and agreeableness: Credit card adopters who are less conscientious, more open to experiences, or more agreeable are significantly more likely to revolve credit card debt. A machine learning algorithm confirms that conscientiousness is the major factor separating revolvers from other credit cardholders.
    Keywords: credit card debt; consumer payments; personality traits; financial behavior; unbanked
    JEL: D12 D14 E42
    Date: 2023–03–01
  25. By: Kim, Minho; Han, Jaepil
    Abstract: Despite high hopes for artificial intelligence (AI) to generate powerful innovations across the public sphere, backed by its strong predictive capabilities, Korea has not fully brought these technologies into the public sector for tasks like identifying policy target groups and managing follow-up tasks in line with policy objectives. Recent cases of AI-applied public services in Korea show limited usage, mainly replacing simple repetitive tasks. A few leading countries are trying to apply AI-based analysis to select promising policy target groups in order to effectively achieve policy goals and follow up on the performance of public projects. While the existing management system for policy performance is mostly about ex-post assessment of project outcomes, the application of AI technologies signifies a shift to data-driven decision-making that uses ex-ante forecasts of policy effects. An analysis of AI-applied recipient selection for small and medium enterprise (SME) policy support programs demonstrated the efficiency of AI in predicting the post-program performance of beneficiary firms and AI's potential to significantly improve the effectiveness of public support by providing helpful information for screening out unfit SMEs. Using firm-level data, this study applies machine learning to various public financing programs (subsidies or loans for SMEs) funded by the Ministry of SMEs and Startups and finds that AI helps predict the growth of recipient firms in the years following policy support. The application of AI in identifying fitting recipients likely to achieve intended objectives may increase project effectiveness. In a KDI survey in 2020, respondents identified two obstacles to transitioning into a system of AI-applied, data-driven policymaking in the public sector: 1) incomplete standardization and linkage of policy information across government ministries and 2) a lack of expertise in technology utilization in the public sector.
To propel a transition into data-driven policymaking in the public sector, coordinated national-level efforts must be made to heighten policy effectiveness across different public fields, including education, health care, public safety, national defense, and business support. One way to adopt AI technologies in the public sector is by designing a policy to support technology adoption for competent public institutions. Support measures may cover systems, data platforms, security, organizational consulting, training, etc. Detailed strategies include: 1) unifying existing data management systems into a single platform, 2) reorganizing the way government work gets done to enable efficient exchange of policy information, and 3) building a trust-based public-private partnership. Examining the policy cycle from planning and implementation to evaluation, the government should clarify the areas where AI can contribute to policy decision-making. It also needs step-by-step strategies toward data-driven policymaking, such as setting clear project objectives, selecting and sharing data, establishing systems and security, and promoting operational transparency.
    Keywords: Artificial intelligence, Public sector, SME policy, South Korea
    Date: 2022
  26. By: Nasir, Nida; Kansal, Afreen; Alshaltone, Omar; Barneih, Feras; Shanableh, Abdallah; Al-Shabi, Mohammad; Al Shammaa, Ahmed
    Abstract: Water features are among the most crucial environmental elements for strengthening climate-change adaptation. Remote sensing (RS) technologies driven by artificial intelligence (AI) have emerged as one of the most sought-after approaches for automating water information extraction. In this paper, a stacked ensemble model approach is proposed on the AquaSat dataset (more than 500,000 images collected via satellite and Google Earth Engine). A one-way analysis of variance (ANOVA) test and the Kruskal-Wallis test are conducted for various optical-based variables at the 99% confidence level to understand how these vary across different water bodies. Oversampling is performed on the training data using the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance, while the model is tested on imbalanced data, replicating the real-life situation. To advance the state of the art, the strengths of standalone machine learning classifiers and neural networks are combined. The stacked model obtained 100% accuracy on the testing data when using a decision tree classifier as the meta model. The study uses five-fold cross-validation and will help researchers working on in-situ water body detection with stacked-model classification.
    Keywords: ANOVA; classification; meta learning; smote; stacked modeling
    JEL: C1
    Date: 2023–05–01
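The SMOTE step mentioned in the abstract synthesizes new minority-class samples by interpolating between a sample and one of its nearest minority-class neighbours. A minimal sketch on toy 2-D data (the data and parameters below are illustrative, not from the paper):

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Minimal SMOTE: create n_new synthetic minority samples by linear
    interpolation between each chosen sample and one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)              # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]        # k nearest neighbour indices
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))         # pick a minority sample
        j = nn[i, rng.integers(k)]           # pick one of its neighbours
        gap = rng.random()                   # interpolation weight in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

# Toy minority class: four points on the unit square -- hypothetical data
X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_synth = smote(X_minority, n_new=10)
```

Because each synthetic point is a convex combination of two real minority samples, the new samples stay inside the region spanned by the minority class, which is what makes the technique useful for balancing training data without duplicating rows.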
  27. By: Agam Shah; Sudheer Chava
    Abstract: Recently, large language models (LLMs) like ChatGPT have shown impressive zero-shot performance on many natural language processing tasks. In this paper, we investigate the effectiveness of zero-shot LLMs in the financial domain. We compare the performance of ChatGPT and several open-source generative LLMs in zero-shot mode with RoBERTa fine-tuned on annotated data. We address three inter-related research questions on data annotation, performance gaps, and the feasibility of employing generative models in the finance domain. Our findings demonstrate that ChatGPT performs well even without labeled data, but fine-tuned models generally outperform it. Our research also highlights how annotating with generative models can be time-intensive. Our codebase is publicly available on GitHub under a CC BY-NC 4.0 license.
    Date: 2023–05
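"Zero-shot" here means the model receives only a task description, with no labeled examples in the prompt. The paper's actual prompts are not given in the abstract; a hypothetical sketch of such a prompt for financial sentiment classification might look like this:

```python
def zero_shot_sentiment_prompt(sentence: str) -> str:
    """Build a zero-shot classification prompt for a generative LLM.
    No labeled examples are included -- the model must rely solely on
    the task description, which is what defines the zero-shot setting."""
    return (
        "Classify the sentiment of the following financial sentence as "
        "positive, negative, or neutral. Answer with a single word.\n\n"
        f"Sentence: {sentence}\nAnswer:"
    )

prompt = zero_shot_sentiment_prompt("The firm's quarterly profit rose 12%.")
```

A fine-tuned model such as RoBERTa, by contrast, learns the label mapping from annotated training data rather than from an in-prompt instruction, which is the comparison the paper draws.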

This nep-big issue is ©2023 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.