nep-big New Economics Papers
on Big Data
Issue of 2024‒07‒22
23 papers chosen by
Tom Coupé, University of Canterbury


  1. Can AI with High Reasoning Ability Replicate Human-like Decision Making in Economic Experiments? By Ayato Kitadai; Sinndy Dayana Rico Lugo; Yudai Tsurusaki; Yusuke Fukasawa; Nariaki Nishino
  2. Text mining in economics and health economics using Stata By Carlo Drago
  3. Convolutional Neural Networks to signal currency crises: from the Asian financial crisis to the Covid crisis. By Eric AVENEL
  4. Estimating Treatment Effects under Recommender Interference: A Structured Neural Networks Approach By Ruohan Zhan; Shichao Han; Yuchen Hu; Zhenling Jiang
  5. Management Decisions in Manufacturing using Causal Machine Learning -- To Rework, or not to Rework? By Philipp Schwarz; Oliver Schacht; Sven Klaassen; Daniel Grünbaum; Sebastian Imhof; Martin Spindler
  6. Testing identification in mediation and dynamic treatment models By Martin Huber; Kevin Kloiber; Lukas Laffers
  7. A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges By Yuqi Nie; Yaxuan Kong; Xiaowen Dong; John M. Mulvey; H. Vincent Poor; Qingsong Wen; Stefan Zohren
  8. Artificial Intelligence in Agriculture: Revolutionizing Methods and Practices in Portugal By Maria José Sousa
  9. MacroHFT: Memory Augmented Context-aware Reinforcement Learning On High Frequency Trading By Chuqiao Zong; Chaojie Wang; Molei Qin; Lei Feng; Xinrun Wang; Bo An
  10. Optimizing Sales Forecasts through Automated Integration of Market Indicators By Lina Döring; Felix Grumbach; Pascal Reusch
  11. Climate Policy Uncertainty and Financial Stress: Evidence for China By Rangan Gupta; Qiang Ji; Christian Pierdzioch
  12. Distance to Export: A Machine Learning Approach with Portuguese Firms By Paulo Barbosa; João Cortes; João Amador
  13. The Hybrid Forecast of S&P 500 Volatility ensembled from VIX, GARCH and LSTM models By Natalia Roszyk; Robert Ślepaczuk
  14. Operator Deep Smoothing for Implied Volatility By Lukas Gonon; Antoine Jacquier; Ruben Wiedemann
  15. Algorithms for College Admissions Decision Support: Impacts of Policy Change and Inherent Variability By Lee, Jinsook; Harvey, Emma; Zhou, Joyce; Garg, Nikhil; Joachims, Thorsten; Kizilcec, René F.
  16. Application of Natural Language Processing in Financial Risk Detection By Liyang Wang; Yu Cheng; Ao Xiang; Jingyu Zhang; Haowei Yang
  17. Improving Realized LGD approximation: A Novel Framework with XGBoost for handling missing cash-flow data By Zuzanna Kostecka; Robert Ślepaczuk
  18. A Blueprint for Improving Automated Driving System Safety By Cohen D'Agostino, Mollie; Michael, Cooper E; Ramos, Marilla; Correa-Jullian, Camila
  19. The #MeToo Movement and Judges' Gender Gap in Decisions By Cai, Xiqian; Chen, Shuai; Cheng, Zhengquan
  20. Financial Assets Dependency Prediction Utilizing Spatiotemporal Patterns By Haoren Zhu; Pengfei Zhao; Wilfred Siu Hung NG; Dik Lun Lee
  21. Enhancing supply chain security with automated machine learning By Haibo Wang; Lutfu S. Sua; Bahram Alidaee
  22. Gasoline Prices and Presidential Approval Ratings of the United States By Rangan Gupta; Christian Pierdzioch; Aviral K. Tiwari
  23. Different Newspapers – Different Inflation Perceptions By Arndt, Sarah

  1. By: Ayato Kitadai; Sinndy Dayana Rico Lugo; Yudai Tsurusaki; Yusuke Fukasawa; Nariaki Nishino
    Abstract: Economic experiments offer a controlled setting for researchers to observe human decision-making and test diverse theories and hypotheses; however, gathering many individuals as experimental participants incurs substantial costs and effort. To address this, with the development of large language models (LLMs), some researchers have recently attempted to simulate economic experiments using LLM-driven agents, called generative agents. If generative agents can replicate human-like decision-making in economic experiments, the cost problem of economic experiments can be alleviated. However, such a simulation framework has not yet been established. Considering previous research and the current evolutionary stage of LLMs, this study focuses on the reasoning ability of generative agents as a key factor toward establishing a framework for this new methodology. A multi-agent simulation, designed to improve the reasoning ability of generative agents through prompting methods, was developed to reproduce the result of an actual economic experiment on the ultimatum game. The results demonstrated that the higher the reasoning ability of the agents, the closer the results were to the theoretical solution rather than to the real experimental result. The results also suggest that setting the personas of the generative agents may be important for reproducing the results of real economic experiments. These findings are valuable for the future definition of a framework for replacing human participants with generative agents in economic experiments as LLMs develop further.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.11426
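    The paper's prompting setup is not reproduced above, but the simulation loop it describes can be sketched compactly. Below is a minimal, hypothetical Python sketch of one ultimatum-game round between two generative agents; `ask_llm` is a placeholder stub (returning canned replies so the example runs), not the authors' actual prompting method.
```python
# Hypothetical sketch of one ultimatum-game round between generative agents.
# `ask_llm` stands in for any chat-completion API; canned replies keep it runnable.

def ask_llm(prompt: str) -> str:
    """Placeholder for an LLM call; wire up a real provider to experiment."""
    return "40" if "proposer" in prompt else "ACCEPT"

def play_ultimatum(endowment: int = 100) -> dict:
    # Proposer agent: a step-by-step prompt is one way to raise reasoning effort.
    reply = ask_llm(
        f"You are the proposer in an ultimatum game over {endowment} points. "
        "Think step by step, then state only the integer you offer the responder."
    )
    offer = max(0, min(endowment, int("".join(c for c in reply if c.isdigit()) or 0)))

    # Responder agent accepts or rejects the proposed split.
    decision = ask_llm(
        f"You are the responder. You are offered {offer} of {endowment} points. "
        "Reply ACCEPT or REJECT only."
    )
    accepted = "ACCEPT" in decision.upper()
    return {"offer": offer, "accepted": accepted,
            "payoffs": (endowment - offer, offer) if accepted else (0, 0)}

print(play_ultimatum())
```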
  2. By: Carlo Drago (University Niccolò Cusano)
    Abstract: Among the more relevant data science topics, text mining is an important and active research area that offers various ways to extract information and insights from text data. Its continued use and improvement could drive innovation in several areas and improve our ability to interpret, evaluate, and utilize the vast amounts of unstructured text produced in the digital age. Extracting insightful information from text data through text mining in healthcare and business holds great promise. Text mining in business can provide insightful information by analyzing large amounts of text data, including research papers, news, and financial reports. It can help economists analyze market sentiment, identify emerging trends, and more accurately predict economic indicators. For example, economists can find terms or phrases that reflect investment behavior and sentiment changes by applying text-mining methods to financial news. Text mining can provide essential insights into health economics by examining various textual data, including patient surveys, clinical trials, medical records, and health policy. Researchers and policymakers can use it to better understand healthcare utilization patterns, identify the variables that influence patient outcomes, and evaluate the effectiveness of different healthcare treatments. Text mining can examine electronic health data and identify trends in disease incidence, treatment effectiveness, and healthcare utilization. In this presentation, I will illustrate the instruments currently available in Stata to facilitate several text-mining methods.
    Date: 2024–05–09
    URL: https://d.repec.org/n?u=RePEc:boc:isug24:10
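    The talk covers Stata tooling; as a rough illustration of the kind of term-extraction step described, here is a minimal Python sketch (toy documents, illustrative only) that builds a document-term matrix from financial news snippets.
```python
# Toy text-mining step: term frequencies from financial news snippets.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "Investors fear rising rates will dampen growth",
    "Strong earnings lift market sentiment and investment",
]
vec = CountVectorizer(stop_words="english", ngram_range=(1, 2))
X = vec.fit_transform(docs)  # sparse document-term matrix

# Rank terms by corpus-wide frequency.
freqs = zip(vec.get_feature_names_out(), X.sum(axis=0).A1)
for term, count in sorted(freqs, key=lambda t: -t[1])[:10]:
    print(term, count)
```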
  3. By: Eric AVENEL (Univ Rennes, CNRS, CREM – UMR6211, F-35000 Rennes France)
    Abstract: The successive Cournot oligopoly model presented in Salinger (1988) is very popular in the literature on vertical relations. There is, however, a problem in this model, since the assumption of elastic supply on the intermediate market is inconsistent with the assumption that upstream firms choose their output before downstream firms place their orders. I show that dropping the assumption of elastic supply on the intermediate market and complementing the model with a well-chosen allocation rule - the competitive rule of Cho and Tang (2014) - restores the validity of the results in Salinger (1988) and the subsequent contributions using the same model.
    Keywords: Cournot competition, successive oligopoly, allocation rule.
    JEL: L13
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:tut:cremwp:2024-02
  4. By: Ruohan Zhan; Shichao Han; Yuchen Hu; Zhenling Jiang
    Abstract: Recommender systems are essential for content-sharing platforms by curating personalized content. To evaluate updates of recommender systems targeting content creators, platforms frequently engage in creator-side randomized experiments to estimate the treatment effect, defined as the difference in outcomes when a new (vs. the status quo) algorithm is deployed on the platform. We show that the standard difference-in-means estimator can lead to a biased treatment effect estimate. This bias arises because of recommender interference, which occurs when treated and control creators compete for exposure through the recommender system. We propose a "recommender choice model" that captures how an item is chosen among a pool comprising both treated and control content items. By combining a structural choice model with neural networks, the framework directly models the interference pathway in a microfounded way while accounting for rich viewer-content heterogeneity. Using the model, we construct a double/debiased estimator of the treatment effect that is consistent and asymptotically normal. We demonstrate its empirical performance with a field experiment on the Weixin short-video platform: besides the standard creator-side experiment, we carry out a costly blocked double-sided randomization design to obtain a benchmark estimate without interference bias. We show that the proposed estimator significantly reduces the bias in treatment effect estimates compared to the standard difference-in-means estimator.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.14380
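    The paper's estimator is built around its recommender choice model, which is not reproduced here; the sketch below shows only the generic double/debiased (AIPW) building block with cross-fitting, on synthetic placeholder data.
```python
# Generic doubly robust (AIPW) treatment-effect estimator with cross-fitting;
# the paper layers its structural recommender choice model on top of this idea.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, d, y, n_splits=5, seed=0):
    scores = np.zeros(len(y))
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Nuisance 1: propensity of treatment given covariates.
        p = RandomForestClassifier(random_state=seed).fit(X[train], d[train])
        ps = np.clip(p.predict_proba(X[test])[:, 1], 0.01, 0.99)
        # Nuisance 2: outcome regressions under treatment and control.
        mu = {a: RandomForestRegressor(random_state=seed)
                 .fit(X[train][d[train] == a], y[train][d[train] == a])
                 .predict(X[test]) for a in (0, 1)}
        dt, yt = d[test], y[test]
        scores[test] = (mu[1] - mu[0] + dt * (yt - mu[1]) / ps
                        - (1 - dt) * (yt - mu[0]) / (1 - ps))
    return scores.mean(), scores.std() / np.sqrt(len(y))

rng = np.random.default_rng(0)                       # synthetic illustration
X = rng.normal(size=(2000, 3))
d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2 * d + X.sum(axis=1) + rng.normal(size=2000)    # true effect = 2
print(aipw_ate(X, d, y))
```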
  5. By: Philipp Schwarz; Oliver Schacht; Sven Klaassen; Daniel Grünbaum; Sebastian Imhof; Martin Spindler
    Abstract: In this paper, we present a data-driven model for estimating optimal rework policies in manufacturing systems. We consider a single production stage within a multistage, lot-based system that allows for optional rework steps. While the rework decision depends on an intermediate state of the lot and system, the final product inspection, and thus the assessment of the actual yield, is delayed until production is complete. Repair steps are applied uniformly to the lot, potentially improving some of the individual items while degrading others. The challenge is thus to balance potential yield improvement with the rework costs incurred. Given the inherently causal nature of this decision problem, we propose a causal model to estimate yield improvement. We apply methods from causal machine learning, in particular double/debiased machine learning (DML) techniques, to estimate conditional treatment effects from data and derive policies for rework decisions. We validate our decision model using real-world data from opto-electronic semiconductor manufacturing, achieving a yield improvement of 2-3% during the color-conversion process of white light-emitting diodes (LEDs).
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.11308
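    A minimal sketch of how an estimated conditional effect turns into a rework rule, in the spirit of the decision problem above. `cate_model` is any fitted regressor predicting the yield gain from rework; the synthetic data and unit economics are illustrative assumptions, not the paper's figures.
```python
# Rework a lot iff the predicted yield gain, valued in revenue, beats the cost.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def rework_decisions(cate_model, lot_features, value_per_unit, rework_cost):
    """Boolean rework decision per lot from a fitted effect model."""
    gain = cate_model.predict(lot_features)        # predicted yield uplift
    return gain * value_per_unit > rework_cost

# Illustrative usage with synthetic data standing in for DML-estimated effects.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # intermediate lot state
tau = 0.03 * X[:, 0] + rng.normal(0, 0.01, 200)    # noisy yield gains
cate_model = GradientBoostingRegressor().fit(X, tau)
print(rework_decisions(cate_model, X[:5], value_per_unit=50.0, rework_cost=0.8))
```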
  6. By: Martin Huber; Kevin Kloiber; Lukas Laffers
    Abstract: We propose a test for the identification of causal effects in mediation and dynamic treatment models that is based on two sets of observed variables, namely covariates to be controlled for and suspected instruments, building on the test by Huber and Kueck (2022) for single treatment models. We consider models with a sequential assignment of a treatment and a mediator to assess the direct treatment effect (net of the mediator), the indirect treatment effect (via the mediator), or the joint effect of both treatment and mediator. We establish testable conditions for identifying such effects in observational data. These conditions jointly imply (1) the exogeneity of the treatment and the mediator conditional on covariates and (2) the validity of distinct instruments for the treatment and the mediator, meaning that the instruments do not directly affect the outcome (other than through the treatment or mediator) and are unconfounded given the covariates. Our framework extends to post-treatment sample selection or attrition problems when replacing the mediator by a selection indicator for observing the outcome, enabling joint testing of the selectivity of treatment and attrition. We propose a machine learning-based test to control for covariates in a data-driven manner and analyze its finite sample performance in a simulation study. Additionally, we apply our method to Slovak labor market data and find that our testable implications are not rejected for a sequence of training programs typically considered in dynamic treatment evaluations.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.13826
  7. By: Yuqi Nie; Yaxuan Kong; Xiaowen Dong; John M. Mulvey; H. Vincent Poor; Qingsong Wen; Stefan Zohren
    Abstract: Recent advances in large language models (LLMs) have unlocked novel opportunities for machine learning applications in the financial domain. These models have demonstrated remarkable capabilities in understanding context, processing vast amounts of data, and generating human-preferred content. In this survey, we explore the application of LLMs to various financial tasks, focusing on their potential to transform traditional practices and drive innovation. We discuss the progress and advantages of LLMs in financial contexts, analyzing their advanced technologies as well as prospective capabilities in contextual understanding, transfer learning flexibility, complex emotion detection, and more. We then categorize the existing literature into key application areas, including linguistic tasks, sentiment analysis, financial time series, financial reasoning, agent-based modeling, and other applications. For each application area, we delve into specific methodologies, such as textual analysis, knowledge-based analysis, forecasting, data augmentation, planning, decision support, and simulations. Furthermore, a comprehensive collection of datasets, model assets, and useful code associated with mainstream applications is presented as a resource for researchers and practitioners. Finally, we outline the challenges and opportunities for future research, particularly emphasizing a number of distinctive aspects of this field. We hope our work can help facilitate the adoption and further development of LLMs in the financial sector.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.11903
  8. By: Maria José Sousa (ISCTE Instituto Universitário de Lisboa)
    Abstract: Artificial Intelligence (AI) has emerged as a focal point for researchers and industry experts, continuously redefined by technological advancements. AI encompasses the development of machines that emulate human cognitive processes, such as learning, reasoning, and self-correction. Its wide-ranging applications across industries have showcased its increasing precision and efficiency, and agriculture has also embraced AI to increase income and efficiency. To this end, a literature review was performed to comprehensively understand the concept, existing research, and projects related to AI in agriculture. Moreover, this paper takes a practical approach to the potential of AI in agriculture, addressing the emergence of new methods and practices through a case study and analyzing how experts, academics, and agriculture professionals perceive the impacts of AI in agriculture. It contributes to real application development, offering insights that resonate within academic and practical dimensions.
    Keywords: Artificial Intelligence, Agriculture, Efficiency, Quantitative analysis
    JEL: D20 Q16
    URL: https://d.repec.org/n?u=RePEc:mde:wpaper:180
  9. By: Chuqiao Zong; Chaojie Wang; Molei Qin; Lei Feng; Xinrun Wang; Bo An
    Abstract: High-frequency trading (HFT), which executes algorithmic trades on short time scales, has recently come to dominate the cryptocurrency market. Besides traditional quantitative trading methods, reinforcement learning (RL) has become another appealing approach for HFT due to its strength in handling high-dimensional financial data and solving sophisticated sequential decision-making problems; e.g., hierarchical reinforcement learning (HRL) has shown promising performance on second-level HFT by training a router to select only one sub-agent from the agent pool to execute the current transaction. However, existing RL methods for HFT still have some defects: 1) standard RL-based trading agents suffer from overfitting, preventing them from making effective policy adjustments based on financial context; 2) due to rapid changes in market conditions, investment decisions made by an individual agent are usually one-sided and highly biased, which might lead to significant losses in extreme markets. To tackle these problems, we propose a novel Memory Augmented Context-aware Reinforcement learning method On HFT, a.k.a. MacroHFT, which consists of two training phases: 1) we first train multiple types of sub-agents on market data decomposed according to various financial indicators, specifically market trend and volatility, where each agent owns a conditional adapter to adjust its trading policy according to market conditions; 2) we then train a hyper-agent to mix the decisions from these sub-agents and output a consistently profitable meta-policy to handle rapid market fluctuations, equipped with a memory mechanism to enhance its decision-making capability. Extensive experiments on various cryptocurrency markets demonstrate that MacroHFT achieves state-of-the-art performance on minute-level trading tasks.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.14537
  10. By: Lina Döring; Felix Grumbach; Pascal Reusch
    Abstract: Recognizing that traditional forecasting models often rely solely on historical demand, this work investigates the potential of data-driven techniques to automatically select and integrate market indicators for improving customer demand predictions. Adopting an exploratory methodology, we integrate macroeconomic time series, such as national GDP growth, from the Eurostat database into Neural Prophet and SARIMAX forecasting models. Suitable time series are automatically identified through different state-of-the-art feature selection methods and applied to sales data from our industrial partner. We show that forecasts can be significantly enhanced by incorporating external information. Notably, the potential of feature selection methods stands out, especially due to their capability for automation without expert knowledge and manual selection effort. In particular, the Forward Feature Selection technique consistently yielded superior forecasting accuracy for both SARIMAX and Neural Prophet across different company sales datasets. In a comparative analysis of the errors of the two selected forecasting models, Neural Prophet and SARIMAX, neither model demonstrates significant superiority over the other.
    Date: 2024–05
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.07564
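    A minimal sketch of the pipeline described above: forward feature selection over candidate macro indicators, then SARIMAX with the selected series as exogenous regressors. The Ridge proxy used for selection and the synthetic monthly data are assumptions, not the authors' exact setup.
```python
# Forward feature selection over macro indicators, then SARIMAX with exog.
import numpy as np
import pandas as pd
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import Ridge
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder monthly sales and candidate macro indicators.
rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
macro = pd.DataFrame(rng.normal(size=(96, 8)), index=idx,
                     columns=[f"indicator_{i}" for i in range(8)])
sales = pd.Series(100 + 5 * macro["indicator_0"] + rng.normal(size=96),
                  index=idx, name="sales")

# Forward selection: greedily add indicators that help a simple regression.
sfs = SequentialFeatureSelector(Ridge(), n_features_to_select=3,
                                direction="forward")
sfs.fit(macro, sales)
selected = macro.columns[sfs.get_support()]

model = SARIMAX(sales, exog=macro[selected], order=(1, 1, 1),
                seasonal_order=(1, 0, 1, 12)).fit(disp=False)
print(list(selected), round(model.aic, 1))
```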
  11. By: Rangan Gupta (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa); Qiang Ji (Institutes of Science and Development, Chinese Academy of Sciences, Beijing, China; School of Public Policy and Management, University of Chinese Academy of Sciences, Beijing, China); Christian Pierdzioch (Department of Economics, Helmut Schmidt University, Holstenhofweg 85, P.O.B. 700822, 22008 Hamburg, Germany)
    Abstract: Focusing on China, we study the predictive value of Chinese climate policy uncertainty (CCPU) for subsequent stress in China’s financial markets in a sample of daily data running from October 2006 to December 2022. We control for the impact of international spillover effects of financial stress originating in the European Union (EU), the United Kingdom (UK), and the United States (US), and also for a large number of other important macroeconomic, financial, and behavioral variables. Given the large number of predictors, we use random forests, an ensemble machine-learning technique, to trace out the impact of CCPU on financial stress by means of an out-of-sample forecasting experiment. We find that CCPU has predictive value for subsequent financial stress, and that its predictive power is stronger than that of measures of global climate risk. Its predictive value is strongest at a short (daily) forecast horizon and tends to decrease when the length of the forecast horizon increases. Moreover, we document the predictive value of CCPU across a spectrum of conditional quantiles of financial stress.
    Keywords: Financial stress, Climate risks, China, Random forests, Forecasting
    JEL: C22 C32 C53 G15 Q54
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:pre:wpaper:202428
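    The out-of-sample design described above can be sketched as an expanding-window random-forest forecast. The synthetic columns and the 80/20 split are placeholders, not the authors' dataset.
```python
# Expanding-window, one-day-ahead random-forest forecast of financial stress.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Placeholder data: daily CCPU, spillover controls, and a stress index.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(400, 5)),
                  columns=["ccpu", "eu_stress", "uk_stress", "us_stress",
                           "financial_stress"])
h = 1                                        # one-day-ahead horizon
X = df.drop(columns="financial_stress").iloc[:-h]
y = df["financial_stress"].shift(-h).dropna()

preds, start = [], int(len(y) * 0.8)
for t in range(start, len(y)):               # expanding estimation window
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X.iloc[:t], y.iloc[:t])
    preds.append(rf.predict(X.iloc[[t]])[0])
rmse = np.sqrt(np.mean((np.array(preds) - y.iloc[start:].to_numpy()) ** 2))
print("out-of-sample RMSE:", round(rmse, 3))
```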
  12. By: Paulo Barbosa; João Cortes; João Amador (ISEG - University of Lisbon and Portugal Trade & Investment; Portugal Trade & Investment; Banco de Portugal and Nova SBE)
    Abstract: This paper estimates how distant a firm is from becoming a successful exporter. The empirical exercise uses very rich data for Portuguese firms and assumes that there are non-trivial determinants that distinguish exporters from non-exporters. An array of machine learning models (Bayesian Additive Regression Trees (BART), Missingness Not at Random (BART-MIA), Random Forest, Logit Regression, and Neural Networks) is trained to predict firms’ export probability and shed light on the critical factors driving the transition to successful export ventures. Neural Networks outperform the other techniques and remain highly accurate when we change the export definitions and the training and testing strategies. We show that the most influential variables for prediction are labour productivity, firms’ goods and services imports, capital intensity, and wages.
    Keywords: Machine learning, Forecasting exporters, Trade promotion, Micro level data, Portugal
    JEL: F17 C53 C55 L21
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:mde:wpaper:182
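    A minimal sketch of the prediction task with the paper's best-performing model class (a neural network); the features follow the influential variables named above, but the synthetic data and architecture are illustrative assumptions.
```python
# Classify exporters vs non-exporters from firm characteristics with an MLP.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder firm-level data with the influential variables named above.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(2000, 4)),
                 columns=["labour_productivity", "imports",
                          "capital_intensity", "wages"])
y = (X["labour_productivity"] + 0.5 * X["imports"]
     + rng.normal(size=2000) > 0).astype(int)        # toy exporter flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("export probability, first test firm:", clf.predict_proba(X_te)[0, 1])
```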
  13. By: Natalia Roszyk (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group); Robert Ślepaczuk (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group, Department of Quantitative Finance and Machine Learning)
    Abstract: Predicting the S&P 500 index's volatility is crucial for investors and financial analysts as it helps in assessing market risk and making informed investment decisions. Volatility represents the level of uncertainty or risk related to the size of changes in a security's value, making it an essential indicator for financial planning. This study explores four methods to improve the accuracy of volatility forecasts for the S&P 500: the established GARCH model, known for capturing historical volatility patterns; an LSTM network that utilizes past volatility and log returns; a hybrid LSTM-GARCH model that combines the strengths of both approaches; and an advanced version of the hybrid model that also factors in the VIX index to gauge market sentiment. This analysis is based on a daily dataset that includes data for the S&P 500 and the VIX index, covering the period from January 3, 2000, to December 21, 2023. Through rigorous testing and comparison, we found that machine learning approaches, particularly the hybrid LSTM models, significantly outperform the traditional GARCH model. The inclusion of the VIX index in the hybrid model further enhances its forecasting ability by incorporating real-time market sentiment. The results of this study offer valuable insights for achieving more accurate volatility predictions, enabling better risk management and strategic investment decisions in the volatile environment of the S&P 500.
    Keywords: volatility forecasting, LSTM-GARCH, S&P 500 index, hybrid forecasting models, VIX index, machine learning, financial time series analysis, walk-forward process, hyperparameters tuning, deep learning, recurrent neural networks
    JEL: C4 C45 C55 C65 G11
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:war:wpaper:2024-13
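    A compressed sketch of the hybrid idea: fit GARCH(1, 1) on log returns with the arch package, then feed returns and the conditional volatility into an LSTM (a VIX column would enter the same way in the extended variant). The synthetic returns, window length, and architecture are assumptions, not the authors' walk-forward setup.
```python
# LSTM-GARCH hybrid sketch: GARCH conditional volatility as an LSTM input.
import numpy as np
import pandas as pd
import tensorflow as tf
from arch import arch_model

# Placeholder daily percent returns standing in for S&P 500 data.
rng = np.random.default_rng(0)
ret = pd.Series(rng.standard_t(5, 2000) * 0.8)

garch = arch_model(ret, vol="GARCH", p=1, q=1).fit(disp="off")
feats = pd.DataFrame({"ret": ret, "cond_vol": garch.conditional_volatility})

win = 22                                             # ~1 trading month of lags
Xs = np.stack([feats.values[i - win:i] for i in range(win, len(feats))])
ys = feats["cond_vol"].to_numpy()[win:]              # next-day volatility target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(win, Xs.shape[2])),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(Xs, ys, epochs=5, batch_size=32, verbose=0)
```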
  14. By: Lukas Gonon; Antoine Jacquier; Ruben Wiedemann
    Abstract: We devise a novel method for implied volatility smoothing based on neural operators. The goal of implied volatility smoothing is to construct a smooth surface that links the collection of prices observed at a specific instant on a given option market. Such price data arises highly dynamically in ever-changing spatial configurations, which poses a major limitation to foundational machine learning approaches using classical neural networks. While large models in language and image processing deliver breakthrough results on vast corpora of raw data, in financial engineering the generalization from big historical datasets has been hindered by the need for considerable data pre-processing. In particular, implied volatility smoothing has remained an instance-by-instance, hands-on process both for neural network-based and traditional parametric strategies. Our general operator deep smoothing approach, instead, directly maps observed data to smoothed surfaces. We adapt the graph neural operator architecture to do so with high accuracy on ten years of raw intraday S&P 500 options data, using a single set of weights. The trained operator adheres to critical no-arbitrage constraints and is robust with respect to subsampling of inputs (occurring in practice in the context of outlier removal). We provide extensive historical benchmarks and showcase the generalization capability of our approach in a comparison with SVI, an industry standard parametrization for implied volatility. The operator deep smoothing approach thus opens up the use of neural networks on large historical datasets in financial engineering.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.11520
  15. By: Lee, Jinsook; Harvey, Emma; Zhou, Joyce; Garg, Nikhil; Joachims, Thorsten; Kizilcec, René F. (Cornell University)
    Abstract: Each year, selective American colleges sort through tens of thousands of applications to identify a first-year class that displays both academic merit and diversity. In the 2023-2024 admissions cycle, these colleges faced unprecedented challenges in doing so. First, the number of applications has been steadily growing year-over-year. Second, test-optional policies that have remained in place since the COVID-19 pandemic limit access to key information that has historically been predictive of academic success. Most recently, longstanding debates over affirmative action culminated in the Supreme Court banning race-conscious admissions. Colleges have explored machine learning (ML) models to address the issues of scale and missing test scores, often via ranking algorithms intended to allow human reviewers to focus attention on ‘top’ applicants. However, the Court’s ruling will force changes to these models, which were previously able to consider race as a factor in ranking. There is currently a poor understanding of how these mandated changes will shape applicant ranking algorithms, and, by extension, admitted classes. We seek to address this by quantifying the impact of different admission policies on the applications prioritized for review. We show that removing race data from a previously developed applicant ranking algorithm reduces the diversity of the top-ranked pool of applicants without meaningfully increasing the academic merit of that pool. We contextualize this impact by showing that excluding data on applicant race has a greater impact than excluding other potentially informative variables like intended majors. Finally, we measure the impact of policy change on individuals by comparing the arbitrariness in applicant rank attributable to policy change to the arbitrariness attributable to randomness (i.e., how much an applicant’s rank changes across models that use the same policy but are trained on bootstrapped samples from the same dataset). We find that any given policy has a high degree of arbitrariness (i.e., at most 9% of applicants are consistently ranked in the top 20%), and that removing race data from the ranking algorithm increases arbitrariness in outcomes for most applicants.
    Date: 2024–06–20
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:hds5g
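    The arbitrariness measure described above can be sketched as follows: refit the same ranking model on bootstrap resamples and record how consistently each applicant lands in the top 20%. The model class and synthetic features are placeholders.
```python
# Bootstrap arbitrariness of a ranking model's top-20% pool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

def top20_consistency(X, y, n_boot=50, seed=0):
    """Share of bootstrap-trained rankers placing each applicant in the top 20%."""
    rng = np.random.default_rng(seed)
    n = len(y)
    in_top = np.zeros((n_boot, n))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                    # bootstrap resample
        clf = GradientBoostingClassifier(random_state=seed).fit(X[idx], y[idx])
        score = clf.predict_proba(X)[:, 1]             # rank all applicants
        in_top[b] = score >= np.quantile(score, 0.8)
    return in_top.mean(axis=0)   # 1.0 = always top; near 0.2 = highly arbitrary

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
consistency = top20_consistency(X, y, n_boot=20)
print((consistency >= 0.95).mean(), "of applicants are consistently top-ranked")
```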
  16. By: Liyang Wang; Yu Cheng; Ao Xiang; Jingyu Zhang; Haowei Yang
    Abstract: This paper explores the application of Natural Language Processing (NLP) in financial risk detection. By constructing an NLP-based financial risk detection model, this study aims to identify and predict potential risks in financial documents and communications. First, the fundamental concepts of NLP and its theoretical foundation, including text mining methods, NLP model design principles, and machine learning algorithms, are introduced. Second, the process of text data preprocessing and feature extraction is described. Finally, the effectiveness and predictive performance of the model are validated through empirical research. The results show that the NLP-based financial risk detection model performs excellently in risk identification and prediction, providing effective risk management tools for financial institutions. This study offers valuable references for the field of financial risk management, utilizing advanced NLP techniques to improve the accuracy and efficiency of financial risk detection.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.09765
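    A basic version of the pipeline described (TF-IDF features feeding a linear classifier) can be sketched as below; the toy documents and labels are illustrative, and the paper's exact model design is not reproduced.
```python
# Toy NLP risk-detection pipeline: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled documents; 1 flags risk language (illustrative only).
docs = ["Counterparty failed to post collateral on time",
        "Routine quarterly filing with no material changes",
        "Missed payment and covenant breach reported",
        "Dividend declared as scheduled"]
labels = [1, 0, 1, 0]

pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
pipe.fit(docs, labels)
print(pipe.predict_proba(["Late collateral and missed payment"])[0, 1])
```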
  17. By: Zuzanna Kostecka (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group); Robert Ślepaczuk (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group, Department of Quantitative Finance and Machine Learning)
    Abstract: Accurate calculation of the Loss Given Default (LGD) parameter requires comprehensive financial data. In this research, we explore methods for improving the approximation of realized LGD under conditions of limited access to cash-flow data. We enhance the performance of the method that relies on differences between exposure values (the delta outstanding approach) by employing machine learning (ML) techniques. The research utilizes data from the mortgage portfolio of one of the European countries and assumes close resemblance to similar economic contexts. It incorporates non-financial variables and macroeconomic data related to the housing market, improving the accuracy of loss severity approximation. The proposed methodology attempts to mitigate country-specific (related to local legal frameworks) or portfolio-specific factors in order to show the general advantage of applying ML techniques rather than a case-specific relation. We developed an XGBoost model that does not rely on cash-flow data yet enhances the accuracy of realized LGD estimation compared to results obtained with the delta outstanding approach. A novel aspect of our work is the detailed exploration of the delta outstanding approach and the methodology for addressing conditions of limited access to cash-flow data through machine learning models.
    Keywords: LGD, Credit risk, Outstanding, Machine Learning, Missing data, Mortgage loan, financial statements, macroeconomic data
    JEL: C4 C45 C55 C65 G11
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:war:wpaper:2024-12
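    A minimal sketch of the modelling step: an XGBoost regression of realized LGD on exposure, non-financial, and housing-market features, with no cash-flow inputs. The synthetic loan data, feature names, and hyperparameters are illustrative assumptions.
```python
# XGBoost regression of realized LGD without cash-flow features.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Placeholder loan-level data standing in for the mortgage portfolio.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.uniform(size=(1000, 5)),
                  columns=["delta_outstanding", "ltv", "loan_age",
                           "region_hpi", "unemployment"])
y = (0.6 * df["delta_outstanding"] + 0.2 * df["ltv"]
     + rng.normal(0, 0.05, 1000)).clip(0, 1)        # toy realized LGD in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```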
  18. By: Cohen D'Agostino, Mollie; Michael, Cooper E; Ramos, Marilla; Correa-Jullian, Camila
    Abstract: Vehicle automation represents a new safety frontier that may necessitate a repositioning of our safety oversight systems. This white paper serves as a primer on the technical and legal landscape of automated driving system (ADS) safety. It introduces the latest AI and machine learning techniques that enable ADS functionality. The paper also explores the definitions of safety from the perspectives of standards-setting organizations, federal and state regulations, and legal disciplines. The paper identifies key policy options building on topics raised in the White House’s Blueprint for an AI Bill of Rights, outlining a Blueprint for ADS safety. The analysis concludes that potential ADS safety reforms might include either reform of the Federal Motor Vehicle Safety Standards (FMVSS), or a more holistic risk analysis “safety case” approach. The analysis also looks at case law on liability in robotics, as well as judicial activity on consumer and commercial privacy, recognizing that the era of AI will reshape liability frameworks, and data collection must carefully consider how to build in accountability and protect the privacy of consumers and organizations. Lastly, this analysis highlights the need for policies addressing human-machine interaction issues, focusing on guidelines for safety drivers and remote operators. In conclusion, this paper reflects on the need for collaboration among engineers, policy experts, and legal scholars to develop a comprehensive Blueprint for ADS safety and highlights opportunities for future research.
    Keywords: Law, Automated vehicle control, traffic safety, case law, policy, machine learning, artificial intelligence
    Date: 2024–07–01
    URL: https://d.repec.org/n?u=RePEc:cdl:itsdav:qt46d6d86x
  19. By: Cai, Xiqian; Chen, Shuai; Cheng, Zhengquan
    Abstract: Gender inequality and discrimination still persist, even though the gender gap in the labor market has been gradually decreasing. This study examines the effect of the #MeToo movement on the gender gap in a vital labor market outcome for judges: judicial decisions on randomly assigned legal cases in China. We apply a difference-in-differences approach to unique verdict data that include rich textual information on the characteristics of cases and judges, and compare changes after the movement in the sentences handed down by judges of different genders. We find that female judges made more severe decisions post-movement, which almost closed the gender gap. Moreover, we explore gender norms as a potential mechanism, documenting evidence of improved awareness of gender equality among women following the movement and stronger reductions in the judges' gender gap in regions with better awareness of gender equality. This implies that female judges became willing to stand out and speak up, converging to their male counterparts after the #MeToo movement.
    Keywords: #MeToo movement, Gender gap, Inequality, Judicial decision, Crime, Machine Learning
    JEL: J16 K14 O12 P35 D63
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:zbw:glodps:1453
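    The design maps to a standard difference-in-differences regression; a generic sketch (synthetic placeholder data, clustered standard errors) is below.
```python
# Generic DiD: sentence severity on female-judge x post-#MeToo interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder verdict-level data; names are illustrative, not the authors'.
rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "female_judge": rng.integers(0, 2, n),
    "post_metoo": rng.integers(0, 2, n),
    "region": rng.integers(0, 20, n),
})
df["sentence_months"] = (24 - 3 * df["female_judge"]
                         + 2.5 * df["female_judge"] * df["post_metoo"]
                         + rng.normal(0, 6, n))      # gap closes post-movement

did = smf.ols("sentence_months ~ female_judge * post_metoo + C(region)",
              data=df).fit(cov_type="cluster", cov_kwds={"groups": df["region"]})
print(did.params["female_judge:post_metoo"])         # DiD estimate (~2.5)
```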
  20. By: Haoren Zhu; Pengfei Zhao; Wilfred Siu Hung NG; Dik Lun Lee
    Abstract: Financial assets exhibit complex dependency structures, which are crucial for investors to create diversified portfolios that mitigate risk in volatile financial markets. To explore the dynamics of financial asset dependencies, we propose a novel approach that models the dependencies of assets as an Asset Dependency Matrix (ADM) and treats ADM sequences as image sequences. This allows us to leverage deep learning-based video prediction methods to capture the spatiotemporal dependencies among assets. However, unlike images, where neighboring pixels exhibit explicit spatiotemporal dependencies due to the natural continuity of object movements, assets in an ADM have no natural order. This poses challenges for organizing the relational assets so as to better reveal the spatiotemporal dependencies among neighboring assets for ADM forecasting. To tackle these challenges, we propose the Asset Dependency Neural Network (ADNN), which employs the Convolutional Long Short-Term Memory (ConvLSTM) network, a highly successful method for video prediction. ADNN can employ static and dynamic transformation functions to optimize the representations of the ADM. Through extensive experiments, we demonstrate that our proposed framework consistently outperforms the baselines in ADM prediction and downstream application tasks. This research contributes to understanding and predicting asset dependencies, offering valuable insights for financial market participants.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.11886
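    The core ADNN ingredient can be sketched with a ConvLSTM that maps a sequence of ADM "frames" to the next matrix. The shapes, toy data, and single-layer architecture are assumptions, not the paper's full ADNN with transformation functions.
```python
# ConvLSTM mapping a sequence of Asset Dependency Matrices to the next one.
import numpy as np
import tensorflow as tf

T, n = 10, 30                      # sequence length, number of assets
seqs = np.random.rand(64, T, n, n, 1).astype("float32")   # toy ADM sequences
next_adm = np.random.rand(64, n, n, 1).astype("float32")  # toy targets

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, n, n, 1)),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same"),
    tf.keras.layers.Conv2D(1, kernel_size=1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(seqs, next_adm, epochs=2, verbose=0)
```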
  21. By: Haibo Wang; Lutfu S. Sua; Bahram Alidaee
    Abstract: This study tackles the complexities of global supply chains, which are increasingly vulnerable to disruptions caused by port congestion, material shortages, and inflation. To address these challenges, we explore the application of machine learning methods, which excel in predicting and optimizing solutions based on large datasets. Our focus is on enhancing supply chain security through fraud detection, maintenance prediction, and material backorder forecasting. We introduce an automated machine learning framework that streamlines data analysis, model construction, and hyperparameter optimization for these tasks. By automating these processes, our framework improves the efficiency and effectiveness of supply chain security measures. Our research identifies key factors that influence machine learning performance, including sampling methods, categorical encoding, feature selection, and hyperparameter optimization. We demonstrate the importance of considering these factors when applying machine learning to supply chain challenges. Traditional mathematical programming models often struggle to cope with the complexity of large-scale supply chain problems. Our study shows that machine learning methods can provide a viable alternative, particularly when dealing with extensive datasets and complex patterns. The automated machine learning framework presented in this study offers a novel approach to supply chain security, contributing to the existing body of knowledge in the field. Its comprehensive automation of machine learning processes makes it a valuable contribution to the domain of supply chain management.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.13166
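    One automated step the framework covers (coupling feature selection with hyperparameter search for, e.g., a backorder classifier) can be sketched with scikit-learn; the grid and the synthetic data are placeholders.
```python
# Pipeline combining feature selection and hyperparameter search.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Placeholder data standing in for encoded supply-chain features / backorder flag.
X, y = make_classification(n_samples=500, n_features=30, weights=[0.9],
                           random_state=0)

pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("clf", RandomForestClassifier(random_state=0))])
grid = GridSearchCV(pipe, {"select__k": [10, 20, "all"],
                           "clf__n_estimators": [200, 500],
                           "clf__max_depth": [None, 8]},
                    cv=5, scoring="f1")
grid.fit(X, y)
print(grid.best_params_)
```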
  22. By: Rangan Gupta (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa); Christian Pierdzioch (Department of Economics, Helmut Schmidt University, Holstenhofweg 85, P.O.B. 700822, 22008 Hamburg, Germany); Aviral K. Tiwari (Indian Institute of Management Bodh Gaya, Bodh Gaya, India)
    Abstract: We use random forests, a machine-learning technique, to formally examine the link between real gasoline prices and presidential approval ratings of the United States (US). Random forests make it possible to study this link in a completely data-driven way, such that nonlinearities in the data can easily be detected and a large number of control variables, in line with the extant literature, can be considered. Our empirical findings show that the link between real gasoline prices and presidential approval ratings is indeed nonlinear, and that the former even has out-of-sample predictive value for the latter. We argue that our findings are in line with the so-called pocketbook mechanism, which stipulates that presidential approval ratings depend on gasoline prices because the latter have a sizable impact on voters' personal economic situations.
    Keywords: Presidential approval ratings, Gasoline price, Random forests, Forecasting
    JEL: C22 C53 Q40 Q43
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:pre:wpaper:202427
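    The nonlinearity the authors document can be inspected with a partial-dependence plot from a fitted random forest; the sketch below uses synthetic placeholder data and column names.
```python
# Random forest of approval on gas prices + controls; inspect nonlinearity.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

# Placeholder monthly data: gas price, controls, approval rating.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(600, 4)),
                 columns=["real_gas_price", "inflation", "unemployment",
                          "gdp_growth"])
approval = 50 - 4 * np.tanh(X["real_gas_price"]) + rng.normal(0, 2, 600)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, approval)
PartialDependenceDisplay.from_estimator(rf, X, ["real_gas_price"])  # nonlinear
```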
  23. By: Arndt, Sarah
    Abstract: In this paper, I investigate how inflation signals from different types of newspapers influence household inflation expectations in Germany. Using text data and the large language model GPT-3.5-turbo-1106, I construct newspaper-specific indicators and find significant heterogeneity in their informativeness based on genre: tabloid versus reputable sources. The tabloid’s indicator is more effective for predicting perceived inflation among low-income and lower-education households, while reputable newspapers better predict higher-income and more educated households’ expectations. Local projections reveal that tabloid sentiment shows an immediate decrease following a monetary policy shock, whereas responses from reputable newspapers are smaller and delayed. Household expectations also vary depending on the type of newspaper affected by the sentiment shock and the socioeconomic background of the household. These findings underscore the differentiated impact of media on inflation expectations across various segments of society, providing valuable insights for policymakers to tailor communication strategies effectively.
    Keywords: Inflation expectations; text mining; forecasting; monetary policy; LLM; ChatGPT
    Date: 2024–06–25
    URL: https://d.repec.org/n?u=RePEc:awi:wpaper:0748
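    The indicator-construction step can be sketched with the chat-completions model the paper names (gpt-3.5-turbo-1106); the prompt wording and the aggregation into a newspaper-level indicator are assumptions, not the author's exact design, and a valid API key is required to run it.
```python
# Score newspaper articles for inflation signals with the OpenAI chat API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def inflation_score(article: str) -> int:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[{
            "role": "user",
            "content": "Rate the inflation signal of this article from -1 "
                       "(falling prices) to 1 (rising prices). Reply with "
                       "one integer only.\n\n" + article,
        }],
        temperature=0,
    )
    return int(reply.choices[0].message.content.strip())

# A newspaper-month indicator is then, e.g., the mean score over its articles.
```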

This nep-big issue is ©2024 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.