on Artificial Intelligence |
By: | Vafa, Keyon (Harvard U); Athey, Susan (Stanford U); Blei, David M. (Columbia U) |
Abstract: | One thread of empirical work in social science focuses on decomposing group differences in outcomes into unexplained components and components explained by observable factors. In this paper, we study gender wage decompositions, which require estimating the portion of the gender wage gap explained by career histories of workers. Classical methods for decomposing the wage gap employ simple predictive models of wages that condition on a small set of simple summaries of labor history. The problem is that these predictive models cannot take advantage of the full complexity of a worker’s history, and the resulting decompositions thus suffer from omitted variable bias (OVB), where covariates that are correlated with both gender and wages are not included in the model. Here we explore an alternative methodology for wage gap decomposition that employs powerful foundation models, such as large language models, as the predictive engine. Foundation models excel at making accurate predictions from complex, high-dimensional inputs. We use a custom-built foundation model, designed to predict wages from full labor histories, to decompose the gender wage gap. We prove that the way such models are usually trained might still lead to OVB, but develop fine-tuning algorithms that empirically mitigate this issue. Our model captures a richer representation of career history than simple models and predicts wages more accurately. Specifically, we first provide a novel set of conditions under which an estimator of the wage gap based on a fine-tuned foundation model is √n-consistent. Building on the theory, we then propose methods for fine-tuning foundation models that minimize OVB. Using data from the Panel Study of Income Dynamics, we find that history explains more of the gender wage gap than standard econometric models can measure, and we identify elements of history that are important for reducing OVB. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:ecl:stabus:4206 |
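The decomposition described above splits the raw wage gap into a portion explained by covariate differences under a predictive wage model and an unexplained remainder. A minimal sketch of that idea in the classic Oaxaca-Blinder style, with a toy one-covariate linear model in place of the paper's foundation model (all data and variable names here are illustrative, not from the paper):

```python
# Sketch of a predictive-model wage-gap decomposition (Oaxaca-Blinder style).
# A toy linear wage model stands in for the paper's foundation model.
from statistics import mean

def fit_simple_ols(xs, ys):
    """One-covariate OLS: returns (intercept, slope)."""
    xbar, ybar = mean(xs), mean(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

# Toy data: years of experience and log wages for two groups.
exp_m, wage_m = [2, 5, 8, 12], [2.9, 3.2, 3.5, 3.9]
exp_f, wage_f = [1, 4, 7, 10], [2.6, 2.9, 3.1, 3.4]

# Fit the wage model on the reference group (here: men).
a, b = fit_simple_ols(exp_m, wage_m)
predict = lambda x: a + b * x

raw_gap = mean(wage_m) - mean(wage_f)
# Explained: the part of the gap the model attributes to covariate differences.
explained = mean(map(predict, exp_m)) - mean(map(predict, exp_f))
unexplained = raw_gap - explained

print(round(raw_gap, 3), round(explained, 3), round(unexplained, 3))
# → 0.375 0.125 0.25
```

The paper's point is that when `predict` conditions only on coarse summaries like this, correlated but omitted history variables bias the unexplained component; a richer model of full histories shrinks that bias.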
By: | Shapeng Jiang; Lijia Wei; Chen Zhang |
Abstract: | In recent years, large language models (LLMs) have attracted attention due to their ability to generate human-like text. As surveys and opinion polls remain key tools for gauging public attitudes, there is increasing interest in assessing whether LLMs can accurately replicate human responses. This study examines the potential of LLMs, specifically ChatGPT-4o, to replicate human responses in large-scale surveys and to predict election outcomes based on demographic data. Employing data from the World Values Survey (WVS) and the American National Election Studies (ANES), we assess the LLM's performance in two key tasks: simulating human responses and forecasting U.S. election results. In simulations, the LLM was tasked with generating synthetic responses for various socio-cultural and trust-related questions, demonstrating notable alignment with human response patterns across U.S.-China samples, though with some limitations on value-sensitive topics. In prediction tasks, the LLM was used to simulate voting behavior in past U.S. elections and predict the 2024 election outcome. Our findings show that the LLM replicates cultural differences effectively, exhibits in-sample predictive validity, and provides plausible out-of-sample forecasts, suggesting potential as a cost-effective supplement for survey-based research. |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2411.01582 |
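The simulation task above conditions the LLM on a respondent's demographic profile before posing a survey question. A hypothetical sketch of how such a persona-conditioned prompt could be assembled (the template, field names, and question wording are assumptions for illustration, not taken from the study):

```python
# Hypothetical persona-conditioned prompt builder for synthetic survey
# responses; template and field names are illustrative assumptions.
def build_survey_prompt(demographics: dict, question: str, options: list) -> str:
    persona = ", ".join(f"{k}: {v}" for k, v in demographics.items())
    opts = "; ".join(f"({i}) {o}" for i, o in enumerate(options, 1))
    return (
        f"You are a survey respondent with this profile: {persona}.\n"
        f"Question: {question}\n"
        f"Options: {opts}\n"
        "Answer with the number of the single option this respondent "
        "would most likely choose."
    )

prompt = build_survey_prompt(
    {"age": 45, "country": "United States", "education": "college"},
    "Generally speaking, would you say that most people can be trusted?",
    ["Most people can be trusted", "Need to be very careful"],
)
print(prompt)
```

The resulting string would then be sent to the model once per sampled respondent profile, and the distribution of returned option numbers compared against the human WVS/ANES response distributions.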
By: | Mitchell Linegar; Betsy Sinclair; Sander van der Linden; R. Michael Alvarez |
Abstract: | Large Language Models (LLMs) can assist in the prebunking of election misinformation. Using results from a preregistered two-wave experimental study of 4,293 U.S. registered voters conducted in August 2024, we show that LLM-assisted prebunking significantly reduced belief in specific election myths, with these effects persisting for at least one week. Confidence in election integrity also increased post-treatment. Notably, the effect was consistent across partisan lines, even when controlling for demographic and attitudinal factors like conspiratorial thinking. LLM-assisted prebunking is a promising tool for rapidly responding to changing election misinformation narratives. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.19202 |
By: | Ali Elahi; Fatemeh Taghvaei |
Abstract: | Predicting financial markets and stock price movements requires analyzing a company's performance, historical price movements, and industry-specific events, alongside the influence of human factors such as social media and press coverage. We assume that financial reports (such as income statements, balance sheets, and cash flow statements), historical price data, and recent news articles can collectively represent the aforementioned factors. We combine financial data in tabular format with textual news articles and employ pre-trained Large Language Models (LLMs) to predict market movements. Recent research on LLMs has demonstrated that they can perform both tabular and text classification tasks, making them our primary model for classifying the multi-modal data. We use retrieval augmentation techniques to retrieve relevant chunks of news articles, attach them to a company's financial metrics, and prompt the LLMs in zero-, two-, and four-shot settings. Our dataset contains news articles collected from different sources, historical stock prices, and financial report data for the 20 companies with the highest trading volume across different industries in the stock market. We used recently released language models for our LLM-based classifier, including GPT-3 and 4 and LLaMA-2 and 3. We introduce an LLM-based classifier capable of performing classification tasks using a combination of tabular (structured) and textual (unstructured) data. Using this model, we predicted the movement of a given stock's price in our dataset with weighted F1-scores of 58.5% and 59.1% and a Matthews Correlation Coefficient of 0.175 for the 3-month and 6-month periods. |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2411.01368 |
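The abstract reports results in weighted F1 and Matthews Correlation Coefficient, two standard metrics for imbalanced classification. A pure-Python sketch of how both are computed for a binary up/down movement label (the toy labels below are illustrative, not the paper's data):

```python
# Weighted F1 and Matthews Correlation Coefficient for binary labels
# (1 = price up, 0 = price down); toy data for illustration.
from math import sqrt
from collections import Counter

def mcc(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def weighted_f1(y_true, y_pred):
    # Per-class F1, averaged with weights proportional to class support.
    counts = Counter(y_true)
    total, score = len(y_true), 0.0
    for cls, n in counts.items():
        tp = sum(t == p == cls for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == cls for p in y_pred)
        prec = tp / pred_pos if pred_pos else 0.0
        rec = tp / n
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (n / total) * f1
    return score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(weighted_f1(y_true, y_pred), 3), round(mcc(y_true, y_pred), 3))
# → 0.75 0.5
```

MCC near 0 (the paper's 0.175) indicates performance only modestly above chance, which is why the abstract pairs it with the F1 figures.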
By: | Xuewen Han; Neng Wang; Shangkun Che; Hongyang Yang; Kunpeng Zhang; Sean Xin Xu |
Abstract: | In recent years, the application of generative artificial intelligence (GenAI) to financial analysis and investment decision-making has gained significant attention. However, most existing approaches rely on single-agent systems, which fail to fully exploit the collaborative potential of multiple AI agents. In this paper, we propose a novel multi-agent collaboration system designed to enhance decision-making in financial investment research. The system incorporates agent groups with configurable group sizes and collaboration structures to leverage the strengths of each agent group type. By utilizing a sub-optimal combination strategy, the system dynamically adapts to varying market conditions and investment scenarios, optimizing performance across different tasks. We focus on three sub-tasks (fundamentals, market sentiment, and risk analysis) by analyzing the 2023 SEC 10-K filings of 30 companies listed on the Dow Jones Index. Our findings reveal significant performance variations depending on the configurations of AI agents for different tasks. The results demonstrate that our multi-agent collaboration system outperforms traditional single-agent models, offering improved accuracy, efficiency, and adaptability in complex financial environments. This study highlights the potential of multi-agent systems to transform financial analysis and investment decision-making by integrating diverse analytical perspectives. |
Date: | 2024–11 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2411.04788 |
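The system above organizes agents into configurable groups whose outputs are combined into a single signal. A minimal structural sketch of that idea; the agent logic, scoring scale, and averaging rule here are illustrative assumptions, not the paper's actual combination strategy:

```python
# Sketch of configurable agent groups whose per-agent signals are combined;
# toy rule-based agents stand in for LLM-backed analysts.
from dataclasses import dataclass
from statistics import mean
from typing import Callable, List

@dataclass
class Agent:
    name: str
    analyze: Callable[[str], float]  # returns a score in [-1, 1]

def run_group(agents: List[Agent], filing_text: str) -> float:
    """Combine one group's signals by simple averaging (an assumed rule)."""
    return mean(a.analyze(filing_text) for a in agents)

# Toy agents for the three sub-tasks named in the abstract.
fundamentals = Agent("fundamentals", lambda t: 0.4 if "revenue growth" in t else 0.0)
sentiment = Agent("sentiment", lambda t: -0.2 if "litigation" in t else 0.1)
risk = Agent("risk", lambda t: -0.5 if "going concern" in t else 0.0)

filing = "10-K excerpt: strong revenue growth; ongoing litigation disclosed."
signal = run_group([fundamentals, sentiment, risk], filing)
print(round(signal, 3))
# → 0.067
```

Varying the group membership and the combination rule is what the paper explores when it reports performance differences across agent configurations.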