on Artificial Intelligence
By: | Holtdirk, Tobias; Assenmacher, Dennis; Bleier, Arnim; Wagner, Claudia |
Abstract: | Surveys are a cornerstone of empirical social science research, providing invaluable insights into the opinions, beliefs, behaviours, and characteristics of people. However, issues such as refusal to participate, skipping questions, sampling bias, and attrition significantly impact the quality and reliability of survey data. Recently, researchers have started investigating the potential of Large Language Models (LLMs) to role-play a pre-defined set of "characters" and simulate their survey responses with little or no additional training data and cost. While previous research on forecasting, imputing, and simulating survey answers with LLMs has focused on zero-shot and few-shot approaches, this study investigates the viability of fine-tuning LLMs to simulate responses of survey participants. We fine-tune LLMs on subsets of the data from the German Longitudinal Election Study (GLES) and evaluate their predictive performance on the "vote choice" of a random set of held-out participants, comparing the LLMs against various baseline methods. Our findings show that small, fine-tuned open-source LLMs can outperform zero-shot predictions of larger LLMs. They match the performance of established tabular data classifiers, are more sample efficient, and outperform them in cases with systematic non-response. This study contributes to the growing body of research on LLMs for simulating survey data by demonstrating the effectiveness of fine-tuning approaches. (An illustrative fine-tuning sketch follows this entry.) |
Date: | 2024–10–07 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:udz28 |
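A minimal sketch of the kind of fine-tuning setup described above, assuming GLES survey rows are serialized into short role-play prompts and a small open-source causal language model is adapted with Hugging Face transformers. The base model, column names, and file path are hypothetical placeholders, not the authors' actual pipeline.

```python
# Illustrative sketch only: serialize survey rows into prompts and fine-tune a
# small causal LM to predict the "vote choice" label. Column names, the data
# file, and the base model are placeholders; the paper's pipeline may differ.
import pandas as pd
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

BASE_MODEL = "gpt2"  # stand-in for a small open-source LLM

def row_to_text(row: pd.Series) -> str:
    # Turn one respondent's attributes into a role-play style training example.
    profile = ", ".join(f"{col}: {row[col]}" for col in ["age", "gender", "education", "state"])
    return (f"Respondent profile: {profile}\n"
            f"Question: Which party would you vote for?\n"
            f"Answer: {row['vote_choice']}")

df = pd.read_csv("gles_subset.csv")  # hypothetical GLES extract
dataset = Dataset.from_list([{"text": row_to_text(r)} for _, r in df.iterrows()])

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=256, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()  # causal LM objective on the full prompt
    return enc

train_ds = dataset.map(tokenize, batched=True, remove_columns=["text"])

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
Trainer(
    model=model,
    args=TrainingArguments(output_dir="survey-llm", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-5),
    train_dataset=train_ds,
).train()
```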
By: | Hanzhe Li; Jin Li; Ye Luo; Xiaowei Zhang |
Abstract: | This paper examines how AI persuades doctors when their diagnoses differ. Disagreements arise from two sources: attention differences, which are objective and play a role complementary to the doctor's, and comprehension differences, which are subjective and act as substitutes. AI's interpretability influences how doctors attribute these sources and their willingness to change their minds. Surprisingly, uninterpretable AI can be more persuasive by allowing doctors to partially attribute disagreements to attention differences. This effect is stronger when doctors have low abnormality-detection skills. Additionally, uninterpretable AI can improve diagnostic accuracy when doctors have career concerns. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.01114 |
By: | Marguerita Lane |
Abstract: | This paper examines how different socio-demographic groups experience AI at work. Because AI can automate non-routine, cognitive tasks, tertiary-educated workers in “white-collar” occupations are likely to face disruption, even though empirical analysis does not suggest that overall employment levels in these occupations have fallen due to AI. The main risk for workers without tertiary education, female workers, and older workers is that they lose out through lower access to AI-related employment opportunities and to productivity-enhancing AI tools in the workplace. By identifying the main risks and opportunities for different socio-demographic groups, the paper ultimately aims to help policy makers target support and capture the benefits of AI (increased productivity and economic growth) without increasing inequalities and societal resistance to technological progress. |
Keywords: | Artificial Intelligence, Education, Employment, Gender, Inequality |
JEL: | J16 J21 J23 J24 O33 |
Date: | 2024–10–31 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:26-en |
By: | Van Khanh Pham; Duc Minh Le |
Abstract: | Humanity today faces unprecedented environmental challenges. The emergence of artificial intelligence (AI) has opened new doors in our collective efforts to address the planet's pressing problems; however, many doubt the actual extent of AI's impact on the environment. In particular, the fact that AI also assists dirty production is a drawback largely absent from the literature. To investigate the impact of AI on the environment, we build mathematical models of the economy and of the production of goods based on outdated and advanced technologies. The secondary results are stated as lemmas and the main results as theorems. From the theorems we conclude that AI may not, on its own, prevent an environmental disaster, one reason being its concurrent contribution to dirty production. With temporary government intervention, however, AI is able to avert an environmental disaster. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.06501 |
By: | Jacques Bughin |
Abstract: | Generative Artificial Intelligence (genAI) is the latest evidence of the transformative value of AI in organizations. One promising avenue lies in software engineering, where genAI can contribute to coding by pairing with developers. Analyzing the productivity implications of genAI pair coding in a sample of global firms yields two main insights. Coding quality is negatively correlated with productivity throughput gains, while quality-adjusted productivity gains depend on the extent to which organizations have deployed AI capabilities in the form of data, skills upgrades, and AI governance. As observed with other digital technologies, the success of genAI use is closely tied to complementary technical skills and organizational resources. |
Keywords: | Generative AI, productivity, enterprise RBV, capabilities, machine learning |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:ict:wpaper:2013/378272 |
By: | Jacques Bughin |
Abstract: | Responsible Artificial Intelligence (RAI) is a subset of the ethics associated with the use of artificial intelligence, whose importance will only increase with the recent advent of new regulatory frameworks. However, while many firms have announced the establishment of AI governance rules, there is currently an important gap in understanding whether and why these announcements are implemented or remain “decoupled” from operations. We assess how large global firms have so far implemented RAI, and the antecedents to RAI implementation across a wide range of RAI initiatives. We find that the operationalization of RAI practices is scattered across firms, with only a fringe of companies extensively industrializing RAI. Social pressure pushes RAI design (“saying”) rather than implementation, but the reverse is true for competitive pressure. AI capabilities, as a bundle of data quality, AI architecture, and talent, are strongly associated with RAI from design to scaling. |
Keywords: | Artificial intelligence, Ethics, Responsible AI, AI governance, Adoption patterns |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:ict:wpaper:2013/378351 |
By: | Benedikt Gloria; Sven Bienert; Johannes Melsbach; Detlef Schoder |
Abstract: | In recent times, large language models (LLMs) such as ChatGPT and LLaMA have gained significant attention. These models demonstrate remarkable capability in solving complex tasks, drawing knowledge primarily from broad, general-purpose training data rather than niche subject areas. Consequently, there has been a growing demand for domain-specific LLMs tailored to the social and natural sciences, such as BioGPT or BloombergGPT. In this study, we present Real-GPT, our own domain-specific LLM focused on real estate, built by applying the parameter-efficient finetuning technique Low-Rank Adaptation (LoRA) to the Mistral 7B model. To create a comprehensive finetuning dataset, we compiled a curated self-instruction dataset of 21k examples sourced from 670 scientific papers, market research reports, scholarly articles, and real estate books. To assess the efficacy of Real-GPT, we devised a set of approximately 5,000 multiple-choice questions to gauge the real estate knowledge of the models. Despite its notably compact size, our model outperforms other cutting-edge models. It not only delivers superior performance but also demonstrates its capacity to facilitate investment decisions, interpret current market data, and potentially simplify property valuation processes. This development showcases the potential of LLMs to revolutionize the field of real estate analysis and decision-making. (A minimal LoRA finetuning sketch follows this entry.) |
Keywords: | Digitalisation; LLMs; NLP; real estate |
JEL: | R3 |
Date: | 2024–01–01 |
URL: | https://d.repec.org/n?u=RePEc:arz:wpaper:eres2024-036 |
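A minimal sketch of LoRA-based finetuning of Mistral 7B with the `peft` library, in the spirit of the approach described above. The dataset file, prompt template, and hyperparameters are hypothetical placeholders, not the authors' actual configuration.

```python
# Illustrative sketch only: parameter-efficient LoRA finetuning of Mistral 7B on
# an instruction-style dataset. The JSONL file, prompt format, and settings are
# hypothetical choices.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)  # only the low-rank adapters are trained
model.print_trainable_parameters()

# Hypothetical JSONL file with {"instruction": ..., "response": ...} records.
data = load_dataset("json", data_files="real_estate_selfinstruct.jsonl", split="train")

def to_features(example):
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['response']}")
    tokens = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

train_ds = data.map(to_features, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="real-estate-lora", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=1e-4, fp16=True),
    train_dataset=train_ds,
).train()
```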
By: | Mr. Bas B. Bakker; Sophia Chen; Dmitry Vasilyev; Olga Bespalova; Moya Chin; Daria Kolpakova; Archit Singhal; Yuanchen Yang |
Abstract: | Since 1980, income levels in Latin America and the Caribbean (LAC) have shown no convergence with those in the US, in stark contrast to emerging Asia and emerging Europe, which have seen rapid convergence. A key factor contributing to this divergence has been sluggish productivity growth in LAC. Low productivity growth has been broad-based across industries and firms in the formal sector, with limited diffusion of technology being an important contributing factor. Digital technologies and artificial intelligence (AI) hold significant potential to enhance productivity in the formal sector, foster its expansion, reduce informality, and facilitate LAC’s convergence with advanced economies. However, there is a risk that the region will fall behind advanced countries and frontier emerging markets in AI adoption. To capitalize on the benefits of AI, policies should aim to facilitate technological diffusion and job transition. |
Keywords: | Artificial Intelligence (AI); Productivity Stagnation; Technological Innovation; Latin America; Caribbean; Economic Growth; Labor Productivity; Automation; Macroeconomic Impact; Digital Transformation; Cross-Country Analysis; Regional Development; Technology Adoption; Emerging Economies; Economic Policy. |
Date: | 2024–10–11 |
URL: | https://d.repec.org/n?u=RePEc:imf:imfwpa:2024/219 |
By: | Yanxin Shen; Pulin Kirin Zhang |
Abstract: | Financial sentiment analysis (FSA) is crucial for evaluating market sentiment and making well-informed financial decisions. The advent of pre-trained language models such as BERT and its financial variant, FinBERT, together with large language models (LLMs), has notably enhanced sentiment analysis capabilities. This paper investigates the application of LLMs and FinBERT for FSA, comparing their performance on news articles, financial reports, and company announcements. The study emphasizes the advantages of prompt engineering with zero-shot and few-shot strategies to improve sentiment classification accuracy. Experimental results indicate that GPT-4o, with few-shot examples of financial texts, can be as competent as a carefully fine-tuned FinBERT in this specialized field. (An illustrative comparison sketch follows this entry.) |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.01987 |
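A minimal sketch of the kind of comparison described above: scoring the same headline with FinBERT and with a few-shot GPT-4o prompt. The example headline, few-shot examples, and prompt wording are hypothetical, not the authors' evaluation setup, and the GPT call requires an OpenAI API key.

```python
# Illustrative sketch only: FinBERT classification vs. few-shot GPT-4o prompting
# for financial sentiment. Texts and prompt wording are made up for illustration.
from transformers import pipeline
from openai import OpenAI

headline = "Company X reports record quarterly revenue, raises full-year guidance."

# 1) FinBERT: a fine-tuned classifier returns a positive/negative/neutral label.
finbert = pipeline("text-classification", model="ProsusAI/finbert")
print("FinBERT:", finbert(headline)[0])

# 2) GPT-4o with few-shot examples embedded in the prompt (needs OPENAI_API_KEY).
few_shot = (
    "Classify the sentiment of financial headlines as positive, negative, or neutral.\n"
    "Headline: Firm Y cuts dividend after posting a quarterly loss. -> negative\n"
    "Headline: Firm Z completes previously announced share buyback. -> neutral\n"
    f"Headline: {headline} ->"
)
client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": few_shot}],
)
print("GPT-4o:", reply.choices[0].message.content.strip())
```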
By: | Jingyi TIAN; Jun NAGAYASU |
Abstract: | As artificial intelligence (AI) emerges as a key driver of Industry 4.0, nations are vying for a competitive edge in AI advancements, innovation, and applications. This study investigates AI’s role in the financial system by delving into the intricate relationship between AI and financial systemic risk (FSR) across diverse contexts. The results show that, first, AI investment is generally associated with increased FSR. Second, global risk spillover is observed in the FSR of various countries. Extreme events can lead to a sharp and simultaneous increase in FSR across nations. In addition, after removing global risk spillover, the FSR dynamics of countries do not strictly conform to geographical proximity. Third, mechanism analysis reveals that AI increases FSR by enhancing the interconnectedness between entities and raising unemployment. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:toh:tupdaa:55 |
By: | Nigar Karimova |
Abstract: | The research investigates how the application of a machine-learning random forest model improves the accuracy and precision of a Delphi model. The context of the research is Azerbaijani SMEs, and the data for the study were obtained from a financial institution which had gathered them from the enterprises (as there are no public data on local SMEs, it was not practical to verify the data independently). The research compares the two models using accuracy, precision, recall, and F1 scores, with the algorithms run in Python. The findings show that accuracy, precision, recall, and F1 all improve considerably (from 0.69 to 0.83, from 0.65 to 0.81, from 0.56 to 0.77, and from 0.58 to 0.79, respectively). The implication is that by applying AI models in credit risk modeling, financial institutions can improve the accuracy of identifying potential defaulters, which would reduce their credit risk. In addition, unfair rejections of credit access for SMEs would decline, contributing significantly to economic growth. Finally, ethical issues such as the transparency of algorithms and biases in historical data should be taken into account when making decisions based on AI algorithms, in order to avoid a mechanical dependence on algorithms that cannot be justified in practice. (A minimal evaluation sketch follows this entry.) |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.05330 |
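A minimal sketch of a random-forest default classifier evaluated with the same metrics reported in the abstract. The data file and features are hypothetical; the actual SME dataset is not public.

```python
# Illustrative sketch only: random-forest credit-default classification with
# accuracy, precision, recall, and F1. File name and features are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("sme_loans.csv")          # hypothetical credit file
X = df.drop(columns=["default"])           # e.g. financial ratios, firm age, sector
y = df["default"]                          # 1 = defaulted, 0 = repaid

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=500, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("F1       :", f1_score(y_test, pred))
```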
By: | Manuela Pedio; Massimo Guidolin; Giulia Panzeri |
Abstract: | Machine learning is significantly shaping the advancement of many fields, notably finance, where its range of applications and efficiency impacts appear boundless. Contemporary techniques, particularly in reinforcement learning, have prompted both practitioners and academics to contemplate the potential of an artificial intelligence revolution in portfolio management. In this paper, we provide an overview of the primary machine learning methods currently used in portfolio decision-making. We discuss the existing limitations of machine learning algorithms and explore prevailing hypotheses regarding their future extensions. Specifically, we categorize and analyze the applications of machine learning in systematic trading strategies, portfolio weight optimization, smart beta and passive investment strategies, textual analysis, and trade execution, each surveyed separately for a comprehensive understanding. (The standard mean-variance allocation problem is sketched after this entry for reference.) |
Keywords: | Machine learning; portfolio choice; artificial intelligence; natural language processing; stock return prediction; market timing; mean-variance asset allocation |
JEL: | C45 C61 G10 G11 G17 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:baf:cbafwp:cbafwp24233 |
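For reference, a textbook formulation of the mean-variance allocation problem mentioned in the keywords; this is standard notation, not the paper's own.

```latex
% Textbook mean-variance problem: choose weights w for N assets with expected
% returns \mu and covariance matrix \Sigma, given risk aversion \gamma > 0.
\[
  \max_{w \in \mathbb{R}^N} \; w^\top \mu \;-\; \frac{\gamma}{2}\, w^\top \Sigma\, w
  \qquad \text{s.t.} \qquad w^\top \mathbf{1} = 1 .
\]
% With only the budget constraint, the first-order conditions give
\[
  w^{*} = \frac{1}{\gamma}\, \Sigma^{-1}\!\left( \mu - \lambda \mathbf{1} \right),
  \qquad
  \lambda = \frac{\mathbf{1}^\top \Sigma^{-1} \mu - \gamma}{\mathbf{1}^\top \Sigma^{-1} \mathbf{1}} .
\]
```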
By: | Baptiste Lefort; Eric Benhamou; Jean-Jacques Ohana; David Saltiel; Beatrice Guez |
Abstract: | In this paper, we demonstrate that non-generative, small-sized models such as FinBERT and FinDRoBERTa, when fine-tuned, can outperform GPT-3.5 and GPT-4 in zero-shot settings at sentiment analysis of financial news. These fine-tuned models show results comparable to GPT-3.5 when the latter is fine-tuned on the task of determining market sentiment from daily financial news summaries sourced from Bloomberg. To fine-tune and compare these models, we created a novel database that assigns a market score to each piece of news without human interpretation bias, systematically identifying the mentioned companies and analyzing whether their stocks have gone up, down, or remained neutral. Furthermore, the paper shows that the assumptions of Condorcet's Jury Theorem do not hold, suggesting that the fine-tuned small models are not independent of the fine-tuned GPT models and indicating behavioural similarities. Lastly, the resulting fine-tuned models are made publicly available on HuggingFace, providing a resource for further research in financial sentiment analysis and text classification. (A minimal labeling sketch follows this entry.) |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.11408 |
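A minimal sketch of the market-based labeling idea described above: a news item receives a label from the subsequent return of the mentioned company's stock rather than from human annotation. The ticker mapping, return horizon, and the +/-0.5% neutrality threshold are hypothetical choices, not the authors' exact rule.

```python
# Illustrative sketch only: derive an up/down/neutral label for each news item
# from the mentioned company's next-day return. Thresholds and data are made up.
import pandas as pd

THRESHOLD = 0.005  # returns within +/-0.5% are treated as neutral

def label_from_return(next_day_return: float) -> str:
    if next_day_return > THRESHOLD:
        return "up"
    if next_day_return < -THRESHOLD:
        return "down"
    return "neutral"

# Hypothetical inputs: one row per (news item, mentioned company).
news = pd.DataFrame({
    "date": ["2024-01-02", "2024-01-02"],
    "ticker": ["AAA", "BBB"],
    "headline": ["AAA beats earnings estimates", "BBB faces regulatory probe"],
})
returns = pd.DataFrame({
    "date": ["2024-01-02", "2024-01-02"],
    "ticker": ["AAA", "BBB"],
    "next_day_return": [0.021, -0.034],
})

labeled = news.merge(returns, on=["date", "ticker"])
labeled["label"] = labeled["next_day_return"].apply(label_from_return)
print(labeled[["headline", "label"]])
```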