nep-big New Economics Papers
on Big Data
Issue of 2024‒03‒18
28 papers chosen by
Tom Coupé, University of Canterbury


  1. A Study on Stock Forecasting Using Deep Learning and Statistical Models By Himanshu Gupta; Aditya Jaiswal
  2. Explainable Automated Machine Learning for Credit Decisions: Enhancing Human Artificial Intelligence Collaboration in Financial Engineering By Marc Schmitt
  3. A step towards the integration of machine learning and small area estimation By Tomasz Żądło; Adam Chwila
  4. DiffsFormer: A Diffusion Transformer on Stock Factor Augmentation By Yuan Gao; Haokun Chen; Xiang Wang; Zhicai Wang; Xue Wang; Jinyang Gao; Bolin Ding
  5. Hyperparameter Tuning for Causal Inference with Double Machine Learning: A Simulation Study By Philipp Bach; Oliver Schacht; Victor Chernozhukov; Sven Klaassen; Martin Spindler
  6. Monthly GDP nowcasting with Machine Learning and Unstructured Data By Juan Tenorio; Wilder Perez
  7. Electricity Price Forecasting in the Irish Balancing Market By Ciaran O'Connor; Joseph Collins; Steven Prestwich; Andrea Visentin
  8. Securing Transactions: A Hybrid Dependable Ensemble Machine Learning Model using IHT-LR and Grid Search By Md. Alamin Talukder; Rakib Hossen; Md Ashraf Uddin; Mohammed Nasir Uddin; Uzzal Kumar Acharjee
  9. Forecasting Imports in OECD Member Countries and Iran by Using Neural Network Algorithms of LSTM By Soheila Khajoui; Saeid Dehyadegari; Sayyed Abdolmajid Jalaee
  10. A robust record linkage approach for anomaly detection in granular insurance asset reporting By Vittoria La Serra; Emiliano Svezia
  11. Machine Learning for Continuous-Time Finance By Victor Duarte; Diogo Duarte; Dejanir H. Silva
  12. Free Trade Agreements and the Movement of Business People By Mayer, Thierry; Rapoport, Hillel; Umana-Dajud, Camilo
  13. Attention-based Dynamic Multilayer Graph Neural Networks for Loan Default Prediction By Sahab Zandi; Kamesh Korangi; María Óskarsdóttir; Christophe Mues; Cristián Bravo
  14. Assessing economic sentiment with newspaper text indices: evidence from Switzerland By Marie-Catherine Bieri
  15. End-to-End Policy Learning of a Statistical Arbitrage Autoencoder Architecture By Fabian Krause; Jan-Peter Calliess
  16. FNSPID: A Comprehensive Financial News Dataset in Time Series By Zihan Dong; Xinyu Fan; Zhiyuan Peng
  17. Tweet Influence on Market Trends: Analyzing the Impact of Social Media Sentiment on Biotech Stocks By C. Sarai R. Avila
  18. Modeling the Presidential Approval Ratings of the United States using Machine-Learning: Does Climate Policy Uncertainty Matter? By Elie Bouri; Rangan Gupta; Christian Pierdzioch
  19. Political Fragility: Coups d’État and Their Drivers By Aliona Cebotari; Enrique Chueca-Montuenga; Yoro Diallo; Yunsheng Ma; Ms. Rima A Turk; Weining Xin; Harold Zavarce
  20. LLM-driven Imitation of Subrational Behavior : Illusion or Reality? By Andrea Coletta; Kshama Dwarakanath; Penghang Liu; Svitlana Vyetrenko; Tucker Balch
  21. The Heterogeneous Aggregate Valence Analysis (HAVAN) Model: A Flexible Approach to Modeling Unobserved Heterogeneity in Discrete Choice Analysis By Connor R. Forsythe; Cristian Arteaga; John P. Helveston
  22. RiskMiner: Discovering Formulaic Alphas via Risk Seeking Monte Carlo Tree Search By Tao Ren; Ruihan Zhou; Jinyang Jiang; Jiafeng Liang; Qinghao Wang; Yijie Peng
  23. Accounting for Individual-Specific Heterogeneity in Intergenerational Income Mobility By Yoosoon Chang; Steven N. Durlauf; Bo Hu; Joon Y. Park
  24. Using Survey-to-Survey Imputation to Fill Poverty Data Gaps at a Low Cost: Evidence from a Randomized Survey Experiment By Dang, Hai-Anh; Kilic, Talip; Hlasny, Vladimir; Abanokova, Kseniya; Carletto, Calogero
  25. Decoding Bank of Sierra Leone's Monetary Policy Communications: A Text Mining Analysis. By Barrie, Mohamed Samba
  26. Women in economics: the role of gendered references at entry in the profession By Audinga Baltrunaite; Alessandra Casarico; Lucia Rizzica
  27. LLM Voting: Human Choices and AI Collective Decision Making By Joshua C. Yang; Marcin Korecki; Damian Dailisan; Carina I. Hausladen; Dirk Helbing
  28. Rationality Report Cards: Assessing the Economic Rationality of Large Language Models By Narun Raman; Taylor Lundy; Samuel Amouyal; Yoav Levine; Kevin Leyton-Brown; Moshe Tennenholtz

  1. By: Himanshu Gupta; Aditya Jaiswal
    Abstract: Building a fast and accurate model for stock price forecasting remains a challenging task, and it is an active area of research in which the best forecasting approach has yet to be established. Machine learning, deep learning, and statistical techniques are used here to obtain accurate results so that investors can anticipate future trends and maximize their return on investment in stock trading. This paper reviews several deep learning and statistical algorithms for stock price forecasting, using a record of S&P 500 index data for training and testing. The aim of the survey is to compare statistical techniques, namely moving averages and ARIMA, with deep learning models, namely RNN, LSTM, CNN, and fully convolutional networks. It discusses these models, including the autoregressive integrated moving average model, the recurrent neural network model, the long short-term memory model (a type of RNN designed for long-range dependencies in data), the convolutional neural network model, and the fully convolutional neural network model, in terms of prediction error and accuracy, measured by functions such as root mean square error, mean absolute error, and mean squared error. Models can be compared by their MAE values: the lower the MAE, the smaller the gap between predicted and actual values, and the more accurately the model predicts prices.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.06689&r=big
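    A minimal Python sketch of the kind of pipeline the abstract describes -- an LSTM forecaster on sliding windows of an index series, scored with MAE and RMSE; the synthetic data, window length, and hyperparameters are illustrative assumptions, not the authors' settings:
      # Fit an LSTM on sliding windows of a price series and score it with
      # the error metrics named in the abstract (MAE, RMSE).
      import numpy as np
      from sklearn.metrics import mean_absolute_error, mean_squared_error
      from tensorflow.keras import Sequential
      from tensorflow.keras.layers import LSTM, Dense

      prices = np.cumsum(np.random.randn(500)) + 100   # stand-in for S&P 500 closes
      window = 20
      X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
      y = prices[window:]
      split = int(0.8 * len(X))                        # 80/20 train-test split
      X_train, X_test = X[:split, :, None], X[split:, :, None]
      y_train, y_test = y[:split], y[split:]

      model = Sequential([LSTM(32, input_shape=(window, 1)), Dense(1)])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X_train, y_train, epochs=10, verbose=0)

      pred = model.predict(X_test, verbose=0).ravel()
      print("MAE :", mean_absolute_error(y_test, pred))   # lower = closer to actuals
      print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))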
  2. By: Marc Schmitt
    Abstract: This paper explores the integration of Explainable Automated Machine Learning (AutoML) in the realm of financial engineering, specifically focusing on its application in credit decision-making. The rapid evolution of Artificial Intelligence (AI) in finance has necessitated a balance between sophisticated algorithmic decision-making and the need for transparency in these systems. The focus is on how AutoML can streamline the development of robust machine learning models for credit scoring, while Explainable AI (XAI) methods, particularly SHapley Additive exPlanations (SHAP), provide insights into the models' decision-making processes. This study demonstrates how the combination of AutoML and XAI not only enhances the efficiency and accuracy of credit decisions but also fosters trust and collaboration between humans and AI systems. The findings underscore the potential of explainable AutoML in improving the transparency and accountability of AI-driven financial decisions, aligning with regulatory requirements and ethical considerations.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.03806&r=big
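    As a hedged sketch of the SHAP side of the abstract -- explaining a tree-based credit model's individual decisions -- the following uses the open-source shap package; the features and synthetic data are invented for illustration:
      # Explain a credit-scoring classifier with SHAP values.
      import numpy as np
      import shap
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 3))      # stand-ins for income, debt ratio, age
      y = (X[:, 0] - X[:, 1] + rng.normal(size=1000) > 0).astype(int)  # 1 = repaid

      clf = GradientBoostingClassifier().fit(X, y)
      explainer = shap.TreeExplainer(clf)         # exact, fast SHAP for tree ensembles
      shap_values = explainer.shap_values(X[:5])  # per-feature contribution per applicant
      print(shap_values)                          # rows: applicants; columns: features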
  3. By: Tomasz Żądło; Adam Chwila
    Abstract: The use of machine-learning techniques has grown in numerous research areas. It is now widely used in statistics as well, including official statistics, both for data collection (e.g. satellite imagery, web scraping and text mining, data cleaning, integration and imputation) and for data analysis. However, the use of these methods in survey sampling, including small area estimation, is still very limited. We therefore propose a predictor supported by these algorithms which can be used to predict any population or subpopulation characteristic based on cross-sectional and longitudinal data. Machine learning methods have already been shown to be very powerful in identifying and modelling complex and nonlinear relationships between variables, which means that they have very good properties under strong departures from the classic assumptions. We therefore analyse the performance of our proposal under a different set-up which, in our opinion, is of greater importance in real-life surveys: small departures from the assumed model. We show that our proposal is a good alternative in this case as well, even in comparison with methods that are optimal under the model. Moreover, we propose a method for estimating the accuracy of machine-learning predictors, with accuracy measured as in survey sampling practice, which makes them directly comparable with classic methods; this is indicated in the literature as one of the key issues in integrating the two approaches. The simulation studies are based on a real longitudinal dataset, freely available from the Polish Local Data Bank, in which we consider the problem of predicting subpopulation characteristics in the last period while "borrowing strength" from other subpopulations and time periods.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.07521&r=big
  4. By: Yuan Gao; Haokun Chen; Xiang Wang; Zhicai Wang; Xue Wang; Jinyang Gao; Bolin Ding
    Abstract: Machine learning models have demonstrated remarkable efficacy and efficiency in a wide range of stock forecasting tasks. However, the inherent challenges of data scarcity, including low signal-to-noise ratio (SNR) and data homogeneity, pose significant obstacles to accurate forecasting. To address these challenges, we propose a novel approach that utilizes artificial intelligence-generated samples (AIGS) to enhance the training procedures. In our work, we introduce the Diffusion Model to generate stock factors with Transformer architecture (DiffsFormer). DiffsFormer is initially trained on a large-scale source domain, incorporating conditional guidance so as to capture the global joint distribution. When presented with a specific downstream task, we employ DiffsFormer to augment the training procedure by editing existing samples. This editing step allows us to control the strength of the editing process, determining the extent to which the generated data deviates from the target domain. To evaluate the effectiveness of DiffsFormer augmented training, we conduct experiments on the CSI300 and CSI800 datasets, employing eight commonly used machine learning models. The proposed method achieves relative improvements of 7.2% and 27.8% in annualized return ratio for the respective datasets. Furthermore, we perform extensive experiments to gain insights into the functionality of DiffsFormer and its constituent components, elucidating how they address the challenges of data scarcity and enhance the overall model performance. Our research demonstrates the efficacy of leveraging AIGS and the DiffsFormer architecture to mitigate data scarcity in stock forecasting tasks.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.06656&r=big
  5. By: Philipp Bach; Oliver Schacht; Victor Chernozhukov; Sven Klaassen; Martin Spindler
    Abstract: Proper hyperparameter tuning is essential for achieving optimal performance of modern machine learning (ML) methods in predictive tasks. While there is an extensive literature on tuning ML learners for prediction, there is little guidance on tuning ML learners for causal machine learning or on how to select among different ML learners. In this paper, we empirically assess the relationship between the predictive performance of ML methods and the resulting causal estimation based on the Double Machine Learning (DML) approach of Chernozhukov et al. (2018). DML relies on estimating so-called nuisance parameters by treating them as supervised learning problems and using them as plug-in estimates to solve for the (causal) parameter. We conduct an extensive simulation study using data from the 2019 Atlantic Causal Inference Conference Data Challenge and provide empirical insights on the role of hyperparameter tuning and other practical decisions for causal estimation with DML. First, we assess the importance of data splitting schemes for tuning ML learners within Double Machine Learning. Second, we investigate how the choice of ML methods and hyperparameters, including recent AutoML frameworks, impacts the estimation performance for a causal parameter of interest. Third, we assess to what extent the choice of a particular causal model, as characterized by its incorporated parametric assumptions, can be based on predictive performance metrics.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.04674&r=big
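    A sketch of the DML setup the abstract studies, using the open-source DoubleML package (maintained by several of the authors) with a nominally "tuned" random forest as nuisance learner; the data-generating process and all settings below are illustrative assumptions:
      # Partially linear DML model with ML nuisance learners (true effect = 0.5).
      import numpy as np
      from doubleml import DoubleMLData, DoubleMLPLR
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)
      n = 1000
      x = rng.normal(size=(n, 5))
      d = x[:, 0] + rng.normal(size=n)            # treatment depends on confounders
      y = 0.5 * d + x[:, 0] + rng.normal(size=n)

      data = DoubleMLData.from_arrays(x, y, d)
      ml = RandomForestRegressor(n_estimators=200, max_depth=5)
      dml = DoubleMLPLR(data, ml_l=ml, ml_m=ml)   # nuisances: E[y|x] and E[d|x]
      dml.fit()
      print(dml.summary)                          # estimate of the causal parameter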
  6. By: Juan Tenorio; Wilder Perez
    Abstract: In a landscape of continuous change, Machine Learning (ML) "nowcasting" models offer a distinct advantage for informed decision-making in both the public and private sectors. This study introduces ML-based models for nowcasting monthly GDP growth rates in Peru, integrating structured macroeconomic indicators with high-frequency unstructured sentiment variables. Analyzing data from January 2007 to May 2023, encompassing 91 leading economic indicators, the study evaluates six ML algorithms to identify optimal predictors. The findings highlight the superior predictive capability of ML models that use unstructured data, particularly the Gradient Boosting Machine, LASSO, and Elastic Net, which exhibit a 20% to 25% reduction in prediction errors compared to traditional AR and Dynamic Factor Models (DFM). This enhanced performance is attributed to the ML models' better handling of data in high-uncertainty periods, such as economic crises.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.04165&r=big
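    To make the model comparison concrete, a sketch pitting the three best-performing learner families named in the abstract (Gradient Boosting, LASSO, Elastic Net) against each other under time-ordered cross-validation; the target and the 91 synthetic indicators are placeholders for the study's data:
      # Compare learner families on a nowcasting-style regression.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.linear_model import Lasso, ElasticNet
      from sklearn.model_selection import TimeSeriesSplit, cross_val_score

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 91))                   # 91 indicators, as in the study
      y = X[:, :5].sum(axis=1) + rng.normal(size=200)  # stand-in for monthly GDP growth

      cv = TimeSeriesSplit(n_splits=5)                 # respects time order, no look-ahead
      for name, model in [("GBM", GradientBoostingRegressor()),
                          ("LASSO", Lasso(alpha=0.1)),
                          ("ElasticNet", ElasticNet(alpha=0.1))]:
          rmse = -cross_val_score(model, X, y, cv=cv,
                                  scoring="neg_root_mean_squared_error").mean()
          print(f"{name}: RMSE {rmse:.3f}")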
  7. By: Ciaran O'Connor; Joseph Collins; Steven Prestwich; Andrea Visentin
    Abstract: Short-term electricity markets are becoming more relevant due to less-predictable renewable energy sources, attracting considerable attention from the industry. The balancing market is the closest to real-time and the most volatile among them. Its price forecasting literature is limited, inconsistent and outdated, with few deep learning attempts and no public dataset. This work applies to the Irish balancing market a variety of price prediction techniques proven successful in the widely studied day-ahead market. We compare statistical, machine learning, and deep learning models using a framework that investigates the impact of different training sizes. The framework defines hyperparameters and calibration settings; the dataset and models are made public to ensure reproducibility and to be used as benchmarks for future works. An extensive numerical study shows that well-performing models in the day-ahead market do not perform well in the balancing one, highlighting that these markets are fundamentally different constructs. The best model is LEAR, a statistical approach based on LASSO, which outperforms more complex and computationally demanding approaches.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.06714&r=big
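    The winning LEAR model is, at its core, a LASSO-estimated autoregression on a high-dimensional set of lagged prices. A minimal sketch of that idea on a synthetic series (the lag structure and data are assumptions, not the paper's exact specification):
      # LASSO-estimated autoregression: let regularization select relevant lags.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(3)
      prices = np.cumsum(rng.normal(size=2000))    # stand-in for balancing-market prices
      lags = 48                                    # e.g. one day of half-hourly lags
      X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
      y = prices[lags:]

      model = LassoCV(cv=5).fit(X[:-100], y[:-100])
      print("nonzero lag coefficients:", int(np.sum(model.coef_ != 0)))
      print("one-step forecast:", model.predict(X[-1:])[0])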
  8. By: Md. Alamin Talukder; Rakib Hossen; Md Ashraf Uddin; Mohammed Nasir Uddin; Uzzal Kumar Acharjee
    Abstract: Financial institutions and businesses face an ongoing challenge from fraudulent transactions, prompting the need for effective detection methods. Detecting credit card fraud is crucial for identifying and preventing unauthorized transactions. Timely detection of fraud enables investigators to take swift action to mitigate further losses. However, the investigation process is often time-consuming, limiting the number of alerts that can be thoroughly examined each day. Therefore, the primary objective of a fraud detection model is to provide accurate alerts while minimizing false alarms and missed fraud cases. In this paper, we introduce a state-of-the-art hybrid dependable ensemble (ENS) machine learning (ML) model that intelligently combines multiple algorithms with proper weighted optimization using grid search, including Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Multilayer Perceptron (MLP), to enhance fraud identification. To address the data imbalance issue, we employ the Instance Hardness Threshold (IHT) technique in conjunction with Logistic Regression (LR), surpassing conventional approaches. Our experiments are conducted on a publicly available credit card dataset comprising 284,807 transactions. The DT, RF, KNN, and MLP models achieve impressive accuracy rates of 99.66%, 99.73%, 98.56%, and 99.79%, respectively, and the ENS model a perfect 100%. The hybrid ensemble model outperforms existing works, establishing a new benchmark for detecting fraudulent transactions in high-frequency scenarios. The results highlight the effectiveness and reliability of our approach, demonstrating superior performance metrics and exceptional potential for real-world fraud detection applications.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.14389&r=big
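    The imbalanced-learn library implements the Instance Hardness Threshold resampler; a hedged sketch of coupling IHT (scored with logistic regression, i.e. IHT-LR) to a grid-searched classifier, with an invented 1%-fraud dataset and illustrative grid values:
      # Rebalance with IHT-LR, then grid-search a classifier on the resampled data.
      from imblearn.under_sampling import InstanceHardnessThreshold
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import GridSearchCV

      X, y = make_classification(n_samples=5000, weights=[0.99], random_state=0)
      iht = InstanceHardnessThreshold(estimator=LogisticRegression(max_iter=1000),
                                      random_state=0)
      X_res, y_res = iht.fit_resample(X, y)   # drops hard-to-classify majority samples

      grid = GridSearchCV(RandomForestClassifier(random_state=0),
                          {"n_estimators": [100, 300], "max_depth": [None, 10]},
                          scoring="f1", cv=3)
      grid.fit(X_res, y_res)
      print(grid.best_params_, grid.best_score_)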
  9. By: Soheila Khajoui; Saeid Dehyadegari; Sayyed Abdolmajid Jalaee
    Abstract: Artificial Neural Networks (ANNs), a branch of artificial intelligence, have demonstrated their value in many applications and serve as a suitable forecasting method. This study therefore aims to forecast imports in selected OECD member countries and Iran for 20 quarters, from 2021 to 2025, by means of ANNs. Import data for these countries were collected over the 50 years from 1970 to 2019 from valid sources, including the World Bank, the WTO and the IMF; the annual data were converted to quarterly data using the Diz formula, enlarging the sample for better network performance and accuracy and yielding 200 import observations in total. The study uses an LSTM network, implemented in Python (PyCharm), to analyse the data, with 75% of the observations used for training and 25% for testing; the forecasts achieved 99% accuracy, indicating the validity and reliability of the output. Since imports are a function of consumption, and since consumption was disrupted during the COVID-19 pandemic and takes time to recover to a level that again drives imports, imports are forecast to follow a fluctuating trend in the years after the pandemic.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.01648&r=big
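    A sketch of the recursive multi-step LSTM forecast the abstract implies, with one-step predictions fed back to reach 20 quarters ahead; the series, window, and architecture are illustrative assumptions:
      # Train on 75% of a quarterly series, then iterate 20 steps ahead.
      import numpy as np
      from tensorflow.keras import Sequential
      from tensorflow.keras.layers import LSTM, Dense

      imports = np.cumsum(np.abs(np.random.randn(200))) + 50  # 200 quarterly points
      window = 8
      X = np.array([imports[i:i + window] for i in range(len(imports) - window)])
      y = imports[window:]
      split = int(0.75 * len(X))                              # 75/25, as in the study

      model = Sequential([LSTM(16, input_shape=(window, 1)), Dense(1)])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X[:split, :, None], y[:split], epochs=20, verbose=0)

      history = list(imports[-window:])                       # seed with last window
      for _ in range(20):                                     # 20 quarters: 2021-2025
          nxt = model.predict(np.array(history[-window:])[None, :, None],
                              verbose=0)[0, 0]
          history.append(float(nxt))
      print("forecasts:", history[-20:])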
  10. By: Vittoria La Serra (Bank of Italy); Emiliano Svezia (Bank of Italy)
    Abstract: Since 2016, insurance corporations have been reporting granular asset data in Solvency II templates on a quarterly basis. Assets are uniquely identified by codes that must be kept stable and consistent over time; nevertheless, due to reporting errors, unexpected changes in these codes may occur, leading to inconsistencies when compiling insurance statistics. The paper addresses this issue as a statistical matching problem and proposes a supervised classification approach to detect such anomalies. Test results show the potential benefits of machine learning techniques to data quality management processes, specifically of a selected random forest model for supervised binary classification, and the efficiency gains arising from automation.
    Keywords: insurance data, data quality management, record linkage, statistical matching, machine learning
    JEL: C18 C81 G22
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:bdi:opques:qef_821_23&r=big
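    A sketch of the supervised binary classification framing: each candidate record pair gets similarity features and a random forest flags anomalous code changes; the features and labels are invented, not the Bank of Italy's actual specification:
      # Classify record pairs as consistent (0) vs. anomalous code change (1).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import classification_report
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      X = rng.uniform(size=(2000, 3))   # e.g. similarity of issuer, quantity, value
      y = (X[:, 0] + X[:, 1] > 1.4).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
      clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
      print(classification_report(y_te, clf.predict(X_te)))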
  11. By: Victor Duarte; Diogo Duarte; Dejanir H. Silva
    Abstract: We develop an algorithm for solving a large class of nonlinear high-dimensional continuous-time models in finance. We approximate value and policy functions using deep learning and show that a combination of automatic differentiation and Ito’s lemma allows for the computation of exact expectations, resulting in a negligible computational cost that is independent of the number of state variables. We illustrate the applicability of our method to problems in asset pricing, corporate finance, and portfolio choice and show that the ability to solve high-dimensional problems allows us to derive new economic insights.
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10909&r=big
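    The computational trick here is that for a diffusion dX = mu dt + sigma dW, Ito's lemma gives the expected drift of V exactly as mu*V' + 0.5*sigma^2*V'', which automatic differentiation evaluates without simulation. A one-dimensional sketch (the value function and coefficients are illustrative; in the paper V would be a neural network):
      # Exact Ito generator of V via autodiff, using JAX.
      import jax
      import jax.numpy as jnp

      def V(x):                     # stand-in value function
          return jnp.tanh(x) + 0.1 * x**2

      mu, sigma = 0.05, 0.2         # drift and volatility of the state process

      dV = jax.grad(V)              # exact first derivative
      d2V = jax.grad(dV)            # exact second derivative

      def generator(x):             # E[dV]/dt, computed exactly -- no simulation
          return mu * dV(x) + 0.5 * sigma**2 * d2V(x)

      print(generator(1.0))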
  12. By: Mayer, Thierry (Sciences Po, Paris); Rapoport, Hillel (Paris School of Economics); Umana-Dajud, Camilo (CEPII, Paris)
    Abstract: Using provisions to ease the movement of business visitors in trade agreements, we show that removing barriers to the movement of business people promotes trade. We document the increasing complexity of Free Trade Agreements and develop an algorithm that combines machine learning and text analysis techniques to examine the content of FTAs. We use the algorithm to determine which FTAs include provisions to facilitate the movement of business people and whether these are included in dispute settlement mechanisms. We show that provisions facilitating business travel are effective in promoting it and ultimately increase bilateral trade flows.
    Keywords: text analysis, machine learning, free trade agreements, business travel, migration
    JEL: F13 F22 F23
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp16789&r=big
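    A hedged sketch of the kind of text classifier such an algorithm could build on -- TF-IDF features plus a linear model flagging movement-of-people provisions; the snippets and labels are invented, and the authors' actual algorithm is not reproduced here:
      # Flag FTA clauses that facilitate business mobility.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      texts = ["entry shall be granted to business visitors for up to 90 days",
               "tariff lines shall be reduced according to schedule A",
               "intra-corporate transferees may stay for a period of three years",
               "rules of origin are set out in annex B"]
      labels = [1, 0, 1, 0]         # 1 = movement-of-people provision

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      clf.fit(texts, labels)
      print(clf.predict(["temporary entry of business persons shall be facilitated"]))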
  13. By: Sahab Zandi; Kamesh Korangi; María Óskarsdóttir; Christophe Mues; Cristián Bravo
    Abstract: Whereas traditional credit scoring tends to employ only individual borrower- or loan-level predictors, it has been acknowledged for some time that connections between borrowers may result in default risk propagating over a network. In this paper, we present a model for credit risk assessment leveraging a dynamic multilayer network built from a Graph Neural Network and a Recurrent Neural Network, each layer reflecting a different source of network connection. We test our methodology in a behavioural credit scoring context using a dataset provided by U.S. mortgage financier Freddie Mac, in which different types of connections arise from the geographical location of the borrower and their choice of mortgage provider. The proposed model considers both types of connections and the evolution of these connections over time. We enhance the model by using a custom attention mechanism that weights the different time snapshots according to their importance. After testing multiple configurations, a model with GAT, LSTM, and the attention mechanism provides the best results. Empirical results demonstrate that, when it comes to predicting probability of default for the borrowers, our proposed model brings both better results and novel insights for the analysis of the importance of connections and timestamps, compared to traditional methods.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.00299&r=big
  14. By: Marie-Catherine Bieri
    Abstract: In this study, the signals of more than 530,000 news articles from 15 large Swiss newspapers are extracted to measure economic sentiment in Switzerland. Economic sentiment encompasses both consumer sentiment and sentiment about businesses. The research period for the text sentiment analysis ranges from 2016 to 2022, so the impact of the COVID-19 lockdown period is included in the analysis. I contribute two new indices: one measuring news sentiment in the German-speaking part of Switzerland, the other in the French-speaking part. The two indices show strong comovement; however, sentiment in the two language regions is not identical. The indices are available and updatable in real time. News articles, in contrast to macroeconomic variables such as GDP estimates, are not revised, making these text-based indices an interesting source of information for economic forecasters, especially in times of market turmoil.
    Keywords: Economic sentiment, Sentiment analysis, Text-based Indicator
    JEL: C53 C55 E21 E27 E37
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:snb:snbwpa:2023-07&r=big
  15. By: Fabian Krause; Jan-Peter Calliess
    Abstract: In Statistical Arbitrage (StatArb), classical mean reversion trading strategies typically hinge on asset-pricing or PCA based models to identify the mean of a synthetic asset. Once such a (linear) model is identified, a separate mean reversion strategy is then devised to generate a trading signal. With a view to generalising such an approach and making it truly data-driven, we study the utility of Autoencoder architectures in StatArb. As a first approach, we employ a standard Autoencoder trained on US stock returns to derive trading strategies based on the Ornstein-Uhlenbeck (OU) process. To further enhance this model, we take a policy-learning approach and embed the Autoencoder network into a neural network representation of a space of portfolio trading policies. This integration outputs portfolio allocations directly and is end-to-end trainable by backpropagation of the risk-adjusted returns of the neural policy. Our findings demonstrate that this innovative end-to-end policy learning approach not only simplifies the strategy development process, but also yields superior gross returns over its competitors, illustrating the potential of end-to-end training over classical two-stage approaches.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.08233&r=big
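    For reference, the classical two-stage pipeline the paper generalises ends in an Ornstein-Uhlenbeck fit on a spread; a sketch via the OU process's AR(1) discretisation (the spread is simulated here -- in practice it would come from PCA or asset-pricing residuals):
      # Fit OU dynamics by AR(1) regression and compute a mean-reversion signal.
      import numpy as np

      rng = np.random.default_rng(7)
      kappa, mu, sigma = 0.5, 0.0, 0.1
      spread = np.zeros(1000)
      for t in range(999):          # Euler simulation of OU dynamics
          spread[t + 1] = spread[t] + kappa * (mu - spread[t]) + sigma * rng.normal()

      # s_{t+1} = a + b s_t + eps  =>  kappa_hat = 1 - b, mu_hat = a / (1 - b)
      b, a = np.polyfit(spread[:-1], spread[1:], 1)
      mu_hat = a / (1 - b)
      sigma_eq = np.std(spread[1:] - (a + b * spread[:-1])) / np.sqrt(1 - b**2)
      print("s-score:", (spread[-1] - mu_hat) / sigma_eq)  # trade when far from mean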
  16. By: Zihan Dong; Xinyu Fan; Zhiyuan Peng
    Abstract: Financial market predictions utilize historical data to anticipate future stock prices and market trends. Traditionally, these predictions have focused on the statistical analysis of quantitative factors, such as stock prices, trading volumes, inflation rates, and changes in industrial production. Recent advancements in large language models motivate integrated financial analysis of both sentiment data, particularly market news, and numerical factors. Nonetheless, this methodology frequently encounters constraints due to the paucity of extensive datasets that combine both quantitative and qualitative sentiment analyses. To address this challenge, we introduce a large-scale financial dataset, the Financial News and Stock Price Integration Dataset (FNSPID). It comprises 29.7 million stock prices and 15.7 million time-aligned financial news records for 4,775 S&P 500 companies, covering the period from 1999 to 2023, sourced from 4 stock market news websites. We demonstrate that FNSPID surpasses existing stock market datasets in scale and diversity while uniquely incorporating sentiment information. Through financial analysis experiments on FNSPID, we show that (1) the dataset's size and quality significantly boost market prediction accuracy, and (2) adding sentiment scores modestly enhances the performance of transformer-based models; we also provide (3) a reproducible procedure for updating the dataset. Completed work, code, documentation, and examples are available at github.com/Zdong104/FNSPID. FNSPID offers unprecedented opportunities for the financial research community to advance predictive modeling and analysis.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.06698&r=big
  17. By: C. Sarai R. Avila
    Abstract: This study investigates the relationship between tweet sentiment across diverse categories: news, company opinions, CEO opinions, competitor opinions, and stock market behavior in the biotechnology sector, with a focus on understanding the impact of social media discourse on investor sentiment and decision-making processes. We analyzed historical stock market data for ten of the largest and most influential pharmaceutical companies alongside Twitter data related to COVID-19, vaccines, the companies, and their respective CEOs. Using VADER sentiment analysis, we examined the sentiment scores of tweets and assessed their relationships with stock market performance. We employed ARIMA (AutoRegressive Integrated Moving Average) and VAR (Vector AutoRegression) models to forecast stock market performance, incorporating sentiment covariates to improve predictions. Our findings revealed a complex interplay between tweet sentiment, news, biotech companies, their CEOs, and stock market performance, emphasizing the importance of considering diverse factors when modeling and predicting stock prices. This study provides valuable insights into the influence of social media on the financial sector and lays a foundation for future research aimed at refining stock price prediction models.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.03353&r=big
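    A sketch of the two building blocks named in the abstract -- VADER compound scores feeding an ARIMA model as an exogenous covariate; the tweets and price series are invented (VADER requires a one-time nltk.download("vader_lexicon")):
      # Score tweets with VADER, then use sentiment as an ARIMA regressor.
      import numpy as np
      from nltk.sentiment.vader import SentimentIntensityAnalyzer
      from statsmodels.tsa.arima.model import ARIMA

      sia = SentimentIntensityAnalyzer()
      tweets = ["Great trial results!", "FDA rejects the drug"]
      print([sia.polarity_scores(t)["compound"] for t in tweets])  # scores in [-1, 1]

      rng = np.random.default_rng(5)
      prices = np.cumsum(rng.normal(size=100)) + 50   # stand-in daily closes
      daily_sent = rng.uniform(-1, 1, size=100)       # stand-in daily mean sentiment

      model = ARIMA(prices, exog=daily_sent, order=(1, 1, 1)).fit()
      print(model.forecast(steps=1, exog=[[0.3]]))    # forecast given next-day sentiment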
  18. By: Elie Bouri (School of Business, Lebanese American University, Lebanon); Rangan Gupta (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa); Christian Pierdzioch (Department of Economics, Helmut Schmidt University, Holstenhofweg 85, P.O.B. 700822, 22008 Hamburg, Germany)
    Abstract: In the wake of a massive thrust on designing policies to tackle climate change, we study the role of climate policy uncertainty in impacting the presidential approval ratings of the United States (US). We control for other policy related uncertainties and geopolitical risks, over and above macroeconomic and financial predictors used in earlier literature on drivers of approval ratings of the US president. Because we study as many as 19 determinants, and nonlinearity is a well-established observation in this area of research, we utilize random forests, a machine-learning approach, to derive our results over the monthly period of 1987:04 to 2023:12. We find that, though the association of the presidential approval ratings with climate policy uncertainty is moderately negative and nonlinear, this type of uncertainty is in fact relatively more important than other measures of policy-related uncertainties, as well as many of the widely-used macroeconomic and financial indicators associated with presidential approval. In addition, and more importantly, we also detect that the importance of climate policy uncertainty has grown in recent years in terms of its impact on the approval ratings of the US president.
    Keywords: Presidential approval ratings, Climate policy uncertainty, Random forests
    JEL: C22 Q54
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:202406&r=big
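    A sketch of the random-forest importance exercise the abstract describes, with 19 synthetic predictors standing in for the study's determinants; column 0 plays the role of climate policy uncertainty, entering negatively, and column 1 enters nonlinearly:
      # Rank predictors of approval ratings by permutation importance.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.inspection import permutation_importance

      rng = np.random.default_rng(6)
      X = rng.normal(size=(440, 19))                  # ~monthly sample, 19 determinants
      y = -0.4 * X[:, 0] + 0.2 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=440)

      rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
      imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
      print(np.argsort(imp.importances_mean)[::-1][:5])   # indices of top-5 predictors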
  19. By: Aliona Cebotari; Enrique Chueca-Montuenga; Yoro Diallo; Yunsheng Ma; Ms. Rima A Turk; Weining Xin; Harold Zavarce
    Abstract: The paper explores the drivers of political fragility by focusing on coups d’état as symptomatic of such fragility. It uses event studies to identify factors that exhibit significantly different dynamics in the runup to coups, and machine learning to identify these stressors and more structural determinants of fragility—as well as their nonlinear interactions—that create an environment propitious to coups. The paper finds that the destabilization of a country’s economic, political or security environment—such as low growth, high inflation, weak external positions, political instability and conflict—set the stage for a higher likelihood of coups, with overlapping stressors amplifying each other. These stressors are more likely to lead to breakdowns in political systems when demographic pressures and underlying structural weaknesses (especially poverty, exclusion, and weak governance) are present or when policies are weaker, through complex interactions. Conversely, strengthened fundamentals and macropolicies have higher returns in structurally fragile environments in terms of staving off political breakdowns, suggesting that continued engagement by multilateral institutions and donors in fragile situations is likely to yield particularly high dividends. The model performs well in predicting coups out of sample, having predicted a high probability of most 2020-23 coups, including in the Sahel region.
    Keywords: Fragility; Drivers of Fragility; Coup d’État; Machine Learning
    Date: 2024–02–16
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2024/034&r=big
  20. By: Andrea Coletta; Kshama Dwarakanath; Penghang Liu; Svitlana Vyetrenko; Tucker Balch
    Abstract: Modeling subrational agents, such as humans or economic households, is inherently challenging due to the difficulty of calibrating reinforcement learning models or collecting data that involves human subjects. Existing work highlights the ability of Large Language Models (LLMs) to address complex reasoning tasks and mimic human communication, while simulations using LLMs as agents show emergent social behaviors, potentially improving our comprehension of human conduct. In this paper, we propose to investigate the use of LLMs to generate synthetic human demonstrations, which are then used to learn subrational agent policies through Imitation Learning. We assume that LLMs can be used as implicit computational models of humans, and propose a framework to use synthetic demonstrations derived from LLMs to model subrational behaviors that are characteristic of humans (e.g., myopic behavior or preference for risk aversion). We experimentally evaluate the ability of our framework to model subrationality through four simple scenarios, including the well-researched ultimatum game and marshmallow experiment. To gain confidence in our framework, we replicate well-established findings from prior human studies associated with these scenarios. We conclude by discussing the potential benefits, challenges and limitations of our framework.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.08755&r=big
  21. By: Connor R. Forsythe; Cristian Arteaga; John P. Helveston
    Abstract: This paper introduces the Heterogeneous Aggregate Valence Analysis (HAVAN) model, a novel class of discrete choice models. We adopt the term "valence'' to encompass any latent quantity used to model consumer decision-making (e.g., utility, regret, etc.). Diverging from traditional models that parameterize heterogeneous preferences across various product attributes, HAVAN models (pronounced "haven") instead directly characterize alternative-specific heterogeneous preferences. This innovative perspective on consumer heterogeneity affords unprecedented flexibility and significantly reduces simulation burdens commonly associated with mixed logit models. In a simulation experiment, the HAVAN model demonstrates superior predictive performance compared to state-of-the-art artificial neural networks. This finding underscores the potential for HAVAN models to improve discrete choice modeling capabilities.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.00184&r=big
  22. By: Tao Ren; Ruihan Zhou; Jinyang Jiang; Jiafeng Liang; Qinghao Wang; Yijie Peng
    Abstract: Formulaic alphas are mathematical formulas that transform raw stock data into indicator signals. In the industry, a collection of formulaic alphas is combined to enhance modeling accuracy. Existing alpha-mining approaches employ only neural network agents and are unable to utilize the structural information of the solution space. Moreover, they do not consider the correlation between alphas in the collection, which limits their synergistic performance. To address these problems, we propose a novel alpha-mining framework, which formulates the alpha-mining problem as a reward-dense Markov Decision Process (MDP) and solves the MDP with risk-seeking Monte Carlo Tree Search (MCTS). The MCTS-based agent fully exploits the structural information of the discrete solution space, and the risk-seeking policy explicitly optimizes best-case performance rather than average outcomes. Comprehensive experiments are conducted to demonstrate the efficiency of our framework. Our method outperforms all state-of-the-art benchmarks on two real-world stock sets under various metrics. Backtest experiments show that our alphas achieve the most profitable results under a realistic trading setting.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.07080&r=big
  23. By: Yoosoon Chang; Steven N. Durlauf; Bo Hu; Joon Y. Park
    Abstract: This paper proposes a fully nonparametric model to investigate the dynamics of intergenerational income mobility. In our model, an individual’s income class probabilities depend on parental income in a manner that accommodates nonlinearities and interactions among various individual characteristics and parental characteristics, including race, education, and parental age at childbearing. Consequently, we offer a generalization of Markov chain mobility models. We employ kernel techniques from machine learning and further regularization for estimating this highly flexible model. Utilizing data from the Panel Study of Income Dynamics (PSID), we find that race and parental education play significant roles in determining the influence of parental income on children’s economic prospects.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:bny:wpaper:0129&r=big
  24. By: Dang, Hai-Anh (World Bank); Kilic, Talip (World Bank); Hlasny, Vladimir (UN ESCWA); Abanokova, Kseniya (World Bank); Carletto, Calogero (World Bank)
    Abstract: Survey data on household consumption are often unavailable or incomparable over time in many low- and middle-income countries. Based on a unique randomized survey experiment implemented in Tanzania, this study offers new and rigorous evidence demonstrating that survey-to-survey imputation can fill consumption data gaps and provide low-cost and reliable poverty estimates. Basic imputation models featuring utility expenditures, together with a modest set of predictors on demographics, employment, household assets and housing, yield accurate predictions. Imputation accuracy is robust to varying survey questionnaire length; the choice of base surveys for estimating the imputation model; different poverty lines; and alternative (quarterly or monthly) CPI deflators. The proposed approach to imputation also performs better than multiple imputation and a range of machine learning techniques. In the case of a target survey with modified (e.g., shortened or aggregated) food or non-food consumption modules, imputation models including food or non-food consumption as predictors do well only if the distributions of the predictors are standardized vis-à-vis the base survey. For best-performing models to reach acceptable levels of accuracy, the minimum-required sample size should be 1,000 for both base and target surveys. The discussion expands on the implications of the findings for the design of future surveys.
    Keywords: consumption, poverty, survey-to-survey imputation, household surveys, Tanzania
    JEL: C15 I32 O15
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp16792&r=big
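    A sketch of the survey-to-survey imputation idea: estimate a consumption model on the base survey, impute into a target survey that lacks a consumption module, and compute a poverty rate; variables, coefficients, and the poverty line are illustrative assumptions:
      # Impute log consumption into a target survey and estimate poverty.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(8)
      Xb = rng.normal(size=(1000, 4))     # base survey: demographics, assets, housing
      log_cons = 1.0 + Xb @ np.array([0.3, 0.2, 0.1, 0.15]) \
                 + rng.normal(scale=0.4, size=1000)
      model = LinearRegression().fit(Xb, log_cons)

      Xt = rng.normal(size=(1000, 4))     # target survey: same predictors, no consumption
      imputed = model.predict(Xt) + rng.normal(scale=0.4, size=1000)  # residual draw
      print("imputed poverty rate:", np.mean(imputed < 1.0))  # 1.0 = log poverty line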
  25. By: Barrie, Mohamed Samba
    Abstract: This paper presents the results of a text mining analysis of the quarterly monetary policy statements published by the Bank of Sierra Leone, examining the effectiveness and readability of the Bank's monetary policy communication. The research focuses on evaluating the clarity and simplicity of the Bank's communication efforts from 2019Q1 to 2023Q1. Using techniques such as word cloud extraction, keyword density analysis, and text readability metrics, we analyzed 17 quarterly monetary policy statements; our corpus consists of 43 pages of text data. The findings reveal the prominent themes and linguistic characteristics embedded in the Bank's monetary policy statements. Through an examination of keyword frequency and thematic trends, the research shows that the Bank of Sierra Leone's monetary policy statements emphasize inflation management, economic growth, and exogenous factors affecting monetary policy implementation, with language and structure varying across quarters. Moreover, the analysis suggests that understanding these statements typically requires a college-level education, posing accessibility challenges for a significant portion of Sierra Leone's population. The policy implications drawn from this analysis underscore the importance of stakeholder engagement, transparency, adoption of digital outreach tools, local-language communication, and continuous monitoring and evaluation of the Bank's communication strategy. This research contributes empirical evidence on monetary policy communication in Sierra Leone, offering insights for policymakers on how best to improve communication strategies aimed at financial market participants and the general public.
    Keywords: Text mining, monetary policy, communication, Bank of Sierra Leone, Effectiveness, Readability, Clarity, Comprehensibility
    JEL: E52 E58 G14 G28 C88
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:zbw:esprep:283289&r=big
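    A sketch of the readability-metrics step, using the textstat package's standard grade-level measures on an invented statement excerpt:
      # Grade-level readability metrics for a monetary policy statement excerpt.
      import textstat

      statement = ("The Monetary Policy Committee decided to raise the monetary "
                   "policy rate to anchor inflation expectations amid exchange "
                   "rate pressures.")

      print(textstat.flesch_reading_ease(statement))   # higher = easier to read
      print(textstat.flesch_kincaid_grade(statement))  # US school-grade estimate
      print(textstat.gunning_fog(statement))           # years of education needed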
  26. By: Audinga Baltrunaite (Bank of Italy and CEPR); Alessandra Casarico (Bocconi University, CESIfo and Dondena); Lucia Rizzica (Bank of Italy)
    Abstract: We study the presence and the extent of gender differences in reference letters for graduate students in economics and finance, and how these differences relate to early labor market outcomes. To these ends, we build a novel rich dataset and combine Natural Language Processing techniques with standard regression analysis. We find that men are described more often as brilliant and women as hardworking and diligent. We show that the former (latter) description relates positively (negatively) with various subsequent career outcomes. We provide evidence that the observed differences in the way candidates are described are driven by implicit gender stereotypes.
    Keywords: gender bias, research institutions, professional labor markets, word embeddings
    JEL: I23 J16 J44
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1438_24&r=big
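    The paper's word-embedding analysis is richer than can be sketched briefly; as a simpler stand-in for the same question, a dictionary-based count of "brilliant"-type versus "hardworking/diligent"-type descriptors by candidate gender (word lists and letters are invented):
      # Count ability vs. grindstone descriptors in reference letters.
      ability = {"brilliant", "genius", "outstanding", "exceptional"}
      grindstone = {"hardworking", "diligent", "careful", "dedicated"}

      letters = [("F", "She is diligent and hardworking, a careful researcher."),
                 ("M", "He is brilliant, with outstanding technical skills.")]

      for gender, text in letters:
          words = set(text.lower().replace(",", " ").replace(".", " ").split())
          print(gender, "ability:", len(words & ability),
                "grindstone:", len(words & grindstone))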
  27. By: Joshua C. Yang; Marcin Korecki; Damian Dailisan; Carina I. Hausladen; Dirk Helbing
    Abstract: This paper investigates the voting behaviors of Large Language Models (LLMs), particularly OpenAI's GPT-4 and LLaMA 2, and their alignment with human voting patterns. Our approach included a human voting experiment to establish a baseline for human preferences and a parallel experiment with LLM agents. The study focused on both collective outcomes and individual preferences, revealing differences in decision-making and inherent biases between humans and LLMs. We observed a trade-off between preference diversity and alignment in LLMs, with a tendency towards more uniform choices as compared to the diverse preferences of human voters. This finding indicates that LLMs could lead to more homogenized collective outcomes when used in voting assistance, underscoring the need for cautious integration of LLMs into democratic processes.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.01766&r=big
  28. By: Narun Raman; Taylor Lundy; Samuel Amouyal; Yoav Levine; Kevin Leyton-Brown; Moshe Tennenholtz
    Abstract: There is increasing interest in using LLMs as decision-making "agents." Doing so includes many degrees of freedom: which model should be used; how should it be prompted; should it be asked to introspect, conduct chain-of-thought reasoning, etc.? Settling these questions -- and more broadly, determining whether an LLM agent is reliable enough to be trusted -- requires a methodology for assessing such an agent's economic rationality. In this paper, we provide one. We begin by surveying the economic literature on rational decision making, taxonomizing a large set of fine-grained "elements" that an agent should exhibit, along with dependencies between them. We then propose a benchmark distribution that quantitatively scores an LLM's performance on these elements and, combined with a user-provided rubric, produces a "rationality report card." Finally, we describe the results of a large-scale empirical experiment with 14 different LLMs, characterizing both the current state of the art and the impact of different model sizes on models' ability to exhibit rational behavior.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09552&r=big

This nep-big issue is ©2024 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.