nep-big New Economics Papers
on Big Data
Issue of 2024‒10‒28
twenty-one papers chosen by
Tom Coupé, University of Canterbury


  1. Mining Causality: AI-Assisted Search for Instrumental Variables By Sukjin Han
  2. Cross-Lingual News Event Correlation for Stock Market Trend Prediction By Sahar Arshad; Nikhar Azhar; Sana Sajid; Seemab Latif; Rabia Latif
  3. Strategic Collusion of LLM Agents: Market Division in Multi-Commodity Competitions By Ryan Y. Lin; Siddhartha Ojha; Kevin Cai; Maxwell F. Chen
  4. A Unified Framework to Classify Business Activities into International Standard Industrial Classification through Large Language Models for Circular Economy By Xiang Li; Lan Zhao; Junhao Ren; Yajuan Sun; Chuan Fu Tan; Zhiquan Yeo; Gaoxi Xiao
  5. Price predictability in limit order book with deep learning model By Kyungsub Lee
  6. Unveiling the Potential of Graph Neural Networks in SME Credit Risk Assessment By Bingyao Liu; Iris Li; Jianhua Yao; Yuan Chen; Guanming Huang; Jiajing Wang
  7. Deep Learning to Play Games By Daniele Condorelli; Massimiliano Furlan
  8. Leveraging Fundamental Analysis for Stock Trend Prediction for Profit By John Phan; Hung-Fu Chang
  9. A Framework for the Construction of a Sentiment-Driven Performance Index: The Case of DAX40 By Fabian Billert; Stefan Conrad
  10. Gender Stereotyping in the Labor Market: A Descriptive Analysis of Almost One Million Job Ads across 710 Occupations and Occupational Positions By Damelang, Andreas; Rückel, Ann-Katrin; Stops, Michael
  11. Trading through Earnings Seasons using Self-Supervised Contrastive Representation Learning By Zhengxin Joseph Ye; Bjoern Schuller
  12. GARCH-Informed Neural Networks for Volatility Prediction in Financial Markets By Zeda Xu; John Liechty; Sebastian Benthall; Nicholas Skar-Gislinge; Christopher McComb
  13. Mixed-Effects Frequency-Adjusted Borders Ordinal Forest: A Tree Ensemble Method for Ordinal Prediction with Hierarchical Data By Buczak, Philip
  14. Deep Gamma Hedging By John Armstrong; George Tatlow
  15. A Spatio-Temporal Machine Learning Model for Mortgage Credit Risk: Default Probabilities and Loan Portfolios By Pascal Kündig; Fabio Sigrist
  16. Learning to play Sokoban from videos By Fricker, Nicolai Benjamin; Krüger, Nicolai; Schubart, Constantin
  17. What Does ChatGPT Make of Historical Stock Returns? Extrapolation and Miscalibration in LLM Stock Return Forecasts By Shuaiyu Chen; T. Clifton Green; Huseyin Gulen; Dexin Zhou
  18. Experimental evidence that delegating to intelligent machines can increase dishonest behaviour By Köbis, Nils; Rahwan, Zoe; Bersch, Clara; Ajaj, Tamer; Bonnefon, Jean-François; Rahwan, Iyad
  19. American Call Options Pricing With Modular Neural Networks By Ananya Unnikrishnan
  20. The Construction of Instruction-tuned LLMs for Finance without Instruction Data Using Continual Pretraining and Model Merging By Masanori Hirano; Kentaro Imajo
  21. Machine Learning and Econometric Approaches to Fiscal Policies: Understanding Industrial Investment Dynamics in Uruguay (1974-2010) By Diego Vallarino

  1. By: Sukjin Han
    Abstract: The instrumental variables (IVs) method is a leading empirical strategy for causal inference. Finding IVs is a heuristic and creative process, and justifying its validity (especially exclusion restrictions) is largely rhetorical. We propose using large language models (LLMs) to search for new IVs through narratives and counterfactual reasoning, similar to how a human researcher would. The stark difference, however, is that LLMs can accelerate this process exponentially and explore an extremely large search space. We demonstrate how to construct prompts to search for potentially valid IVs. We argue that multi-step prompting is useful and role-playing prompts are suitable for mimicking the endogenous decisions of economic agents. We apply our method to three well-known examples in economics: returns to schooling, production functions, and peer effects. We then extend our strategy to finding (i) control variables in regression and difference-in-differences and (ii) running variables in regression discontinuity designs.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.14202
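    The multi-step, role-playing prompting strategy described above can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' actual prompts; `call_llm` is a hypothetical stand-in for whatever chat-completion client is used.
```python
# Minimal sketch of a multi-step, role-playing prompt chain for IV search.
# `call_llm` is a hypothetical helper standing in for any chat-completion API;
# the prompts below are illustrative and are not taken from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("wire this to your LLM client of choice")

def search_for_ivs(treatment: str, outcome: str, n_candidates: int = 10) -> list[str]:
    # Step 1: role-play the economic agent whose choice creates endogeneity.
    role = call_llm(
        f"You are an individual deciding on '{treatment}'. "
        f"Describe, as a narrative, the factors that drive your decision."
    )
    # Step 2: ask for factors that shift the decision but plausibly do not
    # affect the outcome directly (candidate exclusion restrictions).
    candidates = call_llm(
        f"Given this narrative:\n{role}\n"
        f"List {n_candidates} factors that influence '{treatment}' but would "
        f"not affect '{outcome}' except through '{treatment}'."
    )
    # Step 3: counterfactual screening of each candidate's exclusion restriction.
    screened = call_llm(
        f"For each candidate below, argue whether the exclusion restriction "
        f"could fail, and keep only the defensible ones:\n{candidates}"
    )
    return [line.strip("- ") for line in screened.splitlines() if line.strip()]
```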
  2. By: Sahar Arshad; Nikhar Azhar; Sana Sajid; Seemab Latif; Rabia Latif
    Abstract: In the modern economic landscape, integrating financial services with Financial Technology (FinTech) has become essential, particularly in stock trend analysis. This study addresses the gap in comprehending financial dynamics across diverse global economies by creating a structured financial dataset and proposing a cross-lingual Natural Language-based Financial Forecasting (NLFF) pipeline for comprehensive financial analysis. Utilizing sentiment analysis, Named Entity Recognition (NER), and semantic textual similarity, we conducted an analytical examination of news articles to extract, map, and visualize financial event timelines, uncovering the correlation between news events and stock market trends. Our method demonstrated a meaningful correlation between stock price movements and cross-linguistic news sentiments, validated by processing two-year cross-lingual news data on two prominent sectors of the Pakistan Stock Exchange. This study offers significant insights into key events, ensuring a substantial decision margin for investors through effective visualization and providing optimal investment opportunities.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.00024
  3. By: Ryan Y. Lin; Siddhartha Ojha; Kevin Cai; Maxwell F. Chen
    Abstract: Machine-learning technologies are seeing increased deployment in real-world market scenarios. In this work, we explore the strategic behaviors of large language models (LLMs) when deployed as autonomous agents in multi-commodity markets, specifically within Cournot competition frameworks. We examine whether LLMs can independently engage in anti-competitive practices such as collusion or, more specifically, market division. Our findings demonstrate that LLMs can effectively monopolize specific commodities by dynamically adjusting their pricing and resource allocation strategies, thereby maximizing profitability without direct human input or explicit collusion commands. These results pose unique challenges and opportunities for businesses looking to integrate AI into strategic roles and for regulatory bodies tasked with maintaining fair and competitive markets. The study provides a foundation for further exploration into the ramifications of deferring high-stakes decisions to LLM-based agents.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.00031
  4. By: Xiang Li; Lan Zhao; Junhao Ren; Yajuan Sun; Chuan Fu Tan; Zhiquan Yeo; Gaoxi Xiao
    Abstract: Effective information gathering and knowledge codification are pivotal for developing recommendation systems that promote circular economy practices. One promising approach involves the creation of a centralized knowledge repository cataloguing historical waste-to-resource transactions, which subsequently enables the generation of recommendations based on past successes. However, a significant barrier to constructing such a knowledge repository lies in the absence of a universally standardized framework for representing business activities across disparate geographical regions. To address this challenge, this paper leverages Large Language Models (LLMs) to classify textual data describing economic activities into the International Standard Industrial Classification (ISIC), a globally recognized economic activity classification framework. This approach enables any economic activity descriptions provided by businesses worldwide to be categorized into the unified ISIC standard, facilitating the creation of a centralized knowledge repository. Our approach achieves a 95% accuracy rate on a 182-label test dataset with a fine-tuned GPT-2 model. This research contributes to the global endeavour of fostering sustainable circular economy practices by providing a standardized foundation for knowledge codification and recommendation systems deployable across regions.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.18988
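    One plausible setup for the classification step described above, using Hugging Face transformers to fine-tune GPT-2 on a 182-label ISIC task. The label count comes from the abstract; the example text, label index, and training details are assumptions.
```python
# Sketch: GPT-2 as a 182-way sequence classifier for activity descriptions.
# The label count is from the abstract; data and training details are assumed.
import torch
from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=182)
model.config.pad_token_id = tokenizer.pad_token_id

texts = ["Collection and treatment of industrial waste water"]   # toy example
labels = torch.tensor([37])                                      # hypothetical ISIC class index

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=labels)

out.loss.backward()                 # one fine-tuning step (optimizer omitted)
pred = out.logits.argmax(dim=-1)    # predicted ISIC class index
```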
  5. By: Kyungsub Lee
    Abstract: This study explores the prediction of high-frequency price changes using deep learning models. Although state-of-the-art methods perform well, their complexity impedes the understanding of successful predictions. We found that an inadequately defined target price process may render predictions meaningless by incorporating past information. The commonly used three-class problem in asset price prediction can generally be divided into volatility and directional prediction. When relying solely on the price process, directional prediction performance is not substantial. However, volume imbalance improves directional prediction performance.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.14157
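    The volume-imbalance feature mentioned above is commonly built from quantities at the best quotes; the definition below is one standard version, assumed here for illustration (the paper's exact construction may differ).
```python
# Order-book volume imbalance at the best quotes -- one common definition,
# assumed here as an illustration.
import numpy as np

bid_volume = np.array([120.0, 80.0, 95.0])   # volume resting at the best bid
ask_volume = np.array([60.0, 110.0, 95.0])   # volume resting at the best ask

imbalance = (bid_volume - ask_volume) / (bid_volume + ask_volume)  # in [-1, 1]
print(imbalance)   # positive values suggest buy pressure, negative sell pressure
```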
  6. By: Bingyao Liu; Iris Li; Jianhua Yao; Yuan Chen; Guanming Huang; Jiajing Wang
    Abstract: This paper uses graph neural networks as its technical framework, integrates the intrinsic connections between enterprise financial indicators, and proposes a model for enterprise credit risk assessment. The main research work is as follows. First, drawing on prior studies, we selected 29 enterprise financial indicators, abstracted each indicator as a vertex, analyzed the relationships between the indicators, constructed an indicator similarity matrix, and used a maximum spanning tree algorithm to map each enterprise to a graph structure. Second, in the representation-learning phase, a graph neural network was built to obtain an embedded representation of the mapped graph: the feature vector of each node was expanded to 32 dimensions, three GraphSAGE operations were performed on the graph, the result of each was pooled, and the three pooled feature vectors were averaged to obtain the graph embedding. Finally, a classifier consisting of a two-layer fully connected network completed the prediction task. Experimental results on real enterprise data show that the proposed model performs multi-level credit rating estimation well. Furthermore, the tree-structured graph mapping captures the intrinsic connections among the company's indicator data, and according to the ROC and other evaluation criteria, the model's classification performance is significant and robust.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.17909
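    A minimal sketch of the pipeline described in this abstract (29 indicator vertices, 32-dimensional node features, three GraphSAGE operations, pooling, averaging, and a two-layer classifier), written in plain PyTorch with a mean aggregator. The number of credit classes, the activation, and the pooling choice are assumptions.
```python
import torch
import torch.nn as nn

class MeanSAGELayer(nn.Module):
    """GraphSAGE layer with mean aggregation: concat(self, mean(neighbours)) -> linear."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg                       # mean over neighbours
        return torch.relu(self.lin(torch.cat([x, neigh], dim=1)))

class CreditGraphNet(nn.Module):
    """Sketch of the described pipeline: 29 indicator nodes with 32-dim features,
    three SAGE operations, pooling after each stage, averaging, 2-layer classifier.
    The number of credit classes (5) is an assumption."""
    def __init__(self, n_classes=5, dim=32):
        super().__init__()
        self.sage = nn.ModuleList([MeanSAGELayer(dim, dim) for _ in range(3)])
        self.classifier = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                        nn.Linear(dim, n_classes))

    def forward(self, x, adj):                      # x: (29, 32), adj: (29, 29) spanning tree
        pooled = []
        for layer in self.sage:
            x = layer(x, adj)
            pooled.append(x.mean(dim=0))            # graph-level mean pooling per stage
        graph_emb = torch.stack(pooled).mean(dim=0) # average the three pooled vectors
        return self.classifier(graph_emb)

# toy usage on a random 29-node indicator graph
x = torch.randn(29, 32)
adj = (torch.rand(29, 29) > 0.9).float()
logits = CreditGraphNet()(x, adj)
```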
  7. By: Daniele Condorelli; Massimiliano Furlan
    Abstract: We train two neural networks adversarially to play normal-form games. At each iteration, a row and column network take a new randomly generated game and output individual mixed strategies. The parameters of each network are independently updated via stochastic gradient descent to minimize expected regret given the opponent's strategy. Our simulations demonstrate that the joint behavior of the networks converges to strategies close to Nash equilibria in almost all games. For all $2 \times 2$ and in 80% of $3 \times 3$ games with multiple equilibria, the networks select the risk-dominant equilibrium. Our results show how Nash equilibrium emerges from learning across heterogeneous games.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.15197
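    The training loop sketched below follows the description above: a row network and a column network map a randomly generated game to mixed strategies, and each is updated by stochastic gradient descent to minimise its expected regret given the opponent's (fixed) strategy. The network sizes, the 3x3 game dimension, and the payoff distribution are assumptions.
```python
import torch
import torch.nn as nn

K = 3  # 3x3 games (assumed size)

def strategy_net():
    # maps the flattened payoff matrices of both players to a mixed strategy
    return nn.Sequential(nn.Linear(2 * K * K, 64), nn.ReLU(),
                         nn.Linear(64, K), nn.Softmax(dim=-1))

row_net, col_net = strategy_net(), strategy_net()
opt = torch.optim.SGD(list(row_net.parameters()) + list(col_net.parameters()), lr=1e-2)

for step in range(10_000):
    A = torch.rand(K, K)                  # row player's payoffs in a random game
    B = torch.rand(K, K)                  # column player's payoffs
    game = torch.cat([A.flatten(), B.flatten()])

    p = row_net(game)                     # row mixed strategy
    q = col_net(game)                     # column mixed strategy

    # expected regret of each player, holding the opponent's strategy fixed
    row_regret = (A @ q.detach()).max() - p @ A @ q.detach()
    col_regret = (B.T @ p.detach()).max() - p.detach() @ B @ q

    opt.zero_grad()
    (row_regret + col_regret).backward()
    opt.step()
```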
  8. By: John Phan; Hung-Fu Chang
    Abstract: This paper investigates the application of machine learning models, Long Short-Term Memory (LSTM), one-dimensional Convolutional Neural Networks (1D CNN), and Logistic Regression (LR), for predicting stock trends based on fundamental analysis. Unlike most existing studies that predominantly utilize technical or sentiment analysis, we emphasize the use of a company's financial statements and intrinsic value for trend forecasting. Using a dataset of 269 data points from publicly traded companies across various sectors from 2019 to 2023, we employ key financial ratios and the Discounted Cash Flow (DCF) model to formulate two prediction tasks: Annual Stock Price Difference (ASPD) and Difference between Current Stock Price and Intrinsic Value (DCSPIV). These tasks assess the likelihood of annual profit and current profitability, respectively. Our results demonstrate that LR models outperform CNN and LSTM models, achieving an average test accuracy of 74.66% for ASPD and 72.85% for DCSPIV. This study contributes to the limited literature on integrating fundamental analysis into machine learning for stock prediction, offering valuable insights for both academic research and practical investment strategies. By leveraging fundamental data, our approach highlights the potential for long-term stock trend prediction, supporting portfolio managers in their decision-making processes.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.03913
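    The two tasks above (ASPD and DCSPIV) are binary classifications built from financial ratios and a DCF-based intrinsic value. The sketch below shows a toy DCF valuation and a scikit-learn logistic regression as one plausible baseline; the ratio set, DCF parameters, and labels are illustrative assumptions.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def dcf_intrinsic_value(fcf, growth=0.03, discount=0.09, terminal_growth=0.02, years=5):
    """Simple discounted-cash-flow value per share (all parameters assumed)."""
    flows = [fcf * (1 + growth) ** t / (1 + discount) ** t for t in range(1, years + 1)]
    terminal = flows[-1] * (1 + terminal_growth) / (discount - terminal_growth)
    return sum(flows) + terminal / (1 + discount) ** years

intrinsic = dcf_intrinsic_value(fcf=5.0)   # toy per-share value; feeds the last feature below

# toy feature matrix: e.g. P/E, debt-to-equity, ROE, and intrinsic value / current price
X = np.array([[15.2, 0.8, 0.12, 1.10],
              [32.5, 1.9, 0.05, 0.70],
              [11.0, 0.4, 0.18, 1.35]])
y = np.array([1, 0, 1])   # e.g. ASPD label: 1 if the price rose over the following year

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X)[:, 1])      # probability of an annual price increase
```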
  9. By: Fabian Billert; Stefan Conrad
    Abstract: We extract sentiment from German and English news articles on companies in the DAX40 stock market index and use it to create a sentiment-powered counterpart index. Compared to existing products, which adjust their weights at pre-defined dates once per month, our index reacts more swiftly to sentiment information mined from online news. Over the nearly six years considered, the sentiment index achieves an annualized return of 7.51%, compared with 2.13% for the DAX40, after accounting for transaction costs. In this work, we present the framework we employed to develop this sentiment index.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.20397
  10. By: Damelang, Andreas (FAU); Rückel, Ann-Katrin (FAU); Stops, Michael (Institute for Employment Research (IAB), Nuremberg, Germany)
    Abstract: "This study presents patterns of gender stereotyping in job ads in the German labor market and examines its association with the unequal distribution of men and women across occupations. Using a large dataset of job ads from the "BA-Jobbörse", one of the largest online job portals in Germany, we apply a machine learning algorithm to identify the explicitly verbalized job descriptions. We then use a dictionary of agentic (male-associated) and communal (female-associated) signal words to measure gender stereotyping in the job descriptions. We collect information for 710 different occupations. Our first result shows that more jobs are female-stereotyped than male-stereotyped. We then take the example of two occupational groups that reveal clear differences in tasks contents and are highly relevant regarding important megatrends like digitalization and the demographic change: On the one hand, Science, Technology, Engineering, and Mathematics (STEM) and, on the other hand, Health and Social Services occupations. Additionally, we investigate the hierarchical aspect of occupational gender segregation. We distinguish jobs according to their required skill level and whether or not they are supervisory and leadership positions. In contrast to our first result, we find within STEM occupations as well as in supervisory and leadership positions that the majority of jobs is male-stereotyped. Our findings indicate a positive association between gender stereotyping and occupational gender segregation, suggesting that gender stereotyping in job ads might contribute to the underrepresentation of women in certain occupations and occupational positions." (Author's abstract, IAB-Doku) ((en))
    Keywords: IAB-Open-Access-Publikation
    JEL: J71
    Date: 2024–09–30
    URL: https://d.repec.org/n?u=RePEc:iab:iabdpa:202413
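    The dictionary-based measurement described above (counting agentic versus communal signal words in an ad) can be sketched in a few lines. The word lists below are tiny illustrative stand-ins, not the dictionary used in the study.
```python
# Dictionary-based stereotyping score -- illustrative word lists only,
# not the dictionary used in the study.
import re

AGENTIC = {"durchsetzungsstark", "analytisch", "ehrgeizig", "entschlossen"}
COMMUNAL = {"einfühlsam", "teamfähig", "hilfsbereit", "zuverlässig"}

def stereotype_score(ad_text: str) -> float:
    """Positive -> male-stereotyped wording, negative -> female-stereotyped."""
    tokens = re.findall(r"\w+", ad_text.lower())
    agentic = sum(t in AGENTIC for t in tokens)
    communal = sum(t in COMMUNAL for t in tokens)
    total = agentic + communal
    return 0.0 if total == 0 else (agentic - communal) / total

print(stereotype_score("Sie sind durchsetzungsstark, analytisch und ehrgeizig."))
```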
  11. By: Zhengxin Joseph Ye; Bjoern Schuller
    Abstract: Earnings release is a key economic event in the financial markets and crucial for predicting stock movements. Earnings data gives a glimpse into how a company is doing financially and can hint at where its stock might go next. However, the irregularity of its release cycle makes it a challenge to incorporate this data in a medium-frequency algorithmic trading model and the usefulness of this data fades fast after it is released, making it tough for models to stay accurate over time. Addressing this challenge, we introduce the Contrastive Earnings Transformer (CET) model, a self-supervised learning approach rooted in Contrastive Predictive Coding (CPC), aiming to optimise the utilisation of earnings data. To ascertain its effectiveness, we conduct a comparative study of CET against benchmark models across diverse sectors. Our research delves deep into the intricacies of stock data, evaluating how various models, and notably CET, handle the rapidly changing relevance of earnings data over time and over different sectors. The research outcomes shed light on CET's distinct advantage in extrapolating the inherent value of earnings data over time. Its foundation on CPC allows for a nuanced understanding, facilitating consistent stock predictions even as the earnings data ages. This finding about CET presents a fresh approach to better use earnings data in algorithmic trading for predicting stock price trends.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.17392
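    CET is rooted in Contrastive Predictive Coding, whose core is an InfoNCE objective: a context representation should identify the matching future encoding among in-batch negatives. Below is a minimal InfoNCE sketch on toy embeddings; the encoder, context network, and batch construction are assumptions.
```python
import torch
import torch.nn.functional as F

def info_nce(context, future, temperature=0.1):
    """InfoNCE loss used in Contrastive Predictive Coding.
    context: (B, D) predicted representations; future: (B, D) true future encodings.
    Each row's positive is the matching row; all other rows act as negatives."""
    context = F.normalize(context, dim=-1)
    future = F.normalize(future, dim=-1)
    logits = context @ future.T / temperature          # (B, B) similarity matrix
    targets = torch.arange(context.size(0))            # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# toy usage: 16 earnings-window embeddings of dimension 64 (sizes assumed)
loss = info_nce(torch.randn(16, 64), torch.randn(16, 64))
```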
  12. By: Zeda Xu; John Liechty; Sebastian Benthall; Nicholas Skar-Gislinge; Christopher McComb
    Abstract: Volatility, which indicates the dispersion of returns, is a crucial measure of risk and is hence used extensively for pricing and discriminating between different financial investments. As a result, accurate volatility prediction receives extensive attention. The Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model and its succeeding variants are well-established models for stock volatility forecasting. More recently, deep learning models have gained popularity in volatility prediction as they demonstrated promising accuracy in certain time series prediction tasks. Inspired by Physics-Informed Neural Networks (PINN), we constructed a new, hybrid Deep Learning model that combines the strengths of GARCH with the flexibility of a Long Short-Term Memory (LSTM) Deep Neural Network (DNN), thus capturing and forecasting market volatility more accurately than either class of models is capable of on its own. We refer to this novel model as a GARCH-Informed Neural Network (GINN). When compared to other time series models, GINN showed superior out-of-sample prediction performance in terms of the Coefficient of Determination ($R^2$), Mean Squared Error (MSE), and Mean Absolute Error (MAE).
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.00288
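    The abstract does not spell out how GARCH and the LSTM are coupled in GINN; one plausible reading, sketched below, feeds a GARCH(1,1) conditional-variance series to an LSTM alongside returns and regresses next-step volatility. The GARCH parameters, network size, and target definition are assumptions.
```python
import numpy as np
import torch
import torch.nn as nn

def garch_11_variance(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """Conditional-variance recursion of a GARCH(1,1) with assumed parameters."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

class VolLSTM(nn.Module):
    """LSTM that maps (return, GARCH variance) sequences to a next-step volatility forecast."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, 2)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # one-step-ahead volatility forecast

returns = np.random.normal(0, 0.01, size=500)
features = np.stack([returns, garch_11_variance(returns)], axis=-1)
x = torch.tensor(features[None, :-1], dtype=torch.float32)
forecast = VolLSTM()(x)
```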
  13. By: Buczak, Philip
    Abstract: Predicting ordinal responses such as school grades or rating scale data is a common task in the social and life sciences. Currently, two major streams of methodology exist for ordinal prediction: parametric models such as the proportional odds model and machine learning (ML) methods such as random forest (RF) adapted to ordinal prediction. While methods from the latter stream have displayed high predictive performance, particularly for data characterized by non-linear effects, most of these methods do not support hierarchical data. As such data structures frequently occur in the social and life sciences, e.g., students nested in classes or individual measurements nested within the same person, accounting for hierarchical data is of importance for prediction in these fields. A recently proposed ML method for ordinal prediction displaying promising results for non-hierarchical data is Frequency-Adjusted Borders Ordinal Forest (fabOF). Building on an iterative expectation-maximization-type estimation procedure, I extend fabOF to hierarchical data settings in this work by proposing Mixed-Effects Frequency-Adjusted Borders Ordinal Forest (mixfabOF). Through simulation and a real data example on math achievement, I will demonstrate that mixfabOF can improve upon fabOF and other RF-based ordinal prediction methods for (non-)hierarchical data in the presence of random effects.
    Date: 2024–10–03
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:ny6we
  14. By: John Armstrong; George Tatlow
    Abstract: We train neural networks to learn optimal replication strategies for an option when two replicating instruments are available, namely the underlying and a hedging option. If the price of the hedging option matches that of the Black-Scholes model then we find the network will successfully learn the Black-Scholes gamma hedging strategy, even if the dynamics of the underlying do not match the Black-Scholes model, so long as we choose a loss function that rewards coping with model uncertainty. Our results suggest that the reason gamma hedging is used in practice is to account for model uncertainty rather than to reduce the impact of transaction costs.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.13567
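    A compact deep-hedging sketch in the spirit of the abstract: a network chooses positions in the underlying and a hedging option along simulated paths and is trained to minimise terminal replication error. The toy dynamics, placeholder hedging-option price, and quadratic loss are assumptions, and the model-uncertainty reward the paper emphasises is omitted here.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N_STEPS, N_PATHS = 30, 256

# policy: maps (time, underlying price) to positions in the underlying and the hedging option
policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def hedge_option_price(s):          # placeholder price of the hedging instrument (assumed)
    return torch.clamp(s - 1.0, min=0.0) + 0.05

for epoch in range(200):
    s = torch.ones(N_PATHS)                       # underlying price paths (toy dynamics)
    pnl = torch.zeros(N_PATHS)
    for t in range(N_STEPS):
        state = torch.stack([torch.full_like(s, t / N_STEPS), s], dim=1)
        pos = policy(state)                       # positions in underlying and hedging option
        s_new = s * torch.exp(0.2 * (1 / N_STEPS) ** 0.5 * torch.randn(N_PATHS))
        pnl = pnl + pos[:, 0] * (s_new - s) \
                  + pos[:, 1] * (hedge_option_price(s_new) - hedge_option_price(s))
        s = s_new
    liability = torch.clamp(s - 1.0, min=0.0)     # option being replicated (ATM call, strike 1)
    loss = ((pnl - liability) ** 2).mean()        # terminal replication error
    opt.zero_grad(); loss.backward(); opt.step()
```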
  15. By: Pascal Kündig; Fabio Sigrist
    Abstract: We introduce a novel machine learning model for credit risk by combining tree-boosting with a latent spatio-temporal Gaussian process model accounting for frailty correlation. This allows for modeling non-linearities and interactions among predictor variables in a flexible data-driven manner and for accounting for spatio-temporal variation that is not explained by observable predictor variables. We also show how estimation and prediction can be done in a computationally efficient manner. In an application to a large U.S. mortgage credit risk data set, we find that both predictive default probabilities for individual loans and predictive loan portfolio loss distributions obtained with our novel approach are more accurate compared to conventional independent linear hazard models and also linear spatio-temporal models. Using interpretability tools for machine learning models, we find that the likely reasons for this outperformance are strong interaction and non-linear effects in the predictor variables and the presence of large spatio-temporal frailty effects.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.02846
  16. By: Fricker, Nicolai Benjamin; Krüger, Nicolai; Schubart, Constantin
    Abstract: In order to learn a task through behavior cloning, a dataset consisting of state-action pairs is needed. However, this kind of data is often not available in sufficient quantity or quality. Consequently, several publications have addressed the issue of extracting actions from a sequence of states to convert them into corresponding state-action pairs (Torabi et al., 2018; Edwards et al., 2019; Baker et al., 2022; Bruce et al., 2024). Using this dataset, an agent can then be trained via behavior cloning. For instance, this approach was applied to games such as Cartpole and Mountain Car (Edwards et al., 2019). Additionally, actions were extracted from videos of Minecraft (Baker et al., 2022) and jump 'n' run games (Edwards et al., 2019; Bruce et al., 2024) to train deep neural network models to play these games. In this work, videos from YouTube as well as synthetic videos of the game Sokoban were analyzed. Sokoban is a single-player, turn-based game where the player has to push boxes onto target squares (Murase et al., 1996). The actions that a user performs in the videos were extracted using a modified training procedure described by Edwards et al. (2019). The resulting state-action pairs were used to train deep neural network models to play Sokoban. These models were further improved with reinforcement learning in combination with a Monte Carlo tree search as a planning step. The resulting agent demonstrated moderate playing strength. In addition to learning how to solve a Sokoban puzzle, the rules of Sokoban were learned from videos. This enabled the creation of a Sokoban simulator, which was used to carry out model-based reinforcement learning. This work serves as a proof of concept, demonstrating that it is possible to extract actions from videos of a strategy game, perform behavior cloning, infer the rules of the game, and perform model-based reinforcement learning - all without direct interaction with the game environment. Code and models are available at https://github.com/loanMaster/sokoban_learning.
    Keywords: Imitation learning, behavior cloning, deep neural network models, reinforcement learning
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:zbw:iubhit:303523
  17. By: Shuaiyu Chen; T. Clifton Green; Huseyin Gulen; Dexin Zhou
    Abstract: We examine how large language models (LLMs) interpret historical stock returns and compare their forecasts with estimates from a crowd-sourced platform for ranking stocks. While stock returns exhibit short-term reversals, LLM forecasts over-extrapolate, placing excessive weight on recent performance similar to humans. LLM forecasts appear optimistic relative to historical and future realized returns. When prompted for 80% confidence interval predictions, LLM responses are better calibrated than survey evidence but are pessimistic about outliers, leading to skewed forecast distributions. The findings suggest LLMs manifest common behavioral biases when forecasting expected returns but are better at gauging risks than humans.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.11540
  18. By: Köbis, Nils; Rahwan, Zoe (Max Planck Institute for Human Development); Bersch, Clara; Ajaj, Tamer; Bonnefon, Jean-François (Toulouse School of Economics); Rahwan, Iyad
    Abstract: While artificial intelligence (AI) enables significant productivity gains from delegating tasks to machines, it can also facilitate the delegation of unethical behaviour. Here, we demonstrate this risk by having human principals instruct machine agents to perform a task with an incentive to cheat. Principals’ requests for cheating behaviour increased when the interface implicitly afforded unethical conduct: Machine agents programmed via supervised learning or goal specification evoked more cheating than those programmed with explicit rules. Cheating propensity was unaffected by whether delegation was mandatory or voluntary. Given the recent rise of large language model-based chatbots, we also explored delegation via natural language. Here, cheating requests did not vary between human and machine agents, but compliance diverged: When principals intended agents to cheat to the fullest extent, the majority of human agents did not comply, despite incentives to do so. In contrast, GPT-4, a state-of-the-art machine agent, nearly fully complied. Our results highlight ethical risks in delegating tasks to intelligent machines, and suggest design principles and policy responses to mitigate such risks.
    Date: 2024–10–04
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:dnjgz
  19. By: Ananya Unnikrishnan
    Abstract: An accurate valuation of American call options is critical in most financial decision-making environments. However, traditional models like the Barone-Adesi-Whaley (B-AW) and Binomial Option Pricing (BOP) methods fall short in handling the complexities of early exercise and market dynamics present in American options. This paper proposes a Modular Neural Network (MNN) model which aims to capture the key aspects of American options pricing. By dividing the prediction process into specialized modules, the MNN effectively models the non-linear interactions that drive American call options pricing. Experimental results indicate that the MNN model outperforms both traditional models and a simpler Feed-forward Neural Network (FNN) across multiple stocks (AAPL, NVDA, QQQ), with significantly lower mean RMSE and nRMSE. These findings highlight the potential of MNNs as a powerful tool to improve the accuracy of predicting option prices.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.19706
  20. By: Masanori Hirano; Kentaro Imajo
    Abstract: This paper proposes a novel method for constructing instruction-tuned large language models (LLMs) for finance without instruction data. Traditionally, developing such domain-specific LLMs has been resource-intensive, requiring a large dataset and significant computational power for continual pretraining and instruction tuning. Our study proposes a simpler approach that combines domain-specific continual pretraining with model merging. Given that general-purpose pretrained LLMs and their instruction-tuned LLMs are often publicly available, they can be leveraged to obtain the necessary instruction task vector. By merging this with a domain-specific pretrained vector, we can effectively create instruction-tuned LLMs for finance without additional instruction data. Our process involves two steps: first, we perform continual pretraining on financial data; second, we merge the instruction-tuned vector with the domain-specific pretrained vector. Our experiments demonstrate the successful construction of instruction-tuned LLMs for finance. One major advantage of our method is that the instruction-tuned and domain-specific pretrained vectors are nearly independent. This independence makes our approach highly effective. The Japanese financial instruction-tuned LLMs we developed in this study are available at https://huggingface.co/pfnet/nekomata-14b-pfn-qfin-inst-merge.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.19854
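    The merging step described above amounts to simple task-vector arithmetic on model weights: take the difference between an instruction-tuned model and its base, and add it to the domain-specific continually pretrained model. The sketch below shows this on PyTorch state dicts; the unit merge coefficient and the toy tensors are assumptions.
```python
import torch

def merge_instruction_vector(base_sd, instruct_sd, domain_sd, alpha=1.0):
    """domain_pretrained + alpha * (instruction_tuned - base).
    All three state dicts must share the same architecture; alpha=1.0 is assumed."""
    merged = {}
    for name, base_w in base_sd.items():
        task_vector = instruct_sd[name] - base_w          # instruction task vector
        merged[name] = domain_sd[name] + alpha * task_vector
    return merged

# toy usage with tiny random "models" sharing one architecture
shape = (4, 4)
base = {"w": torch.randn(shape)}
instruct = {"w": base["w"] + 0.1 * torch.randn(shape)}    # stand-in for instruction tuning
domain = {"w": base["w"] + 0.1 * torch.randn(shape)}      # stand-in for continual pretraining
merged = merge_instruction_vector(base, instruct, domain)
```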
  21. By: Diego Vallarino
    Abstract: This paper examines the impact of fiscal incentives on industrial investment in Uruguay from 1974 to 2010. Using a mixed-method approach that combines econometric models with machine learning techniques, the study investigates both the short-term and long-term effects of fiscal benefits on industrial investment. The results confirm the significant role of fiscal incentives in driving long-term industrial growth, while also highlighting the importance of a stable macroeconomic environment, public investment, and access to credit. Machine learning models provide additional insights into nonlinear interactions between fiscal benefits and other macroeconomic factors, such as exchange rates, emphasizing the need for tailored fiscal policies. The findings have important policy implications, suggesting that fiscal incentives, when combined with broader economic reforms, can effectively promote industrial development in emerging economies.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.00002

This nep-big issue is ©2024 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.