on Big Data |
By: | Rachel Soloveichik |
Abstract: | The U.S. Bureau of Economic Analysis has undertaken a series of studies that present methods for quantifying the value of simple data that can be differentiated from the complex data created by highly skilled workers that was studied in Calderón and Rassier 2022. Preliminary studies in this series focus on tax data, individual credit data, and driving data. Additional examples include medical records, educational transcripts, business financial records, customer data, equipment maintenance histories, social media profiles, tourist maps, and many more. If new case studies under this topic are released, they will be added to the listing below. |
JEL: | D14 E01 G14 |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:bea:papers:0124&r= |
By: | Kea Baret (BETA - Bureau d'Économie Théorique et Appliquée - AgroParisTech - UNISTRA - Université de Strasbourg - Université de Haute-Alsace (UHA) - Université de Haute-Alsace (UHA) Mulhouse - Colmar - UL - Université de Lorraine - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Amélie Barbier-Gauchard (BETA - Bureau d'Économie Théorique et Appliquée - AgroParisTech - UNISTRA - Université de Strasbourg - Université de Haute-Alsace (UHA) - Université de Haute-Alsace (UHA) Mulhouse - Colmar - UL - Université de Lorraine - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Theophilos Papadimitriou (DUTH - Democritus University of Thrace) |
Abstract: | Since the reinforcement of the Stability and Growth Pact (1996), the European Commission has closely monitored public finances in the EU Member States. A country's failure to comply with the 3% limit rule on the public deficit triggers an audit. In this paper, we present a machine-learning-based forecasting model for compliance with the 3% limit rule. To do so, we use data spanning the period from 2006 to 2018 (a turbulent period including the Global Financial Crisis and the Sovereign Debt Crisis) for the 28 EU Member States. A set of eight features is identified as predictors from 141 variables through a feature selection procedure. The forecasting is performed using Support Vector Machines (SVM). The proposed model reached 91.7% forecasting accuracy and outperformed the Logit model that we used as a benchmark. |
Keywords: | Fiscal Rules, Fiscal Compliance, Stability and Growth Pact, Machine learning |
Date: | 2023–10–26 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03121966&r= |
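A minimal sketch of the classification step described in the Baret, Barbier-Gauchard and Papadimitriou abstract above: a feature-selection stage followed by an SVM classifier. The placeholder data, the choice of selector and kernel, and all tuning values are assumptions for illustration, not the authors' pipeline.

    # Illustrative sketch (not the authors' code): forecast 3%-deficit-rule
    # compliance with an SVM after a simple feature-selection step.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data standing in for 141 candidate fiscal/macro indicators.
    X, y = make_classification(n_samples=364, n_features=141, n_informative=8,
                               random_state=0)  # y = 1 if compliant, 0 otherwise
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                        random_state=0)

    model = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(f_classif, k=8)),   # keep eight predictors
        ("svm", SVC(kernel="rbf", C=1.0)),
    ])
    model.fit(X_train, y_train)
    print("out-of-sample accuracy:", model.score(X_test, y_test))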
By: | Daniel de Souza Santos; Tiago Alessandro Espinola Ferreira |
Abstract: | One of the most discussed problems in the financial world is stock option pricing. The Black-Scholes Equation is a parabolic partial differential equation which provides an option pricing model. The present work proposes an approach based on Neural Networks to solve the Black-Scholes Equation. Real-world data from the stock options market were used as the initial-boundary condition for solving the Black-Scholes Equation. In particular, time series of call option prices of the Brazilian companies Petrobras and Vale were employed. The results indicate that the network can learn to solve the Black-Scholes Equation for a specific real-world stock option time series. The experimental results showed that neural network option pricing based on the Black-Scholes Equation solution can produce option price forecasts more accurate than the traditional Black-Scholes analytical solutions. These results make it possible to use this methodology for short-term call option price forecasts in options markets. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.05780&r= |
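The abstract above does not specify the network or loss used by de Souza Santos and Ferreira; the sketch below shows one common way to train a neural network against the Black-Scholes PDE (a physics-informed loss penalizing the PDE residual plus a terminal-payoff term). The parameters, sampling ranges, and the analytic terminal condition are assumptions; the paper instead anchors the solution with market option prices.

    # Hedged sketch: physics-informed training of a network V(S, t) against the
    # Black-Scholes PDE; parameters, sampling and payoff are illustrative only.
    import torch

    r, sigma, K, T = 0.05, 0.2, 100.0, 1.0          # assumed option parameters
    net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        S = (200.0 * torch.rand(512, 1)).requires_grad_(True)
        t = (T * torch.rand(512, 1)).requires_grad_(True)
        V = net(torch.cat([S, t], dim=1))
        V_S, V_t = torch.autograd.grad(V.sum(), (S, t), create_graph=True)
        V_SS = torch.autograd.grad(V_S.sum(), S, create_graph=True)[0]
        # Black-Scholes PDE residual: V_t + 0.5 sigma^2 S^2 V_SS + r S V_S - r V = 0
        residual = V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V
        payoff = torch.clamp(S - K, min=0.0)          # call payoff at maturity
        V_T = net(torch.cat([S, torch.full_like(S, T)], dim=1))
        loss = (residual**2).mean() + ((V_T - payoff)**2).mean()
        opt.zero_grad(); loss.backward(); opt.step()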
By: | Ariel Neufeld; Philipp Schmocker; Sizhou Wu |
Abstract: | In this paper, we present a randomized extension of the deep splitting algorithm introduced in [Beck, Becker, Cheridito, Jentzen, and Neufeld (2021)] using random neural networks suitable to approximately solve both high-dimensional nonlinear parabolic PDEs and PIDEs with jumps having (possibly) infinite activity. We provide a full error analysis of our so-called random deep splitting method. In particular, we prove that our random deep splitting method converges to the (unique viscosity) solution of the nonlinear PDE or PIDE under consideration. Moreover, we empirically analyze our random deep splitting method by considering several numerical examples including both nonlinear PDEs and nonlinear PIDEs relevant in the context of pricing of financial derivatives under default risk. In particular, we empirically demonstrate in all examples that our random deep splitting method can approximately solve nonlinear PDEs and PIDEs in 10'000 dimensions within seconds. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.05192&r= |
By: | G. Ibikunle; B. Moews; K. Rzayev |
Abstract: | We design and train machine learning models to capture the nonlinear interactions between financial market dynamics and high-frequency trading (HFT) activity. In doing so, we introduce new metrics to identify liquidity-demanding and -supplying HFT strategies. Both types of HFT strategies increase activity in response to information events and decrease it when trading speed is restricted, with liquidity-supplying strategies demonstrating greater responsiveness. Liquidity-demanding HFT is positively linked with latency arbitrage opportunities, whereas liquidity-supplying HFT is negatively related, aligning with theoretical expectations. Our metrics have implications for understanding the information production process in financial markets. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.08101&r= |
By: | S. Borağan Aruoba; Thomas Drechsel |
Abstract: | We develop a novel method for the identification of monetary policy shocks. By applying natural language processing techniques to documents that Federal Reserve staff prepare in advance of policy decisions, we capture the Fed's information set. Using machine learning techniques, we then predict changes in the target interest rate conditional on this information set and obtain a measure of monetary policy shocks as the residual. We show that the documents' text contains essential information about the economy which is not captured by numerical forecasts that the staff include in the same documents. The dynamic responses of macro variables to our monetary policy shocks are consistent with the theoretical consensus. Shocks constructed by only controlling for the staff forecasts imply responses of macro variables at odds with theory. We directly link these differences to the information that our procedure extracts from the text over and above information captured by the forecasts. |
JEL: | C10 E31 E32 E52 E58 |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:32417&r= |
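A toy sketch of the two-step idea in the Aruoba and Drechsel abstract above: predict the target-rate change from pre-meeting text and read the unpredicted residual as the monetary policy shock. The vectorizer, learner, documents, and rate changes below are invented stand-ins, not the authors' implementation.

    # Illustrative only: predict rate changes from pre-meeting documents and take
    # the unpredicted part as the monetary policy shock series.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.pipeline import make_pipeline

    docs = ["growth has slowed and inflation remains subdued",
            "labor markets are tight and price pressures are building",
            "financial conditions have eased while output is near potential"]
    rate_change = np.array([-0.25, 0.25, 0.0])   # hypothetical target changes (pp)

    model = make_pipeline(TfidfVectorizer(min_df=1),
                          GradientBoostingRegressor(random_state=0))
    model.fit(docs, rate_change)

    predicted = model.predict(docs)
    shocks = rate_change - predicted             # residual = identified shock
    print(shocks)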
By: | Simone Brusatin; Tommaso Padoan; Andrea Coletta; Domenico Delli Gatti; Aldo Glielmo |
Abstract: | Agent-based models (ABMs) are simulation models used in economics to overcome some of the limitations of traditional frameworks based on general equilibrium assumptions. However, agents within an ABM follow predetermined, not fully rational, behavioural rules which can be cumbersome to design and difficult to justify. Here we leverage multi-agent reinforcement learning (RL) to expand the capabilities of ABMs with the introduction of fully rational agents that learn their policy by interacting with the environment and maximising a reward function. Specifically, we propose a 'Rational macro ABM' (R-MABM) framework by extending a paradigmatic macro ABM from the economic literature. We show that gradually substituting ABM firms in the model with RL agents, trained to maximise profits, allows for a thorough study of the impact of rationality on the economy. We find that RL agents spontaneously learn three distinct strategies for maximising profits, with the optimal strategy depending on the level of market competition and rationality. We also find that RL agents with independent policies, and without the ability to communicate with each other, spontaneously learn to segregate into different strategic groups, thus increasing market power and overall profits. Finally, we find that a higher degree of rationality in the economy always improves the macroeconomic environment as measured by total output; depending on the specific rational policy, this can come at the cost of higher instability. Our R-MABM framework is general, allows for stable multi-agent learning, and represents a principled and robust direction for extending existing economic simulators. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.02161&r= |
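A heavily simplified sketch in the spirit of the R-MABM abstract above: a single firm replaces a fixed behavioural rule with a reinforcement-learning rule that maximizes profit. The one-firm demand curve, the tabular Q-learning scheme, and all parameters are invented for illustration and are unrelated to the authors' macro ABM.

    # Toy illustration only: a single firm learns a price by tabular Q-learning
    # to maximize profit against a fixed linear demand curve.
    import numpy as np

    rng = np.random.default_rng(0)
    prices = np.linspace(1.0, 3.0, 11)      # candidate prices (actions)
    cost = 1.0
    q_values = np.zeros(len(prices))        # stateless Q-table
    eps, alpha = 0.1, 0.05

    def profit(p):
        demand = max(0.0, 10.0 - 3.0 * p)   # assumed demand curve
        return (p - cost) * demand

    for episode in range(5000):
        if rng.random() < eps:              # epsilon-greedy exploration
            a = rng.integers(len(prices))
        else:
            a = int(np.argmax(q_values))
        reward = profit(prices[a])
        q_values[a] += alpha * (reward - q_values[a])

    print("learned price:", prices[int(np.argmax(q_values))])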
By: | Rehse, Dominik; Valet, Sebastian; Walter, Johannes |
Abstract: | With the final approval of the EU's Artificial Intelligence Act (AI Act), it is now clear that general-purpose AI (GPAI) models with systemic risk will need to undergo adversarial testing. This provision is a response to the emergence of "generative AI" models, which are currently the most notable form of GPAI models generating rich-form content such as text, images, and video. Adversarial testing involves repeatedly interacting with a model to try to lead it to exhibit unwanted behaviour. However, the specific implementation of such testing for GPAI models with systemic risk has not been clearly spelled out in the AI Act. Instead, the legislation only refers to codes of practice and harmonised standards which are soon to be developed. In this policy brief, which is based on research funded by the Baden-Württemberg Foundation, we propose that these codes and standards should reflect that an effective adversarial testing regime requires testing by independent third parties, a well-defined goal, clear roles with proper incentive and coordination schemes for all parties involved, and standardised reporting of the results. The market design approach is helpful for developing, testing and improving the underlying rules and the institutional setup of such adversarial testing regimes. We outline the design space for an extensive form of adversarial testing, called red teaming, of generative AI models. This is intended to stimulate the discussion in preparation for the codes of practice, harmonised standards and potential additional provisions by governing bodies. |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:zbw:zewpbs:294875&r= |
By: | Xiaowei Chen; Hong Li; Yufan Lu; Rui Zhou |
Abstract: | This paper proposes a probabilistic machine learning method to price catastrophe (CAT) bonds in the primary market. The proposed method combines machine-learning-based predictive models with Conformal Prediction, an innovative algorithm that generates distribution-free probabilistic forecasts for CAT bond prices. Using primary market CAT bond transaction records between January 1999 and March 2021, the proposed method is found to be more robust and yields more accurate predictions of the bond spreads than traditional regression-based methods. Furthermore, the proposed method generates more informative prediction intervals than linear regression and identifies important nonlinear relationships between various risk factors and bond spreads, suggesting that linear regressions could misestimate the bond spreads. Overall, this paper demonstrates the potential of machine learning methods in improving the pricing of CAT bonds. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.00697&r= |
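A compact sketch of split conformal prediction wrapped around a generic spread model, along the lines described in the Chen, Li, Lu and Zhou abstract above; the regressor, the synthetic features, and the 90% coverage level are placeholders rather than the paper's specification.

    # Illustrative split conformal prediction for bond-spread forecasts.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=600, n_features=10, noise=10.0, random_state=0)
    X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
    X_cal, X_new, y_cal, y_new = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    model = RandomForestRegressor(random_state=0).fit(X_fit, y_fit)

    # Calibration residuals give a distribution-free interval half-width.
    scores = np.sort(np.abs(y_cal - model.predict(X_cal)))
    n = len(scores)
    q = scores[int(np.ceil(0.9 * (n + 1))) - 1]   # finite-sample 90% quantile

    pred = model.predict(X_new)
    lower, upper = pred - q, pred + q
    print("empirical coverage:", np.mean((y_new >= lower) & (y_new <= upper)))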
By: | Tomaz Cajner; Leland D. Crane; Christopher J. Kurz; Norman J. Morin; Paul E. Soto; Betsy Vrankovich |
Abstract: | This paper examines the link between industrial production and the sentiment expressed in natural language survey responses from U.S. manufacturing firms. We compare several natural language processing (NLP) techniques for classifying sentiment, ranging from dictionary-based methods to modern deep learning methods. Using a manually labeled sample as ground truth, we find that deep learning models partially trained on a human-labeled sample of our data outperform other methods for classifying the sentiment of survey responses. Further, we capitalize on the panel nature of the data to train models which predict firm-level production using lagged firm-level text. This allows us to leverage a large sample of "naturally occurring" labels with no manual input. We then assess the extent to which each sentiment measure, aggregated to monthly time series, can serve as a useful statistical indicator and forecast industrial production. Our results suggest that the text responses provide information beyond the available numerical data from the same survey and improve out-of-sample forecasting; deep learning methods and the use of naturally occurring labels seem especially useful for forecasting. We also explore what drives the predictions made by the deep learning models, and find that a relatively small number of words associated with very positive/negative sentiment account for much of the variation in the aggregate sentiment index. |
Keywords: | Industrial Production; Natural Language Processing; Machine Learning; Forecasting |
JEL: | C10 E17 O14 |
Date: | 2024–05–03 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2024-26&r= |
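A small sketch of the sentiment-classification step discussed in the Cajner, Crane, Kurz, Morin, Soto and Vrankovich abstract above, using a generic bag-of-words baseline rather than the paper's dictionary or deep learning models; the survey responses and labels are invented.

    # Illustration only: classify survey-response sentiment and average it into
    # a monthly indicator; a stand-in for the dictionary/deep-learning comparison.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    responses = pd.DataFrame({
        "month": ["2020-01", "2020-01", "2020-02", "2020-02"],
        "text": ["orders are up and shipments strong",
                 "demand collapsed, we cut shifts",
                 "backlog growing, hiring again",
                 "supply problems are hurting production"],
        "label": [1, 0, 1, 0],          # 1 = positive, 0 = negative (hand-labeled)
    })

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(responses["text"], responses["label"])

    responses["score"] = clf.predict_proba(responses["text"])[:, 1]
    monthly_sentiment = responses.groupby("month")["score"].mean()
    print(monthly_sentiment)     # candidate predictor for industrial production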
By: | Tänzer, Alina |
Abstract: | Central bank intervention in the form of quantitative easing (QE) during times of low interest rates is a controversial topic. This paper introduces a novel approach to study the effectiveness of such unconventional measures. Using U.S. data on six key financial and macroeconomic variables between 1990 and 2015, the economy is estimated by artificial neural networks. Historical counterfactual analyses show that real effects are less pronounced than yield effects. Disentangling the effects of the individual asset purchase programs, impulse response functions provide evidence for QE being less effective the more the crisis is overcome. The peak effects of all QE interventions during the Financial Crisis amount to only 1.3 pp for GDP growth and 0.6 pp for inflation, respectively. Hence, the timing as well as the volume of the interventions should be deliberated. |
Keywords: | Artificial Intelligence, Machine Learning, Neural Networks, Forecasting and Simulation: Models and Applications, Financial Markets and the Macroeconomy, Monetary Policy, Central Banks and Their Policies |
JEL: | C45 E47 E44 E52 E58 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:zbw:imfswp:295732&r= |
By: | W. Benedikt Schmal |
Abstract: | Natural language processing tools have become frequently used in social sciences such as economics, political science, and sociology. Many publications apply topic modeling to elicit latent topics in text corpora and their development over time. Here, most publications rely on visual inspections and draw inference on changes, structural breaks, and developments over time. We suggest using univariate time series econometrics to introduce more quantitative rigor that can strengthen the analyses. In particular, we discuss the econometric topics of non-stationarity as well as structural breaks. This paper serves as a comprehensive practitioners' guide that provides researchers in the social and life sciences as well as the humanities with concise advice on how to implement econometric time series methods to thoroughly investigate topic prevalences over time. We provide coding advice for the statistical software R throughout the paper. The application of the discussed tools to a sample dataset completes the analysis. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.18499&r= |
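Schmal provides coding advice for R; as a rough Python analogue of the stationarity and structural-break checks recommended for a topic-prevalence series, the sketch below runs an augmented Dickey-Fuller test and a split-sample (Chow-style) F-test on a synthetic series with an invented break date.

    # Sketch only: stationarity and break checks for a topic-prevalence series.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller
    from scipy import stats

    rng = np.random.default_rng(1)
    t = np.arange(120)                                   # e.g. monthly topic shares
    topic_share = 0.2 + 0.001 * t + (t >= 60) * 0.05 + rng.normal(0, 0.01, 120)

    # 1) Augmented Dickey-Fuller test for non-stationarity.
    adf_stat, adf_pval = adfuller(topic_share)[:2]
    print("ADF p-value:", adf_pval)

    # 2) Chow-style F-test for a break at a candidate date (here t = 60).
    X = sm.add_constant(t)
    rss_full = sm.OLS(topic_share, X).fit().ssr
    rss_1 = sm.OLS(topic_share[:60], X[:60]).fit().ssr
    rss_2 = sm.OLS(topic_share[60:], X[60:]).fit().ssr
    k = X.shape[1]
    f_stat = ((rss_full - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (len(t) - 2 * k))
    print("Chow F:", f_stat, "p:", 1 - stats.f.cdf(f_stat, k, len(t) - 2 * k))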
By: | Tian Tian; Liu Ze hui; Huang Zichen; Yubing Tang |
Abstract: | This paper explores the application of AI and NLP techniques for user feedback analysis in the context of heavy machine crane products. By leveraging AI and NLP, organizations can gain insights into customer perceptions, improve product development, enhance satisfaction and loyalty, inform decision-making, and gain a competitive advantage. The paper highlights the impact of user feedback analysis on organizational performance and emphasizes the reasons for using AI and NLP, including scalability, objectivity, improved accuracy, increased insights, and time savings. The methodology involves data collection, cleaning, text and rating analysis, interpretation, and feedback implementation. Results include sentiment analysis, word cloud visualizations, and radar charts comparing product attributes. These findings provide valuable information for understanding customer sentiment, identifying improvement areas, and making data-driven decisions to enhance the customer experience. In conclusion, AI and NLP techniques in user feedback analysis offer organizations a promising and powerful tool to understand customers, improve product development, increase satisfaction, and drive business success. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.04692&r= |
By: | Maria S. Mavillonio |
Abstract: | In this paper, we leverage recent advancements in large language models to extract information from business plans on various equity crowdfunding platforms and predict the success of firm campaigns. Our approach spans a broad and comprehensive spectrum of model complexities, ranging from standard textual analysis to more intricate textual representations (e.g., Transformers), thereby offering a clear view of the challenges in understanding the underlying data. To this end, we build a novel dataset comprising more than 640 equity crowdfunding campaigns from major Italian platforms. Through rigorous analysis, our results indicate a compelling correlation between the use of intricate textual representations and the enhanced predictive capacity for identifying successful campaigns. |
Keywords: | Crowdfunding, Text Representation, Natural Language Processing, Transformers |
JEL: | C45 C53 G23 L26 |
Date: | 2024–05–01 |
URL: | http://d.repec.org/n?u=RePEc:pie:dsedps:2024/308&r= |
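A hedged sketch of the general pipeline the Mavillonio abstract describes (text representations of business plans feeding a success classifier), not the author's models or data; the pretrained encoder name and the example campaigns are assumptions.

    # Illustrative pipeline: embed campaign texts with a pretrained transformer
    # and fit a simple classifier for campaign success (1) vs failure (0).
    from sentence_transformers import SentenceTransformer   # assumed available
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    texts = ["Fintech platform targeting SME invoice financing in Italy",
             "Craft brewery expanding distribution to northern regions",
             "AI-driven logistics startup seeking growth capital",
             "Boutique hotel renovation on the Ligurian coast"]
    success = [1, 0, 1, 0]                                   # invented labels

    encoder = SentenceTransformer("all-MiniLM-L6-v2")        # assumed model choice
    X = encoder.encode(texts)                                # dense embeddings

    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, success, cv=2))            # tiny toy evaluation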
By: | Chuanhao Li (Yale University); Runhan Yang (The Chinese University of Hong Kong); Tiankai Li (University of Science and Technology of China); Milad Bafarassat (Sabanci University); Kourosh Sharifi (Sabanci University); Dirk Bergemann (Yale University); Zhuoran Yang (Yale University) |
Abstract: | Large Language Models (LLMs) like GPT-4 have revolutionized natural language processing, showing remarkable linguistic proficiency and reasoning capabilities. However, their application in strategic multi-agent decision-making environments is hampered by significant limitations including poor mathematical reasoning, difficulty in following instructions, and a tendency to generate incorrect information. These deficiencies hinder their performance in strategic and interactive tasks that demand adherence to nuanced game rules, long-term planning, exploration in unknown environments, and anticipation of opponents' moves. To overcome these obstacles, this paper presents a novel LLM agent framework equipped with memory and specialized tools to enhance their strategic decision-making capabilities. We deploy the tools in a number of economically important environments, in particular bilateral bargaining and multi-agent and dynamic mechanism design. We employ quantitative metrics to assess the framework's performance in various strategic decision-making problems. Our findings establish that our enhanced framework significantly improves the strategic decision-making capability of LLMs. While we highlight the inherent limitations of current LLM models, we demonstrate the improvements through targeted enhancements, suggesting a promising direction for future developments in LLM applications for interactive environments. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2393&r= |
By: | Xue Wen Tan; Stanley Kok |
Abstract: | Every publicly traded company in the US is required to file an annual 10-K financial report, which contains a wealth of information about the company. In this paper, we propose an explainable deep-learning model, called FinBERT-XRC, that takes a 10-K report as input, and automatically assesses the post-event return volatility risk of its associated company. In contrast to previous systems, our proposed model simultaneously offers explanations of its classification decision at three different levels: the word, sentence, and corpus levels. By doing so, our model provides a comprehensive interpretation of its prediction to end users. This is particularly important in financial domains, where the transparency and accountability of algorithmic predictions play a vital role in their application to decision-making processes. Aside from its novel interpretability, our model surpasses the state of the art in predictive accuracy in experiments on a large real-world dataset of 10-K reports spanning six years. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.01881&r= |
By: | Christian Peukert; Florian Abeillon; Jérémie Haese; Franziska Kaiser; Alexander Staub |
Abstract: | Human-created works represent critical data inputs to artificial intelligence (AI). Strategic behaviour can play a major role for AI training datasets, be it in limiting access to existing works or in deciding which types of new works to create or whether to create new works at all. We examine creators’ behavioral change when their works become training data for AI. Specifically, we focus on contributors on Unsplash, a popular stock image platform with about 6 million high-quality photos and illustrations. In the summer of 2020, Unsplash launched an AI research program by releasing a dataset of 25,000 images for commercial use. We study contributors’ reactions, comparing contributors whose works were included in this dataset to contributors whose works were not included. Our results suggest that treated contributors left the platform at a higher-than-usual rate and substantially slowed down the rate of new uploads. Professional and more successful photographers react more strongly than amateurs and less successful photographers. We also show that affected users changed the variety and novelty of contributions to the platform, with long-run implications for the stock of works potentially available for AI training. Taken together, our findings highlight the trade-off between interests of rightsholders and promoting innovation at the technological frontier. We discuss implications for copyright and AI policy. |
Keywords: | generative artificial intelligence, training data, licensing, copyright, natural experiment |
JEL: | K11 L82 L86 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_11099&r= |
By: | Sendhil Mullainathan; Ashesh Rambachan |
Abstract: | Machine learning algorithms can find predictive signals that researchers fail to notice; yet they are notoriously hard to interpret. How can we extract theoretical insights from these black boxes? History provides a clue. Facing a similar problem – how to extract theoretical insights from their intuitions – researchers often turned to “anomalies”: constructed examples that highlight flaws in an existing theory and spur the development of new ones. Canonical examples include the Allais paradox and the Kahneman-Tversky choice experiments for expected utility theory. We suggest anomalies can extract theoretical insights from black box predictive algorithms. We develop procedures to automatically generate anomalies for an existing theory when given a predictive algorithm. We cast anomaly generation as an adversarial game between a theory and a falsifier, the solutions to which are anomalies: instances where the black box algorithm predicts - were we to collect data - we would likely observe violations of the theory. As an illustration, we generate anomalies for expected utility theory using a large, publicly available dataset on real lottery choices. Based on an estimated neural network that predicts lottery choices, our procedures recover known anomalies and discover new ones for expected utility theory. In incentivized experiments, subjects violate expected utility theory on these algorithmically generated anomalies; moreover, the violation rates are similar to observed rates for the Allais paradox and common ratio effect. |
JEL: | B40 C1 |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:32422&r= |
By: | Attila Sarkany (Institute of Economic Studies, Charles University, Prague, Czech Republic & The Czech Academy of Sciences, IITA, Prague, Czech Republic); Lukas Janasek (Institute of Economic Studies, Charles University, Prague, Czech Republic & The Czech Academy of Sciences, IITA, Prague, Czech Republic); Jozef Barunik (Institute of Economic Studies, Charles University, Prague, Czech Republic & The Czech Academy of Sciences, IITA, Prague, Czech Republic) |
Abstract: | We develop a novel approach to understanding the dynamic diversification of decision makers with quantile preferences. Because analytical solutions to such complex problems are unavailable, we suggest approximating the behavior of agents with a Quantile Deep Reinforcement Learning (Q-DRL) algorithm. The research provides a new level of understanding of the behavior of economic agents with respect to preferences, captured by quantiles, without assuming a specific utility function or distribution of returns. Furthermore, we challenge traditional diversification methods, which have proved insufficient due to heightened correlations and similar risk features across asset classes; instead, the research delves into risk-factor investing as a solution and into portfolio optimization based on these factors. |
Keywords: | Portfolio Management, Quantile Deep Reinforcement Learning, Factor investing, Deep-Learning, Advantage-Actor-Critic |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:fau:wpaper:wp2024_21&r= |
By: | Nicholas Tenev |
Abstract: | Prediction models can improve efficiency by automating decisions such as the approval of loan applications. However, they may inherit bias against protected groups from the data they are trained on. This paper adds counterfactual (simulated) ethnic bias to real data on mortgage application decisions, and shows that this bias is replicated by a machine learning model (XGBoost) even when ethnicity is not used as a predictive variable. Next, several other de-biasing methods are compared: averaging over prohibited variables, taking the most favorable prediction over prohibited variables (a novel method), and jointly minimizing errors as well as the association between predictions and prohibited variables. De-biasing can recover some of the original decisions, but the results are sensitive to whether the bias is effected through a proxy. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.00910&r= |
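A small sketch of one de-biasing idea mentioned in the Tenev abstract above: averaging a model's predictions over the values of the prohibited variable. The data are synthetic and a scikit-learn gradient boosting classifier stands in for XGBoost.

    # Illustration: average predicted approval probability over all values of a
    # protected attribute so that it cannot drive individual-level differences.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
    group = (np.random.default_rng(0).random(1000) < 0.3).astype(int)  # protected
    X_full = np.column_stack([X, group])

    model = GradientBoostingClassifier(random_state=0).fit(X_full, y)

    def debiased_score(x_row):
        """Average the prediction over both possible values of the attribute."""
        variants = np.array([np.append(x_row, g) for g in (0, 1)])
        return model.predict_proba(variants)[:, 1].mean()

    print(debiased_score(X[0]))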
By: | Sugat Chaturvedi (Ahmedabad University); Kanika Mahajan (Ashoka University); Zahra Siddique (University of Bristol) |
Abstract: | We study the demand for skills by using text analysis methods on job descriptions in a large volume of ads posted on an online Indian job portal. We make use of domain-specific unlabeled data to obtain word vector representations (i.e., word embeddings) and discuss how these can be leveraged for labor market research. We start by carrying out a data-driven categorization of required skill words and construct gender associations of different skill categories using word embeddings. Next, we examine how different required skill categories correlate with log posted wages as well as explore how skills demand varies with firm size. We find that female skills are associated with lower posted wages, potentially contributing to observed gender wage gaps. We also find that large firms require a more extensive range of skills, implying that complementarity between female and male skills is greater among these firms. |
Keywords: | Gender; Machine learning; online job ads; Skills demand; Text analysis |
Date: | 2023–11–10 |
URL: | http://d.repec.org/n?u=RePEc:ash:wpaper:107&r= |
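A bare-bones sketch of training word embeddings on job-ad text and scoring a skill word's gender association by relative cosine similarity, in the spirit of the Chaturvedi, Mahajan and Siddique abstract above; the tiny corpus, the anchor words, and the hyperparameters are placeholders.

    # Illustration: domain-specific word2vec embeddings and a simple gender
    # association score for a skill word (similarity to "female" minus "male").
    from gensim.models import Word2Vec

    corpus = [["female", "candidate", "with", "good", "communication", "skills"],
              ["male", "driver", "needed", "heavy", "vehicle", "license"],
              ["receptionist", "female", "pleasant", "communication"],
              ["mechanic", "male", "experience", "required"]]

    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=0, epochs=200)

    def gender_association(skill, fem="female", masc="male"):
        return model.wv.similarity(skill, fem) - model.wv.similarity(skill, masc)

    print(gender_association("communication"))   # > 0 suggests a "female" skill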
By: | Reilly Pickard; F. Wredenhagen; Y. Lawryshyn |
Abstract: | This paper contributes to the existing literature on hedging American options with Deep Reinforcement Learning (DRL). The study first investigates hyperparameter impact on hedging performance, considering learning rates, training episodes, neural network architectures, training steps, and transaction cost penalty functions. Results highlight the importance of avoiding certain combinations, such as high learning rates with a high number of training episodes or low learning rates with few training episodes, and emphasize the significance of utilizing moderate values for optimal outcomes. Additionally, the paper warns against excessive training steps to prevent instability and demonstrates the superiority of a quadratic transaction cost penalty function over a linear version. This study then expands upon the work of Pickard et al. (2024), who utilize a Chebyshev interpolation option pricing method to train DRL agents with market-calibrated stochastic volatility models. While the results of Pickard et al. (2024) showed that these DRL agents achieve satisfactory performance on empirical asset paths, this study introduces a novel approach in which new agents are trained at weekly intervals on newly calibrated stochastic volatility models. Results show DRL agents re-trained using weekly market data surpass the performance of those trained solely on the sale date. Furthermore, the paper demonstrates that both single-train and weekly-train DRL agents outperform the Black-Scholes Delta method at transaction costs of 1% and 3%. This practical relevance suggests that practitioners can leverage readily available market data to train DRL agents for effective hedging of options in their portfolios. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.08602&r= |
By: | Tian Tian; Jiahao Deng |
Abstract: | This pioneering research introduces a novel approach for decision-makers in the heavy machinery industry, specifically focusing on production management. The study integrates machine learning techniques like Ridge Regression, Markov chain analysis, and radar charts to optimize North American Crawler Cranes market production processes. Ridge Regression enables growth pattern identification and performance assessment, facilitating comparisons and addressing industry challenges. Markov chain analysis evaluates risk factors, aiding in informed decision-making and risk management. Radar charts simulate benchmark product designs, enabling data-driven decisions for production optimization. This interdisciplinary approach equips decision-makers with transformative insights, enhancing competitiveness in the heavy machinery industry and beyond. By leveraging these techniques, companies can revolutionize their production management strategies, driving success in diverse markets. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.01913&r= |
By: | Xiaoxuan Zhang (University of Waikato); John Gibson (University of Waikato) |
Abstract: | China’s high-speed rail (HSR) has quickly expanded to over 40,000 km of lines operating and another 10,000 km under construction. This is over 10 times longer than the networks in long-established HSR countries like France, Germany or Japan. While fewer than 100 county-level units had stations on the HSR network in the first years of operation, the eight years from 2012-19 saw almost 400 more county-level units connect to the HSR network. Effects on local economic activity from this substantial increase in connections to the HSR network remain contested. Some prior studies find either insignificant effects on local economic growth or even negative effects in peripheral regions. In light of this debate, we use spatial econometric models for a panel of almost 2,500 county-level units to study effects of connecting to the HSR network. We especially concentrate on the 2012-19 period that has high quality night-time lights data to provide an alternative to GDP as an indicator of growth in local economic activity. Our spatial econometric models allow for spatial lags of the outcomes, of the covariates, and of the errors. We also address potential endogeneity of the HSR networks and connections, using an instrumental variables strategy. Across a range of specifications, we generally find that growth in local economic activity is lower following connection to the HSR network, with this effect especially apparent when using high quality night-time lights data for the 2012-19 period. Hence, expansion of the HSR network may not boost China’s economic growth. |
Keywords: | High-speed rail; infrastructure; luminosity; spatial spillovers; China |
JEL: | R12 |
Date: | 2024–06–05 |
URL: | https://d.repec.org/n?u=RePEc:wai:econwp:24/03&r= |
By: | Ajit Desai; Anneke Kosse; Jacob Sharples |
Abstract: | We propose a flexible machine learning (ML) framework for real-time transaction monitoring in high-value payment systems (HVPS), which are a central piece of a country’s financial infrastructure. This framework can be used by system operators and overseers to detect anomalous transactions, which—if caused by a cyber attack or an operational outage and left undetected—could have serious implications for the HVPS, its participants and the financial system more broadly. Given the substantial volume of payments settled each day and the scarcity of actual anomalous transactions in HVPS, detecting anomalies resembles an attempt to find a needle in a haystack. Therefore, our framework uses a layered approach. In the first layer, a supervised ML algorithm is used to identify and separate “typical” payments from “unusual” payments. In the second layer, only the unusual payments are run through an unsupervised ML algorithm for anomaly detection. We test this framework using artificially manipulated transactions and payments data from the Canadian HVPS. The ML algorithm employed in the first layer achieves a detection rate of 93%, marking a significant improvement over commonly used econometric models. Moreover, the ML algorithm used in the second layer marks the artificially manipulated transactions as nearly twice as suspicious as the original transactions, proving its effectiveness. |
Keywords: | Digital currencies and fintech; Financial institutions; Financial services; Financial system regulation and policies; Payment clearing and settlement systems |
JEL: | C45 C55 D83 E42 |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:24-15&r= |
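A condensed sketch of the layered structure described in the Desai, Kosse and Sharples abstract above: a supervised screen in the first layer, followed by unsupervised anomaly scoring of only the flagged payments in the second. The synthetic data and model choices are illustrative, not the authors' framework.

    # Illustration of a layered approach: (1) supervised model flags unusual
    # payments, (2) an unsupervised detector scores only the flagged ones.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, IsolationForest

    X, y = make_classification(n_samples=5000, n_features=8, weights=[0.95],
                               random_state=0)      # y = 1 marks 'unusual' payments
    clf = RandomForestClassifier(random_state=0).fit(X, y)     # layer 1

    unusual = X[clf.predict(X) == 1]                            # screened subset
    detector = IsolationForest(random_state=0).fit(unusual)    # layer 2
    anomaly_score = -detector.score_samples(unusual)           # higher = more anomalous
    print("most suspicious payment index:", int(np.argmax(anomaly_score)))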
By: | Ali Mohammadjafari |
Abstract: | Prediction of stock prices has been a crucial and challenging task, especially in the case of highly volatile digital currencies such as Bitcoin. This research examines the potential of using neural network models, namely LSTMs and GRUs, to forecast Bitcoin's price movements. We employ five-fold cross-validation to enhance generalization and utilize L2 regularization to reduce overfitting and noise. Our study demonstrates that the GRU model offers better accuracy than the LSTM model for predicting Bitcoin's price. Specifically, the GRU model has an MSE of 4.67, while the LSTM model has an MSE of 6.25 when compared to the actual prices in the test set data. This finding indicates that GRU models are better equipped to process sequential data with long-term dependencies, a characteristic of financial time series data such as Bitcoin prices. In summary, our results provide valuable insights into the potential of neural network models for accurate Bitcoin price prediction and emphasize the importance of employing appropriate regularization techniques to enhance model performance. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.08089&r= |
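A minimal sketch of a GRU forecaster with L2 regularization and five-fold evaluation, loosely following the Mohammadjafari abstract above; the synthetic price series, window length, network size, and training settings are placeholders and do not reproduce the paper's preprocessing.

    # Illustrative GRU price model with L2 regularization and 5-fold evaluation.
    import numpy as np
    from sklearn.model_selection import KFold
    from tensorflow import keras

    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(0, 1, 600)) + 100.0   # synthetic "price" series
    window = 20
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = prices[window:]
    X = X[..., None]                                    # (samples, timesteps, 1)

    def build_model():
        model = keras.Sequential([
            keras.Input(shape=(window, 1)),
            keras.layers.GRU(32, kernel_regularizer=keras.regularizers.l2(1e-4)),
            keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    mse_scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=False).split(X):
        m = build_model()
        m.fit(X[train_idx], y[train_idx], epochs=5, verbose=0)
        mse_scores.append(m.evaluate(X[test_idx], y[test_idx], verbose=0))
    print("mean fold MSE:", float(np.mean(mse_scores)))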
By: | Sylvain BARTHÉLÉMY (Gwenlake, Rennes, France); Virginie GAUTIER (TAC Economics and Univ Rennes, CNRS, CREM – UMR6211, F-35000 Rennes France); Fabien RONDEAU (Univ Rennes, CNRS, CREM – UMR6211, F-35000 Rennes France) |
Abstract: | We study the class of congestion games with player-specific payoff functions of Milchtaich (1996). Focusing on a case where the number of resources is equal to two, we give a short and simple method for identifying the exact number of Nash equilibria in pure strategies. We propose an algorithmic method, first to find one or more Nash equilibria; second, to compare the optimal Nash equilibrium, in which the social cost is minimized, with the worst Nash equilibrium, in which the converse is true; third, to identify the time associated with the computations when the number of players increases. |
Keywords: | currency crises, early warning system, neural network, convolutional neural network, SHAP values. |
JEL: | F14 F31 F47 |
Date: | 2024–03 |
URL: | https://d.repec.org/n?u=RePEc:tut:cremwp:2024-01&r= |
By: | Zhiyu Cao; Zachary Feinstein |
Abstract: | This study explores the innovative use of Large Language Models (LLMs) as analytical tools for interpreting complex financial regulations. The primary objective is to design effective prompts that guide LLMs in distilling verbose and intricate regulatory texts, such as the Basel III capital requirement regulations, into a concise mathematical framework that can be subsequently translated into actionable code. This novel approach aims to streamline the implementation of regulatory mandates within the financial reporting and risk management systems of global banking institutions. A case study was conducted to assess the performance of various LLMs, demonstrating that GPT-4 outperforms other models in processing and collecting necessary information, as well as executing mathematical calculations. The case study utilized numerical simulations with asset holdings -- including fixed income, equities, currency pairs, and commodities -- to demonstrate how LLMs can effectively implement the Basel III capital adequacy requirements. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.06808&r= |
By: | Ashish Anil Pawar; Vishnureddy Prashant Muskawar; Ritesh Tiku |
Abstract: | Algorithmic trading or Financial robots have been conquering the stock markets with their ability to fathom complex statistical trading strategies. But with the recent development of deep learning technologies, these strategies are becoming impotent. The DQN and A2C models have previously outperformed eminent humans in game-playing and robotics. In our work, we propose a reinforced portfolio manager offering assistance in the allocation of weights to assets. The environment proffers the manager the freedom to go long and even short on the assets. The weight allocation advisements are restricted to the choice of portfolio assets and tested empirically to knock benchmark indices. The manager performs financial transactions in a postulated liquid market without any transaction charges. This work provides the conclusion that the proposed portfolio manager with actions centered on weight allocations can surpass the risk-adjusted returns of conventional portfolio managers. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.01604&r= |
By: | Ajit Desai; Anneke Kosse; Jacob Sharples |
Abstract: | We propose a flexible machine learning (ML) framework for real-time transaction monitoring in high-value payment systems (HVPS), which are a central piece of a country's financial infrastructure. This framework can be used by system operators and overseers to detect anomalous transactions, which - if caused by a cyber attack or an operational outage and left undetected - could have serious implications for the HVPS, its participants and the financial system more broadly. Given the substantial volume of payments settled each day and the scarcity of actual anomalous transactions in HVPS, detecting anomalies resembles an attempt to find a needle in a haystack. Therefore, our framework uses a layered approach. In the first layer, a supervised ML algorithm is used to identify and separate 'typical' payments from 'unusual' payments. In the second layer, only the 'unusual' payments are run through an unsupervised ML algorithm for anomaly detection. We test this framework using artificially manipulated transactions and payments data from the Canadian HVPS. The ML algorithm employed in the first layer achieves a detection rate of 93%, marking a significant improvement over commonly-used econometric models. Moreover, the ML algorithm used in the second layer marks the artificially manipulated transactions as nearly twice as suspicious as the original transactions, proving its effectiveness. |
Keywords: | payment systems, transaction monitoring, anomaly detection, machine learning |
JEL: | C45 C55 D83 E42 |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:bis:biswps:1188&r= |
By: | Vansh Murad Kalia |
Abstract: | Packing peanuts, as defined by Wikipedia, is a common loose-fill packaging and cushioning material that helps prevent damage to fragile items. In this paper, I propose that synthetic data, akin to packing peanuts, can serve as a valuable asset for economic prediction models, enhancing their performance and robustness when integrated with real data. This hybrid approach proves particularly beneficial in scenarios where data is either missing or limited in availability. Through the utilization of Affinity credit card spending and Womply small business datasets, this study demonstrates the substantial performance improvements achieved by employing a hybrid data approach, surpassing the capabilities of traditional economic modeling techniques. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.07431&r= |
By: | Ge, S.; Li, S.; Linton, O. B.; Liu, W.; Su, W. |
Abstract: | In this paper, we propose two novel frameworks to incorporate auxiliary information about connectivity among entities (i.e., network information) into the estimation of large covariance matrices. The current literature either completely ignores this kind of network information (e.g., thresholding and shrinkage) or utilizes some simple network structure under very restrictive settings (e.g., banding). In the era of big data, we can easily get access to auxiliary information about the complex connectivity structure among entities. Depending on the features of the auxiliary network information at hand and the structure of the covariance matrix, we provide two different frameworks correspondingly — the Network Guided Thresholding and the Network Guided Banding. We show that both Network Guided estimators have optimal convergence rates over a larger class of sparse covariance matrices. Simulation studies demonstrate that they generally outperform other pure statistical methods, especially when the true covariance matrix is sparse, and the auxiliary network contains genuine information. Empirically, we apply our method to the estimation of the covariance matrix with the help of many financial linkage data of asset returns to attain the global minimum variance (GMV) portfolio. |
Keywords: | Banding, Big Data, Large Covariance Matrix, Network, Thresholding |
JEL: | C13 C58 G11 |
Date: | 2024–05–20 |
URL: | http://d.repec.org/n?u=RePEc:cam:camjip:2416&r= |
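A simplified numerical sketch of the network-guided thresholding idea in the Ge, Li, Linton, Liu and Su abstract above: sample covariances are hard-thresholded only where the auxiliary network shows no link. The data, adjacency matrix, and threshold level are invented; the paper's estimators and theory are considerably more elaborate.

    # Toy illustration: threshold sample covariances except where an auxiliary
    # network indicates a genuine link between the two assets.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 6
    returns = rng.normal(size=(n, p))
    returns[:, 1] += 0.5 * returns[:, 0]                  # build in one true link

    adjacency = np.zeros((p, p), dtype=bool)              # auxiliary network
    adjacency[0, 1] = adjacency[1, 0] = True

    S = np.cov(returns, rowvar=False)                     # sample covariance
    tau = 2 * np.sqrt(np.log(p) / n)                      # generic threshold level

    S_guided = np.where(adjacency | np.eye(p, dtype=bool) | (np.abs(S) > tau), S, 0.0)
    print(np.round(S_guided, 3))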
By: | Reilly Pickard; Finn Wredenhagen; Julio DeJesus; Mario Schlener; Yuri Lawryshyn |
Abstract: | This article leverages deep reinforcement learning (DRL) to hedge American put options, utilizing the deep deterministic policy gradient (DDPG) method. The agents are first trained and tested with Geometric Brownian Motion (GBM) asset paths and demonstrate superior performance over traditional strategies like the Black-Scholes (BS) Delta, particularly in the presence of transaction costs. To assess the real-world applicability of DRL hedging, a second round of experiments uses a market calibrated stochastic volatility model to train DRL agents. Specifically, 80 put options across 8 symbols are collected, stochastic volatility model coefficients are calibrated for each symbol, and a DRL agent is trained for each of the 80 options by simulating paths of the respective calibrated model. Not only do DRL agents outperform the BS Delta method when testing is conducted using the same calibrated stochastic volatility model data from training, but DRL agents achieve better results when hedging the true asset path that occurred between the option sale date and the maturity. As such, not only does this study present the first DRL agents tailored for American put option hedging, but results on both simulated and empirical market testing data also suggest the optimality of DRL agents over the BS Delta method in real-world scenarios. Finally, note that this study employs a model-agnostic Chebyshev interpolation method to provide DRL agents with option prices at each time step when a stochastic volatility model is used, thereby providing a general framework for an easy extension to more complex underlying asset processes. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.06774&r= |
By: | Disa M. Hynsjö; Luca Perdoni |
Abstract: | This paper proposes a novel empirical strategy to estimate the causal effects of federal “redlining” – the mapping and grading of US neighborhoods by the Home Owners’ Loan Corporation (HOLC). In the late 1930s, a federal agency created color-coded maps to summarize the financial risk of granting mortgages in different neighborhoods, together with forms describing the presence of racial and ethnic minorities as “detrimental”. Our analysis exploits an exogenous population cutoff: only cities above 40,000 residents were mapped. We employ a difference-in-differences design, comparing areas that received a particular grade with neighborhoods that would have received the same grade if their city had been mapped. The control neighborhoods are defined using a machine learning algorithm trained to draw HOLC-like maps using newly geocoded full-count census records. Our findings support the view that HOLC maps further concentrated economic disadvantage. For the year 1940, we find a substantial reduction in property values and a moderate increase in the share of African American residents in areas with the lowest grade. Such negative effects on property values persisted until the early 1980s. The magnitude of the results is higher in historically African American neighborhoods. The empirical results show that a government-supplied, data-driven information tool can coordinate exclusionary practices and amplify their consequences. |
Keywords: | Redlining, neighborhoods, discrimination, machine-learning |
JEL: | J15 R23 N92 N32 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_11098&r= |
By: | Theodoros Zafeiriou; Dimitris Kalles |
Abstract: | The present document delineates the analysis, design, implementation, and benchmarking of various neural network architectures within a short-term frequency prediction system for the foreign exchange market (FOREX). Our aim is to simulate the judgment of the human expert (technical analyst) using a system that responds promptly to changes in market conditions, thus enabling the optimization of short-term trading strategies. We designed and implemented a series of LSTM neural network architectures, which take the exchange rate values as input and generate a short-term market trend forecasting signal, as well as a custom ANN architecture based on technical analysis indicator simulators. We performed a comparative analysis of the results and came to useful conclusions regarding the suitability of each architecture and the cost, in terms of time and computational power, of implementing them. The custom ANN architecture produces better prediction quality with higher sensitivity, using fewer resources and less time than the LSTM architectures. The custom ANN architecture appears to be ideal for use in low-power computing systems and for use cases that need fast decisions at the least possible computational cost. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2405.08045&r= |
By: | Serguei Maliar; Bernard Salanie |
Abstract: | The positive correlation test for asymmetric information developed by Chiappori and Salanie (2000) has been applied in many insurance markets. Most of the literature focuses on the special case of constant correlation; it also relies on restrictive parametric specifications for the choice of coverage and the occurrence of claims. We relax these restrictions by estimating conditional covariances and correlations using deep learning methods. We test the positive correlation property by using the intersection test of Chernozhukov, Lee, and Rosen (2013) and the "sorted groups" test of Chernozhukov, Demirer, Duflo, and Fernandez-Val (2023). Our results confirm earlier findings that the correlation between risk and coverage is small. Random forests and gradient boosting trees produce similar results to neural networks. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.18207&r= |
By: | Yusuke Narita (Yale University); Kohei Yata (Yale University) |
Abstract: | Algorithms make a growing portion of policy and business decisions. We develop a treatment-effect estimator using algorithmic decisions as instruments for a class of stochastic and deterministic algorithms. Our estimator is consistent and asymptotically normal for well-defined causal effects. A special case of our setup is multidimensional regression discontinuity designs with complex boundaries. We apply our estimator to evaluate the Coronavirus Aid, Relief, and Economic Security Act, which allocated many billions of dollars worth of relief funding to hospitals via an algorithmic rule. The funding is shown to have little effect on COVID-19-related hospital activities. Naive estimates exhibit selection bias. |
Date: | 2024–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2391&r= |
By: | Wang, Tengyao; Dobriban, Edgar; Gataric, Milana; Samworth, Richard J. |
Abstract: | We propose a new method for high-dimensional semi-supervised learning problems based on the careful aggregation of the results of a low-dimensional procedure applied to many axis-aligned random projections of the data. Our primary goal is to identify important variables for distinguishing between the classes; existing low-dimensional methods can then be applied for final class assignment. To this end, we score projections according to their class-distinguishing ability; for instance, motivated by a generalized Rayleigh quotient, we can compute the traces of estimated whitened between-class covariance matrices on the projected data. This enables us to assign an importance weight to each variable for a given projection, and to select our signal variables by aggregating these weights over high-scoring projections. Our theory shows that the resulting Sharp-SSL algorithm is able to recover the signal coordinates with high probability when we aggregate over sufficiently many random projections and when the base procedure estimates the diagonal entries of the whitened between-class covariance matrix sufficiently well. For the Gaussian EM base procedure, we provide a new analysis of its performance in semi-supervised settings that controls the parameter estimation error in terms of the proportion of labeled data in the sample. Numerical results on both simulated data and a real colon tumor dataset support the excellent empirical performance of the method. |
Keywords: | semi-supervised learning; high-dimensional statistics; sparsity; random projection; ensemble learning |
JEL: | C1 |
Date: | 2024–05–20 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:122552&r= |
By: | Lonjezo Sithole |
Abstract: | I propose a locally robust semiparametric framework for estimating causal effects using the popular examiner IV design, in the presence of many examiners and possibly many covariates relative to the sample size. The key ingredient of this approach is an orthogonal moment function that is robust to biases and local misspecification from the first step estimation of the examiner IV. I derive the orthogonal moment function and show that it delivers multiple robustness where the outcome model or at least one of the first step components is misspecified but the estimating equation remains valid. The proposed framework not only allows for estimation of the examiner IV in the presence of many examiners and many covariates relative to sample size, using a wide range of nonparametric and machine learning techniques including LASSO, Dantzig, neural networks and random forests, but also delivers root-n consistent estimation of the parameter of interest under mild assumptions. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.19144&r= |