on Computational Economics
By: | Shi, Chengchun; Qi, Zhengling; Wang, Jianing; Zhou, Fan |
Abstract: | Reinforcement learning (RL) is a powerful machine learning technique that enables an intelligent agent to learn an optimal policy that maximizes the cumulative rewards in sequential decision making. Most methods in the existing literature are developed in online settings where the data are easy to collect or simulate. Motivated by high-stakes domains such as mobile health studies with limited and pre-collected data, in this article, we study offline reinforcement learning methods. To efficiently use these datasets for policy optimization, we propose a novel value enhancement method to improve the performance of a given initial policy computed by existing state-of-the-art RL algorithms. Specifically, when the initial policy is not consistent, our method will output a policy whose value is no worse and often better than that of the initial policy. When the initial policy is consistent, under some mild conditions, our method will yield a policy whose value converges to the optimal one at a faster rate than the initial policy, achieving the desired “value enhancement” property. The proposed method is generally applicable to any parameterized policy that belongs to a certain pre-specified function class (e.g., deep neural networks). Extensive numerical studies are conducted to demonstrate the superior performance of our method. Supplementary materials for this article are available online. |
Keywords: | mobile health studies; offline reinforcement learning; semi-parametric efficiency; trust region optimization |
JEL: | C1 |
Date: | 2023–07–20 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:122756 |
By: | Tiago Monteiro |
Abstract: | In quantitative finance, machine learning methods are essential for alpha generation. This study introduces a new approach that combines Hidden Markov Models (HMM) and neural networks, integrated with Black-Litterman portfolio optimization. During the COVID period (2019-2022), this dual-model approach achieved an 83% return with a Sharpe ratio of 0.77. It incorporates two risk models to enhance risk management, showing efficiency during volatile periods. The methodology was implemented on the QuantConnect platform, which was chosen for its robust framework and experimental reproducibility. The system, which predicts future price movements, includes a three-year warm-up to ensure proper algorithm function. It targets highly liquid, large-cap energy stocks to ensure stable and predictable performance while also considering broker payments. The dual-model alpha system utilizes log returns to select the optimal state based on historical performance. It combines state predictions with neural network outputs, which are based on historical data, to generate trading signals. This study examines the architecture of the trading system, data pre-processing, training, and performance. The full code and backtesting data are available under the QuantConnect terms. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.19858 |
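The regime-detection step this abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's QuantConnect implementation: it assumes the hmmlearn package and a synthetic prices array standing in for the energy-stock data.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Hypothetical input: a 1-D array of daily closing prices.
prices = np.cumprod(1 + 0.01 * np.random.default_rng(0).standard_normal(1000)) * 100

# Log returns, reshaped to (n_samples, n_features) as hmmlearn expects.
log_returns = np.diff(np.log(prices)).reshape(-1, 1)

# Fit a two-state Gaussian HMM (e.g., "calm" vs. "volatile" regimes).
hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
hmm.fit(log_returns)

# Decode the most likely state sequence and inspect the current regime.
states = hmm.predict(log_returns)
current_state = states[-1]

# Rank states by historical mean log return, mirroring the idea of
# selecting the optimal state based on historical performance.
state_means = [log_returns[states == s].mean() for s in range(hmm.n_components)]
best_state = int(np.argmax(state_means))
print(f"current regime: {current_state}, best-performing regime: {best_state}")
```

In the paper's dual-model system, such state estimates are combined with neural-network forecasts before the Black-Litterman step; the snippet stops at regime selection.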
By: | Alejandra de la Rica Escudero; Eduardo C. Garrido-Merchan; Maria Coronado-Vaca |
Abstract: | Financial portfolio management investment policies computed quantitatively by modern portfolio theory techniques like the Markowitz model rely on a set of assumptions that are not supported by data in high-volatility markets. Hence, quantitative researchers are looking for alternative models to tackle this problem. Concretely, portfolio management is a problem that has recently been successfully addressed by Deep Reinforcement Learning (DRL) approaches. In particular, DRL algorithms train an agent by estimating the distribution of the expected reward of every action performed by the agent given any financial state in a simulator. However, these methods rely on deep neural networks to represent such a distribution, and although neural networks are universal approximators, their behaviour, governed by a set of parameters that are not interpretable, cannot be explained. Critically, financial investors' policies require predictions to be interpretable, so DRL agents are not suited to follow a particular policy or explain their actions. In this work, we developed a novel Explainable Deep Reinforcement Learning (XDRL) approach for portfolio management, integrating Proximal Policy Optimization (PPO) with the model-agnostic explainability techniques of feature importance, SHAP, and LIME to enhance transparency at prediction time. By executing our methodology, we can interpret the actions of the agent at prediction time to assess whether they follow the requisites of an investment policy or to assess the risk of following the agent's suggestions. To the best of our knowledge, our proposed approach is the first explainable post hoc portfolio management financial policy of a DRL agent. We empirically illustrate our methodology by successfully identifying key features influencing investment decisions, demonstrating the ability to explain the agent's actions at prediction time. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.14486 |
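The post hoc explanation step can be illustrated with SHAP's model-agnostic KernelExplainer. In this sketch a scikit-learn regressor stands in for the trained PPO policy network (the paper uses an actual DRL agent and also applies LIME); the features, data, and policy below are synthetic assumptions.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical market-state features (e.g., asset returns and volatilities)
# and a scalar "action" (e.g., the target weight of one asset).
X = rng.standard_normal((500, 6))
y = 0.6 * X[:, 0] - 0.3 * X[:, 3] + 0.1 * rng.standard_normal(500)

# Stand-in for the PPO policy: any black-box state -> action mapping works,
# because KernelExplainer is model-agnostic.
policy = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Explain the action chosen in a new state, at prediction time.
background = shap.sample(X, 50, random_state=0)   # reference distribution
explainer = shap.KernelExplainer(policy.predict, background)
new_state = X[:1]
shap_values = explainer.shap_values(new_state)

# Per-feature attributions: which state variables drove the action.
print(dict(enumerate(np.round(shap_values[0], 3))))
```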
By: | Abdul Jabbar; Syed Qaisar Jalil |
Abstract: | This study evaluates the performance of 41 machine learning models, including 21 classifiers and 20 regressors, in predicting Bitcoin prices for algorithmic trading. By examining these models under various market conditions, we highlight their accuracy, robustness, and adaptability to the volatile cryptocurrency market. Our comprehensive analysis reveals the strengths and limitations of each model, providing critical insights for developing effective trading strategies. We employ both machine learning metrics (e.g., Mean Absolute Error, Root Mean Squared Error) and trading metrics (e.g., Profit and Loss percentage, Sharpe Ratio) to assess model performance. Our evaluation includes backtesting on historical data, forward testing on recent unseen data, and real-world trading scenarios, ensuring the robustness and practical applicability of our models. Key findings demonstrate that certain models, such as Random Forest and Stochastic Gradient Descent, outperform others in terms of profit and risk management. These insights offer valuable guidance for traders and researchers aiming to leverage machine learning for cryptocurrency trading. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.18334 |
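A minimal sketch of the evaluation protocol for the two models the abstract singles out (Random Forest and Stochastic Gradient Descent), scored on both a machine learning metric and a trading metric; the features and "returns" are synthetic stand-ins for the authors' Bitcoin data, and the split mimics backtest-then-forward-test.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Hypothetical stand-in for engineered features and next-period direction.
X = rng.standard_normal((2000, 10))
future_ret = 0.3 * X[:, 0] + 0.1 * rng.standard_normal(2000)   # latent return
y = (future_ret > 0).astype(int)                               # direction label

# Chronological split: backtest on the past, "forward test" on the rest.
split = 1500
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]
ret_te = future_ret[split:]

for model in (RandomForestClassifier(n_estimators=300, random_state=0),
              SGDClassifier(loss="log_loss", random_state=0)):
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Trading metric: go long when predicting up, short otherwise.
    position = np.where(pred == 1, 1.0, -1.0)
    pnl = position * ret_te
    sharpe = pnl.mean() / pnl.std()
    print(type(model).__name__, "accuracy:", round(accuracy_score(y_te, pred), 3),
          "per-period Sharpe:", round(float(sharpe), 3))
```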
By: | Joel P. Villarino; Álvaro Leitao |
Abstract: | The present work addresses the challenge of training neural networks for Dynamic Initial Margin (DIM) computation in counterparty credit risk, a task traditionally burdened by the high costs associated with generating training datasets through nested Monte Carlo (MC) simulations. By condensing the initial market state variables into an input vector, determined through an interest rate model and a parsimonious parameterization of the current interest rate term structure, we construct a training dataset where labels are noisy but unbiased DIM samples derived from single MC paths. A multi-output neural network structure is employed to handle DIM as a time-dependent function, facilitating training across a mesh of monitoring times. The methodology offers significant advantages: it reduces the dataset generation cost to a single MC execution and parameterizes the neural network by initial market state variables, obviating the need for repeated training. Experimental results demonstrate the approach's convergence properties and robustness across different interest rate models (Vasicek and Hull-White) and portfolio complexities, validating its general applicability and efficiency in more realistic scenarios. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.16435 |
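The training setup — noisy but unbiased single-path DIM labels over a mesh of monitoring times, fitted by one multi-output network — can be sketched as follows. The interest-rate model, state parameterization, and network architecture here are illustrative assumptions, using scikit-learn's MLPRegressor rather than the authors' own network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_paths, n_times = 5000, 10            # MC paths = training rows; monitoring mesh
X = rng.standard_normal((n_paths, 4))  # condensed initial market-state vector

# Hypothetical "true" DIM profile as a function of the state, decaying in time.
t = np.linspace(0.1, 1.0, n_times)
true_dim = np.outer(np.abs(X @ np.array([1.0, 0.5, -0.3, 0.2])), np.exp(-t))

# Labels are noisy but unbiased single-path estimates of the DIM profile.
Y = true_dim + 0.5 * rng.standard_normal((n_paths, n_times))

# One network with n_times outputs: DIM treated as a time-dependent function.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X, Y)

# Despite the noisy labels, predictions average out toward the true profile.
err = np.abs(net.predict(X[:100]) - true_dim[:100]).mean()
print("mean abs error vs. noise level 0.5:", round(float(err), 3))
```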
By: | Chung I Lu; Julian Sester |
Abstract: | Generating synthetic financial time series data that accurately reflects real-world market dynamics holds tremendous potential for various applications, including portfolio optimization, risk management, and large-scale machine learning. We present an approach for training generative models for financial time series using the maximum mean discrepancy (MMD) with a signature kernel. Our method leverages the expressive power of the signature transform to capture the complex dependencies and temporal structures inherent in financial data. We employ a moving average model to model the variance of the noise input, enhancing the model's ability to reproduce stylized facts such as volatility clustering. Through empirical experiments on S&P 500 index data, we demonstrate that our model effectively captures key characteristics of financial time series and outperforms a comparable GAN-based approach. In addition, we explore using the generated synthetic data to train a reinforcement learning agent for portfolio management, achieving promising results. Finally, we propose a method to add robustness to the generative model by tweaking the noise input so that the generated sequences can be adjusted to different market environments with minimal data. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.19848 |
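The MMD training signal with a signature feature map can be illustrated in plain NumPy. This sketch simplifies the paper's setup: it uses a truncated level-2 signature with a linear kernel (the paper uses the full signature kernel) and synthetic return paths.

```python
import numpy as np

def sig_level2(path):
    """Truncated level-2 signature of a path of shape (length, d)."""
    dx = np.diff(path, axis=0)                 # increments, shape (L-1, d)
    s1 = dx.sum(axis=0)                        # level 1: total increments
    cum = np.vstack([np.zeros(dx.shape[1]), np.cumsum(dx, axis=0)[:-1]])
    s2 = cum.T @ dx                            # level 2: iterated integrals
    return np.concatenate([s1, s2.ravel()])

def mmd2(feats_p, feats_q):
    """Squared MMD with a linear kernel = distance between feature means."""
    diff = feats_p.mean(axis=0) - feats_q.mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(0)

def make_paths(n, vol):
    # Time-augmented log-price paths (a time channel is standard for signatures).
    t = np.linspace(0, 1, 51)
    rets = vol * rng.standard_normal((n, 50))
    logp = np.concatenate([np.zeros((n, 1)), np.cumsum(rets, axis=1)], axis=1)
    return [np.column_stack([t, lp]) for lp in logp]

real = np.array([sig_level2(p) for p in make_paths(200, vol=0.01)])
fake_bad = np.array([sig_level2(p) for p in make_paths(200, vol=0.03)])
fake_good = np.array([sig_level2(p) for p in make_paths(200, vol=0.011)])

print("MMD^2 vs mismatched generator:", mmd2(real, fake_bad))
print("MMD^2 vs close generator:     ", mmd2(real, fake_good))
```

A generative model would be trained to minimise this discrepancy against real paths; the moving-average noise model for volatility clustering is omitted here.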
By: | Undral Byambadalai; Tatsushi Oka; Shota Yasui |
Abstract: | We propose a novel regression adjustment method designed for estimating distributional treatment effect parameters in randomized experiments. Randomized experiments have been extensively used to estimate treatment effects in various scientific fields. However, to gain deeper insights, it is essential to estimate distributional treatment effects rather than relying solely on average effects. Our approach incorporates pre-treatment covariates into a distributional regression framework, utilizing machine learning techniques to improve the precision of distributional treatment effect estimators. The proposed approach can be readily implemented with off-the-shelf machine learning methods and remains valid as long as the nuisance components are reasonably well estimated. Also, we establish the asymptotic properties of the proposed estimator and present a uniformly valid inference method. Through simulation results and real data analysis, we demonstrate the effectiveness of integrating machine learning techniques in reducing the variance of distributional treatment effect estimators in finite samples. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.16037 |
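A regression-adjusted estimator of the distributional treatment effect at a threshold can be sketched as follows: regress the indicator 1{Y <= y} on covariates within each arm using an off-the-shelf learner, then combine the model contrast with arm-specific residual corrections. This is a simplified sketch on synthetic data (no cross-fitting, known assignment probability of 1/2), not the paper's exact estimator.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 4000
X = rng.standard_normal((n, 5))
D = rng.integers(0, 2, n)                       # randomized assignment, p = 1/2
Y = X[:, 0] + D * (0.5 + 0.5 * X[:, 1]) + rng.standard_normal(n)

def dte_at(y0):
    """Regression-adjusted estimate of P(Y(1) <= y0) - P(Y(0) <= y0)."""
    I = (Y <= y0).astype(int)
    mu = {}
    for d in (0, 1):
        m = GradientBoostingClassifier(random_state=0).fit(X[D == d], I[D == d])
        mu[d] = m.predict_proba(X)[:, 1]        # nuisance: P(Y <= y0 | X, D=d)
    # Adjusted estimator: model contrast plus arm-specific residual corrections.
    adj = (mu[1] - mu[0]).mean()
    adj += (I - mu[1])[D == 1].mean() - (I - mu[0])[D == 0].mean()
    return adj

for y0 in (-1.0, 0.0, 1.0):
    print(f"DTE at y={y0:+.1f}: {dte_at(y0):+.4f}")
```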
By: | Wolff, Dominik; Echterling, Fabian |
Abstract: | We analyze machine learning algorithms for stock selection. Our study uses weekly data for the historical constituents of the S&P 500 over the period from January 1999 to March 2021 and builds on typical equity factors, additional firm fundamentals, and technical indicators. A variety of machine learning models are trained on the binary classification task of predicting whether a specific stock outperforms or underperforms the cross-sectional median return over the subsequent week. We analyze weekly trading strategies that invest in stocks with the highest predicted outperformance probability. Our empirical results show substantial and significant outperformance of machine learning-based stock selection models compared to an equally weighted benchmark. Interestingly, we find more simplistic regularized logistic regression models to perform similarly well compared to more complex machine learning models. The results are robust when applied to the STOXX Europe 600 as an alternative asset universe. |
Date: | 2024–01 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:149079 |
By: | Kamila Zaman; Alberto Marchisio; Muhammad Kashif; Muhammad Shafique |
Abstract: | Portfolio Optimization (PO) is a financial problem aiming to maximize the net gains while minimizing the risks in a given investment portfolio. The novelty of quantum algorithms lies in their acclaimed potential and capability to solve complex problems given the underlying Quantum Computing (QC) infrastructure. Applying QC's strengths to the finance industry's problems, such as PO, allows us to solve these problems using quantum-based algorithms such as the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA). While the quantum potential for finance is highly impactful, the present literature does not yet properly define the architecture and composition of quantum circuits as robust financial frameworks/algorithms for research and design purposes. In this work, we propose a novel scalable framework, denoted PO-QA, to systematically investigate the variation of quantum parameters (such as rotation blocks, repetitions, and entanglement types) and observe their subtle effect on overall performance. In our paper, performance is measured by the convergence of each QAOA and VQE variant to ground-state energy values similar to those of the exact eigensolver (the classical solution). Our results provide effective insights into comprehending PO from the lens of Quantum Machine Learning in terms of convergence to the classical solution, which is used as a benchmark. This study paves the way for identifying efficient configurations of quantum circuits for solving PO and unveiling their inherent inter-relationships. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.19857 |
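The classical benchmark the abstract refers to — the exact eigensolver — amounts to finding the ground state of a diagonal portfolio Hamiltonian. Below is a sketch that builds a standard Markowitz-style QUBO and brute-forces its ground state for a small universe; the VQE/QAOA circuits themselves are not shown, and all parameter values are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, budget, risk_aversion, penalty = 6, 3, 0.5, 10.0

mu = rng.uniform(0.01, 0.10, n)                  # expected returns
A = rng.standard_normal((n, n))
Sigma = A @ A.T / n                              # covariance matrix

def energy(x):
    """QUBO energy: risk - return + budget-constraint penalty."""
    x = np.asarray(x, dtype=float)
    return (risk_aversion * x @ Sigma @ x
            - mu @ x
            + penalty * (x.sum() - budget) ** 2)

# "Exact eigensolver" over the diagonal Hamiltonian: enumerate all 2^n states.
best = min(itertools.product([0, 1], repeat=n), key=energy)
print("ground state (selected assets):", best, "energy:", round(energy(best), 4))
```

VQE and QAOA variants are then judged by how closely their resulting energies converge to this ground-state value.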
By: | Gavin Ugale; Cameron Hall |
Abstract: | Generative artificial intelligence (AI) presents myriad opportunities for integrity actors—anti-corruption agencies, supreme audit institutions, internal audit bodies and others—to enhance the impact of their work, particularly through the use of large language models (LLMs). As this type of AI becomes increasingly mainstream, it is critical for integrity actors to understand both where generative AI and LLMs can add the most value and the risks they pose. To advance this understanding, this paper draws on input from the OECD integrity and anti-corruption communities and provides a snapshot of the ways these bodies are using generative AI and LLMs, the challenges they face, and the insights these experiences offer to similar bodies in other countries. The paper also explores key considerations for integrity actors to ensure trustworthy AI systems and responsible use of AI as their capacities in this area develop. |
Date: | 2024–03–22 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:12-en |
By: | Hongshen Yang; Avinash Malik |
Abstract: | Cryptocurrency is a cryptography-based digital asset with extremely volatile prices. Around $70 billion worth of cryptocurrency is traded daily on exchanges. Trading cryptocurrency is difficult due to the inherent volatility of the crypto market. In this work, we test the hypothesis: "Can techniques from artificial intelligence help with algorithmically trading cryptocurrencies?". To address this question, we combine Reinforcement Learning (RL) with pair trading. Pair trading is a statistical arbitrage technique that exploits the price difference between statistically correlated assets. We train reinforcement learners to determine when and how to trade pairs of cryptocurrencies. We develop new reward shaping and observation/action spaces for reinforcement learning. We performed experiments with the developed reinforcement learner on pairs of BTC-GBP and BTC-EUR data separated by 1-minute intervals (n = 263,520). The traditional non-RL pair trading technique achieved an annualised profit of 8.33%, while the proposed RL-based pair trading technique achieved annualised profits from 9.94% to 31.53%, depending upon the RL learner. Our results show that RL can significantly outperform manual and traditional pair trading techniques when applied to volatile markets such as cryptocurrencies. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.16103 |
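The traditional (non-RL) baseline mentioned above is typically a z-score rule on the spread between two correlated assets. A sketch with synthetic prices standing in for the BTC-GBP/BTC-EUR pair; the thresholds, window, and hedge ratio are illustrative, and the RL layer that replaces this rule is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical pair: two prices sharing a common trend (e.g., BTC-GBP, BTC-EUR).
common = np.cumsum(0.001 * rng.standard_normal(n))
p1 = 100 * np.exp(common + 0.002 * rng.standard_normal(n))
p2 = 100 * np.exp(common + 0.002 * rng.standard_normal(n))

hedge_ratio = 1.0                       # illustrative; usually fit by regression
spread = np.log(p1) - hedge_ratio * np.log(p2)

window, entry_z, exit_z = 240, 2.0, 0.5
pnl, position = [], 0                   # +1 long spread, -1 short spread, 0 flat
for t in range(window, n - 1):
    w = spread[t - window:t]
    z = (spread[t] - w.mean()) / w.std()
    if position == 0 and abs(z) > entry_z:
        position = -int(np.sign(z))     # fade large deviations
    elif position != 0 and abs(z) < exit_z:
        position = 0                    # revert to flat near the mean
    pnl.append(position * (spread[t + 1] - spread[t]))

pnl = np.array(pnl)
print("total PnL (spread units):", round(float(pnl.sum()), 4))
```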
By: | Höschle, Lisa; Yu, Xiaohua |
Keywords: | Agribusiness |
Date: | 2023–09–01 |
URL: | https://d.repec.org/n?u=RePEc:ags:gewi23:344249 |
By: | Sabhi Rajae (ENCGK - ENCG University Ibn Tofail of Kenitra, Morocco); Abdelbaki Jamal Eddine (ENCGK - ENCG University Ibn Tofail of Kenitra, Morocco); Taouab Omar (ENCGK - ENCG University Ibn Tofail of Kenitra, Morocco); Eddaoudi Faissal (ENCGK - ENCG University Ibn Tofail of Kenitra, Morocco); Abdelbaki Noureddine (ENCGK - ENCG University Ibn Tofail of Kenitra, Morocco) |
Abstract: | The paper takes a novel tack by suggesting artificial intelligence (AI) as a way to lessen behavioral biases in the process of making financial decisions. Building on theoretical research that identifies these flaws, the paper explores how AI might help financial planners avoid behavioral biases and provide more effective investment recommendations. The expanding efficacy of AI, particularly via supervised and unsupervised learning, is examined as a means to overcome confirmation and hindsight biases. The methodology is theoretical: it develops a conceptual framework, outlines potential outcomes, and theoretically examines the relationships between behavioral finance and AI. The necessity for conceptual exploration in the developing area of artificial intelligence in finance supports this approach, even though it is limited by the lack of empirical evidence. |
Keywords: | Confirmation bias, Artificial intelligence, Behavioral finance, Investment recommendations, Financial decisions |
Date: | 2024–07–09 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04644322 |
By: | Ma, Tao; Yang, Xuzhi; Szabo, Zoltan |
Abstract: | Reinforcement learning (RL) -- finding the optimal behaviour (also referred to as policy) maximizing the collected long-term cumulative reward -- is among the most influential approaches in machine learning, with a large number of successful applications. In several decision problems, however, one faces the possibility of policy switching -- changing from the current policy to a new one -- which incurs a non-negligible cost (examples include shifting the currently applied educational technology, modernization of a computing cluster, and the introduction of a new webpage design), and in the decision one is limited to using historical data without the possibility of further online interaction. Despite the inevitable importance of this offline learning scenario, to the best of our knowledge, very little effort has been made to tackle the key problem of balancing between the gain and the cost of switching in a flexible and principled way. Leveraging ideas from the area of optimal transport, we initiate the systematic study of policy switching in offline RL. We establish fundamental properties and design a Net Actor-Critic algorithm for the proposed novel switching formulation. Numerical experiments demonstrate the efficiency of our approach on multiple Gymnasium benchmarks. |
JEL: | C1 |
Date: | 2024–07–01 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:124144 |
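The core trade-off — the estimated value gain of a new policy against an optimal-transport switching cost — can be sketched for two discrete policies at a single state. The values, cost scale, and use of scipy's one-dimensional Wasserstein distance are illustrative assumptions, not the paper's Net Actor-Critic algorithm.

```python
import numpy as np
from scipy.stats import wasserstein_distance

actions = np.arange(5)                             # discrete action space
pi_old = np.array([0.6, 0.2, 0.1, 0.05, 0.05])     # current policy (one state)
pi_new = np.array([0.1, 0.1, 0.2, 0.3, 0.3])       # candidate policy

# Estimated value of each policy from offline data (hypothetical numbers).
v_old, v_new = 1.00, 1.25

# Optimal-transport cost of moving probability mass between the two policies.
switch_cost_rate = 0.4
ot_cost = wasserstein_distance(actions, actions, pi_old, pi_new)

# Switch only if the value gain outweighs the (scaled) transport cost.
gain = v_new - v_old
if gain > switch_cost_rate * ot_cost:
    print(f"switch: gain {gain:.3f} > cost {switch_cost_rate * ot_cost:.3f}")
else:
    print(f"stay: gain {gain:.3f} <= cost {switch_cost_rate * ot_cost:.3f}")
```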
By: | Taha Barwahwala; Aprajit Mahajan; Shekhar Mittal; Ofir Reich |
Abstract: | We investigate the use of a machine learning (ML) algorithm to identify fraudulent non-existent firms. Using a rich dataset of tax returns over several years in an Indian state, we train an ML-based model to predict fraudulent firms. We then use the model predictions to carry out field inspections of firms identified as suspicious by the ML tool. We find that the ML model is accurate in both simulated and field settings in identifying non-existent firms. Withholding a randomly selected group of firms from inspection, we estimate the causal impact of ML-driven inspections. Despite its strong predictive and field performance, the model-driven inspections do not yield a significant increase in enforcement as measured by the cancellation of fraudulent firm registrations and tax recovery. We provide two rationales for this discrepancy based on a close analysis of the tax department's operating protocols: selection bias, and institutional friction in integrating the model into existing administrative systems. Our study serves as a cautionary tale for the application of machine learning in public policy contexts and against relying solely on test set performance as an effectiveness indicator. Field evaluations are critical in assessing the real-world impact of predictive models. |
JEL: | H0 O10 |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:32705 |
By: | Chong Zhang; Xinyi Liu; Mingyu Jin; Zhongmou Zhang; Lingyao Li; Zhenting Wang; Wenyue Hua; Dong Shu; Suiyuan Zhu; Xiaobo Jin; Sujian Li; Mengnan Du; Yongfeng Zhang |
Abstract: | Can AI Agents simulate real-world trading environments to investigate the impact of external factors on stock trading activities (e.g., macroeconomics, policy changes, company fundamentals, and global events)? These factors, which frequently influence trading behaviors, are critical elements in the quest for maximizing investors' profits. Our work attempts to solve this problem through large language model based agents. We have developed a multi-agent AI system called StockAgent, driven by LLMs, designed to simulate investors' trading behaviors in response to the real stock market. The StockAgent allows users to evaluate the impact of different external factors on investor trading and to analyze trading behavior and profitability effects. Additionally, StockAgent avoids the test set leakage issue present in existing trading simulation systems based on AI Agents. Specifically, it prevents the model from leveraging prior knowledge it may have acquired related to the test data. We evaluate different LLMs under the framework of StockAgent in a stock trading environment that closely resembles real-world conditions. The experimental results demonstrate the impact of key external factors on stock market trading, including trading behavior and stock price fluctuation rules. This research explores agents' free trading behavior in settings without prior knowledge related to market data. The patterns identified through StockAgent simulations provide valuable insights for LLM-based investment advice and stock recommendation. The code is available at https://github.com/MingyuJ666/Stockagent . |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.18957 |
By: | Li, Jie; Fearnhead, Paul; Fryzlewicz, Piotr; Wang, Tengyao |
Abstract: | Detecting change points in data is challenging because of the range of possible types of change and types of behaviour of data when there is no change. Statistically efficient methods for detecting a change will depend on both of these features, and it can be difficult for a practitioner to develop an appropriate detection method for their application of interest. We show how to automatically generate new offline detection methods based on training a neural network. Our approach is motivated by many existing tests for the presence of a change point being representable by a simple neural network, and thus a neural network trained with sufficient data should have performance at least as good as these methods. We present theory that quantifies the error rate for such an approach, and how it depends on the amount of training data. Empirical results show that, even with limited training data, its performance is competitive with the standard cumulative sum (CUSUM) based classifier for detecting a change in mean when the noise is independent and Gaussian, and can substantially outperform it in the presence of auto-correlated or heavy-tailed noise. Our method also shows strong results in detecting and localizing changes in activity based on accelerometer data. |
Keywords: | automatic statistician; classification; likelihood-free inference; neural networks; structural breaks; supervised learning |
JEL: | C1 |
Date: | 2024–04–01 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:120083 |
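The paper's premise — that a trained network should match classical tests such as CUSUM in the independent Gaussian setting — can be sketched with a small classifier on simulated sequences. The architecture, threshold calibration, and data below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_seq, T = 4000, 100

def simulate(change):
    x = rng.standard_normal(T)
    if change:
        tau = rng.integers(20, 80)
        x[tau:] += 1.0                      # mean shift of size 1 after tau
    return x

y = rng.integers(0, 2, n_seq)
X = np.array([simulate(c) for c in y])

def cusum_stat(x):
    """max_k |S_k - (k/T) S_T| / sqrt(T): classic test for a change in mean."""
    s = np.cumsum(x)
    k = np.arange(1, len(x) + 1)
    return np.max(np.abs(s - k / len(x) * s[-1])) / np.sqrt(len(x))

# Neural classifier trained directly on raw sequences.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X[:3000], y[:3000])
nn_acc = clf.score(X[3000:], y[3000:])

# CUSUM classifier: crude threshold calibrated on the training split.
stats_tr = np.array([cusum_stat(x) for x in X[:3000]])
thresh = np.median(stats_tr)
cusum_pred = np.array([cusum_stat(x) > thresh for x in X[3000:]])
print("NN accuracy:", round(nn_acc, 3),
      "CUSUM accuracy:", round(float((cusum_pred == y[3000:]).mean()), 3))
```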
By: | Philippe Lorenz; Karine Perset; Jamie Berryhill |
Abstract: | Generative artificial intelligence (AI) creates new content in response to prompts, offering transformative potential across multiple sectors such as education, entertainment, healthcare and scientific research. However, these technologies also pose critical societal and policy challenges that policy makers must confront: potential shifts in labour markets, copyright uncertainties, and risk associated with the perpetuation of societal biases and the potential for misuse in the creation of disinformation and manipulated content. Consequences could extend to the spreading of mis- and disinformation, perpetuation of discrimination, distortion of public discourse and markets, and the incitement of violence. Governments recognise the transformative impact of generative AI and are actively working to address these challenges. This paper aims to inform these policy considerations and support decision makers in addressing them. |
Keywords: | AI, artificial intelligence, generative artificial intelligence, mis- and disinformation |
Date: | 2023–09–18 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:1-en |
By: | Ludovic Goudenege; Andrea Molent; Antonino Zanette |
Abstract: | This paper explores the application of Machine Learning techniques for pricing high-dimensional options within the framework of the Uncertain Volatility Model (UVM). The UVM is a robust framework that accounts for the inherent unpredictability of market volatility by setting upper and lower bounds on volatility and the correlation among underlying assets. By leveraging historical data and extreme values of estimated volatilities and correlations, the model establishes a confidence interval for future volatility and correlations, thus providing a more realistic approach to option pricing. By integrating advanced Machine Learning algorithms, we aim to enhance the accuracy and efficiency of option pricing under the UVM, especially when the option price depends on a large number of variables, such as in basket or path-dependent options. Our approach evolves backward in time, dynamically selecting at each time step the most expensive volatility and correlation for each market state. Specifically, it identifies the particular values of volatility and correlation that maximize the expected option value at the next time step. This is achieved through the use of Gaussian Process regression, the computation of expectations via a single step of a multidimensional tree and the Sequential Quadratic Programming optimization algorithm. The numerical results demonstrate that the proposed approach can significantly improve the precision of option pricing and risk management strategies compared with methods already in the literature, particularly in high-dimensional contexts. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.13213 |
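In one dimension the UVM's worst-case logic reduces to the Black-Scholes-Barenblatt equation: at each backward step, volatility is set to its upper bound wherever the option's gamma is positive and to its lower bound elsewhere. A minimal explicit finite-difference sketch for a call spread (the paper's high-dimensional machinery — Gaussian Process regression, the multidimensional tree, and SQP — is not reproduced; all parameters are illustrative):

```python
import numpy as np

sigma_lo, sigma_hi = 0.15, 0.35      # volatility band of the UVM
r, T = 0.02, 1.0
K1, K2 = 90.0, 110.0                 # call spread: gamma changes sign
S_max, M, N = 300.0, 300, 20000      # N chosen for explicit-scheme stability

S = np.linspace(0.0, S_max, M + 1)
dS, dt = S[1] - S[0], T / N
V = np.maximum(S - K1, 0.0) - np.maximum(S - K2, 0.0)   # terminal payoff

for n in range(1, N + 1):
    tau = n * dt                     # time to maturity at this backward step
    delta = (V[2:] - V[:-2]) / (2 * dS)
    gamma = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2
    # Black-Scholes-Barenblatt: the worst case (upper price) takes the
    # largest vol where gamma > 0 and the smallest where gamma < 0.
    sigma = np.where(gamma > 0, sigma_hi, sigma_lo)
    V[1:-1] += dt * (0.5 * sigma**2 * S[1:-1]**2 * gamma
                     + r * S[1:-1] * delta - r * V[1:-1])
    V[0], V[-1] = 0.0, (K2 - K1) * np.exp(-r * tau)     # boundary conditions

print("UVM upper price of the call spread at S=100:",
      round(float(np.interp(100.0, S, V)), 4))
```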
By: | Stéphane Crépey (UFR Mathématiques UPCité); Botao Li (LPSM); Hoang Nguyen (IES, LPSM); Bouazza Saadeddine |
Abstract: | We present a unified framework for computing CVA sensitivities, hedging the CVA, and assessing CVA risk, using probabilistic machine learning meant as refined regression tools on simulated data, validatable by low-cost companion Monte Carlo procedures. Various notions of sensitivities are introduced and benchmarked numerically. We identify the sensitivities representing the best practical trade-offs in downstream tasks including CVA hedging and risk assessment. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.18583 |
By: | OECD |
Abstract: | The rapid acceleration in the pace of AI innovation in recent years and the advent of content generating capabilities (Generative AI or GenAI) have increased interest in AI innovation in finance, in part due to the user-friendliness and intuitive interface of GenAI tools. The use of AI in financial markets involving full end-to-end automation without any human intervention remains largely at development phase, but its wider deployment could amplify risks already present in financial markets and give rise to new challenges. This paper presents recent evolutions in AI in finance and potential risks and discusses whether policy makers may need to reinforce policies and strengthen protection against these risks. |
Date: | 2023–12–15 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:9-en |
By: | OECD |
Abstract: | This paper discusses recent developments in Artificial Intelligence (AI), particularly generative AI, which could positively impact many markets. While it is important that markets remain competitive to ensure their benefits are widely felt, the lifecycle for generative AI is still developing. This paper focuses on three stages: training foundation models, fine-tuning and deployment. It is too early to say how competition will develop in generative AI, but there appear to be some risks to competition that warrant attention, such as linkages across the generative AI value chain, including from existing markets, and potential barriers to accessing key inputs such as quality data and computing power. Several competition authorities and policy makers are taking actions to monitor market developments and may need to use the various advocacy and enforcement tools at their disposal. Furthermore, co-operation could play an important role in allowing authorities to efficiently maintain their knowledge and expertise. |
Date: | 2024–05–24 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:18-en |
By: | Mahdi Ebrahimi Kahou (Bowdoin College); Jesus Fernandez-Villaverde (University of Pennsylvania, NBER, and CEPR); Sebastian Gomez-Cardona (Morningstar); Jesse Perla (University of British Columbia); Jan Rosa (University of British Columbia) |
Abstract: | In the long run, we are all dead. Nonetheless, when studying the short-run dynamics of economic models, it is crucial to consider boundary conditions that govern long-run, forward-looking behavior, such as transversality conditions. We demonstrate that machine learning (ML) can automatically satisfy these conditions due to its inherent inductive bias toward finding flat solutions to functional equations. This characteristic enables ML algorithms to solve for transition dynamics, ensuring that long-run boundary conditions are approximately met. ML can even select the correct equilibria in cases of steady-state multiplicity. Additionally, the inductive bias provides a foundation for modeling forward-looking behavioral agents with self-consistent expectations. |
Keywords: | Machine learning, inductive bias, rational expectations, transitional dynamics, transversality, behavioral macroeconomics |
JEL: | C1 E1 |
Date: | 2024–08–12 |
URL: | https://d.repec.org/n?u=RePEc:pen:papers:24-019 |
By: | Natalia Roszyk; Robert Ślepaczuk |
Abstract: | Predicting the S&P 500 index volatility is crucial for investors and financial analysts as it helps assess market risk and make informed investment decisions. Volatility represents the level of uncertainty or risk related to the size of changes in a security's value, making it an essential indicator for financial planning. This study explores four methods to improve the accuracy of volatility forecasts for the S&P 500: the established GARCH model, known for capturing historical volatility patterns; an LSTM network that utilizes past volatility and log returns; a hybrid LSTM-GARCH model that combines the strengths of both approaches; and an advanced version of the hybrid model that also factors in the VIX index to gauge market sentiment. This analysis is based on a daily dataset that includes S&P 500 and VIX index data, covering the period from January 3, 2000, to December 21, 2023. Through rigorous testing and comparison, we found that machine learning approaches, particularly the hybrid LSTM models, significantly outperform the traditional GARCH model. Including the VIX index in the hybrid model further enhances its forecasting ability by incorporating real-time market sentiment. The results of this study offer valuable insights for achieving more accurate volatility predictions, enabling better risk management and strategic investment decisions in the volatile environment of the S&P 500. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.16780 |
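The hybrid's plumbing — GARCH(1,1) conditional volatility fed alongside log returns into an LSTM — can be sketched with the arch package and PyTorch. Synthetic returns stand in for the S&P 500 series, and the target, lookback, and training loop are simplified assumptions (the VIX extension is omitted).

```python
import numpy as np
import torch
import torch.nn as nn
from arch import arch_model

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal(2000)      # stand-in for S&P 500 log returns

# GARCH(1,1) conditional volatility (arch works best with percent returns).
garch = arch_model(100 * returns, vol="Garch", p=1, q=1).fit(disp="off")
cond_vol = garch.conditional_volatility / 100.0

# Sequences of (log return, GARCH vol) pairs; target = next-day |return|.
lookback = 21
feats = np.column_stack([returns, cond_vol]).astype(np.float32)
X = np.stack([feats[i:i + lookback] for i in range(len(feats) - lookback)])
y = np.abs(returns[lookback:]).astype(np.float32).reshape(-1, 1)

class LSTMGARCH(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])            # last hidden state -> vol proxy

model = LSTMGARCH()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X_t, y_t = torch.from_numpy(X), torch.from_numpy(y)
for epoch in range(20):                         # short demo training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_t), y_t)
    loss.backward()
    opt.step()
print("final MSE:", float(loss))
```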
By: | Wenbo Yan; Ying Tan |
Abstract: | Recently, the incorporation of both temporal features and correlations across time series has become an effective approach to time series prediction. Spatio-Temporal Graph Neural Networks (STGNNs) demonstrate good performance on many temporal-correlation forecasting problems. However, when applied to tasks lacking periodicity, such as stock data prediction, the effectiveness and robustness of STGNNs are found to be unsatisfactory. Moreover, STGNNs are limited by memory constraints and cannot handle problems with a large number of nodes. In this paper, we propose a novel approach called the Temporal-Correlation Graph Pre-trained Network (TCGPN) to address these limitations. TCGPN utilizes a temporal-correlation fusion encoder to obtain a mixed representation, together with a pre-training method built on carefully designed temporal and correlation pre-training tasks. The entire structure is independent of the number and order of nodes, so better results can be obtained through various data augmentations. Memory consumption during training can also be significantly reduced through multiple sampling. Experiments are conducted on the real stock market data sets CSI300 and CSI500, which exhibit minimal periodicity. We fine-tune a simple MLP in downstream tasks and achieve state-of-the-art results, validating the capability to capture more robust temporal-correlation patterns. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.18519 |
By: | Shi, Chengchun; Zhou, Yunzhe; Li, Lexin |
Abstract: | In this article, we propose a new hypothesis testing method for directed acyclic graph (DAG). While there is a rich class of DAG estimation methods, there is a relative paucity of DAG inference solutions. Moreover, the existing methods often impose some specific model structures such as linear models or additive models, and assume independent data observations. Our proposed test instead allows the associations among the random variables to be nonlinear and the data to be time-dependent. We build the test based on some highly flexible neural networks learners. We establish the asymptotic guarantees of the test, while allowing either the number of subjects or the number of time points for each subject to diverge to infinity. We demonstrate the efficacy of the test through simulations and a brain connectivity network analysis. Supplementary materials for this article are available online. |
Keywords: | brain connectivity networks; directed acyclic graph; hypothesis testing; generative adversarial networks; multilayer perceptron neural networks |
JEL: | C1 |
Date: | 2023–07–12 |
URL: | https://d.repec.org/n?u=RePEc:ehl:lserod:119446 |
By: | Tian Guo; Emmanuel Hauptmann |
Abstract: | Large language models (LLMs) and their fine-tuning techniques have demonstrated superior performance in various language understanding and generation tasks. This paper explores fine-tuning LLMs for stock return forecasting with financial newsflow. In quantitative investing, return forecasting is fundamental for subsequent tasks like stock picking, portfolio optimization, etc. We formulate the model to include text representation and forecasting modules. We propose to compare the encoder-only and decoder-only LLMs, considering they generate text representations in distinct ways. The impact of these different representations on forecasting performance remains an open question. Meanwhile, we compare two simple methods of integrating LLMs' token-level representations into the forecasting module. The experiments on real news and investment universes reveal that: (1) aggregated representations from LLMs' token-level embeddings generally produce return predictions that enhance the performance of long-only and long-short portfolios; (2) in the relatively large investment universe, the decoder LLMs-based prediction model leads to stronger portfolios, whereas in the small universes, there are no consistent winners. Among the three LLMs studied (DeBERTa, Mistral, Llama), Mistral performs more robustly across different universes; (3) return predictions derived from LLMs' text representations are a strong signal for portfolio construction, outperforming conventional sentiment scores. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.18103 |
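The pipeline of aggregated text representations feeding a forecasting module can be sketched with a small open encoder and a linear head. The encoder name, pooling, and toy data below are illustrative stand-ins for the paper's setup (DeBERTa/Mistral/Llama on real newsflow), assuming the sentence-transformers package is installed.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import Ridge

# Hypothetical mini-corpus: one news item per stock-week, with next-week return.
news = [
    "Company beats earnings expectations and raises full-year guidance.",
    "Regulator opens investigation into the firm's accounting practices.",
    "New product launch receives strong early reviews from analysts.",
    "CEO resigns unexpectedly amid board disagreement.",
]
next_week_ret = np.array([0.03, -0.04, 0.02, -0.05])

# The encoder pools token-level embeddings into one vector per text.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(news)                     # shape (n_texts, embedding_dim)

# Forecasting module: a linear head on top of the frozen text representation.
head = Ridge(alpha=1.0).fit(X, next_week_ret)

query = ["Firm announces large share buyback and dividend increase."]
print("predicted next-week return:", float(head.predict(encoder.encode(query))[0]))
```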
By: | OECD |
Abstract: | OECD countries are increasingly investing in better understanding the potential value of using Artificial Intelligence (AI) to improve public governance. The use of AI by the public sector can increase productivity, responsiveness of public services, and strengthen the accountability of governments. However, governments must also mitigate potential risks, building an enabling environment for trustworthy AI. This policy paper outlines the key trends and policy challenges in the development, use, and deployment of AI in and by the public sector. First, it discusses the potential benefits and specific risks associated with AI use in the public sector. Second, it looks at how AI in the public sector can be used to improve productivity, responsiveness, and accountability. Third, it provides an overview of the key policy issues and presents examples of how countries are addressing them across the OECD. |
Keywords: | AI in the public sector, government use of AI |
Date: | 2024–06–13 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:20-en |
By: | Kamesh Korangi; Christophe Mues; Cristián Bravo |
Abstract: | Apart from assessing individual asset performance, investors in financial markets also need to consider how a set of firms performs collectively as a portfolio. Whereas traditional Markowitz-based mean-variance portfolios are widespread, network-based optimisation techniques have built upon these developments. However, most studies do not contain firms at risk of default and remove any firms that drop off indices over a certain time. This is the first study to incorporate risky firms and use all the firms in portfolio optimisation. We propose and empirically test a novel method that leverages Graph Attention networks (GATs), a subclass of Graph Neural Networks (GNNs). GNNs, as deep learning-based models, can exploit network data to uncover nonlinear relationships. Their ability to handle high-dimensional features and accommodate customised layers for specific purposes makes them particularly appealing for large-scale problems such as mid- and small-cap portfolio optimization. This study utilises 30 years of data on mid-cap firms, creating graphs of firms using distance correlation and the Triangulated Maximally Filtered Graph approach. These graphs are the inputs to a GAT model that we train using custom layers which impose weight and allocation constraints and a loss function derived from the Sharpe ratio, thus directly maximising portfolio risk-adjusted returns. This new model is benchmarked against a network characteristic-based portfolio, a mean variance-based portfolio, and an equal-weighted portfolio. The results show that the portfolio produced by the GAT-based model outperforms all benchmarks and is consistently superior to other strategies over a long period while also being informative of market dynamics. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.15532 |
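A sketch of the custom pieces the abstract describes: a GAT layer over the firm graph, a softmax to enforce long-only weights summing to one, and a loss equal to the negative Sharpe ratio. It assumes PyTorch Geometric; the random graph and returns are stand-ins for the distance-correlation/TMFG graph of mid-cap firms.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

torch.manual_seed(0)
n_firms, n_feats, horizon = 50, 8, 250

# Hypothetical inputs: firm features, a random edge list, and daily returns.
x = torch.randn(n_firms, n_feats)
edge_index = torch.randint(0, n_firms, (2, 400))
returns = 0.01 * torch.randn(horizon, n_firms)

class GATPortfolio(nn.Module):
    def __init__(self):
        super().__init__()
        self.gat = GATConv(n_feats, 16, heads=2, concat=True)
        self.score = nn.Linear(32, 1)
    def forward(self, x, edge_index):
        h = torch.relu(self.gat(x, edge_index))
        # Softmax enforces long-only weights that sum to one.
        return torch.softmax(self.score(h).squeeze(-1), dim=0)

def neg_sharpe(weights, returns):
    port = returns @ weights                     # daily portfolio returns
    return -port.mean() / (port.std() + 1e-8)    # loss = negative Sharpe

model = GATPortfolio()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = neg_sharpe(model(x, edge_index), returns)
    loss.backward()
    opt.step()
print("in-sample Sharpe (per day):", -float(loss))
```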
By: | OECD |
Abstract: | In November 2023, OECD member countries approved a revised version of the Organisation’s definition of an AI system. This document contains proposed clarifications to the definition of an AI system contained in the 2019 OECD Recommendation on AI (the “AI Principles”) to support their continued relevance and technical soundness. The goal of the definition of an AI system in the OECD Recommendation is to articulate what is considered to be an AI system, for purposes of the recommendation. |
Keywords: | AI Principles, AI system, artificial intelligence, OECD Recommendation on Artificial Intelligence |
Date: | 2024–03–05 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:8-en |
By: | Zi Wang; Xingcheng Xu; Yanqing Yang; Xiaodong Zhu |
Abstract: | We propose a deep learning framework, DL-opt, designed to efficiently solve for optimal policies in quantifiable general equilibrium trade models. DL-opt integrates (i) a nested fixed point (NFXP) formulation of the optimization problem, (ii) automatic implicit differentiation to enhance gradient descent for solving unilateral optimal policies, and (iii) a best-response dynamics approach for finding Nash equilibria. Utilizing DL-opt, we solve for non-cooperative tariffs and industrial subsidies across 7 economies and 44 sectors, incorporating sectoral external economies of scale. Our quantitative analysis reveals significant sectoral heterogeneity in Nash policies: Nash industrial subsidies increase with scale elasticities, whereas Nash tariffs decrease with trade elasticities. Moreover, we show that global dual competition, involving both tariffs and industrial subsidies, results in lower tariffs and higher welfare outcomes compared to a global tariff war. These findings highlight the importance of considering sectoral heterogeneity and policy combinations in understanding global economic competition. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.17731 |
By: | Flavio Calvino; Chiara Criscuolo; Hélène Dernis; Lea Samek |
Abstract: | This report outlines a new methodology and provides a first exploratory analysis of technologies and applications that are at the core of recent advances in AI. Using AI-related keywords and technology classes, the study identifies AI-related patents protected in the United States in 2000-18. Among those, “core” AI patents are selected based on their counts of AI-related forward citations. The analysis finds that, compared to other (AI and non-AI) patents, they are more original and general, and tend to be broader in technological scope. Technologies related to general AI, robotics, computer/image vision and recognition/detection are consistently listed among core AI patents, with autonomous driving and deep learning having recently become more prominent. Finally, core AI patents tend to spur innovation across AI-related domains, although some technologies – likely AI applications, such as autonomous driving or robotics – appear to increasingly contribute to developments in their own field. |
Keywords: | Artificial Intelligence, Innovation, Patents |
JEL: | C81 O31 O33 O34 |
Date: | 2023–11–13 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:6-en |
By: | Foltas, Alexander |
Abstract: | I contribute to previous research on the efficient integration of forecasters' narratives into business cycle forecasts. Using a Bidirectional Encoder Representations from Transformers (BERT) model, I quantify 19,300 paragraphs from German business cycle reports (1998-2021) and classify the signs of institutes' consumption forecast errors. The correlation is strong for 12.8% of paragraphs with a predicted class probability of 85% or higher. Reviewing 150 of such high-probability paragraphs reveals recurring narratives. Underestimations of consumption growth often mention rising employment, increasing wages and transfer payments, low inflation, decreasing taxes, crisis-related fiscal support, and reduced relevance of marginal employment. Conversely, overestimated consumption forecasts present opposing narratives. Forecasters appear to particularly underestimate these factors when they disproportionately affect low-income households. |
Keywords: | Macroeconomic forecasting, Evaluating forecasts, Business cycles, Consumption forecasting, Natural language processing, Language Modeling, Machine learning, Judgemental forecasting |
JEL: | E21 C53 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:zbw:pp1859:300847 |
By: | Kerri Lu; Stephen Bates; Sherrie Wang |
Abstract: | Remote sensing map products are used to obtain estimates of environmental quantities, such as deforested area or the effect of conservation zones on deforestation. However, the quality of map products varies, and - because maps are outputs of complex machine learning algorithms that take in a variety of remotely sensed variables as inputs - errors are difficult to characterize. Without capturing the biases that may be present, naive calculations of population-level estimates from such maps are statistically invalid. In this paper, we compare several uncertainty quantification methods - stratification, Olofsson area estimation method, and prediction-powered inference - that combine a small amount of randomly sampled ground truth data with large-scale remote sensing map products to generate statistically valid estimates. Applying these methods across four remote sensing use cases in area and regression coefficient estimation, we find that they result in estimates that are more reliable than naively using the map product as if it were 100% accurate and have lower uncertainty than using only the ground truth and ignoring the map product. Prediction-powered inference uses ground truth data to correct for bias in the map product estimate and (unlike stratification) does not require us to choose a map product before sampling. This is the first work to (1) apply prediction-powered inference to remote sensing estimation tasks, and (2) perform uncertainty quantification on remote sensing regression coefficients without assumptions on the structure of map product errors. To improve the utility of machine learning-generated remote sensing maps for downstream applications, we recommend that map producers provide a holdout ground truth dataset to be used for calibration in uncertainty quantification alongside their maps. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.13659 |
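The prediction-powered mean estimator is compact enough to state directly: average the map product everywhere, then debias with the labeled ground-truth sample. A NumPy sketch for a scalar quantity such as deforested fraction, with synthetic data and a simplified variance formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: f_all holds map-product predictions; truth is ground truth.
N, n = 100_000, 500                     # many map pixels, few field-verified ones
truth = rng.binomial(1, 0.30, N).astype(float)            # latent deforestation
f_all = np.clip(truth + rng.normal(0.05, 0.3, N), 0, 1)   # biased, noisy map

labeled = rng.choice(N, n, replace=False)                 # random ground-truth sample
Y, f_lab = truth[labeled], f_all[labeled]

# Prediction-powered estimate: map-based mean + bias correction (rectifier).
theta_ppi = f_all.mean() + (Y - f_lab).mean()

# Normal-approximation confidence interval (simplified).
se = np.sqrt(f_all.var() / N + (Y - f_lab).var() / n)
print(f"naive map mean: {f_all.mean():.4f}")
print(f"PPI estimate:   {theta_ppi:.4f} +/- {1.96 * se:.4f}")
print(f"true mean:      {truth.mean():.4f}")
```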
By: | Annelore Verhagen |
Abstract: | While means-tested benefits such as minimum income benefits (MIB) and unemployment assistance (UA) are an essential safety net for low-income people and the unemployed, incomplete take-up is the rule rather than the exception. Building on desk research, open-ended surveys and semi-structured interviews, this paper investigates the opportunities and risks of using artificial intelligence (AI) for managing these means-tested benefits. This ranges from providing information to individuals, through determining eligibility based on pre-determined statutory criteria and identifying undue payments, to notifying individuals about their eligibility status. One of the key opportunities of using AI for these purposes is that this may improve the timeliness and take-up of MIB and UA. However, it may also lead to systematically biased eligibility assessments or increase inequalities, amongst others. Finally, the paper explores potential policy directions to help countries seize AI’s opportunities while addressing its risks, when using it for MIB or UA management. |
Keywords: | Artificial Intelligence, Means-Tested Benefits, Minimum Income Benefits, Social Protection, Unemployment Assistance |
JEL: | C8 H53 I3 J68 O3 |
Date: | 2024–06–24 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:21-en |
By: | Beckert, Jens; Arndt, H. Lukas R. |
Abstract: | Between 2009 and 2015 Greece underwent a profound sovereign debt crisis that led to a serious political crisis in Europe and the restructuring of Greek debt. We argue that the prevalence of negative narratives about the future contributed to the changes in spreads of Greek bonds during the crisis. We support our argument by presenting results from text mining a corpus of 9,435 articles from the Financial Times and the Wall Street Journal. Based on sentiments and a machine learning model predicting future reference, we identify newspaper articles which generate negative and uncertain outlooks for the future in the expert discourse. We provide evidence from time series regression analysis showing that these negative imagined futures have explanatory power in models estimating spread development of Greek vs. German sovereign bonds. We suggest that these findings provide good evidence for the relevance of "imagined futures" for investors' behavior, and give directions for an innovative contribution of sociology to understanding the microfoundations of financial crises. |
Keywords: | bond spreads, economic sociology, financial markets, Greek debt crisis, imagined futures, sentiment analysis, sovereign debt, valuation |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:zbw:mpifgd:300665 |
By: | Joshua S. Gans |
Abstract: | This paper examines and finds that the answer is likely to be no. The environment examined starts with users who contribute based on their motives to create a public good. Their own actions determine the quality of that public good but also embed a free-rider problem. When AI is trained on that data, it can generate similar contributions to the public good. It is shown that this increases the incentive of human users to provide contributions that are more costly to supply. Thus, the overall quality of contributions from both AI and humans rises compared to human-only contributions. In situations where platform providers want to generate more contributions using explicit incentives, the rate of return on such incentives is shown to be lower in this environment. |
JEL: | D70 H44 O31 |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:32686 |
By: | Liu, Xiaolu; Zhang, Yumei; Lan, Xiangmin; Si, Wei |
Abstract: | This study employs a national Computable General Equilibrium (CGE) model to simulate the impacts of reducing domestic agricultural trade costs on agricultural production, household income, food prices, macroeconomic conditions, as well as food consumption and dietary quality of urban and rural residents. We find that, in comparison to trade costs associated with agricultural imports and exports, the reduction of domestic agricultural trade costs is more conducive to expanding food production and cultivation areas, reducing food prices, and improving the dietary conditions of both urban and rural residents in China. Moreover, it stimulates the growth of agricultural, agro-processing, and agrifood system GDP. In terms of specific foods, the reduction in domestic agricultural product trade costs will lower the prices of various food items, decrease the consumption of rice and wheat, and increase the consumption of other types of food. This study provides theoretical and empirical foundations for achieving the dual objectives of revitalizing the national unified market and promoting the transformation of the agrifood system to enhance nutritional welfare within the framework of the new development paradigm, thereby offering valuable insights for informing governmental trade policy decisions. In the future, efforts should focus on intensifying the construction of infrastructure for perishable fresh agricultural products, reducing transportation distances, lowering transport costs, and establishing a "unified national market" in the agricultural sector to enhance the sustainability and resilience of China's agrifood system. |
Keywords: | Agricultural and Food Policy, Consumer/Household Economics, Food Consumption/Nutrition/Food Safety |
Date: | 2024–08–07 |
URL: | https://d.repec.org/n?u=RePEc:ags:cfcp15:344315 |
By: | Debuque-Gonzales, Margarita; Corpus, John Paul P. |
Abstract: | This study presents a small macroeconometric model with a fiscal sector, extending the model in Debuque-Gonzales and Corpus (2023). The model retains the original core blocks of domestic demand, international trade, employment, prices, and monetary sectors and adds a fiscal sector consisting of equations for government revenues, expenditures, and debt. Behavioral equations are estimated in error-correction form (using an autoregressive distributed lag or ARDL model) on quarterly data from 2002 to 2019. In-sample simulations demonstrate acceptable levels of predictive accuracy for most macroeconomic variables, even when producing dynamic forecasts. The model also projects plausible outcomes on the fiscal side in response to shocks in world oil prices, the exchange rate, and primary expenditure, showing the expanded model’s policy simulation capabilities. The next steps for developing the model include adding a detailed financial block, modeling the aggregate supply side, and incorporating expectations. |
Keywords: | macroeconometric model;Philippine economy;forecast;simulation;fiscal sector |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:phd:rpseri:rps_2024-05 |
By: | Guan-Yuan Wang (Vilnius University, Faculty of Mathematics and Informatics) |
Abstract: | Many game development companies use game data analysis to mine insights about users' behaviour and possible product growth. One of the most important analysis tasks for game development is user churn prediction. Effective churn prediction can help retain users in the game by initiating additional actions for their engagement. We focus on high-value user churn prediction, as it is of particular interest for any business to keep paying customers satisfied and engaged. We treat churn prediction as a classification problem and apply random undersampling to address the imbalanced class distribution between churners and active users. Based on our real-life data from a freemium casual mobile game, although a best-performing model was chosen as the final classification algorithm for the extracted data, we can say there is no general solution to the stated problem. Model performance highly depends on the churn definition, user segmentation, and feature engineering; it is therefore necessary to take a custom approach to churn analysis in each specific case. |
Keywords: | Churn prediction, mobile games, classification models, resampling methods, imbalanced class distribution, machine learning |
Date: | 2022–12–16 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04632443 |
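The undersampling step for the imbalanced churner/active split can be sketched with imbalanced-learn; the telemetry features, churn label, and classifier below are synthetic assumptions, not the paper's data.

```python
import numpy as np
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical telemetry features for high-value users; a small share churn.
n = 20_000
X = rng.standard_normal((n, 12))
y = (X[:, 0] - X[:, 1] + rng.normal(0, 1.5, n) > 2.8).astype(int)
print("churn rate:", y.mean())

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Undersample the majority (active) class on the training split only.
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```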
By: | Marcello Monga |
Abstract: | Automated market makers (AMMs) are a new type of trading venue that is revolutionising the way market participants interact. At present, the majority of AMMs are constant function market makers (CFMMs), where a deterministic trading function determines how markets are cleared. Within CFMMs, we focus on constant product market makers (CPMMs) that implement the concentrated liquidity (CL) feature. In this thesis we formalise and study the trading mechanism of CPMMs with CL, and we develop liquidity provision and liquidity taking strategies. Our models are motivated and tested with market data. We derive optimal strategies for liquidity takers (LTs) who trade orders of large size and execute statistical arbitrages. First, we consider an LT who trades in a CPMM with CL and uses the dynamics of prices in competing venues as market signals. We use Uniswap v3 data to study price, liquidity, and trading cost dynamics, and to motivate the model. Next, we consider an LT who trades a basket of crypto-currencies whose constituents co-move. We use market data to study lead-lag effects, spillover effects, and causality between trading venues. We derive optimal strategies for strategic liquidity providers (LPs) who provide liquidity in CPMMs with CL. First, we use stochastic control tools to derive a self-financing and closed-form optimal liquidity provision strategy where the width of the LP's liquidity range is determined by the profitability of the pool, the dynamics of the LP's position, and concentration risk. Next, we use a model-free approach to solve the problem of an LP who provides liquidity in multiple CPMMs with CL. We do not specify a model for the stochastic processes observed by LPs, and use a long short-term memory (LSTM) neural network to approximate the optimal liquidity provision strategy. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.16885 |
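The CPMM-with-CL mechanics the thesis formalises follow Uniswap v3's square-root-price algebra. A sketch of the reserves backing a single liquidity range and a swap that stays inside it, with illustrative numbers:

```python
import math

# A liquidity position of size L active on the price range [P_a, P_b].
L = 1_000.0
P_a, P_b = 1500.0, 2500.0
P = 2000.0                       # current pool price (token1 per token0)

def reserves(P, L, P_a, P_b):
    """Real token amounts backing the position at price P in [P_a, P_b]."""
    sp, sa, sb = math.sqrt(P), math.sqrt(P_a), math.sqrt(P_b)
    x = L * (1 / sp - 1 / sb)    # token0 held above the current price
    y = L * (sp - sa)            # token1 held below the current price
    return x, y

x, y = reserves(P, L, P_a, P_b)
print(f"reserves: x = {x:.4f} token0, y = {y:.4f} token1")

# Swap: a trader sells dy of token1 into the pool; within one range the
# sqrt-price moves linearly in dy: sqrt(P') = sqrt(P) + dy / L.
dy = 500.0
sp_new = math.sqrt(P) + dy / L
dx_out = L * (1 / math.sqrt(P) - 1 / sp_new)     # token0 paid out to the trader
print(f"new price: {sp_new**2:.2f}, token0 out: {dx_out:.6f}")
```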
By: | OECD |
Abstract: | As AI use grows, so do its benefits and risks. These risks can lead to actual harms ("AI incidents") or potential dangers ("AI hazards"). Clear definitions are essential for managing and preventing these risks. This report proposes definitions for AI incidents and related terms. These definitions aim to foster international interoperability while providing flexibility for jurisdictions to determine the scope of AI incidents and hazards they wish to address. |
Date: | 2024–05–06 |
URL: | https://d.repec.org/n?u=RePEc:oec:comaaa:16-en |
By: | Davillas, Apostolos (University of Macedonia); Jones, Andrew M. (University of York) |
Abstract: | We explore the role of epigenetic biological age in predicting subsequent health care utilisation. We use longitudinal data from the UK Understanding Society panel, capitalizing on the availability of baseline epigenetic biological age measures along with data on general practitioner (GP) consultations, outpatient (OP) visits, and hospital inpatient (IP) care collected 5-12 years from baseline. Using least absolute shrinkage and selection operator (LASSO) regression analyses and accounting for participants' pre-existing health conditions, baseline biological underlying health, and socio-economic predictors we find that biological age predicts future GP consultations and IP care, while chronological rather than biological age matters for future OP visits. Post-selection prediction analysis and Shapley-Shorrocks decompositions, comparing our preferred prediction models to models that replace biological age with chronological age, suggest that biological ageing has a stronger role in the models predicting future IP care as opposed to "gatekeeping" GP consultations. |
Keywords: | epigenetics, biological age, health care utilisation, red herring hypothesis, LASSO, supervised machine learning |
JEL: | C5 C81 I10 I18 |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp17159 |
By: | Arnone, Massimo; Leogrande, Angelo |
Abstract: | The competitiveness of financial intermediaries cannot be based exclusively on financial sustainability, i.e. the ability to create profit; it is also necessary to acquire a transversal vision of sustainability focused on the three ESG dimensions. The paper proposes a reflection on the main impacts of the integration of ESG factors on business decision-making and operational processes in the financial sector. In this context, we try to understand what role FinTech can play in favor of greater sustainability. Furthermore, through an empirical analysis, some determinants relating to social, environmental, and governance issues are identified which influence the volume of financial resources moved in the factoring market at a European level. Machine learning models are also proposed to estimate this volume. |
Keywords: | Sustainability, Factoring, ESG, FinTech, Machine Learning, Clusterization |
JEL: | G00 G21 G22 |
Date: | 2024–06–28 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:121342 |
By: | Devang Sinha; Siddhartha P. Chakrabarty |
Abstract: | In this paper, we examine the Sample Average Approximation (SAA) procedure within a framework where the Monte Carlo estimator of the expectation is biased. We also introduce Multilevel Monte Carlo (MLMC) in the SAA setup to enhance the computational efficiency of solving optimization problems. In this context, we conduct a thorough analysis, exploiting Cramér's large deviation theory, to establish uniform convergence, quantify the convergence rate, and determine the sample complexity for both standard Monte Carlo and MLMC paradigms. Additionally, we perform a root-mean-squared error analysis utilizing tools from empirical process theory to derive sample complexity without relying on the finite moment condition typically required for uniform convergence results. Finally, we validate our findings and demonstrate the advantages of the MLMC estimator through numerical examples, estimating Conditional Value-at-Risk (CVaR) in the Geometric Brownian Motion and nested expectation framework. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.18504 |
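The MLMC construction can be illustrated on a discretised GBM payoff: levels use geometrically refined time grids, and the estimator telescopes coarse-fine differences computed from coupled paths that share Brownian increments. A NumPy sketch for a European call under Euler discretisation (the paper's SAA and nested-CVaR setting is more involved; sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T, M = 100.0, 100.0, 0.02, 0.2, 1.0, 4

def euler_terminal(dW, dt):
    """Euler-Maruyama terminal value of GBM driven by increments dW."""
    S = np.full(dW.shape[0], S0)
    for k in range(dW.shape[1]):
        S = S * (1 + r * dt + sigma * dW[:, k])
    return S

def level_difference(level, n_paths):
    """Coupled fine/coarse discounted payoffs sharing Brownian increments."""
    nf = M ** level
    dtf = T / nf
    dW = np.sqrt(dtf) * rng.standard_normal((n_paths, nf))
    Pf = np.exp(-r * T) * np.maximum(euler_terminal(dW, dtf) - K, 0.0)
    if level == 0:
        return Pf
    dWc = dW.reshape(n_paths, nf // M, M).sum(axis=2)   # aggregated increments
    Pc = np.exp(-r * T) * np.maximum(euler_terminal(dWc, T / (nf // M)) - K, 0.0)
    return Pf - Pc

# Telescoping estimator: E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}];
# deeper levels need far fewer samples because the variance of the
# coarse-fine differences decays with refinement.
samples = [40_000, 10_000, 2_500, 650]
estimate = sum(level_difference(l, n).mean() for l, n in enumerate(samples))
print("MLMC estimate of the discounted call payoff:", round(float(estimate), 4))
```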
By: | Carlsson, John G |
Abstract: | The purpose of this project is to apply computational tools from topological data analysis (TDA) to study logistical systems such as freight networks. TDA is a relatively nascent research area that allows one to describe geometric properties of a data set, such as connectivity, existence of holes, or clustering, in a way that imposes minimal assumptions on parametric structures like coordinate systems or forms of probability distributions. In recent years, TDA has been successfully applied to many different scientific domains, such as aviation, path planning, and time series analysis. To the best of the author's knowledge, this project will be the first to apply TDA to the logistics domain. |
Keywords: | Engineering, Data analysis, Freight transportation, Logistics, Systems analysis, Topology |
Date: | 2024–07–01 |
URL: | https://d.repec.org/n?u=RePEc:cdl:itsdav:qt7m0347nd |