nep-big New Economics Papers
on Big Data
Issue of 2023‒07‒31
twenty-two papers chosen by
Tom Coupé
University of Canterbury

  1. Policy Agenda and Trajectory of the Xi Jinping Administration: Textual Evidence from 2012 to 2022 By LIM Jaehwan; ITO Asei; ZHANG Hongyong
  2. Machine Learning and Hamilton-Jacobi-Bellman Equation for Optimal Decumulation: a Comparison Study By Marc Chen; Mohammad Shirazi; Peter A. Forsyth; Yuying Li
  3. Comparing deep learning models for volatility prediction using multivariate data By Wenbo Ge; Pooia Lalbakhsh; Leigh Isai; Artem Lensky; Hanna Suominen
  4. Realistic Synthetic Financial Transactions for Anti-Money Laundering Models By Erik Altman; Béni Egressy; Jovan Blanuša; Kubilay Atasu
  5. A Massive Scale Semantic Similarity Dataset of Historical English By Emily Silcock; Melissa Dell
  6. Benchmarking Robustness of Deep Reinforcement Learning approaches to Online Portfolio Management By Marc Velay; Bich-Liên Doan; Arpad Rimmel; Fabrice Popineau; Fabrice Daniel
  7. Stock Price Prediction using Dynamic Neural Networks By David Noel
  8. Bringing Machine Learning Systems into Clinical Practice: A Design Science Approach to Explainable Machine Learning-Based Clinical Decision Support Systems By Pumplun, Luisa; Peters, Felix; Gawlitza, Joshua; Buxmann, Peter
  9. Quantum computer based Feature Selection in Machine Learning By Gerhard Hellstern; Vanessa Dehn; Martin Zaefferer
  10. Le processus de décision en environnement Big Data By Cécile Godé
  11. Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models By Boyu Zhang; Hongyang Yang; Xiao-Yang Liu
  12. Ideas Without Scale in French Artificial Intelligence Innovations By Johanna Deperi; Ludovic Dibiaggio; Mohamed Keita; Lionel Nesta
  13. Sea Change in Software Development: Economic and Productivity Analysis of the AI-Powered Developer Lifecycle By Thomas Dohmke; Marco Iansiti; Greg Richards
  14. A deep learning approach to estimation of the Phillips curve in South Africa By Gideon du Rand; Hylton Hollander; Dawie van Lill
  15. Unveiling the Potential of Sentiment: Can Large Language Models Predict Chinese Stock Price Movements? By Haohan Zhang; Fengrui Hua; Chengjin Xu; Jian Guo; Hao Kong; Ruiting Zuo
  16. Optimizing Credit Limit Adjustments Under Adversarial Goals Using Reinforcement Learning By Sherly Alfonso-Sánchez; Jesús Solano; Alejandro Correa-Bahnsen; Kristina P. Sendova; Cristián Bravo
  17. Doubly Robust Estimation of Direct and Indirect Quantile Treatment Effects with Machine Learning By Yu-Chin Hsu; Martin Huber; Yu-Min Yen
  18. Toward the Sustainable Development of Machine Learning Applications in Industry 4.0 By Ellenrieder, Sara; Jourdan, Nicolas; Biegel, Tobias; Bretones Cassoli, Beatriz; Metternich, Joachim; Buxmann, Peter
  19. Constructing Time-Series Momentum Portfolios with Deep Multi-Task Learning By Joel Ong; Dorien Herremans
  20. Whose inflation rates matter most? A DSGE model and machine learning approach to monetary policy in the Euro area By Stempel, Daniel; Zahner, Johannes
  21. Statistical electricity price forecasting: A structural approach By Raffaele Sgarlato
  22. Integrating Tick-level Data and Periodical Signal for High-frequency Market Making By Jiafa He; Cong Zheng; Can Yang

  1. By: LIM Jaehwan; ITO Asei; ZHANG Hongyong
    Abstract: How many agendas has Xi Jinping put forth and promoted since taking office in 2012, and what are the types of agendas? What is the relationship between the agendas? How much political attention has each agenda received, and how has the allocation of attention changed over time? Moreover, how do we know this? Despite the scholarly interest in policy development during the Xi era, few studies have systematically mapped the overall structure – the number, substance, and underlying relationships – of policy agendas pursued by the Xi administration. To address this research gap, we utilize a dataset of presidential statements, speeches, and reports from 2012 to 2022 and employ automated text analysis to identify major topics and the terms associated with each topic. Our analysis identifies about 25 distinct policy agendas across diverse policy domains, with remarkable temporal variation between agendas in the amount of leadership attention. We find a significant shift in both the substance and relative weight of policy agendas between the first and second terms of Xi’s tenure, indicating his adaptation and responses to changing domestic and foreign policy environments.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:eti:polidp:23008&r=big
  2. By: Marc Chen; Mohammad Shirazi; Peter A. Forsyth; Yuying Li
    Abstract: We propose a novel data-driven neural network (NN) optimization framework for solving an optimal stochastic control problem under stochastic constraints. Customized activation functions for the output layers of the NN are applied, which permits training via standard unconstrained optimization. The optimal solution yields a multi-period asset allocation and decumulation strategy for a holder of a defined contribution (DC) pension plan. The objective function of the optimal control problem is based on expected wealth withdrawn (EW) and expected shortfall (ES) that directly targets left-tail risk. The stochastic bound constraints enforce a guaranteed minimum withdrawal each year. We demonstrate that the data-driven approach is capable of learning a near-optimal solution by benchmarking it against the numerical results from a Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE) computational framework.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.10582&r=big
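The paper's central device is a customized output-layer activation that makes the stochastic bound constraints hold by construction, so the network can be trained with standard unconstrained optimization. A minimal sketch of that idea (the function name and bound values are hypothetical, not taken from the paper):

```python
import math

def bounded_withdrawal(raw_output, q_min, q_max):
    """Map an unconstrained network output to a withdrawal amount in
    [q_min, q_max] via a scaled sigmoid. Because the bound constraint
    holds by construction, training can use standard unconstrained
    optimization instead of explicit constraint handling."""
    squashed = 1.0 / (1.0 + math.exp(-raw_output))  # sigmoid in (0, 1)
    return q_min + (q_max - q_min) * squashed
```

In a decumulation setting, q_min would correspond to the guaranteed minimum withdrawal each year and q_max to a cap; both are placeholders here.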
  3. By: Wenbo Ge; Pooia Lalbakhsh; Leigh Isai; Artem Lensky; Hanna Suominen
    Abstract: This study compares several deep learning-based forecasters on the task of volatility prediction using multivariate data, proceeding from simpler and shallower to deeper and more complex models, and benchmarks them against the naive prediction and variants of classical GARCH models. Specifically, the volatility of five assets (i.e., S&P 500, NASDAQ 100, gold, silver, and oil) was predicted with the GARCH models, Multi-Layer Perceptrons, recurrent neural networks, Temporal Convolutional Networks, and the Temporal Fusion Transformer. In most cases, the Temporal Fusion Transformer, followed by variants of the Temporal Convolutional Network, outperformed classical approaches and shallow networks. These experiments were repeated, and the difference between competing models was shown to be statistically significant, encouraging their use in practice.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12446&r=big
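The classical GARCH baselines the deep models are compared against follow a simple variance recursion. A minimal GARCH(1,1) sketch (hypothetical function name; parameters are fixed here rather than fitted by maximum likelihood, as a real baseline would do):

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path under GARCH(1,1):
        sigma2[t] = omega + alpha * returns[t-1]**2 + beta * sigma2[t-1],
    initialised at the sample variance of the return series."""
    n = len(returns)
    mean = sum(returns) / n
    sigma2 = sum((r - mean) ** 2 for r in returns) / n  # unconditional start
    path = [sigma2]
    for r in returns[:-1]:
        sigma2 = omega + alpha * r * r + beta * sigma2
        path.append(sigma2)
    return path
```

The one-step-ahead forecast is simply the next iterate of the same recursion applied to the last observed return.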
  4. By: Erik Altman; Béni Egressy; Jovan Blanuša; Kubilay Atasu
    Abstract: With the widespread digitization of finance and the increasing popularity of cryptocurrencies, the sophistication of fraud schemes devised by cybercriminals is growing. Money laundering -- the movement of illicit funds to conceal their origins -- can cross bank and national boundaries, producing complex transaction patterns. The UN estimates that 2–5% of global GDP, or $0.8–$2.0 trillion, is laundered globally each year. Unfortunately, real data to train machine learning models to detect laundering is generally not available, and previous synthetic data generators have had significant shortcomings. A realistic, standardized, publicly-available benchmark is needed for comparing models and for the advancement of the area. To this end, this paper contributes a synthetic financial transaction dataset generator and a set of synthetically generated AML (Anti-Money Laundering) datasets. We have calibrated this agent-based generator to match real transactions as closely as possible and made the datasets public. We describe the generator in detail and demonstrate how the datasets generated can help compare different Graph Neural Networks in terms of their AML abilities. In a key way, using synthetic data in these comparisons can be even better than using real data: the ground truth labels are complete, whilst many laundering transactions in real data are never detected.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.16424&r=big
  5. By: Emily Silcock; Melissa Dell
    Abstract: A diversity of tasks use language models trained on semantic similarity data. While there are a variety of datasets that capture semantic similarity, they are either constructed from modern web data or are relatively small datasets created in the past decade by human annotators. This study utilizes a novel source, newly digitized articles from off-copyright, local U.S. newspapers, to assemble a massive-scale semantic similarity dataset spanning 70 years from 1920 to 1989 and containing nearly 400M positive semantic similarity pairs. Historically, around half of articles in U.S. local newspapers came from newswires like the Associated Press. While local papers reproduced articles from the newswire, they wrote their own headlines, which form abstractive summaries of the associated articles. We associate articles and their headlines by exploiting document layouts and language understanding. We then use deep neural methods to detect which articles are from the same underlying source, in the presence of substantial noise and abridgement. The headlines of reproduced articles form positive semantic similarity pairs. The resulting publicly available HEADLINES dataset is significantly larger than most existing semantic similarity datasets and covers a much longer span of time. It will facilitate the application of contrastively trained semantic similarity models to a variety of tasks, including the study of semantic change across space and time.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.17810&r=big
  6. By: Marc Velay; Bich-Liên Doan; Arpad Rimmel; Fabrice Popineau; Fabrice Daniel
    Abstract: Deep Reinforcement Learning approaches to Online Portfolio Selection have grown in popularity in recent years. The sensitive nature of training Reinforcement Learning agents implies a need for extensive efforts in market representation, behavior objectives, and training processes, which have often been lacking in previous works. We propose a training and evaluation process to assess the performance of classical DRL algorithms for portfolio management. We found that most Deep Reinforcement Learning algorithms were not robust, with strategies generalizing poorly and degrading quickly during backtesting.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.10950&r=big
  7. By: David Noel
    Abstract: This paper analyzes and implements a time series dynamic neural network to predict daily closing stock prices. Neural networks possess unsurpassed abilities in identifying underlying patterns in chaotic, non-linear, and seemingly random data, thus providing a mechanism to predict stock price movements much more precisely than many current techniques. Contemporary methods for stock analysis, including fundamental, technical, and regression techniques, are discussed and compared with the performance of neural networks. Also, the Efficient Market Hypothesis (EMH) is presented and contrasted with Chaos theory using neural networks. This paper refutes the EMH and supports Chaos theory. Finally, recommendations for using neural networks in stock price prediction are presented.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12969&r=big
  8. By: Pumplun, Luisa; Peters, Felix; Gawlitza, Joshua; Buxmann, Peter
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:138523&r=big
  9. By: Gerhard Hellstern; Vanessa Dehn; Martin Zaefferer
    Abstract: The problem of selecting an appropriate number of features in supervised learning problems is investigated in this paper. Starting with common methods in machine learning, we treat the feature selection task as a quadratic unconstrained binary optimization (QUBO) problem, which can be tackled with classical numerical methods as well as within a quantum computing framework. We compare the different results in small-sized problem setups. According to the results of our study, whether the QUBO method outperforms other feature selection methods depends on the data set. In an extension to a larger data set with 27 features, we compare the convergence behavior of the QUBO methods via quantum computing with classical stochastic optimization methods. Due to persisting error rates, the classical stochastic optimization methods are still superior.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.10591&r=big
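A QUBO formulation of feature selection minimizes x^T Q x over binary indicator vectors x, with diagonal entries rewarding feature relevance and off-diagonal entries penalizing redundancy between correlated feature pairs. At small problem sizes like those the paper studies, the objective can even be solved exactly by enumeration (illustrative Q matrix and function name, not the paper's code):

```python
from itertools import product

def solve_qubo_brute_force(Q):
    """Minimise x^T Q x over binary vectors x by full enumeration.
    Only feasible at small sizes; quantum annealers and classical
    stochastic solvers target the same objective at scale."""
    n = len(Q)
    best_x, best_val = None, float("inf")
    for bits in product((0, 1), repeat=n):
        val = sum(Q[i][j] * bits[i] * bits[j]
                  for i in range(n) for j in range(n))
        if val < best_val:
            best_x, best_val = bits, val
    return best_x, best_val

# Toy matrix: diagonal entries reward selecting a relevant feature,
# off-diagonal entries penalise picking a redundant (correlated) pair.
Q = [[-3.0, 2.0, 0.0],
     [2.0, -2.0, 0.0],
     [0.0, 0.0, -1.0]]
```

Here the optimum selects features 1 and 3 while dropping feature 2, whose relevance is outweighed by its redundancy penalty against feature 1.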
  10. By: Cécile Godé (CERGAM - Centre d'Études et de Recherche en Gestion d'Aix-Marseille - AMU - Aix Marseille Université - UTLN - Université de Toulon)
    Date: 2023–04–20
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04125364&r=big
  11. By: Boyu Zhang; Hongyang Yang; Xiao-Yang Liu
    Abstract: Sentiment analysis is a vital tool for uncovering insights from financial articles, news, and social media, shaping our understanding of market movements. Despite the impressive capabilities of large language models (LLMs) in financial natural language processing (NLP), they still struggle with accurately interpreting numerical values and grasping financial context, limiting their effectiveness in predicting financial sentiment. In this paper, we introduce a simple yet effective instruction tuning approach to address these issues. By transforming a small portion of supervised financial sentiment analysis data into instruction data and fine-tuning a general-purpose LLM with this method, we achieve remarkable advancements in financial sentiment analysis. In the experiment, our approach outperforms state-of-the-art supervised sentiment analysis models, as well as widely used LLMs like ChatGPT and LLaMAs, particularly in scenarios where numerical understanding and contextual comprehension are vital.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12659&r=big
  12. By: Johanna Deperi (University of Brescia); Ludovic Dibiaggio (SKEMA Business School); Mohamed Keita (SKEMA Business School); Lionel Nesta (GREDEG - Groupe de Recherche en Droit, Economie et Gestion - UNS - Université Nice Sophia Antipolis (1965 - 2019) - COMUE UCA - COMUE Université Côte d'Azur (2015-2019) - CNRS - Centre National de la Recherche Scientifique - UCA - Université Côte d'Azur, OFCE - Observatoire français des conjonctures économiques (Sciences Po) - Sciences Po - Sciences Po)
    Abstract: Artificial intelligence (AI) is viewed as the next technological revolution. The aim of this Policy Brief is to identify France's strengths and weaknesses in this great race for AI innovation. We characterise France's positioning relative to other key players and make the following observations: 1. Without being a world leader in innovation incorporating artificial intelligence, France is showing moderate but significant activity in this field. 2. France specialises in machine learning, unsupervised learning and probabilistic graphical models, and in developing solutions for the medical sciences, transport and security. 3. The AI value chain in France is poorly integrated, mainly due to a lack of integration in the downstream phases of the innovation chain. 4. The limited presence of French private players in the global AI arena contrasts with the extensive involvement of French public institutions. French public research organisations produce patents with great economic value. 5. Public players are the key actors in French networks for collaboration in patent development, but are not open to international and institutional diversity. In our opinion, France runs the risk of becoming a global AI laboratory located upstream in the AI innovation value chain. As such, it is likely to bear the sunk costs of AI invention, without enjoying the benefits of AI exploitation on a larger scale. In short, our fear is that French AI will be exported to other locations to prosper and grow.
    Date: 2023–06–26
    URL: http://d.repec.org/n?u=RePEc:hal:spmain:hal-04144817&r=big
  13. By: Thomas Dohmke (GitHub); Marco Iansiti (Harvard Business School and Keystone.AI); Greg Richards (Keystone.AI)
    Abstract: This study examines the impact of GitHub Copilot on a large sample of Copilot users (n=934,533). The analysis shows that users on average accept nearly 30% of the suggested code, leading to increased productivity. Furthermore, our research demonstrates that the acceptance rate rises over time and is particularly high among less experienced developers, providing them with substantial benefits. Additionally, our estimations indicate that the adoption of generative AI productivity tools could potentially contribute to a $1.5 trillion increase in global GDP by 2030. Moreover, our investigation sheds light on the diverse contributors in the generative AI landscape, including major technology companies, startups, academia, and individual developers. The findings suggest that the driving force behind generative AI software innovation lies within the open-source ecosystem, particularly in the United States. Remarkably, a majority of repositories on GitHub are led by individual developers. As more developers embrace these tools and acquire proficiency in the art of prompting with generative AI, it becomes evident that this novel approach to software development has forged a unique inextricable link between humans and artificial intelligence. This symbiotic relationship has the potential to shape the construction of the world's software for future generations.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.15033&r=big
  14. By: Gideon du Rand; Hylton Hollander; Dawie van Lill
    Abstract: In this study, we provide a comprehensive estimation of the contemporary Phillips curve relationship in the South African economy using a novel deep learning technique. Our approach incorporates multiple measures of economic slack/tightness and inflation expectations, contributing to the debate on the relevance of the Phillips curve in South Africa, where previous findings have been inconclusive. Our analysis reveals that long-run inflation expectations are the primary driver of inflation, with these expectations anchored around 5% historically but declining since the financial crisis.
    Keywords: Inflation, Output gap, Monetary policy
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:unu:wpaper:wp-2023-79&r=big
  15. By: Haohan Zhang; Fengrui Hua; Chengjin Xu; Jian Guo; Hao Kong; Ruiting Zuo
    Abstract: The rapid advancement of Large Language Models (LLMs) has led to extensive discourse regarding their potential to boost the returns of quantitative stock trading strategies. This discourse primarily revolves around harnessing the remarkable comprehension capabilities of LLMs to extract sentiment factors that facilitate informed and high-frequency investment portfolio adjustments. To support the successful application of these LLMs to the analysis of Chinese financial texts and subsequent trading strategy development within the Chinese stock market, we provide a rigorous and encompassing benchmark as well as a standardized back-testing framework aimed at objectively assessing the efficacy of various types of LLMs in the specialized domain of sentiment factor extraction from Chinese news text data. To illustrate how our benchmark works, we reference three distinctive models: 1) the generative LLM (ChatGPT), 2) the Chinese language-specific pre-trained LLM (Erlangshen-RoBERTa), and 3) the financial domain-specific fine-tuned LLM classifier (Chinese FinBERT). We apply them directly to the task of sentiment factor extraction from large volumes of Chinese news summary texts. We then build quantitative trading strategies, run back-tests under realistic trading scenarios based on the derived sentiment factors, and evaluate their performance with our benchmark. Through this comparative analysis, we raise the question of what constitutes the most important element for improving an LLM's performance in extracting sentiment factors. And by ensuring that the LLMs are evaluated on the same benchmark, following the same standardized experimental procedures designed with sufficient expertise in quantitative trading, we make a first stride toward answering it.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.14222&r=big
  16. By: Sherly Alfonso-Sánchez; Jesús Solano; Alejandro Correa-Bahnsen; Kristina P. Sendova; Cristián Bravo
    Abstract: Reinforcement learning has been explored for many problems, from video games with deterministic environments to portfolio and operations management in which scenarios are stochastic; however, there have been few attempts to test these methods in banking problems. In this study, we sought to find and automate an optimal credit card limit adjustment policy by employing reinforcement learning techniques. In particular, because of the historical data available, we considered two possible actions per customer, namely increasing or maintaining an individual's current credit limit. To find this policy, we first formulated this decision-making question as an optimization problem in which the expected profit was maximized; therefore, we balanced two adversarial goals: maximizing the portfolio's revenue and minimizing the portfolio's provisions. Second, given the particularities of our problem, we used an offline learning strategy to simulate the impact of the action based on historical data from a super-app (i.e., a mobile application that offers various services from goods deliveries to financial products) in Latin America to train our reinforcement learning agent. Our results show that a Double Q-learning agent with optimized hyperparameters can outperform other strategies and generate a non-trivial optimal policy reflecting the complex nature of this decision. Our research not only establishes a conceptual structure for applying a reinforcement learning framework to credit limit adjustment, presenting an objective technique for making these decisions based primarily on data-driven methods rather than relying only on expert-driven systems, but also provides insights into the effect of alternative data usage in determining these modifications.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.15585&r=big
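Double Q-learning maintains two value tables and decouples action selection from action evaluation, which reduces the maximization bias of standard Q-learning. A minimal tabular sketch with two actions per state, as in the paper's increase/maintain setting (function name, states, and parameter values are hypothetical):

```python
def double_q_update(q_select, q_eval, state, action, reward,
                    next_state, alpha=0.1, gamma=0.95):
    """One tabular double Q-learning step: the table being updated
    (q_select) chooses the greedy next action, while the other table
    (q_eval) scores it. The caller alternates the roles of the two
    tables at random between steps."""
    best_next = max(q_select[next_state], key=q_select[next_state].get)
    target = reward + gamma * q_eval[next_state][best_next]
    q_select[state][action] += alpha * (target - q_select[state][action])
    return q_select[state][action]
```

In an offline setting like the paper's, (state, action, reward, next_state) tuples would come from logged historical data rather than live interaction.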
  17. By: Yu-Chin Hsu; Martin Huber; Yu-Min Yen
    Abstract: We suggest double/debiased machine learning estimators of direct and indirect quantile treatment effects under a selection-on-observables assumption. This permits disentangling the causal effect of a binary treatment at a specific outcome rank into an indirect component that operates through an intermediate variable called mediator and an (unmediated) direct impact. The proposed method is based on the efficient score functions of the cumulative distribution functions of potential outcomes, which are robust to certain misspecifications of the nuisance parameters, i.e., the outcome, treatment, and mediator models. We estimate these nuisance parameters by machine learning and use cross-fitting to reduce overfitting bias in the estimation of direct and indirect quantile treatment effects. We establish uniform consistency and asymptotic normality of our effect estimators. We also propose a multiplier bootstrap for statistical inference and show the validity of the multiplier bootstrap. Finally, we investigate the finite sample performance of our method in a simulation study and apply it to empirical data from the National Job Corp Study to assess the direct and indirect earnings effects of training.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.01049&r=big
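The cross-fitting step works by fitting nuisance models on the complement of each fold and evaluating the score only on held-out observations, so overfitting in the machine-learned nuisance step does not bias the averaged estimate. A generic sketch (hypothetical function; the paper's actual score functions target quantile treatment effects and are more involved):

```python
def cross_fit(data, n_folds, fit_nuisance, score):
    """K-fold cross-fitting: each fold's scores use nuisance models
    fitted only on the other folds, so flexible (possibly ML-based)
    nuisance estimation does not contaminate the final average."""
    folds = [data[i::n_folds] for i in range(n_folds)]
    scores = []
    for i, fold in enumerate(folds):
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        nuisance = fit_nuisance(train)  # e.g. outcome/treatment/mediator models
        scores.extend(score(x, nuisance) for x in fold)
    return sum(scores) / len(scores)
```

Any learner can be plugged in as fit_nuisance; the doubly robust property then comes from the score function, not from the learner.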
  18. By: Ellenrieder, Sara; Jourdan, Nicolas; Biegel, Tobias; Bretones Cassoli, Beatriz; Metternich, Joachim; Buxmann, Peter
    Date: 2023–06–03
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:138521&r=big
  19. By: Joel Ong; Dorien Herremans
    Abstract: A diversified risk-adjusted time-series momentum (TSMOM) portfolio can deliver substantial abnormal returns and offer some degree of tail risk protection during extreme market events. The performance of existing TSMOM strategies, however, relies not only on the quality of the momentum signal but also on the efficacy of the volatility estimator. Yet existing studies have typically treated these two factors as independent. Inspired by recent progress in Multi-Task Learning (MTL), we present a new approach using MTL in a deep neural network architecture that jointly learns portfolio construction and various auxiliary tasks related to volatility, such as forecasting realized volatility as measured by different volatility estimators. Through backtesting from January 2000 to December 2020 on a diversified portfolio of continuous futures contracts, we demonstrate that even after accounting for transaction costs of up to 3 basis points, our approach outperforms existing TSMOM strategies. Moreover, experiments confirm that adding auxiliary tasks indeed boosts the portfolio's performance. These findings demonstrate that MTL can be a powerful tool in finance.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.13661&r=big
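The classic TSMOM signal underlying these strategies combines the sign of the trailing return with inverse-volatility position scaling. A minimal sketch (hypothetical function and parameter names; real implementations typically use separate lookback windows for the momentum signal and the volatility estimator):

```python
def tsmom_position(returns, target_vol, ann_factor=252):
    """Classic time-series momentum sizing: go long (short) when the
    trailing cumulative return is positive (negative), scaled by
    inverse realised volatility so each asset carries similar risk."""
    trailing = sum(returns)
    mean = trailing / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    realised_vol = (var * ann_factor) ** 0.5  # annualised
    sign = 1.0 if trailing > 0 else -1.0
    return sign * target_vol / realised_vol
```

A constant return series (zero realised volatility) would need guarding in practice; the paper's contribution is to learn the position and the volatility-related tasks jointly instead of composing them as above.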
  20. By: Stempel, Daniel; Zahner, Johannes
    Abstract: In the euro area, monetary policy is conducted by a single central bank for 20 member countries. However, countries are heterogeneous in their economic development, including their inflation rates. This paper combines a New Keynesian model and a neural network to assess whether the European Central Bank (ECB) conducted monetary policy between 2002 and 2022 according to the weighted average of the inflation rates within the European Monetary Union (EMU) or reacted more strongly to the inflation rate developments of certain EMU countries. The New Keynesian model first generates data which is used to train and evaluate several machine learning algorithms. The authors find that a neural network performs best out-of-sample. They use this algorithm to classify historical EMU data and to determine the exact weight on the inflation rate of EMU members in each quarter of the past two decades. Their findings suggest a disproportional emphasis of the ECB on the inflation rates of EMU members that exhibited high inflation rate volatility for the vast majority of the time frame considered (80%), with a median inflation weight of 67% on these countries. They show that these results stem from a tendency of the ECB to react more strongly to countries whose inflation rates exhibit greater deviations from their long-term trend.
    Keywords: New Keynesian Models, Monetary Policy, European Monetary Union, Neural Networks, Transfer Learning
    JEL: E58 C45 C53
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:imfswp:188&r=big
  21. By: Raffaele Sgarlato
    Abstract: The availability of historical data related to electricity day-ahead prices and to the underlying price formation process is limited. In addition, the electricity market in Europe is facing a rapid transformation, which limits the representativeness of older observations for predictive purposes. On the other hand, machine learning methods that have gained traction in the domain of electricity price forecasting typically require large amounts of data. This study analyses the effectiveness of encoding well-established domain knowledge to mitigate the need for large training datasets. The domain knowledge is incorporated by imposing a structure on the price forecasting problem; the resulting accuracy gains are quantified in an experiment. Compared to an "unstructured" purely statistical model, it is shown that introducing intermediate quantity forecasts of load, renewable infeed, and cross-border exchange, paired with the estimation of supply curves, can result in an NRMSE reduction of 0.1 during daytime hours. The statistically most significant improvements are achieved on the first day of the forecasting horizon when a purely statistical model is combined with structured models. Finally, results are evaluated and interpreted with regard to the dynamic market conditions observed in Europe during the experiment period (from 1 October 2022 to 30 April 2023), highlighting the adaptive nature of models that are trained on shorter timescales.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.14186&r=big
  22. By: Jiafa He; Cong Zheng; Can Yang
    Abstract: We focus on the problem of market making in high-frequency trading. Market making is a critical function in financial markets that involves providing liquidity by buying and selling assets. However, the increasing complexity of financial markets and the high volume of data generated by tick-level trading make it challenging to develop effective market making strategies. To address this challenge, we propose a deep reinforcement learning approach that fuses tick-level data with periodic prediction signals to develop a more accurate and robust market making strategy. Results for market making strategies based on different deep reinforcement learning algorithms, in both simulation scenarios and real-data experiments in cryptocurrency markets, show that the proposed framework outperforms existing methods in terms of profitability and risk management.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.17179&r=big

This nep-big issue is ©2023 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.