nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒10‒09
twenty papers chosen by



  1. Introducing the $\sigma$-Cell: Unifying GARCH, Stochastic Fluctuations and Evolving Mechanisms in RNN-based Volatility Forecasting By German Rodikov; Nino Antulov-Fantulin
  2. Applying Deep Learning to Calibrate Stochastic Volatility Models By Abir Sridi; Paul Bilokon
  3. Generating drawdown-realistic financial price paths using path signatures By Emiel Lemahieu; Kris Boudt; Maarten Wyns
  4. Media Moments and Corporate Connections: A Deep Learning Approach to Stock Movement Classification By Luke Sanborn; Matthew Sahagun
  5. Starting Slowly to Go Fast: Deep Dive in the Context of AI Pilot Projects By Pletcher, Scott Nicholas
  6. Novissi Togo - Harnessing Artificial Intelligence to Deliver Shock-Responsive Social Protection By Lawson, Cina; Koudeka, Morlé; Cardenas Martinez, Ana Lucia; Alberro Encinas, Luis Inaki; Karippacheril, Tina George
  7. A Causal Perspective on Loan Pricing: Investigating the Impacts of Selection Bias on Identifying Bid-Response Functions By Christopher Bockel-Rickermann; Sam Verboven; Tim Verdonck; Wouter Verbeke
  8. DeepVol: A Deep Transfer Learning Approach for Universal Asset Volatility Modeling By Chen Liu; Minh-Ngoc Tran; Chao Wang; Richard Gerlach; Robert Kohn
  9. The effect of green energy, global environmental indexes, and stock markets in predicting oil price crashes: Evidence from explainable machine learning By Sami Ben Jabeur; Rabeh Khalfaoui; Wissal Ben Arfi
  10. Conducting qualitative interviews with AI By Felix Chopra; Ingar Haaland
  11. Can Unbiased Predictive AI Amplify Bias? By Tanvir Ahmed Khan
  12. The future of artificial intelligence in the Arab world: The experience of some Arab countries By Bouzid Merouane
  13. Fourier Neural Network Approximation of Transition Densities in Finance By Rong Du; Duy-Minh Dang
  14. Building Sustainable Business Practices: Design Principles for Reusable Artificial Intelligence By Omerovic Smajlovic, Mirheta; Zöll, Anne; Rami, Alhasan
  15. TradingGPT: Multi-Agent System with Layered Memory and Distinct Characters for Enhanced Financial Trading Performance By Yang Li; Yangyang Yu; Haohang Li; Zhi Chen; Khaldoun Khashanah
  16. C++ Design Patterns for Low-latency Applications Including High-frequency Trading By Paul Bilokon; Burak Gunduz
  17. Executive AI Literacy: A Text-Mining Approach to Understand Existing and Demanded AI Skills of Leaders in Unicorn Firms By Pinski, Marc; Hofmann, Thomas; Benlian, Alexander
  18. Measuring AI Literacy – Empirical Evidence and Implications By Weber, Patrick; Baum, Lorenz; Pinski, Marc
  19. Using Microsoft Power BI for sales forecasting as a data mining technique By Laifa Assala; Hadouga Hassiba
  20. Data science, artificial intelligence and the third wave of digital era governance By Dunleavy, Patrick; Margetts, Helen

  1. By: German Rodikov; Nino Antulov-Fantulin
    Abstract: This paper introduces the $\sigma$-Cell, a novel Recurrent Neural Network (RNN) architecture for financial volatility modeling. Bridging traditional econometric approaches like GARCH with deep learning, the $\sigma$-Cell incorporates stochastic layers and time-varying parameters to capture dynamic volatility patterns. Our model serves as a generative network, approximating the conditional distribution of latent variables. We employ a log-likelihood-based loss function and a specialized activation function to enhance performance. Experimental results demonstrate superior forecasting accuracy compared to traditional GARCH and Stochastic Volatility models, taking a further step toward integrating domain knowledge with neural networks.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.01565&r=cmp
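    Code sketch: as a point of reference for the GARCH-style recurrence that the $\sigma$-Cell generalizes, the following Python sketch shows a recurrent variance update with a multiplicative stochastic layer and a Gaussian log-likelihood loss. The layer structure, parameter names and noise term are illustrative assumptions, not the authors' architecture.
      import numpy as np

      rng = np.random.default_rng(0)

      def sigma_cell_step(ret_prev, sigma2_prev, params, noise_scale=0.05):
          # One recurrent step: GARCH(1, 1)-style update; the multiplicative
          # log-normal noise is a stand-in for the paper's stochastic layer
          # and keeps the variance strictly positive.
          omega, alpha, beta = params
          z = rng.normal(scale=noise_scale)
          return (omega + alpha * ret_prev**2 + beta * sigma2_prev) * np.exp(z)

      def gaussian_nll(returns, params):
          # Negative log-likelihood of returns under the recurrent variance path.
          sigma2 = np.var(returns)  # initial condition
          nll = 0.0
          for t in range(1, len(returns)):
              sigma2 = sigma_cell_step(returns[t - 1], sigma2, params)
              nll += 0.5 * (np.log(2 * np.pi * sigma2) + returns[t]**2 / sigma2)
          return nll

      # Toy usage: simulated returns, crude grid search over (omega, alpha, beta).
      returns = rng.normal(scale=0.01, size=500)
      grid = [(w, a, b) for w in (1e-6, 1e-5) for a in (0.05, 0.1) for b in (0.8, 0.9)]
      print("best (omega, alpha, beta):", min(grid, key=lambda p: gaussian_nll(returns, p)))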
  2. By: Abir Sridi; Paul Bilokon
    Abstract: Stochastic volatility models, where the volatility is a stochastic process, can capture most of the essential stylized facts of implied volatility surfaces and give more realistic dynamics of the volatility smile or skew. However, they come with the significant issue that they take too long to calibrate. Alternative calibration methods based on Deep Learning (DL) techniques have been recently used to build fast and accurate solutions to the calibration problem. Huge and Savine developed a Differential Deep Learning (DDL) approach, where Machine Learning models are trained on samples of not only features and labels but also differentials of labels with respect to features. The present work aims to apply the DDL technique to price vanilla European options (i.e. the calibration instruments), more specifically puts, when the underlying asset follows a Heston model, and then to calibrate the model on the trained network. DDL allows for fast training and accurate pricing. The trained neural network dramatically reduces Heston calibration's computation time. In this work, we also introduce different regularisation techniques, and we apply them notably in the case of the DDL. We compare their performance in reducing overfitting and improving the generalisation error. The performance of DDL is also compared with that of classical DL (without differentiation) in the case of Feed-Forward Neural Networks. We show that the DDL outperforms the DL.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.07843&r=cmp
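    Code sketch: the core of differential deep learning, as summarized above, is a loss that penalizes errors in both the predicted price and its differentials with respect to the inputs. Below is a minimal PyTorch sketch of such a combined loss; the network, the synthetic data and the weighting factor are placeholder assumptions, not the paper's Heston setup.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      # Placeholder data: x = pricing inputs, y = option price labels,
      # dydx = differentials of labels w.r.t. inputs (e.g. from pathwise AAD).
      n, d = 256, 5
      x = torch.randn(n, d, requires_grad=True)
      y = torch.randn(n, 1)
      dydx = torch.randn(n, d)

      net = nn.Sequential(nn.Linear(d, 64), nn.Softplus(), nn.Linear(64, 1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)
      lam = 1.0  # weight on the differential term (illustrative choice)

      for step in range(200):
          opt.zero_grad()
          pred = net(x)
          # Differentials of the network output with respect to its inputs.
          grad_pred = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
          loss = nn.functional.mse_loss(pred, y) + lam * nn.functional.mse_loss(grad_pred, dydx)
          loss.backward()
          opt.step()
      print("final combined loss:", float(loss))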
  3. By: Emiel Lemahieu; Kris Boudt; Maarten Wyns
    Abstract: A novel generative machine learning approach for the simulation of sequences of financial price data with drawdowns quantifiably close to empirical data is introduced. Applications such as pricing drawdown insurance options or developing portfolio drawdown control strategies call for a host of drawdown-realistic paths. Historical scenarios may be insufficient to effectively train and backtest the strategy, while standard parametric Monte Carlo does not adequately preserve drawdowns. We advocate a non-parametric Monte Carlo approach combining a variational autoencoder generative model with a drawdown reconstruction loss function. To overcome issues of numerical complexity and non-differentiability, we approximate drawdown as a linear function of the moments of the path, known in the literature as path signatures. We prove the required regularity of the drawdown function and the consistency of the approximation. Furthermore, we obtain close numerical approximations using linear regression for fractional Brownian motion and empirical data. We argue that linear combinations of the moments of a path yield a mathematically non-trivial smoothing of the drawdown function, which gives one leeway to simulate drawdown-realistic price paths by including drawdown evaluation metrics in the learning objective. We conclude with numerical experiments on mixed equity, bond, real estate and commodity portfolios and obtain a host of drawdown-realistic paths.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.04507&r=cmp
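    Code sketch: to make the objects concrete, the numpy sketch below computes the maximum drawdown of a price path and fits a linear regression from a handful of low-order, signature-flavoured path statistics to that drawdown, in the spirit of the linear approximation described above. The feature choice and the random-walk toy data are assumptions, not the paper's construction.
      import numpy as np

      rng = np.random.default_rng(0)

      def max_drawdown(path):
          # Largest peak-to-trough decline of a price path.
          running_max = np.maximum.accumulate(path)
          return np.max(running_max - path)

      def path_features(path):
          # A few low-order, signature-flavoured path statistics (illustrative).
          dx = np.diff(path)
          lvl1 = dx.sum()                         # total increment (level-1 term)
          lvl2 = np.cumsum(dx)[:-1] @ dx[1:]      # an iterated-sum (level-2) term
          return np.array([1.0, lvl1, lvl2, np.abs(dx).sum(), dx.min()])

      # Toy dataset of random-walk price paths.
      paths = 100.0 + np.cumsum(rng.normal(scale=1.0, size=(2000, 250)), axis=1)
      X = np.array([path_features(p) for p in paths])
      y = np.array([max_drawdown(p) for p in paths])

      # Linear least squares: drawdown approximated as a linear function of the features.
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      pred = X @ coef
      print("in-sample R^2:", 1 - np.var(y - pred) / np.var(y))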
  4. By: Luke Sanborn; Matthew Sahagun
    Abstract: The financial industry poses great challenges in risk modeling and profit generation, both of which are intricately tied to the sophisticated prediction of stock movements. A stock forecaster must untangle the randomness and ever-changing behaviors of the stock market. Stock movements are influenced by a myriad of factors, including company history, performance, and economic-industry connections. However, there are other factors that aren't traditionally included, such as social media and correlations between stocks. Social platforms such as Reddit, Facebook, and X (Twitter) create opportunities for niche communities to share their sentiment on financial assets. By aggregating these opinions from social media in various mediums such as posts, interviews, and news updates, we propose a more holistic approach to include these "media moments" within stock market movement prediction. We introduce a method that combines financial data, social media, and correlated stock relationships via a graph neural network in a hierarchical temporal fashion. Through numerous trials on current S&P 500 index data, with results showing a 28% improvement in cumulative returns, we provide empirical evidence of our tool's applicability to investment decisions.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.06559&r=cmp
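    Code sketch: a tiny numpy illustration of the kind of graph aggregation such a model relies on, mixing each stock's features (recent returns plus a sentiment score) with those of correlated stocks through a normalized adjacency matrix before a read-out. The feature set, correlation threshold and single untrained layer are illustrative assumptions, not the authors' hierarchical temporal network.
      import numpy as np

      rng = np.random.default_rng(0)

      # Toy data: 20 stocks, 60 days of returns, plus a per-stock sentiment score.
      returns = rng.normal(scale=0.01, size=(20, 60))
      sentiment = rng.normal(size=(20, 1))

      # Build a graph from return correlations above a threshold.
      corr = np.corrcoef(returns)
      adj = (np.abs(corr) > 0.2).astype(float)
      np.fill_diagonal(adj, 1.0)
      adj_norm = adj / adj.sum(axis=1, keepdims=True)   # row-normalized adjacency

      # One message-passing step: mix each node's features with its neighbours'.
      features = np.hstack([returns[:, -5:], sentiment])  # last 5 returns + sentiment
      hidden = np.tanh(adj_norm @ features @ rng.normal(scale=0.1, size=(6, 8)))

      # A (random, untrained) linear read-out producing an up/down score per stock.
      scores = hidden @ rng.normal(scale=0.1, size=(8,))
      print("predicted-up stocks:", np.where(scores > 0)[0])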
  5. By: Pletcher, Scott Nicholas
    Abstract: For many organizations, artificial intelligence and its subsets of machine learning and deep learning hold great potential for improving efficiency, creating new capabilities and launching new business models. Accordingly, many organizations are attempting to harness these technologies through prototyping and pilot projects. However, many organizations struggle to move past the pilot phase, despite heavy investment in time, data infrastructure and training. In their book Strategic Doing, Morrison et al. (2019) provide a framework to help organizations brainstorm, organize and launch innovation using ten skills of agile leadership. A specific step in the described approach is to Start Slowly to Go Fast. This simple statement holds some deep implications, with many of the principles contained within that philosophy shown to improve innovation outcomes. This paper will examine some of those principles in the context of AI projects.
    Date: 2023–09–01
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:8jqzu&r=cmp
  6. By: Lawson, Cina; Koudeka, Morlé; Cardenas Martinez, Ana Lucia; Alberro Encinas, Luis Inaki; Karippacheril, Tina George
    Abstract: This case study, jointly authored by the Government of Togo and the World Bank, documents the innovative features of the NOVISSI program and posits some directions for the way forward. The study examines how Togo leveraged artificial intelligence and machine learning methods to prioritize the rural poor in the absence of a shock-responsive social protection delivery system and a dynamic social registry. It also discusses the main challenges of the model and the risks and implications of implementing such a program.
    Date: 2023–09–01
    URL: http://d.repec.org/n?u=RePEc:wbk:hdnspu:184975&r=cmp
  7. By: Christopher Bockel-Rickermann; Sam Verboven; Tim Verdonck; Wouter Verbeke
    Abstract: In lending, where prices are specific to both customers and products, having a well-functioning personalized pricing policy in place is essential to doing business effectively. Typically, such a policy must be derived from observational data, which introduces several challenges. While the problem of "endogeneity" is prominently studied in the established pricing literature, the problem of selection bias (or, more precisely, bid selection bias) is not. We take a step towards understanding the effects of selection bias by posing pricing as a problem of causal inference. Specifically, we consider the reaction of a customer to a price as a treatment effect. In our experiments, we simulate varying levels of selection bias on a semi-synthetic dataset on mortgage loan applications in Belgium. We investigate the potential of parametric and nonparametric methods for the identification of individual bid-response functions. Our results illustrate how conventional methods such as logistic regression and neural networks are adversely affected by selection bias. In contrast, we implement state-of-the-art methods from causal machine learning and show their capability to overcome selection bias in pricing data.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.03730&r=cmp
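    Code sketch: a small numpy simulation in the spirit of the abstract. Historical bids are set by a policy that offers lower prices to customers believed to be price-sensitive, so a logistic bid-response model fit naively on those observational data recovers a distorted price coefficient relative to a randomized benchmark. The data-generating process and all numbers are illustrative assumptions, not the paper's semi-synthetic Belgian mortgage data.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 50_000

      # Latent price sensitivity and a noisy signal of it that the lender observes.
      sensitivity = rng.normal(size=n)
      signal = sensitivity + rng.normal(scale=0.5, size=n)

      def acceptance(price, sens):
          # True bid-response: probability a customer accepts an offered price
          # (the true price coefficient in the logit is -1.5).
          return 1 / (1 + np.exp(-(2.0 - 1.5 * price + 0.5 * sens)))

      # Historical bids are not random: price-sensitive customers get lower offers.
      price_hist = np.clip(2.0 - 0.4 * signal + rng.normal(scale=0.1, size=n), 1.0, 3.0)
      accept_hist = rng.random(n) < acceptance(price_hist, sensitivity)

      # Counterfactual benchmark: randomly assigned bids.
      price_rand = rng.uniform(1.0, 3.0, size=n)
      accept_rand = rng.random(n) < acceptance(price_rand, sensitivity)

      def fit_logistic(x, y, iters=2000, lr=0.5):
          # Plain gradient-ascent logistic regression of y on [1, x].
          X = np.column_stack([np.ones_like(x), x])
          w = np.zeros(2)
          for _ in range(iters):
              p = 1 / (1 + np.exp(-X @ w))
              w += lr * X.T @ (y - p) / len(y)
          return w

      print("price coef, randomized bids:", round(fit_logistic(price_rand, accept_rand.astype(float))[1], 2))
      print("price coef, historical bids:", round(fit_logistic(price_hist, accept_hist.astype(float))[1], 2))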
  8. By: Chen Liu; Minh-Ngoc Tran; Chao Wang; Richard Gerlach; Robert Kohn
    Abstract: This paper introduces DeepVol, a promising new deep learning volatility model that outperforms traditional econometric models in terms of model generality. DeepVol leverages the power of transfer learning to effectively capture and model the volatility dynamics of all financial assets, including previously unseen ones, using a single universal model. This contrasts with the prevailing practice in the econometrics literature, which necessitates training separate models for individual datasets. The introduction of DeepVol opens up new avenues for volatility modeling and forecasting in the finance industry, potentially transforming the way volatility is understood and predicted.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.02072&r=cmp
  9. By: Sami Ben Jabeur (ESDES - ESDES, Lyon Business School - UCLy - UCLy - Université Catholique de Lyon (UCLy), UR CONFLUENCE : Sciences et Humanités (EA 1598) - UCLy - Université Catholique de Lyon (UCLy)); Rabeh Khalfaoui (ICN Business School); Wissal Ben Arfi (EDC - EDC Paris Business School)
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03797577&r=cmp
  10. By: Felix Chopra (University of Copenhagen, CEBI); Ingar Haaland (Norwegian School of Economics)
    Abstract: Qualitative interviews are one of the fundamental tools of empirical social science research and give individuals the opportunity to explain how they understand and interpret the world, allowing researchers to capture detailed and nuanced insights into complex phenomena. However, qualitative interviews are seldom used in economics and other disciplines inclined toward quantitative data analysis, likely due to concerns about limited scalability, high costs, and low generalizability. In this paper, we introduce an AI-assisted method to conduct semi-structured interviews. This approach retains the depth of traditional qualitative research while enabling large-scale, cost-effective data collection suitable for quantitative analysis. We demonstrate the feasibility of this approach through a large-scale data collection to understand the stock market participation puzzle. Our 395 interviews allow for quantitative analysis that we demonstrate yields richer and more robust conclusions compared to qualitative interviews with traditional sample sizes as well as to survey responses to a single open-ended question. We also demonstrate high interviewee satisfaction with the AI-assisted interviews. In fact, a majority of respondents indicate a strict preference for AI-assisted interviews over human-led interviews. Our novel AI-assisted approach bridges the divide between qualitative and quantitative data analysis and substantially lowers the barriers and costs of conducting qualitative interviews at scale.
    Keywords: Artificial Intelligence, Interviews, Large Language Models, Qualitative Methods, Stock Market Participation
    JEL: C83 C90 D14 D91 Z13
    Date: 2023–09–25
    URL: http://d.repec.org/n?u=RePEc:kud:kucebi:2306&r=cmp
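    Code sketch: a minimal Python outline of the interview loop such an approach implies. A chat model is prompted with an interview guide, asks one question at a time, and generates follow-up probes from the respondent's previous answers. The chat function is a placeholder for whatever LLM endpoint one uses; the prompt and stopping rule are illustrative assumptions, not the authors' protocol.
      def chat(messages):
          # Placeholder: plug in any chat-completion backend here (an API client
          # or a local model). It takes a list of {"role", "content"} messages
          # and returns the model's reply as a string.
          raise NotImplementedError("connect to your LLM of choice")

      INTERVIEW_GUIDE = (
          "You are conducting a semi-structured interview about why people do or "
          "do not invest in the stock market. Ask exactly one short, open-ended "
          "question at a time and probe interesting answers with follow-ups. "
          "After at most 8 questions, reply only with the word DONE."
      )

      def run_interview(get_answer, max_turns=8):
          # Run one AI-led interview; get_answer maps a question to the
          # respondent's reply (e.g. a web form or the console).
          messages = [{"role": "system", "content": INTERVIEW_GUIDE}]
          transcript = []
          for _ in range(max_turns):
              question = chat(messages)
              if question.strip() == "DONE":
                  break
              answer = get_answer(question)
              transcript.append((question, answer))
              messages += [{"role": "assistant", "content": question},
                           {"role": "user", "content": answer}]
          return transcript

      # Usage (with a real chat backend):
      # transcript = run_interview(lambda q: input(q + "\n> "))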
  11. By: Tanvir Ahmed Khan
    Abstract: Predictive AI is increasingly used to guide decisions on agents. I show that even a bias-neutral predictive AI can potentially amplify exogenous (human) bias in settings where the predictive AI represents a cost-adjusted precision gain over unbiased predictions, and the final judgments are made by biased human evaluators. In the absence of perfect and instantaneous belief updating, expected victims of bias become less likely to be saved by randomness under more precise predictions. An increase in aggregate discrimination is possible if this effect dominates. Not accounting for this mechanism may result in AI being unduly blamed for creating bias.
    Keywords: artificial intelligence, AI, algorithm, human-machine interactions, discrimination, bias, algorithmic bias, financial institutions
    JEL: O33 J15 G2
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1510&r=cmp
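    Code sketch: a toy numpy simulation of the mechanism described in the abstract. An unbiased predictor feeds a biased human evaluator; as the prediction noise shrinks, fewer members of the disadvantaged group are "saved by randomness", so the approval gap between otherwise identical groups widens. The threshold rule, group penalty and noise levels are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000

      # Two groups with identical true qualification distributions.
      quality = rng.normal(size=n)
      group_b = rng.random(n) < 0.5          # True = disadvantaged group
      cutoff, penalty = 0.5, 0.4             # evaluator's threshold and group-B penalty

      def approval_gap(noise_sd):
          # Approval-rate gap (group A minus group B) for a given prediction noise.
          pred = quality + rng.normal(scale=noise_sd, size=n)   # unbiased prediction
          score = pred - penalty * group_b                      # biased human adjustment
          approved = score > cutoff
          return approved[~group_b].mean() - approved[group_b].mean()

      for sd in (2.0, 1.0, 0.5, 0.1):
          print(f"prediction noise sd={sd}: approval gap = {approval_gap(sd):.3f}")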
  12. By: Bouzid Merouane (UMBB - Université M'Hamed Bougara Boumerdes)
    Abstract: For more than two decades, artificial intelligence has been driving major transformations across sectors: education, healthcare, public transportation, business, entertainment, warfare, and more. The field has therefore turned into a major arena of competition among the countries of the world. Arab countries face differing internal conditions, which are clearly reflected in how artificial intelligence features in their discourse, strategies, and institutions. Arab countries, especially in the Gulf, hastened to adopt the latest technologies, institutions, standards and plans to localize and use artificial intelligence, which reflected positively on their ranking in global indicators. Other Arab countries, by contrast, are still feeling their way, attempting to introduce artificial intelligence subjects into some curricula with the aim of laying the foundations for this industry.
    Keywords: artificial intelligence, research centers, strategy, innovation decisions
    JEL: J23 J24
    Date: 2023–06–04
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04183436&r=cmp
  13. By: Rong Du; Duy-Minh Dang
    Abstract: This paper introduces FourNet, a novel single-layer feed-forward neural network (FFNN) method designed to approximate transition densities for which closed-form expressions of their Fourier transforms, i.e. characteristic functions, are available. A unique feature of FourNet lies in its use of a Gaussian activation function, enabling exact Fourier and inverse Fourier transformations and drawing analogies with the Gaussian mixture model. We mathematically establish FourNet's capacity to approximate transition densities in the $L_2$-sense arbitrarily well with a finite number of neurons. The parameters of FourNet are learned by minimizing a loss function derived from the known characteristic function and the Fourier transform of the FFNN, complemented by a strategic sampling approach to enhance training. Through a rigorous and comprehensive error analysis, we derive informative bounds for the $L_2$ estimation error and the potential (pointwise) loss of nonnegativity in the estimated densities. FourNet's accuracy and versatility are demonstrated through a wide range of dynamics common in quantitative finance, including Lévy processes and the Heston stochastic volatility model, as well as those augmented with the self-exciting Queue-Hawkes jump process.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.03966&r=cmp
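    Code sketch: FourNet's central trick, as described above, is that a single hidden layer with Gaussian activations has a closed-form Fourier transform, so its parameters can be fit by matching a known characteristic function. The numpy sketch below does this for a toy target (a standard normal density), fixing the neuron centers and width and solving only for the output weights by least squares to keep the example linear; the paper instead learns all parameters by minimizing its loss, so this is an illustration of the idea, not the authors' procedure.
      import numpy as np

      # Toy target: characteristic function of a standard normal transition density.
      def target_cf(xi):
          return np.exp(-0.5 * xi**2)

      # Gaussian-activation network: f(x) = sum_k w_k * exp(-(x - m_k)^2 / (2 s^2)),
      # whose Fourier transform is available in closed form:
      #   FT_k(xi) = s * sqrt(2*pi) * exp(1j * xi * m_k - 0.5 * s^2 * xi^2)
      centers = np.linspace(-4.0, 4.0, 15)     # fixed neuron centers (assumption)
      width = 0.7                              # fixed neuron width (assumption)
      xi = np.linspace(-8.0, 8.0, 400)         # sampling grid in Fourier space

      A = width * np.sqrt(2 * np.pi) * np.exp(
          1j * np.outer(xi, centers) - 0.5 * (width * xi[:, None]) ** 2
      )

      # Fit the output weights by least squares against the known characteristic function.
      w, *_ = np.linalg.lstsq(A, target_cf(xi).astype(complex), rcond=None)

      # Evaluate the recovered density and compare it with the true normal density.
      x = np.linspace(-4.0, 4.0, 9)
      density = (np.real(w) * np.exp(-(x[:, None] - centers)**2 / (2 * width**2))).sum(axis=1)
      true_pdf = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
      print(np.round(density, 4))
      print(np.round(true_pdf, 4))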
  14. By: Omerovic Smajlovic, Mirheta; Zöll, Anne; Rami, Alhasan
    Abstract: The implementation of artificial intelligence (AI) requires significant resources, which creates a conflict in light of the growing importance of sustainable practices. To address this challenge, it is essential to consider reusability, but the unique nature of AI necessitates the development of specific design principles tailored to AI systems. Thus, we utilize design science research and leverage established design knowledge that encompasses principles for creating AI solutions that can be reused. Our approach incorporates Wenger's (1998) framework of Community of Practice and involves iterative refinement and evaluation of our design knowledge through design thinking workshops, focus group discussions, and expert interviews. Furthermore, we explore how the established design principles for developing reusable AI solutions can contribute to the promotion of socially and environmentally sustainable business practices. By discussing these topics, we aim to inspire further exploration and investigation in the field of Information Systems research.
    Date: 2023–09–20
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:138495&r=cmp
  15. By: Yang Li; Yangyang Yu; Haohang Li; Zhi Chen; Khaldoun Khashanah
    Abstract: Large Language Models (LLMs), prominently highlighted by the recent evolution in the Generative Pre-trained Transformers (GPT) series, have displayed significant prowess across various domains, such as aiding in healthcare diagnostics and curating analytical business reports. The efficacy of GPTs lies in their ability to decode human instructions, achieved through comprehensively processing historical inputs as an entirety within their memory system. Yet, the memory processing of GPTs does not precisely emulate the hierarchical nature of human memory. This can result in LLMs struggling to prioritize immediate and critical tasks efficiently. To bridge this gap, we introduce an innovative LLM multi-agent framework endowed with layered memories. We assert that this framework is well-suited for stock and fund trading, where the extraction of highly relevant insights from hierarchical financial data is imperative to inform trading decisions. Within this framework, one agent organizes memory into three distinct layers, each governed by a custom decay mechanism, aligning more closely with human cognitive processes. Agents can also engage in inter-agent debate. In financial trading contexts, LLMs serve as the decision core for trading agents, leveraging their layered memory system to integrate multi-source historical actions and market insights. This equips them to navigate financial changes, formulate strategies, and debate with peer agents about investment decisions. Another standout feature of our approach is to equip agents with individualized trading traits, enhancing memory diversity and decision robustness. These sophisticated designs boost the system's responsiveness to historical trades and real-time market signals, ensuring superior automated trading accuracy.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.03736&r=cmp
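    Code sketch: a minimal Python sketch of a three-layer memory with layer-specific decay, which is the structural idea highlighted in the abstract. The decay constants, the scoring rule and the retrieval logic are illustrative assumptions, not the paper's design.
      import math
      import time

      class LayeredMemory:
          # Three memory layers with distinct exponential decay rates (illustrative):
          # faster decay for short-term memories, slower for long-term ones.
          DECAY = {"short": 0.1, "mid": 0.01, "long": 0.001}   # per second (assumption)

          def __init__(self):
              self.items = []   # (layer, timestamp, importance, text)

          def add(self, layer, text, importance=1.0):
              self.items.append((layer, time.time(), importance, text))

          def retrieve(self, k=3):
              # Return the k memories with the highest decayed relevance score.
              now = time.time()
              def score(item):
                  layer, ts, importance, _ = item
                  return importance * math.exp(-self.DECAY[layer] * (now - ts))
              return [text for *_, text in sorted(self.items, key=score, reverse=True)[:k]]

      # Usage: one trading agent's memory feeding an LLM prompt.
      memory = LayeredMemory()
      memory.add("short", "Intraday: AAPL broke above its opening range.")
      memory.add("mid", "This week: Fed minutes read as hawkish.", importance=2.0)
      memory.add("long", "Strategy rule: cap single-name exposure at 5%.", importance=3.0)
      print(memory.retrieve(k=2))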
  16. By: Paul Bilokon; Burak Gunduz
    Abstract: This work aims to bridge the existing knowledge gap in the optimisation of latency-critical code, specifically focusing on high-frequency trading (HFT) systems. The research culminates in three main contributions: the creation of a Low-Latency Programming Repository, the optimisation of a market-neutral statistical arbitrage pairs trading strategy, and the implementation of the Disruptor pattern in C++. The repository serves as a practical guide and is enriched with rigorous statistical benchmarking, while the trading strategy optimisation led to substantial improvements in speed and profitability. The Disruptor pattern showcased significant performance enhancement over traditional queuing methods. Evaluation metrics include speed, cache utilisation, and statistical significance, among others. Techniques like Cache Warming and Constexpr showed the most significant gains in latency reduction. Future directions involve expanding the repository, testing the optimised trading algorithm in a live trading environment, and integrating the Disruptor pattern with the trading algorithm for comprehensive system benchmarking. The work is oriented towards academics and industry practitioners seeking to improve performance in latency-sensitive applications.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.04259&r=cmp
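    Code sketch: the Disruptor is essentially a pre-allocated ring buffer coordinated by monotonically increasing sequence counters rather than locks around a queue. The conceptual single-producer/single-consumer Python sketch below shows only that claim/publish/consume cycle; the paper's implementation is in C++ with cache-line padding, memory barriers and tuned wait strategies, none of which is reproduced here.
      import threading

      class RingBuffer:
          # Conceptual single-producer/single-consumer ring in the spirit of the
          # Disruptor pattern: pre-allocated slots plus sequence counters.
          def __init__(self, size=1024):
              assert size & (size - 1) == 0, "size must be a power of two"
              self.size = size
              self.slots = [None] * size
              self.published = -1     # last sequence published by the producer
              self.consumed = -1      # last sequence processed by the consumer

          def publish(self, item):
              seq = self.published + 1
              while seq - self.consumed > self.size:   # spin while the ring is full
                  pass
              self.slots[seq & (self.size - 1)] = item
              self.published = seq                     # release the slot to the reader

          def consume(self):
              seq = self.consumed + 1
              while seq > self.published:              # spin until data is ready
                  pass
              item = self.slots[seq & (self.size - 1)]
              self.consumed = seq
              return item

      # Usage: a producer thread publishes ticks, a consumer thread drains them.
      ring = RingBuffer(8)
      consumer = threading.Thread(target=lambda: [ring.consume() for _ in range(100)])
      consumer.start()
      for i in range(100):
          ring.publish(("tick", i))
      consumer.join()
      print("all ticks consumed")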
  17. By: Pinski, Marc; Hofmann, Thomas; Benlian, Alexander
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:139226&r=cmp
  18. By: Weber, Patrick; Baum, Lorenz; Pinski, Marc
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:139225&r=cmp
  19. By: Laifa Assala (Université de Constantine 2 Abdelhamid Mehri [Constantine]); Hadouga Hassiba (Université de Constantine 2 Abdelhamid Mehri [Constantine])
    Abstract: This study aims to forecast the sales of a commercial organization in order to assess the role that modern information technology plays in achieving accurate and rapid data processing, using the data mining capabilities of the Microsoft Power BI business intelligence program, through a theoretical and applied study. Estimated future sales information plays a significant role in the planning process, as well as in guiding and rationalizing the decisions of the sales manager to improve the performance of the organization.
    Keywords: sales forecasting, data mining, business intelligence, Microsoft Power BI
    JEL: C13 E2
    Date: 2023–06–04
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04183450&r=cmp
  20. By: Dunleavy, Patrick; Margetts, Helen
    Abstract: This article examines the model of digital era governance (DEG) in the light of the latest wave of data-driven technologies, such as data science methodologies and artificial intelligence (labelled here DSAI). It identifies four key top-level macro-themes through which digital changes in response to these developments may be investigated. First, the capability to store and analyse large quantities of digital data obviates the need for data 'compression' that characterises Weberian-model bureaucracies, and facilitates data de-compression in data-intensive, information regimes, where the capabilities of public agencies and civil society are both enhanced. Second, the increasing capability of robotic devices has expanded the range of tasks that machines extending or substituting for workers' capabilities can perform, with implications for a reshaping of state organisation. Third, DSAI technologies allow new ways of partitioning state functions in ways that can maximise organisational productivity, in an 'intelligent centre, devolved delivery' model within vertical policy sectors. Fourth, within each tier of government, DSAI technologies offer new possibilities for 'administrative holism' - the horizontal allocation of power and functions between organisations, through state integration, common capacity and needs-based joining-up of services. Together, these four themes comprise a third wave of DEG changes, suggesting important administrative choices to be made regarding information regimes, state organisation, functional allocation and outsourcing arrangements, as well as a long-term research agenda for public administration, requiring extensive and detailed analysis. This article has been accepted for publication in the Sage journal Public Policy and Administration, August 2023.
    Date: 2023–08–27
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:f3rza&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.