New Economics Papers on Computational Economics
Issue of 2023‒11‒13
Twenty papers chosen by
By: | Freek Holvoet; Katrien Antonio; Roel Henckaerts |
Abstract: | Insurers usually turn to generalized linear models (GLMs) for modelling claim frequency and severity data. Due to their success in other fields, machine learning techniques are gaining popularity within the actuarial toolbox. Our paper contributes to the literature on frequency-severity insurance pricing with machine learning via deep learning structures. We present a benchmark study on four insurance data sets with frequency and severity targets in the presence of multiple types of input features. We compare in detail the performance of four approaches: a GLM on binned input data, a gradient-boosted tree model (GBM), a feed-forward neural network (FFNN), and the combined actuarial neural network (CANN). Our CANNs combine a baseline prediction established with a GLM and a GBM, respectively, with a neural network correction. We explain the data preprocessing steps, with specific focus on the multiple types of input features typically present in tabular insurance data sets, such as postal codes and numeric and categorical covariates. Autoencoders are used to embed the categorical variables into the neural network, and we explore their potential advantages in a frequency-severity setting. Finally, we construct global surrogate models for the neural nets' frequency and severity models. These surrogates enable the translation of the essential insights captured by the FFNNs or CANNs to GLMs. The result is a technical tariff table that can easily be deployed in practice.
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.12671&r=cmp |
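A minimal sketch of the CANN idea described above, assuming a Poisson frequency target: the log of the GLM (or GBM) baseline prediction enters the network as a fixed offset and the neural net learns an additive correction on the log scale. Layer sizes and the synthetic data are illustrative, not taken from the paper.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    n_features = 10
    x_in = layers.Input(shape=(n_features,), name="covariates")
    log_baseline = layers.Input(shape=(1,), name="log_glm_prediction")  # log(mu_GLM)

    h = layers.Dense(32, activation="tanh")(x_in)
    h = layers.Dense(16, activation="tanh")(h)
    nn_adjust = layers.Dense(1, activation="linear")(h)   # correction on the log scale

    log_mu = layers.Add()([log_baseline, nn_adjust])      # log mu = log mu_GLM + NN(x)
    mu = layers.Lambda(lambda t: tf.exp(t))(log_mu)       # Poisson mean must be positive

    model = Model([x_in, log_baseline], mu)
    model.compile(optimizer="adam", loss="poisson")       # Poisson deviance loss

    X = np.random.normal(size=(1000, n_features))         # toy portfolio of 1000 policies
    offset = np.log(np.full((1000, 1), 0.1))              # stand-in for the GLM frequency
    y = np.random.poisson(0.1, size=(1000, 1)).astype(float)
    model.fit([X, offset], y, epochs=2, verbose=0)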
By: | Ajit Desai |
Abstract: | This article reviews selected papers that use machine learning for economics research and policy analysis. Our review highlights when machine learning is used in economics, which models are commonly preferred, and how those models are used.
Keywords: | Central bank research; Econometric and statistical methods; Economic models |
JEL: | A1 A10 B2 B23 C4 C45 C5 C55 |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocsan:23-16&r=cmp |
By: | Zoran Stoiljkovic |
Abstract: | This thesis provides an overview of recent advances in reinforcement learning for pricing and hedging financial instruments, with a primary focus on a detailed explanation of the Q-Learning Black-Scholes (QLBS) approach introduced by Halperin (2017). This reinforcement learning approach bridges the traditional Black and Scholes (1973) model with novel artificial intelligence algorithms, enabling option pricing and hedging in a completely model-free and data-driven way. The thesis also explores the algorithm's performance under different state variables and scenarios for a European put option. The results reveal that the model is an accurate estimator under different levels of volatility and hedging frequency. Moreover, the method exhibits robust performance across various levels of the option's moneyness. Lastly, the algorithm incorporates proportional transaction costs, which have diverse impacts on profit and loss depending on the statistical properties of the state variables.
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.04336&r=cmp |
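Since the thesis benchmarks the reinforcement learner against the classical model, a reference implementation of the Black and Scholes (1973) European put price — the frictionless, zero-risk-aversion limit that the QLBS estimator is designed to recover — may be useful; a minimal sketch:

    import numpy as np
    from scipy.stats import norm

    def bsm_put(S, K, T, r, sigma):
        """Black-Scholes (1973) European put price: the frictionless
        benchmark for the QLBS estimator."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

    print(bsm_put(S=100, K=100, T=1.0, r=0.03, sigma=0.2))  # approx. 6.46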
By: | Yujie Ding; Shuai Jia; Tianyi Ma; Bingcheng Mao; Xiuze Zhou; Liuliu Li; Dongming Han |
Abstract: | The remarkable achievements and rapid advancements of Large Language Models (LLMs) such as ChatGPT and GPT-4 have showcased their immense potential in quantitative investment. Traders can effectively leverage these LLMs to analyze financial news and predict stock returns accurately. However, integrating LLMs into existing quantitative models presents two primary challenges: the insufficient utilization of semantic information embedded within LLMs and the difficulties in aligning the latent information within LLMs with pre-existing quantitative stock features. We propose a novel framework consisting of two components to surmount these challenges. The first component, the Local-Global (LG) model, introduces three distinct strategies for modeling global information. These approaches are grounded respectively on stock features, the capabilities of LLMs, and a hybrid method combining the two paradigms. The second component, Self-Correlated Reinforcement Learning (SCRL), focuses on aligning the embeddings of financial news generated by LLMs with stock features within the same semantic space. By implementing our framework, we have demonstrated superior performance in Rank Information Coefficient and returns, particularly compared to models relying only on stock features in the China A-share market. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.05627&r=cmp |
By: | Zhengmeng Xu; Hai Lin |
Abstract: | We propose a time series forecasting method named Quantum Gramian Angular Field (QGAF). This approach merges the advantages of quantum computing technology with deep learning, aiming to enhance the precision of time series classification and forecasting. We successfully transformed stock return time series data into two-dimensional images suitable for Convolutional Neural Network (CNN) training by designing specific quantum circuits. Distinct from the classical Gramian Angular Field (GAF) approach, QGAF's uniqueness lies in eliminating the need for data normalization and inverse cosine calculations, simplifying the transformation process from time series data to two-dimensional images. To validate the effectiveness of this method, we conducted experiments on datasets from three major stock markets: the China A-share market, the Hong Kong stock market, and the US stock market. Experimental results revealed that compared to the classical GAF method, the QGAF approach significantly improved time series prediction accuracy, reducing prediction errors by an average of 25% for Mean Absolute Error (MAE) and 48% for Mean Squared Error (MSE). This research confirms the potential and promising prospects of integrating quantum computing with deep learning techniques in financial time series forecasting. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.07427&r=cmp |
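For context, a minimal sketch of the classical Gramian Angular (Summation) Field that QGAF replaces, showing exactly the rescaling and inverse-cosine steps the quantum circuits eliminate; the toy return series is illustrative:

    import numpy as np

    def gaf(x):
        """Classical GAF: rescale to [-1, 1], map to angles via arccos,
        and form the cos(phi_i + phi_j) image used as CNN input."""
        x = np.asarray(x, dtype=float)
        x_scaled = (2 * x - x.max() - x.min()) / (x.max() - x.min())
        x_scaled = np.clip(x_scaled, -1.0, 1.0)   # guard against rounding error
        phi = np.arccos(x_scaled)
        return np.cos(phi[:, None] + phi[None, :])

    returns = np.random.normal(0, 0.01, size=64)  # toy daily-return series
    image = gaf(returns)                          # (64, 64) image for a CNN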
By: | Igor Sadoune; Marcelin Joanis; Andrea Lodi |
Abstract: | We present a deep learning solution to address the challenges of simulating realistic synthetic first-price sealed-bid auction data. The complexities encountered in this type of auction data include high-cardinality discrete feature spaces and a multilevel structure arising from multiple bids associated with a single auction instance. Our methodology combines deep generative modeling (DGM) with an artificial learner that predicts the conditional bid distribution based on auction characteristics, contributing to advancements in simulation-based research. This approach lays the groundwork for creating realistic auction environments suitable for agent-based learning and modeling applications. Our contribution is twofold: we introduce a comprehensive methodology for simulating multilevel discrete auction data, and we underscore the potential of DGM as a powerful instrument for refining simulation techniques and fostering the development of economic models grounded in generative AI.
Keywords: | simulation crafting, discrete deep generative modeling, multilevel discrete data, auction data
Date: | 2023–10–02 |
URL: | http://d.repec.org/n?u=RePEc:cir:cirwor:2023s-23&r=cmp |
By: | Mariam Dundua (Financial and Supervisory Technology Development Department, National Bank of Georgia); Otar Gorgodze (Head of Financial and Supervisory Technologies Department, National Bank of Georgia) |
Abstract: | Recent advances in Artificial Intelligence (AI), in particular the development of reinforcement learning (RL) methods, are specifically suited for application to complex economic problems. We formulate a new approach that searches for optimal monetary policy rules using RL. Analysis of the AI-generated monetary policy rules indicates that optimal policy rules exhibit significant nonlinearities. This could explain why simple monetary rules based on traditional linear modeling toolkits lack the robustness needed for practical application. Analysis of the generated transition equations allows us to estimate the neutral policy rate, which comes out at 6.5 percent. We discuss how the method could be combined with state-of-the-art FinTech developments in digital finance, such as DeFi and CBDC, and the feasibility of a 'MonetaryTech' approach to monetary policy.
Keywords: | Artificial Intelligence; Reinforcement Learning; Monetary policy |
JEL: | C60 C61 C63 E17 C45 E52 |
Date: | 2022–11 |
URL: | http://d.repec.org/n?u=RePEc:aez:wpaper:02/2022&r=cmp |
By: | Sarit Maitra |
Abstract: | This study enhances option pricing by presenting a unique pricing model, fractional-order Black-Scholes-Merton (FOBSM), which builds on the Black-Scholes-Merton (BSM) model. The main goal is to improve the precision and realism of option prices, matching them more closely with the financial landscape. The approach integrates the strengths of both the BSM model and neural networks (NN) to handle complex diffusion dynamics. This study emphasizes the need to take fractional derivatives into account when analyzing financial market dynamics. Since FOBSM captures memory characteristics in sequential data, it is better at simulating real-world systems than integer-order models. Findings reveal that, under complex diffusion dynamics, this hybridization approach improves the accuracy of option price predictions. The key contribution of this work lies in the development of a novel option pricing model (FOBSM) that leverages fractional calculus and neural networks to enhance accuracy in capturing complex diffusion dynamics and memory effects in financial data.
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.04464&r=cmp |
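For reference, the classical BSM partial differential equation for the option value V(S, t), together with the time-fractional generalization common in the fractional option-pricing literature, in which the time derivative is replaced by a Caputo derivative of order α ∈ (0, 1]; the paper's exact formulation may differ:

    % Classical Black-Scholes-Merton PDE:
    \frac{\partial V}{\partial t}
      + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
      + r S \frac{\partial V}{\partial S} - r V = 0
    % A common time-fractional generalization (reduces to the classical
    % equation when \alpha = 1):
    \frac{\partial^\alpha V}{\partial t^\alpha}
      + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
      + r S \frac{\partial V}{\partial S} - r V = 0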
By: | Mikhaylov, Dmitry (The Russian Presidential Academy of National Economy and Public Administration) |
Abstract: | During the last decade, many academic papers have considered the possibility of predicting economic fluctuations and the volatility of macroeconomic variables using news data. The reason is the development of new machine learning techniques and the enhancement of existing methods. The scientific problem of our study is to investigate whether the predictive power of forecasts of macroeconomic variables can be improved with the use of news data in the context of Russia. We apply NLU algorithms and techniques for topic modeling. In particular, we implement LDA (Latent Dirichlet Allocation), since this approach has shown its effectiveness in published papers related to this framework. Frequency-based and sentiment-based news indexes are then constructed from the modeled topics. The final step of our research is a forecast analysis of a set of macroeconomic variables [CPI (π), Business Confidence Index (BCI), Consumer Confidence Index (CCI), Export (EX), Import (IM), Net Export (NX)], supplemented by the inclusion of the frequency and sentiment news indexes in order to evaluate the improvement in predictive power. We show that including frequency and sentiment news indexes based on the LDA approach in the forecast models can improve the quality of the predictions and increase the predictive power for some variables.
Keywords: | Macroeconomic Forecasting, Natural Language Processing, Machine Learning |
JEL: | E27 E37 |
Date: | 2023–04–14 |
URL: | http://d.repec.org/n?u=RePEc:rnp:wpaper:w20220250&r=cmp |
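A minimal sketch of the LDA step described above: fit topics on news texts, then count, per month, how many articles load mainly on each topic to form a frequency index. The corpus, dates, and topic count are illustrative.

    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    news = pd.DataFrame({
        "date": pd.to_datetime(["2022-01-03", "2022-01-05", "2022-02-01"]),
        "text": ["inflation accelerates on food prices",
                 "central bank raises key rate",
                 "exports fall as sanctions bite"],
    })

    vec = CountVectorizer(stop_words="english")
    dtm = vec.fit_transform(news["text"])
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

    news["topic"] = lda.transform(dtm).argmax(axis=1)    # dominant topic per article
    freq_index = (news.groupby([news["date"].dt.to_period("M"), "topic"])
                      .size().unstack(fill_value=0))     # articles per topic per month
    print(freq_index)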
By: | Bolivar, Osmar |
Abstract: | This study aims to forecast community-level poverty incidence in Bolivia for the year 2022 using machine learning algorithms and remote sensing, and to contrast these forecasts with 2012 data. Census data from 2012 were processed to build a community-level poverty indicator based on Unsatisfied Basic Needs (NBI), and 953 communities were selected as units of analysis. The generation of geospatial variables, the training and validation of machine learning algorithms, and the subsequent application of these models revealed a general decline in poverty, with approximately 50% of communities projected below the 42.5% threshold in 2022, indicating significant improvements since 2012. A differential reduction in poverty was observed, with a more pronounced impact in communities with lower initial poverty levels. Regional disparities emerged, with lower poverty rates in urban areas, underscoring the need to address regional inequalities. Moreover, the methodology proposed in this study proved effective in comparison with similar research, highlighting its usefulness for predicting poverty at the community level.
Keywords: | poverty; machine learning; remote sensing
JEL: | C8 I3 I32 O31 |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:118932&r=cmp |
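A minimal sketch of the supervised step described above: train on the 2012 census-based poverty rates with geospatial covariates, validate, then apply the fitted model to 2022 remote-sensing features. Feature names, the model choice, and the synthetic data are illustrative, not taken from the study.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X_2012 = rng.normal(size=(953, 8))     # e.g., night lights, NDVI, distances
    y_2012 = rng.uniform(0, 1, size=953)   # NBI poverty incidence per community

    X_tr, X_val, y_tr, y_val = train_test_split(X_2012, y_2012, random_state=0)
    model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    print("validation R2:", r2_score(y_val, model.predict(X_val)))

    X_2022 = rng.normal(size=(953, 8))     # same features measured in 2022
    poverty_2022 = model.predict(X_2022)
    print("share below 42.5% threshold:", (poverty_2022 < 0.425).mean())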
By: | Abdul Latif Baydoun (LUMEN - Lille University Management Lab - ULR 4999 - Université de Lille) |
Abstract: | AI drives the history-based evolution of societies, and we strive to understand AI's power to empower humanity and nature. Using an ethnographic approach, we try to learn how AI shapes consumers' ethical agency, considering the important issue of climate change at stake.
Date: | 2022–06–13 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-04214774&r=cmp |
By: | Cyril Bachelard (UNIL - Université de Lausanne = University of Lausanne); Apostolos Chalkis (GeomScale Org.); Vissarion Fisikopoulos (NKUA - National and Kapodistrian University of Athens, GeomScale Org.); Elias Tsigaridas (OURAGAN - OUtils de Résolution Algébriques pour la Géométrie et ses ApplicatioNs - Inria de Paris - Inria - Institut National de Recherche en Informatique et en Automatique, GeomScale Org.) |
Abstract: | We propose novel randomized geometric tools to detect low-volatility anomalies in stock markets, a principal problem in financial economics. Our modeling of the (detection) problem results in sampling and estimating the (relative) volume of geodesically non-convex and non-connected spherical patches that arise by intersecting a non-standard simplex with a sphere. To sample, we introduce two novel Markov Chain Monte Carlo (MCMC) algorithms that exploit the geometry of the problem and employ state-of-the-art continuous geometric random walks (such as Billiard walk and Hit-and-Run) adapted to spherical patches. To our knowledge, this is the first geometric formulation and MCMC-based analysis of the volatility puzzle in stock markets. We have implemented our algorithms in C++ (along with an R interface), and we illustrate the power of our approach through extensive experiments on real data. Our analyses provide accurate detection and new insights into the distribution of portfolios' performance characteristics. Moreover, we use our tools to show that classical methods for low-volatility anomaly detection in finance are poor proxies that could lead to misleading or inaccurate results.
Date: | 2023–04–25 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-04223511&r=cmp |
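A minimal sketch of the Hit-and-Run idea on the long-only portfolio simplex {w ≥ 0, Σw = 1}: pick a random direction inside the hyperplane, find the feasible chord, and jump to a uniform point on it. The paper's walks run on spherical patches, which is more involved; this illustrates only the chord-sampling mechanic.

    import numpy as np

    def hit_and_run(w, n_steps, rng):
        for _ in range(n_steps):
            d = rng.normal(size=w.size)
            d -= d.mean()                          # stay in the hyperplane sum(w) = 1
            d /= np.linalg.norm(d)
            with np.errstate(divide="ignore", invalid="ignore"):
                t = -w / d                         # where each w_i + t*d_i hits zero
            t_low = t[d > 0].max()                 # chord endpoints inside w >= 0
            t_high = t[d < 0].min()
            w = w + rng.uniform(t_low, t_high) * d
            w = np.clip(w, 0, None); w /= w.sum()  # guard against rounding drift
        return w

    rng = np.random.default_rng(0)
    print(hit_and_run(np.full(5, 0.2), n_steps=1000, rng=rng))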
By: | Michael E. Glinsky; Sharon Sievert |
Abstract: | This paper fundamentally reformulates economic and financial theory to include electronic currencies. The valuation of electronic currencies will be based on macroeconomic theory and the fundamental equation of monetary policy, not the microeconomic theory of discounted cash flows. Electronic currency will be viewed as a transactional equity associated with the tangible assets of a sub-economy, in contrast to stock, an equity associated mostly with the intangible assets of a sub-economy. The electronic currency management firm will be viewed as an entity responsible for the coordinated monetary (electronic currency supply and value stabilization) and fiscal (investment and operational) policies of a substantial (for liquidity of the electronic currency) sub-economy. The risk model used in the valuations and decision-making will not be the ubiquitous, yet inappropriate, exponential risk model that leads to discount rates, but multi-time-scale models that capture the true risk. Decision-making will be approached as true systems control, based on a system response function given by the multi-scale risk model and on system controllers that utilize Deep Reinforcement Learning, Generative Pretrained Transformers, and other methods of Artificial Intelligence (DRL/GPT/AI). Finally, the sub-economy will be viewed as a nonlinear complex physical system with both stable equilibria, associated with short-term exploitation, and unstable equilibria that must be stabilized with active nonlinear control based on the multi-scale system response functions and DRL/GPT/AI.
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.04986&r=cmp |
By: | Boyu Zhang; Hongyang Yang; Tianyu Zhou; Ali Babar; Xiao-Yang Liu |
Abstract: | Financial sentiment analysis is critical for valuation and investment decision-making. Traditional NLP models, however, are limited by their parameter size and the scope of their training datasets, which hampers their generalization capabilities and effectiveness in this field. Recently, Large Language Models (LLMs) pre-trained on extensive corpora have demonstrated superior performance across various NLP tasks due to their commendable zero-shot abilities. Yet, directly applying LLMs to financial sentiment analysis presents challenges: the discrepancy between the pre-training objective of LLMs and predicting the sentiment label can compromise their predictive performance. Furthermore, the succinct nature of financial news, often devoid of sufficient context, can significantly diminish the reliability of LLMs' sentiment analysis. To address these challenges, we introduce a retrieval-augmented LLM framework for financial sentiment analysis. This framework includes an instruction-tuned LLM module, which ensures the LLM behaves as a predictor of sentiment labels, and a retrieval-augmentation module, which retrieves additional context from reliable external sources. Benchmarked against traditional models and LLMs like ChatGPT and LLaMA, our approach achieves a 15% to 48% performance gain in accuracy and F1 score.
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.04027&r=cmp |
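A minimal sketch of the retrieval-augmentation idea: fetch related context for a terse headline and prepend it to an instruction-style prompt before querying a sentiment-predicting LLM. The retriever here is plain TF-IDF and all texts are invented; the paper's modules and sources differ.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "ACME Corp. reported record quarterly revenue, beating analyst estimates.",
        "ACME Corp. announced layoffs affecting 10% of its workforce.",
        "Regulators opened an inquiry into ACME Corp.'s accounting practices.",
    ]
    query = "ACME beats estimates"

    vec = TfidfVectorizer().fit(corpus + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
    context = corpus[sims.argmax()]              # top-1 retrieved document

    prompt = ("Classify the sentiment of the headline as positive, negative or neutral.\n"
              f"Context: {context}\n"
              f"Headline: {query}\n"
              "Sentiment:")
    print(prompt)    # pass to an instruction-tuned LLM of your choice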
By: | Kauhanen, Antti; Pajarinen, Mika; Rouvinen, Petri |
Abstract: | About one-fifth of Finnish employment is in occupations with at least half of their tasks exposed to generative artificial intelligence. A relatively large share of occupations has at least some exposure, but few occupations are highly exposed. Contrary to prior technological discontinuities, in the case of generative artificial intelligence the labor market elite is relatively more exposed. As far as the Finnish labor market is concerned, the effect of generative artificial intelligence is ambiguous – and quite possibly positive. Regardless, employees face a sizable change, which is best addressed head-on, i.e., by experimenting with and deploying generative artificial intelligence as soon as possible. Our observations are based on a replication of the US analysis by Eloundou et al. (2023) in the context of Finland. This brief kicks off a research project conducted by ETLA and supported by the TT foundation.
Keywords: | Generative artificial intelligence, Technological change, Employment, Labor market, Occupations |
JEL: | E24 J21 O33 |
Date: | 2023–10–25 |
URL: | http://d.repec.org/n?u=RePEc:rif:briefs:128&r=cmp |
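A minimal sketch of the headline calculation: the share of employment falling in occupations where at least half of tasks are exposed. Occupations, employment counts, and exposure shares are made up for illustration.

    import pandas as pd

    occ = pd.DataFrame({
        "occupation": ["software developer", "translator", "carpenter"],
        "employment": [80_000, 5_000, 40_000],
        "share_tasks_exposed": [0.6, 0.9, 0.1],  # from task-level exposure ratings
    })

    high = occ["share_tasks_exposed"] >= 0.5
    share = occ.loc[high, "employment"].sum() / occ["employment"].sum()
    print(f"employment share in highly exposed occupations: {share:.0%}")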
By: | Daniele Condorelli; Massimiliano Furlan |
Abstract: | We simulate the behaviour of independent reinforcement learning algorithms playing the Crawford and Sobel (1982) game of strategic information transmission. We show that a sender and a receiver training together converge to strategies close to the ex-ante optimal equilibrium of the game. Hence, communication takes place to the largest extent predicted by Nash equilibrium given the degree of conflict of interest between agents. The conclusion is shown to be robust to alternative specifications of the hyperparameters and of the game. We discuss implications for theories of equilibrium selection in information transmission games, for work on emergent communication among algorithms in computer science, and for the economics of collusion in markets populated by artificially intelligent agents.
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.07867&r=cmp |
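A minimal sketch of independent Q-learners in a discretized cheap-talk game with uniform types, quadratic payoffs, and sender bias b: the sender learns a type-to-message map and the receiver a message-to-action map, each from its own payoff alone. Grid sizes, learning rates, and the episode count are illustrative, not the paper's.

    import numpy as np

    K, M, A, b = 11, 11, 21, 0.1
    types = np.linspace(0, 1, K)
    actions = np.linspace(0, 1, A)
    Qs = np.zeros((K, M))       # sender: type -> message
    Qr = np.zeros((M, A))       # receiver: message -> action
    rng = np.random.default_rng(0)
    alpha, eps = 0.1, 0.1

    for _ in range(200_000):
        k = rng.integers(K)                                 # nature draws a type
        m = rng.integers(M) if rng.random() < eps else Qs[k].argmax()
        a = rng.integers(A) if rng.random() < eps else Qr[m].argmax()
        us = -(actions[a] - types[k] - b) ** 2              # sender payoff
        ur = -(actions[a] - types[k]) ** 2                  # receiver payoff
        Qs[k, m] += alpha * (us - Qs[k, m])                 # one-shot game: no bootstrapping
        Qr[m, a] += alpha * (ur - Qr[m, a])

    print("message per type:", Qs.argmax(axis=1))
    print("action per message:", Qr.argmax(axis=1))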
By: | Neng Wang; Hongyang Yang; Christina Dan Wang |
Abstract: | In the swiftly expanding domain of Natural Language Processing (NLP), the potential of GPT-based models for the financial sector is increasingly evident. However, the integration of these models with financial datasets presents challenges, notably in determining their adeptness and relevance. This paper introduces a distinctive approach anchored in the Instruction Tuning paradigm for open-source large language models, specifically adapted to financial contexts. Through this methodology, we capitalize on the interoperability of open-source models, ensuring a seamless and transparent integration. We begin by explaining the Instruction Tuning paradigm, highlighting its effectiveness for immediate integration. The paper presents a benchmarking scheme designed for end-to-end training and testing, employing a cost-effective progression. First, we assess basic competencies and fundamental tasks, such as Named Entity Recognition (NER) and sentiment analysis, to enhance specialization. Next, we delve into a comprehensive model, executing multi-task operations by amalgamating all instructional tunings to examine versatility. Finally, we explore the zero-shot capabilities by earmarking unseen tasks and incorporating novel datasets to understand adaptability in uncharted terrains. Such a paradigm fortifies the principles of openness and reproducibility, laying a robust foundation for future investigations in open-source financial large language models (FinLLMs).
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.04793&r=cmp |
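A minimal sketch of what instruction-tuning records for such financial tasks can look like, using the common instruction/input/output convention and a JSONL training file; the examples are invented, not taken from the paper's datasets.

    import json

    records = [
        {"instruction": "Extract all company names mentioned in the text.",
         "input": "Shares of ACME Corp. rose after its deal with Globex Inc.",
         "output": "ACME Corp.; Globex Inc."},
        {"instruction": "Classify the sentiment as positive, negative or neutral.",
         "input": "ACME Corp. cut its full-year guidance.",
         "output": "negative"},
    ]

    with open("finetune_data.jsonl", "w") as f:   # one JSON record per line
        for r in records:
            f.write(json.dumps(r) + "\n")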
By: | Tom Bryan; Jacob Carlson; Abhishek Arora; Melissa Dell |
Abstract: | Billions of public domain documents remain trapped in hard copy or lack an accurate digitization. Modern natural language processing methods cannot be used to index, retrieve, and summarize their texts; conduct computational textual analyses; or extract information for statistical analyses, and these texts cannot be incorporated into language model training. Given the diversity and sheer quantity of public domain texts, liberating them at scale requires optical character recognition (OCR) that is accurate, extremely cheap to deploy, and sample-efficient to customize to novel collections, languages, and character sets. Existing OCR engines, largely designed for small-scale commercial applications in high-resource languages, often fall short of these requirements. EffOCR (EfficientOCR), a novel open-source OCR package, meets both the computational and sample-efficiency requirements for liberating texts at scale by abandoning the sequence-to-sequence architecture typically used for OCR, which takes representations from a learned vision model as inputs to a learned language model. Instead, EffOCR models OCR as a character- or word-level image retrieval problem. EffOCR is cheap and sample-efficient to train, as the model only needs to learn characters' visual appearance, not how they are used in sequence to form language. Models in the EffOCR model zoo can be deployed off-the-shelf with only a few lines of code. Importantly, EffOCR also allows for easy, sample-efficient customization through a simple model training interface and minimal labeling requirements. We illustrate the utility of EffOCR by cheaply and accurately digitizing 20 million historical U.S. newspaper scans, evaluating zero-shot performance on randomly selected documents from the U.S. National Archives, and accurately digitizing Japanese documents for which all other OCR solutions failed.
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.10050&r=cmp |
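A minimal sketch of the "OCR as image retrieval" idea: embed a character crop and match it against a gallery of reference glyph embeddings by cosine similarity. The encoder below is an untrained stand-in and the crops are random tensors; EffOCR itself ships trained models behind its own interface.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    encoder = nn.Sequential(                    # toy vision encoder for 32x32 crops
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, 64),
    )

    glyphs = "abcdefghij"
    gallery = torch.randn(len(glyphs), 1, 32, 32)  # one reference crop per glyph
    query = torch.randn(1, 1, 32, 32)              # crop cut from a scanned page

    with torch.no_grad():
        g = F.normalize(encoder(gallery), dim=1)   # one embedding per glyph
        q = F.normalize(encoder(query), dim=1)
        best = (q @ g.T).argmax(dim=1).item()      # cosine nearest neighbour

    print("recognized character:", glyphs[best])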
By: | Alfarisi, Omar |
Abstract: | Artificial Earth Economics General Intelligence (AEEGI) is a natural progression of Artificial General Intelligence (AGI) that caters to the economics of caring for the Earth. It is crucial to optimize the entire spectrum of generating, transporting, storing, and consuming Earth's resources for the betterment of humanity, the environment, industry, and the scientific community. Most research efforts focus on a specific sector, leading to a disconnect between disciplines and hindering effective problem-solving, while ignoring the economic impact on the whole ecosystem, including people. AEEGI proposes creating a new coin specific to Earth economics and integrating the positive initiatives and outcomes from each sector to create an optimal solution that simultaneously addresses multiple objectives and generates the coins (cash) required to achieve them; we call this coin the Hubnomics Earth Wise Coin (EarthYzcoin). Generating EarthYzcoin and integrating it across all industry sectors and their value chains is more complex than solving each sector's or value chain's challenges separately, but it is necessary for a sustainable and efficient industry-environment-economic system, because under the current economic system industry would have to invest more to become more environmentally friendly. With EarthYzcoin, the required investment would be generated by the coin itself and given to industries and universities to research and develop cleaner processes and technologies. By the definition of Yzcoin, the funds it generates are not taken from anyone; they are generated as a value equivalent to the deliverables of the environmental projects. EarthYzcoin is thus an upfront generation of value provided to institutions that work on researching and developing cleaner solutions for industry and life. The role of AEEGI is to ensure continuous learning from every new outcome to keep optimizing.
Date: | 2023–10–09 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:hn49b&r=cmp |
By: | Saeid Vaghefi (University of Zurich); Qian Wang (University of Zurich; Inovest Partners AG); Veruska Muccione (University of Zurich; University of Geneva); Jingwei Ni (ETH Zurich); Mathias Kraus (University of Erlangen); Julia Bingler (University of Oxford); Tobias Schimanski (University of Zurich); Chiara Colesanti Senni (ETH Zurich; University of Zurich); Nicolas Webersinke (Friedrich-Alexander-Universität Erlangen-Nürnberg); Christian Huggel (University of Zurich); Markus Leippold (University of Zurich; Swiss Finance Institute) |
Abstract: | Large Language Models (LLMs) have made significant progress in recent years, achieving remarkable results in question-answering (QA) tasks. However, they still face two major challenges: hallucination and outdated information after the training phase. These challenges take center stage in critical domains like climate change, where obtaining accurate and up-to-date information from reliable sources in a limited time is essential and difficult. To overcome these barriers, one potential solution is to provide LLMs with access to external, scientifically accurate, and robust sources (long-term memory) to continuously update their knowledge and prevent the propagation of inaccurate, incorrect, or outdated information. In this study, we enhanced GPT-4 by integrating information from the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR6), the most comprehensive, up-to-date, and reliable source in this domain. We present our conversational AI prototype, available at www.chatclimate.ai, and demonstrate its ability to answer challenging questions accurately. The answers and their sources were evaluated by our team of IPCC authors, who used their expert knowledge to score the accuracy of the answers from 1 (very low) to 5 (very high). The evaluation showed that the hybrid chatClimate provided more accurate answers, highlighting the effectiveness of our solution. This approach can easily be scaled for chatbots in specific domains, enabling the delivery of reliable and accurate information.
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2388&r=cmp |