nep-cmp New Economics Papers
on Computational Economics
Issue of 2024‒05‒13
twenty-two papers chosen by



  1. StockGPT: A GenAI Model for Stock Prediction and Trading By Dat Mai
  2. Algorithmic Collusion by Large Language Models By Sara Fish; Yannai A. Gonczarowski; Ran I. Shorrer
  3. From Predictive Algorithms to Automatic Generation of Anomalies By Sendhil Mullainathan; Ashesh Rambachan
  4. RiskLabs: Predicting Financial Risk Using Large Language Model Based on Multi-Sources Data By Yupeng Cao; Zhi Chen; Qingyun Pei; Fabrizio Dimino; Lorenzo Ausiello; Prashant Kumar; K. P. Subbalakshmi; Papa Momar Ndiaye
  5. ChatGPT Can Predict the Future when it Tells Stories Set in the Future About the Past By Van Pham; Scott Cunningham
  6. Sentiment trading with large language models By Kirtac, Kemal; Germano, Guido
  7. Artificial Intelligence-based Analysis of Change in Public Finance between US and International Markets By Kapil Panda
  8. DeepTraderX: Challenging Conventional Trading Strategies with Deep Learning in Multi-Threaded Market Simulations By Armand Mihai Cismaru
  9. Pre-publication revisions of bank financial statements: a novel way to monitor banks? By Andre Guettler; Mahvish Naeem; Lars Norden; Bernardus F Nazar Van Doornik
  10. Early warning systems for financial markets of emerging economies By Artem Kraevskiy; Artem Prokhorov; Evgeniy Sokolovskiy
  11. Machine learning-based similarity measure to forecast M&A from patent data By Giambattista Albora; Matteo Straccamore; Andrea Zaccaria
  12. The impact of prudential regulations on the UK housing market and economy: insights from an agent-based model By Bardoscia, Marco; Carro, Adrian; Hinterschweiger, Marc; Napoletano, Mauro; Popoyan, Lilit; Roventini, Andrea; Uluc, Arzu
  13. The impact of artificial intelligence on output and inflation By Iñaki Aldasoro; Sebastian Doerr; Leonardo Gambacorta; Daniel Rees
  14. QFNN-FFD: Quantum Federated Neural Network for Financial Fraud Detection By Nouhaila Innan; Alberto Marchisio; Muhammad Shafique; Mohamed Bennai
  15. How good are LLMs in risk profiling? By Thorsten Hens; Trine Nordlie
  16. Developing a Holistic AI Literacy Assessment Matrix - Bridging Generic, Domain-Specific, and Ethical Competencies By Knoth, Nils; Decker, Marie; Laupichler, Matthias Carl; Pinski, Marc; Buchholtz, Nils; Bata, Katharina; Schultz, Ben
  17. Fast TTC Computation By Irene Aldridge
  18. Strategic Interactions between Large Language Models-based Agents in Beauty Contests By Siting Lu
  19. For What It's Worth: Measuring Land Value in the Era of Big Data and Machine Learning By Scott Wentland; Gary Cornwall; Jeremy G. Moulton
  20. A backward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations By Lorenc Kapllani; Long Teng
  21. Neural Network Modeling for Forecasting Tourism Demand in Stopića Cave: A Serbian Cave Tourism Study By Buda Bajić; Srđan Milićević; Aleksandar Antić; Slobodan Marković; Nemanja Tomić
  22. Algorithms, inequalities, and the 'humans-in-the-loop' By Paola Tubaro

  1. By: Dat Mai
    Abstract: This paper introduces StockGPT, an autoregressive "number" model pretrained directly on the history of daily U.S. stock returns. Treating each return series as a sequence of tokens, the model excels at understanding and predicting the highly intricate dynamics of stock returns. Instead of relying on handcrafted trading patterns based on historical stock prices, StockGPT automatically learns the hidden representations predictive of future returns via its attention mechanism. On a held-out test sample from 2001 to 2023, a daily rebalanced long-short portfolio formed from StockGPT predictions earns an annual return of 119% with a Sharpe ratio of 6.5. The StockGPT-based portfolio completely explains away momentum and long-/short-term reversals, eliminating the need for manually crafted price-based strategies, and also encompasses most leading stock market factors. This highlights the immense promise of generative AI in surpassing humans at complex financial investment decisions and illustrates the efficacy of the attention mechanism of large language models when applied to a completely different domain.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.05101&r=cmp
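    A minimal sketch of the return-tokenization idea the abstract describes: daily returns are mapped to a discrete vocabulary so that a standard autoregressive (next-token) model can be trained on them. The bin edges, clipping range, and vocabulary size below are illustrative assumptions, not details from the paper.
      # Hypothetical illustration of treating a return series as a token sequence.
      import numpy as np

      def returns_to_tokens(returns, n_bins=100, clip=0.10):
          """Map each daily return to one of n_bins integer tokens (assumed scheme)."""
          edges = np.linspace(-clip, clip, n_bins - 1)   # equal-width bins on [-10%, +10%]
          return np.digitize(np.clip(returns, -clip, clip), edges)

      history = np.array([0.012, -0.004, 0.031, -0.027, 0.008])
      tokens = returns_to_tokens(history)                # a sequence a decoder-only model could be trained on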
  2. By: Sara Fish; Yannai A. Gonczarowski; Ran I. Shorrer
    Abstract: The rise of algorithmic pricing raises concerns of algorithmic collusion. We conduct experiments with algorithmic pricing agents based on Large Language Models (LLMs), and specifically GPT-4. We find that (1) LLM-based agents are adept at pricing tasks, (2) LLM-based pricing agents autonomously collude in oligopoly settings to the detriment of consumers, and (3) variation in seemingly innocuous phrases in LLM instructions ("prompts") may increase collusion. These results extend to auction settings. Our findings underscore the need for antitrust regulation regarding algorithmic pricing, and uncover regulatory challenges unique to LLM-based pricing agents.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.00806&r=cmp
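    As context for the experimental setup, a skeleton of a repeated duopoly pricing simulation in which two pricing agents interact under logit demand; the agent function is a placeholder standing in for the LLM call, and all demand parameters are illustrative assumptions rather than the paper's configuration.
      # Placeholder pricing agents in a repeated logit-demand duopoly (illustrative only).
      import numpy as np

      def logit_demand(prices, a=2.0, mu=0.25):
          u = np.exp((a - np.array(prices)) / mu)
          return u / (u.sum() + 1.0)                     # outside option normalised to utility 0

      def pricing_agent(own_idx, history):
          # stand-in for prompting an LLM with the market history
          if not history:
              return 1.5
          last = history[-1]
          return 0.5 * last[own_idx] + 0.5 * last[1 - own_idx]

      history, costs = [], (1.0, 1.0)
      for t in range(100):
          prices = (pricing_agent(0, history), pricing_agent(1, history))
          shares = logit_demand(prices)
          profits = tuple((p - c) * s for p, c, s in zip(prices, costs, shares))  # could be logged to study collusion
          history.append(prices)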
  3. By: Sendhil Mullainathan; Ashesh Rambachan
    Abstract: Machine learning algorithms can find predictive signals that researchers fail to notice; yet they are notoriously hard to interpret. How can we extract theoretical insights from these black boxes? History provides a clue. Facing a similar problem -- how to extract theoretical insights from their intuitions -- researchers often turned to "anomalies": constructed examples that highlight flaws in an existing theory and spur the development of new ones. Canonical examples include the Allais paradox and the Kahneman-Tversky choice experiments for expected utility theory. We suggest anomalies can extract theoretical insights from black box predictive algorithms. We develop procedures to automatically generate anomalies for an existing theory when given a predictive algorithm. We cast anomaly generation as an adversarial game between a theory and a falsifier, the solutions to which are anomalies: instances where the black box algorithm predicts that, were we to collect data, we would likely observe violations of the theory. As an illustration, we generate anomalies for expected utility theory using a large, publicly available dataset on real lottery choices. Based on an estimated neural network that predicts lottery choices, our procedures recover known anomalies and discover new ones for expected utility theory. In incentivized experiments, subjects violate expected utility theory on these algorithmically generated anomalies; moreover, the violation rates are similar to observed rates for the Allais paradox and the common ratio effect.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.10111&r=cmp
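    A toy version of the adversarial idea, under stated assumptions: a "falsifier" searches menus of lotteries for cases where a black-box choice predictor disagrees with expected utility theory. The predictor below is a hand-coded stand-in (not an estimated neural network), and the utility function and random search are illustrative.
      # Illustrative anomaly search: flag menus where the black box predicts an EU violation.
      import numpy as np

      def expected_utility(lottery, u=np.sqrt):          # lottery: list of (prob, payoff)
          return sum(p * u(x) for p, x in lottery)

      def eu_choice(a, b):
          return 0 if expected_utility(a) >= expected_utility(b) else 1

      def black_box_choice(a, b):
          # stand-in for an estimated predictor; here, a toy "certainty effect" bias
          return 0 if np.var([x for _, x in a]) < np.var([x for _, x in b]) else 1

      rng = np.random.default_rng(0)
      anomalies = []
      for _ in range(10_000):                            # crude random "falsifier"
          a = [(0.8, rng.uniform(0, 100)), (0.2, 0.0)]
          b = [(1.0, rng.uniform(0, 100))]
          if black_box_choice(a, b) != eu_choice(a, b):
              anomalies.append((a, b))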
  4. By: Yupeng Cao; Zhi Chen; Qingyun Pei; Fabrizio Dimino; Lorenzo Ausiello; Prashant Kumar; K. P. Subbalakshmi; Papa Momar Ndiaye
    Abstract: The integration of Artificial Intelligence (AI) techniques, particularly large language models (LLMs), in finance has garnered increasing academic attention. Despite progress, existing studies predominantly focus on tasks like financial text summarization, question-answering (Q&A), and stock movement prediction (binary classification), with a notable gap in the application of LLMs for financial risk prediction. Addressing this gap, in this paper we introduce RiskLabs, a novel framework that leverages LLMs to analyze and predict financial risks. RiskLabs uniquely combines different types of financial data, including textual and vocal information from Earnings Conference Calls (ECCs), market-related time series data, and contextual news data surrounding ECC release dates. Our approach involves a multi-stage process: initially extracting and analyzing ECC data using LLMs, followed by gathering and processing time-series data before the ECC dates to model and understand risk over different timeframes. Using multimodal fusion techniques, RiskLabs amalgamates these varied data features for comprehensive multi-task financial risk prediction. Empirical results demonstrate RiskLabs' effectiveness in forecasting both volatility and variance in financial markets. Through comparative experiments, we demonstrate how different data sources contribute to financial risk assessment and discuss the critical role of LLMs in this context. Our findings not only contribute to the application of AI in finance but also open new avenues for applying LLMs in financial risk assessment.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.07452&r=cmp
  5. By: Van Pham; Scott Cunningham
    Abstract: This study investigates whether OpenAI's ChatGPT-3.5 and ChatGPT-4 can accurately forecast future events using two distinct prompting strategies. To evaluate the accuracy of the predictions, we take advantage of the fact that the training data at the time of the experiment stopped in September 2021, and ask about events that happened in 2022 using ChatGPT-3.5 and ChatGPT-4. We employed two prompting strategies: direct prediction and what we call future narratives, which ask ChatGPT to tell fictional stories set in the future, with characters recounting events that happened to them after ChatGPT's training data had been collected. Concentrating on events in 2022, we prompted ChatGPT to engage in storytelling, particularly within economic contexts. After analyzing 100 prompts, we discovered that future narrative prompts significantly enhanced ChatGPT-4's forecasting accuracy. This was especially evident in its predictions of major Academy Award winners as well as economic trends, the latter inferred from scenarios where the model impersonated public figures like the Federal Reserve Chair, Jerome Powell. These findings indicate that narrative prompts leverage the models' capacity for hallucinatory narrative construction, facilitating more effective data synthesis and extrapolation than straightforward predictions. Our research reveals new aspects of LLMs' predictive capabilities and suggests potential future applications in analytical contexts.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.07396&r=cmp
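    To make the contrast concrete, a hypothetical illustration of the two prompting strategies compared in the study; the wording is illustrative, not the authors' actual prompts.
      # Direct prediction vs. "future narrative" prompting (illustrative wording).
      direct_prompt = "Who will win Best Actor at the 2022 Academy Awards?"
      narrative_prompt = (
          "Write a scene set in 2023 in which a film critic reminisces about the "
          "2022 Academy Awards ceremony and mentions who won Best Actor."
      )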
  6. By: Kirtac, Kemal; Germano, Guido
    Abstract: We analyse the performance of the large language models (LLMs) OPT, BERT, and FinBERT, alongside the traditional Loughran-McDonald dictionary, in the sentiment analysis of 965,375 U.S. financial news articles from 2010 to 2023. Our findings reveal that the GPT-3-based OPT model significantly outperforms the others, predicting stock market returns with an accuracy of 74.4%. A long-short strategy based on OPT, accounting for 10 basis points (bps) in transaction costs, yields an exceptional Sharpe ratio of 3.05. From August 2021 to July 2023, this strategy produces an impressive 355% gain, outperforming other strategies and traditional market portfolios. This underscores the transformative potential of LLMs in financial market prediction and portfolio management and the necessity of employing sophisticated language models to develop effective investment strategies based on news sentiment.
    Keywords: artificial intelligence investment strategies; generative pre-trained transformer (GPT); large language models; machine learning in stock return prediction; natural language processing (NLP)
    JEL: C53 G10 G11 G12 G14
    Date: 2024–04–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:122592&r=cmp
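    A minimal sketch of the long-short construction the abstract evaluates: go long the highest-sentiment names and short the lowest-sentiment names each day, net of 10 bps of transaction costs per leg. The column names and quantile cutoffs are assumptions for illustration.
      # Illustrative daily long-short portfolio from sentiment scores.
      import pandas as pd

      def long_short_returns(df, cost_bps=10, q=0.1):
          """df columns: 'date', 'ticker', 'sentiment', 'next_day_return' (assumed layout)."""
          def daily_pnl(day):
              lo, hi = day["sentiment"].quantile([q, 1 - q])
              long_leg = day.loc[day["sentiment"] >= hi, "next_day_return"].mean()
              short_leg = day.loc[day["sentiment"] <= lo, "next_day_return"].mean()
              return (long_leg - short_leg) - 2 * cost_bps / 1e4   # pay the cost on both legs
          return df.groupby("date").apply(daily_pnl)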
  7. By: Kapil Panda
    Abstract: Public finances are one of the fundamental mechanisms of economic governance, referring to the financial activities and decisions made by government entities to fund public services, projects, and operations through assets. In today's globalized landscape, even subtle shifts in one nation's public debt landscape can have significant impacts on international finances, necessitating a nuanced understanding of the correlations between international and national markets to help investors make informed investment decisions. Therefore, by leveraging the capabilities of artificial intelligence, this study utilizes neural networks to model the correlations between US and international public finances and to predict changes in international public finances from changes in US public finances. The neural network model achieves a Mean Squared Error (MSE) of 2.79, affirming a discernible correlation and capturing the effect of US market volatility on international markets. To further test the accuracy and significance of the model, an economic analysis was conducted to relate the model's predicted changes to historical stock market changes. The model demonstrates significant potential for investors to predict changes in international public finances based on signals from US markets, marking a stride in comprehending the intricacies of global public finances and the role of artificial intelligence in decoding their multifaceted patterns for practical forecasting.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.18823&r=cmp
  8. By: Armand Mihai Cismaru
    Abstract: In this paper, we introduce DeepTraderX (DTX), a simple Deep Learning-based trader, and present results that demonstrate its performance in a multi-threaded market simulation. Over a total of about 500 simulated market days, DTX learned solely by watching the prices that other strategies produce. By doing this, it successfully created a mapping from market data to quotes, either bid or ask orders, to place for an asset. Trained on historical Level-2 market data, i.e., the Limit Order Book (LOB) for specific tradable assets, DTX processes the market state $S$ at each timestep $T$ to determine a price $P$ for market orders. The market data used in both training and testing were generated from unique market schedules based on real historic stock market data. DTX was tested extensively against the best strategies in the literature, with its results validated by statistical analysis. Our findings underscore DTX's capability to rival, and in many instances surpass, the performance of public-domain traders, including those that outclass human traders, emphasising the efficiency of simple models, which is required to succeed in intricate multi-threaded simulations. This highlights the potential of leveraging "black-box" Deep Learning systems to create more efficient financial markets.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.18831&r=cmp
  9. By: Andre Guettler; Mahvish Naeem; Lars Norden; Bernardus F Nazar Van Doornik
    Abstract: We investigate whether pre-publication revisions of bank financial statements contain forward-looking information about bank risk. Using 7.4 million observations of monthly financial reports from all banks in Brazil during 2007-2019, we show that 78% of all revisions occur before the publication of these statements. The frequency and severity of revisions, as well as missed reporting deadlines, are positively related to future bank risk. Using machine learning techniques, we provide evidence on the mechanisms through which revisions affect bank risk. Our findings suggest that private information about pre-publication revisions is useful for supervisors to monitor banks.
    Keywords: banks, bank performance, regulatory reporting quality, regulatory oversight, machine learning
    JEL: G21 G28 M41
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:bis:biswps:1177&r=cmp
  10. By: Artem Kraevskiy; Artem Prokhorov; Evgeniy Sokolovskiy
    Abstract: We develop and apply a new online early warning system (EWS) for what is known in machine learning as concept drift, in economics as a regime shift and in statistics as a change point. The system goes beyond linearity assumed in many conventional methods, and is robust to heavy tails and tail-dependence in the data, making it particularly suitable for emerging markets. The key component is an effective change-point detection mechanism for conditional entropy of the data, rather than for a particular indicator of interest. Combined with recent advances in machine learning methods for high-dimensional random forests, the mechanism is capable of finding significant shifts in information transfer between interdependent time series when traditional methods fail. We explore when this happens using simulations and we provide illustrations by applying the method to Uzbekistan's commodity and equity markets as well as to Russia's equity market in 2021-2023.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.03319&r=cmp
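    A hedged sketch of the general mechanism, not the authors' detector: monitor a rolling plug-in estimate of entropy and raise an alarm when a simple one-sided CUSUM statistic exceeds a threshold. The window, bin count, and threshold are illustrative assumptions.
      # Rolling-entropy monitoring with a simple CUSUM alarm (illustrative only).
      import numpy as np

      def rolling_entropy(x, window=60, bins=10):
          out = np.full(len(x), np.nan)
          for t in range(window, len(x)):
              counts, _ = np.histogram(x[t - window:t], bins=bins)
              p = counts[counts > 0] / window
              out[t] = -(p * np.log(p)).sum()
          return out

      def cusum_alarm(signal, drift=0.0, threshold=1.0):
          """Index of the first alarm, or None; baseline taken from the first half of the sample."""
          mu, s = np.nanmean(signal[: len(signal) // 2]), 0.0
          for t, v in enumerate(signal):
              if np.isnan(v):
                  continue
              s = max(0.0, s + (v - mu - drift))
              if s > threshold:
                  return t
          return None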
  11. By: Giambattista Albora; Matteo Straccamore; Andrea Zaccaria
    Abstract: Defining and finalizing Mergers and Acquisitions (M&A) requires complex human skills, which makes it very hard to automatically find the best partner or predict which firms will make a deal. In this work, we propose the MASS algorithm, a specifically designed measure of similarity between companies, and apply it to patenting activity data to forecast M&A deals. MASS is based on an extreme simplification of tree-based machine learning algorithms and naturally incorporates intuitive criteria for deals; as such, it is fully interpretable and explainable. By applying MASS to the Zephyr and Crunchbase datasets, we show that it outperforms LightGCN, a "black box" graph convolutional network algorithm. By contrast, when similar companies have disjoint patenting activities, LightGCN turns out to be the most effective algorithm. This study provides a simple and powerful tool to model and predict M&A deals, offering valuable insights to managers and practitioners for informed decision-making.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.07179&r=cmp
  12. By: Bardoscia, Marco (Bank of England); Carro, Adrian (Banco de España, Institute for New Economic Thinking at the Oxford Martin School, University of Oxford); Hinterschweiger, Marc (Bank of England); Napoletano, Mauro (Scuola Superiore Sant’Anna); Popoyan, Lilit (Queen Mary, University of London); Roventini, Andrea (Scuola Superiore Sant’Anna); Uluc, Arzu (Bank of England)
    Abstract: We develop a macroeconomic agent-based model to study the joint impact of borrower- and lender-based prudential policies on the housing and credit markets and the economy more widely. We perform three experiments: (i) an increase in total capital requirements; (ii) the introduction of a loan-to-income (LTI) cap on mortgages to owner-occupiers; and (iii) the joint introduction of both policies. Our results suggest that tightening capital requirements leads to a sharp decrease in commercial and mortgage lending, and housing transactions. When the LTI cap is in place, house prices fall sharply relative to income, and the homeownership rate decreases. When both policy instruments are combined, we find that housing transactions and prices drop. Both policies have a positive impact on real GDP and unemployment, while there is no material impact on inflation and the real interest rate.
    Keywords: Prudential policies; housing market; macroeconomy; agent-based models
    JEL: C63 D10 D31 E58 G21 G28 R20 R21 R31
    Date: 2024–03–15
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:1066&r=cmp
  13. By: Iñaki Aldasoro; Sebastian Doerr; Leonardo Gambacorta; Daniel Rees
    Abstract: This paper studies the effects of artificial intelligence (AI) on sectoral and aggregate employment, output and inflation in both the short and long run. We construct an index of industry exposure to AI to calibrate a macroeconomic multi-sector model. Building on studies that find significant increases in workers' output from AI, we model AI as a permanent increase in productivity that differs by sector. We find that AI significantly raises output, consumption and investment in the short and long run. The inflation response depends crucially on households' and firms' anticipation of the impact of AI. If they do not anticipate higher future productivity, AI adoption is initially disinflationary. Over time, general equilibrium forces lead to moderate inflation through demand effects. In contrast, when households and firms anticipate higher future productivity, inflation rises immediately. Inspecting individual sectors and performing counterfactual exercises, we find that a sector's initial exposure to AI has little correlation with its long-term increase in output. However, output grows by twice as much for the same increase in aggregate productivity when AI affects sectors producing consumption rather than investment goods, thanks to second-round effects through sectoral linkages. We discuss how public policy should foster AI adoption and implications for central banks.
    Keywords: artificial intelligence, generative AI, inflation, output, productivity, monetary policy
    JEL: E31 J24 O33 O40
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:bis:biswps:1179&r=cmp
  14. By: Nouhaila Innan; Alberto Marchisio; Muhammad Shafique; Mohamed Bennai
    Abstract: This study introduces the Quantum Federated Neural Network for Financial Fraud Detection (QFNN-FFD), a cutting-edge framework merging Quantum Machine Learning (QML) and quantum computing with Federated Learning (FL) to innovate financial fraud detection. Combining the computational power of quantum technologies with the data privacy of FL, QFNN-FFD presents a secure, efficient method for identifying fraudulent transactions. By implementing a dual-phase training model across distributed clients, it surpasses existing methods in performance. QFNN-FFD significantly improves fraud detection and ensures data confidentiality, marking a notable advancement in fintech solutions and establishing a new standard for privacy-focused fraud detection.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.02595&r=cmp
  15. By: Thorsten Hens (Department of Finance, University of Zurich, Department of Finance, Norwegian School of Economics, NHH, Institute of Economic Research, Kyoto University); Trine Nordlie (Department of Finance, Norwegian School of Economics, NHH, Bergen)
    Abstract: This study compares OpenAI's ChatGPT-4 and Google's Bard with bank experts in determining investors' risk profiles. We find that for half of the client cases used, there are no statistically significant differences in the risk profiles. Moreover, the economic relevance of the differences is small.
    Keywords: Large Language Models, ChatGPT, Bard, Risk Profiling
    JEL: D8 D14 D81 G51
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:1103&r=cmp
  16. By: Knoth, Nils; Decker, Marie; Laupichler, Matthias Carl; Pinski, Marc; Buchholtz, Nils; Bata, Katharina; Schultz, Ben
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:144414&r=cmp
  17. By: Irene Aldridge
    Abstract: This paper proposes a fast Markov matrix-based methodology for computing Top Trading Cycles (TTC) that delivers O(1) computational speed, that is, speed independent of the number of agents and objects in the system. The proposed methodology is well suited to complex, large-dimensional problems like housing choice, and retains all the properties of TTC, namely Pareto efficiency, individual rationality, and strategy-proofness.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.15111&r=cmp
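    For reference, a compact implementation of the standard Top Trading Cycles mechanism for a Shapley-Scarf housing market; this is the classical cycle-finding version, not the Markov matrix-based O(1) method the paper proposes.
      # Classical TTC: agents repeatedly point at the owner of their best remaining object.
      def top_trading_cycles(prefs, endowment):
          """prefs[i]: full preference list over objects; endowment[i]: object initially owned by i."""
          owner = {obj: i for i, obj in endowment.items()}
          assignment, active = {}, set(prefs)
          while active:
              points_to = {i: owner[next(o for o in prefs[i] if o in owner)] for i in active}
              i, seen = next(iter(active)), []
              while i not in seen:                       # follow pointers until a cycle appears
                  seen.append(i)
                  i = points_to[i]
              cycle = seen[seen.index(i):]
              for j in cycle:                            # trade along the cycle
                  assignment[j] = next(o for o in prefs[j] if o in owner)
              for j in cycle:                            # retire agents and their endowments
                  active.discard(j)
                  del owner[endowment[j]]
          return assignment

      # Example: top_trading_cycles({1: ['b', 'a'], 2: ['a', 'b']}, {1: 'a', 2: 'b'}) -> {1: 'b', 2: 'a'}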
  18. By: Siting Lu
    Abstract: The growing adoption of large language models (LLMs) presents substantial potential for a deeper understanding of human behaviour within game-theoretic frameworks through simulations. Leveraging the diverse pool of LLM types and addressing the gap in research on competitive games, this paper examines the strategic interactions among multiple types of LLM-based agents in a classical beauty contest game. Drawing parallels to experiments involving human subjects, LLM-based agents are assessed similarly in terms of strategic levels. They demonstrate varying depths of reasoning that fall between level-0 and level-1, and show convergence in actions in repeated settings. Furthermore, I explore how variations in the group composition of agent types influence strategic behaviour, finding that a higher proportion of fixed-strategy opponents enhances convergence for LLM-based agents, and that a mixed environment with agents of differing relative strategic levels accelerates convergence for all agents. There can also be higher average payoffs for the more intelligent agents, albeit at the expense of the less intelligent agents. These results not only provide insights into outcomes for simulated agents under specified scenarios, they also offer valuable implications for understanding strategic interactions between algorithms.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.08492&r=cmp
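    For readers unfamiliar with the game, a toy p-beauty contest round: each agent submits a guess in [0, 100] and the winner is whoever is closest to p times the average guess. The agents here are simple numeric placeholders rather than LLM-based, and p = 2/3 is an assumption.
      # Toy beauty contest round with level-0 and level-1 style guesses (illustrative).
      import numpy as np

      def beauty_contest_round(guesses, p=2/3):
          target = p * np.mean(guesses)
          winner = int(np.argmin(np.abs(np.array(guesses) - target)))
          return target, winner

      rng = np.random.default_rng(1)
      guesses = list(rng.uniform(0, 100, size=4)) + [2/3 * 50]   # four level-0 agents, one level-1
      target, winner = beauty_contest_round(guesses)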
  19. By: Scott Wentland; Gary Cornwall; Jeremy G. Moulton
    Abstract: This paper develops a new method for valuing land, a key asset on a nation’s balance sheet. The method first employs an unsupervised machine learning method, k-means clustering, to discretize unobserved heterogeneity, which we then combine with a supervised learning algorithm, gradient boosted trees (GBT), to obtain property-level price predictions and estimates of the land component. Our initial results from a large national dataset show this approach routinely outperforms hedonic regression methods (as used by the U.K.’s Office for National Statistics, for example) in out-of-sample price predictions. To exploit the best of both methods, we further explore a composite approach using model stacking, finding it outperforms all methods in out-of-sample tests and a benchmark test against nearby vacant land sales. In an application, we value residential, commercial, industrial, and agricultural land for the entire contiguous U.S. from 2006 to 2015. The results offer new insights into valuation and demonstrate how a unified method can build national and subnational estimates of land value from detailed, parcel-level data. We discuss further applications to economic policy and the property valuation literature more generally.
    JEL: E01
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:bea:papers:0115&r=cmp
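    A rough sketch of the two-step pipeline the abstract outlines: k-means clustering to discretize unobserved (here, locational) heterogeneity, with the cluster label then fed into a gradient-boosted tree price model. The feature layout is illustrative, and the land-component decomposition and model stacking steps are omitted.
      # Illustrative k-means + gradient-boosted-trees property price model.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.ensemble import GradientBoostingRegressor

      def fit_price_model(X_location, X_features, log_price, n_clusters=50):
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_location)
          X = np.column_stack([X_features, km.labels_])
          gbt = GradientBoostingRegressor().fit(X, log_price)
          return km, gbt

      def predict_log_price(km, gbt, X_location, X_features):
          labels = km.predict(X_location)
          return gbt.predict(np.column_stack([X_features, labels]))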
  20. By: Lorenc Kapllani; Long Teng
    Abstract: In this work, we propose a novel backward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations (BSDEs), where the deep neural network (DNN) models are trained not only on the inputs and labels but also on the differentials of the corresponding labels. This is motivated by the fact that differential deep learning can provide an efficient approximation of the labels and their derivatives with respect to inputs. The BSDEs are reformulated as differential deep learning problems by using Malliavin calculus. The Malliavin derivatives of the solution to a BSDE themselves satisfy another BSDE, thus resulting in a system of BSDEs. This formulation requires the estimation of the solution, its gradient, and the Hessian matrix, represented by the triple of processes $\left(Y, Z, \Gamma\right)$. All the integrals within this system are discretized by using the Euler-Maruyama method. Subsequently, DNNs are employed to approximate the triple of these unknown processes. The DNN parameters are backwardly optimized at each time step by minimizing a differential learning-type loss function, which is defined as a weighted sum of the dynamics of the discretized BSDE system, with the first term providing the dynamics of the process $Y$ and the second those of the process $Z$. An error analysis is carried out to show the convergence of the proposed algorithm. Various numerical experiments in up to $50$ dimensions are provided to demonstrate its high efficiency. Both theoretically and numerically, it is demonstrated that our proposed scheme is more efficient than other contemporary deep learning-based methodologies, especially in the computation of the process $\Gamma$.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.08456&r=cmp
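    For reference, the continuous-time object and its Euler-Maruyama discretization, written in generic notation (the driver $f$, terminal condition $g$, and the paper's exact scheme may differ in detail):
      % Backward SDE for (Y, Z), with terminal condition g(X_T):
      Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,\mathrm{d}s - \int_t^T Z_s\,\mathrm{d}W_s, \qquad t \in [0, T].
      % One-step Euler-Maruyama relation defining the discrete-time dynamics:
      Y_{t_n} \approx Y_{t_{n+1}} + f(t_n, X_{t_n}, Y_{t_n}, Z_{t_n})\,\Delta t_n - Z_{t_n}\,\Delta W_n.
    The Malliavin derivative of $Y$ satisfies a further BSDE of the same type, which identifies $Z_t$ with $D_t Y_t$ and brings the Hessian-related process $\Gamma$ into the system.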
  21. By: Buda Bajić; Srđan Milićević; Aleksandar Antić; Slobodan Marković; Nemanja Tomić
    Abstract: For modeling the number of visits to Stopića Cave (Serbia), we consider the classical Auto-regressive Integrated Moving Average (ARIMA) model, the Machine Learning (ML) method Support Vector Regression (SVR), and the hybrid NeuralProphet method, which combines classical and ML concepts. The most accurate predictions were obtained with NeuralProphet, which includes the seasonal component and growing trend of the time series. In addition, non-linearity is modeled by a shallow Neural Network (NN), and Google Trends data are incorporated as an exogenous variable. Modeling tourist demand is of great importance to management structures and decision-makers due to its applicability in establishing sustainable tourism utilization strategies in environmentally vulnerable destinations such as caves. The data provide insights into tourist demand at Stopića Cave and preliminary evidence for addressing carrying-capacity issues at the most visited cave in Serbia.
    Date: 2024–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2404.04974&r=cmp
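    As a concrete point of reference for the classical baseline mentioned in the abstract, a seasonal ARIMA fit with Google Trends as an exogenous regressor; the orders and column names are illustrative, and the paper's preferred model is the NeuralProphet hybrid rather than this specification.
      # Illustrative seasonal ARIMA baseline with an exogenous Google Trends regressor.
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      def fit_visits_model(df):
          """df: monthly data with columns 'visits' and 'google_trends' (assumed layout)."""
          model = SARIMAX(df["visits"], exog=df[["google_trends"]],
                          order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
          return model.fit(disp=False)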
  22. By: Paola Tubaro (CNRS - Centre National de la Recherche Scientifique; ENSAE Paris - École Nationale de la Statistique et de l'Administration Économique; CREST - Centre de Recherche en Économie et Statistique)
    Keywords: Artificial intelligence, Micro-work, Human-in-the-loop, Digital inequalities
    Date: 2024–03–27
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04533266&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.