NEP: New Economics Papers on Computational Economics
By: | Stanisław Łaniewski (University of Warsaw, Faculty of Economic Sciences, Department of Quantitative Finance and Machine Learning); Robert Ślepaczuk (University of Warsaw, Faculty of Economic Sciences, Department of Quantitative Finance and Machine Learning) |
Abstract: | This study utilizes machine learning algorithms to analyze and organize knowledge in the field of algorithmic trading, based on filtering 136 million research papers down to 14,342 articles ranging from 1956 to Q1 2020. We compare previously used practices, such as keyword-based algorithms and embedding techniques, with a state-of-the-art topic-modeling method that combines dimension reduction and clustering (BERTopic), tracing the popularity and evolution of different approaches and themes. We also show new possibilities created by the latest generation of Large Language Models (LLMs), such as ChatGPT. The analysis reveals that the number of research articles on algorithmic trading is increasing faster than the overall number of papers. Stocks and main indices comprise more than half of all assets considered, but the growth trend in some classes is much stronger (e.g., cryptocurrencies). Machine learning models have become the most popular methods, yet they are often flawed compared to seemingly simpler techniques. The study demonstrates the usefulness of Natural Language Processing for asking intricate questions about the analyzed articles, such as comparing the efficiency of different models, and demonstrates the efficiency of LLMs in refining datasets. Our research shows that by breaking tasks into smaller ones and adding reasoning steps, we can effectively address complex questions, supported by case analyses. |
Keywords: | trading, quantitative finance, neural networks, literature review, knowledge representation, natural language processing (NLP), topic modeling, model comparison, artificial intelligence |
JEL: | C4 C15 C22 C45 C53 C58 C61 G11 G14 G15 G17 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:war:wpaper:2024-16 |
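A minimal sketch of the embed-reduce-cluster pipeline that BERTopic popularized and that the abstract above applies to article abstracts. The embedding model, hyperparameters, and placeholder corpus are illustrative assumptions, not the authors' settings:

```python
# BERTopic-style topic modeling: embed documents, reduce dimensions, cluster.
# Model name and hyperparameters are common defaults, not the paper's choices.
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN

themes = ["pairs trading with cointegration tests",
          "LSTM networks forecasting cryptocurrency returns",
          "genetic algorithms for technical analysis rules"]
docs = [f"{t}, study {i}" for t in themes for i in range(100)]  # placeholder corpus

emb = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)       # dense document embeddings
red = UMAP(n_components=5, metric="cosine").fit_transform(emb)   # dimension reduction
labels = HDBSCAN(min_cluster_size=10).fit_predict(red)           # density clustering; -1 = outlier
print(sorted(set(labels)))
# Each non-negative label is a candidate topic; class-based TF-IDF over each
# cluster's documents then yields the topic's keyword representation.
```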
By: | Thomas R. Cook; Zach Modig; Nathan M. Palmer |
Abstract: | Machine learning and artificial intelligence are often described as “black boxes.” Traditional linear regression is interpreted through its marginal relationships as captured by regression coefficients. We show that the same marginal relationship can be described rigorously for any machine learning model by calculating the slope of the partial dependence functions, which we call the partial marginal effect (PME). We prove that the PME of OLS is analytically equivalent to the OLS regression coefficient. Bootstrapping provides standard errors and confidence intervals around the point estimates of the PMEs. We apply the PME to a hedonic house pricing example and demonstrate that the PMEs of neural networks, support vector machines, random forests, and gradient boosting models reveal the non-linear relationships discovered by the machine learning models and allow direct comparison between those models and a traditional linear regression. Finally, we extend the PME to a Shapley value decomposition and explore how it can be used to further explain model outputs. |
Keywords: | Machine learning; House prices; Statistical inference |
JEL: | C14 C18 C15 C45 C52 |
Date: | 2024–09–20 |
URL: | https://d.repec.org/n?u=RePEc:fip:fedgfe:2024-75 |
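The PME described above lends itself to a short numerical illustration: estimate the partial dependence function on a grid and average its finite-difference slopes. This is a sketch on synthetic data, assuming a grid-based estimator; the paper's exact estimator and bootstrap inference are not reproduced:

```python
# Partial marginal effect (PME): average slope of the partial dependence
# function in one feature. For OLS it recovers the regression coefficient.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

def pme(model, X, j, n_grid=20):
    """Average finite-difference slope of the partial dependence in feature j."""
    grid = np.quantile(X[:, j], np.linspace(0.05, 0.95, n_grid))
    pd_vals = []
    for v in grid:
        Xg = X.copy()
        Xg[:, j] = v                       # fix feature j, average out the others
        pd_vals.append(model.predict(Xg).mean())
    return np.mean(np.diff(pd_vals) / np.diff(grid))

ols = LinearRegression().fit(X, y)
gbm = GradientBoostingRegressor(random_state=0).fit(X, y)
print("OLS coefficient:", ols.coef_[0])    # ~2.0
print("OLS PME:", pme(ols, X, 0))          # matches the coefficient, as the paper proves
print("GBM PME:", pme(gbm, X, 0))          # non-linear model summarized on the same scale
```

For the linear model every grid slope equals the coefficient exactly, so the PME reproduces it; for the boosted trees the PME places a genuinely non-linear fit on the same interpretive scale.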
By: | Hendrik Jenett; Maximilian Nagl; Cathrine Nagl; McKay Price; Wolfgang Schäfers |
Abstract: | In the current context of heightened market tensions driven by rising interest rates, it is of vital interest for both researchers and practitioners to understand the dynamics of Real Estate Investment Trust (REIT) returns and their accompanying uncertainties. To address this concern, we examine the drivers of REIT returns and volatility in a time-varying framework spanning the modern REIT era (1991 to 2022). Our study is the first to simultaneously forecast both REIT returns and their associated volatility using an artificial neural network. We contribute to the literature by opening up the black-box character of neural networks, enabling the identification of individual feature impacts on predictions and their evolution over time. The key focus revolves around understanding how the influence of accounting and macroeconomic variables changes during periods of financial crisis compared to non-crisis periods. The results showcase the superior predictive capabilities of the neural network compared to conventional regression models. We shed light on the intricate interplay of diverse variables influencing the performance of REITs. Our findings hold implications for investors, policymakers, and researchers navigating the complex landscape of real estate investments in a dynamically evolving market environment. |
Keywords: | Machine Learning; Neural Network; REIT Return; Volatility |
JEL: | R3 |
Date: | 2024–01–01 |
URL: | https://d.repec.org/n?u=RePEc:arz:wpaper:eres2024-107 |
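A minimal sketch of one way to forecast a return and its volatility simultaneously with a single network, as the abstract describes: a shared body with two heads trained under a Gaussian likelihood. The architecture, loss, and synthetic data are assumptions, not the authors' specification:

```python
# Joint return/volatility forecasting: one body, two heads, Gaussian NLL loss.
import torch
import torch.nn as nn

class ReturnVolNet(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu = nn.Linear(32, 1)           # expected-return head
        self.log_sigma = nn.Linear(32, 1)    # log-volatility head (keeps sigma > 0)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_sigma(h).exp()

net = ReturnVolNet(n_features=10)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, r = torch.randn(256, 10), torch.randn(256, 1)   # synthetic features / realized returns
for _ in range(100):
    mu, sigma = net(x)
    # Gaussian negative log-likelihood trains both heads jointly:
    loss = (torch.log(sigma) + (r - mu) ** 2 / (2 * sigma ** 2)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```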
By: | Sanjay Sathish; Charu C Sharma |
Abstract: | Our research presents a new approach for forecasting the synchronization of stock prices using machine learning and non-linear time-series analysis. To capture the complex non-linear relationships between stock prices, we utilize recurrence plots (RP) and cross-recurrence quantification analysis (CRQA). By transforming Cross Recurrence Plot (CRP) data into a time-series format, we enable the use of Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) networks for predicting stock price synchronization through both regression and classification. We apply this methodology to a dataset of 20 highly capitalized stocks from the Indian market over a 21-year period. The findings reveal that our approach can predict stock price synchronization with an accuracy of 0.98 and an F1 score of 0.83, offering valuable insights for developing effective trading strategies and risk management tools. |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.06728 |
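A minimal sketch of the cross-recurrence plot at the core of the pipeline above: CRP[i, j] = 1 when the state of series A at time i and the state of series B at time j are within epsilon of each other. The threshold and the lack of time-delay embedding are simplifying assumptions:

```python
# Cross-recurrence plot between two synthetic price series.
import numpy as np

def cross_recurrence_plot(a: np.ndarray, b: np.ndarray, eps: float) -> np.ndarray:
    dist = np.abs(a[:, None] - b[None, :])   # pairwise distances between states
    return (dist <= eps).astype(int)

rng = np.random.default_rng(1)
a = np.cumsum(rng.normal(size=500))          # synthetic stock A (random walk)
b = np.cumsum(rng.normal(size=500))          # synthetic stock B
crp = cross_recurrence_plot(a, b, eps=0.5)
# CRQA measures such as the recurrence rate summarize the plot; flattening
# rows (or diagonals) of the CRP yields time-series inputs for an RNN/LSTM.
print("recurrence rate:", crp.mean())
```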
By: | V. Lanzetta |
Abstract: | The literature has highlighted that financial time series data pose significant challenges for accurate stock price prediction, because these data are characterized by noise and susceptibility to news. Traditional statistical methodologies make assumptions, such as linearity and normality, that are not suitable for the non-linear nature of financial time series; machine learning methodologies, on the other hand, are able to capture non-linear relationships in the data. To date, neural networks are considered the main machine learning tool for financial price prediction. Transfer learning, a method aimed at transferring knowledge from source tasks to target tasks, can be a very useful methodological tool for improving financial prediction capability. Current reviews of this body of knowledge focus mainly on neural network architectures for financial prediction, with very little emphasis on transfer learning; this paper therefore goes deeper into the topic by developing a systematic review of the application of transfer learning to financial market prediction and of the challenges and potential future directions of transfer learning methodologies for stock market prediction. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.17183 |
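A minimal sketch of the transfer-learning pattern this review surveys: reuse a network trained on a data-rich source market, freeze its feature extractor, and fine-tune only a new head on the smaller target market. All modules and data here are illustrative placeholders:

```python
# Transfer learning: freeze a pre-trained feature extractor, fine-tune a new head.
import torch
import torch.nn as nn

source_model = nn.Sequential(                 # stand-in for a model pre-trained on the source task
    nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1)
)
# ... assume source_model was trained on abundant source-market data ...

feature_extractor = source_model[:2]          # keep the learned representation
for p in feature_extractor.parameters():
    p.requires_grad = False                   # freeze the transferred knowledge

head = nn.Linear(64, 1)                       # new task-specific layer
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x_tgt, y_tgt = torch.randn(64, 20), torch.randn(64, 1)   # scarce target-market data
for _ in range(50):
    pred = head(feature_extractor(x_tgt))
    loss = nn.functional.mse_loss(pred, y_tgt)
    opt.zero_grad(); loss.backward(); opt.step()
```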
By: | Koundouri, Phoebe; Aslanidis, Panagiotis-Stavros; Dellis, Konstantinos; Feretzakis, Georgios; Plataniotis, Angelos |
Abstract: | This paper introduces a machine learning (ML) based approach for integrating Human Security (HS) and Sustainable Development Goals (SDGs). Originating in the 1990s, HS focuses on strategic, people-centric interventions for ensuring comprehensive welfare and resilience. It closely aligns with the SDGs, together forming the foundation for global sustainable development initiatives. Our methodology involves mapping 44 reports to the 17 SDGs using expert-annotated keywords and advanced ML techniques, resulting in a web-based SDG mapping tool. This tool is specifically tailored for the HS-SDG nexus, enabling the analysis of 13 new reports and their connections to the SDGs. Through this, we uncover detailed insights and establish strong links between the reports and global objectives, offering a nuanced understanding of the interplay between HS and sustainable development. This research provides a scalable framework to explore the relationship between HS and the Paris Agenda, offering a practical, efficient resource for scholars and policymakers. |
Keywords: | Artificial Intelligence in Policy Making, Data Mining, Human-Centric Governance Strategies, Human Security, Machine Learning, Sustainable Development Goals |
JEL: | C65 O15 |
Date: | 2024–02–20 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:121972 |
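A minimal sketch of the keyword-based mapping idea: score a report against each SDG by TF-IDF cosine similarity with a keyword list. The keywords shown are illustrative, not the study's expert annotations, and the study's actual ML techniques are not reproduced:

```python
# Keyword-based SDG mapping via TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sdg_keywords = {  # illustrative stand-ins for expert-annotated keyword lists
    "SDG 1":  "poverty social protection income vulnerability",
    "SDG 13": "climate change mitigation adaptation emissions resilience",
}
report = "Rising emissions threaten the resilience and income of vulnerable groups."

vec = TfidfVectorizer()
m = vec.fit_transform(list(sdg_keywords.values()) + [report])
scores = cosine_similarity(m[-1], m[:-1]).ravel()
for goal, s in zip(sdg_keywords, scores):
    print(goal, round(float(s), 3))           # higher score = stronger mapping
```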
By: | Anne-Gaëlle Maltese; Pierre Pelletier; Rémy Guichardaz |
Abstract: | Creativity is a fundamental pillar of human expression and a driving force behind innovation, yet it now stands at a crossroads. As artificial intelligence advances at an astonishing pace, the question arises: can machines match, and potentially surpass, human creativity? This study investigates the creative performance of artificial intelligence (AI) relative to humans by analyzing the effects of two distinct prompting strategies (a Naive and an Expert AI) across three different tasks (Text, Draw, and Alternative Uses). External human evaluators scored creative outputs generated by humans and AI, and these subjective creativity scores were complemented with objective measures based on quantitative measurements and NLP tools. The results reveal that AI generally outperforms humans in creative tasks, though this advantage is nuanced by the specific nature of each task and the chosen creativity criteria. Ultimately, while AI demonstrates superior performance in certain creative domains, our results suggest that integrating human feedback is crucial for maximizing AI's creative potential. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.18776 |
By: | Guoxi Zhang; Jiuding Duan |
Abstract: | This paper addresses the cost-efficiency aspect of Reinforcement Learning from Human Feedback (RLHF). RLHF leverages datasets of human preferences over outputs of large language models (LLMs) to instill human expectations into LLMs. While preference annotation comes at a monetary cost, the economic utility of a preference dataset has not been considered so far. What exacerbates this situation is that, given the complex intransitive or cyclic relationships in preference datasets, existing algorithms for fine-tuning LLMs are still far from capturing comprehensive preferences. This raises severe cost-efficiency concerns in production environments, where preference data accumulate over time. In this paper, we view the fine-tuning of LLMs as a monetized economy and introduce an auction mechanism to improve the efficiency of preference data collection in dollar terms. We show that introducing an auction mechanism can play an essential role in enhancing the cost-efficiency of RLHF while maintaining satisfactory model performance. Experimental results demonstrate that our proposed auction-based protocol is cost-efficient for fine-tuning LLMs by concentrating on high-quality feedback. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.18417 |
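A stylized sketch of the economic intuition: annotators bid a price for labeling a preference comparison, and the buyer fills a fixed budget with the best estimated-quality-per-dollar bids. This is an illustrative greedy procedure, not the paper's actual auction protocol:

```python
# Budget-constrained selection of preference annotations by quality per dollar.
bids = [  # (annotator, price in $, estimated label quality in [0, 1]) -- toy values
    ("a1", 0.50, 0.90), ("a2", 0.20, 0.60), ("a3", 0.40, 0.85), ("a4", 0.10, 0.30),
]
budget = 0.90

ranked = sorted(bids, key=lambda b: b[2] / b[1], reverse=True)  # quality per dollar
accepted, spent = [], 0.0
for name, price, quality in ranked:
    if spent + price <= budget:
        accepted.append(name)
        spent += price
print(accepted, round(spent, 2))   # the feedback actually purchased for fine-tuning
```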
By: | Jiaxing Yang |
Abstract: | Structural prediction has long been considered critical in RNA research, especially following the success of AlphaFold2 in protein studies, which has drawn significant attention to the field. While recent advances in machine learning and data accumulation have effectively addressed many biological tasks, particularly in protein-related research, RNA structure prediction remains a significant challenge due to data limitations. Obtaining RNA structural data is difficult because traditional methods such as nuclear magnetic resonance spectroscopy, X-ray crystallography, and electron microscopy are expensive and time-consuming. Although several RNA 3D structure prediction methods have been proposed, their accuracy is still limited. Predicting RNA structural information at another level, such as distance maps, therefore remains highly valuable. Distance maps provide a simplified representation of the spatial constraints between nucleotides, capturing essential relationships without requiring a full 3D model. This intermediate level of structural information can guide more accurate 3D modeling and is computationally less intensive, making it a useful tool for improving structural predictions. In this work, we demonstrate that, using only primary sequence information, we can accurately infer the distances between RNA bases by utilizing a large pretrained RNA language model coupled with a well-trained downstream transformer. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.16333 |
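A minimal sketch of turning per-position sequence embeddings into a pairwise distance map: pair every position with every other and regress a non-negative distance. The pretrained RNA language model is stubbed with random embeddings, and the pairing head is an assumed construction, not the paper's architecture:

```python
# Pairwise distance-map head over sequence embeddings.
import torch
import torch.nn as nn

L, d = 40, 64
h = torch.randn(L, d)                     # stand-in for pretrained RNA-LM embeddings

pair = torch.cat([h[:, None, :].expand(L, L, d),
                  h[None, :, :].expand(L, L, d)], dim=-1)   # (L, L, 2d) pair features
head = nn.Sequential(nn.Linear(2 * d, 32), nn.ReLU(),
                     nn.Linear(32, 1), nn.Softplus())       # Softplus keeps distances >= 0
dist_map = head(pair).squeeze(-1)         # (L, L) predicted base-to-base distances
print(dist_map.shape)
```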
By: | Ali Mehrabian; Ehsan Hoseinzade; Mahdi Mazloum; Xiaohong Chen |
Abstract: | Stock markets play an important role in the global economy, where accurate stock price predictions can lead to significant financial returns. While existing transformer-based models have outperformed long short-term memory networks and convolutional neural networks in financial time series prediction, their high computational complexity and memory requirements limit their practicality for real-time trading and long-sequence data processing. To address these challenges, we propose SAMBA, an innovative framework for stock return prediction that builds on the Mamba architecture and integrates graph neural networks. SAMBA achieves near-linear computational complexity by utilizing a bidirectional Mamba block to capture long-term dependencies in historical price data and employing adaptive graph convolution to model dependencies between daily stock features. Our experimental results demonstrate that SAMBA significantly outperforms state-of-the-art baseline models in prediction accuracy while maintaining low computational complexity. The code and datasets are available at github.com/Ali-Meh619/SAMBA. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.03707 |
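A minimal sketch of the adaptive graph convolution idea named above: the graph over stocks is not given but learned from node embeddings, then used to mix daily features across stocks. The softmax/ReLU adjacency construction and the dimensions are common choices from the adaptive-GCN literature, assumed here rather than taken from SAMBA's code:

```python
# Adaptive graph convolution: learn the stock-to-stock adjacency end-to-end.
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, n_stocks: int, d_emb: int, d_feat: int):
        super().__init__()
        self.E = nn.Parameter(torch.randn(n_stocks, d_emb))  # learnable node embeddings
        self.W = nn.Linear(d_feat, d_feat)

    def forward(self, x):                     # x: (n_stocks, d_feat) daily features
        A = torch.softmax(torch.relu(self.E @ self.E.T), dim=1)  # learned adjacency
        return torch.relu(self.W(A @ x))      # propagate features over the learned graph

layer = AdaptiveGraphConv(n_stocks=30, d_emb=8, d_feat=16)
out = layer(torch.randn(30, 16))
print(out.shape)                              # torch.Size([30, 16])
```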
By: | Lisa D. Cook |
Date: | 2024–10–02 |
URL: | https://d.repec.org/n?u=RePEc:fip:fedgsq:98899 |
By: | Alexander Bick; Adam Blandin; David Deming |
Abstract: | An analysis suggests that generative AI has been quickly and widely adopted at home and in the workplace, with about 40% of the U.S. population ages 18 to 64 using it to some degree. |
Keywords: | generative artificial intelligence (AI); labor productivity; technology adoption |
Date: | 2023–09–23 |
URL: | https://d.repec.org/n?u=RePEc:fip:l00001:98843 |
By: | Ronald Richman; Salvatore Scognamiglio; Mario V. Wüthrich |
Abstract: | Inspired by the great success of Transformers in Large Language Models, such architectures are increasingly being applied to tabular data. This is achieved by embedding the tabular data into low-dimensional Euclidean spaces, resulting in structures similar to time-series data. We introduce a novel credibility mechanism into this Transformer architecture. The credibility mechanism is based on a special token that should be seen as an encoding of a credibility-weighted average of prior information and observation-based information. We demonstrate that this novel credibility mechanism is very beneficial for stabilizing training, and that our Credibility Transformer leads to predictive models that are superior to state-of-the-art deep learning models. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.16653 |
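A minimal sketch of one reading of the credibility mechanism described above: a special token formed as a credibility-weighted average of a learned prior and an observation-based (encoder-derived) token. The sigmoid-gated weighting is an assumption, not the paper's exact formulation:

```python
# Credibility-weighted token: z * observation + (1 - z) * prior, z in (0, 1).
import torch
import torch.nn as nn

class CredibilityToken(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.prior = nn.Parameter(torch.zeros(d_model))   # learned prior information
        self.logit_z = nn.Parameter(torch.zeros(1))       # credibility-weight parameter

    def forward(self, obs_token):             # observation-based token from the encoder
        z = torch.sigmoid(self.logit_z)       # credibility weight in (0, 1)
        return z * obs_token + (1 - z) * self.prior

tok = CredibilityToken(d_model=32)
print(tok(torch.randn(32)).shape)             # torch.Size([32])
```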
By: | Marco Bornstein; Zora Che; Suhas Julapalli; Abdirisak Mohamed; Amrit Singh Bedi; Furong Huang |
Abstract: | In an era of "moving fast and breaking things", regulators have moved slowly to pick up the safety, bias, and legal pieces left in the wake of broken Artificial Intelligence (AI) deployment. Since AI models, such as large language models, are able to push misinformation and stoke division within our society, it is imperative for regulators to employ a framework that mitigates these dangers and ensures user safety. While there is much-warranted discussion about how to address the safety, bias, and legal woes of state-of-the-art AI models, rigorous and realistic mathematical frameworks for regulating AI safety are lacking. We take on this challenge, proposing an auction-based regulatory mechanism that provably incentivizes model-building agents (i) to deploy safer models and (ii) to participate in the regulation process. We provably guarantee, via derived Nash Equilibria, that each participating agent's best strategy is to submit a model safer than a prescribed minimum-safety threshold. Empirical results show that our regulatory auction boosts safety and participation rates by 20% and 15% respectively, outperforming simple regulatory frameworks that merely enforce minimum safety standards. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.01871 |
By: | Bertin Martens |
Abstract: | This working paper explores the tension between rapidly increasing artificial intelligence investment costs and the slower pace of productivity growth. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:bre:wpaper:node_10375 |
By: | Dae-Hyun Yoo; Caterina Giannetti |
Abstract: | This paper presents a principal-agent model for aligning artificial intelligence (AI) behaviors with human ethical objectives. In this framework, the end-user acts as the principal, offering a contract to the system developer (the agent) that specifies desired ethical alignment levels for the AI system. This incentivizes the developer to align the AI’s objectives with ethical considerations, fostering trust and collaboration. When ethical alignment is unobservable and the developer is risk-neutral, the optimal contract achieves the same alignment and expected utilities as when it is observable. For observable alignment levels, a fixed reward is uniquely optimal for strictly risk-averse developers, while for risk-neutral developers, a fixed reward is one of several optimal options. Our findings demonstrate that even a basic principal-agent model can enhance the understanding of how to balance responsibility between users and developers in the pursuit of ethical AI. Users seeking higher ethical alignment must compensate developers appropriately, and they also share responsibility for ethical AI by adhering to design specifications and regulations. |
Keywords: | AI Ethics, Ethical Alignment, Principal-Agent Model, Contract Theory, Responsibility Allocation, Economic Incentives |
JEL: | D82 D86 O33 |
Date: | 2024–10–01 |
URL: | https://d.repec.org/n?u=RePEc:pie:dsedps:2024/313 |
By: | Matteo Tranchero; Cecil-Francis Brenninkmeijer; Arul Murugan; Abhishek Nagaraj |
Abstract: | Large Language Models (LLMs) are proving to be a powerful toolkit for management and organizational research. While early work has largely focused on the value of these tools for data processing and replicating survey-based research, the potential of LLMs for theory building is yet to be recognized. We argue that LLMs can accelerate the pace at which researchers can develop, validate, and extend strategic management theory. We propose a novel framework called Generative AI-Based Experimentation (GABE) that enables researchers to conduct exploratory in silico experiments that can mirror the complexities of real-world organizational settings, featuring multiple agents and strategic interdependencies. This approach is unique because it allows researchers to unpack the mechanisms behind results by directly modifying agents’ roles, preferences, and capabilities, and asking them to reveal the explanations behind decisions. We apply this framework to a novel theory studying strategic exploration under uncertainty. We show how our framework can not only replicate the results from experiments with human subjects at a much lower cost, but can also be used to extend theory by clarifying boundary conditions and uncovering mechanisms. We conclude that LLMs possess tremendous potential to complement existing methods for theorizing in strategy and, more broadly, the social sciences. |
JEL: | M10 O3 |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33033 |
By: | Ruslan Goyenko; Bryan T. Kelly; Tobias J. Moskowitz; Yinan Su; Chao Zhang |
Abstract: | Portfolio optimization focuses on risk and return prediction, yet implementation costs critically matter. Predicting trading costs is challenging because costs depend on trade size and trader identity, thus impeding a generic solution. We focus on a component of trading costs that applies universally – trading volume. Individual stock trading volume is highly predictable, especially with machine learning. We model the economic benefits of predicting volume through a portfolio framework that trades off tracking error versus net-of-cost performance – translating volume prediction into net-of-cost alpha. The economic benefits of predicting individual stock volume are as large as those from stock return predictability. |
JEL: | C45 C53 C55 G00 G11 G12 G17 |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:33037 |
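A stylized sketch of the portfolio framework described above: choose how far to trade toward a target portfolio, trading off tracking error against costs that grow with trade size relative to predicted volume. The square-root impact form and all parameter values are illustrative assumptions, not the paper's calibration:

```python
# Trade-off between tracking error and volume-dependent trading costs.
import numpy as np

w0 = np.array([0.30, 0.40, 0.30])        # current portfolio weights
w_star = np.array([0.50, 0.20, 0.30])    # target (alpha-driven) weights
vol_hat = np.array([0.08, 0.01, 0.05])   # ML-predicted volume (fraction of portfolio value)
k = 0.05                                 # impact coefficient (toy value)

def objective(theta):
    """theta in [0, 1]: fraction of the gap to trade today."""
    trade = theta * (w_star - w0)
    tracking = np.sum(((1 - theta) * (w_star - w0)) ** 2)   # distance left to target
    # Impact cost rises when the trade is large relative to predicted volume:
    cost = k * np.sum(np.abs(trade) * np.sqrt(np.abs(trade) / vol_hat))
    return tracking + cost

thetas = np.linspace(0, 1, 101)
best = thetas[np.argmin([objective(t) for t in thetas])]
print("optimal fraction to trade:", best)  # low predicted volume -> trade more gradually
```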
By: | Zhen Wang (School of Cybersecurity, and School of Artificial Intelligence, OPtics and ElectroNics); Ruiqi Song (School of Cybersecurity, and School of Artificial Intelligence, OPtics and ElectroNics); Chen Shen (Faculty of Engineering Sciences, Kyushu University, Japan); Shiya Yin (School of Cybersecurity, and School of Artificial Intelligence, OPtics and ElectroNics); Zhao Song (School of Computing, Engineering and Digital Technologies, Teesside University, United Kingdom); Balaraju Battu (Computer Science, Science Division, New York University Abu Dhabi, UAE); Lei Shi (School of Statistics and Mathematics, Yunnan University of Finance and Economics, China); Danyang Jia (School of Cybersecurity, and School of Artificial Intelligence, OPtics and ElectroNics); Talal Rahwan (Computer Science, Science Division, New York University Abu Dhabi, UAE); Shuyue Hu (Shanghai Artificial Intelligence Laboratory, China) |
Abstract: | In social dilemmas where collective and self-interests are at odds, people typically cooperate less with machines than with fellow humans, a phenomenon termed the machine penalty. Overcoming this penalty is critical for successful human-machine collectives, yet current solutions often involve ethically questionable tactics, like concealing machines' non-human nature. In this study, with 1,152 participants, we explore the possibility of overcoming this penalty by using Large Language Models (LLMs), in scenarios where communication is possible between the interacting parties. We design three types of LLMs: (i) Cooperative, aiming to assist its human associate; (ii) Selfish, focusing solely on maximizing its self-interest; and (iii) Fair, balancing its own and the collective interest, while slightly prioritizing self-interest. Our findings reveal that, when interacting with humans, fair LLMs are able to induce cooperation levels comparable to those observed in human-human interactions, even when their non-human nature is fully disclosed. In contrast, selfish and cooperative LLMs fail to achieve this goal. Post-experiment analysis shows that all three types of LLMs succeed in forming mutual cooperation agreements with humans, yet only fair LLMs, which occasionally break their promises, are capable of instilling a perception among humans that cooperating with them is the social norm, and of eliciting positive views on their trustworthiness, mindfulness, intelligence, and communication quality. Our findings suggest that for effective human-machine cooperation, bot manufacturers should avoid designing machines with mere rational decision-making or a sole focus on assisting humans. Instead, they should design machines capable of judiciously balancing their own interest and the interest of humans. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.03724 |
By: | Akhter, Fahmida; Bhattacharjee, Ankita; Hasan, Amena |
Abstract: | This research investigates the application of Artificial Intelligence (AI) in Human Resource Management (HRM) in Bangladesh. Through interviews and case studies, the study explores the current state of AI adoption, challenges, opportunities, and ethical considerations. Key findings include limited AI adoption, primarily focused on recruitment, and challenges such as a lack of expertise and data privacy concerns. AI offers potential benefits like improved efficiency and decision-making. The study recommends that organizations invest in AI expertise, address privacy concerns, and develop ethical guidelines. Policymakers should support AI education, reskilling, and a favorable regulatory environment. AI can significantly enhance HR practices in Bangladesh if implemented responsibly and ethically. |
Keywords: | Artificial intelligence; Human Resource Management; Efficiency; Bangladesh |
JEL: | A2 C8 P0 |
Date: | 2024–01–06 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:122222 |
By: | Kiwhan Song; Mohamed Ali Dhraief; Muhua Xu; Locke Cai; Xuhao Chen; Arvind; Jie Chen |
Abstract: | Anti-Money Laundering (AML) involves the identification of money laundering crimes in financial activities, such as cryptocurrency transactions. Recent studies have advanced AML through the lens of graph-based machine learning, modeling the web of financial transactions as a graph and developing graph methods to identify suspicious activities. For instance, a recent effort on open-sourcing datasets and benchmarks, Elliptic2, treats a set of Bitcoin addresses considered to be controlled by the same entity as a graph node, and transactions among entities as graph edges. This modeling reveals the "shape" of a money laundering scheme: a subgraph on the blockchain. Despite the attractive subgraph classification results benchmarked by that paper, competitive methods remain expensive to apply due to the massive size of the graph; moreover, existing methods require candidate subgraphs as inputs, which may not be available in practice. In this work, we introduce RevTrack, a graph-based framework that enables large-scale AML analysis at lower cost and higher accuracy. The key idea is to track the initial senders and the final receivers of funds; these entities offer a strong indication of the nature (licit vs. suspicious) of their respective subgraph. Based on this framework, we propose RevClassify, a neural network model for subgraph classification. Additionally, we address the practical problem where subgraph candidates are not given by proposing RevFilter, a method that identifies new suspicious subgraphs by iteratively filtering licit transactions using RevClassify. Benchmarking these methods on Elliptic2, a new standard for AML, we show that RevClassify outperforms state-of-the-art subgraph classification techniques in both cost and accuracy. Furthermore, we demonstrate the effectiveness of RevFilter in discovering new suspicious subgraphs, confirming its utility for practical AML. |
Date: | 2024–10 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2410.08394 |
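A minimal sketch of the key idea named above: in a candidate money-flow subgraph, the initial senders (no incoming funds) and final receivers (no outgoing funds) summarize the flow cheaply. The classification step is only indicated in a comment; RevClassify itself is a neural model:

```python
# Extract the endpoints of a money flow from an entity-transaction subgraph.
edges = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")]   # toy entity-to-entity transactions

senders = {u for u, _ in edges}
receivers = {v for _, v in edges}
initial_senders = senders - receivers     # funds originate here
final_receivers = receivers - senders     # funds end up here
print(initial_senders, final_receivers)   # {'A'} {'C'}
# A classifier then scores features of (initial_senders, final_receivers) as
# licit vs. suspicious, avoiding computation over the full massive graph.
```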
By: | Julius Golej; Andrej Adamušin; Miroslav Panik |
Abstract: | An automated valuation model (AVM) is computerized, statistically based software that collects and uses Big Data in the real estate sector. Its algorithm uses property information such as comparable and historical sales, property characteristics, price trends, and any other information relevant to the property. The effectiveness of an AVM depends on the amount and, especially, the quality of the data used, because only high-quality data can be considered reliable and representative. At the same time, machine data collection and evaluation in the field of real estate will never be as accurate as manual valuation, where the appraiser can, through a physical inspection and based on their knowledge and experience, take into account factors that are not captured or documented in the collected data. Even when an appraiser uses specific methods to determine the price of a property, a subjectivity factor always enters the valuation process, which can create a certain deviation in human-generated sales prices compared to the price generated by the software. This contribution is devoted to the creation of an AVM for estimating real estate sales prices. The authors collaborated on building such a model for practical use in Slovak conditions, with the main goal of increasing the efficiency and productivity of work in the field of real estate valuation. |
Keywords: | automated valuation model; Big Data; real estate market; real estate prices |
JEL: | R3 |
Date: | 2024–01–01 |
URL: | https://d.repec.org/n?u=RePEc:arz:wpaper:eres2024-038 |
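A minimal sketch of an AVM's statistical core: a hedonic model mapping property characteristics to a predicted price. The random-forest learner, features, and synthetic data are illustrative assumptions; production AVMs add comparable sales, price trends, and many more inputs:

```python
# Hedonic price model as the core of a toy AVM.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.uniform(30, 150, n),     # floor area (m^2)
    rng.integers(1, 6, n),       # number of rooms
    rng.uniform(0, 40, n),       # building age (years)
])
price = 2000 * X[:, 0] + 8000 * X[:, 1] - 500 * X[:, 2] + rng.normal(0, 5000, n)

avm = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, price)
print(avm.predict([[80, 3, 10]]))  # estimated price: 80 m^2, 3 rooms, 10 years old
```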
By: | Simon Dohn; Kristoffer Arnsfelt Hansen; Asger Klinkby |
Abstract: | We study computational problems in financial networks of banks connected by debt contracts and credit default swaps (CDSs). A main problem is to determine \emph{clearing} payments, for instance right after some banks have been exposed to a financial shock. Previous works have shown the $\varepsilon$-approximate version of the problem to be $\mathrm{PPAD}$-complete and the exact problem $\mathrm{FIXP}$-complete. We show that $\mathrm{PPAD}$-hardness holds when $\varepsilon \approx 0.101$, improving the previously best bound significantly. Due to the fact that the clearing problem typically does not have a unique solution, or that it may not have a solution at all in the presence of default costs, several natural decision problems are also of great interest. We show two such problems to be $\exists\mathbb{R}$-complete, complementing previous $\mathrm{NP}$-hardness results for the approximate setting. |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2409.18717 |
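For intuition on what a clearing vector is, here is a sketch of computing clearing payments by fixed-point iteration in an Eisenberg-Noe-style network with debt contracts only. The CDSs and default costs that make the paper's setting PPAD/FIXP-hard are deliberately omitted, and the numbers are toy values:

```python
# Clearing payments via fixed-point iteration (debt-only network, toy data).
import numpy as np

L = np.array([[0, 2, 1],       # L[i, j] = nominal liability of bank i to bank j
              [1, 0, 3],
              [2, 1, 0]], dtype=float)
e = np.array([1.0, 0.5, 2.0])  # external assets of each bank
p_bar = L.sum(axis=1)          # total liabilities per bank
Pi = L / p_bar[:, None]        # relative liability matrix (rows sum to 1)

p = p_bar.copy()
for _ in range(100):
    # Each bank pays the minimum of what it owes and what it has available:
    p_new = np.minimum(p_bar, e + Pi.T @ p)
    if np.allclose(p_new, p):
        break
    p = p_new
print("clearing payments:", p)  # banks paying less than p_bar are in default
```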