nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒10‒16
25 papers chosen by



  1. Tasks Makyth Models: Machine Learning Assisted Surrogates for Tipping Points By Gianluca Fabiani; Nikolaos Evangelou; Tianqi Cui; Juan M. Bello-Rivas; Cristina P. Martin-Linares; Constantinos Siettos; Ioannis G. Kevrekidis
  2. Enhancing Healthcare Cost Forecasting: A Machine Learning Model for Resource Allocation in Heterogeneous Regions By Caravaggio, Nicola; Resce, Giuliano
  3. pystacked and ddml: machine learning for prediction and causal inference in Stata By Achim Ahrens; Christian B. Hansen; Mark E. Schaffer; Thomas Wiemann
  4. Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents By Foozhan Ataiefard; Hadi Hemmati
  5. A compendium of data sources for data science, machine learning, and artificial intelligence By Paul Bilokon; Oleksandr Bilokon; Saeed Amen
  6. gingado: a machine learning library focused on economics and finance By Douglas Kiarelly Godoy de Araujo
  7. Predicting Changes in Canadian Housing Markets with Machine Learning By Johan Brannlund; Helen Lao; Maureen MacIsaac; Jing Yang
  8. Optimizing pessimism in dynamic treatment regimes: a Bayesian learning approach By Zhou, Yunzhe; Qi, Zhengling; Shi, Chengchun; Li, Lexin
  9. Commodities Trading through Deep Policy Gradient Methods By Jonas Hanetho
  10. GPT-InvestAR: Enhancing Stock Investment Strategies through Annual Report Analysis with Large Language Models By Udit Gupta
  11. EarnHFT: Efficient Hierarchical Reinforcement Learning for High Frequency Trading By Molei Qin; Shuo Sun; Wentao Zhang; Haochong Xia; Xinrun Wang; Bo An
  12. A Review of Machine Learning Commands in Stata: Performance and Usability Evaluation By Giovanni Cerulli
  13. Deep learning model fragility and implications for financial stability and regulation By Kumar, Rishabh; Koshiyama, Adriano; da Costa, Kleyton; Kingsman, Nigel; Tewarrie, Marvin; Kazim, Emre; Roy, Arunita; Treleaven, Philip; Lovell, Zac
  14. Electricity Consumption Forecasting in Algeria using ARIMA and Long Short-Term Memory Neural Network By Sahed Abdelkader; Kahoui Hacene
  15. Generative AI for End-to-End Limit Order Book Modelling: A Token-Level Autoregressive Generative Model of Message Flow Using a Deep State Space Network By Peer Nagy; Sascha Frey; Silvia Sapora; Kang Li; Anisoara Calinescu; Stefan Zohren; Jakob Foerster
  16. How to check a simulation study By Ian R White; Tra My Pham; Matteo Quartagno; Tim P Morris
  17. On the benefits of robo-advice in financial markets By Lambrecht, Marco; Oechssler, Jörg; Weidenholzer, Simon
  18. Modeling intervention: The Political element in Barbara Bergmann's micro-to-macro simulation projects By Chassonnery-Zaïgouche, Cléo; Goutsmedt, Aurélien
  19. Decoding GPT's hidden "rationality" of cooperation By Bauer, Kevin; Liebich, Lena; Hinz, Oliver; Kosfeld, Michael
  20. Parsimonious Wasserstein Text-mining By Gadat, Sébastien; Villeneuve, Stéphane
  21. Nonparametric estimation of k-modal taste heterogeneity for group level agent-based mixed logit By Xiyuan Ren; Joseph Y. J. Chow
  22. GPT has become financially literate: Insights from financial literacy tests of GPT and a preliminary test of how people use it as a source of advice By Paweł Niszczota; Sami Abbas
  23. Artificial Intelligence and Its Impact on Information Technology (IT) Service Sector in Bangladesh By Fahmida Khatun; Nadia Nawrin
  24. Combining Forecasts under Structural Breaks Using Graphical LASSO By Tae-Hwy Lee; Ekaterina Seregina
  25. InvestLM: A Large Language Model for Investment using Financial Domain Instruction Tuning By Yi Yang; Yixuan Tang; Kar Yan Tam

  1. By: Gianluca Fabiani; Nikolaos Evangelou; Tianqi Cui; Juan M. Bello-Rivas; Cristina P. Martin-Linares; Constantinos Siettos; Ioannis G. Kevrekidis
    Abstract: We present a machine learning (ML)-assisted framework bridging manifold learning, neural networks, Gaussian processes, and Equation-Free multiscale modeling, for (a) detecting tipping points in the emergent behavior of complex systems, and (b) characterizing probabilities of rare events (here, catastrophic shifts) near them. Our illustrative example is an event-driven, stochastic agent-based model (ABM) describing the mimetic behavior of traders in a simple financial market. Given high-dimensional spatiotemporal data -- generated by the stochastic ABM -- we construct reduced-order models for the emergent dynamics at different scales: (a) mesoscopic Integro-Partial Differential Equations (IPDEs); and (b) mean-field-type Stochastic Differential Equations (SDEs) embedded in a low-dimensional latent space, targeted to the neighborhood of the tipping point. We contrast the uses of the different models and the effort involved in learning them.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.14334&r=cmp
  2. By: Caravaggio, Nicola; Resce, Giuliano
    Abstract: Accurate forecasting of healthcare costs is essential for making decisions, shaping policies, preparing finances, and managing resources effectively, but traditional econometric models fall short in addressing this policy challenge adequately. This paper introduces machine learning to predict healthcare expenditure in systems with heterogeneous regional needs. The Italian NHS is used as a case study, with administrative data spanning the years 1994 to 2019. The empirical analysis utilises four machine learning algorithms (Elastic-Net, Gradient Boosting, Random Forest, and Support Vector Regression) and a multivariate regression as a baseline. Gradient Boosting emerges as the superior algorithm in out-of-sample prediction performance; even when applied to 2019 data, the models trained up to 2018 demonstrate robust forecasting abilities. Important predictors of expenditure include temporal factors, average family size, regional area, GDP per capita, and life expectancy. The remarkable effectiveness of the model demonstrates that machine learning can be efficiently employed to distribute national healthcare funds to areas with heterogeneous needs.
    Keywords: Machine Learning, National Health System, Healthcare expenditure
    JEL: C54 H51 I10
    Date: 2023–10–03
    URL: http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp23090&r=cmp
  3. By: Achim Ahrens (ETH Zürich); Christian B. Hansen (University of Chicago); Mark E. Schaffer (Heriot-Watt University); Thomas Wiemann (University of Chicago)
    Abstract: pystacked implements stacked generalization (Wolpert 1992) for regression and binary classification via Python’s scikit-learn. Stacking is an ensemble method that combines multiple supervised machine learners — the "base" or "level-0" learners — into a single learner. The currently-supported base learners include regularized regression (lasso, ridge, elastic net), random forest, gradient boosted trees, support vector machines, and feed-forward neural nets (multilayer perceptron). pystacked can also be used to fit a single base learner and thus provides an easy-to-use API for scikit-learn’s machine learning algorithms. ddml implements algorithms for causal inference aided by supervised machine learning as proposed in "Double/debiased machine learning for treatment and structural parameters" (Econometrics Journal 2018). Five different models are supported, allowing for binary or continuous treatment variables and endogeneity in the presence of high-dimensional controls and/or instrumental variables. ddml is compatible with many existing supervised machine learning programs in Stata, and in particular has integrated support for pystacked, making it straightforward to use machine learner ensemble methods in causal inference applications.
    Date: 2023–09–10
    URL: http://d.repec.org/n?u=RePEc:boc:lsug23:12&r=cmp
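    Stacked generalization as described in the abstract can be sketched directly with the scikit-learn machinery that pystacked wraps; the dataset, the choice of base learners, and all hyperparameters below are illustrative assumptions, not the package's defaults:

```python
# Illustrative stacking ensemble: base ("level-0") learners are combined by a
# final estimator trained on their cross-validated predictions.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import ElasticNet, LinearRegression

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

base = [("enet", ElasticNet(random_state=0)),
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0))]
stack = StackingRegressor(estimators=base,
                          final_estimator=LinearRegression(), cv=5)
stack.fit(X, y)

# Coefficients of the final estimator act as weights on the base learners.
print("stacking weights:", stack.final_estimator_.coef_)
```

    In pystacked these weights are reported by the package itself; the sketch only shows the underlying mechanism.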
  4. By: Foozhan Ataiefard; Hadi Hemmati
    Abstract: In recent years, deep reinforcement learning (Deep RL) has been successfully implemented as a smart agent in many systems such as complex games, self-driving cars, and chat-bots. One of the interesting use cases of Deep RL is its application as an automated stock trading agent. In general, any automated trading agent is prone to manipulation by adversaries in the trading environment. Thus, studying their robustness is vital for their success in practice. However, the typical mechanism for studying RL robustness, which is based on white-box gradient-based adversarial sample generation techniques (like FGSM), is obsolete for this use case, since the models are protected behind secure international exchange APIs, such as NASDAQ. In this research, we demonstrate that a "gray-box" approach for attacking a Deep RL-based trading agent is possible by trading in the same stock market, with no extra access to the trading agent. In our proposed approach, an adversary agent uses a hybrid Deep Neural Network as its policy, consisting of convolutional and fully connected layers. On average, over three simulated trading market configurations, the adversary policy proposed in this research is able to reduce the reward values by 214.17%, which results in reducing the potential profits of the baseline by 139.4%, the ensemble method by 93.7%, and an automated trading software developed by our industrial partner by 85.5%, while consuming significantly less budget than the victims (427.77%, 187.16%, and 66.97%, respectively).
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.14615&r=cmp
  5. By: Paul Bilokon; Oleksandr Bilokon; Saeed Amen
    Abstract: Recent advances in data science, machine learning, and artificial intelligence, such as the emergence of large language models, are leading to an increasing demand for data that can be processed by such models. While data sources are application-specific, and it is impossible to produce an exhaustive list of such data sources, it seems that a comprehensive, rather than complete, list would still benefit data scientists and machine learning experts of all levels of seniority. The goal of this publication is to provide just such an (inevitably incomplete) list -- or compendium -- of data sources across multiple areas of applications, including finance and economics, legal (laws and regulations), life sciences (medicine and drug discovery), news sentiment and social media, retail and ecommerce, satellite imagery, shipping and logistics, and sports.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.05682&r=cmp
  6. By: Douglas Kiarelly Godoy de Araujo
    Abstract: gingado is an open source Python library that offers a variety of convenience functions and objects to support usage of machine learning in economics research. It is designed to be compatible with widely used machine learning libraries. gingado facilitates augmenting user datasets with relevant data directly obtained from official sources by leveraging the SDMX data and metadata sharing protocol. The library also offers a benchmarking object that creates a random forest with a reasonably good performance out-of-the-box and, if provided with candidate models, retains the one with the best performance. gingado also includes methods to help with machine learning model documentation, including ethical considerations. Further, gingado provides flexible simulation of panel datasets with a variety of non-linear causal treatment effects, to support causal model prototyping and benchmarking. The library is under active development and new functionalities are periodically added or improved.
    Keywords: machine learning, open source, data access, documentation
    JEL: C87 C14 C82
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:bis:biswps:1122&r=cmp
  7. By: Johan Brannlund; Helen Lao; Maureen MacIsaac; Jing Yang
    Abstract: This paper examines whether machine learning (ML) algorithms can outperform a linear model in predicting monthly growth in Canada of both house prices and existing home sales. The aim is to apply two widely used ML techniques (support vector regression and multilayer perceptron) in economic forecasting to understand their scopes and limitations. We find that the two ML algorithms can perform better than a linear model in forecasting house prices and resales. However, the improvement in forecast accuracy is not always statistically significant. Therefore, we cannot systematically conclude using traditional time-series data that the ML models outperform the linear model in a significant way. Future research should explore non-traditional data sets to fully take advantage of ML methods.
    Keywords: Econometric and statistical methods; Financial markets; Housing
    JEL: A C45 C53 R2 R3 D2
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:bca:bocadp:23-21&r=cmp
  8. By: Zhou, Yunzhe; Qi, Zhengling; Shi, Chengchun; Li, Lexin
    Abstract: In this article, we propose a novel pessimism-based Bayesian learning method for optimal dynamic treatment regimes in the offline setting. When the coverage condition does not hold, which is common for offline data, the existing solutions would produce sub-optimal policies. The pessimism principle addresses this issue by discouraging recommendation of actions that are less explored conditioning on the state. However, nearly all pessimism-based methods rely on a key hyper-parameter that quantifies the degree of pessimism, and the performance of the methods can be highly sensitive to the choice of this parameter. We propose to integrate the pessimism principle with Thompson sampling and Bayesian machine learning for optimizing the degree of pessimism. We derive a credible set whose boundary uniformly lower bounds the optimal Q-function, and thus we do not require additional tuning of the degree of pessimism. We develop a general Bayesian learning method that works with a range of models, from Bayesian linear basis models to Bayesian neural network models. We develop the computational algorithm based on variational inference, which is highly efficient and scalable. We establish the theoretical guarantees of the proposed method, and show empirically that it outperforms the existing state-of-the-art solutions through both simulations and a real data example.
    JEL: C1
    Date: 2023–01–20
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:118233&r=cmp
  9. By: Jonas Hanetho
    Abstract: Algorithmic trading has gained attention due to its potential for generating superior returns. This paper investigates the effectiveness of deep reinforcement learning (DRL) methods in algorithmic commodities trading. It formulates the commodities trading problem as a continuous, discrete-time stochastic dynamical system. The proposed system employs a novel time-discretization scheme that adapts to market volatility, enhancing the statistical properties of subsampled financial time series. To optimize transaction-cost- and risk-sensitive trading agents, two policy gradient algorithms, namely actor-based and actor-critic-based approaches, are introduced. These agents utilize CNNs and LSTMs as parametric function approximators to map historical price observations to market positions. Backtesting on front-month natural gas futures demonstrates that DRL models increase the Sharpe ratio by 83% compared to the buy-and-hold baseline. Additionally, the risk profile of the agents can be customized through a hyperparameter that regulates risk sensitivity in the reward function during the optimization process. The actor-based models outperform the actor-critic-based models, while the CNN-based models show a slight performance advantage over the LSTM-based models.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.00630&r=cmp
  10. By: Udit Gupta
    Abstract: Annual Reports of publicly listed companies contain vital information about their financial health which can help assess the potential impact on the firm's stock price. These reports are comprehensive in nature, going up to, and sometimes exceeding, 100 pages. Analysing these reports is cumbersome even for a single firm, let alone the whole universe of firms that exist. Over the years, financial experts have become proficient in extracting valuable information from these documents relatively quickly. However, this requires years of practice and experience. This paper aims to simplify the process of assessing Annual Reports of all the firms by leveraging the capabilities of Large Language Models (LLMs). The insights generated by the LLM are compiled in a Quant-styled dataset and augmented by historical stock price data. A Machine Learning model is then trained with LLM outputs as features. The walk-forward test results show promising outperformance with respect to S&P 500 returns. This paper intends to provide a framework for future work in this direction. To facilitate this, the code has been released as open source.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.03079&r=cmp
  11. By: Molei Qin; Shuo Sun; Wentao Zhang; Haochong Xia; Xinrun Wang; Bo An
    Abstract: High-frequency trading (HFT) uses computer algorithms to make trading decisions in short time scales (e.g., second-level), which is widely used in the Cryptocurrency (Crypto) market (e.g., Bitcoin). Reinforcement learning (RL) in financial research has shown stellar performance on many quantitative trading tasks. However, most methods focus on low-frequency trading, e.g., day-level, which cannot be directly applied to HFT because of two challenges. First, RL for HFT involves dealing with extremely long trajectories (e.g., 2.4 million steps per month), which is hard to optimize and evaluate. Second, the dramatic price fluctuations and market trend changes of Crypto make existing algorithms fail to maintain satisfactory performance. To tackle these challenges, we propose an Efficient hieArchical Reinforcement learNing method for High Frequency Trading (EarnHFT), a novel three-stage hierarchical RL framework for HFT. In stage I, we compute a Q-teacher, i.e., the optimal action value based on dynamic programming, for enhancing the performance and training efficiency of second-level RL agents. In stage II, we construct a pool of diverse RL agents for different market trends, distinguished by return rates, where hundreds of RL agents are trained with different preferences of return rates and only a tiny fraction of them will be selected into the pool based on their profitability. In stage III, we train a minute-level router which dynamically picks a second-level agent from the pool to achieve stable performance across different markets. Through extensive experiments in various market trends on Crypto markets in a high-fidelity simulation trading environment, we demonstrate that EarnHFT significantly outperforms 6 state-of-the-art baselines in 6 popular financial criteria, exceeding the runner-up by 30% in profitability.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.12891&r=cmp
  12. By: Giovanni Cerulli (CNR-IRCRES, National Research Council of Italy, Research Institute on Sustainable Economic Growth)
    Abstract: This paper provides a comprehensive survey reviewing machine learning (ML) commands in Stata. I systematically categorize and summarize the available ML commands in Stata and evaluate their performance and usability for different tasks such as classification, regression, clustering, and dimension reduction. I also provide examples of how to use these commands with real-world datasets and compare their performance. This review aims to help researchers and practitioners choose appropriate ML methods and related Stata tools for their specific research questions and datasets, and to improve the efficiency and reproducibility of ML analyses using Stata. I conclude by discussing some limitations and future directions for ML research in Stata.
    Date: 2023–09–10
    URL: http://d.repec.org/n?u=RePEc:boc:lsug23:08&r=cmp
  13. By: Kumar, Rishabh (Bank of England); Koshiyama, Adriano (University College London); da Costa, Kleyton (University College London); Kingsman, Nigel (University College London); Tewarrie, Marvin (Bank of England); Kazim, Emre (University College London); Roy, Arunita (Reserve Bank of Australia); Treleaven, Philip (University College London); Lovell, Zac (Bank of England)
    Abstract: Deep learning models are being utilised increasingly within finance. Given the models are opaque in nature and are now being deployed for internal and consumer facing decisions, there are increasing concerns around the trustworthiness of their results. We test the stability of predictions and explanations of different deep learning models, which differ between each other only via subtle changes to model settings, with each model trained over the same data. Our results show that the models produce similar predictions but different explanations, even when the differences in model architecture are due to arbitrary factors like random seeds. We compare this behaviour with traditional, interpretable, ‘glass-box models’, which show similar accuracies while maintaining stable explanations and predictions. Finally, we show a methodology based on network analysis to compare deep learning models. Our analysis has implications for the adoption and risk management of future deep learning models by regulated institutions.
    Keywords: Deep neural networks; fragility; robustness; explainability; regulation
    JEL: C45 C52 G18
    Date: 2023–09–01
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:1038&r=cmp
  14. By: Sahed Abdelkader (Maghnia University Center); Kahoui Hacene (maghnia University center, Algeria)
    Abstract: Forecasting electricity consumption is necessary for electric grid operation and utility resource planning, as well as to improve energy security and grid resilience. Thus, this research aims to investigate the prediction performance of the ARIMA and LSTM neural network models using electricity consumption data during the period 1990 to 2020. The time series for electricity consumption is divided into 70% for training data and 30% for test data. The results showed that the LSTM model provided better forecasting accuracy than the ARIMA model.
    Keywords: Electricity Consumption, ARIMA, LSTM, Algeria
    JEL: Q47 C53 C45
    Date: 2023–06–04
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04183403&r=cmp
  15. By: Peer Nagy; Sascha Frey; Silvia Sapora; Kang Li; Anisoara Calinescu; Stefan Zohren; Jakob Foerster
    Abstract: Developing a generative model of realistic order flow in financial markets is a challenging open problem, with numerous applications for market participants. Addressing this, we propose the first end-to-end autoregressive generative model that generates tokenized limit order book (LOB) messages. These messages are interpreted by a Jax-LOB simulator, which updates the LOB state. To handle long sequences efficiently, the model employs simplified structured state-space layers to process sequences of order book states and tokenized messages. Using LOBSTER data of NASDAQ equity LOBs, we develop a custom tokenizer for message data, converting groups of successive digits to tokens, similar to tokenization in large language models. Out-of-sample results show promising performance in approximating the data distribution, as evidenced by low model perplexity. Furthermore, the mid-price returns calculated from the generated order flow exhibit a significant correlation with the data, indicating impressive conditional forecast performance. Due to the granularity of generated data, and the accuracy of the model, it offers new application areas for future work beyond forecasting, e.g. acting as a world model in high-frequency financial reinforcement learning applications. Overall, our results invite the use and extension of the model in the direction of autoregressive large financial models for the generation of high-frequency financial data and we commit to open-sourcing our code to facilitate future research.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.00638&r=cmp
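    The digit-group tokenization idea ("converting groups of successive digits to tokens") can be illustrated in a few lines; the two-digit group size and the zero-padding scheme here are assumptions for illustration, not the authors' exact tokenizer:

```python
def tokenize_number(s: str, group: int = 2) -> list[str]:
    """Split a digit string into fixed-size groups of digits,
    left-padding with zeros so every token has the same width."""
    width = -(-len(s) // group) * group   # round length up to a multiple of group
    s = s.zfill(width)
    return [s[i:i + group] for i in range(0, len(s), group)]

# e.g. a price field from an order-book message
print(tokenize_number("314159"))   # ['31', '41', '59']
```

    With a fixed group size the vocabulary stays small (100 tokens for two-digit groups), which is what makes LLM-style autoregressive modelling of message fields tractable.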
  16. By: Ian R White (MRC Clinical Trials Unit at UCL, London, UK); Tra My Pham (MRC Clinical Trials Unit at UCL, London, UK); Matteo Quartagno (MRC Clinical Trials Unit at UCL, London, UK); Tim P Morris (MRC Clinical Trials Unit at UCL, London, UK)
    Abstract: Simulation studies are a powerful tool in biostatistics, but they can be hard to conduct successfully. Sometimes unexpected results are obtained. We offer advice on how to check a simulation study when this occurs, and how to design and conduct the study to give results that are easier to check. Simulation studies should be designed to include some settings where answers are already known. Code should be written in stages and data generating mechanisms should be checked before simulated data are analysed. Results should be explored carefully, with scatterplots of standard error estimates against point estimates being a surprisingly powerful tool. When estimation fails or there are outlying estimates, these should be identified, understood, and dealt with by changing data generating mechanisms or coding realistic hybrid analysis procedures. Finally, we give a series of ideas that have been useful to us in the past for checking unexpected results. Following our advice may help to prevent errors and to improve the quality of published simulation studies. We illustrate the ideas with a simple but realistic simulation study in Stata.
    Date: 2023–09–10
    URL: http://d.repec.org/n?u=RePEc:boc:lsug23:16&r=cmp
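    One of the checks recommended above, comparing model-based standard errors with the empirical spread of the point estimates, can be sketched in a few lines (the paper's worked example is in Stata; the data-generating mechanism below is a minimal hypothetical one):

```python
import numpy as np

# Simulation study with a known answer: estimate the mean of N(1, 2^2).
rng = np.random.default_rng(1)
n_sim, n, mu = 2000, 50, 1.0
est = np.empty(n_sim)
se = np.empty(n_sim)
for i in range(n_sim):
    x = rng.normal(mu, 2.0, size=n)
    est[i] = x.mean()                     # point estimate per repetition
    se[i] = x.std(ddof=1) / np.sqrt(n)    # model-based standard error

bias = est.mean() - mu        # should be near zero
emp_se = est.std(ddof=1)      # empirical SE across repetitions
avg_se = se.mean()            # should agree with emp_se if SEs are valid
print(f"bias={bias:.4f}  empirical SE={emp_se:.4f}  mean model SE={avg_se:.4f}")
```

    A scatterplot of `se` against `est`, as the authors suggest, would reveal failed or outlying repetitions at a glance.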
  17. By: Lambrecht, Marco; Oechssler, Jörg; Weidenholzer, Simon
    Abstract: Robo-advisors are novel tools in financial markets that provide investors with low-cost financial advice, usually based on individual characteristics like risk attitudes. In a portfolio choice experiment running over 10 weeks, we study how much investors benefit from robo-advice. We also study whether robos increase financial market participation. The treatments are whether investors just receive advice, have a robo making all decisions for them, or have to trade on their own. We find no effect on initial market participation. But robos help investors to avoid mistakes, make rebalancing more frequent, and overall yield portfolios much closer to the utility-maximizing ones. Robo-advisors that implement the recommendations by default do significantly better than those that just give advice.
    Keywords: algorithmic trading; experiment; financial markets
    Date: 2023–09–22
    URL: http://d.repec.org/n?u=RePEc:awi:wpaper:0734&r=cmp
  18. By: Chassonnery-Zaïgouche, Cléo (University of Lausanne); Goutsmedt, Aurélien (UC Louvain - F.R.S-FNRS)
    Abstract: Over a period of twelve years, Barbara Bergmann developed several models of the labor market using microsimulation, eventually integrated in a "Transactions Model" of the entire US economy, built with Robert Bennett and published in 1986. The paper reconstructs the history of this modelling enterprise in the context of the debates on the microfoundations of macroeconomics and the role of macroeconomic expertise from the 1970s stagflation to the late 1980s. It shows how a political element, her focus on the distributional effects of policies, was central to her criticism of macroeconomic modelling, and how both her epistemic and political positions were increasingly marginalized in the 1980s.
    Date: 2023–09–14
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:ynmbe&r=cmp
  19. By: Bauer, Kevin; Liebich, Lena; Hinz, Oliver; Kosfeld, Michael
    Abstract: In current discussions on large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence stands central. Using behavioral economic paradigms and structural models, we investigate GPT's cooperativeness in human interactions and assess its rational goal-oriented behavior. We discover that GPT cooperates more than humans and has overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT's behavior isn't random; it displays a level of goal-oriented rationality surpassing human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a drive for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal-orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated, yet enigmatic artificial agents.
    Keywords: large language models, cooperation, goal orientation, economic rationality
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:safewp:401&r=cmp
  20. By: Gadat, Sébastien; Villeneuve, Stéphane
    Abstract: This document introduces a parsimonious novel method of processing textual data based on the NMF factorization and on supervised clustering with Wasserstein barycenters to reduce the dimension of the model. This dual treatment of textual data allows for a representation of a text as a probability distribution on the space of profiles which accounts for both uncertainty and semantic interpretability with the Wasserstein distance. The full textual information of a given period is represented as a random probability measure. This opens the door to a statistical inference method that seeks to predict financial data using the information generated by the texts of a given period.
    Keywords: Natural Language Processing; Textual Analysis; Wasserstein distance; clustering
    Date: 2023–09–20
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:128497&r=cmp
  21. By: Xiyuan Ren; Joseph Y. J. Chow
    Abstract: Estimating agent-specific taste heterogeneity with a large information and communication technology (ICT) dataset requires both model flexibility and computational efficiency. We propose a group-level agent-based mixed (GLAM) logit approach that is estimated with inverse optimization (IO) and group-level market share. The model is theoretically consistent with the RUM model framework, while the estimation method is a nonparametric approach that fits market-level datasets, which overcomes the limitations of existing approaches. A case study of New York statewide travel mode choice is conducted with a synthetic population dataset provided by Replica Inc., which contains mode choices of 19.53 million residents on two typical weekdays, one in Fall 2019 and another in Fall 2021. Individual mode choices are grouped into market-level market shares per census block-group OD pair and four population segments, resulting in 120,740 group-level agents. We calibrate the GLAM logit model with the 2019 dataset and compare to several benchmark models: mixed logit (MXL), conditional mixed logit (CMXL), and individual parameter logit (IPL). The results show that the empirical taste distribution estimated by GLAM logit can be either unimodal or multimodal, which is infeasible for MXL/CMXL and hard to fulfill in IPL. The GLAM logit model outperforms benchmark models on the 2021 dataset, improving the overall accuracy from 82.35% to 89.04% and improving the pseudo R-square from 0.4165 to 0.5788. Moreover, the value-of-time (VOT) and mode preferences retrieved from GLAM logit align with our empirical knowledge (e.g., VOT of the NotLowIncome population in NYC is $28.05/hour; public transit and walking are preferred in NYC). The agent-specific taste parameters are essential for the policymaking of statewide transportation projects.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.13159&r=cmp
  22. By: Paweł Niszczota; Sami Abbas
    Abstract: We assess the ability of GPT -- a large language model -- to serve as a financial robo-advisor for the masses, by using a financial literacy test. Davinci and ChatGPT based on GPT-3.5 score 66% and 65% on the financial literacy test, respectively, compared to a baseline of 33%. However, ChatGPT based on GPT-4 achieves a near-perfect 99% score, pointing to financial literacy becoming an emergent ability of state-of-the-art models. We use the Judge-Advisor System and a savings dilemma to illustrate how researchers might assess advice-utilization from large language models. We also present a number of directions for future research.
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.00649&r=cmp
  23. By: Fahmida Khatun; Nadia Nawrin
    Abstract: This paper examines the penetration and impacts of the Fourth Industrial Revolution (4IR) on the workforce in Bangladesh's IT services sector. The study also discusses some of the challenges the sector faces at present. Finally, assessing Bangladesh's preparedness for the digital age of 4IR in terms of access to technology and the policy framework, the paper makes a number of recommendations that can enable the country to reap the full benefits of 4IR.
    Keywords: Artificial Intelligence, Fourth industrial revolution, 4IR, CPD-FES Publication
    Date: 2021–11
    URL: http://d.repec.org/n?u=RePEc:pdb:report:17&r=cmp
  24. By: Tae-Hwy Lee (Department of Economics, University of California Riverside); Ekaterina Seregina (Colby College)
    Abstract: In this paper we develop a novel method of combining many forecasts based on a machine learning algorithm called the Graphical LASSO (GL). We visualize forecast errors from different forecasters as a network of interacting entities and generalize network inference in the presence of common factor structure and structural breaks. First, we note that forecasters often use common information and hence make common mistakes, which makes the forecast errors exhibit a common factor structure. We use the Factor Graphical LASSO (FGL, Lee and Seregina (2023)) to separate common forecast errors from the idiosyncratic errors and exploit the sparsity of the precision matrix of the latter. Second, since the network of experts changes over time in response to unstable environments such as recessions, it is unreasonable to assume constant forecast combination weights. Hence, we propose the Regime-Dependent Factor Graphical LASSO (RD-FGL), which allows the factor loadings and the idiosyncratic precision matrix to be regime-dependent. We develop a scalable implementation using the Alternating Direction Method of Multipliers (ADMM) to estimate regime-dependent forecast combination weights. The empirical application to forecasting macroeconomic series using data from the European Central Bank's Survey of Professional Forecasters (ECB SPF) demonstrates the superior performance of a combined forecast using FGL and RD-FGL.
    Keywords: Common Forecast Errors, Regime Dependent Forecast Combination, Sparse Precision Matrix of Idiosyncratic Errors, Structural Breaks.
    JEL: C13 C38 C55
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:ucr:wpaper:202310&r=cmp
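    The precision-matrix-based combination the abstract builds on can be sketched with the classical variance-minimizing weights w = Σ⁻¹ι / (ι'Σ⁻¹ι), where Σ is the forecast-error covariance. This is a simplified illustration, not the paper's FGL/RD-FGL estimator; the example covariance below is hypothetical.

    ```python
    import numpy as np

    def combination_weights(Sigma):
        """Variance-minimizing forecast combination weights
        w = Sigma^{-1} 1 / (1' Sigma^{-1} 1), summing to 1."""
        ones = np.ones(Sigma.shape[0])
        t = np.linalg.solve(Sigma, ones)  # Sigma^{-1} 1 without explicit inverse
        return t / (ones @ t)

    # Hypothetical error covariance for 3 forecasters with variances 1, 2, 4
    # and no error correlation (for illustration only).
    Sigma = np.diag([1.0, 2.0, 4.0])
    w = combination_weights(Sigma)
    combined = w @ np.array([2.1, 1.9, 2.4])  # weighted combined forecast
    ```

    In the diagonal case the weights are inversely proportional to each forecaster's error variance; the paper's contribution is estimating a sparse, regime-dependent precision matrix for the idiosyncratic errors so these weights remain well-behaved when there are many forecasters.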
  25. By: Yi Yang; Yixuan Tang; Kar Yan Tam
    Abstract: We present a new financial domain large language model, InvestLM, tuned on LLaMA-65B (Touvron et al., 2023) using a carefully curated instruction dataset related to financial investment. Inspired by less-is-more-for-alignment (Zhou et al., 2023), we manually curate a small yet diverse instruction dataset covering a wide range of finance-related topics, from Chartered Financial Analyst (CFA) exam questions to SEC filings to Stackexchange quantitative finance discussions. InvestLM shows strong capabilities in understanding financial text and provides helpful responses to investment-related questions. Financial experts, including hedge fund managers and research analysts, rate InvestLM's responses as comparable to those of state-of-the-art commercial models (GPT-3.5, GPT-4 and Claude-2). Zero-shot evaluation on a set of financial NLP benchmarks demonstrates strong generalizability. From a research perspective, this work suggests that a high-quality domain-specific LLM can be tuned using a small set of carefully curated instructions on a well-trained foundation model, which is consistent with the Superficial Alignment Hypothesis (Zhou et al., 2023). From a practical perspective, this work develops a state-of-the-art financial domain LLM with superior capability in understanding financial texts and providing helpful investment advice, potentially enhancing the work efficiency of financial professionals. We release the model parameters to the research community.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.13064&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.