nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒08‒21
35 papers chosen by
Stan Miles, Thompson Rivers University


  1. Stochastic Delay Differential Games: Financial Modeling and Machine Learning Algorithms By Robert Balkin; Hector D. Ceniceros; Ruimeng Hu
  2. Machine learning for option pricing: an empirical investigation of network architectures By Laurens Van Mieghem; Antonis Papapantoleon; Jonas Papazoglou-Hennig
  3. Analysis of the Predictor of a Volatility Surface by Machine Learning By Valentin Lourme
  4. Harnessing the Potential of Volatility: Advancing GDP Prediction By Ali Lashgari
  5. Comparative Analysis of Machine Learning, Hybrid, and Deep Learning Forecasting Models: Evidence from European Financial Markets and Bitcoins By Apostolos Ampountolas
  6. Real-time Trading System based on Selections of Potentially Profitable, Uncorrelated, and Balanced Stocks by NP-hard Combinatorial Optimization By Kosuke Tatsumura; Ryo Hidaka; Jun Nakayama; Tomoya Kashimata; Masaya Yamasaki
  7. Ideas Without Scale in French Artificial Intelligence Innovations By Johanna Deperi; Ludovic Dibiaggio; Mohamed Keita; Lionel Nesta
  8. Epidemic Modeling with Generative Agents By Ross Williams; Niyousha Hosseinichimeh; Aritra Majumdar; Navid Ghaffarzadegan
  9. Critical comparisons on deep learning approaches for foreign exchange rate prediction By Zhu Bangyuan
  10. Panel Data Nowcasting: The Case of Price-Earnings Ratios By Andrii Babii; Ryan T. Ball; Eric Ghysels; Jonas Striaukas
  11. For What It's Worth: Measuring Land Value in the Era of Big Data and Machine Learning By Scott Wentland; Gary Cornwall; Jeremy G. Moulton
  12. Evaluation of Deep Reinforcement Learning Algorithms for Portfolio Optimisation By Chung I Lu
  13. Pattern Mining for Anomaly Detection in Graphs: Application to Fraud in Public Procurement By Lucas Potin; Rosa Figueiredo; Vincent Labatut; Christine Largeron
  14. Over-the-Counter Market Making via Reinforcement Learning By Zhou Fang; Haiqing Xu
  15. Deep Inception Networks: A General End-to-End Framework for Multi-asset Quantitative Strategies By Tom Liu; Stephen Roberts; Stefan Zohren
  16. Systemically important banks - emerging risk and policy responses: An agent-based investigation By Lilit Popoyan; Mauro Napoletano; Andrea Roventini
  17. Generating Synergistic Formulaic Alpha Collections via Reinforcement Learning By Shuo Yu; Hongyan Xue; Xiang Ao; Feiyang Pan; Jia He; Dandan Tu; Qing He
  18. Action-State Dependent Dynamic Model Selection By Francesco Cordoni; Alessio Sancetta
  19. Using GPT-4 for Financial Advice By Christian Fieberg; Lars Hornuf; David J. Streich
  20. Market Making of Options via Reinforcement Learning By Zhou Fang; Haiqing Xu
  21. Random Subspace Local Projections By Viet Hoang Dinh; Didier Nibbering; Benjamin Wong
  22. Improving Human Deception Detection Using Algorithmic Feedback By Marta Serra-Garcia; Uri Gneezy
  23. Generative Meta-Learning Robust Quality-Diversity Portfolio By Kamer Ali Yuksel
  24. Using Monte Carlo Methods for Retirement Simulations By Aditya Gupta; Vijay K. Tayal
  25. Will ChatGPT revolutionize accounting? The benefits of Artificial Intelligence (AI) in accounting By Hacker, Bernd
  26. Artificial intelligence in human resource management: a challenge for the human-centred agenda? By Cappelli, Peter; Rogovsky, Nikolai
  27. Please take over: XAI, delegation of authority, and domain knowledge By Bauer, Kevin; von Zahn, Moritz; Hinz, Oliver
  28. Private Wealth over the Life-Cycle: A Meeting between Microsimulation and Structural Approaches By L. GALIANA; L. WILNER
  29. Exploring the Dynamics of the Specialty Insurance Market Using a Novel Discrete Event Simulation Framework: a Lloyd's of London Case Study By Sedar Olmez; Akhil Ahmed; Keith Kam; Zhe Feng; Alan Tua
  30. Testing for the Markov property in time series via deep conditional generative learning By Shi, Chengchun
  31. Supervised portfolios By Guillaume Chevalier; Guillaume Coqueret; Thomas Raffinot
  32. Newts microsimulation model for informative and co-creative public decision-making By Daouda Diakité; Michel Paul; Valentin Morin
  33. Simulating the Adoption of a Retail CBDC By Leon Rincon, Carlos; Moreno, Jose; Soramaki, Kimmo
  34. The Economic Performances of Different Trial Designs in On-Farm Precision Experimentation: A Monte Carlo Evaluation By Li, Xiaofei; Mieno, Taro; Bullock, David S.
  35. The Economic Effects of COVID-19 in Sweden: A Report on Income, Taxes, Distribution, and Government Support Policies By Angelov, Nikolay; Waldenström, Daniel

  1. By: Robert Balkin; Hector D. Ceniceros; Ruimeng Hu
    Abstract: In this paper, we propose a numerical methodology for finding the closed-loop Nash equilibrium of stochastic delay differential games through deep learning. These games are prevalent in finance and economics where multi-agent interaction and delayed effects are often desired features in a model, but are introduced at the expense of increased dimensionality of the problem. This increased dimensionality is especially significant as that arising from the number of players is coupled with the potentially infinite dimensionality caused by the delay. Our approach involves parameterizing the controls of each player using distinct recurrent neural networks. These recurrent neural network-based controls are then trained using a modified version of Brown's fictitious play, incorporating deep learning techniques. To evaluate the effectiveness of our methodology, we test it on finance-related problems with known solutions. Furthermore, we also develop new problems and derive their analytical Nash equilibrium solutions, which serve as additional benchmarks for assessing the performance of our proposed deep learning approach.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.06450&r=cmp
  2. By: Laurens Van Mieghem; Antonis Papapantoleon; Jonas Papazoglou-Hennig
    Abstract: We consider the supervised learning problem of learning the price of an option or the implied volatility given appropriate input data (model parameters) and corresponding output data (option prices or implied volatilities). The majority of articles in this literature consider a (plain) feedforward neural network architecture to connect the neurons used for learning the function mapping inputs to outputs. In this article, motivated by methods in image classification and recent advances in machine learning methods for PDEs, we investigate empirically whether and how the choice of network architecture affects the accuracy and training time of a machine learning algorithm. We find that for option pricing problems, where we focus on the Black--Scholes and the Heston model, the generalized highway network architecture outperforms all other variants when considering the mean squared error and the training time as criteria. Moreover, for the computation of the implied volatility, after a necessary transformation, a variant of the DGM architecture outperforms all other variants under the same criteria.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.07657&r=cmp
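For readers unfamiliar with the architecture family compared here: a highway layer mixes a nonlinear transform of its input with the input itself through a learned gate. The NumPy sketch below shows only the basic highway building block; the paper's generalized variant differs, and all weights and dimensions are illustrative, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: a gated mix of a nonlinear transform and the
    identity, y = T(x) * H(x) + (1 - T(x)) * x."""
    H = np.tanh(x @ W_h + b_h)      # candidate transform
    T = sigmoid(x @ W_t + b_t)      # transform gate in (0, 1)
    return T * H + (1.0 - T) * x    # carry the rest of the input through

rng = np.random.default_rng(0)
d = 8                               # layer width (input and output dims match)
x = rng.normal(size=(4, d))         # a batch of 4 model-parameter vectors
params = [(rng.normal(size=(d, d)) * 0.1, np.zeros(d),
           rng.normal(size=(d, d)) * 0.1, -1.0 * np.ones(d))
          for _ in range(3)]        # 3 stacked layers, gates biased to "carry"
for W_h, b_h, W_t, b_t in params:
    x = highway_layer(x, W_h, b_h, W_t, b_t)
print(x.shape)
```

Stacking such gated layers eases gradient flow in deep networks, which is the property the option-pricing comparison exploits.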
  3. By: Valentin Lourme (Arts et Métiers ParisTech, Natixis)
    Abstract: This study compares two approaches to evaluating points on a volatility surface. The first is cubic spline interpolation, while the second is a machine learning algorithm, XGBoost. The aim of the comparison is to identify the use cases in which the XGBoost machine learning algorithm is better suited than the cubic spline. The two approaches are compared on the error between the measured volatility and the interpolated or predicted volatility. Cubic spline interpolation requires volatility data on the day of the study for the interpolation to be carried out, whereas the XGBoost algorithm trains on historical data to predict the volatility value on the day of the study.
    Date: 2023–07–05
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04151604&r=cmp
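The contrast between the two approaches can be sketched on a synthetic volatility smile. This is purely illustrative: the data are made up, and scikit-learn's GradientBoostingRegressor stands in for XGBoost (same boosted-tree idea, different library).

```python
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.ensemble import GradientBoostingRegressor

# Toy implied-volatility smile: vol as a quadratic function of strike.
strikes = np.linspace(80, 120, 9)
smile = lambda k: 0.20 + 0.00035 * (k - 100.0) ** 2
vols = smile(strikes)

# Approach 1: a cubic spline needs today's quotes and interpolates between them.
spline = CubicSpline(strikes, vols)

# Approach 2: a boosted-tree model trained on observed (strike, vol) pairs can
# predict vols even when same-day quotes are unavailable.
X = strikes.reshape(-1, 1)
gbt = GradientBoostingRegressor(n_estimators=300, max_depth=2,
                                learning_rate=0.05).fit(X, vols)

k_new = np.array([85.0, 97.5, 112.5])
err_spline = np.abs(spline(k_new) - smile(k_new)).max()
err_gbt = np.abs(gbt.predict(k_new.reshape(-1, 1)) - smile(k_new)).max()
print(err_spline, err_gbt)
```

On a smooth surface with same-day quotes the spline is near-exact; the value of the tree model is that it can be trained on history when same-day quotes are missing, which is the use case the study tries to delimit.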
  4. By: Ali Lashgari
    Abstract: This paper presents a novel machine learning approach to GDP prediction that incorporates volatility as a model weight. The proposed method is specifically designed to identify and select the most relevant macroeconomic variables for accurate GDP prediction, while taking into account unexpected shocks or events that may impact the economy. The method's effectiveness is tested on real-world data and compared to previous techniques used for GDP forecasting, such as Lasso and Adaptive Lasso. The findings show that the volatility-weighted Lasso method outperforms the other methods in accuracy and robustness, providing policymakers and analysts with a valuable tool for making informed decisions in a rapidly changing economic environment. This study demonstrates how data-driven approaches can help us better understand economic fluctuations and support more effective economic policymaking.
    Keywords: GDP prediction, Lasso, Volatility, Regularization, Macroeconomic Variable Selection, Machine Learning
    JEL: C22 C53 E37
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.05391&r=cmp
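One plausible reading of "volatility as a model weight" is a Lasso whose observations are down-weighted in volatile periods. The sketch below implements that reading on synthetic data; the paper's actual weighting scheme may differ, and all variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
T, p = 200, 10
X = rng.normal(size=(T, p))                # synthetic macro indicators
beta = np.zeros(p); beta[:3] = [0.8, -0.5, 0.3]
vol = 0.5 + 1.5 * (np.arange(T) > 150)     # a volatile final regime
y = X @ beta + vol * rng.normal(size=T)    # "GDP growth" with late shocks

# Plain Lasso vs. a volatility-weighted Lasso that down-weights observations
# from the high-volatility regime (a GLS-style choice of weights).
plain = Lasso(alpha=0.1).fit(X, y)
weighted = Lasso(alpha=0.1).fit(X, y, sample_weight=1.0 / vol**2)

print(np.abs(plain.coef_ - beta).sum(), np.abs(weighted.coef_ - beta).sum())
```

Weighting by inverse variance makes the noisy late regime count less, so the penalized fit is driven by the informative observations.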
  5. By: Apostolos Ampountolas
    Abstract: This study analyzes the transmission of market uncertainty on key European financial markets and the cryptocurrency market over an extended period, encompassing the pre-pandemic, pandemic, and post-pandemic periods. Daily financial market indices and price observations are used to assess the forecasting models. We compare statistical, machine learning, and deep learning forecasting models (ARIMA, hybrid ETS-ANN, and kNN) to evaluate the financial markets. The study results indicate that predicting financial market fluctuations is challenging, and the accuracy levels are generally low in several instances. ARIMA and hybrid ETS-ANN models perform better over extended periods compared to the kNN model, with ARIMA being the best-performing model in 2018-2021 and the hybrid ETS-ANN model being the best-performing model in most of the other subperiods. Still, the kNN model outperforms the others in several periods, depending on the observed accuracy measure. Researchers have advocated using parametric and non-parametric modeling combinations to generate better results. In this study, the results suggest that the hybrid ETS-ANN model is the best-performing model despite its moderate level of accuracy. Thus, the hybrid ETS-ANN model is a promising financial time series forecasting approach. The findings offer financial analysts an additional source that can provide valuable insights for investment decisions.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.08853&r=cmp
  6. By: Kosuke Tatsumura; Ryo Hidaka; Jun Nakayama; Tomoya Kashimata; Masaya Yamasaki
    Abstract: Financial portfolio construction problems are often formulated as quadratic and discrete (combinatorial) optimization problems that belong to the nondeterministic polynomial time (NP)-hard class in computational complexity theory. Ising machines are hardware devices that work on quantum-mechanical/quantum-inspired principles to quickly solve NP-hard optimization problems, potentially enabling trading decisions based on NP-hard optimization within the time constraints of high-speed trading strategies. Here we report a real-time stock trading system that determines long (buying)/short (selling) positions through NP-hard portfolio optimization for improving the Sharpe ratio, using an embedded Ising machine based on a quantum-inspired algorithm called simulated bifurcation. The Ising machine selects a balanced (delta-neutral) group of stocks from an $N$-stock universe according to an objective function that maximizes instantaneous expected returns, defined as deviations from volume-weighted average prices, and minimizes the sum of statistical correlation factors (for diversification). We demonstrate on the Tokyo Stock Exchange that the trading strategy based on NP-hard portfolio optimization for $N$=128 is executable with the FPGA (field-programmable gate array)-based trading system with a response latency of 164 $\mu$s.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.06339&r=cmp
  7. By: Johanna Deperi (University of Brescia); Ludovic Dibiaggio (SKEMA Business School); Mohamed Keita (SKEMA Business School); Lionel Nesta (GREDEG - Groupe de Recherche en Droit, Economie et Gestion - UNS - Université Nice Sophia Antipolis (1965 - 2019) - COMUE UCA - COMUE Université Côte d'Azur (2015-2019) - CNRS - Centre National de la Recherche Scientifique - UCA - Université Côte d'Azur, OFCE - Observatoire français des conjonctures économiques (Sciences Po) - Sciences Po - Sciences Po)
    Abstract: Artificial intelligence (AI) is viewed as the next technological revolution. The aim of this Policy Brief is to identify France's strengths and weaknesses in this great race for AI innovation. We characterise France's positioning relative to other key players and make the following observations: 1. Without being a world leader in innovation incorporating artificial intelligence, France is showing moderate but significant activity in this field. 2. France specialises in machine learning, unsupervised learning and probabilistic graphical models, and in developing solutions for the medical sciences, transport and security. 3. The AI value chain in France is poorly integrated, mainly due to a lack of integration in the downstream phases of the innovation chain. 4. The limited presence of French private players in the global AI arena contrasts with the extensive involvement of French public institutions. French public research organisations produce patents with great economic value. 5. Public players are the key actors in French networks for collaboration in patent development, but are not open to international and institutional diversity. In our opinion, France runs the risk of becoming a global AI laboratory located upstream in the AI innovation value chain. As such, it is likely to bear the sunk costs of AI invention, without enjoying the benefits of AI exploitation on a larger scale. In short, our fear is that French AI will be exported to other locations to prosper and grow.
    Date: 2023–06–26
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04144817&r=cmp
  8. By: Ross Williams; Niyousha Hosseinichimeh; Aritra Majumdar; Navid Ghaffarzadegan
    Abstract: This study offers a new paradigm of individual-level modeling to address the grand challenge of incorporating human behavior in epidemic models. Using generative artificial intelligence in an agent-based epidemic model, each agent is empowered to make its own reasonings and decisions via connecting to a large language model such as ChatGPT. Through various simulation experiments, we present compelling evidence that generative agents mimic real-world behaviors such as quarantining when sick and self-isolating when cases rise. Collectively, the agents demonstrate patterns akin to the multiple waves observed in recent pandemics, followed by an endemic period. Moreover, the agents successfully flatten the epidemic curve. This study creates the potential to improve dynamic system modeling by offering a way to represent the human brain, reasoning, and decision making.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.04986&r=cmp
  9. By: Zhu Bangyuan
    Abstract: In a real market environment, a price prediction model must be updated in real time with newly arriving data to maintain its accuracy. To improve the user experience of such a system, the prediction function should use the network that trains fastest and fits best as its predictive model. We review the fundamental theory of RNN, LSTM, and BP neural networks, analyse their respective characteristics, and discuss their advantages and disadvantages to provide a reference for the selection of price-prediction models.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.06600&r=cmp
  10. By: Andrii Babii; Ryan T. Ball; Eric Ghysels; Jonas Striaukas
    Abstract: The paper uses structured machine learning regressions for nowcasting with panel data consisting of series sampled at different frequencies. Motivated by the problem of predicting corporate earnings for a large cross-section of firms with macroeconomic, financial, and news time series sampled at different frequencies, we focus on the sparse-group LASSO regularization which can take advantage of the mixed frequency time series panel data structures. Our empirical results show the superior performance of our machine learning panel data regression models over analysts' predictions, forecast combinations, firm-specific time series regression models, and standard machine learning methods.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.02673&r=cmp
  11. By: Scott Wentland; Gary Cornwall; Jeremy G. Moulton
    Abstract: This paper develops a new method for valuing land, a key asset on a nation’s balance sheet. The method first employs an unsupervised machine learning method, k-means clustering, to discretize unobserved heterogeneity, which we then combine with a supervised learning algorithm, gradient boosted trees (GBT), to obtain property-level price predictions and estimates of the land component. Our initial results from a large national dataset show this approach routinely outperforms hedonic regression methods (as used by the U.K.’s Office for National Statistics, for example) in out-of-sample price predictions. To exploit the best of both methods, we further explore a composite approach using model stacking, finding it outperforms all methods in out-of-sample tests and a benchmark test against nearby vacant land sales. In an application, we value residential, commercial, industrial, and agricultural land for the entire contiguous U.S. from 2006-2015. The results offer new insights into valuation and demonstrate how a unified method can build national and subnational estimates of land value from detailed, parcel-level data. We discuss further applications to economic policy and the property valuation literature more generally.
    JEL: E01
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:bea:wpaper:0209&r=cmp
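The two-step idea (unsupervised discretization of location, then supervised price prediction) can be sketched with scikit-learn on synthetic parcels. The paper additionally isolates the land component and uses model stacking, which this toy pipeline omits; all data and parameters here are made up.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 1000
# Synthetic parcels: lot size, building size, and an unobserved "submarket"
# driving location value (a stand-in for real parcel-level microdata).
lot = rng.uniform(0.1, 2.0, n)
sqft = rng.uniform(800, 4000, n)
loc = rng.normal(size=(n, 2))                 # lat/lon-like coordinates
submarket = (loc[:, 0] > 0).astype(float)
price = 50_000 * lot + 100 * sqft + 150_000 * submarket \
        + rng.normal(0, 10_000, n)

# Step 1 (unsupervised): discretize unobserved heterogeneity with k-means on
# location, then feed the cluster label to the supervised learner.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(loc)
X = np.column_stack([lot, sqft, km.labels_])

# Step 2 (supervised): gradient boosted trees for property-level predictions.
gbt = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X, price)
r2 = gbt.score(X, price)
print(round(r2, 3))
```

The cluster label gives the trees a compact proxy for location value that a hedonic regression would have to capture with many dummies.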
  12. By: Chung I Lu
    Abstract: We evaluate benchmark deep reinforcement learning (DRL) algorithms on the task of portfolio optimisation under a simulator. The simulator is based on correlated geometric Brownian motion (GBM) with the Bertsimas-Lo (BL) market impact model. Using the Kelly criterion (log utility) as the objective, we can analytically derive the optimal policy without market impact and use it as an upper bound to measure performance when including market impact. We found that the off-policy algorithms DDPG, TD3 and SAC were unable to learn the right Q function due to the noisy rewards and therefore perform poorly. The on-policy algorithms PPO and A2C, with the use of generalised advantage estimation (GAE), were able to deal with the noise and derive a close to optimal policy. The clipping variant of PPO was found to be important in preventing the policy from deviating from the optimal once converged. In a more challenging environment where we have regime changes in the GBM parameters, we found that PPO, combined with a hidden Markov model (HMM) to learn and predict the regime context, is able to learn different policies adapted to each regime. Overall, we find that the sample complexity of these algorithms is too high, requiring more than 2 million steps to learn a good policy in the simplest setting, which is equivalent to almost 8,000 years of daily prices.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.07694&r=cmp
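The analytic benchmark referred to here is easy to reproduce in the single-asset case without market impact: under GBM, log-utility (Kelly) investing holds a constant fraction of wealth in the risky asset. A short sketch with illustrative parameter values:

```python
import numpy as np

# For a single GBM asset dS/S = mu dt + sigma dW and riskless rate r, the
# Kelly (log-utility) policy without market impact holds a constant fraction
#   f* = (mu - r) / sigma**2
# of wealth in the asset. This closed form is the kind of upper bound the
# DRL agents are measured against (single-asset illustration only).
mu, r, sigma = 0.08, 0.02, 0.20
f_star = (mu - r) / sigma**2          # = 1.5, i.e. a levered position

def expected_log_growth(f, mu, r, sigma):
    """Long-run log growth rate of wealth for a constant fraction f."""
    return r + f * (mu - r) - 0.5 * (f * sigma) ** 2

fs = np.linspace(0.0, 3.0, 301)       # grid search confirms the maximizer
growth = expected_log_growth(fs, mu, r, sigma)
best = fs[np.argmax(growth)]
print(f_star, best)
```

The grid maximum coincides with the closed-form fraction, which is what makes this setting a clean yardstick for learned policies.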
  13. By: Lucas Potin (LIA - Laboratoire Informatique d'Avignon - AU - Avignon Université - Centre d'Enseignement et de Recherche en Informatique - CERI); Rosa Figueiredo (LIA - Laboratoire Informatique d'Avignon - AU - Avignon Université - Centre d'Enseignement et de Recherche en Informatique - CERI); Vincent Labatut (LIA - Laboratoire Informatique d'Avignon - AU - Avignon Université - Centre d'Enseignement et de Recherche en Informatique - CERI); Christine Largeron (LHC - Laboratoire Hubert Curien - IOGS - Institut d'Optique Graduate School - UJM - Université Jean Monnet - Saint-Étienne - CNRS - Centre National de la Recherche Scientifique)
    Abstract: In the context of public procurement, several indicators called red flags are used to estimate fraud risk. They are computed according to certain contract attributes and are therefore dependent on the proper filling of the contract and award notices. However, these attributes are very often missing in practice, which prohibits red flags computation. Traditional fraud detection approaches focus on tabular data only, considering each contract separately, and are therefore very sensitive to this issue. In this work, we adopt a graph-based method allowing leveraging relations between contracts, to compensate for the missing attributes. We propose PANG (Pattern-Based Anomaly Detection in Graphs), a general supervised framework relying on pattern extraction to detect anomalous graphs in a collection of attributed graphs. Notably, it is able to identify induced subgraphs, a type of pattern widely overlooked in the literature. When benchmarked on standard datasets, its predictive performance is on par with state-of-the-art methods, with the additional advantage of being explainable. These experiments also reveal that induced patterns are more discriminative on certain datasets. When applying PANG to public procurement data, the prediction is superior to other methods, and it identifies subgraph patterns that are characteristic of fraud-prone situations, thereby making it possible to better understand fraudulent behavior.
    Keywords: Pattern Mining, Graph Classification, Public Procurement, Fraud Detection
    Date: 2023–09–18
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04131485&r=cmp
  14. By: Zhou Fang; Haiqing Xu
    Abstract: The over-the-counter (OTC) market is characterized by a unique feature that allows market makers to adjust bid-ask spreads based on order size. However, this flexibility introduces complexity, transforming the market-making problem into a high-dimensional stochastic control problem that presents significant challenges. To address this, this paper proposes an innovative solution utilizing reinforcement learning techniques to tackle the OTC market-making problem. By assuming a linear inverse relationship between market order arrival intensity and bid-ask spreads, we demonstrate the optimal policy for bid-ask spreads follows a Gaussian distribution. We apply two reinforcement learning algorithms to conduct a numerical analysis, revealing the resulting return distribution and bid-ask spreads under different time and inventory levels.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.01816&r=cmp
  15. By: Tom Liu; Stephen Roberts; Stefan Zohren
    Abstract: We introduce Deep Inception Networks (DINs), a family of Deep Learning models that provide a general framework for end-to-end systematic trading strategies. DINs extract time series (TS) and cross sectional (CS) features directly from daily price returns. This removes the need for handcrafted features, and allows the model to learn from TS and CS information simultaneously. DINs benefit from a fully data-driven approach to feature extraction, whilst avoiding overfitting. Extending prior work on Deep Momentum Networks, DIN models directly output position sizes that optimise Sharpe ratio, but for the entire portfolio instead of individual assets. We propose a novel loss term to balance turnover regularisation against increased systemic risk from high correlation to the overall market. Using futures data, we show that DIN models outperform traditional TS and CS benchmarks, are robust to a range of transaction costs and perform consistently across random seeds. To balance the general nature of DIN models, we provide examples of how attention and Variable Selection Networks can aid the interpretability of investment decisions. These model-specific methods are particularly useful when the dimensionality of the input is high and variable importance fluctuates dynamically over time. Finally, we compare the performance of DIN models on other asset classes, and show how the space of potential features can be customised.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.05522&r=cmp
  16. By: Lilit Popoyan; Mauro Napoletano; Andrea Roventini
    Abstract: We develop a macroeconomic agent-based model to study the role of systemically important banks (SIBs) in financial stability and the effectiveness of capital surcharges on SIBs as a risk management tool. The model is populated by heterogeneous firms, consumers, and banks interacting locally in different markets. In particular, banks provide credit to firms according to Basel III macro-prudential frameworks and manage their liquidity in the interbank market. The Central Bank performs monetary policy according to different types of Taylor rules. Our model endogenously generates banks with different balance sheet sizes, making some systemically important. The additional capital surcharges for SIBs prove to have only a marginal effect in preventing crises, since they mainly address the "too-big-to-fail" problem while giving minimal weight to "too-interconnected-to-fail", "too-many-to-fail" and other issues. Moreover, we find that additional capital surcharges on SIBs do not account for the type and management strategy of the bank, leading to a "one-size-fits-all" problem. Finally, we find that additional loss-absorbing capacity needs to be increased to ensure total coverage of losses for failed SIBs.
    Keywords: Financial instability; monetary policy; macro-prudential policy; systemically important banks; additional loss-absorbing capacity; Basel III regulation; agent-based models
    Date: 2023–07–28
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2023/30&r=cmp
  17. By: Shuo Yu; Hongyan Xue; Xiang Ao; Feiyang Pan; Jia He; Dandan Tu; Qing He
    Abstract: In the field of quantitative trading, it is common practice to transform raw historical stock data into indicative signals for the market trend. Such signals are called alpha factors. Alphas in formula forms are more interpretable and thus favored by practitioners concerned with risk. In practice, a set of formulaic alphas is often used together for better modeling precision, so we need to find synergistic formulaic alpha sets that work well together. However, most traditional alpha generators mine alphas one by one separately, overlooking the fact that the alphas would be combined later. In this paper, we propose a new alpha-mining framework that prioritizes mining a synergistic set of alphas, i.e., it directly uses the performance of the downstream combination model to optimize the alpha generator. Our framework also leverages the strong exploratory capabilities of reinforcement learning~(RL) to better explore the vast search space of formulaic alphas. The contribution to the combination models' performance is assigned to be the return used in the RL process, driving the alpha generator to find better alphas that improve upon the current set. Experimental evaluations on real-world stock market data demonstrate both the effectiveness and the efficiency of our framework for stock trend forecasting. The investment simulation results show that our framework is able to achieve higher returns compared to previous approaches.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12964&r=cmp
  18. By: Francesco Cordoni; Alessio Sancetta
    Abstract: A model among many may only be best under certain states of the world. Switching from one model to another can also be costly. Finding a procedure to dynamically choose a model in these circumstances requires solving a complex estimation procedure and a dynamic programming problem. A reinforcement learning algorithm is used to approximate and estimate from the data the optimal solution to this dynamic programming problem. The algorithm is shown to consistently estimate the optimal policy that may choose different models based on a set of covariates. A typical example is switching between different portfolio models under rebalancing costs, using macroeconomic information. Using a set of macroeconomic variables and price data, an empirical application to the aforementioned portfolio problem shows superior performance relative to choosing the best portfolio model with hindsight.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.04754&r=cmp
  19. By: Christian Fieberg; Lars Hornuf; David J. Streich
    Abstract: We show that the recently released text-based artificial intelligence tool GPT-4 can provide suitable financial advice. The tool suggests specific investment portfolios that reflect an investor’s individual circumstances such as risk tolerance, risk capacity, and sustainability preference. Notably, while the suggested portfolios display home bias and are rather insensitive to the investment horizon, historical risk-adjusted performance is on par with a professionally managed benchmark portfolio. Given the current inability of GPT-4 to provide full-service financial advice, it may be used by financial advisors as a back-office tool for portfolio recommendation.
    Keywords: GPT-4, ChatGPT, financial advice, artificial intelligence, portfolio management
    JEL: G00 G11
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10529&r=cmp
  20. By: Zhou Fang; Haiqing Xu
    Abstract: Market making of options with different maturities and strikes is a challenging problem due to its high dimensional nature. In this paper, we propose a novel approach that combines a stochastic policy and reinforcement learning-inspired techniques to determine the optimal policy for posting bid-ask spreads for an options market maker who trades options with different maturities and strikes. When the arrival intensity of market orders is linearly inverse in the spreads, the optimal policy is normally distributed.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.01814&r=cmp
  21. By: Viet Hoang Dinh; Didier Nibbering; Benjamin Wong
    Abstract: We show how random subspace methods can be adapted to estimating local projections with many controls. Random subspace methods have their roots in the machine learning literature and are implemented by averaging over regressions estimated over different combinations of subsets of these controls. We document three key results: (i) Our approach can successfully recover the impulse response function in a Monte Carlo exercise where we simulate data from a real business cycle model with fiscal foresight. (ii) Our results suggest that random subspace methods are more accurate than factor models if the underlying large data set has a factor structure similar to typical macroeconomic data sets such as FRED-MD. (iii) Our approach leads to differences in the estimated impulse response functions relative to standard methods when applied to two widely-studied empirical applications.
    Keywords: Local Projections, Random Subspace, Impulse Response Functions, Large Data Sets
    JEL: C22 E32
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2023-34&r=cmp
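The averaging step behind random subspace methods can be sketched in a few lines of NumPy. This toy version regresses y on a variable of interest plus a random subset of the controls and averages the coefficient of interest across draws; it omits the local-projection horizon loop, and all names are illustrative:

```python
import numpy as np

def random_subspace_coef(y, x, controls, subset_size, n_draws=200, seed=0):
    """Estimate the coefficient on x by averaging OLS fits that each
    include a random subset of the control columns (the random subspace
    idea, stripped of the local-projection horizon dimension)."""
    rng = np.random.default_rng(seed)
    n, k = controls.shape
    coefs = []
    for _ in range(n_draws):
        cols = rng.choice(k, size=subset_size, replace=False)
        X = np.column_stack([np.ones(n), x, controls[:, cols]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        coefs.append(beta[1])            # coefficient on x
    return float(np.mean(coefs))

# Simulated check: y depends on x with true coefficient 0.5
rng = np.random.default_rng(1)
n, k = 300, 40
controls = rng.standard_normal((n, k))
x = rng.standard_normal(n)
y = 0.5 * x + controls @ (rng.standard_normal(k) * 0.05) \
    + rng.standard_normal(n) * 0.1
est = random_subspace_coef(y, x, controls, subset_size=10)
```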
  22. By: Marta Serra-Garcia; Uri Gneezy
    Abstract: Can algorithms help people predict behavior in high-stakes prisoner’s dilemmas? Participants watching the pre-play communication of contestants in the TV show Golden Balls display a limited ability to predict contestants’ behavior, while algorithms do significantly better. We provide participants algorithmic advice by flagging videos for which an algorithm predicts a high likelihood of cooperation or defection. We find that the effectiveness of flags depends on their timing: participants rely significantly more on flags shown before they watch the videos than flags shown after they watch them. These findings show that the timing of algorithmic feedback is key for its adoption.
    Keywords: detecting lies, machine learning, cooperation, experiment
    JEL: D83 D91 C72 C91
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10518&r=cmp
  23. By: Kamer Ali Yuksel
    Abstract: This paper proposes a novel meta-learning approach to optimize a robust portfolio ensemble. The method uses a deep generative model to generate diverse and high-quality sub-portfolios, which are combined to form the ensemble portfolio. The generative model consists of a convolutional layer, a stateful LSTM module, and a dense network. During training, the model takes a randomly sampled batch of Gaussian noise and outputs a population of solutions, which are then evaluated using the objective function of the problem. The weights of the model are updated using a gradient-based optimizer. The convolutional layer transforms the noise into a desired distribution in latent space, while the LSTM module adds dependence between generations. The dense network decodes the population of solutions. The proposed method balances maximizing the performance of the sub-portfolios with minimizing their maximum correlation, resulting in an ensemble portfolio that is robust against systematic shocks. The approach was effective in experiments where stochastic rewards were present. Moreover, the results (Fig. 1) demonstrated that the ensemble portfolio obtained by taking the average of the generated sub-portfolio weights was robust and generalized well. The proposed method can be applied to problems where diversity is desired among co-optimized solutions for a robust ensemble. The source code and the dataset are provided in the supplementary material.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.07811&r=cmp
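The generative architecture in the paper (convolution, stateful LSTM, dense decoder) is beyond a digest, but the objective it optimizes can be sketched: reward the mean performance of the sub-portfolios while penalizing their maximum pairwise correlation, then average the sub-portfolio weights into the ensemble. The function names and the penalty weight below are hypothetical stand-ins:

```python
import numpy as np

def ensemble_objective(weight_pop, returns, corr_penalty=1.0):
    """Score a population of sub-portfolio weight vectors: mean return of
    the sub-portfolios minus a penalty on the maximum pairwise correlation
    of their return streams (the diversity term described in the abstract)."""
    port_rets = returns @ weight_pop.T       # (T, n_portfolios)
    mean_perf = port_rets.mean()
    corr = np.corrcoef(port_rets.T)
    np.fill_diagonal(corr, -np.inf)          # ignore self-correlation
    return mean_perf - corr_penalty * corr.max()

def ensemble_weights(weight_pop):
    """Final robust portfolio: average of the generated sub-portfolios."""
    w = weight_pop.mean(axis=0)
    return w / w.sum()

rng = np.random.default_rng(0)
returns = rng.standard_normal((250, 5)) * 0.01   # T=250 days, 5 assets
pop = rng.dirichlet(np.ones(5), size=8)          # 8 long-only sub-portfolios
score = ensemble_objective(pop, returns)
w = ensemble_weights(pop)
```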
  24. By: Aditya Gupta; Vijay K. Tayal
    Abstract: Retirement prediction helps individuals and institutions make informed financial, lifestyle, and workforce decisions based on estimated retirement portfolios. This paper attempts to predict retirement using Monte Carlo simulations, allowing one to probabilistically account for a range of possibilities. The authors propose a model to predict the values of the investment accounts IRA and 401(k) through the simulation of inflation rates, interest rates, and other pertinent factors. They provide a user case study to discuss the implications of the proposed model.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.16563&r=cmp
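A minimal sketch of the kind of Monte Carlo retirement simulation the abstract describes, with hypothetical return, inflation, and contribution parameters (the paper's actual model of IRA and 401(k) accounts is richer):

```python
import random

def simulate_balances(years=30, n_paths=2000, start=50_000.0,
                      contrib=10_000.0, mean_ret=0.06, vol=0.12,
                      mean_infl=0.025, infl_vol=0.01, seed=0):
    """Monte Carlo paths of a 401(k)-style account: each year draws a
    nominal return and an inflation rate, adds the contribution, and
    compounds; reports percentiles of real (deflated) terminal wealth."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        bal, deflator = start, 1.0
        for _ in range(years):
            bal = (bal + contrib) * (1 + rng.gauss(mean_ret, vol))
            deflator *= 1 + rng.gauss(mean_infl, infl_vol)
        finals.append(bal / deflator)
    finals.sort()
    # 10th / 50th / 90th percentile of real terminal wealth
    return (finals[n_paths // 10],
            finals[n_paths // 2],
            finals[9 * n_paths // 10])

p10, p50, p90 = simulate_balances()
```

The spread between the 10th and 90th percentiles is the probabilistic range of outcomes the abstract refers to.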
  25. By: Hacker, Bernd
    Abstract: It's an experiment! This paper explores how ChatGPT can be used in accounting and reporting to automate routine tasks, increase efficiency, and better understand financial data. The creation of the paper itself was done with the help of ChatGPT, i.e., significant parts of this text were created by the AI and then edited and completed by the author in terms of content and language. On the one hand, this is intended to show the potential of the application in practice and to make it clear that dealing with the tools of AI will be indispensable in the future, but on the other hand, the risks and concerns are also addressed.
    Keywords: ChatGPT, Artificial intelligence, Chatbot, Accounting, Appendix, Auditing
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:rpaebs:62023&r=cmp
  26. By: Cappelli, Peter; Rogovsky, Nikolai
    Abstract: The ILO human-centred agenda puts the needs, aspirations and rights of all people at the heart of economic, social and environmental policies. At the enterprise level, this approach calls for broader employee representation and involvement that could be powerful factors for productivity growth. However, the implementation of the human-centred agenda at the workplace level may be challenged by the use of artificial intelligence (AI) in various areas of corporate human resource management (HRM). While firms are enthusiastically embracing AI and digital technology in a number of HRM areas, their understanding of how such innovations affect the workforce often lags behind or is not viewed as a priority. This paper offers guidance as to when and where the use of AI in HRM should be encouraged, and where it is likely to cause more problems than it solves.
    Keywords: artificial intelligence, human resources management, information technology
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ilo:ilowps:995320592902676&r=cmp
  27. By: Bauer, Kevin; von Zahn, Moritz; Hinz, Oliver
    Abstract: Recent regulatory measures such as the European Union's AI Act require artificial intelligence (AI) systems to be explainable. As such, understanding how explainability impacts human-AI interaction, and pinpointing the specific circumstances and groups affected, is imperative. In this study, we devise a formal framework and conduct an empirical investigation involving real estate agents to explore the complex interplay between explainability of and delegation to AI systems. On an aggregate level, our findings indicate that real estate agents display a higher propensity to delegate apartment evaluations to an AI system when its workings are explainable, thereby surrendering control to the machine. However, at an individual level, we detect considerable heterogeneity. Agents possessing extensive domain knowledge are generally more inclined to delegate decisions to AI and minimize their effort when provided with explanations. Conversely, agents with limited domain knowledge only exhibit this behavior when explanations correspond with their preconceived notions regarding the relationship between apartment features and listing prices. Our results illustrate that the introduction of explainability in AI systems may transfer decision-making control from humans to AI under the veil of transparency, which has notable implications for policy makers and practitioners that we discuss.
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:safewp:394&r=cmp
  28. By: L. GALIANA (Insee); L. WILNER (Insee, Crest)
    Abstract: This paper embeds a structural model of private wealth accumulation over the life-cycle within a dynamic microsimulation model (Destinie 2) designed for long-run projections of pensions. In such an environment, the optimal savings path results from consumption smoothing and bequest motives, on top of the mortality risk. Preferences are estimated on a longitudinal wealth survey through a method of simulated moments. Simulations based on these estimates replicate quite well the fact that private wealth is more concentrated than labor income. They enable us to compute “augmented” standards of living including capital income, hence to quantify both the countervailing role played by private wealth against the drop in earnings after retirement and the impact of the mortality risk in this regard.
    Keywords: Microsimulation; Intertemporal Consumer Choice; Life-cycle; Inequality
    JEL: C63 C88 D15
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:nse:doctra:2023-04&r=cmp
  29. By: Sedar Olmez; Akhil Ahmed; Keith Kam; Zhe Feng; Alan Tua
    Abstract: This research presents a novel Discrete Event Simulation (DES) of the Lloyd's of London specialty insurance market, exploring complex market dynamics that have not been previously studied quantitatively. The proof-of-concept model allows for the simulation of various scenarios that capture important market phenomena such as the underwriting cycle, the impact of risk syndication, and the importance of appropriate exposure management. Despite minimal calibration, our model has shown that it is a valuable tool for understanding and analysing the Lloyd's of London specialty insurance market, particularly in terms of identifying areas for further investigation for regulators and participants of the market alike. The results reproduce the expected behaviours: syndicates (insurers) are less likely to go insolvent if they adopt sophisticated exposure management practices, and catastrophe events lead to more defined patterns of cyclicality and cause syndicates to substantially increase the premiums they offer. Lastly, syndication enhances the accuracy of actuarial price estimates and narrows the divergence among syndicates. Overall, this research offers a new perspective on the Lloyd's of London market and demonstrates the potential of individual-based modelling (IBM) for understanding complex financial systems.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.05581&r=cmp
  30. By: Shi, Chengchun
    Abstract: The Markov property is widely imposed in analysis of time series data. Correspondingly, testing the Markov property, and relatedly, inferring the order of a Markov model, are of paramount importance. In this article, we propose a nonparametric test for the Markov property in high-dimensional time series via deep conditional generative learning. We also apply the test sequentially to determine the order of the Markov model. We show that the test controls the type-I error asymptotically and has power approaching one. Our proposal makes novel contributions in several ways. We utilize and extend state-of-the-art deep generative learning to estimate the conditional density functions, and establish a sharp upper bound on the approximation error of the estimators. We derive a doubly robust test statistic, which employs a nonparametric estimation but achieves a parametric convergence rate. We further adopt sample splitting and cross-fitting to minimize the conditions required to ensure the consistency of the test. We demonstrate the efficacy of the test through both simulations and three data applications.
    Keywords: deep conditional generative learning; high-dimensional time series; hypothesis testing; Markov property; mixture density network; OUP deal
    JEL: C1
    Date: 2023–06–23
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:119352&r=cmp
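The paper's test relies on deep conditional generative learning for high-dimensional continuous series; as a much simpler analogue of sequentially inferring the Markov order, one can compare information criteria of discrete Markov models of increasing order. This swap of technique is purely illustrative:

```python
from collections import Counter
from math import log
import random

def markov_bic(seq, order, n_states, start):
    """BIC of an order-k Markov model fit by plug-in transition
    frequencies. Evaluating from position `start` ensures different
    orders are scored on the same observations."""
    trans, ctx = Counter(), Counter()
    for i in range(start, len(seq)):
        trans[tuple(seq[i - order:i + 1])] += 1   # (context, next) windows
        ctx[tuple(seq[i - order:i])] += 1         # context windows
    ll = sum(n * log(n / ctx[key[:-1]]) for key, n in trans.items())
    n_params = n_states ** order * (n_states - 1)
    return -2 * ll + n_params * log(len(seq) - start)

# Simulate an order-1 chain with sticky states; BIC should select order 1.
rng = random.Random(0)
seq = [0]
for _ in range(1999):
    seq.append(seq[-1] if rng.random() < 0.9 else 1 - seq[-1])
bic0 = markov_bic(seq, order=0, n_states=2, start=2)
bic1 = markov_bic(seq, order=1, n_states=2, start=2)
bic2 = markov_bic(seq, order=2, n_states=2, start=2)
```

Sequential order selection in this toy setting amounts to increasing the order until the criterion stops improving.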
  31. By: Guillaume Chevalier (AXA Investment Managers, Multi Asset Client Solutions, Quantitative Research - AXA); Guillaume Coqueret (EM - emlyon business school); Thomas Raffinot (AXA Investment Managers, Multi Asset Client Solutions, Quantitative Research - AXA)
    Abstract: We propose an asset allocation strategy that engineers optimal weights before feeding them to a supervised learning algorithm. In contrast to the traditional approaches, the machine is able to learn risk measures, preferences and constraints beyond simple expected returns, within a flexible, forward-looking and non-linear framework. Our empirical analysis illustrates that predicting the optimal weights directly instead of the traditional two step approach leads to more stable portfolios with statistically better risk-adjusted performance measures. To foster reproducibility and future comparisons, our code is publicly available on Google Colab.
    Date: 2022–12–02
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04144588&r=cmp
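The one-step idea, learning a map from features straight to portfolio weights instead of forecasting returns first, can be sketched with hypothetical ingredients: inverse-variance weights stand in for the engineered optimal weights, and a linear least-squares map stands in for the supervised learner:

```python
import numpy as np

def ex_post_weights(window_rets):
    """Label construction: simple inverse-variance weights for one window
    (a stand-in for whatever optimal-weight engineering the paper uses)."""
    inv_var = 1.0 / window_rets.var(axis=0)
    return inv_var / inv_var.sum()

def fit_direct(features, labels):
    """One-step approach: linear least-squares map from window features
    straight to portfolio weights, skipping the return-forecast step."""
    X = np.column_stack([np.ones(len(features)), features])
    beta, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return beta

def predict_weights(beta, feat):
    w = np.array([1.0, *feat]) @ beta
    w = np.clip(w, 0, None)          # enforce long-only, then renormalize
    return w / w.sum()

rng = np.random.default_rng(0)
rets = rng.standard_normal((600, 3)) * np.array([0.01, 0.02, 0.03])
windows = [rets[t:t + 60] for t in range(0, 540, 20)]
features = np.array([w.std(axis=0) for w in windows])   # realised vols
labels = np.array([ex_post_weights(w) for w in windows])
beta = fit_direct(features[:-1], labels[:-1])
w_next = predict_weights(beta, features[-1])            # out-of-sample weights
```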
  32. By: Daouda Diakité (CEMOI - Centre d'Économie et de Management de l'Océan Indien - UR - Université de La Réunion); Michel Paul (CEMOI - Centre d'Économie et de Management de l'Océan Indien - UR - Université de La Réunion); Valentin Morin (CEMOI - Centre d'Économie et de Management de l'Océan Indien - UR - Université de La Réunion)
    Date: 2023–05–31
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04147771&r=cmp
  33. By: Leon Rincon, Carlos (Tilburg University, Center For Economic Research); Moreno, Jose; Soramaki, Kimmo
    Keywords: payments; money; agent-based modelling; simulation; digital twin
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:tiu:tiucen:adf7c1f0-7fc3-46d7-8395-5e5499b24138&r=cmp
  34. By: Li, Xiaofei; Mieno, Taro; Bullock, David S
    Abstract: On-farm precision experimentation (OFPE) has expanded rapidly in recent years. While the importance of efficient trial designs in OFPE has been recognized, design efficiency has not been assessed from an economic perspective. This study reports how to use Monte Carlo simulations of corn-to-nitrogen (N) response OFPEs to compare the economic performance of thirteen different OFPE trial designs. Economic performance is measured by the profit from implementing the N “prescription” (i.e., estimated site-specific economically optimal N rates) obtained by analysing the OFPE data generated by a trial design. Results showed that the choice of trial design affects the final economic performance of OFPE. Overall, the best design was the Latin square design with a special pattern of limited N rate “jump” (LJ), which had the highest average profit and lowest profit variation in almost all simulation scenarios. The economic performance of the high-efficiency fixed-block strip design (SF1) was only slightly lower than that of LJ, and it could be a good alternative when only strip designs are available. In contrast, designs with gradual trial rate changes over space were less profitable in most situations and should be avoided. These results were robust to the various nitrogen-to-corn price ratios, yield response estimation models, and field sizes used in the simulations. It was also found that the statistical efficiency measures of trial designs roughly explained the designs’ economic performance, though much remains unexplained.
    Keywords: Crop Production/Industries, Production Economics, Productivity Analysis, Research and Development/Tech Change/Emerging Technologies
    Date: 2022–09–23
    URL: http://d.repec.org/n?u=RePEc:ags:haaepa:337136&r=cmp
  35. By: Angelov, Nikolay (Uppsala Center for Fiscal Studies); Waldenström, Daniel (Research Institute of Industrial Economics, Stockholm)
    Abstract: This report analyses the economic consequences of the coronavirus pandemic and support policies using underutilized data sources from the Swedish Tax Agency's tax register, which provides real-time information on firm sales and employees' wage income. Firms' sales, particularly in areas heavily impacted by COVID-19, declined by 6.1% on average, inducing a drastic economic recession. Excise tax revenue analysis reveals a decline in industrial electricity and air travel tax revenues, but a rise in alcohol tax revenue. The hospitality industry experienced significant negative effects, with drops in sales, employment, and wage income. Payroll tax revenues decreased due to government intervention, whereas sick pay drastically increased. Average pre-tax labor income decreased by 5%, largely due to increased unemployment among part-time workers, escalating income inequality. Policy simulations indicate government support measures mitigated wage income reduction and unemployment rise, yet they contributed to income inequality under certain conditions. These results provide insight into the diverse, yet significant, economic impacts of the pandemic. A number of policy recommendations are presented based on the empirical findings.
    Keywords: COVID-19, taxes, inequality, policy effects
    JEL: D31 H12 H24 J22 J24
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:iza:izapps:pp200&r=cmp

This nep-cmp issue is ©2023 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.