on Computational Economics
Issue of 2023‒01‒16
eighteen papers chosen by
By: | Pierre Bras |
Abstract: | Stochastic Gradient Langevin Dynamics (SGLD) algorithms, which add noise to classic gradient descent, are known to improve the training of neural networks in some cases where the network is very deep. In this paper we study possibilities for accelerating training in the numerical solution of stochastic control problems through gradient descent, where the control is parametrized by a neural network. If the control is applied at many discretization times, then solving the stochastic control problem reduces to minimizing the loss of a very deep neural network. We show numerically that Langevin algorithms improve training on various stochastic control problems, such as hedging and resource management, and for different choices of gradient descent method. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.12018&r=cmp |
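The Langevin update at the heart of the abstract above is simple to state. A minimal sketch in Python, assuming a toy quadratic loss (the loss, learning rate, and step count are illustrative choices, not taken from the paper):

```python
import numpy as np

def sgld_step(theta, grad, lr, rng):
    """One SGLD update: a gradient step plus Gaussian noise scaled by sqrt(2*lr)."""
    noise = rng.normal(size=theta.shape)
    return theta - lr * grad(theta) + np.sqrt(2.0 * lr) * noise

# Toy loss L(theta) = 0.5 * ||theta||^2, whose gradient is theta itself.
rng = np.random.default_rng(0)
theta = np.ones(5)
for _ in range(2000):
    theta = sgld_step(theta, lambda t: t, lr=1e-2, rng=rng)
# The injected noise keeps the iterates fluctuating around the minimizer
# instead of converging to it exactly; this exploration behaviour is what
# the paper exploits for very deep networks.
```

The same noise term drops into any gradient descent loop, which is how the paper pairs Langevin dynamics with different choices of optimizer.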
By: | Harrison Mateika; Juannan Jia; Linda Lillard; Noah Cronbaugh; Will Shin |
Abstract: | The primary aim of this research was to find the model that best predicts which fallen angel bonds would rise back to investment grade and which would fall into bankruptcy. To implement the solution, we reasoned that the ideal approach would be to create an optimal machine learning model that could predict bankruptcies. Among the many machine learning models available, we chose four classification methods: logistic regression, KNN, SVM, and NN. We also utilized Google Cloud's automated machine learning. The results of our model comparisons showed that the models did not predict bankruptcies very well on the original data set, with the exception of Google Cloud's machine learning, which had a high precision score. However, our over-sampled and feature-selected data set did perform very well. This is likely because the model was over-fitted to match the narrative of the over-sampled data (that is, it does not accurately predict data outside this data set). Therefore, we were not able to create a model that we are confident would predict bankruptcies. However, we found value in this project in two key ways. First, Google Cloud's machine learning model, on every metric and every data set, either outperformed or performed on par with the other models. Second, we found that utilizing feature selection did not reduce predictive power much. This means that we can reduce the amount of data to collect for future experimentation on predicting bankruptcies. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.03454&r=cmp |
By: | Alessandro Gnoatto; Silvia Lavagnini; Athena Picarelli |
Abstract: | We present a novel computational approach for quadratic hedging in a high-dimensional incomplete market. This covers both mean-variance hedging and local risk minimization. In the first case, the solution is linked to a system of BSDEs, one of which is a backward stochastic Riccati equation (BSRE); in the second case, the solution is related to the F\"ollmer-Schweizer decomposition and is also linked to a BSDE. We apply (recursively) a deep neural network-based BSDE solver. This approach allows us to solve high-dimensional quadratic hedging problems, providing entire hedging strategy paths, which would otherwise require solving high-dimensional PDEs. We test our approach on a classical Heston model and on a multi-dimensional generalization of it. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.12725&r=cmp |
By: | Emmanuel Alanis; Sudheer Chava; Agam Shah |
Abstract: | Using a comprehensive sample of 2,585 bankruptcies from 1990 to 2019, we benchmark the performance of various machine learning models in predicting financial distress of publicly traded U.S. firms. We find that gradient boosted trees outperform other models in one-year-ahead forecasts. Variable permutation tests show that excess stock returns, idiosyncratic risk, and relative size are the most important variables for predictions. Textual features derived from corporate filings do not improve performance materially. In a credit competition model that accounts for the asymmetric cost of default misclassification, the survival random forest is able to capture large dollar profits. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.12051&r=cmp |
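The variable permutation tests mentioned in the abstract can be illustrated with a short sketch: fit any model, then measure how much shuffling a single column degrades its score. The synthetic data and the least-squares stand-in model below are illustrative assumptions, not the paper's gradient boosted trees:

```python
import numpy as np

def permutation_importance(score_fn, X, y, col, rng, n_rounds=20):
    """Average drop in score when column `col` is shuffled: bigger drop = more important."""
    base = score_fn(X, y)
    drops = []
    for _ in range(n_rounds):
        Xp = X.copy()
        Xp[:, col] = rng.permutation(Xp[:, col])
        drops.append(base - score_fn(Xp, y))
    return float(np.mean(drops))

# Synthetic data: the label depends on column 0 only; column 1 is pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(float)

# Least-squares classifier fit once on the full data (an illustrative
# stand-in for the paper's gradient boosted trees).
w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
score = lambda Xe, ye: float(np.mean(((Xe @ w) > 0) == (ye > 0.5)))

imp0 = permutation_importance(score, X, y, 0, rng)  # informative column
imp1 = permutation_importance(score, X, y, 1, rng)  # noise column
```

With this setup, shuffling the informative column collapses accuracy toward chance while shuffling the noise column barely moves it, which is exactly the signal the paper's permutation tests read off.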
By: | Marc Chataigner; Areski Cousin; St\'ephane Cr\'epey; Matthew Dixon; Djibril Gueye |
Abstract: | We explore the abilities of two machine learning approaches for no-arbitrage interpolation of European vanilla option prices, which jointly yield the corresponding local volatility surface: a finite-dimensional Gaussian process (GP) regression approach under no-arbitrage constraints based on prices, and a neural net (NN) approach with penalization of arbitrages based on implied volatilities. We demonstrate the performance of these approaches relative to the SSVI industry standard. The GP approach is provably arbitrage-free, whereas arbitrages are only penalized under the SSVI and NN approaches. The GP approach obtains the best out-of-sample calibration error and provides uncertainty quantification. The NN approach yields a smoother local volatility and a better backtesting performance, as its training criterion incorporates a local volatility regularization term. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.09957&r=cmp |
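As a rough illustration of the GP side of the abstract, the sketch below implements plain Gaussian process regression with an RBF kernel; the paper's no-arbitrage constraints and market data are omitted, and the smooth test curve is a hypothetical stand-in for option prices across strikes:

```python
import numpy as np

def rbf_kernel(A, B, length=0.2):
    """Squared-exponential kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior_mean(X, y, Xstar, noise=1e-4, length=0.2):
    """Standard GP regression posterior mean; the paper imposes
    no-arbitrage constraints on top of this basic machinery."""
    K = rbf_kernel(X, X, length) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xstar, X, length)
    return Ks @ np.linalg.solve(K, y)

# Illustrative: interpolate a smooth curve at points not in the training grid.
X = np.linspace(0.0, 1.0, 10)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
Xstar = np.array([[0.25], [0.75]])
mean = gp_posterior_mean(X, y, Xstar)
```

The posterior covariance (not computed here) is what gives the GP approach the uncertainty quantification the abstract highlights.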
By: | Shuhua Xiao; Jiali Ma; Li Xia; Shushang Zhu |
Abstract: | The bailout strategy is crucial to cushioning the massive losses caused by systemic risk in the financial system. The optimal bailout problem has no closed-form formulation, which makes it difficult to solve. In this paper, we regard the optimal bailout (capital injection) as a black-box optimization problem, where the black box is characterized as a fixed-point system that follows the E-N framework for measuring the systemic risk of the financial system. We propose the "Prediction-Gradient-Optimization" (PGO) framework to solve it: "Prediction" means that the objective function, lacking a closed form, is approximated and predicted by a neural network; the "Gradient" is calculated from this approximation; and the "Optimization" procedure is then implemented within a gradient projection algorithm to solve the problem. Comprehensive numerical simulations demonstrate that the proposed approach is promising for systemic risk management. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.05235&r=cmp |
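The "Gradient" and "Optimization" stages of the PGO framework can be sketched with projected gradient descent on a black-box objective. Here finite differences stand in for differentiating the paper's neural-network surrogate, and a simple box constraint stands in for the capital-injection feasible set; all names and numbers are illustrative:

```python
import numpy as np

def fd_grad(f, x, eps=1e-5):
    """Central finite-difference gradient of a black-box objective (a simple
    stand-in for the gradient of the PGO neural-network approximation)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def projected_gradient_descent(f, x0, lower, upper, lr=0.1, steps=200):
    """Minimize f over the box [lower, upper] by gradient projection:
    take a gradient step, then clip back into the feasible set."""
    x = x0.copy()
    for _ in range(steps):
        x = np.clip(x - lr * fd_grad(f, x), lower, upper)
    return x

# Illustrative black box: quadratic whose unconstrained minimum lies outside
# the feasible box, so the projected iterates settle on the boundary.
f = lambda x: float(((x - 2.0) ** 2).sum())
x = projected_gradient_descent(f, np.zeros(3), lower=0.0, upper=1.0)
```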
By: | Jan Ditzen; Francesco Ravazzolo |
Abstract: | For western economies a long-forgotten phenomenon is on the horizon: rising inflation rates. We propose a novel approach christened D2ML to identify drivers of national inflation. D2ML combines machine learning for model selection with time dependent data and graphical models to estimate the inverse of the covariance matrix, which is then used to identify dominant drivers. Using a dataset of 33 countries, we find that the US inflation rate and oil prices are dominant drivers of national inflation rates. For a more general framework, we carry out Monte Carlo simulations to show that our estimator correctly identifies dominant drivers. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.05841&r=cmp |
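The idea of reading dominant drivers off an estimated inverse covariance matrix can be sketched as follows; the plain sample-covariance inverse below is a simplified stand-in for the paper's regularized graphical-model estimator, and the synthetic panel is illustrative:

```python
import numpy as np

def dominant_driver(X):
    """Index of the series with the largest total partial-correlation strength,
    read off the inverse of the sample covariance matrix (the precision matrix)."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    partial = -prec / np.outer(d, d)   # off-diagonal entries = partial correlations
    np.fill_diagonal(partial, 0.0)
    return int(np.abs(partial).sum(axis=0).argmax())

# Synthetic panel: series 0 drives series 1 and 2; series 3 is independent noise.
rng = np.random.default_rng(2)
driver = rng.normal(size=1000)
X = np.column_stack([
    driver,
    0.8 * driver + 0.3 * rng.normal(size=1000),
    0.7 * driver + 0.3 * rng.normal(size=1000),
    rng.normal(size=1000),
])
```

Because series 1 and 2 are conditionally independent given series 0, the precision matrix concentrates the connectivity on the true driver.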
By: | Piero Mazzarisi; Adele Ravagnani; Paola Deriu; Fabrizio Lillo; Francesca Medda; Antonio Russo |
Abstract: | Identifying market abuse from data on investors' trading activity is very challenging, both because of the data volume and because of the low signal-to-noise ratio. Here we propose two complementary unsupervised machine learning methods to support market surveillance aimed at identifying potential insider trading. The first uses clustering to identify, in the vicinity of a price-sensitive event such as a takeover bid, discontinuities in an investor's trading activity with respect to his/her own past trading history and to the present trading activity of his/her peers. The second unsupervised approach aims at identifying (small) groups of investors that act coherently around price-sensitive events, pointing to potential insider rings, i.e. groups of synchronised traders displaying strong directional trading in rewarding positions in the period before the price-sensitive event. As a case study, we apply our methods to investor-resolved data on Italian stocks around takeover bids. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.05912&r=cmp |
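The first method's notion of a discontinuity relative to an investor's own history can be illustrated with a simple z-score; real trading records and the clustering step are replaced here by synthetic Poisson trade counts, so all numbers are illustrative:

```python
import numpy as np

def event_zscore(history, event_window):
    """How unusual the activity around the event is, measured in standard
    deviations of the investor's own past activity."""
    mu, sigma = history.mean(), history.std(ddof=1)
    return float((event_window.mean() - mu) / sigma)

rng = np.random.default_rng(3)
history = rng.poisson(5, size=250).astype(float)      # a year of daily trade counts
normal_week = rng.poisson(5, size=5).astype(float)    # business as usual
insider_week = rng.poisson(30, size=5).astype(float)  # burst before the event
```

A large z-score in the week before a takeover bid is exactly the kind of discontinuity the surveillance method would flag for closer inspection.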
By: | Jiashu Lou; Leyi Cui; Ye Li |
Abstract: | With the increasing enrichment and development of the financial derivatives market, transactions are occurring at ever higher frequency. Given human limitations, algorithmic and automated trading have recently become a focus of discussion. In this paper, we propose a bidirectional LSTM neural network based on an attention mechanism, applied to two popular assets, gold and bitcoin. For feature engineering, we add traditional technical factors and also develop factors from time series models. In selecting model parameters, we ultimately chose a two-layer deep learning network. Measured by AUC, the accuracy for bitcoin and gold is 71.94% and 73.03%, respectively. Using the forecast results, we achieved a return of 1089.34% over two years. We also compare the attention Bi-LSTM model proposed in this paper with traditional models; the results show that our model performs best on this data set. Finally, we discuss the significance of the model and the experimental results, as well as possible directions for future improvement. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.03443&r=cmp |
By: | Pouriya Khalilian; Sara Azizi; Mohammad Hossein Amiri; Javad T. Firouzjaee |
Abstract: | The National Association of Securities Dealers Automated Quotations (NASDAQ) is an American stock exchange based in New York City, and its index is one of the most closely watched economic indicators in the world \cite{pagano2008quality}. The stock market is volatile, and economic indicators such as crude oil, gold, and the dollar influence it; NASDAQ shares are likewise affected and have a volatile and chaotic nature \cite{firouzjaee2022lstm}. In this article, we examine the effect of oil, the dollar, gold, and stock market volatility on the economic market, and then examine the effect of these indicators on NASDAQ stocks. We then analyze the feedback from past NASDAQ stock prices and its impact on the current price. Using PCA and a linear regression algorithm, we design an optimal dynamic learning procedure for modeling these stocks. The results of the quantitative analysis are consistent with the qualitative analysis of economic studies, and the modeling done with the optimal dynamic machine learning procedure justifies the current price of NASDAQ shares. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.12044&r=cmp |
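The PCA-plus-linear-regression pipeline described in the abstract can be sketched in a few lines: center the features, project onto the leading principal components via SVD, and fit ordinary least squares on the scores. The synthetic factors below are hypothetical stand-ins for the oil, dollar, and gold series:

```python
import numpy as np

def pca_regression(X, y, n_components):
    """Project features onto the leading principal components, then fit OLS
    on the component scores; returns a prediction function."""
    mu_x, mu_y = X.mean(axis=0), y.mean()
    Xc = X - mu_x
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T                      # scores on leading components
    beta, *_ = np.linalg.lstsq(Z, y - mu_y, rcond=None)
    return lambda Xnew: (Xnew - mu_x) @ Vt[:n_components].T @ beta + mu_y

# Synthetic stand-ins for oil/dollar/gold series driving an index price.
rng = np.random.default_rng(4)
F = rng.normal(size=(300, 3))
price = F @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.normal(size=300)
model = pca_regression(F, price, n_components=3)
pred = model(F)
```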
By: | Alicia von Schenk; Victor Klockmann; Jean-Fran\c{c}ois Bonnefon; Iyad Rahwan; Nils K\"obis |
Abstract: | People are not very good at detecting lies, which may explain why they refrain from accusing others of lying, given the social costs attached to false accusations - both for the accuser and the accused. Here we consider how this social balance might be disrupted by the availability of lie-detection algorithms powered by Artificial Intelligence. Will people elect to use lie detection algorithms that perform better than humans, and if so, will they show less restraint in their accusations? We built a machine learning classifier whose accuracy (67%) was significantly better than human accuracy (50%) in a lie-detection task and conducted an incentivized lie-detection experiment in which we measured participants' propensity to use the algorithm, as well as the impact of that use on accusation rates. We find that the few people (33%) who elect to use the algorithm drastically increase their accusation rates (from 25% in the baseline condition up to 86% when the algorithm flags a statement as a lie). They make more false accusations (18pp increase), but at the same time, the probability of a lie remaining undetected is much lower in this group (36pp decrease). We consider individual motivations for using lie detection algorithms and the social implications of these algorithms. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.04277&r=cmp |
By: | Patrick Rehill |
Abstract: | Methods for learning optimal policies use causal machine learning models to create human-interpretable rules for making choices around the allocation of different policy interventions. However, in realistic policy-making contexts, decision-makers often care about trade-offs between outcomes, not just single-mindedly maximising utility for one outcome. This paper proposes an approach termed Multi-Objective Policy Learning (MOPoL) which combines optimal decision trees for policy learning with a multi-objective Bayesian optimisation approach to explore the trade-off between multiple outcomes. It does this by building a Pareto frontier of non-dominated models for different hyperparameter settings. The key here is that a low-cost surrogate function can be an accurate proxy, in terms of expected regret, for the very computationally costly optimal tree. This surrogate can be fit many times with different hyperparameter values to proxy the performance of the optimal model. The method is applied to a real-world case study of conditional cash transfers in Morocco, where hybrid (partially optimal, partially greedy) policy trees provide good performance as a surrogate for optimal trees while being computationally cheap enough to feasibly fit a Pareto frontier. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.06312&r=cmp |
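The Pareto frontier of non-dominated models at the core of MOPoL reduces to a simple set computation. A minimal sketch, assuming both objectives (say, regret on each outcome) are minimized and the points are unique; the numbers are illustrative:

```python
def pareto_frontier(points):
    """Return the non-dominated points among tuples of objectives to minimize.
    A point is dominated if some other point is <= in every objective
    (assumes no exact duplicates)."""
    frontier = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

# Hypothetical (regret on outcome A, regret on outcome B) per hyperparameter setting.
models = [(0.10, 5.0), (0.20, 2.0), (0.15, 4.0), (0.30, 1.0), (0.25, 3.0)]
```

Here (0.25, 3.0) is dominated by (0.20, 2.0), which is better on both objectives, so it drops off the frontier a decision-maker would inspect.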
By: | Kazuhiko Shinoda; Takahiro Hoshino |
Abstract: | In various fields of data science, researchers are often interested in estimating the ratio of conditional expectation functions (CEFR). Specifically in causal inference problems, it is sometimes natural to consider ratio-based treatment effects, such as odds ratios and hazard ratios, and even difference-based treatment effects are identified as CEFR in some empirically relevant settings. This chapter develops the general framework for estimation and inference on CEFR, which allows the use of flexible machine learning for infinite-dimensional nuisance parameters. In the first stage of the framework, the orthogonal signals are constructed using debiased machine learning techniques to mitigate the negative impacts of the regularization bias in the nuisance estimates on the target estimates. The signals are then combined with a novel series estimator tailored for CEFR. We derive the pointwise and uniform asymptotic results for estimation and inference on CEFR, including the validity of the Gaussian bootstrap, and provide low-level sufficient conditions to apply the proposed framework to some specific examples. We demonstrate the finite-sample performance of the series estimator constructed under the proposed framework by numerical simulations. Finally, we apply the proposed method to estimate the causal effect of the 401(k) program on household assets. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.13145&r=cmp |
By: | Fierro, Luca Eduardo; Giri, Federico; Russo, Alberto |
Abstract: | We study how income inequality affects monetary policy through the inequality-household debt channel. We design a minimal macro agent-based model that replicates several stylized facts, including two novel ones: a falling aggregate saving rate and decreasing bankruptcies during the household debt boom phase. When inequality meets financial liberalization, a leaning-against-the-wind strategy can preserve financial stability at the cost of high unemployment, whereas an accommodative strategy can dampen the fall in aggregate demand at the cost of larger leverage. We conclude that inequality may constrain the central bank, even when it is not explicitly targeted. |
Keywords: | Inequality; Financial Fragility; Monetary Policy; Agent-Based Model |
JEL: | E21 E25 E31 E52 G01 |
Date: | 2022–12–01 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:115741&r=cmp |
By: | Chunmeng Yang; Siqi Bu; Yi Fan; Wayne Xinwei Wan; Ruoheng Wang; Aoife Foley |
Abstract: | To meet widely recognised carbon neutrality targets, over the last decade metropolitan regions around the world have implemented policies to promote the generation and use of sustainable energy. Nevertheless, there is a gap in the timely formulation and evaluation of these policies, since sustainable energy capacity and generation are dynamically determined by various factors along the dimensions of local economic prosperity and societal green ambitions. We develop a novel data-driven platform to predict and evaluate energy transition policies by applying an artificial neural network and a technology diffusion model. Using Singapore, London, and California as case studies of metropolitan regions at distinctive stages of energy transition, we show that, in addition to forecasting renewable energy generation and capacity, the platform is particularly powerful in formulating future policy scenarios. We recommend global application of the proposed methodology to future sustainable energy transitions in smart regions. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.07019&r=cmp |
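The abstract does not say which technology diffusion model the platform uses, so as a hedged illustration here is the classic Bass diffusion model, a common choice for modelling the adoption of new energy technologies; the parameter values are illustrative defaults, not calibrated to the paper's regions:

```python
import numpy as np

def bass_adoption(p, q, m, periods):
    """Discrete Bass diffusion: adoption per period driven by innovation (p)
    and imitation (q), with market potential m. Returns the cumulative path."""
    N = [0.0]
    for _ in range(periods):
        n_t = (p + q * N[-1] / m) * (m - N[-1])   # new adopters this period
        N.append(N[-1] + n_t)
    return np.array(N)

# Illustrative run: textbook-style coefficients, market potential of 100 units.
path = bass_adoption(p=0.03, q=0.38, m=100.0, periods=40)
```

The resulting S-shaped path (slow start, imitation-driven takeoff, saturation at m) is the qualitative behaviour a diffusion component contributes to capacity forecasts.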
By: | Berndt, Marvin; Hess, Sebastian |
Keywords: | International Relations/Trade, Research Methods/Statistical Methods |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:ags:gewi22:329591&r=cmp |
By: | Francesco Lamperti; Andrea Roventini |
Abstract: | Though physical and transition climate risks will likely affect socio-economic dynamics along any transition pathway, their unfolding is still poorly understood. This also hampers the development of climate-change policies aimed at sustainable growth. In this paper, we discuss a series of results assessing the materiality of climate risks for economic and financial stability, and alternative policy pathways, by means of the Dystopian Schumpeter meeting Keynes (DSK) agent-based integrated assessment model. Our results suggest the emergence of tipping points wherein physical risks under unmitigated emissions will reduce long-run growth and spur financial and economic instability. Moreover, different types of climate shocks have different impacts on economic dynamics and on the chances of observing a transition to carbonless growth. While these results call for immediate and ambitious interventions, appropriate mitigation policies need to be designed. Our results show that carbon taxation is not the most suitable tool to achieve zero-emission growth, given its huge economic costs. On the contrary, command-and-control regulation and innovation policies to foster green investment constitute the best policy mix to put the economy on a green growth pathway. Overall, our results contradict the standard tenets of cost-benefit climate economics and suggest the absence of any trade-off between decarbonization and growth. |
Keywords: | climate policy; climate risks; macroeconomic dynamics; agent-based modelling. |
Date: | 2022–12–30 |
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2022/39&r=cmp |
By: | Marco Di Francesco; Kevin Kamm |
Abstract: | In this paper, we improve the performance of the large basket approximation developed by Reisinger et al. to calibrate Collateralized Debt Obligations (CDO) to iTraxx market data. The iTraxx tranches and index are computed using a basket of size $K = 125$. In the context of the large basket approximation, it is assumed that this is sufficiently large to approximate it by a limit SPDE describing the portfolio loss of a basket with size $K\rightarrow \infty$. For the resulting SPDE, we present four different numerical methods and demonstrate how the Magnus expansion can be applied to solve the large basket SPDE efficiently and with high accuracy. Moreover, we calibrate a structural model to the available market data. For this, it is important to efficiently infer the so-called initial distances to default from the Credit Default Swap (CDS) quotes of the constituents of the iTraxx for the large basket approximation. We show how deep learning techniques can improve the performance of this step significantly. In the end, we obtain a good fit to the market data and develop a highly parallelizable numerical scheme using GPU and multithreading techniques. |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2212.12318&r=cmp |
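To give a flavor of the Magnus expansion the abstract applies to the large basket SPDE, the sketch below implements a first-order Magnus integrator for a linear system $y' = A(t)y$, the deterministic analogue of such schemes; the truncated-Taylor matrix exponential and the rotation test problem are illustrative simplifications, not the paper's method:

```python
import numpy as np

def expm_taylor(M, terms=20):
    """Matrix exponential via truncated Taylor series (adequate for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def magnus1_solve(A, y0, T, n_steps):
    """First-order Magnus integrator for y' = A(t) y: each step exponentiates
    the midpoint value of A. Exact (up to expm error) when A is constant."""
    h = T / n_steps
    y = y0.copy()
    for k in range(n_steps):
        t_mid = (k + 0.5) * h
        y = expm_taylor(h * A(t_mid)) @ y
    return y

# Illustrative test problem: constant rotation generator, so the exact solution
# rotates (1, 0) by the angle T.
A = lambda t: np.array([[0.0, -1.0], [1.0, 0.0]])
y = magnus1_solve(A, np.array([1.0, 0.0]), T=np.pi / 2, n_steps=50)
```

Because each step is a matrix exponential, the scheme preserves structure that explicit time-steppers lose, which is one reason Magnus-type integrators can reach high accuracy with few steps.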