on Computational Economics
Issue of 2019‒07‒29
twelve papers chosen by
By: | Bucci, Andrea |
Abstract: | Accurately forecasting multivariate volatility plays a crucial role in the financial industry. The Cholesky-Artificial Neural Networks specification presented here offers a twofold advantage. On the one hand, the use of the Cholesky decomposition ensures positive definite forecasts. On the other hand, artificial neural networks make it possible to specify nonlinear relations without any particular distributional assumption. Out-of-sample comparisons reveal that artificial neural networks are not able to strongly outperform the competing models. However, long-memory-detecting networks, such as the nonlinear autoregressive network with exogenous inputs (NARX) and the long short-term memory (LSTM) network, show improved forecast accuracy with respect to existing econometric models. |
Keywords: | Neural Networks; Machine Learning; Stock market volatility; Realized Volatility |
JEL: | C22 C45 C53 G17 |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:95137&r=all |
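The Cholesky trick described in the abstract above is compact enough to illustrate in a few lines. The sketch below is a minimal illustration, assuming a sequence of realized covariance matrices; the NARX/LSTM forecasting stage of the paper is replaced by a naive last-value forecast, so every name and step beyond the decomposition itself is an assumption rather than the authors' code.

```python
# Hypothetical sketch of the Cholesky device for positive-definite covariance
# forecasts: forecast the Cholesky factor, then rebuild the matrix.
import numpy as np

def to_chol_vector(cov):
    """Lower-triangular Cholesky factor of a covariance matrix, flattened."""
    L = np.linalg.cholesky(cov)
    idx = np.tril_indices_from(L)
    return L[idx], idx

def from_chol_vector(vec, idx, n):
    """Rebuild a covariance matrix from a forecasted Cholesky vector."""
    L = np.zeros((n, n))
    L[idx] = vec
    return L @ L.T                      # positive semi-definite by construction

# toy example: tomorrow's covariance forecast as today's Cholesky vector
rcov = np.array([np.cov(np.random.randn(5, 100)) for _ in range(10)])
vec, idx = to_chol_vector(rcov[-1])     # a neural net would forecast this vector
forecast = from_chol_vector(vec, idx, rcov.shape[1])
```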
By: | Souradeep Chakraborty |
Abstract: | In this paper we explore the use of deep reinforcement learning algorithms to automatically generate consistently profitable, robust, uncorrelated trading signals in any general financial market. To do this, we present a novel Markov decision process (MDP) model of financial trading markets. We review and propose various modifications to existing approaches and explore different techniques to succinctly capture the market dynamics. We then use deep reinforcement learning to enable the agent (the algorithm) to learn on its own how to take profitable trades in any market, while suggesting various methodology changes and leveraging the unique representation of the financial MDP (FMDP) to tackle the primary challenges faced in similar works. Our experimental results show that the model extends easily to two very different financial markets and delivers robust, positive performance in all conducted experiments. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.04373&r=all |
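A financial MDP of the kind described above can be sketched as a tiny environment class. The state, action and reward definitions below (a window of log returns, a position in {-1, 0, +1}, and position times next return minus a transaction cost) are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal trading-MDP sketch; parameter names and the cost term are assumptions.
import numpy as np

class TradingMDP:
    def __init__(self, prices, window=10, cost=1e-4):
        self.returns = np.diff(np.log(prices))
        self.window, self.cost = window, cost

    def reset(self):
        self.t = self.window
        self.position = 0
        return self.returns[self.t - self.window:self.t]      # initial state

    def step(self, action):                                    # action in {-1, 0, 1}
        reward = action * self.returns[self.t] - self.cost * abs(action - self.position)
        self.position = action
        self.t += 1
        done = self.t >= len(self.returns)
        state = self.returns[self.t - self.window:self.t]
        return state, reward, done

# toy usage on a simulated price path
prices = 100 * np.exp(np.cumsum(0.001 * np.random.randn(500)))
env = TradingMDP(prices)
state = env.reset()
state, reward, done = env.step(1)                              # go long for one step
```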
By: | J. M. Calabuig; H. Falciani; E. A. Sánchez-Pérez |
Abstract: | We develop a new topological structure for the construction of a reinforcement learning model in the framework of financial markets. It is based on Lipschitz-type extensions of reward functions defined on metric spaces. Starting from some known states of a dynamical system that represents the evolution of a financial market, we use our technique to simulate new states, which we call "dreams". These new states are used to feed a learning algorithm designed to improve the investment strategy. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.05697&r=all |
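The Lipschitz-type extension behind the simulated "dreams" can be written as a one-line McShane formula, F(x) = min_i [f(x_i) + K d(x, x_i)]. The sketch below assumes a Euclidean metric and a known Lipschitz constant K, both illustrative choices rather than the authors' construction.

```python
# McShane-type Lipschitz extension of a reward function known on sampled states.
import numpy as np

def lipschitz_extend(known_states, known_rewards, new_state, K):
    d = np.linalg.norm(known_states - new_state, axis=1)   # distances to known states
    return np.min(known_rewards + K * d)                    # McShane extension value

states  = np.random.randn(50, 3)        # observed market states (toy data)
rewards = states.sum(axis=1)            # toy reward observed on those states
dream   = np.random.randn(3)            # a simulated ("dream") state
print(lipschitz_extend(states, rewards, dream, K=1.0))
```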
By: | Bernard Lapeyre (CERMICS, MATHRISK); Jérôme Lelong (LJK) |
Abstract: | The pricing of Bermudan options amounts to solving a dynamic programming principle, in which the main difficulty, especially in large dimension, comes from computing the conditional expectation involved in the continuation value. These conditional expectations are classically computed by regression techniques on a finite-dimensional vector space. In this work, we study neural network approximations of conditional expectations. We prove the convergence of the well-known Longstaff and Schwartz algorithm when the standard least-squares regression is replaced by a neural network approximation. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.06474&r=all |
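A minimal sketch of the algorithm studied above is Longstaff-Schwartz with the least-squares regression swapped for a small neural network. The option parameters, network size and use of scikit-learn's MLPRegressor below are assumptions made for illustration only, not the authors' implementation.

```python
# Longstaff-Schwartz for a Bermudan put with a neural-network regression step.
import numpy as np
from sklearn.neural_network import MLPRegressor

def bermudan_put_nn(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, steps=10, paths=10000):
    dt = T / steps
    rng = np.random.default_rng(0)
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum(increments, axis=1))           # simulated price paths
    cashflow = np.maximum(K - S[:, -1], 0.0)                 # exercise value at maturity
    for t in range(steps - 2, -1, -1):                       # backward induction
        cashflow *= np.exp(-r * dt)                          # discount one step
        itm = K - S[:, t] > 0                                # in-the-money paths only
        if itm.sum() > 50:
            net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
            net.fit(S[itm, t].reshape(-1, 1), cashflow[itm]) # continuation-value regression
            continuation = net.predict(S[itm, t].reshape(-1, 1))
            payoff = np.maximum(K - S[itm, t], 0.0)
            exercise = payoff > continuation
            cashflow[np.where(itm)[0][exercise]] = payoff[exercise]
    return np.exp(-r * dt) * cashflow.mean()                 # discount to time zero

print(bermudan_put_nn())
```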
By: | Alex Burnap; John R. Hauser; Artem Timoshenko |
Abstract: | Aesthetics are critically important to market acceptance in many product categories. In the automotive industry in particular, an improved aesthetic design can boost sales by 30% or more. Firms invest heavily in designing and testing new product aesthetics. A single automotive "theme clinic" costs between $100,000 and $1,000,000, and hundreds are conducted annually. We use machine learning to augment human judgment when designing and testing new product aesthetics. The model combines a probabilistic variational autoencoder (VAE) with adversarial components from generative adversarial networks (GAN), along with modeling assumptions that address managerial requirements for firm adoption. We train our model with data from an automotive partner: 7,000 images evaluated by targeted consumers and 180,000 high-quality unrated images. Our model predicts the appeal of new aesthetic designs well, with a 38% improvement relative to a baseline and substantial improvements over both conventional machine learning models and pretrained deep learning models. New automotive designs are generated in a controllable manner for the design team to consider, and we empirically verify that they are appealing to consumers. These results, combining human and machine inputs for practical managerial usage, suggest that machine learning offers significant opportunity to augment aesthetic design. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.07786&r=all |
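The generative core referred to above can be summarised by a minimal variational autoencoder. The sketch below shows only the encoder, reparameterisation, decoder and ELBO loss, with the adversarial (GAN) component indicated by a comment; all layer sizes and names are illustrative assumptions, not the paper's architecture.

```python
# Minimal VAE core of the kind combined with GAN-style components in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterisation
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    rec = F.mse_loss(recon, x, reduction='sum')                    # reconstruction term
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL divergence term
    return rec + kld     # an adversarial (GAN) loss would be added on top of this

x = torch.rand(8, 784)                   # toy batch of flattened images
model = TinyVAE()
recon, mu, logvar = model(x)
loss = elbo_loss(recon, x, mu, logvar)
```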
By: | Songül Tolan (European Commission – JRC) |
Abstract: | Machine learning algorithms are now frequently used in sensitive contexts that substantially affect the course of human lives, such as credit lending or criminal justice. This is driven by the idea that ‘objective’ machines base their decisions solely on facts and remain unaffected by human cognitive biases, discriminatory tendencies or emotions. Yet, there is overwhelming evidence showing that algorithms can inherit or even perpetuate human biases in their decision making when they are based on data that contains biased human decisions. This has led to a call for fairness-aware machine learning. However, fairness is a complex concept, which is also reflected in the attempts to formalize fairness for algorithmic decision making. Statistical formalizations of fairness lead to a long list of criteria that are each flawed (or even harmful) in different contexts. Moreover, inherent tradeoffs in these criteria make it impossible to unify them in one general framework. Thus, fairness constraints in algorithms have to be specific to the domains to which the algorithms are applied. In the future, research in algorithmic decision making systems should be aware of data and developer biases and add a focus on transparency to facilitate regular fairness audits. |
Keywords: | fairness, machine learning, algorithmic bias, algorithmic transparency |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:ipt:decwpa:2018-10&r=all |
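Two of the statistical fairness criteria the abstract alludes to, demographic parity and equal opportunity, can be computed in a few lines. The sketch below uses synthetic arrays and hypothetical names purely for illustration.

```python
# Sketch of two statistical fairness criteria for binary predictions y_hat,
# true labels y, and a binary group attribute g; all data here are synthetic.
import numpy as np

def demographic_parity_gap(y_hat, g):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_hat[g == 1].mean() - y_hat[g == 0].mean())

def equal_opportunity_gap(y_hat, y, g):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda grp: y_hat[(g == grp) & (y == 1)].mean()
    return abs(tpr(1) - tpr(0))

rng = np.random.default_rng(0)
y, g = rng.integers(0, 2, 1000), rng.integers(0, 2, 1000)
y_hat = rng.integers(0, 2, 1000)
print(demographic_parity_gap(y_hat, g), equal_opportunity_gap(y_hat, y, g))
```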
By: | Brummelhuis, Raymond; Luo, Zhongmin |
Abstract: | The 2007-09 financial crisis revealed that investors in the financial market were more concerned about banks' future capital adequacy than about their current capital adequacy. Stress testing promises to complement regulatory capital adequacy regimes, which assess a bank's current capital adequacy, with the ability to assess its future capital adequacy based on the projected asset losses and incomes from the forecasting models of regulators and banks. The effectiveness of stress testing rests on its ability to inform the financial market, which depends on whether or not the market has confidence in the model-projected asset losses and incomes for banks. Post-crisis studies found that stress-test results are uninformative and receive insignificant market reactions; others question the validity of stress testing on the grounds of the poor forecast accuracy of linear regression models that forecast banking-industry incomes measured by the aggregate Net Interest Margin (NIM). Instead, our study focuses on NIM forecasting at the individual bank level and employs both linear regression and non-linear machine learning techniques. First, we present the linear and non-linear machine learning regression techniques used in our study. Then, based on out-of-sample tests and literature-recommended forecasting techniques, we compare the NIM forecast accuracy of 162 models based on 11 different regression techniques, finding that some machine learning techniques, as well as some linear ones, can achieve significantly higher accuracy than the random-walk benchmark, which invalidates the grounds used by the literature to challenge the validity of stress testing. Last, our results from the forecast accuracy comparisons are either consistent with or complement those of the existing forecasting literature. We believe this paper is the first systematic study on forecasting bank-specific NIM with machine learning techniques; it is also a first systematic forecast accuracy comparison of both linear and non-linear machine learning techniques using financial data for a critical real-world problem. It is a multi-step forecasting exercise involving iterative forecasting, rolling origins and recalibration, with a scale-independent forecast accuracy measure, and robust regression proved to be beneficial for forecasting in the presence of outliers. The paper concludes with policy suggestions and future research directions. |
Keywords: | Regression, Machine Learning, Time Series Analysis, Bank Capital, Stress Test, Net Interest Margin, Forecasting, PPNR, CCAR |
JEL: | C4 C45 C5 C58 C6 G01 |
Date: | 2019–03–02 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:94779&r=all |
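The out-of-sample design described above (rolling origins, a random-walk benchmark, a scale-independent accuracy measure) can be sketched compactly. The series, the simple one-lag model and the window size below are placeholders, not the paper's 162 models.

```python
# Rolling-origin, out-of-sample comparison against a random-walk benchmark.
import numpy as np

def rolling_origin_rel_mae(series, window=40):
    """MAE of a one-lag OLS forecast relative to the random-walk forecast."""
    model_err, rw_err = [], []
    for origin in range(window, len(series) - 1):
        y, x = series[1:origin + 1], series[:origin]              # training pairs
        beta = np.polyfit(x, y, 1)                                # one-lag linear model
        model_err.append(abs(np.polyval(beta, series[origin]) - series[origin + 1]))
        rw_err.append(abs(series[origin] - series[origin + 1]))   # random walk: no change
    return np.mean(model_err) / np.mean(rw_err)                   # < 1 beats the benchmark

nim = np.cumsum(np.random.randn(120)) * 0.01 + 3.0                # toy NIM series
print(rolling_origin_rel_mae(nim))
```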
By: | Torsten Heinrich; Juan Sabuco; J. Doyne Farmer |
Abstract: | We develop an agent-based simulation of the catastrophe insurance and reinsurance industry and use it to study the problem of risk model homogeneity. The model simulates the balance sheets of insurance firms, who collect premiums from clients in return for insuring them against intermittent, heavy-tailed risks. Firms manage their capital and pay dividends to their investors, and use either reinsurance contracts or cat bonds to hedge their tail risk. The model generates plausible time series of profits and losses and recovers stylized facts, such as the insurance cycle and the emergence of asymmetric, long-tailed firm size distributions. We use the model to investigate the problem of risk model homogeneity. Under Solvency II, insurance companies are required to use only certified risk models. This has led to a situation in which only a few firms provide risk models, creating a systemic fragility to the errors in these models. We demonstrate that using too few models increases the risk of nonpayment and default while lowering profits for the industry as a whole. The presence of the reinsurance industry ameliorates the problem but does not remove it. Our results suggest that it would be valuable for regulators to incentivize model diversity. The framework we develop here provides a first step toward a simulation model of the insurance industry for testing policies and strategies for better capital management. |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1907.05954&r=all |
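A stylised sketch of the simulation loop described above follows: insurers collect premiums and absorb heavy-tailed catastrophe losses, defaulting when capital is exhausted. All parameter values, the Pareto tail and the default rule are illustrative assumptions rather than the authors' calibration.

```python
# Stylised agent-based loop for catastrophe insurers with heavy-tailed losses.
import numpy as np

rng = np.random.default_rng(1)
n_firms, capital = 20, np.full(20, 10.0)
premium, years = 1.0, 50
defaults = 0

for year in range(years):
    capital[capital > 0] += premium                           # premium income for solvent firms
    if rng.random() < 0.3:                                    # a catastrophe year
        losses = rng.pareto(a=1.5, size=n_firms)              # heavy-tailed losses
        capital -= losses
        newly_defaulted = (capital <= 0) & (capital + losses > 0)
        defaults += newly_defaulted.sum()

print(f"defaults over {years} years: {defaults} of {n_firms} firms")
```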
By: | Chen, Siyan; Desiderio, Saul |
Abstract: | As suggested by recent empirical evidence, one of the causes behind the widespread rise of inequality experienced by OECD countries in the last few decades may have been the increased flexibility of labor markets. The authors explore this hypothesis through the analysis of a stock-flow-consistent agent-based macroeconomic model able to reproduce several empirical regularities with good statistical precision. To this end they employ three different sensitivity analysis techniques, which indicate that increasing job contract duration (i.e. decreasing flexibility) has the effect of reducing income and wealth inequality. However, the authors also find that this effect is diminished by tight monetary policy and low credit supply. This result suggests that the final outcome of structural reforms aimed at changing labor flexibility can depend on the macroeconomic environment in which they are implemented. |
Keywords: | economic inequality, labor market flexibility, monetary policy, agent-based models, sensitivity analysis |
JEL: | C15 C63 D31 E50 J01 J41 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:ifwedp:201944&r=all |
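The inequality outcome the sensitivity analyses track can be summarised by a Gini coefficient. The sketch below computes it on two placeholder income distributions standing in for runs with short and long job contracts; the data are purely illustrative, not model output.

```python
# Gini coefficient on simulated placeholder income distributions.
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))                       # sort incomes ascending
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

rng = np.random.default_rng(0)
short_contracts = rng.lognormal(mean=3.0, sigma=0.9, size=5000)   # more dispersed incomes
long_contracts  = rng.lognormal(mean=3.0, sigma=0.6, size=5000)   # less dispersed incomes
print(gini(short_contracts), gini(long_contracts))
```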
By: | David Easley (Cornell University and EIEF); Christopher Rojas (Cornell University) |
Abstract: | We develop a dynamic matched sample estimation algorithm to distinguish peer influence and homophily effects on item adoption decisions in dynamic networks, with numerous items diffusing simultaneously. We infer preferences using a machine learning algorithm applied to previous adoption decisions, and we match agents using those inferred preferences. We show that ignoring previous adoption decisions leads to significantly overestimating the role of peer influence in the diffusion of information, mistakenly confounding influence-based contagion with diffusion driven by common preferences. Our matching-on-preferences algorithm with machine learning reduces the relative effect of peer influence on item adoption decisions in this network significantly more than matching on earlier adoption decisions or on other observable characteristics. We also show significant and intuitive heterogeneity in the relative effect of peer influence. |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:eie:wpaper:1912&r=all |
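The matching idea described above can be sketched directly: each treated agent (one exposed to an adopting friend) is paired with the control whose inferred preference vector is nearest, and the difference in adoption rates gives the influence estimate. All data and names below are synthetic placeholders, not the authors' estimator.

```python
# Matched-sample sketch: match on inferred preferences, compare adoption rates.
import numpy as np

rng = np.random.default_rng(0)
prefs = rng.standard_normal((500, 8))            # inferred preference vectors
treated = rng.random(500) < 0.3                  # exposed to an adopting friend
adopted = rng.random(500) < 0.2                  # adopted the item

controls = np.where(~treated)[0]
matched = []
for i in np.where(treated)[0]:
    d = np.linalg.norm(prefs[controls] - prefs[i], axis=1)
    matched.append(controls[np.argmin(d)])       # nearest-preference control

influence = adopted[treated].mean() - adopted[matched].mean()
print(f"matched estimate of peer influence: {influence:.3f}")
```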
By: | Barnhart, Bradley L.; Bostian, Moriah B.; Jha, Manoj K.; Kurkalova, Lyubov A. |
Keywords: | Research Methods/ Statistical Methods |
Date: | 2019–06–25 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea19:291209&r=all |
By: | Alexis Bogroff (University Paris 1 Panthéon-Sorbonne); Dominique Guégan (University Paris 1 Panthéon-Sorbonne; labEx ReFi France; University Ca’ Foscari Venice) |
Abstract: | An extensive list of risks relating to big data frameworks and their use in artificial intelligence models is provided, along with measurements and implementable solutions. Bias, interpretability and ethics are studied in depth, with several interpretations from the point of view of developers, companies and regulators. Our reflections suggest that fragmented frameworks increase the risks of model misspecification, opacity and bias in the results. Domain experts and statisticians need to be involved in the whole process, as the business objective must drive each decision from the data extraction step to the final actionable prediction. We propose a holistic and original approach that takes into account the risks encountered throughout the implementation of artificial intelligence systems, from the choice of the data and the selection of the algorithm to the decision making. |
Keywords: | Artificial Intelligence, Bias, Big Data, Ethics, Governance, Interpretability, Regulation, Risk |
JEL: | C4 C5 C6 C8 D8 G28 G38 K2 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ven:wpaper:2019:19&r=all |