on Computational Economics
Issue of 2020‒08‒31
24 papers chosen by
By: | Alexandre Carbonneau |
Abstract: | This study presents a deep reinforcement learning approach for global hedging of long-term financial derivatives. A setup similar to that of Coleman et al. (2007) is considered, with the risk management of lookback options embedded in guarantees of variable annuities with ratchet features. The deep hedging algorithm of Buehler et al. (2019a) is applied to optimize neural networks representing global hedging policies with both quadratic and non-quadratic penalties. To the best of the author's knowledge, this is the first paper to present an extensive benchmarking of global policies for long-term contingent claims using various hedging instruments (e.g. the underlying and standard options) and in the presence of jump risk for equity. Monte Carlo experiments demonstrate the vast superiority of non-quadratic global hedging, as it simultaneously yields downside risk metrics two to three times smaller than the best benchmarks and significant hedging gains. Analyses show that the neural networks are able to effectively adapt their hedging decisions to different penalties and to stylized facts of risky asset dynamics solely by experiencing simulations of a financial market exhibiting these features. Numerical results also indicate that non-quadratic global policies are significantly more geared towards being long equity risk, which entails earning the equity risk premium. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.15128&r=all |
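The penalty choice at the heart of this comparison can be illustrated with a toy static-hedging experiment. The sketch below uses illustrative GBM parameters and a one-parameter policy tuned by grid search — not the paper's neural-network setup: a quadratic penalty punishes gains and shortfalls symmetrically, while a semi-quadratic penalty punishes shortfalls only.

```python
import math, random

random.seed(0)

# Toy market: terminal stock prices under GBM (illustrative parameters).
S0, K, mu, sigma, T, n = 100.0, 100.0, 0.06, 0.2, 1.0, 10000
ST = [S0 * math.exp((mu - 0.5 * sigma ** 2) * T
                    + sigma * math.sqrt(T) * random.gauss(0, 1)) for _ in range(n)]

def hedge_errors(theta):
    # Terminal error of statically holding `theta` shares against a short call.
    return [max(s - K, 0.0) - theta * (s - S0) for s in ST]

def quadratic(errs):       # symmetric penalty: gains and shortfalls treated alike
    return sum(e * e for e in errs) / len(errs)

def semi_quadratic(errs):  # non-quadratic penalty: only shortfalls are penalized
    return sum(max(e, 0.0) ** 2 for e in errs) / len(errs)

grid = [i / 100 for i in range(101)]
theta_q = min(grid, key=lambda t: quadratic(hedge_errors(t)))
theta_sq = min(grid, key=lambda t: semi_quadratic(hedge_errors(t)))
```

The deep hedging algorithm replaces the single parameter with a neural network mapping state variables to positions and replaces the grid search with stochastic gradient descent on the same kind of simulated-penalty objective.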
By: | Linwei Hu; Jie Chen; Joel Vaughan; Hanyu Yang; Kelly Wang; Agus Sudjianto; Vijayan N. Nair |
Abstract: | This article provides an overview of Supervised Machine Learning (SML) with a focus on applications to banking. The SML techniques covered include Bagging (Random Forest or RF), Boosting (Gradient Boosting Machine or GBM) and Neural Networks (NNs). We begin with an introduction to ML tasks and techniques. This is followed by a description of: i) tree-based ensemble algorithms including Bagging with RF and Boosting with GBMs, ii) Feedforward NNs, iii) a discussion of hyper-parameter optimization techniques, and iv) machine learning interpretability. The paper concludes with a comparison of the features of different ML algorithms. Examples taken from credit risk modeling in banking are used throughout the paper to illustrate the techniques and interpret the results of the algorithms. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2008.04059&r=all |
By: | Rongju Zhang (Monash University [Melbourne]); Nicolas Langrené (CSIRO - Commonwealth Scientific and Industrial Research Organisation [Canberra]); Yu Tian (Monash University [Melbourne]); Zili Zhu (CSIRO - Commonwealth Scientific and Industrial Research Organisation [Canberra]); Fima Klebaner (Monash University [Melbourne]); Kais Hamza (Monash University [Melbourne]) |
Abstract: | We present a simulation-and-regression method for solving dynamic portfolio allocation problems in the presence of general transaction costs, liquidity costs and market impacts. This method extends the classical least squares Monte Carlo algorithm to incorporate switching costs, corresponding to transaction costs and transient liquidity costs, as well as multiple endogenous state variables, namely the portfolio value and the asset prices subject to permanent market impacts. To do so, we improve the accuracy of the control randomization approach in the case of discrete controls, and propose a global iteration procedure to further improve the allocation estimates. We validate our numerical method by solving a realistic cash-and-stock portfolio with a power-law liquidity model. We quantify the certainty equivalent losses associated with ignoring liquidity effects, and illustrate how our dynamic allocation protects the investor's capital under illiquid market conditions. Lastly, we analyze, under different liquidity conditions, the sensitivities of certainty equivalent returns and optimal allocations with respect to trading volume, stock price volatility, initial investment amount, risk-aversion level and investment horizon. |
Keywords: | dynamic portfolio selection,portfolio optimization,transaction cost,liquidity cost,market impact,optimal stochastic control,switching cost,least squares Monte Carlo,simulation-and-regression |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02909207&r=all |
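The regression step at the core of least squares Monte Carlo can be sketched in a few lines. A toy one-dimensional state and a linear basis are assumed for illustration; the paper's method adds switching costs, multiple endogenous states, and control randomization on top of this building block.

```python
import random

random.seed(1)

# LSMC regression step: approximate the continuation value E[V_{t+1} | X_t]
# by regressing simulated next-step values on functions of the current state.
n = 5000
X = [random.uniform(50.0, 150.0) for _ in range(n)]           # asset price today
V_next = [0.01 * x * x + random.gauss(0.0, 50.0) for x in X]  # noisy next-step values

def fit_linear(xs, ys):
    # Ordinary least squares for y ~ a + b*x (closed form).
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

a, b = fit_linear(X, V_next)
cont_value = a + b * 100.0   # estimated continuation value at X_t = 100
```

A full solver repeats this fit backwards through time and, at each step, compares the fitted continuation value against the value of each admissible trade — this is where the switching-cost and iteration refinements of the paper enter.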
By: | Runshan Fu; Yan Huang; Param Vir Singh |
Abstract: | Big data and machine learning (ML) algorithms are key drivers of many fintech innovations. While it may be obvious that replacing humans with machines would increase efficiency, it is not clear whether and where machines can make better decisions than humans. We answer this question in the context of crowd lending, where decisions are traditionally made by a crowd of investors. Using data from Prosper.com, we show that a reasonably sophisticated ML algorithm predicts listing default probability more accurately than crowd investors. The dominance of the machine over the crowd is more pronounced for highly risky listings. We then use the machine to make investment decisions, and find that the machine benefits not only the lenders but also the borrowers. When machine prediction is used to select loans, it leads to a higher rate of return for investors and more funding opportunities for borrowers with few alternative funding options. We also find suggestive evidence that the machine is biased with respect to gender and race even when it does not use gender and race information as input. We propose a general and effective "debiasing" method that can be applied to any prediction-focused ML application, and demonstrate its use in our context. We show that the debiased ML algorithm, which suffers from lower prediction accuracy, still leads to better investment decisions compared with the crowd. These results indicate that ML can help crowd lending platforms better fulfill the promise of providing access to financial resources to otherwise underserved individuals and ensure fairness in the allocation of these resources. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2008.04068&r=all |
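The abstract does not spell out the debiasing method. As a minimal illustration of the general idea — post-processing scores so that protected groups are treated alike — one can equalize group mean scores; this adjustment is hypothetical and stands in for, but is not, the authors' method.

```python
from statistics import mean

def debias_scores(scores, groups):
    # Equalize group means: subtract each group's average score, add back the
    # overall average. A simple post-hoc adjustment, NOT the paper's method.
    overall = mean(scores)
    gmeans = {g: mean(s for s, gg in zip(scores, groups) if gg == g)
              for g in set(groups)}
    return [s - gmeans[g] + overall for s, g in zip(scores, groups)]

scores = [0.30, 0.40, 0.10, 0.20]   # hypothetical predicted default probabilities
groups = ["a", "a", "b", "b"]       # hypothetical protected attribute
adj = debias_scores(scores, groups)
```

As in the paper's finding, any such adjustment trades some raw predictive accuracy for fairness across groups.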
By: | Omid Safarzadeh |
Abstract: | This research investigates the efficiency of online learning algorithms in generating trading signals. Technical indicators based on high-frequency stock prices were employed to generate trading signals through an ensemble of Random Forests; similarly, a Kalman Filter was used to signal trading positions. Comparing time-series methods with machine learning methods, the results indicate the superiority of the Kalman Filter over Random Forests for online-learning prediction of stock prices. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.11098&r=all |
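A scalar Kalman filter of the kind usable for such signals fits in a few lines. The sketch below assumes a random-walk price state and illustrative noise variances `q` and `r`; a trading rule would then be derived from the filtered estimate.

```python
import random

def kalman_filter(observations, q=0.01, r=1.0):
    # Scalar Kalman filter for a random-walk state observed with noise:
    # q is the process-noise variance, r the observation-noise variance.
    x, p = observations[0], 1.0
    estimates = [x]
    for z in observations[1:]:
        p = p + q                # predict: uncertainty grows by process noise
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update toward the new observation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(2)
obs = [100.0 + random.gauss(0.0, 1.0) for _ in range(200)]  # noisy price ticks
est = kalman_filter(obs)
```

A simple signal rule could then compare `est` against the raw price, e.g. going long when the observed price falls below the filtered estimate.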
By: | Hao Tang; Anurag Pal; Lu-Feng Qiao; Tian-Yu Wang; Jun Gao; Xian-Min Jin |
Abstract: | Collateralized debt obligations (CDOs) have been among the most commonly used structured financial products and are intensively studied in quantitative finance. By setting the asset pool into different tranches, a CDO effectively works out and redistributes credit risks and returns to meet the risk preferences of different tranche investors. Copula models of various kinds are normally used for pricing CDOs, and Monte Carlo simulation is typically required to obtain their numerical solution. Here we implement two typical CDO models, the single-factor Gaussian copula model and the Normal Inverse Gaussian copula model, and by applying the conditional independence approach, we load each model's distribution into quantum circuits. We then apply quantum amplitude estimation as an alternative to Monte Carlo simulation for CDO pricing. We demonstrate the quantum computation results using IBM Qiskit. Our work addresses a useful task in financial instrument pricing, significantly broadening the application scope for quantum computing in finance. |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2008.04110&r=all |
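The classical Monte Carlo baseline that quantum amplitude estimation is meant to replace looks as follows for the single-factor Gaussian copula (illustrative pool parameters, zero recovery, equal notionals — not the paper's circuit construction):

```python
import math, random

random.seed(3)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    # Invert the normal CDF by bisection (keeps the sketch dependency-free).
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def tranche_loss(p=0.05, rho=0.3, n_names=100, attach=0.03, detach=0.07,
                 paths=5000):
    # Name i defaults when rho*M + sqrt(1-rho^2)*Z_i < Phi^{-1}(p), with a
    # common factor M shared across the pool and idiosyncratic Z_i.
    c = norm_ppf(p)
    w = math.sqrt(1.0 - rho * rho)
    total = 0.0
    for _ in range(paths):
        m = random.gauss(0.0, 1.0)
        defaults = sum(1 for _ in range(n_names)
                       if rho * m + w * random.gauss(0.0, 1.0) < c)
        pool_loss = defaults / n_names
        total += min(max(pool_loss - attach, 0.0), detach - attach)
    return total / paths / (detach - attach)   # expected fractional tranche loss

el = tranche_loss()
```

Amplitude estimation targets exactly this expectation, with a quadratic speed-up in the number of samples relative to the Monte Carlo loop above.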
By: | Tetsuya Kaji; Elena Manresa; Guillaume Pouliot |
Abstract: | We propose a new simulation-based estimation method, adversarial estimation, for structural models. The estimator is formulated as the solution to a minimax problem between a generator (which generates synthetic observations using the structural model) and a discriminator (which classifies if an observation is synthetic). The discriminator maximizes the accuracy of its classification while the generator minimizes it. We show that, with a sufficiently rich discriminator, the adversarial estimator attains parametric efficiency under correct specification and the parametric rate under misspecification. We advocate the use of a neural network as a discriminator that can exploit adaptivity properties and attain fast rates of convergence. We apply our method to the elderly's saving decision model and show that including gender and health profiles in the discriminator uncovers the bequest motive as an important source of saving across the wealth distribution, not only for the rich. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.06169&r=all |
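The minimax logic can be sketched with a deliberately crude discriminator: a best single-threshold classifier stands in for the neural network, and a one-parameter Gaussian location model stands in for the structural model. All choices here are illustrative, not the authors' estimator.

```python
import random

random.seed(4)

# "Observed" data from the unknown structural model: here N(2, 1).
real = [random.gauss(2.0, 1.0) for _ in range(2000)]

def best_accuracy(real, fake):
    # Discriminator: best single-threshold classifier (a crude stand-in for
    # the flexible neural-network discriminator advocated in the paper).
    labeled = sorted([(x, 1) for x in real] + [(x, 0) for x in fake])
    n_r, n = len(real), len(real) + len(fake)
    real_above, fake_below = n_r, 0   # rule: classify "real" when x > threshold
    best = max(real_above + fake_below, n - real_above - fake_below) / n
    for x, is_real in labeled:
        if is_real:
            real_above -= 1
        else:
            fake_below += 1
        correct = real_above + fake_below
        best = max(best, correct / n, (n - correct) / n)
    return best

def adversarial_estimate(grid):
    # Generator: simulate the model at each candidate parameter and keep the
    # value the discriminator finds hardest to tell apart from the data.
    return min(grid, key=lambda th: best_accuracy(
        real, [random.gauss(th, 1.0) for _ in range(2000)]))

theta_hat = adversarial_estimate([i / 10 for i in range(41)])  # grid 0.0 .. 4.0
```

When the synthetic samples match the data, no discriminator can do much better than coin-flipping, so the accuracy-minimizing parameter lands near the truth.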
By: | Monica Azqueta-Gavaldon; Gonzalo Azqueta-Gavaldon; Inigo Azqueta-Gavaldon; Andres Azqueta-Gavaldon |
Abstract: | This project aims at creating an investment device to help investors determine which real estate units have a higher return on investment in Madrid. To do so, we gather data from Idealista.com, a real estate web page with millions of real estate units across Spain, Italy and Portugal. In this preliminary version, we present the road map of how we gather the data; descriptive statistics of the 8,121 real estate units gathered (rental and sale); a return index based on the difference in prices of rental and sale units (per neighbourhood and size); and machine learning algorithms for rental real estate price prediction. |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2008.02629&r=all |
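A return index comparing rental and sale prices per neighbourhood can be as simple as a gross rental yield. The listings, field layout, and neighbourhood names below are hypothetical; the abstract does not specify the exact index construction.

```python
from statistics import median

# Hypothetical listings: (neighbourhood, monthly_rent or None, sale_price or None).
listings = [
    ("Centro", 1200, None), ("Centro", None, 300000), ("Centro", 1100, None),
    ("Centro", None, 280000), ("Usera", 800, None), ("Usera", None, 150000),
    ("Usera", 750, None), ("Usera", None, 160000),
]

def gross_yield_index(listings):
    # Return index per neighbourhood: median annual rent / median sale price.
    idx = {}
    for hood in {h for h, _, _ in listings}:
        rents = [r for h, r, _ in listings if h == hood and r is not None]
        sales = [s for h, _, s in listings if h == hood and s is not None]
        if rents and sales:
            idx[hood] = 12 * median(rents) / median(sales)
    return idx

index = gross_yield_index(listings)
```

A higher value flags neighbourhoods where rental income is large relative to purchase price, i.e. candidates for a higher return on investment.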
By: | Juri Hinz; Igor Grigoryev; Alexander Novikov |
Abstract: | The economic viability of a mining project depends on its efficient exploration, which requires a prediction of worthwhile ore in a mine deposit. In this work, we apply the so-called LASSO methodology to estimate mineral concentration within unexplored areas. Our methodology outperforms traditional techniques not only in terms of logical consistency, but potentially also in cost reduction. Our approach is illustrated by a full source code listing and a detailed discussion of its advantages and limitations. |
Keywords: | prediction; artificial intelligence; machine learning; LASSO; cross-validation |
Date: | 2020–03–01 |
URL: | http://d.repec.org/n?u=RePEc:uts:rpaper:407&r=all |
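A from-scratch sketch of the LASSO via cyclic coordinate descent, applied to toy "drill-hole" data in which concentration depends on only two of four site features. The data and the tuning parameter are illustrative, not the paper's geostatistical setup.

```python
import random

random.seed(5)

def lasso_cd(X, y, lam, n_iter=200):
    # LASSO by cyclic coordinate descent with soft-thresholding.
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals with feature j's contribution removed
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            if rho > lam:
                beta[j] = (rho - lam) / z
            elif rho < -lam:
                beta[j] = (rho + lam) / z
            else:
                beta[j] = 0.0      # small partial correlations are zeroed out
    return beta

# Toy "drill-hole" data: concentration depends on two of four site features.
n, p = 100, 4
X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
y = [2.0 * row[0] - 1.5 * row[2] + random.gauss(0.0, 0.1) for row in X]
beta = lasso_cd(X, y, lam=8.0)
```

The zeroing of irrelevant coefficients is the variable-selection property that, per the abstract, is tuned in practice by cross-validation over `lam`.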
By: | Steen Nielsen (Department of Economics and Business Economics, Aarhus University) |
Abstract: | Not only is the role of data changing in a most dramatic way, but also the way we can handle and use the data through a number of new technologies such as Machine Learning (ML) and Artificial Intelligence (AI). The changes, their speed and scale, as well as their impact on almost every aspect of daily life and, of course, on Management Accounting are almost unbelievable. The term ‘data’ in this context means business data in the broadest possible sense. ML teaches computers to do what comes naturally to humans and decision makers: that is, to learn from experience. ML and AI for management accountants have only been sporadically discussed within the last 5-10 years, even though these concepts have been used for a long time now within other business fields such as logistics and finance. ML and AI are extensions of Business Analytics. This paper discusses how machine learning will provide new opportunities and implications for the management accountants in the future. First, it was found that many classical areas and topics within Management Accounting and Performance Management are natural candidates for ML and AI. The true value of the paper lies in making practitioners and researchers more aware of the possibilities of ML for Management Accounting, thereby making the management accountants a real value driver for the company. |
Keywords: | Management accounting, machine learning, algorithms, decisions, analytics, management accountant, business translator, performance management |
JEL: | C15 M41 |
Date: | 2020–08–06 |
URL: | http://d.repec.org/n?u=RePEc:aah:aarhec:2020-09&r=all |
By: | Rongju Zhang (Monash University [Melbourne]); Nicolas Langrené (CSIRO - Commonwealth Scientific and Industrial Research Organisation [Canberra]); Yu Tian (Monash University [Melbourne]); Zili Zhu (CSIRO - Commonwealth Scientific and Industrial Research Organisation [Canberra]); Fima Klebaner (Monash University [Melbourne]); Kais Hamza (Monash University [Melbourne]) |
Abstract: | In this paper, we propose a novel investment strategy for portfolio optimization problems. The proposed strategy maximizes the expected portfolio value bounded within a targeted range, composed of a conservative lower target representing a need for capital protection and a desired upper target representing an investment goal. This strategy favorably shapes the entire probability distribution of returns, as it simultaneously seeks a desired expected return, cuts off downside risk and implicitly caps volatility and higher moments. To illustrate the effectiveness of this investment strategy, we study a multiperiod portfolio optimization problem with transaction costs and develop a two-stage regression approach that improves the classical least squares Monte Carlo (LSMC) algorithm when dealing with difficult payoffs, such as highly concave, abruptly changing or discontinuous functions. Our numerical results show substantial improvements over the classical LSMC algorithm for both the constant relative risk-aversion (CRRA) utility approach and the proposed skewed target range strategy (STRS). Our numerical results illustrate the ability of the STRS to contain the portfolio value within the targeted range. When compared with the CRRA utility approach, the STRS achieves a similar mean-variance efficient frontier while delivering a better downside risk-return trade-off. |
Keywords: | target-based portfolio optimization,alternative performance measure,multiperiod portfolio optimization,least squares Monte Carlo,two-stage regression |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02909342&r=all |
By: | Romain Gauchon (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Stéphane Loisel (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Jean-Louis Rullière (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon) |
Abstract: | On paper, prevention appears to be a good complement to health insurance. However, its implementation is often costly. To maximize the impact and efficiency of prevention plans, these should target particular groups of policyholders. In this article, we propose a way of clustering policyholders that could serve as a starting point for the targeting of prevention plans. This two-step method classifies mainly on policyholder health consumption. This dimension is first reduced using a nonnegative matrix factorization (NMF) algorithm, producing intermediate health-product clusters. We then cluster using Kohonen's map algorithm. This leads to a natural visualization of the results, allowing the simple comparison of results from different databases. We apply our method to two real health-insurer datasets. We carry out a number of tests (including tests on a text-mining database) of method stability and clustering ability. The method is shown to be stable, easily understandable, and able to cluster most policyholders efficiently. |
Keywords: | Kohonen self-organizing map,Prevention,Nonnegative Matrix Factorization (NMF),Health insurance claims databases,Clustering Algorithm |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02156058&r=all |
By: | Marc Chataigner; Stéphane Crépey; Matthew Dixon |
Abstract: | Deep learning for option pricing has emerged as a novel methodology for fast computations with applications in calibration and computation of Greeks. However, many of these approaches do not enforce any no-arbitrage conditions, and the subsequent local volatility surface is never considered. In this article, we develop a deep learning approach for interpolation of European vanilla option prices which jointly yields the full surface of local volatilities. We demonstrate the modification of the loss function or the feed forward network architecture to enforce (hard constraints approach) or favor (soft constraints approach) the no-arbitrage conditions and we specify the experimental design parameters that are needed for adequate performance. A novel component is the use of the Dupire formula to enforce bounds on the local volatility associated with option prices, during the network fitting. Our methodology is benchmarked numerically on real datasets of DAX vanilla options. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.10462&r=all |
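The Dupire formula used to constrain the network can be checked on a synthetic Black-Scholes surface, where the recovered local volatility must equal the flat input volatility. Zero rates are assumed, and finite differences stand in for the derivatives a fitted network would supply.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, sigma):
    # Black-Scholes call price, zero rates and dividends.
    d1 = (math.log(S / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    return S * norm_cdf(d1) - K * norm_cdf(d1 - sigma * math.sqrt(T))

def dupire_local_vol(S, K, T, sigma, dK=0.5, dT=1e-3):
    # Dupire (zero rates): sigma_loc^2 = (dC/dT) / (0.5 * K^2 * d2C/dK2),
    # both derivatives taken by central finite differences.
    dCdT = (bs_call(S, K, T + dT, sigma) - bs_call(S, K, T - dT, sigma)) / (2 * dT)
    d2CdK2 = (bs_call(S, K + dK, T, sigma) - 2.0 * bs_call(S, K, T, sigma)
              + bs_call(S, K - dK, T, sigma)) / dK ** 2
    return math.sqrt(dCdT / (0.5 * K ** 2 * d2CdK2))

lv = dupire_local_vol(S=100.0, K=100.0, T=1.0, sigma=0.2)
```

The no-arbitrage conditions the paper enforces amount to keeping the numerator and denominator of this ratio non-negative everywhere on the fitted surface.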
By: | Xue-Zhong He (Finance Discipline Group, UTS Business School, University of Technology Sydney); Shen Lin |
Abstract: | Information-based reinforcement learning is effective for trading and price discovery in limit order markets. It helps traders to learn a statistical equilibrium in which traders' expected payoffs and out-of-sample payoffs are highly correlated. Consistent with rational equilibrium models, the order choice between buy and sell and between market and limit orders for informed traders mainly depends on their information about fundamental value, while uninformed traders trade on a short-run momentum of the informed market orders. The learning increases the liquidity supply of uninformed traders and the liquidity consumption of informed traders, generating a diagonal effect on order submission and hump-shaped order books, and improving traders' profitability and price discovery. The results shed light on the market practice of using machine learning in limit order markets. |
Keywords: | Reinforcement Learning; Order Book Information; Limit Orders; Momentum Trading |
JEL: | G14 C63 D82 D83 |
Date: | 2019–02–01 |
URL: | http://d.repec.org/n?u=RePEc:uts:rpaper:403&r=all |
By: | Philippe Goulet Coulombe; Maxime Leroux; Dalibor Stevanovic; Stéphane Surprenant |
Abstract: | From a purely predictive standpoint, rotating the predictors' matrix in a low-dimensional linear regression setup does not alter predictions. However, when the forecasting technology either uses shrinkage or is non-linear, it does. This is precisely the fabric of the machine learning (ML) macroeconomic forecasting environment. Pre-processing of the data translates to an alteration of the regularization – explicit or implicit – embedded in ML algorithms. We review old transformations and propose new ones, then empirically evaluate their merits in a substantial pseudo-out-of-sample exercise. It is found that traditional factors should almost always be included in the feature matrix, and that moving average rotations of the data can provide important gains for various forecasting targets. |
Keywords: | Machine Learning,Big Data,Forecasting |
Date: | 2020–08–04 |
URL: | http://d.repec.org/n?u=RePEc:cir:cirwor:2020s-42&r=all |
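A moving-average rotation of the kind evaluated here replaces the lag matrix with averages over growing windows — an invertible linear map that leaves OLS predictions unchanged but alters what shrinkage penalizes. The toy series and the particular window scheme below are illustrative assumptions.

```python
def ma_rotation(series, n_lags):
    # Build the lag matrix [x_{t-1}, ..., x_{t-n_lags}] and its moving-average
    # rotation, where column k is the average of the first k lags.
    rows_lags, rows_ma = [], []
    for t in range(n_lags, len(series)):
        lags = [series[t - k] for k in range(1, n_lags + 1)]
        mas = [sum(lags[:k]) / k for k in range(1, n_lags + 1)]
        rows_lags.append(lags)
        rows_ma.append(mas)
    return rows_lags, rows_ma

x = [float(i % 7) for i in range(30)]   # toy monthly indicator
lags, mas = ma_rotation(x, 4)
```

Under ridge or LASSO, penalizing the `mas` columns equally is not the same as penalizing the `lags` columns equally — this is the implicit-regularization channel the paper exploits.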
By: | Andrii Babii; Ryan T. Ball; Eric Ghysels; Jonas Striaukas |
Abstract: | This paper introduces structured machine learning regressions for prediction and nowcasting with panel data consisting of series sampled at different frequencies. Motivated by the empirical problem of predicting corporate earnings for a large cross-section of firms with macroeconomic, financial, and news time series sampled at different frequencies, we focus on the sparse-group LASSO regularization. This type of regularization can take advantage of mixed-frequency time series panel data structures, and we find that it empirically outperforms unstructured machine learning methods. We obtain oracle inequalities for the pooled and fixed effects sparse-group LASSO panel data estimators, recognizing that financial and economic data exhibit heavier than Gaussian tails. To that end, we leverage a novel Fuk-Nagaev concentration inequality for panel data consisting of heavy-tailed $\tau$-mixing processes, which may be of independent interest in other high-dimensional panel data settings. |
Date: | 2020–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2008.03600&r=all |
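The sparse-group LASSO penalty combines an l1 term with groupwise l2 terms; its proximal operator — the building block of the usual solvers — is elementwise soft-thresholding followed by group shrinkage. This is the standard construction sketched on toy numbers, not the authors' panel estimator.

```python
import math

def prox_sparse_group_lasso(beta, groups, lam, alpha):
    # Prox of lam * (alpha * ||b||_1 + (1 - alpha) * sum_g ||b_g||_2):
    # elementwise soft-thresholding, then groupwise shrinkage of each block.
    t1, t2 = lam * alpha, lam * (1.0 - alpha)
    soft = [math.copysign(max(abs(b) - t1, 0.0), b) for b in beta]
    res = [0.0] * len(beta)
    for g in groups:                  # groups: lists of coefficient indices
        norm = math.sqrt(sum(soft[i] ** 2 for i in g))
        scale = max(1.0 - t2 / norm, 0.0) if norm > 0 else 0.0
        for i in g:
            res[i] = scale * soft[i]  # whole group zeroed when its norm is small
    return res

beta = [3.0, -0.5, 0.2, 0.1]
res = prox_sparse_group_lasso(beta, groups=[[0, 1], [2, 3]], lam=1.0, alpha=0.5)
```

In the mixed-frequency setting, a "group" collects all high-frequency lags of one predictor, so the group term selects predictors while the l1 term sparsifies lags within them.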
By: | Hainaut, Donatien; Denuit, Michel |
Date: | 2020–01–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2020001&r=all |
By: | Hainaut, Donatien; Denuit, Michel |
Date: | 2019–01–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2019026&r=all |
By: | Leluc, Remi; Portier, Francois; Segers, Johan |
Date: | 2019–01–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2019015&r=all |
By: | Aart Gerritsen; Bas Jacobs; Alexandra Victoria Rusu; Kevin Spiritus |
Abstract: | There is increasing empirical evidence that people systematically differ in their rates of return on capital. We derive optimal non-linear taxes on labor and capital income in the presence of such return heterogeneity. We allow for two distinct reasons why returns are heterogeneous: because individuals with higher ability obtain higher returns on their savings, and because wealthier individuals achieve higher returns due to scale effects in wealth management. In both cases, a strictly positive tax on capital income is part of a Pareto-efficient dual income tax structure. We write optimal tax rates on capital income in terms of sufficient statistics and find that they are increasing in the degree of return heterogeneity. Numerical simulations for empirically plausible return heterogeneity suggest that optimal marginal tax rates on capital income are positive, substantial, and increasing in capital income. |
Keywords: | optimal taxation, capital taxation, heterogeneous returns |
JEL: | H21 H24 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_8395&r=all |
By: | Kristoffer Persson |
Abstract: | This paper investigates the relationship between economic media sentiment and individuals' expectations and perceptions of economic conditions. We test whether economic media sentiment Granger-causes individuals' expectations and opinions concerning economic conditions, controlling for macroeconomic variables. We develop a measure of economic media sentiment using a supervised machine learning method on a data set of Swedish economic media during the period 1993-2017. We classify the sentiment of 179,846 media items, stemming from 1,071 unique media outlets, and use the number of news items with positive and negative sentiment to construct a time series index of economic media sentiment. Our results show that this index Granger-causes individuals' perception of macroeconomic conditions. This indicates that the way the economic media selects and frames macroeconomic news matters for individuals' aggregate perception of macroeconomic reality. |
Date: | 2020–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2007.13823&r=all |
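The index construction from classified items reduces to a signed share per period. The counts below are hypothetical; the Granger-causality tests would then be run on this series against the survey-expectation series.

```python
def sentiment_index(monthly_counts):
    # Index per period: (positive - negative) / total classified items.
    return [(pos - neg) / (pos + neg) if pos + neg else 0.0
            for pos, neg in monthly_counts]

# Hypothetical monthly counts of (positive, negative) media items.
counts = [(120, 80), (90, 110), (150, 50), (100, 100)]
index = sentiment_index(counts)
```

Normalizing by the total item count keeps the index comparable across periods with different media volumes.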
By: | Daniel Wochner (KOF Swiss Economic Institute, ETH Zurich, Switzerland) |
Abstract: | Machine Learning models are often considered to be “black boxes” that provide only little room for the incorporation of theory (cf. e.g. Mukherjee, 2017; Veltri, 2017). This article proposes so-called Dynamic Factor Trees (DFT) and Dynamic Factor Forests (DFF) for macroeconomic forecasting, which synthesize the recent machine learning, dynamic factor model and business cycle literature within a unified statistical machine learning framework for model-based recursive partitioning proposed in Zeileis, Hothorn and Hornik (2008). DFTs and DFFs are non-linear and state-dependent forecasting models, which reduce to the standard Dynamic Factor Model (DFM) as a special case and allow us to embed theory-led factor models in powerful tree-based machine learning ensembles conditional on the state of the business cycle. The out-of-sample forecasting experiment for short-term U.S. GDP growth predictions combines three distinct FRED datasets, yielding a balanced panel with over 375 indicators from 1967 to 2018 (FRED, 2019; McCracken & Ng, 2016, 2019a, 2019b). Our results provide strong empirical evidence in favor of the proposed DFTs and DFFs and show that they significantly improve the predictive performance of DFMs by almost 20% in terms of MSFE. Interestingly, the improvements materialize in both expansionary and recessionary periods, suggesting that DFTs and DFFs tend to perform not only sporadically but systematically better than DFMs. Our findings are fairly robust to a number of sensitivity tests and open exciting avenues for future research. |
Keywords: | Forecasting, Machine Learning, Regression Trees and Forests, Dynamic Factor Model, Business Cycles, GDP Growth, United States |
JEL: | C45 C51 C53 E32 O47 |
Date: | 2020–05 |
URL: | http://d.repec.org/n?u=RePEc:kof:wpskof:20-472&r=all |
By: | Ohdoi, Ryoji |
Abstract: | In some classes of macroeconomic models with financial frictions, an adverse financial shock successfully explains a drop in GDP, but simultaneously induces a stock price boom. The latter theoretical result is not consistent with data from actual financial crises. This study develops a simple macroeconomic model featuring a banking sector, financial frictions, and R&D-led endogenous growth to examine the impacts of an adverse financial shock to banks on firms' R&D investments and equity prices. Both the analytical and numerical investigations show that a shock that hinders the banks' financial intermediary function can be a key to generating both a prolonged recession and a drop in the firms' equity prices. |
Keywords: | Banks; Endogenous growth; Financial frictions; Financial shocks; Quality-ladder growth model |
JEL: | E32 E44 G01 O31 O41 |
Date: | 2020–07–22 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:101993&r=all |
By: | Szekeres, Szabolcs |
Abstract: | A numerical model is used to experimentally compute certainty equivalent discount rates (CERs) of risk-neutral and risk-averse decision makers. Investors are characterized by utility functions of the constant-intertemporal-elasticity-of-substitution (CIES) type. Stochastic interest rates are generated using a Cox, Ingersoll & Ross (CIR) type model, calibrated to 1992-2017 US three-month Treasury Bill rates. The paper replicates empirical studies providing evidence for declining discount rates (DDRs) and tests claims regarding risk-averse CERs in a descriptive discounting context. It is shown that DDRs as proposed by Weitzman are based on a fallacy, and that the reviewed papers seeking empirical evidence of DDRs repeat the mistake. Risk-averse CERs can decline with time because of portfolio effects. If these effects are low, risk-averse CERs are slightly lower than risk-neutral ones but not secularly declining. |
Keywords: | Weitzman-Gollier puzzle; declining discount rates; discounting |
JEL: | D61 H43 |
Date: | 2020–08–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:102233&r=all |
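The certainty-equivalent discount rate of a risk-neutral decision maker is the flat rate that matches the expected stochastic discount factor, and it can be computed from simulated CIR paths. The parameters below are illustrative, not the paper's Treasury-Bill calibration, and a simple Euler scheme with truncation at zero is assumed.

```python
import math, random

random.seed(7)

def cir_discount_factors(r0, kappa, theta, sigma, T, steps, n):
    # Euler scheme for the CIR model dr = kappa*(theta - r)dt + sigma*sqrt(r)dW,
    # truncating r at zero inside the square root; returns exp(-integral of r).
    dt = T / steps
    out = []
    for _ in range(n):
        r, integral = r0, 0.0
        for _ in range(steps):
            integral += r * dt
            r += (kappa * (theta - r) * dt
                  + sigma * math.sqrt(max(r, 0.0) * dt) * random.gauss(0.0, 1.0))
        out.append(math.exp(-integral))
    return out

def cer(discount_factors, T):
    # Certainty-equivalent discount rate of a risk-neutral decision maker:
    # the flat rate matching the expected stochastic discount factor.
    return -math.log(sum(discount_factors) / len(discount_factors)) / T

rates = {T: cer(cir_discount_factors(0.03, 0.2, 0.03, 0.05, T, T * 20, 1000), T)
         for T in (10, 40)}
```

With risk-neutral CERs in hand, the same simulated discount factors can be rerun through a CIES utility to obtain risk-averse CERs — the comparison at the heart of the paper's experiments.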