nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒08‒24
twenty-one papers chosen by



  1. A Novel Ensemble Deep Learning Model for Stock Prediction Based on Stock Prices and News By Yang Li; Yi Pan
  2. POSA: Policy implementation sensitivity analysis By Bauermann, Tom; Roos, Michael W. M.; Schaff, Frederik
  3. Relative wealth concerns with partial information and heterogeneous priors By Chao Deng; Xizhi Su; Chao Zhou
  4. Capturing Key Energy and Emission Trends in CGE models. Assessment of Status and Remaining Challenges By Taran Fæhn; Gabriel Bachner; Robert Beach; Jean Chateau; Shinichiro Fujimori; Madanmohan Ghosh; Meriem Hamdi-Cherif; Elisa Lanzi; Sergey Paltsev; Toon Vandyck; Bruno Cunha; Rafael Garaffa; Karl Steininger
  5. Deep xVA solver - A neural network based counterparty credit risk management framework By Alessandro Gnoatto; Athena Picarelli; Christoph Reisinger
  6. Hybrid ARFIMA Wavelet Artificial Neural Network Model for DJIA Index Forecasting By Heni Boubaker; Giorgio Canarella; Rangan Gupta; Stephen M. Miller
  7. Machine Learning approach for Credit Scoring By A. R. Provenzano; D. Trifirò; A. Datteo; L. Giada; N. Jean; A. Riciputi; G. Le Pera; M. Spadaccino; L. Massaron; C. Nordio
  8. Deep neural network for optimal retirement consumption in defined contribution pension system By Wen Chen; Nicolas Langrené
  9. A Note on the Interpretability of Machine Learning Algorithms By Dominique Guegan
  10. Europe beyond Coal - An Economic and Climate Impact Assessment By Christoph Böhringer; Knut Einar Rosendahl
  11. Ergodic Annealing By Carlo Baldassi; Fabio Maccheroni; Massimo Marinacci; Marco Pirazzini
  12. China's Missing Pigs: Correcting China's Hog Inventory Data Using a Machine Learning Approach By Yongtong Shao; Minghao Li; Dermot J. Hayes; Wendong Zhang; Tao Xiong; Wei Xie
  13. The Hypothetical Household Tool (HHoT) in EUROMOD: a new instrument for comparative research on tax-benefit policies in Europe By Tine Hufkens; Tim Goedemé; Katrin Gasior; Chrysa Leventi; Kostas Manios; Olga Rastrigina; Pasquale Recchia; Holly Sutherland; Natascha Van Mechelen; Gerlinde Verbist
  14. Variable Selection in Macroeconomic Forecasting with Many Predictors By Zhenzhong Wang; Zhengyuan Zhu; Cindy Yu
  15. Region Search Optimization Algorithm for Economic Energy Management of Grid-Connected Mode Microgrid By Jamaledini, Ashkan; Soltani, Ali; Khazaei, Ehsan
  16. Generating Empirical Core Size Distributions of Hedonic Games using a Monte Carlo Method By Andrew J. Collins; Sheida Etemadidavan; Wael Khallouli
  17. Nvidia’s stock returns prediction using machine learning techniques for time series forecasting problem By Marcin Chlebus; Michał Dyczko; Michał Woźniak
  18. Which bills are lobbied? Predicting and interpreting lobbying activity in the US. By Ivan Slobozhan; Peter Ormosi; Rajesh Sharma
  19. Robust utility maximization under model uncertainty via a penalization approach By Ivan Guo; Nicolas Langrené; Gregoire Loeper; Wei Ning
  20. HRP performance comparison in portfolio optimization under various codependence and distance metrics By Illya Barziy; Marcin Chlebus
  21. Deep Dynamic Factor Models By Paolo Andreini; Cosimo Izzo; Giovanni Ricco

  1. By: Yang Li; Yi Pan
    Abstract: In recent years, machine learning and deep learning have become popular methods for financial data analysis, including financial textual data, numerical data, and graphical data. This paper proposes to use sentiment analysis to extract useful information from multiple textual data sources and a blending ensemble deep learning model to predict future stock movement. The blending ensemble model contains two levels. The first level contains two Recurrent Neural Networks (RNNs), one Long Short-Term Memory network (LSTM) and one Gated Recurrent Units network (GRU), followed by a fully connected neural network as the second-level model. The LSTM and GRU models can effectively capture the time-series events in the input data, and the fully connected neural network is used to ensemble several individual prediction results to further improve the prediction accuracy. The purpose of this work is to explain our design philosophy and show that ensemble deep learning technologies can predict future stock price trends more effectively and can better assist investors in making the right investment decisions than other traditional methods.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.12620&r=all
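    Sketch: a minimal illustration of the two-level blending ensemble described above, assuming random placeholder data and illustrative layer sizes (none of this is the authors' code): an LSTM and a GRU form the first level, and a small fully connected network blends their predictions.
      import numpy as np
      from tensorflow.keras import Sequential, layers

      n_steps, n_features = 20, 8
      X = np.random.rand(500, n_steps, n_features)   # placeholder feature windows
      y = np.random.randint(0, 2, 500)               # placeholder up/down labels

      def make_rnn(cell):
          # First-level model: one recurrent layer plus a sigmoid output
          model = Sequential([layers.Input((n_steps, n_features)),
                              cell(32),
                              layers.Dense(1, activation="sigmoid")])
          model.compile("adam", "binary_crossentropy")
          return model

      level1 = [make_rnn(layers.LSTM), make_rnn(layers.GRU)]
      for m in level1:
          m.fit(X, y, epochs=2, verbose=0)

      # Second level: a fully connected net blends the two base predictions
      blend_X = np.hstack([m.predict(X, verbose=0) for m in level1])
      blender = Sequential([layers.Input((2,)),
                            layers.Dense(8, activation="relu"),
                            layers.Dense(1, activation="sigmoid")])
      blender.compile("adam", "binary_crossentropy")
      blender.fit(blend_X, y, epochs=2, verbose=0)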
  2. By: Bauermann, Tom; Roos, Michael W. M.; Schaff, Frederik
    Abstract: Agent-based computational economics (ACE) is gaining interest in macroeconomic research. Agent-based models (ABM) are increasingly able to replicate micro- and macroeconomic stylised facts and to extend the knowledge about real-world economic systems. These advances allow ABM to become a valuable and more frequently used tool for policy analysis in academia and economic practice. However, ACE is a rather complex approach to already complex investigations like policy analyses, i.e., analyses of how a variety of policy measures affect the (model) economy, which makes policy analyses in ABM prone to critique. This research paper addresses these problems. We have developed a procedure for policy experiments in ACE which helps to conceptualise and conduct policy experiments in macroeconomic ABM efficiently. The procedure makes policy implementation decisions and their consequences transparent by conducting what we term the policy implementation sensitivity analysis (POSA). The application of the procedure produces graphical and/or numerical reports that should be included in the appendix of the original research paper in order to increase the credibility of the research, similar to proofs and protocols in analytical and empirical research.
    Keywords: agent-based macroeconomics, policy experiments, sensitivity analyses
    JEL: C63 E6 B4
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:rwirep:854&r=all
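    Sketch: a hypothetical POSA-style sweep, assuming a stand-in run_abm function and an invented grid of implementation choices: each implementation variant of the same policy is re-run over Monte Carlo seeds and the spread of outcomes is tabulated for an appendix-style report.
      import itertools
      import random
      import statistics

      def run_abm(timing, targeting, seed):
          """Placeholder ABM returning a mock outcome (e.g. mean output growth)."""
          rng = random.Random((hash((timing, targeting)) ^ seed) & 0xffffffff)
          return rng.gauss(0.02, 0.005)

      implementation_grid = {
          "timing": ["immediate", "phased"],          # when the policy kicks in
          "targeting": ["all_firms", "large_firms"],  # who the policy applies to
      }
      for combo in itertools.product(*implementation_grid.values()):
          outcomes = [run_abm(*combo, seed=s) for s in range(30)]  # MC replications
          print(combo, round(statistics.mean(outcomes), 4),
                round(statistics.stdev(outcomes), 4))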
  3. By: Chao Deng; Xizhi Su; Chao Zhou
    Abstract: We establish a Nash equilibrium in a market with N agents under relative wealth performance criteria when the market return is unobservable. Each investor has a random prior belief on the return rate of the risky asset. The investors can be heterogeneous in both the mean and variance of the prior. By a separation result and a martingale argument, we show that the optimal investment strategy under a stochastic return rate model can be characterized by a fully coupled linear FBSDE. Two sets of deep neural networks are used for the numerical computation to first find each investor's estimate of the mean return rate and then solve the FBSDEs. We establish the existence and uniqueness result for the class of FBSDEs with stochastic coefficients and solve the utility game under partial information using deep neural network function approximators. We demonstrate the efficiency and accuracy by a base-case comparison with the solution from the finite difference scheme in the linear case, and apply the algorithm to the general case of a nonlinear hidden variable process. Simulations of investment strategies show a herd effect: investors trade more aggressively under relative wealth concerns. Statistical properties of the investment strategies and the portfolio performance, including the Sharpe ratios and the Variance Risk ratios (VRRs), are examined. We observe that the agent with the most accurate prior estimate is likely to lead the herd, and the effect of competition on heterogeneous agents varies more with market characteristics compared to the homogeneous case.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.11781&r=all
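    Sketch: the partial-information ingredient above (each investor filtering an unobservable mean return under a Gaussian prior) illustrated with a scalar Kalman-Bucy-style update; the dynamics, parameter values and the two priors below are toy assumptions, not the paper's model.
      import numpy as np

      dt, T, sigma, true_mu = 1 / 252, 252, 0.2, 0.08
      rng = np.random.default_rng(1)
      # observed log-price increments: dY = (mu - sigma^2/2) dt + sigma dW
      dlogS = (true_mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(T)

      def filter_mu(prior_mean, prior_var):
          m, v = prior_mean, prior_var
          for dy in dlogS:
              gain = v / sigma**2                         # Kalman gain
              m += gain * (dy - (m - sigma**2 / 2) * dt)  # update mean estimate
              v -= (v / sigma) ** 2 * dt                  # shrink posterior variance
          return m

      # heterogeneous priors: different means and different confidences
      for mean, var in [(0.02, 0.1), (0.12, 0.01)]:
          print(f"prior ({mean}, {var}) -> posterior mean {filter_mu(mean, var):.4f}")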
  4. By: Taran Fæhn (Statistics Norway); Gabriel Bachner; Robert Beach; Jean Chateau; Shinichiro Fujimori; Madanmohan Ghosh; Meriem Hamdi-Cherif; Elisa Lanzi; Sergey Paltsev; Toon Vandyck; Bruno Cunha; Rafael Garaffa; Karl Steininger
    Abstract: Limiting global warming in line with the goals in the Paris Agreement will require substantial technological and behavioural transformations. This challenge drives many of the current modelling trends. This article undertakes a review of 17 state-of-the-art recursive-dynamic computable general equilibrium (CGE) models and assesses the key methodologies and applied modules they use for representing sectoral energy and emission characteristics and dynamics. The purpose is to provide technical insight into recent advances in the modelling of current and future energy and abatement technologies and how they can be used to make baseline projections and scenarios 20-80 years ahead. Numerical illustrations are provided. In order to represent likely energy system transitions in the decades to come, modern CGE tools have learned from bottom-up studies. Three different approaches to baseline quantification can be distinguished: (a) exploiting bottom-up model characteristics to endogenize responses of technological investment and utilization, (b) relying on external information sources to feed the exogenous parameters and variables of the model, and (c) linking the model with more technology-rich, partial models to obtain bottom-up- and pathway-consistent parameters.
    Keywords: Computable general equilibrium models; Long-term economic projections; Energy; Technological change; Emissions; Greenhouse gases
    JEL: C68 O13 O14 O18 Q43 Q54
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:936&r=all
  5. By: Alessandro Gnoatto (Department of Economics (University of Verona)); Athena Picarelli (Department of Economics (University of Verona)); Christoph Reisinger (University of Oxford)
    Abstract: In this paper, we present a novel computational framework for portfolio-wide risk management problems where the presence of a potentially large number of risk factors makes traditional numerical techniques ineffective. The new method utilises a coupled system of BSDEs for the valuation adjustments (xVA) and solves these by a recursive application of a neural network based BSDE solver. This not only makes the computation of xVA for high-dimensional problems feasible, but also produces hedge ratios and dynamic risk measures for xVA, and allows simulations of the collateral account.
    Keywords: CVA, DVA, FVA, ColVA, xVA, EPE, Collateral, xVA hedging, Deep BSDE Solver
    JEL: G12 G13 C63
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:ver:wpaper:07/2020&r=all
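    Sketch: a minimal deep BSDE solver in the spirit of the framework above (Han-Jentzen-E style): the initial value Y0 and the control Z are parametrized, the BSDE is rolled forward, and the terminal mismatch is penalized. The forward dynamics, driver f and payoff g below are toy assumptions, not the paper's xVA equations.
      import torch
      import torch.nn as nn

      T, N, dim, batch = 1.0, 20, 1, 512
      dt = T / N
      g = lambda x: torch.clamp(x - 1.0, min=0.0)   # toy terminal payoff
      f = lambda y: -0.05 * y                       # toy driver (plain discounting)

      y0 = nn.Parameter(torch.tensor(0.1))
      z_net = nn.Sequential(nn.Linear(dim + 1, 32), nn.ReLU(), nn.Linear(32, dim))
      opt = torch.optim.Adam([y0, *z_net.parameters()], lr=1e-2)

      for step in range(200):
          x = torch.ones(batch, dim)                # X_0
          y = y0.expand(batch, 1)
          for n in range(N):
              t = torch.full((batch, 1), n * dt)
              dw = torch.randn(batch, dim) * dt ** 0.5
              z = z_net(torch.cat([t, x], dim=1))
              y = y - f(y) * dt + (z * dw).sum(1, keepdim=True)  # BSDE step
              x = x + 0.2 * x * dw                  # toy forward dynamics
          loss = ((y - g(x)) ** 2).mean()           # terminal condition penalty
          opt.zero_grad(); loss.backward(); opt.step()
      print(float(y0))                              # learned initial value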
  6. By: Heni Boubaker (International University of Rabat); Giorgio Canarella (University of Nevada, Las Vegas); Rangan Gupta (University of Pretoria); Stephen M. Miller (University of Nevada, Las Vegas)
    Abstract: This paper proposes a hybrid modelling approach for forecasting returns and volatilities of the stock market. The model, called the ARFIMA-WLLWNN model, integrates the advantages of the ARFIMA model, the wavelet decomposition technique (namely, the discrete MODWT with the Daubechies least asymmetric wavelet filter) and an artificial neural network (namely, the LLWNN neural network). The model is developed in two phases. In phase one, a wavelet decomposition improves the forecasting accuracy of the LLWNN neural network, resulting in the Wavelet Local Linear Wavelet Neural Network (WLLWNN) model. The Back Propagation (BP) and Particle Swarm Optimization (PSO) learning algorithms optimize the WLLWNN structure. In phase two, the residuals of an ARFIMA model of the conditional mean become the input to the WLLWNN model. The hybrid ARFIMA-WLLWNN model is evaluated using daily closing prices of the Dow Jones Industrial Average (DJIA) index from 01/01/2010 to 02/11/2020. The experimental results indicate that the PSO-optimized version of the hybrid ARFIMA-WLLWNN outperforms the LLWNN, WLLWNN, ARFIMA-LLWNN, and ARFIMA-HYAPARCH models and provides more accurate out-of-sample forecasts over validation horizons of one, five and twenty-two days.
    Keywords: Wavelet decomposition, WLLWNN, Neural network, ARFIMA, HYGARCH
    JEL: C45 C58 G17
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:uct:uconnp:2020-10&r=all
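    Sketch: the two-phase hybrid in miniature, with stand-ins loudly noted: statsmodels' integer-order ARIMA replaces the fractional ARFIMA, PyWavelets' stationary wavelet transform (pywt.swt) replaces the MODWT, and sklearn's MLPRegressor replaces the LLWNN. The return series is a random placeholder.
      import numpy as np
      import pywt
      from statsmodels.tsa.arima.model import ARIMA
      from sklearn.neural_network import MLPRegressor

      returns = np.random.randn(512) * 0.01            # placeholder return series

      # Conditional-mean model ("ARFIMA" stand-in); its residuals feed phase two
      resid = ARIMA(returns, order=(1, 0, 1)).fit().resid

      # Wavelet-decompose the residuals (sym8 = Daubechies least asymmetric)
      coeffs = pywt.swt(resid, "sym8", level=3)        # list of (cA, cD) pairs
      features = np.column_stack([c for pair in coeffs for c in pair])

      # Neural network ("WLLWNN" stand-in) forecasts the next residual
      X, y = features[:-1], resid[1:]
      nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
      print(nn.predict(X[-1:]))                        # one-step residual forecast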
  7. By: A. R. Provenzano; D. Trifirò; A. Datteo; L. Giada; N. Jean; A. Riciputi; G. Le Pera; M. Spadaccino; L. Massaron; C. Nordio
    Abstract: In this work we build a stack of machine learning models aimed at composing a state-of-the-art credit rating and default prediction system, obtaining excellent out-of-sample performance. Our approach is an excursion through the most recent ML/AI concepts: starting from natural language processing (NLP) applied to economic sectors' (textual) descriptions using embeddings and autoencoders (AE), going through the classification of defaultable firms on the basis of a wide range of economic features using gradient boosting machines (GBM), and calibrating their probabilities while paying due attention to the treatment of unbalanced samples. Finally, we assign credit ratings through genetic algorithms (differential evolution, DE). Model interpretability is achieved by implementing recent techniques such as SHAP and LIME, which explain predictions locally in feature space.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.01687&r=all
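    Sketch: a condensed stand-in for the tabular core of such a stack, assuming synthetic data: a gradient boosting classifier for default prediction, probability calibration for the unbalanced sample, and SHAP for local explanations (sklearn's GradientBoostingClassifier and isotonic calibration stand in for the paper's tuned pipeline).
      import numpy as np
      import shap
      from sklearn.calibration import CalibratedClassifierCV
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import train_test_split

      X = np.random.rand(2000, 10)                     # placeholder firm features
      y = (np.random.rand(2000) < 0.05).astype(int)    # rare default events

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
      gbm = GradientBoostingClassifier().fit(X_tr, y_tr)

      # Calibrate predicted default probabilities on cross-validation folds
      calibrated = CalibratedClassifierCV(gbm, method="isotonic", cv=3).fit(X_tr, y_tr)
      pd_hat = calibrated.predict_proba(X_te)[:, 1]    # calibrated PDs

      # Local (SHAP) explanation of a single prediction
      print(shap.TreeExplainer(gbm).shap_values(X_te[:1]))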
  8. By: Wen Chen; Nicolas Langrené
    Abstract: In this paper, we develop a deep neural network approach to solve a lifetime expected mortality-weighted utility-based model for optimal consumption in the decumulation phase of a defined contribution pension system. We formulate this problem as a multi-period finite-horizon stochastic control problem and train a deep neural network policy representing consumption decisions. The optimal consumption policy is determined by personal information about the retiree such as age, wealth, risk aversion and bequest motive, as well as a series of economic and financial variables including inflation rates and asset returns jointly simulated from a proposed seven-factor economic scenario generator calibrated from market data. We use the Australian pension system as an example, with consideration of the government-funded means-tested Age Pension and other practical aspects such as fund management fees. The key findings from our numerical tests are as follows. First, our deep neural network optimal consumption policy, which adapts to changes in market conditions, outperforms deterministic drawdown rules proposed in the literature. Moreover, the out-of-sample outperformance ratios increase as the number of training iterations increases, eventually reaching outperformance on all testing scenarios after less than 10 minutes of training. Second, a sensitivity analysis is performed to reveal how risk aversion and bequest motives change the consumption over a retiree's lifetime under this utility framework. Third, we provide the optimal consumption rate with different starting wealth balances. We observe that optimal consumption rates are not proportional to initial wealth due to the Age Pension payment. Fourth, with the same initial wealth balance and utility parameter settings, the optimal consumption level is different between males and females due to gender differences in mortality.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.09911&r=all
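    Sketch: the control approach in stylized form, all toy assumptions: a network maps (age, wealth) to a consumption fraction and is trained on simulated expected CRRA utility; the single risky asset below replaces the paper's seven-factor scenario generator, and mortality weighting, the Age Pension and fees are omitted.
      import torch
      import torch.nn as nn

      policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                             nn.Linear(32, 1), nn.Sigmoid())  # consumption fraction
      opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
      gamma, T, batch = 3.0, 30, 1024                  # CRRA coefficient, horizon

      for step in range(300):
          w = torch.full((batch, 1), 500.0)            # initial wealth
          utility = torch.zeros(batch, 1)
          for t in range(T):
              state = torch.cat([torch.full_like(w, t / T), w / 1000.0], dim=1)
              c = policy(state) * w                    # consumption this period
              utility = utility + 0.98 ** t * c.clamp(min=1e-3) ** (1 - gamma) / (1 - gamma)
              r = 1.03 + 0.1 * torch.randn(batch, 1)   # risky gross return
              w = ((w - c) * r).clamp(min=1e-3)        # wealth update
          loss = -utility.mean()                       # maximize expected utility
          opt.zero_grad(); loss.backward(); opt.step()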
  9. By: Dominique Guegan (UP1 - Université Panthéon-Sorbonne; CES - Centre d'économie de la Sorbonne (UP1, CNRS); University of Ca’ Foscari [Venice, Italy])
    Abstract: We are interested in the analysis of the concept of interpretability associated with an ML algorithm. We distinguish between the "How", i.e., how a black box or a very complex algorithm works, and the "Why", i.e., why an algorithm produces such a result. These questions concern many actors: users, professionals, and regulators, among others. Using a formal standardized framework, we indicate the solutions that exist by specifying which elements of the supply chain are impacted when we provide answers to the previous questions. This presentation, by standardizing the notation, allows us to compare the different approaches and to highlight the specificities of each of them, both in their objective and in their process. The study is not exhaustive and the subject is far from being closed.
    Keywords: Interpretability, Counterfactual approach, Artificial Intelligence, Agnostic models, LIME method, Machine learning
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-02900929&r=all
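    Sketch: one of the agnostic "Why" tools discussed above (LIME) applied to a generic tabular classifier; the data and the random forest are placeholders.
      import numpy as np
      from lime.lime_tabular import LimeTabularExplainer
      from sklearn.ensemble import RandomForestClassifier

      X = np.random.rand(500, 5)
      y = (X[:, 0] + X[:, 1] > 1).astype(int)          # simple synthetic rule
      model = RandomForestClassifier().fit(X, y)

      explainer = LimeTabularExplainer(
          X, feature_names=[f"x{i}" for i in range(5)],
          class_names=["0", "1"], mode="classification")
      exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
      print(exp.as_list())   # local feature weights for this one prediction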
  10. By: Christoph Böhringer; Knut Einar Rosendahl
    Abstract: Several European countries have decided to phase out coal power generation. Emissions from electricity generation are already regulated by the EU Emissions Trading System (ETS), and in some countries, like Germany, the phaseout of coal will be accompanied by the cancellation of emissions allowances. In this paper we examine the consequences of phasing out coal for the broader economy, the electricity sector, and CO2 emissions. We show analytically how the welfare impacts for a phaseout region depend on i) whether and how allowances are canceled, ii) whether other countries join phaseout policies, and iii) terms-of-trade effects in the ETS market. Based on numerical simulations with a computable general equilibrium model for the European economy, we quantify the economic and environmental impacts of alternative phaseout scenarios, considering both unilateral and multilateral phaseout. We find that terms-of-trade effects in the ETS market play an important role in the welfare effects across EU member states. For Germany, a coal phaseout combined with unilateral cancellation of allowances is found to be welfare-improving if German citizens value emissions reductions at 65 euros per ton or more.
    Keywords: coal phaseout, emissions trading, electricity market
    JEL: D61 F18 H23 Q54
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_8412&r=all
  11. By: Carlo Baldassi; Fabio Maccheroni; Massimo Marinacci; Marco Pirazzini
    Abstract: Simulated Annealing is the crowning glory of Markov Chain Monte Carlo methods for the solution of NP-hard optimization problems in which the cost function is known. Here, by replacing the Metropolis engine of Simulated Annealing with a reinforcement learning variation -- which we call the Macau Algorithm -- we show that the Simulated Annealing heuristic can also be very effective when the cost function is unknown and has to be learned by an artificial agent.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2008.00234&r=all
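    Sketch: the abstract gives no detail on the Macau variant itself, so this is only generic Metropolis-style simulated annealing in which the cost is unknown and estimated from noisy evaluations by running averages (a crude learning stand-in; the objective is invented).
      import math
      import random

      def noisy_cost(x):                      # black box: true cost plus noise
          return (x - 3) ** 2 + random.gauss(0, 0.5)

      estimates, counts = {}, {}
      def estimated_cost(x):                  # incremental running-average estimate
          counts[x] = counts.get(x, 0) + 1
          estimates[x] = estimates.get(x, 0.0) + (noisy_cost(x) - estimates.get(x, 0.0)) / counts[x]
          return estimates[x]

      x, temp = 0, 5.0
      for step in range(2000):
          x_new = x + random.choice([-1, 1])
          delta = estimated_cost(x_new) - estimated_cost(x)
          if delta < 0 or random.random() < math.exp(-delta / temp):  # Metropolis rule
              x = x_new
          temp *= 0.999                       # annealing schedule
      print(x, round(estimates[x], 3))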
  12. By: Yongtong Shao; Minghao Li; Dermot J. Hayes (Center for Agricultural and Rural Development (CARD)); Wendong Zhang (Center for Agricultural and Rural Development (CARD)); Tao Xiong; Wei Xie
    Abstract: Small sample size often limits forecasting tasks such as the prediction of production, yield, and consumption of agricultural products. Machine learning offers an appealing alternative to traditional forecasting methods. In particular, Support Vector Regression has superior forecasting performance in small-sample applications. In this article, we introduce Support Vector Regression via an application to China's hog market. Since 2014, China's hog inventory data have experienced an abnormal decline that contradicts price and consumption trends. We use Support Vector Regression to predict the true inventory based on the price-inventory relationship before 2014. We show that, in this application with a small sample size, Support Vector Regression outperforms neural networks, random forest, and linear regression. Predicted hog inventory decreased by 3.9% from November 2013 to September 2017, instead of the 25.4% decrease in the reported data.
    Date: 2020–08
    URL: http://d.repec.org/n?u=RePEc:ias:cpaper:20-wp607&r=all
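    Sketch: the small-sample idea in a few lines: fit SVR on the pre-2014 price-inventory relationship, then predict the "true" inventory afterwards. The arrays are random placeholders, not CARD's data, and the feature set is invented.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      prices_pre = np.random.rand(60, 3)       # monthly price features, pre-2014
      inventory_pre = np.random.rand(60)       # reported inventory, pre-2014

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
      model.fit(prices_pre, inventory_pre)     # learn price-inventory relationship

      prices_post = np.random.rand(45, 3)      # post-2014 prices
      print(model.predict(prices_post))        # implied "corrected" inventory path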
  13. By: Tine Hufkens; Tim Goedemé; Katrin Gasior; Chrysa Leventi; Kostas Manios; Olga Rastrigina; Pasquale Recchia; Holly Sutherland; Natascha Van Mechelen; Gerlinde Verbist
    Abstract: This paper introduces the Hypothetical Household Tool (HHoT), a new extension of EUROMOD, the tax-benefit microsimulation model for the European Union. With HHoT, users can easily create their own hypothetical data, which enables them to better understand how policies work for households with specific characteristics. The tool creates unique possibilities for an enhanced analysis of taxes and social benefits in Europe by integrating results from microsimulations and hypothetical household simulations in a single modelling framework. Furthermore, the flexibility of HHoT facilitates an advanced use of hypothetical household simulations to create new comparative policy indicators in the context of multi-country and longitudinal analyses. In this paper, we highlight the main features of HHoT, its strengths and limitations, and illustrate how it can be used for comparative policy purposes.
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:hdl:wpaper:1819&r=all
  14. By: Zhenzhong Wang; Zhengyuan Zhu; Cindy Yu
    Abstract: In a data-rich environment, using many economic predictors to forecast a few key variables has become a new trend in econometrics. The commonly used approach is the factor augmentation (FA) approach. In this paper, we pursue another direction, the variable selection (VS) approach, to handle high-dimensional predictors. VS is an active topic in statistics and computer science; however, it has not received as much attention as FA in economics. This paper introduces several cutting-edge VS methods to economic forecasting, which include: (1) classical greedy procedures; (2) l1 regularization; (3) gradient descent with sparsification; and (4) meta-heuristic algorithms. Comprehensive simulation studies are conducted to compare their variable selection accuracy and prediction performance under different scenarios. Among the reviewed methods, a meta-heuristic algorithm called the sequential Monte Carlo algorithm performs best. Surprisingly, classical forward selection is comparable to it and better than other, more sophisticated algorithms. In addition, we apply these VS methods to economic forecasting and compare them with the popular FA approach. It turns out that for the employment rate and CPI inflation, some VS methods achieve considerable improvement over FA, and the selected predictors can be well explained by economic theories.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.10160&r=all
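    Sketch: two of the reviewed VS families on a many-predictor toy design: l1 regularization (LassoCV) and classical forward selection (sklearn's SequentialFeatureSelector); the data-generating process with five true predictors is an assumption for illustration.
      import numpy as np
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.linear_model import LassoCV, LinearRegression

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 100))                    # 100 candidate predictors
      y = X[:, :5] @ np.ones(5) + rng.standard_normal(200)   # 5 relevant ones

      lasso = LassoCV(cv=5).fit(X, y)                        # l1 regularization
      print("lasso picks:", np.flatnonzero(lasso.coef_))

      forward = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5,
                                          direction="forward").fit(X, y)
      print("forward picks:", np.flatnonzero(forward.get_support()))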
  15. By: Jamaledini, Ashkan; Soltani, Ali; Khazaei, Ehsan
    Abstract: Economic energy management of grid-connected microgrids has been widely investigated. However, due to the binary variables encoding the on/off status of the generation units, the optimization problem for the grid-connected microgrid is very hard to solve. Thus, in this paper, the region search optimization algorithm (RSOA) is developed and adopted for the energy management of the grid-connected microgrid. The developed technique has higher convergence speed and accuracy than well-known heuristic techniques such as the genetic algorithm and particle swarm optimization. Results show the effectiveness of the developed model.
    Keywords: Grid-connected microgrid, economic energy management, industry microgrid
    JEL: A1 A30 G1 G10 G14 L0 O1 P0
    Date: 2020–03–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:102094&r=all
  16. By: Andrew J. Collins; Sheida Etemadidavan; Wael Khallouli
    Abstract: Data analytics allows an analyst to gain insight into underlying populations through the use of various computational approaches, including Monte Carlo methods. This paper discusses an approach to applying Monte Carlo methods to hedonic games. Hedonic games have gained popularity over the last two decades, leading to several research articles concerned with necessary and/or sufficient conditions for the existence of a core partition. Researchers have used analytical methods for this work. We propose that a numerical approach will give insights that might not be available through current analytical methods. In this paper, we describe an approach to representing hedonic games, with strict preferences, in a matrix form that can easily be generated; that is, a hedonic game with randomly generated preferences for each player. Using this generative approach, we were able to create, and solve (i.e., find any core partitions of), millions of hedonic games. Our Monte Carlo experiment generated games with up to thirteen players. The results describe the distribution of core sizes for games with a given number of players. We also discuss computational considerations. Our numerical study of hedonic games gives insight into their underlying properties.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.12127&r=all
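    Sketch: the generative experiment at brute-force scale, under the stated assumption of strict preferences drawn as random utilities: draw a random hedonic game, enumerate all partitions, and count the core-stable ones (feasible only for a handful of players; the paper scales this idea to millions of games).
      import itertools
      import random

      def partitions(players):                 # enumerate all set partitions
          if not players:
              yield []
              return
          first, rest = players[0], players[1:]
          for part in partitions(rest):
              for i in range(len(part)):
                  yield part[:i] + [part[i] | {first}] + part[i + 1:]
              yield part + [{first}]

      n = 6
      players = list(range(n))
      coalitions = [frozenset(c) for r in range(1, n + 1)
                    for c in itertools.combinations(players, r)]
      # strict random preferences: a utility for every player-coalition pair
      u = {(p, S): random.random() for S in coalitions for p in S}

      def in_core(part):                       # no coalition blocks the partition
          current = {p: frozenset(S) for S in part for p in S}
          return not any(all(u[(p, S)] > u[(p, current[p])] for p in S)
                         for S in coalitions)

      core = [part for part in partitions(players) if in_core(part)]
      print(len(core), "core partitions out of", sum(1 for _ in partitions(players)))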
  17. By: Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw); Michał Dyczko (Faculty of Mathematics and Computer Science, Warsaw University of Technology); Michał Woźniak (Faculty of Economic Sciences, University of Warsaw)
    Abstract: The main aim of this paper was to predict daily stock returns of Nvidia Corporation, quoted on the Nasdaq Stock Market. The most important challenges in this research are the statistical specificity of return ratios (the time series may turn out to be white noise) and the necessity of applying many atypical machine learning methods to handle the influence of time. The period of study covered 07/2012 - 12/2018. Models used in this paper were: SVR, KNN, XGBoost, LightGBM, LSTM, ARIMA, ARIMAX. Features used in the models come from classes such as: technical analysis, fundamental analysis, Google Trends entries, and markets related to Nvidia. It was shown empirically that it is possible to construct a prediction model of Nvidia's daily return ratios that outperforms a simple naive model. The best performance was obtained by SVR based on stationary attributes. Generally, models based on stationary variables performed better than models based on both stationary and non-stationary variables. An ensemble approach designed especially for time series failed to improve forecast precision. Overall, applying machine learning models to time series with various classes of explanatory variables appears to bring good results.
    Keywords: nvidia, stock returns, machine learning, technical analysis, fundamental analysis, google trends, stationarity, ensembling
    JEL: C32 C38 C44 C51 C52 C61 C65 G11 G15
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2020-22&r=all
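    Sketch: the stationarity screening step that the paper's finding turns on, illustrated with an augmented Dickey-Fuller test on a placeholder price path and its log returns.
      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      price = 100 * np.exp(np.cumsum(0.0005 + 0.01 * np.random.randn(1000)))
      log_ret = np.diff(np.log(price))

      for name, series in [("price level", price), ("log return", log_ret)]:
          pvalue = adfuller(series)[1]
          verdict = "stationary" if pvalue < 0.05 else "non-stationary"
          print(f"{name}: ADF p-value = {pvalue:.3f} ({verdict})")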
  18. By: Ivan Slobozhan (Institute of Computer Science, University of Tartu); Peter Ormosi (Centre for Competition Policy and Norwich Business School, University of East Anglia); Rajesh Sharma (Institute of Computer Science, University of Tartu)
    Abstract: Using lobbying data from OpenSecrets.org, we offer several experiments applying machine learning techniques to predict whether a piece of legislation (US bill) has been subject to lobbying activity. We also investigate the influence of the intensity of the lobbying activity on how discernible a lobbied bill is from one that was not subject to lobbying. We compare the performance of a number of different models (logistic regression, random forest, CNN and LSTM) and text embedding representations (BOW, TF-IDF, GloVe, Law2Vec). We report ROC AUC scores above 0.85 and accuracy of 78%. Model performance significantly improves (0.95 ROC AUC and 88% accuracy) when bills with higher lobbying intensity are considered. We also propose a method that could be used for unlabelled data. Through this we show that there is a considerable number of previously unlabelled US bills for which our predictions suggest that some lobbying activity took place. We believe our method could potentially contribute to the enforcement of the US Lobbying Disclosure Act (LDA) by indicating the bills that were likely to have been affected by lobbying but were not filed as such.
    Keywords: lobbying; rent seeking; text classification; US bills
    Date: 2020–01–01
    URL: http://d.repec.org/n?u=RePEc:uea:ueaccp:2020_03&r=all
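    Sketch: the simplest model family compared above in a bare-bones form: TF-IDF features of bill text fed to a logistic regression and scored by ROC AUC. The two repeated example "bills" are placeholders, so any near-perfect score is an artifact of the toy data.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      texts = (["a bill to amend the tax code for energy producers"] * 50 +
               ["a resolution honoring a local sports team"] * 50)
      labels = [1] * 50 + [0] * 50             # 1 = lobbied, 0 = not lobbied

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      print(cross_val_score(clf, texts, labels, cv=5, scoring="roc_auc").mean())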
  19. By: Ivan Guo (Monash University [Melbourne]); Nicolas Langrené (CSIRO - Commonwealth Scientific and Industrial Research Organisation [Canberra]); Gregoire Loeper (Monash University [Melbourne]); Wei Ning (Monash University [Melbourne])
    Abstract: This paper addresses the problem of utility maximization under uncertain parameters. In contrast with the classical approach, where the parameters of the model evolve freely within a given range, we constrain them via a penalty function. We show that this robust optimization process can be interpreted as a two-player zero-sum stochastic differential game. We prove that the value function satisfies the Dynamic Programming Principle and that it is the unique viscosity solution of an associated Hamilton-Jacobi-Bellman-Isaacs equation. We test this robust algorithm on real market data. The results show that robust portfolios generally have higher expected utilities and are more stable under strong market downturns. To solve for the value function, we derive an analytical solution in the logarithmic utility case and obtain accurate numerical approximations in the general case by three methods: finite difference method, Monte Carlo simulation, and Generative Adversarial Networks.
    Keywords: robust portfolio optimization,differential games,HJBI equation,Generative adversarial networks,GANs,Monte Carlo
    Date: 2020–08–01
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02910261&r=all
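    Sketch: one schematic way to write such a penalized robust objective in LaTeX (generic notation, not the paper's exact formulation): the inner infimum ranges over model parameters, disciplined by a penalty R rather than a hard range constraint.
      V(t,x) \;=\; \sup_{\pi} \inf_{\theta}\,
        \mathbb{E}\!\left[\, U\big(X_T^{\pi,\theta}\big)
          + \int_t^T R(\theta_s)\,\mathrm{d}s \,\right]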
  20. By: Illya Barziy; Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw)
    Abstract: The problem of portfolio optimization was formulated almost 70 years ago in the work of Harry Markowitz. However, optimization methods are still being studied in order to obtain better asset allocations using empirical approximations of the codependences between assets. In this work, various codependence measures and distance metrics are tested in the Hierarchical Risk Parity (HRP) algorithm to determine whether the results obtained are superior to those of the standard Pearson correlation as a measure of codependence. In order to compare how HRP uses the information from alternative codependence metrics, the MV, IVP, and CLA optimization algorithms were used on the same data. The dataset used for comparison consisted of 32 ETFs representing equity of different regions and sectors as well as bonds and commodities. The time period tested was 01.01.2007-20.12.2019. Results show that the alternative codependence metrics perform worse in terms of Sharpe ratios and maximum drawdowns than the standard Pearson correlation, for each optimization method used. The added value of this work lies in applying alternative codependence and distance metrics to real data, and in including transaction costs to determine their impact on the results of each algorithm.
    Keywords: Hierarchical Risk Parity, portfolio optimization, ETF, hierarchical structure, clustering, backtesting, distance metrics, risk management, machine learning
    JEL: C32 C38 C44 C51 C52 C61 C65 G11 G15
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2020-21&r=all
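    Sketch: the experiment's core step, assuming random placeholder returns and omitting the full HRP allocation: turn a codependence matrix into López de Prado's distance and feed it to hierarchical clustering, comparing the standard Pearson choice with Spearman as one alternative.
      import numpy as np
      import pandas as pd
      from scipy.cluster.hierarchy import linkage
      from scipy.spatial.distance import squareform

      rets = pd.DataFrame(np.random.randn(500, 8))       # 8 placeholder "ETFs"

      for method in ("pearson", "spearman"):
          corr = rets.corr(method=method).values
          dist = np.sqrt(0.5 * (1.0 - corr))             # HRP distance metric
          np.fill_diagonal(dist, 0.0)
          link = linkage(squareform(dist, checks=False), method="single")
          print(method, "first merge:", link[0][:2])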
  21. By: Paolo Andreini; Cosimo Izzo; Giovanni Ricco
    Abstract: We propose a novel deep neural network framework, which we refer to as the Deep Dynamic Factor Model (D2FM), to encode the information available from hundreds of macroeconomic and financial time series into a handful of unobserved latent states. While similar in spirit to traditional dynamic factor models (DFMs), this new class of models differs in allowing for nonlinearities between factors and observables, thanks to the deep neural network structure. By design, however, the latent states of the model can still be interpreted as in a standard factor model. In an empirical application to forecasting and nowcasting economic conditions in the US, we show the potential of this framework in dealing with high-dimensional, mixed-frequency, and asynchronously published time-series data. In a fully real-time out-of-sample exercise with US data, the D2FM improves on the performance of a state-of-the-art DFM.
    Date: 2020–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2007.11887&r=all
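    Sketch: the D2FM idea stripped to its essence, with everything else assumed away: an autoencoder whose low-dimensional code plays the role of the latent factors and whose decoder is the nonlinear map back to the observables; factor dynamics, mixed frequencies and real-time vintages are omitted, and the panel is random.
      import torch
      import torch.nn as nn

      n_series, n_factors, T = 100, 4, 400
      X = torch.randn(T, n_series)                       # placeholder macro panel

      encoder = nn.Sequential(nn.Linear(n_series, 32), nn.ReLU(),
                              nn.Linear(32, n_factors))  # observables -> factors
      decoder = nn.Sequential(nn.Linear(n_factors, 32), nn.ReLU(),
                              nn.Linear(32, n_series))   # factors -> observables
      opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

      for epoch in range(200):
          loss = ((decoder(encoder(X)) - X) ** 2).mean() # reconstruction error
          opt.zero_grad(); loss.backward(); opt.step()
      print(encoder(X).shape)                            # (T, n_factors) factor path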

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.