nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒10‒07
sixteen papers chosen by



  1. Deep Neural Network Framework Based on Backward Stochastic Differential Equations for Pricing and Hedging American Options in High Dimensions By Yangang Chen; Justin W. L. Wan
  2. Machine Learning Optimization Algorithms & Portfolio Allocation By Sarah Perrin; Thierry Roncalli
  3. Can a machine understand real estate pricing? – Evaluating machine learning approaches with big data By Marcelo Cajias
  4. Using Machine Learning to Predict Realized Variance By Peter Carr; Liuren Wu; Zhibai Zhang
  5. Text-Based Rental Rate Predictions of Airbnb Listings By Norbert Pfeifer
  6. Explaining Agent-Based Financial Market Simulation By David Byrd
  7. Exploring Graph Neural Networks for Stock Market Predictions with Rolling Window Analysis By Daiki Matsunaga; Toyotaro Suzumura; Toshihiro Takahashi
  8. Moments of renewal shot-noise processes and their applications By Jang, Jiwook; Dassios, Angelos; Zhao, Hongbiao
  9. Artificial Intelligence BlockCloud (AIBC) Technical Whitepaper By Qi Deng
  10. A Robust Transferable Deep Learning Framework for Cross-sectional Investment Strategy By Kei Nakagawa; Masaya Abe; Junpei Komiyama
  11. "Particle Rolling MCMC" By Naoki Awaya; Yasuhiro Omori
  12. PAGAN: Portfolio Analysis with Generative Adversarial Networks By Giovanni Mariani; Yada Zhu; Jianbo Li; Florian Scheidegger; Roxana Istrate; Costas Bekas; A. Cristiano I. Malossi
  13. I know where you will invest in the next year – Forecasting real estate investments with machine learning methods By Marcelo Cajias; Jonas Willwersch; Felix Lorenz
  14. Macroscopic approximation methods for the analysis of adaptive networked agent-based models: The example of a two-sector investment model By Jakob J. Kolb; Finn Müller-Hansen; Jürgen Kurths; Jobst Heitzig
  15. Quantum Annealing Algorithm for Expected Shortfall based Dynamic Asset Allocation By Samudra Dasgupta; Arnab Banerjee
  16. Risk Aversion and the Predictability of Crude Oil Market Volatility: A Forecasting Experiment with Random Forests By Riza Demirer; Konstantinos Gkillas; Rangan Gupta; Christian Pierdzioch

  1. By: Yangang Chen; Justin W. L. Wan
    Abstract: We propose a deep neural network framework for computing prices and deltas of American options in high dimensions. The architecture of the framework is a sequence of neural networks, where each network learns the difference between the price functions at adjacent timesteps. We introduce the least-squares residual of the associated backward stochastic differential equation as the loss function. Our proposed framework yields prices and deltas over the entire space-time domain, not only at a given point. The computational cost of the proposed approach is quadratic in the dimension, which addresses the curse of dimensionality from which state-of-the-art approaches suffer. Our numerical simulations demonstrate these contributions and show that the proposed neural network framework outperforms state-of-the-art approaches in high dimensions.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.11532&r=all
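    A minimal, hypothetical sketch of the core idea above, training one timestep against a discrete BSDE least-squares residual (PyTorch; the toy dynamics, payoff, and network sizes are assumptions, not the authors' implementation):
      # Minimal sketch: one timestep of a BSDE-residual training loop for an
      # American-style option under assumed Black-Scholes-type dynamics.
      import torch
      import torch.nn as nn

      d, n_paths, dt, r, sigma, K = 10, 512, 0.01, 0.03, 0.2, 1.0
      net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1 + d))  # outputs price Y and delta Z
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)

      def payoff(x):                       # illustrative max-call payoff
          return torch.clamp(x.max(dim=1, keepdim=True).values - K, min=0.0)

      x_t = torch.rand(n_paths, d) + 0.5   # simulated asset prices at time t
      dw = torch.randn(n_paths, d) * dt ** 0.5
      x_next = x_t * (1 + r * dt + sigma * dw)    # Euler step of the asset paths
      y_next = payoff(x_next)                      # stand-in for the network at t+dt

      for _ in range(200):
          out = net(x_t)
          y_t, z_t = out[:, :1], out[:, 1:]
          # discrete BSDE: Y_{t+dt} ~ Y_t + r*Y_t*dt + Z_t . dW; penalize the residual
          residual = y_next - y_t - r * y_t * dt - (z_t * sigma * x_t * dw).sum(dim=1, keepdim=True)
          loss = (residual ** 2).mean()
          opt.zero_grad(); loss.backward(); opt.step()

      price_t = torch.maximum(net(x_t)[:, :1], payoff(x_t))   # early-exercise check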
  2. By: Sarah Perrin; Thierry Roncalli
    Abstract: Portfolio optimization emerged with the seminal paper of Markowitz (1952). The original mean-variance framework is appealing because it is very efficient from a computational point of view. However, it also has one well-established failing: it can lead to portfolios that are not optimal from a financial point of view. Nevertheless, very few models have succeeded in providing a real alternative to the Markowitz model. The main reason lies in the fact that most academic portfolio optimization models are intractable in real life although they present solid theoretical properties. By intractable we mean that they can be implemented for an investment universe with a small number of assets, using considerable computational resources and skill, but they are unable to manage a universe with dozens or hundreds of assets. However, the emergence and rapid development of robo-advisors means that we need to rethink portfolio optimization and go beyond the traditional mean-variance optimization approach. Another industry has faced similar issues concerning large-scale optimization problems. Machine learning has long been associated with linear and logistic regression models. Again, the reason was the inability of optimization algorithms to solve high-dimensional industrial problems. Nevertheless, the end of the 1990s marked an important turning point with the development and rediscovery of several methods that have since produced impressive results. The goal of this paper is to show how portfolio allocation can benefit from the development of these large-scale optimization algorithms. Not all of these algorithms are useful in our case, but four of them are essential when solving complex portfolio optimization problems: coordinate descent, the alternating direction method of multipliers (ADMM), the proximal gradient method, and Dykstra's algorithm.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.10233&r=all
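    A minimal sketch of one of the four algorithms named above, the proximal (projected) gradient method, applied to a simplified long-only minimum-variance problem (illustrative only, not the paper's code):
      # Minimal sketch: proximal (projected) gradient descent for a long-only,
      # fully invested minimum-variance portfolio.
      import numpy as np

      def project_simplex(v):
          """Euclidean projection onto {w : w >= 0, sum w = 1}."""
          u = np.sort(v)[::-1]
          css = np.cumsum(u)
          rho = np.nonzero(u - (css - 1) / (np.arange(len(v)) + 1) > 0)[0][-1]
          theta = (css[rho] - 1) / (rho + 1)
          return np.maximum(v - theta, 0.0)

      def min_variance(cov, n_iter=500):
          n = cov.shape[0]
          w = np.full(n, 1.0 / n)
          step = 1.0 / np.linalg.norm(cov, 2)        # 1/L with L = largest eigenvalue
          for _ in range(n_iter):
              grad = cov @ w                          # gradient of 0.5 * w' S w
              w = project_simplex(w - step * grad)    # proximal step = simplex projection
          return w

      rng = np.random.default_rng(0)
      A = rng.normal(size=(60, 8))
      cov = A.T @ A / 60                              # toy covariance matrix
      print(min_variance(cov).round(3))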
  3. By: Marcelo Cajias
    Abstract: In the era of the internet and digitalization, real estate prices of dwellings are predominantly collected live by multiple listing services and merged with supporting data such as spatio-temporal geo-information. Despite the computational requirements for analyzing such large datasets, the methods for analyzing big data have evolved substantially and go far beyond traditional regression. In this context, the use of machine learning technologies for analyzing prices in the real estate industry is not commonplace. This paper applies machine learning algorithms to a data set of more than 3 million observations from the German residential market to explore the predictive accuracy of methods such as random forest regression, XGBoost, and stacked regression, among others. The results show a significant reduction in forecasting variance and confirm that artificial intelligence offers a much deeper understanding of real estate prices.
    Keywords: Big Data in real estate; German housing; Machine learning Algorithms; Random forest; XGBoost
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_232&r=all
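    A minimal sketch of a stacked regression combining a random forest and a boosted-tree learner on synthetic data (the features, data, and use of scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost are assumptions, not the paper's pipeline):
      # Minimal sketch: random-forest and boosted-tree base learners combined in a
      # stacked regression for price prediction (synthetic data, stand-in features).
      import numpy as np
      from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                                    StackingRegressor)
      from sklearn.linear_model import RidgeCV
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.normal(size=(5000, 6))    # e.g. size, rooms, age, lat, lon, year
      y = 1000 * X[:, 0] + 300 * X[:, 1] ** 2 + 50 * X[:, 3] * X[:, 4] + rng.normal(scale=100, size=5000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = StackingRegressor(
          estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                      ("gbt", GradientBoostingRegressor(random_state=0))],  # XGBoost could be swapped in here
          final_estimator=RidgeCV(),
      )
      model.fit(X_tr, y_tr)
      print("out-of-sample R^2:", round(model.score(X_te, y_te), 3))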
  4. By: Peter Carr; Liuren Wu; Zhibai Zhang
    Abstract: In this paper we formulate a regression problem to predict realized volatility using option price data and to enhance the predictability and liquidity of VIX-style volatility indices. We test algorithms including regularized regression and machine learning methods such as feedforward neural networks (FNN) on the S&P 500 Index and its option data. By conducting a time-series validation we find that both Ridge regression and FNN can improve volatility indexing, with higher prediction performance and fewer options required. The best approach found is to predict the difference between the realized volatility and the VIX-style index's prediction rather than to predict the realized volatility directly, representing a successful combination of human learning and machine learning. We also discuss the suitability of different regression algorithms for volatility indexing and applications of our findings.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.10035&r=all
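    A minimal sketch of the residual-target idea described above: regress the gap between realized volatility and a VIX-style index on option-based features with Ridge regression under time-series cross-validation (all inputs are synthetic stand-ins):
      # Minimal sketch: predict the gap between realized volatility and a VIX-style
      # index with Ridge regression (synthetic placeholders for all inputs).
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import TimeSeriesSplit

      rng = np.random.default_rng(2)
      n = 1000
      vix_style = 0.2 + 0.05 * rng.normal(size=n)       # stand-in index level
      option_feats = rng.normal(size=(n, 10))           # stand-in option-implied features
      realized_vol = vix_style - 0.03 + 0.01 * option_feats[:, 0] + 0.02 * rng.normal(size=n)

      target = realized_vol - vix_style                 # learn the correction, not the level
      scores = []
      for tr, te in TimeSeriesSplit(n_splits=5).split(option_feats):
          model = Ridge(alpha=1.0).fit(option_feats[tr], target[tr])
          pred_rv = vix_style[te] + model.predict(option_feats[te])
          scores.append(np.mean((pred_rv - realized_vol[te]) ** 2))
      print("mean time-series CV MSE:", round(float(np.mean(scores)), 6))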
  5. By: Norbert Pfeifer
    Abstract: The valuation of house prices remains a critical task for scientific research as well as for practitioners. The following paper investigates this challenge by integrating textual information contained in real estate descriptions. More specifically, we show different approaches for integrating verbal descriptions from real estate advertisements into an automated valuation model. Using Airbnb listing data, we benchmark the proposed methods against a traditional hedonic approach and show that a neural network-based prediction model featuring only information from verbal descriptions is able to outperform a traditional hedonic model estimated with physical attributes, such as the number of bathrooms and bedrooms. We also draw attention to techniques that allow for interrelations between physical, locational, and qualitative, text-based attributes. The results strongly support the integration of textual information, specifically in a two-stage model architecture in which the first model (a recurrent long short-term memory network) outputs a probability distribution over price classes, which is then used along with quantitative measurements in a stacked feed-forward neural network.
    Keywords: AVM; housing; Neural Network; NLP
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_329&r=all
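    A minimal sketch of a two-stage text-plus-tabular architecture of the kind described above, with an LSTM producing price-class probabilities that are then concatenated with quantitative features (PyTorch; sizes and layers are hypothetical, not the paper's model):
      # Minimal sketch of a two-stage text + tabular model: an LSTM classifies the
      # listing text into price bins, and its class probabilities are fed, together
      # with quantitative features, into a feed-forward regressor.
      import torch
      import torch.nn as nn

      vocab, emb, hidden, n_classes, n_quant = 5000, 32, 64, 10, 8

      class TextStage(nn.Module):
          def __init__(self):
              super().__init__()
              self.emb = nn.Embedding(vocab, emb)
              self.lstm = nn.LSTM(emb, hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_classes)
          def forward(self, tokens):
              _, (h, _) = self.lstm(self.emb(tokens))
              return torch.softmax(self.head(h[-1]), dim=-1)   # price-class probabilities

      text_stage = TextStage()
      price_head = nn.Sequential(nn.Linear(n_classes + n_quant, 32), nn.ReLU(), nn.Linear(32, 1))

      tokens = torch.randint(0, vocab, (16, 50))    # 16 listings, 50 tokens each
      quant = torch.randn(16, n_quant)              # bedrooms, bathrooms, location, ...
      class_probs = text_stage(tokens)
      predicted_rate = price_head(torch.cat([class_probs, quant], dim=1))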
  6. By: David Byrd
    Abstract: This paper is intended to explain, in simple terms, some of the mechanisms and agents common to multiagent financial market simulations. We first discuss the necessity to include an exogenous price time series ("the fundamental value") for each asset and three methods for generating that series. We then illustrate one process by which a Bayesian agent may receive limited observations of the fundamental series and estimate its current and future values. Finally, we present two such agents widely examined in the literature, the Zero Intelligence agent and the Heuristic Belief Learning agent, which implement different approaches to order placement.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.11650&r=all
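    A minimal sketch of two of the ingredients discussed above: a mean-reverting fundamental value series and a Zero Intelligence agent placing orders around a noisy observation of it (parameters and order logic are illustrative assumptions, not the paper's exact setup):
      # Minimal sketch: a mean-reverting fundamental value series and a Zero
      # Intelligence agent placing limit orders around a noisy observation of it.
      import numpy as np

      rng = np.random.default_rng(3)

      def fundamental_series(n, mu=100.0, kappa=0.05, sigma=0.5):
          """Ornstein-Uhlenbeck-style fundamental value in discrete time."""
          f = np.empty(n)
          f[0] = mu
          for t in range(1, n):
              f[t] = f[t - 1] + kappa * (mu - f[t - 1]) + sigma * rng.normal()
          return f

      def zero_intelligence_order(fundamental_t, obs_noise=1.0, spread=0.5):
          """Buy below / sell above a noisy estimate of the fundamental."""
          estimate = fundamental_t + obs_noise * rng.normal()
          side = rng.choice(["BUY", "SELL"])
          price = estimate - spread if side == "BUY" else estimate + spread
          return side, round(price, 2)

      f = fundamental_series(1000)
      print([zero_intelligence_order(f[t]) for t in (10, 500, 999)])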
  7. By: Daiki Matsunaga; Toyotaro Suzumura; Toshihiro Takahashi
    Abstract: Recently, there has been a surge of interest in the use of machine learning to aid accurate predictions of financial markets. Despite the exciting advances in this cross-section of finance and AI, many of the current approaches are limited to using technical analysis to capture historical trends of each stock price and are thus limited to certain experimental setups to obtain good prediction results. On the other hand, professional investors additionally use their rich knowledge of inter-market and inter-company relations to map the connectivity of companies and events, and use this map to make better market predictions. For instance, they would predict the movement of a certain company's stock price based not only on its former stock price trends but also on the performance of its suppliers or customers, the overall industry, macroeconomic factors and trade policies. This paper investigates the effectiveness of work at the intersection of market predictions and graph neural networks, which hold the potential to mimic the way investors make decisions by incorporating company knowledge graphs directly into the predictive model. The main goal of this work is to test the validity of this approach across different markets and longer time horizons for backtesting using rolling window analysis. In this work, we concentrate on the prediction of individual stock prices in the Japanese Nikkei 225 market over a period of roughly 20 years. For the knowledge graph, we use the Nikkei Value Search data, a rich dataset showing mainly supplier relations among Japanese and foreign companies. Our preliminary results show a 29.5% increase and a 2.2-fold increase in the return ratio and Sharpe ratio, respectively, when compared to the market benchmark, as well as a 6.32% increase and a 1.3-fold increase, respectively, compared to the baseline LSTM model.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.10660&r=all
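    A minimal sketch of a single graph-convolution step inside a rolling-window loop (toy graph and untrained weights, not the paper's Nikkei knowledge graph or model):
      # Minimal sketch: one graph-convolution step (normalized adjacency times node
      # features times weights) inside a rolling-window loop over trading days.
      import numpy as np

      rng = np.random.default_rng(4)
      n_stocks, n_feat, n_days, window = 30, 5, 400, 60

      A = (rng.random((n_stocks, n_stocks)) < 0.1).astype(float)   # toy supplier graph
      A = np.maximum(A, A.T)                                        # make it undirected
      np.fill_diagonal(A, 1.0)                                      # add self-loops
      d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
      A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]         # D^-1/2 A D^-1/2

      X = rng.normal(size=(n_days, n_stocks, n_feat))               # per-day node features
      W = rng.normal(size=(n_feat, 1)) * 0.1                        # untrained layer weights

      def gcn_score(x_day):
          return np.tanh(A_hat @ x_day @ W).ravel()                 # one propagation step

      for start in range(0, n_days - window, window):               # rolling windows
          scores = gcn_score(X[start + window - 1])                 # score stocks at window end
          top = np.argsort(scores)[-5:]                             # pick 5 highest-scoring stocks
          print(f"window starting day {start}: hold stocks {sorted(top.tolist())}")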
  8. By: Jang, Jiwook; Dassios, Angelos; Zhao, Hongbiao
    Abstract: In this paper, we study the family of renewal shot-noise processes. The Feynman–Kac formula is obtained based on piecewise deterministic Markov process theory and the martingale methodology. We then derive the Laplace transforms of the conditional moments and asymptotic moments of the processes. In general, by inverting the Laplace transforms, the asymptotic moments and the first conditional moments can be derived explicitly; however, other conditional moments may need to be estimated numerically. As an example, we develop a very efficient and general Monte Carlo exact simulation algorithm for estimating the second conditional moments. The results can then easily be transformed into the counterparts of discounted aggregate claims for insurance applications, and we apply the first two conditional moments to actuarial net premium calculation. Similarly, they can also be applied to credit risk and reliability modelling. Numerical examples with four distributional choices for the interarrival times are provided to illustrate how the models can be implemented.
    Keywords: renewal shot-noise processes; discounted aggregate claims; actuarial net premium; piecewise-deterministic Markov processes; martingale method; Monte Carlo exact simulation; credit risk; reliability
    JEL: G32 F3 G3
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:87428&r=all
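    A minimal sketch of a crude Monte Carlo estimate of the first two moments of a shot-noise process with exponentially decaying jumps and renewal (gamma) inter-arrival times; this is plain simulation, not the paper's exact-simulation algorithm, and all parameters are illustrative:
      # Minimal sketch: Monte Carlo simulation of a renewal shot-noise process,
      # estimating its mean and second moment at a horizon T.
      import numpy as np

      rng = np.random.default_rng(5)

      def shot_noise_at_T(T=10.0, decay=0.3, jump_mean=1.0, shape=2.0, scale=0.5):
          value, t = 0.0, 0.0
          while True:
              t += rng.gamma(shape, scale)              # renewal inter-arrival time
              if t > T:
                  break
              jump = rng.exponential(jump_mean)         # claim / shot size
              value += jump * np.exp(-decay * (T - t))  # decayed contribution at T
          return value

      samples = np.array([shot_noise_at_T() for _ in range(20000)])
      print("E[X_T]   =~", round(samples.mean(), 4))
      print("E[X_T^2] =~", round((samples ** 2).mean(), 4))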
  9. By: Qi Deng
    Abstract: The AIBC is a large-scale decentralized ecosystem based on Artificial Intelligence and blockchain technology that allows system-wide, low-cost sharing of computing and storage resources. The AIBC consists of four layers: a fundamental layer, a resource layer, an application layer, and an ecosystem layer. The AIBC implements a two-consensus scheme to enforce upper-layer economic policies and achieve fundamental-layer performance and robustness: the DPoEV incentive consensus on the application and resource layers, and the DABFT distributed consensus on the fundamental layer. The DABFT uses deep learning techniques to predict and select the most suitable BFT algorithm in order to achieve the best balance of performance, robustness, and security. The DPoEV uses the knowledge map algorithm to accurately assess the economic value of digital assets.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.12063&r=all
  10. By: Kei Nakagawa; Masaya Abe; Junpei Komiyama
    Abstract: Stock return predictability is an important research theme as it reflects our economic and social organization, and significant efforts are made to explain the dynamism therein. Statistics with strong explanatory power, called "factors", have been proposed to summarize the essence of predictive stock returns. Although machine learning methods are increasingly popular in stock return prediction, inference of stock returns remains highly elusive, and most investors still rely, at least in part, on their intuition to make decisions. The challenge here is to build an investment strategy that is consistent over a reasonably long period, with minimal human decision making over the entire process. To this end, we propose a new stock return prediction framework that we call the Ranked Information Coefficient Neural Network (RIC-NN). RIC-NN is a deep learning approach and includes the following three novel ideas: (1) a nonlinear multi-factor approach, (2) a stopping criterion based on the ranked information coefficient (rank IC), and (3) deep transfer learning among multiple regions. An experimental comparison with the stocks in the Morgan Stanley Capital International (MSCI) indices shows that RIC-NN outperforms not only off-the-shelf machine learning methods but also the average return of major equity investment funds over the last fourteen years.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.01491&r=all
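    A minimal sketch of the rank information coefficient (Spearman correlation between predicted and realized cross-sectional returns) used as an early-stopping signal; the training loop is a synthetic stand-in, not the RIC-NN code:
      # Minimal sketch: rank IC computed on a validation period and used to decide
      # when to stop training (patience-style early stopping).
      import numpy as np
      from scipy.stats import spearmanr

      def rank_ic(predicted, realized):
          return spearmanr(predicted, realized).correlation

      rng = np.random.default_rng(6)
      best_ic, patience, bad_epochs = -np.inf, 5, 0
      for epoch in range(100):
          # stand-ins for a model's validation-period predictions improving over epochs
          realized = rng.normal(size=500)
          predicted = realized * min(1.0, epoch / 50) + rng.normal(size=500)
          ic = rank_ic(predicted, realized)
          if ic > best_ic:
              best_ic, bad_epochs = ic, 0          # keep this checkpoint
          else:
              bad_epochs += 1
              if bad_epochs >= patience:           # stop when rank IC stops improving
                  print(f"stop at epoch {epoch}, best rank IC {best_ic:.3f}")
                  break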
  11. By: Naoki Awaya (Graduate School of Economics, The University of Tokyo); Yasuhiro Omori (Faculty of Economics, The University of Tokyo)
    Abstract: An efficient simulation-based methodology is proposed for the rolling window estimation of state space models, called particle rolling Markov chain Monte Carlo (MCMC) with double block sampling. In our method, which is based on Sequential Monte Carlo (SMC), particles are sequentially updated to approximate the posterior distribution for each window by learning new information and discarding old information from observations. The particles are refreshed with an MCMC algorithm when the importance weights degenerate. To avoid degeneracy, which is crucial for reducing the computation time, we introduce a block sampling scheme and generate multiple candidates by the algorithm based on the conditional SMC. The theoretical discussion shows that the proposed methodology with a nested structure is expressed as SMC sampling for the augmented space, which provides its justification. The computational performance is evaluated in illustrative examples, showing that the posterior distributions of the model parameters are accurately estimated. The proofs and additional discussions (algorithms and experimental results) are provided in the Supplementary Material.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2019cf1126&r=all
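    A minimal sketch of a generic SMC building block related to the degeneracy check mentioned above: the effective sample size of the importance weights and systematic resampling (not the paper's double block sampler):
      # Minimal sketch: monitor the effective sample size of importance weights and
      # apply systematic resampling when the particles degenerate.
      import numpy as np

      rng = np.random.default_rng(7)

      def effective_sample_size(weights):
          w = weights / weights.sum()
          return 1.0 / np.sum(w ** 2)

      def systematic_resample(particles, weights):
          n = len(particles)
          positions = (rng.random() + np.arange(n)) / n
          cumulative = np.cumsum(weights / weights.sum())
          return particles[np.searchsorted(cumulative, positions)]

      particles = rng.normal(size=1000)
      log_weights = -0.5 * (particles - 2.0) ** 2          # stand-in importance log-weights
      weights = np.exp(log_weights - log_weights.max())

      if effective_sample_size(weights) < 0.5 * len(particles):
          particles = systematic_resample(particles, weights)
          weights = np.ones_like(weights)                  # reset weights after resampling
      print("ESS after step:", round(effective_sample_size(weights), 1))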
  12. By: Giovanni Mariani; Yada Zhu; Jianbo Li; Florian Scheidegger; Roxana Istrate; Costas Bekas; A. Cristiano I. Malossi
    Abstract: For decades, the data science community has tried to propose prediction models for financial time series. Yet, driven by the rapid development of information technology and machine intelligence, the velocity of today's information leads to high market efficiency. Sound financial theories demonstrate that in an efficient marketplace all information available today, including expectations about future events, is represented in today's prices, whereas future price trends are driven by uncertainty. This jeopardizes the effort put into designing prediction models. To deal with the unpredictability of financial systems, today's portfolio management is largely based on the Markowitz framework, which puts more emphasis on the analysis of market uncertainty and less on price prediction. The limitation of the Markowitz framework lies in its very strong, idealized assumptions about the probability distribution of future returns. To address this situation we propose PAGAN, a pioneering methodology based on deep generative models. The goal is to model the market uncertainty that ultimately is the main factor driving future trends. The generative model learns the joint probability distribution of price trends for a set of financial assets so as to match the probability distribution of the real market. Once the model is trained, a portfolio is optimized by choosing the diversification that minimizes risk and maximizes the expected return observed over several simulations. Applying the model to analyze possible futures is as simple as executing a Monte Carlo simulation, a technique very familiar to finance experts. Experiments on different portfolios representing different geopolitical areas and industrial segments, constructed using real-world public data sets, demonstrate promising results.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.10578&r=all
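    A minimal sketch of scenario-based portfolio selection of the kind described above; random draws stand in for a trained generative model's scenarios, and candidate weights are scored on mean return and expected shortfall (illustrative assumptions throughout):
      # Minimal sketch: scenario-based portfolio selection. A trained generative
      # model would supply the return scenarios; here random draws stand in for it.
      import numpy as np

      rng = np.random.default_rng(8)
      n_scenarios, n_assets = 5000, 6
      scenarios = rng.multivariate_normal(
          mean=np.full(n_assets, 0.05),
          cov=0.02 * (np.eye(n_assets) + 0.3), size=n_scenarios)   # stand-in generator output

      def score(weights, risk_aversion=3.0):
          pnl = scenarios @ weights
          cvar = -np.mean(np.sort(pnl)[: n_scenarios // 20])        # 5% expected shortfall
          return pnl.mean() - risk_aversion * cvar

      candidates = rng.dirichlet(np.ones(n_assets), size=2000)      # random long-only portfolios
      best = candidates[np.argmax([score(w) for w in candidates])]
      print("selected weights:", best.round(3))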
  13. By: Marcelo Cajias; Jonas Willwersch; Felix Lorenz
    Abstract: Real estate transactions can be seen as a spatial point pattern over space and time: transactions occur in places where, at a certain point in time, conditions hold that lead to an investment decision. While the decision-making process of investors is impossible to capture directly, this paper applies new methods for capturing the conditions under which real estate transactions are made over space and time. In other words, we explain and forecast real estate transactions with machine learning methods using real estate transaction records, geographical information, and, most importantly, microeconomic data.
    Keywords: Machine Learning; Point pattern analysis; Real estate transactions; Spatial-temporal analysis; Surveillance analysis
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_171&r=all
  14. By: Jakob J. Kolb; Finn Müller-Hansen; Jürgen Kurths; Jobst Heitzig
    Abstract: In this paper, we propose a statistical aggregation method for agent-based models with heterogeneous agents that interact both locally on a complex adaptive network and globally on a market. The method combines three approaches from statistical physics: (a) moment closure, (b) pair approximation of adaptive network processes, and (c) the thermodynamic limit of the resulting stochastic process. As an example of use, we develop a stochastic agent-based model with heterogeneous households that invest in either a fossil-fuel or renewables-based sector while allocating labor on a competitive market. Using the adaptive voter model, the model describes agents as social learners that interact on a dynamic network. We apply the approximation methods to derive a set of ordinary differential equations that approximate the macro-dynamics of the model. A comparison of the reduced analytical model with numerical simulations shows that the approximation fits well for a wide range of parameters. The proposed method makes it possible to use analytical tools to better understand the dynamical properties of models with heterogeneous agents on adaptive networks. We showcase this with a bifurcation analysis that identifies parameter ranges with multiple stable states. The method can thus help to explain emergent phenomena arising from network interactions and make them mathematically tractable.
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.13758&r=all
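    A minimal sketch of the generic adaptive voter update (imitate or rewire) that the model above builds on; this is the textbook dynamic with assumed parameters, not the paper's two-sector investment model or its macroscopic approximation:
      # Minimal sketch: adaptive voter dynamics on a random graph. At each step a
      # discordant link is either rewired to a like-minded node or the opinion is imitated.
      import numpy as np

      rng = np.random.default_rng(9)
      n, p_edge, phi = 100, 0.05, 0.4                      # nodes, link density, rewiring prob.
      opinion = rng.integers(0, 2, size=n)                 # 0 = fossil, 1 = renewable (say)
      adj = np.triu(rng.random((n, n)) < p_edge, k=1)
      adj = adj | adj.T                                    # undirected adjacency matrix

      def update():
          i = rng.integers(n)
          neighbors = np.flatnonzero(adj[i])
          if neighbors.size == 0:
              return
          j = rng.choice(neighbors)
          if opinion[i] == opinion[j]:
              return                                       # concordant link: nothing happens
          if rng.random() < phi:                           # rewire to a like-minded node
              candidates = np.flatnonzero((opinion == opinion[i]) & ~adj[i])
              candidates = candidates[candidates != i]
              if candidates.size:
                  adj[i, j] = adj[j, i] = False
                  k = rng.choice(candidates)
                  adj[i, k] = adj[k, i] = True
          else:                                            # imitate the neighbor's opinion
              opinion[i] = opinion[j]

      for _ in range(20000):
          update()
      print("share holding opinion 1:", opinion.mean())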
  15. By: Samudra Dasgupta; Arnab Banerjee
    Abstract: The 2008 mortgage crisis is an example of an extreme event. Extreme value theory tries to estimate such tail risks. Modern finance practitioners prefer Expected Shortfall based risk metrics (which capture tail risk) over traditional approaches like volatility or even Value-at-Risk. This paper provides a quantum annealing algorithm in QUBO form for a dynamic asset allocation problem with an expected shortfall constraint. It is motivated by the need to refine current quantum algorithms for Markowitz-type problems, which are academically interesting but not useful for practitioners. The algorithm is dynamic, and the risk target emerges naturally from the market volatility. Moreover, it avoids complicated statistics such as the generalized Pareto distribution. It translates the problem into qubit form suitable for implementation on a quantum annealer such as D-Wave. Such QUBO algorithms are expected to be solved faster by quantum annealing systems than by any classical algorithm on classical computers (though this has yet to be demonstrated at scale).
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1909.12904&r=all
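    A minimal sketch of casting a toy asset-selection problem as a QUBO and minimizing x'Qx by brute-force enumeration; a quantum annealer would minimize the same matrix, and the penalty encoding and numbers are illustrative assumptions, not the paper's formulation:
      # Minimal sketch: a toy asset-selection problem written as a QUBO,
      # minimize x' Q x over binary x, solved here by brute force.
      import itertools
      import numpy as np

      mu = np.array([0.08, 0.06, 0.10, 0.04])             # expected returns
      risk = np.array([[0.10, 0.02, 0.03, 0.01],
                       [0.02, 0.08, 0.02, 0.01],
                       [0.03, 0.02, 0.15, 0.02],
                       [0.01, 0.01, 0.02, 0.05]])         # stand-in risk (tail-risk proxy)
      k, penalty = 2, 1.0                                  # pick exactly k assets

      n = len(mu)
      # QUBO: risk minus returns on the diagonal, plus a penalty enforcing sum(x) == k
      Q = risk - np.diag(mu)
      Q += penalty * (np.ones((n, n)) - 2 * k * np.eye(n))   # from (sum x - k)^2, constant dropped

      best = min((np.array(x) for x in itertools.product([0, 1], repeat=n)),
                 key=lambda x: x @ Q @ x)
      print("selected assets:", np.flatnonzero(best))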
  16. By: Riza Demirer (Department of Economics and Finance, Southern Illinois University Edwardsville, Edwardsville, IL 62026-1102, USA); Konstantinos Gkillas (Department of Business Administration, University of Patras – University Campus, Rio, P.O. Box 1391, 26500 Patras, Greece); Rangan Gupta (Department of Economics, University of Pretoria, Pretoria, 0002, South Africa); Christian Pierdzioch (Department of Economics, Helmut Schmidt University, Holstenhofweg 85, P.O.B. 700822, 22008 Hamburg, Germany)
    Abstract: We analyze the predictive power of time-varying risk aversion for the realized volatility of crude oil returns based on high-frequency data. While the popular linear heterogeneous autoregressive realized volatility (HAR-RV) model fails to recognize the predictive power of risk aversion over crude oil volatility, we find that risk aversion indeed improves forecast accuracy at all forecast horizons when we compute forecasts by means of random forests. The predictive power of risk aversion is robust to various covariates, including realized skewness and realized kurtosis, various measures of jump intensity, and leverage. The findings highlight the importance of accounting for nonlinearity in the data-generating process for forecast accuracy, as well as the predictive power of non-cashflow factors over commodity-market uncertainty, with significant implications for pricing and forecasting in these markets.
    Keywords: Oil price, Realized volatility, Risk aversion, Random forests
    JEL: G17 Q02 Q47
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:201972&r=all
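    A minimal sketch contrasting a HAR-RV-style linear forecast with a random forest on the same features (daily, weekly, and monthly average realized volatility plus a risk-aversion covariate); the data are synthetic stand-ins, not the paper's:
      # Minimal sketch: HAR-RV-style features fed to both a linear model and a
      # random forest for one-step-ahead realized-volatility forecasts.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(10)
      T = 1500
      rv = np.abs(rng.normal(1.0, 0.3, T))                 # stand-in realized volatility series
      risk_aversion = rng.normal(size=T)                   # stand-in risk-aversion index

      rows, targets = [], []
      for t in range(22, T - 1):
          rows.append([rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean(), risk_aversion[t]])
          targets.append(rv[t + 1])
      X, y = np.array(rows), np.array(targets)

      split = int(0.8 * len(X))
      for name, model in [("HAR-style linear", LinearRegression()),
                          ("random forest", RandomForestRegressor(n_estimators=300, random_state=0))]:
          model.fit(X[:split], y[:split])
          mse = np.mean((model.predict(X[split:]) - y[split:]) ** 2)
          print(f"{name}: out-of-sample MSE {mse:.4f}")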

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.