nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒11‒25
twenty-one papers chosen by
Stan Miles
Thompson Rivers University

  1. Index Tracking with Cardinality Constraints: A Stochastic Neural Networks Approach By Yu Zheng; Bowei Chen; Timothy M. Hospedales; Yongxin Yang
  2. A path-based many-to-many assignment game to model Mobility-as-a-Service market networks By Theodoros Pantelidis; Saeid Rasulkhani; Joseph Y. J. Chow
  3. Reinforcement Learning for Market Making in a Multi-agent Dealer Market By Sumitra Ganesh; Nelson Vadori; Mengda Xu; Hua Zheng; Prashant Reddy; Manuela Veloso
  4. Neural networks for option pricing and hedging: a literature review By Johannes Ruf; Weiguan Wang
  5. Is Positive Sentiment in Corporate Annual Reports Informative? Evidence from Deep Learning By Mehran Azimi; Anup Agrawal
  6. Incremental Risk Charge Methodology By Xiao, Tim
  7. Bounds on Multi-asset Derivatives via Neural Networks By Luca De Gennaro Aquino; Carole Bernard
  8. Economics of Nuclear Power Plant Investment: Monte Carlo Simulations of Generation III/III+ Investment Projects By Ben Wealer; Simon Bauer; Leonard Göke; Christian von Hirschhausen; Claudia Kemfert
  9. Linear Fractional Stable Motion with the RLFSM R Package By Mazur, Stepan; Otryakhin, Dmitry
  10. Machine Learning and Causality: The Impact of Financial Crises on Growth By Andrew J Tiffin
  11. An EM algorithm to model the occurrence of events subject to a reporting delay By R Verbelen; K Antonio; Gerda Claeskens; J Crevecoeur
  12. A Coherent Framework for Predicting Emerging Market Credit Spreads with Support Vector Regression By Gary S. Anderson; Alena Audzeyeva
  13. Predicting Indian stock market using the psycho-linguistic features of financial news By B. Shravan Kumar; Vadlamani Ravi; Rishabh Miglani
  14. Bottom-up Leading Macroeconomic Indicators: An Application to Non-Financial Corporate Defaults using Machine Learning By Tyler Pike; Horacio Sapriza; Thomas Zimmermann
  15. Introductory Remarks : a speech at "Nontraditional Data, Machine Learning, and Natural Language Processing in Macroeconomics," a research conference sponsored by the Federal Reserve Board, Washington, D.C., October 1, 2019. By Clarida, Richard H.
  16. Machine Learning et nouvelles sources de données pour le scoring de crédit By Christophe HURLIN; Christophe PERIGNON
  17. Monte Carlo Sampling Processes and Incentive Compatible Allocations in Large Economies By Hammond, Peter J; Qiao, Lei; Sun, Yeneng
  18. Optical Proof of Work By Michael Dubrovsky; Marshall Ball; Bogdan Penkovsky
  19. The Behavioral Economics of Artificial Intelligence: Lessons from Experiments with Computer Players By Christoph March
  20. Quantization-based Bermudan option pricing in the $FX$ world By Jean-Michel Fayolle; Vincent Lemaire; Thibaut Montes; Gilles Pagès
  21. Making Good on LSTMs' Unfulfilled Promise By Daniel Philps; Artur d'Avila Garcez; Tillman Weyde

  1. By: Yu Zheng; Bowei Chen; Timothy M. Hospedales; Yongxin Yang
    Abstract: Partial (replication) index tracking is a popular passive investment strategy. It aims to replicate the performance of a given index by constructing a tracking portfolio which contains some constituents of the index. The tracking error optimisation is quadratic and NP-hard when taking the $\ell_0$ constraint into account, so it is usually solved by heuristic methods such as evolutionary algorithms. This paper introduces a simple, efficient and scalable connectionist model as an alternative. We propose a novel reparametrisation method and then solve the optimisation problem with stochastic neural networks. The proposed approach is examined with more than 10 years of S\&P 500 index data and compared with widely used index tracking approaches such as forward and backward selection and the largest market capitalisation methods. The empirical results show our model achieves excellent performance. Compared with the benchmarked models, our model has the lowest tracking error across a range of portfolio sizes. Meanwhile, it offers comparable performance to the others on secondary criteria such as volatility, Sharpe ratio and maximum drawdown.
    Date: 2019–11
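The cardinality-constrained tracking objective the abstract describes can be made concrete with a toy sketch. Everything below is illustrative: synthetic returns, an equal-weight "index", and a naive keep-the-k-largest-weights projection standing in for the paper's stochastic-neural-network reparametrisation, which is not reproduced here.

```python
import numpy as np

def tracking_error(w, R, r_index):
    """Root-mean-square difference between portfolio and index returns."""
    return np.sqrt(np.mean((R @ w - r_index) ** 2))

def project_cardinality(w, k):
    """Keep the k largest weights, zero the rest, renormalise to sum to 1.
    A crude heuristic for the $\\ell_0$ constraint, not the paper's method."""
    keep = np.argsort(w)[-k:]
    w_proj = np.zeros_like(w)
    w_proj[keep] = w[keep]
    return w_proj / w_proj.sum()

rng = np.random.default_rng(0)
n_days, n_assets, k = 250, 50, 10
R = rng.normal(0.0004, 0.01, size=(n_days, n_assets))  # synthetic daily returns
r_idx = R.mean(axis=1)                                 # equal-weight "index"

w_full = np.full(n_assets, 1.0 / n_assets)             # full replication
w_card = project_cardinality(w_full + rng.normal(0, 1e-4, n_assets), k)

print(tracking_error(w_full, R, r_idx))  # ~0: holds every constituent
print(tracking_error(w_card, R, r_idx))  # larger: only k names held
```

The gap between the two printed errors is exactly what the paper's model tries to minimise subject to the portfolio-size cap.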
  2. By: Theodoros Pantelidis; Saeid Rasulkhani; Joseph Y. J. Chow
    Abstract: As Mobility as a Service (MaaS) systems become increasingly popular, travel is changing from unimodal trips to personalized services offered by a market of mobility operators. Traditional traffic assignment models ignore the interaction of different operators. However, a key characteristic of MaaS markets is that urban trip decisions depend on both user route decisions as well as operator service and pricing decisions. We adopt a new paradigm for traffic assignment in a MaaS network of multiple operators using the concept of stable matching to allocate costs and determine prices offered by operators corresponding to user route choices and operator service choices without resorting to nonconvex bilevel programming formulations. Unlike our prior work, the proposed model allows travelers to make multimodal, multi-operator trips, resulting in stable cost allocations between competing network operators to provide MaaS for users. Algorithms are proposed to generate stability conditions for the stable outcome pricing model. Extensive computational experiments on the classic Sioux Falls network demonstrate the use of the model and the effectiveness of the proposed algorithm in handling the pricing responses of MaaS operators to technological and capacity changes, government acquisition, consolidation, and firm entry.
    Date: 2019–11
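The stable-matching concept this paper builds on can be illustrated with the textbook one-to-one deferred-acceptance algorithm. This is only a minimal illustration of stability; the paper's model is a richer path-based many-to-many assignment game with cost allocation, and the names below are invented.

```python
def deferred_acceptance(traveler_prefs, operator_prefs):
    """Gale-Shapley deferred acceptance: travelers propose in preference
    order; each operator holds its best offer so far. The result is a
    stable matching: no traveler-operator pair prefers each other to
    their assigned partners."""
    free = list(traveler_prefs)              # travelers not yet matched
    next_pick = {t: 0 for t in traveler_prefs}
    match = {}                               # operator -> traveler
    while free:
        t = free.pop(0)
        o = traveler_prefs[t][next_pick[t]]  # best operator not yet tried
        next_pick[t] += 1
        if o not in match:
            match[o] = t
        else:
            held = match[o]
            rank = operator_prefs[o]
            if rank.index(t) < rank.index(held):   # operator trades up
                match[o] = t
                free.append(held)
            else:
                free.append(t)
    return {t: o for o, t in match.items()}

prefs_t = {"t1": ["o1", "o2"], "t2": ["o1", "o2"]}
prefs_o = {"o1": ["t2", "t1"], "o2": ["t1", "t2"]}
print(deferred_acceptance(prefs_t, prefs_o))  # {'t2': 'o1', 't1': 'o2'}
```

Both travelers want o1, but o1 prefers t2, so t1 ends up at o2; no blocking pair exists.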
  3. By: Sumitra Ganesh; Nelson Vadori; Mengda Xu; Hua Zheng; Prashant Reddy; Manuela Veloso
    Abstract: Market makers play an important role in providing liquidity to markets by continuously quoting prices at which they are willing to buy and sell, and managing inventory risk. In this paper, we build a multi-agent simulation of a dealer market and demonstrate that it can be used to understand the behavior of a reinforcement learning (RL) based market maker agent. We use the simulator to train an RL-based market maker agent with different competitive scenarios, reward formulations and market price trends (drifts). We show that the reinforcement learning agent is able to learn about its competitor's pricing policy; it also learns to manage inventory by smartly selecting asymmetric prices on the buy and sell sides (skewing), and maintaining a positive (or negative) inventory depending on whether the market price drift is positive (or negative). Finally, we propose and test reward formulations for creating risk averse RL-based market maker agents.
    Date: 2019–11
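The two behaviors the abstract highlights, skewing quotes against inventory and penalising inventory risk in the reward, are common market-making formulations and can be sketched directly. The exact reward the authors train with is not given here; the functional forms and parameters below are illustrative assumptions.

```python
def quotes(mid, spread, inventory, skew=0.001):
    """Skew both quotes against current inventory so it mean-reverts:
    a long position shifts bid and ask down, encouraging sells."""
    shift = -skew * inventory
    return mid - spread / 2 + shift, mid + spread / 2 + shift

def reward(pnl_step, inventory, risk_aversion=0.01):
    """Per-step PnL minus a quadratic inventory penalty, one standard
    way to make an RL market maker risk averse."""
    return pnl_step - risk_aversion * inventory ** 2

bid, ask = quotes(100.0, 0.10, inventory=50)
print(bid, ask)   # both quotes shifted down by 0.05 to shed inventory
```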
  4. By: Johannes Ruf; Weiguan Wang
    Abstract: Neural networks have been used as a nonparametric method for option pricing and hedging since the early 1990s. Well over a hundred papers have been published on this topic. This note intends to provide a comprehensive review. Papers are compared in terms of input features, output variables, benchmark models, performance measures, data partition methods, and underlying assets. Furthermore, related work and regularisation techniques are discussed.
    Date: 2019–11
  5. By: Mehran Azimi; Anup Agrawal
    Abstract: We use a novel text classification approach from deep learning to measure sentiment more accurately in a large sample of 10-Ks. In contrast to most prior literature, we find that both positive and negative sentiment predict abnormal returns and abnormal trading volume around the 10-K filing date, as well as future firm fundamentals and policies. Our results suggest that the qualitative information contained in corporate annual reports is richer than previously found. Both positive and negative sentiment are informative when measured accurately, but they do not have symmetric implications, suggesting that the net sentiment measure advocated by prior studies is less informative.
    JEL: C81 G10 G14 G30
    Date: 2019–08–21
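The paper's core point, that separate positive and negative measures carry more information than a single net measure, can be shown with a tiny aggregation sketch. The sentence labels would come from the deep-learning classifier; here they are hard-coded for illustration.

```python
def sentiment_measures(labels):
    """Document-level shares of positive and negative sentences.
    Keeping the two shares separate preserves information that a
    net (pos - neg) measure collapses when the two have asymmetric
    implications."""
    n = len(labels)
    pos = sum(l == "pos" for l in labels) / n
    neg = sum(l == "neg" for l in labels) / n
    return {"pos": pos, "neg": neg, "net": pos - neg}

print(sentiment_measures(["pos", "neg", "neu", "pos"]))
# {'pos': 0.5, 'neg': 0.25, 'net': 0.25}
```

Two documents with net 0.25 can have very different (pos, neg) pairs, e.g. (0.5, 0.25) versus (0.25, 0.0), which is exactly the information a net measure discards.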
  6. By: Xiao, Tim
    Abstract: The incremental risk charge (IRC) is a new regulatory requirement from the Basel Committee in response to the recent financial crisis. Notably, few models for IRC have been developed in the literature. This paper proposes a methodology consisting of two Monte Carlo simulations. The first simulates default, migration, and concentration in an integrated way; combined with full re-valuation, it generates the loss distribution at the first liquidity horizon for a subportfolio. The second performs random draws under the constant-level-of-risk assumption, convolving copies of the single-horizon loss distribution to produce the one-year loss distribution. The aggregation of subportfolios with different liquidity horizons is addressed. Moreover, a methodology for equity is also included, even though equity is optional in IRC.
    Date: 2018–08–16
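The second simulation, convolving copies of the single-horizon loss distribution under constant level of risk, reduces to re-sampling and summing. The sketch below assumes a 3-month liquidity horizon (so four copies per year) and a lognormal placeholder for the first-pass loss distribution; both are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder for the loss distribution at the (3-month) liquidity
# horizon that the first Monte Carlo pass would produce.
horizon_losses = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

def one_year_losses(horizon_losses, copies=4, n_draws=10_000, rng=rng):
    """Constant-level-of-risk convolution: sum independent re-draws
    from the single-horizon loss distribution to span one year."""
    draws = rng.choice(horizon_losses, size=(n_draws, copies), replace=True)
    return draws.sum(axis=1)

annual = one_year_losses(horizon_losses)
var_999 = float(np.quantile(annual, 0.999))   # IRC-style 99.9% quantile
```

Independence between the re-draws is exactly what the constant-level-of-risk assumption buys: the position is assumed to be rebalanced back to the same risk level at each horizon.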
  7. By: Luca De Gennaro Aquino; Carole Bernard
    Abstract: Using neural networks, we compute bounds on the prices of multi-asset derivatives given information on prices of related payoffs. As a main example, we focus on European basket options and include information on the prices of other similar options, such as spread options and/or basket options on subindices. We show that, in most cases, adding further constraints gives rise to bounds that are considerably tighter and discuss the maximizing/minimizing copulas achieving such bounds. Our approach follows the literature on constrained optimal transport and, in particular, builds on a recent paper by Eckstein and Kupper (2019, Appl. Math. Optim.).
    Date: 2019–11
  8. By: Ben Wealer; Simon Bauer; Leonard Göke; Christian von Hirschhausen; Claudia Kemfert
    Abstract: This paper analyzes nuclear power plant investments using Monte Carlo simulations of economic indicators such as net present value (NPV) and levelized cost of electricity (LCOE). In times of liberalized electricity markets, large-scale decarbonization and climate change considerations, this topic is gaining momentum and requires fundamental analysis of cost drivers. We adopt the private investor's perspective and ask: What are the investor's economics of nuclear power, or - stated differently - would a private investor consider nuclear power as an investment option in the context of a competitive power market? By focusing on the perspective of an investor, we leave aside the public policy perspective, such as externalities, cost-benefit analysis, proliferation issues, etc. Instead, we apply a conventional economic perspective, as proposed by Rothwell (2016), to calculate NPV and LCOE. We base our analysis on a stochastic Monte Carlo simulation of nuclear power plant investments of generation III/III+, i.e. available technologies with some operating experience and extensively scrutinized cost data. We define and estimate the main drivers of our model, i.e. overnight construction costs, wholesale electricity prices, and weighted average cost of capital, and discuss reasonable ranges and distributions of those parameters. We apply the model to recent and ongoing investment projects in the Western world, i.e. Europe and the United States; cases in non-market economies such as China and Russia, as well as non-established technologies (Generation IV reactors and small modular reactors), are excluded from the analysis due to data issues. Model runs suggest that investing in nuclear power plants is not profitable, i.e. expected net present values are highly negative, mainly driven by high construction costs, including capital costs, and uncertain and low revenues. Even extending reactor lifetimes from the current 40 years to 60 years does not improve the results significantly. We conclude that the economics of nuclear power plants are not favorable to future investments, even before additional costs (decommissioning, long-term storage) and the social costs of accidents are considered.
    Keywords: nuclear power; nuclear financing; investment; levelized cost of electricity; monte carlo simulation; uncertainty
    JEL: Q40 D24 G00
    Date: 2019
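The structure of such a Monte Carlo NPV exercise fits in a few lines: draw the uncertain drivers (overnight cost, WACC, electricity price), discount the operating margin, subtract construction cost. All numbers below are rough illustrative ranges, not the paper's calibration, and construction time, decommissioning, and taxes are ignored.

```python
import numpy as np

rng = np.random.default_rng(42)

def npv_draw(rng, capacity_mw=1600, lifetime=40, load_factor=0.85):
    """One Monte Carlo draw of a plant-level NPV (illustrative numbers)."""
    overnight = rng.uniform(4000, 9000) * capacity_mw * 1000  # $/kW -> $
    wacc = rng.uniform(0.04, 0.10)                            # discount rate
    price = rng.uniform(20, 80)                               # $/MWh wholesale
    mwh_per_year = capacity_mw * 8760 * load_factor
    margin = (price - 25) * mwh_per_year      # assume 25 $/MWh O&M + fuel
    # Present value of the margin over the plant lifetime, minus capex
    pv = sum(margin / (1 + wacc) ** t for t in range(1, lifetime + 1))
    return pv - overnight

npvs = np.array([npv_draw(rng) for _ in range(5000)])
print(f"P(NPV < 0) = {(npvs < 0).mean():.2f}")
```

Even with these generous simplifications the bulk of the distribution sits below zero, consistent with the paper's headline finding.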
  9. By: Mazur, Stepan (Örebro University School of Business); Otryakhin, Dmitry (Aarhus University)
    Abstract: Linear fractional stable motion is a type of stochastic integral driven by symmetric alpha-stable Lévy motion. The integral can be considered a non-Gaussian analogue of fractional Brownian motion. The present paper discusses the R package rlfsm, created for numerical procedures involving the linear fractional stable motion. It is a set of tools for simulating these processes as well as performing statistical inference and simulation studies on them. We introduce the tools we developed for working with this type of motion, along with the methods and ideas underlying them. We also perform numerical experiments to show the finite-sample behavior of certain estimators of the integral, and give an idea of how to wrap a workflow related to the linear fractional stable motion in S4 classes and methods. Supplementary materials, including code for the numerical experiments, are available online. rlfsm can be found on CRAN and GitLab.
    Keywords: Fractional processes; limit theorems; parametric estimation; stochastic simulation; stable motion
    JEL: C00 C13 C15 C88
    Date: 2019–11–13
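The package itself is in R; as a language-neutral illustration of the driving noise, here is the standard Chambers-Mallows-Stuck sampler for symmetric alpha-stable increments in Python. This shows only the heavy-tailed driver, not the rlfsm kernel-integration algorithm for the fractional motion itself.

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for standard symmetric
    alpha-stable noise (valid for alpha in (0, 2), alpha != 1)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(7)
x = symmetric_stable(1.7, 5000, rng)   # alpha < 2: heavy-tailed increments
motion = np.cumsum(x)                  # symmetric alpha-stable Levy motion
```

With alpha = 1.7 the path shows occasional large jumps that Gaussian increments essentially never produce, which is the qualitative difference the non-Gaussian analogue captures.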
  10. By: Andrew J Tiffin
    Abstract: Machine learning tools are well known for their success in prediction. But prediction is not causation, and causal discovery is at the core of most questions concerning economic policy. Recently, however, the literature has focused more on issues of causality. This paper gently introduces some leading work in this area, using a concrete example—assessing the impact of a hypothetical banking crisis on a country’s growth. By enabling consideration of a rich set of potential nonlinearities, and by allowing individually-tailored policy assessments, machine learning can provide an invaluable complement to the skill set of economists within the Fund and beyond.
    Date: 2019–11–01
  11. By: R Verbelen; K Antonio; Gerda Claeskens; J Crevecoeur
    Date: 2018
  12. By: Gary S. Anderson; Alena Audzeyeva
    Abstract: We propose a coherent framework using support vector regression (SVR) for generating and ranking a set of high-quality models for predicting emerging market sovereign credit spreads. Our framework adapts a global optimization algorithm employing an hv-block cross-validation metric, pertinent for models with serially correlated economic variables, to produce robust sets of tuning parameters for SVR kernel functions. In contrast to previous approaches identifying a single "best" tuning parameter setting, a task that is pragmatically improbable to achieve in many applications, we proceed with a collection of tuning parameter candidates, employing the Model Confidence Set test to select the most accurate models from the collection of promising candidates. Using bond credit spread data for three large emerging market economies and an array of input variables motivated by economic theory, we apply our framework to identify relatively small sets of SVR models with superior out-of-sample forecasting performance. Benchmarking our SVR forecasts against random walk and conventional linear model forecasts provides evidence for the notably superior forecasting accuracy of SVR-based models. In contrast to routinely used linear model benchmarks, the SVR-based models can generate accurate forecasts using only a small set of input variables limited to the country-specific credit-spread-curve factors, lending some support to the rational expectation theory of the term structure in the context of emerging market credit spreads. Consequently, our evidence indicates a better ability of the highly flexible SVR to capture investor expectations about future spreads reflected in today's credit spread curve.
    Keywords: Support vector regression ; Out-of-sample predictability ; Sovereign credit spreads ; Machine learning ; Emerging markets ; Model confidence set
    JEL: G17 F15 G15 F34 F17 C53
    Date: 2019–10–17
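The distinctive ingredient here is hv-block cross-validation, which guards against leakage from serial correlation by removing a gap of h observations on each side of every validation block. A minimal index generator, following the usual (Racine-style) definition and assuming the exact windowing the authors use may differ:

```python
import numpy as np

def hv_block_splits(n, v, h):
    """hv-block cross-validation for serially correlated data: each
    validation block [i-v, i+v] is separated from the training set by
    gaps of h observations on either side (dropped from training)."""
    for i in range(n):
        val = np.arange(max(0, i - v), min(n, i + v + 1))
        gap = np.arange(max(0, i - v - h), min(n, i + v + h + 1))
        yield np.setdiff1d(np.arange(n), gap), val

train, val = next(hv_block_splits(20, v=2, h=3))
print(val, train[:3])   # [0 1 2] [6 7 8]
```

In the paper's framework, splits like these would score each candidate SVR kernel-parameter setting before the Model Confidence Set test prunes the candidates.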
  13. By: B. Shravan Kumar; Vadlamani Ravi; Rishabh Miglani
    Abstract: Financial forecasting using news articles is an emerging field. In this paper, we propose hybrid intelligent models for stock market prediction using psycholinguistic variables (LIWC and TAALES) extracted from news articles as predictor variables. For prediction, we employed various intelligent techniques such as Multilayer Perceptron (MLP), Group Method of Data Handling (GMDH), General Regression Neural Network (GRNN), Random Forest (RF), Quantile Regression Random Forest (QRRF), Classification and Regression Tree (CART) and Support Vector Regression (SVR). We experimented on the stocks of 12 companies listed on the Bombay Stock Exchange (BSE). We employed chi-squared and maximum relevance minimum redundancy (MRMR) feature selection techniques on the psycholinguistic features obtained from the news articles. After extensive experimentation, using the Diebold-Mariano test, we conclude that GMDH and GRNN are statistically the best techniques in that order with respect to the MAPE and NRMSE values.
    Date: 2019–11
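The two evaluation metrics the comparison rests on, MAPE and NRMSE, are quick to state precisely. Note that NRMSE normalisation conventions vary (range, mean, or standard deviation of the actuals); range normalisation is assumed below and may not match the authors' choice.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

def nrmse(y_true, y_pred):
    """Root-mean-square error normalised by the range of the actuals."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

y = np.array([100.0, 110.0, 120.0])
p = np.array([ 98.0, 113.0, 118.0])
print(mape(y, p), nrmse(y, p))
```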
  14. By: Tyler Pike; Horacio Sapriza; Thomas Zimmermann
    Abstract: This paper constructs a leading macroeconomic indicator from microeconomic data using recent machine learning techniques. Using tree-based methods, we estimate probabilities of default for publicly traded non-financial firms in the United States. We then use the cross-section of out-of-sample predicted default probabilities to construct a leading indicator of non-financial corporate health. The index predicts real economic outcomes such as GDP growth and employment up to eight quarters ahead. Impulse responses validate the interpretation of the index as a measure of financial stress.
    Keywords: Corporate Default ; Early Warning Indicators ; Economic Activity ; Machine Learning
    JEL: C53 E32 G33
    Date: 2019–09–20
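The bottom-up step, collapsing a cross-section of firm-level default probabilities into one macro indicator, can be sketched simply. The upper-tail quantile used below is one plausible aggregation; the paper's exact index construction is not reproduced, and the beta distributions are synthetic stand-ins for model-predicted PDs.

```python
import numpy as np

def corporate_health_index(pd_cross_section, quantile=0.9):
    """Leading indicator from a cross-section of predicted default
    probabilities: an upper-tail quantile tracks the stressed fringe
    of firms better than the cross-sectional mean."""
    return float(np.quantile(pd_cross_section, quantile))

rng = np.random.default_rng(3)
calm   = rng.beta(1, 60, size=500)   # most firms: tiny default probability
stress = rng.beta(2, 20, size=500)   # stressed cross-section: fatter tail

print(corporate_health_index(calm), corporate_health_index(stress))
```

A time series of such index values, computed each quarter from out-of-sample PDs, is what the paper relates to future GDP growth and employment.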
  15. By: Clarida, Richard H. (Board of Governors of the Federal Reserve System (U.S.))
    Date: 2019–10–01
  16. By: Christophe HURLIN; Christophe PERIGNON
    Date: 2019
  17. By: Hammond, Peter J (Department of Economics, University of Warwick); Qiao, Lei (Shanghai University of Finance and Economics); Sun, Yeneng (Risk Management Institute and Department of Economics, National University of Singapore)
    Abstract: Monte Carlo simulation is used in Hammond and Sun (Economic Theory, 2008) to characterize a standard stochastic framework involving a continuum of random variables that are conditionally independent given macro shocks. This paper presents some general properties of such Monte Carlo sampling processes, including their one-way Fubini extension and regular conditional independence. In addition to the almost sure convergence of Monte Carlo simulation considered in Hammond and Sun (Economic Theory, 2008), here we also consider norm convergence when the random variables are square integrable. This leads to a necessary and sufficient condition for the classical law of large numbers to hold in a general Hilbert space. Applying this analysis to large economies with asymmetric information shows that the conflict between incentive compatibility and Pareto efficiency is resolved asymptotically for almost all sampling economies, corresponding to some results in McLean and Postlewaite (Econometrica 2002) and in Sun and Yannelis (Journal of Economic Theory, 2007).
    Keywords: Law of large numbers ; Monte Carlo sampling process ; one-way Fubini property ; Hilbert space ; incentive compatibility ; asymmetric information ; Pareto efficiency
    JEL: C65 D51 D61 D82
    Date: 2019
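The paper's starting point, the conditional law of large numbers for draws that are independent only conditionally on a macro shock, can be stated in one line (a standard formulation, not a quotation from the paper):

```latex
% With a macro shock \theta and micro draws X_1, X_2, \dots that are
% i.i.d. conditional on \theta, the sample average recovers the
% conditional mean, not the unconditional one:
\frac{1}{n}\sum_{i=1}^{n} X_i \;\xrightarrow{\ \mathrm{a.s.}\ }\; \mathbb{E}\left[ X_1 \mid \theta \right] \qquad (n \to \infty).
```

The norm-convergence results in the paper concern the same limit in the $L^2$ (Hilbert-space) sense when the $X_i$ are square integrable.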
  18. By: Michael Dubrovsky; Marshall Ball; Bogdan Penkovsky
    Abstract: Most cryptocurrencies rely on Proof-of-Work (PoW) "mining" for resistance to Sybil and double-spending attacks, as well as a mechanism for currency issuance. Hashcash PoW has successfully secured the Bitcoin network since its inception; however, as the network has expanded to take on additional value storage and transaction volume, Bitcoin PoW's heavy reliance on electricity has created scalability issues, environmental concerns, and systemic risks. Mining efforts have concentrated in areas with low electricity costs, creating single points of failure. Although PoW security properties rely on imposing a trivially verifiable economic cost on miners, there is no fundamental reason for it to consist primarily of electricity cost. The authors propose a novel PoW algorithm, Optical Proof of Work (oPoW), to eliminate energy as the primary cost of mining. The proposed algorithm still imposes an economic cost on miners; however, the cost is concentrated in hardware (capital expense, CAPEX) rather than electricity (operating expense, OPEX). The oPoW scheme involves minimal modifications to Hashcash-like PoW schemes, inheriting the safety/security properties of such schemes. Rapid growth and improvement in silicon photonics over the last two decades have led to the commercialization of silicon photonic co-processors (integrated circuits that use photons instead of electrons to perform specialized computing tasks) for low-energy deep learning. oPoW is optimized for this technology such that miners are incentivized to use specialized, energy-efficient photonics for computation. Beyond providing energy savings, oPoW has the potential to improve network scalability, enable decentralized mining outside of low-electricity-cost areas, and democratize issuance. Due to the CAPEX dominance of mining costs, the oPoW hashrate will be significantly less sensitive to declines in the underlying coin price.
    Date: 2019–11
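The Hashcash-like structure that oPoW modifies is easy to show: search for a nonce whose hash clears a difficulty target, where finding a solution is expensive but verifying one takes a single hash. The sketch below is plain double-SHA256 Hashcash; oPoW would replace the hash with an optical, hardware-dominated computation while keeping this hard-to-find, easy-to-verify structure.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 2_000_000):
    """Hashcash-style PoW: find a nonce whose double-SHA256 digest has
    difficulty_bits leading zero bits. Expected work ~2**difficulty_bits
    hashes; verification needs just one hash of (header, nonce)."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        payload = header + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    raise RuntimeError("no valid nonce in range")

nonce, digest = mine(b"block-header", difficulty_bits=12)
print(nonce, digest[:8])   # digest starts with 12 zero bits
```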
  19. By: Christoph March
    Abstract: Artificial intelligence (AI) is starting to pervade economic and social life, rendering strategic interactions with artificial agents more and more common. At the same time, experimental economic research has increasingly employed computer players to advance our understanding of strategic interaction in general. What can this strand of research teach us about an AI-shaped future? I review 90 experimental studies using computer players. I find that, in a nutshell, humans act more selfishly and more rationally in the presence of computer players, and they are often able to exploit these players. Still, many open questions remain.
    Keywords: experiment, robots, computer players, survey
    JEL: C90 C92 O33
    Date: 2019
  20. By: Jean-Michel Fayolle; Vincent Lemaire; Thibaut Montes; Gilles Pagès
    Abstract: This paper proposes two numerical solutions based on Product Optimal Quantization for the pricing of foreign exchange (FX) linked long-term Bermudan options, e.g. Bermudan Power Reverse Dual Currency options, where we take into account stochastic domestic and foreign interest rates on top of the stochastic FX rate, hence considering a 3-factor model. For these two numerical methods, we give an estimation of the $L^2$-error induced by such approximations and illustrate them with market-based examples that highlight the speed of such methods.
    Date: 2019–11
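The building block of quantization-based pricing is replacing a continuous risk factor with a small, well-placed grid. A minimal sketch of one-dimensional L2 quantization via Lloyd's fixed-point iteration on Monte Carlo samples follows; the paper's product quantization of a 3-factor model and its backward induction for Bermudan exercise are considerably more elaborate.

```python
import numpy as np

def lloyd_quantizer(samples, n_points, iters=50):
    """Lloyd's iteration for an (approximately) L2-optimal quantization
    grid of a distribution represented by Monte Carlo samples."""
    grid = np.quantile(samples, np.linspace(0.05, 0.95, n_points))  # init
    for _ in range(iters):
        # Assign each sample to its nearest grid point (Voronoi cell),
        # then move each grid point to its cell's conditional mean.
        cells = np.abs(samples[:, None] - grid[None, :]).argmin(axis=1)
        for j in range(n_points):
            members = samples[cells == j]
            if members.size:
                grid[j] = members.mean()
    return np.sort(grid)

rng = np.random.default_rng(0)
x = rng.normal(size=20_000)
grid = lloyd_quantizer(x, n_points=5)
print(np.round(grid, 2))   # roughly symmetric around 0 for N(0, 1)
```

In a pricer, expectations at each exercise date are then computed as weighted sums over such grids instead of nested Monte Carlo.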
  21. By: Daniel Philps; Artur d'Avila Garcez; Tillman Weyde
    Abstract: LSTMs promise much to financial time-series analysis, temporal and cross-sectional inference, but we find they do not deliver in a real-world financial management task. We examine an alternative called Continual Learning (CL), a memory-augmented approach which can provide transparent explanations, i.e. which memory did what and when. This work has implications for many financial applications including credit, time-varying fairness in decision making, and more. We make three important new observations. Firstly, as well as being more explainable, time-series CL approaches outperform LSTM and a simple sliding-window learner (a feed-forward neural net, FFNN). Secondly, we show that CL based on a sliding-window learner (FFNN) is more effective than CL based on a sequential learner (LSTM). Thirdly, we examine how real-world time-series noise impacts several similarity approaches used in CL memory addressing. We provide these insights using an approach called Continual Learning Augmentation (CLA), tested on a complex real-world problem: emerging market equities investment decision making. CLA provides a test bed, as it can be based on different types of time-series learner, allowing testing of LSTM and sliding-window (FFNN) learners side by side. CLA is also used to test several distance approaches used in a memory recall gate: Euclidean distance (ED), dynamic time warping (DTW), an auto-encoder (AE), and a novel hybrid approach, warp-AE. We find CLA outperforms simple LSTM and FFNN learners, and CLA based on a sliding window (CLA-FFNN) outperforms an LSTM (CLA-LSTM) implementation. For memory addressing, ED under-performs DTW and AE, while warp-AE shows the best overall performance in a real-world financial task.
    Date: 2019–11
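Two of the four distances compared in the memory recall gate, Euclidean distance and dynamic time warping, can be sketched side by side. The example below (phase-shifted sine waves as stand-ins for noisy financial series) illustrates why DTW tolerates temporal misalignment that inflates Euclidean distance; it is not the paper's data or its AE/warp-AE variants.

```python
import numpy as np

def euclidean(a, b):
    """Point-by-point L2 distance: sensitive to any time misalignment."""
    return np.sqrt(np.sum((a - b) ** 2))

def dtw(a, b):
    """Classic O(n*m) dynamic time warping: cheapest monotone alignment
    of the two series, summing absolute differences along the path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 50)
a = np.sin(t)
b = np.sin(t - 0.5)          # same shape, shifted in phase
print(euclidean(a, b), dtw(a, b))   # DTW is far less phase-sensitive
```

By construction DTW can never cost more than the unwarped point-by-point alignment, which is why it handles the time-series noise the paper studies more gracefully than ED.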

This nep-cmp issue is ©2019 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.