nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒03‒04
twelve papers chosen by
Stan Miles
Thompson Rivers University

  1. Financial series prediction using Attention LSTM By Sangyeon Kim; Myungjoo Kang
  2. Global Stock Market Prediction Based on Stock Chart Images Using Deep Q-Network By Jinho Lee; Raehyun Kim; Yookyung Koh; Jaewoo Kang
  3. Environmental Policy on the Back of an Envelope: A Cobb-Douglas Model is Not Just a Teaching Tool By Don Fullerton; Chi L. Ta
  4. Heuristic Switching Model and Exploration-Exploitation Algorithm to describe long-run expectations in LtFEs: a comparison By Colasante, Annarita; Alfarano, Simone; Camacho-Cuena, Eva
  5. Agricultural Drought Impacts on Crops Sector and Adaptation Options in Mali: a Macroeconomic Computable General Equilibrium Analysis By Jean-Marc MONTAUD
  6. Price Manipulation, Dynamic Informed Trading and Tame Equilibria: Theory and Computation By Shino Takayama
  7. Incremental Risk Charge Methodology By Tim Xiao
  8. A numerical scheme for the quantile hedging problem By Cyril Bénézet; Jean-François Chassagneux; Christoph Reisinger
  9. Controlling systemic risk - network structures that minimize it and node properties to calculate it By Sebastian M. Krause; Hrvoje Štefančić; Vinko Zlatić; Guido Caldarelli
  10. Artificial intelligence, algorithmic pricing and collusion By Calvano, Emilio; Calzolari, Giacomo; Denicolò, Vincenzo; Pastorello, Sergio
  11. Machine Learning Estimation of Heterogeneous Causal Effects: Empirical Monte Carlo Evidence By Knaus, Michael C.; Lechner, Michael; Strittmatter, Anthony
  12. Quantum model for price forecasting in financial markets By J. L. Subias

  1. By: Sangyeon Kim; Myungjoo Kang
    Abstract: Financial time series prediction, especially with machine learning techniques, is an extensive field of study. In recent years, deep learning methods, particularly for time series analysis, have performed outstandingly on various industrial problems, often predicting better than classical machine learning methods, and many researchers have applied them to financial time series with a variety of models. In this paper, we compare several deep learning models for financial time series prediction: multilayer perceptrons (MLP), one-dimensional convolutional neural networks (1D CNN), stacked long short-term memory networks (stacked LSTM), attention networks, and weighted attention networks. In particular, the attention LSTM is used not only for prediction but also for visualizing intermediate outputs to analyze the reasons behind its predictions; we therefore show an example of interpreting the model's predictions intuitively through attention vectors. We also focus on the time steps and factors involved, which makes it easy to see why certain trends are predicted from a given time series table. Finally, we modify the loss functions of the attention models with weighted categorical cross entropy; our proposed model achieves a hit ratio of 0.76 in predicting the trends of the KOSPI 200, superior to the other methods.
    Date: 2019–02
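The weighted categorical cross-entropy loss mentioned in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration only: the three trend classes and the class weights below are assumptions for the example, not the paper's values.

```python
import numpy as np

def weighted_categorical_cross_entropy(y_true, y_pred, class_weights, eps=1e-12):
    """Cross entropy in which each class contributes with its own weight.

    y_true: one-hot labels, shape (n_samples, n_classes)
    y_pred: predicted probabilities, same shape
    class_weights: array of shape (n_classes,)
    """
    y_pred = np.clip(y_pred, eps, 1.0)          # avoid log(0)
    # Per-sample loss: -sum_c w_c * y_c * log(p_c)
    losses = -(y_true * np.log(y_pred) * class_weights).sum(axis=1)
    return losses.mean()

# Illustrative 3-class example (e.g. down / flat / up trend labels)
y_true = np.array([[1, 0, 0], [0, 0, 1]])
y_pred = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
weights = np.array([1.5, 1.0, 1.5])             # assumed weights, not the paper's
print(weighted_categorical_cross_entropy(y_true, y_pred, weights))  # ~0.6506
```

Up-weighting the directional classes relative to the "flat" class is one plausible way such a loss could emphasize trend hits over neutral predictions.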
  2. By: Jinho Lee; Raehyun Kim; Yookyung Koh; Jaewoo Kang
    Abstract: We applied a Deep Q-Network with a convolutional neural network function approximator, which takes stock chart images as input, to make global stock market predictions. Our model yields profit not only in the stock market of the country where it was trained but in global stock markets generally. We trained the model only on the US market and tested it in 31 different countries over 12 years. Portfolios constructed from the model's output yield roughly 0.1 to 1.0 percent return per transaction, before transaction costs, in the 31 countries. The results show that there are patterns in stock chart images that tend to predict the same future stock price movements across global stock markets, and that future stock prices can be predicted even when training and testing are done in different countries. Training can thus be carried out in relatively large and liquid markets (e.g., the USA) and the model applied in small markets. This demonstrates that artificial intelligence based stock price forecasting models can be used in relatively small (emerging) markets that lack a sufficient amount of data for training.
    Date: 2019–02
  3. By: Don Fullerton; Chi L. Ta
    Abstract: To clarify and interpret the workings of a large computable general equilibrium (CGE) model of environmental policy in the U.S., we build an aggregated Cobb-Douglas (CD) model that can be solved easily and analytically. Its closed-form expressions show exactly how key parameters determine the sign and size of the effects of a large new carbon tax on emissions, revenue, prices, output, and welfare. Data and parameters from the detailed, dynamic CGE model of Goulder and Hafstead (2018) are used in the CD model to calculate results that can be compared with theirs. Results from the CD model track those from the large CGE model quite closely, even though the CD model omits much detail, such as multiple sectors, intermediate inputs, and international trade. A CGE model is quite useful to generate detailed numerical results and to reflect on particular aspects of environmental policy, but the simpler CD model provides a transparent view of exactly how the policy affects key outcomes.
    JEL: H23 Q28 Q54
    Date: 2019–02
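The back-of-the-envelope flavor of such a CD model can be illustrated with a toy calculation (my own example, not the Fullerton-Ta model): under Cobb-Douglas preferences, expenditure shares are constant, so an ad valorem tax that raises the dirty good's consumer price by a factor (1 + t) scales its quantity, and hence emissions, down by exactly that factor. All numbers below are assumed for illustration.

```python
# Assumed parameters for a two-good Cobb-Douglas toy economy
alpha = 0.96    # expenditure share of the clean good
income = 100.0  # household income
p_dirty = 1.0   # pre-tax price of the dirty (emissions) good
tax = 0.25      # ad valorem carbon tax rate

# Cobb-Douglas demand: quantity = share * income / price
share_dirty = 1 - alpha
q_before = share_dirty * income / p_dirty
q_after = share_dirty * income / (p_dirty * (1 + tax))

# Closed-form implications of the constant-share property
emissions_change = q_after / q_before - 1   # equals -tax / (1 + tax)
revenue = tax * p_dirty * q_after           # tax base shrinks with demand

print(f"emissions change: {emissions_change:.1%}")  # -20.0%
print(f"tax revenue: {revenue:.2f}")                # 0.80
```

The point of the sketch is the transparency the abstract describes: every outcome is an explicit function of the share parameter and the tax rate, with no numerical solver needed.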
  4. By: Colasante, Annarita; Alfarano, Simone; Camacho-Cuena, Eva
    Abstract: We compare the performance of two learning algorithms in replicating individual short- and long-run expectations: the Exploration-Exploitation Algorithm (EEA) and the Heuristic Switching Model (HSM). Individual expectations are elicited in a series of Learning-to-Forecast Experiments (LtFEs) with different feedback mechanisms between expectations and the market price: positive and negative feedback markets. We implement the EEA proposed by Colasante et al. (2018c) and modify the existing version of the HSM to incorporate long-run predictions. Although both algorithms provide a fairly good description of market prices in the short run, the EEA outperforms the HSM in replicating the main characteristics of individual expectations in the long run, both in terms of coordination of individual expectations and convergence of expectations to the fundamental value.
    Keywords: Expectations, Experiment, Evolutionary Learning
    JEL: C91 D03 G12
    Date: 2019
  5. By: Jean-Marc MONTAUD
    Abstract: In Mali's current context, where the crops sector is particularly exposed and vulnerable to agricultural drought, this study assesses the economy-wide impacts of such events and the potential effectiveness of some adaptation strategies. Using a dynamic computable general equilibrium model, we conduct counterfactual simulations of various scenarios accounting for different levels of intensity and frequency of droughts over a 15-year period. We first show how the mild, moderate, and intense droughts currently experienced by the country affect its economic performance and considerably degrade households' welfare. We also show how these negative impacts could be aggravated in the future by the greater number of intense droughts that global climate change is likely to bring. Finally, we show that Mali appears to have some room for manoeuvre in terms of drought-risk management policies, such as fostering the use of drought-tolerant crop varieties, improving drought early-warning systems, or extending irrigation capacities.
    Keywords: Climate variability, General Equilibrium, Agriculture, Food Security, Mali
    JEL: C68 O13 Q54
    Date: 2019–02
  6. By: Shino Takayama (School of Economics, The University of Queensland)
    Abstract: This paper studies the manipulation of prices using a dynamic version of the Glosten and Milgrom (1985) model with a long-lived informed trader. We make a fundamental contribution by clarifying the conditions under which a unique equilibrium exists, and the situations in which this equilibrium involves manipulation of prices by the informed trader. Furthermore, within the unique equilibrium, we characterize bid-ask spreads and show that bid and ask prices are monotonically increasing in the market maker's prior belief. Finally, we propose a computational method to find equilibria in the model. Our simulation results confirm our theoretical findings and reveal multiple equilibria in some cases.
    Keywords: Market microstructure; Glosten–Milgrom; Insider trading; Dynamic trading; Price formation; Sequential trade; Asymmetric information; Bid–ask spreads.
    JEL: D82 G12
    Date: 2018–10–02
  7. By: Tim Xiao (University of Toronto)
    Abstract: The incremental risk charge (IRC) is a new regulatory requirement from the Basel Committee in response to the recent financial crisis. Notably, few models for the IRC have been developed in the literature. This paper proposes a methodology consisting of two Monte Carlo simulations. The first simulates default, migration, and concentration in an integrated way; combined with full re-valuation, it generates the loss distribution at the first liquidity horizon for a subportfolio. The second simulation performs random draws based on the constant-level-of-risk assumption, convolving copies of the single-horizon loss distribution to produce the one-year loss distribution. The aggregation of subportfolios with different liquidity horizons is addressed. Moreover, a methodology for equity is also included, even though equity is optional in the IRC. Acknowledgement: The work was sponsored by FinPricing at
    Keywords: Incremental risk charge (IRC), constant level of risk, liquidity horizon, constant loss distribution, Merton-type model, concentration
    Date: 2019–02–18
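The two-step structure described in this abstract (a loss distribution at the liquidity horizon, then a convolution under the constant-level-of-risk assumption) can be sketched as follows. The lognormal stand-in for the horizon loss distribution, the four-quarter horizon, and the 99.9% quantile are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the output of the first Monte Carlo simulation: an
# empirical loss distribution at a 3-month liquidity horizon, with a
# heavy right tail (losses positive, gains negative).
horizon_losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000) - 1.0

# Constant-level-of-risk step: a one-year loss is the sum of 4
# independent draws from the same horizon loss distribution, as if the
# portfolio were rebalanced back to the same risk level each quarter.
n_scenarios = 100_000
draws = rng.choice(horizon_losses, size=(n_scenarios, 4), replace=True)
annual_losses = draws.sum(axis=1)

# The capital charge is read off a high quantile of the one-year
# loss distribution.
irc = np.quantile(annual_losses, 0.999)
print(f"99.9% one-year loss quantile: {irc:.2f}")
```

Resampling and summing empirical draws is the Monte Carlo equivalent of convolving four copies of the horizon loss distribution, which is the operation the abstract describes.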
  8. By: Cyril Bénézet; Jean-François Chassagneux; Christoph Reisinger
    Abstract: We consider the numerical approximation of the quantile hedging price in a non-linear market. In a Markovian framework, we propose a numerical method based on a Piecewise Constant Policy Timestepping (PCPT) scheme coupled with a monotone finite difference approximation. We prove the convergence of our algorithm combining BSDE arguments with the Barles & Jakobsen and Barles & Souganidis approaches for non-linear equations. In a numerical section, we illustrate the efficiency of our scheme by considering a financial example in a market with imperfections.
    Date: 2019–02
  9. By: Sebastian M. Krause; Hrvoje Štefančić; Vinko Zlatić; Guido Caldarelli
    Abstract: Evaluating systemic risk in networks of financial institutions generally requires information on inter-institution financial exposures. Within the framework of the Debt Rank algorithm, we introduce an approximate method of systemic risk evaluation that requires only node properties, such as total assets and liabilities, as inputs. We demonstrate that this approximation captures a large portion of the systemic risk measured by Debt Rank. Furthermore, using Monte Carlo simulations, we investigate network structures that can amplify systemic risk. Indeed, while no topology is in a general sense a priori more stable if the market is liquid [1], larger complexity is detrimental to overall stability [2]. Here we find that the measure of scalar assortativity correlates well with the level of systemic risk. In particular, network structures with high systemic risk are scalar assortative, meaning that risky banks are mostly exposed to other risky banks, whereas network structures with low systemic risk are scalar disassortative, with risky banks interacting with stable banks.
    Date: 2019–02
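The scalar-assortativity idea in this abstract can be sketched as the correlation of a node-level risk score across the endpoints of exposure edges. This is a minimal illustration with made-up numbers, not the authors' implementation.

```python
import numpy as np

def scalar_assortativity(edges, risk):
    """Correlate a scalar node property across directed edges.

    edges: list of (lender, borrower) node pairs
    risk: dict mapping node -> scalar riskiness score
    """
    x = np.array([risk[i] for i, _ in edges], dtype=float)
    y = np.array([risk[j] for _, j in edges], dtype=float)
    return np.corrcoef(x, y)[0, 1]

# Assumed per-bank riskiness scores: banks 0 and 1 risky, 2 and 3 stable
risk = {0: 0.9, 1: 0.8, 2: 0.1, 3: 0.2}

# Assortative toy network: risky banks exposed to each other,
# stable banks exposed to each other -> positive correlation.
assortative = [(0, 1), (1, 0), (2, 3), (3, 2)]
# Disassortative toy network: risky banks exposed to stable banks
# -> negative correlation.
disassortative = [(0, 2), (2, 0), (1, 3), (3, 1)]

print(scalar_assortativity(assortative, risk))     # positive
print(scalar_assortativity(disassortative, risk))  # negative
```

Under the abstract's finding, the first (assortative) pattern would be the high-systemic-risk configuration and the second the low-risk one.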
  10. By: Calvano, Emilio; Calzolari, Giacomo; Denicolò, Vincenzo; Pastorello, Sergio
    Abstract: Increasingly, pricing algorithms are supplanting human decision making in real marketplaces. To inform the competition-policy debate on the possible consequences of this development, we experiment with pricing algorithms powered by Artificial Intelligence (AI) in controlled environments (computer simulations), studying the interaction among a number of Q-learning algorithms in a workhorse oligopoly model of price competition with Logit demand and constant marginal costs. In this setting the algorithms consistently learn to charge supra-competitive prices without communicating with one another. The high prices are sustained by classical collusive strategies: a finite phase of punishment followed by a gradual return to cooperation. This finding is robust to asymmetries in cost or demand and to changes in the number of players.
    Keywords: artificial intelligence; Collusion; Pricing-Algorithms; Q-Learning; Reinforcement Learning
    JEL: D43 D83 L13 L41
    Date: 2018–12
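The environment this abstract describes can be sketched as two Q-learning firms repeatedly picking prices from a grid under Logit demand with constant marginal cost. All parameters below are my own assumptions, not the paper's calibration, and this toy run is not guaranteed to reproduce the supra-competitive outcomes the authors report.

```python
import numpy as np

rng = np.random.default_rng(1)

prices = np.linspace(1.0, 2.0, 5)  # assumed discrete price grid
c = 1.0                            # constant marginal cost
mu = 0.25                          # Logit noise parameter (assumed)
a, a0 = 2.0, 0.0                   # product quality / outside option (assumed)

def profits(p1, p2):
    """Per-period profits of both firms under Logit demand."""
    u = np.exp((a - np.array([p1, p2])) / mu)
    shares = u / (u.sum() + np.exp(a0 / mu))
    return (p1 - c) * shares[0], (p2 - c) * shares[1]

n = len(prices)
# State = both firms' last-period price indices; one Q-table per firm,
# indexed as Q[i][own_last, rival_last, action].
Q = [np.zeros((n, n, n)) for _ in range(2)]
alpha, gamma = 0.15, 0.95
state = (0, 0)
for t in range(50_000):
    eps = np.exp(-2e-4 * t)  # decaying epsilon-greedy exploration
    acts = [
        int(rng.integers(n)) if rng.random() < eps else int(np.argmax(Q[i][state]))
        for i in range(2)
    ]
    pi = profits(prices[acts[0]], prices[acts[1]])
    nxt = (acts[0], acts[1])
    for i in range(2):
        # Standard Q-learning update on (state, own action)
        target = pi[i] + gamma * Q[i][nxt].max()
        Q[i][state][acts[i]] += alpha * (target - Q[i][state][acts[i]])
    state = nxt

print("prices learned in the long run:", prices[state[0]], prices[state[1]])
```

The paper's experiments study exactly this kind of interaction at scale; whether the learned prices sit above the competitive level is the empirical question the authors answer.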
  11. By: Knaus, Michael C.; Lechner, Michael; Strittmatter, Anthony
    Abstract: We investigate the finite-sample performance of causal machine learning estimators for heterogeneous causal effects at different aggregation levels. We employ an Empirical Monte Carlo Study that relies on arguably realistic data generating processes (DGPs) based on actual data. We consider 24 different DGPs, eleven different causal machine learning estimators, and three aggregation levels of the estimated effects. In the main DGPs, we allow for selection into treatment based on a rich set of observable covariates. We provide evidence that the estimators can be categorized into three groups. The first group performs consistently well across all DGPs and aggregation levels; these estimators have multiple steps to account for selection into the treatment and the outcome process. The second group shows competitive performance only for particular DGPs. The third group is clearly outperformed by the other estimators.
    Keywords: Causal Forest; Causal machine learning; conditional average treatment effects; Lasso; Random Forest; selection-on-observables
    JEL: C21
    Date: 2018–12
  12. By: J. L. Subias
    Abstract: The present paper describes a practical example in which the probability distribution of the prices of a stock market blue chip is calculated as the wave function of a quantum particle confined in a potential well. This model may naturally explain the operation of several empirical rules used by technical analysts. Models based on the movement of a Brownian particle do not account for fundamental aspects of financial markets, because the Brownian particle is a classical particle, while stock market prices behave more like quantum particles. When a classical particle meets an obstacle or a potential barrier, it may either bounce or overcome the obstacle, but not both at once; only a quantum particle can be simultaneously reflected and transmitted at a potential barrier. This is precisely what prices in a stock market imitate when they find a resistance level: they partially bounce against it and partially overcome it, which can only be explained by admitting that prices behave as quantum rather than classical particles. The proposed quantum model finds natural justification not only for the aforementioned facts but also for other empirically well-known ones, such as sudden changes in volatility and non-Gaussian price distributions.
    Date: 2019–01
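The confinement analogy in this abstract can be illustrated with the textbook infinite square well, reading the stationary-state density |psi_n|^2 as a probability distribution of prices between a support level (x = 0) and a resistance level (x = L). This mapping is a sketch of the analogy only, not the paper's actual model.

```python
import numpy as np

L = 1.0                           # well width = support-resistance band
x = np.linspace(0.0, L, 1001)     # "price" coordinate inside the band

def density(n):
    """Probability density |psi_n|^2 of the n-th stationary state
    of an infinite square well: psi_n(x) = sqrt(2/L) sin(n pi x / L)."""
    psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
    return psi ** 2

rho1 = density(1)

# The density integrates to 1 over the band and vanishes at the
# boundaries, where (in the analogy) prices are reflected.
print(np.trapz(rho1, x))   # ~1.0
print(rho1[0], rho1[-1])   # ~0 at both walls
```

In the analogy, the peaks of |psi_n|^2 would mark price levels where the stock spends most of its time, which is the kind of structure technical analysts read into support/resistance charts.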

This nep-cmp issue is ©2019 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.