nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒06‒17
fifteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Evaluating the Performance of Machine Learning Algorithms in Financial Market Forecasting: A Comprehensive Survey By Lukas Ryll; Sebastian Seidens
  2. Macroeconomic simulation comparison with a multivariate extension of the Markov Information Criterion By Sylvain Barde
  3. Investment Ranking Challenge: Identifying the best performing stocks based on their semi-annual returns By Shanka Subhra Mondal; Sharada Prasanna Mohanty; Benjamin Harlander; Mehmet Koseoglu; Lance Rane; Kirill Romanov; Wei-Kai Liu; Pranoot Hatwar; Marcel Salathe; Joe Byrum
  4. An economy under the digital transformation By Bertani, Filippo; Ponta, Linda; Raberto, Marco; Teglio, Andrea; Cincotti, Silvano
  5. Kinetic Market Model: An Evolutionary Algorithm By Evandro Luquini; Nizam Omar
  6. FinTech in Financial Inclusion: Machine Learning Applications in Assessing Credit Risk By Majid Bazarbash
  7. The Hard Problem of Prediction for Conflict Prevention By Mueller, Hannes Felix; Rauh, Christopher
  8. Neural Learning of Online Consumer Credit Risk By Di Wang; Qi Wu; Wen Zhang
  9. Aggregation of Diverse Information with Double Auction Trading among Minimally-Intelligent Algorithmic Agents By Karim Jamal; Michael Maier; Shyam Sunder
  10. Smart Algorithms to Increase Rail Capacity in Congested Areas By Dessouky, Maged; Fu, Lunce; Hu, Shichun
  11. Trading in Complex Networks By Felipe M. Cardoso; Carlos Gracia-Lazaro; Frederic Moisan; Sanjeev Goyal; Angel Sanchez; Yamir Moreno
  12. Risk-Sensitive Compact Decision Trees for Autonomous Execution in Presence of Simulated Market Response By Svitlana Vyetrenko; Shaojie Xu
  13. Boom-Bust Cycles of Learning, Investment and Disagreement By Osnat Zohar
  14. Portfolio diversification based on ratios of risk measures By Mathias Barkhagen; Brian Fleming; Sergio Garcia Quiles; Jacek Gondzio; Jens Kroeske; Sotirios Sabanis; Arne Staal
  15. Simulating stress in the UK corporate bond market: investor behaviour and asset fire-sales By Baranova, Yuliya; Douglas, Graeme; Silvestri, Laura

  1. By: Lukas Ryll; Sebastian Seidens
    Abstract: With increasing competition and pace in the financial markets, robust forecasting methods are becoming more and more valuable to investors. While machine learning algorithms offer a proven way of modeling non-linearities in time series, their advantages over common stochastic models in the domain of financial market prediction are largely based on limited empirical results. The same holds true for determining the advantages of certain machine learning architectures over others. This study surveys more than 150 related articles on applying machine learning to financial market forecasting. Based on a comprehensive literature review, we build a table across seven main parameters describing the experiments conducted in these studies. Through listing and classifying different algorithms, we also introduce a simple, standardized syntax for textually representing machine learning algorithms. Based on performance metrics gathered from papers included in the survey, we further conduct rank analyses to assess the comparative performance of different algorithm classes. Our analysis shows that machine learning algorithms tend to outperform most traditional stochastic methods in financial market forecasting. We further find evidence that, on average, recurrent neural networks outperform feedforward neural networks as well as support vector machines, which implies the existence of exploitable temporal dependencies in financial time series across multiple asset classes and geographies.
    Date: 2019–06
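The rank analysis described above can be illustrated with a minimal sketch: rank the algorithm classes within each study by reported performance, then average those ranks across studies. The scores and algorithm labels below are made-up placeholders, not figures from the survey.

```python
def mean_ranks(results):
    """Rank algorithm classes within each study (1 = best reported
    score) and average those ranks across the studies reporting them."""
    totals, counts = {}, {}
    for study in results:  # each study maps algorithm class -> score
        ordered = sorted(study, key=lambda a: -study[a])
        for rank, algo in enumerate(ordered, start=1):
            totals[algo] = totals.get(algo, 0) + rank
            counts[algo] = counts.get(algo, 0) + 1
    return {a: totals[a] / counts[a] for a in totals}

# Placeholder scores; a lower mean rank means a stronger comparative showing.
studies = [
    {"RNN": 0.71, "FFNN": 0.66, "SVM": 0.62},
    {"RNN": 0.58, "SVM": 0.61},
    {"FFNN": 0.55, "SVM": 0.52},
]
ranks = mean_ranks(studies)
```

Averaging within-study ranks rather than raw scores is what makes results comparable across papers that use different data sets and metrics.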
  2. By: Sylvain Barde
    Abstract: Comparison of macroeconomic simulation models, particularly agent-based models (ABMs), with more traditional approaches such as VAR and DSGE models has long been identified as an important yet problematic issue in the literature. This is because many such simulations were developed after the Great Recession with a clear aim to inform policy, yet the methodological tools required for validating these models on empirical data are still in their infancy. The paper aims to address this issue by developing and testing a comparison framework for macroeconomic simulation models based on a multivariate extension of the Markov Information Criterion (MIC) originally developed in Barde (2017). The MIC is designed to measure the informational distance between a set of models and some empirical data by mapping the simulated data to the Markov transition matrix of the underlying data-generating process, and is proven to perform optimally (i.e. the measurement is unbiased in expectation) for all models reducible to a Markov process. As a result, not only can the MIC provide an accurate measure of distance solely on the basis of simulated data, but it can do so for a very wide class of data-generating processes. The paper first presents the strategies adopted to address the computational challenges that arise from extending the methodology to multivariate settings and validates the extension on VAR and DSGE models. The paper then carries out a comparison of the benchmark ABM of Caiani et al. (2016) and the DSGE framework of Smets and Wouters (2007), which, to our knowledge, is the first direct comparison between a macroeconomic ABM and a DSGE model.
    Keywords: Model comparison; Agent-based models; Validation methods
    JEL: B41 C15 C52 C63
    Date: 2019–06
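A heavily simplified sketch of the idea behind the MIC: estimate a Markov transition matrix from simulated model output, then score the empirical series by its average description length (negative log-likelihood in bits) under that matrix. The actual criterion uses context-tree methods and bias corrections; this univariate, first-order version with made-up series is only meant to convey the mapping.

```python
from collections import defaultdict
import math

def transition_matrix(series, smoothing=1.0):
    """Estimate a Laplace-smoothed first-order Markov transition
    matrix from a discretized simulated series."""
    states = sorted(set(series))
    counts = {s: defaultdict(float) for s in states}
    for a, b in zip(series, series[1:]):
        counts[a][b] += 1.0
    probs = {}
    for s in states:
        total = sum(counts[s].values()) + smoothing * len(states)
        probs[s] = {t: (counts[s][t] + smoothing) / total for t in states}
    return probs

def info_distance(model_series, empirical_series):
    """Average negative log-likelihood (bits per observation) of the
    empirical data under the model's estimated transition matrix."""
    P = transition_matrix(model_series)
    nll, n = 0.0, 0
    for a, b in zip(empirical_series, empirical_series[1:]):
        nll -= math.log2(P.get(a, {}).get(b, 1e-12))
        n += 1
    return nll / n

good = [0, 1] * 100       # model whose dynamics match the data
bad = [0, 0, 1, 1] * 50   # model with the wrong transition structure
data = [0, 1] * 80        # "empirical" series
d_good = info_distance(good, data)
d_bad = info_distance(bad, data)
```

The model whose simulated dynamics match the data achieves a shorter description length, which is the sense in which the MIC ranks models by informational distance.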
  3. By: Shanka Subhra Mondal; Sharada Prasanna Mohanty; Benjamin Harlander; Mehmet Koseoglu; Lance Rane; Kirill Romanov; Wei-Kai Liu; Pranoot Hatwar; Marcel Salathe; Joe Byrum
    Abstract: In the IEEE Investment Ranking Challenge 2018, participants were asked to build a model that would identify the best-performing stocks based on their returns over a forward six-month window. Anonymized financial predictors and semi-annual returns were provided for a group of anonymized stocks from 1996 to 2017, divided into 42 non-overlapping six-month periods. The second half of 2017 was used as an out-of-sample test of the model's performance. The metrics used were Spearman's rank correlation coefficient and the normalized discounted cumulative gain (NDCG) of the top 20% of a model's predicted rankings. The top six participants were invited to describe their approaches. The solutions were varied, including training on selected subsets of the data, combinations of deep and shallow neural networks, different boosting algorithms, different models with different sets of features, linear support vector machines, and a combination of a convolutional neural network (CNN) and long short-term memory (LSTM).
    Date: 2019–06
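The two evaluation metrics are standard and easy to state precisely. A plain-Python sketch follows (no tie handling in the ranks, which the official scoring may treat differently):

```python
import math

def rankdata(values):
    """Rank 1 = largest value; ties broken by position (simplified,
    without average-rank tie handling)."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(pred, actual):
    """Spearman rank correlation via the rank-difference formula
    (exact when there are no ties)."""
    n = len(pred)
    d2 = sum((a - b) ** 2 for a, b in zip(rankdata(pred), rankdata(actual)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def ndcg_at_top(pred, rel, frac=0.2):
    """NDCG of the top `frac` of items ranked by predicted score;
    `rel` holds non-negative relevance values (e.g. shifted returns)."""
    k = max(1, int(len(pred) * frac))
    chosen = sorted(range(len(pred)), key=lambda i: -pred[i])[:k]
    dcg = sum(rel[i] / math.log2(r + 1) for r, i in enumerate(chosen, start=1))
    idcg = sum(v / math.log2(r + 1)
               for r, v in enumerate(sorted(rel, reverse=True)[:k], start=1))
    return dcg / idcg if idcg > 0 else 0.0
```

Spearman scores the whole predicted ordering, while NDCG of the top 20% rewards getting the head of the ranking right, which is what matters when only the best-ranked stocks are bought.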
  4. By: Bertani, Filippo; Ponta, Linda; Raberto, Marco; Teglio, Andrea; Cincotti, Silvano
    Abstract: During the last twenty years, we have witnessed the deep development of digital technologies. Artificial intelligence, software, and algorithms have come to affect our daily lives more and more frequently, and most people have not noticed it. Recently, economists seem to have perceived that this new technological wave could have consequences, but which ones are they? Will they be positive or negative? In this paper we try to give a possible answer to these questions through an agent-based computational approach; more specifically, we enriched the large-scale macroeconomic model EURACE with the concept of digital technologies in order to investigate the effect that their business dynamics have at the macroeconomic level. Our preliminary results show that the resulting productivity increase could be a double-edged sword: although the development of the digital-technology sector can create new job opportunities, these products could at the same time jeopardize employment within the traditional mass-production system.
    Keywords: Intangible assets, Industry 4.0, Digital revolution, Agent-based macroeconomics
    JEL: C63 O33
    Date: 2019–05–30
  5. By: Evandro Luquini; Nizam Omar
    Abstract: This research proposes the econophysics kinetic market model as an instance of an evolutionary algorithm. The immediate result of this proposal is a new replacement rule for family-competition genetic algorithms. It also represents a starting point for adding evolvable entities to kinetic market models.
    Date: 2019–06
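For readers unfamiliar with the econophysics model being reinterpreted, a minimal kinetic market (gas-like exchange) model looks like this: pairs of agents repeatedly pool and randomly split their wealth. The evolutionary-algorithm reading in the paper builds on exactly this pairwise exchange step; parameter choices here are illustrative.

```python
import random

def kinetic_exchange(wealth, steps, seed=0):
    """Basic kinetic market model: each step, two randomly chosen
    agents pool their wealth and split the pool at a uniform random
    fraction. Total wealth is conserved, and the stationary wealth
    distribution is approximately exponential (Boltzmann-Gibbs)."""
    rng = random.Random(seed)
    w = list(wealth)
    n = len(w)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        pool = w[i] + w[j]
        eps = rng.random()  # random split fraction of the pooled wealth
        w[i], w[j] = eps * pool, (1.0 - eps) * pool
    return w

w = kinetic_exchange([1.0] * 1000, steps=50000)
```

The conservation of the total (wealth in, wealth out) is the property that makes the exchange step usable as a replacement rule: the "offspring" pair replaces the "parent" pair without changing the population aggregate.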
  6. By: Majid Bazarbash
    Abstract: Recent advances in digital technology and big data have allowed FinTech (financial technology) lending to emerge as a potentially promising solution to reduce the cost of credit and increase financial inclusion. However, machine learning (ML) methods that lie at the heart of FinTech credit have remained largely a black box for the nontechnical audience. This paper contributes to the literature by discussing potential strengths and weaknesses of ML-based credit assessment through (1) presenting core ideas and the most common techniques in ML for the nontechnical audience; and (2) discussing the fundamental challenges in credit risk analysis. FinTech credit has the potential to enhance financial inclusion and outperform traditional credit scoring by (1) leveraging nontraditional data sources to improve the assessment of the borrower’s track record; (2) appraising collateral value; (3) forecasting income prospects; and (4) predicting changes in general conditions. However, because of the central role of data in ML-based analysis, data relevance should be ensured, especially in situations when a deep structural change occurs, when borrowers could counterfeit certain indicators, and when agency problems arising from information asymmetry could not be resolved. To avoid digital financial exclusion and redlining, variables that trigger discrimination should not be used to assess credit rating.
    Date: 2019–05–17
  7. By: Mueller, Hannes Felix; Rauh, Christopher
    Abstract: There is a growing interest in better conflict prevention, and this provides a strong motivation for better conflict forecasting. A key problem of conflict forecasting for prevention is that predicting the start of conflict in previously peaceful countries is extremely hard. To make progress on this hard problem, this project exploits both supervised and unsupervised machine learning. Specifically, the latent Dirichlet allocation (LDA) model is used for feature extraction from 3.8 million newspaper articles, and these features are then used in a random forest model to predict conflict. We find that forecasting hard cases is possible and benefits from supervised learning despite the small sample size. Several topics are negatively associated with the outbreak of conflict, and these gain importance when predicting hard onsets. The trees in the random forest use the topics in lower nodes, where they are evaluated conditionally on conflict history; this allows the random forest to adapt to the hard problem and provides useful forecasts for prevention.
    Keywords: Armed Conflict; Forecasting; Machine Learning; Newspaper Text; Random Forest; Topic Models
    Date: 2019–05
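The conditional use of topic features that the authors report (trees split on conflict history near the root and evaluate newspaper topics in lower nodes) can be caricatured with a hand-written two-level tree. The topic choice, threshold, and risk numbers below are invented for illustration, not estimates from the paper.

```python
def predict_onset(conflict_history, econ_topic_share, threshold=0.15):
    """Hand-written two-level decision tree mirroring the structure
    described in the abstract: conflict history dominates the top
    split, and a topic share is evaluated conditionally in the lower
    node for the 'hard' previously-peaceful cases. All numbers are
    hypothetical."""
    if conflict_history:
        return 0.45   # high base risk after past conflict (illustrative)
    # lower node: for previously peaceful countries, a low share of
    # some topic in news coverage raises the predicted onset risk
    if econ_topic_share < threshold:
        return 0.12
    return 0.02
```

The point of the structure is that the same topic feature carries different information depending on conflict history, which a single global split could not express.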
  8. By: Di Wang; Qi Wu; Wen Zhang
    Abstract: This paper takes a deep learning approach to understanding consumer credit risk when e-commerce platforms issue unsecured credit to finance customers' purchases. The "NeuCredit" model can capture serial dependences in multi-dimensional time series data even when event frequencies differ across dimensions. It also captures nonlinear cross-sectional interactions among different time-evolving features. In addition, the predicted default probability is designed to be interpretable, decomposing risk into three components: the subjective risk indicating consumers' willingness to repay, the objective risk indicating their ability to repay, and the behavioral risk indicating consumers' behavioral differences. Using a unique dataset from one of the largest global e-commerce platforms, we show that extracting the information content of shopping behavioral data, alongside conventional payment records, requires a deep learning approach, which turns out to enhance forecasting performance significantly relative to traditional machine learning methods.
    Date: 2019–06
  9. By: Karim Jamal (University of Alberta); Michael Maier (University of Alberta); Shyam Sunder (School of Management and Cowles Foundation, Yale University)
    Abstract: Information dissemination and aggregation are key economic functions of financial markets. How intelligent do traders have to be for the complex task of aggregating diverse information (i.e., approximating the predictions of the rational expectations equilibrium) in a competitive double auction market? An apparent ex-ante answer is: intelligent enough to perform the bootstrap operation necessary for the task, somehow arriving at prices that are needed to generate those very prices. Constructing a path to such an equilibrium through rational behavior has remained beyond what we know of human cognitive abilities. Yet laboratory experiments report that profit-motivated human traders are able to aggregate information in some, but not all, market environments (Plott and Sunder 1988; Forsythe and Lundholm 1990). Algorithmic agents have the potential to yield insights into how simple individual behavior may perform this complex market function as an emergent phenomenon. We report on a computational experiment with markets populated by algorithmic traders who follow cognitively simple heuristics humans are known to use. These markets, too, converge to rational expectations equilibria in environments in which human markets converge, albeit slowly and noisily. The results suggest that a high level of individual intelligence or rationality is not necessary for efficient outcomes to emerge at the market level; the structure of the market itself is a source of the rationality observed in the outcomes.
    Keywords: Algorithmic traders, Rational expectations, Structural rationality, Means-end heuristic, Information aggregation, Zero-intelligence agents
    JEL: C92 D44 D50 D70 D82 G14
    Date: 2019–06
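A canonical example of a minimally intelligent trader is the zero-intelligence-constrained agent of Gode and Sunder: bids and asks are random subject only to a no-loss constraint. The sketch below is a stripped-down caricature (random matching, midpoint settlement), not the paper's actual market institution.

```python
import random

def zi_double_auction(buyer_values, seller_costs, rounds=2000, seed=1):
    """Zero-intelligence-constrained double auction: each round a
    random buyer bids uniformly below their private valuation, a
    random seller asks uniformly above their private cost, and a
    trade executes when bid >= ask (settled here at the midpoint).
    Returns the list of transaction prices."""
    rng = random.Random(seed)
    price_cap = max(buyer_values)        # arbitrary upper bound for asks
    prices = []
    for _ in range(rounds):
        v = rng.choice(buyer_values)     # buyer's private valuation
        c = rng.choice(seller_costs)     # seller's private cost
        bid = rng.uniform(0.0, v)        # constraint: never bid above value
        ask = rng.uniform(c, price_cap)  # constraint: never ask below cost
        if bid >= ask:
            prices.append(0.5 * (bid + ask))
    return prices

prices = zi_double_auction([0.6, 0.8, 1.0], [0.1, 0.3, 0.5])
```

Even with no optimization by individual agents, the no-loss constraints plus the double auction rules keep every transaction price between cost and value, which is the sense in which market structure, rather than trader intelligence, supplies the rationality.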
  10. By: Dessouky, Maged; Fu, Lunce; Hu, Shichun
    Abstract: Rail has long been an effective mode for transporting both people and goods. Freight trains are about four times more fuel-efficient than trucks, and passenger trains are popular because of their blend of efficiency, speed, and low emissions. Increasing rail network capacity, however, can be difficult and expensive. Finding more efficient ways to utilize existing rail network capacity can mitigate the impacts of growing freight demand. New communication technologies, such as Positive Train Control (PTC), have the potential to improve efficiency and minimize delays in freight and passenger railway operations. PTC enables trains to communicate and share critical information, such as speed and location, with each other in real time. This research brief highlights findings from the project "Integrated Management of Truck and Rail Systems in Los Angeles," which simulated the complex, busy freight and passenger rail corridor between downtown Los Angeles and Pomona to evaluate the effectiveness of proposed new scheduling and dispatching algorithms using PTC.
    Keywords: Engineering, Delays, Freight trains, Freight transportation, Headways, Passenger trains, Positive train control, Railroad tracks, Switches (Railroads)
    Date: 2019–05–01
  11. By: Felipe M. Cardoso; Carlos Gracia-Lazaro; Frederic Moisan; Sanjeev Goyal; Angel Sanchez; Yamir Moreno
    Abstract: Global supply networks in agriculture, manufacturing, and services are a defining feature of the modern world. The efficiency and the distribution of surpluses across different parts of these networks depend on the choices of intermediaries. This paper conducts price formation experiments with human subjects located in large complex networks to develop a better understanding of the principles governing behavior. Our first finding is that prices are higher and trade is significantly less efficient in small-world networks than in random networks. Our second finding is that location within a network is not an important determinant of pricing. An examination of the price dynamics suggests that traders on the cheapest (and hence active) paths raise prices while those off these paths lower them. We construct an agent-based model (ABM) that embodies this rule of thumb. Simulations of this ABM yield macroscopic patterns consistent with the experimental findings. Finally, we extrapolate the ABM onto significantly larger random and small-world networks and find that network topology remains a key determinant of pricing and efficiency.
    Date: 2019–06
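The rule of thumb identified in the price dynamics (raise your price when you are on the cheapest, active path; lower it otherwise) can be caricatured with two competing one-intermediary paths. Starting prices and step sizes are arbitrary.

```python
def simulate(rounds=200, up=0.02, down=0.02):
    """Two parallel one-intermediary paths between a seller and a
    buyer. Each round the cheaper intermediary captures the trade and
    raises its price, while the other, off the active path, lowers
    its price. A minimal caricature of the paper's ABM rule."""
    prices = [1.0, 0.2]
    for _ in range(rounds):
        cheap = 0 if prices[0] <= prices[1] else 1
        prices[cheap] += up                                     # on the active path
        prices[1 - cheap] = max(0.0, prices[1 - cheap] - down)  # off the path
    return prices

p = simulate()  # the two prices are driven toward each other
```

With two competing paths the rule produces Bertrand-like price competition; on a small-world topology, where fewer alternative paths exist, the same rule lets on-path intermediaries sustain higher prices, consistent with the experimental finding.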
  12. By: Svitlana Vyetrenko; Shaojie Xu
    Abstract: We demonstrate an application of risk-sensitive reinforcement learning to optimizing execution in limit order book markets. We represent order-execution decisions based on limit order book information as a Markov decision process, and train a trading agent in a market simulator that emulates multi-agent interaction by synthesizing the market response to our agent's execution decisions from historical data. Due to market impact, executing high-volume orders can incur significant cost. We learn trading signals from market microstructure in the presence of simulated market response and derive explainable decision-tree-based execution policies using risk-sensitive Q-learning to minimize execution cost subject to constraints on cost variance.
    Date: 2019–06
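One standard way to make tabular Q-learning risk-sensitive is the Mihatsch-Neuneier transform of the TD error, which up-weights negative surprises. The toy below applies it to a one-step execution choice between a certain spread cost and a cheaper-on-average order with occasional large slippage. It sketches the general technique only; the paper's simulator, state space, and policy class are far richer, and all numbers here are invented.

```python
import random

def risk_sensitive_qlearn(episodes=5000, alpha=0.01, kappa=0.6, seed=7):
    """Toy risk-sensitive Q-learning for a one-step execution choice.
    'aggressive' crosses the spread at a certain cost; 'passive' is
    cheaper on average but occasionally suffers large slippage.
    The Mihatsch-Neuneier weighting (kappa in (0, 1)) up-weights
    negative TD errors, so the agent penalizes the slippage tail."""
    rng = random.Random(seed)
    Q = {"aggressive": 0.0, "passive": 0.0}
    for _ in range(episodes):
        for action in Q:  # evaluate both actions each episode
            if action == "aggressive":
                reward = -1.0                                   # certain cost
            else:
                reward = -0.5 if rng.random() < 0.95 else -8.0  # fat tail
            delta = reward - Q[action]
            weight = (1 - kappa) if delta > 0 else (1 + kappa)
            Q[action] += alpha * weight * delta
    return Q

# Risk-neutral expected costs are -1.0 (aggressive) and -0.875 (passive);
# the risk-sensitive agent nevertheless learns to prefer the certain cost.
Q = risk_sensitive_qlearn()
```

Asymmetric weighting of the TD error is one of the simplest ways to encode a variance-style penalty without estimating the cost variance explicitly.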
  13. By: Osnat Zohar (Bank of Israel)
    Abstract: Real activity as well as expectations often exhibit asymmetric dynamics, namely, they increase gradually with occasional large downturns. Such dynamics emerge in a model with strong feedback between activity and information. In the model, active investment reveals private information about the state of the world. An agent (Follower) only learns about another agent's (Loner's) signals from his actions. Equilibrium in the model generates asymmetric cycles: entry to the market is gradual; exits tend to be abrupt and are followed by slow recoveries. The asymmetry in the cycle is magnified when information is public. If Follower observes Loner's payoffs and not just his actions, he is more likely to defer his entry compared to the benchmark model. Finally, model simulations show a positive correlation between investment and the dispersion of beliefs, which is largely attributed to the learning mechanism in the model.
    Keywords: dynamic learning, asymmetric cycles, slow recovery, private information
    JEL: C73 D82 D83 E32
    Date: 2019–05
  14. By: Mathias Barkhagen; Brian Fleming; Sergio Garcia Quiles; Jacek Gondzio; Jens Kroeske; Sotirios Sabanis; Arne Staal
    Abstract: A new framework for portfolio diversification is introduced which goes beyond the classical mean-variance theory and other known portfolio allocation strategies such as risk parity. It is based on a novel concept called portfolio dimensionality and ultimately relies on the minimization of ratios of convex functions. The latter arises naturally from our requirements that diversification measures should be leverage-invariant and related to the tail properties of the distribution of portfolio returns. This paper introduces the new framework and its relationship to standardized higher-order moments of portfolio returns. Moreover, it addresses the main drawbacks of standard diversification methodologies, which are based primarily on estimates of covariance matrices. Maximizing portfolio dimensionality leads to highly non-trivial optimization problems whose objective functions are typically non-convex with potentially multiple local optima. Two complementary global optimization algorithms are therefore presented: for problems of moderate size, a deterministic branch-and-bound algorithm is developed, whereas for larger problems a stochastic global optimization algorithm based on Gradient Langevin Dynamics is given. We demonstrate through numerical experiments that the introduced diversification measures possess the desired properties identified in the portfolio diversification literature.
    Date: 2019–06
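Gradient Langevin Dynamics, the stochastic global optimizer mentioned for larger problems, is gradient descent plus injected Gaussian noise that lets the iterate escape shallow local minima. A toy one-dimensional version on a tilted double-well objective (all parameter values illustrative):

```python
import math
import random

def gradient_langevin(grad, x0, step=0.01, beta=8.0, iters=20000, seed=3):
    """Unadjusted Langevin iteration: a gradient step plus Gaussian
    noise of scale sqrt(2*step/beta). The noise lets the iterate hop
    across energy barriers, approximately targeting the Gibbs measure
    proportional to exp(-beta * f). Toy 1-D version."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        x += -step * grad(x) + math.sqrt(2.0 * step / beta) * rng.gauss(0.0, 1.0)
    return x

# Tilted double well f(x) = (x^2 - 1)^2 + 0.3 x: plain gradient descent
# started at x0 = 1.0 stays in the shallower right-hand well, while the
# noisy iterate can cross the barrier toward the deeper well near x = -1.
grad_f = lambda x: 4.0 * x * (x * x - 1.0) + 0.3
x = gradient_langevin(grad_f, x0=1.0)
```

The inverse temperature beta trades off exploration against precision: larger beta concentrates the iterates near minima, smaller beta crosses barriers more often, which is the relevant property for the non-convex dimensionality-maximization problems described above.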
  15. By: Baranova, Yuliya (Bank of England); Douglas, Graeme (Bank of England); Silvestri, Laura (Bank of England)
    Abstract: We build a framework to simulate stress dynamics in the UK corporate bond market. This quantifies how the behaviours and interactions of major market participants, including open-ended funds, dealers, and institutional investors, can amplify different types of shocks to corporate bond prices. We model market participants’ incentives to buy or sell corporate bonds in response to initial price falls, the constraints under which they operate (including those arising due to regulation), and how the resulting behaviour may amplify initial falls in price and impact market functioning. We find that the magnitude of amplification depends on the cause of the initial reduction in price and is larger in the case of shocks to credit risk or risk-free interest rates, than in the case of a perceived deterioration in corporate bond market liquidity. Amplification also depends on agents’ proximity to their regulatory constraints. We further find that long-term institutional investors (eg pension funds) only partially mitigate the amplification due to their slower-moving nature. Finally, we find that shocks to corporate bond spreads, similar in magnitude to the largest weekly moves observed in the past, could trigger asset sales that may test the capacity of dealers to absorb them.
    Keywords: Corporate bond market; fire-sales; open-ended investment funds; pension funds; insurance companies; dealers; stress simulation
    JEL: G10 G20
    Date: 2019–06–14
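The amplification mechanism described (an initial price fall triggers outflows and asset sales, which depress prices further) can be reduced to a one-line fixed-point iteration. The coefficients below are illustrative, not the paper's calibration.

```python
def amplified_fall(initial_fall, flow_sens=0.5, impact=0.4, rounds=50):
    """Stylized amplification loop: a price fall triggers redemptions
    and asset sales proportional to the fall (flow_sens), and those
    sales depress the price further through a linear impact term.
    Iterates to the fixed point initial_fall / (1 - flow_sens*impact)
    whenever flow_sens * impact < 1."""
    fall = initial_fall
    for _ in range(rounds):
        sales = flow_sens * fall       # forced sales induced this round
        fall = initial_fall + impact * sales
    return fall

# A 10% initial fall is amplified to 12.5% with these coefficients.
total = amplified_fall(0.10)
```

The product flow_sens * impact plays the role of the amplification factor: higher flow sensitivity (agents closer to their constraints) or higher price impact (less dealer absorption capacity) pushes it toward 1, at which point the loop no longer converges.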

This nep-cmp issue is ©2019 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.