nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒12‒02
eleven papers chosen by
Stan Miles
Thompson Rivers University

  1. Bayesian regularized artificial neural networks for the estimation of the probability of default By Sariev, Eduard; Germano, Guido
  2. Deep Reinforcement Learning in Cryptocurrency Market Making By Jonathan Sadighian
  3. Users' Involvement in Value Co‐Creation: The More the Better? By Benoît Desmarchelier; Faridah Djellal; Faïz Gallouj
  4. A note on observational equivalence of micro assumptions on macro level By Ponomarenko, Alexey
  5. Artificial intelligence approach to momentum risk-taking By Ivan Cherednik
  6. Using Insurance to Manage Reliability in the Distributed Electricity Sector: Insights From an Agent-Based Model By Rolando Fuentes; Abhijit Sengupta
  7. Opportunities for agent-based modeling in fisheries social science By Burgess, Matthew G.; Carrella, Ernesto; Drexler, Michael; Axtell, Robert L.; Bailey, Richard M.; Watson, James R.; Cabral, Reniel B.; Clemence, Michaela; Costello, Christopher; Dorsett, Chris
  8. Administration by Algorithm? Public Management meets Public Sector Machine Learning By Veale, Michael; Brass, Irina
  9. Central bank tone and the dispersion of views within monetary policy committees By Paul Hubert; Fabien Labondance
  10. Confirmation Bias in Social Networks By Marcos Fernandes
  11. Firm-Level Political Risk: Measurement and Effects By Tarek A. Hassan; Stephan Hollander; Laurence van Lent; Ahmed Tahoun

  1. By: Sariev, Eduard; Germano, Guido
    Abstract: Artificial neural networks (ANN) have been extensively used for classification problems in many areas such as gene, text and image recognition. Although ANN are also popular for estimating the probability of default in credit risk, they have drawbacks; a major one is their tendency to overfit the data. Here we propose an improved Bayesian regularization approach to train ANN and compare it to the classical regularization that relies on the back-propagation algorithm for training feed-forward networks. We investigate different network architectures and test the classification accuracy on three data sets. Profitability, leverage and liquidity emerge as important financial default driver categories.
    Keywords: Artificial neural networks; Bayesian regularization; Credit risk; Probability of default; ES/K002309/1
    JEL: C11 C13
    Date: 2019–10–31
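The core idea, Bayesian regularization, can be sketched in miniature: training minimises F = beta*E_D + alpha*E_W, where E_D is the data-fit error and E_W = ½Σw² acts as a Gaussian prior on the weights. The single-neuron "network", the fixed alpha and beta, and the toy default data below are illustrative assumptions, not the authors' setup (full Bayesian regularization also re-estimates alpha and beta during training):

```python
import math

# Single logistic unit standing in for an ANN, trained by gradient descent on
# F = beta * E_D + alpha * E_W, where E_D is the cross-entropy data error and
# E_W = 0.5 * sum(w^2) is the Gaussian weight prior that fights overfitting.

def train(X, y, alpha=0.1, beta=1.0, lr=0.1, epochs=500):
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        gw = [alpha * wi for wi in w]   # gradient of the prior term alpha * E_W
        gb = 0.0
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-score))
            err = beta * (p - yi)       # gradient of the data term beta * E_D
            for j in range(n_features):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict_pd(w, b, x):
    """Estimated probability of default for feature vector x."""
    score = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-score))

# Synthetic toy data: [leverage, liquidity]; high leverage and low liquidity
# mark the defaulters (label 1).
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y = [1, 1, 0, 0]
w, b = train(X, y)
```

Raising alpha shrinks the weights and smooths the fitted probability of default, which is the overfitting control the abstract describes.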
  2. By: Jonathan Sadighian
    Abstract: This paper sets forth a framework for deep reinforcement learning as applied to market making (DRLMM) for cryptocurrencies. Two advanced policy gradient-based algorithms were selected as agents to interact with an environment that represents the observation space through limit order book data and order flow arrival statistics. Within the experiment, a feed-forward neural network is used as the function approximator and two reward functions are compared. The performance of each combination of agent and reward function is evaluated by daily and average trade returns. Using this DRLMM framework, the paper demonstrates the effectiveness of deep reinforcement learning in solving the stochastic inventory control challenges that market makers face.
    Date: 2019–11
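The abstract does not spell out the two reward functions; a common pair in the market-making literature, shown here purely as an assumption, is raw trade PnL versus PnL with a quadratic inventory penalty aimed at the inventory-control problem the abstract mentions:

```python
# Two candidate reward shapes for a market-making RL agent. These are generic
# examples from the market-making literature, not the paper's definitions.

def reward_pnl(realized_pnl: float) -> float:
    """Reward is simply the realised trading profit for the step."""
    return realized_pnl

def reward_inventory_penalized(realized_pnl: float, inventory: float,
                               penalty: float = 0.01) -> float:
    """Same PnL reward, minus a quadratic charge on open inventory.

    The penalty coefficient is an illustrative assumption; it trades off
    profit-seeking against the risk of carrying a large position.
    """
    return realized_pnl - penalty * inventory ** 2
```

Given identical fills, the penalized variant steers the learned policy toward flat inventory, whereas the pure-PnL variant is indifferent to position size.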
  3. By: Benoît Desmarchelier (CLERSE - Centre Lillois d’Études et de Recherches Sociologiques et Économiques - UMR 8019 - Université de Lille - ULCO - Université du Littoral Côte d'Opale - CNRS - Centre National de la Recherche Scientifique); Faridah Djellal (CLERSE - Centre Lillois d’Études et de Recherches Sociologiques et Économiques - UMR 8019 - Université de Lille - ULCO - Université du Littoral Côte d'Opale - CNRS - Centre National de la Recherche Scientifique); Faïz Gallouj (CLERSE - Centre Lillois d’Études et de Recherches Sociologiques et Économiques - UMR 8019 - Université de Lille - ULCO - Université du Littoral Côte d'Opale - CNRS - Centre National de la Recherche Scientifique)
    Abstract: The literature on value co-creation often postulates that a greater degree of co-production increases the potential for value co-creation. To test this hypothesis, we build a computational model of value proposition inspired by March's model of organizational learning (1991[24]). The model allows us to represent various cases of co-creation: (i) without co-production, (ii) with downstream co-production, and (iii) with upstream co-production. Repeated simulations partly support the literature. On the one hand, we find that deeper involvement of consumers in the value offering process increases the potential for value co-creation. On the other hand, we find that co-production can increase inequalities of satisfaction among consumers. Also, while scenarios with learning consumers offer the highest potential for value co-creation, a negative relationship emerges between the number of learning consumers and organizational performance. We thank the two anonymous reviewers for their very insightful comments on an earlier version of this paper. This paper draws on research carried out within the Co-Val project, funded by the European Commission under the Horizon 2020 framework.
    Keywords: Value co-creation, organizational learning, modelling
    Date: 2019–10–31
  4. By: Ponomarenko, Alexey
    Abstract: The author sets up a simplistic agent-based model where agents learn by reinforcement while observing an incomplete set of variables. The model is employed to generate an artificial dataset that is used to estimate standard macroeconometric models. The author shows that the results are qualitatively indistinguishable (in terms of the signs and significance of the coefficients and impulse responses) from the results obtained with a dataset that emerges in a genuinely rational system.
    Keywords: microfoundations, bounded rationality, reinforcement learning, agent-based model
    JEL: B41 C63 D83
    Date: 2019
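A minimal sketch of the kind of boundedly rational agent described here, assuming an epsilon-greedy value learner; the two stylized actions and their payoffs are invented for illustration and are not the paper's model:

```python
import random

# An agent that learns by reinforcement from realised payoffs alone, without
# observing the full state of the economy: it keeps a value estimate per
# action, explores with probability epsilon, and otherwise picks the action
# it currently values most.

class ReinforcementAgent:
    def __init__(self, n_actions, rng, epsilon=0.1, lr=0.1):
        self.q = [0.0] * n_actions   # estimated value of each action
        self.rng = rng
        self.epsilon = epsilon       # exploration probability
        self.lr = lr                 # learning rate

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def learn(self, action, payoff):
        # Move the value estimate toward the observed payoff.
        self.q[action] += self.lr * (payoff - self.q[action])

rng = random.Random(42)
agent = ReinforcementAgent(2, rng)
payoffs = [1.0, 0.2]                 # action 0 is objectively better
for _ in range(500):
    a = agent.choose()
    agent.learn(a, payoffs[a])
```

After enough rounds the agent ranks the actions correctly even though it never sees the variables generating its payoffs, which is the sense in which such behaviour can mimic a rational system in aggregate data.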
  5. By: Ivan Cherednik
    Abstract: We propose a mathematical model of momentum risk-taking, which is real-time risk management, and discuss its implementation: an automated momentum equity trading system. Risk-taking is one of the key components of general decision-making and a challenge for artificial intelligence and machine learning. We begin with a simple continuous model of news impact and then perform its discretization, adjusting it to deal with discontinuous functions. Stock charts are our main examples; stock markets are quite a test for any risk management theory. An entirely automated trading system based on our approach proved successful in extensive historical and real-time experiments. Its preimage is a new contract card game presented at the end of the paper.
    Date: 2019–11
  6. By: Rolando Fuentes; Abhijit Sengupta (King Abdullah Petroleum Studies and Research Center)
    Abstract: Our results suggest that consumers would transfer some of the inherent risks of a blackout to the utility for a price lower than their willingness to pay to achieve their desired level of protection, creating economic value. The purchase of insurance would help most consumers avoid a complete loss of power. Our simulations show that of those households that would otherwise experience a complete loss of power, on average between 1% and 15% can fully cover their excess energy needs through insurance. Between 50% and 70% of these households are budget constrained but would still be able to partially cover their excess energy needs.
    Keywords: Agent based models, Distributed energy resources, New business models in electricity, Reliability Insurance, Utility death spiral
    Date: 2019–07–21
  7. By: Burgess, Matthew G.; Carrella, Ernesto; Drexler, Michael; Axtell, Robert L.; Bailey, Richard M.; Watson, James R.; Cabral, Reniel B.; Clemence, Michaela; Costello, Christopher; Dorsett, Chris
    Abstract: Like other coupled natural-human systems, fisheries are ultimately managed from the human side. Models are important to understanding and predicting fishing industry responses to, and feedbacks with, changes in the ecosystem or management institutions. In situ controlled experiments are difficult or impossible to conduct. Recent advances in computation have made it possible to construct realistic agent-based models (ABMs) of human systems that track the behaviour of each individual, firm, or vessel. ABMs are widely used for both academic and applied purposes in many settings including finance, urban planning, and the military, but are not yet mainstream in fisheries science and management. ABMs are well suited to understanding emergent consequences of fisher interactions, heterogeneity, and bounded rationality, especially in complex ecological and institutional contexts. For these reasons, we argue that ABMs of human behaviour can contribute significantly to fisheries social science in three areas: (i) understanding interactions between multiple management institutions, (ii) incorporating cognitive and behavioural sciences into fisheries science and practice, and (iii) understanding and projecting the social consequences of management institutions. We provide simple worked examples illustrating the potential for ABMs in each of these areas, using the POSEIDON model, and we discuss terms of reference for addressing common ABM development and application challenges.
    Date: 2018–11–16
  8. By: Veale, Michael; Brass, Irina
    Abstract: Public bodies and agencies increasingly seek to use new forms of data analysis in order to provide 'better public services'. These reforms have consisted of digital service transformations generally aimed at 'improving the experience of the citizen', 'making government more efficient' and 'boosting business and the wider economy'. More recently, however, there has been a push to use administrative data to build algorithmic models, often using machine learning, to help make day-to-day operational decisions in the management and delivery of public services, rather than providing general policy evidence. This chapter asks several questions relating to this. What are the drivers of these new approaches? Is public sector machine learning a smooth continuation of e-Government, or does it pose a fundamentally different challenge to practices of public administration? And how are public management decisions and practices at different levels enacted when machine learning solutions are implemented in the public sector? Focussing on different levels of government (the macro, the meso and the 'street level'), we map out and analyse current efforts to frame and standardise machine learning in the public sector, noting that they raise several concerns around the skills, capacities, processes and practices governments currently employ. These are likely to have value-laden, political consequences worthy of significant scholarly attention.
    Date: 2019–04–19
  9. By: Paul Hubert (Sciences Po - OFCE); Fabien Labondance (Université de Bourgogne Franche-Comté, CRESE)
    Abstract: Does policymakers’ choice of words matter? We explore empirically whether central bank tone conveyed in FOMC statements contains useful information for financial market participants. We quantify central bank tone using computational linguistics and identify exogenous shocks to central bank tone orthogonal to the state of the economy. Using an ARCH model and a high-frequency approach, we find that positive central bank tone increases interest rates at the 1-year maturity. We therefore investigate which potential pieces of information could be revealed by central bank tone. Our tests suggest that it relates to the dispersion of views among FOMC members. This information may be useful to financial markets to understand current and future policy decisions. Finally, we show that central bank tone helps predict future policy decisions.
    Date: 2019–11
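A stripped-down version of dictionary-based tone quantification of the kind applied to FOMC statements: score a statement by the net share of positive versus negative words. The tiny word lists below are illustrative assumptions, not the authors' lexicon:

```python
# Dictionary-based tone: tone = (n_positive - n_negative) / n_words.
# The word lists are deliberately small stand-ins for a real lexicon.

POSITIVE = {"strong", "improved", "expansion", "gains", "solid"}
NEGATIVE = {"weak", "declined", "contraction", "losses", "subdued"}

def tone(statement: str) -> float:
    words = [w.strip(".,").lower() for w in statement.split()]
    if not words:
        return 0.0
    n_pos = sum(w in POSITIVE for w in words)
    n_neg = sum(w in NEGATIVE for w in words)
    return (n_pos - n_neg) / len(words)

hawkish = "economic activity has been strong and labor market gains were solid"
dovish = "growth has been weak and business investment declined"
```

A positive score flags an upbeat ("hawkish-leaning") statement and a negative score a downbeat one; the paper's contribution is isolating the component of such a score that is orthogonal to the state of the economy.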
  10. By: Marcos Fernandes
    Abstract: I propose a social learning model that investigates how confirmatory bias affects public opinion when agents exchange information over a social network. Besides exchanging opinions with friends, individuals observe a public sequence of potentially ambiguous signals and interpret it according to a rule that accounts for confirmation bias. I first show that, regardless of the level of ambiguity, and both for a single individual and for a networked society, only two types of opinions might be formed, and both are biased. One opinion type, however, is necessarily less biased (more efficient) than the other, depending on the state of the world. The size of both biases depends on the ambiguity level and the relative magnitude of the state and confirmatory biases. In this context, long-run learning is not attained even when individuals interpret ambiguity impartially. Finally, since it is not trivial to ascertain analytically the probability that the efficient consensus emerges when individuals are connected through a social network and have different priors, I use simulations to analyze its determinants. Three main results derived from this exercise are that, in expected terms, i) some network topologies are more conducive to consensus efficiency, ii) some degree of partisanship enhances consensus efficiency even under confirmatory bias, and iii) open-mindedness, i.e. when partisans agree to exchange opinions with other partisans holding polar opposite beliefs, might harm efficiency in some cases.
    Date: 2019
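The confirmatory-bias interpretation rule can be illustrated with a deterministic sketch: agents apply Bayes' rule to unambiguous signals but read ambiguous ones as confirming whatever they currently believe. The signal accuracy and the signal sequence below are invented for illustration, not taken from the paper:

```python
# Deterministic sketch of confirmatory-biased updating: unambiguous signals
# are processed by Bayes' rule, but ambiguous signals (None) are read by each
# agent as supporting the state it currently favours.

ACCURACY = 0.7  # P(an unambiguous signal matches the true state); assumed

def bayes_update(belief, signal):
    """Update P(state = 1) after observing a binary signal."""
    like1 = ACCURACY if signal == 1 else 1 - ACCURACY
    like0 = 1 - ACCURACY if signal == 1 else ACCURACY
    return belief * like1 / (belief * like1 + (1 - belief) * like0)

def final_belief(signals, prior):
    belief = prior
    for s in signals:
        if s is None:                    # ambiguity: confirmation bias applies
            s = 1 if belief > 0.5 else 0
        belief = bayes_update(belief, s)
    return belief

# 1 supports state 1, 0 supports state 0, None is ambiguous.
public_signals = [1, None, 1, 0, None, 1, None, 1, 0, None]
```

Starting from priors 0.8 and 0.2, two agents process the same ten public signals yet end near opposite certainties, which illustrates the failure of long-run learning the abstract describes.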
  11. By: Tarek A. Hassan (Boston University); Stephan Hollander (Tilburg University); Laurence van Lent (Frankfurt School of Finance and Management); Ahmed Tahoun (London Business School)
    Abstract: We adapt simple tools from computational linguistics to construct a new measure of political risk faced by individual US firms: the share of their quarterly earnings conference calls that they devote to political risks. We validate our measure by showing that it correctly identifies calls containing extensive conversations on risks that are political in nature, that it varies intuitively over time and across sectors, and that it correlates with the firm's actions and stock market volatility in a manner that is highly indicative of political risk. Firms exposed to political risk retrench hiring and investment and actively lobby and donate to politicians. These results continue to hold after controlling for news about the mean (as opposed to the variance) of political shocks. Interestingly, the vast majority of the variation in our measure is at the firm level rather than at the aggregate or sector level, in the sense that it is neither captured by the interaction of sector and time fixed effects, nor by heterogeneous exposure of individual firms to aggregate political risk. The dispersion of this firm-level political risk increases significantly at times of high aggregate political risk. Decomposing our measure of political risk by topic, we find that firms that devote more time to discussing risks associated with a given political topic tend to increase lobbying on that topic, but not on other topics, in the following quarter.
    Keywords: Political uncertainty, quantification, firm-level, lobbying
    JEL: D8 E22 E24 E32 E6 G18 G32 G38 H32
    Date: 2019–06
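The measure lends itself to a very simplified sketch: score a transcript by the share of its words that are political terms appearing near a risk synonym. The term lists and window size below are illustrative assumptions (the paper derives its political vocabulary from training texts rather than a hand-picked list):

```python
# Simplified firm-level political risk score: the fraction of transcript words
# that are political terms with a risk synonym within a fixed word window.
# POLITICAL, RISK and the window size are illustrative assumptions.

POLITICAL = {"regulation", "tariff", "election", "legislation", "government"}
RISK = {"risk", "risks", "uncertainty", "uncertain", "exposure"}

def political_risk_share(transcript: str, window: int = 10) -> float:
    words = [w.strip(".,").lower() for w in transcript.split()]
    if not words:
        return 0.0
    hits = 0
    for i, w in enumerate(words):
        if w in POLITICAL:
            nearby = words[max(0, i - window): i + window + 1]
            if any(t in RISK for t in nearby):
                hits += 1
    return hits / len(words)

call = ("we see meaningful uncertainty around the new tariff legislation "
        "while demand in our core segment remains healthy")
```

Comparing scores across firms and quarters is what lets the measure pick up firm-level rather than aggregate variation in political risk.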

This nep-cmp issue is ©2019 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.