nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒02‒18
fourteen papers chosen by

  1. Stochastic Approximation Schemes for Economic Capital and Risk Margin Computations By David Barrera; Stéphane Crépey; Babacar Diallo; Gersende Fort; Emmanuel Gobet; Uladzislau Stazhynski
  2. Machine learning in the service of policy targeting: the case of public credit guarantees By Monica Andini; Michela Boldrini; Emanuele Ciani; Guido de Blasio; Alessio D'Ignazio; Andrea Paladini
  3. Dynamic Bank Runs: an agent-based approach By Toni Ricardo Eugenio dos Santos; Marcio Issao Nakane
  4. The economic impacts of UK fiscal policies and their spillover effects on the energy system By Andrew G Ross; Grant Allan; Gioele Figus; Peter G McGregor; J Kim Swales; Karen Turner
  5. Simulating financial contagion dynamics in random interbank networks By John Leventides; Kalliopi Loukaki; Vassilios G. Papavassiliou
  6. Low-rank tensor approximation for Chebyshev interpolation in parametric option pricing By Kathrin Glau; Daniel Kressner; Francesco Statti
  7. The added value of more accurate predictions for school rankings By Fritz Schiltz; Paolo Sestito; Tommaso Agasisti; Kristof De Witte
  8. Asymptotic Poincaré Maps along the Edges of Polytopes By Hassan Najafi Alishah; Pedro Duarte; Telmo Peixe
  9. Risk management with machine-learning-based algorithms By Simon Fecamp; Joseph Mikael; Xavier Warin
  10. High-performance stock index trading: making effective use of a deep LSTM neural network By Chariton Chalvatzis; Dimitrios Hristu-Varsakelis
  11. What predicts corruption? By Colonnelli, E; Gallego, J.A.; Prem, M
  12. Simultaneous inference for Best Linear Predictor of the Conditional Average Treatment Effect and other structural functions By Victor Chernozhukov; Vira Semenova
  13. Should We Care (More) About Data Aggregation? Evidence from the Democracy-Growth-Nexus. By Klaus Gründler; Tommy Krieger
  14. Physics and Derivatives: Effective-Potential Path-Integral Approximations of Arrow-Debreu Densities By Luca Capriotti; Ruggero Vaia

  1. By: David Barrera (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique); Stéphane Crépey (LaMME - Laboratoire de Mathématiques et Modélisation d'Evry - INRA - Institut National de la Recherche Agronomique - UEVE - Université d'Évry-Val-d'Essonne - ENSIIE - CNRS - Centre National de la Recherche Scientifique); Babacar Diallo (LaMME - Laboratoire de Mathématiques et Modélisation d'Evry - INRA - Institut National de la Recherche Agronomique - UEVE - Université d'Évry-Val-d'Essonne - ENSIIE - CNRS - Centre National de la Recherche Scientifique); Gersende Fort (IMT - Institut de Mathématiques de Toulouse UMR5219 - UT1 - Université Toulouse 1 Capitole - UT2J - Université Toulouse - Jean Jaurès - UPS - Université Toulouse III - Paul Sabatier - Université Fédérale Toulouse Midi-Pyrénées - PRES Université de Toulouse - INSA Toulouse - Institut National des Sciences Appliquées - Toulouse - INSA - Institut National des Sciences Appliquées - CNRS - Centre National de la Recherche Scientifique); Emmanuel Gobet (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique); Uladzislau Stazhynski (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique)
    Abstract: We consider the problem of the numerical computation by an insurance company or a bank of its economic capital, in the form of a value-at-risk or expected shortfall of its loss over a given time horizon. This loss includes the appreciation of the mark-to-model of the liabilities of the firm, which we account for by nested Monte Carlo à la Gordy and Juneja (2010) or by regression à la Broadie, Du, and Moallemi (2015). Using a stochastic approximation point of view on value-at-risk and expected shortfall, we establish the convergence of the resulting economic capital simulation schemes, under mild assumptions that only bear on the theoretical limiting problem at hand, as opposed to assumptions on the approximating problems in Gordy-Juneja (2010) and Broadie-Du-Moallemi (2015). Our economic capital estimates can then be made conditional in a Markov framework and integrated in an outer Monte Carlo simulation to yield the risk margin of the firm, corresponding to a market value margin (MVM) in insurance or to a capital valuation adjustment (KVA) in banking parlance. This is illustrated numerically by a KVA case study implemented on GPUs.
    Date: 2019
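    The stochastic approximation viewpoint on value-at-risk and expected shortfall that the abstract builds on can be illustrated with a classical single-level Robbins-Monro recursion (in the spirit of Bardou, Frikha, and Pagès); the sketch below is a minimal illustration of that idea, not the authors' nested or regression-based schemes, and all names and step-size choices are illustrative.

```python
import random

def var_es_robbins_monro(sample_loss, alpha=0.975, n_iter=200_000, seed=0):
    """Estimate value-at-risk (the alpha-quantile) and expected shortfall of a
    loss distribution by stochastic approximation, one simulated loss per step."""
    rng = random.Random(seed)
    xi, es = 0.0, 0.0              # running VaR and ES estimates
    for n in range(1, n_iter + 1):
        gamma = 1.0 / n ** 0.75    # step sizes: sum diverges, sum of squares converges
        loss = sample_loss(rng)
        # VaR update: drift xi until P(loss <= xi) = alpha
        xi -= gamma * ((1.0 if loss <= xi else 0.0) - alpha)
        # ES via the representation ES = E[xi + (loss - xi)^+ / (1 - alpha)],
        # averaged online along the run
        es += (xi + max(loss - xi, 0.0) / (1.0 - alpha) - es) / n
    return xi, es
```

    For standard normal losses the recursion should settle near the true 97.5% quantile (about 1.96) and expected shortfall (about 2.34).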
  2. By: Monica Andini (Bank of Italy); Michela Boldrini (University of Bologna); Emanuele Ciani (Bank of Italy); Guido de Blasio (Bank of Italy); Alessio D'Ignazio (Bank of Italy); Andrea Paladini (University of Rome "La Sapienza")
    Abstract: We use Machine Learning (ML) predictive tools to propose a policy-assignment rule designed to increase the effectiveness of public guarantee programs. This rule can be used as a benchmark to improve targeting in order to reach the stated policy goals. Public guarantee schemes should target firms that are both financially constrained and creditworthy, but they often employ naïve assignment rules (mostly based only on the probability of default) that may lead to an inefficient allocation of resources. Examining the case of Italy’s Guarantee Fund, we suggest a benchmark ML-based assignment rule, trained and tested on credit register data. Compared with the current eligibility criteria, the ML-based benchmark leads to a significant improvement in the effectiveness of the Fund in expanding firms' access to credit. We discuss the problems in estimating and using these algorithms for the actual implementation of public policies, such as transparency and omitted payoffs.
    Keywords: machine learning, program evaluation, loan guarantees
    JEL: C5 H81
    Date: 2019–02
  3. By: Toni Ricardo Eugenio dos Santos; Marcio Issao Nakane
    Abstract: This paper simulates bank runs by using an agent-based approach to assess depositors’ behavior under various scenarios in a Diamond-Dybvig model framework, answering the following question: what happens if several depositors and banks play multiple rounds of a Diamond-Dybvig economy? The main contribution to the literature is that we take into account a sequential service restriction and the influence of the neighborhood on the decision of patient depositors to withdraw earlier or later. Our simulations show that the number of bank runs goes to zero as banks grow and market concentration increases in the long run.
    Keywords: Liquidity; Banking; Bank run
    JEL: G21
    Date: 2019–02–13
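    The neighborhood-influence channel described above can be caricatured in a few lines: impatient depositors always withdraw, and a patient depositor withdraws early once enough neighbors do. The ring-lattice sketch below (all parameter names and values hypothetical) is a stripped-down stand-in for the paper's agent-based model, with no banks, sequential service, or long-run market concentration dynamics.

```python
import random

def neighborhood_run(n_agents=200, frac_impatient=0.1, threshold=0.5,
                     radius=2, rounds=10, seed=1):
    """Toy contagion of early withdrawal: impatient depositors always withdraw;
    a patient depositor withdraws early once more than `threshold` of its
    neighbors (within `radius` on a ring) are withdrawing.
    Returns the final fraction of depositors withdrawing early."""
    rng = random.Random(seed)
    impatient = [rng.random() < frac_impatient for _ in range(n_agents)]
    withdraw = list(impatient)
    for _ in range(rounds):
        nxt = list(withdraw)
        for i in range(n_agents):
            if withdraw[i]:
                continue               # withdrawal is irreversible here
            nbrs = [withdraw[(i + d) % n_agents]
                    for d in range(-radius, radius + 1) if d != 0]
            if sum(nbrs) / len(nbrs) > threshold:
                nxt[i] = True
        withdraw = nxt
    return sum(withdraw) / n_agents
```

    With an unreachable threshold only the impatient withdraw; with a low threshold, withdrawals spread outward from each impatient depositor.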
  4. By: Andrew G Ross (Department of Economics, University of Strathclyde); Grant Allan (Department of Economics, University of Strathclyde); Gioele Figus (Department of Economics, University of Strathclyde); Peter G McGregor (Department of Economics, University of Strathclyde); J Kim Swales (Department of Economics, University of Strathclyde); Karen Turner (Centre for Energy Policy, University of Strathclyde)
    Abstract: The energy system and the economy are inextricably intertwined. While this interdependence is, of course, widely recognised, it has not featured prominently in assessing the likely impact of economic policies. In principle, broad fiscal policies are likely to have a significant influence on key elements of the energy system, the neglect of which may lead to inefficiencies in the design of appropriate energy and economic policies. The importance of this in practice depends on the strength of the spillover effects from fiscal policy instruments to energy policy goals. This is the focus of this paper. We employ a multi-sectoral computable general equilibrium (CGE) approach for the UK which allows us to track the impact of key fiscal policy interventions on key goals of economic and energy policies. Overall, our results suggest that a double dividend - a simultaneous stimulus to the economy and a reduction in emissions – induced by an increase in current public spending or a hike in the income tax rate seem unlikely in the UK context. Nonetheless, there are undoubted differential spillover effects on key components of the energy system from tax and public spending interventions that may prove capable of being exploited through the coordination of fiscal and energy policies. Even if it seems doubtful that fiscal policies would be formulated with a view to improved coordination with energy policies, policymakers should at least be aware of likely direction and scale of fiscal spillover effects to the energy system.
    Keywords: Energy policy, fiscal policy, income tax
    JEL: C68 D58 Q43 Q48
    Date: 2018–12
  5. By: John Leventides; Kalliopi Loukaki; Vassilios G. Papavassiliou
    Abstract: The purpose of this study is to assess the resilience of financial systems to exogenous shocks using techniques drawn from the theory of complex networks. We investigate by means of Monte Carlo simulations the fragility of several network topologies using a simple default model of contagion applied on interbank networks of varying sizes. We trigger a series of banking crises by exogenously failing each bank in the system and observe the propagation mechanisms that take effect within the system under different scenarios. Finally, we add to the existing literature by analyzing the interplay of several crucial drivers of interbank contagion, such as network topology, leverage, interconnectedness, heterogeneity and homogeneity across bank sizes and interbank exposures.
    Keywords: Interbank contagion; Random networks; Financial stability; Interconnectedness; Systemic risk
    Date: 2018–12
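    A minimal version of the exogenous-failure experiment the abstract describes can be sketched as a default cascade on an Erdős-Rényi interbank network; the capital and exposure numbers below are illustrative, and the loss rule (full loss given default, uniform exposures) is a simplification of the drivers the paper actually varies.

```python
import random

def default_cascade(n=50, p_link=0.2, capital=0.04, exposure=0.01,
                    shocked=0, seed=7):
    """Toy default cascade on a random interbank network: each directed link
    i -> j is an interbank claim of size `exposure` held by bank i on bank j;
    bank i defaults once losses from defaulted counterparties exceed its
    `capital` buffer. Returns the final fraction of defaulted banks."""
    rng = random.Random(seed)
    lenders = {j: [] for j in range(n)}    # lenders[j] = banks with claims on j
    for i in range(n):
        for j in range(n):
            if i != j and rng.random() < p_link:
                lenders[j].append(i)
    losses = [0.0] * n
    defaulted = {shocked}
    frontier = [shocked]
    while frontier:                        # propagate losses breadth-first
        nxt = []
        for j in frontier:
            for i in lenders[j]:
                if i in defaulted:
                    continue
                losses[i] += exposure
                if losses[i] > capital:
                    defaulted.add(i)
                    nxt.append(i)
        frontier = nxt
    return len(defaulted) / n
```

    When exposures are small relative to capital the initial failure is absorbed; once a single counterparty loss exceeds the buffer, the cascade spreads through the network.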
  6. By: Kathrin Glau; Daniel Kressner; Francesco Statti
    Abstract: Treating high dimensionality is one of the main challenges in the development of computational methods for solving problems arising in finance, where tasks such as pricing, calibration, and risk assessment need to be performed accurately and in real-time. Among the growing literature addressing this problem, Gass et al. [14] propose a complexity reduction technique for parametric option pricing based on Chebyshev interpolation. As the number of parameters increases, however, this method is affected by the curse of dimensionality. In this article, we extend this approach to treat high-dimensional problems: Additionally exploiting low-rank structures allows us to consider parameter spaces of high dimensions. The core of our method is to express the tensorized interpolation in tensor train (TT) format and to develop an efficient way, based on tensor completion, to approximate the interpolation coefficients. We apply the new method to two model problems: American option pricing in the Heston model and European basket option pricing in the multi-dimensional Black-Scholes model. In these examples we treat parameter spaces of dimensions up to 25. The numerical results confirm the low-rank structure of these problems and the effectiveness of our method compared to advanced techniques.
    Date: 2019–02
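    The paper's starting point, Chebyshev interpolation of a parametric pricing function (the Gass et al. method that the tensor-train extension builds on), can be sketched in one parameter dimension: evaluate the expensive pricer offline at Chebyshev nodes, then evaluate the cheap polynomial online. The Black-Scholes pricer below stands in for an expensive model; the low-rank tensor machinery that is the paper's actual contribution is not shown.

```python
import math

def bs_call(spot, strike, vol, rate, tau):
    """Black-Scholes call price, used here as the pricer to interpolate."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * tau) / (vol * math.sqrt(tau))
    d2 = d1 - vol * math.sqrt(tau)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return spot * N(d1) - strike * math.exp(-rate * tau) * N(d2)

def chebyshev_interpolant(f, a, b, n):
    """Degree-n Chebyshev interpolation of f on [a, b]:
    offline, f is evaluated at the n+1 Chebyshev-Lobatto points;
    online, the returned polynomial is cheap to evaluate."""
    nodes = [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * k / n)
             for k in range(n + 1)]
    vals = [f(x) for x in nodes]
    # Chebyshev coefficients via a naive O(n^2) discrete cosine transform,
    # with endpoint terms halved (Clenshaw-Curtis weights)
    coeffs = []
    for j in range(n + 1):
        s = sum(vals[k] * math.cos(math.pi * j * k / n) * (0.5 if k in (0, n) else 1.0)
                for k in range(n + 1))
        coeffs.append(2.0 * s / n * (0.5 if j in (0, n) else 1.0))
    def p(x):
        t = (2.0 * x - a - b) / (b - a)    # map back to [-1, 1]
        t = max(-1.0, min(1.0, t))
        return sum(c * math.cos(j * math.acos(t)) for j, c in enumerate(coeffs))
    return p
```

    Because the price is analytic in the volatility, a modest degree already reproduces the pricer to high accuracy across the parameter interval.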
  7. By: Fritz Schiltz (University of Leuven); Paolo Sestito (Bank of Italy); Tommaso Agasisti (Politecnico di Milano); Kristof De Witte (University of Leuven, University of Maastricht)
    Abstract: School rankings based on value-added (VA) estimates are subject to prediction errors, since VA is defined as the difference between predicted and actual performance. We introduce a more flexible random forest (RF) approach, rooted in the machine learning literature, to minimize prediction errors and to improve school rankings. Monte Carlo simulations demonstrate the advantages of this approach. Applying the proposed method to data on Italian middle schools indicates that school rankings are sensitive to prediction errors, even when extensive controls are added. RF estimates provide a low-cost way to increase the accuracy of predictions, resulting in more informative rankings and better policies.
    Keywords: value-added, school rankings, machine learning, Monte Carlo
    JEL: I21 C50
    Date: 2019–02
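    Value-added is defined above as the difference between actual and predicted performance, so a school ranking is just a sort on that difference. The toy sketch below (hypothetical helper names) also measures how far a ranking moves when the predictions are perturbed, which is the sensitivity the paper quantifies.

```python
def value_added_ranking(actual, predicted):
    """Rank units by value-added = actual minus predicted performance.
    Returns unit indices ordered from highest to lowest VA."""
    va = [a - p for a, p in zip(actual, predicted)]
    return sorted(range(len(va)), key=lambda i: va[i], reverse=True)

def rank_shift(order1, order2):
    """Total absolute displacement of units between two rankings,
    a crude measure of how much a ranking moved."""
    pos2 = {u: r for r, u in enumerate(order2)}
    return sum(abs(r - pos2[u]) for r, u in enumerate(order1))
```

    A small change in one school's predicted score can reorder the ranking, which is why more accurate first-stage predictions translate into more stable rankings.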
  8. By: Hassan Najafi Alishah; Pedro Duarte; Telmo Peixe
    Abstract: For a class of flows on polytopes, including many examples from evolutionary game theory, we describe a piecewise linear model which encapsulates the asymptotic dynamics along the heteroclinic network formed out of the polytope's vertices and edges. This piecewise linear flow is easy to compute even in higher dimensions, which allows the use of numerical algorithms to find invariant dynamical structures such as periodic, homoclinic or heteroclinic orbits, which, if robust, persist as invariant dynamical structures of the original flow. We apply this method to prove the existence of chaotic behavior in some Hamiltonian replicator systems on the five-dimensional simplex.
    Keywords: Flows on polytopes, Asymptotic dynamics, Heteroclinic networks, Poincaré maps, Hyperbolicity, Chaos, Evolutionary game theory
    Date: 2019–02
  9. By: Simon Fecamp; Joseph Mikael; Xavier Warin
    Abstract: We propose machine-learning-based algorithms to solve hedging problems in incomplete markets. Sources of incompleteness include illiquidity, untradable risk factors, discrete hedging dates and transaction costs. The strategies produced by the proposed algorithms are compared to classical stochastic control techniques on several payoffs using a variance criterion. One of the proposed algorithms is flexible enough to be used with several existing risk criteria. We furthermore propose a new moment-based risk criterion.
    Date: 2019–02
  10. By: Chariton Chalvatzis; Dimitrios Hristu-Varsakelis
    Abstract: We present a deep long short-term memory (LSTM)-based neural network for predicting asset prices, together with a successful trading strategy for generating profits based on the model's predictions. Our work is motivated by the fact that the effectiveness of any prediction model is inherently coupled to the trading strategy it is used with, and vice versa. This highlights the difficulty in developing models and strategies which are jointly optimal, but also points to avenues of investigation which are broader than prevailing approaches. Our LSTM model is structurally simple and generates predictions based on price observations over a modest number of past trading days. The model's architecture is tuned to promote profitability, as opposed to accuracy, under a strategy that does not trade simply based on whether the price is predicted to rise or fall, but rather takes advantage of the distribution of predicted returns, and the fact that a prediction's position within that distribution carries useful information about the expected profitability of a trade. The proposed model and trading strategy were tested on the S&P 500, Dow Jones Industrial Average (DJIA), NASDAQ and Russell 2000 stock indices, and achieved cumulative returns of 329%, 241%, 468% and 279%, respectively, over 2010-2018, far outperforming the benchmark buy-and-hold strategy as well as other recent efforts.
    Date: 2019–02
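    The key ingredient of the strategy, trading on where a prediction falls within the distribution of recent predictions rather than on its sign alone, can be sketched as a quantile-position rule. This is a hypothetical illustration of that idea, not the authors' actual strategy; the window and threshold values are made up.

```python
def quantile_position_signal(predicted_returns, window=60, enter=0.8):
    """Hypothetical position rule: go long (signal 1) only when the latest
    predicted return ranks high within the recent distribution of predictions,
    rather than merely being positive. Returns one 0/1 signal per prediction."""
    signals = []
    for t, r in enumerate(predicted_returns):
        history = predicted_returns[max(0, t - window):t]
        if len(history) < 10:
            signals.append(0)          # not enough history to rank against
            continue
        rank = sum(1 for h in history if h < r) / len(history)
        signals.append(1 if rank >= enter else 0)
    return signals
```

    A prediction equal to the bulk of recent predictions yields no trade even if positive, while an unusually high prediction triggers one.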
  11. By: Colonnelli, E; Gallego, J.A.; Prem, M
    Abstract: Using rich micro data from Brazil, we show that multiple popular machine learning models display extremely high levels of performance in predicting municipality-level corruption in public spending. Measures of private sector activity, financial development, and human capital are the strongest predictors of corruption, while public sector and political features play a secondary role. Our findings have implications for the design and cost-effectiveness of various anti-corruption policies.
    Date: 2019–02–08
  12. By: Victor Chernozhukov (Institute for Fiscal Studies and MIT); Vira Semenova (Institute for Fiscal Studies)
    Abstract: This paper provides estimation and inference methods for a structural function, such as the Conditional Average Treatment Effect (CATE), based on modern machine learning (ML) tools. We assume that such a function can be represented as a conditional expectation g(x) = E[Y(η0) | X = x] of a signal Y(η0), where η0 is an unknown nuisance function. In addition to CATE, examples of such functions include the regression function with partially missing outcome and the conditional average partial derivative. We approximate g(x) by a linear form p(x)'β0, where p(x) is a vector of approximating functions and β0 is the Best Linear Predictor. Plugging the first-stage estimate of the nuisance function into the signal, we estimate β0 via ordinary least squares of the signal on p(X). We deliver a high-quality estimate of the pseudo-target function p(x)'β0 that features (a) a pointwise Gaussian approximation at each point x, (b) a simultaneous Gaussian approximation uniformly over x, and (c) an optimal rate of convergence to p(x)'β0 uniformly over x. In case the misspecification error of the linear form decays sufficiently fast, these approximations automatically hold for the target function g instead of the pseudo-target. The first-stage nuisance parameter is allowed to be high-dimensional and is estimated by modern ML tools, such as neural networks, ℓ1-shrinkage estimators, and random forests. Using our method, we estimate the average price elasticity conditional on income using Yatchew and No (2001) data and provide uniform confidence bands for the target regression function.
    Date: 2018–07–04
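    The second stage described above is ordinary least squares of the first-stage signal on the vector of approximating functions p(X). A minimal sketch with the hypothetical basis p(x) = (1, x), taking the signal as given rather than constructing it from an ML nuisance estimate:

```python
def blp_second_stage(x, signal):
    """Second-stage OLS of a (first-stage) signal on the basis p(x) = (1, x);
    returns the Best-Linear-Predictor coefficients (b0, b1). In the paper the
    signal would be built from an ML estimate of the nuisance function."""
    n = len(x)
    sx, sy = sum(x), sum(signal)
    sxx = sum(v * v for v in x)
    sxy = sum(v * s for v, s in zip(x, signal))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope via normal equations
    b0 = (sy - b1 * sx) / n                          # intercept
    return b0, b1
```

    The fitted line b0 + b1 * x is the pseudo-target p(x)'β0 evaluated at x; with a richer basis the same OLS step goes through unchanged.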
  13. By: Klaus Gründler; Tommy Krieger
    Abstract: We compile data for 186 countries (1919 - 2016) and apply different aggregation methods to create new democracy indices. We observe that most of the available aggregation techniques produce indices that are often too favorable for autocratic regimes and too unfavorable for democratic regimes. The sole exception is a machine learning technique. Using a stylized model, we show that applying an index with implausibly low (high) scores for democracies (autocracies) in a regression analysis produces upward-biased OLS and 2SLS estimates. The results of an analysis of the effect of democracy on economic growth show that the distortions in the OLS and 2SLS estimates are substantial. Our findings imply that commonly used indices are not well suited for empirical purposes.
    Keywords: data aggregation, democracy, economic growth, indices, institutions, machine learning, measurement of democracy, non-random measurement error
    JEL: C26 C43 O10 P16 P48
    Date: 2019
  14. By: Luca Capriotti; Ruggero Vaia
    Abstract: We show how effective-potential path-integral methods, based on a simple and elegant idea originally due to Feynman and successfully employed in physics for a variety of quantum thermodynamics applications, can be used to develop an accurate and easy-to-compute semi-analytical approximation of transition probabilities and Arrow-Debreu densities for arbitrary diffusions. We illustrate the accuracy of the method by presenting results for the Black-Karasinski and the GARCH linear models, for which the proposed approximation provides remarkably accurate results, even in regimes of high volatility and for multi-year time horizons. The accuracy and computational efficiency of the proposed approximation make it a viable alternative to fully numerical schemes for a variety of derivatives pricing applications.
    Date: 2019–02

General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.