nep-cmp New Economics Papers
on Computational Economics
Issue of 2017‒11‒05
fourteen papers chosen by



  1. A Calibration of the Shadow Rate to the Euro Area Using Genetic Algorithms By Eric McCoy; Ulrich Clemens
  2. Dynamic Bank Runs: an agent-based approach By Toni Ricardo Eugenio dos Santos; Marcio Issao Nakane
  3. Computational Methods for Martingale Optimal Transport problems By Gaoyue Guo; Jan Obloj
  4. Regulatory Learning: how to supervise machine learning models? An application to credit scoring By Dominique Guegan; Bertrand Hassani
  5. Calibration of Machine Learning Classifiers for Probability of Default Modelling By Pedro G. Fonseca; Hugo D. Lopes
  6. A case study in efficient programming in Stata and Mata: Speeding up the ardl estimation command By Daniel C. Schneider; Sebastian Kripfganz
  7. The State of Applied Econometrics: Causality and Policy Evaluation By Susan Athey; Guido Imbens
  8. Biome composition in deforestation deterrence and GHG emissions in Brazil By Joaquim Bento de Souza Ferreira Filho; Mark Horridge
  9. Dynamic Scoring of Tax Reforms in the EU By Dolls, Mathias; Wittneben, Christian
  10. Growth and Welfare under Endogenous Lifetime By Maik T. Schneider; Ralph Winkler
  11. Customer Consolidated Routing Problem – An Omni-channel Retail Study By Paul, J.; Agatz, N.A.H.; Spliet, R.; de Koster, M.B.M.
  12. Social Capital and Labor Market Networks By Brian J. Asquith; Judith K. Hellerstein; Mark J. Kutzbach; David Neumark
  13. A Structural Topic Model of the Features and the Cultural Origins of the Baconian Program By Peter Grajzl; Peter Murrell
  14. Generalized Random Forests By Susan Athey; Julie Tibshirani; Stefan Wager

  1. By: Eric McCoy; Ulrich Clemens
    Abstract: In the face of the lower bound on interest rates, central banks have relied on unconventional policy tools such as large-scale asset purchases and forward guidance to try to affect long-term interest rates and provide monetary stimulus to the economy. Assessing the impact of these measures and summarising the overall stance of monetary policy in this new environment has proven to be a challenge for academics and central banks. As a result, researchers have modified existing term structure models to accommodate close-to-zero or even negative interest rates. The paper begins with a non-technical overview of Leo Krippner's two-factor shadow rate model (K-ANSM2), explaining the underlying mechanics of the model through an illustrative example. It then presents the results obtained from calibrating Krippner's K-ANSM2 shadow rate model to the euro area using genetic algorithms and discusses the pros and cons of genetic algorithms as an alternative to the currently used Nelder-Mead optimisation routine. Finally, the paper analyses the strengths and weaknesses of the shadow short rate as a tool to illustrate the stance and dynamics of monetary policy.
    JEL: E43 E44 E52 E58
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:euf:dispap:051&r=cmp
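    A genetic-algorithm calibration of the kind the paper describes can be sketched in a few lines. This is a minimal generic sketch, not the paper's K-ANSM2 code: the fit_error objective, the two-parameter toy "curve", and all tuning constants are invented stand-ins.

      import numpy as np

      rng = np.random.default_rng(0)

      def fit_error(params, observed_yields):
          # Placeholder objective: squared error of a toy linear "curve".
          # A real K-ANSM2 objective would price the yield curve from the
          # two latent factors and compare it to market data.
          model_yields = params[0] + params[1] * np.arange(len(observed_yields))
          return np.sum((model_yields - observed_yields) ** 2)

      def genetic_algorithm(observed, pop_size=50, n_params=2,
                            generations=200, mutation_sd=0.05):
          pop = rng.normal(size=(pop_size, n_params))
          for _ in range(generations):
              fitness = np.array([fit_error(p, observed) for p in pop])
              # Truncation selection: keep the better half as parents.
              parents = pop[np.argsort(fitness)][: pop_size // 2]
              # Crossover: average random pairs of parents.
              pairs = rng.integers(0, len(parents), size=(pop_size, 2))
              children = parents[pairs].mean(axis=1)
              # Mutation: Gaussian perturbations preserve diversity.
              pop = children + rng.normal(scale=mutation_sd, size=children.shape)
          fitness = np.array([fit_error(p, observed) for p in pop])
          return pop[np.argmin(fitness)]

      best = genetic_algorithm(observed=np.array([0.1, 0.3, 0.5, 0.8, 1.1]))

    Unlike a single Nelder-Mead run, the population-based search does not hinge on one starting point; the cost is many more objective evaluations, which is essentially the trade-off the paper weighs.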
  2. By: Toni Ricardo Eugenio dos Santos; Marcio Issao Nakane
    Abstract: This paper simulates bank runs using an agent-based approach to assess depositors' behavior under various scenarios in a Diamond-Dybvig framework, answering the following question: what happens if several depositors and banks play multiple rounds of a Diamond-Dybvig economy? The main contribution to the literature is that we take into account a sequential service restriction and the influence of the neighborhood on patient depositors' decision to withdraw earlier or later. Our simulations show that the number of bank runs goes to zero as banks grow and market concentration increases in the long run.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:bcb:wpaper:465&r=cmp
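    The run dynamics can be pictured with a toy agent-based simulation. This is a minimal sketch assuming a single bank and a ring neighborhood; the thresholds and population shares are illustrative, not the authors' calibration.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_run(n=1000, share_impatient=0.1, liquid_ratio=0.3,
                       panic_threshold=0.4, rounds=10):
          impatient = rng.random(n) < share_impatient
          withdrawing = impatient.copy()       # impatient types always withdraw
          for _ in range(rounds):
              # Each patient depositor observes its two ring neighbors and
              # panics once enough of the neighborhood is withdrawing.
              neighbor_share = (np.roll(withdrawing, 1)
                                + np.roll(withdrawing, -1)) / 2
              panic = (~impatient) & (neighbor_share >= panic_threshold)
              withdrawing = withdrawing | panic
          # Sequential service is summarized by its aggregate outcome here:
          # the bank fails when withdrawal demand exceeds its liquid assets.
          return withdrawing.sum() / n > liquid_ratio

      n_failures = sum(simulate_run() for _ in range(100))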
  3. By: Gaoyue Guo; Jan Obloj
    Abstract: We establish numerical methods for solving the martingale optimal transport problem (MOT) - a version of the classical optimal transport with an additional martingale constraint on transport's dynamics. We prove that the MOT value can be approximated using linear programming (LP) problems which result from a discretisation of the marginal distributions combined with a suitable relaxation of the martingale constraint. Specialising to dimension one, we provide bounds on the convergence rate of the above scheme. We also show a stability result under only partial specification of the marginal distributions. Finally, we specialise to a particular discretisation scheme which preserves the convex ordering and does not require the martingale relaxation. We introduce an entropic regularisation for the corresponding LP problem and detail the corresponding iterative Bregman projection. We also rewrite its dual problem as a minimisation problem without constraint and solve it by computing the concave envelope of scattered data.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.07911&r=cmp
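    The entropic-regularisation step has a compact numerical core. The sketch below runs plain Sinkhorn iterations (alternating Bregman projections onto the two marginal constraints) on a discretised problem; the paper's additional martingale constraint and its extra projection are omitted for brevity.

      import numpy as np

      def sinkhorn(mu, nu, cost, eps=0.05, iters=500):
          """Entropic OT between discrete marginals mu, nu via Sinkhorn,
          i.e. alternating Bregman projections onto the marginal constraints."""
          K = np.exp(-cost / eps)          # Gibbs kernel
          u = np.ones_like(mu)
          v = np.ones_like(nu)
          for _ in range(iters):
              u = mu / (K @ v)             # project onto first marginal
              v = nu / (K.T @ u)           # project onto second marginal
          plan = u[:, None] * K * v[None, :]
          return (plan * cost).sum()       # approximate transport value

      x = np.linspace(-1, 1, 50)
      mu = np.full(50, 1 / 50)
      nu = np.full(50, 1 / 50)
      cost = (x[:, None] - x[None, :]) ** 2
      value = sinkhorn(mu, nu, cost)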
  4. By: Dominique Guegan (Centre d'Economie de la Sorbonne and LabEx ReFi); Bertrand Hassani (Group Capgemini and Centre d'Economie de la Sorbonne and LabEx ReFi)
    Abstract: The arrival of Big Data strategies is threatening the latest trends in financial regulation, namely the simplification of models and the enhancement of the comparability of approaches chosen by financial institutions. Indeed, the intrinsically dynamic philosophy of Big Data strategies is almost incompatible with the current legal and regulatory framework, as this paper illustrates. Moreover, as shown in our application to credit scoring, model selection may itself evolve dynamically, forcing both practitioners and regulators to develop libraries of models, strategies for switching from one model to another, and supervisory approaches that allow financial institutions to innovate in a risk-mitigated environment. The purpose of this paper is therefore to analyse the issues raised by the Big Data environment, and in particular by machine learning models, highlighting the tensions between the current framework, evolving data flows, the model selection process, and the need to generate appropriate outcomes.
    Keywords: Big Data; Credit scoring; machine learning; AUC; regulation
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:17034r&r=cmp
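    The "library of models" the authors call for can be pictured as a champion-challenger loop scored by AUC. A minimal sketch, assuming scikit-learn and synthetic data in place of a real credit portfolio:

      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      library = {
          "logit": LogisticRegression(max_iter=1000),
          "gbm": GradientBoostingClassifier(),
      }
      # Refitting and re-ranking the library as new data arrives is the
      # dynamic model selection the current framework handles poorly.
      aucs = {}
      for name, model in library.items():
          model.fit(X_tr, y_tr)
          aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
      champion = max(aucs, key=aucs.get)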
  5. By: Pedro G. Fonseca; Hugo D. Lopes
    Abstract: Binary classification is widely used in credit scoring to estimate the probability of default. The validation of such predictive models rests both on ranking ability and on calibration (i.e. how accurately the probabilities output by the model map to the observed probabilities). In this study we cover current best practices regarding calibration for binary classification and explore how different approaches perform on real-world credit scoring data. The limitations of evaluating credit scoring models using only ranking metrics are also examined. A benchmark is run on 18 real-world datasets and the results compared. The calibration techniques used are Platt Scaling and Isotonic Regression, applied to several machine learning models: Logistic Regression, Random Forest Classifiers, and Gradient Boosting Classifiers. Results show that when the dataset is treated as a time series, re-calibration with Isotonic Regression improves long-term calibration more than the alternative methods, and the re-calibrated non-parametric models outperform Logistic Regression on Brier Score Loss.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.08901&r=cmp
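    Both calibration techniques benchmarked in the paper are available off the shelf. A minimal sketch, assuming scikit-learn and synthetic data in place of the 18 real-world datasets (CalibratedClassifierCV's "sigmoid" method is Platt scaling):

      from sklearn.calibration import CalibratedClassifierCV
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import brier_score_loss
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=10000, weights=[0.9], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      base = RandomForestClassifier(n_estimators=200, random_state=0)
      for method in ("sigmoid", "isotonic"):   # Platt scaling / isotonic regression
          calibrated = CalibratedClassifierCV(base, method=method, cv=3)
          calibrated.fit(X_tr, y_tr)
          p = calibrated.predict_proba(X_te)[:, 1]
          print(method, brier_score_loss(y_te, p))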
  6. By: Daniel C. Schneider (Max Planck Institute for Demographic Research); Sebastian Kripfganz (University of Exeter)
    Abstract: The user-written package ardl, first released in 2014, estimates autoregressive distributed lag (ARDL) time-series models and provides the popular Pesaran, Shin, and Smith (2001, Journal of Applied Econometrics) bounds testing procedure for a long-run relationship. In this presentation, the statistics and application side of the command takes a back seat and gives way to a discussion of the algorithms used under the hood of ardl. Efficient programming is critical for ardl for two reasons: optimal lag selection and the simulation of critical values. This presentation uses the "case study" of the ardl estimation command to discuss efficient programming in Stata and Mata. Various programming concepts (compilation, argument passing, data types, pointer variables, etc.) and their implementation in Stata/Mata are explained, as well as finer Mata-specific topics (fast matrix indexing, matrix inversion, etc.). The overall message is that coding based on common sense, knowledge of the workings of Stata/Mata, and knowledge of linear algebra goes a long way towards high-performance code and is in many cases preferable to the tedium of moving to a lower-level language like C/C++.
    Date: 2017–09–20
    URL: http://d.repec.org/n?u=RePEc:boc:dsug17:04&r=cmp
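    The talk's central message is language-agnostic: let the matrix language do the looping. As an analogy only (the presentation itself concerns Stata/Mata, not Python), the gap between an explicit double loop and one vectorized linear-algebra call looks like this:

      import time
      import numpy as np

      X = np.random.default_rng(0).normal(size=(2000, 200))

      t0 = time.perf_counter()
      G_loop = np.empty((200, 200))
      for i in range(200):                 # explicit double loop: slow
          for j in range(200):
              G_loop[i, j] = (X[:, i] * X[:, j]).sum()
      t1 = time.perf_counter()
      G_vec = X.T @ X                      # one vectorized call: fast
      t2 = time.perf_counter()

      assert np.allclose(G_loop, G_vec)
      print(f"loop: {t1 - t0:.3f}s, vectorized: {t2 - t1:.3f}s")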
  7. By: Susan Athey; Guido Imbens
    Abstract: In this paper we discuss recent developments in econometrics that we view as important for empirical researchers working on policy evaluation questions. We focus on three main areas, where in each case we highlight recommendations for applied work. First, we discuss new research on identification strategies in program evaluation, with particular focus on synthetic control methods, regression discontinuity, external validity, and the causal interpretation of regression methods. Second, we discuss various forms of supplementary analyses to make the identification strategies more credible. These include placebo analyses as well as sensitivity and robustness analyses. Third, we discuss recent advances in machine learning methods for causal effects. These advances include methods to adjust for differences between treated and control units in high-dimensional settings, and methods for identifying and estimating heterogeneous treatment effects.
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1607.00699&r=cmp
  8. By: Joaquim Bento de Souza Ferreira Filho; Mark Horridge
    Abstract: We analyze Brazil's COP21 commitments under different scenarios, using a general equilibrium model of Brazil developed for land use change and GHG emissions analysis. The model is dynamic, inter-regional, and bottom-up, and here distinguishes 16 regions and 6 biomes. We simulate different scenarios of future deforestation, including halting illegal deforestation, restoring 12 Mha of forests, and shifting deforestation from the Amazon biome to the Cerrado biome. Our analysis shows that restoring 12 Mha of forests would be enough to meet Brazil's 2025 commitments without any additional GHG abatement effort, but would not meet the 2030 commitments. Shifting deforestation from the Amazon biome to the Cerrado biome would seriously compromise the attainment of the targets. We note that emissions in the rest of the Brazilian economy are increasing, suggesting that further efforts are needed to meet the COP21 targets.
    Keywords: Brazil, deforestation, CO2, Cerrado, Amazon
    JEL: C68 D58 E47 Q53 Q54 R14
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:cop:wpaper:g-274&r=cmp
  9. By: Dolls, Mathias; Wittneben, Christian
    Abstract: In this paper, we present a dynamic scoring analysis of tax reforms for EU countries, accounting for the feedback effects resulting from labour supply adjustments and economy-wide reactions to tax policy changes. We combine the microsimulation model EUROMOD, which incorporates an estimated labour supply model, with the New Keynesian DSGE model QUEST used by the European Commission. We illustrate the results obtained when scoring specific reforms in three EU Member States.
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc17:168261&r=cmp
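    The difference between static and dynamic scoring reduces to whether the tax base is allowed to respond. A deliberately stylized sketch, with an invented labour-supply elasticity standing in for the EUROMOD/QUEST machinery:

      def macro_feedback(tax_rate, elasticity=0.2):
          # Macro step: the tax base shrinks as labour supply responds.
          return 100.0 * (1 - elasticity * tax_rate)

      tax_rate = 0.40
      static_score = tax_rate * 100.0                      # base held fixed
      dynamic_score = tax_rate * macro_feedback(tax_rate)  # base responds
      print(static_score, dynamic_score)                   # 40.0 vs 36.8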
  10. By: Maik T. Schneider; Ralph Winkler
    Abstract: We study how households' endogenous healthcare choices to extend their expected lifetimes affect economic growth and welfare in a decentralized overlapping generations economy with the realistic feature that households' savings are held in annuities. We characterize healthcare spending in the decentralized market equilibrium and its effects on economic growth. We identify the moral-hazard effect in healthcare investments that arises when annuity rates are conditioned on average mortality, and explain the conditions under which it leads to over-investment in healthcare. Moreover, we specify the general equilibrium effects and macroeconomic repercussions associated with this moral-hazard effect. In a numerical simulation of our model with OECD data, we find that the moral-hazard effect may be substantial, implying sizeable welfare losses of approximately 1.5%. At a more general level, our study suggests that welfare improvements from longevity increases may be lower than analyses of planner economies suggest.
    Keywords: annuities, economic growth, endogenous longevity, healthcare expenditures, healthcare technology, moral hazard, pension systems, welfare analysis
    JEL: O40 I10 J10
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_6367&r=cmp
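    The moral-hazard channel can be seen in a two-line computation: when the annuity rate is priced on pool-average mortality, one household's extra healthcare spending raises its own survival without lowering its annuity rate. The numbers below are invented; only the comparison matters.

      def annuity_rate(avg_survival_prob, r=0.02):
          # Actuarially fair payout per unit saved, priced on the *pool
          # average* survival probability.
          return (1 + r) / avg_survival_prob

      own_survival = 0.80   # before extra healthcare spending
      boosted = 0.85        # after extra healthcare spending
      pool_avg = 0.80       # pool average, unmoved by one household

      # Expected annuity income per unit saved:
      on_pool = boosted * annuity_rate(pool_avg)     # rate ignores own health
      individual = boosted * annuity_rate(boosted)   # rate adjusts
      print(on_pool, individual)                     # 1.084 > 1.020

    Because the pool-priced return exceeds the individually priced one, conditioning annuity rates on average mortality inflates the private return to healthcare spending, which is the over-investment incentive the paper analyses.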
  11. By: Paul, J.; Agatz, N.A.H.; Spliet, R.; de Koster, M.B.M.
    Abstract: In this paper, we study a setting in which a carrier can satisfy customer delivery requests directly or outsource them to another carrier. A request can be outsourced to a carrier that is already scheduled to visit the corresponding customer, if capacity allows. For the customers served directly, we construct a vehicle routing schedule that minimizes transportation costs; for the outsourced customers, we incur additional transfer costs between the carriers. This study is motivated by a collaboration with an omni-channel grocery retailer whose online orders can be picked up in its stores. The goal is to save costs by consolidating the supply of pick-up points with store inventory replenishment. To solve this problem, we present exact and heuristic approaches. Computational experiments on both the real-world grocery retail case and artificial instances show that substantial savings can be achieved.
    Keywords: Consolidation, Omni-channel retailing, Vehicle routing problem, Local Search
    Date: 2017–10–19
    URL: http://d.repec.org/n?u=RePEc:ems:eureri:102352&r=cmp
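    The serve-or-outsource trade-off can be sketched with a cheapest-insertion rule: serve a customer directly only if the marginal routing cost is below the transfer cost. Coordinates and costs here are illustrative; the paper's exact and heuristic methods are considerably more sophisticated.

      import math

      depot = (0.0, 0.0)
      customers = {"A": (1.0, 2.0), "B": (4.0, 0.5), "C": (2.0, -3.0)}
      transfer_cost = {"A": 5.0, "B": 3.0, "C": 8.0}   # cost of outsourcing

      def dist(p, q):
          return math.hypot(p[0] - q[0], p[1] - q[1])

      route = [depot, depot]        # empty tour: depot -> depot
      outsourced = []
      for name, loc in customers.items():
          # Marginal cost of the cheapest insertion into the current route.
          deltas = [dist(route[i], loc) + dist(loc, route[i + 1])
                    - dist(route[i], route[i + 1])
                    for i in range(len(route) - 1)]
          i_best = min(range(len(deltas)), key=deltas.__getitem__)
          if deltas[i_best] <= transfer_cost[name]:
              route.insert(i_best + 1, loc)   # serve directly
          else:
              outsourced.append(name)         # transfer to the other carrier

      print(route, outsourced)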
  12. By: Brian J. Asquith; Judith K. Hellerstein; Mark J. Kutzbach; David Neumark
    Abstract: We explore the links between social capital and labor market networks at the neighborhood level. We harness rich data taken from multiple sources, including matched employer-employee data with which we measure the strength of labor market networks, data on behavior such as voting patterns that have previously been tied to social capital, and new data – not previously used in the study of social capital – on the number and location of non-profits at the neighborhood level. We use a machine learning algorithm to identify potential social capital measures that best predict neighborhood-level variation in labor market networks. We find evidence suggesting that smaller and less centralized schools, and schools with fewer poor students, foster social capital that builds labor market networks, as does a larger Republican vote share. The presence of establishments in a number of non-profit-oriented industries is identified as predictive of strong labor market networks, likely because they either provide public goods or facilitate social contacts. These industries include, for example, churches and other religious institutions, schools, country clubs, and amateur or recreational sports teams or clubs.
    JEL: J01 J64 R23
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:23959&r=cmp
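    The abstract does not name the algorithm, so the variable-selection step is sketched below with a generic LASSO on synthetic data; the feature names are hypothetical neighborhood-level measures, not the authors' variables.

      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(0)
      n = 500
      features = ["school_size", "school_poverty", "rep_vote_share", "n_churches"]
      X = rng.normal(size=(n, len(features)))
      # Synthetic outcome: network strength driven by two of the measures.
      network_strength = (0.5 * X[:, 2] + 0.4 * X[:, 3]
                          + rng.normal(scale=0.5, size=n))

      model = LassoCV(cv=5).fit(X, network_strength)
      selected = [f for f, c in zip(features, model.coef_) if abs(c) > 1e-6]
      print(selected)   # nonzero coefficients = retained predictors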
  13. By: Peter Grajzl; Peter Murrell
    Abstract: We use machine-learning methods to study the features and origins of the Baconian program, a cultural paradigm that provided intellectual roots for modern economic development. We estimate a structural topic model, a state-of-the-art methodology for analysis of text corpora. The estimates uncover sixteen topics prominent in Bacon’s opus. Two are central in the Baconian program: fact-finding and inductive epistemology. While Bacon’s epistemology arises from his jurisprudence, fact-finding is sui generis Bacon. The utilitarian promise of science, embraced by Bacon’s followers, was not emphasized by him. Bacon’s use of different topics varies notably with intended audience and chosen medium.
    Keywords: Baconian program, culture, law, knowledge, natural philosophy, politics, religion
    JEL: B31 Z10 N73 K10 P10
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_6443&r=cmp
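    Structural topic models are typically estimated with the R package stm; as a rough stand-in, plain LDA topic discovery on a toy corpus illustrates the mechanics. An STM additionally lets topic prevalence depend on document covariates, such as Bacon's intended audience and chosen medium.

      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction.text import CountVectorizer

      docs = [
          "experiment observation fact collection nature",
          "induction axiom knowledge method science",
          "law court evidence witness judgement",
      ]
      vec = CountVectorizer()
      counts = vec.fit_transform(docs)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
      # Each row of lda.components_ weights the vocabulary for one topic.
      terms = vec.get_feature_names_out()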
  14. By: Susan Athey; Julie Tibshirani; Stefan Wager
    Abstract: We propose generalized random forests, a method for non-parametric statistical estimation based on random forests (Breiman, 2001) that can be used to fit any quantity of interest identified as the solution to a set of local moment equations. Following the literature on local maximum likelihood estimation, our method operates at a particular point in covariate space by considering a weighted set of nearby training examples; however, instead of using classical kernel weighting functions that are prone to a strong curse of dimensionality, we use an adaptive weighting function derived from a forest designed to express heterogeneity in the specified quantity of interest. We propose a flexible, computationally efficient algorithm for growing generalized random forests, develop a large sample theory for our method showing that our estimates are consistent and asymptotically Gaussian, and provide an estimator for their asymptotic variance that enables valid confidence intervals. We use our approach to develop new methods for three statistical tasks: non-parametric quantile regression, conditional average partial effect estimation, and heterogeneous treatment effect estimation via instrumental variables. A software implementation, grf for R and C++, is available from CRAN.
    Date: 2016–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1610.01271&r=cmp
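    The adaptive weighting idea can be sketched with off-the-shelf trees: weight each training point by how often it shares a leaf with the target point, then solve the local problem under those weights. The real grf forests are grown to target the moment equations, which this sketch does not do; it only illustrates the weighting, with the weighted median as the quantile-regression special case.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(2000, 1))
      y = X[:, 0] + rng.normal(scale=0.3 + 0.3 * (X[:, 0] > 0), size=2000)

      forest = RandomForestRegressor(n_estimators=100, min_samples_leaf=20,
                                     random_state=0).fit(X, y)

      x0 = np.array([[0.5]])
      leaves_train = forest.apply(X)     # (n_samples, n_trees) leaf ids
      leaves_x0 = forest.apply(x0)
      # Weight training points by leaf co-membership with x0, then normalize.
      weights = (leaves_train == leaves_x0).mean(axis=1)
      weights /= weights.sum()

      # Weighted median at x0: non-parametric quantile regression.
      order = np.argsort(y)
      cdf = np.cumsum(weights[order])
      median_at_x0 = y[order][np.searchsorted(cdf, 0.5)]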

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.