nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒02‒19
nine papers chosen by



  1. A General Method for Demand Inversion By Lixiong Li
  2. An interdisciplinary model for macroeconomics By Haldane, Andrew; Turrell, Arthur
  3. Learning Generative Models with Sinkhorn Divergences By Aude Genevay; Gabriel Peyré; Marco Cuturi
  4. Robust machine learning by median-of-means : theory and practice By Guillaume Lecué; Mathieu Lerasle
  5. Leadership in Scholarship: A Machine Learning Based Investigation of Editors' Influence on Textual Structure By Onder, Ali Sina; Popov, Sergey V; Schweitzer, Sascha
  6. Particle-without-Particle: a practical pseudospectral collocation method for numerical differential equations with distributional sources By Marius Oltean; Carlos F. Sopuerta; Alessandro D. A. M. Spallicci
  7. The value of foresight in the drybulk freight market By Prochazka, Vit; Adland, Roar; Wallace, Stein W.
  8. How Large is the Corporate Tax Base Erosion and Profit Shifting? A General Equilibrium Approach By Alvarez-Martinez, Maria; Barrios, Salvador; d'Andria, Diego; Gesualdo, Maria; Nicodème, Gaëtan; Pycroft, Jonathan
  9. Climate Change and Agriculture: Farmer Adaptation to Extreme Heat By Fernando M. Aragón; Francisco Oteiza; Juan Pablo Rud

  1. By: Lixiong Li
    Abstract: This paper describes a numerical method to solve for the mean product qualities that equate observed market shares to the market shares predicted by a discrete choice model. The method covers a general class of discrete choice models, including the pure characteristics model in Berry and Pakes (2007) and the random coefficient logit model in Berry et al. (1995) (hereafter BLP). It transforms the original market share inversion problem into an unconstrained convex minimization problem, so that any convex programming algorithm can be used to solve the inversion. This result also implies that the computational complexity of inverting a demand model is no greater than that of a convex programming problem. In simulation examples, I show that the method outperforms the contraction mapping algorithm in BLP. I also find that the method remains robust in pure characteristics models with near-zero market shares. (An illustrative sketch follows this entry.)
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1802.04444&r=cmp
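    Illustrative sketch: The paper recasts market-share inversion as unconstrained convex minimization. The Python fragment below shows that idea for the familiar random-coefficient logit special case only, where minimizing a log-sum-exp objective whose gradient equals predicted minus observed shares recovers the mean qualities; it is not a reproduction of the paper's general construction, and all sizes and values are invented.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      J, R = 10, 500                      # products, simulation draws (toy sizes)
      mu = 0.5 * rng.normal(size=(R, J))  # random-coefficient utility draws mu_ij
      delta_true = rng.normal(size=J)     # "true" mean product qualities

      def shares(delta):
          """Simulated shares of the J inside goods; outside-good utility is 0."""
          expu = np.exp(delta[None, :] + mu)
          return (expu / (1.0 + expu.sum(axis=1, keepdims=True))).mean(axis=0)

      s_obs = shares(delta_true)          # pretend these are the observed shares

      def objective(delta):
          # log-sum-exp over the choice set is convex in delta; subtracting the
          # linear term makes the gradient equal to shares(delta) - s_obs
          lse = np.log1p(np.exp(delta[None, :] + mu).sum(axis=1)).mean()
          return lse - s_obs @ delta

      res = minimize(objective, np.zeros(J), jac=lambda d: shares(d) - s_obs, method="BFGS")
      print(np.max(np.abs(res.x - delta_true)))   # max error in the recovered mean qualities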
  2. By: Haldane, Andrew (Bank of England); Turrell, Arthur (Bank of England)
    Abstract: Macroeconomic modelling has been under intense scrutiny since the Great Financial Crisis, when serious shortcomings were exposed in the methodology used to understand the economy as a whole. Criticism has been levelled at the assumptions employed in the dominant models, particularly that economic agents are homogeneous and optimising and that the economy is equilibrating. This paper seeks to explore an interdisciplinary approach to macroeconomic modelling, with techniques drawn from other (natural and social) sciences. Specifically, it discusses agent-based modelling, which is used across a wide range of disciplines, as an example of such a technique. Agent-based models are complementary to existing approaches and are suited to answering macroeconomic questions where complexity, heterogeneity, networks, and heuristics play an important role. (An illustrative sketch follows this entry.)
    Keywords: Macroeconomics; modelling; agent-based model
    JEL: A12 C60 E17 E60
    Date: 2017–11–17
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0696&r=cmp
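    Illustrative sketch: Agent-based models let aggregate outcomes emerge from simple rules followed by many interacting, heterogeneous agents rather than from a representative optimising agent. The toy Python model below (a standard random-exchange example, not taken from the paper) shows the flavour: identical agents trade one unit of wealth at a time, yet an unequal, roughly exponential wealth distribution emerges at the aggregate level.

      import numpy as np

      rng = np.random.default_rng(1)
      n_agents, n_steps = 1000, 200_000
      wealth = np.ones(n_agents)             # every agent starts with one unit

      for _ in range(n_steps):
          i, j = rng.integers(n_agents, size=2)
          if wealth[i] > 0:                  # agent i hands one unit to agent j
              wealth[i] -= 1
              wealth[j] += 1

      # Aggregate (emergent) statistics from purely micro-level interactions
      print("mean wealth:", wealth.mean())
      print("top 10% share:", np.sort(wealth)[-n_agents // 10:].sum() / wealth.sum())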
  3. By: Aude Genevay (CEREMADE; Université Paris-Dauphine); Gabriel Peyré (CNRS; DMA; École Normale Supérieure); Marco Cuturi (ENSAE; CREST; Université Paris-Saclay)
    Abstract: The ability to compare two degenerate probability distributions, that is, two distributions supported on low-dimensional manifolds in much higher-dimensional spaces, is a crucial factor in the estimation of generative models. It is therefore no surprise that optimal transport (OT) metrics, with their ability to handle measures with non-overlapping supports, have emerged as a promising tool. Yet training generative machines using OT raises formidable computational and statistical challenges, because of (i) the computational burden of evaluating OT losses, (ii) their instability and lack of smoothness, and (iii) the difficulty of estimating them, as well as their gradients, in high dimension. This paper presents the first tractable method to train large-scale generative models using an OT-based loss called the Sinkhorn loss, which tackles these three issues by relying on two key ideas: (a) entropic smoothing, which turns the original OT loss into a differentiable and more robust quantity that can be computed using Sinkhorn fixed-point iterations; and (b) algorithmic (automatic) differentiation of these iterations with seamless GPU execution. Entropic smoothing also generates a family of losses interpolating between Wasserstein (OT) and Energy distance/Maximum Mean Discrepancy (MMD) losses, thus allowing one to find a sweet spot that leverages the geometry of OT on the one hand and the favorable high-dimensional sample complexity of MMD, which comes with unbiased gradient estimates, on the other. The resulting computational architecture nicely complements standard deep network generative models by a stack of extra layers implementing the loss function. (An illustrative sketch follows this entry.)
    Date: 2017–10–20
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-83&r=cmp
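    Illustrative sketch: The loss above is built from entropy-regularized optimal transport computed with Sinkhorn fixed-point iterations. The NumPy fragment below shows those fixed-point updates on two small point clouds, together with one common debiased variant of the resulting loss; it omits the log-domain stabilization and automatic differentiation used for actual generative-model training, and all values are invented.

      import numpy as np

      def sinkhorn_cost(x, y, eps=0.5, n_iter=200):
          """Entropy-regularized OT cost between uniform empirical measures on x and y."""
          C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)   # squared-distance cost
          K = np.exp(-C / eps)                                        # Gibbs kernel
          a = np.full(len(x), 1.0 / len(x))
          b = np.full(len(y), 1.0 / len(y))
          u, v = np.ones_like(a), np.ones_like(b)
          for _ in range(n_iter):                                     # Sinkhorn fixed-point updates
              u = a / (K @ v)
              v = b / (K.T @ u)
          P = u[:, None] * K * v[None, :]                             # transport plan
          return np.sum(P * C)

      def sinkhorn_divergence(x, y, eps=0.5):
          """Debiased variant: subtract the x-to-x and y-to-y self terms."""
          return sinkhorn_cost(x, y, eps) - 0.5 * (sinkhorn_cost(x, x, eps) + sinkhorn_cost(y, y, eps))

      rng = np.random.default_rng(0)
      x = rng.normal(size=(100, 2))                 # "data" samples
      y = rng.normal(loc=1.0, size=(100, 2))        # "model" samples
      print(sinkhorn_divergence(x, y))              # positive when the clouds differ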
  4. By: Guillaume Lecué (CREST; CNRS; Université Paris Saclay); Mathieu Lerasle (CNRS, Département de Mathématiques d’Orsay)
    Abstract: We introduce new estimators for robust machine learning based on median-of-means (MOM) estimators of the mean of real-valued random variables. These estimators achieve optimal rates of convergence under minimal assumptions on the dataset. The dataset may also have been corrupted by outliers, on which no assumption is made. We also analyze these new estimators with standard tools from robust statistics. In particular, we revisit the concept of breakdown point. We modify the original definition by studying the number of outliers that a dataset can contain without deteriorating the estimation properties of a given estimator. This new notion of breakdown number, which takes into account the statistical performance of the estimators, is non-asymptotic in nature and adapted to machine learning purposes. We prove that the breakdown number of our estimator is of the order of the number of observations times the rate of convergence. For instance, the breakdown number of our estimator for the problem of estimating a d-dimensional vector with noise variance σ² is σ²d, and it becomes σ²s log(ed/s) when this vector has only s non-zero components. Beyond this breakdown point, we prove that the rate of convergence achieved by our estimator is the number of outliers divided by the number of observations. Besides these theoretical guarantees, the major improvement brought by these new estimators is that they are easily computable in practice. In fact, basically any algorithm used to approximate the standard Empirical Risk Minimizer (or its regularized versions) has a robust version approximating our estimators. On top of being robust to outliers, the "MOM versions" of the algorithms are even faster than the original ones, less demanding in memory in some situations, and well adapted to distributed datasets, which makes them particularly attractive for large-dataset analysis. As a proof of concept, we study many algorithms for the classical LASSO estimator. It turns out that the original algorithm can be improved considerably in practice by randomizing the blocks on which "local means" are computed at each step of the descent algorithm. A byproduct of this modification is that our algorithms come with a measure of depth of data that can be used to detect outliers, which is another major issue in machine learning. (An illustrative sketch follows this entry.)
    Date: 2017–11–01
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-32&r=cmp
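    Illustrative sketch: The median-of-means construction behind these estimators is simple: split the sample into blocks, average within each block, and return the median of the block means, so a minority of corrupted blocks cannot move the estimate far. A minimal univariate Python sketch (illustrative only, not the paper's full learning procedure):

      import numpy as np

      def median_of_means(x, n_blocks=10, seed=0):
          """Median of block means; robust while fewer than ~n_blocks/2 blocks are corrupted."""
          rng = np.random.default_rng(seed)
          x = rng.permutation(np.asarray(x, dtype=float))   # random blocks, as in the randomized MOM variants
          blocks = np.array_split(x, n_blocks)
          return np.median([b.mean() for b in blocks])

      rng = np.random.default_rng(1)
      clean = rng.normal(loc=2.0, size=1_000)
      corrupted = np.concatenate([clean, np.full(30, 1e6)])       # 30 wild outliers
      print(np.mean(corrupted))                                   # ruined by the outliers
      print(median_of_means(corrupted, n_blocks=101))             # still close to 2.0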
  5. By: Onder, Ali Sina (University of Bayreuth); Popov, Sergey V (Cardiff Business School); Schweitzer, Sascha (University of Bayreuth)
    Abstract: Academic journals disseminate new knowledge, and editors of prominent journals are in a position to affect the direction and composition of research. Using machine learning procedures, we measure the influence of editors of the American Economic Review (AER) on the relative topic structure of papers published in the AER and other top general interest journals. We apply the topic analysis apparatus to the corpus of all publications in the Top 5 journals in Economics between 1976 and 2013, and also to the publications of the AER's editors during the same period. This enables us to observe the changes occurring over time in the relative frequency of topics covered by the AER and other leading general interest journals. We find that the assignment of a new editor tends to coincide with a change of topics in the AER in favour of the new editor's topics, which cannot be explained away by shifts in overall research trends observed in other leading general interest journals. (An illustrative sketch follows this entry.)
    Keywords: Text Search; Topical Analysis; Academia; Knowledge Dissemination; Influence; Journals; Editors
    JEL: A11 A14 O3
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2018/2&r=cmp
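    Illustrative sketch: The abstract does not spell out the topic-model machinery, so the Python fragment below just shows one common way to compare relative topic frequencies across document groups: fit LDA on bag-of-words counts with scikit-learn and average the document-topic weights by group. The corpus, group labels, and parameters are placeholders.

      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      # Placeholder corpus; in the paper's setting these would be journal article texts
      docs = ["monetary policy and inflation dynamics",
              "auction design and mechanism design theory",
              "inflation expectations under monetary policy rules",
              "optimal auctions with private information"]
      groups = np.array([0, 1, 0, 1])          # e.g. before/after an editor change

      counts = CountVectorizer(stop_words="english").fit_transform(docs)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
      weights = lda.transform(counts)          # document-topic proportions

      # Relative topic frequency by group; shifts here mirror the paper's comparison
      for g in (0, 1):
          print(g, weights[groups == g].mean(axis=0).round(2))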
  6. By: Marius Oltean; Carlos F. Sopuerta; Alessandro D. A. M. Spallicci
    Abstract: Differential equations with distributional sources---in particular, involving delta distributions and/or derivatives thereof---have become increasingly common in numerous areas of physics and applied mathematics. It is often of considerable interest to obtain numerical solutions for such equations, but the singular ("point-like") modeling of the sources in these problems typically introduces nontrivial obstacles to devising a satisfactory numerical implementation. A common way to circumvent these is some form of delta-function approximation on the computational grid, yet this strategy often carries significant limitations. In this paper, we present an alternative technique for tackling such equations: the "Particle-without-Particle" method. Previously introduced in the context of the self-force problem in gravitational physics, the idea is to discretize the computational domain into two (or more) disjoint pseudospectral (Chebyshev-Lobatto) grids in such a way that the "particle" (the singular source location) is always at the interface between them; in this way, one only needs to solve homogeneous equations in each domain, with the source effectively replaced by jump (boundary) conditions at the interface. We prove here that this method is applicable to any linear PDE (of arbitrary order) whose source is a linear combination of one-dimensional delta distributions and derivatives thereof supported at an arbitrary number of particles. We furthermore apply this method to obtain numerical solutions for various types of distributionally-sourced PDEs: we consider first-order hyperbolic equations with applications to neuroscience models (describing neural populations), parabolic equations with applications to financial models (describing price formation), second-order hyperbolic equations with applications to wave acoustics, and finally elliptic (Poisson) equations. (An illustrative sketch follows this entry.)
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1802.03405&r=cmp
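    Illustrative sketch: A minimal one-dimensional Python toy of the idea above, assuming the simplest test problem u''(x) = δ(x - x0) on [-1, 1] with u(±1) = 0. The delta source is replaced by interface conditions: solve u'' = 0 on a Chebyshev-Lobatto grid on each side of x0, then impose continuity of u and a unit jump in u' at x0. This illustrates the two-domain mechanism only, not the authors' implementation.

      import numpy as np

      def cheb(N):
          """Chebyshev-Lobatto nodes on [-1, 1] and differentiation matrix (Trefethen)."""
          x = np.cos(np.pi * np.arange(N + 1) / N)
          c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
          D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
          return D - np.diag(D.sum(axis=1)), x

      N, x0 = 16, 0.3                        # nodes per subdomain, particle location
      D, xi = cheb(N)
      n = N + 1

      # Map the reference grid to the left [-1, x0] and right [x0, 1] subdomains
      xl = (x0 - 1) / 2 + (x0 + 1) / 2 * xi
      xr = (1 + x0) / 2 + (1 - x0) / 2 * xi
      Dl = D * 2 / (x0 + 1)
      Dr = D * 2 / (1 - x0)

      # Unknowns: [u_left, u_right]; the source enters only through the interface rows
      A = np.zeros((2 * n, 2 * n))
      b = np.zeros(2 * n)
      A[1:N, :n] = (Dl @ Dl)[1:N]            # u'' = 0 at interior nodes, left domain
      A[n + 1:n + N, n:] = (Dr @ Dr)[1:N]    # u'' = 0 at interior nodes, right domain
      A[0, N] = 1.0                          # u(-1) = 0 (node N of the left grid)
      A[N, n] = 1.0                          # u(+1) = 0 (node 0 of the right grid)
      A[n, 0], A[n, n + N] = 1.0, -1.0       # continuity of u at x0
      A[n + N, n:] = Dr[N]                   # jump condition: u'(x0+) - u'(x0-) = 1
      A[n + N, :n] = -Dl[0]
      b[n + N] = 1.0

      u = np.linalg.solve(A, b)
      y = np.r_[xl, xr]
      exact = np.where(y <= x0, (x0 - 1) * (y + 1) / 2, (x0 + 1) * (y - 1) / 2)
      print(np.max(np.abs(u - exact)))       # ~ machine precision for this toy problem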
  7. By: Prochazka, Vit (Dept. of Business and Management Science, Norwegian School of Economics); Adland, Roar (Dept. of Business and Management Science, Norwegian School of Economics); Wallace, Stein W. (Dept. of Business and Management Science, Norwegian School of Economics)
    Abstract: We analyze the value of foresight in the drybulk freight market when repositioning a vessel through space and time. To do so, we apply an optimization model to a network with dynamic regional freight rate differences and stochastic travel times. We evaluate the value of the geographical switching option for three cases: the upper bound based on having perfect foresight, the lower bound based on a "coin flip", and the case of perfect foresight over only a limited horizon. By combining a neural network with optimization, we can assess the impact of varying the foresight horizon on economic performance. In a simple but realistic two-region case, we show empirically that the upper bound for large vessels can be as high as 25% cumulative outperformance, and that a significant portion of this theoretical value can be captured with limited foresight of several weeks. Our research sheds light on the important issue of spatial efficiency in global ocean freight markets and provides a benchmark for the value of investing in predictive analysis. (An illustrative sketch follows this entry.)
    Keywords: Dry bulk market; dynamic programming; neural network; foresight
    JEL: C44 C60 C61
    Date: 2018–01–31
    URL: http://d.repec.org/n?u=RePEc:hhs:nhhfms:2018_001&r=cmp
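    Illustrative sketch: A stylized two-region Python toy of the bounds discussed above (all dynamics, costs, and numbers are invented, and no neural network is involved): regional rates follow a simple AR(1) spread, and a vessel that can reposition at a cost chooses its region each period by backward induction over however many future periods it can see. Comparing a random rule, short horizons, and the full-path solution mimics the coin-flip lower bound, limited foresight, and the perfect-foresight upper bound.

      import numpy as np

      rng = np.random.default_rng(0)
      T, cost = 500, 0.8                         # periods and repositioning cost
      spread = np.zeros(T)
      for t in range(1, T):                      # AR(1) spread between the two regional rates
          spread[t] = 0.9 * spread[t - 1] + rng.normal(scale=0.5)
      rates = np.stack([np.ones(T), np.ones(T) + spread], axis=1)

      def first_move(window, region):
          """Backward induction over a known window of rates; returns the best first move."""
          V = np.zeros(2)                        # continuation value of ending in each region
          first = [0, 1]
          for t in range(len(window) - 1, -1, -1):
              newV = np.empty(2)
              for s in range(2):                 # s = region at the start of period t
                  vals = [window[t, r] - cost * (r != s) + V[r] for r in range(2)]
                  first[s] = int(np.argmax(vals))
                  newV[s] = max(vals)
              V = newV
          return first[region]

      def run(horizon):
          """Cumulative earnings of a rolling policy that sees `horizon` periods ahead."""
          region, total = 0, 0.0
          for t in range(T):
              move = first_move(rates[t:t + horizon], region)
              total += rates[t, move] - cost * (move != region)
              region = move
          return total

      def run_random():
          """The "coin flip" benchmark: pick a region at random each period."""
          region, total = 0, 0.0
          for t in range(T):
              move = int(rng.integers(2))
              total += rates[t, move] - cost * (move != region)
              region = move
          return total

      print("coin flip         :", run_random())
      print("1-period foresight:", run(1))
      print("4-period foresight:", run(4))
      print("perfect foresight :", run(T))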
  8. By: Alvarez-Martinez, Maria; Barrios, Salvador; d'Andria, Diego; Gesualdo, Maria; Nicodème, Gaëtan; Pycroft, Jonathan
    Abstract: This paper estimates the size and macroeconomic effects of base erosion and profit shifting (BEPS) using a computable general equilibrium model designed for corporate taxation and multinationals. Our central estimate of the impact of BEPS on corporate tax losses for the EU amounts to €36 billion annually, or 7.7% of total corporate tax revenues. The USA and Japan also appear to lose tax revenues of €101 billion and €24 billion per year, respectively, or 10.7% of corporate tax revenues in both cases. These estimates are consistent with gaps in bilateral multinationals' activities reported by creditor and debtor countries using official statistics for the EU. Our results suggest that, by increasing the cost of capital, eliminating profit shifting would slightly reduce investment and GDP. It would, however, raise corporate tax revenues thanks to enhanced domestic production. This in turn could reduce other taxes and increase welfare.
    Keywords: BEPS; CGE model; Corporate taxation; Profit shifting; Tax avoidance
    JEL: C68 E62 H25 H26 H87
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:12637&r=cmp
  9. By: Fernando M. Aragón (Simon Fraser University); Francisco Oteiza (Institute for Fiscal Studies); Juan Pablo Rud (Department of Economics, Royal Holloway, University of London and Institute for Fiscal Studies)
    Abstract: This paper examines how farmers adapt, in the short-run, to extreme heat. Using a production function approach and micro-data from Peruvian households, we find that high temperatures induce farmers to increase the use of inputs, such as land and domestic labor. This reaction partially attenuates the negative effects of high temperatures on output. We interpret this change in inputs as an adaptive response in a context of subsistence farming, incomplete markets, and lack of other coping mechanisms. We use our estimates to simulate alternative climate change scenarios and show that accounting for adaptive responses is quantitatively important.
    Keywords: Climate Change, Agriculture, Adaptation
    JEL: O13 O12 Q12 Q15 Q51 Q54
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:sfu:sfudps:dp18-02&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.