nep-gth New Economics Papers
on Game Theory
Issue of 2019‒12‒16
sixteen papers chosen by
Sylvain Béal
Université de Franche-Comté

  1. Maximin equilibrium By Mehmet Ismail
  2. Assignment Markets: Theory and Experiments By Arthur Dolgopolov; Daniel Houser; Cesar Martinelli; Thomas Stratmann
  3. On importance indices in multicriteria decision making By Michel Grabisch; Christophe Labreuche; Mustapha Ridaoui
  4. Interaction indices for multichoice games By Mustapha Ridaoui; Michel Grabisch; Christophe Labreuche
  5. An axiomatisation of the Banzhaf value and interaction index for multichoice games By Mustapha Ridaoui; Michel Grabisch; Christophe Labreuche
  6. Central Counterparty Auctions and Loss Allocation By Robert Oleschak
  7. Bidding on price and quality: An experiment on the complexity of scoring auctions By Riccardo Camboni; Luca Corazzini; Stefano Galavotti; Paola Valbonesi
  8. Least Square Approximations and Linear Values of Cooperative Game By Ulrich Faigle; Michel Grabisch
  9. Interpretation of multicriteria decision making models with interacting criteria By Michel Grabisch; Christophe Labreuche
  10. On the modeling and testing of groundwater resource models By Murielle Djiguemde; Dimitri Dubois; Alexandre Sauquet; Mabel Tidball
  11. Applications of the Deep Galerkin Method to Solving Partial Integro-Differential and Hamilton-Jacobi-Bellman Equations By Ali Al-Aradi; Adolfo Correia; Danilo de Freitas Naiff; Gabriel Jardim; Yuri Saporito
  12. Probabilistic Approach to Mean Field Games and Mean Field Type Control Problems with Multiple Populations By Masaaki Fujii
  13. Decentralization and mutual liability rules By Ketelaars, Martijn; Borm, Peter; Quant, Marieke
  14. Banking on cooperation: An evolutionary analysis of microfinance loan repayment By Gehrig, Stefan; Mesoudi, Alex; Lamba, Shakti
  15. The CMMV Pricing Model in Practice By Bernard de Meyer; Moussa Dabo
  16. Trade Policy with Intermediate Inputs Trade By Qasim, Ahmed Waqar; Itaya, Jun-ichi

  1. By: Mehmet Ismail
    Abstract: We introduce a new theory of games which extends von Neumann's theory of zero-sum games to nonzero-sum games by incorporating common knowledge of the individual and collective rationality of the players. Maximin equilibrium, extending Nash's value approach, is based on the evaluation of the strategic uncertainty of the whole game. We show that maximin equilibrium is invariant under strictly increasing transformations of the payoffs. Notably, every finite game possesses a maximin equilibrium in pure strategies. Considering games in von Neumann-Morgenstern mixed extension, we demonstrate that the maximin equilibrium value is precisely the maximin (minimax) value, and that maximin equilibrium strategies coincide with maximin strategies in two-player zero-sum games. We also show that for every Nash equilibrium that is not a maximin equilibrium there exists a maximin equilibrium that Pareto dominates it. In addition, a maximin equilibrium is never Pareto dominated by a Nash equilibrium. Finally, we discuss maximin equilibrium predictions in several games, including the traveler's dilemma.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.00211&r=all
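The abstract's claim that every finite game has a maximin equilibrium in pure strategies rests on each player's worst-case guarantee. As a minimal illustration (not the paper's construction; the payoff matrices below are invented), the pure-strategy maximin value of a two-player game can be computed directly:

```python
# Toy sketch: pure-strategy maximin values in a 2x2 bimatrix game.
# The matrices A and B are invented for illustration.

def pure_maximin(payoffs_by_action):
    """Return (value, action) maximizing the worst-case payoff."""
    best_action = max(payoffs_by_action, key=lambda a: min(payoffs_by_action[a]))
    return min(payoffs_by_action[best_action]), best_action

A = [[3, 1], [2, 2]]   # row player's payoffs A[i][j]
B = [[3, 2], [1, 2]]   # column player's payoffs B[i][j]

# Row player guarantees max_i min_j A[i][j];
# column player guarantees max_j min_i B[i][j].
row_val, row_act = pure_maximin({i: A[i] for i in range(2)})
col_val, col_act = pure_maximin({j: [B[i][j] for i in range(2)] for j in range(2)})
print(row_val, col_val)  # each player's guaranteed worst-case payoff
```

Here each player's pure maximin value is 2; the paper's equilibrium notion builds on this worst-case evaluation in nonzero-sum games.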
  2. By: Arthur Dolgopolov (Interdisciplinary Center for Economic Science and Department of Economics, George Mason University); Daniel Houser (Interdisciplinary Center for Economic Science and Department of Economics, George Mason University); Cesar Martinelli (Interdisciplinary Center for Economic Science and Department of Economics, George Mason University); Thomas Stratmann (Interdisciplinary Center for Economic Science and Department of Economics, George Mason University)
    Abstract: We study theoretically and experimentally assignment markets, i.e., two-sided markets where indivisible heterogeneous items with unit demand and unit supply are traded for money, as exemplified by housing markets. We define an associated strategic market game, and show that every Nash equilibrium outcome of this game is a competitive equilibrium allocation with respect to an economy consisting exclusively of the goods that were traded. That is, inefficiency may arise from miscoordination because some goods are not traded. Experimental results show players behaving close to Nash equilibrium predictions for auction-like market designs and close to generalized bargaining for the market design that incorporates decentralized communication. Communication improves efficiency, but with some probability introduces outcomes inconsistent with Nash equilibria.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:gms:wpaper:1075&r=all
  3. By: Michel Grabisch (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Christophe Labreuche (Thales Research and Technology [Palaiseau] - THALES); Mustapha Ridaoui (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique)
    Abstract: We address the problem of how to define an importance index in multicriteria decision problems, when a numerical representation of preferences is given. We make no restrictive assumption on the model, which could have discrete or continuous attributes, and in particular, it is not assumed that the model is monotonically increasing or decreasing with respect to the attributes. Our analysis first considers discrete models, which are seen to be equivalent to multichoice games. We propose essentially two importance indices, namely the signed importance index and the absolute importance index, both based on the average variation of the value of the model induced by a given attribute. We provide several axiomatizations for these importance indices, extend them to the continuous case, and finally illustrate them with examples: classical simple models and an example of discomfort evaluation based on real data.
    Keywords: Multiple criteria analysis,Multichoice game,Shapley value,Choquet integral
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-02380863&r=all
  4. By: Mustapha Ridaoui (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Michel Grabisch (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Christophe Labreuche (Thales Research and Technology [Palaiseau] - THALES)
    Abstract: Models in Multicriteria Decision Analysis (MCDA) can be analyzed by means of an importance index and an interaction index for every group of criteria. We consider first discrete models in MCDA, without further restriction, which amounts to considering multichoice games, that is, cooperative games with several levels of participation. We propose and axiomatize two interaction indices for multichoice games: the signed interaction index and the absolute interaction index. In a second part, we consider the continuous case, supposing that the continuous model is obtained from a discrete one by means of the Choquet integral. We show that, as in the case of classical games, the interaction index defined for continuous aggregation functions coincides with the (signed) interaction index, up to a normalizing coefficient.
    Keywords: multicriteria decision analysis,interaction,multichoice game,Choquet integral
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-02380901&r=all
  5. By: Mustapha Ridaoui (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Michel Grabisch (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Christophe Labreuche (Thales Research and Technology [Palaiseau] - THALES)
    Abstract: We provide an axiomatisation of the Banzhaf value (or power index) and the Banzhaf interaction index for multichoice games, which are a generalisation of cooperative games with several levels of participation. Multichoice games can model any aggregation model in multicriteria decision making, provided the attributes take a finite number of values. Our axiomatisation uses standard axioms of the Banzhaf value for classical games (linearity, null axiom, symmetry), an invariance axiom specific to the multichoice context, and a generalisation of the 2-efficiency axiom, characteristic of the Banzhaf value.
    Keywords: interaction,Banzhaf value,multicriteria decision aid,multichoice games
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-02381119&r=all
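For classical (binary-participation) games, the Banzhaf value that the paper generalises has a standard closed form: player i's average marginal contribution over the 2^(n-1) coalitions not containing i. A brute-force sketch of that classical case only (not the multichoice extension), feasible for small games:

```python
from itertools import combinations

def banzhaf(n, v):
    """Classical Banzhaf value: for each player i, average the marginal
    contribution v(S | {i}) - v(S) over all 2^(n-1) coalitions S not
    containing i. Exponential in n; illustration only."""
    players = range(n)
    phi = []
    for i in players:
        others = [j for j in players if j != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                total += v(set(S) | {i}) - v(set(S))
        phi.append(total / 2 ** (n - 1))
    return phi

# Example: 3-player majority game, v(S) = 1 iff |S| >= 2
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(banzhaf(3, v))  # [0.5, 0.5, 0.5]
```

By symmetry all three players get the same index; in a weighted voting game the index would instead separate pivotal from dummy players.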
  6. By: Robert Oleschak
    Abstract: In this paper, I analyse first-price single-item auctions in the case of a default of a clearing agent at a central counterparty (CCP). The surviving clearing agents who bid attach a private value to the item to be sold and share eventual losses with the CCP. The CCP, as auctioneer, can choose the time of the auction and the loss allocation mechanism in order to minimize its own losses. I show that incentives (e.g. juniorising default fund contributions) are irrelevant for the outcome of the auction but that the composition of bidders matters. Auctions with a subset of bidders have distributional effects, i.e. the invited bidders are better off than those who are not invited. Conversely, inviting additional bidders (i.e., clients) could lead to an inefficient auction, yet their participation leaves the CCP as well as all the losing bidders better off. Recovery measures increase the safety and soundness of CCPs but can adversely affect a CCP's incentives in an auction. I show that in cases of extreme losses a CCP would prefer to wait rather than swiftly conduct an auction, thereby inflicting costs on the financial system. Finally, I show that tear-ups are not only more costly than other recovery measures but also fail to coordinate the actions of bidders, leading to an inferior equilibrium for all.
    Keywords: Central Counterparty, Default Management, Auctions, Recovery
    JEL: C72 D44 D53 D82 G23 G28
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:snb:snbwpa:2019-06&r=all
  7. By: Riccardo Camboni (DSEA, University of Padova); Luca Corazzini (Department of Economics, University of Venice "Ca' Foscari"); Stefano Galavotti (DEMDI, University of Bari); Paola Valbonesi (DSEA, University of Padova and HSE-NRU, Moscow)
    Abstract: We run an experiment on procurement auctions in a setting where both quality and price matter. We compare two unidimensional treatments in which the buyer fixes one dimension (quality or price) and sellers compete on the other, with three bidimensional treatments (with different strategy spaces) in which sellers submit a price-quality bid and the winner is determined by a score that linearly combines the two offers. We find that, with respect to the theoretical predictions, the bidimensional treatments significantly underperform, both in terms of efficiency and buyer's utility. We attribute this result to the higher strategic complexity of these treatments and test this intuition by fitting a structural Quantal Response Equilibrium model with risk aversion to our experimental data. We find very similar estimates for the risk aversion parameter across all treatments; instead, the error parameter, which captures deviations between the observed bids and the payoff-maximizing ones, is larger in the bidimensional treatments than in the unidimensional ones. Our evidence suggests that increasing the dimensionality and the size of the suppliers' strategy space increases their tendency to make suboptimal offers, thus undermining the theoretical superiority of more complex mechanisms.
    Keywords: scoring auctions, multidimensional auctions, complexity, bidding behaviour, Quantal Response Equilibrium
    JEL: D44 H11 H57
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:pad:wpaper:0243&r=all
  8. By: Ulrich Faigle (Zentrum für Angewandte Informatik [Köln] - Universität zu Köln); Michel Grabisch (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique)
    Abstract: Many important values for cooperative games are known to arise from least square optimization problems. The present investigation develops an optimization framework to explain and clarify this phenomenon in a general setting. The main result shows that every linear value results from some least square approximation problem and that, conversely, every least square approximation problem with linear constraints yields a linear value. This approach includes and extends previous results on so-called least square values and semivalues in the literature. In particular, it is demonstrated how known explicit formulas for solutions under additional assumptions easily follow from the general results presented here.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-02381231&r=all
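The correspondence the abstract describes can be illustrated numerically in the simplest unweighted, unconstrained case: projecting a game onto the additive games by ordinary least squares yields a payoff vector that is linear in v, since it is a fixed linear map (the pseudoinverse) applied to the vector of coalition worths. The example game is invented, and this sketch assumes NumPy:

```python
import numpy as np
from itertools import combinations

def least_square_value(n, v):
    """Project the game v onto additive games: minimize, over payoff
    vectors x, the sum over nonempty coalitions S of
    (v(S) - sum_{i in S} x_i)^2. The solution x = pinv(M) @ b is a
    linear function of the game v."""
    coalitions = [set(c) for r in range(1, n + 1)
                  for c in combinations(range(n), r)]
    # Incidence matrix: one row per coalition, M[S][i] = 1 iff i in S
    M = np.array([[1.0 if i in S else 0.0 for i in range(n)]
                  for S in coalitions])
    b = np.array([v(S) for S in coalitions])
    x, *_ = np.linalg.lstsq(M, b, rcond=None)
    return x

# Sanity check: an additive game lies in the target space and is
# recovered exactly.
a = [1.0, 2.0, 4.0]
v = lambda S: sum(a[i] for i in S)
print(least_square_value(3, v))  # approx [1., 2., 4.]
```

The paper's general framework adds coalition weights and linear constraints (e.g. efficiency), which reshape which linear value the projection produces.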
  9. By: Michel Grabisch (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Christophe Labreuche (Thales Research and Technology [Palaiseau] - THALES)
    Abstract: We consider general MCDA models with discrete attributes. These models are shown to be equivalent to multichoice games, and we put some emphasis on discrete Generalized Additive Independence (GAI) models, especially those which are 2-additive, that is, limited to terms of at most two attributes. The chapter studies the interpretation of these models. For general MCDA models, we study how to define a meaningful importance index, and propose mainly two kinds of importance indices: the signed and the absolute importance indices. For 2-additive GAI models, we study the issue of the decomposition, which is not unique in general. We show that for a monotone 2-additive GAI model it is always possible to obtain a decomposition where each term is monotone. This has important consequences for the tractability and interpretability of the model.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-02381243&r=all
  10. By: Murielle Djiguemde (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - CNRS - Centre National de la Recherche Scientifique - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier - UM - Université de Montpellier - INRA - Institut National de la Recherche Agronomique); Dimitri Dubois (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - CNRS - Centre National de la Recherche Scientifique - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier - UM - Université de Montpellier - INRA - Institut National de la Recherche Agronomique); Alexandre Sauquet (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - CNRS - Centre National de la Recherche Scientifique - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier - UM - Université de Montpellier - INRA - Institut National de la Recherche Agronomique); Mabel Tidball (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - CNRS - Centre National de la Recherche Scientifique - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier - UM - Université de Montpellier - INRA - Institut National de la Recherche Agronomique)
    Abstract: Economists have been studying the optimal management of groundwater for many decades, initially through static models and, since the 1970s, through a dynamic framework. Several attempts have since been made to test dynamic models through laboratory experiments. Yet formulating and testing these models raises several challenges, which we tackle in this study by testing a very simple dynamic groundwater extraction model in a laboratory experiment. We propose a full characterization of the theoretical solutions, taking into account economic constraints. In the experiment we mimic continuous time by allowing subjects to make their extraction decisions whenever they wish, with the data (resource and payoffs) updated every second. The infinite horizon is simulated through the computation of payoffs, as if time were endless. To get around the weaknesses of the widely used Mean Squared Deviation (MSD) statistic and classify individual behavior as myopic, feedback or optimal, we combine the MSD with Ordinary Least Squares (OLS) regressions and time series treatments. Results show that a significant percentage of agents are able to adopt an optimal extraction path, that few agents should be considered truly myopic, and that using the MSD alone to classify agents would be misleading for about half of the study participants.
    Keywords: Experimental Economics,Renewable Resources,Continuous Time,Dynamic Optimization,Differential Games,Applied Econometrics.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02316729&r=all
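The MSD-based classification that the authors argue is insufficient on its own amounts to a nearest-benchmark rule: compute the mean squared deviation of a subject's extraction path from each theoretical path and pick the closest. A sketch with invented paths (not the paper's solutions):

```python
def msd(observed, benchmark):
    """Mean squared deviation between an observed and a theoretical
    extraction path of equal length."""
    return sum((o - b) ** 2 for o, b in zip(observed, benchmark)) / len(observed)

def classify(observed, benchmarks):
    """Label a subject by the closest benchmark path under MSD alone,
    the coarse criterion the paper supplements with OLS and time-series
    tests."""
    return min(benchmarks, key=lambda name: msd(observed, benchmarks[name]))

# Invented benchmark paths for a 4-period horizon
benchmarks = {
    "myopic":   [2.0, 2.0, 2.0, 2.0],
    "feedback": [1.6, 1.4, 1.2, 1.0],
    "optimal":  [1.0, 1.0, 1.0, 1.0],
}
print(classify([1.1, 0.9, 1.0, 1.0], benchmarks))  # 'optimal'
```

A subject sitting roughly equidistant between two benchmarks still gets a hard label under this rule, which is one reason a pure-MSD classification can mislead.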
  11. By: Ali Al-Aradi; Adolfo Correia; Danilo de Freitas Naiff; Gabriel Jardim; Yuri Saporito
    Abstract: We extend the Deep Galerkin Method (DGM) introduced in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations (PDEs) that arise in the context of optimal stochastic control and mean field games. First, we consider PDEs where the function is constrained to be positive and integrate to unity, as is the case with Fokker-Planck equations. Our approach involves reparameterizing the solution as the exponential of a neural network appropriately normalized to ensure both requirements are satisfied. This then gives rise to a partial integro-differential equation (PIDE) where the integral appearing in the equation is handled using importance sampling. Second, we tackle a number of Hamilton-Jacobi-Bellman (HJB) equations that appear in stochastic optimal control problems. The key contribution is that these equations are approached in their unsimplified primal form which includes an optimization problem as part of the equation. We extend the DGM algorithm to solve for the value function and the optimal control simultaneously by characterizing both as deep neural networks. Training the networks is performed by taking alternating stochastic gradient descent steps for the two functions, a technique similar in spirit to policy improvement algorithms.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.01455&r=all
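The positivity-and-normalization reparameterization described in the abstract can be sketched in miniature: writing the solution as exp(f(x))/Z makes it positive by construction, and Z is a single integral that can be estimated by sampling. The sketch below uses plain Monte Carlo with a uniform proposal in place of the paper's importance sampling, and a fixed closed-form f standing in for a trained network:

```python
import math, random

def estimate_Z(f, a, b, n_samples, rng):
    """Monte Carlo estimate of Z = integral of exp(f(x)) over [a, b],
    sampling x uniformly. (The paper uses importance sampling, i.e. a
    better-matched proposal distribution, for the same integral.)"""
    total = sum(math.exp(f(a + (b - a) * rng.random()))
                for _ in range(n_samples))
    return (b - a) * total / n_samples

rng = random.Random(0)
f = lambda x: -0.5 * x * x            # stand-in for the network's output
Z = estimate_Z(f, -5.0, 5.0, 100_000, rng)
p = lambda x: math.exp(f(x)) / Z      # positive everywhere, integrates to ~1
print(Z)  # close to sqrt(2*pi) ≈ 2.5066 for this choice of f
```

With f equal to a log-Gaussian kernel, Z should approach sqrt(2*pi); in the actual method f is a neural network and the normalization enters the PIDE residual being minimized.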
  12. By: Masaaki Fujii
    Abstract: In this work, we systematically investigate mean field games and mean field type control problems with multiple populations using a coupled system of forward-backward stochastic differential equations of McKean-Vlasov type stemming from Pontryagin's stochastic maximum principle. Although the agents within each population share the same cost functions and the same coefficient functions of the state dynamics, these may differ across populations. We study the mean field limits of three different situations: (i) every agent is non-cooperative; (ii) the agents within each population are cooperative; and (iii) the agents are cooperative within each population for some populations only. We provide several sets of sufficient conditions for the existence of a mean field equilibrium in each of these cases.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.11501&r=all
  13. By: Ketelaars, Martijn; Borm, Peter (Tilburg University, Center For Economic Research); Quant, Marieke (Tilburg University, Center For Economic Research)
    Abstract: This paper builds on the recent work of Groote Schaarsberg, Reijnierse and Borm (2018) on mutual liability problems. In essence, a mutual liability problem comprises a financial network in which agents may have both monetary individual assets and mutual liabilities. Here mutual liabilities reflect rightful monetary obligations from past bilateral transactions. To settle these liabilities by reallocating the individual assets, mutual liability rules are analyzed that are based on centralized bilateral transfer schemes which use a certain bankruptcy rule as their leading allocation mechanism. In this paper we derive a new characterization of mutual liability rules by taking a decentralized approach instead, which is based on a recursive individual settlement procedure. We show that for bankruptcy rules that satisfy composition, this decentralized procedure always leads to the same allocation as the one prescribed by the corresponding mutual liability rule based on centralized bilateral transfer schemes.
    Keywords: mutual liability rules; individual settlement allocation procedure; composition property
    JEL: C71 G33
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:tiu:tiucen:fa745b4f-f959-41d0-8c9f-0e53b82fcab7&r=all
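The composition property driving the equivalence result says that settling an estate in installments, with claims reduced by earlier payments, gives the same allocation as settling it all at once. It can be checked numerically for the proportional rule, a standard bankruptcy rule known to satisfy it (the claims and estate values below are invented):

```python
def proportional(estate, claims):
    """Proportional bankruptcy rule: each claimant receives a share of
    the estate proportional to her claim."""
    total = sum(claims)
    return [estate * c / total for c in claims]

claims = [60.0, 30.0, 10.0]
E, E1 = 50.0, 20.0

# Two installments: pay out E1 first, then E - E1 against reduced claims.
first = proportional(E1, claims)
second = proportional(E - E1, [c - f for c, f in zip(claims, first)])
once = proportional(E, claims)

print([f + s for f, s in zip(first, second)])  # matches `once`
print(once)
```

Rules that fail composition (the constrained equal awards rule, for instance) would make the two-installment outcome depend on how the estate is split, which is exactly why the paper's decentralized procedure needs the property.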
  14. By: Gehrig, Stefan; Mesoudi, Alex (University of Exeter); Lamba, Shakti
    Abstract: Microfinance is an economic development intervention that involves credit provision to low-income entrepreneurs. Lenders typically require joint liability, where borrowers share the responsibility of repaying a group loan. We argue that this lending practice is subject to the same fundamental cooperation problem faced by other organisms in nature, and consequently evolutionary theories of cooperation from the biological sciences can provide new insights into loan repayment behaviour. This could both inform the design of microfinance institutions, and offer a real-world test case for evolutionary theories of cooperation. We first formulate evolutionary hypotheses on group loan repayment based on assortment mechanisms like kin selection, reciprocity or partner choice. We then test them by reviewing 40 studies on micro-borrowers’ loan repayment from 31 countries. We find more supportive than contrary evidence for the hypotheses, but results are generally mixed, generating avenues for future research within this framework. Finally, we present an evolutionary game-theoretic model of group lending as a threshold public goods game which further explains some empirical findings and generates new predictions on repayment rates. Our work shows how understanding the evolution of cooperation can guide economic development interventions and, more generally, offer ultimate explanatory theories for phenomena studied by social scientists.
    Date: 2019–10–21
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:tmpqj&r=all
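The threshold public goods framing of joint liability in the final part of the abstract can be sketched with invented parameter values: repaying is individually costly, but if enough group members repay, every member (including non-repayers) enjoys a group benefit such as continued access to credit:

```python
def repayment_payoffs(contributions, cost, benefit, threshold):
    """Threshold public goods sketch of joint-liability lending: each
    borrower repays (1) or defaults (0); if at least `threshold` repay,
    every member receives `benefit`; repayers also bear `cost`."""
    success = sum(contributions) >= threshold
    return [benefit * success - cost * c for c in contributions]

# Invented numbers: 5 borrowers, repayment costs 1, the group benefit is
# worth 3 to each member, and the lender requires at least 3 repayments.
print(repayment_payoffs([1, 1, 1, 0, 0], cost=1, benefit=3, threshold=3))
print(repayment_payoffs([1, 1, 0, 0, 0], cost=1, benefit=3, threshold=3))
```

When the threshold is met, free riders earn more than repayers (3 versus 2 here), which is the cooperation problem; when too few repay, repayers bear the cost for nothing. The paper's game-theoretic model analyses equilibria of this kind of structure.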
  15. By: Bernard de Meyer (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Moussa Dabo (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics)
    Abstract: Mainstream financial econometrics methods are based on models well tuned to replicate price dynamics, but with little to no economic justification. In particular, the randomness in these models is assumed to result from a combination of exogenous factors. In this paper, we present a model originating from game theory, whose corresponding price dynamics are a direct consequence of the information asymmetry between private and institutional investors. This model, namely the CMMV pricing model, is therefore rooted in market microstructure. The pricing methods derived from it also appear to fit historical price data very well. Indeed, as evidenced in the last section of the paper, the CMMV model does a very good job of predicting option prices from readily available data. It also makes it possible to recover the dynamics of the volatility surface.
    Keywords: Game Theory,Information asymmetry,CMMV,Option pricing
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-02383135&r=all
  16. By: Qasim, Ahmed Waqar; Itaya, Jun-ichi
    Abstract: This paper characterizes tariff policy for final goods as well as for intermediate inputs in a model of heterogeneous firms. We develop a theoretical model to show how tariffs on final goods and intermediate inputs affect welfare, productivity, and the entry of firms in a country. We formulate the tariff level selection available to the policymaker as four policy experiments: unilateral tariff selection, cooperative tariff selection, non-cooperative tariff selection, and political tariff selection. Our results show that at the Stackelberg equilibrium resulting from unilateral tariff selection, the policy level selected by the leader is higher than in the other experiments. In the case of cooperation, free trade is the equilibrium outcome: since the welfare gains of one country come at the cost of the other, zero tariffs are the optimal strategy for both countries. At the Nash equilibrium resulting from non-cooperative tariff selection, both countries select their policy levels simultaneously and apply positive tariff rates to both intermediate inputs and final goods. Lastly, at the political equilibrium, which arises once lobbying by heterogeneous firms is taken into account, the selected policy level diverges from the benchmark unilateral level. To illustrate our tariff policy formulations quantitatively, we use US import data to estimate the policy levels. These estimates are then compared with actual tariff rates to evaluate the degree of political interference of lobbying firms in policy selection.
    Keywords: intermediate inputs, heterogeneous firms, trade policy, lobbying firms
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:hok:dpaper:342&r=all

This nep-gth issue is ©2019 by Sylvain Béal. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.