New Economics Papers
on Computational Economics
Issue of 2007–11–24
eight papers chosen by



  1. Multiagent System Platform for Auction Simulations By Alan Mehlenbacher
  2. Multiagent System Simulations of Signal Averaging in English Auctions with Two-Dimensional Value Signals By Alan Mehlenbacher
  3. Multiagent System Simulations of Treasury Auctions By Alan Mehlenbacher
  4. The Sydney Olympics, seven years on: an ex-post dynamic CGE assessment By James A Giesecke; John R Madden
  5. Environmental Policy in a Federal State - A Regional CGE Analysis of the NEC Directive in Belgium By Saveyn Bert; Van Regemorter Denise
  6. Social Assistance – No, Thanks? Empirical Analysis of Non-Take-Up in Austria 2003 By Fuchs M
  7. Analysing the Effects of Tax Benefit Reforms on Income Distribution: A Decomposition Approach By Bargain O; Callan T
  8. Combinatorial and computational aspects of multiple weighted voting games By Aziz, Haris; Paterson, Mike; Leech, Dennis

  1. By: Alan Mehlenbacher (Department of Economics, University of Victoria)
    Abstract: I have developed a multiagent system platform that provides a valuable complement to alternative auction research methods. The platform facilitates the development of heterogeneous agents and provides an experimental environment that is under the experimenter's complete control. Simulations with alternative learning methods identify impulse balance learning as the most promising approach for auctions.
    Keywords: Agent-based computational economics, agent learning
    JEL: C71 D13 D63 C15 C72 D83
    Date: 2007–11–19
    URL: http://d.repec.org/n?u=RePEc:vic:vicddp:0706&r=cmp
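Impulse balance learning adjusts a strategy after each round toward the point where ex-post upward and downward "impulses" balance. A minimal sketch for a first-price sealed-bid bidder, using a hypothetical update rule (the platform's actual implementation is not described in the abstract):

```python
def impulse_balance_update(value, factor, rival_bid, step=0.1):
    """One impulse-balance update of a bid-shading factor in a
    first-price auction (illustrative rule, not the paper's exact one).

    value     -- the bidder's private value
    factor    -- current shading factor in [0, 1]; bid = factor * value
    rival_bid -- highest competing bid this round
    """
    bid = factor * value
    if bid <= rival_bid and rival_bid < value:
        # Lost an auction that would have been profitable to win:
        # upward impulse proportional to the foregone profit.
        factor += step * (value - rival_bid) / value
    elif bid > rival_bid:
        # Won, but overbid by (bid - rival_bid):
        # downward impulse proportional to the money left on the table.
        factor -= step * (bid - rival_bid) / value
    return min(max(factor, 0.0), 1.0)
```

Repeated over many rounds, the factor settles where the expected impulses in each direction offset.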
  2. By: Alan Mehlenbacher (Department of Economics, University of Victoria)
    Abstract: This study uses a multiagent system to investigate English auctions with two-dimensional value signals and agents that learn a signal-averaging factor. I find that signal averaging increases nonlinearly as the common value percent increases, decreases with the number of bidders, and decreases at high common value percents when the common value signal is more uncertain. Using signal averaging, agents increase their profit when the value is more uncertain. The most obvious effect of signal averaging is on reducing the percentage of auctions won by bidders with the highest common value signal.
    Keywords: Agent-based computational economics, multi-dimensional value signals, English auctions, signal averaging
    JEL: C71 D13 D63 C15 C72 D83
    Date: 2007–11–19
    URL: http://d.repec.org/n?u=RePEc:vic:vicddp:0708&r=cmp
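One way to read the learned signal-averaging factor is as a convex weight between the bidder's private and common value signals. The abstract does not give the exact estimator, so this functional form is an assumption:

```python
def value_estimate(private_signal, common_signal, alpha):
    """Blend a bidder's two-dimensional value signal using a learned
    signal-averaging factor alpha in [0, 1]. With alpha = 0 the bidder
    relies only on its private signal; with alpha = 1 it fully averages
    in the (possibly noisier) common-value signal. Hypothetical form.
    """
    return (1.0 - alpha) * private_signal + alpha * common_signal
```

Raising alpha pulls a bidder with an extreme common-value signal back toward the pack, which is consistent with the finding that averaging reduces how often the highest common-signal holder wins.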
  3. By: Alan Mehlenbacher (Department of Economics, University of Victoria)
    Abstract: This study uses a multiagent system to determine which payment rule provides the most revenue in Treasury auctions that are based on Canadian rules. The model encompasses the when-issued, auction, and secondary markets, as well as constraints for primary dealers. I find that the Spanish payment rule is revenue inferior to the Discriminatory payment rule across all market price spreads, but the Average rule is revenue superior. For most market-price spreads, Uniform payment results in less revenue than Discriminatory, but there are many cases in which Vickrey payment produces more revenue.
    Keywords: Agent-based computational economics, treasury auctions, auction context
    JEL: C71 D13 D63 C15 C72 D83
    Date: 2007–11–19
    URL: http://d.repec.org/n?u=RePEc:vic:vicddp:0709&r=cmp
  4. By: James A Giesecke; John R Madden
    Abstract: A recent development in ex-ante analysis of mega events is the use of computable general equilibrium (CGE) models. CGE models improve greatly on the input-output model, which they have largely displaced, since they incorporate fixed factors and substitution effects. However, like input-output, the method is still subject to the risk of over-optimistic estimation of benefits. We see three sources of such risk: (i) failure to treat public inputs as costs; (ii) elastic factor supply assumptions; and (iii) overestimation of foreign demand shocks via inclusion of "induced tourism" expenditure. In this paper, we undertake an ex-post analysis of the Olympics that addresses each of these risks. We handle the first two directly: public services used to support the Games (such as security services) are treated as Games-specific inputs, and we model the national labour market as operating at full employment. For the third risk, we undertake an historical simulation to uncover the extent, if any, of induced tourism. We find no evidence of an induced tourism effect, and so exclude it from our analysis. With these assumptions, we find the Sydney Olympics generated a net consumption loss of approximately $2.1 billion.
    Keywords: Olympics economic impact, major projects, regional dynamic CGE
    JEL: R13 H43 C68
    Date: 2007–09
    URL: http://d.repec.org/n?u=RePEc:cop:wpaper:g-168&r=cmp
  5. By: Saveyn Bert (K.U.Leuven-Center for Economic Studies); Van Regemorter Denise (K.U.Leuven-Center for Economic Studies)
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:ete:etewps:ete0701&r=cmp
  6. By: Fuchs M
    Abstract: Based on a comparison of detailed micro-data (EU-SILC), official figures on recipients and expenditure, and potential entitlements simulated with the tax/benefit microsimulation model EUROMOD, the paper estimates the size of non-take-up of monetary social assistance benefits in Austria in 2003. To account for likely measurement errors both in the reported income data and in the simulation of household needs, participation rates are calculated for various scenarios of the underlying parameters. I find that more than half of all households potentially entitled to the benefit do not claim it. The determinants of non-take-up, analysed in different regression models controlling for possible endogeneity of independent variables, show significantly higher participation rates inter alia for higher entitled amounts, a non-employed household head, and residence in Vienna. These results confirm hypotheses derived from theoretical models of take-up related to pecuniary determinants, information and administration costs, as well as psychological costs.
    Keywords: Austria, take-up, social assistance, microsimulation
    JEL: D31 H31 H53 I38
    Date: 2007–10
    URL: http://d.repec.org/n?u=RePEc:ese:emodwp:em4/07&r=cmp
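The headline non-take-up rate is the share of households simulated as entitled that do not report receiving the benefit. A minimal sketch with a hypothetical data layout (the paper works from EU-SILC microdata with EUROMOD-simulated entitlements and varies the parameters behind the entitlement flag):

```python
def non_take_up_rate(households):
    """Share of households simulated as entitled to a benefit that do
    not report receiving it (the 'beta' rate in the take-up literature).

    households -- iterable of (entitled: bool, receives: bool) pairs
    """
    entitled = [(e, r) for e, r in households if e]
    if not entitled:
        return 0.0
    non_claimants = sum(1 for _, r in entitled if not r)
    return non_claimants / len(entitled)
```

Re-running this under alternative entitlement simulations gives the range of scenario rates the abstract refers to.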
  7. By: Bargain O; Callan T
    Abstract: To assess the impact of tax–benefit policy changes on income distribution over time, we suggest a methodology based on counterfactual simulations. We start by decomposing changes in inequality/poverty indices into three contributions: reforms of the tax–benefit structure (eligibility rules, tax rate structure, etc.), changes in nominal levels of both market incomes and tax–benefit parameters (e.g. benefit amounts, tax bands), and all other changes in the underlying population (including market income inequality and demographic composition). Then, the decomposition helps to extract an absolute measure of the impact of tax–benefit changes on inequality when evaluated against a distributionally–neutral benchmark, i.e. a situation where tax–benefit parameters are adjusted in line with income growth. We apply this measure to assess recent policy changes in twelve European countries. Finally, the full decomposition allows quantifying the relative role of policy changes compared to all other factors. We provide an illustration on France and Ireland and check the sensitivity of the results to the decomposition order.
    Keywords: Tax-benefit policy, inequality, poverty, decomposition, microsimulation.
    JEL: H23 H53 I32
    Date: 2007–10
    URL: http://d.repec.org/n?u=RePEc:ese:emodwp:em5/07&r=cmp
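The counterfactual logic can be sketched with a toy Gini calculation: the policy effect is the inequality change from swapping tax-benefit rules while holding the population fixed, averaged over the two population bases; the rest is attributed to other changes. This is a simplified Shapley-style two-way sketch, not the paper's full decomposition with nominal adjustments and distributionally-neutral benchmarks:

```python
def gini(incomes):
    """Gini coefficient of a list of non-negative incomes."""
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def decompose(pop0, pop1, policy0, policy1):
    """Split a change in the Gini into a policy effect and an 'other'
    (population / market income) effect via counterfactual combinations.
    popX: lists of gross incomes; policyX: gross -> disposable income.
    """
    g = lambda pop, pol: gini([pol(y) for y in pop])
    total = g(pop1, policy1) - g(pop0, policy0)
    # Average the policy effect over the two population bases.
    policy_effect = 0.5 * ((g(pop0, policy1) - g(pop0, policy0))
                           + (g(pop1, policy1) - g(pop1, policy0)))
    return total, policy_effect, total - policy_effect
```

With an unchanged population, the whole inequality change is attributed to the policy reform, as expected.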
  8. By: Aziz, Haris (Computer Science Department, University of Warwick); Paterson, Mike (Computer Science Department, University of Warwick); Leech, Dennis (Economics Department, University of Warwick)
    Abstract: Weighted voting games are ubiquitous mathematical models which are used in economics, political science, neuroscience, threshold logic, reliability theory and distributed systems. They model situations where agents with variable voting weight vote in favour of or against a decision. A coalition of agents is winning if and only if the sum of weights of the coalition exceeds or equals a specified quota. We provide a mathematical and computational characterization of multiple weighted voting games which are an extension of weighted voting games. We analyse the structure of multiple weighted voting games and some of their combinatorial properties especially with respect to dictatorship, veto power, dummy players and Banzhaf indices. Among other results we extend the concept of amplitude to multiple weighted voting games. An illustrative Mathematica program to compute voting power properties of multiple weighted voting games is also provided.
    Keywords: multi-agent systems; multiple weighted voting games; game theory; algorithms and complexity; voting power
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:823&r=cmp
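In a multiple weighted voting game a coalition wins only if it meets the quota in every component game, and a player's raw Banzhaf score counts the coalitions of the others that the player swings from losing to winning. A brute-force Python stand-in for this calculation (the authors provide a Mathematica program; enumeration is exponential in the number of players):

```python
from itertools import combinations

def winning(coalition, games):
    """True iff the coalition meets the quota in every component game.
    games: list of (quota, weights) pairs; weights indexed by player."""
    return all(sum(w[i] for i in coalition) >= q for q, w in games)

def banzhaf(games, n):
    """Normalized Banzhaf indices for an n-player multiple weighted
    voting game, by enumerating all coalitions of the other players."""
    scores = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for coal in combinations(others, r):
                if not winning(coal, games) and winning(coal + (i,), games):
                    scores[i] += 1      # player i is a swing here
    total = sum(scores)
    return [s / total if total else 0.0 for s in scores]
```

For the classic single game [3; 2, 1, 1] this yields indices (3/5, 1/5, 1/5); adding a second component game can only remove winning coalitions, which is how the intersection structure reshapes power.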

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.