nep-cmp New Economics Papers
on Computational Economics
Issue of 2014‒10‒03
eleven papers chosen by
Stan Miles
Thompson Rivers University

  1. Personal Income Tax Reforms: a Genetic Algorithm Approach By Matteo Morini; Simone Pellegrino
  2. Probabilistic Transitivity in Sports By Johannes Tiwisina; Philipp Kuelpmann
  3. Choosing a Good Toolkit: An Essay in Behavioral Economics By Alejandro Francetich; David M. Kreps
  4. Modelling land use, deforestation, and policy analysis: A hybrid optimization-ABM heterogeneous agent model with application to the Bolivian Amazon By Lykke Andersen; Ugur Bilge; Ben Groom; David Gutierrez; Evan Killick; Juan Carlos Ledezma; Charles Palmer; Diana Weinhold
  5. Integrating VAT into EUROMOD. Documentation and results for Belgium By Decoster, André; Ochmann, Richard; Spiritus, Kevin
  6. On the conjugacy of off-line and on-line Sequential Monte Carlo Samplers By Arnaud Dufays
  7. Design and Implementation of Schedule-Based Trading Strategies Based on Uncertainty Bands By Vladimir Markov; Slava Mazur; David Saltz
  8. An Adaptive VNS Algorithm for Vehicle Routing Problems with Intermediate Stops By Schneider, M.; Stenger, A.; Hof, J.
  9. Behavioral Finance and Agent Based Model: the new evolving discipline of quantitative behavioral finance ? By Concetta Sorropago
  10. Testing for Neglected Nonlinearity Using Artificial Neural Networks with Many Randomized Hidden Unit Activations By Tae-Hwy Lee; Zhou Xi; Ru Zhang
  11. An Aggregation Matrix MATLAB Function By Caleb Stair

  1. By: Matteo Morini (ENS Lyon, RHÔNE ALPES COMPLEX SYSTEMS INSTITUTE (IXXI), Lyon, France; Department of Economics and Statistics (Dipartimento di Scienze Economico-Sociali e Matematico-Statistiche), University of Torino, Italy); Simone Pellegrino (Department of Economics and Statistics (Dipartimento di Scienze Economico-Sociali e Matematico-Statistiche), University of Torino, Italy)
    Abstract: Given a fixed reduction in the present level of tax revenue, we employ a genetic algorithm to explore a very large combinatorial space of tax structures and determine the optimal structure of a personal income tax: one that maximizes the redistributive effect of the tax while preventing any taxpayer from being worse off than under the present tax structure. We take Italy as a case study.
    Keywords: Personal income taxation, Genetic algorithms, Micro-simulation models, Reynolds-Smolensky index, Tax reforms
    JEL: C63 C81 H23 H24
    Date: 2014–09
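    As a rough illustration of this approach, the Python sketch below evolves bracket rates with a genetic algorithm under a revenue floor and a no-worse-off penalty. The incomes, brackets, baseline rates, and fitness weights are all invented for illustration; they are not the paper's Italian microdata or model.

```python
import random

random.seed(0)

# Illustrative data: five taxpayers and five tax brackets (invented numbers).
INCOMES = [12_000, 25_000, 40_000, 70_000, 120_000]
BRACKETS = [0, 15_000, 28_000, 55_000, 75_000]
BASE_RATES = [0.23, 0.27, 0.38, 0.41, 0.43]      # stand-in "current" schedule

def tax_due(income, rates):
    """Progressive tax: rates[i] applies to income inside bracket i."""
    due = 0.0
    for i, rate in enumerate(rates):
        lo = BRACKETS[i]
        hi = BRACKETS[i + 1] if i + 1 < len(BRACKETS) else float("inf")
        if income > lo:
            due += rate * (min(income, hi) - lo)
    return due

def gini(values):
    """Gini coefficient of a list of positive values."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * sum(xs)) - (n + 1) / n

TARGET_REVENUE = sum(tax_due(y, BASE_RATES) for y in INCOMES)

def fitness(rates):
    """Redistributive effect (pre-tax minus post-tax Gini, in the spirit of
    Reynolds-Smolensky), penalized for losing revenue or for making any
    taxpayer worse off than under BASE_RATES."""
    post = [y - tax_due(y, rates) for y in INCOMES]
    base = [y - tax_due(y, BASE_RATES) for y in INCOMES]
    score = gini(INCOMES) - gini(post)
    revenue = sum(tax_due(y, rates) for y in INCOMES)
    if revenue < TARGET_REVENUE:
        score -= (TARGET_REVENUE - revenue) / TARGET_REVENUE
    score -= sum(0.001 for p, b in zip(post, base) if p < b)
    return score

def evolve(pop_size=60, generations=80):
    """Plain GA: truncation selection, one-point crossover, Gaussian mutation."""
    pop = [[random.uniform(0.0, 0.6) for _ in BRACKETS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(BRACKETS))
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:
                j = random.randrange(len(child))
                child[j] = min(0.6, max(0.0, child[j] + random.gauss(0, 0.05)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

    The real search space in the paper also includes thresholds and deductions, which makes the combinatorics far larger than this rate-only sketch.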
  2. By: Johannes Tiwisina (Center for Mathematical Economics, Bielefeld University); Philipp Kuelpmann (Center for Mathematical Economics, Bielefeld University)
    Abstract: We seek the statistical model that most accurately describes empirically observed results in sports. The idea of a transitive relation on team strengths is implemented by imposing a set of constraints on the outcome probabilities. We theoretically investigate the resulting optimization problem and draw comparisons to similar problems from the existing literature, including the linear ordering problem and the isotonic regression problem. Our optimization problem turns out to be very hard to solve. We propose a branch and bound algorithm for an exact solution and, for larger sets of teams, a heuristic method for quickly finding a "good" solution. Finally, we apply the described methods to panel data from soccer, American football and tennis, and also use our framework to compare the performance of empirically applied ranking schemes.
    Keywords: stochastic transitivity, trinomial, geometric optimization, ranking, branch and bound, linear ordering problem, ELO, tabu search, football, soccer, tennis, bundesliga, NFL, ATP
    JEL: L83 C61 C63 C81
    Date: 2014–08
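    One example of the kind of transitivity constraint imposed on outcome probabilities is strong stochastic transitivity, which can be checked as in the Python sketch below; the win-probability matrices used here are made up, and the paper's constraint set and estimation problem are considerably richer.

```python
def satisfies_sst(p):
    """Check strong stochastic transitivity of a win-probability matrix p,
    where p[i][j] is the probability that team i beats team j and teams are
    listed from strongest to weakest: for i stronger than j stronger than k,
    we need p[i][k] >= max(p[i][j], p[j][k])."""
    n = len(p)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if p[i][k] < max(p[i][j], p[j][k]) - 1e-12:
                    return False
    return True
```

    Estimating the probabilities that best fit observed match results subject to all such constraints is what makes the underlying optimization problem hard.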
  3. By: Alejandro Francetich; David M. Kreps
    Abstract: The problem of choosing an optimal toolkit day after day, when there is uncertainty concerning the value of different tools that can only be resolved by carrying the tools, is a multi-armed bandit problem with nonindependent arms. Accordingly, except for very simple specifications, this optimization problem cannot (practically) be solved. Decision makers facing this problem presumably resort to decision heuristics: “sensible” rules for deciding which tools to carry, based on past experience. In this paper, we examine and compare the performance of a variety of heuristics, some very simple and others inspired by the computer-science literature on these problems. Some asymptotic results are obtained, especially concerning the long-run outcomes of using the heuristics; these results indicate which heuristics do well when the discount factor is close to one. But our focus is on the relative performance of these heuristics for discount factors bounded away from one, which we study through simulation of the heuristics on a collection of test problems.
    Date: 2014
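    A canonical example of such a decision heuristic is epsilon-greedy, sketched below in Python on a deterministic two-arm problem; the payoffs and epsilon value are invented, and the paper studies far richer heuristics on nonindependent arms.

```python
import random

def epsilon_greedy(payoffs, steps, eps, seed=0):
    """Simple bandit heuristic: with probability eps explore a random arm,
    otherwise exploit the arm with the best sample mean. `payoffs` maps an
    arm index to a (here deterministic) reward."""
    rng = random.Random(seed)
    n_arms = len(payoffs)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)          # explore
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit
        r = payoffs[arm]
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental sample mean
        total += r
    return total / steps, means
```

    With arms that are not independent, carrying one tool is informative about others, which is why the heuristics compared in the paper can exploit structure that plain epsilon-greedy ignores.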
  4. By: Lykke Andersen; Ugur Bilge; Ben Groom; David Gutierrez; Evan Killick; Juan Carlos Ledezma; Charles Palmer; Diana Weinhold
    Abstract: Policy interventions designed to simultaneously stem deforestation and reduce poverty in tropical countries entail complex socio-environmental trade-offs. A hybrid model, comprising an optimising, agricultural household model integrated into the ‘shell’ of an agent-based model, is developed in order to explore the trade-offs of alternative policy bundles and sequencing options. The model is calibrated to the initial conditions of a small forest village in rural Bolivia. Heterogeneous farmers make individually optimal land-use decisions based on factor endowments and market conditions. Endogenously determined wages and policy provided jobs link the agricultural labour market and rural-urban migration rates. Over a simulated 20-year period, the policymaker makes “real-time” public investments and public policy that in turn impact welfare, productivity, and migration. National and local land-use policy interventions include conservation payments, deforestation taxes and international REDD payments that both impact land use directly and affect the policymaker’s budget. The results highlight trade-offs between reductions in deforestation and improvements in household welfare that can only be overcome either when international REDD payments are offered or when decentralized deforestation taxes are implemented. Yet, the sequencing of policies is also found to play a critical role in these results.
    Date: 2014–09
  5. By: Decoster, André; Ochmann, Richard; Spiritus, Kevin
    Abstract: This paper documents the integration of microsimulation tools for direct taxation, indirect taxation, and social benefits in the context of the European tax and benefit simulator, EUROMOD. Integration has been developed in parallel for two countries: Belgium and Germany. The paper at hand documents the process and presents simulation results for the case of Belgium. An integrated database underlying EUROMOD that contains household-level information on income and consumption is generated. Consumption micro data from the 2009 cross section of the household budget survey for Belgium are used to impute information on spending for durable and non-durable commodities into EU-SILC data, applying regression-based imputation techniques. Engel curves are estimated at the household level for total non-durable spending, expenditures on durable goods, as well as non-durable expenditure share equations. The imputed household spending is then used to simulate the baseline VAT system in EUROMOD, for which we report an incidence analysis. Finally, several arbitrary policy reforms implementing VAT rate uniformity are analyzed with respect to their distributional impact.
    Date: 2014–06–16
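    The core imputation step can be illustrated with a toy regression-based imputation in Python: estimate a log-linear Engel curve on one survey, then predict spending into an income-only dataset. The data, functional form, and elasticity below are invented; the paper estimates richer Engel curves and expenditure share equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical household budget survey: income and observed non-durable
# spending, generated from a log-linear Engel curve with elasticity 0.9.
income_hbs = rng.lognormal(mean=10.0, sigma=0.4, size=500)
spending_hbs = 0.6 * income_hbs**0.9 * rng.lognormal(0.0, 0.1, size=500)

# Estimate the Engel curve in logs: log(spending) = a + b * log(income)
b, a = np.polyfit(np.log(income_hbs), np.log(spending_hbs), 1)

# Impute spending into a second, income-only dataset (a stand-in for EU-SILC)
income_silc = rng.lognormal(mean=10.0, sigma=0.4, size=200)
spending_imputed = np.exp(a + b * np.log(income_silc))
```

    Once spending is imputed for every household, applying the VAT rate schedule to the imputed consumption baskets yields the incidence analysis described in the abstract.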
  6. By: Arnaud Dufays (École Nationale de la Statistique et de l'Administration Économique, CREST)
    Abstract: Sequential Monte Carlo (SMC) methods are widely used for filtering purposes in non-linear economic or financial models. Nevertheless, the scope of SMC encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov Chain Monte Carlo (MCMC) methods. SMC algorithms not only draw posterior distributions of static or dynamic parameters but also provide an estimate of the normalizing constant. The tempered and time (TNT) algorithm, developed in the paper, combines (off-line) tempered SMC inference with on-line SMC inference for estimating many slightly different distributions. The method encompasses the Iterated Batch Importance Sampling (IBIS) algorithm and, more generally, the Resample Move (RM) algorithm. Besides the number of particles, the TNT algorithm self-adjusts its calibrated parameters and relies on a new MCMC kernel that allows for particle interactions. The algorithm is well suited for efficiently back-testing models. We conclude by comparing in-sample and out-of-sample performances of complex volatility models.
    Keywords: Bayesian inference, Sequential Monte Carlo, Annealed Importance sampling, Differential Evolution, Volatility models, Multifractal model, Markov-switching model
    JEL: C11 C15 C22 C58
    Date: 2014–09
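    As a minimal illustration of the off-line (tempered) side of such samplers, the Python sketch below anneals particles from a Gaussian prior to the posterior of a toy one-observation model. The model, temperature ladder, and move kernel are invented and far simpler than the TNT algorithm; in particular there is no adaptive calibration or particle interaction.

```python
import numpy as np

def tempered_smc(n_particles=4000, n_steps=20, seed=0):
    """Tempered SMC sketch: move particles from a N(0, 9) prior to the
    posterior under one observation y = 2 with unit-variance Gaussian
    likelihood, annealing the likelihood exponent phi from 0 to 1.
    The exact posterior here is N(1.8, 0.9)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 3.0, n_particles)        # draws from the prior

    def log_lik(t):
        return -0.5 * (t - 2.0) ** 2

    def log_target(t, phi):                          # tempered density, up to a constant
        return -0.5 * t**2 / 9.0 + phi * log_lik(t)

    phis = np.linspace(0.0, 1.0, n_steps + 1)
    for phi_prev, phi in zip(phis[:-1], phis[1:]):
        # Reweight by the likelihood raised to the temperature increment
        logw = (phi - phi_prev) * log_lik(theta)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        theta = theta[rng.choice(n_particles, size=n_particles, p=w)]  # resample
        # One random-walk Metropolis move per particle at the current temperature
        prop = theta + rng.normal(0.0, 0.5, n_particles)
        accept = np.log(rng.uniform(size=n_particles)) < log_target(prop, phi) - log_target(theta, phi)
        theta = np.where(accept, prop, theta)
    return theta
```

    Summing the log of the average incremental weights across temperature steps would additionally yield the normalizing-constant estimate mentioned in the abstract.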
  7. By: Vladimir Markov; Slava Mazur; David Saltz
    Abstract: We propose a design for schedule-based execution trading strategies based on uncertainty bands. This formulation: 1) simplifies strategy specification and implementation; 2) provides for flexible allocation among passive, opportunistic, aggressive, and dark pool crossing execution tactics; 3) allows for rapid enhancements as new optimization methods, scheduling techniques, alpha models, and execution tactics are developed; and 4) yields information at macroscopic (strategic) and microscopic (tactical) levels that is easily published to trading databases and front-end applications.
    Date: 2014–09
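    The band-based allocation among tactics can be illustrated with the Python sketch below; the band logic and thresholds are invented for illustration, and the paper's actual uncertainty-band construction and tactic set are more elaborate.

```python
def execution_tactic(filled_frac, schedule_frac, band=0.05):
    """Pick an execution tactic from the position of the filled quantity
    relative to an uncertainty band around the target schedule (both
    expressed as fractions of the total order)."""
    if filled_frac < schedule_frac - band:
        return "aggressive"      # behind the lower band: cross the spread
    if filled_frac > schedule_frac + band:
        return "passive"         # ahead of the upper band: post and wait
    return "opportunistic"       # inside the band: discretionary tactics
```

    Keeping the strategy's state as (schedule, band, fill) also makes the macroscopic and microscopic information mentioned in the abstract easy to publish to trading databases.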
  8. By: Schneider, M.; Stenger, A.; Hof, J.
    Date: 2014
  9. By: Concetta Sorropago (Department of Computer, Control and Management Engineering, Universita' degli Studi di Roma "La Sapienza")
    Abstract: The financial crisis of recent years has deeply questioned the ability of traditional economic models to help govern the complexity of the modern financial world. A growing number of scholars, practitioners, and regulators agree that the recurring financial crises, as well as the overwhelming evidence of market anomalies, can be explained only by resorting to behavioral finance. Behavioral finance has been able to identify individual investor irrationality but unable to quantify its total effect on the market in terms of price deviation from fundamentals. Quantitative Behavioral Finance (QBF) is an emerging discipline that attempts to model the impact of human cognitive biases on asset prices. The aim of this paper is to provide an overview of its theoretical foundations and its challenges. The paper is divided in two parts. In the first, we present a highly selective literature review of the key theoretical foundations. Why does this new field of study emerge? What topics does it study? Which disciplines have contributed the most, and why? In the second part, the paper sketches an outline and provides a preliminary set of references on the agent-based model approach as one of the most promising lines of research for quantitatively modeling the impact of behavioral investors on the market. The literature surveyed supports the use of this class of models because of their capability in coping with heterogeneous agents' behaviours, whether rational or boundedly rational, without losing the ability to identify and examine how each of them operates separately or in interaction. Taken as a whole, the articles reviewed here indicate that many open issues remain, both in the theoretical design of agent-based models, due to the large degrees of freedom of modelers, and in the empirical use of this class of models for real political economic implications, due to the arduous methods for model validation, calibration, and estimation.
    Keywords: Literature review ; Behavioral Finance ; Agent Computational Economics
    Date: 2014
  10. By: Tae-Hwy Lee (Department of Economics, University of California Riverside); Zhou Xi (University of California, Riverside); Ru Zhang (University of California, Riverside)
    Abstract: This paper makes a simple but previously neglected point with regard to an empirical application of the test of White (1989) and Lee, White and Granger (LWG, 1993) for neglected nonlinearity in the conditional mean, using the feedforward single-layer artificial neural network (ANN). Because the activation parameters in the hidden layer are not identified under the null hypothesis of linearity, LWG suggested activating the ANN hidden units with randomly generated activation parameters. Their Monte Carlo experiments demonstrated excellent performance (good size and power), even though LWG considered a fairly small number (10 or 20) of random hidden unit activations. However, in this paper we note that the good size and power in Monte Carlo experiments are average frequencies of rejecting the null hypothesis over multiple replications of the data generating process. Averaging over many simulations in a Monte Carlo study smooths out the randomness of the activations. In an empirical study, unlike in a Monte Carlo study, multiple realizations of the data are not available. In this case, the ANN test is sensitive to the randomly generated activation parameters. One solution is the use of Bonferroni bounds, as suggested in LWG (1993), which however still exhibits some excessive dependence on the random activations (as shown in this paper). Another solution is to integrate the test statistic over the nuisance parameter space, for which, however, bootstrap or simulation must be used to obtain the null distribution of the integrated statistic. In this paper, we consider a much simpler solution that is shown to work very well: we simply increase the number of randomized hidden unit activations to a (very) large number (e.g., 1000). We show that using many randomly generated activation parameters can robustify the performance of the ANN test when it is applied to real empirical data. This robustification is reliable and useful in practice, and can be achieved at virtually no cost, as increasing the number of random activations is almost costless given today's computer technology.
    Keywords: Many Activations, Randomized Nuisance Parameters, Bonferroni Bounds, Principal Components
    JEL: C1 C4 C5
    Date: 2014–09
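    The test described can be sketched in Python as an LM-type auxiliary regression; the activation range, use of tanh units, and the simple principal-component reduction below are illustrative choices and not necessarily the authors' exact specification.

```python
import numpy as np

def ann_test_statistic(x, y, n_units=1000, n_pc=3, seed=0):
    """LM-type test for neglected nonlinearity in the spirit of Lee, White
    and Granger: regress OLS residuals on the leading principal components
    of many randomly activated hidden units. Under linearity, n * R^2 is
    approximately chi-squared with n_pc degrees of freedom."""
    rng = np.random.default_rng(seed)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta                        # residuals of the linear model
    # Many randomized hidden-unit activations tanh(g0 + g1 * x)
    G = rng.uniform(-2.0, 2.0, size=(2, n_units))
    H = np.tanh(X @ G)
    # Remove the part of each hidden unit explained by the linear regressors,
    # then keep the leading principal components of what is left
    H = H - X @ np.linalg.lstsq(X, H, rcond=None)[0]
    U = np.linalg.svd(H, full_matrices=False)[0]
    Z = np.column_stack([X, U[:, :n_pc]])
    fitted = Z @ np.linalg.lstsq(Z, resid, rcond=None)[0]
    r2 = fitted.var() / resid.var()             # R^2 of the auxiliary regression
    return n * r2
```

    With a large `n_units`, rerunning the test with a different seed changes the statistic very little, which is the robustification the paper argues for.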
  11. By: Caleb Stair (Regional Research Institute, West Virginia University)
    Abstract: This Technical Document describes the foundations of an aggregation matrix function implemented in MATLAB, including the format and structure of the required aggregation vector. The function takes the N-dimensional aggregation vector as its argument. The aggregation vector contains N values ranging from 1 to M, each of which is the aggregate index corresponding to one of the N pre-aggregation indices. The function returns an aggregation matrix with M rows and N columns. Pre-multiplying an existing matrix with N rows by the aggregation matrix reduces the row dimensionality from N to M by adding the sectors to be aggregated. Post-multiplication by the transpose of the aggregation matrix reduces the column dimensionality from N to M accordingly.
    Keywords: aggregate/aggregation matrix, input-output, IO
    JEL: C6
    Date: 2013–12–17
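    A minimal Python analogue of the function described in the abstract is sketched below (the function name and NumPy implementation are mine; the paper's actual code is in MATLAB).

```python
import numpy as np

def aggregation_matrix(agg_vector):
    """Build the M-by-N 0/1 aggregation matrix S from an N-vector whose i-th
    entry gives the aggregate index (1..M) of pre-aggregation sector i.
    Pre-multiplying an N-row matrix by S sums the rows to be aggregated;
    post-multiplying by S.T does the same for columns."""
    agg = np.asarray(agg_vector, dtype=int)
    n = agg.size
    m = agg.max()
    S = np.zeros((m, n))
    S[agg - 1, np.arange(n)] = 1.0   # one 1 per column, in the aggregate's row
    return S
```

    For example, with aggregation vector [1, 1, 2, 2], `S @ A @ S.T` collapses a 4x4 input-output matrix A into a 2x2 matrix of summed blocks.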

This nep-cmp issue is ©2014 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.