nep-cmp New Economics Papers
on Computational Economics
Issue of 2016‒11‒27
fourteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Branch-and-Cut for the Split Delivery Vehicle Routing Problem with Time Windows By Nicola Bianchessi; Stefan Irnich
  2. Trust and Performance: Exploring Socio-Economic Mechanisms in the “Deep” Network Structure with Agent-Based Modeling By Gao, Lin
  3. Resource Allocation Heuristics for Unknown Sales Response Functions with Additive Disturbances By Gahler, Daniel; Hruschka, Harald
  4. Fiscal policy, inequality and poverty in Iran: Assessing the impact and effectiveness of taxes and transfers By Ali Enami; Nora Lustig; Alireza Taqdiri
  5. On the wavelets-based SWIFT method for backward stochastic differential equations By Ki Wai Chau; Cornelis W. Oosterlee
  6. Monetary policy and large crises in a financial accelerator agent-based model By Giri, Federico; Riccetti, Luca; Russo, Alberto; Gallegati, Mauro
  7. A decomposition algorithm for computing income taxes with pass-through entities and its application to the Chilean case By Javiera Barrera; Eduardo Moreno; Sebastian Varas
  8. Q3-D3-LSA By Lukas Borke; Wolfgang K. Härdle
  9. In Praise of (Some) Red Tape: A New Approach to Regulation By Gordon Menzies; Peter Dixon; Maureen Rimmer
  10. Onset of financial instability studied via agent-based models By Yi-Fang Liu; Jorgen Vitting Andersen; Philippe De Peretti
  11. Scheduling parallel batching machines in a sequence By Ward Passchyn; Frits Spieksma
  12. Simple Forecasting Heuristics that Make us Smart: Evidence from Different Market Experiments By Mikhail Anufriev; Cars Hommes; Tomasz Makarewicz
  13. Divergent behavior in markets with idiosyncratic private information By David Goldbaum
  14. Networks formation to assist decision making By David Goldbaum

  1. By: Nicola Bianchessi (Johannes Gutenberg University Mainz); Stefan Irnich (Johannes Gutenberg University Mainz)
    Abstract: The Split Delivery Vehicle Routing Problem with Time Windows (SDVRPTW) is a notoriously hard combinatorial optimization problem. First, it is hard to find a useful compact Mixed-Integer Programming (MIP) formulation for the SDVRPTW. Standard modeling approaches either suffer from inherent symmetries (MIPs with a vehicle index) or cannot exactly capture all aspects of feasibility. Because customers may be visited more than once, the standard mechanisms for propagating load and time along the routes fail. Second, the lack of useful formulations has rendered any direct MIP-based approach impossible. Up to now, the most effective exact algorithms for the SDVRPTW have been branch-and-price-and-cut approaches using a path-based formulation. In this paper, we propose a new, tailored branch-and-cut algorithm to solve the SDVRPTW. It is based on a new relaxed compact model in which some integer solutions are infeasible for the SDVRPTW. We use known classes of valid inequalities, and introduce some new ones, in order to cut off such infeasible solutions. One new class is path-matching constraints, which generalize infeasible-path constraints. However, even with the valid inequalities, some integer solutions to the new compact formulation remain to be tested for feasibility. For a given integer solution, we build a generally sparse subnetwork of the original instance. On this subnetwork, all time-window-feasible routes can be enumerated, and a path-based residual problem is then solved in order to decide on the selection of routes, the delivery quantities, and thereby the overall feasibility. All infeasible solutions need to be cut off. For this reason, we derive some strengthened feasibility cuts exploiting the fact that solutions often decompose into clusters. Computational experiments show that the new approach is able to prove optimality for several previously unsolved instances from the literature.
    Keywords: Vehicle routing problem, split delivery, time windows, valid inequalities
    Date: 2016–10–26
  2. By: Gao, Lin
    Abstract: This paper extends the concept of interaction platforms and explores, from a microscopic perspective, the evolution of interaction and cooperation supported by individuals’ changing trust and trustworthiness on a directed weighted regular ring network, using agent-based modeling. The agent-based model integrates the following considerations via a relatively careful experimental design: 1) trust is easily destroyed and harder to build (Slovic, 1993); 2) trustworthiness may be reflected in both strategy decisions and payoff structure decisions; 3) individuals can decide whether or not to be involved in an interaction; 4) interaction density exists not only between neighbors and strangers (Macy and Skvoretz, 1998) but also within neighborhoods; 5) information diffuses. In this model, the marginal rate of exploitation of the original payoff matrix and the relative exploitation degree between two payoff matrices are stressed for their influence on trust destruction; the influence of observing is introduced via an imagined strategy; relationships are maintained through a relationship maintenance strength; and so on. The paper treats the number of immediate neighbors, the degree of embeddedness in the social network, the mutation probability of the payoff matrix, the mutated payoff matrix, the proportion of high-trust agents, and the probabilities of information diffusion within the neighborhood and among non-neighbors as important aspects of interaction platforms, and probes the influence of each factor against a baseline simulation.
    Keywords: Trust, trustworthiness, directed weighted regular ring network, agent-based modeling, marginal rate of exploitation, relative exploitation degree, imagined strategy, relationship maintenance strength, number of neighbors, degree of embeddedness in social network, mutation of payoff matrix, information diffusion, social mobility, institutional quality, evolution of interaction, evolution of cooperation
    JEL: B52 C63 D82 D85
    Date: 2016–11–21
  3. By: Gahler, Daniel; Hruschka, Harald
    Abstract: We develop an exploration-exploitation algorithm which solves the allocation of a fixed resource (e.g., a budget or a sales force size) to several units (e.g., sales districts or customer groups) with the objective of attaining maximum sales. The algorithm does not require knowledge of the form of the sales response function and is also able to cope with additive random disturbances, which as a rule are a component of sales response functions estimated by econometric methods. We compare the algorithm to three rules of thumb which are often used in practice for this allocation problem. The comparison is based on a Monte Carlo simulation with five replications of 192 experimental constellations, obtained from four function types, four procedures (i.e., the three rules of thumb and the algorithm), similar/varied elasticities, similar/varied saturations, and three error levels. A statistical analysis of the simulation results shows that the algorithm performs better than the three rules of thumb if the objective is to maximize sales across several periods. We also mention several more general marketing decision problems which could be solved by appropriate modifications of the algorithm presented.
    Keywords: Marketing Resource Allocation; Exploration-Exploitation Algorithm; Monte Carlo Simulation; Optimization
    JEL: M30 C61 C63
    Date: 2016–11–17
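    The exploration-exploitation idea can be illustrated with a minimal hill-climbing sketch in Python. This is a generic illustration, not the authors' algorithm: the response function `resp`, its concave form, the noise level, and the step size are all hypothetical.

```python
import random

def allocate(total, n_units, sales, periods=200, step=0.05, seed=1):
    """Explore by shifting a small share of the budget between two
    random units; exploit by keeping the move only if observed
    (noisy) total sales improve."""
    rng = random.Random(seed)
    x = [total / n_units] * n_units           # start from an even split
    best = sum(sales(i, xi) for i, xi in enumerate(x))
    for _ in range(periods):
        i, j = rng.sample(range(n_units), 2)
        delta = step * total
        if x[i] < delta:                      # keep allocations nonnegative
            continue
        trial = list(x)
        trial[i] -= delta
        trial[j] += delta
        s = sum(sales(k, xk) for k, xk in enumerate(trial))
        if s > best:
            x, best = trial, s
    return x

# Hypothetical concave response with additive disturbances;
# elasticity rises with the unit index.
rng = random.Random(0)
resp = lambda i, xi: (i + 1) * xi ** 0.5 + rng.gauss(0, 0.1)
result = allocate(100.0, 4, resp)
```

The sketch never estimates the response function; it only compares noisy sales observations, which is the sense in which the approach is "model-free".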
  4. By: Ali Enami (Tulane University, U.S.A.); Nora Lustig (Tulane University, U.S.A.); Alireza Taqdiri (University of Akron, U.S.A.)
    Abstract: Using the Iranian Household Expenditure and Income Survey for 2011/12, we apply the marginal contribution approach to determine the impact and effectiveness of each fiscal intervention, and of the fiscal system as a whole, on inequality and poverty. Net taxes (combined direct and indirect taxes and transfers) reduce the Gini coefficient by 0.0644 points and the poverty headcount ratio by 61 percent. Based on the magnitudes of the marginal contributions, we find that the main driver of this reduction in inequality is the Targeted Subsidy Program (TSP), a universal cash transfer program implemented in 2010 to compensate individuals for the elimination of energy subsidies. The main reduction in poverty occurs in rural areas, where the headcount ratio declines from 44 to 23 percent, as opposed to urban areas, which experience a more modest reduction from 13 to 5 percent. Taxes and transfers are similar in their effectiveness in achieving their inequality-reducing potential. By achieving 40 percent of its inequality-reducing potential, the income tax is the most effective intervention on the revenue side. On the spending side, Social Assistance transfers are the most effective programs, achieving 45 percent of their potential. While taxes are especially effective in raising revenue without causing poverty to rise, transfer programs in general, and the TSP in particular, are not effective in reducing poverty since the bulk of them are not targeted toward the poor. Using simulation, we show how policy makers can improve the effectiveness of the TSP in reducing poverty.
    Keywords: Inequality, poverty, marginal contribution, CEQ framework, policy simulation.
    JEL: D31 H22 I38
    Date: 2016–10
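    The headline inequality numbers here are Gini reductions, which are easy to reproduce in miniature. The sketch below computes the Gini coefficient from a list of incomes; the incomes and the flat-tax-plus-lump-sum fiscal system are invented, chosen only to show an inequality-reducing intervention, and have no relation to the paper's Iranian data.

```python
def gini(incomes):
    """Gini coefficient, G = sum_{i,j} |x_i - x_j| / (2 n^2 mean),
    computed in O(n log n) via the sorted-data identity."""
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * n * mean)

# Hypothetical market incomes and a stylized fiscal system:
# a 20% flat tax financing an equal lump-sum transfer.
market = [10.0, 20.0, 30.0, 100.0, 200.0]
transfer = 0.2 * sum(market) / len(market)
post = [0.8 * y + transfer for y in market]
reduction = gini(market) - gini(post)
```

Because the post-fisc income is an increasing affine map with a positive intercept, the Gini coefficient must fall, which is the toy analogue of the marginal contribution measured in the paper.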
  5. By: Ki Wai Chau; Cornelis W. Oosterlee
    Abstract: We propose a numerical algorithm for backward stochastic differential equations based on time discretization and trigonometric wavelets. This method combines the effectiveness of Fourier-based methods and the simplicity of a wavelet-based formula, resulting in an algorithm that is both accurate and easy to implement. Furthermore, we mitigate the problem of errors near the computation boundaries by means of an antireflective boundary technique, giving an improved approximation. We test our algorithm with different numerical experiments.
    Date: 2016–11
  6. By: Giri, Federico; Riccetti, Luca; Russo, Alberto; Gallegati, Mauro
    Abstract: An accommodating monetary policy followed by a sudden increase of the short-term interest rate often leads to a bubble burst and an economic slowdown. Two examples are the Great Depression of 1929 and the Great Recession of 2008. Through the implementation of an agent-based model with a financial accelerator mechanism, we are able to study the relationship between monetary policy and large-scale crisis events. The main results can be summarized as follows: a) sudden and sharp increases of the policy rate can generate recessions; b) after a crisis, returning too soon and too quickly to a normal monetary policy regime can generate a “double dip” recession; while c) keeping the short-term interest rate anchored to the zero lower bound in the short run can successfully avoid a further slowdown.
    Keywords: Monetary Policy, Large Crises, Agent Based Model, Financial Accelerator, Zero Lower Bound
    JEL: E32 E44 E58 C63
    Date: 2016
  7. By: Javiera Barrera; Eduardo Moreno; Sebastian Varas
    Abstract: Income tax systems with pass-through entities transfer a firm's income to its shareholders, who are taxed individually. In 2014, a Chilean tax reform introduced this type of entity and changed to an accrual basis that distributes incomes (but not losses) to shareholders. A crucial step for the Chilean taxation authority is to compute the final income of each individual, given the complex network of corporations and companies, which usually includes cycles. In this paper, we formalize the problem mathematically and solve it, proving that there is only one way to distribute incomes to taxpayers. Using the theory of absorbing Markov chains, we define a mathematical model for computing the taxable income of each taxpayer, and we propose a decomposition algorithm for the problem. This allows us to compute the solution accurately and with efficient use of computational resources. Finally, we present some characteristics of the Chilean taxpayers' network and computational results of the algorithm on this network.
    Date: 2016–09
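    The core computation described here — pushing firms' incomes through an ownership network with cycles until everything reaches individual taxpayers, the absorbing states of the chain — can be sketched by iterated propagation. This is a toy illustration of the absorbing-chain idea, not the authors' decomposition algorithm; the two-firm network, the shares, and the incomes below are invented.

```python
def final_incomes(own, firm_income, n_people, tol=1e-12, max_iter=10_000):
    """Distribute firms' incomes to individual taxpayers.
    own[f] maps owner id -> ownership share of firm f; ids < n_people
    are individuals (absorbing states), id n_people + g is firm g."""
    pending = list(firm_income)              # income still held by firms
    people = [0.0] * n_people
    for _ in range(max_iter):
        nxt = [0.0] * len(pending)
        for f, inc in enumerate(pending):
            for owner, share in own[f].items():
                amount = inc * share
                if owner < n_people:
                    people[owner] += amount          # absorbed by a taxpayer
                else:
                    nxt[owner - n_people] += amount  # passed to another firm
        pending = nxt
        if sum(pending) < tol:               # cycles decay geometrically
            break
    return people

# Invented two-firm cycle: F0 is 50% person 0 / 50% firm F1,
# F1 is 60% person 1 / 40% firm F0; only F0 earns income.
own = [{0: 0.5, 3: 0.5}, {1: 0.6, 2: 0.4}]
result = final_incomes(own, [100.0, 0.0], n_people=2)
```

Because each pass around the cycle keeps only a fraction of the income inside the firm sector, the residual shrinks geometrically, so the totals absorbed by individuals converge and sum to the original firm income.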
  8. By: Lukas Borke; Wolfgang K. Härdle
    Abstract: QuantNet is an integrated web-based environment consisting of different types of statistics-related documents and program codes. Its goal is to create reproducibility and offer a platform for sharing validated knowledge native to the social web. To increase information retrieval (IR) efficiency, semantic information must be incorporated. Three text mining models are examined: the vector space model (VSM), the generalized VSM (GVSM), and latent semantic analysis (LSA). LSA has been successfully used for IR purposes as a technique for capturing semantic relations between terms and inserting them into the similarity measure between documents. Our results show that different model configurations allow adapted similarity-based document clustering and knowledge discovery. In particular, different LSA configurations together with hierarchical clustering give good results under M3 evaluation. QuantNet and the corresponding Data-Driven Documents (D3) based visualization can be found and applied online. The driving technology behind it is Q3-D3-LSA, the combination of the “GitHub API based QuantNet Mining infrastructure in R”, LSA, and a D3 implementation.
    JEL: C87 C88 G17 D3
    Date: 2016–11
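    The plain VSM baseline mentioned in the abstract is easy to sketch with tf-idf weights and cosine similarity; LSA would additionally project these vectors onto a low-rank subspace via a singular value decomposition, which is omitted here. The three toy documents are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Vector space model with tf-idf weighting, as sparse dicts.
    (LSA would further apply a truncated SVD to this term-document matrix.)"""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse tf-idf vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["wavelet option pricing", "wavelet pricing methods",
        "agent based simulation"]
vecs = tfidf_vectors(docs)
```

In the pure VSM, documents with no shared terms get similarity zero; capturing such "semantic" closeness despite disjoint vocabularies is exactly what the LSA projection adds.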
  9. By: Gordon Menzies (Economics Discipline Group, University of Technology, Sydney); Peter Dixon (Victoria University); Maureen Rimmer (Victoria University)
    Abstract: The costs of removing red tape include a lower chance of detecting recession-generating flaws in the financial system. What we call independent dimensions of regulation (IDRs) operate more or less independently of other groupings. If an IDR’s optimality is unknown, it may be risky to remove. Uncertainty thus implies that (some) red tape – i.e. a small amount of overregulation – is justified, in contrast to the Brainard (1967) principle that uncertainty dictates less policy activism. The long-run GDP benefit of a 1% improvement in financial services productivity is 0.06% in our CGE model. These relatively modest gains reinforce our conclusion.
    Keywords: Banking; Regulation; Monitoring; Financial Crises; Independent dimension of regulation
    JEL: G01 G18 G28
    Date: 2016–03–01
  10. By: Yi-Fang Liu (College of Management and Economics - Tianjin University); Jorgen Vitting Andersen (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Philippe De Peretti (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique)
    Abstract: The mere complexity of scenarios which could lead to the onset of financial market instability seems to demand new tools, in particular concerning the role of human decision-making during crises. Here we present agent-based models that could provide new insights into the way periods of market turmoil unfold. We illustrate the method through a well-controlled setup in a series of experiments. We are thereby able to: i) validate the impact of model parameters and test their relevance by predicting the average outcome of an experiment; and ii) consider each individual experiment and predict outcomes through a scenario analysis. These illustrations should show the appeal of the method in applications to real market situations.
    Keywords: complex systems, systemic risk, agent-based modeling
    Date: 2016
  11. By: Ward Passchyn; Frits Spieksma
    Abstract: Motivated by the application of scheduling a sequence of locks along a waterway, we consider a scheduling problem where multiple parallel batching machines are arranged in a sequence and process jobs that travel along this sequence. We investigate the computational complexity of this problem. More specifically, we show that minimizing the sum of completion times is strongly NP-hard, even for two identical machines and when all jobs travel in the same direction. A second NP-hardness result is obtained for a different special case where jobs all travel at an identical speed. Additionally, we introduce a class of so-called synchronized schedules, and investigate special cases where the existence of an optimum solution which is synchronized can be guaranteed. Finally, we reinforce the claim that bi-directional travel contributes fundamentally to the computational complexity of this problem by describing a polynomial time procedure for a setting with identical machines and where all jobs travel in the same direction at equal speed.
    Keywords: Machine scheduling, Complexity, Parallel batching machine, Machine sequence
    Date: 2016–11
  12. By: Mikhail Anufriev (Economics Discipline Group, University of Technology, Sydney); Cars Hommes (CeNDEF, University of Amsterdam); Tomasz Makarewicz (CeNDEF, University of Amsterdam)
    Abstract: We study a model in which individual agents use simple linear first-order price forecasting rules, adapting them to the complex evolving market environment with a smart Genetic Algorithm optimization procedure. The novelties are: (1) a parsimonious experimental foundation of individual forecasting behaviour; (2) an explanation of individual and aggregate behavior in four different experimental settings; (3) improved one-period and 50-period-ahead forecasting of lab experiments; and (4) a characterization of the mean, median and empirical distribution of forecasting heuristics. The median of the distribution of GA forecasting heuristics can be used in designing or validating a simple Heuristic Switching Model.
    Keywords: Expectation Formation; Learning to Forecast Experiment; Genetic Algorithm Model of Individual Learning
    JEL: C53 C63 C91 D03 D83 D84
    Date: 2015–07–13
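    A first-order forecasting heuristic of the kind these experiments elicit can be sketched in a few lines: anchor on recent prices and extrapolate the latest price change. The anchor-and-trend form is standard in this literature, but the coefficients below are illustrative, not the paper's GA estimates.

```python
def forecast(prices, alpha=0.65, beta=0.9):
    """First-order heuristic: anchor between the last observed price
    and the historical average, then extrapolate the latest change.
    alpha and beta are illustrative weights, not estimated values."""
    anchor = alpha * prices[-1] + (1 - alpha) * sum(prices) / len(prices)
    trend = beta * (prices[-1] - prices[-2])
    return anchor + trend

# Trend-following: on a rising price path the rule forecasts a
# further rise, which is how such heuristics amplify bubbles.
flat_forecast = forecast([50.0, 50.0, 50.0])
rising_forecast = forecast([48.0, 49.0, 50.0])
```

With beta near one the rule chases trends, and a population of such forecasters can generate the oscillating or bubbling price paths seen in learning-to-forecast experiments.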
  13. By: David Goldbaum (Economics Discipline Group, University of Technology, Sydney)
    Abstract: Perpetually evolving divergent trading strategies are the natural consequence of a market with idiosyncratic private information. In the face of intrinsic uncertainty about other traders' strategies, participants resort to learning and adaptation to identify and exploit profitable trading opportunities. Model-consistent use of market-based information generally improves price performance but can inadvertently produce episodes of sudden mispricing. The paper examines the impact of traders' use of information and bounded rationality on price efficiency.
    Keywords: Heterogeneous Agents; Efficient Markets; Learning; Dynamics; Computational Economics
    JEL: G14 C62 D82
    Date: 2016–02–22
  14. By: David Goldbaum (Economics Discipline Group, University of Technology, Sydney)
    Abstract: This paper examines network formation among a connected population with a preference for conformity and leadership. Agents build stable personal relationships to achieve coordinated actions. The network serves as a repository of collective experiences so that cooperation can emerge from simple, myopic, self-serving adaptation to recent events despite the competing impulses of conformity and leadership and despite limiting individuals to only local information. Computational analysis reveals how behavioral tendencies impact network formation and identifies locally stable disequilibrium structures.
    Keywords: Leader; Dynamic Network; Payoff interdependence; Social Interaction; Simulation
    JEL: D85 D71 C71
    Date: 2016–04–29

This nep-cmp issue is ©2016 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.