nep-cmp New Economics Papers
on Computational Economics
Issue of 2005‒11‒19
23 papers chosen by
Stan Miles
York University

  1. Financial Computational Intelligence By Chiu-Che Tseng; Yu-Chieh Lin
  2. Numerical Analysis of Asymmetric First Price Auctions By Wayne-Roy Gayle
  3. Evolution with Individual and Social Learning in an Agent-Based Stock Market By Ryuichi YAMAMOTO
  4. A SNCP Method for Solving Equilibrium Problems with Equilibrium Constraints By Che-Lin Su
  5. User-Friendly Parallel Computations with Econometric Examples By Michael Creel
  6. A Malliavin-based Monte-Carlo Approach for Numerical Solution of Stochastic Control Problems: Experiences from Merton's Problem By Simon Lysbjerg Hansen
  7. Aging, pension reform, and capital flows: A multi-country simulation model By Axel Boersch-Supan; Alexander Ludwig
  8. SOCIODYNAMICA: An agent based computer simulation studying the interacting web of biological, social and economic behaviors By Klaus Jaffe
  9. Teaching to do economics with the computer By Kurt Schmidheiny; Harris Dellas
  10. An Agent-Based Model of Mortality Shocks, Intergenerational Effects, and Urban Crime By Michael D. Makowsky
  11. Operational risk management and new computational needs in banks By Duc PHAM-HI
  12. Agent-based simulation of power exchange with heterogeneous production companies By Silvano Cincotti; Eric Guerci
  13. Time Varying Sensitivities on a GRID architecture By Mattia Ciprian; Stefano d'Addona
  14. Growth Effects of Age-related Productivity Differentials in an Ageing Society. A Simulation Study for Austria By Hofer, Helmut; Url, Thomas
  15. An Agent-Based Computational Laboratory for Testing the Economic Reliability of Wholesale Power Market Designs By Deddy Koesrindartoto; Junjie Sun
  16. An Analysis on Simulation Models of Competing Parties By Jie-Shin Lin
  17. Designing large value payment systems: an agent based approach By Jing Yang; Sheri Markose; Amadeo Alentorn
  18. Economic Effects of Free Trade between the EU and Russia By Pekka Sulamaa; Mika Widgrén
  19. Information Visualization Of An Agent-Based Financial System Model By Wei Jiang; Richard Webber & Ric D Herbert
  20. Multi-core CPUs, Clusters and Grid Computing: a Tutorial By William L. Goffe; Michael Creel
  21. Emergence in multi-agent systems: Cognitive hierarchy, detection, and complexity reduction By Jean Louis Dessalles; Denis Phan
  22. A dissimilarity-based approach for Classification By Carrizosa, Emilio; Martín-Barragán, Belén; Plastria, Frank; Romero Morales, Dolores
  23. Multi-Step Perturbation Solution of Nonlinear Rational Expectations Models By Baoline Chen; Peter A. Zadrozny

  1. By: Chiu-Che Tseng; Yu-Chieh Lin
    Abstract: Artificial intelligence decision support systems are a popular means of providing humans with optimized decision recommendations when operating under uncertainty in complex environments. The particular focus of our discussion is to compare different artificial intelligence decision support methods in the investment domain. The goal of investment decision-making is to select an optimal portfolio that satisfies the investor’s objective, or, in other words, to maximize the investment returns under the constraints given by investors. In this study we apply several artificial intelligence systems, such as Influence Diagrams (a special type of Bayesian network), Decision Trees and Neural Networks, and compare them experimentally to help users intelligently select the best portfolio.
    Keywords: Artificial intelligence, neural network, decision tree, Bayesian network
    JEL: C45
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:42&r=cmp
  2. By: Wayne-Roy Gayle
    Abstract: We develop a powerful and user-friendly program for numerically solving first price auction problems where an arbitrary number of bidders draw independent valuations from heterogeneous distributions and the auctioneer imposes a reserve price for the object. The heterogeneity in this model arises both from the specification of ex-ante heterogeneous, non-uniform distributions of private values for bidders, as well as from the possibility of subsets of these bidders colluding. The technique extends the work of Marshall, Meurer, Richard, and Stromquist (1994), who applied backward recursive Taylor series expansion techniques to solve two-player asymmetric first price auctions under uniform distributions. The algorithm is also used to numerically investigate whether revenue equivalence between first price and second price auctions in symmetric models extends to the asymmetric case. In particular, we simulate the model under various environments and find evidence that under the assumption of first order stochastic dominance, the first price auction generates higher expected revenue to the seller, while the second price auction is more susceptible to collusive activities. However, when the assumption of first order stochastic dominance is relaxed, and the distributions of private values cross once, the evidence suggests that the second price auction may in some cases generate higher expected revenue to the seller.
    Keywords: Asymmetric, Optimal Reserve, Ex-ante Heterogeneity
    JEL: D44 C63 C72
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:472&r=cmp
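The revenue-equivalence benchmark that the paper departs from can be illustrated with a small Monte Carlo sketch. This is not the paper's Taylor-series algorithm; it assumes the symmetric case of n bidders with i.i.d. uniform values, where the first-price equilibrium bid b(v) = (n-1)/n * v is known in closed form:

```python
import random

def simulate_revenues(n_bidders=3, n_draws=200_000, seed=1):
    """Monte Carlo check of revenue equivalence in the symmetric
    benchmark: n bidders with i.i.d. U(0,1) values.  In the first-price
    auction the equilibrium bid is b(v) = (n-1)/n * v; in the
    second-price auction bidders bid their value, so the seller
    receives the second-highest value."""
    rng = random.Random(seed)
    fp_total = sp_total = 0.0
    for _ in range(n_draws):
        values = sorted(rng.random() for _ in range(n_bidders))
        fp_total += (n_bidders - 1) / n_bidders * values[-1]  # winner's bid
        sp_total += values[-2]                                # 2nd-highest value
    return fp_total / n_draws, sp_total / n_draws
```

For three bidders both auctions should yield expected revenue (n-1)/(n+1) = 0.5; the asymmetric case, where no closed form exists, is what the paper's numerical method addresses.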
  3. By: Ryuichi YAMAMOTO (International Business School Brandeis University)
    Abstract: Recent research has employed a variety of computational techniques to describe evolution in an artificial stock market. These techniques can be distinguished by the level at which the learning of agents is modeled. The previous literature describes learning at either the individual or the social level. The level of learning is exogenously given, and agents engage in only that particular level of learning when they update their rules. But such a setting says nothing about why agents choose a particular level of learning to update their trading rules. This paper introduces a learning mechanism which allows agents to choose one rule at each period from a set of ideas updated through both individual and social learning. A trading strategy that performed well in the past is more likely to be selected by agents, regardless of whether it was created at the individual or the social level. This framework allows agents to choose a decision rule endogenously from a wider set of ideas. With such evolution, the following two questions are examined. First, since agents who have a wider set of ideas to choose from are more intelligent, a question arises as to whether the time series from an economy with intelligent agents converges to a rational expectations equilibrium (REE). Previous literature such as LeBaron (2000) and Arthur et al. (1996) investigates the convergence property to the REE by looking at different time horizons. It finds that the more information agents get from the market before updating their rules, the more likely the market is to converge to the REE. This paper instead investigates the convergence property by looking at different degrees of intelligence given a time horizon. The second question is which level of learning is likely to dominate in the market. This is analyzed by investigating who chooses which level of learning and what proportion of the agents uses individual or social learning. We analyze the hypothesis that wealthy agents often choose an idea from their set of private ideas (individual learning) while those with less wealth frequently imitate ideas from others (social learning). The results indicate that the agent-based stock market in this paper may explain the mechanism of the herding behavior often observed in financial markets.
    Keywords: Individual learning; Social learning; Evolution; Asset pricing; Financial time series
    JEL: G12 G14 D83
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:228&r=cmp
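The endogenous choice among individually and socially generated rules described above might be sketched as a fitness-proportional draw over a combined pool; the pooling and weighting scheme here is an illustrative assumption, not the paper's exact specification:

```python
import random

def choose_rule(private_rules, social_rules, performance, rng):
    """Fitness-proportional choice over a combined pool of ideas:
    rules from the agent's own set (individual learning) and rules
    imitated from other agents (social learning) compete on equal
    terms, weighted by their past performance."""
    pool = private_rules + social_rules
    weights = [max(performance[r], 1e-9) for r in pool]
    return rng.choices(pool, weights=weights, k=1)[0]
```

A rule that performed well is then selected often regardless of its origin, which is exactly the mechanism the abstract uses to let the level of learning emerge endogenously.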
  4. By: Che-Lin Su
    Abstract: This paper studies algorithms for equilibrium problems with equilibrium constraints (EPECs). We present a generalization of Scholtes’s regularization scheme for MPECs and extend his convergence results to this new relaxation method. We propose a sequential nonlinear complementarity (SNCP) algorithm to solve EPECs and establish the convergence of this algorithm. We present numerical results comparing the SNCP algorithm and diagonalization (nonlinear Gauss-Seidel and nonlinear Jacobi) methods on randomly generated EPEC test problems. The computational experience to date shows that both the SNCP algorithm and the nonlinear Gauss-Seidel method outperform the nonlinear Jacobi method.
    Keywords: Multi-leader Multi-follower games, equilibrium problems, nonlinear complementarity problems
    JEL: C63 C72
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:150&r=cmp
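The diagonalization methods used as comparators can be sketched on a toy two-player game. The code below applies nonlinear Gauss-Seidel and nonlinear Jacobi iteration to Cournot best responses rather than to an actual EPEC, purely to show the difference between the two update orders:

```python
def diagonalize(best_response, q0, method="gauss-seidel", iters=100):
    """Diagonalization for a two-player game: repeatedly solve each
    player's problem holding the rival fixed.  Gauss-Seidel feeds the
    freshly updated rival strategy forward; Jacobi updates both
    players from the previous iterate."""
    q1, q2 = q0
    for _ in range(iters):
        if method == "gauss-seidel":
            q1 = best_response(q2)   # uses rival's OLD strategy
            q2 = best_response(q1)   # uses rival's NEW strategy
        else:  # jacobi: both players update from the previous iterate
            q1, q2 = best_response(q2), best_response(q1)
    return q1, q2

# Illustrative best response: Cournot duopoly q_i = (a - c - q_j) / 2
# with a - c normalized to 1, so the equilibrium is q1 = q2 = 1/3.
cournot = lambda rival: (1.0 - rival) / 2.0
```

Both orderings converge here because the best-response map is a contraction; the paper's point is that on harder (EPEC) problems the Jacobi variant is the weakest of the three methods compared.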
  5. By: Michael Creel
    Abstract: This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented such that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
    Keywords: parallel computing; maximum likelihood; GMM; Monte Carlo
    JEL: C13 C14 C15 C63
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:445&r=cmp
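The embarrassingly parallel structure of the Monte Carlo application can be sketched in a few lines. The paper's implementation targets Octave on clusters; this Python multiprocessing version only illustrates the same scatter/gather pattern, with a trivial statistic standing in for an econometric one:

```python
from multiprocessing import Pool
import random

def one_replication(seed):
    """One Monte Carlo replication: the mean of 1,000 uniform draws
    stands in for an arbitrary econometric statistic.  Each worker
    gets its own seed so replications are independent."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000)) / 1000

def parallel_monte_carlo(n_reps=100, n_workers=4):
    """Scatter independent replications across worker processes and
    gather the results -- the master/worker pattern the paper hides
    behind a user-friendly interface."""
    with Pool(n_workers) as pool:
        results = pool.map(one_replication, range(n_reps))
    return sum(results) / n_reps
```

Because the replications share no state, speedup is limited mainly by the cost of scattering inputs and gathering results, which is why the paper reports large gains for simulation-heavy problems.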
  6. By: Simon Lysbjerg Hansen (Accounting and Finance University of Southern Denmark)
    Abstract: The problem of choosing optimal investment and consumption strategies has been widely studied. In continuous-time theory the pioneering work of Merton (1969) is a standard reference. In his work, Merton studied a continuous-time economy with constant investment opportunities. Since then Merton's problem has been extended in many ways to capture empirically observed investment and consumption behavior. As more realism is incorporated into a model, the problem of optimal investment and consumption becomes harder to solve. Only rarely can analytical solutions be found, and only for problems possessing nice characteristics. To solve problems lacking analytical solutions we must apply numerical methods. Many realistic problems, however, are difficult to solve even numerically, due to their dimensionality. The purpose of this paper is to present a numerical procedure for solving high-dimensional stochastic control problems arising in the study of optimal portfolio choice. For expositional reasons we develop the algorithm in one dimension, but the mathematical results needed can be generalized to a multi-dimensional setting. The starting point of the algorithm is an initial guess about the agent's investment and consumption strategies at all times and wealth levels. Given this guess it is possible to simulate the wealth process until the investment horizon of the agent. We exploit the dynamic programming principle to break the problem into a series of smaller one-period problems, which can be solved recursively backwards. To be specific, we determine first-order conditions relating the optimal controls to the value function in the next period. Starting from the final date, we numerically solve the first-order conditions for all simulated paths iteratively backwards. The investment and consumption strategies resulting from this procedure are used to update the simulated wealth paths, and the procedure can be repeated until it converges. The numerical properties of the algorithm are analyzed by testing it on Merton's optimal portfolio choice problem, whose solution is explicitly known and can therefore serve as a benchmark for the algorithm. Our results indicate that it is possible to obtain some sort of convergence for both the initial controls and the distribution of their future values. Bearing in mind that we intend to apply the algorithm in a multi-dimensional setting, we also consider the complications that might arise. However, the added state variables will in most cases be exogenous non-controllable processes, which do not complicate the optimization routine in the proposed algorithm. Problems with computer storage could arise, but they should be solvable with clever computer programming.
    JEL: C15 G11
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:391&r=cmp
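The benchmark referred to above has a well-known closed form: for CRRA utility with relative risk aversion gamma, constant drift mu, volatility sigma and risk-free rate r, Merton's optimal fraction of wealth in the risky asset is (mu - r) / (gamma * sigma^2). A one-line sketch of the quantity a numerical solver would be checked against:

```python
def merton_weight(mu, r, sigma, gamma):
    """Closed-form optimal constant fraction of wealth held in the
    risky asset in Merton's problem with CRRA utility: the mean excess
    return divided by risk aversion times return variance."""
    return (mu - r) / (gamma * sigma ** 2)
```

For example, with an 8% drift, 2% risk-free rate, 20% volatility and risk aversion 3, the optimal risky share is 0.06 / (3 * 0.04) = 50% of wealth.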
  7. By: Axel Boersch-Supan; Alexander Ludwig
    Abstract: We present a quantitative analysis of the effects of population aging and pension reform on international capital markets. First, demographic change alters the time path of aggregate savings within each country. Second, this process may be amplified when a pension reform shifts old-age provision towards more pre-funding. Third, while the patterns of population aging are similar in most countries, timing and initial conditions differ substantially. Hence, to the extent that capital is internationally mobile, population aging will induce capital flows between countries. All three effects influence the rate of return to capital and interact with the demand for capital in production and with labor supply. In order to quantify these effects, we develop a computational general equilibrium model. We feed this multi-country overlapping generations model with detailed long-term demographic projections for seven world regions. Our simulations indicate that capital flows from fast-aging regions to the rest of the world will initially be substantial but that trends are reversed when households decumulate savings. We also conclude that closed-economy models of pension reform miss quantitatively important effects of international capital mobility.
    Keywords: aging; pension reform; capital mobility
    JEL: E27 F21 G15
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:123&r=cmp
  8. By: Klaus Jaffe
    Abstract: Sociodynamics is an interdisciplinary attempt to study the dynamics of complex systems within the conceptual frame of subjects spanning biology, sociology, politics, history, economics and other sciences. For this purpose, the agent-based computer simulation Sociodynamica has been developed to study the effect of attitudes and behaviors on aggregate wealth accumulation and other macro-economic parameters in artificial societies. The model simulates a continuous two-dimensional toroidal world through which different types of agents interact in an economically meaningful environment. The simulations test for the effect of different financial structures, such as barter, money, banks and derivatives, on the ability of the virtual system to produce and accumulate wealth. Sociodynamica allows exploring the effect of heterogeneous distributions of labor, different types of organizations, variable properties of natural resources, different altruistic, emotive or rational behaviors of agents, and other features on the economic dynamics of the system. The results can be compared with known economic phenomena to test the robustness of the assumptions used. The simulations help us understand and quantify the relevance of different interactions that occur at the micro-economic level to the outcome of macroeconomic variables. Sociodynamica is proposed as an analytically useful metaphor for a complex poly-ethic society of agents living in a free competitive market. Some concrete examples of the effects of altruism, division of labor and banks on macroeconomic variables are provided. Two important results achieved so far are: 1- a precise differentiation between altruism and social investment that helps clarify divergences in the ongoing discussion of the subject among physicists, ecologists, game theorists, computer scientists, ethologists and economists; and 2- a demonstration that the optimal behavior of agents differs across economic environments. Specifically, optimal behavior in undifferentiated hunter-gatherer economies, in agricultural societies, and in highly labor-differentiated technical societies differs greatly with regard to optimal levels of mutual cooperation and other basic behaviors.
    Keywords: agent simulation micro-macro behavior
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:283&r=cmp
  9. By: Kurt Schmidheiny; Harris Dellas (Department of Economics Tufts University)
    Abstract: This paper presents the course "Doing Economics with the Computer", which we have taught since 1999 at the University of Bern, Switzerland. "Doing Economics with the Computer" is a course we designed to introduce sophomores playfully and painlessly to computational economics. Computational methods are usually used in economics to analyze complex problems which are impossible (or very difficult) to solve analytically. Our course, however, only looks at economic models which can (easily) be solved analytically. This approach has two advantages: First, relying on economic theory students have met in their first year, we can introduce numerical methods at an early stage. This stimulates students to use computational methods later in their academic career when they encounter difficult problems. Second, the confrontation with the analytical solution convincingly shows both the power and the limits of numerical methods. Our course introduces students to three types of software: a spreadsheet with a simple optimizer (Excel with Solver), numerical computation (Matlab) and symbolic computation (Maple). The course consists of 10 sessions, each taught as a 3-hour lecture. In the 1st part of each session we present the economic problem, sometimes its analytical solution, and introduce the software used. The 2nd part, in the computer lab, starts the numerical implementation with step-by-step guidance. In this part, students work on exercises with clearly defined questions and precise guidance for their implementation. The 3rd part is a workshop, where students work in groups on exercises with still clearly defined questions but no help with their implementation. This part teaches students how to handle numerical questions in practice within a well-defined framework. The 4th part of a session is a graded take-home assignment where students are asked to answer general economic questions. This part teaches students how to translate general economic questions into a numerical task and back into an economically meaningful answer. A short debriefing in the following week is part 5 and completes each session.
    JEL: A22 C63
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:63&r=cmp
  10. By: Michael D. Makowsky (Economics George Mason University)
    Abstract: This paper presents an agent-based model of urban crime, mortality, and exogenous population shocks. Agent decision making is built around a career maximization function, with life expectancy as the key independent variable. Individual rationality is bounded by locally held information, creating a strong delineation between an objective and subjective reality. The effects of population shocks are explored using the Crime and Mortality Simulation (CAMSIM), with effects demonstrated to persist across generations. The potential for social simulation as a tool for the integration of theory across multiple disciplines is explored. CAMSIM is available via the web for future research by modelers and other social scientists.
    Keywords: Agent-based modeling, urban geography, crime
    JEL: J24 K42 R0
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:91&r=cmp
  11. By: Duc PHAM-HI (Systemes Informations & Finance Ecole Centrale Electronique)
    Abstract: Basel II banking regulation introduces new needs for computational schemes. They involve both optimal stochastic control and large-scale simulations of decision processes for preventing low-frequency, high-loss-impact events. This paper first states the problem and presents its parameters. It then spells out the equations that represent rational risk-management behavior and link the variables together: Levy processes are used to model operational risk losses where calibration by historical loss databases is possible; where it is not, qualitative variables such as the quality of the business environment and internal controls can provide both cost-side and profit-side impacts. Other control variables include the business growth rate and the efficiency of risk mitigation. The economic value of a policy is maximized by solving the resulting Hamilton-Jacobi-Bellman type equation. Computational complexity arises from embedded interactions between 3 levels: * programming a globally optimal dynamic expenditure budget in the Basel II context; * arbitraging between the cost of risk-reduction policies (as measured by organizational qualitative scorecards and insurance buying) and the impact of the incurred losses themselves, which implies modeling the efficiency of the process through which forward-looking threat-minimization measures can actually reduce stochastic losses; * and optimal allocation according to profitability across subsidiaries and business lines. The paper next reviews the different types of approaches that can be envisaged in deriving a sound budgetary policy solution for operational risk management based on this HJB equation. It is argued that while this complex, high-dimensional problem can be solved with some usual simplifications (a Galerkin approach, imposing Merton-form solutions, a viscosity approach, ad hoc utility functions that provide closed-form solutions, etc.), the main interest of this model lies in exploring the scenarios in an adaptive learning framework (MDP, partially observed MDP, Q-learning, neuro-dynamic programming, greedy algorithms, etc.). This makes more sense from a management point of view, and solutions are more easily communicated to, and accepted by, operational-level staff in banks through the explicit scenarios that can be derived. This kind of approach combines different computational techniques such as POMDPs, stochastic control theory and learning algorithms under uncertainty and incomplete information. The paper concludes by presenting the benefits of such a consistent computational approach to managing budgets, as opposed to a policy of operational risk management made up of disconnected expenditures. Such consistency satisfies the qualifying criteria for banks to apply for the AMA (Advanced Measurement Approach), which will allow large economies of regulatory capital charge under the Basel II Accord.
    Keywords: Operational risk management, HJB equation, Levy processes, budget optimization, capital allocation
    JEL: G21
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:355&r=cmp
  12. By: Silvano Cincotti (University of Genoa DIBE); Eric Guerci
    Abstract: Since the early nineties, worldwide production and distribution of electricity has been characterized by progressive liberalization. State-owned monopolistic production of electricity has been replaced by organized power exchanges (PEs). PEs are markets which aggregate the effective supply and demand of electricity. Usually the spot-price market is a Day Ahead Market (DAM), which is used to provide an indication for the hourly unit commitment. This first session of the complex daily energy market collects and orders all the offers, determining the market price by matching the cumulative demand and supply curves for every hour of the following day according to a merit-order rule. Subsequent market sessions (also online) operate to guarantee the feasibility and security of this plan. The electricity market is usually characterized by a reduced number of competitors, so oligopolistic scenarios may arise. Understanding how electricity prices depend on the oligopolistic behavior of suppliers and on production costs has become a very important issue. Several restructuring designs for the electric power industry have been proposed. The main goal is to increase overall market efficiency by studying, developing and applying different market mechanisms. Auction design is the standard domain for commodity markets. However, the properties of different auction mechanisms must be studied and determined correctly before they are applied. Generally speaking, different approaches have been proposed in the literature. Game-theoretic analysis has provided an extremely useful methodology for studying and deriving properties of economic "games" such as auctions. Within this context, an interesting computational approach for studying market inefficiencies is the theory of learning in games. This methodology is useful in the context of infinitely repeated games. This paper investigates the nature of the clearing mechanism by comparing two different methods, i.e., discriminatory and uniform auctions. The theoretical framework used to perform the analysis is the theory of learning in games. We consider an inelastic demand faced by sellers who use learning algorithms to discover proper strategies for increasing their profits. We model the auction mechanism in two different duopolistic scenarios, i.e., a low-demand situation, where one seller can clear all the demand, and a high-demand condition, where both sellers are needed. Moreover, heterogeneity in the linear cost function is considered. Consistent results are achieved with two different learning algorithms.
    Keywords: Agent-based simulation; power-exchange market; market power, reinforcement learning, electricity production costs
    JEL: C73 L1 L94
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:334&r=cmp
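The learning side of such simulations is often implemented with a reinforcement rule. The Roth-Erev style update below is a standard choice in auction experiments and is shown only as a plausible stand-in; the abstract does not name the paper's exact algorithms:

```python
import random

def roth_erev_update(propensities, chosen, payoff, forgetting=0.1):
    """One Roth-Erev style reinforcement step: decay all action
    propensities by a forgetting factor, then reinforce the action
    just played by its realized payoff."""
    for a in propensities:
        propensities[a] *= (1 - forgetting)
    propensities[chosen] += payoff
    return propensities

def choose_action(propensities, rng):
    """Choose a bid/strategy with probability proportional to its
    current propensity."""
    actions = list(propensities)
    weights = [propensities[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]
```

A seller agent repeatedly chooses a bid, observes its auction payoff, and reinforces; over many rounds profitable bids dominate, which is how "learning in games" probes the discriminatory vs. uniform clearing rules.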
  13. By: Mattia Ciprian (mciprian@gmail.com); Stefano d'Addona (sd2123@columbia.edu)
    Abstract: We estimate time-varying risk sensitivities on a wide range of stock portfolios in the US market. We empirically test, on the 1926-2004 monthly CRSP database, a classic one-factor model augmented with a time-varying specification of betas. Using a Kalman filter based on a genetic algorithm, we show that the model is able to explain a large part of the variability of stock returns. Furthermore, we run a risk-management application on a GRID computing architecture. By estimating a parametric Value at Risk, we show how GRID computing offers an opportunity to enhance the solution of computationally demanding problems with decentralized data retrieval.
    JEL: G
    Date: 2005–11–16
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpfi:0511007&r=cmp
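A scalar Kalman filter for a one-factor model with a random-walk beta, as described above, can be sketched as follows; the fixed noise variances q and r stand in for the hyperparameters the paper tunes with a genetic algorithm:

```python
def kalman_beta(returns, factors, q=1e-4, r=1e-2, beta0=1.0, p0=1.0):
    """Scalar Kalman filter for the one-factor model
        r_t = beta_t * f_t + e_t,   beta_t = beta_{t-1} + w_t,
    where q = Var(w_t) is the state noise and r = Var(e_t) the
    observation noise.  Returns the filtered beta path."""
    beta, p = beta0, p0
    path = []
    for ret, f in zip(returns, factors):
        p = p + q                           # predict: random-walk state
        k = p * f / (f * f * p + r)         # Kalman gain
        beta = beta + k * (ret - beta * f)  # update with forecast error
        p = (1 - k * f) * p
        path.append(beta)
    return path
```

On noiseless data with a constant true beta, the filtered estimate converges to that beta, which is the sanity check one would run before estimating on CRSP returns.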
  14. By: Hofer, Helmut (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria); Url, Thomas (Austrian Institute of Economic Research)
    Abstract: We integrate age specific productivity differentials into a long-run neoclassical growth model for the Austrian economy with a highly disaggregated labor supply structure. We assume two life time productivity profiles reflecting either small or large hump-shaped productivity differentials and compute an average labor productivity index using three different aggregation functions: linear, Cobb-Douglas, and a nested Constant Elasticity of Substitution (CES). Model simulations with age specific productivity differentials are compared to a base scenario with uniform productivity over age groups. Depending on the aggregation function, the simulation results show only negligible or small negative effects on output and other macroeconomic key variables.
    Keywords: Age specific productivity, Demographic change, Model simulation
    JEL: O41 J11 E17
    Date: 2005–11
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:179&r=cmp
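The aggregation step can be illustrated with a CES index over age groups; the shares, productivities and elasticity below are illustrative, not the paper's Austrian calibration:

```python
def ces_productivity_index(shares, productivities, sigma):
    """CES-style aggregate of age-group productivities with elasticity
    of substitution sigma (rho = (sigma - 1) / sigma).  Linear and
    Cobb-Douglas aggregation arise as the limits sigma -> infinity and
    sigma -> 1, which this direct formula does not cover."""
    rho = (sigma - 1.0) / sigma
    return sum(s * p ** rho for s, p in zip(shares, productivities)) ** (1.0 / rho)
```

With labor shares summing to one and identical group productivities, the index reduces to that common productivity level regardless of sigma; hump-shaped profiles then shift the index as the age distribution shifts, which is the channel the simulations trace.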
  15. By: Deddy Koesrindartoto (Economics Iowa State University); Junjie Sun
    Abstract: In April 2003 the U.S. Federal Energy Regulatory Commission proposed the Wholesale Power Market Platform (WPMP) for common adoption by all U.S. wholesale power markets. The WPMP is a complicated market design envisioning day-ahead, real-time, and ancillary service markets maintained and operated by an independent system operator or regional transmission organization. Variants of the WPMP have been implemented or accepted for implementation in several regions of the U.S. However, strong opposition to the WPMP still persists in many regions due in part to a perceived lack of adequate reliability testing. This presentation will report on the development of an agent-based computational laboratory for testing the economic reliability of the WPMP market design. The computational laboratory incorporates several core elements of the WPMP design as actually implemented in March 2003 by the New England independent system operator (ISO-NE) for the New England wholesale power market. Specifically, our modeled wholesale power market operates over a realistically rendered AC transmission grid. Computationally rendered generator agents (bulk electricity sellers) and load-serving entity agents (bulk electricity buyers) repeatedly bid into the day-ahead and real-time markets using the same protocols as actual ISO-NE market participants. In each trading period the agents use reinforcement learning to update their bids on the basis of past experience. We are using our agent-based computational laboratory to test the extent to which the core WPMP protocols are capable of sustaining efficient, orderly, and fair market outcomes over time despite attempts by market participants to gain individual advantage through strategic pricing, capacity withholding, and induced transmission congestion. This presentation will report on some of our initial experimental findings.
    Keywords: Agent-based computational economics; Wholesale power market design; Learning agents
    JEL: L1 L5 L94 C6 C7
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:50&r=cmp
  16. By: Jie-Shin Lin (Public Policy and Management I-Shou University)
    Abstract: Downs's spatial theory of elections (1957) has occupied a prominent theoretical status within political science. Studies use a notion of ideological distance to develop explanations for observable electoral trends. In elections, voters observe party ideologies and use this information to decide their votes, because voters do not always have enough information to appraise the differences of which they are aware. The Downsian idea suggests that parties’ efforts to attract votes lead them to adopt a median position. However, many studies have questioned this result and reached many different conclusions. In recent years there has been increasing interest in learning and adaptive behaviour, including simulation models. In this study, we model the dynamics of competing parties who make decisions in an evolving environment and construct simulation models of party competition. We illustrate and compare their consequences by analyzing two variants of computational models.
    Keywords: Spatial Voting Model, Party Competition, Evolutionary Modelling, Learning
    JEL: Z
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:284&r=cmp
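The Downsian convergence result that these simulation models probe can be reproduced with a toy hill-climbing dynamic: two office-seeking parties nudge their platforms in whichever direction gains votes. This is a deliberately naive sketch, not either of the paper's two computational models:

```python
def vote_share(platform, rival, voters):
    """Fraction of voters strictly closer to `platform` than to `rival`."""
    return sum(abs(v - platform) < abs(v - rival) for v in voters) / len(voters)

def downsian_competition(voters, p1, p2, steps=500, h=0.01):
    """Each round, each party greedily tests a small move in either
    direction and keeps whichever position maximizes its vote share
    against the rival's current platform."""
    for _ in range(steps):
        p1 = max((p1 - h, p1, p1 + h), key=lambda x: vote_share(x, p2, voters))
        p2 = max((p2 - h, p2, p2 + h), key=lambda x: vote_share(x, p1, voters))
    return p1, p2
```

Starting from symmetric extremes against a uniform electorate, both platforms drift to the median voter, illustrating the baseline result that the paper's learning and evolutionary variants then complicate.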
  17. By: Jing Yang; Sheri Markose; Amadeo Alentorn
    Abstract: In this paper, we report on the main building blocks of an ongoing project to develop a computational agent-based simulator for a generic real-time large-value interbank payment system with a central processor that can implement different rules for payment settlement. The main types of payment system in their polar forms are Real Time Gross Settlement (RTGS) and Deferred Net Settlement (DNS). DNS generates large quantities of settlement risk; in contrast, the elimination of settlement risk in RTGS comes with excessive demands for liquidity on banks. This could lead them to adopt various delaying tactics to minimise liquidity needs, with free-riding and other ‘bad’ equilibria as potential outcomes. The introduction of hybrid systems with real-time netting is viewed as a means by which liquidity costs can be reduced while settlement risk is unchanged. Proposed reforms of settlement rules make it imperative to have a methodology to assess the efficiency of the different variants along three dimensions: the cost of liquidity to the individual banks and the system as a whole, settlement risk at both bank and system levels, and how early in the day payments are processed, since this proxies the impact of an operational incident. In this paper, we build a simulator for interbank payments capable of handling real-time payment records along with autonomous bank behaviour and show that it can be used to evaluate different payment system designs against these three criteria.
    Keywords: Agent-based modeling; Real Time Gross Settlement; Deferred Net Settlement; Agent-based simulation; Payment Concentration; Liquidity; Systemic Risk
    JEL: H30 G21
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:396&r=cmp
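The RTGS versus DNS liquidity trade-off described in the abstract can be made concrete with a back-of-the-envelope sketch. The payment flows below are invented for illustration, and the RTGS figure uses the worst case in which no incoming funds can be recycled; this is not the authors' simulator.

```python
# Hypothetical day of interbank payments: (payer, payee, amount).
payments = [
    ("A", "B", 50), ("B", "A", 40), ("A", "C", 30),
    ("C", "B", 20), ("B", "C", 60), ("C", "A", 10),
]
banks = {"A", "B", "C"}

# RTGS: every payment settles individually, so in the worst case each
# bank needs liquidity to cover its total gross outflow.
rtgs_need = {b: sum(a for p, _, a in payments if p == b) for b in banks}

# DNS: only the end-of-day net position is settled.
net = {b: 0 for b in banks}
for payer, payee, amount in payments:
    net[payer] -= amount
    net[payee] += amount
dns_need = {b: max(0, -v) for b, v in net.items()}

print(rtgs_need)  # gross outflows per bank: {'A': 80, 'B': 100, 'C': 30}
print(dns_need)   # only net debtors need liquidity: {'A': 30, 'B': 30, 'C': 0}
```

The gap between the two dictionaries is the liquidity saving that netting buys, and the exposure that builds up until end-of-day settlement is the settlement risk that RTGS eliminates.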
  18. By: Pekka Sulamaa (ETLA, the Research Institute of the Finnish Economy); Mika Widgrén (ETLA, the Research Institute of the Finnish Economy)
    Abstract: This study simulates the economic effects of eastern enlargement of the EU and an EU-Russian free trade area. The main emphasis of the paper is on the effect this would have on the Russian economy. The simulations were carried out with a GTAP computable general equilibrium model, using the most recent GTAP database 6.0 beta, which takes the former Europe agreements between the EU-15 and the eight new Central and Eastern European member states into account. The results confirm the earlier findings that a free trade agreement with the EU is beneficial for Russia in terms of total output but not necessarily in terms of economic welfare when measured by equivalent variation. The main reason behind this is the deterioration that would occur in Russia’s terms of trade. Improved productivity in Russia would, however, make the free trade agreement with the EU advantageous.
    Keywords: EU, Russia, free trade, integration
    JEL: F15 F17
    Date: 2005–05
    URL: http://d.repec.org/n?u=RePEc:epr:enepwp:036&r=cmp
  19. By: Wei Jiang (Faculty of Science and I.T., University of Newcastle, Australia); Richard Webber; Ric D. Herbert
    Abstract: This paper considers the application of information visualization techniques to an agent-based model of a financial system. The minority game is a simple agent-based model which can be used to simulate the events in a real-world financial market. To aid understanding of this model, we can apply information visualization techniques. Treemap and sunburst are two such information visualization techniques, which previous research tells us can effectively represent information similar to that generated by the minority game. Another information visualization technique, called logical fisheye-lens, can be used to augment treemap and sunburst, allowing users to magnify areas of interest in these visualizations. In this paper, treemap and sunburst, both with and without fisheye-lens, are applied to the minority game, and their effectiveness is evaluated. This evaluation is carried out through an analysis of users performing various tasks on (simulated) financial market data using the visualization techniques. A subjective questionnaire is also used to measure the users’ impressions of the visualization techniques.
    Keywords: Dynamic Models, Minority Game, Visualization
    JEL: C63 C73
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:468&r=cmp
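For readers unfamiliar with the minority game the paper visualizes, the following is a minimal sketch of the standard game (Challet-Zhang style): agents with memory-based strategy tables repeatedly try to join the minority side. The parameter values are arbitrary, and this is not the authors' implementation.

```python
import random

random.seed(1)

N, MEMORY, ROUNDS = 101, 3, 500   # odd N so a minority always exists

# Each agent holds two random strategies: lookup tables mapping the last
# MEMORY outcomes (packed into an integer) to an action (0 = sell, 1 = buy).
def random_strategy():
    return {h: random.randint(0, 1) for h in range(2 ** MEMORY)}

agents = [{"strats": [random_strategy(), random_strategy()],
           "scores": [0, 0]} for _ in range(N)]

history = 0
minority_sizes = []
for _ in range(ROUNDS):
    actions = []
    for ag in agents:
        best = 0 if ag["scores"][0] >= ag["scores"][1] else 1
        actions.append(ag["strats"][best][history])
    buys = sum(actions)
    winning = 1 if buys < N - buys else 0   # minority side wins
    minority_sizes.append(min(buys, N - buys))
    for ag in agents:                        # reward strategies that chose the minority
        for s in (0, 1):
            if ag["strats"][s][history] == winning:
                ag["scores"][s] += 1
    history = ((history << 1) | winning) % (2 ** MEMORY)

print(sum(minority_sizes) / ROUNDS)  # average minority size, at most N // 2
```

The per-round series of buys and minority sizes is exactly the kind of hierarchical, time-evolving data that the treemap and sunburst visualizations in the paper are meant to make legible.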
  20. By: William L. Goffe; Michael Creel
    Abstract: The nature of computing is changing, and this poses both challenges and opportunities for economists. Instead of increasing clock speed, future microprocessors will have "multi-cores" with separate execution units. "Threads" or other multi-processing techniques, rarely used today, are required to take full advantage of them. Beyond one machine, it has become easy to harness multiple computers to work in clusters. Besides dedicated clusters, these can be made up of unused lab computers or even your colleagues' machines. We give live demos of multi-core and cluster computing and describe grid computing (multiple clusters that could span the Internet). OpenMP (open multi-processing) and MPI (message passing interface) are among the topics described and shown live.
    Keywords: parallel computing, clusters, grid computing
    JEL: C63
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:438&r=cmp
  21. By: Jean Louis Dessalles (CREM CNRS UMR 6211 University of Rennes I,); Denis Phan
    Abstract: This paper provides a formal definition of emergence that is operative in a multi-agent framework and makes sense from both a cognitive and an economic point of view. The first part discusses the ontological and epistemic dimensions of emergence and provides a complementary set of definitions. Following Bonabeau and Dessalles, emergence is defined as an unexpected decrease in relative algorithmic complexity (RAC). The RAC of a system measures the complexity of the shortest description that a given observer can give of the system, relative to the description tools available to that observer. Emergence occurs when RAC abruptly drops by a significant amount, i.e. the system appears much simpler than anticipated. Following Muller, we call strong emergence a situation in which the agents involved in the emerging phenomenon are able to perceive it. Strong emergence is particularly important in economic modelling, because the behaviour of agents may be recursively influenced by their perception of emerging properties. Emerging phenomena in a population of agents are expected to be richer and more complex when agents have enough cognitive abilities to perceive the emergent patterns. Our aim here is to design a minimal setting in which this kind of “strong emergence” unambiguously takes place. In part II, we design a model of strong emergence as an extension of Axtell et al. In the basic model, agents tend to correlate their fellows’ behaviour with fortuitous visible but meaningless characteristics. On some occasions, these fortuitous tags turn out to be reliable indicators of dominant and submissive behaviour in an iterative Nash bargaining tournament. One limit of this model is that dominant and submissive classes remain implicit within the system. As a consequence, classes only emerge in the eye of external observers.
In the enhanced model, individuals may deliberately choose to display a tag after observing that they are regularly dominated by other agents who display that tag. Tag display is constrained by the fact that displaying agents must endure a cost. Agents get an explicit representation of the dominant class whenever that class emerges, thus implementing strong emergence. This phenomenon results from a double-level emergence. As in the initial model, dominant and submissive strategies may emerge through amplification of fortuitous differences in agents’ personal experiences. We add the possibility of a second level of emergence, where a tag is explicitly used by agents to announce their intention to adopt a dominant strategy. Costly signalling (Spence; Zahavi; Gintis, Smith and Bowles) is an essential feature of this extended model. Qualities are not objective, but correspond to an emerging de facto ranking of individuals. Without strong emergence, endogenous signalling allows possible inversion in the class regime, while with strong emergence class behaviour may become a stochastically stable regime.
    Keywords: adaptive complex systems, agent based computational economics, behavioural learning in games, cognitive hierarchy, complexity, detection, emergence, population games, signalling, stochastic stability
    JEL: B41 C73 C88 D83
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:257&r=cmp
  22. By: Carrizosa,Emilio; Martín-Barragán,Belén; Plastria,Frank; Romero Morales,Dolores (METEOR)
    Abstract: The Nearest Neighbor classifier has been shown to be a powerful tool for multiclass classification. In this note we explore both theoretical properties and empirical behavior of a variant of this method, in which the Nearest Neighbor rule is applied after selecting a set of so-called prototypes, whose cardinality is fixed in advance, by minimizing the empirical misclassification cost. This alleviates the two serious drawbacks of the Nearest Neighbor method: high storage requirements and time-consuming queries. The problem is shown to be NP-Hard. Mixed Integer Programming (MIP) formulations are developed, theoretically compared and solved by a standard MIP solver for problem instances of small size. Large instances are solved by a metaheuristic yielding good classification rules in reasonable time.
    Keywords: operations research and management science
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2005045&r=cmp
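The paper solves prototype selection exactly via MIP; as an illustrative stand-in only, the sketch below picks a fixed-cardinality prototype set greedily so as to minimise empirical misclassification under the 1-NN rule. The data points and the greedy heuristic are invented for this example and are not the authors' method.

```python
# Toy 1-D, two-class data set (feature, label); all values are made up.
data = [(0.1, "a"), (0.3, "a"), (0.4, "a"),
        (1.1, "b"), (1.3, "b"), (1.6, "b")]

def nn_label(x, prototypes):
    """Label of the prototype nearest to point x."""
    return min(prototypes, key=lambda p: (p[0] - x) ** 2)[1]

def errors(prototypes, data):
    """Empirical misclassification count of the 1-NN rule on these prototypes."""
    return sum(nn_label(x, prototypes) != y for x, y in data)

k = 2  # cardinality of the prototype set, fixed in advance
chosen = []
for _ in range(k):
    # greedily add the point whose inclusion minimises training error
    best = min((p for p in data if p not in chosen),
               key=lambda p: errors(chosen + [p], data))
    chosen.append(best)

print(chosen, errors(chosen, data))
```

With two prototypes replacing six stored points, queries compare against k points instead of the full sample, which is precisely the storage-and-query saving the abstract describes; the MIP replaces the greedy loop with an exact optimization.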
  23. By: Baoline Chen; Peter A. Zadrozny
    Abstract: This paper develops and illustrates the multi-step perturbation (MSP) method, a generalization of the standard single-step perturbation (SSP) method. In SSP, we can think of evaluating at x the computed approximate solution based on x0 as moving from x0 to x in "one big step" along the straight-line vector x-x0. By contrast, in MSP we move from x0 to x along any chosen path, continuous, curved-line or connected-straight-line, in h steps of equal length 1/h. If at each step we apply SSP, Taylor-series theory says that the approximation error per step is O(h^(-k-1)), so that the total approximation error in moving from x0 to x in h steps is O(h^(-k)). Thus, MSP has two major advantages over SSP. First, the accuracy of both SSP and MSP declines as the approximation point, x, moves away from the initial point, x0, but only in MSP can the decline be countered by increasing h. Increasing k is much more costly than increasing h, because increasing k requires new derivations of derivatives, more computer programming, more computer storage, and more computer run time; by contrast, increasing h generally requires only more computer run time, and often only slightly more. Second, in SSP the initial point is usually a nonstochastic steady state but can sometimes also be set up in function space as the known exact solution of a close but simpler model. This "closeness" of a related, simpler, and known solution can be exploited much more explicitly by MSP when moving from x0 to x. In MSP, the state space could include parameters, so that the initial point, x0, would represent the simpler model with the known solution, and the final point, x, would continue to represent the model of interest. Then, as we move from the initial x0 to the final x in h steps, the state variables and parameters move together from their initial to final values and the model being solved varies continuously from the simple model to the model of interest.
Both advantages of MSP facilitate repeatedly, accurately, and quickly solving a nonlinear rational-expectations (NLRE) model in an econometric analysis over a range of data values, which could differ enough from the nonstochastic steady state of the model of interest to render computed SSP solutions, for a given k, inadequately accurate. In the present paper, we extend the derivation of SSP to MSP for k = 4. As before, we use a mixture of gradient and differential-form differentiations to derive the MSP computational equations in conventional linear-algebraic form and illustrate them with a version of the stochastic optimal one-sector growth model.
    Keywords: numerical solution of dynamic stochastic equilibrium models
    JEL: C32 C61 C63
    Date: 2005–11–11
    URL: http://d.repec.org/n?u=RePEc:sce:scecf5:254&r=cmp
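The single-step versus multi-step accuracy trade-off is easy to see on a scalar function standing in for a model solution. The sketch below, which is only an analogy and not the authors' method, approximates log(x) with an order-k Taylor polynomial either in one big step from x0 = 1 or in h equal steps, re-expanding at each intermediate point.

```python
import math

def taylor_log_step(x0, f0, dx, k):
    """Order-k Taylor update of log around x0, given log(x0) ~= f0.

    Uses d^n/dx^n log(x) = (-1)^(n-1) (n-1)! / x^n, so the n-th Taylor
    term is (-1)^(n-1) dx^n / (n x0^n).
    """
    return f0 + sum(((-1) ** (n - 1)) * dx ** n / (n * x0 ** n)
                    for n in range(1, k + 1))

def msp_log(x0, x, k, h):
    """Move from x0 to x in h equal steps, applying order-k Taylor each step."""
    f, cur = 0.0, x0          # log(1) = 0 at the initial point
    step = (x - x0) / h
    for _ in range(h):
        f = taylor_log_step(cur, f, step, k)
        cur += step
    return f

x0, x, k = 1.0, 3.0, 4
single = abs(msp_log(x0, x, k, 1) - math.log(x))   # one big step (SSP analogue)
multi = abs(msp_log(x0, x, k, 10) - math.log(x))   # ten small steps (MSP analogue)
print(single, multi)  # more steps, far smaller error at the same order k
```

The per-step remainder shrinks like step^(k+1) while the number of steps grows only like h, mirroring the abstract's point that raising h buys accuracy with run time alone, whereas raising k requires new derivative derivations.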

This nep-cmp issue is ©2005 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.