
on Computational Economics 
By:  Graubner, Marten 
Abstract:  The paper presents a detailed documentation of the underlying concepts and methods of the Spatial Agent-based Competition Model (SpAbCoM). For instance, SpAbCoM is used to study firms' choices of spatial pricing policy (GRAUBNER et al., 2011a) or pricing and location under a framework of multi-firm spatial competition and two-dimensional markets (GRAUBNER et al., 2011b). While the simulation model is briefly introduced by means of relevant examples within the corresponding papers, the present paper serves two objectives. First, it presents a detailed discussion of the computational concepts that are used, particularly with respect to genetic algorithms (GAs). Second, it documents SpAbCoM and provides an overview of the structure of the simulation model and its dynamics. 
Keywords:  Agent-based modelling, genetic algorithms, spatial pricing, location model. 
JEL:  Y90 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:zbw:iamodp:135&r=cmp 
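Since the paper's first stated objective is a detailed discussion of genetic algorithms, a minimal canonical GA may help fix ideas. Everything below (tournament selection, one-point crossover, bit-flip mutation, the parameter values and the binary encoding) is a generic textbook sketch, not SpAbCoM's actual implementation.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      p_cross=0.7, p_mut=0.01, seed=1):
    """Canonical GA over a binary-encoded decision variable in [0, 1]."""
    rng = random.Random(seed)

    def decode(bits):  # map a bitstring to a real number in [0, 1]
        return sum(b << i for i, b in enumerate(bits)) / (2 ** n_bits - 1)

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(decode(ind)), ind) for ind in pop]

        def tournament():  # pick the fitter of two random individuals
            a, b = rng.sample(scored, 2)
            return list((a if a[0] >= b[0] else b)[1])

        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:  # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):
                    if rng.random() < p_mut:  # bit-flip mutation
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    # return the decoded best individual of the final population
    return decode(max(pop, key=lambda ind: fitness(decode(ind))))
```

In a spatial-competition setting the fitness would be a firm's profit given rivals' strategies; here a simple quadratic stands in for it.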
By:  Carlos Léon; Clara Machado 
Abstract:  Defining whether a financial institution is systemically important (or not) is challenging due to (i) the inevitability of combining complex importance criteria such as institutions’ size, connectedness and substitutability; (ii) the ambiguity of what an appropriate threshold for those criteria may be; and (iii) the involvement of expert knowledge as a key input for combining those criteria. The proposed method, a Fuzzy Logic Inference System, uses four key systemic importance indicators that capture institutions’ size, connectedness and substitutability, and a convenient deconstruction of expert knowledge to obtain a Systemic Importance Index. This method allows for combining dissimilar concepts in a nonlinear, consistent and intuitive manner, whilst treating them as continuous (non-binary) functions. Results reveal that the method imitates the way experts themselves think about the decision process regarding what a systemically important financial institution is within the financial system under analysis. The Index is a comprehensive relative assessment of each financial institution’s systemic importance. It may serve financial authorities as a quantitative tool for focusing their attention and resources where the severity resulting from an institution failing or near-failing is estimated to be the greatest. It may also serve for enhanced policymaking (e.g. prudential regulation, oversight and supervision) and decision-making (e.g. resolving, restructuring or providing emergency liquidity). 
Date:  2011–09–01 
URL:  http://d.repec.org/n?u=RePEc:col:000094:008953&r=cmp 
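The abstract's core idea, combining size, connectedness and substitutability through fuzzy rules that encode expert knowledge, can be sketched as a zero-order Sugeno-type inference system. The membership functions, the rule base and the crisp output levels below are illustrative assumptions, not the authors' calibration.

```python
def mu_low(x):
    """Triangular membership for 'low' on [0, 1]: 1 at 0, falling to 0 at 1."""
    return max(0.0, 1.0 - x)

def mu_high(x):
    """Membership for 'high' on [0, 1]: 0 at 0, rising to 1 at 1."""
    return max(0.0, min(1.0, x))

def systemic_importance(size, connectedness, substitutability):
    """Zero-order Sugeno inference: each hypothetical rule fires with a
    strength (min acts as AND) and maps to a crisp importance level; the
    index is the firing-strength-weighted average of those levels."""
    rules = [
        (min(mu_high(size), mu_high(connectedness)), 1.0),  # big AND connected
        (mu_low(substitutability), 0.8),                    # hard to substitute
        (min(mu_low(size), mu_low(connectedness)), 0.1),    # small AND isolated
    ]
    num = sum(w * y for w, y in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

The weighted-average defuzzification is what makes the index continuous rather than a binary "systemic / not systemic" label.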
By:  Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES) 
Abstract:  In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally identified. Recently, White (2006) presented a solution that amounts to converting the specification and nonlinear estimation problem into a linear model selection and estimation problem. He called this procedure QuickNet, and we shall compare its performance to two other procedures which are built on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models, and here these comparisons are extended to neural networks. Finally, a nonlinear model such as the neural network model is not appropriate if the data are generated by a linear mechanism. Hence, it might be appropriate to test the null of linearity prior to building a nonlinear model. We investigate whether this kind of pretesting improves the forecast accuracy compared to the case where this is not done. 
Keywords:  artificial neural network, forecast comparison, model selection, nonlinear autoregressive model, nonlinear time series, root mean square forecast error, Wilcoxon’s signed-rank test 
JEL:  C22 C45 C52 C53 
Date:  2011–08–26 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201127&r=cmp 
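White's linearisation idea, drawing many candidate logistic hidden units with random input weights and then selecting among them by linear methods, can be illustrated with the stagewise sketch below. This is a simplified stand-in for QuickNet (the actual procedure refits all selected units jointly and uses a more careful selection rule); all parameter ranges are assumptions.

```python
import math, random

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def quicknet_sketch(x, y, n_candidates=50, n_units=3, seed=0):
    """Greedy stagewise selection of logistic hidden units: at each stage,
    draw random candidate units and keep the one whose least-squares fit
    to the current residual reduces the SSE the most. The nonlinear
    estimation problem thus becomes linear selection and estimation."""
    rng = random.Random(seed)
    resid = list(y)
    model = []  # selected units as (gamma0, gamma1, beta)
    for _ in range(n_units):
        best = None
        for _ in range(n_candidates):
            g0, g1 = rng.uniform(-3, 3), rng.uniform(-3, 3)
            h = [logistic(g0 + g1 * xi) for xi in x]
            # OLS slope of the residual on the candidate unit (no intercept)
            denom = sum(hi * hi for hi in h)
            beta = sum(hi * ri for hi, ri in zip(h, resid)) / denom
            sse = sum((ri - beta * hi) ** 2 for ri, hi in zip(resid, h))
            if best is None or sse < best[0]:
                best = (sse, g0, g1, beta, h)
        _, g0, g1, beta, h = best
        model.append((g0, g1, beta))
        resid = [ri - beta * hi for ri, hi in zip(resid, h)]
    return model, resid
```

Each stage can only lower the in-sample SSE, which is exactly why model selection (how many units to keep) becomes the binding question.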
By:  Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES) 
Abstract:  In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. These models are often difficult to estimate, and we follow the idea of White (2006) to transform the specification and nonlinear estimation problem into a linear model selection and estimation problem. To this end we employ three automatic modelling devices. One of them is White's QuickNet, but we also consider Autometrics, well known to time series econometricians, and the Marginal Bridge Estimator, better known to statisticians and microeconometricians. The performance of these three model selectors is compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series of the G7 countries and the four Scandinavian ones, and focus on forecasting during the economic crisis 2007–2009. Forecast accuracy is measured by the root mean square forecast error. Hypothesis testing is also used to compare the performance of the different techniques with each other. 
Keywords:  Autometrics, economic forecasting, Marginal Bridge estimator, neural network, nonlinear time series model, Wilcoxon's signed-rank test 
JEL:  C22 C45 C52 C53 
Date:  2011–08–26 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201128&r=cmp 
By:  Kay Giesecke; Konstantinos Spiliopoulos; Richard B. Sowers; Justin A. Sirignano 
Abstract:  We prove a law of large numbers for the loss from default and use it for approximating the distribution of the loss from default in large, potentially heterogeneous portfolios. The density of the limiting measure is shown to solve a nonlinear SPDE, and the moments of the limiting measure are shown to satisfy an infinite system of SDEs. The solution to this system leads, through an inverse moment problem, to the solution of the SPDE and to the distribution of the limiting portfolio loss, which we propose as an approximation to the loss distribution for a large portfolio. Numerical tests illustrate the accuracy of the approximation, and highlight its computational advantages over a direct Monte Carlo simulation of the original stochastic system. 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1109.1272&r=cmp 
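For context, the "direct Monte Carlo simulation" benchmark that the paper's approximation is compared against can be sketched as below. This toy version assumes independent defaults with a single hypothetical default probability; the paper's setting has richer dynamics and dependence, which is precisely what makes direct simulation expensive.

```python
import random

def simulate_loss_distribution(n_names=1000, p_default=0.02,
                               n_paths=2000, seed=42):
    """Direct Monte Carlo for the portfolio loss from default: on each
    path, draw a default indicator per name and record the fraction of
    the portfolio that defaulted. Returns the sampled loss fractions."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_paths):
        defaults = sum(rng.random() < p_default for _ in range(n_names))
        losses.append(defaults / n_names)
    return losses
```

Note the cost: the work grows with both the number of names and the number of paths, whereas the limiting-measure approximation removes the dependence on portfolio size.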
By:  Ericson, Peter (Sim Solution); Flood, Lennart (Department of Economics, School of Business, Economics and Law, Göteborg University) 
Abstract:  This paper presents estimates of individuals’ responses in hourly wages to changes in marginal tax rates. Estimates based on register panel data of Swedish households covering the period 1992 to 2007 produce significant but relatively small net-of-tax rate elasticities. The results vary with family type, with the largest elasticities obtained for single males and the smallest for married/cohabitant females. Despite these seemingly small elasticities, evaluation of the effects of a reduced state tax using a microsimulation model shows that the effort effect matters. The largest effect is due to changes in the number of working hours, yet including the effort effect results in an almost self-financed reform. As a reference to the earlier literature we also estimate taxable income elasticities. As expected, these are larger than those for the hourly wage rates. However, both specifications produce significant and positive income effects. 
Keywords:  income taxation; hourly wage rates; work effort; microsimulation 
JEL:  D31 H24 J22 J31 
Date:  2011–08–31 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunwpe:0514&r=cmp 
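The mechanics of a net-of-tax rate elasticity, the quantity the paper estimates, fit in one line: the hourly wage responds proportionally to the change in one minus the marginal tax rate, raised to the elasticity. The function and the numbers in the test are illustrative, not the paper's estimates.

```python
def net_of_tax_response(w0, t0, t1, elasticity):
    """Apply a net-of-tax rate elasticity: if the marginal tax rate moves
    from t0 to t1, the wage (effort) response is
    w1 = w0 * ((1 - t1) / (1 - t0)) ** elasticity."""
    return w0 * ((1 - t1) / (1 - t0)) ** elasticity
```

With a positive elasticity, cutting the marginal rate raises the predicted wage, which is the "effort effect" the microsimulation feeds through to the reform's self-financing degree.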
By:  Muffasir Badshah (Department of Finance and Economics, Qatar University, Doha, Qatar); Paul Beaumont (Department of Economics, Florida State University); Anuj Srivastava (Department of Statistics, Florida State University) 
Abstract:  This paper describes an accurate, fast and robust fixed point method for computing the stationary wealth distributions in macroeconomic models with a continuum of infinitely-lived households who face idiosyncratic shocks with aggregate certainty. The household wealth evolution is modeled as a mixture Markov process, and the stationary wealth distributions are obtained from the eigenstructures of the transition matrices, with the conditions of the Perron-Frobenius theorem enforced by adding a perturbation constant to the Markov transition matrix. This step is utilized repeatedly within a binary search algorithm to find the equilibrium state of the system. The algorithm suggests an efficient and reliable framework for studying dynamic stochastic general equilibrium models with heterogeneous agents. 
Keywords:  Numerical solutions, Wealth distributions, Stationary equilibria, DSGE models 
JEL:  C63 D52 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:fsu:wpaper:wp2011_08_02&r=cmp 
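The perturbation trick in the abstract can be sketched directly: adding a small constant to every entry of the transition matrix (and renormalising rows) makes it strictly positive, so the Perron-Frobenius theorem guarantees a unique stationary distribution, which power iteration then finds. This is a generic illustration of that step, not the authors' full algorithm (which nests it inside a binary search for equilibrium prices).

```python
def stationary_distribution(P, eps=1e-8, tol=1e-12, max_iter=100000):
    """Stationary distribution of a Markov transition matrix P (rows sum
    to 1) via power iteration on the eps-perturbed, strictly positive
    matrix Q, for which Perron-Frobenius applies."""
    n = len(P)
    Q = []
    for row in P:  # perturb every entry, then renormalise the row
        r = [p + eps for p in row]
        s = sum(r)
        Q.append([p / s for p in r])
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * Q[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi
```

Because Q is strictly positive, the iteration converges from any starting distribution, which is what makes the step robust enough to call repeatedly inside an equilibrium search.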
By:  Fahim, Arash; Touzi, Nizar; Warin, Xavier 
Abstract:  We consider the probabilistic numerical scheme for fully nonlinear PDEs suggested in [12], and show that it can be introduced naturally as a combination of Monte Carlo and finite-difference schemes without appealing to the theory of backward stochastic differential equations. Our first main result provides the convergence of the discrete-time approximation and derives a bound on the discretization error in terms of the time step. An explicit implementable scheme requires approximating the conditional expectation operators involved in the discretization. This induces a further Monte Carlo error. Our second main result is to prove the convergence of the latter approximation scheme, and to derive an upper bound on the approximation error. Numerical experiments are performed for the approximation of the solution of the mean curvature flow equation in dimensions two and three, and for two- and five-dimensional (plus time) fully nonlinear Hamilton-Jacobi-Bellman equations arising in the theory of portfolio optimization in financial mathematics. 
Keywords:  second-order backward stochastic differential equations; viscosity solutions; monotone schemes; Monte Carlo approximation 
JEL:  C15 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ner:dauphi:urn:hdl:123456789/5524&r=cmp 
By:  Lluís Bermúdez (Departament de Matemàtica Econòmica, Financera i Actuarial. RISC-IREA. University of Barcelona. Spain); Antoni Ferri (Departament d'Econometria, Estadística i Economia Espanyola. RISC-IREA. University of Barcelona. Spain); Montserrat Guillén (Departament d'Econometria, Estadística i Economia Espanyola. RISC-IREA. University of Barcelona. Spain) 
Abstract:  This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation. 
Keywords:  Solvency II, Solvency Capital Requirement, Standard Model, Internal Model, Monte Carlo simulation, Copulas. 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:xrp:wpaper:xreap201112&r=cmp 
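A minimal version of the Internal Model calculation, reduced to two lines of business with Gaussian dependence (the paper also examines other copulas and a full correlation matrix), looks like the sketch below. All distributional parameters are hypothetical; the SCR is taken as the 99.5% quantile of the aggregate loss over its mean, in the spirit of the Solvency II one-year horizon.

```python
import math, random

def scr_estimate(mu, sigma, rho, n_sims=20000, alpha=0.995, seed=7):
    """Monte Carlo SCR for two lines of business: simulate correlated
    normal underwriting results (rho induced via a 2-D Cholesky step),
    take loss = negative aggregate result, and return the alpha-quantile
    of the loss minus its mean (the unexpected loss)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x1 = mu[0] + sigma[0] * z1
        x2 = mu[1] + sigma[1] * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
        totals.append(-(x1 + x2))  # loss is the negative underwriting result
    totals.sort()
    var = totals[int(alpha * n_sims)]
    return var - sum(totals) / n_sims
```

Raising the dependence parameter weakens diversification and pushes the SCR up, which is the sensitivity the paper quantifies on real market data.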
By:  Ronald Heijmans; Richard Heuver 
Abstract:  We develop indicators for signs of liquidity shortages and potential financial problems of banks by studying transaction data of the Dutch part of the European real-time gross settlement system and collateral management data. The indicators give information on 1) the overall liquidity position, 2) the interbank money market, 3) the timing of payment flows, 4) the collateral’s amount and use and 5) signs of a bank run. This information can be used both for monitoring the TARGET2 payment system and for individual banks’ supervision. By studying these data before, during and after stressful events in the crisis, banks’ reaction patterns are identified. These patterns are translated into a set of behavioural rules, which can be used in stress-scenario analyses of payment systems, such as simulations and network-topology studies. In the literature, behaviour and reaction patterns in simulations are either ignored or kept very static. To perform realistic payment system simulations it is crucial to understand how banks react to shocks. 
Keywords:  behaviour of banks; wholesale payment systems; financial stability 
JEL:  D23 E42 E58 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:dnb:dnbwpp:316&r=cmp 
By:  Chahim, M.; Brekelmans, R.C.M.; Hertog, D. den; Kort, P.M. (Tilburg University, Center for Economic Research) 
Abstract:  This paper determines the optimal timing of dike heightenings as well as the corresponding optimal dike heightenings to protect against floods. To derive the optimal policy we design an algorithm based on the Impulse Control Maximum Principle. In this way the paper presents one of the first real-life applications of the Impulse Control Maximum Principle developed by Blaquiere. We show that the proposed Impulse Control approach performs better than Dynamic Programming with respect to computational time, because Impulse Control does not need discretization in time. 
Keywords:  Impulse Control Maximum Principle; optimal control; flood prevention; dikes; cost-benefit analysis. 
JEL:  C61 D61 H54 Q54 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2011097&r=cmp 
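For intuition about the Dynamic Programming benchmark the authors improve upon, here is a bare-bones backward-induction sketch over a discretised horizon and height grid. The cost functions are hypothetical placeholders; the paper's point is precisely that the impulse-control approach avoids this discretisation in time.

```python
def optimal_heightenings(horizon, heights, invest, damage):
    """Backward-induction DP: value[t][h] is the minimal cost-to-go from
    period t at current dike height index h, choosing each period a
    (possibly zero) heightening. Dikes can only be raised, never lowered."""
    n = len(heights)
    value = [[0.0] * n for _ in range(horizon + 1)]   # terminal values are 0
    policy = [[0] * n for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for h in range(n):
            best, arg = float("inf"), h
            for h2 in range(h, n):  # candidate new height this period
                cost = (invest(heights[h], heights[h2])
                        + damage(heights[h2]) + value[t + 1][h2])
                if cost < best:
                    best, arg = cost, h2
            value[t][h], policy[t][h] = best, arg
    return value, policy
```

Every extra grid point in time or height multiplies the work, which is the computational burden the Impulse Control Maximum Principle sidesteps.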
By:  Boldea, O.; Engwerda, J.C.; Michalak, T.; Plasmans, J.E.J.; Salmah, S. (Tilburg University, Center for Economic Research) 
Abstract:  This paper analyzes some pros and cons of a monetary union for the ASEAN countries, excluding Myanmar. We estimate a stylized open-economy dynamic general equilibrium model for the ASEAN countries. Using the framework of linear quadratic differential games, we contrast the potential gains or losses for these countries due to economic shocks in the cases where they maintain the status quo, coordinate their monetary and/or fiscal policies, or form a monetary union. Assuming open-loop information for all players, we conclude that there are substantial gains from cooperation of monetary authorities. We also find that whether a monetary union improves upon monetary cooperation depends on the type of shocks and the extent of fiscal policy cooperation. Results are based both on a theoretical study of the structure of the estimated model and a simulation study. 
Keywords:  ASEAN economic integration; monetary union; linear quadratic differential games; open-loop information structure. 
JEL:  C61 C71 C72 C73 E17 E52 E61 F15 F42 F47 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2011098&r=cmp 
By:  Paul Beaumont (Department of Economics, Florida State University); Yaniv Jerassy-Etzion (Department of Economics and Management, Ruppin Academic Center) 
Abstract:  We present a simple and fast iterative, linear algorithm for simultaneously stripping the coupon payments from and smoothing the yield curve of the term structure of interest rates. The method minimizes pricing errors, constrains initial and terminal conditions of the curves and produces maximally smooth forward rate curves. 
Keywords:  Term structure of interest rates, yield curve, coupon stripping, curve interpolation 
JEL:  G12 C63 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:fsu:wpaper:wp2011_08_03&r=cmp 
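The coupon-stripping half of the algorithm can be illustrated with the standard bootstrap from par yields (annual coupons, unit face value); the authors' method additionally smooths the forward curve and constrains its endpoints, which this sketch does not attempt.

```python
def bootstrap_discount_factors(par_yields):
    """Iterative coupon stripping: a par bond with maturity m and coupon
    rate c satisfies 1 = c * (d[1] + ... + d[m]) + d[m], so each discount
    factor d[m] can be solved in turn from the already-solved earlier ones."""
    d = []
    for c in par_yields:
        s = sum(d)  # sum of the discount factors solved so far
        d.append((1.0 - c * s) / (1.0 + c))
    return d
```

For a flat par curve at rate c the bootstrap should reproduce d[m] = 1/(1+c)^m exactly, which makes a convenient sanity check.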