New Economics Papers
on Computational Economics
Issue of 2011‒09‒16
thirteen papers chosen by



  1. The Spatial Agent-based Competition Model (SpAbCoM) By Graubner, Martin
  2. Designing an expert knowledge-based Systemic Importance Index for financial institutions By Carlos Léon; Clara Machado
  3. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques By Anders Bredahl Kock; Timo Teräsvirta
  4. Forecasting performance of three automated modelling techniques during the economic crisis 2007-2009 By Anders Bredahl Kock; Timo Teräsvirta
  5. Large Portfolio Asymptotics for Loss From Default By Kay Giesecke; Konstantinos Spiliopoulos; Richard B. Sowers; Justin A. Sirignano
  6. Taxes, Wages and Working Hours By Ericson, Peter; Flood, Lennart
  7. Computing Equilibrium Wealth Distributions in Models with Heterogeneous-Agents, Incomplete Markets and Idiosyncratic Risk By Muffasir Badshah; Paul Beaumont; Anuj Srivastava
  8. A Probabilistic Numerical Method for Fully Nonlinear Parabolic PDEs. By Fahim, Arash; Touzi, Nizar; Warin, Xavier
  9. A correlation sensitivity analysis of non-life underwriting risk in solvency capital requirement estimation By Lluís Bermúdez; Antoni Ferri; Montserrat Guillén
  10. Is this bank ill? The diagnosis of doctor TARGET2 By Ronald Heijmans; Richard Heuver
  11. An Impulse Control Approach to Dike Height Optimization By Chahim, M.; Brekelmans, R.C.M.; Hertog, D. den; Kort, P.M.
  12. A simulation study of an ASEAN Monetary Union (Replaces CentER DP 2010-100) By Boldea, O.; Engwerda, J.C.; Michalak, T.; Plasmans, J.E.J.; Salmah, S.
  13. Computing maximally smooth forward rate curves for coupon bonds: An iterative piecewise quartic polynomial interpolation method By Paul Beaumont; Yaniv Jerassy-Etzion

  1. By: Graubner, Martin
    Abstract: The paper presents a detailed documentation of the underlying concepts and methods of the Spatial Agent-based Competition Model (SpAbCoM). For instance, SpAbCoM is used to study firms' choices of spatial pricing policy (GRAUBNER et al., 2011a) or pricing and location within a framework of multi-firm spatial competition and two-dimensional markets (GRAUBNER et al., 2011b). While the simulation model is briefly introduced by means of relevant examples within the corresponding papers, the present paper serves two objectives. First, it presents a detailed discussion of the computational concepts that are used, particularly with respect to genetic algorithms (GAs). Second, it documents SpAbCoM and provides an overview of the structure of the simulation model and its dynamics.
    Keywords: Agent-based modelling, genetic algorithms, spatial pricing, location model
    JEL: Y90
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:iamodp:135&r=cmp
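
The GA machinery the paper documents can be pictured with a minimal, generic genetic algorithm: tournament selection, one-point crossover and bit-flip mutation on binary chromosomes. The Python sketch below is purely illustrative; its operators, encoding and parameters are not taken from SpAbCoM.

```python
import random

def evolve(fitness, n_genes=8, pop_size=30, generations=100, p_mut=0.05):
    """Generic binary-string GA: tournament selection, one-point
    crossover, bit-flip mutation. Illustrative only; SpAbCoM's actual
    operators are documented in the paper itself."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_genes)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutate
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: number of ones in the chromosome (the "one-max" problem).
print(evolve(fitness=sum))
```
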
  2. By: Carlos Léon; Clara Machado
    Abstract: Defining whether a financial institution is systemically important (or not) is challenging due to (i) the inevitability of combining complex importance criteria such as institutions’ size, connectedness and substitutability; (ii) the ambiguity of what an appropriate threshold for those criteria may be; and (iii) the involvement of expert knowledge as a key input for combining those criteria. The proposed method, a Fuzzy Logic Inference System, uses four key systemic importance indicators that capture institutions’ size, connectedness and substitutability, and a convenient deconstruction of expert knowledge to obtain a Systemic Importance Index. This method allows for combining dissimilar concepts in a non-linear, consistent and intuitive manner, whilst considering them as continuous (non-binary) functions. Results reveal that the method imitates the way experts themselves think about the decision process regarding what a systemically important financial institution is within the financial system under analysis. The Index is a comprehensive relative assessment of each financial institution’s systemic importance. It may serve financial authorities as a quantitative tool for focusing their attention and resources where the severity resulting from an institution failing or near-failing is estimated to be the greatest. It may also serve for enhanced policy-making (e.g. prudential regulation, oversight and supervision) and decision-making (e.g. resolving, restructuring or providing emergency liquidity).
    Date: 2011–09–01
    URL: http://d.repec.org/n?u=RePEc:col:000094:008953&r=cmp
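
As a flavour of how a fuzzy inference system combines continuous criteria, here is a toy two-rule Mamdani-style inference in Python. The membership functions, rules and output levels are invented for illustration; the paper's index uses four indicators and an expert-derived rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def systemic_importance(size, connectedness, substitutability):
    """Fire two illustrative rules and defuzzify by a weighted average
    of representative output levels. Inputs assumed normalized to [0, 1]."""
    # Rule 1: IF size is HIGH AND connectedness is HIGH -> importance HIGH
    w1 = min(tri(size, 0.4, 1.0, 1.6), tri(connectedness, 0.4, 1.0, 1.6))
    # Rule 2: IF substitutability is HIGH -> importance LOW
    w2 = tri(substitutability, 0.4, 1.0, 1.6)
    high, low = 0.9, 0.1              # representative output levels
    return (w1 * high + w2 * low) / max(w1 + w2, 1e-9)

print(systemic_importance(0.8, 0.7, 0.2))
```
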
  3. By: Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES)
    Abstract: In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally identified. Recently, White (2006) presented a solution that amounts to converting the specification and nonlinear estimation problem into a linear model selection and estimation problem. He called this procedure QuickNet, and we shall compare its performance to two other procedures which are built on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models, and here these comparisons are extended to neural networks. Finally, a nonlinear model such as the neural network model is not appropriate if the data is generated by a linear mechanism. Hence, it might be appropriate to test the null of linearity prior to building a nonlinear model. We investigate whether this kind of pretesting improves the forecast accuracy compared to the case where this is not done.
    Keywords: artificial neural network, forecast comparison, model selection, nonlinear autoregressive model, nonlinear time series, root mean square forecast error, Wilcoxon’s signed-rank test
    JEL: C22 C45 C52 C53
    Date: 2011–08–26
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-27&r=cmp
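
The linearisation idea, drawing many candidate hidden units with random input weights and then choosing among their activations with a linear model-selection device, can be sketched as follows. This greedy forward-selection toy is written in the spirit of QuickNet but is not White's algorithm, nor the Marginal Bridge Estimator or Autometrics; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def quicknet_style_fit(X, y, n_candidates=200, max_units=5):
    """Draw candidate hidden units with random input weights, then
    treat their tanh activations as regressors and add them greedily
    by in-sample fit; a toy stand-in for the linearisation approach."""
    n, k = X.shape
    W = rng.normal(size=(k, n_candidates))     # random input weights
    H = np.tanh(X @ W)                         # candidate activations
    cols, design = [], [np.ones(n)]            # start from an intercept
    for _ in range(max_units):
        best, best_sse = None, np.inf
        for j in range(n_candidates):
            if j in cols:
                continue
            Z = np.column_stack(design + [H[:, j]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            sse = np.sum((y - Z @ beta) ** 2)
            if sse < best_sse:
                best, best_sse = j, sse
        cols.append(best)
        design.append(H[:, best])
    Z = np.column_stack(design)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return cols, beta

X = rng.normal(size=(300, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
print(quicknet_style_fit(X, y)[0])
```
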
  4. By: Anders Bredahl Kock (Aarhus University and CREATES); Timo Teräsvirta (Aarhus University and CREATES)
    Abstract: In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. These models are often difficult to estimate, and we follow the idea of White (2006) to transform the specification and nonlinear estimation problem into a linear model selection and estimation problem. To this end we employ three automatic modelling devices. One of them is White's QuickNet, but we also consider Autometrics, well known to time series econometricians, and the Marginal Bridge Estimator, better known to statisticians and microeconometricians. The performance of these three model selectors is compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series of the G7 countries and the four Scandinavian ones, and focus on forecasting during the economic crisis 2007-2009. Forecast accuracy is measured by the root mean square forecast error. Hypothesis testing is also used to compare the performance of the different techniques with each other.
    Keywords: Autometrics, economic forecasting, Marginal Bridge estimator, neural network, nonlinear time series model, Wilcoxon's signed-rank test
    JEL: C22 C45 C52 C53
    Date: 2011–08–26
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-28&r=cmp
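
Forecast comparisons of this kind reduce to computing root mean square forecast errors and testing paired loss differences with Wilcoxon's signed-rank test. A minimal sketch, assuming SciPy is available and using simulated errors rather than any of the paper's series:

```python
import numpy as np
from scipy.stats import wilcoxon

def rmsfe(errors):
    """Root mean square forecast error."""
    return float(np.sqrt(np.mean(np.square(errors))))

rng = np.random.default_rng(1)
e_a = rng.normal(0, 1.0, size=36)    # errors of forecasting method A
e_b = rng.normal(0, 1.2, size=36)    # errors of method B, same target
print(rmsfe(e_a), rmsfe(e_b))

# Signed-rank test on paired squared-error differences, one common way
# to test equal forecast accuracy (the paper's exact setup may differ).
print(wilcoxon(np.square(e_a) - np.square(e_b)))
```
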
  5. By: Kay Giesecke; Konstantinos Spiliopoulos; Richard B. Sowers; Justin A. Sirignano
    Abstract: We prove a law of large numbers for the loss from default and use it for approximating the distribution of the loss from default in large, potentially heterogenous portfolios. The density of the limiting measure is shown to solve a non-linear SPDE, and the moments of the limiting measure are shown to satisfy an infinite system of SDEs. The solution to this system leads, through an inverse moment problem, to the distribution of the limiting portfolio loss, which we propose as an approximation to the loss distribution for a large portfolio. Numerical tests illustrate the accuracy of the approximation, and highlight its computational advantages over a direct Monte Carlo simulation of the original stochastic system.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1109.1272&r=cmp
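
For contrast with the paper's limiting approximation, the direct Monte Carlo baseline can be sketched in a static one-factor Gaussian model. Note the paper studies a dynamic, intensity-based system, so everything below, including the parameters, is only an illustrative stand-in.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def loss_from_default_mc(n_names=10_000, n_sims=2_000, p=0.02, rho=0.3):
    """Fraction of names defaulting per scenario in a one-factor
    Gaussian model; a static stand-in for the paper's dynamic system."""
    c = norm.ppf(p)                              # default threshold
    Z = rng.normal(size=(n_sims, 1))             # common factor
    eps = rng.normal(size=(n_sims, n_names))     # idiosyncratic shocks
    X = np.sqrt(rho) * Z + np.sqrt(1 - rho) * eps
    return (X < c).mean(axis=1)                  # per-scenario loss

losses = loss_from_default_mc()
print(losses.mean(), np.quantile(losses, 0.99))
```
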
  6. By: Ericson, Peter (Sim Solution); Flood, Lennart (Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: This paper presents estimates of individuals’ responses in hourly wages to changes in marginal tax rates. Estimates based on register panel data of Swedish households covering the period 1992 to 2007 produce significant but relatively small net-of-tax rate elasticities. The results vary with family type, with the largest elasticities obtained for single males and the smallest for married/cohabitant females. Despite these seemingly small elasticities, evaluation of the effects of a reduced state tax using a microsimulation model shows that the effort effect matters. The largest effect is due to changes in the number of working hours, yet including the effort effect results in an almost self-financed reform. As a reference to the earlier literature we also estimate taxable income elasticities. As expected, these are larger than for the hourly wage rates. However, both specifications produce significant and positive income effects.
    Keywords: income taxation; hourly wage rates; work effort; micro simulation
    JEL: D31 H24 J22 J31
    Date: 2011–08–31
    URL: http://d.repec.org/n?u=RePEc:hhs:gunwpe:0514&r=cmp
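
The headline quantity, a net-of-tax rate elasticity of hourly wages, translates into wage responses as in this small worked example; the numbers are illustrative, not the paper's estimates.

```python
def wage_response(w, tau_old, tau_new, elasticity):
    """Hourly wage implied by a net-of-tax rate elasticity:
    w_new = w * ((1 - tau_new) / (1 - tau_old)) ** elasticity."""
    return w * ((1 - tau_new) / (1 - tau_old)) ** elasticity

# A 5-point cut in the marginal rate with a small elasticity of 0.05:
print(wage_response(w=200.0, tau_old=0.55, tau_new=0.50, elasticity=0.05))
```
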
  7. By: Muffasir Badshah (Department of Finance and Economics, Qatar University, Doha, Qatar); Paul Beaumont (Department of Economics, Florida State University); Anuj Srivastava (Department of Statistics, Florida State University)
    Abstract: This paper describes an accurate, fast and robust fixed point method for computing the stationary wealth distributions in macroeconomic models with a continuum of infinitely-lived households who face idiosyncratic shocks with aggregate certainty. The household wealth evolution is modeled as a mixture Markov process, and the stationary wealth distributions are obtained from the eigenstructures of the transition matrices, enforcing the conditions of the Perron-Frobenius theorem by adding a perturbation constant to the Markov transition matrix. This step is applied repeatedly within a binary search algorithm to find the equilibrium state of the system. The algorithm suggests an efficient and reliable framework for studying dynamic stochastic general equilibrium models with heterogeneous agents.
    Keywords: Numerical solutions, Wealth distributions, Stationary equilibria, DSGE models
    JEL: C63 D52
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:fsu:wpaper:wp2011_08_02&r=cmp
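
The core numerical device, computing a stationary distribution from the eigenstructure of a transition matrix made strictly positive by a small perturbation so that the Perron-Frobenius theorem applies, can be sketched as follows. The binary search over equilibrium prices that wraps this step in the paper is omitted here.

```python
import numpy as np

def stationary_dist(P, eps=1e-10):
    """Stationary distribution of a Markov transition matrix via its
    eigenstructure; the perturbation makes the matrix strictly positive
    so the dominant eigenvector is unique (a loose sketch of the
    paper's device)."""
    Q = (P + eps) / (P + eps).sum(axis=1, keepdims=True)  # perturb, renormalize
    vals, vecs = np.linalg.eig(Q.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])        # Perron vector
    return v / v.sum()

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
print(stationary_dist(P))
```
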
  8. By: Fahim, Arash; Touzi, Nizar; Warin, Xavier
    Abstract: We consider the probabilistic numerical scheme for fully nonlinear PDEs suggested in [12], and show that it can be introduced naturally as a combination of Monte Carlo and finite difference schemes without appealing to the theory of backward stochastic differential equations. Our first main result provides the convergence of the discrete-time approximation and derives a bound on the discretization error in terms of the time step. An explicit implementable scheme requires approximating the conditional expectation operators involved in the discretization. This induces a further Monte Carlo error. Our second main result is to prove the convergence of the latter approximation scheme, and to derive an upper bound on the approximation error. Numerical experiments are performed for the approximation of the solution of the mean curvature flow equation in dimensions two and three, and for two- and five-dimensional (plus time) fully nonlinear Hamilton-Jacobi-Bellman equations arising in the theory of portfolio optimization in financial mathematics.
    Keywords: Second-order backward stochastic differential equations; viscosity solutions; monotone schemes; Monte Carlo approximation
    JEL: C15
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ner:dauphi:urn:hdl:123456789/5524&r=cmp
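
The probabilistic time-stepping idea is easiest to see in the linear special case, where one backward step is just a conditional expectation over a Gaussian increment; the fully nonlinear scheme in the paper additionally extracts derivative estimates from the same samples. A toy sketch for the heat equation u_t + 0.5 u_xx = 0, with everything (grid, step size, terminal condition) chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def heat_step_mc(u, x_grid, dt, n_samples=5_000):
    """One backward step using the probabilistic representation
    u(t, x) = E[u(t + dt, x + sqrt(dt) * Z)], Z ~ N(0, 1), estimated
    by Monte Carlo with interpolation on the grid. This toy keeps only
    the linear backbone of the paper's scheme."""
    Z = rng.normal(size=n_samples)
    return np.array([
        np.interp(x + np.sqrt(dt) * Z, x_grid, u).mean()
        for x in x_grid
    ])

x = np.linspace(-4.0, 4.0, 81)
u_T = np.maximum(x, 0.0)            # terminal condition at time T
u = heat_step_mc(u_T, x, dt=0.1)    # approximate solution at T - dt
print(u[40])                        # value at x = 0
```
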
  9. By: Lluís Bermúdez (Departament de Matemàtica Econòmica, Financera i Actuarial. RISC-IREA. University of Barcelona. Spain); Antoni Ferri (Departament d'Econometria, Estadística i Economia Espanyola. RISC-IREA. University of Barcelona. Spain); Montserrat Guillén (Departament d'Econometria, Estadística i Economia Espanyola. RISC-IREA. University of Barcelona. Spain)
    Abstract: This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
    Keywords: Solvency II, Solvency Capital Requirement, Standard Model, Internal Model, Monte Carlo simulation, Copulas.
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:xrp:wpaper:xreap2011-12&r=cmp
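
The sensitivity at issue is already visible in the standard-formula aggregation, SCR = sqrt(s' C s), where s collects the per-line charges and C is the correlation matrix. A sketch with invented figures, not the Spanish market data the paper uses:

```python
import numpy as np

def scr_standard_formula(scr_lines, corr):
    """Square-root aggregation of per-line capital charges, the form
    used by the Solvency II standard formula across lines of business."""
    s = np.asarray(scr_lines)
    return float(np.sqrt(s @ corr @ s))

# Two lines of business with illustrative standalone charges; vary the
# correlation assumption to see its effect on the aggregate SCR.
s = [100.0, 60.0]
for r in (0.0, 0.25, 0.5):
    C = np.array([[1.0, r], [r, 1.0]])
    print(r, round(scr_standard_formula(s, C), 1))
```
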
  10. By: Ronald Heijmans; Richard Heuver
    Abstract: We develop indicators for signs of liquidity shortages and potential financial problems of banks by studying transaction data of the Dutch part of the European real-time gross settlement system and collateral management data. The indicators give information on 1) the overall liquidity position, 2) the interbank money market, 3) the timing of payment flows, 4) the amount and use of collateral and 5) signs of a bank run. This information can be used both for monitoring the TARGET2 payment system and for the supervision of individual banks. By studying these data before, during and after stressful events in the crisis, banks’ reaction patterns are identified. These patterns are translated into a set of behavioural rules, which can be used in stress scenario analyses of payment systems, such as simulations and network topology analyses. In the literature, behaviour and reaction patterns in simulations are either ignored or kept very static. To perform realistic payment system simulations it is crucial to understand how banks react to shocks.
    Keywords: behaviour of banks; wholesale payment systems; financial stability
    JEL: D23 E42 E58
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:dnb:dnbwpp:316&r=cmp
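
One of the simpler indicator families, the timing of payment flows, can be illustrated by a value-weighted average settlement hour. The definition below is our toy reading of that idea, not the paper's precise indicator.

```python
def average_payment_time(payments):
    """payments: list of (hour_of_day, value) pairs for one bank on one
    day. Returns the value-weighted average hour at which payments are
    sent; a drift toward later hours can signal that a bank is delaying
    outgoing payments to hoard liquidity."""
    total_value = sum(value for _, value in payments)
    return sum(hour * value for hour, value in payments) / total_value

normal_day = [(9.5, 80.0), (11.0, 120.0), (14.0, 60.0)]
stressed_day = [(13.0, 80.0), (15.5, 120.0), (17.0, 60.0)]
print(average_payment_time(normal_day), average_payment_time(stressed_day))
```
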
  11. By: Chahim, M.; Brekelmans, R.C.M.; Hertog, D. den; Kort, P.M. (Tilburg University, Center for Economic Research)
    Abstract: This paper determines the optimal timing of dike heightenings as well as the corresponding optimal heightening sizes to protect against floods. To derive the optimal policy we design an algorithm based on the Impulse Control Maximum Principle. In this way the paper presents one of the first real-life applications of the Impulse Control Maximum Principle developed by Blaquière. We show that the proposed Impulse Control approach performs better than Dynamic Programming with respect to computational time, because Impulse Control does not require discretization in time.
    Keywords: Impulse Control Maximum Principle; Optimal Control; flood prevention; dikes; cost-benefit analysis
    JEL: C61 D61 H54 Q54
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2011097&r=cmp
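
The flavour of the problem, trading off discounted investment cost against discounted expected flood damage over the timing and size of a heightening impulse, can be seen in a crude grid-search toy. The cost and damage functions below are invented stand-ins, and this brute force is exactly the kind of time discretization the paper's Impulse Control algorithm avoids.

```python
import numpy as np

def total_cost(t_raise, size, horizon=100.0, r=0.04,
               damage0=1.0, growth=0.03, protection=0.5):
    """Discounted investment plus discounted expected flood damage for
    a single dike heightening of `size` metres at time `t_raise`;
    illustrative functional forms, not the paper's calibration."""
    invest = (10.0 + 5.0 * size) * np.exp(-r * t_raise)
    ts = np.linspace(0.0, horizon, 2001)
    damage = damage0 * np.exp(growth * ts)                # growing exposure
    damage *= np.exp(-protection * size * (ts >= t_raise))  # after the raise
    expected_damage = float(np.sum(damage * np.exp(-r * ts)) * (ts[1] - ts[0]))
    return invest + expected_damage

grid = [(t, h) for t in np.linspace(0.0, 60.0, 61)
        for h in np.linspace(0.5, 3.0, 26)]
t_star, h_star = min(grid, key=lambda g: total_cost(*g))
print(t_star, h_star)
```
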
  12. By: Boldea, O.; Engwerda, J.C.; Michalak, T.; Plasmans, J.E.J.; Salmah, S. (Tilburg University, Center for Economic Research)
    Abstract: This paper analyzes some pros and cons of a monetary union for the ASEAN countries, excluding Myanmar. We estimate a stylized open-economy dynamic general equilibrium model for the ASEAN countries. Using the framework of linear quadratic differential games, we contrast the potential gains or losses for these countries due to economic shocks in the cases where they maintain the status quo, coordinate their monetary and/or fiscal policies, or form a monetary union. Assuming open-loop information for all players, we conclude that there are substantial gains from cooperation of monetary authorities. We also find that whether a monetary union improves upon monetary cooperation depends on the type of shocks and the extent of fiscal policy cooperation. Results are based both on a theoretical study of the structure of the estimated model and on a simulation study.
    Keywords: ASEAN economic integration; monetary union; linear quadratic differential games; open-loop information structure
    JEL: C61 C71 C72 C73 E17 E52 E61 F15 F42 F47
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2011098&r=cmp
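
The basic building block of the linear quadratic differential-game machinery is the LQ regulator solved through a Riccati equation; the open-loop equilibria in the paper solve coupled Riccati equations across players instead. A one-player sketch with invented matrices, assuming SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# One-player LQ problem: minimize the integral of x'Qx + u'Ru subject
# to x' = Ax + Bu, via the continuous-time algebraic Riccati equation.
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # optimal feedback: u = -Kx
print(K)
```
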
  13. By: Paul Beaumont (Department of Economics, Florida State University); Yaniv Jerassy-Etzion (Department of Economics and Management, Ruppin Academic Center)
    Abstract: We present a simple, fast and iterative linear algorithm for simultaneously stripping the coupon payments from, and smoothing the yield curve of, the term structure of interest rates. The method minimizes pricing errors, constrains the initial and terminal conditions of the curves, and produces maximally smooth forward rate curves.
    Keywords: Term structure of interest rates, yield curve, coupon stripping, curve interpolation
    JEL: G12 C63
    Date: 2011–08
    URL: http://d.repec.org/n?u=RePEc:fsu:wpaper:wp2011_08_03&r=cmp
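
Plain coupon stripping, before any smoothing, amounts to solving a linear system for the discount factors; the paper's contribution is then to select a maximally smooth forward curve via iterative piecewise quartic interpolation, which this toy omits. A sketch with invented bonds:

```python
import numpy as np

# With as many bonds as payment dates, the discount factors d solve
# C d = p, where C[i, j] is bond i's cash flow at date j and p holds
# the observed prices. Bonds and prices below are illustrative.
C = np.array([[105.0,   0.0,   0.0],    # 5% annual coupon, 1y bond
              [  6.0, 106.0,   0.0],    # 6% coupon, 2y bond
              [  7.0,   7.0, 107.0]])   # 7% coupon, 3y bond
p = np.array([101.0, 102.5, 103.0])

d = np.linalg.solve(C, p)               # discount factors, years 1-3
zero_rates = d ** (-1.0 / np.arange(1, 4)) - 1.0
print(d, zero_rates)
```
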

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.