nep-cmp New Economics Papers
on Computational Economics
Issue of 2009‒03‒14
thirteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Neural Networks for Cross-Sectional Employment Forecasts: A Comparison of Model Specifications for Germany By Roberto Patuelli; Aura Reggiani; Peter Nijkamp; Norbert Schanne
  2. Computational rationality and voluntary provision of public goods: an agent-based simulation model By M. Raimondi
  3. Sensitivity Analysis of Simulation Models By Kleijnen, J.P.C.
  4. Variety Trade and Skill Premium in a Calibrated General Equilibrium Model: The Case of Mexico By Atolia, Manoj; Yoshinori, Kurokawa
  5. Simulating a Sequential Coalition Formation Process for the Climate Change Problem: First Come, but Second Served? By Finus, Michael; Rundshagen, Bianca; Eyckmans, Johan
  6. The Evaluation of American Compound Option Prices Under Stochastic Volatility Using the Sparse Grid Approach By Carl Chiarella; Boda Kang
  7. Policy Instruments for Evolution of Bounded Rationality: Application to Climate-Energy Problems By Nannen, Volker; van den Bergh, Jeroen C. J. M.
  8. Impact of inventory inaccuracy on service-level quality: A simulation analysis By Daniel Thiel; Vincent Hovelaque; Thi Le Hoa Vo
  9. On the transition dynamics in endogenous recombinant growth models. By Privileggi, Fabio
  10. Does fast Growth in India and China harm U.S. Workers? Insights from Simulation Evidence By Alex Izurieta; Ajit Singh
  11. "Assessing the Consequences of a Horizontal Merger and its Remedies in a Dynamic Environment" By Isao Ishida; Toshiaki Watanabe
  12. Estimating Sequential-move Games by a Recursive Conditioning Simulator By Shiko Maruyama
  13. Alternative Defaultable Term Structure Models By Nicola Bruti-Liberati; Christina Nikitopoulos-Sklibosios; Eckhard Platen; Erik Schlogl

  1. By: Roberto Patuelli (Institute for Economic Research (IRE), University of Lugano, Switzerland; The Rimini Centre for Economic Analysis, Italy); Aura Reggiani (Department of Economics, University of Bologna, Italy); Peter Nijkamp (Department of Spatial Economics, VU University Amsterdam, The Netherlands); Norbert Schanne (Institute for Employment Research (IAB), Nuremberg, Germany)
    Abstract: In this paper, we present a review of various computational experiments – and consequent results – concerning Neural Network (NN) models developed for regional employment forecasting. NNs are widely used in several fields because of their flexible specification structure. Their use in studying/predicting economic variables, such as employment or migration, is justified by the ability of NNs to learn from data, in other words, to find functional relationships – by means of data – among the economic variables under analysis. A series of NN experiments is presented in the paper. Using two data sets on German NUTS 3 districts (326 and 113 labour market districts in the former West and East Germany, respectively), the results emerging from the implementation of various NN models – in order to forecast variations in full-time employment – are provided and discussed. In our approach, single forecasts are computed by the models for each district. Different specifications of the NN models are first tested in terms of: (a) explanatory variables; and (b) NN structures. The average statistical results of simulated out-of-sample forecasts on different periods are summarized and commented on. In addition to variable and structure specification, the choice of NN learning parameters and internal functions is also critical to the success of NNs. Comprehensive testing of these parameters is, however, limited in the literature. A sensitivity analysis is therefore carried out and discussed, in order to evaluate different combinations of NN parameters. The paper concludes with methodological and empirical remarks, as well as with suggestions for future research.
    Keywords: neural networks, sensitivity analysis, employment forecasts, Germany
    JEL: C45 E27 R23
    Date: 2009–02
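The kind of feed-forward NN specification surveyed above can be sketched as follows. This is a minimal illustration on synthetic district-level data, not the authors' model; the network size, learning rate, and data-generating process are all hypothetical:

```python
import numpy as np

# Illustrative sketch only: a one-hidden-layer feed-forward network trained by
# full-batch gradient descent to map district-level features (e.g. lagged
# employment variation) to next-period employment variation. Data are synthetic.
rng = np.random.default_rng(0)

# Synthetic panel: 326 "districts", 3 explanatory variables each.
X = rng.normal(size=(326, 3))
true_w = np.array([0.5, -0.3, 0.2])
y = np.tanh(X @ true_w) + 0.05 * rng.normal(size=326)  # employment variation

# Network parameters: 3 inputs -> 5 hidden (tanh) -> 1 linear output.
W1 = rng.normal(scale=0.5, size=(3, 5))
b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1))
b2 = np.zeros(1)

lr = 0.05
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)           # hidden activations
    pred = (H @ W2 + b2).ravel()       # linear output
    err = pred - y
    # Backpropagation of the mean-squared-error loss.
    dpred = 2 * err[:, None] / len(y)
    dW2 = H.T @ dpred
    db2 = dpred.sum(axis=0)
    dH = dpred @ W2.T * (1 - H**2)     # tanh derivative
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean(err**2))
```

The sensitivity analysis the abstract refers to would repeat such a fit while varying the hidden-layer size, learning rate, and internal activation functions.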
  2. By: M. Raimondi
    Abstract: The issue of cooperation among private agents in realising collective goods has always raised problems concerning the basic nature of individual behaviour as well as the more traditional economic problems. The Computational Economics literature on public goods provision can be useful for studying the possibility of cooperation under alternative sets of assumptions concerning the nature of individual rationality and the kind of interactions between individuals. In this work I use an agent-based simulation model to study the evolution of cooperation among private agents taking part in a collective project: a large number of agents, characterised by computational rationality, defined as the capacity to calculate and evaluate their own immediate payoffs perfectly and without errors, interact to produce a public good. The results show that when the agents’ behaviour is influenced neither by expectations of others’ behaviour nor by social and relational characteristics, they opt to contribute to the public good to an almost socially optimal extent, even where there is little difference between the rates of return on private and public investment.
    Keywords: Computational Economics; Agent-based models; Social Dilemmas; Collective Action; Public Goods
    JEL: C63 D64 D80 H41
    Date: 2009
  3. By: Kleijnen, J.P.C. (Tilburg University, Center for Economic Research)
    Abstract: This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial metamodels, resolution-IV and resolution-V designs for metamodels augmented with two-factor interactions, and designs for second-degree polynomial metamodels including central composite designs. It also reviews factor screening for simulation models with very many factors, focusing on the so-called "sequential bifurcation" method. Furthermore, it reviews Kriging metamodels and their designs. It mentions that sensitivity analysis may also aim at the optimization of the simulated system, allowing multiple random simulation outputs.
    Keywords: simulation; sensitivity analysis; gradients; screening; Kriging; optimization; Response Surface Methodology; Taguchi
    JEL: C0 C1 C9
    Date: 2009
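The pairing of a classic design with a first-order polynomial metamodel that Kleijnen reviews can be illustrated with a small sketch. The 2^(3-1) resolution-III design and the toy simulation response below are hypothetical examples, not taken from the paper:

```python
import numpy as np

# A 2^(3-1) resolution-III fractional factorial design for three factors
# (generator: x3 = x1 * x2), used to fit a first-order polynomial metamodel
# y = b0 + b1*x1 + b2*x2 + b3*x3 to a toy simulation output.

# Design matrix in coded units (-1, +1); columns: x1, x2, x3 = x1*x2.
base = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
design = np.column_stack([base, base[:, 0] * base[:, 1]])

def simulate(x):
    # Hypothetical "simulation model" with a first-order response.
    return 10 + 2 * x[0] - 3 * x[1] + 0.5 * x[2]

y = np.array([simulate(row) for row in design])

# OLS estimate of the metamodel coefficients; with an orthogonal two-level
# design this is exact for a first-order response.
X = np.column_stack([np.ones(len(design)), design])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1:] estimates the gradient of the response in coded units.
```

The estimated slopes beta[1:] are the gradient estimates the survey discusses; screening methods such as sequential bifurcation apply the same logic when there are hundreds of factors.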
  4. By: Atolia, Manoj; Yoshinori, Kurokawa
    Abstract: It can be shown theoretically that variety trade is a possible source of increased skill premium in wages. No past studies, however, have empirically quantified how much of the increase in skill premium can be accounted for by the increase in variety trade. This paper formulates a static general equilibrium model and calibrates it to the Mexican input-output matrix for 1987. In the calibrated model, our numerical experiments show that the increase in U.S.-Mexican variety trade can explain approximately 12 percent of the actual increase in skill premium in Mexico from 1987 to 2000.
    Keywords: Variety Trade; Skill Premium; Variety-Skill Complementarity; Calibrated General Equilibrium Model; Mexico
    JEL: F16 F12
    Date: 2008–10–29
  5. By: Finus, Michael; Rundshagen, Bianca; Eyckmans, Johan
    Abstract: We analyze stability of self-enforcing climate agreements based on a data set generated by the CLIMNEG world simulation model (CWSM), version 1.2. We consider two new aspects which appear important in actual treaty-making. First, we consider a sequential coalition formation process where players can make proposals which are either accepted or countered by other proposals. Second, we analyze whether a moderator, like an international organization, even without enforcement power, can improve upon globally suboptimal outcomes through coordinating actions by making recommendations that must be Pareto-improving to all parties. We discuss the conceptual difficulties of implementing our algorithm.
    Keywords: International Climate Agreements; Sequential Coalition Formation; Coordination through Moderator; Integrated Assessment Model; Algorithm for Computations
    Date: 2009–03
  6. By: Carl Chiarella (School of Finance and Economics, University of Technology, Sydney); Boda Kang (School of Finance and Economics, University of Technology, Sydney)
    Abstract: A compound option (the mother option) gives the holder the right, but not the obligation, to buy (long) or sell (short) the underlying option (the daughter option). In this paper, we demonstrate a partial differential equation (PDE) approach to pricing American-type compound options where the underlying dynamics follow Heston's stochastic volatility model. This price is formulated as the solution to a two-pass free boundary PDE problem. A modified sparse grid approach is implemented to solve the PDEs, which is shown to be accurate and efficient compared with the results from Monte Carlo simulation combined with the Method of Lines.
    Keywords: American compound option; stochastic volatility; free boundary problem; sparse grid; combination technique; Monte Carlo simulation; method of lines
    JEL: C61 D11
    Date: 2009–02–01
  7. By: Nannen, Volker; van den Bergh, Jeroen C. J. M.
    Abstract: We demonstrate how an evolutionary agent-based model can be used to evaluate climate policies that take the heterogeneity of strategies of individual agents into account. An essential feature of the model is that the fitness of an economic strategy is determined by the relative welfare of the associated agent as compared to its immediate neighbors in a social network. This enables the study of policies that affect relative positions of individuals. We formulate two innovative climate policies, namely `prizes', which directly alter relative welfare, and `advertisement', which influences the social network of interactions. The policies are illustrated using a simple model of global warming where a resource with a negative environmental impact (fossil energy) can be replaced by an environmentally neutral yet less cost-effective alternative, namely renewable energy. It is shown that the general approach enlarges the scope of economic policy analysis.
    Keywords: agent-based modeling; behavioral economics; climate policy; evolutionary economics; relative welfare; social network
    JEL: B52 H23 Q54 C73
    Date: 2009–01–14
  8. By: Daniel Thiel; Vincent Hovelaque; Thi Le Hoa Vo
    Abstract: This article discusses the impact of inventory inaccuracy on service-level quality in (Q,R) continuous-review, lost-sales inventory models. A simulation model is built to study the behaviour of this kind of system when exposed to inaccurate inventory records as well as demand variability. We observe an unusual result that runs counter to a common empirical practice in SMEs, namely raising the inventory level in proportion to the data inaccuracy rate. The relationship is nonmonotone: at the outset, service-level quality falls as the inaccuracy rate increases, but when the inaccuracy rate becomes much higher this quality is conversely enhanced. The same relation is observed when stocktaking is triggered as soon as a threshold decline in the service-level rate is reached and demand consequently dwindles. Finally, another noteworthy result shows the same phenomenon for the relationship between the safety-stock level determined by the simulation and inventory inaccuracy. These results are discussed both in terms of their contribution to (Q,R) inventory management policies in SMEs and of the limitations of this study.
    Keywords: Continuous review inventory system, inventory inaccuracy, continuous model, discrete-time simulation
    JEL: C61 C65 M11
    Date: 2009
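A bare-bones version of the kind of discrete-time (Q, R) lost-sales simulation described above might look like the following. All parameters (order quantity, reorder point, lead time, demand distribution, error process) are illustrative assumptions, not the authors' settings:

```python
import random

# Sketch of a (Q, R) continuous-review, lost-sales system with inaccurate
# records: ordering decisions use the *recorded* level, while demand is
# served from the *physical* level. Record errors (shrinkage, misplacement)
# remove physical stock without updating the records.
def service_level(inaccuracy_rate, Q=50, R=20, lead_time=3, periods=2000, seed=1):
    rnd = random.Random(seed)
    physical = recorded = Q
    pipeline = []                      # outstanding orders: (arrival_period, qty)
    served = demanded = 0
    for t in range(periods):
        # Receive any orders due this period.
        arrived = sum(q for (due, q) in pipeline if due == t)
        pipeline = [(due, q) for (due, q) in pipeline if due != t]
        physical += arrived
        recorded += arrived
        # Random demand; unmet demand is lost.
        d = rnd.randint(0, 10)
        sold = min(d, physical)
        physical -= sold
        recorded -= sold
        # Unrecorded loss of one unit with the given probability.
        if rnd.random() < inaccuracy_rate and physical > 0:
            physical -= 1
        served += sold
        demanded += d
        # Reorder on the recorded inventory position (stock + on order).
        if recorded + sum(q for (_, q) in pipeline) <= R:
            pipeline.append((t + lead_time, Q))
    return served / demanded if demanded else 1.0

sl_accurate = service_level(0.0)
sl_inaccurate = service_level(0.3)
```

Because the records are never reconciled, the recorded position drifts above the physical stock and reorders stop being triggered, which is exactly the service-level degradation channel the paper analyses.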
  9. By: Privileggi, Fabio
    Abstract: This paper constitutes a first attempt at studying the transition dynamics of the Tsur and Zemel (2007) continuous time endogenous growth framework in which knowledge evolves according to the Weitzman (1998) recombinant process. For a specific choice of the probability function characterizing the Weitzman recombinant process, we find a suitable transformation for the state and control variables in the dynamical system diverging to asymptotic constant growth, so that an equivalent 'detrended' system converging to a steady state in the long run can be tackled. Since the dynamical system obtained so far turns out to be analytically intractable, we rely on numerical simulation in order to fully describe the transition dynamics for a set of values of the parameters.
    Keywords: Knowledge Production, Recombinant Expansion Process, Endogenous Balanced Growth, Turnpike, Transition Dynamics
    JEL: C61 O31 O41
    Date: 2008–12
  10. By: Alex Izurieta; Ajit Singh
    Abstract: A major political and policy issue today is whether globalisation and rapid economic growth in India and China would have an adverse effect on labour markets in the U.S. and other advanced countries. Some leading economists have argued that even though the recent integration of India and China with the liberalised global economy has not so far had a serious negative impact on wages and employment in advanced countries, it is most likely to do so in the future in view of the growing technological and scientific capabilities in the two developing countries. It is also suggested that this integration represents a sudden doubling of the world labour force without a concomitant increase in capital. The present paper argues against this plausible thesis, essentially on two grounds: (a) it does not take into account the demand-side effects of fast growth in India and China; and (b) it abstracts from the dynamism of the U.S. real economy and its innovative large corporations. However, simulations of different scenarios on the CAM world econometric model indicate that at a disaggregated level there are severe supply-side constraints on energy, raw materials and food which thwart the expansionary demand-side effects of fast growth in India and China.
    Keywords: Globalisation; China and India; Simulation; U.S. Workers; Economic integration
    JEL: J20 J21 F01
    Date: 2008–12
  11. By: Isao Ishida (Faculty of Economics and Graduate School of Public Policy, University of Tokyo); Toshiaki Watanabe (Institute of Economic Research, Hitotsubashi University)
    Abstract: This paper estimates a dynamic oligopoly model to assess the economic consequences of a horizontal merger that took place in 1970 to create the second largest global producer of steel. The paper solves a Markov perfect Nash equilibrium for the model and simulates the welfare effects of the horizontal merger. Estimates reveal that the merger enhanced the production efficiency of the merging party by 4.1%, while the exercise of market power was restrained primarily by the presence of fringe competitors. Our simulation result also indicates that structural remedies endorsed by the competition authority failed to promote competition.
    Date: 2009–01
  12. By: Shiko Maruyama (School of Economics, The University of New South Wales)
    Abstract: Sequential decision-making is a noticeable feature of strategic interactions among agents. The full estimation of sequential games, however, has been challenging due to the sheer computational burden, especially when the game is large and asymmetric. In this paper, I propose an estimation method for discrete choice sequential games that is computationally feasible, easy to implement, and efficient, by modifying the Geweke-Hajivassiliou-Keane (GHK) simulator, the most widely used probit simulator. I show that the recursive nature of the GHK simulator dovetails easily with the sequential structure of strategic interactions.
    Date: 2009–01
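The recursive structure of the GHK simulator mentioned above can be sketched for the simplest case: the probability that a correlated bivariate normal vector falls below given thresholds. This is a generic textbook GHK sketch, not Maruyama's estimator:

```python
import random
from statistics import NormalDist

# GHK simulator for P(Y1 < b1, Y2 < b2) with Y ~ N(0, Sigma): draw each
# component from a truncated normal conditional on the earlier draws, and
# average the product of the truncation probabilities. The recursion over
# components mirrors the sequential structure exploited in the paper.
N = NormalDist()

def ghk_orthant(b, chol, draws=5000, seed=0):
    """Estimate P(L z < b) where chol = L, the lower Cholesky factor of Sigma."""
    rnd = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        z = []
        weight = 1.0
        for i in range(len(b)):
            # Upper bound for z_i given the earlier draws z_0..z_{i-1}.
            ub = (b[i] - sum(chol[i][j] * z[j] for j in range(i))) / chol[i][i]
            p = N.cdf(ub)
            weight *= p                       # mass of the feasible region
            if p <= 0.0:
                break
            u = max(rnd.random(), 1e-12) * p  # uniform on (0, p)
            z.append(N.inv_cdf(u))            # truncated-normal draw
        total += weight
    return total / draws

# Sigma = [[1, 0.5], [0.5, 1]] has Cholesky L = [[1, 0], [0.5, sqrt(0.75)]].
L = [[1.0, 0.0], [0.5, 0.75 ** 0.5]]
est = ghk_orthant([0.0, 0.0], L)
```

For this correlation the true orthant probability is 1/4 + arcsin(0.5)/(2*pi) = 1/3, so the estimate can be checked against a closed form.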
  13. By: Nicola Bruti-Liberati (School of Finance and Economics, University of Technology, Sydney); Christina Nikitopoulos-Sklibosios (School of Finance and Economics, University of Technology, Sydney); Eckhard Platen (School of Finance and Economics, University of Technology, Sydney); Erik Schlogl (School of Finance and Economics, University of Technology, Sydney)
    Abstract: The objective of this paper is to consider defaultable term structure models in a general setting beyond standard risk-neutral models. Using as numeraire the growth optimal portfolio, defaultable interest rate derivatives are priced under the real-world probability measure. Therefore, the existence of an equivalent risk-neutral probability measure is not required. In particular, the real-world dynamics of the instantaneous defaultable forward rates under a jump-diffusion extension of an HJM-type framework are derived. Thus, by establishing a modelling framework fully under the real-world probability measure, the challenge of reconciling real-world and risk-neutral probabilities of default is deliberately avoided, which provides significant extra modelling freedom. In addition, for certain volatility specifications, finite dimensional Markovian defaultable term structure models are derived. The paper also demonstrates an alternative defaultable term structure model. It provides tractable expressions for the prices of defaultable derivatives under the assumption of independence between the discounted growth optimal portfolio and the default-adjusted short rate. These expressions are then used in a more general model as control variates for Monte Carlo simulations of credit derivatives.
    Keywords: defaultable forward rates; jump-diffusion processes; growth optimal portfolio; real-world pricing
    JEL: G10 G13
    Date: 2009–01–01
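The control-variate device mentioned at the end of the abstract is generic Monte Carlo variance reduction: subtract a correlated quantity with a known expectation. A minimal sketch on a toy lognormal payoff (illustrative parameters, unrelated to the paper's defaultable term structure model) is:

```python
import math
import random

# Control-variate Monte Carlo: estimate E[max(S - K, 0)] for a lognormal S,
# using S itself (known mean s0) as the control variate. The same device
# applies whenever a tractable analytic price is available alongside the
# quantity being simulated.
def mc_with_control(n=20000, seed=0, s0=100.0, sigma=0.2, K=100.0):
    rnd = random.Random(seed)
    payoffs, controls = [], []
    for _ in range(n):
        z = rnd.gauss(0.0, 1.0)
        s = s0 * math.exp(-0.5 * sigma**2 + sigma * z)   # E[S] = s0
        payoffs.append(max(s - K, 0.0))
        controls.append(s)
    mp = sum(payoffs) / n
    mc = sum(controls) / n
    # Optimal coefficient b* = Cov(payoff, control) / Var(control).
    cov = sum((p - mp) * (c - mc) for p, c in zip(payoffs, controls)) / n
    var = sum((c - mc) ** 2 for c in controls) / n
    b = cov / var
    return mp - b * (mc - s0)   # adjust by the known control mean

est = mc_with_control()
```

The adjustment term b * (mc - s0) has zero expectation, so the estimator stays unbiased while its variance shrinks in proportion to the squared correlation between payoff and control.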

This nep-cmp issue is ©2009 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP website. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.