nep-cmp New Economics Papers
on Computational Economics
Issue of 2007‒05‒12
ten papers chosen by
Stan Miles
Thompson Rivers University

  1. Feedback Approximation of the Stochastic Growth Model by Genetic Neural Networks By Sibel Sirakaya; Stephen Turnovsky; N.M. Alemdar
  2. Minimization of Keane’s Bump Function by the Repulsive Particle Swarm and the Differential Evolution Methods By Mishra, SK
  3. A framework for cut-off sampling in business survey design By Marco Bee; Roberto Benedetti; Giuseppe Espa
  4. Electricity Reforms in Senegal: A Macro–Micro Analysis of the Effects on Poverty and Distribution By Dorothée Boccanfuso; Antonio Estache; Luc Savard
  5. Armington Parameter Estimation for a Computable General Equilibrium Model: A Database Consistent Approach By Xiao-guang Zhang; George Verikios
  6. Bond Indebtedness in a Recursive Dynamic CGE Model By André Lemelin
  7. Providing Duty-Free Access to Australian Markets for Least-Developed Countries: a General Equilibrium Analysis By Xiao-guang Zhang; George Verikios
  8. Regional Economic Integration and its Impacts on Growth, Poverty and Income Distribution: The Case of Indonesia By Djoni Hartono; D.S. Priyarsono; Tien Dung Nguyen; Mitsuo Ezaki
  9. Household loan loss risk in Finland – estimations and simulations with micro data By Herrala, Risto; Kauko, Karlo
  10. Conditioning and Hessians in analytical and numerical optimization - Some illustrations By Christie Smith

  1. By: Sibel Sirakaya; Stephen Turnovsky; N.M. Alemdar
    Date: 2005–07
  2. By: Mishra, SK
    Abstract: Keane’s bump function is considered a standard benchmark for nonlinear constrained optimization. It is highly multi-modal and its optimum lies on the nonlinear constraint boundary. The true minimum of this function is, perhaps, unknown. In this paper we optimize Keane’s function in different dimensions (2 to 100) by the Repulsive Particle Swarm (RPS) and Differential Evolution (DE) methods. The DE program goes a long way towards obtaining the optimal results; the RPS optimization, however, falters. We also conjecture that the values of the decision variables diminish with increasing index values and that they form two distinct clusters with almost equal numbers of members. These regularities indicate whether the function can attain a minimum or has (at least) come close to it. We use this conjecture to order the variable values before each evaluation of the function during optimization. As a result, the performance of both DE and RPS improves significantly. Our results are comparable with the best results available in the literature on optimization of Keane’s function. Two of our findings are notable: (i) Keane’s envisaged min(f) = -0.835 for the 50-dimensional problem is realizable; (ii) Liu-Lewis’ min(f) = -0.84421 for the 200-dimensional problem is grossly sub-optimal. Computer programs (written by us in Fortran) are available on request.
    Keywords: Nonlinear; constrained; global optimization; repulsive particle swarm; differential evolution; Fortran; computer program; Hybrid; Genetic algorithms
    JEL: C61 C88
    Date: 2007–05–01
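    The abstract describes penalized evolutionary search on Keane's bump function, whose optimum sits on the constraint boundary prod(x) > 0.75. A minimal sketch of the function and a bare-bones DE/rand/1/bin loop follows; the authors' Fortran programs are not reproduced here, and the penalty weight, population size and generation count are our own illustrative choices, not theirs:

```python
import numpy as np

def keane(x):
    """Keane's bump function (to be minimized); x_i in (0, 10)."""
    i = np.arange(1, x.size + 1)
    num = np.sum(np.cos(x) ** 4) - 2.0 * np.prod(np.cos(x) ** 2)
    den = np.sqrt(np.sum(i * x ** 2))
    return -abs(num / den)

def penalized(x):
    """Exterior penalty for the constraints prod(x) > 0.75, sum(x) < 7.5 n."""
    n = x.size
    p = max(0.0, 0.75 - np.prod(x)) + max(0.0, np.sum(x) - 7.5 * n)
    return keane(x) + 1e3 * p  # penalty weight is an illustrative choice

def de(obj, n, pop=40, gens=300, F=0.7, CR=0.9, seed=1):
    """Bare-bones DE/rand/1/bin on the box (1e-6, 10)^n."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(1e-6, 10.0, size=(pop, n))
    fX = np.array([obj(x) for x in X])
    for _ in range(gens):
        for j in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 1e-6, 10.0)
            mask = rng.random(n) < CR
            mask[rng.integers(n)] = True    # ensure at least one mutated gene
            cand = np.where(mask, mutant, X[j])
            fc = obj(cand)
            if fc <= fX[j]:                 # greedy selection
                X[j], fX[j] = cand, fc
    k = int(np.argmin(fX))
    return X[k], fX[k]

best_x, best_f = de(penalized, n=2)  # 2-D global optimum is near -0.365
```

In two dimensions the known optimum is approximately f = -0.365 at x ≈ (1.601, 0.469), which a run of this sketch approaches; the paper's 50- and 100-dimensional experiments require the variable-ordering device described in the abstract to perform well.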
  3. By: Marco Bee; Roberto Benedetti; Giuseppe Espa
    Abstract: In sampling theory, the large concentration of the population with respect to most surveyed variables constitutes a problem which is difficult to tackle by means of classical tools. One possible solution is cut-off sampling, which explicitly prescribes discarding part of the population; in particular, if the population is composed of firms or establishments, the method results in the exclusion of the “smallest” firms. Whereas this sampling scheme is common among practitioners, its theoretical foundations tend to be considered weak, because the inclusion probability of some units is equal to zero. In this paper we propose a framework to justify cut-off sampling and to determine the census and cut-off thresholds. We use an estimation model which assumes as known the weight of the discarded units with respect to each variable; we compute the variance of the estimator and its bias, which is caused by violations of the aforementioned hypothesis. We develop an algorithm which minimizes the MSE as a function of multivariate auxiliary information at the population level. Given the combinatorial-optimization nature of the problem, we resort to the theory of stochastic relaxation: in particular, we use the simulated annealing algorithm.
    Keywords: Cut-off sampling, skewed populations, model-based estimation, optimal stratification, simulated annealing
    JEL: C21 D92 L60 O18 R12
    Date: 2007
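    The simulated-annealing step of the method can be illustrated on a toy version of the problem: choose how many of the smallest units to cut off so as to minimize an MSE. The population, the MSE expression (squared bias from the imputed tail plus a cost term for the surveyed part) and the cooling schedule below are all invented for illustration; the paper's model-based bias/variance decomposition is richer:

```python
import math
import random

random.seed(0)

# Toy skewed "firm size" population: a few large units dominate the total.
sizes = sorted(math.exp(random.gauss(2.0, 1.2)) for _ in range(500))

def mse(k):
    """Toy MSE for cutting off the k smallest units: squared bias from a
    10% error in the imputed tail weight, plus a cost growing with the
    number of surveyed units (stand-in for estimator variance)."""
    bias = 0.1 * sum(sizes[:k])
    return bias ** 2 + 0.02 * (len(sizes) - k)

def anneal(steps=5000, t0=100.0):
    """Simulated annealing over the integer cut-off threshold k."""
    k = len(sizes) // 2
    f = mse(k)
    best_k, best_f = k, f
    for s in range(steps):
        t = t0 * (1.0 - s / steps) + 1e-9          # linear cooling
        cand = min(max(k + random.choice([-5, -1, 1, 5]), 0), len(sizes) - 1)
        fc = mse(cand)
        # Accept improvements always, uphill moves with Metropolis probability.
        if fc < f or random.random() < math.exp((f - fc) / t):
            k, f = cand, fc
            if f < best_f:
                best_k, best_f = k, f
    return best_k, best_f

k_star, f_star = anneal()
```

The same accept/reject skeleton carries over when k is replaced by the paper's multivariate census and cut-off thresholds; only the objective changes.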
  4. By: Dorothée Boccanfuso (GREDI, Faculte d'administration, Université de Sherbrooke); Antonio Estache (World Bank and, the European Centre for Advanced Research in Economics and Statistics at the Free University of Brussels); Luc Savard (GREDI, Faculte d'administration, Université de Sherbrooke)
    Abstract: This paper uses a computable general equilibrium (CGE) macro-micro model to explore the distributional effects of price reform in the electricity sector of Senegal. In the first part of the paper we analyze the distribution of electricity in Senegal by income quintiles, between 1995 and 2001. The analysis demonstrates that poor and rural households are not the main beneficiaries of the expanded network. The results of the CGE application show that direct price increases have a minimal effect on poverty and inequality, whereas the general equilibrium effects are stronger and negative. Moreover, the compensatory policies tested can help attenuate some of the negative effects.
    Keywords: computable general equilibrium model, micro-simulation, poverty analysis, income distribution, privatization, water utilities
    JEL: D58 D31 I32 L33 L93
    Date: 2007
  5. By: Xiao-guang Zhang (Australian Productivity Commission); George Verikios (Department of Economics, The University of Western Australia)
    Abstract: Substitution elasticities in policy-oriented computable general equilibrium (CGE) models are key parameters for model results since they determine behaviour in these models. As Dawkins et al. (2001) observe, the current situation with regard to the elasticities available for use in these models is poor. We focus on an important type of elasticity that is widely used in CGE models with international trade: the so-called ‘Armington’ elasticities (Armington, 1969). These elasticities are well known for their critical role in determining model results. We present an alternative approach to quantifying Armington elasticities which is consistent across historical databases. The approach is used to derive elasticities from successive databases of a commonly-used global CGE model, the GTAP model.
    Keywords: Armington assumption, computable general equilibrium models, estimating Armington parameters
    JEL: C68 D58 F17
    Date: 2006
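    Under the CES Armington assumption, the log import/domestic quantity ratio is linear in the log price ratio with slope equal to the elasticity of substitution, so σ can in principle be read off comparable historical observations. A toy sketch follows; the synthetic data, the assumed true σ = 2.5 and the plain OLS recovery are illustrative, not the authors' database-consistent procedure:

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs = 30                                    # e.g. successive database points

sigma_true = 2.5                              # assumed Armington elasticity
log_price_ratio = rng.normal(0.0, 0.3, n_obs)     # log(pD/pM)
noise = rng.normal(0.0, 0.05, n_obs)
# CES demand: log(M/D) = const + sigma * log(pD/pM) + error
log_qty_ratio = 0.4 + sigma_true * log_price_ratio + noise

# OLS of the log quantity ratio on the log price ratio recovers sigma.
X = np.column_stack([np.ones(n_obs), log_price_ratio])
coef, *_ = np.linalg.lstsq(X, log_qty_ratio, rcond=None)
sigma_hat = float(coef[1])
```

The paper's point is precisely that such estimates should be made consistent with the successive GTAP databases rather than drawn from unrelated econometric sources.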
  6. By: André Lemelin
    Abstract: In this paper, we present a minimalist version of a model of bond financing and debt, embedded in a stepwise dynamic CGE model. The proposed specification takes into account the main characteristics of bond financing. Bonds compete on the securities market with shares, so that the yield demanded by the buyers of new bond issues increases as the cumulative bond debt grows relative to the stock of outstanding shares. Restrictions are imposed on the maturity structure of bonds, so that it is possible to attain a reasonable compromise between a realistic representation of the evolution of the debt and the demands on model memory for past variable values which impinge on the current period. In the proposed model, the borrowing government reimburses bonds that have reached maturity, and pays interest on the outstanding debt. The prices of bonds issued at different periods and with different maturities are consistent with an arbitrage equilibrium. The supply of new bonds and of new shares is determined by the government's and business's borrowing needs. Security demand reflects the rational choices of portfolio-managing households, following a version of the Decaluwé-Souissi model. These notions are illustrated with fictitious data in model EXTER-Debt. The full specification of the model is described, and simulation results are presented which demonstrate model properties.
    Keywords: CGE models, recursive dynamics, bond debt, financial assets
    JEL: C68 D58 G1 H63
    Date: 2007
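    The arbitrage condition mentioned in the abstract — bonds issued at different periods with different coupons and maturities priced so that all earn the same market yield — can be sketched with a standard discounted-cash-flow valuation. The face value, coupon rates and yield below are illustrative numbers, not values from the model:

```python
def bond_price(face, coupon_rate, maturity, r):
    """Price of a fixed-coupon bond when no-arbitrage forces every security
    to earn the same market yield r: discounted coupons plus discounted face."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1.0 + r) ** t for t in range(1, maturity + 1))
    return pv_coupons + face / (1.0 + r) ** maturity

r = 0.05                                           # common market yield
p_par = bond_price(100.0, 0.05, maturity=5, r=r)   # coupon == yield -> par
p_low = bond_price(100.0, 0.03, maturity=5, r=r)   # coupon < yield -> discount
```

A bond whose coupon equals the market yield prices at par (100), while an older low-coupon issue must trade at a discount; this is the sense in which prices of issues from different periods are mutually consistent.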
  7. By: Xiao-guang Zhang (Australian Productivity Commission); George Verikios (Department of Economics, The University of Western Australia)
    Abstract: The Doha ministerial declaration commits industrialised countries to liberalising access for least-developed countries (LDCs) to their markets. Preferential trade policies have diverse impacts on the initiating country and its trading partners. These effects are of concern to scholars and policy makers. We use Australia as a case study to quantify the direct and indirect effects of providing preferential access to LDC imports entering Australian markets, using a global general equilibrium model of the world economy. LDCs are projected to benefit; Australia is predicted to lose, reflecting the dominance of trade diversion over trade creation effects and adverse terms of trade effects. However, the magnitude of the adverse effect on Australia is small. If one were to view this initiative as an exercise in foreign aid, it suggests that Australia can provide a significant benefit to the poorest nations with which it trades, at almost no cost to itself.
    Keywords: economic development, numerical simulation, preferential trading arrangements, trade policy
    JEL: C68 F14 O24
    Date: 2006
  8. By: Djoni Hartono (Department of Economics, University of Indonesia); D.S. Priyarsono (Bogor Agriculture University); Tien Dung Nguyen (Ministry of Trade, Vietnam); Mitsuo Ezaki (Nagoya University)
    Abstract: Indonesia faces trade liberalization and regional economic integration through several free trade areas: bilateral, regional and multilateral FTAs. The aim of this paper is to analyze the impact of these international relationships on Indonesian economic growth, poverty and income distribution. Using a Global Computable General Equilibrium (GCGE) model, we run eighteen simulations to analyze the current and potential international relationships that Indonesia faces. In general, Indonesia gains significant benefits in terms of real GDP, output and welfare, except under an FTA with India. FTAs also raise the household income of the rural group more than that of the urban group. Unskilled labor gains more than skilled labor, and poor households benefit more than rich households in both rural and urban areas. These conditions imply that FTAs could potentially be a solution for national poverty reduction.
    Keywords: Economic integration, Indonesia
    JEL: F15
    Date: 2007–03
  9. By: Herrala, Risto (Bank of Finland Monetary Policy and Research/Monitoring); Kauko, Karlo (Bank of Finland Research)
    Abstract: This discussion paper presents a microsimulation model of household distress. We use logit analysis to estimate the extent to which a household’s risk of being financially distressed depends on net income after tax and loan servicing costs. The impact of assumed macroeconomic shocks on this net income concept is calculated at the household level. The microsimulation model is used to simulate both the number of distressed households and their aggregate debt in various macroeconomic scenarios. The simulations indicate that household credit risks to banks are relatively well contained.
    Keywords: financial stability; indebtedness; micro simulations; households
    JEL: D14 E47 G21 R29
    Date: 2007–05–08
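    The simulation logic described in the abstract — a logit probability of distress driven by net income after tax and loan servicing, aggregated over households under a macroeconomic shock — can be sketched with synthetic data. The coefficients and the income/debt distributions below are illustrative assumptions, not the Bank of Finland estimates:

```python
import numpy as np

rng = np.random.default_rng(7)
n_households = 10_000

# Hypothetical household-level data: net income after tax and loan servicing
# (in arbitrary units) and outstanding household debt.
net_income = rng.lognormal(0.7, 0.5, n_households)
debt = rng.lognormal(3.0, 1.0, n_households)

def distress_prob(income, b0=-1.0, b1=-1.5):
    """Logit probability of financial distress, decreasing in net income
    (b1 < 0). Coefficients are illustrative, not estimated."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * income)))

def simulate(shock=0.0):
    """Apply a proportional shock to net income; return the expected number
    of distressed households and their expected aggregate debt at risk."""
    p = distress_prob(net_income * (1.0 - shock))
    return p.sum(), (p * debt).sum()

n_base, debt_base = simulate(0.0)
n_shock, debt_shock = simulate(0.2)   # scenario: 20% fall in net income
```

Because distress probability falls with income, the shock scenario raises both the expected count of distressed households and their aggregate debt, which is the quantity the paper tracks as a measure of banks' credit risk.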
  10. By: Christie Smith (Reserve Bank of New Zealand)
    Abstract: This note illustrates the connections between the Hessians of numerical optimization problems, variance-covariance matrices for parameter vectors, and the influence that data mismeasurement may have on parameter estimates. Condition numbers provide a central guide to the sensitivity of common numerical problems to data mismeasurement. Examples are provided that clarify their importance. Two simple prescriptions arise from this analysis. First, data must be of an ‘appropriate’ scale. In some cases this means that the data need similar means and similar variances. Second, in numerical algorithms it is desirable to ascertain the condition number of the Hessian implied by the initial parameter values used for numerical optimization algorithms. Condition numbers are easy to compute and indicate whether the updates from an initial starting value are likely to be poor.
    JEL: C61 C63
    Date: 2007–03
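    The note's scaling prescription can be seen directly: for a quadratic objective the condition number of the Hessian is the ratio of its extreme eigenvalues, and rescaling the data to similar variances collapses it towards 1. A minimal numerical illustration (the quadratic objective is our own example, not one from the note):

```python
import numpy as np

# Hessian of f(x) = 0.5 * (x1**2 + 1e6 * x2**2): badly conditioned because
# the two "data series" differ in scale by a factor of a thousand.
H_bad = np.diag([1.0, 1.0e6])

# Rescaling the second variable by 1e-3 (giving the data a similar scale)
# transforms the Hessian as S H S and makes it perfectly conditioned.
S = np.diag([1.0, 1.0e-3])
H_good = S @ H_bad @ S

kappa_bad = np.linalg.cond(H_bad)    # ~1e6: Newton steps amplify mismeasurement
kappa_good = np.linalg.cond(H_good)  # 1: errors in the data are not amplified
```

Checking `np.linalg.cond` of the Hessian at the starting values, as the note recommends, costs one eigenvalue (or SVD) computation and flags exactly this kind of pathology before the optimizer wastes iterations.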

This nep-cmp issue is ©2007 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.