New Economics Papers
on Computational Economics
Issue of 2005‒02‒20
eight papers chosen by



  1. Meta-Heuristics for Dynamic Lot Sizing: A Review and Comparison of Solution Approaches By Jans, R.; Degraeve, Z.
  2. Weird Ties? Growth, Cycles and Firm Dynamics in an Agent-Based Model with Financial-Market Imperfections By Mauro Napoletano; Domenico Delli Gatti; Giorgio Fagiolo; Mauro Gallegati
  3. Calculating and Using Second Order Accurate Solutions of Discrete Time Dynamic Equilibrium Models By Henry Kim
  4. Computational Analysis of the Menu of U.S.-Japan Trade Policies By Drusilla K. Brown; Kozo Kiyota; Robert M. Stern
  5. Incorporating sequential information into traditional classification models by using an element/position-sensitive SAM By A. PRINZIE; D. VAN DEN POEL
  6. Solving SDP's in non-commutative algebras part I: the dual-scaling algorithm By Klerk, E. de; Pasechnik, D.V.
  7. Using a Structural Retirement Model to Simulate the Effect of Changes to the OASDI and Medicare Programs By John Bound; Todd Stinebrickner; Timothy Waidman
  8. Applying perturbation methods to incomplete market models with exogenous borrowing constraints By Henry Kim

  1. By: Jans, R.; Degraeve, Z. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University)
    Abstract: Proofs from complexity theory as well as computational experiments indicate that most lot sizing problems are hard to solve. Because these problems are so difficult, various solution techniques have been proposed to solve them. In the past decade, meta-heuristics such as tabu search, genetic algorithms and simulated annealing have become popular and efficient tools for solving hard combinatorial optimization problems. We review the various meta-heuristics that have been specifically developed to solve lot sizing problems, discussing their main components such as representation, evaluation, neighborhood definition and genetic operators. Further, we briefly review other solution approaches, such as dynamic programming, cutting planes, Dantzig-Wolfe decomposition, Lagrangian relaxation and dedicated heuristics. This allows us to compare these techniques. Understanding their respective advantages and disadvantages gives insight into how we can integrate elements from several solution approaches into more powerful hybrid algorithms. Finally, we discuss general guidelines for computational experiments and illustrate these with several examples.
    Keywords: dynamic lot sizing, algorithms, meta-heuristics, Dantzig-Wolfe decomposition, reformulations
    Date: 2004–06–24
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:30001465&r=cmp
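The meta-heuristics the survey covers can be made concrete with a small sketch. Below is a simulated annealing routine for a toy single-item uncapacitated lot sizing instance; the solution encoding (a binary setup vector), the cost function, and all parameter values (setup cost K, holding cost h, cooling schedule) are illustrative assumptions, not taken from the paper.

```python
import math
import random

def lot_cost(y, d, K=50.0, h=1.0):
    """Total setup + holding cost for setup pattern y (y[0] must be 1).

    Each setup period produces exactly the demand up to the next setup,
    so inventory is zero at the start of every setup period.
    """
    T = len(d)
    cost, inv = 0.0, 0.0
    for t in range(T):
        if y[t]:
            cost += K
            nxt = next((s for s in range(t + 1, T) if y[s]), T)
            inv = sum(d[t:nxt])          # produce demand until next setup
        inv -= d[t]
        cost += h * inv                  # holding cost on end-of-period stock
    return cost

def anneal(d, iters=5000, T0=100.0, alpha=0.999, seed=0):
    """Simulated annealing over setup patterns: flip one random setup bit,
    accept worse moves with Metropolis probability, cool geometrically."""
    rng = random.Random(seed)
    n = len(d)
    y = [1] + [0] * (n - 1)              # start from a single big lot
    best = cur = lot_cost(y, d)
    best_y, temp = y[:], T0
    for _ in range(iters):
        cand = y[:]
        j = rng.randrange(1, n)          # never flip the mandatory first setup
        cand[j] ^= 1
        c = lot_cost(cand, d)
        if c < cur or rng.random() < math.exp((cur - c) / temp):
            y, cur = cand, c
            if cur < best:
                best, best_y = cur, y[:]
        temp *= alpha
    return best_y, best
```

A neighborhood here is "all patterns one bit-flip away"; richer moves (shifting a lot, merging two lots) are what dedicated lot sizing meta-heuristics typically add.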
  2. By: Mauro Napoletano, Domenico Delli Gatti, Giorgio Fagiolo, Mauro Gallegati
    Abstract: This paper studies how the interplay between technological shocks and financial variables shapes the properties of macroeconomic dynamics. Most of the existing literature has based the analysis of aggregate macroeconomic regularities on the representative agent hypothesis (RAH). However, recent empirical research on longitudinal micro data sets has revealed a picture of business cycles and growth dynamics that is very far from the homogeneous one postulated in models based on the RAH. In this work, we take a preliminary step toward bridging this empirical evidence with theoretical explanations. We propose an agent-based model with heterogeneous firms, which interact in an economy characterized by financial-market imperfections and costly adoption of new technologies. Monte-Carlo simulations show that the model is able to jointly replicate a wide range of stylised facts characterizing both macroeconomic time series (e.g. output and investment) and firms' microeconomic dynamics (e.g. size, growth, and productivity).
    Keywords: Financial Market Imperfections, Business Fluctuations, Economic Growth, Firm Size, Firm Growth, Productivity Growth, Agent-Based Models.
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2005/03&r=cmp
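As a rough illustration of the modeling approach (not the authors' model), the sketch below simulates a cross-section of heterogeneous firms whose capital is capped at a multiple of net worth, a crude stand-in for a financial-market imperfection; all functional forms and parameter values are invented for illustration.

```python
import random

def simulate(n_firms=100, periods=200, leverage=2.0, r=0.3, seed=1):
    """Heterogeneous firms; each period a firm's capital is capped at
    `leverage` times its net worth, so idiosyncratic productivity shocks
    propagate through balance sheets.  Aggregate output is the sum over
    the cross-section, not a representative agent."""
    rng = random.Random(seed)
    worth = [rng.uniform(0.5, 1.5) for _ in range(n_firms)]
    output_path = []
    for _ in range(periods):
        total = 0.0
        for i in range(n_firms):
            a = rng.lognormvariate(0.0, 0.2)   # idiosyncratic shock
            k = leverage * worth[i]            # financially constrained capital
            y = a * k ** 0.7                   # decreasing returns
            total += y
            # retained earnings update net worth; a floor avoids exit
            worth[i] = max(0.1, worth[i] + 0.5 * (y - r * k))
        output_path.append(total)
    return output_path, worth
```

Even this toy version produces aggregate fluctuations from purely idiosyncratic shocks, because financially weak firms amplify bad draws, which is the qualitative mechanism the abstract describes.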
  3. By: Henry Kim
    Abstract: We describe an algorithm for calculating second order approximations to the solutions to nonlinear stochastic rational expectations models. The paper also explains methods for using such an approximate solution to generate forecasts, simulated time paths for the model, and evaluations of expected welfare differences across different versions of a model. The paper gives conditions for local validity of the approximation that allow for disturbance distributions with unbounded support and allow for non-stationarity of the solution process.
    URL: http://d.repec.org/n?u=RePEc:tuf:tuftec:0505&r=cmp
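The gain from second-order accuracy can be illustrated on a scalar toy function: a quadratic Taylor expansion around a "steady state" captures curvature that a linear (first-order) approximation misses, which is what makes second-order solutions usable for welfare comparisons. The function and expansion point below are arbitrary illustrations, not the paper's algorithm.

```python
def taylor2(f, f1, f2, x0):
    """First- and second-order Taylor approximations of f around x0,
    given its first (f1) and second (f2) derivatives."""
    def lin(x):
        return f(x0) + f1(x0) * (x - x0)
    def quad(x):
        return lin(x) + 0.5 * f2(x0) * (x - x0) ** 2
    return lin, quad

# toy 'policy function' g(k) = k**0.35, expanded around steady state k0 = 1
g  = lambda k: k ** 0.35
g1 = lambda k: 0.35 * k ** (-0.65)
g2 = lambda k: 0.35 * (-0.65) * k ** (-1.65)
lin, quad = taylor2(g, g1, g2, 1.0)

# away from the steady state, the quadratic term shrinks the error
k = 1.3
err1 = abs(g(k) - lin(k))
err2 = abs(g(k) - quad(k))
```

In the full stochastic setting the second-order term also picks up the variance of the shocks, which a first-order (certainty-equivalent) solution discards entirely.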
  4. By: Drusilla K. Brown; Kozo Kiyota; Robert M. Stern
    Date: 2004–12
    URL: http://d.repec.org/n?u=RePEc:hst:hstdps:d04-63&r=cmp
  5. By: A. PRINZIE; D. VAN DEN POEL
    Abstract: The inability to capture sequential patterns is a typical drawback of predictive classification methods. This drawback might be overcome by modeling sequential independent variables with sequence-analysis methods. Combining classification methods with sequence-analysis methods enables classification models to incorporate non-time-varying as well as sequential independent variables. In this paper, we precede a classification model by an element/position-sensitive Sequence-Alignment Method (SAM) followed by the asymmetric, disjoint Taylor-Butina clustering algorithm, with the aim of distinguishing clusters with respect to the sequential dimension. We illustrate this procedure on a customer-attrition model used as a decision-support system for customer retention at an International Financial-Services Provider (IFSP). The binary customer-churn classification model following the new approach significantly outperforms an attrition model which incorporates the sequential information directly into the classification method.
    Keywords: sequence analysis, binary classification methods, Sequence-Alignment Method, asymmetric clustering, customer-relationship management, churn analysis
    Date: 2005–02
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:05/292&r=cmp
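A minimal sketch of what an element/position-sensitive alignment distance might look like (the actual SAM cost structure in the paper may differ): a Needleman-Wunsch-style dynamic program in which mismatch and gap costs are scaled by position weights, so that, for example, differences early in a customer's purchase sequence count more than late ones.

```python
def sam_distance(s, t, w=None):
    """Alignment distance between sequences s and t with element/position-
    sensitive costs.  Illustrative choice: position weights 1/(1+i), so
    early events are weighted more heavily."""
    n, m = len(s), len(t)
    if w is None:
        w = [1.0 / (1 + i) for i in range(max(n, m) or 1)]
    gap, sub = 1.0, 1.5
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + gap * w[i - 1]
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + gap * w[j - 1]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            wc = w[max(i, j) - 1]                     # position weight
            cost = 0.0 if s[i - 1] == t[j - 1] else sub * wc
            D[i][j] = min(D[i - 1][j] + gap * w[i - 1],   # delete
                          D[i][j - 1] + gap * w[j - 1],   # insert
                          D[i - 1][j - 1] + cost)         # (mis)match
    return D[n][m]
```

A pairwise distance matrix built from such a measure is what the clustering step (Taylor-Butina in the paper) would then partition into sequence-homogeneous customer groups.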
  6. By: Klerk, E. de; Pasechnik, D.V. (Tilburg University, Center for Economic Research)
    Abstract: Semidefinite programming (SDP) may be viewed as an extension of linear programming (LP), and most interior point methods (IPMs) for LP can be extended to solve SDP problems. However, it is far more difficult to exploit data structures (especially sparsity) in the SDP case. In this paper we will look at the data structure where the SDP data matrices lie in a low-dimensional matrix algebra. This data structure occurs in several applications, including the lower bounding of the stability number of certain graphs and of the crossing number of complete bipartite graphs. We will show that one can reduce the linear algebra involved in an iteration of an IPM to matrices whose size is only the dimension of the matrix algebra. In other words, the original sizes of the data matrices do not appear in the computational complexity bound. In particular, we will work out the details for the dual-scaling algorithm, since a dual method is most suitable for the types of applications we have in mind.
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200517&r=cmp
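The dimension-reduction idea can be seen in the simplest commutative special case, the algebra of diagonal matrices: there, positive semidefiniteness of X is just elementwise nonnegativity of the diagonal, so an SDP over that algebra collapses to a problem in n variables rather than n². The toy below is only an illustration of that principle, not the paper's dual-scaling algorithm.

```python
def solve_diagonal_sdp(c):
    """min <C, X>  s.t.  trace(X) = 1,  X >= 0 (PSD),
    with all data in the algebra of n x n diagonal matrices (c = diag of C).

    PSD-ness of diagonal X is elementwise nonnegativity, so the SDP is an
    LP over the n diagonal entries; its optimum puts all mass on min(c).
    """
    j = min(range(len(c)), key=lambda i: c[i])
    x = [0.0] * len(c)
    x[j] = 1.0                     # optimal diagonal of X
    return x, c[j]                 # solution and optimal value
```

For a general low-dimensional algebra the reduction is less trivial (the algebra need not be commutative), but the same point holds: all IPM linear algebra can be carried out in the algebra's own dimension.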
  7. By: John Bound (University of Michigan and NBER); Todd Stinebrickner (University of Western Ontario); Timothy Waidman (Urban Institute)
    Abstract: In this paper, we specify a dynamic programming model that addresses the interplay among health, financial resources, and the labor market behavior of men in the later part of their working lives. The model is estimated using data from the Health and Retirement Study. We use the model to simulate the impact on behavior of raising the normal retirement age, eliminating early retirement altogether, and introducing universal health insurance.
    Date: 2004–10
    URL: http://d.repec.org/n?u=RePEc:mrr:papers:wp091&r=cmp
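The flavor of such a dynamic programming model can be sketched with a stylized backward-induction work/retire problem; the declining wage profile, benefit levels, and early-retirement penalty below are invented for illustration and are far simpler than the estimated structural model.

```python
def retirement_dp(T=10, benefit=0.4, leisure=0.5, beta=0.96,
                  normal_age=5, penalty=0.25):
    """Backward induction on a stylized work/retire choice.  Wages decline
    with age; retiring before normal_age carries a permanent benefit
    reduction (`penalty`); retirement is absorbing.  Returns, per age,
    True if working is optimal."""
    wage = [1.55 - 0.1 * t for t in range(T)]    # illustrative age profile

    def v_ret(t, b):                             # value of retiring at t
        return sum(beta ** (s - t) * (b + leisure) for s in range(t, T))

    work, v_next = [], 0.0
    for t in reversed(range(T)):
        b = benefit if t >= normal_age else benefit * (1 - penalty)
        v_w = wage[t] + beta * v_next            # keep working one more year
        v_r = v_ret(t, b)                        # retire now, benefit b
        work.append(v_w > v_r)
        v_next = max(v_w, v_r)
    work.reverse()
    return work
```

Policy experiments of the kind the abstract describes amount to re-solving this recursion with a shifted `normal_age` or `penalty` and comparing the implied retirement ages.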
  8. By: Henry Kim
    Abstract: This paper solves an incomplete market model with an infinite number of agents and exogenous borrowing constraints, as described in den Haan, Judd and Juillard (2004). We apply the idea of “barrier methods” to convert the optimization problem with borrowing constraints expressed as inequalities into a problem with equality constraints, and the converted model is solved by a second-order perturbation method. The simulation results for impulse responses and second moments match the standard features of incomplete market models. The accuracy of the solution is within a reasonable range but decreases significantly when the economy is near the borrowing limit or moves away from the steady state.
    Keywords: perturbation, barrier method, borrowing constraint, incomplete market, accuracy.
    JEL: C63 C68 C88 F41
    URL: http://d.repec.org/n?u=RePEc:tuf:tuftec:0504&r=cmp
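A minimal sketch of the barrier idea on a two-period toy problem (not the paper's model): the borrowing constraint s >= -b_limit is replaced by a log-barrier term mu*log(s + b_limit), so the first-order condition becomes smooth and could in principle be expanded by perturbation; here it is simply solved by bisection. Utility, prices, and mu are illustrative assumptions.

```python
def solve_savings(y1, y2, R=1.03, beta=0.96, b_limit=0.5, mu=1e-3):
    """Choose savings s to maximize log(y1 - s) + beta*log(y2 + R*s)
    subject to s >= -b_limit, with the inequality replaced by the
    barrier term mu*log(s + b_limit)."""
    def foc(s):
        # derivative of log(y1-s) + beta*log(y2+R*s) + mu*log(s+b_limit)
        return -1.0 / (y1 - s) + beta * R / (y2 + R * s) + mu / (s + b_limit)

    lo, hi = -b_limit + 1e-9, y1 - 1e-9
    for _ in range(100):               # foc is strictly decreasing in s
        mid = 0.5 * (lo + hi)
        if foc(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

When future income is high the agent wants to borrow past the limit; the barrier keeps the solution just inside it, which also hints at why accuracy degrades near the borrowing limit, where the barrier term dominates the expansion.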

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.