New Economics Papers
on Computational Economics
Issue of 2013‒05‒22
six papers chosen by



  1. An Efficient Sampling Method for Regression-Based Polynomial Chaos Expansion. By Zein, Samih
  2. Accelerated Multiplicative Updates and Hierarchical ALS Algorithms for Nonnegative Matrix Factorization. By Gillis, Nicolas
  3. Genetic algorithm-based topology optimization: Performance improvement through dynamic evolution of the population size. By Denies, Jonathan
  4. Japanese Manufacturing Facing the Power Crisis after Fukushima: A Dynamic Computable General Equilibrium Analysis with Foreign Direct Investment. By Nobuhiro Hosoe
  5. Efficiency of coordinate-descent methods on huge-scale optimization problems. By Nesterov, Yurii
  6. Generational Risk–Is It a Big Deal?: Simulating an 80-Period OLG Model with Aggregate Shocks. By Jasmina Hasanhodzic; Laurence J. Kotlikoff

  1. By: Zein, Samih
    Abstract: The polynomial chaos expansion (PCE) is an efficient numerical method for performing a reliability analysis. It relates the output of a nonlinear system to the uncertainty in its input parameters using a multidimensional polynomial approximation (the so-called PCE). Numerically, such an approximation can be obtained with a regression method and a suitable design of experiments, and the cost of the approximation depends on the size of that design. If the design of experiments is large and the system is modeled with a computationally expensive FEA (finite element analysis) model, the PCE approximation becomes infeasible. The aim of this work is to propose an algorithm that efficiently generates a design of experiments of a size defined by the user, in order to make the PCE approximation computationally feasible. The algorithm, a coupling between genetic algorithms and the Fedorov exchange algorithm, searches for the best design of experiments in the D-optimal sense for the PCE. The efficiency of our approach, in terms of accuracy and reduction of computational time, is compared with that of existing methods on analytical functions and finite-element-based functions.
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:ner:louvai:info:hdl:2078.1/115779&r=cmp
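    A hedged sketch of the two ingredients the abstract combines: a regression-based PCE, and a design of experiments chosen in the D-optimal sense, i.e. maximizing det(A'A) of the regression matrix A. A simple greedy exchange over a candidate set stands in for the paper's genetic-algorithm/Fedorov-exchange coupling, and the toy 1-D function g is a placeholder for an expensive FEA run.

      import numpy as np
      from numpy.polynomial.hermite_e import hermevander  # probabilists' Hermite basis

      rng = np.random.default_rng(0)
      deg, n_pts = 6, 20
      candidates = rng.standard_normal(500)        # candidate inputs, X ~ N(0,1)
      design = list(rng.choice(len(candidates), n_pts, replace=False))

      def logdet(idx):                             # D-optimality criterion log det(A'A)
          A = hermevander(candidates[idx], deg)
          sign, val = np.linalg.slogdet(A.T @ A)
          return val if sign > 0 else -np.inf

      improved = True
      while improved:                              # greedy point exchanges
          improved = False
          for i in range(n_pts):
              for j in set(range(len(candidates))) - set(design):
                  trial = design.copy(); trial[i] = j
                  if logdet(trial) > logdet(design):
                      design, improved = trial, True

      g = lambda x: np.sin(x) + 0.1 * x**3         # stand-in for the expensive model
      A = hermevander(candidates[design], deg)     # regression matrix on the design
      coef, *_ = np.linalg.lstsq(A, g(candidates[design]), rcond=None)
      x = rng.standard_normal(10_000)              # out-of-sample accuracy check
      print("RMS error:", np.sqrt(np.mean((hermevander(x, deg) @ coef - g(x)) ** 2)))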
  2. By: Gillis, Nicolas
    Abstract: Nonnegative matrix factorization (NMF) is a data analysis technique used in a great variety of applications such as text mining, image processing, hyperspectral data analysis, computational biology, and clustering. In this letter, we consider two well-known algorithms designed to solve NMF problems: the multiplicative updates of Lee and Seung and the hierarchical alternating least squares of Cichocki et al. We propose a simple way to significantly accelerate these schemes, based on a careful analysis of the computational cost needed at each iteration, while preserving their convergence properties. This acceleration technique can also be applied to other algorithms, which we illustrate on the projected gradient method of Lin. The efficiency of the accelerated algorithms is empirically demonstrated on image and text data sets and compares favorably with a state-of-the-art alternating nonnegative least squares algorithm.
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:ner:louvai:info:hdl:2078.1/108506&r=cmp
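    A hedged sketch of the baseline Lee-Seung multiplicative updates that the letter accelerates. The inner loops mimic, loosely rather than as an exact reproduction, Gillis's observation that with W fixed the matrices W'V and W'W can be computed once and reused for several cheap updates of H, and symmetrically for W; inner=1 recovers the standard scheme.

      import numpy as np

      def nmf_mu(V, r, iters=200, inner=3, eps=1e-9):
          """Multiplicative updates for V ~ W @ H with W, H >= 0."""
          m, n = V.shape
          rng = np.random.default_rng(0)
          W, H = rng.random((m, r)), rng.random((r, n))
          for _ in range(iters):
              WtV, WtW = W.T @ V, W.T @ W      # Gram matrices computed once ...
              for _ in range(inner):           # ... then reused by the inner updates
                  H *= WtV / (WtW @ H + eps)
              VHt, HHt = V @ H.T, H @ H.T
              for _ in range(inner):
                  W *= VHt / (W @ HHt + eps)
          return W, H

      V = np.random.default_rng(1).random((50, 40))   # nonnegative test matrix
      W, H = nmf_mu(V, r=5)
      print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))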
  3. By: Denies, Jonathan
    Abstract: Topology optimization tools that use a genetic algorithm as the optimizer are known to be very expensive in computation time. In this paper, we study an approach that improves the performance of such a tool by introducing a dynamic variation of the size of the children population during the optimization process. The method improves the performance of each generation by adapting the number of children created: a reproduction coefficient is assigned to each individual in the parent population, and from this coefficient the number of children allotted to each parent is computed. The number of evaluations therefore changes from generation to generation, and the tool can save evaluations in order to increase the number of iterations.
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:ner:louvai:info:hdl:2078.1/113955&r=cmp
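    A hedged sketch of the allocation rule the abstract describes. How the reproduction coefficient is computed is an assumption here (fitness-proportional); the abstract only states that such a coefficient exists and determines each parent's number of children, and that evaluations can be saved, here because the floor of the allocation leaves part of the budget unspent.

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(pop):                        # toy objective to maximize
          return -np.sum((pop - 0.5) ** 2, axis=1)

      parents = rng.random((10, 8))            # 10 parents, 8 design genes each
      budget = 30                              # children allowed this generation

      f = fitness(parents)
      coef = f - f.min() + 1e-12               # reproduction coefficients ...
      coef /= coef.sum()                       # ... normalized to sum to 1
      n_children = np.floor(coef * budget).astype(int)

      children = [np.clip(p + rng.normal(0, 0.1, 8), 0, 1)   # mutated copies
                  for p, n in zip(parents, n_children) for _ in range(n)]
      print("children created:", len(children), "of budget", budget)
      print("evaluations saved:", budget - len(children))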
  4. By: Nobuhiro Hosoe (National Graduate Institute for Policy Studies)
    Abstract: The Great East Japan Earthquake and the subsequent tsunami hit and destroyed the Fukushima Daiichi Nuclear Power Station. People lost trust in the safety of nuclear power plants, and the regulatory authority became reluctant to permit power companies to restart their nuclear power plants. To make up for the lost nuclear power supply, thermal power plants have been operated more intensively; they consume more fossil fuels, which raises power charges. This power crisis is anticipated to raise energy input costs and to force the domestic manufacturing industries to move out to, for example, China through foreign direct investment (FDI). Using a world trade computable general equilibrium model, with recursive dynamics installed to describe both domestic investment and FDI from Japan to China, we simulate the power crisis by assuming lost capital stock and intensified fossil fuel use by the power sector, and investigate its impact on the Japanese manufacturing sectors. We find that the power crisis would adversely affect several sectors that use power intensively but would benefit the transportation equipment, electric equipment, and machinery sectors, despite the common expectation that these sectors would undergo a so-called “hollowing-out.”
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:ngi:dpaper:13-01&r=cmp
  5. By: Nesterov, Yurii
    Abstract: In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial updates of the decision variables. For these methods, we prove global estimates for the rate of convergence. Surprisingly, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method and its accelerated variant. Our numerical tests confirm the high efficiency of this technique on problems of very large size.
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:ner:louvai:info:hdl:2078.1/121612&r=cmp
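    A hedged sketch of the core mechanism on a strongly convex quadratic f(x) = 0.5 x'Ax - b'x: each step updates one randomly chosen coordinate using its coordinate-wise Lipschitz constant L_i = A_ii, so the per-step cost is one column of A rather than a full matrix-vector product. Uniform sampling is used for simplicity; the paper also analyzes sampling weighted by the L_i and an accelerated variant.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1_000
      M = rng.standard_normal((n, n))
      A = M.T @ M / n + np.eye(n)          # symmetric positive definite
      b = rng.standard_normal(n)
      L = np.diag(A).copy()                # coordinate Lipschitz constants

      x = np.zeros(n)
      g = A @ x - b                        # gradient, maintained incrementally
      for _ in range(20 * n):
          i = rng.integers(n)              # pick a random coordinate
          step = g[i] / L[i]               # exact 1-D minimizer for a quadratic
          x[i] -= step
          g -= step * A[:, i]              # O(n) gradient update, no full A @ x
      print("residual:", np.linalg.norm(A @ x - b))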
  6. By: Jasmina Hasanhodzic (Department of Economics, Boston University); Laurence J. Kotlikoff (Department of Economics, Boston University)
    Abstract: The theoretical literature on generational risk assumes that this risk is large and that the government can effectively share it. To assess these assumptions, this paper simulates a realistically calibrated 80-period overlapping generations life-cycle model with aggregate productivity shocks. Previous solution methods could not handle large-scale OLG models such as ours due to the well-known curse of dimensionality. The prior state of the art is Krueger and Kubler (2004, 2006), whose sparse-grid method handles 10 to 30 periods depending on the model’s realism. Other methods used to solve large-scale, multi-period life-cycle models are tenuous because they rely on either local approximations (Rios-Rull, 1994, 1996) or summary statistics of state variables (Krusell and Smith, 1997, 1998). We build on a new algorithm by Judd, Maliar, and Maliar (2009, 2011), which restricts the state space to the model’s ergodic set. This limits the required computation and effectively banishes the dimensionality curse in models like ours. We find that intrinsic generational risk is quite small, that government policies can produce generational risk, and that bond markets can help share generational risk. We also show that a bond market can mitigate risk-inducing government policy. Our simulations produce very small equity premia for three reasons. First, there is relatively little intrinsic generational risk. Second, intrinsic generational risk hits both the young and the old in similar ways. And third, artificially inducing risk between the young and the old via government policy elicits more net supply as well as more net demand for bonds, by the young and the old respectively, leaving the risk premium essentially unchanged. Our results hold even in the presence of rare disasters and very high risk aversion. They echo Lucas’ (1987) and Krusell and Smith’s (1999) point that macroeconomic fluctuations are too small to have major microeconomic consequences.
    Keywords: Intergenerational Risk Sharing; Government Transfer Policies; Aggregate Shocks; Incomplete Markets; Stochastic Simulation
    JEL: E21 E24 E62 H55 H31 D91 D58 C63 C68
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:byu:byumcl:201301&r=cmp
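    A hedged toy illustration (not the authors' 80-period OLG code) of the Judd-Maliar-Maliar idea the paper builds on: alternate between simulating the economy under the current policy guess and refitting the policy by regression on the simulated, i.e. ergodic, states only. The stand-in model is a stochastic growth model with log utility and full depreciation, chosen because its exact policy k' = alpha*beta*z*k^alpha is known and can be checked against the fitted coefficients.

      import numpy as np

      np.random.seed(0)
      alpha, beta, rho, sigma = 0.36, 0.96, 0.95, 0.02
      T, damp = 2_000, 0.1
      eps = np.random.normal(0.0, sigma, T)   # shocks drawn once, reused each pass

      def basis(k, z):                        # log-linear basis for ln k'
          return np.column_stack([np.ones_like(k), np.log(k), np.log(z)])

      b = np.array([np.log(alpha * beta) - 0.2, 0.5, 0.5])   # rough initial guess
      for _ in range(500):
          k = np.empty(T + 1); z = np.empty(T)
          k[0], z[0] = 0.2, 1.0
          for t in range(T):                  # simulate states on the ergodic set
              k[t + 1] = np.exp(b[0] + b[1] * np.log(k[t]) + b[2] * np.log(z[t]))
              if t + 1 < T:
                  z[t + 1] = np.exp(rho * np.log(z[t]) + eps[t])
          c = np.maximum(z * k[:T] ** alpha - k[1:], 1e-10)  # consumption, clipped
          # Euler-equation target: at the true policy, E_t[y_t] = k_{t+1}.
          y = beta * (c[:-1] / c[1:]) * alpha * z[1:] * k[1:T] ** alpha
          b_new, *_ = np.linalg.lstsq(basis(k[:T-1], z[:-1]), np.log(y), rcond=None)
          if np.max(np.abs(b_new - b)) < 1e-8:
              break
          b = (1 - damp) * b + damp * b_new   # damped fixed-point update

      print("fitted coefficients:", b)
      print("exact coefficients :", [np.log(alpha * beta), alpha, 1.0])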

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.