nep-cmp New Economics Papers
on Computational Economics
Issue of 2014‒11‒22
nine papers chosen by



  1. Endogenous Savings Rate with Forward-Looking Households in a Recursive Dynamic CGE Model: Application to South Africa By André Lemelin
  2. How Diverse can Spatial Measures of Cultural Diversity be? Results from Monte Carlo Simulations on an Agent-Based Model By Daniel Arribas-Bel; Peter Nijkamp; Jacques Poot
  3. Accuracy Verification for Numerical Solutions of Equilibrium Models By Indrajit Mitra; Leonid Kogan
  4. Effects of Reducing Tariffs in The Democratic Republic of Congo (DRC): A CGE Analysis By Jean Luc Erero; Daniel Djauhari Pambudi; Lumengo Bonga-Bonga
  5. Why can sectoral shocks lead to sizable macroeconomic fluctuations? Assessing alternative theories by means of stochastic simulation with a general equilibrium model By Roberto Roson; Martina Sartori
  6. Global convergence and stability of a convolution method for numerical solution of BSDEs By Cody Blaine Hyndman; Polynice Oyono Ngou
  7. Stabilising expenditure rule in Poland – stochastic simulations for 2014-2040 By Korniluk, Dominik
  8. A Class of Convergent Parallel Algorithms for SVMs Training By Andrea Manno; Laura Palagi; Simone Sagratella
  9. Increasing the Opportunity of Live Kidney Donation by Matching for Two and Three Way Exchanges By Saidman, Susan L.; Roth, Alvin E.; Sonmez, Tayfun; Unver, M. Utku; Delmonico, Francis L.

  1. By: André Lemelin
    Abstract: In the vast majority of recursive dynamic CGE models, the savings rate is constant and exogenous. Intertemporal CGE models, by contrast, are solved simultaneously for all periods, and agents optimize intertemporally. But the theoretical consistency of intertemporal optimization is achieved only at the cost of more aggregated, less detailed models, due to computational limitations. In some applications, therefore, recursive dynamics should be preferred to intertemporal dynamics for practical reasons of computability. This paper presents a recursive dynamic CGE model in which households determine their savings rate by intertemporal optimization, solving a simplified form of their intertemporal problem. We call this approach "truncated rational expectations" (TRE). In the TRE framework, households have rational expectations for the current period and the following one. Accordingly, the model is solved simultaneously for two periods at a time, the current period t and the following one. Household (rational) expectations for period t+1 are given by the model solution. For subsequent periods, household expectations are formed by extrapolating from the t and t+1 solution values, assuming a constant rate of change. The TRE framework is implemented in a modified version of the Decaluwé et al. (2013) PEP-1-t model, and applied to South Africa, using a 2005 SAM by Davies and Thurlow (2011). Several simulations are run, with two variants of the 2005 SAM: the original one and a modified one with a high initial household savings rate. The results are compared with those of a static expectations model with intertemporal optimization, and of a fixed savings rate model. The main difference is that in the first two models the household savings rate is not constant, even in the BAU scenario. It is also responsive to changes in the rate of return on assets. An exogenous reduction in household wealth, on the other hand, has very little effect.
    Keywords: Computable general equilibrium models, recursive dynamic models, intertemporal optimization, household savings
    JEL: C6 D1 D58 D91
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:lvl:mpiacr:2014-06&r=cmp
  2. By: Daniel Arribas-Bel (VU University Amsterdam); Peter Nijkamp (VU University Amsterdam); Jacques Poot (The University of Waikato, New Zealand)
    Abstract: Cultural diversity is a complex and multi-faceted concept. Commonly used quantitative measures of the spatial distribution of culturally-defined groups (such as segregation, isolation or concentration indexes) often capture only one aspect of this distribution. The strengths and weaknesses of any measure can only be comprehensively assessed empirically. This paper provides evidence on the empirical properties of various spatial measures of cultural diversity by using Monte Carlo replications of agent-based modeling (MC-ABM) simulations with synthetic data assigned to a realistic and detailed geographical context of the city of Amsterdam. Schelling's classical segregation model is used as the theoretical engine to generate patterns of spatial clustering. The data inputs include the initial population, the number and shares of the various cultural groups, and their preferences with respect to co-location. Our MC-ABM data generating process produces output maps that enable us to assess the performance of various spatial measures of cultural diversity under a range of demographic compositions and preferences. We find that, as our simulated city becomes more diverse, stable residential location equilibria are only possible when minorities in particular become more tolerant. We test whether observed measures can be interpreted as revealing unobserved preferences of individuals for co-location with their own group, and find that the segregation and isolation measures of spatial diversity are non-decreasing in the preference for within-group co-location, whereas the Gini coefficient and concentration measures are not.
    Keywords: cultural diversity, spatial segregation, agent-based model, Monte Carlo simulation
    JEL: C63 J15 R23 Z13
    Date: 2014–07–03
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20140081&r=cmp
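    The Schelling engine driving these simulations can be sketched in a few lines of Python; the grid size, group shares and tolerance threshold below are illustrative choices, not the paper's Amsterdam calibration:

    ```python
    import random

    def unhappy(grid, pos, size, threshold):
        """An agent is unhappy if fewer than `threshold` of its occupied
        Moore neighbours (on a torus) belong to its own group."""
        x, y = pos
        group = grid[pos]
        neigh = [grid[((x + dx) % size, (y + dy) % size)]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        occupied = [g for g in neigh if g is not None]
        return bool(occupied) and sum(g == group for g in occupied) / len(occupied) < threshold

    def step(grid, size, threshold):
        """Move every unhappy agent to a random empty cell; return the move count."""
        empties = [p for p, g in grid.items() if g is None]
        moves = 0
        for pos in [p for p, g in grid.items() if g is not None]:
            if unhappy(grid, pos, size, threshold):
                new = random.choice(empties)
                empties.remove(new)
                empties.append(pos)       # the vacated cell becomes empty
                grid[new], grid[pos] = grid[pos], None
                moves += 1
        return moves

    random.seed(42)
    SIZE, THRESHOLD = 10, 0.5
    cells = [(x, y) for x in range(SIZE) for y in range(SIZE)]
    pop = [0] * 40 + [1] * 40 + [None] * 20   # two groups, 20% vacancy
    random.shuffle(pop)
    grid = dict(zip(cells, pop))

    rounds = 0
    while step(grid, SIZE, THRESHOLD) > 0 and rounds < 100:
        rounds += 1
    ```

    Making the threshold group-specific is the natural way to explore the abstract's finding that stable equilibria in a diverse city require more tolerant minorities.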
  3. By: Indrajit Mitra (MIT); Leonid Kogan (MIT)
    Abstract: We propose a simulation-based procedure for evaluating approximation accuracy of numerical solutions of general equilibrium models with heterogeneous agents. We measure the approximation accuracy by the magnitude of the loss suffered by the agents as a result of following suboptimal policies. Our procedure allows agents to have knowledge of the future paths of the economy under suitably imposed costs of such foresight. This method is very general, straightforward to implement, and can be used in conjunction with various solution algorithms. We illustrate our method in the context of the incomplete-markets model of Krusell and Smith (1998), where we apply it to two widely used approximation techniques: cross-sectional moment truncation and history truncation.
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:red:sed014:423&r=cmp
  4. By: Jean Luc Erero; Daniel Djauhari Pambudi; Lumengo Bonga-Bonga
    Abstract: In this paper, the effects of reducing tariffs are analysed through a Computable General Equilibrium (CGE) model of the DRC. The specific DRC Formal-Informal Model (DRCFIM) is a multi-sectoral computable general equilibrium model that captures the observed structure of the DRC’s formal and informal economies, as well as the numerous linkages or transmission channels connecting their various economic agents, such as investors, firms, traders, and the government. The parameters of the CGE equations are calibrated to observed data from a social accounting matrix (SAM). In particular, this study draws the attention of policy makers to a different employment outcome when tariff reduction is taken into consideration. Tariff reduction increases formal employment and output but hurts informal producers. It considerably increases the output and employment of the formal sector by raising import competition without providing further opportunities for the informal sector to access foreign export markets. Nonetheless, it induces productivity improvements when local producers survive import competition by seeking input-saving technologies and production practices. These findings highlight the importance of differentiating between the formal and informal sector impacts of the DRC’s socioeconomic policies.
    Keywords: informal sector, CGE model, Democratic Republic of Congo
    JEL: C68 D58 E26 F16
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:rza:wpaper:467&r=cmp
  5. By: Roberto Roson (Department of Economics, University Of Venice Cà Foscari); Martina Sartori (Department of Economics, University Of Milan, Bicocca)
    Abstract: Relatively small sectoral productivity shocks could lead to sizable macroeconomic variability. Whereas most contributions in the literature analyze the issue of aggregate sensitivity using simple general equilibrium models, a novel approach is proposed in this paper, based on stochastic simulations with a global CGE model. We estimate the statistical distribution of real GDP in 109 countries, assuming that the productivities of the industrial value added composites are independently and identically distributed random variables. We subsequently undertake a series of regressions in which the standard error of GDP is expressed as a function of variables measuring the “granularity” of the economy, the distribution of input-output trade flows, and the degree of foreign trade openness. We find that the variability of GDP induced by sectoral shocks is basically determined by the degree of industrial concentration, as measured by the Herfindahl index of industrial value added. The degree of centrality in inter-industrial connectivity, measured by the standard deviation of second-order degrees, is mildly significant, but it is also correlated with the industrial concentration index. After controlling for this correlation, we find that connectivity is statistically significant, although less so than granularity.
    Keywords: Aggregate volatility, input-output linkages, intersectoral network, sectoral shocks, granularity, stochastic simulation, computable general equilibrium models.
    JEL: C15 D58 E32 O57
    URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2014:16&r=cmp
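    The concentration measure doing most of the work here, the Herfindahl index of sectoral value added, is simple to compute; the sectoral vectors below are hypothetical, not data from the paper:

    ```python
    def herfindahl(value_added):
        """Herfindahl index of industrial concentration: the sum of squared
        sectoral shares. Close to 1/n for an even n-sector economy,
        close to 1 for a highly concentrated one."""
        total = sum(value_added)
        return sum((v / total) ** 2 for v in value_added)

    # Hypothetical four-sector economies:
    even = herfindahl([25, 25, 25, 25])        # = 0.25
    concentrated = herfindahl([85, 5, 5, 5])   # = 0.73
    ```

    On the paper's account, economies like the second one, where GDP is dominated by a few large sectors, transmit sectoral shocks into larger aggregate volatility.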
  6. By: Cody Blaine Hyndman; Polynice Oyono Ngou
    Abstract: The implementation of the convolution method for the numerical solution of backward stochastic differential equations (BSDEs) presented in Hyndman and Oyono Ngou (arXiv:1304.1783, 2013) uses a uniform space grid. Locally, this approach produces a truncation error, a space discretization error and an additional extrapolation error. Even if the extrapolation error is convergent in time, the resulting absolute error may be high at the boundaries of the uniform space grid. In order to solve this problem, we propose a tree-like grid for the space discretization which suppresses the extrapolation error leading to a globally convergent numerical solution for the BSDE. On this alternative grid the conditional expectations involved in the BSDE time discretization are computed using Fourier analysis and the fast Fourier transform (FFT) algorithm as in the initial implementation. The method is then extended to reflected BSDEs and numerical results demonstrating convergence are presented.
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1410.8595&r=cmp
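    The core operation, evaluating the conditional expectations of the time discretization as convolutions computed with the FFT, can be sketched for a single driftless diffusion step; the grid, volatility and time step below are hypothetical choices, not the authors' scheme:

    ```python
    import numpy as np

    # Uniform grid on [-L/2, L/2) and a hypothetical step of dX = sigma dW
    N, L = 256, 10.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = x[1] - x[0]
    sigma, dt = 1.0, 0.01

    # Gaussian transition density of the increment over one time step
    kernel = np.exp(-x**2 / (2 * sigma**2 * dt)) / np.sqrt(2 * np.pi * sigma**2 * dt)

    f = np.cos(x)  # a value-function slice to be conditioned on X_t

    # Circular convolution via FFT approximates E[f(X_{t+dt}) | X_t = x_i];
    # ifftshift moves the kernel's peak from the grid centre to index 0.
    cond = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(np.fft.ifftshift(kernel)))) * dx
    # Away from the grid boundary this matches the closed form
    # cos(x) * exp(-sigma**2 * dt / 2).
    ```

    The wrap-around of the circular convolution near the edges of the uniform grid illustrates the kind of boundary error the abstract discusses, which the paper's tree-like grid is designed to suppress.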
  7. By: Korniluk, Dominik (Ministry of Finance in Poland)
    Abstract: The stabilising expenditure rule (SER) imposed on the general government (GG) sector in Poland has been binding since 2014. Under this rule, about 90% of GG expenditure will grow in line with medium-term real GDP, or more slowly if there is an excessive debt or deficit, or if the balance does not meet the medium-term objective. This paper shows how the SER affects the most important public finance indicators over the period 2014-2040. It also presents the consequences of the lowered debt thresholds in the SER's correction mechanism due to the pension reform. Finally, future fiscal policy conducted under the new rule is simulated and assessed.
    Keywords: stabilising expenditure rule; stochastic simulations; debt thresholds; fiscal policy cyclicality
    JEL: C53 E62
    Date: 2014–09–01
    URL: http://d.repec.org/n?u=RePEc:ris:mfplwp:0019&r=cmp
  8. By: Andrea Manno (Department of Computer, Control and Management Engineering, Universita' degli Studi di Roma "La Sapienza"); Laura Palagi (Department of Computer, Control and Management Engineering, Universita' degli Studi di Roma "La Sapienza"); Simone Sagratella (Department of Computer, Control and Management Engineering, Universita' degli Studi di Roma "La Sapienza")
    Abstract: The training of Support Vector Machines may be a very difficult task when dealing with very large datasets. The memory requirements and running time of SVM algorithms grow rapidly with the size of the data. To overcome these drawbacks, many parallel algorithms have been implemented, but they lack convergence properties. In this work we propose a generic parallel algorithmic scheme for SVMs and we state its asymptotic global convergence under suitable conditions. We outline how these assumptions can be satisfied in practice and we suggest various specific implementations exploiting the adaptable structure of the algorithmic model.
    Keywords: Support Vector Machines ; Machine Learning ; Parallel Computing ; Decomposition Techniques ; Huge Data
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:aeg:report:2014-17&r=cmp
  9. By: Saidman, Susan L.; Roth, Alvin E.; Sonmez, Tayfun; Unver, M. Utku; Delmonico, Francis L.
    Abstract: Background: To expand the opportunity for paired live donor kidney transplantation, computerized matching algorithms have been designed to identify maximal sets of compatible donor/recipient pairs from a registry of incompatible pairs submitted as candidates for transplantation. Methods: Demographic data of patients who had been evaluated for live donor kidney transplantation but found to be incompatible with their potential donor (because of ABO blood group or positive crossmatch) were submitted for computer analysis and matching. Data included ABO and HLA types of donor and recipient, %PRA and specificity of recipient alloantibody, donor/recipient relationship, and the reason the donor was incompatible. The data set used for the initial simulation included 29 patients with one donor each and 16 patients with multiple donors for a total of 45 patients and 68 donor/patient pairs. In addition, a simulation based on OPTN/SRTR data was used to further assess the practical importance of multiple exchange combinations. Results: If only exchanges involving two patient-donor pairs were allowed, a maximum of 8 patient-donor pairs in the data set could exchange kidneys. If 3-way exchanges were also allowed, a maximum of 11 pairs could exchange kidneys. Simulations with OPTN/SRTR data demonstrate that the increase in the number of potential transplants if 3-way exchanges are allowed is robust, and does not depend on the particular patients in our sample. Conclusions: A computerized matching protocol can be used to identify donor/recipient pairs from a registry of incompatible pairs who can potentially enter into donor exchanges that otherwise would not readily occur.
    Keywords: Kidney Exchange
    JEL: C78
    Date: 2014–09–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:58247&r=cmp
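    The two-way exchange logic of the Results can be illustrated with a toy matching sketch; the ABO-only compatibility rule and the five hypothetical donor/patient pairs below are simplifications (the actual protocol also uses crossmatch and %PRA data):

    ```python
    from itertools import combinations

    def abo_compatible(donor, patient):
        """Simplified ABO rule only: O donates to anyone; AB receives from anyone;
        otherwise donor and patient types must match."""
        return donor == 'O' or patient == 'AB' or donor == patient

    def max_two_way(pairs):
        """pairs: list of (donor_type, patient_type) incompatible pairs.
        An edge joins pairs i and j when each donor is compatible with the
        other pair's patient. Brute-force maximum matching (fine for toy sizes)."""
        n = len(pairs)
        edges = [(i, j) for i, j in combinations(range(n), 2)
                 if abo_compatible(pairs[i][0], pairs[j][1])
                 and abo_compatible(pairs[j][0], pairs[i][1])]

        def search(avail, es):
            best = []
            for k, (i, j) in enumerate(es):
                if i in avail and j in avail:
                    cand = [(i, j)] + search(avail - {i, j}, es[k + 1:])
                    if len(cand) > len(best):
                        best = cand
            return best

        return search(set(range(n)), edges)

    # Hypothetical registry of (donor, patient) ABO types:
    pairs = [('A', 'B'), ('B', 'A'), ('O', 'AB'), ('A', 'A'), ('AB', 'O')]
    matching = max_two_way(pairs)  # -> [(0, 1), (2, 3)]
    ```

    Allowing 3-way exchanges amounts to also searching over directed cycles of length three (donor i to patient j, donor j to patient k, donor k to patient i), which is how the paper obtains the additional transplants reported above.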

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.