New Economics Papers
on Computational Economics
Issue of 2014‒03‒22
ten papers chosen by

  1. An unsupervised parallel genetic cluster algorithm for graphics processing units By Dieter Hendricks; Dariusz Cieslakiewicz; Diane Wilcox; Tim Gebbie
  2. Learning the Ramsey outcome in a Kydland & Prescott economy By Jasmina ARIFOVIC; Murat YILDIZOGLU
  3. Fiscal and monetary policies in complex evolving economies By Fagiolo G.; Treibich T.G.; Roventini A.; Napoletano M.; Dosi G.
  4. Introducing Melitz-Style Firm Heterogeneity in CGE Models: Technical Aspects and Implications By Roberto Roson; Kazuhiko Oyamada
  5. The diffusion of electric vehicles: An agent-based microsimulation By McCoy, Daire; Lyons, Sean
  6. A CGE Analysis on a Rate-based Policy for Climate Change Mitigation By Shinya Kato; Kenji Takeuchi
  7. Convergent subgradient methods for nonsmooth convex minimization By NESTEROV, Yu.; SHIKHMAN, Vladimir
  8. Agent-based modeling of knowledge transfer within social networks By Widad Guechtouli
  9. Network characteristics enabling efficient coordination: A simulation study By Uyttendaele P.; Khan A.; Peeters R.J.A.P.; Thuijsman F.
  10. Google Flu Trends Still Appears Sick: An Evaluation of the 2013–2014 Flu Season By Lazer, David; Ryan Kennedy; Gary King; Alessandro Vespignani

  1. By: Dieter Hendricks; Dariusz Cieslakiewicz; Diane Wilcox; Tim Gebbie
    Abstract: During times of stock market turbulence, monitoring the intraday clustering behaviour of financial instruments allows one to better understand market characteristics and systemic risks. While genetic algorithms provide a versatile methodology for identifying such clusters, serial implementations are computationally intensive and can take a long time to converge to the global optimum. We implement a Master-Slave parallel genetic algorithm (PGA) with a Marsili and Giada log-likelihood fitness function to identify clusters within stock correlation matrices. We utilise the Nvidia Compute Unified Device Architecture (CUDA) programming model to implement a PGA and visualise the results using minimal spanning trees (MSTs). We demonstrate that the CUDA PGA implementation runs significantly faster than the test case implementation of a comparable serial genetic algorithm. This, combined with fast online intraday correlation matrix estimation from high frequency data for cluster identification, may enhance near-real-time risk assessment for financial practitioners.
    Date: 2014–03
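The clustering pipeline the abstract describes can be illustrated in serial form. Below is a minimal Python/NumPy sketch, not the authors' CUDA Master-Slave PGA: a toy elitist genetic algorithm searches cluster assignments of a correlation matrix, scored by a Giada–Marsili-style log-likelihood. The GA parameters (`pop`, `gens`, `mut`) and the block-diagonal toy matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def giada_marsili_likelihood(C, labels):
    """Giada-Marsili log-likelihood of a cluster assignment for
    correlation matrix C (higher is better; singletons contribute 0)."""
    L = 0.0
    for s in np.unique(labels):
        idx = np.where(labels == s)[0]
        n = len(idx)
        if n <= 1:
            continue
        c = C[np.ix_(idx, idx)].sum()   # internal correlation, incl. diagonal
        if c <= n or c >= n * n:        # guard against log of a non-positive value
            continue
        L += np.log(n / c) + (n - 1) * np.log((n * n - n) / (n * n - c))
    return 0.5 * L

def serial_ga(C, k=4, pop=30, gens=200, mut=0.1):
    """Tiny serial, elitist GA: each chromosome assigns assets to k clusters."""
    n = C.shape[0]
    population = rng.integers(0, k, size=(pop, n))
    for _ in range(gens):
        fit = np.array([giada_marsili_likelihood(C, ind) for ind in population])
        elite = population[np.argsort(fit)[::-1][: pop // 2]]   # selection
        children = elite.copy()
        flip = rng.random(children.shape) < mut                 # mutation
        children[flip] = rng.integers(0, k, size=flip.sum())
        population = np.vstack([elite, children])
    fit = np.array([giada_marsili_likelihood(C, ind) for ind in population])
    return population[np.argmax(fit)], fit.max()

# Toy correlation matrix: two blocks of four strongly co-moving assets
B = np.full((4, 4), 0.7) + 0.3 * np.eye(4)
C = np.kron(np.eye(2), B)
labels, score = serial_ga(C, k=2)
```

In the paper's setting the fitness evaluations of the population are what the Master-Slave design parallelizes across CUDA threads; the serial loop above is the part that dominates runtime.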
  2. By: Jasmina ARIFOVIC; Murat YILDIZOGLU
    Abstract: We study learning in the Kydland and Prescott environment. Our policy maker evaluates its potential strategies regarding the announced and the actual inflation rate using its mental model. This model is forward looking and adaptive at the same time. There are two types of agents: believers, who set their inflation forecast equal to the announced inflation, and nonbelievers, who form a static optimal forecast coupled with a forecast error correction mechanism. Our results show that the economy can reach near-Ramsey outcomes most of the time. In the absence of believers, the economies almost always converge to the Ramsey outcome. In their experiments with human subjects, Arifovic and Sargent (2003) showed that experimental economies reach and stay close to the Ramsey outcome most of the time, giving support to the 'just do it' policy recommendation. In light of the experimental findings, our model is of particular interest as it is the only agent-based or adaptive learning model that consistently selects the Ramsey outcome.
    Keywords: learning in the Kydland-Prescott environment, artificial neural networks, Ramsey outcome
    JEL: E50 C45 C72 D60
    Date: 2014
  3. By: Fagiolo G.; Treibich T.G.; Roventini A.; Napoletano M.; Dosi G. (GSBE)
    Abstract: In this paper we explore the effects of alternative combinations of fiscal and monetary policies under different income distribution regimes. In particular, we aim at evaluating fiscal rules in economies subject to banking crises and deep recessions. We do so using an agent-based model populated by heterogeneous capital- and consumption-good firms, heterogeneous banks, workers/consumers, a central bank and a government. We show that the model is able to reproduce a wide array of macro and micro empirical regularities, including stylized facts concerning financial dynamics and banking crises. Simulation results suggest that the most appropriate policy mix to stabilize the economy requires unconstrained anti-cyclical fiscal policies, where automatic stabilizers are free to dampen business cycle fluctuations, and a monetary policy that also targets employment. Instead, discipline-guided fiscal rules such as the Stability and Growth Pact or the Fiscal Compact in the eurozone always depress the economy, without improving public finances, even when escape clauses in case of recessions are considered. Consequently, austerity policies appear to be in general self-defeating. Furthermore, we show that the negative effects of austere fiscal rules are magnified by conservative monetary policies focused on inflation stabilization only. Finally, the effects of monetary and fiscal policies become sharper as the level of income inequality increases.
    Keywords: Computational Techniques; Simulation Modeling; Business Fluctuations; Cycles; Monetary Policy; Financial Crises; Banks; Depository Institutions; Micro Finance Institutions; Mortgages;
    JEL: C63 E32 E52 G01 G21
    Date: 2014
  4. By: Roberto Roson (Department of Economics, University Of Venice Cà Foscari); Kazuhiko Oyamada (Institute of Developing Economies, Japan External Trade Organization)
    Abstract: This paper discusses which changes in the architecture of a standard CGE model are needed in order to introduce effects of trade and firm heterogeneity à la Melitz. Starting from a simple specification with partial equilibrium, one primary production factor and one industry, the framework is progressively enriched by including multiple factors, intermediate inputs, multiple industries (with a mixture of differentiated and non-differentiated products), and a real general equilibrium closure. Therefore, the model structure is gradually made similar to a full-fledged CGE. Calibration techniques are discussed, and a number of changes from the original Melitz’s assumptions are also proposed. It is argued that the inclusion of industries with heterogeneous firms in a CGE framework does not simply make the Melitz model “operational”, but allows accounting for structural effects that may significantly affect the nature, meaning and implications of the model results.
    Keywords: Computable General Equilibrium Models, Melitz, Firm Heterogeneity, International Trade.
    JEL: C63 C68 D51 D58 F12 L11
  5. By: McCoy, Daire; Lyons, Sean
    Abstract: We implement an agent-based, threshold model of innovation diffusion to simulate the adoption of electric vehicles among Irish households. We use detailed survey microdata to develop a nationally representative, heterogeneous agent population. We then calibrate our agent population to reflect the aggregate socioeconomic characteristics of a number of geographic areas of interest. Our data allow us to create agents with socioeconomic characteristics and environmental preferences. Agents are placed within social networks through which the diffusion process propagates. We find that even if overall adoption is relatively low, mild peer effects could result in large clusters of adopters forming in certain areas. This may put pressure on electricity distribution networks in these areas.
    Keywords: Electric vehicles; Agent-based modelling; Spatial microsimulation
    JEL: C63 D1 O33 R41
    Date: 2014–03–18
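The threshold-diffusion mechanism can be illustrated compactly. The following Python sketch is a generic illustration, not the authors' calibrated Irish microsimulation: agents on a toy ring network adopt once the share of adopting neighbours reaches a heterogeneous threshold. The network, threshold range, and seed set are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_diffusion(adj, thresholds, seeds, steps=20):
    """Threshold diffusion: an agent adopts once the share of its
    network neighbours who have adopted reaches its threshold."""
    n = len(adj)
    adopted = np.zeros(n, dtype=bool)
    adopted[seeds] = True
    deg = adj.sum(axis=1)
    for _ in range(steps):
        share = (adj @ adopted) / np.maximum(deg, 1)   # adopting-neighbour share
        new = (~adopted) & (share >= thresholds)
        if not new.any():
            break                                      # diffusion has stalled
        adopted |= new
    return adopted

# Toy network: ring of 30 agents, each linked to 2 neighbours per side
n = 30
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for d in (1, 2):
        adj[i, (i + d) % n] = adj[i, (i - d) % n] = 1
thresholds = rng.uniform(0.2, 0.6, size=n)   # heterogeneous adoption thresholds
adopted = simulate_diffusion(adj, thresholds, seeds=[0, 1, 2])
```

The paper's clustering result corresponds to adoption spreading locally from the seeds: agents far from any early adopter on the network may never cross their threshold even when nearby agents all do.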
  6. By: Shinya Kato (Graduate School of Economics, Kobe University); Kenji Takeuchi (Graduate School of Economics, Kobe University)
    Abstract: We conducted a computable general equilibrium analysis of a policy to regulate carbon dioxide emissions per unit of production in Japan. It is often claimed that regulations based on emission rates might lead to an increase in carbon dioxide emissions but do not suppress economic growth. This study shows that in the short run, a rate-based policy reduces firms' emissions at a rate greater than that specified by the regulation. We also compared the rate-based policy with a cap-and-trade policy and found that the former leads to a greater reduction in real GDP than the latter. Furthermore, the decrease in output tends to be more evenly distributed under the rate-based policy than under a cap-and-trade policy. Our results suggest that the rate-based policy is inferior in terms of efficiency but favorable in terms of ensuring that the burden of emission reduction is shared equally.
    Date: 2014–03
  7. By: NESTEROV, Yu. (Université catholique de Louvain, CORE, Belgium); SHIKHMAN, Vladimir (Université catholique de Louvain, CORE, Belgium)
    Abstract: In this paper, we develop new subgradient methods for solving nonsmooth convex optimization problems. These methods are the first ones for which the whole sequence of test points is endowed with worst-case performance guarantees. The new methods are derived from a relaxed estimating sequences condition, which allows reconstruction of the approximate primal-dual optimal solutions. Our methods are applicable as efficient real-time stabilization tools for potential systems with infinite horizon. As an example, we consider a model of privacy-respecting taxation, where the center has no information on the utility functions of the agents. Nevertheless, we show that by a proper taxation policy, the agents can be forced to apply, on average, the socially optimal strategies. Preliminary numerical experiments confirm the high efficiency of the new methods.
    Keywords: convex optimization, nonsmooth optimization, subgradient methods, rate of convergence, primal-dual methods, privacy-respecting tax policy
    Date: 2014–02–12
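For readers unfamiliar with subgradient schemes, the classical baseline the paper improves upon can be sketched in a few lines. This is the textbook subgradient method with diminishing steps, not the paper's new primal-dual estimating-sequences methods; the test function and step rule are illustrative.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, iters=500):
    """Plain subgradient method with normalized diminishing steps 1/sqrt(k+1).
    For nonsmooth convex f, only the best iterate so far carries the classical
    O(1/sqrt(k)) guarantee, hence we track it explicitly."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(iters):
        g = subgrad(x)
        x = x - g / (np.linalg.norm(g) * np.sqrt(k + 1))  # diminishing step
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# Nonsmooth test problem: f(x) = ||x||_1, minimized at the origin
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x) + (x == 0)   # a valid subgradient of the l1 norm
x_best, f_best = subgradient_method(f, subgrad, x0=[2.0, -3.0])
```

The contrast with the paper is exactly the `best_f` bookkeeping: the new methods endow the whole sequence of test points, not just the record iterate, with worst-case guarantees.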
  8. By: Widad Guechtouli
    Abstract: In this paper, we study both direct and indirect processes of knowledge transfer from a modelling perspective, using agent-based models. There are several ways to model knowledge. We study three different representations and try to determine which one better captures the dynamics of knowledge diffusion within a social network. Results show that when knowledge is modelled as a binary vector, and not cumulated, we can observe some heterogeneity in agents' learning and interactions, in both types of knowledge transfer.
    Keywords: knowledge model, knowledge transfer, social networks, communication.
    Date: 2014–02–25
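The binary-vector representation of knowledge can be sketched as follows. This is a generic illustration, not the author's exact model: agents hold 16-bit knowledge vectors and each round learn one randomly chosen bit that some neighbour holds but they do not; the ring network, vector length, and one-bit-per-round rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def transfer_step(knowledge, adj):
    """One round of direct transfer: each agent learns one randomly chosen
    knowledge bit that some neighbour holds but it does not (simultaneous update)."""
    n = len(knowledge)
    new = knowledge.copy()
    for i in range(n):
        neighbours = np.where(adj[i])[0]
        if len(neighbours) == 0:
            continue
        pool = knowledge[neighbours].any(axis=0) & ~knowledge[i]   # learnable bits
        bits = np.where(pool)[0]
        if len(bits):
            new[i, rng.choice(bits)] = True
    return new

# 10 agents on a ring, knowledge as 16-bit binary vectors
n, bits = 10, 16
adj = np.zeros((n, n), dtype=bool)
for i in range(n):
    adj[i, (i + 1) % n] = adj[i, (i - 1) % n] = True
knowledge = rng.random((n, bits)) < 0.2   # sparse initial endowments
initial = knowledge.copy()
for _ in range(50):
    knowledge = transfer_step(knowledge, adj)
```

Because bits are never forgotten and never invented, diffusion here is monotone: the population's total knowledge is fixed at the initial union, and heterogeneity arises only from who learns which bit when.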
  9. By: Uyttendaele P.; Khan A.; Peeters R.J.A.P.; Thuijsman F. (GSBE)
    Abstract: The primary question in coordination games concerns the possibility of achieving efficient coordination. We consider a situation where individuals from a finite population are randomly matched to play a coordination game. While this interaction is global in the sense that the co-player can be drawn from the entire population, individuals observe the strategies and payoffs of only the direct connections or neighbors in their social network. The most successful strategy used in the neighborhood is imitated. We study how the network structure influences the dynamic process of achieving efficient coordination. We simulate this coordination game for small-world and scale-free networks and find that segregation is an important factor in determining the possibility of efficient coordination. In addition, a classification tree analysis reveals segregation to be an important variable explaining the nonoccurrence of efficient coordination.
    Keywords: Stochastic and Dynamic Games; Evolutionary Games; Repeated Games;
    JEL: C73
    Date: 2014
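The setup (global random matching, local imitation of the most successful observed strategy) can be sketched in a few lines of Python. This is a simplified illustration, not the authors' small-world/scale-free experiments: the payoff matrix, ring network, and one-sided matching scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# 2x2 coordination game: strategy 1 is the payoff-efficient convention
PAYOFF = np.array([[3.0, 0.0],
                   [0.0, 5.0]])

def step(strategies, adj):
    """One round: each agent plays a randomly drawn co-player from the whole
    population, then imitates the strategy with the highest payoff among
    itself and its network neighbours (local observation only)."""
    n = len(strategies)
    partners = rng.permutation(n)                       # global random matching
    payoffs = PAYOFF[strategies, strategies[partners]]
    new = strategies.copy()
    for i in range(n):
        group = np.append(np.where(adj[i])[0], i)       # neighbours + self
        new[i] = strategies[group[np.argmax(payoffs[group])]]
    return new

n = 40
adj = np.zeros((n, n), dtype=bool)   # ring network stands in for a social network
for i in range(n):
    adj[i, (i + 1) % n] = adj[i, (i - 1) % n] = True
strategies = rng.integers(0, 2, size=n)
for _ in range(100):
    strategies = step(strategies, adj)
```

Whether the population locks into the efficient convention or splits into segregated clusters of conventions depends on the network; the paper's simulations vary the topology to isolate that effect.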
  10. By: Lazer, David; Ryan Kennedy; Gary King; Alessandro Vespignani
    Abstract: Last year was difficult for Google Flu Trends (GFT). In early 2013, Nature reported that GFT was estimating more than double the percentage of doctor visits for influenza-like illness reported by the Centers for Disease Control and Prevention's (CDC) sentinel systems during the 2012–2013 flu season (1). Given that GFT was designed to forecast upcoming CDC reports, this was a problematic finding. In March 2014, our report in Science found that the overestimation problem in GFT was also present in the 2011–2012 flu season (2). The report also found strong evidence of autocorrelation and seasonality in the GFT errors, and presented evidence that the issues were likely due, at least in part, to modifications made to Google's search algorithm and the decision by GFT engineers not to use previous CDC reports or seasonality estimates in their models, which the article labeled "algorithm dynamics" and "big data hubris" respectively. Moreover, the report and the supporting online materials detailed how difficult, if not impossible, it is to replicate the GFT results, undermining independent efforts to explore the source of GFT errors and formulate improvements.
    Date: 2014–01

General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.