nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒01‒28
fifteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Large Multiple Neighborhood Search for the Soft-Clustered Vehicle-Routing Problem By Timo Hintsch
  2. The Success of the Deferred Acceptance Algorithm under Heterogenous Preferences with Endogenous Aspirations By Saglam, Ismail
  3. Robustness of Support Vector Machines in Algorithmic Trading on Cryptocurrency Market By Maryna Zenkova; Robert Ślepaczuk
  4. Internal versus External Growth in Industries with Scale Economies: A Computational Model of Optimal Merger Policy By Ben Mermelstein; Volker Nocke; Mark A. Satterthwaite; Michael D. Whinston
  5. An evaluation of early warning models for systemic banking crises: Does machine learning improve predictions? By Beutel, Johannes; List, Sophia; von Schweinitz, Gregor
  6. Measuring the External Stability of the One-to-One Matching Generated by the Deferred Acceptance Algorithm By Saglam, Ismail
  7. Applying Tax Rate of 33,33% on Primary Energy in Indonesia By Tri Purwaningsih, Vitriyani; Widodo, Tri
  8. Separating the signal from the noise - financial machine learning for Twitter By Schnaubelt, Matthias; Fischer, Thomas G.; Krauss, Christopher
  9. Impacts of China Coal Import Tariff against US on Global Economy and CO2 Emissions By Septiyas Trisilia, Mustika; Widodo, Tri
  10. Inequality, mobility and the financial accumulation process: A computational economic analysis By Simone Righi; Yuri Biondi
  11. More is Different ... and Complex! The Case for Agent-Based Macroeconomics By Giovanni Dosi; Andrea Roventini
  12. Diffusion of Shared Goods in Consumer Coalitions. An Agent-Based Model By Francesco Pasimeni; Tommaso Ciarli
  13. Does Scientific Progress Affect Culture? A Digital Text Analysis By Michela Giorcelli; Nicola Lacetera; Astrid Marinoni
  14. Schedule-Based Integrated Inter-City Bus Line Planning for Multiple Timetabled Services via Large Multiple Neighborhood Search By Konrad Steiner
  15. Weapon-Carrying among High School Students: A Predictive Model Using Machine Learning By Yiran Fan

  1. By: Timo Hintsch (Johannes Gutenberg University Mainz)
    Abstract: The soft-clustered vehicle-routing problem (SoftCluVRP) is a variant of the classical capacitated vehicle-routing problem. Customers are partitioned into clusters and all customers of the same cluster must be served by the same vehicle. In this paper, we present a large multiple neighborhood search for the SoftCluVRP. We design and analyze multiple cluster destroy and repair operators as well as two post-optimization components, which are both based on variable neighborhood descent. The first allows inter-route exchanges of complete clusters, while the second searches for intra-route improvements by combining classical neighborhoods (2-opt, Or-opt, double-bridge) and the Balas-Simonetti neighborhood. Computational experiments show that our algorithm clearly outperforms the only existing heuristic approach from the literature. By solving benchmark instances, we provide 130 new best solutions for 220 medium-sized instances with up to 483 customers and prove 12 of them to be optimal.
    Keywords: Vehicle Routing, Clustered Vehicle Routing, Large neighborhood search
    JEL: C91 C92 D03 D91
    Date: 2019–01–16
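    The destroy-and-repair loop at the heart of a large (multiple) neighborhood search can be sketched as follows. This is a minimal, generic skeleton, not the paper's cluster-specific operators: the `destroy` and `repair` functions and the toy one-dimensional tour are illustrative assumptions.

```python
import random

def large_neighborhood_search(solution, cost, destroy, repair,
                              iterations=1000, seed=0):
    """Generic destroy-and-repair LNS skeleton with greedy acceptance."""
    rng = random.Random(seed)
    best = list(solution)
    best_cost = cost(best)
    current = list(best)
    for _ in range(iterations):
        partial, removed = destroy(list(current), rng)   # break part of the solution
        candidate = repair(partial, removed, rng)        # rebuild it heuristically
        if cost(candidate) < cost(current):              # keep only improvements
            current = candidate
            if cost(current) < best_cost:
                best, best_cost = list(current), cost(current)
    return best, best_cost

# Toy demo: reorder numbers to minimise distance travelled on a line.
def tour_cost(t):
    return sum(abs(a - b) for a, b in zip(t, t[1:]))

def destroy(t, rng):
    # Remove two random customers from the tour.
    removed = [t.pop(rng.randrange(len(t))) for _ in range(2)]
    return t, removed

def repair(t, removed, rng):
    # Cheapest-insertion repair: put each removed customer where it costs least.
    for x in removed:
        pos = min(range(len(t) + 1),
                  key=lambda i: tour_cost(t[:i] + [x] + t[i:]))
        t.insert(pos, x)
    return t
```

    In the paper's setting, destroy and repair would act on whole clusters rather than single customers, preserving the constraint that a cluster stays on one vehicle.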
  2. By: Saglam, Ismail
    Abstract: In this paper, we consider a one-to-one matching model with two phases: an adolescence phase, in which individuals meet a number of dates and learn about their aspirations, followed by a matching phase, in which individuals are matched according to a version of Gale and Shapley's (1962) deferred acceptance (DA) algorithm. Using simulations of this model, we study how the likelihoods of matching and divorce, as well as the balancedness and speed of the matching produced by the DA algorithm, are affected by the degree of correlation in individuals' preferences and by the frequency with which individuals update their aspirations in the adolescence phase.
    Keywords: Mate search; one-to-one matching; stability; agent-based simulation
    JEL: C63 C78
    Date: 2019–01–15
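    For readers unfamiliar with the matching phase, the classical men-proposing deferred acceptance algorithm of Gale and Shapley (1962) that the paper builds on can be sketched as follows (a textbook version, not the authors' simulation code):

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Men-proposing deferred acceptance (Gale & Shapley, 1962).

    men_prefs[m] is m's preference list over women (best first);
    women_prefs[w] is w's list over men. Returns a stable matching
    as a dict {man: woman}.
    """
    # rank[w][m] = position of m in w's list (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                  # men with no tentative partner
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                            # woman -> tentatively accepted man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]  # m's best not-yet-tried woman
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m                  # w tentatively accepts
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])         # w trades up; old partner is free
            engaged[w] = m
        else:
            free.append(m)                  # w rejects m
    return {m: w for w, m in engaged.items()}
```

    The deferral is the key feature: a woman's acceptance stays tentative until the algorithm terminates, which is what guarantees stability of the final matching.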
  3. By: Maryna Zenkova (Quantitative Finance Research Group, Faculty of Economic Sciences, University of Warsaw); Robert Ślepaczuk (Quantitative Finance Research Group, Faculty of Economic Sciences, University of Warsaw)
    Abstract: This study investigates the profitability of an algorithmic trading strategy based on training an SVM model to identify cryptocurrencies with high or low predicted returns. A tail set is defined as a group of coins whose volatility-adjusted returns are in the highest or lowest quantile. Each cryptocurrency is represented by a set of six technical features. The SVM is trained on historical tail sets and tested on current data. The classifier is a nonlinear support vector machine. A portfolio is formed by ranking coins using the SVM output; the highest-ranked coins are taken as long positions and included in the portfolio for one reallocation period. The following metrics were used to estimate portfolio profitability: %ARC (the annualized rate of change), %ASD (the annualized standard deviation of daily returns), MDD (the maximum drawdown coefficient), and IR1 and IR2 (the information ratio coefficients). The performance of the SVM portfolio is compared to that of four benchmark strategies using the information ratio coefficient IR1, which quantifies the risk-weighted gain. The study also addresses how sensitive the portfolio performance is to the parameters set in the SVM model.
    Keywords: machine learning, support vector machines, investment algorithm, algorithmic trading, strategy, optimization, cross-validation, overfitting, cryptocurrency market, technical analysis, meta parameters
    JEL: C4 C45 C61 C15 G14 G17
    Date: 2019
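    The tail-set labelling step described in the abstract can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions (mean-over-volatility scoring, quantile cutoff `q`); the six technical features and the SVM fit itself are omitted.

```python
import statistics

def vol_adjusted(returns_by_coin):
    """Score each coin by mean daily return divided by its volatility.

    returns_by_coin maps coin -> list of daily returns (>= 2 observations).
    """
    return {c: statistics.mean(r) / statistics.stdev(r)
            for c, r in returns_by_coin.items()}

def tail_sets(scores, q=0.25):
    """Label the bottom and top q-quantile of coins by score.

    These 'low'/'high' tail sets would serve as the two training
    classes for the SVM classifier.
    """
    ranked = sorted(scores, key=scores.get)   # ascending by score
    k = max(1, int(len(ranked) * q))
    return {'low': ranked[:k], 'high': ranked[-k:]}
```

    At each reallocation date, the trained classifier would then rank all coins and the highest-ranked ones would enter the long portfolio.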
  4. By: Ben Mermelstein; Volker Nocke; Mark A. Satterthwaite; Michael D. Whinston
    Abstract: We study optimal merger policy in a dynamic model in which the presence of scale economies implies that firms can reduce costs either through internal investment in building capital or through mergers. The model, which we solve computationally, allows firms to invest or propose mergers according to the relative profitability of these strategies. An antitrust authority is able to block mergers at some cost. We examine the optimal policy for an antitrust authority that cannot commit to its future policy rule and approves or rejects mergers as they are proposed, considering both consumer value and aggregate value as its possible objectives. We find that the optimal policy can differ substantially from what would be best considering only welfare in the period the merger is proposed. In general, antitrust policy can greatly affect firms' optimal investment behavior, and firms' investment behavior can in turn greatly affect the antitrust authority's optimal policy. Moreover, externalities imposed by mergers on rivals can have significant effects on firms' investment incentives and thereby shape the optimal policy.
    Keywords: Horizontal merger, merger policy, investment, scale economies, antitrust
    JEL: L13 L40
    Date: 2018–08
  5. By: Beutel, Johannes; List, Sophia; von Schweinitz, Gregor
    Abstract: This paper compares the out-of-sample predictive performance of different early warning models for systemic banking crises using a sample of advanced economies covering the past 45 years. We compare a benchmark logit approach to several machine learning approaches recently proposed in the literature. We find that while machine learning methods often attain a very high in-sample fit, they are outperformed by the logit approach in recursive out-of-sample evaluations. This result is robust to the choice of performance measure, crisis definition, preference parameter, and sample length, as well as to using different sets of variables and data transformations. Thus, our paper suggests that further enhancements to machine learning early warning models are needed before they are able to offer a substantial value-added for predicting systemic banking crises. Conventional logit models appear to use the available information already fairly efficiently, and would for instance have been able to predict the 2007/2008 financial crisis out-of-sample for many countries. In line with economic intuition, these models identify credit expansions, asset price booms and external imbalances as key predictors of systemic banking crises.
    Keywords: early warning system,logit,machine learning,systemic banking crises
    JEL: C35 C53 G01
    Date: 2019
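    The recursive (expanding-window) out-of-sample evaluation the paper relies on can be sketched generically: at each step the model is re-fitted on data available up to that date only. The toy threshold classifier below is an illustrative stand-in, not the paper's logit or machine learning models.

```python
def recursive_oos(X, y, fit, predict, start):
    """Expanding-window ("recursive") out-of-sample evaluation.

    At each step t the model is estimated only on observations before t,
    then used to predict observation t, mimicking real-time forecasting.
    """
    preds = []
    for t in range(start, len(y)):
        model = fit(X[:t], y[:t])
        preds.append(predict(model, X[t]))
    return preds

# Toy stand-in for a crisis classifier: flag an observation whenever its
# indicator exceeds the in-sample mean (illustrative only).
def fit(X, y):
    return sum(X) / len(X)

def predict(model, x):
    return int(x > model)
```

    In-sample fit measured on `X[:t]` can look excellent for flexible models while this recursive scheme reveals whether the model would have worked in real time, which is the paper's central comparison.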
  6. By: Saglam, Ismail
    Abstract: In this paper, we consider a one-to-one matching model where the population expands with the arrival of a man and a woman. Individuals in this population are matched, before and after the expansion, according to a version of the deferred acceptance algorithm (Gale and Shapley, 1962) in which men propose and women reject or (tentatively or permanently) accept. Using computer simulations of this model, we study how the percentage of matches disrupted (undisrupted) by the expansion of the population is affected by the initial size of the population and by the degree of correlation in individuals' preferences.
    Keywords: One-to-one matching; deferred acceptance; stability; external stability
    JEL: C63 C78
    Date: 2019–01–15
  7. By: Tri Purwaningsih, Vitriyani; Widodo, Tri
    Abstract: High fuel consumption has a negative impact not only on the environment but also, more broadly, on the country's economic conditions. Thus, steps need to be taken regarding the use of fuel in order to reduce the resulting negative impacts. The aim of this study is to analyze, through three simulations, the impact on industry and the Indonesian economy when a tax of 33.33% is levied on the use of primary energy, namely coal and petroleum products. Using the GTAP-E model, regions are aggregated into 7 regions and the industrial sector into 11 industries. The results show that simulation C has a significant impact on industry and the Indonesian economy. In addition, this simulation is also able to reduce carbon dioxide emissions deriving from coal and petroleum.
    Keywords: Tax, Petroleum, Coal, GTAP-E
    JEL: Q43 Q48
    Date: 2019–01–07
  8. By: Schnaubelt, Matthias; Fischer, Thomas G.; Krauss, Christopher
    Abstract: Most statistical arbitrage strategies in the academic literature solely rely on price time series. By contrast, alternative data sources are of growing importance for professional investors. We contribute to bridging this gap by assessing the price-predictive value of more than nine million tweets on intraday returns of the S&P 500 constituents. For this purpose, we design a machine learning pipeline addressing specific challenges inherent to this task. At first, we engineer domain-specific features along three categories, i.e., directional indicators, relevance indicators and meta features. Next, we leverage a random forest to extract the relationship between these features and subsequent stock returns in a low signal-to-noise setting. For performance evaluation, we run a rigorous event-based backtesting study across all tweets and stocks. We find annualized returns of 6.4 percent and a Sharpe ratio of 2.2 after transaction costs. Finally, we illuminate the machine learning black box and unveil sources of profitability: First, results are both driven and limited by the temporal clustering of tweets, i.e., the majority of profits stem from tweets clustered closely together in time, corresponding to high-event situations. Second, the importance of included features follows an economic rationale, e.g., tweets with positive sentiment tend to yield positive returns and vice versa. Third, we find that stocks of medium market capitalization and from the consumer and technology sectors contribute most to our results, which we interpret as a trade-off between tweet coverage and tweet relevance.
    Keywords: finance,statistical arbitrage,machine learning,random forests,trading strategy backtesting,social media
    Date: 2018
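    The three feature categories named in the abstract (directional, relevance, meta) might be engineered roughly as below. The word lists, field names, and feature choices are purely illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sentiment lexicons; the paper's directional indicators
# are more sophisticated than simple word lists.
POSITIVE = {'bullish', 'buy', 'beat', 'upgrade'}
NEGATIVE = {'bearish', 'sell', 'miss', 'downgrade'}

def tweet_features(text, followers, hour):
    """Toy feature vector along the paper's three categories."""
    words = text.lower().split()
    return {
        # directional: net count of positive vs. negative words
        'direction': sum(w in POSITIVE for w in words)
                     - sum(w in NEGATIVE for w in words),
        # relevance: cashtag mentions and author reach
        'cashtags': sum(w.startswith('$') for w in words),
        'followers': followers,
        # meta: tweet length and time of day
        'length': len(words),
        'hour': hour,
    }
```

    Feature dictionaries like these would then be stacked into a matrix and fed to the random forest together with subsequent intraday returns as labels.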
  9. By: Septiyas Trisilia, Mustika; Widodo, Tri
    Abstract: This paper examines the impacts of China's coal import tariff against the US on the global economy and CO2 emissions. Using the Global Trade Analysis Project Environmental (GTAP-E) model, the coal import tariff is found to generate trade deflection and trade depression. The US and China would suffer welfare losses, while Indonesia and Australia would appear to gain from this tariff war. Furthermore, skilled and unskilled labor would decline in the US coal industry and increase in China's. Finally, we also find evidence that China's coal import tariff is not good policy, because not only the global economy but also the environment would be disadvantaged by rising CO2 emissions.
    Keywords: Import tariff, Coal, Carbon dioxide emissions, GTAP
    JEL: F18 Q5 Q54
    Date: 2019–01–04
  10. By: Simone Righi; Yuri Biondi
    Abstract: Our computational economic analysis investigates the relationship between inequality, mobility and the financial accumulation process. Extending the baseline model by Levy et al., we characterise the economic process through stylised return structures generating alternative evolutions of income and wealth through time. First, we explore the limited heuristic contribution of one- and two-factor models comprising one single stock factor (capital wealth) and one single flow factor (labour) as pure drivers of income and wealth generation and allocation over time. Second, we introduce heuristic modes of taxation in line with the baseline approach. Our computational economic analysis corroborates that the financial accumulation process featuring compound returns plays a significant role as a source of inequality, while institutional arrangements including taxation play a significant role in framing and shaping the aggregate economic process that evolves over socioeconomic space and time.
    Date: 2019–01
  11. By: Giovanni Dosi; Andrea Roventini
    Abstract: This work nests the agent-based macroeconomic perspective into the earlier history of macroeconomics. We discuss how the discipline in the 70's took a perverse path, relying on models grounded on a fictitious rational representative agent in order to pathetically circumvent aggregation and coordination problems. The Great Recession was a natural experiment for macroeconomics, showing the inadequacy of the predominant theoretical framework grounded on DSGE models. After discussing the pathological fallacies of the DSGE-based approach, we claim that macroeconomics should consider the economy as a complex evolving system, i.e. as an ecology populated by heterogeneous agents whose far-from-equilibrium interactions continuously change the structure of the system. This in turn implies that more is different: macroeconomics cannot be shrunk to representative-agent micro; rather, agents' complex interactions lead to the emergence of new phenomena and hierarchical structures at the macro level. This is what is taken into account by agent-based models, which provide a novel way to model complex economies from the bottom up, with sound empirically based micro-foundations. We present the foundations of agent-based macroeconomics and discuss how the contributions of this special issue push its frontier forward. Finally, we conclude by discussing the ways ahead for the full acknowledgement of agent-based models as the standard way of theorizing in macroeconomics.
    Keywords: Macroeconomics, Economic Policy, Keynesian Theory, New Neoclassical Synthesis, New Keynesian Models, DSGE Models, Agent-Based Evolutionary Models, Complexity Theory, Great Recession, Crisis
    Date: 2019–01–11
  12. By: Francesco Pasimeni (SPRU, University of Sussex, THE UK; European Commission, Joint Research Centre (JRC), Petten, Netherlands); Tommaso Ciarli (SPRU, University of Sussex, THE UK)
    Abstract: This paper focuses on the process of coalition formation conditioning the common decision to adopt a shared good, one that cannot be afforded by an average single consumer and whose use cannot be exhausted by any single consumer. An agent-based model is developed to study the interplay between two processes: coalition formation and the diffusion of shared goods. Coalition formation is modelled in an evolutionary game-theoretic setting, while adoption uses elements from both the Bass and threshold models. Coalition formation sets the conditions for adoption, while diffusion influences the subsequent formation of coalitions. Results show that both coalitions and diffusion are subject to network effects and affect the flow of information through the population of consumers. Large coalitions are preferred over small ones since the individual cost is lower, although it increases if higher quantities are purchased collectively. The paper concludes by connecting the model to the ongoing discussion of the diffusion of sustainable goods and discussing related policy implications.
    Keywords: Coalition formation, diffusion, shared goods, agent-based model
    JEL: D71 E27 O33
    Date: 2018–12
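    The threshold-model ingredient mentioned in the abstract can be sketched as a Granovetter-style cascade: an agent adopts once the adopting share of its neighbours reaches its personal threshold. This is only one component of the paper's model (the coalition-formation game and Bass-style external influence are omitted), and the network and thresholds below are illustrative.

```python
def threshold_diffusion(neighbours, thresholds, seeds, steps=10):
    """Granovetter-style threshold adoption on a fixed network.

    neighbours: agent -> list of neighbouring agents
    thresholds: agent -> fraction of neighbours that must adopt first
    seeds: initially adopting agents
    Returns the set of adopters after the cascade settles (or `steps`).
    """
    adopted = set(seeds)
    for _ in range(steps):
        new = {a for a in neighbours if a not in adopted
               and neighbours[a]
               and sum(n in adopted for n in neighbours[a]) / len(neighbours[a])
                   >= thresholds[a]}
        if not new:          # cascade has settled
            break
        adopted |= new
    return adopted
```

    A single high-threshold agent on a path can block the cascade entirely, which is the kind of network effect the paper studies when coalitions reshape who is connected to whom.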
  13. By: Michela Giorcelli; Nicola Lacetera; Astrid Marinoni
    Abstract: We study the interplay between scientific progress and culture through text analysis on a corpus of about eight million books, with the use of techniques and algorithms from machine learning. We focus on a specific scientific breakthrough, the theory of evolution through natural selection by Charles Darwin, and examine the diffusion of certain key concepts that characterized this theory in the broader cultural discourse and social imaginary. We find that some concepts in Darwin’s theory, such as Evolution, Survival, Natural Selection and Competition, diffused in the cultural discourse immediately after the publication of On the Origin of Species. Other concepts such as Selection and Adaptation were already present in the cultural dialogue. Moreover, we document semantic changes for most of these concepts over time. Our findings thus show a complex relation between two key factors of long-term economic growth – science and culture. Considering the evolution of these two factors jointly can offer new insights to the study of the determinants of economic development, and machine learning is a promising tool to explore these relationships.
    JEL: N00 O30 Z1
    Date: 2019–01
  14. By: Konrad Steiner (A.T. Kearney GmbH, Johannes Gutenberg University)
    Abstract: This work addresses line planning for inter-city bus networks, which requires a high level of integration with other planning steps. One key reason is given by passengers choosing a specific timetabled service rather than just a line, as is typically the case in urban transportation. Schedule-based modeling approaches are required to incorporate this aspect, i.e., demand is assigned to a specific timetabled service. Furthermore, in liberalized markets, there is usually fierce competition within and across modes. This encourages considering dynamic demand, i.e., not relying on static demand values, but adjusting them based on the trip characteristics. We provide a schedule-based mixed-integer model formulation allowing a bus operator to optimize multiple timetabled services in a travel corridor with simultaneous decisions on both departure time and which stations to serve. The demand behaves dynamically with respect to departure time, trip duration, trip frequency, and cannibalization. To solve this new problem formulation, we introduce a large multiple neighborhood search (LMNS) as an overall metaheuristic approach, together with multiple variations including matheuristics. Applying the LMNS algorithm, we solve instances based on real-world data from the German market. Computation times are attractive and the high quality of the solutions is confirmed by analyzing examples with known optimal solutions. Moreover, we show that the explicit consideration of the dependencies between the different timetabled services often produces insightful new results that differ from approaches which only focus on a single service.
    Keywords: integration, schedule-based modeling, inter-city bus transportation, dynamic demand, large multiple neighborhood search LMNS
    Date: 2018–12–20
  15. By: Yiran Fan (The Linsly School, Wheeling, WV, USA)
    Abstract: This study aims at 1) identifying the predictors of weapon-carrying on school property and 2) building a predictive model that parents, educators, and pediatricians can use for early intervention. Youth Risk Behavior Surveillance System (YRBSS) 2017 data were used for this study. A logistic regression model is used to calculate the predicted risk. Logistic regression belongs to the category of statistical models called generalized linear models, and it allows one to predict a discrete outcome from a set of variables that may be continuous, discrete, dichotomous, or a combination of these; typically, the dependent variable is dichotomous and the independent variables are either categorical or continuous. The data were analyzed in R. The outcome variable is weapon-carrying, based on Q13 ("During the past 30 days, on how many days did you carry a weapon such as a gun, knife, or club on school property?"). The results identified several important predictors of weapon-carrying on school property, such as gender, alcohol use, and smoking age. This provides important information to educators and parents for early intervention and for alleviating the negative effects of weapon-carrying among teenagers.
    Keywords: weapon, school, educators
    Date: 2018–11
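    The core of a logistic risk model like the one described can be sketched from first principles. This is a minimal gradient-descent implementation for illustration, not the study's R code (which would typically use `glm` with a binomial family); the toy data are hypothetical.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Logistic regression via per-sample gradient descent on log-loss.

    X: list of feature vectors, y: list of 0/1 labels.
    Returns weight vector w and intercept b.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))     # sigmoid link
            g = p - yi                          # gradient of the log-loss
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict_risk(w, b, x):
    """Predicted probability of the outcome (e.g., weapon-carrying)."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))
```

    In the study's setting, `x` would hold the YRBSS predictors (gender, alcohol use, smoking age, etc.) and `predict_risk` would give the estimated probability of weapon-carrying for early-intervention screening.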

This nep-cmp issue is ©2019 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.