nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒06‒18
fifteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Algorithmic Trading with Fitted Q Iteration and Heston Model By Son Le
  2. Structural Labour Supply Models and Microsimulation By Rolf Aaberge; Ugo Colombino
  3. RHOMOLO V3: A Spatial Modelling Framework By Patrizio Lecca; Javier Barbero Jimenez; Martin Aaroe Christensen; Andrea Conte; Francesco Di Comite; Jorge Diaz-Lanchas; Olga Diukanova; Giovanni Mandras; Damiaan Persyn; Stylianos Sakkas
  4. Effects of driverless vehicles: A review of simulations By Pernestål Brenden, Anna; Kristoffersson, Ida
  5. Introducing Roy-like Worker Assignment into Computable General Equilibrium Models By Jaewon Jung
  6. And Then He Wasn't a She: Climate Change and Green Transitions in an Agent-Based Integrated Assessment Model By Francesco Lamperti; Giovanni Dosi; Mauro Napoletano; Andrea Roventini; Alessandro Sapio
  7. A flexible regime switching model with pairs trading application to the S&P 500 high-frequency stock returns By Endres, Sylvia; Stübinger, Johannes
  8. Tobacco spending in Georgia: Machine learning approach By Maksym Obrizan; Karine Torosyan; Norberto Pignatti
  9. The energy efficiency rebound effect in general equilibrium By Christoph Boehringer; Nicholas Rivers
  10. Data Science for Institutional and Organizational Economics By Prüfer, Jens; Prüfer, Patricia
  11. Machine Learning the Cryptocurrency Market By Laura Alessandretti; Abeer ElBahrawy; Luca Maria Aiello; Andrea Baronchelli
  12. Économétrie & Machine Learning By Arthur Charpentier; Emmanuel Flachaire; Antoine Ly
  13. Simulateur pédagogique des effets de répartition des soutiens de la PAC au niveau nation By Laurent Piet; Catherine Laroche-Dupraz
  14. On testing substitutability By Cosmina Croitoru; Kurt Mehlhorn
  15. Regime switching in the presence of endogeneity By Tom Auld; Oliver Linton

  1. By: Son Le
    Abstract: We present the use of fitted Q iteration in algorithmic trading. We show that fitted Q iteration helps alleviate the dimensionality problem that the basic Q-learning algorithm faces when applied to trading. Furthermore, we introduce a procedure combining model fitting and data simulation to enrich the training data, since a lack of data is often a problem in realistic applications. We test our method both in a simulated environment that permits an arbitrage opportunity and in a real-world environment, using prices of 450 stocks. In the former environment the method performs well, implying that our method works in theory. To perform well in the real-world environment, the trained agents might require more training (iterations) and more meaningful variables with predictive value.
    Date: 2018–05
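The core loop described in the abstract — repeatedly regressing a Q-function onto bootstrapped Bellman targets computed from a fixed batch of transitions — can be sketched as follows. This is a generic illustration of fitted Q iteration on toy data, not the paper's implementation; the state definition, features, reward, and all parameters are our own assumptions.

```python
import numpy as np

# Toy fitted-Q-iteration sketch for a trading-style task (illustrative only).
# State: last price change; actions: -1 (short), 0 (flat), 1 (long).
rng = np.random.default_rng(0)

def features(s, a):
    """Simple polynomial features of (state, action) for linear Q-regression."""
    return np.array([1.0, s, a, s * a, s * s])

# Batch of transitions (s, a, r, s') sampled from a random-walk price series.
prices = np.cumsum(rng.normal(0, 1, 1000)) * 0.1
ds = np.diff(prices)
S, A, R, S2 = [], [], [], []
for t in range(len(ds) - 1):
    for a in (-1, 0, 1):
        S.append(ds[t]); A.append(a)
        R.append(a * ds[t + 1])           # P&L of holding position a
        S2.append(ds[t + 1])

X = np.array([features(s, a) for s, a in zip(S, A)])
R = np.array(R)
# Precompute next-state features for each candidate action.
Phi = {a: np.array([features(s2, a) for s2 in S2]) for a in (-1, 0, 1)}

gamma, w = 0.95, np.zeros(5)
for _ in range(30):                       # FQI: regress on bootstrapped targets
    q_next = np.stack([Phi[a] @ w for a in (-1, 0, 1)])
    y = R + gamma * q_next.max(axis=0)    # Bellman target under current Q
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

def greedy_action(s):
    """Act greedily with respect to the fitted Q-function."""
    return max((-1, 0, 1), key=lambda a: features(s, a) @ w)
```

The key point of FQI, as the abstract notes, is that the regression step replaces the state-by-state table updates of basic Q-learning, which is what sidesteps the dimensionality problem.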
  2. By: Rolf Aaberge (Statistics Norway); Ugo Colombino
    Abstract: The purpose of this paper is to discuss the various approaches for accounting for labour supply responses in microsimulation models. The paper focuses on two methodologies for modelling labour supply: the discrete choice model and the random utility–random opportunities model. It then describes approaches to using these models for policy simulation, in terms of producing and interpreting simulation outcomes, and outlines an extensive literature of policy analyses based on these approaches. Labour supply models are central not only for analysing behavioural labour supply responses but also for identifying optimal tax-benefit systems, given some of the challenges of the theoretical approach. Combining labour supply results with individual and social welfare functions enables the social evaluation of policy simulations, and on this basis the paper discusses how to model socially optimal income taxation.
    Keywords: Behavioural microsimulation; Labour supply; Discrete choice; Tax reforms
    JEL: C50 D10 D31 H21 H24 H31 J20
    Date: 2018–06
  3. By: Patrizio Lecca (European Commission - JRC); Javier Barbero Jimenez (European Commission - JRC); Martin Aaroe Christensen (European Commission - JRC); Andrea Conte (European Commission - JRC); Francesco Di Comite (European Commission - ECFIN); Jorge Diaz-Lanchas (European Commission - JRC); Olga Diukanova (European Commission - JRC); Giovanni Mandras (European Commission - JRC); Damiaan Persyn (European Commission - JRC); Stylianos Sakkas (European Commission - JRC)
    Abstract: In this paper we provide a mathematical presentation of the RHOMOLO model. In addition, we perform some stylized and illustrative simulations with the aim of familiarising the reader with the economic adjustment mechanisms incorporated into the model. Essentially, we attempt to offer the reader, and potential users of the model, an intuition of the transmission channels existing in the current version of RHOMOLO. The analysis is kept simple to facilitate a better understanding of the model's findings. We simulate a permanent demand-side shock implemented separately for each of the 267 regions contained in the model. We repeat the same simulation under three alternative labour market closures and three different imperfectly competitive product market structures.
    Keywords: Numerical General Equilibrium Models, Regional Economic Adjustment, Regional spillover
    Date: 2018–05
  4. By: Pernestål Brenden, Anna (CTS - Centre for Transport Studies Stockholm (KTH and VTI)); Kristoffersson, Ida (VTI)
    Abstract: The development of driverless vehicles is fast, and the technology has the potential to significantly affect the transport system, society and environment. However, there are still many open questions regarding what this development will look like, and there are several counteracting forces. This paper addresses the effects of driverless vehicles by performing a literature review of twenty papers that use simulation to model effects of driverless vehicles. By combining and analysing the results from these simulation studies, an overall picture of the effects of driverless vehicles is presented. The paper shows that the focus in existing literature has been on the effects of driverless taxi applications in urban areas. Some parameters, such as trip cost and waiting time, show small variations between the reviewed papers. Other parameters, such as vehicle kilometres travelled (VKT), show larger variations and depend heavily on the assumptions concerning value of time and level of sharing. In general, increases in VKT are predicted for most applications. Ride sharing has the potential to reduce VKT, and thereby energy consumption and congestion, but the analysis indicates that a sufficient level of ride sharing to reduce VKT will not be achieved without incentives or regulations. Furthermore, the VKT of driverless vehicles is unevenly distributed in time and space, with larger increases in VKT during peak hours than off-peak, and in the suburbs compared to city centres. The reviewed papers provide a first prediction of factors such as waiting time, VKT and trip cost, in particular for urban areas and for schemes with a single service provider. To get a deeper understanding of the effects of driverless vehicles, aspects such as local spatial considerations, e.g. at pick-up stations, and more complex schemes with competition between service providers should be studied. Furthermore, there is a need for sensitivity analyses regarding travel demand.
    Keywords: Driverless vehicle; Automated vehicle; Autonomous taxi; Traffic simulation; Societal effects
    JEL: R40 R41
    Date: 2018–06–11
  5. By: Jaewon Jung (Université de Cergy-Pontoise, THEMA)
    Abstract: This paper develops a new CGE model incorporating a Roy-like worker assignment in which heterogeneous workers endogenously sort into different technologies based on their comparative advantage. The model predicts significantly higher welfare-improving effects of trade liberalization due to the technology-upgrading mechanism.
    Keywords: Technology upgrading, Heterogeneous firms/workers, Roy model, Computable general equilibrium (CGE), Gains from trade.
    JEL: C68 D58 F16
    Date: 2018
  6. By: Francesco Lamperti; Giovanni Dosi; Mauro Napoletano; Andrea Roventini; Alessandro Sapio
    Abstract: In this work, we employ an agent-based integrated assessment model to study the likelihood of a transition to green, sustainable growth in the presence of climate damages. The model comprises heterogeneous fossil-fuel and renewable plants, capital- and consumption-good firms, and a climate box linking greenhouse gas emissions to temperature dynamics and to microeconomic climate shocks affecting the labour productivity and energy demand of firms. Simulation results show that the economy possesses two statistical equilibria: a carbon-intensive lock-in and a sustainable growth path characterized by better macroeconomic performance. Once climate damages are accounted for, the likelihood of a green transition depends on the damage function employed. In particular, aggregate and quadratic damage functions overlook the impact of climate change on the transition to sustainability; by contrast, more realistic micro-level damages are found to deeply influence the chances of a transition. Finally, we run a series of policy experiments on carbon (fossil fuel) taxes and green subsidies. We find that the effectiveness of such market-based instruments depends on the different channels through which climate change affects the economy, and that complementary policies might be required to avoid carbon-intensive lock-ins.
    Keywords: climate change; agent based models; transitions; energy policy; growth
    Date: 2018–06–07
  7. By: Endres, Sylvia; Stübinger, Johannes
    Abstract: This paper develops the regime classification algorithm and applies it within a fully-fledged pairs trading framework on minute-by-minute data of the S&P 500 constituents from 1998 to 2015. Specifically, the highly flexible algorithm automatically determines the number of regimes for any stochastic process and provides a complete set of parameter estimates. We demonstrate its performance in a simulation study - the algorithm achieves promising results for the general class of Lévy-driven Ornstein-Uhlenbeck processes with regime switches. In our empirical back-testing study, we apply our regime classification algorithm to propose a high-frequency pair selection and trading strategy. The results show statistically and economically significant returns with an annualized Sharpe ratio of 3.92 after transaction costs - results remain stable even in recent years. We compare our strategy with existing quantitative trading frameworks and find its results to be superior in terms of risk and return characteristics. The algorithm takes full advantage of its flexibility and identifies various regime patterns over time that are well-documented in the literature.
    Keywords: Finance,Pairs trading,Statistical arbitrage,Markov regime switching,Lévy-driven Ornstein-Uhlenbeck process,High-frequency data
    Date: 2018
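As a rough illustration of the kind of process the paper classifies, the following sketch simulates a mean-reverting spread whose dynamics switch between two regimes under a Markov chain, then marks entry signals when the spread leaves a band — the basic logic of a pairs trade. We use a Gaussian (Brownian) driver as a simplification of the Lévy-driven case, and every parameter here is an invented assumption, not taken from the paper.

```python
import numpy as np

# Simulate a regime-switching Ornstein-Uhlenbeck spread (Gaussian driver as a
# stand-in for the Levy-driven case). All parameters are illustrative.
rng = np.random.default_rng(1)

theta = np.array([5.0, 0.5])              # mean-reversion speed per regime
sigma = np.array([0.1, 0.3])              # volatility per regime
P = np.array([[0.99, 0.01],               # Markov regime transition matrix
              [0.02, 0.98]])

dt, n = 1.0 / 390, 5 * 390                # one-minute steps, five trading days
x, regime = 0.0, 0
path, regimes = [], []
for _ in range(n):
    regime = rng.choice(2, p=P[regime])   # draw next regime
    # Euler step of dX = -theta * X dt + sigma dW (long-run mean 0)
    x += -theta[regime] * x * dt + sigma[regime] * np.sqrt(dt) * rng.normal()
    path.append(x); regimes.append(int(regime))

path = np.array(path)
# A pairs trade enters when the spread exceeds a threshold, e.g. two stdevs:
# short the spread when it is high (-1), long when it is low (+1), else flat.
threshold = 2 * path.std()
signals = np.where(path > threshold, -1, np.where(path < -threshold, 1, 0))
```

A regime classification algorithm of the kind the paper proposes would take `path` as input and try to recover `regimes` and the per-regime parameters; here both are known by construction, which is what makes such simulations useful as a test bed.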
  8. By: Maksym Obrizan (Kyiv School of Economics); Karine Torosyan (International School of Economics at TSU); Norberto Pignatti (International School of Economics at TSU)
    Abstract: The purpose of this study is to analyze tobacco spending in Georgia using various machine learning methods applied to a sample of 10,757 households from the Integrated Household Survey collected by GeoStat in 2016. Previous research has shown that smoking is the leading cause of death for 35-69 year olds. In addition, tobacco expenditures may constitute as much as 17% of the household budget. Five different algorithms (ordinary least squares, random forest, two gradient boosting methods, and deep learning) were applied to the 8,173 households (76.0%) in the training set. Out-of-sample predictions were then obtained for the 2,584 remaining households in the test set. Under default settings, the random forest algorithm showed the best performance, with more than a 10% improvement in terms of root-mean-square error (RMSE). The improved accuracy and availability of machine learning tools in R call for active use of these methods by policy makers and scientists in health economics, public health and related fields.
    Keywords: Tobacco Spending, Household Survey, Georgia, Machine Learning
    JEL: I12 L66 D12
    Date: 2018–05
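The evaluation protocol the abstract describes — fit competing models on a 76.0% training split of the 10,757 households and compare out-of-sample RMSE on the remaining 2,584 — can be sketched in a few lines. The data below are synthetic, and for self-containment we compare OLS against a naive mean predictor rather than the paper's random forest, boosting, and deep learning models.

```python
import numpy as np

# Train/test RMSE comparison protocol on synthetic "household" data.
rng = np.random.default_rng(2)

n = 10_757                                # households, as in the survey
X = rng.normal(size=(n, 4))               # stand-ins for household covariates
y = X @ np.array([3.0, -1.5, 0.0, 2.0]) + rng.normal(0, 2.0, n)

n_train = 8_173                           # 76.0% training split, as in the paper
Xtr, Xte, ytr, yte = X[:n_train], X[n_train:], y[:n_train], y[n_train:]

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Baseline: predict the training-set mean spending for every test household.
rmse_mean = rmse(yte, np.full(len(yte), ytr.mean()))

# OLS with intercept, fitted by least squares on the training split only.
A = np.column_stack([np.ones(n_train), Xtr])
beta, *_ = np.linalg.lstsq(A, ytr, rcond=None)
pred = np.column_stack([np.ones(len(Xte)), Xte]) @ beta
rmse_ols = rmse(yte, pred)
```

The paper's "more than 10% improvement" claim is exactly a comparison of this kind: `rmse_model / rmse_baseline` computed on held-out households only.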
  9. By: Christoph Boehringer (University of Oldenburg, Department of Economics); Nicholas Rivers (Graduate School of Public and International Affairs and Institute of the Environment, University of Ottawa)
    Abstract: We develop a stylized general equilibrium model to decompose the rebound effect of energy efficiency improvements into its partial and general equilibrium components. In our theoretical analysis, we identify key drivers of the general equilibrium rebound effect, including a composition channel, an energy price channel, a labor supply channel, and a growth channel. Based on numerical simulations with both the stylized model and a large-scale computable general equilibrium model of the global economy, we show that both the partial and general equilibrium components of the rebound effect can be substantial. Our benchmark parameterization suggests a total rebound effect from an exogenous energy efficiency improvement in the US manufacturing sector of 67%, with roughly two-thirds occurring through the partial equilibrium rebound channel and the remaining one-third through the general equilibrium rebound channel.
    Keywords: energy efficiency, climate change, rebound effect, general equilibrium
    Date: 2018–06
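The headline numbers imply a simple back-of-the-envelope decomposition, using the standard definition rebound = 1 - (realized savings / engineering savings). The 100 PJ engineering-savings figure below is a hypothetical example of ours, not from the paper.

```python
# Decompose the paper's 67% total rebound into its stated shares:
# roughly two-thirds partial equilibrium (PE), one-third general equilibrium (GE).
total_rebound = 0.67
pe_share, ge_share = 2 / 3, 1 / 3
pe_rebound = total_rebound * pe_share      # partial equilibrium component
ge_rebound = total_rebound * ge_share      # general equilibrium component

# Hypothetical example: an efficiency gain that would save 100 PJ in
# engineering terms delivers only (1 - rebound) of that once behaviour
# and prices adjust.
engineering_savings = 100.0                # PJ, illustrative assumption
realized_savings = engineering_savings * (1 - total_rebound)
```

Under these numbers only about a third of the engineering savings materialize, which is why the authors stress that both rebound components "can be substantial".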
  10. By: Prüfer, Jens (Tilburg University, Center For Economic Research); Prüfer, Patricia (Tilburg University, Center For Economic Research)
    Abstract: To what extent can data science methods – such as machine learning, text analysis, or sentiment analysis – push the research frontier in the social sciences? This essay briefly describes the most prominent data science techniques that lend themselves to analyses of institutional and organizational governance structures. We elaborate on several examples applying data science to analyze legal, political, and social institutions and sketch how specific data science techniques can be used to study important research questions that could not (to the same extent) be studied without these techniques. We conclude by comparing the main strengths and limitations of computational social science with traditional empirical research methods, and by discussing its relation to theory.
    Keywords: data science; machine learning; institutions; text analysis
    JEL: C50 C53 C87 D02 K0
    Date: 2018
  11. By: Laura Alessandretti; Abeer ElBahrawy; Luca Maria Aiello; Andrea Baronchelli
    Abstract: Machine learning and AI-assisted trading have attracted growing interest over the past few years. Here, we use this approach to test the hypothesis that the inefficiency of the cryptocurrency market can be exploited to generate abnormal profits. We analyse daily data for 1,681 cryptocurrencies for the period between Nov. 2015 and Apr. 2018. We show that simple trading strategies assisted by state-of-the-art machine learning algorithms outperform standard benchmarks. Our results show that non-trivial, but ultimately simple, algorithmic mechanisms can help anticipate the short-term evolution of the cryptocurrency market.
    Date: 2018–05
  12. By: Arthur Charpentier (CREM - Centre de recherche en économie et management - UNICAEN - Université de Caen Normandie - NU - Normandie Université - UR1 - Université de Rennes 1 - UNIV-RENNES - Université de Rennes - CNRS - Centre National de la Recherche Scientifique); Emmanuel Flachaire (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - ECM - Ecole Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique - AMU - Aix Marseille Université - EHESS - École des hautes études en sciences sociales); Antoine Ly (UPE - Université Paris-Est)
    Abstract: Econometrics and machine learning seem to share a common goal: building a predictive model for a variable of interest using explanatory variables (or features). Yet the two fields have developed in parallel, creating two different cultures, to paraphrase Breiman (2001a). The first aimed to build probabilistic models that describe economic phenomena. The second uses algorithms that learn from their mistakes, most often for the purpose of classification (of sounds, images, and so on). Recently, however, learning models have proved more effective than traditional econometric techniques (at the price of less explanatory power), and above all they can handle much larger volumes of data. In this context, it becomes necessary for econometricians to understand what these two cultures are, what separates them and, above all, what brings them together, in order to adopt the tools developed by the statistical learning community and integrate them into econometric models.
    Keywords: learning, least squares, modelling, econometrics, big data
    Date: 2018–05–25
  13. By: Laurent Piet; Catherine Laroche-Dupraz
    Abstract: [text in French] The latest reform of the Common Agricultural Policy (CAP), decided by the Member States of the European Union in 2013, has again modified some of the modalities through which the different types of direct payments are allocated to farmers. The reform concerns first-pillar « decoupled » payments as well as « coupled » payments, and leaves Member States great flexibility in how to implement the various policy measures within a common framework. In this paper, we present a tool (developed in Excel and compatible with the LibreOffice suite) that allows users to simulate the implementation of the main aspects of the reform, and to assess their impacts both in terms of payment distribution across farms and in terms of income. Being an educational tool, it is neither an academic effort to model the impact of the latest reform precisely, nor a device to estimate the exact amount of support a specific farmer could claim in practice. It is based on the French strand of the Farm Accountancy Data Network (FADN) and makes it possible to analyse the choices made by France in 2013, shedding light on the underlying rationale that may have motivated those decisions. Because its initial purpose was the continuing education of agricultural professionals who are not CAP or economics specialists, the proposed simulation tool comes with a user-friendly interface; it has also been used for the initial training of Agrocampus Ouest students and with an even larger audience during INRA's 70th-anniversary open days in Rennes in 2016. Building on these experiences, we are currently incorporating the simulation tool into an Agreenium-IAVFF MOOC dedicated to the economics of the European agricultural policy.
    Keywords: common agricultural policy, 2013 reform, simulation tool, FADN, France
    JEL: Q18 A20 C69
    Date: 2018
  14. By: Cosmina Croitoru; Kurt Mehlhorn
    Abstract: The papers of Hatfield, Immorlica and Kominers (2011) and of Aziz, Brill and Harrenstein (2013) propose algorithms for testing whether the choice function induced by a (strict) preference list of length N over a universe U is substitutable. The running times of these algorithms are O(|U|^3 · N^3) and O(|U|^2 · N^3), respectively. In this note we present an algorithm with running time O(|U|^2 · N^2). Note that N may be exponential in the size |U| of the universe.
    Date: 2018–05
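To make the object being tested concrete: a preference list induces a choice function C where C(S) is the highest-ranked acceptable bundle contained in S, and substitutability requires that an element chosen from S stays chosen from any smaller set S' ⊆ S that still contains it. The brute-force checker below enumerates all subsets, so it is exponential in |U| — precisely what the paper's polynomial-time algorithm avoids. It is our own illustration of the definition, not the paper's method.

```python
from itertools import combinations

def choice(S, preference_list):
    """C(S): the most preferred bundle contained in S, or the empty set.

    preference_list is a strict ranking of acceptable bundles, best first.
    """
    S = frozenset(S)
    for candidate in preference_list:
        if candidate <= S:                # first listed bundle inside S wins
            return candidate
    return frozenset()

def is_substitutable(U, preference_list):
    """Exponential-time check: x in C(S) and x in S' <= S implies x in C(S')."""
    subsets = [frozenset(c) for r in range(len(U) + 1)
               for c in combinations(U, r)]
    for S in subsets:
        chosen = choice(S, preference_list)
        for Sp in subsets:
            if Sp <= S:
                kept = chosen & Sp        # chosen elements still available
                if kept and not kept <= choice(Sp, preference_list):
                    return False
    return True
```

For example, the list [{1,2}, {1}, {2}] induces a substitutable choice function, while a list containing only the bundle {1,2} treats 1 and 2 as complements and fails the test.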
  15. By: Tom Auld; Oliver Linton
    Abstract: We study the behaviour of the Betfair betting market and the sterling/dollar exchange rate (futures price) during 24 June 2016, the night of the EU referendum. We investigate how the two markets responded to the announcement of the voting results. We employ a Bayesian updating methodology to update prior opinion about the likelihood of the final outcome of the vote. We then relate the voting model to the real-time evolution of the market-determined prices as results are announced. We find that although both markets appear to be inefficient in absorbing the new information contained in vote outcomes, the betting market is apparently less inefficient than the FX market. The different rates of convergence to fundamental value between the two markets lead to highly profitable arbitrage opportunities.
    Keywords: EU Referendum, prediction markets, machine learning, efficient markets hypothesis, pairs trading, cointegration, Bayesian methods, exchange rates.
    Date: 2018
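The Bayesian updating idea — revise a prior on the final national vote share as each area declares, and map the posterior into a win probability — can be illustrated with a conjugate normal-normal toy model. Everything here (the prior, the area-level likelihood, and all numbers) is our own stylised assumption, not the authors' model.

```python
import numpy as np
from math import erf, sqrt

# Toy real-time Bayesian updating of a referendum outcome probability.
rng = np.random.default_rng(3)

mu, tau2 = 0.48, 0.02 ** 2        # prior: national Leave share ~ N(mu, tau2)
s2 = 0.05 ** 2                    # each area result = share + N(0, s2) noise
true_share = 0.52                 # ground truth used to simulate declarations
declarations = true_share + rng.normal(0.0, np.sqrt(s2), 200)

history = []
for obs in declarations:
    # Conjugate normal-normal update: precisions add, the posterior mean is
    # a precision-weighted average of the prior mean and the new result.
    post_prec = 1.0 / tau2 + 1.0 / s2
    mu = (mu / tau2 + obs / s2) / post_prec
    tau2 = 1.0 / post_prec
    # Implied probability that Leave wins, i.e. P(share > 0.5) under the
    # current posterior, via the normal CDF.
    p_leave = 0.5 * (1.0 - erf((0.5 - mu) / sqrt(2.0 * tau2)))
    history.append(p_leave)
```

In the paper's setting, the interesting comparison is between a `history`-like model-implied probability path and the prices observed on Betfair and in FX futures: lags between the two are what generate the arbitrage the authors document.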

This nep-cmp issue is ©2018 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.