nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒04‒15
thirteen papers chosen by



  1. Robust Mathematical Formulation and Implementation of Agent-Based Computational Economic Market Models By Maximilian Beikirch; Simon Cramer; Martin Frank; Philipp Otte; Emma Pabich; Torsten Trimborn
  2. Empirical Asset Pricing via Machine Learning By Shihao Gu; Bryan T. Kelly; Dacheng Xiu
  3. Feature Engineering for Mid-Price Prediction Forecasting with Deep Learning By Adamantios Ntakaris; Giorgio Mirone; Juho Kanniainen; Moncef Gabbouj; Alexandros Iosifidis
  4. Identifying effects of farm subsidies on structural change using neural networks By Storm, Hugo; Heckelei, Thomas; Baylis, Kathy; Mittenzwei, Klaus
  5. Classifying occupations using web-based job advertisements: an application to STEM and creative occupations By Antonio Lima; Hasan Bakhshi
  6. Model-Free Reinforcement Learning for Financial Portfolios: A Brief Survey By Yoshiharu Sato
  7. Accounting for the distributional effects of the 2007-2008 crisis and the Economic Adjustment Program in Portugal. By SOLOGON Denisa; ALMEIDA Vanda; VAN KERM Philippe
  8. (Martingale) Optimal Transport and Anomaly Detection with Neural Networks: A Primal-Dual Algorithm By Pierre Henry-Labordère
  9. Enhancing Time Series Momentum Strategies Using Deep Neural Networks By Bryan Lim; Stefan Zohren; Stephen Roberts
  10. Text Data Analysis Using Latent Dirichlet Allocation: An Application to FOMC Transcripts By Hali Edison; Hector Carcel
  11. Simulation of the Impacts of Value-Added-Tax Increases on Welfare and Poverty in Vietnam By Nguyen, Cuong
  12. 25 Years of European Merger Control By Pauline Affeldt; Tomaso Duso; Florian Szücs
  13. New Digital Technologies and Heterogeneous Employment and Wage Dynamics in the United States: Evidence from Individual-Level Data By Fossen, Frank M.; Sorgner, Alina

  1. By: Maximilian Beikirch; Simon Cramer; Martin Frank; Philipp Otte; Emma Pabich; Torsten Trimborn
    Abstract: Monte Carlo simulations of agent-based models have become a widely used modeling approach in science, and especially in the economics literature. In many applications the number of agents is huge and the models are formulated as a large system of difference equations. In this study we discuss four numerical aspects, which we exemplify with two agent-based computational economic market models: the Levy-Levy-Solomon model and the Franke-Westerhoff model. First, we discuss finite-size effects present in the Levy-Levy-Solomon model and show that this behavior originates from the scaling within the model. Second, we discuss the impact of a low-quality random number generator on the simulation output. Third, we discuss the continuous formulation of difference equations and its impact on the model behavior. Finally, we show that a continuous formulation makes it possible to employ appropriate numerical solvers and thereby obtain correct simulation results. We conclude that it is of utmost importance to simulate the model with a large number of agents in order to exclude finite-size effects, and to use a well-tested pseudo-random number generator. Furthermore, we argue that a continuous formulation of agent-based models is advantageous, since it allows the application of proper numerical methods and admits a unique continuum limit.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.04951&r=all
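    A minimal sketch of the finite-size point, under stated assumptions: this is a toy multiplicative wealth dynamic, not the Levy-Levy-Solomon or Franke-Westerhoff model, and it uses NumPy's default PCG64 generator as an example of a well-tested PRNG. The across-seed dispersion of the aggregate observable shrinks roughly like 1/sqrt(N), so small-N runs can show spurious fluctuations.

        import numpy as np

        def toy_market(n_agents, n_steps, seed):
            # Toy multiplicative wealth dynamic; NOT the LLS/FW models.
            rng = np.random.default_rng(seed)  # PCG64, a well-tested PRNG
            wealth = np.ones(n_agents)
            for _ in range(n_steps):
                # each agent receives an i.i.d. random return
                wealth *= 1.0 + rng.normal(0.0, 0.02, size=n_agents)
            return wealth.mean()  # aggregate (mean-field) observable

        # dispersion of the aggregate across seeds shrinks roughly like 1/sqrt(N)
        for n in (100, 10_000, 1_000_000):
            runs = [toy_market(n, 20, seed) for seed in range(10)]
            print(n, np.std(runs))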
  2. By: Shihao Gu (University of Chicago - Booth School of Business); Bryan T. Kelly (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER)); Dacheng Xiu (University of Chicago - Booth School of Business)
    Abstract: We synthesize the field of machine learning with the canonical problem of empirical asset pricing: measuring asset risk premia. In the familiar empirical setting of cross section and time series stock return prediction, we perform a comparative analysis of methods in the machine learning repertoire, including generalized linear models, dimension reduction, boosted regression trees, random forests, and neural networks. At the broadest level, we find that machine learning offers an improved description of expected return behavior relative to traditional forecasting methods. Our implementation establishes a new standard for accuracy in measuring risk premia, summarized by an unprecedented out-of-sample return prediction R^2. We identify the best performing methods (trees and neural nets) and trace their predictive gains to their allowance of nonlinear predictor interactions missed by other methods. Lastly, we find that all methods agree on the same small set of dominant predictive signals, which includes variations on momentum, liquidity, and volatility. Improved risk premia measurement through machine learning can simplify the investigation into economic mechanisms of asset pricing and justifies its growing role in innovative financial technologies.
    Keywords: Machine Learning, Big Data, Return Prediction, Cross-Section of Returns, Ridge Regression, (Group) Lasso, Elastic Net, Random Forest, Gradient Boosting, (Deep) Neural Networks, Fintech
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp1871&r=all
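    A minimal sketch of such a model horse race on synthetic data (the paper's firm-characteristics panel and tuning are not reproduced; the predictors and the weak nonlinear signal below are hypothetical). The out-of-sample R^2 is computed against a zero-return benchmark, as Gu, Kelly and Xiu define it.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # hypothetical stand-in for a characteristics panel: X = predictors,
        # y = next-period excess returns with a weak nonlinear signal plus noise
        X = rng.normal(size=(5000, 20))
        y = 0.05 * X[:, 0] * X[:, 1] + 0.03 * X[:, 2] + rng.normal(0, 1, 5000)
        X_tr, X_te, y_tr, y_te = X[:4000], X[4000:], y[:4000], y[4000:]

        def r2_oos(y_true, y_pred):
            # out-of-sample R^2 against a zero-return benchmark
            return 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum(y_true ** 2)

        models = [("ridge", Ridge(alpha=1.0)),
                  ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                  ("mlp", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500,
                                       random_state=0))]
        for name, model in models:
            model.fit(X_tr, y_tr)
            print(name, r2_oos(y_te, model.predict(X_te)))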
  3. By: Adamantios Ntakaris; Giorgio Mirone; Juho Kanniainen; Moncef Gabbouj; Alexandros Iosifidis
    Abstract: Mid-price movement prediction based on limit order book (LOB) data is a challenging task due to the complexity and dynamics of the LOB. So far, there have been very few attempts at extracting relevant features from LOB data. In this paper, we address this problem by designing a new set of handcrafted features and performing an extensive experimental evaluation on both liquid and illiquid stocks. More specifically, we implement a new set of econometric features that capture statistical properties of the underlying securities for the task of mid-price prediction. Moreover, we develop a new experimental protocol for online learning that treats the task as a multi-objective optimization problem and predicts i) the direction of the next price movement and ii) the number of order book events that occur until the change takes place. In order to predict the mid-price movement, the features are fed into nine different deep learning models based on multi-layer perceptrons (MLP), convolutional neural networks (CNN) and long short-term memory (LSTM) neural networks. The performance of the proposed method is then evaluated on liquid and illiquid stocks drawn from TotalView-ITCH data for US and Nordic markets, respectively. For some stocks, the results suggest that the correct choice of feature set and model can lead to successful prediction of how long it takes for a stock price movement to occur.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.05384&r=all
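    A taste of the handcrafted-feature step, assuming hypothetical column names for best-level quotes (the paper's feature set is far richer): mid-price, quoted spread, depth imbalance, plus the label for the direction of the next mid-price move.

        import pandas as pd

        # hypothetical LOB snapshots: best bid/ask prices and sizes per event
        lob = pd.DataFrame({
            "bid_px": [100.0, 100.0, 100.1, 100.1],
            "ask_px": [100.2, 100.1, 100.2, 100.3],
            "bid_sz": [500, 300, 400, 350],
            "ask_sz": [200, 250, 150, 600],
        })

        lob["mid"] = (lob["bid_px"] + lob["ask_px"]) / 2        # mid-price
        lob["spread"] = lob["ask_px"] - lob["bid_px"]           # quoted spread
        lob["imbalance"] = (lob["bid_sz"] - lob["ask_sz"]) / (  # depth imbalance
            lob["bid_sz"] + lob["ask_sz"])

        # label i): direction of the next mid-price movement (+1, 0, -1)
        lob["direction"] = lob["mid"].diff().shift(-1).apply(
            lambda d: 0 if pd.isna(d) or d == 0 else (1 if d > 0 else -1))
        print(lob)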
  4. By: Storm, Hugo; Heckelei, Thomas; Baylis, Kathy; Mittenzwei, Klaus
    Abstract: Farm subsidies are commonly motivated by their promise to help keep families in agriculture and reduce farm structural change. Many of these subsidies are designed to target smaller farms, and include production caps or more generous funding for smaller levels of activity. Agricultural economists have long studied how such subsidies affect production choices and the resulting farm structure. Traditional econometric models are typically restricted to detecting average effects of subsidies on certain farm types or regions and cannot easily incorporate complex subsidy design or the multi-output, heterogeneous nature of many farming activities. Programming approaches may help address the broad scope of agricultural production but offer fewer empirically grounded measures of behavioral and technological parameters. This paper uses a recurrent neural network and detailed panel data to estimate the effect of subsidies on the structure of Norwegian farming. Specifically, we use the model to determine how varying marginal subsidies have affected the distribution of Norwegian farms and their range of agricultural activities. We use the predictive capacity of this flexible, multi-output machine learning model to identify the effects of agricultural subsidies on farm activity and structure, as well as their detailed distributional effects.
    Keywords: Agricultural and Food Policy, Farm Management, Land Economics/Use, Research Methods/Statistical Methods
    Date: 2019–04–10
    URL: http://d.repec.org/n?u=RePEc:ags:ubfred:287343&r=all
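    The abstract does not spell out the network architecture; as an illustration of the general idea only, a minimal PyTorch sketch of a recurrent model mapping a farm's multi-year history of activities and subsidy rates to next-period activity levels (all names and dimensions hypothetical):

        import torch
        import torch.nn as nn

        class FarmRNN(nn.Module):
            # Toy recurrent model: farm history -> next-period activity levels.
            def __init__(self, n_inputs=12, n_hidden=64, n_activities=8):
                super().__init__()
                self.rnn = nn.LSTM(n_inputs, n_hidden, batch_first=True)
                self.head = nn.Linear(n_hidden, n_activities)

            def forward(self, x):             # x: (farms, years, features)
                out, _ = self.rnn(x)
                return self.head(out[:, -1])  # predict next-period activities

        model = FarmRNN()
        history = torch.randn(32, 10, 12)     # 32 farms, 10 years, 12 features
        print(model(history).shape)           # -> torch.Size([32, 8])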
  5. By: Antonio Lima; Hasan Bakhshi
    Abstract: Rapid technological, social and economic change is having significant impacts on the nature of jobs. In fast-changing environments it is crucial that policymakers have a clear and timely picture of the labour market. Policymakers use standardised occupational classifications, such as the Office for National Statistics' Standard Occupational Classification (SOC) in the UK, to analyse the labour market. These permit the occupational composition of the workforce to be tracked on a consistent and transparent basis over time and across industrial sectors. However, such systems are by their nature costly to maintain, slow to adapt and not very flexible. For that reason, additional tools are needed. At the same time, policymakers around the world are revisiting how active skills development policies can be used to equip workers with the capabilities needed to meet new labour market realities. In parallel, there is a desire for a more granular understanding of the skills combinations that occupations require, in part so that policymakers are better sighted on how individuals can redeploy those skills as and when employer demands change further. In this paper, we investigate the possibility of complementing traditional occupational classifications with more flexible methods centred on employers' characterisations of the skills and knowledge requirements of occupations, as presented in job advertisements. We use data science methods to classify job advertisements as STEM (Science, Technology, Engineering and Mathematics) or non-STEM and as creative or non-creative, based on the content of ads in a database of online UK job ads belonging to the Boston-based job market analytics company Burning Glass Technologies. In doing so, we first characterise each SOC code in terms of its skill make-up; this step allows us to describe each SOC skillset as a mathematical object that can be compared with other skillsets. We then develop a classifier that predicts the SOC code of a job based on its required skills. Finally, we develop two classifiers that decide whether a job vacancy is STEM/non-STEM and creative/non-creative, based again on its skill requirements.
    Keywords: labour demand, occupational classification, online job adverts, big data, machine learning, STEM, STEAM, creative economy
    JEL: C18 J23 J24
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:nsr:escoed:escoe-dp-2018-08&r=all
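    A minimal sketch of the final classification step, under stated assumptions: each ad is represented by its listed skills as a short text, the labels are illustrative, and a TF-IDF plus logistic-regression pipeline stands in for the paper's classifiers.

        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        # hypothetical training ads described by their skill requirements
        skills = [
            "python statistics machine learning data analysis",
            "graphic design adobe illustrator typography branding",
            "mechanical engineering CAD thermodynamics matlab",
            "copywriting storytelling social media content creation",
        ]
        is_stem = [1, 0, 1, 0]  # illustrative labels

        clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
        clf.fit(skills, is_stem)
        print(clf.predict(["numerical modelling python simulation"]))  # -> [1]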
  6. By: Yoshiharu Sato
    Abstract: Financial portfolio management is one of the problems most frequently encountered in the investment industry. Nevertheless, it is not widely recognized that both the Kelly criterion and risk parity collapse into mean-variance optimization under some conditions, which implies that a universal solution to the portfolio optimization problem could potentially exist. In fact, the process of sequentially computing optimal component weights that maximize the portfolio's expected return subject to a certain risk budget can be reformulated as a discrete-time Markov Decision Process (MDP), and hence as a stochastic optimal control problem, where the system being controlled is a portfolio consisting of multiple investment components and the control is its component weights. Consequently, the problem could be solved using model-free Reinforcement Learning (RL) without knowing the specific component dynamics. By examining existing value-based and policy-based model-free RL methods for the portfolio optimization problem, we identify some of the key unresolved questions and the difficulties that today's portfolio managers face in applying model-free RL to their investment portfolios.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.04973&r=all
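    The MDP framing can be made concrete with a small, self-contained sketch (toy return dynamics and cost model; no particular RL algorithm from the survey): the observation is the realized returns, the action is the target weight vector, and the reward is the portfolio log return net of proportional trading costs.

        import numpy as np

        class PortfolioEnv:
            # Toy portfolio MDP: action = target weights, reward = net log return.
            def __init__(self, n_assets=3, cost=0.001, seed=0):
                self.rng = np.random.default_rng(seed)
                self.n_assets, self.cost = n_assets, cost
                self.weights = np.ones(n_assets) / n_assets

            def step(self, target_weights):
                # toy i.i.d. returns stand in for real asset dynamics
                returns = self.rng.normal(0.0005, 0.01, self.n_assets)
                turnover = np.abs(target_weights - self.weights).sum()
                gross = float(target_weights @ returns)
                reward = np.log1p(gross) - self.cost * turnover
                self.weights = target_weights
                return returns, reward  # next observation, reward

        env = PortfolioEnv()
        obs, r = env.step(np.array([0.5, 0.3, 0.2]))
        print(r)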
  7. By: SOLOGON Denisa; ALMEIDA Vanda; VAN KERM Philippe
    Abstract: This paper develops a new method to model the household disposable income distribution and to decompose changes in this distribution (or in functionals such as inequality measures) over time. It integrates micro-econometric and microsimulation approaches, combining a flexible parametric model of the distribution of market income with the EUROMOD microsimulation model, which simulates the value of taxes and benefits. The method allows the quantification of the contributions of four main factors to changes in the disposable income distribution between any two years: (i) labour market structure; (ii) returns; (iii) demographic composition; and (iv) the tax-benefit system. We apply this framework to the study of changes in the income distribution in Portugal between 2007 and 2013, accounting for the distributional effects of the 2007-2008 crisis and of the policies that followed, in particular the Economic Adjustment Program (EAP). Results show that these effects were substantial and reflected markedly different developments over two periods: 2007-2009, when stimulus packages produced substantial income gains at the bottom of the distribution and a decrease in income inequality; and 2010-2013, when the crisis and austerity measures took a toll on the incomes of Portuguese households, particularly those at the bottom and top of the distribution, leading to an increase in income inequality.
    Keywords: income distribution; inequality; decompositions; microsimulation; tax-benefit policies; crisis; austerity; over-time comparison
    JEL: I38
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:irs:cepswp:2019-05&r=all
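    The decomposition logic can be illustrated with a stylized swap experiment (hypothetical lognormal incomes and deliberately crude stand-ins for the two years' tax-benefit systems; the paper uses EUROMOD for this step): apply one year's tax-benefit rules to another year's market incomes and compare inequality.

        import numpy as np

        def gini(x):
            # Gini coefficient of a sample, x sorted ascending
            x = np.sort(x)
            n = len(x)
            return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

        rng = np.random.default_rng(1)
        market_2007 = rng.lognormal(10.0, 0.6, 10_000)  # hypothetical incomes
        market_2013 = rng.lognormal(9.9, 0.7, 10_000)

        # crude stand-ins for the two years' tax-benefit systems
        def tb_2007(y): return y * (1 - 0.25) + 2000.0  # flat tax + flat benefit
        def tb_2013(y): return y * (1 - 0.30) + 1500.0  # higher tax, lower benefit

        observed = gini(tb_2013(market_2013))
        counterfactual = gini(tb_2007(market_2013))  # 2013 incomes, 2007 policy
        print("tax-benefit contribution to inequality change:",
              observed - counterfactual)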
  8. By: Pierre Henry-Labordère (SOCIETE GENERALE)
    Abstract: In this paper, we introduce a primal-dual algorithm for solving (martingale) optimal transportation problems with cost functions satisfying the twist condition, close to the one recently used for training generative adversarial networks. As additional applications, we consider anomaly detection and automatic generation of financial data.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.04546&r=all
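    For orientation, the duality being exploited can be stated in its standard form (the paper's precise primal-dual scheme under the twist condition is not reproduced here). In LaTeX notation, the dual of the martingale optimal transport problem reads

        \sup_{\varphi,\,\psi,\,h} \; \int \varphi \, d\mu + \int \psi \, d\nu
        \quad \text{s.t.} \quad
        \varphi(x) + \psi(y) + h(x)\,(y - x) \le c(x, y),

    where \mu and \nu are the marginal laws, c is the cost function, and the term h(x)(y - x) enforces the martingale constraint E[Y | X] = X; dropping h gives the classical Kantorovich dual. Parametrizing the potentials by neural networks and maximizing this objective is what links the problem to GAN-style training.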
  9. By: Bryan Lim; Stefan Zohren; Stephen Roberts
    Abstract: While time series momentum is a well-studied phenomenon in finance, common strategies require the explicit definition of both a trend estimator and a position sizing rule. In this paper, we introduce Deep Momentum Networks -- a hybrid approach which injects deep-learning-based trading rules into the volatility scaling framework of time series momentum. The model simultaneously learns both trend estimation and position sizing in a data-driven manner, with networks trained directly by optimising the Sharpe ratio of the signal. Backtesting on a portfolio of 88 continuous futures contracts, we demonstrate that the Sharpe-optimised LSTM outperformed traditional methods by more than a factor of two in the absence of transaction costs, and continues to outperform when transaction costs of up to 2-3 basis points are considered. To account for more illiquid assets, we also propose a turnover regularisation term which trains the network to factor in costs at run-time.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.04912&r=all
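    The core trick -- training directly on a Sharpe-ratio objective rather than a forecasting loss -- fits in a few lines. A minimal PyTorch sketch with hypothetical tensors (the paper's volatility scaling and turnover regularisation are omitted):

        import torch

        def negative_sharpe(positions, returns, eps=1e-8):
            # loss = -annualised Sharpe ratio of the strategy returns
            strat = positions * returns
            return -strat.mean() / (strat.std() + eps) * (252 ** 0.5)

        raw = torch.randn(500, requires_grad=True)  # stand-in for LSTM output
        positions = torch.tanh(raw)                 # bounded positions in (-1, 1)
        returns = 0.001 + 0.01 * torch.randn(500)   # hypothetical daily returns
        loss = negative_sharpe(positions, returns)
        loss.backward()                             # gradients reach the network
        print(raw.grad.shape)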
  10. By: Hali Edison (Williams College); Hector Carcel (Bank of Lithuania)
    Abstract: This paper applies Latent Dirichlet Allocation (LDA), a machine learning algorithm, to analyze the transcripts of the U.S. Federal Open Market Committee (FOMC) covering the period 2003–2012 and comprising 45,346 passages. The goal is to detect the evolution of the different topics discussed by the members of the FOMC. The results of this exercise show that discussions of economic modelling were dominant during the Global Financial Crisis (GFC), with an increase in discussion of the banking system in the years following the GFC. Discussions of communication gained relevance toward the end of the sample as the Federal Reserve adopted a more transparent approach. The paper suggests that LDA analysis could be further exploited by researchers at central banks and other institutions to identify topic priorities in relevant documents such as FOMC transcripts.
    Keywords: FOMC, Text data analysis, Transcripts, Latent Dirichlet Allocation
    JEL: E52 E58 D78
    Date: 2019–04–05
    URL: http://d.repec.org/n?u=RePEc:lie:dpaper:11&r=all
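    A minimal sketch of such an LDA pipeline in scikit-learn, with a handful of invented passages standing in for the 45,346 FOMC transcript passages:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        passages = [  # stand-ins for FOMC transcript passages
            "inflation expectations remain anchored near the target",
            "bank capital and liquidity in the financial system",
            "communication and transparency of the policy statement",
            "model forecasts of output growth and unemployment",
        ]

        counts = CountVectorizer(stop_words="english").fit(passages)
        X = counts.transform(passages)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

        words = counts.get_feature_names_out()
        for k, topic in enumerate(lda.components_):
            top = [words[i] for i in topic.argsort()[-5:]]
            print(f"topic {k}:", top)  # most probable words per topic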
  11. By: Nguyen, Cuong
    Abstract: This study predicts the impact of increasing VAT on household welfare, as measured by average expenditure and the poverty rate, in Vietnam. We forecast the impact of two scenarios for increasing VAT. Scenario 1 increases VAT rates by a factor of 1.2, i.e., the 5% and 10% VAT rates rise to 6% and 12%, respectively. Scenario 2 applies a common 10% rate to all items, i.e., commodities currently taxed at 5% become taxed at 10%. The results show that Scenario 1 has a stronger impact on households than Scenario 2. In particular, Scenario 1 reduces household expenditure by 0.89%, while Scenario 2 reduces it by 0.32%. Under Scenario 1 the poverty rate increases by 0.26 percentage points, while under Scenario 2 it increases by 0.22 percentage points. The number of poor people increases by approximately 240 thousand and 202 thousand in Scenarios 1 and 2, respectively. Regarding the impact on poverty, the VAT increase affects only near-poor households: better-off households are also affected, but not to the point of falling into poverty.
    Keywords: Value added tax, simulation, poverty, household expenditure, Vietnam
    JEL: H2 O2
    Date: 2017–08–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:93139&r=all
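    The mechanics can be illustrated with a stylized calculation (all numbers hypothetical and full pass-through of VAT to consumer prices assumed; the study itself works from household survey data):

        # Scenario 1: multiply both VAT rates by 1.2 (5% -> 6%, 10% -> 12%)
        budget = {"food": 600.0, "other": 400.0}  # monthly spending, hypothetical
        old_vat = {"food": 0.05, "other": 0.10}
        new_vat = {k: r * 1.2 for k, r in old_vat.items()}

        def real_expenditure(budget, old, new):
            # deflate each item by its price increase (1 + new) / (1 + old)
            return sum(v / ((1 + new[k]) / (1 + old[k])) for k, v in budget.items())

        before = sum(budget.values())
        after = real_expenditure(budget, old_vat, new_vat)
        print(f"real expenditure falls {100 * (1 - after / before):.2f}%")

        poverty_line = 990.0  # hypothetical: a near-poor household crosses it
        print("poor before:", before < poverty_line,
              "poor after:", after < poverty_line)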
  12. By: Pauline Affeldt; Tomaso Duso; Florian Szücs
    Abstract: We study the evolution of the EC’s merger decision procedure over the first 25 years of European competition policy. Using a novel dataset constructed at the level of the relevant markets and containing all merger cases over the 1990-2014 period, we evaluate how consistently arguments related to structural market parameters were applied over time. Using non-parametric machine learning techniques, we find that the importance of market shares and concentration measures has declined while the importance of barriers to entry and the risk of foreclosure has increased in the EC’s merger assessment following the 2004 merger policy reform.
    Keywords: Merger policy, DG Competition, causal forests
    JEL: K21 L40
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1797&r=all
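    The paper's causal-forest estimates are not reproduced here, but the flavour of asking which structural market parameters drive decisions can be sketched with a generic random forest and permutation importances instead (variables, labels and the decision rule below are all invented):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.uniform(0, 1, n),   # combined market share
            rng.uniform(0, 1, n),   # concentration (e.g. HHI)
            rng.integers(0, 2, n),  # entry barriers flagged
            rng.integers(0, 2, n),  # foreclosure risk flagged
        ])
        # hypothetical decision rule standing in for EC interventions
        y = (0.5 * X[:, 2] + 0.5 * X[:, 3] + 0.2 * X[:, 0]
             + rng.normal(0, 0.2, n)) > 0.6

        forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        imp = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
        for name, score in zip(["share", "HHI", "entry", "foreclosure"],
                               imp.importances_mean):
            print(f"{name}: {score:.3f}")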
  13. By: Fossen, Frank M. (University of Nevada, Reno); Sorgner, Alina (John Cabot University)
    Abstract: We investigate heterogeneous effects of new digital technologies on individual-level employment and wage dynamics in the U.S. labor market over the period 2011-2018. We employ three measures that capture different aspects of the impact of new digital technologies on occupations. The first, developed by Frey and Osborne (2017), assesses the computerization risk of occupations; the second, developed by Felten et al. (2018), provides an estimate of recent advances in artificial intelligence (AI); and the third assesses the suitability of occupations for machine learning (Brynjolfsson et al., 2018), a subfield of AI. Our empirical analysis is based on large representative panel data, the matched monthly Current Population Survey (CPS) and its Annual Social and Economic Supplement (ASEC). The results suggest that the effects of new digital technologies on employment stability and wage growth are already observable at the individual level. High computerization risk is associated with a high likelihood of switching one's occupation or becoming non-employed, as well as with lower wage growth. However, advances in AI are likely to improve an individual's job stability and wage growth. We further document that the effects are heterogeneous: in particular, individuals with high levels of formal education and older workers are most affected by new digital technologies.
    Keywords: digitalization, artificial intelligence, machine learning, employment stability, unemployment, wage dynamics
    JEL: J22 J23 O33
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12242&r=all
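    A minimal sketch of the empirical design with invented data and column names: occupation-level technology scores enter an individual-level regression of occupation switching on the three measures, here as a statsmodels logit.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "comp_risk": rng.uniform(0, 1, n),   # Frey-Osborne-style risk score
            "ai_advance": rng.uniform(0, 1, n),  # Felten et al.-style AI score
            "sml": rng.uniform(0, 1, n),         # suitability for machine learning
        })
        # hypothetical switching propensity: risk raises it, AI advances lower it
        p = 1 / (1 + np.exp(-(-1.0 + 2.0 * df.comp_risk - 1.0 * df.ai_advance)))
        df["switched"] = rng.binomial(1, p)

        model = smf.logit("switched ~ comp_risk + ai_advance + sml", data=df).fit()
        print(model.params)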

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.