nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒07‒09
seventeen papers chosen by

  1. A Machine Learning Framework for Stock Selection By XingYu Fu; JinHong Du; YiFeng Guo; MingWen Liu; Tao Dong; XiuWen Duan
  2. Capital Requirements, Risk-Taking and Welfare in a Growing Economy By Pierre-Richard Agénor; Luiz A. Pereira da Silva
  3. Sustainability of the pension system in Macedonia: A comprehensive analysis and reform proposal with MK-PENS - dynamic microsimulation model By Blagica Petreski; Pavle Gacov
  4. Trade Network Reconstruction and Simulation with Changes in Trade Policy By Yuichi Ikeda; Hiroshi Iyetomi
  5. Estimating option prices using multilevel particle filters By P. P. Osei; A. Jasra
  6. A Quantitative Analysis of Possible Futures of Autonomous Transport By Christopher L. Benson; Pranav D Sumanth; Alina P Colling
  7. Textual Sentiment, Option Characteristics, and Stock Return Predictability By Yi-Hsuan Chen, Cathy; Fengler, Matthias; Härdle, Wolfgang Karl; Liu, Yanchu
  8. Heterogeneous Agent Models in Finance By Roberto Dieci; Xue-Zhong He
  9. General multilevel Monte Carlo methods for pricing discretely monitored Asian options By Nabil Kahale
  10. Order-book modelling and market making strategies By Xiaofei Lu; Frédéric Abergel
  11. A Deep Learning Based Illegal Insider-Trading Detection and Prediction Technique in Stock Market By Sheikh Rabiul Islam
  12. Can an Emission Trading Scheme really reduce CO2 emissions in the short term? Evidence from a maritime fleet composition and deployment model By Gu, Yewen; Wallace, Stein W.; Wang, Xin
  13. On the backtesting of trading strategies By Yen H. Lok
  14. Parameter Learning and Change Detection Using a Particle Filter With Accelerated Adaptation By Karol Gellert; Erik Schlögl
  15. Hedonic Recommendations: An Econometric Application on Big Data By Okay Gunes
  16. A note on the predictive power of survey data in nowcasting euro area GDP By Kurz-Kim, Jeong-Ryeol
  17. Economic Policy Uncertainty Spillovers in Booms and Busts By Giovanni Caggiano; Efrem Castelnuovo

  1. By: XingYu Fu; JinHong Du; YiFeng Guo; MingWen Liu; Tao Dong; XiuWen Duan
    Abstract: This paper demonstrates how to apply machine learning algorithms to distinguish good stocks from bad stocks. To this end, we construct 244 technical and fundamental features to characterize each stock, and label stocks according to their ranking with respect to the return-to-volatility ratio. Algorithms ranging from traditional statistical learning methods to recently popular deep learning methods, e.g. Logistic Regression (LR), Random Forest (RF), Deep Neural Network (DNN), and Stacking, are trained to solve the classification task. A Genetic Algorithm (GA) is also used to implement feature selection. The effectiveness of the stock selection strategy is validated in the Chinese stock market in both statistical and practical respects, showing that: 1) Stacking outperforms the other models, reaching an AUC score of 0.972; 2) the Genetic Algorithm picks a subset of 114 features and the prediction performance of all models remains almost unchanged after the selection procedure, which suggests some features are indeed redundant; 3) LR and DNN are radical models, RF is a risk-neutral model, and Stacking lies somewhere between DNN and RF; 4) the portfolios constructed by our models outperform the market average in back tests.
    Date: 2018–06
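    The labeling scheme the abstract describes — ranking stocks by a return-to-volatility (Sharpe-like) ratio and tagging the extremes as good or bad — can be sketched as follows. This is a minimal illustration, not the authors' code; the 30% top/bottom cutoffs and the exclusion of the middle band are assumptions.

    ```python
    import numpy as np

    def label_by_return_to_volatility(returns, top_frac=0.3, bottom_frac=0.3):
        """Label stocks 1 ('good') / 0 ('bad') by ranking their
        return-to-volatility ratio; middle-band stocks get label -1.

        returns: array of shape (n_stocks, n_periods) of per-period returns.
        """
        mean = returns.mean(axis=1)
        vol = returns.std(axis=1) + 1e-12          # avoid division by zero
        ratio = mean / vol                          # Sharpe-like score
        order = np.argsort(ratio)                   # ascending by score
        n = len(ratio)
        labels = np.full(n, -1)
        labels[order[: int(n * bottom_frac)]] = 0   # worst-ranked stocks
        labels[order[-int(n * top_frac):]] = 1      # best-ranked stocks
        return labels

    # usage: 10 synthetic stocks with 250 daily returns each
    rng = np.random.default_rng(0)
    rets = rng.normal(0.001, 0.02, size=(10, 250))
    labels = label_by_return_to_volatility(rets)
    ```

    The resulting 0/1 labels would then feed any of the classifiers the abstract lists (LR, RF, DNN, Stacking).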
  2. By: Pierre-Richard Agénor; Luiz A. Pereira da Silva
    Abstract: The effects of capital requirements on risk-taking and welfare are studied in a stochastic overlapping generations model of endogenous growth with banking, limited liability, and government guarantees. Capital producers face a choice between a safe technology and a risky (but socially inefficient) technology, and bank risk-taking is endogenous. Setting the capital adequacy ratio above a structural threshold can eliminate the equilibrium with risky loans (and thus inefficient risk-taking), but numerical simulations show that this may entail a welfare loss. In addition, the optimal ratio may be too high in practice and may concomitantly require a broadening of the perimeter of regulation and a strengthening of financial supervision to prevent disintermediation and distortions in financial markets.
    Keywords: Capital Requirements, Bank Risk-Taking, Investment, Financial Stability, Economic Growth, Capital Goods, Financial Regulation, Financial Intermediaries, Financial Markets, Risky Investments
    JEL: O41 G28 E44
    Date: 2017–03
  3. By: Blagica Petreski; Pavle Gacov
    Date: 2018–02
  4. By: Yuichi Ikeda; Hiroshi Iyetomi
    Abstract: The interdependent nature of the global economy has become stronger with increases in international trade and investment. We propose a new model to reconstruct the international trade network and associated cost network by maximizing entropy based on local information about inward and outward trade. We show that the trade network can be successfully reconstructed using the proposed model. In addition to this reconstruction, we simulated structural changes in the international trade network caused by changing trade tariffs in the context of government trade policy. The simulation for the FOOD category shows that imports of FOOD from the U.S. to Japan increase drastically when the import cost is halved. Meanwhile, the simulation for the MACHINERY category shows that exports from Japan to the U.S. decrease drastically when the export cost is doubled, while exports to the EU increase.
    Date: 2018–06
  5. By: P. P. Osei; A. Jasra
    Abstract: Option valuation problems are often solved using standard Monte Carlo (MC) methods. These techniques can often be enhanced using several strategies, especially when one discretizes the dynamics of the underlying asset, which we assume follows a diffusion process. We consider the combination of two methodologies in this direction. The first is the well-known multilevel Monte Carlo (MLMC) method, which is known to reduce the computational effort needed to achieve a given level of mean square error relative to MC in some cases. Sequential Monte Carlo (or particle filter (PF)) methods have also been shown to be beneficial in many option pricing problems, potentially reducing variances by large magnitudes (relative to MC). We propose a multilevel particle filter (MLPF) as an alternative approach to pricing options. The computational savings obtained in using MLPF over PF for pricing both vanilla and exotic options are demonstrated via numerical simulations.
    Date: 2018–06
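    For background, a single-level bootstrap particle filter of the kind the MLPF builds on might look like the sketch below. This is illustrative only: the state follows an Euler-discretized diffusion, the Gaussian observation model and all parameter values are assumptions, and the paper's multilevel coupling is not reproduced.

    ```python
    import numpy as np

    def bootstrap_pf(obs, x0, drift, vol, obs_std, n_particles=500, dt=0.01, seed=0):
        """Minimal bootstrap particle filter for a diffusion observed in noise.

        The latent state follows an Euler-discretized SDE
            x_{t+1} = x_t + drift(x_t)*dt + vol(x_t)*sqrt(dt)*N(0,1),
        and each observation is y_t = x_t + N(0, obs_std^2).
        Returns the filtered posterior mean at each observation time.
        """
        rng = np.random.default_rng(seed)
        x = np.full(n_particles, x0, dtype=float)
        means = []
        for y in obs:
            # propagate every particle one Euler step
            x = x + drift(x) * dt + vol(x) * np.sqrt(dt) * rng.standard_normal(n_particles)
            # weight particles by the Gaussian observation likelihood
            logw = -0.5 * ((y - x) / obs_std) ** 2
            w = np.exp(logw - logw.max())
            w /= w.sum()
            means.append(float(np.sum(w * x)))
            # multinomial resampling to avoid weight degeneracy
            x = x[rng.choice(n_particles, n_particles, p=w)]
        return means

    # usage: filter a mean-reverting (OU-type) state observed in noise
    obs = [0.0] * 20
    means = bootstrap_pf(obs, x0=0.0,
                         drift=lambda x: -x,
                         vol=lambda x: 0.2,
                         obs_std=0.1)
    ```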
  6. By: Christopher L. Benson; Pranav D Sumanth; Alina P Colling
    Abstract: Autonomous ships (AS) used for cargo transport have gained a considerable amount of attention in recent years. They promise benefits such as reduced crew costs, increased safety and increased flexibility. This paper explores the effects of a faster increase in technological performance in maritime shipping achieved by leveraging fast-improving technological domains such as computer processors and advanced energy storage. Based on historical improvement rates of several modes of transport (Cargo Ships, Air, Rail, Trucking), a simplified Markov-chain Monte-Carlo (MCMC) simulation of an intermodal transport model (IMTM) is used to explore the effects of differing technological improvement rates for AS. The results show that traditional shipping modes (Ocean Cargo Ships = 2.6%, Air Cargo = 5.5%, Trucking = 0.6%, Rail = 1.9%, Inland Water Transport = 0.4%) improve at lower annual rates than technologies associated with automation such as Computer Processors (35.6%), Fuel Cells (14.7%) and Automotive Autonomous Hardware (27.9%). The IMTM simulations up to the year 2050 show, first, that the introduction of any mode of autonomous transport will increase competition among lower-cost shipping options, but is unlikely to significantly alter the overall distribution of transport mode costs. Second, if all forms of transport convert to autonomous systems, the uncertainty surrounding the improvement rates yields a complex intermodal transport solution involving several options, all at a much lower cost over time. Ultimately, the research shows a need for more accurate measurement of current autonomous transport costs and of how they are changing over time.
    Date: 2018–06
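    The role of differing improvement rates can be illustrated with a simple compound-growth calculation. The annual rates below are taken from the abstract; the 10x starting gap is a hypothetical figure chosen purely for illustration, and steady exponential improvement (a generalized Moore's law) is an assumption.

    ```python
    import math

    def years_to_match(rate_fast, rate_slow, initial_gap):
        """Years until a technology improving at rate_fast per year closes an
        initial performance gap against an incumbent improving at rate_slow,
        assuming steady exponential improvement.

        initial_gap: incumbent performance / newcomer performance (> 1).
        """
        return math.log(initial_gap) / math.log((1 + rate_fast) / (1 + rate_slow))

    # e.g. computer processors (35.6%/yr) vs. ocean cargo ships (2.6%/yr),
    # starting from a hypothetical 10x performance gap
    gap_years = years_to_match(0.356, 0.026, 10.0)
    ```

    Even a large initial gap closes in under a decade at these rates, which is the intuition behind the paper's interest in automation-related domains.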
  7. By: Yi-Hsuan Chen, Cathy; Fengler, Matthias; Härdle, Wolfgang Karl; Liu, Yanchu
    Abstract: We distill sentiment from a huge assortment of NASDAQ news articles by means of machine learning methods and examine its predictive power in single-stock option markets and equity markets. We provide evidence that single-stock options react to contemporaneous sentiment. Next, examining return predictability, we discover that while option variables indeed predict stock returns, sentiment variables add further informational content. In fact, both in a regression and a trading context, option variables orthogonalized to public and sentimental news are even more informative predictors of stock returns. Distinguishing further between overnight and trading-time news, we find the former to be more informative. From a statistical topic model, we uncover that this is attributable to the differing thematic coverage of the alternate archives. Finally, we show that sentiment disagreement commands a strong positive risk premium above and beyond market volatility and that lagged returns predict future returns in concentrated sentiment environments.
    Keywords: investor disagreement, option markets, overnight information, stock return predictability, textual sentiment, topic model, trading-time information
    JEL: C58 G12 G14
    Date: 2018–06
  8. By: Roberto Dieci (University of Bologna); Xue-Zhong He (Finance Discipline Group, UTS Business School, University of Technology Sydney)
    Abstract: This chapter surveys the state of the art of heterogeneous agent models (HAMs) in finance using jointly theoretical and empirical analysis, combined with numerical and Monte Carlo analysis drawing on the latest developments in computational finance. It provides supporting evidence on the explanatory power of HAMs for various stylized facts and market anomalies through model calibration, estimation, and analysis of economic mechanisms. It presents a unified framework in continuous time to study the impact of historical price information on price dynamics and on the profitability and optimality of fundamental and momentum trading. It demonstrates how HAMs can help to understand stock price co-movements and to build an evolutionary CAPM. It also introduces a new HAM perspective on house price dynamics and an integrated approach to studying the dynamics of limit order markets. The survey provides further insights into the complexity and efficiency of financial markets, as well as policy implications.
    Keywords: Heterogeneity; bounded rationality; heterogeneous agent-based models; stylized facts; asset pricing; housing bubbles; limit order markets; information efficiency; comovement
    Date: 2018–01–01
  9. By: Nabil Kahale
    Abstract: We describe general multilevel Monte Carlo methods that estimate the price of an Asian option monitored at $m$ fixed dates. Our algorithms yield an unbiased estimator with standard deviation $O(\epsilon)$ in $O(m + (1/\epsilon)^{2})$ expected time for a variety of processes such as the Black-Scholes model, the CEV model, Merton's jump-diffusion model, a class of exponential Levy processes and, via the Milstein scheme, processes driven by scalar stochastic differential equations. Using the Euler scheme, our approach estimates the Asian option price with root mean square error $O(\epsilon)$ in $O(m+(\ln(\epsilon)/\epsilon)^{2})$ expected time for processes driven by multidimensional stochastic differential equations. Preliminary numerical experiments confirm that our approach outperforms the conventional Monte Carlo method by a factor of order $m$.
    Date: 2018–05
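    The telescoping construction underlying MLMC can be sketched for a Black-Scholes Asian call as below. This is a minimal illustration with fixed levels and sample sizes; the paper's unbiased estimator and complexity-optimal sample allocation are not reproduced, and all parameter values are assumptions.

    ```python
    import numpy as np

    def mlmc_asian_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                        L=4, M0=2, N=20000, seed=0):
        """Multilevel Monte Carlo estimate of an arithmetic Asian call
        under Black-Scholes with Euler time stepping.

        Level l uses M0 * 2**l time steps; fine and coarse paths on each
        level share Brownian increments, so the correction terms
        E[P_l - P_{l-1}] in the telescoping sum have small variance.
        """
        rng = np.random.default_rng(seed)

        def level_payoff(n_steps, dW):
            dt = T / n_steps
            S = np.full(dW.shape[0], S0)
            avg = np.zeros(dW.shape[0])
            for i in range(n_steps):
                S = S + r * S * dt + sigma * S * dW[:, i]  # Euler step for GBM
                avg += S
            return np.exp(-r * T) * np.maximum(avg / n_steps - K, 0.0)

        estimate = 0.0
        for l in range(L + 1):
            nf = M0 * 2 ** l
            dt = T / nf
            dWf = np.sqrt(dt) * rng.standard_normal((N, nf))
            Pf = level_payoff(nf, dWf)
            if l == 0:
                estimate += Pf.mean()                      # base level E[P_0]
            else:
                dWc = dWf[:, 0::2] + dWf[:, 1::2]          # coarse increments from fine
                Pc = level_payoff(nf // 2, dWc)
                estimate += (Pf - Pc).mean()               # correction E[P_l - P_{l-1}]
        return estimate

    price = mlmc_asian_call()
    ```

    Summing the base level and the low-variance corrections recovers, in expectation, the fine-level price at a fraction of the cost of simulating every path at the finest resolution.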
  10. By: Xiaofei Lu; Frédéric Abergel
    Abstract: Market making is one of the most important aspects of algorithmic trading, and it has been studied quite extensively from a theoretical point of view. The practical implementation of so-called "optimal strategies", however, suffers from the failure of most order book models to faithfully reproduce the behaviour of real market participants. The contribution of this paper is twofold. First, some important statistical properties of order-driven markets are identified, advocating against the use of purely Markovian order book models. Then, market making strategies are designed and their performances compared, based on simulation as well as backtesting. We find that incorporating some simple non-Markovian features in the limit order book greatly improves the performance of market making strategies in a realistic context.
    Date: 2018–06
  11. By: Sheikh Rabiul Islam
    Abstract: The stock market is a nonlinear, nonstationary, dynamic, and complex system. Several factors affect stock market conditions, such as news, social media, expert opinion, political transitions, and natural disasters. In addition, the market must be able to handle illegal insider trading, which impacts the integrity and value of stocks. Illegal insider trading occurs when trades are made on the basis of non-public (private, leaked, or tipped) information (e.g., a new product launch, quarterly financial report, or acquisition or merger plan) before that information is made public. Preventing illegal insider trading is a priority of regulatory authorities (e.g., the SEC), as it involves billions of dollars and is very difficult to detect. In this work, we present different types of insider trading approaches and techniques, along with our proposed approach for detecting and predicting illegal insider trading using deep learning combined with discrete signal processing on time series data.
    Date: 2018–07
  12. By: Gu, Yewen (Dept. of Business and Management Science, Norwegian School of Economics); Wallace, Stein W. (Dept. of Business and Management Science, Norwegian School of Economics); Wang, Xin (Dept. of Industrial Economics and Technology Management, Norwegian University of Science and Technology)
    Abstract: Global warming has become one of the most prominent topics of recent decades, as it is a challenge that requires effort from all of humanity. Maritime transportation, which carries more than 90% of global trade, is a significant contributor to greenhouse gas (GHG) emissions. Unfortunately, the GHGs emitted by the global fleet still fall outside the emission reduction scheme established by the Kyoto Protocol, so alternative solutions are strongly desired. Several market-based measures have been proposed and submitted to the IMO for discussion and evaluation. In this paper, we focus on one of these measures, the Maritime Emissions Trading Scheme (METS). An optimization model integrating the classical fleet composition and deployment problem with the application of an ETS (global or regional) is proposed. This model is used as a tool to study the actual impact of METS on fleet operation and the corresponding CO2 emissions. The results of the computational study suggest that, in the short term, the implementation of METS may not guarantee further emission reduction in certain scenarios. However, in other scenarios, with a low bunker price, high allowance cost, or global METS coverage, a more significant short-term CO2 decrease can be expected.
    Keywords: Maritime Emissions Trading Scheme; CO2 emissions; maritime fleet composition; deployment model
    JEL: C44 C60 Q50
    Date: 2018–06–27
  13. By: Yen H. Lok
    Abstract: The contribution of this paper is two-fold. The first contribution is the development of a filter-combine scheme for trading strategies to diversify model risk. Multiple statistical machine learning models are used to predict the price direction of multiple assets. We demonstrate the effectiveness of model-averaging after under-performing models are removed via a filtering algorithm. The second contribution is the identification of appropriate measures of performance for selecting models. In the literature, different measures are usually designed for different applications and purposes, and it is not always clear whether certain measures are relevant to a particular trading strategy. By identifying relevant measures, one can identify the key drivers underlying well-performing models, and allocate more resources to optimising and improving the appropriate models.
    JEL: C51 C52
    Date: 2018–06–22
  14. By: Karol Gellert; Erik Schlögl
    Abstract: This paper presents the construction of a particle filter, which incorporates elements inspired by genetic algorithms, in order to achieve accelerated adaptation of the estimated posterior distribution to changes in model parameters. Specifically, the filter is designed for the situation where the subsequent data in online sequential filtering does not match the model posterior filtered based on data up to a current point in time. The examples considered encompass parameter regime shifts and stochastic volatility. The filter adapts to regime shifts extremely rapidly and delivers a clear heuristic for distinguishing between regime shifts and stochastic volatility, even though the model dynamics assumed by the filter exhibit neither of those features.
    Date: 2018–06
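    One way to picture the genetic-algorithm element is a mutation step whose noise scale grows when recent data is poorly explained, so the particle cloud can jump to a new parameter regime quickly. This is an illustrative sketch of the idea, not the authors' exact construction; the surprise measure and jitter scales are assumptions.

    ```python
    import numpy as np

    def adapt_parameter_particles(theta, weights, surprise, base_jitter=0.01,
                                  max_jitter=0.5, seed=0):
        """Resample parameter particles, then mutate them with a noise scale
        tied to how 'surprised' the filter is by recent observations.

        theta:    (n,) array of parameter particles.
        weights:  normalized particle weights.
        surprise: value in [0, 1], e.g. one minus the ratio of recent average
                  likelihood to its typical level; high surprise triggers
                  large mutations (regime shift), low surprise small ones.
        """
        rng = np.random.default_rng(seed)
        n = len(theta)
        resampled = theta[rng.choice(n, n, p=weights)]   # survival of the fittest
        scale = base_jitter + (max_jitter - base_jitter) * surprise
        return resampled + scale * rng.standard_normal(n)  # mutation

    # usage: a cloud of particles concentrated at 2.0; the mutation spread
    # widens as surprise rises
    theta0 = np.full(200, 2.0)
    w0 = np.full(200, 1.0 / 200)
    calm = adapt_parameter_particles(theta0, w0, surprise=0.0, seed=1)
    shaken = adapt_parameter_particles(theta0, w0, surprise=1.0, seed=2)
    ```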
  15. By: Okay Gunes (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique)
    Abstract: This work demonstrates how economic theory can be applied to big data analysis. To do this, I propose two layers of machine learning that use econometric models introduced into a recommender system. The reason for doing so is to challenge traditional recommendation approaches, which are inherently biased because they ignore the final preference order for each individual and under-specify the interaction between the socio-economic characteristics of the participants and the characteristics of the commodities in question. In this respect, our hedonic recommendation approach proposes to first correct the internal preferences with respect to the tastes of each individual given the characteristics of the products. In the second layer, the relative preferences across participants are predicted from socio-economic characteristics. The robustness of the model is tested on the MovieLens 100k data set (943 users rating 1,682 movies) maintained by GroupLens. Our methodology shows the importance and necessity of correcting the data set using economic theory, and it can be applied to any recommender system that uses ratings based on consumer decisions.
    Keywords: Big Data, Machine Learning, Recommendation Engine, Econometrics, Python, R
    Date: 2017–12
  16. By: Kurz-Kim, Jeong-Ryeol
    Abstract: This paper investigates the trade-off between timeliness and quality in nowcasting practices. This trade-off arises when the frequency of the variable to be nowcast, such as GDP, is quarterly, while that of the underlying panel data is monthly; and the latter contains both survey and macroeconomic data. These two categories of data have different properties regarding timeliness and quality: the survey data are timely available (but might possess less predictive power), while the macroeconomic data possess more predictive power (but are not timely available because of their publication lags). In our empirical analysis, we use a modified dynamic factor model which takes three refinements for the standard dynamic factor model of Stock and Watson (2002) into account, namely mixed frequency, pre-selections and co-integration among the economic variables. Our main finding from a historical nowcasting simulation based on euro area GDP is that the predictive power of the survey data depends on the economic circumstances, namely, that survey data are more useful in tranquil times, and less so in times of turmoil.
    Keywords: nowcasting,dynamic factor model,mixed frequency,pre-selections,co-integration,survey data,trade-off between timeliness and quality,turmoil and tranquility
    JEL: C22 C38 C53 E37
    Date: 2018
  17. By: Giovanni Caggiano (University of Padova); Efrem Castelnuovo (University of Padova)
    Abstract: We estimate a nonlinear VAR to quantify the impact of economic policy uncertainty shocks originating in the US on the Canadian unemployment rate in booms and busts. We find strong evidence in favor of asymmetric spillover effects. Unemployment in Canada is shown to react to uncertainty shocks in economic busts only. Such shocks explain about 13% of the variance of the 2-year ahead forecast error of the Canadian unemployment rate in periods of slack vs. just 2% during economic booms. Counterfactual simulations lead to the identification of a novel "economic policy uncertainty spillovers channel". According to this channel, jumps in US uncertainty foster economic policy uncertainty in Canada in the first place and, because of the latter, lead to a temporary increase in the Canadian unemployment rate. Evidence of asymmetric spillover effects due to US EPU shocks is also found for the UK economy. This evidence, which refers to a large economy having a low trade intensity with the US, supports our view that a channel other than trade could be behind our empirical results.
    Keywords: Economic Policy Uncertainty Shocks, Spillover Effects, Unemployment Dynamics, Smooth Transition Vector AutoRegressions, Recessions
    JEL: C32 E32 E52
    Date: 2018–06

General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.