nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒04‒08
eleven papers chosen by



  1. Using Deep Learning Neural Networks and Candlestick Chart Representation to Predict Stock Market By Rosdyana Mangir Irawan Kusuma; Trang-Thi Ho; Wei-Chun Kao; Yu-Yen Ou; Kai-Lung Hua
  2. Deep Learning in Asset Pricing By Luyang Chen; Markus Pelger; Jason Zhu
  3. Tragedy of the Commons and Evolutionary Games in Social Networks: The Economics of Social Punishment By Marco, Jorge; Goetz, Renan
  4. Dynamically optimal treatment allocation using Reinforcement Learning By Karun Adusumilli; Friedrich Geiecke; Claudio Schilter
  5. Improving metadata infrastructure for complex surveys: Insights from the Fragile Families Challenge By Alexander Kindel; Vineet Bansal; Kristin Catena; Thomas Hartshorne; Kate Jaeger
  6. REPPlab: An R package for detecting clusters and outliers using exploratory projection pursuit By Fischer, Daniel; Berro, Alain; Nordhausen, Klaus; Ruiz-Gazen, Anne
  7. What Is the Value Added by Using Causal Machine Learning Methods in a Welfare Experiment Evaluation? By Strittmatter, Anthony
  8. Synthetic learner: model-free inference on treatments over time By Davide Viviano; Jelena Bradic
  9. Modeling the increase in the retirement age in the Russian economy using the global CGE-OLG model By Zubarev, Andrey (Зубарев, Андрей); Nesterova, Kristina (Нестерова, Кристина)
  10. The race against the robots and the fallacy of the giant cheesecake: Immediate and imagined impacts of artificial intelligence By Naude, Wim
  11. Bayesian Trading Cost Analysis and Ranking of Broker Algorithms By Vladimir Markov

  1. By: Rosdyana Mangir Irawan Kusuma; Trang-Thi Ho; Wei-Chun Kao; Yu-Yen Ou; Kai-Lung Hua
    Abstract: Stock market prediction remains a challenging problem because many factors affect the stock market price, such as company news and performance, industry performance, investor sentiment, social media sentiment and economic factors. This work explores the predictability of the stock market using deep convolutional networks and candlestick charts. The outcome is used to design a decision support framework that traders can use to obtain suggested indications of future stock price direction. We experiment with several types of neural networks, including the convolutional neural network, residual network and visual geometry group network. Historical stock market data are converted into candlestick charts, which are then fed as input for training a convolutional neural network model. This model helps us analyze the patterns inside the candlestick charts and predict future stock market movements. The effectiveness of our method is evaluated on stock market prediction with promising results of 92.2% and 92.1% accuracy for the Taiwan and Indonesian stock market datasets, respectively. The constructed model has been implemented as a web-based system, freely available at http://140.138.155.216/deepcandle/, for predicting the stock market using candlestick charts and deep learning neural networks.
    Date: 2019–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1903.12258&r=all
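    A minimal, hypothetical sketch of the kind of pipeline this entry describes - rendering rolling OHLC windows as candlestick chart images and training a small convolutional network to classify next-day direction - written in Python with mplfinance and TensorFlow/Keras on simulated prices; it is not the authors' implementation.

      # Hypothetical sketch, not the authors' code: render rolling OHLC windows
      # as candlestick chart images and train a small CNN on next-day direction.
      import numpy as np
      import pandas as pd
      import mplfinance as mpf          # assumed available for chart rendering
      import tensorflow as tf

      # Simulated daily OHLC prices standing in for a real stock history.
      rng = np.random.default_rng(0)
      close = 100 + np.cumsum(rng.normal(0, 1, 300))
      open_ = close + rng.normal(0, 0.5, 300)
      df = pd.DataFrame({
          "Open": open_,
          "High": np.maximum(open_, close) + rng.uniform(0, 1, 300),
          "Low": np.minimum(open_, close) - rng.uniform(0, 1, 300),
          "Close": close,
      }, index=pd.date_range("2018-01-01", periods=300, freq="B"))

      window = 20
      images, labels = [], []
      for t in range(window, len(df) - 1):
          fname = f"chart_{t}.png"
          # Each training example is an image of the last `window` candlesticks.
          mpf.plot(df.iloc[t - window:t], type="candle",
                   savefig=dict(fname=fname, dpi=50))
          img = tf.keras.preprocessing.image.load_img(fname, target_size=(64, 64))
          images.append(tf.keras.preprocessing.image.img_to_array(img) / 255.0)
          labels.append(int(df["Close"].iloc[t + 1] > df["Close"].iloc[t]))

      X, y = np.stack(images), np.array(labels)

      # A small CNN; the paper also evaluates ResNet- and VGG-style networks.
      model = tf.keras.Sequential([
          tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Conv2D(32, 3, activation="relu"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Flatten(),
          tf.keras.layers.Dense(1, activation="sigmoid"),
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      model.fit(X, y, epochs=5, validation_split=0.2)
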
  2. By: Luyang Chen; Markus Pelger; Jason Zhu
    Abstract: We estimate a general non-linear asset pricing model with deep neural networks applied to all U.S. equity data combined with a substantial set of macroeconomic and firm-specific information. Our crucial innovation is the use of the no-arbitrage condition as part of the neural network algorithm. We estimate the stochastic discount factor (SDF or pricing kernel) that explains all asset prices from the conditional moment constraints implied by no-arbitrage. For this purpose, we combine three different deep neural network structures in a novel way: a feedforward network to capture non-linearities, a recurrent Long Short-Term Memory network to find a small set of economic state processes, and a generative adversarial network to identify the portfolio strategies with the most unexplained pricing information. Our model allows us to understand which key factors drive asset prices, to identify mispricing of stocks and to generate the mean-variance efficient portfolio. Empirically, our approach outperforms all other benchmark approaches out-of-sample: our optimal portfolio has an annual Sharpe ratio of 2.1, we explain 8% of the variation in individual stock returns and we explain over 90% of average returns for all anomaly-sorted portfolios.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.00745&r=all
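    For reference, one standard way to write the no-arbitrage moment conditions this entry builds on (the notation below is assumed for illustration and is not taken from the paper):

      \[
        \mathbb{E}\left[\, M_{t+1}\, R^{e}_{t+1,i} \,\middle|\, \mathcal{I}_{t} \right] = 0
        \qquad \text{for every excess return } R^{e}_{t+1,i},
      \]
      which is equivalent to the unconditional moments
      \[
        \mathbb{E}\left[\, M_{t+1}\, R^{e}_{t+1,i}\, g(\mathcal{I}_{t}) \right] = 0
        \qquad \text{for all instrument functions } g,
      \]
      suggesting the adversarial objective
      \[
        \min_{M}\; \max_{g}\; \sum_{i} \left( \hat{\mathbb{E}}\left[ M_{t+1}\, R^{e}_{t+1,i}\, g(\mathcal{I}_{t}) \right] \right)^{2},
      \]
      where the SDF network chooses $M$ and the adversarial network chooses the test functions $g$.
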
  3. By: Marco, Jorge; Goetz, Renan
    Abstract: This study revisits the problem of the tragedy of the commons. Extracting agents participate in an evolutionary game in a complex social network and are subject to social pressure if they do not comply with the social norms. Social pressure depends on the dynamics of the resource, the network and the population of compliers. We analyze the influence the network structure has on the agents’ behavior and determine the economic value of the intangible good, social pressure. For a socially optimal management of the resource, an initially high share of compliers is necessary but not sufficient. The analysis shows the extent to which the remaining level of the resource, the share of compliers and the size, density and local cohesiveness of the network contribute to overcoming the tragedy of the commons. The study suggests that the origin of the problem - shortsighted behavior - is also the starting point for a solution in the form of a one-time payment. A numerical analysis of a social network comprising 7,500 agents and a realistic topological structure is performed using empirical data from the western La Mancha aquifer in Spain.
    Keywords: Research Methods/ Statistical Methods
    Date: 2017–07–13
    URL: http://d.repec.org/n?u=RePEc:ags:feemth:259486&r=all
  4. By: Karun Adusumilli; Friedrich Geiecke; Claudio Schilter
    Abstract: Consider a situation in which a stream of individuals arrives sequentially to a social planner - for example, as they become unemployed. When each individual arrives, the planner needs to decide instantaneously on an action or treatment assignment - for example, offering job training - while taking into account various institutional constraints such as limited budget and capacity. In this paper, we show how one can use offline observational data to estimate an optimal policy rule that maximizes ex-ante expected welfare in this dynamic context. Importantly, we are able to find the optimal policy within a pre-specified class of policy rules. The policies may be restricted for computational, legal or incentive compatibility reasons. For each policy, we show that a Partial Differential Equation (PDE) characterizes the evolution of the value function under that policy. Using the data, one can write down a sample version of the PDE that provides estimates of these value functions. We then propose a modified Reinforcement Learning algorithm to solve for the policy rule that achieves the best value in the pre-specified class. The algorithm is easily implementable and computationally efficient, with speedups achieved through multiple reinforcement learning agents simultaneously learning the problem in parallel processes. By exploiting the properties of the PDEs, we show that the average social welfare attained by the estimated policy rule converges at a $n^{-1/2}$ rate to the maximum attainable within the specified class of policy functions; this is the same rate as that obtained in the static case. Finally, we also allow for non-compliance using instrumental variables, and show how one can accommodate compliance heterogeneity in a dynamic setting.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.01047&r=all
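    A deliberately simplified sketch of the kind of problem this entry studies: choosing the best rule within a pre-specified policy class by maximizing simulated average welfare under a budget constraint. The one-parameter policy class, the data-generating process and the grid search below are illustrative stand-ins, not the paper's PDE-based reinforcement learning algorithm.

      # Toy illustration: pick the best policy in a pre-specified class by
      # simulated average welfare, under a limited treatment budget.
      import numpy as np

      rng = np.random.default_rng(1)
      n_periods, budget = 1000, 200

      # Synthetic arrivals: observable x and individual treatment effect tau(x).
      x = rng.uniform(0, 1, n_periods)
      tau = 0.5 * x - 0.1              # treatment helps mostly high-x arrivals

      def simulate_welfare(threshold):
          """Policy class: treat an arrival iff x > threshold and budget remains."""
          remaining, welfare = budget, 0.0
          for xi, ti in zip(x, tau):
              if remaining > 0 and xi > threshold:
                  welfare += ti
                  remaining -= 1
          return welfare / n_periods

      # Grid search over the one-dimensional policy class; the paper instead
      # estimates PDE-based value functions with parallel RL agents.
      grid = np.linspace(0, 1, 101)
      values = [simulate_welfare(c) for c in grid]
      best = grid[int(np.argmax(values))]
      print(f"best threshold {best:.2f}, welfare per arrival {max(values):.4f}")
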
  5. By: Alexander Kindel (Princeton University); Vineet Bansal (Princeton University); Kristin Catena (Princeton University); Thomas Hartshorne (Princeton University); Kate Jaeger (Princeton University)
    Abstract: Researchers rely on metadata systems to prepare data for analysis. As the complexity of datasets increases and the breadth of data analysis practices grow, existing metadata systems can limit the efficiency and quality of data preparation. This article describes the redesign of a metadata system supporting the Fragile Families and Child Wellbeing Study based on the experiences of participants in the Fragile Families Challenge. We demonstrate how treating metadata as data—that is, releasing comprehensive information about variables in a format amenable to both automated and manual processing—can make the task of data preparation less arduous and less error-prone for all types of data analysis. We hope that our work will facilitate new applications of machine learning methods to longitudinal surveys and inspire research on data preparation in the social sciences. We have open-sourced the tools we created so that others can use and improve them.
    Keywords: metadata, survey research, data sharing, quantitative methodology, computational social science
    JEL: F13
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:pri:crcwel:wp18-10-ff&r=all
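    A small illustration of the "metadata as data" idea in Python: variable documentation kept in a machine-readable table can be filtered programmatically before analysis. The column names and values below are hypothetical, not the actual Fragile Families metadata schema; in practice the table would be read from the study's released metadata files.

      # Treating metadata as data: filter variables programmatically instead of
      # scanning a PDF codebook. All names and values here are made up.
      import pandas as pd

      meta = pd.DataFrame({
          "variable_name": ["m5_income", "f5_hours", "k5_score", "m5_notes"],
          "wave":          [5, 5, 5, 5],
          "type":          ["continuous", "continuous", "continuous", "text"],
          "pct_missing":   [0.08, 0.35, 0.12, 0.60],
      })

      # Select, say, continuous wave-5 variables with low missingness.
      candidates = meta[
          (meta["wave"] == 5)
          & (meta["type"] == "continuous")
          & (meta["pct_missing"] < 0.2)
      ]["variable_name"].tolist()

      print(f"{len(candidates)} variables selected for data preparation:", candidates)
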
  6. By: Fischer, Daniel; Berro, Alain; Nordhausen, Klaus; Ruiz-Gazen, Anne
    Abstract: The R package REPPlab is designed to explore multivariate data sets using one-dimensional unsupervised projection pursuit. It is useful as a preprocessing step to find clusters or as an outlier detection tool for multivariate data. Apart from the packages tourr and rggobi, there is no other implementation of exploratory projection pursuit tools available in R. REPPlab is an R interface for the Java program EPP-lab that implements four projection indices and three biologically inspired optimization algorithms. It also provides new tools for plotting and combining the results, as well as specific tools for outlier detection. The functionality of the package is illustrated through simulations and on real data.
    Keywords: genetic algorithms; Java; kurtosis; particle swarm optimization; projection index; Tribes; projection matrix; unsupervised data analysis
    Date: 2019–03–26
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:122892&r=all
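    For readers unfamiliar with projection pursuit, a minimal Python sketch of the underlying idea: maximize a kurtosis projection index over one-dimensional directions to expose an outlying cluster. REPPlab itself is an R interface to the Java program EPP-lab and is not used here.

      # Minimal projection pursuit sketch with a kurtosis index (illustration only).
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import kurtosis

      rng = np.random.default_rng(2)
      # Bulk of the data plus a small outlying cluster in 5 dimensions.
      X = np.vstack([rng.normal(0, 1, (480, 5)),
                     rng.normal(4, 0.3, (20, 5))])
      X = X - X.mean(axis=0)

      def neg_kurtosis(a):
          a = a / np.linalg.norm(a)      # unit-norm projection direction
          return -kurtosis(X @ a)        # maximize kurtosis to expose outliers

      res = minimize(neg_kurtosis, rng.normal(size=5))
      direction = res.x / np.linalg.norm(res.x)
      scores = X @ direction
      print("most extreme observations:", np.argsort(np.abs(scores))[-5:])
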
  7. By: Strittmatter, Anthony
    Abstract: Recent studies have proposed causal machine learning (CML) methods to estimate conditional average treatment effects (CATEs). In this study, I investigate whether CML methods add value compared to conventional CATE estimators by re-evaluating Connecticut's Jobs First welfare experiment. This experiment entails a mix of positive and negative work incentives. Previous studies show that it is hard to tackle the effect heterogeneity of Jobs First by means of CATEs. I report evidence that CML methods can provide support for the theoretical labor supply predictions. Furthermore, I document reasons why some conventional CATE estimators fail and discuss the limitations of CML methods.
    Keywords: Labor supply, individualized treatment effects, conditional average treatment effects, random forest
    JEL: H75 I38 J22 J31 C21
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:glodps:336&r=all
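    For context, a generic sketch of one conventional CATE estimator, a "T-learner" built from two random forests, on simulated data; this is purely illustrative and is not taken from the paper.

      # T-learner sketch: fit separate outcome models for treated and control
      # units, then take the difference of predictions as the CATE estimate.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(3)
      n = 2000
      X = rng.uniform(size=(n, 3))
      d = rng.integers(0, 2, n)                      # randomized treatment
      tau = 2.0 * X[:, 0]                            # true heterogeneous effect
      y = X @ np.array([1.0, -1.0, 0.5]) + d * tau + rng.normal(size=n)

      m1 = RandomForestRegressor(n_estimators=200).fit(X[d == 1], y[d == 1])
      m0 = RandomForestRegressor(n_estimators=200).fit(X[d == 0], y[d == 0])

      cate_hat = m1.predict(X) - m0.predict(X)
      print("correlation with true effect:", np.corrcoef(cate_hat, tau)[0, 1].round(2))
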
  8. By: Davide Viviano; Jelena Bradic
    Abstract: Understanding the effect of a particular treatment or policy is relevant to many areas of interest, ranging from political economics and marketing to health care and personalized treatment studies. In this paper, we develop a non-parametric, model-free test for detecting the effects of treatment over time that extends widely used Synthetic Control tests. The test is built on counterfactual predictions arising from many learning algorithms. In the Neyman-Rubin potential outcome framework with possible carry-over effects, we show that the proposed test is asymptotically consistent for stationary, beta-mixing processes. We do not assume that the class of learners necessarily captures the correct model. We also discuss estimates of the average treatment effect, and we provide regret bounds on the predictive performance. To the best of our knowledge, this is the first set of results that allows, for example, any random forest to be used for provably valid statistical inference in the Synthetic Control setting. In experiments, we show that our Synthetic Learner is substantially more powerful than classical methods based on Synthetic Control or Difference-in-Differences, especially in the presence of non-linear outcome models.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.01490&r=all
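    A hedged sketch of the general idea of combining Synthetic Control with a machine learner: fit any supervised learner on pre-treatment periods to predict the treated unit from the control units, then read off the post-treatment gap. This is illustrative only and is not the paper's test statistic or inference procedure.

      # Counterfactual prediction in the Synthetic Control spirit, with an
      # arbitrary learner standing in for the weighted average of controls.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(4)
      T0, T1, n_controls = 80, 20, 10                # pre/post periods, controls
      controls = rng.normal(size=(T0 + T1, n_controls))
      weights = rng.dirichlet(np.ones(n_controls))
      treated = controls @ weights + rng.normal(0, 0.05, T0 + T1)
      treated[T0:] += 1.0                            # effect switched on after T0

      # Any supervised learner can play the role of the counterfactual predictor.
      learner = RandomForestRegressor(n_estimators=300).fit(controls[:T0], treated[:T0])
      counterfactual = learner.predict(controls[T0:])

      att = np.mean(treated[T0:] - counterfactual)
      print(f"estimated post-treatment effect: {att:.2f} (true 1.0)")
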
  9. By: Zubarev, Andrey (Зубарев, Андрей) (The Russian Presidential Academy of National Economy and Public Administration); Nesterova, Kristina (Нестерова, Кристина) (The Russian Presidential Academy of National Economy and Public Administration)
    Abstract: This paper models the proposed rise in the retirement age in the Russian economy (from 60 to 65 for men and from 55 to 60 for women) in the setting of a global CGE-OLG model that includes over 100 countries grouped into 17 regions. The model takes into account the relevant long-run demographic forecasts made by the UN and the current budget structure for all 17 regions. The results suggest a weak effect of the rise in the retirement age on economic activity and a substantial positive effect on the long-term balance of the state budget.
    Keywords: general equilibrium model, pension reform, retirement age
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:rnp:wpaper:031952&r=all
  10. By: Naude, Wim (UNU-MERIT, Maastricht University and MSM, and RWTH Aachen, and IZA Bonn)
    Abstract: After a number of AI winters, AI is back with a boom. There are concerns that it will disrupt society. The immediate concern is whether labor can win a 'race against the robots' and the longer-term concern is whether an artificial general intelligence (super-intelligence) can be controlled. This paper describes the nature and context of these concerns, reviews the current state of the empirical and theoretical literature in economics on the impact of AI on jobs and inequality, and discusses the challenge of AI arms races. It concludes that, despite the media hype, neither massive job losses nor a 'Singularity' are imminent. In part, this is because current AI, based on deep learning, is expensive and difficult for (especially small) businesses to adopt, can create new jobs, and is an unlikely route to the invention of a super-intelligence. Even though AI is unlikely to have either utopian or apocalyptic impacts, it will challenge economists in coming years. The challenges include the regulation of data and algorithms; the (mis-)measurement of value added; market failures, anti-competitive behaviour and abuse of market power; surveillance, censorship and cybercrime; labor market discrimination and declining job quality; and AI in emerging economies.
    Keywords: Technology, artificial intelligence, productivity, labor demand, innovation, inequality
    JEL: O47 O33 J24 E21 E25
    Date: 2019–03–07
    URL: http://d.repec.org/n?u=RePEc:unm:unumer:2019005&r=all
  11. By: Vladimir Markov
    Abstract: We present a formulation of transaction cost analysis (TCA) in a Bayesian framework for the primary purpose of comparing broker algorithms using standardized benchmarks. Our formulation allows effective calculation of the expected value of trading benchmarks with only the finite samples of data relevant to practical applications. We discuss the nature of the distributions of the implementation shortfall, volume-weighted average price, participation-weighted price and short-term reversion benchmarks. Our model takes into account fat tails, skewness of the distributions and heteroscedasticity of benchmarks. The proposed framework allows the use of hierarchical models to transfer approximate knowledge from a large aggregated sample of observations to the smaller sample available for a particular algorithm.
    Date: 2019–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1904.01566&r=all
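    A toy numerical illustration of the hierarchical idea in this abstract: each broker's mean implementation shortfall is shrunk toward the aggregated mean, with more shrinkage for brokers with few observations. A simple normal-normal empirical-Bayes model stands in for the paper's fat-tailed, heteroscedastic formulation; all numbers are made up.

      # Hierarchical shrinkage sketch for ranking brokers by expected cost.
      import numpy as np

      rng = np.random.default_rng(5)
      true_means = {"broker_A": 3.0, "broker_B": 5.0, "broker_C": 8.0}   # bps
      samples = {b: rng.normal(m, 10.0, size=n)                          # noisy costs
                 for (b, m), n in zip(true_means.items(), [400, 50, 12])}

      grand_mean = np.mean(np.concatenate(list(samples.values())))
      tau2, sigma2 = 4.0, 100.0     # assumed prior and observation variances

      posterior = {}
      for broker, x in samples.items():
          n = len(x)
          # Normal-normal posterior mean: precision-weighted average of the
          # broker's own sample mean and the aggregated (prior) mean.
          w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)
          posterior[broker] = w * x.mean() + (1.0 - w) * grand_mean

      for broker, m in sorted(posterior.items(), key=lambda kv: kv[1]):
          print(f"{broker}: posterior mean cost {m:.1f} bps")
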

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.