nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒01‒21
fourteen papers chosen by



  1. Learning from the "Best": The Impact of Tax-Benefit Systems in Africa By Bargain, Olivier; Jara Tamayo, Holguer Xavier; Kwenda, Prudence; Ntuli, Miracle
  2. R&I and Low-carbon investment in Apulia, Italy: The RHOMOLO assessment By Olga Diukanova; Giovanni Mandras; Andrea Conte; Simone Salotti
  3. Deep Learning for Ranking Response Surfaces with Applications to Optimal Stopping Problems By Ruimeng Hu
  4. Learning Policy Levers: Toward Automated Policy Analysis Using Judicial Corpora By Ash, Elliott; Chen, Daniel L.; Delgado, Raul; Fierro, Eduardo; Lin, Shasha
  5. Optimal Paternalistic Savings Policies By Moser, Christian; Olea de Souza e Silva, Pedro
  6. Double Deep Q-Learning for Optimal Execution By Brian Ning; Franco Ho Ting Ling; Sebastian Jaimungal
  7. Empirical Asset Pricing via Machine Learning By Shihao Gu; Bryan Kelly; Dacheng Xiu
  8. Can Deep Learning Predict Risky Retail Investors? A Case Study in Financial Risk Behavior Forecasting By Yaodong Yang; Alisa Kolesnikova; Stefan Lessmann; Tiejun Ma; Ming-Chien Sung; Johnnie E. V. Johnson
  9. The US trade dispute: blunt offense or rational strategy? By Hübler, Michael; Axel Herdecke
  10. What is the Value Added by using Causal Machine Learning Methods in a Welfare Experiment Evaluation? By Anthony Strittmatter
  11. Machine Learning and Rule of Law By Chen, Daniel L.
  12. Machine Learning Estimation of Heterogeneous Causal Effects: Empirical Monte Carlo Evidence By Knaus, Michael C.; Lechner, Michael; Strittmatter, Anthony
  13. Forecasting economic decisions under risk: The predictive importance of choice-process data By Steffen Q. Mueller; Patrick Ring; Maria Schmidt
  14. Judicial Analytics and the Great Transformation of American Law By Chen, Daniel L.

  1. By: Bargain, Olivier (University of Bordeaux); Jara Tamayo, Holguer Xavier (University of Essex); Kwenda, Prudence (Wits University); Ntuli, Miracle (Wits University)
    Abstract: Redistributive systems in Africa are still in their infancy but are constantly expanding in order to finance increasing public spending. This paper aims at characterizing the redistributive potential of six African countries: Ghana, Zambia, Mozambique, Tanzania, Ethiopia and South Africa. These countries show contrasted situations in terms of income distribution. We assess the role of tax-benefit systems to explain these differences. Using newly developed tax-benefit microsimulations for all six countries, we produce counterfactual simulations whereby the system of the most (least) redistributive country is applied to the population of all other countries. In this way, we can decompose the total country difference in income distribution between the contribution of tax-benefit policies versus the contribution of other factors (market income distributions, demographics, etc.). This analysis contributes to the recent literature on the redistributive role of socio-fiscal policies in developing countries and highlights the role of microsimulation techniques to characterize how different African countries can learn from each other to improve social protection and reduce inequality.
    Keywords: tax-benefit policy, microsimulation, inequality, poverty, Africa
    JEL: H23 H53 I32
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12017&r=all
  2. By: Olga Diukanova (European Commission - JRC); Giovanni Mandras (European Commission - JRC); Andrea Conte (European Commission - JRC); Simone Salotti (European Commission - JRC)
    Abstract: The European Cohesion Policy supports eleven thematic objectives. Four of these are key priorities for the European Regional Development Fund: Research and Innovation (R&I), Information and Communication Technologies (ICT), SME competitiveness, and Low-carbon economy. The European Commission's Joint Research Centre (JRC) is supporting Apulia, Italy, with the design and implementation of Regional Innovation Strategies for Smart Specialisation (RIS3). Quantitative tools such as the RHOMOLO model could help evaluate the impact of funding programmes in different policy areas across European regions. R&I and Low-Carbon ERDF Investments aim at generating sustainable growth and supporting the capacity of regional economies to innovate in line with the Energy Union strategy and the EU's transition to a low-carbon economy. Policy simulations using the RHOMOLO dynamic CGE model show positive macro-economic effects of the ERDF investments related to the R&I and Low-carbon thematic objectives in Apulia both within the region and in its neighbouring regions.
    Keywords: rhomolo, region, growth, impact assessment, modelling, Apulia, Italy, Cohesion Policy, ERDF, investment
    JEL: C54 C68 E62 R13
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc115019&r=all
  3. By: Ruimeng Hu
    Abstract: In this paper, we propose deep learning algorithms for ranking response surfaces, with applications to optimal stopping problems in financial mathematics. The problem of ranking response surfaces is motivated by estimating optimal feedback policy maps in stochastic control problems, aiming to efficiently find the index associated with the minimal response across the entire continuous input space $\mathcal{X} \subseteq \mathbb{R}^d$. By considering points in $\mathcal{X}$ as pixels and indices of the minimal surfaces as labels, we recast the problem as an image segmentation problem, which assigns a label to every pixel in an image such that pixels with the same label share certain characteristics. This provides an alternative to the sequential design method of our previous work [R. Hu and M. Ludkovski, SIAM/ASA Journal on Uncertainty Quantification, 5 (2017), 212--239]. Deep learning algorithms are scalable, parallel and model-free, i.e., no parametric assumptions are needed on the response surfaces. Framing ranking response surfaces as image segmentation allows one to use a broad class of deep neural networks, e.g., UNet, SegNet, DeconvNet, which have been widely applied and numerically shown to achieve high accuracy in the field. We also systematically study the dependence of deep learning algorithms on the input data generated on uniform grids or by sequential design sampling, and observe that the performance of deep learning is {\it not} sensitive to the noise and locations (close to/away from boundaries) of the training data. We present several examples, including synthetic ones and the Bermudan option pricing problem, to show the efficiency and accuracy of this method.
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1901.03478&r=all
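The recasting described in the abstract, where each point of the input space is treated as a pixel whose label is the index of the minimal surface, can be illustrated with a small sketch (the two response surfaces and the grid below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Toy example: two response surfaces on a 1-D grid; the "segmentation"
# label at each point is the index of the surface with the minimal value.
x = np.linspace(0.0, 1.0, 101)           # grid points ("pixels")
surfaces = np.stack([x ** 2,             # surface 0
                     (x - 1.0) ** 2])    # surface 1
labels = np.argmin(surfaces, axis=0)     # per-pixel label

# Surface 0 is minimal for x < 0.5, surface 1 for x > 0.5, so the label
# map is a step function -- exactly the kind of piecewise-constant image
# a segmentation network is trained to reproduce.
```

In the paper's setting the surfaces are noisy Monte Carlo estimates rather than known functions, which is what makes a learned, noise-robust labeling useful.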
  4. By: Ash, Elliott; Chen, Daniel L.; Delgado, Raul; Fierro, Eduardo; Lin, Shasha
    Abstract: To build inputs for end-to-end machine learning estimates of the causal impacts of law, we consider the problem of automatically classifying cases by their policy impact. We propose and implement a semi-supervised multi-class learning model, with the training set being a hand-coded dataset of thousands of cases in over 20 politically salient policy topics. Using opinion text features as a set of predictors, our model can classify labeled cases by topic correctly 91% of the time. We then take the model to the broader set of unlabeled cases and show that it can identify new groups of cases by shared policy impact.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:33153&r=all
  5. By: Moser, Christian (Federal Reserve Bank of Minneapolis); Olea de Souza e Silva, Pedro (Uber Technologies)
    Abstract: We study optimal savings policies when there is a dual concern about undersaving for retirement and income inequality. Agents differ in present bias and earnings ability, both unobservable to a planner with paternalistic and redistributive motives. We characterize the solution to this two-dimensional screening problem and provide a decentralization using realistic policy instruments: mandatory savings at low incomes but a choice between subsidized savings vehicles at high incomes—resembling Social Security, 401(k), and IRA accounts in the US. Offering more savings choice at higher incomes facilitates redistribution. To solve large-scale versions of this problem numerically, we propose a general, computationally stable, and efficient active-set algorithm. Relative to the current US retirement system, we find significant welfare gains from increasing mandatory savings and limiting savings choice at low incomes.
    Keywords: Optimal taxation; Multidimensional screening; Present bias; Preference heterogeneity; Paternalism; Retirement; Savings; Social Security; Active-set algorithm
    JEL: E62 H21 H55
    Date: 2019–01–10
    URL: http://d.repec.org/n?u=RePEc:fip:fedmoi:0017&r=all
  6. By: Brian Ning; Franco Ho Ting Ling; Sebastian Jaimungal
    Abstract: Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution imposes stringent model assumptions and applies continuous-time stochastic control to solve the resulting problems. Here, we instead take a model-free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected neural network trained using Experience Replay and Double DQN, with input features given by the current state of the limit order book, other trading signals, and available execution actions, while the output is the Q-value function estimating the future rewards under an arbitrary action. We apply our model to nine different stocks and find that it outperforms the standard benchmark approach on most of them using the measures of (i) mean and median out-performance, (ii) probability of out-performance, and (iii) gain-loss ratios.
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1812.06600&r=all
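The Double DQN update mentioned in the abstract decouples action selection (done by the online network) from action evaluation (done by a periodically synced target network), which reduces the overestimation bias of plain Q-learning. A minimal sketch of how the target value is formed, under toy assumptions (a linear Q-function and random weights; none of this is the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def q_values(weights, state):
    """Toy linear Q-network: returns one Q-value per action (illustrative only)."""
    return state @ weights

n_features, n_actions, gamma = 4, 3, 0.99
online_w = rng.normal(size=(n_features, n_actions))   # updated every step
target_w = rng.normal(size=(n_features, n_actions))   # periodically synced copy

state_next = rng.normal(size=n_features)
reward = 0.1

# Double DQN target: the online network SELECTS the greedy action,
# the target network EVALUATES it.
a_star = int(np.argmax(q_values(online_w, state_next)))
y = reward + gamma * q_values(target_w, state_next)[a_star]
```

The online network is then regressed toward `y`; using a single network for both the argmax and the evaluation (as in vanilla DQN) tends to overestimate Q-values.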
  7. By: Shihao Gu; Bryan Kelly; Dacheng Xiu
    Abstract: We synthesize the field of machine learning with the canonical problem of empirical asset pricing: measuring asset risk premia. In the familiar empirical setting of cross section and time series stock return prediction, we perform a comparative analysis of methods in the machine learning repertoire, including generalized linear models, dimension reduction, boosted regression trees, random forests, and neural networks. At the broadest level, we find that machine learning offers an improved description of expected return behavior relative to traditional forecasting methods. Our implementation establishes a new standard for accuracy in measuring risk premia summarized by an unprecedented out-of-sample return prediction R2. We identify the best performing methods (trees and neural nets) and trace their predictive gains to allowance of nonlinear predictor interactions that are missed by other methods. Lastly, we find that all methods agree on the same small set of dominant predictive signals that includes variations on momentum, liquidity, and volatility. Improved risk premia measurement through machine learning can simplify the investigation into economic mechanisms of asset pricing and justifies its growing role in innovative financial technologies.
    JEL: C45 C58 G11 G12
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:25398&r=all
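The out-of-sample R2 the abstract refers to is, in the return-prediction literature, often benchmarked against a forecast of zero rather than the historical mean return; the exact variant below is an assumption on my part, not quoted from the paper:

```python
import numpy as np

def r2_oos(returns, predictions):
    """Out-of-sample predictive R^2 benchmarked against a zero forecast
    (a common convention for return prediction; hedged sketch)."""
    returns = np.asarray(returns, dtype=float)
    predictions = np.asarray(predictions, dtype=float)
    return 1.0 - np.sum((returns - predictions) ** 2) / np.sum(returns ** 2)

# A perfect forecast gives 1.0; forecasting zero gives 0.0; a forecast
# worse than zero gives a negative value.
```

The zero benchmark is deliberately demanding: a method only scores above zero if its forecasts beat predicting no excess return at all.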
  8. By: Yaodong Yang; Alisa Kolesnikova; Stefan Lessmann; Tiejun Ma; Ming-Chien Sung; Johnnie E. V. Johnson
    Abstract: The success of deep learning for unstructured data analysis is well documented but little evidence has emerged related to the structured, tabular datasets used in decision support. We address this research gap by considering the potential of deep learning to support financial risk management. In particular, we develop a deep learning model for predicting whether individual spread traders are likely to secure profits from future trades. This embodies typical modeling challenges faced in risk and behavior forecasting. Conventional machine learning requires data that is representative of the feature-target relationship and relies on the often costly development, maintenance, and revision of handcrafted features. Consequently, modeling highly variable, heterogeneous patterns such as the behavior of traders is challenging. Deep learning promises a remedy. Learning hierarchical distributed representations of the raw data in an automatic manner (e.g. risk-taking behavior), it uncovers generative features that determine the target (e.g., a trader's profitability), avoids manual feature engineering, and is more robust to change (e.g. dynamic market conditions). The results of employing a deep network for operational risk forecasting confirm the feature learning capability of deep learning, provide guidance on designing a suitable network architecture and demonstrate the superiority of deep learning over powerful machine learning benchmarks. Empirical results suggest that the financial institution which provided the data can increase annual profits by 16% through implementing a deep-learning-based risk management policy. The findings demonstrate the potential of applying deep learning methods for management science problems in finance, marketing, and accounting.
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1812.06175&r=all
  9. By: Hübler, Michael; Axel Herdecke
    Abstract: This article evaluates the recent protectionist US trade policy and the retaliation of the EU and China. The article employs a New Quantitative Trade Theory model and an Armington model for comparison. The simulation results show that US car tariffs are a credible threat to the EU, but the steel and aluminum tariffs are not. China suffers considerably from the US tariffs, especially the extended, tightened tariffs that have been announced. The retaliation measures of the EU and China, however, do not cause US welfare losses compared to the situation without such a trade policy.
    Keywords: Trade policy, trade war, numerical model, USA, EU, China
    JEL: F11 F17 F42
    Date: 2019–01
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-648&r=all
  10. By: Anthony Strittmatter
    Abstract: I investigate causal machine learning (CML) methods to estimate effect heterogeneity by means of conditional average treatment effects (CATEs). In particular, I study whether the estimated effect heterogeneity can provide evidence for the theoretical labour supply predictions of Connecticut's Jobs First welfare experiment. For this application, Bitler, Gelbach, and Hoynes (2017) show that standard CATE estimators fail to provide evidence for theoretical labour supply predictions. Therefore, this is an interesting benchmark to showcase the value added by using CML methods. I report evidence that the CML estimates of CATEs provide support for the theoretical labour supply predictions. Furthermore, I document some reasons why standard CATE estimators fail to provide evidence for the theoretical predictions. However, I show the limitations of CML methods that prevent them from identifying all the effect heterogeneity of Jobs First.
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1812.06533&r=all
  11. By: Chen, Daniel L.
    Abstract: Predictive judicial analytics holds the promise of increasing the fairness of law. Much empirical work observes inconsistencies in judicial behavior. By predicting judicial decisions—with more or less accuracy depending on judicial attributes or case characteristics—machine learning offers an approach to detecting when judges are most likely to allow extralegal biases to influence their decision making. In particular, low predictive accuracy may identify cases of judicial “indifference,” where case characteristics (interacting with judicial attributes) do not strongly dispose a judge in favor of one or another outcome. In such cases, biases may hold greater sway, implicating the fairness of the legal system.
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:33149&r=all
  12. By: Knaus, Michael C. (University of St. Gallen); Lechner, Michael (University of St. Gallen); Strittmatter, Anthony (University of St. Gallen)
    Abstract: We investigate the finite sample performance of causal machine learning estimators for heterogeneous causal effects at different aggregation levels. We employ an Empirical Monte Carlo Study that relies on arguably realistic data generation processes (DGPs) based on actual data. We consider 24 different DGPs, eleven different causal machine learning estimators, and three aggregation levels of the estimated effects. In the main DGPs, we allow for selection into treatment based on a rich set of observable covariates. We provide evidence that the estimators can be categorized into three groups. The first group performs consistently well across all DGPs and aggregation levels. These estimators have multiple steps to account for the selection into the treatment and the outcome process. The second group shows competitive performance only for particular DGPs. The third group is clearly outperformed by the other estimators.
    Keywords: causal machine learning, conditional average treatment effects, selection-on-observables, random forest, causal forest, lasso
    JEL: C21
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp12039&r=all
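The "multiple steps" estimators the abstract describes combine separate models for the treatment and outcome processes. As a simple, loosely related member of the CATE-estimation family, a T-learner can be sketched in a few lines (the data-generating process and the OLS outcome models below are illustrative assumptions, not the estimators studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 2))                 # observed covariates
d = rng.integers(0, 2, size=n)              # binary treatment indicator
tau = 1.0 + X[:, 0]                         # true heterogeneous effect
y = 0.5 * X[:, 1] + d * tau + rng.normal(scale=0.1, size=n)

def ols_fit(features, target):
    """Least-squares fit with an intercept column."""
    Z = np.column_stack([np.ones(len(features)), features])
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return beta

def ols_predict(beta, features):
    return np.column_stack([np.ones(len(features)), features]) @ beta

# T-learner: fit one outcome model per treatment arm, then take the
# difference of predictions as the CATE estimate at each covariate value.
b1 = ols_fit(X[d == 1], y[d == 1])
b0 = ols_fit(X[d == 0], y[d == 0])
cate_hat = ols_predict(b1, X) - ols_predict(b0, X)
```

Averaging `cate_hat` over the sample gives an ATE estimate, and averaging within covariate groups gives group-level effects, the three aggregation levels the abstract compares.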
  13. By: Steffen Q. Mueller (Chair for Economic Policy, University of Hamburg); Patrick Ring (Social and Behavioral Approaches to Global Problems, Kiel Institute for the World Economy); Maria Schmidt (Department of Psychology, Kiel University)
    Abstract: We investigate various statistical methods for forecasting risky choices and identify important decision predictors. Subjects (n=44) are presented with a series of 50/50 gambles that each involves a potential gain and a potential loss, and subjects can choose to either accept or reject a displayed lottery. From this data, we use information on 8800 individual lottery gambles and specify four predictor-sets that include different combinations of input categories: lottery design, socioeconomic characteristics, past gambling behavior, eye-movements, and various psychophysiological measures that are recorded during the first three seconds of lottery-information processing. The results of our forecasting experiment show that choice-process data can effectively be used to forecast risky gambling decisions; however, we find large differences among models’ forecasting capabilities with respect to subjects, predictor-sets, and lottery payoff structures.
    Keywords: Forecasting, lottery, risk, choice-process tracing, experiments, machine learning, decision theory
    JEL: C44 C45 C53 D87 D91
    Date: 2019–01–11
    URL: http://d.repec.org/n?u=RePEc:hce:wpaper:066&r=all
  14. By: Chen, Daniel L.
    Abstract: Predictive judicial analytics holds the promise of increasing the efficiency and fairness of law. Judicial analytics can assess extra-legal factors that influence decisions. Behavioral anomalies in judicial decision-making offer an intuitive understanding of feature relevance, which can then be used for debiasing the law. A conceptual distinction between inter-judge disparities in predictions and inter-judge disparities in prediction accuracy suggests another normatively relevant criterion with regard to fairness. Predictive analytics can also be used in the first step of causal inference, where the features employed in the first step are exogenous to the case. Machine learning thus offers an approach to assess bias in the law and evaluate theories about the potential consequences of legal change.
    Keywords: Judicial Analytics; Causal Inference; Behavioral Judging
    Date: 2018–12
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:33147&r=all

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.