nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒04‒09
nine papers chosen by

  1. Credit Risk Analysis using Machine and Deep learning models By Peter Martey Addo; Dominique Guegan; Bertrand Hassani
  2. Evaluating Conditional Cash Transfer Policies with Machine Learning Methods By Tzai-Shuen Chen
  3. Mixing LSMC and PDE Methods to Price Bermudan Options By David Farahany; Kenneth Jackson; Sebastian Jaimungal
  4. Horizontal Mergers and Product Innovation By Federico, Giulio; Langus, Gregor; Valletti, Tommaso
  5. Agent-based model of system-wide implications of funding risk By Hałaj, Grzegorz
  6. Machine Learning Indices, Political Institutions, and Economic Development By Klaus Gründler; Tommy Krieger
  7. Machine Learning with Screens for Detecting Bid-Rigging Cartels By Huber, Martin; Imhof, David
  8. Convergence of Computed Dynamic Models with Unbounded Shock By Kosaku Takanashi
  9. Universal features of price formation in financial markets: perspectives from Deep Learning By Justin Sirignano; Rama Cont

  1. By: Peter Martey Addo (Expert Synapses SNCF Mobilité; LabEx ReFi); Dominique Guegan (University Paris 1 Pantheon Sorbonne; Ca' Foscari University Venice; IPAG Business School; LabEx ReFi); Bertrand Hassani (Capgemini Consulting; LabEx ReFi)
    Abstract: Due to advances in technology associated with Big Data, data availability and computing power, most banks and lending financial institutions are renewing their business models. Credit risk prediction, monitoring, model reliability and effective loan processing are key to decision making and transparency. In this work, we build binary classifiers based on machine and deep learning models on real data to predict loan default probability. The top 10 most important features from these models are selected and then used in the modelling process to test the stability of the binary classifiers by comparing their performance on separate data. We observe that tree-based models are more stable than models based on multilayer artificial neural networks. This opens several questions about the intensive use of deep learning systems in enterprises.
    Keywords: Credit risk, Financial regulation, Data Science, Big Data, Deep learning
    Date: 2018
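The comparison the abstract describes, tree-based classifiers versus a multilayer neural network with feature selection by importance, can be sketched as follows. This is a hypothetical illustration on synthetic data, assuming scikit-learn is available; it is not the authors' models or their loan data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a loan book: 10 hypothetical borrower features,
# with default probability driven by a few of them.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))
p_default = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - X[:, 2])))
y = (rng.uniform(size=n) < p_default).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# a tree-based model and a multilayer neural network, as the abstract compares
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(X_tr, y_tr)

# rank features by the forest's impurity-based importances, in the spirit of
# the paper's "top 10 important features" step
top = np.argsort(rf.feature_importances_)[::-1]
print("tree accuracy:", rf.score(X_te, y_te))
print("net accuracy: ", nn.score(X_te, y_te))
```

Comparing the two scores on held-out data mirrors the paper's stability test, though the paper's conclusion rests on performance across genuinely separate datasets rather than a single split.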
  2. By: Tzai-Shuen Chen
    Abstract: This paper presents an out-of-sample prediction comparison between major machine learning models and a structural econometric model. Over the past decade, machine learning has established itself as a powerful tool in many prediction applications, but this approach is still not widely adopted in empirical economic studies. To evaluate the benefits of this approach, I use the most common machine learning algorithms, CART, C4.5, LASSO, random forest, and AdaBoost, to construct prediction models for a cash transfer experiment conducted by the Progresa program in Mexico, and I compare the prediction results with those of a previous structural econometric study. Two prediction tasks are performed in this paper: an out-of-sample forecast and a long-term within-sample simulation. For the out-of-sample forecast, both the mean absolute error and the root mean square error of the school attendance rates found by all machine learning models are smaller than those found by the structural model. Random forest and AdaBoost have the highest accuracy for the individual outcomes of all subgroups. For the long-term within-sample simulation, the structural model performs better than all of the machine learning models. The poor within-sample fit of the machine learning models results from the inaccuracy of the income and pregnancy prediction models. These results show that machine learning models perform better than the structural model when there is ample data to learn from; when data are limited, however, the structural model offers a more sensible prediction. The findings of this paper show promise for adopting machine learning in economic policy analyses in the era of big data.
    Date: 2018–03
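The two error metrics the abstract compares can be computed directly; the attendance numbers below are invented for illustration and do not come from the paper.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root mean square error (always >= MAE)."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# hypothetical school attendance rates: actual vs. two competing forecasts
actual = np.array([0.90, 0.85, 0.88, 0.92])
model_a = np.array([0.89, 0.86, 0.87, 0.91])  # stand-in for an ML forecast
model_b = np.array([0.85, 0.80, 0.93, 0.95])  # stand-in for a structural forecast

for name, pred in [("A", model_a), ("B", model_b)]:
    print(name, "MAE:", mae(actual, pred), "RMSE:", rmse(actual, pred))
```

Because RMSE squares the residuals before averaging, it penalizes a few large misses more heavily than MAE does, which is why the abstract reports both.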
  3. By: David Farahany; Kenneth Jackson; Sebastian Jaimungal
    Abstract: We develop a mixed least squares Monte Carlo-partial differential equation (LSMC-PDE) method for pricing Bermudan-style options on assets whose volatility is stochastic. The algorithm is formulated for an arbitrary number of assets and driving processes, and we prove that the algorithm converges probabilistically. We also discuss two methods that greatly improve the algorithm's computational complexity. Our numerical examples focus on the single ($2d$) and multi-dimensional ($4d$) Heston models, and we compare our hybrid algorithm with classical LSMC approaches. In both cases, we demonstrate that the hybrid algorithm has significantly lower variance than traditional LSMC. Moreover, for the $2d$ example, where visualization is possible, we demonstrate that the optimal exercise strategy from the hybrid algorithm is significantly more accurate than the one from the full LSMC, using a finite difference approach as a reference.
    Date: 2018–03
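For readers unfamiliar with the LSMC baseline the paper improves on, here is a minimal classical Longstaff-Schwartz sketch for a Bermudan put. It assumes constant volatility (plain geometric Brownian motion) rather than the paper's stochastic-volatility Heston setting, and all parameter values are illustrative.

```python
import numpy as np

def lsmc_bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_steps=50, n_paths=20000, seed=0):
    """Classical Longstaff-Schwartz LSMC for a Bermudan put under constant
    volatility (the paper's setting is stochastic volatility)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # simulate geometric Brownian motion paths
    Z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
    S = np.hstack([np.full((n_paths, 1), S0), S])

    V = np.maximum(K - S[:, -1], 0.0)  # payoff at the final exercise date
    for t in range(n_steps - 1, 0, -1):
        V *= np.exp(-r * dt)  # discount continuation values one step back
        itm = (K - S[:, t]) > 0.0
        if itm.any():
            # regress continuation value on a cubic polynomial of the ITM spot
            A = np.vander(S[itm, t], 4)
            coef, *_ = np.linalg.lstsq(A, V[itm], rcond=None)
            exercise = np.maximum(K - S[itm, t], 0.0)
            ex_now = exercise > A @ coef  # exercise where intrinsic beats continuation
            V_itm = V[itm]
            V_itm[ex_now] = exercise[ex_now]
            V[itm] = V_itm
    return float(np.exp(-r * dt) * V.mean())

price = lsmc_bermudan_put()
print("Bermudan put price estimate:", round(price, 3))
```

The hybrid method in the paper replaces part of this pure regression step with a PDE solve, which is what reduces the variance of the estimator.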
  4. By: Federico, Giulio; Langus, Gregor; Valletti, Tommaso
    Abstract: We set up a stylized oligopoly model of uncertain product innovation to analyze the effects of a merger on innovation incentives and on consumer surplus. The model incorporates two competitive channels for merger effects: the "price coordination" channel and the internalization of the "innovation externality". We solve the model numerically and find that price coordination between the two products of the merged firm tends to stimulate innovation, while internalization of the innovation externality depresses it. The latter effect is stronger in our simulations and, as a result, the merger leads to lower innovation incentives for the merged entity, absent cost efficiencies and knowledge spillovers. In our numerical analysis both overall innovation and consumer welfare fall after a merger.
    Keywords: Innovation; mergers; R&D
    JEL: D43 G34 L40 O30
    Date: 2018–02
  5. By: Hałaj, Grzegorz
    Abstract: Liquidity has a systemic aspect that is frequently neglected in research and risk management applications. We build a model that focuses on the systemic aspects of liquidity and its links with solvency conditions, accounting for pertinent interactions between market participants in an agent-based modelling fashion. The model is confronted with data from the 2014 EU stress test covering all the major banking groups in the EU. The potential amplification role of asset managers is taken into account in a stylised fashion. In particular, we investigate the importance of the channels through which a funding shock to financial institutions can spread across the financial system.
    Keywords: ABM, liquidity, systemic risk
    JEL: G11 G21 C61
    Date: 2018–01
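The contagion mechanism the abstract describes, a funding shock forcing fire sales that in turn weaken other institutions, can be illustrated with a toy agent-based loop. All balance-sheet numbers below are invented; this is not the paper's model or the EU stress-test data.

```python
import numpy as np

# Toy fire-sale contagion in an agent-based spirit: five hypothetical banks
# hold a common liquid asset and face a funding withdrawal.
liquid_units = np.array([40.0, 25.0, 20.0, 30.0, 15.0])  # units of the common asset
withdrawal = np.array([20.0, 18.0, 19.0, 22.0, 16.0])    # funding withdrawn in the shock
price = 1.0
impact = 0.004  # assumed linear price impact per unit fire-sold

failed = np.zeros(5, dtype=bool)
changed = True
while changed:  # iterate rounds until no new failures occur
    changed = False
    for i in range(5):
        # a bank fails if its liquid assets, at the current price, cannot cover the run
        if not failed[i] and liquid_units[i] * price < withdrawal[i]:
            failed[i] = True
            # its fire sale depresses the common asset price for everyone else
            price = max(price - impact * liquid_units[i], 0.0)
            changed = True

print("failed banks:", np.where(failed)[0], "final price:", round(price, 3))
```

In this run only bank 4 fails from the initial shock; bank 2 fails solely because bank 4's fire sale lowered the asset price, which is the amplification channel the paper studies at realistic scale.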
  6. By: Klaus Gründler; Tommy Krieger
    Abstract: We present a new aggregation method - called SVM algorithm - and use this technique to produce novel measures of democracy (186 countries, 1960-2014). The method takes its name from a machine learning technique for pattern recognition and has three notable features: it makes functional assumptions unnecessary, it accounts for measurement uncertainty, and it creates continuous and dichotomous indices. We use the SVM indices to investigate the effect of democratic institutions on economic development, and find that democracies grow faster than autocracies. Furthermore, we illustrate how the estimation results are affected by conceptual and methodological changes in the measure of democracy. In particular, we show that instrumental variables cannot compensate for measurement errors produced by conventional aggregation methods, and explain why this failure leads to an overestimation of regression coefficients.
    Keywords: democracy, development, economic growth, estimation bias, indices, institutions, machine learning, support vector machines
    JEL: C26 C43 N40 O10 P16 P48
    Date: 2018
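The core of the SVM aggregation idea, turning several noisy sub-indicators into both a dichotomous and a continuous index, can be sketched with scikit-learn. The data below are synthetic and the setup is a loose illustration, not the authors' algorithm or their 186-country panel.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic regimes: an unobserved "regime quality" drives four noisy indicators.
rng = np.random.default_rng(1)
n = 400
latent = rng.uniform(-1, 1, n)                        # unobserved regime quality
X = latent[:, None] + 0.3 * rng.normal(size=(n, 4))   # four noisy sub-indicators
y = (latent > 0).astype(int)                          # rough training labels

svm = SVC(kernel="rbf").fit(X, y)
dichotomous = svm.predict(X)           # 0/1 classification, like a binary democracy index
continuous = svm.decision_function(X)  # signed margin, usable as a graded index
print("share classified democratic:", dichotomous.mean())
```

The appeal noted in the abstract is visible here: the kernel SVM imposes no functional form on how the indicators combine, yet yields both index types from one fitted model.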
  7. By: Huber, Martin; Imhof, David
    Abstract: We combine machine learning techniques with statistical screens computed from the distribution of bids in tenders within the Swiss construction sector to predict collusion through bid-rigging cartels. We assess the out-of-sample performance of this approach and find that it correctly classifies more than 80% of bidding processes as collusive or non-collusive. As the correct classification rate differs across truly non-collusive and collusive processes, however, we also investigate the trade-off between reducing false positive and false negative predictions. Finally, we discuss the policy implications of our method for competition agencies aiming to detect bid-rigging cartels.
    Keywords: Bid rigging detection; screening methods; variance screen; cover bidding screen; structural and behavioural screens; machine learning; lasso; ensemble methods
    JEL: C21 C45 C52 D22 D40 K40
    Date: 2018–03–29
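The screens-plus-classifier pipeline can be sketched on synthetic tenders. The screen formulas and data-generating numbers below are illustrative stand-ins, not the paper's exact screens or its Swiss procurement data, and scikit-learn is assumed for the lasso-penalized classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def screens(bids):
    """Simple statistical screens computed from one tender's bid vector."""
    bids = np.sort(bids)
    cv = bids.std() / bids.mean()  # variance screen: coefficient of variation
    # relative-distance screen: gap between the two lowest bids vs. gaps among the rest
    rd = (bids[1] - bids[0]) / (np.diff(bids[1:]).mean() + 1e-9)
    return [cv, rd]

X, y = [], []
for _ in range(500):
    collusive = rng.uniform() < 0.5
    base = rng.uniform(90, 110)
    if collusive:  # cover bids clustered just above a designated winner
        bids = np.r_[base, base * (1.05 + 0.01 * rng.uniform(size=4))]
    else:          # independent competitive bids
        bids = base * (1 + 0.08 * rng.uniform(size=5))
    X.append(screens(bids))
    y.append(int(collusive))

Xa, ya = np.array(X), np.array(y)
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(Xa, ya)
print("in-sample correct classification rate:", clf.score(Xa, ya))
```

The paper's reported 80%+ rate is out-of-sample on real tenders; here the point is only how tender-level screens become features for a lasso-type classifier.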
  8. By: Kosaku Takanashi (Faculty of Economics, Keio University)
    Abstract: The purpose of this paper is to provide conditions under which the invariant measure obtained from numerical simulations converges to the exact invariant measure. Santos and Peralta-Alva (2005) studied the convergence of the computed invariant measure of economic models that cannot be solved analytically and must be solved numerically or with some other form of approximation. However, they assume that the state space is compact, and therefore that the support of the shock of the dynamical system is bounded. This paper relaxes the compactness assumption for the convergence of the approximated invariant measure.
    Keywords: Economic Dynamics, Computational Approximation, Invariant Measure, Rate of convergence of Approximation
    JEL: C63 C61 C18
    Date: 2018–03–05
  9. By: Justin Sirignano; Rama Cont
    Abstract: Using a large-scale Deep Learning approach applied to a high-frequency database containing billions of electronic market quotes and transactions for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model is shown to exhibit a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks which are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific. The universal model, trained on data from all stocks, outperforms, in terms of out-of-sample prediction accuracy, asset-specific linear and nonlinear models trained on time series of any given stock, showing that the universal nature of price formation weighs in favour of pooling together financial data from various stocks, rather than designing asset- or sector-specific models as commonly done. Standard data normalizations based on volatility, price level or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks, do not improve training results. On the other hand, inclusion of price and order flow history over many past observations is shown to improve forecasting performance, showing evidence of path-dependence in price dynamics.
    Date: 2018–03
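The input/output structure of the prediction task, recent order-flow history in, direction of the next price move out, can be illustrated with a deliberately simple stand-in. The paper trains deep networks on billions of real quotes; the synthetic logistic-regression sketch below only shows the shape of the problem, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 5000, 3
imbalance = rng.normal(size=T)  # signed order-flow imbalance per interval (synthetic)
# next price-move direction: driven by current imbalance plus noise
direction = (imbalance + 0.5 * rng.normal(size=T) > 0).astype(int)

# features: the k most recent imbalances; label: direction at the latest time
X = np.column_stack([imbalance[i:T - k + 1 + i] for i in range(k)])
y = direction[k - 1:]

# logistic regression fitted by plain gradient descent
w, b, lr = np.zeros(k), 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((p > 0.5).astype(int) == y).mean()
print("directional accuracy:", round(acc, 3))
```

The paper's findings on pooling across stocks and on long history windows concern what happens when this kind of classifier is scaled up to a deep network over real, heterogeneous order-book data.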

General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.