nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒07‒23
nine papers chosen by



  1. Beating the curse of dimensionality in options pricing and optimal stopping By David A. Goldberg; Yilun Chen
  2. Public Procurement and Reputation: An Agent-Based Model By Nadia Fiorino; Emma Galli; Ilde Rizzo; Marco Valente
  3. Assessing Autonomous Algorithmic Collusion: Q-Learning Under Sequential Pricing By Timo Klein
  4. Policy options to support the Agriculture Sector Growth and Transformation Strategy in Kenya: A CGE analysis By Pierre Boulanger; Hasan Dudu; Emanuele Ferrari; Alfredo Mainar Causape; Jean Balie; Lucia Battaglia
  5. Long Short-Term Memory Networks for CSI300 Volatility Prediction with Baidu Search Volume By Yu-Long Zhou; Ren-Jie Han; Qian Xu; Wei-Ke Zhang
  6. Double/de-biased machine learning using regularized Riesz representers By Victor Chernozhukov; Whitney K. Newey; James Robins
  7. Early Detection of Students at Risk - Predicting Student Dropouts Using Administrative Student Data and Machine Learning Methods By Johannes Berens; Simon Oster; Kerstin Schneider; Julian Burghoff
  8. Prospects and Macroeconomic Consequences of the Development of Integration within the Framework of the EAEU By Kuznetsov, Dmitriy; Sedalishchev, Vladimir; Knobel, Alexander
  9. Elephants, Donkeys, and Colonel Blotto By Ivan P. Yamshchikov; Sharwin Rezagholi

  1. By: David A. Goldberg; Yilun Chen
    Abstract: The fundamental problems of pricing high-dimensional path-dependent options and optimal stopping are central to applied probability and financial engineering. Modern approaches, often relying on approximate dynamic programming (ADP), simulation, and/or duality, offer limited rigorous guarantees, may scale poorly, and/or require prior knowledge of good basis functions. A key difficulty with many approaches is that, to yield stronger guarantees, they would require computing deeply nested conditional expectations, with the depth of nesting scaling with the time horizon T. We overcome this fundamental obstacle by providing an algorithm which can trade off, in a principled manner, between the guaranteed quality of approximation and the level of nesting required, without requiring a set of good basis functions. We develop a novel pure-dual approach, inspired by a connection to network flows. This leads to a representation of the optimal value as an infinite sum for which: 1. each term is the expectation of an elegant recursively defined infimum; 2. the first k terms only require k levels of nesting; and 3. truncating at the first k terms yields an error of 1/k. This enables us to devise a simple randomized algorithm whose runtime is effectively independent of the dimension, beyond the need to simulate sample paths of the underlying process. Indeed, our algorithm is completely data-driven in that it only needs the ability to simulate the original process and requires no prior knowledge of the underlying distribution. Our method allows one to trade off elegantly between accuracy and runtime through a parameter epsilon controlling the associated performance guarantee, with computational and sample complexity both polynomial in T (and effectively independent of the dimension) for any fixed epsilon, in contrast to past methods, which typically require complexity scaling exponentially in these parameters.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1807.02227&r=cmp
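    Sketch: For contrast, the following is a hedged, editorial sketch of a standard Longstaff-Schwartz regression estimator for a Bermudan put under geometric Brownian motion, i.e. the kind of basis-function-dependent, ADP-style method the abstract contrasts with; it is not the authors' pure-dual algorithm, and all parameter values are illustrative assumptions.

      # Hedged sketch: Longstaff-Schwartz lower-bound estimate for a Bermudan put.
      # NOT the pure-dual algorithm of Goldberg and Chen; rather, the standard
      # regression-based baseline whose reliance on basis functions their method avoids.
      import numpy as np

      rng = np.random.default_rng(0)
      S0, K, r, sigma, T, n_steps, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
      dt = T / n_steps
      disc = np.exp(-r * dt)

      # Simulate geometric Brownian motion paths (illustrative dynamics).
      z = rng.standard_normal((n_paths, n_steps))
      log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
      S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))   # (n_paths, n_steps+1)

      payoff = lambda s: np.maximum(K - s, 0.0)

      # Backward induction with a polynomial basis (the "basis functions" assumption).
      cash = payoff(S[:, -1])
      for t in range(n_steps - 1, 0, -1):
          cash *= disc                                   # discount future cash flows to time t
          itm = payoff(S[:, t]) > 0
          if itm.any():
              x = S[itm, t]
              basis = np.column_stack([np.ones_like(x), x, x**2])        # 1, S, S^2
              coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)   # continuation regression
              continuation = basis @ coef
              exercise = payoff(x) > continuation
              cash[itm] = np.where(exercise, payoff(x), cash[itm])

      price_estimate = disc * cash.mean()
      print(f"LSM Bermudan put estimate: {price_estimate:.3f}")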
  2. By: Nadia Fiorino; Emma Galli; Ilde Rizzo; Marco Valente
    Abstract: Building on the literature on public procurement regulation, we use an agent-based model to assess the performance of different selection procedures. Specifically, we investigate whether and how including firms' reputation in the public procurement selection process affects the final cost of the contract. The model defines two types of actors: i) firms potentially competing to win the contract; ii) a contracting authority aiming at minimizing procurement costs. These actors respond to environmental conditions that affect the actual costs of carrying out the project and are unknown, at the time of bidding, to both the firms and the contracting authority. The results are generated through simulations under different configurations, varying parameters of the model such as the firms' skills, the level of opportunistic rebating, and the relative weights of reputation and rebate. The main conclusion is that reputation matters; some policy implications are drawn.
    Keywords: Public works; Procurement; Agent-based modelling
    Date: 2018–06–20
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2018/18&r=cmp
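    Sketch: A minimal, hedged sketch of the kind of selection rule the entry describes, assuming (as an editorial illustration, not the authors' specification) that the contracting authority scores bids by a stylized weighted mix of price rebate and firm reputation, and that reputation is updated from realized cost overruns.

      # Hedged sketch of a reputation-weighted procurement auction (illustrative,
      # not the authors' model): firms bid a rebate off a reserve price, the
      # authority scores bids by a weighted mix of rebate and reputation, and
      # reputation is updated from realized cost overruns.
      import random

      random.seed(1)
      N_FIRMS, N_ROUNDS, RESERVE_PRICE = 5, 200, 100.0
      W_REPUTATION = 0.5            # relative weight of reputation vs. rebate (assumed)

      firms = [{"skill": random.uniform(0.7, 1.0), "reputation": 0.5} for _ in range(N_FIRMS)]

      total_final_cost = 0.0
      for _ in range(N_ROUNDS):
          env_shock = random.uniform(0.9, 1.3)      # conditions unknown at bidding time
          # Less skilled firms rebate more opportunistically (cuts they may not honor).
          rebates = [(1.0 - f["skill"]) * 0.4 + random.uniform(0.0, 0.1) for f in firms]
          scores = [W_REPUTATION * f["reputation"] + (1 - W_REPUTATION) * b
                    for f, b in zip(firms, rebates)]
          winner = max(range(N_FIRMS), key=lambda i: scores[i])

          contract_price = RESERVE_PRICE * (1.0 - rebates[winner])
          actual_cost = RESERVE_PRICE * env_shock / firms[winner]["skill"]
          final_cost = max(contract_price, actual_cost)       # renegotiation when costs overrun
          overrun = final_cost / contract_price - 1.0

          # Reputation rises with on-budget delivery and falls with overruns.
          firms[winner]["reputation"] = min(1.0, max(0.0,
              firms[winner]["reputation"] + (0.05 if overrun < 0.05 else -0.1)))
          total_final_cost += final_cost

      print(f"Average final cost per contract: {total_final_cost / N_ROUNDS:.2f}")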
  3. By: Timo Klein (University of Amsterdam)
    Abstract: A novel debate within competition policy and regulation circles is whether autonomous machine learning algorithms are able to tacitly collude on prices. Using a general framework, we show how autonomous Q-learning -- a simple but well-established machine learning algorithm -- is able to achieve supracompetitive profits in a stylized oligopoly environment with sequential price competition. This occurs without any communication or explicit instructions to collude, suggesting tacit collusion. The intuition is that the algorithm is able to learn and exploit the dynamics of Edgeworth price cycles, where periodic price increases reset a gradual downward spiral of price competition. The general framework used can guide future research into the capacity of various algorithms to collude in environments that are less stylized or more case-specific.
    Keywords: pricing algorithms; algorithmic collusion; machine learning; Q-learning; sequential pricing
    JEL: K21 L13 L49
    Date: 2018–06–21
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20180056&r=cmp
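    Sketch: A minimal, hedged sketch of the mechanism named in the entry, under illustrative assumptions (two Q-learning sellers alternating price moves on a small grid, with the cheaper seller capturing demand); it is not the authors' exact environment or parameterization.

      # Hedged sketch of Q-learning under sequential (alternating-move) pricing:
      # each agent's state is the rival's standing price, its action is its own
      # new price, and its Q-update is completed the next time it moves.
      import random

      random.seed(0)
      PRICES = list(range(1, 6))                # discrete price grid, marginal cost 0
      ALPHA, GAMMA, EPS, PERIODS = 0.1, 0.95, 0.05, 500_000

      # Q[agent][state][action]; the state is the index of the rival's standing price.
      Q = [[[0.0] * len(PRICES) for _ in PRICES] for _ in range(2)]

      def profit(own, rival):
          # The cheaper seller serves the whole (unit) market; ties split it.
          return own if own < rival else 0.5 * own if own == rival else 0.0

      prices = [random.choice(PRICES), random.choice(PRICES)]   # standing prices
      pending = [None, None]        # each agent's last (state, action, reward), awaiting update
      mover = 0
      for _ in range(PERIODS):
          rival = 1 - mover
          state = PRICES.index(prices[rival])

          # Complete the mover's previous Q-update now that its next state is observed.
          if pending[mover] is not None:
              s0, a0, r0 = pending[mover]
              Q[mover][s0][a0] += ALPHA * (r0 + GAMMA * max(Q[mover][state]) - Q[mover][s0][a0])

          # Epsilon-greedy choice of a new price.
          if random.random() < EPS:
              action = random.randrange(len(PRICES))
          else:
              action = max(range(len(PRICES)), key=lambda a: Q[mover][state][a])
          prices[mover] = PRICES[action]

          # Reward: the mover's profit in the current period (a simplification).
          pending[mover] = (state, action, profit(prices[mover], prices[rival]))
          mover = rival

      print("Final standing prices:", prices)   # prices well above cost suggest supracompetitive outcomes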
  4. By: Pierre Boulanger (European Commission – JRC); Hasan Dudu (European Commission – JRC); Emanuele Ferrari (European Commission – JRC); Alfredo Mainar Causape (European Commission – JRC); Jean Balie (FAO); Lucia Battaglia (FAO)
    Abstract: This report provides scientific evidence supporting the new Agriculture Sector Growth and Transformation Strategy in Kenya. A Computable General Equilibrium (CGE) model specifically adapted to the Kenyan context is used to assess the impacts of six policy changes. For the purpose of the study, a disaggregated version of a 2014 Social Accounting Matrix (SAM) has been developed for Kenya. Multi-sectoral analytical tools are used to describe the Kenyan economy and to identify which agri-food value chains have the greatest impact in terms of output, employment and value added. Results of the simulated policy changes are then presented, with the caveat that a more careful analysis at the regional and household levels is required in order to draw robust policy recommendations.
    Keywords: CGE, Kenya, Agricultural policy
    JEL: C68
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc111251&r=cmp
  5. By: Yu-Long Zhou; Ren-Jie Han; Qian Xu; Wei-Ke Zhang
    Abstract: Intense volatility in financial markets affects people worldwide, so relatively accurate prediction of volatility is critical. We suggest that the massive data generated by human interaction with the Internet may offer a new perspective on the behavior of market participants in periods of large market movements. First, we select 28 finance-related keywords as indicators of public mood and macroeconomic factors. We then manually collect the daily Baidu search-volume index for these 28 keywords from June 1, 2006 to October 29, 2017. We apply a Long Short-Term Memory (LSTM) neural network to forecast CSI300 volatility using these search-volume data. Compared to a benchmark GARCH model, our forecasts are more accurate, which demonstrates the effectiveness of the LSTM neural network in volatility forecasting.
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1805.11954&r=cmp
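    Sketch: A minimal, hedged sketch of the modeling step described above, under illustrative assumptions (random placeholder data stands in for CSI300 realized volatility and the 28 Baidu search-volume series; the hyperparameters are not the authors').

      # Hedged sketch of an LSTM volatility forecaster on placeholder data.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      N_DAYS, N_FEATURES, LOOKBACK, HIDDEN = 600, 29, 20, 32   # 28 keywords + lagged volatility

      features = torch.randn(N_DAYS, N_FEATURES)     # placeholder for scaled search volumes etc.
      target_vol = torch.rand(N_DAYS)                # placeholder for realized volatility

      # Build (sample, time, feature) windows and next-day volatility targets.
      X = torch.stack([features[t:t + LOOKBACK] for t in range(N_DAYS - LOOKBACK)])
      y = target_vol[LOOKBACK:].unsqueeze(1)

      class LSTMVol(nn.Module):
          def __init__(self):
              super().__init__()
              self.lstm = nn.LSTM(input_size=N_FEATURES, hidden_size=HIDDEN, batch_first=True)
              self.head = nn.Linear(HIDDEN, 1)

          def forward(self, x):
              out, _ = self.lstm(x)            # out: (batch, time, hidden)
              return self.head(out[:, -1])     # predict from the last time step

      model = LSTMVol()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      for _ in range(20):                      # a few illustrative epochs of full-batch training
          optimizer.zero_grad()
          loss = loss_fn(model(X), y)
          loss.backward()
          optimizer.step()
      print("In-sample MSE after 20 epochs:", float(loss))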
  6. By: Victor Chernozhukov (Institute for Fiscal Studies and MIT); Whitney K. Newey (Institute for Fiscal Studies and MIT); James Robins (Institute for Fiscal Studies)
    Abstract: We provide adaptive inference methods for linear functionals of L1-regularized linear approximations to the conditional expectation function. Examples of such functionals include average derivatives, policy effects, average treatment effects, and many others. The construction relies on building Neyman-orthogonal equations that are approximately invariant to perturbations of the nuisance parameters, including the Riesz representer for the linear functionals. We use L1-regularized methods to learn the approximations to the regression function and the Riesz representer, and construct the estimator for the linear functionals as the solution to the orthogonal estimating equations. We establish that under weak assumptions the estimator concentrates in a 1/√n neighborhood of the target with deviations controlled by normal laws, and that it attains the semi-parametric efficiency bound in many cases. In particular, either the approximation to the regression function or the approximation to the Riesz representer can be “dense” as long as the other is sufficiently “sparse”. Our main results are non-asymptotic and imply asymptotic uniform validity over large classes of models.
    Keywords: Approximate Sparsity vs. Density, Double/De-biased Machine Learning, Regularized Riesz Representers, Linear Functionals
    Date: 2018–03–02
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:15/18&r=cmp
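    Sketch: For readers unfamiliar with the construction, the orthogonal (doubly robust) estimating equation behind this approach takes the following standard form, written here in generic notation as a hedged rendering (the paper's exact conventions may differ). g0(X) = E[Y|X] is the regression function and alpha0 is the Riesz representer satisfying E[m(W, g)] = E[alpha0(X) g(X)] for all g in the relevant class:

      % Debiased estimator of the linear functional theta_0 = E[m(W, g_0)],
      % combining a regression estimate \hat{g} with a Riesz-representer estimate
      % \hat{\alpha}, both learned by L1-regularized (Lasso-type) methods.
      \hat{\theta} \;=\; \frac{1}{n}\sum_{i=1}^{n}
        \Big[\, m(W_i,\hat{g}) \;+\; \hat{\alpha}(X_i)\,\big(Y_i - \hat{g}(X_i)\big) \Big]

    For example, for the average treatment effect one takes m(W, g) = g(1, Z) - g(0, Z), whose Riesz representer is alpha0(D, Z) = D/pi(Z) - (1 - D)/(1 - pi(Z)), with pi(Z) the propensity score.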
  7. By: Johannes Berens (WIB, University of Wuppertal); Simon Oster (WIB, University of Wuppertal); Kerstin Schneider (WIB, University of Wuppertal and CESifo); Julian Burghoff (University of Düsseldorf)
    Abstract: High rates of student attrition in tertiary education are a major concern for universities and public policy, as dropout is not only costly for the students but also wastes public funds. To successfully reduce student attrition, it is imperative to understand which students are at risk of dropping out and what the underlying determinants of dropout are. We develop an early detection system (EDS) that uses machine learning and classic regression techniques to predict student success in tertiary education as a basis for targeted intervention. The method developed in this paper is highly standardized and can easily be implemented at any German institution of higher education: it uses student performance and demographic data that are collected, stored, and maintained by legal mandate at all German universities, and it therefore self-adjusts to the university where it is employed. The EDS uses regression analysis and machine learning methods, such as neural networks, decision trees, and the AdaBoost algorithm, to identify student characteristics which distinguish potential dropouts from graduates. The EDS we present is tested and applied at a medium-sized state university with 23,000 students and at a medium-sized private university of applied sciences with 6,700 students. The two institutions of higher education differ considerably in their organization, tuition fees, and student-teacher ratios. Our results indicate a prediction accuracy at the end of the first semester of 79% for the state university and 85% for the private university of applied sciences. Furthermore, the accuracy of the EDS increases with each completed semester as new performance data become available. After the fourth semester, the accuracy improves to 90% for the state university and 95% for the private university of applied sciences. On the day of enrollment, the accuracy, relying only on demographic data, is 68% for the state university and 67% for the private university.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:bwu:schdps:sdp18006&r=cmp
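    Sketch: A minimal, hedged sketch of one of the classifiers named above (AdaBoost over decision stumps), using synthetic stand-ins for the administrative performance and demographic features; it is not the authors' feature set or tuning.

      # Hedged sketch of an AdaBoost dropout classifier on synthetic stand-in data
      # (the paper uses administrative performance and demographic records instead).
      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)
      n = 5000
      credits_earned = rng.normal(25, 10, n)      # e.g. ECTS credits after the first semester
      avg_grade = rng.normal(2.5, 0.7, n)         # German grade scale: lower is better
      age_at_enrollment = rng.normal(21, 3, n)
      X = np.column_stack([credits_earned, avg_grade, age_at_enrollment])

      # Synthetic dropout rule: few credits and weak grades raise dropout risk.
      risk = -0.08 * credits_earned + 0.9 * avg_grade + rng.normal(0, 1, n)
      y = (risk > np.quantile(risk, 0.7)).astype(int)      # roughly 30% labeled as dropouts

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
      model = AdaBoostClassifier(n_estimators=200, random_state=0)  # default base: depth-1 trees
      model.fit(X_train, y_train)
      print("Held-out accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))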
  8. By: Kuznetsov, Dmitriy (Russian Presidential Academy of National Economy and Public Administration (RANEPA)); Sedalishchev, Vladimir (Russian Presidential Academy of National Economy and Public Administration (RANEPA)); Knobel, Alexander (Russian Presidential Academy of National Economy and Public Administration (RANEPA))
    Abstract: The purpose of this study is to assess the impact of various directions of economic integration within the framework of the EAEU on the economies of the member countries. The analysis is carried out with a computable general equilibrium (CGE) model suited to modeling non-tariff barriers and monopolistic competition. Based on the results obtained, we conclude that deepening economic integration within the EAEU should take priority over expanding EAEU membership and over free trade agreements between the EAEU and the main trading partners of the EAEU countries.
    Keywords: computable general equilibrium model, EAEU, tariff liberalization, non-tariff barriers in trade
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:rnp:wpaper:061825&r=cmp
  9. By: Ivan P. Yamshchikov; Sharwin Rezagholi
    Abstract: This paper employs a novel method for the empirical analysis of political discourse and develops a model that demonstrates dynamics comparable with the empirical data. Applying a set of binary text classifiers based on convolutional neural networks, we label statements in the political programs of the Democratic and the Republican Party in the United States. Extending the framework of the Colonel Blotto game by a stochastic activation structure, we show that, under a simple learning rule, the simulated game exhibits dynamics that resemble the empirical data.
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1805.12083&r=cmp
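    Sketch: A minimal, hedged sketch of a Colonel Blotto payoff with a stochastic activation layer, as an editorial illustration of the game-theoretic extension the entry names; the activation probabilities and allocation rule are assumptions, not the authors' calibration, and no learning rule is implemented here.

      # Hedged sketch of a Colonel Blotto payoff with stochastic battlefield activation
      # (illustrative: each battlefield/topic is only "active" with some probability,
      # and only active battlefields count toward the round's score).
      import random

      random.seed(0)
      N_FIELDS, BUDGET, ROUNDS = 5, 100, 10_000
      ACTIVATION_P = [0.9, 0.7, 0.5, 0.5, 0.3]     # assumed activation probabilities per topic

      def random_allocation():
          # Split the budget uniformly at random across battlefields.
          cuts = sorted(random.randint(0, BUDGET) for _ in range(N_FIELDS - 1))
          return [b - a for a, b in zip([0] + cuts, cuts + [BUDGET])]

      def blotto_score(alloc_a, alloc_b):
          score = 0
          for j in range(N_FIELDS):
              if random.random() > ACTIVATION_P[j]:
                  continue                          # battlefield not activated this round
              if alloc_a[j] > alloc_b[j]:
                  score += 1
              elif alloc_a[j] < alloc_b[j]:
                  score -= 1
          return score                              # >0: A wins the round, <0: B wins

      wins_a = sum(blotto_score(random_allocation(), random_allocation()) > 0 for _ in range(ROUNDS))
      print(f"Player A wins {wins_a / ROUNDS:.1%} of rounds under uniform random allocations")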

General information on the NEP project can be found at https://nep.repec.org. For comments, please write to the director of NEP, Marco Novarese, at <director@nep.repec.org>. Put “NEP” in the subject line; otherwise, your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.