nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒05‒11
thirty-six papers chosen by



  1. Differential Machine Learning By Antoine Savine; Brian Huge
  2. Neural Networks and Value at Risk By Alexander Arimond; Damian Borth; Andreas Hoepner; Michael Klawunn; Stefan Weisheit
  3. A neural network model for solvency calculations in life insurance By Lucio Fernandez-Arjona
  4. Deep xVA solver -- A neural network based counterparty credit risk management framework By Alessandro Gnoatto; Athena Picarelli; Christoph Reisinger
  5. A machine learning approach to portfolio pricing and risk management for high-dimensional problems By Lucio Fernandez Arjona; Damir Filipović
  6. On the Equivalence of Neural and Production Networks By Roy Gernhardt; Bjorn Persson
  7. A Time Series Analysis-Based Stock Price Prediction Using Machine Learning and Deep Learning Models By Sidra Mehtab; Jaydip Sen
  8. Hedging with Neural Networks By Johannes Ruf; Weiguan Wang
  9. A generative adversarial network approach to calibration of local stochastic volatility models By Christa Cuchiero; Wahid Khosrawi; Josef Teichmann
  10. A machine learning approach to portfolio pricing and risk management for high-dimensional problems By Lucio Fernandez-Arjona; Damir Filipović
  11. Hedging and machine learning driven crude oil data analysis using a refined Barndorff-Nielsen and Shephard model By Humayra Shoshi; Indranil SenGupta
  12. Sequential hypothesis testing in machine learning driven crude oil jump detection By Michael Roberts; Indranil SenGupta
  13. Machine Learning Econometrics: Bayesian algorithms and methods By Dimitris Korobilis; Davide Pettenuzzo
  14. A Multialternative Neural Decision Process By Simone Cerreia-Vioglio; Fabio Maccheroni; Massimo Marinacci
  15. Interbank risk assessment: A simulation approach By Jager, Maximilian; Siemsen, Thomas; Vilsmeier, Johannes
  16. Computing Bayes: Bayesian Computation from 1763 to the 21st Century By Gael M. Martin; David T. Frazier; Christian P. Robert
  17. Consistent Calibration of Economic Scenario Generators: The Case for Conditional Simulation By Misha van Beek
  18. Environmental Economics and Uncertainty: Review and a Machine Learning Outlook By Ruda Zhang; Patrick Wingo; Rodrigo Duran; Kelly Rose; Jennifer Bauer; Roger Ghanem
  19. Confronting climate change: Adaptation vs. migration strategies in Small Island Developing States By Lesly Cassin; Paolo Melindi-Ghidi; Fabien Prieur
  20. Long short-term memory networks and laglasso for bond yield forecasting: Peeping inside the black box By Manuel Nunes; Enrico Gerding; Frank McGroarty; Mahesan Niranjan
  21. A Stochastic LQR Model for Child Order Placement in Algorithmic Trading By Jackie Jianhong Shen
  22. Microsimulation of residential activity for alternative urban development scenarios: A case study on brussels and flemish brabant By Frederik Priem; Philip Stessens; Frank Canters
  23. Comparing conventional and machine-learning approaches to risk assessment in domestic abuse cases By Grogger, Jeffrey; Ivandic, Ria; Kirchmaier, Thomas
  24. Optimal Taxation in an Endogenous Fertility Model with Non-Cooperative Couples By Takuya Obara; Yoshitomo Ogawa
  25. Best Practices for Artificial Intelligence in Life Sciences Research By Makarov, Vladimir; Stouch, Terry; Allgood, Brandon; Willis, Christopher; Lynch, Nick
  26. An Interior-Point Path-Following Method to Compute Stationary Equilibria in Stochastic Games By Dang, Chuangyin; Herings, P. Jean-Jacques; Li, Peixuan
  27. It Takes a Village: The Economics of Parenting with Neighborhood and Peer Effects By Agostinelli, Francesco; Doepke, Matthias; Sorrenti, Giuseppe; Zilibotti, Fabrizio
  28. Macroeconomic impacts of the public health response to COVID-19 By Eric Kemp-Benedict
  29. Permutation Tests for Comparing Inequality Measures By Jean-Marie Dufour; Emmanuel Flachaire; Lynda Khalaf
  30. A comparison of Swiss, German and Polish fiscal rules using Monte Carlo simulations By Adam Pigoń; Michał Ramsza
  31. The Propagation of Demand Shocks Through Housing Markets By Elliot Anenberg; Daniel R. Ringo
  32. Modeling R&D spillovers to productivity. The effects of tax policy By Thomas von Brasch; Ådne Cappelen; Håvard Hungnes; Terje Skjerpen
  33. ESG2Risk: A Deep Learning Framework from ESG News to Stock Volatility Prediction By Tian Guo; Nicolas Jamet; Valentin Betrix; Louis-Alexandre Piquet; Emmanuel Hauptmann
  34. The economic impact of public R&D: an international perspective By Soete, Luc; Verspagen, Bart; Ziesemer, Thomas
  35. Multimarket Contact and Collusion in Online Retail By Poppius, Hampus
  36. Do Female Role Models Reduce the Gender Gap in Science? Evidence from French High Schools By Breda, Thomas; Grenet, Julien; Monnet, Marion; Van Effenterre, Clémentine

  1. By: Antoine Savine; Brian Huge
    Abstract: Differential machine learning (ML) extends supervised learning, with models trained on examples of not only inputs and labels, but also differentials of labels to inputs. Differential ML is applicable in all situations where high quality first order derivatives wrt training inputs are available. In the context of financial Derivatives risk management, pathwise differentials are efficiently computed with automatic adjoint differentiation (AAD). Differential ML, combined with AAD, provides extremely effective pricing and risk approximations. We can produce fast pricing analytics in models too complex for closed form solutions, extract the risk factors of complex transactions and trading books, and effectively compute risk management metrics like reports across a large number of scenarios, backtesting and simulation of hedge strategies, or capital regulations. The article focuses on differential deep learning (DL), arguably the strongest application. Standard DL trains neural networks (NN) on punctual examples, whereas differential DL teaches them the shape of the target function, resulting in vastly improved performance, illustrated with a number of numerical examples, both idealized and real world. In the online appendices, we apply differential learning to other ML models, like classic regression or principal component analysis (PCA), with equally remarkable results. This paper is meant to be read in conjunction with its companion GitHub repo https://github.com/differential-machine-learning, where we posted a TensorFlow implementation, tested on Google Colab, along with examples from the article and additional ones. We also posted appendices covering many practical implementation details not covered in the paper, mathematical proofs, application to ML models besides neural networks and extensions necessary for a reliable implementation in production.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.02347&r=all
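    Illustration: a minimal numpy sketch of the differential-regression idea above, fitting a model to labels and their derivatives jointly. The toy target, polynomial basis and derivative weighting are illustrative assumptions, not the paper's setup; the authors' own TensorFlow implementation is in the linked GitHub repository.
      # Toy differential regression: fit a polynomial to labels AND their
      # differentials, the basic device behind differential ML.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(-3, 3, 200)
      y = np.sin(x) + 0.1 * rng.standard_normal(200)     # noisy labels
      dy = np.cos(x) + 0.1 * rng.standard_normal(200)    # noisy differentials (e.g. from AAD)

      deg = 7
      X = np.vander(x, deg + 1, increasing=True)          # basis evaluated at x
      dX = np.zeros_like(X)                               # derivative of the basis
      dX[:, 1:] = X[:, :-1] * np.arange(1, deg + 1)

      lam = 1.0                                           # weight on the differential term
      A = np.vstack([X, np.sqrt(lam) * dX])
      b = np.concatenate([y, np.sqrt(lam) * dy])
      coef, *_ = np.linalg.lstsq(A, b, rcond=None)        # minimises value MSE + lam * derivative MSE

      x_test = np.linspace(-3, 3, 5)
      print(np.c_[np.sin(x_test), np.vander(x_test, deg + 1, increasing=True) @ coef])
    With a neural network in place of the polynomial basis, the derivative of the model would come from automatic differentiation rather than from the closed-form basis derivative.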
  2. By: Alexander Arimond; Damian Borth; Andreas Hoepner; Michael Klawunn; Stefan Weisheit
    Abstract: Utilizing a generative regime switching framework, we perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation. Using equity markets and long term bonds as test assets in the global, US, Euro area and UK setting over a sample horizon of up to 1,250 weeks ending in August 2018, we investigate neural networks along three design steps relating to (i) the initialization of the neural network, (ii) the incentive function according to which it is trained and (iii) the amount of data we feed. First, we compare neural networks with random seeding against networks that are initialized via estimations from the best-established model (i.e. the Hidden Markov model). We find the latter to outperform in terms of the frequency of VaR breaches (i.e. the realized return falling short of the estimated VaR threshold). Second, we balance the incentive structure of the loss function of our networks by adding a second objective to the training instructions, so that the neural networks optimize for accuracy while also aiming to stay within empirically realistic regime distributions (i.e. bull vs. bear market frequencies). This design feature in particular enables the balanced incentive recurrent neural network (RNN) to outperform the single incentive RNN, as well as any other neural network or established approach, by statistically and economically significant levels. Third, we halve our training data set of 2,000 days. We find that our networks, when fed with substantially less data (i.e. 1,000 days), perform significantly worse, which highlights a crucial weakness of neural networks in their dependence on very large data sets ...
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.01686&r=all
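    Illustration: a rough numpy sketch of the simulation setup described above, drawing returns from a two-regime (bull/bear) Markov mixture, reading off a 99% VaR threshold and counting breaches. All parameter values are invented for the example and bear no relation to the paper's estimates.
      # Toy regime-switching Monte Carlo for a VaR threshold and breach count.
      import numpy as np

      rng = np.random.default_rng(1)
      P = np.array([[0.97, 0.03],        # transition probabilities: bull -> {bull, bear}
                    [0.10, 0.90]])       #                            bear -> {bull, bear}
      mu = np.array([0.002, -0.004])     # weekly mean return per regime
      sigma = np.array([0.015, 0.035])   # weekly volatility per regime

      def simulate(n_weeks, n_paths):
          r = np.empty((n_paths, n_weeks))
          s = np.zeros(n_paths, dtype=int)                        # start in the bull regime
          for t in range(n_weeks):
              r[:, t] = rng.normal(mu[s], sigma[s])
              s = (rng.random(n_paths) < P[s, 1]).astype(int)     # switch to bear with prob P[s, 1]
          return r

      sims = simulate(n_weeks=52, n_paths=10_000)
      var_99 = np.quantile(sims.sum(axis=1), 0.01)                # 99% VaR threshold for annual returns
      realised = simulate(n_weeks=52, n_paths=500).sum(axis=1)    # pseudo out-of-sample returns
      print(f"99% VaR threshold: {var_99:.3f}, breach rate: {(realised < var_99).mean():.3%}")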
  3. By: Lucio Fernandez-Arjona
    Abstract: Insurance companies make extensive use of Monte Carlo simulations in their capital and solvency models. To overcome the computational problems associated with Monte Carlo simulations, most large life insurance companies use proxy models such as replicating portfolios. In this paper, we present an example based on a variable annuity guarantee, showing the main challenges faced by practitioners in the construction of replicating portfolios: the feature engineering step and subsequent basis function selection problem. We describe how neural networks can be used as a proxy model and how to apply risk-neutral pricing on a neural network to integrate such a model into a market risk framework. The proposed model naturally solves the feature engineering and feature selection problems of replicating portfolios.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.02318&r=all
  4. By: Alessandro Gnoatto; Athena Picarelli; Christoph Reisinger
    Abstract: In this paper, we present a novel computational framework for portfolio-wide risk management problems where the presence of a potentially large number of risk factors makes traditional numerical techniques ineffective. The new method utilises a coupled system of BSDEs for the valuation adjustments (xVA) and solves these by a recursive application of a neural network based BSDE solver. This not only makes the computation of xVA for high-dimensional problems feasible, but also produces hedge ratios and dynamic risk measures for xVA, and allows simulations of the collateral account.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.02633&r=all
  5. By: Lucio Fernandez Arjona (Zurich Insurance Group); Damir Filipović (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute)
    Abstract: We present a general framework for portfolio risk management in discrete time, based on a replicating martingale. This martingale is learned from a finite sample in a supervised setting. The model learns the features necessary for an effective low-dimensional representation, overcoming the curse of dimensionality common to function approximation in high-dimensional spaces. We show results based on polynomial and neural network bases. Both offer superior results to naive Monte Carlo methods and other existing methods like least-squares Monte Carlo and replicating portfolios.
    Keywords: Solvency capital; dimensionality reduction; neural networks; nested Monte Carlo; replicating portfolios.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2028&r=all
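    Illustration: the core device above, learning a conditional-expectation proxy from a single inner sample per outer scenario, can be reproduced with an ordinary least-squares fit. The payoff, basis and dimensions below are toy assumptions rather than anything from the paper.
      # Toy least-squares proxy for an "inner" conditional value given an
      # "outer" risk-factor scenario (the device behind LSMC-type proxies).
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(2)
      n_outer = 5_000
      x = rng.standard_normal(n_outer)                    # outer risk-factor scenarios
      z = rng.standard_normal(n_outer)                    # one inner sample per scenario
      payoff = np.maximum(x + 0.5 * z, 0.0)               # noisy inner payoff

      # Regressing the noisy payoff on a basis of the outer state approximates
      # V(x) = E[payoff | x] even with a single inner sample per scenario.
      B = np.vander(x, 5, increasing=True)
      coef, *_ = np.linalg.lstsq(B, payoff, rcond=None)

      x_grid = np.array([-2.0, 0.0, 2.0])
      proxy = np.vander(x_grid, 5, increasing=True) @ coef
      exact = x_grid * norm.cdf(x_grid / 0.5) + 0.5 * norm.pdf(x_grid / 0.5)
      print(np.c_[exact, proxy].round(3))                 # exact conditional value vs learned proxy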
  6. By: Roy Gernhardt; Bjorn Persson
    Abstract: This paper identifies for the first time the mathematical equivalence between economic networks of Cobb-Douglas agents and Artificial Neural Networks. It explores two implications of this equivalence under general conditions. First, a burgeoning literature has established that network propagation can transform microeconomic perturbations into large aggregate shocks. Neural network equivalence amplifies the magnitude and complexity of this phenomenon. Second, if economic agents adjust their production and utility functions in optimal response to local conditions, market pricing is a sufficient and robust channel for information feedback leading to global, macro-scale learning at the level of the economy as a whole.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.00510&r=all
  7. By: Sidra Mehtab; Jaydip Sen
    Abstract: Prediction of the future movement of stock prices has always been a challenging task for researchers. While advocates of the efficient market hypothesis (EMH) believe that it is impossible to design any predictive framework that can accurately predict the movement of stock prices, there is seminal work in the literature that has clearly demonstrated that the seemingly random movement patterns in the time series of a stock price can be predicted with a high level of accuracy. Designing such predictive models requires the choice of appropriate variables, the right transformation methods for the variables, and tuning of the model parameters. In this work, we present a very robust and accurate framework for stock price prediction that consists of an agglomeration of statistical, machine learning and deep learning models. We use the daily stock price data, collected at five-minute intervals, of a very well known company listed on the National Stock Exchange (NSE) of India. The granular data is aggregated into three slots per day, and the aggregated data is used for building and training the forecasting models. We contend that the agglomerative approach to model building, which uses a combination of statistical, machine learning, and deep learning approaches, can very effectively learn from the volatile and random movement patterns in stock price data. We build eight classification and eight regression models based on statistical and machine learning approaches. In addition to these models, a deep learning regression model using a long short-term memory (LSTM) network is also built. Extensive results are presented on the performance of these models, and the results are critically analyzed.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.11697&r=all
  8. By: Johannes Ruf; Weiguan Wang
    Abstract: We study neural networks as nonparametric estimation tools for the hedging of options. To this end, we design a network, named HedgeNet, that directly outputs a hedging strategy. This network is trained to minimise the hedging error instead of the pricing error. Applied to end-of-day and tick prices of S&P 500 and Euro Stoxx 50 options, the network is able to reduce the mean squared hedging error of the Black-Scholes benchmark significantly. We illustrate, however, that a similar benefit arises by simple linear regressions that incorporate the leverage effect. Finally, we show how a faulty training/test data split, possibly along with an additional 'tagging' of data, leads to a significant overestimation of the outperformance of neural networks.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.08891&r=all
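    Illustration: a stripped-down version of training on hedging error rather than pricing error; on simulated one-day moves, the hedge ratio is chosen to minimise the mean squared hedging error directly. The Black-Scholes dynamics and parameters below are illustrative assumptions, and the leverage-effect terms discussed in the paper are omitted.
      # Fit a one-step hedge ratio by minimising E[(dV - h dS)^2] directly.
      import numpy as np
      from math import erf, exp, log, sqrt

      def bs_call(S, K, T, sigma, r=0.0):
          d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
          d2 = d1 - sigma * sqrt(T)
          N = lambda u: 0.5 * (1 + erf(u / sqrt(2)))
          return S * N(d1) - K * exp(-r * T) * N(d2)

      rng = np.random.default_rng(3)
      S0, K, T, sigma, dt = 100.0, 100.0, 0.25, 0.2, 1 / 252
      S1 = S0 * np.exp(-0.5 * sigma**2 * dt + sigma * sqrt(dt) * rng.standard_normal(20_000))
      V0 = bs_call(S0, K, T, sigma)
      V1 = np.array([bs_call(s, K, T - dt, sigma) for s in S1])

      dV, dS = V1 - V0, S1 - S0
      h = (dS @ dV) / (dS @ dS)                 # least-squares hedge ratio (no intercept)
      print(f"fitted hedge ratio: {h:.3f}, hedging-error std: {np.std(dV - h * dS):.4f}")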
  9. By: Christa Cuchiero; Wahid Khosrawi; Josef Teichmann
    Abstract: We propose a fully data driven approach to calibrate local stochastic volatility (LSV) models, circumventing in particular the ad hoc interpolation of the volatility surface. To achieve this, we parametrize the leverage function by a family of feed forward neural networks and learn their parameters directly from the available market option prices. This should be seen in the context of neural SDEs and (causal) generative adversarial networks: we generate volatility surfaces by specific neural SDEs, whose quality is assessed by quantifying, in an adversarial manner, distances to market prices. The minimization of the calibration functional relies strongly on a variance reduction technique based on hedging and deep hedging, which is interesting in its own right: it allows us to calculate model prices and model implied volatilities accurately using only small sets of sample paths. For numerical illustration we implement a SABR-type LSV model and conduct a thorough statistical performance analysis on many samples of implied volatility smiles, showing the accuracy and stability of the method.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.02505&r=all
  10. By: Lucio Fernandez-Arjona (University of Zurich); Damir Filipović (EPFL and Swiss Finance Institute)
    Abstract: We present a general framework for portfolio risk management in discrete time, based on a replicating martingale. This martingale is learned from a finite sample in a supervised setting. The model learns the features necessary for an effective low-dimensional representation, overcoming the curse of dimensionality common to function approximation in high-dimensional spaces. We show results based on polynomial and neural network bases. Both offer superior results to naive Monte Carlo methods and other existing methods like least-squares Monte Carlo and replicating portfolios.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.14149&r=all
  11. By: Humayra Shoshi; Indranil SenGupta
    Abstract: In this paper, a refined Barndorff-Nielsen and Shephard (BN-S) model is implemented to find an optimal hedging strategy for commodity markets. The refinement of the BN-S model is obtained with various machine and deep learning algorithms. The refinement leads to the extraction of a deterministic parameter from the empirical data set. The problem is transformed to an appropriate classification problem with a couple of different approaches: the volatility approach and the duration approach. The analysis is implemented to the Bakken crude oil data and the aforementioned deterministic parameter is obtained for a wide range of data sets. With the implementation of this parameter in the refined model, the resulting model performs much better than the classical BN-S model.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.14862&r=all
  12. By: Michael Roberts; Indranil SenGupta
    Abstract: In this paper we present a sequential hypothesis test for the detection of a general jump size distribution. Infinitesimal generators for the corresponding log-likelihood ratios are presented and analyzed. Bounds for the infinitesimal generators in terms of super-solutions and sub-solutions are computed. This is shown to be implementable in relation to various classification problems for a crude oil price data set. Machine and deep learning algorithms are implemented to extract a specific deterministic component from the crude oil data set, and this deterministic component is used to improve the Barndorff-Nielsen and Shephard model, a commonly used stochastic model for derivative and commodity market analysis.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.08889&r=all
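    Illustration: the sequential-testing logic can be conveyed with a textbook Wald sequential probability ratio test on jump sizes. The exponential jump-size hypotheses and error rates below are arbitrary stand-ins, not the distributions analysed in the paper.
      # Wald SPRT: H0: jump sizes ~ Exp(rate 1) versus H1: jump sizes ~ Exp(rate 2).
      import numpy as np

      rng = np.random.default_rng(4)
      alpha, beta = 0.01, 0.01                                       # target error probabilities
      A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))  # Wald acceptance thresholds

      lam0, lam1 = 1.0, 2.0
      llr, n = 0.0, 0
      while B < llr < A:
          x = rng.exponential(1 / lam1)                              # observed jump (truth here: H1)
          llr += (np.log(lam1) - lam1 * x) - (np.log(lam0) - lam0 * x)
          n += 1

      print("accept H1" if llr >= A else "accept H0", f"after {n} jumps, llr = {llr:.2f}")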
  13. By: Dimitris Korobilis; Davide Pettenuzzo
    Abstract: As the amount of economic and other data generated worldwide increases vastly, a challenge for future generations of econometricians will be to master efficient algorithms for inference in empirical models with large information sets. This Chapter provides a review of popular estimation algorithms for Bayesian inference in econometrics and surveys alternative algorithms developed in machine learning and computing science that allow for efficient computation in high-dimensional settings. The focus is on scalability and parallelizability of each algorithm, as well as their ability to be adopted in various empirical settings in economics and finance.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.11486&r=all
  14. By: Simone Cerreia-Vioglio; Fabio Maccheroni; Massimo Marinacci
    Abstract: We introduce an algorithmic decision process for multialternative choice that combines binary comparisons and Markovian exploration. We show that a functional property, transitivity, makes it testable.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.01081&r=all
  15. By: Jager, Maximilian; Siemsen, Thomas; Vilsmeier, Johannes
    Abstract: We introduce a novel simulation-based network approach, which provides full-fledged distributions of potential interbank losses. Based on those distributions we propose measures for (i) the systemic importance of single banks, (ii) the vulnerability of single banks, and (iii) the vulnerability of the whole sector. The framework can be used for the calibration of macro-prudential capital charges, the assessment of systemic risks in the banking sector, and for the calculation of banks' interbank loss distributions in general. Our application to German regulatory data from end-2016 shows that the German interbank network was at that time in general resilient to the default of large banks, i.e. it did not exhibit substantial contagion risk. Even though up to four contagion defaults could occur due to an exogenous shock, the system-wide 99.9% VaR barely exceeds 1.5% of banks' CET 1 capital. For single institutions, however, we find indications of elevated vulnerabilities and hence a need for close supervision.
    Keywords: Interbank contagion,credit risk,systemic risk,loss simulation
    JEL: G17 G21 G28
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:232020&r=all
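    Illustration: a bare-bones version of such a loss simulation, with random exogenous defaults, write-downs on interbank claims and iterated contagion defaults until the network settles. The exposure matrix, loss-given-default and capital buffers are invented for the example.
      # Toy interbank contagion simulation and a system-wide 99.9% VaR.
      import numpy as np

      rng = np.random.default_rng(5)
      n_banks, lgd = 20, 0.45
      exposure = rng.exponential(1.0, (n_banks, n_banks))   # exposure[i, j]: claim of bank i on bank j
      np.fill_diagonal(exposure, 0.0)
      capital = exposure.sum(axis=1) * 0.25                 # stylised CET1 buffers
      pd = np.full(n_banks, 0.02)                           # exogenous default probability

      def simulate(n_sims=10_000):
          losses = np.zeros((n_sims, n_banks))
          for k in range(n_sims):
              defaulted = rng.random(n_banks) < pd
              while True:
                  loss = lgd * exposure[:, defaulted].sum(axis=1)
                  newly = (loss > capital) & ~defaulted     # contagion defaults this round
                  if not newly.any():
                      losses[k] = loss
                      break
                  defaulted |= newly
          return losses

      total = simulate().sum(axis=1)
      var999 = np.quantile(total, 0.999)
      print(f"system-wide 99.9% VaR: {var999:.2f} ({var999 / capital.sum():.1%} of aggregate CET1)")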
  16. By: Gael M. Martin; David T. Frazier; Christian P. Robert
    Abstract: The Bayesian statistical paradigm uses the language of probability to express uncertainty about the phenomena that generate observed data. Probability distributions thus characterize Bayesian inference, with the rules of probability used to transform prior probability distributions for all unknowns - models, parameters, latent variables - into posterior distributions, subsequent to the observation of data. Conducting Bayesian inference requires the evaluation of integrals in which these probability distributions appear. Bayesian computation is all about evaluating such integrals in the typical case where no analytical solution exists. This paper takes the reader on a chronological tour of Bayesian computation over the past two and a half centuries. Beginning with the one-dimensional integral first confronted by Bayes in 1763, through to recent problems in which the unknowns number in the millions, we place all computational problems into a common framework, and describe all computational methods using a common notation. The aim is to help new researchers in particular - and more generally those interested in adopting a Bayesian approach to empirical work - make sense of the plethora of computational techniques that are now on offer; understand when and why different methods are useful; and see the links that do exist between them all.
    Keywords: history of Bayesian computation, Laplace approximation, Markov chain Monte Carlo, importance sampling, approximate Bayesian computation, Bayesian synthetic likelihood, variational Bayes, integrated nested Laplace approximation.
    JEL: C11 C15 C52
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2020-14&r=all
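    Illustration: the one-dimensional integral that opens the paper's tour, Bayes' 1763 problem of the posterior probability that a binomial success rate lies in an interval, is small enough to check by brute force. The counts and interval below are arbitrary.
      # Bayes' problem: uniform prior on theta, s successes in n trials,
      # estimate P(a < theta < b | data) by prior sampling with likelihood weights.
      import numpy as np
      from scipy.stats import beta

      rng = np.random.default_rng(6)
      n, s, a, b = 10, 7, 0.4, 0.8

      theta = rng.uniform(0, 1, 1_000_000)                # draws from the uniform prior
      like = theta**s * (1 - theta)**(n - s)              # binomial likelihood (up to a constant)
      post_prob = like[(theta > a) & (theta < b)].sum() / like.sum()

      exact = beta.cdf(b, s + 1, n - s + 1) - beta.cdf(a, s + 1, n - s + 1)
      print(f"Monte Carlo: {post_prob:.4f}   exact Beta posterior: {exact:.4f}")
    The same self-normalised weighting, with a better chosen proposal distribution, is the germ of the importance-sampling methods surveyed in the paper.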
  17. By: Misha van Beek
    Abstract: Economic Scenario Generators (ESGs) simulate economic and financial variables forward in time for risk management and asset allocation purposes. It is often not feasible to calibrate the dynamics of all variables within the ESG to historical data alone. Calibration to forward-information such as future scenarios and return expectations is needed for stress testing and portfolio optimization, but no generally accepted methodology is available. This paper introduces the Conditional Scenario Simulator, which is a framework for consistently calibrating simulations and projections of economic and financial variables both to historical data and forward-looking information. The framework can be viewed as a multi-period, multi-factor generalization of the Black-Litterman model, and can embed a wide array of financial and macroeconomic models. Two practical examples demonstrate this in a frequentist and Bayesian setting.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.09042&r=all
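    Illustration: the basic building block, simulating some variables conditional on fixed values (views) for others in a joint Gaussian model, reduces to the standard conditioning formulas. The two-asset numbers below are made up, and the paper's framework is considerably more general (multi-period, multi-factor).
      # Simulate bond returns conditional on a view that the equity return is -10%.
      import numpy as np

      rng = np.random.default_rng(7)
      mu = np.array([0.06, 0.02])                    # unconditional means: equity, bond
      Sigma = np.array([[0.04, -0.004],
                        [-0.004, 0.0025]])           # covariance (negative stock-bond correlation)

      view = -0.10                                   # conditioning value for the equity return
      mu_cond = mu[1] + Sigma[1, 0] / Sigma[0, 0] * (view - mu[0])
      var_cond = Sigma[1, 1] - Sigma[1, 0] ** 2 / Sigma[0, 0]

      bond_scenarios = rng.normal(mu_cond, np.sqrt(var_cond), 10_000)
      print(f"bond return given equity = -10%: mean {bond_scenarios.mean():+.3f}, std {bond_scenarios.std():.3f}")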
  18. By: Ruda Zhang; Patrick Wingo; Rodrigo Duran; Kelly Rose; Jennifer Bauer; Roger Ghanem
    Abstract: Economic assessment in environmental science concerns the measurement or valuation of environmental impacts, adaptation, and vulnerability. Integrated assessment modeling is a unifying framework of environmental economics, which attempts to combine key elements of physical, ecological, and socioeconomic systems. Uncertainty characterization in integrated assessment varies by component model: uncertainties associated with mechanistic physical models are often assessed with an ensemble of simulations or Monte Carlo sampling, while uncertainties associated with impact models are evaluated by conjecture or econometric analysis. Manifold sampling is a machine learning technique that constructs a joint probability model of all relevant variables, which may be concentrated on a low-dimensional geometric structure. Compared with traditional density estimation methods, manifold sampling is more efficient, especially when the data are generated by a few latent variables. The manifold-constrained joint probability model helps answer policy-making questions ranging from prediction to response and prevention. Manifold sampling is applied to assess the risk of offshore drilling in the Gulf of Mexico.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.11780&r=all
  19. By: Lesly Cassin (EconomiX - UPN - Université Paris Nanterre - CNRS - Centre National de la Recherche Scientifique); Paolo Melindi-Ghidi (EconomiX - UPN - Université Paris Nanterre - CNRS - Centre National de la Recherche Scientifique, AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique); Fabien Prieur (CEE-M - Centre d'Economie de l'Environnement - Montpellier - FRE2010 - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - Montpellier SupAgro - Institut national d’études supérieures agronomiques de Montpellier - UM - Université de Montpellier - CNRS - Centre National de la Recherche Scientifique)
    Abstract: This paper examines the optimal adaptation policy of Small Island Developing States (SIDS) to cope with climate change. We build a dynamic optimization problem to incorporate the following ingredients: (i) local production uses labor and natural capital, which is degraded as a result of climate change; (ii) governments have two main policy options: controlling migration and/or conventional adaptation measures; (iii) migration decisions drive changes in the population size; (iv) expatriates send remittances back home. We show that the optimal policy depends on the interplay between the two policy instruments, which can be either complements or substitutes depending on the individual characteristics and initial conditions. Using a numerical analysis based on the calibration of the model for different SIDS, we find that only large islands use the two tools from the beginning, while for the smaller countries there is a substitution between migration and conventional adaptation in the initial period.
    Keywords: SIDS,climate change,adaptation,migration,natural capital.
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:hal:wpceem:hal-02515116&r=all
  20. By: Manuel Nunes; Enrico Gerding; Frank McGroarty; Mahesan Niranjan
    Abstract: Modern decision-making in fixed income asset management benefits from intelligent systems, which involve the use of state-of-the-art machine learning models and appropriate methodologies. We conduct the first study of bond yield forecasting using long short-term memory (LSTM) networks, validating their potential and identifying their memory advantage. Specifically, we model the 10-year bond yield using univariate LSTMs with three input sequences and five forecasting horizons. We compare these with multilayer perceptrons (MLPs), both univariate and with the most relevant features. To demystify the notion of the black box associated with LSTMs, we conduct the first internal study of the model. To this end, we calculate the LSTM signals through time, at selected locations in the memory cell, using sequence-to-sequence architectures, both uni- and multivariate. We then proceed to explain the states' signals using exogenous information, for which we develop the LSTM-LagLasso methodology. The results show that the univariate LSTM model with additional memory is capable of achieving results similar to the multivariate MLP using macroeconomic and market information. Furthermore, shorter forecasting horizons require smaller input sequences, and vice versa. The most remarkable property found consistently in the LSTM signals is the activation/deactivation of units through time, and the specialisation of units by yield range or feature. These signals are complex but can be explained by exogenous variables. Additionally, some of the relevant features identified via LSTM-LagLasso are not commonly used in forecasting models. In conclusion, our work validates the potential of LSTMs and these methodologies for bonds, providing additional tools for financial practitioners.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.02217&r=all
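    Illustration: the LagLasso step, explaining a signal with a sparse regression over lagged exogenous variables, can be sketched with scikit-learn's Lasso. The synthetic data, lag depth and penalty below are illustrative assumptions, not the paper's specification.
      # Lasso over lagged exogenous features: both the feature and its lag are selected.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(8)
      T, n_feat, max_lag = 500, 5, 4
      X = rng.standard_normal((T, n_feat))
      y = 0.8 * np.roll(X[:, 0], 2) + 0.3 * np.roll(X[:, 3], 1) + 0.1 * rng.standard_normal(T)

      # Lagged design matrix: one column per (feature j, lag l) pair.
      cols = [np.roll(X[:, j], l) for j in range(n_feat) for l in range(1, max_lag + 1)]
      Z = np.column_stack(cols)[max_lag:]                 # drop rows contaminated by wrap-around
      model = Lasso(alpha=0.05).fit(Z, y[max_lag:])

      for idx in np.flatnonzero(model.coef_):
          j, rem = divmod(idx, max_lag)
          print(f"selected feature {j} at lag {rem + 1}: coef {model.coef_[idx]:+.2f}")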
  21. By: Jackie Jianhong Shen
    Abstract: Modern Algorithmic Trading ("Algo") allows institutional investors and traders to liquidate or establish big security positions in a fully automated or low-touch manner. Most existing academic or industrial Algos focus on how to "slice" a big parent order into smaller child orders over a given time horizon. Few models rigorously tackle the actual placement of these child orders. Instead, placement is mostly done with a combination of empirical signals and heuristic decision processes. A self-contained, realistic, and fully functional Child Order Placement (COP) model may never exist due to all the inherent complexities, e.g., fragmentation due to multiple venues, dynamics of limit order books, lit vs. dark liquidity, different trading sessions and rules. In this paper, we propose a reductionist COP model that focuses exclusively on the interplay between placing passive limit orders and sniping using aggressive takeout orders. The dynamic programming model assumes the form of a stochastic linear-quadratic regulator (LQR) and allows closed-form solutions under the backward Bellman equations. Explored in detail are model assumptions and general settings, the choice of state and control variables and the cost functions, and the derivation of the closed-form solutions.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.13797&r=all
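    Illustration: the closed-form backward recursion behind a discrete-time stochastic LQR is short; with additive noise, certainty equivalence leaves the optimal linear feedback unchanged. The dynamics and cost weights below are generic placeholders, not the order-placement model of the paper.
      # Backward Riccati recursion for x_{t+1} = A x_t + B u_t + noise,
      # cost sum_t (x_t' Q x_t + u_t' R u_t) + x_T' Qf x_T, policy u_t = -K_t x_t.
      import numpy as np

      A = np.array([[1.0, 1.0], [0.0, 1.0]])
      B = np.array([[0.0], [1.0]])
      Q = np.diag([1.0, 0.1])
      R = np.array([[0.5]])
      Qf, horizon = Q, 20

      P, gains = Qf, []
      for t in reversed(range(horizon)):
          K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain at time t
          P = Q + A.T @ P @ (A - B @ K)                       # Riccati update
          gains.append(K)
      gains.reverse()

      print("K_0 =", np.round(gains[0], 3))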
  22. By: Frederik Priem; Philip Stessens; Frank Canters
    Abstract: The historically rooted suburbanization of Flanders and the Brussels Capital Region (BCR) in Belgium has resulted in severe urban sprawl, traffic congestion, natural land degradation and many related problems. Recent policy proposals put forward by the two regions aim for more compact urban development in well-serviced areas. Yet, it is unclear how these proposed policies may impact residential dynamics over the coming decades. To address this issue, we developed a Residential Microsimulation (RM) framework that spatially refines coarse-scale demographic projections at the district level to the level of census tracts. The validation of simulated changes from 2001 to 2011 reveals that the proposed framework succeeds in modelling historic trends and clearly outperforms a random model. To support simulation from 2011 to 2040, two alternative urban development scenarios are defined. The Business As Usual (BAU) scenario essentially represents a continuation of urban sprawl development, whereas the Sustainable Development (SUS) scenario strives for higher-density development around strategic well-serviced nodes in line with proposed policies. This study demonstrates how residential microsimulation supported by scenario analysis can play a constructive role in urban policy design and evaluation.
    Keywords: Compact development; Discrete choice modelling; Flanders; Residential location choice; Scenario analysis; Urban sprawl
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:ulb:ulbeco:2013/305003&r=all
  23. By: Grogger, Jeffrey; Ivandic, Ria; Kirchmaier, Thomas
    Abstract: We compare predictions from a conventional protocol-based approach to risk assessment with those based on a machine-learning approach. We first show that the conventional predictions are less accurate than, and have similar rates of negative prediction error as, a simple Bayes classifier that makes use only of the base failure rate. A random forest based on the underlying risk assessment questionnaire does better under the assumption that negative prediction errors are more costly than positive prediction errors. A random forest based on two-year criminal histories does better still. Indeed, adding the protocol-based features to the criminal histories adds almost nothing to the predictive adequacy of the model. We suggest using the predictions based on criminal histories to prioritize incoming calls for service, and devising a more sensitive instrument to distinguish true from false positives that result from this initial screening.
    Keywords: domestic abuse; risk assessment; machine learning
    JEL: K42
    Date: 2020–02–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:104159&r=all
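    Illustration: a schematic version of the comparison, a Bayes classifier that uses only the base failure rate versus a random forest, both evaluated under asymmetric error costs. The synthetic features, base rate and cost ratio below stand in for the confidential questionnaire and criminal-history data used in the paper.
      # Base-rate Bayes classifier vs. random forest under asymmetric costs.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(9)
      n = 5_000
      X = rng.standard_normal((n, 10))                          # stand-ins for case features
      p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2.0)))    # true failure probability
      y = rng.random(n) < p                                     # roughly a 12% base failure rate

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      cost_fn, cost_fp = 5.0, 1.0                               # negative errors five times as costly

      # Bayes classifier using only the base rate: flag everyone iff the expected
      # cost of not flagging exceeds the expected cost of flagging.
      base_rate = ytr.mean()
      baseline_pred = np.full(len(yte), base_rate * cost_fn > (1 - base_rate) * cost_fp)

      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
      proba = rf.predict_proba(Xte)[:, 1]
      rf_pred = proba * cost_fn > (1 - proba) * cost_fp

      def cost(pred):
          return cost_fn * np.sum(yte & ~pred) + cost_fp * np.sum(~yte & pred)

      print(f"baseline cost: {cost(baseline_pred):.0f}   random forest cost: {cost(rf_pred):.0f}")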
  24. By: Takuya Obara (Faculty of Economics, Tohoku Gakuin University); Yoshitomo Ogawa (School of Economics, Kwansei Gakuin University)
    Abstract: This study examines the optimal tax structure in an endogenous fertility model with non-cooperative couples. In the model, both the quality and number of children are suboptimal because of the non-cooperative behavior of couples. Moreover, we consider the external effects of children on society and center-based childcare services. In such a unified model, we characterize the formulae for optimal income tax rates, child tax/subsidy rates, and tax/subsidy rates on center-based childcare services. We find that income taxation, but not a child subsidy, corrects the suboptimally low fertility rate caused by the non-cooperative behavior of couples. To alleviate the deadweight loss from income taxation, a child tax is useful. The child tax (subsidy) becomes optimal if the required tax revenue is larger (smaller) than the external effects. The subsidy for external childcare services corrects the external effect of children, not the non-cooperative behavior. These results are reinforced by the numerical analysis.
    Keywords: Non-Cooperative Couple, Endogenous Fertility, Optimal Income Tax, Optimal Child Tax/Subsidy
    JEL: H21 J13 J16
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:kgu:wpaper:211&r=all
  25. By: Makarov, Vladimir; Stouch, Terry; Allgood, Brandon; Willis, Christopher; Lynch, Nick
    Abstract: We describe 11 best practices for the successful use of Artificial Intelligence and Machine Learning in pharmaceutical and biotechnology research, at the data, technology, and organizational management levels.
    Date: 2020–04–20
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:eqm9j&r=all
  26. By: Dang, Chuangyin; Herings, P. Jean-Jacques (RS: GSBE Theme Data-Driven Decision-Making, RS: GSBE Theme Conflict & Cooperation, General Economics 1 (Micro)); Li, Peixuan
    Abstract: Subgame perfect equilibrium in stationary strategies (SSPE) is the most important solution concept used in applications of stochastic games, which makes it imperative to develop efficient numerical methods to compute an SSPE. For this purpose, this paper develops an interior-point path-following method (IPM), which remedies a number of issues with the existing method, the stochastic linear tracing procedure (SLTP). The homotopy system of IPM is derived from the optimality conditions of an artificial barrier game, whose objective function is a combination of the original payoff function and a logarithmic term. Unlike SLTP, the starting stationary strategy profile can be arbitrarily chosen, and IPM does not need switching between different systems of equations. The use of a perturbation term makes IPM applicable to all stochastic games, whereas SLTP only works for a generic stochastic game. A transformation of variables reduces the number of equations and variables by roughly one half. Numerical results show that our method is more than three times as efficient as SLTP.
    JEL: C62 C72 C73
    Date: 2020–02–17
    URL: http://d.repec.org/n?u=RePEc:unm:umagsb:2020001&r=all
  27. By: Agostinelli, Francesco (University of Pennsylvania); Doepke, Matthias (Northwestern University); Sorrenti, Giuseppe (University of Amsterdam); Zilibotti, Fabrizio (Yale University)
    Abstract: As children reach adolescence, peer interactions become increasingly central to their development, whereas the direct influence of parents wanes. Nevertheless, parents may continue to exert leverage by shaping their children's peer groups. We study interactions of parenting style and peer effects in a model where children's skill accumulation depends on both parental inputs and peers, and where parents can affect the peer group by restricting who their children can interact with. We estimate the model and show that it can capture empirical patterns regarding the interaction of peer characteristics, parental behavior, and skill accumulation among US high school students. We use the estimated model for policy simulations. We find that interventions (e.g., busing) that move children to a more favorable neighborhood have large effects but lose impact when they are scaled up because parents' equilibrium responses push against successful integration with the new peer group.
    Keywords: skill acquisition, peer effects, parenting, parenting style, neighborhood effects
    JEL: I24 J13 J24 R20
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13161&r=all
  28. By: Eric Kemp-Benedict (Stockholm Environment Institute (SE))
    Abstract: The economic impact of public health measures to contain the COVID-19 novel coronavirus is a matter of contentious debate. Given the high uncertainties, there is a need for combined epidemiological-macroeconomic scenarios. We present a model in this paper for developing such scenarios. The epidemiological sub-model is a discrete-time matrix implementation of an SEIR model. This approach avoids known problems with the more usual set of continuous-time differential equations. The post-Keynesian macroeconomic sub-model is a stylized representation of the United States economy with three sectors: core, social (most impacted by social distancing), and hospital, which may experience excessive demand. Simulations with the model show the clear superiority of a rigorous testing and contact tracing regime in which infected individuals, symptomatic or not, are isolated. Social distancing leads to an abrupt and deep recession. With expanded unemployment benefits, the drop is shallower. When testing and contact tracing is introduced, social spending can be scaled back and the economy recovers quickly. Ending social distancing without a testing and tracing regime leads to a high death toll and severe economic impacts. Results suggest that social distancing and fiscal stimulus have had their desired effects of reducing the health and economic impacts of the disease.
    Keywords: SARS-CoV-2; coronavirus; COVID-19; macroeconomy; post-Keynesian; SEIR model
    JEL: E00 E11 I18
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:pke:wpaper:pkwp2011&r=all
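    Illustration: the epidemiological sub-model described above is a discrete-time matrix version of SEIR; a minimal stand-alone update of that form, with made-up parameter values and without the macroeconomic block, looks like this.
      # Minimal discrete-time SEIR model in matrix form, one step per day.
      import numpy as np

      beta, sigma, gamma = 0.30, 1 / 5.2, 1 / 10     # transmission, incubation, recovery rates
      state = np.array([0.999, 0.0, 0.001, 0.0])     # population shares: S, E, I, R

      def step(x):
          I = x[2]                                   # current infectious share
          M = np.array([[1 - beta * I, 0.0,       0.0,       0.0],
                        [beta * I,     1 - sigma, 0.0,       0.0],
                        [0.0,          sigma,     1 - gamma, 0.0],
                        [0.0,          0.0,       gamma,     1.0]])
          return M @ x                               # columns sum to one, so population is conserved

      peak = 0.0
      for day in range(180):
          state = step(state)
          peak = max(peak, state[2])
      print(f"infectious share after 180 days: {state[2]:.4f}, peak share: {peak:.4f}")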
  29. By: Jean-Marie Dufour (Université McGill); Emmanuel Flachaire (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique); Lynda Khalaf (Carleton University)
    Abstract: Asymptotic and bootstrap tests for inequality measures are known to perform poorly in finite samples when the underlying distribution is heavy-tailed. We propose Monte Carlo permutation and bootstrap methods for the problem of testing the equality of inequality measures between two samples. Results cover the Generalized Entropy class, which includes Theil's index, the Atkinson class of indices, and the Gini index. We analyze finite-sample and asymptotic conditions for the validity of the proposed methods, and we introduce a convenient rescaling to improve finite-sample performance. Simulation results show that size-correct inference can be obtained with our proposed methods despite heavy tails, provided the underlying distributions are sufficiently close in the upper tails. Substantial reduction in size distortion is achieved more generally. Studentized rescaled Monte Carlo permutation tests outperform the competing methods we consider in terms of power.
    Keywords: Bootstrap,Income distribution,Inequality measures,Permutation test
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02172793&r=all
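    Illustration: the mechanics of a permutation test for the equality of an inequality measure across two samples can be sketched directly; the Gini index, lognormal samples and 999 permutations below are illustrative choices, and the rescaling and studentisation refinements studied in the paper are omitted.
      # Permutation test for equality of the Gini index between two samples.
      import numpy as np

      def gini(x):
          x = np.sort(x)
          n = len(x)
          return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

      rng = np.random.default_rng(10)
      a = rng.lognormal(0.0, 0.5, 300)      # sample A
      b = rng.lognormal(0.0, 0.8, 300)      # sample B (more unequal)
      observed = abs(gini(a) - gini(b))

      pooled, n_a = np.concatenate([a, b]), len(a)
      perm_stats = []
      for _ in range(999):
          rng.shuffle(pooled)               # permute group labels by shuffling the pooled sample
          perm_stats.append(abs(gini(pooled[:n_a]) - gini(pooled[n_a:])))

      p_value = (1 + np.sum(np.array(perm_stats) >= observed)) / (1 + len(perm_stats))
      print(f"observed |Gini difference| = {observed:.3f}, permutation p-value = {p_value:.3f}")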
  30. By: Adam Pigoń; Michał Ramsza
    Abstract: The authors assess the economic implications of existing fiscal rules in Poland, Switzerland and Germany. In the analysis they establish economic relationships between output, government revenues and expenditures by estimating a VAR model on US data for the years 1960-2015. Imposing the fiscal policies implied by a given rule on those relationships, they analyze the consequences for the simulated paths of debts, deficits and expenditures in terms of stability and cyclicality. They find that the Swiss and German rules are strict and stabilize deficits at low levels. However, this may still not be sufficient to stabilize debt in the long run in a strict sense. The Polish rule stabilizes the debt level at about 40-50% of GDP in the long run. All rules imply an anticyclical fiscal policy: the increase of the deficit-to-GDP ratio implied by changes in the output gap equals, at most, 2.2 pp, 3.3 pp and 3.9 pp over the whole business cycle for the Polish, Swiss and German rules, respectively. These results can be perceived as satisfactory for the Swiss and German rules.
    Keywords: fiscal policy, fiscal rules
    JEL: C32 E62 H62 H63
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:ibt:wpaper:wp102019&r=all
  31. By: Elliot Anenberg; Daniel R. Ringo
    Abstract: Housing demand stimulus produces a multiplier effect by freeing up owners attempting to sell their current home, allowing them to re-enter the market as buyers and triggering a chain of further transactions. Exploiting a shock to first-time home buyer demand caused by the 2015 surprise cut in Federal Housing Administration mortgage insurance premiums, we find that homeowners buy their next home sooner when the probability of their current home selling increases. This effect is especially pronounced in cold housing markets, in which homes take a long time to sell. We build and calibrate a model of the joint buyer-seller search decision that explains these findings as a result of homeowners avoiding the cost of owning two homes simultaneously. Simulations of the model demonstrate that stimulus to home buying generates a substantial multiplier effect, particularly in cold housing markets.
    Keywords: Housing search; Housing stimulus; Multiplier effects; Joint buyer-seller
    Date: 2019–12–16
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2019-84&r=all
  32. By: Thomas von Brasch; Ådne Cappelen; Håvard Hungnes; Terje Skjerpen (Statistics Norway)
    Abstract: We study the role of R&D spillovers when modelling total factor productivity (TFP) by industry. Using Norwegian industry-level data, we find that for many industries there are significant spillovers from both domestic sources and from technological change at the international frontier. International spillovers contributed 38 per cent of the total growth in TFP from 1982 to 2018, while domestic channels contributed 44 per cent. The remaining 18 per cent is due to interaction effects. We include these channels in a large-scale econometric model of the Norwegian economy to study how R&D policies can promote economic growth. We find that current R&D policies in the form of generous tax deductions have increased growth in productivity and income in the Norwegian economy. The simulation results lend some support to the view that there are fiscal policy instruments that may have very large multipliers, even in the case of a fully financed policy change.
    Keywords: R&D spillovers; total factor productivity; innovation policies
    JEL: C32 C51 D24 E17 O32
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:927&r=all
  33. By: Tian Guo; Nicolas Jamet; Valentin Betrix; Louis-Alexandre Piquet; Emmanuel Hauptmann
    Abstract: Incorporating environmental, social, and governance (ESG) considerations into systematic investments has drawn considerable attention recently. In this paper, we focus on ESG events in financial news flow and explore the predictive power of ESG-related financial news for stock volatility. In particular, we develop a pipeline of ESG news extraction, news representations, and Bayesian inference of deep learning models. Experimental evaluation on real data and different markets demonstrates superior predictive performance, as well as the relation of high volatility predictions to stocks with potentially high risk and low return. It also shows the promise of the proposed pipeline as a flexible prediction framework for various textual data and target variables.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.02527&r=all
  34. By: Soete, Luc (UNU-MERIT, Maastricht University); Verspagen, Bart (UNU-MERIT, Maastricht University); Ziesemer, Thomas (UNU-MERIT, Maastricht University)
    Abstract: Despite the fact that Research and Development (R&D) activities are carried out in most countries in public research institutes such as universities and public research organisations, there have been few studies that attempted to estimate the economic impact of such public investment in R&D. In this paper we analyse the relations between total factor productivity (TFP) and R&D as well as GDP for a set of 17 OECD countries using a vector-error-correction model (VECM). We find that for the period 1975-2014, investment in public R&D has had a clearly positive effect on TFP growth in the majority of countries analysed. In simulations allowing for a permanent positive shock on public R&D, we observe a strong dynamic complementarity between the public and private (domestic) stocks of R&D for a number of countries. In countries where this complementarity is strong, the TFP effect of extra public R&D investments is also strong. We also show that the share of foreign funding of R&D performed in the business sector combined with a high business R&D intensity, tends to be low in countries with high complementarity between private and public R&D. On the other hand, the share of basic R&D in business R&D combined with a higher public R&D intensity, tends to be higher in countries with strong complementarity.
    Keywords: R&D policy, public R&D investment, economic effects of R&D, vector-error-correction model
    JEL: O38 O30 H40
    Date: 2020–04–14
    URL: http://d.repec.org/n?u=RePEc:unm:unumer:2020014&r=all
  35. By: Poppius, Hampus (Department of Economics, Lund University)
    Abstract: When firms meet in multiple markets, they can leverage punishment ability in one market to sustain collusion in another. This is the first paper to test this theory for multiproduct retailers that sell consumer goods online. With data on the universe of consumer goods sold online in Sweden, I estimate that multimarket contact increases prices. To more closely investigate what drives the effect, I employ a machine-learning method to estimate effect heterogeneity. The main finding is that multimarket contact increases prices to a higher extent if there are fewer firms participating in the contact markets, which is one of the theoretical predictions. Previous studies focus on geographical markets, where firms provide a good or service in different locations. I instead define markets as different product markets, where each market is defined by the type of good. This is the first paper to study multimarket contact and collusion with this type of market definition. The effect is stronger than in previously studied settings.
    Keywords: Tacit collusion; pricing; e-commerce; causal machine learning
    JEL: D22 D43 L41 L81
    Date: 2020–04–08
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2020_005&r=all
  36. By: Breda, Thomas (Paris School of Economics); Grenet, Julien (Paris School of Economics); Monnet, Marion (Paris School of Economics); Van Effenterre, Clémentine (University of Toronto)
    Abstract: This paper, based on a large-scale field experiment, tests whether a one-hour exposure to external female role models with a background in science affects students' perceptions and choice of field of study. Using a random assignment of classroom interventions carried out by 56 female scientists among 20,000 high school students in the Paris Region, we provide the first evidence of the positive impact of external female role models on student enrollment in STEM fields. We show that the interventions increased the share of Grade 12 girls enrolling in selective (male-dominated) STEM programs in higher education, from 11 to 14.5 percent. These effects are driven by high-achieving girls in mathematics. We find limited effects on boys' educational choices in Grade 12, and no effect for students in Grade 10. Evidence from survey data shows that the program raised students' interest in science-related careers and slightly improved their math self-concept. It sharply reduced the prevalence of stereotypes associated with jobs in science and gender differences in abilities, but it made the underrepresentation of women in science more salient. Using machine learning methods, we leverage the diversity of role model profiles to document substantial heterogeneity in the effectiveness of role models and shed light on the channels through which they can influence female students' choice of study. Results suggest that emphasis on the gender theme is less important to the effectiveness of this type of intervention than the ability of role models to convey a positive and more inclusive image of STEM careers.
    Keywords: role models, gender gap, STEM, stereotypes, choice of studies
    JEL: C93 I24 J16
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp13163&r=all

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.