nep-cmp New Economics Papers
on Computational Economics
Issue of 2020‒06‒29
28 papers chosen by



  1. Prior knowledge distillation based on financial time series By Jie Fang; Jianwu Lin
  2. Designing new models and algorithms to improve order picking operations By Žulj, Ivan
  3. Deep learning Profit & Loss By Pietro Rossi; Flavio Cocco; Giacomo Bormetti
  4. Does an artificial intelligence perform market manipulation with its own discretion? -- A genetic algorithm learns in an artificial market simulation By Takanobu Mizuta
  5. Learning a functional control for high-frequency finance By Laura Leal; Mathieu Laurière; Charles-Albert Lehalle
  6. A Computational Approach to Hedging Credit Valuation Adjustment in a Jump-Diffusion Setting By T. van der Zwaard; L. A. Grzelak; C. W. Oosterlee
  7. Consistent Recalibration Models and Deep Calibration By Matteo Gambara; Josef Teichmann
  8. Shallow Neural Hawkes: Non-parametric kernel estimation for Hawkes processes By Sobin Joseph; Lekhapriya Dheeraj Kashyap; Shashi Jain
  9. Accuracy of Deep Learning in Calibrating HJM Forward Curves By Fred Espen Benth; Nils Detering; Silvia Lavagnini
  10. Deep Reinforcement Learning for Foreign Exchange Trading By Yun-Cheng Tsai; Chun-Chieh Wang
  11. Object Oriented (Dynamic) Programming: Replication, Innovation and "Structural" Estimation By Christopher Ferrall
  12. A Tweet-based Dataset for Company-Level Stock Return Prediction By Karolina Sowinska; Pranava Madhyastha
  13. Short-term forecasting of the Coronavirus Pandemic - 2020-04-27 By Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry
  14. Assessing variable importance in clustering: a new method based on unsupervised binary decision trees By Ghattas Badih; Michel Pierre; Boyer Laurent
  15. Adversarial Robustness of Deep Convolutional Candlestick Learner By Jun-Hao Chen; Samuel Yen-Chi Chen; Yun-Cheng Tsai; Chih-Shiang Shur
  16. The Importance of Low Latency to Order Book Imbalance Trading Strategies By David Byrd; Sruthi Palaparthi; Maria Hybinette; Tucker Hybinette Balch
  17. So close and so far. Finding similar tendencies in econometrics and machine learning papers. Topic models comparison. By Marcin Chlebus; Maciej Stefan Świtała
  18. The importance of being discrete: on the (in-)accuracy of continuous approximations in auction theory By Itzhak Rasooly; Carlos Gavidia-Calderon
  19. Pairs Trading with Nonlinear and Non-Gaussian State Space Models By Guang Zhang
  20. Assessing concerns for the economic consequence of the COVID-19 response and mental health problems associated with economic vulnerability and negative economic shock in Italy, Spain, and the United Kingdom By codagnone, cristiano; Bogliacino, Francesco; Gómez, Camilo Ernesto; Charris, Rafael Alberto; Montealegre, Felipe; Liva, Giovanni; Villanueva, Francisco Lupiañez; Folkvord, F.; Veltri, Giuseppe Alessandro Prof
  21. Machine Learning Fund Categorizations By Dhagash Mehta; Dhruv Desai; Jithin Pradeep
  22. Computations and Complexities of Tarski's Fixed Points and Supermodular Games By Chuangyin Dang; Qi Qi; Yinyu Ye
  23. Corruption in the times of pandemia By Gallego, J; Prem, M; Vargas, J. F
  24. Value Creation in Private Equity By Biesinger, Markus; Bircan, Cagatay; Ljungqvist, Alexander P.
  25. 25 Years of European Merger Control By Affeldt, Pauline; Duso, Tomaso; Szücs, Florian
  26. Blowing against the Wind? A Narrative Approach to Central Bank Foreign Exchange Intervention By Alain Naef
  27. Going Beyond Average – Using Machine Learning to Evaluate the Effectiveness of Environmental Subsidies at Micro-Level By Stetter, Christian; Menning, Philipp; Sauer, Johannes
  28. Productivity dispersion and persistence among the world's most numerous firms By Burke, Marshall; Emerick, Kyle; Maue, Casey

  1. By: Jie Fang; Jianwu Lin
    Abstract: One of the major characteristics of financial time series is that they contain a large amount of non-stationary noise, which is challenging for deep neural networks. Practitioners normally use various hand-crafted features to address this problem; however, the performance of these features depends on the choice of hyper-parameters. In this paper, we propose to represent these indicators with neural networks and to train a large network, constructed of these smaller networks as feature layers, to fine-tune the prior knowledge the indicators represent. During back-propagation, prior knowledge is transferred from human logic to machine logic via gradient descent. The prior knowledge acts as a deep belief of the neural network and teaches the network not to be affected by non-stationary noise. Moreover, co-distillation is applied to compress the structure into a much smaller size, reducing redundant features and the risk of overfitting. In addition, the gradient-descent decisions of the smaller networks are more robust and cautious than those of large networks. In numerical experiments, we find that our algorithm is faster and more accurate than traditional methods on real financial datasets. We also conduct experiments to verify and interpret the method.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.09247&r=all
  2. By: Žulj, Ivan
    Abstract: Order picking has been identified as a crucial factor for the competitiveness of a supply chain, because inadequate order picking performance causes customer dissatisfaction and high costs. This dissertation aims to design new models and algorithms to improve order picking operations and to support managerial decisions in facing current challenges in order picking. First, we study the standard order batching problem (OBP) to optimize the batching of customer orders with the objective of minimizing the total length of order picking tours. We present a mathematical model formulation of the problem and develop a hybrid solution approach combining an adaptive large neighborhood search with a tabu search method. In numerical studies, we conduct an extensive comparison of our method to all previously published OBP methods that used standard benchmark sets to investigate their performance. Our hybrid outperforms all comparison methods with respect to average solution quality and runtime. Compared to the state of the art, the hybrid shows the clearest advantages on the larger instances of the existing benchmark sets, which assume a larger number of customer orders and larger capacities of the picking device. Finally, our method is able to solve newly generated large-scale instances with up to 600 customer orders and six items per customer order with reasonable runtimes and convincing scaling behavior and robustness. Next, we address a problem based on a practical case, inspired by a warehouse of a German manufacturer of household products. In this warehouse, heavy items may not be placed on top of light items during picking, to prevent damage to the light items. Currently, the case company determines the sequence for retrieving the items from their storage locations by applying a simple S-shape strategy that neglects this precedence constraint. As a result, order pickers place the collected items next to each other in plastic boxes and sort the items according to the precedence constraint at the end of the order picking process. To avoid this sorting, we propose a picker routing strategy that incorporates the precedence constraint by picking heavy items before light items, and we develop an exact solution method to evaluate the strategy. We assess the performance of our strategy on a dataset provided to us by the manufacturer. We compare our strategy to the strategy used in the warehouse of the case company, and to an exact picker routing approach that does not consider the given precedence constraint. The results clearly demonstrate the convincing performance of our strategy, even when compared to the exact solution method that neglects the precedence constraint. Last, we investigate a new order picking problem in which human order pickers in the traditional picker-to-parts setup are supported by automated guided vehicles (AGVs). We introduce two mathematical model formulations of the problem, and we develop a heuristic to solve the NP-hard problem. In numerical studies, we assess the solution quality of the heuristic in comparison to optimal solutions. The results demonstrate the ability of the heuristic to find high-quality solutions within negligible computation time. We conduct several computational experiments to investigate the effect of different numbers of AGVs and of different traveling and walking speed ratios between AGVs and order pickers on the average total tardiness. The results of our experiments indicate that by adding (or removing) AGVs, or by increasing (or decreasing) the AGV speed to adapt to different workloads, a large number of customer orders can be completed by their respective due dates.
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:121209&r=all
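    A toy illustration of the order batching idea (not the dissertation's hybrid ALNS/tabu search): a greedy seed-and-fill heuristic under a capacity constraint, with a simple out-and-back routing estimate. All data and parameters below are invented for illustration.

```python
import random

def tour_length(batch):
    """Out-and-back routing estimate: the picker walks to the farthest
    storage location in the batch and returns to the depot."""
    return 2 * max(loc for order in batch for loc in order)

def greedy_batching(orders, capacity):
    """Seed-and-fill heuristic: seed each batch with the unassigned order
    farthest from the depot, then repeatedly add the order whose inclusion
    increases the tour estimate the least, respecting device capacity."""
    unassigned = sorted(orders, key=max, reverse=True)
    batches = []
    while unassigned:
        batch = [unassigned.pop(0)]                       # seed order
        load = len(batch[0])                              # items carried
        while True:
            fitting = [o for o in unassigned if load + len(o) <= capacity]
            if not fitting:
                break
            best = min(fitting, key=lambda o: tour_length(batch + [o]))
            batch.append(best)
            unassigned.remove(best)
            load += len(best)
        batches.append(batch)
    return batches

# Invented instance: 20 customer orders, each a list of storage positions.
random.seed(0)
orders = [[random.randint(1, 50) for _ in range(random.randint(1, 4))]
          for _ in range(20)]
batches = greedy_batching(orders, capacity=8)
total = sum(tour_length(b) for b in batches)
```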
  3. By: Pietro Rossi; Flavio Cocco; Giacomo Bormetti
    Abstract: Building the future profit and loss (P&L) distribution of a portfolio holding, among other assets, highly non-linear and path-dependent derivatives is a challenging task. We provide a simple machinery whereby more and more assets can be accounted for in a simple, semi-automatic fashion. We resort to a variation of the Least Squares Monte Carlo algorithm in which interpolation of the portfolio's continuation value is done with a feed-forward neural network. This approach has several appealing features. Neural networks are extremely flexible regressors. We do not need to worry about the fact that, for multi-asset payoffs, the exercise surface may be non-connected, nor do we have to search for smart regressors: the idea is to use, regardless of the complexity of the payoff, only the underlying processes. Neural networks with many outputs can interpolate every single asset in the portfolio generated by a single Monte Carlo simulation. This is an essential feature for building the P&L distribution of the whole portfolio when the dependence structure between the different assets is very strong, as in the case where one has contingent claims written on the same underlying.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.09955&r=all
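    The Least Squares Monte Carlo machinery the abstract builds on can be sketched in a few lines; here a cubic polynomial stands in for the paper's feed-forward network, and a Bermudan put on a single asset replaces the multi-asset portfolio. All parameters are illustrative.

```python
import numpy as np

# Bermudan put priced by Least Squares Monte Carlo; parameters invented.
rng = np.random.default_rng(42)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20_000
dt = T / n_steps

# Simulate geometric Brownian motion paths of the underlying.
z = rng.standard_normal((n_paths, n_steps))
increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)),
                           np.cumsum(increments, axis=1)]))

# Backward induction: regress the discounted continuation value on the
# state (a cubic polynomial stands in here for a neural network).
payoff = np.maximum(K - S[:, -1], 0.0)
for t in range(n_steps - 1, 0, -1):
    payoff *= np.exp(-r * dt)                 # discount one step back
    itm = K - S[:, t] > 0                     # regress in-the-money paths only
    coefs = np.polyfit(S[itm, t], payoff[itm], deg=3)
    continuation = np.polyval(coefs, S[itm, t])
    exercise = K - S[itm, t]
    payoff[itm] = np.where(exercise > continuation, exercise, payoff[itm])

price = np.exp(-r * dt) * payoff.mean()       # discount the final step
```

With these parameters the Bermudan put price should land near the American value of roughly 6, between the European lower bound and any loose upper bound.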
  4. By: Takanobu Mizuta
    Abstract: Who should be held responsible for an artificial intelligence that performs market manipulation has been widely discussed. In this study, I constructed an artificial intelligence that uses a genetic algorithm to learn in an artificial market simulation, and investigated whether the artificial intelligence discovers market manipulation through learning, even though its builder has no intention of market manipulation. As a result, the artificial intelligence discovered market manipulation as an optimal investment strategy. This result suggests the necessity of regulation, such as obligating builders of artificial intelligence to prevent it from performing market manipulation.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.10488&r=all
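    A minimal sketch of the general setup, a genetic algorithm learning a trading rule inside an artificial market; the toy mean-reverting market and two-gene threshold strategy below are invented for illustration and are not Mizuta's simulation model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_prices(n=500):
    """Toy artificial market: a mean-reverting mid-price around 100."""
    p = np.empty(n)
    p[0] = 100.0
    for t in range(1, n):
        p[t] = p[t - 1] + 0.2 * (100.0 - p[t - 1]) + rng.normal(0, 0.5)
    return p

def fitness(genes, prices):
    """Profit of a two-threshold rule: buy below `lo`, sell above `hi`."""
    lo, hi = min(genes), max(genes)
    pos, cash = 0.0, 0.0
    for p in prices:
        if p < lo and pos == 0.0:
            pos, cash = 1.0, cash - p
        elif p > hi and pos == 1.0:
            pos, cash = 0.0, cash + p
    return cash + pos * prices[-1]        # mark any open position to market

prices = simulate_prices()
pop = rng.uniform(95, 105, size=(30, 2))  # population of 30 random strategies
for generation in range(40):
    scores = np.array([fitness(g, prices) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]                       # selection
    parents = elite[rng.integers(0, 10, size=(30, 2))]          # pair elites
    pop = parents.mean(axis=1) + rng.normal(0, 0.3, (30, 2))    # crossover + mutation

best = pop[np.argmax([fitness(g, prices) for g in pop])]
```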
  5. By: Laura Leal; Mathieu Laurière; Charles-Albert Lehalle
    Abstract: We use a deep neural network to generate controllers for optimal trading on high-frequency data. For the first time, a neural network learns the mapping between the preferences of the trader, i.e. risk aversion parameters, and the optimal controls. An important challenge in learning this mapping is that in intraday trading, the trader's actions influence price dynamics in closed loop via the market impact. The exploration-exploitation trade-off generated by the efficient execution is addressed by tuning the trader's preferences to ensure long enough trajectories are produced during the learning phase. The issue of scarcity of financial data is solved by transfer learning: the neural network is first trained on trajectories generated by a Monte Carlo scheme, leading to a good initialization before training on historical trajectories. Moreover, to answer genuine requests from financial regulators on the explainability of machine-learning-generated controls, we project the obtained "blackbox controls" onto the space usually spanned by the closed-form solution of the stylized optimal trading problem, leading to a transparent structure. For more realistic loss functions that have no closed-form solution, we show that the average distance between the generated controls and their explainable version remains small. This opens the door to the acceptance of ML-generated controls by financial regulators.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.09611&r=all
  6. By: T. van der Zwaard; L. A. Grzelak; C. W. Oosterlee
    Abstract: This study contributes to understanding Valuation Adjustments (xVA) by focussing on the dynamic hedging of Credit Valuation Adjustment (CVA), corresponding Profit & Loss (P&L) and the P&L explain. This is done in a Monte Carlo simulation setting, based on a theoretical hedging framework discussed in existing literature. We look at CVA hedging for a portfolio with European options on a stock, first in a Black-Scholes setting, then in a Merton jump-diffusion setting. Furthermore, we analyze the trading business at a bank after including xVAs in pricing. We provide insights into the hedging of derivatives and their xVAs by analyzing and visualizing the cash-flows of a portfolio from a desk structure perspective. The case study shows that not charging CVA at trade inception results in a guaranteed loss. Furthermore, hedging CVA is crucial to end up with a stable trading strategy. In the Black-Scholes setting this can be done using the underlying stock, whereas in the Merton jump-diffusion setting we need to add extra options to the hedge portfolio to properly hedge the jump risk. In addition to the simulation, we derive analytical results that explain our observations from the numerical experiments. Understanding the hedging of CVA helps to deal with xVAs in a practical setting.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.10504&r=all
  7. By: Matteo Gambara; Josef Teichmann
    Abstract: Consistent Recalibration (CRC) models have been introduced to capture in necessary generality the dynamic features of term structures of derivatives' prices. Several approaches have been suggested to tackle this problem, but all of them, including CRC models, have suffered from numerical intractability, mainly due to the presence of complicated drift terms or consistency conditions. We overcome this problem with machine learning techniques, which allow us to store the crucial information of the drift term in neural-network-type functions. This yields, for the first time, dynamic term structure models that can be efficiently simulated.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.09455&r=all
  8. By: Sobin Joseph; Lekhapriya Dheeraj Kashyap; Shashi Jain
    Abstract: The multi-dimensional Hawkes process (MHP) is a class of self- and mutually exciting point processes with a wide range of applications, from the prediction of earthquakes to the modelling of order books in high-frequency trading. This paper makes two major contributions. First, we find an unbiased estimator of the log-likelihood of the Hawkes process, enabling efficient use of stochastic gradient descent for maximum likelihood estimation. Second, we propose a specific single-hidden-layer neural network for the non-parametric estimation of the underlying kernels of the MHP. We evaluate the proposed model on both synthetic and real datasets, and find that the method has comparable or better performance than existing estimation methods. The use of a shallow neural network ensures that we do not compromise the interpretability of the Hawkes model, while retaining the flexibility to estimate any non-standard Hawkes excitation kernel.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.02460&r=all
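    For readers unfamiliar with Hawkes processes, a univariate example with an exponential kernel can be simulated by Ogata's thinning algorithm and its exact log-likelihood evaluated with the standard recursion; the paper's neural non-parametric kernel estimator is not reproduced here, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, alpha, beta, T = 0.5, 0.8, 2.0, 200.0  # baseline, excitation, decay, horizon

def intensity(t, events):
    """Conditional intensity mu + sum of alpha * exp(-beta * (t - t_i))."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Ogata's thinning algorithm: propose candidate times from a dominating
# homogeneous Poisson process, accept with probability lambda(t)/lambda_bar.
events, t = [], 0.0
while t < T:
    # +alpha covers the jump of the intensity at an event exactly at t.
    lam_bar = intensity(t, np.array(events)) + alpha
    t += rng.exponential(1.0 / lam_bar)
    if t < T and rng.uniform() * lam_bar <= intensity(t, np.array(events)):
        events.append(t)
events = np.array(events)

def log_likelihood(events, mu, alpha, beta, T):
    """Exact log-likelihood for the exponential kernel, using the standard
    O(n) recursion A_i = exp(-beta*dt) * (A_{i-1} + alpha)."""
    ll, A = 0.0, 0.0
    for i, t in enumerate(events):
        if i > 0:
            A = np.exp(-beta * (t - events[i - 1])) * (A + alpha)
        ll += np.log(mu + A)
    compensator = mu * T + (alpha / beta) * (1 - np.exp(-beta * (T - events))).sum()
    return ll - compensator

ll = log_likelihood(events, mu, alpha, beta, T)
```

Since alpha/beta = 0.4 < 1 the process is stationary, with roughly mu*T/(1 - alpha/beta) ≈ 167 expected events on this horizon.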
  9. By: Fred Espen Benth; Nils Detering; Silvia Lavagnini
    Abstract: We price European-style options written on forward contracts in a commodity market, which we model with a state-dependent infinite-dimensional Heath-Jarrow-Morton (HJM) approach. We introduce a new class of volatility operators which map the square integrable noise into the Filipović space of forward curves, and we specify a deterministic parametrized version of it. For calibration purposes, we train a neural network to approximate the option price as a function of the model parameters. We then use it to calibrate the HJM parameters starting from (simulated) option market data. Finally we introduce a new loss function that takes into account bid and ask prices and offers a solution to calibration in illiquid markets. A key issue discovered is that the trained neural network might be non-injective, which could potentially lead to poor accuracy in calibrating the forward curve parameters, even when showing a high degree of accuracy in recovering the prices. This reveals that the original meaning of the parameters gets somehow lost in the approximation.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.01911&r=all
  10. By: Yun-Cheng Tsai; Chun-Chieh Wang
    Abstract: Reinforcement learning can interact with its environment and is suitable for applications in decision control systems. Therefore, we used reinforcement learning to build a foreign exchange trading system, avoiding the long-standing problem of unstable trends in deep learning predictions. In the system design, we optimized the Sure-Fire statistical arbitrage policy, set three different actions, encoded the continuous price over a period of time into a heat-map view of the Gramian Angular Field (GAF), and compared the Deep Q-Network (DQN) and Proximal Policy Optimization (PPO) algorithms. To test feasibility, we analyzed three currency pairs: EUR/USD, GBP/USD, and AUD/USD. We trained on data in units of four hours from 1 August 2018 to 30 November 2018 and tested model performance using data from 1 December 2018 to 31 December 2018. The test results of the various models indicated that favorable investment performance was achieved as long as the model was able to handle complex and random processes and the state was able to describe the environment, validating the feasibility of reinforcement learning for the development of trading strategies.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1908.08036&r=all
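    The GAF encoding mentioned in the abstract is straightforward to compute; below is a minimal Gramian Angular Summation Field on a toy price path. The min-max rescaling convention is one common choice, not necessarily the authors'.

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D series as a Gramian Angular Summation Field image:
    rescale to [-1, 1], map values to polar angles, take the cosine of
    pairwise angle sums."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))                 # polar-coordinate angle
    return np.cos(phi[:, None] + phi[None, :])         # GASF matrix

# Toy "EUR/USD"-like path, 64 observations.
prices = 1.10 + 0.001 * np.cumsum(np.sin(np.linspace(0, 8, 64)))
gaf = gramian_angular_field(prices)
```

The resulting 64x64 matrix is what gets fed to the convolutional layers as a heat-map image.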
  11. By: Christopher Ferrall
    Abstract: This paper discusses how to design, solve and estimate dynamic programming models using the open source package niqlow. Reasons are given for why such a package has not appeared earlier and why the object-oriented approach followed by niqlow seems essential. An example is followed that starts with basic coding then expands the model and applies different solution methods to finally estimate parameters from data. Using niqlow to organize the empirical DP literature may support new research better than traditional surveys. Replication of results in several published papers validate niqlow, but it also raises doubt that complex models solved with purpose-built code can ever be independently verified.
    Keywords: Dynamic Programming, Computational Methods, Replication Studies
    JEL: C63 C51 C54
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1432&r=all
  12. By: Karolina Sowinska; Pranava Madhyastha
    Abstract: Public opinion influences events, especially those related to stock market movements, where a subtle hint can influence the local outcome of the market. In this paper, we present a dataset that allows for company-level analysis of tweet-based impact on one-, two-, three-, and seven-day stock returns. Our dataset consists of 862,231 labelled instances from Twitter in English; we also release a cleaned subset of 85,176 labelled instances to the community. We additionally provide baselines using standard machine learning algorithms and a multi-view learning based approach that makes use of different types of features. Our dataset, scripts and models are publicly available at: https://github.com/ImperialNLP/stockreturnpred.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.09723&r=all
  13. By: Jennifer L. Castle (Dept of Economics, Institute for New Economic Thinking at the Oxford Martin School and Magdalen College, University of Oxford); Jurgen A. Doornik (Dept of Economics, Institute for New Economic Thinking at the Oxford Martin School and Climate Econometrics, Nuffield College, University of Oxford); David F. Hendry (Dept of Economics, Institute for New Economic Thinking at the Oxford Martin School and Climate Econometrics, Nuffield College, University of Oxford)
    Abstract: We have been publishing real-time forecasts of confirmed cases and deaths from COVID-19 online at www.doornik.com/COVID-19 since mid-March 2020. These forecasts are short-term statistical extrapolations of past and current data. They assume that the underlying trend is informative of short-term developments, without requiring further assumptions about how the SARS-CoV-2 virus is spreading, or whether preventative policies are effective. As such, they are complementary to forecasts from epidemiological models. The forecasts are based on extracting trends from windows of the data, applying machine learning, and then computing forecasts by applying some constraints to the resulting flexible trend. The methods have previously been applied to various other time series and have performed well. They are also effective in this setting, providing better forecasts than some epidemiological models.
    Keywords: Autometrics; Cardt; COVID-19; Epidemiology; Forecasting; Forecast averaging; Machine learning; Smoothing; Trend Indicator Saturation.
    Date: 2020–04–27
    URL: http://d.repec.org/n?u=RePEc:nuf:econwp:2006&r=all
  14. By: Ghattas Badih (I2M - Institut de Mathématiques de Marseille - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique); Michel Pierre (I2M - Institut de Mathématiques de Marseille - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique, CEReSS - Centre d'études et de recherche sur les services de santé et la qualité de vie - AMU - Aix Marseille Université); Boyer Laurent (CEReSS - Centre d'études et de recherche sur les services de santé et la qualité de vie - AMU - Aix Marseille Université)
    Abstract: We consider different approaches for assessing variable importance in clustering. We focus on clustering using binary decision trees (CUBT), which is a non-parametric top-down hierarchical clustering method designed for both continuous and nominal data. We suggest a measure of variable importance for this method similar to the one used in Breiman's classification and regression trees. This score is useful to rank the variables in a dataset, to determine which variables are the most important or to detect the irrelevant ones. We analyze both stability and efficiency of this score on different data simulation models in the presence of noise, and compare it to other classical variable importance measures. Our experiments show that variable importance based on CUBT is much more efficient than other approaches in a large variety of situations.
    Keywords: Variables ranking,Variable importance,Unsupervised learning,CUBT,Deviance
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-02007388&r=all
  15. By: Jun-Hao Chen; Samuel Yen-Chi Chen; Yun-Cheng Tsai; Chih-Shiang Shur
    Abstract: Deep learning (DL) has been applied extensively in a wide range of fields. However, it has been shown that DL models are susceptible to certain kinds of perturbations called adversarial attacks. To fully unlock the power of DL in critical fields such as financial trading, it is necessary to address such issues. In this paper, we present a method of constructing perturbed examples and use these examples to boost the robustness of the model. Our algorithm increases the stability of DL models for candlestick classification with respect to perturbations in the input data.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.03686&r=all
  16. By: David Byrd; Sruthi Palaparthi; Maria Hybinette; Tucker Hybinette Balch
    Abstract: There is a pervasive assumption that low latency access to an exchange is a key factor in the profitability of many high-frequency trading strategies. This belief is evidenced by the "arms race" undertaken by certain financial firms to co-locate with exchange servers. To the best of our knowledge, our study is the first to validate and quantify this assumption in a continuous double auction market with a single exchange similar to the New York Stock Exchange. It is not feasible to conduct this exploration with historical data in which trader identity and location are not reported. Accordingly, we investigate the relationship between latency of access to order book information and profitability of trading strategies exploiting that information with an agent-based interactive discrete event simulation in which thousands of agents pursue archetypal trading strategies. We introduce experimental traders pursuing a low-latency order book imbalance (OBI) strategy in a controlled manner across thousands of simulated trading days, and analyze OBI trader profit while varying distance (latency) from the exchange. Our experiments support that latency is inversely related to profit for the OBI traders, but more interestingly show that latency rank, rather than absolute magnitude, is the key factor in allocating returns among agents pursuing a similar strategy.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.08682&r=all
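    An archetypal order book imbalance signal of the kind the experimental traders pursue can be written in a few lines; the thresholds and the synthetic book below are illustrative, not the paper's configuration.

```python
import numpy as np

def order_book_imbalance(bid_sizes, ask_sizes, depth=5):
    """Volume imbalance over the top `depth` levels of the book, in
    [-1, 1]: positive values signal net buying pressure."""
    b = np.sum(bid_sizes[:depth])
    a = np.sum(ask_sizes[:depth])
    return (b - a) / (b + a)

def obi_signal(imbalance, enter=0.3):
    """Archetypal OBI rule: go long above +enter, short below -enter,
    stay flat otherwise."""
    if imbalance > enter:
        return 1
    if imbalance < -enter:
        return -1
    return 0

# Synthetic snapshot: resting volume at the top 10 levels on each side.
rng = np.random.default_rng(7)
bids = rng.integers(1, 100, size=10)
asks = rng.integers(1, 100, size=10)
imb = order_book_imbalance(bids, asks)
signal = obi_signal(imb)
```

In the paper's setting, the interesting question is not the rule itself but how fast each agent sees the book state that feeds it.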
  17. By: Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw); Maciej Stefan Świtała (Faculty of Economic Sciences, University of Warsaw)
    Abstract: The paper takes up the broad idea of topic modelling and its applications. The aim of the research was to identify mutual tendencies in econometric and machine learning abstracts. Different topic models were compared in terms of their performance and interpretability, the former measured with a newly introduced approach. Abstracts collected from esteemed journals were analysed with the LSA, LDA and CTM algorithms. The obtained results make it possible to find similar trends in both corpora. The probabilistic models, LDA and CTM, outperform the semantic alternative, LSA. It appears that econometrics and machine learning consider problems that are rather homogeneous at the conceptual level; however, they differ in the tools used and in their dominance over particular areas.
    Keywords: abstracts, comparison, interpretability, tendencies, topics
    JEL: A12 C18 C38 C52 C61
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2020-16&r=all
  18. By: Itzhak Rasooly; Carlos Gavidia-Calderon
    Abstract: While auction theory views bids and valuations as continuous variables, real-world auctions are necessarily discrete. In this paper, we use a combination of analytical and computational methods to investigate whether incorporating discreteness substantially changes the predictions of auction theory, focusing on the case of uniformly distributed valuations so that our results bear on the majority of auction experiments. In some cases, we find that introducing discreteness changes little. For example, the first-price auction with two bidders and an even number of values has a symmetric equilibrium that closely resembles its continuous counterpart and converges to its continuous counterpart as the discretisation goes to zero. In others, however, we uncover discontinuity results. For instance, introducing an arbitrarily small amount of discreteness into the all-pay auction makes its symmetric, pure-strategy equilibrium disappear; and appears (based on computational experiments) to rob the game of pure-strategy equilibria altogether. These results raise questions about the continuity approximations on which auction theory is based and prompt a re-evaluation of the experimental literature.
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.03016&r=all
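    The discrete equilibria discussed here can be explored computationally; below is a minimal best-response iteration for a two-bidder first-price auction with uniformly distributed discrete valuations, a simplified exercise in the spirit of the paper rather than the authors' code.

```python
import numpy as np

# Two bidders, valuations uniform on {0, ..., V}; bids restricted to the
# same grid. V is an illustrative choice.
V = 10
values = np.arange(V + 1)

def best_response(opponent_bids):
    """For each valuation v, the bid in {0, ..., v} maximizing expected
    utility against an opponent who bids opponent_bids[w] at valuation w
    (valuations uniform, ties split 50/50)."""
    br = np.zeros(V + 1, dtype=int)
    for v in values:
        utils = [(v - b) * (np.mean(opponent_bids < b)
                            + 0.5 * np.mean(opponent_bids == b))
                 for b in range(v + 1)]        # never bid above value
        br[v] = int(np.argmax(utils))
    return br

# Iterate best responses starting from truthful bidding.
bids = values.copy()
for _ in range(50):
    new_bids = best_response(bids)
    if np.array_equal(new_bids, bids):
        break                 # a pure-strategy equilibrium has been reached
    bids = new_bids
```

Note that best-response dynamics need not converge in general; when it stalls in a cycle, that itself hints at the non-existence of a pure-strategy equilibrium, as the paper finds for the discrete all-pay auction.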
  19. By: Guang Zhang
    Abstract: This paper studies pairs trading using a nonlinear and non-Gaussian state-space model framework. We model the spread between the prices of two assets as an unobservable state variable and assume that it follows a mean-reverting process. This new model has two distinctive features: (1) the innovations to the spread are non-Gaussian and heteroskedastic; (2) the mean reversion of the spread is nonlinear. We show how to use the filtered spread as the trading indicator to carry out statistical arbitrage. We also propose a new trading strategy and present a Monte Carlo based approach to selecting the optimal trading rule. As a first empirical application, we apply the new model and the new trading strategy to two examples: PEP vs KO and EWT vs EWH. The results show that the new approach can achieve a 21.86% annualized return for the PEP/KO pair and a 31.84% annualized return for the EWT/EWH pair. As a second empirical application, we consider all possible pairs among the five largest and five smallest US banks listed on the NYSE. For these pairs, we compare the performance of the proposed approach with that of the existing popular approaches, both in-sample and out-of-sample. Interestingly, we find that our approach significantly improves the return and the Sharpe ratio in almost all the cases considered.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.09794&r=all
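    The classic threshold rule underlying such strategies can be sketched on a simulated cointegrated pair; here the spread is observed directly and standardized, whereas the paper filters an unobservable, nonlinear, non-Gaussian spread. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy cointegrated pair: both prices share a random walk; their spread
# follows a stationary AR(1), i.e. a discretized mean-reverting process.
n = 1000
common = np.cumsum(rng.normal(0, 1, n))
spread = np.zeros(n)
for t in range(1, n):
    spread[t] = 0.9 * spread[t - 1] + rng.normal(0, 1)
p1, p2 = 100 + common + spread, 100 + common

# Threshold rule on the standardized spread: short the spread when rich,
# long when cheap, exit near zero.
z = (spread - spread.mean()) / spread.std()
position = np.zeros(n)                     # +1 = long p1 / short p2
for t in range(1, n):
    if z[t] > 2:
        position[t] = -1.0
    elif z[t] < -2:
        position[t] = 1.0
    elif abs(z[t]) < 0.5:
        position[t] = 0.0
    else:
        position[t] = position[t - 1]      # hold the current position

pnl = np.sum(position[:-1] * np.diff(spread))  # long spread gains as it rises
```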
  20. By: codagnone, cristiano; Bogliacino, Francesco (Universidad Nacional de Colombia); Gómez, Camilo Ernesto (Centro de Investigaciones para el Desarrollo); Charris, Rafael Alberto (Universidad Nacional de Colombia); Montealegre, Felipe (Universidad Nacional de Colombia); Liva, Giovanni; Villanueva, Francisco Lupiañez; Folkvord, F.; Veltri, Giuseppe Alessandro Prof (University of Trento)
    Abstract: Currently, many countries are under lockdown or extreme social distancing measures to control the spread of COVID-19. The potentially far-reaching side effects of these measures have not yet been fully understood. In this study we analyse the results of a multi-country survey conducted in Italy (N=3,504), Spain (N=3,524) and the United Kingdom (N=3,523), with two separate analyses. In the first analysis, we examine citizens’ concerns over the downplaying of the economic consequences of the lockdown during the COVID-19 pandemic. We control for social desirability bias through a list experiment included in the survey. In the second analysis, we examine the data from the same survey to estimate the consequences of the economic lockdown for mental health, by predicting the level of stress, anxiety and depression associated with being economically vulnerable and having been affected by a negative economic shock. To accomplish this, we use a prediction algorithm based on machine learning techniques. To quantify the size of the affected population, we compare its magnitude with the number of people affected by COVID-19, using measures of susceptibility, vulnerability and behavioural change collected in the same questionnaire. We find that concern for the economy and for “the way out” of the lockdown is diffuse, and there is evidence of minor underreporting. Additionally, we estimate that around 42.8% of the populations in the three countries are at high risk of stress, anxiety and depression, based on their level of economic vulnerability and their exposure to a negative economic shock. We therefore conclude that the lockdown and extreme social distancing in the three countries have had an enormous impact on individuals’ mental health, which should be taken into account in future decisions on regulations concerning the pandemic.
    Date: 2020–05–30
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:x9m36&r=all
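The list experiment this abstract uses to control for social desirability bias can be illustrated with a small simulation (illustrative numbers only, not the study's data or its exact design): a control group counts how many of J innocuous items apply to them, a treatment group counts the same J items plus the sensitive one, and the difference in mean counts estimates the sensitive item's prevalence without any respondent revealing their individual answer.

```python
# Hedged sketch of a list experiment. All parameters are hypothetical:
# 3 innocuous items held at rate 0.5, and a true 60% rate of the
# sensitive attitude (here, concern about the economy).
import random

random.seed(2)
TRUE_RATE = 0.6  # hypothetical prevalence of the sensitive item

def respondent(treated):
    # Count of innocuous items that apply (3 items, each with prob 0.5).
    count = sum(random.random() < 0.5 for _ in range(3))
    # Treatment group's list additionally contains the sensitive item.
    if treated and random.random() < TRUE_RATE:
        count += 1
    return count

control = [respondent(False) for _ in range(20000)]
treat = [respondent(True) for _ in range(20000)]

# Difference in mean counts recovers the sensitive item's prevalence.
estimate = sum(treat) / len(treat) - sum(control) / len(control)  # ~ 0.6
```

Because only aggregate counts are compared, respondents never have to admit the sensitive attitude directly, which is what makes the design robust to underreporting.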
  21. By: Dhagash Mehta; Dhruv Desai; Jithin Pradeep
    Abstract: Given the surge in popularity of mutual funds (including exchange-traded funds (ETFs)) as a diversified financial investment, a vast variety of mutual funds from various investment management firms and diversification strategies has become available in the market. Identifying similar mutual funds across such a wide landscape has become more important than ever, with applications ranging from sales and marketing to portfolio replication, portfolio diversification and tax-loss harvesting. The current best method is data-vendor-provided categorization, which usually relies on curation by human experts with the help of available data. In this work, we establish that an industry-wide, well-regarded categorization system is learnable using machine learning and largely reproducible, in turn constructing a truly data-driven categorization. We discuss the intellectual challenges in learning this man-made system, our results and their implications.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2006.00123&r=all
  22. By: Chuangyin Dang; Qi Qi; Yinyu Ye
    Abstract: We consider two models of computation for Tarski's order-preserving function f related to fixed points in a complete lattice: the oracle function model and the polynomial function model. In both models, we give the first polynomial-time algorithm for finding a Tarski fixed point. In addition, we provide a matching oracle bound for determining uniqueness in the oracle function model and prove that the problem is co-NP-hard in the polynomial function model. The existence of a pure Nash equilibrium in supermodular games is proved by Tarski's fixed point theorem. Exploring the difference between supermodular games and Tarski's fixed point, we also develop computational results for finding one pure Nash equilibrium and determining the uniqueness of the equilibrium in supermodular games.
    Date: 2020–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2005.09836&r=all
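The object the abstract studies can be made concrete with the classical Knaster-Tarski iteration (not the paper's polynomial-time algorithm, which is faster): on a finite lattice such as {0,...,n-1}^d, repeatedly applying a monotone map f starting from the bottom element reaches the least fixed point, though in the worst case this naive iteration needs on the order of d*(n-1) steps.

```python
# Illustrative sketch: naive bottom-up iteration to the least fixed
# point of a monotone (order-preserving) f on the lattice {0..n-1}^d.
# The paper's contribution is a genuinely polynomial-time algorithm;
# this brute-force version only shows why a fixed point must exist.

def least_fixed_point(f, d):
    """Iterate a monotone f from the bottom element (0,...,0)."""
    x = (0,) * d
    while True:
        y = f(x)
        if y == x:      # monotonicity guarantees the sequence ascends
            return x    # and stabilises at the least fixed point
        x = y

# Example: a coordinate-wise monotone map on {0,1,2,3}^2.
f = lambda x: tuple(min(xi + 1, 3) for xi in x)
least_fixed_point(f, 2)  # -> (3, 3)
```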
  23. By: Gallego, J; Prem, M; Vargas, J. F
    Abstract: The public health crisis caused by the COVID-19 pandemic, coupled with the subsequent economic emergency and social turmoil, has pushed governments to substantially and swiftly increase spending. Because of the pressing nature of the crisis, public procurement rules and procedures have been relaxed in many places in order to expedite transactions. However, this may also create opportunities for corruption. Using contract-level information on public spending from Colombia’s e-procurement platform, and a difference-in-differences identification strategy, we find that municipalities classified by a machine learning algorithm as traditionally more prone to corruption react to the pandemic-led spending surge by using a larger proportion of discretionary non-competitive contracts and increasing their average value. This is especially so in the case of contracts to procure crisis-related goods and services. Our evidence suggests that large negative shocks that require fast and massive spending may increase corruption, thus at least partially offsetting the mitigating effects of this fiscal instrument.
    Keywords: Corruption, COVID-19, Public procurement, Machine learning
    JEL: H57 H75 D73 I18
    Date: 2020–05–22
    URL: http://d.repec.org/n?u=RePEc:col:000092:018178&r=all
  24. By: Biesinger, Markus; Bircan, Cagatay; Ljungqvist, Alexander P.
    Abstract: We open up the black box of value creation in private equity with the help of confidential information on value creation plans and their execution. Plans are tailored to each portfolio company's needs and circumstances, have become more hands-on, and vary with deal type, ownership, growth strategy, and geographic focus. Successful execution is subject to resource constraints, economies of specialization, and diminishing returns, and varies systematically across funds. Successful execution is a key driver of investor returns, especially in growth, buyout, and secondary deals. Company operations and profitability improve in ways consistent with successful execution, even beyond PE funds' exit.
    Keywords: financial returns; Growth investing; Machine Learning; private equity; secondaries; value creation; venture capital
    JEL: G11 G24 G30 G32 L26
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:14676&r=all
  25. By: Affeldt, Pauline; Duso, Tomaso; Szücs, Florian
    Abstract: We study the evolution of EC merger decisions over the first 25 years of common European merger policy. Using a novel dataset at the level of the relevant antitrust markets, containing all merger cases scrutinized by the Commission over the 1990-2014 period, we evaluate how consistently arguments related to structural market parameters (dominance, concentration, barriers to entry, and foreclosure) were applied over time and across dimensions such as the geographic market definition and the complexity of the merger. Simple linear probability models, as usually applied in the literature, on average overestimate the effects of the structural indicators. Using non-parametric machine learning techniques, we find that dominance is positively correlated with competitive concerns, especially in concentrated markets and in complex mergers; yet its importance has decreased over time, significantly so following the 2004 merger policy reform. The Commission's competitive concerns are also correlated with concentration, the more so the higher the entry barriers and the risks of foreclosure, and these patterns do not change over time. The role of the structural indicators in explaining competitive concerns does not depend on the geographic market definition.
    Keywords: causal forests; Concentration; Dominance; Entry Barriers; EU Commission; foreclosure; Merger Policy
    JEL: K21 L40
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:14548&r=all
  26. By: Alain Naef (University of California, Berkeley)
    Abstract: Few studies on foreign exchange intervention convincingly address the causal effect of intervention on exchange rates. By using a narrative approach, I address a major issue in the literature: the endogeneity of intraday news, which influences the exchange rate alongside central bank operations. Some studies find that interventions work in up to 80% of cases. Yet, by accounting for intraday market-moving news, I find that in adverse conditions the Bank of England managed to influence the exchange rate in only 8% of cases. I use both machine learning and human assessment to confirm the validity of the narrative approach.
    Keywords: intervention, foreign exchange, natural language processing, central bank, Bank of England.
    JEL: F31 E5 N14 N24
    Date: 2020–06
    URL: http://d.repec.org/n?u=RePEc:hes:wpaper:0188&r=all
  27. By: Stetter, Christian; Menning, Philipp; Sauer, Johannes
    Abstract: Legislators in the EU have long been concerned with the environmental impact of farming activities. As a means to mitigate adverse ecological effects and foster desirable ecosystem services in agriculture, the EU introduced so-called agri-environment schemes (AES). This study proposes a machine learning method based on generalized random forests (GRF) for assessing the environmental effectiveness of such agri-environment payment schemes at the farm level. We exploit a set of more than 130 contextual predictors to assess the individual impact of participating in agri-environment schemes in the EU. Results from our empirical application for Southeast Germany suggest the existence of heterogeneous impacts of environmental subsidies on mineral fertiliser quantities, greenhouse gas emissions and crop diversity. Individual treatment effects largely differ from traditionally used average treatment effects, thus indicating the importance of considering the farming context in agricultural policy evaluation. Furthermore, we provide important insights into the optimal targeting of agri-environment schemes for maximising the environmental efficacy of existing policies.
    Keywords: Agricultural and Food Policy, Environmental Economics and Policy, Farm Management
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:ags:aesc20:303699&r=all
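The distinction this abstract draws between average and individual treatment effects can be sketched on synthetic data (the paper itself uses generalized random forests, e.g. the R `grf` package, to estimate such heterogeneity non-parametrically; the subgroup variable, effect sizes and sample below are all made up for illustration):

```python
# Hedged sketch: why an average treatment effect can mask heterogeneous
# effects. A hypothetical context variable (farm size) changes the true
# effect of AES participation on, say, fertiliser use.
import random

random.seed(0)
data = []
for _ in range(10000):
    small_farm = random.random() < 0.5     # hypothetical context variable
    treated = random.random() < 0.5        # AES participation
    effect = -2.0 if small_farm else -0.5  # true heterogeneous effect
    y = 10.0 + (effect if treated else 0.0) + random.gauss(0, 1)
    data.append((small_farm, treated, y))

def mean(xs):
    return sum(xs) / len(xs)

def ate(rows):
    """Naive difference in means between treated and control rows."""
    t = [y for _, tr, y in rows if tr]
    c = [y for _, tr, y in rows if not tr]
    return mean(t) - mean(c)

overall = ate(data)                         # ~ -1.25, the average effect
small = ate([r for r in data if r[0]])      # ~ -2.0 for small farms
large = ate([r for r in data if not r[0]])  # ~ -0.5 for large farms
```

GRF generalises this idea: instead of one hand-picked subgroup split, the forest searches over many covariates to localise the treatment-effect estimate around each observation.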
  28. By: Burke, Marshall; Emerick, Kyle; Maue, Casey
    Abstract: A vast firm productivity literature finds that otherwise similar firms differ widely in their productivity and that these differences persist through time, with important implications for the broader macroeconomy. These stylized facts derive largely from studies of manufacturing firms in wealthy countries, and thus have unknown relevance for the world's most common firm type, the smallholder farm. We use detailed micro data from over 12,000 smallholder farms and nearly 100,000 agricultural plots across four countries in Africa to study the size, source, and persistence of productivity dispersion among smallholder farmers. Applying standard regression-based approaches to measuring productivity residuals, we find much larger dispersion but less persistence than benchmark estimates from manufacturing. We then show, using a novel framework that combines physical output measurement, estimates from satellites, and machine learning, that about half of this discrepancy can be accounted for by measurement error in output. After correcting for measurement error, productivity differences across firms and over time in our smallholder agricultural setting closely match benchmark estimates for non-agricultural firms. These results question some common implications of observed dispersion, such as the importance of misallocation of factors of production.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:14553&r=all
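The mechanism behind the abstract's headline finding, that measurement error accounts for about half of the excess dispersion, follows from a standard variance decomposition: under classical measurement error in log output, Var(measured) = Var(true) + Var(noise). A tiny simulation makes this concrete (the variances below are arbitrary illustration values, not the paper's satellite-based estimates):

```python
# Hedged sketch of classical measurement error inflating measured
# productivity dispersion. With equal true and noise variances, half
# of the measured variance is error, echoing the abstract's finding.
import random

random.seed(1)
n = 100000
true = [random.gauss(0.0, 0.5) for _ in range(n)]   # true log-TFP residuals
noise = [random.gauss(0.0, 0.5) for _ in range(n)]  # survey mismeasurement
measured = [t + e for t, e in zip(true, noise)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var(true)      # ~ 0.25
var(measured)  # ~ 0.50: dispersion doubled by noise alone
```

This is why independently measured output (here, the paper's satellite estimates) lets the researcher back out how much of the observed dispersion is real.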

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.