nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒04‒19
twenty-six papers chosen by



  1. Who will pay for improved health standards in U.S. meat-processing plants? Simulation results from the USAGE model By Peter B. Dixon; Maureen T. Rimmer
  2. Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models By Przemysław Biecek; Marcin Chlebus; Janusz Gajda; Alicja Gosiewska; Anna Kozak; Dominik Ogonowski; Jakub Sztachelski; Piotr Wojewnik
  3. Event-Driven LSTM For Forex Price Prediction By Ling Qi; Matloob Khushi; Josiah Poon
  4. Financial Markets Prediction with Deep Learning By Jia Wang; Tong Sun; Benyuan Liu; Yu Cao; Degang Wang
  5. Single and multiple-group penalized factor analysis: a trust-region algorithm approach with integrated automatic multiple tuning parameter selection By Geminiani, Elena; Marra, Giampiero; Moustaki, Irini
  6. Optimal Market Making by Reinforcement Learning By Matias Selser; Javier Kreiner; Manuel Maurette
  7. A comparative study of Different Machine Learning Regressors For Stock Market Prediction By Nazish Ashfaq; Zubair Nawaz; Muhammad Ilyas
  8. The Risk of Algorithm Transparency: How Algorithm Complexity Drives the Effects on Use of Advice By Christiane B. Haubitz; Cedric A. Lehmann; Andreas Fügener; Ulrich W. Thonemann
  9. Autocalibration and Tweedie-dominance for insurance pricing with machine learning By Denuit, Michel; Charpentier, Arthur; Trufin, Julien
  10. Time Series (re)sampling using Generative Adversarial Networks By Christian M. Dahl; Emil N. Sørensen
  11. The Efficient Hedging Frontier with Deep Neural Networks By Zheng Gong; Carmine Ventre; John O'Hara
  12. The Value Added Tax Simulation Model: VATSIM-DF (II) By Cristina Cirillo; Lucia Imperioli; Marco Manzo
  13. Nonstationary Portfolios: Diversification in the Spectral Domain By Bruno Scalzo; Alvaro Arroyo; Ljubisa Stankovic; Danilo P. Mandic
  14. Measuring National Life Satisfaction with Music By Benetos, Emmanouil; Ragano, Alessandro; Sgroi, Daniel; Tuckwell, Anthony
  15. Recurrent Dictionary Learning for State-Space Models with an Application in Stock Forecasting By Shalini Sharma; Víctor Elvira; Emilie Chouzenoux; Angshul Majumdar
  16. Response versus gradient boosting trees, GLMs and neural networks under Tweedie loss and log-link By Hainaut, Donatien; Trufin, Julien; Denuit, Michel
  17. Black-box model risk in finance By Samuel N. Cohen; Derek Snow; Lukasz Szpruch
  18. Deep Hedging under Rough Volatility By Blanka Horvath; Josef Teichmann; Zan Zuric
  19. Assessing the Impact of COVID-19 on Trade: a Machine Learning Counterfactual Analysis By Marco Dueñas; Víctor Ortiz; Massimo Riccaboni; Francesco Serti
  20. Profitability Analysis in Stock Investment Using an LSTM-Based Deep Learning Model By Jaydip Sen; Abhishek Dutta; Sidra Mehtab
  21. Inflation Thresholds and Policy-Rule Inertia: Some Simulation Results By Cristina Fuentes-Albero; John M. Roberts
  22. Is the Covid equity bubble rational? A machine learning answer By Jean Jacques Ohana; Eric Benhamou; David Saltiel; Beatrice Guez
  23. Uncovering commercial activity in informal cities By Daniel Straulino; Juan C. Saldarriaga; Jairo A. Gómez; Juan C. Duque; Neave O'Clery
  24. "We're rolling". Our Uncertainty Perception Indicator (UPI) in Q4 2020: introducing RollingLDA, a new method for the measurement of evolving economic narratives By Müller, Henrik; Rieger, Jonas; Hornig, Nico
  25. Analysis of bank leverage via dynamical systems and deep neural networks By Fabrizio Lillo; Giulia Livieri; Stefano Marmi; Anton Solomko; Sandro Vaienti
  26. Local mortality estimates during the COVID-19 pandemic in Italy By Augusto Cerqua; Roberta Di Stefano; Marco Letta; Sara Miccoli

  1. By: Peter B. Dixon; Maureen T. Rimmer
    Abstract: It is possible that Covid will produce permanent changes in work practices that increase costs in U.S. meat-processing plants. These changes may be beneficial for the safety of meat-processing workers and the health of the community more generally. However, they will have economic costs. In this paper we use USAGE-Food, a detailed computable general equilibrium (CGE) model of the U.S., to work out how those costs would be distributed between farmers and consumers of meat products. We also calculate industry and macroeconomic effects. Despite modelling the farmers as owning fixed factors, principally their own labour, we find that the farmer share in extra processing costs is likely to be quite moderate. Throughout the paper, we support simulation results by back-of-the-envelope calculations, diagrams and sensitivity analysis. These devices identify the mechanisms in the model and key data points that are responsible for the main results. In this way, we avoid the black-box criticism that is sometimes levelled at CGE modelling.
    Keywords: split of meat-processing costs between farmers and consumers, computable general equilibrium simulations, back-of-the-envelope explanations, diagrammatic analysis
    JEL: D58 Q12 Q13 Q17 Q18
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:cop:wpaper:g-314&r=all
  2. By: Przemysław Biecek; Marcin Chlebus; Janusz Gajda; Alicja Gosiewska; Anna Kozak; Dominik Ogonowski; Jakub Sztachelski; Piotr Wojewnik
    Abstract: The rapid development of advanced modelling techniques gives an opportunity to develop increasingly accurate tools. However, as usual, everything comes at a price, and here the price is losing the interpretability of a model while gaining accuracy and precision. For managers who must control and effectively manage credit risk, and for regulators who must be convinced of model quality, that price is too high. In this paper, we show how to take credit scoring analytics to the next level: we present a comparison of various predictive models (logistic regression, logistic regression with weight-of-evidence transformations, and modern artificial intelligence algorithms) and show that advanced tree-based models give the best results in predicting client default. More importantly, we also show how to augment advanced models with techniques that allow them to be interpreted and make them more accessible to credit risk practitioners, resolving a crucial obstacle to the widespread deployment of more complex, 'black box' models such as random forests, gradient boosted trees and extreme gradient boosted trees. All of this is shown on a large dataset obtained from the Polish Credit Bureau, to which all the banks and most of the lending companies in the country report their credit files; in this paper, the data from lending companies were used. The paper then compares state-of-the-art best practices in credit risk modelling with advanced modern statistical tools boosted by the latest developments in the interpretability and explainability of artificial intelligence algorithms. We believe this is a valuable contribution to the presentation of different modelling tools and, more importantly, to showing which methods can be used to gain insight into and understanding of AI methods in a credit risk context.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.06735&r=all
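    Sketch: the weight-of-evidence (WoE) transformation mentioned above bins a predictor and replaces each bin with ln(%good/%bad). A minimal, hypothetical pandas sketch; the toy data, column names and quantile binning are illustrative assumptions, not the authors' pipeline:
      import numpy as np
      import pandas as pd

      def weight_of_evidence(df, feature, target, bins=5):
          """Bin a numeric feature and map each bin to WoE = ln(%good / %bad)."""
          binned = pd.qcut(df[feature], q=bins, duplicates="drop")
          grouped = df.groupby(binned)[target]
          bad = grouped.sum()                    # target == 1 marks default
          good = grouped.count() - bad
          # Small constants guard against empty bins.
          woe = np.log((good / good.sum() + 1e-6) / (bad / bad.sum() + 1e-6))
          return binned.map(woe)

      # Hypothetical usage on toy credit data:
      df = pd.DataFrame({"income": np.random.lognormal(10, 1, 1000),
                         "default": np.random.binomial(1, 0.1, 1000)})
      df["income_woe"] = weight_of_evidence(df, "income", "default")
    The WoE-encoded columns would then feed a plain logistic regression, the interpretable benchmark the paper compares against tree-based models.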
  3. By: Ling Qi; Matloob Khushi; Josiah Poon
    Abstract: The majority of studies in the field of AI-guided financial trading focus on purely applying machine learning algorithms to continuous historical price and technical analysis data. However, due to the non-stationary and highly volatile nature of the Forex market, most algorithms fail when put into real practice. We developed novel event-driven features which indicate a change in trend direction. We then build deep learning models to predict a retracement point, providing a perfect entry point to gain maximum profit. We use a simple recurrent neural network (RNN) as our baseline model and compare it with long short-term memory (LSTM), bidirectional long short-term memory (BiLSTM) and gated recurrent unit (GRU) models. Our experimental results show that the proposed event-driven feature selection together with the proposed models can form a robust prediction system which supports accurate trading strategies with minimal risk. Our best model on 15-minute interval data for the EUR/GBP currency pair achieved RME 0.006x10^(-3), RMSE 2.407x10^(-3), MAE 1.708x10^(-3) and MAPE 0.194%, outperforming previous studies.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.01499&r=all
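    Sketch: a minimal Keras version of the kind of recurrent model compared above (RNN vs. LSTM vs. BiLSTM vs. GRU). The window length, layer size and feature count are illustrative assumptions, not the authors' configuration:
      import numpy as np
      from tensorflow.keras import Sequential
      from tensorflow.keras.layers import LSTM, Dense

      window, n_features = 24, 8   # assumed: 24 fifteen-minute bars, 8 event-driven features
      model = Sequential([
          LSTM(64, input_shape=(window, n_features)),
          Dense(1)                 # predicted retracement level
      ])
      model.compile(optimizer="adam", loss="mse")

      # X: (samples, window, n_features), y: (samples,) -- toy data for illustration
      X, y = np.random.rand(512, window, n_features), np.random.rand(512)
      model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    Swapping the LSTM layer for SimpleRNN, GRU or Bidirectional(LSTM(64)) yields the other variants compared in the paper.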
  4. By: Jia Wang; Tong Sun; Benyuan Liu; Yu Cao; Degang Wang
    Abstract: Financial markets are difficult to predict due to their complex dynamics. Although there have been some recent studies that use machine learning techniques for financial market prediction, they do not offer satisfactory performance on financial returns. We propose a novel one-dimensional convolutional neural network (CNN) model to predict financial market movement. The customized one-dimensional convolutional layers scan financial trading data through time, while different types of data, such as prices and volume, share parameters (kernels) with each other. Our model automatically extracts features instead of using traditional technical indicators and thus can avoid biases caused by the selection of technical indicators and their pre-defined coefficients. We evaluate the performance of our prediction model with strict backtesting on historical trading data of six futures from January 2010 to October 2017. The experimental results show that our CNN model can effectively extract more generalized and informative features than traditional technical indicators, and achieves more robust and profitable financial performance than previous machine learning approaches.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.05413&r=all
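    Sketch: one plausible reading of the one-dimensional CNN described above, with prices, volume and other series stacked as input channels so a single set of kernels scans them jointly through time. A hypothetical Keras sketch; layer sizes and input shape are assumptions:
      from tensorflow.keras import Sequential
      from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D, Dense

      steps, channels = 60, 5   # assumed: 60 time steps; price, volume, etc. as channels
      model = Sequential([
          # Each kernel spans all channels, so the different data types
          # share parameters, in the spirit of the abstract.
          Conv1D(32, kernel_size=5, activation="relu", input_shape=(steps, channels)),
          Conv1D(32, kernel_size=5, activation="relu"),
          GlobalMaxPooling1D(),
          Dense(1, activation="sigmoid")   # probability of upward movement
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy")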
  5. By: Geminiani, Elena; Marra, Giampiero; Moustaki, Irini
    Abstract: Penalized factor analysis is an efficient technique that produces a factor loading matrix with many zero elements thanks to the introduction of sparsity-inducing penalties within the estimation process. However, sparse solutions and stable model selection procedures are only possible if the employed penalty is non-differentiable, which poses certain theoretical and computational challenges. This article proposes a general penalized likelihood-based estimation approach for single and multiple-group factor analysis models. The framework builds upon differentiable approximations of non-differentiable penalties, a theoretically founded definition of degrees of freedom, and an algorithm with integrated automatic multiple tuning parameter selection that exploits second-order analytical derivative information. The proposed approach is evaluated in two simulation studies and illustrated using a real data set. All the necessary routines are integrated into the R package penfa.
    Keywords: effective degrees of freedom; generalized information criterion; measurement invariance; penalized likelihood; simple structure
    JEL: C1
    Date: 2021–03–26
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:108873&r=all
  6. By: Matias Selser; Javier Kreiner; Manuel Maurette
    Abstract: We apply Reinforcement Learning algorithms to solve the classic quantitative finance Market Making problem, in which an agent provides liquidity to the market by placing buy and sell orders while maximizing a utility function. The optimal agent has to find a delicate balance between the price risk of her inventory and the profits obtained by capturing the bid-ask spread. We design an environment with a reward function that determines an order relation between policies equivalent to the original utility function. When comparing our agents with the optimal solution and a benchmark symmetric agent, we find that the Deep Q-Learning algorithm manages to recover the optimal agent.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.04036&r=all
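    Sketch: the core of a Deep Q-Learning agent of the kind the paper trains, with a network mapping a market-making state to Q-values over discrete quoting actions. The state and action definitions are illustrative assumptions, not the authors' environment:
      import numpy as np
      import tensorflow as tf

      state_dim = 3    # assumed: inventory, spread, time remaining
      n_actions = 5    # assumed: discrete bid/ask quote offsets
      q_net = tf.keras.Sequential([
          tf.keras.layers.Dense(32, activation="relu", input_shape=(state_dim,)),
          tf.keras.layers.Dense(n_actions)   # one Q-value per quoting action
      ])
      opt = tf.keras.optimizers.Adam(1e-3)
      gamma, eps = 0.99, 0.1

      def act(s):
          if np.random.rand() < eps:         # epsilon-greedy exploration
              return np.random.randint(n_actions)
          return int(np.argmax(q_net(s[None])[0]))

      def dqn_step(s, a, r, s_next):
          """One update: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
          target = r + gamma * np.max(q_net(s_next[None])[0])
          with tf.GradientTape() as tape:
              loss = (q_net(s[None])[0, a] - target) ** 2
          opt.apply_gradients(zip(tape.gradient(loss, q_net.trainable_variables),
                                  q_net.trainable_variables))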
  7. By: Nazish Ashfaq; Zubair Nawaz; Muhammad Ilyas
    Abstract: For the development of successful share trading strategies, forecasting the course of the stock market index is important. Effective prediction of closing stock prices could guarantee investors attractive benefits. Machine learning algorithms have the ability to process historical stock patterns and forecast closing prices with reasonable reliability. In this article, we intensively studied the NASDAQ stock market and chose a portfolio of ten companies belonging to different sectors. The objective is to compute the next day's opening price of a stock using historical data. To fulfil this task, nine different machine learning regressors are applied to these data and evaluated using MSE and R2 as performance metrics.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.07469&r=all
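    Sketch: a minimal scikit-learn comparison under MSE and R2, mirroring the evaluation described above. The toy data and the particular four regressors are assumptions; the paper compares nine:
      from sklearn.datasets import make_regression
      from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
      from sklearn.linear_model import LinearRegression, Ridge
      from sklearn.metrics import mean_squared_error, r2_score
      from sklearn.model_selection import train_test_split

      # Toy stand-in for the historical features behind next-day opening prices.
      X, y = make_regression(n_samples=1000, n_features=10, noise=5.0, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      models = {"OLS": LinearRegression(), "Ridge": Ridge(),
                "RandomForest": RandomForestRegressor(random_state=0),
                "GBM": GradientBoostingRegressor(random_state=0)}
      for name, m in models.items():
          pred = m.fit(X_tr, y_tr).predict(X_te)
          print(f"{name}: MSE={mean_squared_error(y_te, pred):.2f}, "
                f"R2={r2_score(y_te, pred):.3f}")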
  8. By: Christiane B. Haubitz (Department of Supply Chain Management and Management Science, University of Cologne, 50923 Cologne, Germany); Cedric A. Lehmann (Department of Supply Chain Management and Management Science, University of Cologne, 50923 Cologne, Germany); Andreas Fügener (Department of Supply Chain Management and Management Science, University of Cologne, 50923 Cologne, Germany); Ulrich W. Thonemann (Department of Supply Chain Management and Management Science, University of Cologne, 50923 Cologne, Germany)
    Abstract: Algorithmic decision support is omnipresent in many managerial tasks, but human judgment often makes the final call. A lack of algorithm transparency is often stated as a barrier to successful human-machine collaboration. In this paper, we analyze the effects of algorithm transparency on the use of advice from algorithms with different degrees of complexity. We conduct a preregistered laboratory experiment where participants receive identical advice from algorithms with different levels of transparency and complexity. The results of the experiment show that increasing the transparency of a simple algorithm reduces the use of advice, while increasing the transparency of a complex algorithm increases it. Our results also indicate that the individually perceived appropriateness of algorithmic complexity moderates the effects of transparency on the use of advice. While perceiving an algorithm as too simple severely harms the use of its advice, the perception of an algorithm being too complex has no significant effect on it. Our results suggest that managers do not have to be concerned about revealing complex algorithms to decision makers, even if the decision makers do not fully comprehend them. However, making simple algorithms transparent bears the risk of disappointing people’s expectations, which can reduce the use of algorithms' advice.
    Keywords: Algorithm Transparency; Decision Making; Decision Support; Use of Advice
    JEL: C91
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:ajk:ajkdps:078&r=all
  9. By: Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium); Charpentier, Arthur (UQAM); Trufin, Julien (ULB)
    Abstract: Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. Often in practice, there are nevertheless endless debates about the choice of the right loss function to be used to train the machine learning model, as well as about the appropriate metric to assess the performances of competing models. Also, the sum of fitted values can depart from the observed totals to a large extent and this often confuses actuarial analysts. The lack of balance inherent to training models by minimizing deviance outside the familiar GLM with canonical link setting has been empirically documented in Wüthrich (2019, 2020) who attributes it to the early stopping rule in gradient descent methods for model fitting. The present paper aims to further study this phenomenon when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Autocalibration is then proposed as a remedy. This new method to correct for bias adds an extra local GLM step to the analysis. Theoretically, it is shown that it implements the autocalibration concept in pure premium calculation and ensures that balance also holds on a local scale, not only at portfolio level as with existing bias-correction techniques. The convex order appears to be the natural tool to compare competing models, putting a new light on the diagnostic graphs and associated metrics proposed by Denuit et al. (2019).
    Keywords: Risk classification ; Tweedie distribution family ; Concentration curve ; Bregman loss ; Convex order
    Date: 2021–03–04
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021013&r=all
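    Sketch: a crude stand-in for the autocalibration idea, replacing each prediction by the empirical mean response among policies with similar predictions so that balance holds locally, not only at portfolio level. The paper's actual remedy is an extra local GLM step; the quantile-bin estimator and toy data below are illustrative simplifications:
      import numpy as np
      import pandas as pd

      def autocalibrate(scores, y, bins=20):
          """Replace each score by the mean observed response in its score bin."""
          b = pd.qcut(scores, q=bins, duplicates="drop")
          return pd.Series(y).groupby(b).transform("mean").to_numpy()

      rng = np.random.default_rng(0)
      scores = rng.gamma(2.0, 1.0, 10_000)       # model predictions (toy)
      y = rng.poisson(0.8 * scores + 0.2)        # true mean differs locally from the score
      calibrated = autocalibrate(scores, y)      # locally balanced premiums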
  10. By: Christian M. Dahl; Emil N. S{\o}rensen
    Abstract: We propose a novel bootstrap procedure for dependent data based on Generative Adversarial Networks (GANs). We show that the dynamics of common stationary time series processes can be learned by GANs and demonstrate that GANs trained on a single sample path can be used to generate additional samples from the process. We find that temporal convolutional neural networks provide a suitable design for the generator and discriminator, and that convincing samples can be generated on the basis of a vector of iid normal noise. We demonstrate the finite sample properties of GAN sampling and the suggested bootstrap using simulations in which we compare the performance to circular block bootstrapping in the case of resampling an AR(1) time series process. We find that resampling using the GAN can outperform circular block bootstrapping in terms of empirical coverage.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.00208&r=all
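    Sketch: a stripped-down GAN trained on overlapping windows of a single AR(1) path, generating new windows from iid normal noise as described above. Dense layers replace the paper's temporal convolutional networks to keep the sketch short; all sizes are assumptions:
      import numpy as np
      import tensorflow as tf

      rng = np.random.default_rng(0)
      path = np.zeros(5000)
      for t in range(1, 5000):                   # simulate an AR(1) sample path
          path[t] = 0.8 * path[t - 1] + rng.normal()
      win = 32
      real = np.lib.stride_tricks.sliding_window_view(path, win).astype("float32")

      G = tf.keras.Sequential([tf.keras.layers.Dense(64, activation="relu", input_shape=(win,)),
                               tf.keras.layers.Dense(win)])
      D = tf.keras.Sequential([tf.keras.layers.Dense(64, activation="relu", input_shape=(win,)),
                               tf.keras.layers.Dense(1)])
      bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
      g_opt, d_opt = tf.keras.optimizers.Adam(1e-4), tf.keras.optimizers.Adam(1e-4)

      @tf.function
      def train_step(batch):
          z = tf.random.normal((tf.shape(batch)[0], win))
          with tf.GradientTape() as gt, tf.GradientTape() as dt:
              fake = G(z)
              d_loss = (bce(tf.ones_like(D(batch)), D(batch)) +
                        bce(tf.zeros_like(D(fake)), D(fake)))
              g_loss = bce(tf.ones_like(D(fake)), D(fake))   # fool the discriminator
          d_opt.apply_gradients(zip(dt.gradient(d_loss, D.trainable_variables),
                                    D.trainable_variables))
          g_opt.apply_gradients(zip(gt.gradient(g_loss, G.trainable_variables),
                                    G.trainable_variables))

      for _ in range(200):                       # resampled windows come from G(noise)
          train_step(tf.constant(real[rng.integers(0, len(real), 128)]))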
  11. By: Zheng Gong; Carmine Ventre; John O'Hara
    Abstract: The trade-off between risks and returns gives rise to multi-criteria optimisation problems that are well understood in finance, efficient frontiers being the tool to navigate their set of optimal solutions. Motivated by the recent advances in the use of deep neural networks for hedging vanilla options when markets have frictions, we introduce the Efficient Hedging Frontier (EHF) by enriching the pipeline with a filtering step that allows costs and risks to be traded off. This way, a trader's risk preference is matched with an expected hedging cost on the frontier, and the corresponding hedging strategy can be computed with a deep neural network. We further develop our framework to improve the EHF and find better hedging strategies. By adding a random forest classifier to the pipeline to forecast market movements, we show how the frontier shifts towards lower costs and reduced risks, which indicates that the overall hedging performance has improved. In addition, by designing a new recurrent neural network, we also find strategies on the frontier where hedging costs are even lower.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.05280&r=all
  12. By: Cristina Cirillo (Ministry of Economy and Finance); Lucia Imperioli (Ministry of Economy and Finance); Marco Manzo (Ministry of Economy and Finance)
    Abstract: This paper describes VATSIM-DF (II), a non-behavioural microsimulation model of the Value Added Tax (VAT), recently developed to support policy makers in designing VAT-related policies in Italy. The most important goals of VATSIM-DF (II) are to estimate actual and expected VAT revenues, assess the VAT incidence on household disposable income, and simulate the distributional effects of changes in fiscal policies. Compared to existing models, VATSIM-DF (II) has the great advantage of using Tax Register and National Accounts data, which make our model ideal for microsimulation purposes and perfectly consistent with the most up-to-date macroeconomic data. To develop VATSIM-DF (II), we produce an original dataset by merging different data sources. Results for 2019, at current VAT legislation, show the VAT burden on Italian households and confirm the regressivity of VAT with respect to household income. Finally, we test the distributional effect of a revenue-neutral reform, with two VAT rates, which extends the reduced VAT rate to feminine and baby sanitary products.
    Keywords: redistributive effects, simulation, taxation, Value Added Tax
    JEL: H2 H22 H23
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:ahg:wpaper:wp2021-12&r=all
  13. By: Bruno Scalzo; Alvaro Arroyo; Ljubisa Stankovic; Danilo P. Mandic
    Abstract: Classical portfolio optimization methods typically determine an optimal capital allocation through the implicit, yet critical, assumption of statistical time-invariance. Such models are inadequate for real-world markets as they employ standard time-averaging based estimators which suffer significant information loss if the market observables are non-stationary. To this end, we reformulate the portfolio optimization problem in the spectral domain to cater for the nonstationarity inherent to asset price movements and, in this way, allow for optimal capital allocations to be time-varying. Unlike existing spectral portfolio techniques, the proposed framework employs augmented complex statistics in order to exploit the interactions between the real and imaginary parts of the complex spectral variables, which in turn allows for the modelling of both harmonics and cyclostationarity in the time domain. The advantages of the proposed framework over traditional methods are demonstrated through numerical simulations using real-world price data.
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.00477&r=all
  14. By: Benetos, Emmanouil (Queen Mary, University of London); Ragano, Alessandro (The Alan Turing Institute); Sgroi, Daniel (University of Warwick); Tuckwell, Anthony (University of Warwick)
    Abstract: National life satisfaction is an important way to measure societal well-being and since 2011 has been used to judge the effectiveness of government policy across the world. However, a paucity of historical data limits long-run comparisons. We construct a new measure based on the emotional content of music. We first train a machine learning model using 191 different audio features embedded within music and use this model to construct a long-run Music Valence Index derived from chart-topping songs. This index correlates strongly and significantly with survey-based life satisfaction and outperforms an equivalent text-based measure. Our results have implications for the role of music in society, and validate a new use of music as a long-run measure of public sentiment.
    Keywords: historical subjective wellbeing, life satisfaction, music, sound data, language, big data
    JEL: C8 N3 N4 O1 D6
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp14258&r=all
  15. By: Shalini Sharma (IIIT-Delhi - Indraprastha Institute of Information Technology, New Delhi); Víctor Elvira (School of Mathematics, University of Edinburgh); Emilie Chouzenoux (OPIS, Centre de vision numérique (CVN), CentraleSupélec, Université Paris-Saclay, Inria); Angshul Majumdar (IIIT-Delhi - Indraprastha Institute of Information Technology, New Delhi)
    Abstract: In this work, we introduce a new modeling and inferential tool for dynamical processing of time series. The approach, called recurrent dictionary learning (RDL), reads as a linear Gaussian Markovian state-space model involving two linear operators, the state evolution and observation matrices, which we assume to be unknown. These two unknown operators (which can be interpreted as dictionaries) and the sequence of hidden states are jointly learnt via an expectation-maximization algorithm. The RDL model offers several advantages, namely online processing, probabilistic inference, and the high model expressiveness typical of neural networks. RDL is particularly well suited for stock forecasting. Its performance is illustrated on two problems: next-day forecasting (a regression problem) and next-day trading (a classification problem), given past stock market observations. Experimental results show that our proposed method excels over state-of-the-art stock analysis models such as CNN-TA, MFNN, and LSTM.
    Keywords: Stock forecasting, recurrent dictionary learning, Kalman filter, expectation-maximization, dynamical modeling, uncertainty quantification
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03184841&r=all
  16. By: Hainaut, Donatien (Université catholique de Louvain, LIDAM/ISBA, Belgium); Trufin, Julien (Université Libre de Bruxelles); Denuit, Michel (Université catholique de Louvain, LIDAM/ISBA, Belgium)
    Abstract: Thanks to its outstanding performance, boosting has rapidly gained wide acceptance among actuaries. To speed up calculations, boosting is often applied to gradients of the loss function rather than to responses (hence the name gradient boosting). When the model is trained by minimizing Poisson deviance, this amounts to applying the least-squares principle to raw residuals. This exposes gradient boosting to the same problems that led to replacing least squares with Poisson GLMs for analyzing low counts (typically, the number of reported claims at policy level in personal lines). This paper shows that boosting can be conducted directly on the response under a Tweedie loss function and log-link by adapting the weights at each step. Numerical illustrations demonstrate improved performance compared to gradient boosting when trees, GLMs and neural networks are used as weak learners.
    Keywords: Risk classification ; Boosting ; Gradient Boosting ; Regression Trees ; GLM ; Neural Networks
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:aiz:louvad:2021012&r=all
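    Sketch: the gradient-boosting baseline the paper improves on, trained under Tweedie deviance and log-link, is available off the shelf in LightGBM. The toy claim-count data are assumptions; the paper's proposed response boosting with adapted weights is not a stock option in such libraries:
      import numpy as np
      import lightgbm as lgb

      rng = np.random.default_rng(0)
      X = rng.normal(size=(10_000, 5))
      mu = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1])   # true mean under log-link
      y = rng.poisson(mu)                          # low counts, as in claim data

      model = lgb.LGBMRegressor(objective="tweedie",
                                tweedie_variance_power=1.1,  # p in (1,2): compound Poisson-gamma
                                n_estimators=200, learning_rate=0.05)
      model.fit(X, y)
      pred = model.predict(X)                      # predictions on the mean scale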
  17. By: Samuel N. Cohen; Derek Snow; Lukasz Szpruch
    Abstract: Machine learning models are increasingly used in a wide variety of financial settings. The difficulty of understanding the inner workings of these systems, combined with their wide applicability, has the potential to lead to significant new risks for users; these risks need to be understood and quantified. In this sub-chapter, we focus on a well-studied application of machine learning techniques: the pricing and hedging of financial options. Our aim is to highlight the various sources of risk that the introduction of machine learning emphasises or de-emphasises, and the possible risk mitigation and management strategies that are available.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.04757&r=all
  18. By: Blanka Horvath; Josef Teichmann; Zan Zuric
    Abstract: We investigate the performance of the Deep Hedging framework under training paths beyond the (finite-dimensional) Markovian setup. In particular, we analyse the hedging performance of the original architecture under rough volatility models, with a view to existing theoretical results for those models. Furthermore, we suggest parsimonious but suitable network architectures capable of capturing the non-Markovianity of time series. We also analyse the hedging behaviour in these models in terms of P&L distributions and draw comparisons to jump diffusion models when the rebalancing frequency is realistically small.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.01962&r=all
  19. By: Marco Due\~nas; V\'ictor Ortiz; Massimo Riccaboni; Francesco Serti
    Abstract: By interpreting exporters' dynamics as a complex learning process, this paper constitutes the first attempt to investigate the effectiveness of different Machine Learning (ML) techniques in predicting firms' trade status. We focus on the probability of Colombian firms surviving in the export market under two different scenarios: a COVID-19 setting and a non-COVID-19 counterfactual situation. By comparing the resulting predictions, we estimate the individual treatment effect of the COVID-19 shock on firms' outcomes. Finally, we use recursive partitioning methods to identify subgroups with differential treatment effects. We find that, besides the temporal dimension, the main factors predicting treatment heterogeneity are interactions between firm size and industry.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.04570&r=all
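    Sketch: a stylized version of the counterfactual exercise: fit a survival classifier, predict under observed (COVID) and counterfactual (no-COVID) covariates, difference the two predictions to obtain individual treatment effects, and grow a shallow tree to surface subgroups. The toy data, features and assumed covariate shift are illustrative, not the authors' design:
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(0)
      X_pre, y_pre = rng.normal(size=(5000, 6)), rng.integers(0, 2, 5000)  # toy pre-COVID data
      clf = RandomForestClassifier(random_state=0).fit(X_pre, y_pre)

      X_covid = rng.normal(size=(1000, 6))     # observed 2020 firm covariates (toy)
      X_nocovid = X_covid.copy()
      X_nocovid[:, 0] += 0.5                   # assumed shift for the no-COVID scenario

      ite = (clf.predict_proba(X_covid)[:, 1] -
             clf.predict_proba(X_nocovid)[:, 1])                 # individual treatment effects
      tree = DecisionTreeRegressor(max_depth=3).fit(X_covid, ite)  # subgroup discovery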
  20. By: Jaydip Sen; Abhishek Dutta; Sidra Mehtab
    Abstract: Designing robust systems for the precise prediction of future stock prices has always been considered a very challenging research problem. Even more challenging is to build a system for constructing an optimum portfolio of stocks based on the forecasted future stock prices. We present a deep learning-based regression model built on a long short-term memory (LSTM) network that automatically scrapes the web and extracts historical stock prices based on a stock's ticker name for a specified pair of start and end dates, and forecasts the future stock prices. We deploy the model on 75 significant stocks chosen from 15 critical sectors of the Indian stock market. For each of the stocks, the model is evaluated for its forecast accuracy. Moreover, the predicted values of the stock prices are used as the basis for investment decisions, and the returns on the investments are computed. Extensive results are presented on the performance of the model. The analysis of the results demonstrates the efficacy and effectiveness of the system and enables us to compare the profitability of the sectors from the point of view of stock market investors.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.06259&r=all
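    Sketch: a minimal version of the fetch-then-forecast pipeline described above. The third-party yfinance package stands in for the paper's web scraper, and the ticker, look-back window and layer sizes are assumptions:
      import numpy as np
      import yfinance as yf
      from tensorflow.keras import Sequential
      from tensorflow.keras.layers import LSTM, Dense

      prices = (yf.download("RELIANCE.NS", start="2015-01-01", end="2020-12-31")
                ["Close"].dropna().to_numpy().ravel())

      win = 60                                  # assumed look-back window
      X = np.array([prices[i:i + win] for i in range(len(prices) - win)])[..., None]
      y = prices[win:]

      model = Sequential([LSTM(50, input_shape=(win, 1)), Dense(1)])
      model.compile(optimizer="adam", loss="mse")
      model.fit(X, y, epochs=10, batch_size=32, verbose=0)
      next_price = model.predict(prices[-win:][None, :, None])  # one-step-ahead forecast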
  21. By: Cristina Fuentes-Albero; John M. Roberts
    Abstract: In August 2020, the Federal Open Market Committee approved a revised Statement on Longer-Run Goals and Monetary Policy Strategy (FOMC, 2020) and in the subsequent FOMC meetings, the Committee made material changes to its forward guidance to bring it in line with the new framework. Clarida (2021) characterizes the new framework as comprising a number of key features.
    Date: 2021–04–12
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfn:2021-04-12&r=all
  22. By: Jean Jacques Ohana; Eric Benhamou (MILES - Machine Intelligence and Learning Systems, LAMSADE, Université Paris Dauphine-PSL, CNRS); David Saltiel; Beatrice Guez
    Abstract: Is the Covid equity bubble rational? In 2020, stock prices ballooned, with the S&P 500 gaining 16% and the tech-heavy Nasdaq soaring 43%, while fundamentals deteriorated with decreasing GDP forecasts, shrinking sales and revenue estimates, and higher government deficits. To answer this fundamental question with as little bias as possible, we explore a gradient boosting decision trees (GBDT) approach that enables us to crunch numerous variables and let the data speak. We define a crisis regime to distinguish specific downturns in stock markets from normal rising equity markets. We test our approach and report improved accuracy of GBDT over other ML methods. Thanks to Shapley values, we are able to identify the most important features, making this work innovative and a suitable answer to the question of whether current equity levels are justified.
    Date: 2021–04–05
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03189799&r=all
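    Sketch: the GBDT-plus-Shapley-values recipe described above, on toy data. The features and the crisis-regime labels are illustrative stand-ins for the paper's market variables:
      import numpy as np
      import shap
      from lightgbm import LGBMClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 10))                     # toy market/macro features
      y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=2000) > 1).astype(int)  # crisis regime

      model = LGBMClassifier(n_estimators=300, learning_rate=0.05).fit(X, y)

      explainer = shap.TreeExplainer(model)               # Shapley values rank the
      shap_values = explainer.shap_values(X)              # features driving the regime call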
  23. By: Daniel Straulino; Juan C. Saldarriaga; Jairo A. G\'omez; Juan C. Duque; Neave O'Clery
    Abstract: Knowledge of the spatial organisation of economic activity within a city is key to policy concerns. However, in developing cities with high levels of informality, this information is often unavailable. Recent progress in machine learning together with the availability of street imagery offers an affordable and easily automated solution. Here we propose an algorithm that can detect what we call 'visible firms' using street view imagery. Using Medellín, Colombia as a case study, we illustrate how this approach can be used to uncover previously unseen economic activity. Applying spatial analysis to our dataset we detect a polycentric structure with five distinct clusters located in both the established centre and peripheral areas. Comparing the density of visible and registered firms, we find that informal activity concentrates in poor but densely populated areas. Our findings highlight the large gap between what is captured in official data and the reality on the ground.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.04545&r=all
  24. By: Müller, Henrik; Rieger, Jonas; Hornig, Nico
    Abstract: In this paper, we present a new dynamic topic modeling method for building stable models and consistent time series. We call this new method RollingLDA. It has the potential to overcome several difficulties that researchers who use unsupervised probabilistic topic models have grappled with: namely, the problem of arbitrary selection, which is aggravated when models are to be updated with new sequences of data. RollingLDA is derived by combining the LDAPrototype approach (Rieger, Jentsch and Rahnenführer, 2020) with an implementation that uses preceding LDA results as an initialization for subsequent quarters, while allowing topics to change over time. Squaring dual-process theory, employed in behavioral economics (Kahneman, 2011), with the evolving theory of economic narratives (Shiller, 2017), we apply RollingLDA to the measurement of economic uncertainty. The new version of our Uncertainty Perception Indicator (UPI), based on a corpus of 2.8 million German newspaper articles published between 1 January 2001 and 31 December 2020, proves indeed capable of detecting an uncertainty narrative. The narrative, derived from a thorough quantitative-qualitative analysis of a key topic of our model, can be interpreted as a collective memory of past uncertainty shocks, their causes and the societal reactions to them. The uncertainty narrative can be seen as a collective intangible cultural asset (Haskel and Westlake, 2017), accumulated in the past, informing the present and potentially the future, as the story is updated and partly overwritten by new experiences. This concept opens up a fascinating new field for future research. We would like to encourage researchers to use our data and are happy to share it on request.
    Keywords: Uncertainty, Narratives, Latent Dirichlet Allocation, Business Cycles, Covid-19, Text Mining, Computational Methods, Behavioral Economics
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:docmaw:6&r=all
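    Sketch: the rolling idea can be approximated with gensim by fitting LDA on the first quarter and then updating the same model with each subsequent quarter, so every fit is initialized by the preceding one while topics may drift. This omits the LDAPrototype stabilization the authors combine it with; the function below is a hypothetical sketch:
      from gensim.corpora import Dictionary
      from gensim.models import LdaModel

      def rolling_lda(quarters, num_topics=40):
          """quarters: list of quarters, each a list of tokenized articles (assumed input)."""
          dictionary = Dictionary(doc for q in quarters for doc in q)  # shared vocabulary
          bow = lambda docs: [dictionary.doc2bow(d) for d in docs]
          lda = LdaModel(bow(quarters[0]), id2word=dictionary,
                         num_topics=num_topics, random_state=0)
          for docs in quarters[1:]:
              lda.update(bow(docs))      # previous state initializes this quarter's fit
          return lda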
  25. By: Fabrizio Lillo; Giulia Livieri; Stefano Marmi; Anton Solomko; Sandro Vaienti
    Abstract: We consider a model of a simple financial system consisting of a leveraged investor that invests in a risky asset and manages risk by using Value-at-Risk (VaR). The VaR is estimated from past data via an adaptive expectation scheme. We show that the leverage dynamics can be described by a dynamical system of slow-fast type associated with a unimodal map on [0,1] with an additive heteroscedastic noise whose variance is related to the frequency of portfolio rebalancing towards target leverage. In the absence of noise, the model is purely deterministic and the parameter space splits into two regions: (i) a region with a globally attracting fixed point or a 2-cycle; (ii) a dynamical core region, where the map can exhibit chaotic behavior. Whenever the model is randomly perturbed, we prove the existence of a unique stationary density with bounded variation, the stochastic stability of the process, and the almost certain existence and continuity of the Lyapunov exponent for the stationary measure. We then use deep neural networks to estimate the map parameters from a short time series. Using this method, we estimate the model on a large dataset of US commercial banks over the period 2001-2014. We find that the parameters of a substantial fraction of banks lie in the dynamical core, and that their leverage time series are consistent with chaotic behavior. We also present evidence that the leverage time series of large banks tend to exhibit chaoticity more frequently than those of small banks.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.04960&r=all
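    Sketch: the flavour of the dynamical-systems part can be reproduced by iterating a noisy unimodal map on [0,1] and estimating the Lyapunov exponent as the average log-derivative along the orbit; a positive exponent signals chaos. The logistic map and constant noise below stand in for the paper's leverage map and heteroscedastic noise (assumptions):
      import numpy as np

      def lyapunov(r=3.9, sigma=0.01, n=100_000, x0=0.3, seed=0):
          rng = np.random.default_rng(seed)
          x, acc = x0, 0.0
          for _ in range(n):
              acc += np.log(abs(r * (1 - 2 * x)) + 1e-12)  # |f'(x)| for f(x) = r x (1 - x)
              x = r * x * (1 - x) + sigma * rng.normal()   # additive noise (constant here)
              x = min(max(x, 1e-9), 1 - 1e-9)              # keep the state inside (0, 1)
          return acc / n

      print(lyapunov())   # positive: consistent with chaotic leverage dynamics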
  26. By: Augusto Cerqua (Sapienza Università di Roma); Roberta Di Stefano (Sapienza Università di Roma); Marco Letta (Sapienza Università di Roma); Sara Miccoli (Sapienza Università di Roma)
    Abstract: Estimates of the real death toll of the COVID-19 pandemic have proven to be problematic in many countries, Italy being no exception. Mortality estimates at the local level are even more uncertain as they require stringent conditions, such as granularity and accuracy of the data at hand, which are rarely met. The ‘official’ approach adopted by public institutions to estimate the ‘excess mortality’ during the pandemic draws on a comparison between observed all-cause mortality data for 2020 and averages of mortality figures in the past years for the same period. In this paper, we apply the recently developed machine learning control method to build a more realistic counterfactual scenario of mortality in the absence of COVID-19. We demonstrate that supervised machine learning techniques outperform the official method by substantially improving prediction accuracy of local mortality in ‘ordinary’ years, especially in small- and medium-sized municipalities. We then apply the best-performing algorithms to derive estimates of local excess mortality for the period between February and June 2020. Such estimates allow us to provide insights about the demographic evolution of the pandemic throughout the country. To help improve diagnostic and monitoring efforts, our dataset is freely available to the research community.
    Keywords: COVID-19, coronavirus, local mortality, Italy, machine learning, counterfactual building
    JEL: C21 C52 I10 J11
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:ahy:wpaper:wp6&r=all

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.