nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒03‒22
29 papers chosen by

  1. DeepSets and their derivative networks for solving symmetric PDEs By Maximilien Germain; Mathieu Laurière; Huyên Pham; Xavier Warin
  2. The impact of online machine-learning methods on long-term investment decisions and generator utilization in electricity markets By Alexander J. M. Kell; A. Stephen McGough; Matthew Forshaw
  3. A nonparametric algorithm for optimal stopping based on robust optimization By Bradley Sturt
  4. A Neural Network Ensemble Approach for GDP Forecasting By Luigi Longo; Massimo Riccaboni; Armando Rungi
  5. Predicting the Behavior of Dealers in Over-The-Counter Corporate Bond Markets By Yusen Lin; Jinming Xue; Louiqa Raschid
  6. A Survey of Forex and Stock Price Prediction Using Deep Learning By Zexin Hu; Yiqi Zhao; Matloob Khushi
  7. Deep Hedging, Generative Adversarial Networks, and Beyond By Hyunsu Kim
  8. Macroeconomic Policy Adjustments due to COVID-19: Scenarios to 2025 with a Focus on Asia By Fernando, Roshen; McKibbin, Warwick J.
  9. Forecasting commodity prices using long-short-term memory neural networks By Ly, Racine; Traore, Fousseini; Dia, Khadim
  10. New Social Accounting Matrix for Jordan: A 2015 Nexus project Social Accounting Matrix By Raouf, Mariam; Randriamamonjy, Josée; Elsabbagh, Dalia; Wiebelt, Manfred
  11. People Meet People - A Microlevel Approach to Predicting the Effect of Policies on the Spread of COVID-19 By Janos Gabler; Tobias Raabe; Klara Röhrl
  12. Autocalibration and Tweedie-dominance for Insurance Pricing with Machine Learning By Michel Denuit; Arthur Charpentier; Julien Trufin
  13. WP 01-20 - The PLANET Model: Methodological Report PLANET 4.0 By Coraline Daubresse; Benoît Laine
  14. Efficient Solution and Computation of Models With Occasionally Binding Constraints By Gregor Boehl
  15. Modelling Artificial Intelligence in Economics By Gries, Thomas; Naudé, Wim
  16. Phase Transitions in Kyle's Model with Market Maker Profit Incentives By Charles-Albert Lehalle; Eyal Neuman; Segev Shlomov
  17. CEO Stress, Aging, and Death By Mark Borgschulte; Marius Guenzel; Canyao Liu; Ulrike Malmendier
  18. DoubleML -- An Object-Oriented Implementation of Double Machine Learning in R By Philipp Bach; Victor Chernozhukov; Malte S. Kurz; Martin Spindler
  19. Feature Learning for Stock Price Prediction Shows a Significant Role of Analyst Rating By Jaideep Singh; Matloob Khushi
  20. SCARE: when Economics meets Epidemiology with COVID-19, first wave By André de Palma; Nathalie Picard; Stef Proost
  21. Introducing individual savings accounts for severance pay in Spain: An ex-ante assessment of the distributional effects By Alexander Hijzen; Andrea Salvatori
  22. Statistical Arbitrage Risk Premium by Machine Learning By Raymond C. W. Leung; Yu-Man Tam
  23. Dynamic Econometrics in Action: A Biography of David F. Hendry By Neil R. Ericsson
  24. Coordinating Human and Machine Learning for Effective Organizational Learning By Sturm, Timo; Gerlach, Jin; Pumplun, Luisa; Mesbah, Neda; Peters, Felix; Tauchert, Christoph; Nan, Ning; Buxmann, Peter
  25. The Community Explorer: Informing Policy with County-Level Data By Lopez, Claude; Butler, Brittney
  26. Prediction of financial time series using LSTM and data denoising methods By Qi Tang; Tongmei Fan; Ruchen Shi; Jingyan Huang; Yidan Ma
  27. Optimal Targeting in Fundraising: A Machine Learning Approach By Tobias Cagala; Ulrich Glogowsky; Johannes Rincke; Anthony Strittmatter
  28. The Gender Pay Gap Revisited with Big Data: Do Methodological Choices Matter? By Anthony Strittmatter; Conny Wunsch
  29. The Impact of the Agency Model on E-book Prices: Evidence from the UK By Maximilian Maurice Gail; Phil-Adrian Klotz

  1. By: Maximilien Germain (EDF; LPSM (UMR 8001) - Laboratoire de Probabilités, Statistiques et Modélisations, Sorbonne Université, CNRS, Université de Paris); Mathieu Laurière (ORFE - Department of Operations Research and Financial Engineering, Princeton University, School of Engineering and Applied Science); Huyên Pham (LPSM (UMR 8001), Université Paris Diderot - Paris 7, Sorbonne Université, CNRS; FiME Lab - Laboratoire de Finance des Marchés d'Energie, Université Paris Dauphine-PSL, CREST, EDF R&D); Xavier Warin (EDF; FiME Lab - Laboratoire de Finance des Marchés d'Energie, Université Paris Dauphine-PSL, CREST, EDF R&D)
    Abstract: Machine learning methods for solving nonlinear partial differential equations (PDEs) are a highly active topic, and several algorithms proposed in the literature achieve efficient numerical approximation in high dimension. In this paper, we introduce a class of PDEs that are invariant to permutations, called symmetric PDEs. Such problems are widespread, ranging from cosmology to quantum mechanics, and include option pricing/hedging in multi-asset markets with exchangeable payoffs. Our main application actually comes from the particle approximation of mean-field control problems. We design deep learning algorithms based on certain types of neural networks, named PointNet and DeepSet (and their associated derivative networks), for simultaneously computing an approximation of the solution to symmetric PDEs and of its gradient. We illustrate the performance and accuracy of the PointNet/DeepSet networks compared to classical feedforward ones, and provide several numerical results of our algorithm for the examples of a mean-field systemic risk problem, a mean-variance problem, and a min/max linear-quadratic McKean-Vlasov control problem.
    Keywords: Permutation-invariant PDEs, symmetric neural networks, exchangeability, deep backward scheme, mean-field control
    Date: 2021–02–27
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03154116&r=all
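    Example: the core of a DeepSet is an architecture of the form rho(sum_i phi(x_i)), which is permutation-invariant by construction because sum pooling is a symmetric function. A minimal sketch with random illustrative weights (not the authors' networks or training scheme):

      import numpy as np

      rng = np.random.default_rng(0)

      # Random illustrative weights: phi encodes each particle, rho maps the
      # pooled representation to the output.
      W_phi = rng.normal(size=(3, 16)) / 3**0.5   # particle state in R^3 -> R^16
      W_rho = rng.normal(size=(16, 1)) / 16**0.5  # pooled feature -> scalar

      def deepset(x):
          # x: (n_particles, 3); output is invariant to permutations of the rows.
          h = np.tanh(x @ W_phi)                  # phi, applied particle by particle
          return np.tanh(h.sum(axis=0)) @ W_rho   # rho on the pooled features

      x = rng.normal(size=(5, 3))
      perm = rng.permutation(5)
      assert np.allclose(deepset(x), deepset(x[perm]))  # permutation invariance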
  2. By: Alexander J. M. Kell; A. Stephen McGough; Matthew Forshaw
    Abstract: Electricity supply must be matched with demand at all times, which reduces the likelihood of load-frequency-control problems and electricity blackouts. To gain a better understanding of the load that is likely to be required over the next 24 hours, estimations under uncertainty are needed. This is especially difficult in a decentralized electricity market with many micro-producers that are not under central control. In this paper, we investigate the impact of eleven offline learning and five online learning algorithms for predicting the electricity demand profile over the next 24 hours. We achieve this through integration with the long-term agent-based model ElecSim. By predicting the demand profile, we can simulate the predictions made for a day-ahead market. Once we have made these predictions, we sample from the residual distributions and perturb the electricity market demand in the simulation. This enables us to understand the impact of prediction errors on the long-term dynamics of a decentralized electricity market. We show that an online algorithm can reduce the mean absolute error by 30% compared to the best offline algorithm, while reducing the tendered national grid reserve required; this reduction in national grid reserves leads to savings in costs and emissions. We also show that large prediction errors have a disproportionate effect on the investments made over a 17-year time frame, as well as on the electricity mix.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.04327&r=all
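    Example: the gist of the online-learning comparison is that a model updated as each observation arrives can track drift that a fixed offline fit misses. A minimal sketch on synthetic hourly demand (the data, features, and learning rate are illustrative; ElecSim itself is not used here):

      import numpy as np
      from sklearn.linear_model import SGDRegressor

      rng = np.random.default_rng(1)
      t = np.arange(24 * 365)  # one year of hourly observations
      # Daily seasonality plus a slow drift that the online learner must track.
      demand = 30 + 10 * np.sin(2 * np.pi * t / 24) + 0.01 * t + rng.normal(0, 1, t.size)
      X = np.column_stack([np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24)])

      online = SGDRegressor(learning_rate="constant", eta0=0.01)
      abs_err = []
      for i in range(t.size):
          if i > 0:  # predict the next observation, then learn from it
              abs_err.append(abs(online.predict(X[i:i + 1])[0] - demand[i]))
          online.partial_fit(X[i:i + 1], demand[i:i + 1])
      print("online MAE:", np.mean(abs_err))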
  3. By: Bradley Sturt
    Abstract: Optimal stopping is a class of stochastic dynamic optimization problems with applications in finance and operations management. In this paper, we introduce a new method for solving stochastic optimal stopping problems with known probability distributions. First, we use simulation to construct a robust optimization problem that approximates the stochastic optimal stopping problem to arbitrary accuracy. Second, we characterize the structure of optimal policies for the robust optimization problem, which turn out to be simple and finite-dimensional. Harnessing this characterization, we develop exact and approximation algorithms for solving the robust optimization problem, which in turn yield policies for the stochastic optimal stopping problem. Numerical experiments show that this combination of robust optimization and simulation can find policies that match, and in some cases significantly outperform, those from state-of-the-art algorithms on low-dimensional, non-Markovian optimal stopping problems from options pricing.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.03300&r=all
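    Example: a stripped-down illustration of simulation-based optimal stopping: simulate price paths, then evaluate and optimize a simple policy class (here a one-parameter exercise barrier for a Bermudan put). This is only a toy stand-in, not the paper's robust-optimization method, which constructs and solves a robust counterpart instead:

      import numpy as np

      rng = np.random.default_rng(2)
      S0, K, r, sigma, T, steps, n = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 5000
      dt = T / steps
      z = rng.normal(size=(n, steps))
      S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dt**0.5 * z, axis=1))

      def policy_value(b):
          # Exercise the put the first time the price falls below barrier b;
          # otherwise exercise (if worthwhile) at maturity.
          value = 0.0
          for path in S:
              hit = np.nonzero(path < b)[0]
              k = hit[0] + 1 if hit.size else steps
              value += np.exp(-r * k * dt) * max(K - path[k - 1], 0.0)
          return value / n

      best = max((policy_value(b), b) for b in np.linspace(70, 100, 13))
      print("estimated value %.3f at barrier %.1f" % best)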
  4. By: Luigi Longo (IMT School for advanced studies); Massimo Riccaboni (IMT School for advanced studies); Armando Rungi (IMT School for advanced studies)
    Abstract: We propose an ensemble learning methodology to forecast the future US GDP growth release. Our approach combines a Recurrent Neural Network (RNN) with a Dynamic Factor model accounting for time-variation in the mean with a Generalized Autoregressive Score (DFM-GAS). The analysis is based on a set of predictors encompassing a wide range of variables measured at different frequencies. The forecast exercise is aimed at evaluating the predictive ability of each component of the ensemble by considering variations in the mean, potentially caused by recessions affecting the economy. Thus, we show how the combination of the RNN and the DFM-GAS improves forecasts of the US GDP growth rate in the aftermath of the 2008-09 global financial crisis. We find that a neural network ensemble markedly reduces the root mean squared error at the short-term forecast horizon.
    Keywords: macroeconomic forecasting; machine learning; neural networks; dynamic factor model; Covid-19 crisis
    JEL: C53 E37
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:ial:wpaper:2/2021&r=all
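    Example: a generic sketch of the ensemble step, combining two forecast series with a convex weight chosen to minimize in-sample RMSE. The forecasts here are synthetic stand-ins for the RNN and DFM-GAS components; the paper's own combination scheme may differ:

      import numpy as np

      rng = np.random.default_rng(3)
      y = rng.normal(2.0, 1.0, 80)            # stand-in for GDP growth releases
      f_rnn = y + rng.normal(0, 0.8, 80)      # stand-in RNN forecasts
      f_dfm = y + rng.normal(0.3, 0.6, 80)    # stand-in DFM-GAS forecasts (biased)

      def rmse(f):
          return np.sqrt(np.mean((y - f) ** 2))

      # Pick the convex combination weight that minimizes in-sample RMSE.
      w_grid = np.linspace(0, 1, 101)
      w = w_grid[np.argmin([rmse(w * f_rnn + (1 - w) * f_dfm) for w in w_grid])]
      print(f"RNN {rmse(f_rnn):.3f}  DFM-GAS {rmse(f_dfm):.3f}  "
            f"ensemble(w={w:.2f}) {rmse(w * f_rnn + (1 - w) * f_dfm):.3f}")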
  5. By: Yusen Lin; Jinming Xue; Louiqa Raschid
    Abstract: Unlike trading on public exchanges such as the New York Stock Exchange (NYSE), trading in Over-The-Counter (OTC) markets is facilitated by broker-dealers. Dealers play an important role in stabilizing prices and providing liquidity in OTC markets. We apply machine learning methods to model and predict the trading behavior of OTC dealers for US corporate bonds. We create sequences of daily historical transaction reports for each dealer over a vocabulary of US corporate bonds. Using this history of dealer activity, we predict the future trading decisions of the dealer. We consider a range of neural network-based prediction models, propose an extension, the Pointwise-Product ReZero (PPRZ) Transformer model, and demonstrate its improved performance. We show that individual history provides the best predictive model for the most active dealers; for less active dealers, a collective model provides improved performance. Further, clustering dealers based on their similarity can improve performance. Finally, prediction accuracy varies with the activity level of both the bond and the dealer.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.09098&r=all
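    Example: the PPRZ model builds on ReZero residual connections (Bachlechner et al., 2020), in which each sublayer's output is scaled by a learned weight initialized at zero, so the block starts as the identity. A minimal PyTorch sketch of a plain ReZero Transformer block; the pointwise-product extension is the authors' contribution and is not reproduced here, and all dimensions are illustrative:

      import torch
      import torch.nn as nn

      class ReZeroBlock(nn.Module):
          # Residual block with a learned gate initialized at zero (ReZero),
          # which stabilizes training of deep Transformer stacks.
          def __init__(self, d_model=64, n_heads=4):
              super().__init__()
              self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
              self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                      nn.Linear(4 * d_model, d_model))
              self.alpha = nn.Parameter(torch.zeros(1))  # the ReZero weight

          def forward(self, x):
              x = x + self.alpha * self.attn(x, x, x)[0]
              x = x + self.alpha * self.ff(x)
              return x

      x = torch.randn(8, 30, 64)   # (batch, sequence of daily reports, features)
      assert torch.allclose(ReZeroBlock()(x), x)  # identity at initialization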
  6. By: Zexin Hu; Yiqi Zhao; Matloob Khushi
    Abstract: The prediction of stock and foreign exchange (Forex) prices has always been a popular and profitable area of study, and deep learning applications have been shown to yield better accuracy and returns in financial prediction and forecasting. In this survey we selected papers from the DBLP database for comparison and analysis. We classified the papers by deep learning method: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Deep Neural Network (DNN), Recurrent Neural Network (RNN), reinforcement learning, and other deep learning methods such as HAN, NLP, and WaveNet. Furthermore, we reviewed the datasets, variables, models, and results of each article. The survey presents the results through the most commonly used performance metrics: RMSE, MAPE, MAE, MSE, accuracy, Sharpe ratio, and return rate. We identified that recent models combining LSTM with other methods, for example DNN, are widely researched, while reinforcement learning and other deep learning methods yielded high returns and strong performance. We conclude that the use of deep-learning-based methods for financial modeling has been growing exponentially in recent years.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.09750&r=all
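    Example: the survey's comparison metrics are straightforward to compute; a small reference implementation (the Sharpe ratio here assumes a daily return series and a zero risk-free rate):

      import numpy as np

      def mse(y, f):   return np.mean((y - f) ** 2)
      def rmse(y, f):  return np.sqrt(mse(y, f))
      def mae(y, f):   return np.mean(np.abs(y - f))
      def mape(y, f):  return np.mean(np.abs((y - f) / y)) * 100
      def sharpe(returns, periods_per_year=252):
          # Annualized Sharpe ratio of a strategy return series.
          return np.sqrt(periods_per_year) * returns.mean() / returns.std(ddof=1)

      y = np.array([101.0, 102.5, 101.8])
      f = np.array([100.8, 102.9, 101.5])
      print(rmse(y, f), mae(y, f), mape(y, f))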
  7. By: Hyunsu Kim
    Abstract: This paper introduces a potential application of deep learning and artificial intelligence in finance, particularly in hedging. The paper pursues two objectives. First, we present a framework for a direct-policy-search reinforcement learning agent that replicates a simple vanilla European call option, and we use the agent for model-free delta hedging. In the first part of this paper, we demonstrate how RNN-based direct-policy-search RL agents can perform delta hedging better than the classic Black-Scholes model in the Q-world on parametrically generated underlying scenarios, particularly by minimizing tail exposures at higher values of the risk-aversion parameter. In the second part, using non-parametric paths generated by time-series GANs from a multivariate temporal space, we illustrate the agent's delta-hedging performance across values of the risk-aversion parameter, showing that we can potentially achieve higher average profits with a fairly evident risk-return trade-off. We believe that this RL-based hedging framework is a more efficient way of performing hedging in practice: it addresses some of the inherent issues with the classic models, provides promising and intuitive hedging results, and offers a flexible framework that can easily be paired with other AI-based models for many other purposes.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.03913&r=all
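    Example: the classic benchmark referred to in the abstract is Black-Scholes delta hedging. A minimal simulation of delta hedging a short European call, against which an RL hedger would be compared (r = 0 and all other parameters are illustrative):

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(4)
      S0, K, sigma, T, steps, n = 100.0, 100.0, 0.2, 0.25, 63, 1000
      dt = T / steps

      def d1(S, tau):
          return (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * tau**0.5)

      # Black-Scholes premium received for the short call (r = 0).
      premium = S0 * norm.cdf(d1(S0, T)) - K * norm.cdf(d1(S0, T) - sigma * T**0.5)

      pnl = np.zeros(n)
      for i in range(n):
          S, cash, pos = S0, premium, 0.0
          for t in range(steps):
              target = norm.cdf(d1(S, T - t * dt))  # Black-Scholes delta
              cash -= (target - pos) * S            # rebalance the stock position
              pos = target
              S *= np.exp(-0.5 * sigma**2 * dt + sigma * dt**0.5 * rng.normal())
          pnl[i] = cash + pos * S - max(S - K, 0.0)  # settle the short call
      print("hedged P&L: mean %.3f, std %.3f" % (pnl.mean(), pnl.std()))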
  8. By: Fernando, Roshen (Asian Development Bank Institute); McKibbin, Warwick J. (Asian Development Bank Institute)
    Abstract: We update the analysis of the global macroeconomic consequences of the COVID-19 pandemic in our earlier papers with data as of late October 2020. The paper also extends the focus to Asian economies and explores four alternative policy interventions coordinated across all economies. The first three relate to fiscal policy: an increase in transfers to households of an additional 2% of GDP in 2020; an increase in government spending on goods and services in all economies of 2% of GDP in 2020; and an increase in government infrastructure spending in all economies in 2020. The fourth is a public health intervention, similar to Australia's approach, that successfully manages the virus (flattens the curve) through testing, contact tracing, and isolating infected people, coupled with the rapid deployment of an effective vaccine by mid-2021. The policy most supportive of a global economic recovery is the successfully implemented public health policy. Each of the fiscal policies assists in the economic recovery, with public infrastructure spending providing the most short-term stimulus and the largest longer-term growth benefits.
    Keywords: COVID-19; pandemics; infectious diseases; risk; macroeconomics; DSGE; CGE; G-Cubed
    JEL: C54 C68 F41
    Date: 2021–03–02
    URL: http://d.repec.org/n?u=RePEc:ris:adbiwp:1219&r=all
  9. By: Ly, Racine; Traore, Fousseini; Dia, Khadim
    Abstract: This paper applies recurrent neural network (RNN) methods to forecast cotton and oil prices. We show how these new tools from machine learning, particularly Long Short-Term Memory (LSTM) models, complement traditional methods. Our results show that machine learning methods fit the data reasonably well but do not systematically outperform classical methods such as the Autoregressive Integrated Moving Average (ARIMA) or naïve models in out-of-sample forecasts. However, averaging the forecasts from the two types of models provides better results than either method alone. For cotton, the Root Mean Squared Error (RMSE) of the average forecast was 0.21 percent lower than the ARIMA's and 21.49 percent lower than the LSTM's. For oil, forecast averaging does not improve the RMSE. We suggest using a forecast averaging method and extending our analysis to a wider range of commodity prices.
    Keywords: WORLD; forecasting; models; prices; commodities; machine learning; neural networks; cotton; oils; Recurrent Neural networks; LSTM; commodity prices; Long-Short Term Memory; Autoregressive Integrated Moving Average (ARIMA)
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:fpr:ifprid:2000&r=all
  10. By: Raouf, Mariam; Randriamamonjy, Josée; Elsabbagh, Dalia; Wiebelt, Manfred
    Abstract: This new Social Accounting Matrix (SAM) for Jordan is a snapshot representation of the Jordanian economy in which productive activities, factors of production, and economic transactions between the main agents, including households, government, and the rest of the world, are illustrated in a circular flow. It has been constructed using IFPRI's Nexus format, which uses common data standards, procedures, and classification systems for constructing and updating national SAMs. This new SAM for Jordan is expected to be an important dataset for the Arab (Agricultural) Investment for Development Analyzer (AIDA), a tool based on computable general equilibrium (CGE) model analysis. AIDA was developed to inform national and regional development strategies by providing evidence on the impact of agricultural investments on economic development.
    Keywords: JORDAN, MIDDLE EAST, ASIA, models, crops, livestock, mining, households, capital, income, commodities, agricultural products, government, labour, Social Accounting Matrix (SAM), Agriculture Investment for Development Analyzer (AIDA), factors of production
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:fpr:menawp:32&r=all
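    Example: a SAM is a square matrix in which entry (i, j) records the payment from column account j to row account i, and consistency requires each account's income (row sum) to equal its expenditure (column sum). A toy balance check (the accounts and figures are invented for illustration):

      import numpy as np

      # Toy 3-account SAM (activities, households, government); entry [i, j]
      # is a payment from column account j to row account i.
      sam = np.array([[0.0, 60.0, 20.0],
                      [70.0, 0.0, 10.0],
                      [10.0, 20.0, 0.0]])
      income, spending = sam.sum(axis=1), sam.sum(axis=0)
      assert np.allclose(income, spending), "SAM is unbalanced"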
  11. By: Janos Gabler; Tobias Raabe; Klara Röhrl
    Abstract: Governments worldwide have been adopting diverse and nuanced policy measures to contain the spread of Covid-19. However, epidemiological models usually lack the detailed representation of human meeting patterns needed to credibly predict the effects of such policies. We propose a novel simulation-based model to address these shortcomings. We build on state-of-the-art agent-based simulation models, greatly increasing the detail and realism with which contacts take place. Firstly, we allow for different contact types (such as work, school, household, or leisure), distinguish recurrent from non-recurrent contacts, and allow the infectiousness of meetings to vary by contact type. Secondly, we allow agents to seek tests and to react to information, such as experiencing symptoms, receiving a positive test, or a known case among their contacts, by reducing their own contacts. This allows us to model the effects of a wide array of very targeted policies, such as split classes, mandatory work-from-home schemes, or test-and-trace policies. To validate our model, we show that it can predict the effect of the German November lockdown even though no similar policy was observed during the period used to estimate the model parameters.
    Keywords: Covid-19, agent based simulation model, public health measures
    JEL: C63 I18
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2021_265&r=all
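    Example: a heavily simplified sketch of two of the model's key ingredients: recurrent versus non-recurrent contact types with type-specific infectiousness, and behavioral reactions (symptomatic agents dropping leisure contacts). All parameters are invented; the paper's model is far more detailed:

      import numpy as np

      rng = np.random.default_rng(5)
      n, days = 2000, 60
      hh = rng.integers(0, n // 3, n)        # fixed (recurrent) household assignment
      p_inf = {"household": 0.12, "leisure": 0.04}  # infectiousness by contact type
      state = np.zeros(n, int)               # 0 susceptible, 1 infectious, 2 recovered
      state[rng.choice(n, 20, replace=False)] = 1
      days_sick = np.zeros(n, int)

      for day in range(days):
          newly = []
          for i in np.nonzero(state == 1)[0]:
              # Recurrent household contacts occur every day.
              for j in np.nonzero(hh == hh[i])[0]:
                  if state[j] == 0 and rng.random() < p_inf["household"]:
                      newly.append(j)
              # Reaction to symptoms: after day 3, agents skip leisure contacts.
              if days_sick[i] <= 3:
                  for j in rng.choice(n, 2):  # non-recurrent random contacts
                      if state[j] == 0 and rng.random() < p_inf["leisure"]:
                          newly.append(j)
          days_sick[state == 1] += 1
          state[state == 1] = np.where(days_sick[state == 1] > 10, 2, 1)
          state[newly] = 1
      print("share ever infected:", np.mean(state > 0))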
  12. By: Michel Denuit; Arthur Charpentier; Julien Trufin
    Abstract: Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing. Often in practice, there are nevertheless endless debates about the choice of the right loss function to be used to train the machine learning model, as well as about the appropriate metric to assess the performances of competing models. Also, the sum of fitted values can depart from the observed totals to a large extent, and this often confuses actuarial analysts. The lack of balance inherent in training models by minimizing deviance outside the familiar GLM with canonical link setting has been empirically documented by Wüthrich (2019, 2020), who attributes it to the early stopping rule in gradient descent methods for model fitting. The present paper aims to further study this phenomenon when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Autocalibration is then proposed as a remedy. This new method to correct for bias adds an extra local GLM step to the analysis. Theoretically, it is shown that it implements the autocalibration concept in pure premium calculation and ensures that balance also holds on a local scale, not only at the portfolio level as with existing bias-correction techniques. The convex order appears to be the natural tool to compare competing models, shedding new light on the diagnostic graphs and associated metrics proposed by Denuit et al. (2019).
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.03635&r=all
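    Example: the phenomenon and the remedy can be illustrated on a toy Poisson pricing problem: a boosted model trained by deviance minimization need not reproduce observed totals, and a correction estimated locally on the score restores balance. The binned rescaling below is only a crude stand-in for the paper's local GLM step:

      import numpy as np
      from sklearn.ensemble import HistGradientBoostingRegressor

      rng = np.random.default_rng(6)
      X = rng.normal(size=(20000, 4))
      mu = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1])
      y = rng.poisson(mu)

      model = HistGradientBoostingRegressor(loss="poisson", max_iter=50).fit(X, y)
      score = model.predict(X)
      print("global balance:", y.sum() / score.sum())  # typically not 1

      # Local correction: within bins of the score, rescale so observed and
      # fitted totals agree, enforcing balance on a local scale.
      bins = np.quantile(score, np.linspace(0, 1, 11))
      idx = np.clip(np.digitize(score, bins[1:-1]), 0, 9)
      factor = np.array([y[idx == b].sum() / score[idx == b].sum() for b in range(10)])
      calibrated = score * factor[idx]
      print("global balance after local correction:", y.sum() / calibrated.sum())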
  13. By: Coraline Daubresse; Benoît Laine
    Abstract: PLANET, developed by the Belgian Federal Planning Bureau, models the relationship between the economy and transport. Its aim is to produce: (i) medium- and long-term projections of transport demand in Belgium, for both passenger and freight transport; (ii) simulations of the effects of transport policy measures; and (iii) cost-benefit analyses of transport policy measures. This methodological report describes the main features of the PLANET model, specifically version 4.0, used for the transport outlook published in January 2019.
    JEL: R41 R48
    Date: 2020–02–27
    URL: http://d.repec.org/n?u=RePEc:fpb:wpaper:2001&r=all
  14. By: Gregor Boehl
    Abstract: Structural macroeconometric analysis and new HANK-type models with extremely high dimensionality require fast and robust methods to deal efficiently with occasionally binding constraints (OBCs), especially since major developed economies have again hit the zero lower bound on nominal interest rates. This paper shows that a linear dynamic rational expectations system with OBCs, depending on the expected duration of the constraint, can be represented in closed form. Combined with a set of simple equilibrium conditions, this can be exploited to avoid matrix inversions and simulations at runtime, for significant gains in computational speed. An efficient implementation is provided in the Python programming language. Benchmarking results show that for medium-scale models with an OBC, more than 150,000 state vectors can be evaluated per second, an improvement of more than three orders of magnitude over existing alternatives. Even state evaluations of large HANK-type models with almost 1,000 endogenous variables require only 0.1 ms.
    Keywords: Occasionally Binding Constraints, Effective Lower Bound, Computational Methods
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2021_253&r=all
  15. By: Gries, Thomas (University of Paderborn); Naudé, Wim (University College Cork)
    Abstract: Economists' two main theoretical approaches to understanding the impacts of Artificial Intelligence (AI) have been the task-approach to labor markets and endogenous growth theory. The recent integration of the task-approach into an endogenous growth model by Acemoglu and Restrepo (AR) is therefore a useful advance. However, it suffers from the shortcoming that it does not explicitly model AI and its technological feasibility. The AR model focuses on tasks and skills but not on abilities, although abilities better characterize the nature of AI services. This paper addresses this shortcoming by elaborating the task-approach with AI abilities for use within endogenous growth models. This more ability-sensitive specification of the task-approach allows more nuanced and realistic impacts of AI progress on the economy to be captured.
    Keywords: Artificial Intelligence, endogenous growth theory, labor economics, mathematical models
    JEL: O47 O33 J24 E21 E25
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp14171&r=all
  16. By: Charles-Albert Lehalle; Eyal Neuman; Segev Shlomov
    Abstract: We consider a stochastic game between three types of players: an inside trader, noise traders, and a market maker. In a similar fashion to Kyle's model, we assume that the insider first chooses the size of her market order and then the market maker determines the price by observing the total order flow resulting from the insider's and the noise traders' transactions. In addition to the classical framework, a revenue term is added to the market maker's performance function, proportional to the order flow and to the size of the bid-ask spread. We derive the maximizer of the insider's revenue function and prove sufficient conditions for an equilibrium in the game. Then, we use neural network methods to verify that this equilibrium holds. We show that the equilibrium state in this model experiences interesting phase transitions as the weight of the revenue term in the market maker's performance function changes. Specifically, the asset price in equilibrium passes through three different phases: a linear pricing rule without a spread, a pricing rule that includes a linear mid-price and a bid-ask spread, and a metastable state with a zero mid-price and a large spread.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.04481&r=all
  17. By: Mark Borgschulte; Marius Guenzel; Canyao Liu; Ulrike Malmendier
    Abstract: We estimate the long-term effects of experiencing high levels of job demands on the mortality and aging of CEOs. The estimation exploits variation in takeover protection and industry crises. First, using hand-collected data on the dates of birth and death for 1,605 CEOs of large, publicly-listed U.S. firms, we estimate the resulting changes in mortality. The hazard estimates indicate that CEOs’ lifespan increases by two years when insulated from market discipline via anti-takeover laws, and decreases by 1.5 years in response to an industry-wide downturn. Second, we apply neural-network based machine-learning techniques to assess visible signs of aging in pictures of CEOs. We estimate that exposure to a distress shock during the Great Recession increases CEOs’ apparent age by one year over the next decade. Our findings imply significant health costs of managerial stress, also relative to known health risks.
    JEL: G01 G3 I10 J01
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:28550&r=all
  18. By: Philipp Bach; Victor Chernozhukov; Malte S. Kurz; Martin Spindler
    Abstract: The R package DoubleML implements the double/debiased machine learning framework of Chernozhukov et al. (2018). It provides functionalities to estimate parameters in causal models based on machine learning methods. The double machine learning framework consists of three key ingredients: Neyman orthogonality, high-quality machine learning estimation, and sample splitting. Estimation of nuisance components can be performed by various state-of-the-art machine learning methods available in the mlr3 ecosystem. DoubleML makes it possible to perform inference in a variety of causal models, including partially linear and interactive regression models and their extensions to instrumental variable estimation. The object-oriented implementation of DoubleML gives a high degree of flexibility in model specification and makes the package easily extendable. This paper serves as an introduction to the double machine learning framework and the R package DoubleML. In reproducible code examples with simulated and real data sets, we demonstrate how DoubleML users can perform valid inference based on machine learning methods.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.09603&r=all
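    Example: the package itself is written in R (a Python port of DoubleML also exists); to keep this digest's sketches in a single language, the following is a from-scratch Python version of the partially linear model combining the three ingredients named above: a Neyman-orthogonal (residual-on-residual) score, machine-learned nuisance functions, and sample splitting via cross-fitting. Data and learners are illustrative:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(7)
      n = 2000
      X = rng.normal(size=(n, 5))
      d = np.sin(X[:, 0]) + rng.normal(0, 1, n)              # treatment
      y = 0.5 * d + np.cos(X[:, 0]) + rng.normal(0, 1, n)    # true effect: 0.5

      res_y, res_d = np.zeros(n), np.zeros(n)
      for train, test in KFold(5, shuffle=True, random_state=0).split(X):
          # Cross-fitting: nuisance models are fit on the training folds only.
          res_y[test] = y[test] - RandomForestRegressor().fit(X[train], y[train]).predict(X[test])
          res_d[test] = d[test] - RandomForestRegressor().fit(X[train], d[train]).predict(X[test])

      theta = (res_d @ res_y) / (res_d @ res_d)  # residual-on-residual regression
      psi = (res_y - theta * res_d) * res_d
      se = np.std(psi) / (np.mean(res_d**2) * np.sqrt(n))
      print(f"theta = {theta:.3f} +/- {1.96 * se:.3f}")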
  19. By: Jaideep Singh; Matloob Khushi
    Abstract: To reject the Efficient Market Hypothesis, a set of 5 technical indicators and 23 fundamental indicators was identified to establish the possibility of generating excess returns on the stock market. Leveraging these data points and various classification machine learning models, we analysed trading data on the 505 equities in the US S&P 500 over the past 20 years to develop an effective classifier. From any given day, we were able to predict the direction of a 1% change in price up to 10 days into the future. The predictions had an overall accuracy of 83.62%, with a precision of 85% for buy signals and a recall of 100% for sell signals. Moreover, we grouped equities by sector and repeated the experiment to see whether grouping similar assets together improved the results, but found no significant improvement in performance, arguing against sector-based analysis. Using feature ranking, we identified an even smaller set of 6 indicators that maintains accuracy similar to the original 28 features, and we uncovered the importance of buy, hold, and sell analyst ratings, which turned out to be the top contributors to the model. Finally, to evaluate the classifier's effectiveness in real-life situations, we backtested it on FAANG equities using a modest trading strategy, where it generated returns of above 60% over the term of the testing dataset. In conclusion, our proposed methodology, with its combination of purposefully chosen features, improves on previous studies, and our model predicts the direction of 1% price changes on the 10th day with high confidence and with enough of a buffer to build a robotic trading system.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.09106&r=all
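    Example: a generic sketch of the classification-plus-feature-ranking pipeline; synthetic features stand in for the paper's 28 indicators, and the chronological train/test split avoids look-ahead:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(8)
      n = 3000
      X = rng.normal(size=(n, 28))              # stand-ins for the 28 indicators
      label = (X[:, 5] + 0.5 * X[:, 12] + rng.normal(0, 1, n) > 0).astype(int)

      split = int(0.8 * n)                      # chronological split, no shuffling
      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      clf.fit(X[:split], label[:split])
      print("accuracy:", clf.score(X[split:], label[split:]))

      # Feature ranking: keep only the most informative indicators.
      top = np.argsort(clf.feature_importances_)[::-1][:6]
      print("top-6 features:", top)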
  20. By: André de Palma; Nathalie Picard; Stef Proost (Université de Cergy-Pontoise, THEMA)
    Abstract: We develop an epidemic model to explain and predict the dynamics of the SARS-CoV-2 virus and to assess the economic costs of lockdown scenarios. The standard three-variable epidemic model, SIR (Susceptible, Infected, and Removed), is extended into a five-variable model, SCARE: Susceptible, Carrier, Affected (i.e. sick), Recovered, and Eliminated (i.e. dead). Using WHO and Oxford data on cases and deaths, we rely on indirect inference techniques to estimate the parameters of SIR and SCARE, considering different observation rates and lockdown stringencies. Both models are estimated for five countries and provide predictions of the number of cases, the number of deaths, and the basic reproduction number R0. SCARE is used to assess the impact of lockdown policies on economic costs for the well-documented Belgian case. Economic assessments of the epidemic's hospital, morbidity, and mortality outcomes, together with macroeconomic impacts, show that the total net benefits of the Belgian lockdown policy are negative for low valuations of life years lost. The gains from extending the Belgian lockdown policy are negative even for high valuations of life.
    Keywords: COVID-19, Public health, Policy, Simulation, Social contact
    JEL: I12 I18 I38
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ema:worpap:2021-10&r=all
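    Example: a minimal SCARE-style compartmental model written as a system of ODEs. The flows below (carriers either fall sick or recover; the sick either recover or die) and all parameter values are illustrative assumptions, not the paper's indirect-inference estimates:

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, sigma, g_c, g_a, delta = 0.4, 0.25, 0.1, 0.1, 0.01  # illustrative
      N = 11.5e6  # roughly Belgium-sized population

      def scare(t, x):
          S, C, A, R, E = x
          new_inf = beta * S * (C + A) / N        # carriers and the sick transmit
          return [-new_inf,
                  new_inf - (sigma + g_c) * C,    # carriers fall sick or recover
                  sigma * C - (g_a + delta) * A,  # the sick recover or die
                  g_c * C + g_a * A,
                  delta * A]

      sol = solve_ivp(scare, (0, 180), [N - 100, 100, 0, 0, 0],
                      t_eval=np.arange(181))
      print("deaths after 180 days:", int(sol.y[4, -1]))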
  21. By: Alexander Hijzen; Andrea Salvatori
    Abstract: This report provides an ex ante assessment of the distributional effects of introducing portable severance pay accounts in Spain, based on micro-simulations. In the current system, permanent workers who are dismissed from their job are entitled to 20 days of severance pay per year of service, which is relatively high by OECD standards. The report considers a reform that replaces the current severance payment system with individual savings accounts financed through periodic employer contributions. It focuses on two versions of the reform that hold constant, respectively, the total compensation in case of dismissal (“constant benefit”) or the expected cost to firms of employing a permanent worker (“constant cost”). Importantly, the analysis in the report does not take account of the behavioural responses of firms and workers to the reform.
    Keywords: employment protection, individual savings accounts, job mobility, microsimulation
    JEL: H55 J32 J62
    Date: 2021–03–17
    URL: http://d.repec.org/n?u=RePEc:oec:elsaab:259-en&r=all
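    Example: the arithmetic behind the two reform variants can be sketched directly. Under the current rule severance depends on the wage at dismissal, whereas account contributions are made at the wage of each year, so with wage growth a contribution of 20 days' pay per year no longer replicates the current entitlement (all figures are illustrative; no interest is assumed on the account):

      g, years, w0, days = 0.03, 10, 80.0, 20   # wage growth, tenure, daily wage
      wages = [w0 * (1 + g) ** t for t in range(years)]

      current_rule = days * wages[-1] * years   # 20 days of final pay per year
      account = sum(days * w for w in wages)    # contributions accumulated as you go
      print(f"current rule: {current_rule:.0f}  account balance: {account:.0f}")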
  22. By: Raymond C. W. Leung; Yu-Man Tam
    Abstract: How to hedge factor risks without knowing the identities of the factors? We first prove a general theoretical result: even if the exact set of factors cannot be identified, any risky asset can use some portfolio of similar peer assets to hedge against its own factor exposures. A long position of a risky asset and a short position of a "replicate portfolio" of its peers represent that asset's factor residual risk. We coin the expected return of an asset's factor residual risk as its Statistical Arbitrage Risk Premium (SARP). The challenge in empirically estimating SARP is finding the peers for each asset and constructing the replicate portfolios. We use the elastic-net, a machine learning method, to project each stock's past returns onto that of every other stock. The resulting high-dimensional but sparse projection vector serves as investment weights in constructing the stocks' replicate portfolios. We say a stock has high (low) Statistical Arbitrage Risk (SAR) if it has low (high) R-squared with its peers. The key finding is that "unique" stocks have both a higher SARP and higher excess returns than "ubiquitous" stocks: in the cross-section, high SAR stocks have a monthly SARP (monthly excess returns) that is 1.101% (0.710%) greater than low SAR stocks. The average SAR across all stocks is countercyclical. Our results are robust to controlling for various known priced factors and characteristics.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.09987&r=all
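    Example: the replicate-portfolio construction reduces to an elastic-net regression of one stock's returns on all of its peers; the R-squared measures SAR and the sparse coefficients are the hedge weights. A sketch on a synthetic three-factor market (the penalty settings are illustrative):

      import numpy as np
      from sklearn.linear_model import ElasticNet

      rng = np.random.default_rng(9)
      T, n_stocks = 240, 200                      # 20 years of monthly returns
      factors = rng.normal(size=(T, 3))
      loadings = rng.normal(size=(3, n_stocks))
      returns = factors @ loadings + rng.normal(0, 0.05, (T, n_stocks))

      i = 0                                       # the stock to be replicated
      peers = np.delete(returns, i, axis=1)
      enet = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(peers, returns[:, i])

      r2 = enet.score(peers, returns[:, i])       # low R^2 = high SAR ("unique")
      weights = enet.coef_                        # sparse replicate-portfolio weights
      print(f"peers used: {(weights != 0).sum()},  R^2: {r2:.2f}")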
  23. By: Neil R. Ericsson (Division of International Finance, Board of Governors of the Federal Reserve System)
    Abstract: David Hendry has made, and continues to make, pivotal contributions to the econometrics of empirical economic modeling, economic forecasting, econometrics software, substantive empirical economic model design, and economic policy. This paper reviews his contributions by topic, emphasizing the overlaps between different strands of his research and the importance of real-world problems in motivating that research.
    Keywords: cointegration, consumers' expenditure, dynamic specification, equilibrium correction, forecasting, machine learning, model evaluation, money demand, PcGive, structural breaks
    JEL: C52 C53
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:gwc:wpaper:2021-001&r=all
  24. By: Sturm, Timo; Gerlach, Jin; Pumplun, Luisa; Mesbah, Neda; Peters, Felix; Tauchert, Christoph; Nan, Ning; Buxmann, Peter
    Date: 2021–03–11
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:125653&r=all
  25. By: Lopez, Claude; Butler, Brittney
    Abstract: This report proposes a new approach to investigating US health disparities that focuses on understanding populations' specificities before looking at their health profiles. It first identifies the different US populations or communities based on their behavioral, demographic, economic, and social profiles, and then links these profiles to chronic disease prevalence rates. https://milkeninstitute.org/reports/community-explorer-county-level
    Keywords: Community, Disparities, Health policy, machine learning
    JEL: C38 I1 I3
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:106289&r=all
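    Example: a minimal sketch of the two-step approach, clustering county profiles and then linking clusters to a health outcome, on synthetic data (the report's actual features, cluster count, and method may differ):

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(10)
      # Stand-in county profiles: behavioral, demographic, economic, social features.
      profiles = rng.normal(size=(3142, 8))
      X = StandardScaler().fit_transform(profiles)
      communities = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

      # Step two: link each community to an outcome, e.g. chronic-disease prevalence.
      prevalence = rng.uniform(5, 25, 3142)
      for c in range(10):
          print(c, round(prevalence[communities == c].mean(), 1))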
  26. By: Qi Tang; Tongmei Fan; Ruchen Shi; Jingyan Huang; Yidan Ma
    Abstract: To further overcome the difficulties that existing models face with the non-stationary and nonlinear characteristics of high-frequency financial time series, especially their weak generalization ability, this paper proposes an ensemble prediction model that combines data denoising methods, namely the wavelet transform (WT) and singular spectrum analysis (SSA), with a long short-term memory (LSTM) neural network. The financial time series is decomposed and reconstructed by WT and SSA to denoise it, yielding a smooth series that retains the effective information; this series is then fed into the LSTM to obtain predictions. Taking the Dow Jones Industrial Average (DJIA) as the research object, the five-minute closing prices of the DJIA are used for short-term (1 hour), medium-term (3 hours) and long-term (6 hours) prediction. Based on the root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and the standard deviation of the absolute percentage error (SDAPE), the experimental results show that at all three horizons data denoising greatly improves the accuracy and stability of the predictions and effectively improves the generalization ability of the LSTM prediction model. As WT and SSA can extract useful information from the original series while avoiding overfitting, the hybrid models better capture the sequence patterns of the DJIA closing price, and the WT-LSTM model outperforms both the benchmark LSTM model and the SSA-LSTM model.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.03505&r=all
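    Example: a minimal wavelet-denoising pass of the kind used in the WT-LSTM pipeline, using PyWavelets: decompose, soft-threshold the detail coefficients, reconstruct. The series, wavelet choice, and threshold rule are illustrative:

      import numpy as np
      import pywt  # PyWavelets

      rng = np.random.default_rng(11)
      t = np.linspace(0, 1, 512)
      clean = np.cumsum(rng.normal(0, 0.2, 512)) + np.sin(6 * np.pi * t)
      noisy = clean + rng.normal(0, 0.5, 512)   # stand-in for 5-minute closes

      # Soft-threshold the detail coefficients, keep the approximation.
      coeffs = pywt.wavedec(noisy, "db4", level=4)
      thr = 0.5 * np.sqrt(2 * np.log(noisy.size))  # universal-threshold style rule
      coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
      denoised = pywt.waverec(coeffs, "db4")

      print("RMSE noisy:    %.3f" % np.sqrt(np.mean((noisy - clean) ** 2)))
      print("RMSE denoised: %.3f" % np.sqrt(np.mean((denoised - clean) ** 2)))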
  27. By: Tobias Cagala; Ulrich Glogowsky; Johannes Rincke; Anthony Strittmatter
    Abstract: This paper studies optimal targeting as a means to increase fundraising efficacy. We randomly provide potential donors with an unconditional gift and use causal machine learning techniques to "optimally" target this fundraising tool to the predicted net donors: individuals who, in expectation, give more than their solicitation costs. With this strategy, our fundraiser avoids loss-making solicitations, significantly boosts available funds, and, consequently, can increase the provision of goods and services. Further, to realize these gains, the charity can rely merely on readily available data. We conclude that charities that refrain from targeting their fundraising waste significant resources.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2103.10251&r=all
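    Example: a stylized version of the targeting rule, solicit only those whose predicted donation exceeds the solicitation cost. A simple predictive model stands in for the paper's causal machine learning pipeline, and all data are synthetic:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(13)
      n, cost = 5000, 5.0                    # solicitation (gift) cost per mailing
      X = rng.normal(size=(n, 6))            # donor covariates from past campaigns
      donation = np.maximum(0, 10 + 4 * X[:, 0] + rng.normal(0, 8, n))

      # Predict each individual's donation; solicit only predicted net donors.
      model = RandomForestRegressor(random_state=0).fit(X[:4000], donation[:4000])
      pred = model.predict(X[4000:])
      target = pred > cost
      realized = donation[4000:]
      print("net per mailing, blanket :", (realized - cost).mean())
      print("net per mailing, targeted:", (realized[target] - cost).mean())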
  28. By: Anthony Strittmatter; Conny Wunsch
    Abstract: The vast majority of existing studies that estimate the average unexplained gender pay gap use unnecessarily restrictive linear versions of the Blinder-Oaxaca decomposition. Using a notably rich and large data set of 1.7 million employees in Switzerland, we investigate how the methodological improvements made possible by such big data affect estimates of the unexplained gender pay gap. We study the sensitivity of the estimates with regard to i) the availability of observationally comparable men and women, ii) model flexibility when controlling for wage determinants, and iii) the choice of different parametric and semi-parametric estimators, including variants that make use of machine learning methods. We find that all three factors matter greatly. Blinder-Oaxaca estimates of the unexplained gender pay gap decline by up to 39% when we enforce comparability between men and women and use a more flexible specification of the wage equation. Semi-parametric matching yields estimates that, when compared with the Blinder-Oaxaca estimates, are up to 50% smaller and also less sensitive to the way wage determinants are included.
    Keywords: gender inequality, gender pay gap, common support, model specification, matching estimator, machine learning
    JEL: J31 C21
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_8912&r=all
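    Example: the linear Blinder-Oaxaca decomposition that the paper takes as its baseline splits the raw gap into an explained part (differences in characteristics) and an unexplained part (differences in returns). A compact sketch on simulated wages:

      import numpy as np

      rng = np.random.default_rng(12)
      n = 5000
      male = rng.integers(0, 2, n).astype(bool)
      X = np.column_stack([np.ones(n), rng.normal(10, 2, n), rng.normal(40, 10, n)])
      # Simulated log wages: women face a 5% unexplained penalty plus lower X.
      X[~male, 1] -= 0.5
      logw = X @ np.array([1.0, 0.08, 0.01]) - 0.05 * (~male) + rng.normal(0, 0.3, n)

      beta_m = np.linalg.lstsq(X[male], logw[male], rcond=None)[0]
      beta_f = np.linalg.lstsq(X[~male], logw[~male], rcond=None)[0]
      xb_m, xb_f = X[male].mean(axis=0), X[~male].mean(axis=0)

      explained = (xb_m - xb_f) @ beta_m      # differences in characteristics
      unexplained = xb_f @ (beta_m - beta_f)  # differences in returns ("pay gap")
      print(f"raw gap {logw[male].mean() - logw[~male].mean():.3f} "
            f"= explained {explained:.3f} + unexplained {unexplained:.3f}")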
  29. By: Maximilian Maurice Gail (Justus Liebig University Giessen); Phil-Adrian Klotz (Justus Liebig University Giessen)
    Abstract: This paper empirically analyzes the effect of the widely used agency model on the retail prices of e-books in the United Kingdom. Using a unique cross-sectional data set of e-book prices for a large sample of book titles across all major publishing houses, we exploit cross-genre and cross-publisher variation to identify the mean effect of the agency model on e-book prices. Since the genre information is ambiguous, and even missing for some titles in our original dataset, we use a Latent Dirichlet Allocation (LDA) approach to determine detailed book genres from the books' descriptions. We find that e-book prices for titles sold under the agency model are on average 36% lower than for titles sold under the wholesale model. Our results are robust to different specifications, a Lewbel instrumental variable approach, and machine learning techniques.
    Keywords: e-books, agency, resale price maintenance, Amazon, double machine learning, Latent Dirichlet allocation
    JEL: D12 D22 L42 L81 L82 Z11
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:mar:magkse:202111&r=all
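    Example: the genre-recovery step can be sketched with scikit-learn, fitting LDA to word counts of book descriptions and assigning each title its dominant topic (a toy corpus and two topics here; the paper's corpus and topic count differ):

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      descriptions = [
          "a detective investigates a murder in a quiet village",
          "spells and dragons in a kingdom of ancient magic",
          "a murder case tests the young detective inspector",
          "the dragon rider learns the old magic of the realm",
      ]
      counts = CountVectorizer(stop_words="english").fit_transform(descriptions)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

      # Each book gets a topic (genre) distribution; take the dominant topic.
      genres = lda.transform(counts).argmax(axis=1)
      print(genres)  # e.g., crime vs. fantasy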

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.