nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒06‒21
28 papers chosen by
Stan Miles
Thompson Rivers University

  1. Signatured Deep Fictitious Play for Mean Field Games with Common Noise By Ming Min; Ruimeng Hu
  2. Reservoir optimization and Machine Learning methods By Xavier Warin
  3. A Simple and General Debiased Machine Learning Theorem with Finite Sample Guarantees By Victor Chernozhukov; Whitney K. Newey; Rahul Singh
  4. Classification of monetary and fiscal dominance regimes using machine learning techniques By Hinterlang, Natascha; Hollmayr, Josef
  5. How Using Machine Learning Classification as a Variable in Regression Leads to Attenuation Bias and What to Do About It By Zhang, Han
  6. Slow Momentum with Fast Reversion: A Trading Strategy Using Deep Learning and Changepoint Detection By Kieran Wood; Stephen Roberts; Stefan Zohren
  7. Economic Nowcasting with Long Short-Term Memory Artificial Neural Networks (LSTM) By Daniel Hopp
  8. Price graphs: Utilizing the structural information of financial time series for stock prediction By Junran Wu; Ke Xu; Xueyuan Chen; Shangzhe Li; Jichang Zhao
  9. Estimating air quality co-benefits of energy transition using machine learning By Da Zhang; Qingyi Wang; Shaojie Song; Simiao Chen; Mingwei Li; Lu Shen; Siqi Zheng; Bofeng Cai; Shenhao Wang
  10. Urban economics in a historical perspective: Recovering data with machine learning By Pierre-Philippe Combes; Laurent Gobillon; Yanos Zylberberg
  11. Certification Systems for Machine Learning: Lessons from Sustainability By Matus, Kira; Veale, Michael
  12. Random feature neural networks learn Black-Scholes type PDEs without curse of dimensionality By Lukas Gonon
  13. Artificial intelligence masters’ programmes - An analysis of curricula building blocks By Juan Manuel Dodero
  14. Constraint-Based Inference of Heuristics for Foreign Exchange Trade Model Optimization By Nikolay Ivanov; Qiben Yan
  15. Fighting for Curb Space: Parking, Ride-Hailing, Urban Freight Deliveries, and Other Users By Jaller, Miguel; Rodier, Caroline; Zhang, Michael; Lin, Huachao; Lewis, Kathryn
  16. Variable time-step: A method for improving computational tractability for energy system models with long-term storage By Paul de Guibert; Behrang Shirizadeh; Philippe Quirion
  17. Online Trading Models in the Forex Market Considering Transaction Costs By Koya Ishikawa; Kazuhide Nakata
  18. Deep Learning Statistical Arbitrage By Jorge Guijarro-Ordonez; Markus Pelger; Greg Zanotti
  19. Predicting French SME Failures: New Evidence from Machine Learning Techniques By Christophe Schalck; Meryem Schalck
  20. Unbiased Optimal Stopping via the MUSE By Zhengqing Zhou; Guanyang Wang; Jose Blanchet; Peter W. Glynn
  21. Modeling and forecasting production indices using artificial neural networks, taking into account intersectoral relationships and comparing the predictive qualities of various architectures By Kaukin Andrey; Kosarev Vladimir
  22. EXPENDITURE PATTERNS, HETEROGENEITY AND LONG-TERM STRUCTURAL CHANGE By Kenneth W. Clements; Marc Jim M. Mariano; George Verikios
  23. Deep reinforcement learning on a multi-asset environment for trading By Ali Hirsa; Joerg Osterrieder; Branka Hadji-Misheva; Jan-Alexander Posth
  24. What Data Augmentation Do We Need for Deep-Learning-Based Finance? By Liu Ziyin; Kentaro Minami; Kentaro Imajo
  25. Fast and Robust Online Inference with Stochastic Gradient Descent via Random Scaling By Sokbae Lee; Yuan Liao; Myung Hwan Seo; Youngki Shin
  26. Credit spread approximation and improvement using random forest regression By Mathieu Mercadier; Jean-Pierre Lardy
  27. Parallel machine scheduling under uncertainty: The battle for robustness By Guopeng Song; Roel Leus
  28. Mission-Oriented Policies and the "Entrepreneurial State" at Work: An Agent-Based Exploration By Giovanni Dosi; Francesco Lamperti; Mariana Mazzucato; Mauro Napoletano; Andrea Roventini

  1. By: Ming Min; Ruimeng Hu
    Abstract: Existing deep learning methods for solving mean-field games (MFGs) with common noise fix the sampling common noise paths and then solve the corresponding MFGs. This leads to a nested-loop structure with millions of simulations of common noise paths in order to produce accurate solutions, which results in prohibitive computational cost and limits the applications to a large extent. In this paper, based on the rough path theory, we propose a novel single-loop algorithm, named signatured deep fictitious play, by which we can work with the unfixed common noise setup to avoid the nested-loop structure and reduce the computational complexity significantly. The proposed algorithm can accurately capture the effect of common uncertainty changes on mean-field equilibria without further training of neural networks, as previously needed in the existing machine learning algorithms. The efficiency is supported by three applications, including linear-quadratic MFGs, mean-field portfolio game, and mean-field game of optimal consumption and investment. Overall, we provide a new point of view from the rough path theory to solve MFGs with common noise with significantly improved efficiency and an extensive range of applications. In addition, we report the first deep learning work to deal with extended MFGs (a mean-field interaction via both the states and controls) with common noise.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.03272&r=
  2. By: Xavier Warin
    Abstract: After showing the efficiency of feedforward networks for estimating controls in high dimension in the global optimization of some storage problems, we develop a modification of an algorithm based on a dynamic programming principle. We show that classical feedforward networks are not effective for estimating Bellman values in reservoir problems, and we propose neural networks that give far better results. Finally, we develop a new algorithm mixing LP resolution and conditional cuts calculated by neural networks to solve some stochastic linear problems.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.08097&r=
  3. By: Victor Chernozhukov; Whitney K. Newey; Rahul Singh
    Abstract: Debiased machine learning is a meta algorithm based on bias correction and sample splitting to calculate confidence intervals for functionals (i.e. scalar summaries) of machine learning algorithms. For example, an analyst may desire the confidence interval for a treatment effect estimated with a neural network. We provide a nonasymptotic debiased machine learning theorem that encompasses any global or local functional of any machine learning algorithm that satisfies a few simple, interpretable conditions. Formally, we prove consistency, Gaussian approximation, and semiparametric efficiency by finite sample arguments. The rate of convergence is root-n for global functionals, and it degrades gracefully for local functionals. Our results culminate in a simple set of conditions that an analyst can use to translate modern learning theory rates into traditional statistical inference. The conditions reveal a new double robustness property for ill posed inverse problems.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.15197&r=
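The bias-correction-plus-sample-splitting recipe described in this abstract can be sketched on the partially linear model Y = D·theta + g(X) + eps. The code below is an illustrative cross-fitted estimator in that spirit, not the authors' implementation; the closed-form ridge nuisance learner is a placeholder for any ML regressor.

```python
import numpy as np

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1e-3):
    # Closed-form ridge regression: the (swappable) ML nuisance learner.
    d = X_tr.shape[1]
    beta = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)
    return X_te @ beta

def dml_plm(Y, D, X, n_folds=2, seed=0):
    """Cross-fitted DML estimate of theta in Y = D*theta + g(X) + eps."""
    n = len(Y)
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), n_folds)
    Y_res, D_res = np.empty(n), np.empty(n)
    for k in range(n_folds):
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Nuisances are fitted out-of-fold (the sample-splitting step).
        Y_res[te] = Y[te] - ridge_fit_predict(X[tr], Y[tr], X[te])
        D_res[te] = D[te] - ridge_fit_predict(X[tr], D[tr], X[te])
    theta = (D_res @ Y_res) / (D_res @ D_res)      # residual-on-residual slope
    psi = (Y_res - D_res * theta) * D_res          # influence function
    se = np.sqrt(np.mean(psi**2) / np.mean(D_res**2) ** 2 / n)
    return theta, se
```

The returned standard error gives the root-n confidence interval the theorem is about; swapping `ridge_fit_predict` for a neural network leaves the rest unchanged.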
  4. By: Hinterlang, Natascha; Hollmayr, Josef
    Abstract: This paper identifies U.S. monetary and fiscal dominance regimes using machine learning techniques. The algorithms are trained and verified by employing simulated data from Markov-switching DSGE models, before they classify regimes from 1968-2017 using actual U.S. data. All machine learning methods outperform a standard logistic regression concerning the simulated data. Among those the Boosted Ensemble Trees classifier yields the best results. We find clear evidence of fiscal dominance before Volcker. Monetary dominance is detected between 1984-1988, before a fiscally led regime turns up around the stock market crash lasting until 1994. Until the beginning of the new century, monetary dominance is established, while the more recent evidence following the financial crisis is mixed with a tendency towards fiscal dominance.
    Keywords: Monetary-fiscal interaction,Machine Learning,Classification,Markov-switching DSGE
    JEL: C38 E31 E63
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:imfswp:160&r=
  5. By: Zhang, Han (The Hong Kong University of Science and Technology)
    Abstract: Social scientists have increasingly been applying machine learning algorithms to "big data" to measure theoretical concepts they could not easily measure before, and then using these machine-predicted variables in regressions. This article first demonstrates that directly inserting binary predictions (i.e., classifications) without regard for prediction error will generally lead to attenuation bias in either slope coefficients or marginal effect estimates. We then propose several estimators to obtain consistent estimates of the coefficients. The estimators require validation data, for which researchers have both the machine prediction and the true values. Such validation data are either automatically available during algorithm training or can easily be obtained. Monte Carlo simulations demonstrate the effectiveness of the proposed estimators. Finally, we summarize the usage pattern of machine learning predictions in 18 recent publications in top social science journals, apply our proposed estimators to two of them, and offer some practical recommendations.
    Date: 2021–05–29
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:453jk&r=
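The attenuation mechanism, and the role of validation data, can be seen in a toy simulation. The correction below is a simple moment-based sketch (estimate the attenuation factor by regressing the true label on the prediction in the validation subsample), offered for intuition only; it is not one of the paper's proposed estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 20000, 2.0
x = rng.integers(0, 2, size=n).astype(float)   # true binary variable
flip = rng.random(n) < 0.2                     # 20% misclassification error
x_hat = np.where(flip, 1 - x, x)               # machine-predicted label
y = beta * x + rng.normal(size=n)

def ols_slope(z, y):
    z_c = z - z.mean()
    return (z_c @ (y - y.mean())) / (z_c @ z_c)

# Naive regression on the prediction: slope = beta * Cov(x, x_hat)/Var(x_hat),
# so it is shrunk towards zero whenever the prediction is imperfect.
naive = ols_slope(x_hat, y)

# Validation data (true labels known) identify the attenuation factor.
val = slice(0, 5000)
lam = ols_slope(x_hat[val], x[val])
corrected = naive / lam
```

With a 20% error rate the naive slope is roughly 0.6 times the true coefficient, and dividing by the validation-estimated factor recovers it.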
  6. By: Kieran Wood; Stephen Roberts; Stefan Zohren
    Abstract: Momentum strategies are an important part of alternative investments and are at the heart of commodity trading advisors (CTAs). These strategies have however been found to have difficulties adjusting to rapid changes in market conditions, such as during the 2020 market crash. In particular, immediately after momentum turning points, where a trend reverses from an uptrend (downtrend) to a downtrend (uptrend), time-series momentum (TSMOM) strategies are prone to making bad bets. To improve the response to regime change, we introduce a novel approach, where we insert an online change-point detection (CPD) module into a Deep Momentum Network (DMN) [1904.04912] pipeline, which uses an LSTM deep-learning architecture to simultaneously learn both trend estimation and position sizing. Furthermore, our model is able to optimise the way in which it balances 1) a slow momentum strategy which exploits persisting trends, but does not overreact to localised price moves, and 2) a fast mean-reversion regime, quickly flipping its position and then swapping it back again to exploit localised price moves. Our CPD module outputs a changepoint location and severity score, allowing our model to learn to respond to varying degrees of disequilibrium, or smaller and more localised changepoints, in a data-driven manner. Using a portfolio of 50 liquid, continuous futures contracts over the period 1990-2020, the addition of the CPD module leads to an improvement in Sharpe ratio of 33%. Even more notably, this module is especially beneficial in periods of significant nonstationarity; in particular, over the most recent years tested (2015-2020) the performance boost is approximately 400%. This is especially interesting as traditional momentum strategies have been underperforming in this period.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.13727&r=
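The key interface here is a CPD module that emits both a changepoint location and a severity score. The paper's detector is more sophisticated than what follows; a classical one-sided CUSUM on a (pre-standardized) return series is a minimal stand-in that exposes the same two outputs.

```python
def cusum_detect(series, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate evidence of an upward mean shift.
    Returns (location, severity): location is the first index at which the
    statistic exceeds the threshold (None if it never does), and severity is
    the peak statistic, usable as a continuous 'how severe' score."""
    s, peak, location = 0.0, 0.0, None
    for t, x in enumerate(series):
        s = max(0.0, s + x - drift)   # reset at 0; drift guards against noise
        peak = max(peak, s)
        if location is None and s > threshold:
            location = t
    return location, peak
```

A downstream model can condition its position sizing on both outputs, e.g. shrinking momentum exposure as severity grows after a detected changepoint.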
  7. By: Daniel Hopp
    Abstract: Artificial neural networks (ANNs) have been the catalyst for numerous advances in a variety of fields and disciplines in recent years. Their impact on economics, however, has been comparatively muted. One type of ANN, the long short-term memory network (LSTM), is particularly well-suited to dealing with economic time series. Here, the architecture's performance and characteristics are evaluated in comparison with the dynamic factor model (DFM), currently a popular choice in the field of economic nowcasting. LSTMs are found to produce superior results to DFMs in the nowcasting of three separate variables: global merchandise export values and volumes, and global services exports. Further advantages include their ability to handle large numbers of input features in a variety of time frequencies. A disadvantage is the inability to ascribe contributions of input features to model outputs, common to all ANNs. In order to facilitate continued applied research of the methodology by avoiding the need for any knowledge of deep-learning libraries, an accompanying Python library was developed using PyTorch, https://pypi.org/project/nowcast-lstm/.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.08901&r=
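The gating mechanism that makes the LSTM well-suited to persistent time series can be written out directly. Below is a single forward time step in plain numpy, for illustration only (the accompanying library builds on PyTorch, whose `nn.LSTM` implements the same equations).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. Gates decide what to forget (f), what new
    information to store (i, g), and what to expose as output (o)."""
    H = h.shape[0]
    z = W @ x + U @ h + b          # stacked pre-activations, shape (4H,)
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2*H])          # forget gate
    o = sigmoid(z[2*H:3*H])        # output gate
    g = np.tanh(z[3*H:4*H])        # candidate cell state
    c_new = f * c + i * g          # long-term memory update
    h_new = o * np.tanh(c_new)     # short-term (hidden) output
    return h_new, c_new
```

The additive cell update `f * c + i * g` is what lets gradients survive long lags, which is exactly the property nowcasting with mixed-frequency indicators needs.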
  8. By: Junran Wu; Ke Xu; Xueyuan Chen; Shangzhe Li; Jichang Zhao
    Abstract: Stock prediction, with the purpose of forecasting the future price trends of stocks, is crucial for maximizing profits from stock investments. While great research efforts have been devoted to exploiting deep neural networks for improved stock prediction, the existing studies still suffer from two major issues. First, the long-range dependencies in time series are not sufficiently captured. Second, the chaotic property of financial time series fundamentally lowers prediction performance. In this study, we propose a novel framework to address both issues regarding stock prediction. Specifically, in terms of transforming time series into complex networks, we convert market price series into graphs. Then, structural information, referring to associations among temporal points and the node weights, is extracted from the mapped graphs to resolve the problems regarding long-range dependencies and the chaotic property. We take graph embeddings to represent the associations among temporal points as the prediction model inputs. Node weights are used as a priori knowledge to enhance the learning of temporal attention. The effectiveness of our proposed framework is validated using real-world stock data, and our approach obtains the best performance among several state-of-the-art benchmarks. Moreover, in the conducted trading simulations, our framework further obtains the highest cumulative profits. Our results supplement the existing applications of complex network methods in the financial realm and provide insightful implications for investment applications regarding decision support in financial markets.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.02522&r=
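The abstract does not spell out which series-to-graph mapping is used, so as a hypothetical illustration of the idea, here is the natural visibility graph, a standard way to turn a price series into a complex network: two time points are linked if they can "see" each other over all intermediate prices.

```python
def visibility_edges(series):
    """Natural visibility graph: nodes are time points; (a, b) are connected
    if every intermediate point lies strictly below the straight line
    joining (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            if all(series[c] < ya + (yb - ya) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges
```

The resulting edge list (plus node weights such as degree) is the kind of structural input a graph-embedding prediction model consumes.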
  9. By: Da Zhang; Qingyi Wang; Shaojie Song; Simiao Chen; Mingwei Li; Lu Shen; Siqi Zheng; Bofeng Cai; Shenhao Wang
    Abstract: Estimating health benefits of reducing fossil fuel use from improved air quality provides important rationales for carbon emissions abatement. Simulating pollution concentration is a crucial step of the estimation, but traditional approaches often rely on complicated chemical transport models that require extensive expertise and computational resources. In this study, we develop a novel and succinct machine learning framework that is able to provide precise and robust annual average fine particle (PM2.5) concentration estimations directly from a high-resolution fossil energy use data set. The accessibility and applicability of this framework show great potentials of machine learning approaches for integrated assessment studies. Applications of the framework with Chinese data reveal highly heterogeneous health benefits of reducing fossil fuel use in different sectors and regions in China with a mean of $34/tCO2 and a standard deviation of $84/tCO2. Reducing rural and residential coal use offers the highest co-benefits with a mean of $360/tCO2. Our findings prompt careful policy designs to maximize cost-effectiveness in the transition towards a carbon-neutral energy system.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.14318&r=
  10. By: Pierre-Philippe Combes (Institut d'Études Politiques [IEP] - Paris, CNRS - Centre National de la Recherche Scientifique); Laurent Gobillon (PSE - Paris School of Economics - ENPC - École des Ponts ParisTech - ENS Paris - École normale supérieure - Paris - PSL - Université Paris sciences et lettres - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique - EHESS - École des hautes études en sciences sociales - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Paris 1 Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - PSL - Université Paris sciences et lettres - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Yanos Zylberberg (University of Bristol [Bristol])
    Abstract: A recent literature has used a historical perspective to better understand fundamental questions of urban economics. However, a wide range of historical documents of exceptional quality remain underutilised: their use has been hampered by their original format or by the massive amount of information to be recovered. In this paper, we describe how and when the flexibility and predictive power of machine learning can help researchers exploit the potential of these historical documents. We first discuss how important questions of urban economics rely on the analysis of historical data sources and the challenges associated with transcription and harmonisation of such data. We then explain how machine learning approaches may address some of these challenges and we discuss possible applications.
    Keywords: urban economics,history,machine learning
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-03231786&r=
  11. By: Matus, Kira; Veale, Michael
    Abstract: Forthcoming (open access) in Regulation and Governance. The increasing deployment of machine learning systems has raised many concerns about their varied negative societal impacts. Notable among policy proposals to mitigate these issues is the notion that (some) machine learning systems should be certified. In this paper, we illustrate how recent approaches to certifying machine learning may be building upon the wrong foundations and examine what better foundations may look like. While prominent approaches to date have centered on networking standards initiatives led by organizations including the IEEE or ISO, we argue that machine learning certification may be better grounded in the very different institutional structures found in the sustainability domain. We first illustrate how policy challenges of machine learning and sustainability have significant structural similarities. Like many commodities, machine learning is characterized by difficult- or impossible-to-observe credence properties, such as the characteristics of data collection, or carbon emissions from model training, as well as value chain issues, such as emerging core-periphery inequalities, networks of labor, and fragmented and modular value creation. We examine how focusing on networking standards, as is currently done, is likely to fail as a method to govern the credence properties of machine learning. While networking standards typically draw their adoption and enforcement from a functional need to conform in order to participate in a network, salient policy issues in machine learning benefit from no such dynamic. Finally, we apply existing research on certification systems for sustainability to the qualities and challenges of machine learning to generate lessons across the two, aiming to inform design considerations for emerging regimes.
    Date: 2021–06–02
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:pm3wy&r=
  12. By: Lukas Gonon
    Abstract: This article investigates the use of random feature neural networks for learning Kolmogorov partial (integro-)differential equations associated to Black-Scholes and more general exponential Lévy models. Random feature neural networks are single-hidden-layer feedforward neural networks in which only the output weights are trainable. This makes training particularly simple, but (a priori) reduces expressivity. Interestingly, this is not the case for Black-Scholes type PDEs, as we show here. We derive bounds for the prediction error of random neural networks for learning sufficiently non-degenerate Black-Scholes type models. A full error analysis is provided and it is shown that the derived bounds do not suffer from the curse of dimensionality. We also investigate an application of these results to basket options and validate the bounds numerically. These results prove that neural networks are able to learn solutions to Black-Scholes type PDEs without the curse of dimensionality. In addition, this provides an example of a relevant learning problem in which random feature neural networks are provably efficient.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.08900&r=
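Because only the output weights are trainable, fitting a random feature network reduces to a linear least-squares problem. A minimal sketch (ReLU features with Gaussian random hidden weights; the specific activation and sampling distribution here are illustrative choices, not taken from the paper):

```python
import numpy as np

def random_feature_fit(X, y, width=300, seed=0):
    """Freeze random hidden-layer weights; train only the linear readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], width))          # fixed, never trained
    b = rng.uniform(-np.pi, np.pi, size=width)        # fixed random biases
    Phi = np.maximum(X @ W + b, 0.0)                  # random ReLU features
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # linear least squares
    return W, b, beta

def random_feature_predict(X, W, b, beta):
    return np.maximum(X @ W + b, 0.0) @ beta
```

Training cost is one least-squares solve, which is why the error bounds of the paper translate into a genuinely cheap method in practice.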
  13. By: Juan Manuel Dodero (School of Engineering - University of Cadiz)
    Abstract: This report identifies building blocks of master programs on Artificial Intelligence (AI), on the basis of the existing programs available in the European Union. These building blocks provide a first analysis that requires acceptance and sharing by the AI community. The proposal analyses first, the knowledge contents, and second, the educational competences declared as the learning outcomes, of 45 post-graduate academic masters’ programs related with AI from universities in 13 European countries (Belgium, Denmark, Finland, France, Germany, Italy, Ireland, Netherlands, Portugal, Spain, and Sweden in the EU; plus Switzerland and the United Kingdom). As a closely related and relevant part of Informatics and Computer Science, major AI-related curricula on data science have been also taken into consideration for the analysis. The definition of a specific AI curriculum besides data science curricula is motivated by the necessity of a deeper understanding of topics and skills of the former that build up the foundations of strong AI versus narrow AI, which is the general focus of the latter. The body of knowledge with the proposed building blocks for AI consists of a number of knowledge areas, which are classified as Essential, Core, General and Applied. First, the AI Essentials cover topics and competences from foundational disciplines that are fundamental to AI. Second, topics and competences showing a close interrelationship and specific of AI are classified in a set of AI Core domain-specific areas, plus one AI General area for non-domain-specific knowledge. Third, AI Applied areas are built on top of topics and competences required to develop AI applications and services under a more philosophical and ethical perspective. All the knowledge areas are refined into knowledge units and topics for the analysis. 
As a result of studying the core AI knowledge topics in the sampled master programs, machine learning is observed to prevail, followed in order by: computer vision; human-computer interaction; knowledge representation and reasoning; natural language processing; planning, search and optimisation; and robotics and intelligent automation. A significant number of the master programs analysed are strongly focused on machine learning topics, despite being initially classified in another domain. It is noteworthy that machine learning topics, along with selected topics on knowledge representation, show a high degree of commonality between AI and data science programs. Finally, the competence-based analysis of the sampled master programs' learning outcomes, based on Bloom's cognitive levels, shows that the understanding and creating cognitive levels are dominant, while analysing and evaluating are the scarcest. Another relevant outcome is that master programs on AI under the disciplinary lens of engineering studies show a notable scarcity of competences related to informatics or computing, which are fundamental to AI.
    Keywords: artificial intelligence, competence-based curriculum, master program, higher education, digital skills
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc123713&r=
  14. By: Nikolay Ivanov; Qiben Yan
    Abstract: The Foreign Exchange (Forex) market is a large decentralized market, on which trading analysis and algorithmic trading are popular. Research efforts have focused on proving the efficiency of certain technical indicators. We demonstrate, however, that the values of indicator functions are not reproducible and often reduce the number of trade opportunities, compared to price-action trading. In this work, we develop two dataset-agnostic Forex trading heuristic templates with a high rate of trading signals. In order to determine the optimal parameters for the given heuristic prototypes, we perform a machine learning simulation of 10 years of Forex price data over three low-margin instruments and six different OHLC granularities. As a result, we develop a specific and reproducible list of the best trade parameters found for each instrument-granularity pair, with 118 pips of average daily profit for the optimized configuration.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.14194&r=
  15. By: Jaller, Miguel; Rodier, Caroline; Zhang, Michael; Lin, Huachao; Lewis, Kathryn
    Abstract: There is a need to optimally allocate curb space (one of the scarcest resources in urban areas) to the different and growing needs of passenger and freight transport. Although there are plenty of linear miles of curbside space in every city, the growing adoption of ride-hailing services, the rise of e-commerce with its residential deliveries, and the increased number of micro-mobility services have increased pressure on the already saturated transportation system. Traditional curbside planning strategies have relied on land-use based demand estimates to allocate access priority to the curb (e.g., pedestrian and transit for residential areas, commercial vehicles for commercial and industrial zones). In some locales, new guidelines provide ideas on flexible curbside management, but lack the systems to gather and analyze the data, and to optimally and dynamically allocate the space to the different users and needs. This study conducted a comprehensive literature review on several topics related to curb space management, discussing various users (e.g., pedestrians, bicycles, transit, taxis, and commercial freight vehicles), summarizing different experiences, and focusing the discussion on Complete Street strategies. Moreover, the authors reviewed the academic literature on curbside and parking data collection, and on simulation and optimization techniques. Considering a case study around the downtown area of San Francisco, the authors evaluated the performance of the system with respect to a number of parking behavior scenarios. In doing so, the authors developed a parking simulation in SUMO following a set of parking behaviors (e.g., parking search, parking with off-street parking information availability, double-parking). These scenarios were tested in three different (land use-based) sub-study areas representing residential, commercial and mixed-use zones.
    Keywords: Engineering, Social and Behavioral Sciences, Parking, curbside management, simulation, congestion, emissions, travel distances
    Date: 2021–06–01
    URL: http://d.repec.org/n?u=RePEc:cdl:itsdav:qt3jn371hw&r=
  16. By: Paul de Guibert; Behrang Shirizadeh; Philippe Quirion (CIRED - Centre International de Recherche sur l'Environnement et le Développement - Cirad - Centre de Coopération Internationale en Recherche Agronomique pour le Développement - EHESS - École des hautes études en sciences sociales - AgroParisTech - ENPC - École des Ponts ParisTech - Université Paris-Saclay - CNRS - Centre National de la Recherche Scientifique)
    Abstract: Optimizing an energy system model featuring a large proportion of variable (non-dispatchable) renewable energy requires a fine temporal resolution and a long period of weather data to provide robust results. Many models are optimized over a limited set of 'representative' periods (e.g. weeks) but this precludes a realistic representation of long-term energy storage. To tackle this issue, we introduce a new method based on a variable time-step. Critical periods that may be important for dimensioning part of the electricity system are defined, during which we use an hourly temporal resolution. For the other periods, the temporal resolution is coarser. This method brings very accurate results in terms of system cost, curtailment, storage losses and installed capacity, even though the optimization time is reduced by a factor of around 60. Results are less accurate for battery volume. We conclude that further research into this 'variable time-step' method would be worthwhile.
    Keywords: complexity reduction,time series aggregation,renewable energies,computational tractability,Energy system model
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03100309&r=
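The variable time-step idea (hourly resolution inside critical periods, coarse blocks elsewhere) can be sketched as a simple aggregation that returns representative values together with the hours each one stands for. This is an illustrative reduction of the input profile only; the paper applies the idea inside a full optimization model.

```python
import numpy as np

def variable_timestep(load, critical, coarse=24):
    """Aggregate an hourly profile: keep hourly resolution wherever
    critical[t] is True; average non-critical hours into blocks of up to
    `coarse` hours. Returns (values, weights) so that
    sum(values * weights) equals load.sum() (energy is conserved)."""
    values, weights = [], []
    t, n = 0, len(load)
    while t < n:
        if critical[t]:
            values.append(float(load[t])); weights.append(1.0); t += 1
        else:
            block = []
            while t < n and not critical[t] and len(block) < coarse:
                block.append(load[t]); t += 1
            values.append(float(np.mean(block))); weights.append(float(len(block)))
    return np.array(values), np.array(weights)
```

Time steps outside the dimensioning periods collapse into a few weighted values, which is where the roughly 60-fold speed-up in the paper comes from.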
  17. By: Koya Ishikawa; Kazuhide Nakata
    Abstract: In recent years, a wide range of investment models have been created using artificial intelligence. Automatic trading by artificial intelligence can expand the range of trading methods, such as by conferring the ability to operate 24 hours a day and the ability to trade with high frequency. Automatic trading can also be expected to trade with more information than is available to humans if it can sufficiently consider past data. In this paper, we propose an investment agent based on a deep reinforcement learning model, which is an artificial intelligence model. The model considers the transaction costs involved in actual trading and creates a framework for trading over a long period of time so that it can make a large profit on a single trade. In doing so, it can maximize the profit while keeping transaction costs low. In addition, in consideration of actual operations, we use online learning so that the system can continue to learn by constantly updating the latest online data instead of learning with static data. This makes it possible to trade in non-stationary financial markets by always incorporating current market trend information.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.03035&r=
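The core accounting the agent must optimize is gross return minus a proportional cost on every position change. A minimal sketch of that net-PnL computation (the cost model is the usual proportional one; the paper's exact specification may differ):

```python
import numpy as np

def strategy_pnl(positions, returns, cost=1e-4):
    """Per-period net PnL of a position series: position times return,
    minus a proportional transaction cost on each change in position."""
    positions = np.asarray(positions, dtype=float)
    returns = np.asarray(returns, dtype=float)
    turnover = np.abs(np.diff(positions, prepend=0.0))  # includes entry trade
    return positions * returns - cost * turnover
```

Using this net quantity as the reinforcement learning reward is what pushes the agent towards fewer, longer trades, as the abstract describes.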
  18. By: Jorge Guijarro-Ordonez; Markus Pelger; Greg Zanotti
    Abstract: Statistical arbitrage identifies and exploits temporal price differences between similar assets. We propose a unifying conceptual framework for statistical arbitrage and develop a novel deep learning solution, which finds commonality and time-series patterns from large panels in a data-driven and flexible way. First, we construct arbitrage portfolios of similar assets as residual portfolios from conditional latent asset pricing factors. Second, we extract the time series signals of these residual portfolios with one of the most powerful machine learning time-series solutions, a convolutional transformer. Last, we use these signals to form an optimal trading policy that maximizes risk-adjusted returns under constraints. We conduct a comprehensive empirical comparison study with daily large-cap U.S. stocks. Our optimal trading strategy obtains a consistently high out-of-sample Sharpe ratio and substantially outperforms all benchmark approaches. It is orthogonal to common risk factors, and exploits asymmetric local trend and reversion patterns. Our strategies remain profitable after taking into account trading frictions and costs. Our findings suggest a high compensation for arbitrageurs to enforce the law of one price.
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.04028&r=
  19. By: Christophe Schalck; Meryem Schalck
    Abstract: The aim of this study is to provide new insights into French small and medium-sized enterprise (SME) failure prediction using a unique database of French SMEs over the 2012–2018 period, including both financial and nonfinancial variables. We also include text variables related to the type of activity. We compare the predictive performance of three estimation methods: a dynamic Probit model, logistic Lasso regression, and the XGBoost algorithm. The results show that the XGBoost algorithm has the highest performance in predicting business failure from a broad dataset. We use SHAP values to interpret the results and identify the main factors of failure. Our analysis shows that both financial and nonfinancial variables are failure factors. Our results confirm the role of financial variables in predicting business failure, while self-employment is the factor that most strongly increases the probability of failure. The size of the SME is also a business failure factor. Our results show that a number of nonfinancial variables, such as localization and economic conditions, are drivers of SME failure. The results also show that certain activities are associated with a lower predicted failure probability, while others are associated with a higher one.
    Keywords: SME; failure prediction; Machine learning; XGBoost; SHAP values
    JEL: G33 C41 C46
    Date: 2021–01–01
    URL: http://d.repec.org/n?u=RePEc:ipg:wpaper:2021-009&r=
  20. By: Zhengqing Zhou; Guanyang Wang; Jose Blanchet; Peter W. Glynn
    Abstract: We propose a new unbiased estimator for estimating the utility of the optimal stopping problem. The MUSE, short for `Multilevel Unbiased Stopping Estimator', constructs the unbiased Multilevel Monte Carlo (MLMC) estimator at every stage of the optimal stopping problem in a backward recursive way. In contrast to traditional sequential methods, the MUSE can be implemented in parallel when multiple processors are available. We prove the MUSE has finite variance, finite computational complexity, and achieves $\varepsilon$-accuracy with $O(1/\varepsilon^2)$ computational cost under mild conditions. We demonstrate MUSE empirically in several numerical examples, including an option pricing problem with high-dimensional inputs, which illustrates the use of the MUSE on computer clusters.
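The MUSE itself is not reproduced here, but the randomized-truncation idea behind such unbiased estimators can be sketched in a toy setting: estimating e = Σ_k 1/k! without ever summing the whole series, by drawing a random truncation level and reweighting the selected term by its probability. The series, level distribution, and sample size below are illustrative choices, not the paper's.

```python
import math
import random

def single_term_estimator(rng):
    """Unbiased estimate of e = sum_{k>=0} 1/k!:
    draw a level K with P(K = k) = 2**-(k+1) (geometric), then return
    the K-th term divided by that probability. Its expectation is the
    full infinite sum, even though only one term is ever evaluated --
    the randomized-truncation trick underlying unbiased multilevel
    Monte Carlo estimators."""
    k = 0
    while rng.random() < 0.5:
        k += 1
    return (1.0 / math.factorial(k)) * 2 ** (k + 1)

rng = random.Random(0)
n = 100_000
estimate = sum(single_term_estimator(rng) for _ in range(n)) / n
```

Because each draw is independent, the n replications could be farmed out to separate processors, which is the parallelism advantage the abstract highlights for the MUSE.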
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.02263&r=
  21. By: Kaukin Andrey (Russian Presidential Academy of National Economy and Public Administration); Kosarev Vladimir (Russian Presidential Academy of National Economy and Public Administration)
    Abstract: This paper analyzes the possibilities of using convolutional and recurrent neural networks to predict the indices of industrial production of the Russian economy. Since the indices are asymmetric in periods of growth and decline, it was hypothesized that nonlinear methods would improve the quality of the forecast relative to linear ones.
    Keywords: convolutional neural networks, recurrent neural networks
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:rnp:wpaper:s21105&r=
  22. By: Kenneth W. Clements (Business School, The University of Western Australia); Marc Jim M. Mariano (KPMG Economics); George Verikios (KPMG Economics and Griffith University)
    Abstract: The simplicity and parsimony of the linear expenditure system (LES) of consumer demand accounts for its influence and popularity in numerous applications. But the model struggles to deal adequately with heterogeneity mainly because of its linear Engel curves. In this paper we deal with the issue by disaggregating consumers according to their income. Application to a large Australian database reveals (i) noticeable differences in demand responses are masked when the LES is constrained to have the same parameters across the income distribution; and (ii) a substantial improvement in the fit of the disaggregated model. The disaggregated demand system is then embedded in a CGE model to give it microsimulation capabilities. Stochastic simulations of over a 30-year horizon demonstrate the disaggregated approach is a significant channel of long-term structural change.
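The LES demand functions have a well-known closed form: with subsistence quantities γ_i and marginal budget shares β_i (Σ β_i = 1), demand is q_i = γ_i + (β_i / p_i)(y − Σ_j p_j γ_j). A minimal sketch (all parameter values hypothetical) also makes the source of the model's rigidity visible: ∂q_i/∂y = β_i/p_i is constant, i.e. Engel curves are linear, which is what the paper's income-group disaggregation works around.

```python
def les_demands(prices, gammas, betas, income):
    """Linear expenditure system: q_i = gamma_i + (beta_i / p_i) *
    (income - sum_j p_j * gamma_j). Consumers first buy subsistence
    quantities gamma_i, then split the remaining ("supernumerary")
    income across goods in fixed shares beta_i."""
    subsistence_cost = sum(p * g for p, g in zip(prices, gammas))
    supernumerary = income - subsistence_cost
    return [g + (b / p) * supernumerary
            for p, g, b in zip(prices, gammas, betas)]

prices = [2.0, 5.0, 1.5]
gammas = [1.0, 0.5, 2.0]   # hypothetical subsistence quantities
betas = [0.5, 0.3, 0.2]    # hypothetical marginal budget shares (sum to 1)
quantities = les_demands(prices, gammas, betas, 100.0)
```

A quick sanity check is that expenditure exhausts the budget: Σ p_i q_i = y, which holds for any parameters with Σ β_i = 1.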
    Keywords: Consumer behaviour, heterogeneity, linear expenditure system, Engel curves, CGE models, structural change
    JEL: D12 C31 C68
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:uwa:wpaper:21-10&r=
  23. By: Ali Hirsa; Joerg Osterrieder; Branka Hadji-Misheva; Jan-Alexander Posth
    Abstract: Financial trading has been widely analyzed for decades with market participants and academics always looking for advanced methods to improve trading performance. Deep reinforcement learning (DRL), a recently reinvigorated method with significant success in multiple domains, still has to show its benefit in the financial markets. We use a deep Q-network (DQN) to design long-short trading strategies for futures contracts. The state space consists of volatility-normalized daily returns, with buying or selling being the reinforcement learning action and the total reward defined as the cumulative profits from our actions. Our trading strategy is trained and tested both on real and simulated price series and we compare the results with an index benchmark. We analyze how training based on a combination of artificial data and actual price series can be successfully deployed in real markets. The trained reinforcement learning agent is applied to trading the E-mini S&P 500 continuous futures contract. Our results in this study are preliminary and need further improvement.
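The reward structure described — the position as the action, cumulative profit as the reward — can be sketched without the DQN itself. In the paper a deep Q-network chooses the actions; here they are hard-coded, and the transaction-cost parameter is a hypothetical value, not the paper's.

```python
def cumulative_reward(returns, actions, cost=0.001):
    """Cumulative P&L of a long/short strategy: each period's reward is
    the position (+1 long, -1 short) times the realized return, minus a
    proportional transaction cost whenever the position changes.
    This is the quantity a reinforcement learning agent would maximize."""
    prev_position = 0  # start flat
    total = 0.0
    for r, a in zip(returns, actions):
        total += a * r - cost * abs(a - prev_position)
        prev_position = a
    return total

# Hypothetical volatility-normalized daily returns and agent actions.
pnl = cumulative_reward([0.01, -0.02, 0.03], [1, -1, 1])
```

Each period contributes position × return minus the cost of any position change (here 0.009 + 0.018 + 0.028), so frequent flipping is penalized exactly as in a real futures account.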
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.08437&r=
  24. By: Liu Ziyin; Kentaro Minami; Kentaro Imajo
    Abstract: The main task we consider is portfolio construction in a speculative market, a fundamental problem in modern finance. While various empirical works now exist to explore deep learning in finance, the theory side is almost non-existent. In this work, we focus on developing a theoretical framework for understanding the use of data augmentation for deep-learning-based approaches to quantitative finance. The proposed theory clarifies the role and necessity of data augmentation for finance; moreover, our theory motivates a simple algorithm of injecting a random noise of strength $\sqrt{|r_{t-1}|}$ to the observed return $r_{t}$. This algorithm is shown to work well in practice.
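The proposed augmentation is simple enough to state directly: perturb each observed return r_t by noise whose strength is √|r_{t-1}|. A minimal sketch follows; the scale constant and the handling of the first observation (which has no predecessor) are our assumptions, not the paper's.

```python
import math
import random

def augment_returns(returns, scale=0.1, seed=0):
    """Data augmentation for a return series: add Gaussian noise of
    standard deviation scale * sqrt(|r_{t-1}|) to each return r_t,
    so noisier (higher-|return|) periods get stronger perturbation.
    The first return is left unchanged (no predecessor)."""
    rng = random.Random(seed)
    augmented = [returns[0]]
    for prev, r in zip(returns, returns[1:]):
        augmented.append(r + rng.gauss(0.0, scale * math.sqrt(abs(prev))))
    return augmented

augmented = augment_returns([0.01, -0.02, 0.015, 0.03])
```

In training, each pass over the data would draw fresh noise, giving the network many perturbed copies of the same return history.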
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.04114&r=
  25. By: Sokbae Lee; Yuan Liao; Myung Hwan Seo; Youngki Shin
    Abstract: We develop a new method of online inference for a vector of parameters estimated by the Polyak-Ruppert averaging procedure of stochastic gradient descent (SGD) algorithms. We leverage insights from time series regression in econometrics and construct asymptotically pivotal statistics via random scaling. Our approach is fully operational with online data and is rigorously underpinned by a functional central limit theorem. Our proposed inference method has a couple of key advantages over the existing methods. First, the test statistic is computed in an online fashion with only SGD iterates and the critical values can be obtained without any resampling methods, thereby allowing for efficient implementation suitable for massive online data. Second, there is no need to estimate the asymptotic variance and our inference method is shown to be robust to changes in the tuning parameters for SGD algorithms in simulation experiments with synthetic data.
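The Polyak-Ruppert procedure the inference targets is simply averaged SGD. A toy version on a noisy quadratic is sketched below; the objective, step-size schedule, and noise level are illustrative, and the paper's random-scaling test statistic itself is omitted.

```python
import random

def averaged_sgd(theta=3.0, steps=20_000, seed=0):
    """SGD on f(x) = (x - theta)**2 with noisy gradients. Returns both
    the last iterate and the Polyak-Ruppert average of all iterates;
    the average converges faster and is the estimator on which the
    online inference is built."""
    rng = random.Random(seed)
    x = 0.0
    running_sum = 0.0
    for t in range(1, steps + 1):
        grad = 2.0 * (x - theta) + rng.gauss(0.0, 1.0)  # unbiased noisy gradient
        x -= grad / t ** 0.7                            # slowly decaying step size
        running_sum += x
    return x, running_sum / steps

last_iterate, average = averaged_sgd()
```

Both the running sum and (in the paper) the test statistic are updated from the SGD iterates alone, which is why the whole procedure works in a single online pass over massive data.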
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2106.03156&r=
  26. By: Mathieu Mercadier (LAPE, Université de Limoges); Jean-Pierre Lardy
    Abstract: Credit Default Swap (CDS) levels provide a market appreciation of companies' default risk. These derivatives are not always available, creating a need for CDS approximations. This paper offers a simple, global and transparent CDS structural approximation, which contrasts with more complex and proprietary approximations currently in use. This Equity-to-Credit formula (E2C), inspired by CreditGrades, obtains better CDS approximations, according to empirical analyses based on a large sample spanning 2016-2018. A random forest regression run with this E2C formula and selected additional financial data results in an 87.3% out-of-sample accuracy in CDS approximations. The transparency property of this algorithm confirms the predominance of the E2C estimate, and the impact of companies' debt rating and size, in predicting their CDS.
    Keywords: Structural Model, Finance, Random Forests, Credit Default Swaps, Risk Analysis
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03241566&r=
  27. By: Guopeng Song; Roel Leus
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ete:kbiper:675543&r=
  28. By: Giovanni Dosi (Scuola Superiore Sant'Anna, Pisa (Italy)); Francesco Lamperti (Institute of Economics and EMbeDS, Scuola Superiore Sant'Anna; RFF-CMCC European Institute on Economics and the Environment); Mariana Mazzucato (Institute for Public Purpose and Policy, University College London (London, UK)); Mauro Napoletano (Université Côte d'Azur, CNRS, GREDEG, France; SKEMA Business School; OFCE Sciences-Po); Andrea Roventini (Institute of Economics and EMbeDS, Scuola Superiore Sant'Anna; Sciences Po, OFCE)
    Abstract: We study the impact of alternative innovation policies on the short- and long-run performance of the economy, as well as on public finances, extending the Schumpeter meeting Keynes agent-based model (Dosi et al., 2010). In particular, we consider market-based innovation policies such as R&D subsidies to firms and tax discounts on investment, and direct policies akin to the "Entrepreneurial State" (Mazzucato, 2013), involving the creation of public research-oriented firms diffusing technologies along specific trajectories, and funding a Public Research Lab conducting basic research to achieve radical innovations that enlarge the technological opportunities of the economy. Simulation results show that all policies improve productivity and GDP growth, but the best outcomes are achieved by active discretionary State policies, which are also able to crowd in private investment and have positive hysteresis effects on growth dynamics. For the same size of public resources allocated to market-based interventions, "Mission" innovation policies deliver significantly better aggregate performance if the government is patient enough and willing to bear the intrinsic risks related to innovative activities.
    Keywords: Innovation policy, mission-oriented R&D, entrepreneurial state, agent-based modelling
    JEL: O33 O38 O31 O40 C63
    Date: 2021–06
    URL: http://d.repec.org/n?u=RePEc:gre:wpaper:2021-25&r=

This nep-cmp issue is ©2021 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.