New Economics Papers on Computational Economics
Issue of 2020‒03‒30
eighteen papers chosen by
By: | Ziming Gao; Yuan Gao; Yi Hu; Zhengyong Jiang; Jionglong Su |
Abstract: | Machine learning algorithms and neural networks are widely applied to many different areas such as stock market prediction, face recognition and population analysis. This paper introduces a strategy for portfolio management in the stock market based on the classic deep reinforcement learning algorithm Deep Q-Network (DQN), a deep neural network optimized by Q-learning. To make the DQN suitable for financial markets, we first discretize the action space, defined as the portfolio weights across assets, so that portfolio management becomes a problem the DQN can solve. Next, we combine a convolutional neural network with a dueling Q-network to enhance the recognition ability of the algorithm. Experimentally, we choose five weakly correlated American stocks to test the model. The results demonstrate that the DQN-based strategy outperforms ten other traditional strategies: the profit of the DQN algorithm is 30% higher than that of the other strategies. Moreover, the Sharpe ratio, together with the maximum drawdown, indicates that the policy produced by the DQN carries the lowest risk. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.06365&r=all |
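To make the entry above concrete, the sketch below enumerates a discretized action space of long-only portfolio weights, the key step that turns continuous portfolio allocation into a finite-action problem a DQN can handle. The grid step and the five-asset setting are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: discretizing portfolio weights so that a finite-action
# Q-learning agent (e.g. a DQN) can choose among them. The step size is
# an assumption; the paper's exact discretization is not reproduced here.
from itertools import product
import numpy as np

def discrete_weight_actions(n_assets: int, steps: int):
    """Enumerate all long-only weight vectors on a grid of size 1/steps
    that sum to one; each vector is one discrete action for the agent."""
    grid = range(steps + 1)
    actions = [np.array(w) / steps
               for w in product(grid, repeat=n_assets)
               if sum(w) == steps]
    return np.stack(actions)

actions = discrete_weight_actions(n_assets=5, steps=4)
print(actions.shape)   # (70, 5): C(4+5-1, 5-1) = 70 candidate allocations
```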
By: | Ermanno Catullo (Research Department, Link Campus University, Rome, Italy); Mauro Gallegati (Department of Management, Università Politecnica delle Marche, Ancona, Italy); Alberto Russo (Department of Management, Università Politecnica delle Marche, Ancona, Italy and Department of Economics, Universitat Jaume I, Castellón, Spain)
Abstract: | The aim of this paper is to investigate how different degrees of sophistication in agents’ behavioural rules may affect individual and macroeconomic performance. In particular, we analyze the effects of introducing into an agent-based macro model firms that are able to formulate effective sales forecasts by using machine learning. These techniques provide predictions that are unbiased and reasonably accurate, especially in the case of a genetic algorithm. We observe that machine learning allows firms to increase profits, though this results in a declining wage share and a smaller long-run growth rate. Moreover, the predictive methods are able to formulate expectations that remain unbiased when shocks are not massive, thus providing firms with forecasting capabilities that, to a certain extent, may be consistent with the Lucas Critique. |
Keywords: | agent-based model, machine learning, genetic algorithm, forecasting, policy shocks |
JEL: | C63 D84 E32 E37 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:jau:wpaper:2020/17&r=all |
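As a rough illustration of the forecasting machinery described in the entry above, the following toy genetic algorithm evolves the coefficients of a one-step-ahead autoregressive sales forecast by minimizing squared forecast error. Population size, mutation scale and the two-lag specification are illustrative assumptions; the paper's agent-based implementation is considerably richer.

```python
# Hedged sketch (not the authors' code): a minimal genetic algorithm that
# evolves the coefficients of a one-step-ahead autoregressive sales forecast.
import numpy as np

rng = np.random.default_rng(0)
sales = np.cumsum(rng.normal(1.0, 0.5, 200))            # toy sales history
X = np.column_stack([sales[1:-1], sales[:-2]])           # two lags
y = sales[2:]

def fitness(coefs):                                      # negative mean squared error
    pred = X @ coefs[:2] + coefs[2]
    return -np.mean((pred - y) ** 2)

pop = rng.normal(size=(50, 3))                           # 50 candidate coefficient vectors
for _ in range(100):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-25:]]              # keep the fitter half
    mates = parents[rng.integers(0, 25, size=25)]
    mask = rng.random((25, 3)) < 0.5
    # uniform crossover plus Gaussian mutation
    children = np.where(mask, parents, mates) + rng.normal(0, 0.05, (25, 3))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("evolved forecast coefficients:", best)
```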
By: | Rogerio Silva Mattos (Universidade Federal de Juiz de Fora) |
Abstract: | Advancing artificial intelligence draws most of its power from the artificial neural network, a software technique that has successfully replicated some information-processing functions of the human brain and the unconscious mind. Jobs are at risk of disappearing because even the tacit knowledge typically used by humans to perform complex tasks is now amenable to computerization. The paper discusses the implications of this technology for capitalism and jobs, concluding that a very long-run transition to a jobless economy should not be discarded. Rising business models and new collaborative schemes provide clues for how things may unfold. A scenario in which society is close to full unemployment is analyzed, and strategic paths to tackle the challenges involved are discussed. The analysis follows an eclectic approach, based on the Marxist theory of historical materialism and the job task model created by mainstream economists. |
Keywords: | artificial intelligence, historical materialism, task model, neural networks, jobless society |
Date: | 2019–01–01 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02502178&r=all |
By: | Ayman Chaouki; Stephen Hardiman; Christian Schmidt; Emmanuel Sérié; Joachim de Lataillade
Abstract: | Can deep reinforcement learning algorithms be exploited as solvers for optimal trading strategies? The aim of this work is to test reinforcement learning algorithms on conceptually simple, but mathematically non-trivial, trading environments. The environments are chosen such that an optimal or close-to-optimal trading strategy is known. We study the deep deterministic policy gradient algorithm and show that such a reinforcement learning agent can successfully recover the essential features of the optimal trading strategies and achieve close-to-optimal rewards. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.06497&r=all |
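In the spirit of the "conceptually simple, but mathematically non-trivial" environments mentioned in the entry above, here is a minimal sketch of a mean-reverting trading environment with quadratic trading costs against which a reinforcement learning agent (e.g. deep deterministic policy gradient) could be trained. The Ornstein-Uhlenbeck dynamics, cost parameter and benchmark policy are assumptions for illustration, not the paper's exact setups.

```python
# Hedged sketch: a toy mean-reverting trading environment with a quadratic
# trading cost. Parameters and rewards are illustrative only.
import numpy as np

class OUTradingEnv:
    def __init__(self, kappa=0.1, sigma=0.2, cost=0.01, horizon=500, seed=0):
        self.kappa, self.sigma, self.cost, self.horizon = kappa, sigma, cost, horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t, self.price, self.position = 0, 0.0, 0.0
        return np.array([self.price, self.position])

    def step(self, target_position):
        trade = target_position - self.position
        new_price = self.price - self.kappa * self.price + self.sigma * self.rng.normal()
        # P&L of the held position minus a quadratic cost on the executed trade
        reward = target_position * (new_price - self.price) - self.cost * trade ** 2
        self.price, self.position, self.t = new_price, target_position, self.t + 1
        return np.array([self.price, self.position]), reward, self.t >= self.horizon

env = OUTradingEnv()
state, done, total = env.reset(), False, 0.0
while not done:                        # naive contrarian benchmark policy
    state, r, done = env.step(-state[0])
    total += r
print("episode reward of the benchmark policy:", round(total, 3))
```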
By: | Luca Riccetti (Department of Economics and Law, Università degli Studi di Macerata, Italy); Alberto Russo (Department of Management, Università Politecnica delle Marche, Ancona, Italy and Department of Economics, Universitat Jaume I, Castellón, Spain); Mauro Gallegati (Department of Management, Università Politecnica delle Marche, Ancona, Italy)
Abstract: | We present an agent-based model to study firm-bank credit market interactions in different phases of the business cycle. The business cycle is exogenously set and can give rise to various scenarios. Compared to other models in this strand of the literature, we improve the mechanism through which dividends are distributed, including the possibility of stock repurchases by firms. In addition, we locate firms and banks over a space, and firms may apply for credit from several banks, resulting in a complex spatial network. The model reproduces a long list of stylized facts and their dynamic evolution, as described by the cross-correlations among model variables. The model allows us to test the effectiveness of rules designed by current financial regulation, such as the Basel III countercyclical capital buffer. We find that the effectiveness of this rule changes in different business cycle environments, and this should be considered by policy makers. |
Keywords: | Agent-based modeling, credit network, business cycle, financial regulation, macroprudential policy |
JEL: | C63 E32 E52 G01 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:jau:wpaper:2020/16&r=all |
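A very reduced sketch of the spatial credit-matching ingredient described in the entry above: firms located on a plane request credit from their k nearest banks, and the granted amounts form a firm-bank network. The granting rule, lending capacities and parameters are illustrative assumptions and do not reproduce the model's balance-sheet dynamics.

```python
# Hedged sketch: spatial firm-bank credit matching generating a credit network.
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_banks, k = 100, 10, 3
firm_xy = rng.random((n_firms, 2))
bank_xy = rng.random((n_banks, 2))
demand = rng.lognormal(mean=0.0, sigma=0.5, size=n_firms)
capital = np.full(n_banks, demand.sum() / n_banks)      # simple lending capacity

links = np.zeros((n_firms, n_banks))                    # credit network (amounts granted)
for f in range(n_firms):
    dist = np.linalg.norm(bank_xy - firm_xy[f], axis=1)
    for b in np.argsort(dist)[:k]:                      # ask the k closest banks
        grant = min(demand[f], capital[b])
        links[f, b] += grant
        capital[b] -= grant
        demand[f] -= grant
        if demand[f] <= 0:
            break

print("firms fully funded:", int((demand <= 1e-12).sum()), "of", n_firms)
```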
By: | Christian Bayer; Chiheb Ben Hammouda; Raul Tempone |
Abstract: | When approximating the expectation of a functional of a certain stochastic process, the efficiency and performance of deterministic quadrature methods, and of hierarchical variance reduction methods such as multilevel Monte Carlo (MLMC), deteriorate in different ways when the integrand has low regularity with respect to the input parameters. To overcome this issue, a smoothing procedure is needed to uncover the available regularity and improve the performance of the aforementioned methods. In this work, we consider cases where an analytic smoothing cannot be performed. Thus, we introduce a novel numerical smoothing technique based on root-finding combined with a one-dimensional integration with respect to a single well-chosen variable. We prove that under appropriate conditions, the resulting function of the remaining variables is highly smooth, potentially allowing a higher efficiency of adaptive sparse grid quadrature (ASGQ), in particular when combined with hierarchical representations to treat the high dimensionality effectively. Our study is motivated by option pricing problems and our main focus is on dynamics where a discretization of the asset price is needed. Our analysis and numerical experiments illustrate the advantage of combining numerical smoothing with ASGQ compared to the Monte Carlo method. Furthermore, we demonstrate how numerical smoothing significantly reduces the kurtosis at the deep levels of MLMC and also improves the strong convergence rate. Given a pre-selected tolerance, $\text{TOL}$, this results in an improvement of the complexity from $\mathcal{O}\left(\text{TOL}^{-2.5}\right)$ in the standard case to $\mathcal{O}\left(\text{TOL}^{-2} \log(\text{TOL})^2\right)$. Finally, we show how our numerical smoothing enables MLMC to estimate density functions, which standard MLMC (without smoothing) fails to achieve. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.05708&r=all |
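The core idea of the numerical smoothing above can be illustrated on a digital option written on a two-asset basket: the payoff discontinuity is located by root-finding in one well-chosen Gaussian factor, which is then integrated out exactly, leaving a smooth function of the remaining factor. The parameters and the plain Monte Carlo outer integration below are illustrative assumptions; the paper combines the smoothed integrand with ASGQ and MLMC.

```python
# Hedged sketch of the numerical-smoothing idea for a two-asset digital option.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

S0, r, sigma, T, K = (100.0, 100.0), 0.0, (0.2, 0.3), 1.0, 210.0

def asset(i, z):                                   # terminal GBM price of asset i
    return S0[i] * np.exp((r - 0.5 * sigma[i] ** 2) * T + sigma[i] * np.sqrt(T) * z)

def smoothed_payoff(z1):
    """E[ 1{S1+S2 > K} | Z1=z1 ], computed by locating the kink in z2."""
    f = lambda z2: asset(0, z1) + asset(1, z2) - K
    if f(-12.0) > 0:                               # always in the money for this z1
        return 1.0
    z_star = brentq(f, -12.0, 12.0)                # root-finding step
    return 1.0 - norm.cdf(z_star)                  # exact 1-D integration over Z2

# outer integration over Z1 (plain Monte Carlo here; ASGQ would exploit smoothness)
z1 = np.random.default_rng(0).normal(size=20_000)
print("smoothed digital price:", np.mean([smoothed_payoff(z) for z in z1]))
```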
By: | Fang Cai; Zhaonan Qu; Li Xia; Zhengyuan Zhou |
Abstract: | We study an offline multi-action policy learning algorithm based on doubly robust estimators from causal inference settings, using argmax linear policy function classes. For general policy classes, we establish the connection of the regret bound with a generalization of the VC dimension in higher dimensions and specialize this to prove optimal regret bounds for the argmax linear function class. We also study various optimization approaches to solving the non-smooth, non-convex problem associated with the argmax linear class, including convex relaxation, softmax relaxation, and Bayesian optimization. We find that Bayesian optimization with the Gradient-based Adaptive Stochastic Search (GASS) algorithm consistently outperforms convex relaxation in terms of policy value, and is much faster than softmax relaxation. Finally, we apply the algorithms to a simulated dataset and to the warfarin dataset; on the warfarin dataset the offline algorithm trained using only a subset of features achieves state-of-the-art accuracy. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.07545&r=all |
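The following sketch shows the doubly robust score matrix that underlies the offline policy learning problem above, and how it is used to evaluate a candidate argmax-linear policy. The synthetic data, plug-in nuisance models and absence of cross-fitting are simplifying assumptions for illustration.

```python
# Hedged sketch: doubly robust (DR) scores for offline multi-action policy
# evaluation, and the value of a candidate argmax-linear policy under them.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n, d, K = 2000, 5, 3
X = rng.normal(size=(n, d))
A = rng.integers(0, K, size=n)                                # logged actions
Y = X[:, 0] * (A == 1) + 0.5 * X[:, 1] * (A == 2) + rng.normal(size=n)

# nuisance estimates: propensities e_a(x) and outcome models mu_a(x)
e_hat = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)
mu_hat = np.column_stack([
    LinearRegression().fit(X[A == a], Y[A == a]).predict(X) for a in range(K)
])

# DR score matrix: Gamma[i, a] is an unbiased signal for the value of action a at x_i
Gamma = mu_hat.copy()
Gamma[np.arange(n), A] += (Y - mu_hat[np.arange(n), A]) / e_hat[np.arange(n), A]

def policy_value(theta):
    """Value of the argmax-linear policy pi(x) = argmax_a x @ theta[:, a]."""
    chosen = np.argmax(X @ theta, axis=1)
    return Gamma[np.arange(n), chosen].mean()

print("value of a random argmax-linear policy:", policy_value(rng.normal(size=(d, K))))
```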
By: | Michael C. Knaus |
Abstract: | This paper consolidates recent methodological developments based on Double Machine Learning (DML), with a focus on program evaluation under unconfoundedness. DML-based methods leverage flexible prediction methods to control for confounding in the estimation of (i) standard average effects, (ii) different forms of heterogeneous effects, and (iii) optimal treatment assignment rules. We emphasize that these estimators all build on the same doubly robust score, which allows computational synergies to be exploited. An evaluation of multiple programs of the Swiss Active Labor Market Policy shows how DML-based methods enable a comprehensive policy analysis. However, we find evidence that estimates of individualized heterogeneous effects can become unstable. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.03191&r=all |
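A minimal sketch of the cross-fitted doubly robust (AIPW) score on which the DML estimators discussed above build, here for the average treatment effect of a binary program on synthetic data. The random forest learners and the simulated design are assumptions for illustration; any ML learners could be plugged in.

```python
# Hedged sketch: cross-fitted AIPW / doubly robust estimation of a binary-treatment ATE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
D = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))               # confounded treatment
Y = 1.0 * D + X[:, 0] + rng.normal(size=n)                    # true ATE = 1

psi = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    m1 = RandomForestRegressor().fit(X[train][D[train] == 1], Y[train][D[train] == 1])
    m0 = RandomForestRegressor().fit(X[train][D[train] == 0], Y[train][D[train] == 0])
    e = RandomForestClassifier().fit(X[train], D[train]).predict_proba(X[test])[:, 1]
    e = np.clip(e, 0.05, 0.95)                                # trim extreme propensities
    mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
    psi[test] = (mu1 - mu0
                 + D[test] * (Y[test] - mu1) / e
                 - (1 - D[test]) * (Y[test] - mu0) / (1 - e))

print("AIPW ATE estimate:", round(psi.mean(), 3),
      "+/-", round(1.96 * psi.std() / np.sqrt(n), 3))
```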
By: | Crowley, Patrick M.; Hudgins, David |
Abstract: | When the central bank sets monetary policy according to a conventional or modified Taylor rule (which is known as the Taylor Principle), does this deliver the best outcome for the macroeconomy as a whole? This question is addressed by extending the wavelet-based control (WBC) model of Crowley and Hudgins (2015) to evaluate macroeconomic performance when the central bank sets interest rates based on a conventional or modified Taylor rule (TR). We compare the simulated performance of jointly optimal fiscal and monetary policy under an unrestricted baseline model with performance under the TR. We simulate the model under relatively small and large weighting of the output gap in the TR specification, and for both low and high inflation environments. The results show that the macroeconomic outcome depends on whether the conventional or modified Taylor rule is used, and whether the central bank is operating in a low or high inflation environment. |
Keywords: | Discrete Wavelet Analysis, Monetary Policy, Optimal Control |
JEL: | C61 C63 C88 E52 E61 F47 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bofecr:12020&r=all |
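For reference, the conventional Taylor rule in its textbook form sets the policy rate as a function of inflation and the output gap; the paper's modified rule varies the weight on the output gap, $\phi_{y}$, and the inflation environment, and its exact specification may differ from this generic version:

$i_t = r^{*} + \pi_t + \phi_{\pi}\,(\pi_t - \pi^{*}) + \phi_{y}\,\tilde{y}_t$

where $i_t$ is the policy rate, $r^{*}$ the equilibrium real rate, $\pi_t$ inflation, $\pi^{*}$ the inflation target and $\tilde{y}_t$ the output gap.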
By: | Sebastian G. Kessing (University of Siegen); Vilen Lipatov (Justus Liebig University Giessen); J. Malte Zoubek (University of Siegen) |
Abstract: | We study how regional productivity differences and labor mobility shape optimal Mirrleesian tax-transfer schemes. When tax schedules are not allowed to differ across regions, productivity-enhancing inter-regional migration exerts a downward pressure on optimal marginal tax rates. When regionally differentiated taxation is allowed, marginal tax rates in high-(low-)productivity regions should be corrected downwards (upwards) relative to the benchmark without migration. Simulations of the productivity differences between metropolitan and other areas of the US indicate that migration affects the optimal tax-transfer schedule more strongly in the regionally differentiated rather than in the undifferentiated case. |
Keywords: | Optimal taxation, place-based redistribution, regional inequality, migration, multidimensional screening, delayed optimal control |
JEL: | H11 J45 R12 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:mar:magkse:202013&r=all |
By: | Qiu, Yun; Chen, Xi; Shi, Wei |
Abstract: | This paper examines the role of various socioeconomic factors in mediating the local and cross-city transmissions of the novel coronavirus 2019 (COVID-19) in China. We implement a machine learning approach to select, from a rich set of exogenous weather characteristics, instrumental variables that strongly predict virus transmission. Our 2SLS estimates show that the stringent quarantine, massive lockdown and other public health measures imposed in late January significantly reduced the transmission rate of COVID-19. By early February, the virus spread had been contained. While many socioeconomic factors mediate the virus spread, a robust government response since late January played a determinant role in the containment of the virus. We also demonstrate that the actual population flow from the outbreak source poses a higher risk to the destination than other factors such as geographic proximity and similarity in economic conditions. The results have rich implications for ongoing global efforts in containment of COVID-19. |
Keywords: | 2019 novel coronavirus, transmission |
JEL: | I18 I12 C23 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:glodps:494&r=all |
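A stylized sketch of the instrument-selection step described in the entry above: the lasso picks, from many exogenous weather variables, strong predictors of the endogenous regressor, and the selected variables are then used as instruments in 2SLS. The data-generating process, variable names and two-stage implementation below are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch: lasso-based instrument selection followed by manual 2SLS.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 1000, 30
weather = rng.normal(size=(n, p))                       # candidate instruments
u = rng.normal(size=n)                                  # unobserved confounder
past_cases = weather[:, 0] - 0.5 * weather[:, 1] + u + rng.normal(size=n)
new_cases = 0.8 * past_cases - u + rng.normal(size=n)   # true coefficient 0.8

# stage 0: lasso picks the relevant weather instruments
sel = np.flatnonzero(LassoCV(cv=5).fit(weather, past_cases).coef_)
Z = weather[:, sel]

# 2SLS: first-stage fitted values, then the structural regression
x_hat = LinearRegression().fit(Z, past_cases).predict(Z).reshape(-1, 1)
beta = LinearRegression().fit(x_hat, new_cases).coef_[0]
print("selected instruments:", sel, " 2SLS estimate:", round(beta, 3))
```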
By: | Daniela V. Guío-Martínez; Juan J. Ospina-Tejeiro (Banco de la República de Colombia); Germán A. Muñoz-Bravo (Banco de la República de Colombia); Julián A. Parra-Polanía (Banco de la República de Colombia) |
Abstract: | Based on Latent Dirichlet Allocation, a computational linguistics tool whose purpose is to identify the underlying thematic patterns that group the words of a set of documents, we analyse two essential outlets in the Banco de la República's communication, the minutes and the monetary policy reports, from March 2007 to December 2018. We find that these two outlets revolve primarily around eight topics, the most important (on average, over time) being the one that contains terms mainly related to domestic demand and economic sectors. We describe both the similarities and the differences observed between the minutes and the reports in the share of each topic within the documents and in the evolution of that share over time. |
Keywords: | Communication, Monetary Policy, Text Mining, LDA |
JEL: | E52 E58 C40 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:bdr:borrec:1108&r=all |
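As a minimal illustration of the method used in the entry above, the snippet below fits Latent Dirichlet Allocation to a toy English-language corpus and prints the top words per topic; the actual study works with the Spanish-language minutes and reports and identifies eight topics.

```python
# Hedged sketch: LDA topic extraction on a toy corpus (not the paper's data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "domestic demand grew while the industrial sector slowed",
    "inflation expectations remain anchored to the target",
    "the exchange rate depreciated on external financial conditions",
    "the board decided to hold the policy interest rate",
]

counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = terms[weights.argsort()[::-1][:5]]          # five highest-weight words
    print(f"topic {k}:", ", ".join(top))
```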
By: | Alix-Garcia,Jennifer M.; Sims,Katharine R. Emans; Phaneuf,Daniel J. |
Abstract: | Cost-effective allocation of conditional cash transfers (CCT) requires identifying recipients with low opportunity costs who might change behavior. This paper develops a low-cost approach for improving program implementation by using a stated preference, referendum-style survey question to calculate willingness to accept (WTA) for CCT contracts. This is illustrated in the context of Mexico's Payments for Ecosystem Services Program, with the paper finding that the estimated social cost based on WTA is substantially lower than actual payments. Simulation of three geographic targeting approaches shows that joint selection using deforestation risk and WTA could increase program impact under the same budget. The paper also simulates modified payment schedules based on predicted WTA and demonstrates that these could reduce program cost. |
Keywords: | Environmental Disasters & Degradation, Global Environment, Conditional Cash Transfers, Services & Transfers to Poor, Disability, Access of Poor to Social Services, Economic Assistance, Biodiversity, Global Environment Facility |
Date: | 2019–01–17 |
URL: | http://d.repec.org/n?u=RePEc:wbk:wbrwps:8708&r=all |
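A hedged sketch of the referendum-style estimation idea in the entry above: respondents accept or reject a randomly assigned payment offer, and a binary-choice model recovers the mean willingness to accept (WTA) from the acceptance curve. The synthetic offers, the logistic functional form and the -intercept/slope formula are illustrative assumptions rather than the paper's exact econometric specification.

```python
# Hedged sketch: recovering mean WTA from accept/reject answers to randomized offers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
offer = rng.uniform(100, 1000, size=n)                  # randomized CCT payment offer
wta = rng.normal(400, 150, size=n)                      # latent true WTA
accept = (offer >= wta).astype(int)                     # stated-preference answer

m = LogisticRegression(C=1e6).fit(offer.reshape(-1, 1), accept)
# with P(accept) = F(a + b*offer), the mean (and median) WTA under the logistic model is -a/b
mean_wta = -m.intercept_[0] / m.coef_[0, 0]
print("estimated mean WTA:", round(mean_wta, 1))
```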
By: | Shan Luo (Federal Reserve Bank of Chicago); Anthony Murphy |
Abstract: | We study and model the determinants of exposure at default (EAD) for large U.S. construction and land development loans from 2010 to 2017. EAD is an important component of credit risk, and commercial real estate (CRE) construction loans are riskier than income-producing loans. This is the first study modeling the EAD of construction loans. The underlying EAD data come from a large, confidential supervisory dataset used in the U.S. Federal Reserve’s annual Comprehensive Capital Analysis and Review (CCAR) stress tests. EAD reflects the relative bargaining ability and information sets of banks and obligors. We construct OLS and Tobit regression models, as well as several other machine-learning models, of EAD conversion measures, using a four-quarter horizon. The popular LEQ and CCF conversion measures are unstable, so we focus on the EADF and AUF measures. Property type, the lagged utilization rate and loan size are important drivers of EAD. Changing local and national economic conditions also matter, so EAD is sensitive to macroeconomic conditions. Even though default and EAD risk are negatively correlated, a conservative assumption is that all undrawn construction commitments will be fully drawn in default. |
Keywords: | Credit Risk; Commercial Real Estate (CRE); Construction; Exposure at Default; EAD Conversion Measures; Macro-sensitivity; Machine Learning |
JEL: | G21 G28 |
Date: | 2020–03–17 |
URL: | http://d.repec.org/n?u=RePEc:fip:feddwp:87677&r=all |
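For readers unfamiliar with the conversion measures named above, the sketch below computes LEQ, CCF, EADF and AUF from toy numbers, using the definitions conventional in the EAD literature (the paper's exact variable construction may differ); the second and third loans show why LEQ and CCF become unstable when the undrawn or the drawn amount, respectively, is close to zero.

```python
# Hedged sketch: standard EAD conversion measures over a four-quarter horizon, toy numbers.
import numpy as np

drawn_t = np.array([40.0, 95.0, 1.0])         # drawn balance four quarters before default
commit_t = np.array([100.0, 100.0, 100.0])    # total commitment at that date
ead = np.array([80.0, 90.0, 55.0])            # exposure observed at default

leq = (ead - drawn_t) / (commit_t - drawn_t)  # share of the undrawn part converted (unstable if nearly fully drawn)
ccf = ead / drawn_t                           # multiple of the drawn balance (unstable if drawn is near zero)
eadf = ead / commit_t                         # EAD as a fraction of the commitment
auf = (ead - drawn_t) / commit_t              # additional utilization relative to the commitment

for name, x in [("LEQ", leq), ("CCF", ccf), ("EADF", eadf), ("AUF", auf)]:
    print(name, np.round(x, 2))
```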
By: | Emiliano Delfau |
Abstract: | It is well known that in Argentina, in contrast to the rest of the region, domestic credit to GDP has historically been at low levels. Looking at the last ten years, the country has consistently fluctuated between 12% and 15% of banking penetration relative to GDP (Banco Mundial, 2017). Likewise, the informal micro sector of the productive structure reached 49.3% by the end of 2018 (UCA, 2018). This shows that the challenge is not only to bank the unbanked population, but also to bring part of the informal micro sector into the banking system. In doing so we would not only be pursuing a financial inclusion project, but would also be trying to minimize the use of cash in favour of other means of payment and transaction. Notwithstanding the characteristics mentioned above, Argentina is indeed part of global technological trends and therefore has a technological "ecosystem" that would allow it to face the aforementioned challenges. Under the premise that "all data is credit data", we propose the creation of a platform or bank with a 100% digital approach whose main engine is a credit score based on Big Data analysis and Machine Learning techniques. Finally, we list some success stories of this new business model built on the application of Big Data and Machine Learning techniques. |
Keywords: | Financial Inclusion, Big Data, Fintechs, Unstructured Data, Analytics, Scoring, Digital Technology, Digital Banking |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:cem:doctra:716&r=all |
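A toy sketch of the "all data is credit data" scoring engine proposed in the entry above: a gradient-boosted classifier trained on alternative, non-bureau features. The features, the synthetic default process and the choice of learner are illustrative assumptions, not a recommendation of any specific pipeline.

```python
# Hedged sketch: a minimal alternative-data credit score on synthetic features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.poisson(30, n),            # e.g. monthly mobile top-ups
    rng.exponential(200, n),       # e.g. average e-wallet transaction size
    rng.integers(0, 2, n),         # e.g. pays utility bills digitally
])
default = rng.binomial(1, 1 / (1 + np.exp(0.02 * X[:, 0] + X[:, 2] - 1.0)))

X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)
score = GradientBoostingClassifier().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
print("AUC of the toy alternative-data score:", round(roc_auc_score(y_te, score), 3))
```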
By: | Antonio Peyrache (CEPA - School of Economics, The University of Queensland); Angelo Zago (Dipartimento di Scienze Economiche, Università degli Studi di Verona) |
Abstract: | In this paper we propose an equilibrium computational model of the market for justice that focuses on supply policies aiming to increase the efficiency of the system. We measure performance in terms of completion times and inefficiency in terms of the discrepancy between observed completion time and an efficient benchmark (equilibrium) completion time. By using a rather general production model that can take into account resource use, we can study the (steady state) performance of the justice sector as a whole and improve both on the analysis of the length of trials and on standard measures of partial productivity (such as the number of defined cases per judge). In order to identify demand and supply and run our counterfactual equilibrium analysis, we focus on a recently collected dataset on the Italian courts of justice system. The Italian case is useful because it provides exogenous variation in the quantity of interest that allows for identification. It is also interesting because of the heterogeneity of the system in terms of completion times. Overall we find that three supply policies can make a significant contribution to the efficiency of the system: the introduction of best practices, break-ups of large courts of justice into smaller ones (to exploit economies of scale), and the optimal reallocation of judges across courts (in order to enhance efficiency). We find that, even without the introduction of best practices, break-ups and reallocation can reduce the system completion time by around 30%. Although we remain cautious about the external validity of our results, they point to the fact that there is large scope for supply policies aiming at improving the processing time of judicial systems. |
Keywords: | Courts of Justice; Efficiency; Equilibrium; Completion Times |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:qld:uqcepa:147&r=all |
By: | Moshe Babaioff; Michal Feldman; Yannai A. Gonczarowski; Brendan Lucier; Inbal Talgam-Cohen |
Abstract: | A single seller wishes to sell $n$ items to a single unit-demand buyer. We consider a robust version of this revenue-maximization pricing problem, where the seller knows the buyer's marginal distributions of values for each item, but not the joint distribution, and wishes to maximize worst-case revenue over all possible correlation structures. We devise a computationally efficient (polynomial in the support size of the marginals) algorithm that computes the worst-case joint distribution for any choice of item prices. And yet, in sharp contrast to the additive buyer case (Carroll, 2017), we show that it is NP-hard to approximate the optimal choice of prices to within any factor better than $n^{1/2-\epsilon}$. For the special case of marginal distributions that satisfy the monotone hazard rate property, we show how to guarantee a constant fraction of the optimal worst-case revenue using item pricing; this pricing equates revenue across all possible correlations and can be computed efficiently. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2003.05913&r=all |
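To illustrate the object being computed in the entry above, the brute-force linear program below finds nature's revenue-minimizing joint distribution consistent with given marginals, for fixed item prices and a unit-demand buyer, on a tiny two-item example. The supports, marginals and prices are made up, and this exponential-size LP is only a didactic stand-in for the paper's polynomial-time algorithm.

```python
# Hedged sketch: worst-case joint distribution (over correlations consistent
# with known marginals) for fixed item prices, via a small linear program.
import numpy as np
from itertools import product
from scipy.optimize import linprog

values = [np.array([1.0, 3.0]), np.array([2.0, 4.0])]        # value supports of the 2 items
marginals = [np.array([0.5, 0.5]), np.array([0.4, 0.6])]     # known marginal distributions
prices = np.array([2.5, 3.0])

support = list(product(range(2), range(2)))                   # indices into the product support

def revenue(idx):                                             # unit-demand buyer's purchase decision
    v = np.array([values[i][idx[i]] for i in range(2)])
    util = v - prices
    return prices[np.argmax(util)] if util.max() >= 0 else 0.0

c = np.array([revenue(s) for s in support])                   # nature minimizes expected revenue
A_eq, b_eq = [], []
for item in range(2):
    for k in range(2):                                        # marginal constraints
        A_eq.append([1.0 if s[item] == k else 0.0 for s in support])
        b_eq.append(marginals[item][k])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
print("worst-case expected revenue at these prices:", round(res.fun, 3))
```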
By: | Christian Bongiorno (MICS - Mathématiques et Informatique pour la Complexité et les Systèmes - CentraleSupélec); Damien Challet (MICS - Mathématiques et Informatique pour la Complexité et les Systèmes - CentraleSupélec) |
Abstract: | Cleaning covariance matrices is a highly non-trivial problem, yet one of central importance in the statistical inference of dependence between objects. We propose a probabilistic hierarchical clustering method, named Bootstrapped Average Hierarchical Clustering (BAHC), that is particularly effective in the high-dimensional case, i.e., when there are more objects than features. When applied to DNA microarray data, our method yields distinct hierarchical structures that cannot be accounted for by usual hierarchical clustering. We then use global minimum-variance risk management to test our method and find that BAHC leads to significantly smaller realized risk compared to state-of-the-art linear and nonlinear filtering methods in the high-dimensional case. Spectral decomposition shows that BAHC better captures the persistence of the dependence structure between asset price returns in the calibration and the test periods. |
Date: | 2020–03–12 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02506848&r=all |
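A simplified sketch of the BAHC recipe described in the entry above: bootstrap the return matrix, filter each bootstrap correlation matrix through average-linkage hierarchical clustering (replacing each pairwise correlation by the level at which the two assets merge), and average the filtered matrices. The distance transform, number of bootstraps and toy data are assumptions; the authors' implementation may differ in its details.

```python
# Hedged sketch of bootstrapped average hierarchical clustering (BAHC) for correlations.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

def hc_filter(corr):
    """Replace the correlation matrix by its hierarchically nested version."""
    dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    coph = squareform(cophenet(Z))                  # cophenetic (merge-level) distances
    filtered = 1.0 - 0.5 * coph ** 2                # back to correlations
    np.fill_diagonal(filtered, 1.0)
    return filtered

def bahc_correlation(returns, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    t, n = returns.shape
    acc = np.zeros((n, n))
    for _ in range(n_boot):
        sample = returns[rng.integers(0, t, size=t)]          # bootstrap rows (dates)
        acc += hc_filter(np.corrcoef(sample, rowvar=False))
    return acc / n_boot

returns = np.random.default_rng(1).normal(size=(60, 120))     # T < N: high-dimensional case
C = bahc_correlation(returns)
print("cleaned correlation matrix:", C.shape)
```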