on Computational Economics
Issue of 2021‒01‒25
43 papers chosen by
By: | Cogliano, Jonathan F.; Veneziani, Roberto; Yoshihara, Naoki |
Abstract: | This article surveys computational approaches to classical-Marxian economics. These approaches include a range of techniques - such as numerical simulations, agent-based models, and Monte Carlo methods - and cover many areas within the classical-Marxian tradition. We focus on three major themes in classical-Marxian economics, namely price and value theory; inequality, exploitation, and classes; and technical change, profitability, growth and cycles. We show that computational methods are particularly well-suited to capture certain key elements of the vision of the classical-Marxian approach and can be fruitfully used to make significant progress in the study of classical-Marxian topics. |
Keywords: | Computational Methods, Agent-Based Models, Classical Economists, Marx |
JEL: | C63 B51 B41 |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:hit:hituec:716&r=all |
By: | Kentaro Imajo; Kentaro Minami; Katsuya Ito; Kei Nakagawa |
Abstract: | Recent developments in deep learning techniques have motivated intensive research in machine learning-aided stock trading strategies. However, since the financial market has a highly non-stationary nature hindering the application of typical data-hungry machine learning methods, leveraging financial inductive biases is important to ensure better sample efficiency and robustness. In this study, we propose a novel method of constructing a portfolio based on predicting the distribution of a financial quantity called residual factors, which is known to be generally useful for hedging the risk exposure to common market factors. The key technical ingredients are twofold. First, we introduce a computationally efficient extraction method for the residual information, which can be easily combined with various prediction algorithms. Second, we propose a novel neural network architecture that allows us to incorporate widely acknowledged financial inductive biases such as amplitude invariance and time-scale invariance. We demonstrate the efficacy of our method on U.S. and Japanese stock market data. Through ablation experiments, we also verify that each individual technique contributes to improving the performance of trading strategies. We anticipate our techniques may have wide applications in various financial problems. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.07245&r=all |
By: | Dong An; Noah Linden; Jin-Peng Liu; Ashley Montanaro; Changpeng Shao; Jiasu Wang |
Abstract: | Inspired by recent progress in quantum algorithms for ordinary and partial differential equations, we study quantum algorithms for stochastic differential equations (SDEs). First, we provide a quantum algorithm that gives a quadratic speed-up for multilevel Monte Carlo methods in a general setting. As applications, we apply it to compute expectation values determined by classical solutions of SDEs, with improved dependence on precision. We demonstrate the use of this algorithm in a variety of applications arising in mathematical finance, such as the Black-Scholes and Local Volatility models, and Greeks. We also provide a quantum algorithm based on sublinear binomial sampling for the binomial option pricing model with the same improvement. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.06283&r=all |
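The classical multilevel Monte Carlo estimator that this paper speeds up can be sketched in a few lines. The following is an illustrative implementation for a discounted European call under geometric Brownian motion (the parameters, payoff, and sample counts are our own choices, not taken from the paper): each level estimates the correction E[P_l - P_{l-1}] with coupled Euler paths, and the telescoping sum recovers E[P_L].

```python
import math
import random

def mlmc_level(l, n_samples, s0=100.0, r=0.05, sigma=0.2, T=1.0, K=100.0):
    """Estimate E[P_l - P_{l-1}] for a discounted European call payoff,
    using coupled Euler paths: the coarse path reuses the fine path's
    Brownian increments, summed in pairs."""
    rng = random.Random(l)              # fixed seed per level for reproducibility
    n_fine = 2 ** l
    dt = T / n_fine
    total = 0.0
    for _ in range(n_samples):
        s_fine, s_coarse, dw_pair = s0, s0, 0.0
        for k in range(n_fine):
            dw = rng.gauss(0.0, math.sqrt(dt))
            s_fine += r * s_fine * dt + sigma * s_fine * dw
            dw_pair += dw
            if l > 0 and k % 2 == 1:    # one coarse step per two fine steps
                s_coarse += r * s_coarse * 2 * dt + sigma * s_coarse * dw_pair
                dw_pair = 0.0
        p_fine = math.exp(-r * T) * max(s_fine - K, 0.0)
        p_coarse = math.exp(-r * T) * max(s_coarse - K, 0.0) if l > 0 else 0.0
        total += p_fine - p_coarse
    return total / n_samples

# Telescoping sum E[P_L] = sum_l E[P_l - P_{l-1}], with fewer samples on finer levels
estimate = sum(mlmc_level(l, n_samples=4000 // 2 ** l) for l in range(5))
print(round(estimate, 2))   # close to the Black-Scholes price of about 10.45
```

The quantum algorithm in the paper replaces the sampling within each level, improving the overall dependence on the target precision.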
By: | Jiequn Han; Ruimeng Hu |
Abstract: | Stochastic control problems with delay are challenging due to the path-dependent feature of the system and thus its intrinsic high dimensions. In this paper, we propose and systematically study deep neural networks-based algorithms to solve stochastic control problems with delay features. Specifically, we employ neural networks for sequence modeling (\emph{e.g.}, recurrent neural networks such as long short-term memory) to parameterize the policy and optimize the objective function. The proposed algorithms are tested on three benchmark examples: a linear-quadratic problem, optimal consumption with fixed finite delay, and portfolio optimization with complete memory. In particular, we notice that the architecture of recurrent neural networks naturally captures the path-dependent feature with much flexibility, and yields better performance with more efficient and stable training of the network compared to feedforward networks. This superiority is especially evident in the case of portfolio optimization with complete memory, which features infinite delay. |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2101.01385&r=all |
By: | Emanuele Ciola (Department of Management, Universita' Politecnica delle Marche (Italy)); Edoardo Gaffeo (Department of Economics and Management, Universita' degli Studi di Trento (Italy).); Mauro Gallegati (Department of Management, Universita' Politecnica delle Marche (Italy)) |
Abstract: | This paper develops and estimates a macroeconomic model of real-financial markets interactions in which the behaviour of banks generates endogenous business cycles. We do so in the context of a computational agent-based framework, where the channelling of funds from depositors to investors occurring through intermediaries nformation and matching frictions. Since banks compete in both deposit and credit markets, the whole dynamic is driven by endogenous fluctuations in their profits. In particular, we assume that intermediaries adopt a simple learning process, which consists of copying the strategy of the most profitable competitors while setting their interest rates. Accordingly, the emergence of strategic complementarity - mainly due to the accumulation of information capital - leads to periods of sustained growth followed by sharp recessions in the simulated economy. |
Keywords: | Agent-based macroeconomics, Simulation-based estimation, Intermediaries behaviour, Business cycles |
JEL: | C15 C51 C63 E32 E44 |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:anc:wpaper:448&r=all |
By: | Mariano Zeron; Ignacio Ruiz |
Abstract: | Inspired by a series of remarkable papers in recent years that use Deep Neural Nets to substantially speed up the calibration of pricing models, we investigate the use of Chebyshev Tensors instead of Deep Neural Nets. Given that Chebyshev Tensors can be, under certain circumstances, more efficient than Deep Neural Nets at exploring the input space of the function to be approximated, due to their exponential convergence, the calibration of pricing models seems, a priori, a good case for Chebyshev Tensors. In this piece of research, we build Chebyshev Tensors, either directly or with the help of the Tensor Extension Algorithms, to tackle the computational bottleneck associated with the calibration of the rough Bergomi volatility model. Results are encouraging: the accuracy of model calibration via Chebyshev Tensors is similar to that obtained with Deep Neural Nets, while the building effort was between 5 and 100 times lower in the experiments run. Our tests indicate that, when using Chebyshev Tensors, the calibration of the rough Bergomi volatility model is around 40,000 times more efficient than calibration via brute force (using the pricing function). |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.07440&r=all |
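Chebyshev interpolation itself is easy to illustrate in one dimension. The sketch below (with our own smooth stand-in function, not the rough Bergomi pricer) builds an interpolant on Chebyshev points and evaluates it with the barycentric formula; for smooth functions the error decays exponentially in the number of nodes, which is the property the authors exploit.

```python
import math

def cheb_nodes(n, a, b):
    """n+1 Chebyshev points of the second kind, mapped to [a, b]."""
    return [(a + b) / 2 + (b - a) / 2 * math.cos(math.pi * k / n)
            for k in range(n + 1)]

def barycentric(x, nodes, values):
    """Evaluate the interpolant with the barycentric formula; for these
    nodes the weights are (-1)^k, halved at the two endpoints."""
    num = den = 0.0
    for k, (xk, fk) in enumerate(zip(nodes, values)):
        if x == xk:
            return fk
        w = (-1.0) ** k * (0.5 if k in (0, len(nodes) - 1) else 1.0)
        t = w / (x - xk)
        num += t * fk
        den += t
    return num / den

# Stand-in for an expensive pricing routine (a smooth, hypothetical function)
f = lambda sigma: math.exp(-sigma) * math.sin(5.0 * sigma)

nodes = cheb_nodes(20, 0.1, 1.0)
vals = [f(xk) for xk in nodes]
max_err = max(abs(f(x / 100) - barycentric(x / 100, nodes, vals))
              for x in range(11, 100))
print(max_err < 1e-8)   # True: 21 nodes already suffice for a smooth function
```

The tensor extension in the paper generalises this to several input dimensions at once, which is where the speed-ups over brute-force calibration arise.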
By: | Kathrin Glau; Linus Wunderlich |
Abstract: | We propose the deep parametric PDE method to solve high-dimensional parametric partial differential equations. A single neural network approximates the solution of a whole family of PDEs after being trained without the need for sample solutions. As a practical application, we compute option prices in the multivariate Black-Scholes model. After a single training phase, the prices for different time, state and model parameters are available in milliseconds. We evaluate the accuracy in the price and a generalisation of the implied volatility with examples of up to 25 dimensions. A comparison with alternative machine learning approaches confirms the effectiveness of the approach. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.06211&r=all |
By: | Santosh Kumar Radha |
Abstract: | In this paper we reformulate the problem of pricing options in a quantum setting. Our proposed algorithm involves preparing an initial state, representing the option price, and then evolving it using existing imaginary time simulation algorithms. This way of pricing options boils down to mapping an initial option price to a quantum state and then simulating the time dependence in Wick's imaginary time space. We numerically verify our algorithm for European options using a particular imaginary time evolution algorithm as proof of concept and show how it can be extended to path dependent options like Asian options. As the proposed method uses a hybrid variational algorithm, it is bound to be relevant for near-term quantum computers. |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2101.04280&r=all |
By: | Daniel Borup (Aarhus University, CREATES and the Danish Finance Institute (DFI)); David E. Rapach (Washington University in St. Louis and Saint Louis University); Erik Christian Montes Schütte (Aarhus University, CREATES and the Danish Finance Institute (DFI)) |
Abstract: | We generate a sequence of now- and backcasts of weekly unemployment insurance initial claims (UI) based on a rich trove of daily Google Trends (GT) search-volume data for terms related to unemployment. To harness the information in a high-dimensional set of daily GT terms, we estimate predictive models using machine-learning techniques in a mixed-frequency framework. In a simulated out-of-sample exercise, now- and backcasts of weekly UI that incorporate the information in the daily GT terms substantially outperform models that ignore the information. The relevance of GT terms for predicting UI is strongly linked to the COVID-19 crisis. |
Keywords: | Unemployment insurance, Internet search, Mixed-frequency data, Penalized regression, Neural network, Variable importance |
JEL: | C45 C53 C55 E24 E27 J65 |
Date: | 2021–01–11 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2021-02&r=all |
By: | Tamara, Novian; Dwi Muchisha, Nadya; Andriansyah, Andriansyah; Soleh, Agus M |
Abstract: | GDP is very important to monitor in real time because of its usefulness for policy making. We built and compared ML models to forecast real-time Indonesian GDP growth. We used 18 variables consisting of quarterly macroeconomic and financial market statistics. We evaluated the performance of six popular ML algorithms - Random Forest, LASSO, Ridge, Elastic Net, Neural Networks, and Support Vector Machines - in producing real-time forecasts of GDP growth over the 2013:Q3 to 2019:Q4 period. We used the RMSE, MAD, and Pearson correlation coefficient as measures of forecast accuracy. The results showed that all of these models outperformed the AR(1) benchmark, with Random Forest performing best among the individual models. To obtain more accurate forecasts, we ran forecast combinations using equal weighting and LASSO regression. The best model was the forecast combination using LASSO regression with selected ML models, namely Random Forest, Ridge, Support Vector Machine, and Neural Network. |
Keywords: | Nowcasting, Indonesian GDP, Machine Learning |
JEL: | C55 E30 O40 |
Date: | 2020–06–26 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:105235&r=all |
By: | Peter B. Dixon; Maureen T. Rimmer; Daniel Mason-D'Croz |
Abstract: | We use USAGE-Food, a modified version of the USAGE model, to simulate the effects on the U.S. economy of reductions in meat consumption brought about by health-related preference changes or induced by taxes. Modifications include: (a) separate identification of Beef processing; (b) estimates of price elasticities of demand for beef and other food products derived from a survey of econometric studies; (c) nesting in the household utility function and in the production functions of food-serving industries to represent substitution between flesh and non-flesh food; and (d) allowance for flows of agricultural land between agricultural activities. At the macro level, the main influences on our results are health-related effects on medical expenditures and labour supply. The pure food-chain effects have negligible macroeconomic consequences. Other conclusions are: 1: using beef-tax revenue to subsidize healthy foods strongly accentuates substitution away from beef towards healthy foods. However, the subsidy leads to an overall increase in the consumption of food. 2: using beef-tax revenue to expand public consumption has a negative effect on private consumption. In terms of aggregate demand, the two effects are broadly offsetting. |
Keywords: | Reducing U.S. beef consumption, CGE simulations, effects via health expenditures, effects via labour supply |
JEL: | C68 I19 Q18 |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:cop:wpaper:g-311&r=all |
By: | Fischer, Benjamin; Hügle, Dominik |
Abstract: | We quantify the private and fiscal lifetime returns to higher education in Germany, accounting for the redistribution through the tax-and-transfer system, cohort effects, and the effect of income pooling within households. For this purpose, we build a dynamic microsimulation model that simulates individual life cycles of a young German cohort in terms of several key variables, such as employment, earnings, and household formation. To estimate the returns to higher education, we link our dynamic microsimulation model to a tax-benefit simulator that converts gross wages into disposable incomes. On average, we find private and fiscal returns that are substantially higher than current market interest rates. However, analyzing the distribution of returns, we also find that there is a considerable share of young adults for whom we forecast vocational training, the alternative to higher education, to be financially more rewarding. We demonstrate how the tax-and-transfer system and income pooling within couple households affect private returns, and decompose the fiscal returns into their major components. |
Keywords: | Higher education, Returns to education, Dynamic microsimulation |
JEL: | C53 I23 I26 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:fubsbe:202021&r=all |
By: | Mykola Babiak; Jozef Barunik |
Abstract: | We study dynamic portfolio choice of a long-horizon investor who uses deep learning methods to predict equity returns when forming optimal portfolios. Our results show statistically and economically significant benefits from using deep learning to form optimal portfolios through certainty equivalent returns and Sharpe ratios. Return predictability via deep learning also generates substantially improved portfolio performance across different subsamples, particularly during recessionary periods. These gains are robust to including transaction costs, short-selling and borrowing constraints. |
Keywords: | return predictability; portfolio allocation; machine learning; neural networks; empirical asset pricing; |
JEL: | C45 C53 E37 G11 G17 |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:cer:papers:wp677&r=all |
By: | Rodriguez Castelan, Carlos (World Bank); Araar, Abdelkrim (Université Laval); Malásquez, Eduardo A. (World Bank); Ochoa, Rogelio Granguillhome (World Bank) |
Abstract: | This paper presents a novel method for estimating the likely welfare effects of competition reforms for both current and new consumers. Using household budget survey data for 2015/16 for Ethiopia and assuming a reform scenario that dilutes the market share of the state-owned monopoly to 45 percent, the model predicts a 25.3 percent reduction in the price of mobile services and an increase of 4.6 million new users. This reform would generate a welfare gain of 1.37 percent among all consumers. Poverty rates are expected to decline by 0.31 percentage points, driven by a reduction of 0.22 percentage points for current consumers and 0.09 percentage points among new users. Inequality would increase by 0.23 Gini points since better off consumers are more likely to reap the benefits of greater competition. This method represents a powerful tool for supporting the analysis of competition reforms in developing countries, particularly in sectors known for excluding significant segments of the population due to high consumer prices. |
Keywords: | competition reform, ICT, welfare effects, simulations, Ethiopia |
JEL: | C15 D40 D60 I32 L86 N77 |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp14044&r=all |
By: | Kim Christensen (Aarhus University and CREATES); Mathias Siggaard (Aarhus University and CREATES); Bezirgen Veliyev (Aarhus University and CREATES) |
Abstract: | We show that machine learning (ML) algorithms improve one-day-ahead forecasts of realized variance from 29 Dow Jones Industrial Average index stocks over the sample period 2001–2017. We inspect several ML approaches: regularization, tree-based algorithms, and neural networks. Off-the-shelf ML implementations beat the Heterogeneous AutoRegressive (HAR) model, even when the only predictors employed are the daily, weekly, and monthly lags of realized variance. Moreover, ML algorithms are capable of extracting substantially more information from additional predictors of volatility, including firm-specific characteristics and macroeconomic indicators, relative to an extended HAR model (HAR-X). ML automatically deciphers the often nonlinear relationships among the variables, allowing it to identify key associations driving volatility. With accumulated local effect (ALE) plots, we show that there is general agreement about the set of the most dominant predictors, but disagreement about their ranking. We investigate the robustness of ML when a large number of irrelevant variables, exhibiting serial correlation and conditional heteroscedasticity, are added to the information set. We document sustained forecasting improvements in this setting as well. |
Keywords: | Gradient boosting, high-frequency data, machine learning, neural network, random forest, realized variance, regularization, volatility forecasting |
JEL: | C10 C50 |
Date: | 2021–01–18 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2021-03&r=all |
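The HAR benchmark used above is a linear regression of tomorrow's realized variance on its daily value and its weekly and monthly averages. As a minimal illustration (the synthetic data and parameter values below are our own, not the paper's), the sketch simulates from a known HAR process and recovers its coefficients by ordinary least squares.

```python
import random

def ols(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                    # forward elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            m = A[r][col] / A[col][col]
            A[r] = [arc - m * acc for arc, acc in zip(A[r], A[col])]
            b[r] -= m * b[col]
    beta = [0.0] * p
    for r in reversed(range(p)):            # back substitution
        beta[r] = (b[r] - sum(A[r][k] * beta[k] for k in range(r + 1, p))) / A[r][r]
    return beta

def har_design(rv):
    """HAR regressors: yesterday's RV plus 5-day and 22-day averages."""
    X, y = [], []
    for t in range(21, len(rv) - 1):
        X.append([1.0, rv[t],
                  sum(rv[t - 4:t + 1]) / 5,
                  sum(rv[t - 21:t + 1]) / 22])
        y.append(rv[t + 1])
    return X, y

# Simulate from a known HAR data-generating process, then recover it by OLS
rng = random.Random(1)
rv = [1.0] * 22
for _ in range(3000):
    d, w, m = rv[-1], sum(rv[-5:]) / 5, sum(rv[-22:]) / 22
    rv.append(max(0.05 + 0.4 * d + 0.3 * w + 0.2 * m + rng.gauss(0, 0.05), 0.0))
beta = ols(*har_design(rv))
print([round(c, 2) for c in beta])  # intercept and d/w/m weights near (0.05, 0.4, 0.3, 0.2)
```

The ML methods in the paper replace this linear map with regularized, tree-based, or neural specifications of the same predictors.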
By: | Le Trung Hieu |
Abstract: | Stock portfolio optimization is the process of continually redistributing money across a pool of stocks. In this paper, we formulate the problem so that Reinforcement Learning can be applied to the task properly. To maintain realistic assumptions about the market, we incorporate transaction costs and a risk factor into the state as well. On top of that, we apply various state-of-the-art Deep Reinforcement Learning algorithms for comparison. Since the action space is continuous, the formulation was tested under a family of state-of-the-art continuous policy gradient algorithms: Deep Deterministic Policy Gradient (DDPG), Generalized Deterministic Policy Gradient (GDPG), and Proximal Policy Optimization (PPO), where the former two perform much better than the last. Next, we present an end-to-end solution for the task, with Minimum Variance Portfolio Theory for stock subset selection and Wavelet Transforms for extracting multi-frequency data patterns. We discuss observations and hypotheses about the results, as well as possible future research directions. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.06325&r=all |
By: | KONDO Keisuke |
Abstract: | This study develops a spatial Susceptible–Exposed–Infectious–Recovered (SEIR) model that analyzes the effect of interregional mobility on the spatial spread of the coronavirus disease 2019 (COVID-19) outbreak in Japan. National and local governments requested that residents refrain from traveling between the 47 prefectures during the state of emergency. However, the extent to which restricting interregional mobility prevents the expansion of infection has not been elucidated. Our spatial SEIR model describes the spatial spread pattern of COVID-19 when people commute to a prefecture where they work or study during the daytime and return to their residential prefecture at night. We assume that people are exposed to infection risk during their daytime activities. According to our simulation results, restricting interregional mobility can prevent the geographical expansion of the infection. However, in prefectures with many infectious individuals, residents are exposed to higher infection risk when their mobility is restricted. Our simulation results also show that interregional mobility restrictions play a limited role in reducing the national total number of infected individuals. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:eti:dpaper:20089&r=all |
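The mechanics of a commuting-based spatial SEIR model can be sketched with two regions (all parameters below are illustrative, not calibrated to Japan): residents are exposed in the region where they spend the daytime, so cutting commuting flows keeps the outbreak out of the initially uninfected region.

```python
def simulate(days, commute):
    """Two-region discrete-time SEIR; commute[i][j] is the share of region i's
    residents spending the daytime (and being exposed) in region j."""
    beta, sigma, gamma = 0.3, 1 / 5, 1 / 7    # transmission, incubation, recovery
    N = [1000.0, 1000.0]
    S = [999.0, 1000.0]                       # one exposed seed in region 1 only
    E = [1.0, 0.0]
    I = [0.0, 0.0]
    R = [0.0, 0.0]
    for _ in range(days):
        # daytime infectious counts and populations per workplace region
        work_I = [sum(commute[i][j] * I[i] for i in range(2)) for j in range(2)]
        work_N = [sum(commute[i][j] * N[i] for i in range(2)) for j in range(2)]
        # force of infection on the residents of region i
        lam = [beta * sum(commute[i][j] * work_I[j] / work_N[j] for j in range(2))
               for i in range(2)]
        for i in range(2):
            new_E, new_I, new_R = lam[i] * S[i], sigma * E[i], gamma * I[i]
            S[i] -= new_E
            E[i] += new_E - new_I
            I[i] += new_I - new_R
            R[i] += new_R
    return R

open_R = simulate(300, [[0.8, 0.2], [0.2, 0.8]])      # 20% cross-commuting
closed_R = simulate(300, [[1.0, 0.0], [0.0, 1.0]])    # mobility restricted
print(closed_R[1] < open_R[1])   # True: restriction keeps region 2 uninfected
```

The paper's model does this for 47 prefectures with empirical commuting flows, which is how it can weigh geographic containment against the concentration of risk in already-infected prefectures.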
By: | Hezhi Luo (deceased); Yuanyuan Chen (deceased); Xianye Zhang (deceased); Duan Li (deceased) |
Abstract: | We investigate the optimal portfolio deleveraging (OPD) problem with permanent and temporary price impacts, where the objective is to maximize equity while meeting a prescribed debt/equity requirement. We take the real situation with cross impact among different assets into consideration. The resulting problem is, however, a non-convex quadratic program with a quadratic constraint and a box constraint, which is known to be NP-hard. In this paper, we first develop a successive convex optimization (SCO) approach for solving the OPD problem and show that the SCO algorithm converges to a KKT point of its transformed problem. Second, we propose an effective global algorithm for the OPD problem, which integrates the SCO method, simple convex relaxation and a branch-and-bound framework, to identify a global optimal solution to the OPD problem within a pre-specified $\epsilon$-tolerance. We establish the global convergence of our algorithm and estimate its complexity. We also conduct numerical experiments to demonstrate the effectiveness of our proposed algorithms with both the real data and the randomly generated medium- and large-scale OPD problem instances. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.07368&r=all |
By: | Andrei Cozma; Christoph Reisinger |
Abstract: | In this short paper, we study the simulation of a large system of stochastic processes subject to a common driving noise and fast mean-reverting stochastic volatilities. This model may be used to describe the firm values of a large pool of financial entities. We then seek an efficient estimator for the probability of a default, indicated by a firm value below a certain threshold, conditional on common factors. We first analyse the convergence of the Euler--Maruyama scheme applied to the fast Ornstein--Uhlenbeck SDE for the volatility, and show that the first order strong error is robust with respect to the mean reversion speed (only) if the step size is scaled appropriately. Next, we consider approximations where coefficients containing the fast volatility are replaced by certain ergodic averages (a type of law of large numbers), and study a correction term (of central limit theorem-type). The accuracy of these approximations is assessed by numerical simulation of pathwise losses and the estimation of payoff functions as they appear in basket credit derivatives. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.09726&r=all |
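The stability issue the authors analyse, namely that the Euler--Maruyama step must shrink with the mean-reversion time, is easy to reproduce for a fast Ornstein--Uhlenbeck process (the parameters below are our own illustrative choices):

```python
import math
import random

def simulate_ou(eps, dt, n_steps, nu=1.0, seed=7):
    """Euler--Maruyama for the fast OU process dY = -(Y/eps) dt + (nu/sqrt(eps)) dW.
    The scheme is stable only for dt < 2*eps, so the step size must be scaled
    with the mean-reversion speed."""
    rng = random.Random(seed)
    y, out = 0.0, []
    for _ in range(n_steps):
        y += -(y / eps) * dt + (nu / math.sqrt(eps)) * rng.gauss(0.0, math.sqrt(dt))
        out.append(y)
    return out

eps = 0.01
ys = simulate_ou(eps, dt=eps / 100, n_steps=200_000)
var = sum(y * y for y in ys) / len(ys)
print(round(var, 2))   # close to the stationary variance nu^2 / 2 = 0.5

bad = simulate_ou(eps, dt=3 * eps, n_steps=100)
print(abs(bad[-1]) > 1e6)   # True: an unscaled step size blows up
```

The ergodic-average approximations studied in the paper exploit exactly this fast mixing: over any macroscopic time interval the volatility process is close to its stationary distribution.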
By: | Hull, Isaiah (Research Department, Central Bank of Sweden); Sattath, Or (Department of Computer Science); Diamanti, Eleni (LIP6, CNRS); Wendin, Göran (Department of Microtechnology and Nanoscience) |
Abstract: | Research on quantum technology spans multiple disciplines: physics, computer science, engineering, and mathematics. The objective of this manuscript is to provide an accessible introduction to this emerging field for economists that is centered around quantum computing and quantum money. We proceed in three steps. First, we discuss basic concepts in quantum computing and quantum communication, assuming knowledge of linear algebra and statistics, but not of computer science or physics. This covers fundamental topics, such as qubits, superposition, entanglement, quantum circuits, oracles, and the no-cloning theorem. Second, we provide an overview of quantum money, an early invention of the quantum communication literature that has recently been partially implemented in an experimental setting. One form of quantum money offers the privacy and anonymity of physical cash, the option to transact without the involvement of a third party, and the efficiency and convenience of a debit card payment. Such features cannot be achieved in combination with any other form of money. Finally, we review all existing quantum speedups that have been identified for algorithms used to solve and estimate economic models. This includes function approximation, linear systems analysis, Monte Carlo simulation, matrix inversion, principal component analysis, linear regression, interpolation, numerical differentiation, and true random number generation. We also discuss the difficulty of achieving quantum speedups and comment on common misconceptions about what is achievable with quantum computing. |
Keywords: | Quantum Computing; Econometrics; Computational Economics; Money; Central Banks |
JEL: | C50 C60 E40 E50 |
Date: | 2020–12–01 |
URL: | http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0398&r=all |
By: | Louis Golowich; Shengwu Li |
Abstract: | We present a polynomial-time algorithm that determines, given some choice rule, whether there exists an obviously strategy-proof mechanism for that choice rule. |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2101.05149&r=all |
By: | Heinrich, Florian; Appel, Franziska; Balmann, Alfons |
Abstract: | After land prices in Germany had increased continuously since 2006, policy makers, representatives of farmers' unions, NGOs, and farmers started to discuss or propose new land market regulations to stop price increases and to protect smaller farmers in particular. In this paper, we analyze different types of regulations for the land rental market with the agent-based model AgriPoliS. Our simulation results show that price and farm size limitations may inhibit rental price increases and reduce structural change. However, the regulations do not preserve the number of small farms; neither do they have a substantial positive impact on their profitability and competitiveness. Many small farms still exit agricultural production and only a few are able to grow into a larger size class. Beyond redistributional costs, e.g. those borne by landowners, economic and social costs result from reduced average economic land rents, less regional value-added, and less employment, caused by a reduced functionality of the land market and biased incentives. |
Keywords: | structural change, land market, land market regulation, agent-based modeling |
JEL: | Q15 Q18 C63 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:forlwp:122019&r=all |
By: | NARITA Yusuke; AIHARA Shunsuke; SAITO Yuta; MATSUTANI Megumi; YATA Kohei |
Abstract: | From public policy to business, machine learning and other algorithms produce a growing portion of treatment decisions and recommendations. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to characterize the sources of causal-effect identification for a class of stochastic and deterministic algorithms. This identification result translates into consistent estimators of causal effects and the counterfactual performance of new algorithms. We apply our method to improve a large-scale fashion e-commerce platform (ZOZOTOWN). We conclude by providing public policy applications. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:eti:rdpsjp:20045&r=all |
By: | Böhl, Gregor |
Abstract: | Structural macroeconometric analysis and new HANK-type models with extremely high dimensionality require fast and robust methods to efficiently deal with occasionally binding constraints (OBCs), especially since major developed economies have again hit the zero lower bound on nominal interest rates. This paper shows that a linear dynamic rational expectations system with OBCs, depending on the expected duration of the constraint, can be represented in closed form. Combined with a set of simple equilibrium conditions, this can be exploited to avoid matrix inversions and simulations at runtime for significant gains in computational speed. An efficient implementation is provided in Python programming language. Benchmarking results show that for medium-scale models with an OBC, more than 150,000 state vectors can be evaluated per second. This is an improvement of more than three orders of magnitude over existing alternatives. Even state evaluations of large HANK-type models with almost 1000 endogenous variables require only 0.1 ms. |
Keywords: | Occasionally Binding Constraints, Effective Lower Bound, Computational Methods |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:imfswp:148&r=all |
By: | Daniel Poh; Bryan Lim; Stefan Zohren; Stephen Roberts |
Abstract: | The success of a cross-sectional systematic strategy depends critically on accurately ranking assets prior to portfolio construction. Contemporary techniques perform this ranking step either with simple heuristics or by sorting outputs from standard regression or classification models, which have been demonstrated to be sub-optimal for ranking in other domains (e.g. information retrieval). To address this deficiency, we propose a framework to enhance cross-sectional portfolios by incorporating learning-to-rank algorithms, which improve ranking accuracy by learning pairwise and listwise structures across instruments. Using cross-sectional momentum as a demonstrative case study, we show that the use of modern machine learning ranking algorithms can substantially improve the trading performance of cross-sectional strategies -- delivering an approximately threefold improvement in Sharpe ratios compared to traditional approaches. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.07149&r=all |
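A pairwise (RankNet-style) learning-to-rank objective of the kind this framework builds on can be sketched on synthetic data (the linear scorer and data below are our own illustration, not the paper's model): for every pair where asset i outperformed asset j, a logistic loss pushes score(i) above score(j).

```python
import math
import random

def pairwise_logistic_fit(features, returns, lr=0.1, epochs=200):
    """Linear scorer trained with a pairwise logistic (RankNet-style) loss:
    for each pair where asset i outperformed asset j, the gradient step
    pushes score(i) - score(j) upward."""
    w = [0.0] * len(features[0])
    pairs = [(i, j) for i in range(len(returns)) for j in range(len(returns))
             if returns[i] > returns[j]]
    for _ in range(epochs):
        for i, j in pairs:
            dx = [a - b for a, b in zip(features[i], features[j])]
            diff = sum(wk * dk for wk, dk in zip(w, dx))
            # derivative of log(1 + e^{-diff}) w.r.t. diff, clamped for overflow
            g = -1.0 / (1.0 + math.exp(min(diff, 60.0)))
            for k in range(len(w)):
                w[k] -= lr * g * dx[k]
    return w

# Synthetic cross-section: returns driven by the first feature plus small noise
rng = random.Random(3)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(30)]
r = [x[0] + 0.1 * rng.gauss(0, 1) for x in X]

w = pairwise_logistic_fit(X, r)
scores = [sum(wk * xk for wk, xk in zip(w, x)) for x in X]
ordered = [(i, j) for i in range(30) for j in range(30) if r[i] > r[j]]
accuracy = sum(scores[i] > scores[j] for i, j in ordered) / len(ordered)
print(round(accuracy, 2))   # a high share of pairs ranked concordantly
```

Sorting by score rather than by a pointwise return forecast is the key difference from the regression-based ranking the paper criticises.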
By: | Gabriel Ahlfeldt; Thilo N. H. Albers; Kristian Behrens |
Abstract: | We harness big data to detect prime locations—large clusters of knowledge-based tradable services—in 125 global cities and track changes in the within-city geography of prime service jobs over a century. Historically smaller cities that did not develop early public transit networks are less concentrated today and have prime locations farther away from their historic cores. We rationalize these findings in an agent-based model that features extreme agglomeration, multiple equilibria, and path dependence. Both city size and public transit networks anchor city structure. Exploiting major disasters and using a novel instrument—subway potential—we provide causal evidence for these mechanisms and disentangle size- from transport network effects. |
Keywords: | prime services, internal city structure, agent-based model, multiple equilibria and path dependence, transport networks |
JEL: | R38 R52 R58 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_8768&r=all |
By: | Rodríguez-García, Jair Hissarly; Venegas-Martínez, Francisco |
Abstract: | The efficient and transparent granting of microcredits through digital platforms to individuals who carry out economic activities, who seek to maintain their employment and that of their workers, and who do not have access to the conventional financial system is, without a doubt, an urgent problem to be solved in the health crisis that Mexico is currently going through. This research develops several credit risk models and strategies that allow promoting credit inclusion in Mexico in a fair and sustainable manner, in an environment of uncertainty generated by the present and expected ravages of the COVID-19 pandemic. To this end, the machine learning approach from data science is used; in particular, the tools employed are decision tree regression, random forests, radial basis functions, boosting, K-Nearest Neighbors (KNN), and neural networks. |
Keywords: | credit risk, data science, credit markets, financial institutions, financial inclusion |
JEL: | G23 |
Date: | 2021–01–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:105133&r=all |
By: | Daniel Fehrle (University of Augsburg, Department of Economics); Christopher Heiberger (University of Augsburg, Department of Economics); Johannes Huber (University of Augsburg, Department of Economics) |
Abstract: | Polynomial chaos expansion (PCE) provides a method that enables the user to represent a quantity of interest (QoI) of a model's solution as a series expansion of uncertain model inputs, usually its parameters. Among the QoIs are the policy function, the second moments of observables, or the posterior kernel. PCE thus sidesteps repeated and time-consuming evaluations of the model's outcomes. The paper discusses the suitability of PCE for computational economics. We therefore introduce the theory behind PCE, analyze the convergence behavior for different elements of the solution of the standard real business cycle model as an illustrative example, and check the accuracy when standard empirical methods are applied. The results are promising, both in terms of accuracy and efficiency. |
Keywords: | Polynomial Chaos Expansion, parameter inference, parameter uncertainty, solution methods |
JEL: | C11 C13 C32 C63 |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:aug:augsbe:0341&r=all |
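The mechanics of PCE can be sketched in a few lines: the quantity of interest is regressed on polynomials that are orthogonal with respect to the distribution of the uncertain input, and the fitted surrogate then replaces further model evaluations. The toy model, input distribution, and polynomial degree below are illustrative choices, not taken from the paper:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Toy "model": a quantity of interest as a nonlinear function of one
# uncertain parameter theta ~ N(0, 1). A stand-in for e.g. a policy
# function evaluated at a fixed state; purely illustrative.
def qoi(theta):
    return np.exp(0.3 * theta) + 0.1 * theta**2

rng = np.random.default_rng(0)
theta = rng.standard_normal(2000)   # draws from the input distribution
y = qoi(theta)                      # (potentially expensive) model runs

# PCE: regress the QoI on probabilists' Hermite polynomials, which are
# orthogonal with respect to the standard normal weight.
degree = 6
Psi = hermevander(theta, degree)    # design matrix of basis evaluations
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# The cheap surrogate replaces repeated model evaluations at new draws.
theta_new = rng.standard_normal(1000)
y_hat = hermevander(theta_new, degree) @ coef
max_err = np.max(np.abs(y_hat - qoi(theta_new)))
print(f"max surrogate error on fresh draws: {max_err:.2e}")
```

Because the Hermite coefficients of this smooth QoI decay factorially, even a degree-6 truncation reproduces the model essentially exactly, which is the sample-efficiency argument the abstract makes.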
By: | Karim Barigou (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Valeria Bignozzi (Department of Statistics and Quantitative Methods University of Milano-Bicocca); Andreas Tsanakas (The Business School (formerly Cass), City, University of London) |
Abstract: | Current approaches to fair valuation in insurance often follow a two-step approach, combining quadratic hedging with application of a risk measure on the residual liability, to obtain a cost-of-capital margin. In such approaches, the preferences represented by the regulatory risk measure are not reflected in the hedging process. We address this issue by an alternative two-step hedging procedure, based on generalised regression arguments, which leads to portfolios that are neutral with respect to a risk measure, such as Value-at-Risk or the expectile. First, a portfolio of traded assets aimed at replicating the liability is determined by local quadratic hedging. Second, the residual liability is hedged using an alternative objective function. The risk margin is then defined as the cost of the capital required to hedge the residual liability. When quantile regression is used in the second step, yearly solvency constraints are naturally satisfied; furthermore, the portfolio is a risk minimiser among all hedging portfolios that satisfy such constraints. We present a neural network algorithm for the valuation and hedging of insurance liabilities based on a backward iterations scheme. The algorithm is fairly general and easily applicable, as it only requires simulated paths of risk drivers. |
Keywords: | Market-consistent valuation,Quantile regression,Solvency II,Cost-of-capital,Dynamic risk measurement |
Date: | 2020–12–07 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03043244&r=all |
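A minimal sketch of the quantile-regression idea in the second step, under strong simplifying assumptions not made by the paper (a single traded asset, a scalar hedge ratio, static rather than dynamic hedging): fitting the hedge by subgradient descent on the pinball loss drives the q-quantile of the hedged residual to zero, which is the sense in which the solvency constraint is "naturally satisfied".

```python
import numpy as np

def quantile_hedge(S, L, q, lr=0.02, n_steps=6000):
    """Scalar hedge ratio phi and capital c minimising the pinball
    (check) loss of the residual L - phi*S - c; at the optimum the
    q-quantile of the hedged residual is approximately zero."""
    phi, c = 0.0, 0.0
    for _ in range(n_steps):
        u = L - phi * S - c
        # subgradient of the pinball loss mean(max(q*u, (q-1)*u))
        w = np.where(u > 0, q, q - 1.0)
        phi += lr * np.mean(w * S)
        c += lr * np.mean(w)
    return phi, c

rng = np.random.default_rng(2)
S = rng.standard_normal(5000)             # simulated hedging-asset payoff
L = 2.0 * S + rng.standard_normal(5000)   # liability: exposure to S plus noise
phi, c = quantile_hedge(S, L, q=0.95)
residual = L - phi * S - c
print(np.quantile(residual, 0.95))        # should be close to 0
```

The same objective swapped for an expectile (asymmetric squared) loss gives the expectile-neutral variant the abstract mentions.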
By: | Gadat, Sébastien; Gavra, Ioana |
Abstract: | This paper studies asymptotic properties of adaptive algorithms widely used in optimization and machine learning, among them Adagrad and RMSProp, which are involved in most blackbox deep learning algorithms. We adopt a non-convex landscape optimization point of view, consider a one-time-scale parametrization, and cover the situations where these algorithms are used with or without mini-batches. Taking the standpoint of stochastic algorithms, we establish the almost sure convergence of these methods, when a decreasing step size is used, towards the set of critical points of the target function. Under a mild additional assumption on the noise, we also obtain convergence towards the set of minimizers of the function. Along the way, we obtain a convergence rate for the methods, in the vein of the works of [GL13]. |
Keywords: | Stochastic optimization; Stochastic adaptive algorithm; Convergence of random variables |
Date: | 2021–01–07 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:125116&r=all |
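As a rough illustration of the setting studied here, the following sketch runs Adagrad — whose per-coordinate step size implicitly decreases as squared gradients accumulate — on a simple non-convex landscape. The test function and tuning constants are invented for illustration and are not from the paper:

```python
import numpy as np

def adagrad(grad, x0, base_lr=0.5, eps=1e-8, n_steps=5000):
    """Adagrad: per-coordinate learning rates shrink as the running
    sum of squared gradients grows, an effectively decreasing step."""
    x = np.asarray(x0, dtype=float)
    g2_sum = np.zeros_like(x)
    for _ in range(n_steps):
        g = grad(x)
        g2_sum += g**2
        x -= base_lr * g / (np.sqrt(g2_sum) + eps)
    return x

# Non-convex landscape f(x, y) = (x^2 - 1)^2 + y^2 with critical
# points (+1, 0), (-1, 0) (minima) and (0, 0) (a saddle).
def grad_f(p):
    x, y = p
    return np.array([4 * x * (x**2 - 1), 2 * y])

x_star = adagrad(grad_f, x0=[0.3, 1.0])
print(x_star)   # expected to land near one of the minima (+/-1, 0)
```

The convergence result in the paper is the rigorous counterpart of what the sketch displays: with a decreasing step, the iterates settle on the set of critical points, and under a noise assumption on the minimizers rather than the saddle.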
By: | Xavier Warin |
Abstract: | We propose deep neural network algorithms to compute the efficient frontier in Mean-Variance and Mean-CVaR portfolio optimization problems. We show that we are able to deal with such problems when both the dimension of the state and the dimension of the control are high. Adding further constraints, we compare different formulations and show that a new projected feedforward network is able to handle global constraints on the portfolio weights while outperforming classical penalization methods. All developed formulations are compared with one another; depending on the problem and its dimension, some formulations may be preferred. |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2101.02044&r=all |
By: | Max Kleinebrahm; Jacopo Torriti; Russell McKenna; Armin Ardone; Wolf Fichtner |
Abstract: | Models simulating household energy demand based on different occupant and household types and their behavioral patterns have received increasing attention over the last years, due to the need to better understand the fundamental characteristics that shape the demand side. Most models described in the literature are based on time use survey data and Markov chains. Due to the nature of the underlying data and the Markov property, such models cannot adequately capture day-to-day dependencies in occupant behavior. An accurate mapping of day-to-day dependencies is of increasing importance for reproducing mobility patterns and thus for assessing the charging flexibility of electric vehicles. This study bridges the gap between energy-related activity modelling and novel machine learning approaches, with the objective of better incorporating findings from the field of social practice theory into the simulation of occupancy behavior. Weekly mobility data are merged with daily time use survey data using attention-based models. In a first step, an autoregressive model is presented which generates synthetic weekly mobility schedules of individual occupants and thereby captures day-to-day dependencies in mobility behavior. In a second step, an imputation model is presented which enriches the weekly mobility schedules with detailed information about energy-relevant at-home activities. The weekly activity profiles form the basis for modelling consistent electricity, heat and mobility demand profiles of households. Furthermore, the approach presented forms the basis for providing data on socio-demographically differentiated occupant behavior to the general public. |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2101.00940&r=all |
By: | Pumplun, Luisa; Fecho, Mariska; Islam, Nihal; Buxmann, Peter |
Date: | 2021–01–05 |
URL: | http://d.repec.org/n?u=RePEc:dar:wpaper:124660&r=all |
By: | Dominick Bartelme; Ting Lan; Andrei A. Levchenko |
Abstract: | This paper estimates the impact of external demand shocks on real income. Our empirical strategy is based on a first order approximation to a wide class of small open economy models that feature sector-level gravity in trade flows. The framework allows us to measure foreign shocks and characterize their impact on income in terms of reduced-form elasticities. We use machine learning techniques to group 4-digit manufacturing sectors into a smaller number of clusters, and show that the cluster-level elasticities of income with respect to foreign shocks can be estimated using high-dimensional statistical techniques. We find clear evidence of heterogeneity in the income responses to different foreign shocks. Foreign demand shocks in complex intermediate and capital goods have large positive impacts on real income, whereas impacts in other sectors are negligible. The estimates imply that the pattern of sectoral specialization plays a quantitatively large role in how foreign shocks affect real income, while geographic position plays a smaller role. Finally, a calibrated multi-sector production and trade model can rationalize both the average and the heterogeneity in real income elasticities to foreign shocks under reasonable values of structural parameters. |
JEL: | F43 F62 |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:28274&r=all |
By: | Jeffrey Grogger; Sean Gupta; Ria Ivandic; Tom Kirchmaier |
Abstract: | We compare predictions from a conventional protocol-based approach to risk assessment with those based on a machine-learning approach. We first show that the conventional predictions are less accurate than, and have similar rates of negative prediction error as, a simple Bayes classifier that makes use only of the base failure rate. Machine learning algorithms based on the underlying risk assessment questionnaire do better under the assumption that negative prediction errors are more costly than positive prediction errors. Machine learning models based on two-year criminal histories do even better. Indeed, adding the protocol-based features to the criminal histories adds little to the predictive adequacy of the model. We suggest using the predictions based on criminal histories to prioritize incoming calls for service, and devising a more sensitive instrument to distinguish true from false positives that result from this initial screening. |
JEL: | K14 K36 |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:28293&r=all |
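The base-rate benchmark mentioned in the abstract — a Bayes classifier that uses only the base failure rate under asymmetric error costs — reduces to a one-line decision rule. The cost figures below are illustrative, not the paper's:

```python
# A classifier that ignores all case features and uses only the base
# failure rate p: flag every case as high risk if and only if the
# expected cost of a missed failure exceeds that of a false alarm.
def base_rate_classifier(p, cost_fn, cost_fp):
    """Return the cost-minimising constant prediction (1 = flag as risky).

    Flagging everyone costs (1 - p) * cost_fp per case on average;
    flagging no one costs p * cost_fn per case on average.
    """
    return 1 if p * cost_fn > (1 - p) * cost_fp else 0

# Hypothetical numbers: a 10% base failure rate, with a missed failure
# either 20x or 5x as costly as a false alarm.
print(base_rate_classifier(0.10, cost_fn=20.0, cost_fp=1.0))  # 0.1*20 > 0.9 -> 1
print(base_rate_classifier(0.10, cost_fn=5.0, cost_fp=1.0))   # 0.1*5 < 0.9 -> 0
```

The point of the comparison is that a protocol whose predictions are no more accurate than this constant rule adds no information beyond the base rate.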
By: | Yiyan Huang; Cheuk Hang Leung; Xing Yan; Qi Wu; Nanbo Peng; Dongdong Wang; Zhixiang Huang |
Abstract: | This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects and hence the estimation error can be severe. We therefore propose another approach to constructing estimators such that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of estimating the causal quantities between the classical estimators and the proposed estimators. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction of estimation error is strikingly substantial if the causal effects are accounted for correctly. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.09448&r=all |
By: | Hinrichs, Nils; Kolbe, Jens; Werwatz, Axel |
Abstract: | In this paper, we apply Ridge Regression, the Lasso and the Elastic Net to a rich and reliable data set of condominiums sold in Berlin, Germany, between 1996 and 2013. We compare their predictive performance in a rolling window design to that of a simple linear OLS procedure. Our results suggest that Ridge Regression, the Lasso and the Elastic Net show potential as AVM procedures but need to be handled with care because of their uneven prediction performance. At least in our application, these procedures are not the "automated" solution to Automated Valuation Modeling that they may seem to be. |
Keywords: | Automated valuation,Machine learning,Elastic Net,Forecast performance |
JEL: | R31 C14 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:forlwp:222020&r=all |
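A minimal sketch of such a rolling-window comparison, using closed-form ridge regression on synthetic data rather than the Berlin transactions; the window length, penalty, and data-generating process are arbitrary illustrative choices:

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge: (X'X + alpha*I)^(-1) X'y (OLS when alpha = 0)."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)

def rolling_mse(X, y, window, alpha):
    """Refit on each trailing window of sales, predict the next sale."""
    errs = []
    for t in range(window, len(y)):
        w = ridge_fit(X[t - window:t], y[t - window:t], alpha)
        errs.append((y[t] - X[t] @ w) ** 2)
    return float(np.mean(errs))

# Synthetic stand-in for hedonic pricing data: a few noisy, partly
# collinear characteristics (size, rooms, ...) and log prices.
rng = np.random.default_rng(1)
n, k = 300, 8
X = rng.standard_normal((n, k))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(n)   # collinear regressors
beta = np.array([2.0, 0.0, 1.0, 0, 0, 0, 0, 0])
y = X @ beta + rng.standard_normal(n)

print(f"rolling OLS MSE:   {rolling_mse(X, y, window=60, alpha=0.0):.3f}")
print(f"rolling ridge MSE: {rolling_mse(X, y, window=60, alpha=5.0):.3f}")
```

Whether the penalized fit beats OLS in any given window depends on the bias-variance trade-off the penalty induces, which is one plain way to read the paper's finding of "uneven prediction performance".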
By: | Jeremy Fouliard; Michael Howell; Hélène Rey |
Abstract: | Financial crises cause economic, social and political havoc. Macroprudential policies are gaining traction but are still severely under-researched compared to monetary policy and fiscal policy. We use the general framework of sequential predictions also called online machine learning to forecast crises out-of-sample. Our methodology is based on model averaging and is meta-statistic since we can incorporate any predictive model of crises in our set of experts and test its ability to add information. We are able to predict systemic financial crises twelve quarters ahead out-of-sample with high signal-to-noise ratio in most cases. We analyse which experts provide the most information for our predictions at each point in time and for each country, allowing us to gain some insights into economic mechanisms underlying the building of risk in economies. |
JEL: | G01 G15 |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:28302&r=all |
By: | Andrew Bennett; Nathan Kallus |
Abstract: | The conditional moment problem is a powerful formulation for describing structural causal parameters in terms of observables, a prominent example being instrumental variable regression. A standard approach is to reduce the problem to a finite set of marginal moment conditions and apply the optimally weighted generalized method of moments (OWGMM), but this requires we know a finite set of identifying moments, can still be inefficient even if identifying, or can be unwieldy and impractical if we use a growing sieve of moments. Motivated by a variational minimax reformulation of OWGMM, we define a very general class of estimators for the conditional moment problem, which we term the variational method of moments (VMM) and which naturally enables controlling infinitely-many moments. We provide a detailed theoretical analysis of multiple VMM estimators, including based on kernel methods and neural networks, and provide appropriate conditions under which these estimators are consistent, asymptotically normal, and semiparametrically efficient in the full conditional moment model. This is in contrast to other recently proposed methods for solving conditional moment problems based on adversarial machine learning, which do not incorporate optimal weighting, do not establish asymptotic normality, and are not semiparametrically efficient. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.09422&r=all |
By: | Juan Arismendi-Zambrano; Massimo Guidolin; Alessia Paccagnini |
Abstract: | We construct a communication risk profile of U.S. Federal Reserve Chairs by measuring the sentiment of their public statements during their tenure. We analyze the impact of communications' sentiment on the market's interest rate price discovery process after FOMC meetings. The results show a significant difference in communications' sentiment that is heterogeneous in personal characteristics, controlling for the economic environment, and that the Chair's communications' sentiment plays a role in diminishing the surprise of Federal Reserve announcements. |
Keywords: | Federal Reserve, Monetary Policy, Communications, Federal Funds Rate, Machine Learning |
JEL: | G12 G14 G18 G21 G28 G41 |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2020-105&r=all |
By: | Nicola Cortinovis; Frank van der Wouden |
Abstract: | Research on team-work has mostly focused on scientific, technological and corporate domains, in which team-work is organized in systematic, coordinated and formal processes. However, it is unclear to what extent these findings apply to fields in which team-work is less institutionalized, unregulated and occurs outside corporate and academic boundaries. In this paper we study team-work among board-game designers to bring new insights into the effect of team composition on performance. The board-game industry offers important advantages to complement the extant literature, because team-work during game designing is a rather informal, unstructured process that relies on creativity, imagination and out-of-the-box thinking. We apply econometric and machine learning tools to a novel detail-rich database with information on 10,000 quality-rated games and their 5,167 designers. We examine whether collaborating with someone with higher past ratings increases the quality of output of the collaborated board-game. In addition, we explore three well-documented characteristics that may also impact the quality of output through collaboration. Our findings indicate that the quality of the output of a board-game designer significantly increases when (1) collaborating with a better performing designer, (2) having little or a lot of overlap in terms of expertise with the collaborator and (3) being geographically proximate to the collaborator. These findings suggest that the relation between team-work and performance in the board-game industry differs from that in industries and sectors in which collaboration is coordinated in formal settings. We connect our results to other debates in the innovation literature and propose policy and managerial implications. |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:egu:wpaper:2104&r=all |
By: | Alexis Marchal |
Abstract: | I propose a new tool to characterize the resolution of uncertainty around FOMC press conferences. It relies on the construction of a measure capturing the level of discussion complexity between the Fed Chair and reporters during the Q&A sessions. I show that complex discussions are associated with higher equity returns and a drop in realized volatility. The method creates an attention score by quantifying how much the Chair needs to rely on reading internal documents to be able to answer a question. This is accomplished by building a novel dataset of video images of the press conferences and leveraging recent deep learning algorithms from computer vision. This alternative data provides new information on nonverbal communication that cannot be extracted from the widely analyzed FOMC transcripts. This paper can be seen as a proof of concept that certain videos contain valuable information for the study of financial markets. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.06573&r=all |
By: | McNamara, Sarah |
Abstract: | This paper provides estimates of the short-term individual returns to Higher Education (HE) in the United Kingdom, focusing on the effects of attending HE on the labour market outcomes of dropouts. Results show differential labour market outcomes for dropouts vs. individuals who have never attended HE, where outcomes are employment, wages and occupational status. I find that female dropouts, on average, have a higher occupational status than those who have never participated in HE, but do not experience a wage premium. Conversely, male dropouts experience a wage premium relative to those who have never participated in HE, but the effect on occupational status is comparatively small. The evidence is mixed, however, as both male and female dropouts are more likely to be unemployed, though the effect is larger for males. |
Keywords: | university education,higher education,graduation,dropout,returns to education |
JEL: | I23 I26 J31 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:zewdip:20084&r=all |