Computational Economics
http://lists.repec.org/mailman/listinfo/nep-cmp
2020-07-27
Applying Dynamic Training-Subset Selection Methods Using Genetic Programming for Forecasting Implied Volatility
http://d.repec.org/n?u=RePEc:arx:papers:2007.07207&r=cmp
Volatility is a key variable in option pricing, trading and hedging strategies. The purpose of this paper is to improve the accuracy of forecasting implied volatility using an extension of genetic programming (GP) by means of dynamic training-subset selection methods. These methods manipulate the training data in order to improve the fitting of out-of-sample patterns. When applied with the static subset selection method, which uses a single training data sample, GP can generate forecasting models that are not adapted to some out-of-sample fitness cases. In order to improve the predictive accuracy of generated GP patterns, dynamic subset selection methods are introduced to the GP algorithm, allowing a regular change of the training sample during evolution. Four dynamic training-subset selection methods are proposed based on random, sequential or adaptive subset selection. The last approach uses an adaptive subset weight measuring the sample difficulty according to the fitness-case errors. Using real data from S&P 500 index options, these techniques are compared to the static subset selection method. Based on total MSE and the percentage of non-fitted observations, results show that the dynamic approach improves the forecasting performance of the generated GP models, especially those obtained from the adaptive random training-subset selection method applied to the whole set of training samples.
Sana Ben Hamida
Wafa Abdelmalek
Fathi Abid
2020-06
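The adaptive selection idea above can be sketched as follows: each training case is weighted by its current error, and the next generation's training subset is resampled in proportion to those weights. This is an illustrative Python sketch, not the authors' implementation; the function name and weighting rule are assumptions.

```python
import numpy as np

def adaptive_subset(errors, subset_size, rng):
    """Sample a training subset with probability proportional to each
    fitness case's current error, so harder cases are selected more often.
    Illustrative weighting rule, not the paper's exact scheme."""
    weights = np.asarray(errors, dtype=float) + 1e-12  # keep weights positive
    probs = weights / weights.sum()
    return rng.choice(len(errors), size=subset_size, replace=False, p=probs)

rng = np.random.default_rng(0)
case_errors = np.array([0.1, 0.9, 0.05, 0.7, 0.3])  # per-case fitness errors
idx = adaptive_subset(case_errors, subset_size=3, rng=rng)  # next training subset
```

The evolved population would then be evaluated on the resampled subset at each generation, which is what makes the selection "dynamic".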
Dynamic Hedging using Generated Genetic Programming Implied Volatility Models
http://d.repec.org/n?u=RePEc:arx:papers:2006.16407&r=cmp
The purpose of this paper is to improve the accuracy of dynamic hedging using implied volatilities generated by genetic programming. Using real data from S&P 500 index options, the genetic programming's ability to forecast Black-Scholes implied volatility is compared between static and dynamic training-subset selection methods. The performance of the best generated GP implied volatilities is tested in dynamic hedging and compared with the Black-Scholes model. Based on total MSE, the dynamic training of GP yields better results than those obtained from static training with fixed samples. According to hedging errors, the GP model is more accurate than the BS model in almost all hedging strategies, particularly for in-the-money call options and at-the-money put options.
Fathi Abid
Wafa Abdelmalek
Sana Ben Hamida
2020-06
Market Efficiency in the Age of Big Data
http://d.repec.org/n?u=RePEc:cpr:ceprdp:14235&r=cmp
Modern investors face a high-dimensional prediction problem: thousands of observable variables are potentially relevant for forecasting. We reassess the conventional wisdom on market efficiency in light of this fact. In our model economy, which resembles a typical machine learning setting, N assets have cash flows that are a linear function of J firm characteristics, but with uncertain coefficients. Risk-neutral Bayesian investors impose shrinkage (ridge regression) or sparsity (Lasso) when they estimate the J coefficients of the model and use them to price assets. When J is comparable in size to N, returns appear cross-sectionally predictable using firm characteristics to an econometrician who analyzes data from the economy ex post. A factor zoo emerges even without p-hacking and data-mining. Standard in-sample tests of market efficiency reject the no-predictability null with high probability, despite the fact that investors optimally use the information available to them in real time. In contrast, out-of-sample tests retain their economic meaning.
Martin, Ian
Nagel, Stefan
Big Data; Machine Learning; Market Efficiency
2019-12
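The shrinkage mechanism in the model above can be illustrated with a small simulation: when J is comparable in size to N, a ridge (Bayesian) estimate of the cash-flow coefficients is pulled toward zero relative to OLS. A minimal sketch, with all dimensions and the penalty chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
N, J = 200, 100                          # assets and characteristics, J comparable to N
X = rng.standard_normal((N, J))          # observed firm characteristics
beta = 0.1 * rng.standard_normal(J)      # true but uncertain cash-flow coefficients
y = X @ beta + rng.standard_normal(N)    # realized cash flows

lam = 10.0                               # ridge penalty (prior precision), illustrative
# Shrinkage (ridge) estimate a Bayesian investor would use to price assets:
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(J), X.T @ y)
# Unregularized OLS for comparison:
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The gap between the shrunk coefficients and the ex-post OLS fit is what an econometrician analyzing the data after the fact would pick up as apparent cross-sectional predictability.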
Taming the Factor Zoo: A Test of New Factors
http://d.repec.org/n?u=RePEc:cpr:ceprdp:14266&r=cmp
We propose a model selection method to systematically evaluate the contribution to asset pricing of any new factor, above and beyond what a high-dimensional set of existing factors explains. Our methodology accounts for model selection mistakes that produce a bias due to omitted variables, unlike standard approaches that assume perfect variable selection. We apply our procedure to a set of factors recently discovered in the literature. While most of these new factors are shown to be redundant relative to the existing factors, a few have statistically significant explanatory power beyond the hundreds of factors proposed in the past.
Feng, Gavin
Giglio, Stefano W
Xiu, Dacheng
Elastic Net; Factors; Lasso; Machine Learning; PCA; Post-Selection Inference; Regularized Two-Pass Estimation; Stochastic discount factor; variable selection
2020-01
The U.S. Coal Sector between Shale Gas and Renewables: Last Resort Coal Exports?
http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1880&r=cmp
Coal consumption and production have sharply declined in recent years in the U.S., despite political support. Reasons are mostly unfavorable economic conditions for coal, including competition from natural gas and renewables in the power sector, as well as an aging coal-fired power plant fleet. The U.S. Energy Information Administration as well as most models of North American energy markets depict continuously high shares of coal-fired power generation over the next decades in their current policies scenarios. We contrast their results with coal sector modelling based on bottom-up data and recent market trends. We project considerably lower near-term coal use for power generation in the U.S. This has significant effects on coal production and mining employment. Allowing new export terminals along the U.S. West Coast could ease cuts in U.S. production. Yet, exports are a highly uncertain strategy because the U.S. could be strongly affected by changes in global demand, for example from non-U.S. climate policy. Furthermore, coal production within the U.S. is likely to experience regional shifts, affecting location and number of mining jobs.
Christian Hauenstein
Franziska Holz
USA, coal, international coal trade, EMF34, numerical modeling, scenarios
2020
Numerical Simulation of Exchange Option with Finite Liquidity: Controlled Variate Model
http://d.repec.org/n?u=RePEc:arx:papers:2006.07771&r=cmp
In this paper we develop numerical pricing methodologies for European-style exchange options written on a pair of correlated assets, in a market with finite liquidity. In contrast to the standard multi-asset Black-Scholes framework, trading in our market model has a direct impact on the asset's price. The price impact is incorporated into the dynamics of the first asset through a specific trading strategy, as in the large-trader liquidity model. A two-dimensional Milstein scheme is implemented to simulate the pair of asset prices. The option value is numerically estimated by Monte Carlo simulation with the Margrabe option as a control variate. The time complexity of these numerical schemes is included. Finally, we provide a deep learning framework to implement this model effectively in a production environment.
Kevin S. Zhang
Traian A. Pirvu
2020-06
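In the frictionless limit, the Monte Carlo machinery the paper builds on can be checked against the closed-form Margrabe price. The sketch below simulates a pair of correlated zero-drift geometric Brownian motions with a Milstein scheme and compares the simulated exchange-option value with Margrabe's formula; parameters are illustrative and the liquidity impact is omitted.

```python
import numpy as np
from math import erf, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def margrabe(S1, S2, s1, s2, rho, T):
    """Closed-form Margrabe price of the exchange option max(S1_T - S2_T, 0)."""
    sig = sqrt(s1 * s1 + s2 * s2 - 2.0 * rho * s1 * s2)
    d1 = (log(S1 / S2) + 0.5 * sig * sig * T) / (sig * sqrt(T))
    d2 = d1 - sig * sqrt(T)
    return S1 * norm_cdf(d1) - S2 * norm_cdf(d2)

rng = np.random.default_rng(2)
S1, S2, s1, s2, rho, T = 100.0, 95.0, 0.2, 0.3, 0.5, 1.0  # illustrative parameters
n_steps, n_paths = 50, 100_000
dt = T / n_steps
x1 = np.full(n_paths, S1)
x2 = np.full(n_paths, S2)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + sqrt(1.0 - rho * rho) * rng.standard_normal(n_paths)
    dw1, dw2 = z1 * sqrt(dt), z2 * sqrt(dt)
    # Milstein step for driftless GBM: dS = sigma * S * dW
    x1 *= 1.0 + s1 * dw1 + 0.5 * s1 * s1 * (dw1 * dw1 - dt)
    x2 *= 1.0 + s2 * dw2 + 0.5 * s2 * s2 * (dw2 * dw2 - dt)

mc_price = float(np.maximum(x1 - x2, 0.0).mean())   # plain Monte Carlo estimate
cf_price = margrabe(S1, S2, s1, s2, rho, T)         # closed-form reference value
```

In the liquidity-impacted model, this closed-form value plays the role of the known mean of the control variate.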
Dynamic Portfolio Optimization with Real Datasets Using Quantum Processors and Quantum-Inspired Tensor Networks
http://d.repec.org/n?u=RePEc:arx:papers:2007.00017&r=cmp
In this paper we tackle the problem of dynamic portfolio optimization, i.e., determining the optimal trading trajectory for an investment portfolio of assets over a period of time, taking into account transaction costs and other possible constraints. This problem, well-known to be NP-Hard, is central to quantitative finance. After a detailed introduction to the problem, we implement a number of quantum and quantum-inspired algorithms on different hardware platforms to solve its discrete formulation using real data from daily prices over 8 years of 52 assets, and do a detailed comparison of the obtained Sharpe ratios, profits and computing times. In particular, we implement classical solvers (Gekko, exhaustive), D-Wave Hybrid quantum annealing, two different approaches based on Variational Quantum Eigensolvers on IBM-Q (one of them brand-new and tailored to the problem), and for the first time in this context also a quantum-inspired optimizer based on Tensor Networks. In order to fit the data into each specific hardware platform, we also consider a preprocessing step based on clustering of assets. From our comparison, we conclude that D-Wave Hybrid and Tensor Networks are able to handle the largest systems, where we do calculations up to 1272 fully-connected qubits for demonstrative purposes. Finally, we also discuss how to mathematically implement other possible real-life constraints, as well as several ideas to further improve the performance of the studied methods.
Samuel Mugel
Carlos Kuchkovsky
Escolastico Sanchez
Samuel Fernandez-Lorenzo
Jorge Luis-Hita
Enrique Lizaso
Roman Orus
2020-06
Multi-objective Optimal Control of Dynamic Integrated Model of Climate and Economy: Evolution in Action
http://d.repec.org/n?u=RePEc:arx:papers:2007.00449&r=cmp
One of the widely used models for studying the economics of climate change is the Dynamic Integrated model of Climate and Economy (DICE), developed by Professor William Nordhaus, one of the laureates of the 2018 Nobel Memorial Prize in Economic Sciences. Originally, a single-objective optimal control problem was defined on the DICE dynamics, aimed at maximizing social welfare. In this paper, a bi-objective optimal control problem is defined on the DICE model, whose objectives are maximizing social welfare and minimizing the temperature deviation of the atmosphere. This multi-objective optimal control problem is solved using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) and compared to previous work on the single-objective version of the problem. The resulting Pareto front rediscovers the previous results and generalizes to a wide range of non-dominated solutions that minimize the global temperature deviation while optimizing economic welfare. The previously used single-objective approach is unable to create such a variety of possibilities; hence, its offered solution is limited in vision and reachable performance. Besides this, the resulting Pareto-optimal set reveals that the temperature deviation cannot go below a certain lower limit unless there is significant technological advancement or a positive change in global conditions.
Mostapha Kalami Heris
Shahryar Rahnamayan
2020-06
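The core of NSGA-II is non-dominated sorting of candidate solutions. Below is a minimal sketch of the dominance filter only (the full algorithm additionally ranks successive fronts and applies crowding-distance selection); the objective values are toy numbers.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points when both objectives are minimized.
    Illustrates the dominance test inside NSGA-II's non-dominated sorting."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i in range(len(pts)):
        dominated = any(
            np.all(pts[j] <= pts[i]) and np.any(pts[j] < pts[i])
            for j in range(len(pts)) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Toy objective pairs: (negative welfare, temperature deviation), both minimized.
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 4.0), (4.0, 0.5)]
front = pareto_front(candidates)  # (2.5, 4.0) is the only dominated candidate
```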
The Role of Sentiment in the Economy of the 1920s
http://d.repec.org/n?u=RePEc:ces:ceswps:_8336&r=cmp
John Maynard Keynes composed The General Theory as a response to the Great Crash and Great Depression, with all their devastating consequences for the US macroeconomy and financial markets, as well as the rest of the world. The role of expectations his new theory set out has been widely accepted. The role he ascribed to “animal spirits” (i.e. the role of emotion in cognition) has remained much more controversial. We analyse over two million digitally stored news articles from The Wall Street Journal to construct a sentiment series that we use to measure the role of emotion at the time Keynes wrote. An eight-variable vector error correction model is then used to identify shocks to sentiment that are orthogonal to the fundamentals of the economy. We show that the identified “pure” sentiment shocks do have statistically and economically significant effects on output, money supply (M2), and the stock market for periods of the 1920s.
Ali Kabiri
Harold James
John Landon-Lane
David Tuckett
Rickard Nyman
Great Depression, general theory, algorithmic text analysis, behavioural economics
2020
Real-time turning point indicators: Review of current international practices
http://d.repec.org/n?u=RePEc:nsr:escoed:escoe-dp-2020-05&r=cmp
This paper presents the results of a survey that identifies real-time turning point indicators published by international statistical and economic institutions. It reports the evidence on past and present indicators used, the methodology underlying their construction and the way the indicators are presented. We find that business and consumer surveys are the most popular source of data, and that composite indicators, such as diffusion indices or first principal components, are the most popular types of indicators. The use of novel databases, big data and machine learning has been limited so far but has a promising future.
Cyrille Lenoel
Garry Young
business cycles, turning points, recession, leading indicator, composite indicator, diffusion index, bridge model, Markov-switching model
2020-04
Using Company Specific Headlines and Convolutional Neural Networks to Predict Stock Fluctuations
http://d.repec.org/n?u=RePEc:arx:papers:2006.12426&r=cmp
This work presents a Convolutional Neural Network (CNN) for the prediction of next-day stock fluctuations using company-specific news headlines. Experiments to evaluate model performance using various configurations of word embeddings and convolutional filter widths are reported. The total number of convolutional filters used is far fewer than is common, reducing the dimensionality of the task without loss of accuracy. Furthermore, multiple hidden layers with decreasing dimensionality are employed. A classification accuracy of 61.7% is achieved using pre-learned embeddings that are fine-tuned during training to represent the specific context of this task. Multiple filter widths are also implemented to detect different-length phrases that are key for classification. Trading simulations are conducted using the presented classification results. Initial investments are more than tripled over an 838-day testing period using the optimal classification configuration and a simple trading strategy. Two novel methods are presented to reduce the risk of the trading simulations. Adjustment of the sigmoid class threshold and re-labelling headlines using multiple classes form the basis of these methods. A combination of these approaches is found to more than double the Average Trade Profit (ATP) achieved during baseline simulations.
Jonathan Readshaw
Stefano Giani
2020-06
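One of the risk-reduction methods above, adjusting the sigmoid class threshold, amounts to trading only when the classifier is confident. A minimal sketch with illustrative thresholds (the paper's exact cutoffs are not reproduced here):

```python
import numpy as np

def trade_signals(probs, upper=0.7, lower=0.3):
    """Map sigmoid outputs to positions: long only above `upper`, short only
    below `lower`, flat in between. Thresholds are illustrative."""
    probs = np.asarray(probs, dtype=float)
    signals = np.zeros(len(probs), dtype=int)
    signals[probs >= upper] = 1    # confident 'up' prediction -> long
    signals[probs <= lower] = -1   # confident 'down' prediction -> short
    return signals

sig = trade_signals([0.9, 0.55, 0.45, 0.1, 0.75])
```

Raising the thresholds trades off fewer positions against a higher expected accuracy per trade.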
Deus ex Machina? A Framework for Macro Forecasting with Machine Learning
http://d.repec.org/n?u=RePEc:imf:imfwpa:2020/045&r=cmp
We develop a framework to nowcast (and forecast) economic variables with machine learning techniques. We explain how machine learning methods can address common shortcomings of traditional OLS-based models and use several machine learning models to predict real output growth with lower forecast errors than traditional models. By combining multiple machine learning models into ensembles, we lower forecast errors even further. We also identify measures of variable importance to help improve the transparency of machine learning-based forecasts. Applying the framework to Turkey reduces forecast errors by at least 30 percent relative to traditional models. The framework also better predicts economic volatility, suggesting that machine learning techniques could be an important part of the macro forecasting toolkit of many countries.
Marijn A. Bolhuis
Brett Rayner
Production growth; Capacity utilization; Economic growth; Stock markets; Emerging markets; Forecasts; Nowcasting; Machine learning; GDP growth; Cross-validation; Random Forest; Ensemble; Turkey; forecast error; factor model; predictor; forecast; OLS
2020-02-28
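The ensemble step described above can be illustrated in a few lines: averaging forecasts whose errors are imperfectly correlated lowers the ensemble's error below that of each member. A toy example with synthetic data and synthetic "models", not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
truth = rng.standard_normal(500)                         # target series (synthetic)
# Three imperfect 'models': the truth plus independent errors.
forecasts = [truth + 0.8 * rng.standard_normal(500) for _ in range(3)]
ensemble = np.mean(forecasts, axis=0)                    # simple average ensemble

def rmse(f):
    return float(np.sqrt(np.mean((f - truth) ** 2)))

single_rmse = [rmse(f) for f in forecasts]
ensemble_rmse = rmse(ensemble)   # below every individual model's error
```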
An approximation of the distribution of learning estimates in macroeconomic models
http://d.repec.org/n?u=RePEc:kof:wpskof:19-453&r=cmp
Adaptive learning under constant gain allows persistent deviations of beliefs from equilibrium so as to more realistically reflect agents' attempt at tracking the continuous evolution of the economy. A characterization of these beliefs is therefore paramount to a proper understanding of the role of expectations in the determination of macroeconomic outcomes. In this paper we propose a simple approximation of the first two moments (mean and variance) of the asymptotic distribution of learning estimates for a general class of dynamic macroeconomic models under constant-gain learning. Our approximation provides renewed convergence conditions that depend on the learning gain and the model's structural parameters. We validate the accuracy of our approximation with numerical simulations of a Cobweb model, a standard New-Keynesian model, and a model including a lagged endogenous variable. The relevance of our results is further evidenced by an analysis of learning stability and the effects of alternative specifications of interest rate policy rules on the distribution of agents' beliefs.
Jaqueson Kingeski Galimberti
expectations, adaptive learning, constant-gain, policy stability
2019-03
Swag: A Wrapper Method for Sparse Learning
http://d.repec.org/n?u=RePEc:chf:rpseri:rp2049&r=cmp
Predictive power has always been the main research focus of learning algorithms with the goal of minimizing the test error for supervised classification and regression problems. While the general approach for these algorithms is to consider all possible attributes in a dataset to best predict the response of interest, an important branch of research is focused on sparse learning in order to avoid overfitting which can greatly affect the accuracy of out-of-sample prediction. However, in many practical settings we believe that only an extremely small combination of different attributes affect the response whereas even sparse-learning methods can still preserve a high number of attributes in high-dimensional settings and possibly deliver inconsistent prediction performance. As a consequence, the latter methods can also be hard to interpret for researchers and practitioners, a problem which is even more relevant for the “black-box”-type mechanisms of many learning approaches. Finally, aside from needing to quantify prediction uncertainty, there is often a problem of replicability since not all data-collection procedures measure (or observe) the same attributes and therefore cannot make use of proposed learners for testing purposes. To address all the previous issues, we propose to study a procedure that combines screening and wrapper methods and aims to find a library of extremely low-dimensional attribute combinations (with consequent low data collection and storage costs) in order to (i) match or improve the predictive performance of any particular learning method which uses all attributes as an input (including sparse learners); (ii) provide a low-dimensional network of attributes easily interpretable by researchers and practitioners; and (iii) increase the potential replicability of results due to a diversity of attribute combinations defining strong learners with equivalent predictive power. We call this algorithm “Sparse Wrapper AlGorithm” (SWAG).
Roberto Molinari
Gaetan Bakalli
Stéphane Guerrier
Cesare Miglioli
Samuel Orso
O. Scaillet
interpretable machine learning, big data, wrapper, sparse learning, meta learning, ensemble learning, greedy algorithm, feature selection, variable importance network
2020-06
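The screen-then-wrap idea behind SWAG can be sketched as follows: first rank attributes by a cheap marginal statistic, then search exhaustively over very low-dimensional combinations of the survivors. This is an illustrative reconstruction with synthetic data, not the authors' algorithm (which builds a whole library of strong low-dimensional learners):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n, p = 300, 20
X = rng.standard_normal((n, p))
# Only two attributes truly drive the response.
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.5 * rng.standard_normal(n)

# Screening step: rank attributes by absolute correlation with the response.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
survivors = sorted(int(j) for j in np.argsort(corr)[-5:])   # keep the 5 strongest

# Wrapper step: fit OLS on every two-attribute combination of the survivors
# and keep the combination with the smallest residual sum of squares.
def rss(cols):
    A = X[:, list(cols)]
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ b) ** 2))

best = min(combinations(survivors, 2), key=rss)
```

A production version would score combinations by cross-validated prediction error rather than in-sample residuals.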
Distributionally Robust Profit Opportunities
http://d.repec.org/n?u=RePEc:arx:papers:2006.11279&r=cmp
This paper expands the notion of robust profit opportunities in financial markets to incorporate distributional uncertainty, using the Wasserstein distance as the ambiguity measure. Financial markets with risky and risk-free assets are considered. The infinite-dimensional primal problems are formulated, leading to their simpler finite-dimensional dual problems. A principal motivating question is how distributional uncertainty helps or hurts the robustness of the profit opportunity. Towards answering this question, some theory is developed and computational experiments are conducted. Finally, some open questions and suggestions for future research are discussed.
Derek Singh
Shuzhong Zhang
2020-06
Toward a Comprehensive Tax Reform for Italy
http://d.repec.org/n?u=RePEc:imf:imfwpa:2020/037&r=cmp
This paper evaluates elements of a comprehensive reform of the Italian tax system. Reform options are guided by the principles of reducing complexity, broadening the tax base, and lowering marginal tax rates, especially the tax burden on labor income. The revenue and distributional implications of personal income and property tax reforms are assessed with EUROMOD, while a microsimulation model is developed to evaluate VAT reform options. Simulations suggest that a substantial reduction in the tax burden on labor income can be obtained with a revenue-neutral base-broadening reform that streamlines tax expenditures and updates the property valuation system. In addition, a comprehensive reform would benefit low- and middle-income households the most, by lowering significantly their overall current tax liability, which results in increased progressivity of the tax system.
Emile Cammeraat
Ernesto Crivelli
Recurrent taxes on immovable property; Tax revenue; Tax reforms; Tax evasion; Tax rates; Italy; personal income tax; VAT; property tax; microsimulations; EUROMOD; tax expenditure; decile; revenue-neutral; reduced rate; revenue loss
2020-02-21
The Global Impact of Brexit Uncertainty
http://d.repec.org/n?u=RePEc:cpr:ceprdp:14253&r=cmp
Using tools from computational linguistics, we construct new measures of the impact of Brexit on listed firms in the United States and around the world; these measures are based on the proportion of discussions in quarterly earnings conference calls on the costs, benefits, and risks associated with the UK's intention to leave the EU. We identify which firms expect to gain or lose from Brexit and which are most affected by Brexit uncertainty. We then estimate effects of the different types of Brexit exposure on firm-level outcomes. We find that the impact of Brexit-related uncertainty extends far beyond British or even European firms; US and international firms most exposed to Brexit uncertainty lost a substantial fraction of their market value and have also reduced hiring and investment. In addition to Brexit uncertainty (the second moment), we find that international firms overwhelmingly expect negative direct effects from Brexit (the first moment) should it come to pass. Most prominently, firms expect difficulties from regulatory divergence, reduced labor mobility, limited trade access, and the costs of post-Brexit operational adjustments. Consistent with the predictions of canonical theory, this negative sentiment is recognized and priced in stock markets but has not yet significantly affected firm actions.
Hassan, Tarek Alexander
Hollander, Stephan
Tahoun, Ahmed
van Lent, Laurence
Brexit; cross-country effects; Machine Learning; sentiment; uncertainty
2019-12
Numerical aspects of integration in semi-closed option pricing formulas for stochastic volatility jump diffusion models
http://d.repec.org/n?u=RePEc:arx:papers:2006.13181&r=cmp
In mathematical finance, the process of calibrating stochastic volatility (SV) option pricing models to real market data involves the numerical calculation of integrals that depend on several model parameters. This optimization task consists of a large number of integral evaluations with high-precision and low-computational-time requirements. However, for some model parameters, many numerical quadrature algorithms fail to meet these requirements. We can observe an enormous increase in function evaluations, serious precision problems and a significant increase in computational time. In this paper we numerically analyse these problems and show that they are especially caused by inaccurately evaluated integrands. We propose a fast regime-switching algorithm that tells whether it is sufficient to evaluate the integrand in standard double arithmetic or whether higher-precision arithmetic has to be used. We compare and recommend numerical quadratures for typical SV models and different parameter values, especially for problematic cases.
Josef Daněk
J. Pospíšil
2020-06
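The regime-switching idea can be illustrated with a classic cancellation-prone expression: evaluate in fast double arithmetic when it is safe, and switch to higher-precision arithmetic near the problematic region. The expression, threshold, and precision below are illustrative choices, not the paper's:

```python
import math
from decimal import Decimal, getcontext

def f_double(x):
    """Naive double-precision evaluation of (exp(x) - 1 - x) / x**2,
    which loses nearly all significant digits for tiny x."""
    return (math.exp(x) - 1.0 - x) / (x * x)

def f_highprec(x, digits=50):
    """The same expression evaluated in 50-digit decimal arithmetic."""
    getcontext().prec = digits
    xd = Decimal(repr(x))
    return float((xd.exp() - 1 - xd) / (xd * xd))

def f_switching(x, threshold=1e-4):
    """Regime switch: fast double arithmetic when safe, high-precision
    arithmetic near the cancellation region. Threshold is illustrative."""
    return f_highprec(x) if abs(x) < threshold else f_double(x)

value_near_zero = f_switching(1e-9)   # the exact limit as x -> 0 is 1/2
```

In a calibration loop, the switch keeps most integrand evaluations cheap while preserving accuracy where the double-precision formula breaks down.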
Algorithm for Computing Approximate Nash equilibrium in Continuous Games with Application to Continuous Blotto
http://d.repec.org/n?u=RePEc:arx:papers:2006.07443&r=cmp
Successful algorithms have been developed for computing Nash equilibrium in a variety of finite game classes. However, solving continuous games---in which the pure strategy space is (potentially uncountably) infinite---is far more challenging. Nonetheless, many real-world domains have continuous action spaces, e.g., where actions refer to an amount of time, money, or other resource that is naturally modeled as being real-valued as opposed to integral. We present a new algorithm for computing Nash equilibrium strategies in continuous games. In addition to two-player zero-sum games, our algorithm also applies to multiplayer games and games of imperfect information. We experiment with our algorithm on a continuous imperfect-information Blotto game, in which two players distribute resources over multiple battlefields. Blotto games have frequently been used to model national security scenarios and have also been applied to electoral competition and auction theory. Experiments show that our algorithm is able to quickly compute very close approximations of Nash equilibrium strategies for this game.
Sam Ganzfried
2020-06
Algorithmic Collusion: Supra-competitive Prices via Independent Algorithms
http://d.repec.org/n?u=RePEc:cpr:ceprdp:14372&r=cmp
Motivated by their increasing prevalence, we study outcomes when competing sellers use machine learning algorithms to run real-time dynamic price experiments. These algorithms are often misspecified, ignoring the effect of factors outside their control, e.g. competitors' prices. We show that the long-run prices depend on the informational value (or signal-to-noise ratio) of price experiments: if low, the long-run prices are consistent with the static Nash equilibrium of the corresponding full information setting. However, if high, the long-run prices are supra-competitive---the full information joint-monopoly outcome is possible. We show this occurs via a novel channel: competitors' algorithms' prices end up running correlated experiments. Therefore, sellers' misspecified models overestimate own price sensitivity, resulting in higher prices. We discuss the implications for competition policy.
Hansen, Karsten
Misra, Kanishka
Pai, Mallesh
algorithmic pricing; bandit algorithms; Collusion; Misspecified models
2020-01
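The setting can be reproduced in miniature: two sellers each run an independent epsilon-greedy price experiment against a demand curve that depends on both prices, while each algorithm's own model ignores the rival's price. A toy simulation in which all functional forms and parameters are assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(5)
prices = np.linspace(0.5, 2.0, 7)                   # shared price grid (illustrative)
q = [np.zeros(len(prices)), np.zeros(len(prices))]  # each seller's value estimates
n = [np.ones(len(prices)), np.ones(len(prices))]    # pull counts per price
eps, noise, T = 0.1, 0.3, 5000
chosen = np.zeros((2, T), dtype=int)

for t in range(T):
    # Each seller picks a price with its own independent epsilon-greedy rule.
    a = [int(np.argmax(q[i])) if rng.random() > eps
         else int(rng.integers(len(prices))) for i in range(2)]
    for i in range(2):
        p_own, p_rival = prices[a[i]], prices[a[1 - i]]
        # Linear demand in both prices; each algorithm's model ignores the
        # rival's price, which is the misspecification studied in the paper.
        demand = max(0.0, 2.0 - 1.5 * p_own + 0.5 * p_rival)
        reward = p_own * demand + noise * rng.standard_normal()
        n[i][a[i]] += 1
        q[i][a[i]] += (reward - q[i][a[i]]) / n[i][a[i]]
        chosen[i, t] = a[i]

long_run_prices = [float(prices[int(np.argmax(q[i]))]) for i in range(2)]
```

Whether the simulated long-run prices settle above the static Nash level depends, as in the paper, on the signal-to-noise ratio of the experiments.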
Robust uncertainty sensitivity analysis
http://d.repec.org/n?u=RePEc:arx:papers:2006.12022&r=cmp
We consider sensitivity of a generic stochastic optimization problem to model uncertainty. We take a non-parametric approach and capture model uncertainty using Wasserstein balls around the postulated model. We provide explicit formulae for the first order correction to both the value function and the optimizer and further extend our results to optimization under linear constraints. We present applications to statistics, machine learning, mathematical finance and uncertainty quantification. In particular, we provide explicit first-order approximation for square-root LASSO regression coefficients and deduce coefficient shrinkage compared to the ordinary least squares regression. We consider robustness of call option pricing and deduce a new Black-Scholes sensitivity, a non-parametric version of the so-called Vega. We also compute sensitivities of optimized certainty equivalents in finance and propose measures to quantify robustness of neural networks to adversarial examples.
Daniel Bartl
Samuel Drapeau
Jan Obloj
Johannes Wiesel
2020-06
The Macroeconomy as a Random Forest
http://d.repec.org/n?u=RePEc:arx:papers:2006.12724&r=cmp
Over the last decades, an impressive number of non-linearities have been proposed to reconcile reduced-form macroeconomic models with the data. Many of them boil down to having linear regression coefficients evolving through time: threshold/switching/smooth-transition regression; structural breaks and random walk time-varying parameters. While all of these schemes are reasonably plausible in isolation, I argue that they are much more in agreement with the data if they are combined. To this end, I propose Macroeconomic Random Forests, which adapts the canonical Machine Learning (ML) algorithm to the problem of flexibly modeling evolving parameters in a linear macro equation. The approach exhibits clear forecasting gains over a wide range of alternatives and successfully predicts the drastic 2008 rise in unemployment. The obtained generalized time-varying parameters (GTVPs) are shown to behave differently from random walk coefficients by adapting nicely to the problem at hand, whether it is regime-switching behavior or long-run structural change. By dividing the typical ML interpretation burden into looking at each TVP separately, I find that the resulting forecasts are, in fact, quite interpretable. An application to the US Phillips curve reveals it is probably not flattening the way you think.
Philippe Goulet Coulombe
2020-06
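The idea of linear models inside tree leaves can be sketched with a single split: choose the split point on a state variable that minimizes the summed squared errors of leaf-level regressions, so the leaf slopes recover a regime-switching coefficient. An illustrative toy with synthetic data, not the paper's full random forest:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 400
state = np.linspace(0.0, 1.0, T)              # observed state variable (e.g. time)
x = rng.standard_normal(T)
beta_t = np.where(state < 0.5, 0.5, 2.0)      # regime-switching true coefficient
y = beta_t * x + 0.2 * rng.standard_normal(T)

def leaf_slope(mask):
    """OLS slope fitted on the observations falling into one leaf."""
    return float(x[mask] @ y[mask] / (x[mask] @ x[mask]))

# One tree split on the state variable, with a linear model in each leaf.
best_split, best_sse = None, np.inf
for s in np.linspace(0.1, 0.9, 17):
    left, right = state < s, state >= s
    b_l, b_r = leaf_slope(left), leaf_slope(right)
    sse = float(np.sum((y[left] - b_l * x[left]) ** 2)
                + np.sum((y[right] - b_r * x[right]) ** 2))
    if sse < best_sse:
        best_split, best_sse = s, sse

left_mask = state < best_split
slopes = (leaf_slope(left_mask), leaf_slope(~left_mask))  # tracks the two regimes
```

A forest of such trees, grown on bootstrap samples, would average many leaf-level slopes into a smooth generalized time-varying parameter.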
Making a Breach: The Incorporation of Agent-Based Models into the Bank of England's Toolkit
http://d.repec.org/n?u=RePEc:gre:wpaper:2020-30&r=cmp
After the financial crisis of 2008, several central banks incorporated agent-based models (ABMs) into their toolkit. The Bank of England (BoE) is a case in point. Since 2008, it has developed four ABMs. Under which conditions could ABMs breach the walls of the BoE? Then, there is the issue of the size of the breach. In which divisions did economists use ABMs? Was agent-based modeling used to inform a wide range of policies? Last but not least, there is the issue of the fate of ABMs at the BoE. Is the breach going to narrow or, on the contrary, widen? What are the forces underlying the deployment of ABMs at the BoE? My article aims to address these issues. I show that institutional reforms were central to the use of ABMs at the BoE. I also show that, so far, ABMs have been a marginal tool at the BoE. They were not used to inform monetary policy. Neither were they used to coordinate the BoE's microprudential, macroprudential, and monetary policies. ABMs were only used to inform the BoE's macroprudential policy. I conclude the article by examining the conditions for a broader use of ABMs at the BoE.
Romain Plassard
Bank of England, agent-based models, macroprudential policy, monetary policy, DSGE models
2020-06
The More the Merrier? A Machine Learning Algorithm for Optimal Pooling of Panel Data
http://d.repec.org/n?u=RePEc:imf:imfwpa:2020/044&r=cmp
We leverage insights from machine learning to optimize the tradeoff between bias and variance when estimating economic models using pooled datasets. Specifically, we develop a simple algorithm that estimates the similarity of economic structures across countries and selects the optimal pool of countries to maximize out-of-sample prediction accuracy of a model. We apply the new algorithm by nowcasting output growth with a panel of 102 countries and are able to significantly improve forecast accuracy relative to alternative pools. The algorithm improves nowcast performance for advanced economies, as well as emerging market and developing economies, suggesting that machine learning techniques using pooled data could be an important macro tool for many countries.
Marijn A. Bolhuis
Brett Rayner
Economic models; Production growth; Developing countries; Emerging markets; Data analysis; Machine learning; GDP growth; forecasts; panel data; pooling; forecast error; DGP; forecast; economic structure; output growth
2020-02-28
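The pooling algorithm can be caricatured as greedy forward selection: add a candidate country to the pool only if doing so lowers out-of-sample error on the target country. A toy sketch with synthetic "countries"; the names, slopes, and similarity structure are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def country_data(slope, n=80):
    """Synthetic country: output growth is `slope` times a predictor plus noise."""
    x = rng.standard_normal(n)
    return x, slope * x + 0.3 * rng.standard_normal(n)

target_x, target_y = country_data(1.0)           # the country we want to nowcast
candidates = {name: country_data(s) for name, s in
              [("similar_a", 1.05), ("similar_b", 0.95), ("different", -1.0)]}

def oos_error(pool):
    """Fit a slope on pooled training data, score on held-out target data."""
    xs = np.concatenate([target_x[:40]] + [candidates[c][0] for c in pool])
    ys = np.concatenate([target_y[:40]] + [candidates[c][1] for c in pool])
    b = float(xs @ ys / (xs @ xs))
    return float(np.mean((target_y[40:] - b * target_x[40:]) ** 2))

# Greedy pooling: admit a country only if it improves out-of-sample accuracy.
pool, best = [], oos_error([])
for c in candidates:
    if oos_error(pool + [c]) < best:
        pool.append(c)
        best = oos_error(pool)
```

Structurally dissimilar countries raise the out-of-sample error and are therefore kept out of the pool, which is the bias-variance tradeoff the paper exploits.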
An unsupervised deep learning approach in solving partial-integro differential equations
http://d.repec.org/n?u=RePEc:arx:papers:2006.15012&r=cmp
We investigate solving partial integro-differential equations (PIDEs) using unsupervised deep learning in this paper. To price options, assuming the underlying processes follow Lévy processes, we need to solve PIDEs. In supervised deep learning, pre-calculated labels are used to train neural networks to fit the solution of the PIDE. In unsupervised deep learning, a neural network is employed as the solution, and the derivatives and the integrals in the PIDE are calculated based on the neural network. By matching the PIDE and its boundary conditions, the neural network gives an accurate solution of the PIDE. Once trained, it is fast at calculating option values as well as option Greeks.
Ali Hirsa
Weilong Fu
2020-06
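The unsupervised idea above (treat a candidate function as the solution and penalize the equation residual) can be illustrated in the no-jump special case, where the PIDE reduces to the Black-Scholes PDE. As a hedged sketch, the snippet below plugs the known closed-form call price in place of a neural network and takes derivatives by finite differences rather than automatic differentiation; a near-zero "loss" confirms the candidate solves the equation.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, t, K=100.0, r=0.05, sigma=0.2, T=1.0):
    # closed-form Black-Scholes call price, standing in for the network
    tau = T - t
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

def pde_residual(V, S, t, r=0.05, sigma=0.2, hS=1e-2, ht=1e-4):
    # residual of V_t + 0.5 sigma^2 S^2 V_SS + r S V_S - r V at (S, t);
    # derivatives by central finite differences (autodiff in the paper)
    V_t = (V(S, t + ht) - V(S, t - ht)) / (2 * ht)
    V_S = (V(S + hS, t) - V(S - hS, t)) / (2 * hS)
    V_SS = (V(S + hS, t) - 2 * V(S, t) + V(S - hS, t)) / hS ** 2
    return V_t + 0.5 * sigma ** 2 * S ** 2 * V_SS + r * S * V_S - r * V(S, t)

# the "training loss": mean squared residual over interior collocation points
loss = sum(pde_residual(bs_call, float(S), 0.25) ** 2 for S in range(80, 121, 5)) / 9.0
```

In the paper's setting a trained network minimizes this kind of residual loss (plus boundary-condition terms), with an extra integral term for the jumps of the Lévy process.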
Global macroeconomic cooperation in response to the COVID-19 pandemic: a roadmap for the G20 and the IMF
http://d.repec.org/n?u=RePEc:een:camaaa:2020-68&r=cmp
The COVID-19 crisis has caused the greatest collapse in global economic activity since 1720. Some advanced countries have mounted a massive fiscal response, both to pay for disease-fighting action and to preserve the incomes of firms and workers until the economic recovery is under way. But many emerging market economies have been prevented from doing what is needed by their high existing levels of public debt and, especially, by the external financial constraints which they face. We argue in the present paper that there is a need for international cooperation to allow such countries to undertake the kind of massive fiscal response that all countries now need, and that many advanced countries have been able to carry out. We show what such cooperation would involve. We use a global macroeconomic model to explore how extraordinarily beneficial such cooperation would be. Simulations of the model suggest that GDP in the countries in which extra fiscal support takes place would be around two and a half per cent higher in the first year, and that GDP in other countries would be more than one per cent higher. So far, such cooperation has been notably lacking, in striking contrast with what happened in the wake of the Global Financial Crisis in 2008. The necessary cooperation needs to be led by the Group of Twenty (G20), just as happened in 2008-9, since the G20 brings together the leaders of the world's largest economies. This cooperation must also involve a promise of international financial support from the International Monetary Fund; otherwise international financial markets might take fright at the large budget deficits and current account deficits which will emerge, creating fiscal crises and currency crises and bringing the expansionary policies that we advocate to an end.
Warwick McKibbin
David Vines
COVID-19, risk, macroeconomics, DSGE, CGE, G-Cubed (G20)
2020-07
Global macroeconomic scenarios of the COVID-19 pandemic
http://d.repec.org/n?u=RePEc:een:camaaa:2020-62&r=cmp
The COVID-19 global pandemic has caused significant global economic and social disruption. In McKibbin and Fernando (2020), we used data from historical pandemics to explore seven plausible scenarios of the economic consequences if COVID-19 were to become a global pandemic. In this paper, we use currently observed epidemiological outcomes across countries and recent data on sectoral shutdowns and economic shocks to estimate the likely impact of the COVID-19 pandemic on the global economy in coming years under six new scenarios. The first scenario explores the outcomes if the current course of COVID-19 is successfully controlled and there is only a mild recurrence in 2021. We then explore scenarios in which the opening of economies results in recurrent outbreaks of various magnitudes and countries respond with and without economic shutdowns. We also explore the impact if no vaccine becomes available and the world must adapt to living with COVID-19 in coming decades. The final scenario is the case where a given country is in the most optimistic scenario (scenario 1) but the rest of the world is in the most pessimistic scenario. The scenarios in this paper demonstrate that even a contained outbreak (which is optimistic) will significantly impact the global economy in the coming years. The economic consequences of the COVID-19 pandemic under plausible scenarios are substantial, and the ongoing economic adjustment is far from over.
Warwick McKibbin
Roshen Fernando
Pandemics, infectious diseases, risk, macroeconomics, DSGE, CGE, G-Cubed
2020-06
Forecasting volatility with a stacked model based on a hybridized Artificial Neural Network
http://d.repec.org/n?u=RePEc:arx:papers:2006.16383&r=cmp
An appropriate calibration and forecasting of volatility and market risk are among the main challenges faced by institutions, such as banks, pension funds or insurance companies, that have to manage the uncertainty inherent in their investments or funding operations. This became even more evident after the 2007-2008 Financial Crisis, when the forecasting models assessing market risk and volatility failed. Since then, a significant number of theoretical developments and methodologies have appeared to improve the accuracy of volatility forecasts and market risk assessments. Following this line of thinking, this paper introduces a model that stacks a set of machine learning techniques, namely Gradient Descent Boosting, Random Forest, Support Vector Machine and Artificial Neural Network, to predict S&P500 volatility. The results suggest that our construction outperforms other commonly used models in forecasting the level of volatility, leading to a more accurate assessment of market risk.
E. Ramos-Pérez
P. J. Alonso-González
J. J. Núñez-Velázquez
2020-06
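The stacking construction can be sketched in miniature: base learners produce out-of-fold predictions, and a meta-learner is fitted on those predictions. The example below uses two simple base learners (closed-form ridge regression and k-nearest neighbours) and a linear meta-learner on synthetic data; it is a hedged illustration of the general technique, not the paper's GB/RF/SVM/ANN stack or its S&P500 features.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    # closed-form ridge: (X'X + lam*I)^{-1} X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def knn_predict(Xtr, ytr, Xte, k=5):
    # average of the k nearest training targets (Euclidean distance)
    out = np.empty(len(Xte))
    for i, x in enumerate(Xte):
        idx = np.argsort(np.sum((Xtr - x) ** 2, axis=1))[:k]
        out[i] = ytr[idx].mean()
    return out

def stack(Xtr, ytr, Xte):
    """Two base learners combined by a linear meta-learner fitted on
    out-of-fold predictions (2-fold split)."""
    n = len(Xtr)
    half = n // 2
    folds = [(slice(0, half), slice(half, n)), (slice(half, n), slice(0, half))]
    oof = np.zeros((n, 2))
    for tr, te in folds:
        b = fit_ridge(Xtr[tr], ytr[tr])
        oof[te, 0] = Xtr[te] @ b
        oof[te, 1] = knn_predict(Xtr[tr], ytr[tr], Xtr[te])
    # meta-learner: least-squares weights on the out-of-fold predictions
    w, *_ = np.linalg.lstsq(oof, ytr, rcond=None)
    # refit base learners on all training data, combine with meta weights
    b = fit_ridge(Xtr, ytr)
    base_te = np.column_stack([Xte @ b, knn_predict(Xtr, ytr, Xte)])
    return base_te @ w
```

Fitting the meta-learner on out-of-fold rather than in-sample predictions is the step that keeps the stack from simply rewarding whichever base learner overfits hardest.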
Rational Inattention via Ignorance Equivalence
http://d.repec.org/n?u=RePEc:fip:fedpwp:88228&r=cmp
We present a novel approach to finite Rational Inattention (RI) models based on the ignorance equivalent, a fictitious action with state-dependent payoffs that effectively summarizes the optimal learning and conditional choices. The ignorance equivalent allows us to recast the RI problem as a standard expected utility maximization over an augmented choice set called the learning-proof menu, yielding new insights regarding the behavioral implications of RI, in particular as new actions are added to the menu. Our geometric approach is also well suited to numerical methods, outperforming existing techniques both in terms of speed and accuracy, and offering robust predictions on the most frequently implemented actions.
Roc Armenter
Michèle Müller-Itten
Zachary Stangebye
Rational inattention; information acquisition; learning.
2020-06-22
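For context, the standard numerical approach to finite RI with Shannon mutual-information cost (the kind of existing technique such a geometric method would be benchmarked against) is a Blahut-Arimoto-style fixed point, alternating between the Matějka-McKay logit choice rule and the implied action marginal. A minimal sketch of that benchmark, not the authors' learning-proof-menu method:

```python
import numpy as np

def solve_ri(U, mu, lam=1.0, iters=2000, tol=1e-12):
    """Blahut-Arimoto-style fixed point for finite rational inattention
    with Shannon cost. U[w, a] = payoff of action a in state w,
    mu = prior over states, lam = unit cost of information."""
    n_states, n_actions = U.shape
    p = np.full(n_actions, 1.0 / n_actions)   # unconditional action dist.
    for _ in range(iters):
        # conditional choice: logit over actions, tilted by the marginal p
        q = p[None, :] * np.exp(U / lam)
        q /= q.sum(axis=1, keepdims=True)      # q[w, a] = p(a | w)
        p_new = mu @ q                          # implied action marginal
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return q, p
```

In a symmetric two-state, two-action matching problem the fixed point yields logit-style conditional choices concentrated on the correct action, with the degree of concentration falling as the information cost lam rises.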
Sedimentary structures discriminations with hyperspectral imaging on sediment cores
http://d.repec.org/n?u=RePEc:osf:eartha:4ue5s&r=cmp
Hyperspectral imaging (HSI) is a non-destructive high-resolution sensor technology, currently under significant development for analyzing geological areas with remote devices or natural samples in the laboratory. In both cases, the hyperspectral image captures several sedimentary structures that need to be separated to describe the sample temporally and spatially. Sediment sequences are composed of successive deposits (strata, homogenites, floods) that may or may not be visible depending on sample properties. The classical methods to identify them are time-consuming, have a low spatial resolution (millimeter), and are generally based on naked-eye counting. In this study, we compare several supervised classification algorithms for the discrimination of sedimentological structures in lake sediments. Instantaneous events in lake sediments are generally linked to extreme geodynamical events (e.g., floods, earthquakes), so their identification and counting are essential to understand long-term fluctuations and improve hazard assessments. This is done by reconstructing a chronicle of event-layer occurrence, including estimation of deposit thicknesses. Here we applied two hyperspectral imaging sensors (Visible Near-Infrared VNIR, 60 μm, 400-1000 nm; Short Wave Infrared SWIR, 200 μm, 1000-2500 nm) to three sediment cores from different lake systems. We find that the SWIR sensor is the optimal one to create robust classification models with discriminant analyses. Indeed, the VNIR sensor is affected by surface reliefs and structures that are not in the learning set, which leads to misclassification. These observations also hold for the combined sensor (VNIR-SWIR). Several spatial and spectral pre-processing methods were also compared, highlighting discriminant information specific to a sample and a sensor. This work shows that the combined use of hyperspectral imaging and machine learning improves the characterization of sedimentary structures in laboratory conditions.
Kévin Jacq
William Rapuc
Alexandre Benoit
Didier Coquin
Bernard Fanget
Yves Perrette
Pierre Sabatier
Bruno Wilhelm
Maxime Debret
Fabien Arnaud
2020-07-17
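Per-pixel supervised discrimination of sediment facies from spectra can be sketched with a two-class Fisher linear discriminant, the kind of discriminant analysis the study relies on. The synthetic "spectra" below (absorption features at different wavelengths plus sensor noise) are hypothetical stand-ins for real VNIR/SWIR pixels, not the papers' data.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_spectra(center, n, n_bands=50, noise=0.05):
    # synthetic reflectance: an absorption feature at `center` plus sensor noise
    bands = np.linspace(0.0, 1.0, n_bands)
    base = 1.0 - 0.5 * np.exp(-((bands - center) / 0.05) ** 2)
    return base + noise * rng.normal(size=(n, n_bands))

def fit_lda(X0, X1):
    # two-class Fisher discriminant: w = S_pooled^{-1} (mu1 - mu0)
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T) + 1e-6 * np.eye(X0.shape[1])
    w = np.linalg.solve(S, mu1 - mu0)
    c = w @ (mu0 + mu1) / 2.0    # midpoint decision threshold
    return w, c

def predict(X, w, c):
    # class 1 (e.g. an "event layer" pixel) if the projection exceeds c
    return (X @ w > c).astype(int)
```

Classifying every pixel of a core image this way is what turns a hyperspectral scan into a map of event layers whose occurrences and thicknesses can then be counted.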
How do machine learning and non-traditional data affect credit scoring? New evidence from a Chinese fintech firm
http://d.repec.org/n?u=RePEc:cpr:ceprdp:14259&r=cmp
This paper compares the predictive power of credit scoring models based on machine learning techniques with that of traditional loss and default models. Using proprietary transaction-level data from a leading fintech company in China for the period between May and September 2017, we test the performance of different models to predict losses and defaults both in normal times and when the economy is subject to a shock. In particular, we analyse the case of an (exogenous) change in regulation policy on shadow banking in China that caused lending to decline and credit conditions to deteriorate. We find that the model based on machine learning and non-traditional data is better able to predict losses and defaults than traditional models in the presence of a negative shock to the aggregate credit supply. One possible reason for this is that machine learning can better mine the non-linear relationship between variables in a period of stress. Finally, the comparative advantage of the model that uses the fintech credit scoring technique based on machine learning and big data tends to decline for borrowers with a longer credit history.
Gambacorta, Leonardo
Huang, Yiping
Qiu, Han
Wang, Jingyi
credit risk; credit scoring; Fintech; Machine Learning; non-traditional information
2019-12
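The paper's central contrast (a model able to mine nonlinear relationships between variables versus a traditional linear scorecard) can be illustrated with synthetic borrowers whose default depends on an interaction of two features. In this hypothetical setup, a plain logistic scorecard on the raw features ranks defaulters no better than chance, while adding the nonlinearity (hand-crafted here, standing in for what boosting or neural networks would learn) recovers high discriminatory power as measured by AUC.

```python
import numpy as np

def fit_logit(X, y, lr=0.5, iters=2000):
    # logistic regression by batch gradient ascent (with intercept)
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        z = np.clip(Xb @ w, -30.0, 30.0)      # clip for numerical stability
        p = 1.0 / (1.0 + np.exp(-z))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def score(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return Xb @ w

def auc(y, s):
    # P(a random defaulter's score exceeds a random non-defaulter's score)
    pos, neg = s[y == 1], s[y == 0]
    return float(np.mean(pos[:, None] > neg[None, :]))
```

The same AUC comparison run in normal times versus a stressed sample is, in spirit, how one would check whether the nonlinear model's advantage widens under a credit-supply shock.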