
on Computational Economics 
Issue of 2021‒02‒15
23 papers chosen by 
By:  Javier Gonzalez-Conde; Ángel Rodríguez-Rozas; Enrique Solano; Mikel Sanz 
Abstract:  Pricing financial derivatives, in particular European-style options at different time maturities and strikes, is a relevant financial problem. The dynamics describing the price of vanilla options, when constant volatilities and interest rates are assumed, are governed by the Black-Scholes model, a linear parabolic partial differential equation with terminal value given by the payoff of the option contract and no additional boundary conditions. Here, we present a digital quantum algorithm to solve the Black-Scholes equation on a quantum computer for a wide range of relevant financial parameters by mapping it to the Schrödinger equation. The non-Hermitian nature of the resulting Hamiltonian is handled by embedding the dynamics into an enlarged Hilbert space, which requires only one additional ancillary qubit. Moreover, we employ a second ancillary qubit to transform the initial condition into periodic boundary conditions, which substantially improves the stability and performance of the protocol. This algorithm demonstrates a feasible approach to pricing financial derivatives on a digital quantum computer based on Hamiltonian simulation, a technique that differs from those based on Monte Carlo simulation of the stochastic counterpart of the Black-Scholes equation. Remarkably, our algorithm provides an exponential speedup, since the terms in the Hamiltonian can be truncated to a polynomial number of interactions while keeping the error bounded. We report expected accuracy levels comparable to classical numerical algorithms using 10 qubits and 94 entangling gates on a fault-tolerant quantum computer, with an expected success probability of the post-selection procedure due to the embedding protocol above 60%. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.04023&r=all 
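The classical benchmark referred to at the end of the abstract can be made concrete. The sketch below (not the authors' quantum algorithm) applies the standard log-price change of variables that turns the Black-Scholes PDE into a constant-coefficient equation and solves it with explicit finite differences, checking against the closed form; the grid sizes and market parameters are illustrative assumptions.

```python
import math

def bs_call_closed_form(s0, k, r, sigma, t):
    """Black-Scholes European call price (the classical reference value)."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def bs_call_fd(s0, k, r, sigma, t, nx=301, nt=400):
    """Explicit finite differences on the log-price equation
    V_t + (sigma^2/2) V_xx + (r - sigma^2/2) V_x - r V = 0,
    stepped backward from the payoff: the kind of classical solver the
    quantum algorithm's accuracy is compared against."""
    x0, width = math.log(k), 3.0
    dx = 2.0 * width / (nx - 1)
    dt = t / nt  # chosen so that the explicit scheme is stable
    xs = [x0 - width + i * dx for i in range(nx)]
    v = [max(math.exp(x) - k, 0.0) for x in xs]  # terminal payoff
    a = 0.5 * sigma**2 * dt / dx**2
    b = (r - 0.5 * sigma**2) * dt / (2.0 * dx)
    for step in range(nt):
        tau = (step + 1) * dt  # time to maturity after this step
        new = v[:]
        for i in range(1, nx - 1):
            new[i] = (v[i]
                      + a * (v[i + 1] - 2.0 * v[i] + v[i - 1])
                      + b * (v[i + 1] - v[i - 1])
                      - r * dt * v[i])
        new[0] = 0.0  # deep out-of-the-money boundary
        new[-1] = math.exp(xs[-1]) - k * math.exp(-r * tau)  # deep in-the-money
        v = new
    # linear interpolation at x = log(s0)
    xq = math.log(s0)
    i = min(max(int((xq - xs[0]) / dx), 0), nx - 2)
    w = (xq - xs[i]) / dx
    return (1.0 - w) * v[i] + w * v[i + 1]
```

For at-the-money parameters (S0 = K = 100, r = 5%, sigma = 20%, T = 1) the finite-difference value agrees closely with the closed form, which is the accuracy target the abstract quotes for the 10-qubit circuit.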
By:  Jérôme Lelong (DAO - Données, Apprentissage et Optimisation; LJK - Laboratoire Jean Kuntzmann; Inria - Institut National de Recherche en Informatique et en Automatique; CNRS - Centre National de la Recherche Scientifique; UGA - Université Grenoble Alpes; Grenoble INP - Institut polytechnique de Grenoble, Grenoble Institute of Technology); Zineb El Filali Ech-Chafiq (Natixis Asset Management; DAO - Données, Apprentissage et Optimisation; LJK - Laboratoire Jean Kuntzmann; Inria - Institut National de Recherche en Informatique et en Automatique; CNRS - Centre National de la Recherche Scientifique; UGA - Université Grenoble Alpes; Grenoble INP - Institut polytechnique de Grenoble, Grenoble Institute of Technology); Adil Reghai (Natixis Asset Management) 
Abstract:  Many pricing problems boil down to the computation of a high-dimensional integral, which is usually estimated using Monte Carlo. The accuracy of a Monte Carlo estimator with M simulations is of order σ/√M, meaning that its convergence rate does not depend on the dimension of the problem. However, this convergence can be relatively slow depending on the standard deviation σ of the function to be integrated. To address this, one typically applies variance reduction techniques such as importance sampling, stratification, or control variates. In this paper, we study two approaches for improving the convergence of Monte Carlo using neural networks. The first approach relies on the fact that many high-dimensional financial problems have low effective dimension [15]. We present a method to reduce the dimension of such problems in order to keep only the necessary variables; the integration can then be done using fast numerical integration techniques such as Gaussian quadrature. The second approach consists in building an automatic control variate using neural networks. We learn the function to be integrated (which incorporates the diffusion model plus the payoff function) in order to build a network that is highly correlated with it. As the network we use can be integrated exactly, we can use it as a control variate. 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:hal:journl:hal02891798&r=all 
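The control-variate idea in the second approach can be illustrated without a neural network. In the sketch below a hand-picked control (the identity function, whose mean under U(0,1) is known exactly) stands in for the learned, exactly-integrable network; the toy integrand E[e^U] and the coefficient beta are illustrative assumptions.

```python
import math
import random
import statistics

def control_variate_estimate(n_samples=10000, seed=42):
    """Estimate E[exp(U)] for U ~ Uniform(0,1) two ways: plain Monte Carlo,
    and Monte Carlo with U as a control variate (its mean, 1/2, is known
    exactly, just as the paper's network can be integrated exactly)."""
    rng = random.Random(seed)
    plain, controlled = [], []
    beta = 1.69  # approximately Cov(e^U, U) / Var(U), precomputed
    for _ in range(n_samples):
        u = rng.random()
        f = math.exp(u)
        plain.append(f)
        controlled.append(f - beta * (u - 0.5))  # subtract a zero-mean control
    return (statistics.fmean(plain), statistics.fmean(controlled),
            statistics.pstdev(plain), statistics.pstdev(controlled))
```

Both estimators are unbiased for e - 1, but the per-sample standard deviation of the controlled estimator is several times smaller, which is exactly the effect the paper seeks with a network highly correlated to the integrand.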
By:  Jaegi Jeon; Kyunghyun Park; Jeonggyu Huh 
Abstract:  In this study, we generate a large number of implied volatilities for the Stochastic Alpha Beta Rho (SABR) model using a graphics processing unit (GPU) based simulation and train an extensive neural network to learn them. The SABR model has no exact pricing formula for vanilla options, and neural networks have an outstanding ability to approximate various functions. Surprisingly, the network reduces the simulation noise by itself, thereby achieving as much accuracy as the Monte Carlo simulation; such accuracy cannot be attained via the existing approximate formulas. Moreover, the network is as efficient as the formula-based approaches. Judged on both accuracy and efficiency, extensive networks can eliminate the need for approximate pricing formulas for the SABR model. Another significant contribution is a novel method for examining the errors based on nonlinear regression. This approach is easily extendable to other pricing models for which analytic formulas are hard to derive. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.09064&r=all 
By:  Paolo Falbo; Juri Hinz (University of Technology Sydney); Piyachat Leelasilapasart (University of Technology Sydney); Cristian Pelizzari 
Abstract:  Due to recent technical progress, battery energy storage is becoming a viable option in the power sector. Its optimal operational management focuses on load shifting and shaving of price spikes. However, this requires responding optimally to electricity demand, intermittent generation, and volatile electricity prices. More importantly, such optimization must take into account the so-called deep-discharge costs, which have a significant impact on battery lifespan. We present a solution to a class of stochastic optimal control problems associated with these applications. Our numerical techniques are based on efficient algorithms that deliver a guaranteed accuracy. 
Keywords:  Approximate dynamic programming; energy trading; optimal control; power sector 
Date:  2021–01–01 
URL:  http://d.repec.org/n?u=RePEc:uts:rpaper:422&r=all 
By:  Benjamin L Hunt; Susanna Mursula; Rafael A Portillo; Marika Santoro 
Abstract:  In this paper, we investigate the mechanisms through which import tariffs impact the macroeconomy in two large-scale workhorse models used for quantitative policy analysis: a computational general equilibrium (CGE) model (the Purdue University GTAP model) and a multi-country dynamic stochastic general equilibrium (DSGE) model (the IMF GIMF model). The quantitative effects of an increase in tariffs reflect different mechanisms at work. Like other models in the trade literature, GTAP generates an output loss from higher tariffs arising from an inefficient reallocation of resources between sectors. In GIMF instead, as in other DSGE models, tariffs act as a disincentive to factor utilization. We show that the two models/channels can be broadly interpreted as capturing the impact of tariffs on different components of a country's aggregate production function: aggregate productivity (GTAP) and factor supply/utilization (GIMF). We discuss ways to combine the estimates from these two models to provide a more complete assessment of the macro effects of tariffs. 
Keywords:  Tariffs; imports; exports; trade balance; exchange rates; trade policy; trade elasticity; nominal and real rigidities; general equilibrium; import tariff; trade diversion; terms of trade; GTAP estimate; price distortion 
Date:  2020–12–11 
URL:  http://d.repec.org/n?u=RePEc:imf:imfwpa:2020/279&r=all 
By:  Asiya Maskaeva; Mgeni Msafiri 
Abstract:  This study simulates the macro-micro economic impacts of employment policy, focusing on hysteresis in youth unemployment in South Africa. Specifically, we apply a dynamic computable general equilibrium model calibrated to the 2015 South African Social Accounting Matrix to estimate, compare, and determine the impact of employment policy on youth unemployment as well as on aggregate economic outcomes. We simulate two scenarios in which the import price of fuel is reduced by 20 per cent. 
Keywords:  Computable general equilibrium, Youth unemployment, Employment, Policy, South Africa 
Date:  2021 
URL:  http://d.repec.org/n?u=RePEc:unu:wpaper:wp202120&r=all 
By:  Georges Sfeir; Filipe Rodrigues; Maya AbouZeid 
Abstract:  We present a Gaussian Process - Latent Class Choice Model (GP-LCCM) to integrate a nonparametric class of probabilistic machine learning within discrete choice models (DCMs). Gaussian Processes (GPs) are kernel-based algorithms that incorporate expert knowledge by assuming priors over latent functions rather than priors over parameters, which makes them more flexible in addressing nonlinear problems. By integrating a Gaussian Process within an LCCM structure, we aim to improve discrete representations of unobserved heterogeneity. The proposed model assigns individuals probabilistically to behaviorally homogeneous clusters (latent classes) using GPs and simultaneously estimates class-specific choice models by relying on random utility models. Furthermore, we derive and implement an Expectation-Maximization (EM) algorithm to jointly estimate/infer the hyperparameters of the GP kernel function and the class-specific choice parameters by relying on a Laplace approximation and gradient-based numerical optimization methods, respectively. The model is tested on two different mode choice applications and compared against different LCCM benchmarks. Results show that GP-LCCM allows for a more complex and flexible representation of heterogeneity and improves both in-sample fit and out-of-sample predictive power. Moreover, behavioral and economic interpretability is maintained at the class-specific choice model level, while local interpretation of the latent classes can still be achieved, although the nonparametric character of GPs lessens the transparency of the model. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.12252&r=all 
By:  Omotosho, Babatunde S. 
Abstract:  This paper studies the macroeconomic implications of oil price shocks and the extant fuel subsidy regime for Nigeria. To do this, we develop and estimate a New-Keynesian DSGE model that accounts for the pass-through of the international oil price into the retail price of fuel. Our results show that oil price shocks generate significant and persistent impacts on output, accounting for about 22 percent of its variation up to the fourth year. Under our benchmark model (i.e. with fuel subsidies), we show that a negative oil price shock contracts aggregate GDP, boosts non-oil GDP, increases headline inflation, and depreciates the exchange rate. However, results generated under the model without fuel subsidies indicate that the contractionary effect of a negative oil price shock on aggregate GDP is moderated, headline inflation decreases, while the exchange rate depreciates more in the short run. Counterfactual simulations also reveal that fuel subsidy removal leads to higher macroeconomic instability and generates non-trivial implications for the response of monetary policy to an oil price shock. Thus, this study cautions that a successful fuel subsidy reform must encompass the deployment of well-targeted safety nets as well as the evolution of sustainable adjustment mechanisms. 
Keywords:  Fuel subsidies, oil price shocks, business cycle 
JEL:  E31 E32 E52 E62 
Date:  2020 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:105464&r=all 
By:  Majid Bazarbash 
Abstract:  Recent advances in digital technology and big data have allowed FinTech (financial technology) lending to emerge as a potentially promising solution for reducing the cost of credit and increasing financial inclusion. However, the machine learning (ML) methods that lie at the heart of FinTech credit have remained largely a black box for the non-technical audience. This paper contributes to the literature by discussing the potential strengths and weaknesses of ML-based credit assessment through (1) presenting core ideas and the most common techniques in ML for the non-technical audience; and (2) discussing the fundamental challenges in credit risk analysis. FinTech credit has the potential to enhance financial inclusion and outperform traditional credit scoring by (1) leveraging non-traditional data sources to improve the assessment of the borrower's track record; (2) appraising collateral value; (3) forecasting income prospects; and (4) predicting changes in general conditions. However, because of the central role of data in ML-based analysis, data relevance must be ensured, especially in situations where a deep structural change occurs, where borrowers could counterfeit certain indicators, and where agency problems arising from information asymmetry cannot be resolved. To avoid digital financial exclusion and redlining, variables that trigger discrimination should not be used to assess credit ratings. 
Keywords:  Credit risk; credit; credit ratings; loans; machine learning; ML model; machine learning technique; ML analysis; ML evaluation 
Date:  2019–05–17 
URL:  http://d.repec.org/n?u=RePEc:imf:imfwpa:2019/109&r=all 
By:  Daeyung Gim; Hyungbin Park 
Abstract:  This paper treats the Merton problem of how to invest in safe and risky assets to maximize an investor's utility, given investment opportunities modeled by a $d$-dimensional state process. The problem is represented by a partial differential equation with an optimization term: the Hamilton-Jacobi-Bellman (HJB) equation. The main purpose of this paper is to solve the partial differential equations derived from the HJB equations with a deep learning algorithm: the Deep Galerkin method, first suggested by Sirignano and Spiliopoulos (2018). We apply the algorithm to obtain the solution of the PDE under several model settings and compare it with the solution from the finite difference method. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.12387&r=all 
By:  Claudius Gros 
Abstract:  Human societies are characterized, among other features, by three constituents. (A) Options, such as jobs and societal positions, differ with respect to their associated monetary and non-monetary payoffs. (B) Competition leads to reduced payoffs when individuals compete for the same option. (C) People care how they are doing relative to others. The latter trait, the propensity to compare one's own success with that of others, expresses itself as envy. It is shown that the combination of (A)-(C) leads to spontaneous class stratification. Societies of agents split endogenously into two social classes, an upper and a lower class, when envy becomes relevant. A comprehensive analysis of the Nash equilibria characterizing a basic reference game is presented. Class separation is due to the condensation of the strategies of lower-class agents, which play an identical mixed strategy. Upper-class agents do not condense, following individualist pure strategies. Model and results are size-consistent, holding for arbitrarily large numbers of agents and options. Analytic results are confirmed by extensive numerical simulations. An analogy to interacting confined classical particles is discussed. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.10824&r=all 
By:  Yi Wei; Cristian Tiu; Vipin Chaudhary 
Abstract:  Neural networks for stock price prediction (NNSPP) have been popular for decades. However, most study results remain confined to research papers and play no real role in the securities market. One of the main reasons for this situation is that evaluation results based on prediction error (PE) have statistical flaws: they do not capture the most critical financial attribute, the direction of the predicted price movement. They therefore cannot provide investors with convincing, interpretable, and consistent model performance evaluations for practical applications in the securities market. To illustrate, we use data selected from 20 stock datasets over six years from the Shanghai and Shenzhen stock markets in China, and 20 stock datasets from NASDAQ and NYSE in the USA. We implement six shallow and deep neural networks to predict stock prices and use four prediction error measures for evaluation. The results show that the prediction error value only partially reflects the accuracy of stock price prediction and cannot reflect changes in the direction of the predicted stock price. This makes PE unsuitable as an evaluation indicator for NNSPP; relying on it would expose investors to substantial potential risks. Therefore, this paper establishes an experimental platform to confirm that the PE method is not suitable for NNSPP evaluation, and provides a theoretical basis for the necessity of creating a new NNSPP evaluation method in the future. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.10942&r=all 
By:  InKoo Cho; Jonathan Libgober 
Abstract:  We study interactions between strategic players and markets whose behavior is guided by an algorithm. Algorithms use data from prior interactions and a limited set of decision rules to prescribe actions. While as-if rational play need not emerge if the algorithm is constrained, it is possible to guide behavior across a rich set of possible environments using limited details. Provided a condition known as weak learnability holds, Adaptive Boosting algorithms can be specified to induce behavior that is (approximately) as-if rational. Our analysis provides a statistical perspective on the study of endogenous model misspecification. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.09613&r=all 
By:  John Ery; Loris Michel 
Abstract:  We propose a reinforcement learning (RL) approach to model optimal exercise strategies for option-type products. We pursue the RL avenue in order to learn the optimal action-value function of the underlying stopping problem. In addition to retrieving the optimal Q-function at any time step, one can also price the contract at inception. We first discuss the standard setting with one exercise right, and later extend this framework to the case of multiple stopping opportunities in the presence of constraints. We propose to approximate the Q-function with a deep neural network, which does not require the specification of basis functions as in the least-squares Monte Carlo framework and is scalable to higher dimensions. We derive a lower bound on the option price obtained from the trained neural network and an upper bound from the dual formulation of the stopping problem, which can also be expressed in terms of the Q-function. Our methodology is illustrated with examples covering the pricing of swing options. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.09682&r=all 
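For a concrete picture of the action-value function of a stopping problem, the sketch below computes exact tabular Q-values for a Bermudan put on a CRR binomial tree: at each node, Q(s, exercise) is the payoff and Q(s, continue) is the discounted expected next value. The paper replaces this tabular recursion with a deep network (and treats multiple exercise rights); the market parameters here are illustrative assumptions taken from a standard single-exercise example.

```python
import math

def bermudan_put_binomial(s0=36.0, k=40.0, r=0.06, sigma=0.2, t=1.0, steps=100):
    """Price a Bermudan/American-style put by backward induction on a
    Cox-Ross-Rubinstein tree. The stopping value at each node is
    max(Q_continue, Q_exercise), i.e. the optimal action-value."""
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    # option values at maturity (j = number of up-moves)
    values = [max(k - s0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    for n in range(steps - 1, -1, -1):
        new = []
        for j in range(n + 1):
            s = s0 * u**j * d**(n - j)
            q_continue = disc * (p * values[j + 1] + (1 - p) * values[j])
            q_exercise = max(k - s, 0.0)
            new.append(max(q_continue, q_exercise))  # optimal stopping rule
        values = new
    return values[0]
```

The neural-network approach in the paper learns essentially this Q-function from simulated paths, which scales to dimensions where a tree or grid is infeasible.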
By:  Ravi Goyal; John Hotchkiss; Robert T. Schooley; Victor De Gruttola; Natasha K. Martin 
Abstract:  Universities are faced with decisions on how to resume campus activities while mitigating SARS-CoV-2 risk. 
Keywords:  COVID-19, prevention, modeling 
URL:  http://d.repec.org/n?u=RePEc:mpr:mprres:51c58d1aed8e47a0ad7eefb3c7882776&r=all 
By:  Xiaodong Wang; Fushing Hsieh 
Abstract:  We extend the Hierarchical Factor Segmentation (HFS) algorithm to discover the process of multiple volatility states hidden within each individual S&P 500 stock's return time series. We then develop an associative measure to link stocks into directed networks at various scales of association. Such networks shed light on which stocks would likely stimulate or even promote, if not cause, volatility in other linked stocks. Our computation starts by encoding large-return events on the original time axis, transforming the return time series into a recurrence-time process on a discrete time axis. By adopting BIC and clustering analysis, we identify potential multiple volatility states, and then apply the extended HFS algorithm to the recurrence-time series to discover the underlying volatility state process. Our decoding approach compares favorably with Viterbi's in experiments involving both light- and heavy-tailed distributions. After mapping the volatility state process back to the original time axis, we represent the dynamics of each stock. Association is measured through overlapping concurrent volatility states within a chosen window. Consequently, we establish data-driven associative networks for S&P 500 stocks to discover their global dependency groupings with respect to various strengths of links. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.09395&r=all 
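The encoding step described in the abstract, converting a return series into a recurrence-time process of large-return events, can be sketched in a few lines. The fixed threshold below is an illustrative assumption; the paper selects events and volatility states more carefully via BIC and clustering.

```python
def recurrence_times(returns, threshold):
    """Encode 'large return' events (|return| above a threshold) and
    return the waiting times between consecutive events, i.e. the
    recurrence-time process on a discrete time axis."""
    event_idx = [i for i, r in enumerate(returns) if abs(r) > threshold]
    return [b - a for a, b in zip(event_idx, event_idx[1:])]
```

In a high-volatility state large returns cluster, so recurrence times are short; in a calm state they are long, which is what the HFS decoding exploits.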
By:  Ahlfeldt, Gabriel M.; Albers, Thilo; Behrens, Kristian 
Abstract:  We harness big data to detect prime locations (large clusters of knowledge-based tradable services) in 125 global cities and track changes in the within-city geography of prime service jobs over a century. Historically smaller cities that did not develop early public transit networks are less concentrated today and have prime locations farther from their historic cores. We rationalize these findings in an agent-based model that features extreme agglomeration, multiple equilibria, and path dependence. Both city size and public transit networks anchor city structure. Exploiting major disasters and using a novel instrument (subway potential), we provide causal evidence for these mechanisms and disentangle size from transport network effects. 
Keywords:  prime services; internal city structure; agent-based model; multiple equilibria and path dependence; transport networks 
JEL:  R38 R52 R58 
Date:  2020–10 
URL:  http://d.repec.org/n?u=RePEc:ehl:lserod:108470&r=all 
By:  Aref Mahdavi Ardekani (Centre d'Economie de la Sorbonne) 
Abstract:  Using an interbank network simulation, this paper examines whether the causal relationship between capital and liquidity is influenced by banks' positions in the interbank network. While the existing literature highlights the causal relationship that runs from liquidity to capital, the question of how interbank network characteristics affect this relationship remains unclear. Using a sample of commercial banks from 28 European countries, this paper suggests that banks' interconnectedness within interbank loan and deposit networks affects their decisions to set higher or lower regulatory capital ratios when facing higher illiquidity. This study provides support for the need to implement minimum liquidity ratios to complement capital ratios, as stressed by the Basel Committee on Banking Supervision. This paper also highlights the need for regulatory authorities to consider the network characteristics of banks. 
Keywords:  Interbank network topology; Bank regulatory capital; Liquidity risk; Basel III 
JEL:  G21 G28 L14 
Date:  2020–10 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:20022r&r=all 
By:  Jaehyuk Choi; Desheng Ge; Kyu Ho Kang; Sungbin Sohn 
Abstract:  The literature on using yield curves to forecast recessions typically measures the term spread as the difference between the 10-year and the three-month Treasury rates. Using the term spread, however, constrains the long- and short-term interest rates to have the same absolute effect on the recession probability. In this study, we adopt a machine learning method to investigate whether the predictive ability of interest rates can be improved. The machine learning algorithm identifies the best maturity pair, separating the effects of the interest rates from those of the term spread. Our comprehensive empirical exercise shows that, despite the likelihood gain, the machine learning approach does not significantly improve predictive accuracy, owing to estimation error. Our finding supports the conventional use of the 10-year minus three-month Treasury yield spread, and is robust to the forecasting horizon, control variables, sample period, and oversampling of recession observations. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.09394&r=all 
By:  Xianfei Hui; Baiqing Sun; Hui Jiang; Indranil SenGupta 
Abstract:  We use a superposition of Lévy processes to optimize the classic BNS model. Because frequently fluctuating price parameters are difficult to estimate accurately in the model, we preprocess the price data based on fuzzy theory. Prices of S&P 500 stock index options over the past ten years are analyzed, and the deterministic fluctuations are captured by machine learning methods. The results show that the new model in a fuzzy environment solves the long-term dependence problem of the classic model with fewer parameter changes, and effectively analyzes the random dynamic characteristics of the stock index option price time series. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.08984&r=all 
By:  Mark Kiermayer 
Abstract:  Surrender poses one of the major risks to life insurance, and a sound model of its true probability has direct implications for the risk capital demanded by the Solvency II directive. We add to the existing literature by performing extensive experiments that present highly practical results for various modeling approaches, including XGBoost and neural networks. Further, we detect shortcomings of prevalent model assessments, which are in essence based on a confusion matrix. Our results indicate that accurate label prediction and sound modeling of the true probability can be opposing objectives. We illustrate this with the example of resampling. While resampling can improve label prediction in rare event settings such as surrender, and thus is commonly applied, we show theoretically and numerically that models trained on resampled data predict significantly biased event probabilities. Following a probabilistic perspective on surrender, we further propose time-dependent confidence bands on predicted mean surrender rates as a complementary assessment and demonstrate their benefit. This evaluation takes a very practical, going-concern perspective, which respects that the composition of a portfolio may change over time. 
Date:  2021–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:2101.11590&r=all 
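The resampling bias discussed in the abstract can be seen even without a trained model: any estimator calibrated to the event frequency of an oversampled training set inherits that set's inflated event rate. The sketch below is a stylized illustration; the true surrender rate and sample size are assumptions.

```python
import random

def resampling_bias_demo(n=20000, true_rate=0.05, seed=7):
    """Simulate a rare event (e.g. surrender), then oversample events to a
    roughly balanced training set. The event-frequency estimate on the
    original data is close to the true rate, while the same estimate on
    the resampled data is badly biased upward."""
    rng = random.Random(seed)
    labels = [1 if rng.random() < true_rate else 0 for _ in range(n)]
    events = [y for y in labels if y == 1]
    non_events = [y for y in labels if y == 0]
    p_hat = sum(labels) / len(labels)  # unbiased frequency estimate
    # duplicate events until the two classes are roughly balanced
    resampled = non_events + events * (len(non_events) // max(len(events), 1))
    p_resampled = sum(resampled) / len(resampled)  # inflated toward 0.5
    return p_hat, p_resampled
```

A model whose predicted probabilities average to the resampled rate is useless for Solvency II style capital calculations, even if its label accuracy improved, which is the trade-off the paper formalizes.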
By:  Vera Eichenauer (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Ronald Indergand (State Secretariat for Economic Affairs SECO, Switzerland); Isabel Z. Martínez (KOF Swiss Economic Institute, ETH Zurich, Switzerland); Christoph Sax (University of Basel; cynkra LLC, Switzerland) 
Abstract:  Google Trends has become a popular data source for social science research. We show that for small countries or subnational regions such as U.S. states, the underlying sampling noise in Google Trends can be substantial. The data may therefore be unreliable for time series analysis and is furthermore frequency-inconsistent: daily data differ from weekly or monthly data. We provide a novel sampling technique, along with the R package trendecon, to generate stable daily Google search results that are consistent with weekly and monthly queries of Google Trends. We use this new approach to construct long and consistent daily economic indices for the (mainly) German-speaking countries Germany, Austria, and Switzerland. The resulting indices are significantly correlated with traditional leading indicators, with the advantage that they are available much earlier. 
Keywords:  Google Trends, measurement, high frequency, forecasting, Covid-19, market, euro, sectoral heterogeneity 
JEL:  E01 E32 E37 
Date:  2020–06 
URL:  http://d.repec.org/n?u=RePEc:kof:wpskof:20484&r=all 
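The core idea behind the proposed sampling technique, drawing the same query repeatedly and averaging to shrink Google's sampling noise by roughly 1/sqrt(n), can be sketched as follows. The noise model and magnitudes are illustrative assumptions; the real trendecon package also handles the 0-100 rescaling and the daily/weekly/monthly consistency, which are ignored here.

```python
import random
import statistics

def average_noisy_samples(true_series, n_draws=25, noise_sd=10.0, seed=1):
    """Each query of a noisy data source (stylized Google Trends) returns
    the true daily series plus independent sampling noise. Averaging
    n_draws repeated queries reduces the noise standard deviation by a
    factor of about sqrt(n_draws)."""
    rng = random.Random(seed)
    draws = [[v + rng.gauss(0.0, noise_sd) for v in true_series]
             for _ in range(n_draws)]
    # pointwise average across the repeated draws
    return [statistics.fmean(col) for col in zip(*draws)]
```

With 25 draws the residual noise is about a fifth of a single query's, which is what makes daily series for small regions usable for time series analysis.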
By:  Jeffrey Grogger; Ria Ivandic; Tom Kirchmaier 
Abstract:  We compare predictions from a conventional protocol-based approach to risk assessment with those from a machine-learning approach. We first show that the conventional predictions are less accurate than, and have similar rates of negative prediction error as, a simple Bayes classifier that makes use only of the base failure rate. A random forest based on the underlying risk assessment questionnaire does better under the assumption that negative prediction errors are more costly than positive prediction errors. A random forest based on two-year criminal histories does better still. Indeed, adding the protocol-based features to the criminal histories adds almost nothing to the predictive adequacy of the model. We suggest using the predictions based on criminal histories to prioritize incoming calls for service, and devising a more sensitive instrument to distinguish true from false positives resulting from this initial screening. 
Keywords:  domestic abuse, risk assessment, machine learning 
JEL:  K42 
Date:  2020–02 
URL:  http://d.repec.org/n?u=RePEc:cep:cepdps:dp1676&r=all 