
on Computational Economics 
By:  Justin Sirignano; Konstantinos Spiliopoulos 
Abstract:  High-dimensional PDEs have been a longstanding computational challenge. We propose a deep learning algorithm similar in spirit to Galerkin methods, using a deep neural network instead of linear combinations of basis functions. The PDE solution is approximated with a deep neural network, which is trained on random batches of spatial points to satisfy the differential operator and boundary conditions. The algorithm is meshless, which is key since meshes become infeasible in higher dimensions. Instead of forming a mesh, sequences of spatial points are randomly sampled. We implement the approach for American options (a type of free-boundary PDE which is widely used in finance) in up to 100 dimensions. We call the algorithm the "Deep Galerkin Method (DGM)". 
Date:  2017–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1708.07469&r=cmp 
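The meshless training idea in the abstract above can be sketched on an illustrative one-dimensional boundary value problem. The toy PDE u''(x) = 0 with u(0) = 0, u(1) = 1, the function names, and the loss design below are our own simplifications, not the authors' implementation; in DGM proper, u would be a deep network trained by stochastic gradient descent on this kind of loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def dgm_loss(u, u_xx, n_interior=256):
    """Monte Carlo loss for the toy PDE u''(x) = 0 on [0, 1] with
    u(0) = 0 and u(1) = 1. Interior points are drawn at random on each
    call, mimicking DGM's meshless sampling; no grid is ever built."""
    x = rng.uniform(0.0, 1.0, n_interior)
    residual = u_xx(x)                            # differential-operator misfit
    boundary = np.array([u(0.0) - 0.0, u(1.0) - 1.0])
    return float(np.mean(residual**2) + np.mean(boundary**2))

# The exact solution u(x) = x drives the loss to zero ...
exact_loss = dgm_loss(lambda x: x, lambda x: np.zeros_like(x))
# ... while a non-solution such as u(x) = x**2 (so u'' = 2) does not.
wrong_loss = dgm_loss(lambda x: x**2, lambda x: 2.0 * np.ones_like(x))
```

Minimizing this loss over a family of candidate functions is the training step; the key point is that each evaluation touches only a fresh random batch of points, so the cost does not grow with a mesh.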
By:  Grzegorz Marcjasz; Bartosz Uniejewski; Rafal Weron 
Abstract:  In day-ahead electricity price forecasting, the daily and weekly seasonalities are always taken into account, but the long-term seasonal component was believed to add unnecessary complexity and was ignored in most studies. The recent introduction of the Seasonal Component AutoRegressive (SCAR) modeling framework has changed this viewpoint. However, the latter is based on linear models estimated using Ordinary Least Squares. Here we show that considering nonlinear, neural-network-type models with the same inputs as the corresponding SCAR model can lead to yet better performance. While individual Seasonal Component Artificial Neural Network (SCANN) models are generally worse than the corresponding SCAR-type structures, we provide empirical evidence that committee machines of SCANN networks can significantly outperform the latter. 
Keywords:  Electricity spot price; Forecasting; Day-ahead market; Long-term seasonal component; Neural network; Committee machine 
JEL:  C14 C22 C45 C51 C53 Q47 
Date:  2017–07–29 
URL:  http://d.repec.org/n?u=RePEc:wuu:wpaper:hsc1703&r=cmp 
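The committee-machine idea in the abstract above rests on a simple effect: averaging several imperfect forecasters cancels part of their individual errors. The sketch below illustrates this with generic noisy forecasters standing in for SCANN networks; the target series, noise level, and ensemble size are purely illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0.0, 6.0, 100))            # target series to forecast
# Eight noisy individual forecasters stand in for the SCANN ensemble members;
# the noise level of 0.3 is purely illustrative.
members = truth + rng.normal(0.0, 0.3, size=(8, 100))
committee = members.mean(axis=0)                      # committee-machine forecast

def mse(forecast):
    return float(np.mean((forecast - truth) ** 2))

committee_mse = mse(committee)
mean_individual_mse = float(np.mean([mse(m) for m in members]))
```

With independent errors, averaging k members divides the error variance by roughly k, which is why the committee can beat every individual network even when each one is mediocre.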
By:  Thomas Winberry (University of Chicago); Benjamin Moll (Princeton); Greg Kaplan (University of Chicago) 
Abstract:  We study the aggregate consumption, interest rate, and output dynamics of a heterogeneous-agent economy that is parameterized to match key features of the cross-sectional distribution of labor income, wealth, and marginal propensities to consume measured from household-level micro data. Households face a process for idiosyncratic income risk with leptokurtic growth rates and can self-insure in two assets with different degrees of liquidity. The equilibrium features a three-dimensional distribution that moves stochastically over time, rendering computation difficult with existing methods. We develop computational tools to efficiently solve a broad class of heterogeneous-agent models with aggregate shocks that includes our model as a special case. The method uses linearization to solve for the dynamics of a reduced version of the model, which is obtained from a model-free dimensionality reduction method for the endogenous distributions. We will publish an open-source set of Matlab codes to implement our method in an easy-to-use and model-free way. We find that our model, which is parameterized to household-level facts, is consistent with the sensitivity of aggregate consumption to predictable changes in aggregate income, and with the relative smoothness of aggregate consumption, features that are difficult to generate in representative-agent models. We illustrate the usefulness of our model and methods for studying the distributional implications of shocks more generally. 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:red:sed017:483&r=cmp 
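The model-free dimensionality reduction mentioned in the abstract above can be sketched with a generic principal-components compression of a simulated distribution history. The factor structure, dimensions, and SVD-based reduction below are our own stand-ins, not the authors' method or their Matlab codes.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated history of a cross-sectional distribution (a histogram over 50 bins
# at each of 200 dates), driven by two latent factors plus small noise; a
# generic stand-in for the model's endogenous distribution.
T, n_bins, k = 200, 50, 2
factors = rng.normal(size=(T, k))
loadings = rng.normal(size=(k, n_bins))
dists = factors @ loadings + 0.01 * rng.normal(size=(T, n_bins))

# Model-free reduction: keep the top k principal components of the history.
mean_dist = dists.mean(axis=0)
U, s, Vt = np.linalg.svd(dists - mean_dist, full_matrices=False)
reduced_state = U[:, :k] * s[:k]        # k numbers per date instead of n_bins
recon = reduced_state @ Vt[:k] + mean_dist
rel_err = float(np.linalg.norm(recon - dists) / np.linalg.norm(dists))
```

Replacing a 50-bin (or three-dimensional) distribution with a handful of factor coordinates is what makes a subsequent linearization of the aggregate dynamics tractable.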
By:  Michael A. Kouritzin; Anne MacKay 
Abstract:  In a market with stochastic volatility and jumps, we consider a VIX-linked fee structure for variable annuity contracts with guaranteed minimum withdrawal benefits (GMWB). Our goal is to assess the effectiveness of the VIX-linked fee structure in decreasing the sensitivity of the insurer's liability to volatility risk. Since the GMWB payoff is highly path-dependent, it is particularly sensitive to volatility risk and can also be challenging to price, especially in the presence of the VIX-linked fee. In this paper, we present an explicit weak solution for the value of the VA account and use it in Monte Carlo simulations to value the GMWB guarantee. Numerical examples are provided to analyze the impact of the VIX-linked fee on the sensitivity of the liability to changes in market volatility. 
Date:  2017–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1708.06886&r=cmp 
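The mechanism behind a volatility-linked fee can be illustrated with a toy Monte Carlo of an account whose fee rate scales with current volatility. Everything below is a deliberate simplification of the paper's setting: the deterministic seasonal volatility path replaces stochastic volatility with jumps, the VIX is replaced by instantaneous volatility, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy Monte Carlo for a variable-annuity account whose fee rate scales with
# current volatility -- a simplified stand-in for a VIX-linked fee.
n_paths, n_steps = 5000, 252
dt = 1.0 / 252.0
sigma = 0.2 + 0.1 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_steps))
z = rng.standard_normal((n_paths, n_steps))

def terminal_account(fee_mult):
    F = np.full(n_paths, 100.0)
    for t in range(n_steps):
        fee = fee_mult * sigma[t]            # fee rises when volatility rises
        drift = (0.02 - fee - 0.5 * sigma[t] ** 2) * dt
        F = F * np.exp(drift + sigma[t] * np.sqrt(dt) * z[:, t])
    return F

no_fee = terminal_account(0.0)
vix_fee = terminal_account(0.05)
```

Because the fee is collected fastest exactly when volatility (and hence the guarantee's value) is highest, the insurer's fee income co-moves with its liability, which is the hedging effect the paper quantifies.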
By:  Amir AhmadiJavid; Malihe FallahTafti 
Abstract:  The entropic value-at-risk (EVaR) is a new coherent risk measure, which is an upper bound for both the value-at-risk (VaR) and the conditional value-at-risk (CVaR). As important properties, the EVaR is strongly monotone over its domain and strictly monotone over a broad subdomain including all continuous distributions, while well-known monotone risk measures, such as VaR and CVaR, lack these properties. A key feature of a risk measure, besides its financial properties, is its applicability in large-scale sample-based portfolio optimization. If the negative return of an investment portfolio is a differentiable convex function, portfolio optimization with the EVaR results in a differentiable convex program whose numbers of variables and constraints are independent of the sample size, which is not the case for the VaR and CVaR. This enables us to design an efficient algorithm using differentiable convex optimization. Our extensive numerical study shows the high efficiency of the algorithm at large scales, compared to existing convex optimization software packages. The computational efficiency of the EVaR portfolio optimization approach is also compared with that of CVaR-based portfolio optimization. This comparison shows that the EVaR approach generally performs similarly and outperforms the CVaR approach as the sample size increases. Moreover, the comparison of the portfolios obtained for a real case by the EVaR and CVaR approaches shows that the EVaR approach can find portfolios with better expectations and VaR values at high confidence levels. 
Date:  2017–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1708.05713&r=cmp 
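The ordering EVaR >= CVaR >= VaR stated in the abstract above can be checked numerically. The sketch below uses the standard dual representation EVaR(X) = inf over z > 0 of (1/z) * log(E[exp(zX)] / alpha); the simple grid search over z and all sample parameters are our own illustrative choices, not the paper's large-scale algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
losses = rng.normal(0.0, 1.0, 20_000)   # sample of portfolio losses
alpha = 0.05                            # tail probability (95% confidence)

def var(x, a):
    return float(np.quantile(x, 1.0 - a))

def cvar(x, a):
    v = var(x, a)
    return float(x[x >= v].mean())       # mean loss beyond the VaR threshold

def evar(x, a):
    # EVaR(x) = inf_{z>0} (1/z) * log( mean(exp(z*x)) / a ); the objective is
    # convex in z, so a coarse one-dimensional grid search suffices here.
    zs = np.linspace(1e-2, 5.0, 400)
    return min(float(np.log(np.mean(np.exp(z * x)) / a) / z) for z in zs)

v, c, e = var(losses, alpha), cvar(losses, alpha), evar(losses, alpha)
```

For a standard normal loss the theoretical values at alpha = 0.05 are roughly VaR 1.64, CVaR 2.06, and EVaR sqrt(-2 log alpha) about 2.45, so the sample estimates should reproduce the strict ordering.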
By:  Jesus Lago; Fjo De Ridder; Peter Vrancx; Bart De Schutter 
Abstract:  Motivated by the increasing integration among electricity markets, in this paper we propose three different methods to incorporate market integration into electricity price forecasting and to improve predictive performance. First, we propose a deep neural network that considers features from connected markets to improve predictive accuracy in a local market. To measure the importance of these features, we propose a novel feature selection algorithm that, using Bayesian optimization and functional analysis of variance, analyzes the effect of the features on forecasting performance. In addition, using market integration, we propose a second model that, by simultaneously predicting prices from two markets, improves the forecasting accuracy even further. Finally, we present a third model to predict the probability of price spikes, which is then used as an input in the other two forecasters to detect spikes. As a case study, we consider the electricity market in Belgium and the improvements in forecasting accuracy obtained when using various French electricity features. We show that the three proposed models lead to improvements that are statistically significant: in particular, owing to market integration, the sMAPE (symmetric mean absolute percentage error) is reduced from 15.7% to 12.5%. We also show that the proposed feature selection algorithm performs a correct assessment, i.e., it discards the irrelevant features. 
Date:  2017–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1708.07061&r=cmp 
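For reference, the accuracy measure quoted in the abstract above can be computed as follows; this is the standard sMAPE definition on a 0-200% scale, written out by us rather than taken from the paper's code.

```python
import numpy as np

def smape(actual, forecast):
    # Symmetric mean absolute percentage error, in percent (0-200% scale).
    # Assumes actual and forecast are not both exactly zero at any point.
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return float(100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f))))
```

A perfect forecast scores 0%, while forecasting 50 against an actual price of 100 scores 2*50/150 = 66.7%, which gives a sense of the scale on which the reported drop from 15.7% to 12.5% lives.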
By:  Grey Gordon (Indiana University); Shi Qiu (Indiana University) 
Abstract:  A divide-and-conquer algorithm for exploiting policy function monotonicity is proposed and analyzed. To solve a discrete problem with n states and n choices, the algorithm requires at most n log2(n) + 5n objective function evaluations. In contrast, existing methods for non-concave problems require n^2 evaluations in the worst case. For concave problems, the solution technique can be combined with a method exploiting concavity to reduce evaluations to 14n + 2 log2(n). A version of the algorithm exploiting monotonicity in two state variables allows for even more efficient solutions. The algorithm can also be efficiently employed in a common class of problems that do not have monotone policies, including problems with many state and choice variables. In the sovereign default model of Arellano (2008) and in the real business cycle model, the algorithm reduces run times by an order of magnitude for moderate grid sizes and by orders of magnitude for larger ones. Sufficient conditions for monotonicity are provided. 
Keywords:  Computation, Monotonicity, Grid Search, Sovereign Default 
Date:  2017–04 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2017006&r=cmp 
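The divide-and-conquer idea described in the abstract above can be sketched compactly: solve the midpoint state first, then each half of the state grid only needs to search a restricted choice range. The implementation details and the toy objective below are our own, not the paper's code.

```python
import numpy as np

def monotone_argmax(value, n_states, n_choices):
    """Divide-and-conquer grid search assuming the optimal choice is
    nondecreasing in the state index. Solving the midpoint state first
    lets each half of the grid search only a restricted choice range,
    giving on the order of n*log2(n) evaluations instead of n^2."""
    policy = np.empty(n_states, dtype=int)

    def solve(i_lo, i_hi, j_lo, j_hi):
        if i_lo > i_hi:
            return
        i = (i_lo + i_hi) // 2
        j_star = max(range(j_lo, j_hi + 1), key=lambda j: value(i, j))
        policy[i] = j_star
        solve(i_lo, i - 1, j_lo, j_star)   # smaller states choose weakly less
        solve(i + 1, i_hi, j_star, j_hi)   # larger states choose weakly more

    solve(0, n_states - 1, 0, n_choices - 1)
    return policy

# Toy objective whose optimal choice j = i is monotone in the state i.
value = lambda i, j: -(j - i) ** 2
policy = monotone_argmax(value, 16, 16)
brute = [max(range(16), key=lambda j: value(i, j)) for i in range(16)]
```

The brute-force comparison confirms the shortcut returns the same policy while evaluating far fewer (state, choice) pairs, which is where the paper's order-of-magnitude run-time reductions come from.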