New Economics Papers on Computational Economics
Issue of 2019‒04‒01
ten papers chosen by
By: | Daniel Poh; Stephen Roberts; Martin Tegnér
Abstract: | The non-storability of electricity makes it unique among commodity assets, and it is an important driver of its price behaviour in secondary financial markets. The instantaneous and continuous matching of power supply with demand is a key factor explaining its volatility. During periods of high demand, costlier generation capabilities are utilised since electricity cannot be stored---this drives prices up very quickly. Furthermore, the non-storability also complicates physical hedging. Owing to this, the problem of joint price-quantity risk in electricity markets is a commonly studied theme. To this end, we investigate the use of coregionalized (or multi-task) sparse Gaussian processes (GPs) for risk management in the context of power markets. GPs provide a versatile and elegant non-parametric approach for regression and time-series modelling. However, GPs scale poorly with the amount of training data owing to a cubic complexity. These considerations suggest that knowledge transfer between price and load is vital for effective hedging, and that a computationally efficient method is required. To gauge the performance of our model, we use an average-load strategy as comparator. The latter is a robust approach commonly used in industry. If the spot and load are uncorrelated and Gaussian, then hedging with the expected load results in the minimum-variance position. Our contribution is twofold: first, we develop a multi-task sparse GP-based approach for hedging; second, we demonstrate that our model-based strategy outperforms the comparator and can thus be employed for effective hedging in electricity markets. |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1903.09536&r=all |
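The minimum-variance benchmark described in the abstract can be illustrated with a small simulation. This is a toy sketch under an assumed joint Gaussian model of price and load, not the paper's coregionalized sparse GP model; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy joint model: correlated Gaussian spot price and load (assumption; the
# paper models these with coregionalized sparse GPs).
mean = np.array([40.0, 1000.0])          # E[price], E[load]
cov = np.array([[25.0, 300.0],
                [300.0, 10_000.0]])      # positively correlated
price, load = rng.multivariate_normal(mean, cov, size=n).T

cost = price * load                      # unhedged procurement cost
forward = mean[0]                        # forward assumed priced at E[price]

# Average-load comparator: hedge volume fixed at E[load]
h_avg = mean[1]
cost_avg = cost - h_avg * (price - forward)

# Minimum-variance hedge ratio: h* = Cov(cost, price) / Var(price)
h_star = np.cov(cost, price)[0, 1] / price.var()
cost_mv = cost - h_star * (price - forward)

print(cost.var(), cost_avg.var(), cost_mv.var())
```

With correlated price and load, the average-load hedge already removes much of the cost variance, and the minimum-variance ratio removes more still, which is the gap a model-based strategy aims to close.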
By: | Ludovic Goudenège; Andrea Molent; Antonino Zanette
Abstract: | In this paper we propose an efficient method, based on machine learning and Monte Carlo simulation, to compute the price of American basket options. Specifically, the options we consider are written on a basket of assets, each following a Black-Scholes dynamics. The method is a backward dynamic-programming algorithm over a finite number of uniformly distributed exercise dates. On these dates, the value of the option is computed as the maximum between the exercise value and the continuation value, which is approximated via Gaussian Process Regression. Specifically, we consider a finite number of points, each representing the values reached by the underlying at a certain time. We first compute the continuation value only at these points by means of Monte Carlo simulation, and then employ Gaussian Process Regression to approximate the whole continuation-value function. Numerical tests show that the algorithm is fast and reliable, and that it can also handle American options on very large baskets of assets, overcoming the curse of dimensionality. |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1903.11275&r=all |
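The backward induction with a GPR-approximated continuation value can be sketched as follows. This is a minimal stand-in for the authors' algorithm, assuming a small arithmetic-basket put and scikit-learn's `GaussianProcessRegressor`; the paper's exact setup and parameters are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions): American arithmetic-basket put on
# d Black-Scholes assets, LSM-style backward induction with GPR regression.
d, n_paths, n_ex = 2, 300, 10
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
dt = T / n_ex
disc = np.exp(-r * dt)

# Simulate GBM paths: S[t, path, asset]
S = np.empty((n_ex + 1, n_paths, d))
S[0] = S0
for t in range(1, n_ex + 1):
    z = rng.standard_normal((n_paths, d))
    S[t] = S[t - 1] * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)

payoff = lambda s: np.maximum(K - s.mean(axis=1), 0.0)

# Backward induction: value = max(exercise, GPR-estimated continuation)
V = payoff(S[-1])
for t in range(n_ex - 1, 0, -1):
    gpr = GaussianProcessRegressor(kernel=RBF(50.0) + WhiteKernel(1.0),
                                   normalize_y=True).fit(S[t], disc * V)
    cont = gpr.predict(S[t])
    ex = payoff(S[t])
    V = np.where(ex > cont, ex, disc * V)

price = disc * V.mean()
print(price)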
By: | Antoine Jacquier; Emma R. Malone; Mugad Oumgari |
Abstract: | We introduce a stacking version of the Monte Carlo algorithm in the context of option pricing. Recently introduced for aeronautic computations, this simple technique, in the spirit of current machine-learning ideas, learns control variates by approximating Monte Carlo draws with some specified function. We describe the method from first principles, suggest appropriate fits, and demonstrate its efficiency in evaluating European and Asian call options in constant and stochastic volatility models. |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1903.10795&r=all |
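The idea of learning a control variate from Monte Carlo draws can be illustrated with a quadratic fit whose expectation is known in closed form under Black-Scholes (via lognormal moments). This is a hedged sketch, not the authors' exact "stacked" fits; the basis and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed Black-Scholes call setup (illustrative only)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 50_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

# "Learn" g(S_T) = a + b*S_T + c*S_T^2 by least squares on the draws
X = np.column_stack([np.ones(n), ST, ST**2])
coef, *_ = np.linalg.lstsq(X, payoff, rcond=None)

# Exact lognormal moments: E[S_T^k] = S0^k * exp(k*r*T + 0.5*k*(k-1)*sigma^2*T)
m1 = S0 * np.exp(r * T)
m2 = S0**2 * np.exp(2 * r * T + sigma**2 * T)
Eg = coef[0] + coef[1] * m1 + coef[2] * m2

plain = payoff.mean()                       # plain Monte Carlo estimate
cv = (payoff - X @ coef).mean() + Eg        # control-variate estimate
print(plain, cv)
```

Because the fitted function's expectation is exact, the control-variate estimator's noise is driven only by the (much smaller) residual of the fit.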
By: | Susan Athey; Guido Imbens |
Abstract: | We discuss the relevance of the recent Machine Learning (ML) literature for economics and econometrics. First we discuss the differences in goals, methods and settings between the ML literature and the traditional econometrics and statistics literatures. Then we discuss some specific methods from the machine learning literature that we view as important for empirical researchers in economics. These include supervised learning methods for regression and classification, unsupervised learning methods, as well as matrix completion methods. Finally, we highlight newly developed methods at the intersection of ML and econometrics, methods that typically perform better than either off-the-shelf ML or more traditional econometric methods when applied to particular classes of problems, problems that include causal inference for average treatment effects, optimal policy estimation, and estimation of the counterfactual effect of price changes in consumer choice models. |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1903.10075&r=all |
By: | Liyang Han; Thomas Morstyn; Constance Crozier; Malcolm McCulloch |
Abstract: | Among the various market structures under peer-to-peer energy sharing, one model based on cooperative game theory provides clear incentives for prosumers to collaboratively schedule their energy resources. The computational complexity of this model, however, increases exponentially with the number of participants. To address this issue, this paper proposes the application of K-means clustering to the energy profiles following the grand coalition optimization. The cooperative model is run with the "clustered players" to compute their payoff allocations, which are then further distributed among the prosumers within each cluster. Case studies show that the proposed method can significantly improve the scalability of the cooperative scheme while maintaining a high level of financial incentives for the prosumers. |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1903.10965&r=all |
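The clustering step can be sketched with scikit-learn's `KMeans`: prosumers' energy profiles are grouped and each cluster is treated as a single "clustered player", shrinking the coalition space from 2^N to 2^K. The profile model below is an illustrative assumption, not the paper's data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Assumed synthetic profiles: 100 prosumers, 24-hour net-load curves built
# from a shared daily shape with heterogeneous scales plus noise.
n_prosumers, horizon, K = 100, 24, 5
t = np.arange(horizon)
base = np.sin(2 * np.pi * t / horizon)
profiles = (rng.uniform(0.5, 2.0, (n_prosumers, 1)) * base
            + rng.normal(0, 0.3, (n_prosumers, horizon)))

km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(profiles)

# Each clustered player's profile is the sum of its members' profiles,
# so the cooperative game is run over K players instead of N.
clustered = np.vstack([profiles[km.labels_ == k].sum(axis=0)
                       for k in range(K)])
print(clustered.shape, 2**n_prosumers, 2**K)
```

The payoff computed for each clustered player would then be redistributed among its members, per the scheme in the abstract.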
By: | Matteo Mogliani |
Abstract: | We propose a new approach to mixed-frequency regressions in a high-dimensional environment that resorts to Group Lasso penalization and Bayesian techniques for estimation and inference. To improve the sparse recovery ability of the model, we also consider a Group Lasso with a spike-and-slab prior. Penalty hyper-parameters governing the model shrinkage are automatically tuned via an adaptive MCMC algorithm. Simulations show that the proposed models have good selection and forecasting performance, even when the design matrix presents high cross-correlation. When applied to U.S. GDP data, the results suggest that financial variables may have some, although limited, short-term predictive content. |
Keywords: | MIDAS regressions, penalized regressions, variable selection, forecasting, Bayesian estimation. |
JEL: | C11 C22 C53 E37 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:bfr:banfra:713&r=all |
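A frequentist sketch of the variable-selection idea: an unrestricted MIDAS lag matrix with scikit-learn's `Lasso` standing in for the paper's Bayesian Group Lasso with spike-and-slab prior. All data and parameters are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)

# Assumed setup: quarterly target regressed on 12 monthly observations of
# each of 10 high-frequency indicators; only indicator 0 is truly relevant.
n_q, n_ind, n_lags = 200, 10, 12
monthly = rng.standard_normal((3 * n_q + n_lags, n_ind))

# Build the MIDAS design: row q stacks 12 monthly values of each indicator
X = np.stack([monthly[3 * q : 3 * q + n_lags].T.ravel() for q in range(n_q)])
true_beta = np.zeros(n_ind * n_lags)
true_beta[:3] = [0.8, 0.5, 0.3]          # indicator 0, three leading lags
y = X @ true_beta + 0.5 * rng.standard_normal(n_q)

# Lasso shrinks most of the 120 coefficients to exactly zero
fit = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(fit.coef_)
print(selected)
```

A Group Lasso would instead zero out all 12 lags of an indicator jointly, which is the grouping the paper exploits.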
By: | Masaaki Fujii (Quantitative Finance Course, Graduate School of Economics, The University of Tokyo); Akihiko Takahashi (Quantitative Finance Course, Graduate School of Economics, The University of Tokyo); Masayuki Takahashi (Quantitative Finance Course, Graduate School of Economics, The University of Tokyo) |
Abstract: | We demonstrate that the use of asymptotic expansion as prior knowledge in the "deep BSDE solver", a deep-learning method for high-dimensional BSDEs proposed by Weinan E, Han & Jentzen (2017), drastically reduces the loss function and accelerates convergence. We illustrate the technique and its implications using Bergman's model with different lending and borrowing rates as a typical model for FVA, as well as a class of solvable BSDEs with quadratic-growth drivers. We also present an extension of the deep BSDE solver to reflected BSDEs representing American option prices. |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:cfi:fseres:cf456&r=all |
By: | Liyang Han; Thomas Morstyn; Malcolm McCulloch |
Abstract: | Various peer-to-peer energy markets have emerged in recent years in an attempt to manage distributed energy resources in a more efficient way. One of the main challenges these models face is how to create and allocate incentives to participants. Cooperative game theory offers a methodology to financially reward prosumers based on their contributions made to the local energy coalition using the Shapley value, but its high computational complexity limits the size of the game. This paper explores a stratified sampling method proposed in existing literature for Shapley value estimation, and modifies the method for a peer-to-peer cooperative game to improve its scalability. Finally, selected case studies verify the effectiveness of the proposed coalitional stratified random sampling method and demonstrate results from large games. |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1903.11047&r=all |
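Permutation sampling for Shapley values can be sketched as below. This uses simple random sampling of permutations (the paper's method adds stratification by coalition size), with an assumed concave characteristic function over pooled capacities.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed game: players contribute capacities; a coalition's value is a
# concave function of its pooled capacity (economies of scale).
n_players, n_perms = 20, 2000
cap = rng.uniform(1.0, 5.0, n_players)

def v(members):
    # Characteristic function: concave benefit of pooled capacity
    return float(np.sqrt(cap[list(members)].sum()))

# Shapley estimate: average marginal contribution over random join orders
phi = np.zeros(n_players)
for _ in range(n_perms):
    perm = rng.permutation(n_players)
    running, prev = [], 0.0
    for p in perm:
        running.append(p)
        val = v(running)
        phi[p] += val - prev
        prev = val
phi /= n_perms

print(phi.sum(), v(range(n_players)))  # efficiency: estimates sum to v(N)
```

The marginal contributions telescope within each permutation, so the estimates satisfy the efficiency axiom exactly; stratifying the samples by coalition size, as the paper does, reduces the per-player variance.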
By: | Cinzia Daraio (Department of Computer, Control and Management Engineering Antonio Ruberti (DIAG), University of Rome La Sapienza, Rome, Italy); Léopold Simar (Institut de Statistique, Biostatistique et de Sciences Actuarielles, Université Catholique de Louvain, Louvain-la-Neuve, Belgium); Paul W. Wilson (Department of Economics and School of Computing, Division of Computer Science, Clemson University, Clemson, South Carolina, USA)
Abstract: | The issue of quality and its relationship with efficiency and performance is a crucial operational question in many fields of study, including production economics, operations research, engineering and business management. In this paper we provide a methodology for identifying latent quality factors, estimating their statistical significance, and analyzing their impact on the performance of the production process. The methodology is based on up-to-date computational methods and statistical tests for directional distances. We illustrate the approach using real data to evaluate the performance of European universities. |
Keywords: | nonparametric efficiency ; performance assessment ; quality ; benchmarking ; directional distances ; conditional efficiency ; observed and unobserved heterogeneity ; separability condition ; European universities |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:aeg:report:2019-01&r=all |
By: | Bernardo D'Auria; Eduardo García-Portugués; Abel Guada-Azze
Abstract: | We address the problem of optimally exercising American options based on the assumption that the underlying stock's price follows a Brownian bridge whose final value coincides with the strike price. To do so, we solve the discounted optimal stopping problem endowed with the gain function $G(x) = (S - x)^+$ and a Brownian bridge whose final value equals $S$. This setting arises as a first approach to optimally exercising an option within the so-called `stock pinning' scenario. The optimal stopping boundary for this problem is proved to be the unique solution, up to certain conditions, of an integral equation, which we then solve numerically with an algorithm presented herein. We address the case of unspecified volatility by providing an estimated optimal stopping boundary that, together with pointwise confidence intervals, provides alternative stopping rules. Finally, we demonstrate the usefulness of our method within the stock pinning scenario through a comparison with the optimal exercise time based on a geometric Brownian motion. We base our comparison on the contingent claims and the 5-minute intraday stock price data of Apple and IBM for the period 2011--2018. Supplementary materials with the main proofs and auxiliary lemmas are available online. |
Date: | 2019–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1903.11686&r=all |
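A least-squares Monte Carlo stand-in (not the paper's integral-equation method) illustrates the stopping problem: the bridge is pinned at the strike, so the terminal payoff is zero and all value comes from early exercise. Discounting is omitted and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed setup: strike S, unit volatility, no discounting
S, sigma, T, n_steps, n_paths = 10.0, 1.0, 1.0, 50, 20_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# Exact Brownian-bridge transitions from S to S: the conditional variance
# shrinks to zero at maturity, pinning every path at the strike.
X = np.full((n_steps + 1, n_paths), S)
for k in range(n_steps):
    tau = T - t[k]
    mean_next = X[k] + (S - X[k]) * dt / tau
    var_next = sigma**2 * dt * (T - t[k + 1]) / tau
    X[k + 1] = mean_next + np.sqrt(var_next) * rng.standard_normal(n_paths)

payoff = lambda x: np.maximum(S - x, 0.0)

# Backward induction with polynomial regression for the continuation value
V = payoff(X[-1])                 # zero by construction (pinned at S)
for k in range(n_steps - 1, 0, -1):
    itm = payoff(X[k]) > 0
    coef = np.polyfit(X[k][itm], V[itm], deg=3)
    cont = np.polyval(coef, X[k])
    ex = payoff(X[k])
    V = np.where(itm & (ex > cont), ex, V)

value = V.mean()
print(value)
```

The paper instead characterizes the exercise boundary as the solution of an integral equation; the regression here is only a quick numerical stand-in for that boundary.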