New Economics Papers on Econometric Time Series
By: | Christian Bayer; Blanka Horvath; Aitor Muguruza; Benjamin Stemper; Mehdi Tomas |
Abstract: | Techniques from deep learning play an increasingly important role in the calibration of financial models. The pioneering paper by Hernandez [Risk, 2017] was a catalyst for resurfacing interest in research in this area. In this paper we advocate an alternative (two-step) approach using deep learning techniques solely to learn the pricing map -- from model parameters to prices or implied volatilities -- rather than directly the calibrated model parameters as a function of observed market data. Having a fast and accurate neural-network-based approximate pricing map (first step), we can then (second step) use traditional model calibration algorithms. In this work we showcase a direct comparison of different potential approaches to the learning stage and present algorithms that provide sufficient accuracy for practical use. We provide a first neural network-based calibration method for rough volatility models for which calibration can be done on the fly. We demonstrate the method via a hands-on calibration engine on the rough Bergomi model, for which classical calibration techniques are difficult to apply due to the high cost of all known numerical pricing methods. Furthermore, we display and compare different types of sampling and training methods and elaborate on their advantages under different objectives. As a further application we use the fast pricing method for a Bayesian analysis of the calibrated model. (A stylized sketch of the two-step approach follows this entry.) |
Date: | 2019–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1908.08806&r=all |
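A minimal sketch of the two-step approach described above, in Python. The pricer, parameter ranges, and network size are illustrative stand-ins (a toy quadratic smile rather than a rough Bergomi Monte Carlo pricer); only the structure follows the abstract: learn the pricing map offline on synthetic data, then calibrate against the cheap surrogate with a deterministic least-squares solver.

# Step 1: learn the pricing map (parameters -> implied vols); Step 2: calibrate.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive Monte Carlo pricer: maps model
# parameters theta to implied volatilities on a fixed log-moneyness grid.
def mc_pricer(theta, n_points=8):
    a, b, c = theta
    k = np.linspace(-0.2, 0.2, n_points)   # log-moneyness grid
    return a + b * k + c * k**2             # toy "smile"

# Step 1: sample parameters, generate training prices once, fit the network.
thetas = rng.uniform([0.1, -1.0, 0.0], [0.4, 1.0, 5.0], size=(5000, 3))
vols = np.array([mc_pricer(t) for t in thetas])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(thetas, vols)

# Step 2: calibrate to "market" vols using only the cheap network surrogate.
market_vols = mc_pricer(np.array([0.25, 0.3, 2.0]))
res = least_squares(lambda t: net.predict(t.reshape(1, -1)).ravel() - market_vols,
                    x0=np.array([0.2, 0.0, 1.0]),
                    bounds=([0.1, -1.0, 0.0], [0.4, 1.0, 5.0]))
print("calibrated parameters:", res.x)

The point of the split is that the expensive pricer is only called offline to build the training set; the calibration loop itself evaluates only the fast network approximation.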
By: | Sander Barendse (University of Oxford); Erik Kole (Erasmus University Rotterdam); Dick van Dijk (Erasmus University Rotterdam) |
Abstract: | We investigate the effect of estimation error on backtests of (multi-period) expected shortfall (ES) forecasts. These backtests are based on first-order conditions of a recently introduced family of jointly consistent loss functions for Value-at-Risk (VaR) and ES. We provide explicit expressions for the additional terms in the asymptotic covariance matrix that result from estimation error, and propose robust tests that account for it. Monte Carlo experiments show that tests that ignore these terms suffer from size distortions, which are more pronounced for higher ratios of out-of-sample to in-sample observations. Robust versions of the backtests perform well, although this also depends on the choice of conditioning variables. In an application to VaR and ES forecasts for daily FTSE 100 index returns as generated by AR-GARCH, AR-GJR-GARCH, and AR-HEAVY models, we find that estimation error substantially impacts the outcome of the backtests. (A reminder of the loss family underlying such backtests follows this entry.) |
Keywords: | expected shortfall, backtesting, risk management, tail risk, Value-at-Risk |
JEL: | C12 C53 C58 G17 |
Date: | 2019–08–19 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:2019058&r=all |
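For reference, the family of jointly consistent loss functions for the pair (VaR, ES) that such backtests build on is usually written in the Fissler-Ziegel form below. Sign conventions (losses vs. returns, e < 0) vary across papers, so take this as a reminder of the structure rather than the paper's exact notation:
\[
S_\alpha(v, e, y) \;=\; \big(\mathbf{1}\{y \le v\} - \alpha\big)\,G_1(v) \;-\; \mathbf{1}\{y \le v\}\,G_1(y)
\;+\; G_2(e)\Big(e - v + \tfrac{1}{\alpha}\,\mathbf{1}\{y \le v\}(v - y)\Big) \;-\; \mathcal{G}_2(e) \;+\; a(y),
\]
for suitable functions \(G_1\) and \(G_2 = \mathcal{G}_2'\). The choice \(G_1 \equiv 0\), \(G_2(e) = -1/e\) gives a member often used in the VaR/ES backtesting literature,
\[
S_\alpha(v, e, y) \;=\; -\frac{1}{\alpha e}\,\mathbf{1}\{y \le v\}(v - y) \;+\; \frac{v}{e} \;+\; \log(-e) \;-\; 1, \qquad e < 0.
\]
The backtests are built from sample analogues of the first-order (moment) conditions implied by such losses, evaluated at the reported VaR and ES forecasts and interacted with conditioning variables.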
By: | Agnieszka Borowska (Vrije Universiteit Amsterdam); Lennart Hoogerheide (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam); Herman van Dijk (Erasmus University Rotterdam) |
Abstract: | A novel approach to inference for a specific region of the predictive distribution is introduced. An important domain of application is accurate prediction of financial risk measures, where the area of interest is the left tail of the predictive density of log-returns. Our proposed approach originates from the Bayesian approach to parameter estimation and time series forecasting; however, it is robust in the sense that it provides a more accurate estimate of the predictive density in the region of interest in case of misspecification. The first main contribution of the paper is the novel concept of the Partially Censored Posterior (PCP), where the set of model parameters is partitioned into two subsets: for the first subset of parameters we consider the standard marginal posterior, while for the second subset of parameters (those particularly related to the region of interest) we consider the conditional censored posterior. The censoring means that observations outside the region of interest are censored: for those observations only the probability of being outside the region of interest matters. This quasi-Bayesian approach yields more precise parameter estimates than a fully censored posterior for all parameters, and focuses more on the region of interest than a standard Bayesian approach. The second main contribution is the introduction of two novel methods for computationally efficient simulation: Conditional MitISEM, a Markov chain Monte Carlo method to simulate model parameters from the Partially Censored Posterior, and PCP-QERMit, an Importance Sampling method introduced to further decrease the numerical standard errors of the Value-at-Risk and Expected Shortfall estimators. The third main contribution is that we consider the effect of using a time-varying boundary of the region of interest, which may provide more information about the left tail of the distribution of the standardized innovations. Extensive simulation and empirical studies show the ability of the introduced method to outperform standard approaches. (A stylized version of the censored likelihood follows this entry.) |
Keywords: | Bayesian inference, censored likelihood, censored posterior, partially censored posterior, misspecification, density forecasting, Markov chain Monte Carlo, importance sampling, mixture of Student's t, Value-at-Risk, Expected Shortfall |
JEL: | C11 C53 C58 |
Date: | 2019–08–19 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20190057&r=all |
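A stylized version of the censoring idea, assuming a left-tail region of interest \((-\infty, C]\) for returns \(y_t\) with conditional density \(f(\cdot \mid \mathcal{F}_{t-1}, \theta)\); the notation is illustrative, not the paper's. Observations in the region enter with their density, observations outside enter only through the probability of the complement:
\[
\log \tilde{L}_C(\theta) \;=\; \sum_{t:\, y_t \le C} \log f(y_t \mid \mathcal{F}_{t-1}, \theta)
\;+\; \sum_{t:\, y_t > C} \log \Pr(y_t > C \mid \mathcal{F}_{t-1}, \theta).
\]
In the Partially Censored Posterior the parameter vector is split as \(\theta = (\theta_1, \theta_2)\) and, schematically,
\[
p_{\mathrm{PCP}}(\theta_1, \theta_2 \mid y) \;\propto\; p(\theta_1 \mid y)\;\tilde{p}_C(\theta_2 \mid \theta_1, y),
\]
with the standard marginal posterior for \(\theta_1\) and the conditional censored posterior (based on \(\tilde{L}_C\)) for the tail-relevant parameters \(\theta_2\).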
By: | Joshua C. C. Chan |
Abstract: | Large Bayesian VARs with stochastic volatility are increasingly used in empirical macroeconomics. The key to making these highly parameterized VARs useful is the use of shrinkage priors. We develop a family of priors that captures the best features of two prominent classes of shrinkage priors: adaptive hierarchical priors and Minnesota priors. Like the adaptive hierarchical priors, these new priors ensure that only ‘small’ coefficients are strongly shrunk to zero, while ‘large’ coefficients remain intact. At the same time, these new priors can also incorporate many useful features of the Minnesota priors, such as cross-variable shrinkage and shrinking coefficients on higher lags more aggressively. We introduce a fast posterior sampler to estimate BVARs with this family of priors: for a BVAR with 25 variables and 4 lags, obtaining 10,000 posterior draws takes about 3 minutes on a standard desktop. In a forecasting exercise, we show that these new priors outperform both adaptive hierarchical priors and Minnesota priors. (An illustrative prior of this kind is sketched after this entry.) |
Keywords: | shrinkage prior, forecasting, stochastic volatility, structural VAR |
JEL: | C11 C52 E37 |
Date: | 2019–08 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2019-61&r=all |
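To fix ideas, a prior of the kind described above might combine a heavy-tailed local scale with Minnesota-type deterministic scaling; the specific parameterization below is an illustrative assumption, not necessarily the one used in the paper. For the coefficient \(\beta_{ij,\ell}\) on lag \(\ell\) of variable \(j\) in equation \(i\):
\[
\beta_{ij,\ell} \mid \psi_{ij,\ell}, \kappa \;\sim\; \mathcal{N}\!\Big(0,\; \psi_{ij,\ell}\,\kappa\,\frac{c_{ij}}{\ell^{2}}\Big),
\qquad \psi_{ij,\ell} \sim \text{a heavy-tailed local prior},
\]
where \(c_{ij}\) implements Minnesota-style cross-variable shrinkage (e.g. \(c_{ii} = 1\) and a smaller value, rescaled by residual variances, for \(i \ne j\)) and the factor \(\ell^{-2}\) shrinks coefficients on higher lags more aggressively. The local scales \(\psi_{ij,\ell}\) let individual large coefficients escape the shrinkage, which is the adaptive-hierarchical ingredient.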
By: | Paulo M.M. Rodrigues; Gabriel Zsurkis; João Nicolau |
Abstract: | This paper introduces a simple and easy-to-implement procedure to test for changes in persistence. The time-varying parameter that characterizes persistence changes under the alternative hypothesis is approximated by a parsimonious cosine function. The new test statistic is the minimum of a t-statistic from a test regression, computed over a set of reasonable values for a frequency term that captures the time-varying properties of persistence. The asymptotic distributions of the new tests are derived and critical values are provided. An in-depth Monte Carlo analysis shows that the new procedure has important power gains when compared to the local GLS de-trended Dickey-Fuller (DFGLS) type tests introduced by Elliott et al. (1996) under various data generating processes with persistence changes. Moreover, an empirical application to OECD countries’ inflation series shows that for most countries analysed persistence was high in the first half of the sample and subsequently decreased. These results are compatible with modern macroeconomic theories that point to changes in inflation behavior in the early 1980s and also with recent empirical evidence against the I(1)-I(0) dichotomy. (An illustrative implementation of the min-t statistic follows this entry.) |
JEL: | C12 C22 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ptu:wpaper:w201909&r=all |
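An illustrative sketch of the kind of test described above: interact the lagged level with a cosine in time, compute the t-statistic on that interaction for each frequency on a grid, and take the minimum. The exact test regression, deterministic terms, frequency grid, and critical values in the paper may differ; the code below only shows the mechanics on a simulated series whose persistence falls mid-sample.

# Minimum-t test sketch over a grid of cosine frequencies (illustrative only).
import numpy as np
import statsmodels.api as sm

def min_t_persistence(y, freqs=(0.5, 1.0, 1.5, 2.0)):
    T = len(y)
    dy, ylag = np.diff(y), y[:-1]
    t_stats = []
    for k in freqs:
        cos_term = np.cos(2 * np.pi * k * np.arange(1, T) / T) * ylag
        X = sm.add_constant(np.column_stack([ylag, cos_term]))
        fit = sm.OLS(dy, X).fit()
        t_stats.append(fit.tvalues[2])   # t-stat on the cosine interaction
    return min(t_stats)

# Simulated series whose AR(1) coefficient drops from 0.95 to 0.5 mid-sample.
rng = np.random.default_rng(1)
eps = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    rho = 0.95 if t < 200 else 0.5
    y[t] = rho * y[t - 1] + eps[t]
print("min-t statistic:", min_t_persistence(y))

Rejection would be based on comparing the minimum t-statistic with the critical values tabulated in the paper, which are not reproduced here.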
By: | António Rua; Hossein Hassani; Emmanuel Sirimal Silva; Dimitrios Thomakos |
Abstract: | The literature on mixed-frequency models is relatively recent and has found applications across economics and finance. The standard application in economics considers the use of (usually) monthly variables (e.g. industrial production) in predicting/fitting quarterly variables (e.g. real GDP). In this paper we propose a Multivariate Singular Spectrum Analysis (MSSA) based method for mixed-frequency interpolation and forecasting, which can be used for any mixed-frequency combination. The novelty of the proposed approach rests on its simplicity within the MSSA framework. We present our method using a combination of monthly and quarterly series and apply MSSA decomposition and reconstruction to obtain monthly estimates and forecasts for the quarterly series. Our empirical application shows that the suggested approach works well, as it offers forecasting improvements on a dataset of eleven developed countries over the last 50 years. The implications for mixed-frequency modelling and forecasting, and useful extensions of this method, are also discussed. (A minimal MSSA reconstruction sketch follows this entry.) |
JEL: | C1 C53 E1 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ptu:wpaper:w201913&r=all |
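A minimal sketch of the MSSA decomposition/reconstruction step in Python. How the quarterly series is embedded into the monthly grid is not spelled out in the abstract, so the example simply repeats each quarterly value across its three months before decomposing; the window length, number of components, and embedding are illustrative assumptions.

# Vertical MSSA: stack per-series trajectory matrices, truncate the SVD,
# and diagonal-average each block back into a monthly-frequency series.
import numpy as np

def trajectory(x, L):
    # L x K Hankel (trajectory) matrix of a 1-D series
    return np.column_stack([x[i:i + L] for i in range(len(x) - L + 1)])

def mssa_reconstruct(series, L=24, r=4):
    X = np.vstack([trajectory(s, L) for s in series])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]          # rank-r reconstruction
    out = []
    for i, x in enumerate(series):
        block = Xr[i * L:(i + 1) * L]
        rec, cnt = np.zeros(len(x)), np.zeros(len(x))
        for col in range(block.shape[1]):      # diagonal averaging
            rec[col:col + L] += block[:, col]
            cnt[col:col + L] += 1
        out.append(rec / cnt)
    return out

# Toy mixed-frequency example: a monthly indicator plus a quarterly series
# repeated within each quarter to place it on the monthly grid.
t = np.arange(240)
monthly = np.sin(2 * np.pi * t / 12) + 0.1 * np.random.default_rng(2).standard_normal(240)
quarterly = np.repeat(np.cos(2 * np.pi * np.arange(80) / 4), 3)
monthly_hat, quarterly_hat = mssa_reconstruct([monthly, quarterly])

The reconstructed low-rank version of the quarterly block lives on the monthly grid, which is the sense in which the decomposition can serve for interpolation; forecasting would additionally require an SSA continuation step (e.g. recurrent forecasting), which is omitted here.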