Papers on Econometric Time Series
By: | Loriano Mancini (University of Zurich); Fabio Trojani (University of St-Gallen) |
Abstract: | We propose a general robust semiparametric bootstrap method to estimate conditional predictive distributions of GARCH-type models. Our approach is based on a robust estimator for the parameters of GARCH-type models and a robustified resampling method for standardized GARCH residuals, which controls the bootstrap instability due to influential observations in the tails of the standardized residuals. Monte Carlo simulation shows that our method consistently provides lower VaR forecast errors, often to a large extent, and, in contrast to classical methods, never fails validation tests at usual significance levels. We test our approach extensively in real-data applications to VaR prediction for market risk, and find that only our robust procedure passes all validation tests at usual confidence levels. Moreover, the smaller tail estimation risk of robust VaR forecasts implies VaR prediction intervals that can be nearly 20% narrower and 50% less volatile over time. This further desirable property of our method allows risky positions to be adapted to VaR limits more smoothly, and thus more efficiently. |
Keywords: | Backtesting, M-estimator, Extreme Value Theory, Breakdown Point. |
JEL: | C14 C15 C23 C59 |
Date: | 2005–10 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp0731&r=ets |
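As a rough illustration of the resampling idea in this abstract (not the authors' estimator, which pairs a robust M-estimator of the GARCH parameters with a robustified resampling scheme), the sketch below bootstraps a one-step-ahead VaR from standardized residuals after winsorizing their extreme tails; the function name, winsorization rule and trim level are hypothetical choices for exposition.

    import numpy as np

    def robust_residual_bootstrap_var(returns, sigma, alpha=0.01,
                                      n_boot=2000, trim=0.005, seed=None):
        """Bootstrap a one-step VaR from standardized GARCH residuals,
        winsorizing the tails before resampling to damp the influence of
        outlying residuals (illustrative stand-in for the paper's scheme)."""
        rng = np.random.default_rng(seed)
        returns = np.asarray(returns, float)
        sigma = np.asarray(sigma, float)
        z = returns / sigma                      # standardized residuals
        lo, hi = np.quantile(z, [trim, 1.0 - trim])
        z_w = np.clip(z, lo, hi)                 # winsorize influential tail points
        sigma_next = sigma[-1]                   # crude one-step volatility forecast
        sims = sigma_next * rng.choice(z_w, size=n_boot, replace=True)
        return -np.quantile(sims, alpha)         # VaR at level alpha, reported positive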
By: | Dominique Guégan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Zhiping Lu (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, ECNU - East China Normal University) |
Abstract: | The purpose of this paper is to study the self-similarity properties of discrete-time long memory processes. We apply our results to specific processes such as GARMA and GIGARCH processes, heteroscedastic models, and processes with switches and jumps. |
Keywords: | Covariance stationary, Long memory processes, short memory processes, self-similar, asymptotically second-order self-similar, autocorrelation function. |
Date: | 2007–11 |
URL: | http://d.repec.org/n?u=RePEc:hal:papers:halshs-00187910_v1&r=ets |
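A standard empirical check of the asymptotic second-order self-similarity discussed above looks at how the variance of block means scales with the block size m: for a self-similar process with parameter H, Var(X^(m)) behaves like m^(2H-2). A minimal numpy sketch of this variance-time estimate (function name and scale grid are arbitrary):

    import numpy as np

    def hurst_variance_plot(x, ms=(1, 2, 4, 8, 16, 32, 64)):
        """Estimate the self-similarity (Hurst) parameter H from the variance
        of aggregated means: Var(X^(m)) ~ m^(2H - 2)."""
        x = np.asarray(x, dtype=float)
        log_m, log_v = [], []
        for m in ms:
            n = len(x) // m
            agg = x[:n * m].reshape(n, m).mean(axis=1)   # block means at scale m
            log_m.append(np.log(m))
            log_v.append(np.log(agg.var()))
        slope, _ = np.polyfit(log_m, log_v, 1)           # slope = 2H - 2
        return 1.0 + slope / 2.0                         # H = 0.5 for iid noise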
By: | Abdou Kâ Diongue (UFR SAT - Université Gaston Berger - Université Gaston Berger de Saint-Louis); Dominique Guégan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Bertrand Vignal (EDF - EDF - Recherche et Développement) |
Abstract: | In this article, we investigate conditional mean and variance forecasts using a dynamic model following a k-factor GIGARCH process. We are particularly interested in calculating the conditional variance of the prediction error. We apply this method to electricity prices and test spot-price forecasts up to one month ahead. We conclude that the k-factor GIGARCH process is a suitable tool for forecasting spot prices, according to the classical RMSE criterion. |
Keywords: | Conditional mean, conditional variance, forecast, electricity prices, GIGARCH process. |
Date: | 2007–11 |
URL: | http://d.repec.org/n?u=RePEc:hal:papers:halshs-00188264_v1&r=ets |
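The RMSE criterion used here is simple to reproduce: given aligned arrays of realized spot prices and forecasts, one row per forecast origin and one column per horizon (up to roughly one month ahead), the horizon-wise RMSE is a one-liner. A minimal sketch with hypothetical array shapes:

    import numpy as np

    def rmse_by_horizon(actuals, forecasts):
        """actuals, forecasts: arrays of shape (n_origins, max_horizon).
        Returns the RMSE at each forecast horizon, e.g. 1 to ~30 days ahead."""
        err = np.asarray(actuals, float) - np.asarray(forecasts, float)
        return np.sqrt(np.mean(err ** 2, axis=0))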
By: | Andersson, Michael K (Monetary Policy Department, Central Bank of Sweden); Karlsson, Sune (Department of Business, Economics, Statistics and Informatics) |
Abstract: | We consider forecast combination and, indirectly, model selection for VAR models when there is uncertainty about which variables to include in the model in addition to the forecast variables. The key difference from traditional Bayesian variable selection is that we also allow for uncertainty regarding which endogenous variables to include in the model. That is, all models include the forecast variables, but may otherwise have differing sets of endogenous variables. This is a difficult problem to tackle with a traditional Bayesian approach. Our solution is to focus on the forecasting performance for the variables of interest: we construct model weights from the predictive likelihood of the forecast variables. The procedure is evaluated in a small simulation study and found to perform competitively in applications to real-world data. |
Keywords: | Bayesian model averaging; Predictive likelihood; GDP forecasts |
JEL: | C11 C15 C32 C52 C53 |
Date: | 2007–11–01 |
URL: | http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0216&r=ets |
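The core of the combination step described above is turning per-model predictive likelihoods for the forecast variables into weights. A minimal sketch of that mechanic (not the authors' full Bayesian machinery), working in log space to avoid overflow; the function names are illustrative:

    import numpy as np

    def predictive_likelihood_weights(log_pred_liks):
        """Turn log predictive likelihoods of the forecast variables
        (one per model) into combination weights."""
        l = np.asarray(log_pred_liks, float)
        w = np.exp(l - l.max())                  # shift in log space first
        return w / w.sum()

    def combine_forecasts(forecasts, log_pred_liks):
        """Weighted average of model forecasts, with predictive-likelihood
        weights; forecasts has shape (n_models, horizon)."""
        w = predictive_likelihood_weights(log_pred_liks)
        return w @ np.asarray(forecasts, float)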
By: | D.S.G. Pollock |
Abstract: | Methods are described that are available for extracting the trend from an economic data sequence and for isolating the cycles that might surround it. The latter often consist of a business cycle of variable duration and a perennial seasonal cycle. There is no evident distinction that can serve unequivocally to determine a point in the frequency spectrum where the trend ends and the business cycle begins. Unless it can be represented by a simple analytic function, such as an exponential growth path, there is bound to be a degree of arbitrariness in the definition of the trend. The business cycle, however defined, is liable to have an upper limit to its frequency range that falls short of the Nyquist frequency, which is the maximum observable frequency in sampled data. Therefore, if it is required to fit a parametric model to the business cycle, this ought to describe a band-limited process. The appropriate method for estimating a band-limited process is described for the case where the band includes the zero frequency. |
Date: | 2007–11 |
URL: | http://d.repec.org/n?u=RePEc:lec:leecon:07/17&r=ets |
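One concrete way to act on the observation that the trend/cycle boundary is a point in the frequency spectrum, chosen with some arbitrariness, is a frequency-domain lowpass filter: zero every Fourier ordinate above a chosen cutoff (well below the Nyquist frequency of 0.5 cycles per observation) and transform back. A minimal sketch of that idea, not Pollock's own estimator for band-limited processes:

    import numpy as np

    def fft_lowpass_trend(y, cutoff):
        """Extract a trend by zeroing all Fourier ordinates above `cutoff`
        (in cycles per observation, < 0.5 = Nyquist). The cutoff is the
        arbitrary trend/cycle boundary the text discusses."""
        y = np.asarray(y, dtype=float)
        f = np.fft.rfft(y)
        freqs = np.fft.rfftfreq(len(y))          # 0 .. 0.5 cycles per sample
        f[freqs > cutoff] = 0.0                  # keep the low-frequency band only
        return np.fft.irfft(f, n=len(y))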
By: | Dominique Guégan (Centre d'Economie de la Sorbonne) |
Abstract: | In this paper we deal with the problem of non-stationarity encountered in many data sets, arising from multiple seasonalities, jumps, volatility, distortion, aggregation, etc. We study the problems these non-stationarities cause for the estimation of the sample autocorrelation function and give several examples of models for which spurious behavior is created in this way, including Markov switching processes, Stopbreak models and SETAR processes. We then suggest new strategies for studying these data sets locally. We first propose a test based on the k-th cumulants and, principally, the construction of a meta-distribution based on copulas for the data set, which makes it possible to take all the non-stationarities into account. This approach suggests that we can carry out risk management for portfolios containing non-stationary assets and also obtain the distribution function of some specific models. |
Keywords: | Non-stationarity, distribution function, copula, long-memory, switching, SETAR, Stopbreak models, cumulants, estimation. |
JEL: | C32 C51 G12 |
Date: | 2007–04 |
URL: | http://d.repec.org/n?u=RePEc:mse:cesdoc:b07053&r=ets |
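The spurious behavior of the sample autocorrelation function under regime switching is easy to reproduce: a two-state Markov switching mean with infrequent switches yields a sample ACF that decays very slowly even though each regime is short-memory. A minimal simulation sketch (parameter values are arbitrary):

    import numpy as np

    def sample_acf(x, max_lag=50):
        """Sample autocorrelation function up to max_lag."""
        x = np.asarray(x, float) - np.mean(x)
        denom = np.dot(x, x)
        return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                         for k in range(max_lag + 1)])

    def markov_switching_mean(n=5000, p_stay=0.99, mus=(-1.0, 1.0), seed=0):
        """Two-state Markov switch in the mean; rare switches make the
        sample ACF decay slowly, mimicking long memory."""
        rng = np.random.default_rng(seed)
        s = np.empty(n, dtype=int)
        s[0] = 0
        for t in range(1, n):
            s[t] = s[t - 1] if rng.random() < p_stay else 1 - s[t - 1]
        return np.take(mus, s) + rng.standard_normal(n)

    print(sample_acf(markov_switching_mean())[:10])   # very slow decay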
By: | Dominique Guégan (Centre d'Economie de la Sorbonne); Florian Ielpo (Centre d'Economie de la Sorbonne) |
Abstract: | In this paper, we introduce a new approach to estimating the subjective distribution of the future short rate from the historical dynamics of futures, based on a model generated by a Normal Inverse Gaussian distribution with dynamic parameters. The model displays time-varying conditional volatility, skewness and kurtosis, and provides a flexible framework for recovering the conditional distribution of future rates. For the estimation, we use the maximum likelihood method. We then apply the model to Fed Funds futures and discuss its performance. |
Keywords: | Subjective distribution, autoregressive conditional density, generalized hyperbolic distribution, Fed Funds futures contracts. |
JEL: | C51 E44 |
Date: | 2007–10 |
URL: | http://d.repec.org/n?u=RePEc:mse:cesdoc:b07056&r=ets |
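For readers unfamiliar with the NIG family: it is the generalized hyperbolic subclass implemented in scipy.stats as norminvgauss, and a static maximum-likelihood fit takes one line. The paper's parameters are time-varying, which the constant-parameter sketch below deliberately ignores; the simulated data are a hypothetical stand-in for changes in the expected short rate:

    import numpy as np
    from scipy.stats import norminvgauss

    # Hypothetical stand-in for daily changes in the expected short rate.
    x = np.random.default_rng(1).standard_t(df=5, size=500) * 0.02

    # Static maximum-likelihood NIG fit (generic scipy MLE); the model in
    # the paper lets these parameters evolve over time.
    a, b, loc, scale = norminvgauss.fit(x)
    print(f"tail heaviness a={a:.3f}, asymmetry b={b:.3f}, "
          f"location={loc:.5f}, scale={scale:.5f}")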
By: | Dominique Guégan (Centre d'Economie de la Sorbonne); Jing Zhang (East China Normal University et Centre d'Economie de la Sorbonne) |
Abstract: | This paper develops a method for pricing bivariate contingent claims under a Generalized Autoregressive Conditional Heteroskedasticity (GARCH) process. In order to provide a general framework able to accommodate the skewness, leptokurtosis and fat tails, as well as the time-varying volatility, often found in financial data, the generalized hyperbolic (GH) distribution is used for the innovations. As the association between the underlying assets may vary over time, a dynamic copula approach is adopted. The proposed method thus plays an important role in pricing bivariate options. The approach is illustrated for the Chinese market with one type of better-of-two-markets claim: a call option on the better performer of the Shanghai Stock Composite Index and the Shenzhen Stock Composite Index. Results show that the option prices obtained from the GARCH-GH model with a time-varying copula differ substantially from the prices implied by the GARCH-Gaussian dynamic copula model. Moreover, the empirical work demonstrates the advantage of the suggested method. |
Keywords: | Call-on-max option, GARCH process, generalized hyperbolic (GH) distribution, normal inverse Gaussian (NIG) distribution, copula, dynamic copula. |
JEL: | C51 G12 |
Date: | 2007–11 |
URL: | http://d.repec.org/n?u=RePEc:mse:cesdoc:b07057&r=ets |
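To make the payoff and the role of the copula concrete, the sketch below prices a call on the better performer of two indices by Monte Carlo, joining two arbitrary simulated marginal return samples with a static Gaussian copula. The paper instead uses GARCH-GH marginals and a time-varying copula; the function name, discount factor and correlation are illustrative:

    import numpy as np
    from scipy.stats import norm

    def better_of_two_call_mc(sim_r1, sim_r2, rho, strike=0.0,
                              discount=1.0, n=100_000, seed=None):
        """Price a call on the better performer of two indices by joining
        two simulated marginal return samples with a static Gaussian copula
        (the paper uses GARCH-GH marginals and a time-varying copula)."""
        rng = np.random.default_rng(seed)
        cov = [[1.0, rho], [rho, 1.0]]
        z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        u = norm.cdf(z)                          # uniforms tied by the copula
        r1 = np.quantile(sim_r1, u[:, 0])        # invert the empirical marginals
        r2 = np.quantile(sim_r2, u[:, 1])
        payoff = np.maximum(np.maximum(r1, r2) - strike, 0.0)
        return discount * payoff.mean()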
By: | Eduardo José Araújo Lima; Benjamin Miranda Tabak |
Abstract: | This paper compares different versions of the multiple variance ratio test based on bootstrap techniques for the construction of empirical distributions. It also analyzes the crucial issue of selecting optimal block sizes when block bootstrap procedures are used, applying the methods developed by Hall et al. (1995) and by Politis and White (2004). Comparing the results of the different methods using Monte Carlo simulations, we conclude that methodologies using block bootstrap methods perform better in the construction of empirical distributions of the variance ratio test. Moreover, the results are highly sensitive to the method employed to test the null hypothesis of a random walk. |
Date: | 2007–11 |
URL: | http://d.repec.org/n?u=RePEc:bcb:wpaper:151&r=ets |
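A minimal sketch of the two ingredients discussed here, the variance ratio statistic and a circular block bootstrap of its null distribution, with a fixed block size where the paper would select it by the Hall et al. (1995) or Politis and White (2004) rules; function names and the p-value construction are illustrative:

    import numpy as np

    def variance_ratio(r, q):
        """Variance of overlapping q-period returns over q times the
        one-period variance; equals 1 in expectation under a random walk."""
        r = np.asarray(r, float)
        rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-sums
        return rq.var() / (q * r.var())

    def block_bootstrap_pvalue(r, q=5, block=20, n_boot=999, seed=None):
        """Circular block bootstrap p-value for VR(q) = 1."""
        rng = np.random.default_rng(seed)
        r = np.asarray(r, float)
        n = len(r)
        vr_obs = variance_ratio(r, q)
        wrapped = np.concatenate([r, r[:block]])        # circular wrap
        stats = np.empty(n_boot)
        for b in range(n_boot):
            starts = rng.integers(0, n, size=int(np.ceil(n / block)))
            boot = np.concatenate([wrapped[s:s + block] for s in starts])[:n]
            stats[b] = variance_ratio(boot, q)
        return np.mean(np.abs(stats - 1.0) >= abs(vr_obs - 1.0))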
By: | Dimitris Politis; Dimitrios Thomakos |
Abstract: | In this paper we contribute several new results on the NoVaS transformation approach for volatility forecasting introduced by Politis (2003a,b, 2007). In particular: (a) we introduce an alternative target distribution (uniform); (b) we present a new method for volatility forecasting using NoVaS; (c) we show that the NoVaS methodology is applicable in situations where (global) stationarity fails, such as cases of local stationarity and/or structural breaks; (d) we show how to apply the NoVaS ideas to returns with asymmetric distributions; and finally (e) we discuss the application of NoVaS to the problem of estimating value at risk (VaR). The NoVaS methodology allows for a flexible approach to inference and has immediate applications in the context of short time series and series that exhibit local behavior (e.g. breaks, regime switching, etc.). We conduct an extensive simulation study of the predictive ability of the NoVaS approach and find that NoVaS forecasts lead to a much `tighter' distribution of the forecasting performance measure for all data generating processes. This is especially relevant in the context of volatility predictions for risk management. We further illustrate the use of NoVaS on a number of real datasets and compare the forecasting performance of NoVaS-based volatility forecasts with realized and range-based volatility measures. |
Keywords: | ARCH, GARCH, local stationarity, structural breaks, VaR, volatility. |
Date: | 2007 |
URL: | http://d.repec.org/n?u=RePEc:uop:wpaper:0005&r=ets |
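The simplest member of the NoVaS family (Politis 2003) studentizes each return by a weighted average of its own and the preceding p squared returns, with the weights and p tuned so that the transformed series matches a target distribution (normal, or the uniform target introduced here). A minimal equal-weight sketch with p fixed rather than optimized:

    import numpy as np

    def simple_novas(x, p=10):
        """Simple NoVaS studentization: divide each return by the root of an
        equal-weight moving average of its own and the previous p squared
        returns (p would be tuned to the target distribution in practice)."""
        x = np.asarray(x, float)
        w = np.full(p + 1, 1.0 / (p + 1))        # equal weights incl. current obs
        x2 = np.concatenate([np.full(p, x[:p].var()), x ** 2])   # pad the start
        denom = np.sqrt(np.convolve(x2, w, mode="valid"))
        return x / denom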
By: | Dimitrios Thomakos; Tao Wang |
Abstract: | We examine the `relative optimality' of sign predictions for financial returns, extending the work of Christoffersen and Diebold (2006) on volatility dynamics and sign predictability. We show that there is a more general decomposition of financial returns than that implied by the sign decomposition and which depends on the choice of the threshold that defines direction. We then show that the choice of the threshold matters and that a threshold of zero (leading to sign predictions) is not necessarily `optimal'. We provide explicit conditions that allow for the choice of a threshold that has maximum responsiveness to changes in volatility dynamics and thus leads to `optimal' probabilistic predictions. Finally, we connect the evolution of volatility to probabilistic predictions and show that the volatility ratio is the crucial variable in this context. Our work strengthens the arguments in favor of accurate volatility measurement and prediction, as volatility dynamics are integrated into the `optimal' threshold. We provide an empirical illustration of our findings using monthly returns and realized volatility for the S&P500 index. |
Date: | 2007 |
URL: | http://d.repec.org/n?u=RePEc:uop:wpaper:0006&r=ets |
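Under conditional normality the probabilistic direction prediction reduces to P(r > c) = 1 - Phi((c - mu)/sigma), so its sensitivity to volatility is phi((c - mu)/sigma) (c - mu)/sigma^2, which vanishes at c = mu and peaks when the threshold sits about one conditional standard deviation from the mean; this is the sense in which c = 0 need not be optimal. A small sketch under the normality assumption (parameter values are arbitrary):

    import numpy as np
    from scipy.stats import norm

    def prob_above_threshold(mu, sigma, c):
        """P(r > c) for a conditionally normal return: the probabilistic
        direction forecast; c = 0 recovers the plain sign forecast."""
        return 1.0 - norm.cdf((c - mu) / sigma)

    # The response of the forecast to volatility, dP/dsigma, is zero at
    # c = mu and largest when |c - mu| is about one standard deviation.
    mu, c = 0.005, 0.01
    sigmas = np.linspace(0.01, 0.10, 10)
    print(prob_above_threshold(mu, sigmas, c))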