New Economics Papers on Econometric Time Series
By: Stefano Grassi (Aarhus University and CREATES); Tommaso Proietti (Università di Roma “Tor Vergata”)
Abstract: We extend a recent methodology, Bayesian stochastic model specification search (SMSS), for the selection of the unobserved components (level, slope, seasonal cycles, trading day effects) that are stochastically evolving over time. SMSS hinges on two basic ingredients: the non-centered representation of the unobserved components and the reparameterization of the hyperparameters representing standard deviations as regression parameters with unrestricted support. The choice of the prior and the conditional independence structure of the model enable the definition of a very efficient MCMC estimation strategy based on Gibbs sampling. We illustrate that the methodology can be applied quite successfully to discriminate between stochastic and deterministic trends, and between fixed and evolving seasonal and trading day effects.
Keywords: Seasonality, Structural time series models, Variable selection, Bayesian estimation.
JEL: C22 C11 C01
Date: 2011–02–21
URL: http://d.repec.org/n?u=RePEc:aah:create:2011-08&r=ets
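A minimal Python sketch of the non-centered trick the abstract above describes, specialized (my assumption) to a local level model with illustrative parameter values; it is not the authors' code:

```python
# Minimal sketch (not the authors' code) of the non-centered local level
# model used in SMSS: y_t = mu0 + sigma_eta * m_t + eps_t, where
# m_t = m_{t-1} + eta_t with eta_t ~ N(0,1) and m_0 = 0. Conditional on the
# standardized path m, sigma_eta is an ordinary regression coefficient with
# unrestricted support, so a value of zero corresponds to a fixed level.
import numpy as np

rng = np.random.default_rng(0)
T, mu0, sigma_eta, sigma_eps = 200, 1.0, 0.3, 1.0

m = np.cumsum(rng.standard_normal(T))          # standardized level path
y = mu0 + sigma_eta * m + sigma_eps * rng.standard_normal(T)

# Conditional on m, regress y on [1, m]: the slope estimates sigma_eta and
# may take either sign; in SMSS a spike-and-slab prior on this coefficient
# decides whether the level is stochastic (slope != 0) or deterministic.
X = np.column_stack([np.ones(T), m])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("estimated (mu0, sigma_eta):", beta)
```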
By: Jennifer L. Castle; Nicholas W.P. Fawcett; David F. Hendry
Abstract: Success in accurately forecasting breaks requires that they are predictable from relevant information available at the forecast origin using an appropriate model form, which can be selected and estimated before the break. To clarify the roles of these six necessary conditions, we distinguish between the information set for ‘normal forces’ and that for ‘break drivers’, then outline sources of potential information. Relevant non-linear, dynamic models facing multiple breaks can have more candidate variables than observations, so we discuss automatic model selection. As failures to accurately forecast breaks remain likely, we augment our strategy by modelling breaks during their progress, and consider robust forecasting devices.
Keywords: Economic forecasting, structural breaks, information sets, non-linearity
JEL: C1 C53
Date: 2011
URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:535&r=ets
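A hedged Python illustration of why the robust devices mentioned above help: after an unmodelled location shift, a no-change (random-walk) forecast recovers within one period, while an in-sample-mean forecast keeps missing. The break size and date are invented for the example:

```python
# Illustrative sketch (my assumptions, not the authors' code): the robust
# no-change device y_hat_{t+1} = y_t differences away an unmodelled location
# shift, whereas a mean-based forecast fails systematically after the break.
import numpy as np

rng = np.random.default_rng(1)
T, break_at, shift = 100, 60, 5.0
y = rng.standard_normal(T)
y[break_at:] += shift                              # location shift at t = 60

post = np.arange(break_at + 1, T)
mean_fc = np.array([y[:t].mean() for t in post])   # in-sample mean forecast
robust_fc = y[post - 1]                            # random-walk device

print("post-break MSFE, mean model :", np.mean((y[post] - mean_fc) ** 2))
print("post-break MSFE, robust     :", np.mean((y[post] - robust_fc) ** 2))
```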
By: Diaa Noureldin; Neil Shephard; Kevin Sheppard
Abstract: This paper introduces a new class of multivariate volatility models that utilizes high-frequency data. We discuss the models’ dynamics and highlight their differences from multivariate GARCH models. We also discuss their covariance targeting specification and provide closed-form formulas for multi-step forecasts. Estimation and inference strategies are outlined. Empirical results suggest that the HEAVY model outperforms the multivariate GARCH model out-of-sample, with the gains being particularly significant at short forecast horizons. Forecast gains are obtained for both variances and correlations.
Keywords: HEAVY model, GARCH, multivariate volatility, realized covariance, covariance targeting, multi-step forecasting, Wishart distribution
JEL: C32 C52
Date: 2011
URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:533&r=ets
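A univariate Python sketch of a HEAVY-type recursion (the paper's models are multivariate, driven by realized covariance matrices; the parameter values below are illustrative, not estimates):

```python
# Minimal univariate sketch of a HEAVY-type recursion: the conditional
# variance h_t is driven by the lagged realized measure RM rather than by
# squared returns as in GARCH. Parameters and the RM series are toy choices.
import numpy as np

rng = np.random.default_rng(2)
T = 500
RM = 0.5 + 0.1 * rng.standard_normal(T) ** 2    # toy realized measure series

omega, alpha, beta = 0.05, 0.4, 0.55            # illustrative HEAVY parameters
h = np.empty(T)
h[0] = RM.mean()
for t in range(T - 1):
    h[t + 1] = omega + alpha * RM[t] + beta * h[t]

# Because the system is linear in (RM_t, h_t), multi-step forecasts follow in
# closed form once a companion recursion for RM is specified, as in the
# paper's covariance-targeting setup.
print("final conditional variance:", h[-1])
```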
By: Jean-Marie Dufour; Tarek Jouini
Abstract: We study two linear estimators for stationary invertible VARMA models in echelon form – to achieve identification (model parameter unicity) – with known Kronecker indices. Such linear estimators are much simpler to compute than the Gaussian maximum-likelihood estimators often proposed for such models, which require highly nonlinear optimization. The first estimator is an improved two-step estimator which can be interpreted as a generalized-least-squares extension of the two-step least-squares estimator studied in Dufour and Jouini (2005). The setup considered is also more general and allows for the presence of drift parameters. The second estimator is a new, relatively simple three-step linear estimator which is asymptotically equivalent to ML, hence asymptotically efficient, when the innovations of the process are Gaussian. The latter is based on using modified approximate residuals which better take into account the truncation error associated with the approximate long autoregression used in the first step of the method. We show that both estimators are consistent and asymptotically normal under the assumption that the innovations are a strong white noise, possibly non-Gaussian. Explicit formulae for the asymptotic covariance matrices are provided. The proposed estimators are computationally simpler than earlier “efficient” estimators, and the distributional theory we supply does not rely on a Gaussian assumption, in contrast with Gaussian maximum likelihood or the estimators considered by Hannan and Kavalieris (1984b) and Reinsel, Basu and Yap (1992). We present simulation evidence which indicates that the proposed three-step estimator typically performs better in finite samples than the alternative multi-step linear estimators suggested by Hannan and Kavalieris (1984b), Reinsel et al. (1992), and Poskitt and Salau (1995).
Keywords: echelon form, linear estimation, generalized least squares (GLS), two-step linear estimation, three-step linear estimation, asymptotic efficiency, maximum likelihood (ML), stationary process, invertible process, Kronecker indices, simulation
JEL: C13 C32
Date: 2011–02–01
URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-25&r=ets
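A univariate ARMA(1,1) analogue, in Python, of the two-step linear idea: a long autoregression supplies proxy innovations, which then enter a second-step regression. The paper's echelon-form VARMA versions with GLS refinements are substantially more involved; this is only the skeleton:

```python
# Hedged univariate analogue of the two-step linear estimator: step 1 fits a
# long autoregression to obtain proxy innovations; step 2 regresses y_t on
# y_{t-1} and the lagged proxy innovation to estimate (phi, theta).
import numpy as np

rng = np.random.default_rng(3)
T, phi, theta = 1000, 0.6, 0.4
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]

# Step 1: long AR(p), with p growing with T, gives approximate innovations.
p = int(np.floor(T ** (1 / 3)))
Xar = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
u = y[p:] - Xar @ np.linalg.lstsq(Xar, y[p:], rcond=None)[0]

# Step 2: regress y_t on (y_{t-1}, u_{t-1}); both regressors enter linearly.
Y, X = y[p + 1:], np.column_stack([y[p:-1], u[:-1]])
print("two-step estimate of (phi, theta):", np.linalg.lstsq(X, Y, rcond=None)[0])
```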
By: Chafik Bouhaddioui; Jean-Marie Dufour
Abstract: We propose a semiparametric approach for testing orthogonality and causality between two infinite-order cointegrated vector autoregressive IVAR(1) series. The procedures considered can be viewed as extensions of classical methods proposed by Haugh (1976, JASA) and Hong (1996, Biometrika) for testing independence between stationary univariate time series. The tests are based on the residuals of long autoregressions, hence allowing for computational simplicity, weak assumptions on the form of the underlying process, and a direct interpretation of the results in terms of innovations (or reduced-form shocks). The test statistics are standardized versions of the sum of weighted squares of residual cross-correlation matrices. The weights depend on a kernel function and a truncation parameter. The asymptotic distributions of the test statistics under the null hypothesis are derived, and consistency is established against fixed alternatives of serial cross-correlation of unknown form. Apart from standardization factors, the multivariate portmanteau statistic, which takes into account a fixed number of lags, can be viewed as a special case of our procedure based on the truncated uniform kernel. A simulation study is presented which indicates that the proposed tests have good size and power properties in finite samples. The proposed procedures are applied to study interactions between quarterly Canadian and American variables associated with monetary policy (money, interest rates, prices, aggregate output). The empirical results clearly allow us to reject the absence of correlation between the shocks in the two countries, and indicate a unidirectional Granger causality running from the U.S. variables to the Canadian ones.
Keywords: Infinite-order cointegrated vector autoregressive process; independence; causality; residual cross-correlation; consistency; asymptotic power
Date: 2011–02–01
URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-23&r=ets
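An illustrative univariate Python sketch of the Haugh/Hong idea behind these tests, using the truncated uniform kernel; the paper's statistics generalize this to matrices of residual cross-correlations between long autoregressions fitted to IVAR series:

```python
# Sketch of a Hong (1996)-style statistic with the truncated uniform kernel:
# fit long autoregressions to each series, then standardize the sum of
# squared residual cross-correlations at lags 1..M. Under independence the
# statistic is approximately N(0,1); values, orders, and M are toy choices.
import numpy as np

def long_ar_resid(y, p):
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    return y[p:] - X @ np.linalg.lstsq(X, y[p:], rcond=None)[0]

rng = np.random.default_rng(4)
T, p, M = 500, 8, 20                       # sample size, AR order, truncation
u = long_ar_resid(rng.standard_normal(T), p)
v = long_ar_resid(rng.standard_normal(T), p)
n = len(u)

# squared residual cross-correlations of u_t with v_{t-k}, k = 1..M
r2 = np.array([np.corrcoef(u[k:], v[:n - k])[0, 1] ** 2 for k in range(1, M + 1)])
stat = (n * r2.sum() - M) / np.sqrt(2 * M)
print("standardized cross-correlation statistic:", stat)
```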
By: Elise Coudin; Jean-Marie Dufour
Abstract: We propose estimators for the parameters of a linear median regression without any assumption on the shape of the error distribution, including no condition on the existence of moments, allowing for heterogeneity (or heteroskedasticity) of unknown form, noncontinuous distributions, and very general serial dependence (linear or nonlinear), including GARCH-type and stochastic volatility of unknown order. The estimators follow from a reverse inference approach, based on the class of distribution-free sign tests proposed in Coudin and Dufour (2009, Econometrics J.) under a mediangale assumption. As a result, the estimators inherit strong robustness properties from their generating tests. Since the proposed estimators are based on maximizing a test statistic (or a p-value function) over different null hypotheses, they can be interpreted as Hodges-Lehmann-type (HL) estimators. It is easy to adapt the sign-based estimators to account for linear serial dependence. Both finite-sample and large-sample properties are established under weak regularity conditions. The proposed estimators are median unbiased (under symmetry and estimator unicity) and satisfy natural equivariance properties. Consistency and asymptotic normality are established without any condition on the existence of error moments, allowing for heterogeneity (or heteroskedasticity) of unknown form, noncontinuous distributions, and very general serial dependence (linear or nonlinear). These conditions are considerably weaker than those used to show corresponding results for LAD estimators. In a Monte Carlo study of bias and mean square error, we find that sign-based estimators perform better than LAD-type estimators, especially in heteroskedastic settings. The proposed procedures are applied to a trend model of the Standard and Poor’s composite price index, where the disturbances are affected by both heavy tails (non-normality) and heteroskedasticity.
Keywords: sign test; median regression; Hodges-Lehmann estimator; p-value; least absolute deviations; quantile regression; simultaneous inference; Monte Carlo tests; projection methods; nonnormality; heteroskedasticity; serial dependence; GARCH; stochastic volatility
JEL: C13 C12 C14 C15
Date: 2011–02–01
URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2011s-24&r=ets
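A minimal Python sketch of the Hodges-Lehmann idea behind the sign-based estimator, reduced (my simplification) to a scalar regressor and a grid search over the sign statistic:

```python
# Toy version of the sign-based HL estimator for y_t = beta * x_t + u_t with
# med(u_t | x_t) = 0: choose the beta that makes the sign statistic
# S(b) = sum_t x_t * sign(y_t - b * x_t) closest to zero. The grid search is
# a simplification of the authors' optimization over test p-values.
import numpy as np

rng = np.random.default_rng(5)
T, beta0 = 200, 2.0
x = rng.standard_normal(T)
u = rng.standard_normal(T) * (1 + np.abs(x))       # heteroskedastic errors
y = beta0 * x + u

grid = np.linspace(0.0, 4.0, 2001)
S = np.array([np.abs(np.sum(x * np.sign(y - b * x))) for b in grid])
print("sign-based estimate of beta:", grid[np.argmin(S)])
```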
By: Mario Forni; Luca Gambetti
Abstract: We derive necessary and sufficient conditions under which a set of variables is informationally sufficient, i.e. it contains enough information to estimate the structural shocks with a VAR model. Based on such conditions, we suggest a procedure to test for informational sufficiency. Moreover, we show how to amend the VAR if informational sufficiency is rejected. We apply our procedure to a VAR including TFP, unemployment and per-capita hours worked. We find that the three variables are not informationally sufficient. When adding missing information, the effects of technology shocks change dramatically.
Keywords: Structural VAR, non-fundamentalness, information, FAVAR models, technology shocks.
JEL: C32 E32 E62
Date: 2011–02–22
URL: http://d.repec.org/n?u=RePEc:aub:autbar:863.11&r=ets
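A toy Python version of the testing idea: if a lagged external factor (in the paper, information extracted from a larger panel) still predicts the residuals of the small model, the variable set is not informationally sufficient. The data-generating values here are invented:

```python
# Hedged sketch, not the authors' procedure: check whether a lagged candidate
# factor f helps predict the residuals of a small autoregressive model; a
# clearly nonzero coefficient signals missing information.
import numpy as np

rng = np.random.default_rng(6)
T = 300
f = rng.standard_normal(T)                       # candidate external factor
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * f[t - 1] + rng.standard_normal()

# AR(1) residuals of y, then a regression of those residuals on f_{t-1}
X = np.column_stack([np.ones(T - 1), y[:-1]])
resid = y[1:] - X @ np.linalg.lstsq(X, y[1:], rcond=None)[0]
g = np.column_stack([np.ones(T - 2), f[1:-1]])
coef = np.linalg.lstsq(g, resid[1:], rcond=None)[0]
print("lagged-factor coefficient (nonzero => insufficient info):", coef[1])
```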
By: Knoth, Sven (Institute of Mathematics and Statistics, Helmut Schmidt University Hamburg); Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
Abstract: Different change point models for AR(1) processes are reviewed. For some models, the change is in the distribution conditional on earlier observations; for others, the change is in the unconditional distribution. Some models include an observation before the first possible change time, while others do not. Earlier and new CUSUM-type methods are given and their minimax optimality is examined. For the conditional model with an observation before the possible change, there are sharp optimality results in the literature. The unconditional model with a possible change at (or before) the first observation is of interest for applications. We examine this case and derive new variants of four earlier suggestions. Numerical methods and Monte Carlo simulations demonstrate that the new variants dominate the original ones. However, none of the methods is uniformly minimax optimal.
Keywords: Autoregressive; Change point; Monitoring; Online detection
JEL: C10
Date: 2011–02–10
URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_004&r=ets
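A minimal Python sketch of a one-sided CUSUM recursion applied to AR(1) one-step-ahead residuals; the reference value, threshold, and change size are illustrative, and the paper's variants differ precisely in how the first observations and the conditional/unconditional distinction are handled:

```python
# Toy CUSUM monitoring of a mean shift in an AR(1) process: we track the
# one-step-ahead residuals e_t = y_t - phi * y_{t-1} under the no-change
# model and signal when the one-sided cumulative sum exceeds threshold h.
import numpy as np

rng = np.random.default_rng(7)
T, phi, change_at, delta = 200, 0.5, 100, 1.5
mu = np.where(np.arange(T) >= change_at, delta, 0.0)  # shifted mean path
y = np.zeros(T)
for t in range(1, T):
    y[t] = mu[t] + phi * (y[t - 1] - mu[t - 1]) + rng.standard_normal()

k, h, c = 0.5, 4.0, 0.0                  # reference value and alarm threshold
for t in range(1, T):
    e = y[t] - phi * y[t - 1]            # residual under the no-change model
    c = max(0.0, c + e - k)              # one-sided CUSUM recursion
    if c > h:
        print("alarm at t =", t, "(true change at", change_at, ")")
        break
```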
By: Malik, S.; Pitt, M. K.
Abstract: In this paper we provide a unified methodology for conducting likelihood-based inference on the unknown parameters of a general class of discrete-time stochastic volatility (SV) models, characterized by both a leverage effect and jumps in returns. Given the nonlinear/non-Gaussian state-space form, the likelihood for the parameters is approximated using output generated by the particle filter. Methods are employed to ensure that the approximating likelihood is continuous as a function of the unknown parameters, thus enabling the use of standard Newton-Raphson-type maximization algorithms. Our approach is robust and efficient relative to alternative Markov chain Monte Carlo schemes employed in such contexts. In addition, it provides a feasible basis for undertaking the nontrivial task of model comparison. Furthermore, we introduce a new volatility model, SV-GARCH, which attempts to bridge the gap between GARCH and stochastic volatility specifications. Nesting the standard GARCH model as a special case, it has the attractive feature of inheriting the unconditional properties of the standard GARCH model while being conditionally heavier-tailed, and thus more robust to outliers. We demonstrate how this model can be estimated using the described methodology. The technique is applied to daily returns data for the S&P 500 stock price index over various spans. In assessing the relative performance of SV with leverage and jumps against nested specifications, we find strong evidence in favour of including a leverage effect and jumps when modelling stochastic volatility. Additionally, we find very encouraging results for SV-GARCH in terms of predictive ability, which is comparable to that of the other models considered.
Keywords: Stochastic volatility; Particle filter; Simulation; State space; Leverage effect; Jumps.
JEL: C01 C11 C14 C15 C32 E32
Date: 2011
URL: http://d.repec.org/n?u=RePEc:bfr:banfra:318&r=ets
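A hedged Python sketch of the particle-filter likelihood approximation for a basic SV model without leverage or jumps, using plain multinomial resampling; the paper additionally smooths the resampling step so the approximate likelihood is continuous in the parameters:

```python
# Bootstrap particle filter log-likelihood for the basic SV model
# h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t, y_t | h_t ~ N(0, exp(h_t)).
# This omits leverage, jumps, and the smoothed resampling of the paper;
# fixing the seed gives common random numbers across parameter values.
import numpy as np

def pf_loglik(y, mu, phi, sigma, N=2000, seed=0):
    rng = np.random.default_rng(seed)
    h = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal(N)  # stationary draw
    ll = 0.0
    for t in range(len(y)):
        # Gaussian measurement density of y_t given each particle's h
        w = np.exp(-0.5 * (np.log(2 * np.pi) + h + y[t] ** 2 * np.exp(-h)))
        ll += np.log(w.mean())
        idx = rng.choice(N, N, p=w / w.sum())      # multinomial resampling
        h = mu + phi * (h[idx] - mu) + sigma * rng.standard_normal(N)
    return ll

rng = np.random.default_rng(8)
T, mu, phi, sigma = 250, -1.0, 0.95, 0.2
h = np.full(T, mu)
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)
print("particle-filter log-likelihood:", pf_loglik(y, mu, phi, sigma))
```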