NEP: New Economics Papers on Econometric Time Series
By: | Markku Lanne; Helmut Lütkepohl |
Abstract: | In structural vector autoregressive (SVAR) models, identifying restrictions for shocks and impulse responses are usually derived from economic theory or institutional constraints. Sometimes the restrictions are insufficient for identifying all shocks and impulse responses. In this paper it is pointed out that specific distributional assumptions can also help in identifying the structural shocks. In particular, a mixture of normal distributions is considered as a plausible model that can be used in this context. Our model setup makes it possible to test restrictions which are just-identifying in a standard SVAR framework. In particular, we can test for the number of transitory and permanent shocks in a cointegrated SVAR model. The results are illustrated using a data set from King, Plosser, Stock and Watson (1991) and a system of US and European interest rates. |
Keywords: | mixture normal distribution, cointegration, vector autoregressive process, vector error correction model, impulse responses |
JEL: | C32 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_1651&r=ets |
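The identification idea in the Lanne and Lütkepohl abstract above can be illustrated with a stylized mixture-of-normals assumption on the structural shocks. The notation below is only a sketch; the paper's exact parameterization and deterministic terms may differ.

```latex
% Sketch of a mixture-of-normals assumption on the structural shocks
% (illustrative notation; the paper's exact parameterization may differ).
\[
  u_t = B \varepsilon_t, \qquad
  \varepsilon_t \sim
  \begin{cases}
    \mathcal{N}(0, I_K)   & \text{with probability } \gamma,\\
    \mathcal{N}(0, \Psi)  & \text{with probability } 1-\gamma,
  \end{cases}
\]
where $\Psi$ is diagonal with distinct elements. The change in second moments across
mixture components is what delivers identification of $B$ beyond the usual
just-identifying restrictions, so some of those restrictions become testable.
```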
By: | Fulvio Corsi (University of Lugano); Uta Kretschmer (University of Bonn, Germany); Stefan Mittnik (University of Munich); Christian Pigorsch (University of Munich) |
Abstract: | Using the unobservable conditional variance as a measure, latent-variable approaches, such as GARCH and stochastic-volatility models, have traditionally dominated the empirical finance literature. In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. By constructing “observable” or realized volatility series from intraday transaction data, the use of standard time series models, such as ARFIMA models, has become a promising strategy for modeling and predicting (daily) volatility. In this paper, we show that the residuals of the commonly used time-series models for realized volatility exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance when modeling and forecasting realized volatility. In an empirical application for S&P 500 index futures we show that allowing for time-varying volatility of realized volatility leads to a substantial improvement of the model’s fit as well as its predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting. |
Keywords: | Finance, Realized Volatility, Realized Quarticity, GARCH, Normal Inverse Gaussian Distribution, Density Forecasting |
JEL: | C22 C51 C52 C53 |
Date: | 2005–11–28 |
URL: | http://d.repec.org/n?u=RePEc:cfs:cfswop:wp200533&r=ets |
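A minimal sketch of the first step described in the Corsi, Kretschmer, Mittnik and Pigorsch abstract: build a daily realized-volatility series from intraday returns and check a simple autoregression of log RV for volatility clustering in its residuals. The simulated data, the AR(1) specification (the paper uses ARFIMA-type models) and the ACF check are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_intraday = 500, 78                     # e.g. 78 five-minute returns per trading day
intraday = rng.normal(0, 0.001, size=(n_days, n_intraday))   # placeholder intraday returns

rv = (intraday ** 2).sum(axis=1)                 # realized variance: sum of squared intraday returns
log_rv = np.log(rv)

# AR(1) for log RV fitted by least squares
y, x = log_rv[1:], log_rv[:-1]
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Volatility clustering in realized volatility shows up as autocorrelated squared residuals
r2 = resid ** 2
acf1 = np.corrcoef(r2[1:], r2[:-1])[0, 1]
print("AR(1) coefficients:", beta, " ACF(1) of squared residuals:", round(acf1, 3))
```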
By: | Hiroaki Chigira; Taku Yamamoto |
Abstract: | It is widely believed that taking cointegration and integration into consideration is useful in constructing long-term forecasts for cointegrated processes. This paper shows that, contrary to this belief, long-term forecasts are superior when neither cointegration nor integration is imposed. |
Keywords: | Forecasting, Cointegration, Integration |
JEL: | C12 C32 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:hst:hstdps:d05-148&r=ets |
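The comparison at issue in the Chigira and Yamamoto abstract can be illustrated with a small Monte Carlo: long-horizon forecasts from an unrestricted VAR in levels versus a VAR in first differences, which imposes integration. The data-generating process, sample size, horizon and lag length below are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
T, h, reps = 200, 20, 500
mse_levels = mse_diff = 0.0

for rep in range(reps):
    # Cointegrated DGP: common random-walk trend plus stationary idiosyncratic noise
    trend = np.cumsum(rng.normal(size=T + h))
    y = np.column_stack([trend + rng.normal(size=T + h),
                         trend + rng.normal(size=T + h)])
    train, future = y[:T], y[T + h - 1]

    # VAR(1) in levels by OLS: y_t = c + A y_{t-1} + e_t (no restrictions imposed)
    X = np.column_stack([np.ones(T - 1), train[:-1]])
    B = np.linalg.lstsq(X, train[1:], rcond=None)[0]
    f = train[-1]
    for _ in range(h):
        f = B[0] + f @ B[1:]
    mse_levels += np.sum((f - future) ** 2)

    # VAR(1) in first differences (imposes unit roots, ignores cointegration)
    d = np.diff(train, axis=0)
    Xd = np.column_stack([np.ones(len(d) - 1), d[:-1]])
    Bd = np.linalg.lstsq(Xd, d[1:], rcond=None)[0]
    fd, level = d[-1], train[-1].copy()
    for _ in range(h):
        fd = Bd[0] + fd @ Bd[1:]
        level = level + fd
    mse_diff += np.sum((level - future) ** 2)

print("MSE, levels VAR:", mse_levels / reps, " MSE, differenced VAR:", mse_diff / reps)
```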
By: | Jurgen A. Doornik (Nuffield College, Oxford University); Marius Ooms (Free University of Amsterdam) |
Abstract: | We present a new procedure for detecting multiple additive outliers in GARCH(1,1) models at unknown dates. The outlier candidates are the observations with the largest standardized residual. First, a likelihood-ratio based test determines the presence and timing of an outlier. Next, a second test determines the type of additive outlier (volatility or level). The tests are shown to be similar with respect to the GARCH parameters. Their null distribution can be easily approximated from an extreme value distribution, so that computation of p-values does not require simulation. The procedure outperforms alternative methods, especially when it comes to determining the date of the outlier. We apply the method to returns of the Dow Jones index, using monthly, weekly, and daily data. The procedure is extended and applied to GARCH models with Student-t distributed errors. |
Date: | 2005–09–20 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:0524&r=ets |
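A sketch of the candidate-selection step described in the Doornik and Ooms abstract: fit a GARCH(1,1) and take the observation with the largest absolute standardized residual as the outlier candidate. The likelihood-ratio test, the volatility/level classification and the extreme-value critical values are not reproduced here; the `arch` package and the simulated return series are assumptions.

```python
import numpy as np
import pandas as pd
from arch import arch_model

returns = pd.Series(np.random.default_rng(2).standard_t(5, size=1000))  # placeholder returns

res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
std_resid = res.resid / res.conditional_volatility      # standardized residuals

candidate = std_resid.abs().idxmax()                    # date of the outlier candidate
print("outlier candidate at t =", candidate,
      "standardized residual =", round(std_resid[candidate], 2))
```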
By: | Troy Matheson (Reserve Bank of New Zealand) |
Abstract: | Stock and Watson (1999) show that the Phillips curve is a good forecasting tool in the United States. We assess whether this good performance extends to two small open economies with relatively large tradable sectors. Using data for Australia and New Zealand, we find that the open economy Phillips curve performs poorly relative to a univariate autoregressive benchmark. However, its performance improves markedly when sectoral Phillips curves that model the tradable and non-tradable sectors separately are used. Combining forecasts from these sectoral models yields much better results than obtaining forecasts from a Phillips curve estimated on aggregate data. We also find that a diffusion index that combines a large number of indicators of real economic activity provides better forecasts of non-tradable inflation than more conventional measures of real demand, thus supporting Stock and Watson's (1999) findings for the United States. |
JEL: | C53 E31 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nzb:nzbdps:2006/01&r=ets |
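The forecast comparison in the Matheson abstract can be sketched as a pseudo-out-of-sample exercise: an AR(1) benchmark for inflation against a Phillips-curve regression that adds a lagged activity gap. Everything below (the simulated data, lag structure and evaluation window) is an illustrative assumption, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 160
gap = rng.normal(size=T)                       # placeholder activity-gap indicator
infl = np.zeros(T)
for t in range(1, T):
    infl[t] = 0.5 * infl[t - 1] + 0.3 * gap[t - 1] + rng.normal(scale=0.5)

def ols_forecast(y, X, x_new):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return x_new @ beta

err_ar, err_pc = [], []
for t in range(100, T - 1):                    # recursive one-step-ahead forecasts
    y = infl[1:t + 1]
    X_ar = np.column_stack([np.ones(t), infl[:t]])
    X_pc = np.column_stack([np.ones(t), infl[:t], gap[:t]])
    err_ar.append(infl[t + 1] - ols_forecast(y, X_ar, np.r_[1.0, infl[t]]))
    err_pc.append(infl[t + 1] - ols_forecast(y, X_pc, np.r_[1.0, infl[t], gap[t]]))

print("RMSE AR benchmark:", np.sqrt(np.mean(np.square(err_ar))),
      " RMSE Phillips curve:", np.sqrt(np.mean(np.square(err_pc))))
```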
By: | Anthony Garratt; Gary Koop; Shaun P. Vahey (Reserve Bank of New Zealand) |
Abstract: | A recent revision to the preliminary measurement of GDP(E) growth for 2003Q2 attracted considerable press attention, provoked a public enquiry and prompted a number of reforms to UK statistical reporting procedures. In this paper, we compute the probability of "substantial revisions" that are greater (in absolute value) than the controversial 2003 revision. The predictive densities are derived from Bayesian model averaging over a wide set of forecasting models, including linear, structural break and regime-switching models with and without heteroskedasticity. Ignoring the nonlinearities and model uncertainty yields misleading predictive densities and obscures the improvement in the quality of preliminary UK macroeconomic measurements relative to the early 1990s. |
JEL: | C11 C32 C53 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nzb:nzbdps:2006/02&r=ets |
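The model-averaging calculation behind a "probability of a substantial revision" of the kind described in the Garratt, Koop and Vahey abstract can be sketched as follows: mix each model's predictive density (here taken to be Gaussian) with its posterior model weight and integrate the tails beyond a threshold. The weights, means, variances and threshold below are illustrative placeholders, not the paper's estimates.

```python
import numpy as np
from scipy.stats import norm

weights = np.array([0.4, 0.35, 0.25])     # posterior model probabilities (assumed)
means = np.array([0.1, -0.2, 0.05])       # each model's predictive mean for the revision
stds = np.array([0.3, 0.5, 0.8])          # each model's predictive standard deviation
threshold = 0.9                           # size defining a "substantial" revision (assumed)

# P(|revision| > threshold) under the Bayesian-model-averaged predictive density
tail = norm.sf(threshold, loc=means, scale=stds) + norm.cdf(-threshold, loc=means, scale=stds)
prob = float(weights @ tail)
print("P(|revision| >", threshold, ") =", round(prob, 3))
```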
By: | Ralf Brüggemann |
Abstract: | This paper investigates the finite sample properties of confidence intervals for structural vector error correction models (SVECMs) with long-run identifying restrictions on the impulse response functions. The simulation study compares methods that are frequently used in applied SVECM studies including an interval based on the asymptotic distribution of impulse responses, a standard percentile (Efron) bootstrap interval, Hall’s percentile and Hall’s studentized bootstrap interval. Data generating processes are based on empirical SVECM studies and evaluation criteria include the empirical coverage, the average length and the sign implied by the interval. Our Monte Carlo evidence suggests that applied researchers have little to choose between the asymptotic and the Hall bootstrap intervals in SVECMs. In contrast, the Efron bootstrap interval may be less suitable for applied work as it is less informative about the sign of the underlying impulse response function and the computationally demanding studentized Hall interval is often outperformed by the other methods. Differences between methods are illustrated empirically by using a data set from King, Plosser, Stock & Watson (1991). |
Keywords: | Structural vector error correction model, impulse response intervals, cointegration, long-run restrictions, bootstrap |
JEL: | C32 C53 C15 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006-021&r=ets |
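The difference between the Efron percentile and Hall percentile intervals compared in the Brüggemann abstract can be shown for a single impulse-response coefficient. Here `theta_hat` stands for the point estimate and `theta_boot` for the bootstrap replications; both are placeholders rather than output from an estimated SVECM.

```python
import numpy as np

rng = np.random.default_rng(4)
theta_hat = 0.8
theta_boot = theta_hat + rng.normal(scale=0.1, size=2000)   # stand-in for bootstrap IRF draws

alpha = 0.05
q_lo, q_hi = np.quantile(theta_boot, [alpha / 2, 1 - alpha / 2])

efron = (q_lo, q_hi)                                   # standard (Efron) percentile interval
hall = (2 * theta_hat - q_hi, 2 * theta_hat - q_lo)    # Hall's percentile interval
print("Efron:", efron, " Hall:", hall)
```

Hall's interval is built from the bootstrap distribution of theta_boot - theta_hat reflected around the point estimate, which is why it can behave differently from the Efron interval when the bootstrap distribution is biased or skewed.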
By: | Josep Lluís Carrion-i-Silvestre; Andreu Sansó |
Abstract: | In this paper we propose an LM-type statistic to test the null hypothesis of cointegration allowing for the possibility of a structural break in both the deterministic component and the cointegration vector. Our proposal focuses on the presence of endogenous regressors and analyses which estimation method provides better results. The test is designed to be used as a complement to the usual non-cointegration tests in order to obtain stronger evidence of cointegration. We consider the cases of known and unknown break date. In the latter case, we show that minimizing the SSR results in a super-consistent estimator of the break fraction. Finally, the behaviour of the tests is studied through Monte Carlo experiments. |
Keywords: | cointegration, structural breaks, KPSS test. |
JEL: | C12 C22 |
Date: | 2005–11 |
URL: | http://d.repec.org/n?u=RePEc:ubi:deawps:10&r=ets |
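A sketch of the ingredients described in the Carrion-i-Silvestre and Sansó abstract: estimate a cointegrating regression with a level-shift dummy at each candidate break date, keep the date that minimizes the SSR, and form a KPSS/LM-type statistic from the residuals. The exact statistic, deterministic specification and long-run variance estimator in the paper may differ; the code below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 200
x = np.cumsum(rng.normal(size=T))
y = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=T)       # cointegrated pair (placeholder DGP)

best = None
for tb in range(int(0.15 * T), int(0.85 * T)):          # trimmed set of candidate break dates
    dummy = (np.arange(T) >= tb).astype(float)          # level shift after the break
    X = np.column_stack([np.ones(T), dummy, x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    ssr = float(resid @ resid)
    if best is None or ssr < best[0]:
        best = (ssr, tb, resid)

ssr_min, tb_hat, e = best                               # break date chosen by minimizing the SSR
S = np.cumsum(e)                                        # partial sums of the residuals
lrv = e @ e / T                                         # naive long-run variance (no kernel correction)
lm_stat = (S @ S) / (T ** 2 * lrv)
print("estimated break date:", tb_hat, " LM-type statistic:", round(lm_stat, 3))
```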
By: | Manabu Asai; Michael McAleer |
Abstract: | This paper proposes and analyses two types of asymmetric multivariate stochastic volatility (SV) models, namely: (i) the SV with leverage (SV-L) model, which is based on the negative correlation between the innovations in the returns and volatility; and (ii) the SV with leverage and size effect (SV-LSE) model, which is based on the signs and magnitude of the returns. The paper derives the state space form for the logarithm of the squared returns which follow the multivariate SV-L model, and develops estimation methods for the multivariate SV-L and SV-LSE models based on the Monte Carlo likelihood (MCL) approach. The empirical results show that the multivariate SV-LSE model fits the bivariate and trivariate returns of the S&P 500, Nikkei 225, and Hang Seng indexes with respect to AIC and BIC more accurately than does the multivariate SV-L model. Moreover, the empirical results suggest that the univariate models should be rejected in favour of their bivariate and trivariate counterparts. |
Keywords: | Multivariate stochastic volatility, asymmetric leverage, dynamic leverage, size effect, numerical likelihood, Bayesian Markov chain Monte Carlo, importance sampling. |
Date: | 2005–11 |
URL: | http://d.repec.org/n?u=RePEc:ubi:deawps:12&r=ets |
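For readers unfamiliar with the leverage specification referred to in the Asai and McAleer abstract, a univariate SV-with-leverage model can be sketched as follows; the notation and parameterization are illustrative, and the paper works with the multivariate extension.

```latex
% Sketch of a univariate SV-with-leverage (SV-L) specification; illustrative notation.
\begin{align*}
  y_t &= \exp(h_t/2)\,\varepsilon_t, \\
  h_{t+1} &= \mu + \phi\,(h_t - \mu) + \eta_t, \\
  \begin{pmatrix}\varepsilon_t\\ \eta_t\end{pmatrix} &\sim
  \mathcal{N}\!\left( \begin{pmatrix}0\\0\end{pmatrix},
  \begin{pmatrix} 1 & \rho\sigma_\eta \\ \rho\sigma_\eta & \sigma_\eta^2 \end{pmatrix}\right),
  \qquad \rho < 0,
\end{align*}
```

The negative correlation between the return and volatility innovations captures the leverage effect; the size effect in the SV-LSE variant additionally lets volatility respond to the magnitude of returns.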
By: | Josep Lluís Carrion-i-Silvestre; Andreu Sansó |
Abstract: | In this paper we generalize the KPSS-type test to allow for two structural breaks. Seven models have been defined depending on the way that the structural breaks affect the time series behaviour. The paper derives the limit distribution of the test both under the null and the alternative hypotheses and conducts a set of simulation experiments to analyse the performance in finite samples. |
Keywords: | Stationary tests, structural breaks, unit root. |
JEL: | C12 C15 C22 |
Date: | 2005–07 |
URL: | http://d.repec.org/n?u=RePEc:ubi:deawps:13&r=ets |
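The generic form of the KPSS-type statistic being generalized in the Carrion-i-Silvestre and Sansó abstract can be written as below; the specific deterministic terms (level and/or slope shifts at the two breaks) are what distinguish the seven models, and the notation here is only illustrative.

```latex
% Generic KPSS-type statistic with break dummies in the deterministic component (illustrative).
\[
  \hat\eta = \frac{1}{T^2\,\hat\omega^2} \sum_{t=1}^{T} S_t^2,
  \qquad S_t = \sum_{s=1}^{t} \hat e_s,
\]
```

Here the residuals are obtained from regressing the series on a constant, a trend and dummies for the two breaks, and the denominator uses a long-run variance estimator of those residuals.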
By: | Suhejla Hoti; Esfandiar Maasoumi; Michael McAleer; Daniel Slottje |
Abstract: | As U.S. Treasury securities carry the full faith and credit of the U.S. government, they are free of default risk. Thus, their yields are risk-free rates of return, which allows the most recently issued U.S. Treasury securities to be used as a benchmark to price other fixed-income instruments. This paper analyzes the time series properties of interest rates on U.S. Treasury benchmarks and related debt instruments by modelling the conditional mean and conditional volatility for weekly yields on 12 Treasury Bills and other debt instruments for the period 8 January 1982 to 20 August 2004. The conditional correlations between all pairs of debt instruments are also calculated. These estimates are of interest as they enable an assessment of the implications of modelling conditional volatility on forecasting performance. The estimated conditional correlation coefficients indicate whether there is specialization, diversification or independence in the debt instrument shocks. Constant conditional correlation estimates of the standardized shocks indicate that the shocks to the first differences in the debt instrument yields are generally high and always positively correlated. In general, the primary purpose in holding a portfolio of Treasury Bills and other debt instruments should be to specialize in instruments that provide the largest returns. Tests for stochastic dominance are consistent with these findings, but reveal somewhat surprising rankings between debt instruments, with implications for portfolio composition. 30-year Treasuries, Aaa bonds and mortgages tend to dominate other instruments, at least to the second order. |
Keywords: | Treasury bills, debt instruments, risk, conditional volatility, conditional correlation, asymmetry, specialization, diversification, independence, forecasting. |
Date: | 2005–10 |
URL: | http://d.repec.org/n?u=RePEc:ubi:deawps:14&r=ets |
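A constant-conditional-correlation calculation of the kind referred to in the abstract above can be sketched by fitting univariate GARCH(1,1) models to the first differences of two yield series and correlating the standardized residuals. The simulated series, their names and the use of the `arch` package are assumptions; the paper covers 12 instruments.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(6)
yields = pd.DataFrame({"tbill_3m": np.cumsum(rng.normal(scale=0.05, size=1000)),
                       "tbond_30y": np.cumsum(rng.normal(scale=0.05, size=1000))})
d_yields = yields.diff().dropna() * 100         # work in basis points to keep a sensible scale

std_resid = {}
for col in d_yields:
    res = arch_model(d_yields[col], vol="GARCH", p=1, q=1).fit(disp="off")
    std_resid[col] = res.resid / res.conditional_volatility

ccc = np.corrcoef(std_resid["tbill_3m"], std_resid["tbond_30y"])[0, 1]
print("constant conditional correlation estimate:", round(ccc, 3))
```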
By: | Niels Haldrup; Antonio Montañés; Andreu Sansó |
Abstract: | The detection of additive outliers in integrated variables has attracted some attention recently; see, e.g., Shin et al. (1996), Vogelsang (1999) and Perron and Rodriguez (2003). This paper serves several purposes. We prove the inconsistency of the test proposed by Vogelsang, extend the tests proposed by Shin et al. and Perron and Rodriguez to the seasonal case, and consider alternative ways of computing their tests. We also study the effects of periodically varying variances on these tests and demonstrate that they can be seriously size-distorted. Subsequently, some new tests that allow for periodic heteroskedasticity are proposed. |
Keywords: | Additive outliers, outlier detection, integrated processes, periodic heteroscedasticity, seasonality. |
JEL: | C12 C2 C22 |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:ubi:deawps:15&r=ets |
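As a heuristic illustration of the problem addressed in the Haldrup, Montañés and Sansó abstract (and not a reproduction of any of the cited tests): in an I(1) series an additive outlier produces a large jump in the first difference followed by an offsetting jump of the opposite sign, so a simple screen looks for that pattern relative to a robust scale estimate. The detection rule, threshold-free scoring and simulated data below are illustrative placeholders only.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 300
y = np.cumsum(rng.normal(size=T))               # random-walk (I(1)) series
y[150] += 8.0                                   # plant an additive outlier at t = 150

d = np.diff(y)
scale = np.median(np.abs(d - np.median(d))) * 1.4826   # robust (MAD-based) scale estimate
score = -d[:-1] * d[1:] / scale ** 2            # large and positive when a jump is reversed
candidate = int(np.argmax(score)) + 1           # index of the suspected outlier in y
print("suspected additive outlier at t =", candidate, " score =", round(score.max(), 2))
```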