
on Econometric Time Series 
By:  Stefano Grassi (Aarhus University and CREATES); Tommaso Proietti (Università di Roma “Tor Vergata”) 
Abstract:  We extend a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. In particular, we focus on autoregressive models with possibly time-varying intercept and slope and decide whether their parameters are fixed or evolutive. Stochastic model specification is carried out to discriminate between two alternative hypotheses concerning the generation of trends: the trend-stationary hypothesis, on the one hand, for which the trend is a deterministic function of time and the short-run dynamics are represented by a stationary autoregressive process; and the difference-stationary hypothesis, on the other, according to which the trend results from the cumulation of the effects of random disturbances. We illustrate the methodology for a set of U.S. macroeconomic time series, which includes the traditional Nelson and Plosser dataset. The broad conclusion is that most series are better represented by autoregressive models with time-invariant intercept and slope and coefficients that are close to the boundary of the stationarity region. The posterior distribution of the autoregressive parameters, estimated by a suitable Gibbs sampling scheme, provides useful insight into the quasi-integrated nature of the specifications selected. 
Keywords:  Bayesian model selection, stationarity, unit roots, stochastic trends, variable selection. 
JEL:  C22 C52 
Date:  2011–05–05 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201116&r=ets 
By:  Tom Engsted (Aarhus University and CREATES); Thomas Q. Pedersen (Aarhus University and CREATES) 
Abstract:  We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple and easy-to-use analytical bias formula compares very favorably to the more standard but also more computer-intensive bootstrap bias-correction method, both in terms of bias and mean squared error. Both methods yield a notable improvement over both OLS and a recently proposed WLS estimator. We also investigate the properties of an iterative scheme when applying the analytical bias formula, and we find that this can imply slightly better finite-sample properties for very small sample sizes, while for larger sample sizes there is no gain from iterating. Finally, we pay special attention to the risk of pushing an otherwise stationary model into the nonstationary region of the parameter space during the process of correcting for bias. 
Keywords:  Bias reduction, VAR model, analytical bias formula, bootstrap, iteration, Yule-Walker, nonstationary system, skewed and fat-tailed data. 
JEL:  C13 C32 
Date:  2011–05–13 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201118&r=ets 
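The analytical correction discussed in the abstract can be made concrete in the scalar AR(1) case, where a closed-form bias approximation is available. The sketch below is only an illustration of the general idea, not the paper's VAR formulas: the Kendall-type approximation bias ≈ −(1+3ρ)/T (valid for the OLS estimator with intercept) and the residual-bootstrap alternative are standard textbook devices, and all function names are ours.

```python
import numpy as np

def simulate_ar1(rho, T, rng):
    """Simulate a zero-mean AR(1) with standard normal innovations."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

def ols_ar1(y):
    """OLS slope (with intercept) of y[t] on y[t-1]."""
    x, z = y[:-1], y[1:]
    xc, zc = x - x.mean(), z - z.mean()
    return (xc @ zc) / (xc @ xc)

def analytic_bc(rho_hat, T):
    """Analytical correction based on E[rho_hat] - rho = -(1+3rho)/T + O(1/T^2)."""
    return rho_hat + (1 + 3 * rho_hat) / T

def bootstrap_bc(y, B, rng):
    """Residual-bootstrap correction: 2*rho_hat minus the bootstrap mean."""
    T, rho_hat = len(y), ols_ar1(y)
    resid = y[1:] - rho_hat * y[:-1]
    resid = resid - resid.mean()
    reps = []
    for _ in range(B):
        e = rng.choice(resid, size=T, replace=True)
        yb = np.zeros(T)
        for t in range(1, T):
            yb[t] = rho_hat * yb[t - 1] + e[t]
        reps.append(ols_ar1(yb))
    return 2 * rho_hat - np.mean(reps)

rng = np.random.default_rng(0)
T, rho = 100, 0.9
est = np.array([ols_ar1(simulate_ar1(rho, T, rng)) for _ in range(500)])
```

Across the 500 replications, the raw OLS estimates are biased downwards and the analytical correction moves their mean back towards the true value, mirroring the comparison the abstract reports for VARs.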
By:  Stefano Grassi (Aarhus University and CREATES); Paolo Santucci de Magistris (Aarhus University and CREATES) 
Abstract:  The finite sample properties of state space methods applied to long memory time series are analyzed through Monte Carlo simulations. The state space setup allows us to introduce a novel modeling approach in the long memory framework, which directly tackles measurement errors and random level shifts. Missing values and several alternative sources of misspecification are also considered. It emerges that the state space methodology provides a valuable alternative for the estimation of long memory models, under different data generating processes, which are common in financial and economic series. Two empirical applications highlight the practical usefulness of the proposed state space methods. 
Keywords:  ARFIMA models, Kalman Filter, Missing Observations, Measurement Error, Level Shifts. 
JEL:  C10 C22 C80 
Date:  2011–05–02 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201114&r=ets 
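The way state space methods handle missing values, as mentioned in the abstract, can be illustrated with the simplest possible model. The sketch below uses a local level model rather than the ARFIMA state space form of the paper: when an observation is missing, the Kalman update step is skipped and only the prediction step is carried out. All names and parameter values are illustrative.

```python
import numpy as np

def kalman_local_level(y, q, r):
    """Kalman filter for the local level model
       y[t] = mu[t] + eps[t],  mu[t+1] = mu[t] + eta[t],
       with Var(eps) = r and Var(eta) = q.  NaN entries in y are
       treated as missing: the update step is skipped and only the
       prediction is propagated."""
    n = len(y)
    a, p = 0.0, 1e7                # diffuse-ish initial state
    filt = np.empty(n)
    for t in range(n):
        if not np.isnan(y[t]):
            f = p + r              # prediction-error variance
            k = p / f              # Kalman gain
            a = a + k * (y[t] - a) # update the state estimate
            p = p * (1 - k)        # update its variance
        filt[t] = a
        p = p + q                  # predict next period
    return filt

rng = np.random.default_rng(1)
mu = np.cumsum(0.1 * rng.standard_normal(200))   # true level
y = mu + 0.5 * rng.standard_normal(200)
y[50:60] = np.nan                                # a block of missing data
f = kalman_local_level(y, q=0.01, r=0.25)
```

During the missing block the filtered level stays at its last updated value, which is exactly the behaviour exploited when long memory models are cast in state space form.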
By:  Søren Johansen (University of Copenhagen and CREATES); Theis Lange (University of Copenhagen and CREATES) 
Abstract:  The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)ρy(t−1) + e(t), t = 1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), which is i.i.d. with mean zero and finite variance. We take ρ > 1, so the process is explosive for a period and collapses when s(t) = 0. We apply the drift criterion for nonlinear time series to show that the process is geometrically ergodic when p < 1, because of the recurrent collapse. It has a finite mean if pρ < 1, and a finite variance if pρ² < 1. The question we discuss is whether a bubble model with infinite variance can create the long swings, or persistence, which are observed in many macro variables. We say that a variable is persistent if its autoregressive coefficient ρ̂(n), from the regression of y(t) on y(t−1), is close to one. We show that the estimator ρ̂(n) converges to ρp if the variance is finite, but if the variance of y(t) is infinite, we prove the curious result that the estimator converges to ρ⁻¹. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of the product moments of y(t). 
Keywords:  Time series, explosive processes, bubble models. 
JEL:  C32 
Date:  2011–05–09 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201117&r=ets 
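The bubble model in the abstract is simple enough to simulate directly and to check the finite-variance prediction that the OLS autoregressive coefficient converges to ρp. The parameter values below are our own choices (picked so that pρ² < 1); the theoretical limit is the one stated in the abstract.

```python
import numpy as np

def simulate_bubble(rho, p, n, rng):
    """Blanchard-Watson-type bubble: y[t] = s[t]*rho*y[t-1] + e[t],
       with s[t] ~ Bernoulli(p) i.i.d.  When s[t] = 0 the bubble
       collapses and the process restarts from the innovation."""
    y = np.zeros(n)
    s = rng.random(n) < p
    for t in range(1, n):
        y[t] = (rho * y[t - 1] if s[t] else 0.0) + rng.standard_normal()
    return y

def ar1_coef(y):
    """OLS coefficient (with intercept) of y[t] on y[t-1]."""
    x, z = y[:-1], y[1:]
    xc, zc = x - x.mean(), z - z.mean()
    return (xc @ zc) / (xc @ xc)

rng = np.random.default_rng(2)
rho, p = 1.05, 0.75            # p*rho**2 ~ 0.83 < 1, so the variance is finite
y = simulate_bubble(rho, p, 100_000, rng)
coef = ar1_coef(y)             # theory predicts convergence to rho*p = 0.7875
```

Even though the process is locally explosive, the estimated coefficient settles well below one in this finite-variance regime, which is why the infinite-variance case is needed to generate near-unit-root persistence.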
By:  Ryota Yabe 
Abstract:  This paper derives the asymptotic distribution of Tanaka's score statistic under moderate deviations from a unit root in a moving average model of order one, MA(1). We classify the limiting distribution into three types, depending on the order of the deviation. In the fastest case, the convergence order of the asymptotic distribution changes continuously from that of the invertible process to that of the unit root. In the slowest case, the limiting distribution coincides, in a distributional sense, with that of the invertible process, so these cases share an asymptotic property. The limiting distribution in the intermediate case characterises the boundary between the fastest and slowest cases. 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd10170&r=ets 
By:  Eiji Kurozumi; Khashbaatar Dashtseren 
Abstract:  We develop a new approach to statistical inference in possibly integrated/cointegrated vector autoregressions. Our method builds on two previous approaches: the lag-augmented approach of Toda and Yamamoto (1995) and the artificial autoregressions of Yamamoto (1996). We show that our estimator is asymptotically normally distributed irrespective of whether the variables are stationary or nonstationary, and that the Wald test statistic for parameter restrictions has an asymptotic chi-square distribution. Using this method, we also propose tests for multiple structural changes. We show that our test statistics have the same limiting distributions as in the standard case, irrespective of whether the variables are stationary, purely integrated, or cointegrated. 
Keywords:  multiple breaks, stationary, unit root, cointegration 
JEL:  C12 C13 C32 
Date:  2011–04 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd11187&r=ets 
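The lag augmentation idea underlying the abstract can be sketched for a bivariate system: fit a VAR of order p + d by OLS, but restrict the Wald test to the first p lag coefficients, so that the extra d lags absorb possible unit roots. The code below is our own minimal illustration (equation-by-equation OLS, testing non-causality from the second variable to the first, hard-coded for two variables), not the authors' estimator.

```python
import numpy as np

def lag_matrix(data, p):
    """Regressors [1, y(t-1)', ..., y(t-p)'] for t = p..T-1, plus targets."""
    T = data.shape[0]
    X = np.hstack([data[p - j - 1:T - j - 1] for j in range(p)])
    return np.hstack([np.ones((T - p, 1)), X]), data[p:]

def wald_lag_augmented(data, p, d):
    """Toda-Yamamoto-style Wald test in a bivariate VAR: fit order p+d,
       test only the first p lags of variable 2 in the equation for
       variable 1.  Approximately chi2(p) under the null."""
    X, Y = lag_matrix(data, p + d)
    B = np.linalg.lstsq(X, Y, rcond=None)[0]
    resid = Y - X @ B
    sigma = resid[:, 0] @ resid[:, 0] / (len(Y) - X.shape[1])
    XtX_inv = np.linalg.inv(X.T @ X)
    # columns are [const, y1(t-1), y2(t-1), y1(t-2), y2(t-2), ...]
    idx = [1 + 2 * j + 1 for j in range(p)]   # y2's first p lags, equation 1
    b = B[idx, 0]
    V = sigma * XtX_inv[np.ix_(idx, idx)]
    return b @ np.linalg.solve(V, b)

rng = np.random.default_rng(3)
n = 500
e = rng.standard_normal((n, 2))
x = np.zeros((n, 2))                     # null: variable 2 does not cause 1
z = np.zeros((n, 2))                     # alternative: strong causality
for t in range(1, n):
    x[t, 0] = 0.5 * x[t - 1, 0] + e[t, 0]
    x[t, 1] = 0.5 * x[t - 1, 1] + e[t, 1]
    z[t, 0] = 0.5 * z[t - 1, 0] + 0.5 * z[t - 1, 1] + e[t, 0]
    z[t, 1] = 0.5 * z[t - 1, 1] + e[t, 1]
stat_h0 = wald_lag_augmented(x, p=1, d=1)
stat_h1 = wald_lag_augmented(z, p=1, d=1)
```

Under the null the statistic behaves like a chi-square with p degrees of freedom; under strong causality it becomes large, irrespective of the integration properties of the data, which is the point of the augmentation.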
By:  Shigeru Iwata; Han Li 
Abstract:  When a procedure is applied to extract two component processes from a single observed process, it is necessary to impose a set of restrictions that defines the two components. One popular restriction is the assumption that the shocks to the trend and cycle are orthogonal. Another is the assumption that the trend is a pure random walk process. The unobserved components (UC) model (Harvey, 1985) assumes both of the above, whereas the BN decomposition (Beveridge and Nelson, 1981) assumes only the latter. Quah (1992) investigates a broad class of decompositions by making the former assumption only. This paper provides a general framework in which alternative trend-cycle decompositions are regarded as special cases, and examines alternative decomposition schemes from the perspective of the frequency domain. We find that, as far as US GDP is concerned, the conventional UC model is inappropriate for the trend-cycle decomposition. We agree with Morley et al. (2003) that the UC model is simply misspecified. However, this does not imply that a UC model that allows for correlated shocks is a better model specification. The correlated UC model would lose many attractive features of the conventional UC model. 
Keywords:  Beveridge-Nelson decomposition, Unobserved Components Models 
JEL:  E44 F36 G15 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd10171&r=ets 
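For reference, the BN decomposition the abstract contrasts with the UC model can be written down in a few lines when the first difference of the series follows an AR(1). This special case, the simulated series, and the function names are ours, chosen only to make the trend/cycle identities concrete.

```python
import numpy as np

def bn_decompose_ar1(y):
    """Beveridge-Nelson decomposition assuming the first difference
       follows an AR(1):  dy[t] = c + phi*dy[t-1] + e[t].
       BN trend:  tau[t] = y[t] + phi/(1-phi) * (dy[t] - mean(dy)),
       i.e. the current level plus all expected future growth beyond drift."""
    dy = np.diff(y)
    x, z = dy[:-1] - dy[:-1].mean(), dy[1:] - dy[1:].mean()
    phi = (x @ z) / (x @ x)
    trend = y[1:] + phi / (1 - phi) * (dy - dy.mean())
    cycle = y[1:] - trend
    return trend, cycle, phi

rng = np.random.default_rng(4)
n = 400
dy = np.zeros(n)
for t in range(1, n):                      # growth rate: AR(1) around drift 0.1
    dy[t] = 0.1 + 0.5 * (dy[t - 1] - 0.1) + 0.2 * rng.standard_normal()
y = np.cumsum(dy)                          # difference-stationary level series

trend, cycle, phi = bn_decompose_ar1(y)
```

By construction the trend and cycle sum back to the series, and the BN cycle inherits the (typically small and noisy) shape that motivates the frequency-domain comparison in the paper.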
By:  Daisuke Nagakura 
Abstract:  In this paper, we propose a simple methodology for investigating how shocks to trend and cycle are correlated in unidentified unobserved components models, in which the correlation is not identified. The proposed methodology is applied to U.S. and U.K. real GDP data. We find that the correlation parameters are negative for both countries. We also investigate how changing the identification restriction results in different trend and cycle estimates. 
Keywords:  Unobserved components model, Trend, Cycle, Business Cycle Analysis 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd10172&r=ets 
By:  Massimiliano Caporin (University of Padova); Gabriel G. Velo (University of Padova) 
Abstract:  In this paper, we estimate, model and forecast Realized Range Volatility, a new realized measure and estimator of the quadratic variation of financial prices. This estimator was introduced early in the literature and is based on the high-low range observed at high frequency during the day. We consider the impact of microstructure noise in high frequency data and correct our estimates following a known procedure. We then model the Realized Range, accounting for the well-known stylized effects present in financial data. We consider an HAR model with asymmetric effects with respect to volatility and returns, and GARCH and GJR-GARCH specifications for the variance equation. Moreover, we also consider a non-Gaussian distribution for the innovations. The analysis of forecast performance during the different periods suggests that including the HAR components in the model improves point forecasting accuracy, while the introduction of asymmetric effects leads only to minor improvements. 
Keywords:  Statistical analysis of financial data, Econometrics, Forecasting methods, Time series analysis, Realized Range Volatility, Realized Volatility, Long memory, Volatility forecasting 
JEL:  C22 C52 C53 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:pad:wpaper:0128&r=ets 
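The HAR component mentioned in the abstract is easy to make concrete: tomorrow's realized measure is regressed on today's value and on its weekly (5-day) and monthly (22-day) averages. The design-matrix construction below is a generic HAR sketch with the conventional window lengths; the simulated series and all names are ours, not taken from the paper.

```python
import numpy as np

def har_design(rv):
    """Build HAR regressors from a realized-volatility series:
       [1, daily, weekly 5-day mean, monthly 22-day mean] at time t,
       aligned with the target rv[t+1]."""
    n = len(rv)
    w = np.convolve(rv, np.ones(5) / 5, mode="valid")    # weekly averages
    m = np.convolve(rv, np.ones(22) / 22, mode="valid")  # monthly averages
    X = np.column_stack([np.ones(n - 22), rv[21:-1], w[17:-1], m[:-1]])
    y = rv[22:]
    return X, y

# illustrative persistent positive series standing in for a realized measure
rng = np.random.default_rng(5)
lrv = np.zeros(500)
for t in range(1, 500):
    lrv[t] = 0.9 * lrv[t - 1] + 0.3 * rng.standard_normal()
rv = np.exp(lrv)

X, y = har_design(rv)
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # [const, b_daily, b_weekly, b_monthly]
fitted = X @ beta
```

The three horizons let a single linear regression mimic long-memory-like persistence, which is why adding the HAR components tends to improve point forecasts.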
By:  Antoni Espasa; Iván Mayo 
Abstract:  The paper focuses on providing joint, consistent forecasts for an aggregate and all its components, and on showing that this indirect forecast of the aggregate is at least as accurate as the direct one. The procedure developed in the paper is a disaggregated approach based on single-equation models for the components, which take into account stable features that some components share. The procedure is applied to forecasting euro area, UK and US inflation, and it is shown that its forecasts are significantly more accurate than those obtained by the direct forecast of the aggregate or by dynamic factor models. A by-product of the procedure is the classification of a large number of components by the restrictions they share, which could also be useful in other respects, such as the application of dynamic factors, the definition of intermediate aggregates, or the formulation of models with unobserved components. 
Keywords:  Common trends, Common serial correlation, Inflation, Euro Area, UK, US, Cointegration, Single-equation econometric models 
Date:  2011–04 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws110805&r=ets 
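The direct-versus-indirect distinction in the abstract amounts to the following: either forecast each component and sum the forecasts, or forecast the aggregate itself. A stripped-down sketch with AR(1) forecasts follows; the simulated components sharing a common factor and all names are our illustration, not the paper's single-equation models.

```python
import numpy as np

def ar1_forecast(x):
    """One-step-ahead forecast from an AR(1) with intercept, fitted by OLS."""
    z, w = x[:-1], x[1:]
    zc = z - z.mean()
    phi = (zc @ (w - w.mean())) / (zc @ zc)
    c = w.mean() - phi * z.mean()
    return c + phi * x[-1]

rng = np.random.default_rng(6)
f = np.zeros(400)
for t in range(1, 400):                        # common persistent factor
    f[t] = 0.8 * f[t - 1] + rng.standard_normal()
c1 = f + 0.3 * rng.standard_normal(400)        # components share the factor
c2 = 0.5 * f + 0.3 * rng.standard_normal(400)
agg = c1 + c2

indirect = ar1_forecast(c1) + ar1_forecast(c2) # sum of component forecasts
direct = ar1_forecast(agg)                     # forecast of the aggregate
```

When components share stable features, the disaggregated route can exploit them component by component; the paper's contribution is a procedure for detecting and imposing such shared restrictions systematically.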
By:  Cristina Amado (Universidade do Minho - NIPE); Timo Teräsvirta (CREATES, School of Economics and Management, Aarhus University) 
Abstract:  In this paper we investigate the effects of carefully modelling the long-run dynamics of the volatilities of stock market returns on the conditional correlation structure. To this end we allow the individual unconditional variances in Conditional Correlation GARCH models to change smoothly over time by incorporating a nonstationary component in the variance equations. The modelling technique used to determine the parametric structure of this time-varying component is based on a sequence of specification Lagrange multiplier-type tests derived in Amado and Teräsvirta (2011). The variance equations combine the long-run and the short-run dynamic behaviour of the volatilities. The structure of the conditional correlation matrix is assumed either to be time-independent or to vary over time. We apply our model to pairs of seven daily stock returns belonging to the S&P 500 composite index and traded at the New York Stock Exchange. The results suggest that accounting for deterministic changes in the unconditional variances considerably improves the fit of the multivariate Conditional Correlation GARCH models to the data. The effect of careful specification of the variance equations on the estimated correlations varies: in some cases it is rather small, in others more discernible. As a by-product, we generalize news impact surfaces to the situation in which both the GARCH equations and the conditional correlations contain a deterministic component that is a function of time. 
Keywords:  Multivariate GARCH model; Time-varying unconditional variance; Lagrange multiplier test; Modelling cycle; Nonlinear time series. 
JEL:  C12 C32 C51 C52 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:nip:nipewp:15/2011&r=ets 
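The nonstationary component in the variance equations can be pictured as a smooth logistic function of rescaled time multiplying the conditional variance. The sketch below simulates returns whose unconditional variance shifts smoothly; the logistic form follows the general idea described in the abstract, but the parameter values and the omission of the short-run GARCH part are our simplifications.

```python
import numpy as np

def g_component(T, delta, gamma, c):
    """Smooth deterministic variance component
       g(t) = 1 + delta / (1 + exp(-gamma*(t/T - c))),
       a logistic transition in the unconditional variance
       as a function of rescaled time t/T."""
    t = np.arange(1, T + 1) / T
    return 1.0 + delta / (1.0 + np.exp(-gamma * (t - c)))

rng = np.random.default_rng(7)
T = 1000
g = g_component(T, delta=2.0, gamma=20.0, c=0.5)
r = np.sqrt(g) * rng.standard_normal(T)   # returns with a smooth variance shift
r_std = r / np.sqrt(g)                    # rescaling restores unit variance
```

In the full model the short-run GARCH recursion would be applied to the rescaled returns r_std, so that the deterministic and dynamic parts of volatility are separated, which is what improves the fit of the correlation models in the paper.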
By:  Luís Francisco Aguiar (Universidade do Minho - NIPE); Maria Joana Soares (Universidade do Minho - Departamento de Matemática) 
Abstract:  Economists are already familiar with the Discrete Wavelet Transform. However, a growing body of work uses the Continuous Wavelet Transform as well. We provide a self-contained summary of continuous wavelet tools, such as the Continuous Wavelet Transform, the Cross-Wavelet, the Wavelet Coherency and the Phase-Difference. Furthermore, we generalize the concept of simple coherency to Partial Wavelet Coherency and Multiple Wavelet Coherency, akin to partial and multiple correlations, allowing the researcher to move beyond bivariate analysis. Finally, we describe the Generalized Morse Wavelets, a recently proposed class of analytic wavelets. A user-friendly toolbox, with examples, is attached to this paper. 
Keywords:  Continuous Wavelet Transform, Cross-Wavelet Transform, Wavelet Coherency, Partial Wavelet Coherency, Multiple Wavelet Coherency, Wavelet Phase-Difference; Economic fluctuations 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:nip:nipewp:16/2011&r=ets 
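The frequency-domain implementation of the Continuous Wavelet Transform can be written compactly. The Morlet wavelet and FFT-based convolution below follow the standard textbook construction; the normalization and parameter choices are ours, and the toolbox accompanying the paper remains the authoritative implementation.

```python
import numpy as np

def morlet_cwt(x, scales, omega0=6.0):
    """Continuous wavelet transform with an analytic Morlet wavelet,
       computed as a product in the frequency domain.  Returns a
       (len(scales), len(x)) complex array of coefficients."""
    n = len(x)
    xhat = np.fft.fft(x)
    ang = 2 * np.pi * np.fft.fftfreq(n)      # angular frequencies
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Morlet in the frequency domain, supported on positive frequencies
        psi = np.pi ** -0.25 * np.exp(-0.5 * (s * ang - omega0) ** 2)
        psi *= np.sqrt(2 * np.pi * s) * (ang > 0)
        W[i] = np.fft.ifft(xhat * psi)
    return W

t = np.arange(512)
x = np.cos(2 * np.pi * t / 32)               # pure oscillation, period 32
scales = np.arange(4, 64, dtype=float)
W = morlet_cwt(x, scales)
power = np.mean(np.abs(W) ** 2, axis=1)
peak = scales[np.argmax(power)]              # near period/1.03 for omega0 = 6
```

The wavelet power concentrates at the scale matching the oscillation's period, and the cross-wavelet, coherency and phase-difference tools in the paper are all built from products and ratios of such coefficient arrays for two or more series.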
By:  Adam E Clements (QUT); Christopher A ColemanFenn (QUT); Daniel R Smith (QUT) 
Abstract:  This article examines the out-of-sample forecast performance of several time-series models of equicorrelation, the mean of the off-diagonal elements of a correlation matrix. Building on the existing Dynamic Conditional Correlation and Linear Dynamic Equicorrelation models, we propose adapting the latter to include measures of equicorrelation based on high-frequency intraday data, as well as a forecast of equicorrelation implied by the options market. Using state-of-the-art statistical evaluation techniques, we find that models using the realised measures and the implied equicorrelation outperform those that use daily data alone. However, the out-of-sample forecasting benefits of the implied equicorrelation disappear when it is used in conjunction with the realised measures. 
Date:  2011–04–01 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:2011_3&r=ets 
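Equicorrelation itself is a one-line computation, and a "realised" version from a window of intraday returns is equally short. The sketch below is a generic illustration of the quantity being forecast, not the authors' dynamic models; the simulated return panel and all names are ours.

```python
import numpy as np

def equicorrelation(R):
    """Mean of the off-diagonal elements of a correlation matrix R."""
    k = R.shape[0]
    return (R.sum() - np.trace(R)) / (k * (k - 1))

def realized_equicorrelation(returns):
    """Realised equicorrelation from a (T x k) window of returns:
       sample correlation matrix, then mean off-diagonal element."""
    return equicorrelation(np.corrcoef(returns, rowvar=False))

rng = np.random.default_rng(8)
k, rho_true = 4, 0.3
C = np.full((k, k), rho_true)          # true equal-correlation structure
np.fill_diagonal(C, 1.0)
L = np.linalg.cholesky(C)
rets = rng.standard_normal((2000, k)) @ L.T
rho_hat = realized_equicorrelation(rets)
```

A sequence of such window-by-window estimates is the realised-measure input that the adapted Linear Dynamic Equicorrelation model in the abstract exploits.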