
on Econometric Time Series 
By:  Jozef Barunik; Evzen Kocenda; Lukas Vacha 
Abstract:  Based on the negative and positive realized semivariances developed in Barndorff-Nielsen et al. (2010), we modify the volatility spillover index devised in Diebold and Yilmaz (2009). The resulting asymmetric volatility spillover indices are easy to compute and capture the negative and positive parts of volatility well. We apply the modified indices to the 30 U.S. stocks with the highest market capitalization over the period 2004–2011 to study intra-market spillovers. We provide evidence of sizable volatility-spillover asymmetries and a markedly different pattern of spillovers during periods of economic ups and downs. 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1308.1221&r=ets 
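The building block of these asymmetric indices, the split of realized variance into negative and positive semivariances, can be sketched in a few lines (a minimal Python illustration, not the authors' code; the function name and the toy returns are assumptions):

```python
import numpy as np

def realized_semivariances(returns):
    """Decompose realized variance as in Barndorff-Nielsen et al. (2010):
    RS- sums squared negative intraday returns (downside volatility),
    RS+ sums squared non-negative ones (upside volatility)."""
    r = np.asarray(returns, dtype=float)
    rs_minus = np.sum(r[r < 0] ** 2)
    rs_plus = np.sum(r[r >= 0] ** 2)
    return rs_minus, rs_plus

# By construction, RS- + RS+ equals the ordinary realized variance
r = np.array([0.01, -0.02, 0.005, -0.01])
rs_m, rs_p = realized_semivariances(r)
assert np.isclose(rs_m + rs_p, np.sum(r ** 2))
```

That additive identity is what makes the decomposition of the spillover index clean: each semivariance can be fed into the Diebold-Yilmaz machinery separately.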
By:  Giuseppe Cavaliere (Università di Bologna); Luca De Angelis (Università di Bologna); Anders Rahbek (University of Copenhagen); Robert Taylor (University of Nottingham) 
Abstract:  In this paper we investigate the behaviour of a number of methods for estimating the cointegration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as AIC and BIC, with the commonly used sequential approach of Johansen (1996) based on either asymptotic or wild bootstrap-based likelihood ratio type tests. Complementing recent work on the latter in Cavaliere, Rahbek and Taylor (2013, Econometric Reviews, forthcoming), we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct cointegration rank across different values of the cointegration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall, since it avoids a significant tendency, seen in the BIC-based method, to overestimate the cointegration rank in relatively small samples. 
Keywords:  Cointegration; Wild bootstrap; Trace statistic; Information criteria; Rank determination; Heteroskedasticity. 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:bot:quadip:121&r=ets 
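The information-criterion side of the comparison reduces to minimizing, say, BIC over candidate ranks. A toy sketch under stated assumptions (the maximized log-likelihoods and parameter counts below are made-up inputs, not the output of an actual Johansen estimation):

```python
import numpy as np

def select_rank_bic(log_liks, n_params, T):
    """Pick the cointegration rank minimizing BIC(r) = -2*loglik(r) + k(r)*log(T).

    log_liks[r] is the maximized log-likelihood of the model with rank r and
    n_params[r] its parameter count; the estimation step producing them is
    assumed to have been done elsewhere."""
    bic = -2.0 * np.asarray(log_liks, float) + np.asarray(n_params, float) * np.log(T)
    return int(np.argmin(bic)), bic

# Hypothetical values for ranks r = 0, 1, 2 in a bivariate system with T = 100
rank, bic = select_rank_bic([-120.0, -100.0, -99.0], [2, 6, 12], T=100)
```

The paper's finding that BIC over-selects in small samples corresponds to the log(T) penalty being too weak relative to the likelihood gain from spurious extra cointegrating vectors.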
By:  Gabriel Rodríguez (Departamento de Economía, Pontificia Universidad Católica del Perú); Dionisio Ramírez (Universidad de Castilla-La Mancha) 
Abstract:  Perron and Rodríguez (2003) claimed that their procedure for detecting additive outliers (τd) is powerful even under departures from the unit root case. In this note, we use Monte Carlo simulations to show that τd remains powerful when the errors are ARFIMA(p, d, q). Using simulations, we calculate the expected number of additive outliers found in this context and the number of times that τd identifies the true location of the additive outliers. The results indicate that the power of the τd procedure depends on the size of the additive outliers: when the DGP contains large additive outliers, τd correctly detects their location 100.0% of the time. 
Keywords:  Additive Outliers, ARFIMA Errors, Detection of Additive Outliers. 
JEL:  C2 C3 C5 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00355&r=ets 
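The intuition behind a first-difference-based outlier search is that an additive outlier in levels produces two large, opposite-signed spikes in the differenced series. A deliberately simplified sketch (this is not the exact τd statistic or its critical values; the function name, the threshold c, and the simulated data are assumptions):

```python
import numpy as np

def flag_additive_outliers(y, c=3.0):
    """Flag candidate additive-outlier dates: standardize the first
    differences of y and return the (level) indices whose absolute
    standardized difference exceeds c."""
    dy = np.diff(np.asarray(y, float))
    z = (dy - dy.mean()) / dy.std(ddof=1)
    return np.flatnonzero(np.abs(z) > c) + 1  # map back to level dates

# Demo: random walk with one large additive outlier injected at t = 100
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))
y[100] += 10.0
flags = flag_additive_outliers(y)
```

Because the outlier contaminates both dy[99] and dy[100], the simple rule flags the dates around t = 100; the paper's point is that power hinges on the outlier magnitude relative to the noise.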
By:  Gabriel Rodríguez (Departamento de Economía, Pontificia Universidad Católica del Perú) 
Abstract:  In a recent paper, Fajardo et al. (2009) propose an alternative semiparametric estimator of the fractional parameter in ARFIMA models which is robust to the presence of additive outliers. The results are very interesting; however, they use samples of 300 or 800 observations, which are rarely available in macroeconomics or economics. In order to perform a comparison, I use the procedure for detecting additive outliers based on the statistic τd suggested by Perron and Rodríguez (2003). Further, I use dummy variables associated with the locations of the selected outliers to estimate the fractional parameter. I find better results for the mean and bias of this parameter when T = 100, while the results in terms of the standard deviation and the MSE are very similar. However, for larger sample sizes such as 300 or 800, the robust procedure performs better, especially in terms of the standard deviation and MSE measures. Empirical applications to seven Latin American inflation series with very small sample sizes contaminated by additive outliers are discussed. We find that when no correction for additive outliers is performed, the fractional parameter is underestimated. 
Keywords:  Additive Outliers, ARFIMA Errors, semiparametric estimation. 
JEL:  C2 C3 C5 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00356&r=ets 
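For context, the standard semiparametric benchmark in this literature is the log-periodogram (GPH) regression: regress the log periodogram on log(4 sin²(λ/2)) at low Fourier frequencies, and minus the slope estimates d. A compact sketch (this is the textbook estimator, not Fajardo et al.'s robust version; the bandwidth choice m = √T is an assumption):

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimator of the fractional parameter d."""
    x = np.asarray(x, float)
    n = len(x)
    if m is None:
        m = int(n ** 0.5)          # conventional bandwidth, an assumption
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n      # first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1 : m + 1]
    I = np.abs(dft) ** 2 / (2.0 * np.pi * n)   # periodogram ordinates
    X = np.log(4.0 * np.sin(lam / 2.0) ** 2)
    slope = np.polyfit(X, np.log(I), 1)[0]
    return -slope                  # d_hat = minus the regression slope
```

An additive outlier inflates the periodogram roughly uniformly across frequencies, which flattens the low-frequency slope and biases d toward zero: exactly the underestimation reported in the abstract.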
By:  Gabriel Rodríguez (Departamento de Economía, Pontificia Universidad Católica del Perú); Dionisio Ramírez (Universidad de Castilla-La Mancha) 
Abstract:  This note analyzes the empirical size of the augmented Dickey-Fuller (ADF) statistic proposed by Perron and Rodríguez (2003) when the errors are fractional. This ADF is based on a search procedure for additive outliers, named τd, that uses first differences of the data. Simulations show that the empirical size of the ADF is not affected by fractional errors, confirming the claim of Perron and Rodríguez (2003) that the τd procedure is robust to departures from the unit root framework. In particular, the results show low sensitivity of the size of the ADF statistic with respect to the fractional parameter (d). However, as expected, when there is strong negative moving average or negative autoregressive autocorrelation, the ADF statistic is oversized. These difficulties are fixed as the sample size increases (from T = 100 to T = 200). An empirical application to eight quarterly Latin American inflation series is also provided, showing the importance of taking into account dummy variables for the detected additive outliers. 
Keywords:  Additive Outliers, ARFIMA Errors, ADF test 
JEL:  C2 C3 C5 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00357&r=ets 
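An ADF-type regression augmented with impulse dummies for detected outliers can be written down in a few lines (a generic textbook setup for illustration, not the exact Perron-Rodríguez test; the function name and the dummy placement are assumptions):

```python
import numpy as np

def adf_with_dummies(y, outlier_dates=(), lags=1):
    """Regress delta-y on a constant, the lagged level y_{t-1}, lagged
    differences, and one impulse dummy per detected additive outlier;
    return the t-statistic on the y_{t-1} coefficient (the ADF statistic)."""
    y = np.asarray(y, float)
    dy = np.diff(y)
    rows = np.arange(lags, len(dy))
    X = [np.ones(len(rows)), y[rows]]            # constant, y_{t-1}
    for i in range(1, lags + 1):
        X.append(dy[rows - i])                   # lag-augmentation terms
    for t0 in outlier_dates:
        d = np.zeros(len(rows))
        d[rows == t0] = 1.0
        X.append(d)                              # impulse dummy at t0
    X = np.column_stack(X)
    yv = dy[rows]
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    resid = yv - X @ beta
    s2 = resid @ resid / (len(rows) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se
```

A stationary series yields a large negative statistic, while a unit-root series does not; the dummies soak up the outlier dates so they cannot distort either the statistic or its size.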
By:  Grech, Aaron George 
Abstract:  The Hodrick-Prescott (HP) filter is a commonly used detrending method, particularly in potential output studies. However, its suitability depends on a number of conditions. Very small open economies do not satisfy these, as their macroeconomic series exhibit pronounced trends, large fluctuations and recurrent breaks. Consequently, the use of the filter results in random changes in the output gap that are out of line with the concept of equilibrium. Two suggestions are put forward. The first involves defining the upper and lower bounds of a series and determining equilibrium as a weighted average of the filter applied separately to these bounds. The second involves integrating structural features into the standard filter, allowing researchers to set limits on the impact of structural/temporary shocks and to allow for lengthy periods of disequilibria. This paper shows that these methods can result in a smoother output gap series for the smallest Euro Area economies. 
Keywords:  Potential output, output gap, Hodrick-Prescott filter, detrending, business cycles, small open economies 
JEL:  B41 C1 E32 F41 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:48803&r=ets 
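The baseline HP filter that both modifications start from has a closed-form solution as a penalized least-squares problem: the trend τ minimizes Σ(y − τ)² + λ Σ(Δ²τ)², i.e. it solves (I + λD′D)τ = y with D the second-difference matrix. A standard sketch (λ = 1600 is the conventional quarterly smoothing value; the dense solve is for clarity, not efficiency):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend/cycle decomposition: solve
    (I + lam * D'D) tau = y, where D is the (T-2) x T second-difference
    matrix, and return the trend tau and the gap y - tau."""
    y = np.asarray(y, float)
    T = len(y)
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i : i + 3] = [1.0, -2.0, 1.0]   # second-difference stencil
    tau = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)
    return tau, y - tau
```

Since D annihilates any linear trend, a purely linear series passes through untouched (zero gap); the paper's concern is precisely that level breaks and large swings in small open economies violate this smooth-trend premise and push arbitrary movements into the gap.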