
on Econometric Time Series 
By:  Guglielmo D'Amico; Ada Lika; Filippo Petroni 
Abstract:  A new branch of the recent literature on financial time series modeling builds on Markov processes. In this paper, an Indexed Markov Chain is used to model high-frequency price returns of quoted firms. The peculiarity of this type of model is that, through the introduction of an Index process, market volatility can be treated endogenously, and two very important stylized facts of financial time series can be taken into account: long memory and volatility clustering. We first propose a method for the optimal determination of the state space of the Index process, based on a change-point approach for Markov chains. Furthermore, we provide an explicit formula for the probability distribution function of the first change of state of the Index process. Results are illustrated with an application to intraday prices of a quoted Italian firm from January 1st, 2007 to December 31st, 2010. 
Date:  2018–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1802.01540&r=ets 
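The mechanics of an indexed Markov chain — a chain of return states whose transition law is selected by a volatility index — can be sketched as follows. The three return states, the moving-average index with two levels, and all thresholds are illustrative assumptions, not the specification estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake high-frequency returns: mostly calm, occasionally volatile
returns = rng.standard_normal(5000) * np.where(rng.random(5000) < 0.2, 3.0, 1.0)

# Return states: 0 = down, 1 = flat, 2 = up (thresholds are illustrative)
ret_states = np.digitize(returns, [-0.5, 0.5])

# Index process: moving average of squared returns, discretised to 2 levels
index = np.convolve(returns**2, np.ones(20) / 20, mode="same")
idx_states = (index > np.median(index)).astype(int)   # 0 = calm, 1 = volatile

# One transition matrix per index level:
# P[v, i, j] = Pr(next state = j | current state = i, index level = v)
counts = np.zeros((2, 3, 3))
for t in range(len(returns) - 1):
    counts[idx_states[t], ret_states[t], ret_states[t + 1]] += 1
P = counts / counts.sum(axis=2, keepdims=True)
```

Conditioning the transition matrix on the index level is what lets the chain reproduce volatility clustering: calm and volatile regimes get different return dynamics.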
By:  Shanika L. Wickramasuriya; George Athanasopoulos; Rob J. Hyndman 
Abstract:  Large collections of time series often have aggregation constraints due to product or geographical groupings. The forecasts for the most disaggregated series are usually required to add up exactly to the forecasts of the aggregated series, a constraint we refer to as "coherence". Forecast reconciliation is the process of adjusting forecasts to make them coherent. The reconciliation algorithm proposed by Hyndman et al. (2011) is based on a generalized least squares estimator that requires an estimate of the covariance matrix of the coherency errors (i.e., the errors that arise due to incoherence). We show that this matrix is impossible to estimate in practice due to identifiability conditions. We propose a new forecast reconciliation approach that incorporates the information from a full covariance matrix of forecast errors in obtaining a set of coherent forecasts. Our approach minimizes the mean squared error of the coherent forecasts across the entire collection of time series under the assumption of unbiasedness. The minimization problem has a closed form solution. We make this solution scalable by providing a computationally efficient representation. We evaluate the performance of the proposed method compared to alternative methods using a series of simulation designs which take into account various features of the collected time series. This is followed by an empirical application using Australian domestic tourism data. The results indicate that the proposed method works well with artificial and real data. 
Keywords:  Aggregation, Australian tourism, coherent forecasts, contemporaneous error correlation, forecast combinations, spatial correlations. 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201722&r=ets 
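The closed-form solution mentioned above can be illustrated on a toy hierarchy. Writing S for the summing matrix, W for the covariance matrix of the base forecast errors, and y_hat for the (incoherent) base forecasts, the minimum-MSE coherent forecasts take the form y_tilde = S (S' W^-1 S)^-1 S' W^-1 y_hat. The two-series hierarchy and the diagonal W below are illustrative stand-ins, not the paper's data.

```python
import numpy as np

# Toy hierarchy: Total = A + B.  S maps the 2 bottom series to all 3 series.
S = np.array([[1.0, 1.0],    # Total
              [1.0, 0.0],    # A
              [0.0, 1.0]])   # B

y_hat = np.array([10.0, 6.5, 4.0])   # base forecasts: incoherent (6.5 + 4 != 10)
W = np.diag([2.0, 1.0, 1.0])         # stand-in for the forecast-error covariance

# Minimum-MSE reconciliation: y_tilde = S (S' W^-1 S)^-1 S' W^-1 y_hat
Winv = np.linalg.inv(W)
G = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)
y_tilde = S @ G @ y_hat
```

By construction the reconciled vector lies in the column space of S, so the bottom-level forecasts now add up exactly to the total.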
By:  Tatsushi Oka; Pierre Perron 
Abstract:  The issue addressed in this paper is that of testing for common breaks across or within equations of a multivariate system. Our framework is very general and allows integrated regressors and trends as well as stationary regressors. The null hypothesis is that breaks in different parameters occur at common locations and are separated by some positive fraction of the sample size unless they occur across different equations. Under the alternative hypothesis, the break dates across parameters are not the same and also need not be separated by a positive fraction of the sample size whether within or across equations. The test considered is the quasi-likelihood ratio test assuming normal errors, though as usual the limit distribution of the test remains valid with non-normal errors. Of independent interest, we provide results about the rate of convergence of the estimates when searching over all possible partitions subject only to the requirement that each regime contains at least as many observations as some positive fraction of the sample size, allowing break dates not separated by a positive fraction of the sample size across equations. Simulations show that the test has good finite sample properties. We also provide an application to issues related to level shifts and persistence for various measures of inflation to illustrate its usefulness. 
Keywords:  change-point, segmented regressions, break dates, hypothesis testing, multiple equation systems. 
JEL:  C32 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20183&r=ets 
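A minimal illustration of the common-break idea — searching over admissible break dates subject to a trimming fraction, jointly across equations — is sketched below with simple mean shifts. The two-equation design, the 15% trimming, and the plain SSR criterion are illustrative simplifications of the quasi-likelihood framework in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T, k0 = 200, 100                      # sample size and true common break date
y1 = np.r_[rng.normal(0.0, 1.0, k0), rng.normal(1.0, 1.0, T - k0)]
y2 = np.r_[rng.normal(0.0, 1.0, k0), rng.normal(1.5, 1.0, T - k0)]

def ssr(y, k):
    """SSR from fitting separate means before and after a break at k."""
    return ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()

trim = int(0.15 * T)                  # each regime keeps at least 15% of the sample
grid = range(trim, T - trim)

# Common-break estimate: one date minimising the system-wide SSR
k_common = min(grid, key=lambda k: ssr(y1, k) + ssr(y2, k))

# Equation-specific estimates (the alternative allows these to differ)
k_sep = (min(grid, key=lambda k: ssr(y1, k)),
         min(grid, key=lambda k: ssr(y2, k)))
```

A likelihood-ratio-type statistic would then compare the restricted fit at `k_common` with the unrestricted fit at `k_sep`; the limit theory for that comparison is the subject of the paper.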
By:  Onatski, A.; Wang, C. 
Abstract:  The simplest version of Johansen's (1988) trace test for cointegration is based on the squared sample canonical correlations between a random walk and its own innovations. Onatski and Wang (2017) show that the empirical distribution of such squared canonical correlations weakly converges to the Wachter distribution as the sample size and the dimensionality of the random walk go to infinity proportionally. In this paper we prove that, in addition, the extreme squared correlations almost surely converge to the upper and lower boundaries of the support of the Wachter distribution. This result yields strong laws of large numbers for the averages of functions of the squared canonical correlations that may be discontinuous or unbounded outside the support of the Wachter distribution. In particular, we establish the a.s. limit of the scaled Johansen's trace statistic, which has a logarithmic singularity at unity. We use this limit to derive a previously unknown analytic expression for the Bartlett-type correction coefficient for Johansen's test in a high-dimensional environment. 
Keywords:  High-dimensional random walk, cointegration, extreme canonical correlations, Wachter distribution, trace statistic. 
Date:  2018–01–25 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:1805&r=ets 
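The object of study — the squared sample canonical correlations between a high-dimensional random walk and its own innovations, with dimension proportional to the sample size — can be computed directly; the empirical distribution of these eigenvalues is what converges to the Wachter law. The dimensions below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
T, p = 1000, 100                      # dimensionality proportional to sample size
eps = rng.standard_normal((T, p))     # innovations
X = np.cumsum(eps, axis=0)            # p-dimensional random walk

# Squared sample canonical correlations between lagged levels and innovations,
# computed via QR + SVD for numerical stability
A = X[:-1] - X[:-1].mean(axis=0)
B = eps[1:] - eps[1:].mean(axis=0)
Qa, _ = np.linalg.qr(A)
Qb, _ = np.linalg.qr(B)
sv = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
lam = np.sort(sv ** 2)                # all p squared canonical correlations
```

The paper's result concerns `lam.min()` and `lam.max()`: they converge almost surely to the boundaries of the Wachter support as T and p grow proportionally.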
By:  Foroni, Claudia; Marcellino, Massimiliano; Stevanović, Dalibor 
Abstract:  Temporal aggregation in general introduces a moving average (MA) component in the aggregated model. A similar feature emerges when not all but only a few variables are aggregated, which generates a mixed frequency model. The MA component is generally neglected, likely to preserve the possibility of OLS estimation, but the consequences have never been properly studied in the mixed frequency context. In this paper, we show, analytically, in Monte Carlo simulations and in a forecasting application on U.S. macroeconomic variables, the relevance of considering the MA component in mixed-frequency MIDAS and Unrestricted MIDAS models (MIDAS-ARMA and UMIDAS-ARMA). Specifically, the simulation results indicate that the short-term forecasting performance of MIDAS-ARMA and UMIDAS-ARMA is better than that of, respectively, MIDAS and UMIDAS. The empirical applications on nowcasting U.S. GDP growth, investment growth and GDP deflator inflation confirm this ranking. Moreover, in both simulation and empirical results, MIDAS-ARMA is better than UMIDAS-ARMA. 
Keywords:  temporal aggregation, MIDAS models, ARMA models 
JEL:  E37 C53 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdps:022018&r=ets 
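The unrestricted (U-MIDAS) part of the setup — regressing a low-frequency variable on the individual high-frequency lags by plain OLS, which is exactly the step that neglects the MA component the paper focuses on — can be sketched as follows. The data-generating process and coefficients are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_q = 120                                   # number of quarters
x = rng.standard_normal(3 * n_q)            # monthly high-frequency indicator
xm = x.reshape(n_q, 3)                      # columns: months 1-3 of each quarter

# Illustrative DGP: quarterly y loads on the three months of the current quarter
y = xm @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.standard_normal(n_q)

# U-MIDAS: unrestricted OLS on the individual high-frequency lags
# (no lag-polynomial restriction, and no MA term in the error)
Z = np.c_[np.ones(n_q), xm]
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
```

The paper's UMIDAS-ARMA variant augments this regression with an ARMA error structure instead of treating the disturbance as white noise.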
By:  Schüler, Yves S. 
Abstract:  Hamilton (2017) criticises the Hodrick and Prescott (1981, 1997) filter (HP filter) because of three drawbacks (i. spurious cycles, ii. end-of-sample bias, iii. ad hoc assumptions regarding the smoothing parameter) and proposes a regression filter as an alternative. I demonstrate that Hamilton's regression filter shares some of these drawbacks. For instance, Hamilton's ad hoc formulation of a two-year regression filter implies a cancellation of two-year cycles and an amplification of cycles longer than typical business cycles. This is at odds with stylised business cycle facts, such as the one-year duration of a typical recession, leading to inconsistencies, for example, with the NBER business cycle chronology. Nonetheless, I show that Hamilton's regression filter should be preferred to the HP filter for constructing a credit-to-GDP gap. The filter extracts the various medium-term frequencies more equally. Due to this property, a regression-filtered credit-to-GDP ratio indicates that imbalances prior to the global financial crisis started earlier than shown by the Basel III credit-to-GDP gap. 
Keywords:  detrending, spurious cycles, business cycles, financial cycles, Basel III 
JEL:  C10 E32 E58 G01 
Date:  2018 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdps:032018&r=ets 
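Hamilton's regression filter discussed above has a simple form: the cycle at date t+h is the residual from regressing y_{t+h} on a constant and the p most recent values y_t, ..., y_{t-p+1}. A minimal implementation of the h=8, p=4 specification (the two-year filter for quarterly data criticised in the abstract):

```python
import numpy as np

def hamilton_cycle(y, h=8, p=4):
    """Hamilton (2017) regression filter: the cycle is the residual from
    regressing y_{t+h} on a constant and y_t, ..., y_{t-p+1}.
    h=8, p=4 is the two-year specification for quarterly data."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # Row t of X holds (1, y_t, y_{t-1}, ..., y_{t-p+1}); target is y_{t+h}
    X = np.column_stack([np.ones(T - h - p + 1)] +
                        [y[p - 1 - j: T - h - j] for j in range(p)])
    target = y[p - 1 + h:]
    beta = np.linalg.lstsq(X, target, rcond=None)[0]
    return target - X @ beta      # cyclical component
```

On a series that is exactly a linear trend the regression fits perfectly, so the extracted cycle is identically zero; the frequency-domain properties the abstract criticises concern how stochastic cycles of different lengths pass through this residual operation.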
By:  Till Weigt; Bernd Wilfling 
Abstract:  We consider a situation in which the forecaster has available M individual forecasts of a univariate target variable. We propose a 3-step procedure designed to exploit the interrelationships among the M forecast-error series (estimated from a large time-varying parameter VAR model of the errors, using past observations) with the aim of obtaining more accurate predictions of future forecast errors. The refined future forecast-error predictions are then used to obtain M new individual forecasts that are adapted to the information from the estimated VAR. The adapted M individual forecasts are ultimately combined, and any potential accuracy gains of the adapted combination forecasts are analyzed. We evaluate our approach in an out-of-sample forecasting analysis, using a well-established 7-country data set on output growth. Our 3-step procedure yields substantial accuracy gains (in terms of loss reductions ranging from 6.2% to 18%) for the simple average and three time-varying-parameter combination forecasts. 
Keywords:  Forecast combinations, large time-varying parameter VARs, Bayesian VAR estimation, state-space model, forgetting factors, dynamic model averaging. 
JEL:  C53 C32 C11 
Date:  2018–02 
URL:  http://d.repec.org/n?u=RePEc:cqe:wpaper:6818&r=ets 
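The 3-step procedure can be caricatured with a constant-parameter VAR(1) standing in for the paper's large time-varying parameter VAR; the error dynamics and the individual forecasts below are fabricated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
T, M = 300, 3

# Fabricated forecast errors of M individual forecasts, with cross-persistence
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.1],
              [0.1, 0.0, 0.3]])
e = np.zeros((T, M))
for t in range(1, T):
    e[t] = e[t - 1] @ A.T + 0.1 * rng.standard_normal(M)

# Step 1: fit a VAR(1) to the forecast-error series
Y, X = e[1:], e[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Step 2: predict the next-period forecast errors
e_next = A_hat @ e[-1]

# Step 3: adapt the individual forecasts and combine (simple average)
f_raw = np.array([2.0, 2.2, 1.9])     # fabricated individual forecasts
f_adapted = f_raw - e_next            # subtract the predicted errors
combo = f_adapted.mean()
```

The accuracy gains reported in the abstract come precisely from Step 2: when forecast errors are predictable from their own past, removing the predicted component before combining improves on combining the raw forecasts.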