
on Econometric Time Series 
By:  Jakob Guldbæk Mikkelsen (Aarhus University and CREATES); Eric Hillebrand (Aarhus University and CREATES); Giovanni Urga (Cass Business School) 
Abstract:  In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite-sample properties of the maximum likelihood estimator through an extensive simulation study and illustrate the empirical relevance of the time-varying loadings structure using a large quarterly dataset for the US economy. 
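The first step of the procedure — extracting principal-components factor estimates from a large panel — can be sketched as follows. This is a minimal illustration on simulated data; the panel dimensions, noise level, and the normalisation F'F/T = I are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 50, 2

# Simulate a factor panel: X_it = lambda_i' F_t + e_it
F = rng.standard_normal((T, r))
Lam = rng.standard_normal((N, r))
X = F @ Lam.T + 0.5 * rng.standard_normal((T, N))

# Step 1: principal-components factor estimates via SVD (F'F/T = I normalisation)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F_hat = np.sqrt(T) * U[:, :r]      # estimated factors, T x r
Lam_hat = X.T @ F_hat / T          # implied loadings, N x r

# Factors are identified only up to rotation, so check the common-component fit
common_hat = F_hat @ Lam_hat.T
r2 = 1 - np.sum((X - common_hat) ** 2) / np.sum(X ** 2)
```

The second step (loadings evolving as VARs) would then treat each series' loading path as a state equation, which is beyond this sketch.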
Keywords:  High-dimensional factor models, dynamic factor loadings, maximum likelihood, principal components 
JEL:  C33 C55 C13 
Date:  2015–12–15 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201561&r=ets 
By:  Matteo Ludovico Bedini; Rainer Buckdahn; Hans-Jürgen Engelbert 
Abstract:  The issue of giving an explicit description of the flow of information concerning the time of bankruptcy of a company (or a state) arriving on the market is tackled by defining a bridge process starting from zero and conditioned to be equal to zero when the default occurs. This makes it possible to capture some empirical facts about the behavior of financial markets: when the bridge process is away from zero, investors can be relatively sure that the default will not happen immediately. However, when the information process is close to zero, market agents should be aware of the risk of an imminent default. In this sense the bridge process leaks information concerning the default before it occurs. The objective of this first paper on Brownian bridges on stochastic intervals is to provide the basic properties of these processes. 
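A Brownian bridge pinned at zero at the default time can be simulated directly from a Brownian motion path. The sketch below takes the default time tau as deterministic for simplicity, whereas the paper's contribution is precisely the case of bridges on stochastic intervals:

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 1.0          # default time (deterministic here, for illustration only)
n = 1000
t = np.linspace(0.0, tau, n + 1)
dt = tau / n

# Standard Brownian motion on [0, tau]
dB = np.sqrt(dt) * rng.standard_normal(n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Bridge conditioned to equal zero at tau: beta_t = B_t - (t / tau) * B_tau
beta = B - (t / tau) * B[-1]
```

While beta is away from zero the simulated "information process" signals that default is not imminent; near zero it signals elevated default risk, matching the intuition in the abstract.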
Date:  2016–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1601.01811&r=ets 
By:  Lucas Lacasa; Ryan Flanagan 
Abstract:  The relation between time series irreversibility and entropy production has been recently investigated in thermodynamic systems operating away from equilibrium. In this work we explore this concept in the context of financial time series. We make use of visibility algorithms to quantify, in graph-theoretical terms, the time irreversibility of 35 financial indices evolving over the period 1998–2012. We show that this metric is complementary to standard measures based on volatility and exploit it both to classify periods of financial stress and to rank companies accordingly. We then validate this approach by finding that a projection in principal components space of financial years, based on time irreversibility features, separates periods of financial stress from stable periods. Relations between irreversibility, efficiency and predictability are briefly discussed. 
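A minimal version of this approach uses the directed horizontal visibility graph (one common variant of the visibility algorithms the abstract mentions) and compares the out-degree (forward in time) and in-degree (backward in time) distributions via a Kullback–Leibler divergence. All numerical choices below are illustrative assumptions:

```python
import numpy as np

def hvg_degrees(x):
    """Directed horizontal visibility graph: i -> j (i < j) when every value
    strictly between them lies below min(x[i], x[j])."""
    n = len(x)
    k_out = np.zeros(n, dtype=int)   # links to the future
    k_in = np.zeros(n, dtype=int)    # links from the past
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1 or x[i + 1:j].max() < min(x[i], x[j]):
                k_out[i] += 1
                k_in[j] += 1
            if x[j] >= x[i]:         # nothing beyond j can be visible from i
                break
    return k_out, k_in

def kl_irreversibility(x):
    """KL divergence between out- and in-degree distributions: near zero for a
    statistically reversible series, large for an irreversible one."""
    k_out, k_in = hvg_degrees(x)
    bins = np.arange(1, max(k_out.max(), k_in.max()) + 2)
    p = np.histogram(k_out, bins=bins)[0] + 1e-12
    q = np.histogram(k_in, bins=bins)[0] + 1e-12
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(2)
reversible = rng.standard_normal(600)                      # white noise
sawtooth = np.tile(np.arange(7.0), 86) + 0.01 * rng.standard_normal(602)
```

The sawtooth (slow rise, sharp fall) is strongly time-asymmetric, so its forward and backward degree distributions diverge much more than those of white noise.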
Date:  2016–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1601.01980&r=ets 
By:  Jaydip Sen; Tamal Datta Chaudhuri 
Abstract:  With the rapid development and evolution of sophisticated algorithms for the statistical analysis of time series data, the research community has started spending considerable effort on the technical analysis of such data. Forecasting is also an area which has witnessed a paradigm shift in its approach. In this work, we have used the time series of the index values of the Auto sector in India during January 2010 to December 2015 for a deeper understanding of the behavior of its three constituent components, namely the Trend, the Seasonal component, and the Random component. Based on this structural analysis, we have designed three approaches for forecasting and computed their accuracy in prediction using suitably chosen training and test data sets. The results clearly demonstrate the accuracy of our decomposition results and the efficiency of our forecasting techniques, even in the presence of a dominant Random component in the time series. 
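The Trend/Seasonal/Random split described above can be illustrated with a classical additive decomposition: trend by centred moving average, seasonal by period-wise means of the detrended series, random as the remainder. The monthly period, sample length, and crude edge handling below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def decompose(x, period):
    """Classical additive decomposition into trend, seasonal, random parts."""
    n = len(x)
    # Trend: centred moving average spanning one full period
    if period % 2 == 0:
        w = np.r_[0.5, np.ones(period - 1), 0.5] / period
    else:
        w = np.ones(period) / period
    trend = np.convolve(x, w, mode="same")
    half = len(w) // 2
    trend[:half] = trend[half]         # crude edge handling for this sketch
    trend[-half:] = trend[-half - 1]
    # Seasonal: period-wise means of the detrended series, re-centred at zero
    detr = x - trend
    seas_means = np.array([detr[i::period].mean() for i in range(period)])
    seas_means -= seas_means.mean()
    seasonal = np.resize(seas_means, n)
    random = x - trend - seasonal      # remainder
    return trend, seasonal, random

rng = np.random.default_rng(3)
n, period = 240, 12                    # monthly data, as in the paper's sample
t = np.arange(n)
x = 0.05 * t + 2 * np.sin(2 * np.pi * t / period) + 0.3 * rng.standard_normal(n)
trend, seasonal, random = decompose(x, period)
```

By construction the three parts sum back to the original series, and the estimated seasonal tracks the planted sine.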
Date:  2016–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1601.02407&r=ets 
By:  Jonas Hallgren; Timo Koski 
Abstract:  Continuous time Bayesian networks are investigated with a special focus on their ability to express causality. A framework is presented for doing inference in these networks. The central contributions are a representation of the intensity matrices for the networks and the introduction of a causality measure. A new model for high-frequency financial data is presented. It is calibrated to market data and, according to the new causality measure, it performs better than older models. 
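Continuous time Bayesian networks are built from conditional intensity matrices; simulating a single node given its intensity matrix reduces to simulating a continuous-time Markov chain (Gillespie-style). The two-state matrix below is an assumed toy example, not from the paper:

```python
import numpy as np

def simulate_ctmc(Q, x0, t_end, rng):
    """Simulate a continuous-time Markov chain with intensity matrix Q.
    Rows of Q sum to zero; Q[i, j] (j != i) is the jump rate i -> j."""
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        rate = -Q[x, x]
        if rate <= 0:                      # absorbing state
            break
        t += rng.exponential(1.0 / rate)   # holding time ~ Exp(rate)
        if t >= t_end:
            break
        probs = Q[x].copy()
        probs[x] = 0.0
        probs /= rate                      # jump distribution over other states
        x = rng.choice(len(probs), p=probs)
        path.append((t, x))
    return path

rng = np.random.default_rng(4)
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])               # two-state chain, exit rates 1 and 2
path = simulate_ctmc(Q, 0, 1000.0, rng)
times = np.array([t for t, _ in path])
states = np.array([s for _, s in path])

# Time spent in state 0 should approach the stationary probability 2/3
durations = np.diff(np.r_[times, 1000.0])
occ0 = durations[states == 0].sum() / 1000.0
```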
Date:  2016–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1601.06651&r=ets 
By:  Wieladek, Tomasz (Bank of England) 
Abstract:  Interacted panel VAR (IPVAR) models allow coefficients to vary as a deterministic function of observable country characteristics. The varying coefficient Bayesian panel VAR generalises this to the stochastic case. As an application of this framework, I examine whether the impact of commodity price shocks on consumption and the CPI varies with the degree of exchange rate, financial, product and labour market liberalisation on data from 1976 Q1–2006 Q4 for 18 OECD countries. The confidence bands are smaller in the deterministic case and as a result most of the characteristics affect the transmission mechanism in a statistically significant way. But only financial liberalisation is an important determinant of the transmission of commodity price shocks in the stochastic case. This suggests that results from IPVAR models should be interpreted with caution. 
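The deterministic interacted-coefficient idea can be shown in its simplest form: a panel AR(1) whose persistence is a linear function of an observed country characteristic, estimated by pooled OLS with an interaction term. All numbers below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 18, 120                      # countries x quarters, as in the paper's panel
z = rng.uniform(0, 1, N)            # observable characteristic (e.g. a liberalisation index)
b0, b1 = 0.3, 0.4                   # persistence rises with z: rho_i = b0 + b1 * z_i

# Simulate an interacted panel AR(1): y_it = (b0 + b1 * z_i) * y_{i,t-1} + e_it
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = (b0 + b1 * z) * y[:, t - 1] + rng.standard_normal(N)

# Stack the panel and estimate by pooled OLS with an interaction regressor
ylag = y[:, :-1].ravel()
ycur = y[:, 1:].ravel()
zrep = np.repeat(z, T - 1)
X = np.column_stack([ylag, ylag * zrep])
coef, *_ = np.linalg.lstsq(X, ycur, rcond=None)
```

The interaction coefficient recovers how the dynamics vary with the characteristic; the stochastic (varying-coefficient Bayesian) generalisation replaces this deterministic function with a random one.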
Keywords:  Bayesian panel VAR; commodity price shocks 
JEL:  C33 E30 
Date:  2016–01–08 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:0578&r=ets 
By:  Domit, Sílvia (Bank of England); Monti, Francesca (Bank of England); Sokol, Andrej (Bank of England) 
Abstract:  We estimate a Bayesian VAR analogue to the Bank of England’s DSGE model (COMPASS) and assess their relative performance in forecasting GDP growth and CPI inflation in real time between 2000 and 2012. We find that the BVAR outperformed COMPASS when forecasting both GDP and its expenditure components. In contrast, the performance of these models was similar when forecasting CPI. We also find that, despite underpredicting inflation at most forecast horizons, the BVAR density forecasts outperformed those of COMPASS. Both models overpredicted GDP growth at all forecast horizons, but the BVAR outperformed COMPASS at forecast horizons up to one year ahead. The BVAR’s point and density forecast performance is also comparable to that of a Bank of England in-house statistical suite for both GDP and CPI inflation and to the Inflation Report projections. Our results are broadly consistent with the findings of similar studies for other advanced economies. 
Keywords:  Forecasting; Bayesian VARs; macro-modelling 
JEL:  C53 E12 E17 
Date:  2016–01–25 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:0583&r=ets 
By:  Giuseppe Cavaliere (Università di Bologna); Iliyan Georgiev (Università di Bologna); Robert Taylor (University of Essex) 
Abstract:  The contribution of this paper is twofold. First, we derive the asymptotic null distribution of the familiar augmented Dickey-Fuller [ADF] statistics in the case where the shocks follow a linear process driven by infinite variance innovations. We show that these distributions are free of serial correlation nuisance parameters but depend on the tail index of the infinite variance process. These distributions are shown to coincide with the corresponding results for the case where the shocks follow a finite autoregression, provided the lag length in the ADF regression satisfies the same o(T^{1/3}) rate condition as is required in the finite variance case. In addition, we establish the rates of consistency and (where they exist) the asymptotic distributions of the ordinary least squares sieve estimates from the ADF regression. Given the dependence of their null distributions on the unknown tail index, our second contribution is to explore sieve wild bootstrap implementations of the ADF tests. Under the assumption of symmetry, we demonstrate the asymptotic validity (bootstrap consistency) of the wild bootstrap ADF tests. This is done by establishing that (conditional on the data) the wild bootstrap ADF statistics attain the same limiting distribution as that of the original ADF statistics taken conditional on the magnitude of the innovations. 
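The sieve wild bootstrap scheme can be sketched as: fit the ADF sieve regression, flip its residuals with Rademacher draws, rebuild the differences through the estimated sieve filter, cumulate to impose the unit root, and recompute the statistic. The no-deterministics specification, fixed lag order, and small replication count below are simplifying assumptions:

```python
import numpy as np

def adf_stat(y, p):
    """ADF t-statistic for rho in dy_t = rho*y_{t-1} + sum_j phi_j*dy_{t-j} + u_t
    (no deterministic terms, for simplicity)."""
    dy = np.diff(y)
    Y = dy[p:]
    cols = [y[p:-1]] + [dy[p - j:len(dy) - j] for j in range(1, p + 1)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    u = Y - X @ beta
    s2 = (u @ u) / (len(Y) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se, beta, u

def wild_bootstrap_pvalue(y, p, B, rng):
    """Sieve wild bootstrap p-value for the (left-tailed) ADF unit-root test."""
    stat, beta, u = adf_stat(y, p)
    phi = beta[1:]                                        # sieve coefficients
    hits = 0
    for _ in range(B):
        ustar = u * rng.choice([-1.0, 1.0], size=len(u))  # Rademacher flips
        dystar = np.zeros(p + len(u))
        for t in range(p, len(dystar)):
            dystar[t] = phi @ dystar[t - p:t][::-1] + ustar[t - p]
        ystar = np.cumsum(dystar)                         # unit root imposed
        if adf_stat(ystar, p)[0] <= stat:
            hits += 1
    return (1 + hits) / (B + 1)

rng = np.random.default_rng(6)
y_stat = np.zeros(400)
for t in range(1, 400):                 # stationary AR(1): test should reject
    y_stat[t] = 0.3 * y_stat[t - 1] + rng.standard_normal()
pval = wild_bootstrap_pvalue(y_stat, 2, 99, rng)
```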
Keywords:  Bootstrap, Unit roots, Sieve autoregression, Infinite variance, Time Series 
Date:  2016 
URL:  http://d.repec.org/n?u=RePEc:bot:quadip:wpaper:130&r=ets 
By:  Harris, David; Leybourne, Stephen J; Taylor, A M Robert 
Abstract:  In this paper we consider the problem of testing for the cointegration rank of a vector autoregressive process in the case where a trend break may potentially be present in the data. It is known that unmodelled trend breaks can result in tests which are incorrectly sized under the null hypothesis and inconsistent under the alternative hypothesis. Extant procedures in this literature have attempted to solve this inference problem but require the practitioner to either assume that the trend break date is known or to assume that any trend break cannot occur under the cointegration rank null hypothesis being tested. These procedures also assume the autoregressive lag length is known to the practitioner. All of these assumptions would seem unreasonable in practice. Moreover, in each of these strands of the literature there is also a presumption in calculating the tests that a trend break is known to have happened. This can lead to a substantial loss in finite sample power in the case where a trend break does not in fact occur. Using information criterion-based methods both to select the autoregressive lag order and to choose between the trend break and no trend break models, with a consistent estimate of the break fraction used in the former, we develop a number of procedures which deliver asymptotically correctly sized and consistent tests of the cointegration rank regardless of whether a trend break is present in the data or not. By selecting the no break model when no trend break is present, these procedures also avoid the potentially large power losses associated with the extant procedures in such cases. 
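The break/no-break choice via information criteria can be shown in a stripped-down setting: a linear trend with a possible slope shift, the break date estimated by least squares, and BIC deciding between the two models. This sketch ignores the VAR and cointegration machinery of the paper entirely; all tuning values are illustrative:

```python
import numpy as np

def bic(ssr, T, k):
    return T * np.log(ssr / T) + k * np.log(T)

def select_trend_break(y, trim=0.15):
    """Choose between linear-trend and broken-trend models by BIC,
    estimating the break date by least squares over a trimmed range."""
    T = len(y)
    t = np.arange(T, dtype=float)
    # No-break model: y_t = a + b*t + e_t
    X0 = np.column_stack([np.ones(T), t])
    ssr0 = np.sum((y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]) ** 2)
    # Break model: extra slope shift (t - Tb)*1{t > Tb}, Tb chosen by min SSR
    best = (np.inf, None)
    for Tb in range(int(trim * T), int((1 - trim) * T)):
        shift = np.where(t > Tb, t - Tb, 0.0)
        X1 = np.column_stack([np.ones(T), t, shift])
        ssr1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
        if ssr1 < best[0]:
            best = (ssr1, Tb)
    ssr1, Tb_hat = best
    use_break = bic(ssr1, T, 3) < bic(ssr0, T, 2)
    return use_break, Tb_hat

rng = np.random.default_rng(7)
T = 200
t = np.arange(T, dtype=float)
y_break = 0.1 * t + 0.3 * np.clip(t - 120, 0, None) + rng.standard_normal(T)
use_break, Tb_hat = select_trend_break(y_break)
```

When no break is present, the same rule tends to pick the no-break model, which is how the procedures above avoid the power loss of always imposing a break.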
Keywords:  Cointegration rank; vector autoregression; error-correction model; trend break; break point estimation; information criteria 
Date:  2016–01 
URL:  http://d.repec.org/n?u=RePEc:esy:uefcwp:15847&r=ets 
By:  Athanasopoulos, George (Monash University); Poskitt, Don (Monash University); Vahid, Farshid (Monash University); Yao, Wenying (School of Business and Economics, University of Tasmania) 
Abstract:  This article studies error correction vector autoregressive moving average (EC-VARMA) models. A complete procedure for identifying and estimating EC-VARMA models is proposed. The cointegrating rank is estimated in the first stage using an extension of the nonparametric method of Poskitt (2000). Then, the structure of the VARMA model for variables in levels is identified using the scalar component model (SCM) methodology developed in Athanasopoulos and Vahid (2008), which leads to a uniquely identifiable VARMA model. In the last stage, the VARMA model is estimated in its error correction form. Monte Carlo simulation is conducted using a 3-dimensional VARMA(1,1) DGP with cointegrating rank 1, in order to evaluate the forecasting performance of the EC-VARMA models. The algorithm is further illustrated using an empirical example of the term structure of U.S. interest rates. The results reveal that the out-of-sample forecasts of the EC-VARMA model are superior to those produced by error correction vector autoregressions (VARs) of finite order, especially at short horizons. 
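Full EC-VARMA identification is involved, but the error-correction idea itself can be sketched for a single cointegrated pair: estimate the cointegrating relation first, then the adjustment coefficient (Engle–Granger-style two steps, not the paper's algorithm; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
T = 500
# Cointegrated pair: x is a random walk, y = x + stationary error
x = np.cumsum(rng.standard_normal(T))
y = x + 0.5 * rng.standard_normal(T)

# Step 1: estimate the cointegrating relation y = beta * x by OLS
beta = (x @ y) / (x @ x)
ect = y - beta * x                   # error-correction term

# Step 2: error-correction regression dy_t = alpha * ect_{t-1} + e_t
dy = np.diff(y)
alpha, *_ = np.linalg.lstsq(ect[:-1, None], dy, rcond=None)
```

Here beta is superconsistent and alpha is negative: y adjusts back toward the long-run relation, which is the mechanism the error correction form of the VARMA model exploits.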
Keywords:  cointegration, VARMA model, iterative OLS, scalar component model 
JEL:  C1 C32 C53 
Date:  2014–02–22 
URL:  http://d.repec.org/n?u=RePEc:tas:wpaper:17835&r=ets 
By:  Yao, Wenying (Tasmanian School of Business & Economics, University of Tasmania); Tian, Jing (Tasmanian School of Business & Economics, University of Tasmania) 
Abstract:  This paper examines the effect of adjusting for the intraday volatility pattern on jump detection. Using tests that identify the intraday timing of jumps, we show that before the adjustment, jumps in the financial market have a high probability of occurring concurrently with pre-scheduled economy-wide news announcements. We demonstrate that adjusting for the U-shaped volatility pattern prior to jump detection effectively removes most of the association between jumps and macroeconomic news announcements. We find empirical evidence that only news that comes with a large surprise can cause jumps in the market index after the volatility adjustment, while the effect of other types of news is largely absorbed through the continuous volatility channel. The FOMC meeting announcement is shown to have the highest association with jumps in the market both before and after the adjustment. 
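The role of the U-shaped intraday pattern can be illustrated by standardising returns with a cross-day estimate of the diurnal volatility before applying a simple threshold jump detector. All magnitudes below are illustrative, and the detector is a crude stand-in for the tests used in the paper:

```python
import numpy as np

rng = np.random.default_rng(9)
days, bins = 60, 78                  # e.g. 5-minute returns over a 6.5-hour session
# U-shaped diurnal volatility: high at the open and close, low at midday
u = np.linspace(-1, 1, bins)
diurnal = 1.0 + 1.5 * u ** 2
r = diurnal[None, :] * rng.standard_normal((days, bins)) * 0.001
# Plant one genuine jump at midday on day 10
r[10, bins // 2] += 0.01

# Estimate the diurnal pattern from cross-day averages of |r|, then standardise
pattern = np.mean(np.abs(r), axis=0)
pattern /= pattern.mean()
r_adj = r / pattern[None, :]

def flag_jumps(ret, c=4.0):
    """Flag returns exceeding c times a robust (median-based) scale estimate."""
    scale = 1.4826 * np.median(np.abs(ret))
    return np.abs(ret) > c * scale

raw_flags = flag_jumps(r)
adj_flags = flag_jumps(r_adj)
```

Without the adjustment the detector spuriously flags ordinary open/close returns as jumps; after standardising, the planted midday jump survives while most spurious flags disappear.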
Keywords:  volatility pattern, intraday jumps, news announcements, high frequency data 
JEL:  C58 C12 G14 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:tas:wpaper:22662&r=ets 