nep-ets New Economics Papers
on Econometric Time Series
Issue of 2016‒01‒29
eleven papers chosen by
Yong Yin
SUNY at Buffalo

  1. Maximum Likelihood Estimation of Time-Varying Loadings in High-Dimensional Factor Models By Jakob Guldbæk Mikkelsen; Eric Hillebrand; Giovanni Urga
  2. Brownian Bridges on Random Intervals By Matteo Ludovico Bedini; Rainer Buckdahn; Hans-Jürgen Engelbert
  3. Irreversibility of financial time series: a graph-theoretical approach By Lucas Lacasa; Ryan Flanagan
  4. Decomposition of Time Series Data of Stock Markets and its Implications for Prediction: An Application for the Indian Auto Sector By Jaydip Sen; Tamal Datta Chaudhuri
  5. Testing for Causality in Continuous Time Bayesian Network Models of High-Frequency Data By Jonas Hallgren; Timo Koski
  6. The varying coefficient Bayesian panel VAR model By Wieladek, Tomasz
  7. A Bayesian VAR benchmark for COMPASS By Domit, Sílvia; Monti, Francesca; Sokol, Andrej
  8. Unit root inference for non-stationary linear processes driven by infinite variance innovations By Giuseppe Cavaliere; Iliyan Georgiev; Robert Taylor
  9. Tests of the Co-integration Rank in VAR Models in the Presence of a Possible Break in Trend at an Unknown Point By Harris, David; Leybourne, Stephen J; Taylor, A M Robert
  10. Forecasting with EC-VARMA models By Athanasopoulos, George; Poskitt, Don; Vahid, Farshid; Yao, Wenying
  11. The role of intra-day volatility pattern in jump detection: empirical evidence on how financial markets respond to macroeconomic news announcements By Yao, Wenying; Tian, Jing

  1. By: Jakob Guldbæk Mikkelsen (Aarhus University and CREATES); Eric Hillebrand (Aarhus University and CREATES); Giovanni Urga (Cass Business School)
    Abstract: In this paper, we develop a maximum likelihood estimator of time-varying loadings in high-dimensional factor models. We specify the loadings to evolve as stationary vector autoregressions (VAR) and show that consistent estimates of the loadings parameters can be obtained by a two-step maximum likelihood estimation procedure. In the first step, principal components are extracted from the data to form factor estimates. In the second step, the parameters of the loadings VARs are estimated as a set of univariate regression models with time-varying coefficients. We document the finite-sample properties of the maximum likelihood estimator through an extensive simulation study and illustrate the empirical relevance of the time-varying loadings structure using a large quarterly dataset for the US economy.
    Keywords: High-dimensional factor models, dynamic factor loadings, maximum likelihood, principal components
    JEL: C33 C55 C13
    Date: 2015–12–15
    URL: http://d.repec.org/n?u=RePEc:aah:create:2015-61&r=ets
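    A rough sketch (Python) of the two-step procedure described above: step one extracts principal-components factor estimates, step two evaluates the Gaussian log-likelihood of a single time-varying loading that follows a stationary AR(1), via the Kalman filter. The single-factor simplification, the function names, and leaving the optimisation to the reader are all assumptions of this sketch, not the authors' code.

      import numpy as np

      def pca_factors(X, r):
          """Step 1: principal-components factor estimates (T x r)."""
          X = X - X.mean(axis=0)
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return np.sqrt(X.shape[0]) * U[:, :r]      # usual PC normalisation

      def tvp_loglik(y, f, phi, sig_e, sig_u):
          """Step 2: log-likelihood of y_t = lambda_t*f_t + e_t with
          lambda_t = phi*lambda_{t-1} + u_t (one factor for simplicity);
          maximise over (phi, sig_e, sig_u), e.g. with scipy.optimize."""
          a, P = 0.0, sig_u**2 / (1.0 - phi**2)      # stationary initial state
          ll = 0.0
          for t in range(len(y)):                    # prediction error decomposition
              v = y[t] - f[t] * a                    # one-step-ahead error
              F = f[t]**2 * P + sig_e**2             # and its variance
              ll += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
              K = P * f[t] / F                       # Kalman gain
              a = phi * (a + K * v)                  # filtered, then predicted
              P = phi**2 * (1.0 - K * f[t]) * P + sig_u**2
          return ll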
  2. By: Matteo Ludovico Bedini; Rainer Buckdahn; Hans-Jürgen Engelbert
    Abstract: The issue of giving an explicit description of the flow of information concerning the time of bankruptcy of a company (or a state) arriving on the market is tackled by defining a bridge process starting from zero and conditioned to be equal to zero when the default occurs. This makes it possible to capture some empirical facts about the behavior of financial markets: when the bridge process is away from zero, investors can be relatively sure that the default will not happen immediately. However, when the information process is close to zero, market agents should be aware of the risk of an imminent default. In this sense the bridge process leaks information concerning the default before it occurs. The objective of this first paper on Brownian bridges on stochastic intervals is to establish the basic properties of these processes.
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1601.01811&r=ets
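    A minimal simulation sketch (Python) of the information process described above: a Brownian bridge pinned at zero at a random time tau and equal to zero thereafter. The exponential law chosen for tau and the grid size are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      T, n = 5.0, 5000                    # horizon and grid size (illustrative)
      t = np.linspace(0.0, T, n + 1)
      dt = T / n

      tau = rng.exponential(1.0)          # random default time (assumed law)
      W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

      # beta_t = W_{t^tau} - ((t^tau)/tau)*W_tau: zero at tau and afterwards
      i_tau = min(np.searchsorted(t, tau), n)
      s = np.minimum(t, tau)
      beta = W[np.minimum(np.arange(n + 1), i_tau)] - (s / tau) * W[i_tau]
      # beta close to zero warns of imminent default; away from zero it does not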
  3. By: Lucas Lacasa; Ryan Flanagan
    Abstract: The relation between time series irreversibility and entropy production has recently been investigated in thermodynamic systems operating away from equilibrium. In this work we explore this concept in the context of financial time series. We use visibility algorithms to quantify, in graph-theoretical terms, the time irreversibility of 35 financial indices evolving over the period 1998-2012. We show that this metric is complementary to standard measures based on volatility and exploit it both to classify periods of financial stress and to rank companies accordingly. We then validate the approach by showing that a projection of financial years into principal components space, based on time irreversibility features, separates periods of financial stress from stable periods. Relations between irreversibility, efficiency and predictability are briefly discussed.
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1601.01980&r=ets
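    One standard variant (Python) of the kind of graph-theoretical irreversibility measure referred to above: build the directed horizontal visibility graph of the series and compare the forward (out-) and backward (in-) degree distributions with a Kullback-Leibler divergence. The exact estimator used in the paper may differ.

      import numpy as np
      from collections import Counter

      def hvg_degrees(x):
          """Directed HVG: i -> j (i < j) iff every x[k] strictly between
          them satisfies x[k] < min(x[i], x[j])."""
          n = len(x)
          k_out, k_in = np.zeros(n, int), np.zeros(n, int)
          for i in range(n):
              top = -np.inf                    # running max of intermediates
              for j in range(i + 1, n):
                  if x[j] > top:               # x[j] visible from x[i]
                      k_out[i] += 1
                      k_in[j] += 1
                  top = max(top, x[j])
                  if top >= x[i]:              # view from i is now blocked
                      break
          return k_out, k_in

      def kld_irreversibility(x):
          ko, ki = hvg_degrees(x)
          po, pi_ = Counter(ko), Counter(ki)
          n = len(x)
          # KLD restricted to degrees seen in both directions (a simple
          # regularisation; near zero for a statistically reversible series)
          return sum((c / n) * np.log((c / n) / (pi_[k] / n))
                     for k, c in po.items() if pi_[k] > 0)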
  4. By: Jaydip Sen; Tamal Datta Chaudhuri
    Abstract: With the rapid development of sophisticated algorithms for the statistical analysis of time series data, the research community has begun to devote considerable effort to the technical analysis of such data, and forecasting has likewise witnessed a paradigm shift in its approach. In this work, we use the time series of the index values of the Auto sector in India from January 2010 to December 2015 to gain a deeper understanding of the behavior of its three constituent components, namely the trend, the seasonal component, and the random component. Based on this structural analysis, we design three approaches for forecasting and compute their prediction accuracy using suitably chosen training and test data sets. The results clearly demonstrate the accuracy of our decomposition and the efficiency of our forecasting techniques, even in the presence of a dominant random component in the time series.
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1601.02407&r=ets
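    A minimal sketch (Python, statsmodels) of the trend/seasonal/random decomposition described above; the file name, column name and monthly period are placeholders rather than the paper's data.

      import pandas as pd
      from statsmodels.tsa.seasonal import seasonal_decompose

      # hypothetical input: monthly index values of the auto sector
      idx = pd.read_csv("auto_index.csv", index_col=0, parse_dates=True)["close"]
      parts = seasonal_decompose(idx, model="additive", period=12)
      trend, seasonal, random = parts.trend, parts.seasonal, parts.resid
      # one of several possible forecasting strategies: extrapolate the
      # trend component and add back the estimated seasonal cycle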
  5. By: Jonas Hallgren; Timo Koski
    Abstract: Continuous time Bayesian networks are investigated with a special focus on their ability to express causality. A framework is presented for doing inference in these networks. The central contributions are a representation of the intensity matrices for the networks and the introduction of a causality measure. A new model for high-frequency financial data is presented. It is calibrated to market data and, according to the new causality measure, it performs better than existing models.
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1601.06651&r=ets
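    The basic building block of a continuous time Bayesian network is a continuous-time Markov chain whose intensity matrix depends on the states of its parent nodes. The sketch below (Python) simulates a single node with a fixed intensity matrix; the matrix values are illustrative, and the paper's model and causality measure go well beyond this primitive.

      import numpy as np

      Q = np.array([[-0.8,  0.5,  0.3],    # off-diagonals: transition rates
                    [ 0.2, -0.6,  0.4],    # each row sums to zero
                    [ 0.1,  0.7, -0.8]])

      def simulate_ctmc(Q, x0, horizon, rng=np.random.default_rng(1)):
          t, x, path = 0.0, x0, [(0.0, x0)]
          while True:
              rate = -Q[x, x]
              t += rng.exponential(1.0 / rate)   # exponential holding time
              if t >= horizon:
                  return path
              p = Q[x].clip(min=0.0) / rate      # jump distribution
              x = int(rng.choice(len(Q), p=p))
              path.append((t, x))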
  6. By: Wieladek, Tomasz (Bank of England)
    Abstract: Interacted panel VAR (IPVAR) models allow coefficients to vary as a deterministic function of observable country characteristics. The varying coefficient Bayesian panel VAR generalises this to the stochastic case. As an application of this framework, I examine whether the impact of commodity price shocks on consumption and the CPI varies with the degree of exchange rate, financial, product and labour market liberalisation, using data from 1976 Q1 to 2006 Q4 for 18 OECD countries. The confidence bands are smaller in the deterministic case, and as a result most of the characteristics affect the transmission mechanism in a statistically significant way. In the stochastic case, however, only financial liberalisation is an important determinant of the transmission of commodity price shocks. This suggests that results from IPVAR models should be interpreted with caution.
    Keywords: Bayesian panel VAR; commodity price shocks
    JEL: C33 E30
    Date: 2016–01–08
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0578&r=ets
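    A stylised sketch (Python) of the deterministic IPVAR case that the abstract contrasts with its stochastic generalisation: each country's VAR(1) coefficient matrix is a linear function of an observed characteristic, estimated by pooled least squares on interaction terms. Names and dimensions are illustrative.

      import numpy as np

      def ipvar_ols(panels, z):
          """panels: list of (T x k) country data arrays; z: (N,) observed
          country characteristics. Fits y_t = (B0 + z_i*B1) y_{t-1} + e_t
          jointly across countries i = 1..N."""
          X_rows, y_rows = [], []
          for y, zi in zip(panels, z):
              lag = y[:-1]
              X_rows.append(np.hstack([lag, zi * lag]))   # base + interaction
              y_rows.append(y[1:])
          X, Y = np.vstack(X_rows), np.vstack(y_rows)
          coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
          k = panels[0].shape[1]
          return coef[:k].T, coef[k:].T                   # B0, B1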
  7. By: Domit, Sílvia (Bank of England); Monti, Francesca (Bank of England); Sokol, Andrej (Bank of England)
    Abstract: We estimate a Bayesian VAR analogue to the Bank of England’s DSGE model (COMPASS) and assess their relative performance in forecasting GDP growth and CPI inflation in real time between 2000 and 2012. We find that the BVAR outperformed COMPASS when forecasting both GDP and its expenditure components. In contrast, the performance of these models was similar when forecasting CPI. We also find that, despite underpredicting inflation at most forecast horizons, the BVAR density forecasts outperformed those of COMPASS. Both models overpredicted GDP growth at all forecast horizons, but the BVAR outperformed COMPASS at forecast horizons up to one year ahead. The BVAR’s point and density forecast performance is also comparable to that of a Bank of England in-house statistical suite for both GDP and CPI inflation and to the Inflation Report projections. Our results are broadly consistent with the findings of similar studies for other advanced economies.
    Keywords: Forecasting; Bayesian VARs; macro-modelling
    JEL: C53 E12 E17
    Date: 2016–01–25
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0583&r=ets
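    A stylised stand-in (Python) for a BVAR of the kind the paper estimates: posterior-mean coefficients under a simple Minnesota-style shrinkage prior with a known unit error variance. The lag length and tightness hyperparameter are illustrative, not the paper's settings.

      import numpy as np

      def bvar_posterior_mean(Y, p=4, lam=0.2):
          """Y: (T x k) data. Posterior-mean coefficients (1+k*p x k) of a
          VAR(p) with constant, shrunk toward a random walk: lag-l blocks
          get prior precision (l/lam)^2, the own first lag a prior mean of 1."""
          T, k = Y.shape
          X = np.hstack([np.ones((T - p, 1))] +
                        [Y[p - l:T - l] for l in range(1, p + 1)])
          y = Y[p:]
          B0 = np.zeros((1 + k * p, k))             # prior mean: random walk
          B0[1:1 + k, :] = np.eye(k)
          prec = [1e-4] + [(l / lam)**2             # loose on the constant,
                           for l in range(1, p + 1) # tighter on longer lags
                           for _ in range(k)]
          V0inv = np.diag(prec)
          return np.linalg.solve(X.T @ X + V0inv, X.T @ y + V0inv @ B0)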
  8. By: Giuseppe Cavaliere (Università di Bologna); Iliyan Georgiev (Università di Bologna); Robert Taylor (University of Essex)
    Abstract: The contribution of this paper is two-fold. First, we derive the asymptotic null distribution of the familiar augmented Dickey-Fuller [ADF] statistics in the case where the shocks follow a linear process driven by infinite variance innovations. We show that these distributions are free of serial correlation nuisance parameters but depend on the tail index of the infinite variance process. These distributions are shown to coincide with the corresponding results for the case where the shocks follow a finite autoregression, provided the lag length in the ADF regression satisfies the same o(T^{1/3}) rate condition as is required in the finite variance case. In addition, we establish the rates of consistency and (where they exist) the asymptotic distributions of the ordinary least squares sieve estimates from the ADF regression. Given the dependence of their null distributions on the unknown tail index, our second contribution is to explore sieve wild bootstrap implementations of the ADF tests. Under the assumption of symmetry, we demonstrate the asymptotic validity (bootstrap consistency) of the wild bootstrap ADF tests. This is done by establishing that (conditional on the data) the wild bootstrap ADF statistics attain the same limiting distribution as that of the original ADF statistics taken conditional on the magnitude of the innovations.
    Keywords: Bootstrap, Unit roots, Sieve autoregression, Infinite variance, Time Series
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:wpaper:130&r=ets
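    A sketch (Python) in the spirit of the sieve wild bootstrap described above: fit a sieve autoregression to the differenced data, sign-randomise its residuals with Rademacher weights (which preserves their magnitudes), rebuild the sample and recompute the ADF statistic. The lag choice and deterministics here are illustrative, not the paper's exact prescriptions.

      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      def wild_bootstrap_adf(y, lags, B=999, rng=np.random.default_rng(0)):
          stat = adfuller(y, maxlag=lags, autolag=None)[0]
          d = np.diff(y)                        # impose the unit root null
          # sieve step: AR(lags) fit to the differences
          X = np.column_stack([d[lags - i - 1:len(d) - i - 1]
                               for i in range(lags)])
          b, *_ = np.linalg.lstsq(X, d[lags:], rcond=None)
          e = d[lags:] - X @ b
          boot = np.empty(B)
          for j in range(B):
              eb = e * rng.choice([-1.0, 1.0], size=len(e))  # Rademacher
              db = np.zeros(len(d))             # rebuild the differences
              db[:lags] = d[:lags]
              for t in range(lags, len(db)):
                  db[t] = db[t - lags:t][::-1] @ b + eb[t - lags]
              yb = np.concatenate([[y[0]], y[0] + np.cumsum(db)])
              boot[j] = adfuller(yb, maxlag=lags, autolag=None)[0]
          return stat, (boot <= stat).mean()    # left-tailed bootstrap p-value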
  9. By: Harris, David; Leybourne, Stephen J; Taylor, A M Robert
    Abstract: In this paper we consider the problem of testing for the co-integration rank of a vector autoregressive process in the case where a trend break may potentially be present in the data. It is known that un-modelled trend breaks can result in tests which are incorrectly sized under the null hypothesis and inconsistent under the alternative hypothesis. Extant procedures in this literature have attempted to solve this inference problem, but they require the practitioner either to assume that the trend break date is known or to assume that any trend break cannot occur under the co-integration rank null hypothesis being tested. These procedures also assume that the autoregressive lag length is known to the practitioner. All of these assumptions would seem unreasonable in practice. Moreover, in each of these strands of the literature there is also a presumption in calculating the tests that a trend break is known to have happened, which can lead to a substantial loss in finite sample power in the case where a trend break does not in fact occur. Using information criteria both to select the autoregressive lag order and to choose between the trend break and no trend break models, with a consistent estimate of the break fraction used in the former case, we develop a number of procedures which deliver asymptotically correctly sized and consistent tests of the co-integration rank regardless of whether a trend break is present in the data or not. By selecting the no break model when no trend break is present, these procedures also avoid the potentially large power losses associated with the extant procedures in such cases.
    Keywords: Co-integration rank; vector autoregression; error-correction model; trend break; break point estimation; information criteria
    Date: 2016–01
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:15847&r=ets
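    A sketch (Python, statsmodels) of the two model-selection ingredients combined above, without the trend-break machinery the paper develops: choose the lag order by an information criterion, then read the co-integration rank off the standard Johansen trace test. The 5% level and BIC are illustrative choices.

      import numpy as np
      from statsmodels.tsa.api import VAR
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      def rank_with_ic_lags(Y, max_lags=8):
          p = VAR(Y).select_order(maxlags=max_lags).bic   # BIC lag choice
          res = coint_johansen(Y, det_order=1, k_ar_diff=max(p - 1, 0))
          # rank = first r whose trace statistic is below the 5% critical value
          below = res.lr1 < res.cvt[:, 1]
          return int(np.argmax(below)) if below.any() else Y.shape[1]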
  10. By: Athanasopoulos, George (Monash University); Poskitt, Don (Monash University); Vahid, Farshid (Monash University); Yao, Wenying (School of Business and Economics, University of Tasmania)
    Abstract: This article studies error correction vector autoregressive moving average (EC-VARMA) models. A complete procedure for identifying and estimating EC-VARMA models is proposed. The cointegrating rank is estimated in the first stage using an extension of the non-parametric method of Poskitt (2000). Then, the structure of the VARMA model for the variables in levels is identified using the scalar component model (SCM) methodology developed in Athanasopoulos and Vahid (2008), which leads to a uniquely identifiable VARMA model. In the last stage, the VARMA model is estimated in its error correction form. A Monte Carlo simulation is conducted using a 3-dimensional VARMA(1,1) DGP with cointegrating rank 1 in order to evaluate the forecasting performance of EC-VARMA models. The algorithm is further illustrated using an empirical example of the term structure of U.S. interest rates. The results reveal that the out-of-sample forecasts of the EC-VARMA model are superior to those produced by error correction vector autoregressions (VARs) of finite order, especially at short horizons.
    Keywords: cointegration, VARMA model, iterative OLS, scalar component model
    JEL: C1 C32 C53
    Date: 2014–02–22
    URL: http://d.repec.org/n?u=RePEc:tas:wpaper:17835&r=ets
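    A sketch (Python, statsmodels) of the finite-order error correction VAR benchmark that the EC-VARMA forecasts are compared against; the paper's EC-VARMA identification via scalar components has no off-the-shelf implementation. The generated data are an illustrative stand-in for the term-structure example.

      import numpy as np
      from statsmodels.tsa.vector_ar.vecm import VECM

      # stand-in for three interest rates: one common stochastic trend plus
      # stationary spreads, i.e. co-integration rank 2
      rng = np.random.default_rng(2)
      level = np.cumsum(rng.normal(size=300))
      rates = level[:, None] + rng.normal(scale=0.3, size=(300, 3))

      fit = VECM(rates, k_ar_diff=1, coint_rank=2, deterministic="co").fit()
      print(fit.predict(steps=4))    # out-of-sample forecasts, 4 steps ahead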
  11. By: Yao, Wenying (Tasmanian School of Business & Economics, University of Tasmania); Tian, Jing (Tasmanian School of Business & Economics, University of Tasmania)
    Abstract: This paper examines the effect of adjusting for the intra-day volatility pattern on jump detection. Using tests that identify the intra-day timing of jumps, we show that, before the adjustment, jumps in the financial market have a high probability of occurring concurrently with pre-scheduled economy-wide news announcements. We demonstrate that adjusting for the U-shaped volatility pattern prior to jump detection effectively removes most of the association between jumps and macroeconomic news announcements. We find empirical evidence that only news that comes with a large surprise can cause jumps in the market index after the volatility adjustment, while the effect of other types of news is largely absorbed through the continuous volatility channel. The FOMC meeting announcement is shown to have the highest association with jumps in the market both before and after the adjustment.
    Keywords: volatility pattern, intra-day jumps, news announcements, high frequency data
    JEL: C58 C12 G14
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:tas:wpaper:22662&r=ets
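    A sketch (Python) of the adjustment studied above: divide intra-day returns by an estimated time-of-day volatility factor before applying a Lee-Mykland-type jump statistic. The sampling frequency, window length and the simple diurnal estimator are illustrative assumptions, not the paper's exact procedure.

      import numpy as np

      def tod_adjusted_jump_stats(returns, K=78):
          """returns: (days x bins) matrix of intra-day log returns."""
          # diurnal factor: mean absolute return per intra-day bin,
          # normalised to average one (a simple U-shape estimate)
          tod = np.abs(returns).mean(axis=0)
          tod /= tod.mean()
          r = (returns / tod).ravel()          # pattern-adjusted returns
          stats = np.full(len(r), np.nan)
          for t in range(K, len(r)):
              # local volatility from a backward bipower-variation window
              bv = np.mean(np.abs(r[t - K:t - 1]) * np.abs(r[t - K + 1:t]))
              sigma = np.sqrt(bv * np.pi / 2.0)
              stats[t] = r[t] / sigma          # large |stat| flags a jump
          return stats.reshape(returns.shape)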

This nep-ets issue is ©2016 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.