nep-ets New Economics Papers
on Econometric Time Series
Issue of 2014‒12‒13
eight papers chosen by
Yong Yin
SUNY at Buffalo

  1. Modelling of dependence in high-dimensional financial time series by cluster-derived canonical vines By David Walsh-Jones; Daniel Jones; Christoph Reisinger
  2. A Matlab program and user's guide for the fractionally cointegrated VAR model By Morten Ørregaard Nielsen; Michał Ksawery Popiel
  3. Choice of Spectral Density Estimator in Ng-Perron Test: Comparative Analysis By Malik, Muhammad Irfan; Rehman, Atiq-ur-
  4. Vector Autoregressions with Parsimoniously Time Varying Parameters and an Application to Monetary Policy By Laurent Callot; Johannes Tang Kristensen
  5. Approximate Bayesian Computation in State Space Models By Gael M. Martin; Brendan P.M. McCabe; Worapree Maneesoonthorn; Christian P. Robert
  6. On the identification of fractionally cointegrated VAR models with the F(d) condition By Paolo Santucci de Magistris; Federico Carlini
  7. Dealing with unobservable common trends in small samples: a panel cointegration approach By Francesca Di Iorio; Stefano Fachin
  8. Forecasting the Volatility of the Dow Jones Islamic Stock Market Index: Long Memory vs. Regime Switching By Ben Nasr, Adnen; Lux, Thomas; Ajmi, Ahdi Noomen; Gupta, Rangan

  1. By: David Walsh-Jones; Daniel Jones; Christoph Reisinger
    Abstract: We extend existing models in the financial literature by introducing a cluster-derived canonical vine (CDCV) copula model for capturing high dimensional dependence between financial time series. This model utilises a simplified market-sector vine copula framework similar to those introduced by Heinen and Valdesogo (2008) and Brechmann and Czado (2013), which can be applied by conditioning asset time series on a market-sector hierarchy of indexes. While this has been shown by the aforementioned authors to control the excessive parameterisation of vine copulas in high dimensions, their models have relied on the provision of externally sourced market and sector indexes, limiting their wider applicability due to the imposition of restrictions on the number and composition of such sectors. By implementing the CDCV model, we demonstrate that such reliance on external indexes is redundant as we can achieve equivalent or improved performance by deriving a hierarchy of indexes directly from a clustering of the asset time series, thus abstracting the modelling process from the underlying data.
    Date: 2014–11
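The clustering step described in the abstract (deriving a hierarchy of indexes directly from the asset series) can be sketched in Python. The data, cluster count, distance metric, and within-cluster averaging rule below are illustrative assumptions, not the authors' CDCV implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Simulated daily returns for 12 assets in 3 latent sectors (illustrative data)
T, k = 500, 12
sector = np.repeat(np.arange(3), 4)
common = rng.standard_normal((T, 3))
returns = 0.7 * common[:, sector] + rng.standard_normal((T, k))

# Correlation-based distance: d_ij = sqrt(2 * (1 - rho_ij))
rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(np.clip(2 * (1 - rho), 0, None))
cond = dist[np.triu_indices(k, 1)]  # condensed form expected by linkage

labels = fcluster(linkage(cond, method="average"), t=3, criterion="maxclust")

# Derive a "sector index" as the mean return within each cluster, playing
# the role of the externally sourced indexes mentioned in the abstract
indexes = np.column_stack([returns[:, labels == c].mean(axis=1)
                           for c in np.unique(labels)])
print(indexes.shape)
```

The derived index columns could then condition the asset series in a market-sector vine, as the abstract describes.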
  2. By: Morten Ørregaard Nielsen (Queen's University and CREATES); Michał Ksawery Popiel (Queen's University)
    Abstract: This manual describes the usage of the accompanying freely available Matlab program for estimation and testing in the fractionally cointegrated vector autoregressive (FCVAR) model. This program replaces an earlier Matlab program by Nielsen and Morin (2014), and although the present Matlab program is not compatible with the earlier one, we encourage use of the new program.
    Keywords: cofractional process, cointegration rank, computer program, fractional autoregressive model, fractional cointegration, fractional unit root, Matlab, VAR model
    JEL: C22 C32
    Date: 2014–10
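The authors' program is written in Matlab; as a language-neutral illustration of the fractional difference operator (1 - L)^d at the core of the FCVAR model, a minimal Python sketch (truncated binomial expansion, simulated data):

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference operator (1 - L)^d via its
    binomial expansion, truncated at the sample length."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        # Recursion for the binomial weights: w_k = w_{k-1} * (k - 1 - d) / k
        w[k] = w[k - 1] * (k - 1 - d) / k
    # Convolve the weights with the series
    return np.array([w[: t + 1][::-1] @ x[: t + 1] for t in range(n)])

rng = np.random.default_rng(1)
eps = rng.standard_normal(300)
# Build an I(d) series by fractionally *integrating* white noise with d = 0.4,
# then check that differencing with the same d recovers the noise exactly
x = frac_diff(eps, -0.4)
recovered = frac_diff(x, 0.4)
print(np.allclose(recovered, eps))
```

The round trip is exact because truncated fractional integration and differencing are inverse lower-triangular convolutions.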
  3. By: Malik, Muhammad Irfan; Rehman, Atiq-ur-
    Abstract: Ng and Perron (2001) designed a unit root test that incorporates the properties of the DF-GLS and Phillips-Perron tests. Ng and Perron claim that the test performs exceptionally well, especially in the presence of a negative moving average. However, the performance of the test depends heavily on the choice of the spectral density estimator used in its construction. Various estimators of the spectral density are available in the literature, and they have a crucial impact on the output of the test, yet there is no clarity on which of them yields optimal size and power properties. This study evaluates the performance of the Ng-Perron test for different choices of spectral density estimator in the presence of negative and positive moving averages, using Monte Carlo simulations. The results for large samples show that: (a) in the presence of a positive moving average, the test with a kernel-based estimator gives good effective power and no size distortion; (b) in the presence of a negative moving average, the autoregressive estimator gives better effective power, but huge size distortions are observed for several specifications of the data generating process.
    Keywords: Ng-Perron test, Monte Carlo, Spectral Density, Unit Root Testing
    JEL: C01 C15 C63
    Date: 2014–11–17
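The two estimator families compared in the abstract can be sketched in Python; both estimate the long-run variance 2*pi*f(0) of the errors. The MA(1) design, bandwidth, and lag order below are illustrative choices, not the study's simulation settings:

```python
import numpy as np

def lrv_bartlett(u, bandwidth):
    """Kernel (Bartlett) estimator: weighted sum of sample autocovariances."""
    T = len(u)
    u = u - u.mean()
    gamma = lambda j: (u[j:] @ u[: T - j]) / T
    s = gamma(0)
    for j in range(1, bandwidth + 1):
        s += 2 * (1 - j / (bandwidth + 1)) * gamma(j)
    return s

def lrv_ar(u, p):
    """Autoregressive estimator: fit an AR(p) by OLS and use
    sigma^2 / (1 - sum(phi))^2."""
    T = len(u)
    u = u - u.mean()
    X = np.column_stack([u[p - i - 1: T - i - 1] for i in range(p)])
    y = u[p:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ phi
    sigma2 = resid @ resid / (T - p)
    return sigma2 / (1 - phi.sum()) ** 2

rng = np.random.default_rng(2)
# MA(1) errors u_t = e_t + theta * e_{t-1}; true long-run variance = (1 + theta)^2
theta = 0.5
e = rng.standard_normal(20001)
u = e[1:] + theta * e[:-1]
print(lrv_bartlett(u, 20), lrv_ar(u, 8))  # both should be near (1.5)^2 = 2.25
```

With a negative theta the denominator (1 - sum(phi))^2 becomes small, which is the scenario where the estimator choice matters most for the Ng-Perron statistics.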
  4. By: Laurent Callot (VU University Amsterdam); Johannes Tang Kristensen (University of Southern Denmark, Denmark)
    Abstract: This paper proposes a parsimoniously time varying parameter vector autoregressive model (with exogenous variables, VARX) and studies the properties of the Lasso and adaptive Lasso as estimators of this model. The parameters of the model are assumed to follow parsimonious random walks, where parsimony stems from the assumption that increments to the parameters have a non-zero probability of being exactly equal to zero. By varying the degree of parsimony our model can accommodate constant parameters, an unknown number of structural breaks, or parameters with a high degree of variation. We characterize the finite sample properties of the Lasso by deriving upper bounds on the estimation and prediction errors that are valid with high probability; and asymptotically we show that these bounds tend to zero with probability tending to one if the number of non-zero increments grows more slowly than √T. By simulation experiments we investigate the properties of the Lasso and the adaptive Lasso in settings where the parameters are stable, experience structural breaks, or follow a parsimonious random walk. We use our model to investigate the monetary policy response to inflation and business cycle fluctuations in the US by estimating a parsimoniously time varying parameter Taylor rule. We document substantial changes in the policy response of the Fed in the 1980s and since 2008.
    Keywords: Parsimony, time varying parameters, VAR, structural break, Lasso
    JEL: C01 C13 C32 E52
    Date: 2014–11–07
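The reparameterization behind the approach (estimating sparse parameter increments with the Lasso) can be sketched in a univariate toy example. The break location, penalty level, and scikit-learn estimator below are illustrative assumptions, not the paper's VARX specification or tuning:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
T = 200
x = rng.standard_normal(T)

# Parsimonious random walk: increments are exactly zero except for a
# single break, giving a piecewise-constant coefficient path
beta = np.where(np.arange(T) < 100, 1.0, 3.0)
y = beta * x + 0.3 * rng.standard_normal(T)

# Reparameterize in increments d_s (beta_t = sum_{s<=t} d_s):
# column s of the design is x_t * 1{t >= s}, so the Lasso estimate of d
# is sparse exactly when the parameter path is parsimonious
Z = np.tril(np.ones((T, T))) * x[:, None]
fit = Lasso(alpha=0.05, fit_intercept=False, max_iter=50000).fit(Z, y)
beta_hat = np.cumsum(fit.coef_)

print(np.count_nonzero(fit.coef_), beta_hat[50], beta_hat[150])
```

Most estimated increments are zero, and the cumulated path jumps near the true break, which is the constant-versus-break flexibility the abstract describes.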
  5. By: Gael M. Martin; Brendan P.M. McCabe; Worapree Maneesoonthorn; Christian P. Robert
    Abstract: A new approach to inference in state space models is proposed, based on approximate Bayesian computation (ABC). ABC avoids evaluation of the likelihood function by matching observed summary statistics with statistics computed from data simulated from the true process, with exact inference feasible only if the statistics are sufficient. With finite sample sufficiency unattainable in the state space setting, we seek asymptotic sufficiency via the maximum likelihood estimator (MLE) of the parameters of an auxiliary model. We prove that this auxiliary model-based approach achieves Bayesian consistency, and that - in a precise limiting sense - the proximity to (asymptotic) sufficiency yielded by the MLE is replicated by the score. In multiple parameter settings a separate treatment of scalar parameters, based on integrated likelihood techniques, is advocated as a way of avoiding the curse of dimensionality. Some attention is given to a structure in which the state variable is driven by a continuous time process, with exact inference typically infeasible in this case as a result of intractable transitions. The ABC method is demonstrated using the unscented Kalman filter as a fast and simple way of producing an approximation in this setting, with a stochastic volatility model for financial returns used for illustration.
    Keywords: Likelihood-free methods, latent diffusion models, linear Gaussian state space models, asymptotic sufficiency, unscented Kalman filter, stochastic volatility.
    JEL: C11 C22 C58
    Date: 2014
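The basic ABC rejection mechanism the paper builds on can be sketched in Python. The example below uses a toy i.i.d. Gaussian scale model with a sufficient summary statistic, not the paper's auxiliary-model or score-based statistics; prior, tolerance, and sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Observed data from the "true" process: i.i.d. N(0, sigma^2); the sample
# variance is a sufficient statistic here, so plain ABC rejection works
sigma_true = 1.5
y_obs = sigma_true * rng.standard_normal(100)
s_obs = y_obs.var()

# ABC rejection: draw sigma from the prior, simulate data from the model,
# keep draws whose summary statistic is close to the observed one
n_draws, tol = 20000, 0.05
prior = rng.uniform(0.1, 3.0, n_draws)
sims = prior[:, None] * rng.standard_normal((n_draws, 100))
dist = np.abs(sims.var(axis=1) - s_obs)
accepted = prior[dist < tol]

print(len(accepted), accepted.mean())  # accepted draws approximate the posterior
```

In the state space setting no such sufficient statistic exists, which is why the paper substitutes the MLE (or score) of an auxiliary model as the summary.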
  6. By: Paolo Santucci de Magistris (Aarhus University and CREATES); Federico Carlini (Aarhus University and CREATES)
    Abstract: This paper discusses identification problems in the fractionally cointegrated system of Johansen (2008) and Johansen and Nielsen (2012). It is shown that several equivalent re-parameterizations of the model associated with different fractional integration and cointegration parameters may exist for any choice of the lag length, even when the true cointegration rank is known. The properties of these multiple non-identified models are studied and a necessary and sufficient condition for the identification of the fractional parameters of the system is provided. The condition is named F(d) and is a generalization to the fractional case of the I(1) condition in the VECM model. The assessment of the F(d) condition in the empirical analysis is relevant for the determination of the fractional parameters as well as the number of lags. The paper also illustrates the indeterminacy between the cointegration rank and the lag length. It is proved that, under certain restrictions on the fractional parameters, the model with rank zero and k lags is equivalent to the model with full rank and k - 1 lags. This precludes the possibility of testing for the nullity of the cointegration rank.
    Keywords: Fractional Cointegration, Cofractional Model, Identification, Lag Selection
    JEL: C18 C32 C52
    Date: 2014–11–13
  7. By: Francesca Di Iorio (Università di Napoli Federico II); Stefano Fachin (Università di Roma "La Sapienza")
    Abstract: Non-stationary panel models allowing for unobservable common trends have recently become very popular. However, standard methods, based on factor extraction or on models augmented with cross-section averages, require large sample sizes that are not always available in practice. In these cases we propose the simple and robust alternative of augmenting the panel regression with common time dummies. The underlying assumption of additive effects can be tested by means of a panel cointegration test, with no need to estimate a general interactive effects model. An application to modelling labour productivity growth in the four major European economies (France, Germany, Italy and the UK) illustrates the method.
    Keywords: Common trends, Panel cointegration, TFP.
    JEL: C23 C15 E2
    Date: 2014–11
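The time-dummy augmentation proposed above has a convenient equivalent form: including a full set of time dummies in a pooled regression is numerically identical to demeaning each variable across units at every date, which removes an additive common trend exactly. A minimal Python sketch with simulated data (the panel dimensions and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 4, 40  # small panel, as in the paper's small-sample setting

# Unobservable common trend entering all units additively,
# plus a unit-specific non-stationary regressor
trend = np.cumsum(rng.standard_normal(T))
x = np.cumsum(rng.standard_normal((N, T)), axis=1)
y = 2.0 * x + trend + 0.5 * rng.standard_normal((N, T))

# Cross-section demeaning at each t wipes out the common trend,
# exactly as a full set of time dummies would
yd = y - y.mean(axis=0)
xd = x - x.mean(axis=0)
beta = (xd * yd).sum() / (xd * xd).sum()
print(beta)  # close to the true slope of 2.0
```

Under interactive (factor-loading) effects this cancellation fails, which is what the panel cointegration test on the residuals is meant to detect.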
  8. By: Ben Nasr, Adnen; Lux, Thomas; Ajmi, Ahdi Noomen; Gupta, Rangan
    Abstract: The financial crisis has fueled interest in alternatives to traditional asset classes that might be less affected by large market gyrations and thus provide for a less volatile development of a portfolio. One attempt at selecting stocks that are less prone to extreme risks is adherence to Islamic Sharia rules. In this light, we investigate the statistical properties of the Dow Jones Islamic Stock Market Index (DJIM) and explore its volatility dynamics using a number of up-to-date statistical models allowing for long memory and regime-switching dynamics. We find that the DJIM shares all the stylized facts of traditional asset classes, and estimation results and forecasting performance for various volatility models are also in line with prevalent findings in the literature. Overall, the relatively new Markov-switching multifractal model performs best under the majority of time horizons and loss criteria. Long-memory GARCH-type models always improve upon the short-memory GARCH specification, and additionally allowing for regime changes can further improve their performance.
    Keywords: Islamic finance, volatility dynamics, long memory, multifractals
    JEL: G15 G17 G23
    Date: 2014
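The short-memory GARCH(1,1) that serves as the benchmark in comparisons like the one above can be sketched in a few lines; the long-memory and Markov-switching multifractal extensions the paper favours are not shown here, and the parameter values below are illustrative:

```python
import numpy as np

def garch11_filter(r, omega, alpha, beta):
    """Run the GARCH(1,1) variance recursion
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}
    and return the in-sample conditional variance path
    plus the one-step-ahead volatility forecast."""
    h = np.empty(len(r) + 1)
    h[0] = r.var()  # initialize at the unconditional sample variance
    for t in range(len(r)):
        h[t + 1] = omega + alpha * r[t] ** 2 + beta * h[t]
    return h[:-1], h[-1]

rng = np.random.default_rng(6)
# Simulate returns from a GARCH(1,1) and filter them with the true parameters
omega, alpha, beta = 0.05, 0.08, 0.9
T = 1000
h_t, r = 1.0, np.empty(T)
for t in range(T):
    r[t] = np.sqrt(h_t) * rng.standard_normal()
    h_t = omega + alpha * r[t] ** 2 + beta * h_t
h_path, forecast = garch11_filter(r, omega, alpha, beta)
print(forecast, len(h_path))
```

Long-memory variants (e.g. FIGARCH) replace the geometric decay implied by this recursion with a hyperbolically decaying lag structure, which is the margin on which the paper reports forecast gains.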

This nep-ets issue is ©2014 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.