nep-ets New Economics Papers
on Econometric Time Series
Issue of 2014‒04‒18
eleven papers chosen by
Yong Yin
SUNY at Buffalo

  1. Forecasting with the Standardized Self-Perturbed Kalman Filter By Stefano Grassi; Nima Nonejad; Paolo Santucci de Magistris
  2. Bayesian DEJD model and detection of asymmetric jumps By Maciej Kostrzewski
  3. Adaptive forecasting in the presence of recent and ongoing structural change By Giraitis, Liudas; Kapetanios, George; Price, Simon
  4. Generalised density forecast combinations By Fawcett, Nicholas; Kapetanios, George; Mitchell, James; Price, Simon
  5. Forecasting financial volatility with combined QML and LAD-ARCH estimators of the GARCH model By Liam Cheung; John Galbraith
  6. Trend-Cycle Decomposition: Implications from an Exact Structural Identification By Mardi Dungey; Jan P. A. M. Jacobs; Jing Jian; Simon van Norden
  7. Finite-sample resampling-based combined hypothesis tests, with applications to serial correlation and predictability By Jean-Marie Dufour; Lynda Khalaf; Marcel Voia
  8. On the Power of Invariant Tests for Hypotheses on a Covariance Matrix By Preinerstorfer, David; Pötscher, Benedikt M.
  9. Modeling Covariance Breakdowns in Multivariate GARCH By Jin, Xin; Maheu, John M
  10. DSGE Priors for BVAR Models By Thomai Filippeli; Konstantinos Theodoridis
  11. Fat-tails in VAR Models By Ching-Wai (Jeremy) Chiu; Haroon Mumtaz; Gabor Pinter

  1. By: Stefano Grassi (University of Kent and CREATES); Nima Nonejad (Aarhus University and CREATES); Paolo Santucci de Magistris (Aarhus University and CREATES)
    Abstract: A modification of the self-perturbed Kalman filter of Park and Jun (1992) is proposed for the on-line estimation of models subject to parameter instability. The perturbation term in the updating equation of the state covariance matrix is weighted by the measurement error variance, thus avoiding the calibration of a design parameter. The standardization leads to a better tracking of the dynamics of the parameters compared to other on-line methods, especially as the level of noise increases. The proposed estimation method, coupled with dynamic model averaging and selection, is adopted to forecast the S&P500 realized volatility series with a time-varying parameter HAR model with exogenous variables.
    Keywords: TVP models, Self-Perturbed Kalman Filter, Dynamic Model Averaging, Dynamic Model Selection, Forecasting, Realized Variance
    JEL: C10 C11 C22 C80
    Date: 2014–04–07
    URL: http://d.repec.org/n?u=RePEc:aah:create:2014-12&r=ets
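The core of the modification can be sketched in a scalar state-space setting. The snippet below is a minimal illustration only: the simulated data, the intensity `kappa`, and the exact form of the perturbation are assumptions for exposition, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a scalar time-varying-parameter regression y_t = beta_t * x_t + e_t
# with a slowly drifting coefficient (illustrative settings)
T = 500
x = rng.normal(size=T)
beta = np.cumsum(0.05 * rng.normal(size=T))   # random-walk parameter drift
sigma_e2 = 0.25                               # measurement error variance
y = beta * x + np.sqrt(sigma_e2) * rng.normal(size=T)

def standardized_sp_kf(y, x, sigma_e2, kappa=0.1):
    """Scalar sketch of a self-perturbed Kalman filter: after each update,
    the state covariance P is inflated by the squared prediction error
    standardized by the measurement error variance, so the perturbation
    scale adapts to the noise level. `kappa` is an assumed intensity."""
    b, P = 0.0, 1.0
    est = np.empty(len(y))
    for t in range(len(y)):
        v = y[t] - b * x[t]                # prediction error
        F = P * x[t] ** 2 + sigma_e2       # prediction error variance
        K = P * x[t] / F                   # Kalman gain
        b += K * v                         # state update
        P = (1.0 - K * x[t]) * P           # covariance update
        P += kappa * v ** 2 / sigma_e2     # standardized self-perturbation
        est[t] = b
    return est

est = standardized_sp_kf(y, x, sigma_e2)
```

Because the perturbation keeps P from collapsing, the filter's gain stays large enough to follow the drifting coefficient, which is the tracking property the abstract emphasizes.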
  2. By: Maciej Kostrzewski
    Abstract: News might trigger jump arrivals in financial time series, and "bad" and "good" news appear to have distinct impacts. In this research, a double exponential jump distribution is applied to model downward and upward jumps, and a Bayesian double exponential jump-diffusion (DEJD) model is proposed. Theorems stated in the paper enable estimation of the model's parameters, detection of jumps and analysis of jump frequency. The methodology, founded upon the idea of latent variables, is illustrated with two empirical studies, employing both simulated and real-world data (the KGHM index).
    Date: 2014–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1404.2050&r=ets
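A double exponential jump-diffusion path is simple to simulate, which makes the asymmetric-jump idea concrete. The sketch below uses illustrative parameter values (drift, volatility, jump intensity, and the separate up/down exponential tail rates are all assumptions, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a double exponential jump-diffusion log-price path: Gaussian
# diffusion plus Poisson jumps whose sizes are exponential upward with
# probability p_up and exponential downward otherwise (illustrative values)
T, dt = 1000, 1.0 / 252
mu, sigma, lam = 0.05, 0.2, 10.0         # drift, vol, jumps per year
p_up, eta_up, eta_dn = 0.4, 25.0, 15.0   # up-jump prob and tail rates

x = np.zeros(T)
for t in range(1, T):
    jump = 0.0
    if rng.random() < lam * dt:          # Bernoulli approx.: <=1 jump/step
        if rng.random() < p_up:
            jump = rng.exponential(1.0 / eta_up)    # upward jump
        else:
            jump = -rng.exponential(1.0 / eta_dn)   # downward jump
    x[t] = (x[t - 1] + (mu - 0.5 * sigma ** 2) * dt
            + sigma * np.sqrt(dt) * rng.normal() + jump)

# The jumps make returns visibly fat-tailed relative to a pure diffusion
r = np.diff(x)
z = r - r.mean()
excess_kurt = np.mean(z ** 4) / np.mean(z ** 2) ** 2 - 3.0
```

Allowing `eta_up` and `eta_dn` to differ is what lets the model capture the asymmetric impact of "good" versus "bad" news.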
  3. By: Giraitis, Liudas (Queen Mary University of London); Kapetanios, George (Queen Mary University of London); Price, Simon (Bank of England)
    Abstract: We consider time series forecasting in the presence of ongoing structural change where both the time-series dependence and the nature of the structural change are unknown. Methods that downweight older data, such as rolling regressions, forecast averaging over different windows and exponentially weighted moving averages, known to be robust to historical structural change, are found also to be useful in the presence of ongoing structural change in the forecast period. A crucial issue is how to select the degree of downweighting, usually defined by an arbitrary tuning parameter. We make this choice data-dependent by minimising forecast mean square error, and provide a detailed theoretical analysis of our proposal. Monte Carlo results illustrate the methods. We examine their performance on 97 US macro series. Forecasts using data-based tuning of the data discount rate are shown to perform well.
    Keywords: Recent and ongoing structural change; forecast combination; robust forecasts
    JEL: C10 C59
    Date: 2014–03–28
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0490&r=ets
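The data-dependent choice of downweighting can be sketched with an exponentially weighted moving average whose discount rate is tuned by minimising past one-step forecast MSE. This is a toy illustration with a single mean break; the grid and series are assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Series with a one-off structural break in its mean (illustrative)
y = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(2.0, 1.0, 150)])

def ewma_forecasts(y, rho):
    """One-step-ahead forecasts from an exponentially weighted moving
    average that discounts older data at rate rho."""
    f = np.empty(len(y))
    m = y[0]
    for t in range(len(y)):
        f[t] = m                        # forecast made before observing y[t]
        m = rho * m + (1.0 - rho) * y[t]
    return f

# Data-dependent tuning: pick the discount rate that minimises past
# one-step forecast MSE, in the spirit of the paper's proposal
grid = np.linspace(0.5, 0.99, 50)
mses = [np.mean((y[1:] - ewma_forecasts(y, r)[1:]) ** 2) for r in grid]
rho_star = float(grid[int(np.argmin(mses))])
mse_star = min(mses)

# Benchmark: recursive (expanding-window) mean, which never discounts
mse_rec = np.mean((y[1:] - np.cumsum(y)[:-1] / np.arange(1, len(y))) ** 2)
```

With the break present, the tuned discount rate downweights old observations and its forecast MSE beats the no-discounting benchmark, illustrating why such robust methods help under ongoing structural change.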
  4. By: Fawcett, Nicholas (Bank of England); Kapetanios, George (Queen Mary University of London); Mitchell, James (WBS); Price, Simon (Bank of England)
    Abstract: Density forecast combinations are becoming increasingly popular as a means of improving forecast ‘accuracy’, as measured by a scoring rule. In this paper we generalise this literature by letting the combination weights follow more general schemes. Sieve estimation is used to optimise the score of the generalised density combination where the combination weights depend on the variable one is trying to forecast. Specific attention is paid to the use of piecewise linear weight functions that let the weights vary by region of the density. We analyse these schemes theoretically, in Monte Carlo experiments and in an empirical study. Our results show that the generalised combinations outperform their linear counterparts.
    Keywords: Density Forecasting; Model Combination; Scoring Rules
    JEL: C53
    Date: 2014–03–28
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0492&r=ets
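The key generalisation, weights that vary by region of the density, can be illustrated with two Gaussian components and a piecewise-linear weight function. The components and breakpoints below are illustrative assumptions, not the paper's empirical setup:

```python
import numpy as np

# Two Gaussian predictive densities combined with a weight that depends on
# the value of the variable being forecast (a piecewise-linear weight)
y = np.linspace(-6.0, 6.0, 1201)
dy = y[1] - y[0]
p1 = np.exp(-0.5 * (y + 1.0) ** 2) / np.sqrt(2 * np.pi)   # N(-1, 1)
p2 = np.exp(-0.5 * (y - 1.0) ** 2) / np.sqrt(2 * np.pi)   # N(+1, 1)

# Weight on p1: 1 in the left region, 0 in the right, linear in between
w = np.clip(0.5 - 0.5 * y, 0.0, 1.0)

mix = w * p1 + (1.0 - w) * p2
mix /= mix.sum() * dy      # renormalise so the combination integrates to 1
total_mass = mix.sum() * dy
```

Unlike a fixed-weight linear pool, each component's weight here varies across the outcome space, which is what allows the combination to favour a component only in the region where it forecasts well.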
  5. By: Liam Cheung; John Galbraith
    Abstract: GARCH models and their variants are usually estimated using quasi-Maximum Likelihood (QML). Recent work has shown that by using estimates of quadratic variation, for example from the daily realized volatility, it is possible to estimate these models in a different way which incorporates the additional information. Theory suggests that as the precision of estimates of daily quadratic variation improves, such estimates (via LAD-ARCH approximation) should come to equal and eventually dominate the QML estimators. The present paper investigates this using a five-year sample of data on returns from all 466 S&P 500 stocks which were present in the index continuously throughout the period. The results suggest that LAD-ARCH estimates, using realized volatility on five-minute returns over the trading day, yield measures of 1-step forecast accuracy comparable to or slightly superior to those obtained from QML estimates. Combining the two estimators, either by equal weighting or weighting based on cross-validation, appears to produce a clear improvement in forecast accuracy relative to either method alone.
    Keywords: QML and LAD-ARCH estimators, GARCH models
    Date: 2013–07–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2013s-19&r=ets
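The combination step the abstract describes is easy to sketch once the two forecast series exist. The snippet below uses simulated stand-ins for the QML and LAD-ARCH volatility forecasts and the realized-volatility target (none of these numbers come from the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated stand-ins: a "true" volatility path, two noisy forecast series
# (as would come from QML and LAD-ARCH estimates), and a noisy target
true_vol = 0.5 + 0.2 * np.abs(np.sin(np.linspace(0.0, 10.0, 200)))
f_qml = true_vol + rng.normal(0.0, 0.05, 200)
f_lad = true_vol + rng.normal(0.0, 0.05, 200)
target = true_vol + rng.normal(0.0, 0.05, 200)

# Equal-weight combination
f_eq = 0.5 * (f_qml + f_lad)

# Weight chosen by minimising squared error on a training split, a simple
# form of the cross-validated weighting considered in the paper
train = slice(0, 100)
ws = np.linspace(0.0, 1.0, 101)
errs = [np.mean((target[train]
                 - (w * f_qml[train] + (1 - w) * f_lad[train])) ** 2)
        for w in ws]
w_star = float(ws[int(np.argmin(errs))])
f_cv = w_star * f_qml + (1.0 - w_star) * f_lad

# Out-of-sample comparison on the held-out half
mse_qml = np.mean((target[100:] - f_qml[100:]) ** 2)
mse_lad = np.mean((target[100:] - f_lad[100:]) ** 2)
mse_eq = np.mean((target[100:] - f_eq[100:]) ** 2)
```

When the two forecasts carry independent errors, averaging them reduces error variance, which is the mechanism behind the forecast-accuracy gains the paper reports.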
  6. By: Mardi Dungey; Jan P. A. M. Jacobs; Jing Jian; Simon van Norden
    Abstract: A well-documented property of the Beveridge-Nelson trend-cycle decomposition is the perfect negative correlation between trend and cycle innovations. We show how this may be consistent with a structural model where trend shocks enter the cycle, or cycle shocks enter the trend, and that identification restrictions are necessary to make this structural distinction. A reduced form unrestricted version such as Morley, Nelson and Zivot (2003) is compatible with either option, but cannot distinguish which is relevant. We discuss economic interpretations and implications using US real GDP data.
    Keywords: trend-cycle decomposition, data revision, state-space form
    JEL: C22 C53 C82
    Date: 2013–07–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2013s-23&r=ets
  7. By: Jean-Marie Dufour; Lynda Khalaf; Marcel Voia
    Abstract: This paper suggests Monte Carlo multiple test procedures which are provably valid in finite samples. These include combination methods originally proposed for independent statistics and further improvements which formalize statistical practice. We also adapt the Monte Carlo test method to non-continuous combined statistics. The methods suggested are applied to test serial dependence and predictability. In particular, we introduce and analyze new procedures that account for endogenous lag selection. A simulation study illustrates the properties of the proposed methods. Results show that concrete and non-spurious power gains (over standard combination methods) can be achieved through the combined Monte Carlo test approach, and confirm arguments in favour of variance-ratio type criteria.
    Keywords: Monte Carlo test, induced test, test combination, simultaneous inference, variance ratio
    Date: 2013–10–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2013s-40&r=ets
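The finite-sample Monte Carlo logic behind such combined tests can be sketched with variance-ratio statistics. This is a deliberately simplified version: the statistics, the max-combination, and the Gaussian i.i.d. null are illustrative choices (a refined version would, for instance, standardize each statistic before combining), not the paper's procedures:

```python
import numpy as np

rng = np.random.default_rng(4)

def var_ratio_stats(x, lags=(2, 4, 8)):
    """Variance-ratio departures |VR(q) - 1| at several horizons, a simple
    stand-in for the battery of statistics being combined."""
    out = []
    for q in lags:
        sums = np.convolve(x, np.ones(q), mode="valid")  # q-period sums
        vr = np.var(sums) / (q * np.var(x))
        out.append(abs(vr - 1.0))
    return np.array(out)

def mc_combined_test(x, B=199, seed=0):
    """Monte Carlo p-value for the combined (max) statistic: simulate the
    i.i.d. Gaussian null B times, recompute the combined statistic, and
    count exceedances; (1 + #exceed)/(B + 1) is valid in finite samples."""
    g = np.random.default_rng(seed)
    s_obs = var_ratio_stats(x).max()
    count = sum(var_ratio_stats(g.normal(size=len(x))).max() >= s_obs
                for _ in range(B))
    return (1 + count) / (B + 1)

# Under the null (i.i.d. data) vs. under serial dependence (AR(1))
x_iid = rng.normal(size=300)
e = rng.normal(size=300)
x_ar = np.empty(300)
x_ar[0] = e[0]
for t in range(1, 300):
    x_ar[t] = 0.5 * x_ar[t - 1] + e[t]

p_iid = mc_combined_test(x_iid)
p_ar = mc_combined_test(x_ar)
```

Because the null distribution of the combined statistic is simulated rather than approximated asymptotically, the size control holds exactly for any sample length, which is the "provably valid in finite samples" property.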
  8. By: Preinerstorfer, David; Pötscher, Benedikt M.
    Abstract: The behavior of the power function of autocorrelation tests such as the Durbin-Watson test in time series regressions or the Cliff-Ord test in spatial regression models has been intensively studied in the literature. When the correlation becomes strong, Krämer (1985) (for the Durbin-Watson test) and Krämer (2005) (for the Cliff-Ord test) have shown that the power can be very low, in fact can converge to zero, under certain circumstances. Motivated by these results, Martellosio (2010) set out to build a general theory that would explain these findings. Unfortunately, Martellosio (2010) does not achieve this goal, as a substantial portion of his results and proofs suffer from serious flaws. The present paper now builds a theory as envisioned in Martellosio (2010) in a fairly general framework, covering general invariant tests of a hypothesis on the disturbance covariance matrix in a linear regression model. The general results are then specialized to testing for spatial correlation and to autocorrelation testing in time series regression models. We also characterize the situation where the null and the alternative hypothesis are indistinguishable by invariant tests.
    Keywords: power function, invariant test, autocorrelation, spatial correlation, zero-power trap, indistinguishability, Durbin-Watson test, Cliff-Ord test
    JEL: C12
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:55059&r=ets
  9. By: Jin, Xin; Maheu, John M
    Abstract: This paper proposes a flexible way of modeling dynamic heterogeneous covariance breakdowns in multivariate GARCH (MGARCH) models. During periods of normal market activity, volatility dynamics are governed by an MGARCH specification. A covariance breakdown is any significant temporary deviation of the conditional covariance matrix from its implied MGARCH dynamics. This is captured through a flexible stochastic component that allows for changes in the conditional variances, covariances and implied correlation coefficients. Different breakdown periods will have different impacts on the conditional covariance matrix and are estimated from the data. We propose an efficient Bayesian posterior sampling procedure for the estimation and show how to compute the marginal likelihood of the model. When applying the model to daily stock market and bond market data, we identify a number of different covariance breakdowns. Modeling covariance breakdowns leads to a significant improvement in the marginal likelihood and gains in portfolio choice.
    Keywords: correlation breakdown; marginal likelihood; particle filter; Markov chain; generalized variance
    JEL: C32 C58 G1
    Date: 2014–04–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:55243&r=ets
  10. By: Thomai Filippeli (Queen Mary University of London); Konstantinos Theodoridis (Bank of England)
    Abstract: Similar to Ingram and Whiteman (1994), De Jong et al. (1993) and Del Negro and Schorfheide (2004), this study proposes a methodology for constructing Dynamic Stochastic General Equilibrium (DSGE) consistent prior distributions for Bayesian Vector Autoregressive (BVAR) models. The moments of the assumed Normal-Inverse Wishart (non-conjugate) prior distribution of the VAR parameter vector are derived using the results developed by Fernandez-Villaverde et al. (2007), Christiano et al. (2006) and Ravenna (2007) regarding structural VAR (SVAR) models and the normal prior density of the DSGE parameter vector. In line with the results from previous studies, BVAR models with theoretical priors seem to achieve forecasting performance that is comparable to, if not better than, that obtained using theory-free "Minnesota" priors (Doan et al., 1984). Additionally, the marginal likelihood of the time-series model with theory-founded priors, derived from the output of the Gibbs sampler, can be used to rank competing DSGE theories that aim to explain the same observed data (Geweke, 2005). Finally, motivated by the work of Christiano et al. (2010b,a) and Del Negro and Schorfheide (2004), we use the theoretical results developed by Chernozhukov and Hong (2003) and Theodoridis (2011) to derive the quasi-Bayesian posterior distribution of the DSGE parameter vector.
    Keywords: BVAR, SVAR, DSGE, Gibbs sampling, Marginal-likelihood evaluation, Predictive density evaluation, Quasi-Bayesian DSGE estimation
    JEL: C11 C13 C32 C52
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp713&r=ets
  11. By: Ching-Wai (Jeremy) Chiu (Bank of England); Haroon Mumtaz (Queen Mary University of London); Gabor Pinter (Bank of England)
    Abstract: We confirm that standard time-series models for US output growth, inflation, interest rates and stock market returns feature non-Gaussian error structure. We build a 4-variable VAR model where the orthogonalised shocks have a Student t-distribution with a time-varying variance. We find that in terms of in-sample fit, the VAR model that features both stochastic volatility and Student-t disturbances outperforms restricted alternatives that feature only one of these attributes. The VAR model with Student-t disturbances results in density forecasts for industrial production and stock returns that are superior to alternatives that assume Gaussianity. This difference appears to be especially stark over the recent financial crisis.
    Keywords: Bayesian VAR, Fat tails, Stochastic volatility
    JEL: C32 C53
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp714&r=ets
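The fat-tailed error structure is easy to see in simulation. The sketch below uses a bivariate VAR(1) with Student-t shocks; the coefficients and degrees of freedom are illustrative (the paper's model is a 4-variable VAR with time-varying shock variance as well):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a bivariate VAR(1) with Student-t shocks (illustrative values)
T, nu = 5000, 5
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
eps = rng.standard_t(nu, size=(T, 2))
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + eps[t]

# Implied VAR innovations; with nu = 5 the population excess kurtosis is
# 6/(nu - 4) = 6, so Gaussian-error diagnostics should flag fat tails
resid = y[1:] - y[:-1] @ A.T
z = resid - resid.mean(axis=0)
excess_kurt = (z ** 4).mean(axis=0) / (z ** 2).mean(axis=0) ** 2 - 3.0
```

A Gaussian VAR fitted to such data would understate the probability of large shocks, which is why the t-disturbance model produces better density forecasts in crisis periods.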

This nep-ets issue is ©2014 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.