nep-ets New Economics Papers
on Econometric Time Series
Issue of 2011‒05‒24
thirteen papers chosen by
Yong Yin
SUNY at Buffalo

  1. Characterizing economic trends by Bayesian stochastic model specification search By Stefano Grassi; Tommaso Proietti
  2. Bias-correction in vector autoregressive models: A simulation study By Tom Engsted; Thomas Q. Pedersen
  3. When Long Memory Meets the Kalman Filter: A Comparative Study By Stefano Grassi; Paolo Santucci de Magistris
  4. Some econometric results for the Blanchard-Watson bubble model By Søren Johansen; Theis Lange
  5. Limiting Distribution of the Score Statistic under Moderate Deviation from a Unit Root in MA(1) By Ryota Yabe
  6. Statistical Inference in Possibly Integrated/Cointegrated Vector Autoregressions: Application to Testing for Structural Changes By Eiji Kurozumi; Khashbaatar Dashtseren
  7. Why are Trend Cycle Decompositions of Alternative Models So Different? By Shigeru Iwata; Han Li
  8. How Are Shocks to Trend and Cycle Correlated? A Simple Methodology for Unidentified Unobserved Components Models By Daisuke Nagakura
  9. Modeling and forecasting realized range volatility By Massimiliano Caporin; Gabriel G. Velo
  10. Forecasting aggregate and disaggregates with common features By Antoni, Espasa; Iván, Mayo
  11. Conditional Correlation Models of Autoregressive Conditional Heteroskedasticity with Nonstationary GARCH Equations By Cristina Amado; Timo Teräsvirta
  12. The Continuous Wavelet Transform: A Primer By Luís Francisco Aguiar; Maria Joana Soares
  13. Forecasting Equicorrelation By Adam E Clements; Christopher A Coleman-Fenn; Daniel R Smith

  1. By: Stefano Grassi (Aarhus University and CREATES); Tommaso Proietti (Università di Roma “Tor Vergata”)
    Abstract: We extend a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. In particular, we focus on autoregressive models with possibly time-varying intercept and slope and decide whether their parameters are fixed or evolutive. Stochastic model specification is carried out to discriminate between two alternative hypotheses concerning the generation of trends: the trend-stationary hypothesis, on the one hand, for which the trend is a deterministic function of time and the short run dynamics are represented by a stationary autoregressive process; the difference-stationary hypothesis, on the other, according to which the trend results from the cumulation of the effects of random disturbances. We illustrate the methodology for a set of U.S. macroeconomic time series, which includes the traditional Nelson and Plosser dataset. The broad conclusion is that most series are better represented by autoregressive models with time-invariant intercept and slope and coefficients that are close to the boundary of the stationarity region. The posterior distribution of the autoregressive parameters, estimated by a suitable Gibbs sampling scheme, provides useful insight into the quasi-integrated nature of the selected specifications.
    Keywords: Bayesian model selection, stationarity, unit roots, stochastic trends, variable selection.
    JEL: C22 C52
    Date: 2011–05–05
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-16&r=ets
  2. By: Tom Engsted (Aarhus University and CREATES); Thomas Q. Pedersen (Aarhus University and CREATES)
    Abstract: We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple and easy-to-use analytical bias formula compares very favorably to the more standard but also more computer-intensive bootstrap bias-correction method, both in terms of bias and mean squared error. Both methods yield a notable improvement over both OLS and a recently proposed WLS estimator. We also investigate the properties of an iterative scheme when applying the analytical bias formula, and we find that this can imply slightly better finite-sample properties for very small sample sizes, while for larger sample sizes there is no gain from iterating. Finally, we pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space during the process of correcting for bias.
    Keywords: Bias reduction, VAR model, analytical bias formula, bootstrap, iteration, Yule-Walker, non-stationary system, skewed and fat-tailed data.
    JEL: C13 C32
    Date: 2011–05–13
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-18&r=ets
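The bias-correction idea can be illustrated in a univariate sketch. The paper works with the full VAR formula; below is only the classical scalar AR(1) analogue (Kendall's approximation E[ρ̂] ≈ ρ − (1+3ρ)/T, applied as a one-step plug-in correction), with an illustrative sample size and coefficient:

```python
import numpy as np

def ar1_bias_correct(rho_hat, T):
    # Kendall's analytical bias approximation for the AR(1) OLS estimator:
    # E[rho_hat] - rho ~ -(1 + 3*rho)/T, applied as a plug-in correction.
    return rho_hat + (1.0 + 3.0 * rho_hat) / T

rng = np.random.default_rng(0)
T, rho, n_rep = 100, 0.9, 2000
raw, corrected = [], []
for _ in range(n_rep):
    e = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]          # simulate a stationary AR(1)
    r = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])  # OLS estimate of rho
    raw.append(r)
    corrected.append(ar1_bias_correct(r, T))

mean_raw, mean_corr = np.mean(raw), np.mean(corrected)
print(mean_raw, mean_corr)  # raw mean is biased downward; corrected mean is closer to 0.9
```

On this simulated design the corrected estimate lands visibly closer to the true coefficient than raw OLS; the paper's simulation study makes the analogous comparison for VARs, including the bootstrap alternative.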
  3. By: Stefano Grassi (Aarhus University and CREATES); Paolo Santucci de Magistris (Aarhus University and CREATES)
    Abstract: The finite sample properties of state space methods applied to long memory time series are analyzed through Monte Carlo simulations. The state space setup allows us to introduce a novel modeling approach in the long memory framework, which directly tackles measurement errors and random level shifts. Missing values and several alternative sources of misspecification are also considered. It emerges that the state space methodology provides a valuable alternative for the estimation of long memory models, under different data generating processes that are common in financial and economic series. Two empirical applications highlight the practical usefulness of the proposed state space methods.
    Keywords: ARFIMA models, Kalman Filter, Missing Observations, Measurement Error, Level Shifts.
    JEL: C10 C22 C80
    Date: 2011–05–02
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-14&r=ets
  4. By: Søren Johansen (University of Copenhagen and CREATES); Theis Lange (University of Copenhagen and CREATES)
    Abstract: The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)ρy(t-1) + e(t), t=1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), which is i.i.d. with mean zero and finite variance. We take ρ > 1, so the process is explosive for a period and collapses when s(t) = 0. We apply the drift criterion for non-linear time series to show that the process is geometrically ergodic when p < 1, because of the recurrent collapse. It has a finite mean if pρ < 1, and a finite variance if pρ² < 1. The question we discuss is whether a bubble model with infinite variance can create the long swings, or persistence, which are observed in many macro variables. We say that a variable is persistent if its autoregressive coefficient ρ(n), from the regression of y(t) on y(t-1), is close to one. We show that the estimator ρ(n) converges to ρp if the variance is finite, but if the variance of y(t) is infinite, we prove the curious result that the estimator converges to ρ⁻¹. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of the product moments of y(t).
    Keywords: Time series, explosive processes, bubble models.
    JEL: C32
    Date: 2011–05–09
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-17&r=ets
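The model is simple enough to simulate directly. A minimal sketch (parameter values are illustrative, not from the paper): with ρ = 1.05 and p = 0.8 we have pρ² = 0.882 < 1, so the variance is finite, and the OLS coefficient of y(t) on y(t-1) should converge to ρp = 0.84 rather than to ρ:

```python
import numpy as np

rng = np.random.default_rng(42)
T, rho, p = 200_000, 1.05, 0.8   # p * rho**2 = 0.882 < 1: finite variance
s = rng.random(T) < p            # i.i.d. Bernoulli(p) bubble-survival indicator
e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    # y(t) = s(t)*rho*y(t-1) + e(t): explosive while s(t) = 1, collapses at s(t) = 0
    y[t] = (rho * y[t - 1] if s[t] else 0.0) + e[t]

rho_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])  # OLS of y(t) on y(t-1)
print(rho_hat)                   # close to rho * p = 0.84 in this finite-variance case
```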
  5. By: Ryota Yabe
    Abstract: This paper derives the asymptotic distribution of Tanaka's score statistic under moderate deviation from a unit root in a moving average model of order one or MA(1). We classify the limiting distribution into three types depending on the order of deviation. In the fastest case, the convergence order of the asymptotic distribution continuously changes from the invertible process to the unit root. In the slowest case, the limiting distribution coincides with the invertible process in a distributional sense. This implies that these cases share an asymptotic property. The limiting distribution in the intermediate case provides the boundary property between the fastest and slowest cases.
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd10-170&r=ets
  6. By: Eiji Kurozumi; Khashbaatar Dashtseren
    Abstract: We develop a new approach of statistical inference in possibly integrated/cointegrated vector autoregressions. Our method is built on the two previous approaches: the lag augmented approach by Toda and Yamamoto (1995) and the artificial autoregressions by Yamamoto (1996). We show that our estimator is asymptotically normally distributed irrespective of whether the variables are stationary or nonstationary, and that the Wald test statistic for the parameter restrictions has an asymptotic chi-square distribution. Using this method, we also propose to test for multiple structural changes. We show that our test statistics have the same limiting distributions as in the standard case, irrespective of whether the variables are stationary, purely integrated, or cointegrated.
    Keywords: multiple breaks, stationary, unit root, cointegration
    JEL: C12 C13 C32
    Date: 2011–04
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd11-187&r=ets
  7. By: Shigeru Iwata; Han Li
    Abstract: When a procedure is applied to extract two component processes from a single observed process, it is necessary to impose a set of restrictions that defines the two components. One popular restriction is the assumption that the shocks to the trend and cycle are orthogonal. Another is the assumption that the trend is a pure random walk process. The unobserved components (UC) model (Harvey, 1985) assumes both of the above, whereas the BN decomposition (Beveridge and Nelson, 1981) assumes only the latter. Quah (1992) investigates a broad class of decompositions by making the former assumption only. This paper provides a general framework in which alternative trend-cycle decompositions are regarded as special cases, and examines alternative decomposition schemes from the perspective of the frequency domain. We find that, as far as US GDP is concerned, the conventional UC model is inappropriate for the trend-cycle decomposition. We agree with Morley et al (2003) that the UC model is simply misspecified. However, this does not imply that a UC model that allows for correlated shocks is a better model specification. The correlated UC model would lose many attractive features of the conventional UC model.
    Keywords: Beveridge-Nelson decomposition, Unobserved Component Models
    JEL: E44 F36 G15
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd10-171&r=ets
  8. By: Daisuke Nagakura
    Abstract: In this paper, we propose a simple methodology for investigating how shocks to trend and cycle are correlated in unidentified unobserved components models, in which the correlation is not identified. The proposed methodology is applied to U.S. and U.K. real GDP data. We find that the correlation parameters are negative for both countries. We also investigate how changing the identification restriction results in different trend and cycle estimates.
    Keywords: Unobserved components model, Trend, Cycle, Business Cycle Analysis
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd10-172&r=ets
  9. By: Massimiliano Caporin (University of Padova); Gabriel G. Velo (University of Padova)
    Abstract: In this paper, we estimate, model and forecast Realized Range Volatility, a new realized measure and estimator of the quadratic variation of financial prices. This estimator was recently introduced in the literature and is based on the high-low range observed at high frequency during the day. We consider the impact of microstructure noise in high frequency data and correct our estimates following a known procedure. We then model the Realized Range accounting for the well-known stylized effects present in financial data. We consider an HAR model with asymmetric effects with respect to the volatility and the return, and GARCH and GJR-GARCH specifications for the variance equation. Moreover, we also consider a non-Gaussian distribution for the innovations. The analysis of forecast performance during the different periods suggests that including the HAR components in the model improves point forecasting accuracy, while the introduction of asymmetric effects leads only to minor improvements.
    Keywords: Statistical analysis of financial data, Econometrics, Forecasting methods, Time series analysis, Realized Range Volatility, Realized Volatility, Long-memory, Volatility forecasting
    JEL: C22 C52 C53
    Date: 2011–02
    URL: http://d.repec.org/n?u=RePEc:pad:wpaper:0128&r=ets
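The HAR component mentioned in the abstract is, at its core, an OLS regression of today's realized measure on its daily lag and its weekly (5-day) and monthly (22-day) lagged averages. A minimal sketch on simulated data — all coefficients and the noise level are illustrative, and the paper's full specification adds asymmetry terms and GARCH errors:

```python
import numpy as np

def har_design(rv):
    """Build HAR regressors: intercept, daily lag, weekly and monthly lagged means."""
    rows = [[1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()]
            for t in range(22, len(rv))]
    return np.asarray(rows), rv[22:]

rng = np.random.default_rng(1)
T = 20_000
beta = np.array([0.1, 0.4, 0.3, 0.2])   # illustrative HAR coefficients (sum of lags < 1)
rv = np.ones(T)                         # start at the unconditional mean
for t in range(22, T):
    x = np.array([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
    rv[t] = x @ beta + 0.1 * rng.standard_normal()

X, y = har_design(rv)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit of the HAR regression
print(beta_hat)                         # close to beta on this simulated design
```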
  10. By: Antoni, Espasa; Iván, Mayo
    Abstract: The paper focuses on providing joint consistent forecasts for an aggregate and all its components, and on showing that this indirect forecast of the aggregate is at least as accurate as the direct one. The procedure developed in the paper is a disaggregated approach based on single-equation models for the components, which take into account stable features that some components share. The procedure is applied to forecasting euro area, UK and US inflation, and it is shown that its forecasts are significantly more accurate than those obtained by the direct forecast of the aggregate or by dynamic factor models. A by-product of the procedure is the classification of a large number of components by the restrictions they share, which could also be useful in other respects, such as the application of dynamic factors, the definition of intermediate aggregates or the formulation of models with unobserved components.
    Keywords: Common trends, Common serial correlation, Inflation, Euro Area, UK, US, Cointegration, Single-equation econometric models
    Date: 2011–04
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws110805&r=ets
  11. By: Cristina Amado (Universidade do Minho - NIPE); Timo Teräsvirta (CREATES, School of Economics and Management, Aarhus University)
    Abstract: In this paper we investigate the effects of carefully modelling the long-run dynamics of the volatilities of stock market returns on the conditional correlation structure. To this end we allow the individual unconditional variances in Conditional Correlation GARCH models to change smoothly over time by incorporating a nonstationary component in the variance equations. The modelling technique used to determine the parametric structure of this time-varying component is based on a sequence of specification Lagrange multiplier-type tests derived in Amado and Teräsvirta (2011). The variance equations combine the long-run and the short-run dynamic behaviour of the volatilities. The structure of the conditional correlation matrix is assumed to be either time-independent or time-varying. We apply our model to pairs of seven daily stock returns belonging to the S&P 500 composite index and traded at the New York Stock Exchange. The results suggest that accounting for deterministic changes in the unconditional variances considerably improves the fit of the multivariate Conditional Correlation GARCH models to the data. The effect of careful specification of the variance equations on the estimated correlations is variable: in some cases rather small, in others more discernible. As a by-product, we generalize news impact surfaces to the situation in which both the GARCH equations and the conditional correlations contain a deterministic component that is a function of time.
    Keywords: Multivariate GARCH model; Time-varying unconditional variance; Lagrange multiplier test; Modelling cycle; Nonlinear time series.
    JEL: C12 C32 C51 C52
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:nip:nipewp:15/2011&r=ets
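The key idea — a short-run GARCH component multiplied by a smooth deterministic function of rescaled time — can be sketched in a univariate toy version. The logistic transition function, the recursion details and all parameter values below are illustrative assumptions, not the paper's specification or estimates:

```python
import numpy as np

def g(u, delta0=1.0, delta1=3.0, gamma=20.0, c=0.5):
    """Smooth (logistic) deterministic variance component in rescaled time u = t/T."""
    return delta0 + delta1 / (1.0 + np.exp(-gamma * (u - c)))

rng = np.random.default_rng(7)
T = 20_000
omega, alpha, beta = 0.05, 0.1, 0.85   # short-run GARCH(1,1) parameters
z = rng.standard_normal(T)
h = np.empty(T)
eps = np.empty(T)
h[0] = omega / (1.0 - alpha - beta)    # start at the stationary short-run level
eps[0] = np.sqrt(h[0] * g(0.0)) * z[0]
for t in range(1, T):
    # short-run recursion driven by the rescaled shock eps/sqrt(g); variance = h_t * g(t/T)
    h[t] = omega + alpha * eps[t - 1] ** 2 / g((t - 1) / T) + beta * h[t - 1]
    eps[t] = np.sqrt(h[t] * g(t / T)) * z[t]

var1, var2 = eps[:T // 2].var(), eps[T // 2:].var()
print(var1, var2)                      # sample variance rises as g increases over time
```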
  12. By: Luís Francisco Aguiar (Universidade do Minho - NIPE); Maria Joana Soares (Universidade do Minho - Departamento de Matemática)
    Abstract: Economists are already familiar with the Discrete Wavelet Transform. However, a body of work using the Continuous Wavelet Transform has also been growing. We provide a self-contained summary on continuous wavelet tools, such as the Continuous Wavelet Transform, the Cross-Wavelet, the Wavelet Coherency and the Phase-Difference. Furthermore, we generalize the concept of simple coherency to Partial Wavelet Coherency and Multiple Wavelet Coherency, akin to partial and multiple correlations, allowing the researcher to move beyond bivariate analysis. Finally, we describe the Generalized Morse Wavelets, a class of analytic wavelets recently proposed. A user-friendly toolbox, with examples, is attached to this paper.
    Keywords: Continuous Wavelet Transform, Cross-Wavelet Transform, Wavelet Coherency, Partial Wavelet Coherency, Multiple Wavelet Coherency, Wavelet Phase-Difference; Economic fluctuations
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:nip:nipewp:16/2011&r=ets
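For readers meeting these tools for the first time, the Continuous Wavelet Transform itself is straightforward to compute by direct convolution with scaled wavelets. The sketch below is a simplified Morlet implementation (not the authors' toolbox) that recovers the dominant period of a sinusoid:

```python
import numpy as np

def morlet_cwt(x, scales, omega0=6.0):
    """CWT by direct convolution with analytic Morlet wavelets (unit sampling step)."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        v = np.arange(-4 * s, 4 * s + 1)  # truncate the wavelet support at +/- 4 scales
        psi = np.pi ** -0.25 * np.exp(1j * omega0 * v / s - (v / s) ** 2 / 2) / np.sqrt(s)
        out[i] = np.convolve(x, psi, mode='same')
    return out

t = np.arange(1024)
x = np.sin(2 * np.pi * t / 32)            # period-32 oscillation
scales = np.arange(8, 64)
power = np.abs(morlet_cwt(x, scales)) ** 2
peak_scale = scales[power.mean(axis=1).argmax()]
print(peak_scale)  # near 31: for omega0 = 6 the Fourier period is ~1.03 * scale
```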
  13. By: Adam E Clements (QUT); Christopher A Coleman-Fenn (QUT); Daniel R Smith (QUT)
    Abstract: This article examines the out-of-sample forecast performance of several time-series models of equicorrelation, the mean of the off-diagonal elements of a correlation matrix. Building on the existing Dynamic Conditional Correlation and Linear Dynamic Equicorrelation models, we propose adapting the latter to include measures of equicorrelation based on high-frequency intraday data, as well as a forecast of equicorrelation implied by the options market. Using state-of-the-art statistical evaluation technology, we find that models using the realised measures and the implied equicorrelation outperform those that use daily data alone. However, the out-of-sample forecasting benefits of implied equicorrelation disappear when used in conjunction with the realised measures.
    Date: 2011–04–01
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2011_3&r=ets
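Equicorrelation itself is a one-line computation; a minimal sketch for an n×n correlation matrix (the matrix below is made up for illustration):

```python
import numpy as np

def equicorrelation(R):
    """Mean of the off-diagonal elements of a square correlation matrix R."""
    n = R.shape[0]
    return (R.sum() - np.trace(R)) / (n * (n - 1))

R = np.array([[1.0, 0.3, 0.5],
              [0.3, 1.0, 0.1],
              [0.5, 0.1, 1.0]])
print(equicorrelation(R))  # (0.3 + 0.5 + 0.1) * 2 / 6 = 0.3
```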

This nep-ets issue is ©2011 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.