nep-ets New Economics Papers
on Econometric Time Series
Issue of 2016‒08‒28
eleven papers chosen by
Yong Yin
SUNY at Buffalo

  1. Measuring Business Cycles with Structural Breaks and Outliers: Applications to International Data By Pierre Perron; Tatsuma Wada
  2. Forecasting in the presence of in and out of sample breaks By Jiawen Xu; Pierre Perron
  3. Combining Long Memory and Level Shifts in Modeling and Forecasting the Volatility of Asset Returns By Rasmus T. Varneskov; Pierre Perron
  4. Improved Tests for Forecast Comparisons in the Presence of Instabilities By Luis Filipe Martins; Pierre Perron
  5. Residuals-based Tests for Cointegration with GLS Detrended Data By Pierre Perron; Gabriel Rodríguez
  6. Testing for Flexible Nonlinear Trends with an Integrated or Stationary Noise Component By Pierre Perron; Mototsugu Shintani; Tomoyoshi Yabu
  7. A Dynamic Multi-Level Factor Model with Long-Range Dependence By Yunus Emre Ergemen; Carlos Vladimir Rodríguez-Caballero
  8. Fractal approach towards power-law coherency to measure cross-correlations between time series By Ladislav Kristoufek
  9. Convergence rates of sums of α-mixing triangular arrays : with an application to non-parametric drift function estimation of continuous-time processes By Kanaya, Shin
  10. Eigenvalue Ratio Estimators for the Number of Common Factors By Cavicchioli, Maddalena; Forni, Mario; Lippi, Marco; Zaffaroni, Paolo
  11. Assessing Point Forecast Accuracy by Stochastic Error Distance By Francis X. Diebold; Minchul Shin

  1. By: Pierre Perron (Boston University); Tatsuma Wada (Keio University)
    Abstract: This paper first generalizes the trend-cycle decomposition framework of Perron and Wada (2009), based on unobserved components models with innovations having a mixture of normals distribution, which can handle sudden level and slope changes to the trend function as well as outliers. We investigate how the implied trend and cycle differ from those delivered by the popular decomposition based on the Hodrick and Prescott (HP) (1997) filter. Our results show important qualitative and quantitative differences in the implied cycles for both real GDP and consumption series for the G7 countries. Most of the differences can be ascribed to the fact that the HP filter does not handle slope changes, level shifts and outliers well, while our method does. We then reassess how such different cycles affect some so-called “stylized facts” about the relative variability of consumption and output across countries.
    Keywords: Trend-Cycle Decomposition, Unobserved Components Model, International Business Cycle, Non Gaussian Filter
    JEL: C22 E32
    Date: 2015–10–29
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-016&r=ets
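    A minimal sketch of the kind of distortion discussed above, assuming a simulated series with a mid-sample level shift and the usual quarterly smoothing parameter in statsmodels' HP filter; this is not the authors' mixture-of-normals unobserved-components model:
```python
# Minimal sketch (not the authors' mixture-of-normals UC model): simulate a
# trend with a one-time level shift plus a stationary cycle and inspect how
# the Hodrick-Prescott filter spreads the shift into the extracted cycle.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
T = 200
trend = 0.5 * np.arange(T)          # deterministic growth
trend[T // 2:] -= 10.0              # sudden level shift mid-sample
cycle = np.zeros(T)
for t in range(1, T):               # AR(1) cyclical component
    cycle[t] = 0.7 * cycle[t - 1] + rng.normal(scale=1.0)
y = trend + cycle

hp_cycle, hp_trend = hpfilter(y, lamb=1600)   # quarterly convention
# Around the break, the HP cycle absorbs part of the level shift, which is
# the kind of distortion the abstract refers to.
print("std of true cycle:", cycle.std().round(2))
print("std of HP cycle  :", hp_cycle.std().round(2))
```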
  2. By: Jiawen Xu (Shanghai University of Finance and Economics); Pierre Perron (Boston University)
    Abstract: We present a frequentist-based approach to forecast time series in the presence of in-sample and out-of-sample breaks in the parameters of the forecasting model. We first model the parameters as following a random level shift process, with the occurrence of a shift governed by a Bernoulli process. To obtain a structure in which changes in the parameters are forecastable, we introduce two modifications. The first models the probability of shifts as a function of covariates that can themselves be forecast. The second incorporates a built-in mean reversion mechanism in the time path of the parameters. Similar modifications can also be made to model changes in the variance of the error process. Our full model can be cast into a non-linear non-Gaussian state space framework. To estimate it, we use particle filtering and a Monte Carlo expectation maximization algorithm. Simulation results show that the algorithm delivers accurate in-sample estimates; in particular, the filtered estimates of the time path of the parameters follow their true variations closely. We provide a number of empirical applications and compare the forecasting performance of our approach with a variety of alternative methods. These show that substantial gains in forecasting accuracy are obtained.
    Keywords: instabilities; structural change; forecasting; random level shifts; particle filter
    JEL: C22 C53
    Date: 2015–09–20
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-012&r=ets
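    A stripped-down illustration of the random level shift mechanism and the particle filtering step mentioned above, assuming a single parameter (a local mean), a constant shift probability, and no mean reversion or covariates; the shift probability, variances, and particle count are made up for the example:
```python
# Hedged sketch: a bootstrap particle filter for a mean following a random
# level shift process, y_t = mu_t + eps_t, mu_t = mu_{t-1} + d_t * eta_t,
# d_t ~ Bernoulli(p).  This is a toy stand-in for the paper's richer model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
T, p, sig_eps, sig_eta = 300, 0.02, 1.0, 5.0

# simulate data with occasional level shifts
mu = np.zeros(T)
for t in range(1, T):
    mu[t] = mu[t - 1] + rng.binomial(1, p) * rng.normal(scale=sig_eta)
y = mu + rng.normal(scale=sig_eps, size=T)

# bootstrap particle filter
N = 2000
particles = np.zeros(N)
filtered = np.zeros(T)
for t in range(T):
    # propagate: each particle may experience a shift
    shifts = rng.binomial(1, p, size=N) * rng.normal(scale=sig_eta, size=N)
    particles = particles + shifts
    # weight by the measurement density, store the filtered mean, resample
    w = norm.pdf(y[t], loc=particles, scale=sig_eps)
    w /= w.sum()
    filtered[t] = np.dot(w, particles)
    particles = rng.choice(particles, size=N, replace=True, p=w)

print("RMSE of filtered level path:", np.sqrt(np.mean((filtered - mu) ** 2)).round(3))
```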
  3. By: Rasmus T. Varneskov (Aarhus University and CREATES); Pierre Perron (Boston University)
    Abstract: We propose a parametric state space model of asset return volatility with an accompanying estimation and forecasting framework that allows for ARFIMA dynamics, random level shifts and measurement errors. The Kalman filter is used to construct the state-augmented likelihood function and subsequently to generate forecasts, which are mean- and path-corrected. We apply our model to eight daily volatility series constructed from both high-frequency and daily returns. Full sample parameter estimates reveal that random level shifts are present in all series. Genuine long memory is present in high-frequency measures of volatility whereas there is little remaining dynamics in the volatility measures constructed using daily returns. From extensive forecast evaluations, we find that our ARFIMA model with random level shifts consistently belongs to the 10% Model Confidence Set across a variety of forecast horizons, asset classes, and volatility measures. The gains in forecast accuracy can be very pronounced, especially at longer horizons.
    Keywords: Forecasting, Kalman Filter, Long Memory Processes, State Space Modeling, Stochastic Volatility, Structural Change
    JEL: C13 C22 C53
    Date: 2015–09–08
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-015&r=ets
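    One hedged way to illustrate the long-memory (ARFIMA) ingredient of the model is the fractional differencing filter (1 - L)^d, whose coefficients decay hyperbolically; the value d = 0.4 and the decay check below are illustrative only:
```python
# Hedged sketch: coefficients of the fractional differencing filter (1 - L)^d,
# whose hyperbolic decay is the signature of long memory.  The paper combines
# this ARFIMA component with random level shifts and measurement errors.
import numpy as np

def fracdiff_weights(d, n):
    """Coefficients of (1 - L)^d expanded to lag n (standard recursion)."""
    w = np.zeros(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

d = 0.4                                  # illustrative long-memory parameter
w = fracdiff_weights(d, 1000)
print(w[:3])                             # 1, -d, -d(1-d)/2
# The weights decay roughly like k**(-1-d), so shocks die out very slowly.
print(abs(w[1000]) / abs(w[500]), (1000 / 500) ** (-1 - d))
```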
  4. By: Luis Filipe Martins (Lisbon University Institute); Pierre Perron (Boston University)
    Abstract: We are interested in comparing the out-of-sample forecasting performance of two competing models in the presence of possible instabilities. To that effect, we suggest using simple structural change tests, sup-Wald and UDmax as proposed by Andrews (1993) and Bai and Perron (1998), for changes in the mean of the loss-differences. Giacomini and Rossi (2010) proposed a fluctuations test and a one-time reversal test, also applied to the loss-differences. When their tests are properly constructed to account for potential serial correlation under the null hypothesis, so as to have a pivotal limit distribution, we show that they have undesirable power properties: power can be low and non-increasing as the alternative moves further from the null hypothesis. The good power properties they reported are simply an artifact of imposing a priori that the loss differentials are serially uncorrelated and of using the simple sample variance to scale the tests. In contrast, our statistics are shown to have higher monotonic power, especially the UDmax version. We use their empirical examples to show the practical relevance of the issues raised.
    Keywords: non-monotonic power, structural change, forecasts, long-run variance
    JEL: C22 C53
    Date: 2015–10–06
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-014&r=ets
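    A rough illustration, in the spirit of the sup-Wald test described above, of scanning candidate break dates for a change in the mean of a loss-differential series with a Newey-West long-run variance in the scaling; the trimming fraction, bandwidth rule, and simulated data are illustrative, and the non-standard critical values of Andrews (1993) are not reproduced:
```python
# Hedged sketch: a sup-Wald statistic for a single break in the mean of a
# loss-differential series d_t, scaled by a Newey-West long-run variance.
import numpy as np

def newey_west_lrv(x, bandwidth=None):
    """Bartlett-kernel long-run variance of a demeaned series."""
    x = x - x.mean()
    T = len(x)
    if bandwidth is None:
        bandwidth = int(np.floor(4 * (T / 100) ** (2 / 9)))   # rule of thumb
    lrv = np.mean(x ** 2)
    for j in range(1, bandwidth + 1):
        gamma_j = np.mean(x[j:] * x[:-j])
        lrv += 2 * (1 - j / (bandwidth + 1)) * gamma_j
    return lrv

def sup_wald_mean_break(d, trim=0.15):
    """Max over candidate break dates of the Wald stat for mean1 != mean2."""
    T = len(d)
    lrv = newey_west_lrv(d)
    stats = []
    for k in range(int(trim * T), int((1 - trim) * T)):
        m1, m2 = d[:k].mean(), d[k:].mean()
        stats.append((m1 - m2) ** 2 / (lrv * (1.0 / k + 1.0 / (T - k))))
    return max(stats)

# Example: simulated loss differentials where model 2 starts losing mid-sample.
rng = np.random.default_rng(2)
d = rng.normal(0.0, 1.0, size=400)
d[200:] += 0.8
print("sup-Wald:", round(sup_wald_mean_break(d), 2))
# Critical values are non-standard (Andrews, 1993) and not reproduced here.
```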
  5. By: Pierre Perron (Boston University); Gabriel Rodríguez (Pontificia Universidad Católica del Perú)
    Abstract: We provide GLS-detrended versions of single-equation static regression or residuals-based tests for testing whether or not non-stationary time series are cointegrated. Our approach is to consider nearly optimal tests for unit roots and apply them in the cointegration context. We derive the local asymptotic power functions of all tests considered for a triangular DGP imposing a directional restriction such that the regressors are pure integrated processes. Our GLS versions of the tests do indeed provide substantial power improvements over their OLS counterparts. Simulations show that the gains in power are important and stable across various configurations.
    Keywords: Cointegration, Residuals-Based Unit Root Tests, ECR Tests, OLS and GLS Detrended Data, Hypothesis Testing
    JEL: C22 C32 C52
    Date: 2015–10–19
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-017&r=ets
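    A minimal sketch of the GLS (quasi-difference) detrending idea applied before an Engle-Granger style residual ADF test, assuming a constant-only deterministic component and an illustrative non-centrality parameter cbar = -7; this is not the paper's exact procedure and the usual critical values do not apply:
```python
# Hedged sketch: GLS (quasi-difference) detrending of each series before a
# residuals-based ADF test on the cointegrating regression.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def gls_detrend(y, cbar=-7.0):
    """Quasi-difference y and a constant, regress, and return detrended y."""
    T = len(y)
    a = 1.0 + cbar / T
    z = np.ones(T)                                   # constant only
    y_qd = np.r_[y[0], y[1:] - a * y[:-1]]
    z_qd = np.r_[z[0], z[1:] - a * z[:-1]]
    beta = np.dot(z_qd, y_qd) / np.dot(z_qd, z_qd)   # scalar OLS
    return y - beta * z

rng = np.random.default_rng(3)
T = 500
x = np.cumsum(rng.normal(size=T))        # I(1) regressor
y = 1.0 + 0.5 * x + rng.normal(size=T)   # cointegrated with x

yd, xd = gls_detrend(y), gls_detrend(x)
b = np.dot(xd, yd) / np.dot(xd, xd)      # cointegrating regression on detrended data
u = yd - b * xd
# ADF regression on the residuals; standard Dickey-Fuller critical values do
# not apply to residual-based tests, so the statistic is only indicative here.
print("ADF stat on residuals:", round(adfuller(u, regression="n")[0], 2))
```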
  6. By: Pierre Perron (Boston University); Mototsugu Shintani (RCAST, University of Tokyo, Vanderbilt University); Tomoyoshi Yabu (Keio University)
    Abstract: This paper proposes a new test for the presence of a nonlinear deterministic trend approximated by a Fourier expansion in a univariate time series for which there is no prior knowledge as to whether the noise component is stationary or contains an autoregressive unit root. Our approach builds on the work of Perron and Yabu (2009a) and is based on a Feasible Generalized Least Squares procedure that uses a super-efficient estimator of the sum of the autoregressive coefficients when that sum equals one. The resulting Wald test statistic asymptotically follows a chi-square distribution in both the I(0) and I(1) cases. To improve the finite sample properties of the test, we use a bias-corrected version of its OLS estimator as proposed by Roy and Fuller (2001). We show that our procedure is substantially more powerful than currently available alternatives. We illustrate the usefulness of our method via an application to modeling the trend of global and hemispheric temperatures.
    Keywords: Fourier approximation, median-unbiased estimator, nonlinear trends, super-efficient estimator, unit root
    JEL: C22
    Date: 2015–01
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015-018&r=ets
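    A small sketch of what the Fourier approximation of the trend looks like in practice, assuming two frequencies and a simulated series; only the regressors and an OLS fit are shown, not the paper's FGLS test with a super-efficient estimator:
```python
# Hedged sketch: the "flexible nonlinear trend" is a low-frequency Fourier
# expansion.  We only build the regressors and fit them by OLS; the paper's
# actual test involves an FGLS step that is not reproduced here.
import numpy as np

def fourier_trend_regressors(T, n_freq=2):
    """Constant, linear trend, and sin/cos terms for frequencies 1..n_freq."""
    t = np.arange(1, T + 1)
    cols = [np.ones(T), t]
    for k in range(1, n_freq + 1):
        cols.append(np.sin(2 * np.pi * k * t / T))
        cols.append(np.cos(2 * np.pi * k * t / T))
    return np.column_stack(cols)

rng = np.random.default_rng(4)
T = 300
t = np.arange(1, T + 1)
y = 0.01 * t + 2 * np.sin(2 * np.pi * t / T) + rng.normal(size=T)

X = fourier_trend_regressors(T, n_freq=2)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated sin/cos coefficients:", beta[2:].round(2))
```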
  7. By: Yunus Emre Ergemen (Aarhus University and CREATES); Carlos Vladimir Rodríguez-Caballero (Aarhus University and CREATES)
    Abstract: A dynamic multi-level factor model with stationary or nonstationary global and regional factors is proposed. In the model, persistence in global and regional common factors as well as innovations allows for the study of fractional cointegrating relationships. Estimation of global and regional common factors is performed in two steps, employing canonical correlation analysis and a sequential least-squares algorithm. Selection of the number of global and regional factors is discussed. The small sample properties of our methodology are investigated by Monte Carlo simulations. The method is then applied to the Nord Pool power market for the analysis of price comovements among different regions within the power grid. We find that the global factor can be interpreted as the system price of the power grid, and we document a fractional cointegration relationship between prices and the global factor.
    Keywords: Multi-level factor, long memory, fractional cointegration, electricity prices
    JEL: C12 C22
    Date: 2016–08–12
    URL: http://d.repec.org/n?u=RePEc:aah:create:2016-23&r=ets
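    A simplified two-step principal-components stand-in for the multi-level structure described above, assuming one global and one regional factor per region and ignoring fractional integration; the paper's canonical-correlation and sequential least-squares estimator is not reproduced:
```python
# Hedged sketch: a simplified two-step extraction of one global factor (from
# the pooled panel) and one factor per region (from regional residuals).
import numpy as np

def first_pc(X):
    """First principal component of a T x N panel, standardized."""
    Xc = X - X.mean(axis=0)
    u, s, vt = np.linalg.svd(Xc, full_matrices=False)
    f = u[:, 0] * s[0]
    return f / f.std()

rng = np.random.default_rng(5)
T, N = 400, 10
g = np.cumsum(rng.normal(size=T)) / np.sqrt(T)         # global factor
regions = {}
for name in ("A", "B"):
    r = rng.normal(size=T)                             # regional factor
    loadings_g = rng.uniform(0.5, 1.5, size=N)
    loadings_r = rng.uniform(0.5, 1.5, size=N)
    regions[name] = (np.outer(g, loadings_g) + np.outer(r, loadings_r)
                     + rng.normal(size=(T, N)))

panel = np.hstack(list(regions.values()))
g_hat = first_pc(panel)                                # step 1: global factor
for name, X in regions.items():
    # step 2: project out the global factor, then extract the regional factor
    b = np.linalg.lstsq(g_hat[:, None], X, rcond=None)[0].ravel()
    r_hat = first_pc(X - np.outer(g_hat, b))
    print(name, "regional factor; corr with global factor:",
          round(float(np.corrcoef(r_hat, g_hat)[0, 1]), 2))
```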
  8. By: Ladislav Kristoufek
    Abstract: We focus on power-law coherency as an alternative approach towards studying power-law cross-correlations between simultaneously recorded time series. To be able to study empirical data, we introduce three estimators of the power-law coherency parameter $H_{\rho}$ based on popular techniques usually utilized for studying power-law cross-correlations -- detrended cross-correlation analysis (DCCA), detrending moving-average cross-correlation analysis (DMCA) and height cross-correlation analysis (HXA). In the finite sample properties study, we focus on the bias, variance and mean squared error of the estimators. We find that the DMCA-based method is the safest choice among the three. The HXA method is reasonable for long time series with at least $10^4$ observations, which can be easily attainable in some disciplines but problematic in others. The DCCA-based method does not provide favorable properties, which even deteriorate as the time series length increases. The paper opens a new avenue towards studying cross-correlations between time series.
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1608.06781&r=ets
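    A bare-bones sketch, under several simplifying assumptions, of how a DCCA-type estimate of the bivariate scaling exponent can be combined with the univariate exponents to form a power-law coherency estimate of the form H_rho = H_xy - (H_x + H_y)/2, as used in this literature; the box sizes, linear detrending, and simulated inputs are illustrative and do not reproduce the paper's three estimators or their finite-sample study:
```python
# Hedged sketch: DCCA-style estimation of the bivariate scaling exponent H_xy
# and the univariate exponents H_x, H_y, combined into H_rho.
import numpy as np

def dcca_fluctuation(x, y, scales):
    """Average detrended covariance of the profiles of x and y per box size."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    F = []
    for s in scales:
        covs = []
        for start in range(0, len(X) - s + 1, s):
            t = np.arange(s)
            xs, ys = X[start:start + s], Y[start:start + s]
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # linear detrending
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
            covs.append(np.mean(rx * ry))
        F.append(np.mean(covs))
    return np.array(F)

def scaling_exponent(x, y, scales):
    F2 = dcca_fluctuation(x, y, scales)
    # F2(s) ~ s^(2H), so half the log-log slope estimates the exponent
    return np.polyfit(np.log(scales), np.log(np.abs(F2)), 1)[0] / 2

rng = np.random.default_rng(6)
n = 10000
z = rng.normal(size=n)
x = z + rng.normal(size=n)          # two noisy copies of a common component
y = z + rng.normal(size=n)
scales = np.unique(np.logspace(1, 3, 15).astype(int))

H_xy = scaling_exponent(x, y, scales)
H_x = scaling_exponent(x, x, scales)
H_y = scaling_exponent(y, y, scales)
print("H_rho estimate:", round(H_xy - (H_x + H_y) / 2, 3))
```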
  9. By: Kanaya, Shin
    Abstract: The convergence rates of the sums of α-mixing (or strongly mixing) triangular arrays of heterogeneous random variables are derived. We pay particular attention to the case where central limit theorems may fail to hold, due to relatively strong time-series dependence and/or the non-existence of higher-order moments. Several previous studies have presented various versions of laws of large numbers for sequences/triangular arrays, but their convergence rates were not fully investigated. This study is the first to investigate the convergence rates of the sums of α-mixing triangular arrays whose mixing coefficients are permitted to decay arbitrarily slowly. We consider two kinds of asymptotic assumptions: one is that the time distance between adjacent observations is fixed for any sample size n; the other, called the infill assumption, is that it shrinks to zero as n tends to infinity. Our convergence theorems indicate that an explicit trade-off exists between the rate of convergence and the degree of dependence. While the results under the infill assumption can be seen as a direct extension of those under the fixed-distance assumption, they are new and particularly useful for deriving sharper convergence rates of discretization biases in estimating continuous-time processes from discretely sampled observations. We also discuss some examples to which our results and techniques are applicable: a moving-average process with long-lasting past shocks, a continuous-time diffusion process with weak mean reversion, and a near-unit-root process.
    Keywords: Law of large numbers, rate of convergence, α-mixing triangular array, infill asymptotics, kernel estimation
    JEL: C14 C22 C58
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:hit:hituec:646&r=ets
  10. By: Cavicchioli, Maddalena; Forni, Mario; Lippi, Marco; Zaffaroni, Paolo
    Abstract: In this paper we introduce three dynamic eigenvalue ratio estimators for the number of dynamic factors. Two of them, the Dynamic Eigenvalue Ratio (DER) and the Dynamic Growth Ratio (DGR), are dynamic counterparts of the eigenvalue ratio estimators (ER and GR) proposed by Ahn and Horenstein (2013). The third, the Dynamic eigenvalue Difference Ratio (DDR), is new but closely related to the test statistic proposed by Onatski (2009). The advantage of such estimators is that they do not require preliminary determination of discretionary parameters. Finally, a static counterpart of the latter estimator, called the eigenvalue Difference Ratio estimator (DR), is also proposed. We prove consistency of these estimators and evaluate their performance in simulations. We conclude that both DDR and DR are valid alternatives to existing criteria. Application to real data gives new insights on the number of factors driving the US economy.
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:11440&r=ets
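    As a point of reference for the static case, a sketch of the ER and GR estimators of Ahn and Horenstein (2013) computed from sample-covariance eigenvalues; the dynamic versions proposed in the paper (DER, DGR, DDR) operate on eigenvalues of spectral density matrices and are not reproduced here:
```python
# Hedged sketch: static eigenvalue ratio (ER) and growth ratio (GR) estimators
# of the number of factors, applied to sample-covariance eigenvalues.
import numpy as np

def er_gr(X, kmax=8):
    """Return (ER-based, GR-based) estimates of the number of static factors."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(Xc.T @ Xc / (N * T))[::-1]   # descending order
    er = eigvals[:kmax] / eigvals[1:kmax + 1]                 # ER(k), k=1..kmax
    V = np.cumsum(eigvals[::-1])[::-1]                        # V[k] = sum of eigvals[k:]
    gr = np.log(V[:kmax] / V[1:kmax + 1]) / np.log(V[1:kmax + 1] / V[2:kmax + 2])
    return int(np.argmax(er) + 1), int(np.argmax(gr) + 1)

# simulate a panel driven by 3 factors
rng = np.random.default_rng(7)
T, N, r = 300, 100, 3
F = rng.normal(size=(T, r))
L = rng.normal(size=(N, r))
X = F @ L.T + rng.normal(size=(T, N))
print("ER, GR estimates of r:", er_gr(X))
```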
  11. By: Francis X. Diebold; Minchul Shin
    Abstract: We propose point forecast accuracy measures based directly on the distance of the forecast-error c.d.f. from the unit step function at 0 ("stochastic error distance," or SED). We provide a precise characterization of the relationship between SED and standard predictive loss functions, and we show that all such loss functions can be written as weighted SEDs. The leading case is absolute-error loss. Among other things, this suggests shifting attention away from conditional-mean forecasts and toward conditional-median forecasts.
    JEL: C52 C53
    Date: 2016–08
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:22516&r=ets
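    A quick numerical check, assuming an arbitrary error distribution, that the L1 distance between the forecast-error c.d.f. and the unit step at zero matches the mean absolute error, the leading case highlighted in the abstract:
```python
# Hedged sketch: numerically compare the L1 distance between the empirical
# forecast-error c.d.f. and the unit step at 0 with the mean absolute error.
import numpy as np

rng = np.random.default_rng(8)
e = rng.normal(loc=0.3, scale=1.0, size=100000)   # simulated forecast errors

# empirical c.d.f. evaluated on a fine grid
grid = np.linspace(-6, 6, 4001)
F = np.searchsorted(np.sort(e), grid, side="right") / e.size
step = (grid >= 0).astype(float)

# Riemann-sum approximation of the L1 distance over the grid
sed = np.sum(np.abs(F - step)) * (grid[1] - grid[0])
print("SED (L1 distance):  ", round(sed, 4))
print("mean absolute error:", round(np.mean(np.abs(e)), 4))
```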

This nep-ets issue is ©2016 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.