nep-ets New Economics Papers
on Econometric Time Series
Issue of 2011‒08‒09
twelve papers chosen by
Yong Yin
SUNY at Buffalo

  1. A sieve bootstrap range test for poolability in dependent cointegrated panels By Francesca Di Iorio; Stefano Fachin
  2. Filtering and decomposing time series in Stata 12 By David M. Drukker
  3. Forecasting Under Structural Break Uncertainty By Jing Tian; Heather M. Anderson
  4. Bayesian Analysis of Time-Varying Parameter Vector Autoregressive Model with the Ordering of Variables for the Japanese Economy and Monetary Policy By Jouchi Nakajima; Toshiaki Watanabe
  5. Quantile Forecasts of Financial Returns Using Realized GARCH Models By Toshiaki Watanabe
  6. Estimation and Inference in Predictive Regressions By Eiji Kurozumi; Kohei Aono
  7. Bayesian Inference in a Time Varying Cointegration Model By Gary Koop; Roberto Leon-Gonzales; Rodney W Strachan
  8. A Conditional-Heteroskedasticity-Robust Confidence Interval for the Autoregressive Parameter By Donald W.K. Andrews; Patrik Guggenberger
  9. Large Vector Auto Regressions By Song Song; Peter J. Bickel
  10. Understanding and forecasting aggregate and disaggregate price dynamics By Colin Bermingham; Antonello D’Agostino
  11. Median-Unbiased Estimation in DF-GLS Regressions and the PPP Puzzle By Lopez, C.; Murray, C J.; Papell, D H.
  12. Smoothing parameter selection for penalized spline estimators By Tatyana Krivobokova

  1. By: Francesca Di Iorio (Universita' di Napoli Federico II); Stefano Fachin (Universita' di Roma "La Sapienza")
    Abstract: We develop a sieve bootstrap range test for poolability of cointegrating regressions in dependent panels and evaluate its performance by simulation. Although slightly undersized, the test has good power even when only a single unit of the panel is heterogeneous.
    Keywords: Poolability, Panel cointegration, sieve bootstrap.
    JEL: C23 C15 E2
    Date: 2011–07
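    The AR-sieve resampling idea behind such a test can be illustrated in a few lines. The Python sketch below is a generic sieve bootstrap for a single series under an assumed AR(p) approximation; it is not the authors' range-test procedure itself, and the function name and defaults are illustrative:

    ```python
    import numpy as np

    def sieve_bootstrap(x, p=2, n_boot=100, rng=None):
        """Generic AR-sieve bootstrap: fit an AR(p) approximation by least
        squares, then resample its centred residuals to rebuild bootstrap
        replicates of the series."""
        rng = np.random.default_rng(rng)
        x = np.asarray(x, dtype=float)
        n = len(x)
        # Regressor matrix: intercept plus p lags of x.
        Z = np.column_stack([np.ones(n - p)] + [x[p - k - 1:n - k - 1] for k in range(p)])
        y = x[p:]
        phi, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ phi
        resid -= resid.mean()                  # centre residuals before resampling
        reps = []
        for _ in range(n_boot):
            e = rng.choice(resid, size=n, replace=True)
            xb = list(x[:p])                   # initialise with observed values
            for t in range(p, n):
                xb.append(phi[0] + sum(phi[1 + k] * xb[t - 1 - k] for k in range(p)) + e[t])
            reps.append(np.array(xb))
        return reps
    ```

    In a poolability test, statistics recomputed on such replicates would supply the bootstrap null distribution.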
  2. By: David M. Drukker (StataCorp LP)
    Abstract: In this talk, I introduce new methods in Stata 12 for filtering and decomposing time series and I show how to implement them. I provide an underlying framework for understanding and comparing the different methods. I also present a framework for interpreting the parameters.
    Date: 2011–07–20
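    A classic example of the kind of trend/cycle decomposition such filters perform is the Hodrick-Prescott filter, one of the filters available through Stata 12's tsfilter command. A minimal numpy sketch, assuming the standard second-difference penalty form:

    ```python
    import numpy as np

    def hp_filter(y, lam=1600.0):
        """Hodrick-Prescott decomposition: solve (I + lam * D'D) tau = y
        for the trend tau, where D is the second-difference matrix; the
        cycle is the remainder y - tau."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        D = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second-difference matrix
        trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
        return trend, y - trend                  # trend and cyclical component
    ```

    For an exactly linear input the second differences vanish, so the filter returns the series itself as the trend and a zero cycle.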
  3. By: Jing Tian; Heather M. Anderson
    Abstract: This paper proposes two new weighting schemes that average forecasts using different estimation windows to account for structural change. We let the weights reflect the probability of each time point being the most recent break point, and we use the reversed ordered Cusum test statistics to capture this intuition. The second weighting method simply imposes heavier weights on those forecasts that use more recent information. The proposed combination forecasts are evaluated using Monte Carlo techniques, and we compare them with forecasts based on other methods that try to account for structural change, including average forecasts weighted by past forecasting performance and techniques that first estimate a break point and then forecast using the post-break data. Simulation results show that our proposed weighting methods often outperform the others in the presence of structural breaks. An empirical application based on a NAIRU Phillips curve model for the United States indicates that it is possible to outperform the random walk forecasting model when we employ forecasting methods that account for break uncertainty.
    Keywords: Forecasting with structural breaks, Parameter shifts, Break uncertainty, Structural break tests, Choice of estimation sample, Forecast combinations, NAIRU Phillips curve.
    JEL: C22 C53 E37
    Date: 2011–07
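    The second, recency-based weighting scheme can be sketched directly. In the Python sketch below the forecasts are simple window means and the weights are inverse window lengths; both are illustrative assumptions, not the authors' exact formulas:

    ```python
    import numpy as np

    def recency_weighted_forecast(y, windows):
        """Combine one-step forecasts computed over several estimation
        windows, giving heavier weight to shorter (more recent) windows.
        Each window's forecast here is just the window mean."""
        y = np.asarray(y, dtype=float)
        forecasts = np.array([y[-w:].mean() for w in windows])   # per-window forecasts
        weights = 1.0 / np.asarray(windows, dtype=float)         # favour recent data
        weights /= weights.sum()
        return float(weights @ forecasts)
    ```

    After a recent level shift, the combined forecast sits closer to the post-break mean than an equal-weighted average would, which is the point of down-weighting long pre-break windows.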
  4. By: Jouchi Nakajima; Toshiaki Watanabe
    Abstract: This paper applies the time-varying parameter vector autoregressive model to the Japanese economy. Both the parameters and the volatilities, which are assumed to follow random-walk processes, are estimated using a Bayesian method with MCMC. A recursive structure is assumed for identification, and a reversible jump MCMC is used for the ordering of variables. The empirical results reveal the time-varying structure of the Japanese economy and monetary policy during the period from 1981 to 2008 and provide evidence that the order of variables may change with the introduction of the zero interest rate policy.
    Keywords: Bayesian inference, Monetary policy, Reversible jump Markov chain Monte Carlo, Stochastic volatility, Time-varying parameter VAR
    JEL: C11 C15 E52
    Date: 2011–07
  5. By: Toshiaki Watanabe
    Abstract: This article applies the realized GARCH model, which combines the GARCH model with realized volatility (RV), to quantile forecasts of financial returns such as Value-at-Risk and expected shortfall. This model has certain advantages in the application to quantile forecasts because it can adjust the bias of RV caused by microstructure noise and non-trading hours, and it enables us to estimate the parameters of the return distribution jointly with the other parameters. Student's t and skewed Student's t distributions, as well as the normal distribution, are used for the return distribution. The EGARCH model is used for comparison. The main results for the S&P 500 stock index are: (1) the realized GARCH model with the skewed Student's t-distribution performs better than that with the normal and Student's t distributions, and better than the EGARCH model using daily returns only, and (2) the performance does not improve if the realized kernel, which takes account of microstructure noise, is used instead of the plain realized volatility, implying that the realized GARCH model can adjust the bias of RV caused by microstructure noise.
    Keywords: Expected shortfall, GARCH, Realized volatility, Skewed Student's t-distribution, Value-at-Risk
    JEL: C52 C53
    Date: 2011–07
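    Once a conditional volatility forecast is in hand, the VaR forecast is simply a quantile of the assumed return distribution. The sketch below uses a variance-standardised Student's t; the function name and the degrees-of-freedom default are illustrative assumptions, not the paper's estimates:

    ```python
    import numpy as np
    from scipy.stats import t as student_t

    def var_forecast(mu, sigma, alpha=0.01, df=5.0):
        """One-step-ahead Value-at-Risk as the alpha-quantile of a scaled
        Student's t return distribution, given a conditional mean mu and
        volatility forecast sigma (e.g. from a realized GARCH model)."""
        # A t(df) variate has variance df/(df-2) for df > 2, so rescale
        # it to unit variance before multiplying by sigma.
        scale = sigma * np.sqrt((df - 2.0) / df)
        return mu + scale * student_t.ppf(alpha, df)
    ```

    With very large df this reproduces the normal quantile; with heavy tails (small df) the 1% VaR is noticeably deeper than the normal one.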
  6. By: Eiji Kurozumi; Kohei Aono
    Abstract: This paper proposes new point estimates for predictive regressions. They are easily obtained by least squares and instrumental variable methods. These estimates, called plug-in estimates, have attractive asymptotic properties such as median unbiasedness and approximate normality of the associated t-statistics. In addition, the plug-in estimates are shown to have good finite sample properties via Monte Carlo simulations. Using the new estimates, we investigate U.S. stock returns and find that some variables, which have not been statistically detected as useful predictors in the literature, are able to predict stock returns. Because of these properties, our methods complement the existing statistical tests for predictability in investigating the relations between stock returns and economic variables.
    Keywords: unit root, near unit root, bias, median unbiased, stock return
    JEL: C13 C22
    Date: 2011–05
  7. By: Gary Koop; Roberto Leon-Gonzales; Rodney W Strachan
    Abstract: There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop a new time-varying parameter model which permits cointegration. We use a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving the Fisher effect.
    Date: 2011–08
  8. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCSD)
    Abstract: This paper introduces a new confidence interval (CI) for the autoregressive parameter (AR) in an AR(1) model that allows for conditional heteroskedasticity of general form and AR parameters that are less than or equal to unity. The CI is a modification of Mikusheva's (2007a) modification of Stock's (1991) CI that employs the least squares estimator and a heteroskedasticity-robust variance estimator. The CI is shown to have correct asymptotic size and to be asymptotically similar (in a uniform sense). It does not require any tuning parameters. No existing procedures have these properties. Monte Carlo simulations show that the CI performs well in finite samples in terms of coverage probability and average length, for innovations with and without conditional heteroskedasticity.
    Keywords: Asymptotically similar, Asymptotic size, Autoregressive model, Conditional heteroskedasticity, Confidence interval, Hybrid test, Subsampling test, Unit root
    JEL: C12 C15 C22
    Date: 2011–08
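    The least-squares estimator and the heteroskedasticity-robust variance estimator that the CI builds on are easy to state. The sketch below shows only these two ingredients, not the grid-based CI construction itself:

    ```python
    import numpy as np

    def ar1_ls_robust(x):
        """Least-squares estimate of the AR(1) coefficient together with an
        Eicker-White heteroskedasticity-robust standard error:
        Var(rho_hat) ~ sum(x_{t-1}^2 u_t^2) / (sum x_{t-1}^2)^2."""
        x = np.asarray(x, dtype=float)
        y, ylag = x[1:], x[:-1]
        rho = (ylag @ y) / (ylag @ ylag)                         # LS estimate
        u = y - rho * ylag                                       # LS residuals
        se = np.sqrt(np.sum(ylag**2 * u**2)) / (ylag @ ylag)     # HC-robust SE
        return rho, se
    ```

    Inverting robust t-statistics of this form over a grid of AR values is the general shape of Stock-type confidence intervals, though the paper's uniformity results are what make its particular construction valid up to unity.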
  9. By: Song Song; Peter J. Bickel
    Abstract: One popular approach for nonstructural economic and financial forecasting is to include a large number of economic and financial variables, which has been shown to lead to significant improvements in forecasting, for example by dynamic factor models. A challenging issue is to determine which variables and which of their lags are relevant, especially when there is a mixture of serial correlation (temporal dynamics), a high-dimensional (spatial) dependence structure and a moderate sample size (relative to dimensionality and lags). To this end, an integrated solution that addresses these three challenges simultaneously is appealing. We study large vector auto regressions here with three types of estimates. We treat each variable's own lags differently from other variables' lags, distinguish various lags over time, and are able to select the variables and lags simultaneously. We first show the consequences of using a Lasso-type estimate directly for time series without considering the temporal dependence. In contrast, our proposed method can still produce an estimate as efficient as an oracle under such scenarios. The tuning parameters are chosen via a data-driven "rolling scheme" method to optimize the forecasting performance. A macroeconomic and financial forecasting problem is considered to illustrate its superiority over existing estimators.
    Keywords: Time Series, Vector Auto Regression, Regularization, Lasso, Group Lasso, Oracle estimator
    JEL: C13 C14 C32 E30 E40 G10
    Date: 2011–08
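    Equation-by-equation Lasso estimation of a VAR can be sketched with plain coordinate descent. The code below is a naive illustration of the general idea only; the paper's estimators weight own lags differently from other variables' lags and tune the penalty by a rolling forecasting scheme, neither of which is reproduced here:

    ```python
    import numpy as np

    def lasso_var(X, p=1, lam=0.1, n_iter=200):
        """Lasso estimate of a VAR(p) coefficient matrix via coordinate
        descent on (1/2n)||Y - ZB||^2 + lam*||B||_1, equation by equation.
        Returns B of shape (k*p, k): row j holds regressor j's coefficients
        across the k equations."""
        X = np.asarray(X, dtype=float)
        T, k = X.shape
        Z = np.hstack([X[p - l - 1:T - l - 1] for l in range(p)])  # lagged regressors
        Y = X[p:]
        n, m = Z.shape
        B = np.zeros((m, k))
        col_ss = (Z**2).sum(axis=0)
        for _ in range(n_iter):
            for j in range(m):
                # Partial residual: add regressor j's contribution back in.
                R = Y - Z @ B + np.outer(Z[:, j], B[j])
                rho_j = Z[:, j] @ R
                # Soft-threshold update for all k equations at once.
                B[j] = np.sign(rho_j) * np.maximum(np.abs(rho_j) - lam * n, 0.0) / col_ss[j]
        return B
    ```

    On data from a diagonal VAR(1), the off-diagonal coefficients are shrunk toward (or exactly to) zero, which is the selection behaviour the abstract refers to.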
  10. By: Colin Bermingham (Central Bank of Ireland, Dame Street, Dublin 2, Ireland.); Antonello D’Agostino (European Central Bank, Kaiserstrasse 29, D-60311 Frankfurt am Main, Germany and Central Bank of Ireland.)
    Abstract: The issue in forecast aggregation is whether it is better to forecast a series directly or instead to construct forecasts of its components and then sum these component forecasts. Notwithstanding some underlying theoretical results, it is generally accepted that forecast aggregation is an empirical issue. Empirical results in the literature often go unexplained, which leaves forecasters in the dark when confronted with the option of forecast aggregation. We take our empirical exercise a step further by considering the underlying issues in more detail. We analyse two price datasets, one for the United States and one for the Euro Area, which have distinctive dynamics and provide a guide to model choice. We also consider multiple levels of aggregation for each dataset. The models include an autoregressive model, a factor-augmented autoregressive model, a large Bayesian VAR and a time-varying model with stochastic volatility. We find that once the appropriate model has been found, forecast aggregation can significantly improve forecast performance. These results are robust to the choice of data transformation.
    Keywords: Aggregation, Forecasting, Inflation.
    JEL: E17 E31 C11 C38
    Date: 2011–08
  11. By: Lopez, C.; Murray, C J.; Papell, D H.
    Abstract: This article analyzes the hysteresis hypothesis in the unemployment rates of the four French overseas regions (Guadeloupe, Martinique, Guyana, Reunion) [FORs] over the period 1993–2008. We use standard univariate and panel unit root tests, among them Choi (2006) and Lopez (2009), which account for cross-sectional dependence and have improved performance when the number of countries and the time dimension of the data are limited. Our results cannot reject the null hypothesis of a unit root and thus find evidence supporting hysteresis in the unemployment rates of the FORs.
    Keywords: PPP puzzle, median-unbiased, persistence.
    JEL: C22 F31
    Date: 2011
  12. By: Tatyana Krivobokova (Georg-August-University Göttingen)
    Abstract: There are two popular smoothing parameter selection methods for spline smoothing. First, criteria that approximate the average mean squared error of the estimator (e.g. generalized cross validation) are widely used. Alternatively, the maximum likelihood paradigm can be employed under the assumption that the underlying function to be estimated is a realization of some stochastic process. In this article the asymptotic properties of both smoothing parameter estimators are studied and compared in the frequentist and stochastic frameworks for penalized spline smoothing. Consistency and asymptotic normality of the estimators are proved, and small sample properties are discussed. A simulation study and a real data example illustrate the theoretical findings.
    Keywords: Maximum likelihood; Mean squared error minimizer; Penalized splines; Smoothing splines
    Date: 2011–08–02
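    The GCV criterion mentioned in the abstract is straightforward to implement for a penalized spline. In this sketch the truncated-line basis, the penalty matrix and the candidate grid are all illustrative assumptions:

    ```python
    import numpy as np

    def gcv_smoother(x, y, lambdas, n_knots=10):
        """Select the penalty for a simple penalized (truncated-line) spline
        by generalized cross validation:
        GCV(lam) = n * RSS(lam) / (n - tr(S_lam))^2."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(y)
        knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
        # Design: intercept, linear term, truncated lines at interior knots.
        B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0.0) for k in knots])
        P = np.diag([0.0, 0.0] + [1.0] * n_knots)   # penalise only knot coefficients
        best = None
        for lam in lambdas:
            H = B @ np.linalg.solve(B.T @ B + lam * P, B.T)   # hat matrix S_lambda
            resid = y - H @ y
            gcv = n * (resid @ resid) / (n - np.trace(H)) ** 2
            if best is None or gcv < best[1]:
                best = (lam, gcv)
        return best[0]
    ```

    The maximum-likelihood alternative discussed in the paper would instead pick the penalty by treating the knot coefficients as random effects in a mixed model.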

This nep-ets issue is ©2011 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.