nep-ets New Economics Papers
on Econometric Time Series
Issue of 2012‒05‒15
eleven papers chosen by
Yong Yin
SUNY at Buffalo

  1. Stein-Rule Estimation and Generalized Shrinkage Methods for Forecasting Using Many Predictors By Eric Hillebrand; Tae-Hwy Lee
  2. Oracle Inequalities for High Dimensional Vector Autoregressions By Anders Bredahl Kock; Laurent A.F. Callot
  3. Overlapping sub-sampling and invariance to initial conditions By Kyriacou, Maria
  4. Alternative Methodology for Turning-Point Detection in Business Cycle: A Wavelet Approach By Peter Martey Addo; Monica Billio; Dominique Guegan
  5. Robust Ranking of Multivariate GARCH Models by Problem Dimension By Massimiliano Caporin; Michael McAleer
  6. Nonparametric tests for conditional independence using conditional distributions By Taoufik Bouezmarni; Abderrahim Taamouti
  7. Estimating a semiparametric asymmetric stochastic volatility model with a Dirichlet process mixture By Mark J. Jensen; John M. Maheu
  8. Evaluating DSGE model forecasts of comovements By Edward Herbst; Frank Schorfheide
  9. Robust Standard Errors in Transformed Likelihood Estimation of Dynamic Panel Models By Hayakawa, K.; Pesaran, M.H.
  10. Indirect estimation of GARCH models with alpha-stable innovations By Parrini, Alessandro
  11. Large time-varying parameter VARs By Koop, Gary; Korobilis, Dimitris

  1. By: Eric Hillebrand (Aarhus University and CREATES); Tae-Hwy Lee (University of California, Riverside)
    Abstract: We examine the Stein-rule shrinkage estimator for possible improvements in estimation and forecasting when there are many predictors in a linear time series model. We consider the Stein-rule estimator of Hill and Judge (1987) that shrinks the unrestricted unbiased OLS estimator towards a restricted biased principal component (PC) estimator. Since the Stein-rule estimator combines the OLS and PC estimators, it is a model-averaging estimator and produces a combined forecast. The conditions under which the improvement can be achieved depend on several unknown parameters that determine the degree of the Stein-rule shrinkage. We conduct Monte Carlo simulations to examine these parameter regions. The overall picture that emerges is that the Stein-rule shrinkage estimator can dominate both OLS and principal components estimators within an intermediate range of the signal-to-noise ratio. If the signal-to-noise ratio is low, the PC estimator is superior. If the signal-to-noise ratio is high, the OLS estimator is superior. In out-of-sample forecasting with AR(1) predictors, the Stein-rule shrinkage estimator can dominate both OLS and PC estimators when the predictors exhibit low persistence.
    Keywords: Stein-rule, shrinkage, risk, variance-bias tradeoff, OLS, principal components.
    JEL: C1 C2 C5
    Date: 2012–04–30
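    The shrinkage combination described in this abstract can be sketched in a few lines. The sketch below uses a generic positive-part Stein rule that shrinks OLS toward a principal-components fit; the simulated data, the number of retained PCs, and the shrinkage constant are illustrative choices, not the exact Hill and Judge (1987) formula studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 10, 3            # observations, predictors, retained PCs

X = rng.standard_normal((n, p))
beta = np.r_[np.ones(k), np.zeros(p - k)]
y = X @ beta + rng.standard_normal(n)

# Unrestricted OLS estimator
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Restricted estimator: regress y on the first k principal components
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:k].T
g, *_ = np.linalg.lstsq(scores, y, rcond=None)
b_pc = Vt[:k].T @ g             # map PC coefficients back to predictor space

# Positive-part Stein-rule combination: shrink OLS toward the PC estimator.
# The weight below is a textbook positive-part rule, not necessarily the
# exact formula used in the paper.
rss_ols = np.sum((y - X @ b_ols) ** 2)
rss_pc = np.sum((y - X @ b_pc) ** 2)
F = ((rss_pc - rss_ols) / (p - k)) / (rss_ols / (n - p))
c = (p - k - 2) / (n - p + 2)
shrink = max(0.0, 1.0 - c / F)
b_stein = b_pc + shrink * (b_ols - b_pc)
```

    Because the combined estimator is a convex combination of OLS and PC fits, it inherits the model-averaging interpretation the abstract describes.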
  2. By: Anders Bredahl Kock (Aarhus University and CREATES); Laurent A.F. Callot (Aarhus University and CREATES)
    Abstract: This paper establishes non-asymptotic oracle inequalities for the prediction error and estimation accuracy of the LASSO in stationary vector autoregressive models. These inequalities are used to establish consistency of the LASSO even when the number of parameters is of a much larger order of magnitude than the sample size. Furthermore, it is shown that under suitable conditions the number of variables selected is of the right order of magnitude and that no relevant variables are excluded. Next, non-asymptotic probabilities are given for the Adaptive LASSO to select the correct sign pattern (and hence the correct sparsity pattern). Finally, conditions under which the Adaptive LASSO reveals the correct sign pattern with probability tending to one are given. Again, the number of parameters may be much larger than the sample size. Some maximal inequalities for vector autoregressions which might be of independent interest are contained in the appendix.
    Keywords: Vector autoregression, LASSO, Adaptive LASSO, Oracle inequality, Variable selection.
    JEL: C01 C02 C13 C32
    Date: 2012–04–30
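    As a rough illustration of LASSO estimation in a VAR, the sketch below simulates a sparse VAR(1) and solves each equation's LASSO problem with a simple proximal-gradient (ISTA) loop. The penalty level and the solver are illustrative stand-ins; the paper's contribution is theoretical (oracle inequalities), not this particular algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 300, 5                   # sample size, number of series

# Simulate a sparse, stationary VAR(1): y_t = A y_{t-1} + e_t
A_true = np.diag([0.5] * K)
A_true[0, 1] = 0.3
Y = np.zeros((T, K))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true.T + 0.1 * rng.standard_normal(K)

X, Z = Y[:-1], Y[1:]            # lagged regressors, targets

def lasso_ista(X, z, lam, n_iter=500):
    """Proximal-gradient (ISTA) solver for 0.5||Xb - z||^2 + lam ||b||_1."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - z)
        b = b - g / L
        b = np.sign(b) * np.maximum(np.abs(b) - lam / L, 0.0)  # soft-threshold
    return b

# Estimate the VAR equation by equation, each with its own LASSO fit
lam = 0.5
A_hat = np.vstack([lasso_ista(X, Z[:, j], lam) for j in range(K)])
```

    Equation-by-equation estimation is what makes the LASSO attractive for large VARs: each row of the coefficient matrix is a separate penalized regression.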
  3. By: Kyriacou, Maria
    Abstract: This paper studies the use of the overlapping blocking scheme in unit root autoregression. When the underlying process is a random walk, the blocks' initial conditions are not fixed, but equal the sum of all previous observations' error terms. When non-overlapping subsamples are used, as first shown by Chambers and Kyriacou (2010), these initial conditions do not disappear asymptotically. In this paper we show that a simple way of overcoming this issue is to use overlapping blocks: by doing so, the effect of these initial conditions vanishes asymptotically. An application of these findings to jackknife estimators indicates that an estimator based on moving blocks provides clear reductions in mean squared error.
    Date: 2012–05–01
  4. By: Peter Martey Addo (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon Sorbonne, Università Ca' Foscari of Venice - Department of Economics); Monica Billio (Università Ca' Foscari of Venice - Department of Economics); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon Sorbonne, EEP-PSE - Ecole d'Économie de Paris - Paris School of Economics - Ecole d'Économie de Paris)
    Abstract: We provide a signal modality analysis to characterize and detect nonlinearity schemes in the US Industrial Production Index time series. The analysis is achieved by using the recently proposed 'delay vector variance' (DVV) method, which examines local predictability of a signal in the phase space to detect the presence of determinism and nonlinearity in a time series. Optimal embedding parameters used in the DVV analysis are obtained via a differential entropy-based method using wavelet-based surrogates. A complex Morlet wavelet is employed to detect and characterize the US business cycle. A comprehensive analysis of the feasibility of this approach is provided. Our results coincide with the business cycle peak and trough dates published by the National Bureau of Economic Research (NBER).
    Keywords: Nonparametric methods, STAR models, business cycles.
    Date: 2012–04
  5. By: Massimiliano Caporin (Dipartimento di Scienze Economiche "Marco Fanno" (Department of Economics and Management), Università degli Studi di Padova); Michael McAleer (Econometrisch Instituut (Econometric Institute), Faculteit der Economische Wetenschappen (Erasmus School of Economics), Erasmus Universiteit, Tinbergen Instituut (Tinbergen Institute).)
    Abstract: During the last 15 years, several Multivariate GARCH (MGARCH) models have appeared in the literature. Recent research has begun to examine MGARCH specifications in terms of their out-of-sample forecasting performance. We provide an empirical comparison of alternative MGARCH models, namely BEKK, DCC, Corrected DCC (cDCC), CCC, OGARCH, Exponentially Weighted Moving Average, and covariance shrinking, using historical data for 89 US equities. We contribute to the literature in several directions. First, we consider a wide range of models, including the recent cDCC and covariance shrinking models. Second, we use a range of tests and approaches for direct and indirect model comparison, including the Model Confidence Set. Third, we examine how the robust model rankings are influenced by the cross-sectional dimension of the problem.
    Keywords: Covariance forecasting, model confidence set, robust model ranking, MGARCH, robust model comparison.
    JEL: C32 C53 C52
    Date: 2012–04
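    Of the covariance forecasters compared in this abstract, the Exponentially Weighted Moving Average is the simplest to state. The sketch below is the standard RiskMetrics-style recursion applied to simulated returns; the decay parameter 0.94 is the conventional daily-data choice, not a value taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
T, N = 500, 4
R = rng.standard_normal((T, N)) * 0.01      # daily-return-like panel

def ewma_cov(R, lam=0.94):
    """RiskMetrics-style EWMA covariance recursion:
    S_t = lam * S_{t-1} + (1 - lam) * r_t r_t'."""
    S = np.cov(R.T)                          # initialize at sample covariance
    for t in range(R.shape[0]):
        S = lam * S + (1 - lam) * np.outer(R[t], R[t])
    return S

S_hat = ewma_cov(R)
```

    Unlike BEKK or DCC, the recursion has no parameters to estimate, which is why it serves as a natural benchmark as the cross-sectional dimension grows.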
  6. By: Taoufik Bouezmarni; Abderrahim Taamouti
    Abstract: The concept of causality is naturally defined in terms of conditional distributions; however, almost all empirical work focuses on causality in mean. This paper proposes a nonparametric statistic to test for conditional independence and Granger non-causality between two variables conditional on a third. The test statistic is based on a comparison of conditional distribution functions using an L2 metric, with the conditional distribution functions estimated by the Nadaraya-Watson method. We establish the asymptotic size and power properties of the test statistic and motivate the validity of the local bootstrap. Further, we run a simulation experiment to investigate the finite-sample properties of the test, and we illustrate its practical relevance by examining Granger non-causality between S&P 500 Index returns and the VIX volatility index. Contrary to the conventional t-test, which is based on a linear mean-regression model, we find that the VIX index predicts excess returns at both short and long horizons.
    Keywords: Nonparametric tests, Time series, Conditional independence, Granger non-causality, Nadaraya-Watson estimator, Conditional distribution function, VIX volatility index, S&P500 index
    JEL: C12 C14 C15 C19 G1 G12 E3 E4
    Date: 2011–10
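    The building block of the test, a kernel estimate of a conditional distribution function, is easy to sketch. The L2-type contrast below compares conditional and unconditional CDFs on a quantile grid; it is an illustrative distance between distributions, not the paper's exact statistic, and it omits the conditioning variable and the local bootstrap that deliver valid critical values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.standard_normal(n)
y = 0.8 * x + 0.6 * rng.standard_normal(n)   # y depends on x

def gauss_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def cond_cdf(y_eval, x_eval, X, Y, h):
    """Nadaraya-Watson estimate of F(y | x): kernel-weighted empirical CDF."""
    w = gauss_kernel((x_eval - X) / h)
    return np.sum(w * (Y <= y_eval)) / np.sum(w)

# L2-type contrast between conditional and unconditional CDFs on a grid.
h = 1.06 * x.std() * n ** (-1 / 5)           # rule-of-thumb bandwidth
y_grid = np.quantile(y, np.linspace(0.1, 0.9, 9))
x_grid = np.quantile(x, np.linspace(0.1, 0.9, 9))
stat = np.mean([(cond_cdf(yg, xg, x, y, h) - np.mean(y <= yg)) ** 2
                for yg in y_grid for xg in x_grid])
```

    Under independence the contrast hovers near zero; under dependence, as simulated here, it is bounded away from zero, which is the intuition behind the test's power.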
  7. By: Mark J. Jensen; John M. Maheu
    Abstract: In this paper, we extend the parametric, asymmetric, stochastic volatility model (ASV), where returns are correlated with volatility, by flexibly modeling the bivariate distribution of the return and volatility innovations nonparametrically. Its novelty is in modeling the joint, conditional, return-volatility distribution with an infinite mixture of bivariate Normal distributions with mean zero vectors, but having unknown mixture weights and covariance matrices. This semiparametric ASV model nests stochastic volatility models whose innovations are distributed as either Normal or Student-t distributions, and allows a more general volatility response to unexpected return shocks than the fixed asymmetric response of the parametric ASV model. The unknown mixture parameters are modeled with a Dirichlet process prior. This prior ensures a parsimonious, finite, posterior mixture that best represents the distribution of the innovations and a straightforward sampler of the conditional posteriors. We develop a Bayesian Markov chain Monte Carlo sampler to fully characterize the parametric and distributional uncertainty. Nested model comparisons and out-of-sample predictions with the cumulative marginal likelihoods, and one-day-ahead predictive log-Bayes factors between the semiparametric and parametric versions of the ASV model, show that the semiparametric model produces more accurate forecasts of empirical market returns. A major reason is how volatility responds to an unexpected market movement. When the market is tranquil, expected volatility reacts to a negative (positive) price shock by rising (initially declining, but then rising when the positive shock is large). However, when the market is volatile, the degree of asymmetry and the size of the response in expected volatility are muted. In other words, when times are good, no news is good news, but when times are bad, neither good nor bad news matters with regard to volatility.
    Date: 2012
  8. By: Edward Herbst; Frank Schorfheide
    Abstract: This paper develops and applies tools to assess multivariate aspects of Bayesian Dynamic Stochastic General Equilibrium (DSGE) model forecasts and their ability to predict comovements among key macroeconomic variables. We construct posterior predictive checks to evaluate conditional and unconditional density forecasts, in addition to checks for root-mean-squared errors and event probabilities associated with these forecasts. The checks are implemented on a three-equation DSGE model as well as the Smets and Wouters (2007) model using real-time data. We find that the additional features incorporated into the Smets-Wouters model do not lead to a uniform improvement in the quality of density forecasts and prediction of comovements of output, inflation, and interest rates.
    Date: 2012
  9. By: Hayakawa, K.; Pesaran, M.H.
    Abstract: This paper extends the transformed maximum likelihood approach for estimation of dynamic panel data models of Hsiao, Pesaran, and Tahmiscioglu (2002) to the case where the errors are cross-sectionally heteroskedastic. This extension is not trivial due to the incidental parameters problem that arises, and its implications for estimation and inference. We approach the problem by working with a mis-specified homoskedastic model. It is shown that the transformed maximum likelihood estimator continues to be consistent even in the presence of cross-sectional heteroskedasticity. We also obtain standard errors that are robust to cross-sectional heteroskedasticity of unknown form. By means of Monte Carlo simulation, we investigate the finite sample behavior of the transformed maximum likelihood estimator and compare it with various GMM estimators proposed in the literature. Simulation results reveal that, in terms of median absolute errors and accuracy of inference, the transformed likelihood estimator outperforms the GMM estimators in almost all cases.
    Keywords: Dynamic Panels, Cross-sectional heteroskedasticity, Monte Carlo simulation, GMM estimation
    JEL: C12 C13 C23
    Date: 2012–05–09
  10. By: Parrini, Alessandro
    Abstract: Several studies have highlighted the fact that heavy-tailedness of asset returns can be the consequence of conditional heteroskedasticity. GARCH models have thus become very popular, given their ability to account for volatility clustering and, implicitly, heavy tails. However, these models encounter some difficulties in handling financial time series, as they respond equally to positive and negative shocks, and their tails remain too thin even with Student-t error terms. To overcome these weaknesses we apply GARCH-type models with alpha-stable innovations. The stable family of distributions constitutes a generalization of the Gaussian distribution that has intriguing theoretical and practical properties. Indeed it is stable under addition and, having four parameters, it allows for asymmetry and heavy tails. Unfortunately stable models do not have a closed-form likelihood function, but since simulated values from α-stable distributions can be straightforwardly obtained, the indirect inference approach is particularly suited to the situation at hand. In this work we provide a description of how to estimate a GARCH(1,1) and a TGARCH(1,1) with symmetric stable shocks using as auxiliary model a GARCH(1,1) with skew-t innovations. Monte Carlo simulations, conducted using GAUSS, are presented, and finally the proposed models are fitted to the IBM weekly return series as an illustration of how they perform on real data.
    Keywords: GARCH; alpha-stable distribution; indirect estimation; skew-t distribution; Monte Carlo simulations
    JEL: C13 C32 C87 C15 C01
    Date: 2012–04–18
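    Simulating alpha-stable shocks, the step that makes indirect inference feasible here, can be done with the Chambers-Mallows-Stuck method. The sketch below plugs symmetric stable draws into a GARCH(1,1) recursion; the parameter values are illustrative, and with infinite-variance shocks the recursion should be read as a scale process rather than a conditional variance in the usual sense.

```python
import numpy as np

rng = np.random.default_rng(4)

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck draws from a symmetric alpha-stable law."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def simulate_garch_stable(T, omega, a, b, alpha, rng):
    """GARCH(1,1) driven by symmetric alpha-stable innovations."""
    z = sym_stable(alpha, T, rng)
    r, sigma2 = np.zeros(T), np.zeros(T)
    sigma2[0] = omega / (1 - a - b)          # start at the fixed point
    for t in range(T):
        if t > 0:
            sigma2[t] = omega + a * r[t - 1] ** 2 + b * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * z[t]
    return r

returns = simulate_garch_stable(1000, 0.05, 0.05, 0.90, 1.8, rng)
```

    In the indirect inference loop, paths like `returns` would be simulated at candidate parameter values and matched to the data through the skew-t GARCH auxiliary model's estimates.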
  11. By: Koop, Gary; Korobilis, Dimitris
    Abstract: In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
    Keywords: Bayesian VAR; forecasting; time-varying coefficients; state-space model
    JEL: E27 C52 E37 C11
    Date: 2012–02–28
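    The forgetting-factor device that keeps estimation computationally simple can be sketched for a single time-varying-parameter regression: the Kalman prediction step inflates the state covariance by 1/lambda instead of specifying an explicit state-noise covariance. The data, the value of lambda, and the measurement variance below are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)
T, K = 200, 2

# Simulate a regression with a slowly drifting coefficient vector
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
beta = np.zeros((T, K))
beta[0] = [0.5, 1.0]
for t in range(1, T):
    beta[t] = beta[t - 1] + 0.02 * rng.standard_normal(K)
y = np.sum(X * beta, axis=1) + 0.1 * rng.standard_normal(T)

def kalman_forgetting(y, X, lam=0.99, sigma2=0.01):
    """Kalman filter for time-varying coefficients where the prediction-step
    state covariance is inflated by 1/lam (the forgetting factor) instead
    of adding an explicit state-noise covariance."""
    T, K = X.shape
    b = np.zeros(K)
    P = 10.0 * np.eye(K)                 # diffuse initial covariance
    path = np.zeros((T, K))
    for t in range(T):
        P_pred = P / lam                 # forgetting-factor prediction step
        x = X[t]
        S = x @ P_pred @ x + sigma2      # one-step forecast error variance
        gain = P_pred @ x / S
        b = b + gain * (y[t] - x @ b)    # measurement update
        P = P_pred - np.outer(gain, x) @ P_pred
        path[t] = b
    return path

beta_hat = kalman_forgetting(y, X)
```

    Because no state covariance matrix has to be estimated, the same recursion scales to the large VAR systems the abstract describes, one forgetting factor replacing many free parameters.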

This nep-ets issue is ©2012 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.