nep-ets New Economics Papers
on Econometric Time Series
Issue of 2011‒03‒26
ten papers chosen by
Yong Yin
SUNY at Buffalo

  1. Note on the Interpretation of Convergence Speed in the Dynamic Panel Model By Masahiko Shibamoto; Yoshiro Tsutsui
  2. Improving forecasting performance by window and model averaging By Prasad S Bhattacharya; Dimitrios D Thomakos
  3. Bootstrap Tests for Structural Breaks When the Regressors and Error Term are Nonstationary By Dong Jin Lee
  4. Small Sample Properties of Alternative Tests for Martingale Difference Hypothesis By Amélie Charles; Olivier Darné; Jae H Kim
  5. Relating Stochastic Volatility Estimation Methods By Charles S. Bos
  6. Zero Variance Markov Chain Monte Carlo for Bayesian Estimators By Antonietta Mira; Daniele Imparato; Reza Solgi
  7. Cointegrating Polynomial Regressions By Hong, Seung Hyun; Wagner, Martin
  8. Markov-Switching MIDAS Models By Pierre Guerin; Massimiliano Marcellino
  9. Multifractal detrending moving average cross-correlation analysis By Zhi-Qiang Jiang; Wei-Xing Zhou
  10. Testing for non-causality by using the Autoregressive Metric By Di Iorio, Francesca; Triacca, Umberto

  1. By: Masahiko Shibamoto (Research Institute for Economics and Business Administration, Kobe University); Yoshiro Tsutsui (Graduate School of Economics, Osaka University)
    Abstract: Studies using the dynamic panel regression approach have found the speed of income convergence among the world and regional economies to be high. For example, Lee et al. (1997, 1998) report the income convergence speed to be 30% per annum. This note argues that their estimates may be seriously overstated. Using a factor model, we show that the coefficient of the lagged income in their specification may not be the long-run convergence speed, but the adjustment speed of the short-run deviation from the long-run equilibrium path. We give an example of an empirical analysis, where the short-run adjustment speed is about 40%.
    Keywords: convergence speed, dynamic panel regression, factor model
    JEL: O40
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:kob:dpaper:dp2011-04&r=ets
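    A stylized rendering of the point above, in notation chosen here for illustration rather than taken from the paper: the dynamic panel regression typically estimated is
      $y_{i,t} = \alpha_i + \rho\, y_{i,t-1} + \varepsilon_{i,t}$,
    with the convergence speed read off from $\rho$ (for instance as $1-\rho$). Under a factor structure, income instead decomposes into a long-run path and a transitory deviation,
      $y_{i,t} = \mu_i + \delta_i F_t + u_{i,t}$,  $u_{i,t} = \phi\, u_{i,t-1} + e_{i,t}$,
    in which case $1-\rho$ may largely reflect $1-\phi$, the speed at which the short-run deviation $u_{i,t}$ from the long-run equilibrium path $\mu_i + \delta_i F_t$ dies out, rather than the speed of long-run convergence across economies.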
  2. By: Prasad S Bhattacharya; Dimitrios D Thomakos
    Abstract: This study presents extensive results on the benefits of rolling window and model averaging. Building on the recent work on rolling window averaging by Pesaran et al. (2010, 2009) and on exchange rate forecasting by Molodtsova and Papell (2009), we explore whether rolling window averaging can be considered beneficial on a priori grounds. We investigate whether rolling window averaging can improve the performance of model averaging, especially when ‘simpler’ models are used. The analysis provides strong support for rolling window averaging, which outperforms the best single-window forecasts more than 50% of the time across all rolling windows. Furthermore, rolling window averaging smooths out the forecast path, improves robustness, and minimizes the pitfalls associated with potential structural breaks.
    Keywords: Exchange rate forecasting, inflation forecasting, output growth forecasting, rolling window, model averaging, short horizon, robustness.
    JEL: C22 C53 F31 F47 E31
    Date: 2011–02–21
    URL: http://d.repec.org/n?u=RePEc:dkn:econwp:eco_2011_1&r=ets
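    A minimal sketch of rolling window averaging of forecasts, assuming an AR(1) forecasting model and equal weights across candidate window lengths; the model, the window grid and the weighting are illustrative choices made here, not the paper's:

import numpy as np

def ar1_forecast(window):
    """One-step-ahead AR(1) forecast estimated by OLS on a single rolling window."""
    y, ylag = window[1:], window[:-1]
    slope, intercept = np.polyfit(ylag, y, 1)
    return intercept + slope * window[-1]

def window_average_forecast(series, windows=(20, 40, 60, 80)):
    """Average the one-step forecasts produced from several rolling window lengths."""
    forecasts = [ar1_forecast(series[-w:]) for w in windows if len(series) >= w]
    return float(np.mean(forecasts))

# toy usage: forecast the next observation of a simulated persistent series
rng = np.random.default_rng(0)
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()
print(window_average_forecast(y))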
  3. By: Dong Jin Lee (University of Connecticut)
    Abstract: This paper considers tests for structural breaks in linear models when the regressors and the serially dependent error process are unstable. The set of models covers various economic circumstances, such as structural breaks in the regressors and/or the error variance, and a linear trend model with I(0)/I(1) errors. We show that the existing heteroscedasticity-robust tests and the fixed regressor bootstrap method of Hansen (2000) suffer from severe size distortion even asymptotically. We suggest a method which combines the fixed regressor bootstrap and the sieve-wild bootstrap to nonparametrically approximate the serially dependent unstable error process. The suggested method is shown to asymptotically replicate the true distribution of the existing tests under various circumstances. Monte Carlo experiments show significant improvements in both the size and power properties. Once the size is controlled by the bootstrap, Wald-type tests have better power properties than LM-type tests.
    Keywords: structural break, sieve bootstrap, fixed regressor bootstrap, robust test, break in linear trend
    JEL: C10 C12 C22
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:uct:uconnp:2011-05&r=ets
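    A rough sketch of the sieve-wild bootstrap ingredient described above: the residual process is approximated by an AR(p) sieve, and the sieve innovations are perturbed by Rademacher draws before the process is rebuilt. The combination with the fixed regressor bootstrap and the break test statistics themselves are omitted, and the details below are illustrative rather than the paper's exact algorithm:

import numpy as np

def sieve_wild_bootstrap(residuals, p=4, rng=None):
    """Draw one bootstrap replicate of a serially dependent error process."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.asarray(residuals, dtype=float)
    # AR(p) sieve by OLS: u_t = a_1 u_{t-1} + ... + a_p u_{t-p} + e_t
    X = np.column_stack([u[p - j - 1:len(u) - j - 1] for j in range(p)])
    a = np.linalg.lstsq(X, u[p:], rcond=None)[0]
    e = u[p:] - X @ a
    # wild step: flip innovation signs with Rademacher weights
    e_star = e * rng.choice([-1.0, 1.0], size=len(e))
    # rebuild the error process recursively from the bootstrapped innovations
    u_star = list(u[:p])
    for eps in e_star:
        u_star.append(sum(a[j] * u_star[-j - 1] for j in range(p)) + eps)
    return np.array(u_star)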
  4. By: Amélie Charles (Audencia Nantes, School of Management); Olivier Darné (LEMNA, University of Nantes); Jae H Kim (School of Economics and Finance, La Trobe University)
    Abstract: A Monte Carlo experiment is conducted to compare the power properties of alternative tests for the martingale difference hypothesis. Overall, we find that the wild bootstrap automatic variance ratio test shows the highest power against linear dependence, while the generalized spectral test performs most desirably under nonlinear dependence.
    Keywords: Monte Carlo experiment; Nonlinear dependence; Portmanteau test; Variance ratio test
    JEL: C12 C14
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:ltr:wpaper:2010.07&r=ets
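    A condensed sketch of a wild bootstrap variance ratio test of the martingale difference hypothesis. It uses a fixed holding period k and the raw statistic |VR(k) - 1| instead of the automatic (data-driven) period choice and studentized statistic studied in the paper, so it should be read as an illustration of the resampling scheme only:

import numpy as np

def variance_ratio(r, k):
    """Ratio of the variance of overlapping k-period sums to k times the one-period variance."""
    r = r - r.mean()
    rk = np.convolve(r, np.ones(k), mode="valid")   # overlapping k-period sums
    return np.mean(rk ** 2) / (k * np.mean(r ** 2))

def wild_bootstrap_vr_test(returns, k=5, n_boot=499, rng=None):
    """Bootstrap p-value for H0: the returns form a martingale difference sequence."""
    rng = np.random.default_rng() if rng is None else rng
    stat = abs(variance_ratio(returns, k) - 1.0)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        r_star = returns * rng.standard_normal(len(returns))   # wild resample preserves heteroskedasticity
        boot[b] = abs(variance_ratio(r_star, k) - 1.0)
    return float(np.mean(boot >= stat))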
  5. By: Charles S. Bos
    Abstract: Estimation of the volatility of time series has taken off since the introduction of the GARCH and stochastic volatility models. While variants of the GARCH model are applied in scores of articles, use of the stochastic volatility model is less widespread. In this article it is argued that one reason for this difference is the relative difficulty of estimating the unobserved stochastic volatility, and the varying approaches that have been taken for such estimation. In order to simplify the comprehension of these estimation methods, the main methods for estimating stochastic volatility are discussed, with focus on their commonalities. In this manner, the advantages of each method are investigated, resulting in a comparison of the methods for their efficiency, difficulty-of-implementation, and precision.
    Keywords: Stochastic volatility; estimation; methodology
    JEL: C13 C51
    Date: 2011–03–03
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20110049&r=ets
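    For reference, the canonical discrete-time stochastic volatility model whose latent state the estimation methods surveyed above must deal with (standard textbook notation, not necessarily the article's):
      $y_t = \exp(h_t/2)\,\varepsilon_t$,  $\varepsilon_t \sim$ i.i.d. $N(0,1)$
      $h_t = \mu + \phi (h_{t-1} - \mu) + \sigma_\eta \eta_t$,  $\eta_t \sim$ i.i.d. $N(0,1)$
    Because the log-volatility $h_t$ is unobserved, the likelihood involves an integral over $h_1,\dots,h_T$; different estimation approaches differ mainly in how they approximate or simulate this high-dimensional integral.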
  6. By: Antonietta Mira (Department of Economics, University of Insubria, Italy); Daniele Imparato (Department of Economics, University of Insubria, Italy); Reza Solgi (Istituto di Finanza, Universita di Lugano)
    Abstract: A general purpose variance reduction technique for Markov chain Monte Carlo (MCMC) estimators, based on the zero-variance principle introduced in the physics literature, is proposed to evaluate the expected value of a function f with respect to a, possibly unnormalized, probability distribution. In this context, a control variate approach, generally used for Monte Carlo simulation, is exploited by replacing f with a different function, f̃. The function f̃ is constructed so that its expectation, under the target distribution, equals that of f, but its variance is much smaller. Theoretically, an optimal re-normalization f̃ exists which may lead to zero variance; in practice, a suitable approximation for it must be investigated. In this paper, an efficient class of re-normalized f̃ is investigated, based on a polynomial parametrization. We find that a low-degree polynomial (1st, 2nd or 3rd degree) can lead to dramatic variance reduction of the resulting zero-variance MCMC estimator. General formulas for the construction of the control variates in this context are given. These allow for an easy implementation of the method in very general settings, regardless of the form of the target/posterior distribution (only differentiability is required) and of the MCMC algorithm implemented (in particular, no reversibility is needed).
    Keywords: Control variates, GARCH models, Logistic regression, Metropolis-Hastings algorithm, Variance reduction
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:ins:quaeco:qf1109&r=ets
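    A minimal illustration of the zero-variance idea with a first-degree polynomial, for a univariate target: the control variate is the score $z(x) = d\log\pi(x)/dx$, which has zero expectation under $\pi$ under mild regularity, and its coefficient is fitted by least squares. The standard normal target, the random-walk Metropolis sampler and the post-hoc coefficient fit are toy choices made here, not the paper's examples:

import numpy as np

rng = np.random.default_rng(1)
log_pi = lambda x: -0.5 * x ** 2      # standard normal target, up to an additive constant
score = lambda x: -x                  # d log_pi / dx, the zero-mean control variate z(x)

# random-walk Metropolis chain targeting pi
x, chain = 0.0, []
for _ in range(20000):
    prop = x + rng.normal()
    if np.log(rng.uniform()) < log_pi(prop) - log_pi(x):
        x = prop
    chain.append(x)
chain = np.array(chain[2000:])        # drop burn-in

f = chain                             # estimate E[f(X)] for f(x) = x
z = score(chain)
C = np.cov(z, f)
a = -C[0, 1] / C[0, 0]                # least-squares coefficient of the control variate
f_tilde = f + a * z                   # re-normalized function with (near) zero variance

print("plain MCMC estimate:", f.mean())
print("ZV-MCMC estimate   :", f_tilde.mean(), "variance ratio:", f_tilde.var() / f.var())

    In this toy case the fitted coefficient is exactly 1, so f̃(x) = x - x = 0 and the variance collapses to zero while the estimator still targets $E[X] = 0$; in realistic posteriors the reduction is smaller, but the abstract reports that low-degree polynomials already achieve dramatic gains.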
  7. By: Hong, Seung Hyun (Korea Institute of Public Finance, Seoul, Korea); Wagner, Martin (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria)
    Abstract: This paper develops a fully modified OLS estimator for cointegrating polynomial regressions, i.e. for regressions including deterministic variables, integrated processes and powers of integrated processes as explanatory variables and stationary errors. The errors are allowed to be serially correlated and the regressors are allowed to be endogenous. The paper thus extends the fully modified approach developed in Phillips and Hansen (1990). The FM-OLS estimator has a zero mean Gaussian mixture limiting distribution, which is the basis for standard asymptotic inference. In addition Wald and LM tests for specification as well as a KPSS-type test for cointegration are derived. The theoretical analysis is complemented by a simulation study which shows that the developed FM-OLS estimator and tests based upon it perform well in the sense that the performance advantages over OLS are by and large similar to the performance advantages of FM-OLS over OLS in cointegrating regressions.
    Keywords: Cointegrating polynomial regression, fully modified OLS estimation, integrated process, testing
    JEL: C12 C13 C32
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:264&r=ets
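    A prototypical cointegrating polynomial regression of the kind treated above, written in simplified notation of our own (the paper's setup allows for general deterministic terms and higher integer powers):
      $y_t = \mu + \delta t + \beta_1 x_t + \beta_2 x_t^2 + u_t$,  $x_t = x_{t-1} + v_t$,
    where $x_t$ is an integrated regressor and $u_t$ may be serially correlated and correlated with $v_t$. The fully modified correction removes the resulting second-order biases, so that the estimator has the zero mean Gaussian mixture limit mentioned in the abstract and standard asymptotic inference applies.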
  8. By: Pierre Guerin; Massimiliano Marcellino
    Abstract: This paper introduces a new regression model - Markov-switching mixed data sampling (MS-MIDAS) - that incorporates regime changes in the parameters of the mixed data sampling (MIDAS) models and allows for the use of mixed-frequency data in Markov-switching models. After a discussion of estimation and inference for MS-MIDAS, and a small-sample simulation-based evaluation, the MS-MIDAS model is applied to the prediction of US and UK economic activity, both in terms of quantitative forecasts of aggregate economic activity and of the prediction of business cycle regimes. Both simulation and empirical results indicate that MS-MIDAS is a very useful specification.
    Keywords: Business cycle, Mixed-frequency data, Non-linear models, Forecasting, Nowcasting
    JEL: C22 C53 E37
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:eui:euiwps:eco2011/03&r=ets
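    A stylized two-regime MS-MIDAS regression for a low-frequency variable $y_t$ and an indicator $x^{(m)}$ sampled $m$ times per low-frequency period (notation simplified here for illustration):
      $y_t = \beta_0(S_t) + \beta_1(S_t) \sum_{j=0}^{J} w_j(\theta)\, x^{(m)}_{t-j/m} + \varepsilon_t$,  $\varepsilon_t \sim N(0, \sigma^2(S_t))$
    where $S_t$ follows a first-order Markov chain with transition probabilities $p_{kl} = P(S_t = l \mid S_{t-1} = k)$ and $w_j(\theta)$ is a parsimonious MIDAS weighting scheme (for example an exponential Almon polynomial). The regime process lets the intercept, the slope on the high-frequency indicator and possibly the variance switch between expansion- and recession-type states.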
  9. By: Zhi-Qiang Jiang; Wei-Xing Zhou
    Abstract: There are a number of situations in which several signals are simultaneously recorded in complex systems and exhibit long-term power-law cross-correlations. Multifractal detrended cross-correlation analysis (MF-DCCA) approaches can be used to quantify such cross-correlations, such as the MF-DCCA method based on detrended fluctuation analysis (MF-X-DFA). We develop in this work a class of MF-DCCA algorithms based on detrending moving average analysis, called MF-X-DMA. The performance of the MF-X-DMA algorithms is compared with that of the MF-X-DFA method through extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving average processes and binomial measures, whose multifractal nature has known theoretical expressions. In all cases, the scaling exponents $h_{xy}$ extracted from the MF-X-DMA and MF-X-DFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross-correlation is independent of the cross-correlation coefficient between the two time series, and the MF-X-DFA and centered MF-X-DMA algorithms have comparable performance, both outperforming the forward and backward MF-X-DMA algorithms. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MF-X-DMA algorithm gives the best estimates of $h_{xy}(q)$ since its $h_{xy}(2)$ is closest to 0.5, as expected, and the MF-X-DFA algorithm has the second best performance. For the volatilities, the forward and backward MF-X-DMA algorithms give similar results, while the centered MF-X-DMA and the MF-X-DFA algorithms fail to extract a rational multifractal nature.
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1103.2577&r=ets
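    A compressed sketch of the centered detrending moving average cross-correlation step for $q=2$ only, i.e. an estimate of $h_{xy}(2)$; the full $q$-dependent multifractal analysis, careful segment handling and the forward/backward variants studied in the paper are omitted, and the implementation details are illustrative:

import numpy as np

def dma_cross_fluctuation(x, y, n):
    """Centered DMA cross-correlation fluctuation F_xy(n) for window size n."""
    X, Y = np.cumsum(x - np.mean(x)), np.cumsum(y - np.mean(y))   # profiles
    kernel = np.ones(n) / n
    Xm, Ym = np.convolve(X, kernel, mode="same"), np.convolve(Y, kernel, mode="same")
    eps_x, eps_y = X - Xm, Y - Ym            # residuals after moving-average detrending
    return np.sqrt(abs(np.mean(eps_x * eps_y)))

def hxy2(x, y, sizes=(8, 16, 32, 64, 128)):
    """Slope of log F_xy(n) versus log n, an estimate of h_xy(2)."""
    F = [dma_cross_fluctuation(x, y, n) for n in sizes]
    return np.polyfit(np.log(sizes), np.log(F), 1)[0]

# toy usage: two correlated random walks, for which the theoretical exponent is 1.5
rng = np.random.default_rng(0)
e = rng.standard_normal((2, 4000))
x = np.cumsum(0.7 * e[0] + 0.3 * e[1])
y = np.cumsum(0.3 * e[0] + 0.7 * e[1])
print(hxy2(x, y))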
  10. By: Di Iorio, Francesca; Triacca, Umberto
    Abstract: A new non-causality test based on the notion of distance between ARMA models is proposed in this paper. The advantage of this test is that it can be used in possibly integrated and cointegrated systems, without pre-testing for unit roots and cointegration. The Monte Carlo experiments indicate that the proposed method performs reasonably well in finite samples. The empirical relevance of the test is illustrated via two applications.
    Keywords: AR metric; Bootstrap test; Granger non-causality; VAR
    JEL: C12 C15 C22
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:29637&r=ets
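    The AR metric referred to above is, in the form usually associated with Piccolo (1990), a distance between the AR(∞) representations of two invertible ARMA models (this summarizes the metric itself, not the paper's test procedure):
      $d(M_1, M_2) = \left[ \sum_{j=1}^{\infty} (\pi_{1j} - \pi_{2j})^2 \right]^{1/2}$,
    where $\pi_{ij}$ denotes the $j$-th coefficient of the AR($\infty$) representation of model $M_i$. The paper's bootstrap test then assesses whether an appropriately defined distance of this kind differs significantly from zero.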

This nep-ets issue is ©2011 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.