nep-ets New Economics Papers
on Econometric Time Series
Issue of 2014‒11‒12
eleven papers chosen by
Yong Yin
SUNY at Buffalo

  1. A Cusum Test of Common Trends in Large Heterogeneous Panels By Javier Hidalgo; Jungyoon Lee
  2. A note on implementing the Durbin and Koopman simulation smoother By Jarocinski, Marek
  3. Analyzing the Taylor Rule with Wavelet Lenses By Luís Francisco Aguiar-Conraria; Manuel M. F. Martins; Maria Joana Soares
  4. Comovement of Selected International Stock Market Indices: A Continuous Wavelet Transformation and Cross Wavelet Transformation Analysis By Masih, Mansur; Majid, Hamdan Abdul
  5. Inference Based on SVARs Identified with Sign and Zero Restrictions: Theory and Applications By Arias, Jonas E.; Rubio-Ramirez, Juan F.; Waggoner, Daniel F.
  6. LADE-based inference for ARMA models with unspecified and heavy-tailed heteroscedastic noises By Zhu, Ke; Ling, Shiqing
  7. Least squares estimation for GARCH (1,1) model with heavy tailed errors By Preminger, Arie; Storti, Giuseppe
  8. On various confidence intervals post-model-selection By Leeb, Hannes; Pötscher, Benedikt M.; Ewald, Karl
  9. Score driven asymmetric stochastic volatility models By Xiuping Mao; Esther Ruiz; Helena Veiga
  10. The Model Confidence Set package for R By Mauro Bernardi; Leopoldo Catania
  11. Time Series: Cointegration By Søren Johansen

  1. By: Javier Hidalgo; Jungyoon Lee
    Abstract: This paper examines a nonparametric CUSUM-type test for common trends in large panel data sets with individual fixed effects. We consider, as in Zhang, Su and Phillips (2012), a partial linear regression model with unknown functional form for the trend component, although our test does not involve local smoothing. This conveniently forgoes the need to choose a bandwidth parameter, a choice that is difficult to make for testing purposes given the lack of a clear and sensible information criterion. We are able to do so by exploiting the fact that the number of individuals increases without limit. After removing the parametric component of the model, when the errors are homoscedastic, our test statistic converges to a Gaussian process whose critical values are easily tabulated. We also examine the consequences of heteroscedasticity and discuss the problem of how to compute valid critical values given the very complicated covariance structure of the limiting process. Finally, we present a small Monte Carlo experiment to shed some light on the finite sample performance of the test.
    Keywords: Common Trends, large data set, Partial linear models, Bootstrap algorithms
    JEL: C12 C13 C23
    Date: 2014–08
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2014/576&r=ets
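    The statistic below is a generic illustration of the CUSUM idea in this panel setting, not the authors' exact construction (their test avoids local smoothing and exploits large N); the simulated design and the scaling are assumptions of the sketch.
```r
# Generic CUSUM-type statistic for a common trend in a panel with fixed effects.
# Under the null all units share one trend, so after removing fixed effects the
# partial sums of each unit's deviation from the cross-section average stay small.
set.seed(1)
n <- 100; Tt <- 200                                   # units and time periods
trend <- cumsum(rnorm(Tt, sd = 0.1))                  # unknown common trend
y <- outer(rep(1, Tt), rnorm(n)) +                    # individual fixed effects
     matrix(trend, Tt, n) +                           # common trend (the null)
     matrix(rnorm(Tt * n), Tt, n)                     # idiosyncratic noise
y <- sweep(y, 2, colMeans(y))                         # remove fixed effects
dev <- y - rowMeans(y)                                # deviation from cross-section mean
cusum <- apply(dev, 2, function(e) max(abs(cumsum(e))) / (sd(e) * sqrt(Tt)))
max(cusum)                                            # large values speak against a common trend
```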
  2. By: Jarocinski, Marek
    Abstract: The correct implementation of the Durbin and Koopman simulation smoother is explained. A possible misunderstanding is pointed out and clarified for both the basic state space model and for its extension that allows time-varying intercepts (mean adjustments).
    Keywords: state space model; simulation smoother; trend output
    JEL: C15 C32
    Date: 2014–10–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59466&r=ets
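    Since the note is precisely about implementing this algorithm correctly, a minimal reference sketch may be useful. The following implements the Durbin and Koopman (2002) recombination for a univariate local level model; the scalar Kalman recursions and function names are ours, and the note's time-varying-intercept adjustment is not shown.
```r
# Durbin-Koopman (2002) simulation smoother for y_t = a_t + eps_t,
# a_{t+1} = a_t + eta_t: draw an unconditional sample (a_plus, y_plus),
# smooth both y and y_plus, and recombine: draw = E[a|y] + a_plus - E[a|y_plus].
smooth_states <- function(y, a1, P1, s2e, s2n) {      # E[states | y] by Kalman smoothing
  n <- length(y)
  a <- numeric(n + 1); P <- numeric(n + 1)
  v <- numeric(n); Fv <- numeric(n); K <- numeric(n)
  a[1] <- a1; P[1] <- P1
  for (t in 1:n) {                                    # forward filter
    v[t] <- y[t] - a[t]; Fv[t] <- P[t] + s2e; K[t] <- P[t] / Fv[t]
    a[t + 1] <- a[t] + K[t] * v[t]; P[t + 1] <- P[t] * (1 - K[t]) + s2n
  }
  r <- 0; ahat <- numeric(n)
  for (t in n:1) {                                    # backward smoother
    r <- v[t] / Fv[t] + (1 - K[t]) * r
    ahat[t] <- a[t] + P[t] * r
  }
  ahat
}
sim_smoother <- function(y, a1, P1, s2e, s2n) {
  n <- length(y)
  a_plus <- cumsum(c(a1 + sqrt(P1) * rnorm(1), sqrt(s2n) * rnorm(n - 1)))
  y_plus <- a_plus + sqrt(s2e) * rnorm(n)             # unconditional sample from the model
  smooth_states(y, a1, P1, s2e, s2n) + a_plus -
    smooth_states(y_plus, a1, P1, s2e, s2n)
}
set.seed(2)
y <- cumsum(rnorm(100)) + rnorm(100)                  # data from a local level model
draw <- sim_smoother(y, a1 = 0, P1 = 10, s2e = 1, s2n = 1)  # one draw from p(states | y)
```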
  3. By: Luís Francisco Aguiar-Conraria (Universidade do Minho - NIPE); Manuel M. F. Martins (Cef.up and Faculty of Economics, University of Porto); Maria Joana Soares (Universidade do Minho)
    Abstract: This paper analyses the Taylor Rule in the U.S. over 1960-2014 with new lenses: continuous wavelet tools, namely partial wavelet coherency and partial phase-differences. We assess the co-movement between the policy interest rate and the macroeconomic variables in the Rule, inflation and the output gap, both jointly and independently, for each frequency and at each moment in time. Our results uncover some new stylized facts about U.S. monetary policy and add new insights to the record of U.S. monetary history since the early 1960s. Among other things, we conclude that monetary policy has been successful in stabilizing inflation, but that its effectiveness varies across time and frequencies. Monetary policy lagged the output gap across most of the sample, but has become more reactive in recent times. Volcker’s disinflation, and the conquest of credibility in 1979-1986, was achieved at no extra cost in terms of output.
    Keywords: Monetary Policy, Taylor Rule, Continuous Wavelet Transform, Partial Wavelet Coherency, Partial Phase-difference
    JEL: C49 E43 E52
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:nip:nipewp:18/2014&r=ets
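    As a point of reference, the full-sample time-domain Taylor rule that the wavelet analysis generalizes is a one-line regression. The sketch below uses simulated data; with actual data one would use the federal funds rate, inflation and an output-gap series.
```r
# Time-domain benchmark behind the paper: a Taylor rule estimated by OLS over
# the whole sample, i_t = c + b1*inflation_t + b2*gap_t + u_t. The paper's
# contribution is to let these co-movements vary across time and frequency.
set.seed(3)
n <- 220                                              # roughly quarterly, 1960-2014
infl <- as.numeric(arima.sim(n = n, list(ar = 0.9))) + 3
gap  <- as.numeric(arima.sim(n = n, list(ar = 0.8)))
rate <- 2 + 1.5 * infl + 0.5 * gap + rnorm(n)         # Taylor (1993)-style coefficients
summary(lm(rate ~ infl + gap))$coefficients           # one estimate for the whole sample
```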
  4. By: Masih, Mansur; Majid, Hamdan Abdul
    Abstract: This study accounts for the time-varying pattern of price-shock transmission, exploring stock market co-movements with continuous wavelet coherency methodology to analyse the correlations between the stock market indices of Malaysia and Thailand (Asia), Greece (Europe) and the United States in the time-frequency domain. We employ the wavelet coherence method with attention to the financial crisis episodes of the 1997 Asian Financial Crisis, the 1998 Russian Sovereign Debt Default, the 9/11 attack on the World Trade Center, the 2008 US Sub-Prime Mortgage Crisis and the recent 2010-2011 Greek Debt Crisis. Results tend to indicate that the relations among indices are strong but not homogeneous across time scales, that some local phenomena are more evident than others in these markets, and that there seems to be no quick transmission through markets around the world, but rather a significant time delay. The relations among these indices have changed and evolved through time, mostly due to the financial crises that occurred at different periods. Results also favour the view that regionally and economically closer markets exhibit higher correlation and more short-run co-movement. The high correlation between the two regional indices of Malaysia and Thailand indicates that international investors gain little by including both in a diversified portfolio. Strong co-movement is mostly confined to long-run fluctuations, favouring a contagion interpretation: shocks at high frequency (low period) are short-term, whereas shocks at low frequency (high period) are long-term, with the trend elements affecting the co-movement of the indices. The study of market correlations in the time-frequency domain using continuous wavelet coherency is appealing and can be an important tool in decision making for different types of investors.
    Keywords: stock market comovement; continuous wavelet transform; cross-wavelet; wavelet coherency; frequency-time scale domain
    JEL: C22 C58 E44 G15
    Date: 2013–12–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:58313&r=ets
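    For readers unfamiliar with the method, the sketch below implements a bare-bones Morlet continuous wavelet transform and the cross-wavelet transform of two series; the coherency used in the study additionally smooths across time and scale and tests significance, which is omitted here.
```r
# Minimal Morlet CWT by direct convolution, plus the cross-wavelet transform.
morlet_cwt <- function(x, scales, w0 = 6) {
  n <- length(x)
  W <- matrix(0 + 0i, length(scales), n)
  for (k in seq_along(scales)) {
    s <- scales[k]
    h <- ceiling(4 * s); tt <- -h:h                   # effective support of the wavelet
    psi <- pi^(-1/4) * exp(1i * w0 * tt / s - (tt / s)^2 / 2) / sqrt(s)
    for (u in 1:n) {
      idx <- u + tt
      ok <- idx >= 1 & idx <= n                       # truncate at the sample edges
      W[k, u] <- sum(x[idx[ok]] * Conj(psi[ok]))
    }
  }
  W
}
set.seed(4)
tm <- 1:256
x <- sin(2 * pi * tm / 32) + rnorm(256, sd = 0.3)     # two series sharing a 32-period cycle
y <- sin(2 * pi * tm / 32 + pi / 4) + rnorm(256, sd = 0.3)
scales <- 2^seq(2, 6, 0.25)
Wxy <- morlet_cwt(x, scales) * Conj(morlet_cwt(y, scales))  # cross-wavelet transform
Mod(Wxy)                                              # common power by scale and time
Arg(Wxy)                                              # phase difference: local lead/lag
```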
  5. By: Arias, Jonas E. (Federal Reserve Board of Governors); Rubio-Ramirez, Juan F. (Duke University); Waggoner, Daniel F. (Federal Reserve Bank of Atlanta)
    Abstract: Are optimism shocks an important source of business cycle fluctuations? Are deficit-financed tax cuts better than deficit-financed spending at increasing output? These questions have previously been studied using structural vector autoregressions (SVARs) identified with sign and zero restrictions, and the answers have been positive and definite in both cases. Although the identification of SVARs with sign and zero restrictions is theoretically attractive because it allows the researcher to remain agnostic with respect to the responses of the key variables of interest, we show that the current implementation of these techniques does not respect the agnosticism of the theory. The existing algorithms impose additional sign restrictions on seemingly unrestricted variables, which bias the results and produce misleading confidence intervals. We provide an alternative and efficient algorithm that does not introduce any additional sign restrictions, hence preserving the agnosticism of the theory. Without the additional restrictions, it is hard to support the claim that either optimism shocks are an important source of business cycle fluctuations or deficit-financed tax cuts work best at improving output. Our algorithm is not only correct but also faster than current ones.
    Keywords: identification; sign restrictions; simulation
    JEL: C11 C32 E50
    Date: 2014–02–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedawp:2014-01&r=ets
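    The accept/reject step that these applications build on can be sketched compactly for pure sign restrictions; the paper's warning concerns how zero restrictions are layered on top of this step, which the sketch below does not attempt.
```r
# Standard accept/reject for sign-identified SVARs: estimate the reduced form,
# draw Q from the uniform (Haar) distribution over orthogonal matrices via the
# QR decomposition of a Gaussian matrix, and keep draws whose impact responses
# satisfy the sign restrictions.
set.seed(5)
k <- 2; n <- 400
e <- matrix(rnorm(n * k), n, k) %*% matrix(c(1, 0.5, 0, 1), k, k)
Y <- apply(e, 2, function(u) as.numeric(filter(u, 0.5, method = "recursive")))
X <- Y[1:(n - 1), ]; Z <- Y[2:n, ]
B <- solve(crossprod(X), crossprod(X, Z))             # OLS VAR(1) coefficients
Sigma <- crossprod(Z - X %*% B) / (n - 1)
P <- t(chol(Sigma))                                   # one factor of Sigma
draws <- list()
while (length(draws) < 100) {
  qrg <- qr(matrix(rnorm(k * k), k, k))
  Q <- qr.Q(qrg) %*% diag(sign(diag(qr.R(qrg))))      # Haar draw of an orthogonal matrix
  A0inv <- P %*% Q                                    # candidate impact responses
  if (all(A0inv[, 1] > 0))                            # restriction: shock 1 raises both variables on impact
    draws[[length(draws) + 1]] <- A0inv
}
```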
  6. By: Zhu, Ke; Ling, Shiqing
    Abstract: This paper develops a systematic procedure of statistical inference for ARMA models with unspecified and heavy-tailed heteroscedastic noises. We first investigate the least absolute deviation estimator (LADE) and the self-weighted LADE for the model. The two estimators are shown to be strongly consistent and asymptotically normal when the noise has finite and infinite variance, respectively. The rate of convergence of the LADE and the self-weighted LADE is $n^{-1/2}$, which is faster than that of the LSE for the AR model when the tail index of the GARCH noises is in (0,4], so they are more efficient in this case. Since their asymptotic covariance matrices cannot be estimated directly from the sample, we develop a random weighting approach for statistical inference in this nonstandard case. We further propose a novel sign-based portmanteau test for model adequacy. A simulation study is carried out to assess the performance of our procedure, and a real example is given as illustration.
    Keywords: ARMA(p,q) models; Asymptotic normality; Heavy-tailed noises; G/ARCH noises; LADE; Random weighting approach; Self-weighted LADE; Sign-based portmanteau test; Strong consistency.
    JEL: C1 C12 C13
    Date: 2014–10–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59099&r=ets
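    A minimal sketch of the two estimators for an AR(1), our own simplification of the paper's ARMA(p,q) setting; the self-weighting scheme used here is illustrative, not the one analyzed in the paper.
```r
# (Self-weighted) least absolute deviation estimation for an AR(1) with
# heavy-tailed noise: minimize sum_t w_t |y_t - phi * y_{t-1}|. With w_t = 1
# this is the LADE; downweighting large past observations is the idea behind
# the self-weighted LADE that restores standard asymptotics under infinite variance.
set.seed(6)
n <- 1000
y <- as.numeric(arima.sim(n = n, list(ar = 0.6),
                          rand.gen = function(m) rt(m, df = 1.5)))  # infinite variance
lad <- function(w)
  optimize(function(phi) sum(w * abs(y[-1] - phi * y[-n])), c(-0.99, 0.99))$minimum
lad(rep(1, n - 1))                                    # plain LADE
lad(1 / (1 + abs(y[-n]))^2)                           # one simple self-weighting scheme
```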
  7. By: Preminger, Arie; Storti, Giuseppe
    Abstract: GARCH (1,1) models are widely used for modelling processes with time-varying volatility. These include financial time series, which can be particularly heavy tailed. In this paper, we propose a log-transform-based least squares estimator (LSE) for the GARCH (1,1) model. The asymptotic properties of the LSE are studied under very mild moment conditions for the errors. We establish consistency and asymptotic normality of our estimator at the standard $\sqrt{n}$ convergence rate. The finite sample properties are assessed by means of an extensive simulation study. Our results show that the LSE is more accurate than the quasi-maximum likelihood estimator (QMLE) for heavy tailed errors. Finally, we provide some empirical evidence on two financial time series, considering daily and high frequency returns. The results of the empirical analysis suggest that in some settings, depending on the specific measure of volatility adopted, the LSE can allow for more accurate predictions of volatility than the usual Gaussian QMLE.
    Keywords: GARCH (1,1), least squares estimation, consistency, asymptotic normality.
    JEL: C13 C15 C22
    Date: 2014–01–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:59082&r=ets
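    One way to operationalize a log-transform least-squares criterion is sketched below, with a free intercept absorbing E[log e_t^2]; this is our reading of the idea, and the authors' estimator and regularity conditions may differ in detail.
```r
# Log-transform LSE sketch for a GARCH(1,1) y_t = sqrt(h_t) e_t,
# h_t = w + a*y_{t-1}^2 + b*h_{t-1}: fit log(y_t^2) to log(h_t(theta)) plus a
# free intercept that absorbs E[log e_t^2].
set.seed(7)
n <- 2000; w0 <- 0.1; a0 <- 0.1; b0 <- 0.8
h <- numeric(n); y <- numeric(n); e <- rt(n, df = 3)  # heavy-tailed errors
h[1] <- w0 / (1 - a0 - b0); y[1] <- sqrt(h[1]) * e[1]
for (t in 2:n) { h[t] <- w0 + a0 * y[t - 1]^2 + b0 * h[t - 1]; y[t] <- sqrt(h[t]) * e[t] }
obj <- function(par) {                                # par = log(w), log(a), log(b), intercept
  w <- exp(par[1]); a <- exp(par[2]); b <- exp(par[3])  # enforce positivity
  ht <- numeric(n); ht[1] <- var(y)
  for (t in 2:n) ht[t] <- w + a * y[t - 1]^2 + b * ht[t - 1]
  sum((log(y^2 + 1e-12) - log(ht) - par[4])^2)
}
fit <- optim(c(log(0.2), log(0.1), log(0.5), -1), obj)
exp(fit$par[1:3])                                     # estimates of (w, a, b)
```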
  8. By: Leeb, Hannes; Pötscher, Benedikt M.; Ewald, Karl
    Abstract: We compare several confidence intervals after model selection in the setting recently studied by Berk et al. (2013), where the goal is to cover not the true parameter but a certain non-standard quantity of interest that depends on the selected model. In particular, we compare the PoSI intervals that are proposed in that reference with the 'naive' confidence interval, which is constructed as if the selected model were correct and fixed a priori (thus ignoring the presence of model selection). Overall, we find that the actual coverage probabilities of all these intervals deviate only moderately from the desired nominal coverage probability. This finding is in stark contrast to several papers in the existing literature, where the goal is to cover the true parameter.
    Keywords: Confidence intervals, model selection
    JEL: C1
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:58326&r=ets
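    The comparison can be mimicked in a toy Monte Carlo: select between two nested models, then check whether the naive interval covers the selected model's projection coefficient (the non-standard target of Berk et al. 2013). The design below is ours and far simpler than the paper's, and the PoSI intervals themselves are not computed.
```r
# Coverage of the 'naive' 95% interval for the coefficient of x1 after a
# t-test-based choice between two nested models, measured against the
# projection coefficient in the *selected* model, not the true parameter.
set.seed(8)
n <- 100; reps <- 2000
x1 <- rnorm(n); x2 <- 0.7 * x1 + rnorm(n)             # correlated regressors
mu <- x1 + 0.2 * x2                                   # true regression function
b_full  <- qr.solve(cbind(x1, x2), mu)[1]             # target under the full model
b_small <- qr.solve(cbind(x1), mu)[1]                 # target when x2 is dropped
covered <- logical(reps)
for (r in 1:reps) {
  y <- mu + rnorm(n)
  full <- lm(y ~ x1 + x2 - 1)
  keep_x2 <- abs(coef(summary(full))["x2", "t value"]) > 2
  fit <- if (keep_x2) full else lm(y ~ x1 - 1)
  target <- if (keep_x2) b_full else b_small
  ci <- confint(fit, "x1")
  covered[r] <- ci[1] <= target && target <= ci[2]
}
mean(covered)                                         # actual coverage of the naive interval
```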
  9. By: Xiuping Mao; Esther Ruiz; Helena Veiga
    Abstract: In this paper we propose a new class of asymmetric stochastic volatility (SV) models which, following the Generalized Autoregressive Score (GAS) framework, specifies the volatility as a function of the score of the distribution of returns conditional on volatilities. Different specifications of the log-volatility are obtained by assuming different return error distributions. In particular, we consider three of the most popular distributions, namely the Normal, Student-t and Generalized Error distributions, and derive the statistical properties of each of the corresponding score driven SV models. We show that some of the parameters cannot be properly identified by the moments usually considered to describe the stylized facts of financial returns, namely excess kurtosis, autocorrelations of squares and cross-correlations between returns and future squared returns. The parameters of some restricted score driven SV models can be estimated adequately using an MCMC procedure. Finally, the newly proposed models are fitted to financial returns and evaluated in terms of their in-sample and out-of-sample performance.
    Keywords: BUGS, Generalized Asymmetric Stochastic Volatility, MCMC, Score driven models
    JEL: C22
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws142618&r=ets
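    The simplest symmetric Gaussian member of this class can be simulated in a few lines, which makes the score-driven mechanism concrete; the paper's models add asymmetry (leverage) terms and Student-t or GED errors, and are estimated by MCMC.
```r
# Score-driven log-volatility, Gaussian symmetric case: y_t = exp(l_t/2) z_t,
# z_t ~ N(0,1), with l_t updated by the score of the conditional log-density,
# s_t = (y_t^2 exp(-l_t) - 1)/2.
set.seed(9)
n <- 1000; omega <- 0; alpha <- 0.10; beta <- 0.95
l <- numeric(n); y <- numeric(n)
for (t in 1:n) {
  y[t] <- exp(l[t] / 2) * rnorm(1)
  s <- (y[t]^2 * exp(-l[t]) - 1) / 2                  # score with respect to l_t
  if (t < n) l[t + 1] <- omega + beta * l[t] + alpha * s
}
plot(exp(l / 2), type = "l", ylab = "conditional sd") # volatility clustering
```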
  10. By: Mauro Bernardi; Leopoldo Catania
    Abstract: This paper presents the R package MCS, which implements the Model Confidence Set (MCS) procedure recently developed by Hansen et al. (2011). Hansen's procedure consists of a sequence of tests that constructs a set of 'superior' models for which the null hypothesis of Equal Predictive Ability (EPA) is not rejected at a given confidence level. The EPA test statistic is calculated for an arbitrary loss function, meaning that models can be compared on various dimensions, for example point forecasts. The relevance of the package is shown through an example that illustrates in detail the use of the functions provided by the package. The example compares the ability of different models belonging to the ARCH family to predict large financial losses. We also discuss the implementation of the ARCH-type models and their maximum likelihood estimation using the popular R package rugarch developed by Ghalanos (2014).
    Date: 2014–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1410.8504&r=ets
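    A minimal usage sketch, assuming the MCSprocedure() entry point described in the paper; the loss series below are simulated placeholders rather than the paper's ARCH-based example, which builds them from rugarch volatility forecasts.
```r
# A T x m matrix of losses, one column per model, passed to the MCS procedure;
# the output is the set of models whose EPA null survives at level 1 - alpha.
# install.packages("MCS")
library(MCS)
set.seed(10)
loss <- cbind(modelA = rnorm(500, mean = 1.00)^2,
              modelB = rnorm(500, mean = 1.02)^2,     # close competitor
              modelC = rnorm(500, mean = 1.50)^2)     # clearly inferior model
MCSprocedure(Loss = loss, alpha = 0.15, B = 5000, statistic = "Tmax")
```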
  11. By: Søren Johansen (University of Copenhagen and CREATES)
    Abstract: An overview of results for the cointegrated VAR model for nonstationary I(1) variables is given. The emphasis is on the analysis of the model and the tools for asymptotic inference. These include the formulation of criteria on the parameters for the process to be nonstationary and I(1), the formulation of hypotheses of interest on the rank, the cointegrating relations and the adjustment coefficients, and a discussion of the asymptotic distribution results that are used for inference. The results are illustrated by a few examples, and a number of extensions of the theory are pointed out.
    Keywords: adjustment coefficients, cointegrating relations, cointegration, cointegrated vector autoregressive model, Dickey-Fuller distributions, error correction models, econometric analysis of macroeconomic data, likelihood inference, mixed Gaussian distribution, nonstationarity
    JEL: C32
    Date: 2014–10–21
    URL: http://d.repec.org/n?u=RePEc:aah:create:2014-38&r=ets
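    The eigenvalue problem at the heart of the rank test can be computed directly in the simplest case (one lag, no deterministic terms); the sketch below is ours, and applied work would use a package implementation and the deterministic-term variants discussed in the paper.
```r
# Rank test eigenvalues for dy_t = Pi * y_{t-1} + e_t: solve the generalized
# eigenvalue problem in the product-moment matrices and form trace statistics.
set.seed(11)
n <- 500
z <- cumsum(rnorm(n))                                 # one common stochastic trend
y <- cbind(z + rnorm(n), z + rnorm(n))                # two I(1) series, cointegration rank 1
dy <- diff(y); ylag <- y[-n, ]
S00 <- crossprod(dy) / (n - 1); S11 <- crossprod(ylag) / (n - 1)
S01 <- crossprod(dy, ylag) / (n - 1)                  # S10 = t(S01)
lambda <- sort(Re(eigen(solve(S11) %*% t(S01) %*% solve(S00) %*% S01)$values),
               decreasing = TRUE)                     # squared canonical correlations
-(n - 1) * rev(cumsum(rev(log(1 - lambda))))          # trace statistics for r = 0, 1
```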

This nep-ets issue is ©2014 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.