nep-ets New Economics Papers
on Econometric Time Series
Issue of 2015‒11‒01
eight papers chosen by
Yong Yin
SUNY at Buffalo

  1. Bayesian Inference on Structural Impulse Response Functions By Mikkel Plagborg-Møller
  2. Granger Causality and Regime Inference in Bayesian Markov-Switching VARs By Matthieu Droumaguet; Anders Warne; Tomasz Wozniak
  3. Granger-causal analysis of GARCH models: a Bayesian approach By Tomasz Wozniak
  5. Pseudo Maximum Likelihood Estimation of Spatial Autoregressive Models with Increasing Dimension By Abhimanyu Gupta; Peter M Robinson
  6. Semiparametric model averaging of ultra-high dimensional time series By Jia Chen; Degui Li; Oliver Linton; Zudi Lu
  7. Specification and Estimation of Bayesian Dynamic Factor Models: A Monte Carlo Analysis with an Application to Global House Price Comovement By Jackson, Laura E.; Kose, M. Ayhan; Otrok, Christopher; Owyang, Michael T.
  8. Testing constancy of unconditional variance in volatility models by misspecification and specification tests By Annastiina Silvennoinen; Timo Teräsvirta

  1. By: Mikkel Plagborg-Møller
    Abstract: I propose to estimate structural impulse responses from macroeconomic time series by doing Bayesian inference on the Structural Vector Moving Average representation of the data. This approach has two advantages over Structural Vector Autoregressions. First, it imposes prior information directly on the impulse responses in a flexible and transparent manner. Second, it can handle noninvertible impulse response functions, which are often encountered in applications. To rapidly simulate from the posterior of the impulse responses, I develop an algorithm that exploits the Whittle likelihood. The impulse responses are partially identified, and I derive the frequentist asymptotics of the Bayesian procedure to show which features of the prior information are updated by the data. I demonstrate the usefulness of my method in a simulation study and in an empirical application that estimates the effects of technological news shocks on the U.S. business cycle.
    Date: 2015–10
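The structural vector moving average (SVMA) representation the abstract builds on maps shocks directly to observables: y_t = Σ_j Θ_j ε_{t−j}, so the impulse response of variable i to shock k at horizon j is just the (i, k) entry of Θ_j. A minimal sketch, with illustrative coefficient matrices that are not taken from the paper:

```python
# Minimal SVMA sketch: y_t = sum_j Theta_j @ eps_{t-j}, so the response to a
# one-time structural shock at t = 0 is read straight off the Theta matrices.
# The matrices and numbers below are illustrative, not from the paper.

def impulse_response(thetas, shock, horizons):
    """Response path of y to a one-time structural shock eps_0.

    thetas: list of n x n coefficient matrices Theta_0, Theta_1, ...
    shock:  length-n structural shock vector eps_0
    """
    n = len(shock)
    path = []
    for h in range(horizons):
        # Beyond the last supplied Theta the response is zero (finite MA).
        theta = thetas[h] if h < len(thetas) else [[0.0] * n for _ in range(n)]
        # y_h = Theta_h @ eps_0, written as a pure-Python matrix-vector product.
        y = [sum(theta[i][k] * shock[k] for k in range(n)) for i in range(n)]
        path.append(y)
    return path

# Bivariate example: Theta_0 = I; Theta_1 adds persistence and a spillover.
thetas = [[[1.0, 0.0], [0.0, 1.0]],
          [[0.5, 0.3], [0.0, 0.4]]]
irf = impulse_response(thetas, shock=[1.0, 0.0], horizons=3)
# irf[0] == [1.0, 0.0], irf[1] == [0.5, 0.0], irf[2] == [0.0, 0.0]
```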
  2. By: Matthieu Droumaguet (Department of Economics, European University Institute); Anders Warne (Directorate General Research, European Central Bank); Tomasz Wozniak (Department of Economics, University of Melbourne)
    Abstract: We derive restrictions for Granger noncausality in Markov-switching vector autoregressive models and also show under which conditions a variable does not affect the forecast of the hidden Markov process. Based on a Bayesian approach to evaluating the hypotheses, the computational tools for posterior inference include a novel block Metropolis-Hastings sampling algorithm for estimating the restricted models. We analyze a system of monthly US data on money and income. The results of testing in MS-VARs contradict those in linear VARs: the money aggregate M1 is useful for forecasting income and for predicting the next period’s state.
    Keywords: Granger noncausality, Markov-switching VARs, block Metropolis-Hastings sampling, hidden Markov process
    JEL: C11 C12 C32 C53 E32
    Date: 2015–05
  3. By: Tomasz Wozniak (Department of Economics, University of Melbourne)
    Abstract: A multivariate GARCH model is used to investigate Granger causality in the conditional variance of time series. Parametric restrictions for the hypothesis of noncausality in conditional variances between two groups of variables, when there are other variables in the system as well, are derived. These novel conditions are convenient for the analysis of potentially large systems of economic variables. To evaluate hypotheses of noncausality, a Bayesian testing procedure is proposed. It avoids the singularity problem that may appear in the Wald test, and it relaxes the assumption of the existence of higher-order moments of the residuals required in classical tests.
    Keywords: second-order noncausality, VAR-GARCH models, Bayesian hypotheses assessment
    JEL: C11 C12 C32 C53
    Date: 2015–05
  4. By: Renáta Géczi-Papp (University of Miskolc)
    Abstract: Forecasting and time series analysis are important in decision making, but the reliability of predictions is often questionable. In today's rapidly changing business environment, it is crucial that decisions are based on correct information, which means a better estimate of expected economic developments. In this paper I examine possible applications of genetic algorithms in time series analysis to improve the reliability of forecasts, and I present the most relevant findings in the field of genetic algorithms and forecasting. My goal is to give a thorough description of the possible applications of genetic algorithms (GA) and to show that this method can be useful in time series analysis. The literature review focuses on the prediction of stock market data. First I briefly summarize the most important methods of time series analysis, then I introduce the genetic algorithm and its main steps. The core of the paper is the literature review, where I describe the most important applications of GA in finance. There are many interesting results in the forecasting of stock market data, which underline the growing importance of GA. Of course the GA model is not perfect; it has shortcomings and limitations. After drawing the conclusions, I hope this study will help the reader better understand the genetic algorithm and its significance in forecasting.
    Date: 2015–10–15
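The GA steps the survey walks through (selection, crossover, mutation) can be sketched on a toy forecasting problem: fitting the coefficient of an AR(1) model by minimising one-step-ahead squared forecast error. Everything below is an illustrative sketch, not any model from the literature the paper reviews:

```python
import random

# Toy genetic algorithm: evolve a population of candidate AR(1) coefficients
# phi toward the one that best forecasts a synthetic series one step ahead.
# Purely illustrative; the seed, sizes, and rates are arbitrary choices.
random.seed(0)

# Synthetic series generated with a "true" phi of 0.7.
true_phi = 0.7
series = [1.0]
for _ in range(200):
    series.append(true_phi * series[-1] + random.gauss(0, 0.1))

def fitness(phi):
    # Negative sum of squared one-step forecast errors (higher is better).
    sse = sum((series[t + 1] - phi * series[t]) ** 2
              for t in range(len(series) - 1))
    return -sse

def evolve(generations=40, pop_size=30, mut_sd=0.05):
    pop = [random.uniform(-1, 1) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)               # crossover: blend two parents
            child += random.gauss(0, mut_sd)    # mutation: small random tweak
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_phi = evolve()
# best_phi should land near the data-generating value 0.7
```

Real applications to stock market data replace the single coefficient with a longer chromosome (model orders, indicator weights, trading rules), but the loop is the same.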
  5. By: Abhimanyu Gupta; Peter M Robinson
    Abstract: Pseudo maximum likelihood estimates are developed for higher-order spatial autoregressive models with increasingly many parameters, including models with spatial lags in the dependent variables and regression models with spatial autoregressive disturbances. We consider models with and without a linear or nonlinear regression component. Sufficient conditions for consistency and asymptotic normality are provided, the results varying according to whether the number of neighbours of a particular unit diverges or is bounded. Monte Carlo experiments examine finite-sample behaviour.
    Date: 2015–10–22
  6. By: Jia Chen (Institute for Fiscal Studies); Degui Li (Institute for Fiscal Studies); Oliver Linton (Institute for Fiscal Studies and University of Cambridge); Zudi Lu (Institute for Fiscal Studies)
    Abstract: In this paper, we consider semiparametric model averaging of the nonlinear dynamic time series system where the number of exogenous regressors is ultra large and the number of autoregressors is moderately large. In order to accurately forecast the response variable, we propose two semiparametric approaches of dimension reduction among the exogenous regressors and auto-regressors (lags of the response variable). In the first approach, we introduce a Kernel Sure Independence Screening (KSIS) technique for the nonlinear time series setting which screens out the regressors whose marginal regression (or auto-regression) functions do not make significant contribution to estimating the joint multivariate regression function and thus reduces the dimension of the regressors from a possible exponential rate to a certain polynomial rate, typically smaller than the sample size; then we consider a semiparametric method of Model Averaging MArginal Regression (MAMAR) for the regressors and auto-regressors that survive the screening procedure, and propose a penalised MAMAR method to further select the regressors which have significant effects on estimating the multivariate regression function and predicting the future values of the response variable. In the second approach, we impose an approximate factor modelling structure on the ultra-high dimensional exogenous regressors and use a well-known principal component analysis to estimate the latent common factors, and then apply the penalised MAMAR method to select the estimated common factors and lags of the response variable which are significant. Through either of the two approaches, we can finally determine the optimal combination of the significant marginal regression and auto-regression functions. Under some regularity conditions, we derive the asymptotic properties for the two semiparametric dimension-reduction approaches. Some numerical studies including simulation and an empirical application are provided to illustrate the proposed methodology.
    Keywords: Kernel smoother; penalised MAMAR; principal component analysis; semiparametric approximation; sure independence screening; ultra-high dimensional time series
    JEL: C14 C22 C52
    Date: 2015–10
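The screening step can be illustrated in simplified form: rank every candidate regressor by the strength of its marginal relationship with the response and keep only the top few. The KSIS technique of the paper uses kernel estimates of marginal regression functions; the absolute correlation used below is a linear stand-in for illustration only, and all names and numbers are made up:

```python
import random

# Simplified stand-in for sure independence screening: score each of many
# candidate regressors by |corr(x_j, y)| and keep the top `keep` columns.
# (KSIS uses kernel marginal regressions; correlation is a linear proxy.)
random.seed(1)

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def screen(X, y, keep):
    """Return indices of the `keep` regressors with the largest |corr| with y."""
    scores = [(abs(corr(col, y)), j) for j, col in enumerate(X)]
    scores.sort(reverse=True)
    return sorted(j for _, j in scores[:keep])

# 50 candidate regressors; only columns 0 and 3 actually drive y.
n = 300
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(50)]
y = [2.0 * X[0][t] - 1.5 * X[3][t] + random.gauss(0, 0.5) for t in range(n)]
kept = screen(X, y, keep=5)
# columns 0 and 3 should survive the screen
```

In the paper's two-stage procedure, the columns that survive this screen would then enter the penalised MAMAR averaging step.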
  7. By: Jackson, Laura E. (Bentley University); Kose, M. Ayhan (World Bank); Otrok, Christopher (University of Missouri and Federal Reserve Bank of St. Louis); Owyang, Michael T. (Federal Reserve Bank of St. Louis)
    Abstract: We compare methods to measure comovement in business cycle data using multi-level dynamic factor models. To do so, we employ a Monte Carlo procedure to evaluate model performance for different specifications of factor models across three different estimation procedures. We consider three general factor model specifications used in applied work. The first is a single-factor model, the second a two-level factor model, and the third a three-level factor model. Our estimation procedures are the Bayesian approach of Otrok and Whiteman (1998), the Bayesian state space approach of Kim and Nelson (1998), and a frequentist principal components approach. The latter serves as a benchmark to measure any potential gains from the more computationally intensive Bayesian procedures. We then apply the three methods to a novel dataset on house prices in advanced and emerging markets from Cesa-Bianchi, Cespedes, and Rebucci (2015) and interpret the empirical results in light of the Monte Carlo results.
    Keywords: principal components; Kalman filter; data augmentation; business cycles
    JEL: C3
    Date: 2015–08–26
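The frequentist principal-components benchmark in this comparison extracts a common factor as the projection of the panel onto the leading eigenvector of its covariance matrix. A minimal sketch of that idea, using power iteration in place of a linear-algebra library, with an entirely synthetic one-factor panel:

```python
import random

# Sketch of principal-components factor extraction: the first common factor
# of a panel is its projection onto the leading eigenvector of the panel's
# covariance matrix. Power iteration finds that eigenvector. Illustrative only.
random.seed(2)

def covariance(panel):
    """Covariance matrix of a panel stored as rows = series."""
    n_series, n_obs = len(panel), len(panel[0])
    means = [sum(row) / n_obs for row in panel]
    return [[sum((panel[i][t] - means[i]) * (panel[j][t] - means[j])
                 for t in range(n_obs)) / n_obs
             for j in range(n_series)]
            for i in range(n_series)]

def leading_eigenvector(mat, iters=200):
    """Power iteration: repeatedly apply the matrix and renormalise."""
    n = len(mat)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Panel of 6 series sharing one common factor plus idiosyncratic noise.
n_obs = 400
factor = [random.gauss(0, 1) for _ in range(n_obs)]
panel = [[f + random.gauss(0, 0.3) for f in factor] for _ in range(6)]

v = leading_eigenvector(covariance(panel))
estimated = [sum(v[i] * panel[i][t] for i in range(6)) for t in range(n_obs)]

# Correlation between the estimated and the true factor (sign-free).
me, mf = sum(estimated) / n_obs, sum(factor) / n_obs
num = sum((estimated[t] - me) * (factor[t] - mf) for t in range(n_obs))
den = (sum((e - me) ** 2 for e in estimated)
       * sum((f - mf) ** 2 for f in factor)) ** 0.5
factor_fit = abs(num / den)
# factor_fit should be close to 1
```

Multi-level models extend this by letting each series load on both a global factor and a group-level factor, which is where the Bayesian procedures compared in the paper come in.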
  8. By: Annastiina Silvennoinen (School of Economics and Finance, Queensland University of Technology); Timo Teräsvirta (Aarhus University and CREATES)
    Abstract: The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model to the original series. It is found by simulation that the positive size distortion present in these tests is a function of the kurtosis of the GARCH process. Adjusting the size by numerical methods is considered. The possibility of testing the constancy of the unconditional variance before fitting a GARCH model to the data is discussed. The power of the ensuing test is vastly superior to that of the misspecification test, and the size distortion is minimal. The test has reasonable power even in very short time series. It would thus serve as a test of constant variance in conditional mean models. An application to exchange rate returns is included.
    Keywords: autoregressive conditional heteroskedasticity, modelling volatility, testing parameter constancy, time-varying GARCH
    JEL: C32 C52
    Date: 2015–10–27
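The idea of checking variance constancy before fitting a GARCH model can be illustrated with a crude split-sample variance ratio: under constancy, the two halves of the series should have similar sample variances. This toy diagnostic is not the misspecification or specification tests developed in the paper, only a sketch of the underlying idea:

```python
import random

# Toy variance-constancy check: compare sample variances of the two halves
# of a series. Under constant unconditional variance the ratio is near 1.
# Illustrative only; not the tests proposed in the paper.
random.seed(3)

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

def variance_ratio(series):
    """Ratio of second-half to first-half sample variance (about 1 under constancy)."""
    half = len(series) // 2
    return variance(series[half:]) / variance(series[:half])

# Series with constant variance vs. one whose sd doubles halfway through.
constant = [random.gauss(0, 1.0) for _ in range(1000)]
shifting = ([random.gauss(0, 1.0) for _ in range(500)]
            + [random.gauss(0, 2.0) for _ in range(500)])

r_const = variance_ratio(constant)   # near 1
r_shift = variance_ratio(shifting)   # near 4, since the variance quadruples
```

A formal version compares such statistics against their null distribution; the paper's point is that doing this before GARCH fitting avoids the kurtosis-driven size distortion of the post-fit misspecification tests.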

This nep-ets issue is ©2015 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.