nep-ets New Economics Papers
on Econometric Time Series
Issue of 2013‒11‒22
six papers chosen by
Yong Yin
SUNY at Buffalo

  1. Polynomial Regressions and Nonsense Inference By Daniel Ventosa-Santaulària; Carlos Vladimir Rodríguez-Caballero
  2. “Forecasting Business surveys indicators: neural networks vs. time series models” By Oscar Claveria; Salvador Torra
  3. Global self-weighted and local quasi-maximum exponential likelihood estimators for ARMA-GARCH/IGARCH models By Zhu, Ke; Ling, Shiqing
  4. Inference on Impulse Response Functions in Structural VAR Models By Atsushi Inoue; Lutz Kilian
  5. Multivariate Time Series Model with Hierarchical Structure for Over-dispersed Discrete Outcomes By Nobuhiko Terui; Masataka Ban
  6. Copula-based dynamic conditional correlation multiplicative error processes By Bodnar, Taras; Hautsch, Nikolaus

  1. By: Daniel Ventosa-Santaulària (División de Economía, CIDE); Carlos Vladimir Rodríguez-Caballero (Aarhus University and CREATES)
    Abstract: Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis, and psychology, to mention just a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips’ (1986) results by proving that inference drawn from polynomial specifications under stochastic nonstationarity is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, are a call for caution whenever practitioners estimate polynomial regressions.
    Keywords: Polynomial Regression; Misleading Inference; Integrated Processes
    JEL: C12 C15 C22
    Date: 2013–11–15
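The spurious-inference phenomenon the authors extend is easy to reproduce. The sketch below is illustrative only, not the paper's generalized specification: it regresses one independent random walk on a quadratic polynomial of another and records how often a nominal 5% t-test on the linear term rejects.

```python
import numpy as np

def spurious_poly_t(T, rng):
    """t-statistic on the linear term when regressing one independent
    random walk on a quadratic polynomial of another."""
    x = np.cumsum(rng.standard_normal(T))  # integrated regressor
    y = np.cumsum(rng.standard_normal(T))  # independent integrated regressand
    X = np.column_stack([np.ones(T), x, x**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (T - 3)
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta[1] / se[1]

rng = np.random.default_rng(0)
tstats = np.array([spurious_poly_t(500, rng) for _ in range(200)])
reject = np.mean(np.abs(tstats) > 1.96)
print(f"Nominal 5% test rejects in {reject:.0%} of replications")
```

Because the regressors are integrated and do not cointegrate, the rejection frequency far exceeds 5% and grows with the sample size, which is the "nonsense inference" of the title.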
  2. By: Oscar Claveria (Faculty of Economics, University of Barcelona); Salvador Torra (Faculty of Economics, University of Barcelona)
    Abstract: The objective of this paper is to compare different methods for the short-run forecasting of Business Survey Indicators. We compare the forecasting accuracy of Artificial Neural Networks (ANN) against three different time series models: autoregressions (AR), autoregressive integrated moving average (ARIMA) models and self-exciting threshold autoregressions (SETAR). We consider all the indicators of the question related to a country’s general situation regarding the overall economy, capital expenditures and private consumption (present judgement, comparison with the same time last year, and the expected situation by the end of the next six months) of the World Economic Survey (WES) carried out by the Ifo Institute for Economic Research in co-operation with the International Chamber of Commerce. The forecast competition is undertaken for fourteen countries of the European Union. The main results of the forecast competition are offered for raw data for the period ranging from 1989 to 2008, using the last eight quarters to compare the forecasting accuracy of the different techniques. ANN and ARIMA models outperform SETAR and AR models. Enlarging the observed time series of Business Survey Indicators is of the utmost importance in order to assess the implications of the current situation and to support their use as input in quantitative forecast models.
    Keywords: Business surveys; Forecasting; Time series models; Nonlinear models; Neural networks.
    Date: 2013–11
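The design of such an out-of-sample forecast competition can be sketched in a few lines. The synthetic series, the AR(1) specification, and the eight-quarter holdout below are illustrative stand-ins, not the WES data or the models estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T, h = 80, 8   # ~20 years of quarterly data; last 8 quarters held out

# Synthetic "survey indicator": a persistent AR(1) series (illustrative only).
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

train, test = y[:-h], y[-h:]

# Fit an AR(1) by OLS on the training sample (slope of y_t on y_{t-1}).
phi = np.polyfit(train[:-1], train[1:], 1)[0]

# Recursive multi-step forecasts over the holdout, as in a forecast competition.
fc = np.empty(h)
last = train[-1]
for i in range(h):
    last = phi * last
    fc[i] = last

mae_ar = np.mean(np.abs(test - fc))
mae_naive = np.mean(np.abs(test - train[-1]))  # random-walk benchmark
print(f"MAE AR(1): {mae_ar:.3f}  MAE naive: {mae_naive:.3f}")
```

Each competing model (AR, ARIMA, SETAR, ANN) would supply its own `fc` vector, and accuracy is compared on the common holdout.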
  3. By: Zhu, Ke; Ling, Shiqing
    Abstract: This paper investigates the asymptotic theory of the quasi-maximum exponential likelihood estimator (QMELE) for ARMA–GARCH models. Under only a fractional moment condition, the strong consistency and the asymptotic normality of the global self-weighted QMELE are obtained. Based on this self-weighted QMELE, the local QMELE is shown to be asymptotically normal for the ARMA model with GARCH (finite variance) and IGARCH errors. A formal comparison of the two estimators is given for some cases. A simulation study is carried out to assess the performance of these estimators, and a real example on the world crude oil price is given.
    Keywords: ARMA–GARCH/IGARCH model; asymptotic normality; global self-weighted/local quasi-maximum exponential likelihood estimator; strong consistency.
    JEL: C13 C5
    Date: 2013–11–17
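For intuition: the exponential (Laplace) quasi-likelihood reduces, for a pure location model, to least absolute deviations (LAD). The sketch below is a simplified stand-in (the paper's estimators also weight observations and model GARCH scales): it estimates an AR(1) slope under heavy-tailed errors by minimizing the LAD criterion over a grid.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 5000

# AR(1) with heavy-tailed Student-t(3) errors: the low-moment setting
# that self-weighted QMELE-type estimators are designed for.
phi_true = 0.5
e = rng.standard_t(3, T)
y = np.empty(T)
y[0] = e[0]
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + e[t]

# Exponential quasi-likelihood for the AR slope reduces to LAD;
# minimize it by grid search (a sketch, not an efficient optimizer).
grid = np.linspace(0.0, 1.0, 1001)
lad = [np.abs(y[1:] - p * y[:-1]).mean() for p in grid]
phi_hat = grid[np.argmin(lad)]
print(f"LAD/exponential-QMLE estimate of phi: {phi_hat:.3f} (true 0.5)")
```

Unlike Gaussian QMLE, this criterion stays well behaved when the innovation variance is barely finite, which is the point of the fractional moment condition in the abstract.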
  4. By: Atsushi Inoue; Lutz Kilian
    Abstract: Skepticism toward traditional identifying assumptions based on exclusion restrictions has led to a surge in the use of structural VAR models in which structural shocks are identified by restricting the sign of the responses of selected macroeconomic aggregates to these shocks. Researchers commonly report the vector of pointwise posterior medians of the impulse responses as a measure of central tendency of the estimated response functions, along with pointwise 68 percent posterior error bands. It can be shown that this approach cannot be used to characterize the central tendency of the structural impulse response functions. We propose an alternative method of summarizing the evidence from sign-identified VAR models designed to enhance their practical usefulness. Our objective is to characterize the most likely admissible model(s) within the set of structural VAR models that satisfy the sign restrictions. We show how the set of most likely structural response functions can be computed from the posterior mode of the joint distribution of admissible models both in the fully identified and in the partially identified case, and we propose a highest-posterior density credible set that characterizes the joint uncertainty about this set. Our approach can also be used to resolve the long-standing problem of how to conduct joint inference on sets of structural impulse response functions in exactly identified VAR models. We illustrate the differences between our approach and the traditional approach for the analysis of the effects of monetary policy shocks and of the effects of oil demand and oil supply shocks.
    Date: 2013–07
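The set of admissible models in a sign-identified VAR can be sketched as follows: candidate impact matrices are generated by rotating a Cholesky factor with Haar-distributed orthogonal matrices and kept when they satisfy the sign restrictions. The covariance matrix and the single restriction below are illustrative assumptions, not the applications in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Reduced-form innovation covariance (illustrative 2-variable system).
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
P = np.linalg.cholesky(Sigma)

def random_orthogonal(k, rng):
    # QR of a Gaussian matrix, sign-normalized, gives a Haar-distributed Q.
    q, r = np.linalg.qr(rng.standard_normal((k, k)))
    return q * np.sign(np.diag(r))

# Keep rotations whose first column (impact response to shock 1)
# is positive in both variables: a simple sign restriction.
admissible = []
for _ in range(1000):
    Q = random_orthogonal(2, rng)
    B = P @ Q                      # candidate impact matrix
    if (B[:, 0] > 0).all():
        admissible.append(B)

print(f"{len(admissible)} of 1000 rotations satisfy the sign restriction")
```

Every retained `B` reproduces the same reduced-form covariance (B B' = Sigma), which is why sign restrictions identify a set rather than a point; the paper's contribution concerns how to summarize that set (posterior mode and joint credible set rather than pointwise medians).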
  5. By: Nobuhiko Terui; Masataka Ban
    Abstract: In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it is augmented to higher levels in a hierarchical way using the Poisson-Multinomial relationship, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. To address the over-dispersion problem, Gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density functions of the compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and predictive densities. We present an empirical application using weekly product sales time series from a store, comparing the proposed models accommodating over-dispersion with alternatives that ignore it, by several model selection criteria including in-sample fit, out-of-sample forecasting errors, and an information criterion. The empirical results show that the proposed over-dispersed models based on compound Poisson variables work well and provide improved results compared with models that do not account for over-dispersion.
    Date: 2013–01
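The over-dispersion mechanism at the core of the model, Gamma compound Poisson counts, is simple to simulate. The parameter values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Plain Poisson: variance equals the mean.
lam = 5.0
poisson = rng.poisson(lam, n)

# Gamma compound Poisson: draw a Gamma-distributed rate per observation,
# then a Poisson count given that rate (marginally negative binomial).
shape, scale = 2.0, lam / 2.0          # mean rate = shape * scale = 5
rates = rng.gamma(shape, scale, n)
compound = rng.poisson(rates)

print("Poisson   mean/var:", poisson.mean(), poisson.var())
print("Compound  mean/var:", compound.mean(), compound.var())
# Compound variance = lam + lam**2 / shape = 17.5 > 5: over-dispersed
# counts with the same mean, which plain Poisson models cannot match.
```

The same Gamma mixing idea, combined with Dirichlet compound multinomial shares, is what lets the paper's hierarchical model absorb the extra variance in sales count data.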
  6. By: Bodnar, Taras; Hautsch, Nikolaus
    Abstract: We introduce a copula-based dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables' conditional mean processes using a multiplicative error model, we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood, which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks.
    Keywords: multiplicative error model; trading processes; copula; DCC-GARCH; liquidity risk
    JEL: C32 C58 C46
    Date: 2013
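The copula transformation underlying the model maps non-negative residuals into a Gaussian domain via the probability integral transform. A minimal sketch, assuming unit-exponential innovations as a stand-in for estimated multiplicative error model residuals (the paper estimates the marginals rather than assuming them):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
n = 50_000

# Non-negative, unit-mean innovations, as in a multiplicative error model.
eps = rng.exponential(1.0, n)

# Probability integral transform under the (here known) marginal CDF,
# then map into the Gaussian domain: the Gaussian copula transformation.
u = 1.0 - np.exp(-eps)                      # Exp(1) CDF
nd = NormalDist()
z = np.array([nd.inv_cdf(ui) for ui in u])

print(f"Gaussianized residuals: mean={z.mean():.3f}, std={z.std():.3f}")
```

Once in the Gaussian domain, the transformed residuals `z` can be handed to a standard DCC-GARCH specification to model time-varying correlations, which is the division of labor the abstract describes.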

This nep-ets issue is ©2013 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.