nep-ets New Economics Papers
on Econometric Time Series
Issue of 2018‒02‒05
nine papers chosen by
Yong Yin
SUNY at Buffalo

  1. Predicting crypto-currencies using sparse non-Gaussian state space models By Christian Hotz-Behofsits; Florian Huber; Thomas O. Zörner
  2. Alternatives to Polynomial Trend-Corrected Differences-In-Differences Models By Vincent Vandenberghe
  3. ARDL model as a remedy for spurious regression: problems, performance and prospectus By Ghouse, Ghulam; Khan, Saud Ahmed; Rehman, Atiq Ur
  4. Testing for Common Breaks in a Multiple Equations System By Tatsushi Oka; Pierre Perron
  5. Generative Models for Stochastic Processes Using Convolutional Neural Networks By Fernando Fernandes Neto
  6. Structural Scenario Analysis with SVARs By Antolin-Diaz, Juan; Petrella, Ivan; Rubio-Ramírez, Juan Francisco
  7. Exact Likelihood Estimation and Probabilistic Forecasting in Higher-order INAR(p) Models By Lu, Yang
  8. Outliers and misleading leverage effect in asymmetric GARCH-type models By M. Angeles Carnero Fernández; Ana Pérez Espartero
  9. Bayesian Analysis of Realized Matrix-Exponential GARCH Models By Manabu Asai; Michael McAleer

  1. By: Christian Hotz-Behofsits; Florian Huber; Thomas O. Zörner
    Abstract: In this paper we forecast daily returns of crypto-currencies using a wide variety of econometric models. To capture salient features commonly observed in financial time series, such as rapid changes in the conditional variance, non-normality of the measurement errors and sharply increasing trends, we develop a time-varying parameter VAR with t-distributed measurement errors and stochastic volatility. To control for overparameterization, we rely on the Bayesian shrinkage-prior literature, which enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data, we perform a real-time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models, we also run a simple trading exercise.
    Date: 2018–01
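    The naive random-walk benchmark the abstract refers to can be sketched as follows. This is a minimal illustration on synthetic Gaussian returns standing in for the paper's actual crypto-currency data; the expanding-mean competitor is a hypothetical stand-in for the paper's richer models.

```python
import math
import random

random.seed(42)

# Synthetic stand-in for roughly one year of daily crypto-currency returns
# (the paper uses real data; Gaussian draws here are purely illustrative).
returns = [random.gauss(0.001, 0.04) for _ in range(365)]

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def random_walk_errors(rets, burn_in=100):
    # Random walk in prices => the forecast of tomorrow's return is zero,
    # so the forecast error is simply the realized return.
    return rets[burn_in:]

def expanding_mean_errors(rets, burn_in=100):
    # Competing naive model: forecast with the expanding-window sample mean.
    return [rets[t] - sum(rets[:t]) / t for t in range(burn_in, len(rets))]

rw_rmse = rmse(random_walk_errors(returns))
em_rmse = rmse(expanding_mean_errors(returns))
```

    A proposed model "beats the benchmark" in this setup when its out-of-sample RMSE falls below `rw_rmse`.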
  2. By: Vincent Vandenberghe (UNIVERSITE CATHOLIQUE DE LOUVAIN, Institut de Recherches Economiques et Sociales (IRES))
    Abstract: A common problem with differences-in-differences (DD) estimates is the failure of the parallel-trend assumption. To cope with this, most authors include polynomial (linear, quadratic…) trends among the regressors and estimate the treatment effect as a one-off trend shift. In practice that strategy does not work very well because, inter alia, the estimation of the trend uses post-treatment data. An extreme case arises when the sample covers only one period before treatment and many after: the trend's estimate then relies almost completely on post-treatment developments and absorbs most of the treatment effect. What is needed is a method that i) uses pre-treatment observations to capture linear or non-linear trend differences, and ii) extrapolates these to compute the treatment effect. This paper shows how this can be achieved using a fully-flexible version of the canonical DD equation. It also contains an illustration using data on a 1994-2000 EU programme that was implemented in the Belgian province of Hainaut.
    Keywords: Treatment-Effect Analysis, Differences-in-Differences Models, Correction for trend differences
    JEL: C21 C4 C5
    Date: 2018–01–25
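    The two-step idea — fit the trend difference on pre-treatment data only, then extrapolate it into the post-treatment period — can be sketched on a noiseless toy example (hypothetical numbers, not the Hainaut data):

```python
# Treated-minus-control outcome gap: 2 + 0.5*t before treatment,
# plus a true treatment effect of 3.0 from period 5 onward.
pre_periods = list(range(5))
post_periods = list(range(5, 8))

def gap(t):
    return 2.0 + 0.5 * t + (3.0 if t >= 5 else 0.0)

# Step 1: fit a linear trend to the PRE-treatment gap only (simple OLS).
n = len(pre_periods)
tbar = sum(pre_periods) / n
gbar = sum(gap(t) for t in pre_periods) / n
slope = (sum((t - tbar) * (gap(t) - gbar) for t in pre_periods)
         / sum((t - tbar) ** 2 for t in pre_periods))
intercept = gbar - slope * tbar

# Step 2: extrapolate the pre-trend into the post period; the treatment
# effect is the average deviation of the observed gap from the extrapolation.
effect = sum(gap(t) - (intercept + slope * t)
             for t in post_periods) / len(post_periods)
```

    Because the trend is estimated on pre-treatment data alone, the extrapolation cannot absorb the treatment effect, and `effect` recovers the true value of 3.0 exactly in this noiseless example.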
  3. By: Ghouse, Ghulam; Khan, Saud Ahmed; Rehman, Atiq Ur
    Abstract: Spurious regressions have played a vital role in the construction of contemporary time series econometrics and have motivated many of the tools employed in applied macroeconomics. Conventional econometrics has limitations in the treatment of spurious regression in non-stationary time series. Revisiting the well-established study of Granger and Newbold (1974), we note that the experiments in that paper lacked lag dynamics, which is what led to spurious regression. As a consequence of that paper, unit root and cointegration analysis became the standard ways in conventional econometrics to circumvent spurious regression. These procedures are, however, equally capricious because of specification decisions such as the choice of the deterministic part, structural breaks, autoregressive lag length and the distribution of the innovation process. This study explores an alternative treatment for spurious regression. We conclude that missing variables (lagged values) are the major cause of spurious regression; viewing the problem through the lens of missing variables leads naturally to the ARDL model. The study relies mainly on Monte Carlo simulations. The results provide justification that the ARDL model can be used as an alternative tool to avoid the spurious regression problem.
    Keywords: Spurious regression, misspecification, stationarity, unit root, cointegration, ARDL
    JEL: B41 C4 C5 C53
    Date: 2018–01–10
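    A minimal Monte Carlo in the spirit of Granger and Newbold (1974) can illustrate the point: a static regression of one independent random walk on another rejects the (true) null far too often, while a dynamic specification that adds the lagged dependent variable largely removes the problem. This is a rough sketch of the lag-dynamics argument, not the paper's full ARDL design; all numbers are illustrative.

```python
import math
import random

random.seed(1)

def ols_resid(y, x):
    """Residuals from a simple OLS of y on a constant and x."""
    n = len(y)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    return [yi - a - b * xi for xi, yi in zip(x, y)]

def tstat(y, x, df):
    """t-statistic on the slope in a no-intercept OLS of y on x."""
    sxx = sum(xi * xi for xi in x)
    b = sum(xi * yi for xi, yi in zip(x, y)) / sxx
    ssr = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    return b / math.sqrt(ssr / df / sxx)

def one_replication(n=100):
    # Two INDEPENDENT Gaussian random walks.
    y, x = [0.0], [0.0]
    for _ in range(n - 1):
        y.append(y[-1] + random.gauss(0, 1))
        x.append(x[-1] + random.gauss(0, 1))
    # Static regression y_t = a + b x_t (demean, then t with n-2 df).
    ybar, xbar = sum(y) / n, sum(x) / n
    t_static = tstat([v - ybar for v in y], [v - xbar for v in x], n - 2)
    # Dynamic regression y_t = a + rho y_{t-1} + b x_t: partial out the
    # lag (Frisch-Waugh), then a no-intercept regression of the residuals.
    ry = ols_resid(y[1:], y[:-1])
    rx = ols_resid(x[1:], y[:-1])
    t_dynamic = tstat(ry, rx, (n - 1) - 3)
    return abs(t_static) > 1.96, abs(t_dynamic) > 1.96

results = [one_replication() for _ in range(200)]
spurious_rate = sum(s for s, _ in results) / 200
dynamic_rate = sum(d for _, d in results) / 200
```

    With unrelated random walks, `spurious_rate` is far above the nominal 5%, while `dynamic_rate` stays near it once the lag is included.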
  4. By: Tatsushi Oka; Pierre Perron
    Abstract: The issue addressed in this paper is that of testing for common breaks across or within equations of a multivariate system. Our framework is very general and allows integrated regressors and trends as well as stationary regressors. The null hypothesis is that breaks in different parameters occur at common locations and are separated by some positive fraction of the sample size unless they occur across different equations. Under the alternative hypothesis, the break dates across parameters are not the same and also need not be separated by a positive fraction of the sample size whether within or across equations. The test considered is the quasi-likelihood ratio test assuming normal errors, though as usual the limit distribution of the test remains valid with non-normal errors. Of independent interest, we provide results about the rate of convergence of the estimates when searching over all possible partitions subject only to the requirement that each regime contains at least as many observations as some positive fraction of the sample size, allowing break dates not separated by a positive fraction of the sample size across equations. Simulations show that the test has good finite sample properties. We also provide an application to issues related to level shifts and persistence for various measures of inflation to illustrate its usefulness.
    Date: 2016–05
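    The basic building block — estimating a break date by minimizing the sum of squared residuals over admissible partitions, with each regime required to contain at least a positive fraction of the sample — can be sketched for a single mean-shift equation. This is a toy version; the paper's framework covers multiple equations, trends and integrated regressors.

```python
import random

random.seed(7)

n, true_break = 100, 60
# Mean shift of 2.0 at the true break date, i.i.d. Gaussian noise.
y = [(0.0 if t < true_break else 2.0) + random.gauss(0, 0.5)
     for t in range(n)]

def ssr(segment):
    """Sum of squared residuals around the segment mean."""
    m = sum(segment) / len(segment)
    return sum((v - m) ** 2 for v in segment)

# Search over break dates, trimming 15% of the sample at each end so that
# every regime contains a minimum fraction of the observations.
trim = int(0.15 * n)
candidates = range(trim, n - trim)
est_break = min(candidates, key=lambda k: ssr(y[:k]) + ssr(y[k:]))
```

    With a large shift relative to the noise, the SSR-minimizing date lands on (or very near) the true break.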
  5. By: Fernando Fernandes Neto
    Abstract: The present paper aims to demonstrate the usage of Convolutional Neural Networks as a generative model for stochastic processes, enabling researchers from a wide range of fields (such as quantitative finance and physics) to develop a general tool for forecasts and simulations without the need to identify/assume a specific system structure or estimate its parameters.
    Date: 2018–01
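    The core idea — a causal convolution applied to noise acts as a generator for a stochastic process — can be sketched without any deep-learning machinery. The kernel weights below are arbitrary stand-ins for weights a trained network would learn; filtering white noise through a fixed causal kernel reproduces a moving-average process, the simplest instance of the convolution-as-generator view.

```python
import random

random.seed(3)

def causal_conv_generate(kernel, n):
    """Generate a series by causally convolving i.i.d. Gaussian noise
    with a fixed kernel (kernel[j] weights the noise j steps back)."""
    k = len(kernel)
    noise = [random.gauss(0, 1) for _ in range(n + k - 1)]
    return [sum(w * noise[t + k - 1 - j] for j, w in enumerate(kernel))
            for t in range(n)]

series = causal_conv_generate([0.5, 0.3, 0.2], 2000)

def acf1(s):
    """Lag-1 sample autocorrelation."""
    m = sum(s) / len(s)
    num = sum((s[t] - m) * (s[t - 1] - m) for t in range(1, len(s)))
    den = sum((v - m) ** 2 for v in s)
    return num / den
```

    The generated series inherits serial dependence from the kernel (here, an MA(2) structure with theoretical lag-1 autocorrelation of about 0.55), which is exactly what a learned convolutional generator exploits.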
  6. By: Antolin-Diaz, Juan; Petrella, Ivan; Rubio-Ramírez, Juan Francisco
    Abstract: In the context of vector autoregressions, conditional forecasts are typically constructed by specifying the future path of one or more variables while remaining silent about the structural shocks that might have caused the path. However, in many cases, researchers may be interested in identifying a structural vector autoregression and choosing which structural shock is driving the path of the conditioning variables. This would allow researchers to create a ''structural scenario'' that can be given an economic interpretation. In this paper we show how to construct structural scenarios and develop efficient algorithms to implement our methods. We show how structural scenario analysis can lead to results that are very different from, but complementary to, those of the traditional conditional forecasting exercises. We also propose an approach to assess and compare the plausibility of alternative scenarios. We illustrate our methods by applying them to two examples: comparing alternative monetary policy options and stress testing the reaction of bank profitability to an economic recession.
    Keywords: Bayesian methods; Conditional forecasts; probability distribution; SVARs
    JEL: C32 C53 E47
    Date: 2018–01
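    The mechanics of driving a conditioning path with a single chosen structural shock can be illustrated in a bivariate structural VAR(1) with a lower-triangular impact matrix. The coefficients below are made up for illustration; the paper's algorithms handle the general Bayesian case and the assessment of scenario plausibility.

```python
# Structural VAR(1): y_t = A y_{t-1} + B e_t, with B lower triangular,
# so e_1 is the only shock hitting variable 1 on impact.
A = [[0.5, 0.1],
     [0.2, 0.7]]
B = [[1.0, 0.0],
     [0.4, 1.0]]

def structural_scenario(y0, path1):
    """Force variable 1 onto `path1` using ONLY the first structural shock;
    the second shock is held at zero and variable 2 responds endogenously."""
    y, shocks, out = list(y0), [], []
    for target in path1:
        pred = [A[i][0] * y[0] + A[i][1] * y[1] for i in range(2)]
        e1 = (target - pred[0]) / B[0][0]   # shock that delivers the target
        y = [pred[i] + B[i][0] * e1 for i in range(2)]
        shocks.append(e1)
        out.append(list(y))
    return out, shocks

path, shocks = structural_scenario([0.0, 0.0], [1.0, 1.0, 1.0])
```

    Variable 1 hits its conditioning path exactly, and the implied sequence of structural shocks (here 1.0, then smaller corrections) is what gives the scenario its economic interpretation.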
  7. By: Lu, Yang
    Abstract: The computation of the likelihood function and the term structure of probabilistic forecasts in higher-order INAR(p) models are generally regarded as numerically intractable, and the literature has considered various approximations. Using the notion of compound autoregressive process, we propose an exact and fast algorithm for both quantities. We find that existing approximation schemes induce significant errors in forecasting.
    Keywords: compound autoregressive process, probabilistic forecast of counts, matrix arithmetic.
    JEL: C22 C25
    Date: 2018–01–01
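    For the first-order case the exact one-step-ahead forecast distribution is already available in closed form: an INAR(1) with binomial thinning, X_{t+1} = α∘X_t + ε_{t+1}, yields a convolution of a binomial survival term and the innovation distribution. A sketch, with Poisson innovations assumed for illustration (the paper's contribution is the exact treatment of general higher-order INAR(p)):

```python
import math

def inar1_forecast_pmf(x, alpha, lam, kmax=40):
    """Exact one-step-ahead pmf of X_{t+1} | X_t = x in an INAR(1) with
    binomial thinning (survival prob. alpha) and Poisson(lam) innovations:
    a binomial-Poisson convolution, truncated at kmax."""
    def binom(j):
        return math.comb(x, j) * alpha ** j * (1 - alpha) ** (x - j)
    def pois(k):
        return math.exp(-lam) * lam ** k / math.factorial(k)
    return [sum(binom(j) * pois(k - j) for j in range(min(k, x) + 1))
            for k in range(kmax)]

pmf = inar1_forecast_pmf(x=5, alpha=0.6, lam=2.0)
forecast_mean = sum(k * p for k, p in enumerate(pmf))
```

    The pmf sums to one (up to truncation), and its mean equals the textbook value alpha*x + lam = 5.0 here, so the whole forecast distribution, not just a point forecast, is available exactly.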
  8. By: M. Angeles Carnero Fernández (Universidad de Alicante); Ana Pérez Espartero (Dpto. Economía Aplicada)
    Abstract: This paper illustrates how outliers can affect both the estimation and testing of the leverage effect, focusing on the TGARCH model. Three estimation methods are compared through Monte Carlo experiments: Gaussian quasi-maximum likelihood, quasi-maximum likelihood based on the Student's t likelihood, and the least absolute deviation method. The empirical behavior of the t-ratio and likelihood ratio tests for the significance of the leverage parameter is also analyzed. Our results highlight the unreliability of Gaussian quasi-maximum likelihood methods in the presence of outliers. In particular, we show that one isolated outlier can hide a true leverage effect, whereas two consecutive outliers bias the estimated leverage coefficient in a direction that crucially depends on the sign of the first outlier, and can lead to wrongly rejecting the null of no leverage effect or to estimating asymmetries of the wrong sign. By contrast, we highlight the good performance of the robust estimators in the presence of an isolated outlier. However, when there are patches of outliers, our findings suggest that the sizes and powers of the tests, as well as the estimated parameters based on robust methods, may still be distorted in some cases. We illustrate these results with two series of daily returns, namely the Spanish IGBM Consumer Goods index and Natural Gas futures contracts.
    Keywords: Conditional heteroscedasticity, QMLE, Robust estimators, TGARCH, AVGARCH
    JEL: C22 G10 Q40
    Date: 2018–01
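    The asymmetry at stake can be seen directly in the TGARCH recursion for the conditional standard deviation: a negative return raises next-period volatility more than a positive return of the same size, and a single large outlier therefore propagates asymmetrically. A sketch with illustrative parameter values (the paper's point concerns what such outliers do to the *estimated* leverage parameter, which this toy recursion does not re-estimate):

```python
def tgarch_sigma(eps, omega=0.1, alpha=0.05, gamma=0.10, beta=0.85,
                 sigma0=1.0):
    """Conditional standard deviation of a TGARCH(1,1):
    sigma_t = omega + alpha*|eps_{t-1}| + gamma*|eps_{t-1}|*1{eps_{t-1}<0}
              + beta*sigma_{t-1}.
    gamma > 0 is the leverage effect: bad news raises volatility more."""
    sig = [sigma0]
    for e in eps[:-1]:
        leverage = gamma * abs(e) if e < 0 else 0.0
        sig.append(omega + alpha * abs(e) + leverage + beta * sig[-1])
    return sig

calm      = [0.5] * 10
neg_shock = [0.5] * 5 + [-3.0] + [0.5] * 4   # one negative outlier
pos_shock = [0.5] * 5 + [3.0] + [0.5] * 4    # same size, positive sign
s_calm = tgarch_sigma(calm)
s_neg  = tgarch_sigma(neg_shock)
s_pos  = tgarch_sigma(pos_shock)
```

    Immediately after the shock, volatility is highest following the negative outlier, then the positive one, then the calm path; the sign of an outlier, not just its size, drives the volatility response.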
  9. By: Manabu Asai (Soka University, Japan); Michael McAleer (Asia University, Taiwan; University of Sydney Business School, Australia; Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands; Complutense University of Madrid, Spain; Yokohama National University, Japan)
    Abstract: The paper develops a new realized matrix-exponential GARCH (MEGARCH) model, which uses the information in returns and the realized measure of the co-volatility matrix simultaneously. The paper also considers an alternative multivariate asymmetric function to develop news impact curves. We consider Bayesian MCMC estimation to allow for non-normal posterior distributions. For three US financial assets, we compare the realized MEGARCH models with existing multivariate GARCH-class models. The empirical results indicate that the realized MEGARCH models outperform the other models in terms of in-sample and out-of-sample performance. The news impact curves based on the posterior densities provide reasonable results.
    JEL: C11 C32
    Date: 2018–01–17
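    The appeal of the matrix-exponential specification is that modelling the *log* of the covariance matrix guarantees positive definiteness without parameter restrictions: for any symmetric A, exp(A) is symmetric positive definite, with det exp(A) = exp(tr A). A sketch via the truncated power series (2x2 for simplicity; the MEGARCH model applies this to the conditional co-volatility matrix):

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=30):
    """Matrix exponential of a 2x2 matrix via the power series
    sum_n A^n / n!, truncated after `terms` terms."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity = A^0 / 0!
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]
        term = mat_mul(term, A)          # term is now A^n / n!
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

# A symmetric "log-covariance" matrix with one NEGATIVE eigenvalue:
# unconstrained parameters, yet exp(A) is a valid covariance matrix.
A = [[-1.0, 0.8], [0.8, 0.5]]
S = expm(A)
det_S = S[0][0] * S[1][1] - S[0][1] * S[1][0]
```

    Even though A itself is indefinite, `S` is symmetric with positive determinant equal to exp(tr A) = exp(-0.5), so no positivity constraints on the GARCH parameters are needed.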

This nep-ets issue is ©2018 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the project's homepage. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.