nep-ets New Economics Papers
on Econometric Time Series
Issue of 2015‒05‒09
twelve papers chosen by
Yong Yin
SUNY at Buffalo

  1. The evolution of the Volatility in Financial Returns: Realized Volatility vs Stochastic Volatility Measures By António Alberto Santos
  2. A Martingale Decomposition of Discrete Markov Chains By Peter Reinhard Hansen
  3. A Markov Chain Estimator of Multivariate Volatility from High Frequency Data By Peter Reinhard Hansen; Guillaume Horel; Asger Lunde; Ilya Archakov
  4. Bayesian averaging vs. dynamic factor models for forecasting economic aggregates with tendency survey data By Bialowolski, Piotr; Kuszewski, Tomasz; Witkowski, Bartosz
  5. On the Forecast Combination Puzzle By Wei Qian; Craig A. Rolling; Gang Cheng; Yuhong Yang
  6. CAN A SUBSET OF FORECASTERS BEAT THE SIMPLE AVERAGE IN THE SPF? By Constantin Burgi
  7. QUASI MAXIMUM-LIKELIHOOD ESTIMATION OF DYNAMIC PANEL DATA MODELS FOR SHORT TIME SERIES By Robert F. Phillips
  8. Improving short term load forecast accuracy via combining sister forecasts By Jakub Nowotarski; Bidong Liu; Rafal Weron; Tao Hong
  9. Estimation of connectivity measures in gappy time series By G. Papadopoulos; D. Kugiumtzis
  10. Forecasting in Nonstationary Environments: What Works and What Doesn’t in Reduced-Form and Structural Models By Raffaella Giacomini; Barbara Rossi
  11. A Random Walk Test for Functional Time Series By Nicola Mingotti; Rosa E. Lillo; Juan Romo
  12. Prior selection for panel vector autoregressions By Korobilis, Dimitris

  1. By: António Alberto Santos (Faculty of Economics, University of Coimbra and GEMF, Portugal)
    Abstract: In this paper, we compute realized volatility measures from intraday data that are not equally spaced in time, with the aim of comparing them to the volatility estimates obtained from the stochastic volatility model, which uses data sampled at equal time intervals. It is well known that volatility is time-varying and not directly observable, and the stochastic volatility model is among the most flexible models for capturing the evolution of return volatility. It is estimated here from high-frequency observations, meaning daily observations obtained at equal time intervals. Can this be made compatible with ultra-high-frequency data and realized volatility measures? Can the two approaches deliver compatible measures of volatility? These questions are the object of this paper.
    Keywords: Bayesian estimation, Financial returns, Integrated volatility, Intraday data, Markov chain Monte Carlo, Realized volatility, Stochastic volatility.
    JEL: C11 C15 C53 G17
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:gmf:wpaper:2015-10.&r=ets
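    Code sketch: A minimal Python illustration of the realized-volatility side of the comparison, computing realized variance as the sum of squared intraday log returns from irregularly spaced observations. The timestamps and prices are hypothetical, and this is only a sketch of the standard estimator, not the authors' full procedure.
      import numpy as np
      import pandas as pd

      def realized_variance(log_prices: pd.Series) -> float:
          """Sum of squared intraday log returns; no regular sampling grid is required."""
          returns = log_prices.sort_index().diff().dropna()
          return float((returns ** 2).sum())

      # Hypothetical irregularly spaced intraday timestamps within one trading day.
      idx = pd.to_datetime(["2015-04-01 09:30:00", "2015-04-01 09:31:40",
                            "2015-04-01 09:35:05", "2015-04-01 09:41:12"])
      prices = pd.Series(np.log([100.0, 100.2, 99.9, 100.1]), index=idx)
      print(realized_variance(prices))  # one day's realized variance estimate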
  2. By: Peter Reinhard Hansen (European University Institute and CREATES)
    Abstract: We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful for the analysis of time series that are confined to a grid, such as financial high frequency data.
    Keywords: Markov Chain; Martingale; Beveridge-Nelson Decomposition
    JEL: C10 C22 C58
    Date: 2015–04–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2015-18&r=ets
  3. By: Peter Reinhard Hansen (European University Institute and CREATES); Guillaume Horel (Serenitas Credit L.p.); Asger Lunde (Aarhus University and CREATES); Ilya Archakov (European University Institute)
    Abstract: We introduce a multivariate estimator of financial volatility that is based on the theory of Markov chains. The Markov chain framework takes advantage of the discreteness of high-frequency returns. We study the finite sample properties of the estimator in a simulation study and apply it to high-frequency commodity prices.
    Keywords: Markov chain, Multivariate Volatility, Quadratic Variation, Integrated Variance, Realized Variance, High Frequency Data
    JEL: C10 C22 C80
    Date: 2015–03–30
    URL: http://d.repec.org/n?u=RePEc:aah:create:2015-19&r=ets
  4. By: Bialowolski, Piotr; Kuszewski, Tomasz; Witkowski, Bartosz
    Abstract: The main goal of the article is to investigate the forecasting quality of two approaches to modelling the main macroeconomic variables without a priori assumptions about causality, and to generate forecasts without additional assumptions about the regressors. Using tendency survey data, the authors develop a methodology based on Bayesian averaging of classical estimates (BACE) and also construct dynamic factor models (DFM). Within the BACE framework they apply two different methods of regressor selection: frequentist model averaging (FMA) and Bayesian model averaging (BMA). Because their models yield multiple forecasts for each period, the authors subsequently employ several approaches to combining forecasts. The results are assessed with in-sample and out-of-sample prediction errors. Although the results do not differ substantially, the best performance is observed for the Bayesian models with the frequentist approach. Their analysis, conducted for the Polish economy, also shows that the unemployment rate is forecast with the highest precision, followed by the rate of GDP growth and the CPI. It can be concluded that although the methods are atheoretical, they provide forecast accuracy that is not inferior to that of structural models. An additional advantage of the approach is that the forecasting procedure can largely be automated, significantly reducing the influence of subjective decisions made in the forecasting process.
    Keywords: Bayesian averaging of classical estimates, dynamic factor models, tendency survey data, forecasting
    JEL: C10 C38 C83 E32 E37
    Date: 2015
    URL: http://d.repec.org/n?u=RePEc:zbw:ifwedp:201528&r=ets
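    Code sketch: The BACE machinery is more involved than can be shown here; the Python sketch below only illustrates the generic idea of averaging OLS forecasts across regressor subsets with BIC-based weights, for a small number of candidate regressors. The function name and weighting scheme are illustrative assumptions, not the authors' implementation.
      from itertools import combinations
      import numpy as np

      def bic_weighted_forecast(X, y, x_new):
          """Fit OLS on every regressor subset and weight the forecasts by exp(-BIC/2)."""
          n, k = X.shape
          forecasts, bics = [], []
          for size in range(1, k + 1):
              for cols in combinations(range(k), size):
                  Z = np.column_stack([np.ones(n), X[:, cols]])
                  beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
                  resid = y - Z @ beta
                  bics.append(n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n))
                  forecasts.append(np.r_[1.0, x_new[list(cols)]] @ beta)
          w = np.exp(-0.5 * (np.array(bics) - min(bics)))
          return float(np.array(forecasts) @ (w / w.sum()))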
  5. By: Wei Qian; Craig A. Rolling; Gang Cheng; Yuhong Yang
    Abstract: It is often reported in the forecast combination literature that a simple average of candidate forecasts is more robust than sophisticated combining methods. This phenomenon is usually referred to as the "forecast combination puzzle". Motivated by this puzzle, we explore its possible explanations, including estimation error, invalid weighting formulas and model screening. We show that the existing understanding of the puzzle should be complemented by the distinction between two forecast combination scenarios: combining for adaptation and combining for improvement. Applying combining methods without regard to the underlying scenario can itself cause the puzzle. Based on this new understanding, both simulations and real data evaluations are conducted to illustrate the causes of the puzzle. We further propose a multi-level AFTER strategy that can integrate the strengths of different combining methods and adapt intelligently to the underlying scenario. In particular, by treating the simple average as a candidate forecast, the proposed strategy is shown to avoid the heavy cost of estimation error and, to a large extent, solve the forecast combination puzzle.
    Date: 2015–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1505.00475&r=ets
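    Code sketch: AFTER itself is a sequential, adaptive weighting scheme; the Python sketch below illustrates only the narrower point of treating the simple average as an additional candidate before applying a performance-based combination (here inverse in-sample MSE weights, used purely as a simplified stand-in). Names and inputs are hypothetical.
      import numpy as np

      def combine_with_simple_average(forecasts, actuals, new_forecasts):
          """forecasts: T x K past forecasts; actuals: length-T outcomes; new_forecasts: length-K."""
          pool = np.column_stack([forecasts, forecasts.mean(axis=1)])   # append the simple average as a candidate
          new_pool = np.append(new_forecasts, new_forecasts.mean())
          mse = ((pool - actuals[:, None]) ** 2).mean(axis=0)
          weights = (1.0 / mse) / (1.0 / mse).sum()                     # inverse-MSE performance weights
          return float(new_pool @ weights)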
  6. By: Constantin Burgi (The George Washington University)
    Abstract: The forecast combination literature offers optimal combination methods; however, empirical studies have shown that the simple average is notoriously difficult to improve upon. This paper introduces a novel way to choose a subset of forecasters who may have specialized knowledge, in order to improve upon the simple average over all forecasters in the SPF. In particular, taking the average of forecasters that recently beat the simple average more often than a calibrated threshold of 52.5% of the time can outperform the simple average, in a statistically significant way, for 10-year Treasury bond yields, CPI inflation and unemployment at some horizons.
    Keywords: Forecast combination; Forecast evaluation; Multiple model comparisons; Real-time data; Survey of Professional Forecasters
    JEL: C22 C52 C53
    Date: 2015–03
    URL: http://d.repec.org/n?u=RePEc:gwc:wpaper:2015-001&r=ets
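    Code sketch: A minimal Python sketch of the selection rule described in the abstract: keep only forecasters whose recent record of beating the simple average exceeds the 52.5% threshold, then average their current forecasts. Array names are hypothetical and a balanced panel with no missing responses is assumed for simplicity.
      import numpy as np

      def subset_average(past_forecasts, actuals, current, threshold=0.525):
          """past_forecasts: T x K individual forecasts; actuals: length T; current: length K."""
          simple_avg = past_forecasts.mean(axis=1)
          beat = np.abs(past_forecasts - actuals[:, None]) < np.abs(simple_avg - actuals)[:, None]
          hit_rate = beat.mean(axis=0)              # share of periods each forecaster beat the simple average
          keep = hit_rate > threshold
          return current[keep].mean() if keep.any() else current.mean()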
  7. By: Robert F. Phillips (The George Washington University)
    Abstract: This paper establishes the almost sure convergence and asymptotic normality of quasi maximum-likelihood (QML) estimators of a dynamic panel data model when the time series for each cross section is short. The QML estimators are robust with respect to initial conditions and misspecification of the log-likelihood, and results are provided for a general specification of the error variance-covariance matrix. The paper also provides procedures for computing QML estimates that improve on computational methods previously recommended in the literature. Moreover, it compares the finite sample performance of several QML estimators, the differenced GMM estimator, and the system GMM estimator.
    Keywords: random effects; fixed effects; differenced QML; augmented dynamic panel data model
    JEL: C23
    Date: 2014–09
    URL: http://d.repec.org/n?u=RePEc:gwc:wpaper:2014-006&r=ets
  8. By: Jakub Nowotarski; Bidong Liu; Rafal Weron; Tao Hong
    Abstract: Although combining forecasts is well known to be an effective approach to improving forecast accuracy, the literature and case studies on combining load forecasts are very limited. In this paper, we investigate the performance of combining so-called sister load forecasts with eight methods: three variants of arithmetic averaging, four regression-based methods and one performance-based method. Through a comprehensive analysis of two case studies developed from public data (Global Energy Forecasting Competition 2014 and ISO New England), we demonstrate that combining sister forecasts significantly outperforms the benchmark methods in terms of forecasting accuracy measured by Mean Absolute Percentage Error. Given its power to improve the accuracy of individual forecasts and the ease with which sister forecasts can be generated, combining sister load forecasts is of high academic and practical value for researchers and practitioners.
    Keywords: Electric load forecasting; Forecast combination; Sister forecast
    JEL: C22 C32 C53 Q47
    Date: 2015–05–03
    URL: http://d.repec.org/n?u=RePEc:wuu:wpaper:hsc1505&r=ets
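    Code sketch: A minimal Python illustration of the simplest of the combination schemes mentioned above, the arithmetic average of sister forecasts, evaluated with MAPE. The load figures are hypothetical.
      import numpy as np

      def mape(actual, forecast):
          return 100.0 * np.mean(np.abs((actual - forecast) / actual))

      # Hypothetical hourly loads and three sister forecasts (same model family, different specifications).
      actual = np.array([310.0, 305.0, 298.0, 320.0])
      sisters = np.array([[312.0, 300.0, 301.0, 318.0],
                          [308.0, 307.0, 295.0, 325.0],
                          [315.0, 303.0, 299.0, 317.0]])
      combined = sisters.mean(axis=0)               # arithmetic average of the sister forecasts
      print(mape(actual, combined), [mape(actual, s) for s in sisters])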
  9. By: G. Papadopoulos; D. Kugiumtzis
    Abstract: A new method is proposed to compute connectivity measures on multivariate time series with gaps. Rather than removing or filling the gaps, the rows of the joint data matrix containing empty entries are removed and the calculations are done on the remaining matrix. The method, called measure adapted gap removal (MAGR), can be applied to any connectivity measure that uses a joint data matrix, such as cross correlation, cross mutual information and transfer entropy. Using these three measures, MAGR is compared favorably to a number of known gap-filling techniques, as well as to gap closure. The superiority of MAGR is illustrated on time series from synthetic systems and on financial time series.
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1505.00003&r=ets
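    Code sketch: The core of MAGR as described in the abstract, shown in Python for lag-zero cross correlation: rows of the joint data matrix that contain empty entries are dropped and the measure is computed on the remainder. For lagged measures such as transfer entropy the joint matrix would also contain lagged copies of the series; that extension is omitted here, and the encoding of gaps as NaN is an assumption.
      import numpy as np

      def magr_cross_correlation(x, y):
          """Cross correlation after measure adapted gap removal; gaps are encoded as NaN."""
          joint = np.column_stack([x, y])
          keep = ~np.isnan(joint).any(axis=1)       # keep only rows without empty entries
          return np.corrcoef(joint[keep].T)[0, 1]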
  10. By: Raffaella Giacomini; Barbara Rossi
    Abstract: This review provides an overview of forecasting methods that can help researchers forecast in the presence of non-stationarities caused by instabilities. The emphasis of the review is both theoretical and applied, and several examples of interest to economists are provided. We show that modeling instabilities can help forecasting, but that this depends on how the instabilities are modeled. We also show how to robustify a model against instabilities.
    Keywords: forecasting, instabilities, structural breaks
    Date: 2014–12
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:819&r=ets
  11. By: Nicola Mingotti; Rosa E. Lillo; Juan Romo
    Abstract: In this paper we introduce a random walk test for functional autoregressive processes of order one. The test is nonparametric, based on the bootstrap and functional principal components. The power of the test is shown through an extensive Monte Carlo simulation. We apply the test to two real datasets: Bitcoin prices and electrical energy consumption in France.
    Keywords: Autoregressive Process, FAR(1), Unit root, Bootstrap, Computational Statistics, Hypothesis test, Principal Components
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1506&r=ets
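    Code sketch: The authors' test works on functional principal component scores of a FAR(1) process; the Python sketch below shrinks the idea to a scalar AR(1), bootstrapping the least-squares coefficient under the random-walk null. It is a drastic simplification for illustration only, not the procedure of the paper.
      import numpy as np

      def random_walk_bootstrap_test(x, n_boot=999, seed=0):
          rng = np.random.default_rng(seed)
          rho_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])      # least-squares AR(1) coefficient
          increments = np.diff(x)                             # increments under the random-walk null
          stats = np.empty(n_boot)
          for b in range(n_boot):
              path = np.cumsum(rng.choice(increments, size=len(x)))   # resampled random-walk path
              stats[b] = (path[:-1] @ path[1:]) / (path[:-1] @ path[:-1])
          return rho_hat, float(np.mean(stats <= rho_hat))    # small p-value -> evidence against a random walk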
  12. By: Korobilis, Dimitris
    Abstract: There is a vast literature that specifies Bayesian shrinkage priors for vector autoregressions (VARs) of possibly large dimensions. In this paper I argue that many of these priors are not appropriate for multi-country settings, which motivates me to develop priors for panel VARs (PVARs). The parametric and semi-parametric priors I suggest not only perform valuable shrinkage in large dimensions, but also allow for soft clustering of variables or countries which are homogeneous. I discuss the implications of these new priors for modelling interdependencies and heterogeneities among different countries in a panel VAR setting. Monte Carlo evidence and an empirical forecasting exercise show clear and important gains of the new priors compared to existing popular priors for VARs and PVARs.
    Keywords: Bayesian model selection; shrinkage; spike and slab priors; forecasting; large vector autoregression
    JEL: C11 C32 C33 C52
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:64143&r=ets

This nep-ets issue is ©2015 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.