nep-ets New Economics Papers
on Econometric Time Series
Issue of 2012‒03‒28
eighteen papers chosen by
Yong Yin
SUNY at Buffalo

  1. Adaptive Forecasting in the Presence of Recent and Ongoing Structural Change By Liudas Giraitis; George Kapetanios; Simon Price
  2. Pitfalls in Backtesting Historical Simulation VaR Models By Juan Carlos Escanciano; Pei Pei
  3. A NEW APPROACH FOR EVALUATING ECONOMIC FORECASTS By Tara M. Sinclair; H.O. Stekler; Warren Carnow
  4. Estimating Idiosyncratic Volatility and Its Effects on a Cross-Section of Returns By Serguey Khovansky; Oleksandr Zhylyevskyy
  5. TESTING FOR SKEWNESS IN AR CONDITIONAL VOLATILITY MODELS FOR FINANCIAL RETURN SERIES By Mantalos, Panagiotis; Karagrigoriou, Alex
  6. Dynamic Functional Data Analysis with Nonparametric State Space Models. By Márcio Laurini
  7. A Hybrid Data Cloning Maximum Likelihood Estimator for Stochastic Volatility Models By Márcio Laurini
  8. Extracting non-linear signals from several economic indicators By Maximo Camacho; Gabriel Perez-Quiros; Pilar Poncela
  9. Haavelmo's Probability Approach and the Cointegrated VAR By Katarina Juselius
  10. Estimation in Non-Linear Non-Gaussian State Space Models with Precision-Based Methods By Joshua Chan; Rodney Strachan
  11. Pricing Central Tendency in Volatility By Stanislav Khrapov
  12. On detection of volatility spillovers in simultaneously open stock markets By Kohonen, Anssi
  13. Prior Selection for Vector Autoregressions By Giannone, Domenico; Lenza, Michele; Primiceri, Giorgio E
  14. Common Drifting Volatility in Large Bayesian VARs By Carriero, Andrea; Clark, Todd; Marcellino, Massimiliano
  15. Markov-switching dynamic factor models in real time By Camacho, Maximo; Pérez-Quirós, Gabriel; Poncela, Pilar
  16. U-MIDAS: MIDAS regressions with unrestricted lag polynomials By Foroni, Claudia; Marcellino, Massimiliano; Schumacher, Christian
  17. Estimating the number of mean shifts under long memory By Sibbertsen, Philipp; Willert, Juliane
  18. Distribution Theory for the Studentized Mean for Long, Short, and Negative Memory Time Series By McElroy, Tucker S; Politis, D N

  1. By: Liudas Giraitis (Queen Mary, University of London); George Kapetanios (Queen Mary, University of London); Simon Price (Bank of England and City University)
    Abstract: We consider time series forecasting in the presence of ongoing structural change where both the time series dependence and the nature of the structural change are unknown. Methods that downweight older data, such as rolling regressions, forecast averaging over different windows and exponentially weighted moving averages, known to be robust to historical structural change, are found to be also useful in the presence of ongoing structural change in the forecast period. A crucial issue is how to select the degree of downweighting, usually defined by an arbitrary tuning parameter. We make this choice data dependent by minimizing forecast mean square error, and provide a detailed theoretical analysis of our proposal. Monte Carlo results illustrate the methods. We examine their performance on 191 UK and US macro series. Forecasts using data-based tuning of the data discount rate are shown to perform well.
    Keywords: Recent and ongoing structural change, Forecast combination, Robust forecasts
    JEL: C10 C59
    Date: 2012–03
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp691&r=ets
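The downweighting idea in this abstract can be illustrated with an exponentially weighted moving average whose discount factor is tuned by in-sample one-step forecast MSE. This is a minimal sketch of the general approach, not the authors' full procedure; the function names and the grid of candidate discount factors are illustrative choices.

```python
import numpy as np

def ewma_forecast(y, lam):
    """One-step-ahead forecasts from an exponentially weighted moving
    average with discount factor lam (older data downweighted geometrically)."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = lam * f[t - 1] + (1.0 - lam) * y[t - 1]
    return f

def tune_discount(y, grid=np.linspace(0.5, 0.99, 50)):
    """Make the tuning parameter data dependent: pick the discount factor
    minimizing the in-sample one-step-ahead forecast MSE."""
    mses = [np.mean((y[1:] - ewma_forecast(y, lam)[1:]) ** 2) for lam in grid]
    return float(grid[int(np.argmin(mses))])
```

A series with a recent mean shift will typically push the tuned discount factor toward heavier downweighting of old observations.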
  2. By: Juan Carlos Escanciano (Indiana University); Pei Pei (Indiana University and Chinese Academy of Finance and Development, Central University of Finance and Economics)
    Abstract: Historical Simulation (HS) and its variant, the Filtered Historical Simulation (FHS), are the most widely used Value-at-Risk forecast methods at commercial banks. These forecast methods are traditionally evaluated by means of the unconditional backtest. This paper formally shows that the unconditional backtest is always inconsistent for backtesting HS and FHS models, with a power function that can be even smaller than the nominal level in large samples. Our findings have fundamental implications in the determination of market risk capital requirements, and also explain Monte Carlo and empirical findings in previous studies. We also propose a data-driven weighted backtest with good power properties to evaluate HS and FHS forecasts. Finally, our theoretical findings are confirmed in a Monte Carlo simulation study and an empirical application with three U.S. stocks. The empirical application shows that multiplication factors computed under the current regulatory framework are downward biased, as they inherit the inconsistency of the unconditional backtest.
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:inu:caeprp:2012-003&r=ets
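The unconditional backtest discussed here is conventionally implemented as the Kupiec proportion-of-failures likelihood-ratio test. The sketch below shows that generic test, not the paper's proposed weighted backtest; the function name is illustrative.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(hits, alpha):
    """Kupiec proportion-of-failures LR test for unconditional coverage.
    hits is a 0/1 array of VaR violations, alpha the target violation rate.
    Returns (LR statistic, asymptotic chi-squared(1) p-value)."""
    n, x = len(hits), int(np.sum(hits))
    pi = x / n  # observed violation rate

    def loglik(p):
        # Bernoulli log-likelihood, with the 0*log(0) = 0 convention
        a = x * np.log(p) if x > 0 else 0.0
        b = (n - x) * np.log(1.0 - p) if x < n else 0.0
        return a + b

    lr = -2.0 * (loglik(alpha) - loglik(pi))
    return lr, chi2.sf(lr, df=1)
```

When the observed violation rate equals the target rate exactly, the statistic is zero and the p-value is one.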
  3. By: Tara M. Sinclair (George Washington University); H.O. Stekler (George Washington University); Warren Carnow (George Washington University)
    Abstract: This paper presents a new approach to evaluating multiple economic forecasts. In the past, evaluations have focused on the forecasts of individual variables. However, many macroeconomic variables are forecast at the same time and are used together to describe the state of the economy. It is, therefore, appropriate to examine those forecasts jointly. This specific approach is based on the Sinclair and Stekler (forthcoming) analysis of data revisions. The main contributions of this paper are (1) the application of this technique to the Survey of Professional Forecasters (SPF) and (2) showing that there is a bias that is associated with the stages of the business cycle.
    Keywords: Federal Reserve, Forecast Evaluation, Survey of Professional Forecasters, Business Cycle, Mahalanobis Distance
    JEL: C5 E2 E3
    Date: 2012–03
    URL: http://d.repec.org/n?u=RePEc:gwc:wpaper:2012-004&r=ets
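A joint evaluation of the kind described, keyed to the Mahalanobis distance listed in the keywords, can be sketched as a test of whether the mean forecast-error vector across variables is jointly zero. This is an illustrative stand-in, not the paper's exact statistic; the function name is mine.

```python
import numpy as np

def mahalanobis_bias(errors):
    """Squared Mahalanobis distance of the mean forecast-error vector from
    zero, scaled by the sample size. errors has shape (T, k): T forecast
    periods, k jointly evaluated variables. Asymptotically ~ chi2(k) under
    the null of no joint bias."""
    e = np.asarray(errors, dtype=float)
    ebar = e.mean(axis=0)            # mean error per variable
    S = np.cov(e, rowvar=False)      # sample covariance of the errors
    return e.shape[0] * ebar @ np.linalg.solve(S, ebar)
```

Shifting all errors by a common bias inflates the statistic relative to mean-zero errors.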
  4. By: Serguey Khovansky; Oleksandr Zhylyevskyy
    Abstract: We apply a new econometric method -- the generalized method of moments under a common shock -- to estimate idiosyncratic volatility premium and average idiosyncratic stock volatility. In contrast to the popular two-pass estimation approach of Fama and MacBeth (1973), the method requires using only a cross-section of return observations. We apply it to cross-sections of weekly U.S. stock returns in January and October 2008 and find that during these months, the idiosyncratic volatility premium is nearly always negative and statistically significant. The results also indicate that the average idiosyncratic stock volatility increased by at least 50% between January and October.
    Keywords: Generalized method of moments; Idiosyncratic volatility; Cross-section of stock returns; Idiosyncratic volatility premium
    JEL: C21 C51 G12
    Date: 2012–01–31
    URL: http://d.repec.org/n?u=RePEc:isu:genres:34990&r=ets
  5. By: Mantalos, Panagiotis (Department of Business, Economics, Statistics and Informatics); Karagrigoriou, Alex (Department of Mathematics and Statistics, University of Cyprus)
    Abstract: In this paper a test procedure is proposed for the skewness in autoregressive conditional volatility models. The size and the power of the test are investigated through a series of Monte Carlo simulations with various models. Furthermore, applications with financial data are analyzed in order to explore the applicability and the capabilities of the proposed testing procedure.
    Keywords: ARCH/GARCH model; kurtosis; NoVaS; skewness
    JEL: C01 C12 C15
    Date: 2012–03–21
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2012_004&r=ets
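A baseline for the kind of skewness test being studied here is the classical asymptotic test based on sample skewness, which under normality is approximately N(0, 6/n). The sketch below is that textbook statistic applied to a plain series, not the paper's proposed procedure for conditional volatility residuals.

```python
import numpy as np

def skewness_stat(x):
    """Asymptotic z-statistic for zero skewness: under normality the sample
    skewness is approximately N(0, 6/n), so compare this to N(0, 1)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = (x - x.mean()) / x.std()
    g1 = np.mean(z ** 3)             # sample skewness coefficient
    return g1 / np.sqrt(6.0 / n)
```

An exactly symmetric sample yields a statistic of zero, while strongly right-skewed data (e.g. exponential draws) produce a large positive value.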
  6. By: Márcio Laurini (IBMEC Business School)
    Abstract: In this article we introduce a new methodology for modeling curves with a dynamic structure, using a non-parametric approach formulated as a state space model. The non-parametric approach is based on the use of penalized splines, represented as a dynamic mixed model. This formulation can capture the dynamic evolution of curves using a limited number of latent factors, allowing an accurate fit with a limited number of parameters. We also present a new method to determine the optimal smoothing parameter through an adaptive procedure using a formulation analogous to a model of stochastic volatility. This methodology allows unifying different methodologies applied to data with a functional structure in finance. We present the advantages and limitations of this methodology through a simulation study and also by comparing its predictive performance with other parametric and non-parametric methods used in financial applications, using data from the term structure of interest rates.
    Keywords: Functional Data, Penalized Splines, MCMC, Bayesian non-parametric methods
    JEL: C11 C15 G12
    Date: 2012–03–16
    URL: http://d.repec.org/n?u=RePEc:ibr:dpaper:2012-01&r=ets
  7. By: Márcio Laurini (IBMEC Business School)
    Abstract: In this paper we analyze a maximum likelihood estimator using data cloning for stochastic volatility models. This estimator is constructed using a hybrid methodology based on Integrated Nested Laplace Approximations to calculate analytically the auxiliary Bayesian estimators with great accuracy and computational efficiency, without requiring the use of simulation methods such as Markov Chain Monte Carlo. We analyze the performance of this estimator compared to methods based on Monte Carlo simulations (Simulated Maximum Likelihood, MCMC Maximum Likelihood) and approximate maximum likelihood estimators using Laplace Approximations. The results indicate that this data cloning methodology achieves superior results over methods based on MCMC, and results comparable to those obtained by the Simulated Maximum Likelihood estimator.
    Keywords: Stochastic Volatility, Data Cloning, Maximum Likelihood, MCMC, Laplace Approximations
    JEL: C53 E43 G17
    Date: 2012–03–16
    URL: http://d.repec.org/n?u=RePEc:ibr:dpaper:2012-02&r=ets
  8. By: Maximo Camacho (Universidad de Murcia); Gabriel Perez-Quiros (Banco de España); Pilar Poncela (Universidad Autónoma de Madrid)
    Abstract: We develop a twofold analysis of how the information provided by several economic indicators can be used in Markov-switching dynamic factor models to identify the business cycle turning points. First, we compare the performance of a fully non-linear multivariate specification (one-step approach) with the “shortcut” of using a linear factor model to obtain a coincident indicator which is then used to compute the Markov-switching probabilities (two-step approach). Second, we examine the role of increasing the number of indicators. Our results suggest that one step is generally preferred to two steps, although its marginal gains diminish as the quality of the indicators increases and as more indicators are used to identify the non-linear signal. Using the four constituent series of the Stock-Watson coincident index, we illustrate these results for US data.
    Keywords: Business cycles, output growth, time series
    JEL: E32 C22 E27
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:bde:wpaper:1202&r=ets
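The Markov-switching probabilities mentioned in the two-step approach are computed with the Hamilton filter. Below is a minimal two-regime sketch for a switching-mean model; in the paper the filter is applied to a coincident indicator or embedded in the factor model, and all parameter values and names here are illustrative.

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities for a two-state Markov-switching mean
    model: y_t ~ N(mu[s_t], sigma^2), P[i, j] = Pr(s_t = j | s_{t-1} = i)."""
    xi = np.full(2, 0.5)                         # flat initial state probabilities
    out = np.empty((len(y), 2))
    for t, yt in enumerate(y):
        pred = P.T @ xi                          # predicted state probabilities
        lik = norm.pdf(yt, loc=mu, scale=sigma)  # regime-conditional densities
        xi = pred * lik
        xi /= xi.sum()                           # Bayes update and normalization
        out[t] = xi
    return out
```

Fed observations that sit near one regime's mean, the filtered probability of that regime rises toward one.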
  9. By: Katarina Juselius (Department of Economics)
    Abstract: Some key econometric concepts and problems addressed by Trygve Haavelmo and Ragnar Frisch are discussed within the general framework of a cointegrated VAR. The focus is on problems typical of time series data such as multicollinearity, spurious correlation and regression results, time dependent residuals, normalization, reduced rank, model selection, missing variables, simultaneity, autonomy and identification. Specifically the paper discusses (1) the conditions under which the VAR model represents a full probability formulation of a sample of time-series observations, (2) the plausibility of the multivariate normality assumption underlying the VAR, (3) cointegration as a solution to the problem of spurious correlation and multicollinearity when data contain deterministic and stochastic trends, (4) the existence of a universe, (5) the association between Frisch's confluence analysis and cointegrated VAR analysis, (6) simultaneity and identification when data are nonstationary, (7) conditions under which identified cointegration relations can be considered structural or autonomous, and finally (8) a formulation of a design of experiment for passive observations based on theory consistent CVAR scenarios illustrated with a monetary model for inflation.
    Keywords: Haavelmo, CVAR, autonomy, identification, passive observations
    JEL: B16 B31 B41 C32 C82
    Date: 2012–03–01
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:1201&r=ets
  10. By: Joshua Chan; Rodney Strachan
    Abstract: In recent years state space models, particularly the linear Gaussian version, have become the standard framework for analyzing macroeconomic and financial data. However, many theoretically motivated models imply non-linear or non-Gaussian specifications – or both. Existing methods for estimating such models are computationally intensive, and often cannot be applied to models with more than a few states. Building upon recent developments in precision-based algorithms, we propose a general approach to estimating high-dimensional non-linear non-Gaussian state space models. The baseline algorithm approximates the conditional distribution of the states by a multivariate Gaussian or t density, which is then used for posterior simulation. We further develop this baseline algorithm to construct more sophisticated samplers with attractive properties: one based on the accept-reject Metropolis-Hastings (ARMH) algorithm, and an adaptive collapsed sampler inspired by the cross-entropy method. To illustrate the proposed approach, we investigate the effect of the zero lower bound on interest rates on the monetary transmission mechanism.
    Date: 2012–03
    URL: http://d.repec.org/n?u=RePEc:acb:camaaa:2012-13&r=ets
  11. By: Stanislav Khrapov (New Economic School)
    Abstract: It is widely accepted that there is a risk of fluctuating volatility. There is some evidence, analogous to the long-term consumption risk literature and to central tendency in interest rates, that there exists a slowly varying component in volatility. The volatility literature concentrates on the investigation of two-factor volatility processes, with one factor being very persistent. I propose a different parametrization of the volatility process that includes this persistent component as a stochastic central tendency. The reparametrization is observationally equivalent but has a compelling economic interpretation. I estimate the historical and risk-neutral parameters of the model jointly using GMM with data on realized volatility and the VIX volatility index, treating the central tendency as completely unobservable. The main empirical result of the paper is that, on average, the volatility premium is mainly due to the premium on the highly persistent shocks of the central tendency.
    Keywords: stochastic volatility; central tendency; volatility risk premium; GMM
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:cfr:cefirw:w0168&r=ets
  12. By: Kohonen, Anssi
    Abstract: Empirical research confirms the existence of volatility spillovers across national stock markets. However, the models in use are mostly statistical ones. Much less is known about the actual transmission mechanisms; theoretical literature is scarce, and so is empirical work trying to estimate specific theoretical models. Some tests founded on economic theory have been developed for non-overlapping markets; this institutional setup provides a way around the problems of estimating a system of simultaneous equations. However, volatility spillovers across overlapping markets might be as important a phenomenon as those across non-overlapping markets. Building on recent advances in the econometrics of identifying structural vector autoregressive models, this paper proposes a way to estimate an existing signal-extraction model that explains volatility spillovers across simultaneously open stock markets. Furthermore, a new empirical test for the detection of such spillovers is derived. As an empirical application, the theoretical model is fitted to daily data of eurozone stock markets in the years 2010--2011. Evidence of volatility spillovers across the countries is found.
    Keywords: Volatility transmission; financial contagion; SVAR identification; hypothesis testing; stock markets; euro debt crisis
    JEL: G14 C12 G15 C30 D82
    Date: 2012–03–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:37504&r=ets
  13. By: Giannone, Domenico; Lenza, Michele; Primiceri, Giorgio E
    Abstract: Vector autoregressions (VARs) are flexible time series models that can capture complex dynamic interrelationships among macroeconomic variables. However, their dense parameterization leads to unstable inference and inaccurate out-of-sample forecasts, particularly for models with many variables. A potential solution to this problem is to use informative priors, in order to shrink the richly parameterized unrestricted model towards a parsimonious naïve benchmark, and thus reduce estimation uncertainty. This paper studies the optimal choice of the informativeness of these priors, which we treat as additional parameters, in the spirit of hierarchical modeling. This approach is theoretically grounded, easy to implement, and greatly reduces the number and importance of subjective choices in the setting of the prior. Moreover, it performs very well both in terms of out-of-sample forecasting, and accuracy in the estimation of impulse response functions.
    Keywords: Bayesian Methods; Forecasting; Hierarchical Modeling; Impulse Responses; Marginal Likelihood
    JEL: C11 C32 C52 E37
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:8755&r=ets
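The paper's idea of treating prior informativeness as an additional parameter can be illustrated in a much simpler conjugate setting: pick the tightness of a normal shrinkage prior in a linear regression by maximizing the marginal likelihood. This is a toy stand-in for the hierarchical BVAR machinery; the known-noise-variance assumption, grid, and function names are mine.

```python
import numpy as np

def log_marglik(y, X, tau, sigma2=1.0):
    """Log marginal likelihood of y = X beta + eps with prior
    beta ~ N(0, tau^2 I) and known noise variance sigma2:
    marginally, y ~ N(0, sigma2 I + tau^2 X X')."""
    n = len(y)
    V = sigma2 * np.eye(n) + tau ** 2 * (X @ X.T)
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + y @ np.linalg.solve(V, y))

def tune_shrinkage(y, X, grid):
    """Hierarchical spirit of the paper: treat the prior tightness as a
    parameter and select the value maximizing the marginal likelihood."""
    return float(grid[int(np.argmax([log_marglik(y, X, t) for t in grid]))])
```

With a strong signal in the data, an overly tight (dogmatic) prior is rejected by the marginal likelihood in favor of a looser one.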
  14. By: Carriero, Andrea; Clark, Todd; Marcellino, Massimiliano
    Abstract: The estimation of large Vector Autoregressions with stochastic volatility using standard methods is computationally very demanding. In this paper we propose to model conditional volatilities as driven by a single common unobserved factor. This is justified by the observation that the pattern of estimated volatilities in empirical analyses is often very similar across variables. Using a combination of a standard natural conjugate prior for the VAR coefficients, and an independent prior on a common stochastic volatility factor, we derive the posterior densities for the parameters of the resulting BVAR with common stochastic volatility (BVAR-CSV). Under the chosen prior the conditional posterior of the VAR coefficients features a Kronecker structure that allows for fast estimation, even in a large system. Using US and UK data, we show that, compared to a model with constant volatilities, our proposed common volatility model significantly improves model fit and forecast accuracy. The gains are comparable to or as great as the gains achieved with a conventional stochastic volatility specification that allows independent volatility processes for each variable. But our common volatility specification greatly speeds computations.
    Keywords: Bayesian VARs; forecasting; prior specification; stochastic volatility
    JEL: C11 C13 C33 C53
    Date: 2012–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:8894&r=ets
  15. By: Camacho, Maximo; Pérez-Quirós, Gabriel; Poncela, Pilar
    Abstract: We extend the Markov-switching dynamic factor model to account for some of the specificities of the day-to-day monitoring of economic developments from macroeconomic indicators, such as ragged edges and mixed frequencies. We examine the theoretical benefits of this extension and corroborate the results through several Monte Carlo simulations. Finally, we assess its empirical reliability to compute real-time inferences of the US business cycle.
    Keywords: Business Cycles; Output Growth; Time Series
    JEL: C22 E27 E32
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:8866&r=ets
  16. By: Foroni, Claudia; Marcellino, Massimiliano; Schumacher, Christian
    Abstract: Mixed-data sampling (MIDAS) regressions allow one to estimate dynamic equations that explain a low-frequency variable by high-frequency variables and their lags. When the difference in sampling frequencies between the regressand and the regressors is large, distributed lag functions are typically employed to model dynamics while avoiding parameter proliferation. In macroeconomic applications, however, differences in sampling frequencies are often small. In such a case, it might not be necessary to employ distributed lag functions. In this paper, we discuss the pros and cons of unrestricted lag polynomials in MIDAS regressions. We derive unrestricted MIDAS regressions (U-MIDAS) from linear high-frequency models, discuss identification issues, and show that their parameters can be estimated by OLS. In Monte Carlo experiments, we compare U-MIDAS to MIDAS with functional distributed lags estimated by NLS. We show that U-MIDAS performs better than MIDAS for small differences in sampling frequencies. On the other hand, with large differences in sampling frequencies, distributed lag functions outperform unrestricted polynomials. The good performance of U-MIDAS for small differences in frequency is confirmed in an empirical application on nowcasting Euro area and US GDP using monthly indicators.
    Keywords: distributed lag polynomials; Mixed data sampling; nowcasting; time aggregation
    JEL: C53 E37
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:8828&r=ets
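Since U-MIDAS leaves the lag polynomial unrestricted, estimation reduces to OLS on the stacked high-frequency observations. A minimal sketch follows, with one quarter of monthly regressors per low-frequency observation and no autoregressive terms; the month-within-quarter data layout and the function name are assumptions of this illustration.

```python
import numpy as np

def umidas_ols(y, x, m):
    """U-MIDAS regression: each of the m high-frequency observations within
    a low-frequency period gets its own unrestricted coefficient, so the
    equation is estimated by plain OLS. Returns [intercept, b_1, ..., b_m]."""
    T = len(y)
    X = np.column_stack([np.ones(T), np.asarray(x).reshape(T, m)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

With noiseless data generated from known monthly coefficients, OLS recovers them exactly, which makes the identification point easy to check.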
  17. By: Sibbertsen, Philipp; Willert, Juliane
    Abstract: Detecting the number of breaks in the mean can be challenging in the long memory framework. Tree-based procedures can be applied to time series when the location and number of mean shifts are unknown, and they estimate the breaks consistently, though with possible overfitting. Information criteria can be used to prune the redundant breaks. An alteration of the BIC, the LWZ, is presented to overcome long-range dependence issues. A Monte Carlo study shows the superior performance of the LWZ over alternative pruning criteria such as the BIC or LIC.
    Keywords: long memory, mean shift, regression tree, ART, LWZ, LIC.
    JEL: C14 C22
    Date: 2012–03
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-496&r=ets
  18. By: McElroy, Tucker S; Politis, D N
    Abstract: We consider the problem of estimating the variance of the partial sums of a stationary time series that has either long memory, short memory, negative/intermediate memory, or is the first-difference of such a process. The rate of growth of this variance depends crucially on the type of memory, and we present results on the behavior of tapered sums of sample autocovariances in this context when the bandwidth vanishes asymptotically. We also present asymptotic results for the case that the bandwidth is a fixed proportion of sample size, extending known results to the case of flat-top tapers. We adopt the fixed proportion bandwidth perspective in our empirical section, presenting two methods for estimating the limiting critical values: the subsampling method and a plug-in approach. Extensive simulation studies compare the size and power of both approaches as applied to hypothesis testing for the mean. Both methods perform well (although the subsampling method appears to be better sized) and provide a viable framework for conducting inference for the mean. In summary, we supply a unified asymptotic theory that covers all different types of memory under a single umbrella.
    Keywords: kernel, lag-windows, overdifferencing, spectral estimation, subsampling, tapers, unit-root problem, Econometrics
    Date: 2011–09–01
    URL: http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt0dr145dt&r=ets
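The tapered sums of sample autocovariances discussed above correspond, in the simplest case, to a Bartlett-windowed long-run variance estimate with bandwidth a fixed fraction of the sample size. The sketch below uses the standard Newey-West (Bartlett) weights, not the paper's flat-top tapers; the function name and bandwidth choice are illustrative.

```python
import numpy as np

def lrv_bartlett(x, b):
    """Long-run variance of the sample mean from Bartlett-tapered sample
    autocovariances, with bandwidth M = b * n a fixed proportion of n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    M = int(b * n)
    s = xc @ xc / n                          # lag-0 sample autocovariance
    for h in range(1, M + 1):
        gamma_h = xc[:n - h] @ xc[h:] / n    # lag-h sample autocovariance
        s += 2.0 * (1.0 - h / (M + 1)) * gamma_h
    return s
```

For positively autocorrelated data the long-run variance exceeds the ordinary sample variance, which is exactly why studentizing the mean needs this correction.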

This nep-ets issue is ©2012 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.