nep-ets New Economics Papers
on Econometric Time Series
Issue of 2015‒04‒25
29 papers chosen by
Yong Yin
SUNY at Buffalo

  1. Real-time forecasting with a MIDAS VAR By Mikosch, Heiner; Neuwirth, Stefan
  2. Fractional Cointegration Rank Estimation By Katarzyna Lasak; Carlos Velasco
  3. Maximum Likelihood Estimation for Correctly Specified Generalized Autoregressive Score Models: Feedback Effects, Contraction Conditions and Asymptotic Properties By Francisco Blasques; Siem Jan Koopman; André Lucas
  4. Inference on Co-integration Parameters in Heteroskedastic Vector Autoregressions By H. Peter Boswijk; Giuseppe Cavaliere; Anders Rahbek; A. M. Robert Taylor
  5. New HEAVY Models for Fat-Tailed Returns and Realized Covariance Kernels By Pawel Janus; André Lucas; Anne Opschoor
  6. On an Estimation Method for an Alternative Fractionally Cointegrated Model By Federico Carlini; Katarzyna Lasak
  7. The Forecast Combination Puzzle: A Simple Theoretical Explanation By Gerda Claeskens; Jan Magnus; Andrey Vasnev; Wendun Wang
  8. Asymmetric Realized Volatility Risk By David E. Allen; Michael McAleer; Marcel Scharth
  9. Interactions between Eurozone and US Booms and Busts: A Bayesian Panel Markov-switching VAR Model By Monica Billio; Roberto Casarin; Francesco Ravazzolo; Herman K. van Dijk
  10. A Test for the Portion of Bivariate Dependence in Multivariate Tail Risk By Carsten Bormann; Melanie Schienle; Julia Schaumburg
  11. Empirical Bayes Methods for Dynamic Factor Models By Siem Jan Koopman; Geert Mesters
  12. Asymmetry and Leverage in Conditional Volatility Models By Michael McAleer
  13. Low Frequency and Weighted Likelihood Solutions for Mixed Frequency Dynamic Factor Models By Francisco Blasques; Siem Jan Koopman; Max Mallee
  14. Frontiers in Time Series and Financial Econometrics: An Overview By Shiqing Ling; Michael McAleer; Howell Tong
  15. Stationarity and Ergodicity Regions for Score Driven Dynamic Correlation Models By Francisco Blasques; Andre Lucas; Erkki Silde
  16. Bayesian Forecasting of US Growth using Basic Time Varying Parameter Models and Expectations Data By Nalan Basturk; Pinar Ceyhan; Herman K. van Dijk
  17. In-Sample Bounds for Time-Varying Parameters of Observation Driven Models By Francisco Blasques; Siem Jan Koopman; Katarzyna Lasak; André Lucas
  18. Maximum Likelihood Estimation for Generalized Autoregressive Score Models By Francisco Blasques; Siem Jan Koopman; Andre Lucas
  19. Time Varying Transition Probabilities for Markov Regime Switching Models By Marco Bazzi; Francisco Blasques; Siem Jan Koopman; Andre Lucas
  20. Optimal Formulations for Nonlinear Autoregressive Processes By Francisco Blasques; Siem Jan Koopman; André Lucas
  21. Vector Autoregressions with Parsimoniously Time Varying Parameters and an Application to Monetary Policy By Laurent Callot; Johannes Tang Kristensen
  22. Forecasting Co-Volatilities via Factor Models with Asymmetry and Long Memory in Realized Covariance By Manabu Asai; Michael McAleer
  23. The Impact of Jumps and Leverage in Forecasting Co-Volatility By Manabu Asai; Michael McAleer
  24. Testing for Parameter Instability in Competing Modeling Frameworks By Francesco Calvori; Drew Creal; Siem Jan Koopman; Andre Lucas
  25. Joint Bayesian Analysis of Parameters and States in Nonlinear, Non-Gaussian State Space Models By István Barra; Lennart Hoogerheide; Siem Jan Koopman; André Lucas
  26. A New Bootstrap Test for the Validity of a Set of Marginal Models for Multiple Dependent Time Series: An Application to Risk Analysis By David Ardia; Lukasz Gatarek; Lennart F. Hoogerheide
  27. On the Invertibility of EGARCH(p,q) By Guillaume Gaetan Martinet; Michael McAleer
  28. Likelihood Ratio Test for Change in Persistence By Skrobotov, Anton
  29. Time-consistency of risk measures with GARCH volatilities and their estimation By Claudia Klüppelberg; Jianing Zhang

  1. By: Mikosch, Heiner (BOFIT); Neuwirth, Stefan (BOFIT)
    Abstract: This paper presents a MIDAS type mixed frequency VAR forecasting model. First, we propose a general and compact mixed frequency VAR framework using a stacked vector approach. Second, we integrate the mixed frequency VAR with a MIDAS type Almon lag polynomial scheme which is designed to reduce the parameter space while keeping models flexible. We show how to recast the resulting non-linear MIDAS type mixed frequency VAR into a linear equation system that can be easily estimated. A pseudo out-of-sample forecasting exercise with US real-time data shows that the mixed frequency VAR substantially improves predictive accuracy upon a standard VAR for different VAR specifications. Forecast errors for, e.g., GDP growth decrease by 30 to 60 percent for forecast horizons up to six months and by around 20 percent for a forecast horizon of one year.
    Keywords: Forecasting; mixed frequency data; MIDAS; VAR; real time
    JEL: C53 E27
    Date: 2015–04–13
    URL: http://d.repec.org/n?u=RePEc:hhs:bofitp:2015_013&r=ets
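The Almon lag scheme mentioned in the abstract can be illustrated with a short sketch: a low-order polynomial in the lag index generates the full set of high-frequency lag weights, shrinking the parameter space from `n_lags` coefficients to `len(theta)`. The function names and the unrestricted polynomial form below are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

def almon_weights(theta, n_lags):
    """Map a low-dimensional polynomial parameter vector theta to
    n_lags lag coefficients: w_j = sum_k theta[k] * j**k."""
    j = np.arange(n_lags)
    return sum(t * j**k for k, t in enumerate(theta))

def midas_term(x_high, theta, n_lags):
    """Collapse the most recent n_lags high-frequency observations
    into a single low-frequency regressor (w[0] weights the newest)."""
    w = almon_weights(theta, n_lags)
    return w @ x_high[-n_lags:][::-1]
```

With two polynomial parameters generating twelve monthly lag weights, a quarterly equation gains one regressor per monthly variable instead of twelve.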
  2. By: Katarzyna Lasak (VU University Amsterdam, the Netherlands); Carlos Velasco (Universidad Carlos III de Madrid, Spain)
    Abstract: Accepted for publication in the Journal of Business & Economic Statistics. We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists in estimating the parameters of the model under the null hypothesis of the cointegration rank r=1,2,…,p-1. This step provides consistent estimates of the order of fractional cointegration, the cointegration vectors, the speed of adjustment to the equilibrium parameters and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p-r common trends that are not cointegrated under the null. The order of fractional cointegration is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to adapt to the representation of the common trends estimated in the first step. The critical values of the proposed tests depend only on the number of common trends under the null, p-r, and on the interval of the orders of fractional cointegration b allowed in the estimation, but not on the order of fractional cointegration of already identified relationships. Hence this reduces the set of simulations required to approximate the critical values, making this procedure convenient for practical purposes. In a Monte Carlo study we analyze the finite sample properties of our procedure and compare with alternative methods. We finally apply these methods to study the term structure of interest rates.
    Keywords: Error correction model, Gaussian VAR model, Likelihood ratio tests, Maximum likelihood estimation
    JEL: C12 C15 C32
    Date: 2014–02–13
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140021&r=ets
  3. By: Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam, the Netherlands); André Lucas (VU University Amsterdam, the Netherlands, and Aarhus University, Denmark)
    Abstract: The strong consistency and asymptotic normality of the maximum likelihood estimator in observation-driven models usually requires the study of the model both as a filter for the time-varying parameter and as a data generating process (DGP) for observed data. The probabilistic properties of the filter can be substantially different from those of the DGP. This difference is particularly relevant for recently developed time varying parameter models. We establish new conditions under which the dynamic properties of the true time varying parameter as well as of its filtered counterpart are both well-behaved, and we only require the verification of one rather than two sets of conditions. In particular, we formulate conditions under which the (local) invertibility of the model follows directly from the stable behavior of the true time varying parameter. We use these results to prove the local strong consistency and asymptotic normality of the maximum likelihood estimator. To illustrate the results, we apply the theory to a number of empirically relevant models.
    Keywords: Observation-driven models, stochastic recurrence equations, contraction conditions, invertibility, stationarity, ergodicity, generalized autoregressive score models
    JEL: C13 C22 C12
    Date: 2014–06–20
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140074&r=ets
  4. By: H. Peter Boswijk (University of Amsterdam); Giuseppe Cavaliere (University of Bologna, Italy); Anders Rahbek (University of Copenhagen, Denmark, and CREATES); A. M. Robert Taylor (University of Essex, United Kingdom)
    Abstract: It is well established that the shocks driving many key macro-economic and financial variables display time-varying volatility. In this paper we consider estimation and hypothesis testing on the coefficients of the co-integrating relations and the adjustment coefficients in vector autoregressions driven by both conditional and unconditional heteroskedasticity of a quite general and unknown form in the shocks. We show that the conventional results in Johansen (1996) for the maximum likelihood estimators and associated likelihood ratio tests derived under homoskedasticity do not in general hold in the presence of heteroskedasticity. As a consequence, standard confidence intervals and tests of hypothesis on these coefficients are potentially unreliable. Solutions to this inference problem based on Wald tests (using a "sandwich" estimator of the variance matrix) and on the use of the wild bootstrap are discussed. These do not require the practitioner to specify a parametric model for volatility, or to assume that the pattern of volatility is common to, or independent across, the vector of series under analysis. We formally establish the conditions under which these methods are asymptotically valid. A Monte Carlo simulation study demonstrates that significant improvements in finite sample size can be obtained by the bootstrap over the corresponding asymptotic tests in both heteroskedastic and homoskedastic environments. An application to the term structure of interest rates in the US illustrates the difference between standard and bootstrap inferences regarding hypotheses on the co-integrating vectors and adjustment coefficients.
    Keywords: Co-integration, adjustment coefficients, (un)conditional heteroskedasticity, heteroskedasticity-robust inference, wild bootstrap
    JEL: C30 C32
    Date: 2013–11–19
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20130187&r=ets
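The wild bootstrap discussed above preserves heteroskedasticity by multiplying each fitted residual by an independent random sign rather than resampling residuals across time. A minimal sketch for a univariate AR(1), a deliberate simplification of the paper's vector autoregressive setting; function and variable names are illustrative.

```python
import numpy as np

def wild_bootstrap_ar1(y, n_boot=999, seed=0):
    """Wild bootstrap for an AR(1): each pseudo-sample multiplies the
    fitted residuals by independent random signs (Rademacher draws),
    which preserves any time-varying residual variance pattern."""
    rng = np.random.default_rng(seed)
    y0, y1 = y[:-1], y[1:]
    rho_hat = (y0 @ y1) / (y0 @ y0)           # OLS without intercept
    resid = y1 - rho_hat * y0
    draws = np.empty(n_boot)
    for b in range(n_boot):
        e_star = resid * rng.choice([-1.0, 1.0], size=resid.size)
        y_star = np.empty_like(y)
        y_star[0] = y[0]
        for t in range(1, y.size):            # rebuild the series recursively
            y_star[t] = rho_hat * y_star[t - 1] + e_star[t - 1]
        ys0, ys1 = y_star[:-1], y_star[1:]
        draws[b] = (ys0 @ ys1) / (ys0 @ ys0)  # re-estimate on pseudo-data
    return rho_hat, draws
```

The empirical distribution of `draws` around `rho_hat` then yields heteroskedasticity-robust critical values without any parametric volatility model.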
  5. By: Pawel Janus (UBS Global Asset Management, the Netherlands); André Lucas (VU University Amsterdam); Anne Opschoor (VU University Amsterdam, the Netherlands)
    Abstract: We develop a new model for the multivariate covariance matrix dynamics based on daily return observations and daily realized covariance matrix kernels based on intraday data. Both types of data may be fat-tailed. We account for this by assuming a matrix-F distribution for the realized kernels, and a multivariate Student’s t distribution for the returns. Using generalized autoregressive score dynamics for the unobserved true covariance matrix, our approach automatically corrects for the effect of outliers and incidentally large observations, both in returns and in covariances. Moreover, by an appropriate choice of scaling of the conditional score function we are able to retain a convenient matrix formulation for the dynamic updates of the covariance matrix. This makes the model highly computationally efficient. We show how the model performs in a controlled simulation setting as well as for empirical data. In our empirical application, we study daily returns and realized kernels from 15 equities over the period 2001-2012 and find that the new model statistically outperforms (recently developed) multivariate volatility models, both in-sample and out-of-sample. We also comment on the possibility to use composite likelihood methods for estimation if desired.
    Keywords: realized covariance matrices, heavy tails, (degenerate) matrix-F distribution, generalized autoregressive score (GAS) dynamics
    JEL: C32 C58
    Date: 2014–06–19
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140073&r=ets
  6. By: Federico Carlini (CREATES, Aarhus University, Denmark); Katarzyna Lasak (VU University Amsterdam)
    Abstract: In this paper we consider the Fractional Vector Error Correction model proposed in Avarucci (2007), which is characterized by a richer lag structure than models proposed in Granger (1986) and Johansen (2008, 2009). We discuss the identification issues of the model of Avarucci (2007), following the ideas in Carlini and Santucci de Magistris (2014) for the model of Johansen (2008, 2009). We propose a 4-step estimation procedure that is based on the switching algorithm employed in Carlini and Mosconi (2014) and the GLS procedure in Mosconi and Paruolo (2014). The proposed procedure provides estimates of the long run parameters of the fractionally cointegrated system that are consistent and unbiased, which we demonstrate by a Monte Carlo experiment.
    Keywords: Error correction model, Gaussian VAR model, Fractional Cointegration, Estimation algorithm, Maximum likelihood estimation, Switching Algorithm, Reduced Rank Regression
    JEL: C13 C32
    Date: 2014–05–01
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140052&r=ets
  7. By: Gerda Claeskens (KU Leuven, Belgium); Jan Magnus (VU University Amsterdam, the Netherlands); Andrey Vasnev (University of Sydney, Australia); Wendun Wang (Erasmus University, Rotterdam, the Netherlands)
    Abstract: This paper offers a theoretical explanation for the stylized fact that forecast combinations with estimated optimal weights often perform poorly in applications. The properties of the forecast combination are typically derived under the assumption that the weights are fixed, while in practice they need to be estimated. If the fact that the weights are random rather than fixed is taken into account during the optimality derivation, then the forecast combination will be biased (even when the original forecasts are unbiased) and its variance is larger than in the fixed-weights case. In particular, there is no guarantee that the 'optimal' forecast combination will be better than the equal-weights case or even improve on the original forecasts. We provide the underlying theory, some special cases and an application in the context of model selection.
    Keywords: forecast combination, optimal weights, model selection
    JEL: C53 C52
    Date: 2014–09–19
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140127&r=ets
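For k unbiased forecasts with error covariance Sigma, the variance-minimizing fixed weights are w = Sigma^{-1} 1 / (1' Sigma^{-1} 1). The puzzle the paper explains arises because Sigma must itself be estimated, making w random. The sketch below shows only the fixed-weight formula applied to estimated errors (an illustration of the setting, not the paper's derivation):

```python
import numpy as np

def optimal_combination_weights(errors):
    """Variance-minimizing combination weights for unbiased forecasts:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1), with Sigma estimated from a
    T x k matrix of past forecast errors. The estimation noise in
    Sigma is exactly what makes these 'optimal' weights random and
    often inferior to equal weights in practice."""
    sigma = np.cov(errors, rowvar=False)
    inv_one = np.linalg.solve(sigma, np.ones(errors.shape[1]))
    return inv_one / inv_one.sum()
```

With two independent forecasts whose error variances are 1 and 4, the population weights are (0.8, 0.2); short samples can place the estimated weights far from these values.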
  8. By: David E. Allen (University of Sydney, and University of South Australia, Australia); Michael McAleer (National Tsing Hua University, Taiwan; Erasmus University Rotterdam, Tinbergen Institute, the Netherlands; Complutense University Madrid, Spain); Marcel Scharth (University of New South Wales, Australia)
    Abstract: In this paper we document that realized variation measures constructed from high-frequency returns reveal a large degree of volatility risk in stock and index returns, where we characterize volatility risk by the extent to which forecasting errors in realized volatility are substantive. Even though returns standardized by ex post quadratic variation measures are nearly Gaussian, this unpredictability brings considerably more uncertainty to the empirically relevant ex ante distribution of returns. Explicitly modeling this volatility risk is fundamental. We propose a dually asymmetric realized volatility model, which incorporates the fact that realized volatility series are systematically more volatile in high volatility periods. Returns in this framework display time varying volatility, skewness and kurtosis. We provide a detailed account of the empirical advantages of the model using data on the S&P 500 index and eight other indexes and stocks.
    Keywords: Realized volatility, volatility of volatility, volatility risk, value-at-risk, forecasting, conditional heteroskedasticity
    JEL: C58 G12
    Date: 2014–06–23
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140075&r=ets
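Realized variance, the measure whose forecast errors the paper studies, is simply the sum of squared intraday log returns over a day. A minimal sketch of the measure itself (the dually asymmetric model built on top of it is not reproduced here):

```python
import numpy as np

def realized_variance(intraday_prices):
    """Daily realized variance: sum of squared intraday log returns."""
    r = np.diff(np.log(intraday_prices))
    return float(np.sum(r ** 2))
```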
  9. By: Monica Billio (University of Venice, GRETA Assoc. and School for Advanced Studies in Venice, Italy); Roberto Casarin (University of Venice, GRETA Assoc. and School for Advanced Studies in Venice, Italy); Francesco Ravazzolo (Norges Bank and BI Norwegian Business School, Norway); Herman K. van Dijk (Erasmus University Rotterdam, and VU University Amsterdam, The Netherlands)
    Abstract: Interactions between the eurozone and US booms and busts and among major eurozone economies are analyzed by introducing a panel Markov-switching VAR model well suited to a multi-country cyclical analysis. The model accommodates changes in low and high data frequencies and endogenous time-varying transition matrices of the country-specific Markov chains. The transition matrix of each Markov chain depends on its own past history and on the history of the other chains, thus allowing for modeling of the interactions between cycles. An endogenous common eurozone cycle is derived by aggregating country-specific cycles. The model is estimated using a simulation based Bayesian approach in which an efficient multi-move strategy algorithm is defined to draw common time-varying Markov-switching chains. Our results show that the US and eurozone cycles are not fully synchronized over the 1991-2013 sample period, with evidence of more recessions in the eurozone. Shocks affect the US one quarter in advance of the eurozone, but these spread very rapidly among economies. An increase in the number of eurozone countries in recession increases the probability that the US stays in recession, while the US recession indicator has a negative impact on the probability that eurozone countries stay in recession. Turning point analysis shows that the cycles of Germany, France and Italy are closer to the US cycle than other countries. Belgium, Spain, and Germany provide more timely information on the aggregate recession than the Netherlands and France.
    Keywords: Bayesian Model, Panel VAR, Markov-switching, International Business Cycles, Interaction Mechanism
    JEL: C11 C15 C53 E37
    Date: 2013–09–16
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20130142&r=ets
  10. By: Carsten Bormann; Melanie Schienle (Leibniz Universität Hannover, Germany); Julia Schaumburg (VU University Amsterdam)
    Abstract: In practice, multivariate dependencies between extreme risks are often only assessed in a pairwise way. We propose a test to detect when tail dependence is truly high-dimensional and bivariate simplifications would produce misleading results. This occurs when a significant portion of the multivariate dependence structure in the tails is of higher dimension than two. Our test statistic is based on a decomposition of the stable tail dependence function, which is standard in extreme value theory for describing multivariate tail dependence. The asymptotic properties of the test are provided and a bootstrap based finite sample version of the test is suggested. A simulation study documents the good performance of the test for standard sample sizes. In an application to international government bonds, we detect a high tail-risk and low return situation during the last decade which can essentially be attributed to increased higher-order tail risk. We also illustrate the empirical consequences from ignoring higher-dimensional tail risk.
    Keywords: decomposition of tail dependence, multivariate extreme values, stable tail dependence function, subsample bootstrap, tail correlation
    JEL: C12 C19
    Date: 2014–02–25
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140024&r=ets
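The stable tail dependence function underlying the test can be estimated nonparametrically from ranks. A bivariate sketch of the standard empirical estimator follows (the paper decomposes the higher-dimensional analogue; `k` is the number of tail observations, and the rank-based construction assumes no ties):

```python
import numpy as np

def empirical_stdf(x1, x2, k, t1=1.0, t2=1.0):
    """Empirical stable tail dependence function at (t1, t2): the
    fraction (out of k) of observations that are extreme in at least
    one margin, where 'extreme' means rank above n - k * t_j.
    Values near t1 + t2 indicate tail independence; values near
    max(t1, t2) indicate full tail dependence."""
    n = x1.size
    r1 = np.argsort(np.argsort(x1)) + 1   # ranks 1..n
    r2 = np.argsort(np.argsort(x2)) + 1
    extreme = (r1 > n - k * t1) | (r2 > n - k * t2)
    return extreme.sum() / k
```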
  11. By: Siem Jan Koopman; Geert Mesters (VU University Amsterdam)
    Abstract: We consider the dynamic factor model where the loading matrix, the dynamic factors and the disturbances are treated as latent stochastic processes. We present empirical Bayes methods that enable the efficient shrinkage-based estimation of the loadings and the factors. We show that our estimates have lower quadratic loss compared to the standard maximum likelihood estimates. We investigate the methods in a Monte Carlo study where we document the finite sample properties. Finally, we present and discuss the results of an empirical study concerning the forecasting of U.S. macroeconomic time series using our empirical Bayes methods.
    Keywords: Importance sampling, Kalman filtering, Likelihood-based analysis, Posterior modes, Rao-Blackwellization, Shrinkage
    JEL: C32 C43
    Date: 2014–05–23
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140061&r=ets
  12. By: Michael McAleer (National Tsing Hua University Taiwan; Erasmus University Rotterdam, the Netherlands; Complutense University of Madrid, Spain)
    Abstract: The three most popular univariate conditional volatility models are the generalized autoregressive conditional heteroskedasticity (GARCH) model of Engle (1982) and Bollerslev (1986), the GJR (or threshold GARCH) model of Glosten, Jagannathan and Runkle (1992), and the exponential GARCH (or EGARCH) model of Nelson (1990, 1991). The underlying stochastic specification to obtain GARCH was demonstrated by Tsay (1987), and that of EGARCH was shown recently in McAleer and Hafner (2014). These models are important in estimating and forecasting volatility, as well as capturing asymmetry, which is the different effects on conditional volatility of positive and negative effects of equal magnitude, and leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. As there seems to be some confusion in the literature between asymmetry and leverage, as well as which asymmetric models are purported to be able to capture leverage, the purpose of the paper is two-fold, namely: (1) to derive the GJR model from a random coefficient autoregressive process, with appropriate regularity conditions; and (2) to show that leverage is not possible in these univariate conditional volatility models.
    Keywords: Conditional volatility models, random coefficient autoregressive processes, random coefficient complex nonlinear moving average process, asymmetry, leverage
    JEL: C22 C52 C58 G32
    Date: 2014–09–18
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140125&r=ets
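The GJR model discussed above captures asymmetry by adding an indicator term to the GARCH variance recursion so that negative shocks raise next-period variance by more than positive shocks of equal magnitude. A minimal sketch of the recursion (initialization by the unconditional variance is a common convention, not taken from the paper):

```python
import numpy as np

def gjr_garch_variance(eps, omega, alpha, gamma, beta):
    """Conditional variance recursion of GJR(1,1):
    h_t = omega + (alpha + gamma * I[eps_{t-1} < 0]) * eps_{t-1}**2 + beta * h_{t-1}.
    gamma != 0 gives asymmetry; as the paper argues, this does not by
    itself imply leverage (negative return-volatility correlation)."""
    h = np.empty(eps.size)
    h[0] = omega / (1.0 - alpha - gamma / 2.0 - beta)  # unconditional variance
    for t in range(1, eps.size):
        neg = 1.0 if eps[t - 1] < 0 else 0.0
        h[t] = omega + (alpha + gamma * neg) * eps[t - 1] ** 2 + beta * h[t - 1]
    return h
```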
  13. By: Francisco Blasques; Siem Jan Koopman; Max Mallee (VU University Amsterdam, the Netherlands)
    Abstract: The multivariate analysis of a panel of economic and financial time series with mixed frequencies is a challenging problem. The standard solution is to analyze the mix of monthly and quarterly time series jointly by means of a multivariate dynamic model with a monthly time index: artificial missing values are inserted for the intermediate months of the quarterly time series. In this paper we explore an alternative solution for a class of dynamic factor models that is specified by means of a low frequency quarterly time index. We show that there is no need to introduce artificial missing values while the high frequency (monthly) information is preserved and can still be analyzed. We also provide evidence that the analysis based on a low frequency specification can be carried out in a computationally more efficient way. A comparison study with existing mixed frequency procedures is presented and discussed. Furthermore, we modify the method of maximum likelihood in the context of a dynamic factor model. We introduce variable-specific weights in the likelihood function to let some variable equations be of more importance during the estimation process. We derive the asymptotic properties of the weighted maximum likelihood estimator and we show that the estimator is consistent and asymptotically normal. We also verify the weighted estimation method in a Monte Carlo study to investigate the effect of different choices for the weights in different scenarios. Finally, we empirically illustrate the new developments for the extraction of a coincident economic indicator from a small panel of mixed frequency economic time series.
    Keywords: Asymptotic theory, Forecasting, Kalman filter, Nowcasting, State space
    JEL: C13 C32 C53 E17
    Date: 2014–08–11
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140105&r=ets
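The variable-specific likelihood weighting described above can be sketched for a simple Gaussian case: each variable's log-density contribution is multiplied by its weight before summing. This is a hypothetical stand-alone version for illustration, not the paper's factor-model likelihood.

```python
import numpy as np

def weighted_loglik(y, mu, sigma2, weights):
    """Weighted Gaussian log-likelihood: y is (T, k); weights (k,)
    up- or down-weight each variable's contribution to the criterion,
    tilting estimation toward the equations of interest."""
    terms = -0.5 * (np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)
    return float(terms.sum(axis=0) @ np.asarray(weights))
```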
  14. By: Shiqing Ling (Hong Kong University of Science and Technology, Hong Kong, China); Michael McAleer (National Tsing Hua University, Taiwan; Erasmus School of Economics, Erasmus University Rotterdam, the Netherlands; and Complutense University of Madrid, Spain); Howell Tong (London School of Economics, United Kingdom)
    Abstract: Two of the fastest growing frontiers in econometrics and quantitative finance are time series and financial econometrics. Significant theoretical contributions to financial econometrics have been made by experts in statistics, econometrics, mathematics, and time series analysis. The purpose of this special issue of the journal on “Frontiers in Time Series and Financial Econometrics” is to highlight several areas of research by leading academics in which novel methods have contributed significantly to time series and financial econometrics, including forecasting co-volatilities via factor models with asymmetry and long memory in realized covariance, prediction of Lévy-driven CARMA processes, functional index coefficient models with variable selection, LASSO estimation of threshold autoregressive models, high dimensional stochastic regression with latent factors, endogeneity and nonlinearity, sign-based portmanteau test for ARCH-type models with heavy-tailed innovations, toward optimal model averaging in regression models with time series errors, high dimensional dynamic stochastic copula models, a misspecification test for multiplicative error models of non-negative time series processes, sample quantile analysis for long-memory stochastic volatility models, testing for independence between functional time series, statistical inference for panel dynamic simultaneous equations models, specification tests of calibrated option pricing models, asymptotic inference in multiple-threshold double autoregressive models, a new hyperbolic GARCH model, intraday value-at-risk: an asymmetric autoregressive conditional duration approach, refinements in maximum likelihood inference on spatial autocorrelation in panel data, statistical inference of conditional quantiles in nonlinear time series models, quasi-likelihood estimation of a threshold diffusion process, threshold models in time series analysis - some reflections, and generalized ARMA models with martingale difference errors.
    Keywords: Time series, financial econometrics, threshold models, conditional volatility, stochastic volatility, copulas, conditional duration
    JEL: C22 C32 C58 G17 G32
    Date: 2015–02–20
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20150026&r=ets
  15. By: Francisco Blasques (VU University Amsterdam); Andre Lucas (VU University Amsterdam); Erkki Silde (VU University Amsterdam, and Duisenberg school of finance)
    Abstract: We describe stationarity and ergodicity (SE) regions for a recently proposed class of score driven dynamic correlation models. These models have important applications in empirical work. The regions are derived from sufficiency conditions in Bougerol (1993) and take a non-standard form. We show that the non-standard shape of the sufficiency regions cannot be avoided by reparameterizing the model or by rescaling the score steps in the transition equation for the correlation parameter. This makes the result markedly different from the volatility case. Observationally equivalent decompositions of the stochastic recurrence equation yield regions with different sizes and shapes. We illustrate our results with an analysis of time-varying correlations between UK and Greek equity indices. We find that, in empirical applications as well, different decompositions can give rise to different conclusions regarding the stability of the estimated model.
    Keywords: dynamic copulas, generalized autoregressive score (GAS) models, stochastic recurrence equations, observation driven models, contraction properties
    JEL: C22 C32 C58
    Date: 2013–07–19
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20130097&r=ets
  16. By: Nalan Basturk (Maastricht University, the Netherlands); Pinar Ceyhan (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam, VU University Amsterdam, the Netherlands)
    Abstract: Time varying patterns in US growth are analyzed using various univariate model structures, starting from a naive model structure where all features change every period to a model where the slow variation in the conditional mean and changes in the conditional variance are specified together with their interaction, including survey data on expected growth in order to strengthen the information in the model. Use is made of a simulation based Bayesian inferential method to determine the forecasting performance of the various model specifications. The extension of a basic growth model with a constant mean to models including time variation in the mean and variance requires careful investigation of possible identification issues of the parameters and existence conditions of the posterior under a diffuse prior. The use of diffuse priors leads to a focus on the likelihood function and it enables a researcher and policy adviser to evaluate the scientific information contained in model and data. Empirical results indicate that incorporating time variation in mean growth rates as well as in volatility is important for improving the predictive performance of growth models. Furthermore, using data information on growth expectations is important for forecasting growth in specific periods, such as the recession periods around the 2000s and around 2008.
    Keywords: Growth, Time varying parameters, Expectations data
    JEL: C11 C22 E17
    Date: 2014–09–01
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140119&r=ets
  17. By: Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Katarzyna Lasak (VU University Amsterdam); André Lucas (VU University Amsterdam)
    Abstract: We study the performance of two analytical methods and one simulation method for computing in-sample confidence bounds for time-varying parameters. These in-sample bounds are designed to reflect parameter uncertainty in the associated filter. They are applicable to the complete class of observation driven models and are valid for a wide range of estimation procedures. A Monte Carlo study is conducted for time-varying parameter models such as generalized autoregressive conditional heteroskedasticity and autoregressive conditional duration models. Our results show clear differences in the actual coverage provided by our three methods of computing in-sample bounds. Although the analytical methods may be less reliable than the simulation method, their coverage performance is adequate to provide a reasonable impression of the parameter uncertainty that is embedded in the time-varying parameter path. We illustrate our findings in a volatility analysis for monthly Standard & Poor's 500 index returns.
    Keywords: autoregressive conditional duration, delta-method, generalized autoregressive conditional heteroskedasticity, score driven models, time-varying mean
    JEL: C15 C22 C58
    Date: 2015–02–23
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20150027&r=ets
  18. By: Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam)
    Abstract: We study the strong consistency and asymptotic normality of the maximum likelihood estimator for a class of time series models driven by the score function of the predictive likelihood. This class of nonlinear dynamic models includes both new and existing observation driven time series models. Examples include models for generalized autoregressive conditional heteroskedasticity, mixed-measurement dynamic factors, serial dependence in heavy-tailed densities, and other time varying parameter processes. We formulate primitive conditions for global identification, invertibility, strong consistency, asymptotic normality under correct specification and under mis-specification. We provide key illustrations of how the theory can be applied to specific dynamic models.
    Keywords: time-varying parameter models, GAS, score driven models, Markov processes, estimation, stationarity, invertibility, consistency, asymptotic normality
    JEL: C13 C22 C12
    Date: 2014–03–04
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140029&r=ets
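For readers unfamiliar with score-driven (GAS) models, the Gaussian special case of the class studied above reduces to a GARCH(1,1)-type recursion: with time-varying variance f_t, the scaled score is s_t = y_t^2 - f_t. The sketch below is a hypothetical Python illustration, not code from the paper; the function name and parameter values are arbitrary choices.

```python
import random

def simulate_gas_volatility(n, omega=0.1, alpha=0.1, beta=0.9, seed=42):
    """Simulate a score-driven (GAS) time-varying variance process.

    For a Gaussian observation density with time-varying variance f_t,
    the scaled score is s_t = y_t**2 - f_t, so the update
    f_{t+1} = omega + alpha * s_t + beta * f_t coincides with GARCH(1,1).
    """
    rng = random.Random(seed)
    f = omega / (1.0 - beta)  # start at the unconditional variance level
    ys, fs = [], []
    for _ in range(n):
        y = rng.gauss(0.0, f ** 0.5)
        ys.append(y)
        fs.append(f)
        s = y * y - f  # scaled score of the Gaussian log-likelihood
        f = omega + alpha * s + beta * f
    return ys, fs
```

Since the coefficient on f in the update is beta - alpha = 0.8 > 0, the filtered variance stays strictly positive for these parameter values.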
  19. By: Marco Bazzi (University of Padova, Italy); Francisco Blasques (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam, the Netherlands)
    Abstract: We propose a new Markov switching model with time varying probabilities for the transitions. The novelty of our model is that the transition probabilities evolve over time by means of an observation driven model. The innovation of the time varying probability is generated by the score of the predictive likelihood function. We show how the model dynamics can be readily interpreted. We investigate the performance of the model in a Monte Carlo study and show that the model is successful in estimating a range of different dynamic patterns for unobserved regime switching probabilities. We also illustrate the new methodology in an empirical setting by studying the dynamic mean and variance behavior of U.S. Industrial Production growth. We find empirical evidence of changes in the regime switching probabilities, with more persistence for high volatility regimes in the earlier part of the sample, and more persistence for low volatility regimes in the later part of the sample.
    Keywords: Hidden Markov Models; observation driven models; generalized autoregressive score dynamics
    JEL: C22 C32
    Date: 2014–06–17
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140072&r=ets
  20. By: Francisco Blasques; Siem Jan Koopman; André Lucas (VU University Amsterdam, the Netherlands)
    Abstract: We develop optimal formulations for nonlinear autoregressive models by representing them as linear autoregressive models with time-varying temporal dependence coefficients. We propose a parameter updating scheme based on the score of the predictive likelihood function at each time point. The resulting time-varying autoregressive model is formulated as a nonlinear autoregressive model and is compared with threshold and smooth-transition autoregressive models. We establish the information theoretic optimality of the score driven nonlinear autoregressive process and the asymptotic theory for maximum likelihood parameter estimation. The performance of our model in extracting the time-varying or the nonlinear dependence for finite samples is studied in a Monte Carlo exercise. In our empirical study we present the in-sample and out-of-sample performances of our model for a weekly time series of unemployment insurance claims.
    Keywords: Asymptotic theory; Dynamic models; Observation driven time series models; Smooth-transition model; Time-varying parameters; Threshold autoregressive model
    JEL: C13 C22 C32
    Date: 2014–08–11
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140103&r=ets
  21. By: Laurent Callot (VU University Amsterdam); Johannes Tang Kristensen (University of Southern Denmark, Denmark)
    Abstract: This paper proposes a parsimoniously time varying parameter vector autoregressive model (with exogenous variables, VARX) and studies the properties of the Lasso and adaptive Lasso as estimators of this model. The parameters of the model are assumed to follow parsimonious random walks, where parsimony stems from the assumption that increments to the parameters have a non-zero probability of being exactly equal to zero. By varying the degree of parsimony our model can accommodate constant parameters, an unknown number of structural breaks, or parameters with a high degree of variation. We characterize the finite sample properties of the Lasso by deriving upper bounds on the estimation and prediction errors that are valid with high probability; and asymptotically we show that these bounds tend to zero with probability tending to one if the number of non-zero increments grows slower than √T. By simulation experiments we investigate the properties of the Lasso and the adaptive Lasso in settings where the parameters are stable, experience structural breaks, or follow a parsimonious random walk. We use our model to investigate the monetary policy response to inflation and business cycle fluctuations in the US by estimating a parsimoniously time varying parameter Taylor rule. We document substantial changes in the policy response of the Fed in the 1980s and since 2008.
    Keywords: Parsimony, time varying parameters, VAR, structural break, Lasso
    JEL: C01 C13 C32 E52
    Date: 2014–11–07
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140145&r=ets
  22. By: Manabu Asai (Soka University, Japan); Michael McAleer (National Tsing Hua University, Taiwan; Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, and Tinbergen Institute, the Netherlands; Complutense University of Madrid, Spain)
    Abstract: Modelling covariance structures is known to suffer from the curse of dimensionality. In order to avoid this problem for forecasting, the authors propose a new factor multivariate stochastic volatility (fMSV) model for realized covariance measures that accommodates asymmetry and long memory. Using the basic structure of the fMSV model, the authors extend the dynamic correlation MSV model, the conditional/stochastic Wishart autoregressive models, the matrix-exponential MSV model, and the Cholesky MSV model. Empirical results for seven US stock returns indicate that the new fMSV models outperform existing dynamic conditional correlation models for forecasting future covariances. Among the new fMSV models, the Cholesky MSV model with long memory and asymmetry shows stable and better forecasting performance for one-day, five-day and ten-day horizons in the periods before, during and after the global financial crisis.
    Keywords: Dimension reduction; Factor Model; Multivariate Stochastic Volatility; Leverage Effects; Long Memory; Realized Volatility.
    JEL: C32 C53 C58 G17
    Date: 2014–03–17
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140037&r=ets
  23. By: Manabu Asai (Soka University, Japan); Michael McAleer (National Tsing Hua University, Taiwan, Erasmus University Rotterdam, the Netherlands, Complutense University of Madrid, Spain)
    Abstract: The paper investigates the impact of jumps in forecasting co-volatility, accommodating leverage effects. We modify the jump-robust two time scale covariance estimator of Boudt and Zhang (2013) such that the estimated matrix is positive definite. Using this approach we can disentangle the estimates of the integrated co-volatility matrix and jump variations from the quadratic covariation matrix. Empirical results for three stocks traded on the New York Stock Exchange indicate that the co-jumps of two assets have a significant impact on future co-volatility, but that the impact is negligible at weekly and monthly forecast horizons.
    Keywords: Co-Volatility; Forecasting; Jump; Leverage Effects; Realized Covariance; Threshold
    JEL: C32 C53 C58 G17
    Date: 2015–02–09
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20150018&r=ets
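The building block behind the estimator above is the realized covariance of synchronized intraday returns, which jumps contaminate. The Python sketch below illustrates the basic quantity and a crude truncation step; it is a hypothetical illustration only, not the Boudt and Zhang (2013) two-time-scale estimator used in the paper.

```python
def realized_covariance(r1, r2, threshold=None):
    """Realized (co)variance from synchronized intraday returns.

    If `threshold` is given, intervals where either return exceeds it in
    absolute value are dropped before summing -- a crude truncation in the
    spirit of jump-robust estimators (not the two-time-scale estimator).
    """
    cov = 0.0
    for a, b in zip(r1, r2):
        if threshold is not None and (abs(a) > threshold or abs(b) > threshold):
            continue  # drop intervals suspected to contain a jump
        cov += a * b
    return cov
```

Comparing the truncated and untruncated sums on the same data isolates the jump variation, which is the decomposition the abstract refers to.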
  24. By: Francesco Calvori (Department of Statistics 'G. Parenti', University of Florence, Italy); Drew Creal (Booth School of Business, University of Chicago); Siem Jan Koopman (VU University Amsterdam); Andre Lucas (VU University Amsterdam)
    Abstract: We develop a new parameter stability test against the alternative of observation driven generalized autoregressive score dynamics. The new test generalizes the ARCH-LM test of Engle (1982) to settings beyond time-varying volatility and exploits any autocorrelation in the likelihood scores under the alternative. We compare the test's performance with that of alternative tests developed for competing time-varying parameter frameworks, such as structural breaks and observation driven parameter dynamics. The new test has higher and more stable power against alternatives with frequent regime switches or with non-local parameter driven time-variation. For parameter driven time variation close to the null or for infrequent structural changes, the test of Müller and Petalas (2010) performs best overall. We apply all tests empirically to a panel of losses given default over the period 1982--2010 and find significant evidence of parameter variation in the underlying beta distribution.
    Keywords: time-varying parameters; observation driven models; parameter driven models; structural breaks; generalized autoregressive score model; regime switching; credit risk
    JEL: C12 C52 C22
    Date: 2014–01–14
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140010&r=ets
  25. By: István Barra (VU University Amsterdam, Duisenberg School of Finance, the Netherlands); Lennart Hoogerheide (VU University Amsterdam); Siem Jan Koopman (VU University Amsterdam); André Lucas (VU University Amsterdam, the Netherlands)
    Abstract: We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear non-Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis-Hastings algorithm. A particular feature of our approach is that smoothed estimates of the states and the marginal likelihood are obtained directly as an output of the algorithm. Our method provides a computationally efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic volatility and stochastic intensity models. For our empirical study, we analyse the performance of our method for stock returns and corporate default panel data. (This paper is an updated version of the paper that appeared earlier as Barra, I., Hoogerheide, L.F., Koopman, S.J., and Lucas, A. (2013) "Joint Independent Metropolis-Hastings Methods for Nonlinear Non-Gaussian State Space Models". TI Discussion Paper 13-050/III. Amsterdam: Tinbergen Institute.)
    Keywords: Bayesian inference, importance sampling, Monte Carlo estimation, Metropolis-Hastings algorithm, mixture of Student's t-distributions
    JEL: C11 C15 C22 C32 C58
    Date: 2014–09–02
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140118&r=ets
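The engine of the procedure above is the independence Metropolis-Hastings algorithm: candidates are drawn from a fixed proposal density and accepted with probability min(1, [π(x')q(x)]/[π(x)q(x')]). The Python sketch below shows the generic accept/reject step on a toy target; it is an illustrative skeleton, not the paper's tailored proposal construction.

```python
import math
import random

def independence_mh(logpost, proposal_sampler, proposal_logpdf, n, x0, seed=7):
    """Minimal independence Metropolis-Hastings sampler.

    Candidates come from a fixed proposal density; acceptance uses
    min(1, [pi(x')q(x)] / [pi(x)q(x')]), computed in logs for stability.
    """
    rng = random.Random(seed)
    x = x0
    draws = []
    for _ in range(n):
        cand = proposal_sampler(rng)
        log_ratio = (logpost(cand) - logpost(x)
                     + proposal_logpdf(x) - proposal_logpdf(cand))
        if math.log(rng.random()) < log_ratio:
            x = cand  # accept the candidate
        draws.append(x)
    return draws

# Toy example: standard normal target, over-dispersed N(0, 2^2) proposal.
draws = independence_mh(
    logpost=lambda x: -0.5 * x * x,
    proposal_sampler=lambda rng: rng.gauss(0.0, 2.0),
    proposal_logpdf=lambda x: -0.5 * (x / 2.0) ** 2,
    n=20000, x0=0.0,
)
```

The efficiency of such a sampler hinges entirely on how well the proposal matches the posterior, which is why the paper's contribution is the design of flexible proposal densities.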
  26. By: David Ardia (Laval University, Quebec, Canada); Lukasz Gatarek (Erasmus University Rotterdam); Lennart F. Hoogerheide (VU University Amsterdam)
    Abstract: A novel simulation-based methodology is proposed to test the validity of a set of marginal time series models, where the dependence structure between the time series is taken ‘directly’ from the observed data. The procedure is useful when one wants to summarize the test results for several time series in one joint test statistic and p-value. The proposed test method can have higher power than a test for a univariate time series, especially for short time series. Therefore our test for multiple time series is particularly useful if one wants to assess Value-at-Risk (or Expected Shortfall) predictions over a small time frame (e.g., a crisis period). We apply our method to test GARCH model specifications for a large panel data set of stock returns.
    Keywords: Bootstrap test, GARCH, marginal models, multiple time series, Value-at-Risk
    JEL: C1 C12 C22 C44
    Date: 2014–02–28
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20140028&r=ets
  27. By: Guillaume Gaetan Martinet (ENSAE Paris Tech, France, and Columbia University, USA); Michael McAleer (National Tsing Hua University, Taiwan, Erasmus University Rotterdam, the Netherlands, Complutense University of Madrid, Spain.)
    Abstract: Of the two most widely estimated univariate asymmetric conditional volatility models, the exponential GARCH (or EGARCH) specification can capture asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which refers to the negative correlation between the returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator (QMLE) of the EGARCH parameters are not available under general conditions, but only for special cases under highly restrictive and unverifiable conditions, such as EGARCH(1,0) or EGARCH(1,1), and possibly only under simulation. A limitation in the development of asymptotic properties of the QMLE for the EGARCH(p,q) model is the lack of an invertibility condition for the returns shocks underlying the model. It is shown in this paper that the EGARCH(p,q) model can be derived from a stochastic process, for which the invertibility conditions can be stated simply and explicitly. This will be useful in re-interpreting the existing properties of the QMLE of the EGARCH(p,q) parameters.
    Keywords: Leverage, asymmetry, existence, stochastic process, asymptotic properties, invertibility
    JEL: C22 C52 C58 G32
    Date: 2015–02–12
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20150022&r=ets
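For concreteness, the EGARCH(1,1) special case discussed above evolves log-variance as log σ²_t = ω + β log σ²_{t-1} + α(|z_{t-1}| − E|z|) + γ z_{t-1}, where a negative γ produces the leverage effect. The Python sketch below filters a variance path under this recursion; parameter values are illustrative assumptions, not estimates from the paper.

```python
import math

def egarch_variance_path(returns, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.98):
    """Filter conditional variances from an EGARCH(1,1) recursion.

    log sigma2_t = omega + beta * log sigma2_{t-1}
                   + alpha * (|z_{t-1}| - E|z|) + gamma * z_{t-1},
    with z_t = r_t / sigma_t and E|z| = sqrt(2/pi) for Gaussian shocks.
    A negative gamma lets negative shocks raise volatility more (leverage).
    """
    mean_abs_z = math.sqrt(2.0 / math.pi)
    log_s2 = omega / (1.0 - beta)  # unconditional level of log-variance
    path = []
    for r in returns:
        s2 = math.exp(log_s2)
        path.append(s2)
        z = r / math.sqrt(s2)
        log_s2 = (omega + beta * log_s2
                  + alpha * (abs(z) - mean_abs_z) + gamma * z)
    return path
```

Because the recursion is in logs, the filtered variance is positive without parameter restrictions; the invertibility question the paper addresses is whether z_t can be recovered from the observed returns in the first place.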
  28. By: Skrobotov, Anton (Russian Presidential Academy of National Economy and Public Administration (RANEPA))
    Abstract: In this paper we propose a likelihood ratio test for a change in persistence of a time series. We consider the null hypothesis of a constant persistence I(1) and an alternative in which the series changes from a stationary regime to a unit root regime and vice versa. Both known and unknown break dates are analyzed. Moreover, we consider a modification of a lag length selection procedure which provides better size control over various data generation processes. In general, our likelihood ratio-based tests show the best finite sample properties from all persistence change tests that use the null hypothesis of a unit root throughout.
    Keywords: change in persistence, likelihood ratio test, unit root test, lag length selection
    JEL: C12 C22
    Date: 2015–01–28
    URL: http://d.repec.org/n?u=RePEc:rnp:ppaper:skr001&r=ets
  29. By: Claudia Klüppelberg; Jianing Zhang
    Abstract: In this paper we study time-consistent risk measures for returns that are given by a GARCH(1,1) model. We present a construction of risk measures based on their static counterparts that overcomes the lack of time-consistency. We then study in detail our construction for the risk measures Value-at-Risk (VaR) and Average Value-at-Risk (AVaR). While in the VaR case we can derive an analytical formula for its time-consistent counterpart, in the AVaR case we derive lower and upper bounds to its time-consistent version. Furthermore, we incorporate techniques from Extreme Value Theory (EVT) to allow for a more tail-geared analysis of the corresponding risk measures. We conclude with an application to stock prices to investigate the applicability of our results.
    Date: 2015–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1504.04774&r=ets
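The static VaR that the construction above starts from is straightforward to compute under GARCH(1,1) with Gaussian innovations: the one-step variance forecast is σ²_{t+1} = ω + α r_t² + β σ²_t, and VaR at level q is −σ_{t+1} z_{1−q}. The Python sketch below is a hypothetical illustration of this static building block, not the paper's time-consistent counterpart.

```python
import math
from statistics import NormalDist

def garch_var_forecast(r_t, sigma2_t, omega=0.05, alpha=0.08, beta=0.90,
                       level=0.99):
    """One-step-ahead static Value-at-Risk under a Gaussian GARCH(1,1).

    sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t, and the VaR
    at `level` is -sigma_{t+1} * z_{1-level} (reported as a positive loss).
    """
    sigma2_next = omega + alpha * r_t ** 2 + beta * sigma2_t
    z = NormalDist().inv_cdf(1.0 - level)  # lower-tail quantile (negative)
    return -math.sqrt(sigma2_next) * z
```

Iterating this one-step rule is exactly what fails to be time-consistent in general, which is the gap the paper's construction is designed to close.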

This nep-ets issue is ©2015 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.