Econometric Time Series
http://lists.repec.org/mailman/listinfo/nep-ets
2015-04-25
Yong Yin

Real-time forecasting with a MIDAS VAR
http://d.repec.org/n?u=RePEc:hhs:bofitp:2015_013&r=ets
This paper presents a MIDAS-type mixed frequency VAR forecasting model. First, we propose a general and compact mixed frequency VAR framework using a stacked vector approach. Second, we integrate the mixed frequency VAR with a MIDAS-type Almon lag polynomial scheme, which is designed to reduce the parameter space while keeping models flexible. We show how to recast the resulting non-linear MIDAS-type mixed frequency VAR into a linear equation system that can be easily estimated. A pseudo out-of-sample forecasting exercise with US real-time data shows that the mixed frequency VAR substantially improves upon the predictive accuracy of a standard VAR for different VAR specifications. Forecast errors for, e.g., GDP growth decrease by 30 to 60 percent for forecast horizons up to six months and by around 20 percent for a forecast horizon of one year.
Mikosch, Heiner; Neuwirth, Stefan
2015-04-13
Forecasting; mixed frequency data; MIDAS; VAR; real time

Fractional Cointegration Rank Estimation
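The exponential Almon lag device that MIDAS-type models use to compress many high-frequency lags into a handful of parameters can be sketched as follows. This is a minimal illustration of the general idea; the function names and parameter values are assumptions for exposition, not taken from the paper above.

```python
import math

def almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag weights: w_k proportional to exp(theta1*k + theta2*k^2)."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(1, n_lags + 1)]
    total = sum(raw)
    return [w / total for w in raw]  # normalized so the weights sum to one

def midas_aggregate(high_freq_lags, theta1, theta2):
    """Collapse a vector of high-frequency lags into one low-frequency regressor."""
    w = almon_weights(theta1, theta2, len(high_freq_lags))
    return sum(wk * xk for wk, xk in zip(w, high_freq_lags))

# twelve monthly lags mapped onto just two shape parameters
weights = almon_weights(0.1, -0.05, 12)
print(round(sum(weights), 6))  # → 1.0
```

Because only theta1 and theta2 are free, the parameter count no longer grows with the number of high-frequency lags, which is the dimension-reduction role the abstract attributes to the Almon scheme.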
http://d.repec.org/n?u=RePEc:tin:wpaper:20140021&r=ets
Accepted for publication in the Journal of Business & Economic Statistics. We consider cointegration rank estimation for a p-dimensional Fractional Vector Error Correction Model. We propose a new two-step procedure which allows testing for further long-run equilibrium relations with possibly different persistence levels. The first step consists of estimating the parameters of the model under the null hypothesis of cointegration rank r=1,2,…,p-1. This step provides consistent estimates of the order of fractional cointegration, the cointegration vectors, the speed-of-adjustment parameters, and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p-r common trends that are not cointegrated under the null. The order of fractional cointegration is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to adapt to the representation of the common trends estimated in the first step. The critical values of the proposed tests depend only on the number of common trends under the null, p-r, and on the interval of the orders of fractional cointegration b allowed in the estimation, but not on the order of fractional cointegration of already identified relationships. This reduces the set of simulations required to approximate the critical values, making the procedure convenient for practical purposes. In a Monte Carlo study we analyze the finite sample properties of our procedure and compare it with alternative methods. Finally, we apply these methods to study the term structure of interest rates.
Katarzyna Lasak, Carlos Velasco
2014-02-13
Error correction model, Gaussian VAR model, Likelihood ratio tests, Maximum likelihood estimation

Maximum Likelihood Estimation for Correctly Specified Generalized Autoregressive Score Models: Feedback Effects, Contraction Conditions and Asymptotic Properties
http://d.repec.org/n?u=RePEc:tin:wpaper:20140074&r=ets
The strong consistency and asymptotic normality of the maximum likelihood estimator in observation-driven models usually requires studying the model both as a filter for the time-varying parameter and as a data generating process (DGP) for the observed data. The probabilistic properties of the filter can be substantially different from those of the DGP. This difference is particularly relevant for recently developed time-varying parameter models. We establish new conditions under which the dynamic properties of the true time-varying parameter as well as of its filtered counterpart are both well-behaved, and we only require the verification of one set of conditions rather than two. In particular, we formulate conditions under which the (local) invertibility of the model follows directly from the stable behavior of the true time-varying parameter. We use these results to prove the local strong consistency and asymptotic normality of the maximum likelihood estimator. To illustrate the results, we apply the theory to a number of empirically relevant models.
Francisco Blasques, Siem Jan Koopman, André Lucas
2014-06-20
Observation-driven models, stochastic recurrence equations, contraction conditions, invertibility, stationarity, ergodicity, generalized autoregressive score models

Inference on Co-integration Parameters in Heteroskedastic Vector Autoregressions
http://d.repec.org/n?u=RePEc:tin:wpaper:20130187&r=ets
It is well established that the shocks driving many key macro-economic and financial variables display time-varying volatility. In this paper we consider estimation and hypothesis testing on the coefficients of the co-integrating relations and the adjustment coefficients in vector autoregressions driven by both conditional and unconditional heteroskedasticity of a quite general and unknown form in the shocks. We show that the conventional results in Johansen (1996) for the maximum likelihood estimators and associated likelihood ratio tests derived under homoskedasticity do not in general hold in the presence of heteroskedasticity. As a consequence, standard confidence intervals and hypothesis tests on these coefficients are potentially unreliable. Solutions to this inference problem based on Wald tests (using a "sandwich" estimator of the variance matrix) and on the use of the wild bootstrap are discussed. These do not require the practitioner to specify a parametric model for volatility, or to assume that the pattern of volatility is common to, or independent across, the vector of series under analysis. We formally establish the conditions under which these methods are asymptotically valid. A Monte Carlo simulation study demonstrates that significant improvements in finite sample size can be obtained by the bootstrap over the corresponding asymptotic tests in both heteroskedastic and homoskedastic environments. An application to the term structure of interest rates in the US illustrates the difference between standard and bootstrap inferences regarding hypotheses on the co-integrating vectors and adjustment coefficients.
H. Peter Boswijk, Giuseppe Cavaliere, Anders Rahbek, A. M. Robert Taylor
2013-11-19
Co-integration, adjustment coefficients, (un)conditional heteroskedasticity, heteroskedasticity-robust inference, wild bootstrap

New HEAVY Models for Fat-Tailed Returns and Realized Covariance Kernels
http://d.repec.org/n?u=RePEc:tin:wpaper:20140073&r=ets
We develop a new model for the multivariate covariance matrix dynamics based on daily return observations and daily realized covariance matrix kernels based on intraday data. Both types of data may be fat-tailed. We account for this by assuming a matrix-F distribution for the realized kernels, and a multivariate Student's t distribution for the returns. Using generalized autoregressive score dynamics for the unobserved true covariance matrix, our approach automatically corrects for the effect of outliers and incidentally large observations, both in returns and in covariances. Moreover, by an appropriate choice of scaling of the conditional score function we are able to retain a convenient matrix formulation for the dynamic updates of the covariance matrix. This makes the model highly computationally efficient. We show how the model performs in a controlled simulation setting as well as for empirical data. In our empirical application, we study daily returns and realized kernels from 15 equities over the period 2001-2012 and find that the new model statistically outperforms (recently developed) multivariate volatility models, both in-sample and out-of-sample. We also comment on the possibility of using composite likelihood methods for estimation if desired.
Pawel Janus, André Lucas, Anne Opschoor
2014-06-19
realized covariance matrices, heavy tails, (degenerate) matrix-F distribution, generalized autoregressive score (GAS) dynamics

On an Estimation Method for an Alternative Fractionally Cointegrated Model
http://d.repec.org/n?u=RePEc:tin:wpaper:20140052&r=ets
In this paper we consider the Fractional Vector Error Correction model proposed in Avarucci (2007), which is characterized by a richer lag structure than models proposed in Granger (1986) and Johansen (2008, 2009). We discuss the identification issues of the model of Avarucci (2007), following the ideas in Carlini and Santucci de Magistris (2014) for the model of Johansen (2008, 2009). We propose a 4-step estimation procedure that is based on the switching algorithm employed in Carlini and Mosconi (2014) and the GLS procedure in Mosconi and Paruolo (2014). The proposed procedure provides estimates of the long run parameters of the fractionally cointegrated system that are consistent and unbiased, which we demonstrate by a Monte Carlo experiment.
Federico Carlini, Katarzyna Lasak
2014-05-01
Error correction model, Gaussian VAR model, Fractional Cointegration, Estimation algorithm, Maximum likelihood estimation, Switching Algorithm, Reduced Rank Regression

The Forecast Combination Puzzle: A Simple Theoretical Explanation
http://d.repec.org/n?u=RePEc:tin:wpaper:20140127&r=ets
This paper offers a theoretical explanation for the stylized fact that forecast combinations with estimated optimal weights often perform poorly in applications. The properties of the forecast combination are typically derived under the assumption that the weights are fixed, while in practice they need to be estimated. If the fact that the weights are random rather than fixed is taken into account during the optimality derivation, then the forecast combination will be biased (even when the original forecasts are unbiased) and its variance is larger than in the fixed-weights case. In particular, there is no guarantee that the 'optimal' forecast combination will be better than the equal-weights case or even improve on the original forecasts. We provide the underlying theory, some special cases and an application in the context of model selection.
Gerda Claeskens, Jan Magnus, Andrey Vasnev, Wendun Wang
2014-09-19
forecast combination, optimal weights, model selection

Asymmetric Realized Volatility Risk
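The mechanics behind the forecast combination puzzle described above can be seen in the two-forecast case. The sketch below (illustrative simulated numbers only; nothing here comes from the paper) computes the 'optimal' weight under the standard fixed-weight derivation. In practice this weight is itself estimated and therefore random, which is where the bias and extra variance enter.

```python
import random

random.seed(7)
n = 500
# simulated forecast errors of two unbiased forecasts (illustrative data)
e1 = [random.gauss(0.0, 1.0) for _ in range(n)]
e2 = [random.gauss(0.0, 1.5) for _ in range(n)]

def mean(x):
    return sum(x) / len(x)

def cov(x, y):
    """Sample covariance (variance when x is y)."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

v1, v2, c12 = cov(e1, e1), cov(e2, e2), cov(e1, e2)

# variance-minimizing weight on forecast 1, derived as if weights were fixed
w_opt = (v2 - c12) / (v1 + v2 - 2.0 * c12)

def combined_error_variance(w):
    combined = [w * a + (1.0 - w) * b for a, b in zip(e1, e2)]
    return cov(combined, combined)

# in-sample, the estimated weight beats equal weights by construction...
assert combined_error_variance(w_opt) <= combined_error_variance(0.5) + 1e-12
# ...but out of sample w_opt is random, and the guarantee disappears
```

The in-sample inequality always holds because w_opt is the minimizer of the quadratic in w; the paper's point is that this optimality is lost once the randomness of the estimated weight is acknowledged.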
http://d.repec.org/n?u=RePEc:tin:wpaper:20140075&r=ets
In this paper we document that realized variation measures constructed from high-frequency returns reveal a large degree of volatility risk in stock and index returns, where we characterize volatility risk by the extent to which forecasting errors in realized volatility are substantive. Even though returns standardized by ex post quadratic variation measures are nearly Gaussian, this unpredictability brings considerably more uncertainty to the empirically relevant ex ante distribution of returns. Explicitly modeling this volatility risk is fundamental. We propose a dually asymmetric realized volatility model, which incorporates the fact that realized volatility series are systematically more volatile in high volatility periods. Returns in this framework display time-varying volatility, skewness and kurtosis. We provide a detailed account of the empirical advantages of the model using data on the S&P 500 index and eight other indexes and stocks.
David E. Allen, Michael McAleer, Marcel Scharth
2014-06-23
Realized volatility, volatility of volatility, volatility risk, value-at-risk, forecasting, conditional heteroskedasticity

Interactions between Eurozone and US Booms and Busts: A Bayesian Panel Markov-switching VAR Model
http://d.repec.org/n?u=RePEc:tin:wpaper:20130142&r=ets
Interactions between the eurozone and US booms and busts and among major eurozone economies are analyzed by introducing a panel Markov-switching VAR model well suited to multi-country cyclical analysis. The model accommodates changes in low and high data frequencies and endogenous time-varying transition matrices of the country-specific Markov chains. The transition matrix of each Markov chain depends on its own past history and on the history of the other chains, thus allowing for modeling of the interactions between cycles. An endogenous common eurozone cycle is derived by aggregating country-specific cycles. The model is estimated using a simulation-based Bayesian approach in which an efficient multi-move strategy algorithm is defined to draw common time-varying Markov-switching chains. Our results show that the US and eurozone cycles are not fully synchronized over the 1991-2013 sample period, with evidence of more recessions in the eurozone. Shocks affect the US one quarter in advance of the eurozone, but they spread very rapidly among economies. An increase in the number of eurozone countries in recession increases the probability of the US staying in recession, while the US recession indicator has a negative impact on the probability of eurozone countries staying in recession. Turning point analysis shows that the cycles of Germany, France and Italy are closer to the US cycle than those of other countries. Belgium, Spain, and Germany provide more timely information on the aggregate recession than the Netherlands and France.
Monica Billio, Roberto Casarin, Francesco Ravazzolo, Herman K. van Dijk
2013-09-16
Bayesian Model, Panel VAR, Markov-switching, International Business Cycles, Interaction Mechanism

A Test for the Portion of Bivariate Dependence in Multivariate Tail Risk
http://d.repec.org/n?u=RePEc:tin:wpaper:20140024&r=ets
In practice, multivariate dependencies between extreme risks are often only assessed in a pairwise way. We propose a test to detect when tail dependence is truly high-dimensional and bivariate simplifications would produce misleading results. This occurs when a significant portion of the multivariate dependence structure in the tails is of higher dimension than two. Our test statistic is based on a decomposition of the stable tail dependence function, which is standard in extreme value theory for describing multivariate tail dependence. The asymptotic properties of the test are provided and a bootstrap-based finite sample version of the test is suggested. A simulation study documents the good performance of the test for standard sample sizes. In an application to international government bonds, we detect a high tail-risk and low return situation during the last decade which can essentially be attributed to increased higher-order tail risk. We also illustrate the empirical consequences of ignoring higher-dimensional tail risk.
Carsten Bormann, Melanie Schienle, Julia Schaumburg
2014-02-25
decomposition of tail dependence, multivariate extreme values, stable tail dependence function, subsample bootstrap, tail correlation

Empirical Bayes Methods for Dynamic Factor Models
http://d.repec.org/n?u=RePEc:tin:wpaper:20140061&r=ets
We consider the dynamic factor model where the loading matrix, the dynamic factors and the disturbances are treated as latent stochastic processes. We present empirical Bayes methods that enable the efficient shrinkage-based estimation of the loadings and the factors. We show that our estimates have lower quadratic loss compared to the standard maximum likelihood estimates. We investigate the methods in a Monte Carlo study where we document the finite sample properties. Finally, we present and discuss the results of an empirical study concerning the forecasting of U.S. macroeconomic time series using our empirical Bayes methods.
Siem Jan Koopman, Geert Mesters
2014-05-23
Importance sampling, Kalman filtering, Likelihood-based analysis, Posterior modes, Rao-Blackwellization, Shrinkage

Asymmetry and Leverage in Conditional Volatility Models
http://d.repec.org/n?u=RePEc:tin:wpaper:20140125&r=ets
The three most popular univariate conditional volatility models are the generalized autoregressive conditional heteroskedasticity (GARCH) model of Engle (1982) and Bollerslev (1986), the GJR (or threshold GARCH) model of Glosten, Jagannathan and Runkle (1992), and the exponential GARCH (or EGARCH) model of Nelson (1990, 1991). The underlying stochastic specification to obtain GARCH was demonstrated by Tsay (1987), and that of EGARCH was shown recently in McAleer and Hafner (2014). These models are important in estimating and forecasting volatility, as well as capturing asymmetry, which is the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. As there seems to be some confusion in the literature between asymmetry and leverage, as well as which asymmetric models are purported to be able to capture leverage, the purpose of the paper is two-fold, namely: (1) to derive the GJR model from a random coefficient autoregressive process, with appropriate regularity conditions; and (2) to show that leverage is not possible in these univariate conditional volatility models.
Michael McAleer
2014-09-18
Conditional volatility models, random coefficient autoregressive processes, random coefficient complex nonlinear moving average process, asymmetry, leverage

Low Frequency and Weighted Likelihood Solutions for Mixed Frequency Dynamic Factor Models
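A minimal sketch of the GJR recursion discussed in the asymmetry-and-leverage abstract above: a negative shock carries an extra coefficient gamma, which is asymmetry in the sense defined there. Parameter values are illustrative assumptions, not estimates from the paper.

```python
def gjr_variance_path(returns, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85, h0=1.0):
    """GJR recursion: h_t = omega + (alpha + gamma*1[r_{t-1}<0]) * r_{t-1}^2 + beta * h_{t-1}."""
    h = [h0]
    for r in returns[:-1]:
        indicator = 1.0 if r < 0.0 else 0.0  # threshold term for negative shocks
        h.append(omega + (alpha + gamma * indicator) * r * r + beta * h[-1])
    return h

# asymmetry: a negative shock raises next-period conditional variance more
# than a positive shock of equal magnitude (gamma > 0); as the abstract
# stresses, this is not the same thing as leverage
h_after_neg = gjr_variance_path([-1.0, 0.0])[1]
h_after_pos = gjr_variance_path([1.0, 0.0])[1]
print(h_after_neg > h_after_pos)  # → True
```

With these values a unit negative shock gives next-period variance 1.05 versus 0.95 for a unit positive shock, while volatility itself never feeds back negatively into returns, which is the paper's distinction between asymmetry and leverage.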
http://d.repec.org/n?u=RePEc:tin:wpaper:20140105&r=ets
The multivariate analysis of a panel of economic and financial time series with mixed frequencies is a challenging problem. The standard solution is to analyze the mix of monthly and quarterly time series jointly by means of a multivariate dynamic model with a monthly time index: artificial missing values are inserted for the intermediate months of the quarterly time series. In this paper we explore an alternative solution for a class of dynamic factor models that is specified by means of a low frequency quarterly time index. We show that there is no need to introduce artificial missing values while the high frequency (monthly) information is preserved and can still be analyzed. We also provide evidence that the analysis based on a low frequency specification can be carried out in a computationally more efficient way. A comparison study with existing mixed frequency procedures is presented and discussed. Furthermore, we modify the method of maximum likelihood in the context of a dynamic factor model. We introduce variable-specific weights in the likelihood function to let some variable equations be of more importance during the estimation process. We derive the asymptotic properties of the weighted maximum likelihood estimator and show that the estimator is consistent and asymptotically normal. We also verify the weighted estimation method in a Monte Carlo study to investigate the effect of different choices for the weights in different scenarios. Finally, we empirically illustrate the new developments for the extraction of a coincident economic indicator from a small panel of mixed frequency economic time series.
Francisco Blasques, Siem Jan Koopman, Max Mallee
2014-08-11
Asymptotic theory, Forecasting, Kalman filter, Nowcasting, State space

Frontiers in Time Series and Financial Econometrics: An Overview
http://d.repec.org/n?u=RePEc:tin:wpaper:20150026&r=ets
Two of the fastest growing frontiers in econometrics and quantitative finance are time series and financial econometrics. Significant theoretical contributions to financial econometrics have been made by experts in statistics, econometrics, mathematics, and time series analysis. The purpose of this special issue of the journal on "Frontiers in Time Series and Financial Econometrics" is to highlight several areas of research by leading academics in which novel methods have contributed significantly to time series and financial econometrics, including forecasting co-volatilities via factor models with asymmetry and long memory in realized covariance, prediction of Lévy-driven CARMA processes, functional index coefficient models with variable selection, LASSO estimation of threshold autoregressive models, high dimensional stochastic regression with latent factors, endogeneity and nonlinearity, sign-based portmanteau test for ARCH-type models with heavy-tailed innovations, toward optimal model averaging in regression models with time series errors, high dimensional dynamic stochastic copula models, a misspecification test for multiplicative error models of non-negative time series processes, sample quantile analysis for long-memory stochastic volatility models, testing for independence between functional time series, statistical inference for panel dynamic simultaneous equations models, specification tests of calibrated option pricing models, asymptotic inference in multiple-threshold double autoregressive models, a new hyperbolic GARCH model, intraday value-at-risk: an asymmetric autoregressive conditional duration approach, refinements in maximum likelihood inference on spatial autocorrelation in panel data, statistical inference of conditional quantiles in nonlinear time series models, quasi-likelihood estimation of a threshold diffusion process, threshold models in time series analysis - some reflections, and generalized ARMA models with martingale difference errors.
Shiqing Ling, Michael McAleer, Howell Tong
2015-02-20
Time series, financial econometrics, threshold models, conditional volatility, stochastic volatility, copulas, conditional duration

Stationarity and Ergodicity Regions for Score Driven Dynamic Correlation Models
http://d.repec.org/n?u=RePEc:tin:wpaper:20130097&r=ets
We describe stationarity and ergodicity (SE) regions for a recently proposed class of score driven dynamic correlation models. These models have important applications in empirical work. The regions are derived from sufficiency conditions in Bougerol (1993) and take a non-standard form. We show that the non-standard shape of the sufficiency regions cannot be avoided by reparameterizing the model or by rescaling the score steps in the transition equation for the correlation parameter. This makes the result markedly different from the volatility case. Observationally equivalent decompositions of the stochastic recurrence equation yield regions with different sizes and shapes. We illustrate our results with an analysis of time-varying correlations between UK and Greek equity indices. We find that different decompositions can give rise to different conclusions regarding the stability of the estimated model in empirical applications as well.
Francisco Blasques, Andre Lucas, Erkki Silde
2013-07-19
dynamic copulas, generalized autoregressive score (GAS) models, stochastic recurrence equations, observation driven models, contraction properties

Bayesian Forecasting of US Growth using Basic Time Varying Parameter Models and Expectations Data
http://d.repec.org/n?u=RePEc:tin:wpaper:20140119&r=ets
Time varying patterns in US growth are analyzed using various univariate model structures, starting from a naive model structure where all features change every period and moving to a model where the slow variation in the conditional mean and changes in the conditional variance are specified together with their interaction, including survey data on expected growth in order to strengthen the information in the model. Use is made of a simulation-based Bayesian inferential method to determine the forecasting performance of the various model specifications. The extension of a basic growth model with a constant mean to models including time variation in the mean and variance requires careful investigation of possible identification issues of the parameters and existence conditions of the posterior under a diffuse prior. The use of diffuse priors leads to a focus on the likelihood function, and it enables a researcher and policy adviser to evaluate the scientific information contained in model and data. Empirical results indicate that incorporating time variation in mean growth rates as well as in volatility is important in order to improve the predictive performance of growth models. Furthermore, using data information on growth expectations is important for forecasting growth in specific periods, such as the recession periods around 2000 and around 2008.
Nalan Basturk, Pinar Ceyhan, Herman K. van Dijk
2014-09-01
Growth, Time varying parameters, Expectations data

In-Sample Bounds for Time-Varying Parameters of Observation Driven Models
http://d.repec.org/n?u=RePEc:tin:wpaper:20150027&r=ets
We study the performance of two analytical methods and one simulation method for computing in-sample confidence bounds for time-varying parameters. These in-sample bounds are designed to reflect parameter uncertainty in the associated filter. They are applicable to the complete class of observation driven models and are valid for a wide range of estimation procedures. A Monte Carlo study is conducted for time-varying parameter models such as generalized autoregressive conditional heteroskedasticity and autoregressive conditional duration models. Our results show clear differences between the actual coverage provided by our three methods of computing in-sample bounds. Although the analytical methods may be less reliable than the simulation method, their coverage performance is adequate to provide a reasonable impression of the parameter uncertainty that is embedded in the time-varying parameter path. We illustrate our findings in a volatility analysis for monthly Standard & Poor's 500 index returns.
Francisco Blasques, Siem Jan Koopman, Katarzyna Lasak, André Lucas
2015-02-23
autoregressive conditional duration, delta-method, generalized autoregressive conditional heteroskedasticity, score driven models, time-varying mean

Maximum Likelihood Estimation for Generalized Autoregressive Score Models
http://d.repec.org/n?u=RePEc:tin:wpaper:20140029&r=ets
We study the strong consistency and asymptotic normality of the maximum likelihood estimator for a class of time series models driven by the score function of the predictive likelihood. This class of nonlinear dynamic models includes both new and existing observation driven time series models. Examples include models for generalized autoregressive conditional heteroskedasticity, mixed-measurement dynamic factors, serial dependence in heavy-tailed densities, and other time-varying parameter processes. We formulate primitive conditions for global identification, invertibility, strong consistency, and asymptotic normality, both under correct specification and under mis-specification. We provide key illustrations of how the theory can be applied to specific dynamic models.
Francisco Blasques, Siem Jan Koopman, Andre Lucas
2014-03-04
time-varying parameter models, GAS, score driven models, Markov processes estimation, stationarity, invertibility, consistency, asymptotic normality

Time Varying Transition Probabilities for Markov Regime Switching Models
http://d.repec.org/n?u=RePEc:tin:wpaper:20140072&r=ets
We propose a new Markov switching model with time varying probabilities for the transitions. The novelty of our model is that the transition probabilities evolve over time by means of an observation driven model. The innovation of the time varying probability is generated by the score of the predictive likelihood function. We show how the model dynamics can be readily interpreted. We investigate the performance of the model in a Monte Carlo study and show that the model is successful in estimating a range of different dynamic patterns for unobserved regime switching probabilities. We also illustrate the new methodology in an empirical setting by studying the dynamic mean and variance behavior of U.S. Industrial Production growth. We find empirical evidence of changes in the regime switching probabilities, with more persistence for high volatility regimes in the earlier part of the sample, and more persistence for low volatility regimes in the later part of the sample.
Marco Bazzi, Francisco Blasques, Siem Jan Koopman, Andre Lucas
2014-06-17
Hidden Markov Models; observation driven models; generalized autoregressive score dynamics

Optimal Formulations for Nonlinear Autoregressive Processes
http://d.repec.org/n?u=RePEc:tin:wpaper:20140103&r=ets
We develop optimal formulations for nonlinear autoregressive models by representing them as linear autoregressive models with time-varying temporal dependence coefficients. We propose a parameter updating scheme based on the score of the predictive likelihood function at each time point. The resulting time-varying autoregressive model is formulated as a nonlinear autoregressive model and is compared with threshold and smooth-transition autoregressive models. We establish the information theoretic optimality of the score driven nonlinear autoregressive process and the asymptotic theory for maximum likelihood parameter estimation. The performance of our model in extracting the time-varying or the nonlinear dependence for finite samples is studied in a Monte Carlo exercise. In our empirical study we present the in-sample and out-of-sample performances of our model for a weekly time series of unemployment insurance claims.
Francisco Blasques, Siem Jan Koopman, André Lucas
2014-08-11
Asymptotic theory; Dynamic models; Observation driven time series models; Smooth-transition model; Time-varying parameters; Threshold autoregressive model

Vector Autoregressions with Parsimoniously Time Varying Parameters and an Application to Monetary Policy
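The score-driven updating step described in the abstract above can be illustrated for a Gaussian AR(1) with a time-varying coefficient. The update form (intercept, autoregressive term, and a step in the direction of the likelihood score) is the standard score-driven recipe; the function name and parameter values below are assumptions for exposition, not the authors' specification.

```python
def score_driven_ar1(y, omega=0.02, alpha=0.05, beta=0.95, sigma2=1.0, phi0=0.5):
    """Filtered path of a time-varying AR(1) coefficient phi_t,
    updated with the score of the Gaussian predictive likelihood."""
    phi = [phi0]
    for t in range(1, len(y)):
        error = y[t] - phi[-1] * y[t - 1]   # one-step prediction error
        score = error * y[t - 1] / sigma2   # d log-likelihood / d phi
        phi.append(omega + beta * phi[-1] + alpha * score)
    return phi

path = score_driven_ar1([1.0, 0.8, 0.9, 0.7])
print(round(path[1], 6))  # → 0.51
```

Because the score is the prediction error scaled by the regressor, the coefficient drifts toward whatever local dependence the data exhibit, which is how a linear AR with time-varying coefficients can mimic threshold or smooth-transition nonlinearity.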
http://d.repec.org/n?u=RePEc:tin:wpaper:20140145&r=ets
This paper proposes a parsimoniously time varying parameter vector autoregressive model (with exogenous variables, VARX) and studies the properties of the Lasso and adaptive Lasso as estimators of this model. The parameters of the model are assumed to follow parsimonious random walks, where parsimony stems from the assumption that increments to the parameters have a non-zero probability of being exactly equal to zero. By varying the degree of parsimony our model can accommodate constant parameters, an unknown number of structural breaks, or parameters with a high degree of variation. We characterize the finite sample properties of the Lasso by deriving upper bounds on the estimation and prediction errors that are valid with high probability; and asymptotically we show that these bounds tend to zero with probability tending to one if the number of non-zero increments grows slower than √T. By simulation experiments we investigate the properties of the Lasso and the adaptive Lasso in settings where the parameters are stable, experience structural breaks, or follow a parsimonious random walk. We use our model to investigate the monetary policy response to inflation and business cycle fluctuations in the US by estimating a parsimoniously time varying parameter Taylor rule. We document substantial changes in the policy response of the Fed in the 1980s and since 2008.
Laurent Callot, Johannes Tang Kristensen
2014-11-07
Parsimony, time varying parameters, VAR, structural break, Lasso

Forecasting Co-Volatilities via Factor Models with Asymmetry and Long Memory in Realized Covariance
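The role of the l1 penalty on parameter increments in the parsimoniously time-varying setup above can be seen in the soft-thresholding operator that underlies Lasso-type estimators: small increments are set exactly to zero, recovering a piecewise-constant parameter path. The toy numbers below are illustrative; this is a sketch of the mechanism, not the authors' estimator.

```python
def soft_threshold(z, lam):
    """Proximal operator of the l1 penalty: shrink z toward zero by lam,
    mapping anything in [-lam, lam] to exactly zero."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# noisy increments of a parameter path with a single structural break
raw_increments = [0.01, -0.02, 0.80, 0.01, -0.01]
sparse = [soft_threshold(z, 0.05) for z in raw_increments]
# only the third (break) increment remains non-zero
```

Penalizing increments rather than levels is what lets the same estimator span constant parameters (all increments thresholded to zero), a few structural breaks, or a freely varying path, depending on the penalty strength.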
http://d.repec.org/n?u=RePEc:tin:wpaper:20140037&r=ets
Modelling covariance structures is known to suffer from the curse of dimensionality. In order to avoid this problem for forecasting, the authors propose a new factor multivariate stochastic volatility (fMSV) model for realized covariance measures that accommodates asymmetry and long memory. Using the basic structure of the fMSV model, the authors extend the dynamic correlation MSV model, the conditional/stochastic Wishart autoregressive models, the matrix-exponential MSV model, and the Cholesky MSV model. Empirical results for seven US financial asset returns indicate that the new fMSV models outperform existing dynamic conditional correlation models for forecasting future covariances. Among the new fMSV models, the Cholesky MSV model with long memory and asymmetry shows stable and better forecasting performance for one-day, five-day and ten-day horizons in the periods before, during and after the global financial crisis.
Manabu Asai, Michael McAleer
2014-03-17
Dimension reduction; Factor Model; Multivariate Stochastic Volatility; Leverage Effects; Long Memory; Realized Volatility

The Impact of Jumps and Leverage in Forecasting Co-Volatility
http://d.repec.org/n?u=RePEc:tin:wpaper:20150018&r=ets
The paper investigates the impact of jumps in forecasting co-volatility, accommodating leverage effects. We modify the jump-robust two time scale covariance estimator of Boudt and Zhang (2013) such that the estimated matrix is positive definite. Using this approach we can disentangle the estimates of the integrated co-volatility matrix and jump variations from the quadratic covariation matrix. Empirical results for three stocks traded on the New York Stock Exchange indicate that the co-jumps of two assets have a significant impact on future co-volatility, but that the impact is negligible at weekly and monthly forecast horizons.
Manabu Asai, Michael McAleer
2015-02-09
Co-Volatility; Forecasting; Jump; Leverage Effects; Realized Covariance; Threshold

Testing for Parameter Instability in Competing Modeling Frameworks
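The abstract above modifies a covariance estimator so that the estimated matrix is positive definite. One generic device for this (not necessarily the authors' modification) is to clip the eigenvalues of the symmetrized estimate at a small positive floor:

```python
import numpy as np

def make_positive_definite(S, floor=1e-8):
    """Project a (possibly indefinite) symmetric estimate onto the
    positive definite cone by clipping its eigenvalues at a floor."""
    S = 0.5 * (S + S.T)                  # symmetrise first
    vals, vecs = np.linalg.eigh(S)
    vals = np.maximum(vals, floor)
    return vecs @ np.diag(vals) @ vecs.T

# Example: eigenvalues of S_bad are 3 and -1, so it is not positive definite.
S_bad = np.array([[1.0, 2.0], [2.0, 1.0]])
S_fix = make_positive_definite(S_bad)
```

Matrices that are already positive definite pass through unchanged, so the projection only acts when the raw estimator misbehaves.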
http://d.repec.org/n?u=RePEc:tin:wpaper:20140010&r=ets
We develop a new parameter stability test against the alternative of observation driven generalized autoregressive score dynamics. The new test generalizes the ARCH-LM test of Engle (1982) to settings beyond time-varying volatility and exploits any autocorrelation in the likelihood scores under the alternative. We compare the test's performance with that of alternative tests developed for competing time-varying parameter frameworks, such as structural breaks and observation driven parameter dynamics. The new test has higher and more stable power against alternatives with frequent regime switches or with non-local parameter driven time-variation. For parameter driven time-variation close to the null or for infrequent structural changes, the test of Muller and Petalas (2010) performs best overall. We apply all tests empirically to a panel of losses given default over the period 1982--2010 and find significant evidence of parameter variation in the underlying beta distribution.
Francesco Calvori, Drew Creal, Siem Jan Koopman, Andre Lucas
2014-01-14
time-varying parameters; observation driven models; parameter driven models; structural breaks; generalized autoregressive score model; regime switching; credit risk

Joint Bayesian Analysis of Parameters and States in Nonlinear, Non-Gaussian State Space Models
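The Calvori et al. abstract above exploits autocorrelation in the likelihood scores. A bare-bones version of that idea is an LM-type T·R² statistic from regressing the score on its own lags (this is the generic ARCH-LM recipe, not the authors' exact test):

```python
import numpy as np

def score_lm_stat(scores, q=1):
    """LM-type statistic: (T-q) * R^2 from regressing the likelihood
    score on a constant and its first q lags. Under constant parameters
    the scores should be serially uncorrelated, so R^2 should be small."""
    s = np.asarray(scores, dtype=float)
    y = s[q:]
    lags = [s[q - j: len(s) - j] for j in range(1, q + 1)]
    X = np.column_stack([np.ones(len(y))] + lags)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return len(y) * r2

rng = np.random.default_rng(3)
stat_null = score_lm_stat(rng.normal(size=200))        # uncorrelated scores
stat_alt = score_lm_stat(np.cumsum(rng.normal(size=200)))  # persistent scores
```

Under the null the statistic would be compared against a chi-squared distribution with q degrees of freedom; strongly autocorrelated scores produce a much larger value.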
http://d.repec.org/n?u=RePEc:tin:wpaper:20140118&r=ets
We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear non-Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis-Hastings algorithm. A particular feature of our approach is that smoothed estimates of the states and the marginal likelihood are obtained directly as an output of the algorithm. Our method provides a computationally efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic volatility and stochastic intensity models. For our empirical study, we analyse the performance of our method for stock returns and corporate default panel data. (This paper is an updated version of the paper that appeared earlier as Barra, I., Hoogerheide, L.F., Koopman, S.J., and Lucas, A. (2013), "Joint Independent Metropolis-Hastings Methods for Nonlinear Non-Gaussian State Space Models", TI Discussion Paper 13-050/III, Amsterdam: Tinbergen Institute.)
István Barra, Lennart Hoogerheide, Siem Jan Koopman, André Lucas
2014-09-02
Bayesian inference, importance sampling, Monte Carlo estimation, Metropolis-Hastings algorithm, mixture of Student's t-distributions

A New Bootstrap Test for the Validity of a Set of Marginal Models for Multiple Dependent Time Series: An Application to Risk Analysis
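The sampler underlying the Barra et al. abstract above is the independent Metropolis-Hastings step, in which candidates come from a fixed proposal density rather than a random walk. A toy scalar version with a standard normal target and a Student's t proposal (the densities and tuning are illustrative, not the paper's state space construction):

```python
import numpy as np

def independent_mh(log_target, draw_prop, log_prop, n, rng):
    """Independent MH: the acceptance ratio compares importance
    log-weights log_target - log_prop at the current and candidate points."""
    x = draw_prop(rng)
    lw = log_target(x) - log_prop(x)
    out = np.empty(n)
    for i in range(n):
        cand = draw_prop(rng)
        lw_cand = log_target(cand) - log_prop(cand)
        if np.log(rng.uniform()) < lw_cand - lw:  # accept with prob min(1, ratio)
            x, lw = cand, lw_cand
        out[i] = x
    return out

rng = np.random.default_rng(42)
nu = 5.0
log_t = lambda x: -0.5 * (nu + 1.0) * np.log1p(x * x / nu)  # t(nu) log-density up to a constant
draws = independent_mh(lambda x: -0.5 * x * x,   # standard normal target (unnormalised)
                       lambda r: r.standard_t(nu),
                       log_t, 5000, rng)
```

The fat-tailed proposal keeps the importance weights bounded, which is what makes the independence sampler efficient; mixture-of-t proposals, as in the paper's keywords, refine the same idea.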
http://d.repec.org/n?u=RePEc:tin:wpaper:20140028&r=ets
A novel simulation-based methodology is proposed to test the validity of a set of marginal time series models, where the dependence structure between the time series is taken ‘directly’ from the observed data. The procedure is useful when one wants to summarize the test results for several time series in one joint test statistic and p-value. The proposed test method can have higher power than a test for a univariate time series, especially for short time series. Therefore our test for multiple time series is particularly useful if one wants to assess Value-at-Risk (or Expected Shortfall) predictions over a small time frame (e.g., a crisis period). We apply our method to test GARCH model specifications for a large panel data set of stock returns.
David Ardia, Lukasz Gatarek, Lennart F. Hoogerheide
2014-02-28
Bootstrap test, GARCH, marginal models, multiple time series, Value-at-Risk

On the Invertibility of EGARCH(p,q)
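Taking the dependence ‘directly’ from the data, as in the Ardia et al. abstract above, can be sketched by resampling entire cross-sections (rows) of a residual panel, which keeps the contemporaneous dependence between the series intact; the joint statistic `stat_fn` and the panel below are illustrative placeholders, not the paper's test:

```python
import numpy as np

def joint_bootstrap_pvalue(stat_obs, resid, stat_fn, n_boot=500, rng=None):
    """Bootstrap p-value for a joint statistic over several series.
    Whole rows (time points) of the residual panel are resampled, so
    the cross-sectional dependence between series is preserved."""
    rng = rng if rng is not None else np.random.default_rng(0)
    T = resid.shape[0]
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)  # iid resampling of time indices
        boot[b] = stat_fn(resid[idx])
    return np.mean(boot >= stat_obs)

# Toy panel of standardized residuals from 5 hypothetical marginal models.
rng = np.random.default_rng(7)
panel = rng.normal(size=(250, 5))
stat = lambda r: np.abs(r.mean(axis=0)).max() * np.sqrt(len(r))
p = joint_bootstrap_pvalue(stat(panel), panel, stat, rng=rng)
```

Summarizing all series in one statistic before bootstrapping is what yields the single joint p-value the abstract refers to.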
http://d.repec.org/n?u=RePEc:tin:wpaper:20150022&r=ets
Of the two most widely estimated univariate asymmetric conditional volatility models, the exponential GARCH (or EGARCH) specification can capture asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which refers to the negative correlation between the returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator (QMLE) of the EGARCH parameters are not available under general conditions, but only for special cases under highly restrictive and unverifiable conditions, such as EGARCH(1,0) or EGARCH(1,1), and possibly only under simulation. A limitation in the development of asymptotic properties of the QMLE for the EGARCH(p,q) model is the lack of an invertibility condition for the returns shocks underlying the model. It is shown in this paper that the EGARCH(p,q) model can be derived from a stochastic process for which the invertibility conditions can be stated simply and explicitly. This will be useful in re-interpreting the existing properties of the QMLE of the EGARCH(p,q) parameters.
Guillaume Gaetan Martinet, Michael McAleer
2015-02-12
Leverage, asymmetry, existence, stochastic process, asymptotic properties, invertibility

Likelihood Ratio Test for Change in Persistence
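For concreteness, the EGARCH(1,1) recursion whose invertibility the Martinet and McAleer abstract above discusses can be filtered as follows, assuming standard normal innovations for E|z|; the parameter values are purely illustrative:

```python
import numpy as np

def egarch_filter(returns, omega, alpha, gamma, beta):
    """Filter conditional variances from an EGARCH(1,1):
    log s2_t = omega + alpha*(|z| - E|z|) + gamma*z + beta*log s2_{t-1},
    where z is the lagged standardized shock. gamma < 0 generates the
    leverage effect; the log specification keeps variances positive
    without parameter restrictions."""
    e_abs_z = np.sqrt(2.0 / np.pi)        # E|z| for standard normal z
    log_s2 = np.empty(len(returns))
    log_s2[0] = omega / (1.0 - beta)      # start at the unconditional level
    for t in range(1, len(returns)):
        z = returns[t - 1] / np.exp(0.5 * log_s2[t - 1])
        log_s2[t] = (omega + alpha * (np.abs(z) - e_abs_z)
                     + gamma * z + beta * log_s2[t - 1])
    return np.exp(log_s2)

rng = np.random.default_rng(1)
r = 0.01 * rng.normal(size=500)
sig2 = egarch_filter(r, omega=-0.1, alpha=0.1, gamma=-0.05, beta=0.95)
```

Invertibility concerns whether this filtration, run on the observed returns, recovers the true shocks; the recursion itself always produces strictly positive variances.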
http://d.repec.org/n?u=RePEc:rnp:ppaper:skr001&r=ets
In this paper we propose a likelihood ratio test for a change in the persistence of a time series. We consider the null hypothesis of constant I(1) persistence and an alternative in which the series changes from a stationary regime to a unit root regime, or vice versa. Both known and unknown break dates are analyzed. Moreover, we consider a modification of a lag length selection procedure which provides better size control across various data generating processes. In general, our likelihood ratio-based tests show the best finite sample properties among all persistence change tests that use the null hypothesis of a unit root throughout.
Skrobotov, Anton
2015-01-28
change in persistence, likelihood ratio test, unit root test, lag length selection

Time-consistency of risk measures with GARCH volatilities and their estimation
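A stripped-down version of the stationary-to-unit-root alternative in Skrobotov's abstract above, for an unknown break date, is a sup-LR scan over candidate breaks; the trimming and the Gaussian LR form are standard textbook choices, not necessarily the paper's exact implementation:

```python
import numpy as np

def sup_lr_persistence(y, trim=0.15):
    """Sup-LR statistic for a change from a stationary AR(1) regime to a
    unit root. Null: random walk throughout (rho = 1). Alternative at
    break tau: rho estimated freely before tau, unit root after tau."""
    T = len(y)
    dy = np.diff(y)
    ssr0 = np.sum(dy ** 2)               # null: first differences everywhere
    lo, hi = int(trim * T), int((1.0 - trim) * T)
    best = -np.inf
    for tau in range(lo, hi):
        y1, x1 = y[1:tau], y[:tau - 1]
        rho = (x1 @ y1) / (x1 @ x1)      # OLS AR(1) on the first regime
        ssr1 = np.sum((y1 - rho * x1) ** 2) + np.sum(dy[tau - 1:] ** 2)
        best = max(best, (T - 1) * np.log(ssr0 / ssr1))
    return best

# Series that is stationary (rho = 0.5) in the first half, then a random walk.
rng = np.random.default_rng(5)
e = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = (0.5 if t < 150 else 1.0) * y[t - 1] + e[t]
stat = sup_lr_persistence(y)
```

Because the unit root is nested in the first-regime OLS fit, the statistic is non-negative by construction; in practice it would be compared against simulated critical values.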
http://d.repec.org/n?u=RePEc:arx:papers:1504.04774&r=ets
In this paper we study time-consistent risk measures for returns that are given by a GARCH(1,1) model. We present a construction of risk measures based on their static counterparts that overcomes the lack of time-consistency. We then study in detail our construction for the risk measures Value-at-Risk (VaR) and Average Value-at-Risk (AVaR). While in the VaR case we can derive an analytical formula for its time-consistent counterpart, in the AVaR case we derive lower and upper bounds for its time-consistent version. Furthermore, we incorporate techniques from Extreme Value Theory (EVT) to allow for a more tail-geared analysis of the corresponding risk measures. We conclude with an application to stock prices that illustrates the applicability of our results.
Claudia Klüppelberg, Jianing Zhang
2015-04
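The static counterpart that the Klüppelberg and Zhang construction above builds on is the one-step VaR under a GARCH(1,1) volatility recursion. A minimal sketch, assuming normal innovations and illustrative parameter values (the paper's time-consistent measure is built on top of such static quantities, not shown here):

```python
import numpy as np

def garch_var(returns, omega, alpha, beta, level_z=2.3263478740408408):
    """One-step-ahead Value-at-Risk under a GARCH(1,1):
    s2_{t+1} = omega + alpha*r_t^2 + beta*s2_t, VaR = z * sigma_{t+1}.
    level_z defaults to the 99% standard normal quantile (hard-coded
    to avoid a scipy dependency)."""
    s2 = np.var(returns)                 # initialise at the sample variance
    for r in returns:
        s2 = omega + alpha * r ** 2 + beta * s2
    return level_z * np.sqrt(s2)

rng = np.random.default_rng(2)
r = 0.01 * rng.normal(size=100)
var_99 = garch_var(r, omega=1e-6, alpha=0.05, beta=0.9)
```

Iterating this one-step measure forward naively is exactly what fails to be time-consistent; the paper's construction replaces it with a recursively composed counterpart.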