nep-ets New Economics Papers
on Econometric Time Series
Issue of 2013‒06‒16
ten papers chosen by
Yong Yin
SUNY at Buffalo

  1. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression By Peter Exterkate; Patrick J.F. Groenen; Christiaan Heij; Dick van Dijk
  2. Thresholds and Smooth Transitions in Vector Autoregressive Models By Kirstin Hubrich; Timo Teräsvirta
  3. Effective Measure of Endogeneity for the Autoregressive Conditional Duration Point Processes via Mapping to the Self-Excited Hawkes Process By Vladimir Filimonov; Spencer Wheatley; Didier Sornette
  4. Disentangling Continuous Volatility from Jumps in Long-Run Risk-Return Relationships By Éric Jacquier; Cédric Okou
  5. The change-point problem and segmentation of processes with conditional heteroskedasticity By Ana Badagián; Regina Kaiser; Daniel Peña
  6. Forecasting Value-at-Risk using Block Structure Multivariate Stochastic Volatility Models By Manabu Asai; Massimiliano Caporin; Michael McAleer
  7. Tractable latent state filtering for non-linear DSGE models using a second-order approximation By Robert Kollmann
  8. Fourier estimation of stochastic leverage using high frequency data By Imma Valentina Curato
  9. Likelihood-Based Confidence Sets for the Timing of Structural Breaks By Yunjong Eo; James Morley
  10. Bayesian Markov Switching Stochastic Correlation Models By Roberto Casarin; Marco Tronzano; Domenico Sartore

  1. By: Peter Exterkate (Aarhus University and CREATES); Patrick J.F. Groenen (Econometric Institute, Erasmus University Rotterdam); Christiaan Heij (Econometric Institute, Erasmus University Rotterdam); Dick van Dijk (Econometric Institute, Erasmus University Rotterdam)
    Abstract: This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predictive regression model is based on a shrinkage estimator to avoid overfitting. We extend the kernel ridge regression methodology to enable its use for economic time-series forecasting, by including lags of the dependent variable or other individual variables as predictors, as typically desired in macroeconomic and financial applications. Monte Carlo simulations as well as an empirical application to various key measures of real economic activity confirm that kernel ridge regression can produce more accurate forecasts than traditional linear and nonlinear methods for dealing with many predictors, such as those based on principal component regression.
    Keywords: High dimensionality, nonlinear forecasting, ridge regression, kernel methods.
    JEL: C53 C63 E27
    Date: 2013–05–30
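    Kernel ridge regression has a closed-form solution, which makes the general technique easy to sketch. The snippet below is a minimal illustration under simple assumptions (a Gaussian kernel, arbitrary toy data), not the authors' exact specification; all names and parameter values are illustrative. The ridge penalty `lam` supplies the shrinkage that prevents overfitting in the high-dimensional feature space.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    # Pairwise squared Euclidean distances between rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_ridge_fit_predict(X, y, X_new, lam, sigma):
    # Closed-form kernel ridge: alpha = (K + lam*I)^{-1} y, yhat = K_new @ alpha
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return gaussian_kernel(X_new, X, sigma) @ alpha

# Toy data: 10 predictors, target depends nonlinearly on the first one
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
pred = kernel_ridge_fit_predict(X[:150], y[:150], X[150:], lam=0.5, sigma=3.0)
print(pred.shape)  # forecasts for the 50 held-out observations
```

    In the authors' time-series setting, lagged values of the dependent variable would simply be appended as additional columns of X.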
  2. By: Kirstin Hubrich (European Central Bank, Frankfurt am Main); Timo Teräsvirta (Aarhus University, School of Economics and Management and CREATES)
    Keywords: common nonlinearity, impulse response analysis, linearity testing, multivariate nonlinear model, nonlinear cointegration, threshold estimation
    JEL: C32 C51 C52 C53
    Date: 2013–06–06
  3. By: Vladimir Filimonov; Spencer Wheatley; Didier Sornette
    Abstract: In order to disentangle the internal dynamics from exogenous factors within the Autoregressive Conditional Duration (ACD) model, we present an effective measure of endogeneity. Inspired by the Hawkes model, this measure is defined as the average fraction of events within the total population that are triggered by internal feedback mechanisms. We provide a direct comparison of the Hawkes and ACD models based on numerical simulations and show that our effective measure of endogeneity for the ACD can be mapped onto the "branching ratio" of the Hawkes model.
    Date: 2013–06
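    Under the cluster (branching) representation of the Hawkes process, the branching ratio is exactly the expected fraction of triggered (endogenous) events, which suggests a simple simulation check. The sketch below illustrates that identity under stated toy assumptions (Poisson immigrants, exponential offspring delays, arbitrary parameter values); it is not the authors' estimation procedure.

```python
import numpy as np

def simulate_hawkes_clusters(mu=1.0, n=0.5, tau=1.0, T=5000.0, seed=0):
    """Cluster (branching) simulation of a Hawkes process.
    mu: immigrant rate, n: branching ratio, tau: mean offspring delay."""
    rng = np.random.default_rng(seed)
    immigrants = rng.uniform(0.0, T, rng.poisson(mu * T))
    events, frontier = list(immigrants), list(immigrants)
    while frontier:
        t = frontier.pop()
        for _ in range(rng.poisson(n)):  # each event triggers Poisson(n) children
            child = t + rng.exponential(tau)
            if child < T:
                events.append(child)
                frontier.append(child)
    return len(immigrants), len(events)

n_imm, n_tot = simulate_hawkes_clusters(n=0.5)
n_hat = 1.0 - n_imm / n_tot  # effective endogeneity: fraction of triggered events
print(round(n_hat, 2))  # close to the true branching ratio 0.5
```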
  4. By: Éric Jacquier; Cédric Okou
    Abstract: Realized variance can be broken down into continuous volatility and jumps. We show that these two components have very different predictive powers on future long-term excess stock market returns. While continuous volatility is a key driver of medium- to long-term risk-return relationships, jumps do not predict future medium- to long-term excess returns. We use inference methods robust to persistent predictors in a multi-horizon setup. That is, we use a rescaled Student-t statistic to test for significant risk-return links, give asymptotic arguments, and simulate its exact behavior under the null in the case of multiple regressors with different degrees of persistence. Then, with Wald tests of equality of the risk-return relationship at multiple horizons, we find no evidence against a proportional relationship, constant across horizons, between long-term continuous volatility and future returns. Two by-products of our analysis are that imposing model-based constraints on long-term regressions can improve their efficiency, and that short-run estimates are sensitive to short-term variability of the predictors.
    Keywords: predictability, realized variance, continuous volatility, jumps, long-run returns, persistent regressor
    Date: 2013–06–01
  5. By: Ana Badagián; Regina Kaiser; Daniel Peña
    Abstract: In this paper we explore, analyse and apply change-point detection and location procedures to conditionally heteroskedastic processes. We focus on processes with constant conditional mean but dynamic behaviour in the conditional variance, which can also be affected by structural changes. The goal is thus to detect and estimate change-points when the conditional variance of a univariate process is heteroskedastic and exhibits change-points. Based on the fact that a GARCH process can be expressed as an ARMA model in the squares of the variable, we propose to detect and locate change-points using the Bayesian Information Criterion, extending its application in linear models. The proposed procedure is computationally simple, reducing the difficulties of change-point detection in complex nonlinear processes. We compare this procedure with others available in the literature based on cusum methods (Inclán and Tiao (1994), Kokoszka and Leipus (1999), Lee et al. (2004)), the informational approach (Fukuda (2010)), the minimum description length principle (Davis and Rodriguez-Yam (2008)), and the time-varying spectrum (Ombao et al. (2002)). We compute empirical size and power properties in Monte Carlo simulation experiments covering several scenarios, and obtain good size and power in detecting even small magnitudes of change and low levels of persistence. The procedures are applied to the S&P 500 log-return series, so that the results can be compared with those in Andreou and Ghysels (2002) and Davis and Rodriguez-Yam (2008). Change-points detected by the proposed procedure are similar to the breaks found by the other procedures, and their locations can be related to the Southeast Asia financial crisis and to other known financial events.
    Keywords: Heteroskedastic time series, Segmentation, Change-points
    Date: 2013–06
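    The idea of applying an information criterion to the squares of the series can be illustrated with a stylized single-break version: fit an AR(1) to the squared returns on the whole sample and on each candidate split, and accept the split that most lowers the BIC. This is a simplified sketch under toy assumptions (one break, approximate parameter counts in the penalty), not the authors' implementation, which handles multiple breaks.

```python
import numpy as np

def ar1_ssr(x):
    # SSR from an OLS AR(1) fit: x_t = a + b*x_{t-1} + e_t
    Z = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    beta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
    resid = x[1:] - Z @ beta
    return resid @ resid

def bic_changepoint(sq, min_seg=30):
    """BIC comparison on the squared series: one AR(1) vs. a single break.
    Returns the break index if splitting lowers the BIC, else None.
    Penalty parameter counts are simplified for illustration."""
    T = len(sq)
    bic_null = T * np.log(ar1_ssr(sq) / T) + 2 * np.log(T)
    best_k, best_bic = None, bic_null
    for k in range(min_seg, T - min_seg):
        ssr = ar1_ssr(sq[:k]) + ar1_ssr(sq[k:])
        bic = T * np.log(ssr / T) + 5 * np.log(T)  # two AR(1)s + break date
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k

# Toy return series whose variance jumps at t = 300
rng = np.random.default_rng(1)
r = np.concatenate([rng.standard_normal(300), 3.0 * rng.standard_normal(300)])
k = bic_changepoint(r ** 2)
print(k)  # estimated break location, near the true break at 300
```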
  6. By: Manabu Asai (Soka University, Japan); Massimiliano Caporin (University of Padova, Italy); Michael McAleer (Erasmus University Rotterdam, The Netherlands, Complutense University of Madrid, Spain, and Kyoto University, Japan)
    Abstract: Most multivariate variance or volatility models suffer from a common problem, the “curse of dimensionality”. For this reason, most are fitted under strong parametric restrictions that reduce the interpretation and flexibility of the models. Recently, the literature has focused on multivariate models with milder restrictions, which aim to reconcile the interpretability and efficiency needed by model users with the computational problems that emerge when the number of assets is very large. We contribute to this strand of the literature by proposing a block-type parameterization for multivariate stochastic volatility models. The empirical analysis of stock returns on the US market shows that 1% and 5% Value-at-Risk thresholds based on one-step-ahead forecasts of covariances by the new specification are satisfactory for the period including the Global Financial Crisis.
    Keywords: block structures; multivariate stochastic volatility; curse of dimensionality; leverage effects; multi-factors; heavy-tailed distribution
    JEL: C32 C51 C10
    Date: 2013–05–27
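    Whether a set of VaR forecasts is "satisfactory" is conventionally checked with a coverage backtest. The sketch below implements the standard Kupiec (1995) unconditional coverage test; this is a generic assessment tool commonly used in this literature, not necessarily the specific evaluation used in the paper, and the example counts are invented for illustration.

```python
from math import log

def kupiec_lr(violations, T, p):
    """Kupiec (1995) unconditional coverage LR statistic for VaR backtesting.
    violations: number of days the loss exceeded the p-level VaR out of T days.
    Compare with the chi-squared(1) 95% critical value, 3.841."""
    x = violations
    if x == 0:
        return -2.0 * T * log(1.0 - p)
    pi = x / T  # observed violation frequency
    return -2.0 * ((T - x) * log(1.0 - p) + x * log(p)
                   - (T - x) * log(1.0 - pi) - x * log(pi))

# Example: a 1% VaR model with 12 violations over 1000 trading days
lr = kupiec_lr(12, 1000, 0.01)
print(lr < 3.841)  # True: the observed coverage is not rejected
```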
  7. By: Robert Kollmann
    Abstract: This paper develops a novel approach for estimating latent state variables of Dynamic Stochastic General Equilibrium (DSGE) models that are solved using a second-order accurate approximation. I apply the Kalman filter to a state-space representation of the second-order solution based on the ‘pruning’ scheme of Kim, Kim, Schaumburg and Sims (2008). In contrast to particle filters, no stochastic simulations are needed for the filter here; the present method is thus much faster. In Monte Carlo experiments, the filter generates more accurate estimates of latent state variables than the standard particle filter. The present filter is also more accurate than a conventional Kalman filter that treats the linearized model as the true data generating process. Due to its high speed, the filter presented here is suited for the estimation of model parameters; a quasi-maximum likelihood procedure can be used for that purpose.
    Keywords: Simulation modeling ; Forecasting
    Date: 2013
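    The key computational point is that, once the pruned second-order solution is cast in state-space form, latent states can be extracted with the ordinary linear Kalman filter and no simulation. The snippet below is a minimal generic Kalman filter on a toy scalar state-space model, purely to illustrate the filtering recursions; it is not the paper's DSGE application, and all matrices are invented for the example.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Minimal linear Kalman filter for x_t = A x_{t-1} + w_t, y_t = C x_t + v_t."""
    x, P = x0, P0
    xs = []
    for yt in y:
        # Prediction step
        x, P = A @ x, A @ P @ A.T + Q
        # Update step
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (yt - C @ x)
        P = P - K @ C @ P
        xs.append(x)
    return np.array(xs)

# Toy model: AR(1) latent state observed with noise
rng = np.random.default_rng(2)
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
x_true = np.zeros(200)
for t in range(1, 200):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0.0, np.sqrt(0.1))
y = x_true[:, None] + rng.normal(0.0, np.sqrt(0.5), (200, 1))
xs = kalman_filter(y, A, C, Q, R, np.zeros(1), np.eye(1))
corr = np.corrcoef(xs[:, 0], x_true)[0, 1]
print(round(corr, 2))  # filtered states track the true latent state closely
```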
  8. By: Imma Valentina Curato (Dipartimento di Economia e Management, Universita' degli Studi di Pisa)
    Abstract: In this paper, we define a new estimator of the leverage stochastic process based only on a pre-estimation of the Fourier coefficients of the volatility process. This feature is a novelty compared with the leverage estimators proposed in the literature, which are generally based on a pre-estimation of the spot volatility. Our estimator is proved to be consistent and, by virtue of its definition, it can be applied directly to estimate the leverage effect in the presence of irregularly spaced observations of the price path and microstructure noise contamination.
    Keywords: leverage, non-parametric estimation, semi-martingale, Fourier transform, high frequency data.
    Date: 2013–06
  9. By: Yunjong Eo (School of Economics, the University of Sydney); James Morley (School of Economics, the University of New South Wales)
    Abstract: We propose the use of likelihood-based confidence sets for the timing of structural breaks in parameters from time series regression models. The confidence sets are valid for the broad setting of a system of multivariate linear regression equations under fairly general assumptions about the errors and regressors, and allowing for multiple breaks in mean and variance parameters. In our asymptotic analysis, we determine the critical values for a likelihood ratio test of a break date and the expected length of a likelihood-based confidence set constructed by inverting the likelihood ratio test. Notably, the likelihood-based confidence set is considerably shorter than for other methods employed in the literature. Monte Carlo analysis confirms better performance than other methods in terms of length and coverage accuracy in finite samples, including when the magnitude of breaks is small. An application to postwar U.S. real GDP and consumption leads to a much tighter 95% confidence set for the timing of the "Great Moderation" in the mid-1980s than previously found. Furthermore, when taking cointegration between output and consumption into account, confidence sets for structural break dates are even more precise and suggest a sudden "productivity growth slowdown" in the early 1970s and an additional large, abrupt decline in long-run growth in the mid-1990s.
    Keywords: Inverted Likelihood Ratio Confidence Sets, Multiple Breaks, Great Moderation, Productivity Growth Slowdown
    JEL: C22 C32 E20
    Date: 2013–05
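    Inverting a likelihood ratio test for a break date can be sketched in a few lines for the simplest case of a single mean break in a Gaussian series: compute the profile log-likelihood at every candidate date and retain the dates the LR test does not reject. This is an illustration of the general construction, not the paper's procedure; the critical value below is a placeholder, not the asymptotic value the authors derive.

```python
import numpy as np

def lr_break_confidence_set(y, crit=7.7):
    """Confidence set for a single mean-break date by inverting an LR test.
    Dates k with 2*(max log-lik - log-lik at k) <= crit are retained.
    crit is a placeholder critical value for illustration only."""
    T = len(y)
    ll = np.full(T, -np.inf)
    for k in range(5, T - 5):
        ssr = ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()
        ll[k] = -0.5 * T * np.log(ssr / T)  # concentrated Gaussian log-likelihood
    lr = 2.0 * (ll.max() - ll)
    return np.where(lr <= crit)[0]

# Toy series with a mean shift at t = 120
rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(1.5, 1.0, 80)])
cs = lr_break_confidence_set(y)
print(cs.min(), cs.max())  # range of break dates the LR test does not reject
```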
  10. By: Roberto Casarin (Department of Economics, University of Venice Cà Foscari); Marco Tronzano (Department of Economics, University of Genova); Domenico Sartore (Department of Economics, University of Venice Cà Foscari)
    Abstract: This paper builds on Asai and McAleer (2009) and develops a new multivariate Dynamic Conditional Correlation (DCC) model where the parameters of the correlation dynamics and those of the log-volatility process are driven by two latent Markov chains. We outline a suitable Bayesian inference procedure, based on sequential MCMC estimation algorithms, and discuss some preliminary results on simulated data. We then apply the model to three major cross rates against the US Dollar (Euro, Yen, Pound), using high-frequency data since the beginning of the European Monetary Union. Estimated volatility paths reveal significant increases since mid-2007, documenting the destabilizing effects of the US sub-prime crisis and of the European sovereign debt crisis. Moreover, we find strong evidence supporting the existence of a time-varying correlation structure. Correlation paths display frequent shifts along the whole sample, both in low and in high volatility phases, pointing to contagion effects closely in line with the mechanisms outlined in the recent contagion literature (Forbes and Rigobon (2002) and Corsetti et al. (2005)).
    Keywords: Stochastic Correlation; Multivariate Stochastic Volatility; Markov-switching; Bayesian Inference; Markov Chain Monte Carlo.
    JEL: C1 C11 C15 C32 F31 G15
    Date: 2013

This nep-ets issue is ©2013 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.