nep-ets New Economics Papers
on Econometric Time Series
Issue of 2012‒03‒14
eight papers chosen by
Yong Yin
SUNY at Buffalo

  1. Modelling conditional correlations of asset returns: A smooth transition approach By Annastiina Silvennoinen; Timo Teräsvirta
  2. Modelling Changes in the Unconditional Variance of Long Stock Return Series By Cristina Amado; Timo Teräsvirta
  3. On the Oracle Property of the Adaptive Lasso in Stationary and Nonstationary Autoregressions By Anders Bredahl Kock
  4. Forecasting Value-at-Risk Using Block Structure Multivariate Stochastic Volatility Models By Manabu Asai; Massimiliano Caporin; Michael McAleer
  5. Forecasting Mixed Frequency Time Series with ECM-MIDAS Models By Götz Thomas; Hecq Alain; Urbain Jean-Pierre
  6. Bayesian Estimation of DSGE Models By Pablo A Guerron-Quintana; James M Nason
  7. Median-based seasonal adjustment in the presence of seasonal volatility By Cayton, Peter Julian; Bersales, Lisa Grace
  8. Testing for predictability in a noninvertible ARMA model By Lanne, Markku; Meitz, Mika; Saikkonen, Pentti

  1. By: Annastiina Silvennoinen (School of Economics and Finance); Timo Teräsvirta (Aarhus University, School of Economics and Management and CREATES)
    Abstract: In this paper we propose a new multivariate GARCH model with time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM-test is derived to test the constancy of correlations and LM- and Wald tests to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five frequently traded stocks in the S&P 500 stock index completes the paper.
    Keywords: GARCH, Constant conditional correlation, Dynamic conditional correlation, Return comovement, Variable correlation GARCH model, Volatility model evaluation
    JEL: C12 C32 C51 C52 G1
    Date: 2012–02–27
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-09&r=ets
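    [Editor's sketch] The smooth-transition mechanism the abstract describes — correlations moving between two extreme constant states as a logistic function of a transition variable — can be illustrated in a few lines. The matrices R1, R2 and the slope/location parameters gamma, c below are invented for illustration, not the paper's estimates:

```python
import numpy as np

def stcc_correlation(R1, R2, s, gamma=5.0, c=0.0):
    """Blend two constant correlation states as a logistic function of
    the transition variable s (illustrative parameterization)."""
    G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))  # transition weight in (0, 1)
    # a convex combination of correlation matrices is again a valid
    # correlation matrix (unit diagonal, positive semi-definite)
    return (1.0 - G) * R1 + G * R2

R1 = np.array([[1.0, 0.2], [0.2, 1.0]])  # low-correlation state
R2 = np.array([[1.0, 0.8], [0.8, 1.0]])  # high-correlation state
lo = stcc_correlation(R1, R2, s=-3.0)    # far below c: close to R1
hi = stcc_correlation(R1, R2, s=3.0)     # far above c: close to R2
```

At the transition midpoint s = c the blend is an equal mix of the two states; in the paper the transition variable is predetermined or exogenous rather than a fixed scalar.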
  2. By: Cristina Amado (University of Minho and NIPE); Timo Teräsvirta (Aarhus University, School of Economics and Management and CREATES)
    Abstract: In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long return series. For this purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta (2011). The latter component is modelled by incorporating smooth changes so that the unconditional variance is allowed to evolve slowly over time. Statistical inference is used for specifying the parameterization of the time-varying component by applying a sequence of Lagrange multiplier tests. The model building procedure is illustrated with an application to daily returns of the Dow Jones Industrial Average stock index covering a period of more than ninety years. The main conclusions are as follows. First, the LM tests strongly reject the assumption of constancy of the unconditional variance. Second, the results show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecast accuracy of the new model over the GJR-GARCH model at all horizons for a subset of the long return series.
    Keywords: Model specification; Conditional heteroskedasticity; Lagrange multiplier test; Time-varying unconditional variance; Long financial time series; Volatility persistence.
    JEL: C12 C22 C51 C52 C53
    Date: 2012–02–28
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-07&r=ets
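    [Editor's sketch] A toy version of the multiplicative decomposition described above — conditional variance h_t times a slowly moving unconditional component g_t — can be simulated as a GARCH(1,1) scaled by a single logistic transition in rescaled time. All parameter values are illustrative, not estimates from the Dow Jones application:

```python
import numpy as np

def g_component(t, T, delta=1.0, gamma=10.0, c=0.5):
    """Slowly moving unconditional-variance component: one logistic
    transition in rescaled time t/T (illustrative parameters)."""
    return 1.0 + delta / (1.0 + np.exp(-gamma * (t / T - c)))

def simulate_mult_garch(T=1000, omega=0.05, alpha=0.05, beta=0.9, seed=0):
    """Simulate returns whose variance is h_t * g_t: a GARCH(1,1)
    conditional component times the deterministic slow component."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(T)
    h = np.empty(T)
    eps = np.empty(T)                     # g-standardized returns
    h[0] = omega / (1 - alpha - beta)     # start at unconditional level
    for t in range(T):
        if t > 0:
            h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
        eps[t] = np.sqrt(h[t]) * z[t]
    # observed returns rescale by the slowly evolving component
    r = eps * np.sqrt(g_component(np.arange(T), T))
    return r, h

r, h = simulate_mult_garch()
```

Here g roughly doubles the variance over the sample, which is the kind of slow shift that, if ignored, can masquerade as long memory in volatility.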
  3. By: Anders Bredahl Kock (Aarhus University and CREATES)
    Abstract: We show that the Adaptive LASSO is oracle efficient in stationary and non-stationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency as if only these had been included in the model from the outset. In particular, this implies that it is able to discriminate between stationary and non-stationary autoregressions, and it thereby constitutes an addition to the set of unit root tests. However, it is also shown that the Adaptive LASSO has no power against shrinking alternatives of the form c/T, where c is a constant and T the sample size, if it is tuned to perform consistent model selection. We show that if the Adaptive LASSO is tuned to perform conservative model selection it has power even against shrinking alternatives of this form. Monte Carlo experiments reveal that the Adaptive LASSO performs particularly well in the presence of a unit root while being on par with its competitors in the stationary setting.
    Keywords: Adaptive LASSO, Oracle efficiency, Consistent model selection, Conservative model selection, autoregression, shrinkage.
    JEL: C13 C22
    Date: 2012–02–02
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-05&r=ets
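    [Editor's sketch] The adaptive LASSO recipe for an autoregression — first-stage OLS to form penalty weights w_j = 1/|b_ols_j|^gamma, then a weighted l1 problem solved by coordinate descent — can be sketched as below. The tuning values lam and gamma are hand-picked for illustration; the paper's asymptotic analysis, not this recipe, is what justifies the oracle claims:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator for the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso_ar(y, p=3, lam=0.1, gamma=1.0, n_iter=200):
    """Adaptive LASSO for an AR(p) by coordinate descent."""
    T = len(y)
    # lag matrix: column j holds y lagged j+1 periods
    X = np.column_stack([y[p - j - 1:T - j - 1] for j in range(p)])
    Y = y[p:]
    b_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
    w = 1.0 / (np.abs(b_ols) ** gamma + 1e-12)   # adaptive weights
    n = len(Y)
    b = b_ols.copy()
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = Y - X @ b + X[:, j] * b[j]       # partial residual
            b[j] = soft(X[:, j] @ r / n, lam * w[j]) / col_ss[j]
    return b

# AR(1) data: only the first lag should survive the penalty
rng = np.random.default_rng(1)
e = rng.standard_normal(2000)
y = np.empty(2000)
y[0] = e[0]
for t in range(1, 2000):
    y[t] = 0.6 * y[t - 1] + e[t]
b = adaptive_lasso_ar(y, p=3)
```

Because the irrelevant lags get large weights (their OLS estimates are near zero), the soft-threshold sets them exactly to zero — the sparsity-pattern selection the abstract refers to.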
  4. By: Manabu Asai (Soka University / Faculty of Economics); Massimiliano Caporin (Department of Economics and Management “Marco Fanno” University of Padova, Italy.); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute, The Netherlands, Department of Quantitative Economics, Complutense University of Madrid, and Institute of Economic Research, Kyoto University)
    Abstract: Most multivariate variance or volatility models suffer from a common problem, the “curse of dimensionality”. For this reason, most are fitted under strong parametric restrictions that reduce the interpretation and flexibility of the models. Recently, the literature has focused on multivariate models with milder restrictions, whose purpose was to combine the need for interpretability and efficiency faced by model users with the computational problems that may emerge when the number of assets is quite large. We contribute to this strand of the literature by proposing a block-type parameterization for multivariate stochastic volatility models. The empirical analysis of stock returns on the US market shows that 1% and 5% Value-at-Risk thresholds based on one-step-ahead forecasts of covariances by the new specification are satisfactory for a period that includes the global financial crisis.
    Keywords: block structures; multivariate stochastic volatility; curse of dimensionality; leverage effects; multi-factors; heavy-tailed distribution.
    JEL: C32 C51 C10
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:1203&r=ets
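    [Editor's sketch] The VaR construction being evaluated can be summarized compactly: given a one-step-ahead covariance forecast Sigma and portfolio weights w, the Gaussian alpha-level VaR is z_{1-alpha} * sqrt(w' Sigma w). The sketch assumes zero-mean returns and a normal quantile; the paper's block-structure models produce the Sigma forecast, which is simply taken as given here:

```python
import numpy as np
from statistics import NormalDist

def portfolio_var(weights, cov, alpha=0.01):
    """Gaussian one-step portfolio Value-at-Risk from a covariance
    forecast (zero-mean returns assumed; reported as a positive loss)."""
    w = np.asarray(weights, dtype=float)
    sigma_p = np.sqrt(w @ np.asarray(cov, dtype=float) @ w)  # portfolio vol
    z = NormalDist().inv_cdf(1 - alpha)  # e.g. about 2.326 at alpha = 0.01
    return z * sigma_p

# two uncorrelated unit-variance assets, equal weights
v = portfolio_var([0.5, 0.5], np.eye(2), alpha=0.01)
```

Backtesting then checks whether actual losses exceed this threshold about alpha of the time, which is the sense in which the paper calls the thresholds "satisfactory".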
  5. By: Götz Thomas; Hecq Alain; Urbain Jean-Pierre (METEOR)
    Abstract: This paper proposes a mixed-frequency error-correction model in order to develop a regression approach for non-stationary variables sampled at different frequencies that are possibly cointegrated. We show that, at the model representation level, the choice of the timing between the low-frequency dependent and the high-frequency explanatory variables to be included in the long run has an impact on the remaining dynamics and on the forecasting properties. Then, we compare in a set of Monte Carlo experiments the forecasting performances of the low-frequency aggregated model and several mixed-frequency regressions. In particular, we look at both the unrestricted mixed-frequency model and at a more parsimonious MIDAS regression. Whilst the existing literature has only investigated the potential improvements of the MIDAS framework for stationary time series, our study emphasizes the need to include the relevant cointegrating vectors in the non-stationary case. Furthermore, it is illustrated that the exact timing of the long-run relationship does not matter as long as the short-run dynamics are adapted according to the composition of the disequilibrium error. Finally, the unrestricted model is shown to suffer from parameter proliferation for small sample sizes whereas MIDAS forecasts are robust to over-parameterization. Hence, the data-driven, low-dimensional and flexible weighting structure makes MIDAS a robust and parsimonious method to follow when the true underlying DGP is unknown while still exploiting information present in the high frequency. An empirical application illustrates the theoretical and the Monte Carlo results.
    Keywords: econometrics;
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2012012&r=ets
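    [Editor's sketch] The "data-driven, low-dimensional and flexible weighting structure" credited for MIDAS's parsimony is commonly an exponential Almon lag polynomial: many high-frequency lags enter the regression through just two shape parameters. The theta values below are illustrative:

```python
import numpy as np

def exp_almon_weights(K, theta1, theta2):
    """Exponential Almon lag weights used in MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k^2), normalized to sum to one."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k * k)
    return w / w.sum()

# smoothly declining weights over 12 high-frequency lags
w = exp_almon_weights(12, theta1=0.1, theta2=-0.05)
```

Estimating (theta1, theta2) instead of 12 free lag coefficients is what shields MIDAS from the parameter proliferation the unrestricted mixed-frequency model suffers from.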
  6. By: Pablo A Guerron-Quintana; James M Nason
    Abstract: We survey Bayesian methods for estimating dynamic stochastic general equilibrium (DSGE) models in this article. We focus on New Keynesian (NK) DSGE models because of the interest shown in this class of models by economists in academic and policy-making institutions. This interest stems from the ability of this class of DSGE model to transmit real, nominal, and fiscal and monetary policy shocks into endogenous fluctuations at business cycle frequencies. Intuition about these propagation mechanisms is developed by reviewing the structure of a canonical NKDSGE model. Estimation and evaluation of the NKDSGE model rests on being able to detrend its optimality and equilibrium conditions, to construct a linear approximation of the model, to solve for its linear approximate decision rules, and to map from this solution into a state space model to generate Kalman filter projections. The likelihood of the linear approximate NKDSGE model is based on these projections. The projections and likelihood are useful inputs into the Metropolis-Hastings Markov chain Monte Carlo simulator that we employ to produce Bayesian estimates of the NKDSGE model. We discuss an algorithm that implements this simulator. This algorithm involves choosing priors of the NKDSGE model parameters and fixing initial conditions to start the simulator. The output of the simulator is posterior estimates of two NKDSGE models, which are summarized and compared to results in the existing literature. Given the posterior distributions, the NKDSGE models are evaluated with tools that determine which is most favored by the data. We also give a short history of DSGE model estimation and point to issues that are at the frontier of this research.
    JEL: C32 E10 E32
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:acb:camaaa:2012-10&r=ets
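    [Editor's sketch] The Metropolis-Hastings simulator at the heart of the algorithm the survey describes reduces, in its random-walk form, to a few lines. The toy target below is a standard normal log-density, standing in for the far more expensive Kalman-filter likelihood (times prior) of a linearized NKDSGE model:

```python
import numpy as np

def rw_metropolis(logpost, x0, n_draws=20000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings on a one-parameter posterior:
    propose x + step*N(0,1), accept with probability min(1, ratio)."""
    rng = np.random.default_rng(seed)
    draws = np.empty(n_draws)
    x, lp = x0, logpost(x0)
    for i in range(n_draws):
        prop = x + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # acceptance step
            x, lp = prop, lp_prop
        draws[i] = x                              # keep current state
    return draws

# standard-normal "posterior": log density up to an additive constant
draws = rw_metropolis(lambda x: -0.5 * x * x, x0=0.0)
```

After discarding a burn-in, the retained draws approximate the posterior; in the DSGE setting x would be the full parameter vector and logpost would call the filter.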
  7. By: Cayton, Peter Julian; Bersales, Lisa Grace
    Abstract: Philippine seasonal time series data tend to have unstable seasonal behavior, called seasonal volatility. Current Philippine seasonal adjustment methods use X-11-ARIMA, which has been shown to perform poorly in the presence of seasonal volatility. A modification of the Census X-11 method for seasonal adjustment is devised by changing the moving average filters into median-based filtering procedures using Tukey repeated median smoothing techniques. To study the performance of the new procedure, simulation experiments and an application to real Philippine time series data were conducted and compared to Census X-11-ARIMA methods. The seasonal adjustment results are evaluated based on their revision history, smoothness, and accuracy in estimating the non-seasonal component. The results of the research open up the possibility of using robust nonlinear filtering methods as an alternative for seasonal adjustment when moving average filters tend to fail under unfavorable conditions of the time series data.
    Keywords: Tukey Median Smoothing; Unstable Seasonality; Seasonal Filtering; Census X-11-ARIMA; Robust Filtering
    JEL: C14 C82 C22 C49
    Date: 2012–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:37146&r=ets
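    [Editor's sketch] The core move — replacing moving-average filters with Tukey-style repeated median smoothing — can be illustrated with the simplest member of that family: a running median of 3 applied until the series stops changing. The actual X-11 modification uses seasonal-period filters; this shows only the basic smoother and its robustness to a spike:

```python
import numpy as np

def median3(x):
    """One pass of a running median of 3 (endpoints kept as-is)."""
    y = x.copy()
    for i in range(1, len(x) - 1):
        y[i] = np.median(x[i - 1:i + 2])
    return y

def repeated_median_smooth(x, max_iter=50):
    """Tukey-style repeated median smoothing: iterate the running
    median until the series is unchanged (a fixed point)."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        y = median3(x)
        if np.array_equal(y, x):
            break
        x = y
    return x

# the outlier 8 is removed rather than smeared, unlike a moving average
s = repeated_median_smooth([1, 5, 2, 8, 3])
```

That outlier-resistance is exactly the property exploited when seasonal factors are volatile and moving-average filters break down.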
  8. By: Lanne, Markku; Meitz, Mika; Saikkonen, Pentti
    Abstract: We develop likelihood-based tests for autocorrelation and predictability in a first-order non-Gaussian and noninvertible ARMA model. Tests based on a special case of the general model, referred to as an all-pass model, are also obtained. Data generated by an all-pass process are uncorrelated but, in the non-Gaussian case, dependent and nonlinearly predictable. Therefore, in addition to autocorrelation, the proposed tests can also be used to test for nonlinear predictability. This makes our tests different from their previous counterparts based on conventional invertible ARMA models. Unlike in the invertible case, our tests can also be derived by standard methods that lead to chi-squared or standard normal limiting distributions. A further convenience of the noninvertible ARMA model is that, to some extent, it can allow for conditional heteroskedasticity in the data, which is useful when testing for predictability in economic and financial data. This is also illustrated by our empirical application to U.S. stock returns, where our tests indicate the presence of nonlinear predictability.
    Keywords: Non-Gaussian time series; noninvertible ARMA model; all-pass process; predictability of asset returns
    JEL: C53 G12 C22
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:37151&r=ets
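    [Editor's sketch] An all-pass process — the special case the tests build on — is easy to simulate and exhibits the key property directly: a flat spectrum (hence no autocorrelation) despite serial dependence when the shocks are non-Gaussian. Below is a first-order example with an illustrative phi; it is a demonstration of the process class, not the paper's test statistic:

```python
import numpy as np

def allpass1(e, phi=0.5):
    """Causal all-pass(1) filter (1 - phi L) x_t = (L - phi) e_t.
    Numerator and denominator have reciprocal roots, so the spectrum
    is flat: x_t is white noise yet a deterministic function of e."""
    x = np.empty_like(e)
    x[0] = -phi * e[0]
    for t in range(1, len(e)):
        x[t] = phi * x[t - 1] - phi * e[t] + e[t - 1]
    return x

rng = np.random.default_rng(0)
e = rng.standard_t(df=5, size=50000)  # heavy-tailed, non-Gaussian shocks
x = allpass1(e)
# lag-1 sample autocorrelation is near zero despite the dependence
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
```

With Gaussian shocks an uncorrelated process is independent, so tests of this kind have nothing to detect; non-Gaussianity is what makes the nonlinear predictability testable.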

This nep-ets issue is ©2012 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.