NEP: New Economics Papers on Econometric Time Series
By: | Francesca DI IORIO (Università di Napoli "Federico II"); Stefano FACHIN (Università La Sapienza, Roma); Riccardo LUCCHETTI (Università Politecnica delle Marche, Dipartimento di Scienze Economiche e Sociali) |
Abstract: | We review the I(2) model from the perspective of its application to near-I(2) data, and report the results of some Monte Carlo simulations on the small-sample performance of asymptotic tests on the long-run coefficients in both I(2) and near-I(2) systems. Our findings suggest that these tests suffer from some finite-sample issues, such as size bias. However, the behaviour of these statistics is not markedly different in the I(2) and near-I(2) cases at ordinary sample sizes, so using the I(2) model with near-I(2) data is perfectly defensible in finite samples. |
Keywords: | Cointegration, Hypothesis testing, I(2), near-I(2) |
JEL: | C12 C32 C52 |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:anc:wpaper:395&r=ets |
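For readers who want to see the distinction concretely, here is a minimal simulation sketch (not taken from the paper) contrasting an I(2) process, i.e. doubly cumulated white noise, with a near-I(2) process in which the second unit root is replaced by an autoregressive root close to one; the value of rho is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
eps = rng.standard_normal(T)

# I(2): white noise cumulated twice (two exact unit roots)
x_i2 = np.cumsum(np.cumsum(eps))

# near-I(2): the first difference follows an AR(1) with a root close to one
rho = 0.95                      # illustrative near-unit root
dx = np.zeros(T)
for t in range(1, T):
    dx[t] = rho * dx[t - 1] + eps[t]
x_near_i2 = np.cumsum(dx)
```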
By: | Boswijk, H. P.; Zu, Y. |
Abstract: | The paper generalises recent unit root tests for nonstationary volatility to a multivariate context. Persistent changes in the innovation variance matrix lead to size distortions in conventional cointegration tests, but also offer the possibility of increased power when the time-varying volatilities and correlations are taken into account. The testing procedures are based on a likelihood analysis of the vector autoregressive model with a conditional covariance matrix that may be estimated nonparametrically. We find that, under suitable conditions, adaptation with respect to the volatility matrix process is possible, in the sense that nonparametric volatility estimation does not lead to a loss of asymptotic local power. |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:cty:dpaper:13/08&r=ets |
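As a rough illustration of the kind of nonparametric volatility-matrix estimate such adaptive procedures build on, here is a generic Gaussian-kernel smoother of outer products of VAR residuals; this is a sketch of the general idea, not the authors' estimator, and the bandwidth is an illustrative choice.

```python
import numpy as np

def kernel_cov_path(u, bandwidth):
    """Kernel estimate of a time-varying covariance matrix from (T, k) residuals u."""
    T, k = u.shape
    outer = np.einsum('ti,tj->tij', u, u)            # u_t u_t' for each t
    times = np.arange(T) / T
    sigma = np.empty((T, k, k))
    for t in range(T):
        w = np.exp(-0.5 * ((times - times[t]) / bandwidth) ** 2)
        w /= w.sum()
        sigma[t] = np.tensordot(w, outer, axes=1)    # weighted average of outer products
    return sigma

# usage with placeholder residuals (in practice: residuals of a fitted VAR)
u = np.random.default_rng(1).standard_normal((300, 2))
sigma_hat = kernel_cov_path(u, bandwidth=0.1)
```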
By: | Joshua C C Chan; Cody Y L Hsiao |
Abstract: | Financial time series often exhibit properties that depart from the usual assumptions of serial independence and normality. These include volatility clustering, heavy-tailedness and serial dependence. A voluminous literature on different approaches for modeling these empirical regularities has emerged in the last decade. In this paper we review the estimation of a variety of highly flexible stochastic volatility models, and introduce some efficient algorithms based on recent advances in state space simulation techniques. These estimation methods are illustrated via empirical examples involving precious metal and foreign exchange returns. The corresponding Matlab code is also provided. |
Keywords: | stochastic volatility, scale mixture of normal, state space model, Markov chain Monte Carlo, financial data |
JEL: | C11 C22 C58 |
Date: | 2013–11 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2013-74&r=ets |
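As a reference point for the models reviewed there, here is a minimal simulation sketch (ours, not the authors' Matlab code) of the basic stochastic volatility model in which the log-variance follows a stationary Gaussian AR(1); the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1000
mu, phi, sigma_eta = -1.0, 0.97, 0.15       # illustrative SV parameters

h = np.empty(T)                             # latent log-variance
h[0] = mu + sigma_eta / np.sqrt(1 - phi ** 2) * rng.standard_normal()
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()

y = np.exp(h / 2) * rng.standard_normal(T)  # returns with stochastic volatility
```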
By: | Leucht, Anne; Neumann, Michael H.; Kreiss, Jens-Peter |
Abstract: | We provide a consistent specification test for GARCH(1,1) models based on a test statistic of Cramér-von Mises type. Since the limit distribution of the test statistic under the null hypothesis depends on unknown quantities in a complicated manner, we propose a model-based (semiparametric) bootstrap method to approximate the critical values of the test and verify its asymptotic validity. Finally, we illustrate the finite-sample behaviour of the test by means of simulations. |
Keywords: | Bootstrap, Cramér-von Mises test, GARCH processes, V-statistic |
JEL: | C12 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:mnh:wpaper:35107&r=ets |
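To fix ideas, the two ingredients of such a test are a GARCH(1,1) variance filter and a Cramér-von Mises-type distance between the distribution of the standardized residuals and a reference distribution. The sketch below illustrates those ingredients only; it is not the authors' test statistic, and the parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def garch11_filter(y, omega, alpha, beta):
    """Conditional variances implied by GARCH(1,1) parameters."""
    sigma2 = np.empty(len(y))
    sigma2[0] = y.var()
    for t in range(1, len(y)):
        sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def cramer_von_mises(z):
    """Cramér-von Mises distance between the empirical cdf of z and the N(0,1) cdf."""
    u = norm.cdf(np.sort(z))
    n = len(u)
    return 1.0 / (12 * n) + np.sum((u - (2 * np.arange(1, n + 1) - 1) / (2 * n)) ** 2)

# usage with placeholder data and illustrative parameter values
rng = np.random.default_rng(3)
y = rng.standard_normal(500)
stat = cramer_von_mises(y / np.sqrt(garch11_filter(y, 0.05, 0.05, 0.9)))
```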
By: | D.S. Poskitt; Simone D. Grose; Gael M. Martin |
Abstract: | This paper investigates the accuracy of bootstrap-based inference in the case of long memory fractionally integrated processes. The resampling method is based on the semi-parametric sieve approach, whereby the dynamics in the process used to produce the bootstrap draws are captured by an autoregressive approximation. Application of the sieve method to data pre-filtered by a semi-parametric estimate of the long memory parameter is also explored. Higher-order improvements yielded by both forms of resampling are demonstrated using Edgeworth expansions for a broad class of statistics that includes first- and second-order moments, the discrete Fourier transform and regression coefficients. The methods are then applied to the problem of estimating the sampling distributions of the sample mean and of selected sample autocorrelation coefficients, in experimental settings. In the case of the sample mean, the pre-filtered version of the bootstrap is shown to avoid the marked underestimation of the sampling variance of the mean that the raw sieve method exhibits in finite samples, notwithstanding the higher-order accuracy of the latter. Pre-filtering also produces gains in the accuracy with which the sampling distributions of the sample autocorrelations are reproduced, most notably in the part of the parameter space in which asymptotic normality does not obtain. Most importantly, the sieve bootstrap is shown to reproduce the (empirically infeasible) Edgeworth expansion of the sampling distribution of the autocorrelation coefficients in the part of the parameter space in which the expansion is valid. |
Keywords: | Long memory, ARFIMA, sieve bootstrap, bootstrap-based inference, Edgeworth expansion, sampling distribution. |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2013-25&r=ets |
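A minimal sketch of the raw sieve bootstrap idea applied to the sample mean: fit an AR(p) approximation by OLS, resample its residuals, and rebuild bootstrap series from the AR recursion. The pre-filtering step and the ARFIMA machinery discussed in the paper are omitted, and the burn-in length and AR order are illustrative choices.

```python
import numpy as np

def sieve_bootstrap_mean(y, p, n_boot=999, burn=100, seed=0):
    """Bootstrap distribution of the sample mean via an AR(p) sieve."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    yc = y - y.mean()
    T = len(y)
    # fit the AR(p) sieve by OLS on the demeaned series
    X = np.column_stack([yc[p - k - 1:T - k - 1] for k in range(p)])
    phi = np.linalg.lstsq(X, yc[p:], rcond=None)[0]
    resid = yc[p:] - X @ phi
    resid -= resid.mean()
    means = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=T + burn)          # i.i.d. residual resampling
        yb = np.zeros(T + burn)
        for t in range(p, T + burn):
            yb[t] = yb[t - p:t][::-1] @ phi + e[t]    # AR(p) recursion
        means[b] = y.mean() + yb[burn:].mean()
    return means
```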
By: | Zhu, Ke; Yu, Philip L.H.; Li, Wai Keung |
Abstract: | This paper investigates a quasi-likelihood ratio (LR) test for the thresholds in buffered autoregressive processes. Under the null hypothesis of no threshold, the LR test statistic converges to a function of a centered Gaussian process. Under local alternatives, the LR test has nontrivial asymptotic power. Furthermore, a bootstrap method is proposed to obtain the critical value for the LR test. Simulation studies and a real example are given to assess the performance of the test. The proof technique in this paper is non-standard and can be applied to other non-linear time series models. |
Keywords: | AR(p) model; Bootstrap method; Buffered AR(p) model; Likelihood ratio test; Marked empirical process; Threshold AR(p) model. |
JEL: | C1 C12 |
Date: | 2013–11–25 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:51706&r=ets |
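A buffered (hysteretic) AR process switches to the lower regime only when the threshold variable falls to or below a lower boundary and to the upper regime only when it reaches an upper boundary, retaining the previous regime inside the buffer zone in between. A minimal simulation sketch of a buffered AR(1), with illustrative boundaries and coefficients:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 500
rL, rU = -1.0, 1.0                     # illustrative buffer boundaries
phi_lower, phi_upper = 0.3, -0.4       # AR(1) coefficients in the two regimes

y = np.zeros(T)
regime = 0                             # 0 = lower regime, 1 = upper regime
for t in range(1, T):
    z = y[t - 1]                       # threshold variable: the lagged level
    if z <= rL:
        regime = 0
    elif z >= rU:
        regime = 1
    # inside the buffer zone (rL, rU) the previous regime is retained
    phi = phi_lower if regime == 0 else phi_upper
    y[t] = phi * y[t - 1] + rng.standard_normal()
```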
By: | Fildes, Robert; Petropoulos, Fotios |
Abstract: | A major problem for many organisational forecasters is choosing the appropriate forecasting method for a large number of data series. Model selection aims to identify the best forecasting method for each individual series within the data set. Various selection rules have been proposed in order to enhance forecasting accuracy. In theory, model selection is appealing, as no single extrapolation method is better than all others for all series in an organisational data set. However, empirical results have demonstrated the limited effectiveness of these often complex rules. The current study explores the circumstances under which model selection is beneficial. Three measures are examined for characterising the data series: predictability (in terms of the relative performance of the random walk, but also of a well-performing method, theta), trend and seasonality. In addition, attributes of the data set and of the methods also affect selection performance, including the size of the pool of methods under consideration, the stability of the methods' performance and the correlation between methods. To assess the efficacy of model selection in the cases considered, simple selection rules are proposed, based on within-sample best fit or best forecasting performance over different forecast horizons. Individual (per series) selection is contrasted with the simpler approach (aggregate selection), where one method is applied to all data series; a simple combination of methods also provides an operational benchmark. The analysis shows that individual selection works best when specific sub-populations of data are considered (trended or seasonal series), but also when the methods' relative performance is stable over time or no method is dominant across the data series. |
Keywords: | automatic model selection, comparative methods, extrapolative methods, combination, stability |
JEL: | C13 C22 |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:51772&r=ets |
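A stripped-down sketch of the individual-versus-aggregate selection contrast, using in-sample one-step mean squared error as the selection rule and two naive candidate methods (the random walk and the in-sample mean); the pools in the paper are richer (e.g. including theta), so this is purely illustrative.

```python
import numpy as np

def naive_forecast(y, h):              # random-walk (naive) forecast
    return np.repeat(y[-1], h)

def mean_forecast(y, h):               # overall-mean forecast
    return np.repeat(y.mean(), h)

METHODS = {"naive": naive_forecast, "mean": mean_forecast}

def in_sample_mse(y, method):
    """Mean squared one-step-ahead in-sample error of a method."""
    errs = [y[t] - METHODS[method](y[:t], 1)[0] for t in range(2, len(y))]
    return np.mean(np.square(errs))

def select_forecasts(series_list, h, aggregate=False):
    """Individual (per-series) or aggregate (one method for all series) selection."""
    scores = {m: [in_sample_mse(y, m) for y in series_list] for m in METHODS}
    if aggregate:
        best = min(METHODS, key=lambda m: np.mean(scores[m]))
        return [METHODS[best](y, h) for y in series_list]
    picks = [min(METHODS, key=lambda m: scores[m][i]) for i in range(len(series_list))]
    return [METHODS[p](y, h) for p, y in zip(picks, series_list)]
```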
By: | Francq, Christian; Sucarrat, Genaro |
Abstract: | Estimation of log-GARCH models via the ARMA representation is attractive because it enables the use of a vast body of already established results from the ARMA literature. We propose an exponential Chi-squared QMLE for log-GARCH models via the ARMA representation. The advantage of the estimator is that it corresponds to the theoretically and empirically important case where the conditional error of the log-GARCH model is normal. We prove the consistency and asymptotic normality of the estimator, and show that, asymptotically, it is as efficient as the standard QMLE in the log-GARCH(1,1) case. We also verify and study our results in finite samples by means of Monte Carlo simulations. An empirical application illustrates the versatility and usefulness of the estimator. |
Keywords: | Log-GARCH, EGARCH, Quasi Maximum Likelihood, Exponential Chi-Squared, ARMA |
JEL: | C13 C22 C58 |
Date: | 2013–10–24 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:51783&r=ets |
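The attraction of the ARMA representation is that a log-GARCH(1,1), log σ²_t = ω + α log y²_{t-1} + β log σ²_{t-1}, implies that x_t = log y²_t follows an ARMA(1,1) with AR coefficient α + β and MA coefficient -β. The sketch below simulates such a process and recovers α and β from a simple two-step (Hannan-Rissanen-style) OLS fit of that ARMA; it is not the exponential Chi-squared QMLE of the paper, and all numerical choices are illustrative.

```python
import numpy as np

# simulate a log-GARCH(1,1) with illustrative parameter values
rng = np.random.default_rng(11)
T = 5000
omega, alpha, beta = 0.1, 0.05, 0.9
logsig2 = np.full(T, omega / (1 - alpha - beta))
y = np.empty(T)
y[0] = np.exp(logsig2[0] / 2) * rng.standard_normal()
for t in range(1, T):
    logsig2[t] = omega + alpha * np.log(y[t - 1] ** 2) + beta * logsig2[t - 1]
    y[t] = np.exp(logsig2[t] / 2) * rng.standard_normal()

# ARMA(1,1) representation of x_t = log y_t^2:
#   x_t = c + (alpha + beta) * x_{t-1} + u_t - beta * u_{t-1}
x = np.log(y ** 2)
p = 20                                              # long-AR order for step 1
X1 = np.column_stack([np.ones(T - p)] + [x[p - k - 1:T - k - 1] for k in range(p)])
b1 = np.linalg.lstsq(X1, x[p:], rcond=None)[0]
u_hat = x[p:] - X1 @ b1                             # proxy for the ARMA innovations
X2 = np.column_stack([np.ones(T - p - 1), x[p:-1], u_hat[:-1]])
b2 = np.linalg.lstsq(X2, x[p + 1:], rcond=None)[0]  # [const, AR, MA] estimates
alpha_hat = b2[1] + b2[2]                           # alpha = AR + MA coefficient
beta_hat = -b2[2]                                   # beta  = minus the MA coefficient
```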