Econometric Time Series
By: | Agostino Tarsitano (Dipartimento di Economia e Statistica, Università della Calabria) |
Abstract: | Many time series are of short duration because data acquisition has, of necessity, proceeded for only a brief term. Such data have often been analyzed by methods that either do not explicitly take time-related changes into account or that are designed for long time series. In this paper, we consider several ways of assigning a dissimilarity between univariate time series on the basis of their short-term behavior. In particular, we define a measure that works irrespective of differing baselines and scaling factors, and its effectiveness is evaluated on real and synthetic data sets. |
Keywords: | Time trajectories, Distances, PAM clustering, Representative trends |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:clb:wpaper:200905&r=ets |
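The dissimilarity measure of the entry above is not reproduced here. As a rough, hypothetical illustration of the general idea (comparing short trajectories irrespective of baseline and scale, then clustering with PAM), the Python sketch below z-normalises each series before taking a Euclidean distance and runs a naive k-medoids pass on the resulting dissimilarity matrix. The normalisation choice and all names are assumptions, not the author's measure.

```python
import numpy as np

def znorm(x):
    """Remove a series' baseline (mean) and scale (standard deviation)."""
    x = np.asarray(x, dtype=float)
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def dissimilarity(x, y):
    """Euclidean distance between z-normalised series, so additive baselines
    and multiplicative scalings of either series do not matter."""
    return float(np.linalg.norm(znorm(x) - znorm(y)))

def pam(dist, k, max_iter=100, seed=0):
    """Naive PAM (k-medoids) on a precomputed dissimilarity matrix."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(max_iter):
        labels = dist[:, medoids].argmin(axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                costs = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[costs.argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, dist[:, medoids].argmin(axis=1)

# toy example: three short trajectories, two of which share the same shape
series = [np.array([1.0, 2.0, 3.0, 4.0]),
          np.array([10.0, 20.0, 30.0, 40.0]),   # same shape as the first, different baseline/scale
          np.array([4.0, 3.0, 2.0, 1.0])]
n = len(series)
D = np.array([[dissimilarity(series[i], series[j]) for j in range(n)] for i in range(n)])
medoids, labels = pam(D, k=2)
print(np.round(D, 3), medoids, labels)
```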
By: | Drew Creal; Siem Jan Koopman; Andre Lucas |
Abstract: | We propose a new class of observation driven time series models that we refer to as Generalized Autoregressive Score (GAS) models. The driving mechanism of the GAS model is the scaled likelihood score. This provides a unified and consistent framework for introducing time-varying parameters in a wide class of non-linear models. The GAS model encompasses other well-known models such as the generalized autoregressive conditional heteroskedasticity, autoregressive conditional duration, autoregressive conditional intensity and single source of error models. In addition, the GAS specification gives rise to a wide range of new observation driven models. Examples include non-linear regression models with time-varying parameters, observation driven analogues of unobserved components time series models, multivariate point process models with time-varying parameters and pooling restrictions, new models for time-varying copula functions and models for time-varying higher order moments. We study the properties of GAS models and provide several non-trivial examples of their application. |
Keywords: | dynamic models, time-varying parameters, non-linearity, exponential family, marked point processes, copulas |
JEL: | C10 C22 C32 C51 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08-038&r=ets |
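As a minimal sketch of a score-driven update (not the authors' code), the example below applies the GAS(1,1) recursion f_{t+1} = omega + A*s_t + B*f_t to a time-varying Gaussian variance with inverse-Fisher-information scaling, in which case the scaled score reduces to s_t = y_t^2 - f_t and the recursion nests a GARCH(1,1)-type update. The parameter values and data are arbitrary.

```python
import numpy as np

def gas_gaussian_variance(y, omega, A, B, f0):
    """GAS(1,1) filter for a time-varying variance f_t under a Gaussian density,
    using inverse-Fisher-information scaling of the score.  With this scaling
    the scaled score is s_t = y_t**2 - f_t, so the recursion
    f_{t+1} = omega + A*s_t + B*f_t nests a GARCH(1,1)-type update."""
    f = np.empty(len(y) + 1)
    f[0] = f0
    for t, yt in enumerate(y):
        s_t = yt**2 - f[t]                 # scaled score of the Gaussian log-likelihood
        f[t + 1] = omega + A * s_t + B * f[t]
        f[t + 1] = max(f[t + 1], 1e-12)    # keep the filtered variance positive
    return f

# simulate data with a volatility step and filter it (parameter values are arbitrary)
rng = np.random.default_rng(1)
sigma = np.where(np.arange(500) < 250, 1.0, 2.0)
y = rng.normal(scale=sigma)
f = gas_gaussian_variance(y, omega=0.05, A=0.10, B=0.95, f0=1.0)
print(f[:5].round(3), f[-5:].round(3))
```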
By: | Ole E. Barndorff-Nielsen; Peter Reinhard Hansen; Asger Lunde; Neil Shephard |
Abstract: | We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show that this new consistent estimator is guaranteed to be positive semi-definite, is robust to measurement noise of certain types, and can also handle non-synchronous trading. It is the first estimator to have all three of these properties, which are essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results with previous work that used returns measured over 5- or 10-minute intervals. We show that the new estimator is substantially more precise. |
Keywords: | HAC estimator, Long run variance estimator, Market frictions, Quadratic variation, Realised variance |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08-037&r=ets |
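A hedged sketch of the general form of a realised kernel is given below: realised autocovariance matrices of high-frequency returns are weighted by a Parzen kernel, the weighting that delivers positive semi-definiteness. Refresh-time synchronisation, end-point treatment and the data-driven bandwidth of the paper are omitted, and the bandwidth H is picked by hand.

```python
import numpy as np

def parzen(x):
    """Parzen weight function used for realised kernels."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x)**3
    return 0.0

def realised_kernel(returns, H):
    """Realised kernel K = sum_{|h|<=H} k(h/(H+1)) * Gamma_h, where
    Gamma_h = sum_t r_t r_{t-h}' are realised autocovariance matrices of the
    synchronised return vectors.  Bandwidth choice and end-point jittering
    are omitted in this sketch."""
    r = np.asarray(returns)              # shape (n, d): n returns on d assets
    n, d = r.shape
    K = np.zeros((d, d))
    for h in range(-H, H + 1):
        if h >= 0:
            gamma = r[h:].T @ r[:n - h]
        else:
            gamma = r[:n + h].T @ r[-h:]
        K += parzen(h / (H + 1)) * gamma
    return K

# toy example: two correlated assets with i.i.d. microstructure noise on prices
rng = np.random.default_rng(2)
true_cov = np.array([[1.0, 0.5], [0.5, 2.0]]) / 1000
prices = rng.multivariate_normal([0, 0], true_cov, size=1000).cumsum(axis=0)
noisy = prices + rng.normal(0, 0.01, prices.shape)
obs_returns = np.diff(noisy, axis=0)
print(realised_kernel(obs_returns, H=10).round(4))   # compare with [[1.0, 0.5], [0.5, 2.0]]
```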
By: | Peter C. B. Phillips; Jun Yu |
Abstract: | A model of financial asset price determination is proposed that incorporates flat trading features into an efficient price process. The model involves the superposition of a Brownian semimartingale process for the efficient price and a Bernoulli process that determines the extent of flat price trading. The approach is related to sticky price modeling and the Calvo pricing mechanism in macroeconomic dynamics. A limit theory for the conventional realized volatility (RV) measure of integrated volatility is developed. The results show that RV is still consistent but has an inflated asymptotic variance that depends on the probability of flat trading. Estimated quarticity is similarly affected, so that both the feasible central limit theorem and the inferential framework suggested in Barndorff-Nielsen and Shephard (2002) remain valid under flat price trading even though there is information loss due to flat trading effects. The results are related to work by Jacod (1993) and Mykland and Zhang (2006) on realized volatility measures with random and intermittent sampling, and to ACD models for irregularly spaced transactions data. Extensions are given to include models with microstructure noise. Some simulation results are reported. Empirical evaluations with tick-by-tick data indicate that the effect of flat trading on the limit theory under microstructure noise is likely to be minor in most cases, thereby affirming the relevance of existing approaches. |
Keywords: | Bernoulli process, Brownian semimartingale, Calvo pricing, Flat trading, Microstructure noise, Quarticity function, Realized volatility, Stopping times |
JEL: | C15 G12 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08-039&r=ets |
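A minimal simulation sketch of the setting above (assumed names and parameters, not the authors' code): superpose a Bernoulli flat-trading mechanism on an efficient Brownian price and compare the realised volatility of the recorded price with the integrated variance.

```python
import numpy as np

def flat_trade(prices, p_flat, rng):
    """With probability p_flat the recorded price is held flat at its
    previous value (a Bernoulli flat-trading mechanism)."""
    out = prices.copy()
    for t in range(1, len(out)):
        if rng.random() < p_flat:
            out[t] = out[t - 1]
    return out

def realised_volatility(prices):
    """Sum of squared intraday returns."""
    return float(np.sum(np.diff(prices) ** 2))

rng = np.random.default_rng(3)
n = 390                                    # e.g. a one-minute grid over one trading day
dt = 1.0 / n
sigma = 0.2                                # daily volatility, so integrated variance = sigma**2
efficient = np.cumsum(rng.normal(0.0, sigma * np.sqrt(dt), n))
observed = flat_trade(efficient, p_flat=0.3, rng=rng)
print(sigma**2, realised_volatility(efficient), realised_volatility(observed))
```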
By: | Choi, In; Kurozumi, Eiji |
Abstract: | In this paper, Mallows' (1973) Cp criterion, Akaike's (1973) AIC, Hurvich and Tsai's (1989) corrected AIC and the BIC of Akaike (1978) and Schwarz (1978) are derived for the leads-and-lags cointegrating regression. Deriving model selection criteria for the leads-and-lags regression is a nontrivial task since the true model is of infinite dimension. This paper justifies using the conventional formulas of those model selection criteria for the leads-and-lags cointegrating regression. The numbers of leads and lags can be selected in scientific ways using the model selection criteria. Simulation results regarding the bias and mean squared error of the long-run coefficient estimates are reported. It is found that the model selection criteria are successful in reducing bias and mean squared error relative to the conventional, fixed selection rules. Among the model selection criteria, the BIC appears to be most successful in reducing MSE, and Cp in reducing bias. We also observe that, in most cases, the selection rules without the restriction that the numbers of the leads and lags be the same have an advantage over those with it. |
Keywords: | Cointegration, Leads-and-lags regression, AIC, Corrected AIC, BIC, Cp |
Date: | 2008–12 |
URL: | http://d.repec.org/n?u=RePEc:hit:ccesdp:6&r=ets |
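A rough sketch of the procedure the paper studies (standard formulas and grid bounds, not the authors' code): estimate the leads-and-lags (dynamic OLS) cointegrating regression over a grid of lead/lag orders on a common sample and pick the pair that minimises the conventional BIC.

```python
import numpy as np
from itertools import product

def dols_bic(y, x, max_leads=4, max_lags=4):
    """Leads-and-lags (dynamic OLS) cointegrating regression
        y_t = a + b*x_t + sum_{j=-p..q} g_j * dx_{t-j} + u_t,
    with the numbers of leads (p) and lags (q) chosen by the conventional
    BIC formula n*log(SSR/n) + k*log(n)."""
    dx = np.diff(x)
    # common estimation sample so the criteria are comparable across (p, q)
    rows = range(max_lags + 1, len(x) - max_leads)
    best = None
    for p, q in product(range(max_leads + 1), range(max_lags + 1)):
        X = np.array([[1.0, x[t]] + [dx[t - 1 - j] for j in range(-p, q + 1)]
                      for t in rows])
        yy = y[rows.start:rows.stop]
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        ssr = float(np.sum((yy - X @ beta) ** 2))
        n, k = len(yy), X.shape[1]
        bic = n * np.log(ssr / n) + k * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, p, q, beta[1])
    return best  # (BIC, leads, lags, long-run coefficient estimate)

# toy cointegrated pair: x is a random walk and y = 2*x + stationary error
rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(size=300))
y = 2.0 * x + rng.normal(size=300)
print(dols_bic(y, x))
```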
By: | Hadri, Kaddour; Kurozumi, Eiji |
Abstract: | This paper develops a simple test for the null hypothesis of stationarity in heterogeneous panel data with cross-sectional dependence in the form of a common factor in the disturbance. We do not estimate the common factor but mop up its effect by employing the same method as the one proposed by Pesaran (2007) in the unit root testing context. Our test is basically the KPSS test, but with the regression augmented by the cross-sectional average of the observations. We also develop a Lagrange multiplier (LM) test allowing for cross-sectional dependence and, under restrictive assumptions, compare our augmented KPSS test with the extended LM test under the null of stationarity, under the local alternative and under the fixed alternative, and discuss the differences between these two tests. We also extend our test to the more realistic case where the shocks are serially correlated. We use Monte Carlo simulations to examine the finite sample properties of the augmented KPSS test. |
Keywords: | Panel data, stationarity, KPSS test, cross-sectional dependence, LM test, locally best test |
JEL: | C12 C33 |
Date: | 2008–12 |
URL: | http://d.repec.org/n?u=RePEc:hit:ccesdp:7&r=ets |
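An illustrative sketch of the augmentation idea (not the authors' exact statistic, long-run variance correction or critical values): regress each unit on a constant and the cross-sectional average of the observations, then compute a KPSS-type statistic from the residuals and average across units.

```python
import numpy as np

def kpss_stat(resid, lrv):
    """KPSS-type statistic: normalised average of squared partial sums of residuals."""
    T = len(resid)
    S = np.cumsum(resid)
    return float(S @ S) / (T**2 * lrv)

def augmented_kpss_panel(Y):
    """Sketch of the augmented KPSS idea: each unit is regressed on a constant
    and the cross-sectional average (which mops up a common factor, as in
    Pesaran, 2007); a KPSS statistic is computed from the residuals and the
    panel statistic averages over units.  Serial-correlation corrections and
    the critical values derived in the paper are omitted."""
    T, N = Y.shape
    ybar = Y.mean(axis=1)                       # cross-sectional average at each t
    X = np.column_stack([np.ones(T), ybar])     # augmentation by the CS average
    stats = []
    for i in range(N):
        b, *_ = np.linalg.lstsq(X, Y[:, i], rcond=None)
        e = Y[:, i] - X @ b
        stats.append(kpss_stat(e, lrv=float(e @ e) / T))   # naive variance estimate
    return float(np.mean(stats)), np.array(stats)

# toy panel: 10 stationary units loaded on a common AR(1) factor
rng = np.random.default_rng(5)
T, N = 200, 10
factor = np.zeros(T)
for t in range(1, T):
    factor[t] = 0.5 * factor[t - 1] + rng.normal()
Y = np.outer(factor, rng.uniform(0.5, 1.5, size=N)) + rng.normal(size=(T, N))
print(augmented_kpss_panel(Y)[0])
```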
By: | Mohitosh Kejriwal; Pierre Perron |
Abstract: | This paper considers issues related to testing for multiple structural changes in cointegrated systems. We derive the limiting distribution of the Sup-Wald test under mild conditions on the errors and regressors for a variety of testing problems. We show that even if the coefficients of the integrated regressors are held fixed but the intercept is allowed to change, the limit distributions are not the same as would prevail in a stationary framework. Including stationary regressors whose coefficients are not allowed to change does not affect the limiting distribution of the tests under the null hypothesis. We also propose a procedure that allows one to test the null hypothesis of, say, k changes, versus the alternative hypothesis of k + 1 changes. This sequential procedure is useful in that it permits consistent estimation of the number of breaks present. We show via simulations that our tests maintain the correct size in finite samples and are much more powerful than the commonly used LM tests, which suffer from important problems of non-monotonic power in the presence of serial correlation in the errors. |
Keywords: | change-point, sequential procedure, Wald tests, unit roots, cointegration |
JEL: | C22 |
Date: | 2008–11 |
URL: | http://d.repec.org/n?u=RePEc:pur:prukra:1216&r=ets |
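A minimal sketch of a sup-Wald scan for a single intercept break is shown below; it uses homoskedastic standard errors and ignores the integrated-regressor limit theory, multiple breaks and the sequential k versus k+1 procedure developed in the paper.

```python
import numpy as np

def sup_wald_intercept_break(y, X, trim=0.15):
    """Sup-Wald statistic for a single break in the intercept of
    y_t = c + X_t'b + u_t, scanning break dates over a trimmed range.
    Homoskedastic errors are assumed for simplicity."""
    T = len(y)
    Z = np.column_stack([np.ones(T), X])
    best = -np.inf
    for tb in range(int(trim * T), int((1 - trim) * T)):
        d = (np.arange(T) >= tb).astype(float)       # post-break intercept dummy
        Zb = np.column_stack([Z, d])
        b, *_ = np.linalg.lstsq(Zb, y, rcond=None)
        e = y - Zb @ b
        s2 = float(e @ e) / (T - Zb.shape[1])
        cov = s2 * np.linalg.inv(Zb.T @ Zb)
        wald = b[-1] ** 2 / cov[-1, -1]              # Wald test of "no intercept shift"
        best = max(best, wald)
    return best

# toy example: random-walk regressor, intercept shifts halfway through the sample
rng = np.random.default_rng(6)
T = 300
x = np.cumsum(rng.normal(size=T))
y = 1.0 + 0.5 * x + 2.0 * (np.arange(T) >= 150) + rng.normal(size=T)
print(sup_wald_intercept_break(y, x))
```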
By: | Deniz Dilan Karaman Örsal; Bernd Droge |
Abstract: | In this note we establish the existence of the first two moments of the asymptotic trace statistic, which appears as the weak limit of the likelihood ratio statistic for testing the cointegration rank in a vector autoregressive model and whose moments may be used to develop panel cointegration tests. Moreover, we justify the common practice of approximating these moments by simulating a certain statistic which converges weakly to the asymptotic trace statistic. To accomplish this, we show that the moments of the mentioned statistic converge to those of the asymptotic trace statistic as the time dimension tends to infinity. |
Keywords: | Cointegration, Trace statistic, Asymptotic moments, Uniform integrability |
JEL: | C32 C33 C12 |
Date: | 2009–02 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009-012&r=ets |
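A hedged sketch of the simulation practice the note justifies: approximate the asymptotic trace statistic by replacing Brownian motion with scaled random-walk partial sums over T steps and estimate its first two moments from many replications. The discretisation below is the standard one; it is not taken from the note.

```python
import numpy as np

def trace_stat(T, d, rng):
    """Finite-T approximation of the asymptotic trace statistic
    tr{ (int dW W') (int W W' du)^(-1) (int W dW') } for a d-dimensional
    standard Brownian motion W, using random-walk partial sums
    (the 1/T scale factors cancel in the trace)."""
    eps = rng.normal(size=(T, d))
    S = np.vstack([np.zeros(d), np.cumsum(eps, axis=0)[:-1]])  # S_{t-1}, t = 1..T
    A = S.T @ eps                  # approximates T * int W dW'
    B = S.T @ S                    # approximates T^2 * int W W' du
    return float(np.trace(A.T @ np.linalg.solve(B, A)))

# approximate the first two moments by simulation, as in the common practice
rng = np.random.default_rng(7)
T, d, reps = 500, 2, 2000
draws = np.array([trace_stat(T, d, rng) for _ in range(reps)])
print(draws.mean(), draws.var())
```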
By: | Alessandra Amendola; Giuseppe Storti |
Abstract: | This paper proposes a novel approach to the combination of conditional covariance matrix forecasts based on the use of the Generalized Method of Moments (GMM). It is shown how the procedure can be generalized to deal with large dimensional systems by means of a two-step strategy. The finite sample properties of the GMM estimator of the combination weights are investigated by Monte Carlo simulations. Finally, in order to give an appraisal of the economic implications of the combined volatility predictor, the results of an application to tactical asset allocation are presented. |
Keywords: | Multivariate GARCH, Forecast Combination, GMM, Portfolio Optimization |
JEL: | C52 C53 C32 G11 |
Date: | 2009–01 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2009-007&r=ets |
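As a purely illustrative sketch (the paper's moment conditions and two-step strategy may well differ), the example below estimates combination weights for candidate covariance forecasts by linear two-step GMM, matching vech(r_t r_t') against the weighted vech of the forecasts.

```python
import numpy as np

def vech(M):
    """Stack the lower-triangular part of a symmetric matrix into a vector."""
    return M[np.tril_indices(M.shape[0])]

def gmm_combination_weights(returns, forecasts):
    """Two-step linear GMM estimate of combination weights w for candidate
    conditional covariance forecasts H_{1t},...,H_{kt}, based on the moment
    conditions E[ vech(r_t r_t') - sum_i w_i vech(H_{it}) ] = 0."""
    T = len(returns)
    y = np.array([vech(np.outer(r, r)) for r in returns])        # (T, m)
    X = np.array([[vech(H) for H in Hs] for Hs in forecasts])    # (T, k, m)
    X = X.transpose(0, 2, 1)                                     # (T, m, k)
    ybar, Xbar = y.mean(axis=0), X.mean(axis=0)
    # step 1: identity weighting matrix
    w1 = np.linalg.lstsq(Xbar, ybar, rcond=None)[0]
    # step 2: efficient weighting with the estimated covariance of the moments
    g = y - np.einsum('tmk,k->tm', X, w1)
    Sinv = np.linalg.pinv(g.T @ g / T)
    w2 = np.linalg.solve(Xbar.T @ Sinv @ Xbar, Xbar.T @ Sinv @ ybar)
    return w2

# toy example: combine a downscaled and an upscaled-plus-bias forecast of a 2x2 covariance
rng = np.random.default_rng(8)
T = 1000
true_H = np.array([[1.0, 0.3], [0.3, 1.5]])
returns = rng.multivariate_normal([0, 0], true_H, size=T)
forecasts = [[0.7 * true_H, 1.4 * true_H + 0.1 * np.eye(2)] for _ in range(T)]
print(gmm_combination_weights(returns, forecasts))
```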
By: | Roy Cerqueti (University of Macerata); Paolo Falbo (University of Brescia); Cristian Pelizzari (University of Brescia) |
Abstract: | While the larger portion of the literature on Markov chain (possibly of order higher than one) bootstrap methods has focused on the correct estimation of the transition probabilities, little or no attention has been devoted to the problem of estimating the dimension of the transition probability matrix. Indeed, it is usual to assume that the Markov chain has a one-step memory property and that the state space is not clustered and coincides with the distinct observed values. In this paper we question the appropriateness of such a standard approach. In particular, we advance a method to jointly estimate the order of the Markov chain and identify a suitable clustering of the states. Indeed, in several real-life applications the "memory" of many processes extends well beyond the last observation; in those cases a correct representation of past trajectories requires a significantly richer set than the state space. On the contrary, it can sometimes happen that some distinct values do not correspond to really "different" states of a process; this is a common conclusion whenever, for example, a process assuming two distinct values in t is not affected in its distribution in t+1. Such a situation would suggest reducing the dimension of the transition probability matrix. Our methods are based on solving two optimization problems. More specifically, we consider two competing objectives that a researcher will in general pursue when dealing with bootstrapping: preserving the similarity between the observed and the bootstrap series and reducing the probability of getting a perfect replication of the original sample. A brief axiomatic discussion is developed to define the desirable properties of such optimal criteria. Two numerical examples are presented to illustrate the method. |
Keywords: | order of Markov chains, similarity of time series, transition probability matrices, multiplicity of time series, partition of states of Markov chains, Markov chains, bootstrap methods |
JEL: | O1 O11 |
Date: | 2009–04 |
URL: | http://d.repec.org/n?u=RePEc:mcr:wpdief:wpaper35&r=ets |
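A rough sketch of the building blocks only (the joint, criterion-based choice of the order and of the state clustering, which is the paper's contribution, is not reproduced): given an order and a clustering of the observed values into states, estimate empirical transition frequencies and bootstrap a new series.

```python
import numpy as np
from collections import defaultdict

def markov_bootstrap(series, states, order, length, rng):
    """Bootstrap a series from an order-`order` Markov chain whose state space
    is a user-supplied clustering `states` (a mapping from observed values to
    cluster labels).  Transition probabilities are empirical frequencies;
    values are resampled uniformly within the sampled cluster."""
    labels = [states[v] for v in series]
    by_state = defaultdict(list)
    for v, s in zip(series, labels):
        by_state[s].append(v)
    trans = defaultdict(list)                      # history tuple -> observed next labels
    for t in range(order, len(labels)):
        trans[tuple(labels[t - order:t])].append(labels[t])
    out = list(series[:order])
    hist = list(labels[:order])
    for _ in range(length - order):
        pool = trans.get(tuple(hist[-order:]))
        nxt = rng.choice(pool) if pool else rng.choice(labels)
        out.append(rng.choice(by_state[nxt]))      # draw a value from the sampled cluster
        hist.append(nxt)
    return out

# toy example: the values 0.9 and 1.1 behave as one clustered state, 5.0 as another
rng = np.random.default_rng(9)
raw = [rng.choice([0.9, 1.1]) if rng.random() < 0.8 else 5.0 for _ in range(200)]
clustering = {0.9: "low", 1.1: "low", 5.0: "high"}
print(markov_bootstrap(raw, clustering, order=1, length=20, rng=rng)[:10])
```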