nep-ets New Economics Papers
on Econometric Time Series
Issue of 2007‒03‒17
twelve papers chosen by
Yong Yin
SUNY at Buffalo

  1. Accurate Value-at-Risk Forecast with the (good old) Normal-GARCH Model By Christoph Hartz; Stefan Mittnik; Marc S. Paolella
  2. Asymptotics for Stationary Very Nearly Unit Root Processes By Donald W.K. Andrews; Patrik Guggenberger
  3. Estimating Long-Run Relationships between Observed Integrated Variables by Unobserved Component Methods By G. EVERAERT
  4. A GARCH-based method for clustering of financial time series: International stock markets evidence By Caiado, Jorge; Crato, Nuno
  5. An interpolated periodogram-based metric for comparison of time series with unequal lengths By Caiado, Jorge; Crato, Nuno; Peña, Daniel
  6. Nonstationary increments, scaling distributions, and variable diffusion processes in financial markets By Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.
  7. Fokker-Planck and Chapman-Kolmogorov equations for Ito processes with finite memory By McCauley, Joseph L.
  8. Hurst exponents, Markov processes, and nonlinear diffusion equations By Bassler, Kevin E.; Gunaratne, Gemunu H.; McCauley, Joseph L.
  9. Hurst exponents, Markov processes, and fractional Brownian motion By McCauley, Joseph L.; Gunaratne, Gemunu H.; Bassler, Kevin E.
  10. Martingales, Detrending Data, and the Efficient Market Hypothesis By McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.
  11. Efficient Estimation of the Parameter Path in Unstable Time Series Models By Mueller, Ulrich; Petalas, Philippe-Emmanuel
  12. Estimation Bias and Inference in Overlapping Autoregressions: Implications for the Target Zone Literature By Zsolt Darvas

  1. By: Christoph Hartz (University of Munich); Stefan Mittnik (University of Munich, Center for Financial Studies and ifo); Marc S. Paolella (University of Zurich)
    Abstract: A resampling method based on the bootstrap and a bias-correction step is developed for improving the Value-at-Risk (VaR) forecasting ability of the normal-GARCH model. Compared to the use of more sophisticated GARCH models, the new method is fast, easy to implement, numerically reliable, and, except for having to choose a window length L for the bias-correction step, fully data-driven. The results for several different financial asset returns over a long out-of-sample forecasting period, as well as for simulated data, strongly support use of the new method, and the performance is not sensitive to the choice of L.
    Keywords: Bootstrap, GARCH, Value-at-Risk
    JEL: C22 C53 C63 G12
    Date: 2006–11–03
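As a rough illustration of the resampling idea (not the authors' full procedure, which adds a bias-correction step with window length L), the sketch below simulates a normal-GARCH(1,1) series with made-up parameters and bootstraps the standardized residuals to form a one-step-ahead VaR forecast:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate returns from a GARCH(1,1) with normal innovations.
# Parameters (omega, alpha, beta) are illustrative, not estimated from data.
omega, alpha, beta = 0.05, 0.10, 0.85
n = 2000
sigma2 = np.empty(n)
r = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)    # unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# One-step-ahead conditional variance, assuming the true parameters are known
# (in the paper they are estimated by quasi-maximum likelihood).
sigma2_next = omega + alpha * r[-1] ** 2 + beta * sigma2[-1]

# Resampling step: bootstrap the standardized residuals instead of assuming
# normality, then read off the VaR forecast as a quantile of the simulated
# one-step-ahead return distribution.
z = r / np.sqrt(sigma2)               # standardized residuals
B = 10000
r_sim = np.sqrt(sigma2_next) * rng.choice(z, size=B, replace=True)
var_1pct = np.quantile(r_sim, 0.01)   # 1% Value-at-Risk forecast
print(round(var_1pct, 3))
```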
  2. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA)
    Abstract: This paper considers a mean zero stationary first-order autoregressive (AR) model. It is shown that the least squares estimator and t statistic have Cauchy and standard normal asymptotic distributions, respectively, when the AR parameter rho_n is very near to one in the sense that 1 - rho_n = o(n^{-1}).
    Keywords: Asymptotics, Least squares, Nearly nonstationary, Stationary initial condition, Unit root
    JEL: C22
    Date: 2007–03
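A minimal simulation of the setting, assuming an illustrative choice rho_n = 1 - n^{-2} (which satisfies 1 - rho_n = o(n^{-1})) and a stationary initial condition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stationary AR(1) with a root very near unity: 1 - rho_n = o(n^{-1}).
n = 1000
rho = 1 - 1 / n**2
y = np.empty(n)
# Stationary initial condition: y_0 ~ N(0, 1/(1 - rho^2)).
y[0] = rng.standard_normal() / np.sqrt(1 - rho**2)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.standard_normal()

# Least squares estimator of rho (mean-zero model, no intercept).
rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
print(round(rho_hat, 4))
```

Under the stationary initial condition the level of the series dominates, so the estimator clusters tightly near one; the paper's result concerns the (Cauchy) limit distribution of its centered, scaled version.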
  3. By: G. EVERAERT
    Abstract: A regression including integrated variables yields spurious results if the residuals contain a unit root. Although the obtained estimates are unreliable, this does not automatically imply that there is no long-run relation between the included variables as the unit root in the residuals may be induced by omitted or unobserved integrated variables. This paper uses an unobserved component model to estimate the partial long-run relation between observed integrated variables. This provides an alternative to standard cointegration analysis. The proposed methodology is described using a Monte Carlo simulation and applied to investigate purchasing-power parity.
    Keywords: Spurious Regression, Cointegration, Unobserved Component Model, PPP.
    JEL: C15 C32
    Date: 2007–01
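The spurious-regression problem that motivates the paper can be reproduced in a few lines (this illustrates the problem only, not the authors' unobserved component estimator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Spurious regression: regress one random walk on another, independent one.
# Because the residuals contain a unit root, the usual t statistic on the
# slope is unreliable even though the true relation is zero.
n = 500
x = np.cumsum(rng.standard_normal(n))
y = np.cumsum(rng.standard_normal(n))

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
se_slope = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_slope = beta[1] / se_slope
print(round(t_slope, 2))   # typically far outside the usual critical values
```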
  4. By: Caiado, Jorge; Crato, Nuno
    Abstract: In this paper, we introduce a volatility-based method for clustering analysis of financial time series. Using generalized autoregressive conditional heteroskedasticity (GARCH) models, we estimate the distances between the stock return volatilities. The proposed method uses the volatility behavior of the time series and solves the problem of different lengths. As an illustrative example, we investigate the similarities among major international stock markets using daily return series with different sample sizes from 1966 to 2006. From the cluster analysis, most European markets, the United States and Canada appear close together, while most Asian/Pacific markets and the South/Middle American markets appear in a distinct cluster. After the terrorist attack of September 11, 2001, the European stock markets have become more homogeneous, and the North American markets, Japan and Australia seem to have come closer.
    Keywords: Cluster analysis; GARCH; International stock markets; Volatility.
    JEL: C32 G15
    Date: 2007
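A stripped-down sketch of volatility-based clustering, assuming GARCH(1,1) coefficients (alpha, beta) have already been estimated for each market; the coefficient values and the plain Euclidean metric are illustrative stand-ins for the paper's distance:

```python
import numpy as np

# Hypothetical fitted GARCH(1,1) coefficients (alpha, beta) per market.
# In practice each pair would come from that market's daily returns.
markets = ["US", "Canada", "Japan", "Germany"]
params = np.array([
    [0.08, 0.90],   # US
    [0.09, 0.89],   # Canada
    [0.15, 0.80],   # Japan
    [0.06, 0.92],   # Germany
])

# Pairwise Euclidean distances between the coefficient vectors.
diff = params[:, None, :] - params[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# Nearest neighbour of each market (ignoring itself).
np.fill_diagonal(dist, np.inf)
for i, m in enumerate(markets):
    print(m, "->", markets[int(np.argmin(dist[i]))])
```

A hierarchical clustering routine applied to this distance matrix would then reproduce the kind of market groupings the abstract describes.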
  5. By: Caiado, Jorge; Crato, Nuno; Peña, Daniel
    Abstract: We propose a periodogram-based metric for classification and clustering of time series with different sample sizes. In such cases, the Euclidean distance between the periodogram ordinates cannot be used directly. One possible way to deal with this problem is to linearly interpolate one of the periodograms in order to estimate ordinates at the same frequencies.
    Keywords: Classification; Cluster analysis; Interpolation; Periodogram; Time series.
    JEL: C32
    Date: 2006
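A simplified reading of the metric (the paper may normalize or transform the periodograms differently): linearly interpolate the shorter series' periodogram onto the Fourier frequencies of the longer one, then take the Euclidean distance:

```python
import numpy as np

def periodogram(x):
    # Periodogram ordinates at Fourier frequencies 2*pi*j/n, j = 1..n//2.
    x = np.asarray(x, dtype=float)
    n = len(x)
    f = np.fft.rfft(x - x.mean())
    return (np.abs(f[1 : n // 2 + 1]) ** 2) / n

def interpolated_distance(x, y):
    # Euclidean distance between periodograms of series of unequal lengths:
    # linearly interpolate the shorter onto the grid of the longer.
    px, py = periodogram(x), periodogram(y)
    if len(px) < len(py):
        px, py = py, px          # px is now the longer periodogram
    grid_long = np.linspace(0, 1, len(px))
    grid_short = np.linspace(0, 1, len(py))
    py_interp = np.interp(grid_long, grid_short, py)
    return np.sqrt(np.sum((px - py_interp) ** 2))

rng = np.random.default_rng(3)
a = rng.standard_normal(300)
b = rng.standard_normal(200)   # different length
print(round(interpolated_distance(a, b), 3))
```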
  6. By: Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.
    Abstract: Arguably the most important problem in quantitative finance is to understand the nature of stochastic processes that underlie market dynamics. One aspect of the solution to this problem involves determining characteristics of the distribution of fluctuations in returns. Empirical studies conducted over the last decade have reported that they are non-Gaussian, scale in time, and have power-law (or fat) tails [1–5]. However, because they use sliding interval methods of analysis, these studies implicitly assume that the underlying process has stationary increments. We explicitly show that this assumption is not valid for the Euro-Dollar exchange rate from 1999 to 2004. In addition, we find that fluctuations in returns of the exchange rate are uncorrelated and scale as power laws for certain time intervals during each day. This behavior is consistent with a diffusive process with a diffusion coefficient that depends both on the time and the price change. Within scaling regions, we find that sliding interval methods can generate fat-tailed distributions as an artifact, and that the type of scaling reported in many previous studies does not exist.
    Keywords: Nonstationary increments; autocorrelations; scaling; Hurst exponents; Markov process
    JEL: C16 G0 C1
    Date: 2006–09–30
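The distinction between stationary and nonstationary increments can be made concrete with a toy diffusion whose volatility depends on time of day: increments starting at different times then have different variances, which is exactly the structure that sliding-interval analysis averages away:

```python
import numpy as np

rng = np.random.default_rng(4)

# A diffusive process whose diffusion coefficient depends on time:
# the volatility jumps at step 50, so increments taken at different
# starting times t have different variances (nonstationary increments).
n_paths, n_steps = 20000, 100
sigma = np.where(np.arange(n_steps) < 50, 1.0, 3.0)
dx = sigma * rng.standard_normal((n_paths, n_steps))
x = np.cumsum(dx, axis=1)

# Variance of a 10-step increment starting at t=10 versus t=60.
inc_early = x[:, 20] - x[:, 10]
inc_late = x[:, 70] - x[:, 60]
print(round(inc_late.var() / inc_early.var(), 1))
```

Pooling both kinds of increments into one histogram, as a sliding-interval method does, mixes two Gaussians of different widths and produces apparent fat tails even though every increment here is Gaussian.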
  7. By: McCauley, Joseph L.
    Abstract: The usual derivation of the Fokker-Planck partial differential eqn. (pde) assumes the Chapman-Kolmogorov equation for a Markov process [1,2]. Starting instead with an Ito stochastic differential equation (sde), we argue that finitely many states of memory are allowed in Kolmogorov’s two pdes, K1 (the backward time pde) and K2 (the Fokker-Planck pde), and show that a Chapman-Kolmogorov eqn. follows as well. We adapt Friedman’s derivation [3] to emphasize that finite memory is not excluded. We then give an example of a Gaussian transition density with 1-state memory satisfying K1, K2, and the Chapman-Kolmogorov eqns. We begin the paper by explaining the meaning of backward time diffusion, and end by using our interpretation to produce a very short proof that the Green function for the Black-Scholes pde describes a martingale in the risk neutral discounted stock price.
    Keywords: Stochastic process; martingale; Ito process; stochastic differential eqn.; memory; non-Markov process; backward time diffusion; Fokker-Planck; Kolmogorov’s partial differential eqns.; Chapman-Kolmogorov eqn.; Black-Scholes eqn.
    JEL: C69 G0
    Date: 2007–02–22
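For reference, the standard forms of the two Kolmogorov pdes and the Chapman-Kolmogorov equation for an Ito process dx = R(x,t)dt + sqrt(D(x,t))dB(t) are as follows (these are the textbook Markovian statements, not the paper's memory-dependent generalization):

```latex
% Ito process: dx = R(x,t)\,dt + \sqrt{D(x,t)}\,dB(t), with transition
% density g = g(x,t;y,s), t > s.
\begin{align}
  \text{K2 (Fokker--Planck, forward):} \quad
    &\frac{\partial g}{\partial t}
     = -\frac{\partial}{\partial x}\bigl[R(x,t)\,g\bigr]
       + \tfrac{1}{2}\frac{\partial^2}{\partial x^2}\bigl[D(x,t)\,g\bigr] \\
  \text{K1 (backward):} \quad
    &-\frac{\partial g}{\partial s}
     = R(y,s)\frac{\partial g}{\partial y}
       + \tfrac{1}{2}D(y,s)\frac{\partial^2 g}{\partial y^2} \\
  \text{Chapman--Kolmogorov:} \quad
    &g(x,t;x_0,t_0) = \int g(x,t;y,s)\,g(y,s;x_0,t_0)\,dy
\end{align}
```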
  8. By: Bassler, Kevin E.; Gunaratne, Gemunu H.; McCauley, Joseph L.
    Abstract: We show by explicit closed form calculations that a Hurst exponent H ≠ 1/2 does not necessarily imply long time correlations like those found in fractional Brownian motion. We construct a large set of scaling solutions of Fokker-Planck partial differential equations where H ≠ 1/2. Thus Markov processes, which by construction have no long time correlations, can have H ≠ 1/2. If a Markov process scales with Hurst exponent H ≠ 1/2, then it simply means that the process has nonstationary increments. For the scaling solutions, we show how to reduce the calculation of the probability density to a single integration once the diffusion coefficient D(x,t) is specified. As an example, we generate a class of Student-t-like densities from the class of quadratic diffusion coefficients. Notably, the Tsallis density is one member of that large class. The Tsallis density is usually thought to result from a nonlinear diffusion equation, but instead we explicitly show that it follows from a Markov process generated by a linear Fokker-Planck equation, and therefore from a corresponding Langevin equation. Having a Tsallis density with H ≠ 1/2 therefore does not imply dynamics with correlated signals, e.g., like those of fractional Brownian motion. A short review of the requirements for fractional Brownian motion is given for clarity, and we explain why the usual simple argument that H ≠ 1/2 implies correlations fails for Markov processes with scaling solutions. Finally, we discuss the question of scaling of the full Green function g(x,t;x0,t0) of the Fokker-Planck pde.
    Keywords: Hurst exponent; Markov process; scaling; stochastic calculus; autocorrelations; fractional Brownian motion; Tsallis model; nonlinear diffusion
    JEL: G1 G10 G14
    Date: 2005–12–01
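The scaling solutions discussed here take the standard self-similar form (a textbook statement of the ansatz, not the paper's full construction):

```latex
% Scaling ansatz for the one-point density of a process with Hurst exponent H:
\begin{align}
  f(x,t) &= t^{-H} F(u), \qquad u = x/t^{H}, \\
  \langle x^2(t)\rangle &= t^{2H}\int u^2 F(u)\,du \;\propto\; t^{2H},
\end{align}
% which a Markov process can realize with a diffusion coefficient of the
% scaling form D(x,t) = t^{2H-1}\,\bar{D}(x/t^{H}).
```

The point of the abstract is that this variance scaling alone fixes H but says nothing about correlations: a Markov process can produce it for any H.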
  9. By: McCauley, Joseph L.; Gunaratne, Gemunu H.; Bassler, Kevin E.
    Abstract: There is much confusion in the literature over Hurst exponents. Recently, we took a step in the direction of eliminating some of the confusion. One purpose of this paper is to illustrate the difference between fBm on the one hand and Gaussian Markov processes where H ≠ 1/2 on the other. The difference lies in the increments, which are stationary and correlated in one case and nonstationary and uncorrelated in the other. The two- and one-point densities of fBm are constructed explicitly. The two-point density doesn’t scale. The one-point density for a semi-infinite time interval is identical to that for a scaling Gaussian Markov process with H ≠ 1/2 over a finite time interval. We conclude that both Hurst exponents and one-point densities are inadequate for deducing the underlying dynamics from empirical data. We apply these conclusions in the end to make a focused statement about ‘nonlinear diffusion’.
    Keywords: Markov processes; fractional Brownian motion; scaling; Hurst exponents; stationary and nonstationary increments; autocorrelations
    JEL: G00 C1
    Date: 2006–09–30
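For comparison, the defining covariance of fractional Brownian motion and the autocorrelation of its successive unit increments (standard results, stated here for orientation):

```latex
\begin{align}
  \langle B_H(t)\,B_H(s)\rangle
    &= \tfrac{\sigma^2}{2}\bigl(t^{2H} + s^{2H} - |t-s|^{2H}\bigr), \\
  \mathrm{corr}\bigl(B_H(1)-B_H(0),\; B_H(2)-B_H(1)\bigr)
    &= 2^{2H-1} - 1,
\end{align}
% which vanishes if and only if H = 1/2: fBm's stationary increments are
% correlated precisely when H differs from 1/2, in contrast with the
% uncorrelated, nonstationary increments of a scaling Markov process.
```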
  10. By: McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.
    Abstract: We discuss martingales, detrending data, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). Beginning with x-independent drift coefficients R(t), we show that martingale stochastic processes generate uncorrelated, generally nonstationary increments. Generally, a test for a martingale is therefore a test for uncorrelated increments. A detrended process with an x-dependent drift coefficient is generally not a martingale, and so we extend our analysis to include the class of (x,t)-dependent drift coefficients of interest in finance. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. And while a Markovian market has no memory to exploit and presumably cannot be beaten systematically, it has never been shown that martingale memory cannot be exploited in 3-point or higher correlations to beat the market. We generalize our Markov scaling solutions presented earlier, and also generalize the martingale formulation of the efficient market hypothesis (EMH) to include (x,t)-dependent drift in log returns. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We end with a discussion of Levy’s characterization of Brownian motion and prove that an arbitrary martingale is topologically inequivalent to a Wiener process.
    Keywords: Martingales; Markov processes; detrending; memory; stationary and nonstationary increments; correlations; efficient market hypothesis.
    JEL: C53 G0 C2
    Date: 2007–02
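The claim that martingale increments are uncorrelated follows in one line from the defining property E[x(t)|F_s] = x(s) for s ≤ t (a standard argument, sketched here):

```latex
% For t_1 < t_2 \le t_3 < t_4, condition the later increment on
% the information set \mathcal{F}_{t_3}:
\begin{align}
  \bigl\langle (x(t_2)-x(t_1))\,(x(t_4)-x(t_3)) \bigr\rangle
  &= \Bigl\langle (x(t_2)-x(t_1))\,
       E\bigl[x(t_4)-x(t_3)\,\big|\,\mathcal{F}_{t_3}\bigr] \Bigr\rangle \\
  &= \bigl\langle (x(t_2)-x(t_1)) \cdot 0 \bigr\rangle = 0,
\end{align}
% so nonoverlapping increments of any martingale are uncorrelated,
% whether or not they are stationary.
```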
  11. By: Mueller, Ulrich; Petalas, Philippe-Emmanuel
    Abstract: The paper investigates asymptotically efficient inference in general likelihood models with time varying parameters. Parameter path estimators and tests of parameter constancy are evaluated by their weighted average risk and weighted average power, respectively. The weight function is proportional to the distribution of a Gaussian process, and focusses on local parameter instabilities that cannot be detected with certainty even in the limit. It is shown that asymptotically, the sample information about the parameter path is efficiently summarized by a Gaussian pseudo model. This approximation leads to computationally convenient formulas for efficient path estimators and test statistics, and unifies the theory of stability testing and parameter path estimation.
    Keywords: Time Varying Parameters; Non-linear Non-Gaussian Smoothing; Weighted Average Risk; Weighted Average Power; Posterior Approximation; Contiguity
    JEL: C22
    Date: 2007–03
  12. By: Zsolt Darvas (Department of Mathematical Economics and Economic Analysis, Corvinus University of Budapest)
    Abstract: Samples with overlapping observations are used for the study of uncovered interest rate parity, the predictability of long-run stock returns, and the credibility of exchange rate target zones. This paper quantifies the biases in parameter estimation and size distortions of hypothesis tests of overlapping linear and polynomial autoregressions, which have been used in target zone applications. We show that both estimation bias and size distortions generally depend on the amount of overlap, the sample size, and the autoregressive root of the data generating process. In particular, the estimates are biased in a way that makes it more likely that the predictions of the Bertola-Svensson model will be supported. Size distortions of various tests also turn out to be substantial even when using a heteroskedasticity and autocorrelation consistent covariance matrix.
    Keywords: drift-adjustment method, exchange rate target zone, HAC covariance, overlapping observations, polynomial autoregression, size distortions, small sample bias
    JEL: C22 F31
    Date: 2007–02–27
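How overlap alone manufactures persistence can be seen with i.i.d. data: overlapping k-period sums of white noise follow an MA(k-1) with lag-1 autocorrelation (k-1)/k, so an AR(1) fitted to them finds strong "persistence" even though non-overlapping sums are uncorrelated. A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo: fit an AR(1) to overlapping k-period sums of an i.i.d. series.
# The overlap mechanically induces serial correlation of order k-1, which
# estimation and inference must account for.
n, k, reps = 200, 5, 500
est = np.empty(reps)
for r in range(reps):
    e = rng.standard_normal(n + k)
    # Overlapping k-period sums: y_t = e_t + ... + e_{t+k-1}.
    y = np.convolve(e, np.ones(k), mode="valid")[:n]
    # Least squares AR(1) coefficient (no intercept; e has mean zero).
    est[r] = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
print(round(est.mean(), 2))
```

The average estimate sits near (k-1)/k = 0.8, shaded down by the usual small-sample AR bias, illustrating why overlapping autoregressions need the bias and size corrections the paper quantifies.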

This nep-ets issue is ©2007 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP homepage. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.