nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒03‒17
twenty papers chosen by
Sune Karlsson
Orebro University

  1. Asymptotics for Stationary Very Nearly Unit Root Processes By Donald W.K. Andrews; Patrik Guggenberger
  2. Robust and efficient adaptive estimation of binary-choice regression models By Cizek,Pavel
  3. The Limit of Finite-Sample Size and a Problem with Subsampling By Donald W.K. Andrews; Patrik Guggenberger
  4. Efficient Estimation of the Parameter Path in Unstable Time Series Models By Mueller, Ulrich; Petalas, Philippe-Emmanuel
  5. Hybrid and Size-Corrected Subsample Methods By Donald W.K. Andrews; Patrik Guggenberger
  6. Non-Nested Testing in Models Estimated via Generalized Method of Moments By Alastair R. Hall; Denis Pelletier
  7. Valid Inference in Partially Unstable GMM Models By Li, Hong; Mueller, Ulrich
  8. Accurate Value-at-Risk Forecast with the (good old) Normal-GARCH Model By Christoph Hartz; Stefan Mittnik; Marc S. Paolella
  9. The Behavior of the Maximum Likelihood Estimator of Dynamic Panel Data Sample Selection Models By Wladimir Raymond; Pierre Mohnen; Franz Palm; Sybrand Schim van der Loeff
  10. Estimation Bias and Inference in Overlapping Autoregressions: Implications for the Target Zone Literature By Zsolt Darvas
  11. Estimating Long-Run Relationships between Observed Integrated Variables by Unobserved Component Methods By G. EVERAERT
  12. Improved Errors-in-Variables Estimators for Grouped Data By Devereux, Paul J.
  13. Kriging metamodeling in simulation : a review By Kleijnen,Jack P.C.
  14. Predicting the Term Structure of Interest Rates: Incorporating parameter uncertainty, model uncertainty and macroeconomic information By Michiel D. de Pooter; Francesco Ravazzolo; Dick van Dijk
  15. Density and Hazard Rate Estimation for Censored and α-mixing Data Using Gamma Kernels By Taoufik Bouezmarni; Jeroen V.K. Rombouts
  16. Hurst exponents, Markov processes, and fractional Brownian motion By McCauley, Joseph L.; Gunaratne, Gemunu H.; Bassler, Kevin E.
  17. An interpolated periodogram-based metric for comparison of time series with unequal lengths By Caiado, Jorge; Crato, Nuno; Peña, Daniel
  18. Hurst exponents, Markov processes, and nonlinear diffusion equations By Bassler, Kevin E.; Gunaratne, Gemunu H.; McCauley, Joseph L.
  19. Predicting home-appliance acquisition sequences: Markov/Markov for Discrimination and survival analysis for modeling sequential information in NPTB models By A. PRINZIE; D. VAN DEN POEL
  20. A GARCH-based method for clustering of financial time series: International stock markets evidence By Caiado, Jorge; Crato, Nuno

  1. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA)
    Abstract: This paper considers a mean zero stationary first-order autoregressive (AR) model. It is shown that the least squares estimator and t statistic have Cauchy and standard normal asymptotic distributions, respectively, when the AR parameter rho_n is very near to one in the sense that 1 - rho_n = o(n^{-1}).
    Keywords: Asymptotics, Least squares, Nearly nonstationary, Stationary initial condition, Unit root
    JEL: C22
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1607&r=ecm
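    A minimal Monte Carlo sketch (illustrative only, not taken from the paper) of the setting above: a stationary AR(1) whose root approaches one faster than 1/n, for which the least-squares t-statistic should be roughly standard normal. The sample size, rate, and replication count are arbitrary choices.

      # Nearly unit root AR(1): 1 - rho_n = o(1/n), stationary initial condition.
      import numpy as np

      rng = np.random.default_rng(0)
      n, reps = 1000, 1000
      rho = 1 - n ** (-1.5)                     # 1 - rho = n^(-1.5) = o(1/n)
      tstats = np.empty(reps)

      for rep in range(reps):
          e = rng.standard_normal(n)
          y = np.empty(n)
          y[0] = rng.standard_normal() / np.sqrt(1 - rho ** 2)   # stationary start
          for t in range(1, n):
              y[t] = rho * y[t - 1] + e[t]
          x, ylead = y[:-1], y[1:]
          rho_hat = (x @ ylead) / (x @ x)
          s2 = np.mean((ylead - rho_hat * x) ** 2)
          tstats[rep] = (rho_hat - rho) / np.sqrt(s2 / (x @ x))

      # Under the asymptotics above the t-statistic is approximately N(0,1), so the
      # two-sided 5% rejection rate should be close to the nominal level.
      print(np.mean(np.abs(tstats) > 1.96))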
  2. By: Cizek,Pavel (Tilburg University, Center for Economic Research)
    Abstract: Binary-choice regression models such as probit and logit are used to describe the effect of explanatory variables on a binary response variable. Typically estimated by maximum likelihood, their estimates are very sensitive to deviations from the model, such as heteroscedasticity and data contamination. At the same time, traditional robust (high-breakdown point) methods such as the maximum trimmed likelihood are not applicable since, by trimming observations, they induce the separation of data and non-identification of parameter estimates. To provide a robust estimation method for binary-choice regression, we consider a maximum symmetrically-trimmed likelihood estimator (MSTLE) and design a parameter-free adaptive procedure for choosing the amount of trimming. The proposed adaptive MSTLE preserves the robust properties of the original MSTLE, significantly improves its finite-sample behavior, and additionally ensures asymptotic efficiency of the estimator under no contamination. The results concerning the trimming identification, robust properties, and asymptotic distribution of the proposed method are accompanied by simulation experiments and an application documenting the finite-sample behavior of the proposed and some existing methods.
    Keywords: asymptotic efficiency;binary-choice regression;breakdown point;maximum likelihood estimation;robust estimation;trimming
    JEL: C13 C20 C21 C22
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200712&r=ecm
  3. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA)
    Abstract: This paper considers inference based on a test statistic that has a limit distribution that is discontinuous in a nuisance parameter or the parameter of interest. The paper shows that subsample, b_n < n bootstrap, and standard fixed critical value tests based on such a test statistic often have asymptotic size -- defined as the limit of the finite-sample size -- that is greater than the nominal level of the tests. We determine precisely the asymptotic size of such tests under a general set of high-level conditions that are relatively easy to verify. The high-level conditions are verified in several examples. Analogous results are established for confidence intervals. The results apply to tests and confidence intervals (i) when a parameter may be near a boundary, (ii) for parameters defined by moment inequalities, (iii) based on super-efficient or shrinkage estimators, (iv) based on post-model selection estimators, (v) in scalar and vector autoregressive models with roots that may be close to unity, (vi) in models with lack of identification at some point(s) in the parameter space, such as models with weak instruments and threshold autoregressive models, (vii) in predictive regression models with nearly-integrated regressors, (viii) for non-differentiable functions of parameters, and (ix) for differentiable functions of parameters that have zero first-order derivative. Examples (i)-(iii) are treated in this paper. Examples (i) and (iv)-(vi) are treated in sequels to this paper, Andrews and Guggenberger (2005a, b). In models with unidentified parameters that are bounded by moment inequalities, i.e., example (ii), certain subsample confidence regions are shown to have asymptotic size equal to their nominal level. In all other examples listed above, some types of subsample procedures do not have asymptotic size equal to their nominal level.
    Keywords: Asymptotic size, b < n bootstrap, Finite-sample size, Over-rejection, Size correction, Subsample confidence interval, Subsample test
    JEL: C12 C15
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1605&r=ecm
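    For readers unfamiliar with the subsample tests discussed above, the following generic sketch (an illustration of the standard subsampling recipe, not of the paper's counterexamples) shows how a subsample critical value is computed: the statistic is recomputed on every block of length b_n < n and its empirical quantile replaces the fixed critical value.

      # Generic subsample critical value: recompute the statistic on all length-b blocks
      # and use its empirical (1 - alpha) quantile in place of a fixed critical value.
      import numpy as np

      def subsample_critical_value(x, stat, b, alpha=0.05):
          n = len(x)
          vals = [stat(x[i:i + b]) for i in range(n - b + 1)]
          return np.quantile(vals, 1 - alpha)

      def abs_tstat(x):
          """Two-sided t-statistic for H0: mean = 0."""
          return abs(np.sqrt(len(x)) * x.mean() / x.std(ddof=1))

      rng = np.random.default_rng(1)
      x = rng.standard_normal(500)
      b = int(len(x) ** 0.6)                    # block length b_n < n
      cv = subsample_critical_value(x, abs_tstat, b)
      print("reject" if abs_tstat(x) > cv else "do not reject")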
  4. By: Mueller, Ulrich; Petalas, Philippe-Emmanuel
    Abstract: The paper investigates asymptotically efficient inference in general likelihood models with time varying parameters. Parameter path estimators and tests of parameter constancy are evaluated by their weighted average risk and weighted average power, respectively. The weight function is proportional to the distribution of a Gaussian process, and focusses on local parameter instabilities that cannot be detected with certainty even in the limit. It is shown that asymptotically, the sample information about the parameter path is efficiently summarized by a Gaussian pseudo model. This approximation leads to computationally convenient formulas for efficient path estimators and test statistics, and unifies the theory of stability testing and parameter path estimation.
    Keywords: Time Varying Parameters; Non-linear Non-Gaussian Smoothing; Weighted Average Risk; Weighted Average Power; Posterior Approximation; Contiguity
    JEL: C22
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2260&r=ecm
  5. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA)
    Abstract: This paper considers the problem of constructing tests and confidence intervals (CIs) that have correct asymptotic size in a broad class of non-regular models. The models considered are non-regular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2005a) that standard fixed critical value, subsample, and b < n bootstrap methods often have incorrect size in such models. This paper introduces general methods of constructing tests and CIs that have correct size. First, procedures are introduced that are a hybrid of subsample and fixed critical value methods. The resulting hybrid procedures are easy to compute and have correct size asymptotically in many, but not all, cases of interest. Second, the paper introduces size-correction and "plug-in" size-correction methods for fixed critical value, subsample, and hybrid tests. The paper also introduces finite-sample adjustments to the asymptotic results of Andrews and Guggenberger (2005a) for subsample and hybrid methods and employs these adjustments in size-correction. The paper discusses several examples in detail. The examples are: (i) tests when a nuisance parameter may be near a boundary, (ii) CIs in an autoregressive model with a root that may be close to unity, and (iii) tests and CIs based on a post-conservative model selection estimator.
    Keywords: Asymptotic size, Autoregressive model, b < n bootstrap, Finite-sample size, Hybrid test, Model selection, Over-rejection, Parameter near boundary, Size correction, Subsample confidence interval, Subsample test
    JEL: C12 C15
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1606&r=ecm
  6. By: Alastair R. Hall (Economics, School of Social Sciences, University of Manchester); Denis Pelletier (Department of Economics, North Carolina State University)
    Abstract: Rivers and Vuong (2002) develop a very general framework for choosing between two competing dynamic models. Within their framework, inference is based on a statistic that compares measures of goodness of fit between the two models. The null hypothesis is that the models have equal measures of goodness of fit; one model is preferred if its goodness of fit is statistically significantly smaller than that of its competitor. Under the null hypothesis, Rivers and Vuong (2002) show that their test statistic has a standard normal distribution under generic conditions that are argued to allow for a variety of estimation methods including Generalized Method of Moments (GMM). In this paper, we analyze the limiting distribution of Rivers and Vuong's (2002) statistic under the null hypothesis when inference is based on a comparison of GMM minimands evaluated at GMM estimators. It is shown that the limiting behaviour of this statistic depends on whether the models in question are correctly specified, locally misspecified or misspecified. Specifically, it is shown that: (i) if both models are correctly specified or locally misspecified then Rivers and Vuong's (2002) generic conditions are not satisfied, and the limiting distribution of the test statistic is non-standard under the null; (ii) if both models are misspecified then the generic conditions are satisfied, and so the statistic has a standard normal distribution under the null. In the latter case it is shown that the choice of weighting matrices affects the outcome of the test and thus the ranking of the models.
    Keywords: Generalized Method of Moments, Non-nested Hypothesis Testing, Model Selection
    JEL: C10 C32
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:ncs:wpaper:011&r=ecm
  7. By: Li, Hong; Mueller, Ulrich
    Abstract: The paper considers time series GMM models where a subset of the parameters are time varying. The magnitude of the time variation in the unstable parameters is such that efficient tests detect the instability with (possibly high) probability smaller than one, even in the limit. We show that for many forms of the instability and a large class of GMM models, standard GMM inference on the subset of stable parameters, ignoring the partial instability, remains asymptotically valid.
    Keywords: Structural Breaks; Parameter Stability Test; Contiguity; Euler Condition; New Keynesian Phillips Curve
    JEL: C32
    Date: 2006–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2261&r=ecm
  8. By: Christoph Hartz (University of Munich); Stefan Mittnik (University of Munich, Center for Financial Studies and ifo); Marc S. Paolella (University of Zurich)
    Abstract: A resampling method based on the bootstrap and a bias-correction step is developed for improving the Value-at-Risk (VaR) forecasting ability of the normal-GARCH model. Compared to the use of more sophisticated GARCH models, the new method is fast, easy to implement, numerically reliable, and, except for having to choose a window length L for the bias-correction step, fully data driven. The results for several different financial asset returns over a long out-of-sample forecasting period, as well as use of simulated data, strongly support use of the new method, and the performance is not sensitive to the choice of L.
    Keywords: Bootstrap, GARCH, Value-at-Risk
    JEL: C22 C53 C63 G12
    Date: 2006–11–03
    URL: http://d.repec.org/n?u=RePEc:cfs:cfswop:wp200623&r=ecm
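    A stylized sketch of the two ingredients, normal-GARCH VaR and residual resampling; the GARCH parameters are assumed rather than estimated, the return series is simulated as a placeholder for real data, and the paper's bias-correction step with window length L is not reproduced.

      # One-step VaR from a normal-GARCH(1,1), plus a residual-bootstrap variant.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(2)
      omega, alpha, beta = 1e-6, 0.05, 0.90     # assumed GARCH(1,1) parameters

      # Placeholder return series simulated from the same model (stands in for data).
      n = 1000
      r = np.empty(n)
      s2 = np.empty(n + 1)
      s2[0] = omega / (1 - alpha - beta)
      for t in range(n):
          r[t] = np.sqrt(s2[t]) * rng.standard_normal()
          s2[t + 1] = omega + alpha * r[t] ** 2 + beta * s2[t]

      sigma_next = np.sqrt(s2[-1])              # one-step-ahead conditional volatility

      # Normal-GARCH 1% VaR (reported as a positive loss number).
      var_normal = -sigma_next * norm.ppf(0.01)

      # Bootstrap variant: replace the normal quantile by the bootstrapped 1% quantile
      # of the standardized residuals z_t = r_t / sigma_t.
      z = r / np.sqrt(s2[:-1])
      boot_q = np.quantile(rng.choice(z, size=(2000, n), replace=True), 0.01, axis=1)
      var_boot = -sigma_next * boot_q.mean()

      print(var_normal, var_boot)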
  9. By: Wladimir Raymond; Pierre Mohnen; Franz Palm; Sybrand Schim van der Loeff
    Abstract: This paper proposes a method to implement maximum likelihood estimation of the dynamic panel data type 2 and type 3 tobit models. The likelihood function involves a two-dimensional indefinite integral evaluated using “two-step” Gauss-Hermite quadrature. A Monte Carlo study shows that the quadrature works well in finite samples for a number of evaluation points as small as two. Incorrectly ignoring the individual effects, or the dependence between the initial conditions and the individual effects, results in an overestimation of the coefficients of the lagged dependent variables. An application to incremental and radical product innovations by Dutch business firms illustrates the method.
    Keywords: panel data, maximum likelihood estimator, dynamic models, sample selection
    Date: 2007–03–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2007s-06&r=ecm
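    To illustrate the quadrature ingredient, the sketch below uses a generic product Gauss-Hermite rule (not the paper's exact "two-step" scheme for the tobit likelihood) to approximate a two-dimensional expectation under a correlated bivariate normal with only two nodes per dimension.

      # Product Gauss-Hermite approximation of E[g(X)] for X bivariate normal.
      import numpy as np
      from numpy.polynomial.hermite import hermgauss
      from scipy.stats import norm

      def gauss_hermite_2d(g, mean, cov, n_nodes=2):
          """Approximate E[g(X)], X ~ N(mean, cov), with an n_nodes x n_nodes rule."""
          x, w = hermgauss(n_nodes)             # nodes/weights for int exp(-t^2) f(t) dt
          L = np.linalg.cholesky(cov)
          total = 0.0
          for i in range(n_nodes):
              for j in range(n_nodes):
                  z = np.sqrt(2.0) * np.array([x[i], x[j]])
                  total += w[i] * w[j] * g(mean + L @ z)
          return total / np.pi                  # 1/pi from the change of variables

      # Example: E[Phi(X1) * Phi(X2)] for correlated standard normals.
      mean = np.zeros(2)
      cov = np.array([[1.0, 0.5], [0.5, 1.0]])
      print(gauss_hermite_2d(lambda v: norm.cdf(v[0]) * norm.cdf(v[1]), mean, cov))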
  10. By: Zsolt Darvas (Department of Mathematical Economics and Economic Analysis, Corvinus University of Budapest)
    Abstract: Samples with overlapping observations are used for the study of uncovered interest rate parity, the predictability of long run stock returns, and the credibility of exchange rate target zones. This paper quantifies the biases in parameter estimation and size distortions of hypothesis tests of overlapping linear and polynomial autoregressions, which have been used in target zone applications. We show that both estimation bias and size distortions generally depend on the amount of overlap, the sample size, and the autoregressive root of the data generating process. In particular, the estimates are biased in a way that makes it more likely that the predictions of the Bertola-Svensson model will be supported. Size distortions of various tests also turn out to be substantial even when using a heteroskedasticity and autocorrelation consistent covariance matrix.
    Keywords: drift-adjustment method, exchange rate target zone, HAC covariance, overlapping observations, polynomial autoregression, size distortions, small sample bias
    JEL: C22 F31
    Date: 2007–02–27
    URL: http://d.repec.org/n?u=RePEc:mkg:wpaper:0701&r=ecm
  11. By: G. EVERAERT
    Abstract: A regression including integrated variables yields spurious results if the residuals contain a unit root. Although the obtained estimates are unreliable, this does not automatically imply that there is no long-run relation between the included variables as the unit root in the residuals may be induced by omitted or unobserved integrated variables. This paper uses an unobserved component model to estimate the partial long-run relation between observed integrated variables. This provides an alternative to standard cointegration analysis. The proposed methodology is described using a Monte Carlo simulation and applied to investigate purchasing-power parity.
    Keywords: Spurious Regression, Cointegration, Unobserved Component Model, PPP.
    JEL: C15 C32
    Date: 2007–01
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:07/452&r=ecm
  12. By: Devereux, Paul J.
    Abstract: In many economic applications, observations are naturally categorized into mutually exclusive and exhaustive groups. For example, individuals can be classified into cohorts and workers are employees of a particular firm. Grouping models are widely used in economics -- for example, cohort models have been used to study labour supply, wage inequality, consumption, and the intergenerational transfer of human capital. The simplest grouping estimator involves taking the means of all variables for each group and then carrying out a group-level regression by OLS or weighted least squares. This estimator is biased in finite samples. I show that the standard errors-in-variables estimator (EVE) designed to correct for this small-sample bias is exactly equivalent to the Jack-knife Instrumental Variables Estimator (JIVE). EVE is also closely related to the k-class of instrumental variables estimators. I then use results from the instrumental variables literature to develop an estimator (UEVE) with better finite-sample properties than existing errors-in-variables estimators. The theoretical results are demonstrated using Monte Carlo experiments. Finally, I use the estimators to implement a model of inter-temporal male labour supply using micro data from the United States Census. There are sizeable differences in the wage elasticity across estimators, showing the practical importance of the theoretical issues discussed in this paper even in circumstances where the sample size is quite large.
    Keywords: errors-in-variables; grouped data
    JEL: C21 J22
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:6167&r=ecm
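    The "simplest grouping estimator" described in the abstract can be written in a few lines; the sketch below is that baseline only (weighted least squares on group means), not the paper's EVE or UEVE corrections, and the toy data are invented for illustration.

      # Group-means regression: weighted least squares on group averages.
      import numpy as np

      def grouped_wls(y, X, groups):
          """Collapse y and X to group means, then run WLS with group sizes as weights."""
          labels = np.unique(groups)
          Yg = np.array([y[groups == g].mean() for g in labels])
          Xg = np.vstack([X[groups == g].mean(axis=0) for g in labels])
          w = np.array([(groups == g).sum() for g in labels], dtype=float)
          beta, *_ = np.linalg.lstsq(Xg * np.sqrt(w)[:, None], Yg * np.sqrt(w), rcond=None)
          return beta

      # Toy example: cohort-level regression of y on a constant and one regressor.
      rng = np.random.default_rng(3)
      groups = rng.integers(0, 20, size=2000)   # 20 cells
      x = rng.standard_normal(2000) + groups / 10.0
      y = 1.0 + 0.5 * x + rng.standard_normal(2000)
      X = np.column_stack([np.ones_like(x), x])
      print(grouped_wls(y, X, groups))          # roughly [1.0, 0.5]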
  13. By: Kleijnen,Jack P.C. (Tilburg University, Center for Economic Research)
    Abstract: This article reviews Kriging (also called spatial correlation modeling). It presents the basic Kriging assumptions and formulas, contrasting Kriging with classic linear regression metamodels. Furthermore, it extends Kriging to random simulation, and discusses bootstrapping to estimate the variance of the Kriging predictor. Besides classic one-shot statistical designs such as Latin Hypercube Sampling, it reviews sequentialized and customized designs. It ends with topics for future research.
    Keywords: kriging;metamodel;response surface;interpolation;design
    JEL: C0 C1 C9
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200713&r=ecm
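    A compact ordinary-Kriging predictor with a Gaussian correlation function, in the spirit of the metamodels reviewed above; fixing the correlation parameter theta rather than estimating it, and the toy simulation output, are illustrative simplifications.

      # Ordinary Kriging interpolator with a Gaussian (squared-exponential) correlation.
      import numpy as np

      def kriging_predict(X, y, x0, theta=10.0):
          """Ordinary Kriging prediction at point x0 given design points X, outputs y."""
          def corr(a, b):
              d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
              return np.exp(-theta * d2)
          R = corr(X, X) + 1e-10 * np.eye(len(X))    # small jitter for stability
          r = corr(X, x0[None, :])[:, 0]
          ones = np.ones(len(X))
          Ri_y = np.linalg.solve(R, y)
          Ri_1 = np.linalg.solve(R, ones)
          mu = (ones @ Ri_y) / (ones @ Ri_1)         # generalized least squares mean
          return mu + r @ np.linalg.solve(R, y - mu * ones)

      # Toy deterministic simulation output: interpolate sin(2*pi*x) from 8 runs.
      X = np.linspace(0, 1, 8)[:, None]
      y = np.sin(2 * np.pi * X[:, 0])
      print(kriging_predict(X, y, np.array([0.37])))  # compare with np.sin(2*np.pi*0.37)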
  14. By: Michiel D. de Pooter (Erasmus Universiteit Rotterdam); Francesco Ravazzolo (Erasmus Universiteit Rotterdam); Dick van Dijk (Erasmus Universiteit Rotterdam)
    Abstract: We forecast the term structure of U.S. Treasury zero-coupon bond yields by analyzing a range of models that have been used in the literature. We assess the relevance of parameter uncertainty by examining the added value of using Bayesian inference compared to frequentist estimation techniques, and model uncertainty by combining forecasts from individual models. Following current literature we also investigate the benefits of incorporating macroeconomic information in yield curve models. Our results show that adding macroeconomic factors is very beneficial for improving the out-of-sample forecasting performance of individual models. Despite this, the predictive accuracy of models varies over time considerably, irrespective of using the Bayesian or frequentist approach. We show that mitigating model uncertainty by combining forecasts leads to substantial gains in forecasting performance, especially when applying Bayesian model averaging.
    Keywords: Term structure of interest rates; Nelson-Siegel model; Affine term structure model; forecast combination; Bayesian analysis
    JEL: C5 C11 C32 E43 E47 F47
    Date: 2007–03–09
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070028&r=ecm
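    As a point of reference for the Nelson-Siegel model named in the keywords, the sketch below fits the three factors by OLS for a fixed decay parameter lambda (the common two-step treatment); the paper's Bayesian, macro-augmented, and affine variants are not reproduced, and the yield numbers are invented.

      # Nelson-Siegel cross-sectional fit: yields regressed on level/slope/curvature loadings.
      import numpy as np

      def nelson_siegel_loadings(tau, lam=0.0609):
          """Loadings for maturities tau (in months) at decay parameter lam."""
          x = lam * tau
          slope = (1 - np.exp(-x)) / x
          curvature = slope - np.exp(-x)
          return np.column_stack([np.ones_like(tau), slope, curvature])

      def fit_factors(yields, tau, lam=0.0609):
          """OLS estimate of (level, slope, curvature) from one yield curve."""
          B = nelson_siegel_loadings(tau, lam)
          beta, *_ = np.linalg.lstsq(B, yields, rcond=None)
          return beta

      # Toy curve: maturities in months, yields in percent (illustrative numbers only).
      tau = np.array([3, 6, 12, 24, 36, 60, 84, 120], dtype=float)
      yields = np.array([4.9, 5.0, 5.1, 5.2, 5.25, 5.3, 5.35, 5.4])
      print(fit_factors(yields, tau))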
  15. By: Taoufik Bouezmarni; Jeroen V.K. Rombouts (IEA, HEC Montréal)
    Abstract: In this paper we consider nonparametric estimation of the density and hazard rate function for right-censored α-mixing survival time data using kernel smoothing techniques. Since survival times are positive with potentially a high concentration at zero, one has to take into account the bias problems when the functions are estimated in the boundary region. In this paper, gamma kernel estimators of the density and the hazard rate function are proposed. The estimators use adaptive weights depending on the point at which we estimate the function, and they are robust to the boundary bias problem. For both estimators, the mean squared error properties, including the rate of convergence, the almost sure consistency and the asymptotic normality are investigated. The results of a simulation demonstrate the excellent performance of the proposed estimators.
    Keywords: Gamma kernel, Kaplan Meier, density and hazard function, mean integrated squared error, consistency, asymptotic normality.
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:iea:carech:0616&r=ecm
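    For orientation, here is Chen's (2000) gamma kernel density estimator for nonnegative data, the uncensored building block behind the estimators above; the Kaplan-Meier censoring weights and mixing adjustments of the paper are not included in this sketch.

      # Gamma kernel density estimate for nonnegative data (no boundary bias at zero).
      import numpy as np
      from scipy.stats import gamma

      def gamma_kernel_density(x_grid, data, b):
          """f_hat(x) = mean_i GammaPDF(data_i; shape = x/b + 1, scale = b)."""
          x_grid = np.atleast_1d(x_grid).astype(float)
          return np.array([gamma.pdf(data, a=x / b + 1.0, scale=b).mean() for x in x_grid])

      # Toy survival-time data: exponential sample, density concentrated near zero.
      rng = np.random.default_rng(4)
      data = rng.exponential(scale=1.0, size=500)
      grid = np.linspace(0.0, 4.0, 9)
      print(gamma_kernel_density(grid, data, b=0.1))   # compare with np.exp(-grid)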
  16. By: McCauley, Joseph L.; Gunaratne, Gemunu H.; Bassler, Kevin E.
    Abstract: There is much confusion in the literature over Hurst exponents. Recently, we took a step in the direction of eliminating some of the confusion. One purpose of this paper is to illustrate the difference between fBm on the one hand and Gaussian Markov processes where H ≠ 1/2 on the other. The difference lies in the increments, which are stationary and correlated in one case and nonstationary and uncorrelated in the other. The two- and one-point densities of fBm are constructed explicitly. The two-point density doesn't scale. The one-point density for a semi-infinite time interval is identical to that for a scaling Gaussian Markov process with H ≠ 1/2 over a finite time interval. We conclude that both Hurst exponents and one point densities are inadequate for deducing the underlying dynamics from empirical data. We apply these conclusions in the end to make a focused statement about 'nonlinear diffusion'.
    Keywords: Markov processes; fractional Brownian motion; scaling; Hurst exponents; stationary and nonstationary increments; autocorrelations
    JEL: G00 C1
    Date: 2006–09–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2154&r=ecm
  17. By: Caiado, Jorge; Crato, Nuno; Peña, Daniel
    Abstract: We propose a periodogram-based metric for classification and clustering of time series with different sample sizes. For such cases, we know that the Euclidean distance between the periodogram ordinates cannot be used. One possible way to deal with this problem is to linearly interpolate one of the periodograms in order to estimate ordinates at the same frequencies.
    Keywords: Classification; Cluster analysis; Interpolation; Periodogram; Time series.
    JEL: C32
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2075&r=ecm
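    The metric described above is straightforward to sketch: compute each series' periodogram, linearly interpolate the shorter one onto the Fourier frequencies of the longer one, and take the Euclidean distance. The periodogram normalization below is one common convention and may differ from the paper's.

      # Interpolated periodogram distance for two series of unequal length; illustrative.
      import numpy as np

      def periodogram(x):
          """Periodogram ordinates at Fourier frequencies (frequency zero excluded)."""
          x = np.asarray(x, dtype=float) - np.mean(x)
          n = len(x)
          I = np.abs(np.fft.rfft(x)) ** 2 / n
          freqs = np.fft.rfftfreq(n)
          return freqs[1:], I[1:]

      def interpolated_periodogram_distance(x, y):
          """Euclidean distance after interpolating the shorter periodogram."""
          (fx, Ix), (fy, Iy) = periodogram(x), periodogram(y)
          if len(fx) < len(fy):                 # interpolate the shorter onto the longer
              Ix = np.interp(fy, fx, Ix)
          else:
              Iy = np.interp(fx, fy, Iy)
          return np.sqrt(np.sum((Ix - Iy) ** 2))

      rng = np.random.default_rng(5)
      x = rng.standard_normal(300).cumsum()     # series of length 300
      y = rng.standard_normal(450).cumsum()     # series of length 450
      print(interpolated_periodogram_distance(x, y))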
  18. By: Bassler, Kevin E.; Gunaratne, Gemunu H.; McCauley, Joseph L.
    Abstract: We show by explicit closed form calculations that a Hurst exponent H ≠ 1/2 does not necessarily imply long time correlations like those found in fractional Brownian motion. We construct a large set of scaling solutions of Fokker-Planck partial differential equations where H ≠ 1/2. Thus Markov processes, which by construction have no long time correlations, can have H ≠ 1/2. If a Markov process scales with Hurst exponent H ≠ 1/2 then it simply means that the process has nonstationary increments. For the scaling solutions, we show how to reduce the calculation of the probability density to a single integration once the diffusion coefficient D(x,t) is specified. As an example, we generate a class of student-t-like densities from the class of quadratic diffusion coefficients. Notably, the Tsallis density is one member of that large class. The Tsallis density is usually thought to result from a nonlinear diffusion equation, but instead we explicitly show that it follows from a Markov process generated by a linear Fokker-Planck equation, and therefore from a corresponding Langevin equation. Having a Tsallis density with H ≠ 1/2 therefore does not imply dynamics with correlated signals, e.g., like those of fractional Brownian motion. A short review of the requirements for fractional Brownian motion is given for clarity, and we explain why the usual simple argument that H ≠ 1/2 implies correlations fails for Markov processes with scaling solutions. Finally, we discuss the question of scaling of the full Green function g(x,t;x',t') of the Fokker-Planck pde.
    Keywords: Hurst exponent; Markov process; scaling; stochastic calculus; autocorrelations; fractional Brownian motion; Tsallis model; nonlinear diffusion
    JEL: G1 G10 G14
    Date: 2005–12–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2152&r=ecm
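    The central point, that a Markov process with nonstationary and uncorrelated increments can scale with H different from 1/2, can be illustrated with a time-changed Brownian motion whose variance grows like t^(2H); the construction below is a generic example of my own, not one of the paper's Fokker-Planck solutions.

      # A Gaussian Markov process X(t) with Var X(t) = t^(2H): H != 1/2, no long memory.
      import numpy as np

      rng = np.random.default_rng(6)
      H = 0.7
      t = np.linspace(0.0, 1.0, 1001)[1:]       # avoid t = 0
      n_paths = 2000

      # Independent increments with Var dX(t_k) = t_k^(2H) - t_{k-1}^(2H), i.e. a time
      # change of Brownian motion, so the process is Markov with uncorrelated increments.
      var_incr = np.diff(np.concatenate(([0.0], t ** (2 * H))))
      X = np.cumsum(rng.standard_normal((n_paths, len(t))) * np.sqrt(var_incr), axis=1)

      # Recover H from the scaling Var X(t) ~ t^(2H) via a log-log regression.
      v = X.var(axis=0)
      slope = np.polyfit(np.log(t), np.log(v), 1)[0]
      print(slope / 2)                          # approximately 0.7, yet the process is Markov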
  19. By: A. PRINZIE; D. VAN DEN POEL
    Abstract: The acquisition process of consumer durables is a ‘sequence’ of purchase events. Priority-pattern research exploits this ‘sequential order’ to describe a prototypical acquisition order for durables. This paper adds a predictive perspective to increase managerial relevance. Besides order information, the acquisition sequence also reveals precise timing between purchase events (‘sequential duration’), as examined in the literature on durable replacement and time-to-first acquisition. This paper bridges the gap between priority-pattern research and research on duration between durable acquisitions to improve the prediction of the product group from which the customer might acquire his next durable, i.e. the Next-Product-to-Buy (NPTB) model. We evaluate four multinomial-choice models incorporating: 1) general covariates, 2) general covariates and sequential order, 3) general covariates and sequential duration, and 4) general covariates, sequential order and duration. The results favor the model including general covariates and duration information (3). The high predictive value of sequential duration information emphasizes the predictive power of duration as compared to order information.
    Keywords: cross-sell, sequence analysis, choice modeling, durable goods, analytical CRM
    Date: 2007–01
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:07/442&r=ecm
  20. By: Caiado, Jorge; Crato, Nuno
    Abstract: In this paper, we introduce a volatility-based method for clustering analysis of financial time series. Using generalized autoregressive conditional heteroskedasticity (GARCH) models, we estimate the distances between the stock return volatilities. The proposed method uses the volatility behavior of the time series and solves the problem of different lengths. As an illustrative example, we investigate the similarities among major international stock markets using daily return series with different sample sizes from 1966 to 2006. From the cluster analysis, most European markets, the United States, and Canada appear close together, while most Asian/Pacific markets and the South/Middle American markets appear in a distinct cluster. After the terrorist attack of September 11, 2001, the European stock markets have become more homogeneous, and the North American markets, Japan, and Australia seem to have moved closer together.
    Keywords: Cluster analysis; GARCH; International stock markets; Volatility.
    JEL: C32 G15
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:2074&r=ecm
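    A rough sketch of one possible pipeline: fit a GARCH(1,1) to each series by Gaussian quasi-maximum likelihood, measure distances between the estimated (alpha, beta) parameters, and cluster hierarchically. The distance, the minimal estimation routine, and the simulated series are illustrative choices, not necessarily those of the paper.

      # GARCH(1,1)-based clustering of return series: fit each series, cluster on parameters.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.spatial.distance import pdist
      from scipy.cluster.hierarchy import linkage, fcluster

      def garch11_negloglik(params, r):
          omega, alpha, beta = params
          s2 = np.empty(len(r))
          s2[0] = r.var()
          for t in range(1, len(r)):
              s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
          return 0.5 * np.sum(np.log(s2) + r ** 2 / s2)

      def fit_garch11(r):
          res = minimize(garch11_negloglik, x0=[1e-6, 0.10, 0.80], args=(r,),
                         bounds=[(1e-8, None), (1e-6, 0.3), (1e-6, 0.999)])
          return res.x

      # Toy data: three series from one GARCH regime, two from another (illustrative).
      rng = np.random.default_rng(7)
      def simulate(alpha, beta, n=1500, omega=1e-6):
          r = np.zeros(n)
          s2 = omega / (1 - alpha - beta)       # start at the unconditional variance
          for t in range(n):
              r[t] = np.sqrt(s2) * rng.standard_normal()
              s2 = omega + alpha * r[t] ** 2 + beta * s2
          return r

      series = [simulate(0.05, 0.90) for _ in range(3)] + [simulate(0.15, 0.70) for _ in range(2)]
      params = np.array([fit_garch11(r)[1:] for r in series])    # cluster on (alpha, beta)
      Z = linkage(pdist(params), method='average')
      print(fcluster(Z, t=2, criterion='maxclust'))   # expected: first three together, last two together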

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.