nep-ets New Economics Papers
on Econometric Time Series
Issue of 2006‒06‒03
six papers chosen by
Yong Yin
SUNY at Buffalo

  1. Pairwise Tests of Purchasing Power Parity Using Aggregate and Disaggregate Price Measures By M. Hashem Pesaran; Ron P. Smith; Takashi Yamagata; Liudmyla Hvozdyk
  2. Estimation of Stochastic Volatility Models: An Approximation to the Nonlinear State Space By Junji Shimada; Yoshihiko Tsukuda
  3. Testing for Nonlinear Adjustment in Smooth Transition Vector Error Correction Models By Byeongseon Seo
  4. Volatility Forecast Comparison using Imperfect Volatility Proxies By Andrew Patton
  5. Information processing and measures of integration: New York, London and Tokyo By Susan Thorp; George Milunovich
  6. Robust Tail Inference for Dependent, Heterogeneous Stochastic Processes By Jonathan Hill

  1. By: M. Hashem Pesaran; Ron P. Smith; Takashi Yamagata; Liudmyla Hvozdyk
    Abstract: In this paper we adopt a new approach to testing for purchasing power parity, PPP, that is robust to base country effects, cross-section dependence, and aggregation. Given data on N+1 countries, i, j = 0, 1, 2, ..., N, the standard procedure is to apply unit root or stationarity tests to N relative prices against a base country, 0, e.g. the US. The evidence is that such tests are sensitive to the choice of base country. In addition, the analysis is subject to a high degree of cross-section dependence, which is difficult to deal with, particularly when N is large. In this paper we test for PPP by applying a pairwise approach to the disaggregated data set recently analysed by Imbs, Mumtaz, Ravn and Rey (2005, QJE). We consider a variety of tests applied to all N(N+1)/2 real exchange rate pairs among the N+1 countries and estimate the proportion of pairs that are stationary, for the aggregates and for each of the 19 commodity groups. This approach is invariant to base country effects, and the proportion of non-stationary pairs can be consistently estimated even in the presence of cross-sectional dependence. To deal with small sample problems and residual cross-section dependence, we use a factor-augmented sieve bootstrap approach and present bootstrap pairwise estimates of the proportions that are stationary. Bootstrapped rejection frequencies of 26%-49% based on unit root tests suggest some evidence in favour of PPP for the disaggregate data, compared with 6%-14% for the aggregate price series. (An illustrative sketch of the pairwise testing idea follows this entry.)
    Keywords: purchasing power parity, panel data, pairwise approach, cross section dependence
    JEL: C23 F31 F41
    Date: 2006
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_1704&r=ets
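    Illustration (not from the paper): a minimal Python sketch of the pairwise testing idea, using a plain ADF rejection frequency in place of the authors' factor-augmented sieve bootstrap. The function name and data layout are our own assumptions.

      # Fraction of the N(N+1)/2 pairwise real exchange rates that reject a
      # unit root -- a crude stand-in for the paper's bootstrap procedure.
      import numpy as np
      from itertools import combinations
      from statsmodels.tsa.stattools import adfuller

      def pairwise_stationary_fraction(log_prices, alpha=0.05):
          """log_prices: T x (N+1) array of common-currency log price indices,
          so that column i minus column j is a log real exchange rate."""
          M = log_prices.shape[1]
          pairs = list(combinations(range(M), 2))
          rejections = sum(
              adfuller(log_prices[:, i] - log_prices[:, j], regression="c")[1] < alpha
              for i, j in pairs)
          return rejections / len(pairs)

      # Toy check: a shared stochastic trend makes every pair cointegrated,
      # so the rejection frequency should sit far above the 5% nominal level.
      rng = np.random.default_rng(0)
      trend = rng.standard_normal(200).cumsum()
      p = trend[:, None] + rng.standard_normal((200, 6))
      print(pairwise_stationary_fraction(p))

    The estimated fraction, not any single test, is the object of interest, which is what makes the approach invariant to the choice of base country.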
  2. By: Junji Shimada; Yoshihiko Tsukuda
    Abstract: The stochastic volatility (SV) models had not been as popular as the ARCH (autoregressive conditional heteroskedasticity) models in practical applications until recent years, even though the SV models are closely related to financial economic theories. The main reason is that the likelihood of the SV models is not easy to evaluate, unlike that of the ARCH models. Developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research, including the SV models. After Jacquier et al. (1994) applied a Bayesian analysis to estimating the SV model in their epoch-making work, the Bayesian approach has greatly contributed to research on the SV models. The classical analysis based on the likelihood for estimating the SV model has also been extensively studied in recent years. Danielsson (1994) approximates the marginal likelihood of the observable process by simulating the latent volatility conditional on the available information. Shephard and Pitt (1997) gave an idea for evaluating the likelihood by exploiting sampled volatility. Durbin and Koopman (1997) explored the idea of Shephard and Pitt (1997) and evaluated the likelihood by Monte Carlo integration. Sandmann and Koopman (1998) applied this method to the SV model. Durbin and Koopman (2000) reviewed the methods of Monte Carlo maximum likelihood from both Bayesian and classical perspectives. The purpose of this paper is to propose the Laplace approximation (LA) method for the nonlinear state space representation, and to show that the LA method is workable for estimating the SV models, including the multivariate SV model and the dynamic bivariate mixture (DBM) model. The SV model can be regarded as a nonlinear state space model. The LA method approximates the logarithm of the joint density of the current observation and volatility, conditional on the past observations, by a second-order Taylor expansion around its mode, and then applies the nonlinear filtering algorithm. This idea of approximation is found in Shephard and Pitt (1997) and Durbin and Koopman (1997). The Monte Carlo likelihood (MCL; Sandmann and Koopman (1998)) is now a standard classical method for estimating the SV models. It is based on the importance sampling technique, which is regarded as an exact method for maximum likelihood estimation. We show that the LA method of this paper approximates the weight function by unity in the context of importance sampling. We do not need to carry out Monte Carlo integration to obtain the likelihood, since the approximate likelihood function can be obtained analytically. If the one-step-ahead prediction density of the observation and volatility variables conditional on the past observations is approximated sufficiently accurately, the LA method is workable. We examine how the LA method works through simulations as well as various empirical studies. We conduct Monte Carlo simulations for the univariate SV model to examine the small sample properties and compare them with those of other methods. The simulation experiments reveal that our method is comparable to the MCL, maximum likelihood (Fridman and Harris (1998)) and MCMC methods. We apply the method to the univariate SV models with normal or t-distributed errors, the bivariate SV model and the dynamic bivariate mixture model, and empirically illustrate how the LA method works for each of the extended models.
The empirical results on the stock markets reveal that our method provides estimates of the coefficients very similar to those of the MCL. In sum, this paper demonstrates that the LA method is workable in two ways: through simulation studies and through empirical studies. Naturally, the workability is limited to the cases we have examined, but we believe the LA method is applicable to many SV models. (An illustrative sketch of the Laplace approximation filter follows this entry.)
    Keywords: Stochastic volatility, Nonlinear state space representation
    JEL: C13 C22
    Date: 2004–08–11
    URL: http://d.repec.org/n?u=RePEc:ecm:feam04:611&r=ets
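    Illustration (not from the paper): a hypothetical Python sketch of a Laplace-approximation filter for the basic univariate SV model y_t = exp(h_t/2) e_t, h_t = mu + phi (h_{t-1} - mu) + s_eta n_t. Newton iterations locate the mode of the joint log density of (y_t, h_t) given the past, and the second-order expansion at the mode supplies both the likelihood increment and the filtered moments; all names and the exact recursion are our own schematic reading of the method.

      import numpy as np

      def la_loglik(y, mu, phi, s_eta):
          """Approximate log-likelihood of the basic SV model via a
          second-order (Laplace) expansion around the mode of
          log p(y_t, h_t | past), one observation at a time."""
          m = mu                                # prior mean of h_1
          P = s_eta**2 / (1.0 - phi**2)         # stationary prior variance
          loglik = 0.0
          for yt in y:
              h = m                             # Newton search for the mode
              for _ in range(50):
                  g = -(h - m) / P - 0.5 + 0.5 * yt**2 * np.exp(-h)
                  H = -1.0 / P - 0.5 * yt**2 * np.exp(-h)
                  step = g / H
                  h -= step
                  if abs(step) < 1e-10:
                      break
              V = -1.0 / H                      # Laplace (filtered) variance
              l_mode = (-0.5 * np.log(2 * np.pi * P) - 0.5 * (h - m)**2 / P
                        - 0.5 * np.log(2 * np.pi) - 0.5 * h
                        - 0.5 * yt**2 * np.exp(-h))
              loglik += l_mode + 0.5 * np.log(2 * np.pi * V)  # Gaussian integral
              m = mu + phi * (h - mu)           # one-step-ahead prediction
              P = phi**2 * V + s_eta**2
          return loglik

    In practice the approximate likelihood would be maximized numerically over (mu, phi, s_eta); no Monte Carlo integration is needed, which is the point the abstract stresses.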
  3. By: Byeongseon Seo
    Abstract: The smooth transition autoregressive (STAR) model was proposed by Chan and Tong (1986) as a generalization of the threshold autoregressive (TAR) model, and since then it has attracted wide attention in the literature on business cycles and on equilibrium parity relationships for commodity prices, exchange rates, and equity prices. Economic behavior is affected by asymmetric transaction costs and institutional rigidities, and a large number of studies - for example, Neftci (1984), Terasvirta and Anderson (1992), and Michael, Nobay, and Peel (1997) - have shown that many economic variables and relations display asymmetry and nonlinear adjustment. One of the most crucial issues in models of this kind is testing for the presence of nonlinear adjustment against the null of linearity. Luukkonen, Saikkonen, and Terasvirta (1988) expanded the transition function and proposed variable addition tests of linearity against smooth transition nonlinearity, and these tests have been used in many empirical studies. However, the test statistics are based on a polynomial approximation, and the approximation errors may affect statistical inference, depending on the values of the transition rate and location parameters. Furthermore, the tests are not directly related to the smooth transition model, so we cannot trace what causes the rejection of linearity. This paper considers direct tests for nonlinear adjustment, based on the exact specification of the smooth transition. The smooth transition model entails transition parameters that cannot be identified under the null hypothesis, yet the resulting optimality issue has not been treated extensively for this model. The theory of testing with unidentified nuisance parameters has been developed by Davies (1987), Andrews (1993), and Hansen (1996); Hansen (1996) in particular considered threshold models, where the threshold parameter cannot be identified under the null hypothesis and, as a result, the likelihood ratio statistic has a nonstandard distribution. The smooth transition model generalizes the threshold model, and this paper accordingly develops the appropriate tests and the associated distribution theory based on the optimality argument. Many empirical studies have found evidence of stochastic nonlinear dependence in equilibrium relations such as purchasing power parity. For example, Michael, Nobay, and Peel (1997), considering an equilibrium model of the real exchange rate in the presence of transaction costs, found strong evidence of nonlinear adjustment conforming to the exponential smooth transition model. The literature in this area is large and growing, but the econometric methods and formal theory have been limited. This paper proposes tests for nonlinear adjustment in smooth transition vector error correction models, and thereby fills this gap in the literature. One technical difficulty is estimating the smooth transition model. As noted by Haggan and Ozaki (1981) and Terasvirta (1994), it is difficult to estimate the smooth transition parameters jointly with the other slope parameters: the gradient with respect to the transition parameter forces its estimate to blow up to infinity, so the standard estimation algorithm cannot be relied on. Our tests are based on the Lagrange multiplier statistic, which can be calculated under the null hypothesis; they are therefore easy to implement and thus useful.
This paper finds that our tests have an asymptotic distribution based on a Gaussian process. However, the asymptotic distribution depends on nuisance parameters and the covariances are data-dependent, so tabulating the asymptotic distribution is not feasible. This paper therefore suggests bootstrap inference to approximate the sampling distribution of the test statistics. Simulation evidence shows that the bootstrap inference delivers reasonable size and power performance. (An illustrative sketch of a sup-LM test with bootstrap follows this entry.)
    Keywords: Nonlinearity; Smooth Transition; VECM
    JEL: C32
    Date: 2004–08–11
    URL: http://d.repec.org/n?u=RePEc:ecm:feam04:749&r=ets
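    Illustration (not from the paper): the paper treats smooth transition vector error correction models; the hypothetical Python sketch below shows the same sup-LM-plus-bootstrap logic in the simplest univariate logistic STAR setting. Function names, the grid, and the bootstrap design are our own.

      import numpy as np

      def sup_lm(y, grid_gamma, grid_c):
          """sup of the LM statistic over a grid of unidentified
          transition parameters (gamma, c)."""
          x, y1 = y[:-1], y[1:]
          X0 = np.column_stack([np.ones_like(x), x])
          b0, *_ = np.linalg.lstsq(X0, y1, rcond=None)
          e = y1 - X0 @ b0                      # residuals under the linear null
          T, stat = len(e), -np.inf
          for g in grid_gamma:
              for c in grid_c:
                  G = 1.0 / (1.0 + np.exp(-g * (x - c)))   # logistic transition
                  X1 = np.column_stack([X0, x * G])
                  b1, *_ = np.linalg.lstsq(X1, e, rcond=None)
                  r2 = 1.0 - np.sum((e - X1 @ b1)**2) / np.sum(e**2)
                  stat = max(stat, T * r2)      # LM = T * R^2 of score regression
          return stat, e, b0

      def bootstrap_pvalue(y, grid_gamma, grid_c, B=199, seed=0):
          """Residual bootstrap under the estimated linear null, since the
          asymptotic null distribution depends on nuisance parameters."""
          rng = np.random.default_rng(seed)
          stat, e, b0 = sup_lm(y, grid_gamma, grid_c)
          count = 0
          for _ in range(B):
              eb = rng.choice(e, size=len(e), replace=True)
              yb = np.empty(len(y))
              yb[0] = y[0]
              for t in range(1, len(y)):        # regenerate data under the null
                  yb[t] = b0[0] + b0[1] * yb[t - 1] + eb[t - 1]
              count += sup_lm(yb, grid_gamma, grid_c)[0] >= stat
          return stat, (1 + count) / (B + 1)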
  4. By: Andrew Patton (London School of Economics)
    Abstract: The use of a conditionally unbiased, but imperfect, volatility proxy can lead to undesirable outcomes in standard methods for comparing conditional variance forecasts. We derive necessary and sufficient conditions on the functional form of the loss function for the ranking of competing volatility forecasts to be robust to the presence of noise in the volatility proxy, and derive some interesting special cases of this class of 'robust' loss functions. We motivate the theory with analytical results on the distortions caused by some widely used loss functions when used with standard volatility proxies such as squared returns, the intra-daily range or realised volatility. The methods are illustrated with an application to the volatility of returns on IBM over the period 1993 to 2003. (An illustrative sketch of two robust losses follows this entry.)
    Keywords: forecast evaluation; forecast comparison; loss functions; realised variance; range
    JEL: C53 C52 C22
    Date: 2006–05–01
    URL: http://d.repec.org/n?u=RePEc:uts:rpaper:175&r=ets
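    Illustration (not from the paper): two members of the robust class the paper characterizes, MSE and QLIKE, written out in Python with the squared return as the conditionally unbiased proxy. The toy data-generating process is our own assumption.

      import numpy as np

      def mse_loss(proxy, h):
          return (proxy - h)**2

      def qlike_loss(proxy, h):
          return proxy / h - np.log(proxy / h) - 1.0

      # With an unbiased proxy, a robust loss should (in expectation) rank the
      # accurate forecast ahead of the constant one despite the proxy noise.
      rng = np.random.default_rng(1)
      T = 10_000
      sigma2 = 0.5 + 0.9 * rng.chisquare(1, T)        # "true" conditional variance
      r2 = sigma2 * rng.standard_normal(T)**2          # squared-return proxy
      h_good = sigma2 * np.exp(0.1 * rng.standard_normal(T))
      h_bad = np.full(T, sigma2.mean())                # constant forecast
      for name, loss in [("MSE", mse_loss), ("QLIKE", qlike_loss)]:
          print(name, loss(r2, h_good).mean(), loss(r2, h_bad).mean())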
  5. By: Susan Thorp (School of Finance and Economics, University of Technology, Sydney); George Milunovich (Department of Economics, Macquarie University)
    Abstract: Equity markets do not pass all overnight information into prices instantaneously at the opening of trade. The New York market takes up to 30 minutes after the opening time to absorb overnight foreign news, Tokyo takes about 90 minutes, and London about 120 minutes on average. These delays in information absorption are not commercially significant but do have implications for measures of market integration. We adjust intra-daily return series for non-instantaneous news absorption and then use the adjusted series to predict opening price variation in three major equity markets. Because the adjusted daytime return series are uncorrelated, we can accurately measure the size, and identify the sources, of transmissions. Overnight news, as represented by foreign daytime returns, explains 12% of opening price variation (close-open returns) in New York, 14% in Tokyo and 30% in London. For New York and Tokyo, the largest influences come from the market that trades immediately prior (London and New York, respectively), whereas opening price variation in London is linked more closely with New York than with Tokyo. Foreign volatility spillovers are also significant, and subject to asymmetry effects. (An illustrative sketch of the spillover regression follows this entry.)
    Keywords: GARCH; spillover; integration; transmission; efficiency
    JEL: G14 G15
    Date: 2006–05–01
    URL: http://d.repec.org/n?u=RePEc:uts:rpaper:177&r=ets
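    Illustration (not from the paper): a hypothetical sketch of the variance-decomposition step, regressing a market's close-to-open return on the daytime returns of the markets that traded since its previous close. The column names and toy data are our own assumptions.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def opening_variance_explained(df):
          """R^2 of NY close-to-open returns on prior foreign daytime returns
          (after the paper's adjustment for non-instantaneous absorption,
          which is not reproduced here)."""
          X = sm.add_constant(df[["tokyo_day", "london_day"]])
          res = sm.OLS(df["ny_open"], X).fit()
          return res.rsquared, res.params

      # Toy data: the NY opening return loads on both foreign daytime returns.
      rng = np.random.default_rng(3)
      n = 1000
      df = pd.DataFrame({"tokyo_day": rng.standard_normal(n),
                         "london_day": rng.standard_normal(n)})
      df["ny_open"] = (0.2 * df["tokyo_day"] + 0.5 * df["london_day"]
                       + rng.standard_normal(n))
      r2, params = opening_variance_explained(df)
      print(r2)   # share of opening price variation explained by overnight news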
  6. By: Jonathan Hill (Department of Economics, Florida International University)
    Abstract: This paper analyzes an estimator of the tail shape of a distribution due to B. Hill (1975) under general conditions of dependence and heterogeneity. For processes with extremes that are near-epoch-dependent on the extremes of a mixing process, we prove that a (possibly stochastic) weighted average of tail index estimators over a window of sample tail regions has the same Gaussian distribution limit as any one tail index estimator. We provide a new approximation of the mean-square-error of the Hill estimator for any process with regularly varying distribution tails, as well as a new kernel estimator of a generalized mean-square-error based on a data-driven weighted average of the bias and variance. A broad simulation study demonstrates the strength of the kernel estimator for matters of inference when the data are dependent and heterogeneous. We demonstrate that minimum mean-square-error and mean-square-error weighted average estimators have superlative properties, including sharpness of confidence bands and the propensity to generate an estimator that is approximately normally distributed. (An illustrative sketch of the Hill estimator follows this entry.)
    Keywords: Hill estimator, regular variation, extremal near epoch dependence, kernel estimator, mean-square-error
    JEL: C15 C29 C49
    Date: 2006–05
    URL: http://d.repec.org/n?u=RePEc:fiu:wpaper:0604&r=ets
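    Illustration (not from the paper): the Hill (1975) estimator and a simple weighted average over a window of tail fractions, in Python. The naive weights below stand in for the paper's kernel mean-square-error weighting, which is not reproduced.

      import numpy as np

      def hill(x, k):
          """Hill estimator of the tail index gamma = 1/alpha from the
          k largest order statistics."""
          xs = np.sort(x)
          return np.mean(np.log(xs[-k:]) - np.log(xs[-(k + 1)]))

      def windowed_hill(x, k_min, k_max):
          """Weighted average of Hill estimates over a window of tail sizes,
          with illustrative weights increasing in k (lower variance)."""
          ks = np.arange(k_min, k_max + 1)
          g = np.array([hill(x, k) for k in ks])
          w = ks / ks.sum()
          return (w * g).sum()

      # Pareto data with tail index alpha = 2, so gamma = 0.5
      rng = np.random.default_rng(2)
      x = rng.pareto(2.0, 5000) + 1.0
      print(hill(x, 200), windowed_hill(x, 100, 400))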

This nep-ets issue is ©2006 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.