nep-ets New Economics Papers
on Econometric Time Series
Issue of 2007‒01‒13
sixteen papers chosen by
Yong Yin
SUNY at Buffalo

  1. Optimal Bandwidth Selection in Heteroskedasticity-Autocorrelation Robust Testing By Yixiao Sun; Peter Phillips; Sainan Jin
  2. Efficient Conditional Quantile Estimation: The Time Series Case By Ivana Komunjer; Quang Vuong
  3. Transition Modeling and Econometric Convergence Tests By Peter C.B. Phillips; Donggyu Sul
  4. Simulation-based Estimation of Contingent-claims Prices By Peter C.B. Phillips; Jun Yu
  5. Maximum Likelihood and Gaussian Estimation of Continuous Time Models in Finance By Peter C.B. Phillips; Jun Yu
  6. Information Loss in Volatility Measurement with Flat Price Trading By Peter C.B. Phillips; Jun Yu
  7. GMM Estimation for Dynamic Panels with Fixed Effects and Strong Instruments at Unity By Chirok Han; Peter C.B. Phillips
  8. Testing for State Dependence with Time-Variant Transition Probabilities By Timothy J. Halliday
  9. Testing for Volatility Interactions in the Constant Conditional Correlation GARCH Model By Nakatani, Tomoaki; Teräsvirta, Timo
  10. Spatial effects in multivariate ARCH By Caporin Massimiliano; Paruolo Paolo
  11. Stability conditions for a Piecewise Deterministic Markov Process By Fonseca Giovanni
  12. Stability tests for heterogeneous panel data. By Felix Chan; Tommaso Mancini-Griffoli; Laurent L. Pauwels
  13. An Empirical Study of Asian Stock Volatility Using Stochastic Volatility Factor Model: Factor Analysis and Forecasting By Silvia S.W. Lui
  14. Quasi ML Estimation of the Panel AR(1) Model with Arbitrary Initial Conditions By Hugo Kruiniger
  15. Asymmetric effects and long memory in the volatility of Dow Jones stocks By Marcel Scharth; Marcelo Cunha Medeiros
  16. Bandwidth Selection for Semiparametric Estimators Using the m-out-of-n Bootstrap By Chuan Goh

  1. By: Yixiao Sun (University of California, San Diego); Peter Phillips (Cowles Foundation, Yale University, University of Auckland & University of York); Sainan Jin (Guanghua School of Management, Peking University)
    Abstract: In time series regressions with nonparametrically autocorrelated errors, it is now standard empirical practice to use kernel-based robust standard errors that involve some smoothing function over the sample autocorrelations. The underlying smoothing parameter b, which can be defined as the ratio of the bandwidth (or truncation lag) to the sample size, is a tuning parameter that plays a key role in determining the asymptotic properties of the standard errors and associated semiparametric tests. Small-b asymptotics involve standard limit theory such as standard normal or chi-squared limits, whereas fixed-b asymptotics typically lead to nonstandard limit distributions involving Brownian bridge functionals. The present paper shows that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. In particular, using asymptotic expansions of both the finite sample distribution and the nonstandard limit distribution, we confirm that the second-order corrected critical value based on the expansion of the nonstandard limiting distribution is also second-order correct under the standard small-b asymptotics. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations confirm that the new plug-in procedure works well in finite samples.
    Keywords: Asymptotic expansion, bandwidth choice, kernel method, long-run variance, loss function, nonstandard asymptotics, robust standard error, Type I and Type II errors,
    Date: 2005–10–01
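As a concrete illustration of the smoothing parameter b discussed in this abstract, here is a minimal Bartlett-kernel long-run variance estimator (a hedged sketch, not the authors' code; the function name and interface are invented for illustration):

```python
import numpy as np

def long_run_variance(u, b):
    """Bartlett-kernel estimate of the long-run variance of a series u.

    b is the smoothing ratio bandwidth/sample size from the abstract;
    the truncation lag is M = b*T.
    """
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    T = len(u)
    M = max(1, int(b * T))
    omega = np.sum(u * u) / T  # lag-0 autocovariance
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1.0)              # Bartlett weight
        gamma_j = np.sum(u[j:] * u[:-j]) / T  # lag-j autocovariance
        omega += 2.0 * w * gamma_j
    return omega
```

Fixed-b asymptotics hold b constant as T grows, while small-b asymptotics let b shrink to zero; the paper's point is that critical values derived from the fixed-b limit approximate the finite sample distribution better.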
  2. By: Ivana Komunjer (University of California, San Diego); Quang Vuong (The Pennsylvania State University)
    Abstract: In this paper we consider the problem of efficient estimation in conditional quantile models with time series data. Our first result is to derive the semiparametric efficiency bound in time series models of conditional quantiles; this is a nontrivial extension of a large body of work on efficient estimation, which has traditionally focused on models with independent and identically distributed data. In particular, we generalize the bound derived by Newey and Powell (1990) to the case where the data are weakly dependent and heterogeneous. We then proceed by constructing an M-estimator which achieves the semiparametric efficiency bound. Our efficient M-estimator is obtained by minimizing an objective function which depends on a nonparametric estimator of the conditional distribution of the variable of interest rather than its density.
    Keywords: semiparametric efficiency, time series models, dependence, parametric submodels, conditional quantiles,
    Date: 2006–10–01
  3. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Donggyu Sul (University of Auckland)
    Abstract: A new panel data model is proposed to represent the behavior of economies in transition allowing for a wide range of possible time paths and individual heterogeneity. The model has both common and individual specific components and is formulated as a nonlinear time varying factor model. When applied to a micro panel, the decomposition provides flexibility in idiosyncratic behavior over time and across the cross section, while retaining some commonality across the panel by means of an unknown common growth component. This commonality means that when the heterogeneous time varying idiosyncratic components converge over time to a constant, a form of panel convergence holds, analogous to the concept of conditional sigma convergence. The paper provides a framework of asymptotic representations for the factor components which enables the development of econometric procedures of estimation and testing. In particular, a simple regression based convergence test is developed, whose asymptotic properties are analyzed under both null and local alternatives, and a new method of clustering panels into club convergence groups is constructed. These econometric methods are applied to analyze convergence in cost of living indices among 19 U.S. metropolitan cities.
    Keywords: Club convergence, Relative convergence, Common factor, Convergence, log t regression test, Panel data, Transition
    JEL: C33 F21 G12
    Date: 2007–01
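The "log t regression test" named in the keywords can be sketched in a few lines. This follows the commonly cited form of the Phillips-Sul test; the variable names and trimming fraction here are illustrative, not the authors' code:

```python
import numpy as np

def log_t_slope(X, trim=0.3):
    """Slope of the log t convergence regression for a panel X (N x T).

    h_{it} = N*X_{it} / sum_i X_{it} is the relative transition path,
    H_t its cross-section dispersion; regress
    log(H_1/H_t) - 2*log(log t) on log t over the trimmed sample.
    A positive, significant slope is evidence of convergence.
    """
    N, T = X.shape
    h = N * X / X.sum(axis=0, keepdims=True)
    H = ((h - 1.0) ** 2).mean(axis=0)
    t0 = max(2, int(trim * T))            # discard early transition periods
    t = np.arange(t0, T + 1)              # calendar index, t >= 2
    y = np.log(H[0] / H[t - 1]) - 2.0 * np.log(np.log(t))
    slope, _ = np.polyfit(np.log(t), y, 1)
    return slope
```

For a panel whose relative transition paths collapse toward one, H_t shrinks and the fitted slope is positive, matching the convergence null described in the abstract.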
  4. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Jun Yu (Singapore Management University)
    Abstract: A new methodology is proposed to estimate theoretical prices of financial contingent-claims whose values are dependent on some other underlying financial assets. In the literature the preferred choice of estimator is usually maximum likelihood (ML). ML has strong asymptotic justification but is not necessarily the best method in finite samples. The present paper proposes instead a simulation-based method that improves the finite sample performance of the ML estimator while maintaining its good asymptotic properties. The methods are implemented and evaluated here in the Black-Scholes option pricing model and in the Vasicek bond pricing model, but have wider applicability. Monte Carlo studies show that the proposed procedures achieve bias reductions over ML estimation in pricing contingent claims. The bias reductions are sometimes accompanied by reductions in variance, leading to significant overall gains in mean squared estimation error. Empirical applications to U.S. Treasury bills highlight the differences between the bond prices implied by the simulation-based approach and those delivered by ML. Some consequences for the statistical testing of contingent-claim pricing models are discussed.
    Keywords: Bias reduction, Bond pricing, Indirect inference, Option pricing, Simulation-based estimation
    JEL: C15 G12
    Date: 2007–01
  5. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Jun Yu (Singapore Management University)
    Abstract: This paper overviews maximum likelihood and Gaussian methods of estimating continuous time models used in finance. Since the exact likelihood can be constructed only in special cases, much attention has been devoted to the development of methods designed to approximate the likelihood. These approaches range from crude Euler-type approximations and higher order stochastic Taylor series expansions to more complex polynomial-based expansions and infill approximations to the likelihood based on a continuous time data record. The methods are discussed, their properties are outlined and their relative finite sample performance compared in a simulation experiment with the nonlinear CIR diffusion model, which is popular in empirical finance. Bias correction methods are also considered and particular attention is given to jackknife and indirect inference estimators. The latter retains the good asymptotic properties of ML estimation while removing finite sample bias. This method demonstrates superior performance in finite samples.
    Keywords: Maximum likelihood, Transition density, Discrete sampling, Continuous record, Realized volatility, Bias reduction, Jackknife, Indirect inference
    JEL: C22 C32
    Date: 2007–01
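The crudest approximation class surveyed in the abstract, the Euler scheme, is easy to write down. Here is a sketch for the Vasicek model dX = kappa*(mu - X)dt + sigma dW; the function and parameter names are our own, not from the paper:

```python
import numpy as np

def euler_loglik(x, dt, kappa, mu, sigma):
    """Euler-discretized Gaussian log-likelihood for Vasicek data x
    observed at interval dt: X_{t+1} | X_t is approximated as
    N(X_t + kappa*(mu - X_t)*dt, sigma^2 * dt)."""
    x = np.asarray(x, dtype=float)
    mean = x[:-1] + kappa * (mu - x[:-1]) * dt
    var = sigma ** 2 * dt
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + resid ** 2 / var)
```

The discretization bias this crude scheme induces is exactly what the higher-order expansions and the jackknife and indirect-inference corrections surveyed in the paper aim to remove.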
  6. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Jun Yu (Singapore Management University)
    Abstract: A model of price determination is proposed that incorporates flat trading features into an efficient price process. The model involves the superposition of a Brownian semimartingale process for the efficient price and a Bernoulli process that determines the extent of flat price trading. A limit theory for the conventional realized volatility (RV) measure of integrated volatility is developed. The results show that RV is still consistent but has an inflated asymptotic variance that depends on the probability of flat trading. Estimated quarticity is similarly affected, so that both the feasible central limit theorem and the inferential framework suggested in Barndorff-Nielsen and Shephard (2002) remain valid under flat price trading.
    Keywords: Bernoulli process, Brownian semimartingale, Flat trading, Quarticity function, Realized volatility
    JEL: C15 G12
    Date: 2007–01
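The mechanism described in the abstract can be mimicked in a few lines: superimpose Bernoulli flat spells on a Brownian-motion log price and compute realized volatility from observed returns. This is a toy sketch under our own simplifying assumptions, not the paper's exact model:

```python
import numpy as np

def observed_price(n, sigma, p_flat, rng):
    """Efficient log price: Brownian motion with integrated variance
    sigma^2 over [0,1]; with probability p_flat no trade occurs and
    the observed price stays flat at its previous value."""
    eff = np.cumsum(sigma * rng.standard_normal(n) / np.sqrt(n))
    obs = eff.copy()
    for i in range(1, n):
        if rng.random() < p_flat:
            obs[i] = obs[i - 1]
    return obs

def realized_volatility(p):
    """Sum of squared intraday returns."""
    return np.sum(np.diff(p) ** 2)
```

In repeated draws from this toy setup, RV stays centred near sigma^2 even with frequent flat spells but becomes more dispersed as p_flat grows, in line with the consistency and variance-inflation results stated above.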
  7. By: Chirok Han (Victoria University of Wellington); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: This paper develops new estimation and inference procedures for dynamic panel data models with fixed effects and incidental trends. A simple consistent GMM estimation method is proposed that avoids the weak moment condition problem that is known to affect conventional GMM estimation when the autoregressive coefficient (rho) is near unity. In both panel and time series cases, the estimator has standard Gaussian asymptotics for all values of rho in (-1, 1] irrespective of how the composite cross section and time series sample sizes pass to infinity. Simulations reveal that the estimator has little bias even in very small samples. The approach is applied to panel unit root testing.
    Keywords: Asymptotic normality, Asymptotic power envelope, Moment conditions, Panel unit roots, Point optimal test, Unit root tests, Weak instruments
    JEL: C22 C23
    Date: 2007–01
  8. By: Timothy J. Halliday (Department of Economics, University of Hawaii at Manoa; John A. Burns School of Medicine)
    Abstract: We consider the identification of state dependence in a dynamic Logit model with time-variant transition probabilities and an arbitrary distribution of the unobserved heterogeneity. We derive a simple result that allows us to test for the presence of state dependence in this model. Monte Carlo evidence suggests that this test has desirable properties even when there are some violations of the model’s assumptions. We also consider alternative tests for state dependence that will have desirable properties only when the transition probabilities do not depend on time and provide evidence that there is an "acceptable" range in which ignoring time-dependence does not matter too much. We conclude with an application to the Barker Hypothesis.
    Keywords: Dynamic Panel Data Models, State Dependence, Health
    Date: 2006–12–31
  9. By: Nakatani, Tomoaki (Dept. of Economic Statistics, Stockholm School of Economics); Teräsvirta, Timo (School of Management and Economics)
    Abstract: In this paper we propose a Lagrange multiplier (LM) test for volatility interactions among markets or assets. The null hypothesis is the Constant Conditional Correlation (CCC) GARCH model of Bollerslev (1990) in which volatility of an asset is described only through lagged squared innovations and volatility of its own. The alternative hypothesis is an extension of that model in which volatility is modelled as a linear combination not only of its own lagged squared residuals and volatility but also of those in the other equations while keeping the conditional correlation structure constant. This configuration enables us to test for volatility transmissions among variables in the model. We derive an LM test of the null hypothesis. Monte Carlo experiments show that the test has satisfactory finite sample properties. The size distortions become negligible when the sample size reaches 2000. The test is applied to pairs of foreign exchange returns and individual stock returns. Results indicate that six pairs out of seven investigated seem to have volatility interactions, and that significant interaction effects typically result from the lagged squared innovations of the other variables.
    Keywords: Multivariate GARCH; Volatility interactions; Lagrange multiplier test; Monte Carlo simulation; Conditional correlations
    JEL: C12 C32 C51 C52 G19
    Date: 2007–01–05
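The null model in the abstract, CCC-GARCH, is simple to state in code: each series follows its own GARCH(1,1) variance recursion and the conditional covariance is H_t = D_t R D_t with constant correlation matrix R. This is a sketch with hypothetical parameter names, not the authors' implementation:

```python
import numpy as np

def ccc_garch_cov(eps, omega, alpha, beta, R):
    """Conditional covariances of a CCC-GARCH(1,1) model: each
    h_{i,t} = omega_i + alpha*eps_{i,t-1}^2 + beta*h_{i,t-1}, and
    H_t = D_t R D_t with D_t = diag(sqrt(h_t)) and constant R."""
    T, N = eps.shape
    h = np.empty((T, N))
    h[0] = omega / (1.0 - alpha - beta)  # start at unconditional variance
    for t in range(1, T):
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    D = np.sqrt(h)
    return np.array([np.outer(D[t], D[t]) * R for t in range(T)])
```

The alternative in the paper adds cross-equation terms (lagged squared residuals and volatilities of the other assets) to each h recursion; the LM test asks whether those extra coefficients are zero.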
  10. By: Caporin Massimiliano (Department of Economics, University of Padova, Italy); Paruolo Paolo (Department of Economics, University of Insubria, Italy)
    Abstract: This paper proposes a new approach for the specification of multivariate GARCH models for data sets with a potentially large cross-section dimension. The approach exploits the spatial dependence structure associated with asset characteristics, like industrial sectors and capitalization size. We use the acronym SEARCH for this model, short for Spatial Effects in ARCH. This parametrization extends current feasible specifications for large scale GARCH models, while keeping the number of parameters linear with respect to the number of assets. An application to daily returns on 20 stocks from the NYSE for the period January 1994 to June 2001 shows the benefits of the present specification.
    JEL: C32 C51 C52
    Date: 2005–05
  11. By: Fonseca Giovanni (Department of Economics, University of Insubria, Italy)
    Abstract: In the present paper we study the stability of a threshold continuous-time model that belongs to the class of Piecewise Deterministic Markov Processes. We derive a sufficient condition on the coefficients of the model to ensure the exponential ergodicity of the process under two different assumptions on the jumps.
    Keywords: Threshold process, Compound Poisson Process, Stationary process, Ergodicity.
    Date: 2005–05
  12. By: Felix Chan; Tommaso Mancini-Griffoli; Laurent L. Pauwels
    Abstract: This paper proposes a new test for structural instability in heterogeneous panels. The test builds on the seminal work of Andrews (2003) originally developed for time series. It is robust to non-normal, heteroskedastic and serially correlated errors, and allows for the number of post-break observations to be small. Importantly, the test considers the alternative of a break affecting only some - and not all - individuals of the panel. Under mild assumptions the test statistic is shown to be asymptotically normal, thanks to the additional cross-sectional dimension of panel data. This greatly facilitates the calculation of critical values. Monte Carlo experiments show that the test has good size and power under a wide range of circumstances. The test is then applied to investigate the effect of the Euro on trade.
    Date: 2006
  13. By: Silvia S.W. Lui (Queen Mary, University of London)
    Abstract: This paper is an empirical study of Asian stock volatility using the stochastic volatility factor (SVF) model of Cipollini and Kapetanios (2005). We adopt their approach to carry out factor analysis and to forecast volatility. Our results show that some Asian factors exhibit long memory, in line with existing empirical findings on financial volatility. However, their local-factor SVF model is not powerful enough to forecast Asian volatility. This has led us to propose an extension to a multi-factor SVF model. We also discuss how to produce forecasts using this multi-factor model.
    Keywords: Stochastic volatility, Local-factor model, Multi-factor model, Principal components, Forecasting
    JEL: C32 C33 C53 G15
    Date: 2006–12
  14. By: Hugo Kruiniger (Queen Mary, University of London)
    Abstract: In this paper we show that the Quasi ML estimation method yields consistent Random and Fixed Effects estimators for the autoregression parameter ρ in the panel AR(1) model with arbitrary initial conditions even when the errors are drawn from heterogeneous distributions. We compare both analytically and by means of Monte Carlo simulations the QML estimators with the GMM estimator proposed by Arellano and Bond (1991) [AB], which ignores some of the moment conditions implied by the model. Unlike the AB GMM estimator, the QML estimators for ρ only suffer from a weak instruments problem when ρ is close to one if the cross-sectional average of the variances of the errors is constant over time, e.g. under time-series homoskedasticity. However, even in this case the QML estimators are still consistent when ρ is equal to one and they display only a relatively small bias when ρ is close to one. In contrast, the AB GMM estimator is inconsistent when ρ is equal to one, and is severely biased when ρ is close to one. Finally, we study the finite sample properties of two types of estimators for the standard errors of the QML estimators for ρ, and the bounds of QML based confidence intervals for ρ.
    Keywords: Dynamic panel data, Initial conditions, Quasi ML, GMM, Weak moment conditions, Local-to-zero asymptotics
    JEL: C12 C13 C23
    Date: 2006–12
  15. By: Marcel Scharth (Department of Economics - PUC-Rio); Marcelo Cunha Medeiros (Department of Economics PUC-Rio)
    Abstract: Does volatility reflect a continuous reaction to past shocks, or do changes in the markets induce shifts in the volatility dynamics? In this paper, we provide empirical evidence that cumulated price variations convey meaningful information about multiple regimes in the realized volatility of stocks, where large falls (rises) in prices are linked to persistent regimes of high (low) variance in stock returns. Incorporating past cumulated daily returns as an explanatory variable in a flexible and systematic nonlinear framework, we estimate that falls of different magnitudes over less than two months are associated with volatility levels between 20% and 60% higher than the average of periods with stable or rising prices. We show that this effect accounts for large empirical values of long memory parameter estimates. Finally, we show that the proposed model significantly improves out-of-sample performance relative to standard methods. This result is more pronounced in periods of high volatility.
    Keywords: Realized volatility, long memory, nonlinear models, asymmetric effects, regime switching, regression trees, smooth transition, value-at-risk, forecasting, empirical finance.
    Date: 2006–11
  16. By: Chuan Goh
    Abstract: This paper considers a class of semiparametric estimators that take the form of density-weighted averages. These arise naturally in a consideration of semiparametric methods for the estimation of index and sample-selection models involving preliminary kernel density estimates. The question considered in this paper is that of selecting the degree of smoothing to be used in computing the preliminary density estimate. This paper proposes a bootstrap method for estimating the mean squared error and associated optimal bandwidth. The particular bootstrap method suggested here involves using a resample of smaller size than the original sample. This method of bandwidth selection is presented with specific reference to the case of estimators of average densities, of density-weighted average derivatives and of density-weighted conditional covariances.
    Keywords: bandwidth selection, density-weighted averages, bootstrap, m-out-of-n bootstrap, kernel density estimation
    JEL: C14
    Date: 2007–01–02
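To fix ideas, here is a sketch of the m-out-of-n idea for the simplest density-weighted average mentioned in the abstract, the average density E[f(X)]. The bookkeeping and the pilot-bandwidth choice are our own placeholders, not the paper's exact recipe:

```python
import numpy as np

def avg_density(x, h):
    """Leave-one-out kernel estimate of the average density E[f(X)]:
    (1/(n(n-1))) * sum_{i != j} K((x_i - x_j)/h)/h, Gaussian kernel."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d ** 2) / np.sqrt(2.0 * np.pi)
    np.fill_diagonal(K, 0.0)
    return K.sum() / (n * (n - 1) * h)

def moon_bootstrap_mse(x, h, m, B, rng):
    """m-out-of-n bootstrap estimate of the MSE of avg_density at
    bandwidth h: resample m < n points B times, treating the
    full-sample estimate at a pilot bandwidth as the 'truth'.
    Minimizing this over a grid of h gives the selected bandwidth."""
    pilot = avg_density(x, h=len(x) ** (-1.0 / 5))
    reps = np.array([avg_density(rng.choice(x, size=m, replace=True), h)
                     for _ in range(B)])
    return np.mean((reps - pilot) ** 2)
```

The key design choice, as in the paper, is that the resample size m is smaller than n, which is what makes the bootstrap MSE a usable criterion here.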

This nep-ets issue is ©2007 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.