
on Econometric Time Series 
By:  Yixiao Sun (University of California, San Diego); Peter Phillips (Cowles Foundation, Yale University, University of Auckland & University of York); Sainan Jin (Guanghua School of Management, Peking University) 
Abstract:  In time series regressions with nonparametrically autocorrelated errors, it is now standard empirical practice to use kernel-based robust standard errors that involve some smoothing function over the sample autocorrelations. The underlying smoothing parameter b, which can be defined as the ratio of the bandwidth (or truncation lag) to the sample size, is a tuning parameter that plays a key role in determining the asymptotic properties of the standard errors and associated semiparametric tests. Small-b asymptotics involve standard limit theory such as standard normal or chi-squared limits, whereas fixed-b asymptotics typically lead to nonstandard limit distributions involving Brownian bridge functionals. The present paper shows that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. In particular, using asymptotic expansions of both the finite sample distribution and the nonstandard limit distribution, we confirm that the second-order corrected critical value based on the expansion of the nonstandard limiting distribution is also second-order correct under the standard small-b asymptotics. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations confirm that the new plug-in procedure works well in finite samples. 
Keywords:  Asymptotic expansion, bandwidth choice, kernel method, long-run variance, loss function, nonstandard asymptotics, robust standard error, Type I and Type II errors 
Date:  2005–10–01 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsdec:200512&r=ets 
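The smoothing described in the abstract above can be made concrete with a minimal sketch of a Bartlett-kernel long-run variance estimator, in which the smoothing parameter b maps to a truncation lag M ≈ bT; the function name and interface are illustrative, not the paper's:

```python
import numpy as np

def lrv_bartlett(u, b):
    """Bartlett-kernel long-run variance estimate for a scalar series u,
    with smoothing parameter b = bandwidth / sample size (illustrative API)."""
    u = np.asarray(u, dtype=float)
    T = u.size
    M = max(1, int(b * T))            # truncation lag implied by b
    u = u - u.mean()                  # work with demeaned series
    gamma0 = np.dot(u, u) / T         # sample variance
    lrv = gamma0
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1)         # Bartlett weight, decaying to zero at lag M+1
        gamma_j = np.dot(u[j:], u[:-j]) / T
        lrv += 2.0 * w * gamma_j      # add weighted autocovariances symmetrically
    return lrv
```

Holding b fixed as T grows corresponds to the fixed-b setting discussed in the abstract; letting b shrink toward zero recovers the small-b setting.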
By:  Ivana Komunjer (University of California, San Diego); Quang Vuong (The Pennsylvania State University) 
Abstract:  In this paper we consider the problem of efficient estimation in conditional quantile models with time series data. Our first result is to derive the semiparametric efficiency bound in time series models of conditional quantiles; this is a nontrivial extension of a large body of work on efficient estimation, which has traditionally focused on models with independent and identically distributed data. In particular, we generalize the bound derived by Newey and Powell (1990) to the case where the data is weakly dependent and heterogeneous. We then proceed by constructing an M-estimator which achieves the semiparametric efficiency bound. Our efficient M-estimator is obtained by minimizing an objective function which depends on a nonparametric estimator of the conditional distribution of the variable of interest rather than its density. 
Keywords:  semiparametric efficiency, time series models, dependence, parametric submodels, conditional quantiles 
Date:  2006–10–01 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsdec:200610&r=ets 
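For readers less familiar with conditional quantile estimation, the standard building block is the Koenker-Bassett check function, which the efficient M-estimator above generalizes; the grid-search minimizer below is purely illustrative and is not the paper's estimator:

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check function: rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0.0))

def sample_quantile_by_loss(y, tau, grid):
    """The tau-th quantile minimizes expected check loss; here we simply
    scan a grid of candidate values (illustrative, not an efficient method)."""
    losses = [check_loss(np.asarray(y) - c, tau).sum() for c in grid]
    return grid[int(np.argmin(losses))]
```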
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Donggyu Sul (University of Auckland) 
Abstract:  A new panel data model is proposed to represent the behavior of economies in transition, allowing for a wide range of possible time paths and individual heterogeneity. The model has both common and individual specific components and is formulated as a nonlinear time-varying factor model. When applied to a micro panel, the decomposition provides flexibility in idiosyncratic behavior over time and across section, while retaining some commonality across the panel by means of an unknown common growth component. This commonality means that when the heterogeneous time-varying idiosyncratic components converge over time to a constant, a form of panel convergence holds, analogous to the concept of conditional sigma convergence. The paper provides a framework of asymptotic representations for the factor components which enables the development of econometric procedures of estimation and testing. In particular, a simple regression-based convergence test is developed, whose asymptotic properties are analyzed under both null and local alternatives, and a new method of clustering panels into club convergence groups is constructed. These econometric methods are applied to analyze convergence in cost of living indices among 19 US metropolitan cities. 
Keywords:  Club convergence, Relative convergence, Common factor, Convergence, log t regression test, Panel data, Transition 
JEL:  C33 F21 G12 
Date:  2007–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1595&r=ets 
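The log t regression test named in the keywords can be sketched roughly as follows, following the usual description of the Phillips-Sul procedure: compute relative transition paths h_it, their cross-sectional variance H_t, and regress log(H_1/H_t) - 2 log(log t) on log t over the later part of the sample. The trimming fraction and other implementation details here are common-practice assumptions, not taken from the source:

```python
import numpy as np

def log_t_test_slope(X, r=0.3):
    """Slope b from the log t regression applied to a (T, N) panel X.
    A significantly negative b rejects convergence; b >= 0 is consistent
    with convergence. Sketch only; trimming fraction r is an assumption."""
    X = np.asarray(X, dtype=float)
    T = X.shape[0]
    h = X / X.mean(axis=1, keepdims=True)        # relative transition paths
    H = ((h - 1.0) ** 2).mean(axis=1)            # cross-sectional variance of h
    t0 = max(2, int(r * T))                      # discard early observations
    t = np.arange(t0, T + 1)
    y = np.log(H[0] / H[t - 1]) - 2.0 * np.log(np.log(t))
    Z = np.column_stack([np.ones(t.size), np.log(t)])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef[1]
```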
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Jun Yu (Singapore Management University) 
Abstract:  A new methodology is proposed to estimate theoretical prices of financial contingent claims whose values are dependent on some other underlying financial assets. In the literature the preferred choice of estimator is usually maximum likelihood (ML). ML has strong asymptotic justification but is not necessarily the best method in finite samples. The present paper proposes instead a simulation-based method that improves the finite sample performance of the ML estimator while maintaining its good asymptotic properties. The methods are implemented and evaluated here in the Black-Scholes option pricing model and in the Vasicek bond pricing model, but have wider applicability. Monte Carlo studies show that the proposed procedures achieve bias reductions over ML estimation in pricing contingent claims. The bias reductions are sometimes accompanied by reductions in variance, leading to significant overall gains in mean squared estimation error. Empirical applications to US Treasury bills highlight the differences between the bond prices implied by the simulation-based approach and those delivered by ML. Some consequences for the statistical testing of contingent-claim pricing models are discussed. 
Keywords:  Bias reduction, Bond pricing, Indirect inference, Option pricing, Simulation-based estimation 
JEL:  C15 G12 
Date:  2007–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1596&r=ets 
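As a reference point for the option pricing application above, here is the standard Black-Scholes call price. In estimation, the volatility sigma is replaced by an estimate (e.g. ML or a simulation-based alternative), which is where finite sample estimation bias feeds into the implied contingent-claim price:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(s, k, r, sigma, t):
    """Black-Scholes price of a European call with spot s, strike k,
    risk-free rate r, volatility sigma, and time to maturity t."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)
```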
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Jun Yu (Singapore Management University) 
Abstract:  This paper overviews maximum likelihood and Gaussian methods of estimating continuous time models used in finance. Since the exact likelihood can be constructed only in special cases, much attention has been devoted to the development of methods designed to approximate the likelihood. These approaches range from crude Euler-type approximations and higher order stochastic Taylor series expansions to more complex polynomial-based expansions and infill approximations to the likelihood based on a continuous time data record. The methods are discussed, their properties are outlined, and their relative finite sample performance is compared in a simulation experiment with the nonlinear CIR diffusion model, which is popular in empirical finance. Bias correction methods are also considered, and particular attention is given to jackknife and indirect inference estimators. The latter retains the good asymptotic properties of ML estimation while removing finite sample bias. This method demonstrates superior performance in finite samples. 
Keywords:  Maximum likelihood, Transition density, Discrete sampling, Continuous record, Realized volatility, Bias reduction, Jackknife, Indirect inference 
JEL:  C22 C32 
Date:  2007–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1597&r=ets 
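The crude Euler-type approximation mentioned above can be sketched for a CIR-type diffusion dr = kappa(theta - r)dt + sigma*sqrt(r)dW as a Gaussian pseudo-likelihood over discretely sampled observations; this is a sketch of the approximation class discussed, not exact ML:

```python
from math import log, pi

def euler_loglik_cir(r, kappa, theta, sigma, dt):
    """Euler-approximated Gaussian log-likelihood for CIR data r (a sequence
    of observations dt apart): r_{t+dt} | r_t is approximated as
    N(r_t + kappa*(theta - r_t)*dt, sigma^2 * r_t * dt)."""
    ll = 0.0
    for r0, r1 in zip(r[:-1], r[1:]):
        mean = r0 + kappa * (theta - r0) * dt   # Euler drift step
        var = sigma ** 2 * r0 * dt              # Euler diffusion step
        ll += -0.5 * (log(2.0 * pi * var) + (r1 - mean) ** 2 / var)
    return ll
```

Maximizing this over (kappa, theta, sigma) gives the Euler pseudo-ML estimator, whose discretization bias motivates the refinements surveyed in the paper.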
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Jun Yu (Singapore Management University) 
Abstract:  A model of price determination is proposed that incorporates flat trading features into an efficient price process. The model involves the superposition of a Brownian semimartingale process for the efficient price and a Bernoulli process that determines the extent of flat price trading. A limit theory for the conventional realized volatility (RV) measure of integrated volatility is developed. The results show that RV is still consistent but has an inflated asymptotic variance that depends on the probability of flat trading. Estimated quarticity is similarly affected, so that both the feasible central limit theorem and the inferential framework suggested in Barndorff-Nielsen and Shephard (2002) remain valid under flat price trading. 
Keywords:  Bernoulli process, Brownian semimartingale, Flat trading, Quarticity function, Realized volatility 
JEL:  C15 G12 
Date:  2007–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1598&r=ets 
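A minimal simulation of the flat trading idea, with realized volatility computed as the sum of squared intraday returns; the Bernoulli suppression of price moves below is an illustrative stand-in for the paper's model, not its exact specification:

```python
import numpy as np

def simulate_flat_prices(n, p_flat, sigma=1.0, seed=0):
    """Efficient log-price as a discretized Brownian motion over n steps,
    with each increment suppressed with probability p_flat to mimic
    flat trading (illustrative simulation)."""
    rng = np.random.default_rng(seed)
    moves = rng.normal(0.0, sigma / np.sqrt(n), n)
    trade = rng.random(n) >= p_flat        # True when a new price is recorded
    return np.cumsum(np.where(trade, moves, 0.0))

def realized_volatility(logp):
    """RV = sum of squared returns of the log-price path."""
    r = np.diff(logp)
    return np.sum(r ** 2)
```

Comparing the dispersion of RV across replications for p_flat = 0 versus p_flat > 0 illustrates the variance inflation described in the abstract.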
By:  Chirok Han (Victoria University of Wellington); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  This paper develops new estimation and inference procedures for dynamic panel data models with fixed effects and incidental trends. A simple consistent GMM estimation method is proposed that avoids the weak moment condition problem that is known to affect conventional GMM estimation when the autoregressive coefficient (rho) is near unity. In both panel and time series cases, the estimator has standard Gaussian asymptotics for all values of rho in (-1, 1] irrespective of how the composite cross section and time series sample sizes pass to infinity. Simulations reveal that the estimator has little bias even in very small samples. The approach is applied to panel unit root testing. 
Keywords:  Asymptotic normality, Asymptotic power envelope, Moment conditions, Panel unit roots, Point optimal test, Unit root tests, Weak instruments 
JEL:  C22 C23 
Date:  2007–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1599&r=ets 
By:  Timothy J. Halliday (Department of Economics, University of Hawaii at Manoa; John A. Burns School of Medicine) 
Abstract:  We consider the identification of state dependence in a dynamic Logit model with time-variant transition probabilities and an arbitrary distribution of the unobserved heterogeneity. We derive a simple result that allows us to test for the presence of state dependence in this model. Monte Carlo evidence suggests that this test has desirable properties even when there are some violations of the model's assumptions. We also consider alternative tests for state dependence that will have desirable properties only when the transition probabilities do not depend on time and provide evidence that there is an "acceptable" range in which ignoring time dependence does not matter too much. We conclude with an application to the Barker Hypothesis. 
Keywords:  Dynamic Panel Data Models, State Dependence, Health 
Date:  2006–12–31 
URL:  http://d.repec.org/n?u=RePEc:hai:wpaper:200614&r=ets 
By:  Nakatani, Tomoaki (Dept. of Economic Statistics, Stockholm School of Economics); Teräsvirta, Timo (School of Management and Economics) 
Abstract:  In this paper we propose a Lagrange multiplier (LM) test for volatility interactions among markets or assets. The null hypothesis is the Constant Conditional Correlation (CCC) GARCH model of Bollerslev (1990), in which the volatility of an asset is described only through its own lagged squared innovations and volatility. The alternative hypothesis is an extension of that model in which volatility is modelled as a linear combination not only of its own lagged squared residuals and volatility but also of those in the other equations, while keeping the conditional correlation structure constant. This configuration enables us to test for volatility transmissions among variables in the model. We derive an LM test of the null hypothesis. Monte Carlo experiments show that the test has satisfactory finite sample properties. The size distortions become negligible when the sample size reaches 2000. The test is applied to pairs of foreign exchange returns and individual stock returns. Results indicate that six of the seven pairs investigated seem to have volatility interactions, and that significant interaction effects typically result from the lagged squared innovations of the other variables. 
Keywords:  Multivariate GARCH; Volatility interactions; Lagrange multiplier test; Monte Carlo simulation; Conditional correlations 
JEL:  C12 C32 C51 C52 G19 
Date:  2007–01–05 
URL:  http://d.repec.org/n?u=RePEc:hhs:hastef:0649&r=ets 
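The null and alternative described above can be pictured with a conditional variance recursion in which off-diagonal coefficients carry volatility spillovers across equations; the matrix parametrization below is a common one for extended CCC-GARCH models and is illustrative rather than taken verbatim from the paper:

```python
import numpy as np

def eccc_garch_variances(eps, omega, A, B):
    """Conditional variance recursion h_t = omega + A @ eps_{t-1}^2 + B @ h_{t-1}
    for an n-asset panel of residuals eps (shape (T, n)). Off-diagonal
    entries of A and B carry volatility spillovers; the CCC null of no
    interaction sets them to zero. Start value is a rough diagonal-based
    unconditional approximation."""
    eps = np.asarray(eps, dtype=float)
    T, n = eps.shape
    h = np.empty((T, n))
    h[0] = omega / (1.0 - np.diag(A) - np.diag(B))
    for t in range(1, T):
        h[t] = omega + A @ (eps[t - 1] ** 2) + B @ h[t - 1]
    return h
```

An LM test of the kind proposed evaluates the score of the spillover parameters at the CCC null, so only the restricted model needs to be estimated.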
By:  Caporin Massimiliano (Department of Economics, University of Padova, Italy); Paruolo Paolo (Department of Economics, University of Insubria, Italy) 
Abstract:  This paper proposes a new approach for the specification of multivariate GARCH models for data sets with a potentially large cross-section dimension. The approach exploits the spatial dependence structure associated with asset characteristics, like industrial sectors and capitalization size. We use the acronym SEARCH for this model, short for Spatial Effects in ARCH. This parametrization extends current feasible specifications for large scale GARCH models, while keeping the number of parameters linear with respect to the number of assets. An application to daily returns on 20 stocks from the NYSE for the period January 1994 to June 2001 shows the benefits of the present specification. 
JEL:  C32 C51 C52 
Date:  2005–05 
URL:  http://d.repec.org/n?u=RePEc:ins:quaeco:qf0501&r=ets 
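One way to picture the SEARCH idea is a variance recursion whose spillover term works through a spatial weight matrix W built from asset characteristics (sector, size), so that only a few scalar parameters are added regardless of the number of assets; the parametrization below is a hedged sketch, not the paper's exact specification:

```python
import numpy as np

def spatial_arch_step(eps2_lag, h_lag, omega, alpha, beta, phi, W):
    """One step of a spatial-ARCH style recursion:
    h_t = omega + (alpha*I + phi*W) @ eps^2_{t-1} + beta * h_{t-1},
    where W is a (row-normalized) spatial weight matrix. The scalar
    parameters (alpha, beta, phi) plus the n-vector omega keep the
    parameter count linear in the number of assets n."""
    n = len(eps2_lag)
    A = alpha * np.eye(n) + phi * np.asarray(W, dtype=float)
    return omega + A @ np.asarray(eps2_lag, dtype=float) + beta * np.asarray(h_lag, dtype=float)
```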
By:  Fonseca Giovanni (Department of Economics, University of Insubria, Italy) 
Abstract:  In the present paper we study the stability of a threshold continuous-time model that belongs to the class of Piecewise Deterministic Markov Processes. We derive a sufficient condition on the coefficients of the model to ensure the exponential ergodicity of the process under two different assumptions on the jumps. 
Keywords:  Threshold process, Compound Poisson Process, Stationary process, Ergodicity. 
Date:  2005–05 
URL:  http://d.repec.org/n?u=RePEc:ins:quaeco:qf0502&r=ets 
By:  Felix Chan; Tommaso Mancini-Griffoli; Laurent L. Pauwels 
Abstract:  This paper proposes a new test for structural instability in heterogeneous panels. The test builds on the seminal work of Andrews (2003), originally developed for time series. It is robust to nonnormal, heteroskedastic and serially correlated errors, and allows the number of post-break observations to be small. Importantly, the test considers the alternative of a break affecting only some, and not all, individuals of the panel. Under mild assumptions the test statistic is shown to be asymptotically normal, thanks to the additional cross-sectional dimension of panel data. This greatly facilitates the calculation of critical values. Monte Carlo experiments show that the test has good size and power under a wide range of circumstances. The test is then applied to investigate the effect of the Euro on trade. 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:pse:psecon:200649&r=ets 
By:  Silvia S.W. Lui (Queen Mary, University of London) 
Abstract:  This paper is an empirical study of Asian stock volatility using the stochastic volatility factor (SVF) model of Cipollini and Kapetanios (2005). We adopt their approach to carry out factor analysis and to forecast volatility. Our results show that some Asian factors exhibit long memory, in line with existing empirical findings on financial volatility. However, their local-factor SVF model is not powerful enough for forecasting Asian volatility. This leads us to propose an extension to a multi-factor SVF model. We also discuss how to produce forecasts using this multi-factor model. 
Keywords:  Stochastic volatility, Local-factor model, Multi-factor model, Principal components, Forecasting 
JEL:  C32 C33 C53 G15 
Date:  2006–12 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp581&r=ets 
By:  Hugo Kruiniger (Queen Mary, University of London) 
Abstract:  In this paper we show that the Quasi ML estimation method yields consistent Random and Fixed Effects estimators for the autoregression parameter ρ in the panel AR(1) model with arbitrary initial conditions, even when the errors are drawn from heterogeneous distributions. We compare, both analytically and by means of Monte Carlo simulations, the QML estimators with the GMM estimator proposed by Arellano and Bond (1991) [AB], which ignores some of the moment conditions implied by the model. Unlike the AB GMM estimator, the QML estimators for ρ only suffer from a weak instruments problem when ρ is close to one if the cross-sectional average of the variances of the errors is constant over time, e.g. under time-series homoskedasticity. However, even in this case the QML estimators are still consistent when ρ is equal to one, and they display only a relatively small bias when ρ is close to one. In contrast, the AB GMM estimator is inconsistent when ρ is equal to one, and is severely biased when ρ is close to one. Finally, we study the finite sample properties of two types of estimators for the standard errors of the QML estimators for ρ, and the bounds of QML-based confidence intervals for ρ. 
Keywords:  Dynamic panel data, Initial conditions, Quasi ML, GMM, Weak moment conditions, Local-to-zero asymptotics 
JEL:  C12 C13 C23 
Date:  2006–12 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp582&r=ets 
By:  Marcel Scharth (Department of Economics, PUC-Rio); Marcelo Cunha Medeiros (Department of Economics, PUC-Rio) 
Abstract:  Does volatility reflect a continuous reaction to past shocks, or do changes in the markets induce shifts in the volatility dynamics? In this paper, we provide empirical evidence that cumulated price variations convey meaningful information about multiple regimes in the realized volatility of stocks, where large falls (rises) in prices are linked to persistent regimes of high (low) variance in stock returns. Incorporating past cumulated daily returns as an explanatory variable in a flexible and systematic nonlinear framework, we estimate that falls of different magnitudes over less than two months are associated with volatility levels 20% and 60% higher than the average of periods with stable or rising prices. We show that this effect accounts for large empirical values of long memory parameter estimates. Finally, we show that the proposed model significantly improves out-of-sample performance relative to standard methods. This result is more pronounced in periods of high volatility. 
Keywords:  Realized volatility, long memory, nonlinear models, asymmetric effects, regime switching, regression trees, smooth transition, value-at-risk, forecasting, empirical finance. 
Date:  2006–11 
URL:  http://d.repec.org/n?u=RePEc:rio:texdis:532&r=ets 
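The regime variable described in the abstract, past cumulated daily returns over a fixed window, is straightforward to compute from a return series; the window length below is illustrative, not the paper's choice:

```python
import numpy as np

def cumulated_returns(r, window):
    """Past cumulated return over `window` observations: the sum of the
    last `window` returns up to and including each date. Entries with an
    incomplete history are NaN."""
    r = np.asarray(r, dtype=float)
    c = np.cumsum(r)
    out = np.full(r.size, np.nan)
    out[window:] = c[window:] - c[:-window]   # rolling window sums
    return out
```

This series can then enter a nonlinear volatility model as the transition or split variable separating high- and low-variance regimes.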
By:  Chuan Goh 
Abstract:  This paper considers a class of semiparametric estimators that take the form of density-weighted averages. These arise naturally in a consideration of semiparametric methods for the estimation of index and sample-selection models involving preliminary kernel density estimates. The question considered in this paper is that of selecting the degree of smoothing to be used in computing the preliminary density estimate. This paper proposes a bootstrap method for estimating the mean squared error and associated optimal bandwidth. The particular bootstrap method suggested here involves using a resample of smaller size than the original sample. This method of bandwidth selection is presented with specific reference to the case of estimators of average densities, of density-weighted average derivatives and of density-weighted conditional covariances. 
Keywords:  bandwidth selection, density-weighted averages, bootstrap, m-out-of-n bootstrap, kernel density estimation 
JEL:  C14 
Date:  2007–01–02 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa274&r=ets 
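The m-out-of-n bootstrap bandwidth choice can be sketched as follows: estimate the density-weighted average (here, the average density) with a pilot bandwidth, then pick the bandwidth that minimizes the bootstrap MSE over resamples of size m < n. The exact scaling and criterion in the paper may differ; function names and defaults below are ours:

```python
import numpy as np

def kde_at(x, data, h):
    """Gaussian kernel density estimate evaluated at the points x."""
    x = np.atleast_1d(x)[:, None]
    z = (x - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (data.size * h * np.sqrt(2.0 * np.pi))

def average_density(data, h):
    """Average density E[f(X)] estimated by (1/n) * sum_i fhat(X_i)."""
    return kde_at(data, data, h).mean()

def bootstrap_bandwidth(data, m, h_grid, h_pilot, n_boot=200, seed=0):
    """m-out-of-n bootstrap MSE over a bandwidth grid, using a pilot-bandwidth
    estimate as the bootstrap 'truth' (sketch of the general idea only)."""
    rng = np.random.default_rng(seed)
    target = average_density(data, h_pilot)
    mse = np.zeros(len(h_grid))
    for _ in range(n_boot):
        resample = rng.choice(data, size=m, replace=True)
        for k, h in enumerate(h_grid):
            mse[k] += (average_density(resample, h) - target) ** 2
    return h_grid[int(np.argmin(mse))]
```

Using m < n is what makes the bootstrap distribution mimic the estimator's smoothing bias at a feasible sample size; with m = n that bias component would be missed.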