
on Econometrics 
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Ke-Li Xu (Dept. of Mathematics, Yale University) 
Abstract:  This paper proposes a novel positive nonparametric estimator of the conditional variance function without relying on a logarithmic transformation. The basic idea is to apply the reweighted Nadaraya-Watson regression estimator of Hall and Presnell (1999, Journal of the Royal Statistical Society B, 61, 143-158) to squared residuals. The new conditional variance estimator is asymptotically equivalent to the local linear estimator and is restricted to be positive in finite samples. A small simulation is performed to compare the new methodology with Ziegelmann's (2002) local exponential and Yu and Jones's (2004) local likelihood-based estimators of the conditional variance. 
Keywords:  Conditional variance function, Empirical likelihood, Heteroskedasticity, Local linear estimator, Nadaraya-Watson estimator, Nonlinear time series, Nonparametric regression, Volatility 
JEL:  C22 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1612&r=ecm 
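The positivity property noted in the abstract above (a non-negative kernel applied to non-negative squared residuals cannot produce a negative estimate) can be sketched in a few lines of Python. This is a hypothetical illustration on simulated data: it implements only the plain Nadaraya-Watson smoother, not the Hall-Presnell empirical-likelihood reweighting that gives the paper's estimator its local-linear bias behaviour.

```python
import math
import random

def nw_variance(x0, xs, resid2, h):
    # Nadaraya-Watson smoother applied to squared residuals: with a
    # non-negative kernel and non-negative responses, the estimate is
    # non-negative by construction. The Hall-Presnell reweighting step
    # used by the paper is deliberately omitted in this sketch.
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]  # Gaussian kernel
    total = sum(weights)
    return sum(w * r2 for w, r2 in zip(weights, resid2)) / total

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(400)]
# heteroskedastic errors with conditional standard deviation 0.5 + |x|
resid2 = [((0.5 + abs(x)) * random.gauss(0.0, 1.0)) ** 2 for x in xs]
low = nw_variance(0.0, xs, resid2, h=0.2)   # true conditional variance 0.25
high = nw_variance(0.9, xs, resid2, h=0.2)  # true conditional variance 1.96
```

The estimate at x = 0.9 should exceed the one at x = 0, reflecting the simulated heteroskedasticity, and neither can be negative.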
By:  Ralf Becker; Adam Clements 
Abstract:  This paper presents a GARCH-type volatility model with a time-varying unconditional volatility which is a function of macroeconomic information. It is an extension of the Spline-GARCH model proposed by Engle and Rangel (2005). The advantage of the model proposed in this paper is that the available macroeconomic information (and/or forecasts) is used in the parameter estimation process. Based on an application of this model to S&P500 share index returns, it is demonstrated that forecasts of macroeconomic variables can be easily incorporated into volatility forecasts for share index returns. It transpires that the model proposed here can lead to significantly improved volatility forecasts compared to traditional GARCH-type volatility models. 
Keywords:  Volatility, macroeconomic data, forecast, spline, GARCH. 
JEL:  C12 C22 G00 
Date:  2007–06–14 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:200793&r=ecm 
By:  Javier Hidalgo 
Abstract:  We describe and examine a consistent test for the correct specification of a regression function with dependent data. The test is based on the supremum of the difference between the parametric and nonparametric estimates of the regression model. Rather surprisingly, the behaviour of the test depends on whether the regressors are deterministic or stochastic. In the former situation, the normalization constants necessary to obtain the limiting Gumbel distribution are data dependent and difficult to estimate, so obtaining valid critical values may be difficult, whereas in the latter, the asymptotic distribution may not even be known. Because of that, under very mild regularity conditions we describe a bootstrap analogue of the test, showing its asymptotic validity and finite sample behaviour in a small Monte Carlo experiment. 
Keywords:  Functional specification. Variable selection. Nonparametric kernel regression. Frequency domain bootstrap. 
JEL:  C14 C22 
Date:  2007–05 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2007/518&r=ecm 
By:  Gunky Kim; Mervyn J. Silvapulle; Paramsothy Silvapulle 
Abstract:  A semiparametric method is studied for estimating the dependence parameter and the joint distribution of the error term in a class of multivariate time series models when the marginal distributions of the errors are unknown. This method is a natural extension of Genest et al. (1995a) for independent and identically distributed observations. The proposed method first obtains √n-consistent estimates of the parameters of each univariate marginal time series, and computes the corresponding residuals. These are then used to estimate the joint distribution of the multivariate error terms, which is specified using a copula. Our developments and proofs make use of, and build upon, recent elegant results of Koul and Ling (2006) and Koul (2002) for these models. The rigorous proofs provided here also lay the foundation and collect together the technical arguments that would be useful for other potential extensions of this semiparametric approach. It is shown that the proposed estimator of the dependence parameter of the multivariate error term is asymptotically normal, and a consistent estimator of its large sample variance is also given so that confidence intervals may be constructed. A large scale simulation study was carried out to compare the estimators, particularly when the error distributions are unknown, which is almost always the case in practice. In this simulation study, our proposed semiparametric method performed better than the well-known parametric methods. An example on exchange rates is used to illustrate the method. 
Keywords:  Association; Copula; Estimating Equation; Pseudo-likelihood; Semiparametric. 
JEL:  C13 C14 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20078&r=ecm 
By:  Sokbae Lee; Myunghwan Seo 
Abstract:  This paper is concerned with semiparametric estimation of a threshold binary response model. The estimation method considered in the paper is semiparametric since the parameters for a regression function are finite-dimensional, while allowing for heteroskedasticity of unknown form. In particular, the paper considers Manski (1975, 1985)'s maximum score estimator. The model in this paper is irregular because of a change point due to an unknown threshold in a covariate. This irregularity, coupled with the discontinuity of the objective function of the maximum score estimator, complicates the analysis of the asymptotic behavior of the estimator. Sufficient conditions for the identification of parameters are given and the consistency of the estimator is obtained. It is shown that the estimator of the threshold parameter is n-consistent and the estimator of the remaining regression parameters is cube-root-n-consistent. Furthermore, we obtain the asymptotic distribution of the estimators. It turns out that a suitably normalized estimator of the regression parameters converges weakly to the distribution to which it would converge weakly if the true threshold value were known, and likewise for the threshold estimator. 
Keywords:  Binary response model, maximum score estimation, semiparametric estimation, threshold regression, nonlinear random utility models. 
JEL:  C25 
Date:  2007–02 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2007/516&r=ecm 
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Chang Sik Kim (School of Economics, Sungkyunkwan University) 
Abstract:  An asymptotic expansion is given for the autocovariance matrix of a vector of stationary long-memory processes with memory parameters d satisfying 0 < d < 1/2. The theory is then applied to deliver formulae for the long run covariance matrices of multivariate time series with long memory. 
Keywords:  Asymptotic expansion, Autocovariance function, Fourier integral, Long memory, Long run variance, Spectral density 
JEL:  C22 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1611&r=ecm 
By:  Jeremy J. Nalewaik 
Abstract:  This paper discusses extensions of standard Markov switching models that allow estimated probabilities to reflect parameter breaks at or close to the end of the sample, too close for standard maximum likelihood techniques to produce precise parameter estimates. The basic technique is a supplementary estimation procedure, bringing additional information to bear to estimate the statistical properties of the end-of-sample observations that behave differently from the rest. Empirical results using real-time data show that these techniques improve the ability of a Markov switching model based on GDP and GDI to recognize the start of the 2001 recession. 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:200723&r=ecm 
By:  Peter C.B. Phillips (Cowles Foundation, Yale University); Tassos Magdalinos (University of Nottingham, UK) 
Abstract:  A limit theory is developed for multivariate regression in an explosive cointegrated system. The asymptotic behavior of the least squares estimator of the cointegrating coefficients is found to depend upon the precise relationship between the explosive regressors. When the eigenvalues of the autoregressive matrix are distinct, the centered least squares estimator has an exponential rate of convergence and a mixed normal limit distribution. No central limit theory is applicable here and Gaussian innovations are assumed. On the other hand, when some regressors exhibit common explosive behavior, a different mixed normal limiting distribution is derived with rate of convergence reduced to n^0.5. In the latter case, mixed normality applies without any distributional assumptions on the innovation errors by virtue of a Lindeberg type central limit theorem. Conventional statistical inference procedures are valid in this case, the stationary convergence rate dominating the behavior of the least squares estimator. 
Keywords:  Central limit theory, Explosive cointegration, Explosive process, Mixed normality 
JEL:  C22 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1614&r=ecm 
By:  Peter M Robinson 
Abstract:  Efficient semiparametric and parametric estimates are developed for a spatial autoregressive model, containing nonstochastic explanatory variables and innovations suspected to be non-normal. The main stress is on the case of distribution of unknown, nonparametric, form, where series nonparametric estimates of the score function are employed in adaptive estimates of parameters of interest. These estimates are as efficient as ones based on a correct form; in particular, they are more efficient than pseudo-Gaussian maximum likelihood estimates at non-Gaussian distributions. Two different adaptive estimates are considered. One entails a stringent condition on the spatial weight matrix, and is suitable only when observations have substantially many "neighbours". The other adaptive estimate relaxes this requirement, at the expense of alternative conditions and possible computational expense. A Monte Carlo study of finite sample performance is included. 
Keywords:  Spatial autoregression, Efficient estimation, Adaptive estimation, Simultaneity bias. 
JEL:  C13 C14 C21 
Date:  2007–02 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2007/515&r=ecm 
By:  Pim Ouwehand; Rob J. Hyndman; Ton G. de Kok; Karel H. van Donselaar 
Abstract:  We present an approach to improve forecast accuracy by simultaneously forecasting a group of products that exhibit similar seasonal demand patterns. Better seasonality estimates can be made by using information on all products in a group, and using these improved estimates when forecasting at the individual product level. This approach is called the group seasonal indices (GSI) approach, and is a generalization of the classical Holt-Winters procedure. This article describes an underlying state space model for this method and presents simulation results that show when it yields more accurate forecasts than Holt-Winters. 
Keywords:  Common seasonality; demand forecasting; exponential smoothing; Holt-Winters; state space model. 
JEL:  C53 C22 C52 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20077&r=ecm 
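A minimal sketch of the pooling idea, assuming a simple ratio-to-mean decomposition rather than the paper's state space formulation: seasonal ratios are averaged across all products in the group and renormalised, and each product would then be forecast with its own level but the shared indices. All numbers below are invented for illustration.

```python
def group_seasonal_indices(series_group, period):
    # For each series, take the ratio of each observation to that
    # series' mean, average the ratios by season across the whole
    # group, then renormalise so the indices average to one. A crude
    # stand-in for the state space GSI formulation in the paper.
    sums = [0.0] * period
    counts = [0] * period
    for y in series_group:
        mean = sum(y) / len(y)
        for t, v in enumerate(y):
            sums[t % period] += v / mean
            counts[t % period] += 1
    idx = [s / c for s, c in zip(sums, counts)]
    norm = sum(idx) / period
    return [i / norm for i in idx]

# two hypothetical products sharing a quarterly pattern at different scales
a = [10, 20, 10, 4] * 3
b = [52, 98, 49, 21] * 3
gsi = group_seasonal_indices([a, b], period=4)
```

Both products peak in the second quarter, so the pooled second index is the largest; a single noisy product would borrow strength from the rest of the group.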
By:  Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  Some exact distribution theory is developed for structural equation models with and without identities. The theory includes LIML, IV and OLS. We relate the new results to earlier studies in the literature, including the pioneering work of Bergstrom (1962). General IV exact distribution formulae for a structural equation model without an identity are shown to apply also to models with an identity by specializing along a certain asymptotic parameter sequence. Some of the new exact results are obtained by means of a uniform asymptotic expansion. An interesting consequence of the new theory is that the uniform asymptotic approximation provides the exact distribution of the OLS estimator in the model considered by Bergstrom (1962). This example appears to be the first instance in the statistical literature of a uniform approximation delivering an exact expression for a probability density. 
Keywords:  Exact distribution, Identity, IV estimation, LIML, Structural equation, Uniform asymptotic expansion 
JEL:  C30 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1613&r=ecm 
By:  Alessandro De Gregorio (Università di Milano, Italy); Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT) 
Abstract:  The telegraph process models a random motion with finite velocity and is usually proposed as an alternative to diffusion models. The process describes the position of a particle moving on the real line, alternately with constant velocity +v or −v. The changes of direction are governed by a homogeneous Poisson process with rate lambda > 0. In this paper, we consider a change point estimation problem for the rate of the underlying Poisson process by means of the least squares method. The consistency and the rate of convergence of the change point estimator are obtained and its asymptotic distribution is derived. Applications to real data are also presented. 
Keywords:  discrete observations, change point problem, volatility regime switch, telegraph process 
Date:  2007–05–03 
URL:  http://d.repec.org/n?u=RePEc:bep:unimip:1053&r=ecm 
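As a rough illustration of the change point idea (not the paper's exact discrete-observation scheme), the sketch below bins the Poisson direction changes of a simulated telegraph process and locates the rate change by least squares, choosing the split that minimises the within-segment sum of squares. The rates, bin count and change location are invented.

```python
import math
import random

def poisson_draw(lam, rng):
    # Knuth's multiplicative inversion sampler; fine for small means
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def ls_change_point(counts):
    # least-squares change point: the split minimising the total
    # squared deviation from the two segment means
    n = len(counts)
    best_k, best_sse = 1, float("inf")
    for k in range(1, n):
        left, right = counts[:k], counts[k:]
        m1, m2 = sum(left) / k, sum(right) / (n - k)
        sse = (sum((c - m1) ** 2 for c in left)
               + sum((c - m2) ** 2 for c in right))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

rng = random.Random(1)
# direction changes of the particle counted over 40 time bins; the
# Poisson switching rate jumps from 2 to 8 per bin at bin 25
counts = [poisson_draw(2.0 if b < 25 else 8.0, rng) for b in range(40)]
cp = ls_change_point(counts)
```

With a rate jump this large the least-squares split lands close to the true change bin.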
By:  Rao, B. Bhaskara 
Abstract:  Applied economists working with time series data face a dilemma in selecting between models with deterministic and stochastic trends. While models with deterministic trends are widely used, models with stochastic trends are not so well known. In an influential paper, Harvey (1997) strongly advocates a structural time series approach with stochastic trends in place of the widely used autoregressive models based on unit root tests and cointegration techniques. It is therefore important to understand their relative merits. This paper suggests that both methodologies are useful and that they may perform differently in different models. It provides a few guidelines to help applied economists understand these alternative methods. 
Keywords:  Stochastic and Deterministic Trends; Bai-Perron Tests; STAMP; Structural Time Series Models. 
JEL:  C10 C22 C13 C00 C20 
Date:  2007–06–16 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:3580&r=ecm 
By:  David Hendry (Department of Economics, University of Oxford); Carlos Santos (Faculdade de Economia e Gestão, Universidade Católica Portuguesa (Porto)) 
Abstract:  We develop a new automatically computable test for super exogeneity, using a variant of general-to-specific modelling. Based on recent developments in impulse saturation applied to marginal models under the null that no impulses matter, we select the significant impulses for testing in the conditional model. The approximate analytical non-centrality of the test is derived for a failure of invariance and for a failure of weak exogeneity when there is a shift in the marginal model. Monte Carlo simulations confirm the nominal significance levels under the null, and power against the two alternatives. 
Keywords:  super exogeneity, general-to-specific, test power, indicators, co-breaking 
JEL:  C51 C22 
Date:  2007–06 
URL:  http://d.repec.org/n?u=RePEc:cap:wpaper:112007&r=ecm 
By:  Ralf Becker; Adam Clements 
Abstract:  Forecasting volatility has received a great deal of research attention. Many articles have considered the relative performance of econometric model-based and option-implied volatility forecasts. While many studies have found that implied volatility is the preferred approach, a number of issues remain unresolved, one being the relative merit of combination forecasts. By utilising recent econometric advances, this paper considers whether combination forecasts of S&P 500 volatility are statistically superior to a wide range of model-based forecasts and implied volatility. It is found that combination forecasts are the dominant approach, indicating that the VIX cannot simply be viewed as a combination of various model-based forecasts. 
Keywords:  Implied volatility, volatility forecasts, volatility models, realized volatility, combination forecasts. 
JEL:  C12 C22 G00 
Date:  2007–06–14 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:200792&r=ecm 
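The intuition for why combining forecasts can dominate each component (errors that partly offset average out) can be seen in a small simulation. The data, forecast construction and equal weights below are invented for illustration and bear no relation to the paper's S&P 500 application or its statistical tests.

```python
import math
import random

def mse(forecast, actual):
    # mean squared forecast error
    return sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual)

rng = random.Random(42)
# a smooth "volatility" path plus two forecasts whose common error
# component enters with opposite signs, so averaging cancels most of it
truth = [1.0 + 0.3 * math.sin(t / 5.0) for t in range(200)]
shared = [rng.gauss(0.0, 0.2) for _ in range(200)]
f_model = [v + e + rng.gauss(0.0, 0.05) for v, e in zip(truth, shared)]
f_implied = [v - e + rng.gauss(0.0, 0.05) for v, e in zip(truth, shared)]
f_combo = [(a + b) / 2.0 for a, b in zip(f_model, f_implied)]
```

The equal-weight combination has a far smaller mean squared error than either component, because its error is only the average of the two small idiosyncratic terms.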
By:  Colignatus, Thomas 
Abstract:  Nominal data in contingency tables currently lack a correlation coefficient, such as has already been defined for real data. A measure can be designed using the determinant, with the useful interpretation that the determinant gives the ratio between volumes. A contingency table by itself gives all connections between the variables. The required operations are only normalization and aggregation by means of that determinant, so that, in fact, a contingency table is its own correlation matrix. The idea for the normalization is that the conditional probabilities given the row and column sums can also be seen as regression coefficients that hence depend upon correlations. With M a m × n contingency table and n ≤ m, the suggested measure is r = Sqrt[det[A'A]] with A = Normalized[M]. The sign can be recovered from a generalization of the determinant to non-square matrices. With M an n1 × n2 × ... × nk contingency matrix, we can construct a matrix of pairwise correlations R. A matrix of such pairwise correlations is called an association matrix. If that matrix is also positive semidefinite (PSD) then it is a proper correlation matrix. The overall correlation then is R = f[R], where f can be chosen to impose PSDness. An option is to use f[R] = Sqrt[1 − det[R]]. However, for both nominal and cardinal data the advisable choice is to take the maximal multiple correlation within R. The resulting measure of “nominal correlation” measures the distance between a main diagonal and the off-diagonal elements, and thus is a measure of strong correlation. Cramer’s V measure for pairwise correlation can be generalized in this manner too. It measures the distance between all diagonals (including cross-diagonals and sub-diagonals) and statistical independence, and thus is a measure of weaker correlation. Finally, when variances are also defined, regression coefficients can be determined from the variance-covariance matrix. 
Keywords:  association; correlation; contingency table; volume ratio; determinant; nonparametric methods; nominal data; nominal scale; categorical data; Fisher’s exact test; odds ratio; tetrachoric correlation coefficient; phi; Cramer’s V; Pearson; contingency coefficient; uncertainty coefficient; Theil’s U; eta; metaanalysis; Simpson’s paradox; causality; statistical independence; regression 
JEL:  C10 
Date:  2007–03–15 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:3394&r=ecm 
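The formula r = Sqrt[det[A'A]] can be tried out directly. The sketch below assumes that Normalized[M] scales each column of the table to unit Euclidean length, which is one plausible reading; the paper's sign recovery and association-matrix steps are omitted. Under that reading a perfectly diagonal 2 × 2 table gives r = 1 and a statistically independent table gives r = 0.

```python
import math

def det(g):
    # Laplace cofactor expansion; fine for the small Gram matrices here
    n = len(g)
    if n == 1:
        return g[0][0]
    return sum((-1) ** j * g[0][j]
               * det([row[:j] + row[j + 1:] for row in g[1:]])
               for j in range(n))

def nominal_correlation(m):
    # r = Sqrt[det[A'A]] with A the column-normalised table
    # (assumed interpretation of Normalized[M], not from the paper)
    rows, cols = len(m), len(m[0])
    norms = [math.sqrt(sum(m[i][j] ** 2 for i in range(rows)))
             for j in range(cols)]
    a = [[m[i][j] / norms[j] for j in range(cols)] for i in range(rows)]
    gram = [[sum(a[i][p] * a[i][q] for i in range(rows))
             for q in range(cols)] for p in range(cols)]
    return math.sqrt(max(det(gram), 0.0))  # guard against float round-off

perfect = nominal_correlation([[10, 0], [0, 10]])    # perfect association
independent = nominal_correlation([[5, 5], [5, 5]])  # no association
```

The determinant-as-volume interpretation is visible here: orthogonal columns span full volume (r = 1), while proportional columns collapse it (r = 0).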
By:  Afonso Gonçalves da Silva; Peter M Robinson 
Abstract:  Asset returns are frequently assumed to be determined by one or more common factors. We consider a bivariate factor model, where the unobservable common factor and idiosyncratic errors are stationary and serially uncorrelated, but have strong dependence in higher moments. Stochastic volatility models for the latent variables are employed, in view of their direct application to asset pricing models. Assuming the underlying persistence is higher in the factor than in the errors, a fractional cointegrating relationship can be recovered by suitable transformation of the data. We propose a narrow band semiparametric estimate of the factor loadings, which is shown to be consistent with a rate of convergence, and its finite sample properties are investigated in a Monte Carlo experiment. 
Keywords:  Fractional cointegration, stochastic volatility, narrow band least squares, semiparametric analysis. 
JEL:  C22 
Date:  2007–05 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2007/519&r=ecm 
By:  Michael J. Dueker; Zacharias Psaradakis; Martin Sola; Fabio Spagnolo 
Abstract:  In this paper we propose a contemporaneous threshold multivariate smooth transition autoregressive (CMSTAR) model in which the regime weights depend on the ex ante probabilities that latent regime-specific variables exceed certain threshold values. The model is a multivariate generalization of the contemporaneous threshold autoregressive model introduced by Dueker et al. (2007). A key feature of the model is that the transition function depends on all the parameters of the model as well as on the data. The stability and distributional properties of the proposed model are investigated. The CMSTAR model is also used to examine the relationship between US stock prices and interest rates. 
Keywords:  Time series analysis; Capital asset pricing model 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2007019&r=ecm 
By:  Carlo Fiorio (University of Milan); Vassilis Hajivassiliou (London School of Economics) 
Abstract:  This paper analyses the distribution of the classical t-ratio statistic from distributions with no finite moments and shows how classical testing is affected. Some surprising results are obtained in terms of bimodality versus the usual unimodality of the standard studentized t-distribution prevailing in classical conditions. The paper develops a new distribution, termed the "double Pareto," which allows the thickness of the tails and the existence of moments to be determined parametrically. We also consider infinite-moments distributions truncated on a compact support to investigate the relative importance of tail thickness in the case of finite moments. We find that the bimodality persists even in such cases. Simulation results are used to highlight the dangers of relying on naive testing in the face of thick-tailed distributions. Special cases analyzed include one- and two-sample statistical inference problems, as well as linear regression econometric problems. 
Keywords:  thick-tailed distributions, studentized t-distribution, Pareto distribution, bimodality, truncated distribution 
Date:  2007–05–03 
URL:  http://d.repec.org/n?u=RePEc:bep:unimip:1054&r=ecm 
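The bimodality finding can be reproduced in miniature with a symmetric Pareto stand-in for the paper's double Pareto (tail index below one, so no finite mean): the t-ratios then pile up near ±1, a known consequence of a single observation dominating the sample, leaving a dip near zero. The sample sizes and tail index below are invented for illustration.

```python
import math
import random

def t_ratio(sample):
    # classical one-sample t-ratio for a zero mean
    n = len(sample)
    m = sum(sample) / n
    s2 = sum((x - m) ** 2 for x in sample) / (n - 1)
    return m / math.sqrt(s2 / n)

rng = random.Random(7)
alpha, n, reps = 0.5, 50, 2000  # tail index alpha < 1: no finite mean
tstats = []
for _ in range(reps):
    # symmetric Pareto draws: a random sign times u^(-1/alpha)
    sample = [(1.0 if rng.random() < 0.5 else -1.0)
              * rng.random() ** (-1.0 / alpha) for _ in range(n)]
    tstats.append(t_ratio(sample))

near_zero = sum(1 for t in tstats if abs(t) < 0.3) / reps
near_one = sum(1 for t in tstats if 0.7 < abs(t) < 1.3) / reps
```

Far more t-ratios land near ±1 than near 0, the opposite of the unimodal bell that classical testing assumes.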
By:  Frank A Cowell; MariaPia VictoriaFeser 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:cep:stidar:91&r=ecm 
By:  Garrone Giovanna; Marchionatti Roberto (University of Turin) 
Date:  2007–05 
URL:  http://d.repec.org/n?u=RePEc:uto:cesmep:200703&r=ecm 
By:  Colignatus, Thomas 
Abstract:  Logistic regression (LR) is one of the most used estimation techniques for nominal data collected in contingency tables, and the question arises how the recently proposed concept of nominal correlation and regression (NCR) relates to it. (1) LR targets the cells in the contingency table while NCR targets only the variables. (2) Where the methods seem to overlap, such as in the 2 × 2 × 2 case, there still is the difference between the use of categories by LR (notably the categories Success, Cause and Confounder) and the use of variables by NCR (notably the variables Effect, Truth and Confounding). (3) Since LR looks for the most parsimonious model, the analysis might be helped by NCR, which is very parsimonious since it uses only the variables and not all the cells of the contingency table. (4) While LR may generate statistically significant regressions, NCR may show that the correlation is still low. (5) Risk difference regression may be a bridge to understanding more about the difference between LR and NCR. (6) The use of LR and NCR next to each other may help to focus on the research question and the amount of detail required for it. 
Keywords:  Experimental economics; causality; cause and effect; confounding; contingency table; epidemiology; correlation; regression; logistic regression 
JEL:  C10 
Date:  2007–06–19 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:3615&r=ecm 
By:  Garrone Giovanna; Marchionatti Roberto (University of Turin) 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:uto:cesmep:200702&r=ecm 