nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒06‒23
23 papers chosen by
Sune Karlsson
Orebro University

  1. Tilted Nonparametric Estimation of Volatility Functions By Peter C.B. Phillips; Ke-Li Xu
  2. Forecasting stock market volatility conditional on macroeconomic conditions. By Ralf Becker; Adam Clements
  3. SPECIFICATION TESTING FOR REGRESSION MODELS WITH DEPENDENT DATA By Javier Hidalgo
  4. Estimating the Error Distribution in the Multivariate Heteroscedastic Time Series Models By Gunky Kim; Mervyn J. Silvapulle; Paramsothy Silvapulle
  5. SEMIPARAMETRIC ESTIMATION OF A BINARY RESPONSE MODEL WITH A CHANGE-POINT DUE TO A COVARIATE THRESHOLD By Sokbae Lee; Myunghwan Seo
  6. Long Run Covariance Matrices for Fractionally Integrated Processes By Peter C.B. Phillips; Chang Sik Kim
  7. Incorporating vintage differences and forecasts into Markov switching models By Jeremy J. Nalewaik
  8. Limit Theory for Explosively Cointegrated Systems By Peter C.B. Phillips; Tassos Magdalinos
  9. Efficient Estimation of the Semiparametric Spatial Autoregressive Model By Peter M Robinson
  10. A state space model for exponential smoothing with group seasonality By Pim Ouwehand; Rob J. Hyndman; Ton G. de Kok; Karel H. van Donselaar
  11. Exact Distribution Theory in Structural Estimation with an Identity By Peter C.B. Phillips
  12. Change point estimation for the telegraph process observed at discrete times By Alessandro De Gregorio; Stefano Iacus
  13. Deterministic and stochastic trends in the time series models: A guide for the applied economist By Rao, B. Bhaskara
  14. AUTOMATIC TESTS FOR SUPER EXOGENEITY By David Hendry; Carlos Santos
  15. Are combination forecasts of S&P 500 volatility statistically superior? By Ralf Becker; Adam Clements
  16. Correlation and regression in contingency tables. A measure of association or correlation in nominal data (contingency tables), using determinants By Colignatus, Thomas
  17. Fractional Cointegration in Stochastic Volatility Models By Afonso Gonçalves da Silva; Peter M Robinson
  18. Multivariate contemporaneous threshold autoregressive models By Michael J. Dueker; Zacharias Psaradakis; Martin Sola; Fabio Spagnolo
  19. Inference and Thick Tails: Some Surprising Results By Carlo Fiorio; Vassilis Hajivassiliou
  20. Modelling Lorenz Curves: robust and semi-parametric issues By Frank A Cowell; Maria-Pia Victoria-Feser
  21. Keynes, statistics and econometrics By Garrone Giovanna; Marchionatti Roberto
  22. A comparison of nominal regression and logistic regression for contingency tables, including the 2 × 2 × 2 case in causality By Colignatus, Thomas
  23. The appropriate style of economic discourse. Keynes on Economics and Econometrics By Garrone Giovanna; Marchionatti Roberto

  1. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Ke-Li Xu (Dept. of Mathematics, Yale University)
    Abstract: This paper proposes a novel positive nonparametric estimator of the conditional variance function without relying on a logarithmic transformation. The basic idea is to apply the re-weighted Nadaraya-Watson regression estimator of Hall and Presnell (1999, Journal of the Royal Statistical Society B, 61, 143--158) to squared residuals. The new conditional variance estimator is asymptotically equivalent to the local linear estimator and is restricted to be positive in finite samples. A small simulation is performed to compare the new methodology with Ziegelmann's (2002) local exponential and Yu and Jones's (2004) local likelihood-based estimators of the conditional variance.
    Keywords: Conditional variance function, Empirical likelihood, Heteroskedasticity, Local linear estimator, Nadaraya-Watson estimator, Nonlinear time series, Nonparametric regression, Volatility
    JEL: C22
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1612&r=ecm
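    Code sketch: a minimal illustration of the construction in item 1 -- a Nadaraya-Watson smoother applied to squared mean-regression residuals, which is positive by construction. The empirical-likelihood ("tilted") re-weighting of Hall and Presnell that the paper adds to attain local-linear efficiency is omitted; the data, bandwidths and function names are illustrative assumptions, not the authors' code.

      import numpy as np

      def nw_conditional_variance(x, y, grid, h_mean, h_var):
          # Plain Nadaraya-Watson estimate of Var(y | x); the paper replaces the
          # implicit uniform weights with empirical-likelihood (tilted) weights.
          def nw(xq, xs, ys, h):
              w = np.exp(-0.5 * ((xq - xs) / h) ** 2)   # Gaussian kernel weights
              return np.sum(w * ys) / np.sum(w)

          # Step 1: estimate the conditional mean and form squared residuals.
          m_hat = np.array([nw(xi, x, y, h_mean) for xi in x])
          e2 = (y - m_hat) ** 2

          # Step 2: smooth the squared residuals; the result cannot go negative.
          return np.array([nw(g, x, e2, h_var) for g in grid])

      # Simulated heteroskedastic example (hypothetical data).
      rng = np.random.default_rng(0)
      x = rng.uniform(-2, 2, 400)
      y = np.sin(x) + np.sqrt(0.1 + 0.5 * x ** 2) * rng.standard_normal(400)
      print(nw_conditional_variance(x, y, np.linspace(-1.5, 1.5, 7), 0.3, 0.4))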
  2. By: Ralf Becker; Adam Clements
    Abstract: This paper presents a GARCH type volatility model with a time-varying unconditional volatility which is a function of macroeconomic information. It is an extension of the SPLINE GARCH model proposed by Engle and Rangel (2005). The advantage of the model proposed in this paper is that the macroeconomic information available (and/or forecasts) is used in the parameter estimation process. Based on an application of this model to S&P500 share index returns, it is demonstrated that forecasts of macroeconomic variables can be easily incorporated into volatility forecasts for share index returns. It transpires that the model proposed here can lead to significantly improved volatility forecasts compared to traditional GARCH type volatility models.
    Keywords: Volatility, macroeconomic data, forecast, spline, GARCH.
    JEL: C12 C22 G00
    Date: 2007–06–14
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2007-93&r=ecm
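    Code sketch: item 2's idea is a multiplicative volatility decomposition whose slow-moving component depends on macroeconomic information. The sketch below is not Engle and Rangel's spline specification or the authors' estimator; it replaces the spline with a simple exponential link tau_t = exp(c0 + c1*z_t) to a macro series z, multiplied by a unit-mean GARCH(1,1) component, and fits the Gaussian quasi-likelihood. Variable names, data and starting values are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      def neg_loglik(params, r, z):
          # sigma2_t = tau_t * g_t with tau_t = exp(c0 + c1*z_t) and g_t a unit-mean
          # GARCH(1,1) component; returns the negative Gaussian quasi-log-likelihood.
          c0, c1, a, b = params
          if a < 0 or b < 0 or a + b >= 0.999:
              return 1e10                                # crude stationarity penalty
          tau = np.exp(c0 + c1 * z)
          g = np.ones(len(r))
          for t in range(1, len(r)):
              g[t] = (1 - a - b) + a * r[t - 1] ** 2 / tau[t - 1] + b * g[t - 1]
          sigma2 = tau * g
          return 0.5 * np.sum(np.log(sigma2) + r ** 2 / sigma2)

      # Hypothetical inputs: r = demeaned index returns, z = an aligned macro series.
      rng = np.random.default_rng(1)
      z = np.cumsum(rng.standard_normal(1000)) / 50.0
      r = rng.standard_normal(1000) * np.exp(0.5 * (0.1 * z - 1.0))
      fit = minimize(neg_loglik, x0=[-1.0, 0.0, 0.05, 0.90], args=(r, z),
                     method="Nelder-Mead")
      print(fit.x)   # (c0, c1, alpha, beta); forecasts of z would feed into tau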
  3. By: Javier Hidalgo
    Abstract: We describe and examine a consistent test for the correct specification of a regression function with dependent data. The test is based on the supremum of the difference between the parametric and nonparametric estimates of the regression model. Rather surprisingly, the behaviour of the test depends on whether the regressors are deterministic or stochastic. In the former situation, the normalization constants necessary to obtain the limiting Gumbel distribution are data dependent and difficult to estimate, so obtaining valid critical values may be difficult, whereas in the latter, the asymptotic distribution may not even be known. Because of this, under very mild regularity conditions we describe a bootstrap analogue for the test, showing its asymptotic validity and finite sample behaviour in a small Monte Carlo experiment.
    Keywords: Functional specification, Variable selection, Nonparametric kernel regression, Frequency domain bootstrap.
    JEL: C14 C22
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2007/518&r=ecm
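    Code sketch: the test statistic in item 3 is the supremum distance between a parametric fit and a nonparametric fit of the regression function, with bootstrap critical values. The sketch uses an i.i.d. residual bootstrap for simplicity, whereas the paper develops a frequency-domain bootstrap to handle dependent data; the linear null model, kernel, bandwidth and grid are all assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      def nw_fit(xq, x, y, h):
          # Nadaraya-Watson fit at the query points xq with a Gaussian kernel.
          w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / h) ** 2)
          return (w @ y) / w.sum(axis=1)

      def sup_stat(x, y, h):
          # Sup distance between an OLS line (parametric null) and the kernel fit.
          X = np.column_stack([np.ones_like(x), x])
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          grid = np.linspace(np.quantile(x, 0.05), np.quantile(x, 0.95), 50)
          stat = np.max(np.abs(beta[0] + beta[1] * grid - nw_fit(grid, x, y, h)))
          return stat, X @ beta

      x = rng.uniform(0, 1, 300)                     # simulated data (hypothetical)
      y = 1.0 + 2.0 * x + 0.5 * rng.standard_normal(300)
      t_obs, fitted = sup_stat(x, y, h=0.1)

      # Naive i.i.d. residual bootstrap of the sup statistic under the null.
      resid = y - fitted
      t_boot = [sup_stat(x, fitted + rng.choice(resid, len(y), replace=True), h=0.1)[0]
                for _ in range(499)]
      print("bootstrap p-value:", np.mean(np.array(t_boot) >= t_obs))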
  4. By: Gunky Kim; Mervyn J. Silvapulle; Paramsothy Silvapulle
    Abstract: A semiparametric method is studied for estimating the dependence parameter and the joint distribution of the error term in a class of multivariate time series models when the marginal distributions of the errors are unknown. This method is a natural extension of Genest et al. (1995a) for independent and identically distributed observations. The proposed method first obtains √n-consistent estimates of the parameters of each univariate marginal time-series, and computes the corresponding residuals. These are then used to estimate the joint distribution of the multivariate error terms, which is specified using a copula. Our developments and proofs make use of, and build upon, recent elegant results of Koul and Ling (2006) and Koul (2002) for these models. The rigorous proofs provided here also lay the foundation and collect together the technical arguments that would be useful for other potential extensions of this semiparametric approach. It is shown that the proposed estimator of the dependence parameter of the multivariate error term is asymptotically normal, and a consistent estimator of its large sample variance is also given so that confidence intervals may be constructed. A large scale simulation study was carried out to compare the estimators particularly when the error distributions are unknown, which is almost always the case in practice. In this simulation study, our proposed semiparametric method performed better than the well-known parametric methods. An example on exchange rates is used to illustrate the method.
    Keywords: Association; Copula; Estimating Equation; Pseudolikelihood; Semiparametric.
    JEL: C13 C14
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-8&r=ecm
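    Code sketch: the two-step idea in item 4 -- estimate each marginal time-series model, form residuals, convert them to pseudo-observations, and estimate a copula dependence parameter from those. AR(1) margins, a Gaussian copula and the normal-scores approximation to the pseudo-likelihood estimate are simplifying assumptions; the paper covers a wider model class and supplies the asymptotic theory.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)

      # Hypothetical bivariate sample: two AR(1) series with dependent innovations.
      n = 1000
      e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
      y1, y2 = np.zeros(n), np.zeros(n)
      for t in range(1, n):
          y1[t] = 0.5 * y1[t - 1] + e[t, 0]
          y2[t] = 0.3 * y2[t - 1] + e[t, 1]

      def ar1_residuals(y):
          # Step 1: root-n-consistent estimate of the marginal model, then residuals.
          phi = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
          return y[1:] - phi * y[:-1]

      r1, r2 = ar1_residuals(y1), ar1_residuals(y2)

      # Step 2: pseudo-observations (rescaled ranks) replace the unknown marginals.
      u1 = (np.argsort(np.argsort(r1)) + 1) / (len(r1) + 1)
      u2 = (np.argsort(np.argsort(r2)) + 1) / (len(r2) + 1)

      # Step 3: Gaussian-copula dependence parameter via normal scores.
      print("estimated copula parameter:",
            np.corrcoef(norm.ppf(u1), norm.ppf(u2))[0, 1])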
  5. By: Sokbae Lee; Myunghwan Seo
    Abstract: This paper is concerned with semiparametric estimation of a threshold binary response model. The estimation method considered in the paper is semiparametric since the parameters for a regression function are finite-dimensional, while allowing for heteroskedasticity of unknown form. In particular, the paper considers Manski (1975, 1985)'s maximum score estimator. The model in this paper is irregular because of a change-point due to an unknown threshold in a covariate. This irregularity, coupled with the discontinuity of the objective function of the maximum score estimator, complicates the analysis of the asymptotic behavior of the estimator. Sufficient conditions for the identification of parameters are given and the consistency of the estimator is obtained. It is shown that the estimator of the threshold parameter is n-consistent and the estimator of the remaining regression parameters is cube root n-consistent. Furthermore, we obtain the asymptotic distribution of the estimators. It turns out that a suitably normalized estimator of the regression parameters converges weakly to the distribution to which it would converge weakly if the true threshold value were known, and likewise for the threshold estimator.
    Keywords: Binary response model, maximum score estimation, semiparametric estimation, threshold regression, nonlinear random utility models.
    JEL: C25
    Date: 2007–02
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2007/516&r=ecm
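    Code sketch: a brute-force version of the estimator studied in item 5 -- Manski's maximum score objective for a binary response whose intercept shifts when a covariate q crosses an unknown threshold gamma. The scale normalization (coefficient on x set to one), the grid search and the simulated design are illustrative assumptions; the paper's contribution is the asymptotic theory (n-consistency for the threshold, cube-root-n for the slopes), not this algorithm.

      import numpy as np
      from itertools import product

      rng = np.random.default_rng(4)

      # Simulated design: the intercept shifts by d0 when q exceeds gamma0.
      n = 500
      x = rng.standard_normal(n)
      q = rng.uniform(0, 1, n)
      u = 0.5 * (1 + np.abs(x)) * rng.standard_normal(n)   # heteroskedastic errors
      b0, d0, gamma0 = -0.2, 1.0, 0.5
      y = (b0 + x + d0 * (q > gamma0) + u > 0).astype(int)

      def score(b, d, g):
          # Maximum score objective with the coefficient on x normalized to one:
          # average sign agreement between y and the latent index.
          idx = b + x + d * (q > g)
          return np.mean((2 * y - 1) * np.sign(idx))

      grids = (np.linspace(-1, 1, 41), np.linspace(-2, 2, 41), np.linspace(0.1, 0.9, 41))
      best = max(product(*grids), key=lambda p: score(*p))
      print("estimates (intercept, shift, threshold):", best)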
  6. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Chang Sik Kim (School of Economics, Sungkyunkwan University)
    Abstract: An asymptotic expansion is given for the autocovariance matrix of a vector of stationary long-memory processes with memory parameters d satisfying 0 < d < 1/2. The theory is then applied to deliver formulae for the long run covariance matrices of multivariate time series with long memory.
    Keywords: Asymptotic expansion, Autocovariance function, Fourier integral, Long memory, Long run variance, Spectral density
    JEL: C22
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1611&r=ecm
  7. By: Jeremy J. Nalewaik
    Abstract: This paper discusses extensions of standard Markov switching models that allow estimated probabilities to reflect parameter breaks at or close to the end of the sample, too close for standard maximum likelihood techniques to produce precise parameter estimates. The basic technique is a supplementary estimation procedure, bringing additional information to bear to estimate the statistical properties of the end-of-sample observations that behave differently from the rest. Empirical results using real-time data show that these techniques improve the ability of a Markov switching model based on GDP and GDI to recognize the start of the 2001 recession.
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2007-23&r=ecm
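    Code sketch: the baseline object extended in item 7 is the filtered regime probability from a Markov switching model. Only the standard Hamilton filter is shown; the paper's supplementary end-of-sample estimation step and its use of data vintages are not reproduced, and the two-state Gaussian parametrization and parameter values below are assumptions.

      import numpy as np
      from scipy.stats import norm

      def hamilton_filter(y, mu, sigma, P):
          # Filtered Pr(S_t = j | data) for y_t | S_t = j ~ N(mu[j], sigma[j]^2),
          # with transition matrix P[i, j] = Pr(S_t = j | S_{t-1} = i).
          xi = np.empty((len(y), 2))
          pred = np.array([0.5, 0.5])                    # prior for the first period
          for t, yt in enumerate(y):
              post = pred * norm.pdf(yt, loc=mu, scale=sigma)
              xi[t] = post / post.sum()
              pred = xi[t] @ P                           # one-step-ahead prediction
          return xi

      # Hypothetical growth series that slips into a low-mean regime at the end.
      rng = np.random.default_rng(11)
      y = np.concatenate([0.8 + 0.5 * rng.standard_normal(80),
                          -0.5 + 0.5 * rng.standard_normal(8)])
      P = np.array([[0.95, 0.05], [0.20, 0.80]])
      probs = hamilton_filter(y, np.array([0.8, -0.5]), np.array([0.5, 0.5]), P)
      print("recession probabilities, last 10 periods:", np.round(probs[-10:, 1], 2))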
  8. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Tassos Magdalinos (University of Nottingham, UK)
    Abstract: A limit theory is developed for multivariate regression in an explosive cointegrated system. The asymptotic behavior of the least squares estimator of the cointegrating coefficients is found to depend upon the precise relationship between the explosive regressors. When the eigenvalues of the autoregressive matrix are distinct, the centered least squares estimator has an exponential rate of convergence and a mixed normal limit distribution. No central limit theory is applicable here and Gaussian innovations are assumed. On the other hand, when some regressors exhibit common explosive behavior, a different mixed normal limiting distribution is derived with rate of convergence reduced to n^0.5. In the latter case, mixed normality applies without any distributional assumptions on the innovation errors by virtue of a Lindeberg type central limit theorem. Conventional statistical inference procedures are valid in this case, the stationary convergence rate dominating the behavior of the least squares estimator.
    Keywords: Central limit theory, Explosive cointegration, Explosive process, Mixed normality
    JEL: C22
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1614&r=ecm
  9. By: Peter M Robinson
    Abstract: Efficient semiparametric and parametric estimates are developed for a spatial autoregressive model, containing nonstochastic explanatory variables and innovations suspected to be non-normal. The main stress is on the case of distribution of unknown, nonparametric, form, where series nonparametric estimates of the score function are employed in adaptive estimates of parameters of interest. These estimates are as efficient as ones based on a correct form; in particular they are more efficient than pseudo-Gaussian maximum likelihood estimates at non-Gaussian distributions. Two different adaptive estimates are considered. One entails a stringent condition on the spatial weight matrix, and is suitable only when observations have substantially many "neighbours". The other adaptive estimate relaxes this requirement, at the expense of alternative conditions and possible computational expense. A Monte Carlo study of finite sample performance is included.
    Keywords: Spatial autoregression, Efficient estimation, Adaptive estimation, Simultaneity bias.
    JEL: C13 C14 C21
    Date: 2007–02
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2007/515&r=ecm
  10. By: Pim Ouwehand; Rob J. Hyndman; Ton G. de Kok; Karel H. van Donselaar
    Abstract: We present an approach to improve forecast accuracy by simultaneously forecasting a group of products that exhibit similar seasonal demand patterns. Better seasonality estimates can be made by using information on all products in a group, and using these improved estimates when forecasting at the individual product level. This approach is called the group seasonal indices (GSI) approach, and is a generalization of the classical Holt-Winters procedure. This article describes an underlying state space model for this method and presents simulation results that show when it yields more accurate forecasts than Holt-Winters.
    Keywords: Common seasonality; demand forecasting; exponential smoothing; Holt-Winters; state space model.
    JEL: C53 C22 C52
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-7&r=ecm
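    Code sketch: one simple reading of the group seasonal indices (GSI) idea in item 10 -- estimate a single set of seasonal indices from the aggregate of all products, deseasonalize each product with those shared indices, forecast with Holt's linear method, and reseasonalize. The state space formulation in the paper is not reproduced; the panel, smoothing constants and index construction below are assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      # Hypothetical demand panel: 10 products, 5 years of monthly data, one shared
      # multiplicative seasonal pattern, different levels and noise.
      m, years, n_prod = 12, 5, 10
      season = 1 + 0.3 * np.sin(2 * np.pi * np.arange(m) / m)
      Y = np.array([b * np.tile(season, years) * (1 + 0.15 * rng.standard_normal(m * years))
                    for b in rng.uniform(50, 200, n_prod)])

      # Group seasonal indices: average seasonal ratios of the aggregate series.
      agg = Y.sum(axis=0)
      gsi = np.array([agg[j::m].mean() for j in range(m)])
      gsi /= gsi.mean()                                  # normalize to average one

      def holt_forecast(y, alpha=0.3, beta=0.1, h=1):
          # Holt's linear trend smoothing; returns an h-step-ahead forecast.
          level, trend = y[0], y[1] - y[0]
          for t in range(1, len(y)):
              new_level = alpha * y[t] + (1 - alpha) * (level + trend)
              trend = beta * (new_level - level) + (1 - beta) * trend
              level = new_level
          return level + h * trend

      # Deseasonalize each product with the shared indices, then reseasonalize.
      t_next = Y.shape[1]
      for i in range(n_prod):
          fc = holt_forecast(Y[i] / np.tile(gsi, years)) * gsi[t_next % m]
          print(f"product {i}: next-period forecast {fc:.1f}")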
  11. By: Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: Some exact distribution theory is developed for structural equation models with and without identities. The theory includes LIML, IV and OLS. We relate the new results to earlier studies in the literature, including the pioneering work of Bergstrom (1962). General IV exact distribution formulae for a structural equation model without an identity are shown to apply also to models with an identity by specializing along a certain asymptotic parameter sequence. Some of the new exact results are obtained by means of a uniform asymptotic expansion. An interesting consequence of the new theory is that the uniform asymptotic approximation provides the exact distribution of the OLS estimator in the model considered by Bergstrom (1962). This example appears to be the first instance in the statistical literature of a uniform approximation delivering an exact expression for a probability density.
    Keywords: Exact distribution, Identity, IV estimation, LIML, Structural equation, Uniform asymptotic expansion
    JEL: C30
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1613&r=ecm
  12. By: Alessandro De Gregorio (Università di Milano, Italy); Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT)
    Abstract: The telegraph process models a random motion with finite velocity and is usually proposed as an alternative to diffusion models. The process describes the position of a particle moving on the real line, alternately with constant velocity +v or -v. The changes of direction are governed by a homogeneous Poisson process with rate lambda > 0. In this paper, we consider a change point estimation problem for the rate of the underlying Poisson process by means of the least squares method. The consistency and the rate of convergence of the change point estimator are obtained and its asymptotic distribution is derived. Applications to real data are also presented.
    Keywords: discrete observations, change point problem, volatility regime switch, telegraph process
    Date: 2007–05–03
    URL: http://d.repec.org/n?u=RePEc:bep:unimip:1053&r=ecm
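    Code sketch: not the authors' estimator, but a generic illustration of least-squares change-point detection on a discretely observed telegraph path as in item 12. The path is simulated with a rate switch, squared increments serve as a volatility-style proxy whose mean depends on the switching rate, and the change point is the split minimizing the within-segment sum of squares. The proxy, grid and simulation settings are assumptions.

      import numpy as np

      rng = np.random.default_rng(6)

      def simulate_telegraph(T, dt, v, lam_fun):
          # Velocity flips between +v and -v at exponential times with rate
          # lam_fun(t); positions recorded on the grid 0, dt, ..., T.
          # (Holding times straddling the rate change are approximated.)
          t, x, vel = 0.0, 0.0, v
          grid = np.arange(0.0, T + dt, dt)
          pos, gi = [], 0
          next_flip = rng.exponential(1.0 / lam_fun(t))
          while gi < len(grid):
              t_next = min(next_flip, grid[gi])
              x += vel * (t_next - t)
              t = t_next
              if t == next_flip:
                  vel = -vel
                  next_flip = t + rng.exponential(1.0 / lam_fun(t))
              if t == grid[gi]:
                  pos.append(x)
                  gi += 1
          return grid, np.array(pos)

      # Rate switches from 1 to 8 at time 50 (the unknown change point).
      grid, pos = simulate_telegraph(100.0, 0.1, 1.0,
                                     lambda t: 1.0 if t < 50.0 else 8.0)

      # Least-squares change point on squared increments.
      z = np.diff(pos) ** 2
      n = len(z)
      sse = [k * np.var(z[:k]) + (n - k) * np.var(z[k:]) for k in range(10, n - 10)]
      print("estimated change time:", grid[10 + int(np.argmin(sse))])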
  13. By: Rao, B. Bhaskara
    Abstract: Applied economists working with time series data face a dilemma in selecting between models with deterministic and stochastic trends. While models with deterministic trends are widely used, models with stochastic trends are not so well known. In an influential paper, Harvey (1997) strongly advocates a structural time series approach with stochastic trends in place of the widely used autoregressive models based on unit root tests and cointegration techniques. It is therefore important to understand their relative merits. This paper suggests that both methodologies are useful and may perform differently in different models, and provides a few guidelines to help applied economists understand these alternative methods.
    Keywords: Stochastic and Deterministic Trends; Bai-Perron Tests; STAMP; Structural Time Series Models.
    JEL: C10 C22 C13 C00 C20
    Date: 2007–06–16
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3580&r=ecm
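    Code sketch: the practical distinction discussed in item 13, shown by simulation -- a trend-stationary series and a random walk with drift can look similar, but residuals from a fitted deterministic time trend remain highly persistent only in the stochastic-trend case, which is why the choice between trend regression and differencing (or a structural time series model) matters. The simulated series and the persistence diagnostic are illustrative assumptions, not the paper's examples.

      import numpy as np

      rng = np.random.default_rng(7)
      n, t = 200, np.arange(200)
      y_det = 0.5 + 0.05 * t + rng.standard_normal(n)    # deterministic trend
      y_sto = np.cumsum(0.05 + rng.standard_normal(n))   # stochastic trend (random walk with drift)

      for name, y in [("deterministic", y_det), ("stochastic", y_sto)]:
          X = np.column_stack([np.ones(n), t])
          resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # detrend by OLS on time
          rho = np.corrcoef(resid[1:], resid[:-1])[0, 1]         # persistence of residuals
          print(f"{name} trend: AR(1) coefficient of detrended residuals = {rho:.2f}")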
  14. By: David Hendry (Department of Economics, University of Oxford); Carlos Santos (Faculdade de Economia e Gestão, Universidade Católica Portuguesa (Porto))
    Abstract: We develop a new automatically-computable test for super exogeneity, using a variant of general-to-specific modelling. Based on the recent developments in impulse saturation applied to marginal models under the null that no impulses matter, we select the significant impulses for testing in the conditional. The approximate analytical non-centrality of the test is derived for a failure of invariance and for a failure of weak exogeneity when there is a shift in the marginal model. Monte Carlo simulations confirm the nominal significance levels under the null, and power against the two alternatives.
    Keywords: super exogeneity, general-to-specific, test power, indicators, cobreaking
    JEL: C51 C22
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:cap:wpaper:112007&r=ecm
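    Code sketch: a stripped-down version of the procedure in item 14 -- saturate the marginal model for the conditioning variable z with impulse dummies (here in two blocks), retain the significant ones, and test them jointly in the conditional model of y on z; a significant F statistic signals a failure of super exogeneity. Hendry and Santos embed this in general-to-specific selection with formal size and power analysis; the block scheme, significance level and simulated break below are assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)

      def ols(y, X):
          # OLS coefficients, t-statistics and residuals.
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          e = y - X @ beta
          s2 = e @ e / (len(y) - X.shape[1])
          se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
          return beta, beta / se, e

      # Hypothetical data: the marginal process for z shifts at t = 70, and the
      # conditional model for y shifts at the same time (a super-exogeneity failure).
      n = 100
      post = (np.arange(n) >= 70).astype(float)
      z = 0.5 + 3.0 * post + rng.standard_normal(n)
      y = 1.0 + 2.0 * z + 1.5 * post + rng.standard_normal(n)

      # Step 1: impulse-saturate the marginal model for z in two blocks and retain
      # the impulses that are significant at a tight level.
      crit = stats.norm.ppf(1 - 0.005)
      retained = []
      for block in (range(0, n // 2), range(n // 2, n)):
          D = np.zeros((n, len(block)))
          for j, i in enumerate(block):
              D[i, j] = 1.0
          _, tstats, _ = ols(z, np.column_stack([np.ones(n), D]))
          retained += [i for j, i in enumerate(block) if abs(tstats[1 + j]) > crit]

      # Step 2: test the retained impulses jointly in the conditional model.
      if retained:
          X0 = np.column_stack([np.ones(n), z])
          X1 = np.column_stack([X0, np.eye(n)[:, retained]])
          e0 = ols(y, X0)[2]
          e1 = ols(y, X1)[2]
          q, dof = len(retained), n - X1.shape[1]
          F = ((e0 @ e0 - e1 @ e1) / q) / (e1 @ e1 / dof)
          print(f"{q} impulses retained, F = {F:.2f}, p = {1 - stats.f.cdf(F, q, dof):.4f}")
      else:
          print("no impulses retained in the marginal model")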
  15. By: Ralf Becker; Adam Clements
    Abstract: Forecasting volatility has received a great deal of research attention. Many articles have considered the relative performance of econometric model-based and option-implied volatility forecasts. While many studies have found that implied volatility is the preferred approach, a number of issues remain unresolved, one being the relative merit of combination forecasts. By utilising recent econometric advances, this paper considers whether combination forecasts of S&P 500 volatility are statistically superior to a wide range of model-based forecasts and implied volatility. It is found that combination forecasts are the dominant approach, indicating that the VIX cannot simply be viewed as a combination of various model-based forecasts.
    Keywords: Implied volatility, volatility forecasts, volatility models, realized volatility, combination forecasts.
    JEL: C12 C22 G00
    Date: 2007–06–14
    URL: http://d.repec.org/n?u=RePEc:qut:auncer:2007-92&r=ecm
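    Code sketch: a simple regression-based (Granger-Ramanathan) forecast combination related to item 15 -- realized volatility is regressed on the individual forecasts over a training window and the fitted weights are applied out of sample. The paper's formal assessment of statistical superiority, and the role of implied volatility such as the VIX, are not reproduced; the simulated forecasts and the sample split are assumptions.

      import numpy as np

      rng = np.random.default_rng(9)

      # Hypothetical data: realized volatility rv and three competing forecasts,
      # each biased or noisy in a different way.
      n = 500
      true_vol = 0.5 + 0.3 * np.abs(np.sin(np.arange(n) / 25))
      rv = true_vol + 0.05 * rng.standard_normal(n)
      F = np.column_stack([0.9 * true_vol + 0.08 * rng.standard_normal(n),
                           true_vol + 0.1 + 0.06 * rng.standard_normal(n),
                           1.1 * true_vol + 0.10 * rng.standard_normal(n)])

      # Combination weights from an OLS of rv on the forecasts (plus a constant).
      split = 250
      X = np.column_stack([np.ones(split), F[:split]])
      w = np.linalg.lstsq(X, rv[:split], rcond=None)[0]
      combo = w[0] + F[split:] @ w[1:]

      mse = lambda f: np.mean((rv[split:] - f) ** 2)
      print("out-of-sample MSE, individual forecasts:",
            [round(mse(F[split:, j]), 4) for j in range(3)])
      print("out-of-sample MSE, combination:", round(mse(combo), 4))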
  16. By: Colignatus, Thomas
    Abstract: Nominal data in contingency tables currently lack a correlation coefficient, such as has already been defined for real data. A measure can be designed using the determinant, with the useful interpretation that the determinant gives the ratio between volumes. A contingency table by itself gives all connections between the variables. Required operations are only normalization and aggregation by means of that determinant, so that, in fact, a contingency table is its own correlation matrix. The idea for the normalization is that the conditional probabilities given the row and column sums can also be seen as regression coefficients that hence depend upon correlations. With M an m × n contingency table and n ≤ m the suggested measure is r = Sqrt[det[A'A]] with A = Normalized[M]. The sign can be recovered from a generalization of the determinant to non-square matrices. With M an n1 × n2 × ... × nk contingency matrix, we can construct a matrix of pairwise correlations R. A matrix of such pairwise correlations is called an association matrix. If that matrix is also positive semi-definite (PSD) then it is a proper correlation matrix. The overall correlation then is R = f[R] where f can be chosen to impose PSD-ness. An option is to use f[R] = Sqrt[1 - det[R]]. However, for both nominal and cardinal data the advisable choice is to take the maximal multiple correlation within R. The resulting measure of “nominal correlation” measures the distance between a main diagonal and the off-diagonal elements, and thus is a measure of strong correlation. Cramer’s V measure for pairwise correlation can be generalized in this manner too. It measures the distance between all diagonals (including cross-diagonals and subdiagonals) and statistical independence, and thus is a measure of weaker correlation. Finally, when also variances are defined then regression coefficients can be determined from the variance-covariance matrix.
    Keywords: association; correlation; contingency table; volume ratio; determinant; nonparametric methods; nominal data; nominal scale; categorical data; Fisher’s exact test; odds ratio; tetrachoric correlation coefficient; phi; Cramer’s V; Pearson; contingency coefficient; uncertainty coefficient; Theil’s U; eta; meta-analysis; Simpson’s paradox; causality; statistical independence; regression
    JEL: C10
    Date: 2007–03–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3394&r=ecm
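    Code sketch: the determinant-based measure quoted in item 16, r = Sqrt[det[A'A]] with A = Normalized[M]. The abstract does not spell out the normalization; scaling each column of M to unit Euclidean length is assumed here because it yields r = 1 for a (permuted) diagonal table and r = 0 under statistical independence, but it may not match the author's exact definition.

      import numpy as np

      def nominal_correlation(M):
          # r = sqrt(det(A'A)) for an m x n contingency table (n <= m), where A is
          # M with columns scaled to unit Euclidean length (assumed normalization).
          M = np.asarray(M, dtype=float)
          if M.shape[1] > M.shape[0]:
              M = M.T                                    # ensure n <= m
          A = M / np.linalg.norm(M, axis=0)
          det = max(np.linalg.det(A.T @ A), 0.0)         # guard against rounding below zero
          return float(np.sqrt(det))

      print(nominal_correlation([[40, 0], [0, 60]]))     # perfect association -> 1.0
      print(nominal_correlation([[20, 30], [20, 30]]))   # independence -> 0.0
      print(nominal_correlation([[30, 10], [10, 30]]))   # intermediate -> 0.8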
  17. By: Afonso Gonçalves da Silva; Peter M Robinson
    Abstract: Asset returns are frequently assumed to be determined by one or more common factors. We consider a bivariate factor model, where the unobservable common factor and idiosyncratic errors are stationary and serially uncorrelated, but have strong dependence in higher moments. Stochastic volatility models for the latent variables are employed, in view of their direct application to asset pricing models. Assuming the underlying persistence is higher in the factor than in the errors, a fractional cointegrating relationship can be recovered by suitable transformation of the data. We propose a narrow band semiparametric estimate of the factor loadings, which is shown to be consistent with a rate of convergence, and its finite sample properties are investigated in a Monte Carlo experiment.
    Keywords: Fractional cointegration, stochastic volatility, narrow band least squares, semiparametric analysis.
    JEL: C22
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2007/519&r=ecm
  18. By: Michael J. Dueker; Zacharias Psaradakis; Martin Sola; Fabio Spagnolo
    Abstract: In this paper we propose a contemporaneous threshold multivariate smooth transition autoregressive (C-MSTAR) model in which the regime weights depend on the ex ante probabilities that latent regime-specific variables exceed certain threshold values. The model is a multivariate generalization of the contemporaneous threshold autoregressive model introduced by Dueker et al. (2007). A key feature of the model is that the transition function depends on all the parameters of the model as well as on the data. The stability and distributional properties of the proposed model are investigated. The C-MSTAR model is also used to examine the relationship between US stock prices and interest rates.
    Keywords: Time-series analysis ; Capital assets pricing model
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2007-019&r=ecm
  19. By: Carlo Fiorio (University of Milan); Vassilis Hajivassiliou (London School of Economics)
    Abstract: This paper analyses the distribution of the classical t-ratio statistic from distributions with no finite moments and shows how classical testing is affected. Some surprising results are obtained in terms of bimodality vs. the usual unimodality of the standard studentized t-distribution prevailing in classical conditions. The paper develops a new distribution termed the "double Pareto," which allows the thickness of the tails and the existence of moments to be determined parametrically. We also consider infinite-moments distributions truncated on a compact support to investigate the relative importance of tail thickness in the case of finite moments. We find that the bimodality persists even in such cases. Simulation results are used to highlight the dangers of relying on naive testing in the face of thick-tailed distributions. Special cases analyzed include one- and two-sample statistical inference problems, as well as linear regression econometric problems.
    Keywords: thick-tailed distributions, studentized t-distribution, Pareto distribution, bimodality, truncated distribution
    Date: 2007–05–03
    URL: http://d.repec.org/n?u=RePEc:bep:unimip:1054&r=ecm
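    Code sketch: a small simulation in the spirit of item 19 -- one-sample t-ratios computed from symmetric Pareto-tailed data with tail index below one pile up near -1 and +1 (the bimodality discussed in the paper) instead of resembling a Student t. The "double Pareto" construction below (a Pareto tail on each side of zero) is an assumed parametrization and need not match the paper's exact definition.

      import numpy as np

      rng = np.random.default_rng(10)

      def double_pareto(size, alpha):
          # Symmetric Pareto draws: magnitude ~ Pareto(alpha) on [1, inf), random sign.
          # With alpha <= 1 even the mean does not exist.
          return rng.choice([-1.0, 1.0], size=size) * (rng.pareto(alpha, size=size) + 1.0)

      def t_ratio(x):
          return np.sqrt(len(x)) * x.mean() / x.std(ddof=1)

      reps, n = 20000, 50
      t_heavy = np.array([t_ratio(double_pareto(n, alpha=0.8)) for _ in range(reps)])
      t_gauss = np.array([t_ratio(rng.standard_normal(n)) for _ in range(reps)])

      bins = np.linspace(-4, 4, 17)
      print("heavy-tailed t-ratios:", np.histogram(t_heavy, bins)[0])   # two humps near +/-1
      print("Gaussian t-ratios:    ", np.histogram(t_gauss, bins)[0])   # one hump at 0
      print("P(|t| > 1.96):", np.mean(np.abs(t_heavy) > 1.96), "vs",
            np.mean(np.abs(t_gauss) > 1.96))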
  20. By: Frank A Cowell; Maria-Pia Victoria-Feser
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:cep:stidar:91&r=ecm
  21. By: Garrone Giovanna; Marchionatti Roberto (University of Turin)
    Date: 2007–05
    URL: http://d.repec.org/n?u=RePEc:uto:cesmep:200703&r=ecm
  22. By: Colignatus, Thomas
    Abstract: Logistic regression (LR) is one of the most used estimation techniques for nominal data collected in contingency tables, and the question arises how the recently proposed concept of nominal correlation and regression (NCR) relates to it. (1) LR targets the cells in the contingency table while NCR targets only the variables. (2) Where the methods seem to overlap, such as in the 2 × 2 × 2 case, there still is the difference between the use of categories by LR (notably the categories Success, Cause and Confounder) and the use of variables by NCR (notably the variables Effect, Truth and Confounding). (3) Since LR looks for the most parsimonious model, the analysis might be helped by NCR, which is very parsimonious since it uses only the variables and not all the cells of the contingency table. (4) While LR may generate statistically significant regressions, NCR may show that the correlation is still low. (5) Risk difference regression may be a bridge to understand more about the difference between LR and NCR. (6) The use of LR and NCR next to each other may help to focus on the research question and the amount of detail required for it.
    Keywords: Experimental economics; causality; cause and effect; confounding; contingency table; epidemiology; correlation; regression; logistic regression
    JEL: C10
    Date: 2007–06–19
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:3615&r=ecm
  23. By: Garrone Giovanna; Marchionatti Roberto (University of Turin)
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:uto:cesmep:200702&r=ecm

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.