
on Econometrics 
By:  M. Hashem Pesaran; Aman Ullah; Takashi Yamagata 
Abstract:  This paper proposes bias-adjusted normal approximation versions of the Lagrange multiplier (NLM) test of error cross-section independence of Breusch and Pagan (1980) for panel models with strictly exogenous regressors and normal errors. The exact mean and variance of the Lagrange multiplier (LM) test statistic are provided for the purpose of the bias adjustments, and it is shown that the proposed tests have a standard normal distribution for a fixed time series dimension (T) as the cross-section dimension (N) tends to infinity. Importantly, the proposed bias-adjusted NLM tests are consistent even when Pesaran's (2004) CD test is inconsistent. The finite sample evidence shows that the bias-adjusted NLM tests successfully control the size while maintaining satisfactory power. However, it is also shown that the bias-adjusted NLM tests are not as robust as the CD test to non-normal errors and/or the presence of weakly exogenous regressors. 
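As a rough illustration of the statistics involved, here is a minimal Python sketch of the Breusch-Pagan LM statistic built from pairwise residual correlations, along with the naive scaled normal-approximation version; the residual matrix is a hypothetical input, and the paper's exact-mean-and-variance bias adjustment is not reproduced here.

```python
import numpy as np

def pairwise_corr_stats(resid):
    """resid: T x N matrix of panel residuals.
    Returns the Breusch-Pagan LM statistic and a naive scaled
    (normal-approximation) version.  The bias-adjusted tests of the
    paper replace the naive centering/scaling by exact moments."""
    T, N = resid.shape
    R = np.corrcoef(resid, rowvar=False)          # N x N correlation matrix
    iu = np.triu_indices(N, k=1)                  # i < j pairs
    rho2 = R[iu] ** 2
    lm = T * rho2.sum()                           # ~ chi2(N(N-1)/2) for large T
    scaled = np.sqrt(1.0 / (N * (N - 1))) * (T * rho2 - 1.0).sum()
    return lm, scaled

rng = np.random.default_rng(0)
u = rng.standard_normal((100, 8))                 # independent errors (null)
lm, nlm = pairwise_corr_stats(u)
```

Under cross-section independence the scaled statistic should be close to standard normal, which is the approximation the paper's exact moments then refine.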
Keywords:  Cross Section Dependence, Spatial Dependence, LM test, Panel Model, Bias-adjusted Test 
JEL:  C12 C13 C33 
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:0641&r=ecm 
By:  Junji Shimada; Yoshihiko Tsukuda 
Abstract:  Stochastic volatility (SV) models had not been as popular as ARCH (autoregressive conditional heteroskedasticity) models in practical applications until recent years, even though SV models are closely related to financial economic theories. The main reason is that the likelihood of SV models, unlike that of ARCH models, is not easy to evaluate. Developments in Markov Chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research, including SV models. After Jacquier et al. (1994) applied a Bayesian analysis to estimating the SV model in their epoch-making work, the Bayesian approach has greatly contributed to research on SV models. The classical analysis based on the likelihood for estimating the SV model has also been extensively studied in recent years. Danielsson (1994) approximates the marginal likelihood of the observable process by simulating the latent volatility conditional on the available information. Shephard and Pitt (1997) gave an idea for evaluating the likelihood by exploiting sampled volatility. Durbin and Koopman (1997) explored the idea of Shephard and Pitt (1997) and evaluated the likelihood by Monte Carlo integration. Sandmann and Koopman (1998) applied this method to the SV model. Durbin and Koopman (2000) reviewed the methods of Monte Carlo maximum likelihood from both Bayesian and classical perspectives. The purpose of this paper is to propose the Laplace approximation (LA) method for the nonlinear state space representation, and to show that the LA method is workable for estimating SV models, including the multivariate SV model and the dynamic bivariate mixture (DBM) model. The SV model can be regarded as a nonlinear state space model. 
The LA method approximates the logarithm of the joint density of the current observation and volatility conditional on past observations by a second-order Taylor expansion around its mode, and then applies the nonlinear filtering algorithm. This idea of approximation is found in Shephard and Pitt (1997) and Durbin and Koopman (1997). The Monte Carlo likelihood (MCL: Sandmann and Koopman (1998)) is now a standard classical method for estimating SV models. It is based on the importance sampling technique, and importance sampling is regarded as an exact method for maximum likelihood estimation. We show that the LA method of this paper approximates the weight function by unity in the context of importance sampling. We do not need to carry out Monte Carlo integration to obtain the likelihood, since the approximate likelihood function can be obtained analytically. The LA method is workable provided the one-step-ahead prediction density of the observation and volatility variables conditional on past observations is approximated sufficiently accurately. We examine how the LA method works through simulations as well as various empirical studies. We conduct Monte Carlo simulations for the univariate SV model to examine the small sample properties and compare them with those of other methods. The simulation experiments reveal that our method is comparable to the MCL, maximum likelihood (Fridman and Harris (1998)) and MCMC methods. We apply this method to the univariate SV models with a normal distribution or t-distribution, the bivariate SV model and the dynamic bivariate mixture model, and empirically illustrate how the LA method works for each of the extended models. The empirical results on the stock markets reveal that our method provides estimates of coefficients very similar to those of the MCL. As a result, this paper demonstrates that the LA method is workable in two ways: through simulation studies and through empirical studies. 
Naturally, this workability is limited to the cases we have examined, but based on the study in this paper we believe the LA method is applicable to many SV models. 
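The one-step Laplace idea can be sketched for the basic univariate SV model as follows. This is an illustrative reconstruction under simple assumptions (a Newton search for the mode of the joint log density of observation and volatility, then a second-order Laplace integral for the predictive density), not the authors' full multivariate algorithm.

```python
import numpy as np

def sv_laplace_loglik(y, mu, phi, sigma):
    """Approximate log-likelihood of the SV model
        y_t = exp(h_t/2) * eps_t,  h_t = mu + phi*(h_{t-1}-mu) + sigma*eta_t,
    via a one-step Laplace (second-order Taylor at the mode) filter."""
    m = mu                                # predictive mean of h_1
    P = sigma**2 / (1.0 - phi**2)         # stationary predictive variance
    loglik = 0.0
    for yt in y:
        h = m                             # Newton search for the joint mode
        for _ in range(20):
            g1 = -0.5 * (1.0 - yt**2 * np.exp(-h)) - (h - m) / P
            g2 = -0.5 * yt**2 * np.exp(-h) - 1.0 / P
            h -= g1 / g2                  # g is concave, so this converges
        g2 = -0.5 * yt**2 * np.exp(-h) - 1.0 / P   # curvature at the mode
        g = (-0.5 * (np.log(2*np.pi) + h + yt**2 * np.exp(-h))
             - 0.5 * np.log(2*np.pi*P) - (h - m)**2 / (2*P))
        loglik += g + 0.5*np.log(2*np.pi) - 0.5*np.log(-g2)  # Laplace integral
        v = -1.0 / g2                     # approximate filtered variance of h_t
        m = mu + phi * (h - mu)           # one-step-ahead prediction of h
        P = phi**2 * v + sigma**2
    return loglik

rng = np.random.default_rng(1)
mu, phi, sig = -1.0, 0.95, 0.2            # hypothetical parameter values
h, hs = mu, []
for _ in range(300):
    h = mu + phi*(h - mu) + sig*rng.standard_normal()
    hs.append(h)
y = np.exp(np.array(hs)/2) * rng.standard_normal(300)
ll = sv_laplace_loglik(y, mu, phi, sig)
```

Each term added to `loglik` approximates the one-step-ahead predictive log density of the observation, which is exactly the quantity the abstract says must be approximated accurately for the method to work.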
Keywords:  Stochastic volatility, Nonlinear state space representation 
JEL:  C13 C22 
Date:  2004–08–11 
URL:  http://d.repec.org/n?u=RePEc:ecm:feam04:611&r=ecm 
By:  Byeongseon Seo 
Abstract:  The smooth transition autoregressive (STAR) model was proposed by Chan and Tong (1986) as a generalization of the threshold autoregressive (TAR) model, and since then it has attracted wide attention in the literature on business cycles and on the equilibrium parity relationships of commodity prices, exchange rates, and equity prices. Economic behavior is affected by asymmetric transaction costs and institutional rigidities, and a large number of studies (for example, Neftci (1984), Terasvirta and Anderson (1992), and Michael, Nobay, and Peel (1997)) have accordingly shown that many economic variables and relations display asymmetry and nonlinear adjustment. One of the most crucial issues in models of this kind is testing for the presence of nonlinear adjustment under the null of linearity. Luukkonen, Saikkonen, and Terasvirta (1988) expanded the transition function and proposed variable addition tests of linearity against smooth transition nonlinearity, and these tests have been used in many empirical studies. However, the test statistics are based on a polynomial approximation, and the approximation errors may affect statistical inference depending on the values of the transition rate and location parameters. Furthermore, the tests are not directly related to the smooth transition model, so we cannot retrace what causes a rejection of linearity. This paper considers direct tests for nonlinear adjustment, based on the exact specification of the smooth transition. The smooth transition model entails transition parameters that cannot be identified under the null hypothesis, yet the resulting optimality issue has not been treated extensively for this model. The optimality issue regarding unidentified parameters has been developed by Davies (1987), Andrews (1993), and Hansen (1996); Hansen (1996) in particular considered the optimality issue in threshold models. 
The threshold parameter cannot be identified under the null hypothesis, and as a result the likelihood ratio statistic has a nonstandard distribution. The smooth transition model generalizes the threshold model, and this paper accordingly develops the appropriate tests and the associated distribution theory based on the optimality argument. Many empirical studies have found evidence of stochastic nonlinear dependence in equilibrium relations such as purchasing power parity. For example, Michael, Nobay, and Peel (1997), considering the equilibrium model of the real exchange rate in the presence of transaction costs, found strong evidence of nonlinear adjustment, which conforms to the exponential smooth transition model. There is a huge and growing literature in this area, but the econometric methods and formal theory have been limited. This paper proposes tests for nonlinear adjustment in smooth transition vector error correction models, and thereby fills this gap in the literature. One technical difficulty is estimating the smooth transition model. As noted by Haggan and Ozaki (1981) and Terasvirta (1994), it is difficult to estimate the smooth transition parameters jointly with the other slope parameters: the gradient with respect to the transition parameter forces its estimate to blow up to infinity, so we cannot rely on standard estimation algorithms. Our tests are based on the Lagrange multiplier statistic, which can be calculated under the null hypothesis; they are therefore easy to implement and thus useful. This paper finds that our tests have an asymptotic distribution based on a Gaussian process. However, the asymptotic distribution depends on nuisance parameters and the covariances are data-dependent, so tabulating the asymptotic distribution is not feasible. This paper suggests bootstrap inference to approximate the sampling distribution of the test statistics. 
Simulation evidence shows that the bootstrap inference delivers reasonable size and power performance. 
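A stylized univariate sketch of the sup-LM-with-bootstrap idea is given below, assuming a logistic STAR(1) alternative and a small grid of hypothetical transition parameters; the paper itself treats smooth transition vector error correction models, so this is only the skeleton of the approach.

```python
import numpy as np

def sup_lm_star(y, gammas=(1.0, 5.0, 20.0), B=49, seed=0):
    """Sup-LM test of linearity against a logistic STAR(1) alternative,
    with a fixed-design residual bootstrap p-value.  The LM statistic is
    computed under the null, so no STAR model needs to be estimated."""
    rng = np.random.default_rng(seed)
    yt, yl = y[1:], y[:-1]
    X = np.column_stack([np.ones_like(yl), yl])   # regressors under the null
    cs = np.quantile(yl, [0.25, 0.5, 0.75])       # candidate locations

    def sup_lm(dep):
        beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
        e = dep - X @ beta
        best = 0.0
        for g in gammas:
            for c in cs:
                G = 1.0 / (1.0 + np.exp(-g * (yl - c)))       # transition fn
                Z = np.column_stack([X, X * G[:, None]])
                u = e - Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
                best = max(best, len(e) * (1.0 - u @ u / (e @ e)))  # T*R^2
        return best, beta, e

    stat, beta, e = sup_lm(yt)
    exceed = sum(sup_lm(X @ beta + rng.choice(e, len(e)))[0] >= stat
                 for _ in range(B))               # bootstrap under the null
    return stat, (1 + exceed) / (B + 1)

rng = np.random.default_rng(2)
y = np.zeros(200)
for t in range(1, 200):                           # linear AR(1) null data
    y[t] = 0.5 * y[t-1] + rng.standard_normal()
stat, pval = sup_lm_star(y)
```

Taking the supremum over the grid mimics the treatment of the unidentified transition parameters, and the bootstrap replaces the infeasible tabulation of the nuisance-parameter-dependent limit distribution.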
Keywords:  Nonlinearity; Smooth Transition; VECM 
JEL:  C32 
Date:  2004–08–11 
URL:  http://d.repec.org/n?u=RePEc:ecm:feam04:749&r=ecm 
By:  Dirk Hoorelbeke 
Abstract:  The Lagrange multiplier test, or score test, suggested independently by Aitchison and Silvey (1958) and Rao (1948), tests for parametric restrictions. Although the score test is an intuitively appealing and often used procedure, the exact distribution of the score test statistic is generally unknown and is often approximated by its first-order asymptotic $\chi^2$ distribution. In problems of econometric inference, however, first-order asymptotic theory may be a poor guide, and this is also true for the score test, as demonstrated in different Monte Carlo studies; see e.g. Breusch and Pagan (1979), Bera and Jarque (1981), Davidson and MacKinnon (1983, 1984, 1992), Chesher and Spady (1991) and Horowitz (1994), among many others. One can use the bootstrap distribution of the score test statistic to obtain a critical value. This can already give satisfactory results in terms of ERP (error in rejection probability: the difference between the nominal and actual rejection probability under the null hypothesis). However, the score test uses a quadratic form statistic. In the construction and implementation of such a statistic, two important aspects determine the performance of the test (both under the null and under the alternative): (i) the weighting matrix (the covariance matrix of the score vector) and (ii) the critical value. Since the score test statistic is asymptotically pivotal, the bootstrap critical value is second-order correct. However, one can achieve better performance, both in terms of ERP and of power, by using a better estimate of the weighting matrix used in the quadratic form. In this paper we propose a bootstrap-based method to obtain both a second-order correct estimate of the covariance matrix of the score vector and a second-order correct critical value, using only one round of simulations (instead of B1 + B1 x B2). The method works as follows. 
Assume there exists a matrix A such that the score vector premultiplied by A is asymptotically pivotal. An obvious choice for A is the inverse of a square root of a covariance matrix estimate of the score vector, yielding a multivariate studentized score vector; this is not the only possible choice for A, though. Since the transformed score vector is then asymptotically pivotal, the bootstrap distribution is a second-order approximation to the exact finite sample distribution. As such, the bootstrap covariance matrix of the transformed score vector is also second-order correct. The next step is to construct a quadratic form statistic in the transformed score vector using its bootstrap covariance matrix as the weighting matrix. This statistic is asymptotically (as both the sample size and the number of bootstrap simulations go to infinity) chi-squared distributed with q (the dimension of the score) degrees of freedom. In practice, however, the number of bootstrap simulations is fixed at, say, B simulations. In this case the statistic is asymptotically (for the sample size tending to infinity) Hotelling T-squared distributed with q and B-1 degrees of freedom. Using a finite B, the exact finite sample covariance matrix of the transformed score vector is estimated with some noise, but the T-squared critical values correct for this. When the T-squared critical values are used, one is still only first-order correct. But the distribution of the new statistic can also be approximated by the empirical distribution function of the quadratic forms in the bootstrap replications of the transformed score vector, using the inverse of the bootstrap covariance matrix as the weighting matrix. The appropriate quantile of this empirical distribution delivers a critical value which is second-order correct. In a Monte Carlo simulation study we look at the information matrix test (White, 1982) in the regression model. 
Chesher (1983) showed that the information matrix test is a score test for parameter constancy. We correct the Chesher-Lancaster version (Chesher, 1983 and Lancaster, 1984) of the information matrix test with the method proposed above and examine the ERP under the null and the power under a heteroskedastic alternative. The corrected statistic outperforms the Chesher-Lancaster statistic both in terms of ERP (with asymptotic or bootstrap critical values) and power. 
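The final step of the procedure can be sketched generically: given bootstrap replications of an already-studentized score vector (stand-in inputs here), form the quadratic form with the bootstrap covariance as weighting matrix and take the critical value from the empirical distribution of the bootstrap quadratic forms themselves.

```python
import numpy as np

def bootstrap_score_test(s_hat, S_boot, alpha=0.05):
    """Quadratic-form score test using the bootstrap covariance of the
    transformed score as weighting matrix, with the critical value taken
    as the (1-alpha) quantile of the bootstrap quadratic forms.
    s_hat: (q,) transformed sample score; S_boot: (B, q) bootstrap scores."""
    V = np.cov(S_boot, rowvar=False)               # bootstrap weighting matrix
    Vinv = np.linalg.inv(np.atleast_2d(V))
    stat = s_hat @ Vinv @ s_hat
    qf = np.einsum('bi,ij,bj->b', S_boot, Vinv, S_boot)
    crit = np.quantile(qf, 1 - alpha)              # second-order critical value
    return stat, crit, stat > crit

rng = np.random.default_rng(3)
q, B = 3, 999
S_boot = rng.standard_normal((B, q))               # stand-in bootstrap scores
s_hat = rng.standard_normal(q)
stat, crit, reject = bootstrap_score_test(s_hat, S_boot)
```

With pivotal scores the bootstrap quadratic forms behave approximately like a chi-squared(q) sample, so `crit` lands near the usual asymptotic critical value while still absorbing the finite-B estimation noise in the weighting matrix.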
Keywords:  bootstrap; score test 
JEL:  C12 C15 
Date:  2004–08–11 
URL:  http://d.repec.org/n?u=RePEc:ecm:nasm04:228&r=ecm 
By:  Markus Frölich 
Abstract:  This note argues that nonparametric regression not only relaxes functional form assumptions vis-a-vis parametric regression, but that it also permits endogenous control variables. To control for selection bias, or to make an exclusion restriction in instrumental variables regression valid, additional control variables are often added to a regression. If any of these control variables is endogenous, OLS or 2SLS would be inconsistent and would require further instrumental variables; nonparametric approaches remain consistent, though. A few examples are examined, and it is found that the asymptotic bias of OLS can indeed be very large. 
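The mechanism behind the argument is that nonparametric regression estimates the conditional expectation E[y|x] directly, without imposing a linear projection. A minimal kernel-regression sketch (Nadaraya-Watson with a Gaussian kernel, hypothetical data) illustrates the estimator the note relies on:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel.
    The estimator targets the conditional expectation itself, which is why
    misspecified or endogenous controls do not bias it the way OLS is biased."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)         # kernel weights
    return (w * y).sum() / w.sum()

rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, 2000)
y = np.sin(x) + 0.1 * rng.standard_normal(2000)    # nonlinear regression fn
m0 = nadaraya_watson(1.0, x, y, h=0.2)             # estimate at x = 1
```

On this design the kernel estimate recovers sin(1) up to smoothing bias and sampling noise, whereas a linear fit would be systematically off.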
Keywords:  Endogeneity, nonparametric regression, instrumental variables 
JEL:  C13 C14 
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:usg:dp2006:200611&r=ecm 
By:  Hidehiko Ichimura (Faculty of Economics, University of Tokyo) 
Abstract:  This paper develops a concrete formula for the asymptotic distribution of two-step, possibly non-smooth semiparametric M-estimators under general misspecification. Our regularity conditions are relatively straightforward to verify and also weaker than those available in the literature. The first-stage nonparametric estimation may depend on finite-dimensional parameters. We characterize: (1) conditions under which the first-stage estimation of nonparametric components does not affect the asymptotic distribution, (2) conditions under which the asymptotic distribution is affected by the derivatives of the first-stage nonparametric estimator with respect to the finite-dimensional parameters, and (3) conditions under which one can allow non-smooth objective functions. Our framework is illustrated by applying it to three examples: (1) profiled estimation of a single index quantile regression model, (2) semiparametric least squares estimation under model misspecification, and (3) a smoothed matching estimator. 
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2006cf426&r=ecm 
By:  Jonathan Hill (Department of Economics, Florida International University) 
Abstract:  This paper analyzes an estimator of the tail shape of a distribution due to B. Hill (1975) under general conditions of dependence and heterogeneity. For processes with extremes that are near-epoch-dependent on the extremes of a mixing process, we prove that a (possibly stochastic) weighted average tail index over a window of sample tail regions has the same Gaussian distribution limit as any one tail index estimator. We provide a new approximation of the mean-square-error of the Hill estimator for any process with regularly varying distribution tails, as well as a new kernel estimator of a generalized mean-square-error based on a data-driven weighted average of the bias and variance. A broad simulation study demonstrates the strength of the kernel estimator for matters of inference when the data are dependent and heterogeneous. We demonstrate that minimum mean-square-error and mean-square-error weighted average estimators have excellent properties, including sharpness of confidence bands and the propensity to generate an estimator that is approximately normally distributed. 
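The building block of the paper is the classical Hill estimator. The sketch below computes it on exact Pareto data and also takes a simple unweighted average over a window of tail fractions, echoing the window-average idea; the paper's stochastic weights and kernel mean-square-error estimator are not reproduced.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill (1975) estimator of the tail index alpha from the k largest
    order statistics of |x|: 1 / mean of log spacings above the k-th."""
    xs = np.sort(np.abs(x))[::-1]                  # descending order statistics
    return 1.0 / (np.log(xs[:k]) - np.log(xs[k])).mean()

rng = np.random.default_rng(5)
x = rng.uniform(size=20000) ** -0.5                # exact Pareto tail, alpha = 2
single = hill_estimator(x, k=500)                  # one tail region
window = np.mean([hill_estimator(x, k)             # naive average over a window
                  for k in range(200, 801, 100)])  # of sample tail regions
```

For an exact Pareto law the estimator is unbiased for any k; the bias-variance tension the paper's kernel MSE estimator addresses arises only when the tail is merely regularly varying.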
Keywords:  Hill estimator, regular variation, extremal near epoch dependence, kernel estimator, mean-square-error 
JEL:  C15 C29 C49 
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:fiu:wpaper:0604&r=ecm 
By:  JeanMarie Dufour; Abdeljelil Farhat; Lynda Khalaf 
Abstract:  This paper illustrates the usefulness of resampling-based methods in the context of multiple (simultaneous) tests, with emphasis on econometric applications. Economic theory often suggests joint (or simultaneous) hypotheses on econometric models; consequently, the problem of evaluating joint rejection probabilities arises frequently in econometrics and statistics. In this regard, it is well known that ignoring the joint nature of multiple hypotheses may lead to serious test size distortions. Whereas most available multiple test techniques are conservative in the presence of non-independent statistics, our proposed tests provably achieve size control. Specifically, we use the Monte Carlo (MC) test technique to extend several well-known combination methods to the non-independent statistics context. We first cast the multiple test problem into a unified statistical framework which: (i) serves to show how exact global size control is achieved through the MC test method, and (ii) yields a number of superior tests previously not considered. Secondly, we provide a review of relevant available results. Finally, we illustrate the applicability of our proposed procedure to the problem of moments-based normality tests. For this problem, we propose an exact variant of Kiefer and Salmon's (1983) test, and an alternative combination method which exploits the well-known Fisher-Pearson procedure. Our simulation study reveals that the latter method seems to correct for the problem of test biases against platykurtic alternatives. In general, our results show that concrete and non-spurious power gains (over standard combination methods) can be achieved through our multiple Monte Carlo test approach. 
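A toy version of the combined-test idea for normality testing is sketched below: skewness- and kurtosis-based statistics are combined through the Fisher-Pearson statistic, with every p-value obtained by Monte Carlo simulation so that no independence between the two components is needed. This is a simplified illustration, not the paper's exact Kiefer-Salmon variant.

```python
import numpy as np

def fisher_mc_normality(x, B=199, seed=0):
    """Fisher-Pearson combination of skewness and excess-kurtosis tests,
    with Monte Carlo marginal p-values and a Monte Carlo global p-value."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def sk_ku(z):
        z = (z - z.mean()) / z.std()
        return abs((z**3).mean()), abs((z**4).mean() - 3.0)

    sims = np.array([sk_ku(rng.standard_normal(n)) for _ in range(B)])

    def fisher(stats):                            # -2 * sum(log p_i)
        p = [(1 + (sims[:, i] >= stats[i]).sum()) / (B + 1) for i in (0, 1)]
        return -2.0 * np.log(p).sum()

    stat = fisher(sk_ku(x))
    null = np.array([fisher(s) for s in sims])    # null distribution of the
    return (1 + (null >= stat).sum()) / (B + 1)   # combined statistic

rng = np.random.default_rng(6)
p_normal = fisher_mc_normality(rng.standard_normal(200))
p_expo = fisher_mc_normality(rng.exponential(size=200))
```

Because the combined statistic's null distribution is itself simulated, dependence between the skewness and kurtosis components is handled automatically, which is the point of the MC test technique.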
Keywords:  linear regression, normality test, goodness of fit, skewness, kurtosis, higher moments, Monte Carlo, induced test, test combination, simultaneous inference, Tippett, Fisher, Pearson, SURE, heteroskedasticity test 
JEL:  C1 C12 C15 C2 C52 C21 C22 
Date:  2005–02–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2005s05&r=ecm 
By:  Andrew Patton (London School of Economics) 
Abstract:  The use of a conditionally unbiased, but imperfect, volatility proxy can lead to undesirable outcomes in standard methods for comparing conditional variance forecasts. We derive necessary and sufficient conditions on the functional form of the loss function for the ranking of competing volatility forecasts to be robust to the presence of noise in the volatility proxy, and derive some interesting special cases of this class of 'robust' loss functions. We motivate the theory with analytical results on the distortions caused by some widely used loss functions when used with standard volatility proxies such as squared returns, the intra-daily range or realised volatility. The methods are illustrated with an application to the volatility of returns on IBM over the period 1993 to 2003. 
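To make the setup concrete, here is a sketch using two loss functions commonly cited as members of the robust class (squared-error and a QLIKE-type loss), with squared returns as the conditionally unbiased proxy; the simulated variance path and forecasts are hypothetical.

```python
import numpy as np

def mse_loss(proxy, h):
    """Squared-error loss between the volatility proxy and the forecast h."""
    return (proxy - h) ** 2

def qlike_loss(proxy, h):
    """QLIKE-type loss: proxy/h - log(proxy/h) - 1."""
    r = proxy / h
    return r - np.log(r) - 1.0

rng = np.random.default_rng(7)
T = 50000
sigma2 = 0.5 + 0.4 * np.sin(np.linspace(0, 20, T)) ** 2  # true cond. variance
r = np.sqrt(sigma2) * rng.standard_normal(T)
proxy = r ** 2                                  # unbiased but noisy proxy
h_good = sigma2                                 # the perfect forecast
h_bad = np.full(T, sigma2.mean())               # a constant forecast
better_mse = mse_loss(proxy, h_good).mean() < mse_loss(proxy, h_bad).mean()
better_ql = qlike_loss(proxy, h_good).mean() < qlike_loss(proxy, h_bad).mean()
```

With a robust loss, the expected loss is minimized at the true conditional variance even though the proxy is noisy, so the better forecast wins on average under both losses; a non-robust loss could reverse this ranking.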
Keywords:  forecast evaluation; forecast comparison; loss functions; realised variance; range 
JEL:  C53 C52 C22 
Date:  2006–05–01 
URL:  http://d.repec.org/n?u=RePEc:uts:rpaper:175&r=ecm 
By:  M. Hashem Pesaran; Ron P. Smith; Takashi Yamagata; Liudmyla Hvozdyk 
Abstract:  In this paper we adopt a new approach to testing for purchasing power parity, PPP, that is robust to base country effects, cross-section dependence, and aggregation. Given data on N+1 countries, i, j = 0, 1, 2, ..., N, the standard procedure is to apply unit root or stationarity tests to N relative prices against a base country, 0, e.g. the US. The evidence is that such tests are sensitive to the choice of base country. In addition, the analysis is subject to a high degree of cross-section dependence, which is difficult to deal with particularly when N is large. In this paper we test for PPP applying a pairwise approach to the disaggregated data set recently analysed by Imbs, Mumtaz, Ravn and Rey (2005, QJE). We consider a variety of tests applied to all possible N(N+1)/2 real exchange rate pairs between the N+1 countries and estimate the proportion of the pairs that are stationary, for the aggregates and for each of the 19 commodity groups. This approach is invariant to base country effects, and the proportion that are non-stationary can be consistently estimated even if there is cross-sectional dependence. To deal with small sample problems and residual cross-section dependence, we use a factor-augmented sieve bootstrap approach and present bootstrap pairwise estimates of the proportions that are stationary. The bootstrapped rejection frequencies of 26%-49% based on unit root tests suggest some evidence in favour of PPP in the case of the disaggregate data, as compared to 6%-14% based on aggregate price series. 
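The pairwise mechanics can be sketched with a plain Dickey-Fuller test (no augmentation, constant included, 5% asymptotic critical value of about -2.86); the paper uses a richer battery of tests and a factor-augmented sieve bootstrap, so this is only the skeleton. The simulated price data are hypothetical.

```python
import numpy as np

def df_tstat(q):
    """Dickey-Fuller t-statistic (constant, no lags) for a series q."""
    dq, ql = np.diff(q), q[:-1]
    X = np.column_stack([np.ones_like(ql), ql])
    beta, *_ = np.linalg.lstsq(X, dq, rcond=None)
    e = dq - X @ beta
    s2 = e @ e / (len(dq) - 2)
    return beta[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

def pairwise_rejection_freq(p, crit=-2.86):
    """Fraction of all N(N+1)/2 relative prices q_ij = p_i - p_j rejecting a
    unit root.  p: T x (N+1) matrix of log price levels; no base country is
    singled out, which is the point of the pairwise approach."""
    T, n = p.shape
    rejections = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            rejections += df_tstat(p[:, i] - p[:, j]) < crit
    return rejections / total

rng = np.random.default_rng(8)
common = rng.standard_normal(200).cumsum()          # shared stochastic trend
p = common[:, None] + 0.3 * rng.standard_normal((200, 5))
frac = pairwise_rejection_freq(p)                   # relative prices stationary
```

Because every pairwise difference removes the common trend, the rejection frequency is invariant to which country would have been chosen as base.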
Keywords:  purchasing power parity, panel data, pairwise approach, cross section dependence 
JEL:  C23 F31 F41 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_1704&r=ecm 
By:  Sara LopezPintado; Juan Romo 
Abstract:  We propose robust inference tools for functional data based on the notion of depth for curves. We extend the ideas of trimmed regions, contours and central regions to functions and study their structural properties and asymptotic behavior. Next, we introduce a scale curve to describe dispersion in a sample of functions. The computational burden of these techniques is not heavy and so they are also adequate to analyze highdimensional data. All these inferential methods are applied to different real data sets. 
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws063113&r=ecm 
By:  Sara LopezPintado; Juan Romo 
Abstract:  The statistical analysis of functional data is a growing need in many research areas. We propose a new depth notion for functional observations based on the graphic representation of the curves. Given a collection of functions, it allows one to establish the centrality of a function and provides a natural center-outward ordering of the sample curves. Robust statistics such as the median function or a trimmed mean function can be defined from this depth definition. Its finite-dimensional version provides a new depth for multivariate data that is computationally very fast and turns out to be convenient for studying high-dimensional observations. The natural properties are established for the new depth and the uniform consistency of the sample depth is proved. Simulation results show that the trimmed mean behaves better than the mean for contaminated models. Several real data sets are considered to illustrate this new concept of depth. Finally, we use this new depth to generalize the Wilcoxon rank sum test to functions, which allows one to decide whether two groups of curves come from the same population. This functional rank test is applied to girls' and boys' growth curves, concluding that they present different growth patterns. 
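A common depth of this graph-based type is the modified band depth: for each curve, average over all pairs of sample curves the fraction of the domain where the curve lies inside the band the pair spans. The sketch below is one standard formulation and may differ in detail from the paper's exact definition.

```python
import numpy as np

def modified_band_depth(curves):
    """Modified band depth for a sample of curves (rows = curves,
    columns = evaluation points).  Deeper curves lie inside more of the
    bands spanned by pairs of sample curves, giving a center-outward order."""
    n, m = curves.shape
    depth = np.zeros(n)
    for i in range(n):
        inside = 0.0
        for j in range(n):
            for k in range(j + 1, n):
                lo = np.minimum(curves[j], curves[k])   # lower band envelope
                hi = np.maximum(curves[j], curves[k])   # upper band envelope
                inside += ((curves[i] >= lo) & (curves[i] <= hi)).mean()
        depth[i] = inside / (n * (n - 1) / 2)
    return depth

t = np.linspace(0, 1, 50)
curves = np.array([t + c for c in (-2.0, -1.0, 0.0, 1.0, 2.0)])
d = modified_band_depth(curves)                 # middle curve is deepest
```

The deepest curve serves as the sample median function, and trimming by depth gives the trimmed mean the abstract describes.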
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws063012&r=ecm 
By:  Stephane Hess; Denis Bolduc; John Polak 
Abstract:  The area of discrete choice modelling has developed rapidly in recent years. In particular, continuing refinements of the Generalised Extreme Value (GEV) model family have permitted the representation of increasingly complex patterns of substitution, and parallel advances in estimation capability have led to the increased use of model forms requiring simulation in estimation and application. One model form especially, namely the Mixed Multinomial Logit (MMNL) model, is being used ever more widely. Aside from allowing for random variations in tastes across decision-makers in a Random Coefficients Logit (RCL) framework, this model additionally allows for the representation of inter-alternative correlation as well as heteroscedasticity in an Error Components Logit (ECL) framework, enabling the model to approximate any Random Utility model arbitrarily closely. While the various developments discussed above have led to gradual gains in modelling flexibility, little effort has gone into the development of model forms allowing for a representation of heterogeneity across respondents in the correlation structure in place between alternatives. Such correlation heterogeneity is, however, possibly a crucial factor in the variation of choice-making behaviour across decision-makers, given the potential presence of individual-specific terms in the unobserved part of the utility of multiple alternatives. To the authors' knowledge, there has so far been only one application of a model allowing for such heterogeneity, by Bhat (1997). In this Covariance NL model, the logsum parameters themselves are a function of socio-demographic attributes of the decision-makers, such that the correlation heterogeneity is explained with the help of these attributes. While the results by Bhat show the presence of statistically significant levels of covariance heterogeneity, the improvements in terms of model performance are almost negligible. 
While it is possible to interpret this as a lack of covariance heterogeneity in the data, another explanation is possible. It is clearly imaginable that a major part of the covariance heterogeneity cannot be explained in a deterministic fashion, either due to data limitations, or because of the presence of actual random variation, in a situation analogous to the case of random taste heterogeneity that cannot be explained deterministically. In this paper, we propose two different ways of modelling such random variations in the correlation structure across individuals. The first approach is based on the use of an underlying GEV structure, while the second approach consists of an extension of the ECL model. In the former approach, the choice probabilities are given by integration of underlying GEV choice probabilities, such as Nested Logit, over the assumed distribution of the structural parameters. In the most basic specification, the structural parameters are specified as simple random variables, where appropriate choices of statistical distributions and/or mathematical transforms guarantee that the resulting structural parameters fall into the permissible range of values. Several extensions are then discussed in the paper that allow for a mixture of random and deterministic variations in the correlation structure. In an ECL model, correlation across alternatives is introduced with the help of normally distributed error terms with a mean of zero that are shared by alternatives that are closer substitutes for each other, with the extent of correlation being determined by the estimates of the standard deviations of the error components. The extension of this model to a structure allowing for random covariance heterogeneity is again divided into two parts. 
In the first approach, correlation is assumed to vary purely randomly; this is obtained through simple integration over the distribution of the standard deviations of the error terms, with the integration over the distribution of the error components carried out for each specific draw of the standard deviations. The second extension is similar to the one used in the GEV case, with the standard deviations being composed of a deterministic term and a random term, either as a pure deviation, or in the form of random coefficients in the parameterisation of the distribution of the standard deviations. We next show that our Covariance GEV (CGEV) model generalises all existing GEV model structures, while the Covariance ECL (CECL) model can theoretically approximate all RUM models arbitrarily closely. Although this also means that the CECL model can closely replicate the behaviour of the CGEV model, there are some differences between the two models, which can be related to the differences in the underlying error structure of the base models (GEV vs ECL). The CECL model has the advantage of implicitly allowing for heteroscedasticity, although this is also possible with the CGEV model by adding appropriate error components, leading to an EC-CGEV model. In terms of estimation, the CECL model has a runtime advantage for basic nesting structures, when the number of error components, and hence dimensions of integration, is low enough not to counteract the gains made by being based on a more straightforward integrand (MNL vs advanced GEV). However, in more complicated structures this advantage disappears, in a situation that is analogous to the case of Mixed GEV models compared to ECL models. A final disadvantage of the CECL model structure comes in the form of an additional set of identification conditions. The paper presents applications of these model structures to both cross-sectional and panel datasets from the field of travel behaviour analysis. 
The applications illustrate the gains in model performance that can be obtained with our proposed structures when compared to models governed by the assumption of a homogeneous covariance structure. As expected, the gains in performance are more important in the case of data with repeated observations for the same individual, where the notion of individual-specific substitution patterns applies more directly. The applications also confirm the slight differences between the CGEV and CECL models discussed above. The paper concludes with a discussion of how the two structures can be extended to allow for random taste heterogeneity. The resulting models thus allow for random variation in choice behaviour both in the evaluation of measured attributes and in the correlation across alternatives in the unobserved utility terms. This further increases the flexibility of the two model structures and their potential for analysing complex behaviour in transport and other areas of research.
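The core mechanism of the former approach, integrating Nested Logit probabilities over a distribution of the structural parameters, can be sketched by simulation. This is an illustrative reconstruction, not the authors' code: the two-nest setup, the logistic transform keeping each draw in (0, 1), and all parameter values are assumptions.

```python
import math
import random

def nl_probs(V, nests, lams):
    """Nested Logit choice probabilities for utilities V, given a list of
    nests (index lists) and one log-sum parameter per nest."""
    nest_sums = [sum(math.exp(V[j] / lam) for j in nest)
                 for nest, lam in zip(nests, lams)]
    denom = sum(s ** lam for s, lam in zip(nest_sums, lams))
    probs = [0.0] * len(V)
    for nest, lam, s in zip(nests, lams, nest_sums):
        for j in nest:
            probs[j] = math.exp(V[j] / lam) * s ** (lam - 1.0) / denom
    return probs

def cgev_probs(V, nests, mu, sigma, draws=2000, seed=1):
    """CGEV-style probabilities by simulation: average Nested Logit
    probabilities over random draws of the structural parameter, with a
    logistic transform keeping each draw inside the permissible (0, 1)."""
    rng = random.Random(seed)
    acc = [0.0] * len(V)
    for _ in range(draws):
        lam = 1.0 / (1.0 + math.exp(-(mu + sigma * rng.gauss(0.0, 1.0))))
        for j, p in enumerate(nl_probs(V, nests, [lam] * len(nests))):
            acc[j] += p
    return [a / draws for a in acc]
```

For example, `cgev_probs([0.5, 0.2, 0.0, -0.3], [[0, 1], [2, 3]], mu=1.0, sigma=0.5)` averages two-nest Nested Logit probabilities over random log-sum parameters; with sigma = 0 it collapses back to a fixed-parameter Nested Logit.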
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa05p375&r=ecm 
By:  Martin Biewen (University of Frankfurt) 
Abstract:  We derive the sampling variances of generalized entropy and Atkinson indices when estimated from complex survey data, and we show how they can be calculated straightforwardly by using widely available software. We also show that, when the same approach is used to derive variance formulae for the i.i.d. case, it leads to estimators that are simpler than those proposed before. Both cases are illustrated with a comparison of income inequality in Britain and Germany. 
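For the i.i.d. case, a delta-method sampling variance for a generalized entropy index can be sketched as follows. This is a generic illustration based on the two-moment form GE(α) = (E[yᵅ]/E[y]ᵅ − 1)/(α² − α), with α = 2 as an assumed default; it is not the paper's own formulae and ignores the complex-survey design the paper addresses.

```python
import math

def ge_index_and_se(y, alpha=2.0):
    """Point estimate and i.i.d. delta-method standard error of the
    generalized entropy index GE(alpha) = (m_a / m_1**alpha - 1) / (alpha**2 - alpha),
    where m_1 and m_a are the first and alpha-th sample moments of income."""
    n = len(y)
    m1 = sum(y) / n
    ma = sum(v ** alpha for v in y) / n
    c = alpha * alpha - alpha
    ge = (ma / m1 ** alpha - 1.0) / c
    # gradient of GE with respect to (m1, ma)
    g1 = -alpha * ma / (c * m1 ** (alpha + 1.0))
    g2 = 1.0 / (c * m1 ** alpha)
    # sample covariance matrix of (y, y**alpha)
    s11 = sum((v - m1) ** 2 for v in y) / n
    saa = sum((v ** alpha - ma) ** 2 for v in y) / n
    s1a = sum((v - m1) * (v ** alpha - ma) for v in y) / n
    var = (g1 * g1 * s11 + 2.0 * g1 * g2 * s1a + g2 * g2 * saa) / n
    return ge, math.sqrt(var)
```

With α = 2 the index equals half the squared coefficient of variation, which gives a quick sanity check on the point estimate.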
Date:  2006–05–24 
URL:  http://d.repec.org/n?u=RePEc:boc:dsug06:04&r=ecm 
By:  Paul J. Devereux (UCLA); Gautam Tripathi (University of Connecticut) 
Abstract:  Economists and other social scientists often face situations where they have access to two datasets, one of which suffers from censoring or truncation. If the censored sample is much bigger than the uncensored sample, it is common for researchers to use the censored sample alone and attempt to deal with the problem of partial observation in some manner. Alternatively, they simply use only the uncensored sample and ignore the censored one so as to avoid biases. It is rarely the case that researchers use both datasets together, mainly because they lack guidance about how to combine them. In this paper, we develop a tractable semiparametric framework for combining the censored and uncensored datasets so that the resulting estimators are consistent, asymptotically normal, and use all information optimally. When the censored sample, which we refer to as the master sample, is much bigger than the uncensored sample (which we call the refreshment sample), the latter can be thought of as providing identification where it is otherwise absent. In contrast, when the refreshment sample is large and could typically be used alone, our methodology can be interpreted as using information from the censored sample to increase efficiency. To illustrate our results in an empirical setting, we show how to estimate the effect of changes in compulsory schooling laws on age at first marriage, a variable that is censored for younger individuals. We also demonstrate how refreshment samples for this application can be created by matching cohort information across census datasets.
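The efficiency motivation for pooling two samples can be seen in a drastically simplified setting: two independent unbiased estimates of the same quantity, combined with inverse-variance weights. This toy example is ours; it does not reproduce the paper's semiparametric GMM/empirical-likelihood machinery, only the intuition that a second sample shrinks the variance.

```python
def combine(est1, var1, est2, var2):
    """Minimum-variance linear combination of two independent unbiased
    estimates of the same parameter, using inverse-variance weights."""
    w = var2 / (var1 + var2)            # weight on est1
    est = w * est1 + (1.0 - w) * est2
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return est, var
```

The combined variance is always below the smaller of the two input variances, which mirrors the paper's point that even a heavily censored sample carries usable information.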
Keywords:  Censoring, Empirical Likelihood, GMM, Refreshment samples, Truncation 
JEL:  C14 C24 C34 C51 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:uct:uconnp:200510&r=ecm 
By:  Marie Diron (Brevan Howard Asset Management LLP, London, SW1Y 6XA, United Kingdom.) 
Abstract:  Economic policy makers, international organisations and private-sector forecasters commonly use short-term forecasts of real GDP growth based on monthly indicators, such as industrial production, retail sales and confidence surveys. An assessment of the reliability of such tools and of the sources of potential forecast errors is essential. While many studies have evaluated the size of forecast errors related to model specifications and unavailability of data in real time, few have provided a complete assessment of forecast errors, which should notably take into account the impact of data revisions. This paper proposes to bridge this gap. Using four years of data vintages for euro area conjunctural indicators, the paper decomposes forecast errors into four elements (model specification, erroneous extrapolations of the monthly indicators, revisions to the monthly indicators and revisions to the GDP data series) and assesses their relative sizes. The results show that gains in forecast accuracy achieved by using monthly data on actual activity rather than surveys or financial indicators are offset by the fact that the former set of monthly data is harder to forecast and less timely than the latter set. While the results presented in the paper remain tentative due to limited data availability, they provide a benchmark which future research may build on.
Keywords:  Forecasting, conjunctural analysis, bridge equations, real-time forecasting, vintage data. 
JEL:  C22 C53 E17 E37 E66 
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20060622&r=ecm 
By:  Cees Diks (CeNDEF, Universiteit van Amsterdam); Florian Wagener (CeNDEF, Universiteit van Amsterdam) 
Abstract:  This article presents a bifurcation theory of smooth stochastic dynamical systems that are governed by everywhere positive transition densities. The local dependence structure of the unique strictly stationary evolution of such a system can be expressed by the ratio of joint and marginal probability densities; this 'dependence ratio' is a geometric invariant of the system. By introducing a weak equivalence notion for these dependence ratios, we arrive at a bifurcation theory for which, in the compact case, the set of stable (non-bifurcating) systems is open and dense. The theory is illustrated with some simple examples. 
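As a concrete illustration of such a dependence ratio, consider a stationary Gaussian AR(1) process, for which the joint and marginal densities are available in closed form. The AR(1) choice and all parameter values below are our own assumptions for illustration, not an example taken from the article.

```python
import math

def npdf(z, mu, sd):
    """Normal probability density function."""
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def dependence_ratio(x, y, rho, sigma=1.0):
    """Dependence ratio p(x_t = x, x_{t+1} = y) / (p(x) p(y)) for the
    stationary AR(1) process x_{t+1} = rho * x_t + eps, eps ~ N(0, sigma^2)."""
    s = sigma / math.sqrt(1.0 - rho * rho)   # stationary standard deviation
    joint = npdf(x, 0.0, s) * npdf(y, rho * x, sigma)
    return joint / (npdf(x, 0.0, s) * npdf(y, 0.0, s))
```

For rho = 0 the ratio is identically 1 (independence); for rho > 0 it exceeds 1 where consecutive observations have the same sign, reflecting positive serial dependence.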
Keywords:  stochastic bifurcation theory 
JEL:  C14 C22 C32 
Date:  2006–05–09 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20060043&r=ecm 
By:  Knoppik, Christoph 
Abstract:  A new econometric approach to the analysis of downward nominal wage rigidity in micro data is proposed, the kernel-location approach. It combines kernel-density estimation and the principle of joint variation of location and shape of the distribution of per cent annual nominal wage changes. The approach provides partial estimates of the counterfactual and factual distributions, of the rigidity function and of the degree of downward nominal wage rigidity. It avoids problematic assumptions of other semi- or non-parametric approaches to downward nominal wage rigidity in micro data and allows a discussion of the type of downward nominal wage rigidity encountered in the data. 
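The kernel-density building block of the approach can be sketched as a plain Gaussian KDE of per cent wage changes; under a pure location shift delta, the shifted density is simply x → f(x − delta). The bandwidth and the wage-change data below are placeholders, and the actual kernel-location estimator in the paper goes well beyond this sketch.

```python
import math

def gaussian_kde(data, h):
    """Return a Gaussian kernel density estimate with bandwidth h."""
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def f(x):
        return c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)
    return f

# per cent annual nominal wage changes (made-up data)
wage_changes = [-1.0, 0.0, 0.0, 1.5, 2.0, 2.5, 3.0, 4.0]
f = gaussian_kde(wage_changes, h=1.0)
# density of a location-shifted counterfactual distribution: x -> f(x - delta)
```

Joint variation of location and shape is then assessed by comparing such shifted densities against the factual distribution of wage changes.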
Keywords:  Downward Nominal Wage Rigidity, Kernel-Location Approach, Micro Data 
JEL:  E24 J30 
Date:  2006–05–24 
URL:  http://d.repec.org/n?u=RePEc:bay:rdwiwi:662&r=ecm 
By:  Steven B. Caudill; Peter A. Groothuis; John C. Whitehead 
Abstract:  The most persistently troubling empirical result in the contingent valuation method literature is the tendency for hypothetical willingness to pay to overestimate real willingness to pay. We suggest a new approach to test and correct for hypothetical bias using a latent choice multinomial logit (LCMNL) model. To develop this model, we extend Dempster, Laird, and Rubin's (1977) work on the EM algorithm to the estimation of a multinomial logit model with missing information on categorical membership. Using data on both the quality of water in the Catawba River in North Carolina and the preservation of Saginaw wetlands in Michigan, we find two types of "yes" responders in both data sets. We suggest that one group consists of yea-sayers who suffer from hypothetical bias and answer yes to the hypothetical question but would not pay the bid amount if it were real. The second group does not suffer from hypothetical bias and would pay the bid amount if it were real. 
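The EM idea behind such latent-class estimation can be illustrated with a deliberately simplified "yea-sayer" mixture: with probability pi a respondent always answers yes, and otherwise the yes probability follows a logit in the bid amount. This stylized two-class model, the Newton M-step, and all data-generating values are our assumptions; the paper's LCMNL model is considerably richer.

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def em_yeasayer(bids, ys, iters=50):
    """EM for a two-class model with missing class membership:
    class 1 ('yea-sayer', probability pi) always answers yes;
    class 0 answers yes with probability sigmoid(a + b * bid)."""
    pi, a, b = 0.2, 0.0, 0.0
    for _ in range(iters):
        # E-step: posterior yea-sayer probability (zero for any 'no' answer)
        w = [pi / (pi + (1.0 - pi) * sigmoid(a + b * x)) if y == 1 else 0.0
             for x, y in zip(bids, ys)]
        # M-step: update the mixing weight, then take Newton steps
        # on the weighted logit log-likelihood for (a, b)
        pi = sum(w) / len(w)
        for _ in range(5):
            g0 = g1 = h00 = h01 = h11 = 0.0
            for x, y, wi in zip(bids, ys, w):
                p = sigmoid(a + b * x)
                r = (1.0 - wi) * (y - p)
                g0 += r
                g1 += r * x
                q = (1.0 - wi) * p * (1.0 - p)
                h00 += q
                h01 += q * x
                h11 += q * x * x
            det = h00 * h11 - h01 * h01
            if det < 1e-12:
                break
            a += (h11 * g0 - h01 * g1) / det
            b += (h00 * g1 - h01 * g0) / det
    return pi, a, b

# synthetic responses: true pi = 0.3, a = 1.0, b = -0.8 (all made up)
rng = random.Random(0)
bids = [rng.choice([1.0, 2.0, 3.0, 4.0, 5.0]) for _ in range(2000)]
ys = [1 if rng.random() < 0.3 or rng.random() < sigmoid(1.0 - 0.8 * x) else 0
      for x in bids]
pi_hat, a_hat, b_hat = em_yeasayer(bids, ys)
```

The E-step assigns each yes response a posterior probability of coming from a yea-sayer; the M-step refits the mixing weight and the bid-response logit with those posteriors as weights, mirroring the missing-category-membership logic of the LCMNL estimator.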
JEL:  C25 P230 Q51 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:apl:wpaper:0609&r=ecm 