
on Econometrics 
By:  Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA) 
Abstract:  This paper considers a mean zero stationary first-order autoregressive (AR) model. It is shown that the least squares estimator and t statistic have Cauchy and standard normal asymptotic distributions, respectively, when the AR parameter rho_n is very near to one in the sense that 1 - rho_n = O(n^{-1}). 
Keywords:  Asymptotics, Least squares, Nearly nonstationary, Stationary initial condition, Unit root 
JEL:  C22 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1607&r=ecm 
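The near-to-unity setting in this abstract is easy to explore by simulation. The sketch below is a minimal illustration, not the paper's derivation; the sample size, the constant in 1 - rho_n = 2/n, and the seed are arbitrary choices. It simulates a stationary AR(1) with a stationary initial condition and computes the least squares estimator and its t-statistic:

```python
import math
import random

def simulate_ar1(n, rho, seed=0):
    """Simulate a mean-zero stationary AR(1) with a stationary initial condition."""
    rng = random.Random(seed)
    y = [rng.gauss(0.0, 1.0 / math.sqrt(1.0 - rho ** 2))]  # y_0 ~ N(0, 1/(1-rho^2))
    for _ in range(n):
        y.append(rho * y[-1] + rng.gauss(0.0, 1.0))
    return y

def ls_estimate_and_tstat(y, rho_true):
    """Least squares estimate of rho and the t-statistic for H0: rho = rho_true."""
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    rho_hat = num / den
    resid = [y[t] - rho_hat * y[t - 1] for t in range(1, len(y))]
    sigma2 = sum(e * e for e in resid) / (len(y) - 1)
    se = math.sqrt(sigma2 / den)
    return rho_hat, (rho_hat - rho_true) / se

n = 5000
rho_n = 1.0 - 2.0 / n        # "very near to one": 1 - rho_n = O(n^{-1})
y = simulate_ar1(n, rho_n, seed=42)
rho_hat, tstat = ls_estimate_and_tstat(y, rho_n)
```

Because the limit of n(rho_hat - rho_n) is Cauchy, occasional replications put rho_hat surprisingly far from rho_n even at large n, while the t-statistic stays approximately standard normal.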
By:  Cizek, Pavel (Tilburg University, Center for Economic Research) 
Abstract:  Binary-choice regression models such as probit and logit are used to describe the effect of explanatory variables on a binary response variable. Typically estimated by the maximum likelihood method, their estimates are very sensitive to deviations from the model, such as heteroscedasticity and data contamination. At the same time, traditional robust (high-breakdown-point) methods such as the maximum trimmed likelihood are not applicable since, by trimming observations, they induce the separation of data and non-identification of parameter estimates. To provide a robust estimation method for binary-choice regression, we consider a maximum symmetrically-trimmed likelihood estimator (MSTLE) and design a parameter-free adaptive procedure for choosing the amount of trimming. The proposed adaptive MSTLE preserves the robust properties of the original MSTLE, significantly improves its finite-sample behavior, and additionally ensures asymptotic efficiency of the estimator under no contamination. The results concerning the trimming identification, robust properties, and asymptotic distribution of the proposed method are accompanied by simulation experiments and an application documenting the finite-sample behavior of existing methods and the proposed one. 
Keywords:  asymptotic efficiency;binary-choice regression;breakdown point;maximum likelihood estimation;robust estimation;trimming 
JEL:  C13 C20 C21 C22 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200712&r=ecm 
By:  Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA) 
Abstract:  This paper considers inference based on a test statistic that has a limit distribution that is discontinuous in a nuisance parameter or the parameter of interest. The paper shows that subsample, b_n < n bootstrap, and standard fixed critical value tests based on such a test statistic often have asymptotic size, defined as the limit of the finite-sample size, that is greater than the nominal level of the tests. We determine precisely the asymptotic size of such tests under a general set of high-level conditions that are relatively easy to verify. The high-level conditions are verified in several examples. Analogous results are established for confidence intervals. The results apply to tests and confidence intervals (i) when a parameter may be near a boundary, (ii) for parameters defined by moment inequalities, (iii) based on super-efficient or shrinkage estimators, (iv) based on post-model selection estimators, (v) in scalar and vector autoregressive models with roots that may be close to unity, (vi) in models with lack of identification at some point(s) in the parameter space, such as models with weak instruments and threshold autoregressive models, (vii) in predictive regression models with nearly-integrated regressors, (viii) for non-differentiable functions of parameters, and (ix) for differentiable functions of parameters that have zero first-order derivative. Examples (i)-(iii) are treated in this paper. Examples (i) and (iv)-(vi) are treated in sequels to this paper, Andrews and Guggenberger (2005a, b). In models with unidentified parameters that are bounded by moment inequalities, i.e., example (ii), certain subsample confidence regions are shown to have asymptotic size equal to their nominal level. In all other examples listed above, some types of subsample procedures do not have asymptotic size equal to their nominal level. 
Keywords:  Asymptotic size, b < n bootstrap, Finite-sample size, Over-rejection, Size correction, Subsample confidence interval, Subsample test 
JEL:  C12 C15 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1605&r=ecm 
By:  Mueller, Ulrich; Petalas, Philippe-Emmanuel 
Abstract:  The paper investigates asymptotically efficient inference in general likelihood models with time varying parameters. Parameter path estimators and tests of parameter constancy are evaluated by their weighted average risk and weighted average power, respectively. The weight function is proportional to the distribution of a Gaussian process, and focuses on local parameter instabilities that cannot be detected with certainty even in the limit. It is shown that asymptotically, the sample information about the parameter path is efficiently summarized by a Gaussian pseudo model. This approximation leads to computationally convenient formulas for efficient path estimators and test statistics, and unifies the theory of stability testing and parameter path estimation. 
Keywords:  Time Varying Parameters; Nonlinear Non-Gaussian Smoothing; Weighted Average Risk; Weighted Average Power; Posterior Approximation; Contiguity 
JEL:  C22 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:2260&r=ecm 
By:  Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA) 
Abstract:  This paper considers the problem of constructing tests and confidence intervals (CIs) that have correct asymptotic size in a broad class of non-regular models. The models considered are non-regular in the sense that standard test statistics have asymptotic distributions that are discontinuous in some parameters. It is shown in Andrews and Guggenberger (2005a) that standard fixed critical value, subsample, and b < n bootstrap methods often have incorrect size in such models. This paper introduces general methods of constructing tests and CIs that have correct size. First, procedures are introduced that are a hybrid of subsample and fixed critical value methods. The resulting hybrid procedures are easy to compute and have correct size asymptotically in many, but not all, cases of interest. Second, the paper introduces size-correction and "plug-in" size-correction methods for fixed critical value, subsample, and hybrid tests. The paper also introduces finite-sample adjustments to the asymptotic results of Andrews and Guggenberger (2005a) for subsample and hybrid methods and employs these adjustments in size-correction. The paper discusses several examples in detail. The examples are: (i) tests when a nuisance parameter may be near a boundary, (ii) CIs in an autoregressive model with a root that may be close to unity, and (iii) tests and CIs based on a post-conservative model selection estimator. 
Keywords:  Asymptotic size, Autoregressive model, b < n bootstrap, Finite-sample size, Hybrid test, Model selection, Over-rejection, Parameter near boundary, Size correction, Subsample confidence interval, Subsample test 
JEL:  C12 C15 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1606&r=ecm 
By:  Alastair R. Hall (Economics, School of Social Sciences, University of Manchester); Denis Pelletier (Department of Economics, North Carolina State University) 
Abstract:  Rivers and Vuong (2002) develop a very general framework for choosing between two competing dynamic models. Within their framework, inference is based on a statistic that compares measures of goodness of fit between the two models. The null hypothesis is that the models have equal measures of goodness of fit; one model is preferred if its goodness of fit is statistically significantly smaller than its competitor. Under the null hypothesis, Rivers and Vuong (2002) show that their test statistic has a standard normal distribution under generic conditions that are argued to allow for a variety of estimation methods including Generalized Method of Moments (GMM). In this paper, we analyze the limiting distribution of Rivers and Vuong's (2002) statistic under the null hypothesis when inference is based on a comparison of GMM minimands evaluated at GMM estimators. It is shown that the limiting behaviour of this statistic depends on whether the models in question are correctly specified, locally misspecified or misspecified. Specifically, it is shown that: (i) if both models are correctly specified or locally misspecified then Rivers and Vuong's (2002) generic conditions are not satisfied, and the limiting distribution of the test statistic is nonstandard under the null; (ii) if both models are misspecified then the generic conditions are satisfied, and so the statistic has a standard normal distribution under the null. In the latter case it is shown that the choice of weighting matrices affects the outcome of the test and thus the ranking of the models. 
Keywords:  Generalized Method of Moments, Non-nested Hypothesis Testing, Model Selection 
JEL:  C10 C32 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:ncs:wpaper:011&r=ecm 
By:  Li, Hong; Mueller, Ulrich 
Abstract:  The paper considers time series GMM models where a subset of the parameters are time varying. The magnitude of the time variation in the unstable parameters is such that efficient tests detect the instability with (possibly high) probability smaller than one, even in the limit. We show that for many forms of the instability and a large class of GMM models, standard GMM inference on the subset of stable parameters, ignoring the partial instability, remains asymptotically valid. 
Keywords:  Structural Breaks; Parameter Stability Test; Contiguity; Euler Condition; New Keynesian Phillips Curve 
JEL:  C32 
Date:  2006–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:2261&r=ecm 
By:  Christoph Hartz (University of Munich); Stefan Mittnik (University of Munich, Center for Financial Studies and ifo); Marc S. Paolella (University of Zurich) 
Abstract:  A resampling method based on the bootstrap and a bias-correction step is developed for improving the Value-at-Risk (VaR) forecasting ability of the normal-GARCH model. Compared to the use of more sophisticated GARCH models, the new method is fast, easy to implement, numerically reliable, and, except for having to choose a window length L for the bias-correction step, fully data driven. The results for several different financial asset returns over a long out-of-sample forecasting period, as well as the use of simulated data, strongly support the use of the new method, and its performance is not sensitive to the choice of L. 
Keywords:  Bootstrap, GARCH, Value-at-Risk 
JEL:  C22 C53 C63 G12 
Date:  2006–11–03 
URL:  http://d.repec.org/n?u=RePEc:cfs:cfswop:wp200623&r=ecm 
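The flavor of a residual-bootstrap VaR forecast can be sketched as follows. This is a simplified stand-in, not the paper's procedure: the GARCH(1,1) parameters are assumed rather than estimated by quasi-maximum likelihood, and the window-length-L bias-correction step is replaced by a plain resampling of standardized residuals:

```python
import math
import random

# Assumed (not estimated) GARCH(1,1) parameters -- in practice these would come
# from fitting the normal-GARCH model to the return series.
OMEGA, ALPHA, BETA = 0.05, 0.08, 0.90

def garch_filter(returns, omega=OMEGA, alpha=ALPHA, beta=BETA):
    """Run the GARCH(1,1) variance recursion; return in-sample conditional
    standard deviations and the one-step-ahead standard deviation."""
    h = omega / (1.0 - alpha - beta)          # start at the unconditional variance
    sds = []
    for r in returns:
        sds.append(math.sqrt(h))
        h = omega + alpha * r * r + beta * h
    return sds, math.sqrt(h)

def bootstrap_var(returns, level=0.01, reps=500, seed=0):
    """One-step-ahead VaR: resample standardized residuals instead of assuming
    they are exactly standard normal."""
    rng = random.Random(seed)
    sds, sd_next = garch_filter(returns)
    z = [r / s for r, s in zip(returns, sds)]          # standardized residuals
    draws = sorted(rng.choice(z) for _ in range(reps))
    q = draws[int(level * reps)]                        # empirical lower quantile
    return -sd_next * q                                 # VaR as a positive number

rng = random.Random(7)
returns = [rng.gauss(0.0, 1.0) for _ in range(750)]    # placeholder return data
var_1pct = bootstrap_var(returns, level=0.01)
```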
By:  Wladimir Raymond; Pierre Mohnen; Franz Palm; Sybrand Schim van der Loeff 
Abstract:  This paper proposes a method to implement maximum likelihood estimation of dynamic panel data type 2 and type 3 tobit models. The likelihood function involves a two-dimensional indefinite integral that is evaluated using “two-step” Gauss-Hermite quadrature. A Monte Carlo study shows that the quadrature works well in finite samples for a number of evaluation points as small as two. Incorrectly ignoring the individual effects, or the dependence between the initial conditions and the individual effects, results in an overestimation of the coefficients of the lagged dependent variables. An application to incremental and radical product innovations by Dutch business firms illustrates the method. 
Keywords:  panel data, maximum likelihood estimator, dynamic models, sample selection 
Date:  2007–03–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2007s06&r=ecm 
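Gauss-Hermite quadrature, the building block of the likelihood evaluation described above, can be illustrated in one dimension. The sketch below is not the paper's "two-step" scheme for the two-dimensional integral; it only shows why as few as two evaluation points can already be accurate:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_expectation(g, n_points=2):
    """Approximate E[g(Z)] for Z ~ N(0,1) with n_points Gauss-Hermite nodes.
    hermgauss targets the weight e^{-x^2}, so substitute x = z / sqrt(2)."""
    nodes, weights = hermgauss(n_points)
    return float(np.sum(weights * g(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi))

m2 = gauss_hermite_expectation(lambda z: z ** 2, n_points=2)
```

An n-point rule is exact for polynomials up to degree 2n - 1, so the two-point rule recovers E[Z^2] = 1 exactly.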
By:  Zsolt Darvas (Department of Mathematical Economics and Economic Analysis, Corvinus University of Budapest) 
Abstract:  Samples with overlapping observations are used for the study of uncovered interest rate parity, the predictability of long-run stock returns, and the credibility of exchange rate target zones. This paper quantifies the biases in parameter estimation and size distortions of hypothesis tests of overlapping linear and polynomial autoregressions, which have been used in target zone applications. We show that both estimation bias and size distortions generally depend on the amount of overlap, the sample size, and the autoregressive root of the data generating process. In particular, the estimates are biased in a way that makes it more likely that the predictions of the Bertola-Svensson model will be supported. Size distortions of various tests also turn out to be substantial even when using a heteroskedasticity and autocorrelation consistent covariance matrix. 
Keywords:  drift-adjustment method, exchange rate target zone, HAC covariance, overlapping observations, polynomial autoregression, size distortions, small sample bias 
JEL:  C22 F31 
Date:  2007–02–27 
URL:  http://d.repec.org/n?u=RePEc:mkg:wpaper:0701&r=ecm 
By:  G. EVERAERT 
Abstract:  A regression including integrated variables yields spurious results if the residuals contain a unit root. Although the obtained estimates are unreliable, this does not automatically imply that there is no long-run relation between the included variables, as the unit root in the residuals may be induced by omitted or unobserved integrated variables. This paper uses an unobserved component model to estimate the partial long-run relation between observed integrated variables. This provides an alternative to standard cointegration analysis. The proposed methodology is illustrated using a Monte Carlo simulation and applied to investigate purchasing-power parity. 
Keywords:  Spurious Regression, Cointegration, Unobserved Component Model, PPP. 
JEL:  C15 C32 
Date:  2007–01 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:07/452&r=ecm 
By:  Devereux, Paul J. 
Abstract:  In many economic applications, observations are naturally categorized into mutually exclusive and exhaustive groups. For example, individuals can be classified into cohorts and workers are employees of a particular firm. Grouping models are widely used in economics; for example, cohort models have been used to study labour supply, wage inequality, consumption, and the intergenerational transfer of human capital. The simplest grouping estimator involves taking the means of all variables for each group and then carrying out a group-level regression by OLS or weighted least squares. This estimator is biased in finite samples. I show that the standard errors-in-variables estimator (EVE), designed to correct for small sample bias, is exactly equivalent to the Jackknife Instrumental Variables Estimator (JIVE). Also, EVE is closely related to the k-class of instrumental variables estimators. I then use results from the instrumental variables literature to develop an estimator (UEVE) with better finite-sample properties than existing errors-in-variables estimators. The theoretical results are demonstrated using Monte Carlo experiments. Finally, I use the estimators to implement a model of intertemporal male labour supply using micro data from the United States Census. There are sizeable differences in the wage elasticity across estimators, showing the practical importance of the theoretical issues discussed in this paper, even in circumstances where the sample size is quite large. 
Keywords:  errors-in-variables; grouped data 
JEL:  C21 J22 
Date:  2007–03 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:6167&r=ecm 
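The "simplest grouping estimator" described in the abstract can be sketched in a few lines (single-regressor case; the example data are made up). The finite-sample bias the paper addresses arises when the group means of the regressor are themselves noisy, which this noiseless illustration sidesteps:

```python
from statistics import mean

def grouped_wls(groups):
    """Simplest grouping estimator: replace each group by its (x, y) means and
    run weighted least squares with group sizes as weights."""
    stats = [(len(g), mean(x for x, _ in g), mean(y for _, y in g)) for g in groups]
    n = sum(w for w, _, _ in stats)
    xbar = sum(w * mx for w, mx, _ in stats) / n
    ybar = sum(w * my for w, _, my in stats) / n
    sxy = sum(w * (mx - xbar) * (my - ybar) for w, mx, my in stats)
    sxx = sum(w * (mx - xbar) ** 2 for w, mx, _ in stats)
    slope = sxy / sxx
    return slope, ybar - slope * xbar

# Two groups whose observations lie exactly on y = 2x + 1.
slope, intercept = grouped_wls([[(1, 3), (2, 5)], [(4, 9), (6, 13), (5, 11)]])
```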
By:  Kleijnen, Jack P.C. (Tilburg University, Center for Economic Research) 
Abstract:  This article reviews Kriging (also called spatial correlation modeling). It presents the basic Kriging assumptions and formulas, contrasting Kriging with classic linear regression metamodels. Furthermore, it extends Kriging to random simulation, and discusses bootstrapping to estimate the variance of the Kriging predictor. Besides classic one-shot statistical designs such as Latin Hypercube Sampling, it reviews sequentialized and customized designs. It ends with topics for future research. 
Keywords:  kriging;metamodel;response surface;interpolation;design 
JEL:  C0 C1 C9 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200713&r=ecm 
By:  Michiel D. de Pooter (Erasmus Universiteit Rotterdam); Francesco Ravazzolo (Erasmus Universiteit Rotterdam); Dick van Dijk (Erasmus Universiteit Rotterdam) 
Abstract:  We forecast the term structure of U.S. Treasury zerocoupon bond yields by analyzing a range of models that have been used in the literature. We assess the relevance of parameter uncertainty by examining the added value of using Bayesian inference compared to frequentist estimation techniques, and model uncertainty by combining forecasts from individual models. Following current literature we also investigate the benefits of incorporating macroeconomic information in yield curve models. Our results show that adding macroeconomic factors is very beneficial for improving the outofsample forecasting performance of individual models. Despite this, the predictive accuracy of models varies over time considerably, irrespective of using the Bayesian or frequentist approach. We show that mitigating model uncertainty by combining forecasts leads to substantial gains in forecasting performance, especially when applying Bayesian model averaging. 
Keywords:  Term structure of interest rates; Nelson-Siegel model; Affine term structure model; forecast combination; Bayesian analysis 
JEL:  C5 C11 C32 E43 E47 F47 
Date:  2007–03–09 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20070028&r=ecm 
By:  Taoufik Bouezmarni; Jeroen V.K. Rombouts (IEA, HEC Montréal) 
Abstract:  In this paper we consider nonparametric estimation of the density and hazard rate function for right-censored alpha-mixing survival time data using kernel smoothing techniques. Since survival times are positive with potentially a high concentration at zero, one has to take into account bias problems when the functions are estimated in the boundary region. In this paper, gamma kernel estimators of the density and the hazard rate function are proposed. The estimators use adaptive weights depending on the point at which we estimate the function, and they are robust to the boundary bias problem. For both estimators, the mean squared error properties, including the rate of convergence, the almost sure consistency, and the asymptotic normality are investigated. The results of a simulation demonstrate the excellent performance of the proposed estimators. 
Keywords:  Gamma kernel, Kaplan-Meier, density and hazard function, mean integrated squared error, consistency, asymptotic normality. 
Date:  2006–12 
URL:  http://d.repec.org/n?u=RePEc:iea:carech:0616&r=ecm 
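A gamma kernel density estimator in the spirit of this abstract can be sketched as below. The shape parameter x/b + 1 follows the simple Chen-type variant and is an assumption about the specific kernel; the paper's censored-data version and its hazard rate analogue are more involved:

```python
import math
import random

def gamma_kernel_density(x, data, b):
    """Gamma kernel density estimate at x >= 0: each observation u contributes
    the gamma(shape = x/b + 1, scale = b) density evaluated at u, so the kernel
    adapts to the estimation point and puts no mass on the negative half-line."""
    rho = x / b + 1.0
    log_norm = rho * math.log(b) + math.lgamma(rho)
    total = sum(math.exp((rho - 1.0) * math.log(u) - u / b - log_norm)
                for u in data if u > 0.0)
    return total / len(data)

rng = random.Random(1)
sample = [rng.expovariate(1.0) for _ in range(2000)]   # true density e^{-x}
est = gamma_kernel_density(1.0, sample, b=0.2)         # should be near e^{-1}
```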
By:  McCauley, Joseph L.; Gunaratne, Gemunu H.; Bassler, Kevin E. 
Abstract:  There is much confusion in the literature over Hurst exponents. Recently, we took a step in the direction of eliminating some of the confusion. One purpose of this paper is to illustrate the difference between fBm on the one hand and Gaussian Markov processes where H != 1/2 on the other. The difference lies in the increments, which are stationary and correlated in one case and nonstationary and uncorrelated in the other. The two- and one-point densities of fBm are constructed explicitly. The two-point density doesn't scale. The one-point density for a semi-infinite time interval is identical to that for a scaling Gaussian Markov process with H != 1/2 over a finite time interval. We conclude that both Hurst exponents and one-point densities are inadequate for deducing the underlying dynamics from empirical data. We apply these conclusions in the end to make a focused statement about "nonlinear diffusion". 
Keywords:  Markov processes; fractional Brownian motion; scaling; Hurst exponents; stationary and nonstationary increments; autocorrelations 
JEL:  G00 C1 
Date:  2006–09–30 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:2154&r=ecm 
By:  Caiado, Jorge; Crato, Nuno; Peña, Daniel 
Abstract:  We propose a periodogram-based metric for classification and clustering of time series with different sample sizes. In such cases, the Euclidean distance between the periodogram ordinates cannot be used directly. One possible way to deal with this problem is to linearly interpolate one of the periodograms in order to estimate ordinates at the same frequencies. 
Keywords:  Classification; Cluster analysis; Interpolation; Periodogram; Time series. 
JEL:  C32 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:2075&r=ecm 
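The interpolation idea in the abstract can be sketched directly. This is a minimal illustration; mapping the shorter periodogram onto the longer one's relative frequency grid is one natural choice, not necessarily the authors' exact metric:

```python
import cmath
import math

def periodogram(x):
    """Raw periodogram ordinates at the Fourier frequencies 2*pi*k/n, k = 1..n//2."""
    n = len(x)
    mu = sum(x) / n
    xm = [v - mu for v in x]
    ords = []
    for k in range(1, n // 2 + 1):
        d = sum(xm[t] * cmath.exp(-2.0j * math.pi * k * t / n) for t in range(n))
        ords.append(abs(d) ** 2 / n)
    return ords

def interpolated_distance(x, y):
    """Euclidean distance between periodograms of two series of (possibly)
    different lengths: the shorter periodogram is linearly interpolated onto
    the relative frequency grid of the longer one."""
    px, py = periodogram(x), periodogram(y)
    if len(px) > len(py):
        px, py = py, px                              # make px the shorter one
    interp = []
    for k in range(len(py)):
        pos = k * (len(px) - 1) / (len(py) - 1)      # position on px's grid
        i = min(int(pos), len(px) - 2)
        w = pos - i
        interp.append((1.0 - w) * px[i] + w * px[i + 1])
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(interp, py)))

x = [math.sin(0.4 * t) + 0.2 * ((-1) ** t) for t in range(32)]
d_same = interpolated_distance(x, x)        # identical series: distance 0
d_short = interpolated_distance(x, x[:20])  # different lengths: well defined
```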
By:  Bassler, Kevin E.; Gunaratne, Gemunu H.; McCauley, Joseph L. 
Abstract:  We show by explicit closed-form calculations that a Hurst exponent H != 1/2 does not necessarily imply long-time correlations like those found in fractional Brownian motion. We construct a large set of scaling solutions of Fokker-Planck partial differential equations where H != 1/2. Thus Markov processes, which by construction have no long-time correlations, can have H != 1/2. If a Markov process scales with Hurst exponent H != 1/2, then it simply means that the process has nonstationary increments. For the scaling solutions, we show how to reduce the calculation of the probability density to a single integration once the diffusion coefficient D(x,t) is specified. As an example, we generate a class of Student-t-like densities from the class of quadratic diffusion coefficients. Notably, the Tsallis density is one member of that large class. The Tsallis density is usually thought to result from a nonlinear diffusion equation, but instead we explicitly show that it follows from a Markov process generated by a linear Fokker-Planck equation, and therefore from a corresponding Langevin equation. Having a Tsallis density with H != 1/2 therefore does not imply dynamics with correlated signals, e.g., like those of fractional Brownian motion. A short review of the requirements for fractional Brownian motion is given for clarity, and we explain why the usual simple argument that H != 1/2 implies correlations fails for Markov processes with scaling solutions. Finally, we discuss the question of scaling of the full Green function g(x,t;x',t') of the Fokker-Planck PDE. 
Keywords:  Hurst exponent; Markov process; scaling; stochastic calculus; autocorrelations; fractional Brownian motion; Tsallis model; nonlinear diffusion 
JEL:  G1 G10 G14 
Date:  2005–12–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:2152&r=ecm 
By:  A. PRINZIE; D. VAN DEN POEL 
Abstract:  The acquisition process of consumer durables is a ‘sequence’ of purchase events. Priority-pattern research exploits this ‘sequential order’ to describe a prototypical acquisition order for durables. This paper adds a predictive perspective to increase managerial relevance. Besides order information, the acquisition sequence also reveals the precise timing between purchase events (‘sequential duration’), as examined in the literature on durable replacement and time-to-first acquisition. This paper bridges the gap between priority-pattern research and research on duration between durable acquisitions to improve the prediction of the product group from which the customer might acquire his next durable, i.e. a Next-Product-to-Buy (NPTB) model. We evaluate four multinomial-choice models incorporating: 1) general covariates, 2) general covariates and sequential order, 3) general covariates and sequential duration, and 4) general covariates, sequential order, and duration. The results favor the model including general covariates and duration information (3). The high predictive value of sequential-duration information emphasizes the predictive power of duration as compared to order information. 
Keywords:  cross-sell, sequence analysis, choice modeling, durable goods, analytical CRM 
Date:  2007–01 
URL:  http://d.repec.org/n?u=RePEc:rug:rugwps:07/442&r=ecm 
By:  Caiado, Jorge; Crato, Nuno 
Abstract:  In this paper, we introduce a volatility-based method for clustering analysis of financial time series. Using generalized autoregressive conditional heteroskedasticity (GARCH) models, we estimate the distances between the stock return volatilities. The proposed method uses the volatility behavior of the time series and solves the problem of different lengths. As an illustrative example, we investigate the similarities among major international stock markets using daily return series with different sample sizes from 1966 to 2006. From the cluster analysis, most European markets, the United States, and Canada appear close together, while most Asian/Pacific markets and the South/Middle American markets appear in a distinct cluster. After the terrorist attack of September 11, 2001, the European stock markets have become more homogeneous, and the North American markets, Japan, and Australia seem to have come closer. 
Keywords:  Cluster analysis; GARCH; International stock markets; Volatility. 
JEL:  C32 G15 
Date:  2007 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:2074&r=ecm 