
NEP: New Economics Papers on Econometrics 
By:  Donald W.K. Andrews (Cowles Foundation, Yale University); James H. Stock (Dept. of Economics, Harvard University) 
Abstract:  This paper reviews recent developments in methods for dealing with weak instruments (IVs) in IV regression models. The focus is more on tests (and confidence intervals derived from tests) than on estimators. The paper also presents new testing results under "many weak IV asymptotics," which are relevant when the number of IVs is large and the coefficients on the IVs are relatively small. Asymptotic power envelopes for invariant tests are established. Power comparisons of the conditional likelihood ratio (CLR), Anderson-Rubin, and Lagrange multiplier tests are made. Numerical results show that the CLR test is on the asymptotic power envelope, regardless of the magnitude of IV strength relative to the number of IVs. 
Keywords:  Conditional likelihood ratio test, instrumental variables, many instrumental variables, power envelope, weak instruments 
JEL:  C12 C30 
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1530&r=ecm 
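The Anderson-Rubin test compared above has a particularly simple form: under H0: beta = beta0, regress y - x*beta0 on the instruments and F-test the joint significance of the instrument coefficients. A minimal numpy sketch; the simulated design and all parameter values are illustrative, not from the paper:

```python
import numpy as np

def anderson_rubin_stat(y, x, Z, beta0):
    """Anderson-Rubin statistic for H0: beta = beta0 in y = x*beta + u
    with instruments Z (n x k). Under H0 and Gaussian errors it is
    F(k, n-k)-distributed regardless of instrument strength."""
    n, k = Z.shape
    e = y - x * beta0                                # residual under the null
    fitted = Z @ np.linalg.lstsq(Z, e, rcond=None)[0]  # projection of e on Z
    ess = fitted @ fitted
    rss = e @ e - ess
    return (ess / k) / (rss / (n - k))

rng = np.random.default_rng(0)
n, k = 500, 4
Z = rng.standard_normal((n, k))
v = rng.standard_normal(n)
u = 0.5 * v + rng.standard_normal(n)   # endogenous error
x = Z @ np.full(k, 0.1) + v            # deliberately weak first stage
y = 1.0 * x + u
ar_true = anderson_rubin_stat(y, x, Z, beta0=1.0)
```

Because the statistic's null distribution does not depend on the first-stage coefficients, the test stays valid however weak the instruments are.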
By:  Paulo M. M. Rodrigues 
Abstract:  In this paper, we analyse the properties of recursive trend-adjusted unit root tests. We show that OLS-based recursive trend adjustment can produce unit root tests which are not invariant when the data are generated from a random walk with drift, and we investigate whether the power performance previously observed in the literature is maintained under invariant versions of the tests. A finite sample evaluation of the size and power of the invariant procedures is presented. 
Keywords:  Recursive Trend Adjustment, Unit root tests, Invariance 
JEL:  C12 C22 
Date:  2004 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2004/31&r=ecm 
By:  Steve Bond (Institute for Fiscal Studies and Nuffield College, Oxford); Céline Nauges; Frank Windmeijer (Institute for Fiscal Studies) 
Abstract:  We consider a number of unit root tests for micro panels where the number of individuals is typically large, but the number of time periods is often very small. As we discuss, the presence of a unit root is closely related to the identification of parameters of interest in this context. Calculations of asymptotic local power and Monte Carlo evidence indicate that two simple t-tests based on ordinary least squares estimators perform particularly well. 
Keywords:  Generalised Method of Moments, identification, unit root tests 
JEL:  C12 C23 
Date:  2005–07 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:07/05&r=ecm 
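A pooled OLS t-test of the kind the authors find to perform well can be sketched as follows; this is a generic large-N, small-T version without fixed effects or corrections, with an illustrative simulated design, not the paper's exact procedure:

```python
import numpy as np

def pooled_unit_root_t(y):
    """Pooled OLS t-statistic for rho = 0 in  dy_it = rho * y_{i,t-1} + e_it.
    y is an (N, T) panel; a large negative t rejects the unit root."""
    dy = y[:, 1:] - y[:, :-1]
    x = y[:, :-1].ravel()
    d = dy.ravel()
    rho = (x @ d) / (x @ x)
    resid = d - rho * x
    s2 = resid @ resid / (len(d) - 1)
    return rho / np.sqrt(s2 / (x @ x))

rng = np.random.default_rng(1)
N, T = 200, 5                       # many individuals, few periods
y_rw = np.cumsum(rng.standard_normal((N, T)), axis=1)  # unit-root panel
y_st = np.empty((N, T))             # stationary AR(1) panel, rho = 0.5
y_st[:, 0] = rng.standard_normal(N)
for t in range(1, T):
    y_st[:, t] = 0.5 * y_st[:, t - 1] + rng.standard_normal(N)
t_rw = pooled_unit_root_t(y_rw)     # should be moderate (null holds)
t_st = pooled_unit_root_t(y_st)     # should be strongly negative
```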
By:  Paulo M.M. Rodrigues; A.M. Robert Taylor 
Abstract:  In this paper we derive, under the assumption of Gaussian errors with known error covariance matrix, asymptotic local power bounds for seasonal unit root tests for both known and unknown deterministic scenarios and for an arbitrary seasonal aspect. We demonstrate that the optimal test of a unit root at a given spectral frequency behaves asymptotically independently of whether unit roots exist at other frequencies or not. We also develop modified versions of the optimal tests which attain the asymptotic Gaussian power bounds under much weaker conditions. We further propose near-efficient regression-based seasonal unit root tests using pseudo-GLS detrending and show that these have limiting null distributions and asymptotic local power functions of a known form. Monte Carlo experiments indicate that the regression-based tests perform well in finite samples. 
Keywords:  Point optimal invariant (seasonal) unit root tests; asymptotic local power bounds; near seasonal integration 
JEL:  C22 
Date:  2004 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2004/29&r=ecm 
By:  Jaroslava Hlouskova; Martin Wagner 
Abstract:  This paper presents results concerning the size and power of first generation panel unit root and stationarity tests obtained from a large scale simulation study, with in total about 290 million test statistics computed. The tests developed in the following papers are included: Levin, Lin and Chu (2002), Harris and Tzavalis (1999), Breitung (2000), Im, Pesaran and Shin (1997 and 2003), Maddala and Wu (1999), Hadri (2000), and Hadri and Larsson (2005). Our simulation setup is designed to address, inter alia, the following issues. First, we assess the performance as a function of the time and the cross-section dimension. Second, we analyze the impact of positive MA roots on the test performance. Third, we investigate the power of the panel unit root tests (and the size of the stationarity tests) for a variety of first order autoregressive coefficients. Fourth, we consider both of the usual specifications of deterministic variables in the unit root literature. 
Keywords:  Panel Unit Root Test, Panel Stationarity Test, Size, Power, Simulation Study 
JEL:  C12 C15 C23 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2005/05&r=ecm 
By:  Dietmar Bauer; Martin Wagner 
Abstract:  We investigate autoregressive approximations of multiple frequency I(1) processes, of which I(1) processes are a special class. The underlying data generating process is assumed to allow for an infinite order autoregressive representation where the coefficients of the Wold representation of the suitably differenced process satisfy mild summability constraints. VARMA processes are an important special case of this process class. The main results link the approximation properties of autoregressions for the nonstationary multiple frequency I(1) process to the corresponding properties of a related stationary process, which are well known (cf. Section 7.4 of Hannan and Deistler, 1988). First, error bounds on the estimators of the autoregressive coefficients are derived that hold uniformly in the lag length. Second, the asymptotic properties of order estimators obtained with information criteria are shown to be closely related to those for the associated stationary process obtained by suitable differencing. For multiple frequency I(1) VARMA processes we establish divergence of order estimators based on the BIC criterion at a rate proportional to the logarithm of the sample size. 
Keywords:  Unit Roots, Multiple Frequency I(1) Process, Non-rational Transfer Function, Cointegration, VARMA Process, Information Criteria 
JEL:  C13 C32 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2005/09&r=ecm 
By:  Rodney W. Strachan 
Abstract:  This paper generalises the cointegrating model of Phillips (1991) to allow for I(0), I(1) and I(2) processes. The model has a simple form that permits a wider range of I(2) processes than are usually considered, including a more flexible form of polynomial cointegration. Further, the specification relaxes restrictions identified by Phillips (1991) on the I(1) and I(2) cointegrating vectors and restrictions on how the stochastic trends enter the system. To date there has been little work on Bayesian I(2) analysis and so this paper attempts to address this gap in the literature. A method of Bayesian inference in potentially I(2) processes is presented with application to Australian money demand using a Jeffreys prior and a shrinkage prior. 
Date:  2005–07 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:05/14&r=ecm 
By:  Gary Koop; Roberto León-González; Rodney W. Strachan 
Abstract:  A message coming out of the recent Bayesian literature on cointegration is that it is important to elicit a prior on the space spanned by the cointegrating vectors (as opposed to a particular identified choice for these vectors). In this note, we discuss a sensible way of eliciting such a prior. Furthermore, we develop a collapsed Gibbs sampling algorithm to carry out efficient posterior simulation in cointegration models. The computational advantages of our algorithm are most pronounced with our model, since the form of our prior precludes simple posterior simulation using conventional methods (e.g. a Gibbs sampler involves nonstandard posterior conditionals). However, the theory we draw upon implies our algorithm will be more efficient even than the posterior simulation methods which are used with identified versions of cointegration models. 
Date:  2005–07 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:05/13&r=ecm 
By:  Helmut Luetkepohl 
Abstract:  Vector autoregressive moving-average (VARMA) processes are suitable models for producing linear forecasts of sets of time series variables. They provide parsimonious representations of linear data generation processes (DGPs). The setup for these processes in the presence of cointegrated variables is considered. Moreover, a unique or identified parameterization based on the echelon form is presented. Model specification, estimation, model checking and forecasting are discussed. Special attention is paid to forecasting issues related to contemporaneously and temporally aggregated processes. 
JEL:  C32 
Date:  2004 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2004/25&r=ecm 
By:  Helmut Luetkepohl 
Abstract:  Vector autoregressive (VAR) models are capable of capturing the dynamic structure of many time series variables. Impulse response functions are typically used to investigate the relationships between the variables included in such models. In this context, the relevant impulses, innovations or shocks to be traced out in an impulse response analysis have to be specified by imposing appropriate identifying restrictions. Taking into account the cointegration structure of the variables offers interesting possibilities for imposing identifying restrictions. Therefore, VAR models which explicitly take into account the cointegration structure of the variables, so-called vector error correction models, are considered. Specification, estimation and validation of reduced form vector error correction models are briefly outlined, and imposing structural short- and long-run restrictions within these models is discussed. 
Keywords:  Cointegration, vector autoregressive process, vector error correction model 
JEL:  C32 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2005/02&r=ecm 
By:  Jonathan B. Hill (Department of Economics, Florida International University) 
Abstract:  This paper develops a simple sequential multiple horizon non-causation test strategy for trivariate VAR models (with one auxiliary variable). We apply the test strategy to a rolling window study of money supply and real income, with the price of oil, the unemployment rate and the spread between the Treasury bill and commercial paper rates as auxiliary processes. Ours is the first study to control simultaneously for common stochastic trends, sensitivity of causality tests to the chosen sample period, null hypothesis over-rejection, sequential test size bounds, and the possibility of causal delays. Evidence suggests highly significant direct or indirect causality from M1 to real income, in particular through the unemployment rate and M2 once we control for cointegration. 
Keywords:  multiple horizon causality, Wald tests, parametric bootstrap, money-income causality, rolling windows, cointegration 
JEL:  C12 C32 C53 E47 
Date:  2004–07 
URL:  http://d.repec.org/n?u=RePEc:fiu:wpaper:0413&r=ecm 
By:  N. Vijayamohanan Pillai (Centre for Development Studies) 
Abstract:  The present paper proposes certain statistical tests, both conceptually simple and computationally easy, for analysing state-specific prima facie probabilistic causality and the error correction mechanism in the context of a Markov chain of time series data arranged in a contingency table of present versus previous states. It thus shows that error correction necessarily follows causality (that is, temporal dependence) or vice versa, suggesting that the two represent the same aspect. The result is applied to an analysis of inflation in India during the last three decades, separately and also together, based on the monthly general price level (WPI - all commodities) and 23 constituent groups/items, as well as on the three consumer price index (CPI) numbers. 
Keywords:  Markov chain, Steady state probability, India, Inflation, Return period 
JEL:  E31 C1 
Date:  2004–12 
URL:  http://d.repec.org/n?u=RePEc:ind:cdswpp:366&r=ecm 
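The contingency-table setup described above lends itself to a short sketch: estimate the transition matrix from present-versus-previous state counts and derive the steady-state probabilities. The two-state series below is hypothetical, not the paper's WPI data:

```python
import numpy as np

def transition_matrix(states, k):
    """Estimate a k-state Markov transition matrix from the contingency
    table of present versus previous states."""
    counts = np.zeros((k, k))
    for prev, cur in zip(states[:-1], states[1:]):
        counts[prev, cur] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def steady_state(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# hypothetical two-state regime series (0 = low inflation, 1 = high)
s = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0]
P = transition_matrix(s, 2)
pi = steady_state(P)   # long-run state probabilities
```

The mean recurrence time of a state (the "return period" in the keywords) is the reciprocal of its steady-state probability.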
By:  Catalin Starica (Chalmers & Gothenburg University); Stefano Herzel (University of Perugia); Tomas Nord (Chalmers University of Technology) 
Abstract:  The paper investigates, from an empirical perspective, aspects related to the occurrence of the IGARCH effect and to its impact on volatility forecasting. It reports the results of a detailed analysis of twelve samples of returns on financial indexes from major economies (Australia, Austria, Belgium, France, Germany, Japan, Sweden, UK, and US). The study is conducted in a novel, nonstationary modeling framework proposed in Starica and Granger (2005). The analysis shows that samples characterized by more pronounced changes in the unconditional variance display a stronger IGARCH effect and pronounced differences between the estimated GARCH(1,1) unconditional variance and the sample variance. Moreover, we document particularly poor longer-horizon forecasting performance of the GARCH(1,1) model for samples characterized by a strong discrepancy between the two measures of unconditional variance. The periods of poor forecasting behavior can be as long as four years. The forecasting behavior is evaluated through a direct comparison with a naive nonstationary approach and is based on mean square errors (MSE) as well as on an option replicating exercise. 
Keywords:  stock returns, volatility forecasting, GARCH(1,1), IGARCH effect, hedging, nonstationary, longer-horizon forecasting 
JEL:  C14 C32 
Date:  2005–08–02 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0508003&r=ecm 
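The two measures of unconditional variance being compared can be illustrated with a simulated GARCH(1,1); the parameter values below are illustrative only, chosen to put the persistence close to the IGARCH boundary:

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate GARCH(1,1) returns r_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    r = np.empty(n)
    s2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(s2) * z[t]
        s2 = omega + alpha * r[t] ** 2 + beta * s2
    return r

omega, alpha, beta = 0.05, 0.09, 0.90   # persistence alpha+beta = 0.99
uncond = omega / (1 - alpha - beta)     # model-implied unconditional variance
r = simulate_garch11(omega, alpha, beta, 20000)
sample_var = r.var()                    # sample counterpart
```

With persistence this close to one, the two quantities can differ noticeably even in long samples, which is the sort of discrepancy the paper documents for estimated models.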
By:  Peter Zadrozny 
Abstract:  A univariate GARCH(p,q) process is quickly transformed to a univariate autoregressive moving-average process in squares of an underlying variable. For positive integer m, eigenvalue restrictions have been proposed as necessary and sufficient restrictions for existence of a unique mth moment of the output of a univariate GARCH process or, equivalently, the 2mth moment of the underlying variable. However, proofs in the literature that an eigenvalue restriction is necessary and sufficient for existence of unique 4th or higher even moments of the underlying variable are either incorrect, incomplete, or unnecessarily long. Thus, the paper contains a short and general proof that an eigenvalue restriction is necessary and sufficient for existence of a unique 4th moment of the underlying variable of a univariate GARCH process. The paper also derives an expression for computing the 4th moment in terms of the GARCH parameters, which immediately implies a necessary and sufficient inequality restriction for existence of the 4th moment. Because the inequality restriction is easily computed in a finite number of basic arithmetic operations on the GARCH parameters and does not require computing eigenvalues, it provides an easy means for computing "by hand" the 4th moment and for checking its existence for low-dimensional GARCH processes. Finally, the paper illustrates the computations with some GARCH(1,1) processes reported in the literature. 
Keywords:  state-space form, Lyapunov equations, nonnegative and irreducible matrices 
JEL:  C32 C63 G12 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:ces:ceswps:_1505&r=ecm 
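For the Gaussian GARCH(1,1) case, the inequality restriction is the well-known condition 3*alpha^2 + 2*alpha*beta + beta^2 < 1, which can indeed be checked "by hand". A sketch of that special case (the paper's result covers general GARCH(p,q); the parameter values here are illustrative):

```python
def garch11_fourth_moment(omega, alpha, beta):
    """For a Gaussian GARCH(1,1), return (E[r^4], kurtosis) if the 4th
    moment exists, else None. Existence requires alpha + beta < 1 (2nd
    moment) and 3*alpha^2 + 2*alpha*beta + beta^2 < 1 (4th moment)."""
    d = 1.0 - 3.0 * alpha ** 2 - 2.0 * alpha * beta - beta ** 2
    if alpha + beta >= 1 or d <= 0:
        return None
    m2 = omega / (1.0 - alpha - beta)                      # E[r^2]
    m4 = 3.0 * omega ** 2 * (1.0 + alpha + beta) / ((1.0 - alpha - beta) * d)
    return m4, m4 / m2 ** 2                                # 4th moment, kurtosis

# a textbook-style parameterization: 4th moment exists, kurtosis > 3
res = garch11_fourth_moment(0.1, 0.1, 0.8)
```

Equivalently, the kurtosis is 3*(1 - (alpha+beta)^2) / (1 - (alpha+beta)^2 - 2*alpha^2), so it always exceeds the Gaussian value of 3 when alpha > 0.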
By:  David E. Giles (Department of Economics, University of Victoria) 
Abstract:  We consider the class of generalized entropy (GE) measures that are commonly used to measure inequality. When used in the context of very small samples, as is frequently the case in studies of industrial concentration, these measures are significantly biased. We derive the analytic expression for this bias for an arbitrary member of the GE family, using a small-sigma expansion. This expression is valid regardless of the sample size, is increasingly accurate as the sampling error decreases, and provides the basis for constructing ‘bias-corrected’ inequality measures. We illustrate the application of these results to data for the Canadian banking sector, and various U.S. industrial sectors. 
Keywords:  Inequality indices, generalized entropy, bias, small-sigma expansion 
JEL:  C13 C16 D31 
Date:  2005–08–02 
URL:  http://d.repec.org/n?u=RePEc:vic:vicewp:0514&r=ecm 
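The GE family has a simple closed form, so the quantities whose bias is being corrected are easy to compute. A minimal sketch of the (uncorrected) index, with a hypothetical four-firm example:

```python
import numpy as np

def generalized_entropy(x, a):
    """Generalized entropy inequality index GE(a) for a positive sample x.
    a = 0 gives the mean log deviation, a = 1 the Theil index."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    if a == 0:
        return -np.mean(np.log(r))
    if a == 1:
        return np.mean(r * np.log(r))
    return (np.mean(r ** a) - 1.0) / (a * (a - 1.0))

shares = np.array([10.0, 20.0, 30.0, 40.0])   # hypothetical firm sizes
theil = generalized_entropy(shares, 1)
ge2 = generalized_entropy(shares, 2)
```

GE(2) is half the squared coefficient of variation, which is one way to sanity-check an implementation; all GE members are zero for a perfectly equal sample.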
By:  Issac Martín de Diego; Alberto Muñoz; Javier M. Moguerza 
Abstract:  The problem of combining different sources of information arises in several situations, for instance, the classification of data with asymmetric similarity matrices or the construction of an optimal classifier from a collection of kernels. Often, each source of information can be expressed as a kernel (similarity) matrix and, therefore, a collection of kernels is available. In this paper we propose a new class of methods to produce, for classification purposes, a unique and optimal kernel. The constructed kernel is then used to train a Support Vector Machine (SVM). Two key ideas underlie the kernel construction: the quantification, relative to the classification labels, of the difference of information among the kernels; and the extension of the concept of a linear combination of kernels to a functional (matrix) combination of kernels. The proposed methods have been successfully evaluated and compared with other powerful classifiers and kernel combination techniques on a variety of artificial and real classification problems. 
Date:  2005–07 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws054508&r=ecm 
By:  Elena Tchernykh; William H. Branson 
Abstract:  This paper presents techniques for modelling and estimating the behavior of financial market price or return differentials that follow nonlinear regime-switching behaviour. The methodology to be used here is estimation of variants of threshold autoregression (TAR) models. In the basic model the differentials are random within a band defined by transactions costs and contract risk; they occasionally jump outside the band, and then follow an autoregressive path back towards the band. The principal reference is Tchernykh (1998). The application here is to deviations from covered interest parity (CIP) between forward foreign exchange (FX) markets in Hong Kong and the Philippines. We have observed that these deviations from the band follow irregular steps, rather than single jumps. Therefore a Modified TAR model (MTAR) that allows for this behaviour is also estimated. The estimation methodology is a regime-switching maximum likelihood procedure. The estimates can provide indicators for policymakers of the market's expectation of crisis, and could also provide indicators for the private sector of convergence of deviations to their usual bands. The TAR model has the potential to be applied to differentials between linked pairs of financial market prices more generally. 
JEL:  F31 C13 
Date:  2005–08 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:11517&r=ecm 
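The basic band-TAR mechanism described above (random within the band, autoregressive reversion outside it) can be simulated in a few lines; the dynamics and all parameter values below are an illustrative sketch, not the estimated Hong Kong-Philippines model:

```python
import numpy as np

def simulate_band_tar(n, band, rho, sigma_in, sigma_out, seed=0):
    """Band-TAR sketch: inside [-band, band] the differential wanders like
    noise; outside it reverts toward the band edge at rate rho."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        if abs(x[t - 1]) <= band:
            x[t] = x[t - 1] + sigma_in * rng.standard_normal()
        else:
            edge = np.sign(x[t - 1]) * band
            x[t] = edge + rho * (x[t - 1] - edge) + sigma_out * rng.standard_normal()
    return x

# illustrative: band of 50 basis points, moderately fast reversion outside
x = simulate_band_tar(2000, band=0.5, rho=0.7, sigma_in=0.1, sigma_out=0.05)
```

Estimation then amounts to maximizing the likelihood over the band width and the regime-specific parameters, switching regimes by whether the lagged differential lies inside the band.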
By:  Patrick Marsh 
Abstract:  This paper explores the properties of a new nonparametric goodness of fit test, based on the likelihood ratio test of Portnoy (1988). It is applied via the consistent series density estimator of Crain (1974) and Barron and Sheu (1991). The asymptotic properties are established as straightforward corollaries to the results of those papers, as well as from similar results in Marsh (2000) and Claeskens and Hjort (2004). The paper focuses on the computational and numerical properties. Specifically, it is found that the choice of approximating basis is not crucial and that the choice of model dimension, through consistent selection criteria, yields a feasible procedure. Extensive numerical experiments show that the use of asymptotic critical values is feasible in moderate sample sizes. More importantly, the new tests are shown to have significantly more power than established tests such as the Kolmogorov-Smirnov, Cramér-von Mises or Anderson-Darling. Indeed, for certain interesting alternatives the power of the proposed tests may be several times that of the established ones. 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:05/24&r=ecm 
By:  Patrick Marsh 
Abstract:  This paper proposes a test for the hypothesis that two samples have the same distribution. The likelihood ratio test of Portnoy (1988) is applied in the context of the consistent series density estimator of Crain (1974) and Barron and Sheu (1991). It is proven that the test, when suitably standardised, is asymptotically standard normal and consistent against any complementary alternative. In comparison with the established Kolmogorov-Smirnov and Cramér-von Mises procedures, the proposed test enjoys broadly comparable finite sample size properties but vastly superior power properties. 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:05/25&r=ecm 
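The Kolmogorov-Smirnov procedure used as a benchmark reduces to the maximum gap between the two empirical CDFs; a minimal numpy sketch on simulated (illustrative) data:

```python
import numpy as np

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    distance between the empirical CDFs of the two samples."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return np.abs(cdf_x - cdf_y).max()

rng = np.random.default_rng(0)
same = ks_two_sample(rng.standard_normal(500), rng.standard_normal(500))
diff = ks_two_sample(rng.standard_normal(500), rng.standard_normal(500) + 1.0)
```

For equal distributions the statistic is small; a one-standard-deviation location shift produces a markedly larger value, which is the kind of alternative on which the series-density test is reported to do even better.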
By:  Viktor Winschel (University of Mannheim) 
Abstract:  A welfare analysis of a risky policy is impossible within a linear or linearized model and its certainty equivalence property. The presented algorithms are designed as a toolbox for a general model class. The computational challenges are considerable, and I concentrate on the numerics and statistics for a simple model of dynamic consumption and labor choice. I calculate the optimal policy and estimate the posterior density of structural parameters and the marginal likelihood within a nonlinear state space model. My approach is, even in an interpreted language, twenty times faster than the only alternative compiled approach. The model is estimated on simulated data in order to test the routines against known true parameters. The policy function is approximated by Smolyak Chebyshev polynomials and the rational expectation integral by Smolyak Gaussian quadrature. The Smolyak operator is used to extend univariate approximation and integration operators to many dimensions. It reduces the curse of dimensionality from exponential to polynomial growth. The likelihood integrals are evaluated by a Gaussian quadrature and a Gaussian quadrature particle filter. The bootstrap or sequential importance resampling particle filter is used as an accuracy benchmark. The posterior is estimated by the Gaussian filter and a Metropolis-Hastings algorithm. I propose a genetic extension of the standard Metropolis-Hastings algorithm based on parallel random walk sequences. This improves the robustness to start values and the global maximization properties. Moreover, it simplifies a cluster implementation, and the choice of random walk variances is reduced to only two parameters, so that almost no trial sequences are needed. Finally, the marginal likelihood is calculated as a criterion for non-nested and quasi-true models in order to select between the nonlinear estimates and a first order perturbation solution combined with the Kalman filter. 
Keywords:  stochastic dynamic general equilibrium model, Chebyshev polynomials, Smolyak operator, nonlinear state space filter, Curse of Dimensionality, posterior of structural parameters, marginal likelihood 
JEL:  E0 F0 C11 C13 C15 C32 C44 C52 C63 C68 C88 
Date:  2005–07–29 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpge:0507014&r=ecm 
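The building block behind Smolyak Gaussian quadrature is a univariate Gauss-Hermite rule, which evaluates expectations under a normal density exactly for polynomials; the Smolyak operator then combines such univariate rules sparsely across dimensions. A univariate sketch:

```python
import numpy as np

def gauss_hermite_expectation(f, mu, sigma, n_nodes=10):
    """E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature.
    Change of variable x = mu + sqrt(2)*sigma*t maps the e^{-t^2}
    weight onto the normal density; weights are rescaled by 1/sqrt(pi)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    x = mu + np.sqrt(2.0) * sigma * nodes
    return (weights @ f(x)) / np.sqrt(np.pi)

# E[X^2] = 1 and E[exp(X)] = exp(1/2) for X ~ N(0, 1)
m2 = gauss_hermite_expectation(lambda x: x ** 2, 0.0, 1.0)
mexp = gauss_hermite_expectation(np.exp, 0.0, 1.0)
```

A full tensor product of such rules needs n_nodes^d points in d dimensions; the Smolyak construction keeps only a polynomially growing subset, which is the dimension-reduction the abstract refers to.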
By:  Jonathan B. Hill (Department of Economics, Florida International University) 
Abstract:  We develop tests of linearity that are consistent against a class of Compound Smooth Transition Autoregressive (CoSTAR) models of the conditional mean. Our method is an extension of the sup-test developed by Bierens (1990) and Bierens and Ploberger (1997), provides maximal power against popular STAR alternatives, and is consistent against any deviation from the null hypothesis. Moreover, the test method can be extended to consistent tests of the number of threshold regimes, flexible parametric forms, conditional homoscedasticity against linear or smooth transition GARCH, and nonlinear causality tests. Of particular note, we improve on Bierens's (1990) test theory by considering a vector conditional moment that leads to a sup-test statistic that is never degenerate under the alternative of functional misspecification. Such nondegeneracy will even help improve on the optimal tests of Andrews and Ploberger (1994). Moreover, our test is a true test against smooth transition alternatives, whereas the universally employed polynomial regression test of Luukkonen et al. (1988) and Teräsvirta (1994) requires the assumption that the true data generating mechanism is STAR. A simulation study demonstrates that the suggested STAR sup-statistic renders a test with superlative empirical size and power attributes, in particular in comparison to the Bierens (1990) test, the neural test of Lee, White and Granger (1993), and especially the polynomial regression test employed throughout the STAR literature. Finally, we apply the new tests to various macroeconomic processes. 
Keywords:  conditional moment tests, vector weighted tests, consistent tests of functional form, smooth transition autoregression, nondegenerate test 
JEL:  C12 C22 C45 C52 
Date:  2004–04 
URL:  http://d.repec.org/n?u=RePEc:fiu:wpaper:0406&r=ecm 
By:  Jonathan B. Hill (Department of Economics, Florida International University) 
Abstract:  The universal method for testing linearity against smooth transition autoregressive (STAR) alternatives is to linearize the STAR model around the null nuisance parameter value and perform F-tests on polynomial regressions in the spirit of the RESET test. Polynomial regressors, however, are poor proxies for the nonlinearity associated with STAR processes, and the resulting tests are not consistent (asymptotic power of one) against STAR alternatives, let alone general deviations from the null. Moreover, the most popularly used STAR forms of nonlinearity, exponential and logistic, are known to be exploitable for consistent conditional moment tests of functional form, cf. Bierens and Ploberger (1997). In this paper, pushing asymptotic theory aside, we compare the small sample performance of the standard polynomial test with an essentially ignored consistent conditional moment test of linear autoregression against smooth transition alternatives. In particular, we compute an LM sup-statistic and characterize the asymptotic p-value by Hansen's (1996) bootstrap method. In our simulations, we randomly select all STAR parameters in order not to bias experimental results through the use of "safe", "interior" parameter values that exaggerate the smooth transition nonlinearity. Contrary to past studies, we find that the traditional polynomial regression method performs only moderately well, and that the LM sup-test outperforms the traditional method, in particular for small samples and for LSTAR processes. 
Keywords:  Smooth transition AR, consistent conditional moment test, Lagrange Multiplier, bootstrap 
JEL:  C1 C2 C4 C5 C8 
Date:  2004–07 
URL:  http://d.repec.org/n?u=RePEc:fiu:wpaper:0412&r=ecm 
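The polynomial regression test serving as the comparison baseline can be sketched generically: regress the AR residuals on powers of the lagged variable and form the LM statistic T*R^2. This is a textbook one-lag version, not the authors' implementation, and the simulated LSTAR parameters are illustrative:

```python
import numpy as np

def star_polynomial_lm(y):
    """Luukkonen-Saikkonen-Terasvirta-style LM linearity test with one lag:
    regress AR(1) residuals on (1, y_{t-1}, y_{t-1}^2, y_{t-1}^3);
    LM = T * R^2 is asymptotically chi-squared(2) under linearity."""
    ylag, yt = y[:-1], y[1:]
    X0 = np.column_stack([np.ones_like(ylag), ylag])
    b0, *_ = np.linalg.lstsq(X0, yt, rcond=None)
    e = yt - X0 @ b0                         # residuals under linearity
    X1 = np.column_stack([X0, ylag ** 2, ylag ** 3])
    b1, *_ = np.linalg.lstsq(X1, e, rcond=None)
    fit = X1 @ b1
    return len(yt) * (fit @ fit) / (e @ e)   # T * R^2 of the auxiliary fit

rng = np.random.default_rng(2)
y_lin = np.zeros(1000)                       # linear AR(1), null holds
for t in range(1, 1000):
    y_lin[t] = 0.5 * y_lin[t - 1] + rng.standard_normal()
y_star = np.zeros(1000)                      # LSTAR: coefficient shifts with regime
for t in range(1, 1000):
    G = 1.0 / (1.0 + np.exp(-5.0 * y_star[t - 1]))
    y_star[t] = (0.9 - 1.6 * G) * y_star[t - 1] + rng.standard_normal()
lm_lin = star_polynomial_lm(y_lin)
lm_star = star_polynomial_lm(y_star)
```

The LM sup-test studied in the paper replaces the polynomial proxies with a conditional moment weighted over a nuisance parameter space, which is what delivers consistency.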
By:  Isabel Proenca (ISEG-UTL); Joao Santos Silva (ISEG-UTL) 
Abstract:  Although semiparametric alternatives are available, parametric binary choice models are widely used in practice, in spite of their sensitivity to misspecification. Here we present the results of a simulation study on the finite sample performance of parametric and semiparametric specification tests for models of this kind. The results obtained indicate that the computationally demanding semiparametric tests do not generally outperform the simpler score tests against parametric alternatives. 
JEL:  C12 C14 C25 C52 
Date:  2005–08–05 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0508008&r=ecm 
By:  Garlappi, Lorenzo; Uppal, Raman; Wang, Tan 
Abstract:  In this paper, we show how an investor can incorporate uncertainty about expected returns when choosing a mean-variance optimal portfolio. In contrast to the Bayesian approach to estimation error, where there is only a single prior and the investor is neutral to uncertainty, we consider the case where the investor has multiple priors and is averse to uncertainty. We characterize the multiple priors with a confidence interval around the estimated value of expected returns and we model aversion to uncertainty via a minimization over the set of priors. The multi-prior model has several attractive features: One, just like the Bayesian model, it is firmly grounded in decision theory. Two, it is flexible enough to allow for different degrees of uncertainty about expected returns for different subsets of assets, and also about the underlying asset-pricing model generating returns. Three, for several formulations of the multi-prior model we obtain closed-form expressions for the optimal portfolio, and in one special case we prove that the portfolio from the multi-prior model is equivalent to a ‘shrinkage’ portfolio based on the mean-variance and minimum-variance portfolios, which allows for a transparent comparison with Bayesian portfolios. Finally, we illustrate how to implement the multi-prior model for a fund manager allocating wealth across eight international equity indices; our empirical analysis suggests that allowing for parameter and model uncertainty reduces the fluctuation of portfolio weights over time and improves the out-of-sample performance relative to the mean-variance and Bayesian models. 
Keywords:  ambiguity; asset allocation; estimation error; portfolio choice; robustness; uncertainty 
JEL:  D81 G11 
Date:  2005–07 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:5148&r=ecm 
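The special-case result that the multi-prior portfolio is a shrinkage between the mean-variance and minimum-variance portfolios can be sketched directly; here the shrinkage weight delta and all inputs are illustrative (in the paper the weight is derived from the confidence interval around expected returns, not chosen freely):

```python
import numpy as np

def mv_weights(mu, Sigma, gamma):
    """Mean-variance weights over risky assets: w = Sigma^{-1} mu / gamma."""
    return np.linalg.solve(Sigma, mu) / gamma

def minvar_weights(Sigma):
    """Minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    return w / w.sum()

def shrinkage_portfolio(mu, Sigma, gamma, delta):
    """Combination of the two portfolios; delta is an illustrative
    shrinkage weight standing in for the model-implied one."""
    return (1 - delta) * mv_weights(mu, Sigma, gamma) + delta * minvar_weights(Sigma)

mu = np.array([0.08, 0.05, 0.06])          # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.03, 0.01],
                  [0.00, 0.01, 0.05]])     # hypothetical covariance matrix
w = shrinkage_portfolio(mu, Sigma, gamma=3.0, delta=0.5)
```

The minimum-variance component ignores the noisy mean estimates entirely, so a larger shrinkage weight corresponds to more uncertainty aversion, which is why the resulting weights fluctuate less over time.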
By:  Wolfgang Haerdle (Humboldt-University of Berlin); Enno Mammen (Ruprecht-Karls-University Heidelberg); Isabel Proenca (ISEG-UTL) 
Abstract:  Single index models are frequently used in econometrics and biometrics. Logit and Probit models are special cases with fixed link functions. In this paper we consider a bootstrap specification test that detects nonparametric deviations of the link function. The bootstrap is used with the aim of finding a more accurate distribution under the null than the normal approximation. We prove that the statistic and its bootstrapped version have the same asymptotic distribution. In a simulation study we show that the bootstrap is able to capture the negative bias and the skewness of the test statistic. It yields better approximations to the true critical values and consequently has a more accurate level and superior power properties. We propose a modification of the HH statistic which considerably reduces the dependence of the test's performance on the bandwidth choice. We show that the bootstrap of this modified statistic works as well. 
Keywords:  Bootstrap, kernel estimate, single index model, specification test. 
JEL:  C1 C2 C3 C4 C5 C8 
Date:  2005–08–05 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0508007&r=ecm 
By:  Roger Klein; Francis Vella 
Abstract:  This paper provides a control function estimator to adjust for endogeneity in the triangular simultaneous equations model where no exclusion restrictions are available to generate suitable instruments. Our approach is to exploit the dependence of the errors on exogenous variables (e.g. heteroscedasticity) to adjust the conventional control function estimator. The form of the error dependence on the exogenous variables is subject to restrictions, but is not parametrically specified. In addition to providing the estimator and deriving its large-sample properties, we present simulation evidence indicating that the estimator works well. 
Date:  2005–07 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:08/05&r=ecm 
By:  Matthias Mohr (European Central Bank) 
Abstract:  This paper proposes a new univariate method to decompose a time series into a trend, a cyclical and a seasonal component: the Trend-Cycle filter (TC filter) and its extension, the Trend-Cycle-Season filter (TCS filter). They can be regarded as extensions of the Hodrick-Prescott filter (HP filter). In particular, the stochastic model of the HP filter is extended by explicit models for the cyclical and the seasonal component. The introduction of a stochastic cycle improves the filter in three respects: First, the trend and cyclical components are more consistent with the underlying theoretical model of the filter. Second, the end-of-sample reliability of the trend estimates and the cyclical component is improved compared to the HP filter, since the procyclical bias in end-of-sample trend estimates is virtually removed. Finally, structural breaks in the original time series can easily be accounted for. 
Keywords:  economic cycles, time series, filtering, trend-cycle decomposition, seasonality 
JEL:  C13 C22 E32 
Date:  2005–08–03 
URL:  http://d.repec.org/n?u=RePEc:wpa:wuwpem:0508004&r=ecm 