
on Econometrics 
By:  Cees Diks (University of Amsterdam); Valentyn Panchenko (School of Economics, University of New South Wales); Dick van Dijk (Econometric Institute, Erasmus University Rotterdam) 
Abstract:  We introduce a statistical test for comparing the predictive accuracy of competing copula specifications in multivariate density forecasts, based on the Kullback-Leibler Information Criterion (KLIC). The test is valid under general conditions: in particular it allows for parameter estimation uncertainty and for the copulas to be nested or non-nested. Monte Carlo simulations demonstrate that the proposed test has satisfactory size and power properties in finite samples. Applying the test to daily exchange rate returns of several major currencies against the US dollar, we find that the Student’s t copula is favored over the Gaussian, Gumbel and Clayton copulas. This suggests that these exchange rate returns are characterized by symmetric tail dependence. 
Keywords:  Copula-based density forecast; semiparametric statistics; out-of-sample forecast evaluation; Kullback-Leibler Information Criterion; empirical copula 
JEL:  C12 C14 C32 C52 C53 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:swe:wpaper:200823&r=ecm 
By:  Morten Ørregaard Nielsen (School of Economics and Management, University of Aarhus, Denmark) 
Abstract:  In this paper a nonparametric variance ratio testing approach is proposed for determining the number of cointegrating relations in fractionally integrated systems. The test statistic is easily calculated without prior knowledge of the integration order of the data, the strength of the cointegrating relations, or the cointegration vector(s). The latter property makes it easier to implement than regression-based approaches, especially when examining relationships between several variables with possibly multiple cointegrating vectors. Since the test is nonparametric, it does not require the specification of a particular model and is invariant to short-run dynamics. Nor does it require the choice of any smoothing parameters that change the test statistic without being reflected in the asymptotic distribution. Furthermore, a consistent estimator of the cointegration space can be obtained from the procedure. The asymptotic distribution theory for the proposed test is nonstandard but easily tabulated. Monte Carlo simulations demonstrate excellent finite sample properties, even rivaling those of well-specified parametric tests. The proposed methodology is applied to the term structure of interest rates, where, contrary to both fractional and integer-based parametric approaches, evidence in favor of the expectations hypothesis is found using the nonparametric approach. 
Keywords:  Cointegration rank, cointegration space, fractional integration and cointegration, interest rates, long memory, nonparametric, term structure, variance ratio 
JEL:  C32 
Date:  2009–01–12 
URL:  http://d.repec.org/n?u=RePEc:aah:create:200902&r=ecm 
By:  Veronika Czellar; Elvezio Ronchetti 
Abstract:  Indirect inference (Smith, 1993; Gouriéroux, Monfort and Renault, 1993) is a simulation-based estimation method dealing with econometric models whose likelihood function is intractable. Typical examples are diffusion models described by stochastic differential equations. A potential problem that arises when estimating a diffusion model is possible model misspecification, which can lead to biased estimators and misleading test results. To correct the bias due to model misspecification, Genton and Ronchetti (2003) proposed robust indirect inference. The standard asymptotic approximation to the finite sample distribution of the robust indirect estimators and tests, however, can be very poor and can lead to misleading inference. To improve the finite sample accuracy, we propose in this paper an optimal choice of the auxiliary discretized model and a new test based on asymptotically equivalent M-estimators of the robust indirect estimators. We apply the robust indirect saddlepoint tests using an optimal choice of discretization to various contaminated diffusion models and we illustrate the gain in finite sample accuracy when using the new technique. 
Keywords:  indirect inference, M-estimators, influence function, robust statistics, saddlepoint approximations 
Date:  2008–08 
URL:  http://d.repec.org/n?u=RePEc:gen:geneem:2008.01&r=ecm 
By:  Oliver Blaskowitz; Helmut Herwartz 
Abstract:  Common approaches to test for the economic value of directional forecasts are based on the classical Chi-square test for independence, Fisher’s exact test or the Pesaran and Timmermann (1992) test for market timing. These tests are asymptotically valid for serially independent observations. Yet, in the presence of serial correlation they are markedly oversized, as confirmed in a simulation study. We summarize serial correlation robust test procedures and propose a bootstrap approach. By means of a Monte Carlo study we illustrate the relative merits of the latter. Two empirical applications demonstrate the relevance of accounting for serial correlation in economic time series when testing for the value of directional forecasts. 
Keywords:  Directional forecasts, directional accuracy, forecast evaluation, testing independence, contingency tables, bootstrap 
JEL:  C32 C52 C53 E17 E27 E47 F17 F37 F47 G11 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008073&r=ecm 
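The mechanics described in the abstract above can be sketched as follows: the Pesaran and Timmermann (1992) market-timing statistic, combined with a circular block bootstrap that preserves serial dependence in the forecast series while breaking its link to the actuals. The block length, replication count and function names are illustrative choices, not the authors' implementation.

```python
import numpy as np

def pt_statistic(actual_up, forecast_up):
    """Pesaran-Timmermann (1992) test statistic for market timing.
    Inputs are 0/1 direction indicators of equal length."""
    y, x = np.asarray(actual_up, float), np.asarray(forecast_up, float)
    n = len(y)
    p_hat = np.mean(y == x)                  # observed hit rate
    py, px = y.mean(), x.mean()
    p_star = py * px + (1 - py) * (1 - px)   # expected hit rate under independence
    v_hat = p_star * (1 - p_star) / n
    v_star = ((2 * py - 1) ** 2 * px * (1 - px) / n
              + (2 * px - 1) ** 2 * py * (1 - py) / n
              + 4 * py * px * (1 - py) * (1 - px) / n ** 2)
    return (p_hat - p_star) / np.sqrt(v_hat - v_star)

def block_bootstrap_pvalue(actual_up, forecast_up, block=5, reps=999, seed=0):
    """Bootstrap p-value: resample the forecast series in circular blocks,
    so its serial correlation is preserved under the null of no timing."""
    rng = np.random.default_rng(seed)
    x = np.asarray(forecast_up)
    n = len(x)
    s_obs = abs(pt_statistic(actual_up, forecast_up))
    exceed = 0
    for _ in range(reps):
        starts = rng.integers(0, n, size=n // block + 1)
        idx = np.concatenate([(s + np.arange(block)) % n for s in starts])[:n]
        if abs(pt_statistic(actual_up, x[idx])) >= s_obs:
            exceed += 1
    return (exceed + 1) / (reps + 1)
```

A forecaster who calls the direction correctly well above the independence benchmark yields a large statistic and a small bootstrap p-value.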
By:  Konstantinos Fokianos (Department of Mathematics & Statistics, University of Cyprus); Anders Rahbek (Department of Economics, University of Copenhagen); Dag Tjøstheim (Department of Mathematics, University of Bergen) 
Abstract:  This paper considers geometric ergodicity and likelihood-based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional variance, implying an interpretation as an integer-valued GARCH process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model for time series is considered. Under geometric ergodicity the maximum likelihood estimators of the parameters are shown to be asymptotically Gaussian in the linear model. In addition we provide a consistent estimator of the asymptotic covariance, which is used in the simulations and the analysis of some transaction data. Our approach to verifying geometric ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen to be arbitrarily small, the differences between the perturbed and non-perturbed versions vanish as far as the asymptotic distribution of the parameter estimates is concerned. 
Keywords:  generalized linear models; non-canonical link function; count data; Poisson regression; likelihood; geometric ergodicity; integer GARCH; observation-driven models; asymptotic theory 
Date:  2008–05 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:0835&r=ecm 
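The linear model described above admits a compact simulation sketch. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def simulate_poisson_ar(n, d=0.5, a=0.3, b=0.4, seed=0):
    """Simulate the linear Poisson autoregression (integer-valued GARCH(1,1)):
        lambda_t = d + a * lambda_{t-1} + b * y_{t-1},
        y_t | past ~ Poisson(lambda_t).
    Stationarity requires a + b < 1; the stationary mean is d / (1 - a - b)."""
    rng = np.random.default_rng(seed)
    lam = np.empty(n)
    y = np.empty(n, dtype=int)
    lam[0] = d / (1 - a - b)        # start the intensity at its stationary mean
    y[0] = rng.poisson(lam[0])
    for t in range(1, n):
        lam[t] = d + a * lam[t - 1] + b * y[t - 1]
        y[t] = rng.poisson(lam[t])
    return y, lam
```

With these values a long sample mean should settle near d / (1 - a - b) = 0.5 / 0.3.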
By:  Giuseppe Cavaliere (Department of Statistical Sciences, University of Bologna); Anders Rahbek (Department of Economics, University of Copenhagen); A. M. Robert Taylor (University of Nottingham) 
Abstract:  Many key macroeconomic and financial variables are characterised by permanent changes in unconditional volatility. In this paper we analyse vector autoregressions with nonstationary (unconditional) volatility of a very general form, which includes single and multiple volatility breaks as special cases. We show that the conventional rank statistics computed as in Johansen (1988, 1991) are potentially unreliable. In particular, their large sample distributions depend on the integrated covariation of the underlying multivariate volatility process which impacts on both the size and power of the associated cointegration tests, as we demonstrate numerically. A solution to the identified inference problem is provided by considering wild bootstrap-based implementations of the rank tests. These do not require the practitioner to specify a parametric model for volatility, nor to assume that the pattern of volatility is common to, or independent across, the vector of series under analysis. The bootstrap is shown to perform very well in practice. 
Keywords:  cointegration; nonstationary volatility; trace and maximum eigenvalue tests; wild bootstrap 
JEL:  C30 C32 
Date:  2008–09 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:0834&r=ecm 
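The resampling step at the heart of the approach above can be sketched as follows. The Rademacher multiplier is one standard choice of wild bootstrap weight; the paper's full procedure embeds such draws inside the Johansen rank tests, which is not reproduced here.

```python
import numpy as np

def wild_bootstrap_residuals(resid, reps=3, seed=0):
    """Wild bootstrap draws: each bootstrap residual is w_t * e_t with i.i.d.
    Rademacher multipliers w_t in {-1, +1}. Because each residual is only
    re-signed, never moved in time, the draws replicate any nonstationary
    volatility pattern in the original series -- the property that makes the
    bootstrap rank tests valid without a parametric volatility model."""
    rng = np.random.default_rng(seed)
    resid = np.asarray(resid, float)
    w = rng.choice([-1.0, 1.0], size=(reps, resid.size))
    return w * resid
```

Every bootstrap draw has exactly the same absolute values as the original residuals, period by period.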
By:  William Barnett (Department of Economics, The University of Kansas); Philippe de Peretti (Universite de la Sorbonne) 
Abstract:  In aggregation theory, the admissibility condition for clustering together components to be aggregated is blockwise weak separability, which also is the condition needed to separate out sectors of the economy. Although weak separability is thereby of central importance in aggregation and index number theory and in econometrics, prior attempts to produce statistical tests of weak separability have performed poorly in Monte Carlo studies. This paper deals with semi-nonparametric tests for weak separability. It introduces both a necessary and sufficient test, and a fully stochastic procedure that takes measurement error into account. Simulations show that the test performs well, even for large measurement errors. 
Keywords:  weak separability, quantity aggregation, clustering, sectors, index number theory, semi-nonparametrics 
JEL:  C12 C14 C43 D12 
Date:  2009–01 
URL:  http://d.repec.org/n?u=RePEc:kan:wpaper:200904&r=ecm 
By:  Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Byoung Cheol Jung; Seuck Heun Song 
Abstract:  This paper considers a panel data regression model with heteroskedastic as well as serially correlated disturbances, and derives a joint LM test for homoskedasticity and no first-order serial correlation. The restricted model is the standard random individual error component model. It also derives a conditional LM test for homoskedasticity given serial correlation, as well as a conditional LM test for no first-order serial correlation given heteroskedasticity, all in the context of a random effects panel data model. Monte Carlo results show that these tests, along with their likelihood ratio alternatives, have good size and power under various forms of heteroskedasticity including exponential and quadratic functional forms. 
Keywords:  Panel data; heteroskedasticity; serial correlation; Lagrange Multiplier tests; likelihood ratio; random effects 
JEL:  C23 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:max:cprwps:111&r=ecm 
By:  Jeffrey A. Mills 
Abstract:  Bayesian hypothesis testing of a precise null hypothesis suffers from a paradox discovered by Jeffreys (1939), Lindley (1957) and Bartlett (1957). This paradox appears to indicate that the usual priors, both proper and improper, are inappropriate for testing precise null hypotheses, and it leads to difficulties in specifying prior distributions that could be widely accepted as appropriate in this situation. This paper considers an alternative hypothesis testing procedure and derives the Bayes factor for this procedure, which turns out to be B = p(θ0 | x) / sup_θ [p(θ | x)], the ratio of the posterior density function evaluated at the value in the null hypothesis, θ0, and evaluated at its supremum. This leads to a Bayesian hypothesis testing procedure in which the Jeffreys-Lindley-Bartlett paradox does not occur. Further, under the proposed procedure, the prior does not depend on the hypotheses to be tested, there is no need to place nonzero mass on a particular point in a continuous distribution, and the same hypothesis testing procedure applies for all continuous and discrete distributions. Moreover, the resulting test procedure is robust to reasonable variations in the prior, uniformly most powerful and easy to interpret correctly in practice. Several examples are given to illustrate the use and performance of the test. A justification for the proposed procedure is given based on the argument that scientific inference always at least implicitly involves and requires precise alternative working hypotheses. 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:cin:ucecwp:200901&r=ecm 
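As a minimal illustration of a Bayes factor of the form B = p(θ0 | x) / sup_θ [p(θ | x)], consider the special case of a normal posterior, where the supremum is attained at the posterior mean and B has a closed form. The example and function name are ours, not the paper's.

```python
import math

def bayes_factor_normal_posterior(theta0, post_mean, post_var):
    """Bayes factor B = p(theta0 | x) / sup_theta p(theta | x) for a
    Normal(post_mean, post_var) posterior.  The supremum of the density is
    attained at the posterior mean, so the normalizing constants cancel and
        B = exp(-(theta0 - post_mean)^2 / (2 * post_var)).
    B always lies in (0, 1], with B = 1 exactly at the posterior mode, so no
    Jeffreys-Lindley-Bartlett-type divergence can occur."""
    return math.exp(-(theta0 - post_mean) ** 2 / (2 * post_var))
```

The further θ0 lies from the posterior mean (in posterior standard deviations), the smaller B becomes, smoothly and without any dependence on a prior point mass.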
By:  Kristof DE WITTE; Mika KORTELAINEN 
Abstract:  This paper proposes a fully nonparametric framework to estimate relative efficiency of entities while accounting for a mixed set of continuous and discrete (both ordered and unordered) exogenous variables. Using robust partial frontier techniques, the probabilistic and conditional characterization of the production process, as well as insights from the recent developments in nonparametric econometrics, we present a generalized approach for conditional efficiency measurement. To do so, we utilize a tailored mixed kernel function with a data-driven bandwidth selection. So far, only descriptive analysis has been suggested for studying the effect of heterogeneity in conditional efficiency estimation. We show how to use and interpret nonparametric bootstrap-based significance tests in a generalized conditional efficiency framework. This allows us to study the statistical significance of continuous and discrete environmental variables. The proposed approach is illustrated using a sample of British pupils from the OECD PISA data set. The results show that several exogenous discrete factors have a significant effect on the educational process. 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:ete:ceswps:ces0833&r=ecm 
By:  Marc Hallin; Ramon van den Akker; Bas Werker 
Abstract:  We propose a class of simple rank-based tests for the null hypothesis of a unit root. This class is indexed by the choice of a reference density g, which need not coincide with the unknown actual innovation density f. The validity of these tests, in terms of exact finite sample size, is guaranteed by distribution-freeness, irrespective of the value of the drift and the actual underlying f. When based on a Gaussian reference density g, our tests (of the van der Waerden form) perform uniformly better, in terms of asymptotic relative efficiency, than the Dickey-Fuller test, except under Gaussian f, where they do equally well. Under a Student t3 density f, the efficiency gain is as high as 110%, meaning that Dickey-Fuller requires over twice as many observations as we do in order to achieve comparable performance. This gain is even larger in case the underlying f has fatter tails; under Cauchy f, where Dickey-Fuller is no longer valid, it can be considered infinite. The test associated with reference density g is semiparametrically efficient when f happens to coincide with g, in the ubiquitous case that the model contains a nonzero drift. Finally, with an estimated density f(n) substituted for the reference density g, our tests achieve uniform (with respect to f) semiparametric efficiency. 
Keywords:  Dickey-Fuller test, Local Asymptotic Normality 
JEL:  C12 C22 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2009_001&r=ecm 
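The building block of the tests above can be sketched: the van der Waerden (normal-score) transform, which replaces each observation by a quantile of the reference density evaluated at its rank. Because the result depends on the data only through ranks, statistics built from it are distribution-free. This is only the score computation, not the full test statistic.

```python
from statistics import NormalDist

def van_der_waerden_scores(increments):
    """Normal (van der Waerden) scores: replace each value by
    Phi^{-1}(rank / (n + 1)), where Phi^{-1} is the standard normal quantile
    function.  Ties are ignored in this sketch.  Any statistic computed from
    these scores is exact-distribution-free with respect to the true
    innovation density -- the property the unit root tests above rely on."""
    n = len(increments)
    inv = NormalDist().inv_cdf
    order = sorted(range(n), key=lambda i: increments[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r                     # rank of each observation, 1..n
    return [inv(r / (n + 1)) for r in ranks]
```

The scores preserve the ordering of the data and are symmetric around zero by construction.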
By:  James J. Heckman; Sergio Urzua; Edward Vytlacil 
Abstract:  This paper develops the method of local instrumental variables for models with multiple, unordered treatments when treatment choice is determined by a nonparametric version of the multinomial choice model. Responses to interventions are permitted to be heterogeneous in a general way and agents are allowed to select a treatment (e.g. participate in a program) with at least partial knowledge of the idiosyncratic response to the treatments. We define treatment effects in a general model with multiple treatments as differences in counterfactual outcomes that would have been observed if the agent faced different choice sets. We show how versions of local instrumental variables can identify the corresponding treatment parameters. Direct application of local instrumental variables identifies the marginal treatment effect of one option versus the next best alternative without requiring knowledge of any structural parameters from the choice equation or any large support assumptions. Using local instrumental variables to identify other treatment parameters requires either large support assumptions or knowledge of the latent index function of the multinomial choice model. 
Date:  2008–12–15 
URL:  http://d.repec.org/n?u=RePEc:ucd:wpaper:200830&r=ecm 
By:  Jeffrey A. Mills 
Abstract:  Mills (2008) examines an alternative procedure for testing precise hypotheses based on specifying a set of precise alternative hypotheses. Mills shows that this method resolves several problems with the standard procedure, particularly the Jeffreys-Lindley-Bartlett paradox, and has desirable properties. This paper applies this new testing procedure to the unit root hypothesis for an AR(1) model. A Monte Carlo simulation experiment is conducted to study the performance of the test in terms of robustness to the specification of the prior distribution. The resulting new test is compared with the best alternatives, namely the tests of Conigliani and Spezzaferri (2007) and Elliott, Rothenberg and Stock (1996). 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:cin:ucecwp:200902&r=ecm 
By:  J.P. Florens (IDEI, Toulouse); J. J. Heckman (University of Chicago and University College Dublin); C. Meghir (IFS and UCL); E. Vytlacil (Yale University) 
Abstract:  We use the control function approach to identify the average treatment effect and the effect of treatment on the treated in models with a continuous endogenous regressor whose impact is heterogeneous. We assume a stochastic polynomial restriction on the form of the heterogeneity but, unlike alternative nonparametric control function approaches, our approach does not require large support assumptions. 
Date:  2008–12–15 
URL:  http://d.repec.org/n?u=RePEc:ucd:wpaper:200832&r=ecm 
By:  Markku Lanne; Helmut Luetkepohl 
Abstract:  The role of expectations for economic fluctuations has received considerable attention in recent business cycle analysis. We exploit Markov regime switching models to identify shocks in cointegrated structural vector autoregressions and investigate different identification schemes for bivariate systems comprising U.S. stock prices and total factor productivity. The former variable is viewed as reflecting expectations of economic agents about future productivity. It is found that some previously used identification schemes can be rejected in our model setup. The results crucially depend on the measure used for total factor productivity. 
Keywords:  Cointegration, Markov regime switching model, vector error correction model, structural vector autoregression, mixed normal distribution 
JEL:  C32 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:eui:euiwps:eco2008/29&r=ecm 
By:  ALBERTO MAYDEU (Instituto de Empresa) 
Abstract:  We show how to test hypotheses for coefficient alpha in three different situations: (i) hypothesis tests of whether coefficient alpha equals a prespecified value; (ii) hypothesis tests involving two statistically independent sample alphas, as may arise when testing the equality of coefficient alpha across groups; and (iii) hypothesis tests involving two statistically dependent sample alphas, as may arise when testing the equality of alpha across time, or when testing the equality of alpha for two test scores within the same sample. 
Keywords:  Coefficient alpha, Hypothesis testing, Structural equation modeling 
Date:  2008–01 
URL:  http://d.repec.org/n?u=RePEc:emp:wpaper:wp0808&r=ecm 
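A minimal sketch of the quantities involved: the sample coefficient alpha and a Wald-type z statistic against a prespecified value. The paper obtains the standard error from a structural equation model; here the standard error is simply an input, so the helper below is illustrative only.

```python
import numpy as np

def coefficient_alpha(data):
    """Sample coefficient alpha for an (n_subjects x k_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    data = np.asarray(data, float)
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def alpha_z_test(alpha_hat, alpha0, se_alpha):
    """Wald-type z statistic for H0: alpha = alpha0.  The standard error of
    the sample alpha must be supplied (e.g. from an SEM fit, as in the paper);
    it is not computed here."""
    return (alpha_hat - alpha0) / se_alpha
```

For items sharing a common factor, the sample alpha should land well above zero and strictly below one.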
By:  James J. Heckman (University of Chicago, Chicago, Illinois 60637, USA; American Bar Foundation, Chicago, Illinois; Geary Institute, University College Dublin, Ireland) 
Date:  2008–12–15 
URL:  http://d.repec.org/n?u=RePEc:ucd:wpaper:200826&r=ecm 
By:  Ignacio Rodríguez Carreño (Universidad de Navarra, Depto. Métodos Cuantitativos, Pamplona); L. Gila Useros, A. Malanda Trigueros, J. Navallas Irujo, J. Rodríguez Falces 
Abstract:  A new automatic method based on the wavelet and Hilbert transforms for measuring the motor unit action potential (MUAP) duration is presented in this work. A total of 182 MUAPs from two different muscles were analysed. The average MUAP waveform was wavelet-transformed, and a particular scale of the wavelet transform was selected to avoid baseline fluctuation and high-frequency noise. Then, the Hilbert transform was applied to this wavelet scale to obtain its envelope. Amplitude and slope criteria in this envelope were used to obtain the MUAP start and end points. The results of the new method were compared to the gold standard of duration marker positions obtained by manual measurement. The new method was also compared with two other automatic duration methods: one recently developed by the authors and a conventional automatic duration algorithm. The differences between the new algorithm’s marker positions and the gold standard were in some cases smaller than those observed with the recently published and the conventional methods. Our new method for automatic measurement of MUAP duration is more accurate than other available conventional algorithms and performs better than the recent method in some cases. 
Date:  2008–12–19 
URL:  http://d.repec.org/n?u=RePEc:una:unccee:wp1408&r=ecm 
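The envelope step of the method above can be sketched via the analytic signal, computed here with a plain FFT rather than any particular wavelet toolbox; the wavelet-scale selection that precedes it in the paper is omitted.

```python
import numpy as np

def hilbert_envelope(x):
    """Envelope of a real signal via the analytic signal: zero out the
    negative-frequency half of the FFT, double the positive half, inverse
    transform, and take the magnitude.  In the method above this would be
    applied to a selected wavelet scale of the averaged MUAP waveform."""
    x = np.asarray(x, float)
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin kept once for even lengths
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))
```

For a pure cosine spanning an integer number of periods, the envelope is identically one, which makes the transform easy to sanity-check.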
By:  Michiels F.; Koch I.; De Schepper A. 
Abstract:  We introduce and discuss a new parametric copula builder, named the “Lambda construction method”. The methodology is explained and illustrated using three types of Lambda functions. We show that the Lambda method has strong visual advantages for recognizing key dependence characteristics and importing them into the copula model. Furthermore, the Lambda method facilitates the representation of a copula family as a collection of comparable test spaces as defined in Michiels and De Schepper (2008). As such, the modeling capacity of these families can be discussed in a clear way. 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:ant:wpaper:2008021&r=ecm 
By:  Goos P.; Vermeulen B.; Vandebroek M. 
Abstract:  Despite the fact that many conjoint choice experiments offer respondents a no-choice option in every choice set, the optimal design of conjoint choice experiments involving no-choice options has received only a limited amount of attention in the literature. In this article, we present an approach to construct D-optimal designs for this type of experiment. For that purpose, we derive the information matrix of a nested multinomial logit model that is appropriate for analyzing data from choice experiments with no-choice options. The newly derived information matrix is compared to the information matrix for the multinomial logit model that is used in the literature to construct designs for choice experiments. It is also used to quantify the loss of information in a choice experiment due to the presence of a no-choice option. 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:ant:wpaper:2008020&r=ecm 
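For the benchmark (non-nested) multinomial logit mentioned above, the information matrix that D-optimal design criteria are built on can be sketched as follows. The paper's contribution, the analogous matrix for a nested logit with a no-choice option, is more involved and is not reproduced here.

```python
import numpy as np

def mnl_information_matrix(design_sets, beta):
    """Fisher information of the standard multinomial logit model:
        I(beta) = sum_s X_s' (diag(p_s) - p_s p_s') X_s,
    where X_s is the (alternatives x attributes) matrix of choice set s and
    p_s = softmax(X_s beta) are the choice probabilities."""
    p_dim = len(beta)
    info = np.zeros((p_dim, p_dim))
    for X in design_sets:
        u = X @ beta
        p = np.exp(u - u.max())          # stabilized softmax
        p /= p.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

def d_criterion(info):
    """D-criterion: p-th root of the determinant of the information matrix;
    a D-optimal design maximizes this value."""
    return np.linalg.det(info) ** (1.0 / info.shape[0])
```

A zero attribute row can stand in for a base alternative in each choice set; the resulting matrix should be symmetric and positive definite for any design whose attribute columns are not confounded with the choice-set constant.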