nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒01‒17
twenty papers chosen by
Sune Karlsson
Örebro University

  1. Out-of-sample comparison of copula specifications in multivariate density forecasts By Cees Diks; Valentyn Panchenko; Dick van Dijk
  2. Nonparametric Cointegration Analysis of Fractional Systems With Unknown Integration Orders By Morten Ørregaard Nielsen
  3. Accurate and robust indirect inference for diffusion models By Veronika Czellar; Elvezio Ronchetti
  4. Testing directional forecast value in the presence of serial correlation By Oliver Blaskowitz; Helmut Herwartz
  5. Poisson Autoregression By Konstantinos Fokianos; Anders Rahbek; Dag Tjøstheim
  6. Testing for Co-integration in Vector Autoregressions with Non-Stationary Volatility By Giuseppe Cavaliere; Anders Rahbek; A. M. Robert Taylor
  7. Admissible Clustering of Aggregator Components: A Necessary and Sufficient Stochastic Semi-Nonparametric Test for Weak Separability. By William Barnett; Philippe de Peretti
  8. Testing for Heteroskedasticity and Serial Correlation in a Random Effects Panel Data Model By Badi H. Baltagi; Byoung Cheol Jung; Seuck Heun Song
  9. Resolving some paradoxes and problems with Bayesian precise hypothesis testing. By Jeffrey A. Mills
  10. Blaming the exogenous environment? Conditional efficiency estimation with continuous and discrete environmental variables By Kristof DE WITTE; Mika KORTELAINEN
  11. A class of Simple Semiparametrically Efficient Rank-Based Unit Root Tests By Marc Hallin; Ramon van den Akker; Bas Werker
  13. A Robust, Uniformly Most Powerful Unit Root Test By Jeffrey A. Mills
  14. Identification of Treatment Effects Using Control Functions in Models with Continuous, Endogenous Treatment and Heterogeneous Effects By J. P. Florens; J. J. Heckman; C. Meghir; E. Vytlacil
  15. Stock Prices and Economic Fluctuations: A Markov Switching Structural Vector Autoregressive Analysis By Markku Lanne; Helmut Luetkepohl
  16. Testing reliability hypotheses based on coefficient Alpha By Alberto Maydeu
  17. Econometric Causality By James J. Heckman
  18. New Automatic Measurement Method of the Motor Unit Action Potential Duration based on the Wavelet and Hilbert Transforms By Ignacio Rodríguez Carreño; L. Gila Useros, A. Malanda Trigueros, J. Navallas Irujo, J. Rodríguez Falces
  19. Exploring the Lambda Copula Construction Method for Archimedean copulas: Discussion of Three Lambda Types By Michiels F.; Koch I.; De Schepper A.
  20. D-optimal conjoint choice designs with no-choice options for a nested logit model By Goos P.; Vermeulen B.; Vandebroek M.

  1. By: Cees Diks (University of Amsterdam); Valentyn Panchenko (School of Economics, University of New South Wales); Dick van Dijk (Econometric Institute, Erasmus University Rotterdam)
    Abstract: We introduce a statistical test for comparing the predictive accuracy of competing copula specifications in multivariate density forecasts, based on the Kullback-Leibler Information Criterion (KLIC). The test is valid under general conditions: in particular it allows for parameter estimation uncertainty and for the copulas to be nested or nonnested. Monte Carlo simulations demonstrate that the proposed test has satisfactory size and power properties in finite samples. Applying the test to daily exchange rate returns of several major currencies against the US dollar we find that the Student’s t copula is favored over Gaussian, Gumbel and Clayton copulas. This suggests that these exchange rate returns are characterized by symmetric tail dependence.
    Keywords: Copula-based density forecast; semiparametric statistics; out-of-sample forecast evaluation; Kullback-Leibler Information Criterion; empirical copula
    JEL: C12 C14 C32 C52 C53
    Date: 2008–10
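Editor's note: as a rough illustration of the mechanics of such a KLIC-based comparison (not the authors' implementation), the difference in out-of-sample log predictive copula densities of two competing specifications can be tested with a Diebold-Mariano-type statistic. The inputs `logc_a` and `logc_b` below are hypothetical sequences of log copula densities evaluated at realized observations; the simple iid variance is a stand-in for the HAC variance used in practice.

```python
import numpy as np

def klic_score_test(logc_a, logc_b):
    """Diebold-Mariano-type test on log copula density differences.
    A positive statistic favours copula A. Returns (t_stat, mean_diff).
    Uses a simple iid variance; a HAC variance would be used in practice."""
    d = np.asarray(logc_a) - np.asarray(logc_b)
    n = len(d)
    t_stat = np.sqrt(n) * d.mean() / d.std(ddof=1)
    return t_stat, d.mean()

# toy example: copula A fits systematically better out of sample
rng = np.random.default_rng(0)
la = rng.normal(-1.30, 0.1, 500)   # hypothetical log densities, copula A
lb = rng.normal(-1.35, 0.1, 500)   # hypothetical log densities, copula B
t, md = klic_score_test(la, lb)
```

A large positive `t` would favour copula A at conventional levels.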
  2. By: Morten Ørregaard Nielsen (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: In this paper a nonparametric variance ratio testing approach is proposed for determining the number of cointegrating relations in fractionally integrated systems. The test statistic is easily calculated without prior knowledge of the integration order of the data, the strength of the cointegrating relations, or the cointegration vector(s). The latter property makes it easier to implement than regression-based approaches, especially when examining relationships between several variables with possibly multiple cointegrating vectors. Since the test is nonparametric, it does not require the specification of a particular model and is invariant to short-run dynamics. Nor does it require the choice of any smoothing parameters that change the test statistic without being reflected in the asymptotic distribution. Furthermore, a consistent estimator of the cointegration space can be obtained from the procedure. The asymptotic distribution theory for the proposed test is non-standard but easily tabulated. Monte Carlo simulations demonstrate excellent finite sample properties, even rivaling those of well-specified parametric tests. The proposed methodology is applied to the term structure of interest rates, where, contrary to both fractional and integer-based parametric approaches, evidence in favor of the expectations hypothesis is found using the nonparametric approach.
    Keywords: Cointegration rank, cointegration space, fractional integration and cointegration, interest rates, long memory, nonparametric, term structure, variance ratio
    JEL: C32
    Date: 2009–01–12
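Editor's note: a heavily simplified sketch of the variance-ratio idea (illustrative only; this is not Nielsen's exact statistic, scaling, or critical values). Partial sums of a stationary (cointegrating) direction grow more slowly than partial sums of a stochastic trend, so small eigenvalues of the matrix ratio below hint at cointegrating relations.

```python
import numpy as np

def variance_ratio_eigs(X):
    """Eigenvalues of (sum x x')^{-1} (sum xtilde xtilde' / T^2), where
    xtilde_t are partial sums of x. Directions with small eigenvalues
    behave like stationary (cointegrating) combinations. Illustrative
    sketch only, loosely following the variance-ratio idea."""
    X = np.asarray(X, float)
    T = X.shape[0]
    Xt = np.cumsum(X, axis=0)        # partial-sum (once-more-integrated) process
    A = X.T @ X
    B = Xt.T @ Xt / T**2
    return np.sort(np.linalg.eigvals(np.linalg.solve(A, B)).real)

rng = np.random.default_rng(1)
e = rng.normal(size=(500, 2))
x1 = np.cumsum(e[:, 0])                  # common stochastic trend
X = np.column_stack([x1 + e[:, 1], x1])  # cointegrated pair
eigs = variance_ratio_eigs(X)            # one small, one O(1) eigenvalue
```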
  3. By: Veronika Czellar; Elvezio Ronchetti
    Abstract: Indirect inference (Smith, 1993; Gouriéroux, Monfort and Renault, 1993) is a simulation-based estimation method dealing with econometric models whose likelihood function is intractable. Typical examples are diffusion models described by stochastic differential equations. A potential problem that arises when estimating a diffusion model is possible model misspecification, which can lead to biased estimators and misleading test results. To correct the bias due to model misspecification, Genton and Ronchetti (2003) proposed robust indirect inference. The standard asymptotic approximation to the finite sample distribution of the robust indirect estimators and tests, however, can be very poor and can lead to misleading inference. To improve the finite sample accuracy, we propose in this paper an optimal choice of the auxiliary discretized model and a new test based on asymptotically equivalent M-estimators of the robust indirect estimators. We apply the robust indirect saddlepoint tests using an optimal choice of discretization to various contaminated diffusion models and we illustrate the gain in finite sample accuracy when using the new technique.
    Keywords: indirect inference, M-estimators, influence function, robust statistics, saddlepoint approximations
    Date: 2008–08
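Editor's note: a minimal sketch of the basic (non-robust) indirect-inference logic for a diffusion, assuming an Ornstein-Uhlenbeck process and an AR(1) auxiliary model whose binding function can be inverted analytically. The function names are illustrative, not from the paper.

```python
import numpy as np

def simulate_ou(kappa, sigma, dt, n, rng):
    """Euler discretisation of the toy diffusion dX = -kappa*X dt + sigma dW."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t-1] - kappa * x[t-1] * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

def indirect_kappa(x, dt):
    """Auxiliary model: AR(1) without intercept, rho = sum x_t x_{t-1} / sum x_{t-1}^2.
    Invert the (approximate) binding function rho = exp(-kappa*dt)."""
    rho = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    return -np.log(rho) / dt

rng = np.random.default_rng(2)
x = simulate_ou(kappa=2.0, sigma=0.5, dt=0.01, n=20000, rng=rng)
k_hat = indirect_kappa(x, 0.01)   # should be close to the true kappa = 2
```

The paper's contribution concerns robustifying this pipeline against contamination and improving its finite-sample accuracy via saddlepoint methods, neither of which is attempted here.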
  4. By: Oliver Blaskowitz; Helmut Herwartz
    Abstract: Common approaches to test for the economic value of directional forecasts are based on the classical Chi-square test for independence, Fisher’s exact test or the Pesaran and Timmermann (1992) test for market timing. These tests are asymptotically valid for serially independent observations. Yet, in the presence of serial correlation they are markedly oversized, as confirmed in a simulation study. We summarize serial correlation robust test procedures and propose a bootstrap approach. By means of a Monte Carlo study we illustrate the relative merits of the latter. Two empirical applications demonstrate the relevance of accounting for serial correlation in economic time series when testing for the value of directional forecasts.
    Keywords: Directional forecasts, directional accuracy, forecast evaluation, testing independence, contingency tables, bootstrap
    JEL: C32 C52 C53 E17 E27 E47 F17 F37 F47 G11
    Date: 2008–12
  5. By: Konstantinos Fokianos (Department of Mathematics & Statistics, University of Cyprus); Anders Rahbek (Department of Economics, University of Copenhagen); Dag Tjøstheim (Department of Mathematics, University of Bergen)
    Abstract: This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional variance, implying an interpretation as an integer valued GARCH process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model for time series is considered. Under geometric ergodicity the maximum likelihood estimators of the parameters are shown to be asymptotically Gaussian in the linear model. In addition we provide a consistent estimator of the asymptotic covariance, which is used in the simulations and the analysis of some transaction data. Our approach to verifying geometric ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen to be arbitrarily small, the differences between the perturbed and non-perturbed versions vanish as far as the asymptotic distribution of the parameter estimates is concerned.
    Keywords: generalized linear models; non-canonical link function; count data; Poisson regression; likelihood; geometric ergodicity; integer GARCH; observation driven models; asymptotic theory
    Date: 2008–05
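Editor's note: the linear model in this abstract has a concrete recursion. A minimal simulation of the INGARCH(1,1)-type linear Poisson autoregression (parameter names here are illustrative):

```python
import numpy as np

def simulate_ingarch(d, a, b, n, seed=0):
    """Linear Poisson autoregression (integer-valued GARCH):
    lambda_t = d + a*lambda_{t-1} + b*y_{t-1},  y_t | past ~ Poisson(lambda_t).
    The process is stationary (and, per the paper, geometrically ergodic up to
    a small perturbation argument) when a + b < 1."""
    rng = np.random.default_rng(seed)
    lam = np.empty(n)
    y = np.empty(n, dtype=int)
    lam[0] = d / (1 - a - b)          # start at the stationary mean
    y[0] = rng.poisson(lam[0])
    for t in range(1, n):
        lam[t] = d + a * lam[t-1] + b * y[t-1]
        y[t] = rng.poisson(lam[t])
    return y, lam

y, lam = simulate_ingarch(d=1.0, a=0.3, b=0.4, n=5000)
# unconditional mean is d/(1-a-b) = 1/0.3, about 3.33; the feedback through
# b*y_{t-1} makes the counts overdispersed relative to a plain Poisson
```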
  6. By: Giuseppe Cavaliere (Department of Statistical Sciences, University of Bologna); Anders Rahbek (Department of Economics, University of Copenhagen); A. M. Robert Taylor (University of Nottingham)
    Abstract: Many key macro-economic and financial variables are characterised by permanent changes in unconditional volatility. In this paper we analyse vector autoregressions with non-stationary (unconditional) volatility of a very general form, which includes single and multiple volatility breaks as special cases. We show that the conventional rank statistics computed as in Johansen (1988,1991) are potentially unreliable. In particular, their large sample distributions depend on the integrated covariation of the underlying multivariate volatility process which impacts on both the size and power of the associated co-integration tests, as we demonstrate numerically. A solution to the identified inference problem is provided by considering wild bootstrap-based implementations of the rank tests. These do not require the practitioner to specify a parametric model for volatility, nor to assume that the pattern of volatility is common to, or independent across, the vector of series under analysis. The bootstrap is shown to perform very well in practice.
    Keywords: co-integration; non-stationary volatility; trace and maximum eigenvalue tests; wild bootstrap
    JEL: C30 C32
    Date: 2008–09
  7. By: William Barnett (Department of Economics, The University of Kansas); Philippe de Peretti (Universite de la Sorbonne)
    Abstract: In aggregation theory, the admissibility condition for clustering together components to be aggregated is blockwise weak separability, which also is the condition needed to separate out sectors of the economy. Although weak separability is thereby of central importance in aggregation and index number theory and in econometrics, prior attempts to produce statistical tests of weak separability have performed poorly in Monte Carlo studies. This paper deals with semi-nonparametric tests for weak separability. It introduces a necessary and sufficient test, and a fully stochastic procedure that takes measurement error into account. Simulations show that the test performs well, even for large measurement errors.
    Keywords: weak separability, quantity aggregation, clustering, sectors, index number theory, semi-nonparametrics
    JEL: C12 C14 C43 D12
    Date: 2009–01
  8. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Byoung Cheol Jung; Seuck Heun Song
    Abstract: This paper considers a panel data regression model with heteroskedastic as well as serially correlated disturbances, and derives a joint LM test for homoskedasticity and no first order serial correlation. The restricted model is the standard random individual error component model. It also derives a conditional LM test for homoskedasticity given serial correlation, as well as a conditional LM test for no first order serial correlation given heteroskedasticity, all in the context of a random effects panel data model. Monte Carlo results show that these tests, along with their likelihood ratio alternatives, have good size and power under various forms of heteroskedasticity including exponential and quadratic functional forms.
    Keywords: Panel data; heteroskedasticity; serial correlation; Lagrange Multiplier tests; likelihood ratio; random effects
    JEL: C23
    Date: 2008–12
  9. By: Jeffrey A. Mills
    Abstract: Bayesian hypothesis testing of a precise null hypothesis suffers from a paradox discovered by Jeffreys (1939), Lindley (1957) and Bartlett (1957). This paradox appears to indicate that the usual priors, both proper and improper, are inappropriate for testing precise null hypotheses, and leads to difficulties in specifying prior distributions that could be widely accepted as appropriate in this situation. This paper considers an alternative hypothesis testing procedure and derives the Bayes factor for this procedure, which turns out to be B = p(θ0 | x) / supθ[p(θ | x)], the ratio of the posterior density function evaluated at the value specified by the null hypothesis, θ0, and evaluated at its supremum. This leads to a Bayesian hypothesis testing procedure in which the Jeffreys-Lindley-Bartlett paradox does not occur. Further, under the proposed procedure, the prior does not depend on the hypotheses to be tested, there is no need to place non-zero mass on a particular point in a continuous distribution, and the same hypothesis testing procedure applies for all continuous and discrete distributions. Moreover, the resulting test procedure is robust to reasonable variations in the prior, uniformly most powerful and easy to interpret correctly in practice. Several examples are given to illustrate the use and performance of the test. A justification for the proposed procedure is given based on the argument that scientific inference always at least implicitly involves and requires precise alternative working hypotheses.
    Date: 2009
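Editor's note: the Bayes factor in this abstract has a closed form for a Gaussian posterior, since the supremum of the posterior density is attained at the posterior mean. A minimal worked example (the Gaussian-posterior setting is an assumption for illustration):

```python
import math

def posterior_ratio_bayes_factor(theta0, post_mean, post_sd):
    """B = p(theta0 | x) / sup_theta p(theta | x) for a Gaussian posterior.
    The supremum is attained at the posterior mean, so the normalising
    constants cancel and B = exp(-0.5 * ((theta0 - post_mean)/post_sd)**2)."""
    z = (theta0 - post_mean) / post_sd
    return math.exp(-0.5 * z * z)

# data strongly against theta0 = 0: posterior centred at 0.5 with sd 0.1
b = posterior_ratio_bayes_factor(0.0, 0.5, 0.1)   # exp(-12.5), tiny
```

Note that no point mass at θ0 is required, and B does not degenerate as the prior is made more diffuse, which is the sense in which the Jeffreys-Lindley-Bartlett paradox is avoided.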
  10. By: Kristof DE WITTE; Mika KORTELAINEN
    Abstract: This paper proposes a fully nonparametric framework to estimate relative efficiency of entities while accounting for a mixed set of continuous and discrete (both ordered and unordered) exogenous variables. Using robust partial frontier techniques, the probabilistic and conditional characterization of the production process, as well as insights from the recent developments in nonparametric econometrics, we present a generalized approach for conditional efficiency measurement. To do so, we utilize a tailored mixed kernel function with a data-driven bandwidth selection. So far, only descriptive analysis has been suggested for studying the effect of heterogeneity in conditional efficiency estimation. We show how to use and interpret nonparametric bootstrap-based significance tests in a generalized conditional efficiency framework. This allows us to study statistical significance of continuous and discrete environmental variables. The proposed approach is illustrated by a sample of British pupils from the OECD PISA data set. The results show that several exogenous discrete factors have a significant effect on the educational process.
    Date: 2008–12
  11. By: Marc Hallin; Ramon van den Akker; Bas Werker
    Abstract: We propose a class of simple rank-based tests for the null hypothesis of a unit root. This class is indexed by the choice of a reference density g, which need not coincide with the unknown actual innovation density f. The validity of these tests, in terms of exact finite sample size, is guaranteed by distribution-freeness, irrespective of the value of the drift and the actual underlying f. When based on a Gaussian reference density g, our tests (of the van der Waerden form) perform uniformly better, in terms of asymptotic relative efficiency, than the Dickey-Fuller test -- except under Gaussian f, where they perform equally well. Under a Student t3 density f, the efficiency gain is as high as 110%, meaning that Dickey-Fuller requires over twice as many observations as we do in order to achieve comparable performance. This gain is even larger when the underlying f has fatter tails; under Cauchy f, where Dickey-Fuller is no longer valid, it can be considered infinite. The test associated with reference density g is semiparametrically efficient when f happens to coincide with g, in the ubiquitous case that the model contains a non-zero drift. Finally, with an estimated density f(n) substituted for the reference density g, our tests achieve uniform (with respect to f) semiparametric efficiency.
    Keywords: Dickey-Fuller test, Local Asymptotic Normality
    JEL: C12 C22
    Date: 2009
  12. By: James J. Heckman; Sergio Urzua; Edward Vytlacil
    Abstract: This paper develops the method of local instrumental variables for models with multiple, unordered treatments when treatment choice is determined by a nonparametric version of the multinomial choice model. Responses to interventions are permitted to be heterogeneous in a general way and agents are allowed to select a treatment (e.g. participate in a program) with at least partial knowledge of the idiosyncratic response to the treatments. We define treatment effects in a general model with multiple treatments as differences in counterfactual outcomes that would have been observed if the agent faced different choice sets. We show how versions of local instrumental variables can identify the corresponding treatment parameters. Direct application of local instrumental variables identifies the marginal treatment effect of one option versus the next best alternative without requiring knowledge of any structural parameters from the choice equation or any large support assumptions. Using local instrumental variables to identify other treatment parameters requires either large support assumptions or knowledge of the latent index function of the multinomial choice model.
    Date: 2008–12–15
  13. By: Jeffrey A. Mills
    Abstract: Mills (2008) examines an alternative procedure for testing precise hypotheses based on specifying a set of precise alternative hypotheses. Mills shows that this method resolves several problems with the standard procedure, particularly the Jeffreys-Lindley-Bartlett paradox, and has desirable properties. This paper applies this new testing procedure to the unit root hypothesis for an AR(1) model. A Monte Carlo simulation experiment is conducted to study the performance of the test in terms of robustness to the specification of the prior distribution. The resulting new test is compared with the best alternatives, namely the tests of Conigliani and Spezzaferri (2007) and Elliott, Rothenberg and Stock (1996).
    Date: 2009
  14. By: J. P. Florens (IDEI, Toulouse); J. J. Heckman (University of Chicago and University College Dublin); C. Meghir (IFS and UCL); E. Vytlacil (Yale University)
    Abstract: We use the control function approach to identify the average treatment effect and the effect of treatment on the treated in models with a continuous endogenous regressor whose impact is heterogeneous. We assume a stochastic polynomial restriction on the form of the heterogeneity but, unlike alternative nonparametric control function approaches, our approach does not require large support assumptions.
    Date: 2008–12–15
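Editor's note: a minimal two-step control-function sketch for a continuous endogenous treatment, under a simple linear first stage and a first-order interaction capturing heterogeneity. This is an illustration of the general approach, not the paper's estimator; the data-generating process and function names are hypothetical.

```python
import numpy as np

def control_function_ate(y, t, z):
    """Two-step control-function sketch:
    (1) first stage t = pi0 + pi1*z + v, keep the residual v_hat;
    (2) regress y on [1, t, v_hat, t*v_hat].
    The coefficient on t is the average treatment effect under the assumed
    (low-order polynomial) heterogeneity restriction."""
    Z1 = np.column_stack([np.ones_like(z), z])
    v_hat = t - Z1 @ np.linalg.lstsq(Z1, t, rcond=None)[0]
    X = np.column_stack([np.ones_like(t), t, v_hat, t * v_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# toy DGP: treatment endogenous through v, true ATE = 2
rng = np.random.default_rng(3)
n = 20000
z = rng.normal(size=n)
v = rng.normal(size=n)
t = z + v
y = 2.0 * t + 0.5 * v + rng.normal(size=n)

ate = control_function_ate(y, t, z)
# naive OLS of y on t is biased upward because cov(t, 0.5*v) > 0
T1 = np.column_stack([np.ones_like(t), t])
naive = np.linalg.lstsq(T1, y, rcond=None)[0][1]
```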
  15. By: Markku Lanne; Helmut Luetkepohl
    Abstract: The role of expectations for economic fluctuations has received considerable attention in recent business cycle analysis. We exploit Markov regime switching models to identify shocks in cointegrated structural vector autoregressions and investigate different identification schemes for bivariate systems comprising U.S. stock prices and total factor productivity. The former variable is viewed as reflecting expectations of economic agents about future productivity. It is found that some previously used identification schemes can be rejected in our model setup. The results crucially depend on the measure used for total factor productivity.
    Keywords: Cointegration, Markov regime switching model, vector error correction model, structural vector autoregression, mixed normal distribution
    JEL: C32
    Date: 2008
  16. By: Alberto Maydeu (Instituto de Empresa)
    Abstract: We show how to test hypotheses for coefficient alpha in three different situations: (a) tests of whether coefficient alpha equals a prespecified value; (b) tests involving two statistically independent sample alphas, as may arise when testing the equality of coefficient alpha across groups; and (c) tests involving two statistically dependent sample alphas, as may arise when testing the equality of alpha across time, or when testing the equality of alpha for two test scores within the same sample.
    Keywords: Coefficient alpha, Hypothesis testing, Structural equation modeling
    Date: 2008–01
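Editor's note: the quantity under test is coefficient (Cronbach's) alpha, which has a simple sample formula. A minimal computation on simulated item scores (the one-factor data-generating process is purely illustrative; the paper's hypothesis tests themselves are not reproduced here):

```python
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha for an n x k matrix of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    X = np.asarray(X, float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# five parallel items = common factor + unit noise; population alpha ~ 0.83
rng = np.random.default_rng(4)
f = rng.normal(size=(1000, 1))
X = f + rng.normal(size=(1000, 5))
alpha = cronbach_alpha(X)
```

Testing alpha against a prespecified value, or comparing alphas across groups or occasions, then amounts to building a standard error for this statistic, which is what the paper develops via structural equation modeling.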
  17. By: James J. Heckman (University of Chicago, Chicago, Illinois 60637, USA; American Bar Foundation, Chicago, Illinois; Geary Institute, University College Dublin, Ireland)
    Date: 2008–12–15
  18. By: Ignacio Rodríguez Carreño (Universidad de Navarra, Depto. Métodos Cuantitativos, Pamplona); L. Gila Useros, A. Malanda Trigueros, J. Navallas Irujo, J. Rodríguez Falces
    Abstract: A new automatic method based on the wavelet and Hilbert transforms for measuring the motor unit action potential (MUAP) duration is presented in this work. A total of 182 MUAPs from two different muscles were analysed. The average MUAP waveform was wavelet-transformed, and a particular scale of the wavelet transform was selected to avoid baseline fluctuation and high frequency noise. Then, the Hilbert transform was applied to this wavelet scale to obtain its envelope. Amplitude and slope criteria in this envelope were used to obtain the MUAP start and end points. The results of the new method were compared to the gold standard of duration marker positions obtained by manual measurement. This new method was also compared with two other automatic duration methods: a recently published method developed by the authors and a conventional automatic duration algorithm. The differences between the new algorithm’s marker positions and the gold standard of duration marker positions were in some cases smaller than those observed with the recently published and the conventional methods. Our new method for automatic measurement of MUAP duration is more accurate than other available conventional algorithms and performs better than the recent method in some cases.
    Date: 2008–12–19
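Editor's note: a simplified sketch of the envelope-thresholding step. Here the Hilbert transform is implemented directly via the FFT (the standard analytic-signal construction), and the envelope of the raw waveform is thresholded; the paper instead thresholds the envelope of a selected wavelet scale, which is not reproduced here.

```python
import numpy as np

def analytic_envelope(x):
    """Envelope via the analytic signal (Hilbert transform through the FFT)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.abs(np.fft.ifft(X * h))

def duration_markers(x, frac=0.1):
    """Start/end = first/last sample whose envelope exceeds frac * peak.
    (A simplified stand-in for the paper's amplitude and slope criteria.)"""
    env = analytic_envelope(x)
    above = np.flatnonzero(env >= frac * env.max())
    return above[0], above[-1]

# synthetic "MUAP": a Gaussian-windowed burst centred at sample 500
t = np.arange(1000)
x = np.exp(-((t - 500) / 40.0) ** 2) * np.sin(2 * np.pi * t / 25.0)
start, end = duration_markers(x)
```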
  19. By: Michiels F.; Koch I.; De Schepper A.
    Abstract: We introduce and discuss a new parametric copula builder named the “Lambda construction method”. The methodology is explained and illustrated using three types of Lambda functions. We show that the Lambda method has strong visual advantages for recognizing key dependence characteristics and importing them into the copula model. Furthermore, the Lambda method facilitates the representation of a copula family as a collection of comparable test spaces as defined in Michiels and De Schepper (2008). As such, the modeling capacity of these families can be discussed in a clear way.
    Date: 2008–12
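Editor's note: whether the paper's "Lambda construction" coincides with the standard Archimedean lambda function is an assumption on my part; the code below evaluates the classical Genest-Rivest lambda, λ(t) = φ(t)/φ'(t), for the Clayton generator, purely to illustrate how a lambda function encodes dependence strength.

```python
import numpy as np

def clayton_lambda(t, theta):
    """Genest-Rivest lambda function phi(t)/phi'(t) for the Clayton generator
    phi(t) = (t**-theta - 1)/theta, which simplifies to
    lambda(t) = (t**(theta + 1) - t)/theta.
    As theta -> 0 (independence) this tends to t*log(t); stronger dependence
    pulls lambda closer to zero."""
    t = np.asarray(t, float)
    return (t ** (theta + 1) - t) / theta

t = np.linspace(0.01, 0.99, 99)
lam_weak = clayton_lambda(t, 0.01)    # near independence: close to t*log(t)
lam_strong = clayton_lambda(t, 5.0)   # strong dependence: closer to 0
```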
  20. By: Goos P.; Vermeulen B.; Vandebroek M.
    Abstract: Despite the fact that many conjoint choice experiments offer respondents a no-choice option in every choice set, the optimal design of conjoint choice experiments involving no-choice options has received only a limited amount of attention in the literature. In this article, we present an approach to construct D-optimal designs for this type of experiment. For that purpose, we derive the information matrix of a nested multinomial logit model that is appropriate for analyzing data from choice experiments with no-choice options. The newly derived information matrix is compared to the information matrix for the multinomial logit model that is used in the literature to construct designs for choice experiments. It is also used to quantify the loss of information in a choice experiment due to the presence of a no-choice option.
    Date: 2008–12
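Editor's note: a minimal sketch of the information-matrix computation that underlies D-optimal choice design, for the ordinary multinomial logit rather than the paper's nested logit with a no-choice option (the nested extension is the paper's contribution and is not reproduced). The design and parameter values are hypothetical.

```python
import numpy as np

def mnl_information(design, beta):
    """Fisher information of the multinomial logit model for a choice design.
    design: list of (J x k) attribute matrices, one per choice set.
    M = sum_s X_s' (diag(p_s) - p_s p_s') X_s, with p_s the MNL choice
    probabilities in set s. The D-criterion is det(M)**(1/k)."""
    k = len(beta)
    M = np.zeros((k, k))
    for X in design:
        u = X @ beta
        p = np.exp(u - u.max())          # stabilised softmax
        p /= p.sum()
        M += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return M

# toy design: two choice sets, three alternatives, two attributes
design = [np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]),
          np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 0.0]])]
M = mnl_information(design, beta=np.array([0.5, -0.5]))
d_crit = np.linalg.det(M) ** (1 / 2)
```

A D-optimal design search would then compare `d_crit` across candidate designs and keep the largest.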

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.