nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒02‒20
twenty-one papers chosen by
Sune Karlsson
Örebro University

  1. Interval Estimation of Potentially Misspecified Quantile Models in the Presence of Missing Data By Patrick Kline; Andres Santos
  2. Bootstrap Sequential Determination of the Co-integration Rank in VAR Models By Giuseppe Cavaliere; Anders Rahbek; A.M. Robert Taylor
  3. Chain-Ladder as Maximum Likelihood Revisited By Kuang, D; Nielsen, Bent; Nielsen, J. P.
  4. A Random Matrix Approach to VARMA Processes By Zdzisław Burda; Andrzej Jarosz; Maciej A. Nowak; Małgorzata Snarska
  5. Discussion of The Forward Search: Theory and Data Analysis by Anthony C. Atkinson, Marco Riani, and Andrea Cerioli By Søren Johansen; Bent Nielsen
  6. Fuzzy clustering of univariate and multivariate time series by genetic multiobjective optimization By S. Bandyopadhyay; R. Baragona; U. Maulik
  7. Non-linear DSGE Models and The Optimized Particle Filter By Martin M. Andreasen
  8. Cointegration Analysis with State Space Models By Wagner, Martin
  9. Overview about bias in Customer Satisfaction Surveys and focus on self-selection error By Giovanna Nicolini; Luciana Dalla Valle
  10. Understanding copula transforms: a review of dependence properties By Michiels F.; De Schepper A.
  11. Is the Spurious Regression Problem Spurious? By Bennett T. McCallum
  12. Short-Run Parameter Changes in a Cointegrated Vector Autoregressive Model By Kurita, Takamitsu; Nielsen, Bent
  13. Testing for Group-Wise Convergence with an Application to Euro Area Inflation By Lopez, Claude; Papell, David
  14. Methodological advances in the assessment of equilibrium exchange rates By Matthieu Bussière; Michele Ca’ Zorzi; Alexander Chudík; Alistair Dieppe
  15. Fractional regression models for second stage DEA efficiency analyses By Esmeralda A. Ramalho; Joaquim J.S. Ramalho; Pedro D. Henriques
  16. Multivariate Matching Methods That are Monotonic Imbalance Bounding By Stefano Iacus; Gary King; Giuseppe Porro
  17. On the Distortion of a Copula and its Margins By Valdez, Emiliano A.
  18. So you want to run an experiment, now what? Some Simple Rules of Thumb for Optimal Experimental Design By John A. List; Sally Sadoff; Mathis Wagner
  19. Design and analysis of industrial strip-plot experiments By Arnouts H.; Goos P.
  20. A Theory-Based Approach to Hedonic Price Regressions with Time-Varying Unobserved Product Attributes: The Price of Pollution By Patrick Bajari; Jane Cooley; Kyoo il Kim; Christopher Timmins
  21. Estimating the Technology of Cognitive and Noncognitive Skill Formation By Flavio Cunha; James Heckman; Susanne Schennach

  1. By: Patrick Kline; Andres Santos
    Abstract: This paper develops practical methods for relaxing the missing at random assumption when estimating models of conditional quantiles with missing outcome data and discrete covariates. We restrict the degree of non-ignorable selection governing the missingness process by imposing bounds on the Kolmogorov-Smirnov (KS) distance between the distribution of outcomes among missing observations and the overall (unselected) distribution. Two methods are developed for conducting inference in this environment. The first allows us to perform finite sample inference on the identified set and is well suited to tests of model specification. The second enables us to conduct inference on the parameters of potentially misspecified models. To illustrate our techniques, we revisit the results of Angrist, Chernozhukov, and Fernandez-Val (2006) regarding changes across Decennial Censuses in the quantile specific returns to schooling.
    JEL: C01 C1 J3
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:15716&r=ecm
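    Illustration: the key ingredient of the bound is the two-sample Kolmogorov-Smirnov distance between the outcome distribution among missing observations and the overall distribution. A minimal Python sketch on simulated data (this is not the authors' inference procedure, and every number below is invented):
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      y_obs = rng.normal(0.0, 1.0, size=5000)   # outcomes among respondents
      y_mis = rng.normal(0.3, 1.0, size=1000)   # outcomes among "missing" units

      # Sup-norm distance between the two empirical CDFs
      ks, _ = stats.ks_2samp(y_obs, y_mis)
      print(f"KS distance: {ks:.3f}")

      # Bounding this distance by k turns a point-identified quantile into an
      # interval: shift the empirical CDF up and down by k (for illustration
      # only; the paper weights the shift by the missingness share).
      k, q = 0.05, 0.5
      ys = np.sort(y_obs)
      n = ys.size
      lo = ys[max(int(np.floor((q - k) * n)), 0)]
      hi = ys[min(int(np.ceil((q + k) * n)), n - 1)]
      print(f"median bounded within roughly [{lo:.2f}, {hi:.2f}]")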
  2. By: Giuseppe Cavaliere (Department of Statistical Sciences, University of Bologna); Anders Rahbek (Department of Economics, University of Copenhagen and CREATES); A.M. Robert Taylor (School of Economics and Granger Centre for Time Series Econometrics, University of Nottingham)
    Abstract: Determining the co-integrating rank of a system of variables has become a fundamental aspect of applied research in macroeconomics and finance. It is well known that the standard asymptotic likelihood ratio tests for co-integration rank of Johansen (1996) can be unreliable in small samples, with empirical rejection frequencies often far in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense that the probability of selecting a rank smaller than (equal to) the true co-integrating rank converges to zero (one minus the marginal significance level) as the sample size diverges, for general I(1) processes. No such likelihood-based procedure is currently known to be available. In this paper we fill this gap in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent co-integration rank estimation for general I(1) processes. Finite-sample Monte Carlo simulations show that the proposed procedure performs well in practice.
    Keywords: Co-integration, trace test, sequential rank determination, i.i.d. bootstrap, wild bootstrap
    JEL: C30 C32
    Date: 2010–02–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2010-07&r=ecm
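    Illustration: a stripped-down sketch of the sequential bootstrap logic in Python, using statsmodels' Johansen routine with no deterministic terms and no lagged differences (the paper's setting is far more general; this is not the authors' algorithm):
      import numpy as np
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      def trace_stat(y, r):
          # lr1 holds the trace statistics for H0: rank <= 0, 1, ..., k-1
          return coint_johansen(y, det_order=-1, k_ar_diff=0).lr1[r]

      def restricted_pi(y, r):
          # Estimate Pi = alpha beta' under H0: rank(Pi) = r
          k = y.shape[1]
          if r == 0:
              return np.zeros((k, k))
          beta = coint_johansen(y, det_order=-1, k_ar_diff=0).evec[:, :r]
          dy, z = np.diff(y, axis=0), y[:-1] @ beta
          alpha = np.linalg.lstsq(z, dy, rcond=None)[0].T  # OLS of dy on beta'y
          return alpha @ beta.T

      def boot_pvalue(y, r, n_boot, rng):
          stat, pi = trace_stat(y, r), restricted_pi(y, r)
          resid = np.diff(y, axis=0) - y[:-1] @ pi.T
          resid -= resid.mean(axis=0)
          exceed = 0
          for _ in range(n_boot):
              e = resid[rng.integers(0, len(resid), len(resid))]  # i.i.d. resampling
              yb = np.empty_like(y)
              yb[0] = y[0]
              for t in range(1, len(y)):           # rebuild the sample under H0
                  yb[t] = yb[t - 1] + pi @ yb[t - 1] + e[t - 1]
              exceed += trace_stat(yb, r) > stat
          return exceed / n_boot

      rng = np.random.default_rng(0)
      T, k = 200, 3
      y = np.cumsum(rng.normal(size=(T, k)), axis=0)
      y[:, 2] = y[:, 0] + rng.normal(size=T)       # true co-integrating rank: 1
      for r in range(k):                           # sequential procedure
          p = boot_pvalue(y, r, n_boot=199, rng=rng)
          print(f"H0: rank <= {r}: bootstrap p = {p:.3f}")
          if p > 0.05:                             # first non-rejection: stop
              break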
  3. By: Kuang, D; Nielsen, Bent; Nielsen, J. P.
    Abstract: It has long been known that maximum likelihood estimation in a Poisson model reproduces the chain-ladder technique. We revisit this model. A new canonical parametrisation is proposed to circumvent the inherent identification problem in the parametrisation. The maximum likelihood estimators for the canonical parameter are simple, interpretable and easy to derive. The boundary problem, where all observations in a particular development year or a particular underwriting year are zero, is also analysed.
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ner:oxford:http://economics.ouls.ox.ac.uk/14469/&r=ecm
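    Illustration: the paper's starting point, that a Poisson GLM with log link and additive underwriting-year and development-year factors reproduces the chain-ladder technique, is easy to verify numerically on a toy run-off triangle in Python (the canonical parametrisation itself is not implemented here):
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Toy incremental run-off triangle: rows are underwriting years, columns
      # are development years; NaN marks the unobserved lower triangle.
      tri = np.array([[120., 60., 30., 10.],
                      [130., 70., 35., np.nan],
                      [140., 75., np.nan, np.nan],
                      [150., np.nan, np.nan, np.nan]])
      i, j = np.indices(tri.shape)
      df = pd.DataFrame({"claims": tri.ravel(), "uw": i.ravel(), "dev": j.ravel()})

      # log mu_ij = alpha_i + beta_j: the over-parametrised form whose
      # identification problem the canonical parametrisation removes
      fit = smf.glm("claims ~ C(uw) + C(dev)", data=df.dropna(),
                    family=sm.families.Poisson()).fit()

      # Fitted and forecast values for the full square, chain-ladder style
      df["fitted"] = fit.predict(df[["uw", "dev"]])
      print(df.pivot(index="uw", columns="dev", values="fitted").round(1))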
  4. By: Zdzisław Burda; Andrzej Jarosz; Maciej A. Nowak; Małgorzata Snarska
    Abstract: We apply random matrix theory to derive the spectral density of large sample covariance matrices generated by multivariate VMA(q), VAR(q) and VARMA(q1,q2) processes. In particular, we consider the limit where the number of random variables N and the number of consecutive time measurements T are large but the ratio N/T is fixed. In this regime the underlying random matrices are asymptotically equivalent to Free Random Variables (FRV). We apply the FRV calculus to calculate the eigenvalue density of the sample covariance for several VARMA-type processes. We explicitly solve the VARMA(1,1) case and demonstrate perfect agreement between the analytical result and the spectra obtained by Monte Carlo simulations. The proposed method is purely algebraic and can be easily generalized to q1>1 and q2>1.
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1002.0934&r=ecm
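    Illustration: the Monte Carlo half of the exercise can be reproduced in Python for a diagonal VARMA(1,1) with scalar coefficients chosen arbitrarily; the analytic FRV density is not computed here:
      import numpy as np

      rng = np.random.default_rng(1)
      N, T = 200, 800            # N/T = 0.25, held fixed in the FRV limit
      a, b = 0.3, 0.2            # AR and MA coefficients

      x = np.zeros((N, T))
      e_prev = rng.normal(size=N)
      for t in range(T):
          e = rng.normal(size=N)
          x[:, t] = (a * x[:, t - 1] if t > 0 else 0.0) + e + b * e_prev
          e_prev = e

      c = x @ x.T / T            # sample covariance matrix
      eig = np.linalg.eigvalsh(c)
      hist, edges = np.histogram(eig, bins=30, density=True)
      for h, edge in zip(hist, edges):   # crude text histogram of the spectrum
          print(f"{edge:6.2f} " + "#" * int(50 * h))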
  5. By: Søren Johansen (Department of Economics, University of Copenhagen and CREATES, University of Aarhus); Bent Nielsen (Department of Economics, University of Oxford)
    Abstract: The Forward Search Algorithm is a statistical algorithm for obtaining robust estimators of regression coefficients in the presence of outliers. The algorithm selects a succession of subsets of observations from which the parameters are estimated. The present note shows how the theory of empirical processes can contribute to the understanding of how the subsets are chosen and how the sequence of estimators changes.
    Keywords: Empirical processes, Huber's skip, least trimmed squares estimator, one-step estimator, outlier robustness
    JEL: C2
    Date: 2010–02–06
    URL: http://d.repec.org/n?u=RePEc:aah:create:2010-06&r=ecm
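    Illustration: a bare-bones forward search for linear regression in Python, meant only to show how the subsets grow and how the estimator path can be monitored (published implementations initialise the subset with least median of squares rather than the crude rule used here):
      import numpy as np

      def forward_search(X, y, m0):
          n = len(y)
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          subset = np.argsort((y - X @ beta) ** 2)[:m0]  # crude initial subset
          path = []
          for m in range(m0, n):
              beta = np.linalg.lstsq(X[subset], y[subset], rcond=None)[0]
              path.append(beta.copy())
              r2 = (y - X @ beta) ** 2          # residuals for ALL observations
              subset = np.argsort(r2)[: m + 1]  # grow the subset by one unit
          return np.array(path)

      rng = np.random.default_rng(2)
      n = 100
      X = np.column_stack([np.ones(n), rng.normal(size=n)])
      y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
      y[:5] += 10.0                             # five outliers
      path = forward_search(X, y, m0=20)
      print("slope estimates as the subset grows (outliers enter last):")
      print(np.round(path[::10, 1], 3))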
  6. By: S. Bandyopadhyay; R. Baragona; U. Maulik
    Abstract: Given a set of time series, it is of interest to discover subsets that share similar properties. For instance, this may be useful for identifying and estimating a single model that conveniently fits several time series, instead of performing the usual identification and estimation steps for each one. Moreover, time series in the same cluster are related with respect to the measures adopted for cluster analysis and are suitable for building multivariate time series models. Though many approaches to clustering time series exist, the most effective methods seem to rely on choosing features relevant to the problem at hand and searching for clusters according to their measurements, for instance the autoregressive coefficients, spectral measures or the eigenvectors of the covariance matrix. Some new indexes based on goodness-of-fit criteria are proposed in this paper for fuzzy clustering of multivariate time series. A general-purpose fuzzy clustering algorithm may be used to estimate the proper cluster structure according to internal criteria of cluster validity. Such indexes measure definite, often conflicting, cluster properties: compactness or connectedness, for instance, or distribution, orientation, size and shape. It is argued that multiobjective optimization supported by genetic algorithms is a most effective choice in such a difficult context. In this paper we use the Xie-Beni index and the C-means functional as objective functions to evaluate cluster validity in a multiobjective optimization framework. The concept of Pareto optimality in multiobjective genetic algorithms is used to evolve a set of potential solutions towards a set of optimal non-dominated solutions. Genetic algorithms are well suited to difficult optimization problems whose objective functions lack good mathematical properties such as continuity, differentiability or convexity. In addition, genetic algorithms, as population-based methods, may yield a complete Pareto front at each step of the iterative evolutionary procedure. The method is illustrated on a set of real data and an artificial multivariate time series data set.
    Keywords: Fuzzy clustering, Internal criteria of cluster validity, Genetic algorithms, Multiobjective optimization, Time series, Pareto optimality
    Date: 2010–02–08
    URL: http://d.repec.org/n?u=RePEc:com:wpaper:028&r=ecm
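    Illustration: one of the paper's two objective functions, the Xie-Beni cluster-validity index (compactness over separation), is simple to compute for a given fuzzy partition. A Python sketch (in the paper the memberships and centres evolve inside a multiobjective genetic algorithm, which is not reproduced here):
      import numpy as np

      def xie_beni(x, u, v, m=2.0):
          """x: (n, p) data; u: (c, n) fuzzy memberships; v: (c, p) centres."""
          d2 = ((x[None, :, :] - v[:, None, :]) ** 2).sum(axis=-1)   # (c, n)
          compactness = (u ** m * d2).sum()
          sep = min(((v[a] - v[b]) ** 2).sum()
                    for a in range(len(v)) for b in range(len(v)) if a != b)
          return compactness / (len(x) * sep)   # smaller is better

      rng = np.random.default_rng(3)
      x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
      v = np.array([[0.0, 0.0], [5.0, 5.0]])
      d = ((x[None] - v[:, None]) ** 2).sum(-1) + 1e-12
      u = (1 / d) / (1 / d).sum(axis=0)         # fuzzy c-means memberships, m=2
      print(f"Xie-Beni index: {xie_beni(x, u, v):.3f}")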
  7. By: Martin M. Andreasen (Bank of England and CREATES)
    Abstract: This paper improves the accuracy and speed of particle filtering for non-linear DSGE models with potentially non-normal shocks. This is done by introducing a new proposal distribution which i) incorporates information from new observables and ii) has a small optimization step that minimizes the distance to the optimal proposal distribution. A particle filter with this proposal distribution is shown to deliver a high level of accuracy even with relatively few particles, and this filter is therefore much more efficient than the standard particle filter.
    Keywords: Likelihood inference, Non-linear DSGE models, Non-normal shocks, Particle filtering
    JEL: C13 C15 E10 E32
    Date: 2010–01–27
    URL: http://d.repec.org/n?u=RePEc:aah:create:2010-05&r=ecm
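    Illustration: the benchmark the paper improves on is the plain bootstrap particle filter, whose proposal ignores the new observable. A Python sketch on the standard scalar toy model (the optimized proposal and the DSGE application are not reproduced):
      import numpy as np

      rng = np.random.default_rng(4)
      T, n_part, sig_x, sig_y = 100, 500, 1.0, 1.0

      # Simulate x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1+x_{t-1}^2) + v_t,
      #          y_t = x_t^2/20 + w_t
      x, y = np.zeros(T), np.zeros(T)
      for t in range(1, T):
          x[t] = 0.5*x[t-1] + 25*x[t-1]/(1 + x[t-1]**2) + sig_x*rng.normal()
          y[t] = x[t]**2/20 + sig_y*rng.normal()

      particles, loglik = rng.normal(size=n_part), 0.0
      for t in range(1, T):
          # propagate with the transition density: the "blind" proposal that
          # the paper's optimized proposal replaces
          particles = (0.5*particles + 25*particles/(1 + particles**2)
                       + sig_x*rng.normal(size=n_part))
          # weight by the measurement density, then resample
          w = np.exp(-0.5*((y[t] - particles**2/20)/sig_y)**2) + 1e-300
          loglik += np.log(w.mean())
          particles = particles[rng.choice(n_part, n_part, p=w/w.sum())]
      print(f"particle-filter log-likelihood estimate: {loglik:.1f}")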
  8. By: Wagner, Martin (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria and Frisch Centre for Economic Research, Oslo, Norway)
    Abstract: This paper presents and exemplifies results developed for cointegration analysis with state space models by Bauer and Wagner in a series of papers. Unit root processes, cointegration and polynomial cointegration are defined. Based upon these definitions, the major part of the paper discusses how state space models, which are equivalent to VARMA models, can be fruitfully employed for cointegration analysis. By detailing the cases most relevant for empirical applications, namely the I(1), MFI(1) and I(2) cases, a canonical representation is developed, and some available statistical results are then briefly mentioned.
    Keywords: State space models, unit roots, cointegration, polynomial cointegration, pseudo maximum likelihood estimation, subspace algorithms
    JEL: C13 C32
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:248&r=ecm
  9. By: Giovanna Nicolini (Department of Economics, Business and Statistics); Luciana Dalla Valle (University of Milan)
    Abstract: The present paper provides an overview of the main types of surveys carried out for customer satisfaction analyses. In order to carry out these surveys it is possible to plan a census or select a sample. The higher the accuracy of the survey, the more reliable the results of the analysis. For this very reason, researchers pay special attention to surveys biased by non-sampling errors, in particular by self-selection errors. These phenomena are very frequent, especially in web surveys. Some of the methods we consider are able to correct the self-selection bias. In the literature these methods have been suggested and applied in other fields as well. Here we adapt and employ these techniques for customer-satisfaction survey data.
    Keywords: Self-selection errors, Propensity score matching, Two-step Heckman procedure, Hierarchical bayesian approach, Nonlinear Principal Component Analysis
    Date: 2009–11–16
    URL: http://d.repec.org/n?u=RePEc:bep:unimip:1094&r=ecm
  10. By: Michiels F.; De Schepper A.
    Abstract: A copula is a flexible modeling tool which contributes substantially to the study of dependencies among random variables. A broad copula class with many nice properties is the Archimedean copula class. Usually, one works with the classical bivariate models, e.g. as summarized in Nelsen (2006), which are one-parameter models. However, in many cases when practitioners want to model dependencies by means of copulas, it would be more rational to work with multi-parameter models. Indeed, multi-parameter models make it possible to better harmonize empirical information with the model, as more than one characteristic can be imported directly into the model, e.g. measures of concordance, tail dependence and so on. Various ways to define multi-parameter Archimedean models exist and have been explored. This paper elaborates on one particular method, namely the technique of transforms. More specifically, the contribution of this article is threefold: 1. Genest et al. (1998) sum up five feasible transformations applicable to the Archimedean generator φ. In this note we present an overview of these transformations by generalizing tail dependence properties and limiting cases. 2. In an earlier paper, see Michiels et al. (2008), we showed that it can be advantageous to work with the λ-function instead of with the generator function. We investigate here the effect of transforms on this λ-function. 3. We introduce a new type of transform which is concordance invariant. As such, this type of transform has practical use, as it allows one to create comparable test spaces (see Michiels and De Schepper (2008)) from a particular copula family. The paper is organised as follows. In section two the most important copula properties are discussed, with the focus on the Archimedean class. Next, section three reviews generally known copula transforms and generator transforms. In section four the effect of the transforms at the λ-level is discussed, which allows the derivation of general tail dependence properties and limiting cases. We also introduce a concordance invariant transform and illustrate its use through simulations. Finally, section five concludes.
    Date: 2009–12
    URL: http://d.repec.org/n?u=RePEc:ant:wpaper:2009012&r=ecm
  11. By: Bennett T. McCallum
    Abstract: So-called “spurious regression” relationships between random-walk (or strongly autoregressive) variables are generally accompanied by clear signs of severe autocorrelation in their residuals. A conscientious researcher would therefore not end an investigation with such a result, but would likely re-estimate with an autocorrelation correction. Simulations show, for several typical cases, that the test-rejection statistics for the re-estimated relationships are very close to the true values and therefore do not yield results of the spurious type.
    JEL: C22 C29
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:15690&r=ecm
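    Illustration: the experiment is easy to replicate in Python; a single draw is shown here, whereas the paper reports full simulation distributions:
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.stats.stattools import durbin_watson

      rng = np.random.default_rng(5)
      T = 200
      x = np.cumsum(rng.normal(size=T))   # two independent random walks
      y = np.cumsum(rng.normal(size=T))
      X = sm.add_constant(x)

      # Plain OLS: slope "significance" is spurious, and a Durbin-Watson
      # statistic near zero flags the severe residual autocorrelation.
      ols = sm.OLS(y, X).fit()
      print(f"OLS slope t = {ols.tvalues[1]:.2f}, "
            f"Durbin-Watson = {durbin_watson(ols.resid):.2f}")

      # Re-estimate with an AR(1) correction (iterated Cochrane-Orcutt)
      gls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=20)
      print(f"AR(1)-corrected slope t = {gls.tvalues[1]:.2f}")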
  12. By: Kurita, Takamitsu; Nielsen, Bent
    Abstract: A family of cointegrated vector autoregressive models with adjusted short-run dynamics is introduced. These models can describe evolving short-run dynamics in a more flexible way than standard vector autoregressions, and yet likelihood analysis is based on reduced rank regression using conventional asymptotic tables. The family of dynamics-adjusted vector autoregressions consists of three models: a model subject to short-run parameter changes, a model with partial short-run dynamics and a model with short-run explanatory variables. An empirical illustration using US gasoline prices is presented, together with some simulation experiments.
    JEL: C51 C52 C31
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:ner:oxford:http://economics.ouls.ox.ac.uk/14475/&r=ecm
  13. By: Lopez, Claude; Papell, David
    Abstract: We propose a new procedure to increase the power of panel unit root tests when used to study group-wise convergence. When testing for stationarity of the differential between a group of series and their cross-sectional means, although each differential has non-zero mean, the group of differentials has a cross-sectional average of zero for each time period by construction. We incorporate this constraint in estimation and in generating finite-sample critical values. Applying this new procedure to Euro Area inflation, we find strong evidence of convergence among the inflation rates soon after the implementation of the Maastricht treaty and a dramatic decrease in the persistence of the differentials after the introduction of the single currency.
    Keywords: group-wise convergence; inflation; euro
    JEL: C32 E31
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:20585&r=ecm
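    Illustration: the constrained critical-value step can be sketched in Python: simulate unit-root panels under the null, remove the cross-sectional mean each period so the differentials sum to zero by construction, and tabulate the statistic (the paper's test regressions are richer than the plain ADF used here):
      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      def avg_adf_t(panel):
          # average ADF t-statistic across the demeaned series
          return np.mean([adfuller(s, maxlag=1, regression="c", autolag=None)[0]
                          for s in panel])

      rng = np.random.default_rng(6)
      N, T, n_sim = 10, 100, 200
      stats = []
      for _ in range(n_sim):
          y = np.cumsum(rng.normal(size=(N, T)), axis=1)  # N independent random walks
          d = y - y.mean(axis=0)       # differentials: cross-sectional mean removed
          stats.append(avg_adf_t(d))
      print(f"simulated 5% critical value: {np.quantile(stats, 0.05):.2f}")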
  14. By: Matthieu Bussière (Banque de France, 31 rue Croix-des-Petits-Champs, 75001 Paris, France.); Michele Ca’ Zorzi (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Alexander Chudík (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.); Alistair Dieppe (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.)
    Abstract: This paper reviews three different concepts of equilibrium exchange rates that are widely used in policy analysis and constitute the backbone of the IMF CGER assessment: the Macroeconomic Balance, the External Sustainability and the reduced form approaches. We raise a number of econometric issues that were previously neglected, proposing some methodological advances to address them. The first issue relates to the presence of model uncertainty in deriving benchmarks for the current account, introducing Bayesian averaging techniques as a solution. The second issue is that, if one considers all sets of plausible identification schemes, the uncertainty surrounding export and import exchange rate elasticities is large even at longer horizons. The third issue concerns the uncertainty associated with the estimation of a reduced-form relationship for the real exchange rate, where we conclude that inference can be improved by panel estimation. The fourth and final issue addresses the presence of strong and weak cross-section dependence in panel estimation, suggesting which panel estimators one could use in this case. Overall, the analysis puts forward a number of innovative solutions for dealing with the large uncertainties surrounding equilibrium exchange rate estimates. JEL Classification: F31, F32, F41.
    Keywords: Equilibrium exchange rates, IMF CGER methodologies, current account, trade elasticities, global imbalances.
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20101151&r=ecm
  15. By: Esmeralda A. Ramalho, (Departamento de Economia, Universidade de Evora and CEFAGE-UE); Joaquim J.S. Ramalho (Departamento de Economia, Universidade de Evora and CEFAGE-UE); Pedro D. Henriques (Departamento de Economia, Universidade de Evora and CEFAGE-UE)
    Abstract: Data envelopment analysis (DEA) is commonly used to measure the relative efficiency of decision-making units. Often, in a second stage, a regression model is estimated to relate DEA efficiency scores to exogenous factors. In this paper, we argue that the traditional linear or tobit approaches to second-stage DEA analysis do not constitute a reasonable data-generating process for DEA scores. Under the assumption that DEA scores can be treated as descriptive measures of the relative performance of units in the sample, we show that fractional regression models are the most natural way of modeling bounded, proportional response variables such as DEA scores. We also propose generalizations of these models and, given that DEA scores frequently take the value of unity, examine the use of two-part models in this framework. Several tests suitable for assessing the specification of each alternative model are also discussed.
    Keywords: Second-stage DEA; Fractional data; Specification tests; One outcomes; Two-part models.
    JEL: C12 C13 C25 C51
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:cfe:wpcefa:2010_01&r=ecm
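    Illustration: the simplest member of the advocated model class, a fractional logit in the spirit of Papke and Wooldridge (1996), can be estimated as a quasi-likelihood GLM. A Python sketch with simulated scores (a real application would use DEA scores and the paper's specification tests):
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 300
      z = rng.normal(size=n)                    # an exogenous factor
      mu = 1 / (1 + np.exp(-(0.5 + 0.8 * z)))
      score = rng.beta(mu * 5, (1 - mu) * 5)    # fractional outcome in (0, 1)
      score[rng.random(n) < 0.1] = 1.0          # mass at one, as with DEA scores
                                                # (the paper's two-part models
                                                # treat this point mass separately)
      X = sm.add_constant(z)
      frm = sm.GLM(score, X, family=sm.families.Binomial()).fit(scale="X2")
      print(frm.summary().tables[1])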
  16. By: Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT); Gary King (Institute for Quantitative Social Science, Harvard University); Giuseppe Porro (Department of Economics and Statistics, University of Trieste)
    Abstract: We introduce a new ``Monotonic Imbalance Bounding'' (MIB) class of matching methods for causal inference that satisfies several important in-sample properties. MIB generalizes and extends in several new directions the only existing class, ``Equal Percent Bias Reducing'' (EPBR), which is designed to satisfy weaker properties and only in expectation. We also offer strategies to obtain specific members of the MIB class, and present a member of this class, called Coarsened Exact Matching, whose properties we analyze from this new perspective.
    Keywords: causal inference, treatment effect, matching
    Date: 2009–10–16
    URL: http://d.repec.org/n?u=RePEc:bep:unimip:1089&r=ecm
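    Illustration: the MIB-class member the paper analyzes, Coarsened Exact Matching, reduces to a few lines: coarsen each covariate into bins fixed ex ante (the bin widths bound the post-matching imbalance), then match treated and control units exactly on the bin signature. A bare-bones Python sketch (the authors' cem software implements the full method):
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(8)
      n = 500
      df = pd.DataFrame({"treat": rng.integers(0, 2, n),
                         "age": rng.uniform(18, 65, n),
                         "educ": rng.integers(8, 21, n)})

      # Coarsening: these bin choices directly bound the remaining imbalance
      df["age_bin"] = pd.cut(df["age"], bins=[18, 30, 45, 65])
      df["educ_bin"] = pd.cut(df["educ"], bins=[7, 12, 16, 21])

      # Exact matching on the coarsened signature: keep strata containing
      # both treated and control units
      keep = (df.groupby(["age_bin", "educ_bin"], observed=True)["treat"]
                .transform(lambda s: s.nunique() == 2))
      matched = df[keep]
      print(f"kept {len(matched)} of {n} units in "
            f"{matched.groupby(['age_bin', 'educ_bin'], observed=True).ngroups} strata")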
  17. By: Valdez, Emiliano A.
    Abstract: This article examines the notion of distortion of copulas, a natural extension of distortion within the univariate framework. We study three approaches to this extension: (1) distortion of the margins alone while keeping the original copula structure, (2) distortion of the margins while simultaneously altering the copula structure, and (3) synchronized distortion of the copula and its margins. When applying distortion within the multivariate framework, it is important to preserve the properties of a copula function. For the first two approaches this is rather straightforward; for the third approach, however, the proof has been exquisitely constructed in Morillas (2005). These three approaches unify the different types of multivariate distortion that have hitherto been scattered sparsely in the literature. Our contribution in this paper is to further develop this unifying framework: we give numerous examples as illustration and examine the resulting properties, particularly with regard to the ordering of multivariate risks. Multivariate distortion can be practically implemented in risk management where there is a need to perform aggregation and attribution of portfolios of correlated risks. Furthermore, ancillary to the results discussed in this article, we are able to generalize the formula developed by Genest and Rivest (2001) for computing the distribution of the probability integral transformation of a random vector, and extend it to the distortion framework.
    Keywords: Multivariate distortion; Ordering of risks; Probability integral transformation
    JEL: C10 C46
    Date: 2009–12–23
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:20524&r=ecm
  18. By: John A. List; Sally Sadoff; Mathis Wagner
    Abstract: Experimental economics represents a strong growth industry. In the past several decades the method has expanded beyond intellectual curiosity, now meriting consideration alongside the other more traditional empirical approaches used in economics. Accompanying this growth is an influx of new experimenters who are in need of straightforward direction to make their designs more powerful. This study provides several simple rules of thumb that researchers can apply to improve the efficiency of their experimental designs. We buttress these points by including empirical examples from the literature.
    JEL: C9 C91 C92 C93
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:15701&r=ecm
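    Illustration: the best-known rule of thumb in the paper is that, with heterogeneous outcome variances, sample sizes should be assigned to treatment and control in proportion to the outcome standard deviations. A short Python function using standard power-calculation algebra (the numbers are made up):
      import math
      from scipy import stats

      def optimal_n(sigma_t, sigma_c, delta, alpha=0.05, power=0.80):
          """Per-arm sizes to detect effect delta, with n_t/n_c = sigma_t/sigma_c."""
          z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
          n_c = (z / delta) ** 2 * sigma_c * (sigma_t + sigma_c)
          return math.ceil(n_c * sigma_t / sigma_c), math.ceil(n_c)

      n_t, n_c = optimal_n(sigma_t=2.0, sigma_c=1.0, delta=0.5)
      print(f"treatment arm: {n_t}, control arm: {n_c}")  # twice as many treated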
  19. By: Arnouts H.; Goos P.
    Abstract: The cost of experimentation can often be reduced by forgoing complete randomization. A well-known design with restricted randomization is the split-plot design, which is commonly used in industry when some experimental factors are harder to change than others or when a two-stage production process is studied. Split-plot designs are also often used in robust product design to develop products that are insensitive to environmental or noise factors. Another, lesser known, type of experimental design that can be used in such situations is the strip-plot design. Strip-plot designs are economically attractive in situations where the factors are hard to change, the process under investigation consists of two distinct stages, and it is possible to apply the second stage to groups of semi-finished products from the first stage. They have a correlation structure similar to row-column designs and can be seen as special cases of split-lot designs. In this paper, we show how optimal design of experiments allows for the creation of a broad range of strip-plot designs.
    Date: 2009–06
    URL: http://d.repec.org/n?u=RePEc:ant:wpaper:2009007&r=ecm
  20. By: Patrick Bajari; Jane Cooley; Kyoo il Kim; Christopher Timmins
    Abstract: We propose a new strategy for a pervasive problem in the hedonics literature—recovering hedonic prices in the presence of time-varying correlated unobservables. Our approach relies on an assumption about homebuyer rationality, under which prior sales prices can be used to control for time-varying unobservable attributes of the house or neighborhood. Using housing transactions data from California’s Bay Area between 1990 and 2006, we apply our estimator to recover marginal willingness to pay for reductions in three of the EPA’s “criteria” air pollutants. Our findings suggest that ignoring bias from time-varying correlated unobservables considerably understates the benefits of a pollution reduction policy.
    JEL: C01 Q51
    Date: 2010–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:15724&r=ecm
  21. By: Flavio Cunha; James Heckman; Susanne Schennach
    Abstract: This paper formulates and estimates multistage production functions for children's cognitive and noncognitive skills. Skills are determined by parental environments and investments at different stages of childhood. We estimate the elasticity of substitution between investments in one period and stocks of skills in that period to assess the benefits of early investment in children compared to later remediation. We establish nonparametric identification of a general class of production technologies based on nonlinear factor models with endogenous inputs. A by-product of our approach is a framework for evaluating childhood and schooling interventions that does not rely on arbitrarily scaled test scores as outputs and recognizes the differential effects of the same bundle of skills in different tasks. Using the estimated technology, we determine optimal targeting of interventions to children with different parental and personal birth endowments. Substitutability decreases in later stages of the life cycle in the production of cognitive skills. It increases slightly in later stages of the life cycle in the production of noncognitive skills. This finding has important implications for the design of policies that target the disadvantaged. For some configurations of disadvantage and for some outcomes, the return to investments in the later stages of childhood may exceed that to investments in the early stage.
    JEL: C31 J13
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:15664&r=ecm

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.