nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒03‒12
twelve papers chosen by
Sune Karlsson
Örebro universitet

  1. Robust Evaluation of Multivariate Density Forecasts By Dovern, Jonas; Manner, Hans
  2. New concepts of symmetry for copulas By Mangold, Benedikt
  3. Correcting Measurement Errors in Transition Models Based on Retrospective Panel Data By Shaimaa Yassin; Francois Langot
  4. Testing for Stochastic Dominance in Social Networks By Firmin Doko Tchatoka; Robert Garrard; Virginie Masson
  5. Evaluating local average and quantile treatment effects under endogeneity based on instruments: a review By Huber, Martin; Wüthrich, Kaspar
  6. Detecting Co-Movements in Noncausal Time Series By Cubadda, Gianluca; Hecq, Alain; Telg, Sean
  7. A Sample Selection Model for Fractional Response Variables By Schwiebert, Jörg
  8. Optimizing policymakers' loss functions in crisis prediction: before, within or after? By Sarlin, Peter; von Schweinitz, Gregor
  9. The Asymptotic Properties of GMM and Indirect Inference under Second-Order Identification By Prosper Dovonon; Alastair R. Hall
  10. Inference in Second-Order Identified Models By Prosper Dovonon; Alastair R. Hall
  11. What Happens When Econometrics and Psychometrics Collide? An Example Using PISA Data By John Jerrim; Luis Alejandro Lopez-Agudo; Oscar D. Marcenaro-Gutierrez; Nikki Shure
  12. Bootstrapping Structural Change Tests By Otilia Boldea; Alastair R. Hall; Adriana Cornea-Madeira

  1. By: Dovern, Jonas; Manner, Hans
    Abstract: We derive new tests for proper calibration of multivariate density forecasts based on Rosenblatt probability integral transforms. These tests have the advantage that they i) do not depend on the ordering of variables in the forecasting model, ii) are applicable to densities of arbitrary dimensions, and iii) have superior power relative to existing approaches. We furthermore develop adjusted tests that allow for estimated parameters and, consequently, can be used as in-sample specification tests. We demonstrate the problems of existing tests and how our new approaches overcome them, using two applications: one based on multivariate GARCH models for stock market returns and one on a macroeconomic Bayesian vector autoregressive model.
    JEL: C12 C32 C53
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc16:145547&r=ecm
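The Rosenblatt transform underlying these tests is straightforward to illustrate. Below is a minimal Python sketch for a bivariate normal forecast (the parameter values are illustrative, not taken from the paper): under a correctly specified forecast density the transformed values are i.i.d. uniform on (0, 1), which can be checked with a Kolmogorov-Smirnov distance.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rosenblatt_pit_bivariate_normal(x1, x2, mu1, mu2, s1, s2, rho):
    """Rosenblatt transform of (x1, x2) under a bivariate normal forecast.

    u1 is the marginal PIT of x1; u2 is the PIT of x2 conditional on x1.
    If the forecast density is correctly specified, (u1, u2) are i.i.d. U(0,1).
    """
    u1 = norm_cdf((x1 - mu1) / s1)
    cond_mean = mu2 + rho * (s2 / s1) * (x1 - mu1)
    cond_sd = s2 * math.sqrt(1.0 - rho ** 2)
    u2 = norm_cdf((x2 - cond_mean) / cond_sd)
    return u1, u2

def ks_uniform(sample):
    """Kolmogorov-Smirnov distance between a sample and U(0,1)."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

if __name__ == "__main__":
    # Simulate data from the same bivariate normal the forecast assumes,
    # so the PITs should look uniform (illustrative parameter values).
    random.seed(0)
    mu1, mu2, s1, s2, rho = 0.0, 0.0, 1.0, 1.0, 0.6
    pits = []
    for _ in range(5000):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        x1 = mu1 + s1 * z1
        x2 = mu2 + rho * s2 * z1 + s2 * math.sqrt(1 - rho ** 2) * z2
        pits.extend(rosenblatt_pit_bivariate_normal(x1, x2, mu1, mu2, s1, s2, rho))
    print("KS distance from U(0,1):", round(ks_uniform(pits), 4))
```

The paper's contribution is to build tests on such transforms that are invariant to the ordering of the variables; the sketch above uses one fixed ordering only.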
  2. By: Mangold, Benedikt
    Abstract: This paper introduces two new concepts of symmetry for multivariate copulas with a focus on tail regions. Properties of the symmetry concepts are investigated for bivariate copulas and a connection to radial symmetry is established. Two nonparametric testing procedures for the new concepts are developed using a vector of locally most powerful rank test statistics, applied to a new generalization of the FGM copula which parameterizes every vertex of the unit cube. This vector quantifies deviations from independence in each vertex, and the tests for the new symmetry concepts are based on comparisons of these deviations. A combination of both tests can be used to test for radial symmetry. It is shown that this combined test has power similar to recently published nonparametric tests in detecting bivariate radial symmetry. Further, an application to insurance data is provided. Finally, an improvement of the selection process in the context of vine copula fitting is proposed, based on the elimination of copula families with unsuitable symmetry properties.
    Keywords: radial symmetry, vertex symmetry, diametrical symmetry, copula, vine copula, rank-based inference
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:zbw:iwqwdp:062017&r=ecm
  3. By: Shaimaa Yassin (Institute of Economic Research (IRENE), University of Neuchatel); Francois Langot (University of Le Mans (GAINS-TEPP))
    Abstract: We propose a dynamic n-state transition model to correct for measurement error, which can arise for example from recall and/or design bias, in retrospective panels. Our model allows measurement errors to be corrected over a long period of time, even when very little auxiliary information is available, while taking cyclical fluctuations into consideration. The suggested technique shows that population moments (for at least one point in time) are sufficient to correct over- or under-reporting biases. Using a Simulated Method of Moments, one can estimate a transition- and time-specific correction matrix for the labor market flows in a biased retrospective panel. Using retrospective and contemporaneous data from Egypt, we estimate the model and show the significance and robustness of our correction. We show through a reform evaluation that neglecting measurement error in the data would have produced significantly different and misleading results.
    Keywords: Panel Data, Retrospective Recall, Measurement Error, Labor Markets, Transition Models.
    JEL: C83 C81 J01 J62 J64
    Date: 2017–03
    URL: http://d.repec.org/n?u=RePEc:irn:wpaper:17-04&r=ecm
  4. By: Firmin Doko Tchatoka (School of Economics, University of Adelaide); Robert Garrard (School of Economics, University of Adelaide); Virginie Masson (School of Economics, University of Adelaide)
    Abstract: This paper illustrates how stochastic dominance criteria can be used to rank social networks in terms of efficiency, and develops statistical inference procedures for assessing these criteria. The tests proposed can be viewed as extensions of a Pearson goodness-of-fit test and a studentized maximum modulus test often used to partially rank income distributions and inequality measures. We establish uniform convergence of the empirical size of the tests to the nominal level, and show their consistency under the usual conditions that guarantee the validity of the Gaussian approximation to the multinomial distribution. Furthermore, we propose a bootstrap method that enhances the finite-sample properties of the tests. The performance of the tests is illustrated via Monte Carlo experiments and an empirical application to risk sharing networks in rural India.
    Keywords: Networks; Tests of stochastic dominance; Bootstrap; Uniform convergence.
    JEL: C12 C13 C36
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:adl:wpaper:2017-02&r=ecm
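As a loose illustration of the ingredients involved (not the paper's actual test statistics), a first-order stochastic dominance check between two empirical degree distributions and a Pearson goodness-of-fit statistic can be sketched in a few lines; the two degree sequences below are hypothetical.

```python
from collections import Counter

def empirical_cdf(sample, support):
    """Empirical CDF of `sample` evaluated at each point of `support`."""
    n = len(sample)
    counts = Counter(sample)
    cdf, running = [], 0
    for s in support:
        running += counts.get(s, 0)
        cdf.append(running / n)
    return cdf

def first_order_dominates(sample_a, sample_b):
    """True if A first-order stochastically dominates B:
    F_A(x) <= F_B(x) everywhere (A puts more mass on higher values)."""
    support = sorted(set(sample_a) | set(sample_b))
    fa = empirical_cdf(sample_a, support)
    fb = empirical_cdf(sample_b, support)
    return all(a <= b + 1e-12 for a, b in zip(fa, fb))

def pearson_chi2(observed, expected):
    """Pearson goodness-of-fit statistic for observed vs expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Degree sequences of two hypothetical networks:
net_a = [3, 3, 4, 4, 4, 5, 5, 6]   # denser network
net_b = [1, 2, 2, 3, 3, 3, 4, 4]
print(first_order_dominates(net_a, net_b))  # net_a's degrees dominate
```

The paper's contribution lies in the inference: controlling size uniformly and improving finite-sample behavior via the bootstrap, neither of which the point comparison above addresses.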
  5. By: Huber, Martin; Wüthrich, Kaspar
    Abstract: This paper provides a review of methodological advancements in the evaluation of heterogeneous treatment effect models based on instrumental variable (IV) methods. We focus on models that achieve identification through a monotonicity assumption on the selection equation and analyze local average and quantile treatment effects for the subpopulation of compliers. We start with a comprehensive discussion of the binary treatment and binary instrument case, which is relevant, for instance, in randomized experiments with imperfect compliance. We then review extensions to identification and estimation with covariates, multi-valued and multiple treatments and instruments, outcome attrition and measurement error, and the identification of direct and indirect treatment effects, among others. We also discuss testable implications and possible relaxations of the IV assumptions, approaches to extrapolate from local to global treatment effects, and the relationship to other IV approaches.
    Keywords: instrument; LATE; treatment effects; selection on unobservables
    JEL: C26
    Date: 2017–02–24
    URL: http://d.repec.org/n?u=RePEc:fri:fribow:fribow00479&r=ecm
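The binary-treatment, binary-instrument starting point of this review is the familiar Wald/IV ratio, which identifies the LATE for compliers under instrument validity and monotonicity. A self-contained sketch with simulated compliance types (all shares and effect sizes below are illustrative, not from the paper):

```python
import random

def mean(v):
    return sum(v) / len(v)

def wald_late(y, d, z):
    """Wald/IV estimand: the intention-to-treat effect on the outcome
    divided by the first stage (effect of assignment z on take-up d),
    which identifies the LATE for compliers."""
    itt_y = mean([yi for yi, zi in zip(y, z) if zi == 1]) - \
            mean([yi for yi, zi in zip(y, z) if zi == 0])
    itt_d = mean([di for di, zi in zip(d, z) if zi == 1]) - \
            mean([di for di, zi in zip(d, z) if zi == 0])
    return itt_y / itt_d

if __name__ == "__main__":
    # Hypothetical experiment with imperfect compliance: 60% compliers,
    # 20% never-takers, 20% always-takers; true complier effect = 2.
    random.seed(0)
    y, d, z = [], [], []
    for _ in range(10000):
        zi = random.random() < 0.5       # random assignment
        r = random.random()
        if r < 0.6:                      # complier: treated iff assigned
            di = zi
        elif r < 0.8:                    # never-taker
            di = False
        else:                            # always-taker
            di = True
        yi = 2.0 * di + random.gauss(0, 1)
        y.append(yi); d.append(int(di)); z.append(int(zi))
    print("estimated LATE:", round(wald_late(y, d, z), 2))  # close to 2
```

Monotonicity rules out defiers (units treated only when not assigned); without it, the ratio above no longer has a causal interpretation for any subpopulation.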
  6. By: Cubadda, Gianluca; Hecq, Alain; Telg, Sean
    Abstract: This paper introduces the notion of common noncausal features and proposes tools for detecting the presence of co-movements in economic and financial time series subject to phenomena such as asymmetric cycles and speculative bubbles. For purely causal or noncausal vector autoregressive models with more than one lag, the presence of a reduced rank structure makes it possible to distinguish causal from noncausal systems using the usual Gaussian likelihood framework. This result cannot be extended to mixed causal-noncausal models, and an approximate maximum likelihood estimator assuming non-Gaussian disturbances is needed for this case. We find common bubbles in both commodity prices and price indicators.
    Keywords: mixed causal-noncausal process, common features, vector autoregressive models, commodity prices, common bubbles.
    JEL: C12 C32 E32
    Date: 2017–03–02
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:77254&r=ecm
  7. By: Schwiebert, Jörg
    Abstract: This paper develops a sample selection model for fractional response variables, i.e., variables taking values between zero and one. It is shown that the proposed model is consistent with the nature of the fractional response variable, i.e., it generates predictions between zero and one. A simulation study shows that the model performs well in finite samples and that competing models, the Heckman selection model and the fractional probit model (without selectivity), generate biased estimates. An empirical application to the impact of education on women's perceived probability of job loss illustrates that the choice of an appropriate model is important in practice. In particular, the Heckman selection model and the fractional probit model are found to underestimate (in absolute terms) the impact of education on the perceived probability of job loss.
    JEL: C24 C25 C21
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:vfsc16:145527&r=ecm
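The selection problem this model addresses is easy to illustrate with a small simulation (the data-generating process below is a hypothetical sketch, not the paper's model): when the selection and outcome errors are correlated, the observed mean of a fractional outcome is biased relative to the population mean, which is what the joint selection model corrects.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(1)
rho = 0.7          # correlation between selection and outcome errors (assumed)
population, selected = [], []
for _ in range(20000):
    u = random.gauss(0, 1)                                      # selection error
    e = rho * u + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)  # outcome error
    y = norm_cdf(0.5 + e)      # fractional outcome, always strictly in (0, 1)
    population.append(y)
    if 0.2 + u > 0:            # y observed only when the selection latent is positive
        selected.append(y)

pop_mean = sum(population) / len(population)
sel_mean = sum(selected) / len(selected)
print(round(pop_mean, 3), round(sel_mean, 3))  # selected mean is biased upward
```

Note that the outcome is bounded in (0, 1) by construction, which is the feature the paper's model preserves and which a linear Heckman-style correction ignores.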
  8. By: Sarlin, Peter; von Schweinitz, Gregor
    Abstract: Early-warning models most commonly optimize signaling thresholds on crisis probabilities. This ex-post threshold optimization is based upon a loss function accounting for preferences between forecast errors, but comes with two crucial drawbacks: unstable thresholds in recursive estimations and an in-sample overfit at the expense of out-of-sample performance. We propose two alternatives for threshold setting: (i) including preferences in the estimation itself and (ii) setting thresholds ex ante according to preferences only. Given probabilistic model output, a decision rule can be set independently of the data or model specification, since a threshold on probabilities expresses the willingness to issue a false alarm vis-à-vis missing a crisis. We provide simulated and real-world evidence that this simplification results in stable thresholds and improves out-of-sample performance. Our solution is not restricted to binary-choice models, but is directly transferable to the signaling approach and all probabilistic early-warning models.
    Keywords: early-warning models, loss functions, predictive performance, threshold setting
    JEL: C35 C53 G01
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20172025&r=ecm
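The contrast between ex-post and ex-ante threshold setting can be sketched as follows (a stylized illustration of the general idea, not the paper's exact loss function): with a preference parameter mu weighting missed crises against false alarms, an alarm is worthwhile for a calibrated probability p whenever mu*p > (1 - mu)*(1 - p), i.e. p > 1 - mu, so the ex-ante threshold depends on preferences only.

```python
def loss(probs, crises, tau, mu):
    """Preference-weighted loss: mu weights the missed-crisis rate,
    (1 - mu) the false-alarm rate (rates conditional on class)."""
    misses = sum(1 for p, c in zip(probs, crises) if c == 1 and p <= tau)
    alarms = sum(1 for p, c in zip(probs, crises) if c == 0 and p > tau)
    n_crisis = sum(crises)
    n_calm = len(crises) - n_crisis
    return mu * misses / n_crisis + (1 - mu) * alarms / n_calm

def ex_post_threshold(probs, crises, mu, grid=101):
    """In-sample optimal threshold via grid search: this is the step the
    paper argues is unstable and prone to overfitting."""
    return min((i / (grid - 1) for i in range(grid)),
               key=lambda t: loss(probs, crises, t, mu))

def ex_ante_threshold(mu):
    """For calibrated probabilities, alarm when mu*p > (1 - mu)*(1 - p),
    i.e. p > 1 - mu: the threshold depends on preferences only."""
    return 1.0 - mu
```

With mu = 0.8 (missing a crisis weighted four times as heavily as a false alarm), the ex-ante rule signals whenever the predicted crisis probability exceeds 0.2, regardless of the data or model used to produce the probabilities.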
  9. By: Prosper Dovonon; Alastair R. Hall
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:man:sespap:1705&r=ecm
  10. By: Prosper Dovonon; Alastair R. Hall
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:man:sespap:1703&r=ecm
  11. By: John Jerrim (Department of Social Science, UCL Institute of Education, University College London); Luis Alejandro Lopez-Agudo (Departamento de Economía Aplicada (Estadística y Econometría). Facultad de Ciencias Económicas y Empresariales. Universidad de Málaga); Oscar D. Marcenaro-Gutierrez (Departamento de Economía Aplicada (Estadística y Econometría). Facultad de Ciencias Económicas y Empresariales. Universidad de Málaga); Nikki Shure (Department of Social Science, UCL Institute of Education and Institute of Labor Economics)
    Abstract: International large-scale assessments such as PISA are increasingly being used to benchmark the academic performance of young people across the world. Yet many of the technicalities underpinning these datasets are misunderstood by applied researchers, who sometimes fail to take into account their complex survey and test designs. The aim of this paper is to give economists a better understanding of how such databases are created and what this implies for the empirical methodologies one should or should not apply. We explain how some of the modelling strategies preferred by economists are at odds with the design of these studies. In doing so, we hope to generate a better understanding of international large-scale education datasets and to promote better practice in their use.
    Keywords: Survey design; Test design; PISA; Weights; Replicate weights; Plausible values
    JEL: I20 C18 C10 C55
    Date: 2017–02–22
    URL: http://d.repec.org/n?u=RePEc:qss:dqsswp:1704&r=ecm
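One of the technicalities at issue is the handling of plausible values: the analysis is repeated over each set of plausible values and the results combined. As a reminder of the standard mechanics (Rubin's combining rules, sketched here for illustration rather than taken from the paper):

```python
def combine_plausible_values(estimates, variances):
    """Rubin's combining rules for an analysis repeated over M plausible
    values: the final estimate is the mean of the M estimates; the total
    variance adds the between-imputation spread, inflated by (1 + 1/M),
    to the average sampling variance."""
    m = len(estimates)
    point = sum(estimates) / m
    within = sum(variances) / m
    between = sum((e - point) ** 2 for e in estimates) / (m - 1)
    total_var = within + (1 + 1 / m) * between
    return point, total_var
```

Running the regression once on a single plausible value, or on their average, is exactly the shortcut this combination rule is designed to avoid: it understates the uncertainty coming from the latent-proficiency imputation.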
  12. By: Otilia Boldea; Alastair R. Hall; Adriana Cornea-Madeira
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:man:sespap:1704&r=ecm

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.