nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒04‒09
fifteen papers chosen by
Sune Karlsson
Orebro University

  1. Quantile Regression in the Presence of Sample Selection By Huber, Martin; Melly, Blaise
  2. An M-Estimator for Tail Dependence in Arbitrary Dimensions By Einmahl, J.H.J.; Krajina, A.; Segers, J.
  3. A Class of Simple Distribution-free Rank-based Unit Root Tests (Revision of DP 2010-72) By Hallin, M.; Akker, R. van den; Werker, B.J.M.
  4. An iterative plug-in algorithm for decomposing seasonal time series using the Berlin Method By Yuanhua Feng
  5. On the Choice of Prior in Bayesian Model Averaging By Einmahl, J.H.J.; Kumar, K.; Magnus, J.R.
  6. INFERENCE PRINCIPLES FOR MULTIVARIATE SURVEILLANCE By Frisén, Marianne
  7. Second-order Refinement of Empirical Likelihood for Testing  Overidentifying Restrictions By Yukitoshi Matsushita; Taisuke Otsu
  8. FDR Control in the Presence of an Unknown Correlation Structure By Cerqueti, Roy; Costantini, Mauro; Lupi, Claudio
  9. Sequential Testing with Uniformly Distributed Size By Stanislav Anatolyev; Grigory Kosenok
  10. A Stochastic Frontier Model with short-run and long-run inefficiency random effects By Roberto Colombi; Gianmaria Martini; Giorgio Vittadini
  11. Plug-in estimation of level sets in a non-compact setting with applications in multivariate risk theory By Elena Di Bernadino; Thomas Laloë; Véronique Maume-Deschamps; Clémentine Prieur
  12. Bayesian Integration of Large Scale SNA Data Frameworks with an Application to Guatemala By Tongeren, J.W. Van; Magnus, J.R.
  13. Autoregression-Based Estimation of the New Keynesian Phillips Curve By Lanne, Markku; Luoto, Jani
  14. Panel Data Econometrics and Climate Change. By Muris, C.H.M.
  15. Ambiguity and Robust Statistics By Simone Cerreia-Vioglio; Fabio Maccheroni; Massimo Marinacci; Luigi Montrucchio

  1. By: Huber, Martin; Melly, Blaise
    Abstract: Most sample selection models assume that the errors are independent of the regressors. Under this assumption, all quantile and mean functions are parallel, which implies that quantile estimators cannot reveal any (by definition non-existent) heterogeneity. However, quantile estimators are useful for testing the independence assumption, because they are consistent under the null hypothesis. We propose tests for this crucial restriction that are based on the entire conditional quantile regression process after correcting for sample selection bias. Monte Carlo simulations demonstrate that they are powerful, and two empirical illustrations indicate that violations of this assumption are likely to be ubiquitous in labor economics.
    Keywords: Sample selection, quantile regression, independence, test
    JEL: C12 C13 C14 C21
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:usg:econwp:2011:09&r=ecm
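    A rough illustration of the idea behind these tests: under independence of errors and regressors, conditional quantile functions are parallel, so quantile-regression slopes should not vary across quantiles. The sketch below fits several quantiles with statsmodels on simulated data; it omits the sample selection correction that is central to the paper, and all names and data are hypothetical.

      # Under error-regressor independence, quantile regression slopes should be
      # (roughly) constant across quantiles tau. Purely illustrative; no sample
      # selection correction is applied here.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.normal(size=n)
      y = 1.0 + 0.5 * x + rng.normal(size=n)   # errors independent of x
      X = sm.add_constant(x)

      for q in (0.25, 0.5, 0.75):
          res = sm.QuantReg(y, X).fit(q=q)
          print(f"tau={q:.2f}  slope={res.params[1]: .3f}")
      # A large spread of the slopes across tau would speak against independence.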
  2. By: Einmahl, J.H.J.; Krajina, A.; Segers, J. (Tilburg University, Center for Economic Research)
    Abstract: Consider a random sample in the max-domain of attraction of a multivariate extreme value distribution such that the dependence structure of the attractor belongs to a parametric model. A new estimator for the unknown parameter is defined as the value that minimises the distance between a vector of weighted integrals of the tail dependence function and their empirical counterparts. The minimisation problem has, with probability tending to one, a unique, global solution. The estimator is consistent and asymptotically normal. The spectral measures of the tail dependence models to which the method applies can be discrete or continuous. Examples demonstrate the applicability and the performance of the method.
    Keywords: asymptotic statistics;factor model;M-estimation;multivariate extremes;tail dependence.
    JEL: C13 C14
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2011013&r=ecm
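    For orientation, the object being estimated is the (stable) tail dependence function, which can be written (in our notation, which may differ from the paper's) as

      $$\ell(x_1,\dots,x_d) = \lim_{t \downarrow 0} t^{-1}\, \mathbb{P}\{\, 1 - F_1(X_1) \le t x_1 \ \text{or} \ \dots \ \text{or} \ 1 - F_d(X_d) \le t x_d \,\},$$

    and, writing $\hat{\ell}_n$ for its empirical counterpart and $\ell_\theta$ for the parametric model, the M-estimator described in the abstract minimises a distance of the form

      $$\hat{\theta}_n = \arg\min_{\theta} \Bigl\| \int g(\mathbf{x})\, \hat{\ell}_n(\mathbf{x})\, d\mathbf{x} - \int g(\mathbf{x})\, \ell_\theta(\mathbf{x})\, d\mathbf{x} \Bigr\|^2$$

    for a suitable vector of weight functions $g$; the exact criterion and weights are as specified in the paper.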
  3. By: Hallin, M.; Akker, R. van den; Werker, B.J.M. (Tilburg University, Center for Economic Research)
    Abstract: We propose a class of distribution-free rank-based tests for the null hypothesis of a unit root. This class is indexed by the choice of a reference density g, which need not coincide with the unknown actual innovation density f. The validity of these tests, in terms of exact finite-sample size, is guaranteed, irrespective of the actual underlying density, by distribution-freeness. These tests are locally and asymptotically optimal under a particular asymptotic scheme, for which we provide a complete analysis of asymptotic relative efficiencies. Rather than asymptotic optimality, however, we emphasize finite-sample performance, which for unit root tests depends quite heavily on initial values. We therefore investigate this performance as a function of initial values. It appears that our rank-based tests significantly outperform the traditional Dickey-Fuller tests, as well as the more recent procedures proposed by Elliott, Rothenberg, and Stock (1996), Ng and Perron (2001), and Elliott and Müller (2006), for a broad range of initial values and for heavy-tailed innovation densities. As such, they provide a useful complement to existing techniques.
    Keywords: Unit root;Dickey-Fuller test;Local Asymptotic Normality;Rank test.
    JEL: C12 C22
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2011002&r=ecm
  4. By: Yuanhua Feng (University of Paderborn)
    Abstract: We propose a fast data-driven procedure for decomposing seasonal time series using the Berlin Method, the software used by the German Federal Statistical Office in this context. A formula for the asymptotically optimal bandwidth h_A is obtained, and methods for estimating the unknowns in h_A are proposed. The algorithm is developed by adapting the well-known iterative plug-in idea to time series decomposition. The asymptotic behaviour of the proposal is investigated, and some computational aspects are discussed in detail. A data example shows that the proposal works very well in practice and that data-driven bandwidth selection is a very useful tool for improving the Berlin Method. Deep insights into the iterative plug-in rule are also provided.
    Keywords: Time series decomposition, Berlin Method, local regression, bandwidth selection, iterative plug-in
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:pdn:wpaper:33&r=ecm
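    As background on the iterative plug-in idea, in generic local-linear form (this is not the specific h_A derived in the paper for the Berlin Method): for a smooth trend m estimated from n equidistant observations with error variance $\sigma^2$ and kernel $K$, the asymptotically optimal bandwidth typically takes the form

      $$h_A = \left( \frac{R(K)\, \sigma^2}{\mu_2^2(K) \int [m''(x)]^2\, dx} \right)^{1/5} n^{-1/5}, \qquad R(K) = \int K^2(u)\, du, \quad \mu_2(K) = \int u^2 K(u)\, du.$$

    An iterative plug-in algorithm starts from a pilot bandwidth, estimates the unknowns $\sigma^2$ and $\int [m'']^2$ using bandwidths derived from the previous iterate (suitably inflated for derivative estimation), plugs these estimates into the formula, and iterates until the bandwidth sequence converges.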
  5. By: Einmahl, J.H.J.; Kumar, K.; Magnus, J.R. (Tilburg University, Center for Economic Research)
    Abstract: Bayesian model averaging attempts to combine parameter estimation and model uncertainty in one coherent framework. The choice of prior is then critical. Within an explicit framework of ignorance we define a ‘suitable’ prior as one which leads to a continuous and suitable analog to the pretest estimator. The normal prior, used in standard Bayesian model averaging, is shown to be unsuitable. The Laplace (or lasso) prior is almost suitable. A suitable prior (the Subbotin prior) is proposed and its properties are investigated.
    Keywords: Model averaging;Bayesian analysis;Subbotin prior.
    JEL: C11 C51 C52
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2011003&r=ecm
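    For reference, the Subbotin (exponential power) family nests both of the priors mentioned in the abstract; in one common parameterisation (ours, not necessarily the paper's),

      $$\pi_q(x) \propto \exp\!\left( - \frac{|x|^q}{q} \right), \qquad q > 0,$$

    with $q = 2$ giving the normal prior, $q = 1$ the Laplace (lasso) prior, and intermediate values of $q$ interpolating between the two.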
  6. By: Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: Multivariate surveillance is of interest in industrial production as it enables the monitoring of several components. Recently there has also been increased interest in other areas, such as the detection of bioterrorism, spatial surveillance and transaction strategies in finance. Multivariate counterparts to the univariate Shewhart, EWMA and CUSUM methods have been proposed earlier. A review of general approaches to multivariate surveillance is given with respect to how suggested methods relate to general statistical inference principles. Multivariate on-line surveillance problems can be complex. The sufficiency principle can be of great use in finding simplifications without loss of information. We use this to clarify the structure of some problems; this helps in finding relevant metrics for the evaluation of multivariate surveillance and in finding optimal methods. The sufficiency principle is also used to determine efficient methods for combining data from sources with different time lags. Surveillance of spatial data is one example. Illustrations are given for the surveillance of influenza outbreaks.
    Keywords: Sequential; Surveillance; Multivariate; Sufficiency
    JEL: C44
    Date: 2011–03–29
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_005&r=ecm
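    As a point of reference for the multivariate counterparts mentioned above, here is a minimal Shewhart-type (Hotelling T^2) monitoring sketch on hypothetical data with a known in-control mean and covariance; it is not one of the methods developed in the paper.

      # Shewhart-type multivariate chart: flag observation t when the Hotelling
      # T^2 statistic exceeds a chi-square threshold (illustrative only).
      import numpy as np
      from scipy.stats import chi2

      rng = np.random.default_rng(1)
      p, alpha = 3, 0.005
      mu0, sigma0 = np.zeros(p), np.eye(p)
      sigma0_inv = np.linalg.inv(sigma0)
      limit = chi2.ppf(1 - alpha, df=p)

      x = rng.multivariate_normal(mu0, sigma0, size=200)
      x[150:] += 1.0                            # shift in all components after t = 150
      t2 = np.einsum('ti,ij,tj->t', x - mu0, sigma0_inv, x - mu0)
      alarms = np.flatnonzero(t2 > limit)
      print("first alarm at t =", alarms[0] if alarms.size else None)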
  7. By: Yukitoshi Matsushita (Graduate School of Humanities and Social Science, University of Tsukuba); Taisuke Otsu (Cowles Foundation, Yale University)
    Abstract: This paper studies second-order properties of the empirical likelihood overidentifying restriction test to check the validity of moment condition models. We show that the empirical likelihood test is Bartlett correctable and suggest second-order refinement methods for the test based on the empirical Bartlett correction and adjusted empirical likelihood. Our second-order analysis supplements the one in Chen and Cui (2007) who considered parameter hypothesis testing for overidentified models. In simulation studies we find that the empirical Bartlett correction and adjusted empirical likelihood assisted by bootstrapping provide remarkable improvements for the size properties.
    Keywords: Empirical likelihood, GMM, Overidentification test, Bartlett correction, Higher order analysis
    JEL: C12 C14
    Date: 2011–04
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1791&r=ecm
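    For orientation, the generic form of a Bartlett correction is as follows (schematic; the paper derives the relevant correction for the empirical likelihood overidentification statistic). If a statistic $W$ is asymptotically $\chi^2_r$ with $\mathbb{E}[W] = r(1 + b/n) + O(n^{-2})$, then the corrected statistic

      $$W^{*} = \frac{W}{1 + b/n} \qquad \text{(or } W / (1 + \hat{b}/n) \text{ with an estimated Bartlett factor)}$$

    has a chi-square approximation whose error is reduced from $O(n^{-1})$ to $O(n^{-2})$.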
  8. By: Cerqueti, Roy; Costantini, Mauro; Lupi, Claudio
    Abstract: The false discovery rate (FDR, Benjamini and Hochberg 1995) is a powerful approach to multiple testing. However, the original approach developed by Benjamini and Hochberg (1995) applies only to independent tests. Yekutieli (2008) showed that a modification of the Benjamini-Hochberg (BH) approach can be used in the presence of dependent tests and labelled his procedure separate subsets BH (ssBH). However, Yekutieli (2008) left the practical specification of the subsets of p-values largely unresolved. In this paper we propose a modification of the ssBH procedure based on a selection of the subsets that guarantees that the dependence properties needed to control the FDR are satisfied. We label this new procedure the separate pairs BH (spBH). An extensive Monte Carlo analysis is presented that compares the properties of the BH and spBH procedures.
    Keywords: Multiple testing, False discovery rate, Copulas
    JEL: C10 C12
    Date: 2011–03–28
    URL: http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp11059&r=ecm
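    For readers who want the baseline that the ssBH and spBH procedures modify, a minimal sketch of the Benjamini-Hochberg step-up rule (independent tests; hypothetical p-values):

      # Benjamini-Hochberg step-up: reject the hypotheses with the k smallest
      # p-values, where k is the largest index with p_(k) <= k/m * alpha.
      import numpy as np

      def benjamini_hochberg(pvals, alpha=0.05):
          p = np.asarray(pvals, dtype=float)
          m = p.size
          order = np.argsort(p)
          thresholds = alpha * np.arange(1, m + 1) / m
          below = p[order] <= thresholds
          reject = np.zeros(m, dtype=bool)
          if below.any():
              k = np.flatnonzero(below).max() + 1   # largest k with p_(k) <= k*alpha/m
              reject[order[:k]] = True
          return reject

      print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))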
  9. By: Stanislav Anatolyev (New Economic School); Grigory Kosenok (New Economic School)
    Abstract: Sequential procedures of testing for structural stability do not provide enough guidance on the shape of boundaries that are used to decide on acceptance or rejection, requiring only that the overall size of the test is asymptotically controlled. We introduce and motivate a reasonable criterion for a shape of boundaries which requires that the test size be uniformly distributed over the testing period. Under this criterion, we numerically construct boundaries for most popular sequential tests that are characterized by a test statistic behaving asymptotically either as a Wiener process or Brownian bridge. We handle this problem both in a context of retrospecting a historical sample and in a context of monitoring newly arriving data. We tabulate the boundaries by fitting them to certain flexible but parsimonious functional forms. Interesting patterns emerge in an illustrative application of sequential tests to the Phillips curve model.
    Keywords: Structural stability; sequential tests; CUSUM; retrospection; monitoring; boundaries; asymptotic size
    Date: 2011–04
    URL: http://d.repec.org/n?u=RePEc:cfr:cefirw:w0123_april2011&r=ecm
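    A generic Monte Carlo sketch of the setup (not the authors' numerical boundary construction): a statistic that behaves asymptotically like a Brownian bridge is compared with a boundary over the testing period; with a flat boundary, the distribution of crossing times is far from uniform, which is the feature the paper's criterion addresses.

      # Simulate Brownian bridge paths and record first crossings of a flat
      # two-sided boundary c; the boundary value is illustrative, not calibrated.
      import numpy as np

      rng = np.random.default_rng(2)
      n_paths, n_steps, c = 5000, 1000, 1.2
      t = np.arange(1, n_steps + 1) / n_steps
      dw = rng.normal(scale=np.sqrt(1 / n_steps), size=(n_paths, n_steps))
      w = np.cumsum(dw, axis=1)
      bridge = w - t * w[:, [-1]]               # Brownian bridge on [0, 1]

      crossed = np.abs(bridge) > c
      first = np.where(crossed.any(axis=1), crossed.argmax(axis=1) / n_steps, np.nan)
      print(f"overall size ~ {np.mean(~np.isnan(first)):.3f}")
      print("quartiles of crossing times:", np.nanquantile(first, [0.25, 0.5, 0.75]))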
  10. By: Roberto Colombi; Gianmaria Martini; Giorgio Vittadini
    Abstract: This paper presents a new stochastic frontier model for panel data. The model takes into account firm unobservable heterogeneity and short-run and long-run sources of inefficiency. Each of these features is modeled by a specific random effect. In this way, firms’ latent heterogeneity is not wrongly modeled as inefficiency, and it is possible to disentangle a time-persistent component from the total inefficiency. Under reasonable assumptions, we show that the closed-skew normal distribution allows us to derive both the log-likelihood function of the model and the posterior expected values of the random effects. The new model is compared with nested models by analyzing the efficiency of firms belonging to different sectors.
    Keywords: Closed-Skew Normal Distribution, Longitudinal Data Analysis, Mixed Models, Stochastic Frontiers
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:brh:wpaper:1101&r=ecm
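    A schematic version of such a four-component panel frontier (our notation; the paper's specification and distributional assumptions may differ in detail) is

      $$y_{it} = \alpha + x_{it}'\beta + \mu_i - \eta_i + v_{it} - u_{it},$$

    with $\mu_i$ a firm-specific random effect capturing latent heterogeneity, $\eta_i \ge 0$ time-persistent (long-run) inefficiency, $u_{it} \ge 0$ transient (short-run) inefficiency, and $v_{it}$ idiosyncratic noise; with normal noise and half-normal inefficiency components, the composed error follows a closed-skew normal distribution, which is what makes the likelihood mentioned in the abstract tractable.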
  11. By: Elena Di Bernadino (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Thomas Laloë (JAD - Laboratoire Jean Alexandre Dieudonné - CNRS : UMR6621 - Université de Nice Sophia-Antipolis); Véronique Maume-Deschamps (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Clémentine Prieur (INRIA Rhône-Alpes / LJK Laboratoire Jean Kuntzmann - MOISE - CNRS : UMR5224 - INRIA - Laboratoire Jean Kuntzmann - Université Joseph Fourier - Grenoble I - Institut Polytechnique de Grenoble, (Méthodes d'Analyse Stochastique des Codes et Traitements Numériques) - GdR MASCOT-NUM - CNRS : GDR3179)
    Abstract: This paper deals with the problem of estimating the level sets of an unknown distribution function $F$. A plug-in approach is followed. That is, given a consistent estimator $F_n$ of $F$, we estimate the level sets of $F$ by the level sets of $F_n$. In our setting no compactness property is a priori required for the level sets to estimate. We state consistency results with respect to the Hausdorff distance and the volume of the symmetric difference. Our results are motivated by applications in multivariate risk theory. In this sense we also present simulated and real examples which illustrate our theoretical results.
    Keywords: Level sets ; Distribution function ; Plug-in estimation ; Hausdorff distance ; Conditional Tail Expectation
    Date: 2011–03–28
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00580624&r=ecm
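    A minimal plug-in sketch in two dimensions (empirical distribution function on a grid; hypothetical data, and without the non-compactness refinements that are the point of the paper):

      # Plug-in idea: estimate the level set {x : F(x) >= c} by {x : F_n(x) >= c},
      # with F_n the bivariate empirical distribution function evaluated on a grid.
      import numpy as np

      rng = np.random.default_rng(3)
      sample = rng.exponential(size=(2000, 2))  # hypothetical bivariate data
      c = 0.9

      grid = np.linspace(0, 8, 100)
      g1, g2 = np.meshgrid(grid, grid, indexing="ij")
      Fn = np.mean(
          (sample[:, 0][:, None, None] <= g1) & (sample[:, 1][:, None, None] <= g2),
          axis=0,
      )
      level_set = Fn >= c                       # plug-in estimate of the c-level set
      print("share of grid points in the estimated set:", level_set.mean().round(3))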
  12. By: Tongeren, J.W. Van; Magnus, J.R. (Tilburg University, Center for Economic Research)
    Abstract: We present a Bayesian estimation method applied to an extended set of national accounts data and estimates of approximately 2500 variables. The method is based on conventional national accounts frameworks as compiled by countries in Central America, in particular Guatemala, and on concepts that are defined in the international standards of the System of National Accounts. Identities between the variables are exactly satisfied by the estimates. The method uses ratios between the variables as Bayesian conditions, and introduces prior reliabilities of values of basic data and ratios as criteria to adjust these values in order to satisfy the conditions. The paper not only presents estimates and precisions, but also discusses alternative conditions and reliabilities, in order to test the impact of framework assumptions and carry out sensitivity analyses. These tests involve, among others, the impact on Bayesian estimates of limited annual availability of data, of very low reliabilities (close to non-availability) of price indices, of limited availability of important administrative and survey data, and also the impact of aggregation of the basic data. We introduce the concept of `tentative' estimates that are close to conventional national accounts estimates, in order to establish a close link between the Bayesian estimation approach and conventional national accounting.
    Keywords: Macro accounts;system of national accounts;data frameworks;ratios;reliability;Bayesian estimation;sensitivity analysis;aggregation.
    JEL: C11 C82 C87 E01 E20 P44
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2011022&r=ecm
  13. By: Lanne, Markku; Luoto, Jani
    Abstract: We propose an estimation method of the new Keynesian Phillips curve (NKPC) based on a univariate noncausal autoregressive model for the inflation rate. By construction, our approach avoids a number of problems related to the GMM estimation of the NKPC. We estimate the hybrid NKPC with quarterly U.S. data (1955:1-2010:3), and both expected future inflation and lagged inflation are found important in determining the inflation rate, with the former clearly dominating. Moreover, inflation persistence turns out to be intrinsic rather than inherited from a persistent driving process.
    Keywords: Noncausal time series; Non-Gaussian time series; inflation; Phillips curve
    JEL: C51 E31 C22
    Date: 2011–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:29801&r=ecm
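    For orientation, a mixed causal-noncausal autoregression of the kind underlying this approach is usually written (our notation; the paper's exact specification may differ) as

      $$\phi(B)\, \varphi(B^{-1})\, y_t = \epsilon_t,$$

    where $B$ is the backshift operator ($B y_t = y_{t-1}$), the polynomials $\phi$ and $\varphi$ have all roots outside the unit circle, and the errors $\epsilon_t$ are i.i.d. and non-Gaussian. The $\varphi(B^{-1})$ part makes $y_t$ depend on future errors, which is how expectations enter the univariate representation; non-Gaussianity is required to identify the noncausal part, since under Gaussian errors causal and noncausal representations are observationally equivalent.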
  14. By: Muris, C.H.M. (Tilburg University)
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ner:tilbur:urn:nbn:nl:ui:12-4578467&r=ecm
  15. By: Simone Cerreia-Vioglio; Fabio Maccheroni; Massimo Marinacci; Luigi Montrucchio
    Abstract: Starting with the seminal paper of Gilboa and Schmeidler (1989) an analogy between the maxmin approach of Decision Theory under Ambiguity and the minimax approach of Robust Statistics -- e.g. Huber and Strassen (1973) -- has been hinted at. The present paper formally clarifies this relation by showing the conditions under which the two approaches are actually equivalent.
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:igi:igierp:382&r=ecm
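    For readers outside decision theory: the maxmin criterion of Gilboa and Schmeidler (1989) evaluates an act $f$ by

      $$V(f) = \min_{P \in \mathcal{C}} \int u(f)\, dP,$$

    where $\mathcal{C}$ is a set of priors, so that acts are ranked by their worst-case expected utility; the paper relates this worst-case-over-priors evaluation to the worst-case-over-distributions reasoning of minimax robust statistics in the tradition of Huber and Strassen (1973).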

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.