
on Econometrics 
By:  Huber, Martin; Melly, Blaise 
Abstract:  Most sample selection models assume that the errors are independent of the regressors. Under this assumption, all quantile and mean functions are parallel, which implies that quantile estimators cannot reveal any (by definition nonexistent) heterogeneity. However, quantile estimators are useful for testing the independence assumption, because they are consistent under the null hypothesis. We propose tests for this crucial restriction that are based on the entire conditional quantile regression process after correcting for sample selection bias. Monte Carlo simulations demonstrate that they are powerful, and two empirical illustrations indicate that violations of this assumption are likely to be ubiquitous in labor economics. 
Keywords:  Sample selection, quantile regression, independence, test 
JEL:  C12 C13 C14 C21 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:usg:econwp:2011:09&r=ecm 
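The kind of quantile heterogeneity the test targets is easy to see in a toy simulation. A minimal sketch (not the authors' selection-corrected estimator), assuming a hypothetical two-group location-scale design y = 1 + x + (1 + x)e with x in {0, 1}: under independence the quantile "slopes" would coincide across tau, but here they differ sharply.

```python
import random

def group_quantile(values, tau):
    """Empirical tau-quantile (simple order-statistic definition)."""
    s = sorted(values)
    return s[min(len(s) - 1, int(tau * len(s)))]

random.seed(0)
n = 4000
y0 = [1.0 + random.gauss(0, 1) for _ in range(n)]        # x = 0
y1 = [2.0 + 2.0 * random.gauss(0, 1) for _ in range(n)]  # x = 1

# Quantile "slope": difference in conditional quantiles between the groups.
slope = lambda tau: group_quantile(y1, tau) - group_quantile(y0, tau)

# Under independence slope(0.9) == slope(0.1); in this location-scale
# model the theoretical gap is z_0.9 - z_0.1, roughly 2.56.
print(slope(0.9) - slope(0.1))
```

Only the two-group comparison is shown; the paper works with the whole conditional quantile regression process.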
By:  Einmahl, J.H.J.; Krajina, A.; Segers, J. (Tilburg University, Center for Economic Research) 
Abstract:  Consider a random sample in the max-domain of attraction of a multivariate extreme value distribution such that the dependence structure of the attractor belongs to a parametric model. A new estimator for the unknown parameter is defined as the value that minimises the distance between a vector of weighted integrals of the tail dependence function and their empirical counterparts. The minimisation problem has, with probability tending to one, a unique, global solution. The estimator is consistent and asymptotically normal. The spectral measures of the tail dependence models to which the method applies can be discrete or continuous. Examples demonstrate the applicability and the performance of the method. 
Keywords:  asymptotic statistics;factor model;M-estimation;multivariate extremes;tail dependence. 
JEL:  C13 C14 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2011013&r=ecm 
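A much simpler nonparametric building block related to the paper's target, the empirical tail-dependence coefficient (the empirical tail copula evaluated at (1, 1)), can be sketched in a few lines; the threshold choice k is an illustrative assumption, not the paper's weighted-integral M-estimator.

```python
import random

def upper_tail_dep(x, y, k):
    """Fraction of the k largest x's whose paired y is also among the k largest."""
    n = len(x)
    x_thr = sorted(x)[n - k]
    y_thr = sorted(y)[n - k]
    joint = sum(1 for xi, yi in zip(x, y) if xi > x_thr and yi > y_thr)
    return joint / k

random.seed(1)
n, k = 2000, 100
u = [random.random() for _ in range(n)]
v = [random.random() for _ in range(n)]

print(upper_tail_dep(u, u, k))  # comonotone pairs: estimate is (k-1)/k, near 1
print(upper_tail_dep(u, v, k))  # independent pairs: estimate near k/n, near 0
```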
By:  Hallin, M.; Akker, R. van den; Werker, B.J.M. (Tilburg University, Center for Economic Research) 
Abstract:  We propose a class of distribution-free rank-based tests for the null hypothesis of a unit root. This class is indexed by the choice of a reference density g, which need not coincide with the unknown actual innovation density f. The validity of these tests, in terms of exact finite-sample size, is guaranteed by distribution-freeness, irrespective of the actual underlying density. The tests are locally and asymptotically optimal under a particular asymptotic scheme, for which we provide a complete analysis of asymptotic relative efficiencies. Rather than asymptotic optimality, however, we emphasize finite-sample performance. Since the finite-sample performance of unit root tests depends quite heavily on initial values, we investigate performance as a function of those values. It appears that our rank-based tests significantly outperform the traditional Dickey-Fuller tests, as well as the more recent procedures proposed by Elliott, Rothenberg, and Stock (1996), Ng and Perron (2001), and Elliott and Müller (2006), for a broad range of initial values and for heavy-tailed innovation densities. As such, they provide a useful complement to existing techniques. 
Keywords:  Unit root;Dickey-Fuller test;Local Asymptotic Normality;Rank test. 
JEL:  C12 C22 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2011002&r=ecm 
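For context, the classical benchmark the rank-based tests are compared against, a bare-bones Dickey-Fuller t-statistic (no constant, no lagged differences), can be computed as follows; this is illustrative only and is not the paper's rank test.

```python
import math
import random

def df_tstat(y):
    """t-statistic for rho in  diff(y)_t = rho * y_{t-1} + e_t  (OLS)."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    sxx = sum(v * v for v in ylag)
    rho = sum(a * b for a, b in zip(ylag, dy)) / sxx
    resid = [d - rho * v for d, v in zip(dy, ylag)]
    s2 = sum(r * r for r in resid) / (len(dy) - 1)
    return rho / math.sqrt(s2 / sxx)

random.seed(2)
e = [random.gauss(0, 1) for _ in range(500)]
rw = [0.0]
for v in e:
    rw.append(rw[-1] + v)        # unit root: t-statistic not very negative
ar = [0.0]
for v in e:
    ar.append(0.5 * ar[-1] + v)  # stationary AR(1): strongly negative t-statistic

print(df_tstat(rw), df_tstat(ar))
```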
By:  Yuanhua Feng (University of Paderborn) 
Abstract:  We propose a fast data-driven procedure for decomposing seasonal time series using the Berlin Method, the software used by the German Federal Statistical Office in this context. A formula for the asymptotically optimal bandwidth h_A is obtained, and methods for estimating the unknowns in h_A are proposed. The algorithm is developed by adapting the well-known iterative plug-in idea to time series decomposition. The asymptotic behaviour of the proposal is investigated, and some computational aspects are discussed in detail. A data example shows that the proposal works very well in practice and that data-driven bandwidth selection is a very useful tool for improving the Berlin Method. Deep insights into the iterative plug-in rule are also provided. 
Keywords:  Time series decomposition, Berlin Method, local regression, bandwidth selection, iterative plug-in 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:pdn:wpaper:33&r=ecm 
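The iterative plug-in idea can be sketched in its simplest setting, kernel density bandwidth selection with a Gaussian kernel: estimate the unknown curvature functional with a pilot bandwidth tied to the current h, then update h from the optimal-bandwidth formula, and iterate. This illustrates the general scheme only, not the paper's decomposition algorithm; the pilot inflation factor 1.1 and the iteration count are ad-hoc choices.

```python
import math
import random

SQRT_PI = math.sqrt(math.pi)

def phi4(u):
    """Fourth derivative of the standard normal density."""
    return (u**4 - 6 * u**2 + 3) * math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

def plugin_bandwidth(x, n_iter=5):
    n = len(x)
    h = 1.06 * n ** -0.2  # normal-reference start (unit-variance data assumed)
    for _ in range(n_iter):
        g = 1.1 * h  # pilot bandwidth for the curvature estimate (heuristic)
        # Leave-one-out estimate of psi_4 = integral of (f'')^2
        # (diagonal i == j excluded to avoid a large positive bias).
        psi4 = sum(
            phi4((xi - xj) / g)
            for i, xi in enumerate(x)
            for j, xj in enumerate(x)
            if i != j
        ) / (n * (n - 1) * g**5)
        psi4 = max(psi4, 1e-12)  # numerical safeguard
        # AMISE-optimal bandwidth for the Gaussian kernel: R(K) = 1/(2 sqrt(pi)).
        h = (1.0 / (2 * SQRT_PI * n * psi4)) ** 0.2
    return h

random.seed(3)
data = [random.gauss(0, 1) for _ in range(300)]
h = plugin_bandwidth(data)
print(h)  # for N(0,1) data the target is roughly 0.34
```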
By:  Einmahl, J.H.J.; Kumar, K.; Magnus, J.R. (Tilburg University, Center for Economic Research) 
Abstract:  Bayesian model averaging attempts to combine parameter estimation and model uncertainty in one coherent framework. The choice of prior is then critical. Within an explicit framework of ignorance we define a 'suitable' prior as one which leads to a continuous and suitable analog to the pretest estimator. The normal prior, used in standard Bayesian model averaging, is shown to be unsuitable. The Laplace (or lasso) prior is almost suitable. A suitable prior (the Subbotin prior) is proposed and its properties are investigated. 
Keywords:  Model averaging;Bayesian analysis;Subbotin prior. 
JEL:  C11 C51 C52 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2011003&r=ecm 
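The Subbotin (exponential power) density nests the priors discussed in the abstract: q = 2 recovers the normal and q = 1 the Laplace. A minimal sketch of the standardized density:

```python
import math

def subbotin_pdf(x, q):
    """f(x) = exp(-|x|^q / q) / (2 * q^(1/q) * Gamma(1 + 1/q))."""
    c = 2 * q ** (1 / q) * math.gamma(1 + 1 / q)
    return math.exp(-abs(x) ** q / q) / c

print(subbotin_pdf(0.0, 2))  # equals the standard normal density at 0, ~0.3989
print(subbotin_pdf(0.0, 1))  # equals the Laplace(0, 1) density at 0, i.e. 0.5
```

Smaller q puts more mass in a spike at zero, which is what makes such priors attractive for model averaging.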
By:  Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University) 
Abstract:  Multivariate surveillance is of interest in industrial production as it enables the monitoring of several components. Recently there has also been increased interest in other areas, such as the detection of bioterrorism, spatial surveillance and transaction strategies in finance. Multivariate counterparts to the univariate Shewhart, EWMA and CUSUM methods have been proposed earlier. A review of general approaches to multivariate surveillance is given with respect to how the suggested methods relate to general statistical inference principles. Multivariate online surveillance problems can be complex. The sufficiency principle can be of great use in finding simplifications without loss of information. We use it to clarify the structure of some problems, which helps in finding relevant metrics for evaluating multivariate surveillance and in finding optimal methods. The sufficiency principle is also used to determine efficient methods for combining data from sources with different time lags, surveillance of spatial data being one example. Illustrations are given of surveillance of influenza outbreaks. 
Keywords:  Sequential; Surveillance; Multivariate; Sufficiency 
JEL:  C44 
Date:  2011–03–29 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_005&r=ecm 
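The basic univariate building block behind the multivariate schemes reviewed here, a one-sided CUSUM detector, can be sketched as follows; the reference value k and threshold h are illustrative choices, and the deterministic stream makes the alarm time exactly checkable.

```python
def cusum_alarm(data, k=0.5, h=5.0):
    """Return the index of the first alarm, or None if none is raised."""
    s = 0.0
    for t, x in enumerate(data):
        s = max(0.0, s + x - k)  # accumulate evidence of an upward shift
        if s > h:
            return t
    return None

# In-control level 0 for 100 periods, then a shift to level 1: the
# statistic grows by 0.5 per post-shift period and crosses h = 5 at the
# 11th post-shift observation, index 110.
stream = [0.0] * 100 + [1.0] * 50
print(cusum_alarm(stream))
```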
By:  Yukitoshi Matsushita (Graduate School of Humanities and Social Science, University of Tsukuba); Taisuke Otsu (Cowles Foundation, Yale University) 
Abstract:  This paper studies second-order properties of the empirical likelihood overidentifying restriction test for checking the validity of moment condition models. We show that the empirical likelihood test is Bartlett correctable and suggest second-order refinement methods for the test based on the empirical Bartlett correction and adjusted empirical likelihood. Our second-order analysis supplements that of Chen and Cui (2007), who considered parameter hypothesis testing for overidentified models. In simulation studies we find that the empirical Bartlett correction and adjusted empirical likelihood, assisted by bootstrapping, provide remarkable improvements in size properties. 
Keywords:  Empirical likelihood, GMM, Overidentification test, Bartlett correction, Higher order analysis 
JEL:  C12 C14 
Date:  2011–04 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1791&r=ecm 
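The statistic that the Bartlett correction refines can be shown in its simplest case, empirical likelihood for a mean (textbook Owen machinery, not the paper's overidentified setting): profile out the probability weights via a Lagrange multiplier and report minus twice the log likelihood ratio.

```python
import math

def el_statistic(x, mu0):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0."""
    z = [xi - mu0 for xi in x]
    if max(z) <= 0 or min(z) >= 0:
        return float("inf")  # mu0 outside the convex hull of the data
    # Solve sum z_i / (1 + lam * z_i) = 0; the left side is strictly
    # decreasing in lam on the feasible interval, so bisection works.
    lo = -1.0 / max(z) + 1e-10
    hi = -1.0 / min(z) - 1e-10
    for _ in range(200):
        mid = (lo + hi) / 2
        if sum(zi / (1 + mid * zi) for zi in z) > 0:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return 2 * sum(math.log(1 + lam * zi) for zi in z)

data = [1.2, 0.7, 2.4, 1.9, 0.3, 1.5, 2.1, 0.9, 1.1, 1.8]
mean = sum(data) / len(data)
print(el_statistic(data, mean))        # ~0: the EL ratio peaks at the mean
print(el_statistic(data, mean + 0.5))  # positive: evidence against H0
```

The Bartlett correction then rescales this statistic so its chi-square approximation is accurate to higher order.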
By:  Cerqueti, Roy; Costantini, Mauro; Lupi, Claudio 
Abstract:  The false discovery rate (FDR; Benjamini and Hochberg, 1995) is a powerful approach to multiple testing. However, the original approach developed by Benjamini and Hochberg (1995) applies only to independent tests. Yekutieli (2008) showed that a modification of the Benjamini-Hochberg (BH) approach can be used in the presence of dependent tests, and labelled his procedure separate subsets BH (ssBH). However, Yekutieli (2008) left the practical specification of the subsets of p-values largely unresolved. In this paper we propose a modification of the ssBH procedure based on a selection of the subsets that guarantees that the dependence properties needed to control the FDR are satisfied. We label this new procedure separate pairs BH (spBH). An extensive Monte Carlo analysis comparing the properties of the BH and spBH procedures is presented. 
Keywords:  Multiple testing, False discovery rate, Copulas 
JEL:  C10 C12 
Date:  2011–03–28 
URL:  http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp11059&r=ecm 
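The baseline step-up procedure that both ssBH and the proposed spBH modify is short enough to state in full: sort the p-values, find the largest rank i with p_(i) <= i*alpha/m, and reject everything up to it.

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return the set of indices of hypotheses rejected by the BH step-up rule."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank * alpha / m:
            cutoff = rank  # largest rank meeting the step-up bound
    return set(order[:cutoff])

# Sorted p-values 0.01, 0.02, 0.03, 0.2, 0.8 against bounds
# 0.01, 0.02, 0.03, 0.04, 0.05: the first three are rejected.
print(benjamini_hochberg([0.8, 0.01, 0.2, 0.02, 0.03]))  # {1, 3, 4}
```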
By:  Stanislav Anatolyev (New Economic School); Grigory Kosenok (New Economic School) 
Abstract:  Sequential procedures for testing structural stability do not provide enough guidance on the shape of the boundaries used to decide on acceptance or rejection, requiring only that the overall size of the test is asymptotically controlled. We introduce and motivate a reasonable criterion for the shape of boundaries, which requires that the test size be uniformly distributed over the testing period. Under this criterion, we numerically construct boundaries for the most popular sequential tests, which are characterized by a test statistic behaving asymptotically either as a Wiener process or as a Brownian bridge. We handle this problem both in the context of retrospection over a historical sample and in the context of monitoring newly arriving data. We tabulate the boundaries by fitting them to flexible but parsimonious functional forms. Interesting patterns emerge in an illustrative application of sequential tests to the Phillips curve model. 
Keywords:  Structural stability; sequential tests; CUSUM; retrospection; monitoring; boundaries; asymptotic size 
Date:  2011–04 
URL:  http://d.repec.org/n?u=RePEc:cfr:cefirw:w0123_april2011&r=ecm 
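The numerical flavour of the construction can be sketched in its simplest form: calibrate, by simulation, the constant c such that a discretized Wiener process stays inside the flat boundary |W(t)| <= c on [0, 1] with probability 1 - alpha. The paper builds non-flat boundaries with the uniform-rejection-time property; this sketch only handles the flat case, with illustrative simulation sizes.

```python
import math
import random

def crossing_prob(c, n_paths=1000, n_steps=100, seed=4):
    """Monte Carlo probability that |W(t)| exceeds c somewhere on [0, 1]."""
    rng = random.Random(seed)  # fixed seed: same paths for every c
    dt = 1.0 / n_steps
    hits = 0
    for _ in range(n_paths):
        w = 0.0
        for _ in range(n_steps):
            w += rng.gauss(0.0, math.sqrt(dt))
            if abs(w) > c:
                hits += 1
                break
    return hits / n_paths

def calibrate(alpha=0.05):
    """Bisect for the boundary height with crossing probability alpha."""
    lo, hi = 0.5, 4.0
    for _ in range(20):
        mid = (lo + hi) / 2
        if crossing_prob(mid) > alpha:
            lo = mid  # boundary too low, crossed too often
        else:
            hi = mid
    return (lo + hi) / 2

c = calibrate()
print(c)  # near the known continuous-time value of about 2.24 for alpha = 0.05
```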
By:  Roberto Colombi; Gianmaria Martini; Giorgio Vittadini 
Abstract:  This paper presents a new stochastic frontier model for panel data. The model takes into account firm unobservable heterogeneity and both short-run and long-run sources of inefficiency. Each of these features is modeled by a specific random effect. In this way, firms' latent heterogeneity is not wrongly modeled as inefficiency, and it is possible to disentangle a time-persistent component from the total inefficiency. Under reasonable assumptions, we show that the closed-skew normal distribution allows us to derive both the log-likelihood function of the model and the posterior expected values of the random effects. The new model is compared with nested models by analyzing the efficiency of firms belonging to different sectors. 
Keywords:  Closed-Skew Normal Distribution, Longitudinal Data Analysis, Mixed Models, Stochastic Frontiers 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:brh:wpaper:1101&r=ecm 
By:  Elena Di Bernadino (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Thomas Laloë (JAD - Laboratoire Jean Alexandre Dieudonné - CNRS : UMR6621 - Université de Nice Sophia-Antipolis); Véronique Maume-Deschamps (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Clémentine Prieur (INRIA Rhône-Alpes / LJK Laboratoire Jean Kuntzmann - MOISE - CNRS : UMR5224 - INRIA - Laboratoire Jean Kuntzmann - Université Joseph Fourier - Grenoble I - Institut Polytechnique de Grenoble; Méthodes d'Analyse Stochastique des Codes et Traitements Numériques (GdR MASCOT-NUM) - CNRS : GDR3179) 
Abstract:  This paper deals with the problem of estimating the level sets of an unknown distribution function $F$. A plug-in approach is followed. That is, given a consistent estimator $F_n$ of $F$, we estimate the level sets of $F$ by the level sets of $F_n$. In our setting, no compactness property is required a priori of the level sets to be estimated. We state consistency results with respect to the Hausdorff distance and the volume of the symmetric difference. Our results are motivated by applications in multivariate risk theory; accordingly, we also present simulated and real examples which illustrate our theoretical results. 
Keywords:  Level sets ; Distribution function ; Plug-in estimation ; Hausdorff distance ; Conditional Tail Expectation 
Date:  2011–03–28 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal00580624&r=ecm 
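The plug-in idea can be sketched in the simplest bivariate case: estimate the level set {F >= c} by the level set of the empirical distribution function. For independent uniforms F(x, y) = xy, so the true set is {(x, y): xy >= c} and its volume is known in closed form, which lets the sketch be checked; the sample size and grid resolution are illustrative assumptions.

```python
import math
import random

random.seed(5)
n = 500
pts = [(random.random(), random.random()) for _ in range(n)]

def ecdf(x, y):
    """Bivariate empirical distribution function F_n."""
    return sum(1 for a, b in pts if a <= x and b <= y) / n

# Volume of the plug-in level set {F_n >= c}, evaluated on grid-cell centres.
c, grid = 0.5, 40
cell = 1.0 / grid
vol = sum(
    cell * cell
    for i in range(grid)
    for j in range(grid)
    if ecdf((i + 0.5) * cell, (j + 0.5) * cell) >= c
)

# True volume of {x*y >= 0.5} in the unit square: 0.5 + 0.5*log(0.5) ~ 0.153.
print(vol)
```

Consistency here shows up as the estimated volume approaching the true one; the paper proves this in Hausdorff distance and symmetric-difference volume without compactness assumptions.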
By:  Tongeren, J.W. Van; Magnus, J.R. (Tilburg University, Center for Economic Research) 
Abstract:  We present a Bayesian estimation method applied to an extended set of national accounts data and estimates of approximately 2500 variables. The method is based on conventional national accounts frameworks as compiled by countries in Central America, in particular Guatemala, and on concepts that are defined in the international standards of the System of National Accounts. Identities between the variables are exactly satisfied by the estimates. The method uses ratios between the variables as Bayesian conditions, and introduces prior reliabilities of values of basic data and ratios as criteria to adjust these values in order to satisfy the conditions. The paper not only presents estimates and precisions, but also discusses alternative conditions and reliabilities, in order to test the impact of framework assumptions and carry out sensitivity analyses. These tests involve, among others, the impact on Bayesian estimates of limited annual availability of data, of very low reliabilities (close to nonavailability) of price indices, of limited availability of important administrative and survey data, and also the impact of aggregation of the basic data. We introduce the concept of 'tentative' estimates that are close to conventional national accounts estimates, in order to establish a close link between the Bayesian estimation approach and conventional national accounting. 
Keywords:  Macro accounts;system of national accounts;data frameworks;ratios;reliability;Bayesian estimation;sensitivity analysis;aggregation. 
JEL:  C11 C82 C87 E01 E20 P44 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2011022&r=ecm 
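The core mechanic can be shown in miniature: adjust initial estimates so that an accounting identity holds exactly, moving unreliable figures more than reliable ones via a minimum-variance (generalized least squares) correction. This is a stylized one-identity example with made-up numbers, not the paper's 2500-variable system.

```python
def reconcile(x0, variances, coeffs, target):
    """Minimum-variance adjustment of x0 so that sum(coeffs * x) == target."""
    resid = target - sum(a * x for a, x in zip(coeffs, x0))
    denom = sum(a * a * v for a, v in zip(coeffs, variances))
    # Each estimate moves in proportion to its variance (low reliability).
    return [x + v * a * resid / denom for x, v, a in zip(x0, variances, coeffs)]

# Identity: consumption + investment - gdp = 0.  GDP is deemed reliable
# (small variance), so most of the adjustment falls on investment.
x0 = [70.0, 32.0, 100.0]  # initial estimates; the identity is violated by 2
var = [1.0, 4.0, 0.25]    # prior reliabilities expressed as variances
adj = reconcile(x0, var, coeffs=[1.0, 1.0, -1.0], target=0.0)
print(adj)  # the adjusted figures satisfy the identity exactly
```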
By:  Lanne, Markku; Luoto, Jani 
Abstract:  We propose an estimation method of the new Keynesian Phillips curve (NKPC) based on a univariate noncausal autoregressive model for the inflation rate. By construction, our approach avoids a number of problems related to the GMM estimation of the NKPC. We estimate the hybrid NKPC with quarterly U.S. data (1955:1-2010:3), and both expected future inflation and lagged inflation are found important in determining the inflation rate, with the former clearly dominating. Moreover, inflation persistence turns out to be intrinsic rather than inherited from a persistent driving process. 
Keywords:  Noncausal time series; NonGaussian time series; inflation; Phillips curve 
JEL:  C51 E31 C22 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:29801&r=ecm 
By:  Muris, C.H.M. (Tilburg University) 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ner:tilbur:urn:nbn:nl:ui:124578467&r=ecm 
By:  Simone CerreiaVioglio; Fabio Maccheroni; Massimo Marinacci; Luigi Montrucchio 
Abstract:  Starting with the seminal paper of Gilboa and Schmeidler (1989), an analogy has been hinted at between the maxmin approach of Decision Theory under Ambiguity and the minimax approach of Robust Statistics (e.g., Huber and Strassen, 1973). The present paper formally clarifies this relation by showing the conditions under which the two approaches are actually equivalent. 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:igi:igierp:382&r=ecm 