
New Economics Papers on Econometrics 
By:  Torben G. Andersen (Northwestern University, NBER, and CREATES); Dobrislav Dobrev (Federal Reserve Board of Governors); Ernst Schaumburg (Federal Reserve Bank of New York) 
Abstract:  We provide a first in-depth look at robust estimation of integrated quarticity (IQ) based on high-frequency data. IQ is the key ingredient enabling inference about volatility and the presence of jumps in financial time series and is thus of considerable interest in applications. We document the significant empirical challenges for IQ estimation posed by commonly encountered data imperfections and set forth three complementary approaches for improving IQ-based inference. First, we show that many common deviations from the jump-diffusive null can be dealt with by a novel filtering scheme that generalizes truncation of individual returns to truncation of arbitrary functionals on return blocks. Second, we propose a new family of efficient robust neighborhood truncation (RNT) estimators for integrated power variation based on order statistics of a set of unbiased local power variation estimators on a block of returns. Third, we find that ratio-based inference, originally proposed in this context by Barndorff-Nielsen and Shephard (2002), has desirable robustness properties in the face of regularly occurring data imperfections and thus is well suited for our empirical applications. We confirm that the proposed filtering scheme and the RNT estimators perform well in our extensive simulation designs and in an application to the individual Dow Jones 30 stocks. 
Keywords:  Neighborhood Truncation Estimator, Functional Filtering, Integrated Quarticity, Inference on Integrated Variance, High-Frequency Data 
JEL:  C14 C15 C22 C80 G10 
Date:  2011–05–29 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201123&r=ecm 
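The neighborhood-truncation idea can be illustrated with the jump-robust MinRV estimator from closely related work by the same authors: replacing each squared return by the squared minimum of two adjacent absolute returns bounds the influence of any single jump. A minimal numpy sketch of that idea (not the paper's RNT estimator itself, which operates on blocks of unbiased local power-variation estimators):

```python
import numpy as np

def realized_variance(r):
    """Plain realized variance: sum of squared returns (not jump-robust)."""
    return np.sum(np.asarray(r, dtype=float) ** 2)

def min_rv(r):
    """MinRV-style estimator: squares the smaller of each pair of adjacent
    absolute returns, so one large jump cannot dominate the sum."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    pair_min = np.minimum(np.abs(r[:-1]), np.abs(r[1:]))
    # pi/(pi-2) restores unbiasedness for IV under Gaussian (no-jump) returns
    return (np.pi / (np.pi - 2.0)) * (n / (n - 1.0)) * np.sum(pair_min ** 2)
```

On a jump-free Gaussian sample the two estimators are close; adding a single large jump inflates realized variance by roughly the squared jump size, while MinRV barely moves.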
By:  Bent Jesper Christensen (Aarhus University and CREATES); Olaf Posch (Aarhus University and CREATES); Michel van der Wel (Erasmus University Rotterdam and CREATES) 
Abstract:  We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation frequency. We suggest two approaches for the estimation of structural parameters. The first is a simple regression-based procedure for estimation of the reduced-form parameters of the model, combined with a minimum-distance method for identifying the structural parameters. The second approach uses martingale estimating functions to estimate the structural parameters directly through a nonlinear optimization scheme. We illustrate both approaches by estimating the stochastic AK model with mean-reverting spot interest rates. We also provide Monte Carlo evidence on the small sample behavior of the estimators and estimate the model using 20 years of U.S. macro and financial data. 
Keywords:  Structural estimation, AK-Vasicek model, Martingale estimating function 
JEL:  C13 E32 O40 
Date:  2011–06–09 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201121&r=ecm 
By:  Antonio E. Noriega (Dirección General de Investigación Económica, Banco de México, and Department of Economics and Finance, Universidad de Guanajuato); Daniel Ventosa-Santaularia (Department of Economics and Finance, Universidad de Guanajuato) 
Abstract:  The literature on spurious regressions has found that the t-statistic for testing the null of no relationship between two independent variables diverges asymptotically under a wide variety of nonstationary data generating processes for the dependent and explanatory variables. This paper introduces a simple method which guarantees convergence of this t-statistic to a pivotal limit distribution, when there are drifts in the integrated processes generating the data, thus allowing asymptotic inference. We show that this method can be used to distinguish a genuine relationship from a spurious one among integrated (I(1) and I(2)) processes. Simulation experiments show that the test has good size and power properties in small samples. We apply the proposed procedure to several pairs of apparently independent integrated variables (including the marriages and mortality data of Yule, 1926), and find that our procedure, in contrast to standard ordinary least squares regression, does not find (spurious) significant relationships between the variables. 
Keywords:  Spurious regression, integrated process, detrending, Cointegration 
JEL:  C12 C15 C22 C46 
Date:  2011–05–01 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201115&r=ecm 
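The mechanics can be sketched with two small helpers: an OLS slope t-statistic and a linear detrending step. This is a stylized illustration of why removing the deterministic drift component matters, not the authors' exact procedure; both function names are illustrative:

```python
import numpy as np

def ols_slope_tstat(x, y):
    """t-statistic on the slope in an OLS regression of y on x (with intercept)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    xd = x - x.mean()
    yd = y - y.mean()
    b = xd @ yd / (xd @ xd)          # slope estimate
    resid = yd - b * xd
    s2 = resid @ resid / (n - 2)     # residual variance
    return b / np.sqrt(s2 / (xd @ xd))

def detrend(z):
    """Residuals from regressing z on a constant and a linear time trend."""
    z = np.asarray(z, dtype=float)
    t = np.arange(len(z), dtype=float)
    td = t - t.mean()
    zd = z - z.mean()
    return zd - (td @ zd / (td @ td)) * td
```

Applied to two independent I(1) series with drift, the levels t-statistic is dominated by the common deterministic trends, whereas the detrended series no longer share that trend component.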
By:  George Kapetanios (Queen Mary, University of London); Fotis Papailias (Queen Mary, University of London) 
Abstract:  We consider the issue of block bootstrap methods for processes that exhibit strong dependence. The main difficulty is to transform the series in such a way that these techniques can provide an accurate approximation to the true distribution of the test statistic under consideration. The bootstrap algorithm we suggest consists of the following operations: given x_t ~ I(d_0), (1) estimate the long memory parameter and obtain d̂; (2) difference the series d̂ times; (3) apply the block bootstrap to the differenced series; and (4) cumulate the bootstrap sample d̂ times. Repeating steps 3 and 4 a sufficient number of times yields an estimate of the distribution of the test statistic. Furthermore, we establish the asymptotic validity of this method. Its finite-sample properties are investigated via Monte Carlo experiments, and the results indicate that it can be used as an alternative to, and in most cases is to be preferred over, the sieve AR bootstrap for fractional processes. 
Keywords:  Block Bootstrap, Long memory, Resampling, Strong dependence 
JEL:  C15 C22 C63 
Date:  2011–06 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp679&r=ecm 
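The four bootstrap steps above can be sketched directly, with (fractional) differencing implemented via the binomial expansion of (1-L)^d and step 4 as differencing with -d̂. Here d̂ is taken as given (in practice it would come from, e.g., a log-periodogram regression, not implemented); function names are illustrative:

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference operator (1-L)^d via its binomial
    expansion, truncated at the start of the sample."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):                      # w_k = (-1)^k C(d, k)
        w[k] = w[k - 1] * (k - 1 - d) / k
    out = np.empty(n)
    for t in range(n):
        out[t] = np.dot(w[:t + 1][::-1], x[:t + 1])
    return out

def block_bootstrap(u, block_len, rng):
    """Moving-block bootstrap: resample overlapping blocks and concatenate."""
    n = len(u)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    sample = np.concatenate([u[s:s + block_len] for s in starts])
    return sample[:n]

def fractional_block_bootstrap(x, d_hat, block_len, rng):
    u = frac_diff(x, d_hat)                        # step 2: difference d̂ times
    u_star = block_bootstrap(u, block_len, rng)    # step 3: block bootstrap
    return frac_diff(u_star, -d_hat)               # step 4: cumulate d̂ times
```

A useful property of the truncated filters is that differencing by d and then by -d recovers the original series exactly, since the two binomial weight sequences convolve to the identity.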
By:  J. Isaac Miller (Department of Economics, University of MissouriColumbia) 
Abstract:  This paper introduces cointegrating mixed data sampling (CoMiDaS) regressions, generalizing nonlinear MiDaS regressions in the extant literature. Under a linear mixed-frequency data-generating process, MiDaS regressions provide a parsimoniously parameterized nonlinear alternative when the linear forecasting model is overparameterized and may be infeasible. In spite of potential correlation of the error term both serially and with the regressors, I find that nonlinear least squares consistently estimates the minimum mean-squared forecast error parameter vector. The exact asymptotic distribution of the difference may be nonstandard. I propose a novel testing strategy for nonlinear MiDaS and CoMiDaS regressions against a general but possibly infeasible linear alternative. An empirical application to nowcasting global real economic activity using monthly covariates illustrates the utility of the approach. 
Keywords:  cointegration, mixed-frequency series, mixed data sampling 
JEL:  C12 C13 C22 
Date:  2011–06–14 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:1104&r=ecm 
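The parsimony of MiDaS regressions comes from generating many high-frequency lag weights out of a low-dimensional function, commonly an exponential Almon polynomial. A minimal sketch of that weighting scheme (function names are illustrative, and the nonlinear least squares step is omitted):

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial: two parameters generate a full set
    of normalized, non-negative high-frequency lag weights."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_regressor(x_hf, n_lags, theta1, theta2):
    """Collapse a high-frequency series into one weighted regressor per
    low-frequency period (here: n_lags high-frequency obs per period)."""
    w = exp_almon_weights(theta1, theta2, n_lags)
    n_periods = len(x_hf) // n_lags
    blocks = np.asarray(x_hf, dtype=float)[:n_periods * n_lags]
    blocks = blocks.reshape(n_periods, n_lags)
    # weight the most recent observation in each period first
    return blocks[:, ::-1] @ w
```

With theta1 = theta2 = 0 the scheme reduces to a simple within-period average; nonzero parameters tilt the weight toward recent or distant high-frequency observations.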
By:  Barbara Rossi; Atsushi Inoue 
Abstract:  This paper proposes new methodologies for evaluating out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. We show that the tests proposed in the literature may lack power to detect predictive ability, and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models' forecasting ability. 
Keywords:  Predictive Ability Testing, Forecast Evaluation, Estimation Window 
JEL:  C22 C52 C53 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:duk:dukeec:1104&r=ecm 
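Evaluating predictive ability across a range of window sizes, rather than at one arbitrary size, can be sketched with a simple rolling-mean forecaster standing in for whatever model is under evaluation. The robust test statistics themselves are not reproduced here; names are illustrative:

```python
import numpy as np

def rolling_mean_forecast_errors(y, window):
    """One-step-ahead forecasts from a mean estimated on the last `window`
    observations; returns the out-of-sample forecast errors."""
    y = np.asarray(y, dtype=float)
    preds = np.array([y[t - window:t].mean() for t in range(window, len(y))])
    return y[window:] - preds

def mse_across_windows(y, windows):
    """Out-of-sample MSE for each candidate estimation window size R;
    a sup- or average-type statistic over this range avoids committing
    to a single, possibly snooped, window choice."""
    return {R: float(np.mean(rolling_mean_forecast_errors(y, R) ** 2))
            for R in windows}
```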
By:  Remy Chicheportiche; JeanPhilippe Bouchaud 
Abstract:  We revisit the Kolmogorov–Smirnov and Cramér–von Mises goodness-of-fit (GoF) tests and propose a generalisation to identically distributed, but dependent, univariate random variables. We show that the dependence leads to a reduction of the "effective" number of independent observations. The generalised GoF tests are not distribution-free but rather depend on all the lagged bivariate copulas. These objects, which we call "self-copulas", encode all the nonlinear temporal dependences. We introduce a specific, log-normal model for these self-copulas, for which a number of analytical results are derived. An application to financial time series is provided. As is well known, the dependence is long-ranged in this case, a finding that we confirm using self-copulas. As a consequence, the acceptance rates for GoF tests are substantially higher than if the returns were iid random variables. 
Date:  2011–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1106.3016&r=ecm 
By:  Eric Gautier (CREST – Centre de Recherche en Économie et Statistique, ENSAE ParisTech); Erwan Le Pennec (INRIA Saclay – Île-de-France, SELECT; Département de Mathématiques, Université Paris-Sud XI; CNRS) 
Abstract:  In this article we consider the estimation of the joint distribution of the random coefficients and error term in the nonparametric random coefficients binary choice model. In this model from economics, each agent has to choose between two mutually exclusive alternatives based on the observation of attributes of the two alternatives and of the agents; the random coefficients account for unobserved heterogeneity of preferences. Because of the scale invariance of the model, we want to estimate the density of a random vector of Euclidean norm 1. If the regressors and coefficients are independent, the choice probability conditional on a vector of $d-1$ regressors is an integral of the joint density on half a hypersphere determined by the regressors. Estimation of the joint density is an ill-posed inverse problem where the operator that has to be inverted is the so-called hemispherical transform. We derive lower bounds on the minimax risk under $L^p$ losses and smoothness expressed in terms of Besov spaces on the sphere $\mathbb{S}^{d-1}$. We then consider a needlet thresholded estimator with data-driven thresholds and obtain adaptivity for $L^p$ losses and Besov ellipsoids under assumptions on the random design. 
Keywords:  Discrete choice models, random coefficients, inverse problems, minimax rate optimality, adaptation, needlets, data-driven thresholding 
Date:  2011–06 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:inria00601274&r=ecm 
By:  Thanasis Stengos (University of Guelph); Brennan S. Thompson (Ryerson University) 
Abstract:  In this paper, we propose a test of bivariate stochastic dominance using a generalized framework for testing inequality constraints. Unlike existing tests, this test has the advantage of utilizing the covariance structure of the estimates of the joint distribution functions. The performance of our proposed test is examined by way of a Monte Carlo experiment. We also consider an empirical example which utilizes household survey data on income and health status. 
Keywords:  Stochastic dominance, inequality restrictions, multidimensional welfare 
JEL:  C12 C15 D63 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:gue:guelph:201107.&r=ecm 
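The basic ingredient of such a test — comparing empirical joint distribution functions over a grid — can be sketched as follows. The paper's actual procedure adds inequality-constrained inference using the covariance structure of these estimates, which is omitted here; names are illustrative:

```python
import numpy as np

def empirical_joint_cdf(sample, grid):
    """Evaluate the empirical joint CDF of a 2-D sample at each grid point."""
    s = np.asarray(sample, dtype=float)
    return np.array([np.mean((s[:, 0] <= g0) & (s[:, 1] <= g1))
                     for g0, g1 in grid])

def max_dominance_violation(x, y, grid):
    """Largest violation of F_X <= F_Y over the grid; a value <= 0 is
    consistent with X first-order stochastically dominating Y in the
    bivariate sense."""
    return float(np.max(empirical_joint_cdf(x, grid)
                        - empirical_joint_cdf(y, grid)))
```

Shifting every observation upward in both dimensions produces a sample whose joint CDF lies weakly below the original everywhere, so the violation statistic is non-positive.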
By:  Yingyao Hu and Ji-Liang Shiu 
Abstract:  This paper provides sufficient conditions for the nonparametric identification of the regression function m(.) in a regression model with an endogenous regressor x and an instrumental variable z. It has been shown that the identification of the regression function from the conditional expectation of the dependent variable on the instrument relies on the completeness of the distribution of the endogenous regressor conditional on the instrument, i.e., f(x|z). We provide sufficient conditions for the completeness of f(x|z) without imposing a specific functional form, such as the exponential family. We show that if the conditional density f(x|z) coincides with an existing complete density at a limit point in the support of z, then f(x|z) itself is complete, and therefore, the regression function m(.) is nonparametrically identified. We use this general result to provide specific sufficient conditions for completeness in three different specifications of the relationship between the endogenous regressor x and the instrumental variable z. 
Date:  2011–06 
URL:  http://d.repec.org/n?u=RePEc:jhu:papers:581&r=ecm 
By:  Lee, J. 
Abstract:  The first chapter is an introduction, and Chapter 2 proposes formal frameworks for identifiability and testability of structural features allowing for set identification. The results in Chapter 2 are used in the other chapters. The second section of Chapter 3, Chapter 4, and Chapter 5 contain new results. Chapter 3 has two sections. The first section introduces the quantile-based control function approach (QCFA) proposed by Chesher (2003) to compare and contrast with other results in Chapters 4 and 5. The second section contains new findings on the local endogeneity bias and testability of endogeneity. Chapter 4 assumes that the structural relations are differentiable and applies the QCFA to several models for discrete outcomes, reporting point identification results for partial derivatives with respect to a continuously varying endogenous variable. Chapter 5 relaxes the differentiability assumptions and applies the QCFA with an ordered discrete endogenous variable. The model in Chapter 5 set-identifies partial differences of a nonseparable structural function. 
Date:  2010–10–28 
URL:  http://d.repec.org/n?u=RePEc:ner:ucllon:http://discovery.ucl.ac.uk/516136/&r=ecm 
By:  Christophe Ley; Yves-Caoimhin Swan 
Abstract:  We provide a general framework for characterizing families of (univariate, multivariate, discrete and continuous) distributions in terms of a parameter of interest. We show how this allows for recovering known ChenStein characterizations, and for constructing many more. Several examples are worked out in full, and different potential applications are discussed. 
Keywords:  characterization theorem; ChenStein characterization; local and scale parameters; parameter of interest 
Date:  2011–06 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/88988&r=ecm 
By:  Marie Brière; Bastien Drut; Valérie Mignon; Kim Oosterlinck; Ariane Szafarz 
Abstract:  Levy and Roll (Review of Financial Studies, 2010) have recently revived the debate on the market portfolio's efficiency, suggesting that it may be mean-variance efficient after all. This paper develops an alternative test of portfolio mean-variance efficiency based on the realistic assumption that all assets are risky. The test is based on the vertical distance of a portfolio from the efficient frontier. Monte Carlo simulations show that our test outperforms the previous mean-variance efficiency tests for large samples since it produces smaller size distortions for comparable power. Our empirical application to the US equity market highlights that the market portfolio is not mean-variance efficient, and so invalidates the zero-beta CAPM. 
Keywords:  Efficient portfolio, mean-variance efficiency, efficiency test. 
JEL:  G11 G12 C12 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:drm:wpaper:201120&r=ecm 
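The vertical-distance idea can be sketched for the classical risky-asset frontier, whose minimum variance at mean mu is (C*mu^2 - 2*A*mu + B)/D, with A = 1'S⁻¹m, B = m'S⁻¹m, C = 1'S⁻¹1 and D = BC - A²: the distance is the frontier return at the portfolio's own variance minus the portfolio's return. This is a stylized version of the geometry, not the authors' test statistic:

```python
import numpy as np

def frontier_vertical_distance(m, s, w):
    """Vertical distance of fully-invested portfolio w from the risky-asset
    efficient frontier: frontier expected return at the portfolio's
    variance, minus the portfolio's own expected return."""
    inv = np.linalg.inv(s)
    ones = np.ones(len(m))
    a = ones @ inv @ m
    b = m @ inv @ m
    c = ones @ inv @ ones
    d = b * c - a ** 2
    var_p = w @ s @ w
    # solve c*mu^2 - 2*a*mu + (b - d*var_p) = 0 for the upper branch
    disc = a ** 2 - c * (b - d * var_p)
    mu_frontier = (a + np.sqrt(max(disc, 0.0))) / c
    return float(mu_frontier - w @ m)
```

The global minimum-variance portfolio lies on the frontier, so its distance is zero, and any other fully-invested portfolio has a non-negative distance.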
By:  Hatemi-J, Abdulnasser 
Abstract:  This paper introduces asymmetric impulse response functions and asymmetric variance decompositions. It is shown how the underlying variables can be transformed into cumulative positive and negative changes in order to estimate the impulses to an asymmetric innovation. An application is provided to demonstrate how the propagation mechanism of these asymmetric impulses and responses operates. 
Keywords:  VAR modelling; Asymmetric Impulses; Fiscal Policy 
JEL:  C32 H21 C50 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:31700&r=ecm 
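The underlying data transformation — splitting a series into cumulative positive and cumulative negative changes, so that y_t = y_0 + y_t⁺ + y_t⁻ — is straightforward:

```python
import numpy as np

def asymmetric_components(y):
    """Decompose a series into cumulative positive and cumulative negative
    changes, so that y[t] = y[0] + y_pos[t] + y_neg[t]."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y, prepend=y[0])          # first element contributes zero
    y_pos = np.cumsum(np.maximum(dy, 0.0)) # cumulative positive shocks
    y_neg = np.cumsum(np.minimum(dy, 0.0)) # cumulative negative shocks
    return y_pos, y_neg
```

Impulse responses estimated from a VAR in these components then trace out the propagation of positive and negative innovations separately.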
By:  Marc Henry; Ismael Mourifié 
Abstract:  In the spatial model of voting, voters choose the candidate closest to them in the ideological space. Recent work by Degan and Merlo (2009) shows that it is falsifiable on the basis of individual voting data in multiple elections. We show how to tackle the fact that the model only partially identifies the distribution of voting profiles, and we give a formal revealed preference test of the spatial voting model in three national elections in the US, strongly rejecting the spatial model in all cases. We also construct confidence regions for partially identified voter characteristics in an augmented model with an unobserved valence dimension, and identify the amount of voter heterogeneity necessary to reconcile the data with spatial preferences. 
Keywords:  revealed preference, partial identification, elliptic preferences, voting behaviour 
Date:  2011–06–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2011s49&r=ecm 
By:  Nicolas Ruiz 
Abstract:  Masking methods for the safe dissemination of microdata consist of distorting the original data while preserving a predefined set of statistical properties in the microdata. For continuous variables, available methodologies rely essentially on matrix masking and in particular on adding noise to the original values, using more or less refined procedures depending on the extent of information that one seeks to preserve. Almost all of these methods make use of the critical assumption that the original datasets follow a normal distribution and/or that the noise has such a distribution. This assumption is, however, restrictive in the sense that few variables follow empirically a Gaussian pattern: the distribution of household income, for example, is positively skewed, and this skewness is essential information that has to be considered and preserved. This paper addresses these issues by presenting a simple multiplicative masking method that preserves skewness of the original data while offering a sufficient level of disclosure risk control. Numerical examples are provided, leading to the suggestion that this method could be well-suited for the dissemination of a broad range of microdata, including those based on administrative and business records. 
Date:  2011–02–23 
URL:  http://d.repec.org/n?u=RePEc:oec:stdaaa:2011/2en&r=ecm 
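The generic idea — multiplying by positive noise with unit mean instead of adding Gaussian noise — can be sketched as follows; the paper's actual scheme and its disclosure-risk controls are more refined, and the function name is illustrative:

```python
import numpy as np

def multiplicative_mask(x, noise_sd=0.1, rng=None):
    """Mask positive microdata by multiplying with lognormal noise of mean 1:
    masked values stay positive and the right-skewed shape of the data is
    preserved, unlike with additive Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    # E[exp(N(m, s^2))] = exp(m + s^2/2) = 1 when m = -s^2/2
    noise = rng.lognormal(mean=-noise_sd ** 2 / 2.0,
                          sigma=noise_sd, size=len(x))
    return np.asarray(x, dtype=float) * noise
```

Because the noise has mean one, aggregate totals are preserved in expectation while individual records are perturbed in proportion to their magnitude.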
By:  Alfredo García-Hiernaux (Departamento de Economía Cuantitativa (Department of Quantitative Economics), Facultad de Ciencias Económicas y Empresariales (Faculty of Economics and Business), Universidad Complutense de Madrid); David E. Guerrero 
Abstract:  This paper provides a new, unified, and flexible framework to measure and characterize convergence in prices. We formally define this notion and propose a model to represent a wide range of transition paths that converge to a common steady-state. Our framework enables the econometric measurement of such transitional behaviors and the development of testing procedures. Specifically, we derive a statistical test to determine whether convergence exists and, if so, of which type: catching-up or steady-state. The application of this methodology to historic wheat prices results in a novel explanation of the convergence processes experienced during the 19th century. 
Keywords:  Price convergence, cointegration, law of one price. 
JEL:  C22 C32 N70 F15 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ucm:doicae:1122&r=ecm 
By:  Darrell A Turkington (UWA Business School, The University of Western Australia) 
Abstract:  Simple theorems based on a mathematical property of ∂vecY/∂vecX provide powerful tools for obtaining matrix calculus results. By way of illustration, new results are obtained for matrix derivatives involving vecA, vechA, v(A), and vecX where X is a symmetric matrix. The analysis explains exactly how a log-likelihood function should be differentiated using matrix calculus. 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:uwa:wpaper:1106&r=ecm 
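A concrete instance of the symmetric-matrix subtlety: for symmetric X, the gradient of log|X| with respect to the distinct elements is 2X⁻¹ - diag(X⁻¹), not simply X⁻¹, because each off-diagonal element appears twice. A finite-difference check (illustrative only, not taken from the paper):

```python
import numpy as np

def logdet_grad_symmetric(x):
    """Gradient of log|X| with respect to the distinct elements of a
    symmetric X, arranged as a matrix: 2*X^{-1} - diag(X^{-1})."""
    inv = np.linalg.inv(x)
    return 2.0 * inv - np.diag(np.diag(inv))

def numeric_grad(x, i, j, eps=1e-6):
    """Central finite difference, perturbing (i, j) and (j, i) together so
    the perturbed matrix stays symmetric (one entry when i == j)."""
    e = np.zeros_like(x)
    e[i, j] = eps
    e[j, i] = eps
    up = np.linalg.slogdet(x + e)[1]
    dn = np.linalg.slogdet(x - e)[1]
    return (up - dn) / (2.0 * eps)
```

The doubling of the off-diagonal terms is exactly the kind of bookkeeping that a careful vec/vech treatment of the log-likelihood makes automatic.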
By:  Abdulnasser, Hatemi-J 
Abstract:  This article extends the seminal work of Granger and Yoon (2002) on hidden cointegration to panel data analysis. It shows how cumulative negative and positive changes can be constructed for each panel variable. It also shows how tests similar to the augmented Dickey-Fuller tests can be implemented to find out whether the cointegration is hidden in the panel or not. An application is provided to investigate the impact of permanent positive and negative shocks in government expenditure on national output in a panel of three countries. 
Keywords:  Asymmetry; Panel Data; Cointegration; Testing; Government Spending; Output 
JEL:  H21 C33 
Date:  2011–06 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:31604&r=ecm 