nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒06‒25
nineteen papers chosen by
Sune Karlsson
Orebro University

  1. A Functional Filtering and Neighborhood Truncation Approach to Integrated Quarticity Estimation By Torben G. Andersen; Dobrislav Dobrev; Ernst Schaumburg
  2. Estimating Dynamic Equilibrium Models using Macro and Financial Data By Bent Jesper Christensen; Olaf Posch; Michel van der Wel
  3. A Simple Test for Spurious Regressions By Antonio E. Noriega; Daniel Ventosa-Santaularia
  4. Block Bootstrap and Long Memory By George Kapetanios; Fotis Papailias
  5. Cointegrating MiDaS Regressions and a MiDaS Test By J. Isaac Miller
  6. Out-of-Sample Forecast Tests Robust to Window Size Choice By Barbara Rossi; Atsushi Inoue
  7. Goodness-of-Fit tests with Dependent Observations By Remy Chicheportiche; Jean-Philippe Bouchaud
  8. Adaptive estimation in the nonparametric random coefficients binary choice model by needlet thresholding By Eric Gautier; Erwan Le Pennec
  9. Testing for Bivariate Stochastic Dominance Using Inequality Restrictions By Thanasis Stengos; Brennan S. Thompson
  10. Nonparametric Identification Using Instrumental Variables: Sufficient Conditions For Completeness By Yingyao Hu; Ji-Liang Shiu
  11. Nonparametric structural analysis of discrete data: the quantile-based control function approach. By Lee, J.
  12. Univariate and Multivariate Chen-Stein Characterizations-a Parametric Approach By Christophe Ley; Yves-Caoimhin Swan
  13. Is the Market Portfolio Efficient? A New Test to Revisit the Roll (1977) versus Levy and Roll (2010) Controversy By Marie Brière; Bastien Drut; Valérie Mignon; Kim Oosterlinck; Ariane Szafarz
  14. Asymmetric generalized impulse responses and variance decompositions with an application By Hatemi-J, Abdulnasser
  15. Euclidean Revealed Preferences: Testing the Spatial Voting Model By Marc Henry; Ismael Mourifié
  16. A Multiplicative Masking Method for Preserving the Skewness of the Original Micro-records By Nicolas Ruiz
  17. Convergence and Cointegration By Alfredo García-Hiernaux; David E. Guerrero
  18. On The Differentiation Of A Log-Likelihood Function Using Matrix Calculus By Darrell A Turkington
  19. Hidden panel cointegration By Abdulnasser, Hatemi-J

  1. By: Torben G. Andersen (Northwestern University, NBER, and CREATES); Dobrislav Dobrev (Federal Reserve Board of Governors); Ernst Schaumburg (Federal Reserve Bank of New York)
    Abstract: We provide a first in-depth look at robust estimation of integrated quarticity (IQ) based on high frequency data. IQ is the key ingredient enabling inference about volatility and the presence of jumps in financial time series and is thus of considerable interest in applications. We document the significant empirical challenges for IQ estimation posed by commonly encountered data imperfections and set forth three complementary approaches for improving IQ based inference. First, we show that many common deviations from the jump diffusive null can be dealt with by a novel filtering scheme that generalizes truncation of individual returns to truncation of arbitrary functionals on return blocks. Second, we propose a new family of efficient robust neighborhood truncation (RNT) estimators for integrated power variation based on order statistics of a set of unbiased local power variation estimators on a block of returns. Third, we find that ratio-based inference, originally proposed in this context by Barndorff-Nielsen and Shephard (2002), has desirable robustness properties in the face of regularly occurring data imperfections and thus is well suited for our empirical applications. We confirm that the proposed filtering scheme and the RNT estimators perform well in our extensive simulation designs and in an application to the individual Dow Jones 30 stocks.
    Keywords: Neighborhood Truncation Estimator, Functional Filtering, Integrated Quarticity, Inference on Integrated Variance, High-Frequency Data
    JEL: C14 C15 C22 C80 G10
    Date: 2011–05–29
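As a rough illustration of the quantities involved (not the paper's RNT estimator), the sketch below computes the standard realized-quarticity estimator RQ = (n/3) Σ r_i^4 and a naive truncation of individual returns; the cutoff rule and simulated data are assumptions made for the example.

```python
import numpy as np

def realized_quarticity(returns):
    """Standard realized quarticity: RQ = (n/3) * sum of fourth powers."""
    n = len(returns)
    return n / 3.0 * np.sum(returns ** 4)

def truncated_rq(returns, c=3.0):
    """Drop returns larger than c sample standard deviations, then apply
    the standard estimator -- a naive version of individual-return
    truncation, not the paper's neighborhood-truncation estimator."""
    kept = returns[np.abs(returns) <= c * np.std(returns)]
    return len(kept) / 3.0 * np.sum(kept ** 4)

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, 390)       # one day of simulated 1-minute returns
r_jump = r.copy()
r_jump[100] += 0.1                   # add a single jump
print(realized_quarticity(r_jump) / realized_quarticity(r))  # jump inflates RQ
print(truncated_rq(r_jump) / realized_quarticity(r))         # truncation removes the jump
```

The jump term enters RQ at the fourth power, which is why IQ estimation is so sensitive to data imperfections and why truncation-based robustification matters.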
  2. By: Bent Jesper Christensen (Aarhus University and CREATES); Olaf Posch (Aarhus University and CREATES); Michel van der Wel (Erasmus University Rotterdam and CREATES)
    Abstract: We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation frequency. We suggest two approaches for the estimation of structural parameters. The first is a simple regression-based procedure for estimation of the reduced-form parameters of the model, combined with a minimum-distance method for identifying the structural parameters. The second approach uses martingale estimating functions to estimate the structural parameters directly through a non-linear optimization scheme. We illustrate both approaches by estimating the stochastic AK model with mean-reverting spot interest rates. We also provide Monte Carlo evidence on the small sample behavior of the estimators and estimate the model using 20 years of U.S. macro and financial data.
    Keywords: Structural estimation, AK-Vasicek model, Martingale estimating function
    JEL: C13 E32 O40
    Date: 2011–06–09
  3. By: Antonio E. Noriega (Dirección General de Investigación Económica, Banco de México, and Department of Economics and Finance, Universidad de Guanajuato); Daniel Ventosa-Santaularia (Department of Economics and Finance, Universidad de Guanajuato)
    Abstract: The literature on spurious regressions has found that the t-statistic for testing the null of no relationship between two independent variables diverges asymptotically under a wide variety of nonstationary data generating processes for the dependent and explanatory variables. This paper introduces a simple method which guarantees convergence of this t-statistic to a pivotal limit distribution, when there are drifts in the integrated processes generating the data, thus allowing asymptotic inference. We show that this method can be used to distinguish a genuine relationship from a spurious one among integrated (I(1) and I(2)) processes. Simulation experiments show that the test has good size and power properties in small samples. We apply the proposed procedure to several pairs of apparently independent integrated variables (including the marriages and mortality data of Yule, 1926), and find that our procedure, in contrast to standard ordinary least squares regression, does not find (spurious) significant relationships between the variables.
    Keywords: Spurious regression, integrated process, detrending, Cointegration
    JEL: C12 C15 C22 C46
    Date: 2011–05–01
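The divergence the paper addresses is easy to reproduce: regressing one independent random walk with drift on another yields t-statistics that grow with the sample size. The sketch below illustrates the problem, not the paper's corrected test; the drift value and seeds are assumptions for the example.

```python
import numpy as np

def spurious_t(n, seed):
    """OLS t-statistic on the slope from regressing one independent random
    walk with drift on another; it diverges with the sample size instead
    of settling near zero, even though the series are unrelated."""
    rng = np.random.default_rng(seed)
    y = np.cumsum(0.1 + rng.normal(size=n))  # I(1) with drift
    x = np.cumsum(0.1 + rng.normal(size=n))  # independent I(1) with drift
    X = np.column_stack([np.ones(n), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    se = np.sqrt(e @ e / (n - 2) * np.linalg.inv(X.T @ X)[1, 1])
    return b[1] / se

print([round(abs(spurious_t(n, 7)), 1) for n in (50, 200, 800)])
```

Standard critical values (roughly ±2) are therefore useless here, which is the motivation for a t-statistic with a pivotal limit distribution.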
  4. By: George Kapetanios (Queen Mary, University of London); Fotis Papailias (Queen Mary, University of London)
    Abstract: We consider block bootstrap methods for processes that exhibit strong dependence. The main difficulty is to transform the series in such a way that these techniques yield an accurate approximation to the true distribution of the test statistic under consideration. The bootstrap algorithm we suggest consists of the following operations: given x_t ~ I(d_0), 1) estimate the long memory parameter to obtain d̂; 2) difference the series d̂ times; 3) apply the block bootstrap to the differenced series; and 4) cumulate the bootstrap sample d̂ times. Repeating steps 3 and 4 a sufficient number of times yields an estimate of the distribution of the test statistic. We also establish the asymptotic validity of this method. Its finite-sample properties are investigated via Monte Carlo experiments, and the results indicate that it can be used as an alternative to, and in most cases is preferable to, the sieve AR bootstrap for fractional processes.
    Keywords: Block Bootstrap, Long Memory, Resampling, Strong Dependence
    JEL: C15 C22 C63
    Date: 2011–06
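Steps 2-4 of the algorithm above can be sketched as follows; step 1 (estimating d, e.g. by local Whittle) is assumed already done, and the moving-block resampling scheme and fractional-differencing truncation are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def frac_diff(x, d):
    """Apply (1 - L)^d to x via the truncated binomial expansion;
    a negative d cumulates (anti-differences) the series."""
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(n)])

def block_bootstrap_long_memory(x, d_hat, block_len, rng):
    """Steps 2-4: difference d_hat times, moving-block resample the
    (approximately) short-memory residual, then cumulate d_hat times."""
    u = frac_diff(x, d_hat)                                  # step 2
    n = len(u)
    starts = rng.integers(0, n - block_len + 1, size=n // block_len + 1)
    u_star = np.concatenate([u[s:s + block_len] for s in starts])[:n]
    return frac_diff(u_star, -d_hat)                         # step 4

rng = np.random.default_rng(0)
x = frac_diff(rng.normal(size=200), -0.3)   # simulate an I(0.3) series
xb = block_bootstrap_long_memory(x, 0.3, block_len=10, rng=rng)
```

Blocking preserves the remaining short-range dependence within each block, while the difference-and-cumulate steps handle the long memory that plain blocking cannot.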
  5. By: J. Isaac Miller (Department of Economics, University of Missouri-Columbia)
    Abstract: This paper introduces cointegrating mixed data sampling (CoMiDaS) regressions, generalizing nonlinear MiDaS regressions in the extant literature. Under a linear mixed-frequency data-generating process, MiDaS regressions provide a parsimoniously parameterized nonlinear alternative when the linear forecasting model is over-parameterized and may be infeasible. In spite of potential correlation of the error term both serially and with the regressors, I find that nonlinear least squares consistently estimates the minimum mean-squared forecast error parameter vector. The exact asymptotic distribution of the difference may be non-standard. I propose a novel testing strategy for nonlinear MiDaS and CoMiDaS regressions against a general but possibly infeasible linear alternative. An empirical application to nowcasting global real economic activity using monthly covariates illustrates the utility of the approach.
    Keywords: cointegration, mixed-frequency series, mixed data sampling
    JEL: C12 C13 C22
    Date: 2011–06–14
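For readers unfamiliar with MiDaS, the parsimonious parameterization typically works through an exponential Almon lag polynomial; the sketch below shows the standard MiDaS machinery (not the paper's CoMiDaS estimator), collapsing m high-frequency observations into one regressor per low-frequency period. The theta values are arbitrary illustrative choices.

```python
import numpy as np

def exp_almon(theta1, theta2, K):
    """Exponential Almon lag weights, the parsimonious polynomial commonly
    used to parameterize MiDaS regressions; positive, summing to one."""
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_aggregate(x, m, theta1=0.1, theta2=-0.05):
    """Collapse a high-frequency series x (m observations per low-frequency
    period) into one weighted regressor per period, most recent first."""
    w = exp_almon(theta1, theta2, m)
    T = len(x) // m
    return np.array([w @ x[t * m:(t + 1) * m][::-1] for t in range(T)])

x = np.arange(24, dtype=float)   # e.g. 24 months of data
z = midas_aggregate(x, m=3)      # 8 quarterly regressors
```

Only two lag-polynomial parameters are estimated regardless of m, which is what makes the MiDaS regression feasible when the unrestricted linear model is over-parameterized.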
  6. By: Barbara Rossi; Atsushi Inoue
    Abstract: This paper proposes new methodologies for evaluating out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. We show that the tests proposed in the literature may lack power to detect predictive ability, and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models' forecasting ability.
    Keywords: Predictive Ability Testing, Forecast Evaluation, Estimation Window
    JEL: C22 C52 C53
    Date: 2011
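The window-size sensitivity the paper targets is easy to exhibit by computing rolling-window forecast MSEs over a grid of window sizes, as in the sketch below; the data-generating process and window grid are assumptions for the illustration, and the paper's robust test statistics are not implemented here.

```python
import numpy as np

def oos_mse(y, x, window):
    """One-step-ahead rolling-window OLS forecasts of y[t+1] from x[t];
    returns the out-of-sample mean squared forecast error."""
    errs = []
    for t in range(window, len(y) - 1):
        Xw = np.column_stack([np.ones(window), x[t - window:t]])
        b, *_ = np.linalg.lstsq(Xw, y[t - window + 1:t + 1], rcond=None)
        errs.append(y[t + 1] - (b[0] + b[1] * x[t]))
    return float(np.mean(np.square(errs)))

rng = np.random.default_rng(0)
x = rng.normal(size=300)
e = rng.normal(size=300)
y = np.empty(300)
y[0] = e[0]
y[1:] = 0.5 * x[:-1] + e[1:]          # y[t+1] = 0.5 * x[t] + noise
mses = {w: oos_mse(y, x, w) for w in (40, 80, 120)}
print(mses)                            # the verdict can vary with the window
```

Reporting results for a single, cherry-picked window is exactly the data-snooping risk the abstract warns about; the proposed methodologies evaluate predictive ability across the whole range instead.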
  7. By: Remy Chicheportiche; Jean-Philippe Bouchaud
    Abstract: We revisit the Kolmogorov-Smirnov and Cramér-von Mises goodness-of-fit (GoF) tests and propose a generalisation to identically distributed, but dependent univariate random variables. We show that the dependence leads to a reduction of the "effective" number of independent observations. The generalised GoF tests are not distribution-free but rather depend on all the lagged bivariate copulas. These objects, which we call "self-copulas", encode all the non-linear temporal dependences. We introduce a specific, log-normal model for these self-copulas, for which a number of analytical results are derived. An application to financial time series is provided. As is well known, the dependence is long-ranged in this case, a finding that we confirm using self-copulas. As a consequence, the acceptance rates for GoF tests are substantially higher than if the returns were iid random variables.
    Date: 2011–06
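The effect the authors exploit can be illustrated with a plain KS test: applied to AR(1) data using critical values derived for iid samples, the nominal 5% test over-rejects badly because the effective number of independent observations is smaller than the sample size. The AR(1) design and Monte Carlo settings below are assumptions for the illustration, not the paper's self-copula model.

```python
import numpy as np
from math import erf, sqrt

def ks_stat(x, cdf):
    """Kolmogorov-Smirnov statistic sup |F_n - F| for a sample x."""
    xs = np.sort(x)
    n = len(xs)
    F = np.array([cdf(v) for v in xs])
    return max(np.max(np.arange(1, n + 1) / n - F),
               np.max(F - np.arange(n) / n))

def rejection_rate(phi, n=200, reps=300, seed=42):
    """Share of AR(1) samples whose KS statistic, tested against the true
    stationary marginal, exceeds the 5% critical value valid for iid data."""
    rng = np.random.default_rng(seed)
    s = 1.0 / sqrt(1.0 - phi * phi)          # stationary std of the AR(1)
    cdf = lambda v: 0.5 * (1.0 + erf(v / (s * sqrt(2.0))))
    crit = 1.358 / sqrt(n)                   # asymptotic 5% value, iid case
    rej = 0
    for _ in range(reps):
        e = rng.normal(size=n + 100)         # 100 burn-in draws
        x = np.empty(n + 100)
        x[0] = e[0]
        for t in range(1, n + 100):
            x[t] = phi * x[t - 1] + e[t]
        if ks_stat(x[100:], cdf) > crit:
            rej += 1
    return rej / reps

print(rejection_rate(0.0), rejection_rate(0.8))  # near 5% vs far above it
```

With long-ranged dependence, as in financial returns, the distortion is even more severe, hence the need for the generalised tests the paper develops.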
  8. By: Eric Gautier (CREST - Centre de Recherche en Économie et Statistique - INSEE - École Nationale de la Statistique et de l'Administration Économique, ENSAE - École Nationale de la Statistique et de l'Administration Économique - ENSAE ParisTech); Erwan Le Pennec (INRIA Saclay - Ile de France - SELECT - INRIA - Université Paris Sud - Paris XI - CNRS : UMR, Département de Mathématiques-Université de Paris X1 - Université Paris Sud - Paris XI)
    Abstract: In this article we consider the estimation of the joint distribution of the random coefficients and error term in the nonparametric random coefficients binary choice model. In this model from economics, each agent has to choose between two mutually exclusive alternatives based on observed attributes of the two alternatives and of the agent; the random coefficients account for unobserved heterogeneity of preferences. Because of the scale invariance of the model, we estimate the density of a random vector of Euclidean norm 1. If the regressors and coefficients are independent, the choice probability conditional on a vector of $d-1$ regressors is an integral of the joint density over half a hyper-sphere determined by the regressors. Estimation of the joint density is an ill-posed inverse problem in which the operator that has to be inverted is the so-called hemispherical transform. We derive lower bounds on the minimax risk under $L^p$ losses and smoothness expressed in terms of Besov spaces on the sphere $\mathbb{S}^{d-1}$. We then consider a needlet thresholded estimator with data-driven thresholds and obtain adaptivity for $L^p$ losses and Besov ellipsoids under assumptions on the random design.
    Keywords: Discrete choice models; random coefficients; inverse problems; minimax rate optimality; adaptation; needlets; data-driven thresholding.
    Date: 2011–06
  9. By: Thanasis Stengos (University of Guelph); Brennan S. Thompson (Ryerson University)
    Abstract: In this paper, we propose a test of bivariate stochastic dominance using a generalized framework for testing inequality constraints. Unlike existing tests, this test has the advantage of utilizing the covariance structure of the estimates of the joint distribution functions. The performance of our proposed test is examined by way of a Monte Carlo experiment. We also consider an empirical example which utilizes household survey data on income and health status.
    Keywords: Stochastic dominance, inequality restrictions, multidimensional welfare
    JEL: C12 C15 D63
    Date: 2011
  10. By: Yingyao Hu; Ji-Liang Shiu
    Abstract: This paper provides sufficient conditions for the nonparametric identification of the regression function m(.) in a regression model with an endogenous regressor x and an instrumental variable z. It has been shown that the identification of the regression function from the conditional expectation of the dependent variable on the instrument relies on the completeness of the distribution of the endogenous regressor conditional on the instrument, i.e., f(x|z). We provide sufficient conditions for the completeness of f(x|z) without imposing a specific functional form, such as the exponential family. We show that if the conditional density f(x|z) coincides with an existing complete density at a limit point in the support of z, then f(x|z) itself is complete, and therefore, the regression function m(.) is nonparametrically identified. We use this general result to provide specific sufficient conditions for completeness in three different specifications of the relationship between the endogenous regressor x and the instrumental variable z.
    Date: 2011–06
  11. By: Lee, J.
    Abstract: The first chapter is an introduction, and Chapter 2 proposes formal frameworks for the identifiability and testability of structural features allowing for set identification. The results in Chapter 2 are used in the other chapters. The second section of Chapter 3, Chapter 4, and Chapter 5 contain new results. Chapter 3 has two sections. The first section introduces the quantile-based control function approach (QCFA) proposed by Chesher (2003) in order to compare and contrast other results in Chapters 4 and 5. The second section contains new findings on the local endogeneity bias and the testability of endogeneity. Chapter 4 assumes that the structural relations are differentiable and applies the QCFA to several models for discrete outcomes, reporting point identification results for partial derivatives with respect to a continuously varying endogenous variable. Chapter 5 relaxes the differentiability assumptions and applies the QCFA with an ordered discrete endogenous variable; the model in Chapter 5 set-identifies partial differences of a nonseparable structural function.
    Date: 2010–10–28
  12. By: Christophe Ley; Yves-Caoimhin Swan
    Abstract: We provide a general framework for characterizing families of (univariate, multivariate, discrete and continuous) distributions in terms of a parameter of interest. We show how this allows for recovering known Chen-Stein characterizations, and for constructing many more. Several examples are worked out in full, and different potential applications are discussed.
    Keywords: characterization theorem; Chen-Stein characterization; local and scale parameters; parameter of interest
    Date: 2011–06
  13. By: Marie Brière; Bastien Drut; Valérie Mignon; Kim Oosterlinck; Ariane Szafarz
    Abstract: Levy and Roll (Review of Financial Studies, 2010) have recently revived the debate on the market portfolio's efficiency, suggesting that it may be mean-variance efficient after all. This paper develops an alternative test of portfolio mean-variance efficiency based on the realistic assumption that all assets are risky. The test is based on the vertical distance of a portfolio from the efficient frontier. Monte Carlo simulations show that our test outperforms previous mean-variance efficiency tests for large samples, since it produces smaller size distortions for comparable power. Our empirical application to the US equity market shows that the market portfolio is not mean-variance efficient, and so invalidates the zero-beta CAPM.
    Keywords: Efficient portfolio, mean-variance efficiency, efficiency test.
    JEL: G11 G12 C12
    Date: 2011
  14. By: Hatemi-J, Abdulnasser
    Abstract: This paper introduces asymmetric impulse response functions and asymmetric variance decompositions. It is shown how the underlying variables can be transformed into cumulative positive and negative changes in order to estimate the impulses to an asymmetric innovation. An application is provided to demonstrate how the propagation mechanism of these asymmetric impulses and responses operates.
    Keywords: VAR modelling; Asymmetric Impulses; Fiscal Policy
    JEL: C32 H21 C50
    Date: 2011
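The transformation described above, splitting each variable into cumulative positive and negative changes, can be sketched in a few lines (the decomposition only; the VAR estimation on the components is omitted):

```python
import numpy as np

def cumulative_components(x):
    """Split a series into its cumulative positive and cumulative negative
    changes; their sum recovers the series up to its initial level."""
    dx = np.diff(x, prepend=x[0])
    pos = np.cumsum(np.maximum(dx, 0.0))
    neg = np.cumsum(np.minimum(dx, 0.0))
    return pos, neg

x = np.array([1.0, 2.0, 1.5, 3.0])
pos, neg = cumulative_components(x)
# pos: 0, 1, 1, 2.5   (accumulates only the increases)
# neg: 0, 0, -0.5, -0.5   (accumulates only the decreases)
```

Estimating impulse responses separately on the positive and negative components is what allows positive and negative innovations to propagate asymmetrically.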
  15. By: Marc Henry; Ismael Mourifié
    Abstract: In the spatial model of voting, voters choose the candidate closest to them in the ideological space. Recent work by Degan and Merlo (2009) shows that the model is falsifiable on the basis of individual voting data in multiple elections. We show how to tackle the fact that the model only partially identifies the distribution of voting profiles, and we give a formal revealed preference test of the spatial voting model in three national elections in the US, strongly rejecting the spatial model in all cases. We also construct confidence regions for partially identified voter characteristics in an augmented model with an unobserved valence dimension, and identify the amount of voter heterogeneity necessary to reconcile the data with spatial preferences.
    Keywords: revealed preference, partial identification, elliptic preferences, voting behaviour
    Date: 2011–06–01
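The spatial model's choice rule is simply nearest-candidate in the ideological space; a minimal sketch of that decision rule (not the paper's partial-identification test) with hypothetical two-dimensional positions:

```python
import numpy as np

def spatial_vote(voter, candidates):
    """Return the index of the candidate closest to the voter in the
    ideological space (Euclidean distance), as the spatial model prescribes."""
    d = np.linalg.norm(np.asarray(candidates) - np.asarray(voter), axis=1)
    return int(np.argmin(d))

candidates = np.array([[-1.0, 0.0], [1.0, 0.5]])  # hypothetical positions
print(spatial_vote([-0.8, 0.2], candidates))  # → 0 (left candidate is closer)
```

Falsifiability comes from observing the same voter across multiple elections: some voting profiles are impossible for any single ideological position, which is what the revealed preference test checks.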
  16. By: Nicolas Ruiz
    Abstract: Masking methods for the safe dissemination of microdata consist of distorting the original data while preserving a pre-defined set of their statistical properties. For continuous variables, available methodologies rely essentially on matrix masking, and in particular on adding noise to the original values, using more or less refined procedures depending on the extent of the information that one seeks to preserve. Almost all of these methods rely on the critical assumption that the original data follow a normal distribution and/or that the noise has such a distribution. This assumption is restrictive in the sense that few variables empirically follow a Gaussian pattern: the distribution of household income, for example, is positively skewed, and this skewness is essential information that has to be considered and preserved. This paper addresses these issues by presenting a simple multiplicative masking method that preserves the skewness of the original data while offering a sufficient level of disclosure risk control. Numerical examples are provided, leading to the suggestion that this method could be well suited for the dissemination of a broad range of microdata, including those based on administrative and business records.
    Date: 2011–02–23
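A minimal sketch of multiplicative masking in this spirit (the mean-one lognormal noise is an illustrative choice, not necessarily the paper's exact scheme): multiplying positive records by positive, mean-one noise keeps them positive and leaves the right skew of, say, an income distribution largely intact, whereas additive Gaussian noise would not respect either property.

```python
import numpy as np

def skewness(x):
    """Sample skewness (third standardized moment)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return np.mean(z ** 3) / np.std(x) ** 3

def multiplicative_mask(x, cv=0.1, rng=None):
    """Multiply each positive record by lognormal noise with mean one and
    coefficient of variation cv; positivity and right-skewness survive."""
    rng = np.random.default_rng() if rng is None else rng
    sigma2 = np.log(1.0 + cv ** 2)        # ensures E[noise] = 1
    noise = rng.lognormal(-0.5 * sigma2, np.sqrt(sigma2), size=len(x))
    return x * noise

rng = np.random.default_rng(0)
income = rng.lognormal(10.0, 1.0, 5000)   # skewed synthetic "incomes"
masked = multiplicative_mask(income, cv=0.1, rng=rng)
```

The cv parameter governs the trade-off the abstract describes: larger noise means stronger disclosure risk control at the cost of more distortion of the micro-records.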
  17. By: Alfredo García-Hiernaux (Departamento de Economía Cuantitativa (Department of Quantitative Economics), Facultad de Ciencias Económicas y Empresariales (Faculty of Economics and Business), Universidad Complutense de Madrid); David E. Guerrero
    Abstract: This paper provides a new, unified, and flexible framework to measure and characterize convergence in prices. We formally define this notion and propose a model to represent a wide range of transition paths that converge to a common steady-state. Our framework enables the econometric measurement of such transitional behaviors and the development of testing procedures. Specifically, we derive a statistical test to determine whether convergence exists and, if so, which type: catching-up or steady-state. The application of this methodology to historic wheat prices results in a novel explanation of the convergence processes experienced during the 19th century.
    Keywords: Price convergence, cointegration, law of one price.
    JEL: C22 C32 N70 F15
    Date: 2011
  18. By: Darrell A Turkington (UWA Business School, The University of Western Australia)
    Abstract: Simple theorems based on a mathematical property of the derivative ∂vecY/∂vecX provide powerful tools for obtaining matrix calculus results. By way of illustration, new results are obtained for matrix derivatives involving vecA, vechA, v(A), and vecX, where X is a symmetric matrix. The analysis explains exactly how a log-likelihood function should be differentiated using matrix calculus.
    Date: 2011
  19. By: Abdulnasser, Hatemi-J
    Abstract: This article extends the seminal work of Granger and Yoon (2002) on hidden cointegration to panel data analysis. It shows how cumulative negative and positive changes can be constructed for each panel variable. It also shows how tests similar to the augmented Dickey-Fuller tests can be implemented to find out whether cointegration is hidden in the panel or not. An application is provided to investigate the impact of permanent positive and negative shocks in government expenditure on national output in a panel of three countries.
    Keywords: Asymmetry; Panel Data; Cointegration; Testing; Government Spending; Output
    JEL: H21 C33
    Date: 2011–06

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.