nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒02‒14
twenty papers chosen by
Sune Karlsson
Orebro University

  1. Instrumental Variable Estimators for Binary Outcomes By Paul Clarke; Frank Windmeijer
  2. Variable Selection and Inference for Multi-period Forecasting Problems By Pesaran, M.H.; Pick, A.; Timmermann, A.
  3. Bartlett's formula for a general class of non linear processes By Francq, Christian; Zakoian, Jean-Michel
  4. Comparing IV with Structural Models: What Simple IV Can and Cannot Identify By Heckman, James J.; Urzua, Sergio
  5. Non-Parametric Inference for the Effect of a Treatment on Survival Times with Application in the Health and Social Sciences By de Luna, Xavier; Johansson, Per
  6. Unobserved Heterogeneity in the Binary Logit Model with Cross-Sectional Data and Short Panels: A Finite Mixture Approach By Anders Holm; Mads Meier Jæger; Morten Pedersen
  7. Forecasting temperature indices with time-varying long-memory models By Massimiliano Caporin; Paolo Paruolo
  8. Structural Measurement Errors in Nonseparable Models By Hoderlein, Stefan; Winter, Joachim
  9. A Nonparametric Test for Seasonal Unit Roots By Kunst, Robert M.
  10. Spatial Filtering and Eigenvector Stability: Space-Time Models for German Unemployment Data By Roberto Patuelli; Daniel A. Griffith; Michael Tiefelsdorf; Peter Nijkamp
  11. Measuring Precision of Statistical Inference on Partially Identified Parameters By Aleksey Tetenov
  12. A State Space Approach to Extracting the Signal from Uncertain Data By Alastair Cunningham; Jana Eklund; Chris Jeffery; George Kapetanios; Vincent Labhard
  13. Forecasting a Large Dimensional Covariance Matrix of a Portfolio of Different Asset Classes By Lillie Lam; Laurence Fung; Ip-wing Yu
  14. Multivariate Decomposition for Hazard Rate Models By Powers, Daniel A.; Yun, Myeong-Su
  15. On the Sensitivity of Return to Schooling Estimates to Estimation Methods, Model Specification, and Influential Outliers If Identification Is Weak By Jaeger, David A.; Parys, Juliane
  16. Fractional Integration and Structural Breaks in U.S. Macro Dynamics By Luis A. Gil-Alana; Antonio Moreno
  17. Selection Bias in Educational Transition Models: Theory and Empirical Evidence By Anders Holm; Mads Meier Jæger
  18. Multivariate Variance Gamma and Gaussian dependence: a study with copulas By Elisa Luciano; Patrizia Semeraro
  19. A mathematical proof of the existence of trends in financial time series By Michel Fliess; Cédric Join
  20. The Role of Trends and Detrending in DSGE Models By Andrle, Michal

  1. By: Paul Clarke; Frank Windmeijer
    Abstract: The estimation of exposure effects on study outcomes is almost always complicated by non-random exposure selection - even randomised controlled trials can be affected by participant non-compliance. If the selection mechanism is non-ignorable then inferences based on estimators that fail to adjust for its effects will be misleading. Potentially consistent estimators of the exposure effect can be obtained if the data are expanded to include one or more instrumental variables (IVs). An IV must satisfy core conditions constraining it to be associated with the exposure, and indirectly (but not directly) associated with the outcome through this association. Here we consider IV estimators for studies in which the outcome is represented by a binary variable. While work on this problem has been carried out in statistics and econometrics, the estimators and their associated identifying assumptions have existed in the separate domains of structural models and potential outcomes with almost no overlap. In this paper, we review and integrate the work in these areas and reassess the issues of parameter identification and estimator consistency. Identification of maximum likelihood estimators comes from strong parametric modelling assumptions, with consistency depending on these assumptions being correct. Our main focus is on three semi-parametric estimators based on the generalised method of moments, marginal structural models and structural mean models (SMM). By inspecting the identifying assumptions for each method, we show that these estimators are inconsistent even if the true model generating the data is simple, and argue that this implies that consistency is obtained only under implausible conditions. Identification for SMMs can also be obtained under strong exposure-restricting design constraints that are often appropriate for randomised controlled trials, but not for observational studies. Finally, while estimation of local causal parameters is possible if the selection mechanism is monotonic, not all SMMs identify a local parameter.
    Keywords: Econometrics, Generalized method of moments, Parameter identification, Marginal structural models, Structural mean models, Structural models
    JEL: C13 C14
    Date: 2009–01
  2. By: Pesaran, M.H.; Pick, A.; Timmermann, A.
    Abstract: This paper conducts a broad-based comparison of iterated and direct multi-step forecasting approaches applied to both univariate and multivariate models. Theoretical results and Monte Carlo simulations suggest that iterated forecasts dominate direct forecasts when estimation error is a first-order concern, i.e. in small samples and for long forecast horizons. Conversely, direct forecasts may dominate in the presence of dynamic model misspecification. Empirical analysis of the set of 170 variables studied by Marcellino, Stock and Watson (2006) shows that multivariate information, introduced through a parsimonious factor-augmented vector autoregression approach, improves forecasting performance for many variables, particularly at short horizons.
    Date: 2009–01
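The iterated-versus-direct distinction in the abstract above can be illustrated with a minimal AR(1) sketch (an illustrative example, not the authors' code): the iterated forecast estimates the one-step coefficient and powers it up to horizon h, while the direct forecast regresses y_t on y_{t-h} in one shot. The two coincide at h = 1 and trade off estimation error against robustness to misspecification at longer horizons.

```python
import numpy as np

def ar1_multistep(y, h):
    """Compare iterated and direct h-step AR(1) forecasts of the next value."""
    y = np.asarray(y, dtype=float)
    # Iterated: estimate the one-step coefficient by OLS, then power it up.
    phi1 = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
    iterated = (phi1 ** h) * y[-1]
    # Direct: regress y_t on y_{t-h} and forecast in a single projection.
    phih = np.dot(y[h:], y[:-h]) / np.dot(y[:-h], y[:-h])
    direct = phih * y[-1]
    return iterated, direct

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + e[t]   # true AR(1) with phi = 0.8

it4, dir4 = ar1_multistep(y, h=4)
print(round(it4, 3), round(dir4, 3))
```

Under a correctly specified AR(1) the two forecasts are close; if the fitted one-step model is misspecified, the iterated forecast compounds the error h times while the direct one does not.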
  3. By: Francq, Christian; Zakoian, Jean-Michel
    Abstract: A Bartlett-type formula is proposed for the asymptotic distribution of the sample autocorrelations of nonlinear processes. The asymptotic covariances between sample autocorrelations are expressed as the sum of two terms. The first term corresponds to the standard Bartlett's formula for linear processes, involving only the autocorrelation function of the observed process. The second term, which is specific to nonlinear processes, involves the autocorrelation function of the observed process, the kurtosis of the linear innovation process and the autocorrelation function of its square. This formula is obtained under a symmetry assumption on the linear innovation process. An application to GARCH models is proposed.
    Keywords: Bartlett's formula; nonlinear time series model; sample autocorrelation; GARCH model; weak white noise
    JEL: C13 C12 C22
    Date: 2009–02–05
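The first term in the paper's decomposition is the standard Bartlett formula for linear processes. A minimal sketch of that classical piece (sample autocorrelations with the white-noise Bartlett band; this does not implement the paper's nonlinear correction term):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r_1, ..., r_max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[h:], x[:-h]) / (len(x) * c0)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)       # i.i.d. noise: all true autocorrelations are 0
r = sample_acf(x, 10)
band = 1.96 / np.sqrt(len(x))       # Bartlett white-noise 95% confidence band
print(np.sum(np.abs(r) > band))     # few, if any, lags should breach the band
```

For nonlinear processes such as GARCH, the paper's point is precisely that this band is too narrow: the second term of the formula, involving the innovation kurtosis and the autocorrelation of squared innovations, widens it.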
  4. By: Heckman, James J. (University of Chicago); Urzua, Sergio (Northwestern University)
    Abstract: This paper compares the economic questions addressed by instrumental variables estimators with those addressed by structural approaches. We discuss Marschak's Maxim: estimators should be selected on the basis of their ability to answer well-posed economic problems with minimal assumptions. A key identifying assumption that allows structural methods to be more informative than IV can be tested with data and does not have to be imposed.
    Keywords: instrumental variables, structural approaches, Marschak's Maxim
    JEL: C31
    Date: 2009–01
  5. By: de Luna, Xavier (Umeå University); Johansson, Per (IFAU)
    Abstract: In this paper we perform inference on the effect of a treatment on survival times in studies where the treatment assignment is not randomized and the assignment time is not known in advance. Two such studies are discussed: a heart transplant program and a study of Swedish unemployed eligible for employment subsidy. We estimate survival functions on a treated and a control group which are made comparable through matching on observed covariates. The inference is performed by conditioning on waiting time to treatment, that is time between the entrance in the study and treatment. This can be done only when sufficient data are available. In other cases, averaging over waiting times is a possibility, although the classical interpretation of the estimated survival functions is lost unless the hazards do not depend on waiting time. To show unbiasedness and to obtain an estimator of the variance, we build on the potential outcome framework, which was introduced by J. Neyman in the context of randomized experiments, and adapted to observational studies by D. B. Rubin. Our approach does not make parametric or distributional assumptions. In particular, we do not assume proportionality of the hazards compared. Small sample performance of the estimator and a derived test of no treatment effect are studied in a Monte Carlo study.
    Keywords: potential outcome, observational study, matching estimator, heart transplant, employment subsidy, survival function
    JEL: C12 C13 C14
    Date: 2009–01
  6. By: Anders Holm (Department of Sociology, University of Copenhagen); Mads Meier Jæger (Danish National Centre for Social Research, Copenhagen); Morten Pedersen (Department of Sociology, University of Copenhagen)
    Abstract: This paper proposes a new approach to dealing with unobserved heterogeneity in applied research using the binary logit model with cross-sectional data and short panels. Unobserved heterogeneity is particularly important in non-linear regression models such as the binary logit model because, unlike in linear regression models, estimates of the effects of observed independent variables are biased even when omitted independent variables are uncorrelated with the observed independent variables. We propose an extension of the binary logit model based on a finite mixture approach in which we conceptualize the unobserved heterogeneity via latent classes. Simulation results show that our approach leads to considerably less bias in the estimated effects of the independent variables than the standard logit model. Furthermore, because identification of the unobserved heterogeneity is weak when the researcher has cross-sectional rather than panel data, we propose a simple approach that fixes latent class weights and improves identification and estimation. Finally, we illustrate the applicability of our new approach using Canadian survey data on public support for redistribution.
    Keywords: binary logit model; unobserved heterogeneity; latent classes; simulation
    Date: 2008–11
  7. By: Massimiliano Caporin (University of Padova); Paolo Paruolo (Università dell'Insubria)
    Abstract: This paper proposes structured parametrizations for multivariate volatility models, which use spatial weight matrices induced by economic proximity. These structured specifications aim at solving the curse of dimensionality problem, which limits feasibility of model-estimation to small cross-sections for unstructured models. Structured parametrizations possess the following four desirable properties: i) they are flexible, allowing for covariance spill-over; ii) they are parsimonious, being characterized by a number of parameters that grows only linearly with the cross-section dimension; iii) model parameters have a direct economic interpretation that reflects the chosen notion of economic classification; iv) model-estimation computations are faster than for unstructured specifications. We give examples of structured specifications for multivariate GARCH models as well as for Stochastic- and Realized-Volatility models. The paper also discusses how to construct spatial weight matrices that are time-varying and possibly derived from a set of covariates.
    Keywords: MGARCH, Stochastic Volatility, Realized Volatility, Spatial models, ANOVA
    JEL: C31 C32 G11
    Date: 2009–02
  8. By: Hoderlein, Stefan; Winter, Joachim
    Abstract: This paper considers measurement error from a new perspective. In surveys, response errors are often caused by the fact that respondents recall past events and quantities imperfectly. We explore the consequences of recall errors for such key econometric issues as the identification of marginal effects or economic restrictions in structural models. Our identification approach is entirely nonparametric, using Matzkin-type nonseparable models that nest a large class of potential structural models. We establish that measurement errors due to poor recall are generally likely to exhibit nonstandard behavior, in particular be nonclassical and differential, and we provide means to deal with this situation. Moreover, our findings suggest that conventional wisdom about measurement errors may be misleading in many economic applications. For instance, under certain conditions left-hand side recall errors will be problematic even in the linear model, and quantiles will be less robust than means. Finally, we apply the main concepts put forward in this paper to real world data, and find evidence that underscores the importance of focusing on individual response behavior.
    Keywords: Measurement Error; Nonparametric; Survey Design; Nonseparable Model; Identification; Zero Homogeneity; Demand
    JEL: C14 D12
    Date: 2009–01–29
  9. By: Kunst, Robert M. (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria, and Department of Economics, University of Vienna, Vienna, Austria)
    Abstract: We consider a nonparametric test for the null of seasonal unit roots in quarterly time series that builds on the RUR (records unit root) test by Aparicio, Escribano, and Sipols. We find that the test concept is more promising than a formalization of visual aids such as plots by quarter. In order to cope with the sensitivity of the original RUR test to autocorrelation under its null of a unit root, we suggest an augmentation step by autoregression. We present some evidence on the size and power of our procedure and we illustrate it by applications to a commodity price and to an unemployment rate.
    Keywords: Seasonality, Nonparametric test, Unit roots
    JEL: C12 C14 C22
    Date: 2009–01
  10. By: Roberto Patuelli (Institute for Economic Research (IRE), University of Lugano, Switzerland; The Rimini Centre for Economic Analysis (RCEA), Italy); Daniel A. Griffith (School of Economic, Political and Policy Sciences, University of Texas at Dallas, USA); Michael Tiefelsdorf (School of Economic, Political and Policy Sciences, University of Texas at Dallas, USA); Peter Nijkamp (Department of Spatial Economics, VU University Amsterdam, The Netherlands)
    Abstract: Regions, independent of their geographic level of aggregation, are known to be interrelated partly due to their relative locations. Similar economic performance among regions can be attributed to proximity. Consequently, a proper understanding, and accounting, of spatial liaisons is needed in order to effectively forecast regional economic variables. Several spatial econometric techniques are available in the literature, which deal with the spatial autocorrelation in geographically-referenced data. The experiments carried out in this paper are concerned with the analysis of the spatial autocorrelation observed for unemployment rates in 439 NUTS-3 German districts. We employ a semi-parametric approach – spatial filtering – in order to uncover spatial patterns that are consistently significant over time. We first provide a brief overview of the spatial filtering method and illustrate the data set. Subsequently, we describe the empirical application carried out: that is, the spatial filtering analysis of regional unemployment rates in Germany. Furthermore, we exploit the resulting spatial filter as an explanatory variable in a panel modelling framework. Additional explanatory variables, such as average daily wages, are used in concurrence with the spatial filter. Our experiments show that the computed spatial filters account for most of the residual spatial autocorrelation in the data.
    Keywords: spatial filtering, eigenvectors, Germany, unemployment
    JEL: C33 E24 R12
    Date: 2009–01
  11. By: Aleksey Tetenov
    Abstract: Planners of surveys and experiments that partially identify parameters of interest face trade-offs between using limited resources to reduce sampling error or to reduce the extent of partial identification. Researchers who previously evaluated these trade-offs used the length of confidence intervals for the identification region to measure the precision of inference. I show that other reasonable measures of statistical precision yield qualitatively different conclusions, often implying higher value to reducing the extent of partial identification. I consider three alternative measures - maximum mean squared error, maximum mean absolute deviation, and maximum regret (applicable when the purpose of estimation is binary treatment choice). I analytically derive and compare estimation precision and trade-offs implied by these measures in a simple statistical problem with normally distributed sample data and interval partial identification.
    Keywords: partial identification, statistical treatment choice, mean absolute error, mean squared error, minimax regret, survey planning
    JEL: C21 C44
    Date: 2008
  12. By: Alastair Cunningham (Bank of England); Jana Eklund (Bank of England); Chris Jeffery (Bank of England); George Kapetanios (Queen Mary, University of London and Bank of England); Vincent Labhard (European Central Bank)
    Abstract: Most macroeconomic data are uncertain - they are estimates rather than perfect measures of underlying economic variables. One symptom of that uncertainty is the propensity of statistical agencies to revise their estimates in the light of new information or methodological advances. This paper sets out an approach for extracting the signal from uncertain data. It describes a two-step estimation procedure in which the history of past revisions is first used to estimate the parameters of a measurement equation describing the official published estimates. These parameters are then imposed in a maximum likelihood estimation of a state space model for the macroeconomic variable.
    Keywords: Real-time data analysis, State space models, Data uncertainty, Data revisions
    JEL: C32 C53
    Date: 2009–02
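The second step of the procedure - filtering a signal out of noisy published estimates with a state space model - can be illustrated with a minimal univariate local-level Kalman filter (a simplified sketch, not the paper's model; in the paper the measurement-noise variance r would come from the estimated revisions history rather than being assumed):

```python
import numpy as np

def local_level_filter(y, q, r):
    """Kalman filter for y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t,
    with state-noise variance q (signal) and measurement-noise variance r."""
    mu, p = y[0], r                          # initialise at the first observation
    filtered = [mu]
    for obs in y[1:]:
        p_pred = p + q                       # predict step
        k = p_pred / (p_pred + r)            # Kalman gain
        mu = mu + k * (obs - mu)             # update with the new published estimate
        p = (1 - k) * p_pred
        filtered.append(mu)
    return np.array(filtered)

rng = np.random.default_rng(3)
signal = np.cumsum(0.1 * rng.standard_normal(200))    # slowly moving "true" variable
published = signal + 0.5 * rng.standard_normal(200)   # noisy official estimates
smoothed = local_level_filter(published, q=0.01, r=0.25)
print(np.std(published - signal) > np.std(smoothed - signal))
```

The final comparison checks that the filtered series tracks the underlying signal more closely than the raw published estimates do.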
  13. By: Lillie Lam (Research Department, Hong Kong Monetary Authority); Laurence Fung (Research Department, Hong Kong Monetary Authority); Ip-wing Yu (Research Department, Hong Kong Monetary Authority)
    Abstract: In portfolio and risk management, estimating and forecasting the volatilities and correlations of asset returns plays an important role. Recently, interest in the estimation of the covariance matrix of large dimensional portfolios has increased. Using a portfolio of 63 assets covering stocks, bonds and currencies, this paper aims to examine and compare the predictive power of different popular methods adopted by i) market practitioners (such as the sample covariance, the 250-day moving average, and the exponentially weighted moving average); ii) some sophisticated estimators recently developed in the academic literature (such as the orthogonal GARCH model and the Dynamic Conditional Correlation model); and iii) their combinations. Based on five different criteria, we show that a combined forecast of the 250-day moving average, the exponentially weighted moving average and the orthogonal GARCH model consistently outperforms the other methods in predicting the covariance matrix for both one-quarter and one-year ahead horizons.
    Keywords: Volatility forecasting; Risk management; Portfolio management; Model evaluation
    JEL: G32 C52
    Date: 2009–01
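One of the practitioner methods compared in the abstract above, the exponentially weighted moving average, can be sketched in a few lines (an illustrative sketch; the decay parameter lambda = 0.94 is the conventional RiskMetrics-style choice and an assumption here, not a value taken from the paper):

```python
import numpy as np

def ewma_covariance(returns, lam=0.94):
    """Exponentially weighted moving-average covariance matrix:
    S_t = lam * S_{t-1} + (1 - lam) * r_t r_t'."""
    returns = np.asarray(returns, dtype=float)
    s = np.cov(returns.T, bias=True)          # initialise with the sample covariance
    for r in returns:
        s = lam * s + (1 - lam) * np.outer(r, r)
    return s

rng = np.random.default_rng(4)
true_cov = np.array([[1.0, 0.3], [0.3, 0.5]])
rets = rng.multivariate_normal([0.0, 0.0], true_cov, size=1000)
s = ewma_covariance(rets)
print(np.round(s, 2))
```

Because each update is a convex combination of positive semidefinite matrices, the forecast stays symmetric and positive semidefinite - one reason the method remains popular for large cross-sections despite its simplicity.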
  14. By: Powers, Daniel A. (University of Texas at Austin); Yun, Myeong-Su (Tulane University)
    Abstract: We develop a regression decomposition technique for hazard rate models, where the difference in observed rates is decomposed into components attributable to group differences in characteristics and group differences in effects. The baseline hazard is specified using a piecewise constant exponential model, which leads to convenient estimation based on a Poisson regression model fit to person-period, or split-episode data. This specification allows for a flexible representation of the baseline hazard and provides a straightforward way to introduce time-varying covariates and time-varying effects. We provide computational details underlying the method and apply the technique to the decomposition of the black-white difference in first premarital birth rates into components reflecting characteristics and effect contributions of several predictors, as well as the effect contribution attributable to race differences in the baseline hazard.
    Keywords: decomposition, hazard rates, piecewise constant exponential model, Poisson regression
    JEL: C20 C41 J13
    Date: 2009–01
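The computational convenience the abstract relies on - that the piecewise constant exponential model reduces to Poisson-type estimation on split-episode data - can be seen in the intercept-only case, where the ML hazard in each interval is simply events divided by total exposure (a toy sketch without covariates or censoring handling; covariate effects would require a full Poisson regression with a log-exposure offset):

```python
import numpy as np

def piecewise_hazard(durations, events, cuts):
    """ML hazard in each interval [cuts[j], cuts[j+1]) of a piecewise
    constant exponential model: events observed / total exposure time."""
    edges = np.concatenate(([0.0], np.asarray(cuts, dtype=float), [np.inf]))
    hazards = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Split-episode bookkeeping: time each subject spends exposed in [lo, hi).
        exposure = np.clip(np.minimum(durations, hi) - lo, 0.0, None).sum()
        d = int(np.sum((durations >= lo) & (durations < hi) & events))
        hazards.append(d / exposure if exposure > 0 else np.nan)
    return np.array(hazards)

rng = np.random.default_rng(5)
durations = rng.exponential(scale=2.0, size=5000)   # true constant hazard = 0.5
events = np.ones(5000, dtype=bool)                  # no censoring in this toy example
print(np.round(piecewise_hazard(durations, events, cuts=[1.0, 3.0]), 2))
```

With a truly constant hazard, all interval estimates should hover around 0.5, illustrating that the flexible baseline nests the plain exponential model.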
  15. By: Jaeger, David A. (CUNY Graduate Center); Parys, Juliane (University of Bonn)
    Abstract: We provide a comparison of return to schooling estimates based on an influential study by Angrist and Krueger (1991) using two stage least squares (TSLS), limited information maximum likelihood (LIML), jackknife (JIVE), and split sample instrumental variables (SSIV) estimation. We find that the estimated return to education is quite sensitive to the age controls used in the models as well as the estimation method used. In particular, we provide evidence that JIVE coefficients' standard errors are inflated by a group of extreme years of education observations, for which identification is especially weak. We propose to use Cook's Distance in order to identify influential outliers having substantial influence on first-stage JIVE coefficients and fitted values.
    Keywords: Cook's Distance, heteroskedasticity, outliers, return to education, specification, weak instruments
    JEL: C13 C31 J31
    Date: 2009–01
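The diagnostic the authors propose, Cook's Distance, has a standard OLS form that can be sketched directly (a generic illustration of the statistic, not the authors' first-stage JIVE implementation): D_i = e_i^2 h_ii / (p s^2 (1 - h_ii)^2), where h_ii is the leverage and s^2 the residual variance.

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's Distance for each observation in an OLS fit."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Leverages are the diagonal of the hat matrix X (X'X)^{-1} X'.
    h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)
    s2 = resid @ resid / (n - p)
    return resid**2 * h / (p * s2 * (1 - h)**2)

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = 1.0 + 2.0 * X[:, 1] + rng.standard_normal(50)
y[0] += 15.0                      # plant one gross outlier
d = cooks_distance(X, y)
print(int(np.argmax(d)))          # the planted outlier should have the largest D
```

Observations with large D would then be inspected or dropped before re-estimating the first stage, which is the spirit of the authors' proposal.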
  16. By: Luis A. Gil-Alana (Facultad de Ciencias Económicas y Empresariales, Universidad de Navarra); Antonio Moreno (Facultad de Ciencias Económicas y Empresariales, Universidad de Navarra)
    Abstract: This paper identifies structural breaks in the post-World War II joint dynamics of U.S. inflation, unemployment and the short-term interest rate. We derive a structural break-date procedure which allows for long-memory behavior in all three series and perform the analysis for alternative data frequencies. Both long-memory and short-run coefficients are relevant for characterizing the changing patterns of U.S. macroeconomic dynamics. We provide an economic interpretation of those changes by examining the link between macroeconomic events and structural breaks.
    Keywords: Fractional integration, structural breaks, multivariate analysis, inflation dynamics
    JEL: C32 C51 E31 E32 E52
    Date: 2009–01–20
  17. By: Anders Holm (Department of Sociology, University of Copenhagen); Mads Meier Jæger (Danish National Centre for Social Research, Copenhagen)
    Abstract: Most studies which use Mare’s (1980, 1981) seminal model of educational transitions find that the effect of family background variables decreases across educational transitions. Cameron and Heckman (1998, 2001) have argued that this “waning coefficients” phenomenon might be driven by selection on unobserved variables. This paper, first, analyzes theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection affects estimates of the effect of family background variables on educational transitions. Our empirical analysis which uses data from the United States, United Kingdom, Denmark, and the Netherlands shows that the effect of family background variables on educational transitions is largely constant across transitions when we control for selection on unobserved variables. We also discuss the inherent difficulties in estimating educational transition models which deal effectively with selection on unobserved variables.
    Date: 2009–02
  18. By: Elisa Luciano; Patrizia Semeraro
    Abstract: This paper explores the dynamic dependence properties of a Lévy process, the Variance Gamma, which has non-Gaussian marginal features and non-Gaussian dependence. In a static context, such non-Gaussian dependence should be represented via copulas. Copulas, however, are not able to capture the dynamics of dependence. By computing the distance between the Gaussian copula and the actual one, we show that even a non-Gaussian process, such as the Variance Gamma, can "converge" to linear dependence over time. Empirical versions of different dependence measures confirm the result.
    JEL: C16 G12
    Date: 2008
  19. By: Michel Fliess (LIX - Laboratoire d'informatique de l'école polytechnique - CNRS : UMR7161 - Polytechnique - X, INRIA Saclay - Ile de France - ALIEN - INRIA - Polytechnique - X - CNRS : UMR - Ecole Centrale de Lille); Cédric Join (INRIA Saclay - Ile de France - ALIEN - INRIA - Polytechnique - X - CNRS : UMR - Ecole Centrale de Lille, CRAN - Centre de recherche en automatique de Nancy - CNRS : UMR7039 - Université Henri Poincaré - Nancy I - Institut National Polytechnique de Lorraine - INPL)
    Abstract: We are settling a longstanding quarrel in quantitative finance by proving the existence of trends in financial time series thanks to a theorem due to P. Cartier and Y. Perrin, which is expressed in the language of nonstandard analysis (Integration over finite sets, F. & M. Diener (Eds): Nonstandard Analysis in Practice, Springer, 1995, pp. 195--204). Those trends, which might coexist with some altered random walk paradigm and efficient market hypothesis, seem nevertheless difficult to reconcile with the celebrated Black-Scholes model. They are estimated via recent techniques stemming from control and signal theory. Several quite convincing computer simulations on the forecast of various financial quantities are depicted. We conclude by discussing the rôle of probability theory.
    Keywords: Financial time series; mathematical finance; technical analysis; trends; random walks; efficient markets; forecasting; volatility; heteroscedasticity; quickly fluctuating functions; low-pass filters; nonstandard analysis; operational calculus.
    Date: 2009
  20. By: Andrle, Michal
    Abstract: The paper discusses the role of stochastic trends in DSGE models and the effects of stochastic detrending. We argue that explicit structural assumptions on trend behavior are convenient, particularly for emerging countries, where permanent shocks are an important part of business cycle dynamics. The reason is that permanent shocks spill over the whole frequency range, potentially including business cycle frequencies. Applying a high-pass or band-pass filter to obtain business cycle dynamics, however, does not eliminate the influence of permanent shocks on the comovements of time series. The contribution of the paper is to provide a way to calculate the role of permanent shocks in the detrended/filtered business cycle population dynamics in a DSGE model laboratory using frequency domain methods.
    Keywords: detrending; band-pass filter; spectral density; DSGE.
    JEL: E32 C53 D58
    Date: 2008–08–01

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.