nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒12‒15
sixteen papers chosen by
Sune Karlsson
Orebro University

  1. Nonparametric identification and semiparametric estimation of classical measurement error models without side information By Susanne Schennach; Yingyao Hu
  2. Generalized Fixed-T Panel Unit Root Tests Allowing for Structural Breaks By Karavias, Yiannis; Tzavalis, Elias
  3. On the Local Power of Fixed T Panel Unit Root Tests with Serially Correlated Errors By Karavias, Yiannis; Tzavalis, Elias
  4. Identification and Inference in a Simultaneous Equation under Alternative Information Sets and Sampling Schemes By Jan F. Kiviet
  5. Measurement error in nonlinear models: a review By Susanne Schennach
  6. A triangular treatment effect model with random coefficients in the selection equation By Eric Gautier; Stefan Hoderlein
  7. Dynamic Factor Models with Infinite-Dimensional Factor Space: One-Sided Representations By Mario Forni; Marc Hallin; Marco Lippi; Paolo Zaffaroni
  8. Stock Market Asymmetries: A Copula Diffusion By Denitsa Stefanova
  9. Sequential Monte Carlo sampling for DSGE models By Edward Herbst; Frank Schorfheide
  10. Identification of Average Random Coefficients under Magnitude and Sign Restrictions on Confounding By Karim Chalak
  11. Identifying Structural Vector Autoregressions via Changes in Volatility By Helmut Lütkepohl
  12. Forecasting Binary Outcomes By Kajal Lahiri; Liu Yang
  13. From Depth to Local Depth: A Focus on Centrality By Davy Paindaveine; Germain Van Bever
  14. Identification in auctions with selective entry By Matthew Gentry; Tong Li
  15. Exploring or reducing noise? A global optimization algorithm in the presence of noise By Didier Rullière; Alaeddine Faleh; Frédéric Planchet; Wassim Youssef
  16. Real-time nowcasting with a Bayesian mixed frequency model with stochastic volatility By Andrea Carriero; Todd E. Clark; Massimiliano Marcellino

  1. By: Susanne Schennach (Institute for Fiscal Studies and Brown University); Yingyao Hu (Institute for Fiscal Studies and Johns Hopkins University)
    Abstract: Virtually all methods aimed at correcting for covariate measurement error in regressions rely on some form of additional information (e.g. validation data, known error distributions, repeated measurements or instruments). In contrast, we establish that the fully nonparametric classical errors-in-variables model is identifiable from data on the regressor and the dependent variable alone, unless the model takes a very specific parametric form. The parametric family includes (but is not limited to) the linear specification with normally distributed variables as a well-known special case. This result relies on standard primitive regularity conditions taking the form of smoothness constraints and nonvanishing characteristic function assumptions. Our approach can handle both monotone and nonmonotone specifications, provided the latter oscillate a finite number of times. Given that the very specific unidentified parametric functional form is arguably the exception rather than the rule, this identification result should have wide applicability. It leads to a new perspective on handling measurement error in nonlinear and nonparametric models, opening the way to a novel and practical approach to correct for measurement error in data sets where it was previously considered impossible (due to the lack of additional information regarding the measurement error). We suggest an estimator based on non/semi-parametric maximum likelihood, derive its asymptotic properties and illustrate the effectiveness of the method with a simulation study and an application to the relationship between firm investment behaviour and market value, the latter being notoriously mismeasured.
    Date: 2012–12
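  The side information whose necessity this paper overturns is easiest to see in the textbook linear case: with classical measurement error of known variance, the attenuation bias of OLS can be corrected directly. A minimal simulation sketch (variable names and numbers are illustrative, not from the paper):

```python
import numpy as np

# Simulate y = 2x + u, where x is observed only through w = x + e,
# with classical measurement error e of KNOWN variance 0.25.
rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)                  # true (latent) regressor
w = x + rng.normal(0.0, 0.5, n)             # mismeasured regressor
y = 2.0 * x + rng.standard_normal(n)        # outcome

beta_naive = np.cov(w, y)[0, 1] / w.var()               # attenuated toward zero
beta_corrected = np.cov(w, y)[0, 1] / (w.var() - 0.25)  # uses the known error variance
```

  With var(e) = 0.25, the naive slope shrinks toward 2/1.25 = 1.6, while the corrected slope recovers roughly 2.0; the paper's contribution is that identification can survive even when var(e) is unknown and no such side information is available.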
  2. By: Karavias, Yiannis; Tzavalis, Elias
    Abstract: In this paper we suggest panel data unit root tests which allow for structural breaks in the individual effects or linear trends of panel data models. This is done under the assumption that the disturbance terms of the panel are heterogeneous and serially correlated. The limiting distributions of the suggested test statistics are derived under the assumption that the time-dimension of the panel (T) is fixed, while the cross-section (N) grows large. Thus, they are appropriate for short panels, where T is small. The tests consider the cases of a known and an unknown break date. For the latter case, the paper gives the analytic form of the distribution of the test statistics. Monte Carlo evidence suggests that our tests have size very close to the nominal level and satisfactory power in small-T panels. This is true even for cases where the degree of serial correlation is large and negative, where single time series unit root tests are found to be critically oversized.
    Keywords: Panel data models; unit roots; structural breaks;
    JEL: C23 C22
    Date: 2012–07
  3. By: Karavias, Yiannis; Tzavalis, Elias
    Abstract: Analytical asymptotic local power functions are employed to study the effects of general-form short-term serial correlation on fixed-T panel data unit root tests. Two models are considered, one that has only individual intercepts and one that has both individual intercepts and individual trends. It is shown that tests based on IV estimators are more powerful in all cases examined. Moreover, for the model with individual trends an IV-based test is shown to have non-trivial local power at the natural root-N rate.
    Keywords: Panel data models; unit roots; local power functions; serial correlation; incidental trends
    JEL: C23
    Date: 2012–12
  4. By: Jan F. Kiviet (University of Amsterdam)
    Abstract: In simple static linear simultaneous equation models the empirical distributions of IV and OLS are examined under alternative sampling schemes and compared with their first-order asymptotic approximations. We demonstrate that the limiting distribution of consistent IV is not affected by conditioning on exogenous regressors, whereas that of inconsistent OLS is. The OLS asymptotic and simulated actual variances are shown to diminish by extending the set of exogenous variables kept fixed in sampling, whereas such an extension disrupts the distribution of IV and deteriorates the accuracy of its standard asymptotic approximation, not only when instruments are weak. Against this background the consequences for the identification of parameters of interest are examined for a setting in which (in practice often incredible) assumptions regarding the zero correlation between instruments and disturbances are replaced by (generally more credible) interval assumptions on the correlation between endogenous regressor and disturbance. This yields OLS-based modified confidence intervals, which are usually conservative. Often they compare favorably with IV-based intervals and accentuate their frailty.
    Keywords: partial identification; weak instruments; (un)restrained repeated sampling; (un)conditional (limiting) distributions; credible robust inference
    JEL: C12 C13 C15 C26 J31
    Date: 2012–11–27
  5. By: Susanne Schennach (Institute for Fiscal Studies and Brown University)
    Abstract: This overview of the recent econometrics literature on measurement error in nonlinear models centres on the question of the identification and estimation of general nonlinear models with measurement error. Simple approaches that rely on distributional knowledge regarding the measurement error (such as deconvolution or validation data techniques) are briefly presented. Then follows a description of methods that secure identification via more readily available auxiliary variables (such as repeated measurements, measurement systems with a 'factor model' structure, instrumental variables and panel data). Methods exploiting higher-order moments or bounding techniques to avoid the need for auxiliary information are presented next. Special attention is devoted to a recently introduced general method to handle a broad class of latent variable models, called Entropic Latent Variable Integration via Simulation (ELVIS). Finally, the complex but active topic of nonclassical measurement error is covered and applications of measurement error techniques to other fields are outlined.
    Date: 2012–12
  6. By: Eric Gautier (Institute for Fiscal Studies and ENSAE); Stefan Hoderlein (Institute for Fiscal Studies and Boston College)
    Abstract: In this paper we study nonparametric estimation in a binary treatment model where the outcome equation is of unrestricted form, and the selection equation contains multiple unobservables that enter through a nonparametric random coefficients specification. This specification is flexible because it allows for complex unobserved heterogeneity of economic agents and non-monotone selection into treatment. We obtain conditions under which both the conditional distributions of Y0 and Y1, the outcome for the untreated, respectively treated, given first stage unobserved random coefficients, are identified. We can thus identify an average treatment effect conditional on first stage unobservables, called UCATE, which yields most treatment effects parameters that depend on averages, like ATE and TT. We provide sharp bounds on the variance, the joint distribution of (Y0, Y1) and the distribution of treatment effects. In the particular case where the outcomes are continuously distributed, we provide novel and weak conditions that allow us to point identify the joint conditional distribution of Y0 and Y1, given the unobservables. This allows one to derive every treatment effect parameter, e.g. the distribution of treatment effects and the proportion of individuals who benefit from treatment. We present estimators for the marginals, average and distribution of treatment effects, both conditional on unobservables and unconditional, as well as total population effects. The estimators use all data and discard tail values of the instruments when they are too unlikely. We provide their rates of convergence, and analyse their finite sample behaviour in a simulation study. Finally, we also discuss the situation where some of the instruments are discrete.
    Date: 2012–12
  7. By: Mario Forni; Marc Hallin; Marco Lippi; Paolo Zaffaroni
    Abstract: Factor model methods recently have become extremely popular in the theory and practice of large panels of time series data. Those methods rely on various factor models which all are particular cases of the Generalized Dynamic Factor Model (GDFM) introduced in Forni, Hallin, Lippi and Reichlin (2000). That paper, however, relies on Brillinger's dynamic principal components. The corresponding estimators are two-sided filters whose performance at the end of the observation period or for forecasting purposes is rather poor. No such problem arises with estimators based on standard principal components, which have been dominant in this literature. On the other hand, those estimators require the assumption that the space spanned by the factors has finite dimension. In the present paper, we argue that such an assumption is extremely restrictive and potentially quite harmful. Elaborating upon recent results by Anderson and Deistler (2008a, b) on singular stationary processes with rational spectrum, we obtain one-sided representations for the GDFM without assuming finite dimension of the factor space. Construction of the corresponding estimators is also briefly outlined. In a companion paper, we establish consistency and rates for such estimators, and provide Monte Carlo results further motivating our approach.
    Keywords: generalized dynamic factor models; vector processes with singular spectral density; one-sided representations for dynamic models
    JEL: C00 C01 E00
    Date: 2012–12
  8. By: Denitsa Stefanova (VU University Amsterdam)
    Abstract: The paper proposes a model for the dynamics of stock prices that incorporates increased asset co-movements during extreme market downturns in a continuous-time setting. The model is based on the construction of a multivariate diffusion with a pre-specified stationary density with tail dependence. I estimate the model with Markov Chain Monte Carlo using a sequential inference procedure that proves to be well-suited for the problem. The model is able to reproduce stylized features of the dependence structure and the dynamic behaviour of asset returns.
    Keywords: tail dependence; multivariate diffusion; Markov Chain Monte Carlo
    JEL: C11 C51 C58
    Date: 2012–11–21
  9. By: Edward Herbst; Frank Schorfheide
    Abstract: We develop a sequential Monte Carlo (SMC) algorithm for estimating Bayesian dynamic stochastic general equilibrium (DSGE) models, wherein a particle approximation to the posterior is built iteratively through tempering the likelihood. Using three examples, consisting of an artificial state-space model, the Smets and Wouters (2007) model, and Schmitt-Grohe and Uribe's (2012) news shock model, we show that the SMC algorithm is better suited for multi-modal and irregular posterior distributions than the widely-used random walk Metropolis-Hastings algorithm. Unlike standard Markov chain Monte Carlo (MCMC) techniques, the SMC algorithm is well suited for parallel computing.
    Keywords: Bayesian statistical decision theory
    Date: 2012
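  The likelihood-tempering idea at the core of such an algorithm can be sketched in a few lines. The following is a toy illustration only (a scalar parameter, multinomial resampling, one random-walk move per stage), not the authors' implementation, and all function names here are made up:

```python
import numpy as np

def smc_tempering(prior_sample, log_prior, log_like, n_particles=2000, n_stages=20, seed=0):
    """SMC with likelihood tempering: particles move through bridge
    distributions p_n(theta) proportional to prior(theta) * likelihood(theta)^phi_n,
    with phi_n increasing from 0 to 1."""
    rng = np.random.default_rng(seed)
    theta = prior_sample(rng, n_particles)            # stage 0: draws from the prior
    phis = np.linspace(0.0, 1.0, n_stages + 1) ** 2   # slowly increasing tempering schedule
    for phi_prev, phi in zip(phis[:-1], phis[1:]):
        # Correction: reweight by the likelihood increment.
        logw = (phi - phi_prev) * log_like(theta)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Selection: multinomial resampling.
        theta = theta[rng.choice(n_particles, size=n_particles, p=w)]
        # Mutation: one random-walk Metropolis step targeting the current bridge.
        prop = theta + 0.5 * theta.std() * rng.standard_normal(n_particles)
        log_acc = (log_prior(prop) + phi * log_like(prop)
                   - log_prior(theta) - phi * log_like(theta))
        theta = np.where(np.log(rng.uniform(size=n_particles)) < log_acc, prop, theta)
    return theta

# Toy target: N(0, 10^2) prior, five N(theta, 1) observations (posterior mean near 2.0)
data = np.array([1.8, 2.1, 2.3, 1.9, 2.0])
log_prior = lambda t: -0.5 * t ** 2 / 100.0
log_like = lambda t: -0.5 * ((data[:, None] - t[None, :]) ** 2).sum(axis=0)
draws = smc_tempering(lambda rng, n: 10.0 * rng.standard_normal(n), log_prior, log_like)
```

  The reweighting and mutation steps operate on all particles independently, which is the sense in which SMC parallelizes naturally where a single random-walk Metropolis-Hastings chain does not.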
  10. By: Karim Chalak (Boston College)
    Abstract: This paper studies measuring the average effects of X on Y in a structural system with random coefficients and confounding. We do not require (conditionally) exogenous regressors or instruments. Using proxies W for the confounders U, we ask how the average direct effects of U on Y compare in magnitude and sign to those of U on W. Exogeneity and equi- or proportional confounding are limit cases yielding full identification. Alternatively, the elements of beta-hat are partially identified in a sharp bounded interval if W is sufficiently sensitive to U, and sharp upper or lower bounds may obtain otherwise. We extend this analysis to accommodate conditioning on covariates and a semiparametric separable specification as well as a panel structure and proxies included in the Y equation. After studying estimation and inference, we apply this method to study the financial return to education and the black-white wage gap.
    Keywords: causality, confounding, endogeneity, omitted variable, partial identification, proxy
    JEL: C31 C33 C36
    Date: 2012–12–04
  11. By: Helmut Lütkepohl
    Abstract: Identification of shocks of interest is a central problem in structural vector autoregressive (SVAR) modelling. Identification is often achieved by imposing restrictions on the impact or long-run effects of shocks or by considering sign restrictions for the impulse responses. In a number of articles changes in the volatility of the shocks have also been used for identification. The present study focusses on the latter device. Some possible setups for identification via heteroskedasticity are reviewed and their potential and limitations are discussed. Two detailed examples are considered to illustrate the approach.
    Keywords: Markov switching model, vector autoregression, heteroskedasticity, vector GARCH, conditional heteroskedasticity
    JEL: C32
    Date: 2012
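  The core identification idea can be seen in a two-regime toy example: if the reduced-form covariance is Sigma1 = BB' in one regime and Sigma2 = B Lambda B' in another (same impact matrix B, rescaled structural variances Lambda), the eigenvalues of Sigma1^{-1} Sigma2 recover Lambda and the eigenvectors pin down B up to column sign and scale. A small numerical sketch, with B and Lambda invented for illustration:

```python
import numpy as np

# Hypothetical 2-variable system: impact matrix B, regime-2 variance shifts lam
B = np.array([[1.0, 0.5],
              [0.2, 1.0]])
lam = np.array([4.0, 0.25])

sigma1 = B @ B.T                  # reduced-form covariance in regime 1
sigma2 = B @ np.diag(lam) @ B.T   # regime 2: same B, rescaled shock variances

# The eigenvalues of Sigma1^{-1} Sigma2 equal the variance shifts; their
# distinctness is what delivers identification of B from the two regimes.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(sigma1, sigma2))
print(np.sort(eigvals.real))  # recovers [0.25, 4.0]
```

  When two variance shifts coincide, the corresponding columns of B are only identified up to rotation, which is one of the limitations the survey discusses.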
  12. By: Kajal Lahiri; Liu Yang
    Abstract: Binary events are involved in many economic decision problems. In recent years, considerable progress has been made in diverse disciplines in developing models for forecasting binary outcomes. We distinguish between two types of forecasts for binary events that are generally obtained as the output of regression models: probability forecasts and point forecasts. We summarize specification, estimation, and evaluation of binary response models for the purpose of forecasting in a unified framework which is characterized by the joint distribution of forecasts and actuals, and a general loss function. Analysis of both the skill and the value of probability and point forecasts can be carried out within this framework. Parametric, semiparametric, nonparametric, and Bayesian approaches are covered. The emphasis is on the basic intuitions underlying each methodology, abstracting away from the mathematical details.
    Date: 2012
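  The link between probability forecasts, point forecasts, and the loss function reduces to a simple threshold rule in the binary case: with false-positive cost c_fp and false-negative cost c_fn, forecasting the event is optimal exactly when the predicted probability exceeds c_fp/(c_fp + c_fn). A minimal sketch (the function name is mine, not from the paper):

```python
def point_forecast(prob, cost_fp=1.0, cost_fn=1.0):
    """Turn a probability forecast into a 0/1 point forecast by comparing
    expected losses: forecasting 1 costs (1 - prob) * cost_fp, forecasting 0
    costs prob * cost_fn, so forecast 1 iff prob > cost_fp / (cost_fp + cost_fn)."""
    return int(prob > cost_fp / (cost_fp + cost_fn))

print(point_forecast(0.3))                            # symmetric loss, threshold 0.5: 0
print(point_forecast(0.3, cost_fp=1.0, cost_fn=9.0))  # missing the event is costly: 1
```

  This is why the same probability forecast can yield different point forecasts for users with different losses, making the probability forecast the more primitive object in the framework the survey describes.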
  13. By: Davy Paindaveine; Germain Van Bever
    Abstract: Aiming at analysing multimodal or non-convexly supported distributions through data depth, we introduce a local extension of depth. Our construction is obtained by conditioning the distribution to appropriate depth-based neighborhoods, and has the advantages, among others, of maintaining affine-invariance and of applying to all depths in a generic way. Most importantly, unlike their competitors, which (under extreme localization) rather measure probability mass, the resulting local depths focus on centrality and remain of a genuine depth nature at any locality level. We derive their main properties, establish consistency of their sample versions, and study their behavior under extreme localization. We also extend our local concept to the regression and functional depth contexts. Throughout, we illustrate the results on some, artificial and real, univariate and multivariate data sets.
    Keywords: statistical depth functions; symmetrization; multimodality; non-convex support; regression depth; functional depth
    Date: 2012–12
  14. By: Matthew Gentry (Institute for Fiscal Studies and Vanderbilt University); Tong Li (Institute for Fiscal Studies and Vanderbilt University)
    Abstract: This paper considers nonparametric identification of a two-stage entry and bidding model for auctions which we call the Affiliated-Signal (AS) model. This model assumes that potential bidders have private values, observe imperfect signals of their true values prior to entry, and choose whether to undertake a costly entry process. The AS model is a theoretically appealing candidate for the structural analysis of auctions with entry: it accommodates a wide range of entry processes, in particular nesting the Levin and Smith (1994) and Samuelson (1985) models as special cases. To date, however, the model's identification properties have not been well understood. We establish identification results for the general AS model, using variation in factors affecting entry behaviour (such as potential competition or entry costs) to construct identified bounds on model fundamentals. If available entry variation is continuous, the AS model may be point identified; otherwise, it will be partially identified. We derive constructive identification results in both cases, which can readily be refined to produce the sharp identified set. We also consider policy analysis in environments where only partial identification is possible, and derive identified bounds on expected seller revenue corresponding to a wide range of counterfactual policies while accounting for endogenous and arbitrarily selective entry. Finally we establish that our core results extend to environments with asymmetric bidders and nonseparable auction-level unobserved heterogeneity.
    Date: 2012–11
  15. By: Didier Rullière (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Alaeddine Faleh (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Frédéric Planchet (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429); Wassim Youssef (Winter & associés - Winter & associés)
    Abstract: We consider the problem of the global minimization of a function observed with noise. This problem occurs for example when the objective function is estimated through stochastic simulations. We propose an original method for iteratively partitioning the search domain when this area is a finite union of simplexes. On each subdomain of the partition, we compute an indicator measuring whether the subdomain is likely or not to contain a global minimizer. Next areas to be explored are chosen in accordance with this indicator. Confidence sets for minimizers are given. Numerical applications show empirical convergence results, and illustrate the compromise to be made between global exploration of the search domain and concentration around potential minimizers of the problem.
    Keywords: Global Optimisation; Simplex; Branch-and-Bound; Kriging
    Date: 2012–10–23
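  The exploration-versus-focus trade-off the abstract describes can be illustrated with a much cruder partition-and-refine scheme than the paper's (which partitions unions of simplexes and uses a kriging-based indicator). The sketch below, with invented names and a 1-D domain, just halves the interval each round and keeps the half whose noisy sample average is lower:

```python
import random

def noisy_branch_refine(f_noisy, lo, hi, n_rounds=25, n_eval=40, seed=1):
    """Crude partition-and-refine minimisation of a noisy objective:
    split the current interval in two, estimate each half's mean objective
    by averaging noisy evaluations at random points, keep the lower half."""
    rng = random.Random(seed)
    for _ in range(n_rounds):
        mid = 0.5 * (lo + hi)
        best = None
        for a, b in ((lo, mid), (mid, hi)):
            est = sum(f_noisy(rng.uniform(a, b)) for _ in range(n_eval)) / n_eval
            if best is None or est < best[0]:
                best = (est, a, b)
        lo, hi = best[1], best[2]
    return 0.5 * (lo + hi)

# Noisy quadratic with true minimiser at x = 3
noise = random.Random(7)
f = lambda x: (x - 3.0) ** 2 + noise.gauss(0.0, 0.1)
xmin = noisy_branch_refine(f, 0.0, 10.0)
```

  Once the interval is small, the difference between the two halves' means falls below the noise in their estimates and further halving is uninformative; spending evaluations on confidence statements about which subdomains can still contain the minimizer, as the paper does, is one answer to exactly this problem.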
  16. By: Andrea Carriero; Todd E. Clark; Massimiliano Marcellino
    Abstract: This paper develops a method for producing current-quarter forecasts of GDP growth with a (possibly large) range of available within-the-quarter monthly observations of economic indicators, such as employment and industrial production, and financial indicators, such as stock prices and interest rates. In light of existing evidence of time variation in the variances of shocks to GDP, we consider versions of the model with both constant variances and stochastic volatility. We also evaluate models with either constant or time-varying regression coefficients. We use Bayesian methods to estimate the model, in order to facilitate providing shrinkage on the (possibly large) set of model parameters and conveniently generate predictive densities. We provide results on the accuracy of nowcasts of real-time GDP growth in the U.S. from 1985 through 2011. In terms of point forecasts, our proposal is comparable to alternative econometric methods and survey forecasts. In addition, it provides reliable density forecasts, for which the stochastic volatility specification is quite useful, while parameter time-variation does not seem to matter.
    Keywords: Bayesian statistical decision theory
    Date: 2012

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.