nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒04‒18
23 papers chosen by
Sune Karlsson
Orebro University

  1. Sieve Wald and QLR Inferences on Semi/nonparametric Conditional Moment Models By Xiaohong Chen; Demian Pouzo
  2. Testing Constancy of the Error Covariance Matrix in Vector Models against Parametric Alternatives using a Spectral Decomposition By Yukai Yang
  3. Modeling Covariance Breakdowns in Multivariate GARCH By Jin, Xin; Maheu, John M
  4. Specification tests for partially identified models defined by moment inequalities By Federico Bugni; Ivan Canay; Xiaoxia Shi
  5. Forecasting with the Standardized Self-Perturbed Kalman Filter By Stefano Grassi; Nima Nonejad; Paolo Santucci de Magistris
  6. Bootstrapping the GMM overidentification test Under first-order underidentification By Prosper Dovonon; Sílvia Gonçalves
  7. A Nonparametric Test for Granger-causality in Distribution with Application to Financial Contagion By Bertrand Candelon; Sessi Tokpavi
  8. Bayesian DEJD model and detection of asymmetric jumps By Maciej Kostrzewski
  9. Nonparametric spectral-based estimation of latent structures By Stephane Bonhomme; Koen Jochmans; Jean-Marc Robin
  10. Regularized LIML for many instruments By Guy Tchuente; Marine Carrasco
  11. A General Double Robustness Result for Estimating Average Treatment Effects By Sloczynski, Tymon; Wooldridge, Jeffrey M.
  12. On the Power of Invariant Tests for Hypotheses on a Covariance Matrix By Preinerstorfer, David; Pötscher, Benedikt M.
  13. Modeling Multivariate Data Revisions By Jan P. A. M. Jacobs; Samad Sarferaz; Jan-Egbert Sturm; Simon van Norden
  14. DSGE Priors for BVAR Models By Thomai Filippeli; Konstantinos Theodoridis
  15. The identification power of smoothness assumptions in models with counterfactual outcomes By Wooyoung Kim; Koohyun Kwon; Soonwoo Kwon; Sokbae 'Simon' Lee
  16. A Large-Scale Marketing Model using Variational Bayes Inference for Sparse Transaction Data By Tsukasa Ishigaki; Nobuhiko Terui; Tadahiko Sato; Greg M. Allenby
  17. Estimating nonlinear regression errors without doing regression By Hong Pi; Carsten Peterson
  18. Efficient estimation with many weak instruments using regularization techniques By Guy Tchuente; Marine Carrasco
  19. Specifying Formatively-measured Constructs In Endogenous Positions In Structural Equation Models: Caveats and Guidelines For Researchers By Dirk Temme; Adamantios Diamantopoulos; Vanessa Pfegfeidel
  20. Statistical Power for School-Based RCTs with Binary Outcomes. By Peter Z. Schochet
  21. Fuzzy Changes-in-Changes By de Chaisemartin, Clement; D'Haultfoeuille, Xavier
  22. A Unified Approach to Measurement Error and Missing Data: Details and Extensions By Matthew Blackwell; James Honaker; Gary King
  23. Fat-tails in VAR Models By Ching-Wai (Jeremy) Chiu; Haroon Mumtaz; Gabor Pinter

  1. By: Xiaohong Chen (Cowles Foundation, Yale University); Demian Pouzo (Dept. of Economics, University of California, Berkeley)
    Abstract: This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. For these models it is often difficult to verify whether a functional is regular (i.e., root-n estimable) or irregular (i.e., slower than root-n estimable). We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is regular or not. We establish the following new useful results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly irregular) functional; (2) the consistency of simple sieve variance estimators of the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotically tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) the Wilks phenomenon of the sieve QLR test of hypotheses with increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.
    Keywords: Nonlinear nonparametric instrumental variables, Penalized sieve minimum distance, Irregular functional, Sieve variance estimators, Sieve Wald, Sieve quasi likelihood ratio, Generalized residual bootstrap, Local power, Wilks phenomenon
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1897r&r=ecm
  2. By: Yukai Yang (Université catholique de Louvain)
    Abstract: I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against the alternative hypothesis that the covariance matrix of the innovations is time-varying according to several parametric specifications. I investigate the size and power properties of these tests and find that the test with the smooth transition specification has satisfactory size properties. The tests are informative and may suggest considering multivariate volatility modelling.
    Keywords: Covariance constancy, Error covariance structure, Lagrange multiplier test, Spectral decomposition, Auxiliary regression, Model misspecification, Monte Carlo simulation
    JEL: C32 C52
    Date: 2014–04–04
    URL: http://d.repec.org/n?u=RePEc:aah:create:2014-11&r=ecm
  3. By: Jin, Xin; Maheu, John M
    Abstract: This paper proposes a flexible way of modeling dynamic heterogeneous covariance breakdowns in multivariate GARCH (MGARCH) models. During periods of normal market activity, volatility dynamics are governed by an MGARCH specification. A covariance breakdown is any significant temporary deviation of the conditional covariance matrix from its implied MGARCH dynamics. This is captured through a flexible stochastic component that allows for changes in the conditional variances, covariances and implied correlation coefficients. Different breakdown periods will have different impacts on the conditional covariance matrix and are estimated from the data. We propose an efficient Bayesian posterior sampling procedure for the estimation and show how to compute the marginal likelihood of the model. When applying the model to daily stock market and bond market data, we identify a number of different covariance breakdowns. Modeling covariance breakdowns leads to a significant improvement in the marginal likelihood and gains in portfolio choice.
    Keywords: correlation breakdown; marginal likelihood; particle filter; Markov chain; generalized variance
    JEL: C32 C58 G1
    Date: 2014–04–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:55243&r=ecm
  4. By: Federico Bugni (Institute for Fiscal Studies and Duke University); Ivan Canay (Institute for Fiscal Studies and Northwestern University); Xiaoxia Shi
    Abstract: This paper studies the problem of specification testing in partially identified models defined by a finite number of moment equalities and inequalities (i.e. (in)equalities). Under the null hypothesis, there is at least one parameter value that simultaneously satisfies all of the moment (in)equalities whereas under the alternative hypothesis there is no such parameter value. This problem has not been directly addressed in the literature (except in particular cases), although several papers have suggested a test based on checking whether confidence sets for the parameters of interest are empty or not, referred to as Test BP. We propose two new specification tests, denoted Tests RS and RC, that achieve uniform asymptotic size control and dominate Test BP in terms of power in any finite sample and in the asymptotic limit. Test RC is particularly convenient to implement because it requires little additional work beyond the confidence set construction. Test RS requires a separate computational procedure, but has the best power. The separate procedure is computationally easier than confidence set construction in typical cases.
    Date: 2014–04
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:19/14&r=ecm
  5. By: Stefano Grassi (University of Kent and CREATES); Nima Nonejad (Aarhus University and CREATES); Paolo Santucci de Magistris (Aarhus University and CREATES)
    Abstract: A modification of the self-perturbed Kalman filter of Park and Jun (1992) is proposed for the on-line estimation of models subject to parameter instability. The perturbation term in the updating equation of the state covariance matrix is weighted by the measurement error variance, thus avoiding the calibration of a design parameter. The standardization leads to better tracking of the dynamics of the parameters compared to other on-line methods, especially as the level of noise increases. The proposed estimation method, coupled with dynamic model averaging and selection, is adopted to forecast the S&P500 realized volatility series with a time-varying parameter HAR model with exogenous variables.
    Keywords: TVP models, Self-Perturbed Kalman Filter, Dynamic Model Averaging, Dynamic Model Selection, Forecasting, Realized Variance
    JEL: C10 C11 C22 C80
    Date: 2014–04–07
    URL: http://d.repec.org/n?u=RePEc:aah:create:2014-12&r=ecm
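    A minimal univariate sketch of the mechanism described above, assuming a time-varying-parameter regression tracked by a Kalman filter whose state-covariance update adds a perturbation scaled by the squared prediction error and standardized by the measurement error variance; the function and parameter names (std_self_perturbed_kf, gamma) are illustrative, not the authors':

      import numpy as np

      def std_self_perturbed_kf(y, X, R, gamma=0.5):
          """Track beta_t in y_t = X_t' beta_t + e_t, Var(e_t) = R.
          After each measurement update, the state covariance P is
          inflated in proportion to the squared prediction error,
          weighted by R -- the standardization that removes the need
          to calibrate an absolute design constant."""
          T, k = X.shape
          beta, P = np.zeros(k), np.eye(k)
          betas = np.zeros((T, k))
          for t in range(T):
              x = X[t]
              v = y[t] - x @ beta          # prediction error
              F = x @ P @ x + R            # prediction error variance
              K = P @ x / F                # Kalman gain
              beta = beta + K * v          # measurement update
              P = P - np.outer(K, x) @ P
              P += gamma * (v**2 / R) * np.eye(k) / T   # self-perturbation
              betas[t] = beta
          return betas

      # usage: coefficients drifting as a random walk
      rng = np.random.default_rng(0)
      T = 500
      X = rng.normal(size=(T, 2))
      beta_true = np.cumsum(rng.normal(scale=0.05, size=(T, 2)), axis=0)
      y = (X * beta_true).sum(axis=1) + rng.normal(scale=0.5, size=T)
      est = std_self_perturbed_kf(y, X, R=0.25)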
  6. By: Prosper Dovonon; Sílvia Gonçalves
    Abstract: The main contribution of this paper is to study the applicability of the bootstrap to estimating the distribution of the standard test of overidentifying restrictions of Hansen (1982) when the model is globally identified but the rank condition fails to hold (lack of first order local identification). An important example for which these conditions are verified is the popular test of common conditionally heteroskedastic features proposed by Engle and Kozicki (1993). As Dovonon and Renault (2013) show, the Jacobian matrix for this model is identically zero at the true parameter value, resulting in a highly nonstandard limiting distribution that complicates the computation of critical values. We first show that the standard bootstrap method of Hall and Horowitz (1996) fails to consistently estimate the distribution of the overidentifying restrictions test under lack of first order identification. We then propose a new bootstrap method that is asymptotically valid in this context. The modification consists of adding an additional term that recenters the bootstrap moment conditions in such a way as to ensure that the bootstrap Jacobian matrix is zero when evaluated at the GMM estimate.
    Keywords: Bootstrapping, overidentification,
    Date: 2014–04–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2014s-25&r=ecm
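    A sketch of the recentering logic, under our reading of the abstract rather than the paper's exact formula: the Hall and Horowitz (1996) bootstrap uses the recentered moment function

      g^{*}(x^{*}_{i}, \theta) = g(x^{*}_{i}, \theta) - \bar{g}_{n}(\hat{\theta}_{n}),

    which has mean zero at the GMM estimate \hat{\theta}_{n} in the bootstrap world. The proposed modification adds a further term, e.g. subtracting \bar{G}_{n}(\hat{\theta}_{n})(\theta - \hat{\theta}_{n}) where \bar{G}_{n} is the sample Jacobian, so that the bootstrap Jacobian is also zero at \hat{\theta}_{n}, reproducing the first-order underidentification of the original problem.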
  7. By: Bertrand Candelon; Sessi Tokpavi
    Abstract: This paper introduces a kernel-based nonparametric inferential procedure to test for Granger-causality in distribution. This test is a multivariate extension of the kernel-based Granger-causality test in tail-event introduced by Hong et al. (2009) and hence shares its main advantage of checking a large number of lags, with higher-order lags discounted. Besides, our test is highly flexible, as it can be used to check for Granger-causality in specific regions of the distribution supports, such as the center or the tails. We prove that it converges asymptotically to a standard Gaussian distribution under the null hypothesis and is thus free of parameter estimation uncertainty. Monte Carlo simulations illustrate the excellent small-sample size and power properties of the test. The new test is applied to a set of European stock markets in order to analyse spill-overs during the recent European crisis and to distinguish contagion from interdependence effects.
    Keywords: Granger-causality, Distribution, Tails, Kernel-based test, Financial Spill-over.
    JEL: C14 C22 G10
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:drm:wpaper:2014-18&r=ecm
  8. By: Maciej Kostrzewski
    Abstract: News may trigger jump arrivals in financial time series, and "bad" and "good" news appear to have distinct impacts. A double exponential jump distribution is applied to model downward and upward jumps, and a Bayesian double exponential jump-diffusion (DEJD) model is proposed. Theorems stated in the paper enable estimation of the model's parameters, detection of jumps and analysis of jump frequency. The methodology, founded upon the idea of latent variables, is illustrated with two empirical studies, employing both simulated and real-world data (the KGHM index).
    Date: 2014–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1404.2050&r=ecm
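    A sketch of the data-generating process behind such a model, assuming a Kou-type double exponential jump-diffusion; all parameter values are illustrative:

      import numpy as np

      def simulate_dejd(T=1000, dt=1/252, mu=0.05, sigma=0.2,
                        lam=10.0, p=0.4, eta_up=0.02, eta_dn=0.03, seed=0):
          """Simulate log-prices from dX = mu dt + sigma dW + dJ, where
          jumps arrive at Poisson rate lam and sizes are +Exp(eta_up)
          with probability p ("good news") and -Exp(eta_dn) with
          probability 1 - p ("bad news")."""
          rng = np.random.default_rng(seed)
          dW = rng.normal(scale=np.sqrt(dt), size=T)
          n_jumps = rng.poisson(lam * dt, size=T)
          jumps = np.zeros(T)
          for t in range(T):
              for _ in range(n_jumps[t]):
                  if rng.random() < p:
                      jumps[t] += rng.exponential(eta_up)   # upward jump
                  else:
                      jumps[t] -= rng.exponential(eta_dn)   # downward jump
          return np.cumsum(mu * dt + sigma * dW + jumps)

      x = simulate_dejd()   # one path with asymmetric up/down jumps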
  9. By: Stephane Bonhomme; Koen Jochmans (Institute for Fiscal Studies and Sciences Po); Jean-Marc Robin (Institute for Fiscal Studies and Sciences Po)
    Abstract: We present a constructive identification proof of p-linear decompositions of q-way arrays. The analysis is based on the joint spectral decomposition of a set of matrices. It has applications in the analysis of a variety of latent-structure models, such as q-variate mixtures of p distributions. As such, our results provide a constructive alternative to Allman, Matias and Rhodes (2009). The identification argument suggests a joint approximate-diagonalization estimator that is easy to implement and whose asymptotic properties we derive. We illustrate the usefulness of our approach by applying it to nonparametrically estimate multivariate finite-mixture models and hidden Markov models.
    Date: 2014–04
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:18/14&r=ecm
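    The identification device can be illustrated in a stylized two-matrix version (the paper's estimator jointly approximately diagonalizes many such matrices): if A1 = M D1 M^{-1} and A2 = M D2 M^{-1} share the factor matrix M, the eigenvectors of A1 A2^{-1} recover M up to column scale and permutation.

      import numpy as np

      rng = np.random.default_rng(1)
      k = 3
      M = rng.normal(size=(k, k))              # common latent factor matrix
      D1 = np.diag(rng.uniform(1.0, 2.0, k))
      D2 = np.diag(rng.uniform(1.0, 2.0, k))
      A1 = M @ D1 @ np.linalg.inv(M)
      A2 = M @ D2 @ np.linalg.inv(M)

      # A1 A2^{-1} = M (D1 D2^{-1}) M^{-1}, so its eigenvectors equal
      # the columns of M up to scaling and ordering.
      eigvals, M_hat = np.linalg.eig(A1 @ np.linalg.inv(A2))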
  10. By: Guy Tchuente; Marine Carrasco
    Abstract: The use of many moment conditions improves the asymptotic efficiency of the instrumental variables estimators. However, in finite samples, the inclusion of an excessive number of moments increases the bias. To solve this problem, we propose regularized versions of the limited information maximum likelihood (LIML) based on three different regularizations: Tikhonov, Landweber Fridman, and principal components. Our estimators are consistent and reach the semiparametric efficiency bound under some standard assumptions. We show that the regularized LIML estimator based on principal components possesses finite moments when the sample size is large enough. The higher order expansion of the mean square error (MSE) shows the dominance of regularized LIML over regularized two-stage least squares estimators. We devise a data-driven selection of the regularization parameter based on the approximate MSE. A Monte Carlo study shows that the regularized LIML works well and performs better in many situations than competing methods. Two empirical applications illustrate the relevance of our estimators: one regarding the return to schooling and the other regarding the elasticity of intertemporal substitution.
    Keywords: High-dimensional models, LIML, many instruments, MSE, regularization methods,
    Date: 2013–07–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2013s-20&r=ecm
  11. By: Sloczynski, Tymon (Warsaw School of Economics); Wooldridge, Jeffrey M. (Michigan State University)
    Abstract: In this paper we study doubly robust estimators of various average treatment effects under unconfoundedness. We unify and extend much of the recent literature by providing a very general identification result which covers binary and multi-valued treatments; unnormalized and normalized weighting; and both inverse-probability weighted (IPW) and doubly robust estimators. We also allow for subpopulation-specific average treatment effects where subpopulations can be based on covariate values in an arbitrary way. Similar to Wooldridge (2007), we then discuss estimation of the conditional mean using quasi-log likelihoods (QLL) from the linear exponential family.
    Keywords: double robustness, inverse-probability weighting (IPW), multi-valued treatments, quasi-maximum likelihood estimation (QMLE), treatment effects
    JEL: C13 C21 C31 C51
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp8084&r=ecm
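    For the binary-treatment case, the canonical doubly robust (AIPW) estimator that combines inverse-probability weighting with regression adjustment can be sketched as follows; the linear/logistic model choices are illustrative, whereas the paper works with quasi-log-likelihoods from the linear exponential family and multi-valued treatments:

      import numpy as np
      from sklearn.linear_model import LinearRegression, LogisticRegression

      def aipw_ate(y, d, X):
          """Augmented IPW estimate of the ATE: consistent if either the
          propensity-score model or the outcome regressions are correctly
          specified (the double robustness property)."""
          ps = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]
          mu1 = LinearRegression().fit(X[d == 1], y[d == 1]).predict(X)
          mu0 = LinearRegression().fit(X[d == 0], y[d == 0]).predict(X)
          psi = (mu1 - mu0
                 + d * (y - mu1) / ps
                 - (1 - d) * (y - mu0) / (1 - ps))
          return psi.mean()

      # usage: true treatment effect is 2.0
      rng = np.random.default_rng(0)
      n = 2000
      X = rng.normal(size=(n, 3))
      d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
      y = X.sum(axis=1) + 2.0 * d + rng.normal(size=n)
      print(aipw_ate(y, d, X))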
  12. By: Preinerstorfer, David; Pötscher, Benedikt M.
    Abstract: The behavior of the power function of autocorrelation tests such as the Durbin-Watson test in time series regressions or the Cliff-Ord test in spatial regression models has been intensively studied in the literature. When the correlation becomes strong, Krämer (1985) (for the Durbin-Watson test) and Krämer (2005) (for the Cliff-Ord test) have shown that the power can be very low, in fact can converge to zero, under certain circumstances. Motivated by these results, Martellosio (2010) set out to build a general theory that would explain these findings. Unfortunately, Martellosio (2010) does not achieve this goal, as a substantial portion of his results and proofs suffer from serious flaws. The present paper now builds a theory as envisioned in Martellosio (2010) in a fairly general framework, covering general invariant tests of a hypothesis on the disturbance covariance matrix in a linear regression model. The general results are then specialized to testing for spatial correlation and to autocorrelation testing in time series regression models. We also characterize the situation where the null and the alternative hypothesis are indistinguishable by invariant tests.
    Keywords: power function, invariant test, autocorrelation, spatial correlation, zero-power trap, indistinguishability, Durbin-Watson test, Cliff-Ord test
    JEL: C12
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:55059&r=ecm
  13. By: Jan P. A. M. Jacobs; Samad Sarferaz; Jan-Egbert Sturm; Simon van Norden
    Abstract: Data revisions in macroeconomic time series are typically studied in isolation ignoring the joint behaviour of revisions across different series. This ignores (i) the possibility that early releases of some series may help forecast revisions in other series and (ii) the problems statistical agencies may face in producing estimates consistent with accounting identities. This paper extends the Jacobs and van Norden (2011) modeling framework to multivariate data revisions. We consider systems of variables, where true values and news and noise can be correlated, and which may be linked by one or more identities. We show how to model such systems with standard linear state space models. We motivate and illustrate the multivariate modeling framework with Swiss current account data using Bayesian econometric methods for estimation and inference.
    Keywords: data revisions, state space form, linear constraints, correlated shocks, Bayesian econometrics, current account statistics, Switzerland,
    JEL: C22 C53 C82
    Date: 2013–11–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2013s-44&r=ecm
  14. By: Thomai Filippeli (Queen Mary University of London); Konstantinos Theodoridis (Bank of England)
    Abstract: Similar to Ingram and Whiteman (1994), De Jong et al. (1993) and Del Negro and Schorfheide (2004), this study proposes a methodology of constructing Dynamic Stochastic General Equilibrium (DSGE) consistent prior distributions for Bayesian Vector Autoregressive (BVAR) models. The moments of the assumed Normal-Inverse Wishart (non-conjugate) prior distribution of the VAR parameter vector are derived using the results developed by Fernandez-Villaverde et al. (2007), Christiano et al. (2006) and Ravenna (2007) regarding structural VAR (SVAR) models and the normal prior density of the DSGE parameter vector. In line with the results from previous studies, BVAR models with theoretical priors seem to achieve forecasting performance that is comparable to - if not better than - that obtained using theory-free "Minnesota" priors (Doan et al., 1984). Additionally, the marginal likelihood of the time-series model with theory-founded priors - derived from the output of the Gibbs sampler - can be used to rank competing DSGE theories that aim to explain the same observed data (Geweke, 2005). Finally, motivated by the work of Christiano et al. (2010b,a) and Del Negro and Schorfheide (2004), we use the theoretical results developed by Chernozhukov and Hong (2003) and Theodoridis (2011) to derive the quasi Bayesian posterior distribution of the DSGE parameter vector.
    Keywords: BVAR, SVAR, DSGE, Gibbs sampling, Marginal-likelihood evaluation, Predictive density evaluation, Quasi-Bayesian DSGE estimation
    JEL: C11 C13 C32 C52
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp713&r=ecm
  15. By: Wooyoung Kim; Koohyun Kwon; Soonwoo Kwon; Sokbae 'Simon' Lee (Institute for Fiscal Studies and Seoul National University)
    Abstract: In this paper, we investigate what can be learned about average counterfactual outcomes when it is assumed that treatment response functions are smooth. The smoothness conditions in this paper amount to assuming that the differences in average counterfactual outcomes are bounded under different treatments. We obtain a set of new partial identification results for the average treatment response by imposing smoothness conditions alone, by combining them with monotonicity assumptions, and by adding instrumental variables assumptions to treatment responses. We give a numerical illustration of our findings by reanalyzing the return to schooling example of Manski and Pepper (2000) and demonstrate how one can conduct sensitivity analysis by varying the degree of the smoothness assumption. In addition, we discuss how to carry out inference based on the existing literature using our identification results and illustrate their usefulness by applying one of our identification results to the Job Corps Study dataset. Our empirical results show that there is strong evidence of gender and race gaps among the less-educated population.
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:17/14&r=ecm
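    One natural formalization of the smoothness condition described above -- stated here as a Lipschitz-type illustration rather than the paper's exact assumption -- is

      |E[Y(t)] - E[Y(t')]| \le C |t - t'|   for all treatment values t, t',

    which bounds how far average counterfactual outcomes can differ across treatments and thereby shrinks the identified set for the average treatment response; the constant C is what such a sensitivity analysis varies.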
  16. By: Tsukasa Ishigaki; Nobuhiko Terui; Tadahiko Sato; Greg M. Allenby
    Abstract: Large-scale databases in marketing track multiple consumers across multiple product categories. A challenge in modeling these data is the resulting size of the data matrix, which often has thousands of consumers and thousands of choice alternatives with prices and merchandising variables changing over time. We develop a heterogeneous topic model for these data, and employ variational Bayes techniques for estimation that are shown to be accurate in a Monte Carlo simulation study. We find the model to be highly scalable and useful for identifying effective marketing variables for different consumers, and for predicting the choices of infrequent purchasers.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:toh:tmarga:114&r=ecm
  17. By: Hong Pi; Carsten Peterson
    Abstract: A method for estimating nonlinear regression errors and their distributions without performing regression is presented. Assuming continuity of the modeling function, the variance is given in terms of conditional probabilities extracted from the data. For N data points the computational demand scales as N². Comparing the predicted residual errors with those derived from a linear model assumption provides a signal for nonlinearity. The method is successfully illustrated with data generated by the Ikeda and Lorenz maps augmented with noise. As a by-product, the embedding dimensions of these maps are also extracted.
    Date: 2014–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1404.3219&r=ecm
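    A simple nearest-neighbour (delta-test-style) variant of the regression-free idea, given as a hedged sketch (the paper's estimator works through conditional probabilities): if f is continuous, the difference between the responses of the two closest inputs is dominated by noise, so Var(y_i - y_nn(i)) is roughly 2 sigma². Brute-force pairwise distances give the N² cost noted above.

      import numpy as np

      def noise_variance_nn(X, y):
          """Estimate Var(eps) in y = f(X) + eps without fitting f,
          from squared differences between nearest-neighbour responses."""
          X = np.atleast_2d(X.T).T                        # ensure (N, k)
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          np.fill_diagonal(d2, np.inf)
          nn = d2.argmin(axis=1)                          # nearest neighbour
          return ((y - y[nn]) ** 2).mean() / 2.0

      # usage: nonlinear signal with true noise variance 0.3^2 = 0.09
      rng = np.random.default_rng(0)
      x = rng.uniform(-2, 2, size=(500, 1))
      y = np.sin(3 * x[:, 0]) + rng.normal(scale=0.3, size=500)
      print(noise_variance_nn(x, y))   # close to 0.09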
  18. By: Guy Tchuente; Marine Carrasco
    Abstract: The problem of weak instruments is due to a very small concentration parameter. To boost the concentration parameter, we propose to increase the number of instruments to a large number or even up to a continuum. However, in finite samples, the inclusion of an excessive number of moments may be harmful. To address this issue, we use regularization techniques as in Carrasco (2012) and Carrasco and Tchuente (2013). We show that regularized 2SLS and LIML are consistent and asymptotically normally distributed. Moreover, their asymptotic variances reach the semiparametric efficiency bound, unlike most competing estimators. Our simulations show that the leading regularized estimators (the Landweber-Fridman and Tikhonov versions of LIML) work very well (they are nearly median unbiased) even in the case of relatively weak instruments.
    Keywords: Many weak instruments, LIML, 2SLS, regularization methods,
    Date: 2013–07–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2013s-21&r=ecm
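    The Tikhonov case can be illustrated with a ridge-regularized first stage for 2SLS; a minimal sketch under illustrative data and tuning, not the paper's estimator for LIML or its data-driven choice of the regularization parameter:

      import numpy as np

      def ridge_2sls(y, x, Z, alpha=1.0):
          """2SLS with a Tikhonov (ridge) first stage: fitted values
          x_hat = Z (Z'Z + alpha I)^{-1} Z' x replace the usual OLS
          projection, damping overfitting from many weak instruments."""
          n, L = Z.shape
          x_hat = Z @ np.linalg.solve(Z.T @ Z + alpha * np.eye(L), Z.T @ x)
          return (x_hat @ y) / (x_hat @ x)   # IV estimate of the slope

      # usage: one endogenous regressor, 50 weak instruments, true slope 1.5
      rng = np.random.default_rng(0)
      n, L = 200, 50
      Z = rng.normal(size=(n, L))
      u = rng.normal(size=n)
      x = Z @ (0.1 * np.ones(L)) + u + rng.normal(size=n)
      y = 1.5 * x + u
      print(ridge_2sls(y, x, Z, alpha=10.0))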
  19. By: Dirk Temme (University of Wuppertal); Adamantios Diamantopoulos (University of Vienna); Vanessa Pfegfeidel (University of Wuppertal)
    Abstract: Formatively-measured constructs (FMCs) are increasingly used in marketing research as well as in other disciplines. Although constructs operationalized by means of formative indicators have mostly been placed in exogenous positions in structural equation models, they also frequently occupy structurally endogenous positions. The vast majority of studies specifying endogenously positioned FMCs have followed the common practice of modeling the impact of antecedent (predictor) constructs directly on the focal FMC without specifying indirect effects via the formative indicators. However, while widespread even in top journals, this practice is highly problematic as it can lead to biased parameter estimates, erroneous total effects, and questionable conclusions. As a result both theory development and empirically-based managerial recommendations are likely to suffer. Against this background, the authors offer appropriate modeling guidelines to ensure that a conceptually sound and statistically correct model specification is obtained when an FMC occupies an endogenous position. The proposed guidelines are illustrated using both covariance structure analysis (CSA) and partial least squares (PLS) methods and are applied to a real-life empirical example. Implications for researchers are considered and ‘good practice’ recommendations offered.
    Keywords: formatively-measured constructs, endogenous formative indicators, covariance structure analysis, partial least squares
    Date: 2014–04
    URL: http://d.repec.org/n?u=RePEc:bwu:schdps:sdp14005&r=ecm
  20. By: Peter Z. Schochet
    Abstract: This article develops a new approach for calculating appropriate sample sizes for school-based randomized controlled trials (RCTs) with binary outcomes using logit models with and without baseline covariates. The theoretical analysis develops sample size formulas for clustered designs in which random assignment is at the school or teacher level using generalized estimating equation methods. The key finding is that sample sizes of 40 to 60 schools that are typically included in clustered RCTs for student test score or behavioral scale outcomes will often be insufficient for binary outcomes.
    Keywords: Statistical Power, Binary Outcomes, Clustered Designs, Randomized Control Trials
    JEL: I
    Date: 2013–06–30
    URL: http://d.repec.org/n?u=RePEc:mpr:mprres:7874&r=ecm
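    The design-effect logic that drives this finding can be sketched with the textbook two-proportion sample-size formula inflated by 1 + (m - 1) * ICC for clusters of size m; this is a rough illustration, not the paper's GEE-based formulas:

      import numpy as np
      from scipy.stats import norm

      def schools_needed(p0, p1, m, icc, alpha=0.05, power=0.8):
          """Approximate number of schools per arm to detect a change
          from proportion p0 to p1 with m pupils per school."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          pbar = (p0 + p1) / 2
          n_indiv = z**2 * 2 * pbar * (1 - pbar) / (p1 - p0) ** 2
          deff = 1 + (m - 1) * icc                  # design effect
          return int(np.ceil(n_indiv * deff / m))

      # e.g. detecting a 10-point drop from 50% with 60 pupils per school
      print(schools_needed(0.50, 0.40, m=60, icc=0.15))   # roughly 64 per arm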
  21. By: de Chaisemartin, Clement (University of Warwick); D'Haultfoeuille, Xavier (CREST)
    Abstract: The changes-in-changes model extends the widely used difference-in-differences to situations where outcomes may evolve heterogeneously. Contrary to difference-in-differences, this model is invariant to the scaling of the outcome. This paper develops an instrumental variable changes-in-changes model, to allow for situations in which perfect control and treatment groups cannot be defined, so that some units may be treated in the "control group", while some units may remain untreated in the "treatment group". This is the case for instance with repeated cross sections, if the treatment is not tied to a strict rule. Under a mild strengthening of the changes-in-changes model, treatment effects in a population of compliers are point identified when the treatment rate does not change in the control group, and partially identified otherwise. Simple plug-in estimators of treatment effects are proposed. We show that they are asymptotically normal, and that the bootstrap is valid. Finally, we use our results to reanalyze findings in Field (2007) and Duflo (2001).
    Keywords: differences-in-differences, changes-in-changes, imperfect compliance, instrumental variables, quantile treatment effects, partial identification.
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:cge:wacage:184&r=ecm
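    The sharp changes-in-changes estimator of Athey and Imbens, which the fuzzy model generalizes, can be sketched with a quantile mapping: the counterfactual untreated outcome of the treated group in period 1 maps each period-0 treated outcome through the control group's period-0-to-period-1 quantile transformation. A minimal sketch with illustrative names:

      import numpy as np

      def cic_att(y00, y01, y10, y11):
          """Average treatment effect on the treated under the sharp
          changes-in-changes model: counterfactual y -> F01^{-1}(F00(y))."""
          ranks = np.searchsorted(np.sort(y00), y10, side="right") / len(y00)
          ranks = np.clip(ranks, 1e-6, 1 - 1e-6)
          y_cf = np.quantile(y01, ranks)     # counterfactual treated outcomes
          return y11.mean() - y_cf.mean()

      # usage: common time shift of 0.5, treatment effect of 1.0
      rng = np.random.default_rng(0)
      y00, y01 = rng.normal(0, 1, 500), rng.normal(0.5, 1, 500)
      y10, y11 = rng.normal(1, 1, 500), rng.normal(2.5, 1, 500)
      print(cic_att(y00, y01, y10, y11))   # close to 1.0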
  22. By: Matthew Blackwell; James Honaker; Gary King
    Abstract: We extend a unified and easy-to-use approach to measurement error and missing data. Blackwell, Honaker, and King (2014a) gives an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details; more sophisticated measurement error model specifications and estimation procedures; and analyses to assess the approach's robustness to correlated measurement errors and to errors in categorical variables. These results support using the technique to reduce bias and increase efficiency in a wide variety of empirical research.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:qsh:wpaper:161326&r=ecm
  23. By: Ching-Wai (Jeremy) Chiu (Bank of England); Haroon Mumtaz (Queen Mary University of London); Gabor Pinter (Bank of England)
    Abstract: We confirm that standard time-series models for US output growth, inflation, interest rates and stock market returns feature non-Gaussian error structure. We build a 4-variable VAR model where the orthogonalised shocks have a Student-t distribution with a time-varying variance. We find that, in terms of in-sample fit, the VAR model that features both stochastic volatility and Student-t disturbances outperforms restricted alternatives that feature only one of these attributes. The VAR model with Student-t disturbances results in density forecasts for industrial production and stock returns that are superior to alternatives that assume Gaussianity. This difference appears to be especially stark over the recent financial crisis.
    Keywords: Bayesian VAR, Fat tails, Stochastic volatility
    JEL: C32 C53
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp714&r=ecm
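    The error structure studied in the paper can be simulated to see how Student-t shocks with stochastic volatility fatten the tails of a VAR; all coefficient values below are illustrative:

      import numpy as np

      def simulate_var_t(T=1000, nu=5.0, seed=0):
          """Bivariate VAR(1) whose orthogonalised shocks are Student-t
          with nu degrees of freedom and log-AR(1) stochastic volatility,
          the two features whose fit the paper compares."""
          rng = np.random.default_rng(seed)
          A = np.array([[0.5, 0.1], [0.0, 0.7]])   # VAR(1) coefficients
          B = np.array([[1.0, 0.0], [0.3, 1.0]])   # impact matrix
          y, h = np.zeros((T, 2)), np.zeros(2)
          for t in range(1, T):
              h = 0.95 * h + 0.1 * rng.normal(size=2)      # log-volatility
              eps = rng.standard_t(nu, size=2) * np.exp(h / 2)
              y[t] = A @ y[t - 1] + B @ eps
          return y

      y = simulate_var_t()
      print(np.abs(y).max(axis=0))   # extreme draws reveal the fat tails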

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.