
on Econometrics 
By:  Xiaohong Chen (Cowles Foundation, Yale University); Demian Pouzo (Dept. of Economics, University of California, Berkeley) 
Abstract:  This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. For these models it is often difficult to verify whether a functional is regular (i.e., root-n estimable) or irregular (i.e., slower than root-n estimable). We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is regular or not. We establish the following new useful results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly irregular) functional; (2) the consistency of simple sieve variance estimators of the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi-likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotic tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) the Wilks phenomenon of the sieve QLR test of a hypothesis with increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented. 
Keywords:  Nonlinear nonparametric instrumental variables, Penalized sieve minimum distance, Irregular functional, Sieve variance estimators, Sieve Wald, Sieve quasi-likelihood ratio, Generalized residual bootstrap, Local power, Wilks phenomenon 
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1897r&r=ecm 
By:  Yukai Yang (Université catholique de Louvain) 
Abstract:  I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange multiplier tests against the alternative hypothesis that the innovations are time-varying according to several parametric specifications. I investigate the size and power properties of these tests and find that the test with a smooth transition specification has satisfactory size properties. The tests are informative and may suggest considering multivariate volatility modelling. 
Keywords:  Covariance constancy, Error covariance structure, Lagrange multiplier test, Spectral decomposition, Auxiliary regression, Model misspecification, Monte Carlo simulation 
JEL:  C32 C52 
Date:  2014–04–04 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201411&r=ecm 
By:  Jin, Xin; Maheu, John M 
Abstract:  This paper proposes a flexible way of modeling dynamic heterogeneous covariance breakdowns in multivariate GARCH (MGARCH) models. During periods of normal market activity, volatility dynamics are governed by an MGARCH specification. A covariance breakdown is any significant temporary deviation of the conditional covariance matrix from its implied MGARCH dynamics. This is captured through a flexible stochastic component that allows for changes in the conditional variances, covariances and implied correlation coefficients. Different breakdown periods will have different impacts on the conditional covariance matrix and are estimated from the data. We propose an efficient Bayesian posterior sampling procedure for the estimation and show how to compute the marginal likelihood of the model. When applying the model to daily stock market and bond market data, we identify a number of different covariance breakdowns. Modeling covariance breakdowns leads to a significant improvement in the marginal likelihood and gains in portfolio choice. 
Keywords:  correlation breakdown; marginal likelihood; particle filter; Markov chain; generalized variance 
JEL:  C32 C58 G1 
Date:  2014–04–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:55243&r=ecm 
By:  Federico Bugni (Institute for Fiscal Studies and Duke University); Ivan Canay (Institute for Fiscal Studies and Northwestern University); Xiaoxia Shi 
Abstract:  This paper studies the problem of specification testing in partially identified models defined by a finite number of moment equalities and inequalities (i.e. (in)equalities). Under the null hypothesis, there is at least one parameter value that simultaneously satisfies all of the moment (in)equalities, whereas under the alternative hypothesis there is no such parameter value. This problem has not been directly addressed in the literature (except in particular cases), although several papers have suggested a test based on checking whether confidence sets for the parameters of interest are empty or not, referred to as Test BP. We propose two new specification tests, denoted Tests RS and RC, that achieve uniform asymptotic size control and dominate Test BP in terms of power in any finite sample and in the asymptotic limit. Test RC is particularly convenient to implement because it requires little additional work beyond the confidence set construction. Test RS requires a separate computational procedure, but has the best power; this procedure is typically easier than the confidence set construction itself. 
Date:  2014–04 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:19/14&r=ecm 
By:  Stefano Grassi (University of Kent and CREATES); Nima Nonejad (Aarhus University and CREATES); Paolo Santucci de Magistris (Aarhus University and CREATES) 
Abstract:  A modification of the self-perturbed Kalman filter of Park and Jun (1992) is proposed for the online estimation of models subject to parameter instability. The perturbation term in the updating equation of the state covariance matrix is weighted by the measurement error variance, thus avoiding the calibration of a design parameter. The standardization leads to better tracking of the dynamics of the parameters compared to other online methods, especially as the level of noise increases. The proposed estimation method, coupled with dynamic model averaging and selection, is adopted to forecast the S&P500 realized volatility series with a time-varying parameter HAR model with exogenous variables. 
Keywords:  TVP models, Self-Perturbed Kalman Filter, Dynamic Model Averaging, Dynamic Model Selection, Forecasting, Realized Variance 
JEL:  C10 C11 C22 C80 
Date:  2014–04–07 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201412&r=ecm 
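The idea of a self-perturbed filter can be sketched in a few lines. The following is a hedged illustration only: a Kalman filter tracking time-varying regression coefficients, where the state covariance is inflated whenever the squared prediction error exceeds its predicted variance, with the perturbation standardized by the measurement error variance as the abstract describes. The trigger rule and the constant `gamma` are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def self_perturbed_kf(y, X, R=1.0, gamma=0.1):
    """Online tracking of time-varying regression coefficients.

    Sketch of a self-perturbed Kalman filter: when the squared
    one-step-ahead prediction error is large relative to its predicted
    variance, the state covariance P is inflated by a perturbation term
    scaled by the measurement error variance R, so the filter adapts
    faster when parameters drift.
    """
    T, k = X.shape
    beta = np.zeros(k)            # current coefficient estimate
    P = np.eye(k)                 # state covariance
    betas = np.zeros((T, k))
    for t in range(T):
        x = X[t]
        e = y[t] - x @ beta       # one-step-ahead prediction error
        S = x @ P @ x + R         # predicted variance of the error
        # self-perturbation: inflate P when the error is "surprising";
        # the perturbation is weighted by R rather than a free design
        # parameter, following the abstract's description
        if e**2 > S:
            P = P + gamma * (e**2 / R) * np.eye(k)
            S = x @ P @ x + R
        # standard Kalman update
        K = P @ x / S
        beta = beta + K * e
        P = P - np.outer(K, x @ P)
        betas[t] = beta
    return betas
```

On data with a mid-sample coefficient break, the perturbation lets the filtered path move to the new level instead of averaging the two regimes as recursive least squares would.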
By:  Prosper Dovonon; Sílvia Gonçalves 
Abstract:  The main contribution of this paper is to study the applicability of the bootstrap to estimating the distribution of the standard test of overidentifying restrictions of Hansen (1982) when the model is globally identified but the rank condition fails to hold (lack of first-order local identification). An important example in which these conditions are verified is the popular test for common conditionally heteroskedastic features proposed by Engle and Kozicki (1993). As Dovonon and Renault (2013) show, the Jacobian matrix for this model is identically zero at the true parameter value, resulting in a highly nonstandard limiting distribution that complicates the computation of critical values. We first show that the standard bootstrap method of Hall and Horowitz (1996) fails to consistently estimate the distribution of the overidentifying restrictions test under lack of first-order identification. We then propose a new bootstrap method that is asymptotically valid in this context. The modification consists of adding a term that recenters the bootstrap moment conditions in such a way as to ensure that the bootstrap Jacobian matrix is zero when evaluated at the GMM estimate. 
Keywords:  Bootstrapping, overidentification 
Date:  2014–04–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2014s25&r=ecm 
By:  Bertrand Candelon; Sessi Tokpavi 
Abstract:  This paper introduces a kernel-based nonparametric inferential procedure to test for Granger causality in distribution. This test is a multivariate extension of the kernel-based Granger causality test in tail events introduced by Hong et al. (2009) and hence shares its main advantage: checking a large number of lags, with higher-order lags discounted. Besides, our test is highly flexible, as it can be used to check for Granger causality in specific regions of the distribution supports, such as the center or the tails. We prove that it converges asymptotically to a standard Gaussian distribution under the null hypothesis, and thus it is free of parameter estimation uncertainty. Monte Carlo simulations illustrate the excellent small sample size and power properties of the test. This new test is applied to a set of European stock markets in order to analyse the spillovers during the recent European crisis and to distinguish contagion from interdependence effects. 
Keywords:  Granger causality, Distribution, Tails, Kernel-based test, Financial Spillover. 
JEL:  C14 C22 G10 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:drm:wpaper:201418&r=ecm 
By:  Maciej Kostrzewski 
Abstract:  News might trigger jump arrivals in financial time series. "Bad" and "good" news seem to have distinct impacts. In this research, a double exponential jump distribution is applied to model downward and upward jumps. A Bayesian double exponential jump-diffusion model is proposed. Theorems stated in the paper enable estimation of the model's parameters, detection of jumps and analysis of jump frequency. The methodology, founded upon the idea of latent variables, is illustrated with two empirical studies, employing both simulated and real-world data (the KGHM index). 
Date:  2014–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1404.2050&r=ecm 
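The data-generating process behind such a model can be sketched as follows: Brownian increments plus compound-Poisson jumps whose sizes are exponential upward with some probability and exponential downward otherwise (a Kou-type double exponential jump-diffusion). All parameter names and values here are illustrative assumptions; the paper's contribution is the Bayesian estimation of such a process, which this sketch does not attempt.

```python
import numpy as np

def simulate_dejd(T=1000, dt=1/252, mu=0.05, sigma=0.2,
                  lam=25.0, p_up=0.4, eta_up=50.0, eta_dn=25.0,
                  seed=0):
    """Simulate log-returns from a double exponential jump-diffusion:
    a drift-adjusted Gaussian diffusion part plus compound-Poisson
    jumps, upward ~ Exp(1/eta_up) with probability p_up and downward
    ~ Exp(1/eta_dn) otherwise, so "bad" and "good" news can have
    distinct jump-size distributions.
    """
    rng = np.random.default_rng(seed)
    # diffusion component of the log-return
    diff = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal(size=T)
    # number of jumps arriving in each interval
    n_jumps = rng.poisson(lam * dt, size=T)
    jumps = np.zeros(T)
    for t in range(T):
        for _ in range(n_jumps[t]):
            if rng.random() < p_up:
                jumps[t] += rng.exponential(1.0 / eta_up)   # upward jump
            else:
                jumps[t] -= rng.exponential(1.0 / eta_dn)   # downward jump
    return diff + jumps, n_jumps
```

With eta_dn < eta_up, downward jumps are larger on average, producing the asymmetric reaction to bad news the abstract alludes to.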
By:  Stephane Bonhomme; Koen Jochmans (Institute for Fiscal Studies and Sciences Po); Jean-Marc Robin (Institute for Fiscal Studies and Sciences Po) 
Abstract:  We present a constructive identification proof of p-linear decompositions of q-way arrays. The analysis is based on the joint spectral decomposition of a set of matrices. It has applications in the analysis of a variety of latent-structure models, such as q-variate mixtures of p distributions. As such, our results provide a constructive alternative to Allman, Matias and Rhodes [2009]. The identification argument suggests a joint approximate-diagonalization estimator that is easy to implement and whose asymptotic properties we derive. We illustrate the usefulness of our approach by applying it to nonparametrically estimate multivariate finite-mixture models and hidden Markov models. 
Date:  2014–04 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:18/14&r=ecm 
By:  Guy Tchuente; Marine Carrasco 
Abstract:  The use of many moment conditions improves the asymptotic efficiency of instrumental variables estimators. However, in finite samples, the inclusion of an excessive number of moments increases the bias. To solve this problem, we propose regularized versions of the limited information maximum likelihood (LIML) estimator based on three different regularizations: Tikhonov, Landweber-Fridman, and principal components. Our estimators are consistent and reach the semiparametric efficiency bound under some standard assumptions. We show that the regularized LIML estimator based on principal components possesses finite moments when the sample size is large enough. The higher-order expansion of the mean square error (MSE) shows the dominance of regularized LIML over regularized two-stage least squares estimators. We devise a data-driven selection of the regularization parameter based on the approximate MSE. A Monte Carlo study shows that the regularized LIML works well and performs better in many situations than competing methods. Two empirical applications illustrate the relevance of our estimators: one regarding the return to schooling and the other the elasticity of intertemporal substitution. 
Keywords:  High-dimensional models, LIML, many instruments, MSE, regularization methods 
Date:  2013–07–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2013s20&r=ecm 
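The role of regularization with many instruments can be illustrated with the simplest of the three schemes, Tikhonov, applied to the first stage of 2SLS (a ridge-regularized projection). This is a minimal sketch of the general mechanism, not the authors' regularized LIML estimator, and `alpha` is a hand-picked illustrative value; the paper instead selects the regularization parameter by minimizing an approximate MSE.

```python
import numpy as np

def tikhonov_2sls(y, x, Z, alpha=0.1):
    """Two-stage least squares with a Tikhonov (ridge) regularized
    first stage: the fitted values of the endogenous regressor x are
    computed from a large instrument set Z with a ridge penalty,
    which tames the many-instruments bias of the unregularized
    projection.
    """
    n = len(y)
    # first stage: ridge projection of x on the instruments
    pi = np.linalg.solve(Z.T @ Z + alpha * n * np.eye(Z.shape[1]), Z.T @ x)
    x_hat = Z @ pi
    # second stage: IV estimate using the regularized fitted values
    return float((x_hat @ y) / (x_hat @ x))
```

In a simulation with an endogenous regressor and 50 instruments, the regularized IV estimate sits close to the true coefficient while OLS is visibly biased by the endogeneity.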
By:  Sloczynski, Tymon (Warsaw School of Economics); Wooldridge, Jeffrey M. (Michigan State University) 
Abstract:  In this paper we study doubly robust estimators of various average treatment effects under unconfoundedness. We unify and extend much of the recent literature by providing a very general identification result which covers binary and multivalued treatments; unnormalized and normalized weighting; and both inverse-probability weighted (IPW) and doubly robust estimators. We also allow for subpopulation-specific average treatment effects where subpopulations can be based on covariate values in an arbitrary way. Similar to Wooldridge (2007), we then discuss estimation of the conditional mean using quasi-log-likelihoods (QLLs) from the linear exponential family. 
Keywords:  double robustness, inverse-probability weighting (IPW), multivalued treatments, quasi-maximum likelihood estimation (QMLE), treatment effects 
JEL:  C13 C21 C31 C51 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp8084&r=ecm 
By:  Preinerstorfer, David; Pötscher, Benedikt M. 
Abstract:  The behavior of the power function of autocorrelation tests such as the Durbin-Watson test in time series regressions or the Cliff-Ord test in spatial regression models has been intensively studied in the literature. When the correlation becomes strong, Krämer (1985) (for the Durbin-Watson test) and Krämer (2005) (for the Cliff-Ord test) have shown that the power can be very low, and in fact can converge to zero, under certain circumstances. Motivated by these results, Martellosio (2010) set out to build a general theory that would explain these findings. Unfortunately, Martellosio (2010) does not achieve this goal, as a substantial portion of his results and proofs suffers from serious flaws. The present paper now builds a theory as envisioned in Martellosio (2010) in a fairly general framework, covering general invariant tests of a hypothesis on the disturbance covariance matrix in a linear regression model. The general results are then specialized to testing for spatial correlation and to autocorrelation testing in time series regression models. We also characterize the situation where the null and the alternative hypothesis are indistinguishable by invariant tests. 
Keywords:  power function, invariant test, autocorrelation, spatial correlation, zero-power trap, indistinguishability, Durbin-Watson test, Cliff-Ord test 
JEL:  C12 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:55059&r=ecm 
By:  Jan P. A. M. Jacobs; Samad Sarferaz; JanEgbert Sturm; Simon van Norden 
Abstract:  Data revisions in macroeconomic time series are typically studied in isolation, ignoring the joint behaviour of revisions across different series. This ignores (i) the possibility that early releases of some series may help forecast revisions in other series and (ii) the problems statistical agencies may face in producing estimates consistent with accounting identities. This paper extends the Jacobs and van Norden (2011) modeling framework to multivariate data revisions. We consider systems of variables, where true values and news and noise can be correlated, and which may be linked by one or more identities. We show how to model such systems with standard linear state space models. We motivate and illustrate the multivariate modeling framework with Swiss current account data, using Bayesian econometric methods for estimation and inference. 
Keywords:  data revisions, state space form, linear constraints, correlated shocks, Bayesian econometrics, current account statistics, Switzerland 
JEL:  C22 C53 C82 
Date:  2013–11–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2013s44&r=ecm 
By:  Thomai Filippeli (Queen Mary University of London); Konstantinos Theodoridis (Bank of England) 
Abstract:  Similar to Ingram and Whiteman (1994), De Jong et al. (1993) and Del Negro and Schorfheide (2004), this study proposes a methodology for constructing Dynamic Stochastic General Equilibrium (DSGE) consistent prior distributions for Bayesian Vector Autoregressive (BVAR) models. The moments of the assumed Normal-Inverse Wishart (non-conjugate) prior distribution of the VAR parameter vector are derived using the results developed by Fernandez-Villaverde et al. (2007), Christiano et al. (2006) and Ravenna (2007) regarding structural VAR (SVAR) models and the normal prior density of the DSGE parameter vector. In line with the results from previous studies, BVAR models with theoretical priors seem to achieve forecasting performance that is comparable to, if not better than, that obtained using theory-free "Minnesota" priors (Doan et al., 1984). Additionally, the marginal likelihood of the time-series model with theory-founded priors, derived from the output of the Gibbs sampler, can be used to rank competing DSGE theories that aim to explain the same observed data (Geweke, 2005). Finally, motivated by the work of Christiano et al. (2010b,a) and Del Negro and Schorfheide (2004), we use the theoretical results developed by Chernozhukov and Hong (2003) and Theodoridis (2011) to derive the quasi-Bayesian posterior distribution of the DSGE parameter vector. 
Keywords:  BVAR, SVAR, DSGE, Gibbs sampling, Marginal-likelihood evaluation, Predictive density evaluation, Quasi-Bayesian DSGE estimation 
JEL:  C11 C13 C32 C52 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp713&r=ecm 
By:  Wooyoung Kim; Koohyun Kwon; Soonwoo Kwon; Sokbae 'Simon' Lee (Institute for Fiscal Studies and Seoul National University) 
Abstract:  In this paper, we investigate what can be learned about average counterfactual outcomes when it is assumed that treatment response functions are smooth. The smoothness conditions in this paper amount to assuming that the differences in average counterfactual outcomes are bounded under different treatments. We obtain a set of new partial identification results for the average treatment response by imposing smoothness conditions alone, by combining them with monotonicity assumptions, and by adding instrumental variables assumptions to treatment responses. We give a numerical illustration of our findings by reanalyzing the return-to-schooling example of Manski and Pepper (2000) and demonstrate how one can conduct sensitivity analysis by varying the degree of the smoothness assumption. In addition, we discuss how to carry out inference based on the existing literature using our identification results and illustrate their usefulness by applying one of our identification results to the Job Corps Study dataset. Our empirical results show strong evidence of gender and race gaps among the less educated population. 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:17/14&r=ecm 
By:  Tsukasa Ishigaki; Nobuhiko Terui; Tadahiko Sato; Greg M. Allenby 
Abstract:  Large-scale databases in marketing track multiple consumers across multiple product categories. A challenge in modeling these data is the resulting size of the data matrix, which often has thousands of consumers and thousands of choice alternatives with prices and merchandising variables changing over time. We develop a heterogeneous topic model for these data, and employ variational Bayes techniques for estimation that are shown to be accurate in a Monte Carlo simulation study. We find the model to be highly scalable and useful for identifying effective marketing variables for different consumers, and for predicting the choices of infrequent purchasers. 
Date:  2014–01 
URL:  http://d.repec.org/n?u=RePEc:toh:tmarga:114&r=ecm 
By:  Hong Pi; Carsten Peterson 
Abstract:  A method for estimating nonlinear regression errors and their distributions without performing regression is presented. Assuming continuity of the modeling function, the variance is given in terms of conditional probabilities extracted from the data. For N data points the computational demand scales as N^2. Comparing the predicted residual errors with those derived from a linear model assumption provides a signal for nonlinearity. The method is successfully illustrated with data generated by the Ikeda and Lorenz maps augmented with noise. As a by-product, the embedding dimensions of these maps are also extracted. 
Date:  2014–04 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1404.3219&r=ecm 
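The core idea — recovering the noise variance from conditional probabilities without fitting a regression — can be sketched with a generic kernel estimator of the conditional moments. This is an illustrative stand-in for the authors' estimator, not their exact construction; the bandwidth `h` is a hand-picked assumption. The O(N^2) cost shows up as the pairwise kernel-weight matrix.

```python
import numpy as np

def noise_variance_estimate(x, y, h=0.02):
    """Estimate the regression noise variance without fitting a
    regression: kernel-weighted conditional moments give
    Var(y|x) ~ E[y^2|x] - E[y|x]^2 at each sample point, and
    averaging over the sample yields the residual variance.  The
    O(N^2) computational demand comes from the pairwise weights.
    """
    d2 = (x[:, None] - x[None, :])**2        # pairwise squared distances
    w = np.exp(-d2 / (2 * h**2))             # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)
    m1 = w @ y                               # E[y | x_i]
    m2 = w @ (y**2)                          # E[y^2 | x_i]
    return float(np.mean(m2 - m1**2))
```

Comparing this estimate with the residual variance of a linear fit gives the nonlinearity signal the abstract mentions: for a strongly nonlinear function the linear residual variance is far larger than the kernel-based noise estimate.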
By:  Guy Tchuente; Marine Carrasco 
Abstract:  The problem of weak instruments is due to a very small concentration parameter. To boost the concentration parameter, we propose to increase the number of instruments to a large number or even up to a continuum. However, in finite samples, the inclusion of an excessive number of moments may be harmful. To address this issue, we use regularization techniques as in Carrasco (2012) and Carrasco and Tchuente (2013). We show that normalized regularized 2SLS and LIML are consistent and asymptotically normally distributed. Moreover, their asymptotic variances reach the semiparametric efficiency bound, unlike most competing estimators. Our simulations show that the leading regularized estimators (LF and T of LIML) work very well (they are nearly median-unbiased) even in the case of relatively weak instruments. 
Keywords:  Many weak instruments, LIML, 2SLS, regularization methods 
Date:  2013–07–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2013s21&r=ecm 
By:  Dirk Temme (University of Wuppertal); Adamantios Diamantopoulos (University of Vienna); Vanessa Pfegfeidel (University of Wuppertal) 
Abstract:  Formatively measured constructs (FMCs) are increasingly used in marketing research as well as in other disciplines. Although constructs operationalized by means of formative indicators have mostly been placed in exogenous positions in structural equation models, they also frequently occupy structurally endogenous positions. The vast majority of studies specifying endogenously positioned FMCs have followed the common practice of modeling the impact of antecedent (predictor) constructs directly on the focal FMC without specifying indirect effects via the formative indicators. However, while widespread even in top journals, this practice is highly problematic, as it can lead to biased parameter estimates, erroneous total effects, and questionable conclusions. As a result, both theory development and empirically based managerial recommendations are likely to suffer. Against this background, the authors offer appropriate modeling guidelines to ensure that a conceptually sound and statistically correct model specification is obtained when a FMC occupies an endogenous position. The proposed guidelines are illustrated using both covariance structure analysis (CSA) and partial least squares (PLS) methods and are applied to a real-life empirical example. Implications for researchers are considered and ‘good practice’ recommendations offered. 
Keywords:  formatively measured constructs, endogenous formative indicators, covariance structure analysis, partial least squares 
Date:  2014–04 
URL:  http://d.repec.org/n?u=RePEc:bwu:schdps:sdp14005&r=ecm 
By:  Peter Z. Schochet 
Abstract:  This article develops a new approach for calculating appropriate sample sizes for school-based randomized controlled trials (RCTs) with binary outcomes, using logit models with and without baseline covariates. The theoretical analysis develops sample size formulas for clustered designs in which random assignment is at the school or teacher level, using generalized estimating equation methods. The key finding is that the sample sizes of 40 to 60 schools typically included in clustered RCTs for student test score or behavioral scale outcomes will often be insufficient for binary outcomes. 
Keywords:  Statistical Power, Binary Outcomes, Clustered Designs, Randomized Controlled Trials 
JEL:  I 
Date:  2013–06–30 
URL:  http://d.repec.org/n?u=RePEc:mpr:mprres:7874&r=ecm 
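The scale of the problem can be illustrated with the textbook design-effect approximation that the article refines: inflate the two-proportion sample size formula by 1 + (m - 1) * ICC for clusters of size m. This is a hedged sketch of the standard approximation only — the article's GEE-based formulas additionally handle logit-scale effects and baseline covariates — and all parameter values below are illustrative.

```python
import math
from statistics import NormalDist

def schools_needed(p0, p1, icc, m, alpha=0.05, power=0.8):
    """Approximate total number of schools (two equal arms) for a
    clustered RCT with a binary outcome, via the design-effect
    inflation of the standard two-proportion sample size formula.
    p0, p1 are control and treatment outcome rates, icc the
    intraclass correlation, m the number of students per school.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    # per-arm sample size if observations were independent
    n_indep = (z_a + z_b)**2 * (p0*(1-p0) + p1*(1-p1)) / (p1 - p0)**2
    # design effect for m students per school with intraclass corr icc
    deff = 1 + (m - 1) * icc
    schools_per_arm = math.ceil(n_indep * deff / m)
    return 2 * schools_per_arm
```

Even this rough calculation shows how quickly requirements grow with the intraclass correlation: detecting a 10-percentage-point effect with 60 students per school needs only a few dozen schools at a low ICC, but well over a hundred at ICCs typical of test score outcomes.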
By:  de Chaisemartin, Clement (University of Warwick); D'Haultfoeuille, Xavier (CREST) 
Abstract:  The changes-in-changes model extends the widely used difference-in-differences model to situations where outcomes may evolve heterogeneously. Contrary to difference-in-differences, this model is invariant to the scaling of the outcome. This paper develops an instrumental variable changes-in-changes model to allow for situations in which perfect control and treatment groups cannot be defined, so that some units may be treated in the "control group", while some units may remain untreated in the "treatment group". This is the case, for instance, with repeated cross sections, if the treatment is not tied to a strict rule. Under a mild strengthening of the changes-in-changes model, treatment effects in a population of compliers are point identified when the treatment rate does not change in the control group, and partially identified otherwise. Simple plug-in estimators of treatment effects are proposed. We show that they are asymptotically normal, and that the bootstrap is valid. Finally, we use our results to reanalyze findings in Field (2007) and Duflo (2001). 
Keywords:  differences-in-differences, changes-in-changes, imperfect compliance, instrumental variables, quantile treatment effects, partial identification. 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:cge:wacage:184&r=ecm 
By:  Matthew Blackwell; James Honaker; Gary King 
Abstract:  We extend a unified and easy-to-use approach to measurement error and missing data. Blackwell, Honaker, and King (2014a) give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details; more sophisticated measurement error model specifications and estimation procedures; and analyses to assess the approach's robustness to correlated measurement errors and to errors in categorical variables. These results support using the technique to reduce bias and increase efficiency in a wide variety of empirical research. 
Date:  2014–01 
URL:  http://d.repec.org/n?u=RePEc:qsh:wpaper:161326&r=ecm 
By:  Ching-Wai (Jeremy) Chiu (Bank of England); Haroon Mumtaz (Queen Mary University of London); Gabor Pinter (Bank of England) 
Abstract:  We confirm that standard time-series models for US output growth, inflation, interest rates and stock market returns feature a non-Gaussian error structure. We build a 4-variable VAR model where the orthogonalised shocks have a Student-t distribution with a time-varying variance. We find that, in terms of in-sample fit, the VAR model that features both stochastic volatility and Student-t disturbances outperforms restricted alternatives that feature either attribute alone. The VAR model with Student-t disturbances results in density forecasts for industrial production and stock returns that are superior to alternatives that assume Gaussianity. This difference appears to be especially stark over the recent financial crisis. 
Keywords:  Bayesian VAR, Fat tails, Stochastic volatility 
JEL:  C32 C53 
Date:  2014–03 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp714&r=ecm 
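The error structure the paper fits — Student-t orthogonalised shocks with stochastic volatility inside a VAR — can be sketched as a data-generating process. The coefficient matrices, the impact matrix, and the log-AR(1) volatility law below are illustrative assumptions, not the paper's estimates; the sketch only shows why such a model produces the fat tails the abstract documents.

```python
import numpy as np

def simulate_t_sv_var(T=5000, nu=5, seed=0):
    """Simulate a bivariate VAR(1) whose orthogonalised shocks are
    Student-t (nu degrees of freedom, scaled to unit variance) with a
    log-AR(1) stochastic volatility, mimicking the non-Gaussian error
    structure described in the abstract.
    """
    rng = np.random.default_rng(seed)
    A = np.array([[0.5, 0.1],
                  [0.0, 0.3]])          # VAR(1) coefficient matrix
    B = np.array([[1.0, 0.0],
                  [0.4, 1.0]])          # impact (orthogonalisation) matrix
    h = np.zeros(2)                     # log-volatilities
    y = np.zeros((T, 2))
    for t in range(1, T):
        # stochastic volatility: log-variance follows an AR(1)
        h = 0.95 * h + 0.15 * rng.normal(size=2)
        # Student-t orthogonal shocks, rescaled to unit variance
        eps = rng.standard_t(nu, size=2) * np.sqrt((nu - 2) / nu)
        u = B @ (np.exp(h / 2) * eps)
        y[t] = A @ y[t - 1] + u
    return y
```

Series generated this way have kurtosis well above the Gaussian value of 3, which is exactly the feature a Gaussian homoskedastic VAR cannot match in density forecasting.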