
on Econometrics 
By:  Dominik Bertsche (University of Konstanz, Department of Economics, Box 129, 78457 Konstanz, Germany); Robin Braun (University of Konstanz, Graduate School of Decision Science, Department of Economics, Box 129, 78457 Konstanz, Germany) 
Abstract:  In Structural Vector Autoregressive (SVAR) models, heteroskedasticity can be exploited to identify structural parameters statistically. In this paper, we propose to capture time variation in the second moment of structural shocks by a stochastic volatility (SV) model, assuming that their log variances follow latent AR(1) processes. Estimation is performed by Gaussian Maximum Likelihood, and an efficient Expectation Maximization algorithm is developed for that purpose. Since the smoothing distributions required in the algorithm are intractable, we propose to approximate them either by Gaussian distributions or with the help of Markov Chain Monte Carlo (MCMC) methods. We provide simulation evidence that the SV-SVAR model works well in estimating the structural parameters even under model misspecification. We use the proposed model to study the interdependence between monetary policy and the stock market. Based on monthly US data, we find that the SV specification provides the best fit and is favored by conventional information criteria over other models of heteroskedasticity, including GARCH, Markov Switching, and Smooth Transition models. Since structural shocks identified by heteroskedasticity have no economic interpretation, we test conventional exclusion restrictions as well as Proxy SVAR restrictions, which are overidentifying in the heteroskedastic model. 
Keywords:  Structural Vector Autoregression (SVAR), Identification via heteroskedasticity, Stochastic Volatility, Proxy SVAR 
JEL:  C32 
Date:  2017–12–21 
URL:  http://d.repec.org/n?u=RePEc:knz:dpteco:1711&r=ecm 
By:  Georgiev, I; Harvey, DI; Leybourne, SJ; Taylor, AMR 
Abstract:  We examine how the familiar spurious regression problem can manifest itself in the context of recently proposed predictability tests. For these tests to provide asymptotically valid inference, account has to be taken of the degree of persistence of the putative predictors. Failure to do so can lead to spurious overrejections of the no predictability null hypothesis. A number of methods have been developed to achieve this. However, these approaches all make an underlying assumption that any predictability in the variable of interest is purely attributable to the predictors under test, rather than to any unobserved persistent latent variables, themselves uncorrelated with the predictors being tested. We show that where this assumption is violated, something that could very plausibly happen in practice, sizeable (spurious) rejections of the null can occur in cases where the variables under test are not valid predictors. In response, we propose a screening test for predictive regression invalidity based on a stationarity testing approach. In order to allow for an unknown degree of persistence in the putative predictors, and for both conditional and unconditional heteroskedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We establish the asymptotic validity of this bootstrap test, which entails establishing a conditional invariance principle along with its bootstrap counterpart, both of which appear to be new to the literature and are likely to have important applications beyond the present context. We also show how our bootstrap test can be used, in conjunction with extant predictability tests, to deliver a two-step feasible procedure. Monte Carlo simulations suggest that our proposed bootstrap methods work well in finite samples. An illustration employing U.S. stock returns data demonstrates the practical usefulness of our procedures. 
Keywords:  Predictive regression; causality; persistence; spurious regression; stationarity test; fixed regressor wild bootstrap; conditional distribution. 
Date:  2018–01 
URL:  http://d.repec.org/n?u=RePEc:esy:uefcwp:21006&r=ecm 
By:  Dongwoo Kim (Institute for Fiscal Studies and UCL); Daniel Wilhelm (Institute for Fiscal Studies and cemmap and UCL) 
Abstract:  This paper proposes a powerful alternative to the t-test in linear regressions when a regressor is mismeasured. We assume there is a second contaminated measurement of the regressor of interest. We allow the two measurement errors to be nonclassical in the sense that they may both be correlated with the true regressor, they may be correlated with each other, and we do not require any location normalizations on the measurement errors. We propose a new maximal t-statistic that is formed from the regression of the outcome onto a maximally weighted linear combination of the two measurements. Critical values of the test are easily computed via a multiplier bootstrap. In simulations, we show that this new test can be significantly more powerful than t-statistics based on OLS or IV estimates. Finally, we apply our test to the study of returns to education based on twins data from the U.S. 
Keywords:  linear regression, adaptive test, power of test, maximal combination of measurements, repeated measurements, multiplier bootstrap 
Date:  2017–12–11 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:57/17&r=ecm 
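The maximal-combination idea and its multiplier-bootstrap critical values can be sketched in a simplified, score-based form. Everything below — the data-generating design, the weight grid, and the self-normalized statistic — is a hypothetical stand-in for illustration, not the paper's exact construction:

```python
import random

def score_sets(y, x1, x2, lambdas):
    """Per-observation scores for regressing y on w = lam*x1 + (1-lam)*x2."""
    n = len(y)
    ybar = sum(y) / n
    yt = [yi - ybar for yi in y]
    sets = []
    for lam in lambdas:
        w = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
        wbar = sum(w) / n
        sets.append([(wi - wbar) * yi for wi, yi in zip(w, yt)])
    return sets

def max_t(sets, mult=None):
    """Maximal self-normalized t-statistic over the weight grid.

    With `mult` (i.i.d. N(0,1) multipliers), the centred scores are
    perturbed instead -- one multiplier-bootstrap draw of the statistic.
    """
    out = []
    for scores in sets:
        se = sum(s * s for s in scores) ** 0.5
        if mult is None:
            num = sum(scores)
        else:
            sbar = sum(scores) / len(scores)
            num = sum((s - sbar) * g for s, g in zip(scores, mult))
        out.append(abs(num / se))
    return max(out)

# Hypothetical data: two noisy measurements x1, x2 of a true regressor.
rng = random.Random(3)
n = 300
xstar = [rng.gauss(0, 1) for _ in range(n)]
x1 = [v + rng.gauss(0, 0.8) for v in xstar]
x2 = [v + rng.gauss(0, 0.8) for v in xstar]
y = [0.5 * v + rng.gauss(0, 1) for v in xstar]

sets = score_sets(y, x1, x2, lambdas=[j / 10 for j in range(11)])
t_max = max_t(sets)

# Multiplier-bootstrap critical value for the maximal statistic.
boot = sorted(max_t(sets, mult=[rng.gauss(0, 1) for _ in range(n)])
              for _ in range(500))
crit = boot[int(0.95 * len(boot))]
reject = t_max > crit
```

Because the statistic is a maximum over correlated t-statistics, the bootstrap critical value exceeds the usual 1.96, which is exactly why resampling is needed here.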
By:  Yong Li; Xiaobin Liu; Jun Yu; Tao Zeng 
Abstract:  In this paper, a new and convenient $\chi^2$ Wald test based on MCMC outputs is proposed for hypothesis testing. The new statistic can be viewed as an MCMC version of the Wald test and has several important advantages that make it very convenient in practical applications. First, it is well-defined under improper prior distributions and avoids the Jeffreys-Lindley paradox. Second, its asymptotic distribution can be shown to follow the $\chi^2$ distribution, so that threshold values can be easily calibrated from this distribution. Third, its statistical error can be derived using the Markov chain Monte Carlo (MCMC) approach. Fourth, and most importantly, it is based only on random samples drawn from the posterior distribution. Hence, it is a mere by-product of the posterior output and very easy to compute. In addition, when prior information is available, finite sample theory is derived for the proposed test statistic. Finally, the usefulness of the test is illustrated with several applications to latent variable models widely used in economics and finance. 
Date:  2018–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1801.00973&r=ecm 
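The core convenience claimed in the abstract — a Wald-type statistic assembled purely from posterior draws — can be illustrated for a scalar parameter. This is a generic sketch of that idea under a Bernstein-von-Mises-style approximation, not the paper's exact construction; the draws are simulated, hypothetical posterior output:

```python
import random
import statistics

def posterior_wald(draws, theta0):
    """Wald-type statistic computed purely from posterior MCMC output.

    Scalar illustration: the posterior mean and variance stand in for
    the point estimate and its sampling variance, so the statistic is a
    by-product of the draws and is referred to a chi-squared threshold.
    """
    mean = statistics.fmean(draws)
    var = statistics.variance(draws)
    return (mean - theta0) ** 2 / var

# Hypothetical posterior draws centred at 0.12 with spread 0.05.
random.seed(0)
draws = [random.gauss(0.12, 0.05) for _ in range(5000)]

w = posterior_wald(draws, theta0=0.0)
reject = w > 3.84   # 5% critical value of chi-squared with 1 d.o.f.
```

No extra simulation or optimization beyond the existing MCMC run is needed, which is the practical appeal the abstract emphasizes.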
By:  Badi H. Baltagi; Georges Bresson; Anoop Chaturvedi; Guy Lacroix 
Abstract:  The paper develops a general Bayesian framework for robust linear static panel data models using ε-contamination. A two-step approach is employed to derive the conditional type-II maximum likelihood (ML-II) posterior distribution of the coefficients and individual effects. The ML-II posterior densities are weighted averages of the Bayes estimator under a base prior and the data-dependent empirical Bayes estimator. Two-stage and three-stage hierarchy estimators are developed and their finite sample performance is investigated through a series of Monte Carlo experiments. These include standard random effects as well as Mundlak-type, Chamberlain-type and Hausman-Taylor-type models. The simulation results underscore the relatively good performance of the three-stage hierarchy estimator. Within a single theoretical framework, our Bayesian approach encompasses a variety of specifications, while conventional methods require separate estimators for each case. 
Keywords:  ε-contamination, hyper-g priors, type-II maximum likelihood posterior density, panel data, robust Bayesian estimator, three-stage hierarchy 
JEL:  C11 C23 C26 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:lvl:crrecr:1706&r=ecm 
By:  Johan Dahlin; Adrian Wills; Brett Ninness 
Abstract:  This paper considers the problem of computing Bayesian estimates of system parameters, and functions of them, on the basis of observed system performance data. This issue has been studied previously via stochastic simulation approaches based on the popular Metropolis-Hastings (MH) algorithm. That work identified a recognised difficulty: tuning the proposal distribution so that the MH method delivers realisations with sufficient mixing for efficient convergence. This paper proposes, and empirically examines, a method of tuning the proposal using ideas borrowed from the numerical optimisation literature on efficient computation of Hessians, so that gradient and curvature information about the target posterior can be incorporated in the proposal. 
Date:  2018–01 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1801.01243&r=ecm 
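The tuning idea — shifting the proposal using gradient (and, in the paper, curvature) information from the target — can be sketched with a one-dimensional MALA-type sampler. The Gaussian target and step size below are illustrative assumptions; the paper's quasi-Newton construction is considerably richer:

```python
import math
import random

def log_target(x):
    return -0.5 * x * x        # N(0,1) target, up to a constant

def grad_log_target(x):
    return -x

def mh_gradient_proposal(n_iter, step, seed=1):
    """Metropolis-Hastings with a gradient-informed (MALA-type) proposal.

    The proposal mean is shifted along the gradient of the log target --
    the simplest form of the idea in the abstract -- and the
    accept/reject step corrects for the asymmetric proposal.
    """
    rng = random.Random(seed)
    x, draws = 0.0, []
    for _ in range(n_iter):
        mean_fwd = x + 0.5 * step ** 2 * grad_log_target(x)
        prop = rng.gauss(mean_fwd, step)
        mean_bwd = prop + 0.5 * step ** 2 * grad_log_target(prop)
        log_q_fwd = -0.5 * ((prop - mean_fwd) / step) ** 2
        log_q_bwd = -0.5 * ((x - mean_bwd) / step) ** 2
        log_alpha = log_target(prop) - log_target(x) + log_q_bwd - log_q_fwd
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = prop
        draws.append(x)
    return draws

draws = mh_gradient_proposal(n_iter=20000, step=1.0)
mean = sum(draws) / len(draws)                 # near 0 for the N(0,1) target
var = sum(d * d for d in draws) / len(draws)   # near 1
```

Adding curvature would replace the fixed `step` with a scale derived from the (estimated) Hessian of the log target, which is the direction the paper pursues.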
By:  Whitney K. Newey (Institute for Fiscal Studies and MIT); James M. Robins (Institute for Fiscal Studies) 
Abstract:  There are many interesting and widely used estimators of a functional with finite semiparametric variance bound that depend on nonparametric estimators of nuisance functions. We use cross-fitting to construct such estimators with fast remainder rates. We give cross-fit doubly robust estimators that use separate subsamples to estimate different nuisance functions. We show that a cross-fit doubly robust spline regression estimator of the expected conditional covariance is semiparametric efficient under minimal conditions. Corresponding estimators of other average linear functionals of a conditional expectation are shown to have the fastest known remainder rates under certain smoothness conditions. The cross-fit plug-in estimator shares some of these properties but has a remainder term that is larger than that of the cross-fit doubly robust estimator. As specific examples we consider the expected conditional covariance, the mean with randomly missing data, and a weighted average derivative. 
Date:  2017–10–03 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:41/17&r=ecm 
By:  Muhammad Jehangir Amjad; Devavrat Shah; Dennis Shen 
Abstract:  We present a robust generalization of the synthetic control method for comparative case studies. Like the classical method, we present an algorithm to estimate the unobservable counterfactual of a treatment unit. A distinguishing feature of our algorithm is that it denoises the data matrix via singular value thresholding, which renders our approach robust in multiple facets: it automatically identifies a good subset of donors, overcomes the challenges of missing data, and continues to work well in settings where covariate information may not be provided. To begin, we establish the condition under which the fundamental assumption in synthetic control-like approaches holds, i.e. when the linear relationship between the treatment unit and the donor pool prevails in both the pre- and post-intervention periods. We provide the first finite sample analysis for a broader class of models, the Latent Variable Model, in contrast to the Factor Models previously considered in the literature. Further, we show that our denoising procedure accurately imputes missing entries, producing a consistent estimator of the underlying signal matrix provided $p = \Omega(T^{-1+\zeta})$ for some $\zeta > 0$; here, $p$ is the fraction of observed data and $T$ is the time interval of interest. Under the same setting, we prove that the mean squared error (MSE) of our prediction estimation scales as $O(\sigma^2/p + 1/\sqrt{T})$, where $\sigma^2$ is the noise variance. Using a data aggregation method, we show that the MSE can be made as small as $O(T^{-1/2+\gamma})$ for any $\gamma \in (0, 1/2)$, leading to a consistent estimator. We also introduce a Bayesian framework to quantify the model uncertainty through posterior probabilities. Our experiments, using both real-world and synthetic datasets, demonstrate that our robust generalization yields an improvement over the classical synthetic control method. 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1711.06940&r=ecm 
By:  Gery Geenens; Richard Dunn 
Abstract:  Value-at-Risk and its conditional analogue, which takes into account the available information about the economic environment, form the centrepiece of the Basel framework for the evaluation of market risk in the banking sector. In this paper, a new nonparametric framework for estimating this conditional Value-at-Risk is presented. A nonparametric approach is particularly pertinent, as the traditionally used parametric distributions have been shown to be insufficiently robust and flexible for most of the equity-return data sets observed in practice. The method extracts the quantile of the conditional distribution of interest, whose estimation is based on a novel estimator of the density of the copula describing the dynamic dependence observed in the series of returns. Real-world backtesting analyses demonstrate the potential of the approach, whose performance may be superior to its industry counterparts. 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1712.05527&r=ecm 
By:  Maria Dimakopoulou; Susan Athey; Guido Imbens 
Abstract:  Contextual bandit algorithms seek to learn a personalized treatment assignment policy, balancing exploration against exploitation. Although a number of algorithms have been proposed, there is little guidance available for applied researchers to select among various approaches. Motivated by the econometrics and statistics literatures on causal effects estimation, we study a new consideration to the exploration vs. exploitation framework, which is that the way exploration is conducted in the present may contribute to the bias and variance in the potential outcome model estimation in subsequent stages of learning. We leverage parametric and nonparametric statistical estimation methods and causal effect estimation methods in order to propose new contextual bandit designs. Through a variety of simulations, we show how alternative design choices impact the learning performance and provide insights on why we observe these effects. 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1711.07077&r=ecm 
By:  Christian Gouriéroux (CREST; University of Toronto); Alain Monfort (CREST); JeanPaul Renne (University of Lausanne) 
Abstract:  The well-known problem of non-identifiability of structural VAR models disappears if the structural shocks are independent and if at most one of them is Gaussian. In that case, the relevant estimation technique is Independent Component Analysis (ICA). Since the introduction of ICA by Comon (1994), various semiparametric estimation methods have been proposed for "orthogonalizing" the error terms. These methods include pseudo maximum likelihood (PML) approaches and recursive PML. The aim of our paper is to derive the asymptotic properties of the PML approaches, in particular to study their consistency. We conduct Monte Carlo studies exploring the relative performances of these methods. Finally, an application based on real data shows that structural VAR models can be estimated without additional identification restrictions in the non-Gaussian case and that the usual restrictions can be tested. 
Keywords:  Independent Component Analysis; Pseudo Maximum Likelihood; Identification; Cayley Transform; Structural Shocks; Structural VAR; Impulse Response Functions 
JEL:  C14 C32 
URL:  http://d.repec.org/n?u=RePEc:crs:wpaper:201709&r=ecm 
By:  Federico A. Bugni (Institute for Fiscal Studies and Duke University); Ivan A. Canay (Institute for Fiscal Studies and Northwestern University); Azeem M. Shaikh (Institute for Fiscal Studies and University of Chicago) 
Abstract:  This paper studies inference in randomized controlled trials with covariate-adaptive randomization when there are multiple treatments. More specifically, we study inference in this setting about the average effect of one or more treatments relative to other treatments or a control. As in Bugni et al. (2017), covariate-adaptive randomization refers to randomization schemes that first stratify according to baseline covariates and then assign treatment status so as to achieve "balance" within each stratum. In contrast to Bugni et al. (2017), however, we allow the proportion of units assigned to each of the treatments to vary across strata. We first study the properties of estimators derived from a "fully saturated" linear regression, i.e., a linear regression of the outcome on all interactions between indicators for each of the treatments and indicators for each of the strata. We show that tests based on these estimators using the usual heteroskedasticity-consistent estimator of the asymptotic variance are invalid, in the sense that they may have limiting rejection probability under the null hypothesis strictly greater than the nominal level; on the other hand, tests based on these estimators and suitable estimators of the asymptotic variance that we provide are exact, in the sense that they have limiting rejection probability under the null hypothesis equal to the nominal level. For the special case in which the target proportion of units assigned to each of the treatments does not vary across strata, we additionally consider tests based on estimators derived from a linear regression with "strata fixed effects," i.e., a linear regression of the outcome on indicators for each of the treatments and indicators for each of the strata. 
We show that tests based on these estimators using the usual heteroskedasticity-consistent estimator of the asymptotic variance are conservative, in the sense that they have limiting rejection probability under the null hypothesis no greater than, and typically strictly less than, the nominal level, but tests based on these estimators and suitable estimators of the asymptotic variance that we provide are exact, thereby generalizing the results in Bugni et al. (2017) for a single treatment to multiple treatments. A simulation study illustrates the practical relevance of our theoretical results. 
Keywords:  Covariate-adaptive randomization, multiple treatments, stratified block randomization, Efron's biased-coin design, treatment assignment, randomized controlled trial, strata fixed effects, saturated regression 
JEL:  C12 C14 
Date:  2017–08–02 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:34/17&r=ecm 
By:  Kunz, Johannes S. (Monash University); Staub, Kevin E. (University of Melbourne); Winkelmann, Rainer (University of Zurich) 
Abstract:  The maximum likelihood estimator for the regression coefficients, β, in a panel binary response model with fixed effects can be severely biased if N is large and T is small, a consequence of the incidental parameters problem. This has led to the development of conditional maximum likelihood estimators and, more recently, of estimators that remove the O(1/T) bias in the estimate of β. We add to this literature in two important ways. First, we focus on estimation of the fixed effects proper, as these have become increasingly important in applied work. Second, we build on a bias-reduction approach originally developed by Kosmidis and Firth (2009) for cross-section data, and show that, in contrast to other proposals, the new estimator ensures finiteness of the fixed effects even in the absence of within-unit variation in the outcome. Results from a simulation study document favourable small sample properties. In an application to hospital data on patient readmission rates under the 2010 Affordable Care Act, we find that hospital fixed effects are strongly correlated across different treatment categories and are on average higher for privately owned hospitals. 
Keywords:  perfect prediction, bias reduction, penalised likelihood, logit, probit, Affordable Care Act 
JEL:  C23 C25 I18 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp11182&r=ecm 
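The finiteness property emphasized in the abstract can be seen in the simplest possible case: a one-parameter logit fitted via Firth's adjusted score, the approach that Kosmidis and Firth (2009) generalize. The toy data below are perfectly separated, so plain maximum likelihood would send the slope to infinity; this is an illustrative sketch, not the panel fixed-effects estimator itself:

```python
import math

def firth_logit_slope(x, y, n_iter=200):
    """Bias-reduced logit slope via Firth's adjusted score (one parameter).

    The adjusted score is sum_i (y_i - p_i + h_i*(1/2 - p_i)) * x_i,
    where h_i are the leverages.  Unlike plain ML, its root stays finite
    even under perfect separation -- the property the abstract exploits
    for the fixed effects.
    """
    beta = 0.0
    for _ in range(n_iter):
        p = [1.0 / (1.0 + math.exp(-beta * xi)) for xi in x]
        info = sum(xi * xi * pi * (1 - pi) for xi, pi in zip(x, p))
        h = [xi * xi * pi * (1 - pi) / info for xi, pi in zip(x, p)]
        score = sum((yi - pi + hi * (0.5 - pi)) * xi
                    for xi, yi, pi, hi in zip(x, y, p, h))
        step = score / info          # Newton step on the adjusted score
        beta += step
        if abs(step) < 1e-10:
            break
    return beta

# Perfectly separated toy data: plain ML has no finite maximizer here.
x = [-2.0, -1.0, 1.0, 2.0]
y = [0, 0, 1, 1]
beta = firth_logit_slope(x, y)
```

The leverage correction `h_i*(1/2 - p_i)` shrinks the score towards zero before it can diverge, which is what keeps the estimate finite.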
By:  Vincent Boucher 
Abstract:  I present a behavioural model of network formation with positive network externalities in which individuals have preferences for being part of a clique. The behavioural model leads to an associated supermodular (Topkis, 1979) normalform game. I show that the behavioural model converges to the greatest Nash equilibrium of the associated normalform game. I propose an approximate Bayesian computation (ABC) framework, using original summary statistics, to make inferences about individuals' preferences, and provide an illustration using data on high school friendships. 
Keywords:  Network formation, Supermodular Games, Approximate Bayesian Computation 
JEL:  D85 C11 C15 C72 
Date:  2017 
URL:  http://d.repec.org/n?u=RePEc:lvl:crrecr:1710&r=ecm 
By:  Taisuke Otsu; Chen Qiu 
Abstract:  This paper is concerned with estimation of functionals of a latent weight function that satisfies possibly high dimensional multiplicative moment conditions. The main examples are missing data problems, treatment effects, and functionals of the stochastic discount factor in asset pricing. We propose to estimate the latent weight function by an information theoretic approach combined with the ℓ1-penalization technique to deal with high dimensional moment conditions under sparsity. We derive asymptotic properties of the proposed estimator, and illustrate the proposed method with a theoretical example on treatment effect analysis and an empirical example on the stochastic discount factor. 
Keywords:  Stochastic discount factor, Treatment effect, Information theory, High dimension 
JEL:  C12 C14 
Date:  2018–01 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:595&r=ecm 
By:  Irene Botosaru (Institute for Fiscal Studies); Chris Muris (Institute for Fiscal Studies and Simon Fraser University) 
Abstract:  In nonlinear panel models with fixed effects and fixed T, the incidental parameter problem poses identification difficulties for structural parameters and partial effects. Existing solutions are model-specific, likelihood-based, impose time homogeneity, or restrict the distribution of unobserved heterogeneity. We provide new identification results for the large class of Fixed Effects Linear Transformation (FELT) models with unknown, time-varying, weakly monotone transformation functions. Our results accommodate continuous and discrete outcomes and covariates, require only two time periods, and impose no parametric distributional assumptions. First, we provide a systematic solution to the incidental parameter problem in FELT via binarization, which transforms FELT into many binary choice models. Second, we identify the distribution of counterfactual outcomes and a menu of time-varying partial effects. Third, we obtain new results for nonlinear difference-in-differences with discrete and censored outcomes, and for FELT with random coefficients. Finally, we propose rank- and likelihood-based estimators that achieve the √n rate of convergence. 
JEL:  C14 C23 C41 
Date:  2017–06–20 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:31/17&r=ecm 
By:  Jakub Olejnik (Department of Mathematics and Computer Science, University of Lodz); Alicja Olejnik (Faculty of Economics and Sociology, University of Lodz) 
Abstract:  This paper presents a fundamentally improved statement on the asymptotic behaviour of the well-known Gaussian QML estimator of parameters in a high-order mixed regressive/autoregressive spatial model. We generalize the approach previously known in the econometric literature by considerably weakening assumptions on the spatial weight matrix, the distribution of the residuals, and the parameter space for the spatial autoregressive parameter. As an example application of our new asymptotic analysis, we also give a statement on the large sample behaviour of a general fixed effects design. 
Keywords:  spatial autoregression, quasimaximum likelihood estimation, highorder SAR model, asymptotic analysis, fixed effects model 
JEL:  C21 C23 C51 
Date:  2017–12 
URL:  http://d.repec.org/n?u=RePEc:ann:wpaper:9/2017&r=ecm 
By:  Roger Koenker (Institute for Fiscal Studies and University of Illinois) 
Abstract:  Nonparametric maximum likelihood estimation of general mixture models pioneered by the work of Kiefer and Wolfowitz (1956) has been recently reformulated as an exponential family regression spline problem in Efron (2016). Both approaches yield a low dimensional estimate of the mixing distribution, "g-modeling" in the terminology of Efron. Some casual empiricism suggests that the Efron approach is preferable when the mixing distribution has a smooth density, while Kiefer-Wolfowitz is preferable for discrete mixing settings. In the classical Gaussian deconvolution problem both maximum likelihood methods appear to be preferable to (Fourier) kernel methods. Kernel smoothing of the Kiefer-Wolfowitz estimator appears to be competitive with the Efron procedure for smooth alternatives. 
Date:  2017–08–10 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:38/17&r=ecm 
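The Kiefer-Wolfowitz NPMLE is usually computed on a fixed grid of support points, with EM (or interior-point methods) re-weighting the point masses. Below is a minimal fixed-grid EM sketch for the Gaussian deconvolution problem mentioned in the abstract; the data, grid, and two-point mixing distribution are hypothetical:

```python
import math
import random

def npmle_em(y, grid, sigma=1.0, n_iter=200):
    """Fixed-grid EM approximation to the Kiefer-Wolfowitz NPMLE.

    The mixing distribution is restricted to point masses on `grid`;
    each EM pass re-weights the masses by their average posterior
    probability across observations.
    """
    k = len(grid)
    w = [1.0 / k] * k                      # uniform starting weights
    # Gaussian likelihood of each observation at each grid point.
    lik = [[math.exp(-0.5 * ((yi - g) / sigma) ** 2) for g in grid]
           for yi in y]
    for _ in range(n_iter):
        new_w = [0.0] * k
        for row in lik:
            denom = sum(wj * lj for wj, lj in zip(w, row))
            for j in range(k):
                new_w[j] += w[j] * row[j] / denom
        w = [wj / len(y) for wj in new_w]
    return w

# Hypothetical data: a two-point mixing distribution at -2 and 2.
rng = random.Random(42)
y = [rng.gauss(rng.choice([-2.0, 2.0]), 1.0) for _ in range(300)]
grid = [-4 + 0.25 * j for j in range(33)]   # support grid on [-4, 4]
w = npmle_em(y, grid)
# The estimated mixing weights should concentrate near -2 and 2.
```

The tendency of this estimator to concentrate on a few atoms is exactly the discreteness the abstract contrasts with Efron's smooth g-modeling.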
By:  Alexandre Belloni (Institute for Fiscal Studies); Mingli Chen (Institute for Fiscal Studies and Warwick); Victor Chernozhukov (Institute for Fiscal Studies and MIT) 
Abstract:  The understanding of co-movements, dependence, and influence between variables of interest is key in many applications. Broadly speaking, such understanding can lead to better predictions and decision making in many settings. We propose Quantile Graphical Models (QGMs) to characterize prediction and conditional independence relationships within a set of random variables of interest. Although these models are of interest in a variety of applications, we draw our motivation from, and contribute to, the financial risk management literature. Importantly, the proposed framework is intended to be applied to non-Gaussian settings, which are ubiquitous in many real applications, and to handle a large number of variables and conditioning events. We propose two distinct QGMs. First, Conditional Independence Quantile Graphical Models (CIQGMs) characterize conditional independence at each quantile index, revealing the distributional dependence structure. Second, Prediction Quantile Graphical Models (PQGMs) characterize the best linear predictor under asymmetric loss functions. A key difference between these models is the (non-vanishing) misspecification between the best linear predictor and the conditional quantile functions. We also propose estimators for these QGMs. Due to high dimensionality, the two distinct QGMs require different estimators. The estimators are based on high-dimensional techniques, including (a continuum of) ℓ1-penalized quantile regressions (and low biased equations), which allow us to handle the potentially large number of variables. We build upon a recent literature to obtain new results for valid choice of the penalty parameters, rates of convergence, and confidence regions that are simultaneously valid. We illustrate how to use QGMs to quantify tail interdependence (instead of mean dependence) between a large set of variables, which is relevant in applications concerned with extreme events. 
We show that the associated tail risk network can be used for measuring systemic risk contributions. We also apply the framework to study international financial contagion and the impact of market downside movement on the dependence structure of assets' returns. 
Keywords:  High-dimensional approximately sparse model, tail risk network, conditional independence, nonlinear correlation, penalized quantile regression, systemic risk, financial contagion, downside movement 
Date:  2017–12–05 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:54/17&r=ecm 
By:  Joel L. Horowitz (Institute for Fiscal Studies and Northwestern University) 
Abstract:  This paper presents a simple non-asymptotic method for carrying out inference in IV models. The method is a non-Studentized version of the Anderson-Rubin test but is motivated and analyzed differently. In contrast to the conventional Anderson-Rubin test, the method proposed here does not require restrictive distributional assumptions, linearity of the estimated model, or simultaneous equations. Nor does it require knowledge of whether the instruments are strong or weak. It does not require testing or estimating the strength of the instruments. The method can be applied to quantile IV models that may be nonlinear and can be used to test a parametric IV model against a nonparametric alternative. The results presented here hold in finite samples, regardless of the strength of the instruments. 
Keywords:  Weak instruments, normal approximation, finitesample bounds 
JEL:  C21 C26 
Date:  2017–10–30 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:46/17&r=ecm 
By:  Guy Melard; Rajae R. Azrak 
Abstract:  The paper provides an alternative to the Klimko-Nelson theorems in the case of conditional estimators for array time series, when the assumptions of almost sure convergence cannot be established. We do not assume stationarity, nor even local stationarity. In addition, we provide sufficient conditions for two of the assumptions, and two theorems for the evaluation of the information matrix in array time series. 
Keywords:  properties; least squares; array time series 
Date:  2017–12–31 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/263350&r=ecm 
By:  Federico Crudu; Giovanni Mellace; Zsolt Sandor 
Abstract:  This paper proposes a specification test for instrumental variable models that is robust to the presence of heteroskedasticity. The test can be seen as a generalization of the Anderson-Rubin test. Our approach is based on the jackknife principle. We show that under the null the proposed statistic has a Gaussian limiting distribution. Moreover, a simulation study shows its competitive finite sample properties in terms of size and power. 
Keywords:  Instrumental variables, heteroskedasticity, many instruments, jackknife, specification tests, overidentification tests 
JEL:  C12 C13 C23 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:usi:wpaper:761&r=ecm 
By:  Christian Gouriéroux (CREST; University of Toronto); Alain Monfort (CREST); JeanPaul Renne (University of Lausanne) 
Abstract:  The basic assumption of a structural VARMA model (SVARMA) is that it is driven by a white noise whose components are independent and can be interpreted as economic shocks, called "structural" shocks. When the errors are Gaussian, independence is equivalent to noncorrelation and these models face two kinds of identification issues. The first identification problem is "static" and is due to the fact that there is an infinite number of linear transformations of a given random vector making its components uncorrelated. The second identification problem is "dynamic" and is a consequence of the fact that the SVARMA process may have a non-invertible AR and/or MA matrix polynomial but, still, has the same second-order properties as a VARMA process in which both the AR and MA matrix polynomials are invertible (the fundamental representation). Moreover, the standard Box-Jenkins approach [Box and Jenkins (1970)] automatically estimates the fundamental representation and, therefore, may lead to misspecified Impulse Response Functions. The aim of this paper is to explain that these difficulties are mainly due to the Gaussian assumption, and that both identification challenges are solved in a non-Gaussian framework. We develop new simple parametric and semiparametric estimation methods when there is non-fundamentalness in the moving average dynamics. The functioning and performance of these methods are illustrated by applications conducted on both simulated and real data. 
Keywords:  Structural VARMA; Fundamental Representation; Identification; Shocks; Impulse Response Function; Incomplete Likelihood; Composite Likelihood; Economic Scenario Generators 
JEL:  C01 C15 C32 E37 
URL:  http://d.repec.org/n?u=RePEc:crs:wpaper:201708&r=ecm 
By:  LeYu Chen (Institute for Fiscal Studies and Academia Sinica); Sokbae Lee (Institute for Fiscal Studies and Columbia University and IFS) 
Abstract:  We show that the generalized method of moments (GMM) estimation problem in instrumental variable quantile regression (IVQR) models can be equivalently formulated as a mixed integer quadratic programming problem. This enables exact computation of the GMM estimators for the IVQR models. We illustrate the usefulness of our algorithm via Monte Carlo experiments and an application to demand for fish. 
Keywords:  generalized method of moments, instrumental variable, quantile regression, endogeneity, mixed integer optimization 
JEL:  C21 C26 C61 C63 
Date:  2017–11–22 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:52/17&r=ecm 
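The GMM formulation behind the IVQR estimator can be made concrete: at quantile tau, the moment condition is E[Z (tau - 1{Y <= X'beta})] = 0, and the paper's contribution is minimizing the resulting quadratic-form objective exactly via mixed integer quadratic programming. The sketch below (not the authors' code; data and the identity weight matrix are illustrative) only evaluates the sample objective at candidate parameter values:

```python
# Evaluate the sample GMM objective for IVQR at a candidate beta:
#   || (1/n) sum_i z_i * (tau - 1{y_i <= x_i'beta}) ||^2  (identity weights).
def ivqr_gmm_objective(y, X, Z, beta, tau):
    n = len(y)
    k = len(Z[0])
    m = [0.0] * k  # sample moment vector
    for i in range(n):
        xb = sum(X[i][j] * beta[j] for j in range(len(beta)))
        u = tau - (1.0 if y[i] <= xb else 0.0)  # quantile "generalized residual"
        for j in range(k):
            m[j] += Z[i][j] * u / n
    return sum(mj * mj for mj in m)

# Toy data: y is close to x, and x serves as its own instrument.
y = [0.9, 2.1, 2.9, 4.1]
X = [[1.0], [2.0], [3.0], [4.0]]
Z = [[1.0], [2.0], [3.0], [4.0]]

print(ivqr_gmm_objective(y, X, Z, [1.0], 0.5))  # → 0.0625
print(ivqr_gmm_objective(y, X, Z, [0.0], 0.5))  # larger: beta far from the truth
```

Because the indicator makes this objective piecewise constant in beta, gradient methods fail, which is what motivates the exact mixed integer formulation in the paper.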
By:  Le-Yu Chen (Institute for Fiscal Studies and Academia Sinica); Sokbae Lee (Institute for Fiscal Studies and Columbia University) 
Abstract:  We consider a variable selection problem for the prediction of binary outcomes. We study the best subset selection procedure, by which the explanatory variables are chosen by maximizing Manski's (1975, 1985) maximum score type objective function subject to a constraint on the maximal number of selected variables. We show that this procedure can be equivalently reformulated as solving a mixed integer optimization (MIO) problem, which enables computation of the exact solution, or of an approximate solution with a definite approximation error bound. In terms of theoretical results, we obtain non-asymptotic upper and lower risk bounds when the dimension of potential covariates is possibly much larger than the sample size. Our upper and lower risk bounds are minimax rate-optimal when the maximal number of selected variables is fixed and does not increase with the sample size. We illustrate the usefulness of the best subset binary prediction approach via Monte Carlo simulations and an empirical application to the work-trip transportation mode choice. 
Keywords:  binary choice, maximum score estimation, best subset selection, ℓ0-constrained maximization, mixed integer optimization, minimax optimality, finite sample property 
JEL:  C52 C53 
Date:  2017–11–22 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:50/17&r=ecm 
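The objective behind best subset binary prediction is easy to state: choose at most q covariates and coefficients maximizing Manski's maximum score objective S(b) = sum_i 1{y_i = 1{x_i'b >= 0}}. The paper solves this exactly via mixed integer optimization; the brute-force stand-in below (illustrative only, with a hypothetical coarse coefficient grid, which is reasonable here because the score is scale-invariant) conveys the combinatorial structure:

```python
# Brute-force best subset maximum score: enumerate subsets of size q and a
# coarse coefficient grid, keeping the pair with the highest in-sample score.
from itertools import combinations, product

def best_subset_max_score(y, X, q, grid=(-1.0, -0.5, 0.5, 1.0)):
    n, p = len(X), len(X[0])
    best = (-1, None, None)  # (score, subset, coefficients)
    for subset in combinations(range(p), q):
        for coefs in product(grid, repeat=q):
            score = 0
            for i in range(n):
                xb = sum(X[i][j] * c for j, c in zip(subset, coefs))
                score += int(y[i] == (1 if xb >= 0 else 0))  # maximum score term
            if score > best[0]:
                best = (score, subset, coefs)
    return best

# Toy data: the outcome is driven by feature 0 only; feature 1 is noise.
y = [1, 1, 0, 0]
X = [[2.0, -1.0], [1.0, 3.0], [-1.0, 2.0], [-2.0, -1.0]]
score, subset, coefs = best_subset_max_score(y, X, q=1)
print(score, subset)  # a single-variable rule on feature 0 classifies perfectly
```

The grid enumeration grows exponentially in q, which is exactly why the paper's exact MIO reformulation, with its provable approximation error bound, matters in practice.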
By:  Kalouptsidi, Myrto; Scott, Paul; Souza-Rodrigues, Eduardo 
Abstract:  Dynamic discrete choice (DDC) models are not identified nonparametrically. However, the non-identification of DDC models does not necessarily imply non-identification of the counterfactuals of interest. Using a novel approach that can accommodate both nonparametric and restricted payoff functions, we provide necessary and sufficient conditions for the identification of counterfactual behavior and welfare for a broad class of counterfactuals. The conditions are simple to check and can be applied to virtually all counterfactuals in the DDC literature. To explore the robustness of counterfactual results to model restrictions in practice, we consider a numerical example of a monopolist's entry problem, as well as an empirical model of agricultural land use. In each case, we provide examples of both identified and non-identified counterfactuals of interest. 
Keywords:  counterfactual; dynamic discrete choice; identification; welfare 
Date:  2017–11 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:12470&r=ecm 
By:  Roger Koenker (Institute for Fiscal Studies and University of Illinois) 
Abstract:  Since Quetelet's work in the 19th century, social science has iconified "the average man", that hypothetical man without qualities who is comfortable with his head in the oven and his feet in a bucket of ice. Conventional statistical methods, since Quetelet, have sought to estimate the effects of policy treatments for this average man. But such effects are often quite heterogeneous: medical treatments may improve life expectancy but also impose serious short-term risks; reducing class sizes may improve the performance of good students but not help weaker ones, or vice versa. Quantile regression methods can help to explore these heterogeneous effects. Some recent developments in quantile regression methods are surveyed below. 
Keywords:  quantile regression, treatment effects, heterogeneity, causal inference 
Date:  2017–08–10 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:36/17&r=ecm 
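The core idea of quantile regression can be shown in a few lines: the tau-th sample quantile minimizes the check (pinball) loss sum_i rho_tau(y_i - xi), where rho_tau(u) = u(tau - 1{u < 0}), and quantile regression generalizes this by letting the minimizer depend on covariates. A minimal sketch of the scalar case (illustrative only; it uses the fact that a minimizer always occurs at a data point, so searching over the observations suffices):

```python
# The tau-th sample quantile as the minimizer of the check (pinball) loss.
def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0}): asymmetric absolute loss
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(y, tau):
    # A minimizer of the total check loss always occurs at an observation,
    # so it is enough to search over the data points themselves.
    return min(y, key=lambda xi: sum(check_loss(yi - xi, tau) for yi in y))

y = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0]
print(sample_quantile(y, 0.5))   # → 3.0 (the median)
print(sample_quantile(y, 0.25))  # → 1.0 (a lower quartile)
```

Setting tau = 0.5 recovers the median (the loss becomes half the absolute deviation), and varying tau traces out the whole conditional distribution, which is what makes quantile regression suited to the heterogeneous treatment effects the abstract describes.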