nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒01‒15
27 papers chosen by
Sune Karlsson
Örebro universitet

  1. Identification of Structural Vector Autoregressions by Stochastic Volatility By Dominik Bertsche; Robin Braun
  2. A Bootstrap Stationarity Test for Predictive Regression Invalidity By Georgiev, I; Harvey, DI; Leybourne, SJ; Taylor, AMR
  3. Powerful t-Tests in the presence of nonclassical measurement error By Dongwoo Kim; Daniel Wilhelm
  4. A New Wald Test for Hypothesis Testing Based on MCMC outputs By Yong Li; Xiaobin Liu; Jun Yu; Tao Zeng
  5. Robust linear static panel data models using e-contamination By Badi H. Baltagi; Georges Bresson; Anoop Chaturvedi; Guy Lacroix
  6. Constructing Metropolis-Hastings proposals using damped BFGS updates By Johan Dahlin; Adrian Wills; Brett Ninness
  7. Cross-fitting and fast remainder rates for semiparametric estimation By Whitney K. Newey; James M. Robins
  8. Robust Synthetic Control By Muhammad Jehangir Amjad; Devavrat Shah; Dennis Shen
  9. A nonparametric copula approach to conditional Value-at-Risk By Gery Geenens; Richard Dunn
  10. Estimation Considerations in Contextual Bandits By Maria Dimakopoulou; Susan Athey; Guido Imbens
  11. Statistical Inference for Independent Component Analysis: Application to Structural VAR Models By Christian Gouriéroux; Alain Monfort; Jean-Paul Renne
  12. Inference under covariate-adaptive randomization with multiple treatments By Federico A. Bugni; Ivan A. Canay; Azeem M. Shaikh
  13. Estimating Fixed Effects: Perfect Prediction and Bias in Binary Response Panel Models, with an Application to the Hospital Readmissions Reduction Program By Kunz, Johannes S.; Staub, Kevin E.; Winkelmann, Rainer
  14. The Estimation of Network Formation Games with Positive Spillovers By Vincent Boucher
  15. Information theoretic approach to high dimensional multiplicative models: Stochastic discount factor and treatment effect By Taisuke Otsu; Chen Qiu
  16. Binarization for panel models with fixed effects By Irene Botosaru; Chris Muris
  17. Improved asymptotic analysis of Gaussian QML estimators in spatial models By Jakub Olejnik; Alicja Olejnik
  18. Bayesian deconvolution: an R vinaigrette By Roger Koenker
  19. Quantile graphical models: prediction and conditional independence with applications to systemic risk By Alexandre Belloni; Mingli Chen; Victor Chernozhukov
  20. Non-asymptotic inference in instrumental variables estimation By Joel L. Horowitz
  21. Asymptotic Properties of Conditional Least-squares Estimators for Array Time Series By Guy Melard; Rajae R. Azrak
  22. Inference in instrumental variables models with heteroskedasticity and many instruments By Federico Crudu; Giovanni Mellace; Zsolt Sandor
  23. Identification and Estimation in Non-Fundamental Structural VARMA Models By Christian Gouriéroux; Alain Monfort; Jean-Paul Renne
  24. Exact computation of GMM estimators for instrumental variable quantile regression models By Le-Yu Chen; Sokbae Lee
  25. Best subset binary prediction By Le-Yu Chen; Sokbae Lee
  26. Identification of Counterfactuals in Dynamic Discrete Choice Models By Kalouptsidi, Myrto; Scott, Paul; Souza-Rodrigues, Eduardo
  27. Quantile regression 40 years on By Roger Koenker

  1. By: Dominik Bertsche (University of Konstanz, Department of Economics, Box 129, 78457 Konstanz, Germany); Robin Braun (University of Konstanz, Graduate School of Decision Science, Department of Economics, Box 129, 78457 Konstanz, Germany)
    Abstract: In Structural Vector Autoregressive (SVAR) models, heteroskedasticity can be exploited to identify structural parameters statistically. In this paper, we propose to capture time variation in the second moment of structural shocks by a stochastic volatility (SV) model, assuming that their log variances follow latent AR(1) processes. Estimation is performed by Gaussian Maximum Likelihood and an efficient Expectation Maximization algorithm is developed for that purpose. Since the smoothing distributions required in the algorithm are intractable, we propose to approximate them either by Gaussian distributions or with the help of Markov Chain Monte Carlo (MCMC) methods. We provide simulation evidence that the SV-SVAR model estimates the structural parameters well, even under model misspecification. We use the proposed model to study the interdependence between monetary policy and the stock market. Based on monthly US data, we find that the SV specification provides the best fit and is favored by conventional information criteria when compared with other models of heteroskedasticity, including GARCH, Markov Switching, and Smooth Transition models. Since the structural shocks identified by heteroskedasticity have no economic interpretation, we test conventional exclusion restrictions as well as Proxy SVAR restrictions, which are overidentifying in the heteroskedastic model.
    Keywords: Structural Vector Autoregression (SVAR), Identification via heteroskedasticity, Stochastic Volatility, Proxy SVAR
    JEL: C32
    Date: 2017–12–21
    URL: http://d.repec.org/n?u=RePEc:knz:dpteco:1711&r=ecm
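    A minimal Python sketch of the stochastic-volatility identification idea above, for orientation only: the parameter values (phi, sigma_eta, B) are illustrative assumptions, not the paper's estimates, and the subsample-covariance check stands in for the authors' ML/EM machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 500, 2
phi, sigma_eta = 0.95, 0.2          # assumed AR(1) persistence / sd of log-variance shocks

# Latent AR(1) log variances of the structural shocks
log_var = np.zeros((T, k))
for t in range(1, T):
    log_var[t] = phi * log_var[t - 1] + sigma_eta * rng.standard_normal(k)

eps = np.exp(0.5 * log_var) * rng.standard_normal((T, k))   # structural shocks
B = np.array([[1.0, 0.0], [0.5, 1.0]])                      # illustrative impact matrix
u = eps @ B.T                                               # reduced-form errors

# Identification via heteroskedasticity: because the relative variances of the
# shocks move over time, covariance matrices from different subsamples differ,
# which pins down B (up to sign and scale) when the eigenvalues below are distinct.
S1, S2 = np.cov(u[: T // 2].T), np.cov(u[T // 2 :].T)
print(np.linalg.eigvals(np.linalg.solve(S1, S2)))
```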
  2. By: Georgiev, I; Harvey, DI; Leybourne, SJ; Taylor, AMR
    Abstract: We examine how the familiar spurious regression problem can manifest itself in the context of recently proposed predictability tests. For these tests to provide asymptotically valid inference, account has to be taken of the degree of persistence of the putative predictors. Failure to do so can lead to spurious over-rejections of the no predictability null hypothesis. A number of methods have been developed to achieve this. However, these approaches all make an underlying assumption that any predictability in the variable of interest is purely attributable to the predictors under test, rather than to any unobserved persistent latent variables, themselves uncorrelated with the predictors being tested. We show that where this assumption is violated, something that could very plausibly happen in practice, sizeable (spurious) rejections of the null can occur in cases where the variables under test are not valid predictors. In response, we propose a screening test for predictive regression invalidity based on a stationarity testing approach. In order to allow for an unknown degree of persistence in the putative predictors, and for both conditional and unconditional heteroskedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We establish the asymptotic validity of this bootstrap test, which entails establishing a conditional invariance principle along with its bootstrap counterpart, both of which appear to be new to the literature and are likely to have important applications beyond the present context. We also show how our bootstrap test can be used, in conjunction with extant predictability tests, to deliver a two-step feasible procedure. Monte Carlo simulations suggest that our proposed bootstrap methods work well in finite samples. An illustration employing U.S. stock returns data demonstrates the practical usefulness of our procedures.
    Keywords: Predictive regression; causality; persistence; spurious regression; stationarity test; fixed regressor wild bootstrap; conditional distribution.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:21006&r=ecm
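    A hedged sketch of the fixed regressor wild bootstrap device used above, with a KPSS-type statistic standing in for the paper's stationarity test; the Rademacher weights, `stat_fn` interface, and statistic are illustrative choices, not the authors' implementation.

```python
import numpy as np

def wild_bootstrap_pvalue(y, X, stat_fn, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    stat = stat_fn(resid)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        # Regressors held fixed; residuals resampled with Rademacher weights,
        # which preserves (conditional) heteroskedasticity patterns.
        y_b = X @ beta + resid * rng.choice([-1.0, 1.0], size=len(y))
        r_b = y_b - X @ np.linalg.lstsq(X, y_b, rcond=None)[0]
        boot[b] = stat_fn(r_b)
    return (1 + np.sum(boot >= stat)) / (1 + n_boot)

# Example statistic: a KPSS-type quantity built from partial sums of residuals.
kpss = lambda e: np.sum(np.cumsum(e) ** 2) / (len(e) ** 2 * e.var())
```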
  3. By: Dongwoo Kim (Institute for Fiscal Studies and UCL); Daniel Wilhelm (Institute for Fiscal Studies and cemmap and UCL)
    Abstract: This paper proposes a powerful alternative to the t-test in linear regressions when a regressor is mismeasured. We assume there is a second contaminated measurement of the regressor of interest. We allow the two measurement errors to be nonclassical in the sense that they may both be correlated with the true regressor, they may be correlated with each other, and we do not require any location normalizations on the measurement errors. We propose a new maximal t-statistic that is formed from the regression of the outcome onto a maximally weighted linear combination of the two measurements. Critical values of the test are easily computed via a multiplier bootstrap. In simulations, we show that this new test can be significantly more powerful than t-statistics based on OLS or IV estimates. Finally, we apply our test to the study of returns to education based on twins data from the U.S.
    Keywords: linear regression, adaptive test, power of test, maximal combination of measurements, repeated measurements, multiplier bootstrap
    Date: 2017–12–11
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:57/17&r=ecm
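    A small sketch of the maximal t-statistic described above, assuming a simple grid search over the weight on the two measurements (the paper's exact maximization and the multiplier-bootstrap critical values are omitted):

```python
import numpy as np

def max_t_stat(y, x1, x2, n_grid=201):
    """Largest |t| on the slope from regressing y on w*x1 + (1-w)*x2, over w in [0, 1]."""
    best = 0.0
    for w in np.linspace(0.0, 1.0, n_grid):
        x = w * x1 + (1 - w) * x2
        X = np.column_stack([np.ones_like(x), x])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ beta
        s2 = e @ e / (len(y) - 2)                       # homoskedastic OLS variance
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        best = max(best, abs(beta[1] / se))
    return best
```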
  4. By: Yong Li; Xiaobin Liu; Jun Yu; Tao Zeng
    Abstract: In this paper, a new and convenient $\chi^2$ Wald test based on MCMC outputs is proposed for hypothesis testing. The new statistic can be interpreted as an MCMC version of the Wald test and has several important advantages that make it very convenient in practical applications. First, it is well-defined under improper prior distributions and avoids the Jeffreys-Lindley paradox. Second, its asymptotic distribution can be shown to follow the $\chi^2$ distribution, so that threshold values can be easily calibrated from this distribution. Third, its statistical error can be derived using the Markov chain Monte Carlo (MCMC) approach. Fourth, and most importantly, it is based only on random samples drawn from the posterior distribution. Hence, it is a by-product of the posterior output and very easy to compute. In addition, when prior information is available, finite sample theory is derived for the proposed test statistic. Finally, the usefulness of the test is illustrated with several applications to latent variable models widely used in economics and finance.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.00973&r=ecm
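    A minimal reading of the abstract in code, assuming the statistic is a quadratic form in the posterior mean and covariance computed from the draws (this is an illustrative interpretation, not the authors' exact construction):

```python
import numpy as np
from scipy import stats

def mcmc_wald(theta_draws, theta0):
    """Wald-type test of H0: theta = theta0 built only from MCMC output (n_draws x p)."""
    theta_bar = theta_draws.mean(axis=0)                  # posterior mean
    V = np.atleast_2d(np.cov(theta_draws, rowvar=False))  # posterior covariance
    d = theta_bar - np.asarray(theta0)
    W = float(d @ np.linalg.solve(V, d))
    p = theta_draws.shape[1]
    return W, 1 - stats.chi2.cdf(W, df=p)                 # chi-squared(p) calibration
```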
  5. By: Badi H. Baltagi; Georges Bresson; Anoop Chaturvedi; Guy Lacroix
    Abstract: The paper develops a general Bayesian framework for robust linear static panel data models using e-contamination. A two-step approach is employed to derive the conditional type-II maximum likelihood (ML-II) posterior distribution of the coefficients and individual effects. The ML-II posterior densities are weighted averages of the Bayes estimator under a base prior and the data-dependent empirical Bayes estimator. Two-stage and three-stage hierarchy estimators are developed and their finite sample performance is investigated through a series of Monte Carlo experiments. These include standard random effects as well as Mundlak-type, Chamberlain-type and Hausman-Taylor-type models. The simulation results underscore the relatively good performance of the three-stage hierarchy estimator. Within a single theoretical framework, our Bayesian approach encompasses a variety of specifications, while conventional methods require separate estimators for each case.
    Keywords: e-contamination, hyper g-priors, type-II maximum likelihood posterior density, panel data, robust Bayesian estimator, three-stage hierarchy
    JEL: C11 C23 C26
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:lvl:crrecr:1706&r=ecm
  6. By: Johan Dahlin; Adrian Wills; Brett Ninness
    Abstract: This paper considers the problem of computing Bayesian estimates of system parameters, and of functions of them, on the basis of observed system performance data. This issue has previously been studied using stochastic simulation approaches built on the popular Metropolis-Hastings (MH) algorithm, and that work identified a recognised difficulty: tuning the proposal distribution so that the MH method delivers realisations with sufficient mixing for efficient convergence. This paper proposes and empirically examines a method of tuning the proposal using ideas borrowed from the numerical optimisation literature on efficient computation of Hessians, so that gradient and curvature information about the target posterior can be incorporated in the proposal.
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1801.01243&r=ecm
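    A sketch of a curvature-informed MH step in the spirit of the paper: the Gaussian proposal is centred on a Newton-flavoured move and scaled by a positive-definite inverse-Hessian approximation `H_inv`. Building `H_inv` via damped BFGS updates is the paper's contribution; here it is simply an input, and `step` is an assumed tuning constant.

```python
import numpy as np

def mh_step(theta, log_post, grad, H_inv, rng, step=1.0):
    L = np.linalg.cholesky(step * H_inv)

    def log_q(a, b):
        # log density (up to a constant) of proposing a when the chain is at b
        m = b + 0.5 * step * H_inv @ grad(b)
        r = np.linalg.solve(L, a - m)
        return -0.5 * r @ r

    mean = theta + 0.5 * step * H_inv @ grad(theta)   # gradient/curvature drift
    prop = mean + L @ rng.standard_normal(theta.size)
    log_alpha = (log_post(prop) - log_post(theta)
                 + log_q(theta, prop) - log_q(prop, theta))
    return prop if np.log(rng.uniform()) < log_alpha else theta
```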
  7. By: Whitney K. Newey (Institute for Fiscal Studies and MIT); James M. Robins (Institute for Fiscal Studies)
    Abstract: There are many interesting and widely used estimators of a functional with finite semi-parametric variance bound that depend on nonparametric estimators of nuisance functions. We use cross-fitting to construct such estimators with fast remainder rates. We give cross-fit doubly robust estimators that use separate subsamples to estimate different nuisance functions. We show that a cross-fit doubly robust spline regression estimator of the expected conditional covariance is semiparametric efficient under minimal conditions. Corresponding estimators of other average linear functionals of a conditional expectation are shown to have the fastest known remainder rates under certain smoothness conditions. The cross-fit plug-in estimator shares some of these properties but has a remainder term that is larger than that of the cross-fit doubly robust estimator. As specific examples we consider the expected conditional covariance, the mean with randomly missing data, and a weighted average derivative.
    Date: 2017–10–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:41/17&r=ecm
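    A minimal cross-fitting sketch for the expected conditional covariance E[(D - E[D|Z])(Y - E[Y|Z])], one of the paper's leading examples: nuisance regressions are fit on the other folds and evaluated on the held-out fold. The nearest-neighbour learner and fold count are arbitrary stand-ins (the paper studies splines); `Z` is assumed to be a 2-D array.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsRegressor

def crossfit_expected_cond_cov(Z, D, Y, n_folds=5, seed=0):
    psi = np.empty(len(Y))
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(Z):
        mD = KNeighborsRegressor(25).fit(Z[train], D[train])   # E[D|Z] fit off-fold
        mY = KNeighborsRegressor(25).fit(Z[train], Y[train])   # E[Y|Z] fit off-fold
        psi[test] = (D[test] - mD.predict(Z[test])) * (Y[test] - mY.predict(Z[test]))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(Y))       # estimate, std. error
```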
  8. By: Muhammad Jehangir Amjad; Devavrat Shah; Dennis Shen
    Abstract: We present a robust generalization of the synthetic control method for comparative case studies. Like the classical method, we present an algorithm to estimate the unobservable counterfactual of a treatment unit. A distinguishing feature of our algorithm is that of de-noising the data matrix via singular value thresholding, which renders our approach robust in multiple facets: it automatically identifies a good subset of donors, overcomes the challenges of missing data, and continues to work well in settings where covariate information may not be provided. To begin, we establish the condition under which the fundamental assumption in synthetic control-like approaches holds, i.e. when the linear relationship between the treatment unit and the donor pool prevails in both the pre- and post-intervention periods. We provide the first finite sample analysis for a broader class of models, the Latent Variable Model, in contrast to Factor Models previously considered in the literature. Further, we show that our de-noising procedure accurately imputes missing entries, producing a consistent estimator of the underlying signal matrix provided $p = \Omega( T^{-1 + \zeta})$ for some $\zeta > 0$; here, $p$ is the fraction of observed data and $T$ is the time interval of interest. Under the same setting, we prove that the mean-squared-error (MSE) in our prediction estimation scales as $O(\sigma^2/p + 1/\sqrt{T})$, where $\sigma^2$ is the noise variance. Using a data aggregation method, we show that the MSE can be made as small as $O(T^{-1/2+\gamma})$ for any $\gamma \in (0, 1/2)$, leading to a consistent estimator. We also introduce a Bayesian framework to quantify the model uncertainty through posterior probabilities. Our experiments, using both real-world and synthetic datasets, demonstrate that our robust generalization yields an improvement over the classical synthetic control method.
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1711.06940&r=ecm
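    A compact sketch of the de-noise-then-regress idea at the heart of the method: hard-threshold the singular values of the pre-intervention donor matrix, fit the treated unit's pre-period path on the de-noised donors, and extrapolate. The rank cutoff `n_sv` is an assumed tuning choice; the paper's threshold selection, missing-data handling, and Bayesian extensions are omitted.

```python
import numpy as np

def robust_synth(donors_pre, treated_pre, donors_post, n_sv=3):
    # donors_pre: T0 x N, treated_pre: length T0, donors_post: T1 x N
    U, s, Vt = np.linalg.svd(donors_pre, full_matrices=False)
    s[n_sv:] = 0.0                                  # singular value thresholding
    M_hat = U @ np.diag(s) @ Vt                     # de-noised donor matrix
    w = np.linalg.lstsq(M_hat, treated_pre, rcond=None)[0]
    return donors_post @ w                          # counterfactual post-period path
```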
  9. By: Gery Geenens; Richard Dunn
    Abstract: Value-at-Risk and its conditional allegory, which takes into account the available information about the economic environment, form the centrepiece of the Basel framework for the evaluation of market risk in the banking sector. In this paper, a new nonparametric framework for estimating this conditional Value-at-Risk is presented. A nonparametric approach is particularly pertinent as the traditionally used parametric distributions have been shown to be insufficiently robust and flexible in most of the equity-return data sets observed in practice. The method extracts the quantile of the conditional distribution of interest, whose estimation is based on a novel estimator of the density of the copula describing the dynamic dependence observed in the series of returns. Real-world back-testing analyses demonstrate the potential of the approach, whose performance may be superior to its industry counterparts.
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1712.05527&r=ecm
  10. By: Maria Dimakopoulou; Susan Athey; Guido Imbens
    Abstract: Contextual bandit algorithms seek to learn a personalized treatment assignment policy, balancing exploration against exploitation. Although a number of algorithms have been proposed, there is little guidance available for applied researchers to select among the various approaches. Motivated by the econometrics and statistics literatures on causal effects estimation, we study a new consideration in the exploration vs. exploitation trade-off: the way exploration is conducted in the present may contribute to the bias and variance of potential outcome model estimation in subsequent stages of learning. We leverage parametric and non-parametric statistical estimation methods and causal effect estimation methods in order to propose new contextual bandit designs. Through a variety of simulations, we show how alternative design choices impact learning performance and provide insights on why we observe these effects.
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1711.07077&r=ecm
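    A toy epsilon-greedy loop of the kind the paper studies, assuming one ridge outcome model per arm; all names and tuning values are illustrative. The point raised in the abstract is that how exploration is done here shapes the bias and variance of the outcome models later fit to the logged data.

```python
import numpy as np
from sklearn.linear_model import Ridge

def run_bandit(contexts, reward_fn, n_arms, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X, A, R = [], [], []
    models = [None] * n_arms
    for x in contexts:                        # contexts: iterable of 1-D arrays
        if rng.uniform() < eps or any(m is None for m in models):
            a = int(rng.integers(n_arms))     # explore
        else:                                 # exploit current outcome models
            a = int(np.argmax([m.predict(x[None, :])[0] for m in models]))
        r = reward_fn(x, a)
        X.append(x); A.append(a); R.append(r)
        idx = [i for i, ai in enumerate(A) if ai == a]
        models[a] = Ridge(alpha=1.0).fit(np.asarray(X)[idx], np.asarray(R)[idx])
    return np.asarray(X), np.asarray(A), np.asarray(R)
```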
  11. By: Christian Gouriéroux (CREST; University of Toronto); Alain Monfort (CREST); Jean-Paul Renne (University of Lausanne)
    Abstract: The well-known problem of non-identifiability of structural VAR models disappears if the structural shocks are independent and if at most one of them is Gaussian. In that case, the relevant estimation technique is the Independent Component Analysis (ICA). Since the introduction of ICA by Comon (1994), various semi-parametric estimation methods have been proposed for "orthogonalizing" the error terms. These methods include pseudo maximum likelihood (PML) approaches and recursive PML. The aim of our paper is to derive the asymptotic properties of the PML approaches, in particular to study their consistency. We conduct Monte Carlo studies exploring the relative performances of these methods. Finally, an application based on real data shows that structural VAR models can be estimated without additional identification restrictions in the non-Gaussian case and that the usual restrictions can be tested.
    Keywords: Independent Component Analysis; Pseudo Maximum Likelihood; Identification; Cayley Transform; Structural Shocks; Structural VAR; Impulse Response Functions
    JEL: C14 C32
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-09&r=ecm
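    A minimal sketch of ICA-based SVAR estimation as described above: fit a reduced-form VAR, then unmix its residuals with FastICA (a generic ICA routine standing in for the paper's PML estimators). Sign, permutation, and scale of the recovered shocks remain to be fixed by convention.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from sklearn.decomposition import FastICA

def ica_svar_shocks(data, lags=2, seed=0):
    res = VAR(data).fit(lags)                     # reduced-form VAR
    u = np.asarray(res.resid)                     # reduced-form errors, (T - lags) x k
    ica = FastICA(n_components=u.shape[1], whiten="unit-variance", random_state=seed)
    eps = ica.fit_transform(u)                    # estimated independent shocks
    B_hat = ica.mixing_                           # u_t ~ B_hat @ eps_t
    return eps, B_hat
```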
  12. By: Federico A. Bugni (Institute for Fiscal Studies and Duke University); Ivan A. Canay (Institute for Fiscal Studies and Northwestern University); Azeem M. Shaikh (Institute for Fiscal Studies and University of Chicago)
    Abstract: This paper studies inference in randomized controlled trials with covariate-adaptive randomization when there are multiple treatments. More specifically, we study in this setting inference about the average effect of one or more treatments relative to other treatments or a control. As in Bugni et al. (2017), covariate-adaptive randomization refers to randomization schemes that first stratify according to baseline covariates and then assign treatment status so as to achieve "balance" within each stratum. In contrast to Bugni et al. (2017), however, we allow for the proportion of units being assigned to each of the treatments to vary across strata. We first study the properties of estimators derived from a "fully saturated" linear regression, i.e., a linear regression of the outcome on all interactions between indicators for each of the treatments and indicators for each of the strata. We show that tests based on these estimators using the usual heteroskedasticity-consistent estimator of the asymptotic variance are invalid in the sense that they may have limiting rejection probability under the null hypothesis strictly greater than the nominal level; on the other hand, tests based on these estimators and suitable estimators of the asymptotic variance that we provide are exact in the sense that they have limiting rejection probability under the null hypothesis equal to the nominal level. For the special case in which the target proportion of units being assigned to each of the treatments does not vary across strata, we additionally consider tests based on estimators derived from a linear regression with "strata fixed effects," i.e., a linear regression of the outcome on indicators for each of the treatments and indicators for each of the strata. We show that tests based on these estimators using the usual heteroskedasticity-consistent estimator of the asymptotic variance are conservative in the sense that they have limiting rejection probability under the null hypothesis no greater than and typically strictly less than the nominal level, but tests based on these estimators and suitable estimators of the asymptotic variance that we provide are exact, thereby generalizing results in Bugni et al. (2017) for the case of a single treatment to multiple treatments. A simulation study illustrates the practical relevance of our theoretical results.
    Keywords: Covariate-adaptive randomization, multiple treatments, stratified block randomization, Efron's biased-coin design, treatment assignment, randomized controlled trial, strata fixed effects, saturated regression
    JEL: C12 C14
    Date: 2017–08–02
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:34/17&r=ecm
  13. By: Kunz, Johannes S. (Monash University); Staub, Kevin E. (University of Melbourne); Winkelmann, Rainer (University of Zurich)
    Abstract: The maximum likelihood estimator for the regression coefficients, β, in a panel binary response model with fixed effects can be severely biased if N is large and T is small, a consequence of the incidental parameters problem. This has led to the development of conditional maximum likelihood estimators and, more recently, to estimators that remove the $O(T^{-1})$ bias in $\hat{\beta}$. We add to this literature in two important ways. First, we focus on estimation of the fixed effects proper, as these have become increasingly important in applied work. Second, we build on a bias-reduction approach originally developed by Kosmidis and Firth (2009) for cross-section data, and show that in contrast to other proposals, the new estimator ensures finiteness of the fixed effects even in the absence of within-unit variation in the outcome. Results from a simulation study document favourable small sample properties. In an application to hospital data on patient readmission rates under the 2010 Affordable Care Act, we find that hospital fixed effects are strongly correlated across different treatment categories and on average higher for privately owned hospitals.
    Keywords: perfect prediction, bias reduction, penalised likelihood, logit, probit, Affordable Care Act
    JEL: C23 C25 I18
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp11182&r=ecm
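    A hedged cross-section sketch of the Kosmidis-Firth bias-reduction idea the paper builds on: iterate the modified logit score that adds h_i(1/2 - p_i) to each residual, where the h_i are hat-matrix leverages. This keeps estimates finite even under perfect prediction; it is not the authors' panel estimator. `X` is assumed to include any intercept or fixed-effect dummies.

```python
import numpy as np

def firth_logit(X, y, n_iter=100, tol=1e-8):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        XtWX = X.T @ (W[:, None] * X)
        H = (W[:, None] * X) @ np.linalg.solve(XtWX, X.T)
        h = np.diag(H)                              # leverages
        U = X.T @ (y - p + h * (0.5 - p))           # Firth-modified score
        step = np.linalg.solve(XtWX, U)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```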
  14. By: Vincent Boucher
    Abstract: I present a behavioural model of network formation with positive network externalities in which individuals have preferences for being part of a clique. The behavioural model leads to an associated supermodular (Topkis, 1979) normal-form game. I show that the behavioural model converges to the greatest Nash equilibrium of the associated normal-form game. I propose an approximate Bayesian computation (ABC) framework, using original summary statistics, to make inferences about individuals' preferences, and provide an illustration using data on high school friendships.
    Keywords: Network formation, Supermodular Games, Approximate Bayesian Computation
    JEL: D85 C11 C15 C72
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:lvl:crrecr:1710&r=ecm
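    A generic ABC rejection sketch of the kind of inference proposed above; the paper's original network summary statistics are its contribution, so here `prior_sampler`, `simulate`, and `summaries` are user-supplied placeholders.

```python
import numpy as np

def abc_rejection(obs_summary, prior_sampler, simulate, summaries,
                  n_draws=10000, quantile=0.01, seed=0):
    rng = np.random.default_rng(seed)
    draws, dists = [], []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        d = np.linalg.norm(summaries(simulate(theta, rng)) - obs_summary)
        draws.append(theta); dists.append(d)
    cutoff = np.quantile(dists, quantile)        # keep only the closest simulations
    return np.array([t for t, d in zip(draws, dists) if d <= cutoff])
```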
  15. By: Taisuke Otsu; Chen Qiu
    Abstract: This paper is concerned with estimation of functionals of a latent weight function that satisfies possibly high dimensional multiplicative moment conditions. Main examples are missing data problems, treatment effects, and functionals of the stochastic discount factor in asset pricing. We propose to estimate the latent weight function by an information theoretic approach combined with the l1-penalization technique to deal with high dimensional moment conditions under sparsity. We derive asymptotic properties of the proposed estimator, and illustrate the proposed method with a theoretical example on treatment effect analysis and an empirical example on the stochastic discount factor.
    Keywords: Stochastic discount factor, Treatment effect, Information theory, High dimension
    JEL: C12 C14
    Date: 2018–01
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:595&r=ecm
  16. By: Irene Botosaru (Institute for Fiscal Studies); Chris Muris (Institute for Fiscal Studies and Simon Fraser University)
    Abstract: In nonlinear panel models with fixed effects and fixed-T, the incidental parameter problem poses identification difficulties for structural parameters and partial effects. Existing solutions are model-specific, likelihood-based, impose time homogeneity, or restrict the distribution of unobserved heterogeneity. We provide new identification results for the large class of Fixed Effects Linear Transformation (FELT) models with unknown, time-varying, weakly monotone transformation functions. Our results accommodate continuous and discrete outcomes and covariates, require only two time periods and no parametric distributional assumptions. First, we provide a systematic solution to the incidental parameter problem in FELT via binarization, which transforms FELT into many binary choice models. Second, we identify the distribution of counterfactual outcomes and a menu of time-varying partial effects. Third, we obtain new results for nonlinear difference-in-differences with discrete and censored outcomes, and for FELT with random coefficients. Finally, we propose rank- and likelihood-based estimators that achieve the $\sqrt{n}$ rate of convergence.
    JEL: C14 C23 C41
    Date: 2017–06–20
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:31/17&r=ecm
  17. By: Jakub Olejnik (Department of Mathematics and Computer Science, University of Lodz); Alicja Olejnik (Faculty of Economics and Sociology, University of Lodz)
    Abstract: This paper presents a fundamentally improved statement on the asymptotic behaviour of the well-known Gaussian QML estimator of the parameters of a high-order mixed regressive/autoregressive spatial model. We generalize the approach previously known in the econometric literature by considerably weakening the assumptions on the spatial weight matrix, the distribution of the residuals, and the parameter space for the spatial autoregressive parameter. As an example application of our new asymptotic analysis, we also give a statement on the large-sample behaviour of a general fixed effects design.
    Keywords: spatial autoregression, quasi-maximum likelihood estimation, high-order SAR model, asymptotic analysis, fixed effects model
    JEL: C21 C23 C51
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:ann:wpaper:9/2017&r=ecm
  18. By: Roger Koenker (Institute for Fiscal Studies and University of Illinois)
    Abstract: Nonparametric maximum likelihood estimation of general mixture models pioneered by the work of Kiefer and Wolfowitz (1956) has been recently reformulated as an exponential family regression spline problem in Efron (2016). Both approaches yield a low dimensional estimate of the mixing distribution, g-modeling in the terminology of Efron. Some casual empiricism suggests that the Efron approach is preferable when the mixing distribution has a smooth density, while Kiefer-Wolfowitz is preferable for discrete mixing settings. In the classical Gaussian deconvolution problem both maximum likelihood methods appear to be preferable to (Fourier) kernel methods. Kernel smoothing of the Kiefer-Wolfowitz estimator appears to be competitive with the Efron procedure for smooth alternatives.
    Date: 2017–08–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:38/17&r=ecm
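    A small illustrative EM version of the Kiefer-Wolfowitz NPMLE for the Gaussian deconvolution problem discussed above: the mixing distribution is supported on a fixed grid and only its weights are updated. (Koenker's REBayes package solves the same problem as a convex program; this sketch is for orientation only.)

```python
import numpy as np
from scipy.stats import norm

def kw_npmle(y, grid, sigma=1.0, n_iter=500):
    A = norm.pdf(y[:, None], loc=grid[None, :], scale=sigma)  # n x m likelihood matrix
    w = np.full(len(grid), 1.0 / len(grid))
    for _ in range(n_iter):
        post = A * w                                  # E-step: posterior over grid points
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                         # M-step: reweight the grid
    return w                                          # estimated mixing weights
```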
  19. By: Alexandre Belloni (Institute for Fiscal Studies); Mingli Chen (Institute for Fiscal Studies and Warwick); Victor Chernozhukov (Institute for Fiscal Studies and MIT)
    Abstract: The understanding of co-movements, dependence, and influence between variables of interest is key in many applications. Broadly speaking such understanding can lead to better predictions and decision making in many settings. We propose Quantile Graphical Models (QGMs) to characterize prediction and conditional independence relationships within a set of random variables of interest. Although those models are of interest in a variety of applications, we draw our motivation from and contribute to the financial risk management literature. Importantly, the proposed framework is intended to be applied to non-Gaussian settings, which are ubiquitous in many real applications, and to handle a large number of variables and conditioning events. We propose two distinct QGMs. First, Conditional Independence Quantile Graphical Models (CIQGMs) characterize conditional independence at each quantile index, revealing the distributional dependence structure. Second, Prediction Quantile Graphical Models (PQGMs) characterize the best linear predictor under asymmetric loss functions. A key difference between those models is the (non-vanishing) misspecification between the best linear predictor and the conditional quantile functions. We also propose estimators for those QGMs. Due to high-dimensionality, the two distinct QGMs require different estimators. The estimators are based on high-dimensional techniques including (a continuum of) L1-penalized quantile regressions (and low-bias equations), which allow us to handle the potentially large number of variables. We build upon a recent literature to obtain new results for valid choice of the penalty parameters, rates of convergence, and confidence regions that are simultaneously valid. We illustrate how to use QGMs to quantify tail interdependence (instead of mean dependence) between a large set of variables, which is relevant in applications concerned with extreme events. We show that the associated tail risk network can be used for measuring systemic risk contributions. We also apply the framework to study international financial contagion and the impact of market downside movement on the dependence structure of assets' returns.
    Keywords: High-dimensional approximately sparse model, tail risk network, conditional independence, nonlinear correlation, penalized quantile regression, systemic risk, financial contagion, downside movement
    Date: 2017–12–05
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:54/17&r=ecm
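    A sketch of the building block behind PQGMs: an L1-penalized quantile regression of each variable on all the others at quantile index tau, with graph neighbourhoods read off the nonzero coefficients. The penalty level and coefficient cutoff are naive placeholders for the paper's data-driven choices and simultaneous-inference machinery.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

def quantile_graph(X, tau=0.95, alpha=0.05):
    p = X.shape[1]
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        fit = QuantileRegressor(quantile=tau, alpha=alpha).fit(X[:, others], X[:, j])
        adj[j, others] = np.abs(fit.coef_) > 1e-8    # tail-dependence neighbours of j
    return adj
```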
  20. By: Joel L. Horowitz (Institute for Fiscal Studies and Northwestern University)
    Abstract: This paper presents a simple non-asymptotic method for carrying out inference in IV models. The method is a non-Studentized version of the Anderson-Rubin test but is motivated and analyzed differently. In contrast to the conventional Anderson-Rubin test, the method proposed here does not require restrictive distributional assumptions, linearity of the estimated model, or simultaneous equations. Nor does it require knowledge of whether the instruments are strong or weak. It does not require testing or estimating the strength of the instruments. The method can be applied to quantile IV models that may be nonlinear and can be used to test a parametric IV model against a nonparametric alternative. The results presented here hold in finite samples, regardless of the strength of the instruments.
    Keywords: Weak instruments, normal approximation, finite-sample bounds
    JEL: C21 C26
    Date: 2017–10–30
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:46/17&r=ecm
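    For orientation, a textbook weak-instrument-robust Anderson-Rubin-type test of H0: beta = b0 (a Studentized version; the paper's statistic is non-Studentized and comes with genuinely finite-sample guarantees, which this sketch does not deliver):

```python
import numpy as np
from scipy import stats

def ar_test(y, X, Z, b0):
    e = y - X @ b0                                   # residuals under the null
    g = Z * e[:, None]                               # moment contributions z_i * e_i
    gbar = g.mean(axis=0)
    S = np.atleast_2d(np.cov(g, rowvar=False))
    n, k = Z.shape
    ar = n * gbar @ np.linalg.solve(S, gbar)
    return ar, 1 - stats.chi2.cdf(ar, df=k)          # valid whatever the strength of Z
```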
  21. By: Guy Melard; Rajae R. Azrak
    Abstract: The paper provides an alternative to the Klimko-Nelson theorems for conditional least-squares estimators in array time series, when the assumptions of almost sure convergence cannot be established. We assume neither stationarity nor even local stationarity. In addition, we provide sufficient conditions for two of the assumptions, and two theorems for the evaluation of the information matrix in array time series.
    Keywords: asymptotic properties; conditional least squares; array time series
    Date: 2017–12–31
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/263350&r=ecm
  22. By: Federico Crudu; Giovanni Mellace; Zsolt Sandor
    Abstract: This paper proposes a specification test for instrumental variable models that is robust to the presence of heteroskedasticity. The test can be seen as a generalization of the Anderson-Rubin test. Our approach is based on the jackknife principle. We are able to show that under the null the proposed statistic has a Gaussian limiting distribution. Moreover, a simulation study shows its competitive finite sample properties in terms of size and power.
    Keywords: Instrumental variables, heteroskedasticity, many instruments, jackknife, specification tests, overidentification tests
    JEL: C12 C13 C23
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:usi:wpaper:761&r=ecm
  23. By: Christian Gouriéroux (CREST; University of Toronto); Alain Monfort (CREST); Jean-Paul Renne (University of Lausanne)
    Abstract: The basic assumption of a structural VARMA model (SVARMA) is that it is driven by a white noise whose components are independent and can be interpreted as economic shocks, called "structural" shocks. When the errors are Gaussian, independence is equivalent to noncorrelation and these models face two kinds of identification issues. The first identification problem is "static" and is due to the fact that there is an infinite number of linear transformations of a given random vector making its components uncorrelated. The second identification problem is "dynamic" and is a consequence of the fact that the SVARMA process may have a non-invertible AR and/or MA matrix polynomial but, still, has the same second-order properties as a VARMA process in which both the AR and MA matrix polynomials are invertible (the fundamental representation). Moreover, the standard Box-Jenkins approach [Box and Jenkins (1970)] automatically estimates the fundamental representation and, therefore, may lead to misspecified Impulse Response Functions. The aim of this paper is to explain that these difficulties are mainly due to the Gaussian assumption, and that both identification challenges are solved in a non-Gaussian framework. We develop new simple parametric and semi-parametric estimation methods when there is non-fundamentalness in the moving average dynamics. The functioning and performances of these methods are illustrated by applications conducted on both simulated and real data.
    Keywords: Structural VARMA; Fundamental Representation; Identification; Shocks; Impulse Response Function; Incomplete Likelihood; Composite Likelihood; Economic Scenario Generators
    JEL: C01 C15 C32 E37
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2017-08&r=ecm
  24. By: Le-Yu Chen (Institute for Fiscal Studies and Academia Sinica); Sokbae Lee (Institute for Fiscal Studies and Columbia University and IFS)
    Abstract: We show that the generalized method of moments (GMM) estimation problem in instrumental variable quantile regression (IVQR) models can be equivalently formulated as a mixed integer quadratic programming problem. This enables exact computation of the GMM estimators for the IVQR models. We illustrate the usefulness of our algorithm via Monte Carlo experiments and an application to demand for fish.
    Keywords: generalized method of moments, instrumental variable, quantile regression, endogeneity, mixed integer optimization
    JEL: C21 C26 C61 C63
    Date: 2017–11–22
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:52/17&r=ecm
  25. By: Le-Yu Chen (Institute for Fiscal Studies and Academia Sinica); Sokbae Lee (Institute for Fiscal Studies and Columbia University and IFS)
    Abstract: We consider a variable selection problem for the prediction of binary outcomes. We study the best subset selection procedure by which the explanatory variables are chosen by maximizing Manski (1975, 1985)'s maximum score type objective function subject to a constraint on the maximal number of selected variables. We show that this procedure can be equivalently reformulated as solving a mixed integer optimization (MIO) problem, which enables computation of the exact or an approximate solution with a definite approximation error bound. In terms of theoretical results, we obtain non-asymptotic upper and lower risk bounds when the dimension of potential covariates is possibly much larger than the sample size. Our upper and lower risk bounds are minimax rate-optimal when the maximal number of selected variables is fixed and does not increase with the sample size. We illustrate the usefulness of the best subset binary prediction approach via Monte Carlo simulations and an empirical application to the work-trip transportation mode choice.
    Keywords: binary choice, maximum score estimation, best subset selection, ℓ0-constrained maximization, mixed integer optimization, minimax optimality, finite sample property
    JEL: C52 C53
    Date: 2017–11–22
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:50/17&r=ecm
  26. By: Kalouptsidi, Myrto; Scott, Paul; Souza-Rodrigues, Eduardo
    Abstract: Dynamic discrete choice models (DDC) are not identified nonparametrically. However, the non-identification of DDC models does not necessarily imply non-identification of counterfactuals of interest. Using a novel approach that can accommodate both nonparametric and restricted payoff functions, we provide necessary and sufficient conditions for the identification of counterfactual behavior and welfare for a broad class of counterfactuals. The conditions are simple to check and can be applied to virtually all counterfactuals in the DDC literature. To explore the robustness of counterfactual results to model restrictions in practice, we consider a numerical example of a monopolist's entry problem, as well as an empirical model of agricultural land use. In each case, we provide examples of both identified and non-identified counterfactuals of interest.
    Keywords: counterfactual; dynamic discrete choice; identification; welfare
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:12470&r=ecm
  27. By: Roger Koenker (Institute for Fiscal Studies and University of Illinois)
    Abstract: Since Quetelet's work in the 19th century social science has iconified "the average man", that hypothetical man without qualities who is comfortable with his head in the oven, and his feet in a bucket of ice. Conventional statistical methods, since Quetelet, have sought to estimate the effects of policy treatments for this average man. But such effects are often quite heterogeneous: medical treatments may improve life expectancy, but also impose serious short term risks; reducing class sizes may improve performance of good students, but not help weaker ones, or vice versa. Quantile regression methods can help to explore these heterogeneous effects. Some recent developments in quantile regression methods are surveyed below.
    Keywords: quantile regression, treatment effects, heterogeneity, causal inference
    Date: 2017–08–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:36/17&r=ecm
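    A minimal worked example of the heterogeneity the survey emphasizes, using simulated heteroskedastic data: the estimated slope differs across quantiles, which a mean regression would average away.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 1000)
y = 1 + 0.5 * x + (0.1 + 0.1 * x) * rng.standard_normal(1000)  # slope varies by quantile

X = sm.add_constant(x)
for tau in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(y, X).fit(q=tau)
    print(f"tau={tau}: slope = {fit.params[1]:.3f}")  # slope increases with tau
```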

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.