nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒07‒23
twenty-two papers chosen by
Sune Karlsson
Örebro universitet

  1. Modelling Efficiency Effects in a True Fixed Effects Stochastic Frontier By Paul, Satya; Shankar, Sriram
  2. Introducing shrinkage in heavy-tailed state space models to predict equity excess returns By Florian Huber; Gregor Kastner
  3. Minimizing Sensitivity to Model Misspecification By Stéphane Bonhomme; Martin Weidner
  4. The Fiction of Full BEKK: Pricing Fossil Fuels and Carbon Emissions By Chia-Lin Chang; Michael McAleer
  5. Diagnostic Tests for Homoskedasticity in Spatial Cross-Sectional or Panel Models By Baltagi, Badi; Pirotte, Alain; Yang, Zhenlin
  6. Inference on time-invariant variables using panel data: a pretest estimator By Jean-Bernard Chatelain; Kirsten Ralf
  7. A Semi-parametric Realized Joint Value-at-Risk and Expected Shortfall Regression Framework By Chao Wang; Richard Gerlach; Qian Chen
  8. Using penalized likelihood to select parameters in a random coefficients multinomial logit model By Joel L. Horowitz; Lars Nesheim
  9. Adaptive Bayesian Estimation of Mixed Discrete-Continuous Distributions under Smoothness and Sparsity By Andriy Norets; Justinas Pelenis
  10. Machine Learning Macroeconometrics: A Primer By Korobilis, Dimitris
  11. Measurement error and rank correlations By Toru Kitagawa; Martin Nybom; Jan Stuhler
  12. Fast and Efficient Computation of Directional Distance Estimators By Cinzia Daraio; Leopold Simar; Paul W. Wilson
  13. Variational Bayes inference in high-dimensional time-varying parameter models By Korobilis, Dimitris; Koop, Gary
  14. Inference on winners By Isaiah Andrews; Toru Kitagawa; Adam McCloskey
  15. Global estimation of finite mixture and misclassification models with an application to multiple equilibria By Yingyao Hu; Ruli Xiao
  16. State-Varying Factor Models of Large Dimensions By Markus Pelger; Ruoxuan Xiong
  17. Nonparametric forecasting of multivariate probability density functions By Dominique Guégan; Matteo Iacopini
  18. Trade creation and trade diversion of regional trade agreements revisited: A constrained panel pseudo-maximum likelihood approach By Michael Pfaffermayr
  19. Shift-Share Designs: Theory and Inference By Rodrigo Adão; Michal Kolesár; Eduardo Morales
  20. Testing continuity of a density via g-order statistics in the regression discontinuity design By Federico A. Bugni; Ivan A. Canay
  21. An adaptive test of stochastic monotonicity By Denis Chetverikov; Daniel Wilhelm; Dongwoo Kim
  22. The wild bootstrap with a "small" number of "large" clusters By Ivan A. Canay; Andres Santos; Azeem M. Shaikh

  1. By: Paul, Satya; Shankar, Sriram
    Abstract: This paper proposes a stochastic frontier panel data model that includes time-invariant unobserved heterogeneity along with efficiency effects. Following Paul and Shankar (2018), the efficiency effects are specified through a standard normal cumulative distribution function of exogenous variables, which ensures that the efficiency scores lie in the unit interval. This specification eschews the one-sided error term present in almost all existing inefficiency effects models. The model parameters can be estimated by non-linear least squares after removing the individual effects through the usual within transformation, or by the non-linear least squares dummy variable (NLLSDV) estimator. The efficiency scores are calculated directly once the model is estimated. An empirical illustration based on widely used panel data on Indian farmers is presented.
    Keywords: Fixed effects; Stochastic frontier; Technical efficiency; Standard normal cumulative distribution function; Non-linear least squares.
    JEL: C51 D24 Q12
    Date: 2018–06–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:87437&r=ecm
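The unit-interval device described in the abstract is easy to visualize: a standard normal CDF maps any index of exogenous variables into (0, 1). The sketch below is purely illustrative, with made-up data and coefficients (Z, delta), not the authors' estimator:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, k = 200, 3
Z = rng.normal(size=(n, k))         # hypothetical exogenous efficiency determinants
delta = np.array([0.5, -0.3, 0.2])  # hypothetical coefficients
scores = norm.cdf(Z @ delta)        # efficiency scores via the standard normal CDF
print(scores.min(), scores.max())   # strictly inside (0, 1) by construction
```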
  2. By: Florian Huber; Gregor Kastner
    Abstract: We forecast S&P 500 excess returns using a flexible econometric state space model with non-Gaussian features at several levels. Estimation and prediction are conducted using fully-fledged Bayesian techniques. More precisely, we control for overparameterization via novel global-local shrinkage priors on the state innovation variances as well as the time-invariant part of the state space model. The shrinkage priors are complemented by heavy tailed state innovations that cater for potential large swings in the latent states even if the amount of shrinkage introduced is high. Moreover, we allow for leptokurtic stochastic volatility in the observation equation. The empirical findings indicate that several variants of the proposed approach outperform typical competitors frequently used in the literature, both in terms of point and density forecasts. Furthermore, a simple trading exercise shows that our framework also fares well when used for investment decisions.
    Date: 2018–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1805.12217&r=ecm
  3. By: Stéphane Bonhomme; Martin Weidner
    Abstract: We propose a framework to improve the predictions based on an economic model, and the estimates of the model parameters, when the model may be misspecified. We rely on a local asymptotic approach where the degree of misspecification is indexed by the sample size. We derive formulas to construct estimators whose mean squared error is minimax in a neighborhood of the reference model, based on simple one-step adjustments. We construct confidence intervals that contain the true parameter under both correct specification and local misspecification. We calibrate the degree of misspecification using a model detection error approach, which allows us to perform systematic sensitivity analysis in both point-identified and partially-identified settings. To illustrate our approach we study panel data models where the distribution of individual effects may be misspecified and the number of time periods is small, and we revisit the structural evaluation of a conditional cash transfer program in Mexico.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1807.02161&r=ecm
  4. By: Chia-Lin Chang (Department of Applied Economics and Department of Finance National Chung Hsing University, Taiwan.); Michael McAleer (Department of Quantitative Finance National Tsing Hua University, Taiwan and Econometric Institute Erasmus School of Economics Erasmus University Rotterdam, The Netherlands and Department of Quantitative Economics Complutense University of Madrid, Spain And Institute of Advanced Sciences Yokohama National University, Japan.)
    Abstract: The purpose of the paper is to (i) show that univariate GARCH is not a special case of multivariate GARCH, specifically the Full BEKK model, except under parametric restrictions on the off-diagonal elements of the random coefficient autoregressive coefficient matrix that are not consistent with Full BEKK, and (ii) provide the regularity conditions that arise from the underlying random coefficient autoregressive process, for which the (quasi-) maximum likelihood estimates (QMLE) have valid asymptotic properties under the appropriate parametric restrictions. The paper provides a discussion of the stochastic processes that lead to the alternative specifications, regularity conditions, and asymptotic properties of the univariate and multivariate GARCH models. It is shown that the Full BEKK model, which in empirical practice is estimated almost exclusively compared with Diagonal BEKK (DBEKK), has no underlying stochastic process that leads to its specification, regularity conditions, or asymptotic properties, as compared with DBEKK. An empirical illustration shows the differences in the QMLE of the parameters of the conditional means and conditional variances for the univariate, DBEKK and Full BEKK specifications.
    Keywords: Random coefficient stochastic process; Off-diagonal parametric restrictions; Diagonal BEKK; Full BEKK; Regularity conditions; Asymptotic properties; Conditional volatility; Univariate and multivariate models; Fossil fuels and carbon emissions.
    JEL: C22 C32 C52 C58
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:1808&r=ecm
  5. By: Baltagi, Badi (Syracuse University); Pirotte, Alain (CRED, University Paris II Pantheon-Assas); Yang, Zhenlin (School of Economics, Singapore Management University)
    Abstract: We propose tests for homoskedasticity in spatial econometric models, based on joint or concentrated score functions and an Outer-Product-of-Martingale-Difference (OPMD) estimate of the variance of the joint or concentrated score functions. Versions of these tests robust against non-normality are also given. Asymptotic properties of the proposed tests are formally examined using a cross-section model and a panel model with fixed effects. Monte Carlo results show that the proposed tests based on the concentrated score function have good finite sample properties. Finally, the generality of the proposed approach in constructing tests for homoskedasticity is further demonstrated using a spatial dynamic panel data model with short panels.
    Keywords: Adjusted quasi-scores; Dynamics; Fixed effects; Heteroskedasticity; Non-normality; Martingale difference; Score tests; Short panels; Spatial effects.
    JEL: C12 C18 C21 C23
    Date: 2018–07–01
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2018_012&r=ecm
  6. By: Jean-Bernard Chatelain (PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique, PSE - Paris School of Economics); Kirsten Ralf (Ecole Supérieure du Commerce Extérieur - ESCE - International business school)
    Abstract: This paper proposes a new pretest estimator of panel data models including time-invariant variables based on the Mundlak-Krishnakumar estimator and an "unrestricted" Hausman-Taylor estimator. Furthermore, the paper evaluates the biases of currently used estimators: repeated between, ordinary least squares, two-stage restricted between, Oaxaca-Geisler estimator, fixed effect vector decomposition, and generalized least squares. Some of these may lead to erroneous conclusions regarding the statistical significance of the estimated parameter values of time-invariant variables, especially when time-invariant variables are correlated with the individual effects.
    Keywords: Time-invariant variables,panel data,time-series cross-sections,pretest estimator,Mundlak estimator,Hausman-Taylor estimator
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-01719835&r=ecm
  7. By: Chao Wang; Richard Gerlach; Qian Chen
    Abstract: A new realized joint Value-at-Risk (VaR) and expected shortfall (ES) regression framework is proposed, incorporating a measurement equation into the original joint VaR and ES regression model. The measurement equation models the contemporaneous dependence between the realized measures (e.g. Realized Variance and Realized Range) and the latent conditional quantile. Further, sub-sampling and scaling methods are applied to both the realized range and realized variance, to help deal with the inherent micro-structure noise and inefficiency. An adaptive Bayesian Markov chain Monte Carlo method is employed for estimation and forecasting, whose properties are assessed and compared with the maximum likelihood estimator through a simulation study. In a forecasting study, the proposed models are applied to 7 market indices and 2 individual assets and compared to a range of parametric, non-parametric and semi-parametric models, including GARCH, Realized-GARCH, CARE and the Taylor (2017) joint VaR and ES quantile regression models. One-day-ahead Value-at-Risk and Expected Shortfall forecasting results favor the proposed models, especially those incorporating the sub-sampled Realized Variance and the sub-sampled Realized Range.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1807.02422&r=ecm
  8. By: Joel L. Horowitz (Institute for Fiscal Studies and Northwestern University); Lars Nesheim (Institute for Fiscal Studies and cemmap and UCL)
    Abstract: The multinomial logit model with random coefficients is widely used in applied research. This paper is concerned with estimating a random coefficients logit model in which the distribution of each coefficient is characterized by finitely many parameters. Some of these parameters may be zero or close to zero in a sense that is defined. We call these parameters small. The paper gives conditions under which with probability approaching 1 as the sample size approaches infinity, penalized maximum likelihood estimation (PMLE) with the adaptive LASSO (AL) penalty function distinguishes correctly between large and small parameters in a random-coefficients logit model. If one or more parameters are small, then PMLE with the AL penalty function reduces the asymptotic mean-square estimation error of any continuously differentiable function of the model’s parameters, such as a market share, the value of travel time, or an elasticity. The paper describes a method for computing the PMLE of a random-coefficients logit model. It also presents the results of Monte Carlo experiments that illustrate the numerical performance of the PMLE. Finally, it presents the results of PMLE estimation of a random-coefficients logit model of choice among brands of butter and margarine in the British groceries market.
    Date: 2018–04–26
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:29/18&r=ecm
  9. By: Andriy Norets; Justinas Pelenis
    Abstract: We consider nonparametric estimation of a mixed discrete-continuous distribution under anisotropic smoothness conditions and possibly increasing number of support points for the discrete part of the distribution. For these settings, we derive lower bounds on the estimation rates in the total variation distance. Next, we consider a nonparametric mixture of normals model that uses continuous latent variables for the discrete part of the observations. We show that the posterior in this model contracts at rates that are equal to the derived lower bounds up to a log factor. Thus, Bayesian mixture of normals models can be used for optimal adaptive estimation of mixed discrete-continuous distributions.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1806.07484&r=ecm
  10. By: Korobilis, Dimitris
    Abstract: This Chapter reviews econometric methods that can be used in order to deal with the challenges of inference in high-dimensional empirical macro models with possibly 'more parameters than observations'. These methods broadly include machine learning algorithms for Big Data, but also more traditional estimation algorithms for data with a short span of observations relative to the number of explanatory variables. While building mainly on a univariate linear regression setting, I show how machine learning ideas can be generalized to classes of models that are interesting to applied macroeconomists, such as time-varying parameter models and vector autoregressions.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:22666&r=ecm
  11. By: Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London); Martin Nybom (Institute for Fiscal Studies); Jan Stuhler (Institute for Fiscal Studies)
    Abstract: This paper characterizes and proposes a method to correct for errors-in-variables biases in the estimation of rank correlation coefficients (Spearman's ρ and Kendall's τ). We first investigate a set of sufficient conditions under which measurement errors bias the sample rank correlations toward zero. We then provide a feasible nonparametric bias-corrected estimator based on the technique of small error variance approximation. We assess its performance in simulations and an empirical application, using rich Swedish data to estimate intergenerational rank correlations in income. The method performs well in both cases, lowering the mean squared error by 50-85 percent already in moderately sized samples (n = 1,000).
    Keywords: Errors-in-variables, Spearman's rank correlation, Kendall's tau, Small variance approximation, Intergenerational mobility.
    Date: 2018–04–12
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:28/18&r=ecm
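The attenuation phenomenon the paper corrects for can be reproduced in a few lines. This is a toy simulation of the bias itself, not the authors' bias-corrected estimator; the data-generating process is assumed purely for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
y = x + rng.normal(size=n)        # outcome related to the true x
x_noisy = x + rng.normal(size=n)  # x observed with classical measurement error

rho_true, _ = spearmanr(x, y)
rho_noisy, _ = spearmanr(x_noisy, y)
# measurement error attenuates the sample rank correlation toward zero
print(rho_true, rho_noisy)
```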
  12. By: Cinzia Daraio; Leopold Simar; Paul W. Wilson
    Abstract: Directional distances provide useful, flexible measures of technical efficiency of production units relative to the efficient frontier of the attainable set in input-output space. In addition, the additive nature of directional distances permits negative input or output quantities. The choice of the direction allows analysis of different strategies for the units attempting to reach the efficient frontier. Simar et al. (2012) and Simar and Vanhems (2012) develop asymptotic properties of full-envelopment, FDH and DEA estimators of directional distances, as well as robust order-m and order-α directional distance estimators. Extensions of these estimators to measures conditioned on environmental variables Z are also available (e.g., see Daraio and Simar, 2014). The resulting estimators have been shown to share the properties of their corresponding radial measures. However, to date the algorithms proposed for computing the directional distance estimates suffer from various numerical drawbacks (Daraio and Simar, 2014). In particular, for the order-m versions (conditional and unconditional), only approximations based on Monte Carlo methods have been suggested, involving additional computational burden. In this paper we propose a new, fast and efficient method to compute exact values of the directional distance estimates in all cases (full and partial frontier, unconditional or conditional on external factors), which overcomes all previous difficulties. This new method is illustrated on simulated and real data sets. Matlab code for computation is provided in an appendix.
    Keywords: directional distances, conditional efficiency, robust frontiers, environmental factors, nonparametric methods
    Date: 2018–07–13
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2018/21&r=ecm
  13. By: Korobilis, Dimitris; Koop, Gary
    Abstract: This paper proposes a mean field variational Bayes algorithm for efficient posterior and predictive inference in time-varying parameter models. Our approach involves: i) computationally trivial Kalman filter updates of regression coefficients, ii) a dynamic variable selection prior that removes irrelevant variables in each time period, and iii) a fast approximate state-space estimator of the regression volatility parameter. In an exercise involving simulated data we evaluate the new algorithm numerically and establish its computational advantages. Using macroeconomic data for the US we find that regression models that combine time-varying parameters with the information in many predictors have the potential to improve forecasts over a number of alternatives.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:esy:uefcwp:22665&r=ecm
  14. By: Isaiah Andrews (Institute for Fiscal Studies and Harvard University); Toru Kitagawa (Institute for Fiscal Studies and cemmap and University College London); Adam McCloskey (Institute for Fiscal Studies and Brown University)
    Abstract: Many questions in econometrics can be cast as inference on a parameter selected through optimization. For example, researchers may be interested in the effectiveness of the best policy found in a randomized trial, or the best-performing investment strategy based on historical data. Such settings give rise to a winner's curse, where conventional estimates are biased and conventional confidence intervals are unreliable. This paper develops optimal confidence sets and median-unbiased estimators that are valid conditional on the parameter selected and so overcome this winner's curse. If one requires validity only on average over target parameters that might have been selected, we develop hybrid procedures that combine conditional and projection confidence sets and offer further performance gains that are attractive relative to existing alternatives.
    Keywords: Winner's Curse, Selective Inference
    JEL: C12 C13
    Date: 2018–05–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:31/18&r=ecm
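The winner's curse described above is easy to simulate: when every candidate policy has a true effect of zero, the conventional estimate attached to the apparent "best" one is sharply biased upward. A minimal illustration with hypothetical numbers of policies and simulation draws:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_sim = 10, 10000
# K candidate policies, all with true effect 0, each estimated with N(0, 1) noise
estimates = rng.normal(size=(n_sim, K))
best = estimates.max(axis=1)  # conventional estimate reported for the selected "winner"
print(best.mean())            # clearly above the true effect of 0: the winner's curse
```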
  15. By: Yingyao Hu (Institute for Fiscal Studies and Johns Hopkins University); Ruli Xiao (Institute for Fiscal Studies and Indiana University)
    Abstract: We show that the identification results of finite mixture and misclassification models are equivalent in a widely used scenario, except for an extra ordering assumption. In the misclassification model, an ordering condition is imposed to pin down the precise values of the latent variable, which are also of interest to researchers and need to be identified. In contrast, the identification of finite mixture models is usually up to permutations of a latent index. This local identification is satisfactory because the latent index does not convey any economic meaning. However, global identification is important for estimation, especially when researchers use the bootstrap to estimate standard errors, which may be wrong without a global estimator. We provide a theoretical framework and Monte Carlo evidence to show that imposing an ordering condition to achieve a global estimator innocuously improves the estimation of finite mixture models. As a natural application, we show that games with multiple equilibria fit into our framework, and that the global estimator with ordering assumptions provides more reliable estimates.
    Keywords: Finite mixture, misclassification, global estimation, identification, bootstrap, multiple equilibria.
    Date: 2018–05–29
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:32/18&r=ecm
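The role of the ordering condition can be seen in a toy example: without it, bootstrap draws of mixture-component parameters may swap labels across draws, corrupting averages and standard errors. The draws below are fabricated for illustration; sorting within each draw imposes the kind of ordering the paper advocates:

```python
import numpy as np

# three hypothetical bootstrap draws of the two component means of a mixture;
# labels switch across draws, so naive averaging mixes the components
draws = np.array([[1.0, 3.0],
                  [3.1, 0.9],
                  [0.95, 3.05]])
naive_mean = draws.mean(axis=0)    # contaminated by label switching
ordered = np.sort(draws, axis=1)   # impose the ordering mu_1 < mu_2 in every draw
global_mean = ordered.mean(axis=0) # component labels now comparable across draws
print(naive_mean, global_mean)
```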
  16. By: Markus Pelger; Ruoxuan Xiong
    Abstract: This paper develops an inferential theory for state-varying factor models of large dimensions. Unlike constant factor models, loadings are general functions of some recurrent state process. We develop an estimator for the latent factors and state-varying loadings under a large cross-section and time dimension. Our estimator combines nonparametric methods with principal component analysis. We derive the rate of convergence and limiting normal distribution for the factors, loadings and common components. In addition, we develop a statistical test for a change in the factor structure in different states. Applying the estimator to U.S. Treasury security data, we find that the systematic factor structure differs in times of booms and recessions as well as in periods of high market volatility.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1807.02248&r=ecm
  17. By: Dominique Guégan (Université Paris1 Panthéon-Sorbonne, Centre d'Economie de la Sorbonne, LabEx ReFi and Ca' Foscari University of Venezia); Matteo Iacopini (Université Paris1 Panthéon-Sorbonne, Centre d'Economie de la Sorbonne, LabEx ReFi and Ca' Foscari University of Venezia)
    Abstract: The study of dependence between random variables is at the core of theoretical and applied statistics. Static and dynamic copula models are useful for describing the dependence structure, which is fully encoded in the copula probability density function. However, these models are not always able to describe the temporal change of dependence patterns, which is a key characteristic of financial data. We propose a novel nonparametric framework for modelling a time series of copula probability density functions, which allows the entire function to be forecast without post-processing procedures to guarantee positivity and unit integral. We exploit a suitable isometry that transfers the analysis to a subset of the space of square integrable functions, where we build on nonparametric functional data analysis techniques. The framework does not assume that the densities belong to any parametric family, and it can also be applied to general multivariate probability density functions with bounded or unbounded support. Finally, a noteworthy field of application pertains to the study of time-varying networks represented through vine copula models. We apply the proposed methodology to estimating and forecasting the time-varying dependence structure between the S&P500 and NASDAQ indices.
    Keywords: multivariate densities; functional PCA; nonparametric statistics; copula; functional time series; forecast; unbounded support
    JEL: C14 C53
    Date: 2018–03
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:18012&r=ecm
  18. By: Michael Pfaffermayr
    Abstract: For the estimation of structural gravity models using PPML with country-pair, exporter-time and importer-time effects, it proves useful to exploit the equilibrium restrictions imposed by the system of multilateral resistances. This yields an iterative projection-based PPML estimator that is unaffected by the incidental parameters problem. Further, in this setting it is straightforward to establish the asymptotic distribution of the structural parameters and that of counterfactual predictions. The present contribution applies the constrained panel PPML estimator to reconsider the trade creation and trade diversion effects of regional trade agreements. Results show significant trade creation effects of RTAs ranging between 8.7 and 21.7 percent in 2012, but also point to substantial trade diversion in the range of -14.4 to -5.8 percent. These counterfactual predictions account for adjustment in multilateral trade resistances. The rather large confidence intervals of counterfactual predictions appear to be an overlooked issue in the literature.
    Keywords: Constrained Panel Poisson Pseudo Maximum Likelihood Estimation, International Trade, Gravity Equation, Structural Estimation
    JEL: F10 F15 C13 C50
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2018-10&r=ecm
  19. By: Rodrigo Adão; Michal Kolesár; Eduardo Morales
    Abstract: Since Bartik (1991), it has become popular in empirical studies to estimate regressions in which the variable of interest is a shift-share, such as when a regional labor market outcome is regressed on a weighted average of observed sectoral shocks, using the regional sector shares as weights. In this paper, we discuss inference in these regressions. We show that standard economic models imply that the regression residuals are likely to be correlated across regions with similar sector shares, independently of their geographic location. These correlations are ignored by inference procedures commonly used in these regressions, which can lead to severe undercoverage. In regressions studying the effect of randomly generated placebo sectoral shocks on actual labor market outcomes in U.S. commuting zones, we find that a 5% level significance test based on standard errors clustered at the state level rejects the null of no effect in up to 45% of the placebo interventions. We derive novel confidence intervals that correctly account for the potential correlation in the regression residuals.
    Date: 2018–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1806.07928&r=ecm
  20. By: Federico A. Bugni (Institute for Fiscal Studies and Duke University); Ivan A. Canay (Institute for Fiscal Studies and Northwestern University)
    Abstract: In the regression discontinuity design (RDD), it is common practice to assess the credibility of the design by testing the continuity of the density of the running variable at the cut-off, e.g., McCrary (2008). In this paper we propose a new test for continuity of a density at a point based on the so-called g-order statistics, and study its properties under a novel asymptotic framework. The asymptotic framework is intended to approximate a small sample phenomenon: even though the total number n of observations may be large, the number of effective observations local to the cut-off is often small. Thus, while traditional asymptotics in RDD require a growing number of observations local to the cut-off as n → ∞, our framework allows for the number q of observations local to the cut-off to be fixed as n → ∞. The new test is easy to implement, asymptotically valid under weaker conditions than those used by competing methods, exhibits finite sample validity under stronger conditions than those needed for its asymptotic validity, and has favorable power properties against certain alternatives. In a simulation study, we find that the new test controls size remarkably well across designs. We finally apply our test to the design in Lee (2008), a well-known application of the RDD to study incumbency advantage.
    Keywords: Regression discontinuity design, g-order statistics, sign tests, continuity, density
    JEL: C12 C14
    Date: 2018–03–21
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:20/18&r=ecm
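The fixed-q intuition behind the test can be sketched with a stylized binomial sign test: under continuity, roughly half of the q observations closest to the cut-off should fall on each side. This is only the intuition, not the paper's exact g-order-statistic construction; the function name and its parameters are illustrative:

```python
import numpy as np
from scipy.stats import binomtest

def continuity_sign_test(running, cutoff, q=40):
    """Stylized sign test: among the q observations closest to the cutoff,
    the count falling to the left is approximately Binomial(q, 1/2) when
    the density of the running variable is continuous at the cutoff."""
    closest = np.argsort(np.abs(running - cutoff))[:q]
    n_left = int(np.sum(running[closest] < cutoff))
    return binomtest(n_left, q, 0.5).pvalue

rng = np.random.default_rng(0)
# continuous (uniform) running variable: no evidence of manipulation expected
p = continuity_sign_test(rng.uniform(-1, 1, size=2000), cutoff=0.0)
print(p)
```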
  21. By: Denis Chetverikov (Institute for Fiscal Studies and UCLA); Daniel Wilhelm (Institute for Fiscal Studies and cemmap and UCL); Dongwoo Kim (Institute for Fiscal Studies and University College London)
    Abstract: We propose a new nonparametric test of stochastic monotonicity which adapts to the unknown smoothness of the conditional distribution of interest, possesses desirable asymptotic properties, is conceptually easy to implement, and computationally attractive. In particular, we show that the test asymptotically controls size at a polynomial rate, is non-conservative, and detects local alternatives that converge to the null with the fastest possible rate. Our test is based on a data-driven bandwidth value and the critical value for the test takes this randomness into account. Monte Carlo simulations indicate that the test performs well in finite samples. In particular, the simulations show that the test controls size and may be significantly more powerful than existing alternative procedures.
    Date: 2018–04–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:24/18&r=ecm
  22. By: Ivan A. Canay (Institute for Fiscal Studies and Northwestern University); Andres Santos (Institute for Fiscal Studies and UC San Diego); Azeem M. Shaikh (Institute for Fiscal Studies and University of Chicago)
    Abstract: This paper studies the properties of the wild bootstrap-based test proposed in Cameron et al. (2008) in settings with clustered data. Cameron et al. (2008) provide simulations that suggest this test works well even in settings with as few as five clusters, but existing theoretical analyses of its properties all rely on an asymptotic framework in which the number of clusters is "large." In contrast to these analyses, we employ an asymptotic framework in which the number of clusters is "small," but the number of observations per cluster is "large." In this framework, we provide conditions under which the limiting rejection probability of an un-Studentized version of the test does not exceed the nominal level. Importantly, these conditions require, among other things, certain homogeneity restrictions on the distribution of covariates. We further establish that the limiting rejection probability of a Studentized version of the test does not exceed the nominal level by more than an amount that decreases exponentially with the number of clusters. We study the relevance of our theoretical results for finite samples via a simulation study.
    Keywords: Wild bootstrap, Clustered Data, Randomization Tests
    Date: 2018–04–11
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:27/18&r=ecm
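An un-Studentized wild cluster bootstrap of the kind analyzed in the paper can be sketched as follows. This is a simplified one-regressor version with Rademacher weights imposed under the null, not the authors' full procedure; the function name and tuning choices are illustrative:

```python
import numpy as np

def wild_cluster_pvalue(y, x, cluster, B=999, seed=0):
    """Un-Studentized wild cluster bootstrap test of H0: slope = 0,
    with Rademacher sign flips drawn once per cluster (a sketch only)."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    slope = np.linalg.lstsq(X, y, rcond=None)[0][1]
    resid = y - y.mean()            # residuals from the null (intercept-only) fit
    ids = np.unique(cluster)
    pos = np.searchsorted(ids, cluster)
    boot = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=ids.size)[pos]  # cluster-level sign flips
        y_star = y.mean() + resid * w
        boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]
    return float(np.mean(np.abs(boot) >= np.abs(slope)))
```

With only five clusters there are just 32 distinct sign patterns, which is exactly the "small number of large clusters" regime the paper studies.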

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.