nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒11‒19
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. Nonparametric Analysis of Finite Mixtures By Yuichi Kitamura; Louise Laage
  2. Generalised Empirical Likelihood Kernel Block Bootstrapping By Paulo M.D.C. Parente; Richard J. Smith
  3. On the Informativeness of Descriptive Statistics for Structural Estimates By Isaiah Andrews; Matthew Gentzkow; Jesse M. Shapiro
  4. Instrument Validity Tests with Causal Trees: With an Application to the Same-sex Instrument By Guber, Raphael
  5. Testing Distributional Assumptions Using a Continuum of Moments By Dante Amengual; Marine Carrasco; Enrique Sentana
  6. EDGEWORTH EXPANSION BASED CORRECTION OF SELECTIVITY BIAS IN MODELS OF DOUBLE SELECTION By Insan Tunali; Berk Yavuzoglu
  7. Fast, "Robust", and Approximately Correct: Estimating Mixed Demand Systems By Salanié, Bernard; Wolak, Frank
  8. Normality Tests for Latent Variables By Tincho Almuzara; Dante Amengual; Enrique Sentana
  9. On the Comparison of Interval Forecasts By Ross Askanazi; Francis X. Diebold; Frank Schorfheide; Minchul Shin
  10. Nonparametric maximum likelihood methods for binary response models with random coefficients By Jiaying Gu; Roger Koenker
  11. Testing the Multivariate Regular Variation Model By Einmahl, John; Yang, Fan; Zhou, Chen
  12. The de-biased group Lasso estimation for varying coefficient models By HONDA, Toshio
  13. Correlated Non-Classical Measurement Errors, Second Best Policy Inference and the Inverse Size-Productivity Relationship in Agriculture By Barrett, C.; Abay, K.; Abate, G.; Bernard, T.
  14. Some (Maybe) Unpleasant Arithmetic in Minimum Wage Evaluations: The Role of Power, Significance and Sample Size By Bachmann, Ronald; Felder, Rahel; Schaffner, Sandra; Tamm, Marcus
  15. Empirical Evaluation of Overspecified Asset Pricing Models By Elena Manresa; Francisco Peñaranda; Enrique Sentana
  16. Randomization Tests for Equality in Dependence Structure By Juwon Seo
  17. Uncertain Kingdom: nowcasting GDP and its revisions By Anesti, Nikoleta; Galvão, Ana; Miranda-Agrippino, Silvia

  1. By: Yuichi Kitamura; Louise Laage
    Abstract: Finite mixture models are useful in applied econometrics. They can be used to model unobserved heterogeneity, which plays major roles in labor economics, industrial organization and other fields. Mixtures are also convenient in dealing with contaminated sampling models and models with multiple equilibria. This paper shows that finite mixture models are nonparametrically identified under weak assumptions that are plausible in economic applications. The key is to utilize the identification power implied by variation in covariates. First, three identification approaches are presented, under distinct and non-nested sets of sufficient conditions. Observable features of data inform us which of the three approaches is valid. These results apply to general nonparametric switching regressions, as well as to structural econometric models, such as auction models with unobserved heterogeneity. Second, some extensions of the identification results are developed. In particular, a mixture regression where the mixing weights depend on the value of the regressors in a fully unrestricted manner is shown to be nonparametrically identifiable. This means a finite mixture model with function-valued unobserved heterogeneity can be identified in a cross-section setting, without restricting the dependence pattern between the regressor and the unobserved heterogeneity. In this respect it is akin to fixed effects panel data models which permit unrestricted correlation between unobserved heterogeneity and covariates. Third, the paper shows that fully nonparametric estimation of the entire mixture model is possible, by forming a sample analogue of one of the new identification strategies. The estimator is shown to possess a desirable polynomial rate of convergence as in a standard nonparametric estimation problem, despite nonregular features of the model.
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1811.02727&r=ecm
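    Code sketch: a parametric baseline rather than the paper's nonparametric estimator: EM for a two-component Gaussian mixture of linear regressions. The data-generating process and all names below are illustrative assumptions.
      import numpy as np

      def em_mixture_regression(y, X, n_iter=200, seed=0):
          # EM for a two-component Gaussian mixture of regressions.
          # A parametric stand-in only; Kitamura-Laage identify and
          # estimate the mixture nonparametrically.
          rng = np.random.default_rng(seed)
          n, p = X.shape
          beta = rng.normal(size=(2, p))   # component coefficients
          sigma = np.ones(2)               # component noise scales
          pi = np.array([0.5, 0.5])        # mixing weights
          for _ in range(n_iter):
              # E-step: responsibilities (common 1/sqrt(2*pi) factor cancels)
              dens = np.empty((n, 2))
              for k in range(2):
                  r = y - X @ beta[k]
                  dens[:, k] = pi[k] * np.exp(-0.5 * (r / sigma[k]) ** 2) / sigma[k]
              resp = dens / np.maximum(dens.sum(axis=1, keepdims=True), 1e-300)
              # M-step: weighted least squares per component
              for k in range(2):
                  w = resp[:, k]
                  Xw = X * w[:, None]
                  beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
                  r = y - X @ beta[k]
                  sigma[k] = np.sqrt((w * r ** 2).sum() / w.sum())
                  pi[k] = w.mean()
          return beta, sigma, pi

      # Illustrative data: two regimes with different intercepts and slopes
      rng = np.random.default_rng(1)
      X = np.column_stack([np.ones(500), rng.normal(size=500)])
      z = rng.random(500) < 0.4
      y = np.where(z, 1.0 + 2.0 * X[:, 1], -1.0 - 0.5 * X[:, 1]) + 0.3 * rng.normal(size=500)
      print(em_mixture_regression(y, X))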
  2. By: Paulo M.D.C. Parente; Richard J. Smith
    Abstract: This article unveils how the kernel block bootstrap method of Parente and Smith (2018a, 2018b) can be applied to make inferences on parameters of models defined through moment restrictions. Bootstrap procedures that resort to generalised empirical likelihood implied probabilities to draw observations are also introduced. We prove the first-order asymptotic validity of bootstrapped test statistics for overidentifying moment restrictions, parametric restrictions and additional moment restrictions. Resampling methods based on such probabilities were shown to be efficient by Brown and Newey (2002). A set of simulation experiments reveals that the statistical tests based on the proposed bootstrap methods perform better than those that rely on first-order asymptotic theory.
    Keywords: Bootstrap; heteroskedastic and autocorrelation consistent inference; Generalised Method of Moments; Generalised Empirical Likelihood
    JEL: C14 C15 C32
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:ise:remwps:wp0552018&r=ecm
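    Code sketch: a plain moving-block bootstrap for the mean of a dependent series. The kernel block bootstrap of Parente and Smith additionally tapers observations within blocks with a kernel and can draw using GEL implied probabilities; both refinements are omitted in this minimal sketch.
      import numpy as np

      def moving_block_bootstrap(x, block_len, n_boot=999, seed=0):
          # Resample overlapping blocks to preserve weak dependence.
          rng = np.random.default_rng(seed)
          n = len(x)
          blocks = np.lib.stride_tricks.sliding_window_view(x, block_len)
          k = int(np.ceil(n / block_len))          # blocks per resample
          means = np.empty(n_boot)
          for b in range(n_boot):
              idx = rng.integers(0, len(blocks), size=k)
              means[b] = blocks[idx].ravel()[:n].mean()
          return means

      # AR(1) toy series; percentile interval for the mean
      rng = np.random.default_rng(2)
      e = rng.normal(size=500)
      x = np.empty(500)
      x[0] = e[0]
      for t in range(1, 500):
          x[t] = 0.6 * x[t - 1] + e[t]
      boot = moving_block_bootstrap(x, block_len=10)
      print(x.mean(), np.percentile(boot, [2.5, 97.5]))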
  3. By: Isaiah Andrews; Matthew Gentzkow; Jesse M. Shapiro
    Abstract: Researchers often present treatment-control differences or other descriptive statistics alongside structural estimates that answer policy or counterfactual questions of interest. We ask to what extent confidence in the researcher's interpretation of the former should increase a reader's confidence in the latter. We consider a structural estimate ĉ that may depend on a vector of descriptive statistics γ̂. We define a class of misspecified models in a neighborhood of the assumed model. We then compare the bounds on the bias of ĉ due to misspecification across all models in this class with the bounds across the subset of these models in which misspecification does not affect γ̂. Our main result shows that the ratio of the lengths of these tight bounds depends only on a quantity we call the informativeness of γ̂ for ĉ, which can be easily estimated even for complex models. We recommend that researchers report the estimated informativeness of descriptive statistics. We illustrate with applications to three recent papers.
    JEL: C18 D12 I13 I25
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:25217&r=ecm
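    Code sketch: one natural formalization of such an informativeness measure: the R-squared from projecting the structural estimate on the descriptive statistics, computed from joint bootstrap draws of the two. The exact estimand in the paper may differ; the draws below are hypothetical.
      import numpy as np

      def informativeness(draws_c, draws_gamma):
          # R^2 of c_hat on gamma_hat from joint (e.g. bootstrap) draws.
          cov = np.cov(np.column_stack([draws_c, draws_gamma]), rowvar=False)
          s_cc, s_cg, s_gg = cov[0, 0], cov[0, 1:], cov[1:, 1:]
          return float(s_cg @ np.linalg.solve(s_gg, s_cg) / s_cc)

      # Hypothetical draws: c_hat loads mostly on the first descriptive stat
      rng = np.random.default_rng(3)
      gamma = rng.normal(size=(1000, 2))
      c = 0.9 * gamma[:, 0] + 0.3 * rng.normal(size=1000)
      print(informativeness(c, gamma))   # close to 0.81 / 0.90 = 0.9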
  4. By: Guber, Raphael (Munich Center for the Economics of Aging (MEA))
    Abstract: The use of instrumental variables (IVs) to identify causal effects is widespread in empirical economics, but it is fundamentally impossible to prove their validity. However, assumptions sufficient for the identification of local average treatment effects (LATEs) jointly generate necessary conditions in the observed data that make it possible to refute an IV's validity. Suitable tests exist, but they may not be able to detect even severe violations of IV validity in practice. In this paper, we employ recently developed machine learning tools as data-driven improvements for these tests. Specifically, we use the causal tree (CT) algorithm of Athey and Imbens (2016) to search the covariate space directly for violations of the LATE assumptions. The new approach is applied to the sibling sex composition instrument in census data from China and the United States. We expect that, because of son preferences, the sibling sex instrument is invalid in the Chinese context. However, existing IV validity tests are unable to detect violations, while our CT-based procedure does.
    JEL: C12 C18 C26
    Date: 2018–09–17
    URL: http://d.repec.org/n?u=RePEc:mea:meawpa:201805&r=ecm
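    Code sketch: the testable implications in question are the Balke-Pearl/Kitagawa inequalities, which must hold within every covariate cell under IV validity. This sketch checks them on a fixed, hypothetical partition; the causal tree's contribution is to find the worst cells adaptively.
      import numpy as np

      def late_inequality_slack(y, d, z, cells):
          # Minimum slack of P(Y=a, D=1 | Z=1) >= P(Y=a, D=1 | Z=0) and
          # P(Y=a, D=0 | Z=0) >= P(Y=a, D=0 | Z=1) within each cell.
          # Negative slack flags a sample violation of IV validity.
          out = {}
          for c in np.unique(cells):
              m = cells == c
              pz1 = max(z[m].mean(), 1e-12)
              slack = np.inf
              for a in np.unique(y[m]):
                  for dd, sign in ((1, 1.0), (0, -1.0)):
                      p1 = np.mean((y[m] == a) & (d[m] == dd) & (z[m] == 1)) / pz1
                      p0 = np.mean((y[m] == a) & (d[m] == dd) & (z[m] == 0)) / max(1 - pz1, 1e-12)
                      slack = min(slack, sign * (p1 - p0))
              out[c] = slack
          return out

      # Toy data satisfying validity; slack should be (close to) nonnegative
      rng = np.random.default_rng(4)
      n = 4000
      z = rng.integers(0, 2, n)
      d = (rng.random(n) < 0.3 + 0.4 * z).astype(int)
      y = (rng.random(n) < 0.5 + 0.2 * d).astype(int)
      cells = rng.integers(0, 3, n)      # stand-in for causal tree leaves
      print(late_inequality_slack(y, d, z, cells))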
  5. By: Dante Amengual (CEMFI, Centro de Estudios Monetarios y Financieros); Marine Carrasco (Université de Montréal); Enrique Sentana (CEMFI)
    Abstract: We propose specification tests for parametric distributions that compare theoretical and empirical characteristic functions. Our proposal is the continuum of moment conditions analogue to the usual overidentifying restrictions test, which takes into account the correlation between influence functions for different argument values. We derive its asymptotic distribution for fixed regularization parameter and when this vanishes with the sample size. We show its consistency against any deviation from the null, study its local power and compare it with existing tests. An extensive Monte Carlo exercise confirms that our proposed tests display good power in finite samples against a variety of alternatives.
    Keywords: Consistent tests, characteristic function, GMM, continuum of moment conditions, goodness-of-fit, Tikhonov regularization.
    JEL: C01 C12 C52
    Date: 2017–03
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2017_1709&r=ecm
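    Code sketch: the flavor of a characteristic function based test: a weighted L2 distance between the empirical CF and the null CF over a grid of arguments. The paper's statistic additionally weights by the Tikhonov-regularized covariance operator across arguments, which this simplified version omits.
      import numpy as np

      def cf_distance(x, t=None):
          # n times the weighted L2 distance between the empirical CF of
          # standardized data and the N(0,1) CF; simplified illustration.
          x = (x - x.mean()) / x.std()
          t = np.linspace(-5, 5, 201) if t is None else t
          ecf = np.exp(1j * np.outer(t, x)).mean(axis=1)
          ncf = np.exp(-0.5 * t ** 2)           # N(0,1) characteristic function
          w = np.exp(-0.5 * t ** 2)             # integrable weight over t
          return len(x) * ((np.abs(ecf - ncf) ** 2 * w).sum() * (t[1] - t[0]))

      rng = np.random.default_rng(5)
      print(cf_distance(rng.normal(size=500)))        # small under the null
      print(cf_distance(rng.standard_t(3, size=500))) # larger under fat tails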
  6. By: Insan Tunali (Department of Economics, Koc University); Berk Yavuzoglu (Department of Economics, Nazarbayev University)
    Abstract: Edgeworth expansions are known to be useful for approximating probability distributions and moments. In our case, we exploit the expansion in the context of models of double selection embedded in a trivariate normal structure. We assume bivariate normality among the random disturbance terms in the two selection equations but allow the distribution of the disturbance term in the outcome equation to be free. This sets the stage for a control function approach to correction of selectivity bias that affords tests for the more common trivariate normality specification. Other recently proposed methods for handling multiple outcomes are Multinomial Logit based selection correction models. An empirical example is presented to document the differences among the results obtained from our selectivity correction approach, the trivariate normality specification and Multinomial Logit based selection correction models.
    Keywords: double selection models; Edgeworth expansion; female labor supply; Multinomial Logit based selection correction models; selectivity bias.
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:naz:wpaper:1802&r=ecm
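    Code sketch: the building block itself rather than the double-selection estimator: an Edgeworth expansion corrects a normal density with skewness and excess-kurtosis terms through Hermite polynomials.
      import numpy as np

      def edgeworth_density(x, skew, exkurt):
          # Edgeworth approximation for a standardized random variable:
          # phi(x) * [1 + (k3/6)He3 + (k4/24)He4 + (k3^2/72)He6].
          phi = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
          he3 = x ** 3 - 3 * x
          he4 = x ** 4 - 6 * x ** 2 + 3
          he6 = x ** 6 - 15 * x ** 4 + 45 * x ** 2 - 15
          return phi * (1 + skew / 6 * he3 + exkurt / 24 * he4
                        + skew ** 2 / 72 * he6)

      x = np.linspace(-4, 4, 9)
      print(edgeworth_density(x, skew=0.5, exkurt=1.0))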
  7. By: Salanié, Bernard; Wolak, Frank
    Abstract: Many econometric models used in applied work integrate over unobserved heterogeneity. We show that a class of these models that includes many random coefficients demand systems can be approximated by a "small-sigma" expansion that yields a straightforward 2SLS estimator. We study in detail the models of market shares popular in empirical IO ("macro BLP"). Our estimator is only approximately correct, but it performs very well in practice. It is extremely fast and easy to implement, and it accommodates misspecifications in the higher moments of the distribution of the random coefficients. At the very least, it provides excellent starting values for more commonly used estimators of these models.
    Keywords: demand systems; discrete choice; industrial organization
    JEL: C50 C51 C52 D10 D20 D40
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:13236&r=ecm
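    Code sketch: the baseline such expansions build on: Berry's logit inversion of market shares followed by 2SLS. The Salanie-Wolak estimator adds artificial regressors capturing random-coefficient curvature, which are omitted here, and the single toy market below is hypothetical.
      import numpy as np

      def logit_2sls(shares, share_outside, X, Z):
          # Invert shares to delta_j = ln s_j - ln s_0, then 2SLS of delta
          # on characteristics X with instruments Z.
          delta = np.log(shares) - np.log(share_outside)
          Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection on Z
          Xh = Pz @ X
          return np.linalg.solve(Xh.T @ X, Xh.T @ delta)

      rng = np.random.default_rng(7)
      J = 200
      Z = np.column_stack([np.ones(J), rng.normal(size=(J, 2))])  # cost shifters
      price = Z @ np.array([1.0, 0.5, 0.5]) + rng.normal(size=J)
      xi = 0.5 * (price - Z @ np.array([1.0, 0.5, 0.5]))  # demand shock
      delta = 1.0 - 2.0 * price + xi
      s = np.exp(delta)
      s = s / (1 + s.sum())                 # logit shares in one toy market
      X = np.column_stack([np.ones(J), price])
      print(logit_2sls(s, 1 - s.sum(), X, Z))   # roughly (1, -2)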
  8. By: Tincho Almuzara (CEMFI, Centro de Estudios Monetarios y Financieros); Dante Amengual (CEMFI, Centro de Estudios Monetarios y Financieros); Enrique Sentana (CEMFI)
    Abstract: We exploit the rationale behind the Expectation Maximization algorithm to derive simple-to-implement and simple-to-interpret score tests of normality in the innovations to the latent variables in state space models against generalized hyperbolic alternatives, including symmetric and asymmetric Student t distributions. We decompose our tests into third and fourth moment components, and obtain one-sided likelihood ratio analogues, whose asymptotic distribution we provide. When we apply them to a cointegrated dynamic factor model which combines the expenditure and income versions of US aggregate real output to improve its measurement, we reject normality if the sample period extends beyond the Great Moderation.
    Keywords: Gross domestic product, gross domestic income, kurtosis, Kuhn-Tucker test, skewness, supremum test, Wiener-Kolmogorov-Kalman smoother.
    JEL: C32 C52 E01
    Date: 2017–02
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2017_1708&r=ecm
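    Code sketch: a quick, feasible check in a similar spirit, not the paper's score tests: fit a Gaussian state space model with statsmodels and apply a Jarque-Bera test to the standardized one-step-ahead residuals. The paper's tests instead target the innovations to the latent variables and come with one-sided refinements.
      import numpy as np
      import statsmodels.api as sm

      # Local-level series with Student-t measurement noise (illustrative)
      rng = np.random.default_rng(8)
      level = np.cumsum(0.1 * rng.normal(size=300))
      y = level + 0.5 * rng.standard_t(df=4, size=300)

      mod = sm.tsa.UnobservedComponents(y, level='local level')
      res = mod.fit(disp=False)
      # Jarque-Bera on standardized prediction errors; heavy tails in the
      # measurement noise should show up as excess kurtosis here.
      print(res.test_normality(method='jarquebera'))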
  9. By: Ross Askanazi (Cornerstone Research); Francis X. Diebold (Department of Economics, University of Pennsylvania); Frank Schorfheide (Department of Economics, University of Pennsylvania); Minchul Shin (Department of Economics, University of Illinois)
    Abstract: We explore interval forecast comparison when the nominal confidence level is specified, but the quantiles on which intervals are based are not specified. It turns out that the problem is difficult, and perhaps unsolvable. We first consider a situation where intervals meet the Christoffersen conditions (in particular, where they are correctly calibrated), in which case the common prescription, which we rationalize and explore, is to prefer the interval of shortest length. We then allow for mis-calibrated intervals, in which case there is a calibration-length tradeoff. We propose two natural conditions that interval forecast loss functions should meet in such environments, and we show that a variety of popular approaches to interval forecast comparison fail them. Our negative results strengthen the case for abandoning interval forecasts in favor of density forecasts: Density forecasts not only provide richer information, but also can be readily compared using known proper scoring rules like the log predictive score, whereas interval forecasts cannot.
    Keywords: Forecast accuracy, forecast evaluation, prediction
    JEL: C53
    Date: 2018–08–02
    URL: http://d.repec.org/n?u=RePEc:pen:papers:18-013&r=ecm
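    Code sketch: one popular calibration-length loss, the Winkler (interval) score. The paper shows that approaches of this kind can fail natural conditions, so read this as the object under discussion rather than a recommended solution.
      import numpy as np

      def interval_score(l, u, y, alpha):
          # Length plus a 2/alpha-scaled penalty for observations outside
          # the central (1 - alpha) interval [l, u]; smaller is better.
          l, u, y = map(np.asarray, (l, u, y))
          return ((u - l)
                  + (2 / alpha) * (l - y) * (y < l)
                  + (2 / alpha) * (y - u) * (y > u))

      rng = np.random.default_rng(9)
      y = rng.normal(size=10000)
      print(interval_score(-1.0, 1.0, y, 0.10).mean())      # short, mis-calibrated
      print(interval_score(-1.645, 1.645, y, 0.10).mean())  # calibrated 90%, lower score
      print(np.mean((y >= -1.645) & (y <= 1.645)))          # empirical coverage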
  10. By: Jiaying Gu; Roger Koenker
    Abstract: Single index linear models for binary response with random coefficients have been extensively employed in many econometric settings under various parametric specifications of the distribution of the random coefficients. Nonparametric maximum likelihood estimation (NPMLE) as proposed by Cosslett (1983) and Ichimura and Thompson (1998), in contrast, has received less attention in applied work, due primarily to computational difficulties. We propose a new approach to the computation of NPMLEs for binary response models that significantly increases their computational tractability, thereby facilitating greater flexibility in applications. Our approach, which relies on recent developments involving the geometry of hyperplane arrangements, is contrasted with the recently proposed deconvolution method of Gautier and Kitamura (2013). An application to modal choice for the journey to work in the Washington DC area illustrates the methods.
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1811.03329&r=ecm
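    Code sketch: a crude fixed-grid stand-in: discretize the coefficient space and run EM on the mixing weights of P(y=1|x) = sum_k w_k 1{x'b_k > 0}. Gu and Koenker instead compute the exact NPMLE using the geometry of hyperplane arrangements, which avoids any grid; everything below is an illustrative assumption.
      import numpy as np

      def npmle_binary_grid(y, X, grid, n_iter=500):
          # EM over mixing weights on fixed coefficient atoms 'grid'.
          cell = (X @ grid.T > 0).astype(float)               # n x K indicators
          L = np.where(y[:, None] == 1, cell, 1 - cell) + 1e-12
          w = np.full(grid.shape[0], 1 / grid.shape[0])
          for _ in range(n_iter):
              post = L * w
              post /= post.sum(axis=1, keepdims=True)         # E-step
              w = post.mean(axis=0)                           # M-step
          return w

      # Toy: intercept fixed at -0.5, slope drawn from a two-point law
      rng = np.random.default_rng(10)
      n = 2000
      x = rng.normal(size=n)
      b = rng.choice([0.5, 2.0], size=n)
      y = ((-0.5 + b * x) > 0).astype(int)
      X = np.column_stack([np.ones(n), x])
      grid = np.column_stack([np.full(50, -0.5), np.linspace(0.1, 3.0, 50)])
      w = npmle_binary_grid(y, X, grid)
      # Mass should cluster roughly around slopes 0.5 and 2 (set-identified)
      print(np.round(grid[w > 0.02, 1], 2), np.round(w[w > 0.02], 2))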
  11. By: Einmahl, John (Tilburg University, Center For Economic Research); Yang, Fan; Zhou, Chen
    Abstract: In this paper, we propose a test for the multivariate regular variation model. The test is based on testing whether the extreme value indices of the radial component, conditional on the angular component falling in different subsets, are the same. Combining this constancy test across conditional extreme value indices with a test of the regular variation of the radial component, we obtain a test for multivariate regular variation. Simulation studies demonstrate the good performance of the proposed tests. We apply this test to examine two data sets used in previous studies that are assumed to follow the multivariate regular variation model.
    Keywords: extreme value statistics; Hill estimator; local empirical process
    JEL: C12 C14
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:tiu:tiucen:dd3c4dd0-7181-40f3-af44-f9f1eb224ff1&r=ecm
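    Code sketch: the key ingredient of such a procedure: Hill estimates of the radial tail index computed on angular subsets, which should agree under multivariate regular variation. The formal test statistic and critical values are in the paper; here only the point estimates are compared.
      import numpy as np

      def hill(x, k):
          # Hill estimator of the extreme value index from the top k order
          # statistics of a positive sample.
          xs = np.sort(x)
          return np.mean(np.log(xs[-k:] / xs[-k - 1]))

      # Bivariate heavy-tailed toy data; split the angle into half-planes
      rng = np.random.default_rng(11)
      xy = rng.standard_t(df=3, size=(5000, 2))
      r = np.hypot(xy[:, 0], xy[:, 1])        # radial component
      upper = xy[:, 1] > 0                    # angular subset indicator
      k = 200
      print(hill(r[upper], k), hill(r[~upper], k), hill(r, 2 * k))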
  12. By: HONDA, Toshio
    Abstract: There has been a lot of attention on the de-biased or de-sparsified Lasso since it was proposed in 2014. The Lasso is very useful for variable selection and for obtaining initial estimators for other methods in high-dimensional settings. However, it is well known that the Lasso produces biased estimators. Therefore several authors simultaneously proposed the de-biased Lasso to fix this drawback and to carry out statistical inference based on the de-biased Lasso estimators. The de-biased Lasso procedures need suitable estimators of high-dimensional precision matrices for bias correction. Thus the existing research is largely limited to linear regression models under some restrictive assumptions, generalized linear models under stringent assumptions, and the like. To our knowledge, there are a few papers on linear regression models with group structure, but no results on structured nonparametric regression models such as varying coefficient models. In this paper, we apply the de-biased group Lasso to varying coefficient models and closely examine its theoretical properties and the effects of the approximation errors involved in nonparametric regression. Some simulation results are also presented.
    Keywords: high-dimensional data, B-spline, varying coefficient models, group Lasso, bias correction
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:hit:econdp:2018-04&r=ecm
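    Code sketch: the estimation step only, with the de-biasing correction (the paper's focus) left out: a proximal-gradient group Lasso on a basis-expanded design, one group per coefficient function. Cubic polynomials in the index variable stand in for the paper's B-splines.
      import numpy as np

      def group_lasso(Z, y, groups, lam, n_iter=2000):
          # ISTA for min (1/2n)||y - Zb||^2 + lam * sum_g ||b_g||_2
          n = len(y)
          step = n / np.linalg.norm(Z, 2) ** 2        # 1 / Lipschitz constant
          b = np.zeros(Z.shape[1])
          for _ in range(n_iter):
              b -= step * (Z.T @ (Z @ b - y) / n)     # gradient step
              for idx in groups:                      # block soft-thresholding
                  nrm = np.linalg.norm(b[idx])
                  b[idx] = 0.0 if nrm == 0 else max(0.0, 1 - step * lam / nrm) * b[idx]
          return b

      # Varying coefficient toy: y = g1(u) x1 + noise, g2 identically zero
      rng = np.random.default_rng(12)
      n = 400
      u = rng.random(n)
      x = rng.normal(size=(n, 2))
      y = (1 + np.sin(2 * np.pi * u)) * x[:, 0] + 0.3 * rng.normal(size=n)
      basis = np.column_stack([np.ones(n), u, u ** 2, u ** 3])
      Z = np.column_stack([basis * x[:, [0]], basis * x[:, [1]]])
      b = group_lasso(Z, y, [np.arange(0, 4), np.arange(4, 8)], lam=0.1)
      print(np.round(b, 2))    # second block shrunk to (near) zero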
  13. By: Barrett, C.; Abay, K.; Abate, G.; Bernard, T.
    Abstract: We show analytically and empirically that non-classical measurement errors in the two key variables in a hypothesized relationship can bias the estimated relationship between them in any direction. Furthermore, if the errors are correlated, correcting for either one alone can aggravate bias in the parameter estimate of interest relative to ignoring mismeasurement in both variables, a second-best result with implications for a broad class of economic phenomena of policy interest. We illustrate these results empirically by demonstrating the implications of mismeasured agricultural output and plot size for the long-debated (inverse) relationship between size and productivity. Our data from Ethiopia show large discrepancies between farmer self-reported and directly measured values of crop output and plot size; these errors are strongly, positively correlated with one another. In these data, correlated non-classical measurement errors generate a strong but largely spurious estimated inverse size-productivity relationship. We demonstrate empirically our analytical result that correcting for just one measurement problem may aggravate the bias in the parameter estimate of interest.
    Keywords: Research Methods/ Statistical Methods
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:ags:iaae18:276985&r=ecm
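    Code sketch: a hypothetical simulation in the spirit of the paper's argument: yields are truly unrelated to plot size, but correlated multiplicative reporting errors in both output and size manufacture a strong inverse relationship.
      import numpy as np

      rng = np.random.default_rng(13)
      n = 5000
      log_size = rng.normal(0, 0.8, n)                 # true plot size (logs)
      log_output = log_size + rng.normal(0, 0.3, n)    # constant returns to size

      # Correlated non-classical errors in BOTH self-reported variables
      common = rng.normal(0, 0.4, n)
      log_size_rep = log_size + common + rng.normal(0, 0.2, n)
      log_output_rep = log_output + 0.3 * common + rng.normal(0, 0.2, n)

      def slope(x, y):
          return np.cov(x, y)[0, 1] / np.var(x)

      # Yield regression: log(output/size) on log(size)
      print(slope(log_size, log_output - log_size))              # ~ 0, no true IR
      print(slope(log_size_rep, log_output_rep - log_size_rep))  # spuriously negative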
  14. By: Bachmann, Ronald (RWI); Felder, Rahel (Ruhr University Bochum); Schaffner, Sandra (RWI); Tamm, Marcus (RWI)
    Abstract: In this paper, we discuss the importance of sample size in the evaluation of minimum wage effects. We first show which sample sizes are necessary to make reliable statements about the effects of minimum wages on binary outcomes, and second how to determine these sample sizes. This is particularly important when interpreting statistically insignificant effects, which could be due to (i) the absence of a true effect or (ii) a lack of statistical power, which makes it impossible to detect an effect even though it exists. We illustrate this for the analysis of labour market transitions using two data sets that are particularly important in minimum wage research for Germany, the Integrated Labour Market Biographies (IEB) and the Socio-Economic Panel (SOEP).
    Keywords: power calculation, sample size, significance testing, evaluation, minimum wage
    JEL: C12 C80 J38
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp11867&r=ecm
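    Code sketch: the arithmetic behind the paper's point, via the standard closed-form sample size for detecting a difference in two proportions. The effect sizes below are hypothetical, not taken from the paper.
      from scipy.stats import norm

      def n_per_group(p1, p2, alpha=0.05, power=0.8):
          # Two-sided z-test for H0: p1 = p2; observations per group.
          za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
          var = p1 * (1 - p1) + p2 * (1 - p2)
          return (za + zb) ** 2 * var / (p1 - p2) ** 2

      # A 0.5pp drop in a 10% transition rate needs enormous samples,
      # while a 2pp drop is detectable with moderately many observations.
      print(round(n_per_group(0.10, 0.095)))   # ~ 55,000 per group
      print(round(n_per_group(0.10, 0.08)))    # ~ 3,200 per group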
  15. By: Elena Manresa (MIT Sloan); Francisco Peñaranda (Queens College CUNY); Enrique Sentana (CEMFI)
    Abstract: Asset pricing models with potentially too many risk factors are increasingly common in empirical work. Unfortunately, they can yield misleading statistical inferences. Unlike other studies focusing on the properties of standard estimators and tests, we estimate the sets of SDFs and risk prices compatible with the asset pricing restrictions of a given model. We also propose tests to detect problematic situations with economically meaningless SDFs uncorrelated with the test assets. We confirm the empirical relevance of our proposed estimators and tests with Yogo's (2006) linearized version of the consumption CAPM, and provide Monte Carlo evidence on their reliability in finite samples.
    Keywords: Continuously updated GMM, factor pricing models, set estimation, stochastic discount factor, underidentification tests.
    JEL: G12 C52
    Date: 2017–05
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2017_1711&r=ecm
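    Code sketch: the point-identified baseline the paper generalizes: first-step GMM for risk prices in a linear SDF m_t = 1 - b'f_t with excess returns, where the moments are linear in b. With a useless factor the estimate becomes erratic, the symptom the paper's set estimators and underidentification tests address; the toy factor structure below is an assumption.
      import numpy as np

      def sdf_risk_prices(R, F):
          # Moments E[R_t (1 - f_t'b)] = 0 are linear in b: with identity
          # weighting, b solves least squares of E[R] on E[R f'].
          muR = R.mean(axis=0)
          D = R.T @ F / len(R)
          return np.linalg.lstsq(D, muR, rcond=None)[0]

      # Toy: factor 1 is priced, factor 2 has zero loadings (useless)
      rng = np.random.default_rng(15)
      T = 2000
      f = rng.normal(size=(T, 2))
      beta = np.array([[1.0, 0.0], [0.5, 0.0], [1.5, 0.0]])
      R = 0.05 * beta[:, 0] + f @ beta.T + 0.5 * rng.normal(size=(T, 3))
      print(sdf_risk_prices(R, f))   # first element near 0.05; second erratic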
  16. By: Juwon Seo
    Abstract: We develop a new statistical procedure to test whether the dependence structure is identical between two groups. Rather than relying on a single index such as Pearson's correlation coefficient or Kendall's Tau, we consider the entire dependence structure by investigating the dependence functions (copulas). The critical values are obtained by a modified randomization procedure designed to exploit asymptotic group invariance conditions. Implementation of the test is intuitive and simple, and does not require any specification of a tuning parameter or weight function. At the same time, the test exhibits excellent finite sample performance, with the null rejection rates almost equal to the nominal level even when the sample size is extremely small. Two empirical applications concerning the dependence between income and consumption, and the Brexit effect on European financial market integration are provided.
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1811.02105&r=ecm
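    Code sketch: a naive permutation analogue, not Seo's modified randomization: rank-transform each group to pseudo-observations, measure the L2 distance between empirical copulas on a grid, and permute group labels. Plain permutation is exact only when the full joint laws are exchangeable under the null; the paper's modification is built for equality of copulas alone.
      import numpy as np

      def pseudo_obs(X):
          # Column-wise ranks scaled to (0,1)
          return (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / (len(X) + 1)

      def copula_distance(U, V, grid_pts=20):
          g = np.linspace(0.05, 0.95, grid_pts)
          gu, gv = np.meshgrid(g, g)
          C1 = np.mean((U[:, 0, None, None] <= gu) & (U[:, 1, None, None] <= gv), axis=0)
          C2 = np.mean((V[:, 0, None, None] <= gu) & (V[:, 1, None, None] <= gv), axis=0)
          return np.mean((C1 - C2) ** 2)

      def permutation_pvalue(X, Y, n_perm=499, seed=16):
          rng = np.random.default_rng(seed)
          stat = copula_distance(pseudo_obs(X), pseudo_obs(Y))
          pooled, n = np.vstack([X, Y]), len(X)
          count = 0
          for _ in range(n_perm):
              idx = rng.permutation(len(pooled))
              Xp, Yp = pooled[idx[:n]], pooled[idx[n:]]
              count += copula_distance(pseudo_obs(Xp), pseudo_obs(Yp)) >= stat
          return (count + 1) / (n_perm + 1)

      rng = np.random.default_rng(17)
      X = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], 200)
      Y = rng.multivariate_normal([0, 0], [[1, 0.0], [0.0, 1]], 200)
      print(permutation_pvalue(X, Y))   # small: dependence differs across groups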
  17. By: Anesti, Nikoleta (Bank of England); Galvão, Ana (Warwick Business School); Miranda-Agrippino, Silvia (Bank of England)
    Abstract: We design a new econometric framework to nowcast macroeconomic data subject to revisions, and use it to predict UK GDP growth in real time. To this end, we assemble a novel dataset of monthly and quarterly indicators featuring over ten years of real-time data vintages. In the Release-Augmented DFM (RA-DFM), successive monthly estimates of GDP growth for the same quarter are treated as correlated observables in a Dynamic Factor Model (DFM) that also includes a large number of mixed-frequency predictors. The framework allows for a simple characterisation of the stochastic process for the revisions as a function of the observables, and permits a detailed assessment of the contribution of the data flow in informing (i) forecasts of quarterly GDP growth; (ii) the evolution of forecast uncertainty; and (iii) forecasts of revisions to early released GDP data. We find that the RA-DFM predictions contain information about the latest GDP releases above and beyond that in the statistical office's earlier estimates; that predictive intervals are well calibrated; and that real-time estimates of UK GDP growth are commensurate with those of professional forecasters. Data on production and labour markets, subject to large publication delays, account for most of the forecastability of the revisions.
    Keywords: Nowcasting; data revisions; dynamic factor model
    JEL: C51 C53
    Date: 2018–11–02
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0764&r=ecm
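    Code sketch: a toy single-factor DFM nowcast with statsmodels on simulated monthly indicators, with the latest releases missing to mimic the ragged edge. The RA-DFM additionally stacks successive GDP vintages as correlated observables; nothing like that is attempted here, and the indicator names are invented.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # One AR(1) factor drives three monthly indicators (illustrative)
      rng = np.random.default_rng(18)
      T = 240
      f = np.zeros(T)
      for t in range(1, T):
          f[t] = 0.8 * f[t - 1] + rng.normal()
      X = pd.DataFrame(np.outer(f, [1.0, 0.7, 0.4]) + rng.normal(size=(T, 3)),
                       columns=['production', 'labour', 'survey'])

      # Ragged edge: the newest observations are not yet released
      X.iloc[-1, 0] = np.nan
      X.iloc[-2:, 1] = np.nan

      # The Kalman filter handles missing entries, so the end-of-sample
      # factor estimate already reflects the partial data flow.
      mod = sm.tsa.DynamicFactor(X, k_factors=1, factor_order=1)
      res = mod.fit(disp=False)
      print(np.asarray(res.factors.filtered).ravel()[-1])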

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.