nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒08‒27
fourteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Simulation error in maximum likelihood estimation of discrete choice models By Mikołaj Czajkowski; Wiktor Budziński
  2. A Smooth Nonparametric, Multivariate, Mixed-Data Location-Scale Test By Jeffrey Racine; Ingrid Van Keilegom
  3. Quantile Treatment Effects in Regression Discontinuity Designs with Covariates By Yu-Chin Hsu; Chung-Ming Kuan; Giorgio Teng-Yu Lo
  4. Fundamentos de Econometría Espacial Aplicada By Herrera Gómez, Marcos
  5. Estimation of linear and non-linear indicators using interval censored income data By Walter, Paul; Groß, Markus; Schmid, Timo; Tzavidis, Nikos
  6. Bootstrap and Asymptotic Inference with Multiway Clustering By James G. MacKinnon; Morten Ørregaard Nielsen; Matthew D. Webb
  7. Identification of SVAR Models by Combining Sign Restrictions With External Instruments By Robin Braun; Ralf Brüggemann
  8. Estimating Counterfactual Treatment Effects to Assess External Validity By Yu-Chin Hsu; Tsung-Chih Lai; Robert P. Lieli
  9. Quantile Treatment Effects in Difference in Differences Models with Panel Data By Brantly Callaway; Tong Li
  10. Empirical Performance of GARCH Models with Heavy-tailed Innovations By Guo, Zi-Yi
  11. Conditional moment restrictions and the role of density information in estimated structural models By Andreas Tryphonides
  12. New Approaches to Prediction using Functional Data Analysis By Laha, A. K.; Rathi, Poonam
  13. Monotonicity Test for Local Average Treatment Effects Under Regression Discontinuity By Yu-Chin Hsu; Shu Shen
  14. Evidence on News Shocks under Information Deficiency By Nelimarkka, Jaakko

  1. By: Mikołaj Czajkowski (Faculty of Economic Sciences, University of Warsaw); Wiktor Budziński (Faculty of Economic Sciences, University of Warsaw)
    Abstract: Maximum simulated likelihood is the preferred estimator of most researchers who deal with discrete choice. It allows estimation of models such as mixed multinomial logit (MXL), generalized multinomial logit, or hybrid choice models, which have now become the state-of-practice in the microeconometric analysis of discrete choice data. All these models require simulation-based solving of multidimensional integrals, which can lead to several numerical problems. In this study, we focus on one of these problems – utilizing from 100 to 1,000,000 draws, we investigate the extent of the simulation bias resulting from using several different types of draws: (1) pseudo random numbers, (2) modified Latin hypercube sampling, (3) randomized scrambled Halton sequence, and (4) randomized scrambled Sobol sequence. Each estimation is repeated up to 1,000 times. The simulations use several artificial datasets based on an MXL data generating process with different numbers of individuals (400, 800, 1200), different numbers of choice tasks per respondent (4, 8, 12) and different experimental designs (D-optimal, D-efficient for the MNL and D-efficient for the MXL model). Our large-scale simulation study allows for comparisons and drawing conclusions with respect to (1) how efficient different types of quasi Monte Carlo simulation methods are and (2) how many draws one should use to make sure the results are of “satisfying” quality – under different experimental conditions. Our study is the first to date to offer such a comprehensive comparison.
    Keywords: discrete choice, mixed logit, simulated maximum log-likelihood function, simulation error, draws, quasi Monte Carlo methods, MLHS, Halton, Sobol, number of draws
    JEL: C15 C51 C63
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2017-18&r=ecm
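As an illustration of what is being compared in the entry above, the following sketch contrasts pseudo-random draws with randomized scrambled Halton and Sobol sequences when simulating a mixed logit choice probability. It is not the authors' code: the one-attribute, two-alternative design, the N(0.5, 1) random coefficient, and the number of draws are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc

def simulated_prob(x_chosen, x_other, mu, sigma, u):
    """Average logit probability over draws of one normally distributed coefficient."""
    beta = mu + sigma * norm.ppf(u)                # transform uniform draws to N(mu, sigma^2)
    v1, v0 = np.exp(beta * x_chosen), np.exp(beta * x_other)
    return np.mean(v1 / (v1 + v0))

R = 1024                                           # number of draws
pseudo = np.random.default_rng(1).random(R)                        # (1) pseudo-random
halton = qmc.Halton(d=1, scramble=True, seed=1).random(R).ravel()  # (3) scrambled Halton
sobol = qmc.Sobol(d=1, scramble=True, seed=1).random(R).ravel()    # (4) scrambled Sobol

for name, u in [("pseudo-random", pseudo), ("Halton", halton), ("Sobol", sobol)]:
    print(name, round(simulated_prob(1.0, 0.0, mu=0.5, sigma=1.0, u=u), 6))
```

In a maximum simulated likelihood routine this simulated probability enters the log-likelihood of every individual, so the draw type and the number of draws directly control the simulation error that the paper quantifies.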
  2. By: Jeffrey Racine; Ingrid Van Keilegom
    Abstract: A number of tests have been proposed for assessing the location-scale assumption that is often invoked by practitioners. Existing approaches include Kolmogorov-Smirnov and Cramer-von-Mises statistics that each involve measures of divergence between unknown joint distribution functions and products of marginal distributions. In practice, the unknown distribution functions embedded in these statistics are approximated using non-smooth empirical distribution functions. We demonstrate how replacing the non-smooth distributions with their kernel-smoothed counterparts can lead to substantial power improvements. In so doing, we extend existing approaches to the multivariate and mixed continuous-and-discrete data setting, thereby broadening their reach. Theoretical underpinnings are provided, Monte Carlo simulations are undertaken to assess finite-sample performance, and illustrative applications are provided.
    Keywords: Kernel Smoothing, Kolmogorov-Smirnov, Inference
    JEL: C14
    Date: 2017–08–16
    URL: http://d.repec.org/n?u=RePEc:mcm:deptwp:2017-13&r=ecm
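A minimal sketch of the ingredient the abstract above emphasizes: replacing the step-function empirical distribution function with a kernel-smoothed one, F_h(x) = n^{-1} sum_i Phi((x - X_i)/h). The Gaussian kernel, the rule-of-thumb bandwidth, and the univariate toy sample are assumptions; the paper's actual tests compare joint distributions with products of marginals in the multivariate mixed-data case.

```python
import numpy as np
from scipy.stats import norm

def ecdf(sample, x):
    """Ordinary (non-smooth) empirical distribution function on a grid x."""
    return np.mean(sample[None, :] <= x[:, None], axis=1)

def smoothed_cdf(sample, x, h):
    """Kernel-smoothed distribution function with a Gaussian kernel and bandwidth h."""
    return np.mean(norm.cdf((x[:, None] - sample[None, :]) / h), axis=1)

rng = np.random.default_rng(0)
sample = rng.normal(size=200)
grid = np.linspace(-3, 3, 61)
h = 1.06 * sample.std() * len(sample) ** (-1 / 5)   # rule-of-thumb bandwidth

# Kolmogorov-Smirnov-type distance of each estimator from the true N(0,1) CDF
print("max |ECDF - Phi|     :", np.abs(ecdf(sample, grid) - norm.cdf(grid)).max())
print("max |smoothed - Phi| :", np.abs(smoothed_cdf(sample, grid, h) - norm.cdf(grid)).max())
```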
  3. By: Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan; Department of Finance National Central University); Chung-Ming Kuan (Department of Finance & CRETA National Taiwan University); Giorgio Teng-Yu Lo (Department of Economics National Tsing Hua University)
    Abstract: Estimating treatment effect heterogeneity conditional on a covariate has become an important research direction for regression discontinuity (RD) designs. In this paper, we go beyond the conditional average treatment effect and propose a new method to estimate the conditional quantile treatment effect (CQTE) in RD designs. The proposed CQTE estimator is based on local quantile regression and hence remains local to the cutoff. Moreover, it captures the relation between treatment effects and the covariate directly at each quantile of the outcome variable. The estimated CQTE thus allows us to assess treatment effect heterogeneity with respect to the covariate and the outcome variable simultaneously. As such, the proposed procedure renders a more comprehensive picture of treatment effects in RD designs than conventional methods.
    Keywords: Conditional treatment effect, Local quantile regression, Quantile treatment effect, Regression discontinuity design
    JEL: C13 C21
    Date: 2017–08
    URL: http://d.repec.org/n?u=RePEc:sin:wpaper:17-a009&r=ecm
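A simplified sketch of the sharp-RD building block underlying the entry above: a quantile treatment effect at the cutoff obtained from local linear quantile regressions fitted separately on each side of the threshold. It is illustrative only, not the authors' estimator; the bandwidth, simulated data, and variable names are assumptions, and the paper's contribution of conditioning on covariates is omitted.

```python
import numpy as np
import statsmodels.api as sm

def rd_qte(y, running, cutoff, tau, bandwidth):
    """QTE(tau) at the cutoff from side-by-side local linear quantile regressions."""
    intercepts = {}
    for side, keep in [("left", running < cutoff), ("right", running >= cutoff)]:
        keep &= np.abs(running - cutoff) <= bandwidth
        X = sm.add_constant(running[keep] - cutoff)     # local linear in the running variable
        intercepts[side] = sm.QuantReg(y[keep], X).fit(q=tau).params[0]  # quantile at the cutoff
    return intercepts["right"] - intercepts["left"]

rng = np.random.default_rng(0)
running = rng.uniform(-1, 1, 2000)
treated = (running >= 0).astype(float)
y = 1.0 * treated + running + rng.normal(size=2000) * (1 + 0.5 * treated)  # effect varies across quantiles

for tau in (0.25, 0.5, 0.75):
    print(tau, round(rd_qte(y, running, cutoff=0.0, tau=tau, bandwidth=0.5), 3))
```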
  4. By: Herrera Gómez, Marcos
    Abstract: The growing availability of geo-referenced information calls for specialized econometric tools such as those developed in spatial econometrics. This branch of econometrics is devoted to the analysis of heterogeneity and spatial dependence in regression models. In this paper, I review the most consolidated developments in the area concerning the specification and interpretation of spatial dependence in cross-sectional and panel data models. The paper concludes with two classic empirical examples.
    Keywords: Space-time models; Spatial dependence; Spatial weights matrix.
    JEL: C12 C21 C23
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:80871&r=ecm
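As a concrete reminder of the central object in the models surveyed above, the following sketch builds a rook-contiguity spatial weights matrix on a 3x3 grid, row-standardizes it, and computes Moran's I for a toy variable. The grid, the data, and the choice of diagnostic are illustrative assumptions, not material from the paper.

```python
import numpy as np

n_side = 3
n = n_side * n_side
W = np.zeros((n, n))
for i in range(n):
    r, c = divmod(i, n_side)
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:   # rook contiguity: share an edge
        rr, cc = r + dr, c + dc
        if 0 <= rr < n_side and 0 <= cc < n_side:
            W[i, rr * n_side + cc] = 1.0
W = W / W.sum(axis=1, keepdims=True)                    # row-standardization

rng = np.random.default_rng(0)
y = rng.normal(size=n)
z = y - y.mean()
moran_I = (n / W.sum()) * (z @ W @ z) / (z @ z)         # Moran's I statistic
print("Moran's I:", round(moran_I, 3))
```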
  5. By: Walter, Paul; Groß, Markus; Schmid, Timo; Tzavidis, Nikos
    Abstract: Among a variety of small area estimation methods, one popular approach for the estimation of linear and non-linear indicators is the empirical best predictor. However, parameter estimation using standard maximum likelihood methods is not possible when the dependent variable of the underlying nested error regression model is censored to specific intervals, as is often the case for income variables. Therefore, this work proposes an estimation method that enables the estimation of the regression parameters of the nested error regression model using interval censored data. The introduced method is based on the stochastic expectation maximization algorithm. Since the stochastic expectation maximization method relies on Gaussian assumptions for the error terms, transformations are incorporated into the algorithm to handle departures from normality. The estimation of the mean squared error of the empirical best predictors is facilitated by a parametric bootstrap that captures the additional uncertainty coming from the interval censored dependent variable. The proposed method is validated by extensive model-based simulations.
    Keywords: small area estimation,empirical best predictor,nested error regression model,grouped data
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:zbw:fubsbe:201722&r=ecm
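A minimal sketch of the stochastic EM idea described above, simplified to a plain linear regression rather than the paper's nested error regression model: in the stochastic E-step the interval-censored dependent variable is drawn from a normal distribution truncated to its reported interval, and in the M-step the regression is re-fitted on the completed data. The simulated data, income classes, and parameter names are assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y_latent = 1.0 + 2.0 * x + rng.normal(scale=1.5, size=n)
bins = np.array([-np.inf, -2, 0, 2, 4, np.inf])          # interval-censored "income classes"
lower = bins[np.digitize(y_latent, bins) - 1]            # observed lower bound
upper = bins[np.digitize(y_latent, bins)]                # observed upper bound

beta, sigma = np.zeros(2), 1.0
X = np.column_stack([np.ones(n), x])
for _ in range(50):                                      # SEM iterations
    mu = X @ beta
    a, b = (lower - mu) / sigma, (upper - mu) / sigma
    y_draw = truncnorm.rvs(a, b, loc=mu, scale=sigma, random_state=rng)  # stochastic E-step
    beta, *_ = np.linalg.lstsq(X, y_draw, rcond=None)                    # M-step
    sigma = np.std(y_draw - X @ beta)
print(np.round(beta, 2), round(sigma, 2))
```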
  6. By: James G. MacKinnon (Queen's University); Morten Ørregaard Nielsen (Queen's University); Matthew D. Webb (Carleton University)
    Abstract: We study a cluster-robust variance estimator (CRVE) for regression models with clustering in two dimensions that was proposed in Cameron, Gelbach, and Miller (2011). We prove that this CRVE is consistent and yields valid inferences under precisely stated assumptions about moments and cluster sizes. We then propose several wild bootstrap procedures and prove that they are asymptotically valid. Simulations suggest that bootstrap inference tends to be much more accurate than inference based on the t distribution, especially when there are few clusters in at least one dimension. An empirical example confirms that bootstrap inferences can differ substantially from conventional ones.
    Keywords: clustered data, cluster-robust variance estimator, CRVE, wild bootstrap, wild cluster bootstrap, two-way clustering
    JEL: C15 C21 C23
    Date: 2017–08
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1386&r=ecm
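A minimal sketch of the two-way cluster-robust variance matrix studied above: per Cameron, Gelbach, and Miller (2011), it is the sum of the two one-way CRVEs minus the CRVE clustered on the intersection of the two dimensions. The simulated data, the omission of small-sample corrections, and the omission of the wild bootstrap step are simplifications.

```python
import numpy as np

def one_way_crve(X, resid, cluster_ids):
    """Sandwich variance with scores summed within clusters."""
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster_ids):
        s = X[cluster_ids == g].T @ resid[cluster_ids == g]
        meat += np.outer(s, s)
    return bread @ meat @ bread

rng = np.random.default_rng(0)
n, G, H = 1000, 20, 20
g = rng.integers(G, size=n)
h = rng.integers(H, size=n)
x = rng.normal(size=n) + rng.normal(size=G)[g]           # regressor correlated within g-clusters
y = 1.0 + 0.5 * x + rng.normal(size=G)[g] + rng.normal(size=H)[h] + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

V = (one_way_crve(X, resid, g) + one_way_crve(X, resid, h)
     - one_way_crve(X, resid, g * H + h))                # minus the intersection clustering
print("two-way cluster s.e. of the slope:", round(np.sqrt(V[1, 1]), 4))
```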
  7. By: Robin Braun (Department of Economics, University of Konstanz, Germany); Ralf Brüggemann (Department of Economics, University of Konstanz, Germany)
    Abstract: We identify structural vector autoregressive (SVAR) models by combining sign restrictions with information in external instruments and proxy variables. We incorporate the proxy variables by augmenting the SVAR with equations that relate them to the structural shocks. Our modeling framework allows us to identify different shocks simultaneously, using either sign restrictions or an external instrument approach, while always ensuring that all shocks are orthogonal. The combination of restrictions can also be used to identify a single shock. This entails discarding models that imply structural shocks with no close relation to the external proxy time series, which narrows down the set of admissible models. Our approach nests the pure sign restriction case and the pure external instrument case. We discuss full Bayesian inference, which accounts for both model and estimation uncertainty. We illustrate the usefulness of our method in SVARs analyzing oil market and monetary policy shocks. Our results suggest that combining sign restrictions with proxy variable information is a promising way to sharpen results from SVAR models.
    Keywords: Structural vector autoregressive model, sign restrictions, external instruments
    JEL: C32 C11 E32 E52
    Date: 2017–08–11
    URL: http://d.repec.org/n?u=RePEc:knz:dpteco:1707&r=ecm
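A schematic accept/reject sketch of the combination described above (not the authors' Bayesian procedure): candidate impact matrices are random rotations of a Cholesky factor of the reduced-form covariance, and a candidate is kept only if it satisfies the sign restrictions and implies a first structural shock that is sufficiently correlated with an external proxy. The simulated data, the sign pattern, and the 0.2 correlation threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 300, 3
e = rng.normal(size=(T, k))                               # true structural shocks
A = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [-0.3, 0.2, 1.0]])
u = e @ A.T                                               # reduced-form residuals
proxy = e[:, 0] + 0.5 * rng.normal(size=T)                # external instrument for shock 1
Sigma = np.cov(u, rowvar=False)
P = np.linalg.cholesky(Sigma)

accepted = 0
for _ in range(2000):
    Q, _ = np.linalg.qr(rng.normal(size=(k, k)))          # random orthonormal rotation
    B = P @ Q                                             # candidate impact matrix
    shocks = u @ np.linalg.inv(B).T                       # implied structural shocks
    signs_ok = B[0, 0] > 0 and B[1, 0] > 0                # sign restrictions on shock 1's impact
    proxy_ok = abs(np.corrcoef(shocks[:, 0], proxy)[0, 1]) > 0.2
    accepted += signs_ok and proxy_ok
print(accepted, "of 2000 candidate models accepted")
```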
  8. By: Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Tsung-Chih Lai (Department of Economics Feng Chia University); Robert P. Lieli (Department of Economics Central European University)
    Abstract: We propose statistical methods for assessing the external validity of treatment effect estimates obtained in a specific status-quo environment. In particular, we estimate counterfactual quantile treatment effects that would obtain if one were to change the composition of the population targeted by the status-quo treatment. Assuming unconfoundedness, and the invariance of the conditional distributions of the potential outcomes, the parameter of interest is identified and can be nonparametrically estimated by a kernel-based method. Viewed as a random function over the continuum of quantile indices, the estimator converges weakly to a zero mean Gaussian process at the parametric rate. Exploiting this result, we propose a multiplier bootstrap procedure to construct uniform confidence bands. We provide similar results for the counterfactually treated subpopulation and the average effect. As an application, we estimate the counterfactual quantile treatment effect of the Job Corps training program in the U.S. under various scenarios. The results suggest that strong economic conditions and the skill hypotheses both help explain the earlier finding in the literature that the program was ineffective at low quantiles of the earnings distribution.
    Keywords: counterfactual analysis, external validity, program evaluation, multiplier bootstrap, Job Corps
    JEL: C13 C31 J24 J30
    Date: 2017–08
    URL: http://d.repec.org/n?u=RePEc:sin:wpaper:17-a011&r=ecm
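A minimal sketch of the multiplier bootstrap idea mentioned above, applied to the simplest process available, the empirical CDF over a grid, rather than to the paper's counterfactual quantile treatment effect process. Each replicate perturbs the influence functions with i.i.d. standard normal multipliers; the supremum over the grid gives a uniform critical value. The grid, sample, and number of replications are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 500, 999
x = rng.normal(size=n)
grid = np.linspace(-2.5, 2.5, 101)
indic = (x[:, None] <= grid[None, :]).astype(float)       # 1{X_i <= t} over the grid
F_hat = indic.mean(axis=0)
infl = indic - F_hat                                      # influence functions of F_hat(t)

sups = np.empty(B)
for b in range(B):
    xi = rng.standard_normal(n)                           # multipliers
    sups[b] = np.abs(xi @ infl / np.sqrt(n)).max()        # sup of the perturbed process
crit = np.quantile(sups, 0.95)
band = crit / np.sqrt(n)
print("95% uniform band half-width:", round(band, 4))     # band: F_hat(t) +/- band
```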
  9. By: Brantly Callaway (Department of Economics, Temple University); Tong Li (Department of Economics, Vanderbilt University)
    Abstract: This paper considers identification and estimation of the Quantile Treatment Effect on the Treated (QTT) under a straightforward distributional extension of the most commonly invoked mean Difference in Differences assumption used for identifying the Average Treatment Effect on the Treated (ATT). Identification of the QTT is more complicated than that of the ATT, however, because it depends on the unknown dependence between the change in untreated potential outcomes and the initial level of untreated potential outcomes for the treated group. To address this issue, we introduce a new Copula Stability Assumption, which says that this missing dependence is constant over time. Under this assumption, and when panel data are available, the missing dependence can be recovered and the QTT is identified. We also allow for identification to hold only after conditioning on covariates and provide very simple estimators based on propensity score re-weighting for this case. We use our method to estimate the effect of increasing the minimum wage on quantiles of local labor markets' unemployment rates and find significant heterogeneity.
    Keywords: Quantile Treatment Effect on the Treated, Difference in Differences, Copula, Panel Data, Propensity Score Re-weighting
    JEL: C14 C20 C23
    Date: 2017–08
    URL: http://d.repec.org/n?u=RePEc:tem:wpaper:1701&r=ecm
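A hedged sketch of the propensity-score re-weighting building block mentioned above, not the paper's full QTT estimator (which also relies on the Copula Stability Assumption across periods): untreated observations are re-weighted by p(x)/(1 - p(x)) so that their covariate distribution matches the treated group, and quantiles of the re-weighted outcome are computed. The data and the logit propensity model are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
d = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)  # treatment probability depends on x
y = x + rng.normal(size=n)                                # outcome of the untreated state

p = sm.Logit(d, sm.add_constant(x)).fit(disp=0).predict() # estimated propensity scores
w = np.where(d == 0, p / (1 - p), 0.0)                    # re-weight untreated units
w /= w.sum()

# weighted quantiles of the untreated outcome, re-weighted to the treated population
order = np.argsort(y[d == 0])
cum = np.cumsum(w[d == 0][order])
for tau in (0.25, 0.5, 0.75):
    print(tau, round(y[d == 0][order][np.searchsorted(cum, tau)], 3))
```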
  10. By: Guo, Zi-Yi
    Abstract: We introduce a new type of heavy-tailed distribution, the normal reciprocal inverse Gaussian distribution (NRIG), to the GARCH and Glosten-Jagannathan-Runkle (1993) GARCH models, and compare its empirical performance with two other popular heavy-tailed distributions, the Student's t distribution and the normal inverse Gaussian distribution (NIG), using a variety of asset return series. Our results illustrate that no distribution is overwhelmingly dominant in fitting the data under the GARCH framework, although the NRIG distribution performs slightly better than the other two. For market index series, it is important to introduce both GJR terms and the NRIG distribution to improve model performance, but the evidence is ambiguous for individual stock price series. Our results also show that the GJR-GARCH-NRIG model has practical advantages in quantitative risk management. Finally, the convergence of numerical solutions in maximum-likelihood estimation of GARCH and GJR-GARCH models with the three types of heavy-tailed distribution is investigated.
    Keywords: Heavy-tailed distribution,GARCH model,Model comparison,Numerical solution
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:zbw:esprep:167626&r=ecm
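A minimal sketch, not the paper's code, of the kind of heavy-tailed-versus-Gaussian comparison described above: a GARCH(1,1) log-likelihood under normal and standardized Student's t innovations, maximized numerically on simulated returns. The NRIG density itself is not implemented here; the data and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, t

def negloglik(params, r, dist):
    omega, alpha, beta = params[:3]
    if min(omega, alpha, beta) <= 0 or alpha + beta >= 1:
        return 1e10                                        # penalize inadmissible parameters
    sig2 = np.empty(len(r))
    sig2[0] = r.var()
    for i in range(1, len(r)):                             # GARCH(1,1) variance recursion
        sig2[i] = omega + alpha * r[i - 1] ** 2 + beta * sig2[i - 1]
    z = r / np.sqrt(sig2)
    if dist == "normal":
        ll = norm.logpdf(z) - 0.5 * np.log(sig2)
    else:                                                  # Student's t standardized to unit variance
        nu = params[3]
        if nu <= 2.1:
            return 1e10
        scale = np.sqrt((nu - 2) / nu)
        ll = t.logpdf(z / scale, df=nu) - np.log(scale) - 0.5 * np.log(sig2)
    return -ll.sum()

rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=2000) * 0.01                 # toy heavy-tailed return series
fit_n = minimize(negloglik, [1e-5, 0.05, 0.9], args=(r, "normal"), method="Nelder-Mead")
fit_t = minimize(negloglik, [1e-5, 0.05, 0.9, 8.0], args=(r, "t"), method="Nelder-Mead")
print("logL normal:", round(-fit_n.fun, 1), " logL Student-t:", round(-fit_t.fun, 1))
```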
  11. By: Andreas Tryphonides
    Abstract: While incomplete models are desirable due to their robustness to misspecification, they cannot be used to conduct full information exercises, i.e. counterfactual experiments and predictions. Moreover, the performance of the corresponding GMM estimators is fragile in small samples. To deal with both issues, we propose the use of an auxiliary conditional model for the observables f(X|Z, φ), where the equilibrium conditions E(m(X, θ)|Z) = 0 are imposed on f(X|Z, φ) using information projections, and (θ, φ) are estimated jointly. We provide the asymptotic theory for the parameter estimates for a general set of conditional projection densities, under correct and local misspecification of f(X|Z, φ). In either case, efficiency gains are significant. We provide simulation evidence on the mean squared error (MSE) under both local and fixed density misspecification and apply the method to the prototypical stochastic growth model. Moreover, we illustrate that, given the joint estimates of (θ, φ), it is now feasible to conduct counterfactual experiments without explicitly solving for the equilibrium law of motion.
    Keywords: Incomplete models, Information projections, Small Samples, Shrinkage
    JEL: C13 C14 E10
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2017-016&r=ecm
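A stylized sketch of the information-projection step described above: the KL-closest distribution to a baseline that satisfies a moment restriction E[m(X, θ)] = 0 has exponentially tilted weights proportional to exp(λ·m(x_i, θ)). Here the baseline is the empirical distribution of a toy sample and the restriction pins the mean to zero; the unconditional (rather than conditional) moment and all names are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, size=500)                         # sample whose mean is not zero
m = x                                                     # moment function m(x, theta) = x

def tilted_mean(lam):
    """Mean of m under exponentially tilted weights with tilting parameter lam."""
    w = np.exp(lam * m)
    w /= w.sum()
    return w @ m

lam = brentq(tilted_mean, -10, 10)                        # solve E_w[m] = 0
w = np.exp(lam * m)
w /= w.sum()
kl = (w * np.log(w * len(x))).sum()                       # KL divergence from the empirical weights
print("tilted mean:", round(w @ m, 6), " KL to empirical:", round(kl, 4))
```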
  12. By: Laha, A. K.; Rathi, Poonam
    Abstract: In this paper we address the problem of prediction with functional data. We discuss several new methods for predicting the future values of a partially observed curve when the data can be assumed to come from an underlying Gaussian process. When the underlying process can be assumed to be stationary with a powered exponential covariance function, we suggest two new predictors and compare their performance. In some real-life situations the data may come from a mixture of two stationary Gaussian processes; we introduce three new methods of prediction for this case and compare their performance. When the data come from a non-stationary process, we propose a modification of the powered exponential covariance function and study the performance of the three predictors mentioned above using three real-life data sets. The results indicate that the KM-Predictor, in which the training data are clustered using the K-means algorithm before prediction, can be used in several real-life situations.
    Date: 2017–08–10
    URL: http://d.repec.org/n?u=RePEc:iim:iimawp:14576&r=ecm
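A minimal sketch (not the authors' predictors) of the basic building block above: predicting the unobserved tail of a curve from its observed part under a Gaussian process with powered exponential covariance K(s, t) = sigma^2 exp(-(|s - t|/rho)^p), using the conditional mean of a multivariate normal. The curve, hyperparameters, and noise level are illustrative assumptions.

```python
import numpy as np

def powered_exp_cov(s, t, sigma2=1.0, rho=0.3, p=1.5):
    """Powered exponential covariance between grids s and t."""
    return sigma2 * np.exp(-(np.abs(s[:, None] - t[None, :]) / rho) ** p)

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
t_obs, t_new = grid[:35], grid[35:]                       # observed part and future part
f = np.sin(2 * np.pi * grid)                              # underlying curve
y_obs = f[:35] + 0.05 * rng.normal(size=35)               # noisy partial observation

K_oo = powered_exp_cov(t_obs, t_obs) + 0.05 ** 2 * np.eye(35)
K_no = powered_exp_cov(t_new, t_obs)
y_pred = K_no @ np.linalg.solve(K_oo, y_obs)              # conditional (kriging) mean
print("RMSE on the unobserved segment:", round(np.sqrt(np.mean((y_pred - f[35:]) ** 2)), 3))
```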
  13. By: Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Shu Shen (Department of Economics University of California, Davis)
    Abstract: This paper proposes nonparametric monotonicity tests for the (local) average treatment effects under the sharp and fuzzy regression discontinuity designs. The tests allow researchers to examine whether the policy effect of interest has monotonic relationships with conditioning covariates. We show the consistency and asymptotic uniform size control of the proposed tests. The proposed tests are applied to re-investigate the impact of attending a better high school using the Romanian data set studied in Pop-Eleches and Urquiola (2013). We find that the effect of going to a better school on a student’s probability of taking the Baccalaureate exam increases monotonically with peer quality of the school.
    Keywords: average treatment effect, local average treatment effect, regression discontinuity, regression monotonicity, nonparametric
    Date: 2017–08
    URL: http://d.repec.org/n?u=RePEc:sin:wpaper:17-a010&r=ecm
  14. By: Nelimarkka, Jaakko
    Abstract: News shocks about future productivity can be correctly inferred from a conventional VAR model only if the information contained in the observables is rich enough. This paper examines news shocks by means of a noncausal VAR model that recovers economic shocks from both past and future variation. As noncausality is implied by nonfundamentalness, the model solves the problem of insufficient information per se. The impulse responses derived from the model show how variables react to anticipated structural shocks, which are identified by exploiting the future dependence of investment on productivity. In the U.S. economy, news about improving total factor productivity moves investment and stock prices on impact, but these responses are likely affected by a parallel increase in productivity. The news shock gradually diffuses to productivity and generates smooth reactions of forward-looking variables.
    Keywords: News shocks, Structural VAR analysis, Nonfundamentalness, Noncausal VAR
    JEL: C18 C32 C53 E32
    Date: 2017–08–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:80850&r=ecm

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.