nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒05‒07
eleven papers chosen by
Sune Karlsson
Orebro University

  1. Non-parametric methods for circular-circular and circular-linear By José Antonio Carnicero; Michael P. Wiper; Concepción Ausín
  2. Hierarchical shrinkage priors for dynamic regressions with many predictors By Korobilis, Dimitris
  3. Generalized Jackknife Estimators of Weighted Average Derivatives By Matias D. Cattaneo; Richard K. Crump; Michael Jansson
  4. Quantile Regression with Censoring and Endogeneity By Victor Chernozhukov; Iván Fernández-Val; Amanda E. Kowalski
  5. Identification of Panel Data Models with Endogenous Censoring By Khan, Shakeeb; Ponomareva, Maria; Tamer, Elie
  6. Multi-period credit default prediction with time-varying covariates. By Orth, Walter
  7. Proving causal relationships using observational data By Bryant, Henry; Bessler, David
  8. Evaluating Individual and Mean Non-Replicable Forecasts By Chia-Lin Chang; Philip Hans Franses; Michael McAleer
  9. Estimating Non-linear Weather Impacts on Corn Yield—A Bayesian Approach By Tian Yu; Bruce A. Babcock
  10. Measuring Technical Efficiency of Dairy Farms with Imprecise Data: A Fuzzy Data Envelopment Analysis Approach By Mugera, Amin
  11. Beyond baseline and follow-up : the case for more t in experiments By McKenzie, David

  1. By: José Antonio Carnicero; Michael P. Wiper; Concepción Ausín
    Abstract: We present a non-parametric approach for the estimation of the bivariate distribution of two circular variables and the modelling of the joint distribution of a circular and a linear variable. We combine nonparametric estimates of the marginal densities of the circular and linear components with the use of a class of nonparametric copulas, known as empirical Bernstein copulas, to model the dependence structure. We derive the necessary conditions to obtain continuous distributions defined on the cylinder for the circular-linear model and on the torus for the circular-circular model. We illustrate these two approaches with two sets of real environmental data.
    Keywords: Bernstein polynomials, Circular distributions, Circular-Circular data, Circular-linear data, Copulas, Non-parametric estimation
    Date: 2011–04
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws110704&r=ecm
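    A minimal sketch for readers unfamiliar with Bernstein copulas (a standard definition, not taken from the paper): given an empirical copula C_n computed from the ranks of the data, the Bernstein copula of order m smooths it with Bernstein polynomials,
        C_m(u,v) = \sum_{j=0}^{m} \sum_{k=0}^{m} C_n\!\left(\tfrac{j}{m},\tfrac{k}{m}\right) \binom{m}{j} u^{j}(1-u)^{m-j} \binom{m}{k} v^{k}(1-v)^{m-k}, \qquad (u,v) \in [0,1]^2.
    For circular components, the continuity conditions derived in the paper are, roughly, requirements that the resulting density matches where the endpoints of the circular support are glued together, so that the distribution is well defined on the cylinder or the torus.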
  2. By: Korobilis, Dimitris
    Abstract: This paper builds on a simple unified representation of shrinkage Bayes estimators based on hierarchical Normal-Gamma priors. Various popular penalized least squares estimators for shrinkage and selection in regression models can be recovered using this single hierarchical Bayes formulation. Using 129 U.S. macroeconomic quarterly variables for the period 1959–2010, I exhaustively evaluate the forecasting properties of Bayesian shrinkage in regressions with many predictors. Results show that for particular data series hierarchical shrinkage dominates factor model forecasts, and hence it becomes a valuable addition to existing methods for handling large dimensional data.
    Keywords: Forecasting; shrinkage; factor model; variable selection; Bayesian LASSO
    JEL: C53 C63 C52 C22 E37 C11
    Date: 2011–04–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:30380&r=ecm
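    As a generic sketch of the kind of hierarchy the abstract refers to (a common Normal-Gamma parameterization, not necessarily the paper's exact one), each regression coefficient receives
        \beta_j \mid \psi_j \sim N(0, \psi_j), \qquad \psi_j \sim \mathrm{Gamma}(a, b), \qquad j = 1, \dots, p,
    so that the marginal prior on \beta_j is peaked at zero with heavier-than-normal tails; the special case a = 1 (exponential mixing) gives the Laplace marginal prior associated with the Bayesian LASSO.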
  3. By: Matias D. Cattaneo (Department of Economics, University of Michigan); Richard K. Crump (Federal Reserve Bank of New York); Michael Jansson (University of California, Berkeley and CREATES)
    Abstract: With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic linearity of the estimator is established under weak conditions. Indeed, we show that the bandwidth conditions employed are necessary in some cases. A bias-corrected version of the estimator is proposed and shown to be asymptotically linear under yet weaker bandwidth conditions. Consistency of an analog estimator of the asymptotic variance is also established. To establish the results, a novel result on uniform convergence rates for kernel estimators is obtained.
    Keywords: Semiparametric estimation, bias correction, uniform consistency.
    JEL: C14 C21
    Date: 2011–04–08
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-12&r=ecm
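    For reference, in standard notation (not specific to this paper), the weighted average derivative estimand is
        \theta_0 = E\left[ w(X)\, \partial_x g(X) \right], \qquad g(x) = E[Y \mid X = x],
    estimated by plugging a kernel estimate into this expression or its integration-by-parts form. The generalized jackknife idea, broadly, is to take a linear combination of such plug-in estimators computed with different bandwidths or kernel orders so that leading smoothing-bias terms cancel, which is what permits the weaker bandwidth conditions mentioned above.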
  4. By: Victor Chernozhukov; Iván Fernández-Val; Amanda E. Kowalski
    Abstract: In this paper, we develop a new censored quantile instrumental variable (CQIV) estimator and describe its properties and computation. The CQIV estimator combines Powell's (1986) censored quantile regression (CQR) to deal semiparametrically with censoring, with a control variable approach to incorporate endogenous regressors. The CQIV estimator is obtained in two stages that are nonadditive in the unobservables. The first stage estimates a nonadditive model with infinite dimensional parameters for the control variable, such as a quantile or distribution regression model. The second stage estimates a nonadditive censored quantile regression model for the response variable of interest, including the estimated control variable to deal with endogeneity. For computation, we extend the algorithm for CQR developed by Chernozhukov and Hong (2002) to incorporate the estimation of the control variable. We give generic regularity conditions for asymptotic normality of the CQIV estimator and for the validity of resampling methods to approximate its asymptotic distribution. We verify these conditions for quantile and distribution regression estimation of the control variable. We illustrate the computation and applicability of the CQIV estimator with numerical examples and an empirical application on estimation of Engel curves for alcohol.
    JEL: C14
    Date: 2011–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:16997&r=ecm
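    In schematic form (suppressing details of the paper's notation), the setup combines a censoring equation with a first-stage control variable:
        Y = \max\{Y^{*}, C\}, \qquad Q_{Y^{*} \mid D, X, V}(\tau) = x'\beta(\tau) \ \text{with } x \text{ built from } (D, X, V), \qquad V = F_{D \mid Z, X}(D \mid Z, X),
    where D is the endogenous regressor, Z the instruments and C the censoring point; the first stage estimates V by quantile or distribution regression, and the second stage runs a censored quantile regression of Y on (D, X, \hat{V}).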
  5. By: Khan, Shakeeb; Ponomareva, Maria; Tamer, Elie
    Abstract: This paper analyzes the identification question in censored panel data models, where the censoring can depend on both observable and unobservable variables in arbitrary ways. Under some general conditions, we derive the tightest sets on the parameter of interest. These sets (which can be singletons) represent the limit of what one can learn about the parameter of interest given the model and the data in that every parameter that belongs to these sets is observationally equivalent to the true parameter. We consider two separate sets of assumptions, motivated by the previous literature, each controlling for unobserved heterogeneity with an individual specific (fixed) effect. The first imposes a stationarity assumption on the unobserved disturbance terms, along the lines of Manski (1987) and Honoré (1993). The second is a nonstationary model that imposes a conditional independence assumption. For both models, we provide sufficient conditions for these models to point identify the parameters. Since our identified sets are defined through parameters that obey first order dominance, we outline easily implementable approaches to build confidence regions based on recent advances in Linton et al. (2010) on bootstrapping tests of stochastic dominance. We also extend our results to dynamic versions of the censored panel models in which we consider lagged observed, latent dependent variables and lagged censoring indicator variables as regressors.
    Keywords: Endogenous Censoring; Conditional Stochastic Dominance; Censored Panel Models.
    JEL: C24 C01
    Date: 2011–04–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:30373&r=ecm
  6. By: Orth, Walter
    Abstract: In credit default prediction models, the need to deal with time-varying covariates often arises. For instance, in the context of corporate default prediction a typical approach is to estimate a hazard model by regressing the hazard rate on time-varying covariates like balance sheet or stock market variables. If the prediction horizon covers multiple periods, this leads to the problem that the future evolution of these covariates is unknown. Consequently, some authors have proposed a framework that augments the prediction problem by covariate forecasting models. In this paper, we present simple alternatives for multi-period prediction that avoid the burden of specifying and estimating a model for the covariate processes. In an application to North American public firms, we show that the proposed models deliver high out-of-sample predictive accuracy.
    Keywords: Credit default; multi-period predictions; hazard models; panel data; out-of-sample tests
    JEL: C53 C41 G32
    Date: 2011–03–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:30507&r=ecm
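    To see why time-varying covariates complicate multi-period prediction (generic discrete-time notation, not the paper's), note that conditional on survival to the forecast origin t,
        P(\text{default by } t + H \mid \text{alive at } t) = 1 - \prod_{s=1}^{H} \bigl(1 - h(t + s \mid x_{t+s})\bigr),
    so a hazard h(\cdot \mid x) driven by time-varying covariates involves the unknown future values x_{t+1}, \dots, x_{t+H}; the alternatives proposed in the paper are set up so that only covariate values observed at the forecast origin are needed.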
  7. By: Bryant, Henry; Bessler, David
    Abstract: We describe a means of rejecting a null hypothesis concerning observed, but not deliberately manipulated, variables of the form H0: A -/-> B in favor of an alternative hypothesis HA: A --> B, even given the possibility of causally related unobserved variables. Rejection of such an H0 relies on the availability of two observed and appropriately related instrumental variables. While the researcher will have limited control over the confidence level in this test, simulation results suggest that type I errors occur with a probability of less than 0.15 (often substantially less) across a wide range of circumstances. The power of the test is limited if there are but few observations available and the strength of correspondence among the variables is weak. We demonstrate the method by testing a hypothesis with critically important policy implications relating to a possible cause of childhood malnourishment.
    Keywords: causality, Monte Carlo, observational data, hypothesis testing, Research Methods/Statistical Methods
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ags:aaea11:103238&r=ecm
  8. By: Chia-Lin Chang; Philip Hans Franses; Michael McAleer (University of Canterbury)
    Abstract: Macroeconomic forecasts are often based on the interaction between econometric models and experts. A forecast that is based only on an econometric model is replicable and may be unbiased, whereas a forecast that is not based only on an econometric model, but also incorporates expert intuition, is non-replicable and is typically biased. In this paper we propose a methodology to analyze the quality of individual and mean non-replicable forecasts. One part of the methodology seeks to retrieve a replicable component from the non-replicable forecasts, and compares this component against the actual data. A second part modifies the estimation routine to reflect the assumption that the difference between a replicable and a non-replicable forecast involves measurement error. An empirical example to forecast economic fundamentals for Taiwan shows the relevance of the methodological approach using both individual and mean forecasts.
    Keywords: Individual forecasts; mean forecasts; efficient estimation; generated regressors; replicable forecasts; non-replicable forecasts; expert intuition
    JEL: C53 C22 E27 E37
    Date: 2011–04–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:11/16&r=ecm
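    In stripped-down notation (not the authors'), the idea can be summarized by writing the non-replicable forecast as
        F_t = R_t + \nu_t,
    where R_t is the replicable, model-based component and \nu_t the expert adjustment. The first part of the methodology retrieves an estimate of R_t from F_t and compares it with the realized outcome; treating \nu_t as measurement error is what motivates the modified estimation routine in the second part, along errors-in-variables and generated-regressor lines.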
  9. By: Tian Yu; Bruce A. Babcock (Center for Agricultural and Rural Development (CARD))
    Abstract: We estimate impacts of rainfall and temperature on corn yields by fitting a linear spline model with endogenous thresholds. Using Gibbs sampling and the Metropolis-Hastings algorithm, we simultaneously estimate the thresholds and other model parameters. A hierarchical structure is applied to capture county-specific factors determining corn yields. Results indicate that impacts of both rainfall and temperature are nonlinear and asymmetric in most states. Yield is concave in both weather variables. Corn yield decreases significantly when temperature increases beyond a certain threshold, and when the amount of rainfall decreases below a certain threshold. Flooding is another source of yield loss in some states. A moderate amount of heat is beneficial to corn yield in northern states, but not in other states. Both the levels of the thresholds and the magnitudes of the weather effects are estimated to be different across states in the Corn Belt.
    Keywords: Bayesian estimation, Gibbs sampler, hierarchical structure, Metropolis-Hastings algorithm, non-linear
    JEL: C11 C13 Q10 Q54
    Date: 2011–04
    URL: http://d.repec.org/n?u=RePEc:ias:cpaper:11-wp522&r=ecm
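    For illustration only (the paper's specification involves several weather variables and a county-level hierarchy), a linear spline in temperature with an unknown threshold can be written as
        y_{it} = \alpha_i + \beta_1 \min(T_{it}, \tau) + \beta_2 \max(T_{it} - \tau, 0) + \varepsilon_{it},
    where the threshold \tau is treated as a parameter and sampled together with the coefficients, for example with Gibbs steps for (\alpha_i, \beta_1, \beta_2) and a Metropolis-Hastings step for \tau.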
  10. By: Mugera, Amin
    Abstract: This article integrates fuzzy set theory into the Data Envelopment Analysis (DEA) framework to compute technical efficiency scores when input and output data are imprecise. The underlying assumption in conventional DEA is that input and output data are measured with precision. However, production agriculture takes place in an uncertain environment and, in some situations, input and output data may be imprecise. We present an approach to measuring efficiency when data are known to lie within specified intervals and empirically illustrate this approach using a group of 34 dairy producers in Pennsylvania. Compared to the conventional DEA scores that are point estimates, the computed fuzzy efficiency scores allow the decision maker to trace the performance of a decision-making unit at different possibility levels.
    Keywords: fuzzy set theory, Data Envelopment Analysis, membership function, α-cut level, technical efficiency, Farm Management, Production Economics, Productivity Analysis, Research Methods/Statistical Methods, Risk and Uncertainty
    JEL: D24 Q12 C02 C44 C61
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ags:aaea11:103251&r=ecm
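    For readers unfamiliar with the α-cut device (generic definition, not specific to the paper): a fuzzy input or output with membership function \mu is reduced, at each possibility level \alpha \in [0, 1], to the interval
        [x]_{\alpha} = \{ x : \mu(x) \ge \alpha \},
    and solving a pair of DEA programs at the favorable and unfavorable endpoints of these intervals yields lower and upper efficiency bounds at that level, which is what allows a decision-making unit's performance to be traced across possibility levels.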
  11. By: McKenzie, David
    Abstract: The vast majority of randomized experiments in economics rely on a single baseline and single follow-up survey. If multiple follow-ups are conducted, the reason is typically to examine the trajectory of impact effects, so that in effect only one follow-up round is being used to estimate each treatment effect of interest. While such a design is suitable for the study of highly autocorrelated and relatively precisely measured outcomes in the health and education domains, this paper makes the case that it is unlikely to be optimal for measuring noisy and relatively less autocorrelated outcomes such as business profits, household incomes and expenditures, and episodic health outcomes. Taking multiple measurements of such outcomes at relatively short intervals allows the researcher to average out noise, increasing power. When the outcomes have low autocorrelation, it can make sense to do no baseline at all. Moreover, the author shows how, for such outcomes, more power can be achieved with multiple follow-ups than by allocating the same total sample size to a single baseline and follow-up. The analysis highlights the large gains in power from ANCOVA rather than difference-in-differences when autocorrelations are low and a baseline is taken. The paper discusses the issues involved in multiple measurements, and makes recommendations for the design of experiments and related non-experimental impact evaluations.
    Keywords: Scientific Research & Science Parks, Science Education, Statistical & Mathematical Sciences, Disease Control & Prevention, Economic Theory & Research
    Date: 2011–04–01
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:5639&r=ecm
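    A rough way to see the power comparisons above (a textbook-style simplification assuming a common variance \sigma^2, equicorrelation \rho across survey rounds, and n units per arm, not the paper's full treatment): with one baseline and one follow-up, the variance of the estimated treatment effect is approximately
        \mathrm{Var}_{\mathrm{POST}} \approx \frac{2\sigma^2}{n}, \qquad \mathrm{Var}_{\mathrm{DiD}} \approx \frac{4\sigma^2(1-\rho)}{n}, \qquad \mathrm{Var}_{\mathrm{ANCOVA}} \approx \frac{2\sigma^2(1-\rho^2)}{n},
    so ANCOVA dominates difference-in-differences for any \rho < 1 and a baseline adds little when \rho is small; averaging m follow-up rounds replaces \sigma^2 by \sigma^2\,(1 + (m-1)\rho)/m, which falls quickly in m when \rho is low.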

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.