nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒08‒07
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. A New Estimator for Multivariate Binary Data By Fu, Shengfei; Shonkwiler, John Scott
  2. Estimation of Yield Densities: A Bayesian Nonparametric Perspective By Wang, Yang; Annan, Francis
  3. Consistent Estimation in Large Heterogeneous Panels with Multifactor Structure and Endogeneity By Giovanni Forchini; Bin Jiang; Bin Peng
  4. The Economics of Exclusion Restrictions in IV Models By Damon Jones
  5. Imputing for Missing Data in the ARMS Household Section: A Multivariate Imputation Approach By Burns, Christopher; Prager, Daniel; Ghosh, Sujit; Goodwin, Barry
  6. Duality theory econometrics: How reliable is it with real-world data? By Rosas, Juan Francisco; Lence, Sergio H.
  7. Effects of restrictions on parameter estimates of US agricultural production By Plastina, Alejandro; Lence, Sergio H.
  8. Estimating US Crop Supply Model Elasticities Using PMP and Bayesian Analysis By Hudak, Michael
  9. Estimating Recreation Demand When Survey Responses are Rounded By Page, Ian B.; Lichtenberg, Erik; Saavoss, Monica
  10. A common factor of stochastic volatilities between oil and commodity prices By Lee, Eunhee; Han, Doo Bong; Ito, Shoichi; Rodolfo M. Nayga, Jr
  11. Improving the reliability of self-reported attribute non-attendance behaviour through the use of polytomous attendance scales By Yuan, Yuan; You, Wen; Boyle, Kevin J.
  12. Inducing Hypothetical Bias Mitigation with Ten Commandments By Lim, Kar Ho; Grebitus, Carola; Hu, Wuyang; Nayga, Rodolfo M. Jr.
  13. A generalized latent class logit model of discontinuous preferences in repeated discrete choice data: an application to mosquito control in Madison, Wisconsin By Brown, Zachary S.; Dickinson, Katherine L.; Paskewitz, Susan
  14. On the Examination of the Reliability of Statistical Software for Estimating Logistic Regression Models By Bergtold, Jason S.; Pokharel, Krishna; Featherstone, Allen
  15. Understanding policy rates at the zero lower bound: insights from a Bayesian shadow rate model By Marcello Pericoli; Marco Taboga

  1. By: Fu, Shengfei; Shonkwiler, John Scott
    Abstract: This study proposes a new estimator for multivariate binary response data, treating binary responses as being generated from a truncated multivariate discrete distribution. Specifically, the discrete normal probability mass function, which has support on all integers, is extended to a multivariate form. Truncating this probability mass function below zero and above one results in the multivariate binary discrete normal distribution, which has a number of attractive properties. Monte Carlo simulations and empirical applications are performed to illustrate the properties of this new estimator; comparisons are made to the traditional multivariate probit model.
    Keywords: Multivariate binary response, discrete normal distribution, Multivariate Probit, Agricultural and Food Policy, Consumer/Household Economics, Environmental Economics and Policy, Institutional and Behavioral Economics, Marketing, Research Methods/ Statistical Methods, B23, Q13, D1,
    Date: 2015
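The construction described in the abstract can be sketched directly: mass proportional to a multivariate normal density at integer points, truncated to {0,1}^2 and renormalized. This is a minimal illustration; the mean and covariance values are invented, not taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def binary_discrete_normal_pmf(mu, cov):
    """Joint pmf of two binary responses under the truncated discrete normal.

    The discrete normal puts mass proportional to the normal density at each
    integer point; truncating the support to {0,1}^2 and renormalizing gives
    a joint distribution for multivariate binary data.
    """
    points = [(0, 0), (0, 1), (1, 0), (1, 1)]
    weights = np.array([multivariate_normal.pdf(pt, mean=mu, cov=cov)
                        for pt in points])
    return dict(zip(points, weights / weights.sum()))

# Illustrative location and dependence parameters
pmf = binary_discrete_normal_pmf(mu=[0.3, 0.6], cov=[[1.0, 0.5], [0.5, 1.0]])
```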
  2. By: Wang, Yang; Annan, Francis
    Abstract: The pricing of crop insurance products hinges crucially on accurate estimation of the underlying yield densities. Multiple estimation methods have been examined in the literature, but the search for better candidates remains important. Here we propose and examine a Bayesian nonparametric model, based on Dirichlet processes, for yield estimation. We apply the proposed model to county-level cotton yield data from Texas. We then examine the implications of our modeling framework for the pricing of Group Risk Plan (GRP) insurance, compared to a nonparametric kernel-type model.
    Keywords: Crop Insurance, Bayesian nonparametrics, Dirichlet processes, Yield Density, Research Methods/ Statistical Methods, Risk and Uncertainty, C11, Q18,
    Date: 2015
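The flavor of the approach can be illustrated with a (truncated) Dirichlet-process mixture of Gaussians as implemented in scikit-learn; the simulated two-regime yield data and all tuning values below are hypothetical stand-ins for the county-level cotton yields analyzed in the paper.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical yields: a "normal year" regime and a "drought year" regime
yields = np.concatenate([rng.normal(800, 60, 300), rng.normal(500, 90, 60)])

# Truncated Dirichlet-process mixture of Gaussians as a flexible density model
dpm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,                    # DP concentration
    max_iter=500, random_state=0,
).fit(yields.reshape(-1, 1))

# Density estimate over a grid of yield values
grid = np.linspace(300, 1000, 200).reshape(-1, 1)
density = np.exp(dpm.score_samples(grid))
```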
  3. By: Giovanni Forchini (University of Surrey); Bin Jiang (Monash University); Bin Peng (University of Technology, Sydney)
    Abstract: The set-up considered by Pesaran (Econometrica, 2006) is extended to allow for endogenous explanatory variables. A class of instrumental variables estimators is studied and it is shown that estimators in this class are consistent and asymptotically normally distributed as both the cross-section and time-series dimensions tend to infinity.
    Date: 2015–07
  4. By: Damon Jones
    Abstract: We explore a key underlying assumption, the exclusion restriction, commonly used when interpreting IV estimates in the presence of heterogeneous treatment effects as a local average treatment effect (LATE). We show through a series of simple examples that in some commonly featured cases this assumption is likely to be violated among inframarginal agents, i.e., the always- and never-takers. This violation of the exclusion restriction will generally confound the LATE interpretation of the associated IV results. We discuss potential adjustments to IV estimates in the presence of this bias.
    JEL: C1 C26 C36
    Date: 2015–07
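The mechanism can be seen in a small simulation: when the instrument has a direct effect on the outcome (a violated exclusion restriction), the Wald/IV estimand no longer recovers the treatment effect. The data-generating process below is a generic illustration, not one of the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
beta, gamma = 2.0, 0.5                  # true effect; direct effect of z on y

z = rng.binomial(1, 0.5, n)             # binary instrument
u = rng.normal(0, 1, n)                 # unobserved confounder
d = (0.5 * z + u + rng.normal(0, 1, n) > 0).astype(float)  # treatment take-up

def wald_iv(y, d, z):
    """Wald/IV estimand: cov(y, z) / cov(d, z)."""
    return np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]

y_clean = beta * d + u + rng.normal(0, 1, n)   # exclusion restriction holds
y_viol = y_clean + gamma * z                   # z also shifts y directly

iv_clean = wald_iv(y_clean, d, z)   # close to beta
iv_viol = wald_iv(y_viol, d, z)     # inflated by gamma / (first stage)
```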
  5. By: Burns, Christopher; Prager, Daniel; Ghosh, Sujit; Goodwin, Barry
    Abstract: This study proposes a new method to impute ordinal missing data in the household section of the Agricultural Resource Management Survey (ARMS). We extend a multivariate imputation method known as Iterative Sequential Regression (ISR) and use cut points to transform ordinal variables into continuous variables for imputation. The household section contains important economic information on the well-being of the farm operator's household, asking respondents about off-farm income, household expenditures, and off-farm debt and assets. Currently, the USDA's Economic Research Service (ERS) uses conditional mean imputation in the household section, a method known to bias the variance of imputed variables downward and to distort multivariate relationships. The new transformation of these variables allows them to be jointly modeled with other ARMS variables using a Gaussian copula. A conditional linear model for imputation is then built using correlation analysis and economic theory. Finally, we discuss a Monte Carlo study that randomly deletes values from the ARMS data to test the robustness of the proposed method. This allows us to assess how well the adapted ISR imputation method performs in comparison with two other missing data strategies: conditional mean imputation and complete case analysis.
    Keywords: ARMS, missing data, ordinal data, ISR, imputation, Markov Chain Monte Carlo, Agricultural Finance, Consumer/Household Economics, Research Methods/ Statistical Methods, C18, C15, C55,
    Date: 2015
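One ingredient of the approach, mapping an ordinal variable to cut points on a latent standard-normal scale so it can enter a Gaussian copula, can be sketched as follows. The four-level variable and its cut points are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def ordinal_cutpoints(x, n_levels):
    """Estimate cut points on a latent N(0,1) scale from category frequencies.

    This is the kind of transformation that lets an ordinal variable be
    modeled jointly with continuous variables through a Gaussian copula.
    """
    freqs = np.bincount(x, minlength=n_levels) / len(x)
    return norm.ppf(np.cumsum(freqs)[:-1])   # interior cut points only

rng = np.random.default_rng(1)
# Hypothetical 4-level ordinal variable (e.g., an expenditure class)
latent = rng.normal(size=5000)
true_cuts = np.array([-1.0, 0.0, 1.2])
x = np.searchsorted(true_cuts, latent)       # observed categories 0..3

est_cuts = ordinal_cutpoints(x, 4)           # close to true_cuts
```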
  6. By: Rosas, Juan Francisco; Lence, Sergio H.
    Abstract: The Neoclassical theory of production establishes a dual relationship between the profit value function of a competitive firm and its underlying production technology. This relationship, commonly referred to as duality theory, has been widely used in empirical work to estimate production parameters without explicitly specifying the technology. We analyze the ability of this approach to recover the underlying production parameters. We generate the data by Monte Carlo simulation, so that the true technology parameters are known. Employing widely used datasets, we calibrate the data generating process to yield a dataset featuring important characteristics of U.S. agriculture. We compare the estimated production parameters with the true (and known) parameters by means of the identities between the Hessians of the production and profit functions. We conclude that, when the dataset contains minimal noise, duality theory is able to recover the true parameters with reasonable accuracy, and that when it is applied to time series arising from an aggregation of technologically heterogeneous firms, the recovered parameters are close to those of the firm at the median of the distribution. The proposed calibration sets the basis for analyzing the performance of duality theory approaches when datasets used by practitioners are subject to other observed and unobserved sources of noise.
    Keywords: duality theory, firm’s heterogeneity, data aggregation, Monte Carlo simulations, elasticities, Agricultural and Food Policy, Demand and Price Analysis, Production Economics,
    Date: 2015
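The dual relationship being exploited can be checked symbolically in a toy example: for a known technology, derivatives of the profit function (Hotelling's lemma) recover output supply and input demand without re-solving the primal problem. The square-root technology below is purely illustrative.

```python
import sympy as sp

p, w, x = sp.symbols("p w x", positive=True)

# Primal problem: technology y = sqrt(x), profit = p*sqrt(x) - w*x
profit = p * sp.sqrt(x) - w * x
x_star = sp.solve(sp.diff(profit, x), x)[0]     # optimal input demand
pi = sp.simplify(profit.subs(x, x_star))        # profit function: p**2/(4*w)

# Dual side (Hotelling's lemma): differentiate the profit function
y_supply = sp.diff(pi, p)    # output supply, recovered from pi alone
x_demand = -sp.diff(pi, w)   # input demand, recovered from pi alone
```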
  7. By: Plastina, Alejandro; Lence, Sergio H.
    Abstract: The economic theory of producer behavior requires certain conditions to hold in order for a functional form to be representative of a production technology. Agricultural production studies are usually conducted using classical econometrics that do not allow for the imposition of curvature conditions in flexible functional forms. Therefore, some conditions required by economic theory do not hold globally in estimation. Some studies report the proportion of the sample for which curvature conditions do not hold, and the reader is warned about the unknown distorting effects that those data points might have on the final results. Bayesian methods allow for the imposition of first- and second-order restrictions in the estimation of flexible functional forms. We estimate a flexible representation of the US agricultural production technology using Bayesian econometrics under alternative sets of restrictions, and elaborate on the effects of the restrictions on the pdfs of the parameter estimates.
    Keywords: Agricultural production, Bayesian estimation, monotonicity, concavity, flexible functional forms, generalized quadratic, Production Economics, Productivity Analysis, Research Methods/ Statistical Methods, C5, C510, Q1,
    Date: 2015
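One simple way to impose curvature on posterior draws, rejecting any draw whose Hessian is not negative semidefinite, can be sketched as follows. This is a generic illustration of restriction-by-rejection, not necessarily the scheme used in the paper.

```python
import numpy as np

def is_concave(H, tol=1e-10):
    """Curvature check: all eigenvalues of the symmetrized Hessian <= 0."""
    return bool(np.all(np.linalg.eigvalsh((H + H.T) / 2) <= tol))

rng = np.random.default_rng(7)
# Hypothetical unrestricted posterior draws of a 2x2 Hessian block,
# centered on a concave matrix but perturbed by sampling noise
center = np.array([[-1.0, 0.2], [0.2, -1.0]])
draws = center + rng.normal(scale=0.5, size=(2000, 2, 2))

# Impose concavity by rejection: keep only curvature-satisfying draws
restricted = [H for H in draws if is_concave(H)]
share_kept = len(restricted) / len(draws)
```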
  8. By: Hudak, Michael
    Abstract: This paper examines an innovative and practical way to model the supply of agricultural crops, extending the Positive Mathematical Programming (PMP) technique developed by Howitt (1995) using Bayesian estimation. A key problem in the use of the PMP model is the relative difficulty of finding calibrating parameters such that the first- and second-order conditions are satisfied, with the added difficulty that many of the conditions that need to be satisfied are not exactly known. Bayesian analysis is therefore a useful tool for determining these parameters. By employing a Markov chain Monte Carlo (MCMC) algorithm, specifically a Metropolis-Hastings algorithm, a posterior distribution for the calibrating parameters can be found such that the resulting supply model not only reproduces an optimum close to observed acreages, but also produces reasonable elasticities due to the prior information. The value of this style of estimation for a crop supply model lies in the limited amount of data needed to estimate the model.
    Keywords: agricultural supply analysis, mathematical programming models, Bayesian econometrics, US agricultural, Land Economics/Use, Production Economics, C11, C60, Q10,
    Date: 2015
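The Metropolis-Hastings building block referred to above can be sketched in a few lines; the toy log-posterior for an elasticity-like parameter is purely illustrative.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_draws=20_000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler for a scalar parameter."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        prop = x + step * rng.normal()            # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # MH acceptance rule
            x, lp = prop, lp_prop
        draws[i] = x                              # repeat x if rejected
    return draws

# Toy log-posterior: N(1.5, 0.2^2) for a calibrating elasticity parameter
log_post = lambda t: -0.5 * ((t - 1.5) / 0.2) ** 2
draws = metropolis_hastings(log_post, x0=0.0)
post = draws[5000:]                               # drop burn-in
```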
  9. By: Page, Ian B.; Lichtenberg, Erik; Saavoss, Monica
    Abstract: Recall data on consumption of cigarettes, alcohol, fresh fruits and vegetables; visits to recreational sites, doctors’ offices, and local businesses; household expenditures; and individuals’ perceived probabilities of future events often contain reported numbers that appear to be rounded to nearby focal points (e.g., the closest 5 or 10). Failure to address this rounding has been shown to produce biased estimates of marginal effects and willingness to pay. We investigate the relative performance of three count data models used with data of the kind typically found in recreation demand studies. We create a dataset based on observed recreational trip counts and associated trip costs that exhibits substantial rounding. We then conduct a Monte Carlo simulation exercise to compare estimated parameters, the average partial effect of an increase in trip cost, and average consumer surplus per trip for three alternative estimators: a standard Poisson model with no adjustment for rounding, a censored Poisson model, and the grouped Poisson model. The standard Poisson model with no adjustment for rounding exhibits significant, persistent bias, especially in estimates of average consumer surplus per trip. The grouped Poisson, in contrast, shows only slight biases and none at all in estimates of average consumer surplus per trip.
    Keywords: Consumer/Household Economics, Environmental Economics and Policy,
    Date: 2015–05–27
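The grouped-Poisson idea, treating a reported count rounded to the nearest 5 as standing for the whole interval of true counts that round to it, can be sketched as follows. The data are simulated, not the paper's trip counts.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
true_lam = 12.0
trips = rng.poisson(true_lam, 5000)          # latent true trip counts
reported = 5 * np.round(trips / 5)           # responses rounded to nearest 5

def grouped_nll(lam):
    """Grouped-Poisson negative log-likelihood: each reported value stands
    for the interval of true counts {r-2, ..., r+2} that round to it."""
    lo = np.maximum(reported - 2, 0)
    hi = reported + 2
    p = poisson.cdf(hi, lam) - poisson.cdf(lo - 1, lam)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))  # guard against underflow

lam_grouped = minimize_scalar(grouped_nll, bounds=(1, 50), method="bounded").x
lam_naive = reported.mean()   # point estimate ignoring the interval structure
```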
  10. By: Lee, Eunhee; Han, Doo Bong; Ito, Shoichi; Rodolfo M. Nayga, Jr
    Abstract: This paper analyzes multivariate stochastic volatilities with a common factor affecting the volatilities of both crude oil and agricultural commodity prices, in biofuel and non-biofuel uses. We develop a stochastic volatility model with a latent common volatility that has two asymptotic regimes and a smooth transition between them. In contrast with conventional volatility models, the stochastic volatilities in this study are generated by a logistic transformation of latent factors consisting of two components: the common volatility factor and an idiosyncratic component. We estimate the model for oil, corn, and wheat prices from August 8, 2005 to October 10, 2014 using a Markov chain Monte Carlo (MCMC) method, recovering the stochastic volatilities and extracting the common factor. Our results suggest that the volatilities of the oil and grain markets are very persistent, because the common factor generating them is highly persistent. In addition, the volatility of oil prices is driven more by the common factor, while the volatility of corn prices is determined more by the idiosyncratic component.
    Keywords: Stochastic Volatility Model, Regime Switching with Smooth Transitions, Common Latent Factor, MCMC, Gibbs sampling, Oil and Grain Prices, Research Methods/ Statistical Methods, Risk and Uncertainty,
    Date: 2015–05
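The core of the specification, volatilities generated by a logistic transformation of a persistent common factor plus idiosyncratic components, can be simulated in a few lines. All parameter values below are illustrative, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 2000
phi = 0.98                          # persistence of the common factor

# Highly persistent common latent factor (AR(1))
f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal(0, 0.1)

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

# Volatilities: logistic transformation of common + idiosyncratic parts,
# with loadings making "oil" more common-factor driven than "corn"
vol_oil = 0.05 * sigmoid(1.5 * f + rng.normal(0, 0.3, T))
vol_corn = 0.05 * sigmoid(0.7 * f + rng.normal(0, 0.6, T))

ret_oil = vol_oil * rng.normal(size=T)    # returns with stochastic volatility
ret_corn = vol_corn * rng.normal(size=T)
```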
  11. By: Yuan, Yuan; You, Wen; Boyle, Kevin J.
    Abstract: The literature in discrete choice modelling is increasingly recognizing the existence of attribute non-attendance, in which respondents ignore some attributes when answering an attribute-based question. This behaviour may present a serious problem for modelling and inference, because it violates fundamental assumptions of the random utility model on which choice models are based. In this study, we elicit attribute non-attendance using a six-point polytomous attendance scale, rather than restricting respondents to a dichotomous ignored/considered response, as in previous studies. Stated non-attendance has been found to be unreliable in previous studies, but polytomous attendance scales have the potential to address the sources of unreliability. Using data from a choice experiment in health economics, this study assesses the performance of a polytomous attendance scale and the consistency between empirical observations and theoretical expectations. We find that the lowest point on the attendance scale is the part of the scale which corresponds best to attribute non-attendance, and that attendance scales longer than two or three points do not provide much additional information. Furthermore, the polytomous attendance scale had limited success in producing theoretically consistent results, suggesting that the potential for polytomous attendance scales to produce more reliable attendance statements was not realized in this study.
    Keywords: choice experiment, choice model, attribute non-attendance, polytomous attendance scale, Health Economics and Policy, Research Methods/ Statistical Methods,
    Date: 2015–07
  12. By: Lim, Kar Ho; Grebitus, Carola; Hu, Wuyang; Nayga, Rodolfo M. Jr.
    Abstract: Although a number of hypothetical bias mitigation methods have been proposed, the problem persists, and the literature continues to debate the effectiveness and practicality of these methods (Loomis, 2011). We propose an easy-to-implement method to mitigate hypothetical bias in choice experiments: asking respondents to recall the Ten Commandments prior to willingness-to-pay elicitation. Our results show that the proposed method exhibits signs of mitigating hypothetical bias.
    Keywords: Hypothetical bias, Ten Commandments, Honesty, Agribusiness, Agricultural and Food Policy, Institutional and Behavioral Economics, Marketing, C18, C90, D12,
    Date: 2015
  13. By: Brown, Zachary S.; Dickinson, Katherine L.; Paskewitz, Susan
    Abstract: Serial nonparticipation in nonmarket valuation using choice data is a pattern of behavior in which an individual always appears to choose the status quo or ‘no program’ alternative. From a choice modelling perspective serial nonparticipation may be viewed as belonging to a class of ‘discontinuous preferences,’ which also includes other behavioral patterns, such as serial participation (never choosing the status quo), as well as lexicographic preferences (e.g. always choosing the alternative with the greatest health benefit). Discontinuous preferences are likely to be especially relevant in the context of environmental goods, due to the lack of familiarity that individuals have with valuing these goods in markets. In the case of discrete choice data, logit-based choice models are ill-equipped for identifying such preferences, because conditional logit choice probabilities cannot take a value of zero or one for any finite parameter estimates. Here we extend latent class choice models to account for discontinuous preferences. Our methodological innovation is to specify for each latent class a subset of alternatives that are avoided with certainty. This results in class membership being partially observable, since we then know with certainty that an individual does not belong to a class if she selects any alternatives avoided by that class. We apply our model to data from a discrete choice experiment on mosquito control programs to reduce West Nile virus risk and nuisance disamenities in Madison, Wisconsin. We find that our ‘generalized latent class model’ (GLCM) outperforms standard latent class models in terms of information criteria metrics, and provides significantly different estimates for willingness-to-pay. We also argue that GLCMs are useful for identifying some alternatives for which valuation estimates may not be identified in a given dataset, thus reducing the risk of invalid inference from discrete choice data.
    Keywords: discrete choice econometrics, latent class models, partial observability, serial nonparticipation, serial participation, discontinuous preferences, E-M algorithm, Environmental Economics and Policy, Institutional and Behavioral Economics, Public Economics, Research Methods/ Statistical Methods, Q51, C35,
    Date: 2015
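The key device, class-specific choice probabilities that place exactly zero mass on a set of avoided alternatives (something a plain conditional logit cannot do for finite parameters), can be sketched with illustrative utilities:

```python
import numpy as np

def class_choice_probs(V, avoided):
    """Conditional logit probabilities for one latent class, with a set of
    alternatives the class avoids with certainty (probability exactly zero).

    V : deterministic utilities for each alternative
    avoided : indices of alternatives this class never chooses
    """
    expV = np.exp(V - V.max())         # stabilized logit weights
    expV[list(avoided)] = 0.0          # hard zero, unreachable in plain logit
    return expV / expV.sum()

# Three alternatives: two programs and a status-quo option (index 2)
V = np.array([0.8, 0.2, 0.0])
p_standard = class_choice_probs(V, avoided=set())
p_serial_nonpart = class_choice_probs(V, avoided={0, 1})  # always status quo
```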
  14. By: Bergtold, Jason S.; Pokharel, Krishna; Featherstone, Allen
    Abstract: The numerical reliability of software packages was examined for the logistic regression model. The software tested includes SAS 9.3, MATLAB R2012a, R 3.1.0, Stata/IC 13.1, and LIMDEP 10.5. Thirty benchmark datasets were created by simulating different conditional binary choice processes. To obtain certified values of parameter estimates and standard errors for the nonlinear logistic regression models used, this study followed the procedures of the National Institute of Standards and Technology for generating certified values. The logarithm of the relative error was used as the measure of accuracy for examining the numerical reliability of these packages.
    Keywords: Benchmark Data Set, Logistic Regression, Maximum Likelihood, Reliability, Software, Research Methods/ Statistical Methods,
    Date: 2015
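The accuracy measure used, the logarithm of the relative error (LRE), is easy to state in code; the coefficient and certified value in the example are invented for illustration.

```python
import math

def log_relative_error(estimate, certified):
    """NIST-style accuracy measure: roughly the number of correct
    significant digits of `estimate` relative to the certified value."""
    rel = abs(estimate - certified) / abs(certified)
    return math.inf if rel == 0 else -math.log10(rel)

# A coefficient agreeing with its certified value to roughly seven digits
lre = log_relative_error(1.7818159, 1.7818162)
```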
  15. By: Marcello Pericoli (Bank of Italy); Marco Taboga (Bank of Italy)
    Abstract: Term structure models are routinely used by central banks to assess the impact of their communication on market participants' views of future interest rate developments. However, recent studies have pointed out that traditional term structure models can provide misleading indications when policy rates are at the zero lower bound (ZLB). One of the main drawbacks is that they are unable to reproduce the stylized fact that policy rates tend to remain at the ZLB for prolonged periods of time once they reach it. A consensus has recently emerged that shadow rate models, first introduced by Black (1995), are apt to solve this problem. The main idea is that the shadow rate (i.e., the short-term interest rate that would prevail in the absence of the ZLB) can move in negative territory for long time spans even when the actual rate remains close to the ZLB. Due to their high nonlinearity, shadow rate models are particularly difficult to estimate and have so far been estimated only with approximate methods. We propose an exact Bayesian method for their estimation. We use it to study developments in euro and US dollar yield curves since the end of the 1990s. Our estimates confirm - and provide a quantitative assessment of - the fact that there has been a significant divergence of monetary policies in the euro area and in the US over the past years: between 2009 and 2013, the shadow rate was much lower in the US than in the euro area, while the opposite has been true since 2014; furthermore, at the end of our sample (January 2015), the most likely date of the first increase in policy rates was estimated to be around mid-2015 in the US and around 2020 in the euro area.
    Keywords: zero lower bound, shadow rate term structure model
    JEL: C32 E43 G12
    Date: 2015–07
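Black's censoring device, the observed policy rate as the shadow rate truncated at the lower bound, can be illustrated with a stylized path (the numbers are invented, not estimates from the paper):

```python
import numpy as np

# A stylized shadow-rate path (percentage points) drifting into negative territory
shadow = np.array([1.5, 0.8, 0.1, -0.6, -1.2, -0.9, -0.3, 0.4])
lower_bound = 0.0

# Black (1995): the observed policy rate is the shadow rate censored at the bound
observed = np.maximum(shadow, lower_bound)

# The observed rate sits at the bound for a prolonged spell while the shadow
# rate moves freely below it -- the stylized fact the model captures
months_at_bound = int((observed == lower_bound).sum())
```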

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.