
on Econometrics 
By:  Hiromasa Tamae (Graduate School of Economics, The University of Tokyo); Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo) 
Abstract:  The paper concerns small-area estimation in the Fay-Herriot type area-level model with random dispersions, which covers the case in which the sampling error variances change from area to area. The resulting Bayes estimator shrinks both means and variances, but requires numerical computation to provide the estimates. In this paper, an approximated empirical Bayes (AEB) estimator with a closed form is suggested. The model parameters are estimated via the method of moments, and the mean squared error of the AEB estimator is estimated via a single parametric bootstrap. A benchmarked estimator and a second-order unbiased estimator of the mean squared error are also derived. 
Date:  2015–07 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2015cf982&r=ecm 
By:  Francine Gresnigt (Erasmus University Rotterdam, the Netherlands); Erik Kole (Erasmus University Rotterdam, the Netherlands); Philip Hans Franses (Erasmus University Rotterdam, the Netherlands) 
Abstract:  We propose various specification tests for Hawkes models based on the Lagrange Multiplier (LM) principle. Hawkes models can be used to model the occurrence of extreme events in financial markets. Our specific testing focus is on extending a univariate model to a multivariate model, that is, we examine whether there is conditional dependence between extreme events across markets. Simulations show that the test has good size and power, in particular for sample sizes that are typically encountered in practice. Applying the specification test for dependence to US stock, bond and exchange rate data, we find strong evidence for cross-excitation within segments as well as between segments. Therefore, we recommend that univariate Hawkes models be extended to account for the cross-triggering phenomenon. 
Keywords:  Hawkes processes; specification tests; extremal dependence; financial crashes 
JEL:  C12 C22 C32 C52 
Date:  2015–07–24 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20150086&r=ecm 
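The Hawkes processes being tested can be simulated with Ogata's thinning algorithm; a minimal univariate sketch (parameter values hypothetical, assuming the common exponential kernel with alpha < beta for stationarity):

```python
import math
import random

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + sum_{t_i < t} alpha*exp(-beta*(t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)

def simulate_hawkes(mu, alpha, beta, horizon, seed=42):
    """Ogata thinning: between events the intensity only decays, so the
    intensity at the current time dominates the intensity at any later
    candidate time and can serve as the proposal rate."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.expovariate(lam_bar)     # candidate inter-arrival time
        if t >= horizon:
            return events
        if rng.random() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events.append(t)              # accepted: an extreme event at time t
```

Each accepted event raises the intensity, which is exactly the self-excitation the LM tests probe for across markets.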
By:  Roberto Casarin (University Ca’ Foscari of Venice, Italy); Stefano Grassi (University of Kent, United Kingdom); Francesco Ravazzolo (Norges Bank and Centre for Applied Macro and Petroleum Economics, Norway); Herman K. van Dijk (Erasmus University Rotterdam, VU University Amsterdam, the Netherlands) 
Abstract:  A Bayesian nonparametric predictive model is introduced to construct time-varying weighted combinations of a large set of predictive densities. A clustering mechanism allocates these densities into a smaller number of mutually exclusive subsets. Using properties of Aitchison's geometry of the simplex, combination weights are defined with a probabilistic interpretation. The class-preserving property of the logistic-normal distribution is used to define a compositional dynamic factor model for the weight dynamics, with latent factors defined on a reduced-dimension simplex. Groups of predictive models with combination weights are updated with parallel clustering and sequential Monte Carlo filters. The procedure is applied to predicting the Standard & Poor's 500 index using more than 7000 predictive densities based on US individual stocks, and finds substantial forecast and economic gains. Similar forecast gains are obtained in point and density forecasting of US real GDP, inflation, Treasury Bill yields and employment using a large data set. 
Keywords:  Density Combination; Large Set of Predictive Densities; Compositional Factor Models; Nonlinear State Space; Bayesian Inference; GPU Computing 
JEL:  C11 C15 C53 E37 
Date:  2015–07–20 
URL:  http://d.repec.org/n?u=RePEc:tin:wpaper:20150084&r=ecm 
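A minimal sketch of how weights with a probabilistic interpretation arise from latent factors on a reduced-dimension simplex, via the inverse additive log-ratio transform of Aitchison's geometry (function names and inputs are illustrative, not the paper's full dynamic factor model):

```python
import math

def alr_inverse(z):
    """Inverse additive log-ratio transform: maps latent factors
    z in R^(K-1) to a point on the K-simplex, i.e. nonnegative
    combination weights that sum to one."""
    ex = [math.exp(v) for v in z] + [1.0]   # last component is the reference
    s = sum(ex)
    return [v / s for v in ex]

def combined_density(weights, density_values):
    """Weighted linear pool of the candidate predictive densities at a point."""
    return sum(w * p for w, p in zip(weights, density_values))

w = alr_inverse([0.0, 0.0])                 # equal latent factors -> equal weights
print(w, combined_density(w, [0.2, 0.5, 0.8]))
```

Modeling the latent factors with (for instance) Gaussian dynamics then induces logistic-normal weight dynamics on the simplex.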
By:  Kaspar Wüthrich 
Abstract:  This paper studies estimation of conditional and unconditional quantile treatment effects based on the instrumental variable quantile regression (IVQR) model (Chernozhukov and Hansen, 2004, 2005, 2006). I introduce a class of semiparametric plug-in estimators based on closed-form solutions derived from the IVQR moment conditions. These estimators do not rely on separability of the structural quantile function, while retaining computational tractability and root-n consistency. Functional central limit theorems and bootstrap validity results are provided for the estimators of the quantile treatment effects and other functionals. I apply my method to reanalyze the effect of 401(k) plans on individual savings behavior. 
Keywords:  instrumental variables; quantile treatment effects; distribution regression; functional central limit theorem; Hadamard differentiability; exchangeable bootstrap 
JEL:  C14 C21 C26 
Date:  2015–07 
URL:  http://d.repec.org/n?u=RePEc:ube:dpvwib:dp1509&r=ecm 
By:  Bergtold, Jason S.; Ramsey, Steven M. 
Abstract:  Estimation of binary choice models typically requires that the econometric model satisfy the utility maximization hypothesis. The most widely used models for this purpose are the binary logit and probit models. To satisfy the utility maximization hypothesis, the logit and probit models must make a priori assumptions about the underlying functional form of a representative utility function. Such a theoretical restriction on a statistical model, imposed without considering the underlying probabilistic structure of the observed data, can leave the postulated estimable model statistically misspecified. Feed-forward back-propagation artificial neural networks (FFBANNs) provide a potentially powerful semi-nonparametric method to avoid such misspecification. This paper shows that a single-hidden-layer FFBANN can be interpreted as a logistic regression with a flexible index function. An empirical application uses FFBANNs to model a contingent valuation study and to estimate marginal effects and willingness-to-pay, with results compared to those from more traditional methods such as the binary logit and probit models. 
Keywords:  Binary Choice, Contingent Valuation, Logistic Regression, Neural Networks, Marginal Effects, Semi-nonparametric, Willingness to Pay, Environmental Economics and Policy, Research Methods/Statistical Methods, Resource/Energy Economics and Policy 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:205649&r=ecm 
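A minimal sketch of the paper's key observation, that a single-hidden-layer FFBANN with a sigmoid output is a logistic regression whose index is a flexible function of the covariates (all names and weights here are hypothetical):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ffbann_prob(x, W_hidden, b_hidden, w_out, b_out):
    """Single-hidden-layer feed-forward net with a sigmoid output:
    P(y=1|x) = sigmoid(index(x)), i.e. a logistic regression whose
    index is built from the hidden-unit activations rather than
    being restricted to a linear function of x."""
    hidden = [sigmoid(sum(wj * xj for wj, xj in zip(row, x)) + bj)
              for row, bj in zip(W_hidden, b_hidden)]
    index = sum(wo * h for wo, h in zip(w_out, hidden)) + b_out
    return sigmoid(index)
```

With a single hidden unit and identity-like weights, this collapses back toward the ordinary logit's linear index.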
By:  Yenerall, Jackie; You, Wen; Davis, George; Estabrooks, Paul 
Abstract:  Missing data in experiments can bias estimates if not appropriately addressed. This is of particular concern in cost-effectiveness analysis, where bias in either the cost or the effect estimate could bias the entire cost-effectiveness estimate. Complicated experimental designs, such as cluster randomized trials (CRTs) or longitudinal data, call for even greater care when addressing missingness. The purpose of this paper is to compare two sample selection models designed to address bias resulting from non-random missingness when applied to a longitudinal CRT. From the statistics literature we consider the Diggle-Kenward model, and from the econometrics literature we consider the Heckman model. Both models are used to analyze the twelve-month outcomes of a worksite weight-loss program, as well as in a simulation experiment. 
Keywords:  cost-effectiveness analysis, missing data, Health Economics and Policy, Research Methods/Statistical Methods 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:205690&r=ecm 
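A minimal sketch of the selection-correction ingredient of Heckman's two-step estimator, the inverse Mills ratio that is appended as a regressor in the outcome equation for the observed sample (pure-stdlib stand-ins for the normal pdf and cdf; the first-step probit fit is not shown):

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def inverse_mills(z):
    """Inverse Mills ratio lambda(z) = phi(z)/Phi(z). In Heckman's
    two-step procedure, z is the fitted probit index from the selection
    equation, and lambda(z) enters the second-step outcome regression
    to correct for non-random selection (here, non-random missingness)."""
    return norm_pdf(z) / norm_cdf(z)
```

A significant coefficient on the inverse Mills ratio term in the second step is evidence that the missingness is indeed non-random.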
By:  Sonoda, Tadashi; Mishra, Ashok 
Keywords:  Production Economics, Research Methods/Statistical Methods 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:205308&r=ecm 
By:  Koutchadé, Philippe; Carpentier, Alain; Féménia, Fabienne 
Abstract:  Accounting for the effects of heterogeneity in microeconometric models has been a major concern in labor economics, empirical industrial organization and trade economics for at least two decades. The microeconometric agricultural production choice models found in the literature, however, largely ignore the impacts of unobserved heterogeneity. This can partly be explained by the dimension of these models, which deal with large choice sets, e.g., acreage choices, input demands and yield supplies. We propose a random parameter framework to account for unobserved heterogeneity in microeconometric agricultural production choice models. This approach allows us to account for unobserved farm and farmer heterogeneity in a fairly flexible way. We estimate a system of yield supply and acreage choice equations with a panel of French crop growers. Our results show that heterogeneity matters significantly in our empirical application and that ignoring the heterogeneity of farmers’ choice processes can have important impacts on simulation outcomes. Due to the dimension of the estimation problem and the functional form of the considered production choice model, the Simulated Maximum Likelihood approach usually considered in the applied econometrics literature in such contexts is empirically intractable. We show that specific versions of the Stochastic Expectation-Maximization algorithms proposed in the statistics literature can be implemented instead. 
Keywords:  Unobserved heterogeneity, random parameter models, agricultural production choices, Farm Management, Land Economics/Use, Production Economics, Research Methods/Statistical Methods, Q12, C13, C15 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:205098&r=ecm 
By:  Binkley, James K.; Pena-Levano, Luis M. 
Abstract:  Logit and probit models are designed to estimate latent variable models. However, these models are sometimes used even though the latent variable is fully observable. The most prominent examples are studies of obesity, where BMI is calculated from two observed variables as the ratio of weight to height squared. BMI is then translated into a binary variable (e.g., obese or not obese), and this index is used to examine factors affecting obesity. This study determines the loss in efficiency of using logit/probit models versus conventional OLS (with unknown variance). We also compare the marginal effects between these models. The results suggest that OLS is more efficient than the logit/probit models at estimating the true coefficients, regardless of the degree of multicollinearity, the fit of the regression, and the cutoff probability. Likewise, OLS provided marginal effects with less bias than either binary response model. We conclude, based on our Monte Carlo simulation, that when the latent variable is observable it is better to regress the continuous value on the explanatory variables than to convert it into a binary variable. 
Keywords:  efficiency, logit, probit, BMI, bias, latent variable, Food Consumption/Nutrition/Food Safety, Research Methods/Statistical Methods, B23, C01, C18, C51 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:205659&r=ecm 
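A small Monte Carlo sketch of why dichotomizing an observable continuous outcome discards information, in the spirit of (not reproducing) the authors' simulation; all parameter values are illustrative:

```python
import random
import statistics

def corr(a, b):
    """Pearson correlation computed with population moments."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

def binarization_loss(n=20000, cutoff=0.0, seed=3):
    """Generate a continuous outcome y driven by x, then binarize it.
    The x-indicator correlation is weaker than the x-y correlation:
    the binary indicator retains only part of the association."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    y = [xi + rng.gauss(0.0, 1.0) for xi in x]       # observable "BMI"-like outcome
    d = [1.0 if yi > cutoff else 0.0 for yi in y]    # "obese or not" indicator
    return corr(x, y), corr(x, d)

print(binarization_loss())
```

This information loss is one intuition behind the abstract's finding that OLS on the continuous variable beats logit/probit on the binarized one.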
By:  Koutchade, Obafèmi Philippe; Carpentier, Alain; Femenia, Fabienne 
Abstract:  Corner solution problems are pervasive in microeconometric acreage choice models because farmers rarely produce the same crop set in a given sample. Acreage choice models suitably accounting for corner solutions need to be specified as Endogenous Regime Switching (ERS) models. Microeconometric ERS models are, however, rarely used in practice because the difficulty of their estimation grows quickly with the dimension of the considered system. Their functional form is generally quite involved, and their likelihood functions need to be integrated using simulation methods in most cases of interest. We present here an ERS model specifically designed for empirically modeling acreage choices with corner solutions. This model is theoretically consistent with acreage choices based on the maximization of a profit function with non-negativity constraints and a total land use constraint. It can be combined with yield supply and variable input demand functions. Furthermore, the model accounts for regime fixed costs, which represent crop-specific marketing and management costs. To our knowledge, this is a unique feature for an ERS model accounting for non-negativity constraints. The proposed ERS model defines a Nested Multinomial Logit (NMNL) acreage choice model for each potential production regime. The regime choice is based on a standard discrete choice model according to which farmers choose the crop subset they produce by comparing the profit levels of the different regimes. The structure of the model and the functional form of its likelihood function make the Simulated Expectation-Maximisation algorithm especially suitable for maximizing the sample likelihood function. The empirical tractability of the model is illustrated by estimating a five-crop production choice model on a sample of French grain crop producers. 
Keywords:  Production Economics, Research Methods/Statistical Methods, Q12, C13, C15 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:206060&r=ecm 
By:  Yuan, Yuan; You, Wen; Boyle, Kevin J. 
Abstract:  Unobserved heterogeneity is popularly modelled using the mixed logit model, so called because it is a mixture of standard conditional logit models. Although the mixed logit model can, in theory, approximate any random utility model given an appropriate mixing distribution, there is little guidance on how to select such a distribution. This study contributes suggestions on distribution selection by describing the heterogeneity features that can be captured by established parametric mixing distributions and by more recently introduced nonparametric mixing distributions, both discrete and continuous. We provide empirical illustrations of each feature in turn, using simple mixing distributions that focus on the feature at hand. 
Keywords:  choice experiment, choice model, mixed logit, random parameters logit, mixing distribution, latent class logit, preference heterogeneity, unobserved heterogeneity, Health Economics and Policy, Research Methods/Statistical Methods 
Date:  2015–07 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:205733&r=ecm 
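A minimal sketch of the mixed logit's defining mixture: conditional logit choice probabilities averaged over draws from the mixing distribution (here a single normally distributed random coefficient; all inputs are hypothetical):

```python
import math
import random

def mixed_logit_prob(x_alts, mean, sd, draws=20000, seed=1):
    """Simulated mixed logit probabilities for one choice situation
    with a scalar attribute per alternative and one random taste
    coefficient b ~ N(mean, sd): average the conditional logit
    probabilities over draws from the mixing distribution."""
    rng = random.Random(seed)
    J = len(x_alts)
    probs = [0.0] * J
    for _ in range(draws):
        b = rng.gauss(mean, sd)               # one draw of the random taste
        ev = [math.exp(b * x) for x in x_alts]
        s = sum(ev)
        for j in range(J):
            probs[j] += ev[j] / s / draws
    return probs
```

Swapping the normal draw for draws from another distribution (lognormal, discrete mass points, etc.) is exactly the mixing-distribution choice the abstract discusses.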
By:  Ker, Alan P.; Tolhurst, Tor; Liu, Yong 
Abstract:  The Agricultural Act of 2014 solidified insurance as the cornerstone of U.S. agricultural policy. The Congressional Budget Office (2014) estimates this Act will increase spending on agricultural insurance programs by $5.7 billion, to a total of $89.8 billion over the next decade. In light of the sizable resources directed toward these programs, accurate rating of insurance contracts is of utmost importance to producers, private insurance companies, and the federal government. Unlike most forms of insurance, where sufficient information exists to accurately estimate the probability and magnitude of losses (i.e., the underlying density), agricultural insurance is plagued by a paucity of spatially correlated data. A novel interpretation of Bayesian Model Averaging is used to estimate a set of possibly similar densities; it offers greater efficiency if the densities are in fact similar, while seemingly losing none if they are dissimilar. Simulations indicate that finite-sample performance, in particular small-sample performance, is quite promising. The proposed approach does not require knowledge of the form or extent of any possible similarities, is relatively easy to implement, admits correlated data, and can be used with either parametric or nonparametric estimators. We use the proposed approach to estimate U.S. crop insurance premium rates for area-type programs and develop a test to evaluate its efficacy. An out-of-sample game between private insurance companies and the federal government highlights the policy implications for a variety of crop-state combinations. We repeat the empirical analyses with reduced sample sizes given that: (i) new programs will dramatically expand area-type insurance to crops and states that have significantly less historical data; and (ii) changes in technology could render some historical loss data no longer representative. Consistent with the simulation results, the performance of the proposed approach in rating area-type insurance, in particular its small-sample performance, remains quite promising. 
Keywords:  rating crop insurance contracts, Bayesian model averaging, multiple density estimation, spatial correlation, small sample estimation, Agribusiness, Research Methods/Statistical Methods, Risk and Uncertainty 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:205211&r=ecm 
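A minimal sketch of Bayesian Model Averaging applied to density estimates: candidates pooled with weights proportional to (exponentiated) log marginal likelihoods. This is the generic BMA recipe, not the authors' specific reinterpretation, and all inputs are illustrative:

```python
import math

def bma_density(y, densities, log_marg_liks):
    """Evaluate the BMA-pooled density at y: a weighted average of
    candidate density estimates, with weights proportional to
    exp(log marginal likelihood), computed stably via a max shift."""
    m = max(log_marg_liks)
    w = [math.exp(l - m) for l in log_marg_liks]
    s = sum(w)
    w = [v / s for v in w]                       # posterior model weights
    return sum(wk * fk(y) for wk, fk in zip(w, densities))
```

When one candidate's marginal likelihood dominates, its weight approaches one and the pool collapses to that single density; when candidates are comparable, the pool borrows strength across them, which is the efficiency gain the abstract describes for similar densities.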
By:  Baylis, Kathy; Ham, Andres 
Abstract:  Randomized controlled trials have become the gold standard for impact evaluation since they provide unbiased estimates of causal effects. This paper studies randomized settings where treatment is assigned over geographical units. We analyze how omitting spatial correlation in outcomes or unobservables affects treatment effect estimates. First, we study spatial dependence in Mexico's Progresa program. Second, we conduct Monte Carlo simulations to generalize our results. Findings reveal that spatial correlation is more relevant than the literature suggests, and may affect both the precision of the estimate and the estimate itself. Existing spatial econometric methods may provide solutions to mitigate the consequences of omitting spatial correlation. 
Keywords:  randomization, spatial correlation, treatment effects, estimation, inference, International Development, Research Methods/Statistical Methods, C15, I38, R58 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:205586&r=ecm 
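A small Monte Carlo sketch of the core point: when treatment is assigned over geographical units whose outcomes are correlated, the naive iid variance formula understates the true sampling variance of the treatment effect estimate (all parameter values are illustrative; correlation here is induced by cluster random effects as a simple stand-in for spatial dependence):

```python
import random
import statistics

def cluster_rct_variance(n_clusters=40, m=25, u_sd=1.0, e_sd=1.0, reps=400, seed=7):
    """Repeatedly simulate an RCT with a zero true effect where
    treatment is assigned at the cluster level. Returns the empirical
    variance of the difference-in-means estimator across replications
    and the naive iid variance formula that ignores the clustering."""
    rng = random.Random(seed)
    ests = []
    for _ in range(reps):
        arm = {0: [], 1: []}
        for c in range(n_clusters):
            u = rng.gauss(0.0, u_sd)              # shared cluster effect -> correlation
            ys = [u + rng.gauss(0.0, e_sd) for _ in range(m)]
            arm[c % 2].extend(ys)                 # half the clusters treated
        ests.append(statistics.mean(arm[1]) - statistics.mean(arm[0]))
    empirical_var = statistics.pvariance(ests)
    n_per_arm = (n_clusters // 2) * m
    naive_var = 2.0 * (u_sd**2 + e_sd**2) / n_per_arm   # iid formula
    return empirical_var, naive_var
```

With these defaults the true variance exceeds the naive formula several times over, which is exactly the precision distortion the abstract warns about.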
By:  Parman, Bryon; Featherstone, Allen; Coffey, Brian 
Abstract:  Nonparametric cost frontier estimation has commonly been used to examine the relative efficiency of firms without critically examining the shape of the cost frontier. Examining that shape has so far required additional estimation using parametric methods to recover potential cost savings from multiproduct and product-specific economies of scale. This paper develops and tests a method for estimating multiproduct and product-specific economies of scale within the nonparametric approach, by evaluating the difference between scale calculations from an assumed cost frontier and those estimated using data envelopment analysis. The results demonstrate that the nonparametric approach is able to accurately estimate multiproduct and product-specific economies of scale under alternative inefficiency distributional assumptions. 
Keywords:  Cost Function, Efficiency Analysis, Multi-Product Economies of Scale, Product-Specific Economies of Scale, Nonparametric Estimation, Economies of Scope, Agribusiness, Crop Production/Industries, Farm Management, Production Economics, Productivity Analysis, Research Methods/Statistical Methods, C13, C14, D20 
Date:  2015–01–15 
URL:  http://d.repec.org/n?u=RePEc:ags:misswp:197532&r=ecm 
By:  Shonkwiler, J. Scott; Barfield, Ashley 
Abstract:  Systematic biases in reporting past behavior may compromise the methods used to derive values from revealed preference data. Recreational survey response data are routinely plagued by three problems: an abundance of zeros due to nonparticipation (the “excess-zero” problem), response “heaping”, and “leaping” of responses (issues resulting from recall bias). To address these issues simultaneously in the discrete data context, we consider several different specifications of the negative binomial estimator of recreation demand. We find that the negative binomial model’s fit is significantly improved by reassigning heaped responses to censored regimes in which the reported trip numbers form the intervals’ upper bounds. To this end, we illustrate how employing the incomplete beta function to represent the cumulative distribution function of the negative binomial distribution simplifies the incorporation of censored intervals. 
Keywords:  Survey response data, excess zeros, recall bias, negative binomial, Research Methods/Statistical Methods, Resource/Energy Economics and Policy 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea15:204868&r=ecm 
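A minimal check of the identity the abstract exploits: the negative binomial CDF equals a regularized incomplete beta function, which makes censored intervals easy to evaluate as differences of beta values (the quadrature routine below is a crude stand-in for a library implementation, adequate only for this check):

```python
import math

def nb_cdf_sum(k, r, p):
    """P(K <= k) for a negative binomial variable K counting failures
    before the r-th success (success probability p), by direct
    summation of the pmf C(j+r-1, j) * (1-p)^j * p^r."""
    return sum(math.comb(j + r - 1, j) * (1.0 - p) ** j * p ** r
               for j in range(k + 1))

def reg_inc_beta(x, a, b, n=20000):
    """Regularized incomplete beta I_x(a, b) by midpoint-rule quadrature."""
    h = x / n
    s = sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (b - 1)
            for i in range(n))
    return s * h * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

# the identity: NB CDF at k equals I_p(r, k+1)
print(nb_cdf_sum(5, 3, 0.4), reg_inc_beta(0.4, 3, 6))
```

A censored interval [a, b] of trip counts then contributes CDF(b) - CDF(a-1) to the likelihood, i.e. a difference of two incomplete beta evaluations.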