
on Econometrics 
By:  Taisuke Otsu (Cowles Foundation, Yale University); Ke-Li Xu (Dept. of Economics, Texas A&M University and University of Alberta School of Business) 
Abstract:  This paper proposes empirical likelihood-based inference methods for causal effects identified from regression discontinuity designs. We consider both the sharp and fuzzy regression discontinuity designs and treat the regression functions as nonparametric. The proposed inference procedures do not require asymptotic variance estimation and the confidence sets have natural shapes, unlike the conventional Wald-type method. These features are illustrated by simulations and an empirical example which evaluates the effect of class size on pupils' scholastic achievements. Bandwidth selection methods, higher-order properties, and extensions to incorporate additional covariates and parametric functional forms are also discussed. 
Keywords:  Empirical likelihood, Nonparametric methods, Regression discontinuity design, Treatment effect 
JEL:  C12 C14 C21 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1799&r=ecm 
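As background to the entry above: the conventional baseline the empirical likelihood method is contrasted with is the local-linear sharp RD estimate. The sketch below is a generic illustration of that baseline only (not the EL procedure); the jump size, bandwidth, and triangular kernel are all illustrative assumptions.

```python
import numpy as np

def local_linear_rd(x, y, cutoff=0.0, h=0.5):
    """Sharp RD effect: difference of local-linear intercepts at the cutoff,
    estimated separately on each side with a triangular kernel."""
    def side_intercept(mask):
        xs, ys = x[mask] - cutoff, y[mask]
        w = np.clip(1.0 - np.abs(xs) / h, 0.0, None)   # triangular kernel weights
        X = np.column_stack([np.ones_like(xs), xs])
        Xw = X * w[:, None]                            # weighted design
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ ys)
        return beta[0]                                 # fitted value at the cutoff
    return side_intercept(x >= cutoff) - side_intercept(x < cutoff)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 4000)
y = 0.5 * x + 2.0 * (x >= 0.0) + rng.normal(0.0, 0.3, x.size)  # true jump = 2
tau_hat = local_linear_rd(x, y)
```

The Wald-type confidence set the abstract criticizes would be built from this point estimate plus an estimated asymptotic variance, which is exactly the step the EL approach avoids.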
By:  Eiji Kurozumi; Khashbaatar Dashtseren 
Abstract:  We develop a new approach to statistical inference in possibly integrated/cointegrated vector autoregressions. Our method builds on two previous approaches: the lag-augmented approach of Toda and Yamamoto (1995) and the artificial autoregressions of Yamamoto (1996). We show that our estimator is asymptotically normally distributed irrespective of whether the variables are stationary or nonstationary, and that the Wald test statistic for parameter restrictions has an asymptotic chi-square distribution. Using this method, we also propose tests for multiple structural changes. We show that our test statistics have the same limiting distributions as in the standard case, irrespective of whether the variables are stationary, purely integrated, or cointegrated. 
Keywords:  multiple breaks, stationary, unit root, cointegration 
JEL:  C12 C13 C32 
Date:  2011–04 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd11187&r=ecm 
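The lag augmentation idea referenced above can be sketched in a toy bivariate case: fit the equation with one extra (untested) lag of each variable, then run a Wald test only on the original lags. This is a generic illustration with assumed lag orders (p = 1 true lag, d = 1 augmented lag), not the authors' full procedure.

```python
import numpy as np

def lag_augmented_wald(y, x, p=1, d=1):
    """Wald statistic for 'lags 1..p of x do not help predict y', estimated in
    an equation augmented with d extra lags of both variables, which are
    included in the regression but excluded from the test."""
    m = p + d
    T = len(y)
    X = np.column_stack(
        [np.ones(T - m)]
        + [y[m - j:T - j] for j in range(1, m + 1)]   # lags 1..m of y
        + [x[m - j:T - j] for j in range(1, m + 1)]   # lags 1..m of x
    )
    yy = y[m:]
    b, *_ = np.linalg.lstsq(X, yy, rcond=None)
    resid = yy - X @ b
    s2 = resid @ resid / (len(yy) - X.shape[1])
    V = s2 * np.linalg.inv(X.T @ X)                   # OLS covariance of b
    idx = [m + j for j in range(1, p + 1)]            # columns of x lags 1..p
    Rb = b[idx]
    return float(Rb @ np.linalg.solve(V[np.ix_(idx, idx)], Rb))  # ~ chi2(p)

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=400))   # independent random walks: no causality
x = np.cumsum(rng.normal(size=400))
w = lag_augmented_wald(y, x)
```

The point of the augmentation is that the statistic keeps its chi-square limit even when, as here, the series are integrated.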
By:  Stefano Grassi (Aarhus University and CREATES); Paolo Santucci de Magistris (Aarhus University and CREATES) 
Abstract:  The finite sample properties of state space methods applied to long memory time series are analyzed through Monte Carlo simulations. The state space setup allows us to introduce a novel modeling approach in the long memory framework, which directly tackles measurement errors and random level shifts. Missing values and several alternative sources of misspecification are also considered. It emerges that the state space methodology provides a valuable alternative for the estimation of long memory models under different data generating processes, which are common in financial and economic series. Two empirical applications highlight the practical usefulness of the proposed state space methods. 
Keywords:  ARFIMA models, Kalman Filter, Missing Observations, Measurement Error, Level Shifts. 
JEL:  C10 C22 C80 
Date:  2011–05–02 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201114&r=ecm 
By:  Adolfson, Malin (Monetary Policy Department, Central Bank of Sweden); Lindé, Jesper (Division of International Finance) 
Abstract:  In this paper, we use Monte Carlo methods to study the small sample properties of the classical maximum likelihood (ML) estimator in artificial samples generated by the New Keynesian open economy DSGE model estimated by Adolfson et al. (2008) with Bayesian techniques. While asymptotic identification tests show that some of the parameters are weakly identified in the model and by the set of observable variables we consider, we document that ML is unbiased and has low MSE for many key parameters if a suitable set of observable variables is included in the estimation. These findings suggest that we can learn a lot about many of the parameters by confronting the model with data, and hence stand in sharp contrast to the conclusions drawn by Canova and Sala (2009) and Iskrev (2008). Encouraged by our results, we estimate the model using classical techniques on actual data, where we use a new simulation-based approach to compute the uncertainty bands for the parameters. From a classical viewpoint, ML estimation leads to a significant improvement in fit relative to the log-likelihood computed with the Bayesian posterior median parameters, but at the expense of some of the ML estimates being implausible from a microeconomic viewpoint. We interpret these results to imply that the model at hand suffers from a substantial degree of misspecification. This interpretation is supported by the DSGE-VAR analysis in Adolfson et al. (2008). Accordingly, we conclude that problems with model misspecification, and not primarily weak identification, are the main challenge ahead in developing quantitative macromodels for policy analysis. 
Keywords:  Identification; Bayesian estimation; Monte Carlo methods; Maximum likelihood estimation; New Keynesian DSGE model; Open economy. 
JEL:  C13 C51 E30 
Date:  2011–04–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0251&r=ecm 
By:  Toru Kitagawa (Institute for Fiscal Studies and UCL) 
Abstract:  <p>This paper develops inference and statistical decision theory for set-identified parameters from the robust Bayes perspective. When a model is set-identified, prior knowledge for model parameters is decomposed into two parts: one that can be updated by data (revisable prior knowledge) and one that can never be updated (unrevisable prior knowledge). We introduce a class of prior distributions that shares a single prior distribution for the revisable part, but allows for arbitrary prior distributions for the unrevisable part. A posterior inference procedure proposed in this paper operates on the resulting class of posteriors by focusing on the posterior lower and upper probabilities. We analyze point estimation of the set-identified parameters by applying the gamma-minimax criterion. We propose a robustified posterior credible region for the set-identified parameters by focusing on a contour set of the posterior lower probability. Our framework offers a procedure to eliminate set-identified nuisance parameters, and yields inference for the marginalized identified set. For an interval-identified parameter, we establish asymptotic equivalence of the lower probability inference to frequentist inference for the identified set.</p> 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:16/11&r=ecm 
By:  Zhu, Ying; Ghosh, Sujit K. 
Abstract:  The objective of this study is to evaluate robust regression methods for detrending crop yield data. Using a Monte Carlo simulation method, the performance of the proposed Time-Varying Beta method is compared with the OLS, M-estimator, and MM-estimator methods of previous studies in an application to crop yield modeling. We analyze the properties of these estimators for outlier-contaminated data in both symmetric and skewed distribution cases. The application of these estimation methods is illustrated in an agricultural insurance analysis. A more accurate detrending method offers the potential to improve the accuracy of models used in rating crop insurance contracts. 
Keywords:  Research Methods/ Statistical Methods, Risk and Uncertainty, 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea11:103426&r=ecm 
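The M-estimation compared in the study above is commonly implemented by iteratively reweighted least squares with Huber weights. The sketch below is a generic illustration of that approach only (the tuning constant c = 1.345 and the linear trend are standard but assumed choices; the Time-Varying Beta method itself is not reproduced here).

```python
import numpy as np

def huber_trend(t, y, c=1.345, iters=50):
    """Robust linear detrending: iteratively reweighted least squares with
    Huber weights and a MAD-based robust scale estimate."""
    X = np.column_stack([np.ones_like(t), t])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # OLS starting values
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale (MAD)
        u = r / (c * s)
        w = np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))  # Huber weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)         # weighted LS step
    return beta

rng = np.random.default_rng(2)
t = np.arange(100.0)
y = 1.0 + 0.5 * t + rng.normal(0.0, 1.0, 100)
y[90:] -= 15.0                                             # late-sample outliers
b_ols = np.linalg.lstsq(np.column_stack([np.ones(100), t]), y, rcond=None)[0]
b_rob = huber_trend(t, y)
```

With contaminated late observations, the OLS slope is pulled away from the true 0.5 while the Huber fit downweights the outliers.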
By:  Eric Gautier (CREST - Centre de Recherche en Économie et Statistique, INSEE - ENSAE ParisTech); Alexandre Tsybakov (CREST - Centre de Recherche en Économie et Statistique, INSEE; LPMA - Laboratoire de Probabilités et Modèles Aléatoires, CNRS UMR 7599, Université Pierre et Marie Curie (Paris VI) and Université Paris-Diderot (Paris VII)) 
Abstract:  We propose an instrumental variables method for estimation in linear models with endogenous regressors in the high-dimensional setting, where the sample size n can be smaller than the number of possible regressors K, with L >= K instruments. We allow for heteroscedasticity and do not require prior knowledge of the variances of the errors. We suggest a new procedure called the STIV (Self Tuning Instrumental Variables) estimator, which is realized as a solution of a conic optimization program. The main results of the paper are upper bounds on the estimation error of the vector of coefficients in l_p-norms for 1 <= p <= infinity. 
Keywords:  Instrumental variables ; Sparsity ; STIV estimator ; Endogeneity ; High-dimensional regression ; Conic programming ; Optimal instruments ; Heteroscedasticity ; Confidence intervals ; Non-Gaussian errors ; Variable selection ; Unknown variance ; Sign consistency 
Date:  2011–05–09 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal00591732&r=ecm 
By:  Storm, Hugo; Heckelei, Thomas 
Abstract:  In this poster a Bayesian estimation framework for a nonstationary Markov model is developed for situations where sample data with observed transitions between classes (micro data) and aggregate population shares (macro data) are available. Posterior distributions of transition probabilities are derived based on a micro-based prior and a macro-based likelihood function, thereby consistently combining previously separate approaches. Monte Carlo simulations for ordered and unordered Markov states show how observed micro transitions improve the precision of posterior knowledge as the sample size increases. 
Keywords:  Bayesian estimation, Markov transitions, prior information, multinomial logit, ordered multinomial logit, Agricultural and Food Policy, Research Methods/ Statistical Methods, 
Date:  2011–07–24 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea11:103645&r=ecm 
By:  Holger Dette; Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Natalie Neumeyer 
Abstract:  <p>This paper is concerned with testing rationality restrictions using quantile regression methods. Specifically, we consider negative semidefiniteness of the Slutsky matrix, arguably the core restriction implied by utility maximization. We consider a heterogeneous population characterized by a system of nonseparable structural equations with infinite-dimensional unobservables. To analyze the economic restriction, we employ quantile regression methods because they allow us to utilize the entire distribution of the data. Difficulties arise because the restriction involves several equations, while the quantile is a univariate concept. We establish that we may test the economic restriction by considering quantiles of linear combinations of the dependent variable. For this hypothesis we develop a new empirical-process-based test that applies kernel quantile estimators, and derive its large sample behavior. We investigate the performance of the test in a simulation study. Finally, we apply all concepts to Canadian individual data, and show that rationality is an acceptable description of actual individual behavior.</p> 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:14/11&r=ecm 
By:  Lance Lochner; Enrico Moretti 
Abstract:  In many empirical studies, researchers seek to estimate causal relationships using instrumental variables. When only one valid instrumental variable is available, researchers are limited to estimating linear models, even when the true model may be nonlinear. In this case, ordinary least squares and instrumental variable estimators will identify different weighted averages of the underlying marginal causal effects even in the absence of endogeneity. As such, the traditional Hausman test for endogeneity is uninformative. We build on this insight to develop a new test for endogeneity that is robust to any form of nonlinearity. Notably, our test works well even when only a single valid instrument is available. This has important practical applications, since it implies that researchers can estimate a completely unrestricted nonlinear model by OLS, and then use our test to establish whether those OLS estimates are consistent. We revisit a few recent empirical examples to show how the test can be used to shed new light on the role of nonlinearity. 
JEL:  C01 J0 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:17039&r=ecm 
By:  Tom Engsted (Aarhus University and CREATES); Thomas Q. Pedersen (Aarhus University and CREATES) 
Abstract:  We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple and easy-to-use analytical bias formula compares very favorably to the more standard but also more computer-intensive bootstrap bias-correction method, both in terms of bias and mean squared error. Both methods yield a notable improvement over both OLS and a recently proposed WLS estimator. We also investigate the properties of an iterative scheme when applying the analytical bias formula, and we find that this can imply slightly better finite-sample properties for very small sample sizes, while for larger sample sizes there is no gain from iterating. Finally, we also pay special attention to the risk of pushing an otherwise stationary model into the nonstationary region of the parameter space during the process of correcting for bias. 
Keywords:  Bias reduction, VAR model, analytical bias formula, bootstrap, iteration, Yule-Walker, nonstationary system, skewed and fat-tailed data. 
JEL:  C13 C32 
Date:  2011–05–13 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201118&r=ecm 
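For intuition on the analytical bias formula discussed above, the scalar AR(1) analogue uses the classic first-order approximation E[rho_hat] - rho ≈ -(1 + 3*rho)/T (Kendall-type, for a regression with estimated mean). The VAR version the paper studies is the matrix generalization; the sketch below is only this univariate special case.

```python
import numpy as np

def ar1_ols(y):
    """OLS slope of y_t on y_{t-1} after demeaning."""
    y = y - y.mean()
    return (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

def bias_corrected(rho_hat, T):
    """First-order analytical correction for E[rho_hat] - rho ~ -(1+3*rho)/T."""
    return rho_hat + (1.0 + 3.0 * rho_hat) / T

rng = np.random.default_rng(1)
rho, T, reps = 0.9, 50, 2000
raw, corr = [], []
for _ in range(reps):
    e = rng.normal(size=T + 100)
    y = np.zeros(T + 100)
    for t in range(1, T + 100):
        y[t] = rho * y[t - 1] + e[t]
    r = ar1_ols(y[100:])              # drop burn-in, keep T observations
    raw.append(r)
    corr.append(bias_corrected(r, T))
```

Averaged over replications, the corrected estimates sit much closer to the true rho than the raw OLS estimates, illustrating the bias reduction the abstract documents. (Pushing a corrected estimate past 1, the stationarity risk mentioned above, can be handled by truncation, not shown here.)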
By:  Deb, P.; Trivedi, P. 
Abstract:  This paper develops finite mixture models with fixed effects for two families of distributions for which the incidental parameter problem has a solution. Analytical results are provided for mixtures of normals and mixtures of Poissons. We provide algorithms based on the expectation-maximization (EM) approach as well as computationally simpler equivalent estimators that can be used in the case of mixtures of normals. We design and implement a Monte Carlo study that examines the finite sample performance of the proposed estimator and also compares it with other estimators such as the Mundlak-Chamberlain conditionally correlated random effects estimator. The results of the Monte Carlo experiments suggest that our proposed estimators of such models have excellent finite sample properties, even in the case of relatively small T and moderately sized N dimensions. The methods are applied to models of healthcare expenditures and counts of utilization using data from the Health and Retirement Study. 
Date:  2011–04 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:11/03&r=ecm 
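A plain EM iteration for a two-component normal mixture, the building block behind the entry above, can be sketched as follows. This is the standard cross-sectional algorithm without the paper's fixed effects; the normalizing constant is dropped because it cancels in the responsibilities.

```python
import numpy as np

def em_two_normals(x, iters=200):
    """EM for a two-component univariate normal mixture (no fixed effects)."""
    mu = np.array([x.min(), x.max()])          # well-separated starting means
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i; mu_k, sigma_k)
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, and scales
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0.0, 1.0, 1000), rng.normal(5.0, 1.0, 1000)])
pi_hat, mu_hat, sigma_hat = em_two_normals(x)
```

The paper's contribution is handling the incidental parameters that appear when each unit gets its own fixed effect inside such a mixture; that extension is not attempted here.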
By:  Massimiliano Caporin (University of Padova); Gabriel G. Velo (University of Padova) 
Abstract:  In this paper, we estimate, model and forecast Realized Range Volatility, a new realized measure and estimator of the quadratic variation of financial prices. This estimator was introduced early in the literature and is based on the high-low range observed at high frequency during the day. We consider the impact of microstructure noise in high frequency data and correct our estimates following a known procedure. We then model the Realized Range, accounting for the well-known stylized effects present in financial data. We consider an HAR model with asymmetric effects with respect to volatility and return, and GARCH and GJR-GARCH specifications for the variance equation. Moreover, we also consider a non-Gaussian distribution for the innovations. The analysis of forecast performance during the different periods suggests that including the HAR components in the model improves point forecasting accuracy, while the introduction of asymmetric effects leads only to minor improvements. 
Keywords:  Statistical analysis of financial data, Econometrics, Forecasting methods, Time series analysis, Realized Range Volatility, Realized Volatility, Long memory, Volatility forecasting 
JEL:  C22 C52 C53 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:pad:wpaper:0128&r=ecm 
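In its uncorrected form, the realized range estimator referenced above is the sum of squared intraday log high-low ranges scaled by the Parkinson constant 1/(4 log 2); the sketch below shows only this raw estimator and omits the paper's microstructure-noise correction.

```python
import math

def realized_range(highs, lows):
    """Realized range: sum of squared log high-low ranges over intraday
    intervals, scaled by 1/(4*log 2) so it is unbiased for the daily
    quadratic variation under Brownian motion without noise."""
    return sum(math.log(h / l) ** 2 for h, l in zip(highs, lows)) / (4.0 * math.log(2.0))
```

For a single interval with high = e and low = 1, the squared log range is 1, so the estimator returns 1/(4 log 2), about 0.3607.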
By:  Cristina Amado (Universidade do Minho  NIPE); Timo Teräsvirta (CREATES, School of Economics and Management, Aarhus University) 
Abstract:  In this paper we investigate the effects of carefully modelling the long-run dynamics of the volatilities of stock market returns on the conditional correlation structure. To this end we allow the individual unconditional variances in Conditional Correlation GARCH models to change smoothly over time by incorporating a nonstationary component in the variance equations. The modelling technique for determining the parametric structure of this time-varying component is based on a sequence of specification Lagrange multiplier-type tests derived in Amado and Teräsvirta (2011). The variance equations combine the long-run and the short-run dynamic behaviour of the volatilities. The structure of the conditional correlation matrix is assumed to be either time-invariant or time-varying. We apply our model to pairs of seven daily stock returns belonging to the S&P 500 composite index and traded at the New York Stock Exchange. The results suggest that accounting for deterministic changes in the unconditional variances considerably improves the fit of the multivariate Conditional Correlation GARCH models to the data. The effect of careful specification of the variance equations on the estimated correlations varies: in some cases it is rather small, in others more discernible. As a by-product, we generalize news impact surfaces to the situation in which both the GARCH equations and the conditional correlations contain a deterministic component that is a function of time. 
Keywords:  Multivariate GARCH model; Time-varying unconditional variance; Lagrange multiplier test; Modelling cycle; Nonlinear time series. 
JEL:  C12 C32 C51 C52 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:nip:nipewp:15/2011&r=ecm 
By:  Goodwin, Barry K.; Holt, Matthew T.; Prestemon, Jeffrey P.; Onel, Gulcan 
Abstract:  An extensive empirical literature has addressed a wide array of issues pertaining to price linkages over space and across time. Empirical models of price linkages have been used to measure market power and to characterize the operation of markets that are separated by space, time, and product form. The long history of these empirical models extends from simple tests of price correlation, to conventional regression tests, to modern time series models that account for nonstationarity, nonlinearities, and threshold behavior in market linkages. This paper proposes an entirely different and potentially novel approach to analyzing these same types of time series data in a nonlinear fashion. Copula-based models that consider the joint distribution of prices separated by space are developed and applied to weekly prices for important lumber products at geographically distinct markets. In particular, we consider prices taken from weekly editions of the Random Lengths publication for homogeneous OSB products. 
Keywords:  Spatial Market Linkages, Copula Models, State-dependence, Forest Products, Research Methods/ Statistical Methods, 
Date:  2011–05–03 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea11:103715&r=ecm 
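As a hedged illustration of the copula approach above (not the authors' specification), one simple way to fit the dependence parameter of a Gaussian copula between two price series is via normal scores of the ranks, which is invariant to the marginal distributions.

```python
import numpy as np
from statistics import NormalDist

def gaussian_copula_corr(x, y):
    """Gaussian-copula dependence via normal scores: rank-transform each margin
    to pseudo-observations in (0,1), push them through the standard normal
    quantile function, and correlate the resulting scores."""
    nd = NormalDist()
    def normal_scores(v):
        ranks = np.argsort(np.argsort(v)) + 1          # ranks 1..n
        return np.array([nd.inv_cdf(u) for u in ranks / (len(v) + 1.0)])
    return float(np.corrcoef(normal_scores(x), normal_scores(y))[0, 1])

rng = np.random.default_rng(9)
z1 = rng.normal(size=2000)
z2 = 0.8 * z1 + np.sqrt(1.0 - 0.64) * rng.normal(size=2000)
prices_a, prices_b = np.exp(z1), np.exp(z2)   # lognormal margins, copula rho = 0.8
rho_hat = gaussian_copula_corr(prices_a, prices_b)
```

Because the estimator works on ranks, the lognormal margins do not distort the recovered dependence parameter.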
By:  Søren Johansen (University of Copenhagen and CREATES); Theis Lange (University of Copenhagen and CREATES) 
Abstract:  The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)ρy(t-1) + e(t), t = 1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), which is i.i.d. with mean zero and finite variance. We take ρ > 1, so the process is explosive for a period and collapses when s(t) = 0. We apply the drift criterion for nonlinear time series to show that the process is geometrically ergodic when p < 1, because of the recurrent collapse. It has a finite mean if pρ < 1, and a finite variance if pρ² < 1. The question we discuss is whether a bubble model with infinite variance can create the long swings, or persistence, which are observed in many macro variables. We say that a variable is persistent if its autoregressive coefficient ρ(n), from the regression of y(t) on y(t-1), is close to one. We show that the estimator of ρ(n) converges to ρp if the variance is finite, but if the variance of y(t) is infinite, we prove the curious result that the estimator converges to ρ⁻¹. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of the product moments of y(t). 
Keywords:  Time series, explosive processes, bubble models. 
JEL:  C32 
Date:  2011–05–09 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201117&r=ecm 
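The collapse mechanism and the ρp limit of the autoregressive coefficient in the finite-variance case can be checked by direct simulation. The parameter values below are illustrative, chosen so that pρ² < 1 (the finite-variance regime of the abstract).

```python
import numpy as np

rng = np.random.default_rng(42)
rho, p, n = 1.02, 0.8, 50000            # p * rho**2 = 0.832 < 1: finite variance
s = rng.random(n) < p                   # i.i.d. binary survival indicator
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    # explosive while s(t) = 1, collapse to pure noise when s(t) = 0
    y[t] = (rho * y[t - 1] if s[t] else 0.0) + e[t]

b = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])  # OLS regression of y(t) on y(t-1)
# finite-variance case: b should be near rho * p = 0.816, per the abstract
```

The infinite-variance regime (pρ² > 1), where the estimator instead converges to ρ⁻¹, can be explored by raising ρ or p, though convergence there is much more erratic.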
By:  Nicoletti, Cheti; Best, Nicky G. 
Abstract:  Analyses using aggregated data may bias inference. In this work we show how to avoid or at least reduce this bias when estimating quantile regressions using aggregated information. This is possible by considering the unconditional quantile regression recently introduced by Firpo et al (2009) and using a specific strategy to aggregate the data. 
Date:  2011–05–13 
URL:  http://d.repec.org/n?u=RePEc:ese:iserwp:201112&r=ecm 
By:  Victor DeMiguel; Alberto Martín Utrera; Francisco J. Nogales 
Abstract:  Shrinkage estimation is an area widely studied in statistics. In this paper, we examine the role of shrinkage estimators in the construction of the investor's portfolio. We study the performance of shrinking the sample moments used to estimate portfolio weights, as well as the performance of shrinking the naive sample portfolio weights themselves. We provide a theoretical and empirical analysis of different new methods to calibrate shrinkage estimators within portfolio optimization. 
Keywords:  Portfolio choice, Estimation error, Shrinkage estimators, Smoothed bootstrap 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws111510&r=ecm 
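A minimal sketch of the moment-shrinkage side of the entry above: combine the sample covariance with a scaled-identity target and feed the result to a minimum-variance portfolio. The fixed shrinkage intensity delta = 0.5 is an assumption; how to calibrate it is precisely what the paper studies.

```python
import numpy as np

def shrink_cov(returns, delta):
    """Convex combination of the sample covariance matrix and a scaled
    identity target: Sigma(delta) = delta*target + (1-delta)*sample."""
    S = np.cov(returns, rowvar=False)
    target = np.eye(S.shape[0]) * (np.trace(S) / S.shape[0])
    return delta * target + (1.0 - delta) * S

def min_variance_weights(sigma):
    """Global minimum-variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    w = np.linalg.solve(sigma, np.ones(sigma.shape[0]))
    return w / w.sum()

rng = np.random.default_rng(4)
returns = rng.normal(0.0, 0.01, size=(60, 10))   # 60 periods, 10 assets
w = min_variance_weights(shrink_cov(returns, 0.5))
```

Shrinking toward the identity keeps the covariance well-conditioned when the number of assets is large relative to the sample, which is what stabilizes the resulting weights. The paper's alternative of shrinking the weights themselves is not shown.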
By:  Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen 
Abstract:  The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences depends on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A number of simulations are conducted examining the impact of sample size, nonlinear predictors, and multicollinearity on substantive inferences (e.g., odds ratios, marginal effects) and goodness of fit (e.g., pseudo-R², predictability) of logistic regression models. Findings suggest that sample size can affect parameter estimates and inferences in the presence of multicollinearity and nonlinear predictor functions, but marginal effects estimates are relatively robust to sample size. 
Keywords:  Logistic Regression Model, Multicollinearity, Nonlinearity, Robustness, Small Sample Bias, Research Methods/ Statistical Methods, 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea11:103771&r=ecm 
By:  Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Yuya Sasaki 
Abstract:  <p>This paper contributes to the understanding of the source of identification in panel data models. Recent research has established that few time periods suffice to identify interesting structural effects in nonseparable panel data models even in the presence of complex correlated unobservables, provided these unobservables are time invariant. A commonality of all of these approaches is that they point identify effects only for subpopulations. In this paper we focus on average partial derivatives and continuous explanatory variables. We elaborate on the parallel between time in panels and instrumental variables in cross sections and establish that point identification is generically only possible in specific subpopulations, for finite T. Moreover, for general subpopulations, we provide sharp bounds. We then show that these bounds converge to point identification only as T tends to infinity. We systematize this behavior by comparing it to increasing the number of support points of an instrument. Finally, we apply all of these concepts to the semiparametric panel binary choice model and establish that these issues determine the rates of convergence of estimators for the slope coefficient.</p> 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:15/11&r=ecm 
By:  Shigeru Iwata; Han Li 
Abstract:  When a certain procedure is applied to extract two component processes from a single observed process, it is necessary to impose a set of restrictions that defines the two components. One popular restriction is the assumption that the shocks to the trend and cycle are orthogonal. Another is the assumption that the trend is a pure random walk process. The unobserved components (UC) model (Harvey, 1985) assumes both of the above, whereas the BN decomposition (Beveridge and Nelson, 1981) assumes only the latter. Quah (1992) investigates a broad class of decompositions by making the former assumption only. This paper provides a general framework in which alternative trend-cycle decompositions are regarded as special cases, and examines alternative decomposition schemes from the perspective of the frequency domain. We find that as far as US GDP is concerned, the conventional UC model is inappropriate for the trend-cycle decomposition. We agree with Morley et al (2003) that the UC model is simply misspecified. However, this does not imply that a UC model that allows for correlated shocks is a better model specification. The correlated UC model would lose many attractive features of the conventional UC model. 
Keywords:  BeveridgeNelson decomposition, Unobserved Component Models 
JEL:  E44 F36 G15 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd10171&r=ecm 
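For the simplest ARIMA(1,1,0) case, the BN decomposition discussed above has a closed form: with dy_t - mu = phi*(dy_{t-1} - mu) + e_t, the trend is the long-horizon forecast net of drift, trend_t = y_t + (phi/(1-phi))*(dy_t - mu). The sketch below assumes phi and mu are known, which sidesteps the specification issues the paper examines.

```python
import numpy as np

def bn_decompose(y, phi, mu):
    """Beveridge-Nelson decomposition when the first difference of y follows
    an AR(1): dy_t - mu = phi*(dy_{t-1} - mu) + e_t.
    Summing expected future deviations of dy from drift gives
    trend_t = y_t + (phi/(1-phi))*(dy_t - mu); the cycle is the remainder."""
    dy = np.diff(y)
    trend = y[1:] + (phi / (1.0 - phi)) * (dy - mu)
    return trend, y[1:] - trend

rng = np.random.default_rng(5)
phi, mu, n = 0.4, 0.2, 500
dy = np.zeros(n)
for t in range(1, n):
    dy[t] = mu + phi * (dy[t - 1] - mu) + rng.normal()
y = np.cumsum(dy)
trend, cycle = bn_decompose(y, phi, mu)
```

By construction the trend and cycle sum back to the observed series; the random-walk-trend restriction is what the UC model shares with BN, per the abstract.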
By:  Espasa, Antoni; Mayo, Iván 
Abstract:  The paper focuses on providing joint consistent forecasts for an aggregate and all its components, and on showing that this indirect forecast of the aggregate is at least as accurate as the direct one. The procedure developed in the paper is a disaggregated approach based on single-equation models for the components, which take into account common stable features that some components share. The procedure is applied to forecasting euro area, UK and US inflation, and it is shown that its forecasts are significantly more accurate than those obtained by the direct forecast of the aggregate or by dynamic factor models. A by-product of the procedure is the classification of a large number of components by the restrictions they share, which could also be useful in other respects, such as the application of dynamic factors, the definition of intermediate aggregates, or the formulation of models with unobserved components. 
Keywords:  Common trends, Common serial correlation, Inflation, Euro Area, UK, US, Cointegration, Single-equation econometric models 
Date:  2011–04 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws110805&r=ecm 
By:  Donald W.K. Andrews (Cowles Foundation, Yale University) 
Abstract:  Completeness and bounded-completeness conditions are used increasingly in econometrics to obtain nonparametric identification in a variety of models, from nonparametric instrumental variable regression to nonclassical measurement error models. However, distributions that are known to be complete or boundedly complete are somewhat scarce. In this paper, we consider an L^2-completeness condition that lies between completeness and bounded completeness. We construct broad (nonparametric) classes of distributions that are L^2-complete and boundedly complete. The distributions can have any marginal distributions and a wide range of strengths of dependence. Examples of L^2-incomplete distributions also are provided. 
JEL:  C14 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1801&r=ecm 
By:  Ghosh, Somali; Woodard, Joshua D.; Vedenov, Dmitry V. 
Abstract:  The association between prices and yields is of paramount importance to crop insurance programs, and proper estimation of this association is highly desirable. Copulas are one method of measuring the dependence structure. Five single parametric copulas, a nonparametric copula, and their fifteen different pairwise combinations, each mixing two different copulas at a time, are used in the crop insurance rating analysis. Using corn data from 1973-2009 for 602 counties in the Midwest, two different efficient methods are proposed to generate the optimal mixtures using a cross-validation approach. A resampling technique is used to check the significance of the expected indemnities. 
Keywords:  Copulas, Crop Insurance, Cross-Validation, Empirical distribution, GRIP, Indemnities, Out-of-Sample Log-Likelihood, Agricultural Finance, Q14, 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea11:103738&r=ecm 
By:  Chen, Xuan; Goodwin, Barry K. 
Abstract:  Our study focuses on modeling southern pine beetle (SPB) outbreaks in the southern United States. The approach is to evaluate SPB outbreak frequency in a spatio-temporal framework. A block bootstrapping method with zero-inflated estimation is proposed to construct a statistical model accounting for explanatory variables while adjusting for spatial and temporal autocorrelation. Although the bootstrap (Efron 1979) can handle independent observations well, the strong autocorrelation of SPB outbreaks poses a major challenge. Motivated by the overlapping-blocks bootstrap for autoregressive time series (Kunsch 1989) and the block bootstrap for dependent data on a spatial map (Hall 1985), we have developed a method to bootstrap overlapping spatio-temporal blocks. By selecting an appropriate block size, the spatio-temporal correlation can be eliminated. A second challenge arises from the fact that the distribution of SPB spots places heavy mass at 0. To accommodate this, zero-inflated models are adopted in the estimation stage. With our spatio-temporal block bootstrapping approach, the impacts of environmental factors on SPB outbreaks and the implications for pine forest management are assessed. Almost all the explanatory variables, including drought, temperature, forest ecosystem and hurricane, are found to have significant impacts. Forestland size and the government share of forestland contribute significantly and positively to SPB outbreaks. Meanwhile, our method offers a way to forecast the frequency of future SPB outbreaks given the current environmental information of a county. 
Keywords:  Southern Pine Beetle, Block Bootstrapping, Risk and Uncertainty, 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea11:103668&r=ecm 
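The moving (overlapping) block bootstrap underlying the entry above can be sketched for a single time series; the spatio-temporal extension the authors develop follows the same idea with blocks drawn in space and time. The block length here is illustrative.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """Resample a series by concatenating randomly drawn overlapping blocks,
    preserving the dependence structure within each block."""
    n = len(x)
    n_blocks = -(-n // block_len)                      # ceiling division
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(6)
x = np.sin(np.arange(200) / 5.0) + rng.normal(0.0, 0.1, 200)  # autocorrelated series
xb = moving_block_bootstrap(x, 20, rng)
```

Choosing the block length large enough that dependence beyond a block is negligible is the key tuning decision, mirroring the "appropriate block size" discussion in the abstract.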
By:  Tsung Yu, Yang 
Abstract:  Regarding the nature of yield data, there are two basic characteristics that need to be accommodated when modeling a yield distribution. The first is the nonstationary nature of the yield distribution, which causes heteroscedasticity-related problems. The second is the left skewness of the yield distribution. A common approach to this problem is based on a two-stage method in which the yields are detrended first and the detrended yields are then taken as observed data, modeled by various parametric and nonparametric methods. Based on a two-stage estimation structure, a mixed normal distribution seems to better capture the secondary distribution from catastrophic years than a Beta distribution. The implication for risk management is that yield risk may be underestimated under the common selection, the Beta distribution. A mixed normal distribution under a time-varying structure, in which the parameters are allowed to vary over time, tends to collapse to a single normal distribution. The time-varying mixed normal model fits the realized yield data in one step, which avoids the possible bias caused by sampling variability. Also, the time-varying parameters imply that premium rates can be adjusted to reflect the most recent information, which improves the efficiency of the insurance market. 
Keywords:  Time-Varying Distribution, Mixture Distribution, Crop Insurance, Agricultural Finance, Crop Production/Industries, Research Methods/Statistical Methods, Risk and Uncertainty, 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea11:103422&r=ecm 
By:  Gibson, Fiona L.; Burton, Michael P. 
Abstract:  Observed and unobserved characteristics of an individual are often used by researchers to explain choices over the provision of environmental goods. One means of identifying what is typically an unobserved characteristic, such as an attitude, is through a data reduction technique such as factor analysis. However, the resulting variable represents the true attitude with measurement error and hence, when included in a nonlinear choice model, introduces bias. There are well-established methods to overcome this issue, but they are seldom implemented. In an application to preferences over two water source alternatives for Perth in Western Australia, we use structural equation modeling within a discrete choice model to determine whether welfare measures are significantly affected by ignoring measurement error in latent attitudes, and to assess the advantage to policy makers of understanding what drives certain attitudes. 
Keywords:  contingent valuation, attitudes, structural equation modeling, recycled water, Environmental Economics and Policy, Research Methods/ Statistical Methods, Q51, Q53, C13, 
Date:  2011–05–02 
URL:  http://d.repec.org/n?u=RePEc:ags:uwauwp:103428&r=ecm 
By:  Franke, Reiner; Westerhoff, Frank 
Abstract:  Within the framework of small-scale agent-based financial market models, this paper starts from the concept of structural stochastic volatility, which derives from different noise levels in the demand of fundamentalists and chartists and from the time-varying market shares of the two groups. It advances several specifications of the endogenous switching between the trading strategies and then estimates these models by the method of simulated moments (MSM), where the choice of moments reflects the basic stylized facts of the daily returns of a stock market index. In addition to the standard version of MSM with a quadratic loss function, we also consider how often, across a large number of Monte Carlo simulation runs, the simulated moments all fall within their empirical confidence intervals. The model contest along these lines reveals a strong role for a (tamed) herding component. The quantitative performance of the winning model is so good that it may provide a standard for future research. 
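The quadratic-loss MSM step admits a generic illustration. The toy regime-switching model, the moment choices, and the identity weighting matrix below are our own stand-ins for the paper's agent-based specifications:

```python
import numpy as np

def moments(r):
    """A few stylised-fact moments: mean absolute return and the
    autocorrelation of absolute returns at lags 1 and 5 (volatility clustering)."""
    a = np.abs(r - r.mean())
    ac = lambda k: np.corrcoef(a[:-k], a[k:])[0, 1]
    return np.array([a.mean(), ac(1), ac(5)])

def msm_loss(theta, emp_m, W, simulate, n_sim=20000, seed=0):
    """Quadratic MSM loss: (m_sim - m_emp)' W (m_sim - m_emp)."""
    d = moments(simulate(theta, n_sim, seed)) - emp_m
    return float(d @ W @ d)

def simulate(theta, n, seed):
    """Toy stand-in for an agent-based model: Gaussian returns whose
    volatility triples in a persistent high-volatility regime; the regime
    flips with probability theta, so small theta means strong clustering."""
    rng = np.random.default_rng(seed)
    flips = rng.random(n) < theta
    high = np.logical_xor.accumulate(flips)   # current regime path
    return rng.normal(0.0, 1.0, n) * np.where(high, 3.0, 1.0)
```

In the paper's moment-coverage variant, one would instead count, over many simulation runs, how often every simulated moment falls inside its empirical confidence interval.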
Keywords:  Method of simulated moments, moment coverage ratio, herding, discrete choice approach, transition probability approach 
JEL:  D84 G12 G14 G15 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:zbw:bamber:78&r=ecm 
By:  Johnston, Robert J.; Ramachandran, Mahesh; Schultz, Eric T.; Segerson, Kathleen; Besedin, Elena Y. 
Abstract:  Stated preference analyses commonly impose strong and unrealistic assumptions in response to spatial welfare heterogeneity, including spatial homogeneity or continuous distance decay. Despite their ubiquity in the valuation literature, such global assumptions have been increasingly abandoned by non-economics disciplines in favor of approaches that allow for spatial patchiness. This paper develops parallel methods to evaluate local patchiness and hot spots in stated preference welfare estimates, characterizing relevant patterns overlooked by traditional approaches. The analysis draws on a choice experiment addressing river restoration. Results demonstrate the shortcomings of standard treatments of spatial heterogeneity and the insights available through alternative methods. 
Keywords:  Willingness to Pay, Hot Spot, Stated Preference, Ecosystem Service, Valuation, Environmental Economics and Policy, Research Methods/ Statistical Methods, 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea11:103374&r=ecm 
By:  Yoshitsugu Kitazawa (Faculty of Economics, Kyushu Sangyo University) 
Abstract:  This note proposes the average elasticity of the logit probabilities with respect to the exponential functions of the explanatory variables in the framework of the fixed effects logit model. The average elasticity can be calculated using the consistent estimators of the parameters of interest and the average of the binary dependent variables, regardless of the fixed effects. 
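On one plausible reading of the abstract: since the elasticity of the logit probability p with respect to exp(x_k) equals beta_k * (1 - p), its average can be estimated by beta_k * (1 - ybar), which involves only the slope estimate and the mean of the binary outcomes, never the fixed effects. The formula and the simulation below are our illustration of that reading, not the note's own derivation:

```python
import numpy as np

def average_elasticity(beta_k, y):
    """Average elasticity of the logit probability p with respect to
    exp(x_k): dln(p)/dln(exp(x_k)) = beta_k * (1 - p), so averaging and
    estimating E[p] by the sample mean of y gives beta_k * (1 - ybar)."""
    return beta_k * (1.0 - np.mean(y))

# simulate a panel logit with individual fixed effects
rng = np.random.default_rng(0)
n, T, beta = 5000, 4, 0.7
alpha = rng.normal(0.0, 1.0, (n, 1))            # fixed effects
x = rng.normal(0.0, 1.0, (n, T))
p = 1.0 / (1.0 + np.exp(-(alpha + beta * x)))   # true choice probabilities
y = (rng.random((n, T)) < p).astype(float)      # observed binary outcomes
```

With the true beta, the estimate beta * (1 - ybar) tracks the exact average elasticity beta * (1 - pbar) up to sampling error, even though alpha is never estimated.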
Keywords:  average elasticity; fixed effects logit model 
JEL:  C23 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:kyu:dpaper:49&r=ecm 
By:  Ryota Yabe 
Abstract:  This paper derives the asymptotic distribution of Tanaka's score statistic under moderate deviations from a unit root in a first-order moving average, MA(1), model. We classify the limiting distribution into three types depending on the order of the deviation. In the fastest case, the convergence order of the asymptotic distribution changes continuously from the invertible process to the unit root. In the slowest case, the limiting distribution coincides with that of the invertible process in a distributional sense, so these two cases share an asymptotic property. The limiting distribution in the intermediate case characterizes the boundary between the fastest and slowest cases. 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd10170&r=ecm 
By:  Hernandez, Monica; Pudney, Stephen 
Abstract:  We investigate the consequences of using time-invariant individual effects in panel data models when the unobservables are in fact time-varying. Using data from the British Offending Crime and Justice panel, we estimate a dynamic factor model of the occurrence of a range of illicit activities as outcomes of young people's development processes. This structure is then used to demonstrate that relying on the assumption of time-invariant individual effects to deal with confounding factors in a conventional dynamic panel data model is likely to lead to spurious gateway effects linking cannabis use to subsequent hard drug use. 
Date:  2011–05–14 
URL:  http://d.repec.org/n?u=RePEc:ese:iserwp:201113&r=ecm 
By:  Luís Francisco Aguiar (Universidade do Minho  NIPE); Maria Joana Soares (Universidade do Minho  Departamento de Matemática) 
Abstract:  Economists are already familiar with the Discrete Wavelet Transform, but a body of work using the Continuous Wavelet Transform has also been growing. We provide a self-contained summary of continuous wavelet tools such as the Continuous Wavelet Transform, the Cross-Wavelet Transform, Wavelet Coherency, and the Phase-Difference. Furthermore, we generalize the concept of simple coherency to Partial Wavelet Coherency and Multiple Wavelet Coherency, akin to partial and multiple correlations, allowing the researcher to move beyond bivariate analysis. Finally, we describe the Generalized Morse Wavelets, a recently proposed class of analytic wavelets. A user-friendly toolbox, with examples, is attached to this paper. 
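For readers who want the core computation rather than the full toolbox, a Morlet CWT and the cross-wavelet transform fit in a few lines. This is a minimal sketch using one common normalization convention; the toolbox attached to the paper is the reference implementation:

```python
import numpy as np

def cwt_morlet(x, scales, dt=1.0, omega0=6.0):
    """Continuous wavelet transform with an analytic Morlet wavelet,
    computed scale by scale in the frequency domain (one FFT pair per scale)."""
    n = len(x)
    xf = np.fft.fft(x)
    w = 2.0 * np.pi * np.fft.fftfreq(n, dt)       # angular frequencies
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Fourier transform of the scaled Morlet wavelet; analytic => support on w > 0
        psi_hat = (np.pi ** -0.25) * np.sqrt(2.0 * np.pi * s / dt) \
                  * np.exp(-0.5 * (s * w - omega0) ** 2) * (w > 0)
        out[i] = np.fft.ifft(xf * psi_hat)
    return out

def cross_wavelet(x, y, scales, dt=1.0):
    """Cross-wavelet transform W_x * conj(W_y): its modulus is the
    cross-wavelet power and its angle is the phase-difference."""
    return cwt_morlet(x, scales, dt) * np.conj(cwt_morlet(y, scales, dt))
```

Wavelet coherency additionally requires smoothing the cross- and auto-spectra in both time and scale before taking a ratio, a step omitted from this sketch.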
Keywords:  Continuous Wavelet Transform, Cross-Wavelet Transform, Wavelet Coherency, Partial Wavelet Coherency, Multiple Wavelet Coherency, Wavelet Phase-Difference, Economic fluctuations 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:nip:nipewp:16/2011&r=ecm 
By:  Chen, Min; Lupi, Frank 
Abstract:  Researchers have used the latent class model (LCM) to value recreational activities for years, which makes the reliability of this model an important issue. We conduct Monte Carlo simulations to test whether the latent class model can recover the truth. The simulation results show that the LCM does a better job of recovering population average values than of recovering the underlying population segments. 
Keywords:  Monte Carlo Simulations, Latent Class Model, Environmental Economics and Policy, 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea11:103449&r=ecm 