nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒05‒24
34 papers chosen by
Sune Karlsson
Orebro University

  1. Empirical Likelihood for Regression Discontinuity Design By Taisuke Otsu; Ke-Li Xu
  2. Statistical Inference in Possibly Integrated/Cointegrated Vector Autoregressions: Application to Testing for Structural Changes By Eiji Kurozumi; Khashbaatar Dashtseren
  3. When Long Memory Meets the Kalman Filter: A Comparative Study By Stefano Grassi; Paolo Santucci de Magistris
  4. Parameter Identification in an Estimated New Keynesian Open Economy Model By Adolfson, Malin; Lindé, Jesper
  5. Inference and decision for set identified parameters using posterior lower and upper probabilities By Toru Kitagawa
  6. A Robust Study of Regression Methods for Crop Yield Data By Zhu, Ying; Ghosh, Sujit K.
  7. High-dimensional instrumental variables regression and confidence sets By Eric Gautier; Alexandre Tsybakov
  8. Bayesian estimation of non-stationary Markov models combining micro and macro data By Storm, Hugo; Heckelei, Thomas
  9. Testing multivariate economic restrictions using quantiles: the example of Slutsky negative semidefiniteness By Holger Dette; Stefan Hoderlein; Natalie Neumeyer
  10. Estimating and Testing Non-Linear Models Using Instrumental Variables By Lance Lochner; Enrico Moretti
  11. Bias-correction in vector autoregressive models: A simulation study By Tom Engsted; Thomas Q. Pedersen
  12. Finite Mixture for Panels with Fixed Effects By Deb, P.; Trivedi, P.
  13. Modeling and forecasting realized range volatility By Massimiliano Caporin; Gabriel G. Velo
  14. Conditional Correlation Models of Autoregressive Conditional Heteroskedasticity with Nonstationary GARCH Equations By Cristina Amado; Timo Teräsvirta
  15. Copula-Based Nonlinear Models of Spatial Market Linkages By Goodwin, Barry K.; Holt, Matthew T.; Prestemon, Jeffrey P.; Onel, Gulcan
  16. Some econometric results for the Blanchard-Watson bubble model By Søren Johansen; Theis Lange
  17. Quantile regression with aggregated data By Nicoletti, Cheti; Best, Nicky G.
  18. Calibration of shrinkage estimators for portfolio optimization By Victor DeMiguel; Alberto Martín Utrera; Francisco J. Nogales
  19. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity By Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen
  20. On the role of time in nonseparable panel data models By Stefan Hoderlein; Yuya Sasaki
  21. Why are Trend Cycle Decompositions of Alternative Models So Different? By Shigeru Iwata; Han Li
  22. Forecasting aggregate and disaggregates with common features By Antoni, Espasa; Iván, Mayo
  23. Examples of L^2-Complete and Boundedly-Complete Distributions By Donald W.K. Andrews
  24. Efficient Estimation of Copula Mixture Model: An Application to the Rating of Crop Revenue Insurance By Ghosh, Somali; Woodard, Joshua D.; Vedenov, Dmitry V.
  25. Spatio-Temporal Modeling of Southern Pine Beetle Outbreaks with a Block Bootstrapping Approach By Chen, Xuan; Goodwin, Barry K.
  26. Measurement of Yield distribution: A Time-Varying Distribution Model By Tsung Yu, Yang
  27. Determining the change in welfare estimates from introducing measurement error in non-linear choice models By Gibson, Fiona L.; Burton, Michael P.
  28. Structural stochastic volatility in asset pricing dynamics: Estimation and model contest By Franke, Reiner; Westerhoff, Frank
  29. Characterizing Spatial Pattern in Ecosystem Service Values when Distance Decay Doesn't Apply: Choice Experiments and Local Indicators of Spatial Association By Johnston, Robert J.; Ramachandran, Mahesh; Schultz, Eric T.; Segerson, Kathleen; Besedin, Elena Y.
  30. Average elasticity in the framework of the fixed effects logit model By Yoshitsugu Kitazawa
  31. Limiting Distribution of the Score Statistic under Moderate Deviation from a Unit Root in MA(1) By Ryota Yabe
  32. What you don't see can't hurt you? Panel data analysis and the dynamics of unobservable factors By Hernandez, Monica; Pudney, Stephen
  33. The Continuous Wavelet Transform: A Primer By Luís Francisco Aguiar; Maria Joana Soares
  34. First in Class? The Performance of Latent Class Model By Chen, Min; Lupi, Frank

  1. By: Taisuke Otsu (Cowles Foundation, Yale University); Ke-Li Xu (Dept. of Economics, Texas A&M University and University of Alberta School of Business)
    Abstract: This paper proposes empirical likelihood based inference methods for causal effects identified from regression discontinuity designs. We consider both the sharp and fuzzy regression discontinuity designs and treat the regression functions as nonparametric. The proposed inference procedures do not require asymptotic variance estimation and the confidence sets have natural shapes, unlike the conventional Wald-type method. These features are illustrated by simulations and an empirical example which evaluates the effect of class size on pupils' scholastic achievements. Bandwidth selection methods, higher-order properties, and extensions to incorporate additional covariates and parametric functional forms are also discussed.
    Keywords: Empirical likelihood, Nonparametric methods, Regression discontinuity design, Treatment effect
    JEL: C12 C14 C21
    Date: 2011–05
  2. By: Eiji Kurozumi; Khashbaatar Dashtseren
    Abstract: We develop a new approach of statistical inference in possibly integrated/cointegrated vector autoregressions. Our method is built on the two previous approaches: the lag augmented approach by Toda and Yamamoto (1995) and the artificial autoregressions by Yamamoto (1996). We show that our estimator is asymptotically normally distributed irrespective of whether the variables are stationary or nonstationary, and that the Wald test statistic for the parameter restrictions has an asymptotic chi-square distribution. Using this method, we also propose to test for multiple structural changes. We show that our test statistics have the same limiting distributions as in the standard case, irrespective of whether the variables are stationary, purely integrated, or cointegrated.
    Keywords: multiple breaks, stationary, unit root, cointegration
    JEL: C12 C13 C32
    Date: 2011–04
  3. By: Stefano Grassi (Aarhus University and CREATES); Paolo Santucci de Magistris (Aarhus University and CREATES)
    Abstract: The finite sample properties of state space methods applied to long memory time series are analyzed through Monte Carlo simulations. The state space setup allows us to introduce a novel modeling approach in the long memory framework, which directly tackles measurement errors and random level shifts. Missing values and several alternative sources of misspecification are also considered. It emerges that the state space methodology provides a valuable alternative for the estimation of long memory models under different data generating processes that are common in financial and economic series. Two empirical applications highlight the practical usefulness of the proposed state space methods.
    Keywords: ARFIMA models, Kalman Filter, Missing Observations, Measurement Error, Level Shifts.
    JEL: C10 C22 C80
    Date: 2011–05–02
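The abstract above concerns state space methods for series contaminated by measurement error. As a much simplified illustration of the mechanics (a scalar local level model rather than the paper's ARFIMA setup — function names and parameter values are ours, purely for illustration), the following sketch runs a one-dimensional Kalman filter over a noisy level:

```python
import random

def kalman_local_level(y, q, r):
    """Filter y_t = x_t + v_t with x_t = x_{t-1} + w_t, Var(w_t)=q, Var(v_t)=r."""
    x, p = y[0], r                       # initialize at the first observation
    est = []
    for obs in y:
        p = p + q                        # predict: state variance grows by q
        k = p / (p + r)                  # Kalman gain
        x = x + k * (obs - x)            # update with the new observation
        p = (1 - k) * p
        est.append(x)
    return est

rng = random.Random(5)
level, truth, obs = 0.0, [], []
for _ in range(500):
    level += rng.gauss(0, 0.1)           # slowly drifting latent level
    truth.append(level)
    obs.append(level + rng.gauss(0, 1))  # large measurement error
filt = kalman_local_level(obs, q=0.01, r=1.0)
```

With this signal-to-noise ratio the filtered series tracks the latent level far more closely than the raw observations do, which is the feature the state space treatment of measurement error exploits.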
  4. By: Adolfson, Malin (Monetary Policy Department, Central Bank of Sweden); Lindé, Jesper (Division of International Finance)
    Abstract: In this paper, we use Monte Carlo methods to study the small sample properties of the classical maximum likelihood (ML) estimator in artificial samples generated by the New-Keynesian open economy DSGE model estimated by Adolfson et al. (2008) with Bayesian techniques. While asymptotic identification tests show that some of the parameters are weakly identified in the model and by the set of observable variables we consider, we document that ML is unbiased and has low MSE for many key parameters if a suitable set of observable variables is included in the estimation. These findings suggest that we can learn a lot about many of the parameters by confronting the model with data, and hence stand in sharp contrast to the conclusions drawn by Canova and Sala (2009) and Iskrev (2008). Encouraged by our results, we estimate the model using classical techniques on actual data, where we use a new simulation based approach to compute the uncertainty bands for the parameters. From a classical viewpoint, ML estimation leads to a significant improvement in fit relative to the log-likelihood computed with the Bayesian posterior median parameters, but at the expense of some of the ML estimates being implausible from a microeconomic viewpoint. We interpret these results to imply that the model at hand suffers from a substantial degree of model misspecification. This interpretation is supported by the DSGE-VAR() analysis in Adolfson et al. (2008). Accordingly, we conclude that problems with model misspecification, and not primarily weak identification, are the main challenge ahead in developing quantitative macromodels for policy analysis.
    Keywords: Identification; Bayesian estimation; Monte-Carlo methods; Maximum Likelihood estimation; New-Keynesian DSGE Model; Open economy.
    JEL: C13 C51 E30
    Date: 2011–04–01
  5. By: Toru Kitagawa (Institute for Fiscal Studies and UCL)
    Abstract: <p>This paper develops inference and statistical decision for set-identified parameters from the robust Bayes perspective. When a model is set-identified, prior knowledge for model parameters is decomposed into two parts: the one that can be updated by data (revisable prior knowledge) and the one that can never be updated (unrevisable prior knowledge). We introduce a class of prior distributions that shares a single prior distribution for the revisable part, but allows for arbitrary prior distributions for the unrevisable part. A posterior inference procedure proposed in this paper operates on the resulting class of posteriors by focusing on the posterior lower and upper probabilities. We analyze point estimation of the set-identified parameters by applying the gamma-minimax criterion. We propose a robustified posterior credible region for the set-identified parameters by focusing on a contour set of the posterior lower probability. Our framework offers a procedure to eliminate set-identified nuisance parameters, and yields inference for the marginalized identified set. For the case of an interval-identified parameter, we establish asymptotic equivalence of the lower probability inference to frequentist inference for the identified set.</p>
    Date: 2011–05
  6. By: Zhu, Ying; Ghosh, Sujit K.
    Abstract: The objective of this study is to evaluate robust regression methods for detrending crop yield data. Using Monte Carlo simulation, the performance of the proposed Time-Varying Beta method is compared with OLS, the M-estimator and the MM-estimator in an application to crop yield modeling. We analyze the properties of these estimators for outlier-contaminated data in both the symmetric and the skewed distribution case. The application of these estimation methods is illustrated in an agricultural insurance analysis. A more accurate detrending method offers the potential to improve the accuracy of models used in rating crop insurance contracts.
    Keywords: Research Methods/ Statistical Methods, Risk and Uncertainty,
    Date: 2011–05
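The contrast between OLS and M-estimation on outlier-contaminated yields can be sketched in a few lines. The example below fits a Huber M-estimator by iteratively reweighted least squares to a linear trend hit by two catastrophic years; the helper names, tuning constant and all numbers are illustrative, not the paper's:

```python
import random

def ols_line(x, y):
    # ordinary least squares fit of y = a + b*x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def huber_line(x, y, k=1.345, iters=50):
    # Huber M-estimator via iteratively reweighted least squares
    a, b = ols_line(x, y)                          # start from OLS
    for _ in range(iters):
        res = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        mad = sorted(abs(r) for r in res)[len(res) // 2]
        s = max(mad / 0.6745, 1e-8)                # robust scale (MAD)
        w = [1.0 if abs(r) <= k * s else k * s / abs(r) for r in res]
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        b = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) \
            / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        a = my - b * mx
    return a, b

random.seed(1)
years = list(range(30))
yields = [50 + 1.2 * t + random.gauss(0, 2) for t in years]
for t in (2, 5):                                   # two early catastrophic years
    yields[t] -= 25
```

The outliers pull the OLS trend slope away from the true 1.2, while the downweighting in the Huber fit keeps the slope close to it — the robustness property the abstract evaluates for insurance rating.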
  7. By: Eric Gautier (CREST - Centre de Recherche en Économie et Statistique - INSEE - École Nationale de la Statistique et de l'Administration Économique, ENSAE - École Nationale de la Statistique et de l'Administration Économique - ENSAE ParisTech); Alexandre Tsybakov (CREST - Centre de Recherche en Économie et Statistique - INSEE - École Nationale de la Statistique et de l'Administration Économique, LPMA - Laboratoire de Probabilités et Modèles Aléatoires - CNRS : UMR7599 - Université Pierre et Marie Curie - Paris VI - Université Paris-Diderot - Paris VII)
    Abstract: We propose an instrumental variables method for estimation in linear models with endogenous regressors in the high-dimensional setting where the sample size n can be smaller than the number of possible regressors K, and L>=K instruments. We allow for heteroscedasticity and we do not need a prior knowledge of variances of the errors. We suggest a new procedure called the STIV (Self Tuning Instrumental Variables) estimator, which is realized as a solution of a conic optimization program. The main results of the paper are upper bounds on the estimation error of the vector of coefficients in l_p-norms for 1
    Keywords: Instrumental variables ; Sparsity ; STIV estimator ; Endogeneity ; High-dimensional regression ; Conic programming ; Optimal instruments ; Heteroscedasticity ; Confidence intervals ; Non-Gaussian errors ; Variable selection ; Unknown variance ; Sign consistency
    Date: 2011–05–09
  8. By: Storm, Hugo; Heckelei, Thomas
    Abstract: In this poster, a Bayesian estimation framework for a non-stationary Markov model is developed for situations where sample data with observed transitions between classes (micro data) and aggregate population shares (macro data) are available. Posterior distributions on transition probabilities are derived based on a micro-based prior and a macro-based likelihood function, thereby consistently combining previously separate approaches. Monte Carlo simulations for ordered and unordered Markov states show how observed micro transitions improve the precision of posterior knowledge as the sample size increases.
    Keywords: Bayesian estimation, Markov transitions, prior information, multinomial logit, ordered multinomial logit, Agricultural and Food Policy, Research Methods/ Statistical Methods,
    Date: 2011–07–24
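A heavily simplified conjugate sketch of combining the two data sources: the poster's macro likelihood is built from aggregate population shares via a multinomial logit, but here, purely for illustration, both sources enter as transition counts and the prior is Dirichlet, so the posterior is available in closed form. All numbers and names are invented:

```python
def dirichlet_posterior_row(micro_row, macro_row):
    """Dirichlet(micro counts + 1) prior, conjugate update with macro counts;
    returns the posterior mean of the transition probabilities out of one state."""
    alpha = [m + 1 for m in micro_row]                  # micro-based prior
    post = [a + n for a, n in zip(alpha, macro_row)]    # conjugate update
    s = sum(post)
    return [p / s for p in post]

# rows: transitions out of each of three (hypothetical) farm-size classes
micro = [[40, 8, 2], [5, 30, 5], [1, 9, 20]]
macro = [[400, 90, 10], [60, 280, 60], [10, 80, 210]]
P = [dirichlet_posterior_row(mi, ma) for mi, ma in zip(micro, macro)]
```

The micro counts shape the prior, the macro counts dominate the posterior when they are large — the same division of labor as in the poster's framework, minus the non-stationarity and the share-based likelihood.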
  9. By: Holger Dette; Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Natalie Neumeyer
    Abstract: <p>This paper is concerned with testing rationality restrictions using quantile regression methods. Specifically, we consider negative semidefiniteness of the Slutsky matrix, arguably the core restriction implied by utility maximization. We consider a heterogeneous population characterized by a system of nonseparable structural equations with infinite dimensional unobservable. To analyze the economic restriction, we employ quantile regression methods because they allow us to utilize the entire distribution of the data. Difficulties arise because the restriction involves several equations, while the quantile is a univariate concept. We establish that we may test the economic restriction by considering quantiles of linear combinations of the dependent variable. For this hypothesis we develop a new empirical process based test that applies kernel quantile estimators, and derive its large sample behavior. We investigate the performance of the test in a simulation study. Finally, we apply all concepts to Canadian individual data, and show that rationality is an acceptable description of actual individual behavior.</p>
    Date: 2011–05
  10. By: Lance Lochner; Enrico Moretti
    Abstract: In many empirical studies, researchers seek to estimate causal relationships using instrumental variables. When only one valid instrumental variable is available, researchers are limited to estimating linear models, even when the true model may be non-linear. In this case, ordinary least squares and instrumental variable estimators will identify different weighted averages of the underlying marginal causal effects even in the absence of endogeneity. As such, the traditional Hausman test for endogeneity is uninformative. We build on this insight to develop a new test for endogeneity that is robust to any form of non-linearity. Notably, our test works well even when only a single valid instrument is available. This has important practical applications, since it implies that researchers can estimate a completely unrestricted non-linear model by OLS, and then use our test to establish whether those OLS estimates are consistent. We re-visit a few recent empirical examples to show how the test can be used to shed new light on the role of non-linearity.
    JEL: C01 J0
    Date: 2011–05
  11. By: Tom Engsted (Aarhus University and CREATES); Thomas Q. Pedersen (Aarhus University and CREATES)
    Abstract: We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple and easy-to-use analytical bias formula compares very favorably to the more standard but also more computer intensive bootstrap bias-correction method, both in terms of bias and mean squared error. Both methods yield a notable improvement over both OLS and a recently proposed WLS estimator. We also investigate the properties of an iterative scheme when applying the analytical bias formula, and we find that this can imply slightly better finite-sample properties for very small sample sizes while for larger sample sizes there is no gain by iterating. Finally, we also pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space during the process of correcting for bias.
    Keywords: Bias reduction, VAR model, analytical bias formula, bootstrap, iteration, Yule-Walker, non-stationary system, skewed and fat-tailed data.
    JEL: C13 C32
    Date: 2011–05–13
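A minimal univariate sketch of the bootstrap variant the abstract compares against (the paper works with full VARs; here a single AR(1) without intercept, whose classical first-order bias approximation is about −2ρ/T). The guard at the end mirrors the concern about pushing a stationary model into the non-stationary region:

```python
import random

def ar1_ols(y):
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def bias_corrected_ar1(y, B=200, seed=0):
    rng = random.Random(seed)
    rho_hat = ar1_ols(y)
    resid = [y[t] - rho_hat * y[t - 1] for t in range(1, len(y))]
    boot = []
    for _ in range(B):                        # resample residuals, rebuild series
        yb = [y[0]]
        for _t in range(1, len(y)):
            yb.append(rho_hat * yb[-1] + rng.choice(resid))
        boot.append(ar1_ols(yb))
    bias = sum(boot) / B - rho_hat
    rho_bc = rho_hat - bias
    return min(rho_bc, 0.999), rho_hat        # keep the corrected estimate stationary

rng = random.Random(1)
y = [0.0]
for _ in range(50):                           # short sample: T = 50, true rho = 0.9
    y.append(0.9 * y[-1] + rng.gauss(0, 1))
rho_bc, rho_hat = bias_corrected_ar1(y)
```

Because the OLS bias is downward in this region, the bootstrap correction pushes the estimate up toward the true persistence, and the clamp prevents it from crossing into the explosive region.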
  12. By: Deb, P.; Trivedi, P.
    Abstract: This paper develops finite mixture models with fixed effects for two families of distributions for which the incidental parameter problem has a solution. Analytical results are provided for mixtures of normals and mixtures of Poissons. We provide algorithms based on the expectation-maximization (EM) approach as well as computationally simpler equivalent estimators that can be used in the case of mixtures of normals. We design and implement a Monte Carlo study that examines the finite sample performance of the proposed estimators and also compares them with other estimators such as the Mundlak-Chamberlain conditionally correlated random effects estimator. The results of the Monte Carlo experiments suggest that our proposed estimators of such models have excellent finite sample properties, even in the case of relatively small T and moderately sized N dimensions. The methods are applied to models of healthcare expenditures and counts of utilization using data from the Health and Retirement Study.
    Date: 2011–04
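Outside the fixed-effects setting, the mechanics of fitting a finite mixture by EM can be illustrated with a plain two-component normal mixture. This generic sketch is not the paper's panel estimator; the data, starting values and all numbers are invented for illustration:

```python
import math, random

def em_two_normals(x, iters=100):
    # crude initialization from the sample quartiles
    xs = sorted(x)
    m1, m2 = xs[len(xs) // 4], xs[3 * len(xs) // 4]
    s1 = s2 = max((xs[-1] - xs[0]) / 4.0, 1e-6)
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for xi in x:
            d1 = pi * math.exp(-(xi - m1) ** 2 / (2 * s1 * s1)) / s1
            d2 = (1 - pi) * math.exp(-(xi - m2) ** 2 / (2 * s2 * s2)) / s2
            r.append(d1 / (d1 + d2))
        # M-step: weighted means, standard deviations and mixing weight
        n1 = sum(r)
        n2 = len(x) - n1
        m1 = sum(ri * xi for ri, xi in zip(r, x)) / n1
        m2 = sum((1 - ri) * xi for ri, xi in zip(r, x)) / n2
        s1 = max(math.sqrt(sum(ri * (xi - m1) ** 2 for ri, xi in zip(r, x)) / n1), 1e-6)
        s2 = max(math.sqrt(sum((1 - ri) * (xi - m2) ** 2 for ri, xi in zip(r, x)) / n2), 1e-6)
        pi = n1 / len(x)
    return pi, m1, s1, m2, s2

rng = random.Random(11)
data = [rng.gauss(0, 1) if rng.random() < 0.6 else rng.gauss(4, 1) for _ in range(2000)]
pi, m1, s1, m2, s2 = em_two_normals(data)
```

With well-separated components the algorithm recovers both means and the 0.6/0.4 mixing weights closely; the paper's contribution is to make this kind of estimator work when each unit also carries a fixed effect.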
  13. By: Massimiliano Caporin (University of Padova); Gabriel G. Velo (University of Padova)
    Abstract: In this paper, we estimate, model and forecast Realized Range Volatility, a new realized measure and estimator of the quadratic variation of financial prices. This estimator was introduced early in the literature and is based on the high-low range observed at high frequency during the day. We consider the impact of microstructure noise in high frequency data and correct our estimates following a known procedure. We then model the Realized Range accounting for the well-known stylized effects present in financial data. We consider an HAR model with asymmetric effects with respect to volatility and returns, and GARCH and GJR-GARCH specifications for the variance equation. Moreover, we also consider a non-Gaussian distribution for the innovations. The analysis of forecast performance across the different periods suggests that including the HAR components in the model improves point forecasting accuracy, while the introduction of asymmetric effects leads only to minor improvements.
    Keywords: Statistical analysis of financial data, Econometrics, Forecasting methods, Time series analysis, Realized Range Volatility, Realized Volatility, Long-memory, Volatility forecasting
    JEL: C22 C52 C53
    Date: 2011–02
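The HAR cascade mentioned in the abstract regresses next-period volatility on its daily value and its weekly and monthly averages. A self-contained sketch on simulated data (the series, the coefficient values and the helper names are illustrative; the paper estimates this on realized range data with GARCH errors, which we do not reproduce):

```python
import random

def har_design(rv):
    # regressors: daily value, 5-day average, 22-day average; target: next value
    X, y = [], []
    for t in range(21, len(rv) - 1):
        d = rv[t]
        w = sum(rv[t - 4:t + 1]) / 5.0
        m = sum(rv[t - 21:t + 1]) / 22.0
        X.append([1.0, d, w, m])
        y.append(rv[t + 1])
    return X, y

def ols_solve(X, y):
    # solve the normal equations (X'X)b = X'y by Gaussian elimination
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r_: abs(A[r_][i]))   # partial pivot
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r_ in range(i + 1, k):
            f = A[r_][i] / A[i][i]
            A[r_] = [a - f * c for a, c in zip(A[r_], A[i])]
            b[r_] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# simulate a persistent positive volatility-like series and fit the HAR cascade
rng = random.Random(3)
rv = [1.0]
for _ in range(600):
    recent = rv[-5:]
    nxt = 0.1 + 0.4 * rv[-1] + 0.5 * sum(recent) / len(recent) + rng.gauss(0, 0.1)
    rv.append(max(0.05, nxt))
beta = ols_solve(*har_design(rv))
```

The sum of the three slope coefficients estimates the overall persistence of the simulated process (about 0.9 here), which is the long-memory-like feature that makes the HAR components valuable for point forecasts.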
  14. By: Cristina Amado (Universidade do Minho - NIPE); Timo Teräsvirta (CREATES, School of Economics and Management, Aarhus University)
    Abstract: In this paper we investigate the effects of carefully modelling the long-run dynamics of the volatilities of stock market returns on the conditional correlation structure. To this end we allow the individual unconditional variances in Conditional Correlation GARCH models to change smoothly over time by incorporating a nonstationary component in the variance equations. The modelling technique to determine the parametric structure of this time-varying component is based on a sequence of specification Lagrange multiplier-type tests derived in Amado and Teräsvirta (2011). The variance equations combine the long-run and the short-run dynamic behaviour of the volatilities. The structure of the conditional correlation matrix is assumed to be either time independent or to vary over time. We apply our model to pairs of seven daily stock returns belonging to the S&P 500 composite index and traded at the New York Stock Exchange. The results suggest that accounting for deterministic changes in the unconditional variances considerably improves the fit of the multivariate Conditional Correlation GARCH models to the data. The effect of careful specification of the variance equations on the estimated correlations is variable: in some cases rather small, in others more discernible. As a by-product, we generalize news impact surfaces to the situation in which both the GARCH equations and the conditional correlations contain a deterministic component that is a function of time.
    Keywords: Multivariate GARCH model; Time-varying unconditional variance; Lagrange multiplier test; Modelling cycle; Nonlinear time series.
    JEL: C12 C32 C51 C52
    Date: 2011
  15. By: Goodwin, Barry K.; Holt, Matthew T.; Prestemon, Jeffrey P.; Onel, Gulcan
    Abstract: An extensive empirical literature has addressed a wide array of issues pertaining to price linkages over space and across time. Empirical models of price linkages have been used to measure market power and to characterize the operation of markets that are separated by space, time, and product form. The long history of these empirical models extends from simple tests of price correlation, to conventional regression tests, to modern time series models that account for nonstationarity, nonlinearities, and threshold behavior in market linkages. This paper proposes an entirely different and potentially novel approach to analyzing these same types of time series data in a nonlinear fashion. Copula-based models that consider the joint distribution of prices separated by space are developed and applied to weekly prices for important lumber products at geographically distinct markets. In particular, we consider prices taken from weekly editions of the Random Lengths publication for homogeneous OSB products.
    Keywords: Spatial Market Linkages, Copula Models, State-dependence, Forest Products, Research Methods/ Statistical Methods,
    Date: 2011–05–03
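As a toy illustration of the copula idea (not the paper's spatial specification), the sketch below draws from a Gaussian copula: correlated normals are pushed through the normal CDF, giving uniform marginals that carry the dependence and can then be attached to any marginal price distribution. The correlation value and names are ours:

```python
import math, random

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_sample(rho, n, seed=0):
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        draws.append((norm_cdf(z1), norm_cdf(z2)))   # dependent uniform marginals
    return draws

pairs = gaussian_copula_sample(0.7, 5000)
# fraction of pairs on the same side of the median;
# for a Gaussian copula this is 1/2 + arcsin(rho)/pi
same_side = sum(1 for u, v in pairs if (u - 0.5) * (v - 0.5) > 0) / len(pairs)
```

Separating the dependence (the copula) from the marginals is exactly what lets the paper model nonlinear spatial linkages between markets whose individual price distributions differ.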
  16. By: Søren Johansen (University of Copenhagen and CREATES); Theis Lange (University of Copenhagen and CREATES)
    Abstract: The purpose of the present paper is to analyse a simple bubble model suggested by Blanchard and Watson. The model is defined by y(t) = s(t)ρy(t-1) + e(t), t=1,…,n, where s(t) is an i.i.d. binary variable with p = P(s(t)=1), independent of e(t), which is i.i.d. with mean zero and finite variance. We take ρ>1 so the process is explosive for a period and collapses when s(t)=0. We apply the drift criterion for non-linear time series to show that the process is geometrically ergodic when p<1, because of the recurrent collapse. It has a finite mean if pρ<1, and a finite variance if pρ²<1. The question we discuss is whether a bubble model with infinite variance can create the long swings, or persistence, which are observed in many macro variables. We say that a variable is persistent if its autoregressive coefficient ρ(n), from the regression of y(t) on y(t-1), is close to one. We show that the estimator of ρ(n) converges to ρp if the variance is finite, but if the variance of y(t) is infinite, we prove the curious result that the estimator converges to ρ⁻¹. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of the product moments of y(t).
    Keywords: Time series, explosive processes, bubble models.
    JEL: C32
    Date: 2011–05–09
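The model stated in the abstract is easy to simulate. A minimal sketch, with parameter values chosen by us so that pρ² < 1 and the finite-variance result applies — in which case the regression coefficient of y(t) on y(t-1) should approach ρp:

```python
import random

def simulate_bubble(rho, p, n, seed=42):
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n):
        s = 1 if rng.random() < p else 0       # bubble survives w.p. p, else collapses
        y.append(s * rho * y[-1] + rng.gauss(0, 1))
    return y

def ar_coef(y):
    # OLS regression coefficient of y(t) on y(t-1), no intercept
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

rho, p = 1.02, 0.8                             # p * rho**2 = 0.832 < 1: finite variance
est = ar_coef(simulate_bubble(rho, p, 100000))
```

Here ρp = 0.816, so the fitted autoregressive coefficient understates the explosive ρ; pushing pρ² above 1 (the infinite-variance regime) would instead send the estimator toward ρ⁻¹, the curious result the paper proves.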
  17. By: Nicoletti, Cheti; Best, Nicky G.
    Abstract: Analyses using aggregated data may lead to biased inference. In this work we show how to avoid, or at least reduce, this bias when estimating quantile regressions with aggregated information. This is possible by considering the unconditional quantile regression recently introduced by Firpo et al. (2009) and using a specific strategy to aggregate the data.
    Date: 2011–05–13
  18. By: Victor DeMiguel; Alberto Martín Utrera; Francisco J. Nogales
    Abstract: Shrinkage estimation is an area widely studied in statistics. In this paper, we contemplate the role of shrinkage estimators in the construction of the investor's portfolio. We study the performance of shrinking the sample moments used to estimate portfolio weights, as well as the performance of shrinking the naive sample portfolio weights themselves. We provide a theoretical and empirical analysis of different new methods to calibrate shrinkage estimators within portfolio optimization.
    Keywords: Portfolio choice, Estimation error, Shrinkage estimators, Smoothed bootstrap
    Date: 2011–05
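A stylized two-asset sketch of the first strategy in the abstract: shrink the sample covariance matrix toward a scaled-identity target before forming minimum-variance weights. The shrinkage intensity is fixed here, whereas calibrating it is exactly what the paper studies; all numbers are illustrative:

```python
import math, random

def sample_cov(r1, r2):
    n = len(r1)
    m1, m2 = sum(r1) / n, sum(r2) / n
    v1 = sum((a - m1) ** 2 for a in r1) / (n - 1)
    v2 = sum((a - m2) ** 2 for a in r2) / (n - 1)
    c = sum((a - m1) * (b - m2) for a, b in zip(r1, r2)) / (n - 1)
    return [[v1, c], [c, v2]]

def shrink(cov, delta):
    # convex combination of the sample covariance and a scaled-identity target
    mu = (cov[0][0] + cov[1][1]) / 2
    return [[(1 - delta) * cov[0][0] + delta * mu, (1 - delta) * cov[0][1]],
            [(1 - delta) * cov[1][0], (1 - delta) * cov[1][1] + delta * mu]]

def min_var_weights(cov):
    # minimum-variance weights are proportional to inverse(cov) @ ones
    (a, b), (_, d) = cov
    raw = [d - b, a - b]
    s = raw[0] + raw[1]
    return [raw[0] / s, raw[1] / s]

rng = random.Random(7)
rho = 0.5
r1, r2 = [], []
for _ in range(24):                     # a short sample: two years of monthly returns
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    r1.append(0.02 * z1)
    r2.append(0.03 * (rho * z1 + math.sqrt(1 - rho * rho) * z2))
S = sample_cov(r1, r2)
w_raw = min_var_weights(S)
w_shr = min_var_weights(shrink(S, 0.5))
```

Shrinking the covariance pulls the weights toward the equal-weight portfolio, damping the estimation error that makes raw sample-based weights extreme in short samples.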
  19. By: Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen
    Abstract: The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences depends on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A number of simulations are conducted examining the impact of sample size, nonlinear predictors, and multicollinearity on substantive inferences (e.g. odds ratios, marginal effects) and goodness of fit (e.g. pseudo-R2, predictability) of logistic regression models. Findings suggest that sample size can affect parameter estimates and inferences in the presence of multicollinearity and nonlinear predictor functions, but marginal effects estimates are relatively robust to sample size.
    Keywords: Logistic Regression Model, Multicollinearity, Nonlinearity, Robustness, Small Sample Bias, Research Methods/ Statistical Methods,
    Date: 2011
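A small Monte Carlo in the spirit of the study: fit a one-predictor logit by Newton-Raphson and compare the slope estimates across sample sizes. The well-known small-sample bias of the logit MLE away from zero, and its sampling noise, should both shrink as n grows. The simulation design below is ours, not the paper's:

```python
import math, random

def logit_mle(x, y, iters=25):
    # Newton-Raphson for a logit with intercept and one predictor
    b0 = b1 = 0.0
    for _ in range(iters):
        z = [max(min(b0 + b1 * xi, 35.0), -35.0) for xi in x]   # avoid exp overflow
        p = [1.0 / (1.0 + math.exp(-zi)) for zi in z]
        g0 = sum(yi - pi for yi, pi in zip(y, p))
        g1 = sum((yi - pi) * xi for yi, pi, xi in zip(y, p, x))
        w = [pi * (1.0 - pi) for pi in p]
        h00 = sum(w)
        h01 = sum(wi * xi for wi, xi in zip(w, x))
        h11 = sum(wi * xi * xi for wi, xi in zip(w, x))
        det = h00 * h11 - h01 * h01
        b0 += ( h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
        b0 = max(min(b0, 10.0), -10.0)        # guard against separation blow-ups
        b1 = max(min(b1, 10.0), -10.0)
    return b0, b1

def slope_dist(n, reps, seed=0, true_b1=1.0):
    # mean and RMSE of the slope estimate over repeated samples of size n
    rng = random.Random(seed)
    est = []
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [1 if rng.random() < 1.0 / (1.0 + math.exp(-true_b1 * xi)) else 0
             for xi in x]
        est.append(logit_mle(x, y)[1])
    m = sum(est) / reps
    rmse = (sum((e - true_b1) ** 2 for e in est) / reps) ** 0.5
    return m, rmse

m30, rmse30 = slope_dist(30, 200)
m500, rmse500 = slope_dist(500, 150, seed=1)
```

At n = 30 the mean slope sits noticeably above the true value of 1 and the RMSE is large; at n = 500 both bias and dispersion are modest — the sample-size sensitivity the study documents.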
  20. By: Stefan Hoderlein (Institute for Fiscal Studies and Boston College); Yuya Sasaki
    Abstract: <p>This paper contributes to the understanding of the source of identification in panel data models. Recent research has established that few time periods suffice to identify interesting structural effects in nonseparable panel data models even in the presence of complex correlated unobservables, provided these unobservables are time invariant. A commonality of all of these approaches is that they point identify effects only for subpopulations. In this paper we focus on average partial derivatives and continuous explanatory variables. We elaborate on the parallel between time in panels and instrumental variables in cross sections and establish that point identification is generically only possible in specific subpopulations, for finite T. Moreover, for general subpopulations, we provide sharp bounds. We further show that these bounds converge to point identification only as T tends to infinity. We systematize this behavior by comparing it to increasing the number of support points of an instrument. Finally, we apply all of these concepts to the semiparametric panel binary choice model and establish that these issues determine the rates of convergence of estimators for the slope coefficient.</p>
    Date: 2011–05
  21. By: Shigeru Iwata; Han Li
    Abstract: When a certain procedure is applied to extract two component processes from a single observed process, it is necessary to impose a set of restrictions that defines the two components. One popular restriction is the assumption that the shocks to the trend and cycle are orthogonal. Another is the assumption that the trend is a pure random walk process. The unobserved components (UC) model (Harvey, 1985) assumes both of the above, whereas the BN decomposition (Beveridge and Nelson, 1981) assumes only the latter. Quah (1992) investigates a broad class of decompositions by making the former assumption only. This paper provides a general framework in which alternative trend-cycle decompositions are regarded as special cases, and examines alternative decomposition schemes from the perspective of the frequency domain. We find that, as far as US GDP is concerned, the conventional UC model is inappropriate for the trend-cycle decomposition. We agree with Morley et al (2003) that the UC model is simply misspecified. However, this does not imply that a UC model that allows for correlated shocks is a better model specification. The correlated UC model would lose many attractive features of the conventional UC model.
    Keywords: Beveridge-Nelson decomposition, Unobserved Component Models
    JEL: E44 F36 G15
    Date: 2011–03
  22. By: Antoni, Espasa; Iván, Mayo
    Abstract: The paper focuses on providing joint consistent forecasts for an aggregate and all its components, and on showing that this indirect forecast of the aggregate is at least as accurate as the direct one. The procedure developed in the paper is a disaggregated approach based on single-equation models for the components, which take into account stable features that some components share. The procedure is applied to forecasting euro area, UK and US inflation, and it is shown that its forecasts are significantly more accurate than those obtained by the direct forecast of the aggregate or by dynamic factor models. A by-product of the procedure is the classification of a large number of components by the restrictions they share, which could also be useful in other respects, such as the application of dynamic factors, the definition of intermediate aggregates or the formulation of models with unobserved components.
    Keywords: Common trends, Common serial correlation, Inflation, Euro Area, UK, US, Cointegration, Single-equation econometric models
    Date: 2011–04
  23. By: Donald W.K. Andrews (Cowles Foundation, Yale University)
    Abstract: Completeness and bounded-completeness conditions are used increasingly in econometrics to obtain nonparametric identification in a variety of models from nonparametric instrumental variable regression to non-classical measurement error models. However, distributions that are known to be complete or boundedly complete are somewhat scarce. In this paper, we consider an L^2-completeness condition that lies between completeness and bounded completeness. We construct broad (nonparametric) classes of distributions that are L^2-complete and boundedly complete. The distributions can have any marginal distributions and a wide range of strengths of dependence. Examples of L^2-incomplete distributions also are provided.
    JEL: C14
    Date: 2011–05
  24. By: Ghosh, Somali; Woodard, Joshua D.; Vedenov, Dmitry V.
    Abstract: The association between prices and yields is of paramount importance to crop insurance programs, and its proper estimation is highly desirable. Copulas are one method to measure the dependence structure. Five single parametric copulas, a nonparametric copula, and their fifteen different combinations, taking a mixture of two different copulas at a time, are used in the crop insurance rating analysis. Using corn data from 1973-2009 for 602 counties in the Mid-West area, two efficient methods are proposed to generate the optimal mixtures using a cross-validation approach. A resampling technique is used to check the significance of the expected indemnities.
    Keywords: Copulas, Crop Insurance, Cross-Validation, Empirical distribution, GRIP, Indemnities, Out-Of-Sample Log-Likelihood, Agricultural Finance, Q14,
    Date: 2011
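As a hedged illustration of the mixture-of-copulas idea (not the authors' five-copula setup), the sketch below mixes a Clayton copula density with the independence copula and picks the mixture weight by out-of-sample log-likelihood on a held-out split; the θ value and sample sizes are invented.

```python
import numpy as np

def clayton_density(u, v, theta):
    """Clayton copula density c(u, v; theta)."""
    s = u**-theta + v**-theta - 1.0
    return (theta + 1.0) * (u * v)**(-theta - 1.0) * s**(-(2.0 * theta + 1.0) / theta)

def clayton_sample(n, theta, rng):
    """Draw from a Clayton copula by conditional inversion."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u**-theta * (w**(-theta / (1.0 + theta)) - 1.0) + 1.0)**(-1.0 / theta)
    return u, v

rng = np.random.default_rng(0)
theta = 3.0
u_tr, v_tr = clayton_sample(2000, theta, rng)   # training split
u_te, v_te = clayton_sample(2000, theta, rng)   # held-out split

# choose the weight on the Clayton component by out-of-sample log-likelihood;
# the second mixture component is the independence copula (density = 1)
weights = np.linspace(0.0, 1.0, 11)
oos = [np.log(w * clayton_density(u_te, v_te, theta) + (1 - w)).sum() for w in weights]
best_w = weights[int(np.argmax(oos))]
```

Since the held-out data are genuinely Clayton-dependent, the out-of-sample criterion puts most of the weight on the Clayton component.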
  25. By: Chen, Xuan; Goodwin, Barry K.
    Abstract: Our study focuses on modeling southern pine beetle (SPB) outbreaks in the southern region. The approach is to evaluate SPB outbreak frequency in a spatio-temporal framework. A block bootstrapping method with zero-inflated estimation is proposed to construct a statistical model that accounts for explanatory variables while adjusting for spatial and temporal autocorrelation. Although the bootstrap (Efron 1979) handles independent observations well, the strong autocorrelation of SPB outbreaks poses a major challenge. Motivated by the overlapping-blocks bootstrap for autoregressive time series (Kunsch 1989) and the block bootstrap for dependent data on a spatial map (Hall 1985), we develop a method to bootstrap overlapping spatio-temporal blocks. By selecting an appropriate block size, the spatio-temporal correlation can be eliminated. A second challenge arises from the fact that the distribution of SPB spots places heavy weight on zero. To accommodate this, zero-inflated models are adopted in the estimation stage. With our spatio-temporal block bootstrapping approach, the impacts of environmental factors on SPB outbreaks and the implications for pine forest management are assessed. Almost all the explanatory variables, including drought, temperature, forest ecosystem and hurricanes, are found to have significant impacts. Forestland size and the government share of forestland contribute significantly and positively to SPB outbreaks. Our method also offers a way to forecast the frequency of future SPB outbreaks given a county's current environmental information.
    Keywords: Southern Pine Beetle, Block Bootstrapping, Risk and Uncertainty
    Date: 2011
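The overlapping spatio-temporal block bootstrap described above can be sketched generically: draw overlapping blocks from a (time, x, y) grid at random and tile them into a surrogate array. This is an illustration on a hypothetical grid of outbreak counts, not the authors' implementation, and the block sizes are arbitrary.

```python
import numpy as np

def block_bootstrap_3d(data, block, rng):
    """Resample a (time, x, y) array by drawing overlapping spatio-temporal
    blocks uniformly at random and tiling them into a surrogate array of
    the same shape; edge tiles are clipped to fit."""
    T, X, Y = data.shape
    bt, bx, by = block
    out = np.empty_like(data)
    for t0 in range(0, T, bt):
        for x0 in range(0, X, bx):
            for y0 in range(0, Y, by):
                # random top-left-front corner of an overlapping source block
                t = rng.integers(0, T - bt + 1)
                x = rng.integers(0, X - bx + 1)
                y = rng.integers(0, Y - by + 1)
                dt = min(bt, T - t0)
                dx = min(bx, X - x0)
                dy = min(by, Y - y0)
                out[t0:t0+dt, x0:x0+dx, y0:y0+dy] = data[t:t+dt, x:x+dx, y:y+dy]
    return out

# hypothetical grid of SPB spot counts: 12 periods on an 8-by-8 county grid
rng = np.random.default_rng(42)
field = rng.poisson(2.0, size=(12, 8, 8))
surrogate = block_bootstrap_3d(field, (3, 2, 2), rng)
```

Because blocks are copied intact, short-range dependence within a block is preserved while dependence across blocks is broken, which is the point of the method.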
  26. By: Tsung Yu, Yang
    Abstract: Two basic characteristics of yield data need to be accommodated when modeling a yield distribution. The first is the nonstationary nature of the yield distribution, which causes heteroscedasticity-related problems. The second is the left skewness of the yield distribution. A common approach is a two-stage method in which the yields are detrended first and the detrended yields are then treated as observed data to be modeled by various parametric and nonparametric methods. Within this two-stage estimation structure, a mixed normal distribution appears to capture the secondary distribution arising from catastrophic years better than a Beta distribution. The implication for risk management is that yield risk may be underestimated under the common choice of a Beta distribution. A mixed normal distribution with a time-varying structure, under which the parameters are allowed to vary over time, tends to collapse to a single normal distribution. The time-varying mixed normal model fits the realized yield data in one step, which avoids the possible bias caused by sampling variability. Moreover, the time-varying parameters imply that premium rates can be adjusted to reflect the most recent information, which raises the efficiency of the insurance market.
    Keywords: Time-Varying Distribution, Mixture Distribution, Crop Insurance, Agricultural Finance, Crop Production/Industries, Research Methods/ Statistical Methods, Risk and Uncertainty,
    Date: 2011
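A minimal sketch of fitting a two-component mixed normal by EM follows; it is a generic illustration, not the paper's time-varying estimator, and the yield numbers are invented. The low-mean component plays the role of the "secondary distribution" from catastrophic years.

```python
import numpy as np

def em_mixture2(y, n_iter=200):
    """Fit a two-component normal mixture by EM, returning the
    mixing weights, component means and standard deviations."""
    mu = np.array([y.mean() - y.std(), y.mean() + y.std()])
    sd = np.array([y.std(), y.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component for each point
        dens = np.stack([pi[k] / (sd[k] * np.sqrt(2 * np.pi))
                         * np.exp(-0.5 * ((y - mu[k]) / sd[k])**2) for k in (0, 1)])
        r = dens / dens.sum(axis=0)
        # M-step: reweighted moment updates
        nk = r.sum(axis=1)
        pi = nk / len(y)
        mu = (r * y).sum(axis=1) / nk
        sd = np.sqrt((r * (y - mu[:, None])**2).sum(axis=1) / nk)
    return pi, mu, sd

# hypothetical detrended yields: 80% normal years, 20% catastrophic years
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(100, 5, 1600), rng.normal(60, 8, 400)])
pi, mu, sd = em_mixture2(y)
```

The left-skewed shape comes from the low-mean, higher-variance catastrophic component, which a single Beta or normal fit tends to smooth over.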
  27. By: Gibson, Fiona L.; Burton, Michael P.
    Abstract: Observed and unobserved characteristics of an individual are often used by researchers to explain choices over the provision of environmental goods. One means of identifying what is typically an unobserved characteristic, such as an attitude, is through a data reduction technique such as factor analysis. However, the resultant variable represents the true attitude with measurement error and hence, when included in a non-linear choice model, introduces bias. There are well-established methods to overcome this issue, but they are seldom implemented. In an application to preferences over two water source alternatives for Perth in Western Australia, we use structural equation modeling within a discrete choice model to determine whether welfare measures are significantly affected by ignoring measurement error in latent attitudes, and to assess the advantage to policy makers of understanding what drives certain attitudes.
    Keywords: contingent valuation, attitudes, structural equation modeling, recycled water, Environmental Economics and Policy, Research Methods/ Statistical Methods, Q51, Q53, C13,
    Date: 2011–05–02
  28. By: Franke, Reiner; Westerhoff, Frank
    Abstract: In the framework of small-scale agent-based financial market models, the paper starts out from the concept of structural stochastic volatility, which derives from different noise levels in the demand of fundamentalists and chartists and the time-varying market shares of the two groups. It advances several different specifications of the endogenous switching between the trading strategies and then estimates these models by the method of simulated moments (MSM), where the choice of moments reflects the basic stylized facts of the daily returns of a stock market index. In addition to the standard version of MSM with a quadratic loss function, we also take into account how often a large number of Monte Carlo simulation runs yield moments that are all contained within their empirical confidence intervals. The model contest along these lines reveals a strong role for a (tamed) herding component. The quantitative performance of the winning model is so good that it may provide a standard for future research.
    Keywords: Method of simulated moments, moment coverage ratio, herding, discrete choice approach, transition probability approach
    JEL: D84 G12 G14 G15
    Date: 2011
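The moment coverage ratio mentioned above, the share of simulation runs whose moments all fall inside the empirical confidence intervals, can be sketched generically (array shapes and numbers below are invented):

```python
import numpy as np

def moment_coverage_ratio(sim_moments, ci_lo, ci_hi):
    """Fraction of simulated runs (rows of sim_moments) whose entire
    moment vector lies inside the empirical confidence intervals."""
    inside = (sim_moments >= ci_lo) & (sim_moments <= ci_hi)
    return inside.all(axis=1).mean()

# three hypothetical runs, two moments each; intervals are [-1, 1] for both
runs = np.array([[0.1, 0.2],
                 [1.5, 0.1],
                 [0.0, -0.3]])
mcr = moment_coverage_ratio(runs,
                            ci_lo=np.array([-1.0, -1.0]),
                            ci_hi=np.array([1.0, 1.0]))
```

Here the second run violates the first interval, so the ratio is 2/3. Unlike a quadratic loss, this criterion rewards a model for matching all moments jointly rather than trading off a large miss on one against precision on the others.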
  29. By: Johnston, Robert J.; Ramachandran, Mahesh; Schultz, Eric T.; Segerson, Kathleen; Besedin, Elena Y.
    Abstract: Stated preference analyses commonly impose strong and unrealistic assumptions in response to spatial welfare heterogeneity. These include spatial homogeneity or continuous distance decay. Despite their ubiquity in the valuation literature, global assumptions such as these have been increasingly abandoned by non-economics disciplines in favor of approaches that allow for spatial patchiness. This paper develops parallel methods to evaluate local patchiness and hot spots in stated preference welfare estimates, characterizing relevant patterns overlooked by traditional approaches. The analysis draws from a choice experiment addressing river restoration. Results demonstrate shortcomings in standard treatments of spatial heterogeneity and insights available through alternative methods.
    Keywords: Willingness to Pay, Hot Spot, Stated Preference, Ecosystem Service, Valuation, Environmental Economics and Policy, Research Methods/ Statistical Methods,
    Date: 2011
  30. By: Yoshitsugu Kitazawa (Faculty of Economics, Kyushu Sangyo University)
    Abstract: This note proposes the average elasticity of the logit probabilities with respect to the exponential functions of the explanatory variables in the framework of the fixed effects logit model. The average elasticity can be calculated using the consistent estimators of the parameters of interest and the average of the binary dependent variables, regardless of the fixed effects.
    Keywords: average elasticity; fixed effects logit model
    JEL: C23
    Date: 2011–05
  31. By: Ryota Yabe
    Abstract: This paper derives the asymptotic distribution of Tanaka's score statistic under moderate deviations from a unit root in a moving average model of order one, or MA(1). We classify the limiting distribution into three types depending on the order of the deviation. In the fastest case, the convergence order of the asymptotic distribution changes continuously from that of the invertible process to that of the unit root. In the slowest case, the limiting distribution coincides with that of the invertible process in a distributional sense, implying that these cases share an asymptotic property. The limiting distribution in the intermediate case characterizes the boundary between the fastest and slowest cases.
    Date: 2011–02
  32. By: Hernandez, Monica; Pudney, Stephen
    Abstract: We investigate the consequences of using time-invariant individual effects in panel data models when the unobservables are in fact time-varying. Using data from the British Offending, Crime and Justice panel, we estimate a dynamic factor model of the occurrence of a range of illicit activities as outcomes of young people's development processes. This structure is then used to demonstrate that relying on the assumption of time-invariant individual effects to deal with confounding factors in a conventional dynamic panel data model is likely to lead to spurious gateway effects linking cannabis use to subsequent hard drug use.
    Date: 2011–05–14
  33. By: Luís Francisco Aguiar (Universidade do Minho - NIPE); Maria Joana Soares (Universidade do Minho - Departamento de Matemática)
    Abstract: Economists are already familiar with the Discrete Wavelet Transform. However, a body of work using the Continuous Wavelet Transform has also been growing. We provide a self-contained summary on continuous wavelet tools, such as the Continuous Wavelet Transform, the Cross-Wavelet, the Wavelet Coherency and the Phase-Difference. Furthermore, we generalize the concept of simple coherency to Partial Wavelet Coherency and Multiple Wavelet Coherency, akin to partial and multiple correlations, allowing the researcher to move beyond bivariate analysis. Finally, we describe the Generalized Morse Wavelets, a class of analytic wavelets recently proposed. A user-friendly toolbox, with examples, is attached to this paper.
    Keywords: Continuous Wavelet Transform, Cross-Wavelet Transform, Wavelet Coherency, Partial Wavelet Coherency, Multiple Wavelet Coherency, Wavelet Phase-Difference; Economic fluctuations
    Date: 2011
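For readers who want a feel for the continuous wavelet tools surveyed here, the following is a bare-bones Morlet CWT by direct convolution: an illustrative, O(N²)-per-scale sketch, not the toolbox accompanying the paper. The choice ω0 = 6 is conventional, making the Fourier period roughly 1.03 times the scale.

```python
import numpy as np

def morlet_cwt(x, scales, omega0=6.0, dt=1.0):
    """Continuous wavelet transform with an (approximately analytic)
    Morlet wavelet, computed by direct convolution at each scale."""
    n = len(x)
    t = (np.arange(n) - n // 2) * dt
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        u = t / s
        psi = np.pi**-0.25 * np.exp(1j * omega0 * u - u**2 / 2.0) / np.sqrt(s)
        W[i] = np.convolve(x, np.conj(psi)[::-1], mode="same") * dt
    return W

# a pure oscillation with period 16 should concentrate wavelet power
# near scale ~ period / 1.03 for omega0 = 6
n = 512
x = np.cos(2 * np.pi * np.arange(n) / 16.0)
scales = np.arange(6.0, 31.0)
power = np.abs(morlet_cwt(x, scales))**2
mean_power = power[:, n // 4 : 3 * n // 4].mean(axis=1)  # avoid edge effects
peak_scale = scales[int(np.argmax(mean_power))]
```

Cross-wavelet and coherency quantities of the kind discussed in the paper are then built from products such as W_x * conj(W_y) across two series, smoothed in time and scale.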
  34. By: Chen, Min; Lupi, Frank
    Abstract: Researchers have used the latent class model (LCM) to value recreational activities for years, yet the reliability of this model remains an open question. We conduct Monte Carlo simulations to test whether the latent class model can recover the truth. The simulation results show that the LCM does a better job of recovering population average values than of recovering the underlying population segments.
    Keywords: Monte Carlo Simulations, Latent Class Model, Environmental Economics and Policy,
    Date: 2011

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.