nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒06‒26
sixteen papers chosen by
Sune Karlsson
Örebro University

  1. Inference Based on Conditional Moment Inequalities By Donald W.K. Andrews; Xiaoxia Shi
  2. Inference on Time-Invariant Variables using Panel Data: A Pre-Test Estimator with an Application to the Returns to Schooling By Jean-Bernard Chatelain; Kirsten Ralf
  3. Explicit Solutions for the Asymptotically-Optimal Bandwidth in Cross Validation By Karim M. Abadir; Michel Lubrano
  4. Model-Free Estimation of Large Variance Matrices By Karim M. Abadir; Walter Distaso; Filip Žikeš
  5. Evaluating real-time VAR forecasts with an informative democratic prior By Jonathan H. Wright
  6. An I(d) Model with Trend and Cycles By Karim M. Abadir; Walter Distaso; Liudas Giraitis
  7. On Asymptotic Properties of the Parameters of Differentiated Product Demand and Supply Systems When Demographically-Categorized Purchasing Pattern Data are Available By Satoshi Myojo; Yuichiro Kanazawa
  8. Kernel smoothing end of sample instability tests P values By Patrick Richard
  9. Quantile Treatment Effects in the Regression Discontinuity Design: Process Results and Gini Coefficient By Frölich, Markus; Melly, Blaise
  10. Non-Hermitean Wishart random matrices (I) By Eugene Kanzieper; Navinder Singh
  11. Simplicial similarity and its application to hierarchical clustering By Ángel López; Juan Romo
  12. Exact and high order discretization schemes for Wishart processes and their affine extensions By Abdelkoddousse Ahdida; Aurélien Alfonsi
  13. The choice between fixed and random effects models: some considerations for educational research By Paul Clarke; Claire Crawford; Fiona Steele; Anna Vignoles
  14. Concave-Monotone Treatment Response and Monotone Treatment Selection: With an Application to the Returns to Schooling By Okumura, Tsunao; Usui, Emiko
  15. Testing construct validity of verbal versus numerical measures of preference uncertainty in contingent valuation By Sonia Akter; Jeff Bennett
  16. Using M-quantile models as an alternative to random effects to model the contextual value-added of schools in London By Nikos Tzavidis; James J Brown

  1. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Xiaoxia Shi (Department of Economics, Yale University)
    Abstract: In this paper, we propose an instrumental variable approach to constructing confidence sets (CS's) for the true parameter in models defined by conditional moment inequalities/equalities. We show that by properly choosing instrument functions, one can transform conditional moment inequalities/equalities into unconditional ones without losing identification power. Based on the unconditional moment inequalities/equalities, we construct CS's by inverting Cramer-von Mises-type or Kolmogorov-Smirnov-type tests. Critical values are obtained using generalized moment selection (GMS) procedures. We show that the proposed CS's have correct uniform asymptotic coverage probabilities. New methods are required to establish these results because an infinite-dimensional nuisance parameter affects the asymptotic distributions. We show that the tests considered are consistent against all fixed alternatives and have power against some n^{-1/2}-local alternatives, though not all such alternatives. Monte Carlo simulations for three different models show that the methods perform well in finite samples.
    Keywords: Asymptotic size, asymptotic power, conditional moment inequalities, confidence set, Cramer-von Mises, generalized moment selection, Kolmogorov-Smirnov, moment inequalities
    JEL: C12 C15
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1761&r=ecm
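To make the construction in entry 1 concrete, here is a minimal numerical sketch (not the Andrews-Shi procedure itself): a single conditional moment inequality E[m(W,theta)|X] >= 0 is converted into unconditional moments using box-indicator instrument functions, and a Cramer-von Mises-type statistic is formed from the violated moments. Generalized moment selection critical values, the central contribution of the paper, are omitted, and all names below are illustrative.

```python
# Illustrative sketch only: box instruments turn a conditional moment
# inequality into unconditional ones; a CvM-type statistic penalises
# violations. GMS critical values are not implemented here.
import numpy as np

def cvm_statistic(m, x, n_cubes=10):
    """m: (n,) moment evaluations m(W_i, theta); x: (n,) conditioning variable."""
    n = len(m)
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_cubes + 1))
    stat = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        g = (x >= lo) & (x <= hi)                    # box instrument function
        mg = m * g
        mbar, sd = mg.mean(), mg.std(ddof=1) + 1e-12
        stat += max(-np.sqrt(n) * mbar / sd, 0.0) ** 2   # penalise violations only
    return stat

rng = np.random.default_rng(0)
x = rng.uniform(size=500)
m = 0.5 + rng.normal(size=500)                       # inequality satisfied in truth
print(cvm_statistic(m, x))
```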
  2. By: Jean-Bernard Chatelain (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I); Kirsten Ralf (PSE - Paris-Jourdan Sciences Economiques - CNRS : UMR8545 - Ecole des Hautes Etudes en Sciences Sociales (EHESS) - Ecole Nationale des Ponts et Chaussées - Ecole Normale Supérieure de Paris - ENS Paris)
    Abstract: This paper proposes a new pre-test estimator of panel data models including time-invariant variables, based upon the Mundlak-Krishnakumar estimator and an "unrestricted" Hausman-Taylor estimator. The paper evaluates the biases of currently used restricted estimators, which omit the time average of at least one endogenous time-varying explanatory variable. The repeated between, ordinary least squares, two-stage restricted between, Oaxaca-Geisler, fixed-effects vector decomposition, and generalized least squares estimators may lead to wrong conclusions regarding the statistical significance of the estimated parameters of time-invariant variables.
    Keywords: Time-Invariant Variables, Panel data, Time-Series Cross-Sections, Pre-Test Estimator, Mundlak Estimator, Fixed Effects Vector Decomposition
    Date: 2010–01–15
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:hal-00492039_v1&r=ecm
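The bias discussed in entry 2 is easy to see in a toy Mundlak-style regression: including the individual time averages of the endogenous time-varying regressor moves the coefficient on the time-invariant variable toward its true value, while omitting them does not. The sketch below uses hypothetical variable names and simple OLS, not the paper's pre-test estimator.

```python
# Toy Mundlak-style illustration (assumed DGP, illustrative names only).
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 5
a = rng.normal(size=N)                              # individual effect component
x = a[:, None] + rng.normal(size=(N, T))            # time-varying, correlated with a
z = a + rng.normal(size=N)                          # time-invariant, correlated with a
y = 1.0 * x + 2.0 * z[:, None] + a[:, None] + rng.normal(size=(N, T))

yl, xl = y.ravel(), x.ravel()
zl = np.repeat(z, T)
xbar = np.repeat(x.mean(axis=1), T)                 # Mundlak time averages

def ols(y, cols):
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("with time average  [const, x, z, xbar]:", ols(yl, [xl, zl, xbar]))  # z close to 2
print("omitting average   [const, x, z]:      ", ols(yl, [xl, zl]))        # z biased
```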
  3. By: Karim M. Abadir (Imperial College, London, UK); Michel Lubrano (Greqam-Cnrs, Centre de la Vieille Charité, Marseille, France)
    Abstract: Least squares cross-validation (CV) methods are often used for automated bandwidth selection. We show that they share a common structure that has an explicit asymptotic solution, which we derive. Using the framework of density estimation, we consider unbiased, biased, and smoothed CV methods. We show that, with a Student t(v) kernel, which includes the Gaussian as a special case, the CV criterion becomes asymptotically equivalent to a simple polynomial. This leads to optimal-bandwidth solutions that dominate the usual CV methods, always in terms of simplicity and speed of calculation, and often also in terms of integrated squared error, because the robustness of our asymptotic solution alleviates the notorious sample variability of CV. We present simulations to illustrate these features and to give practical guidance on the choice of v.
    Keywords: bandwidth choice; cross validation; nonparametric density estimation; analytical solution
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:16_10&r=ecm
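For context on entry 3, this is the numerical search that the paper's explicit asymptotic solution replaces: the standard unbiased (least-squares) cross-validation criterion with a Gaussian kernel, minimised over a grid of bandwidths.

```python
# Brute-force LSCV with a Gaussian kernel; the paper derives a closed-form
# asymptotic alternative to this grid search.
import numpy as np

def lscv(h, x):
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    conv = np.exp(-d**2 / 4.0) / (2.0 * np.sqrt(np.pi))   # (K*K), K = N(0,1) density
    kern = np.exp(-d**2 / 2.0) / np.sqrt(2.0 * np.pi)
    term1 = conv.sum() / (n**2 * h)
    term2 = 2.0 * (kern.sum() - kern.trace()) / (n * (n - 1) * h)
    return term1 - term2

rng = np.random.default_rng(2)
x = rng.normal(size=400)
grid = np.linspace(0.05, 1.0, 96)
h_cv = grid[np.argmin([lscv(h, x) for h in grid])]
print("LSCV bandwidth:", h_cv, " Silverman rule:", 1.06 * x.std() * len(x) ** -0.2)
```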
  4. By: Karim M. Abadir (Imperial College London); Walter Distaso (Imperial College London); Filip Žikeš (Imperial College London)
    Abstract: This paper introduces a new method for estimating large variance matrices. Starting from the orthogonal decomposition of the sample variance matrix, we exploit the fact that orthogonal matrices are never ill-conditioned and therefore focus on improving the estimation of the eigenvalues. We estimate the eigenvectors from just a fraction of the data, then use them to transform the data into approximately orthogonal series that we use to estimate a well-conditioned matrix of eigenvalues. Our estimator is model-free: we make no assumptions on the distribution of the random sample or on any parametric structure the variance matrix may have. By design, it delivers well-conditioned estimates regardless of the dimension of the problem and the number of observations available. Simulation evidence shows that the new estimator outperforms the usual sample variance matrix, not only by achieving a substantial improvement in the condition number (as expected), but also by achieving much lower error norms that measure its deviation from the true variance.
    Keywords: variance matrices, ill-conditioning, mean squared error, mean absolute deviations, resampling
    JEL: C10
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:17_10&r=ecm
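A rough sketch of the splitting idea described in entry 4 (an illustration of the general approach, not the authors' exact estimator): eigenvectors are taken from a fraction of the sample, and eigenvalues from the variances of the remaining, approximately orthogonalised observations.

```python
# Split-sample variance matrix sketch: eigenvectors from one part of the data,
# eigenvalues from the rotated remainder; compare condition numbers.
import numpy as np

def split_variance_estimate(X, frac=0.4):
    n = X.shape[0]
    m = int(frac * n)
    Xc = X - X.mean(axis=0)
    S1 = np.cov(Xc[:m], rowvar=False)            # first fraction: eigenvectors only
    _, P = np.linalg.eigh(S1)
    Z = Xc[m:] @ P                               # approximately orthogonal series
    lam = Z.var(axis=0, ddof=1)                  # well-conditioned eigenvalue estimates
    return P @ np.diag(lam) @ P.T

rng = np.random.default_rng(3)
p, n = 30, 120
A = rng.normal(size=(p, p))
Sigma = A @ A.T / p + np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
print("condition number, sample:", np.linalg.cond(np.cov(X, rowvar=False)),
      " split estimator:", np.linalg.cond(split_variance_estimate(X)))
```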
  5. By: Jonathan H. Wright
    Abstract: This paper proposes Bayesian forecasting in a vector autoregression using a democratic prior. This prior is chosen to match the predictions of survey respondents. In particular, the unconditional mean for each series in the vector autoregression is centered around long-horizon survey forecasts. Heavy shrinkage toward the democratic prior is found to give good real-time predictions of a range of macroeconomic variables, as these survey projections are good at quickly capturing endpoint-shifts.
    Keywords: Forecasting ; Real-time data
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:10-19&r=ecm
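A crude stand-in for the idea in entry 5, not Wright's Bayesian implementation: imposing the survey long-horizon forecast as the unconditional mean (here by fitting the VAR to deviations from it) makes forecasts revert to the survey value rather than to the sample mean, which is the mechanism behind the endpoint-shift argument. The variable names and the degenerate "infinite shrinkage" treatment of the mean are assumptions for illustration.

```python
# Mean-adjusted VAR(1) sketch: deviations from the survey long-run forecast
# mu_star, slopes by OLS, forecasts revert to mu_star.
import numpy as np

def var1_forecast_to_survey(Y, mu_star, horizon=12):
    """Y: (T, k) data; mu_star: (k,) long-horizon survey forecasts (taken as given)."""
    Z = Y - mu_star                                   # deviations from survey mean
    A = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)[0].T   # VAR(1) slope matrix
    path, z = [], Z[-1]
    for _ in range(horizon):
        z = A @ z
        path.append(mu_star + z)                      # forecast reverts to mu_star
    return np.array(path)

rng = np.random.default_rng(4)
Y = np.cumsum(rng.normal(size=(200, 2)) * 0.1, axis=0) + np.array([2.0, 4.0])
print(var1_forecast_to_survey(Y, mu_star=np.array([2.5, 4.5]))[-1])
```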
  6. By: Karim M. Abadir (Imperial College Business School, Imperial College London, London, UK); Walter Distaso (Imperial College Business School, Imperial College London, London, UK); Liudas Giraitis (Department of Economics, Queen Mary, University of London, London, UK)
    Abstract: This paper deals with models that allow for a trending process and a cyclical component, with error processes that are possibly nonstationary, nonlinear, and non-Gaussian. Asymptotic confidence intervals for the trend, cyclical component, and memory parameters are obtained. The confidence intervals are applicable for a wide class of processes, exhibit good coverage accuracy, and are easy to implement.
    Keywords: fractional integration, trend, cycle, nonlinear process, Whittle objective function
    JEL: C22
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:18_10&r=ecm
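As background for entry 6, the snippet below estimates only the memory parameter d by a standard local Whittle objective on a simulated ARFIMA(0,d,0) series; the paper's framework additionally handles the trend and cycle components and the confidence intervals, which are not reproduced here.

```python
# Local Whittle estimate of d on a fractionally integrated series (sketch).
import numpy as np
from scipy.optimize import minimize_scalar

def arfima_0d0(n, d, rng):
    """Simulate ARFIMA(0, d, 0) via the truncated MA(inf) representation."""
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (d + k - 1) / k
    return np.convolve(rng.normal(size=n), psi)[:n]

def local_whittle_d(x, m=None):
    n = len(x)
    m = m or int(n ** 0.65)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n            # low Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    R = lambda d: np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

rng = np.random.default_rng(5)
print("estimated d:", local_whittle_d(arfima_0d0(4000, d=0.3, rng=rng)))  # near 0.3
```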
  7. By: Satoshi Myojo (Graduate School of Economics, Kobe University); Yuichiro Kanazawa (University of Tsukuba)
    Abstract: In this paper, we derive asymptotic theorems for the Petrin (2002) extension of the Berry, Levinsohn, and Pakes (BLP, 1995) framework to estimate demand-supply models with micro moments. The micro moments contain the information relating the consumer demographics to the characteristics of the products they purchase. With additional assumptions, the extended estimator is shown to be consistent and asymptotically normal (CAN) and more efficient than the BLP estimator. We discuss the conditions under which these asymptotic theorems hold for the random coefficient logit model. We conduct extensive simulation studies and confirm the benefit of the micro moments in estimating the random coefficient logit model.
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:koe:wpaper:1009&r=ecm
  8. By: Patrick Richard (GREDI, Département d'économique, Université de Sherbrooke)
    Abstract: A Monte Carlo investigation shows that the rejection probability of the structural stability test of Andrews (2003) depends on several characteristics of the DGP, one of which is the length of the hypothesized break period. This is analyzed and found to be caused, at least in part, by the fact that the number of subsampling statistics used to compute the P value depends on the sample size and the length of the break period. Simulations show that kernel smoothed P values provide more accurate tests in small samples.
    Keywords: Kernel smoothing; Simulation-based test; P value; Stability test
    JEL: C12 C14 C15
    Date: 2010–06–17
    URL: http://d.repec.org/n?u=RePEc:shr:wpaper:10-19&r=ecm
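The smoothing idea in entry 8 can be illustrated directly: the subsampling P value replaces the indicator 1{S_j >= S_obs} with a kernel CDF, which matters most when only a few subsample statistics are available. The statistics below are placeholders, not the Andrews (2003) test statistic, and the bandwidth rule is an assumption.

```python
# Step P value versus kernel-smoothed P value from a small set of subsample
# statistics (illustrative inputs only).
import numpy as np
from scipy.stats import norm

def subsampling_pvalues(s_obs, s_sub, bandwidth=None):
    s_sub = np.asarray(s_sub, dtype=float)
    h = bandwidth or 1.06 * s_sub.std(ddof=1) * len(s_sub) ** -0.2
    p_raw = np.mean(s_sub >= s_obs)                       # empirical (step) P value
    p_smooth = np.mean(norm.sf((s_obs - s_sub) / h))      # kernel-smoothed P value
    return p_raw, p_smooth

rng = np.random.default_rng(6)
print(subsampling_pvalues(s_obs=3.1, s_sub=rng.chisquare(df=2, size=25)))
```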
  9. By: Frölich, Markus (University of Mannheim); Melly, Blaise (Brown University)
    Abstract: This paper shows nonparametric identification of quantile treatment effects (QTE) in the regression discontinuity design. The distributional impacts of social programs such as welfare, education, training programs and unemployment insurance are of great interest to economists. QTE are an intuitive tool to characterize the effects of these interventions on the outcome distribution. We propose uniformly consistent estimators for both potential outcome distributions (treated and non-treated) for the population of interest as well as other function-valued effects of the policy including in particular the QTE process. The estimators are straightforward to implement and attain the optimal rate of convergence for one-dimensional nonparametric regression. We apply the proposed estimators to estimate the effects of summer school on the distribution of school grades, complementing the results of Jacob and Lefgren (2004).
    Keywords: quantile treatment effect, causal effect, endogeneity, regression discontinuity
    JEL: C13 C14 C21
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4993&r=ecm
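A very simplified numerical sketch of the object in entry 9: quantile differences between small windows on either side of the RDD cutoff. The paper's estimators use boundary-adapted nonparametric smoothing and deliver the whole QTE process with uniform inference, which this window comparison does not attempt; all names and the data-generating process are illustrative.

```python
# Crude sharp-RDD quantile "treatment effects" from local windows at a cutoff.
import numpy as np

def rdd_qte(y, running, cutoff=0.0, h=0.1, taus=(0.25, 0.5, 0.75)):
    right = y[(running >= cutoff) & (running < cutoff + h)]
    left = y[(running < cutoff) & (running >= cutoff - h)]
    return {t: np.quantile(right, t) - np.quantile(left, t) for t in taus}

rng = np.random.default_rng(7)
r = rng.uniform(-1, 1, size=5000)
treated = (r >= 0).astype(float)
y = 0.5 * r + treated * (0.3 + 0.4 * rng.uniform(size=r.size)) \
    + rng.normal(scale=0.2, size=r.size)
print(rdd_qte(y, r))        # effects grow across quantiles in this DGP
```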
  10. By: Eugene Kanzieper; Navinder Singh
    Abstract: A non-Hermitean extension of paradigmatic Wishart random matrices is introduced to set up a theoretical framework for statistical analysis of (real, complex and real quaternion) stochastic time series representing two "remote" complex systems. The first paper in a series provides a detailed spectral theory of non-Hermitean Wishart random matrices composed of complex-valued entries. Great emphasis is placed on an asymptotic analysis of the mean eigenvalue density, for which we derive, among other results, a complex-plane analogue of the Marchenko-Pastur law. A surprising connection with a class of matrix models previously invented in the context of quantum chromodynamics is pointed out. This provides further evidence of the ubiquity of Random Matrix Theory.
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1006.3096&r=ecm
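A quick numerical illustration of the object studied in entry 10 (not taken from the paper): a non-Hermitean Wishart-type matrix built from two independent complex Gaussian data matrices, whose eigenvalues spread over the complex plane rather than lying on the real line.

```python
# Eigenvalues of a cross-matrix of two "remote" complex Gaussian systems.
import numpy as np

rng = np.random.default_rng(8)
p, n = 200, 600
X = (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2)
Y = (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2)
W = X @ Y.conj().T / n                       # non-Hermitean Wishart-type matrix
eig = np.linalg.eigvals(W)
print("mean |eigenvalue|:", np.abs(eig).mean(),
      " fraction with non-trivial imaginary part:", np.mean(np.abs(eig.imag) > 1e-8))
```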
  11. By: Ángel López; Juan Romo
    Abstract: In this paper, an extension of the notion of statistical depth is introduced with the aim of measuring proximities between pairs of points. In particular, we extend the simplicial depth function, which measures how central a point is by using random simplices (triangles in the two-dimensional case). The paper is structured as follows: first, a brief introduction to statistical depth functions; next, the simplicial similarity function is defined and its properties are studied; finally, we present a few graphical examples to show its behaviour with symmetric and asymmetric distributions, and apply the function to hierarchical clustering.
    Keywords: Statistical depth, Similarity measures, Hierarchical clustering
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws102411&r=ecm
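The building block for entry 11 is ordinary simplicial depth, which can be approximated by Monte Carlo: the fraction of triangles with vertices drawn from the sample that contain the query point. The paper's extension from points to pairs of points (the similarity) is not reproduced here.

```python
# Monte Carlo simplicial depth in two dimensions.
import numpy as np

def simplicial_depth(point, sample, n_triangles=20000, rng=None):
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, len(sample), size=(n_triangles, 3))
    a, b, c = sample[idx[:, 0]], sample[idx[:, 1]], sample[idx[:, 2]]
    def side(p, q, r):                       # orientation of r relative to edge pq
        return np.sign((q[:, 0] - p[:, 0]) * (r[:, 1] - p[:, 1])
                       - (q[:, 1] - p[:, 1]) * (r[:, 0] - p[:, 0]))
    P = np.broadcast_to(point, a.shape)
    s1, s2, s3 = side(a, b, P), side(b, c, P), side(c, a, P)
    inside = (s1 == s2) & (s2 == s3)         # same orientation on all three edges
    return inside.mean()

rng = np.random.default_rng(9)
sample = rng.normal(size=(500, 2))
print("centre:", simplicial_depth(np.zeros(2), sample, rng=rng),
      " tail:", simplicial_depth(np.array([3.0, 3.0]), sample, rng=rng))
```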
  12. By: Abdelkoddousse Ahdida (CERMICS - Centre d'Enseignement et de Recherche en Mathématiques et Calcul Scientifique - Ecole Nationale des Ponts et Chaussées); Aurélien Alfonsi (CERMICS - Centre d'Enseignement et de Recherche en Mathématiques et Calcul Scientifique - Ecole Nationale des Ponts et Chaussées)
    Abstract: This work deals with the simulation of Wishart processes and affine diffusions on positive semidefinite matrices. To do so, we focus on splittings of the infinitesimal generator, in order to use composition techniques such as those of Ninomiya and Victoir or Alfonsi. In doing so, we find a remarkable splitting for Wishart processes that enables us to sample Wishart distributions exactly, without any restriction on the parameters. It is related to, but extends, existing exact simulation methods based on Bartlett's decomposition. Moreover, we can construct high-order discretization schemes for Wishart processes and second-order schemes for general affine diffusions. In practice, these schemes are faster than exact simulation when sampling entire paths. Numerical results on their convergence are given.
    Keywords: Wishart processes, affine processes, exact simulation, discretization schemes, weak error, Bartlett's decomposition.
    Date: 2010–06–11
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00491371_v1&r=ecm
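For context on entry 12, here is the classical Bartlett-decomposition sampler for the Wishart distribution, the method that the paper relates to and extends: the paper's generator splitting samples Wishart processes exactly and without the usual parameter restrictions, which this textbook sampler does not achieve.

```python
# Classical Bartlett sampler: W = L A A' L' with scale = L L', chi-square
# variates on the diagonal of A and standard normals below it.
import numpy as np

def wishart_bartlett(df, scale, rng):
    p = scale.shape[0]
    L = np.linalg.cholesky(scale)
    A = np.zeros((p, p))
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(df - i))      # chi-square on the diagonal
        A[i, :i] = rng.normal(size=i)                 # standard normals below it
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(10)
scale = np.array([[1.0, 0.3], [0.3, 2.0]])
draws = [wishart_bartlett(df=5, scale=scale, rng=rng) for _ in range(20000)]
print("mean / (df * scale):", np.mean(draws, axis=0) / (5 * scale))   # close to 1
```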
  13. By: Paul Clarke (Centre for Market and Public Organisation, University of Bristol, 2 Priory Road, Bristol, BS8 1TX.); Claire Crawford (Institute for Fiscal Studies, 7 Ridgmount Street, London, WC1E 7AE; Institute of Education, University of London, 20 Bedford Way, London WC1H 0AL, UK.); Fiona Steele (Centre for Multilevel Modelling, Graduate School of Education, University of Bristol, 2 Priory Road, Bristol, BS8 1TX); Anna Vignoles (Department of Quantitative Social Science, Institute of Education, University of London. 20 Bedford Way, London WC1H 0AL, UK.)
    Abstract: We discuss the use of fixed and random effects models in the context of educational research and set out the assumptions behind the two modelling approaches. To illustrate the issues that should be considered when choosing between these approaches, we analyse the determinants of pupil achievement in primary school, using data from the Avon Longitudinal Study of Parents and Children. We conclude that a fixed effects approach will be preferable in scenarios where the primary interest is in policy-relevant inference of the effects of individual characteristics, but the process through which pupils are selected into schools is poorly understood or the data are too limited to adjust for the effects of selection. In this context, the robustness of the fixed effects approach to the random effects assumption is attractive, and educational researchers should consider using it, even if only to assess the robustness of estimates obtained from random effects models. On the other hand, when the selection mechanism is fairly well understood and the researcher has access to rich data, the random effects model should naturally be preferred because it can produce policy-relevant estimates while allowing a wider range of research questions to be addressed. Moreover, random effects estimators of regression coefficients and shrinkage estimators of school effects are more statistically efficient than those for fixed effects.
    Keywords: fixed effects, random effects, multilevel modelling, education, pupil achievement
    JEL: C52 I21
    Date: 2010–06–18
    URL: http://d.repec.org/n?u=RePEc:qss:dqsswp:1010&r=ecm
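The robustness argument in entry 13 can be seen in a toy comparison of the fixed-effects (within) estimator and a random-effects (quasi-demeaned GLS) estimator when the school effect is correlated with the pupil-level regressor. The data-generating process, the variable names, and the simple Swamy-Arora-style variance components below are illustrative assumptions, not the authors' analysis.

```python
# FE (within) versus RE (quasi-demeaning) when selection into schools is
# correlated with the regressor: FE stays near the true slope of 1.
import numpy as np

rng = np.random.default_rng(11)
S, P = 100, 30                                   # schools, pupils per school
u = rng.normal(size=S)                           # school effects
x = rng.normal(size=(S, P)) + 0.8 * u[:, None]   # selection: x correlated with u
y = 1.0 * x + u[:, None] + rng.normal(size=(S, P))

def slope(y, x):                                 # simple no-intercept OLS slope
    return float(x.ravel() @ y.ravel() / (x.ravel() @ x.ravel()))

yw, xw = y - y.mean(axis=1, keepdims=True), x - x.mean(axis=1, keepdims=True)
beta_fe = slope(yw, xw)                          # within estimator

sig_e2 = np.var(yw - beta_fe * xw)               # rough variance components
sig_u2 = max(np.var(y.mean(axis=1) - beta_fe * x.mean(axis=1)) - sig_e2 / P, 0.0)
theta = 1.0 - np.sqrt(sig_e2 / (sig_e2 + P * sig_u2))
beta_re = slope(y - theta * y.mean(axis=1, keepdims=True),
                x - theta * x.mean(axis=1, keepdims=True))

print("FE (robust to selection):", beta_fe, " RE (biased in this DGP):", beta_re)
```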
  14. By: Okumura, Tsunao (Yokohama National University); Usui, Emiko (Nagoya University)
    Abstract: This paper identifies sharp bounds on the mean treatment response and average treatment effect under the assumptions of both concave monotone treatment response (concave-MTR) and monotone treatment selection (MTS). We use our bounds and the US National Longitudinal Survey of Youth to estimate mean returns to schooling. Our upper-bound estimates are substantially smaller than (1) estimates using only the concave-MTR assumption of Manski (1997) and (2) estimates using only the MTR and MTS assumptions of Manski and Pepper (2000). They fall in the lower range of the point estimates given in previous studies that assume linear wage functions. This is because ability bias is corrected by assuming MTS when the functions are close to linear. Our results therefore imply that higher returns reported in previous studies are likely to be overestimated.
    Keywords: nonparametric methods, partial identification, sharp bounds, treatment response, returns to schooling
    JEL: C14 J24
    Date: 2010–06
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp4986&r=ecm
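Entry 14 builds on the Manski-Pepper (2000) MTR+MTS bounds; the sketch below computes only those simpler bounds on E[y(t)] from observed outcomes and schooling levels (the concave-MTR tightening that is the paper's contribution is not implemented), using simulated data and illustrative names.

```python
# MTR + MTS bounds on the mean potential outcome at treatment level t.
import numpy as np

def mtr_mts_bounds(y, z, t):
    """y: outcomes; z: realised treatment (e.g. years of schooling); t: level of interest."""
    y, z = np.asarray(y, float), np.asarray(z)
    levels = np.unique(z)
    p = {s: np.mean(z == s) for s in levels}
    m = {s: y[z == s].mean() for s in levels}
    lower = sum(m[s] * p[s] for s in levels if s < t) + m[t] * np.mean(z >= t)
    upper = m[t] * np.mean(z <= t) + sum(m[s] * p[s] for s in levels if s > t)
    return lower, upper

rng = np.random.default_rng(12)
z = rng.integers(10, 17, size=3000)              # schooling levels 10..16 (simulated)
y = 1.5 + 0.08 * z + rng.normal(scale=0.3, size=z.size)   # log wages (simulated)
print("bounds on E[y(16)]:", mtr_mts_bounds(y, z, t=16))
```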
  15. By: Sonia Akter (Crawford School of Economics and Government, the Australian National University); Jeff Bennett (Crawford School of Economics and Government, the Australian National University)
    Abstract: The numerical certainty scale (NCS) and polychotomous choice (PC) methods are two widely used techniques for measuring preference uncertainty in contingent valuation (CV) studies. The NCS follows a numerical scale and the PC is based on a verbal scale. This paper presents results of two experiments that use these preference uncertainty measurement techniques. The first experiment was designed to compare and contrast the uncertainty scores obtained from the NCS and the PC method. The second experiment was conducted to test a preference uncertainty measurement scale which combines verbal expressions with numerical and graphical interpretations: a composite certainty scale (CCS). The construct validity of the certainty scores obtained from these three techniques was tested by estimating three separate ordered probit regression models. The results of the study can be summarized in three key findings. First, the PC method generates a higher proportion of ‘Yes’ responses than the conventional dichotomous choice elicitation format. Second, the CCS method generates a significantly higher proportion of certain responses than the NCS and the PC methods. Finally, the NCS method performs poorly in terms of construct validity. We conclude that, overall, the verbal measures perform better than the numerical measure. Furthermore, the CCS method shows promise for measuring preference uncertainty in CV studies. However, further empirical applications are required to develop a better understanding of its strengths and weaknesses.
    Keywords: Preference uncertainty, contingent valuation, numerical certainty scale, polychotomous choice method, composite certainty scale, climate change, Australia
    JEL: Q51 Q54
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:een:eenhrr:0946&r=ecm
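The construct-validity tests in entry 15 rely on ordered probit regressions of the certainty category on respondent covariates. Below is a minimal ordered probit estimated by maximum likelihood on simulated data; the covariates and category labels are hypothetical, not the authors' survey variables.

```python
# Ordered probit by maximum likelihood (simulated, illustrative data).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ordered_probit(y, X, n_cat):
    """y in {0,...,n_cat-1}; X: (n, k) covariates without a constant."""
    k = X.shape[1]
    def negll(par):
        beta, raw = par[:k], par[k:]
        cuts = np.concatenate(([-np.inf],
                               np.cumsum(np.concatenate(([raw[0]], np.exp(raw[1:])))),
                               [np.inf]))               # increasing cutpoints
        xb = X @ beta
        p = norm.cdf(cuts[y + 1] - xb) - norm.cdf(cuts[y] - xb)
        return -np.sum(np.log(np.clip(p, 1e-12, None)))
    return minimize(negll, np.zeros(k + n_cat - 1), method="BFGS")

rng = np.random.default_rng(13)
X = rng.normal(size=(800, 2))                    # e.g. bid level, income (illustrative)
latent = X @ np.array([0.8, -0.5]) + rng.normal(size=800)
y = np.digitize(latent, [-1.0, 0.0, 1.0])        # four ordered certainty categories
print(ordered_probit(y, X, n_cat=4).x[:2])       # slopes roughly (0.8, -0.5)
```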
  16. By: Nikos Tzavidis (University of Manchester.); James J Brown (Department of Quantitative Social Science, Institute of Education, University of London. 20 Bedford Way, WC1H 0AL)
    Abstract: The measurement of school performance for secondary schools in England has developed from simple measures of marginal performance at age 16 to more complex contextual value-added measures that account for pupil prior attainment and background. These models have been developed within the multilevel modelling environment (pupils within schools), but in this paper we propose an alternative, more robust approach based on M-quantile modelling of individual pupil efficiency. These efficiency measures condition on a pupil's ability and background, as do the current contextual value-added models, but as they are measured at the pupil level, a variety of performance measures can be readily produced at the school and higher (local authority) levels. Standard errors for the performance measures are provided via a bootstrap approach, which is validated using a model-based simulation.
    Keywords: School Performance, Contextual Value-Added, M-Quantile Models, Pupil Efficiency, London
    JEL: C14 C21 I21
    Date: 2010–06–18
    URL: http://d.repec.org/n?u=RePEc:qss:dqsswp:1011&r=ecm
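To illustrate the pupil-level idea in entry 16, the sketch below uses ordinary quantile regression as a stand-in for M-quantile regression (M-quantiles replace the check function with an asymmetric Huber-type influence function): each pupil is assigned the quantile at which the fitted line passes through their attainment, and these are averaged within schools as a value-added-style score. Data, variable names, and the quantile grid are assumptions for illustration only.

```python
# Pupil-level quantile scores averaged by school, with quantile regression as
# a simplified proxy for M-quantile regression.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(14)
n_schools, n_pupils = 40, 50
school = np.repeat(np.arange(n_schools), n_pupils)
prior = rng.normal(size=n_schools * n_pupils)               # prior attainment
effect = rng.normal(scale=0.3, size=n_schools)              # school "value added"
y = 0.7 * prior + effect[school] + rng.normal(scale=0.5, size=prior.size)

X = np.column_stack([np.ones_like(prior), prior])
grid = np.linspace(0.05, 0.95, 19)
fitted = np.column_stack([QuantReg(y, X).fit(q=q).predict(X) for q in grid])
pupil_q = grid[np.abs(fitted - y[:, None]).argmin(axis=1)]  # pupil-level quantile

school_score = np.array([pupil_q[school == s].mean() for s in range(n_schools)])
print("correlation with true school effect:", np.corrcoef(school_score, effect)[0, 1])
```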

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.