nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒08‒14
fifteen papers chosen by
Sune Karlsson
Örebro University

  1. Nonparametric Identification and Estimation of Nonclassical Errors-in-Variables Models Without Additional Information By Xiaohong Chen; Yingyao Hu; Arthur Lewbel
  2. Testing for Shifts in Trend with an Integrated or Stationary Noise Component By Pierre Perron; Tomoyoshi Yabu
  3. Nearest Neighbor Conditional Estimation for Harris Recurrent Markov Chains By Sancetta, A.
  4. Nonparametric Identification of Regression Models Containing a Misclassified Dichotomous Regressor Without Instruments By Xiaohong Chen; Yingyao Hu; Arthur Lewbel
  5. A Non-local Perspective on the Power Properties of the CUSUM and CUSUM of Squares Tests for Structural Change By Ai Deng; Pierre Perron
  6. Fixed Effects Estimation of Structural Parameters and Marginal Effects in Panel Probit Models By Ivan Fernandez-Val
  7. The Weak Instrument Problem of the System GMM Estimator in Dynamic Panel Data Models By Maurice J.G. Bun; Frank Windmeijer
  8. Bias Corrections for Two-Step Fixed Effects Panel Data Estimators By Francis Vella; Ivan Fernandez-Val
  9. Nonparametric identification of the classical errors-in-variables model without side information By Susanne M. Schennach; Yingyao Hu; Arthur Lewbel
  10. Asymptotic Distribution of the OLS Estimator for a Mixed Regressive, Spatial Autoregressive Model By Mynbaev, Kairat
  11. A Component GARCH Model with Time Varying Weights By Luc Bauwens; G. Storti
  12. Quantile and Probability Curves Without Crossing By Victor Chernozhukov; Ivan Fernandez-Val; Alfred Galichon
  13. An Analytical Evaluation of the Log-periodogram Estimate in the Presence of Level Shifts and its Implications for Stock Returns Volatility By Pierre Perron; Zhongjun Qu
  14. Improving Estimates of Monotone Functions by Rearrangement By Victor Chernozhukov; Ivan Fernandez-Val; Alfred Galichon
  15. Global warming: Forecasts by scientists versus scientific forecasts By Green, Kesten C.; Armstrong, J. Scott

  1. By: Xiaohong Chen (Yale University); Yingyao Hu (Johns Hopkins University); Arthur Lewbel (Boston College)
    Abstract: This paper considers identification and estimation of a nonparametric regression model with an unobserved discrete covariate. The sample consists of a dependent variable and a set of covariates, one of which is discrete and arbitrarily correlated with the unobserved covariate. The observed discrete covariate has the same support as the unobserved covariate, and can be interpreted as a proxy or mismeasure of the unobserved one, but with a nonclassical measurement error that has an unknown distribution. We obtain nonparametric identification of the model given monotonicity of the regression function and a rank condition that is directly testable from the data. Our identification strategy does not require additional sample information, such as instrumental variables or a secondary sample. We then estimate the model via sieve maximum likelihood, and provide root-n asymptotic normality and semiparametric efficiency of smooth functionals of interest. Two small simulations are presented to illustrate the identification and estimation results.
    Keywords: Errors-in-variables (EIV); Identification; Nonclassical measurement error; Nonparametric regression; Sieve maximum likelihood.
    JEL: C20 C14
    Date: 2007–08–08
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:676&r=ecm
  2. By: Pierre Perron (Department of Economics, Boston University); Tomoyoshi Yabu (Department of Economics, Boston University)
    Abstract: This paper considers the problem of testing for structural changes in the trend function of a univariate time series without any prior knowledge as to whether the noise component is stationary or contains an autoregressive unit root. We propose a new approach that builds on the work of Perron and Yabu (2005), based on a Feasible Quasi Generalized Least Squares procedure that uses a superefficient estimate of the sum of the autoregressive parameters α when α = 1. In the case of a known break date, the resulting Wald test has a chi-square limit distribution in both the I(0) and I(1) cases. When the break date is unknown, the Exp function of Andrews and Ploberger (1994) yields a test with identical limit distributions in the two cases, so that a testing procedure with nearly the same size in the I(0) and I(1) cases can be obtained. To improve the finite sample properties of the tests, we use the bias-corrected version of the OLS estimate of α proposed by Roy and Fuller (2001). We show our procedure to be substantially more powerful than currently available alternatives and also to have a power function that is close to that attainable if we knew the true value of α in many cases. The extension to the case of multiple breaks is also discussed.
    Keywords: structural change, unit root, median-unbiased estimates, GLS procedure, super efficient estimates.
    JEL: C22
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2007-025&r=ecm
  3. By: Sancetta, A.
    Abstract: This paper is concerned with consistent nearest neighbor time series estimation for data generated by a Harris recurrent Markov chain. The goal is to validate nearest neighbor estimation in this general time series context, using simple and weak conditions. The framework considered covers, in a unified manner, a wide variety of statistical quantities, e.g. autoregression function, conditional quantiles, conditional tail estimators and, more generally, extremum estimators. The focus is theoretical, but examples are given to highlight applications.
    Keywords: Nonparametric Estimation, Quantile Estimation, Semiparametric Estimation, Sequential Forecasting, Tail Estimation, Time Series.
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:0735&r=ecm
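    Code sketch: A minimal illustration of nearest neighbor conditional estimation in a time series, assuming a scalar first-order setting and a fixed number of neighbors k; the choices of k, the quantile level, and the AR(1) toy data are illustrative and not taken from the paper.
      import numpy as np

      def nn_conditional(y, x0, k=20, tau=0.9):
          """k-nearest-neighbor estimates of E[y_{t+1} | y_t = x0] and the
          tau conditional quantile from a scalar time series y."""
          y = np.asarray(y, dtype=float)
          states, successors = y[:-1], y[1:]          # (y_t, y_{t+1}) pairs
          idx = np.argsort(np.abs(states - x0))[:k]   # k past states nearest to x0
          nbrs = successors[idx]
          return nbrs.mean(), np.quantile(nbrs, tau)

      # toy usage on simulated stationary AR(1) data
      rng = np.random.default_rng(0)
      y = np.zeros(2000)
      for t in range(1, 2000):
          y[t] = 0.6 * y[t - 1] + rng.standard_normal()
      print(nn_conditional(y, x0=1.0, k=50, tau=0.9))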
  4. By: Xiaohong Chen (Yale University); Yingyao Hu (Johns Hopkins University); Arthur Lewbel (Boston College)
    Abstract: This note considers nonparametric identification of a general nonlinear regression model with a dichotomous regressor subject to misclassification error. The available sample information consists of a dependent variable and a set of regressors, one of which is binary and error-ridden, with a misclassification error that has an unknown distribution. Our identification strategy does not parameterize any regression or distribution functions, and does not require additional sample information such as instrumental variables, repeated measurements, or an auxiliary sample. Our main identifying assumption is that the regression model error has a zero conditional third moment. The results include a closed-form solution for the unknown distributions and the regression function.
    Keywords: misclassification error; identification; nonparametric regression
    JEL: C20 C14
    Date: 2007–07–31
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:675&r=ecm
  5. By: Ai Deng (Bates White, LLC); Pierre Perron (Department of Economics, Boston University)
    Abstract: We consider the power properties of the CUSUM and CUSUM of squares tests in the presence of a one-time change in the parameters of a linear regression model. A result due to Ploberger and Krämer (1990) is that the CUSUM of squares test has only trivial asymptotic local power in this case, while the CUSUM test has non-trivial local asymptotic power unless the change is orthogonal to the mean regressor. The main theme of the paper is that such conclusions obtained from a local asymptotic framework are not reliable guides to what happens in finite samples. The approach we take is to derive expansions of the test statistics that retain terms related to the magnitude of the change under the alternative hypothesis. This enables us to analyze what happens for non-local to zero breaks. Our theoretical results are able to explain how the power function of the tests can be drastically different depending on whether one deals with a static regression with uncorrelated errors, a static regression with correlated errors, a dynamic regression with lagged dependent variables, or whether a correction for non-Normality is applied in the case of the CUSUM of squares. We discuss in which cases the tests are subject to a non-monotonic power function that goes to zero as the magnitude of the change increases, and uncover some curious properties. All theoretical results are verified to yield good guides to the finite sample power through simulation experiments. We finally highlight the practical importance of our results.
    Keywords: Change-point, Mean shift, Local asymptotic power, Recursive residuals, Dynamic models.
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2007-020&r=ecm
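    Code sketch: For reference, a minimal implementation of the recursive residuals and of the CUSUM and CUSUM-of-squares paths whose power properties the paper studies; the non-local expansions and the non-Normality correction discussed in the abstract are not reproduced, and the mean-shift design below is an illustrative choice.
      import numpy as np

      def recursive_residuals(y, X):
          """Standardized one-step-ahead prediction errors w_t, t = k+1, ..., n."""
          n, k = X.shape
          w = []
          for t in range(k, n):
              Xt, yt = X[:t], y[:t]
              b, *_ = np.linalg.lstsq(Xt, yt, rcond=None)   # OLS on the first t observations
              xt = X[t]
              f = 1.0 + xt @ np.linalg.solve(Xt.T @ Xt, xt)
              w.append((y[t] - xt @ b) / np.sqrt(f))
          return np.array(w)

      def cusum_paths(y, X):
          """CUSUM (scaled partial sums of w_t) and CUSUM-of-squares paths."""
          w = recursive_residuals(y, X)
          cusum = np.cumsum(w) / w.std(ddof=1)
          cusum_sq = np.cumsum(w ** 2) / np.sum(w ** 2)
          return cusum, cusum_sq

      # toy usage: static regression with a one-time shift in the intercept
      rng = np.random.default_rng(1)
      n = 200
      X = np.column_stack([np.ones(n), rng.standard_normal(n)])
      y = X @ np.array([0.0, 1.0]) + rng.standard_normal(n)
      y[n // 2:] += 1.0
      cusum, cusum_sq = cusum_paths(y, X)
      print(cusum[-5:], cusum_sq[-5:])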
  6. By: Ivan Fernandez-Val (Department of Economics, Boston University)
    Abstract: Fixed effects estimators of nonlinear panel models can be severely biased due to the incidental parameters problem. Using a large-T expansion of the bias, I find that the most important component of this incidental parameters bias for probit fixed effects estimators of index coefficients is proportional to the true value of these coefficients. This result allows me to derive a lower bound for this bias, and to show that fixed effects estimates of ratios of coefficients and average marginal effects have zero bias in the absence of heterogeneity, and have negligible bias relative to their true values for a wide variety of distributions of regressors and individual effects. New bias-corrected estimators for index coefficients and marginal effects with improved finite sample properties are also proposed for static and dynamic probit, logit, and linear probability models with predetermined regressors.
    Keywords: Panel data; Bias; Discrete Choice Models; Probit; Fixed effects; Labor Force Participation.
    JEL: C23 C25 J22
    Date: 2007–02
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2007-010&r=ecm
  7. By: Maurice J.G. Bun; Frank Windmeijer
    Abstract: The system GMM estimator for dynamic panel data models combines moment conditions for the model in first differences with moment conditions for the model in levels. It has been shown to improve on the GMM estimator in the first-differenced model in terms of bias and root mean squared error. However, we show in this paper that in the covariance stationary panel data AR(1) model the expected values of the concentration parameters in the differenced and levels equations for the cross-section at time t are the same when the variances of the individual heterogeneity and idiosyncratic errors are the same. This indicates a weak instrument problem also for the equation in levels. We show that the 2SLS biases relative to the OLS biases are then similar for the equations in differences and levels, as are the size distortions of the Wald tests. These results are shown in a Monte Carlo study to extend to the panel data system GMM estimator.
    Keywords: Dynamic Panel Data, System GMM, Weak Instruments
    JEL: C12 C13 C23
    URL: http://d.repec.org/n?u=RePEc:bri:uobdis:07/595&r=ecm
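    Code sketch: An illustrative Monte Carlo sketch of the weak instrument point made above, comparing the first-stage strength of the standard instruments in the differenced equation (y_{i,t-2} instrumenting Δy_{i,t-1}) and the levels equation (Δy_{i,t-1} instrumenting y_{i,t-1}) as the ratio of the individual-effect variance to the idiosyncratic variance changes; the design and the use of a simple first-stage R-squared as the strength measure are illustrative choices, not the paper's concentration-parameter calculations.
      import numpy as np

      def simulate_panel(N, T, rho, sigma_eta, sigma_eps, rng):
          """Covariance-stationary panel AR(1): y_it = rho*y_{i,t-1} + eta_i + eps_it."""
          eta = sigma_eta * rng.standard_normal(N)
          y = np.empty((N, T + 1))
          y[:, 0] = eta / (1 - rho) + sigma_eps / np.sqrt(1 - rho ** 2) * rng.standard_normal(N)
          for t in range(1, T + 1):
              y[:, t] = rho * y[:, t - 1] + eta + sigma_eps * rng.standard_normal(N)
          return y

      def first_stage_r2(z, x):
          """R-squared of a cross-sectional regression of x on a constant and the instrument z."""
          Z = np.column_stack([np.ones_like(z), z])
          b, *_ = np.linalg.lstsq(Z, x, rcond=None)
          return 1.0 - (x - Z @ b).var() / x.var()

      rng = np.random.default_rng(2)
      for ratio in [0.5, 1.0, 4.0]:                      # var(eta) / var(eps)
          y = simulate_panel(N=5000, T=4, rho=0.8,
                             sigma_eta=np.sqrt(ratio), sigma_eps=1.0, rng=rng)
          r2_diff = first_stage_r2(y[:, 2], y[:, 3] - y[:, 2])    # y_{t-2} for dy_{t-1}
          r2_lev = first_stage_r2(y[:, 3] - y[:, 2], y[:, 3])     # dy_{t-1} for y_{t-1}
          print(f"variance ratio {ratio}: diff-equation R2 = {r2_diff:.3f}, "
                f"levels-equation R2 = {r2_lev:.3f}")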
  8. By: Francis Vella (Georgetown University); Ivan Fernandez-Val (Department of Economics, Boston University)
    Abstract: This paper introduces bias-corrected estimators for nonlinear panel data models with both time-invariant and time-varying heterogeneity. These include limited dependent variable models with both unobserved individual effects and endogenous explanatory variables, and sample selection models with unobserved individual effects. Our two-step approach first estimates the reduced form by fixed effects procedures to obtain estimates of the time-varying heterogeneity underlying the endogeneity/selection bias. We then estimate the primary equation by fixed effects, including an appropriately constructed control function from the reduced form estimates as an additional explanatory variable. The fixed effects approach in this second step captures the time-invariant heterogeneity, while the control function accounts for the time-varying heterogeneity. Since either or both steps might employ nonlinear fixed effects procedures, it is necessary to bias-adjust the estimates due to the incidental parameters problem. This problem is exacerbated by the two-step nature of the procedure. As these two-step approaches are not covered in the existing literature, we derive the appropriate correction, thereby extending the use of large-T bias adjustments to an important class of models. Simulation evidence indicates that our approach works well in finite samples, and an empirical example illustrates the applicability of our estimator.
    Keywords: Panel data; Two-Step Estimation; Endogenous Regressors; Fixed Effects; Bias; Union Premium.
    JEL: C23 J31 J51
    Date: 2007–02
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2007-011&r=ecm
  9. By: Susanne M. Schennach (University of Chicago); Yingyao Hu (Johns Hopkins University); Arthur Lewbel (Boston College)
    Abstract: This note establishes that the fully nonparametric classical errors-in-variables model is identifiable from data on the regressor and the dependent variable alone, unless the specification is a member of a very specific parametric family. This family includes the linear specification with normally distributed variables as a special case. This result relies on standard primitive regularity conditions taking the form of smoothness and monotonicity of the regression function and nonvanishing characteristic functions of the disturbances.
    Keywords: errors in variables, nonparametric estimation, identification
    JEL: C20 C14
    Date: 2007–07–16
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:674&r=ecm
  10. By: Mynbaev, Kairat
    Abstract: We find the asymptotics of the OLS estimator of the parameters $\beta$ and $\rho$ in the spatial autoregressive model with exogenous regressors $Y_n = X_n\beta+\rho W_nY_n+V_n$. Only low-level conditions are imposed. Exogenous regressors may be bounded or growing, like polynomial trends. The assumption on the spatial matrix $W_n$ is appropriate for the situation when each economic agent is influenced by many others. The asymptotics contains both linear and quadratic forms in standard normal variables. The conditions and the format of the result are chosen in a way compatible with known results for the model without lags by Anderson (1971) and for the spatial model without exogenous regressors due to Mynbaev and Ullah (2006).
    Keywords: mixed regressive spatial autoregressive model; OLS estimator; asymptotic distribution
    JEL: C31 C21
    Date: 2006–08–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:4411&r=ecm
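    Code sketch: A minimal simulation of the estimator studied above, applying OLS directly to the model $Y_n = X_n\beta + \rho W_nY_n + V_n$ with a dense, row-normalized weight matrix so that each agent is influenced by many others; the particular design below is an illustrative choice.
      import numpy as np

      rng = np.random.default_rng(3)
      n, beta, rho = 500, np.array([1.0, 2.0]), 0.4

      # dense, row-normalized spatial weights: every agent influenced equally by all others
      W = np.full((n, n), 1.0 / (n - 1))
      np.fill_diagonal(W, 0.0)

      X = np.column_stack([np.ones(n), rng.standard_normal(n)])
      V = rng.standard_normal(n)
      Y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + V)   # reduced form of the model

      # OLS of Y on [X, W Y] estimates (beta, rho) jointly
      Z = np.column_stack([X, W @ Y])
      est, *_ = np.linalg.lstsq(Z, Y, rcond=None)
      print("beta_hat:", est[:2], "rho_hat:", est[2])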
  11. By: Luc Bauwens (Université catholique de Louvain, Department of Economics); G. Storti
    Abstract: We present a novel GARCH model that accounts for time-varying, state-dependent persistence in the volatility dynamics. The proposed model generalizes the component GARCH model of Ding and Granger (1996). The volatility is modelled as a convex combination of unobserved GARCH components where the combination weights are time varying as a function of appropriately chosen state variables. In order to make inference on the model parameters, we develop a Gibbs sampling algorithm. Adopting a fully Bayesian approach allows us to easily obtain medium- and long-term predictions of relevant risk measures such as value at risk and expected shortfall. Finally, we discuss the results of an application to a series of daily returns on the S&P 500.
    Keywords: Persistence, Volatility components, Value-at-risk, Expected shortfall
    JEL: C11 C15 C22
    Date: 2007–03–28
    URL: http://d.repec.org/n?u=RePEc:ctl:louvec:2007012&r=ecm
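    Code sketch: A rough filtering sketch, loosely inspired by the model described above, with the conditional variance written as a time-varying convex combination of two GARCH(1,1)-type components and a logistic weight driven by the lagged absolute return; the choice of state variable and parameter values is purely illustrative, and the paper's exact specification, priors, and Gibbs sampler are not reproduced.
      import numpy as np

      def filter_component_garch(r, omega1, a1, b1, omega2, a2, b2, gamma0, gamma1):
          """Conditional variance as a convex combination of two GARCH(1,1)
          components, with weights following a logistic function of |r_{t-1}|."""
          n = len(r)
          h1 = np.full(n, np.var(r))
          h2 = np.full(n, np.var(r))
          sigma2 = np.full(n, np.var(r))
          w = np.full(n, 0.5)
          for t in range(1, n):
              h1[t] = omega1 + a1 * r[t - 1] ** 2 + b1 * h1[t - 1]   # low-persistence component
              h2[t] = omega2 + a2 * r[t - 1] ** 2 + b2 * h2[t - 1]   # high-persistence component
              w[t] = 1.0 / (1.0 + np.exp(-(gamma0 + gamma1 * abs(r[t - 1]))))
              sigma2[t] = w[t] * h1[t] + (1.0 - w[t]) * h2[t]
          return sigma2, w

      # toy usage on simulated returns
      rng = np.random.default_rng(4)
      r = 0.01 * rng.standard_normal(1000)
      sigma2, w = filter_component_garch(r, 1e-6, 0.10, 0.80, 1e-7, 0.03, 0.96, 0.0, 5.0)
      print(sigma2[-3:], w[-3:])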
  12. By: Victor Chernozhukov (MIT, Department of Economics & Operations Research Center, University College London and The University of Chicago); Ivan Fernandez-Val (Department of Economics, Boston University); Alfred Galichon (Harvard University, Department of Economics)
    Abstract: The most common approach to estimating conditional quantile curves is to fit a curve, typically linear, pointwise for each quantile. Linear functional forms, coupled with pointwise fitting, are used for a number of reasons including parsimony of the resulting approximations and good computational properties. The resulting fits, however, may not respect a logical monotonicity requirement: that the quantile curve be increasing as a function of the probability. This paper studies the natural monotonization of these empirical curves induced by sampling from the estimated non-monotone model, and then taking the resulting conditional quantile curves, which by construction are monotone in the probability. This construction of monotone quantile curves may be seen as a bootstrap and also as a monotonic rearrangement of the original non-monotone function. It is shown that the monotonized curves are closer to the true curves in finite samples, for any sample size. Under correct specification, the rearranged conditional quantile curves have the same asymptotic distribution as the original non-monotone curves. Under misspecification, however, the asymptotics of the rearranged curves may partially differ from the asymptotics of the original non-monotone curves. An analogous procedure is developed to monotonize the estimates of conditional distribution functions. The results are derived by establishing the compact (Hadamard) differentiability of the monotonized quantile and probability curves with respect to the original curves in discontinuous directions, tangentially to a set of continuous functions. In doing so, the compact differentiability of the rearrangement-related operators is established.
    Keywords: Quantile regression, Monotonicity, Rearrangement, Approximation, Functional Delta Method, Hadamard Differentiability of Rearrangement Operators.
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2007-012&r=ecm
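    Code sketch: A minimal numerical illustration of the monotonization described above, assuming the conditional quantiles have been estimated pointwise at a grid of probabilities for a fixed covariate value; the (deliberately crossing) quantile values below are toy numbers, and sorting them in the probability index yields the rearranged, monotone curve.
      import numpy as np

      # pointwise quantile estimates at a fixed x, non-monotone in the probability (crossing)
      u = np.array([0.10, 0.25, 0.50, 0.75, 0.90])
      q_hat = np.array([1.2, 1.1, 1.6, 1.5, 2.0])

      # rearrangement: sort the estimated quantiles so the curve is increasing in u
      q_rearranged = np.sort(q_hat)

      for prob, q in zip(u, q_rearranged):
          print(f"u = {prob:.2f}: rearranged quantile = {q:.2f}")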
  13. By: Pierre Perron (Department of Economics, Boston University); Zhongjun Qu (Department of Economics, Boston University)
    Abstract: Recently, there has been an upsurge of interest in the possibility of confusing long memory and structural changes in level. Many studies have documented the fact that when a stationary short memory process is contaminated by level shifts, the estimate of the fractional differencing parameter is biased away from zero and the autocovariance function exhibits a slow rate of decay, akin to a long memory process. Yet, no theoretical results are available pertaining to the distributions of the estimates. We fill this gap by analyzing the properties of the log-periodogram estimate when the jump component is specified by a simple mixture model. Our theoretical results explain many reported findings and uncover new features. Simulations are presented to highlight the properties of the distributions and to assess the adequacy of our limit results as approximations to the finite sample distributions. Also, we explain how the limit distribution changes as the number of frequencies used varies, a feature that differs from the case of a pure fractionally integrated model. We confront this practical implication with daily S&P 500 absolute returns and their square roots over the period 1928-2002. Our findings are remarkable: the path of the log-periodogram estimates clearly follows a pattern that would obtain if the true underlying process were one of short memory contaminated by level shifts, instead of a pure fractionally integrated process. A simple testing procedure is also proposed, which reinforces this conclusion.
    Keywords: structural change, jumps, long memory processes, fractional integration, Poisson process, frequency domain estimates.
    JEL: C22
    Date: 2006–12
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2006-016&r=ecm
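    Code sketch: For reference, a minimal version of the log-periodogram regression whose behavior the paper analyzes (regress the log periodogram at the first m Fourier frequencies on -2 log lambda_j; the slope estimates the fractional differencing parameter d); the level-shift contamination below is an illustrative mixture-type design, not the paper's exact specification.
      import numpy as np

      def log_periodogram_d(x, m):
          """Log-periodogram estimate of d from the first m Fourier frequencies."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          lam = 2.0 * np.pi * np.arange(1, m + 1) / n        # Fourier frequencies
          dft = np.fft.fft(x - x.mean())[1:m + 1]
          I = np.abs(dft) ** 2 / (2.0 * np.pi * n)           # periodogram ordinates
          X = np.column_stack([np.ones(m), -2.0 * np.log(lam)])
          b, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
          return b[1]                                        # slope = estimate of d

      # short-memory noise contaminated by rare random level shifts
      rng = np.random.default_rng(5)
      n = 4096
      jumps = rng.standard_normal(n) * (rng.random(n) < 0.003)
      x = rng.standard_normal(n) + np.cumsum(jumps)
      print("d_hat:", log_periodogram_d(x, m=int(np.sqrt(n))))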
  14. By: Victor Chernozhukov (MIT, Department of Economics & Operations Research Center, University College London and The University of Chicago); Ivan Fernandez-Val (Department of Economics, Boston University); Alfred Galichon (Harvard University, Department of Economics)
    Abstract: Suppose that a target function $f_0 : \mathbb{R}^d \to \mathbb{R}$ is monotonic, namely, weakly increasing, and an original estimate $\hat{f}$ of the target function is available, which is not weakly increasing. Many common estimation methods used in statistics produce such estimates $\hat{f}$. We show that these estimates can always be improved with no harm using rearrangement techniques: the rearrangement methods, univariate and multivariate, transform the original estimate into a monotonic estimate $\hat{f}^*$, and the resulting estimate is closer to the true curve $f_0$ in common metrics than the original estimate $\hat{f}$. We illustrate the results with a computational example and an empirical example dealing with age-height growth charts.
    Keywords: Monotone function, improved approximation, multivariate rearrangement, univariate rearrangement, growth chart, quantile regression, mean regression, series, locally linear, kernel methods
    Date: 2007–04
    URL: http://d.repec.org/n?u=RePEc:bos:wpaper:wp2007-013&r=ecm
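    Code sketch: A minimal sketch of the univariate rearrangement described above, assuming the original estimate has been evaluated on a fine grid of the regressor: sorting the fitted values over the grid gives the monotone rearranged estimate, which is never farther from a monotone target in the L2 sense; the target and the non-monotone estimate below are illustrative constructs.
      import numpy as np

      rng = np.random.default_rng(6)
      x_grid = np.linspace(0.0, 1.0, 200)
      f0 = x_grid ** 2                                       # monotone target function
      f_hat = f0 + 0.15 * np.sin(15 * x_grid) + 0.05 * rng.standard_normal(x_grid.size)

      f_star = np.sort(f_hat)                                # univariate rearrangement over the grid

      l2 = lambda g: np.sqrt(np.mean((g - f0) ** 2))
      print("L2 error, original estimate:  ", l2(f_hat))
      print("L2 error, rearranged estimate:", l2(f_star))    # weakly smaller by the rearrangement inequality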
  15. By: Green, Kesten C.; Armstrong, J. Scott
    Abstract: In 2007, the Intergovernmental Panel on Climate Change’s Working Group One, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme, issued its Fourth Assessment Report. The Report included predictions of dramatic increases in average world temperatures over the next 92 years and of serious harm resulting from the predicted temperature increases. Using forecasting principles as our guide, we asked: Are these forecasts a good basis for developing public policy? Our answer is “no.” To provide forecasts of climate change that are useful for policy-making, one would need to forecast (1) global temperature, (2) the effects of any temperature changes, (3) the effects of alternative policies, and (4) whether the best policy would be successfully implemented. Proper forecasts of all four are necessary for rational policy making. The IPCC Report was regarded as providing the most credible long-term forecasts of global average temperatures by 31 of the 51 scientists and others involved in forecasting climate change who responded to our survey. We found no references to the primary sources of information on forecasting methods, despite the fact that these are easily available in books, articles, and websites. We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical. The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.
    Keywords: accuracy; audit; climate change; evaluation; expert judgment; mathematical models; public policy
    JEL: C53 H23 H21
    Date: 2007–08–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:4361&r=ecm

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.