nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒10‒06
fourteen papers chosen by
Sune Karlsson
Orebro University

  1. Efficient estimation of conditional risk measures in a semiparametric GARCH model By Yang Yan; Dajing Shang; Oliver Linton
  2. A test of the conditional independence assumption in sample selection models By Martin Huber; Blaise Melly
  3. Efficient Estimation of Approximate Factor Models By Bai, Jushan; Liao, Yuan
  4. Estimation Adjusted VaR By Christian Gouriéroux; Jean-Michel Zakoian
  5. Averaging of moment condition estimators By Xiaohong Chen; David T. Jacho-Chavez; Oliver Linton
  6. Statistical Inference in Compound Functional Models By Arnak Dalalyan; Yuri Ingster; Alexandre B. Tsybakov
  7. Testing in the Presence of Nuisance Parameters: Some Comments on Tests Post-Model-Selection and Random Critical Values By Leeb, Hannes; Pötscher, Benedikt M.
  8. On Confidence Intervals for Autoregressive Roots and Predictive Regression By Peter C.B. Phillips
  9. The Dynamic Location/Scale Model: with applications to intra-day financial data By Andres, P.; Harvey, A.
  10. Testing for the stochastic dominance efficiency of a given portfolio By Oliver Linton; Yoon-Jae Whang
  11. Searching for Rehabilitation in Nonparametric Regression Models with Exogenous Treatment Assignment By Henderson, Daniel J.; Maasoumi, Esfandiar
  12. New Taxonomies for Limited Dependent Variables Models By Biørn, Erik; Wangen, Knut R.
  13. Please Call Again, Correcting Non-response Bias in Treatment Effect Models By Luc Behaghel; Bruno Crépon; Marc Gurgand; Thomas Le barbanchon
  14. The Reactive Volatility Model By Sebastien Valeyre; Denis Grebenkov; Sofiane Aboura; Qian Liu

  1. By: Yang Yan; Dajing Shang; Oliver Linton (Institute for Fiscal Studies and Cambridge University)
    Abstract: This paper proposes efficient estimators of risk measures in a semiparametric GARCH model defined through moment constraints. Moment constraints are often used to identify and estimate the mean and variance parameters, but are discarded when estimating error quantiles. In order to prevent this efficiency loss in quantile estimation, we propose a quantile estimator based on inverting an empirical likelihood weighted distribution estimator. It is found that the new quantile estimator is uniformly more efficient than the simple empirical quantile and a quantile estimator based on normalized residuals. At the same time, the efficiency gain in error quantile estimation hinges on the efficiency of estimators of the variance parameters. We show that the same conclusion applies to the estimation of conditional Expected Shortfall. Our comparison also leads to interesting implications of residual bootstrap for dynamic models. We find that the proposed estimators of conditional Value-at-Risk and Expected Shortfall are asymptotically mixed normal. This asymptotic theory can be used to construct confidence bands for these estimators that take account of parameter uncertainty. Simulation evidence as well as empirical results are provided.
    Keywords: Empirical Likelihood; Empirical process; GARCH; Quantile; Value-at-Risk; Expected Shortfall.
    JEL: C14 C22 G22
    Date: 2012–09
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:25/12&r=ecm
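    A minimal numpy sketch of the core idea in this entry: invert a weighted empirical distribution of standardized residuals to obtain a quantile, then scale it into a conditional VaR. The empirical-likelihood weights are taken as given inputs, and all names below are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def weighted_quantile(eps, w, alpha):
          """Invert a weighted empirical CDF of residuals at level alpha."""
          order = np.argsort(eps)
          cdf = np.cumsum(w[order])              # weights assumed to sum to 1
          return eps[order][np.searchsorted(cdf, alpha)]

      def conditional_var(mu_t, sigma_t, eps, w, alpha=0.05):
          """Conditional VaR: location/scale forecast times a residual quantile."""
          return mu_t + sigma_t * weighted_quantile(eps, w, alpha)

      # With uniform weights this reduces to the simple empirical quantile
      rng = np.random.default_rng(0)
      eps = rng.standard_normal(1000)
      w = np.full(1000, 1 / 1000)
      print(conditional_var(0.0, 1.5, eps, w))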
  2. By: Martin Huber; Blaise Melly
    Abstract: Identification in most sample selection models depends on the independence of the regressors and the error terms conditional on the selection probability. All quantile and mean functions are parallel in these models; this implies that quantile estimators cannot reveal any heterogeneity, which is non-existent by assumption. Quantile estimators are nevertheless useful for testing the conditional independence assumption because they are consistent under the null hypothesis. We propose tests of the Kolmogorov-Smirnov type based on the conditional quantile regression process. Monte Carlo simulations show that their size is satisfactory and their power sufficient to detect deviations under realistic data-generating processes. We apply our procedures to female wage data from the 2011 Current Population Survey and show that homogeneity is clearly rejected.
    Keywords: sample selection, quantile regression, independence, test
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:bro:econwp:2012-11&r=ecm
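    A sketch of the flavor of such a test, using statsmodels' QuantReg: under the null, all conditional quantile functions are parallel, so the slope coefficients should not vary across quantiles, and a Kolmogorov-Smirnov-type statistic takes the largest deviation of quantile-specific slopes from their average. In practice critical values come from resampling the quantile regression process; the details below are assumptions for illustration, not the authors' procedure.

      import numpy as np
      from statsmodels.regression.quantile_regression import QuantReg

      rng = np.random.default_rng(1)
      n = 500
      x = rng.uniform(0, 1, n)
      y = 1.0 + 2.0 * x + rng.standard_normal(n)   # parallel quantiles under H0
      X = np.column_stack([np.ones(n), x])

      taus = np.linspace(0.1, 0.9, 9)
      slopes = np.array([QuantReg(y, X).fit(q=t).params[1] for t in taus])

      # KS-type statistic: sup over quantiles of the centered slope process
      ks_stat = np.sqrt(n) * np.max(np.abs(slopes - slopes.mean()))
      print(ks_stat)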
  3. By: Bai, Jushan; Liao, Yuan
    Abstract: We study the estimation of a high dimensional approximate factor model in the presence of both cross-sectional dependence and heteroskedasticity. The classical method of principal components analysis (PCA) does not efficiently estimate the factor loadings or common factors because it essentially treats the idiosyncratic errors as homoskedastic and cross-sectionally uncorrelated. For efficient estimation, it is essential to estimate a large error covariance matrix. We assume the model to be conditionally sparse and propose two approaches to estimating the common factors and factor loadings; both are based on maximizing a Gaussian quasi-likelihood and involve regularizing a large sparse covariance matrix. In the first approach the factor loadings and the error covariance are estimated separately, while in the second approach they are estimated jointly. Extensive asymptotic analysis is carried out; in particular, we develop the inferential theory for the two-step estimation. Because the proposed approaches take the large error covariance matrix into account, they produce more efficient estimators than classical PCA methods or methods based on a strict factor model.
    Keywords: High dimensionality; unknown factors; principal components; sparse matrix; conditional sparse; thresholding; cross-sectional correlation; penalized maximum likelihood; adaptive lasso; heteroskedasticity
    JEL: C31 C33 C01
    Date: 2012–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:41558&r=ecm
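    The two-step logic can be sketched in numpy: extract factors by PCA, then soft-threshold the off-diagonal entries of the residual covariance to impose conditional sparsity. The threshold value and normalizations below are simplifying assumptions; the paper's efficient second step is only indicated in the closing comment.

      import numpy as np

      def pca_step(X, r):
          """First step: principal-components estimates from a T x N panel."""
          T, N = X.shape
          S = X.T @ X / T
          vals, vecs = np.linalg.eigh(S)
          V = vecs[:, ::-1][:, :r]        # top-r eigenvectors
          F = X @ V                       # estimated factors (T x r)
          return F, V                     # V plays the role of the loadings

      def soft_threshold_cov(S, tau):
          """Soft-threshold off-diagonal entries to impose conditional sparsity."""
          St = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
          np.fill_diagonal(St, np.diag(S))
          return St

      rng = np.random.default_rng(2)
      T, N, r = 200, 50, 2
      X = rng.standard_normal((T, r)) @ rng.standard_normal((N, r)).T \
          + rng.standard_normal((T, N))

      F, L = pca_step(X, r)
      U = X - F @ L.T                     # idiosyncratic residuals
      Sigma_u = soft_threshold_cov(U.T @ U / T, tau=0.1)
      # A second, efficient step would re-estimate F and L by (quasi-)GLS,
      # using Sigma_u in place of the identity weighting implicit in PCA.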
  4. By: Christian Gouriéroux (CREST and Université de Toronto); Jean-Michel Zakoian (Canada and University Lille 3)
    Abstract: Standard risk measures, such as the Value-at-Risk (VaR) or the Expected Shortfall, have to be estimated, and their estimated counterparts are subject to estimation uncertainty. Replacing, in the theoretical formulas, the true parameter value by an estimator based on n observations of the Profit and Loss variable induces an asymptotic bias of order 1/n in the coverage probabilities. This paper shows how to correct for this bias by introducing a new estimator of the VaR, called Estimation adjusted VaR (EVaR). This adjustment allows for a joint treatment of theoretical and estimation risks, taking into account their possible dependence. The estimator is derived for a general parametric dynamic model and is particularized to stochastic drift and volatility models. The finite sample properties of the EVaR estimator are studied by simulation and an empirical study of the S&P Index is proposed.
    Keywords: Value-at-Risk, Estimation Risk, Bias Correction, ARCH Model
    Date: 2012–09
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2012-16&r=ecm
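    The paper derives an analytic O(1/n) adjustment; a generic parametric-bootstrap bias correction conveys the idea in a Gaussian toy case. This stands in for, and is not, the authors' EVaR formula, and the Gaussian P&L is an assumption for exposition.

      import numpy as np
      from scipy.stats import norm

      def plug_in_var(pl, alpha=0.05):
          """Plug-in Gaussian VaR from n observations of the P&L."""
          mu, sigma = pl.mean(), pl.std(ddof=1)
          return -(mu + sigma * norm.ppf(alpha))

      def bias_corrected_var(pl, alpha=0.05, B=2000, seed=0):
          """Parametric bootstrap bias correction of the plug-in VaR."""
          rng = np.random.default_rng(seed)
          n = len(pl)
          mu, sigma = pl.mean(), pl.std(ddof=1)
          v_hat = plug_in_var(pl, alpha)
          v_boot = np.empty(B)
          for b in range(B):
              sim = rng.normal(mu, sigma, n)     # resample from the fitted model
              v_boot[b] = plug_in_var(sim, alpha)
          return 2 * v_hat - v_boot.mean()       # classic bootstrap bias correction

      pl = np.random.default_rng(3).normal(0.0, 1.0, 250)
      print(plug_in_var(pl), bias_corrected_var(pl))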
  5. By: Xiaohong Chen (Institute for Fiscal Studies and Yale University); David T. Jacho-Chavez; Oliver Linton (Institute for Fiscal Studies and Cambridge University)
    Abstract: We establish the consistency and asymptotic normality of a class of estimators that are linear combinations of a set of √n-consistent estimators whose cardinality increases with sample size. A special case of our framework corresponds to the conditional moment restriction, and the implied estimator in that case is shown to achieve the semiparametric efficiency bound. The proofs do not rely on smoothness of the underlying criterion functions.
    Keywords: Instrumental Variables; Minimum Distance; Semiparametric Efficiency; Two-Stage Least Squares
    JEL: C12 C13 C14
    Date: 2012–09
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:26/12&r=ecm
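    For a fixed number K of √n-consistent estimators of the same scalar with (estimated) joint asymptotic covariance V, the minimum-variance linear combination has the classic weights w = V⁻¹1 / (1'V⁻¹1). The sketch below illustrates only this fixed-K combination; the growing-cardinality asymptotics are the paper's contribution and are not captured here.

      import numpy as np

      def optimal_average(theta_hats, V):
          """Minimum-variance unbiased combination of K estimators."""
          ones = np.ones(len(theta_hats))
          Vinv_ones = np.linalg.solve(V, ones)
          w = Vinv_ones / (ones @ Vinv_ones)   # weights sum to one
          return w @ np.asarray(theta_hats), w

      # Three estimators of the same parameter with a known covariance
      V = np.array([[1.0, 0.3, 0.1],
                    [0.3, 2.0, 0.2],
                    [0.1, 0.2, 4.0]])
      est, w = optimal_average([1.1, 0.9, 1.3], V)
      print(est, w)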
  6. By: Arnak Dalalyan (CREST); Yuri Ingster (St Petersburg State Electrotechnical University); Alexandre B. Tsybakov (CREST)
    Abstract: We consider a general nonparametric regression model called the compound model. It includes, as special cases, sparse additive regression and nonparametric (or linear) regression with many covariates but possibly a small number of relevant ones. The compound model is characterized by three main parameters: the structure parameter describing the macroscopic form of the compound function, the microscopic sparsity parameter indicating the maximal number of relevant covariates in each component, and the usual smoothness parameter corresponding to the complexity of the members of the compound. We derive the non-asymptotic minimax rate of convergence of estimators in such a model as a function of these three parameters and show that this rate can be attained in an adaptive way.
    Keywords: Compound functional model, Minimax estimation, Sparse additive structure, Dimension reduction, Structure adaptation
    Date: 2012–09
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2012-20&r=ecm
  7. By: Leeb, Hannes; Pötscher, Benedikt M.
    Abstract: We point out that the ideas underlying some test procedures recently proposed for testing post-model-selection (and for some other test problems) in the econometrics literature have been around for quite some time in the statistics literature. We also sharpen some of these results in the statistics literature and show that some of the proposals in the econometrics literature lead to tests that do not have the claimed size properties.
    Keywords: Tests in presence of nuisance parameters; inference post model selection
    JEL: C52 C12 C01
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:41459&r=ecm
  8. By: Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: A prominent use of local to unity limit theory in applied work is the construction of confidence intervals for autoregressive roots through inversion of the ADF t statistic associated with a unit root test, as suggested in Stock (1991). Such confidence intervals are valid when the true model has an autoregressive root that is local to unity (rho = 1 + c/n) but are invalid at the limits of the domain of definition of the localizing coefficient c because of a failure in tightness and the escape of probability mass. Consideration of the boundary case shows that these confidence intervals are invalid for stationary autoregression, where they manifest locational bias and width distortion. In particular, the coverage probability of these intervals tends to zero as c approaches -infinity, and the width of the intervals exceeds the width of intervals constructed in the usual way under stationarity. Some implications of these results for predictive regression tests are explored. It is shown that when the regressor has autoregressive coefficient |rho| < 1 and the sample size n approaches infinity, the Campbell and Yogo (2006) confidence intervals for the regression coefficient have zero coverage probability asymptotically, and their predictive test statistic Q erroneously indicates predictability with probability approaching unity when the null of no predictability holds. These results have obvious implications for empirical practice.
    Keywords: Autoregressive root, Confidence belt, Confidence interval, Coverage probability, Local to unity, Localizing coefficient, Predictive regression, Tightness
    JEL: C22
    Date: 2012–09
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1879&r=ecm
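    The Stock (1991) construction that the paper critiques can be sketched by simulation: for each localizing coefficient c on a grid, simulate the t statistic under rho = 1 + c/n and keep the values of c whose acceptance region contains the observed statistic. This is a stylized version without deterministic terms or lag augmentation, and the grid and replication counts are arbitrary choices.

      import numpy as np

      def t_stat(y):
          """t statistic for rho = 1 in the regression of y_t on y_{t-1}."""
          ylag, dy = y[:-1], np.diff(y)
          b = ylag @ dy / (ylag @ ylag)
          resid = dy - b * ylag
          s2 = resid @ resid / (len(dy) - 1)
          return b / np.sqrt(s2 / (ylag @ ylag))

      def stock_interval(t_obs, n, c_grid, B=999, level=0.95, seed=0):
          """Invert simulated acceptance regions of the t statistic over c."""
          rng = np.random.default_rng(seed)
          lo, hi = (1 - level) / 2, 1 - (1 - level) / 2
          kept = []
          for c in c_grid:
              rho = 1 + c / n
              sims = np.empty(B)
              for b in range(B):
                  e = rng.standard_normal(n)
                  y = np.empty(n)
                  y[0] = e[0]
                  for t in range(1, n):
                      y[t] = rho * y[t - 1] + e[t]
                  sims[b] = t_stat(y)
              if np.quantile(sims, lo) <= t_obs <= np.quantile(sims, hi):
                  kept.append(rho)
          return (min(kept), max(kept)) if kept else None

      # Example call (slow; kept small deliberately):
      # ci = stock_interval(-2.0, 100, np.linspace(-40, 5, 46))
      # The paper's point: as the true rho moves away from unity (c -> -infinity),
      # intervals built this way lose coverage and become too wide.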
  9. By: Andres, P.; Harvey, A.
    Abstract: In dynamic conditional score models, the innovation term of the dynamic specification is the score of the conditional distribution. These models are investigated for non-negative variables, using distributions from the generalized beta and generalized gamma families. The log-normal distribution is also considered. Applications to the daily range of stock market indices are reported and models are fitted to duration data.
    Keywords: Burr distribution; Durations; Range; Score; Unobserved components; Weibull distribution
    JEL: C22 G10
    Date: 2012–09–26
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1240&r=ecm
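    A minimal score-driven filter for a non-negative variable, using an exponential distribution with a log link: the score of log f(y; exp(lambda)) with respect to lambda is y*exp(-lambda) - 1, and it drives the dynamics. The paper works with the richer generalized beta/gamma and log-normal families; the distribution and parameter values here are assumptions for illustration.

      import numpy as np

      def dcs_filter(y, omega=0.0, phi=0.95, kappa=0.1):
          """Score-driven recursion: lambda_{t+1} = omega + phi*lambda_t + kappa*u_t,
          where u_t is the score of an exponential density with mean exp(lambda_t)."""
          lam = np.empty(len(y) + 1)
          lam[0] = np.log(y.mean())                  # simple initialization
          for t in range(len(y)):
              u = y[t] * np.exp(-lam[t]) - 1.0       # conditional score
              lam[t + 1] = omega + phi * lam[t] + kappa * u
          return lam

      # Simulated durations: the filtered exp(lambda_t) tracks the local mean
      rng = np.random.default_rng(4)
      y = rng.exponential(scale=np.exp(np.sin(np.linspace(0, 6, 500))))
      lam = dcs_filter(y)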
  10. By: Oliver Linton (Institute for Fiscal Studies and Cambridge University); Yoon-Jae Whang (Institute for Fiscal Studies and Seoul National University)
    Abstract: We propose a new statistical test of the stochastic dominance efficiency of a given portfolio over a class of portfolios. We establish its null and alternative asymptotic properties, and define a method for consistently estimating critical values. We present some numerical evidence that our tests work well in moderate sized samples.
    Date: 2012–09
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:27/12&r=ecm
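    A stylized pairwise version of the first-order comparison underlying such tests: the supremum difference between two empirical CDFs of portfolio returns, with critical values in practice obtained by resampling. The paper tests efficiency of a portfolio against a whole class of portfolios, which this sketch does not attempt.

      import numpy as np

      def fsd_statistic(r_candidate, r_alternative):
          """sup_x [ F_candidate(x) - F_alternative(x) ]: large positive values
          are evidence against first-order dominance by the candidate."""
          grid = np.sort(np.concatenate([r_candidate, r_alternative]))
          F_c = np.searchsorted(np.sort(r_candidate), grid,
                                side='right') / len(r_candidate)
          F_a = np.searchsorted(np.sort(r_alternative), grid,
                                side='right') / len(r_alternative)
          return np.max(F_c - F_a)

      rng = np.random.default_rng(5)
      stat = fsd_statistic(rng.normal(0.05, 0.1, 1000),   # candidate portfolio
                           rng.normal(0.03, 0.1, 1000))   # alternative
      print(stat)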
  11. By: Henderson, Daniel J. (University of Alabama); Maasoumi, Esfandiar (Emory University)
    Abstract: This paper offers some new directions in the analysis of nonparametric models with exogenous treatment assignment. The nonparametric approach opens the door to the examination of potentially differently distributed outcomes. When combined with cross-validation, it also identifies potentially irrelevant variables and linear versus nonlinear effects. Examination of the distribution of effects requires distribution metrics, such as stochastic dominance tests, for ranking based on a wide range of criterion functions, including dollar valuations. We can identify subgroups with different treatment outcomes. We offer an empirical demonstration based on the GAIN data. In the case of one covariate (English as the primary language), there is support for a statistical inference of uniform first-order dominant treatment effects. We also find several other cases that indicate second- and higher-order dominance rankings to a statistical degree of confidence.
    Keywords: bootstrap, GAIN, nonparametric, rehabilitation, stochastic dominance, treatment effects
    JEL: C14
    Date: 2012–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp6874&r=ecm
  12. By: Biørn, Erik; Wangen, Knut R.
    Abstract: We establish a 'map' for describing a wide class of Limited Dependent Variables models much used in the econometric literature. The classification system, or language, is an extension of Amemiya's typology for tobit models and is intended to facilitate communication among researchers. The class is defined in relation to distributions of latent variables of an arbitrarily high dimension; the region of support can be divided into an arbitrary number of subsets, and the observation rules in each subset can be any combination of the observed, censored, and missing status. Consistent labeling is suggested at different levels of detail.
    Keywords: Limited dependent variables; Latent variables; Censoring; Truncation; Missing observations
    JEL: C24 C25
    Date: 2012–07–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:41461&r=ecm
  13. By: Luc Behaghel (Paris School of Economics); Bruno Crépon (CREST); Marc Gurgand (J-PAL); Thomas Le Barbanchon
    Abstract: We propose a novel selectivity correction procedure to deal with survey attrition, at the crossroads of the "Heckit" model and the bounding approach of Lee (2009). As a substitute for the instrument needed in sample selectivity correction models, we use information on the number of attempts that were made to obtain a response to the survey from each individual who responded. We obtain set identification, but if the number of attempts to reach each individual is high enough, we can come close to point identification. We apply our sample selection correction in the context of a job-search experiment with low and unbalanced response rates.
    Keywords: Survey non-response, sample selectivity, treatment effect models, randomized controlled trial
    JEL: C31 C93 J6
    Date: 2012–08
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2012-15&r=ecm
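    The bounding half of the procedure follows Lee (2009) trimming, sketched below for bounds on the mean outcome of treated always-responders. The paper's contribution, using the number of call attempts to tighten these bounds toward point identification, is omitted here, and all variable names are illustrative.

      import numpy as np

      def lee_bounds(y_treat, resp_treat, resp_control):
          """Lee (2009) bounds. y_treat: outcomes of treated respondents;
          resp_treat, resp_control: 0/1 response indicators by arm."""
          p1, p0 = resp_treat.mean(), resp_control.mean()
          q = (p1 - p0) / p1                   # share of treated respondents to trim
          y = np.sort(y_treat)
          k = int(np.floor(q * len(y)))
          lower = y[:len(y) - k].mean()        # trim the top q share
          upper = y[k:].mean()                 # trim the bottom q share
          return lower, upper

      rng = np.random.default_rng(6)
      resp_t = rng.random(1000) < 0.8          # 80% response under treatment
      resp_c = rng.random(1000) < 0.6          # 60% response under control
      y_t = rng.normal(1.0, 1.0, resp_t.sum())
      print(lee_bounds(y_t, resp_t, resp_c))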
  14. By: Sebastien Valeyre; Denis Grebenkov; Sofiane Aboura; Qian Liu
    Abstract: We present a new volatility model, simple to implement, that combines various attractive features such as an exponential moving average of the price and a leverage effect. This model is able to capture the so-called "panic effect", which occurs whenever systematic risk becomes the dominant factor. Consequently, in contrast to other models, this new model is as reactive as the implied volatility indices. We also test the reactivity of our model using extreme events taken from the 470 most liquid European stocks over the last decade. We show that the reactive volatility model is more robust to extreme events, and that it allows for the identification of precursors and aftershocks of extreme events.
    Date: 2012–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1209.5190&r=ecm
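    An illustrative filter combining the two ingredients named in the abstract, an exponential moving average of the price and a multiplicative leverage term that amplifies volatility when the price falls below its own average. The functional form and parameter values are assumptions for exposition, not the authors' specification.

      import numpy as np

      def reactive_vol(prices, lam=0.06, ell=0.02, gamma=1.0):
          """EWMA volatility of returns, amplified when the price drops below
          its exponential moving average (a crude leverage/'panic' proxy)."""
          r = np.diff(np.log(prices))
          ema = prices[0]
          var = r[:20].var()                             # crude initialization
          out = np.empty(len(r))
          for t, (p, ret) in enumerate(zip(prices[1:], r)):
              ema = (1 - ell) * ema + ell * p            # EMA of the price level
              leverage = np.exp(-gamma * (p / ema - 1))  # > 1 when price < EMA
              var = (1 - lam) * var + lam * ret ** 2
              out[t] = np.sqrt(var) * leverage
          return out

      rng = np.random.default_rng(7)
      prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(500)))
      sigma = reactive_vol(prices)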

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.