nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒12‒19
eight papers chosen by
Sune Karlsson
Orebro University

  1. A robust bootstrap approach to the Hausman test in stationary panel data models By Herwartz, Helmut; Neumann, Michael H.
  2. Modelling heterogeneity and dynamics in the volatility of individual wages By Laura Hospido
  3. Using Firm Optimization to Evaluate and Estimate Returns to Scale By Yuriy Gorodnichenko
  4. Estimation and decomposition of downside risk for portfolios with non-normal returns By Boudt, Kris; Peterson, Brian; Croux, Christophe
  5. Joint Modeling of Call and Put Implied Volatility By Ahoniemi, Katja; Lanne, Markku
  6. A stochastic volatility Libor model and its robust calibration By Denis Belomestny; Stanley Matthew; John Schoenmakers
  7. Evaluations of likelihood based surveillance of volatility By Bock, David
  8. How to Measure Segregation Conditional on the Distribution of Covariates By Åslund, Olof; Nordström Skans, Oskar

  1. By: Herwartz, Helmut; Neumann, Michael H.
    Abstract: In panel data econometrics the Hausman test is of central importance for selecting an efficient estimator of the model's slope parameters. When testing the null hypothesis of no correlation between unobserved heterogeneity and observable explanatory variables by means of the Hausman test, model disturbances are typically assumed to be independent and identically distributed over the time and cross-section dimensions. The test statistic lacks pivotalness when the iid assumption is violated. GLS-based variants of the test statistic are suitable to overcome the impact of nuisance parameters on the asymptotic distribution of the Hausman statistic. Such test statistics, however, also build upon strong homogeneity restrictions that might not be met by empirical data. We propose a bootstrap approach to specification testing in panel data models which is robust under cross-sectional or time heteroskedasticity and inhomogeneous patterns of serial correlation. A Monte Carlo study shows that in small samples the bootstrap approach outperforms inference based on critical values taken from a χ²-distribution.
    Keywords: Hausman test, random effects model, wild bootstrap, heteroskedasticity
    JEL: C12 C33
    Date: 2007
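    The wild-bootstrap idea behind the paper can be sketched in a few lines: resampling restricted-model residuals with random signs preserves each observation's error variance, which is what makes the procedure robust to heteroskedasticity. The sketch below applies it to a simple regression slope test, not the authors' panel Hausman statistic; the function name and settings are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def wild_bootstrap_pvalue(y, x, n_boot=999):
    """Wild-bootstrap p-value for H0: slope = 0 in y = a + b*x + u.

    Residuals from the restricted (intercept-only) fit are multiplied
    by Rademacher weights (+1/-1 with equal probability), which keeps
    each observation's error variance intact and so makes the test
    valid under heteroskedasticity.
    """
    x1 = np.column_stack([np.ones_like(x), x])
    b_hat = np.linalg.lstsq(x1, y, rcond=None)[0][1]
    resid0 = y - y.mean()                          # residuals under H0
    hits = 0
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=y.size)   # Rademacher draws
        y_star = y.mean() + w * resid0             # impose the null
        b_star = np.linalg.lstsq(x1, y_star, rcond=None)[0][1]
        if abs(b_star) >= abs(b_hat):
            hits += 1
    return (hits + 1) / (n_boot + 1)
```

    A studentized statistic would be preferable in practice; the unstudentized version above keeps the mechanics visible.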
  2. By: Laura Hospido (Banco de España)
    Abstract: In this paper I consider a model for the heterogeneity and dynamics of the conditional mean and the conditional variance of standardized individual wages. In particular, I propose a dynamic panel data model with individual effects both in the mean and in a conditional ARCH-type variance function. I posit a distribution for earnings shocks and build a modified likelihood function for estimation and inference in a fixed-T context. Using a newly developed bias-corrected likelihood approach makes it possible to reduce the estimation bias to a term of order 1/T². The small-sample performance of the bias-corrected estimators is investigated in a Monte Carlo simulation study. The simulation results show that the bias of the maximum likelihood estimator is substantially corrected for designs that are broadly calibrated to the PSID. The empirical analysis is conducted on data drawn from the 1968-1993 PSID. I find that it is important to account for individual unobserved heterogeneity and dynamics in the variance, and that the latter is driven by job mobility. I also find that the model explains the non-normality observed in log-wage data.
    Keywords: Panel data, dynamic nonlinear models, conditional heteroskedasticity, fixed effects, bias reduction, individual wages
    JEL: C23 J31
    Date: 2007–12
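    As a rough illustration of the variance specification, the following sketch simulates a panel of shocks whose conditional variance follows an ARCH(1) recursion with an individual effect. This is a toy data-generating process with made-up parameter values, not the paper's model, which also places an individual effect in the mean and is estimated by a fixed-T bias-corrected likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_arch_panel(n, t, omega=0.2, alpha=0.3):
    """Simulate an n x t panel of shocks with conditional variance
    h_{it} = eta_i + omega + alpha * eps_{i,t-1}^2, i.e. an ARCH(1)
    recursion plus an individual effect eta_i in the variance."""
    eta = np.abs(rng.normal(0.5, 0.2, size=n))   # individual variance effects
    eps = np.zeros((n, t))
    h = eta + omega                              # initial conditional variance
    for s in range(t):
        eps[:, s] = np.sqrt(h) * rng.standard_normal(n)
        h = eta + omega + alpha * eps[:, s]**2   # ARCH(1) update
    return eps
```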
  3. By: Yuriy Gorodnichenko
    Abstract: At the firm level, revenue and costs are well measured but prices and quantities are not. This paper shows that, because of these data limitations, estimates of returns to scale at the firm level are for the revenue function, not the production function. Given this observation, the paper argues that, under weak assumptions, micro-level estimates of returns to scale are often inconsistent with profit maximization or imply implausibly large profits. The puzzle arises because popular estimators ignore heterogeneity and endogeneity in factor/product prices, assume perfectly elastic factor supply curves, or neglect the restrictions imposed by profit maximization (cost minimization), so that the estimators are inconsistent or poorly identified. The paper argues that simple structural estimators can address these problems. Specifically, the paper proposes a full-information estimator that models the cost and revenue functions simultaneously and accounts symmetrically for unobserved heterogeneity in productivity and factor prices. The strength of the proposed estimator is illustrated by Monte Carlo simulations and an empirical application. Finally, the paper discusses a number of implications of estimating revenue functions rather than production functions and demonstrates that the profit share in revenue is a robust non-parametric economic diagnostic for estimates of returns to scale.
    JEL: D24 D4 E23 L11
    Date: 2007–11
  4. By: Boudt, Kris; Peterson, Brian; Croux, Christophe
    Abstract: We propose a new estimator for Expected Shortfall that uses asymptotic expansions to account for the asymmetry and heavy tails in financial returns. We provide all the necessary formulas for decomposing estimators of Value at Risk and Expected Shortfall based on asymptotic expansions and show that this new methodology is very useful for analyzing and predicting the risk properties of portfolios of alternative investments.
    Keywords: Alternative investments; Component Value at Risk; Cornish-Fisher expansion; downside risk; expected shortfall; portfolio; risk contribution; Value at Risk.
    JEL: C13 C22 G11
    Date: 2007–08–17
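    The Cornish-Fisher expansion named in the keywords adjusts a Gaussian quantile for skewness and excess kurtosis, which is the standard route to a "modified" Value at Risk for non-normal returns. The sketch below uses the textbook fourth-order expansion; the function name is my own, and the paper's Expected Shortfall estimator and risk decomposition are not reproduced here.

```python
import numpy as np
from statistics import NormalDist

def cornish_fisher_var(returns, alpha=0.05):
    """Modified Value at Risk via a Cornish-Fisher expansion.

    The Gaussian quantile is adjusted for sample skewness and excess
    kurtosis so the estimate reflects asymmetry and fat tails.
    Returns the loss as a positive number.
    """
    r = np.asarray(returns, dtype=float)
    mu, sigma = r.mean(), r.std(ddof=1)
    z = (r - mu) / sigma
    s = (z**3).mean()                 # sample skewness
    k = (z**4).mean() - 3.0           # sample excess kurtosis
    q = NormalDist().inv_cdf(alpha)   # Gaussian quantile, e.g. -1.645 at 5%
    q_cf = (q + (q**2 - 1) * s / 6
              + (q**3 - 3 * q) * k / 24
              - (2 * q**3 - 5 * q) * s**2 / 36)
    return -(mu + sigma * q_cf)
```

    For symmetric, thin-tailed data the correction vanishes and the estimate collapses to the usual Gaussian VaR.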
  5. By: Ahoniemi, Katja; Lanne, Markku
    Abstract: This paper exploits the fact that implied volatilities calculated from identical call and put options have often been empirically found to differ, although they should be equal in theory. We propose a new bivariate mixture multiplicative error model and show that it is a good fit to Nikkei 225 index call and put option implied volatility (IV). A good model fit requires two mixture components in the model, allowing for different mean equations and error distributions for calmer and more volatile days. Forecast evaluation indicates that in addition to jointly modeling the time series of call and put IV, cross effects should be added to the model: put-side implied volatility helps forecast call-side IV, and vice versa. Impulse response functions show that the IV derived from put options recovers faster from shocks, and the effect of shocks lasts for up to six weeks.
    Keywords: Implied Volatility; Option Markets; Multiplicative Error Models; Forecasting
    JEL: C32 C53 G13
    Date: 2007
  6. By: Denis Belomestny; Stanley Matthew; John Schoenmakers
    Abstract: In this paper we propose a Libor model with a high-dimensional, specially structured system of driving CIR volatility processes. A stable calibration procedure that takes a given local correlation structure into account is presented. The calibration algorithm is FFT-based, and hence fast and easy to implement.
    Keywords: Libor modelling, stochastic volatility, CIR processes, calibration
    JEL: J31 I19 C51
    Date: 2007–12
  7. By: Bock, David (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: The volatility of asset returns is important in finance. Different likelihood-based methods of statistical surveillance for detecting a change in the variance are evaluated; the methods differ in how the partial likelihood ratios are weighted. The full likelihood ratio, Shiryaev-Roberts, Shewhart, and CUSUM methods are derived for the case of an independent and identically distributed Gaussian process. The behavior of the methods is studied both when there is no change and when the change occurs at different time points. False alarms are controlled by the median run length. Differences and limiting equalities of the methods are shown. The performance of the methods when the process parameters for which they are optimized differ from the true parameter values is also evaluated. The methods are illustrated on a period of the Standard and Poor's 500 stock market index.
    Keywords: surveillance; statistical process control; monitoring; likelihood ratio; Shewhart; CUSUM
    JEL: C10
    Date: 2007–01–01
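    The CUSUM method mentioned above reduces, for an iid Gaussian sequence, to a one-line recursion on partial log-likelihood ratios: the statistic accumulates evidence for the post-change variance and resets at zero. The sketch below is the standard recursion with illustrative parameter values and alarm threshold, not the paper's evaluation setup.

```python
import numpy as np

def cusum_variance_alarm(x, sigma0=1.0, sigma1=1.5, threshold=5.0):
    """One-sided CUSUM for detecting a variance increase in an iid
    Gaussian sequence, built on per-observation log-likelihood ratios.

    Under H0: x_t ~ N(0, sigma0^2); after the change: N(0, sigma1^2).
    Returns the first alarm time (1-based) or None if no alarm.
    """
    c = np.log(sigma0 / sigma1)                   # constant part of the LLR
    k = 0.5 * (1 / sigma0**2 - 1 / sigma1**2)     # coefficient on x_t^2
    s = 0.0
    for t, xt in enumerate(x, start=1):
        llr = c + k * xt**2                       # log f1(x_t) - log f0(x_t)
        s = max(0.0, s + llr)                     # CUSUM recursion
        if s > threshold:
            return t
    return None
```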
  8. By: Åslund, Olof (Department of Economics); Nordström Skans, Oskar (Department of Economics)
    Abstract: This short paper proposes a non-parametric method of accounting for the distribution of background characteristics when testing for segregation in empirical studies. It is shown and exemplified, using data on workplace segregation between immigrants and natives in Sweden, how the method can be applied to correct any measure of segregation for differences between groups in the distribution of covariates by means of simulation, and how analytical results can be used when studying segregation by means of peer-group exposure.
    Keywords: Segregation; Covariates; Workplaces; Immigrants
    JEL: C10 J10 J20
    Date: 2007–12–07
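    The simulation-based correction can be sketched as a within-stratum reshuffling exercise: under the null of no sorting beyond observables, group labels are randomly reallocated within covariate cells and the segregation measure is recomputed to form a reference distribution. The sketch below uses own-group exposure as the measure; the function name, the stratification scheme, and the measure choice are illustrative, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def conditional_segregation_pvalue(group, unit, strata, n_sim=999):
    """Simulation test of segregation conditional on covariates.

    Minority indicators (group == 1) are randomly permuted across
    individuals *within each covariate stratum*, so the simulated
    null distribution of the measure reflects random sorting given
    observables. The measure here is the minority's average share of
    own-group members in their unit (own-group exposure).
    """
    group, unit, strata = map(np.asarray, (group, unit, strata))

    def exposure(g):
        shares = {u: g[unit == u].mean() for u in np.unique(unit)}
        return np.mean([shares[u] for u, gi in zip(unit, g) if gi == 1])

    obs = exposure(group)
    hits = 0
    for _ in range(n_sim):
        g = group.copy()
        for s in np.unique(strata):
            idx = np.where(strata == s)[0]
            g[idx] = rng.permutation(g[idx])   # reshuffle within stratum
        if exposure(g) >= obs:
            hits += 1
    return obs, (hits + 1) / (n_sim + 1)
```

    With fully segregated toy data (each unit entirely minority or entirely majority), the observed exposure is 1, and the p-value reports how often random within-stratum sorting reproduces it.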

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.