nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒04‒05
twenty-six papers chosen by
Sune Karlsson
Örebro University

  1. Efficient Estimation of an Additive Quantile Regression Model By Cheng, Yebin; De Gooijer, Jan; Zerom, Dawit
  2. Poisson Autoregression By Konstantinos Fokianos; Anders Rahbek; Dag Tjøstheim
  3. Identification and estimation of marginal effects in nonlinear panel models By Victor Chernozhukov; Ivan Fernandez-Val; Jinyong Hahn; Whitney Newey
  4. Extreme Value GARCH modelling with Bayesian Inference By Les Oxley; Marco Reale; Carl Scarrott; Xin Zhao
  5. Copula-based nonlinear quantile autoregression By Xiaohong Chen; Roger Koenker; Zhijie Xiao
  6. Large-sample inference on spatial dependence By Peter Robinson
  7. A note on the estimation of asset pricing models using simple regression betas By Raymond Kan; Cesare Robotti
  8. A Bayesian mixed logit-probit model for multinomial choice By Martin Burda; Matthew Harding; Jerry Hausman
  9. Instrumental variable models for discrete outcomes By Andrew Chesher
  10. The (mis)specification of discrete duration models with unobserved heterogeneity: a Monte Carlo study By Concetta Rondinelli; Cheti Nicoletti
  11. Quadratic Variation by Markov Chains By Peter Reinhard Hansen; Guillaume Horel
  12. Using Backward Means to Eliminate Individual Effects from Dynamic Panels By G. EVERAERT
  13. Modelling intra-daily volatility by functional data analysis: an empirical application to the spanish stock market By Kenedy Alva; Juan Romo; Esther Ruiz
  14. A modified Kolmogorov-Smirnov test for normality By Drezner, Zvi; Turel, Ofir; Zerom, Dawit
  15. The role of Skorokhod space in the development of the econometric analysis of time series By Mc CRORIE, J. Roderick
  16. Alternative approaches to evaluation in empirical microeconomics By Richard Blundell; Monica Costa Dias
  17. Estimating autocorrelations in the presence of deterministic trends By Wang, Shin-Huei; Hafner, Christian
  18. Generalized power method for sparse principal component analysis By Journée, Michel; Nesterov, Yurii; Richtarik, Peter; Sepulchre, Rodolphe
  19. Estimation of Causal Effects in Experiments with Multiple Sources of Noncompliance By John Engberg; Dennis Epple; Jason Imbrogno; Holger Sieg; Ron Zimmer
  20. INDIRECT SAMPLING IN THE CONTEXT OF DUAL FRAME SURVEYS By Manuela Maia; Paula Vicente
  21. Identification of Lagged Duration Dependence in Multiple Spells Competing Risks Models By Guillaume, HORNY; Matteo, PICCHIO
  22. Performance of Various Estimators for Censored Response Models with Endogenous Regressors By Changhui Kang; Myoung-jae Lee
  23. On the Performance of Dual System Estimators of Population Size: A Simulation Study By Mauricio Sadinle
  24. State dependence in work-related training participation among British employees: A comparison of different random effects probit estimators. By Panos, Sousounis
  25. Computing the Accuracy of Complex Non-Random Sampling Methods: The Case of the Bank of Canada's Business Outlook Survey By Daniel de Munnik; David Dupuis; Mark Illing
  26. Inverting Bernoulli's theorem: the original sin By DE SCHEEMAEKERE, Xavier; SZAFARZ, Ariane

  1. By: Cheng, Yebin; De Gooijer, Jan; Zerom, Dawit
    Abstract: In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). With the aim of reducing the variance of the first estimator, a second estimator is defined via sequential fitting of univariate local polynomial quantile smoothing for each additive component, with the other additive components replaced by the corresponding estimates from the first estimator. The second estimator achieves oracle efficiency in the sense that each estimated additive component has the same variance as in the case where all other additive components are known. Asymptotic properties are derived for both estimators under dependent processes that are strictly stationary and absolutely regular. We also provide a demonstrative empirical application of additive quantile models to ambulance travel times.
    Keywords: Additive models; Asymptotic properties; Dependent data; Internalized kernel smoothing; Local polynomial; Oracle efficiency
    JEL: C14 C01
    Date: 2009–03–14
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:14388&r=ecm
  2. By: Konstantinos Fokianos (Department of Mathematics & Statistics, University of Cyprus); Anders Rahbek (Department of Economics, University of Copenhagen and CREATES); Dag Tjøstheim (Department of Mathematics, University of Bergen)
    Abstract: This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional variance, making an interpretation as an integer valued GARCH process possible. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model for time series is considered. Under geometric ergodicity the maximum likelihood estimators of the parameters are shown to be asymptotically Gaussian in the linear model. In addition we provide a consistent estimator of their asymptotic covariance matrix. Our approach to verifying geometric ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen to be arbitrarily small, the differences between the perturbed and non-perturbed versions vanish as far as the asymptotic distribution of the parameter estimates is concerned.
    Keywords: asymptotic theory, count data, generalized linear models, geometric ergodicity, integer GARCH, likelihood, noncanonical link function, observation driven models, Poisson regression, φ-irreducibility.
    JEL: C51 C22
    Date: 2009–03–24
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-12&r=ecm
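    A minimal sketch (not the authors' code; parameter values are illustrative) of the linear Poisson autoregression described above: the conditional mean follows lambda_t = d + a*lambda_{t-1} + b*y_{t-1} with y_t conditionally Poisson, which gives the integer-valued GARCH interpretation. The code simulates the model and recovers the parameters by conditional maximum likelihood.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      rng = np.random.default_rng(0)

      def simulate(d, a, b, T, lam0=1.0):
          # y_t | past ~ Poisson(lambda_t), lambda_t = d + a*lambda_{t-1} + b*y_{t-1}
          lam, y = np.empty(T), np.empty(T, dtype=int)
          prev_lam, prev_y = lam0, 0
          for t in range(T):
              lam[t] = d + a * prev_lam + b * prev_y
              y[t] = rng.poisson(lam[t])
              prev_lam, prev_y = lam[t], y[t]
          return y

      def neg_loglik(theta, y):
          # conditional Poisson log-likelihood with the recursion started at the sample mean
          d, a, b = theta
          prev_lam, prev_y = y.mean(), 0
          ll = 0.0
          for t in range(len(y)):
              lam = d + a * prev_lam + b * prev_y
              ll += y[t] * np.log(lam) - lam - gammaln(y[t] + 1)
              prev_lam, prev_y = lam, y[t]
          return -ll

      y = simulate(d=0.5, a=0.4, b=0.3, T=2000)
      res = minimize(neg_loglik, x0=[1.0, 0.2, 0.2], args=(y,),
                     bounds=[(1e-6, None), (0.0, 0.99), (0.0, 0.99)])
      print(res.x)  # should be close to (0.5, 0.4, 0.3)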
  3. By: Victor Chernozhukov (Institute for Fiscal Studies and Massachusetts Institute of Technology); Ivan Fernandez-Val; Jinyong Hahn; Whitney Newey (Institute for Fiscal Studies and Massachusetts Institute of Technology)
    Abstract: Please Note: This is a substantial revision of "Identification and estimation of marginal effects in nonlinear panel models", CWP25/08. This paper gives identification and estimation results for marginal effects in nonlinear panel models. We find that linear fixed effects estimators are not consistent, due in part to marginal effects not being identified. We derive bounds for marginal effects and show that they can tighten rapidly as the number of time series observations grows. We also show in numerical calculations that the bounds may be very tight for small numbers of observations, suggesting they may be useful in practice. We propose two novel inference methods for parameters defined as solutions to linear and nonlinear programs such as marginal effects in multinomial choice models. We show that these methods produce uniformly valid confidence regions in large samples. We give an empirical illustration.
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:05/09&r=ecm
  4. By: Les Oxley (University of Canterbury); Marco Reale; Carl Scarrott; Xin Zhao
    Abstract: Extreme value theory is widely used in financial applications such as risk analysis, forecasting and pricing models. One of the major difficulties in applications to finance and economics is that the assumption of independence of time series observations is generally not satisfied, so that dependent extremes may not necessarily be in the domain of attraction of the classical generalised extreme value distribution. This study examines a conditional extreme value distribution with the added specification that the extreme values (maxima or minima) follow a conditional autoregressive heteroscedasticity process. The dependence is modelled by allowing the location and scale parameters of the extreme value distribution to vary with time. The resulting combined model, GEV-GARCH, is developed by implementing the GARCH volatility mechanism in these extreme value model parameters. Bayesian inference is used for the estimation of parameters, and posterior inference is available through the Markov chain Monte Carlo (MCMC) method. The model is first applied to simulated data to verify model stability and the reliability of the parameter estimation method. Then real stock returns are used to assess the appropriateness of the model in practice. A comparison is made between the GEV-GARCH and traditional GARCH models. Both produce similar conditional volatility estimates; however, the GEV-GARCH model can capture and explain extreme quantiles better than the GARCH model because of more reliable extrapolation of the tail behaviour.
    Keywords: Extreme value distribution, dependency, Bayesian, MCMC, Return quantile
    JEL: C11 G12
    Date: 2009–04–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:09/05&r=ecm
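    A minimal illustrative sketch of the mechanism described above, under assumed dynamics: the GEV scale parameter is given a GARCH(1,1)-type recursion (location and shape are held fixed for brevity), and parameters are estimated by maximum likelihood rather than the paper's Bayesian MCMC. The recursion and starting values are hypothetical, not the paper's specification.

      import numpy as np
      from scipy.stats import genextreme
      from scipy.optimize import minimize

      def neg_loglik(theta, y):
          mu, omega, alpha, beta, xi = theta
          sig2 = np.var(y)                              # initial scale^2
          ll = 0.0
          for t in range(len(y)):
              # SciPy's shape parameter equals -xi in the usual GEV notation
              lp = genextreme.logpdf(y[t], -xi, loc=mu, scale=np.sqrt(sig2))
              ll += lp if np.isfinite(lp) else -1e6     # guard against support violations
              sig2 = omega + alpha * (y[t] - mu) ** 2 + beta * sig2   # assumed GARCH-type recursion
          return -ll

      # y would be a series of block extremes (e.g. daily maxima of returns);
      # here it is simply drawn from a static GEV for illustration.
      y = genextreme.rvs(-0.1, loc=0.0, scale=1.0, size=500, random_state=0)
      res = minimize(neg_loglik, x0=[0.0, 0.1, 0.05, 0.8, 0.1], args=(y,),
                     bounds=[(None, None), (1e-6, None), (0.0, 1.0), (0.0, 1.0), (-0.5, 0.5)])
      print(res.x)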
  5. By: Xiaohong Chen (Institute for Fiscal Studies and Yale); Roger Koenker (Institute for Fiscal Studies and University of Illinois); Zhijie Xiao
    Abstract: Parametric copulas are shown to be attractive devices for specifying quantile autoregressive models for nonlinear time-series. Estimation of local, quantile-specific copula-based time series models offers some salient advantages over classical global parametric approaches. Consistency and asymptotic normality of the proposed quantile estimators are established under mild conditions, allowing for global misspecification of parametric copulas and marginals, and without assuming any mixing rate condition. These results lead to a general framework for inference and model specification testing of extreme conditional value-at-risk for financial time series data.
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:27/08&r=ecm
  6. By: Peter Robinson (Institute for Fiscal Studies and London School of Economics)
    Abstract: We consider cross-sectional data that exhibit no spatial correlation, but are feared to be spatially dependent. We demonstrate that a spatial version of the stochastic volatility model of financial econometrics, entailing a form of spatial autoregression, can explain such behaviour. The parameters are estimated by pseudo Gaussian maximum likelihood based on log-transformed squares, and consistency and asymptotic normality are established. Asymptotically valid tests for spatial independence are developed.
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:29/08&r=ecm
  7. By: Raymond Kan; Cesare Robotti
    Abstract: Since Black, Jensen, and Scholes (1972) and Fama and MacBeth (1973), the two-pass cross-sectional regression (CSR) methodology has become the most popular tool for estimating and testing beta asset pricing models. In this paper, we focus on the case in which simple regression betas are used as regressors in the second-pass CSR. Under general distributional assumptions, we derive asymptotic standard errors of the risk premia estimates that are robust to model misspecification. When testing whether the beta risk of a given factor is priced, our misspecification robust standard error and the Jagannathan and Wang (1998) standard error (which is derived under the correctly specified model) can lead to different conclusions.
    Keywords: two-pass cross-sectional regressions, risk premia, model misspecification, simple regression betas, multivariate betas
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:fip:fedawp:2009-12&r=ecm
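    A minimal sketch of the two-pass CSR setup referred to above, using simple regression betas in the first pass (each asset regressed on one factor at a time) and an OLS cross-sectional regression of average returns on those betas in the second pass. The data are simulated, and the misspecification-robust standard errors derived in the paper are not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      T, N, K = 600, 25, 2                     # months, test assets, factors
      factors = rng.normal(size=(T, K))        # placeholder factor realizations
      true_beta = rng.normal(1.0, 0.3, size=(N, K))
      returns = 0.5 + factors @ true_beta.T + rng.normal(scale=2.0, size=(T, N))

      # Pass 1: simple regression betas -- regress each asset on one factor at a time
      betas = np.empty((N, K))
      for k in range(K):
          f = factors[:, k] - factors[:, k].mean()
          betas[:, k] = f @ (returns - returns.mean(0)) / (f @ f)

      # Pass 2: cross-sectional OLS of average returns on a constant and the betas
      X = np.column_stack([np.ones(N), betas])
      avg_ret = returns.mean(axis=0)
      gamma = np.linalg.lstsq(X, avg_ret, rcond=None)[0]
      print("zero-beta rate and risk premia estimates:", gamma)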
  8. By: Martin Burda; Matthew Harding (Institute for Fiscal Studies and Stanford University); Jerry Hausman (Institute for Fiscal Studies and Massachusetts Institute of Technology)
    Abstract: In this paper we introduce a new flexible mixed model for multinomial discrete choice where the key individual- and alternative-specific parameters of interest are allowed to follow an assumption-free nonparametric density specification while other alternative-specific coefficients are assumed to be drawn from a multivariate normal distribution which eliminates the independence of irrelevant alternatives assumption at the individual level. A hierarchical specification of our model allows us to break down a complex data structure into a set of submodels with the desired features that are naturally assembled in the original system. We estimate the model using a Bayesian Markov Chain Monte Carlo technique with a multivariate Dirichlet Process (DP) prior on the coefficients with nonparametrically estimated density. We employ a "latent class" sampling algorithm which is applicable to a general class of models including non-conjugate DP base priors. The model is applied to supermarket choices of a panel of Houston households whose shopping behavior was observed over a 24-month period in years 2004-2005. We estimate the nonparametric density of two key variables of interest: the price of a basket of goods based on scanner data, and driving distance to the supermarket based on their respective locations. Our semi-parametric approach allows us to identify a complex multi-modal preference distribution which distinguishes between inframarginal consumers and consumers who strongly value either lower prices or shopping convenience.
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:23/08&r=ecm
  9. By: Andrew Chesher (Institute for Fiscal Studies and University College London)
    Abstract: Please note: This is a substantial revision of "Endogeneity and Discrete Outcomes", CWP 05/07. Single equation instrumental variable models for discrete outcomes are shown to be set, not point, identifying for the structural functions that deliver the values of the discrete outcome. Identified sets are derived for a general nonparametric model and sharp set identification is demonstrated. Point identification is typically not achieved by imposing parametric restrictions. The extent of an identified set varies with the strength and support of instruments and typically shrinks as the support of a discrete outcome grows. The paper extends the analysis of structural quantile functions with endogenous arguments to cases in which there are discrete outcomes.
    Date: 2008–11
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:30/08&r=ecm
  10. By: Concetta Rondinelli (Bank of Italy); Cheti Nicoletti (Institute for Social and Economic Research (ISER))
    Abstract: Empirical researchers usually prefer statistical models that can be easily estimated using standard software packages. One such model is the sequential binary model with or without normal random effects; such models can be adopted to estimate discrete duration models with unobserved heterogeneity. But ease of estimation may come at a cost. In this paper we conduct a Monte Carlo simulation to evaluate the consequences of omitting or misspecifying the unobserved heterogeneity distribution in single-spell discrete duration models.
    Keywords: discrete duration models, unobserved heterogeneity, Monte Carlo simulations
    JEL: C23 C25
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_705_09&r=ecm
  11. By: Peter Reinhard Hansen (Stanford University and CREATES); Guillaume Horel (Merrill Lynch, New York)
    Abstract: We introduce a novel estimator of the quadratic variation that is based on the theory of Markov chains. The estimator is motivated by some general results concerning filtering contaminated semimartingales. Specifically, we show that filtering can in principle remove the effects of market microstructure noise in a general framework where little is assumed about the noise. For the practical implementation, we adopt the discrete Markov chain model that is well suited for the analysis of financial high-frequency prices. The Markov chain framework facilitates simple expressions and elegant analytical results. The proposed estimator is consistent with a Gaussian limit distribution and we study its properties in simulations and an empirical application.
    Keywords: Markov chain, Filtering Contaminated Semimartingale, Quadratic Variation, Integrated Variance, Realized Variance, High Frequency Data
    JEL: C10 C22 C80
    Date: 2009–03–24
    URL: http://d.repec.org/n?u=RePEc:aah:create:2009-13&r=ecm
  12. By: G. EVERAERT
    Abstract: The within-groups estimator is inconsistent in dynamic panels with fixed T since the sample mean used to eliminate the individual effects from the lagged dependent variable is correlated with the error term. This paper suggests eliminating individual effects from an AR(1) panel using backward means as an alternative to sample means. Using orthogonal deviations of the lagged dependent variable from its backward mean yields an estimator that is still inconsistent for fixed T, but the inconsistency is shown to be negligibly small. A Monte Carlo simulation shows that this alternative estimator has superior small sample properties compared to conventional fixed effects, bias-corrected fixed effects and GMM estimators. Interestingly, it is also consistent for fixed T in the specific cases where (i) T = 2, (ii) the AR parameter is 0 or 1, or (iii) the variance of the individual effects is zero.
    Keywords: Dynamic panel, Individual effects, Backward mean, Orthogonal deviations, Monte Carlo simulation
    JEL: C15 C32
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:rug:rugwps:09/553&r=ecm
  13. By: Kenedy Alva; Juan Romo; Esther Ruiz
    Abstract: We propose using recent functional data analysis techniques to study intra-daily volatility. In particular, volatility extraction is based on functional principal components and volatility prediction on functional AR(1) models. The estimation of the corresponding parameters is carried out using the functional equivalent of OLS. We apply these ideas to an empirical analysis of IBEX35 returns observed every five minutes. We also analyze the performance of the proposed functional AR(1) model in predicting the volatility over a given day, given the information on intra-daily volatility from previous days, for the firms in the IBEX35 Madrid stock index.
    Keywords: Market microstructure, Ultra-high frequency data, Functional data analysis, Functional AR(1) model
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws092809&r=ecm
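    A minimal sketch of the general approach described above, under an assumed design: each day's intra-daily volatility proxy is treated as one curve, functional principal components are approximated by an SVD of the centred day-by-interval matrix, and an AR(1) fitted to the leading score predicts the next day's curve. The volatility proxy and dimensions are placeholders, not the paper's IBEX35 data.

      import numpy as np

      rng = np.random.default_rng(0)
      n_days, n_intervals = 250, 102           # e.g. 5-minute intervals within a trading day
      # Placeholder volatility proxy: absolute returns per interval with a U-shaped daily pattern
      vol = np.abs(rng.normal(size=(n_days, n_intervals))) * \
            (1 + 0.5 * np.sin(np.linspace(0, np.pi, n_intervals)))

      mean_curve = vol.mean(axis=0)
      centred = vol - mean_curve
      U, s, Vt = np.linalg.svd(centred, full_matrices=False)
      scores = centred @ Vt[0]                 # leading functional PC score, one per day

      # AR(1) on the leading scores by OLS
      x, y = scores[:-1], scores[1:]
      phi = (x @ y) / (x @ x)
      next_score = phi * scores[-1]
      predicted_curve = mean_curve + next_score * Vt[0]
      print(phi, predicted_curve[:5])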
  14. By: Drezner, Zvi; Turel, Ofir; Zerom, Dawit
    Abstract: In this paper we propose an improvement of the Kolmogorov-Smirnov test for normality. In the current implementation of the Kolmogorov-Smirnov test, a sample is compared with a normal distribution whose parameters are the sample mean and the sample variance. We propose instead to select the mean and variance of the normal distribution that provide the closest fit to the data. This is like shifting and stretching the reference normal distribution so that it fits the data in the best possible way. If this shifting and stretching does not lead to an acceptable fit, the data are probably not normal. We also introduce a fast, easily implementable algorithm for the proposed test. A study of the power of the proposed test indicates that it is able to discriminate between the normal distribution and distributions such as the uniform, bi-modal, beta, exponential and log-normal that differ in shape, but has relatively lower power against the Student t-distribution, which is similar in shape to the normal distribution. In model settings, the former distinction is typically more important to make than the latter. We demonstrate the practical significance of the proposed test with several simulated examples.
    Keywords: Closest fit; Kolmogorov-Smirnov; Normal distribution
    JEL: C01
    Date: 2008–10–22
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:14385&r=ecm
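    A minimal sketch of the idea described above: instead of plugging the sample mean and standard deviation into the Kolmogorov-Smirnov statistic, search numerically for the normal parameters that minimise it. A generic optimiser stands in for the authors' fast algorithm, and the critical values of the resulting statistic are not those of the standard KS tables.

      import numpy as np
      from scipy import stats, optimize

      def modified_ks_statistic(data):
          data = np.asarray(data, dtype=float)

          def ks_stat(params):
              mu, sigma = params
              if sigma <= 0:
                  return np.inf
              return stats.kstest(data, 'norm', args=(mu, sigma)).statistic

          x0 = [data.mean(), data.std(ddof=1)]          # classical plug-in starting point
          res = optimize.minimize(ks_stat, x0, method='Nelder-Mead')
          return res.fun, res.x                          # best-fit KS distance, (mu, sigma)

      sample = np.random.default_rng(1).normal(5.0, 2.0, size=200)
      d, (mu, sigma) = modified_ks_statistic(sample)
      print(d, mu, sigma)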
  15. By: Mc CRORIE, J. Roderick
    Abstract: This paper discusses the fundamental role played by Skorokhod space, through its underpinning of functional central limit theory, in the development of the paradigm of unit roots and co-integration. This paradigm has fundamentally affected the way economists approach economic time series as was recognized by the award of the Nobel Memorial Prize in Economic Sciences to Robert F. Engle and Clive W.J. Granger in 2003. Here, we focus on how P.C.B. Phillips and others used the Skorokhod topology to establish a limiting distribution theory that underpinned and facilitated the development of methods of estimation and testing of single equations and systems of equations with possibly integrated regressors. This approach has spawned a large body of work that can be traced back to Skorokhod's conception of fifty years ago. Much of this work is surprisingly confined to the econometrics literature.
    Keywords: Skorokhod space, functional central limit theorems, non-stationary time series, unit roots and co-integration, Wiener functionals, econometrics.
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2008059&r=ecm
  16. By: Richard Blundell (Institute for Fiscal Studies and University College London); Monica Costa Dias (Institute for Fiscal Studies)
    Abstract: This paper reviews a range of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching methods, instrumental variables, discontinuity design and control functions. It discusses the identification of both the traditionally used average parameters and more complex distributional parameters. In each case, the necessary assumptions and the data requirements are considered. The adequacy of each approach is discussed drawing on the empirical evidence from the education and labor market policy evaluation literature. We also develop an education evaluation model which we use to carry through the discussion of each alternative approach. A full set of STATA datasets is provided free online (http://www.ifs.org.uk/publications.php?publication_id=4326) containing Monte-Carlo replications of the various specifications of the education evaluation model. There is also a full set of STATA .do files for each of the estimation approaches described in the paper. The .do files can be used together with the datasets to reproduce all the results in the paper.
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:26/08&r=ecm
  17. By: Wang, Shin-Huei (Université catholique de Louvain (UCL). Center for Operations Research and Econometrics (CORE)); Hafner, Christian (Université catholique de Louvain (UCL). Center for Operations Research and Econometrics (CORE))
    Abstract: This paper considers the impact of ordinary least squares (OLS) detrending and first difference (FD) detrending on autocorrelation estimation in the presence of long memory and deterministic trends. We show that FD detrending results in inconsistent autocorrelation estimates when the error term is stationary. Thus, FD detrending should not be employed for autocorrelation estimation of the detrended series when constructing, e.g., portmanteau-type tests. In an empirical application to trading volume in Dow Jones stocks, we show that for some stocks OLS and FD detrending result in substantial differences in ACF estimates.
    Keywords: autocorrelations, OLS, first difference detrending, long memory.
    JEL: C22
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2008073&r=ecm
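    A minimal sketch, with simulated data rather than the paper's long-memory setup, contrasting the sample autocorrelations of a trend-stationary series after OLS detrending with those obtained after first-difference detrending.

      import numpy as np

      def acf(x, nlags=10):
          x = x - x.mean()
          denom = x @ x
          return np.array([x[k:] @ x[:len(x) - k] / denom for k in range(1, nlags + 1)])

      rng = np.random.default_rng(0)
      T = 1000
      t = np.arange(T)
      # AR(1) errors around a deterministic linear trend
      e = np.empty(T)
      e[0] = rng.normal()
      for i in range(1, T):
          e[i] = 0.7 * e[i - 1] + rng.normal()
      y = 0.5 + 0.01 * t + e

      # (i) OLS detrending: residuals from a regression on a constant and a linear trend
      X = np.column_stack([np.ones(T), t])
      resid_ols = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
      # (ii) first-difference detrending
      resid_fd = np.diff(y)

      print("ACF after OLS detrending:", acf(resid_ols, 5).round(3))
      print("ACF after FD detrending: ", acf(resid_fd, 5).round(3))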
  18. By: Journée, Michel; Nesterov, Yurii (Université catholique de Louvain (UCL). Center for Operations Research and Econometrics (CORE)); Richtarik, Peter (Université catholique de Louvain (UCL). Center for Operations Research and Econometrics (CORE)); Sepulchre, Rodolphe
    Abstract: In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has best convergence properties in the case when either the objective function or the feasible set are strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed.
    Keywords: sparse PCA, power method, gradient ascent, strongly convex sets, block algorithms.
    Date: 2008–11
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2008070&r=ecm
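    A minimal, illustrative sketch in the spirit of the single-unit formulation described above: a power iteration with soft thresholding that extracts one sparse loading vector from a data matrix. It is a simplification, not the authors' GPower algorithm, and the penalty level gamma is a hypothetical tuning parameter.

      import numpy as np

      def sparse_pc1(A, gamma=2.0, n_iter=100):
          """A: n x p data matrix. Returns a sparse loading vector of length p."""
          U, _, _ = np.linalg.svd(A, full_matrices=False)
          x = U[:, 0]                                         # warm start: dense first PC
          z = np.zeros(A.shape[1])
          for _ in range(n_iter):
              s = A.T @ x                                     # column scores a_j' x
              z = np.sign(s) * np.maximum(np.abs(s) - gamma, 0.0)   # soft thresholding
              if not z.any():                                 # gamma too large: all loadings zeroed
                  break
              x = A @ z
              x /= np.linalg.norm(x)
          return z / np.linalg.norm(z) if z.any() else z

      rng = np.random.default_rng(1)
      A = rng.normal(size=(100, 20))
      A[:, :3] += rng.normal(size=(100, 1)) * 3.0             # a common factor on the first 3 columns
      print(np.round(sparse_pc1(A, gamma=2.0), 2))            # nonzero loadings concentrate on those columns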
  19. By: John Engberg; Dennis Epple; Jason Imbrogno; Holger Sieg; Ron Zimmer
    Abstract: The purpose of this paper is to study identification and estimation of causal effects in experiments with multiple sources of noncompliance. This research design arises in many applications in education when access to oversubscribed programs is partially determined by randomization. Eligible households decide whether or not to comply with the intended treatment. The paper treats program participation as the outcome of a decision process with five latent household types. We show that the parameters of the underlying model of program participation are identified. Our proofs of identification are constructive and can be used to design a GMM estimator for all parameters of interest. We apply our new methods to study the effectiveness of magnet programs in a large urban school district. Our findings show that magnet programs help the district to attract and retain students from households that are at risk of leaving the district. These households have higher incomes, are more educated, and have children that score higher on standardized tests than households that stay in district regardless of the outcome of the lottery.
    JEL: C21 H75 I21
    Date: 2009–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14842&r=ecm
  20. By: Manuela Maia (Faculdade de Economia e Gestão, Universidade Católica Portuguesa (Porto)); Paula Vicente (ISCTE Business School)
    Abstract: Under-coverage is one of the most common problems of sampling frames. To reduce the impact of coverage error on survey estimates, several frames can be combined in order to achieve complete (or nearly complete) coverage of the target population. Multiple frame estimators have been developed to be used in the context of multiple frame surveys. Sampling frames may also overlap, which is the case when a single unit of the sampling frame is related to more than one element of the target population. Indirect sampling (Lavallée, 1995) is an alternative approach to classical sampling theory for dealing with the impact of overlapping sampling frames on survey estimates. Not infrequently, a survey may need more than one sampling frame to improve coverage while, at the same time, the frames overlap. In this paper a new class of estimators is presented, resulting from merging multiple frame estimators (only the particular case of dual frames is presented) with indirect sampling estimators, so as to bring together in a single estimator the effect of several frames on survey estimates.
    Keywords: Indirect Sampling, Generalized Weight Share Method, Dual Frame Surveys
    Date: 2009–03
    URL: http://d.repec.org/n?u=RePEc:cap:mpaper:082009&r=ecm
  21. By: Guillaume, HORNY; Matteo, PICCHIO (UNIVERSITE CATHOLIQUE DE LOUVAIN, Institut de Recherches Economiques et Sociales (IRES))
    Abstract: We show non-parametric identification of lagged duration dependence in mixed proportional hazard models for duration data, in the presence of competing risks and consecutive spells. We extend the results to the case in which the data provide repeated realizations of the consecutive-spells competing-risks structure for each subject.
    Keywords: lagged duration dependence, competing risks, MPH models, identification
    JEL: C14 C41 J64
    Date: 2009–02–05
    URL: http://d.repec.org/n?u=RePEc:ctl:louvir:2009001&r=ecm
  22. By: Changhui Kang (Department of Economics, Chung-Ang University, Seoul, South Korea); Myoung-jae Lee (Department of Economics, Korea University, Seoul, South Korea)
    Keywords: censored response, tobit, endogenous regressors, instruments
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:iek:wpaper:0905&r=ecm
  23. By: Mauricio Sadinle
    Abstract: A simulation study is carried out in order to compare the performance of the Lincoln–Petersen and Chapman estimators for single capture–recapture or dual system estimation when the sizes of the samples or record systems are not fixed by the researcher. Performance is explored through both bias and variability. Unless both record probabilities and population size are very small, the Chapman estimator performs better than the Lincoln–Petersen estimator. This is due to the lower variability of the Chapman estimator and because it is nearly unbiased for a wider set of population sizes and record probabilities than the Lincoln–Petersen estimator. Thus, for those kinds of studies where the record probability is high for at least one record system, such as census correction studies, the Chapman estimator should be preferred.
    Date: 2008–12–30
    URL: http://d.repec.org/n?u=RePEc:col:000150:005377&r=ecm
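    A minimal sketch of the comparison described above: two independent capture occasions are simulated with inclusion probabilities p1 and p2, and the Lincoln–Petersen and Chapman estimators of the population size N are compared on bias and variability. The parameter values are illustrative, not those of the paper's simulation design.

      import numpy as np

      rng = np.random.default_rng(0)
      N, p1, p2, reps = 1000, 0.3, 0.4, 5000

      lp, chap = [], []
      for _ in range(reps):
          in1 = rng.random(N) < p1                    # caught in record system 1
          in2 = rng.random(N) < p2                    # caught in record system 2
          n1, n2, m = in1.sum(), in2.sum(), (in1 & in2).sum()
          if m > 0:                                   # Lincoln-Petersen is undefined when m = 0
              lp.append(n1 * n2 / m)
          chap.append((n1 + 1) * (n2 + 1) / (m + 1) - 1)

      print("Lincoln-Petersen: mean %.1f  sd %.1f" % (np.mean(lp), np.std(lp)))
      print("Chapman:          mean %.1f  sd %.1f" % (np.mean(chap), np.std(chap)))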
  24. By: Panos, Sousounis
    Abstract: This paper compares three different estimation approaches for the random effects dynamic panel data model, under the probit assumption on the distribution of the errors. These three approaches are attributed to Heckman (1981), Wooldridge (2005) and Orme (2001). The results are then compared with those obtained from generalised method of moments (GMM) estimators of a dynamic linear probability model, namely the Arellano and Bond (1991) and Blundell and Bond (1998) estimators. A model of work-related training participation for British employees is estimated using individual level data covering the period 1991-1997 from the British Household Panel Survey. This evaluation adds to the existing body of empirical evidence on the performance of these estimators using real data, which supplements the conclusions from simulation studies. The results suggest that, for the dynamic random effects probit model, no single estimator is clearly superior to the others. GMM estimation of a dynamic LPM of training participation suggests that the random effects estimators are not sensitive to the distributional assumptions on the unobserved effect.
    Keywords: state dependence; training; dynamic panel data models
    JEL: C23 C25
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:14261&r=ecm
  25. By: Daniel de Munnik; David Dupuis; Mark Illing
    Abstract: A number of central banks publish their own business conditions survey based on non-random sampling methods. The results of these surveys influence monetary policy decisions and thus affect expectations in financial markets. To date, however, no one has computed the statistical accuracy of these surveys because their respective non-random sampling method renders this assessment non-trivial. This paper describes a methodology for modeling complex non-random sampling behaviour, and computing relevant measures of statistical confidence, based on a given survey's historical sample selection practice. We apply this framework to the Bank of Canada's Business Outlook Survey by describing the sampling method in terms of historical practices and Bayesian probabilities. This allows us to replicate the firm selection process using Monte Carlo simulations on a comprehensive micro-dataset of Canadian firms. We find, under certain assumptions, no evidence that the Bank's firm selection process results in biased estimates and/or wider confidence intervals.
    Keywords: Econometric and statistical methods; Central bank research; Regional economic developments
    JEL: C42 C81 C90
    Date: 2009
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:09-10&r=ecm
  26. By: DE SCHEEMAEKERE, Xavier; SZAFARZ, Ariane
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:ulb:ecoulb:info:hdl:2013/14571&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.