
New Economics Papers on Econometrics 
By:  Abdou Kâ Diongue (UFR SAT, Université Gaston Berger de Saint-Louis); Dominique Guegan (CES, Centre d'économie de la Sorbonne, CNRS UMR 8174, Université Panthéon-Sorbonne Paris I; Paris School of Economics); Rodney C. Wolff (School of Mathematical Sciences, Queensland University of Technology) 
Abstract:  In this paper, we discuss the class of Bilinear GARCH (BL-GARCH) models, which are capable of capturing simultaneously two key properties of nonlinear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; thus we examine the BL-GARCH model in a general setting under some non-Normal distributions. We investigate some probabilistic properties of this model, and we propose and implement a maximum likelihood estimation (MLE) methodology. To evaluate the small-sample performance of this method for the various models, a Monte Carlo study is conducted. Finally, within-sample estimation properties are studied using S&P 500 daily returns, in which the features of interest manifest as volatility clustering and leverage effects. 
Keywords:  BL-GARCH process, elliptical distribution, leverage effects, maximum likelihood, Monte Carlo method, volatility clustering. 
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:halshs00270719_v1&r=ecm 
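To make the dynamics concrete, here is a minimal Python sketch of a BL-GARCH(1,1) recursion with Gaussian innovations. The recursion h_t = ω + α ε²_{t-1} + β h_{t-1} + γ ε_{t-1} √h_{t-1} and all parameter values are illustrative assumptions, not the authors' exact specification or estimates; a negative γ produces the leverage effect the abstract mentions.

```python
import math
import random

def simulate_blgarch(n, omega=0.05, alpha=0.1, beta=0.85, gamma=-0.1, seed=0):
    """Simulate returns from a BL-GARCH(1,1)-type recursion:

        h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1}
              + gamma*eps_{t-1}*sqrt(h_{t-1})

    The bilinear gamma term lets negative shocks raise volatility more
    than positive ones (leverage); alpha and beta give clustering.
    """
    rng = random.Random(seed)
    h = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    eps_prev, returns = 0.0, []
    for _ in range(n):
        h = (omega + alpha * eps_prev ** 2 + beta * h
             + gamma * eps_prev * math.sqrt(max(h, 1e-12)))
        h = max(h, 1e-12)                 # crude guard against negative variance
        eps_prev = math.sqrt(h) * rng.gauss(0.0, 1.0)
        returns.append(eps_prev)
    return returns

r = simulate_blgarch(1000)
print(len(r))
```

A path simulated this way exhibits the quiet/turbulent episodes that a Gaussian i.i.d. model cannot produce, which is what the MLE and Monte Carlo study in the paper are designed to handle.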
By:  Cizek, P. (Tilburg University, Center for Economic Research) 
Abstract:  Many estimation methods for truncated and censored regression models, such as maximum likelihood and symmetrically censored least squares (SCLS), are sensitive to outliers and data contamination, as we document. We therefore propose a semiparametric general trimmed estimator (GTE) of truncated and censored regression, which is highly robust but relatively imprecise. To improve its performance, we also propose data-adaptive and one-step trimmed estimators. We derive the robust and asymptotic properties of all proposed estimators and show that the one-step estimators (e.g., one-step SCLS) are as robust as GTE and are asymptotically equivalent to the original estimator (e.g., SCLS). The finite-sample properties of existing and proposed estimators are studied by means of Monte Carlo simulations. 
Keywords:  Asymptotic normality; censored regression; one-step estimation; robust estimation; trimming; truncated regression 
JEL:  C13 C14 C21 C24 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200834&r=ecm 
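The core idea of trimming, which the paper develops for censored and truncated regression, can be illustrated on a much simpler problem. The sketch below is a generic trimmed location estimator on contaminated data, not the authors' GTE; the trimming fraction and iteration scheme are illustrative assumptions.

```python
import random

def trimmed_mean_location(y, trim=0.1, iters=5):
    """Iteratively trimmed estimate of a location parameter: start from
    the median, then repeatedly average the (1 - 2*trim) fraction of
    observations closest to the current estimate.  Trimming bounds the
    influence any single outlier can exert."""
    est = sorted(y)[len(y) // 2]                 # robust starting value
    keep = int(round((1.0 - 2.0 * trim) * len(y)))
    for _ in range(iters):
        closest = sorted(y, key=lambda v: abs(v - est))[:keep]
        est = sum(closest) / len(closest)
    return est

rng = random.Random(1)
clean = [rng.gauss(0.0, 1.0) for _ in range(200)]
contaminated = clean + [50.0] * 10               # 5% gross outliers
naive = sum(contaminated) / len(contaminated)    # badly biased by outliers
robust = trimmed_mean_location(contaminated)     # essentially unaffected
print(naive, robust)
```

The contrast between `naive` and `robust` is the phenomenon the abstract documents for MLE and SCLS: a small contaminated fraction can move a non-robust estimator far from the truth, while the trimmed estimator stays near it.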
By:  Marc K. Francke (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam); Aart de Vos (Vrije Universiteit Amsterdam) 
Abstract:  State space models with nonstationary processes and fixed regression effects require a state vector with diffuse initial conditions. Different likelihood functions can be adopted for the estimation of parameters in time series models with diffuse initial conditions. In this paper we consider profile, diffuse and marginal likelihood functions. The marginal likelihood is defined as the likelihood function of a transformation of the data vector. The transformation is not unique. The diffuse likelihood is a marginal likelihood for a specific data transformation that may depend on parameters. Therefore, the diffuse likelihood cannot generally be used for parameter estimation. Our newly proposed marginal likelihood function is based on an orthonormal transformation that does not depend on parameters. Likelihood functions for state space models are evaluated using the Kalman filter. The diffuse Kalman filter is specifically designed for computing the diffuse likelihood function. We show that a modification of the diffuse Kalman filter is needed for the evaluation of our proposed marginal likelihood function. Diffuse and marginal likelihood functions have better small-sample properties than the profile likelihood function for the estimation of parameters in linear time series models. The results in our paper confirm the earlier findings and show that the diffuse likelihood function is not appropriate for a range of state space model specifications. 
Keywords:  Diffuse likelihood; Kalman filter; Marginal likelihood; Multivariate time series models; Profile likelihood 
JEL:  C13 C22 C32 
Date:  2008–04–14 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20080040&r=ecm 
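To see where a diffuse initial condition enters likelihood evaluation, here is a minimal Kalman-filter log-likelihood for a local level model. It approximates the diffuse prior with a large initial variance `kappa` and drops the first observation from the likelihood; this is a crude textbook stand-in, not the exact diffuse or marginal filter the paper develops, and the model and numbers are illustrative.

```python
import math

def local_level_loglik(y, sigma_eps2, sigma_eta2, kappa=1e7):
    """Gaussian log-likelihood of the local level model
        y_t = mu_t + eps_t,    mu_{t+1} = mu_t + eta_t,
    computed by the Kalman filter.  The diffuse initial condition on
    mu_1 is approximated by a large prior variance kappa, and the first
    (diffuse) observation is excluded from the likelihood sum."""
    a, p = 0.0, kappa              # initial state mean and variance
    ll = 0.0
    for t, obs in enumerate(y):
        f = p + sigma_eps2         # prediction-error variance
        v = obs - a                # one-step-ahead prediction error
        if t > 0:                  # skip the diffuse first observation
            ll += -0.5 * (math.log(2.0 * math.pi * f) + v * v / f)
        k = p / f                  # Kalman gain
        a = a + k * v              # filtered state mean
        p = p * (1.0 - k) + sigma_eta2  # predicted state variance
    return ll

y = [1.0, 1.2, 0.9, 1.1, 1.4, 1.3]
ll = local_level_loglik(y, sigma_eps2=0.1, sigma_eta2=0.05)
print(ll)
```

Maximizing such a function over the variance parameters is where the choice among profile, diffuse and marginal likelihoods matters; the paper's point is that the treatment of the diffuse elements changes which function is appropriate.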
By:  Patrick Bayer; Shakeeb Khan; Christopher Timmins 
Abstract:  This paper considers nonparametric identification and estimation of a generalized Roy model that includes a non-pecuniary component of utility associated with each choice alternative. Previous work has found that, without parametric restrictions or the availability of covariates, all of the useful content of a cross-sectional dataset is absorbed in a restrictive specification of Roy sorting behavior that imposes independence on wage draws. While this is true, we demonstrate that it is also possible to identify (under relatively innocuous assumptions and without the use of covariates) a common non-pecuniary component of utility associated with each choice alternative. We develop nonparametric estimators corresponding to two alternative assumptions under which we prove identification, derive asymptotic properties, and illustrate small sample properties with a series of Monte Carlo experiments. We demonstrate the usefulness of one of these estimators with an empirical application. Micro data from the 2000 Census are used to calculate the returns to a college education. If high-school and college graduates face different costs of migration, this would be reflected in different degrees of Roy-sorting-induced bias in their observed wage distributions. Correcting for this bias, the observed returns to a college degree are cut in half. 
JEL:  C1 C13 C14 J24 J3 J32 
Date:  2008–04 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:13949&r=ecm 
By:  T. W. Anderson (Department of Statistics and Department of Economics, Stanford University); Naoto Kunitomo (Faculty of Economics, University of Tokyo); Yukitoshi Matsushita (CIRJE, University of Tokyo) 
Abstract:  We consider the estimation of the coefficients of a linear structural equation in a simultaneous equation system when there are many instrumental variables. We derive some asymptotic properties of the limited information maximum likelihood (LIML) estimator when the number of instruments is large; some of these results are new and we relate them to results in some recent studies. We have found that the variance of the LIML estimator and its modifications often attain the asymptotic lower bound when the number of instruments is large and the disturbance terms are not necessarily normally distributed, that is, for the microeconometric models with many instruments. 
Date:  2008–01 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2008cf542&r=ecm 
By:  Dominique Guegan (CES, Centre d'économie de la Sorbonne, CNRS UMR 8174, Université Panthéon-Sorbonne Paris I; Paris School of Economics) 
Abstract:  In this paper we deal with the problem of nonstationarity encountered in many data sets, mainly in the financial and economic domains, arising from the presence of multiple seasonalities, jumps, volatility, distortion, aggregation, etc. Nonstationarity induces spurious behavior in estimated statistics as soon as we work with finite samples. We illustrate this fact using Markov switching processes, Stop-break models and SETAR processes. Thus, working with a theoretical framework based on the existence of an invariant measure for the whole sample is not satisfactory. Empirically, alternative strategies have been developed that introduce dynamics into the modelling, mainly through the parameters, with the use of rolling windows. A specific framework has not yet been proposed to study such non-invariant data sets, and the question is difficult. Here we open a discussion of this topic, proposing the concept of meta-distribution, which can be used to improve risk management strategies or forecasts. 
Keywords:  Nonstationarity, switching processes, SETAR processes, jumps, forecast, risk management, copula, probability distribution function. 
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:halshs00270708_v1&r=ecm 
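One of the abstract's illustrations, a Markov switching process, is easy to simulate: a two-state Markov chain shifts the mean of the series, so statistics computed on different finite windows need not agree with each other or with any whole-sample "invariant" quantity. The states, persistence and means below are illustrative assumptions.

```python
import random

def markov_switching_path(n, p_stay=0.98, means=(0.0, 2.0), seed=3):
    """Simulate y_t = mu_{s_t} + e_t, where s_t is a two-state Markov
    chain that keeps its current state with probability p_stay.  Long
    regime spells mean that finite-sample statistics depend heavily on
    which regimes the sample window happens to cover."""
    rng = random.Random(seed)
    state, ys = 0, []
    for _ in range(n):
        if rng.random() > p_stay:     # occasional regime switch
            state = 1 - state
        ys.append(means[state] + rng.gauss(0.0, 1.0))
    return ys

y = markov_switching_path(2000)
first, last = y[:500], y[-500:]
print(sum(first) / 500, sum(last) / 500)   # window means can differ markedly
```

Comparing rolling-window means on such a path is the empirical strategy the abstract refers to: the parameters are allowed to move with the window because no single invariant measure describes the whole sample.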
By:  Sugawara, Shinya (Graduate School of Economics, University of Tokyo); Yasuhiro Omori (Faculty of Economics, University of Tokyo) 
Abstract:  This paper proposes a Bayesian estimation of the payoff functions in entry games using Markov chain Monte Carlo simulation. To deal with multiple Nash equilibria, we describe the econometric model with a latent variable that represents the players' choice of an equilibrium among the multiple equilibria. The statistical incoherency problem considered in previous studies is also discussed for our entry game model, and we provide an alternative justification for the previous estimation procedures. Our proposed methodology is applied to Japanese airline data, and model selection based on the marginal likelihood is conducted to investigate the nature of the strategic interaction between two major Japanese airline companies. Finally, we predict the entry probabilities of the two airline companies for Shizuoka airport, which is currently under construction. 
Date:  2008–04 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2008cf556&r=ecm 
By:  Magnus, J.R.; Powell, O.R.; Prufer, P. (Tilburg University, Center for Economic Research) 
Abstract:  Empirical growth research faces a high degree of model uncertainty. Apart from the neoclassical growth model, many new (endogenous) growth models have been proposed. This causes a lack of robustness in the parameter estimates and makes the determination of the key determinants of growth hazardous. The current paper deals with the fundamental issue of parameter estimation under model uncertainty, and compares the performance of various model averaging techniques. In particular, it contrasts Bayesian model averaging (BMA), currently one of the standard methods used in growth empirics, with weighted-average least squares (WALS), a method that has not previously been applied in this context. 
Keywords:  Model averaging; Bayesian analysis; Growth determinants 
JEL:  C51 C52 C13 C11 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200839&r=ecm 
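The mechanics of model averaging can be sketched with the standard BIC approximation to BMA weights under equal model priors; this is a generic illustration, not the paper's BMA implementation, and it does not attempt WALS. The BIC values and estimates below are made-up inputs.

```python
import math

def bic_model_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    w_i proportional to exp(-BIC_i / 2): a standard large-sample
    approximation to BMA weights under equal prior model probabilities.
    Subtracting the minimum BIC first avoids numerical underflow."""
    b0 = min(bics)
    raw = [math.exp(-(b - b0) / 2.0) for b in bics]
    s = sum(raw)
    return [r / s for r in raw]

def averaged_estimate(estimates, bics):
    """Model-averaged parameter estimate: each candidate model's
    estimate weighted by its approximate posterior probability."""
    w = bic_model_weights(bics)
    return sum(wi * ei for wi, ei in zip(w, estimates))

bics = [100.0, 102.0, 110.0]      # hypothetical BICs for three growth models
estimates = [0.5, 0.8, 1.2]       # hypothetical coefficient on one regressor
w = bic_model_weights(bics)
avg = averaged_estimate(estimates, bics)
print(w, avg)
```

Instead of picking the single best-BIC model, the averaged estimate hedges across all candidates; this is the sense in which averaging addresses the model uncertainty the abstract describes.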
By:  Abba M. Krieger; Moshe Pollak; Ester Samuel-Cahn 
Abstract:  The present paper studies the limiting behavior of the average score of a sequentially selected group of items or individuals, the underlying distribution of which, F, belongs to the Gumbel domain of attraction of extreme value distributions. This class contains the Normal, log-Normal, Gamma, Weibull and many other distributions. The selection rules are the “better than average” rule (β = 1) and the “β-better than average” rule, defined as follows. After the first item is selected, another item is admitted into the group if and only if its score is greater than β times the average score of those already selected. Denote by Y_k the average of the first k selected items, and by T_k the time it takes to amass them. Some of the key results obtained are: under mild conditions, for the better than average rule, Y_k − G^{-1}(log k) converges almost surely to a finite random variable, where G(x) = −log(1 − F(x)). When G(x) = x^α + h(x), with α > 0 and h(x)/x^α → 0, T_k is of approximate order k². When β > 1, the asymptotic results obtained are of a completely different order of magnitude. 
Date:  2008–03 
URL:  http://d.repec.org/n?u=RePEc:huj:dispap:dp478&r=ecm 
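The selection rule itself is simple enough to simulate directly; the sketch below implements the β-better-than-average rule on a stream of standard Normal scores (a member of the Gumbel domain of attraction). The stream length, target group size and seed are illustrative choices.

```python
import random

def beta_better_than_average(stream, beta=1.0, k_target=50):
    """Run the beta-better-than-average selection rule: the first item
    is always selected; thereafter an item is admitted if and only if
    its score exceeds beta times the current average of the selected
    group.  Returns the final average Y_k, the group size k, and the
    admission times T_1, ..., T_k."""
    it = iter(stream)
    selected_sum, k = next(it), 1
    times = [1]
    for t, x in enumerate(it, start=2):
        if x > beta * (selected_sum / k):
            selected_sum += x
            k += 1
            times.append(t)
            if k == k_target:
                break
    return selected_sum / k, k, times

rng = random.Random(7)
stream = (rng.gauss(0.0, 1.0) for _ in range(10 ** 6))
avg, k, times = beta_better_than_average(stream, beta=1.0, k_target=50)
print(k, round(avg, 3), times[-1])
```

Because each admitted item must beat the running average, the average drifts upward with k while the waiting time between admissions lengthens, which is the trade-off the paper's Y_k and T_k asymptotics quantify.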
By:  Micha Mandel; Yosef Rinott 
Abstract:  This note revisits the problem of selection bias, using a simple binomial example. It focuses on selection that is introduced by observing the data and making decisions prior to formal statistical analysis. Decision rules and the interpretation of confidence measures and results must then be taken relative to the point of view of the decision maker, i.e., before selection or after it. Such a distinction is important since inference can be considerably altered when the decision maker's point of view changes. This note demonstrates the issue using both the frequentist and the Bayesian paradigms. 
Keywords:  Confidence interval; Credible set; Binomial model; Decision theory; Likelihood principle 
Date:  2007–12 
URL:  http://d.repec.org/n?u=RePEc:huj:dispap:dp473&r=ecm 
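A tiny Monte Carlo in the spirit of the note's binomial example shows how conditioning on a pre-analysis selection rule changes frequentist coverage. The interval type (Wald), the selection rule (report only "promising" samples with at least 60% successes) and all numbers are illustrative assumptions, not the note's actual construction.

```python
import math
import random

def wald_covers(p_true, n, x, z=1.96):
    """Does the standard Wald interval for a binomial proportion,
    phat +/- z*sqrt(phat*(1-phat)/n), cover the true p?"""
    phat = x / n
    half = z * math.sqrt(max(phat * (1.0 - phat), 1e-12) / n)
    return phat - half <= p_true <= phat + half

def coverage(p_true=0.5, n=50, reps=20000, select_if=None, seed=11):
    """Monte Carlo coverage of the Wald interval, optionally restricted
    to samples passing a selection rule applied to the observed count.
    Selection before analysis distorts the interval's coverage."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(reps):
        x = sum(rng.random() < p_true for _ in range(n))
        if select_if is not None and not select_if(x):
            continue
        total += 1
        hits += wald_covers(p_true, n, x)
    return hits / total

unconditional = coverage()
selected = coverage(select_if=lambda x: x >= 30)  # analyze only "promising" data
print(round(unconditional, 3), round(selected, 3))
```

Unconditionally the interval covers close to its nominal level, but among the selected samples coverage drops well below it: the same interval means different things before and after selection, which is exactly the decision-maker's-viewpoint distinction the note makes.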
By:  Makoto Abe (Faculty of Economics, University of Tokyo) 
Abstract:  This research extends the Pareto/NBD model of customer-base analysis using a hierarchical Bayesian (HB) framework to suit today's customized marketing. The proposed HB model retains three tried-and-tested assumptions of Pareto/NBD models: (1) a Poisson purchase process, (2) a memoryless dropout process (i.e., a constant hazard rate), and (3) heterogeneity across customers, while relaxing the assumed independence of the purchase and dropout rates and incorporating customer characteristics as covariates. The model also provides useful output for CRM, such as customer-specific lifetimes and survival rates, as by-products of the MCMC estimation. Using three different types of databases (music CD purchases from an e-commerce site, and FSP data from a department store and a music CD chain), the HB model is compared against the benchmark Pareto/NBD model. The study demonstrates that recency-frequency data, in conjunction with customer behavior and characteristics, can provide important insights into direct marketing issues, such as the demographic profile of the best customers and whether long-life customers spend more. 
Date:  2008–01 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2008cf537&r=ecm 