nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒04‒21
eleven papers chosen by
Sune Karlsson
Orebro University

  1. Exact Maximum Likelihood estimation for the BL-GARCH model under elliptical distributed innovations By Abdou Kâ Diongue; Dominique Guegan; Rodney C. Wolff
  2. Semiparametric Robust Estimation of Truncated and Censored Regression Models By Cizek, P.
  3. Likelihood Functions for State Space Models with Diffuse Initial Conditions By Marc K. Francke; Siem Jan Koopman; Aart de Vos
  4. Nonparametric Identification and Estimation in a Generalized Roy Model By Patrick Bayer; Shakeeb Khan; Christopher Timmins
  5. "On the Asymptotic Optimality of the LIML Estimator with Possibly Many Instruments" By T. W. Anderson; Naoto Kunitomo; Yukitoshi Matsushita
  6. Non-stationarity and meta-distribution By Dominique Guegan
  7. "Bayesian Estimation of Entry Games with Application to Japanese Airline Data" By Sugawara, Shinya; Yasuhiro Omori
  8. A Comparison of Two Averaging Techniques with an Application to growth Empirics By Magnus, J.R.; Powell, O.R.; Prufer, P.
  9. Extreme(ly) Mean(ingful) By Abba M. Krieger; Moshe Pollak; Ester Samuel-Cahn
  10. On Statistical Inference Under Selection Bias By Micha Mandel; Yosef Rinott
  11. ""Counting Your Customers" One by One: A Hierarchical Bayes Extension to the Pareto/NBD Model" By Makoto Abe

  1. By: Abdou Kâ Diongue (UFR SAT - Université Gaston Berger - Université Gaston Berger de Saint-Louis); Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Ecole d'économie de Paris - Paris School of Economics - Université Panthéon-Sorbonne - Paris I); Rodney C. Wolff (School of Mathematical Sciences - Queensland University of Technology)
    Abstract: In this paper, we discuss the class of Bilinear GARCH (BL-GARCH) models, which are capable of capturing simultaneously two key properties of non-linear time series: volatility clustering and leverage effects. It has often been observed that the marginal distributions of such time series have heavy tails; thus we examine the BL-GARCH model in a general setting under some non-Normal distributions. We investigate some probabilistic properties of this model and we propose and implement a maximum likelihood estimation (MLE) methodology. To evaluate the small-sample performance of this method for the various models, a Monte Carlo study is conducted. Finally, within-sample estimation properties are studied using S&P 500 daily returns, where the features of interest manifest as volatility clustering and leverage effects.
    Keywords: BL-GARCH process, elliptical distribution, leverage effects, Maximum Likelihood, Monte Carlo method, volatility clustering.
    Date: 2008–03
    URL: http://d.repec.org/n?u=RePEc:hal:papers:halshs-00270719_v1&r=ecm
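    A minimal numerical sketch of the estimation step, assuming the BL-GARCH(1,1) recursion sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1} + delta*eps_{t-1}*sigma_{t-1} and Gaussian innovations (one member of the elliptical family the paper works with); the initialisation and optimiser choices here are illustrative only:
      import numpy as np
      from scipy.optimize import minimize

      def blgarch_negloglik(params, eps):
          # BL-GARCH(1,1) variance recursion; the delta*eps*sigma term is the
          # bilinear component that generates the leverage effect
          omega, alpha, beta, delta = params
          sigma2 = np.empty(eps.size)
          sigma2[0] = eps.var()                  # crude initialisation
          for t in range(1, eps.size):
              sigma2[t] = (omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]
                           + delta * eps[t-1] * np.sqrt(sigma2[t-1]))
              if sigma2[t] <= 0:                 # keep the variance path admissible
                  return 1e10
          # Gaussian log-likelihood, up to a constant
          return 0.5 * np.sum(np.log(sigma2) + eps**2 / sigma2)

      # usage: eps = demeaned daily returns (numpy array), e.g. S&P 500
      # res = minimize(blgarch_negloglik, x0=[0.01, 0.05, 0.90, -0.05],
      #                args=(eps,), method="Nelder-Mead")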
  2. By: Cizek, P. (Tilburg University, Center for Economic Research)
    Abstract: Many estimation methods for truncated and censored regression models, such as maximum likelihood and symmetrically censored least squares (SCLS), are sensitive to outliers and data contamination, as we document. Therefore, we propose a semiparametric general trimmed estimator (GTE) of truncated and censored regression, which is highly robust but relatively imprecise. To improve its performance, we also propose data-adaptive and one-step trimmed estimators. We derive the robust and asymptotic properties of all proposed estimators and show that the one-step estimators (e.g., one-step SCLS) are as robust as GTE and are asymptotically equivalent to the original estimator (e.g., SCLS). The finite-sample properties of existing and proposed estimators are studied by means of Monte Carlo simulations.
    Keywords: Asymptotic normality; censored regression; one-step estimation; robust estimation; trimming; truncated regression
    JEL: C13 C14 C21 C24
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200834&r=ecm
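    Trimming as a robustness device can be conveyed in a few lines. The sketch below is a generic iterative trimmed least squares, not the paper's GTE (which trims within the truncated and censored regression framework); it only illustrates how refitting on the best-fitting share of observations limits the influence of contaminated points:
      import numpy as np

      def trimmed_ols(X, y, trim=0.10, max_iter=20):
          # start from plain OLS, then repeatedly refit on the
          # (1 - trim) fraction of points with the smallest residuals
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          keep_n = int(np.ceil((1 - trim) * len(y)))
          for _ in range(max_iter):
              resid2 = (y - X @ beta)**2
              keep = np.argsort(resid2)[:keep_n]   # drop the worst-fitting points
              beta_new = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
              if np.allclose(beta_new, beta):
                  break
              beta = beta_new
          return beta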
  3. By: Marc K. Francke (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam); Aart de Vos (Vrije Universiteit Amsterdam)
    Abstract: State space models with nonstationary processes and fixed regression effects require a state vector with diffuse initial conditions. Different likelihood functions can be adopted for the estimation of parameters in time series models with diffuse initial conditions. In this paper we consider profile, diffuse and marginal likelihood functions. The marginal likelihood is defined as the likelihood function of a transformation of the data vector. The transformation is not unique. The diffuse likelihood is a marginal likelihood for a specific data transformation that may depend on parameters. Therefore, the diffuse likelihood cannot generally be used for parameter estimation. Our newly proposed marginal likelihood function is based on an orthonormal transformation that does not depend on parameters. Likelihood functions for state space models are evaluated using the Kalman filter. The diffuse Kalman filter is specifically designed for computing the diffuse likelihood function. We show that a modification of the diffuse Kalman filter is needed for the evaluation of our proposed marginal likelihood function. Diffuse and marginal likelihood functions have better small-sample properties than the profile likelihood function for the estimation of parameters in linear time series models. The results in our paper confirm the earlier findings and show that the diffuse likelihood function is not appropriate for a range of state space model specifications.
    Keywords: Diffuse likelihood; Kalman filter; Marginal likelihood; Multivariate time series models; Profile likelihood
    JEL: C13 C22 C32
    Date: 2008–04–14
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20080040&r=ecm
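    The role of the initial condition is easiest to see in the simplest state space model. Below is a minimal sketch of the Gaussian log-likelihood of a local level model, in which the diffuse initial state is approximated by a large prior variance and the first, diffuse-dominated prediction error is dropped; the exact diffuse and marginal likelihood computations studied in the paper replace this crude device:
      import numpy as np

      def local_level_loglik(y, sigma2_eps, sigma2_eta, kappa=1e7):
          # y_t = mu_t + eps_t,  mu_{t+1} = mu_t + eta_t,
          # with mu_1 ~ N(0, kappa) approximating the diffuse prior
          a, p = 0.0, kappa
          loglik = 0.0
          for t, yt in enumerate(y):
              f = p + sigma2_eps            # prediction-error variance
              v = yt - a                    # one-step prediction error
              if t > 0:                     # skip the diffuse-dominated term
                  loglik -= 0.5 * (np.log(2 * np.pi * f) + v * v / f)
              k = p / f                     # Kalman gain
              a += k * v                    # filtered state mean
              p = p * (1 - k) + sigma2_eta  # next predicted state variance
          return loglik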
  4. By: Patrick Bayer; Shakeeb Khan; Christopher Timmins
    Abstract: This paper considers nonparametric identification and estimation of a generalized Roy model that includes a non-pecuniary component of utility associated with each choice alternative. Previous work has found that, without parametric restrictions or the availability of covariates, all of the useful content of a cross-sectional dataset is absorbed in a restrictive specification of Roy sorting behavior that imposes independence on wage draws. While this is true, we demonstrate that it is also possible to identify (under relatively innocuous assumptions and without the use of covariates) a common non-pecuniary component of utility associated with each choice alternative. We develop nonparametric estimators corresponding to two alternative assumptions under which we prove identification, derive asymptotic properties, and illustrate small sample properties with a series of Monte Carlo experiments. We demonstrate the usefulness of one of these estimators with an empirical application. Micro data from the 2000 Census are used to calculate the returns to a college education. If high-school and college graduates face different costs of migration, this would be reflected in different degrees of Roy-sorting-induced bias in their observed wage distributions. Correcting for this bias, the observed returns to a college degree are cut in half.
    JEL: C1 C13 C14 J24 J3 J32
    Date: 2008–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:13949&r=ecm
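    The Roy-sorting bias the authors correct for shows up in a toy Monte Carlo. In the sketch below (all parameter values hypothetical), individuals pick the sector with the higher net wage draw; self-selection shifts both conditional wage means, so the observed premium differs from the true one:
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000
      w_hs = rng.normal(2.5, 0.5, n)        # latent log wage, high-school sector
      w_col = rng.normal(2.9, 0.5, n)       # latent log wage, college sector
      cost = rng.normal(0.2, 0.3, n)        # non-pecuniary cost of college
      college = w_col - cost > w_hs         # Roy self-selection on the draws

      true_premium = 2.9 - 2.5
      observed = w_col[college].mean() - w_hs[~college].mean()
      print(true_premium, observed)         # selection distorts the comparison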
  5. By: T. W. Anderson (Department of Statistics and Department of Economics, Stanford University); Naoto Kunitomo (Faculty of Economics, University of Tokyo); Yukitoshi Matsushita (CIRJE, University of Tokyo)
    Abstract: We consider the estimation of the coefficients of a linear structural equation in a simultaneous equation system when there are many instrumental variables. We derive some asymptotic properties of the limited information maximum likelihood (LIML) estimator when the number of instruments is large; some of these results are new and we relate them to results in some recent studies. We have found that the variance of the LIML estimator and its modifications often attains the asymptotic lower bound when the number of instruments is large and the disturbance terms are not necessarily normally distributed, that is, for micro-econometric models with many instruments.
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2008cf542&r=ecm
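    For reference, the LIML estimator is short to compute. A minimal sketch, assuming a single structural equation with any included exogenous regressors already partialled out; kappa is the smallest root of det(W'W - kappa W'M_Z W) = 0 with W = [y, X], and setting kappa = 1 instead would give 2SLS:
      import numpy as np
      from scipy.linalg import eigh

      def liml(y, X, Z):
          # y: (n,) outcome, X: (n, p) endogenous regressors,
          # Z: (n, K) instruments with K >= p
          n = len(y)
          W = np.column_stack([y, X])
          Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection on instruments
          Mz = np.eye(n) - Pz
          kappa = eigh(W.T @ W, W.T @ Mz @ W, eigvals_only=True)[0]
          G = np.eye(n) - kappa * Mz               # k-class weighting, k = kappa
          return np.linalg.solve(X.T @ G @ X, X.T @ G @ y)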
  6. By: Dominique Guegan (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, Ecole d'économie de Paris - Paris School of Economics - Université Panthéon-Sorbonne - Paris I)
    Abstract: In this paper we deal with the problem of non-stationarity encountered in many data sets, mainly in the financial and economic domains, arising from the presence of multiple seasonalities, jumps, volatility, distortion, aggregation, etc. The existence of non-stationarity gives rise to spurious behavior in estimated statistics as soon as we work with finite samples. We illustrate this fact using Markov switching processes, Stopbreak models and SETAR processes. Thus, working within a theoretical framework based on the existence of an invariant measure for the whole sample is not satisfactory. Empirically, alternative strategies have been developed that introduce dynamics into the modelling, mainly through the parameters, with the use of rolling windows. A specific framework has not yet been proposed to study such non-invariant data sets. The question is difficult. Here, we open a discussion on this topic by proposing the concept of a meta-distribution, which can be used to improve risk management strategies or forecasts.
    Keywords: Non-stationarity, switching processes, SETAR processes, jumps, forecast, risk management, copula, probability distribution function.
    Date: 2008–03
    URL: http://d.repec.org/n?u=RePEc:hal:papers:halshs-00270708_v1&r=ecm
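    The spurious behavior of full-sample statistics under regime switching is easy to exhibit. The sketch below simulates a hypothetical two-regime Markov switching mean and compares the full-sample mean with rolling-window means, which drift with the regimes:
      import numpy as np

      rng = np.random.default_rng(1)
      n, p_stay, mu = 2000, 0.99, (0.0, 1.5)
      state, x = 0, np.empty(n)
      for t in range(n):
          if rng.random() > p_stay:       # occasional regime switch
              state = 1 - state
          x[t] = mu[state] + rng.normal()

      window = 250
      roll = np.array([x[t-window:t].mean() for t in range(window, n)])
      print(x.mean(), roll.min(), roll.max())
      # the full-sample mean sits between the regimes, while the
      # rolling estimates wander from one regime mean to the other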
  7. By: Sugawara, Shinya (Graduate School of Economics, University of Tokyo); Yasuhiro Omori (Faculty of Economics, University of Tokyo)
    Abstract: This paper proposes a Bayesian estimation of the payoff functions in entry games using Markov chain Monte Carlo simulation. In order to deal with the multiple Nash equilibria, we describe the econometric model with a latent variable that represents the choice of an equilibrium among the multiple equilibria. The statistical incoherency problem considered in previous studies is also discussed for our entry game model, and we provide an alternative justification for the previous estimation procedures. Our proposed methodology is applied to Japanese airline data, and model selection based on the marginal likelihood is conducted to investigate the nature of the strategic interaction between two major Japanese airline companies. Finally, we predict the entry probabilities of the two airline companies for Shizuoka airport, which is currently under construction.
    Date: 2008–04
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2008cf556&r=ecm
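    The multiplicity problem that the latent equilibrium-selection variable handles can be exhibited numerically. The sketch below (payoff parameters hypothetical) enumerates the pure-strategy equilibria of a two-firm entry game over simulated profit shocks and reports the share of draws that support two equilibria:
      import numpy as np

      rng = np.random.default_rng(2)
      beta, delta = 0.5, 1.0              # monopoly payoff and rivalry effect
      eps = rng.normal(size=(50_000, 2))  # private profit shocks of firms 1, 2

      def equilibria(e1, e2):
          # firm i enters iff beta - delta * (rival enters) + eps_i > 0;
          # collect action pairs where both entry decisions are best responses
          eqs = []
          for a1 in (0, 1):
              for a2 in (0, 1):
                  if (a1 == (beta - delta * a2 + e1 > 0)
                          and a2 == (beta - delta * a1 + e2 > 0)):
                      eqs.append((a1, a2))
          return eqs

      n_eq = np.array([len(equilibria(e1, e2)) for e1, e2 in eps])
      print((n_eq == 2).mean())           # share of draws with two equilibria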
  8. By: Magnus, J.R.; Powell, O.R.; Prufer, P. (Tilburg University, Center for Economic Research)
    Abstract: Empirical growth research faces a high degree of model uncertainty. Apart from the neoclassical growth model, many new (endogenous) growth models have been proposed. This causes a lack of robustness of the parameter estimates and makes the determination of the key determinants of growth hazardous. The current paper deals with the fundamental issue of parameter estimation under model uncertainty, and compares the performance of various model averaging techniques. In particular, it contrasts Bayesian model averaging (BMA), currently one of the standard methods used in growth empirics, with weighted-average least squares (WALS), a method that has not previously been applied in this context.
    Keywords: Model averaging; Bayesian analysis; Growth determinants
    JEL: C51 C52 C13 C11
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:200839&r=ecm
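    A minimal flavour of model averaging: the sketch below averages the OLS estimate of one focus coefficient over all subsets of the auxiliary regressors, using exp(-BIC/2) weights as a standard large-sample approximation to BMA. It is not the paper's WALS estimator, whose weights are constructed differently:
      import numpy as np
      from itertools import combinations

      def bic_model_average(X, y, focus=0):
          # X: (n, k) regressors; column `focus` is kept in every model
          n, k = X.shape
          aux = [j for j in range(k) if j != focus]
          est, bic = [], []
          for r in range(len(aux) + 1):
              for subset in combinations(aux, r):
                  cols = [focus] + list(subset)
                  Xs = X[:, cols]
                  b = np.linalg.lstsq(Xs, y, rcond=None)[0]
                  rss = ((y - Xs @ b)**2).sum()
                  est.append(b[0])               # focus coefficient in this model
                  bic.append(n * np.log(rss / n) + len(cols) * np.log(n))
          w = np.exp(-0.5 * (np.array(bic) - min(bic)))
          return np.dot(w / w.sum(), est)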
  9. By: Abba M. Krieger; Moshe Pollak; Ester Samuel-Cahn
    Abstract: The present paper studies the limiting behavior of the average score of a sequentially selected group of items or individuals, the underlying distribution of which, F, belongs to the Gumbel domain of attraction of the extreme value distribution. This class contains the Normal, log-Normal, Gamma, Weibull and many other distributions. The selection rules are the “better than average” (β = 1) and the “β-better than average” rules, defined as follows. After the first item is selected, another item is admitted into the group if and only if its score is greater than β times the average score of those already selected. Denote by Y_k the average of the first k selected items, and by T_k the time it takes to amass them. Some of the key results obtained are: under mild conditions, for the better than average rule, Y_k - G^(-1)(log k) converges almost surely to a finite random variable, where G(x) = -log(1 - F(x)); and when G(x) = x^α + h(x), with α > 0 and h(x)/x^α → 0, T_k is of approximate order k^2. When β > 1, the asymptotic results obtained are of a completely different order of magnitude.
    Date: 2008–03
    URL: http://d.repec.org/n?u=RePEc:huj:dispap:dp478&r=ecm
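    The selection rule itself is simple to simulate. The sketch below implements the β-better-than-average rule for Normal scores (which lie in the Gumbel domain of attraction) and records Y_k and T_k; by the abstract's result, T_k should grow at roughly the k^2 rate here:
      import numpy as np

      rng = np.random.default_rng(3)

      def select_k(k, beta=1.0):
          # admit an item iff its score exceeds beta times the
          # average of those already selected; return (Y_k, T_k)
          total, count, inspected = rng.standard_normal(), 1, 1
          while count < k:
              x = rng.standard_normal()
              inspected += 1
              if x > beta * total / count:
                  total += x
                  count += 1
          return total / count, inspected

      runs = [select_k(50) for _ in range(200)]
      print(np.mean([y for y, _ in runs]),   # average Y_k
            np.mean([t for _, t in runs]))   # average T_k, of order k^2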
  10. By: Micha Mandel; Yosef Rinott
    Abstract: This note revisits the problem of selection bias, using a simple binomial example. It focuses on selection that is introduced by observing the data and making decisions prior to formal statistical analysis. Decision rules and the interpretation of confidence measures and results must then be taken relative to the point of view of the decision maker, i.e., before selection or after it. Such a distinction is important, since inference can be considerably altered when the decision maker's point of view changes. This note demonstrates the issue using both the frequentist and the Bayesian paradigms.
    Keywords: Confidence interval; Credible set; Binomial model; Decision theory; Likelihood principle
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:huj:dispap:dp473&r=ecm
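    The note's point is easy to reproduce. The sketch below simulates binomial samples with a standard Wald interval, then conditions on a hypothetical selection rule (only "promising" samples with phat >= 0.6 get analysed); unconditional coverage is close to nominal, while coverage conditional on selection falls well below it. The threshold is illustrative only:
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      n, p, reps = 50, 0.5, 100_000
      z = stats.norm.ppf(0.975)

      phat = rng.binomial(n, p, size=reps) / n
      half = z * np.sqrt(phat * (1 - phat) / n)    # Wald half-width
      hit = np.abs(phat - p) <= half               # interval covers true p
      sel = phat >= 0.6                            # selection by the analyst
      print(hit.mean(), hit[sel].mean())           # near 0.95 vs. much lower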
  11. By: Makoto Abe (Faculty of Economics, University of Tokyo)
    Abstract: This research extends a Pareto/NBD model of customer-base analysis using a hierarchical Bayesian (HB) framework to suit today's customized marketing. The proposed HB model retains three tried and tested assumptions of Pareto/NBD models: (1) a Poisson purchase process, (2) a memoryless dropout process (i.e., a constant hazard rate), and (3) heterogeneity across customers, while relaxing the independence assumption between the purchase and dropout rates and incorporating customer characteristics as covariates. The model also provides useful output for CRM, such as customer-specific lifetimes and survival rates, as by-products of the MCMC estimation. Using three different types of databases (music CD purchases from an e-commerce site, and FSP data from a department store and a music CD chain), the HB model is compared against the benchmark Pareto/NBD model. The study demonstrates that recency-frequency data, in conjunction with customer behavior and characteristics, can provide important insights into direct marketing issues, such as the demographic profile of the best customers and whether long-life customers spend more.
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2008cf537&r=ecm
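    The building block that the HB model individualises is the customer-level Pareto/NBD likelihood. A minimal sketch using its standard form, with a Poisson purchase rate lam and an exponential dropout rate mu; in the paper these parameters vary across customers and are linked to covariates, which is the hierarchical part:
      import numpy as np

      def pareto_nbd_loglik(lam, mu, x, t_x, T):
          # customer observed over (0, T] with x purchases, the last at t_x:
          # either she dropped out somewhere in (t_x, T], or is still alive at T
          a = x * np.log(lam) + np.log(mu) - (lam + mu) * t_x
          b = (x + 1) * np.log(lam) - (lam + mu) * T
          return -np.log(lam + mu) + np.logaddexp(a, b)

      # e.g. 4 purchases, the last in week 30, observed for 52 weeks:
      print(pareto_nbd_loglik(lam=0.1, mu=0.01, x=4, t_x=30.0, T=52.0))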

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.