nep-ecm New Economics Papers
on Econometrics
Issue of 2017‒08‒06
twelve papers chosen by
Sune Karlsson
Örebro universitet

  1. Semiparametric Quasi-Bayesian Inference with Dirichlet Process Priors: Application to Nonignorable Missing Responses By Igari Ryosuke; Takahiro Hoshino
  2. Dynamic conditional score models with time-varying location, scale and shape parameters By Escribano Sáez, Álvaro; Blazsek, Szabolcs Istvan; Ayala, Astrid
  3. The Memory of Volatility By Wenger, Kai; Leschinski, Christian; Sibbertsen, Philipp
  4. Structural change in non-stationary AR(1) models By Chong, Terence Tai Leung; Pang, Tianxiao; Zhang, Danna; Liang, Yanling
  5. A Primer on Bayesian Distributional Regression By Thomas Kneib; Nikolaus Umlauf
  6. Specification Tests for the Multinomial Logit Model Revisited: The Role of Alternative-specific Constants By Jan J. Rouwendal
  7. Theoretical and Empirical Differences Between Diagonal and Full Bekk for Risk Management By David Allen; Michael McAleer
  8. On Biased Correlation Estimation By Thomas Schürmann; Ingo Hoffmann
  9. Discrete Choice Models for Commuting Interactions By Jan J. Rouwendal; Or Levkovich; Ismir Mulalic
  10. A Theory of Dichotomous Valuation with Applications to Variable Selection By Hu, Xingwei
  11. An Econometric Method for Estimating Population Parameters from Non-Random Samples: An Application to Clinical Case Finding By Rulof P. Burger; Zoë M. McLaren
  12. Identifying Distributions in a Panel Model with Heteroskedasticity: An Application to Earnings Volatility By Irene Botosaru

  1. By: Igari Ryosuke (Graduate School of Economics, Keio University); Takahiro Hoshino (Faculty of Economics, Keio University)
    Abstract: Quasi-Bayesian inference, in which an objective function such as a generalized method of moments (GMM) criterion, an M-estimator, or an empirical likelihood replaces the log-likelihood, has been studied in Bayesian statistics. However, existing quasi-Bayesian estimation methods do not incorporate Bayesian semiparametric modeling such as Dirichlet process mixtures. In this study, we propose semiparametric quasi-Bayesian inference with Dirichlet process priors based on the methods of Hoshino and Igari (2017) and Igari and Hoshino (2017), which divide the objective function into a likelihood component and a GMM-type component. In the proposed method, auxiliary information such as population information can be incorporated in the GMM-type function, whereas the likelihood is expressed as an infinite mixture. In the resulting Markov chain Monte Carlo (MCMC) algorithm, the GMM-type objective function enters a Metropolis-Hastings step within the blocked Gibbs sampler. For illustrative purposes, we apply the proposed estimation method to missing-data analysis with nonignorable responses, in which the missingness depends on the dependent variable. We demonstrate the performance of the model in a simulation study. [A toy sketch of the quasi-Bayesian MCMC step follows this entry.]
    Keywords: Dirichlet Process Mixture Model, Blocked Gibbs Sampler, GMM, Auxiliary Information, Selection Model
    JEL: C11 C14 C15
    Date: 2017–06–26
    URL: http://d.repec.org/n?u=RePEc:keo:dpaper:2017-020&r=ecm
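    A minimal sketch of the quasi-Bayesian idea above: a random-walk Metropolis-Hastings step whose acceptance ratio uses a GMM-type objective in place of a log-likelihood. The toy Python example below uses a single moment condition with identity weighting; it is not the authors' Dirichlet process mixture sampler, and every name and parameter value in it is illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(1.5, 2.0, size=500)        # observed data
      n = len(x)

      def gmm_objective(theta):
          # single moment condition E[x - theta] = 0, identity weighting
          gbar = np.mean(x - theta)
          return -0.5 * n * gbar**2

      def log_prior(theta):
          return -0.5 * theta**2 / 100.0        # diffuse N(0, 100) prior

      theta, chain = 0.0, []
      for _ in range(5000):
          prop = theta + 0.2 * rng.normal()     # random-walk proposal
          log_acc = (gmm_objective(prop) + log_prior(prop)
                     - gmm_objective(theta) - log_prior(theta))
          if np.log(rng.uniform()) < log_acc:
              theta = prop
          chain.append(theta)

      print("quasi-posterior mean:", np.mean(chain[1000:]))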
  2. By: Escribano Sáez, Álvaro; Blazsek, Szabolcs Istvan; Ayala, Astrid
    Abstract: We introduce new dynamic conditional score (DCS) models with time-varying location, scale and shape parameters. For these models, we use the Student's-t, GED (general error distribution), Gen-t (generalized-t), Skew-Gen-t (skewed generalized-t), EGB2 (exponential generalized beta of the second kind) and NIG (normal-inverse Gaussian) distributions. We show that the maximum likelihood (ML) estimates of the new DCS models are consistent and asymptotically Gaussian. As an illustration, we use daily log-return time series data from the S&P 500 index for the period 1950 to 2016. We find that, with respect to goodness-of-fit and predictive performance, the DCS models with dynamic shape are superior to the DCS models with constant shape and to the benchmark AR-t-GARCH model. [A toy sketch of a score-driven filter follows this entry.]
    Keywords: Score-driven shape parameters; Dynamic conditional score models
    JEL: C58 C52 C22
    Date: 2017–07–01
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:25043&r=ecm
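    To illustrate the score-driven updating that defines DCS models, here is a toy Beta-t-GARCH-style variance filter in Python, in which the Student's-t score downweights extreme observations. The paper's models additionally make the location and shape parameters time-varying, which is omitted here, and the parameter values below are arbitrary.
      import numpy as np

      rng = np.random.default_rng(1)
      y = rng.standard_t(df=5, size=1000)       # stand-in for demeaned returns

      nu, omega, alpha, beta = 5.0, 0.05, 0.10, 0.90
      lam = float(np.var(y))                    # lam: conditional variance f_t
      for t in range(len(y)):
          # scaled score of the t log-density; mean zero under the model
          u = (nu + 1) * y[t]**2 / ((nu - 2) * lam + y[t]**2) - 1.0
          lam = omega + beta * lam + alpha * lam * u   # score-driven update
      print("final filtered variance:", lam)
    Unlike a GARCH recursion driven directly by y[t]**2, the bounded score u makes the filtered variance robust to isolated outliers.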
  3. By: Wenger, Kai; Leschinski, Christian; Sibbertsen, Philipp
    Abstract: The focus of the volatility literature on forecasting, and the predominance of the conceptually simpler HAR model over long memory stochastic volatility models, have meant that the actual degree of memory in volatility has rarely been examined. Estimates in the literature range roughly between 0.4 and 0.6, that is, from the higher stationary to the lower non-stationary region. This difference has important practical implications, such as the existence or non-existence of the fourth moment of the return distribution. Inference on the memory order is complicated by the presence of measurement error in realized volatility and the potential of spurious long memory. In this paper we provide a comprehensive analysis of the memory in variances of international stock indices and exchange rates. On the one hand, we find that the variance of exchange rates is subject to spurious long memory and the true memory parameter is in the higher stationary range. Stock index variances, on the other hand, are free of low-frequency contaminations and the memory is in the lower non-stationary range. These results are obtained using state-of-the-art local Whittle methods that allow consistent estimation in the presence of perturbations or low-frequency contaminations. [A toy local Whittle sketch follows this entry.]
    Keywords: Realized Volatility; Long Memory; Perturbation; Spurious Long Memory
    JEL: C12 C22 C58 G15
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-601&r=ecm
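    A minimal sketch of the estimator family referred to above: the local Whittle estimator of the memory parameter d (Robinson, 1995), implemented with a plain grid search and applied to simulated white noise (true d = 0). The paper uses refined variants that remain consistent under perturbation and low-frequency contamination; those corrections are not reproduced here, and the bandwidth rule below is just a common convention.
      import numpy as np

      def local_whittle_d(x, m):
          n = len(x)
          lam = 2 * np.pi * np.arange(1, m + 1) / n     # Fourier frequencies
          w = np.fft.fft(x - np.mean(x))
          I = np.abs(w[1:m + 1])**2 / (2 * np.pi * n)   # periodogram at lam_j
          def R(d):                                     # local Whittle objective
              return np.log(np.mean(lam**(2 * d) * I)) - 2 * d * np.mean(np.log(lam))
          grid = np.linspace(-0.49, 0.99, 297)
          return grid[np.argmin([R(d) for d in grid])]

      rng = np.random.default_rng(2)
      x = rng.normal(size=4096)
      print("d-hat:", local_whittle_d(x, m=int(len(x)**0.65)))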
  4. By: Chong, Terence Tai Leung; Pang, Tianxiao; Zhang, Danna; Liang, Yanling
    Abstract: This paper revisits the asymptotic inference for non-stationary AR(1) models of Phillips and Magdalinos (2007a) by incorporating a structural change in the AR parameter at an unknown time k0. We derive the limiting distributions of the t-ratios of the pre- and post-break AR parameters, beta1 and beta2, and of the least squares estimator of the change point under mild conditions. Monte Carlo simulations are conducted to examine the finite-sample properties of the estimators and support the theoretical findings. [A toy sketch follows this entry.]
    Keywords: AR(1) model, Least squares estimator, Limiting distribution, Mildly explosive, Mildly integrated, Structural change, Unit root.
    JEL: C2 C22
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:80510&r=ecm
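    The least squares change point estimator discussed above can be illustrated in a few lines: fit an AR(1) on each side of every candidate break date and pick the date minimizing the total sum of squared residuals. A toy Python version under assumed parameter values (0.95 before the break, 1.05 after):
      import numpy as np

      rng = np.random.default_rng(3)
      n, k0 = 400, 200
      y = np.zeros(n)
      for t in range(1, n):
          beta = 0.95 if t <= k0 else 1.05      # structural change in AR parameter
          y[t] = beta * y[t - 1] + rng.normal()

      def ssr(y_lag, y_cur):
          b = (y_lag @ y_cur) / (y_lag @ y_lag)  # OLS slope without intercept
          e = y_cur - b * y_lag
          return e @ e

      x, z = y[:-1], y[1:]
      k_hat = min(range(20, n - 20),
                  key=lambda k: ssr(x[:k], z[:k]) + ssr(x[k:], z[k:]))
      print("estimated break point:", k_hat)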
  5. By: Thomas Kneib; Nikolaus Umlauf
    Abstract: Bayesian methods have become increasingly popular in the past two decades. With the constant rise of computational power, even very complex models can be estimated on virtually any modern computer. Moreover, interest has shifted from conditional mean models to probabilistic distributional models capturing location, scale, shape and other aspects of a response distribution, where covariate effects can have flexible forms, e.g., linear, nonlinear, spatial or random effects. This tutorial paper discusses how to select models in the Bayesian distributional regression setting, how to monitor convergence of the Markov chains, how to evaluate the relevance of effects using simultaneous credible intervals, and how to use simulation-based inference for quantities derived from the original model parameterisation. We exemplify the workflow using daily weather data on (i) temperatures on Germany's highest mountain and (ii) extreme precipitation across Germany. [A sketch of one workflow step follows this entry.]
    Keywords: Distributional regression, generalized additive models for location, scale and shape, Markov chain Monte Carlo simulations, semiparametric regression, tutorial
    JEL: C11 C14 C61 C63
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2017-12&r=ecm
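    One concrete step from the workflow described above is the construction of simultaneous credible bands from MCMC output: scale the pointwise posterior standard deviation by the posterior quantile of the maximal standardized deviation. A minimal Python sketch, assuming the draws are available as a samples-by-gridpoints matrix (the fake draws below merely stand in for real MCMC output):
      import numpy as np

      rng = np.random.default_rng(4)
      S, T = 2000, 50
      draws = 0.1 * rng.normal(size=(S, T)).cumsum(axis=1)  # fake posterior draws

      mean, sd = draws.mean(axis=0), draws.std(axis=0)
      z = np.abs(draws - mean) / sd                 # standardized deviations
      m = np.quantile(z.max(axis=1), 0.95)          # 95% quantile of the sup
      lower, upper = mean - m * sd, mean + m * sd   # simultaneous 95% band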
  6. By: Jan J. Rouwendal (Vrije Universiteit Amsterdam; Tinbergen Institute, The Netherlands)
    Abstract: This paper considers specification tests for the multinomial logit model when alternative-specific constants are used to absorb the impact of omitted variables in the deterministic parts of the utilities. It finds that such tests then have no power to detect violations of IIA; they respond only to specification errors in the deterministic part of the utility function. [A two-line illustration of IIA follows this entry.]
    Keywords: multinomial logit; Hausman-McFadden specification test; nested logit; GEV models; mixed logit
    JEL: D1 D4
    Date: 2017–07–31
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20170068&r=ecm
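    For readers unfamiliar with IIA, the property at stake is easy to display: in a multinomial logit, the odds between two alternatives are unaffected by adding or removing any other alternative. A quick check in Python with made-up utilities:
      import numpy as np

      def mnl_probs(v):
          e = np.exp(v - v.max())       # deterministic utilities -> choice probs
          return e / e.sum()

      v = np.array([1.0, 0.5, 0.2])
      p_all = mnl_probs(v)
      p_two = mnl_probs(v[:2])          # third alternative removed
      print(p_all[0] / p_all[1], p_two[0] / p_two[1])   # identical odds: IIA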
  7. By: David Allen (Department of Mathematics, University of Sydney, Australia); Michael McAleer (Department of Quantitative Finance, National Tsing Hua University, Taiwan; Discipline of Business Analytics, University of Sydney Business School, Australia; Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, The Netherlands)
    Abstract: The purpose of the paper is to explore the relative biases in the estimation of the Full BEKK model as compared with the Diagonal BEKK (DBEKK) model, which is used as a theoretical and empirical benchmark. Chang and McAleer [4] show that univariate GARCH is not a special case of multivariate ARCH, specifically the Full BEKK model, and demonstrate that Full BEKK, which in practice is estimated almost exclusively, has no underlying stochastic process, regularity conditions, or asymptotic properties. DBEKK does not suffer from these limitations and hence provides a suitable benchmark. We use simulated financial returns series to contrast estimates of the conditional variances and covariances from DBEKK and Full BEKK. The results of non-parametric tests suggest evidence of considerable bias in the Full BEKK estimates. The results of quantile regression analysis show a systematic relationship between the two sets of estimates as we move across the quantiles: estimates of conditional variances from Full BEKK, relative to those from DBEKK, are lower in the left tail and higher in the right tail. [A toy DBEKK recursion follows this entry.]
    Keywords: DBEKK; BEKK; Regularity Conditions; Asymptotic Properties; Non-Parametric; Bias; Quantile regression.
    JEL: C13 C21 C58
    Date: 2017–07–31
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20170069&r=ecm
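    The benchmark DBEKK recursion above is simple to write down. A toy Python filter with assumed (not estimated) parameter matrices; restricting the loading matrices A and B to be diagonal is what delivers the regularity conditions the Full BEKK lacks.
      import numpy as np

      rng = np.random.default_rng(6)
      Tn, k = 500, 2
      eps = rng.normal(size=(Tn, k))            # stand-in return innovations

      C = np.array([[0.10, 0.00],
                    [0.05, 0.10]])              # lower-triangular intercept
      A = np.diag([0.30, 0.25])                 # diagonal ARCH loadings
      B = np.diag([0.90, 0.92])                 # diagonal GARCH loadings

      H = np.cov(eps.T)                         # initial conditional covariance
      for t in range(1, Tn):
          e = eps[t - 1][:, None]
          H = C @ C.T + A @ (e @ e.T) @ A.T + B @ H @ B.T   # DBEKK recursion
      print("final conditional covariance:\n", H)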
  8. By: Thomas Schürmann; Ingo Hoffmann
    Abstract: Underestimation of risk should be avoided as far as possible. In financial asset management especially, equity risk is typically characterized by the portfolio variance, or indirectly by quantities derived from it. Since the variance depends linearly on the empirical correlations between asset classes, one must control, or avoid, the possibility of underestimating correlation coefficients. We formalize common practice and classify these approaches by computing their probability of underestimation. In addition, we introduce a new estimator characterized by a constant and controllable probability of underestimation, and we prove that it is statistically consistent. [A Monte Carlo illustration follows this entry.]
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1707.09037&r=ecm
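    A small Monte Carlo makes the problem above concrete: the sample correlation underestimates the true value roughly half the time, whereas a shifted estimator can pin that probability at a chosen level. The Fisher-z shift below is a standard device used purely for illustration, not the estimator proposed in the paper.
      import numpy as np

      rng = np.random.default_rng(7)
      rho, n, reps = 0.5, 50, 20000
      L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

      z90 = 1.2816                              # 90% standard normal quantile
      under_plain = under_adj = 0
      for _ in range(reps):
          x = rng.normal(size=(n, 2)) @ L.T
          r = np.corrcoef(x.T)[0, 1]
          r_adj = np.tanh(np.arctanh(r) + z90 / np.sqrt(n - 3))  # Fisher-z shift
          under_plain += r < rho
          under_adj += r_adj < rho
      print("P(underestimate), sample corr:", under_plain / reps)   # about 0.5
      print("P(underestimate), shifted:   ", under_adj / reps)      # about 0.1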
  9. By: Jan J. Rouwendal (Vrije Universiteit Amsterdam; Tinbergen Institute, The Netherlands); Or Levkovich (Vrije Universiteit Amsterdam); Ismir Mulalic (DTU, KRAKS)
    Abstract: An emerging quantitative spatial economics literature models commuting interactions by a gravity equation that is mathematically equivalent to a multinomial logit model. This model is widely viewed as restrictive because of the independence of irrelevant alternatives (IIA) property, which links substitution behavior in response to changes in the attractiveness of choice alternatives to choice probabilities in a mechanistic way. This matters for counterfactual analysis. In this paper we examine the appropriateness of the commuting model from both a theoretical and an empirical point of view. We show that conventional specification tests of the multinomial logit model are of limited use when alternative-specific constants are used, as is common in the recent literature, and offer no information with respect to the validity of IIA. In particular, we show that maximum likelihood estimation of the relevant nested logit model is impossible because the crucial parameters are not identified. We discuss cross-nested and mixed logit as alternatives. We argue that a comparison between predicted and actual changes in commuting flows in response to a change in the attractiveness of choice alternatives provides a more informative test of the validity of the multinomial logit model for commuting interactions, and we report the results of such a test, among others, for data on Copenhagen. [A schematic gravity/logit sketch follows this entry.]
    Keywords: quantitative spatial economics; multinomial logit; mixed logit; independence of irrelevant alternatives
    JEL: R1 R2 R4
    Date: 2017–07–31
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20170067&r=ecm
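    The gravity/logit equivalence mentioned above amounts to writing predicted commuting flows as origin totals times multinomial logit choice probabilities. A schematic Python version with made-up zone sizes and utilities:
      import numpy as np

      rng = np.random.default_rng(5)
      O = np.array([100.0, 80.0, 120.0])        # workers per residence zone i
      V = rng.normal(size=(3, 4))               # utility of workplace j from zone i
      P = np.exp(V) / np.exp(V).sum(axis=1, keepdims=True)  # MNL probabilities
      T = O[:, None] * P                        # predicted commuting flows T_ij
      print(T.round(1))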
  10. By: Hu, Xingwei
    Abstract: An econometric or statistical model may undergo a marginal gain when a new variable is admitted, and a marginal loss when an existing variable is removed. The value of a variable to the model is quantified by its expected marginal gain and marginal loss. Under a prior belief that all candidate variables should be treated fairly, we derive a few formulas which evaluate the overall performance of each variable. One formula is identical to that for the Shapley value; however, it is not symmetric with respect to marginal gain and marginal loss, and the Shapley value favors the latter. We therefore propose an unbiased solution. Two empirical studies are included: the first a multi-criteria model selection for a dynamic panel regression; the second an analysis of the effect of additional years of schooling on hourly wage. [A Shapley decomposition sketch follows this entry.]
    Keywords: unbiased multivariate Shapley value; variable selection; marginal effect; endowment bias; model uncertainty
    JEL: C11 C52 C71 D81
    Date: 2017–06–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:80457&r=ecm
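    The Shapley value referred to above averages a variable's marginal contribution over all orders of entry. A compact Python sketch decomposing the R-squared of a linear regression; the paper's unbiased modification is not reproduced here, and the data-generating process is invented.
      import numpy as np
      from itertools import combinations
      from math import factorial

      rng = np.random.default_rng(8)
      n, p = 300, 3
      X = rng.normal(size=(n, p))
      y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

      def r2(cols):
          if not cols:
              return 0.0
          Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
          e = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
          return 1 - (e @ e) / ((y - y.mean()) @ (y - y.mean()))

      def shapley(j):
          others = [k for k in range(p) if k != j]
          val = 0.0
          for size in range(p):
              for S in combinations(others, size):
                  w = factorial(size) * factorial(p - size - 1) / factorial(p)
                  val += w * (r2(list(S) + [j]) - r2(list(S)))
          return val

      print([round(shapley(j), 3) for j in range(p)])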
  11. By: Rulof P. Burger; Zoë M. McLaren
    Abstract: The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real-world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision, and use generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), the leading infectious disease cause of death worldwide, using a national database of TB test data from South Africa to examine testing for multi-drug resistant TB (MDR-TB). Approximately one-quarter of MDR-TB cases went undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low; MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and the abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. [A toy simulation of the selection problem follows this entry.]
    Keywords: drug resistance, Instrumental variables, sample selection, South Africa, tuberculosis
    JEL: I18 I15 C15 C26
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:rza:wpaper:692&r=ecm
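    The selection problem the method above corrects for is easy to simulate: when agents test only the most suspicious cases, the positivity rate among the tested says little about population prevalence. A toy Python illustration of the problem, not of the authors' GMM/IV estimator; the prevalence and signal quality are assumed values.
      import numpy as np

      rng = np.random.default_rng(9)
      N = 100_000
      d = rng.uniform(size=N) < 0.035             # true status, 3.5% prevalence
      signal = d + rng.normal(0.0, 1.0, size=N)   # noisy signal agents observe
      tested = signal > np.quantile(signal, 0.98) # only the top 2% get tested
      print("population prevalence:  ", d.mean())
      print("prevalence among tested:", d[tested].mean())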
  12. By: Irene Botosaru (Simon Fraser University)
    Abstract: This paper considers a panel model with heteroskedasticity, where the parameter of interest is the probability density function of the heteroskedasticity. The nonparametric identification results are established sequentially, via a deconvolution argument in the first step and the solution of a linear Fredholm integral equation of the first kind in the second step. The identification results are constructive and give rise to nonparametric estimators. The model is relevant to the literature on earnings dynamics. Applied to data from the Panel Study of Income Dynamics (PSID), the method reveals a high degree of unobserved heterogeneity in earnings risk. In particular, the evolution over time of the quantiles of the conditional shock variance shows that it is those in the right tail of the distribution who experience the highest volatilities (particularly during recessions), with lower quantiles experiencing relatively constant volatilities over the business cycle. This type of heterogeneity may be relevant to the study of the cyclicality of income risk. [A toy deconvolution sketch follows this entry.]
    Keywords: Earnings dynamics, panel data, deconvolution, integral equation
    JEL: C14 C23 D31
    Date: 2017–07–24
    URL: http://d.repec.org/n?u=RePEc:sfu:sfudps:dp17-11&r=ecm
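    The first identification step above is a classical deconvolution. A minimal Python sketch with known Gaussian noise, recovering the density of a latent variable by Fourier inversion of its empirical characteristic function; the paper's second step, solving a Fredholm equation of the first kind, is omitted, and the latent Gamma distribution is an arbitrary choice.
      import numpy as np

      rng = np.random.default_rng(10)
      n = 2000
      x = rng.gamma(2.0, 1.0, size=n)           # latent variable of interest
      yobs = x + rng.normal(size=n)             # contaminated observation

      t = np.linspace(-3, 3, 601)               # frequency cutoff acts as smoothing
      phi_y = np.exp(1j * np.outer(t, yobs)).mean(axis=1)  # empirical cf of y
      phi_x = phi_y / np.exp(-t**2 / 2)         # divide out the N(0,1) noise cf

      grid = np.linspace(0.0, 8.0, 161)
      dens = np.real(np.exp(-1j * np.outer(grid, t)) @ phi_x) * (t[1] - t[0]) / (2 * np.pi)
      print("estimated density peak near:", grid[np.argmax(dens)])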

This nep-ecm issue is ©2017 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.