nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒07‒25
ten papers chosen by
Sune Karlsson
Örebro universitet

  1. Robust linear static panel data models using ε-contamination By Guy Lacroix; Badi H. Baltagi; Georges Bresson; Anoop Chaturvedi
  2. An Improved Bootstrap Test for Restricted Stochastic Dominance By Lok, Thomas M.; Tabri, Rami V.
  3. Factor augmented autoregressive distributed lag models with macroeconomic applications By Dalibor Stevanovic
  4. Bayesian Variable Selection in Spatial Autoregressive Models By Jesus Crespo Cuaresma; Philipp Piribauer
  5. Factorisable Sparse Tail Event Curves By Shih-Kang Chao; Wolfgang K. Härdle; Ming Yuan
  6. "Cholesky Realized Stochastic Volatility Model" By Shinichiro Shirota; Yasuhiro Omori; Hedibert. F. Lopes; Haixiang Piao
  7. Statistical matching and uncertainty analysis in combining household income and expenditure data By Pier Luigi Conti; Daniela Marella; Andrea Neri
  8. ROC Curve Analysis for Randomly Selected Patients By Bandyopadhyay, Tathagata; Adhya, Sumanta; Guha, Apratim
  9. Semi-parametric time series modelling with autocopulas By Antony Ware; Ilnaz Asadzadeh
  10. Accounting for Adaptation in the Economics of Happiness By Miles Kimball; Ryan Nunn; Dan Silverman

  1. By: Guy Lacroix; Badi H. Baltagi; Georges Bresson; Anoop Chaturvedi
    Abstract: The paper develops a general Bayesian framework for robust linear static panel data models using ε-contamination. A two-step approach is employed to derive the conditional type-II maximum likelihood (ML-II) posterior distribution of the coefficients and individual effects. The ML-II posterior densities are weighted averages of the Bayes estimator under a base prior and the data-dependent empirical Bayes estimator. Two-stage and three-stage hierarchy estimators are developed and their finite sample performance is investigated through a series of Monte Carlo experiments. These include standard random effects as well as Mundlak-type, Chamberlain-type and Hausman-Taylor-type models. The simulation results underscore the relatively good performance of the three-stage hierarchy estimator. Within a single theoretical framework, our Bayesian approach encompasses a variety of specifications, while conventional methods require separate estimators for each case. We illustrate the performance of our estimator relative to classic panel estimators using data on earnings and crime.
    Keywords: ε-contamination, hyper g-priors, type-II maximum likelihood posterior density, panel data, robust Bayesian estimator, three-stage hierarchy
    JEL: C11 C23 C26
    Date: 2015–07–15
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2015s-30&r=ecm
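    A compact statement of the ε-contamination setup referred to above, in generic notation rather than the paper's own (the panel structure, hyper g-priors and hierarchy layers are not shown):

      \Gamma_\varepsilon = \{\pi : \pi = (1-\varepsilon)\,\pi_0 + \varepsilon\, q,\ q \in Q\},

    where \pi_0 is the elicited base prior, Q is a class of contaminating priors and \varepsilon measures the doubt about \pi_0. The ML-II approach replaces q by the \hat q that maximizes the marginal likelihood m(y \mid q), and the resulting posterior is the mixture

      \hat\pi(\theta \mid y) = \hat\lambda(y)\,\pi(\theta \mid y, \pi_0) + \bigl(1-\hat\lambda(y)\bigr)\,\pi(\theta \mid y, \hat q),
      \qquad
      \hat\lambda(y) = \frac{(1-\varepsilon)\, m(y \mid \pi_0)}{(1-\varepsilon)\, m(y \mid \pi_0) + \varepsilon\, m(y \mid \hat q)},

    whose mean is the data-dependent weighted average of the base-prior Bayes estimator and the empirical Bayes estimator described in the abstract.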
  2. By: Lok, Thomas M.; Tabri, Rami V.
    Abstract: This paper proposes a uniformly asymptotically valid method of testing for restricted stochastic dominance based on the bootstrap test of Linton et al. (2010). The method reformulates their bootstrap test statistics using a constrained estimator of the contact set that imposes the restrictions of the null hypothesis. As our simulation results show, this characteristic of our test makes it noticeably less conservative than the test of Linton et al. (2010) and improves its power against alternatives that have some non-violated inequalities.
    Keywords: Empirical Likelihood, Constrained Estimation, Restricted Stochastic Dominance, Bootstrap Test
    Date: 2015–06
    URL: http://d.repec.org/n?u=RePEc:syd:wpaper:2015-15&r=ecm
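    A stylized numerical illustration of the contact-set idea behind this line of work, not the authors' constrained estimator: the one-sided dominance statistic is computed from empirical CDFs, the contact set is estimated as the region where the two CDFs are close, and the bootstrap statistic is recentred and evaluated only on that set. The grid, tuning constant c_n and simulated data below are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(0)

      def ecdf(sample, grid):
          return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

      def rsd_test(x1, x2, n_boot=499, c_n=None):
          # H0: F1(x) <= F2(x) for all x (x1 weakly dominates x2)
          n1, n2 = len(x1), len(x2)
          grid = np.sort(np.concatenate([x1, x2]))
          F1, F2 = ecdf(x1, grid), ecdf(x2, grid)
          scale = np.sqrt(n1 * n2 / (n1 + n2))
          stat = scale * np.max(F1 - F2)                    # sup of the signed gap
          if c_n is None:
              c_n = np.sqrt(np.log(n1 + n2) / (n1 + n2))    # ad hoc tuning sequence
          contact = np.abs(F1 - F2) <= c_n                  # estimated contact set
          if not contact.any():
              contact = np.ones_like(contact)
          boot = np.empty(n_boot)
          for b in range(n_boot):
              G1 = ecdf(rng.choice(x1, n1, replace=True), grid)
              G2 = ecdf(rng.choice(x2, n2, replace=True), grid)
              # recentre at the sample CDFs, evaluate only on the contact set
              boot[b] = scale * np.max(((G1 - F1) - (G2 - F2))[contact])
          return stat, np.mean(boot >= stat)                # statistic, bootstrap p-value

      x1 = rng.normal(0.2, 1.0, 300)   # x1 stochastically dominates x2, so H0 holds
      x2 = rng.normal(0.0, 1.0, 300)
      print(rsd_test(x1, x2))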
  3. By: Dalibor Stevanovic
    Abstract: This paper proposes a factor augmented autoregressive distributed lag (FADL) framework for analyzing the dynamic effects of common and idiosyncratic shocks. We first estimate the common shocks from a large panel of data with a strong factor structure. Impulse responses are then obtained from an autoregression, augmented with a distributed lag of the estimated common shocks. The approach has three distinctive features. First, identification restrictions, especially those based on recursive or block recursive ordering, are very easy to impose. Second, the dynamic response to the common shocks can be constructed for variables not necessarily in the panel. Third, the restrictions imposed by the factor model can be tested. The relation to other identification schemes used in the FAVAR literature is discussed. The methodology is used to study the effects of monetary policy and news shocks.
    Keywords: Factor models, structural VAR, impulse response
    JEL: C32 E1
    Date: 2015–07–13
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2015s-33&r=ecm
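    A rough sketch of the two steps described in the abstract, on simulated data: principal-component factors are extracted from a large standardized panel, and a target series is then regressed on its own lags and a distributed lag of the estimated factors. The lag orders, factor normalization and simulated design are ad hoc, and none of the paper's identification restrictions are imposed.

      import numpy as np

      rng = np.random.default_rng(1)
      T, N, r = 200, 100, 2                       # periods, series, factors

      # simulate a panel with a two-factor structure
      F = rng.normal(size=(T, r))
      Lam = rng.normal(size=(N, r))
      X = F @ Lam.T + rng.normal(scale=0.5, size=(T, N))
      y = 0.5 * np.roll(F[:, 0], 1) + rng.normal(scale=0.3, size=T)   # target series

      # step 1: principal-component factors from the standardized panel
      Z = (X - X.mean(0)) / X.std(0)
      _, _, Vt = np.linalg.svd(Z, full_matrices=False)
      F_hat = Z @ Vt[:r].T / np.sqrt(N)

      # step 2: ADL(p, q) regression of y_t on its own lags and factor lags
      p, q = 2, 2
      rows = []
      for t in range(max(p, q), T):
          rows.append(np.concatenate(
              [[1.0], y[t - p:t][::-1], F_hat[t - q:t + 1][::-1].ravel()]))
      Xreg = np.array(rows)
      yreg = y[max(p, q):]
      beta, *_ = np.linalg.lstsq(Xreg, yreg, rcond=None)
      print("ADL coefficients:", np.round(beta, 3))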
  4. By: Jesus Crespo Cuaresma (Department of Economics, Vienna University of Economics and Business); Philipp Piribauer (Department of Economics, Vienna University of Economics and Business)
    Abstract: This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. We present two alternative approaches which can be implemented using Gibbs sampling methods in a straightforward way and allow us to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. In a simulation study we show that the variable selection approaches tend to outperform existing Bayesian model averaging techniques both in terms of in-sample predictive performance and computational efficiency.
    Keywords: spatial autoregressive model, variable selection, model uncertainty, Markov chain Monte Carlo methods
    JEL: C18 C21 C52
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwwuw:wuwp199&r=ecm
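    As a reminder of what a Gibbs-based variable selection step looks like, here is a minimal stochastic-search variable selection (SSVS) sampler for a plain linear regression. The spatial autoregressive part (drawing the spatial parameter with its log-determinant term) and the specific prior structures compared in the paper are not reproduced, and all tuning constants are arbitrary.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      n, k = 200, 8
      X = rng.normal(size=(n, k))
      beta_true = np.array([1.5, -1.0, 0.8] + [0.0] * (k - 3))
      y = X @ beta_true + rng.normal(scale=1.0, size=n)

      tau, c, p_incl = 0.05, 20.0, 0.5     # spike sd, slab multiplier, prior incl. prob.
      n_iter, burn = 3000, 1000
      gamma = np.ones(k, dtype=int)
      sigma2 = 1.0
      keep = np.zeros((n_iter - burn, k))

      XtX, Xty = X.T @ X, X.T @ y
      for it in range(n_iter):
          # beta | gamma, sigma2
          prior_var = np.where(gamma == 1, (c * tau) ** 2, tau ** 2)
          A_inv = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / prior_var))
          beta = rng.multivariate_normal(A_inv @ (Xty / sigma2), A_inv)
          # sigma2 | beta  (inverse-gamma with a weak prior)
          resid = y - X @ beta
          sigma2 = 1.0 / rng.gamma(0.5 * (n + 1), 1.0 / (0.5 * (resid @ resid + 1.0)))
          # gamma_j | beta_j  (spike versus slab)
          slab = p_incl * stats.norm.pdf(beta, 0.0, c * tau)
          spike = (1 - p_incl) * stats.norm.pdf(beta, 0.0, tau)
          gamma = rng.binomial(1, slab / (slab + spike))
          if it >= burn:
              keep[it - burn] = gamma

      print("posterior inclusion probabilities:", keep.mean(0).round(2))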
  5. By: Shih-Kang Chao; Wolfgang K. Härdle; Ming Yuan
    Abstract: In this paper, we propose a multivariate quantile regression method which enables localized analysis on conditional quantiles and global comovement analysis on conditional ranges for high-dimensional data. The proposed method, hereafter referred to as FActorisable Sparse Tail Event Curves, or FASTEC for short, exploits the potential factor structure of multivariate conditional quantiles through nuclear norm regularization and is particularly suitable for dealing with extreme quantiles. We study both theoretical properties and computational aspects of the estimating procedure for FASTEC. In particular, we derive nonasymptotic oracle bounds for the estimation error, and develop an efficient proximal gradient algorithm for the non-smooth optimization problem arising in our estimating procedure. Merits of the proposed methodology are further demonstrated through applications to Conditional Autoregressive Value-at-Risk (CAViaR) (Engle and Manganelli, 2004) and a Chinese temperature dataset.
    Keywords: High-dimensional data analysis, multivariate quantile regression, quantile regression, value-at-risk, nuclear norm, multi-task learning
    JEL: C38 C63 G17 G20
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015-034&r=ecm
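    A toy proximal-(sub)gradient solver for a nuclear-norm-penalized multivariate quantile regression, to give a concrete sense of the low-rank factorization FASTEC exploits. The paper's smoothed objective, step-size theory and oracle bounds are not reproduced, and the penalty level and step size below are ad hoc.

      import numpy as np

      rng = np.random.default_rng(3)
      n, p, m, rank = 400, 10, 20, 2
      X = rng.normal(size=(n, p))
      Gamma_true = rng.normal(size=(p, rank)) @ rng.normal(size=(rank, m))
      Y = X @ Gamma_true + rng.normal(size=(n, m))

      def svd_soft_threshold(A, thresh):
          """Proximal operator of the nuclear norm: soft-threshold singular values."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          return U @ np.diag(np.maximum(s - thresh, 0.0)) @ Vt

      def fastec_sketch(X, Y, tau=0.9, lam=0.5, step=None, n_iter=500):
          n, p = X.shape
          if step is None:
              step = n / np.linalg.norm(X, 2) ** 2          # crude step size
          Gamma = np.zeros((p, Y.shape[1]))
          for _ in range(n_iter):
              resid = Y - X @ Gamma
              # subgradient of the check loss rho_tau summed over all entries
              grad = -X.T @ (tau - (resid < 0).astype(float)) / n
              Gamma = svd_soft_threshold(Gamma - step * grad, step * lam)
          return Gamma

      Gamma_hat = fastec_sketch(X, Y, tau=0.9)
      print("rank of estimate:", np.linalg.matrix_rank(Gamma_hat, tol=1e-6))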
  6. By: Shinichiro Shirota (Department of Statistical Science, Duke University); Yasuhiro Omori (Faculty of Economics, The University of Tokyo); Hedibert F. Lopes (Insper Institute of Education and Research); Haixiang Piao (Nippon Life Insurance Company)
    Abstract: Multivariate stochastic volatility models are expected to play important roles in financial applications such as asset allocation and risk management. However, these models suffer from two major difficulties: (1) there are too many parameters to estimate using only daily asset returns and (2) estimated covariance matrices are not guaranteed to be positive definite. Our approach takes advantage of realized covariances to attain the efficient estimation of parameters by incorporating additional information for the co-volatilities, and considers Cholesky decomposition to guarantee the positive definiteness of the covariance matrices. In this framework, we propose flexible modeling of stylized facts of financial markets such as dynamic correlations and leverage effects among volatilities. Taking a Bayesian approach, we describe Markov Chain Monte Carlo implementation with a simple but efficient sampling scheme. Our model is applied to returns data on nine U.S. stocks, and the model comparison is conducted based on portfolio performances.
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2015cf979&r=ecm
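    The positive-definiteness point can be illustrated with a simple exercise on simulated realized covariances: decompose each matrix into its Cholesky factor, model the factor elements (log-diagonal, raw off-diagonal) with basic AR(1) forecasts, and rebuild the covariance forecast, which is positive definite by construction. This is only the parameterization idea; the paper's joint realized stochastic volatility model, leverage effects and MCMC scheme are far richer.

      import numpy as np

      rng = np.random.default_rng(4)
      T, d = 500, 3

      # fake series of realized covariance matrices (sample covariances of
      # simulated "intraday" returns)
      rc = np.empty((T, d, d))
      A = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.2], [0.1, 0.2, 1.0]])
      for t in range(T):
          r = rng.multivariate_normal(np.zeros(d), A, size=78)
          rc[t] = r.T @ r / 78

      # Cholesky factors and their unconstrained coordinates
      L = np.linalg.cholesky(rc)                        # shape (T, d, d)
      il = np.tril_indices(d)
      coords = L[:, il[0], il[1]]                       # lower-triangular elements
      diag_pos = il[0] == il[1]
      coords[:, diag_pos] = np.log(coords[:, diag_pos]) # log the (positive) diagonal

      def ar1_forecast(x):
          """One-step-ahead forecast from an AR(1) fitted by least squares."""
          slope, intercept = np.polyfit(x[:-1], x[1:], 1)
          return intercept + slope * x[-1]

      fc = np.array([ar1_forecast(coords[:, j]) for j in range(coords.shape[1])])
      L_fc = np.zeros((d, d))
      L_fc[il] = fc
      idx = np.arange(d)
      L_fc[idx, idx] = np.exp(L_fc[idx, idx])           # undo the log on the diagonal
      Sigma_fc = L_fc @ L_fc.T                          # positive definite by construction
      print(np.round(Sigma_fc, 3))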
  7. By: Pier Luigi Conti (Dipartimento di Scienze Statistiche, Sapienza Università di Roma); Daniela Marella (Dipartimento di Scienze della Formazione, Università Roma Tre); Andrea Neri (Bank of Italy)
    Abstract: The availability of microdata on both income and expenditure is highly desirable if one wants to assess the distributional consequences of policy changes. In Italy, the main sources used for estimating household income and expenditure are the Bank of Italy's Survey on Household Income and Wealth and the Italian National Institute of Statistics' Household Budget Survey. However, there is no single data source containing information on both expenditure and income. The problem is generally overcome with statistical matching procedures based on the conditional independence assumption (CIA). The aim of this paper is to present a method for combining information coming from different databases that relaxes the CIA. In particular, we propose a method to combine household income and expenditure data under logical constraints regarding the average propensity to consume. We also propose an estimate of a plausible joint distribution function for household income and expenditure.
    Keywords: statistical matching, uncertainty, matching error, iterative proportional fitting
    JEL: C15 C14 C42
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1018_15&r=ecm
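    Statistical matching of this kind is often anchored on iterative proportional fitting, one of the keywords above. The bare-bones version below adjusts a seed joint table of income and expenditure classes so that its margins match distributions taken from two different sources. The paper's treatment of matching uncertainty and of the propensity-to-consume constraint is not reproduced, and all numbers are illustrative.

      import numpy as np

      income_marg = np.array([0.30, 0.40, 0.20, 0.10])   # income shares from survey A
      expend_marg = np.array([0.35, 0.35, 0.20, 0.10])   # expenditure shares from survey B

      table = np.ones((4, 4)) / 16                        # uniform seed joint table
      for _ in range(100):
          table *= (income_marg / table.sum(axis=1))[:, None]   # match row margins
          table *= (expend_marg / table.sum(axis=0))[None, :]   # match column margins

      print(np.round(table, 3))
      print("row margins :", np.round(table.sum(axis=1), 3))
      print("col margins :", np.round(table.sum(axis=0), 3))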
  8. By: Bandyopadhyay, Tathagata; Adhya, Sumanta; Guha, Apratim
    Abstract: Receiver operating characteristic (ROC) curves and the area under the curve (AUC) are widely used in medical studies to examine the effectiveness of markers in diagnosing diseases. Most of the existing literature on ROC curve analysis assumes that the healthy and the diseased populations are independent of each other, which may lead to bias in the studies. In this paper we treat disease status as a binary random variable. Assuming that disease status is determined by a latent variable, and that the marker and the latent variable follow a bivariate normal distribution, we derive the properties of the ROC curve and the AUC. We also consider the problem of choosing the optimal combination of markers when multiple markers are present. Limiting distributions are obtained and confidence intervals are discussed as well. A small simulation study confirms the superiority of our methods over the common practice of treating the two populations as independent.
    URL: http://d.repec.org/n?u=RePEc:iim:iimawp:13665&r=ecm
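    A Monte Carlo illustration of the latent-variable setup in the abstract: the marker and a latent index are bivariate normal, disease status is an indicator that the latent index exceeds a threshold, and the ROC curve and AUC of the marker are computed from the induced diseased and healthy groups. Parameter values are arbitrary, and the paper's analytical results and confidence intervals are not reproduced.

      import numpy as np

      rng = np.random.default_rng(5)
      rho, c, n = 0.6, 1.0, 100_000        # marker-latent correlation, disease threshold
      cov = np.array([[1.0, rho], [rho, 1.0]])
      M, L = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
      D = (L > c).astype(int)              # disease status from the latent index

      m_dis, m_hea = M[D == 1], M[D == 0]

      # empirical ROC: true/false positive rates over a grid of marker cut-offs
      cuts = np.quantile(M, np.linspace(0.01, 0.99, 99))
      tpr = np.array([(m_dis > t).mean() for t in cuts])
      fpr = np.array([(m_hea > t).mean() for t in cuts])

      # AUC = P(marker of a diseased subject exceeds that of a healthy one),
      # estimated from random subsamples to keep the pairwise comparison cheap
      sub_d = rng.choice(m_dis, 5000)
      sub_h = rng.choice(m_hea, 5000)
      auc = (sub_d[:, None] > sub_h[None, :]).mean()

      i = np.argmin(np.abs(fpr - 0.10))
      print(f"P(D=1) = {D.mean():.3f}, AUC = {auc:.3f}, TPR at 10% FPR = {tpr[i]:.3f}")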
  9. By: Antony Ware; Ilnaz Asadzadeh
    Abstract: In this paper we present an application of autocopulas to modelling financial time series that exhibit serial dependence which is not necessarily linear. The approach is semi-parametric in that it is characterized by a non-parametric autocopula and parametric marginals. One advantage of using autocopulas is that they provide a general representation of the serial dependence of the time series, in particular making it possible to study the interdependence of values of the series at different extremes separately. The specific time series studied here comes from daily cash flows involving the product of the daily natural gas price and daily temperature deviations from normal levels. Seasonality is captured by using a time-dependent normal inverse Gaussian (NIG) distribution fitted to the raw values.
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1507.04767&r=ecm
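    A sketch of the empirical-autocopula idea on simulated data: fit a normal inverse Gaussian marginal (as in the abstract, though without time dependence), map the series to pseudo-observations, pair each value with its lag, and summarize the dependence of the pairs, including in the joint tails. The simulated series below stands in for the gas-price/temperature cash flows analysed in the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      T = 2000
      # simulated serially dependent, non-Gaussian series (AR(1) with NIG noise)
      eps = stats.norminvgauss.rvs(a=2.0, b=0.5, size=T, random_state=rng)
      x = np.empty(T)
      x[0] = eps[0]
      for t in range(1, T):
          x[t] = 0.6 * x[t - 1] + eps[t]

      # parametric marginal: fit an NIG distribution and map the series to uniforms
      a, b, loc, scale = stats.norminvgauss.fit(x)
      u = stats.norminvgauss.cdf(x, a, b, loc=loc, scale=scale)

      # lag-1 autocopula sample: pairs (u_t, u_{t-1}) on the unit square
      pairs = np.column_stack([u[1:], u[:-1]])

      # simple nonparametric summaries: overall rank correlation and the strength
      # of dependence in the joint lower and upper tails
      rho_s, _ = stats.spearmanr(pairs[:, 0], pairs[:, 1])
      q = 0.05
      lower = np.mean((pairs[:, 0] < q) & (pairs[:, 1] < q)) / q
      upper = np.mean((pairs[:, 0] > 1 - q) & (pairs[:, 1] > 1 - q)) / q
      print(f"Spearman rho: {rho_s:.2f}  lower tail: {lower:.2f}  upper tail: {upper:.2f}")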
  10. By: Miles Kimball; Ryan Nunn; Dan Silverman
    Abstract: Reported happiness provides a potentially useful way to evaluate unpriced goods and events, but measures of subjective well-being (SWB) often revert to the mean after responding to events, and this hedonic adaptation creates challenges for interpretation. Previous work tends to estimate time-invariant effects of events on happiness. In the presence of hedonic adaptation, this restriction can lead to biases, especially when comparing events to which people adapt at different rates. Our paper provides a flexible, extensible econometric framework that accommodates adaptation and permits the comparison of happiness-relevant life events with dissimilar hedonic adaptation paths. We present a method that is robust to individual fixed effects, imprecisely dated data, and permanent consequences. The method is used to analyze a variety of events in the Health and Retirement Study panel. Many of the variables studied have substantial consequences for subjective well-being, and these consequences differ greatly in their time profiles.
    JEL: C1 D6 I31
    Date: 2015–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:21365&r=ecm
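    A stylized version of the paper's point on simulated panel data: when the effect of a life event decays over time, event-time dummies combined with individual fixed effects recover the adaptation path that a single time-invariant effect would average away. The paper's handling of imprecisely dated events and permanent consequences is ignored here, and the simulated design is arbitrary.

      import numpy as np

      rng = np.random.default_rng(7)
      N, T, K = 500, 12, 6                           # individuals, periods, lags tracked
      event_time = rng.integers(2, T - K, size=N)    # period in which the event occurs

      alpha = rng.normal(scale=1.0, size=N)          # individual fixed effects
      true_path = 2.0 * 0.5 ** np.arange(K)          # effect decays geometrically

      rows_y, rows_d, rows_i = [], [], []
      for i in range(N):
          for t in range(T):
              lag = t - event_time[i]
              d = np.zeros(K)
              if 0 <= lag < K:
                  d[lag] = 1.0
              rows_y.append(alpha[i] + d @ true_path + rng.normal(scale=0.5))
              rows_d.append(d)
              rows_i.append(i)

      y = np.array(rows_y); D = np.array(rows_d); ids = np.array(rows_i)

      # within transformation: demean SWB and the event-time dummies by individual
      for arr in (y, D):
          for i in range(N):
              arr[ids == i] -= arr[ids == i].mean(axis=0)

      beta, *_ = np.linalg.lstsq(D, y, rcond=None)
      print("true path     :", np.round(true_path, 2))
      print("estimated path:", np.round(beta, 2))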

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.